[Openstack-operators] Setting affinity based on instance type
Robert Starmer
robert at kumul.us
Thu Mar 3 16:57:43 UTC 2016
There was work done on enabling much more dynamic scheduling, including
cross-project scheduling (e.g. getting additional placement hints from
Neutron or Cinder), and I believe the framework is even in place to
support this, but I don't believe anyone has written a scheduling
component that uses it. I think your best bet would be to build a custom
weigher, which could be as simple as a linearly decreasing weight for one
group and the inverse for the other group. Certainly this wouldn't be
perfect, but it might address your needs.
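
A minimal sketch of what I mean, assuming a Mitaka-era Nova -- the class
and the workload:type extra spec key are hypothetical, not an existing
component:

    # workload_weights.py -- hypothetical module; register it via the
    # scheduler_weight_classes option in nova.conf.
    from nova.scheduler import weights


    class WorkloadSpreadWeigher(weights.BaseHostWeigher):
        """Prefer hosts holding fewer instances of the opposite workload.

        Assumes flavors are tagged with a hypothetical extra spec,
        e.g. workload:type=A or workload:type=B.
        """

        def _weigh_object(self, host_state, weight_properties):
            flavor = weight_properties.flavor
            workload = flavor.extra_specs.get('workload:type')
            if not workload:
                return 0.0
            # Count instances of the other workload type already on this
            # host; more of them means a lower (less attractive) weight.
            # Assumes the instances tracked on host_state expose their
            # flavor extra specs.
            others = sum(
                1 for inst in host_state.instances.values()
                if inst.flavor.extra_specs.get('workload:type')
                not in (None, workload))
            return -float(others)
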
Robert
On Thu, Mar 3, 2016 at 8:36 AM, Jonathan Proulx <jon at csail.mit.edu> wrote:
> On Wed, Mar 02, 2016 at 09:35:07PM -0500, Mathieu Gagné wrote:
> :What would prevent the next user from having workloadB collocated with
> :another user's workloadA if that's the only capacity available?
> :
> :Unless aggregates are used, it will be hard to guarantee that workloadA
> :and workloadB (from any users) are never collocated.
> :
> :You could probably play with custom weighers where a specialized
> :aggregate would be preferred over the others unless there isn't capacity
> :left. This would also mean that strict filters can't be used anymore,
> :as suggested. (And it would need custom Python code to be written.)
> :
> :The main challenge I see is not the single first anti-affinity request,
> :it's all the subsequent others which will also require anti-affinity.
>
>
> My reading of the question suggests they don't want a 'hard
> never|always colocate', which host aggregates and server groups have
> ways of enforcing, but rather a 'soft preference to avoid|achieve
> colocation'.
>
> I don't think there's an existing way to do this other than writing a
> custom weigher.
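>
> (For what it's worth, wiring one in is just a nova.conf change -- a
> sketch, assuming a Mitaka-era Nova and a hypothetical module path:
>
>     [DEFAULT]
>     scheduler_weight_classes = nova.scheduler.weights.all_weighers,mypkg.workload_weights.WorkloadSpreadWeigher
>
> where the option defaults to just all_weighers.)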
>
> I've frequently wished for this scheduling option but not hard enough
> to implement it myself...
>
> -Jon
>
> :
> :Mathieu
> :
> :On 2016-03-02 8:46 PM, Adam Lawson wrote:
> :> Hi Kris,
> :>
> :> When using aggregates as an example, anyone can assign
> :> workloadA<>aggregateA and workloadB<>aggregateB. That's easy. But if we
> :> have outstanding requests for workloadB and have a glut of capacity in
> :> aggregateA, workloadB won't be able to use those hosts so we have spare
> :> capacity and no way to utilize it.
> :>
> :> So I want to set an affinity for workloads and not at the host level.
> :> That way, hosts remain fungible, workload affinity policies are
> :> respected and cloud capacity is properly utilized.
> :>
> :> Does that make sense?
> :>
> :> //adam
> :>
> :> Adam Lawson
> :>
> :> AQORN, Inc.
> :> 427 North Tatnall Street
> :> Ste. 58461
> :> Wilmington, Delaware 19801-2230
> :> Toll-free: (844) 4-AQORN-NOW ext. 101
> :> International: +1 302-387-4660
> :> Direct: +1 916-246-2072
> :>
> :> On Wed, Mar 2, 2016 at 3:08 PM, Kris G. Lindgren
> :> <klindgren at godaddy.com> wrote:
> :>
> :> You can set attributes on flavors that must match the attributes on
> :> hosts or the host aggregates. So you can basically always make sure
> :> a specific flavors goes to a specific compute node or type (like
> :> disks=ssd or class=gpu). Look at the nova flavor extra_specs
> :> documentation and the AggregateInstanceExtraSpecsFilter under the
> :> scheduler options.
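> :>
> :> For illustration, a sketch with the 2016-era novaclient (aggregate,
> :> host, and flavor names here are made up):
> :>
> :>     nova aggregate-create ssd-hosts
> :>     nova aggregate-set-metadata ssd-hosts disks=ssd
> :>     nova aggregate-add-host ssd-hosts compute-01
> :>     nova flavor-key m1.ssd set aggregate_instance_extra_specs:disks=ssd
> :>
> :> with AggregateInstanceExtraSpecsFilter enabled in
> :> scheduler_default_filters in nova.conf.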
> :>
> :>
> :> ___________________________________________________________________
> :> Kris Lindgren
> :> Senior Linux Systems Engineer
> :> GoDaddy
> :>
> :> From: "Fox, Kevin M" <Kevin.Fox at pnnl.gov <mailto:Kevin.Fox at pnnl.gov
> >>
> :> Date: Wednesday, March 2, 2016 at 3:58 PM
> :> To: Adam Lawson <alawson at aqorn.com <mailto:alawson at aqorn.com>>,
> :> "openstack-operators at lists.openstack.org
> :> <mailto:openstack-operators at lists.openstack.org>"
> :> <openstack-operators at lists.openstack.org
> :> <mailto:openstack-operators at lists.openstack.org>>
> :> Subject: Re: [Openstack-operators] Setting affinity based on
> :> instance type
> :>
> :> You usually do that at the instance level with server groups. Do you
> :> have an example where you might want to do it at the flavor level?
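> :>
> :> For example, a sketch with the Mitaka soft-anti-affinity policy
> :> (group and instance names are made up):
> :>
> :>     nova server-group-create spread-group soft-anti-affinity
> :>     nova boot --flavor m1.small --image cirros \
> :>         --hint group=<group-uuid> vm1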
> :>
> :> Thanks,
> :> Kevin
> :>
> ------------------------------------------------------------------------
> :> From: Adam Lawson [alawson at aqorn.com]
> :> Sent: Wednesday, March 02, 2016 2:48 PM
> :> To: openstack-operators at lists.openstack.org
> :> Subject: [Openstack-operators] Setting affinity based on instance type
> :>
> :> I'm sure this is possible, but I'm trying to find the info I need in
> :> the docs, so I figured I'd pitch this to you guys while I continue
> :> looking:
> :>
> :> Is it possible to set an affinity/anti-affinity policy to ensure
> :> instance Type A is weighted for/against co-location on the same
> :> physical host with instance Type B?
> :>
> :> Basically I have no requirement for server-group affinity but rather
> :> need to ensure specific workloads are as separate as possible.
> :>
> :> Thoughts?
> :>
> :> //adam
> :>
> :> Adam Lawson
> :>
> :> AQORN, Inc.
> :> 427 North Tatnall Street
> :> Ste. 58461
> :> Wilmington, Delaware 19801-2230
> :> Toll-free: (844) 4-AQORN-NOW ext. 101
> :> International: +1 302-387-4660
> :> Direct: +1 916-246-2072
> :>
> :>
> :>