[Openstack-operators] Setting affinity based on instance type

Silence Dogood matt at nycresistor.com
Thu Mar 3 17:50:42 UTC 2016


We did some early affinity work and discovered some interesting problems
with affinity and scheduling. =/  By default, OpenStack used to (and may
still) spread instances across hosts evenly.
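
One knob for packing instead of spreading is the RAM weigher multiplier.
A minimal nova.conf sketch, assuming the default filter scheduler (the
option has lived under [DEFAULT] and, in later releases, under
[filter_scheduler], so check your version):

    [DEFAULT]
    # 1.0 is the default and prefers hosts with the most free RAM (spread).
    # A negative multiplier inverts that and packs onto the fuller hosts.
    ram_weight_multiplier = -1.0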

Personally, I think this is a bad approach.  Most cloud providers stack
instances across a couple of racks at a time, filling them before moving on
to the next.  That makes it easier to age instances off older equipment for
removal / replacement.

The problem then is that very large instances can never be deployed once
enough tiny instances are spread across the environment: every host ends up
partially used, so no single host has enough free room left for a full-node
flavor even though plenty of capacity remains in aggregate.  So now you are
fighting with the scheduler to ensure you have deployment targets for
specific instance types (not very elastic / ephemeral).  It goes back to the
wave scheduling model being superior.
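
To make that concrete, a toy illustration with made-up numbers (ten
64 GB hosts, small 8 GB instances spread round-robin): plenty of room
remains in aggregate, but no single host can ever fit a full-node flavor.

    # Toy fragmentation example; all numbers are invented for illustration.
    hosts = [64] * 10                  # free RAM (GB) on each of 10 hosts
    for i in range(40):                # land forty 8 GB instances
        hosts[i % len(hosts)] -= 8     # spread them evenly, round-robin
    print(max(hosts))                  # 32 -> a 64 GB flavor fits nowhere
    print(sum(hosts))                  # 320 -> yet 320 GB is free overall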

Anyway, we had the brain-dead idea of locking whole physical nodes out of
the scheduler and reserving them for a super (full-node) instance type.  And
I suppose you could do this with AZs or regions if you really needed to,
but it's not a great approach.
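
For reference, the usual way to sketch that kind of lock-out is a host
aggregate plus flavor extra specs, with AggregateInstanceExtraSpecsFilter
enabled in the scheduler.  Roughly (host and flavor names here are made
up, and note the filter only constrains flavors that carry a matching key,
so general-purpose flavors need their own key to keep them off the
dedicated hosts):

    # Requires AggregateInstanceExtraSpecsFilter in scheduler_default_filters.
    # Dedicate a host to full-node instances (names are hypothetical).
    nova aggregate-create full-node-hosts
    nova aggregate-add-host full-node-hosts compute-17
    nova aggregate-set-metadata full-node-hosts pinned=fullnode

    # Tie the full-node flavor to that aggregate.
    nova flavor-key m1.fullnode set aggregate_instance_extra_specs:pinned=fullnode

    # Keep general flavors off those hosts (their own hosts then need an
    # aggregate carrying pinned=general for this to pass the filter).
    nova flavor-key m1.small set aggregate_instance_extra_specs:pinned=general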

I would say that you almost need a wave-style scheduler to do this sort of
affinity work.
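
On Robert's custom-weigher suggestion below, a minimal sketch of what that
could look like, assuming Nova's BaseHostWeigher interface and that
weight_properties is the RequestSpec object with a .flavor attribute (the
'scheduling:class' extra spec and the class name are invented; you would
point scheduler_weight_classes at it in nova.conf):

    # Hypothetical weigher: spread one class of flavors, pack the other.
    from nova.scheduler import weights


    class FlavorClassWeigher(weights.BaseHostWeigher):
        """Prefer emptier hosts normally, fuller hosts for 'pack' flavors."""

        def _weigh_object(self, host_state, weight_properties):
            extra_specs = weight_properties.flavor.extra_specs or {}

            # More free RAM means a higher score, i.e. the usual spreading.
            score = float(host_state.free_ram_mb)

            if extra_specs.get('scheduling:class') == 'pack':
                # Invert the score so fuller hosts win for this flavor class.
                score = -score
            return score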

On Thu, Mar 3, 2016 at 12:34 PM, Jay Pipes <jaypipes at gmail.com> wrote:

> On 03/03/2016 08:57 AM, Robert Starmer wrote:
>
>> There was work done on enabling much more dynamic scheduling, including
>> cross project scheduling (e.g. get additional placement hints from
>> Neutron or Cinder), and I believe the framework is even in place to make
>> use of this, but I don't believe anyone has written a scheduling
>> component to make use of this.  I think your best bet would be to build
>> a custom weighted scheduler, which could be as simple as a linearly
>> decreasing weight for one group and the inverse for the other group.
>> Certainly this wouldn't be perfect, but might address your needs.
>>
>
> My ideas for replacing server groups with generic placement policies are
> written down here, with some excellent feedback from a number of reviewers:
>
> https://review.openstack.org/#/c/183837/
>
> Would be great to get additional eyeballs on it. I was planning on
> reviving these ideas in the Newton and Ocata releases once the scheduler is
> split out from Nova.
>
> Best,
> -jay
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

