[Openstack-operators] Setting affinity based on instance type

Silence Dogood matt at nycresistor.com
Thu Mar 3 18:43:24 UTC 2016


cool!

On Thu, Mar 3, 2016 at 1:39 PM, Mathieu Gagné <mgagne at internap.com> wrote:

> On 2016-03-03 12:50 PM, Silence Dogood wrote:
> > We did some early affinity work and discovered some interesting problems
> > with affinity and scheduling. =/  By default, OpenStack used to ( and may
> > still ) spread instances across hosts evenly.
> >
> > Personally, I think this is a bad approach.  Most cloud providers stack
> > instances across a couple of racks at a time, filling them before moving
> > on to the next.  This makes it easier to age instances out of older
> > equipment for removal / replacement.
> >
> > The problem then is that super-large-capacity instances can never be
> > deployed once you've got enough tiny instances spread across the
> > environment.  So now you are fighting with the scheduler to ensure
> > you have deployment targets for specific instance types ( not very
> > elastic / ephemeral ).  This goes back to the wave scheduling model
> > being superior.
> >
> > Anyway, we had the braindead idea of locking whole physical nodes out
> > of the scheduler for a super ( full-node ) instance type.  And I
> > suppose you could do this with AZs or regions if you really needed to.
> > But it's not a great approach.
> >
> > I would say that you almost need a wave style scheduler to do this sort
> > of affinity work.
> >
>
> You can already do this with the RAMWeigher via the
> ram_weight_multiplier config option:
>
>   Multiplier used for weighing ram.  Negative
>   numbers mean to stack vs spread.
>
> The default is 1.0, which means spread.
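>
> For example, to stack instead of spread, something like this in
> nova.conf should do it ( a sketch; the exact value to use and where the
> option lives may vary by release ):
>
>   [DEFAULT]
>   ram_weight_multiplier = -1.0
>
> With a negative multiplier, the RAMWeigher ranks hosts with less free
> RAM higher, so new instances fill up existing hosts before the
> scheduler moves on to emptier ones.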
>
> --
> Mathieu
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>