[Openstack-operators] Setting affinity based on instance type

Kris G. Lindgren klindgren at godaddy.com
Thu Mar 3 23:50:18 UTC 2016


Cern actually did a pretty good write up of this:

http://openstack-in-production.blogspot.com/2014/07/openstack-plays-tetris-stacking-and.html

___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: Adam Lawson <alawson at aqorn.com>
Date: Thursday, March 3, 2016 at 4:28 PM
To: Silence Dogood <matt at nycresistor.com>
Cc: "openstack-operators at lists.openstack.org" <openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] Setting affinity based on instance type

Mathieu,

Blame it on my scattered brain, but I'm now curious: how would this be approached, practically speaking? I.e., how would ram_weight_multiplier enable the scenario I mentioned in my earliest post?

//adam


Adam Lawson

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072

On Thu, Mar 3, 2016 at 10:43 AM, Silence Dogood <matt at nycresistor.com> wrote:
cool!

On Thu, Mar 3, 2016 at 1:39 PM, Mathieu Gagné <mgagne at internap.com> wrote:
On 2016-03-03 12:50 PM, Silence Dogood wrote:
> We did some early affinity work and discovered some interesting problems
> with affinity and scheduling. =/  By default, OpenStack used to (and may
> still) deploy instances across hosts evenly.
>
> Personally, I think this is a bad approach.  Most cloud providers stack
> across a couple of racks at a time, filling them and then moving on to
> the next.  This allows older equipment to age out instances more easily
> for removal / replacement.
>
> The problem then is that if you have super-large-capacity instances, they
> can never be deployed once you've got enough tiny instances spread across
> the environment.  So now you are fighting with the scheduler to ensure
> you have deployment targets for specific instance types (not very
> elastic / ephemeral).  It goes back to the wave scheduling model being
> superior.
>
> Anyways, we had the braindead idea of locking whole physical nodes out
> of the scheduler for a super (full-node) instance type.  And I suppose
> you could do this with AZs or regions if you really needed to (see the
> host-aggregate sketch after this quote).  But it's not a great approach.
>
> I would say that you almost need a wave-style scheduler to do this sort
> of affinity work.
>
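As an aside, the usual way to fence hosts off for a dedicated full-node
flavor is a host aggregate combined with the
AggregateInstanceExtraSpecsFilter. A minimal sketch, assuming that filter
is enabled in scheduler_default_filters; the host, aggregate, and flavor
names below are placeholders:

  # Create an aggregate and add the hosts reserved for full-node instances
  nova aggregate-create full-node-hosts
  nova aggregate-add-host full-node-hosts compute-01
  nova aggregate-set-metadata full-node-hosts fullnode=true

  # Tag the full-node flavor so it only schedules onto that aggregate
  nova flavor-key m1.fullnode set aggregate_instance_extra_specs:fullnode=true

Note this only pins the flavor to the aggregate; flavors without the extra
spec can still land on those hosts, so truly reserving them also means
tagging the other flavors (or their aggregates) the same way.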

You can already do this with the RAMWeigher, using the
ram_weight_multiplier config option:

  Multiplier used for weighing ram.  Negative
  numbers mean to stack vs spread.

The default is 1.0, which means spread.
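For example, a minimal nova.conf sketch for the scheduler node (the option
sits under [DEFAULT] in current releases; any negative value packs, and its
magnitude only changes how strongly it counts against the other weighers):

  [DEFAULT]
  # RAMWeigher scores hosts by free RAM. With a negative multiplier,
  # hosts with the least free RAM score highest, so the scheduler
  # stacks instances onto already-busy hosts instead of spreading.
  ram_weight_multiplier = -1.0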

--
Mathieu

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators





