anti-affinity: what're the mechanics?

Sean Mooney smooney at redhat.com
Wed Mar 3 00:00:10 UTC 2021


On Tue, 2021-03-02 at 15:31 -0500, Ken D'Ambrosio wrote:
> Hey, all.  Turns out we really need anti-affinity running on our (very 
> freaking old -- Juno) clouds.  I'm trying to find docs that describe its 
> functionality, and am failing.  If I enable it, and (say) have 10 
> hypervisors, and 12 VMs to fire off, what happens when VM #11 goes to 
> fire?  Does it fail, or does the scheduler just continue to at least 
> *try* to maintain as few possible on each hypervisor?
In Juno I believe we only have hard anti-affinity via the filter; I believe it
predates the soft affinity/anti-affinity weighers, so the extra VMs will error out.

The behaviour will, I believe, depend on whether you did a multi-create or booted the VMs serially.
If you boot them serially, you should be able to boot 10 VMs. If you do a multi-create,
it depends on whether you set the min value: if you don't set it, I think only 10 will boot
and the last two will go to ERROR; if you set --min 12 --max 12, I think they will all go to ERROR or be deleted.
I have not checked that, but I believe we are meant to try to roll back in that case.
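
Very roughly, the hard filter rejects any host that already runs a member of the group,
so once every hypervisor has one member there is no valid host left and the next request
goes to ERROR. A minimal sketch of that idea (names are illustrative, not the actual
Nova code, which is ServerGroupAntiAffinityFilter):

# Minimal sketch of hard anti-affinity scheduling, assuming a simplified
# model of Nova's filter scheduler. Class and function names here are
# hypothetical; the real filter differs between releases.

class HardAntiAffinityFilter(object):
    """Reject any host that already runs a member of the server group."""

    def host_passes(self, hostname, group_member_hosts):
        return hostname not in group_member_hosts


def schedule(hosts, num_instances):
    """Place instances one at a time; fail once no host passes the filter."""
    filt = HardAntiAffinityFilter()
    used = set()
    placements = []
    for i in range(num_instances):
        candidates = [h for h in hosts if filt.host_passes(h, used)]
        if not candidates:
            raise RuntimeError("No valid host found for instance %d" % (i + 1))
        used.add(candidates[0])
        placements.append(candidates[0])
    return placements


hosts = ["hv%02d" % n for n in range(1, 11)]   # 10 hypervisors
print(schedule(hosts, 10))                     # fine: one VM per host
print(schedule(hosts, 12))                     # instance 11 finds no valid host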

The soft anti-affinity weigher was added by https://github.com/openstack/nova/commit/72ba18468e62370522e07df796f5ff74ae13e8c9
in Mitaka. If you want to be able to boot all 12, you need a weigher like that.
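
For contrast, a soft anti-affinity weigher never rejects a host; it only scores hosts
so that the ones with fewer group members are preferred, which is why all 12 VMs can land.
A rough sketch of the idea (again hypothetical names, not the actual
ServerGroupSoftAntiAffinityWeigher code):

# Rough sketch of soft anti-affinity weighing, under the same simplified
# model as above. Names are illustrative only.

def soft_anti_affinity_weight(hostname, members_per_host, multiplier=1.0):
    """Higher is better: hosts with fewer group members score higher."""
    return -multiplier * members_per_host.get(hostname, 0)


def pick_host(hosts, members_per_host):
    # Every host stays eligible; the one with the fewest members wins.
    return max(hosts, key=lambda h: soft_anti_affinity_weight(h, members_per_host))


hosts = ["hv%02d" % n for n in range(1, 11)]   # 10 hypervisors
members_per_host = {}
for _ in range(12):                            # 12 group members
    chosen = pick_host(hosts, members_per_host)
    members_per_host[chosen] = members_per_host.get(chosen, 0) + 1
print(members_per_host)   # two hosts end up with 2 members, the rest with 1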

For the most part you can probably backport it directly from master and use it in Juno,
as I don't think we have materially altered the way the filters and weighers work since then.
The weighers, like the filters, are also pluggable, so you could instead backport it out of tree and load it externally if you wanted to.
That is probably your best bet.
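
If you go the out-of-tree route, the general shape is a small module shipped outside
the Nova tree that subclasses the weigher base class and is then listed in nova.conf
on the scheduler node. A hedged sketch of what that could look like (the module path,
class name, helper and exact option name are assumptions; check the scheduler docs
for your release):

# my_weighers/soft_anti_affinity.py -- hypothetical out-of-tree module.
# Assumes Nova's pluggable weigher interface (BaseHostWeigher with a
# _weigh_object() hook); verify the interface against your release.

from nova.scheduler import weights


def count_group_members(host_state, weight_properties):
    """Placeholder helper: return how many members of the request's server
    group already run on this host. The real lookup depends on what the
    host state and request spec expose in your release."""
    raise NotImplementedError


class SoftAntiAffinityWeigher(weights.BaseHostWeigher):
    """Prefer hosts running fewer members of the requested server group."""

    def _weigh_object(self, host_state, weight_properties):
        # Higher weight == more preferred, so negate the member count.
        return -float(count_group_members(host_state, weight_properties))


# Then point the scheduler at it in nova.conf, something like (option name
# from the Juno-era docs, verify before relying on it; keep any default
# weighers you still want in the list):
#
#   [DEFAULT]
#   scheduler_weight_classes = my_weighers.soft_anti_affinity.SoftAntiAffinityWeigher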

> 
> Thanks!
> 
> -Ken
> 




