An end user creating VMs which are expected to be kept apart should use Nova's server groups with a [soft-]anti-affinity policy to hint to the scheduler that those VMs are to be placed on different hosts. Soft anti-affinity will try to keep them apart as best it can but will allow doubling up on a host if necessary, while plain anti-affinity will actually fail the create if the VM can't be kept away from other members of the server group.

There are, AFAIK, some corner cases involving how an operator performs live migration that could break this, but it should generally work.
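As a concrete sketch of the approach, something like the following should work with the openstack CLI (the group name, image, flavor, and network names here are placeholders, not anything from your environment; soft-anti-affinity needs compute API microversion 2.15 or later):

```shell
# Create a server group with the soft-anti-affinity policy, so the
# scheduler spreads members across hosts but still boots them if it can't:
openstack server group create --policy soft-anti-affinity galera-group

# Boot each cluster member with the group scheduler hint, substituting
# the UUID printed by the command above:
openstack server create --image ubuntu-22.04 --flavor m1.medium \
  --network private --hint group=<server-group-uuid> galera-node-1
```

Swap `soft-anti-affinity` for `anti-affinity` if you'd rather the boot fail outright than have two Galera nodes share a hypervisor.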


David Zanetti

Chief Technology Officer

Catalyst Cloud

Aotearoa New Zealand's cloud provider

e david.zanetti@catalystcloud.nz

m +64-21-402260

w catalystcloud.nz

Follow us on LinkedIn

Level 5, 2 Commerce Street, Auckland 1010

Confidentiality Notice: This email is intended for the named recipients only. It may contain privileged, confidential or copyright information. If you are not the named recipient, any use, reliance upon, disclosure or copying of this email or its attachments is unauthorised. If you have received this email in error, please reply via email or call +64 4 499 2267

On Thu, 2024-05-23 at 18:58 -0400, Satish Patel wrote:

I have noticed when I spin up a bunch of VMs they all go to the same hypervisor until it runs out of resources. I can understand that it's trying to fill the hypervisor before it picks the next one, but it's kind of dangerous behavior. For example, one customer created 3 VMs to build a MySQL Galera cluster and all 3 nodes ended up on the same hypervisor (this is dangerous). I would like OpenStack to pick the hypervisor more randomly if resources are available in the pool instead of trying to fill one hypervisor first. How do I change that behavior? (There is a feature called affinity, but again I would like Nova to place VMs more randomly instead of shoving them all onto a single node.)

I am not an expert in scheduler logic so please educate me if I am missing something here.