[Openstack-operators] Migrating an instance to a host with less cores fails

Aubrey Wells awells at digiumcloud.com
Fri May 15 18:52:51 UTC 2015

Trying to decide if this is a bug or just a config option that I can't
find. The setup I'm currently testing in my lab is two compute nodes
running Kilo: one has 40 cores (2x 10c with HT) and one has 16 cores (2x 4c
with HT). I don't have any CPU pinning enabled in my nova config, which
seems to have the effect of setting a vcpu cpuset element like this in
libvirt.xml (if the instance is created on the 40c node):
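
Something like the following (the guest vCPU count of 4 here is just
illustrative; the cpuset string is the one that shows up in the error
below):

```xml
<vcpu cpuset="0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38">4</vcpu>
```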


And then if I migrate that instance to the 16c node, it will bomb out with
an exception:

Live Migration failure: Invalid value
'0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38' for 'cpuset.cpus':
Invalid argument

Which makes sense, since that node doesn't have any CPUs beyond 15 (it
only has 0-15).

I can fix the symptom by commenting out a line in
nova/virt/libvirt/config.py (around line 1831) so the cpuset attribute is
never set and that line never gets written to libvirt.xml:
# vcpu.set("cpuset", hardware.format_cpu_spec(self.cpuset))

And the instance will happily migrate to the host with fewer CPUs, but this
loses some of the benefit of OpenStack trying to evenly spread out core
usage on the host, which is what I think the purpose of that cpuset is.

I'd rather fix it the right way if there's a config option I'm not seeing,
or file a bug if it's a bug.

What I think should be happening is that when nova creates the libvirt
definition on the destination compute node, it should write out a cpuset
that is correct for the specs of the hardware the instance is landing on.
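
What I mean is something along these lines (a rough sketch in Python; none
of these helper names are actual nova internals, they're just illustrative):

```python
def format_cpu_spec(cpus):
    """Render a set of CPU ids as a libvirt cpuset string, e.g. "0,2,4"."""
    return ",".join(str(c) for c in sorted(cpus))


def rebuild_cpuset(source_cpuset, dest_num_cpus):
    """Recompute a cpuset that is valid on the destination host.

    Drop any CPU ids the destination doesn't have; if nothing survives
    the filter, fall back to all of the destination's CPUs.
    """
    requested = {int(c) for c in source_cpuset.split(",")}
    valid = {c for c in requested if c < dest_num_cpus}
    return format_cpu_spec(valid or set(range(dest_num_cpus)))


# The cpuset written on the 40c node, rebuilt for the 16c node:
src = "0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38"
print(rebuild_cpuset(src, 16))  # -> "0,2,4,6,8,10,12,14"
```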

If it matters, in my nova-compute.conf file I also have the CPU mode and
model defined, to allow me to migrate between the two different CPU
generations in the first place (the 40c node is SandyBridge and the 16c is
Westmere, so I set it to the lowest common denominator, Westmere):
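
For reference, it's along these lines in the [libvirt] section (exact
values reconstructed from memory, so treat them as approximate):

```ini
[libvirt]
cpu_mode = custom
cpu_model = Westmere
```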


Any help is appreciated.
