I think it creates two virtual NUMA nodes on a single physical NUMA node.
As you can see below, CPUs 0-1 are in one virtual NUMA node and CPUs 2-3
are in the other. That is what I understand from the XML.

<cell id='0' cpus='0-1' memory='524288'/>
<cell id='1' cpus='2-3' memory='524288'/>
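If it helps, here is a quick sketch of reading that mapping straight out of
a domain XML dump (e.g. from `virsh dumpxml <domain>`). The wrapping
<cpu>/<numa> elements below are assumed from the usual libvirt layout, not
taken from the snippet in this thread:

```python
# Parse the <numa> <cell> elements from a libvirt domain XML dump to
# see which vCPUs land in each virtual NUMA node. The two <cell> lines
# are the ones quoted in this thread; the surrounding <cpu>/<numa>
# wrapper is assumed.
import xml.etree.ElementTree as ET

domain_xml = """
<cpu>
  <numa>
    <cell id='0' cpus='0-1' memory='524288'/>
    <cell id='1' cpus='2-3' memory='524288'/>
  </numa>
</cpu>
"""

root = ET.fromstring(domain_xml)
cells = {cell.get('id'): cell.get('cpus') for cell in root.iter('cell')}
print(cells)  # {'0': '0-1', '1': '2-3'}
```

Two cells means two virtual NUMA nodes exposed to the guest. Inside the
guest, `numactl --hardware` (or `lscpu`) should report the same two-node
topology, which might help with the "no topology info shown from inside
the guest" point below.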

On 29 April 2015 at 10:36, Blair Bethwaite <blair.bethwaite@gmail.com> wrote:
Hi all,

Just reading over the docs describing NUMA scheduler filter testing
(http://docs.openstack.org/developer/nova/devref/testing/libvirt-numa.html#testing-instance-boot-with-1-numa-cell-requested).
Somewhat confused by the scenario at the end, which seems to call for
a guest with 1 NUMA node but looks like it gets created with 2... I'm
obviously misunderstanding something in the resultant XML, but there's
no topology info shown from inside the guest, so I'm not sure. Anyone
tried this?

--
Cheers,
~Blairo

_______________________________________________
OpenStack-HPC mailing list
OpenStack-HPC@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-hpc