Hi Satish,
This is very odd. I am running a NUMA-aware OpenStack cloud and my VMs are getting scheduled on both NUMA nodes. My flavor settings are below. I am also using huge pages for performance. (Make sure you have the NUMATopologyFilter configured.)
hw:cpu_policy='dedicated', hw:cpu_sockets='2', hw:cpu_threads='2', hw:mem_page_size='large'
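In case it helps to compare, I apply those properties with roughly the following commands; the flavor name and the vCPU/RAM/disk sizes here are just examples, not my real values. The filter is enabled in nova.conf on the scheduler hosts:

    # Example flavor; name and sizes are placeholders
    openstack flavor create numa-pinned --vcpus 4 --ram 8192 --disk 40
    openstack flavor set numa-pinned \
        --property hw:cpu_policy=dedicated \
        --property hw:cpu_sockets=2 \
        --property hw:cpu_threads=2 \
        --property hw:mem_page_size=large

    # nova.conf: append NUMATopologyFilter to whatever filters you already enable
    [filter_scheduler]
    enabled_filters = <your existing filters>,NUMATopologyFilter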
What if you remove hw:numa_nodes=1?
Note that we are using a shared CPU policy (on various hosts). I don't know whether this is causing our issue, but we definitely do not want to pin CPUs to VMs on these hosts.

Without the hw:numa_nodes property, an individual VM is created with its vCPUs and memory divided between the two NUMA nodes, which is not what we would prefer. We would rather have all vCPUs and memory for the VM placed into a single NUMA node, so every core of the VM has local access to that node's memory rather than some cores requiring cross-NUMA memory access. With large core-count processors and large amounts of memory, it doesn't make much sense to have small VMs (such as 4-core VMs) span two NUMA nodes.

With our current settings, every VM is placed into a single NUMA node (as we wanted), but they always land in NUMA node 0 and never in NUMA node 1. It does, however, appear that QEMU's memory overhead and the Linux buffer/cache are landing in NUMA node 1, and native processes on the hosts are spread between the NUMA nodes.

We don't have huge pages enabled, so we have not enabled the NUMATopologyFilter.

Eric
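P.S. For concreteness, the flavor properties we are using look roughly like this. The flavor name and the vCPU/RAM/disk sizes are placeholders rather than our real values, and hw:cpu_policy=shared is shown explicitly only for clarity (it is the default when unset):

    # Illustrative only; name and sizes are placeholders
    openstack flavor create small-4core --vcpus 4 --ram 16384 --disk 40
    openstack flavor set small-4core \
        --property hw:numa_nodes=1 \
        --property hw:cpu_policy=shared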