Hi Steve,

Thank you for your feedback.

I figured out a solution to the problem. In my case the root cause lay in the NUMA topology: the memory modules were not physically distributed evenly across the processors, so I redistributed them accordingly, and everything works fine now. A quick way to verify the layout is shown below my signature.

Thanks again,
Hamza
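
P.S. For anyone who hits the same symptom, the pinning setup from the guide boils down to roughly the following (a sketch only; the flavor name and core range here are examples, not the exact values from my deployment):

    # nova.conf on the compute node: reserve physical cores for pinned
    # guests (the core range is an example and depends on the host).
    # [DEFAULT]
    # vcpu_pin_set = 4-23

    # Flavor extra spec requesting a dedicated physical CPU per vCPU:
    openstack flavor set m1.pinned --property hw:cpu_policy=dedicated

And this is roughly how I confirmed that memory was spread evenly across the NUMA nodes after reseating the modules (exact node counts and sizes will differ per host):

    # Per-node memory layout; in my case one node was reporting far more
    # memory than the other before the fix.
    numactl --hardware

    # The same layout as libvirt sees it, via the <memory> element under
    # each NUMA <cell>:
    virsh capabilities | grep -A2 "<cell id"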
On 19 April 2016 at 19:37, Steve Gordon <sgordon@redhat.com> wrote:

----- Original Message -----
> From: "Benyatou Hamza" <h16mara@gmail.com>
> To: OpenStack-operators@lists.openstack.org
>
> Hi,
>
> I established a mapping from virtual CPUs to physical cores following
> this Mirantis guide [1]. I went through it successfully, but the problem
> I have is that I am only able to launch one instance, regardless of the
> number of vCPUs specified in the flavor (2, 4, 16, ...).
>
> Have you implemented this option before?

Hi Hamza,

Do you have debug logging from the scheduling attempt? Output from `virsh capabilities` might be of help too.

Thanks,

Steve

> Thank you
>
> Hamza
>
> [1] https://www.mirantis.com/blog/mirantis-openstack-7-0-nfvi-deployment-guide-numacpu-pinning/