<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><br class=""><div><blockquote type="cite" class=""><div class="">On 07 Feb 2017, at 20:05, Steve Gordon <<a href="mailto:sgordon@redhat.com" class="">sgordon@redhat.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div class=""><br class=""><br class="">----- Original Message -----<br class=""><blockquote type="cite" class="">From: "Arne Wiebalck" <<a href="mailto:arne.wiebalck@cern.ch" class="">arne.wiebalck@cern.ch</a>><br class="">To: "Steve Gordon" <<a href="mailto:sgordon@redhat.com" class="">sgordon@redhat.com</a>><br class="">Cc: "Manuel Sopena Ballesteros" <<a href="mailto:manuel.sb@garvan.org.au" class="">manuel.sb@garvan.org.au</a>>, <a href="mailto:openstack@lists.openstack.org" class="">openstack@lists.openstack.org</a><br class="">Sent: Tuesday, February 7, 2017 2:00:23 PM<br class="">Subject: Re: [Openstack] nova assign instances to cpu pinning<br class=""><br class=""><br class=""><blockquote type="cite" class="">On 07 Feb 2017, at 18:57, Steve Gordon <<a href="mailto:sgordon@redhat.com" class="">sgordon@redhat.com</a>> wrote:<br class=""><br class="">----- Original Message -----<br class=""><blockquote type="cite" class="">From: "Arne Wiebalck" <<a href="mailto:Arne.Wiebalck@cern.ch" class="">Arne.Wiebalck@cern.ch</a><br class=""><<a href="mailto:Arne.Wiebalck@cern.ch" class="">mailto:Arne.Wiebalck@cern.ch</a>>><br class="">To: "Manuel Sopena Ballesteros" <<a href="mailto:manuel.sb@garvan.org.au" class="">manuel.sb@garvan.org.au</a><br class=""><<a href="mailto:manuel.sb@garvan.org.au" class="">mailto:manuel.sb@garvan.org.au</a>>><br class="">Cc: <a href="mailto:openstack@lists.openstack.org" class="">openstack@lists.openstack.org</a> <<a href="mailto:openstack@lists.openstack.org"
class="">mailto:openstack@lists.openstack.org</a>><br class="">Sent: Tuesday, February 7, 2017 2:46:39 AM<br class="">Subject: Re: [Openstack] nova assign instances to cpu pinning<br class=""><br class="">Manuel,<br class=""><br class="">Rather than with aggregate metadata, we assign instances to NUMA nodes via<br class="">flavor extra_specs,<br class=""></blockquote><br class="">These are not necessarily mutually exclusive; the purpose of the aggregates<br class="">in the documentation (and assumed in the original design) is to segregate<br class="">compute nodes for dedicated guests (if using truly dedicated CPUs by<br class="">setting "hw:cpu_policy" to "dedicated", as Chris mentions) from those for<br class="">over-committed guests. If you are only using the NUMA node alignment (as<br class="">shown below) this doesn't apply, because it only guarantees how many<br class="">nodes your guest will be spread across, not that the guest will have dedicated<br class="">access to the CPU(s) it is on.
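For concreteness, here is a minimal CLI sketch of the dedicated-CPU-plus-aggregate setup described above. The flavor name `m1.pinned` is purely illustrative, the aggregate name `numa` is taken from the commands later in this thread, and the last step assumes the `AggregateInstanceExtraSpecsFilter` scheduler filter is enabled in nova.conf:

```shell
# Flavor for guests that need truly dedicated pCPUs
nova flavor-key m1.pinned set hw:cpu_policy=dedicated

# Tie the flavor to hosts in the "numa" aggregate by matching its metadata
# (requires AggregateInstanceExtraSpecsFilter in scheduler_default_filters)
nova flavor-key m1.pinned set aggregate_instance_extra_specs:pinned=true

# Tag the aggregate so the filter can match it against the flavor
nova aggregate-set-metadata numa pinned=true
```

With this in place, instances booted from the pinned flavor should be scheduled only onto hosts in the tagged aggregate, keeping dedicated and over-committed workloads apart.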
Folks who want truly dedicated vCPU:pCPU<br class="">mappings should still use the aggregates, unless *only* running workloads<br class="">with dedicated CPU needs.<br class=""></blockquote><br class="">Right: we’re using cells to separate compute-intensive from overcommitted<br class="">resources, have configured the flavor<br class="">extra_specs for NUMA nodes (and huge pages) only in the compute part, and have<br class="">had good experiences<br class="">with this setup.<br class="">Depending on the use case and the individual deployment, there are certainly<br class="">different ways to set things up.<br class="">If it isn't already available somewhere, it might be worth documenting the<br class="">options for the various needs and setups?<br class=""><br class="">Cheers,<br class=""> Arne<br class=""></blockquote><br class="">A fairly decent amount of such documentation was contributed by some of the Nova folks who worked on this functionality and ended up in the Admin Guide here:<br class=""><br class=""><a href="http://docs.openstack.org/admin-guide/compute-adv-config.html" class="">http://docs.openstack.org/admin-guide/compute-adv-config.html</a><br class=""></div></div></blockquote><div><br class=""></div><div>Awesome, thanks for pointing this out!</div><div><br class=""></div><div>Cheers,</div><div> Arne</div><div><br class=""></div><blockquote type="cite" class=""><div class=""><div class=""><br class=""><blockquote type="cite" class=""><blockquote type="cite" class="">-Steve<br class=""><br class=""><blockquote type="cite" class="">i.e.
nova flavor-show reports something like<br class=""><br class="">--><br class="">| extra_specs                | {"hw:numa_nodes": "1"} |<br class=""><--<br class=""><br class="">for our NUMA-aware flavors.<br class=""><br class="">This seems to work pretty well and gives the desired performance<br class="">improvement.<br class=""><br class="">Cheers,<br class="">Arne<br class=""><br class=""><br class=""><br class=""><blockquote type="cite" class="">On 07 Feb 2017, at 01:19, Manuel Sopena Ballesteros<br class=""><<a href="mailto:manuel.sb@garvan.org.au" class="">manuel.sb@garvan.org.au</a>> wrote:<br class=""><br class="">Hi,<br class=""><br class="">I am trying to isolate my instances by CPU socket in order to improve my<br class="">NUMA hardware performance.<br class=""><br class="">[root@openstack-dev ~(keystone_admin)]# nova aggregate-set-metadata numa<br class="">pinned=true<br class="">Metadata has been successfully updated for aggregate 1.<br class="">+----+------+-------------------+-------+-------------------+<br class="">| Id | Name | Availability Zone | Hosts | Metadata          |<br class="">+----+------+-------------------+-------+-------------------+<br class="">| 1  | numa | -                 |       | 'pinned=true'     |<br class="">+----+------+-------------------+-------+-------------------+<br class=""><br class="">I have made the changes to the nova metadata, but my admin can’t see the<br class="">instances:<br class=""><br class="">[root@openstack-dev ~(keystone_admin)]# nova aggregate-add-host numa<br class="">4d4f3c3f-2894-4244-b74c-2c479e296ff8<br class="">ERROR (NotFound): Compute host 4d4f3c3f-2894-4244-b74c-2c479e296ff8 could<br class="">not be found.
(HTTP 404) (Request-ID:<br class="">req-286985d8-d6ce-429e-b234-dd5eac5ad62e)<br class=""><br class="">And the user who has access to those instances does not have privileges<br class="">to<br class="">add the hosts<br class=""><br class="">[root@openstack-dev ~(keystone_myuser)]# nova aggregate-add-host numa<br class="">4d4f3c3f-2894-4244-b74c-2c479e296ff8<br class="">ERROR (Forbidden): Policy doesn't allow<br class="">os_compute_api:os-aggregates:index<br class="">to be performed. (HTTP 403) (Request-ID:<br class="">req-a5687fd4-c00d-4b64-af9e-bd5a82eb99c1)<br class=""><br class="">What would be the recommended way to do this?<br class=""><br class="">Thank you very much<br class=""><br class="">Manuel Sopena Ballesteros | Big data Engineer<br class="">Garvan Institute of Medical Research<br class="">The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010<br class="">T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E:<br class=""><a href="mailto:manuel.sb@garvan.org.au" class="">manuel.sb@garvan.org.au</a><br class=""><br class="">_______________________________________________<br class="">Mailing list:<br class="">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack<br class="">Post to     : openstack@lists.openstack.org<br class="">Unsubscribe :<br class="">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack<br class=""></blockquote><br class="">--<br class="">Arne Wiebalck<br class="">CERN IT<br class=""><br class=""></blockquote></blockquote></blockquote><br class=""></div></div></blockquote></div><br class=""></body></html>