[Openstack] nova assign instances to cpu pinning

Arne Wiebalck arne.wiebalck at cern.ch
Tue Feb 7 19:20:45 UTC 2017


> On 07 Feb 2017, at 20:05, Steve Gordon <sgordon at redhat.com> wrote:
> 
> 
> 
> ----- Original Message -----
>> From: "Arne Wiebalck" <arne.wiebalck at cern.ch>
>> To: "Steve Gordon" <sgordon at redhat.com>
>> Cc: "Manuel Sopena Ballesteros" <manuel.sb at garvan.org.au>, openstack at lists.openstack.org
>> Sent: Tuesday, February 7, 2017 2:00:23 PM
>> Subject: Re: [Openstack] nova assign instances to cpu pinning
>> 
>> 
>>> On 07 Feb 2017, at 18:57, Steve Gordon <sgordon at redhat.com> wrote:
>>> 
>>> ----- Original Message -----
>>>> From: "Arne Wiebalck" <Arne.Wiebalck at cern.ch
>>>> <mailto:Arne.Wiebalck at cern.ch>>
>>>> To: "Manuel Sopena Ballesteros" <manuel.sb at garvan.org.au
>>>> <mailto:manuel.sb at garvan.org.au>>
>>>> Cc: openstack at lists.openstack.org <mailto:openstack at lists.openstack.org>
>>>> Sent: Tuesday, February 7, 2017 2:46:39 AM
>>>> Subject: Re: [Openstack] nova assign instances to cpu pinning
>>>> 
>>>> Manuel,
>>>> 
>>>> Rather than with aggregate metadata, we assign instances to NUMA nodes via
>>>> flavor extra_specs,
>>> 
>>> These are not necessarily mutually exclusive. The purpose of the aggregates
>>> in the documentation (and assumed in the original design) is to segregate
>>> compute nodes for dedicated guests (if using truly dedicated CPUs by
>>> setting "hw:cpu_policy" to "dedicated", as Chris mentions) from those for
>>> over-committed guests. If you are only using the NUMA node alignment (as
>>> shown below), this doesn't apply, because it only guarantees how many NUMA
>>> nodes your guest will be spread across, not that it will have dedicated
>>> access to the CPU(s) it is on. Folks who want truly dedicated vCPU:pCPU
>>> mappings should still use the aggregates, unless *only* running workloads
>>> with dedicated CPU needs.
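>>> 
>>> (As a minimal sketch of that aggregate-based setup, with purely illustrative
>>> flavor and aggregate names: the pinned flavor carries both the dedicated CPU
>>> policy and an extra spec matching the aggregate metadata, e.g.
>>> 
>>>     nova aggregate-create pinned-hosts
>>>     nova aggregate-set-metadata pinned-hosts pinned=true
>>>     nova flavor-key m1.pinned set hw:cpu_policy=dedicated
>>>     nova flavor-key m1.pinned set aggregate_instance_extra_specs:pinned=true
>>> 
>>> where m1.pinned stands for an existing flavor, and the scheduler needs
>>> AggregateInstanceExtraSpecsFilter and NUMATopologyFilter in its filter list.)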
>> 
>> Right: we’re using cells to separate compute-intensive from overcommitted
>> resources, have configured the flavor extra-specs for NUMA nodes (and huge
>> pages) only in the compute part, and have had good experience with this
>> setup.
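>> 
>> (For illustration only, the huge-page part of such a flavor could look like
>> 
>>     nova flavor-key numa.large set hw:mem_page_size=large
>> 
>> where "numa.large" is a hypothetical flavor name and hw:mem_page_size also
>> accepts an explicit page size instead of "large".)
>> 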
>> Depending on the use case and the individual deployment, there are certainly
>> different options to set things up.
>> If not already available somewhere, it may be good to document the options
>> depending on needs and setup?
>> 
>> Cheers,
>> Arne
> 
> A fairly decent amount of this documentation was contributed by some of the Nova folks who worked on this functionality and ended up in the Admin Guide here:
> 
> http://docs.openstack.org/admin-guide/compute-adv-config.html

Awesome, thanks for pointing this out!

Cheers,
 Arne

> 
>>> -Steve
>>> 
>>>> i.e. nova flavor-show reports something like
>>>> 
>>>> —>
>>>> | extra_specs                | {"hw:numa_nodes": "1"} |
>>>> <—
>>>> 
>>>> for our NUMA-aware flavors.
>>>> 
>>>> This seems to work pretty well and gives the desired performance
>>>> improvement.
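>>>> 
>>>> (For illustration, such an extra spec can be set on an existing flavor with
>>>> something like
>>>> 
>>>>     nova flavor-key <flavor-name> set hw:numa_nodes=1
>>>> 
>>>> where <flavor-name> is just a placeholder.)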
>>>> 
>>>> Cheers,
>>>> Arne
>>>> 
>>>> 
>>>> 
>>>>> On 07 Feb 2017, at 01:19, Manuel Sopena Ballesteros
>>>>> <manuel.sb at garvan.org.au> wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> I am trying to isolate my instances by cpu socket in order to improve my
>>>>> NUMA hardware performance.
>>>>> 
>>>>> [root at openstack-dev ~(keystone_admin)]# nova aggregate-set-metadata numa
>>>>> pinned=true
>>>>> Metadata has been successfully updated for aggregate 1.
>>>>> +----+------+-------------------+-------+---------------+
>>>>> | Id | Name | Availability Zone | Hosts | Metadata      |
>>>>> +----+------+-------------------+-------+---------------+
>>>>> | 1  | numa | -                 |       | 'pinned=true' |
>>>>> +----+------+-------------------+-------+---------------+
>>>>> 
>>>>> I have made the changes to the nova metadata, but my admin user can’t see
>>>>> the instances:
>>>>> 
>>>>> [root at openstack-dev ~(keystone_admin)]# nova aggregate-add-host numa
>>>>> 4d4f3c3f-2894-4244-b74c-2c479e296ff8
>>>>> ERROR (NotFound): Compute host 4d4f3c3f-2894-4244-b74c-2c479e296ff8 could
>>>>> not be found. (HTTP 404) (Request-ID:
>>>>> req-286985d8-d6ce-429e-b234-dd5eac5ad62e)
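>>>>> 
>>>>> (The 404 above suggests the argument is not a known compute host:
>>>>> aggregate-add-host expects the hypervisor host name rather than an
>>>>> instance UUID, e.g., with a hypothetical host name,
>>>>> 
>>>>>     nova hypervisor-list
>>>>>     nova aggregate-add-host numa compute-01.example.org
>>>>> 
>>>>> where the host name is taken from the hypervisor-list output.)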
>>>>> 
>>>>> And the user who has access to those instances does not have privileges to
>>>>> add the hosts:
>>>>> 
>>>>> [root at openstack-dev ~(keystone_myuser)]# nova aggregate-add-host numa
>>>>> 4d4f3c3f-2894-4244-b74c-2c479e296ff8
>>>>> ERROR (Forbidden): Policy doesn't allow
>>>>> os_compute_api:os-aggregates:index
>>>>> to be performed. (HTTP 403) (Request-ID:
>>>>> req-a5687fd4-c00d-4b64-af9e-bd5a82eb99c1)
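>>>>> 
>>>>> (That 403 is expected for a non-admin user: the aggregate API is admin-only
>>>>> by default, governed by policy entries such as
>>>>> 
>>>>>     "os_compute_api:os-aggregates:index": "rule:admin_api",
>>>>>     "os_compute_api:os-aggregates:add_host": "rule:admin_api",
>>>>> 
>>>>> in Nova's policy.json, so host aggregates are meant to be managed by
>>>>> operators rather than tenants.)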
>>>>> 
>>>>> What would be the recommended way to do this?
>>>>> 
>>>>> Thank you very much
>>>>> 
>>>>> Manuel Sopena Ballesteros | Big data Engineer
>>>>> Garvan Institute of Medical Research
>>>>> The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
>>>>> T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E:
>>>>> manuel.sb at garvan.org.au
>>>>> 
>>>> 
>>>> --
>>>> Arne Wiebalck
>>>> CERN IT
>>>> 
>>>> 
> 
