cpu pinning

Sean Mooney smooney at redhat.com
Tue Jan 4 12:54:06 UTC 2022


On Tue, 2022-01-04 at 18:08 +0530, open infra wrote:
> I recently noticed that available vcpu usage is "used 49 of 30" [1] but I
> have a total of 128 vcpus [2] and allocated only 126 for applications.
> Is this due to misconfiguration in my environment?
The hypervisor API only reports the vCPUs, not the pCPUs that are used for pinning.
So if you have cpu_dedicated_set and cpu_shared_set defined, then the vCPU count
reported in the Horizon UI will only include the CPUs from cpu_shared_set.
If you are using the older vcpu_pin_set config option instead of cpu_dedicated_set,
then the host can only be used for either pinned or unpinned VMs, and the vCPU value
in the hypervisor API will be the total number of cores in vcpu_pin_set.
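
For example, a host partitioned like this (a minimal sketch; the CPU ranges are
illustrative, not taken from your environment) would report only the 30 shared
cores as vCPUs:

  # /etc/nova/nova.conf on the compute node
  [compute]
  # cores reserved for pinned guests (hw:cpu_policy=dedicated)
  cpu_dedicated_set = 2-49,66-113
  # cores shared by floating/unpinned guests; only these are
  # counted as vCPUs by the hypervisor API
  cpu_shared_set = 50-65,114-127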

Looking at the output of "openstack hypervisor stats show" below, we see 126 vCPUs
reported, so this looks like a Horizon bug of some kind.

The used CPU count is correct.

Did you perhaps change from using vcpu_pin_set to cpu_dedicated_set while VMs were
on the host? That is not supported. If you did, and you allocated 30 CPUs to
cpu_shared_set, then the Horizon output would make sense, but based on the
"openstack hypervisor stats show" output below this should be 49/126.

I should also point out that starting in Wallaby this information is no longer
reported. The stats endpoint was removed entirely from the hypervisors API, and the
cpu_info, free_disk_gb, local_gb, local_gb_used, disk_available_least, free_ram_mb,
memory_mb, memory_mb_used, vcpus, vcpus_used, and running_vms
fields were removed from the hypervisor detail show endpoint.

https://specs.openstack.org/openstack/nova-specs/specs/wallaby/implemented/modernize-os-hypervisors-api.html has the details and what you should use
instead.
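
As a rough sketch of the replacement workflow (this needs the osc-placement CLI
plugin installed; the provider name and UUID below are placeholders):

  # find the resource provider for the compute node
  openstack resource provider list --name <compute-node-hostname>
  # total PCPU/VCPU/MEMORY_MB/DISK_GB inventory for that provider
  openstack resource provider inventory list <provider-uuid>
  # current usage per resource class (replaces vcpus_used and friends)
  openstack resource provider usage show <provider-uuid>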
> 
> [1] https://pasteboard.co/TI0WbbyZiXsn.png
> [2] https://paste.opendev.org/show/811910/
> 
> Regards,
> Danishka
> 
> On Wed, Dec 8, 2021 at 10:04 PM open infra <openinfradn at gmail.com> wrote:
> 
> > Managed to set cpu_dedicated_set in nova.
> > Thanks, Sean!
> > 
> > On Thu, Dec 2, 2021 at 9:16 PM Sean Mooney <smooney at redhat.com> wrote:
> > 
> > > On Thu, 2021-12-02 at 08:58 +0530, open infra wrote:
> > > > Hi,
> > > > 
> > > > I have created a flavor with the following properties and created an
> > > > instance.
> > > > Instance failed with the error "No valid host was found. There are not
> > > > enough hosts available."
> > > > When I set the CPU policy to 'shared' I can create the instance. The
> > > > host machine has two NUMA nodes and a total of 128 vCPUs.
> > > > I cannot figure out what's missing here.
> > > I suspect the issue is not with the flavor but with your host
> > > configuration.
> > > 
> > > You likely need to define cpu_dedicated_set and cpu_shared_set in
> > > nova.conf.
> > > 
> > > We do not support mixing pinned and floating CPUs on the same host unless
> > > you partition the CPU pool
> > > using cpu_dedicated_set and cpu_shared_set.
> > > 
> > > As of Train, cpu_dedicated_set replaced vcpu_pin_set as the supported way
> > > to report the pool of CPUs to be
> > > used for pinned VMs to placement.
> > > 
> > > If you do "openstack resource provider inventory show <compute node
> > > uuid>" it should detail the available PCPU and VCPU inventories.
> > > When you use hw:cpu_policy='dedicated' it will claim PCPUs, not VCPUs, in
> > > placement.
> > > That is likely the issue you are encountering.
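> > > 
> > > As a sketch (the flavor name here is illustrative), a dedicated-policy
> > > flavor turns into a PCPU request:
> > > 
> > >   openstack flavor create --vcpus 4 --ram 4096 --disk 20 pinned.small
> > >   openstack flavor set pinned.small --property hw:cpu_policy=dedicated
> > >   # placement now receives resources:PCPU=4 rather than resources:VCPU=4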
> > > 
> > > By default we have a fallback query to make this work while you are
> > > upgrading:
> > > 
> > > 
> > > https://docs.openstack.org/nova/latest/configuration/config.html#workarounds.disable_fallback_pcpu_query
> > > 
> > > We should be disabling it by default soon.
> > > 
> > > But I suspect that is likely why you are getting the "No valid host" error.
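> > > 
> > > For reference, that option lives in nova.conf on the scheduler; the value
> > > shown is its current default (a sketch, check your own config):
> > > 
> > >   [workarounds]
> > >   # false = a request for PCPU may fall back to a VCPU query, easing
> > >   # upgrades from vcpu_pin_set; set true once hosts report PCPU inventory
> > >   disable_fallback_pcpu_query = false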
> > > 
> > > To debug this properly you should enable debug logging on the nova
> > > scheduler and then confirm whether you got
> > > hosts back from placement, and then whether the NUMATopologyFilter is
> > > rejecting the host or not.
> > > 
> > > Without the scheduler debug logs for the instance creation we cannot
> > > really help more than this, since we do not have the info required.
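> > > 
> > > A minimal way to turn that on (nova.conf on the scheduler node, then
> > > restart the scheduler service; the log path below is an assumption and
> > > varies by distro):
> > > 
> > >   [DEFAULT]
> > >   debug = True
> > > 
> > >   # then, after retrying the boot:
> > >   grep -i NUMATopologyFilter /var/log/nova/nova-scheduler.log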
> > > > 
> > > > controller-1:~$ openstack flavor show dn.large -c properties
> > > > +------------+--------------------------------------------------------------------------------------------------------+
> > > > | Field      | Value                                                                                                  |
> > > > +------------+--------------------------------------------------------------------------------------------------------+
> > > > | properties | hw:cpu_cores='2', hw:cpu_policy='dedicated', hw:cpu_sockets='1', hw:cpu_threads='2', hw:numa_nodes='1' |
> > > > +------------+--------------------------------------------------------------------------------------------------------+
> > > > 
> > > > controller-1:~$ openstack hypervisor stats show
> > > > 
> > > > +----------------------+--------+
> > > > | Field                | Value  |
> > > > +----------------------+--------+
> > > > | count                | 1      |
> > > > | current_workload     | 0      |
> > > > | disk_available_least | 187    |
> > > > | free_disk_gb         | 199    |
> > > > | free_ram_mb          | 308787 |
> > > > | local_gb             | 219    |
> > > > | local_gb_used        | 20     |
> > > > | memory_mb            | 515443 |
> > > > | memory_mb_used       | 206656 |
> > > > | running_vms          | 7      |
> > > > | vcpus                | 126    |
> > > > | vcpus_used           | 49     |
> > > > +----------------------+--------+
> > > > 
> > > > 
> > > > 
> > > > Regards,
> > > > 
> > > > Danishka
> > > 
> > > 



