cpu pinning

open infra openinfradn at gmail.com
Wed Dec 8 16:34:35 UTC 2021


Managed to set cpu_dedicated_set in nova.
Thanks, Sean!
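
For anyone hitting the same issue later, the relevant bit of nova.conf on the
compute node looks roughly like this (the CPU ranges below are illustrative,
not the exact ones from my host):

    [compute]
    # host CPUs reserved for pinned (hw:cpu_policy='dedicated') guests
    cpu_dedicated_set = 2-63
    # host CPUs left floating for 'shared' policy guests
    cpu_shared_set = 64-127

After changing these, nova-compute needs a restart so the host starts
reporting PCPU inventory to placement.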

On Thu, Dec 2, 2021 at 9:16 PM Sean Mooney <smooney at redhat.com> wrote:

> On Thu, 2021-12-02 at 08:58 +0530, open infra wrote:
> > Hi,
> >
> > I have created a flavor with the following properties and created an
> > instance. The instance failed with the error "No valid host was found.
> > There are not enough hosts available."
> > When I set the CPU policy to 'shared' I can create the instance. The host
> > machine has two NUMA nodes and a total of 128 vCPUs.
> > I cannot figure out what's missing here.
> I suspect the issue is not with the flavor but with your host configuration.
>
> You likely need to define cpu_dedicated_set and cpu_shared_set in
> nova.conf.
>
> We do not support mixing pinned and floating CPUs on the same host unless
> you partition the CPU pool
> using cpu_dedicated_set and cpu_shared_set.
>
> As of Train, cpu_dedicated_set replaced vcpu_pin_set as the supported way
> to report the pool of CPUs to be
> used for pinned VMs to placement.
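>
> For example, a pre-Train host configured with
>
>     [DEFAULT]
>     vcpu_pin_set = 4-63
>
> would on Train or later typically use
>
>     [compute]
>     cpu_dedicated_set = 4-63
>
> instead (the CPU range here is purely illustrative).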
>
> If you do "openstack resource provider inventory show <compute node uuid>"
> it should detail the available PCPU and VCPU inventories.
> When you use hw:cpu_policy='dedicated' it will claim PCPUs, not VCPUs, in
> placement.
> That is likely the issue you are encountering.
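>
> As a concrete illustration (the provider name/UUID and the exact output
> will differ; this is just a sketch):
>
>     $ openstack resource provider list --name <compute hostname>
>     $ openstack resource provider inventory list <compute node uuid>
>
> If no PCPU resource class shows up in that inventory, the host is not
> reporting dedicated CPUs to placement, which usually means
> cpu_dedicated_set is not configured on it.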
>
> By default we have a fallback query to make this work while you are
> upgrading:
>
>
> https://docs.openstack.org/nova/latest/configuration/config.html#workarounds.disable_fallback_pcpu_query
>
> which we should be disabling by default soon.
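>
> The option lives in the [workarounds] section of nova.conf; shown here with
> its current default purely for illustration:
>
>     [workarounds]
>     # False keeps the fallback VCPU-based query enabled for pinned
>     # instances while hosts are migrated to cpu_dedicated_set
>     disable_fallback_pcpu_query = False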
>
> But I suspect that is likely why you are getting the "No valid host" error.
>
> To debug this properly you should enable debug logging on the nova
> scheduler, then confirm whether you got
> hosts back from placement and whether the NUMA topology filter is rejecting
> the host or not.
>
> Without the scheduler debug logs for the instance creation we cannot really
> help more than this, since we do not have the info required.
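>
> A rough sketch of that (config path, log location and service name vary by
> distro and deployment tooling):
>
>     # on the scheduler host, in nova.conf
>     [DEFAULT]
>     debug = True
>
>     # then restart the scheduler, retry the boot, and inspect its log, e.g.
>     $ sudo systemctl restart nova-scheduler
>     $ sudo grep -i numatopologyfilter /var/log/nova/nova-scheduler.log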
> >
> > controller-1:~$ openstack flavor show dn.large -c properties
> > +------------+--------------------------------------------------------------------------------------------------------+
> > | Field      | Value                                                                                                  |
> > +------------+--------------------------------------------------------------------------------------------------------+
> > | properties | hw:cpu_cores='2', hw:cpu_policy='dedicated', hw:cpu_sockets='1', hw:cpu_threads='2', hw:numa_nodes='1' |
> > +------------+--------------------------------------------------------------------------------------------------------+
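> >
> > For reference, a flavor with these properties can be created with
> > something like the following (the vCPU/RAM/disk sizes are placeholders;
> > only the properties come from the output above):
> >
> >     openstack flavor create dn.large --vcpus 4 --ram 8192 --disk 20 \
> >       --property hw:cpu_policy=dedicated --property hw:cpu_sockets=1 \
> >       --property hw:cpu_cores=2 --property hw:cpu_threads=2 \
> >       --property hw:numa_nodes=1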
> >
> > controller-1:~$ openstack hypervisor stats show
> > +----------------------+--------+
> > | Field                | Value  |
> > +----------------------+--------+
> > | count                | 1      |
> > | current_workload     | 0      |
> > | disk_available_least | 187    |
> > | free_disk_gb         | 199    |
> > | free_ram_mb          | 308787 |
> > | local_gb             | 219    |
> > | local_gb_used        | 20     |
> > | memory_mb            | 515443 |
> > | memory_mb_used       | 206656 |
> > | running_vms          | 7      |
> > | vcpus                | 126    |
> > | vcpus_used           | 49     |
> > +----------------------+--------+
> >
> >
> >
> > Regards,
> >
> > Danishka
>
>

