[nova][ptg] pinned and unpinned CPUs in one instance

Sean Mooney smooney at redhat.com
Thu Nov 14 11:14:21 UTC 2019


On Thu, 2019-11-14 at 09:08 +0000, Stephen Finucane wrote:
> On Mon, 2019-11-11 at 11:58 +0000, Wang, Huaqiang wrote:
> > > -----Original Message-----
> > > From: Balázs Gibizer <balazs.gibizer at est.tech>
> > > Sent: Friday, November 8, 2019 3:10 PM
> > > To: openstack-discuss <openstack-discuss at lists.openstack.org>
> > > Subject: [nova][ptg] pinned and unpinned CPUs in one instance
> > > 
> > > spec: https://review.opendev.org/668656
> > > 
> > > Agreements from the PTG:
> > > 
> > > How we will test it:
> > > * do functional test with libvirt driver, like the pinned cpu tests we have
> > > today
> > > * donyd's CI supports nested virt so we can do pinned cpu testing but not
> > > realtime. As this CI is still work in progress we should not block on this.
> > > * coverage in https://opendev.org/x/whitebox-tempest-plugin is a nice to
> > > have
> > > 
> > > Naming: use the 'shared' and 'dedicated' terminology
> > > 
> > > Support both the hw:pinvcpus=3 and the resources:PCPU=2 flavor extra
> > > specs syntax, but not in the same flavor. The resources:PCPU=2 syntax will
> > > have less expressive power until nova models NUMA in placement, so nova
> > > will try to evenly distribute PCPUs between NUMA nodes. If that is not
> > > possible we reject the request and ask the user to use the
> > > hw:pinvcpus=3 syntax.
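[Editor's note: the even-distribution rule above can be sketched as follows. This is a hypothetical helper with illustrative names, not nova's actual code.]

```python
def distribute_pcpus(pcpu_count, numa_nodes):
    """Evenly split a flat resources:PCPU=N request across NUMA nodes.

    Mirrors the agreement above: if the request cannot be split
    evenly, reject it and point the user at the explicit
    hw:-style syntax instead.  Illustrative sketch only.
    """
    if numa_nodes < 1:
        raise ValueError("need at least one NUMA node")
    if pcpu_count % numa_nodes != 0:
        raise ValueError(
            "cannot evenly distribute %d PCPUs over %d NUMA nodes; "
            "use the explicit hw: syntax instead"
            % (pcpu_count, numa_nodes))
    return [pcpu_count // numa_nodes] * numa_nodes
```

For example, distribute_pcpus(4, 2) yields [2, 2], while a request for 3 PCPUs on a 2-node host is rejected.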
> > > 
> > > Realtime mask is an exclusion mask, any vcpus not listed there has to be in
> > > the dedicated set of the instance.
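[Editor's note: the exclusion semantics described above can be sketched like this, as a hypothetical helper operating on sets of vCPU indexes; not nova code.]

```python
def realtime_vcpus_valid(all_vcpus, realtime_excluded, dedicated):
    """Check the realtime-mask rule sketched above.

    The realtime mask is an *exclusion* mask: every vCPU not listed
    in it is a realtime vCPU, and each realtime vCPU must belong to
    the instance's dedicated (pinned) set.  Illustrative only.
    """
    realtime = all_vcpus - realtime_excluded
    return realtime.issubset(dedicated)
```

So with vCPUs {0, 1, 2, 3} and an exclusion mask of {0}, vCPUs 1-3 are realtime and must all be pinned.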
> > > 
> > > TODO: Investigate whether we want to enable NUMA by default
> > > * Pros: Simpler, everything is NUMA by default
> > > * Cons: We'll either have to break (or make configurable) the 1:1 guest:host
> > > NUMA mapping, or else we won't be able to boot e.g. a 40 core shared instance
> > > on a 40 core, 2 NUMA node host
> > 
> > For the case of 'booting a 40 core shared instance on a 40 core, 2 NUMA node host': that will
> > not be covered by the new 'mixed' policy. It is just a legacy 'shared' instance with no
> > assumption about instance NUMA topology. 
> 
> Correct. However, this investigation refers to *all* instances, not
> just those using the 'mixed' policy. For the 'mixed' policy, I assume
> we'll need to apply a virtual NUMA topology since we currently apply
> one for instances using the 'dedicated' policy.
yes, for consistency I think that would be the correct approach too.
> 
> > By the way if you want a 'shared' instance, with 40 cores, to be scheduled on a host
> > of 40 cores, 2 NUMA nodes, you also need to register all host cores as 'shared' cpus
> > through 'conf.compute.cpu_shared_set'. 
> > 
> > For an instance with the 'mixed' policy, what I want to propose is that the instance should
> > demand at least one 'dedicated' (or PCPU) core. Thus, no 'mixed' or 'dedicated'
> > instance will be scheduled on this host, since no PCPUs are available on it.
> > 
> > Likewise, a 'mixed' instance should also demand at least one 'shared' (or VCPU) core.
> > A 'mixed' instance demanding all of its cores from the PCPU resource should be considered
> > invalid, and an instance demanding all cores from the PCPU resource is just a legacy
> > 'dedicated' instance, whose CPU allocation policy is 'dedicated'.
> > 
> > In conclusion, an instance with the 'mixed' policy:
> > -. demands at least one 'dedicated' cpu and at least one 'shared' cpu
> > -. has a NUMA topology by default, due to requesting pinned cpus
> > 
> > In my understanding, the con above does not exist given these rules. 
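[Editor's note: the rules proposed above can be summarized in a small sketch. Function and policy names here are illustrative of the proposal, not nova's implementation.]

```python
def classify_cpu_policy(dedicated, shared):
    """Classify an instance by its requested CPU sets, following the
    rules above: 'mixed' needs at least one CPU of each kind, while
    all-PCPU and all-VCPU requests are just the legacy 'dedicated'
    and 'shared' policies respectively.  Illustrative sketch only.
    """
    if dedicated and shared:
        return 'mixed'
    if dedicated:
        return 'dedicated'
    if shared:
        return 'shared'
    raise ValueError('instance must request at least one CPU')
```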
> > 
> > Br
> > Huaqiang
> > 
> > > 
> > > Cheers,
> > > gibi
> 
> 
