[nova][ptg] pinned and unpinned CPUs in one instance

Wang, Huaqiang huaqiang.wang at intel.com
Mon Nov 11 12:45:30 UTC 2019



> -----Original Message-----
> From: Sean Mooney <smooney at redhat.com>
> Sent: Friday, November 8, 2019 8:21 PM
> To: Balázs Gibizer <balazs.gibizer at est.tech>; openstack-discuss <openstack-
> discuss at lists.openstack.org>
> Subject: Re: [nova][ptg] pinned and unpinned CPUs in one instance
> 
> On Fri, 2019-11-08 at 07:09 +0000, Balázs Gibizer wrote:
> > spec: https://review.opendev.org/668656
> >
> > Agreements from the PTG:
> >
> > How we will test it:
> > * do functional tests with the libvirt driver, like the pinned CPU
> > tests we have today
> > * donyd's CI supports nested virt, so we can do pinned CPU testing but
> > not realtime testing. As this CI is still a work in progress we should
> > not block on this.
> we can do realtime testing in that CI; I already did. Also, there is a new
> label that is available across 3 providers, so we won't just be relying on
> donyd's good work.
> 
> > * coverage in https://opendev.org/x/whitebox-tempest-plugin is a
> > nice-to-have
> >
> > Naming: use the 'shared' and 'dedicated' terminology
> didn't we want to have a hw:cpu_policy=mixed specifically for this case?
> >
> > Support both the hw:pinvcpus=3 and the resources:PCPU=2 flavor extra
> > spec syntaxes, but not in the same flavor. The resources:PCPU=2 syntax
> > will have less expressive power until nova models NUMA in placement.
> > So nova will try to distribute PCPUs evenly between NUMA nodes. If that
> > is not possible we reject the request and ask the user to use the
> > hw:pinvcpus=3 syntax.
> >
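To make the two variants above concrete: the following is only a
hypothetical sketch, since the hw:cpu_policy=mixed and hw:pinvcpus extra
specs come from the spec still under review (668656) and their exact
semantics may change; the flavor names are made up.

    # proposed hw: namespace syntax, per spec 668656 (not implemented yet)
    openstack flavor set mixed-flavor-a --property hw:cpu_policy=mixed \
        --property hw:pinvcpus=3

    # placement-style syntax: 2 dedicated (PCPU) + 2 shared (VCPU) cores
    openstack flavor set mixed-flavor-b --property resources:PCPU=2 \
        --property resources:VCPU=2

Per the agreement above, a single flavor would use one syntax or the
other, never both.
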
> > The realtime mask is an exclusion mask: any vCPUs not listed there have
> > to be in the dedicated set of the instance.
> >
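For illustration with the existing realtime extra specs (the flavor name
is made up):

    # hw:cpu_realtime_mask is an exclusion mask: vCPUs 0-1 are *not*
    # realtime; all remaining vCPUs are realtime
    openstack flavor set rt-flavor --property hw:cpu_realtime=yes \
        --property hw:cpu_realtime_mask=^0-1

So under the rule above, every vCPU except 0 and 1 would have to come
from the instance's dedicated set, while vCPUs 0 and 1 could be shared.
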
> > TODO: Investigate whether we want to enable NUMA by default
> > * Pros: Simpler, everything is NUMA by default
> > * Cons: We'll either have to break/make configurable the 1:1 guest:host
> In the context of mixed instances: if we don't enable NUMA affinity by
> default, we should remove that behavior from all cases where we do it
> today.
> > NUMA mapping, else we won't be able to boot e.g. a 40-core shared
> > instance on a 40-core, 2-NUMA-node host

Hi gibi or Sean,

To help me understand the issue under discussion: if I change the
instance requirements a little, to
- an instance demanding 1 dedicated core and 39 shared cores
- an instance vCPU allocation ratio of 1
- a host with 2 NUMA nodes and 40 cores in total
- 39 of the 40 cores registered as VCPU resources and the remaining 1
core registered as PCPU

this will raise the same problem, right? Because the expectation is that
the instance can still be scheduled on that host.
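
Concretely, I mean the host CPUs would be registered roughly like this in
nova.conf (using the Train cpu_dedicated_set/cpu_shared_set options; the
core numbering is just an example):

    [compute]
    # 1 core reported to placement as PCPU inventory
    cpu_dedicated_set = 0
    # 39 cores reported to placement as VCPU inventory
    cpu_shared_set = 1-39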

> If this is the larger question of whether we should have all instances be
> NUMA by default, I have argued yes for quite a while, as I think having
> one code path has many advantages. That said, I'm aware of this
> limitation. One way to solve this was the use of the proposed can_split
> placement parameter: if you did not specify a NUMA topology, we would add
> can_split=VCPU and then create a single- or multi-NUMA-node topology
> based on the allocations. If we combine that with an allocation weigher,
> we could sort the allocation candidates by the smallest number of NUMA
> nodes, so we would prefer landing on hosts that can fit the instance on
> one NUMA node. It's a big change, but long overdue.
> 

I have read the 'can_split' spec; it will help, if I understand the issue
correctly. I also agree with Sean that this is a separate issue that does
not belong to spec 668656.
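
My rough understanding of the proposal, in placement API terms (the exact
query syntax is still under review in the spec, so this is only a sketch):

    # hypothetical allocation candidates request using the proposed
    # can_split parameter: the 40 VCPUs may be split across the NUMA-node
    # resource providers of a single host
    GET /allocation_candidates?resources=VCPU:40,MEMORY_MB:4096&can_split=VCPU

Nova would then build a one- or two-node guest NUMA topology from
whichever split placement returns.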

> That said, I have also argued the other point in response to pushback on
> "all VMs have a NUMA topology of 1 node unless you say otherwise", i.e.
> that the 1:1 mapping between virtual and host NUMA nodes should be
> configurable and is not required by the API today. The
> backwards-compatible way to do that is: it is not required by default if
> you are using shared cores, and it is required if you are using pinned
> cores, but that is a little confusing.
> 
> I don't really know what the right answer to this is, but I think it's a
> separate question from the topic of this thread. We don't need to solve
> this to enable pinned and unpinned CPUs in one instance, but we do need
> to address it before we can model NUMA in placement.
> 
> >
> >
> > Cheers,
> > gibi
> >
> >
> >
> 


