<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Stephen Finucane <<a href="mailto:sfinucan@redhat.com">sfinucan@redhat.com</a>> 于2019年6月18日周二 下午5:55写道:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Tue, 2019-06-18 at 06:41 +0000, Shewale, Bhagyashri wrote:<br>
> > As above, ignore 'cpu_shared_set' but issue a warning. Use the value of<br>
> > 'vcpu_pin_set' to report both VCPU and PCPU inventory. Note that<br>
> > 'vcpu_pin_set' is already used to calculate VCPU inventory.<br>
> <br>
> As mentioned in the spec, if the operator sets ``vcpu_pin_set`` in<br>
> Stein and upgrades to Train, then both VCPU and PCPU inventory<br>
> should be reported in placement.<br>
> <br>
> On current master (Stein), if the operator sets ``vcpu_pin_set=0-3`` on<br>
> compute node A and adds that node A into a host aggregate, say<br>
> “agg1”, with metadata ``pinned=true``, then it allows creating<br>
> both pinned and non-pinned instances, which is a known big issue.<br>
> Create instance A with flavor extra spec<br>
> ("aggregate_instance_extra_specs:pinned": "true"); instance A<br>
> will float on CPUs 0-3.<br>
> Create instance B with flavor extra specs<br>
> ("aggregate_instance_extra_specs:pinned": "true", "hw:cpu_policy":<br>
> "dedicated"); instance B will be pinned to one of the CPUs, say 0.<br>
> Now, when the operator does the upgrade (Stein to Train), nova-compute will<br>
> report both VCPU and PCPU inventory. In this case, if<br>
> cpu_allocation_ratio is 1, the total PCPU available will be 4<br>
> (vcpu_pin_set=0-3) and VCPU will also be 4. This will allow the user<br>
> to create a maximum of 4 instances with flavor extra spec<br>
> ``resources:PCPU=1`` and 4 instances with flavor extra spec<br>
> ``resources:VCPU=1``.<br>
<br>
If the cpu_allocation_ratio is 1.0 then yes, this is correct. However,<br>
if it's any greater (and remember, the default is 16.0) then the gap is<br>
much smaller, though still broken.<br>
<br>
> With the current master code, it’s possible to create only 4 instances,<br>
> whereas now, by reporting both VCPU and PCPU, it will allow the user to<br>
> create a total of 8 instances, which adds another level of problem<br>
> on top of the existing known issue. Is this acceptable? Because<br>
> this is compounding the problems.<br>
<br>
I think it is acceptable, yes. As we've said, this is broken behavior and<br>
things are just slightly more broken here, though not horribly so. As<br>
it stands, if you don't isolate pinned instances from non-pinned<br>
instances, you don't get any of the guarantees pinning is supposed to<br>
provide. Using the above example, if you booted two pinned and two<br>
unpinned instances on the same host, the unpinned instances would float<br>
over the pinned instances' cores [*] and impact their performance. If<br>
performance is an issue, host aggregates will have been used.<br>
<br>
[*] They'll actually float over the entire range of host cores since<br>
instances without a NUMA topology don't respect the 'vcpu_pin_set'<br>
value.<br></blockquote><div><br></div><div>Yes, I agree with Stephen: we don't suggest users mix pinned and non-pinned instances on the same host with current master.</div><div><br></div><div>If users want to mix pinned and non-pinned instances, they need to update their configuration to use cpu_dedicated_set and cpu_shared_set (sketched at the end of this mail).</div><div>Having vcpu_pin_set report both VCPU and PCPU inventories is an intermediate state. In that intermediate state, the operator still needs to separate pinned and non-pinned instances onto different hosts.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
> If not acceptable, then we can report only PCPU in this case, which<br>
> will solve two problems:<br>
> 1. The existing known issue on current master (allowing both pinned and<br>
> non-pinned instances on the compute host meant for pinning).<br>
> 2. The above issue of allowing 8 instances to be created on the host.<br>
> But there is one problem with taking this decision: if no instances are<br>
> running on the compute node and only ``vcpu_pin_set`` is set,<br>
> how do you find out whether this compute node is configured for pinned<br>
> or non-pinned instances? If instances are running, it’s possible to<br>
> detect that based on the host numa_topology.pinned_cpus.<br>
<br>
As noted previously, this is too complex and too error-prone. Let's<br>
just suffer the potential additional impact on performance for those<br>
who haven't correctly configured their deployment, knowing that things<br>
will be fixed as soon as they get to U, where we can require the<br>
'cpu_dedicated_set' and 'cpu_shared_set' options for anyone who wants<br>
to use pinned instances.<br>
<br>
Stephen<br>
<br>
<br>
</blockquote></div></div>
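<div dir="ltr"><div>To make the numbers in the scenario above concrete, here is a minimal sketch of the Stein-era configuration being discussed; the node and value choices are simply the ones from Bhagyashri's example, not a recommendation:</div>
<pre># /etc/nova/nova.conf on compute node A (Stein, legacy-style pinning host)
[DEFAULT]
# Both pinned and non-pinned guests are confined to these host CPUs.
vcpu_pin_set = 0-3
# With a ratio of 1.0 this yields 4 VCPU of inventory; after the upgrade
# to Train the same option also yields 4 PCPU, which is why up to
# 8 single-CPU instances could be scheduled to this host.
cpu_allocation_ratio = 1.0</pre>
<div>Instance A in the example comes from a flavor carrying only "aggregate_instance_extra_specs:pinned=true", while instance B's flavor additionally carries "hw:cpu_policy=dedicated"; the ``resources:PCPU=1`` / ``resources:VCPU=1`` extra specs mentioned above are the placement-native way of requesting one CPU of either kind.</div></div>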
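<div dir="ltr"><div>And this is the kind of configuration to move to once pinned and non-pinned instances should share a host; again just a sketch, the CPU ranges are deployment-specific and the 2/2 split is only an illustrative choice:</div>
<pre># /etc/nova/nova.conf on compute node A (Train or later)
[compute]
# Host CPUs dedicated to pinned guests; reported as PCPU inventory.
cpu_dedicated_set = 2-3
# Host CPUs shared by non-pinned guests; reported as VCPU inventory.
cpu_shared_set = 0-1
# vcpu_pin_set should be removed from [DEFAULT] once the options
# above are in use.</pre>
<div>Until a deployment gets there, the intermediate vcpu_pin_set behaviour described above applies, and pinned and non-pinned instances still have to be kept on separate hosts (e.g. via the "pinned=true" aggregate metadata from the example).</div></div>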