[placement][nova][ptg] resource provider affinity

Alex Xu soulxu at gmail.com
Mon Apr 15 23:34:02 UTC 2019


We missed the mailing list address; sorry, I started that thread off-list... adding it back.

Alex Xu <soulxu at gmail.com> wrote on Tue, Apr 16, 2019 at 7:16 AM:

>
>
> Sean Mooney <smooney at redhat.com> wrote on Tue, Apr 16, 2019 at 2:27 AM:
>
>> On Mon, 2019-04-15 at 23:16 +0800, Alex Xu wrote:
>> >
>> > >
>> > > ?resources=DISK_GB&
>> > > resources1=VCPU:2,MEMORY_MB:128&
>> > > resources1.1=VF:1&
>> > > resources2=VCPU:2,MEMORY_MB:128&
>> > > resources2.1=VF:1&
>> > > group_policy=isolate
>> > >
>> > > Is this the case you're talking about? Sorry, I probably didn't get
>> > > what you mean about changing grouping and group policies. Is there
>> > > any conflicting case in your vision?
>> > >
>> >
>> > Sorry, I misread your case. It should be
>> >
>> > ?resources=DISK_GB:10,VF:1&
>> > resources1=VCPU:2,MEMORY_MB:128&
>> > resources2=VCPU:2,MEMORY_MB:128&
>> > group_policy=isolate
>> >
>> > The VF may come from any RP in the whole tree.
>> No, that won't work, because it would require the DISK_GB and the VF to
>> come from the same resource provider. So you would have to do:
>>
>
> No, that isn't what the un-numbered group means. DISK_GB and VF are in the
> un-numbered request group; they may come from any RPs in the whole tree
> and needn't come from the same resource provider.
>
>
> http://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/granular-resource-requests.html#semantics
> "The semantic for the (single) un-numbered grouping is unchanged. That is,
> it may still return results from different RPs in the same tree (or, when
> “shared” is fully implemented, the same aggregate)."
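
For concreteness, a granular request string like the ones in this thread can be assembled with nothing but the standard library. A minimal Python sketch, assuming the placement `/allocation_candidates` endpoint; the parameter names mirror the query strings above:

```python
# Sketch: building a granular allocation-candidates query string.
# The un-numbered "resources" group may be satisfied by any RPs in the
# tree; each numbered group must come from a single resource provider.
from urllib.parse import urlencode

params = {
    "resources": "DISK_GB:10,VF:1",        # un-numbered group
    "resources1": "VCPU:2,MEMORY_MB:128",  # numbered group 1
    "resources2": "VCPU:2,MEMORY_MB:128",  # numbered group 2
    "group_policy": "isolate",             # numbered groups on distinct RPs
}
query = urlencode(params)  # ":" and "," are percent-encoded
print("/allocation_candidates?" + query)
```

(urlencode percent-encodes the `:` and `,` separators, which placement accepts; the literal forms in the emails are just written unencoded for readability.)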
>
>
>> ?resources=DISK_GB:10&
>> resources1=VCPU:2,MEMORY_MB:128&
>> resources2=VCPU:2,MEMORY_MB:128&
>> resources3=VF:1&
>> group_policy=isolate
>>
>> The issue arises if I want 2 VFs.
>>
>> Do you do
>> ?resources=DISK_GB:10&
>> resources1=VCPU:2,MEMORY_MB:128&
>> resources2=VCPU:2,MEMORY_MB:128&
>> resources3=VF:1&
>> resources4=VF:1&
>> group_policy=isolate
>>
>> or
>> ?resources=DISK_GB:10&
>> resources1=VCPU:2,MEMORY_MB:128&
>> resources2=VCPU:2,MEMORY_MB:128&
>> resources3=VF:2&
>> group_policy=isolate
>>
>> or
>> ?resources=DISK_GB:10&
>> resources1=VCPU:2,MEMORY_MB:128&
>> resources2=VCPU:2,MEMORY_MB:128&
>> resources3=VF:1&
>> resources4=VF:1&
>> group_policy=none
>>
>> I would say the last one is the most correct from the Neutron point of
>> view; however, we lose the guarantee that the CPU and RAM come from
>> different NUMA nodes. The first option forces the VFs to come from
>> different RPs, and the second requires them to come from the same RP.
>>
>> What you really want is
>> ?resources=DISK_GB:10&
>> resources1=VCPU:2,MEMORY_MB:128&
>> resources2=VCPU:2,MEMORY_MB:128&
>> resources3=VF:1&
>> resources4=VF:1&
>> group_policy=isolate;none:3,4
>>
>> i.e. the VFs can come from any RP in the tree, but resource groups 1 and
>> 2 need to be isolated. Or, said another way: by default each resource
>> group is isolated, but resource groups 3 and 4 have policy none.
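
That proposed value could be parsed along these lines. A hedged sketch only: the `isolate;none:3,4` syntax is a proposal in this thread, not an existing placement feature, and `parse_group_policy` is a hypothetical helper name:

```python
# Sketch of parsing the *proposed* (not implemented) group_policy value
# "isolate;none:3,4": a default policy, plus per-group policy overrides.
def parse_group_policy(value):
    parts = value.split(";")
    default = parts[0]                     # e.g. "isolate"
    overrides = {}                         # group suffix -> policy
    for part in parts[1:]:
        policy, _, groups = part.partition(":")
        for g in groups.split(","):
            overrides[g] = policy
    return default, overrides

default, overrides = parse_group_policy("isolate;none:3,4")
# default == "isolate"; overrides == {"3": "none", "4": "none"}
```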
>>

