[openstack-dev] [neutron] [nova] scheduling bandwidth resources / NIC_BW_KB resource class

Miguel Angel Ajo Pelayo majopela at redhat.com
Wed Apr 20 13:25:22 UTC 2016


Inline update.

On Mon, Apr 11, 2016 at 4:22 PM, Miguel Angel Ajo Pelayo
<majopela at redhat.com> wrote:
> On Mon, Apr 11, 2016 at 1:46 PM, Jay Pipes <jaypipes at gmail.com> wrote:
>> On 04/08/2016 09:17 AM, Miguel Angel Ajo Pelayo wrote:
[...]
>> Yes, Nova's conductor gathers information about the requested networks
>> *before* asking the scheduler where to place the instance:
>>
>> https://github.com/openstack/nova/blob/stable/mitaka/nova/conductor/manager.py#L362
>>
>>>      That would require identifying that the port has a "qos_policy_id"
>>> attached to it, then asking neutron for the specific QoS policy [3],
>>> then looking for a minimum bandwidth rule (still to be defined), and
>>> extracting the required bandwidth from it.
>>
>>
>> Yep, exactly correct.
>>
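
(Just to make that workflow concrete: a rough sketch of the lookup with
python-neutronclient could look like the snippet below. This is only an
illustration; the 'minimum_bandwidth' rule type and its 'min_kbps' field
are assumptions, since that rule is still to be defined.)

    from neutronclient.v2_0 import client as neutron_client

    # neutron = neutron_client.Client(session=sess)  # authenticated elsewhere

    def required_bw_kb(neutron, port_id):
        """Return the minimum bandwidth a port requests via QoS, or 0."""
        # Does the port carry a QoS policy at all?
        port = neutron.show_port(port_id)['port']
        policy_id = port.get('qos_policy_id')
        if not policy_id:
            return 0
        # Fetch the policy and look for a minimum bandwidth rule;
        # 'minimum_bandwidth' / 'min_kbps' are assumed names here.
        policy = neutron.show_qos_policy(policy_id)['policy']
        for rule in policy.get('rules', []):
            if rule.get('type') == 'minimum_bandwidth':
                return rule.get('min_kbps', 0)
        return 0
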
>>>     That again moves some of the responsibility to examine and
>>> understand external resources to nova.
>>
>>
>> Yep, it does. The alternative is more retries for placement decisions
>> because accurate decisions cannot be made until the compute node is already
>> selected and the claim happens on the compute node.
>>
>>>      Could it make sense to make that part pluggable via stevedore, so
>>> we would provide something that takes the "resource id" (for a port in
>>> this case) and returns the requirements translated to resource classes
>>> (NIC_BW_KB in this case)?
>>
>>
>> Not sure Stevedore makes sense in this context. Really, we want *less*
>> extensibility and *more* consistency. So, I would rather envision a system
>> where Nova would call Neutron before scheduling when it has received a
>> port or network ID in the boot request and ask Neutron whether the port or
>> network has any resource constraints on it. Neutron would return a
>> standardized response containing each resource class and the amount
>> requested in a dictionary (or better yet, an os_vif.objects.* object,
>> serialized). Something like:
>>
>> {
>>   'resources': {
>>     '<UUID of port or network>': {
>>       'NIC_BW_KB': 2048,
>>       'IPV4_ADDRESS': 1
>>     }
>>   }
>> }
>>
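
(For illustration only: if neutron returned per-port dictionaries in
that shape, the nova side could fold them into a single set of requested
amounts with something like the sketch below. Nothing here is an
existing API; it just consumes the format proposed above.)

    # Example payload in the proposed format.
    EXAMPLE = {
        'resources': {
            '<UUID of port or network>': {
                'NIC_BW_KB': 2048,
                'IPV4_ADDRESS': 1,
            },
        },
    }

    def aggregate_requests(responses):
        """Sum the requested amounts per resource class across all ports."""
        totals = {}
        for response in responses:
            for amounts in response['resources'].values():
                for rc, amount in amounts.items():
                    totals[rc] = totals.get(rc, 0) + amount
        return totals

    # aggregate_requests([EXAMPLE]) == {'NIC_BW_KB': 2048, 'IPV4_ADDRESS': 1}
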
>
> Oh, true, that's a great idea: having some API that translates a
> neutron resource to scheduling constraints. The external call will
> still be required, but the coupling issue is removed.
>
>


I had a talk yesterday with @iharchys, @dansmith, and @sbauzas about
this, and we believe that having neutron synthesize the resource usage /
scheduling constraints makes sense.

We should probably look into providing those details as a read-only
dictionary during port creation/update/show in general; that way, we
would not be adding an extra API call from the nova scheduler to
neutron to figure out any of those details. That extra optimization is
something we may need to discuss with the neutron community.
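
For example (purely hypothetical; the 'resource_request' key name and
its contents are only an assumption of how such a read-only field could
look in a port show response):

    GET /v2.0/ports/<UUID of port>

    {
      "port": {
        "id": "<UUID of port>",
        "qos_policy_id": "<UUID of QoS policy>",
        "resource_request": {
          "NIC_BW_KB": 2048,
          "IPV4_ADDRESS": 1
        }
      }
    }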



>> In the case of the NIC_BW_KB resource class, Nova's scheduler would look for
>> compute nodes that had a NIC with that amount of bandwidth still available.
>> In the case of the IPV4_ADDRESS resource class, Nova's scheduler would use
>> the generic-resource-pools interface to find a resource pool of IPV4_ADDRESS
>> resources (i.e. a Neutron routed network or subnet allocation pool) that has
>> available IP space for the request.
>>
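
(Again only a sketch, assuming the scheduler can see total and consumed
NIC_BW_KB per compute node; the data structures below are made up for
illustration.)

    def has_enough_nic_bw(inventory, usage, requested_kb):
        """True if the node still has 'requested_kb' of NIC bandwidth free."""
        total = inventory.get('NIC_BW_KB', 0)
        used = usage.get('NIC_BW_KB', 0)
        return total - used >= requested_kb

    # has_enough_nic_bw({'NIC_BW_KB': 10000}, {'NIC_BW_KB': 9000}, 2048)
    # -> False, so that node would be filtered out for this request
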
>
> Not sure about the IPV4_ADDRESS part, because I haven't looked yet at
> how they resolve routed networks with this new framework, but for the
> other constraints it makes perfect sense to me.
>
>> Best,
>> -jay
>>
>>
>>> Best regards,
>>> Miguel Ángel Ajo
>>>
>>>
>>> [1]
>>>
>>> http://lists.openstack.org/pipermail/openstack-dev/2016-February/086371.html
>>> [2] https://bugs.launchpad.net/neutron/+bug/1560963
>>> [3]
>>> http://developer.openstack.org/api-ref-networking-v2-ext.html#showPolicy


