[openstack-dev] [nova][scheduler][placement] Allocating Complex Resources
Mooney, Sean K
sean.k.mooney at intel.com
Wed Jun 7 18:44:41 UTC 2017
> -----Original Message-----
> From: Jay Pipes [mailto:jaypipes at gmail.com]
> Sent: Wednesday, June 7, 2017 6:47 PM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [nova][scheduler][placement] Allocating
> Complex Resources
>
> On 06/07/2017 01:00 PM, Edward Leafe wrote:
> > On Jun 6, 2017, at 9:56 AM, Chris Dent <cdent+os at anticdent.org> wrote:
> >>
> >> For clarity and completeness in the discussion some questions for
> >> which we have explicit answers would be useful. Some of these may
> >> appear ignorant or obtuse and are mostly things we've been over
> >> before. The goal is to draw out some clear statements in the present
> >> day to be sure we are all talking about the same thing (or get us
> >> there if not) modified for what we know now, compared to what we
> >> knew a week or month ago.
> >
> > One other question that came up: do we have any examples of any
> > service (such as Neutron or Cinder) that would require the modeling
> > for nested providers? Or is this confined to Nova?
>
> The Cyborg project (accelerators like FPGAs and some vGPUs) needs nested
> resource providers to model the relationship between a virtual resource
> context against an accelerator and the compute node itself.
[Mooney, Sean K] Neutron will need to use nested resource providers to track
network-backend-specific consumable resources in the future as well. One example
is hardware-offloaded virtual (e.g. virtio/vhost-user) interfaces, which, due to
their hardware-based implementation, are both a finite consumable resource and
NUMA-affine, and therefore need to be tracked as nested resource providers.
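To make that concrete, here is a rough sketch of how a backend agent might
report such interfaces. This is not working code against any released API:
the parent_provider_uuid field is only what the nested-resource-providers
spec proposes, and CUSTOM_HW_OFFLOAD_VIF is a hypothetical custom resource
class; the endpoint and token are placeholders.

    import uuid
    import requests  # plain REST calls against the placement API

    PLACEMENT = "http://controller/placement"   # assumed endpoint
    HEADERS = {"X-Auth-Token": "ADMIN_TOKEN",   # assumed auth token
               "OpenStack-API-Version": "placement latest"}

    COMPUTE_NODE_UUID = "..."  # the existing compute-node provider
    nic_uuid = str(uuid.uuid4())

    # Register the offload-capable NIC as a child of the compute node
    # (or of a NUMA-cell provider) so placement knows its affinity.
    requests.post(PLACEMENT + "/resource_providers", headers=HEADERS,
                  json={"name": "compute0:numa0:eth0",
                        "uuid": nic_uuid,
                        "parent_provider_uuid": COMPUTE_NODE_UUID})

    # Report the finite pool of hardware-offloaded interfaces as
    # inventory of the (hypothetical) custom resource class.
    requests.put(
        PLACEMENT + "/resource_providers/%s/inventories/CUSTOM_HW_OFFLOAD_VIF"
        % nic_uuid,
        headers=HEADERS,
        json={"resource_provider_generation": 0, "total": 8})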
Another example for Neutron would be bandwidth-based scheduling / SLA enforcement,
where we want to guarantee that a specific amount of bandwidth is available on the
selected host for a VM to consume. From an OVS/VPP/Linux-bridge perspective this
would likely be tracked at the physnet level, so when selecting a host we would
want to ensure that the physnet is both reachable from that host and has enough
bandwidth available to reserve for the instance. Today Nova and Neutron track
neither of the above, but at least the latter has been started in the SR-IOV
context, without placement, and should be extended to other, non-SR-IOV backends.
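With that in place, the scheduling side of the bandwidth case could become a
single query to placement. Again only a sketch: it assumes the
allocation_candidates endpoint and a hypothetical CUSTOM_NET_BW_MBPS custom
resource class reported against the physnet/NIC provider.

    import requests

    PLACEMENT = "http://controller/placement"   # assumed endpoint
    HEADERS = {"X-Auth-Token": "ADMIN_TOKEN",
               "OpenStack-API-Version": "placement latest"}

    # Ask for hosts that can satisfy the compute resources *and* a
    # 1000 Mbps reservation in one request, so we never pick a host
    # that cannot honour the SLA.
    resp = requests.get(
        PLACEMENT + "/allocation_candidates", headers=HEADERS,
        params={"resources":
                "VCPU:2,MEMORY_MB:4096,CUSTOM_NET_BW_MBPS:1000"})
    for candidate in resp.json()["allocation_requests"]:
        print(candidate["allocations"])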
Snabb switch actually supports this already, with vendor extensions via the
Neutron binding:profile:
https://github.com/snabbco/snabb/blob/b7d6d77ba5fd6a6b9306f92466c1779bba2caa31/src/program/snabbnfv/doc/neutron-api-extensions.md#bandwidth-reservation
but Nova is not aware of the capacity or availability info when placing the
instance, so if the host cannot fulfill the request they degrade to the
least-oversubscribed port:
https://github.com/snabbco/snabb-neutron/blob/master/snabb_neutron/mechanism_snabb.py#L194-L200
With nested resource providers they could harden this request from best effort
to a guaranteed bandwidth reservation, by informing the placement API of the
bandwidth availability of the physical interfaces, and of the interfaces' NUMA
affinity, via a nested resource provider.
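Once a candidate is picked, the reservation itself would just be part of the
normal allocation the scheduler writes. One more sketch: the dict-style
allocations body assumes a newer placement microversion, and every UUID below
is a placeholder.

    import requests

    PLACEMENT = "http://controller/placement"   # assumed endpoint
    HEADERS = {"X-Auth-Token": "ADMIN_TOKEN",
               "OpenStack-API-Version": "placement latest"}

    INSTANCE_UUID = "..."      # the consumer (the VM being placed)
    COMPUTE_NODE_UUID = "..."  # root compute-node provider
    NIC_PROVIDER_UUID = "..."  # nested physnet/NIC provider
    PROJECT_UUID = "..."
    USER_UUID = "..."

    # Claim CPU/RAM on the compute node and bandwidth on the NIC
    # provider atomically; if the bandwidth is gone the whole claim
    # fails instead of silently degrading to best effort.
    requests.put(
        PLACEMENT + "/allocations/%s" % INSTANCE_UUID,
        headers=HEADERS,
        json={"allocations": {
                  COMPUTE_NODE_UUID: {
                      "resources": {"VCPU": 2, "MEMORY_MB": 4096}},
                  NIC_PROVIDER_UUID: {
                      "resources": {"CUSTOM_NET_BW_MBPS": 1000}}},
              "project_id": PROJECT_UUID,
              "user_id": USER_UUID})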
>
> Best,
> -jay
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev