[openstack-dev] Some idea on device assignment support

Jiang, Yunhong yunhong.jiang at intel.com
Fri Nov 2 05:22:20 UTC 2012


> -----Original Message-----
> From: Ian Wells [mailto:ijw.ubuntu at cack.org.uk]
> Sent: Thursday, November 01, 2012 10:30 PM
> To: OpenStack Development Mailing List
> Cc: vladimir at zadarastorage.com
> Subject: Re: [openstack-dev] Some idea on device assignment support
> 
> Note that I'm fantasising a little here; these things would be awesome, but
> actually assigning a device to a VM would be a fine first step.
> 
> On 1 November 2012 11:26, Jiang, Yunhong <yunhong.jiang at intel.com> wrote:
> >                 ResourceType:                   NIC_VF_1G
> >                 Resource Information:           PCI path: Bus:Device:vFunction
> >                 Resource Description:           SR-IOV NIC with 1G bandwidth
> >                 Resource Count:                 3
> >                 Resource handler class path:    nova.virt.libvirt.pci
> 
> NICs are not created equal. It's one thing to map in a NIC but you also have to
> know what it's attached to.  Some NICs in a system may be equivalent (e.g.
> attached to a single segment) but not all NICs in a system will necessarily be
> the same.  Also, it's that much cooler if you can tell Quantum 'add this NIC to
> a segment' and get it to program the main device of a virtualised NIC card to
> VLAN tag appropriately, and/or play with the config on the attached switch.
> (The point here would be to define a Quantum API that would allow this to work,
> which would probably come up in relation to the current VIF plugging blueprint
> that's under discussion.)
> 

Thanks very much for your input!

Yes, NICs are something special. In fact, every device type has its own special attributes; for example, it seems vGPU in XenAPI needs an extra step to bind the vGPU to a pGPU.

As for "not all NICs in a system will necessarily be the same", I think that can be expressed through the resource information, which is then interpreted by the driver. This is implemented in https://review.openstack.org/#/c/776/3/nova/virt/pci.py as class-specific parameters.
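
To make that concrete, a rough sketch is below; the field names are hypothetical, not taken from the patch. Two VFs on the same host could carry different class-specific information that only the NIC driver interprets:

    # Hypothetical resource entries as a compute node might report them.
    # Generic code only reads type/address/handler; the "class_specific"
    # part is opaque to it and is passed through to the device driver.
    resources = [
        {'type': 'NIC_VF_1G',
         'address': '0000:03:10.1',           # PCI bus:device.function
         'handler': 'nova.virt.libvirt.pci',
         'class_specific': {'segment': 'physnet1'}},
        {'type': 'NIC_VF_1G',
         'address': '0000:05:10.1',
         'handler': 'nova.virt.libvirt.pci',
         'class_specific': {'segment': 'physnet2'}},
    ]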

As for the communication with Quantum, I think it can be achieved through hooks at defined points in the assignment flow, like pre-assignment, post-assignment, etc., implemented by the device-class-specific driver.
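
A minimal sketch of what I mean by hooks; the interface names here are made up:

    # Hypothetical per-device-class driver interface; a NIC driver would
    # talk to Quantum in these hooks, while other device classes could
    # simply leave them empty.
    class DeviceClassDriver(object):
        def pre_assign(self, instance, resource):
            # e.g. ask Quantum to put this VF on the right segment and
            # program VLAN tagging on the physical function or the switch
            pass

        def post_assign(self, instance, resource):
            # e.g. report the completed port binding back to Quantum
            pass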

> >         3) How does a user specify that a given instance needs a hardware
> > resource? As stated by Mate, it should be through extra specs like
> > "hardware_resource:NIC_VF_1G=1".
> 
> You could add specifications on the image ('I only support directmap devices of
> type <X>'; 'I am an image that requires a directmap device and will not run
> without'); on the flavour ('you pay this much and I will give you a directmap
> device'); or in the boot call ('Give this image a directmap device', 'use a
> directmap device for this network').

Yes, the image is another place to get such information. But I'm not sure what the boot call is; will it be in the guest?

In fact, I'm considering merging the image properties filter and the compute capabilities filter, although I'm not sure whether that would be welcome. Per my understanding, both express requirements on host capabilities. We possibly need to collect all the host requirements, from the image, from the flavor, etc., and check them in one filter.
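
As a rough sketch only (I'm simplifying how the image properties and extra specs appear in filter_properties, and omitting operator matching like ">= 1"):

    from nova.scheduler import filters

    class HostRequirementsFilter(filters.BaseHostFilter):
        """Sketch of a merged filter: gather the host requirements from
        the flavor extra specs and the image properties, then check them
        against the host capabilities in one place."""

        def host_passes(self, host_state, filter_properties):
            required = {}
            instance_type = filter_properties.get('instance_type') or {}
            required.update(instance_type.get('extra_specs') or {})
            spec = filter_properties.get('request_spec') or {}
            image = spec.get('image') or {}
            required.update(image.get('properties') or {})
            # Simplified exact-match check; a real filter would parse
            # operators like '>= 1' the way the capability filter does.
            caps = getattr(host_state, 'capabilities', None) or {}
            return all(str(caps.get(key)) == str(value)
                       for key, value in required.items())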

> 
> You also need to know which devices in a class are directmapped once the VM
> starts - that is, the VM needs to know which they are, so NIC ordering must be
> predictable, for instance, if both virtual and physical interfaces are supplied.

I think several mechanisms are possible: the guest can check the device information inside the VM (like the PCI vendor ID or class ID), or we can expose it through instance metadata. If we use instance metadata, we can save it in an instance property or on the compute node.
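
The in-guest check is cheap on a Linux guest; for example:

    # Run inside a Linux guest: walk sysfs and print the vendor and
    # class ID of every PCI device, so an image can spot its directly
    # assigned NIC (network controllers have class 0x02....).
    import os

    pci_root = '/sys/bus/pci/devices'
    for dev in sorted(os.listdir(pci_root)):
        vendor = open(os.path.join(pci_root, dev, 'vendor')).read().strip()
        dev_class = open(os.path.join(pci_root, dev, 'class')).read().strip()
        print dev, vendor, dev_class   # e.g. 0000:00:04.0 0x8086 0x020000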

> 
> > We can provide a new filter to make sure the cloud can meet such requirements,
> > or we can extend the current compute capability filter. Also, such information
> > will be kept in the instance entry, so that the corresponding handler will be
> > invoked when the instance is created.
> 
> There are other things that are related to this scheduling problem - there are
> unlimited resources that you might want to schedule for, too, such as the
> current patch that's being worked up for libvirt resource restrictions, where it
> must (at present) be scheduled on libvirt if you expect the resource restriction
> to actually occur.  But for the specific limited-resource case there's a question
> of enumeration - how many devices are in the system - and allocation - how
> many devices are free, and which devices become free when a VM terminates.

Can you share more information on the "libvirt resource restrictions"?
I don't think unlimited resources (like the Xen PV vnif) are in this scope, since they have different attributes compared with directly assigned devices.
As for the limited-resource case, as I stated in item 1, the administrator could provide such information in a configuration file. Considering that machine types in a cloud environment will not vary too much (cloud providers usually buy machines in batches), I hope the provisioning effort would be acceptable.
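
For example, the configuration file on each compute node could look something like the following; the flag name and syntax here are invented for illustration:

    [DEFAULT]
    # Hypothetical per-node whitelist: a device class name, followed by
    # the PCI functions on this host that the cloud may hand out.
    assignable_devices = NIC_VF_1G:0000:03:10.0,0000:03:10.1,0000:03:10.2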

The driver will determine when the device becomes free; mostly it should be freed when the VM terminates, though some device types need some clean-up work done first.
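
In sketch form (both the function and the pool object here are hypothetical):

    # Hypothetical release path on the compute node: the device returns
    # to the free pool only after the class driver's clean-up is done.
    def release_device(driver, pool, instance, resource):
        driver.release(instance, resource)   # e.g. reset the VF, clear VLAN tag
        pool.mark_free(resource)             # visible to the scheduler again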

--jyh
> 
> --
> Ian.
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


