[openstack-dev] [nova] [neutron] PCI pass-through network support

Robert Li (baoli) baoli at cisco.com
Thu Jan 30 12:09:42 UTC 2014


I hope that you guys are in agreement on this. But take a look at the wiki: https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support and see whether it differs from your proposals.  IMO it's the critical piece of the proposal, and it hasn't been specified in exact terms yet. I'm not sure about vif_attributes or vif_stats, which I just heard from you. In any case, I'm not convinced by the flexibility and/or complexity, and so far I haven't seen a use case that really demands it. But I'd be happy to see one.


On 1/29/14 4:43 PM, "Ian Wells" <ijw.ubuntu at cack.org.uk<mailto:ijw.ubuntu at cack.org.uk>> wrote:

My proposals:

On 29 January 2014 16:43, Robert Li (baoli) <baoli at cisco.com<mailto:baoli at cisco.com>> wrote:
1. pci-flavor-attrs is configured through configuration files and will be
available on both the controller node and the compute nodes. Can the cloud
admin decide to add a new attribute in a running cloud? If that's
possible, how is that done?

When nova-compute starts up, it requests the VIF attributes that the schedulers need.  (You could have multiple schedulers; they could be in disagreement; it picks the last answer.)  It returns pci_stats grouped by the selected combination of VIF attributes.

When nova-scheduler starts up, it sends an unsolicited cast of the attributes.  nova-compute updates the attributes, clears its pci_stats and recreates them.

If nova-scheduler receives pci_stats with incorrect attributes it discards them.

(There is a row from nova-compute summarising devices for each unique combination of VIF attribute values, including 'None' where no attribute is set.)

I'm assuming here that the pci_flavor_attrs are read on startup of nova-scheduler and could be re-read and different when nova-scheduler is reset.  There's a relatively straightforward move from here to an API for setting it if this turns out to be useful, but firstly I think it would be an uncommon occurrence and secondly it's not something we should implement now.
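To make the exchange above concrete, here's a minimal in-process sketch of the handshake, not actual nova RPC code; the class and method names are invented for illustration. The scheduler hands out the attribute list, the compute side rebuilds its pci_stats keyed by that attribute combination (None where an attribute is unset), and the scheduler discards stats built against a stale attribute set:

```python
class Scheduler:
    """Stand-in for nova-scheduler's side of the exchange (hypothetical API)."""

    def __init__(self, attrs):
        self.attrs = tuple(attrs)

    def get_vif_attrs(self):
        # Answered when nova-compute starts up and asks for the attributes.
        return self.attrs

    def accept_stats(self, attrs, stats):
        # Discard pci_stats built against a different attribute set.
        return stats if tuple(attrs) == self.attrs else None


class Compute:
    """Stand-in for nova-compute: holds devices, rebuilds pci_stats on demand."""

    def __init__(self, devices):
        self.devices = devices
        self.attrs = ()
        self.pci_stats = {}

    def set_attrs(self, attrs):
        # Called on startup, and again on an unsolicited cast from the
        # scheduler: clear pci_stats and recreate them under the new key.
        self.attrs = tuple(attrs)
        self.pci_stats = {}
        for dev in self.devices:
            key = tuple(dev.get(a) for a in self.attrs)  # None if unset
            self.pci_stats[key] = self.pci_stats.get(key, 0) + 1


sched = Scheduler(["vendor_id", "net_group"])
comp = Compute([
    {"vendor_id": "8086", "net_group": "physnet1"},
    {"vendor_id": "8086", "net_group": "physnet1"},
    {"vendor_id": "15b3"},  # no net_group -> grouped under None
])
comp.set_attrs(sched.get_vif_attrs())
print(comp.pci_stats)  # two pools: ('8086', 'physnet1'): 2 and ('15b3', None): 1
```

The point of the sketch is only the key structure: one pci_stats row per unique attribute-value tuple, and a stats payload that is self-describing enough for the scheduler to reject it when the attribute set has changed underneath it.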

2. PCI flavor will be defined using the attributes in pci-flavor-attrs. A
flavor is defined with a matching expression in the form of attr1 = val11
[| val12 …], [attr2 = val21 [| val22 …]], …. This expression is used
to match one or more PCI stats groups until a free PCI device is located.
In this case, both attr1 and attr2 can have multiple values, and both
attributes need to be satisfied. Please confirm this understanding is correct.

This looks right to me as we've discussed it, but I think we'll be wanting something that allows a top level AND.  In the above example, I can't say an Intel NIC and a Mellanox NIC are equally OK, because I can't say (intel + product ID 1) AND (Mellanox + product ID 2).  I'll leave Yunhong to decide how the details should look, though.
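One way to picture the semantics in question is a hypothetical sketch (the data structures are mine, not from the spec): within one expression, values for an attribute are ORed and attributes are ANDed, as in point 2; the missing piece is combining whole expressions, so that "(Intel + product 1) or (Mellanox + product 2)" is expressible as a single flavor:

```python
def matches(expr, pool_key):
    # expr maps attribute -> set of accepted values.
    # Attributes are ANDed; values within one attribute are ORed.
    return all(pool_key.get(attr) in vals for attr, vals in expr.items())


def flavor_matches(flavor_exprs, pool_key):
    # A flavor as a combination of expressions, so each vendor can be
    # paired with its own product IDs rather than cross-multiplied.
    return any(matches(e, pool_key) for e in flavor_exprs)


flavor = [
    {"vendor_id": {"8086"}, "product_id": {"10fb"}},  # Intel + its product
    {"vendor_id": {"15b3"}, "product_id": {"1004"}},  # Mellanox + its product
]
print(flavor_matches(flavor, {"vendor_id": "15b3", "product_id": "1004"}))  # True
print(flavor_matches(flavor, {"vendor_id": "15b3", "product_id": "10fb"}))  # False
```

With only the flat form from point 2, vendor_id = 8086 | 15b3 plus product_id = 10fb | 1004 would also admit the mixed (15b3, 10fb) combination, which is exactly the case the second print rejects.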

3. I'd like to see an example that involves multiple attributes. Let's say
pci-flavor-attrs = {gpu, net-group, device_id, product_id}. I'd like to
know how PCI stats groups are formed on compute nodes based on that, and
how many PCI stats groups there are. What are reasonable guidelines
for defining the PCI flavors?

I need to write up the document for this, and it's overdue.  Leave it with me.
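Pending that write-up, a rough illustration of the grouping question (device data entirely made up, attribute names taken from the question above): the number of stats groups on a node is simply the number of distinct attribute-value tuples its devices produce, which is why adding a fine-grained attribute to pci-flavor-attrs can multiply the pool count:

```python
from collections import Counter

# Hypothetical devices on one compute node, with
# pci-flavor-attrs = {gpu, net_group, device_id, product_id}.
devices = [
    {"net_group": "physnet1", "device_id": "dev-a", "product_id": "10fb"},
    {"net_group": "physnet1", "device_id": "dev-a", "product_id": "10fb"},
    {"gpu": "true", "device_id": "dev-b", "product_id": "1023"},
]
attrs = ("gpu", "net_group", "device_id", "product_id")

# One stats group per distinct attribute tuple; None where an attr is unset.
stats = Counter(tuple(d.get(a) for a in attrs) for d in devices)
print(len(stats))  # 2 groups: the two identical NICs collapse into one pool
```

Three devices, but only two pools, because the two identical NICs share a tuple; a fourth attribute that varied per device would instead give one pool per device, which suggests keeping pci-flavor-attrs to attributes that flavors actually need to discriminate on.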
