[openstack-dev] [nova] [neutron] PCI pass-through network support

Jiang, Yunhong yunhong.jiang at intel.com
Fri Jan 10 06:40:44 UTC 2014


Robert, sorry that I'm not a fan of the *your group* term. To me, *your group* mixes two things: it's an extra property provided by configuration, and it's also a rather inflexible mechanism for selecting devices (you can only select devices based on the 'group name' property).


1)       A dynamic group is much better. For example, a user may want to select GPU devices based on vendor_id, or on vendor_id+device_id. In other words, users want to create groups based on vendor_id, or on vendor_id+device_id, and then select devices from those groups. John's proposal is very good: provide an API to create the PCI flavor (or alias). I prefer 'flavor' because it's more OpenStack style.
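As a rough illustration (the command name and fields below are hypothetical, sketching the idea rather than any implemented API), creating such a dynamic flavor might look like:

    nova pci-flavor-create gpu-k2 --property vendor_id=10de --property device_id=11bf

Any device whose properties match vendor_id 10de and device_id 11bf would then belong to the 'gpu-k2' flavor, with no per-host group configuration needed.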



2)       As for the second aspect of your 'group', I'd understand it as an extra property provided by configuration. I don't think we should put it into the whitelist, which is there to configure the devices that are assignable. I'd add another configuration option to provide extra attributes for devices. When nova compute comes up, it will parse this configuration and add the attributes to the corresponding PCI devices. I don't think adding another configuration option will cause too much trouble for deployment; OpenStack already has a lot of configuration items :)
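For example (the option name pci_extra_attributes is invented here purely for illustration; it is not an existing nova option), the compute node configuration might look like:

    pci_extra_attributes=[{"vendor_id":"8086","product_id":"10ed","physical_network":"physnet1"}]

On start-up, nova-compute would match its assignable devices against these entries and attach the extra attributes (here, the physical network name) to the corresponding PCI device records.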



3)       I think we are currently mixing up the neutron and nova designs. To me, neutron SRIOV support is a consumer of nova PCI support. Thus we should first analyze the requirements that neutron SRIOV support places on nova PCI support in a more generic way, and then discuss how we enhance the nova PCI support, or, if you want, re-design it. IMHO, if we don't consider networking, the current implementation should be OK.



4)       IMHO, the core of nova PCI support is the *PCI property*. By property I mean not only generic PCI device properties like vendor id, device id, and device type, and compute-specific properties like the BDF address or the adjacent switch IP address, but also user-defined properties like neutron's physical net name. It's then a question of how to get these properties, how to select/group devices based on them, and how to store/fetch them.
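To make this concrete, here is a minimal sketch (field names are illustrative only, not nova's actual schema) of one device's property set:

    # Illustrative only: a PCI device as a flat property dictionary.
    device = {
        # generic PCI properties, read from the device itself
        "vendor_id": "8086",
        "product_id": "10ed",
        "device_type": "VF",
        # compute-specific properties, known only on the host
        "address": "0000:06:10.1",      # BDF address
        "switch_ip": "10.1.1.1",        # adjacent switch, if known
        # user-defined properties, supplied via configuration
        "physical_network": "physnet1",
    }

Selecting or grouping devices then reduces to matching a subset of these key/value pairs.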



Thanks
--jyh

From: Robert Li (baoli) [mailto:baoli at cisco.com]
Sent: Thursday, January 09, 2014 8:49 AM
To: OpenStack Development Mailing List (not for usage questions); Irena Berezovsky; Sandhya Dasu (sadasu); Jiang, Yunhong; Itzik Brown; john at johngarbutt.com; He, Yongli
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

Hi Folks,

With John joining the IRC discussions, we have so far had a couple of productive meetings in an effort to reach consensus and move forward. Thanks, John, for doing that, and I appreciate everyone's effort to make it to the daily meeting. Let's reconvene on Monday.

But before that, and based on today's conversation on IRC, I'd like to say a few things. First of all, I think we need to agree on the terminology that we have been using so far. With the current nova PCI passthrough:

        PCI whitelist: defines all the available PCI passthrough devices on a compute node. pci_passthrough_whitelist=[{"vendor_id":"xxxx","product_id":"xxxx"}]
        PCI Alias: criteria defined on the controller node with which requested PCI passthrough devices can be selected from all the PCI passthrough devices available in a cloud.
                Currently it has the following format: pci_alias={"vendor_id":"xxxx", "product_id":"xxxx", "name":"str"}

        nova flavor extra_specs: requests for PCI passthrough devices can be specified with extra_specs, in the format: "pci_passthrough:alias"="name:count"
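As a concrete worked example (the ids below are the Intel 82599 virtual function, 8086:10ed, chosen purely for illustration), the three pieces would fit together roughly like this:

    # on each compute node: which devices are assignable
    pci_passthrough_whitelist=[{"vendor_id":"8086","product_id":"10ed"}]

    # on the controller: a named selection criterion
    pci_alias={"vendor_id":"8086","product_id":"10ed","name":"niantic"}

    # request two such devices via a flavor
    nova flavor-key m1.large set "pci_passthrough:alias"="niantic:2"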

As you can see, currently a PCI alias has a name and is defined on the controller. The implication is that when matching it against the PCI devices, the vendor_id and product_id have to be matched against all the available PCI devices until one is found. The name is only used for reference in the extra_specs. On the other hand, the whitelist is basically the same as the alias without a name.

What we have discussed so far is based on something called PCI groups (or PCI flavors, as Yongli puts it). Without introducing other complexities, and with a small change to the above representation, we will have something like:

pci_passthrough_whitelist=[{ "vendor_id":"xxxx","product_id":"xxxx", "name":"str"}]
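For instance, with illustrative values filled in (Intel 82599 VF, with a group name chosen by the deployer):

    pci_passthrough_whitelist=[{"vendor_id":"8086","product_id":"10ed","name":"sriov-physnet1"}]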

By doing so, we eliminate the PCI alias, and we call the "name" above a PCI group name. You can think of it as combining the definitions of the existing whitelist and PCI alias. And believe it or not, a PCI group is actually a PCI alias. However, with that change of thinking, a lot of benefits can be harvested:

         * the implementation is significantly simplified
         * provisioning is simplified by eliminating the PCI alias
         * a compute node only needs to report stats with something like: PCI group name:count. A compute node processes all the PCI passthrough devices against the whitelist, and assigns a PCI group based on the whitelist definition (see the sketch after this list).
         * on the controller, we may only need to define the PCI group names. If we use a nova API to define PCI groups (which could be private or public, for example), one potential benefit, among other things (validation, etc.), is that they can be owned by the tenant that creates them. A wholesale offering of PCI passthrough devices thus also becomes possible.
         * scheduler only works with PCI group names.
         * requests for PCI passthrough devices are based on PCI groups
         * deployers can provision the cloud based on the PCI groups
         * Particularly for SRIOV, deployers can design SRIOV PCI groups based on network connectivity.
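Here is a minimal sketch of the compute-side processing mentioned in the list above (pure illustration of the proposal; function and field names are not nova's actual code):

    # Illustrative sketch: tag each assignable device with its PCI group
    # and report per-group counts to the scheduler.
    from collections import Counter

    def assign_groups(devices, whitelist):
        # The first whitelist entry whose vendor_id/product_id match a
        # device assigns that entry's name as the device's PCI group.
        for dev in devices:
            for entry in whitelist:
                if (dev["vendor_id"] == entry["vendor_id"]
                        and dev["product_id"] == entry["product_id"]):
                    dev["pci_group"] = entry["name"]
                    break

    def group_stats(devices):
        # Per-group device counts, e.g. {"sriov-physnet1": 14, "gpu": 2}
        return Counter(d["pci_group"] for d in devices if "pci_group" in d)

With that, the stats reported to the scheduler are just group-name:count pairs, which is all the scheduler needs to place a PCI request.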

Further, to support SRIOV, we are saying that PCI group names can be used not only in the extra specs but also in the --nic option and the neutron commands. This allows the full flexibility and functionality afforded by SRIOV.
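For example (this syntax is purely illustrative of the proposal; no such pci-group option exists today):

    nova boot --flavor m1.large --image <image-id> \
        --nic net-id=<net-uuid>,pci-group=sriov-physnet1 vm1

The VM would get a nic drawn from the 'sriov-physnet1' PCI group, attached to the given neutron network.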

Further, we are saying that we can define default PCI groups based on the PCI device's class.

For vnic-type (or nic-type), we are saying that it defines the link characteristics of the nic that is attached to a VM: a nic connected to a virtual switch, a nic connected to a physical switch, or a nic connected to a physical switch but with a host macvtap device in between. The actual names of the choices are not important here and can be debated.

I'm hoping that we can go over the above on Monday. But any comments are welcome by email.

Thanks,
Robert
