[openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Robert Li (baoli) baoli at cisco.com
Wed Jan 29 22:31:01 UTC 2014


Hi Irena,

With your reply, and after taking a close look at the code, I think that I understand it now.

Regarding the CLI change:

  neutron port-create --binding:profile type=dict vnic_type=direct

Following the neutron net-create --provider:physical_network option as an example, the --binding:* arguments can be treated as unknown arguments and transmitted opaquely to the neutron plugin for processing. I have always wondered why the net-create help doesn't display the --provider:* arguments, and I sometimes have to google the syntax. After taking a look at the code, I think I roughly understand what's going on there, although I'd still like to know why it was done that way. But I think the same approach will work for --binding:* in the neutron port-create command.
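
If it helps to make that concrete, here is a rough sketch of how those opaque --binding:* arguments end up in the API request, using the python-neutronclient library; the network UUID and credentials below are made up for illustration:

    # Hypothetical sketch: the CLI's "--binding:profile type=dict vnic_type=direct"
    # is parsed into a dict and passed through to the plugin untouched.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',      # made-up credentials
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    port_body = {
        'port': {
            'network_id': 'NET-UUID',                    # hypothetical network UUID
            'binding:profile': {'vnic_type': 'direct'},  # opaque to the API layer
        }
    }
    port = neutron.create_port(port_body)
    print(port['port']['id'])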

Now regarding binding:profile for SR-IOV: based on your Google doc, it will have the following properties (a hypothetical example dict follows the list):
           pci_slot: in the format vendor_id:product_id:domain:bus:slot.fn.
           pci_flavor: will be a PCI flavor name once that API is available and it's desirable for neutron to use it. For now, it will be a physical network name.
           profileid: for 802.1Qbh/802.1BR.
           vnic-type: it's still debatable whether this property belongs here. I kind of second you on making it binding:vnic-type.
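
To make those keys concrete, here is a hypothetical example of such a binding:profile dict; all values are made up for illustration:

    # Hypothetical binding:profile contents for an SR-IOV port; values are illustrative only.
    binding_profile = {
        'pci_slot': '15b3:1004:0000:03:10.1',  # vendor_id:product_id:domain:bus:slot.fn
        'pci_flavor': 'physnet1',              # physical network name until PCI flavors exist
        'profileid': 'my-port-profile',        # only meaningful for 802.1Qbh/802.1BR
        # 'vnic_type': 'direct',               # still debated: here or as binding:vnic-type
    }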

None of them seems to be plugin- or MD-specific. Of course, an MD that supports 802.1BR would enforce profileid. But in terms of persisting them, I don't feel that it should be done in the individual plugins. On the other hand, the examples you gave do show that the plugins are responsible for storing plugin-specific binding:profile data in the DB. And in the case of --provider:* for neutron networks, it's the individual plugins that persist it, duplicating the code. So we may have no option other than to follow the existing examples.


thanks,
Robert



On 1/29/14 12:17 PM, "Irena Berezovsky" <irenab at mellanox.com> wrote:

Hi Robert,
Please see inline; I'll try to post my understanding.


From: Robert Li (baoli) [mailto:baoli at cisco.com]
Sent: Wednesday, January 29, 2014 6:03 PM
To: Irena Berezovsky; rkukura at redhat.com; Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi Irena,

I'm now even more confused. I must have missed something. See inline...

Thanks,
Robert

On 1/29/14 10:19 AM, "Irena Berezovsky" <irenab at mellanox.com> wrote:

Hi Robert,
I think that I can go with Bob's suggestion, but I think it makes sense to cover vnic_type and PCI-passthru via two separate patches. Adding vnic_type will probably impose changes on existing Mech. Drivers, while PCI-passthru is about introducing some pieces for new SR-IOV-supporting Mech. Drivers.

More comments inline

BR,
IRena

From: Robert Li (baoli) [mailto:baoli at cisco.com]
Sent: Wednesday, January 29, 2014 4:47 PM
To: Irena Berezovsky; rkukura at redhat.com; Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi folks,

I'd like to do a recap on today's meeting, and if possible we should continue the discussion in this thread so that we can be more productive in tomorrow's meeting.

Bob suggests that we have these BPs:
One generic BP covering the implementation of binding:profile in ML2, and one specific to PCI-passthru, defining vnic-type (wherever it ends up) and any keys for binding:profile.


Irena suggests that we have three BPs:
1. Generic ML2 support for binding:profile (corresponding to Bob's BP covering binding:profile in ML2?)
2. Add vnic_type support for binding Mech Drivers in the ML2 plugin
3. Support PCI slot via profile (corresponding to Bob's "any keys for binding:profile"?)

Both proposals sound similar, so it's great that we are converging. I think it's important that we put more detail in each BP about what exactly it covers. One question I have is about where binding:profile will be implemented. I see that port binding is defined/implemented under its extension and in neutron.db. So when both of you say "implementing binding:profile in ML2", I'm kind of confused. Please let me know what I'm missing here. My understanding is that non-ML2 plugins can use it as well.
[IrenaB] Basically you are right. Currently ML2 does not inherit the DB mixin for port binding; it supports the port binding extension but uses its own DB table to store the relevant attributes. Making it work for ML2 therefore means not simply adding this support to PortBindingMixin.

[ROBERT] Does that mean binding:profile for PCI can't be used by non-ML2 plugins?
[IrenaB] binding:profile can be used by any plugin that supports the binding extension. To persist binding:profile in the DB, the plugin should add a DB table for it. The PortBindingMixin does not persist binding:profile for now.

Another issue that came up during the meeting is whether vnic-type should be part of the top-level binding or part of binding:profile. In other words, should it be defined as binding:vnic-type or as binding:profile:vnic-type?
[IrenaB] As long as existing binding-capable Mech Drivers take vnic_type into consideration, I guess doing it via binding:profile will introduce fewer changes overall (CLI, API). But I am not sure this reason is strong enough to choose that direction.
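
To make the two options concrete, here is a sketch of how the port dict sent to neutron would differ; the values are hypothetical and nothing here is settled:

    # Option 1: vnic-type as a top-level binding attribute
    port_a = {'port': {'network_id': 'NET-UUID',
                       'binding:vnic_type': 'direct',
                       'binding:profile': {'pci_flavor': 'physnet1'}}}

    # Option 2: vnic-type nested inside binding:profile
    port_b = {'port': {'network_id': 'NET-UUID',
                       'binding:profile': {'vnic_type': 'direct',
                                           'pci_flavor': 'physnet1'}}}
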
We also need one or two BPs to cover the change in the neutron port-create/port-show CLI/API.
[IrenaB] binding:profile is already supported, so it probably depends on the direction taken with vnic_type.

[ROBERT] Can you let me know where in the code binding:profile is supported? In portbindings_db.py, the PortBindingPort model doesn't have a column for binding:profile, so I guess I must have missed it.
[IrenaB] For existing examples of plugins supporting binding:profile, you can look at these two:
https://github.com/openstack/neutron/blob/master/neutron/plugins/mlnx/mlnx_plugin.py (line 266)

https://github.com/openstack/neutron/blob/master/neutron/plugins/nec/nec_plugin.py (line 424)
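
For what it's worth, here is a minimal sketch of how a plugin could persist binding:profile in its own table, loosely modeled on those examples; the model and helper names are hypothetical and not the actual mlnx/nec code:

    # Hypothetical sketch of plugin-side persistence of binding:profile.
    import sqlalchemy as sa
    from neutron.db import model_base
    from neutron.openstack.common import jsonutils

    class PortProfileBinding(model_base.BASEV2):
        """Stores the binding:profile dict per port (hypothetical model)."""
        __tablename__ = 'hypothetical_port_profile_bindings'
        port_id = sa.Column(sa.String(36),
                            sa.ForeignKey('ports.id', ondelete='CASCADE'),
                            primary_key=True)
        profile = sa.Column(sa.String(4095))  # JSON-encoded binding:profile dict

    def save_port_profile(session, port_id, profile):
        # Would be called from the plugin's create_port/update_port, inside
        # the same transaction that creates or updates the port.
        with session.begin(subtransactions=True):
            session.add(PortProfileBinding(port_id=port_id,
                                           profile=jsonutils.dumps(profile or {})))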

Regarding BPs for the CLI/API, we are planning to add vnic-type and profileid in the CLI, and also the new keys in binding:profile. Are you saying that no changes are needed (say, to display them, interpret the added CLI arguments, etc.), and therefore no new BPs are needed for them?
[IrenaB] I think so. It should work by setting it at port creation: neutron port-create --binding:profile type=dict vnic_type=direct

Another thing is that we need to define the binding:profile dictionary.
[IrenaB] With regard to the PCI SR-IOV-related attributes, right?

[ROBERT] yes.


Thanks,
Robert



On 1/29/14 4:02 AM, "Irena Berezovsky" <irenab at mellanox.com> wrote:

Will attend

From: Robert Li (baoli) [mailto:baoli at cisco.com]
Sent: Wednesday, January 29, 2014 12:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi Folks,

Can we have one more meeting tomorrow? I'd like to discuss the blueprints we are going to have and what each BP will be covering.

thanks,
Robert