[openstack-dev] [nova][neutron][ml2] Proposal to support VIF security, PCI-passthru/SR-IOV, and other binding-specific data

Robert Kukura rkukura at redhat.com
Fri Jan 31 20:47:50 UTC 2014

On 01/29/2014 10:26 AM, Robert Kukura wrote:
> The neutron patch [1] and nova patch [2], proposed to resolve the
> "get_firewall_required should use VIF parameter from neutron" bug [3],
> replace the binding:capabilities attribute in the neutron portbindings
> extension with a new binding:vif_security attribute that is a dictionary
> with several keys defined to control VIF security. When using the ML2
> plugin, this binding:vif_security attribute flows from the bound
> MechanismDriver to nova's GenericVIFDriver.
> Separately, work on PCI-passthru/SR-IOV for ML2 also requires
> binding-specific information to flow from the bound MechanismDriver to
> nova's GenericVIFDriver. See [4] for links to various documents and BPs
> on this.
> A while back, in reviewing [1], I suggested a general mechanism to allow
> ML2 MechanismDrivers to supply arbitrary port attributes in order to
> meet both the above requirements. That approach was incorporated into
> [1] and has been cleaned up and generalized a bit in [5].
> I'm now becoming convinced that proliferating new port attributes for
> various data passed from the neutron plugin (the bound MechanismDriver
> in the case of ML2) to nova's GenericVIFDriver is not such a great idea.
> One issue is that adding attributes keeps changing the API, but this
> isn't really a user-facing API. Another is that all ports should have
> the same set of attributes, so the plugin still has to be able to supply
> those attributes when a bound MechanismDriver does not supply them. See [5].
> Instead, I'm proposing here that the binding:vif_security attribute
> proposed in [1] and [2] be renamed binding:vif_details, and used to
> transport whatever data needs to flow from the neutron plugin (i.e.
> ML2's bound MechanismDriver) to the nova GenericVIFDriver. This same
> dictionary attribute would be able to carry the VIF security key/value
> pairs defined in [1], those needed for [4], as well as any needed for
> future GenericVIFDriver features. The set of key/value pairs in
> binding:vif_details that apply would depend on the value of
> binding:vif_type.
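To make the shape of this concrete, here is a small sketch (key names such as port_filter are illustrative of the VIF-security key/value pairs from [1], not a final API) of how the applicable keys could depend on binding:vif_type, and how nova's GenericVIFDriver side might consume them for the get_firewall_required decision from bug [3]:

```python
# Hypothetical example port dict; the set of keys present in
# binding:vif_details would depend on binding:vif_type.
port = {
    "binding:vif_type": "ovs",
    "binding:vif_details": {
        # VIF-security-style pairs along the lines of [1]
        "port_filter": True,
        "ovs_hybrid_plug": True,
    },
}

def get_firewall_required(port):
    # If the bound driver reports that it filters the port itself,
    # nova does not need to wire up its own firewall driver.
    details = port.get("binding:vif_details") or {}
    return not details.get("port_filter", False)
```

The point of the single dictionary attribute is that new pairs like these can be added without any further change to the port API itself.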

I've filed a blueprint for this:


Also, for a similar flow of binding-related information into the
plugin/MechanismDriver, I've filed a blueprint to implement the existing
binding:profile attribute in ML2:


Both of these are admin-only dictionary attributes on the port resource.
One is read-only, carrying output data; the other is read-write,
carrying input data. Together they enable optional features like SR-IOV
PCI passthrough to be implemented in ML2 MechanismDrivers without
requiring feature-specific changes to the plugin itself.
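As a rough sketch of that input/output split (function and key names here, such as pci_slot and the hw_veb vif_type, are illustrative only, not the actual ML2 driver API): binding:profile flows in from the admin/nova side, and binding:vif_details flows out to the GenericVIFDriver.

```python
def bind_port(port):
    """Return (vif_type, vif_details) for a port this driver can bind,
    or None to let another mechanism driver try. 'pci_slot' is a
    hypothetical binding:profile key supplied by the caller."""
    profile = port.get("binding:profile") or {}
    pci_slot = profile.get("pci_slot")
    if pci_slot is None:
        return None  # no SR-IOV input data; decline the binding
    # Output dictionary consumed by nova's GenericVIFDriver; which
    # keys apply depends on the chosen vif_type.
    return "hw_veb", {"pci_slot": pci_slot, "port_filter": False}
```

The feature-specific knowledge lives entirely in the driver; the plugin only stores and transports the two dictionaries.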


> If this proposal is agreed to, I can quickly write a neutron BP covering
> this and provide a generic implementation for ML2. Then [1] and [2]
> could be updated to use binding:vif_details for the VIF security data
> and eliminate the existing binding:capabilities attribute.
> If we take this proposed approach of using binding:vif_details, the
> internal ML2 handling of binding:vif_type and binding:vif_details
> could either take the approach used for binding:vif_type and
> binding:capabilities in the current code, where the values are stored
> in the port binding DB table, or the approach in [5], where they are
> obtained from the bound MechanismDriver when needed. Comments on
> these options are welcome.
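To illustrate the two options being weighed (class and method names are hypothetical, purely for contrast, and not proposed code):

```python
# Option A, as in the current binding:capabilities handling: the
# values are computed at bind time and persisted in the port binding
# DB row, then read back on each port GET.
class StoredBinding:
    def __init__(self, vif_type, vif_details):
        self.vif_type = vif_type
        self.vif_details = vif_details  # serialized into the DB table

# Option B, as in [5]: only the bound driver is recorded, and the
# details are asked of that MechanismDriver whenever they are needed.
class OnDemandBinding:
    def __init__(self, bound_driver):
        self.bound_driver = bound_driver

    @property
    def vif_details(self):
        return self.bound_driver.get_vif_details()
```

Option A costs a schema column but survives driver restarts and config changes; option B keeps the DB minimal but couples every port GET to the driver being loaded and consistent.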
> Please provide feedback on this proposal and the various options in this
> email thread and/or at today's ML2 sub-team meeting.
> Thanks,
> -Bob
> [1] https://review.openstack.org/#/c/21946/
> [2] https://review.openstack.org/#/c/44596/
> [3] https://bugs.launchpad.net/nova/+bug/1112912
> [4] https://wiki.openstack.org/wiki/Meetings/Passthrough
> [5] https://review.openstack.org/#/c/69783/
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
