[openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

Irena Berezovsky irenab at mellanox.com
Thu Jul 10 07:07:57 UTC 2014


Hi,
To pass information from Neutron to the Nova VIF driver, you should use the binding:vif_details dictionary. You may not need a new VIF_TYPE; instead, you can leverage the existing VIF_TYPE_OVS and add a 'use_dpdk' key to the vif_details dictionary. This will require some rework of how the existing libvirt VIF driver handles VIF_TYPE_OVS.
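A minimal sketch of what this might look like on the Neutron side, assuming a hypothetical mechanism-driver helper; the 'use_dpdk' and 'port_filter' keys are illustrative, not taken from merged ML2 code:

```python
# Sketch: keep VIF_TYPE_OVS and signal the dpdk datapath through
# binding:vif_details, which Nova's libvirt VIF driver would inspect
# to pick the plug method. Key names here are assumptions.

VIF_TYPE_OVS = 'ovs'

def build_binding(host_uses_dpdk):
    """Return (vif_type, vif_details) for a bound port."""
    vif_details = {
        'port_filter': False,        # illustrative extra detail
        'use_dpdk': host_uses_dpdk,  # consumed by the nova VIF driver
    }
    return VIF_TYPE_OVS, vif_details

vif_type, details = build_binding(host_uses_dpdk=True)
```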

binding:profile is treated as an input dictionary used to pass information required for port binding on the server side. You may use binding:profile to pass in a dpdk ovs request, so that the ML2 plugin takes it into consideration during port binding.
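For example, a port owner might request the dpdk datapath like this; the 'dpdk_vhost' key is purely illustrative, as no such key exists in the current API:

```python
# Sketch: binding:profile as the input side of the contract. A port
# owner sets a (hypothetical) 'dpdk_vhost' flag, and the server-side
# binding logic checks for it.

def wants_dpdk(port):
    """Server-side check a mechanism driver might perform."""
    profile = port.get('binding:profile') or {}
    return bool(profile.get('dpdk_vhost'))

port_request = {
    'network_id': 'net-1',
    'binding:profile': {'dpdk_vhost': True},
}
```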

I am not sure about a new vnic_type, since it would require the port owner to pass in the requested type. Is that your intention? Should the port owner be aware of dpdk ovs usage?
There is also a VM scheduling consideration: if a certain vnic_type is requested, the VM should be scheduled on a node that can satisfy the request.

Regards,
Irena


From: loy wolfe [mailto:loywolfe at gmail.com]
Sent: Thursday, July 10, 2014 6:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Mooney, Sean K
Subject: Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

I think both a new vnic_type and a new vif_type should be added. vnic currently has three types: normal, direct, and macvtap; we would need a new type, "uservhost".

As for vif_type, we currently have VIF_TYPE_OVS, VIF_TYPE_802_QBH/QBG, and VIF_TYPE_HW_VEB, so we would need a new VIF_TYPE_USEROVS.
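Laid out side by side, the proposal amounts to two new constants; the names for the additions ('uservhost', 'userovs') are the ones suggested in this thread, not merged code:

```python
# Existing vnic types plus the proposed addition.
VNIC_NORMAL = 'normal'
VNIC_DIRECT = 'direct'
VNIC_MACVTAP = 'macvtap'
VNIC_USERVHOST = 'uservhost'   # proposed in this thread

# Existing vif types plus the proposed addition.
VIF_TYPE_OVS = 'ovs'
VIF_TYPE_802_QBG = '802.1qbg'
VIF_TYPE_802_QBH = '802.1qbh'
VIF_TYPE_HW_VEB = 'hw_veb'
VIF_TYPE_USEROVS = 'userovs'   # proposed in this thread
```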

I don't think it's a good idea to directly reuse the ovs agent, because we have to consider use cases where ovs and userovs co-exist. Forking and writing a new agent is a little painful now, but it will become easier once the modular L2 agent BP is merged (https://etherpad.openstack.org/p/modular-l2-agent-outline).

On Wed, Jul 9, 2014 at 11:08 PM, Czesnowicz, Przemyslaw <przemyslaw.czesnowicz at intel.com<mailto:przemyslaw.czesnowicz at intel.com>> wrote:
Hi

We (the Intel OpenStack team) would like to add support for dpdk-based userspace Open vSwitch using the mech_openvswitch and mech_odl drivers of the ML2 plugin.
The dpdk-enabled ovs comes in two flavours: one is the netdev datapath incorporated into vanilla ovs; the other is a fork of ovs with a dpdk datapath (https://github.com/01org/dpdk-ovs).
Both flavours use the userspace vhost mechanism to connect VMs to the switch.

Our initial approach was to extend ovs vif bindings in nova and add a config parameter to specify when userspace vhost should be used.
Spec : https://review.openstack.org/95805
Code: https://review.openstack.org/100256

Nova devs rejected this approach, saying that Neutron should pass Nova all the information necessary to select vif bindings.

We are currently looking for a way for Neutron to tell Nova that dpdk-enabled ovs is being used, while still being able to use mech_openvswitch with ovs_neutron_agent, or mech_odl.

We thought of two possible solutions:

1.      Use binding:profile to provide node-specific info to Nova.

The agent RPC API would be extended to allow agents to send a node profile to the Neutron plugin.

That info would be stored in the DB and passed to Nova when a binding on that specific host is requested.

This could support our use case, or pass other info to Nova (e.g. the name of the integration bridge).
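A minimal sketch of option 1, with a plain dict standing in for the Neutron DB; function names and profile keys are hypothetical:

```python
# Sketch: agents report a per-node profile over (extended) RPC, port
# binding looks it up to decide what to tell Nova. The dict below
# stands in for the real DB table this option would add.

_node_profiles = {}  # host -> profile dict

def report_node_profile(host, profile):
    """Called from the agent's extended state-report RPC."""
    _node_profiles[host] = profile

def bind_port(host):
    """Return (vif_type, vif_details) using the stored node profile."""
    profile = _node_profiles.get(host, {})
    details = {'use_dpdk': profile.get('datapath') == 'dpdk'}
    return 'ovs', details

report_node_profile('compute-1', {'datapath': 'dpdk', 'bridge': 'br-int'})
```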



2.      Let mech_openvswitch and mech_odl detect which binding type to use.

When asked for a port binding, mech_openvswitch and mech_odl would query the agent or ODL to check which binding to use (VIF_TYPE_OVS or VIF_TYPE_DPDKVHOST).
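A minimal sketch of option 2, with a callable standing in for the RPC round-trip to the agent or ODL; VIF_TYPE_DPDKVHOST is the name proposed in this thread, not an existing constant:

```python
# Sketch: at bind time the mechanism driver asks the backend which
# datapath the host runs and picks the vif type accordingly.

VIF_TYPE_OVS = 'ovs'
VIF_TYPE_DPDKVHOST = 'dpdkvhost'   # proposed name

def choose_vif_type(host, query_agent):
    """query_agent(host) stands in for an RPC call to the L2 agent/ODL."""
    datapath = query_agent(host)
    return VIF_TYPE_DPDKVHOST if datapath == 'dpdk' else VIF_TYPE_OVS
```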


So, what would be the best way to support our use case? Is it one of the above?

Best regards
Przemek


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org<mailto:OpenStack-dev at lists.openstack.org>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
