[openstack-dev] [networking] [ml2] General Questions

Rich Curran (rcurran) rcurran at cisco.com
Wed Jun 12 01:48:29 UTC 2013


Responses inline below, marked [rcurran].

-----Original Message-----
From: Robert Kukura [mailto:rkukura at redhat.com] 
Sent: Tuesday, June 11, 2013 12:24 PM
To: Rich Curran (rcurran)
Cc: OpenStack Development Mailing List
Subject: Re: [networking] [ml2] General Questions

On 06/11/2013 11:45 AM, Rich Curran (rcurran) wrote:
> Hi Bob and others planning on supporting ML2

Hi Rich,

> 
> While reviewing the ML2 design and looking at how the Cisco sub-plugins could use (or be rewritten as) the mechanism driver, I've come up with a few questions.
> 
> 1.) Although not strictly necessary, I'm guessing that the Ml2Plugin would be used as the core_plugin. Is this true?
>       If it is used as the core_plugin, then the actions taken on received events (create_network, create_port, etc.) are written to support the OVS or LinuxBridge
>       vSwitches. I'm guessing this works for most deployments but will be an issue when using the Cisco N1K plugin. For Cisco, N1K replaces the OVS
>      plugin. We may have to create/rewrite/inject a Cisco core_plugin when using the Cisco N1K under the ML2 design, which honestly is something
>      we wouldn't want to do.
>      Will any other existing plugins have this issue?

You are correct that ml2 is intended to be "the" core plugin. It potentially could be used underneath the metaplugin, but is intended to provide a much more flexible and powerful way to simultaneously support multiple virtual L2 mechanisms than the metaplugin.

When using the ml2 plugin with the openvswitch, linuxbridge, and/or hyperv L2 agents (note that these can all work with ml2 simultaneously on different nodes), the ml2 plugin completely replaces the legacy monolithic plugin. The various L2 agents now all use the same RPC interface, which the ml2 plugin supports directly.
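
Concretely, a deployment would just point core_plugin at the ml2 plugin. A minimal sketch of what that might look like (the exact module path and ini option names below are assumptions on my part and may change before this settles):

  # quantum.conf -- sketch only; the module path is an assumption
  [DEFAULT]
  core_plugin = quantum.plugins.ml2.plugin.Ml2Plugin

  # ml2_conf.ini -- sketch only; option names are assumptions
  [ml2]
  type_drivers = flat,vlan
  tenant_network_types = vlan

  [ml2_type_vlan]
  network_vlan_ranges = physnet1:100:199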

I admit I'm not very familiar (yet) with the current Cisco plugin architecture. In particular, I'm not clear on what aspects of the openvswitch plugin and/or agent it reuses, and what it needs to do differently.

[rcurran - A Cisco customer would be able to configure (via cisco_plugins.ini) a virtual switch (sub)plugin to either:
vswitch_plugin=quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
or
vswitch_plugin=quantum.plugins.cisco.n1kv.n1kv_quantum_plugin.N1kvQuantumPluginV2
not both.
(A nexus_switch variable, set to the Nexus subplugin, can also be set for programming external Cisco Nexus switch hardware, but that's not relevant to this answer.)
When the N1K vswitch_plugin is used, no calls are made to the OVS plugin. I'm not familiar with the details of the N1K code, but I believe that whatever programming of OVS is required would be done by the VSM (Virtual Supervisor Module) portion of N1K,
i.e. the config path is: OpenStack networking plugin -- (REST API) --> N1K VSM --> OVS agent.
So hardcoding the OVS/LinuxBridge plugin code into ML2 is going to cause some problems for the Cisco implementation.
Thanks, Rich]


With ml2, new network types are plugged in as TypeDrivers, and the capability to communicate with controllers/hardware is (or soon will be) plugged in as MechanismDrivers. If a TypeDriver adds a new network type, any deployed RPC-based L2 agents will see this type via the (network_type, physical_network, segmentation_id) tuples passed over the RPC interface. New MechanismDrivers can do whatever they like with whatever network types they are designed to handle.
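
To make that concrete, here is a purely hypothetical sketch of a MechanismDriver for an external controller. Since the MechanismDriver interface is still being defined in the ml2-mechanism-driver BP, every class and method name below is an assumption, not merged code:

  class ExampleControllerMechanismDriver(object):
      """Hypothetical driver that pushes network state to an external
      controller over REST. All names are illustrative only."""

      def __init__(self, rest_client):
          # rest_client is an assumed helper that talks to the controller.
          self.client = rest_client

      def create_network(self, network, segments):
          # segments is assumed to be a list of
          # (network_type, physical_network, segmentation_id) tuples,
          # mirroring what the RPC-based L2 agents already receive.
          for network_type, physical_network, segmentation_id in segments:
              if network_type == 'vlan':
                  self.client.create_vlan(network['id'],
                                          physical_network,
                                          segmentation_id)
              else:
                  # A driver can ignore or reject types it doesn't handle.
                  raise ValueError("unsupported network_type: %s"
                                   % network_type)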

The ml2-portbinding BP covers how MechanismDrivers for the L2 agents will verify that the port binding can actually be established with an L2 agent on the compute node. The same portbinding process will apply both to MechanismDrivers for the L2 agents and to MechanismDrivers that communicate with devices or controllers. The goal is for nova to fail bringing up the VM if the required connectivity cannot be provided on the selected compute node.

> 
> 2.) Taking the case where an OVS vSwitch is being used with an external device (mechanism_driver), we'll need to roll back the action taken on an OVS
>       event if there is an error in the action taken for the external device.
>       i.e. the create_port() event occurs, Ml2Plugin create_port() is called and succeeds, then the mechanism_driver(s) create_port(s) are called. If an error
>      occurs here then the Ml2Plugin delete_port() should be called.

This sounds like what I tried to describe briefly in the ml2-mechanism-driver BP. The MechanismDrivers would be called first from within the DB transaction, where raising an exception would cause the transaction to roll back immediately. After that transaction commits, the MechanismDrivers would be called again, and if they indicate failure, the ml2 plugin would undo the previous transaction as you described. The idea is for the DB transaction to commit quickly, without waiting too long for communication with external devices/controllers.
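
Roughly, the flow described above would look like the sketch below (a method on the ml2 plugin class; all names are assumptions and none of this is merged code):

  def create_port(self, context, port):
      session = context.session
      with session.begin(subtransactions=True):
          # Phase 1: DB work plus driver calls *inside* the transaction.
          # A driver exception here rolls the whole transaction back.
          result = super(Ml2Plugin, self).create_port(context, port)
          for driver in self.mechanism_drivers:
              driver.create_port_precommit(context, result)

      # Phase 2: driver calls *after* the commit, e.g. REST calls to an
      # external controller, so the transaction isn't held open on slow I/O.
      try:
          for driver in self.mechanism_drivers:
              driver.create_port_postcommit(context, result)
      except Exception:
          # Undo the committed DB state if a device/controller call fails.
          self.delete_port(context, result['id'])
          raise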

The tricky part here will be handling bulk operations correctly and efficiently. I think they are currently handled as one big transaction by the plugin base class, so ml2 will need to override the bulk operation handling.
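
One possible shape for that override (again just a sketch with assumed names): each item in the bulk request gets its own create, and previously created items are cleaned up if a later one fails:

  def create_port_bulk(self, context, ports):
      # Sketch only: per-item creates instead of one big transaction,
      # undoing earlier successes if a later item fails.
      created = []
      try:
          for item in ports['ports']:
              # Assuming each item is already wrapped as {'port': {...}}.
              created.append(self.create_port(context, item))
          return created
      except Exception:
          for result in created:
              self.delete_port(context, result['id'])
          raise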

>      I realize this code hasn't been written yet, but I wanted to see if others agreed with this design.

It seems you and I are on the same page here. Let's see what others think.

A related point that's being discussed is that we need a way to re-sync the plugin DB state with the state in any external devices/controllers. We might want to rely on this in some cases rather than undoing transactions.

> 
> Thanks,
> Rich
>      
> 

-Bob



