[openstack-dev] [Neutron] Too much "shim rest proxy" mechanism drivers in ML2

henry hly henry4hly at gmail.com
Mon Jun 9 07:35:37 UTC 2014

Hi Mathieu,

> I totally agree. By using l2population with tunnel networks (vxlan,
> gre), you will not be able to plug an external device which could
> possibly terminate your tunnel. The ML2 plugin has to be aware a new
> port in the vxlan segment. I think this is the scope of this bp :

> mixing several SDN controller (when used with ovs/of/lb agent, neutron
> could be considered has a SDN controller) could be achieved the same
> way, with the SDN controller sending notification to neutron for the
> ports that it manages.

I agree with the basic idea of this BP, especially being controller-agnostic
with no vendor-specific code to handle segment IDs. Since Neutron already has
all the information about ports and a standard way to populate it (l2 pop),
why not just reuse it?
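To make the "Neutron already has the data" point concrete, here is a minimal
sketch (all names hypothetical, not Neutron's actual l2pop RPC payload) of
the kind of forwarding-table entry that l2population fans out when a new
port joins a VXLAN segment:

```python
# Hypothetical sketch of an l2pop-style FDB entry: network/segment info
# plus a mapping from the tunnel endpoint (VTEP IP) to the MAC/IP of the
# port that just came up behind it. Real Neutron payloads differ.

def build_fdb_entry(network_id, segment_id, vtep_ip, mac, ip):
    """Build an l2pop-style forwarding entry for one new port."""
    return {
        network_id: {
            'network_type': 'vxlan',
            'segment_id': segment_id,
            # Maps the VTEP IP (agent host or hardware device) to the
            # ports reachable behind it.
            'ports': {
                vtep_ip: [(mac, ip)],
            },
        }
    }

entry = build_fdb_entry('net-1', 1001, '192.0.2.10',
                        'fa:16:3e:00:00:01', '10.0.0.5')
```

An external device terminating the tunnel only needs its VTEP IP to appear
as one more entry in this map; nothing vendor-specific is required on the
plugin side.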

>>  And with the help of coming ML2 agent framework, hardware
>>  device or middleware controller adaption agent could be more simplified.

> I don't understand the reason why you want to move middleware
> controller to the agent.

This BP suggests a driver-side hook; my idea is that the existing agent-side
router VIF plug processing should be enough. Suppose we have a hardware
router with VTEP termination: keep the L3 plugin unchanged, and for the L2
part have only a very thin device-specific mechanism driver (just like the
OVS mech driver, doing the necessary validation in tens of lines of code).
Most of the work is on the agent side: when a router interface is created,
the device-specific L3 agent interacts with the router (either directly via
netconf/CLI, or indirectly via some controller middleware), and then hooks
into the device-specific L2 agent co-located with it to do a "virtual" VIF
plug-in. Exactly like the OVS agent, this L2 agent scans the newly "plugged"
VIF, then makes an RPC call back to the ML2 plugin with port-update and the
standard l2 pop.
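The agent-side flow above could be sketched roughly as follows. All class
and method names here are invented for illustration; the real OVS agent's
RPC loop and the ML2 plugin RPC API look different in detail:

```python
# Hypothetical sketch: a device-specific L3 agent configures the hardware
# router, then hands a "virtual" VIF to a co-located L2 agent, which
# reports it back to the ML2 plugin the same way the OVS agent would.

class FakePluginRpc:
    """Stand-in for the ML2 plugin RPC endpoint."""
    def __init__(self):
        self.updates = []

    def update_device_up(self, port_id):
        # In Neutron this would trigger the port status update and the
        # l2pop fanout to other agents.
        self.updates.append(port_id)


class DeviceL2Agent:
    def __init__(self, plugin_rpc):
        self.plugin_rpc = plugin_rpc
        self.plugged_vifs = set()

    def virtual_plug(self, port_id):
        """Hook called by the L3 agent instead of a real br-int plug."""
        self.plugged_vifs.add(port_id)

    def scan_and_report(self):
        # Analogous to the OVS agent scanning br-int for new ports.
        for port_id in sorted(self.plugged_vifs):
            self.plugin_rpc.update_device_up(port_id)
        self.plugged_vifs.clear()


class DeviceL3Agent:
    def __init__(self, l2_agent):
        self.l2_agent = l2_agent

    def add_router_interface(self, port_id):
        # 1. Configure the hardware router (netconf/CLI/controller
        #    middleware) -- device-specific work elided here.
        # 2. Hook into the co-located L2 agent: "virtual" VIF plug.
        self.l2_agent.virtual_plug(port_id)
```

The point is that the ML2 plugin sees an ordinary update_device_up, so no
device-specific code leaks into the plugin side.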

While an OVS/Linux bridge agent VIF plug is identified by the port name in
br-int, these appliance-specific L3 and L2 agents may need a new "virtual"
plug hook. Any producer/consumer pattern is fine: a shared file in tmpfs, a
named pipe, etc. In any case, this work shouldn't happen on the plugin side;
leave it on the agent side, to keep the same framework as the existing
OVS/bridge agents.
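One possible producer/consumer handoff, using a spool directory (e.g. on
tmpfs) as mentioned above. The path layout and file naming are purely
illustrative, not taken from any Neutron code:

```python
# Sketch of a shared-directory handoff between the L3 agent (producer)
# and the co-located L2 agent (consumer). One marker file per plugged
# VIF; write-then-rename keeps the consumer from seeing partial files.
import os
import tempfile

def announce_plug(spool_dir, port_id):
    """L3 agent side: drop a marker file for the newly plugged VIF."""
    fd, tmp = tempfile.mkstemp(dir=spool_dir)
    with os.fdopen(fd, 'w') as f:
        f.write(port_id)
    # rename() within one filesystem is atomic, so the L2 agent only
    # ever sees complete marker files.
    os.rename(tmp, os.path.join(spool_dir, port_id))

def scan_plugs(spool_dir):
    """L2 agent side: collect and consume pending virtual plugs."""
    ports = []
    for name in sorted(os.listdir(spool_dir)):
        ports.append(name)
        os.remove(os.path.join(spool_dir, name))
    return ports
```

A named pipe or any other local IPC would work just as well; the only
contract is that the L2 agent can discover "plugged" VIFs the way the OVS
agent discovers ports in br-int.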

Today a device-specific L2 agent can be forked from the OVS agent, just as
ofagent does. In the future, the modularized ML2 agent framework can reduce
the work needed to support a new switch engine.
