[openstack-dev] [Neutron][ml2] Too much "shim rest proxy" mechanism drivers in ML2

Luke Gorrie luke at snabb.co
Tue Jun 10 09:46:59 UTC 2014


Hi Irena,

Thanks for the very interesting perspective!

On 10 June 2014 10:57, Irena Berezovsky <irenab at mellanox.com> wrote:

>  *[IrenaB] The DB access approach was previously used by OVS and
> LinuxBridge Agents and at some point (~Grizzly Release) was changed to use
> RPC communication.*
>

That is very interesting. I've been involved in OpenStack since the Havana
cycle and was not familiar with the old design.

I'm optimistic about the scalability of our implementation. We have
sanity-tested with 300 compute nodes and a 300ms sync interval. I am sure
we will find some parts that we need to spend optimization energy on,
however.

The other scalability aspect we are being careful of is the cost of
individual update operations. (In LinuxBridge that would be the iptables,
ebtables, etc. commands.) In our implementation the compute nodes preprocess
the Neutron config into a small config file for the local traffic plane and
then load that in one atomic operation ("SIGHUP" style). Again, I am sure
we will find cases that we need to spend optimization effort on, but the
design seems scalable to me thanks to the atomicity.
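
The atomic-swap idea can be sketched like this (a minimal illustration only, with hypothetical names; the real loading is done by our Snabb agents linked below). Writing to a temp file in the same directory and then renaming it over the old one relies on rename(2) being atomic on POSIX filesystems:

```python
import os
import signal
import tempfile


def atomic_replace_config(path, new_contents, agent_pid=None):
    """Write the preprocessed config to a temp file in the same
    directory, fsync it, then rename(2) it over the old file.
    The rename is atomic on POSIX, so the traffic plane never
    observes a half-written config.  Optionally SIGHUP the
    data-plane process so it reloads ("SIGHUP style")."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(new_contents)
            f.flush()
            os.fsync(f.fileno())  # data is on disk before the swap
        os.rename(tmp, path)      # the atomic swap
    except BaseException:
        os.unlink(tmp)            # never leave a stale temp file behind
        raise
    if agent_pid is not None:
        os.kill(agent_pid, signal.SIGHUP)
```

The point is that readers of the config file see either the complete old version or the complete new version, never a mixture, which is what makes the per-update cost predictable.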

For concreteness, here is the agent we are running on the DB node to make
the Neutron config available:
https://github.com/SnabbCo/snabbswitch/blob/master/src/designs/neutron/neutron-sync-master

and here is the agent that pulls it onto the compute node:
https://github.com/SnabbCo/snabbswitch/blob/master/src/designs/neutron/neutron-sync-agent

TL;DR we snapshot the config with mysqldump and distribute it with git.
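
In outline, the master side amounts to something like the following (a sketch only, with hypothetical repo path and a parameterized dump command; the real logic lives in neutron-sync-master above). A nice property of using git is that `git commit` exits non-zero when the snapshot is unchanged, so no-op sync intervals publish nothing:

```python
import subprocess

# Hypothetical git repo that compute-node agents fetch from.
REPO = "/var/lib/neutron-sync"


def publish_snapshot(repo=REPO,
                     dump_cmd=("mysqldump", "--skip-dump-date", "neutron")):
    """Dump the Neutron DB and commit the snapshot to a git repo.
    Returns True when a new revision was published, False when the
    config was unchanged since the last sync interval."""
    dump = subprocess.run(list(dump_cmd), capture_output=True,
                          check=True).stdout
    with open(f"{repo}/neutron.sql", "wb") as f:
        f.write(dump)
    subprocess.run(["git", "-C", repo, "add", "neutron.sql"], check=True)
    # Non-zero exit means "nothing to commit": the dump was identical.
    result = subprocess.run(
        ["git", "-C", repo, "commit", "-q", "-m", "neutron config sync"])
    return result.returncode == 0
```

The compute-node side then just fetches the repo on its sync interval and regenerates its local config whenever HEAD moves.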

Here's the sanity test I referred to:
https://groups.google.com/d/msg/snabb-devel/blmDuCgoknc/PP_oMgopiB4J

I will be glad to report on our experience and what we change based on our
deployment experience during the Juno cycle.

>  *[IrenaB] I think that for “Non SDN Controller” Mechanism Drivers there
> will be need for some sort of agent to handle port update events even
> though it might not be required in order to bind the port.*
>

True. Indeed, we do have an agent running on the compute host, and we keep
it synchronized with port updates using the mechanism described above.

Really what I mean is: can we keep our agent out-of-tree and apart from ML2,
and decide for ourselves how to keep it synchronized (instead of using the
MQ)? Is there a precedent for doing things this way in an ML2 mech driver
(e.g. one of the SDNs)?

Cheers!
-Luke