<div dir="ltr">Hi Irena,<div><br></div><div>Thanks for the very interesting perspective!<br><div class="gmail_extra"><br>On 10 June 2014 10:57, Irena Berezovsky <span dir="ltr"><<a href="mailto:irenab@mellanox.com" target="_blank">irenab@mellanox.com</a>></span> wrote:<br>
<div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div lang="EN-US" link="blue" vlink="purple">
<div>
<p class="MsoNormal"><b><i><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)">[IrenaB] The DB access approach was previously used by OVS and LinuxBridge Agents and at some point (~Grizzly Release) was changed to use RPC communication.</span></i></b></p>
</div></div></blockquote><div><br></div><div>That is very interesting. I've been involved in OpenStack since the Havana cycle and was not familiar with the old design.</div><div><br></div><div>I'm optimistic about the scalability of our implementation. We have sanity-tested with 300 compute nodes and a 300ms sync interval. I am sure we will find some parts that need optimization work, however.</div>
<div><br></div><div>The other scalability aspect we are being careful about is the cost of individual update operations. (In LinuxBridge that would be the iptables, ebtables, etc. commands.) In our implementation the compute nodes preprocess the Neutron config into a small config file for the local traffic plane and then load that in one atomic operation ("SIGHUP" style). Again, I am sure we will find cases that need optimization effort, but the design seems scalable to me thanks to the atomicity.</div>
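<div><br></div><div>To make the "one atomic operation" part concrete, here is a minimal shell sketch of the idea (the file names and the config line are made up for illustration, not what our agents actually use): write the preprocessed config to a temp file on the same filesystem, rename it into place, then signal the traffic plane to reload.</div>

```shell
#!/bin/sh
# Minimal sketch of a "SIGHUP-style" atomic config swap.
# CONF and the config contents are illustrative only.
set -e
CONF=$(mktemp -d)/traffic-plane.conf

# Preprocess the Neutron config into a small local file (stubbed here).
TMP=$(mktemp "$CONF.XXXXXX")
printf 'port 1234\n' > "$TMP"

# rename(2) on the same filesystem is atomic: readers see either the
# old file or the new one, never a partially written file.
mv "$TMP" "$CONF"

# Then tell the traffic plane to reload, e.g.:
# kill -HUP "$(cat /var/run/traffic-plane.pid)"
cat "$CONF"
```

<div>The key point is that the traffic plane never parses a half-written file; the swap either happened or it didn't.</div>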
<div><br></div><div>For concreteness, here is the agent we are running on the DB node to make the Neutron config available:</div><div><a href="https://github.com/SnabbCo/snabbswitch/blob/master/src/designs/neutron/neutron-sync-master">https://github.com/SnabbCo/snabbswitch/blob/master/src/designs/neutron/neutron-sync-master</a><br>
</div><div><br></div><div>and here is the agent that pulls it onto the compute node:</div><div><a href="https://github.com/SnabbCo/snabbswitch/blob/master/src/designs/neutron/neutron-sync-agent">https://github.com/SnabbCo/snabbswitch/blob/master/src/designs/neutron/neutron-sync-agent</a><br>
</div><div><br></div><div>TL;DR we snapshot the config with mysqldump and distribute it with git.</div><div><br></div><div>Here's the sanity test I referred to: <a href="https://groups.google.com/d/msg/snabb-devel/blmDuCgoknc/PP_oMgopiB4J">https://groups.google.com/d/msg/snabb-devel/blmDuCgoknc/PP_oMgopiB4J</a></div>
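<div><br></div><div>In case the TL;DR is too terse, here is a sketch of the snapshot-and-distribute scheme (paths and file names are illustrative, not the ones the linked agents actually use, and a stub stands in for the real mysqldump call):</div>

```shell
#!/bin/sh
# Sketch of the sync scheme: master snapshots the config into a git
# repo; compute nodes pull it. Names here are illustrative only.
set -e
WORK=$(mktemp -d)

# Master side: snapshot the Neutron config into a git repo.
# The real dump would be something like: mysqldump neutron > neutron.sql
mkdir "$WORK/master"
cd "$WORK/master"
git init -q .
echo "-- neutron schema snapshot --" > neutron.sql
git add neutron.sql
git -c user.email=sync@example.com -c user.name=sync commit -qm snapshot

# Compute-node side: fetch the snapshot (a clone here; the running
# agent would fetch repeatedly on its sync interval).
git clone -q "$WORK/master" "$WORK/node"
cat "$WORK/node/neutron.sql"
```

<div>git gives us cheap delta transfer and an obvious way to tell whether a node is up to date (compare commit IDs).</div>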
<div><br></div><div>I will be glad to report on what we learn and what we change based on our deployment experience during the Juno cycle.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div lang="EN-US" link="blue" vlink="purple"><div><div><div><div><div><p class="MsoNormal"><b><i><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)">[IrenaB] I think that for “Non SDN Controller” Mechanism Drivers there will be need for some sort of agent to handle port update events even though it
might not be required in order to bind the port.</span></i></b></p></div></div></div></div></div></div></blockquote><div><br></div><div>True. Indeed, we do have an agent running on the compute host, and we keep it synchronized with port updates using the mechanism described above.</div>
<div><br></div><div>Really what I mean is: can we keep our agent out-of-tree and separate from ML2, and decide for ourselves how to keep it synchronized (instead of using the MQ)? Is there a precedent for doing things this way in an ML2 mech driver (e.g. in one of the SDN drivers)?</div>
<div><br></div><div>Cheers!</div><div>-Luke</div><div><br></div><div><br></div></div></div></div></div>