<div dir="ltr">Some inline comments.<br><div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, Feb 17, 2014 at 10:30 AM, Joe Topjian <span dir="ltr"><<a href="mailto:joe@topjian.net" target="_blank">joe@topjian.net</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr">Hi Édouard,<div><br></div><div>Thank you for the info. Please see inline.</div>
<div class="gmail_extra"><br><div class="gmail_quote"><div class="">On Mon, Feb 17, 2014 at 9:20 AM, Édouard Thuleau <span dir="ltr"><<a href="mailto:thuleau@gmail.com" target="_blank">thuleau@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr">Hi Joe,<div><br></div><div>Which version of the Linux kernel do you use? Do you need to set multicast on your fabric?</div>
>
> I'm using 3.11 on Ubuntu 12.04.4. I'm also using a newer version of
> iproute2 from this PPA:
> https://launchpad.net/~dirk-computer42/+archive/c42-backport
>
> AFAIK, I don't require multicast. Is there a good reason to use
> multicast at the cloud/infrastructure level?

The first Linux VXLAN implementation used multicast to emulate a virtual
broadcast domain; that was the recommendation in the early VXLAN drafts.
But VXLAN no longer requires multicast, and neither does the VXLAN module
in the 3.11 Linux kernel.

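For example, with a recent iproute2 you can create a VXLAN device in
unicast mode and populate its forwarding table by hand. A rough sketch
(the interface names and 192.0.2.x addresses are placeholders):

    # Old multicast mode: BUM traffic is sent to a multicast group.
    ip link add vxlan42 type vxlan id 42 group 239.1.1.1 dev eth0

    # Or unicast mode: no group needed. Remote VTEPs are added to the
    # FDB explicitly (this is essentially what l2-pop automates).
    ip link add vxlan42 type vxlan id 42 dev eth0
    bridge fdb append 00:00:00:00:00:00 dev vxlan42 dst 192.0.2.10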

>> In Havana we worked to improve overlay propagation on the fabric, and
>> we wrote a mechanism driver for the ML2 plugin called 'l2-pop' (one bug
>> still persists in Havana [1]). Did you use it?
>
> I'm using the linuxbridge mechanism driver at the moment. I'm unable to
> find any documentation on the l2pop driver. Could you explain what it is
> and why I should use it?

Here is the blueprint design document
https://docs.google.com/document/d/1sUrvOQ9GIl9IWMGg3qbx2mX0DdXvMiyvCw2Lm6snaWQ/edit,
a good blog post
http://assafmuller.wordpress.com/2013/10/14/gre-tunnels-in-openstack-neutron/
and the associated FOSDEM presentation
http://bofh.nikhef.nl/events/FOSDEM//2014/UD2120_Chavanne/Sunday/Tunnels_as_a_Connectivity_and_Segregation_Solution_for_Virtualized_Networks.webm
(thanks to Assaf Muller). The posts are OVS-oriented, but they are useful
for understanding the objectives of the l2-pop mechanism driver.
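
In short, instead of flooding to learn where a MAC lives, the driver
pushes the mappings Neutron already knows down to every agent, which
installs them locally. On a Linux bridge compute node that amounts to
entries like these (the MACs, IPs, and the vxlan-100 device name are
illustrative):

    # Unicast FDB entry: this MAC lives behind the VTEP 192.0.2.20.
    bridge fdb add fa:16:3e:aa:bb:cc dev vxlan-100 dst 192.0.2.20

    # With the ARP responder, the neighbour table is pre-populated too,
    # so ARP requests are answered locally instead of being flooded.
    ip neigh replace 10.0.0.5 lladdr fa:16:3e:aa:bb:cc dev vxlan-100 nud permanent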

To be precise, in the ML2 plugin you can enable more than one MD and mix
them. The l2-pop MD requires at least the LB or OVS MD to work, and it
works only with the GRE or VXLAN type drivers.
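
A minimal ml2_conf.ini combining them might look like this (a sketch;
the vni_ranges and local_ip values are placeholders to adapt):

    [ml2]
    type_drivers = vxlan
    tenant_network_types = vxlan
    mechanism_drivers = linuxbridge,l2population

    [ml2_type_vxlan]
    vni_ranges = 1:1000

    # And in the linuxbridge agent configuration on each node:
    [vxlan]
    enable_vxlan = True
    l2_population = True
    local_ip = 192.0.2.10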
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">
<div class="">
<div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div></div>
<div>In my opinion, the LB agent with VXLAN is very simpler (no flows, kernel integrated, 1 bridge = 1 network, netfilter aware...) and as effective than OVS agent. And I think, it's more stable than OVS.<br></div></div>
>
> Speaking of flows, a topic that has come up in discussion is the use of
> OVS, OpenStack, and OpenFlow. We have some network guys who are getting
> into OpenFlow, OpenDaylight, et al. With the current Neutron OVS
> implementation, is it incorrect to say that there is no way to take
> advantage of a higher level of network control at the moment? Meaning:
> it seems to me that the OVS implementation is simply being used as a
> complex drop-in replacement for the Linux bridge system.

No, that's not incorrect: the Neutron OVS implementation does not let you
use a higher level of network control. For that, you need an ML2 MD that
drives your OVS/OpenFlow controller (as is done by the NEC plugin, the
ODL MD...).
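
Delegating to such a controller is then mostly a configuration change. A
sketch of what it might look like with an OpenDaylight MD (the exact
driver name and [ml2_odl] options depend on the driver version; the URL
and credentials are placeholders):

    [ml2]
    type_drivers = vxlan
    tenant_network_types = vxlan
    mechanism_drivers = opendaylight

    [ml2_odl]
    url = http://odl-controller:8080/controller/nb/v2/neutron
    username = admin
    password = admin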

>> The only inconvenience is that, for the moment, it needs a recent
>> Linux kernel (and iproute2 binaries, obviously). I recommend release
>> 3.11 (the version currently distributed with Ubuntu LTS 12.04.4) to
>> get the full power of the VXLAN module (edge replication for
>> multicast, broadcast, and unknown unicast). Some distributions
>> backport that module to older kernels (Red Hat does, it seems to me).
>>
>> Another improvement: a local ARP responder (which avoids the costly
>> emulation of ARP broadcasts over the overlay) is available with the
>> VXLAN module and a recent iproute2 version, and the l2-pop MD uses it,
>> while the OVS agent doesn't support it yet [2]. Just a remark: when
>> it's used, unknown unicast packets (those whose destination doesn't
>> match an entry in the FDB populated by the l2-pop MD) are dropped by
>> default (this is not configurable; a kernel and iproute2 improvement
>> is needed).
>
> Thank you for noting this.
>
>> In my opinion, the default agent that OpenStack CI uses for testing
>> should be the LB agent. I think it's more stable and easier to debug.
>
> Since I'm not a developer, I can't comment on the CI aspect, but I'd be
> inclined to agree. I share your opinion, though more in terms of a
> basic, generic reference installation of Neutron for new users. Now I'm
> trying to learn more about why one would choose OVS over LB in order to
> validate that opinion. :)
<div class="">
<div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div></div><div><br></div><div>
[1] <a href="https://review.openstack.org/#/c/71821/" target="_blank">https://review.openstack.org/#/c/71821/</a></div>
<div>[2] <a href="https://review.openstack.org/#/c/49227/" target="_blank">https://review.openstack.org/#/c/49227/</a></div><div><br></div><div>Regards,</div><div>Édouard.</div><div><br></div></div><div class="gmail_extra">
<br><br><div class="gmail_quote"><div><div>
On Sat, Feb 15, 2014 at 12:39 PM, Joe Topjian <span dir="ltr"><<a href="mailto:joe@topjian.net" target="_blank">joe@topjian.net</a>></span> wrote:<br></div></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
>>> Hello,
>>>
>>> I'm curious if anyone uses the linuxbridge driver in production?
>>>
>>> I've just finished setting up a lab environment using ML2, linuxbridge,
>>> and vxlan, and everything works just as it did with OVS.
>>>
>>> I see the benefits of a *much* simpler network layout, a
>>> non-deprecated vif driver, and none of the OVS issues that have been
>>> discussed on this list.
>>>
>>> But maybe I'm missing something... what are the reasons for using OVS
>>> over linuxbridge? All of the official installation guides use it, and
>>> I've never seen anyone mention linuxbridge on this list.
<span><font color="#888888">
<div><br></div><div>Joe</div></font></span></div>
>>>
>>> _______________________________________________
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators