Just trying to add my two cents here.

I have been using the OVS plugin extensively (please refer to my talk at the OpenStack Summit 2013 in Portland: "Using OpenStack in a Traditional Hosting Environment").
In our environment, we made some patches to fully utilize OVS flows with the OVS plugin, providing anti-IP/ARP spoofing, isolated networks, etc. (https://github.com/JunPark/quantum/tree/bluehost/master). While working on them, we came to understand why two separate bridges can be useful, at least from the OVS flow-management perspective. From the performance perspective, I could not see any significant practical penalty, at least with respect to bandwidth. I would say that the potential latency gap between one bridge and two isn't a big deal in most cases.
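To give a flavor of the kind of rules involved, here is a minimal sketch of a per-port anti-ARP-spoofing flow, written as a small Python helper that shells out to ovs-ofctl. The bridge name, OVS port number, and VM address are made-up placeholders, not the exact rules from my patches:

    import subprocess

    def add_flow(bridge, flow):
        # Install an OpenFlow rule on the given bridge via ovs-ofctl.
        subprocess.check_call(["ovs-ofctl", "add-flow", bridge, flow])

    def install_arp_antispoof(bridge, vif_port, vm_ip):
        # Allow ARP from this VM port only when the sender protocol
        # address matches the IP actually assigned to the VM...
        add_flow(bridge, "priority=10,in_port=%d,arp,arp_spa=%s,actions=NORMAL"
                 % (vif_port, vm_ip))
        # ...and drop any other ARP traffic originating from that port.
        add_flow(bridge, "priority=9,in_port=%d,arp,actions=drop" % vif_port)

    # Hypothetical values: the VM's VIF is OVS port 5 on br-int, IP 10.0.0.5.
    install_arp_antispoof("br-int", 5, "10.0.0.5")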
Here is my understanding:

1. Having two separate bridges (br-int and phy-eth0) connected via a pair of veth interfaces (int-br-eth0 and phy-br-eth0) makes it easier to manipulate OVS flows. A typical use case is as follows. For incoming packets that are to be delivered to VMs, you set up OVS flows (e.g., adding a VLAN tag) on the integration bridge 'br-int', where int-br-eth0 (veth) receives the incoming packets. For outgoing packets from VMs to the outside, you set up OVS flows (e.g., stripping off a VLAN tag) on the physical bridge 'phy-eth0', where phy-br-eth0 (veth) receives the outgoing packets. This way, you keep two separate sets of logic on the two bridges. Of course, it doesn't have to be done exactly this way, but it is convenient to manage incoming and outgoing packets on separate bridges. A rough sketch of such flows follows below.
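To make that concrete, here is a minimal sketch (Python, shelling out to ovs-ofctl) of what such a pair of flows can look like for a flat provider network. The port numbers and VLAN ID are hypothetical, and the rules the plugin or our patches actually install are more involved:

    import subprocess

    # Hypothetical port numbers: int-br-eth0 is OVS port 1 on br-int,
    # phy-br-eth0 is OVS port 2 on the physical bridge; local VLAN 1.

    # Incoming direction, handled on br-int: traffic arriving over the veth
    # from the physical bridge gets tagged with the bridge-local VLAN before
    # normal switching delivers it to the VMs.
    subprocess.check_call(["ovs-ofctl", "add-flow", "br-int",
                           "priority=3,in_port=1,actions=mod_vlan_vid:1,NORMAL"])

    # Outgoing direction, handled on the physical bridge: traffic arriving
    # over the veth from br-int has the local VLAN tag stripped before it
    # leaves through eth0.
    subprocess.check_call(["ovs-ofctl", "add-flow", "phy-eth0",
                           "priority=4,in_port=2,dl_vlan=1,actions=strip_vlan,NORMAL"])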
2. In some cases (see my github), it is impossible to find an O(n) set of OVS flows without two bridges, where n is the number of VMs on a host. If you look at the patches in my github, you can see that I introduced another pair of veth interfaces for handling the OVS flows for intra-host VM traffic. Without two separate bridges, the only solution I could find required O(n^2) OVS flows to achieve the same intra-traffic functionality. (A toy illustration of the flow-count difference follows below.)
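The intuition, roughly (my summary, not the exact rule layout in the patches): with a single bridge, controlling intra-host VM traffic ends up needing a rule per (source VM port, destination VM port) pair, while a dedicated veth pair acts as a single aggregation point that each VM only needs a constant number of rules toward. A toy illustration of the resulting flow counts:

    # Single bridge: one rule per ordered pair of VM ports.
    def single_bridge_flows(n_vms):
        return n_vms * (n_vms - 1)      # O(n^2)

    # Two bridges joined by a dedicated veth pair for intra-host traffic:
    # each VM port needs a constant number of rules toward/from the veth.
    def two_bridge_flows(n_vms):
        return 2 * n_vms                # O(n)

    for n in (10, 100, 1000):
        print(n, single_bridge_flows(n), two_bridge_flows(n))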
3. I have several performance test results regarding the number of bridges (one vs. two), but please don't expect a thorough argument here. First, with two bridges, when I tested outgoing bandwidth from VMs on a 1Gbps NIC, I could not see any bandwidth penalty; I could max out the physical bandwidth. For intra-host VM traffic, however, I saw a huge difference in bandwidth: 1Gbps vs. 8Gbps with two bridges and one bridge, respectively. I'm not sure what the main root cause of such a big gap was, but as far as I recall the results were strongly tied to memory size, because the intra-host traffic used a lot of memory. From the VMs' perspective, such an intra-traffic gap doesn't really matter as long as the VMs have some guaranteed bandwidth to the outside.
Thanks,

-Jun

On Fri, May 10, 2013 at 3:23 PM, Salvatore Orlando <sorlando@nicira.com> wrote:
> On 08 May 2013 16:37, "Lorin Hochstein" <lorin@nimbisservices.com> wrote:
>
> >
> > (I originally asked this question a couple of days ago on the main OpenStack mailing list).
> >
> > I'm trying to wrap my head around how Quantum works. If I'm understanding things correctly, when using the openvswitch plugin, a packet traveling from a guest out to the physical switch has to cross two software bridges (not counting the additional Linux bridge if security groups are required):
> >
> > 1. br-int
> > 2. br-ethN or br-tun (depending on whether using VLANs or GRE tunnels)
> >
> > So, I think I understand the motivation behind this design: the integration bridge handles the rules associated with the virtual networks defined by OpenStack users, and the (br-ethN | br-tun) bridge handles the rules associated with moving the packets across the physical network.
> >
> From my understanding of the OVS plugin, that's correct.
>
> > My question is: Does having two software bridges in the path incur a larger network performance penalty than if there were only a single software bridge between the VIF and the physical network interface? For example, I would guess that there would be additional latency involved in hopping across two bridges instead of one.
> >
> I am pretty sure we are paying a performance penalty in terms of latency. However, as far as I am aware this penalty has never been measured, especially in relation to the number of flows traversing the bridges.
> A long time ago I measured the penalty of doing segmentation in software with GRE, but did not measure the penalty of having an extra bridge.
>
> > If there is a performance penalty, was Quantum implemented to use multiple openvswitch bridges because it's simply not possible to achieve the desired functionality using a single bridge, or was it because using the multiple-bridge approach simplifies the Quantum implementation through separation of concerns, or was there some other reason?
>
> If you were doing just GRE overlays, you could definitely do with a single bridge. But since one might use provider networks to uplink to one or more physical interfaces, I am not sure that would be feasible/manageable with a single switch.
>
> Salvatore
>
> >
> > Lorin
> > --
> > Lorin Hochstein
> > Lead Architect - Cloud Services
> > Nimbis Services, Inc.
> > www.nimbisservices.com
> >
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev