<p>On 8 May 2013 at 16:37, "Lorin Hochstein" <<a href="mailto:lorin@nimbisservices.com">lorin@nimbisservices.com</a>> wrote:<br>
><br>
> (I originally asked this question a couple of days ago on the main OpenStack mailing list).<br>
><br>
> I'm trying to wrap my head around how Quantum works. If I'm understanding things correctly, when using the openvswitch plugin, a packet traveling from a guest out to the physical switch has to cross two software bridges (not counting the additional Linux bridge if security groups are required):<br>
><br>
> 1. br-int<br>
> 2. br-ethN or br-tun (depending on whether using VLANs or GRE tunnels)<br>
><br>
> So, I think I understand the motivation behind this design: the integration bridge handles the rules associated with the virtual networks defined by OpenStack users, and the (br-ethN | br-tun) bridge handles the rules associated with moving the packets across the physical network. <br>
></p>
<p>From my understanding of the OVS plugin, that's correct.</p>
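<p>For anyone who wants to see this wiring on a running compute node, here is a minimal sketch (assuming ovs-vsctl is on the PATH and the agent has already created the bridges) that dumps each OVS bridge and its ports, including the patch ports tying br-int to br-tun or br-ethN:</p>
<pre>
#!/usr/bin/env python
# Minimal sketch: dump each OVS bridge and its ports on a compute node.
# Assumes ovs-vsctl is on the PATH and the script runs with root privileges.
import subprocess

def vsctl(*args):
    out = subprocess.check_output(["ovs-vsctl"] + list(args))
    return [line for line in out.decode().splitlines() if line]

for bridge in vsctl("list-br"):
    print(bridge)
    for port in vsctl("list-ports", bridge):
        print("    " + port)
</pre>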
<p>> My question is: Does having two software bridges in the path incur a larger network performance penalty than if there was only a single software bridge between the VIF and the physical network interface? For example, I would guess that there would be additional latency involved in hopping across two bridges instead of one.<br>
></p>
<p>I am pretty sure we are paying a performance penalty in terms of latency. However, as far as I am aware, this penalty has never been measured, especially in relation to the number of flows traversing the bridges.<br>
A long time ago I measured the penalty of doing segmentation in software with GRE, but did not measure the penalty of having an extra bridge.</p>
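<p>If someone wanted a first cut at that measurement, something like the sketch below might do: compare average ping round-trip times to a guest sitting behind both bridges against a bare host on the same physical segment. The addresses are placeholders, and ping only captures the end-to-end difference, not the per-bridge cost:</p>
<pre>
#!/usr/bin/env python
# Rough measurement sketch: average ping RTT to a guest behind both OVS
# bridges vs. a bare-metal host on the same segment. The two addresses
# below are placeholders, not taken from any real deployment.
import re
import subprocess

def avg_rtt_ms(target, count=100):
    out = subprocess.check_output(["ping", "-q", "-c", str(count), target])
    # Linux ping summary line: "rtt min/avg/max/mdev = 0.041/0.062/0.113/0.014 ms"
    return float(re.search(r"= [\d.]+/([\d.]+)/", out.decode()).group(1))

baseline = avg_rtt_ms("10.0.0.1")   # bare-metal host, no OVS bridges in path
via_ovs = avg_rtt_ms("10.0.0.10")   # guest behind br-int plus br-tun/br-ethN
print("extra latency: %.3f ms" % (via_ovs - baseline))
</pre>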
<p>> If there is a performance penalty, was Quantum implemented to use multiple openvswitch bridges because it's simply not possible to achieve the desired functionality using a single bridge, or was it because the multiple-bridge approach simplifies the Quantum implementation through separation of concerns, or was there some other reason?</p>
<p>If you were doing just GRE overlays, you could definitely get by with a single bridge. But since one might use provider networks to uplink to one or more physical interfaces, I am not sure that would be feasible or manageable with a single switch.</p>
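<p>To illustrate what I mean by per-uplink bridges, the wiring amounts to roughly the following (bridge and NIC names are made up for the example): each physical interface hangs off its own br-ethN, which the agent then patches into br-int:</p>
<pre>
#!/usr/bin/env python
# Illustrative sketch of provider-network uplinks: one br-ethN per physical
# NIC, each later patched into br-int by the agent. Names are hypothetical.
import subprocess

UPLINKS = {"br-eth1": "eth1", "br-eth2": "eth2"}  # physnet bridge to NIC

for bridge, nic in UPLINKS.items():
    subprocess.check_call(["ovs-vsctl", "--may-exist", "add-br", bridge])
    subprocess.check_call(["ovs-vsctl", "--may-exist", "add-port", bridge, nic])
</pre>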
<p>Salvatore<br>
><br>
> Lorin<br>
> -- <br>
> Lorin Hochstein<br>
> Lead Architect - Cloud Services<br>
> Nimbis Services, Inc.<br>
> <a href="http://www.nimbisservices.com">www.nimbisservices.com</a><br>
><br>
</p>