[openstack-dev] [Quantum] Impact of separate bridges for integration & physical/tunneling
Salvatore Orlando
sorlando at nicira.com
Fri May 10 21:23:37 UTC 2013
On 8 May 2013 at 16:37, "Lorin Hochstein" <lorin at nimbisservices.com>
wrote:
>
> (I originally asked this question a couple of days ago on the main
> OpenStack mailing list).
>
> I'm trying to wrap my head around how Quantum works. If I'm understanding
> things correctly, when using the openvswitch plugin, a packet traveling
> from a guest out to the physical switch has to cross two software bridges
> (not counting the additional Linux bridge if security groups are required):
>
> 1. br-int
> 2. br-ethN or br-tun (depending on whether using VLANs or GRE tunnels)
>
> So, I think I understand the motivation behind this design: the
> integration bridge handles the rules associated with the virtual networks
> defined by OpenStack users, and the (br-ethN | br-tun) bridge handles the
> rules associated with moving the packets across the physical network.
>
From my understanding of the OVS plugin, that's correct.
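For reference, the wiring the OVS agent sets up between the two bridges can be sketched roughly like this (illustrative ovs-vsctl commands only; the bridge and patch-port names follow the agent's defaults and may differ on a given deployment):

```shell
# Integration bridge: VM vNICs plug in here, and per-network flow
# rules (e.g. local VLAN tagging) are applied on this bridge.
ovs-vsctl add-br br-int

# Tunnel bridge: handles GRE encapsulation towards other hypervisors.
ovs-vsctl add-br br-tun

# The two bridges are linked by a pair of OVS patch ports, so every
# packet hops br-int -> br-tun before reaching the physical NIC.
ovs-vsctl add-port br-int patch-tun -- \
    set interface patch-tun type=patch options:peer=patch-int
ovs-vsctl add-port br-tun patch-int -- \
    set interface patch-int type=patch options:peer=patch-tun
```

On a running compute node, `ovs-vsctl show` should display this topology.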
> My question is: Does having two software bridges in the path incur a
> larger network performance penalty than if there was only a single software
> bridge between the VIF and the physical network interface? For example, I
> would guess that there would be additional latency involved in hopping
> across two bridges instead of one.
>
I am pretty sure we are paying a performance penalty in terms of latency.
However, as far as I am aware this penalty has never been measured, esp. in
relation to the number of flows traversing the bridges.
A long time ago I measured the penalty of doing segmentation in software
with GRE, but did not measure the penalty of having an extra bridge.
> If there is a performance penalty, was Quantum implemented to use
> multiple openvswitch bridges because it's simply not possible to achieve
> the desired functionality using a single bridge, or was it because using
> the multiple bridge approach simplifies the Quantum implementation through
> separation of concerns, or was there some other reason?
If you were doing just GRE overlays, you could definitely make do with a
single bridge. But as one might use provider networks to uplink to one or
more physical interfaces, I am not sure that would be feasible/manageable
with a single switch.
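To illustrate the provider-network case: each physical uplink gets its own bridge, which the plugin then maps to a provider network name (the br-eth1/eth1/physnet1 names below are just examples for the sketch):

```shell
# One bridge per uplink, with the physical NIC attached to it.
ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 eth1

# The OVS plugin configuration then maps a provider network name
# to that bridge, e.g. in the plugin ini file:
#   bridge_mappings = physnet1:br-eth1
# and the agent wires br-int to br-eth1 with a veth pair.
```

With several such uplinks, collapsing everything onto br-int would mean mixing per-tenant flow rules and per-uplink VLAN handling on one switch, which is what the separation avoids.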
Salvatore
>
> Lorin
> --
> Lorin Hochstein
> Lead Architect - Cloud Services
> Nimbis Services, Inc.
> www.nimbisservices.com
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>