[openstack-dev] [Quantum] Impact of separate bridges for integration & physical/tunneling

Jun Cheol Park jun.park.earth at gmail.com
Fri May 17 16:34:09 UTC 2013


Just trying to add my two cents here.

I have been using the OVS plugin extensively (please refer to my talk at
the 2013 OpenStack Summit in Portland: "Using OpenStack In A Traditional
Hosting Environment").

In our environment, we made some patches to fully utilize OVS flows with
the OVS plugin to provide anti-IP/ARP spoofing, isolated networks, etc. (
https://github.com/JunPark/quantum/tree/bluehost/master). While working on
them, we came to understand why two separate bridges can be useful, at
least from the flow-management perspective. From the performance
perspective, I did not see any significant practical penalty, at least
where bandwidth is concerned, and I would say the potential latency gap
between one bridge and two isn't a big deal in most cases.

Here is my understanding:

1. Having two separate bridges (br-int and br-eth0) connected via a pair
of veth interfaces (int-br-eth0 and phy-br-eth0) makes it easier to
manipulate OVS flows. A typical use case is as follows. For incoming
packets destined for VMs, you set up OVS flows (e.g., adding a VLAN tag)
on the integration bridge br-int, where int-br-eth0 (veth) receives the
incoming packets. For outgoing packets from VMs to the outside, you set up
OVS flows (e.g., stripping off a VLAN tag) on the physical bridge br-eth0,
where phy-br-eth0 (veth) receives the outgoing packets. This way, you can
keep two separate sets of logic on the two bridges. Of course, it doesn't
have to be done this way, but it is convenient to manage incoming and
outgoing packets on separate bridges.
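
To make that split concrete, here is a minimal sketch (in Python, just
shelling out to ovs-ofctl) of the kind of per-bridge rules I'm describing.
The bridge names, ofport numbers, and VLAN id are made-up placeholders for
illustration, not the actual flows from our patches:

    import subprocess

    # Placeholders only: real ofport numbers come from
    # 'ovs-vsctl get Interface <name> ofport', and the local VLAN id is
    # whatever was allocated for the network in question.
    INT_BR, PHY_BR = "br-int", "br-eth0"
    INT_VETH_OFPORT = 3   # ofport of int-br-eth0 on br-int (assumed)
    PHY_VETH_OFPORT = 2   # ofport of phy-br-eth0 on br-eth0 (assumed)
    LOCAL_VLAN = 101      # VLAN tag used only inside br-int (assumed)

    def add_flow(bridge, flow):
        """Install a single OpenFlow rule on 'bridge' via ovs-ofctl."""
        subprocess.check_call(["ovs-ofctl", "add-flow", bridge, flow])

    # Ingress logic lives on br-int: tag packets arriving over the veth
    # from the physical bridge so they reach the right local VM ports.
    add_flow(INT_BR,
             "priority=3,in_port=%d,actions=mod_vlan_vid:%d,normal"
             % (INT_VETH_OFPORT, LOCAL_VLAN))

    # Egress logic lives on the physical bridge: strip the internal tag
    # from packets coming out of br-int before they hit the wire.
    add_flow(PHY_BR,
             "priority=4,in_port=%d,dl_vlan=%d,actions=strip_vlan,normal"
             % (PHY_VETH_OFPORT, LOCAL_VLAN))

The point is simply that each bridge only ever needs to reason about one
direction of traffic, so the two rule sets stay independent.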

2. In some cases (see my GitHub), it is impossible to find an O(n)
solution in OVS flows without two bridges, where n is the number of VMs on
a host. If you look at my patches, you can see that I introduced another
pair of veth interfaces to handle VM intra-traffic OVS flows. Without two
separate bridges, the only solution I could find required O(n^2) OVS flows
to achieve the same VM intra-traffic functionality.
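
To illustrate the counting argument only (this is not the actual rule set
from the bluehost branch), here is a toy Python comparison of the two
schemes, where "classify by source" and "deliver by destination" stand in
for whatever per-VM logic the flows implement:

    # Toy flow-count comparison for intra-host VM traffic handling.

    def single_bridge_flows(vms):
        # With a single lookup stage, each (source VM, destination VM)
        # pair needs its own match -> O(n^2) flows.
        return [(src, dst) for src in vms for dst in vms if src != dst]

    def two_stage_flows(vms):
        # With the extra veth hop there are two lookup stages: classify
        # by source VM on one side, deliver by destination VM on the
        # other -> O(n) flows in total.
        return ([("classify", vm) for vm in vms] +
                [("deliver", vm) for vm in vms])

    vms = ["vm%02d" % i for i in range(50)]
    print(len(single_bridge_flows(vms)))  # 2450
    print(len(two_stage_flows(vms)))      # 100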

3. I have some performance testing results regarding the number of bridges
(one vs. two), though please don't expect a thorough argument here. First,
with two bridges, when I tested outgoing bandwidth from VMs on a 1Gbps
NIC, I could not see any bandwidth penalty: I could still max out the
physical bandwidth. For VM intra-traffic, however, I saw a huge
difference: roughly 1Gbps with two bridges vs. 8Gbps with one bridge. I'm
not sure of the root cause of such a big gap; as far as I recall, the
result was strongly related to memory size, because intra-traffic used a
lot of memory. From the VMs' perspective, the intra-traffic gap doesn't
really matter as long as the VMs have some guaranteed bandwidth to the
outside.
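
If you want to reproduce this kind of measurement, a plain iperf run
between the relevant endpoints is enough; a rough sketch (with made-up
addresses, and an 'iperf -s' server already running at each target):

    import subprocess

    EXTERNAL_HOST = "203.0.113.10"  # host across the physical net (made up)
    NEIGHBOR_VM = "10.0.0.12"       # VM on the same hypervisor (made up)

    def measure(dest, seconds=30):
        # Run an iperf client against 'dest' and return its text report.
        out = subprocess.check_output(
            ["iperf", "-c", dest, "-t", str(seconds)])
        return out.decode()

    print(measure(EXTERNAL_HOST))  # VM -> outside: limited by the 1Gbps NIC
    print(measure(NEIGHBOR_VM))    # VM -> VM on same host: intra-traffic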

Thanks,

-Jun

On Fri, May 10, 2013 at 3:23 PM, Salvatore Orlando <sorlando at nicira.com> wrote:

>
> On 8 May 2013 at 16:37, "Lorin Hochstein" <lorin at nimbisservices.com>
> wrote:
>
> >
> > (I originally asked this question a couple of days ago on the main
> OpenStack mailing list).
> >
> > I'm trying to wrap my head around how Quantum works. If I'm understanding
> things correctly, when using the openvswitch plugin, a packet traveling
> from a guest out to the physical switch has to cross two software bridges
> (not counting the additional Linux bridge if security groups are required):
> >
> > 1. br-int
> > 2. br-ethN or br-tun (depending on whether using VLANs or GRE tunnels)
> >
> > So, I think I understand the motivation behind this design: the
> integration bridge handles the rules associated with the virtual networks
> defined by OpenStack users, and the (br-ethN | br-tun) bridge handles the
> rules associated with moving the packets across the physical network.
> >
>
> From my understanding of the OVS plugin, that's correct.
>
> > My question is:  Does having two software bridges in the path incur a
> larger network performance penalty than if there was only a single software
> bridge between the VIF and the physical network interface? For example, I
> would guess that there would be additional latency involved in hopping
> across two bridges instead of one.
> >
>
> I am pretty sure we are paying a performance penalty in terms of latency.
> However, as far as I am aware this penalty has never been measured, esp. in
> relation to the number of flows traversing the bridges.
> A long time ago I measured the penalty of doing segmentation in SW with
> GRE, but did not measure the penalty of having an extra bridge.
>
> > If there is a performance penalty, was Quantum implemented to use
> multiple openvswitch bridges because it's simply not possible to achieve
> the desired functionality using a single bridge, or was it because using
> the multiple bridge approach simplifies the Quantum implementation through
> separation of concerns, or was there some other reason?
>
> If you were doing just GRE overlays, you could definitely do with a
> single bridge. But as one might use provider networks to uplink to one or
> more physical interfaces, I am not sure that would be feasible/manageable
> with a single switch.
>
> Salvatore
>
> >
> > Lorin
> > --
> > Lorin Hochstein
> > Lead Architect - Cloud Services
> > Nimbis Services, Inc.
> > www.nimbisservices.com
> >