[Openstack-operators] Neutron and linuxbridge

Li Ma skywalker.nick at gmail.com
Thu Feb 27 14:09:03 UTC 2014


Hi all, I'm trying to build a testbed for Linux Bridge + VXLAN + ML2
plugin. Could you provide hints on how to set up the l3-agent?

Right now, the fixed IPs are all working well, but floating IPs are
not. I find that br-ex is not set up in the right way, because there
are no virtual ports on br-ex!
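For context, a minimal Havana-era l3-agent configuration for the LinuxBridge driver might look like the sketch below (file path and option values are assumptions, not taken from this thread):

```ini
# /etc/neutron/l3_agent.ini -- minimal sketch, assuming Havana and the
# LinuxBridge interface driver
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
# br-ex is an OVS concept; with the LinuxBridge agent, leave this empty
# and model the external network as a flat provider network instead
external_network_bridge =
```

With a setup like this, the agent plugs the router's external port into the bridge created for the flat provider network rather than into br-ex.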

Thanks a lot,
--
cheers,
Li Ma

On 2/17/2014 10:20 PM, Joe Topjian wrote:
> Nice - thank you very much!
>
>
> On Mon, Feb 17, 2014 at 3:01 PM, Édouard Thuleau <thuleau at gmail.com> wrote:
>
>     Some inline comments.
>
>
>     On Mon, Feb 17, 2014 at 10:30 AM, Joe Topjian <joe at topjian.net> wrote:
>
>         Hi Édouard,
>
>         Thank you for the info. Please see inline.
>
>         On Mon, Feb 17, 2014 at 9:20 AM, Édouard Thuleau
>         <thuleau at gmail.com> wrote:
>
>             Hi Joe,
>
>             Which version of the Linux kernel do you use? Do you need
>             to set multicast on your fabric?
>
>
>         I'm using 3.11 in Ubuntu 12.04.4. I'm also using a newer
>         version of iproute2 from this
>         ppa: https://launchpad.net/~dirk-computer42/+archive/c42-backport
>
>         AFAIK, I don't require multicast. Is there a good reason or
>         use of multicast at the cloud/infrastructure level?
>
>
>     The first Linux VXLAN implementation used multicast to emulate a
>     virtual broadcast domain. That was the approach recommended by
>     the first drafts of the VXLAN specification. But VXLAN no longer
>     requires multicast, and neither does the 3.11 Linux kernel
>     implementation.
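The difference shows up in how the VXLAN device is created with iproute2. A sketch (interface names, VNIs, and the group address are made up for illustration):

```shell
# Multicast mode (original draft behaviour): flooded traffic is sent
# to a multicast group on the underlay fabric
ip link add vxlan0 type vxlan id 1000 group 239.1.1.1 dev eth0

# Unicast mode (recent kernels): no group; flooded traffic is
# head-end replicated to the fdb entries populated by e.g. the
# l2-pop mechanism driver
ip link add vxlan1 type vxlan id 1001 dev eth0
```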
>
>
>             We worked during Havana to improve overlay propagation on
>             the fabric, and we wrote a mechanism driver for the ML2
>             plugin called 'l2-pop' (one bug still persists in Havana
>             [1]). Did you use it?
>
>
>         I'm using the linuxbridge mechanism driver at the moment. I'm
>         unable to find any documentation on the l2pop driver. Could
>         you explain what it is and why I should use it? 
>
>
>     Here is the blueprint design
>     document https://docs.google.com/document/d/1sUrvOQ9GIl9IWMGg3qbx2mX0DdXvMiyvCw2Lm6snaWQ/edit, a
>     good blog
>     post http://assafmuller.wordpress.com/2013/10/14/gre-tunnels-in-openstack-neutron/ and
>     the associated FOSDEM
>     presentation http://bofh.nikhef.nl/events/FOSDEM//2014/UD2120_Chavanne/Sunday/Tunnels_as_a_Connectivity_and_Segregation_Solution_for_Virtualized_Networks.webm (thanks
>     to Assaf Muller). The posts are OVS oriented, but they are useful
>     for understanding the objectives of the l2-pop mechanism driver.
>
>     Just to be precise: in the ML2 plugin you can set more than one
>     mechanism driver (MD) and mix them. The l2-pop MD requires at
>     least the LB or OVS MD to work, and it works only with the GRE or
>     VXLAN type drivers.
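As a sketch, mixing the mechanism drivers in ml2_conf.ini might look like this (section and option names as in the Havana ML2 plugin; the VNI range is an assumption):

```ini
# ml2_conf.ini -- sketch: LinuxBridge MD plus the l2-pop MD,
# with the VXLAN type driver
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population

[ml2_type_vxlan]
vni_ranges = 1:1000
```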
>
>          
>
>             In my opinion, the LB agent with VXLAN is much simpler (no
>             flows, kernel integrated, 1 bridge = 1 network, netfilter
>             aware...) and as effective as the OVS agent. And I think
>             it's more stable than OVS.
>
>
>         Speaking of flows, a topic that has come up in discussion is
>         the use of OVS, OpenStack, and OpenFlow. We have some network
>         guys who are getting into OpenFlow, OpenDaylight et al. With
>         the current Neutron OVS implementation, is it incorrect to say
>         that there is no way to take advantage of a higher level of
>         network control at the moment? Meaning: it seems to me like
>         the OVS implementation is simply being used as a complex
>         drop-in replacement of the linux bridge system.
>
>
>     No, it's not incorrect: the OVS Neutron implementation does not
>     by itself allow a higher level of network control. For that, you
>     need to implement a new ML2 MD to drive your OVS/OpenFlow
>     controller (as is done by the NEC plugin, the ODL MD...).
>
>          
>
>             The only inconvenience is that it needs, for the moment, a
>             recent Linux kernel (and iproute2 binaries, obviously). I
>             recommend release 3.11 (the version currently distributed
>             with Ubuntu LTS 12.04.4) to get the full power of the
>             VXLAN module (edge replication for multicast, broadcast
>             and unknown unicast). Some distributions backport that
>             module to older kernels (RedHat does, it seems to me).
>
>             Another improvement: a local ARP responder (to avoid the
>             costly ARP broadcast emulation on the overlay) is
>             available with the VXLAN module and a recent iproute2
>             version, and the l2-pop MD uses it, while the OVS agent
>             doesn't yet support it [2]. Just a remark: when it's
>             used, unknown unicast (packets whose destination does not
>             match an entry in the fdb populated by the l2-pop MD) is
>             dropped by default (this is not configurable; a kernel
>             and iproute2 improvement is needed).
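The kind of entries the l2-pop MD manages can be inspected or mimicked by hand with the iproute2 tools. A sketch (device names, MACs, and addresses are made up):

```shell
# Unicast fdb entry: MAC fa:16:3e:aa:bb:cc is reachable via the
# remote VTEP at 10.0.0.2
bridge fdb add fa:16:3e:aa:bb:cc dev vxlan-1000 dst 10.0.0.2

# Flooding entry (all-zero MAC): broadcast/multicast frames are
# edge-replicated to this remote VTEP
bridge fdb append 00:00:00:00:00:00 dev vxlan-1000 dst 10.0.0.2

# Local ARP responder entry: answer ARP requests for 192.168.0.5
# locally instead of flooding them over the overlay
ip neigh add 192.168.0.5 lladdr fa:16:3e:aa:bb:cc dev vxlan-1000 nud permanent
```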
>
>
>         Thank you for noting this.
>          
>
>             In my opinion, the default agent that OpenStack CI uses
>             for testing should be the LB agent. I think it's more
>             stable and easier to debug.
>
>
>         Since I'm not a developer, I can't comment on the CI aspect,
>         but I'd be inclined to agree. I share your opinion, but more
>         about a basic, generic reference installation of Neutron for
>         new users. Now I'm trying to learn more about why one would
>         choose OVS over LB in order to validate that opinion. :)
>          
>
>
>             [1] https://review.openstack.org/#/c/71821/
>             [2] https://review.openstack.org/#/c/49227/
>
>             Regards,
>             Édouard.
>
>
>
>             On Sat, Feb 15, 2014 at 12:39 PM, Joe Topjian
>             <joe at topjian.net> wrote:
>
>                 Hello,
>
>                 I'm curious if anyone uses the linuxbridge driver in
>                 production?
>
>                 I've just finished setting up a lab environment using
>                 ML2, linuxbridge, and vxlan and everything works just
>                 as it did with OVS. 
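For reference, the agent side of such a lab might be configured roughly as below (a sketch against the Havana LinuxBridge agent options; the addresses and interface names are assumptions):

```ini
# LinuxBridge agent configuration on each compute/network node
[vxlan]
enable_vxlan = True
# underlay IP of this node's VTEP (assumed value)
local_ip = 192.0.2.10
# let the l2-pop MD pre-populate fdb entries instead of flooding
l2_population = True

[linux_bridge]
# only needed for flat/vlan provider networks (assumed mapping)
physical_interface_mappings = physnet1:eth1
```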
>
>                 I see benefits of a *much* simpler network layout, a
>                 non-deprecated vif driver, and none of the OVS issues
>                 that have been discussed on this list.
>
>                 But maybe I'm missing something... what are the
>                 reasons for using OVS over linuxbridge? All of the
>                 official installation guides use it, and I've never
>                 seen anyone mention linuxbridge on this list.
>
>                 Joe
>
>                 _______________________________________________
>                 OpenStack-operators mailing list
>                 OpenStack-operators at lists.openstack.org
>                 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>




