[openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

Kevin Benton blak111 at gmail.com
Wed Aug 27 21:44:44 UTC 2014


It's more than just an optimization when it comes to overlay networks
though. It's the only way for agents to establish segment connectivity when
something like VXLAN multicast discovery isn't possible. Basic connectivity
shouldn't be l2pop's responsibility; that should be handled by the tunnel
type drivers.

I'm fine with l2pop optimizing things like ARP responses and tunnel
pruning, but a network should still be able to function without l2pop.
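
To make that concrete, here's a rough sketch of the direction I mean
(hypothetical names, not the actual ML2 interface, although the in-tree
tunnel type drivers already have similar add_endpoint()/get_endpoints()
methods): the type driver owns the endpoint inventory, and any mechanism
driver, agent-based or not, registers the termination points it knows about.

    # Hypothetical sketch -- not the real Neutron code. The point is only
    # that basic segment connectivity lives with the tunnel type driver.
    class VxlanTypeDriverSketch(object):
        def __init__(self):
            self._endpoints = {}  # tunnel IP -> attributes

        def add_endpoint(self, ip, udp_port=4789):
            # Called by any mechanism driver: an agent-based one from its
            # tunnel-sync RPC handler, a controller-based one (e.g. ODL)
            # when it syncs its own inventory.
            self._endpoints[ip] = {'udp_port': udp_port}

        def get_endpoints(self):
            # Everything needed to establish tunnels for a segment,
            # regardless of how each endpoint was registered.
            return [dict(ip=ip, **attrs)
                    for ip, attrs in sorted(self._endpoints.items())]

With something like that in place, l2pop can stay a pure optimization on
top, rather than being the thing that makes the network work at all.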


On Wed, Aug 27, 2014 at 6:36 AM, Mathieu Rohon <mathieu.rohon at gmail.com>
wrote:

> l2pop is about L2 network optimization through tunnel creation and ARP
> responder population (so this is not only an overlay network optimization;
> for example, ofagent now uses l2pop info for flat and VLAN optimization [1]).
> This optimization is orthogonal to the several agent-based mechanism
> drivers (lb, ovs, ofagent).
> I agree that this optimization should be accessible to every MD, by
> providing access to the fdb dict directly from the ML2 DB.
> A controller-based MD like ODL could use those fdb entries the same way
> agents use them, by optimizing the datapath under its control.
>
> [1]https://review.openstack.org/#/c/114119/
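>
> As a rough illustration (hypothetical helper; the entry layout below is only
> approximately what l2pop distributes today and varies by release), the fdb
> data such an MD would consume boils down to something like:
>
>     # Hypothetical sketch of building l2pop-style fdb entries for one
>     # network from data any MD could read out of the ML2 DB.
>     FLOODING_ENTRY = ['00:00:00:00:00:00', '0.0.0.0']
>
>     def build_fdb_entries(network_id, segment_id, ports):
>         # ports: iterable of (mac, ip, tunnel_endpoint_ip) tuples
>         endpoints = {}
>         for mac, ip, endpoint_ip in ports:
>             endpoints.setdefault(endpoint_ip,
>                                  [FLOODING_ENTRY]).append([mac, ip])
>         return {network_id: {'network_type': 'vxlan',
>                              'segment_id': segment_id,
>                              'ports': endpoints}}
>
> A controller-based MD could then program its own datapaths from exactly the
> same structure that the agents consume.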
>
> On Wed, Aug 27, 2014 at 10:30 AM, Kevin Benton <blak111 at gmail.com> wrote:
> >>So why not agent-based?
> >
> > Maybe I have an experimental operating system that can't run Python.
> > Maybe the RPC channel between compute nodes and Neutron doesn't satisfy
> > certain security criteria. Regardless of the reason, it doesn't matter
> > because that is an implementation detail that should be irrelevant to
> > separate ML2 drivers.
> >
> > l2pop should be concerned with tunnel endpoints and tunnel endpoints only.
> > Whether or not you're running a chunk of code responding to messages on an
> > RPC bus and sending heartbeats should not be Neutron's concern. It defeats
> > the purpose of ML2 if everything that can bind a port has to be running a
> > Neutron RPC-compatible agent.
> >
> > The l2pop functionality should become part of the tunnel type drivers, and
> > the mechanism drivers should be able to provide the termination endpoints
> > for the tunnels using whatever mechanism they choose. Agent-based drivers
> > can use the agent DB to do this, and the REST drivers can provide whatever
> > termination point they want. This solves the interoperability problem and
> > relaxes the tight coupling between VXLAN and agents.
> >
> >
> > On Wed, Aug 27, 2014 at 1:09 AM, loy wolfe <loywolfe at gmail.com> wrote:
> >>
> >>
> >>
> >>
> >> On Wed, Aug 27, 2014 at 3:13 PM, Kevin Benton <blak111 at gmail.com>
> >> wrote:
> >>>
> >>> Ports are bound in the order of the configured drivers, so as long as the
> >>> Open vSwitch driver is put first in the list, it will bind the ports it
> >>> can and then ODL will bind the leftovers. [1][2] The only missing
> >>> component is that ODL doesn't appear to use l2pop, so establishing
> >>> tunnels between the OVS agents and the ODL-managed vswitches would be an
> >>> issue that would have to be handled via another process.
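> >>>
> >>> For example (assuming the usual entry-point names; check the driver names
> >>> actually registered in your installation), the binding order is just the
> >>> driver list in ml2_conf.ini:
> >>>
> >>>     [ml2]
> >>>     mechanism_drivers = openvswitch,opendaylight
> >>>
> >>> With openvswitch listed first, it gets the first attempt to bind each
> >>> port and opendaylight only picks up what is left over.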
> >>>
> >>> Regardless, my original point is that the driver keeps the Neutron
> >>> semantics and DB intact. In my opinion, the lack of compatibility with
> >>> l2pop isn't an issue with the driver, but more of an issue with how l2pop
> >>> was designed. It's very tightly coupled to having agents managed by
> >>> Neutron via RPC, which shouldn't be necessary when its primary purpose is
> >>> to establish endpoints for overlay tunnels.
> >>
> >>
> >> So why not agent-based? Neutron shouldn't be treated as just resource
> >> storage; built-in backends naturally need things like l2pop and DVR for
> >> distributed, dynamic topology control, so we can't say that something
> >> which is part of Neutron is "tightly coupled" to it.
> >>
> >> On the contrary, third-party backends should adapt themselves to be
> >> integrated into Neutron as thinly as they can, focusing on backend device
> >> control rather than re-implementing core service logic that duplicates
> >> Neutron's. BTW, ofagent is a good example of this style.
> >>
> >>>
> >>>
> >>>
> >>> 1. https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8cccc/neutron/plugins/ml2/drivers/mech_agent.py#L53
> >>> 2. https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8cccc/neutron/plugins/ml2/drivers/mechanism_odl.py#L326
> >>>
> >>>
> >>> On Tue, Aug 26, 2014 at 10:05 PM, loy wolfe <loywolfe at gmail.com>
> >>> wrote:
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> On Wed, Aug 27, 2014 at 12:42 PM, Kevin Benton <blak111 at gmail.com>
> >>>> wrote:
> >>>>>
> >>>>> >I think that "opensource" is not the only factor; it's about built-in
> >>>>> > vs. 3rd-party backend. Built-in must be open source, but open source
> >>>>> > is not necessarily built-in. In my view, the current OVS and
> >>>>> > linuxbridge drivers are built-in, but shim RESTful proxies for all
> >>>>> > kinds of SDN controllers should be 3rd-party, because they keep the
> >>>>> > whole virtual networking data model and service logic in their own
> >>>>> > places, using the Neutron API just as the NB shell (they can't even
> >>>>> > co-work with the built-in l2pop driver for vxlan/gre network types
> >>>>> > today).
> >>>>>
> >>>>>
> >>>>> I understand the point you are trying to make, but this blanket
> >>>>> statement about the data model of drivers/plugins with REST backends is
> >>>>> wrong. Look at the ODL mechanism driver for a counter-example. [1] The
> >>>>> data is still stored in Neutron and all of the semantics of the API are
> >>>>> maintained. The l2pop driver exists to deal with decentralized
> >>>>> overlays, so I'm not sure how its interoperability with the ODL driver
> >>>>> is relevant.
> >>>>
> >>>>
> >>>> If we create a vxlan network, can we then bind some ports to the
> >>>> built-in OVS driver and other ports to the ODL driver? The linuxbridge
> >>>> agent, OVS agent and ofagent can co-exist in the same vxlan network
> >>>> under the common l2pop mechanism. In that scenario, I'm not sure whether
> >>>> ODL can simply join them in a heterogeneous multi-backend architecture,
> >>>> or whether it works exclusively and has to take over all the
> >>>> functionality.
> >>>>
> >>>>>
> >>>>>
> >>>>> 1. https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mechanism_odl.py
> >>>>>
> >>>>>
> >>>>>
> >>>>> On Tue, Aug 26, 2014 at 7:14 PM, loy wolfe <loywolfe at gmail.com>
> >>>>> wrote:
> >>>>>>
> >>>>>> Forwarded from another thread discussing the incubator:
> >>>>>>
> >>>>>> http://lists.openstack.org/pipermail/openstack-dev/2014-August/044135.html
> >>>>>>
> >>>>>>
> >>>>>>>
> >>>>>>> Completely agree with this sentiment. Is there a crisp distinction
> >>>>>>> between a "vendor" plugin and an "open source" plugin though?
> >>>>>>>
> >>>>>>
> >>>>>> I think that "opensource" is not the only factor; it's about built-in
> >>>>>> vs. 3rd-party backend. Built-in must be open source, but open source is
> >>>>>> not necessarily built-in. In my view, the current OVS and linuxbridge
> >>>>>> drivers are built-in, but shim RESTful proxies for all kinds of SDN
> >>>>>> controllers should be 3rd-party, because they keep the whole virtual
> >>>>>> networking data model and service logic in their own places, using the
> >>>>>> Neutron API just as the NB shell (they can't even co-work with the
> >>>>>> built-in l2pop driver for vxlan/gre network types today).
> >>>>>>
> >>>>>> As for Snabb or DPDK OVS (they also plan to support the official QEMU
> >>>>>> vhost-user), or other similar contributions: if one or two of them win
> >>>>>> the war of the high-performance userspace vswitch and attract broad
> >>>>>> common interest, then they may be accepted as built-in.
> >>>>>>
> >>>>>>
> >>>>>>>
> >>>>>>> The Snabb NFV (http://snabb.co/nfv.html) driver superficially looks
> >>>>>>> like a vendor plugin but is actually completely open source. The
> >>>>>>> development is driven by end-user organisations who want to make the
> >>>>>>> standard upstream Neutron support their NFV use cases.
> >>>>>>>
> >>>>>>> We are looking for a good way to engage with the upstream community.
> >>>>>>> In this cycle we have found kindred spirits in the NFV subteam, but we
> >>>>>>> did not find a good way to engage with Neutron upstream (see
> >>>>>>> https://review.openstack.org/#/c/116476/). It would be wonderful if
> >>>>>>> there were a suitable process available for us to use in Kilo, e.g.
> >>>>>>> incubation.
> >>>>>>>
> >>>>>>> Cheers,
> >>>>>>> -Luke
> >>>>>>>
> >>>>>>
> >>>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> --
> >>>>> Kevin Benton
> >>>>>
> >>>>
> >>>>
> >>>
> >>>
> >>>
> >>> --
> >>> Kevin Benton
> >>>
> >>
> >>
> >
> >
> >
> > --
> > Kevin Benton
> >
>



-- 
Kevin Benton