[openstack-dev] [neutron][networking-calico] To be or not to be an ML2 mechanism driver?

Ian Wells ijw.ubuntu at cack.org.uk
Mon Jan 25 06:22:31 UTC 2016


On 22 January 2016 at 10:35, Neil Jerram <Neil.Jerram at metaswitch.com> wrote:

> * Why change from ML2 to core plugin?
>
> - It could be seen as resolving a conceptual mismatch.  networking-calico
>   uses IP routing to provide L3 connectivity between VMs, whereas ML2 is
>   ostensibly all about layer 2 mechanisms.


You've heard my view on this before, but to reiterate: Neutron *itself* is
all about layer 2 mechanisms (at least at the level of what a 'network'
is).  A core plugin implements the Neutron API, so if you choose to write
one you will still have networks, each with subnets and ports that should
receive addresses on creation, and that constrains what you can do.  As
such, I'm not sure what constraints you're escaping.
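
To make that concrete, here's a rough sketch of the model any core plugin
still has to serve (illustrative only; the credentials are placeholders
and this uses the 2016-era python-neutronclient API):

    # Illustrative: the data model every Neutron core plugin must serve.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    net = neutron.create_network({'network': {'name': 'demo-net'}})['network']
    neutron.create_subnet({'subnet': {'network_id': net['id'],
                                      'ip_version': 4,
                                      'cidr': '10.65.0.0/24'}})
    # A port created on the network gets an address from the subnet at
    # creation time, whatever the plugin does underneath.
    port = neutron.create_port({'port': {'network_id': net['id']}})['port']
    print(port['fixed_ips'])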

What I think might be interesting to you is that you would no longer be
expected to work with the ML2 DHCP system (which I guess probably doesn't
do what you need) or the router system.  Part of what ML2 provides is that
you need implement *only* the L2 bit of what a core plugin does and can
reuse the rest; it exists largely because people were reimplementing that
rest, in less elegant ways, in the monolithic plugins they wrote before it.
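
As a sketch of how small that L2 bit can be (illustrative only; this
subclasses the MechanismDriver base class from the Mitaka-era
neutron.plugins.ml2.driver_api module, and the class name is made up):

    # A do-almost-nothing ML2 mechanism driver.  DHCP, L3 routers, the DB
    # model and the REST API all come from the ML2 plugin and the service
    # plugins; the driver implements only the L2 piece.
    from neutron.plugins.ml2 import driver_api as api

    class MinimalMechanismDriver(api.MechanismDriver):

        def initialize(self):
            # One-time setup: read config, connect to a controller, etc.
            pass

        def create_port_postcommit(self, context):
            # React to a new port, e.g. program an agent or a switch.
            # context.current is the port dict.
            pass

        def delete_port_postcommit(self, context):
            # Tear down whatever create_port_postcommit set up.
            pass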

> Let's look at Types first.  ML2 supports multiple provider network
> types, with the Type for each network being specified explicitly by the
> provider API extension (provider:network_type), or else defaulting to
> the 'tenant_network_types' ML2 config setting.  However, would a cloud
> operator ever actually use more than one provider Type?


Up front: there's really no distinction between provider and tenant
networks when it comes down to it.  Tenant networks are just provider
networks where Neutron has chosen the type and segment for you; the
resulting network is indistinguishable once created.

It's possible, and sometimes useful, to mix VLAN and VXLAN types.  You can
use VLANs for your provider networks over physical segments that also
communicate with external devices, and VXLAN for cloud-local tenant
networking.  This means your tenant networks scale and your interface to
the world is straightforward.
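
As a sketch, that mix looks something like this in ml2_conf.ini (the
physical network name and the ranges are example values, not
recommendations):

    [ml2]
    type_drivers = vlan,vxlan
    # Networks created without provider attributes default to VXLAN.
    tenant_network_types = vxlan

    [ml2_type_vlan]
    # VLANs on physnet1 are available for explicit provider networks.
    network_vlan_ranges = physnet1:100:199

    [ml2_type_vxlan]
    # VNIs allocated to cloud-local tenant networks.
    vni_ranges = 1000:1999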

I don't think I've ever deliberately set up Neutron with multiple
tenant-only network types.  However, if the properties of a Calico network
were sufficiently different from those of other types, then I might in
some circumstances choose a Calico network, or another kind of network,
for a specific use.  That's possible in ML2 with the provider system, but
it isn't really end-user consumable for non-admins (I can't think of a
policy that would do the trick).  You'd need some means of choosing a
network by its properties, and probably the only candidate today is VLAN
transparency.

> ML2 also supports multiple mechanism drivers.  When a new Port is being
>   created, ML2 calls each mechanism driver to give it the chance to do
>   binding and connectivity setup for that Port.  In principle, if
>   mechanism drivers are present, I guess each one is supposed to look at
>   some of the available Port data - and perhaps the network Type - and
>   thereby infer whether it should be responsible for that Port, and so
>   do the setup for it.  But I wonder if anyone runs a cloud where that
>   really happens?  If so, have I got it right?

This *does* happen, though 'responsible' is the wrong phrase.  No one
mechanism driver is 'responsible'; only one of them 'binds' the port to a
segment (normally OVS, LB or SRIOV among the open source drivers).  Other
drivers might not do the final binding, but they support it by, for
instance, reconfiguring switches (the Cisco Nexus switch driver being an
example).  And drivers that aren't interested in that type of network are
simply skipped over.  This is of benefit to the drivers that exist today,
but probably not terribly useful for Calico.
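
For reference, that binding decision lives in the driver's bind_port hook,
roughly like this (illustrative; the segment check and VIF details are
assumptions, but segments_to_bind and set_binding are the real ML2
interfaces):

    # Illustrative bind_port logic for an ML2 mechanism driver.
    from neutron.plugins.ml2 import driver_api as api

    class ExampleBindingDriver(api.MechanismDriver):

        def initialize(self):
            pass

        def bind_port(self, context):
            # ML2 offers each driver the chance to bind; a driver that
            # can't handle any offered segment just returns and is skipped.
            for segment in context.segments_to_bind:
                if segment[api.NETWORK_TYPE] == 'vlan':
                    # Claim the port: record segment, VIF type and details.
                    context.set_binding(segment[api.ID],
                                        'ovs',  # example vif_type
                                        {'port_filter': True})
                    return
            # No suitable segment: decline; another driver may bind.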


> All in all, if hybrid ML2 networking is a really used thing, I'd like to
> make sure I fully understand it, and would tend to prefer
> networking-calico remaining as an ML2 mechanism driver.  (Which means I
> also need to discuss further the idea of conceptually extending 'ML2' to
> L3-only implementations, and to raise another point about what happens
> when the service_plugin that you need for some extension - say a
> floating IP - depends on which mechanism driver was used to set up the
> relevant Port...)


This would be the argument I was making at the summit for Gluon - if you
have strayed from the Neutron data model of what a network is (or even
whether a network is needed at all; with L3, 'VRF' would be a better term,
given its behaviour is quite different), there comes a point where you're
not actually implementing Neutron at all, and you should probably set all
of it aside rather than trying to adapt it to do two very different tasks.
Come talk to me if you want to experiment - I've got the code up on GitHub,
but the instructions are a little convoluted at the moment.
-- 
Ian.