[openstack-dev] [neutron][networking-calico] To be or not to be an ML2 mechanism driver?
Neil.Jerram at metaswitch.com
Fri Jan 22 18:35:55 UTC 2016
networking-calico is currently implemented as an ML2 mechanism driver, and
I'm wondering if it might be better as its own core plugin. I'm looking for
input about the implications of that, or for experience with that kind of
change; and also for experience and understanding of hybrid ML2
deployments.
Here are the considerations that I'm aware of:
* Why change from ML2 to core plugin?
- It could be seen as resolving a conceptual mismatch. networking-calico uses
IP routing to provide L3 connectivity between VMs, whereas ML2 is
all about layer 2 mechanisms. Arguably it's the Wrong Thing for a routed L3
network to be implemented as an ML2 driver, and changing to a core plugin
would fix that.
On the other hand, the current ML2 implementation seems to work fine. I
think that the L2 focus of ML2 may be seen as a traditional assumption, just
like the previously assumed L2 semantics of neutron Networks; and arguably
the scope of 'ML2' could and should be expanded to cover both L2- and
L3-based implementations, just as it has been proposed to expand the scope
of the Network object to encompass L3-only behaviour as well as L2/L3.
- Some simplification of the required config. A single 'core_plugin'
setting could replace 'core_plugin = ml2' plus a handful of ML2 settings.
- Code-wise, it's a much smaller change than you might imagine, because the
core plugin can still derive from ML2, and so internally retain the ML2
machinery.
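To make the config delta concrete, here is roughly what I have in mind (the
plugin entry point name and the exact [ml2] values below are illustrative,
not a definitive recipe):

```
# Before: Calico as an ML2 mechanism driver
# (neutron.conf plus ml2_conf.ini)
core_plugin = ml2

[ml2]
type_drivers = local,flat
tenant_network_types = local
mechanism_drivers = calico

# After: Calico as its own core plugin (neutron.conf only;
# 'calico' here stands for whatever entry point name we'd register)
core_plugin = calico
```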
* Why stay as an ML2 driver?
- Perhaps because of ML2's support for multiple networking implementations
in the same cluster. To the extent that it makes sense, I'd like
networking-calico networks to coexist with other networking implementations
in the same data center.
But I'm not sure to what extent such hybrid networking is a real need, and
this is the main point on which I'd appreciate input. In principle ML2
supports multiple network Types and multiple network Mechanisms, but I wonder
how far that really works - or is useful - in practice.
Let's look at Types first. ML2 supports multiple provider network types,
with the Type for each network being specified explicitly by the provider
extension (provider:network_type), or else defaulting to the
'external_network_type' ML2 config setting. However, would a cloud operator
ever actually use more than one provider Type? My understanding is that
provider networks are designed to map closely onto the real network, and I
guess that an operator would also favour a uniform design there, hence
using a single provider network Type.
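For reference, the per-network Type choice happens at creation time via the
provider extension attributes; something like the following (network and
physnet names are made up for illustration):

```
# Explicitly choose the provider Type for one network
neutron net-create demo-net \
    --provider:network_type vlan \
    --provider:physical_network physnet1 \
    --provider:segmentation_id 100
```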
For tenant networks, ML2 allows multiple network Types to be configured via
the 'tenant_network_types' setting. However, if my reading of the code is
correct, only the first of these Types will ever be used for a tenant network
- unless the system runs out of the 'resources' needed for that Type, for
example if the first Type is 'vlan' but there are no VLAN IDs left to use.
Is that a feature that is used in practice, within a given deployment - for
example, to first use VLANs for tenant networks, then switch to another Type
when those run out?
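My reading of that allocation logic can be sketched as follows. This is a
paraphrase, not the actual neutron code; the class and function names are
made up:

```python
# Sketch of ML2's tenant Type selection as I read it: try each configured
# Type in order, falling through to the next one only on resource
# exhaustion.  Toy code, not real neutron.

class NoMoreSegments(Exception):
    """Raised when a type driver has no free resources left."""


def allocate_tenant_segment(tenant_network_types, type_drivers):
    """Return a segment from the first configured Type that still has
    resources, mirroring the 'first Type wins' behaviour described above."""
    for network_type in tenant_network_types:
        driver = type_drivers[network_type]
        try:
            return driver.allocate()      # e.g. grab a free VLAN ID
        except NoMoreSegments:
            continue                      # fall back to the next Type
    raise NoMoreSegments("no tenant network resources left at all")


class VlanDriver:
    """Toy type driver with a finite pool of VLAN IDs."""
    def __init__(self, vlan_ids):
        self.free = list(vlan_ids)

    def allocate(self):
        if not self.free:
            raise NoMoreSegments()
        return ("vlan", self.free.pop())


class VxlanDriver:
    """Toy type driver with an effectively unlimited VNI space."""
    def __init__(self):
        self.next_vni = 1

    def allocate(self):
        vni, self.next_vni = self.next_vni, self.next_vni + 1
        return ("vxlan", vni)
```

With tenant_network_types = ['vlan', 'vxlan'] and only two free VLAN IDs,
the first two allocations come out as 'vlan' and the third falls through to
'vxlan' - which is exactly the "switch when those run out" behaviour I'm
asking about.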
ML2 also supports multiple mechanism drivers. When a new Port is being
created, ML2 calls each mechanism driver to give it the chance to do binding
and connectivity setup for that Port. In principle, if multiple mechanism
drivers are present, I guess each one is supposed to look at some of the
available Port data - and perhaps the network Type - and thereby infer
whether it is responsible for that Port, and so do the setup for it. But I
wonder if anyone runs a cloud where that really happens? If so, have I got
it right?
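To check my understanding, here is the dispatch pattern I have in mind as a
toy sketch. The class names are invented; the real API is neutron's
MechanismDriver with port context objects and bind_port, which has more
moving parts than this:

```python
# Toy sketch of several ML2 mechanism drivers each inspecting a new Port
# and deciding for itself whether it is responsible.  Not the real
# neutron.plugins.ml2 driver API - just the dispatch shape I'm describing.

class MechDriver:
    def handles(self, port):
        """Decide, from Port data, whether this driver is responsible."""
        raise NotImplementedError

    def setup(self, port):
        """Do binding / connectivity setup for a Port this driver claims."""
        raise NotImplementedError


class OvsDriver(MechDriver):
    """Toy driver that claims Ports on vlan/vxlan networks."""
    def handles(self, port):
        return port["network_type"] in ("vlan", "vxlan")

    def setup(self, port):
        return "ovs bound %s" % port["id"]


class CalicoDriver(MechDriver):
    """Toy driver that claims Ports on flat (routed) networks."""
    def handles(self, port):
        return port["network_type"] == "flat"

    def setup(self, port):
        return "calico routed %s" % port["id"]


def dispatch_port(port, mechanism_drivers):
    """ML2-style loop: offer the new Port to every registered driver and
    let each one infer from the Port data whether to act on it."""
    return [driver.setup(port)
            for driver in mechanism_drivers
            if driver.handles(port)]
```

If this is roughly right, hybrid operation hinges on every deployed driver
being able to make that "is this Port mine?" decision cleanly from the Port
and network data - which is the part I'd like real-world confirmation of.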
All in all, if hybrid ML2 networking is a really used thing, I'd like to make
sure I fully understand it, and would tend to prefer networking-calico
remaining as an ML2 mechanism driver. (Which means I also need to discuss
further about conceptually extending 'ML2' to L3-only implementations, and
raise another point about what happens when the service_plugin that you need
for some extension - say a floating IP - depends on which mechanism driver
was used to set up the relevant Port...) But if not, perhaps it would be a
better choice for networking-calico to be its own core plugin.
Thanks for reading! What do you think?