[openstack-dev] [neutron][networking-calico] To be or not to be an ML2 mechanism driver?

Salvatore Orlando salv.orlando at gmail.com
Mon Jan 25 08:31:03 UTC 2016


I agree with Armando that, at the end of the day, user requirements should
drive these decisions.
I think you did a good job of listing the pros and cons of choosing a
standalone plugin rather than an ML2 driver.

The most important point you made, in my opinion, concerns the ability to
support multiple backends.
I find your analysis correct; however, I might simplify it by saying that
since the Calico driver is unlikely to interact with any other mechanism
driver, the remaining value of adopting ML2 lies mostly in reusing code and
implementing common Neutron "paradigms" - and, as you wrote, you can still
retain ML2's architecture even in a new plugin.

Further, what Ian wrote is also true - even with a standalone plugin you
will still be constrained by entities that are meant to represent L2
constructs.

Salvatore



On 24 January 2016 at 23:45, Armando M. <armamig at gmail.com> wrote:

>
>
> On 22 January 2016 at 10:35, Neil Jerram <Neil.Jerram at metaswitch.com>
> wrote:
>
>> networking-calico [1] is currently implemented as an ML2 mechanism
>> driver, but I'm wondering if it might be better as its own core plugin.
>> I'm looking for input about the implications of that, or for experience
>> with that kind of change; and also for experience and understanding of
>> hybrid ML2 networking.
>>
>> Here are the considerations that I'm aware of:
>>
>> * Why change from ML2 to core plugin?
>>
>> - It could be seen as resolving a conceptual mismatch.  networking-calico
>>   uses IP routing to provide L3 connectivity between VMs, whereas ML2 is
>>   ostensibly all about layer 2 mechanisms.  Arguably it's the Wrong Thing
>>   for an L3-based network to be implemented as an ML2 driver, and
>>   changing to a core plugin would fix that.
>>
>>   On the other hand, the current ML2 implementation seems to work fine,
>>   and I think that the L2 focus of ML2 may be seen as a traditional
>>   assumption, just like the previously assumed L2 semantics of neutron
>>   Networks; and it may be that the scope of 'ML2' could and should be
>>   expanded to both L2- and L3-based implementations, just as [2] is
>>   proposing to expand the scope of the neutron Network object to
>>   encompass L3-only behaviour as well as L2/L3.
>>
>> - Some simplification of the required config.  A single 'core_plugin =
>>   calico' setting could replace 'core_plugin = ml2' plus a handful of
>>   ML2 settings.
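>>
>>   For example, a hypothetical before/after (the exact type and mechanism
>>   driver settings vary by deployment):
>>
>>     # Today, with ML2 (neutron.conf plus ml2_conf.ini):
>>     [DEFAULT]
>>     core_plugin = ml2
>>
>>     [ml2]
>>     mechanism_drivers = calico
>>     type_drivers = local, flat
>>     tenant_network_types = local
>>
>>     # As a standalone core plugin (neutron.conf only):
>>     [DEFAULT]
>>     core_plugin = calico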
>>
>> - Code-wise, it's a much smaller change than you might imagine, because
>>   the new core plugin can still derive from ML2, and so internally
>>   retain the ML2 coding architecture.
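>>
>>   A minimal sketch of that approach (illustrative class and settings,
>>   not networking-calico's actual code):
>>
>>     from oslo_config import cfg
>>
>>     from neutron.plugins.ml2 import plugin as ml2_plugin
>>
>>
>>     class CalicoPlugin(ml2_plugin.Ml2Plugin):
>>         """A standalone core plugin that keeps ML2's internals."""
>>
>>         def __init__(self):
>>             # Hard-wire the driver settings that an operator would
>>             # otherwise put in ml2_conf.ini, before the parent
>>             # constructor reads them.
>>             cfg.CONF.set_override('mechanism_drivers', ['calico'],
>>                                   group='ml2')
>>             cfg.CONF.set_override('type_drivers', ['local', 'flat'],
>>                                   group='ml2')
>>             cfg.CONF.set_override('tenant_network_types', ['local'],
>>                                   group='ml2')
>>             super(CalicoPlugin, self).__init__()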
>>
>> * Why stay as an ML2 driver?
>>
>> - Perhaps because of ML2's support for multiple networking
>>   implementations in the same cluster.  To the extent that it makes
>>   sense, I'd like networking-calico networks to coexist with other
>>   networking implementations in the same data center.
>>
>>   But I'm not sure to what extent such hybrid networking is a real
>>   thing, and this is the main point on which I'd appreciate input.  In
>>   principle ML2 supports multiple network Types and multiple network
>>   Mechanisms, but I wonder how far that really works - or is useful - in
>>   practice.
>>
>>   Let's look at Types first.  ML2 supports multiple provider network
>>   types, with the Type for each network being specified explicitly by
>>   the provider API extension (provider:network_type), or else defaulting
>>   to the 'external_network_type' ML2 config setting.  However, would a
>>   cloud operator ever actually use more than one provider Type?  My
>>   understanding is that provider networks are designed to map closely
>>   onto the real network, and I guess that an operator would also favour
>>   a uniform design there, hence just using a single provider network
>>   Type.
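>>
>>   For reference, explicitly specifying a provider Type looks like this
>>   (illustrative values):
>>
>>     neutron net-create provider-net --provider:network_type vlan \
>>       --provider:physical_network physnet1 --provider:segmentation_id 101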
>>
>>   For tenant networks ML2 allows multiple network Types to be configured
>>   in the 'tenant_network_types' setting.  However, if my reading of the
>>   code is correct, only the first of these Types will ever be used for a
>>   tenant network - unless the system runs out of the 'resources' needed
>>   for that Type, for example if the first Type is 'vlan' but there are
>>   no VLAN IDs left to use.  Is that a feature that is used in practice,
>>   within a given deployment?  For example, to first use VLANs for tenant
>>   networks, then switch to something else when those run out?
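>>
>>   Illustrative config for that fallback scenario (the VLAN range is
>>   made up):
>>
>>     [ml2]
>>     tenant_network_types = vlan, gre
>>
>>     [ml2_type_vlan]
>>     # Just 100 VLAN IDs; once they are exhausted, ML2 would fall back
>>     # to allocating 'gre' tenant networks.
>>     network_vlan_ranges = physnet1:1000:1099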
>>
>>   ML2 also supports multiple mechanism drivers.  When a new Port is
>>   being created, ML2 calls each mechanism driver to give it the chance
>>   to do binding and connectivity setup for that Port.  In principle, if
>>   multiple mechanism drivers are present, I guess each one is supposed
>>   to look at some of the available Port data - and perhaps the network
>>   Type - and thereby infer whether it should be responsible for that
>>   Port, and so do the setup for it.  But I wonder if anyone runs a cloud
>>   where that really happens?  If so, have I got it right?
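>>
>>   A minimal sketch of that pattern (a made-up driver; bind_port and
>>   set_binding are ML2's actual driver API):
>>
>>     from neutron.plugins.ml2 import driver_api as api
>>
>>
>>     class ExampleMechanismDriver(api.MechanismDriver):
>>         """Claims only Ports on segments whose Type it handles."""
>>
>>         def initialize(self):
>>             pass
>>
>>         def bind_port(self, context):
>>             for segment in context.segments_to_bind:
>>                 # Use the network Type to infer whether this driver
>>                 # should be responsible for the Port; otherwise leave
>>                 # it for the other mechanism drivers.
>>                 if segment[api.NETWORK_TYPE] == 'flat':
>>                     context.set_binding(segment[api.ID],
>>                                         'bridge',  # illustrative VIF type
>>                                         {'port_filter': True})
>>                     return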
>>
>
> Have you considered asking these questions of your 'customers'? They are
> the ones you should listen to :)
>
> Ultimately both choices are reasonably valid and both have pros and cons:
> moving away from ML2 frees you up and lets you be fully in control, but
> you'll lose access to compl(i|e)mentary L2 and L3 services. Do you need
> those? That's up to you and/or your customers. There's no right or wrong,
> but knowing that Calico already has a unique relationship with Neutron
> (which you worked hard to nail down) and the ongoing effort [1], perhaps
> it's safer to stay put for a cycle and see how that plays out.
>
> OVN went through the same decision-making process; you may want to reach
> out to those folks to hear what their opinion was, and reconsider the
> urgency of the switch.
>
> Should you switch, you should also take into consideration the cost of
> migrating (if you have existing deployments).
>
> Cheers,
> Armando
>
> [1] https://review.openstack.org/#/c/225384/
>
>
>>
>> All in all, if hybrid ML2 networking really is a used thing, I'd like
>> to make sure I fully understand it, and would tend to prefer
>> networking-calico remaining as an ML2 mechanism driver.  (Which means I
>> also need to discuss further the idea of conceptually extending 'ML2'
>> to L3-only implementations, and raise another point about what happens
>> when the service_plugin that you need for some extension - say a
>> floating IP - depends on which mechanism driver was used to set up the
>> relevant Port...)  But if not, perhaps it would be a better choice for
>> networking-calico to be its own core plugin.
>>
>> Thanks for reading!  What do you think?
>>
>>        Neil
>>
>>
>> [1] http://docs.openstack.org/developer/networking-calico/
>> [2] https://review.openstack.org/#/c/225384/