[openstack-dev] [neutron] ML2 versus core plugin for OVN

Kevin Benton blak111 at gmail.com
Thu Feb 26 02:50:38 UTC 2015


You can split horizontally as well (if I understand which axes you mean).
The Big Switch driver, for example, will bind ports that belong to
hypervisors running IVS while leaving the OVS driver to bind ports
attached to hypervisors running OVS.
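
To make that concrete, here is a rough sketch of what that kind of
horizontal split looks like at the driver level. This is not the actual
Big Switch code; the agent-type string and the 'ivs' VIF type below are
only illustrative:

    from neutron.plugins.ml2 import driver_api as api

    AGENT_TYPE_IVS = 'IVS agent'  # illustrative agent_type string

    class IvsLikeMechanismDriver(api.MechanismDriver):
        """Claims only ports on hosts where an IVS-style agent reports in."""

        def initialize(self):
            pass

        def bind_port(self, context):
            for agent in context.host_agents(AGENT_TYPE_IVS):
                if not agent.get('alive'):
                    continue
                for segment in context.segments_to_bind:
                    # Bind the port ourselves; ports on hosts without this
                    # agent are left for the openvswitch driver to bind.
                    context.set_binding(segment[api.ID],
                                        'ivs',  # illustrative VIF type
                                        {'port_filter': True})
                    return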

I don't fully understand your comments about the architecture of neutron.
Most work is delegated to either agents or a backend server: basically
every ML2 driver pushes the work out via an agent notification or an HTTP
call of some sort. If you do want to have a discussion about the
architecture of neutron, please start a new thread. This one is about
developing an OVN plugin/driver, and we have already diverged too far.
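
For anyone following along, those two delegation styles look roughly like
this in a mechanism driver. The notifier class and the controller URL are
hypothetical placeholders, not any particular driver's API:

    import requests
    from neutron.plugins.ml2 import driver_api as api

    class FakeAgentNotifier(object):
        """Stand-in for an oslo.messaging client that casts to L2 agents."""
        def port_update(self, port):
            pass  # a real notifier would cast 'port_update' over the bus

    class AgentStyleDriver(api.MechanismDriver):
        """Delegates the real work to an agent via an RPC notification."""
        def initialize(self):
            self.notifier = FakeAgentNotifier()

        def update_port_postcommit(self, context):
            self.notifier.port_update(context.current)

    class ControllerStyleDriver(api.MechanismDriver):
        """Delegates the same work to a backend server over HTTP."""
        def initialize(self):
            self.url = 'https://controller.example.net/v1/ports'  # assumed

        def update_port_postcommit(self, context):
            requests.put('%s/%s' % (self.url, context.current['id']),
                         json=context.current, timeout=10)
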
On Feb 25, 2015 6:15 PM, "loy wolfe" <loywolfe at gmail.com> wrote:

> Oh, what you mean is vertical splitting, while I'm talking about
> horizontal splitting.
>
> I'm a little confused about why Neutron is designed so differently from
> Nova and Cinder. In fact the MD could be very simple, delegating nearly
> everything to the agent. Remember the Cinder volume manager? The real
> storage backend can also be deployed outside the server farm as dedicated
> hardware; it doesn't have to be a local, host-based resource. The agent
> could act as a proxy to an outside module instead of putting a heavy
> burden on central plugin servers, and then all backends could
> inter-operate and co-exist seamlessly (like a single VXLAN network
> spanning OVS and ToR switches in a hybrid deployment).
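
[For illustration, that "agent as proxy" model could look like the sketch
below. The class and endpoint are hypothetical; the point is only that
whatever registers as the L2 agent need not program a local vswitch.]

    import requests

    class ProxyL2Agent(object):
        """Receives the same RPC calls an OVS agent would, but forwards
        them to an external backend (ToR switch, SDN controller, ...)."""

        def __init__(self, backend_url):
            self.backend_url = backend_url  # hypothetical device/controller API

        def port_update(self, context, port=None, **kwargs):
            # Instead of touching a local vswitch, proxy the change outward.
            requests.put('%s/ports/%s' % (self.backend_url, port['id']),
                         json=port, timeout=10)
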
>
>
> On Thu, Feb 26, 2015 at 9:39 AM, Kevin Benton <blak111 at gmail.com> wrote:
>
>> In the cases I'm referring to, OVS handles the security groups and the
>> vswitch. The other drivers handle fabric configuration for VLAN tagging
>> to the host and whatever other plumbing they want to do.
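
[A rough sketch of that split, assuming the usual ML2 PortContext
attributes of that era; _trunk_vlan_to_host() is a hypothetical helper
standing in for a vendor's fabric API.]

    from neutron.plugins.ml2 import driver_api as api

    class FabricVlanDriver(api.MechanismDriver):
        """Lets the openvswitch driver bind the port and enforce security
        groups; this driver only plumbs the resulting VLAN to the host."""

        def initialize(self):
            pass

        def update_port_postcommit(self, context):
            segment = context.bottom_bound_segment
            if not segment or segment[api.NETWORK_TYPE] != 'vlan':
                return
            self._trunk_vlan_to_host(context.host,
                                     segment[api.SEGMENTATION_ID])

        def _trunk_vlan_to_host(self, host, vlan_id):
            pass  # a real driver would call the switch/fabric API here
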
>> On Feb 25, 2015 5:30 PM, "loy wolfe" <loywolfe at gmail.com> wrote:
>>
>>>
>>>
>>> On Thu, Feb 26, 2015 at 3:51 AM, Kevin Benton <blak111 at gmail.com> wrote:
>>>
>>>> The fact that a system doesn't use a neutron agent is not a good
>>>> justification for a monolithic plugin versus a driver. The VLAN drivers
>>>> co-exist with OVS just fine when using VLAN encapsulation, even though
>>>> some are agent-less.
>>>>
>>> So what about security groups, and all the other things that need
>>> coordination between vswitches?
>>>
>>>
>>>> There is a missing piece for coordinating connectivity on tunnel
>>>> networks across drivers, but that doesn't mean you can't run multiple
>>>> drivers to handle different types or just to provide additional features
>>>> (auditing, more access control, etc.).
>>>> On Feb 25, 2015 2:04 AM, "loy wolfe" <loywolfe at gmail.com> wrote:
>>>>
>>>>> +1 to a separate monolithic OVN plugin
>>>>>
>>>>> ML2 was designed for the co-existence of multiple heterogeneous
>>>>> backends, and it works well for all agent-based solutions: OVS, Linux
>>>>> Bridge, and even ofagent.
>>>>>
>>>>> However, when it comes to the various agentless solutions, especially
>>>>> the SDN controllers (except for the Ryu-lib style), the Mechanism
>>>>> Driver becomes the new monolithic place despite the benefit of code
>>>>> reduction: MDs can't inter-operate, either among themselves or with
>>>>> the ovs/bridge agents' L2pop, since each MD has its own exclusive
>>>>> VXLAN mapping/broadcast solution.
>>>>>
>>>>> So my suggestion is to keep the "thin" MDs (with an agent) in the ML2
>>>>> framework (also inter-operating with the native Neutron L3/service
>>>>> plugins), while all the "fat" MDs (agentless) go the old way of a
>>>>> monolithic plugin, with all L2-L7 features tightly integrated.
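
[For comparison, the co-existence that already works for the agent-based
backends is just a matter of listing several drivers together in
ml2_conf.ini; the values below are only illustrative.]

    [ml2]
    type_drivers = vlan,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch,linuxbridge,l2population

    [ml2_type_vxlan]
    vni_ranges = 1000:2000
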
>>>>>
>>>>> On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) <
>>>>> amisaha at cisco.com> wrote:
>>>>>
>>>>>>  Hi,
>>>>>>
>>>>>>
>>>>>>
>>>>>> I am new to OpenStack (and am particularly interested in networking),
>>>>>> and I am getting a bit confused by this discussion. Aren't there already
>>>>>> a few monolithic plugins? (That is what I understood from reading the
>>>>>> Networking chapter of the OpenStack Cloud Administrator Guide, Table 7.3,
>>>>>> "Available networking plug-ins".) So how do we have interoperability
>>>>>> between those (or do we not intend to)?
>>>>>>
>>>>>>
>>>>>>
>>>>>> BTW, it is funny that the acronym ML can also be used for
>>>>>> “monolithic” :-)
>>>>>>
>>>>>>
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Amit Saha
>>>>>>
>>>>>> Cisco, Bangalore
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>