[openstack-dev] [neutron] ML2 versus core plugin for OVN

loy wolfe loywolfe at gmail.com
Thu Feb 26 06:04:15 UTC 2015


On Thu, Feb 26, 2015 at 10:50 AM, Kevin Benton <blak111 at gmail.com> wrote:

> You can horizontally split as well (if I understand what axis definitions
> you are using). The Big Switch driver for example will bind ports that
> belong to hypervisors running IVS while leaving the OVS driver to bind
> ports attached to hypervisors running OVS.
>

That's exactly what I mean by horizontal splitting, which is limited for
some features. For example, ports bound by the BSN driver and by the OVS
driver can't communicate with each other on the same tunnel network, nor do
security groups work across both sides.


>  I don't fully understand your comments about the architecture of
> Neutron. Most work is delegated to either agents or a backend server.
> Basically every ML2 driver pushes the work out via an agent notification
> or an HTTP call of some sort.
>

Here is the key difference: a thin MD such as ovs or bridge never pushes any
work to the agent itself; it only handles port binding, acting like a
scheduler that selects the backend vif type. The agent notifications are
handled by other common code in ML2, so thin MDs can seamlessly
inter-operate with each other horizontally for all features, such as tunnel
l2pop. A fat MD, on the other hand, pushes all work to its backend through
HTTP calls, which partly blocks horizontal inter-operation with other
backends.
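To make the thin-MD idea concrete, here is a minimal, self-contained sketch. These classes are simplified stand-ins, not the real Neutron ML2 API (the real `PortContext` and driver interface are richer); the point is only that a thin driver answers "can I bind this port, and with what vif type?" and pushes no work to any backend:

```python
# Sketch of the "thin MD" pattern: the driver only performs port
# binding; the common ML2 code notifies the agent, which does the
# actual dataplane wiring (including l2pop).

class PortContext:
    """Simplified stand-in for the ML2 PortContext: carries the host
    and records the binding the driver chooses."""
    def __init__(self, host, agents_alive):
        self.host = host
        self.agents_alive = agents_alive  # hosts with a live L2 agent
        self.binding = None

    def set_binding(self, segment_id, vif_type):
        self.binding = (segment_id, vif_type)


class ThinMechanismDriver:
    """A thin MD acts like a scheduler selecting the backend vif type.
    It never talks to a backend; if it can't bind, it simply declines
    and another driver may try."""
    vif_type = "ovs"

    def bind_port(self, context, segments):
        if context.host not in context.agents_alive:
            return  # no live agent on this host; let another driver bind
        # Pick the first segment we can handle (simplified).
        context.set_binding(segments[0], self.vif_type)


driver = ThinMechanismDriver()
ctx = PortContext(host="compute-1", agents_alive={"compute-1"})
driver.bind_port(ctx, segments=["seg-vxlan-100"])
print(ctx.binding)  # ('seg-vxlan-100', 'ovs')
```

Because the driver's only decision is the binding, two thin MDs can coexist on the same tunnel network: whichever one binds a given port, the shared agent-notification code keeps the dataplane consistent.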

Then I'm thinking about this pattern: ML2 with a thin MD -> agent -> HTTP
call to the backend. That should make horizontal inter-operation much
easier.
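If it helps, here is a minimal sketch of that proposed pattern (all names, the URL, and the payload shape are hypothetical, not any real controller API): the agent keeps the normal RPC-style interface toward Neutron, but its "dataplane" action is an HTTP call to the external backend.

```python
import json

class ProxyAgent:
    """Sketch of the proposed pattern: a normal L2 agent endpoint that,
    instead of programming a local vswitch, forwards each port event to
    an external backend over HTTP."""
    def __init__(self, backend_url, http_post):
        self.backend_url = backend_url
        self._http_post = http_post  # injected so the sketch needs no network

    def port_update(self, port_id, network_id, segmentation_id):
        # The same kind of callback a thin-MD agent would receive from
        # ML2; here it becomes one HTTP call to the agentless backend.
        payload = json.dumps({
            "port": port_id,
            "network": network_id,
            "vni": segmentation_id,
        })
        self._http_post(self.backend_url + "/ports", payload)


calls = []
agent = ProxyAgent("http://backend.example:8080",
                   http_post=lambda url, body: calls.append((url, body)))
agent.port_update("port-1", "net-1", segmentation_id=100)
print(calls[0][0])  # http://backend.example:8080/ports
```

The design point is that the HTTP call moves out of the central plugin server into a distributed agent, so the backend still integrates with the common ML2 coordination code (l2pop, security groups) like any other agent-based solution.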


On Feb 25, 2015 6:15 PM, "loy wolfe" <loywolfe at gmail.com> wrote:
>
>> Oh, what you mean is vertical splitting, while I'm talking about
>> horizontal splitting.
>>
>> I'm a little confused about why Neutron is designed so differently from
>> Nova and Cinder. In fact, an MD could be very simple, delegating nearly
>> all work out to the agent. Remember the Cinder volume manager? The real
>> storage backend could also be deployed outside the server farm as
>> dedicated hardware, not necessarily a local host-based resource. The
>> agent could act as a proxy to an outside module, instead of placing a
>> heavy burden on the central plugin servers, and all backends could
>> inter-operate and co-exist seamlessly (like a single vxlan across ovs
>> and tor in a hybrid deployment).
>>
>>
>> On Thu, Feb 26, 2015 at 9:39 AM, Kevin Benton <blak111 at gmail.com> wrote:
>>
>>> In the cases I'm referring to, OVS handles the security groups and
>>> vswitch.  The other drivers handle fabric configuration for VLAN tagging to
>>> the host and whatever other plumbing they want to do.
>>> On Feb 25, 2015 5:30 PM, "loy wolfe" <loywolfe at gmail.com> wrote:
>>>
>>>>
>>>>
>>>> On Thu, Feb 26, 2015 at 3:51 AM, Kevin Benton <blak111 at gmail.com>
>>>> wrote:
>>>>
>>>>> The fact that a system doesn't use a neutron agent is not a good
>>>>> justification for monolithic vs driver. The VLAN drivers co-exist with OVS
>>>>> just fine when using VLAN encapsulation even though some are agent-less.
>>>>>
>>>> So how about security groups, and all the other things that need
>>>> coordination between vswitches?
>>>>
>>>>
>>>>>  There is a missing way to coordinate connectivity with tunnel
>>>>> networks across drivers, but that doesn't mean you can't run multiple
>>>>> drivers to handle different types or just to provide additional features
>>>>> (auditing,  more access control, etc).
>>>>> On Feb 25, 2015 2:04 AM, "loy wolfe" <loywolfe at gmail.com> wrote:
>>>>>
>>>>>> +1 to separate monolithic OVN plugin
>>>>>>
>>>>>> ML2 was designed for the co-existence of multiple heterogeneous
>>>>>> backends, and it works well for all agent-based solutions: OVS, Linux
>>>>>> Bridge, and even ofagent.
>>>>>>
>>>>>> However, when it comes to the various agentless solutions, especially
>>>>>> the many SDN controllers (except for the Ryu-lib style), the Mechanism
>>>>>> Driver becomes the new monolithic place despite the benefit of code
>>>>>> reduction: MDs can't inter-operate, neither among themselves nor with
>>>>>> the ovs/bridge agent L2pop, and each MD has its own exclusive vxlan
>>>>>> mapping/broadcasting solution.
>>>>>>
>>>>>> So my suggestion is to keep the "thin" MDs (with agents) in the ML2
>>>>>> framework (also inter-operating with the native Neutron L3/service
>>>>>> plugins), while all the "fat" MDs (agentless) go with the old style of
>>>>>> monolithic plugin, with all L2-L7 features tightly integrated.
>>>>>>
>>>>>> On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) <
>>>>>> amisaha at cisco.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I am new to OpenStack (and am particularly interested in
>>>>>>> networking). I am getting a bit confused by this discussion. Aren't
>>>>>>> there already a few monolithic plugins (that is what I could
>>>>>>> understand from reading the Networking chapter of the OpenStack Cloud
>>>>>>> Administrator Guide, Table 7.3, Available networking plug-ins)? So how
>>>>>>> do we have interoperability between those (or do we not intend to)?
>>>>>>>
>>>>>>> BTW, it is funny that the acronym ML can also be used for
>>>>>>> "monolithic" :-)
>>>>>>>
>>>>>>> Regards,
>>>>>>> Amit Saha
>>>>>>> Cisco, Bangalore
>>>>>>
>>>>>>
>>>>>> __________________________________________________________________________
>>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>>> Unsubscribe:
>>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev