[openstack-dev] [neutron] ML2 versus core plugin for OVN
Kevin Benton
blak111 at gmail.com
Thu Feb 26 09:00:52 UTC 2015
>That's just what I mean by horizontal splitting, which is limited for some
features. For example, ports belonging to the BSN driver and the OVS driver
can't communicate with each other on the same tunnel network, nor do
security groups work across both sides.
There is no tunnel network in this case, just VLAN networks. Security
groups work fine and the ports can communicate with each other over the
network. The IVS agent wires its ports' security groups and the OVS agent
wires its own. Security group filtering is local to a port, so why did you
think that wouldn't work?
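
Roughly, the coexistence works because security group wiring follows port
binding. A minimal sketch (hypothetical names and data, not code from either
driver) of that idea:

    # Each agent programs filters only for the ports bound to its own
    # vswitch, so there is no security-group state to share between them.
    PORTS = [
        {"id": "port-1", "vswitch": "ovs", "sg_rules": ["allow tcp 22"]},
        {"id": "port-2", "vswitch": "ivs", "sg_rules": ["allow tcp 80"]},
    ]

    def wire_security_groups(agent_type):
        # A real agent would program iptables or OpenFlow here; the point
        # is that it never needs to know about the other agent's ports.
        for port in PORTS:
            if port["vswitch"] == agent_type:
                for rule in port["sg_rules"]:
                    print("[%s] %s: %s" % (agent_type, port["id"], rule))

    wire_security_groups("ovs")  # wires port-1 only
    wire_security_groups("ivs")  # wires port-2 only
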
>Those agent notifications are handled by other common code in ML2, so thin
MDs can be seamlessly integrated with each other horizontally for all
features, like tunnel l2pop.
That's just the tunnel coordination issue that has already been brought up,
and it's orthogonal to whether a mechanism driver is 'thin' or 'fat'.
Someone could implement another 'fat' driver that doesn't communicate with
a backend, and it could still be incompatible with the OVS driver if it sets
up tunnels in its own way.
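
To illustrate the gap (this is a toy model, not neutron's l2pop code): for
VXLAN, every forwarder needs to learn which remote VTEP hosts each MAC. The
agents get that state from the l2pop fan-out; a driver that builds tunnels
in its own way never receives it, so its ports are unreachable over the
shared tunnel even though nothing else is misconfigured.

    # Toy model of the forwarding state that l2pop distributes to agents.
    fdb = {}  # (network, mac) -> remote VTEP IP

    def l2pop_add(network, mac, vtep_ip):
        # The l2pop fan-out effectively pushes entries like this to each
        # participating agent so it can program its VXLAN tunnel ports.
        fdb[(network, mac)] = vtep_ip

    l2pop_add("vxlan-1001", "fa:16:3e:00:00:01", "10.0.0.11")
    # A backend that manages tunnels independently would need these same
    # entries pushed to it (and vice versa), and ML2 has no common
    # mechanism for that across drivers yet.
    print(fdb)
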
To bring this back to the relevant topic: OVN can have an ML2 driver that
calls a backend without having neutron agents (agents != ML2).
Interoperability with other VXLAN drivers will be an issue because there
isn't a general solution for that yet, but that's still better (from an
interoperability perspective) than being a monolithic plugin that doesn't
allow anything else to run.
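
As a rough sketch of that agentless pattern (not OVN's actual driver; the
endpoint and payload below are made up), the mechanism driver just forwards
port changes to the backend over HTTP in its postcommit hooks instead of
notifying an agent:

    import requests

    BACKEND_URL = "https://controller.example.com/v1/ports"  # hypothetical

    class AgentlessMechanismDriver(object):
        """Skeleton of an ML2 driver that pushes work to a backend over HTTP."""

        def create_port_postcommit(self, context):
            port = context.current  # the port dict ML2 hands to drivers
            requests.post(BACKEND_URL, timeout=5, json={
                "id": port["id"],
                "network_id": port["network_id"],
                "mac_address": port["mac_address"],
            })

        def delete_port_postcommit(self, context):
            requests.delete("%s/%s" % (BACKEND_URL, context.current["id"]),
                            timeout=5)
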
On Wed, Feb 25, 2015 at 10:04 PM, loy wolfe <loywolfe at gmail.com> wrote:
>
> On Thu, Feb 26, 2015 at 10:50 AM, Kevin Benton <blak111 at gmail.com> wrote:
>
>> You can split horizontally as well (if I understand which axis definitions
>> you are using). The Big Switch driver, for example, will bind ports that
>> belong to hypervisors running IVS while leaving the OVS driver to bind
>> ports attached to hypervisors running OVS.
>>
>
> That's just what I mean by horizontal splitting, which is limited for some
> features. For example, ports belonging to the BSN driver and the OVS driver
> can't communicate with each other on the same tunnel network, nor do
> security groups work across both sides.
>
>
>> I don't fully understand your comments about the architecture of
>> neutron. Most work is delegated to either agents or a backend server.
>> Basically every ML2 driver pushes the work out via an agent notification
>> or an HTTP call of some sort.
>>
>
> Here is the key difference: thin MDs such as ovs and bridge never push any
> work to the agent; they only handle port binding, acting like a scheduler
> that selects the backend VIF type. The agent notifications are handled by
> other common code in ML2, so thin MDs can be seamlessly integrated with
> each other horizontally for all features, like tunnel l2pop. On the other
> hand, a fat MD pushes all of its work to the backend through HTTP calls,
> which partly blocks horizontal interoperation with other backends.
>
> Then I'm thinking about this pattern: ML2 w/ thin MD -> agent -> HTTP call
> to the backend. That should make horizontal interoperation much easier.
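
A "thin" MD in the sense described above does little more than decide
whether it owns the port and report the VIF type; agent notifications,
l2pop, and security groups are common ML2/agent code. A simplified sketch,
loosely modeled on the ovs/bridge drivers (names illustrative):

    class ThinMechanismDriver(object):
        """Only handles port binding; the agent does the actual wiring."""

        agent_type = "Open vSwitch agent"   # illustrative value
        vif_type = "ovs"

        def bind_port(self, context):
            for segment in context.segments_to_bind:
                for agent in context.host_agents(self.agent_type):
                    if agent["alive"]:
                        # Report the VIF type for Nova to plug; the agent on
                        # that host handles everything after that.
                        context.set_binding(segment["id"], self.vif_type,
                                            {"port_filter": True})
                        return
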
>
>
> On Feb 25, 2015 6:15 PM, "loy wolfe" <loywolfe at gmail.com> wrote:
>>
>>> Oh, what you mean is vertical splitting, while I'm talking about
>>> horizontal splitting.
>>>
>>> I'm a little confused about why Neutron is designed so differently from
>>> Nova and Cinder. In fact an MD could be very simple, delegating nearly all
>>> the work out to the agent. Remember the Cinder volume manager? The real
>>> storage backend could also be deployed outside the server farm as dedicated
>>> hardware, not necessarily a local host-based resource. The agent could act
>>> as a proxy to an outside module, instead of placing a heavy burden on the
>>> central plugin servers, and all backends could also interoperate and
>>> co-exist seamlessly (like a single VXLAN network across OVS and ToR
>>> switches in a hybrid deployment).
>>>
>>>
>>> On Thu, Feb 26, 2015 at 9:39 AM, Kevin Benton <blak111 at gmail.com> wrote:
>>>
>>>> In the cases I'm referring to, OVS handles the security groups and the
>>>> vswitch. The other drivers handle fabric configuration for VLAN tagging
>>>> to the host and whatever other plumbing they want to do.
>>>> On Feb 25, 2015 5:30 PM, "loy wolfe" <loywolfe at gmail.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Thu, Feb 26, 2015 at 3:51 AM, Kevin Benton <blak111 at gmail.com>
>>>>> wrote:
>>>>>
>>>>>> The fact that a system doesn't use a neutron agent is not a good
>>>>>> justification for a monolithic plugin vs. a driver. The VLAN drivers
>>>>>> co-exist with OVS just fine when using VLAN encapsulation, even though
>>>>>> some are agentless.
>>>>>>
>>>>> So what about security groups, and all the other things which need
>>>>> coordination between vswitches?
>>>>>
>>>>>
>>>>>> There is no general way yet to coordinate connectivity on tunnel
>>>>>> networks across drivers, but that doesn't mean you can't run multiple
>>>>>> drivers to handle different network types or just to provide additional
>>>>>> features (auditing, more access control, etc.).
>>>>>> On Feb 25, 2015 2:04 AM, "loy wolfe" <loywolfe at gmail.com> wrote:
>>>>>>
>>>>>>> +1 to a separate monolithic OVN plugin
>>>>>>>
>>>>>>> ML2 was designed for the co-existence of multiple heterogeneous
>>>>>>> backends, and it works well for all agent-based solutions: OVS, Linux
>>>>>>> Bridge, and even ofagent.
>>>>>>>
>>>>>>> However, when it comes to the various agentless solutions, especially
>>>>>>> the many SDN controllers (except for the Ryu-Lib style), the mechanism
>>>>>>> driver becomes the new monolithic place despite the benefit of code
>>>>>>> reduction: MDs can't interoperate, either between themselves or with
>>>>>>> the ovs/bridge agent l2pop, because each MD has its own exclusive VXLAN
>>>>>>> mapping/broadcasting solution.
>>>>>>>
>>>>>>> So my suggestion is to keep the "thin" MDs (with agents) in the ML2
>>>>>>> framework (also interoperating with the native Neutron L3/service
>>>>>>> plugins), while all the other "fat" MDs (agentless) go with the old
>>>>>>> style of monolithic plugin, with all L2-L7 features tightly integrated.
>>>>>>>
>>>>>>> On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) <
>>>>>>> amisaha at cisco.com> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I am new to OpenStack (and am particularly interested in
>>>>>>>> networking), and I am getting a bit confused by this discussion. Aren't
>>>>>>>> there already a few monolithic plugins (that is what I could understand
>>>>>>>> from reading the Networking chapter of the OpenStack Cloud Administrator
>>>>>>>> Guide, Table 7.3, "Available networking plug-ins")? So how do we have
>>>>>>>> interoperability between those (or do we not intend to)?
>>>>>>>>
>>>>>>>> BTW, it is funny that the acronym ML can also be used for
>>>>>>>> "monolithic" :-)
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>>
>>>>>>>> Amit Saha
>>>>>>>>
>>>>>>>> Cisco, Bangalore
>>>>>>>>
--
Kevin Benton