[openstack-dev] [Neutron][ML2] Modular L2 agent architecture
henry hly
henry4hly at gmail.com
Thu Jun 19 06:44:47 UTC 2014
The OVS agent manipulates not only the OVS flow table but also the Linux
network stack, which is not so easily replaced by a pure OpenFlow controller
today.
The fastpath/slowpath separation sounds good, but it is a real nightmare for
applications with highly concurrent connections if we set L4 flows into OVS
(in our testing, the vswitchd daemon always stopped working in this case).
Someday, when OVS can handle all L2-L4 rules in the kernel without involving
the userspace classifier, a pure OpenFlow controller will be able to replace
the agent-based solution. OVS hooking into netfilter conntrack may come this
year, but that alone is not enough yet.
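To make the concern concrete: matching on L4 fields means each distinct
connection tuple can become its own flow entry, so the rule set grows with the
number of concurrent connections. A minimal sketch (hypothetical helper name,
ovs-ofctl flow syntax) of how per-connection L4 rules multiply:

```python
def l4_flow(nw_src, nw_dst, tp_dst):
    # One OVS flow entry per connection tuple, in ovs-ofctl add-flow syntax.
    # With highly concurrent connections, this multiplies into a huge rule
    # set that the userspace classifier has to cope with.
    return ("priority=100,tcp,nw_src=%s,nw_dst=%s,tp_dst=%d,actions=normal"
            % (nw_src, nw_dst, tp_dst))

# e.g. one flow per client host talking to a single server port
flows = [l4_flow("10.0.0.%d" % host, "10.0.1.10", 80)
         for host in range(1, 255)]
```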
On Wed, Jun 18, 2014 at 12:56 AM, Armando M. <armamig at gmail.com> wrote:
> Just a provocative thought: if we used the OVSDB connection instead, do we
> really need an L2 agent? :P
>
>
> On 17 June 2014 18:38, Kyle Mestery <mestery at noironetworks.com> wrote:
>
>> Another area of improvement for the agent would be to move away from
>> executing CLIs for port commands and instead use OVSDB. Terry Wilson
>> and I talked about this, and re-writing ovs_lib to use an OVSDB
>> connection instead of the CLI methods would be a huge improvement
>> here. I'm not sure if Terry was going to move forward with this, but
>> I'd be in favor of this for Juno if he or someone else wants to move
>> in this direction.
>>
>> Thanks,
>> Kyle
>>
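For context on the OVSDB route Kyle describes: instead of shelling out to
ovs-vsctl, ovs_lib could speak the OVSDB management protocol (RFC 7047)
directly over a connection. A hedged sketch of the wire-level JSON-RPC
transaction for adding a port (the helper name and simplified rows are
illustrative, not ovs_lib's actual API):

```python
import json

def add_port_txn(bridge, port_name):
    # Build an RFC 7047 "transact" request against the Open_vSwitch schema:
    # insert an Interface row, a Port row referencing it, and add the new
    # Port to the Bridge's ports column -- all in one atomic transaction.
    ops = [
        {"op": "insert", "table": "Interface",
         "row": {"name": port_name}, "uuid-name": "iface"},
        {"op": "insert", "table": "Port",
         "row": {"name": port_name,
                 "interfaces": ["named-uuid", "iface"]},
         "uuid-name": "port"},
        {"op": "mutate", "table": "Bridge",
         "where": [["name", "==", bridge]],
         "mutations": [["ports", "insert",
                        ["set", [["named-uuid", "port"]]]]]},
    ]
    return json.dumps({"method": "transact",
                       "params": ["Open_vSwitch"] + ops, "id": 0})
```

One round trip replaces several CLI invocations, and the server applies the
whole transaction atomically or not at all.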
>> On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando <sorlando at nicira.com>
>> wrote:
>> > We've started doing this in a slightly more reasonable way for Icehouse.
>> > What we've done is:
>> > - remove unnecessary notifications from the server
>> > - process all port-related events, whether triggered via RPC or via the
>> > monitor, in one place
>> >
>> > Obviously there is always a lot of room for improvement, and I agree that
>> > something along the lines of what Zang suggests would be more maintainable
>> > and ensure faster event processing, as well as making it easier to have
>> > some form of reliability in event processing.
>> >
>> > I was considering doing something for the ovs-agent again in Juno, but
>> > since we're moving towards a unified agent, I think any new "big" ticket
>> > should address this effort.
>> >
>> > Salvatore
>> >
>> >
>> > On 17 June 2014 13:31, Zang MingJie <zealot0630 at gmail.com> wrote:
>> >>
>> >> Hi:
>> >>
>> >> Awesome! We are currently suffering from lots of bugs in the ovs-agent,
>> >> and we also intend to rebuild a more stable and flexible agent.
>> >>
>> >> Based on our experience with ovs-agent bugs, I think concurrency is also
>> >> a very important problem: the agent receives lots of events from
>> >> different greenlets (RPC, the OVS monitor, and the main loop).
>> >> I'd suggest serializing all events into a queue, then processing them in
>> >> a dedicated thread. The thread checks the events one by one, in order,
>> >> works out what has changed, and then applies the corresponding changes.
>> >> If any error occurs in the thread, it discards the event currently being
>> >> processed and performs a fresh-start event, which resets everything and
>> >> then applies the correct settings.
>> >>
>> >> The threading model is so important, and could prevent tons of bugs in
>> >> future development, that we should describe it clearly in the
>> >> architecture.
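The serialized-event model Zang describes can be sketched as follows. This is
a minimal illustration with hypothetical names, not the actual ovs-agent code:
every source (RPC handlers, the OVS monitor, the main loop) submits to one
queue, a single worker thread applies events in order, and any failure
triggers a full resync.

```python
import queue
import threading

class EventProcessor:
    """Serialize events from many sources into one ordered queue,
    processed by a single dedicated worker thread."""

    def __init__(self, apply_event, full_resync):
        self._events = queue.Queue()
        self._apply = apply_event      # apply one event's changes
        self._resync = full_resync     # reset everything, reapply settings
        self._worker = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._worker.start()

    def submit(self, event):
        # Called from any greenlet/thread: RPC, OVS monitor, main loop.
        self._events.put(event)

    def _run(self):
        while True:
            event = self._events.get()
            try:
                self._apply(event)
            except Exception:
                # Discard the failing event and fall back to a fresh start
                # that resets state and applies the correct settings.
                self._resync()
            finally:
                self._events.task_done()
```

Because only one thread ever touches the dataplane state, the races between
RPC, monitor, and main-loop greenlets disappear by construction.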
>> >>
>> >>
>> >> On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi <mb at us.ibm.com>
>> >> wrote:
>> >> > Following the discussions in the ML2 subgroup weekly meetings, I have
>> >> > added more information on the etherpad [1] describing the proposed
>> >> > architecture for modular L2 agents. I have also posted some code
>> >> > fragments at [2] sketching the implementation of the proposed
>> >> > architecture. Please have a look when you get a chance and let us know
>> >> > if you have any comments.
>> >> >
>> >> > [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
>> >> > [2] https://review.openstack.org/#/c/99187/
>> >> >
>> >> >
>> >> > _______________________________________________
>> >> > OpenStack-dev mailing list
>> >> > OpenStack-dev at lists.openstack.org
>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >> >
>> >>
>> >
>> >
>> >
>>
>
>