[openstack-dev] [Neutron][ML2] Modular L2 agent architecture

Narasimhan, Vivekanandan vivekanandan.narasimhan at hp.com
Tue Jun 17 17:25:23 UTC 2014



Managing the ports and the plumbing logic is today driven by the L2 agent, with little
assistance from the controller.

If we plan to move that functionality to the controller, the controller has to be more
heavyweight (in both hardware and software), since it has to do the job of the L2 agent for all
the compute servers in the cloud. We would need to re-verify all scale numbers for the controller
when POC'ing such a change.



That said, replacing the CLI with direct OVSDB calls in the L2 agent is certainly a good direction.



Today, the OVS agent invokes flow calls through OVS-Lib but has no way (or processing) to follow up
on the success or failure of those invocations. Nor is there any guarantee that all such
flow invocations will actually be executed by the third process that OVS-Lib spawns to run the CLI.

When we transition to OVSDB calls, which are more programmatic in nature, we can
enhance the Flow API (OVS-Lib) to provide finer-grained errors/return codes (or content),
and the OVS agent (and even other components) can then act on that return state more
intelligently and appropriately.
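For illustration, the kind of return state I mean could look like this. This is purely a sketch; FlowResult, FlowAPI and apply_or_resync are illustrative names, not the actual ovs_lib API:

```python
# Sketch only: names are illustrative, not the actual ovs_lib API. The
# point is that each flow invocation yields structured return state the
# agent can act on, instead of a fire-and-forget CLI call.
from collections import namedtuple

FlowResult = namedtuple('FlowResult', ['success', 'flow', 'error'])


class FlowAPI(object):
    """Toy stand-in for an OVSDB-backed Flow API in OVS-Lib."""

    def __init__(self, backend):
        self._backend = backend  # callable that applies one flow modification

    def add_flows(self, flows):
        """Apply each flow and report per-flow success/failure."""
        results = []
        for flow in flows:
            try:
                self._backend(flow)
                results.append(FlowResult(True, flow, None))
            except Exception as exc:
                results.append(FlowResult(False, flow, str(exc)))
        return results


def apply_or_resync(api, flows):
    """Agent side: act on the returned state instead of assuming success."""
    results = api.add_flows(flows)
    failed = [r.flow for r in results if not r.success]
    if failed:
        # e.g. schedule a resync limited to the affected flows/ports
        return ('resync', failed)
    return ('ok', [])
```

With per-invocation results like these, the agent could resync only the affected ports rather than restarting blindly.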



--

Thanks,



Vivek





From: Armando M. [mailto:armamig at gmail.com]
Sent: Tuesday, June 17, 2014 10:26 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture



Just a provocative thought: if we used the OVSDB connection instead, do we really need an L2 agent? :P



On 17 June 2014 18:38, Kyle Mestery <mestery at noironetworks.com> wrote:

Another area of improvement for the agent would be to move away from
executing CLIs for port commands and instead use OVSDB. Terry Wilson
and I talked about this, and re-writing ovs_lib to use an OVSDB
connection instead of the CLI methods would be a huge improvement
here. I'm not sure if Terry was going to move forward with this, but
I'd be in favor of this for Juno if he or someone else wants to move
in this direction.

Thanks,
Kyle


On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando <sorlando at nicira.com> wrote:
> We've started doing this in a slightly more reasonable way for Icehouse.
> What we've done is:
> - remove unnecessary notifications from the server
> - process all port-related events, whether triggered via RPC or via the
> monitor, in one place
>
> Obviously there is always a lot of room for improvement, and I agree
> something along the lines of what Zang suggests would be more maintainable
> and ensure faster event processing as well as making it easier to have some
> form of reliability on event processing.
>
> I was considering doing something for the ovs-agent again in Juno, but since
> we're moving towards a unified agent, I think any new "big" ticket should
> address this effort.
>
> Salvatore
>
>
> On 17 June 2014 13:31, Zang MingJie <zealot0630 at gmail.com> wrote:
>>
>> Hi:
>>
>> Awesome! We are currently suffering from lots of bugs in the ovs-agent, and
>> also intend to rebuild a more stable, flexible agent.
>>
>> Drawing on our experience with ovs-agent bugs, I think concurrency is
>> also a very important problem: the agent receives lots of events from
>> different greenlets (the RPC, the OVS monitor, and the main loop).
>> I'd suggest serializing all events into a queue, then processing them
>> in a dedicated thread. The thread checks the events one by one, in
>> order, resolves what has changed, and applies the corresponding
>> changes. If any error occurs in the thread, discard the event
>> currently being processed and issue a fresh-start event, which resets
>> everything and then applies the correct settings.
>>
>> The threading model is very important and may prevent tons of bugs in
>> future development, so we should describe it clearly in the
>> architecture.
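The serialization model described above could be sketched roughly like this (illustrative names only, not actual agent code): every source pushes onto one queue, a single worker drains it in order, and any failure falls back to a full resync.

```python
# Sketch of the event-serialization model: all sources (RPC, OVSDB
# monitor, main loop) push events onto one queue; a single dedicated
# thread applies them in order, and on error discards the failing event
# and schedules a fresh-start resync. Names are illustrative only.
import queue
import threading

RESYNC = object()  # sentinel event: reset everything, reapply settings


class SerializedEventProcessor:
    def __init__(self, apply_event, full_resync):
        self._events = queue.Queue()
        self._apply_event = apply_event    # resolve + apply one change
        self._full_resync = full_resync    # reset and reapply everything
        self._worker = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._worker.start()

    def submit(self, event):
        """Called from any greenlet/thread: RPC, monitor, main loop."""
        self._events.put(event)

    def _run(self):
        while True:
            event = self._events.get()
            try:
                if event is RESYNC:
                    self._full_resync()
                else:
                    self._apply_event(event)
            except Exception:
                # Discard the failed event and schedule a fresh start.
                self._events.put(RESYNC)
            finally:
                self._events.task_done()
```

Because only the worker thread touches the switch state, ordering is guaranteed and no locking is needed between the event sources.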
>>
>>
>> > On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi <mb at us.ibm.com>
>> wrote:
>> > Following the discussions in the ML2 subgroup weekly meetings, I have
>> > added
>> > more information on the etherpad [1] describing the proposed
>> > architecture
>> > for modular L2 agents. I have also posted some code fragments at [2]
>> > sketching the implementation of the proposed architecture. Please have a
>> > look when you get a chance and let us know if you have any comments.
>> >
>> > [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
>> > [2] https://review.openstack.org/#/c/99187/
>> >
>> >
>> > _______________________________________________
>> > OpenStack-dev mailing list
>> > OpenStack-dev at lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>
>
>
>



