<div dir="ltr">We've started doing this in a slightly more reasonable way for Icehouse.<div>What we've done is:</div><div>- remove unnecessary notifications from the server</div><div>- process all port-related events, whether triggered via RPC or via the monitor, in one place</div>
<div><br></div><div>Obviously there is always a lot of room for improvement, and I agree that something along the lines of what Zang suggests would be more maintainable, ensure faster event processing, and make it easier to add some form of reliability to event processing.</div>
<div><br></div><div>I was considering doing something for the ovs-agent again in Juno, but since we're moving towards a unified agent, I think any new "big" ticket should address this effort.</div><div><br></div>
<div>Salvatore</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On 17 June 2014 13:31, Zang MingJie <span dir="ltr"><<a href="mailto:zealot0630@gmail.com" target="_blank">zealot0630@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi:<br>
<br>
Awesome! Currently we are suffering from lots of bugs in the ovs-agent, and we also<br>
intend to rebuild a more stable, flexible agent.<br>
<br>
Based on our experience with ovs-agent bugs, I think concurrency is<br>
also a very important problem: the agent receives lots of events<br>
from different greenlets (the RPC handlers, the OVS monitor, and the<br>
main loop). I'd suggest serializing all events into a queue and<br>
processing them in a dedicated thread. The thread checks the events<br>
one by one, in order, resolves what has changed, and then applies the<br>
corresponding changes. If any error occurs in the thread, it discards<br>
the event currently being processed and issues a fresh-start event,<br>
which resets everything and then applies the correct settings.<br>
<br>
The threading model is so important, and may prevent tons of bugs in<br>
future development, that we should describe it clearly in the<br>
architecture.<br>
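To make the idea concrete, here is a minimal sketch of the serialized event loop described above. All names are hypothetical; a real neutron agent would use eventlet greenthreads and oslo.messaging rather than plain stdlib threading, but the shape of the design is the same: producers only enqueue, one worker consumes in order, and any failure falls back to a full resync.<br>

```python
# Hypothetical sketch: events from RPC, the OVS monitor, and the main
# loop are serialized into one queue and handled by a single dedicated
# worker thread. On any processing error, a fresh-start (resync) event
# is enqueued, which resets everything and reapplies the settings.
import queue
import threading

RESYNC = object()  # sentinel: reset everything, then apply correct settings


class EventWorker:
    def __init__(self):
        self._events = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def submit(self, event):
        # Called from any producer greenlet/thread; producers never
        # touch agent state directly, they only enqueue.
        self._events.put(event)

    def _run(self):
        while True:
            event = self._events.get()
            try:
                if event is RESYNC:
                    self._resync()
                else:
                    self._process(event)
            except Exception:
                # Discard the failing event and schedule a fresh start.
                self._events.put(RESYNC)

    def _process(self, event):
        # Resolve what has changed and apply the corresponding changes
        # (placeholder in this sketch).
        pass

    def _resync(self):
        # Reset everything, then apply the correct settings
        # (placeholder in this sketch).
        pass
```

Because every state change flows through the one worker thread, there are no interleaved updates to reason about, and the resync path gives a single well-defined recovery point.<br>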
<div><div class="h5"><br>
<br>
On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi <<a href="mailto:mb@us.ibm.com">mb@us.ibm.com</a>> wrote:<br>
> Following the discussions in the ML2 subgroup weekly meetings, I have added<br>
> more information on the etherpad [1] describing the proposed architecture<br>
> for modular L2 agents. I have also posted some code fragments at [2]<br>
> sketching the implementation of the proposed architecture. Please have a<br>
> look when you get a chance and let us know if you have any comments.<br>
><br>
> [1] <a href="https://etherpad.openstack.org/p/modular-l2-agent-outline" target="_blank">https://etherpad.openstack.org/p/modular-l2-agent-outline</a><br>
> [2] <a href="https://review.openstack.org/#/c/99187/" target="_blank">https://review.openstack.org/#/c/99187/</a><br>
><br>
><br>
</div></div>> _______________________________________________<br>
> OpenStack-dev mailing list<br>
> <a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
><br>
<br>
</blockquote></div><br></div>