[openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

James Polley jp at jamezpolley.com
Wed Dec 10 07:29:38 UTC 2014


On Fri, Oct 31, 2014 at 3:28 PM, Ben Nemec <openstack at nemebean.com> wrote:

> On 10/29/2014 10:17 AM, Kyle Mestery wrote:
> > On Wed, Oct 29, 2014 at 7:25 AM, Hly <henry4hly at gmail.com> wrote:
> >>
> >>
> >> Sent from my iPad
> >>
> >> On 2014-10-29, at 8:01 PM, Robert van Leeuwen <
> >> Robert.vanLeeuwen at spilgames.com> wrote:
> >>
> >>>>> I find our current design removes all flows and then adds flows back
> >>>>> entry by entry; this causes every network node to break off all
> >>>>> tunnels to every other network node and all compute nodes.
> >>>> Perhaps a way around this would be to add a flag on agent startup
> >>>> which would have it skip reprogramming flows. This could be used for
> >>>> the upgrade case.
> >>>
> >>> I hit the same issue last week and filed a bug here:
> >>> https://bugs.launchpad.net/neutron/+bug/1383674
> >>>
> >>> From an operator's perspective this is VERY annoying since you also
> >>> cannot push any config changes that require/trigger a restart of the
> >>> agent.
> >>> e.g. something simple like changing a log setting becomes a hassle.
> >>> I would prefer the default behaviour to be to not clear the flows, or
> >>> at the least a config option to disable it.
> >>>
> >>
> >> +1, we also suffered from this even when a very little patch is done
> >>
> > I'd really like to get some input from the tripleo folks, because they
> > were the ones who filed the original bug here and were hit by the
> > agent NOT reprogramming flows on agent restart. It does seem fairly
> > obvious that adding an option around this would be a good way forward,
> > however.
>
> Since nobody else has commented, I'll put in my two cents (though I
> might be overcharging you ;-).  I've also added the TripleO tag to the
> subject, although with Summit coming up I don't know if that will help.
>

Summit did lead to some delays - I started this response and then got
distracted, and only just found the draft again.

>
> Anyway, if the bug you're referring to is the one I think, then our
> issue was just with the flows not existing.  I don't think we care
> whether they get reprogrammed on agent restart or not as long as they
> somehow come into existence at some point.
>

Is https://bugs.launchpad.net/bugs/1290486 the bug you're thinking of?

That seems to have been solved with https://review.openstack.org/#/c/96919/

My memory of that problem is that prior to 96919, when the daemon was
restarted, existing flows were thrown away. We'd end up with just a NORMAL
flow, which didn't route the traffic where we need it.

The fix implemented there seems to have been a canary rule that detects when
this happens - i.e., that all the existing flows have been thrown away. Once
we know that, we know we need to recreate the flows that were lost when the
daemon restarted.
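
To illustrate the idea (this is just a rough Python sketch of my
understanding, not the actual code from 96919 - the table number, bridge
name and helper names are my own assumptions): the agent installs one
do-nothing "canary" flow in a table it never uses for real traffic, and if
that flow later disappears it concludes the whole flow table was wiped and
reprograms everything:

    import subprocess
    import time

    CANARY_TABLE = 23   # a table the agent never uses for real traffic
    BRIDGE = 'br-int'

    def install_canary_flow():
        # One do-nothing flow; if it ever disappears we know vswitchd wiped
        # the bridge (e.g. an ovs restart) and took the real flows with it.
        subprocess.check_call(
            ['ovs-ofctl', 'add-flow', BRIDGE,
             'table=%d,priority=0,actions=drop' % CANARY_TABLE])

    def canary_flow_present():
        out = subprocess.check_output(
            ['ovs-ofctl', 'dump-flows', BRIDGE, 'table=%d' % CANARY_TABLE])
        # dump-flows always prints a header line, so more than one line
        # means the canary flow is still there.
        return len(out.decode().strip().splitlines()) > 1

    def rpc_loop(reprogram_all_flows, poll_interval=2):
        install_canary_flow()
        while True:
            if not canary_flow_present():
                # Flows were lost underneath us: rebuild everything, then
                # re-install the canary so we can catch the next wipe.
                reprogram_all_flows()
                install_canary_flow()
            time.sleep(poll_interval)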

If my memory is correct (and it may not be; I'm not 100% sure I fully
understood the problem at the time), the change added in 96919 doesn't
address the root cause - by the time that code is triggered and the flows
are reprogrammed, they've already been lost.



> It's possible I'm wrong about that, and probably the best person to talk
> to would be Robert Collins since I think he's the one who actually
> tracked down the problem in the first place.
>

I think (if I'm looking at the right bug) that you're referring to his
comment:

we're trying to do things before ovs-db is up and running and
neutron-openvswitch-agent is not handling ovs-db being down properly - it
should back off and retry, or alternatively, do a full sync once the db is
available.


As far as I can tell, everything after that point (i.e., once I got involved)
focused on the latter, which is why we ended up with the canary and the
reprogramming. Assuming he's right about the race condition, it sounds as
though fixing that might be preferable. Later discussion on this thread has
centered around a full flow-synchronization approach: it sounds to me as
though handling the db being unavailable will need to be part of that
approach (we don't want to synchronize towards "no rules" just because we
can't get a canonical list of rules from the DB).
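
For the sake of discussion, here's a minimal sketch of that combination (the
callables - ovsdb_reachable, fetch_desired_flows, apply_flows - are
hypothetical stand-ins, not real agent APIs): back off and retry while the
db is unreachable, and never treat a failed read as "the desired state is
empty":

    import time

    def sync_flows(ovsdb_reachable, fetch_desired_flows, apply_flows,
                   max_backoff=30):
        """Wait for the db, then converge the flow table once."""
        delay = 1
        while True:
            if not ovsdb_reachable():
                # Back off rather than treating "no answer" as "no flows".
                time.sleep(delay)
                delay = min(delay * 2, max_backoff)
                continue

            desired = fetch_desired_flows()
            if desired is None:
                # No canonical list available; don't converge towards an
                # empty flow table just because the source of truth is gone.
                time.sleep(delay)
                delay = min(delay * 2, max_backoff)
                continue

            # Full sync: add what's missing, drop what's stale, leave the
            # rest alone - no wholesale clear-and-reprogram.
            apply_flows(desired)
            return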


> -Ben
>