[neutron][CI] How to reduce number of rechecks - brainstorming

Ronelle Landy rlandy at redhat.com
Thu Nov 18 23:52:46 UTC 2021


On Wed, Nov 17, 2021 at 5:22 AM Balazs Gibizer <balazs.gibizer at est.tech>
wrote:

>
>
> On Wed, Nov 17 2021 at 09:13:34 AM +0100, Slawek Kaplonski
> <skaplons at redhat.com> wrote:
> > Hi,
> >
> > Recently I spent some time checking how many rechecks we need in
> > Neutron to get a patch merged, and I compared it to some other
> > OpenStack projects (see [1] for details).
> > TL;DR - the results aren't good for us, and I think we really need
> > to do something about it.
>
> I really like the idea of collecting such stats. Thank you for doing
> it. I can even imagine making a public dashboard somewhere with this
> information, as it is a good indicator of the health of our projects
> / testing.
>
> >
> > Of course the "easiest" thing to say is that we should fix the
> > issues we are hitting in the CI to make the jobs more stable. But
> > it's not that easy. We have been struggling with those jobs for a
> > very long time. We have a CI-related meeting every week and we fix
> > what we can there.
> > Unfortunately there is still a bunch of issues we haven't been able
> > to fix so far, because they are intermittent and hard to reproduce
> > locally, or in some cases the issues aren't really related to
> > Neutron, or they are new bugs which we need to investigate and fix :)
>
>
> I have a couple of suggestions based on my experience working with CI
> in nova.
>

We've struggled with unstable tests in TripleO as well. Here are some
things we tried and implemented:

1. Created job dependencies so we only ran check tests once we knew we had
the resources we needed (for example, once we had pulled containers
successfully) - see the first sketch after this list.

2. Moved some testing to third-party CI, where we have easier control of
the environment (note that third-party CI cannot stop a change from
merging)

3. Used dependency pipelines to pre-qualify some dependencies ahead of
letting them run wild on our check jobs

4. Requested testproject runs of changes in a less busy environment before
running the full set of tests in the public Zuul

5. Used a skiplist to keep track of tech debt and to skip known failures
that we could temporarily ignore to keep CI moving along while waiting on
an external fix - see the second sketch after this list.
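
Here is a minimal sketch of item 1 as Zuul project configuration. The job
names are made up for illustration; the "dependencies" attribute is the
point:

    - project:
        check:
          jobs:
            # Build/pull the containers first.
            - tripleo-ci-containers-build
            # This hypothetical tempest job is only enqueued once the
            # container job has succeeded, so we don't burn node time
            # when the prerequisite would have failed anyway.
            - tripleo-ci-tempest-full:
                dependencies:
                  - tripleo-ci-containers-build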
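
And a sketch of item 5, assuming a tempest exclude-regex variable like the
one exposed by the devstack-tempest job family (the exact variable name
varies by branch, the job names here are made up, and TripleO actually
keeps its skiplist in a dedicated repo, so treat this as illustrative
only):

    - job:
        name: tripleo-ci-tempest-full-skiplist
        parent: tripleo-ci-tempest-full   # hypothetical parent job
        vars:
          # Known intermittent failures we temporarily skip while the
          # external fixes land; each entry should have a tracking bug.
          tempest_exclude_regex: '(test_network_basic_ops|test_shelve_unshelve_server)'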



>
> 1) we try to open bug reports for intermittent gate failures too and
> keep them tagged in a list [1], so when a job fails it is easy to check
> whether the bug is already known.
>
> 2) I offer my help here: if you see something in neutron runs that
> feels non-neutron-specific, then ping me with it. Maybe we are
> struggling with the same problem too.
>
> 3) there was an informal discussion before about the possibility of
> re-running only some jobs with a recheck instead of re-running the
> whole set. I don't know if this is feasible with Zuul, and I think it
> only treats the symptom, not the root cause. But it could still be a
> direction if all else fails.
>
> Cheers,
> gibi
>
> > So this is a never-ending battle for us. The problem is that we have
> > to test various backends, drivers, etc., so as a result we have many
> > jobs running on each patch - excluding UT, pep8 and docs jobs, we
> > have around 19 jobs in the check queue and 14 jobs in the gate queue.
> >
> > In the past we made a lot of improvements. For example, we improved
> > the irrelevant-files lists for jobs so that fewer jobs run on some of
> > the patches; together with the QA team we created the
> > "integrated-networking" template to run only Neutron- and
> > Nova-related scenario tests in the Neutron queues; and we removed and
> > consolidated some of the jobs (there is still one patch in progress
> > for that, but it should remove around 2 jobs from the check queue).
> > All of these are good improvements, but still not enough to make our
> > CI really stable :/
> >
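
For anyone unfamiliar with the irrelevant-files mechanism mentioned
above: it is a per-job list of path regexes in Zuul, and a change that
only touches matching files skips that job. A rough sketch, with a
made-up job name:

    - job:
        name: neutron-tempest-plugin-scenario-example
        irrelevant-files:
          # A change touching only these paths skips this job entirely.
          - ^doc/.*$
          - ^releasenotes/.*$
          - ^.*\.rst$
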
> > Because of all of that, I would like to ask the community for any
> > other ideas on how we can improve this. If you have any ideas, please
> > send them in this email thread or reach out to me directly on IRC.
> > We want to discuss them in the next video CI meeting, which will be
> > on November 30th. If you have an idea and would like to join that
> > discussion, you are of course more than welcome in that meeting :)
> >
> > [1]
> > http://lists.openstack.org/pipermail/openstack-discuss/2021-November/025759.html
>
>
> [1]
> https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure&orderby=-date_last_updated&start=0
>
>
> >
> > --
> > Slawek Kaplonski
> > Principal Software Engineer
> > Red Hat
>
>
>
>

