[neutron][CI] How to reduce number of rechecks - brainstorming

Oleg Bondarev oleg.bondarev at huawei.com
Thu Nov 25 08:57:36 UTC 2021


Hello,

A few thoughts from my side in the scope of the brainstorm:


1)      Recheck actual bugs (“recheck bug 123456”)

-        not a new idea, but it would help us keep better track of all failures

-        it forces a developer to investigate the reason for each CI failure and bump the corresponding bug's rating, or to file a new bug (or to finally go and fix it!)

-        this implies having a gate-failure bug dashboard with the hottest bugs on top (see the first sketch after this list)

-        a bare “recheck” could be forbidden, at least during a “crisis management” window


2)      Allow rechecking TIMEOUT/POST_FAILURE jobs

-        while I agree that re-running particular jobs is evil, TIMEOUT/POST_FAILURE results are not related to the patch in the majority of cases

-        performance issues are usually caught by the Rally jobs anyway

-        of course the core team should monitor whether timeouts become the rule for some jobs (the second sketch after this list shows one way to do that)


3)      Ability to block rechecks in some cases, like a known gate blocker

-        not everyone is always aware that the gates are blocked by some issue

-        the PTL (or any core team member) could turn off rechecks during that time (with a message from Zuul)

-        this doesn't happen often, but it could still save some CI resources
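
For point 1), below is a minimal sketch of the kind of tallying such a “hottest gate bugs” dashboard could start from. It assumes the review comments have already been dumped to a local file (one comment per line); the file name and input format are made up for illustration:

import re
from collections import Counter

# Matches comments like "recheck bug 1234567"; a bare "recheck" is ignored.
RECHECK_RE = re.compile(r'\brecheck bug (\d+)\b', re.IGNORECASE)


def hottest_bugs(comments):
    """Count how often each bug number was blamed in a recheck comment."""
    counts = Counter()
    for text in comments:
        for bug_id in RECHECK_RE.findall(text):
            counts[bug_id] += 1
    return counts.most_common()


if __name__ == '__main__':
    # Hypothetical input: one review comment per line in a local dump file.
    with open('review-comments.txt') as comments_file:
        for bug_id, hits in hottest_bugs(comments_file):
            print('bug %s: %d rechecks' % (bug_id, hits))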
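
And for point 2), a rough sketch of how the core team could watch whether TIMEOUT or POST_FAILURE becomes the rule for a job, by querying Zuul's public builds API with the requests library. The tenant URL, the API response shape and the job name are assumptions based on the zuul.opendev.org deployment, so adjust as needed:

import requests

# Assumed endpoint of the public Zuul builds API for the OpenStack tenant.
ZUUL_BUILDS_API = 'https://zuul.opendev.org/api/tenant/openstack/builds'


def result_ratio(job_name, result, limit=200):
    """Fraction of the last <limit> builds of a job that ended with <result>."""
    builds = requests.get(ZUUL_BUILDS_API,
                          params={'job_name': job_name, 'limit': limit}).json()
    if not builds:
        return 0.0
    hits = sum(1 for build in builds if build.get('result') == result)
    return hits / len(builds)


if __name__ == '__main__':
    # Example job name only; replace with any job from the check/gate queues.
    job = 'neutron-tempest-plugin-scenario-openvswitch'
    for res in ('TIMEOUT', 'POST_FAILURE'):
        print('%s: %.1f%% of recent runs ended in %s'
              % (job, 100 * result_ratio(job, res), res))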

Thanks,
Oleg
---
Advanced Software Technology Lab
Huawei

From: Rodolfo Alonso Hernandez [mailto:ralonsoh at redhat.com]
Sent: Monday, November 22, 2021 11:54 AM
To: Ronelle Landy <rlandy at redhat.com>
Cc: Balazs Gibizer <balazs.gibizer at est.tech>; Slawek Kaplonski <skaplons at redhat.com>; openstack-discuss <openstack-discuss at lists.openstack.org>; Oleg Bondarev <oleg.bondarev at huawei.com>; lajos.katona at ericsson.com; Bernard Cafarelli <bcafarel at redhat.com>; Miguel Lavalle <miguel at mlavalle.com>
Subject: Re: [neutron][CI] How to reduce number of rechecks - brainstorming

Hello:

I think the last idea Ronelle presented (a skiplist) could be feasible in Neutron. Of course, this list could grow indefinitely, but we can always keep an eye on it.

There could be another issue with the Neutron tempest tests that use the "advanced" image. Despite the recent improvements, we frequently have problems with the RAM size of the testing VMs; we would like to have 20% more RAM, if possible. I wish we had the ability to pre-run some checks (tempest plugin or grenade tests) on specific HW.

Slawek commented on the number of different backends we need to support and test. However, I think we can remove the Linux Bridge tempest plugin job from the "gate" list (it is already run in the "check" list). Tempest plugin tests are expensive in time and prone to errors.

This task falls on the shoulders of the Neutron team. We can also identify the long-running tests that usually fail (those that take more than 1000 seconds); a test that takes around 15 minutes to run will probably fail. We need to find those tests, investigate their slowest parts and try to improve/optimize/remove them. A sketch of how to find them follows below.
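
As a rough starting point (not something we already have), here is a sketch that pulls per-test durations out of the subunit stream from a job's logs and prints the tests above a threshold. It assumes python-subunit and testtools are installed; the input path is whatever subunit file the job produced, and the 1000 second threshold matches the number above:

import sys

import subunit
import testtools


class TimingCollector(testtools.StreamResult):
    """Record how long each test took, based on its inprogress/final events."""

    def __init__(self):
        super().__init__()
        self.started = {}
        self.durations = {}

    def status(self, test_id=None, test_status=None, timestamp=None, **kwargs):
        if not test_id or timestamp is None:
            return
        if test_status == 'inprogress':
            self.started[test_id] = timestamp
        elif test_status in ('success', 'fail') and test_id in self.started:
            delta = timestamp - self.started.pop(test_id)
            self.durations[test_id] = delta.total_seconds()


def report_slow_tests(path, threshold=1000.0):
    collector = TimingCollector()
    with open(path, 'rb') as stream:
        # Replay the subunit v2 byte stream as StreamResult events.
        subunit.ByteStreamToStreamResult(stream).run(collector)
    for test_id, seconds in sorted(collector.durations.items(),
                                   key=lambda item: -item[1]):
        if seconds < threshold:
            break
        print('%8.1fs  %s' % (seconds, test_id))


if __name__ == '__main__':
    report_slow_tests(sys.argv[1])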

Thank you all for your comments and proposals. They will help a lot in improving the Neutron CI stability.

Regards.


On Fri, Nov 19, 2021 at 12:53 AM Ronelle Landy <rlandy at redhat.com> wrote:


On Wed, Nov 17, 2021 at 5:22 AM Balazs Gibizer <balazs.gibizer at est.tech> wrote:


On Wed, Nov 17 2021 at 09:13:34 AM +0100, Slawek Kaplonski
<skaplons at redhat.com> wrote:
> Hi,
>
> Recently I spent some time checking how many rechecks we need in
> Neutron to get a patch merged and I compared it to some other
> OpenStack projects (see [1] for details).
> TL;DR - the results aren't good for us and I think we really need to
> do something about that.

I really like the idea of collecting such stats. Thank you for doing
it. I can even imagine making a public dashboard somewhere with this
information, as it is a good indication of the health of our projects
/ testing.

>
> Of course the "easiest" thing to say is that we should fix the issues
> which we are hitting in the CI to make the jobs more stable. But it's
> not that easy. We have been struggling with those jobs for a very
> long time. We have a CI related meeting every week and we are fixing
> what we can there.
> Unfortunately there is still a bunch of issues which we can't fix so
> far because they are intermittent and hard to reproduce locally, or
> in some cases the issues aren't really related to Neutron, or there
> are new bugs which we need to investigate and fix :)


I have a couple of suggestions based on my experience working with CI
in nova.

We've struggled with unstable tests in TripleO as well. Here are some things we tried and implemented:

1. Created job dependencies so we only ran check tests once we knew we had the resources we needed (for example, we had pulled containers successfully)

2. Moved some testing to third-party CI where we have easier control of the environment (note that third-party CI cannot stop a change from merging)

3. Used dependency pipelines to pre-qualify some dependencies ahead of letting them run wild on our check jobs

4. Requested testproject runs of changes in a less busy environment before running a full set of tests in the public Zuul

5. Used a skiplist to keep track of tech debt and to skip known failures that we could temporarily ignore to keep CI moving along while waiting on an external fix.



1) we try to open bug reports for intermittent gate failures too and
keep them tagged in a list [1] so when a job fails it is easy to check
if the bug is known (see the sketch after this list).

2) I offer my help here: if you see something in neutron runs that
feels non neutron specific then ping me with it. Maybe we are
struggling with the same problem too.

3) there was informal discussion before about a possibility to re-run
only some jobs with a recheck instead of re-running the whole set. I
don't know if this is feasible with Zuul, and I think this only treats
the symptom, not the root cause. But still, this could be a direction
if all else fails.
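
For reference on point 1), something like the sketch below could generate
that tagged-bug list programmatically. It is only an illustration using
launchpadlib with anonymous login; the project name, consumer name and
the order_by value (mirroring the web link in [1]) are assumptions, not
existing tooling.

from launchpadlib.launchpad import Launchpad


def gate_failure_bugs(project_name='nova'):
    """List bug tasks tagged 'gate-failure', most recently updated first."""
    lp = Launchpad.login_anonymously('gate-failure-report', 'production',
                                     version='devel')
    project = lp.projects[project_name]
    tasks = project.searchTasks(tags=['gate-failure'],
                                order_by='-date_last_updated')
    for task in tasks:
        bug = task.bug
        print('LP#%s [%s] %s' % (bug.id, task.status, bug.title))


if __name__ == '__main__':
    gate_failure_bugs()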

Cheers,
gibi

> So this is a never-ending battle for us. The problem is that we have
> to test various backends, drivers, etc., so as a result we have many
> jobs running on each patch - excluding UT, pep8 and docs jobs we have
> around 19 jobs in the check queue and 14 jobs in the gate queue.
>
> In the past we made a lot of improvements: e.g. we improved the
> irrelevant-files lists for jobs so that fewer jobs run on some of the
> patches, together with the QA team we created the
> "integrated-networking" template to run only Neutron and Nova related
> scenario tests in the Neutron queues, and we removed and consolidated
> some of the jobs (there is still one patch in progress for that, but
> it should remove around 2 more jobs from the check queue). All of
> those are good improvements, but still not enough to make our CI
> really stable :/
>
> Because of all of that, I would like to ask the community for any
> other ideas on how we can improve that. If You have any ideas, please
> send them in this email thread or reach out to me directly on IRC.
> We want to discuss them in the next video CI meeting, which will be
> on November 30th. If You have any idea and would like to join that
> discussion, You are more than welcome in that meeting of course :)
>
> [1]
> http://lists.openstack.org/pipermail/openstack-discuss/2021-November/025759.html


[1]
https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure&orderby=-date_last_updated&start=0


>
> --
> Slawek Kaplonski
> Principal Software Engineer
> Red Hat



