[qa][ptg] QA Bobcat vPTG summary

Martin Kopec mkopec at redhat.com
Thu Apr 6 09:37:51 UTC 2023


Hi everyone,

here is a summary of the discussions. The full agenda can be found here [1].

== Antelope Retrospective ==
Good things: kudos to everyone who is still around and helps with reviews,
patches, and bug fixes.
Bad things: an unstable gate due to the tox update, timeouts, and job
flakiness (mainly the multinode job).

== uWSGI is in maintenance mode ==
This doesn't require immediate action: maintenance mode means bugs should
still get fixed, so until we need new features we don't have to rush into
replacing it.
We should keep it on the watchlist, though, and potentially test some
alternatives - I've put this down as one of the priority items for this
cycle [2].

== FIPS interop testing ==
We have had a FIPS goal since the last PTG [3].
Patches [4] and [5] add FIPS on Ubuntu, and [6] and [7] add FIPS to a
CentOS job.
The next goal is to have a FIPS job on Rocky Linux, as it's more stable
than CentOS.

== Retiring Patrole ==
We are going to retire the Patrole project. Two main facts speak in favor
of retiring it:
Fact 1: the plugins have enough RBAC tests (and are writing more to test
SRBAC), so Patrole is not needed; the test coverage provided by the
plugins is sufficient.
Fact 2: the Patrole gates have been broken for a long time and no one was
affected by that.

== Additional PyPI maintainer cleanup ==
Only 2 QA projects are left with additional maintainers:
* openstack/os-performance-tools - additional maintainer SpamapS (Clint
Byrum)
* openstack/patrole - additional maintainer DavidPurcell
If you know either of them, please ask them to reach out to me. I've
emailed them, but the addresses I found may not be checked regularly
anymore.

== Tempest/Devstack gate stability ==
Several measures have been taken, such as reducing MySQL memory usage [8],
reducing the number of tests executed per job, increasing a timeout,
etc. [9].
However, these measures are mainly workarounds. The important note is
that the instability was only highlighted by our testing, not caused by it.
Many of the problems are inefficiencies in OpenStack software, e.g.:
* openstack-client issues a new token for every operation (see the first
sketch below)
* Python startup time - entry point lookup (see the second sketch below)
* privsep processes in aggregate use more memory than nova (every service
runs its own copy)
* inefficient Ansible code, etc.
Are there any volunteers who would like to first analyse the situation,
find out which of these inefficiencies are the biggest culprits, and how
big an effect they have on the system?
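
To illustrate the first bullet, a minimal sketch (not the actual devstack
or tempest tooling; the cloud name "devstack" is just an assumed
clouds.yaml entry) of how a single reused openstacksdk connection avoids
re-authenticating for every operation, unlike invoking the openstack CLI
once per command:

    # Illustrative only: one openstacksdk connection authenticates once and
    # reuses the cached token for all subsequent calls, whereas running the
    # `openstack` CLI once per command re-authenticates every time.
    # The cloud name "devstack" is an assumed clouds.yaml entry.
    import openstack

    conn = openstack.connect(cloud="devstack")  # token obtained once, cached

    # Both loops below reuse the same keystoneauth session and token.
    for server in conn.compute.servers():
        print(server.name)
    for network in conn.network.networks():
        print(network.name)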
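
And a rough, illustrative way (not a benchmark the QA team uses) to see
the entry point scan cost behind the Python startup time bullet:

    # Illustrative only: importlib.metadata has to walk the metadata of
    # every installed distribution, which on a devstack node with many
    # OpenStack packages adds noticeable latency to every CLI invocation.
    import time
    from importlib.metadata import entry_points

    start = time.perf_counter()
    entry_points()  # scans the metadata of all installed distributions
    print(f"entry point scan: {time.perf_counter() - start:.3f}s")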

[1] https://etherpad.opendev.org/p/qa-bobcat-ptg
[2] https://etherpad.opendev.org/p/qa-bobcat-priority
[3] https://etherpad.opendev.org/p/antelope-ptg-interop
[4] https://review.opendev.org/c/zuul/zuul-jobs/+/866881
[5] https://review.opendev.org/c/zuul/zuul-jobs/+/873893
[6] https://review.opendev.org/c/openstack/devstack/+/871606
[7] https://review.opendev.org/c/openstack/tempest/+/873697
[8] https://review.opendev.org/c/openstack/devstack/+/873646
[9] https://review.opendev.org/q/topic:bug%252F2004780

Regards,
-- 
Martin Kopec
Principal Software Quality Engineer
Red Hat EMEA
IM: kopecmartin