[gate][nova][qa] tempest-*-centos-8-stream job failing consistently since yesterday
Hello Everyone,

You might have noticed that the 'tempest-integrated-compute-centos-8-stream' and 'tempest-full-py3-centos-8-stream' jobs started failing the below two tests consistently since yesterday (~7 PM CST)
- https://169dddc67bd535a0361f-0632fd6194b48b475d9eb0d8f7720c6c.ssl.cf2.rackcd...

I have filed a bug and, to unblock the gate (nova & tempest), pushed a patch to make these jobs non-voting until the bug is fixed.
- https://review.opendev.org/c/openstack/tempest/+/824740

Please hold rechecks on nova or tempest (or any other affected project).

ralonsoh mentioned that this is not the same issue as the one raised in
- http://lists.openstack.org/pipermail/openstack-discuss/2022-January/026682.h...
or maybe it is triggered by the same root cause?

-gmann
On Fri, Jan 14, 2022, at 9:18 AM, Ghanshyam Mann wrote:
Hello Everyone,
You might have noticed that the 'tempest-integrated-compute-centos-8-stream' and 'tempest-full-py3-centos-8-stream' jobs started failing the below two tests consistently since yesterday (~7 PM CST)
- https://169dddc67bd535a0361f-0632fd6194b48b475d9eb0d8f7720c6c.ssl.cf2.rackcd...
I have filed a bug and, to unblock the gate (nova & tempest), pushed a patch to make these jobs non-voting until the bug is fixed.
- https://review.opendev.org/c/openstack/tempest/+/824740
Please hold rechecks on nova or tempest (or any other affected project).
ralonsoh mentioned that this is not the same issue as the one raised in - http://lists.openstack.org/pipermail/openstack-discuss/2022-January/026682.h...
or maybe it is triggered by the same root cause?
I think this is the very same issue. These tests fail in tempest when attempting to verify VM connectivity: the connectivity-checking routines fork and exec ping, and tempest does not run as root.
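For illustration only, here is a rough Python sketch of the kind of check being described - this is not tempest's actual code, and the helper name and ping options are placeholders:

import shlex
import subprocess

def ping_ip_address(ip_address, ping_count=3, deadline=10):
    # Fork/exec the system ping binary, the way the tempest connectivity
    # helpers do. tempest does not run as root, so this depends on
    # unprivileged ping working on the test node.
    cmd = "ping -c {} -w {} {}".format(ping_count, deadline,
                                       shlex.quote(ip_address))
    proc = subprocess.run(shlex.split(cmd),
                          stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL)
    # If a normal user can no longer send ICMP (for example because a distro
    # update broke the ping binary or a related sysctl), this returns False
    # even when the guest is actually reachable.
    return proc.returncode == 0

That would also explain why the failure shows up across these jobs regardless of the code under test: the break is on the test node image rather than in the nova or tempest changes being tested.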
-gmann
---- On Fri, 14 Jan 2022 11:18:28 -0600 Ghanshyam Mann <gmann@ghanshyammann.com> wrote ----
Hello Everyone,
You might have noticed that the 'tempest-integrated-compute-centos-8-stream' and 'tempest-full-py3-centos-8-stream' jobs started failing the below two tests consistently since yesterday (~7 PM CST)
- https://169dddc67bd535a0361f-0632fd6194b48b475d9eb0d8f7720c6c.ssl.cf2.rackcd...
I have filed a bug and, to unblock the gate (nova & tempest), pushed a patch to make these jobs non-voting until the bug is fixed.
- https://review.opendev.org/c/openstack/tempest/+/824740
Please hold rechecks on nova or tempest (or any other affected project).
With the devstack workaround (https://review.opendev.org/c/openstack/devstack/+/824862) in place, the jobs have been made voting again and are working fine: https://review.opendev.org/c/openstack/tempest/+/824962

Thanks Yatin.

-gmann
ralonsoh mentioned that this is not the same issue as the one raised in - http://lists.openstack.org/pipermail/openstack-discuss/2022-January/026682.h...
or maybe it is triggered by the same root cause?
-gmann
On Thu, Jan 20, 2022, at 12:42 PM, Ghanshyam Mann wrote:
---- On Fri, 14 Jan 2022 11:18:28 -0600 Ghanshyam Mann <gmann@ghanshyammann.com> wrote ----
Hello Everyone,
You might have noticed that the 'tempest-integrated-compute-centos-8-stream' and 'tempest-full-py3-centos-8-stream' jobs started failing the below two tests consistently since yesterday (~7 PM CST)
- https://169dddc67bd535a0361f-0632fd6194b48b475d9eb0d8f7720c6c.ssl.cf2.rackcd...
I have filed a bug and, to unblock the gate (nova & tempest), pushed a patch to make these jobs non-voting until the bug is fixed.
- https://review.opendev.org/c/openstack/tempest/+/824740
Please hold rechecks on nova or tempest (or any other affected project).
With the devstack workaround (https://review.opendev.org/c/openstack/devstack/+/824862) in place, the jobs have been made voting again and are working fine: https://review.opendev.org/c/openstack/tempest/+/824962
Looks like http://mirror.centos.org/centos/8-stream/BaseOS/x86_64/os/Packages/systemd-2... exists upstream of us now. Our mirrors haven't updated to pull that in yet but should soon. Then we will also need new CentOS 8 Stream images built, since systemd is included in them, and I'm not sure that systemd will get updated later. Once that happens you should be able to revert the various workarounds that have been made.
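If anyone wants to check when our mirrors have caught up, a quick hypothetical sketch is to HEAD the package URL on both sides; the package filename and the CI mirror hostname below are placeholders, not the real values:

import urllib.error
import urllib.request

def package_available(url):
    # Return True if the URL answers a HEAD request with a 2xx status.
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return 200 <= resp.status < 300
    except urllib.error.URLError:
        return False

# Placeholder package path; the real filename is truncated in this thread.
pkg = "BaseOS/x86_64/os/Packages/systemd-VERSION.el8.x86_64.rpm"
upstream = "http://mirror.centos.org/centos/8-stream/" + pkg
ci_mirror = "https://mirror.example.opendev.org/centos/8-stream/" + pkg  # hypothetical host

for name, url in (("upstream", upstream), ("CI mirror", ci_mirror)):
    print(name, "has it" if package_available(url) else "missing (404?)")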
Thanks Yatin.
-gmann
ralonsoh mentioned that this is not the same issue as the one raised in -
http://lists.openstack.org/pipermail/openstack-discuss/2022-January/026682.h...
or maybe it is triggered by the same root cause?
-gmann
On Thu, Jan 20, 2022 at 5:20 PM Clark Boylan <cboylan@sapwetik.org> wrote:
On Thu, Jan 20, 2022, at 12:42 PM, Ghanshyam Mann wrote:
---- On Fri, 14 Jan 2022 11:18:28 -0600 Ghanshyam Mann <gmann@ghanshyammann.com> wrote ----
Hello Everyone,
You might have noticed that the 'tempest-integrated-compute-centos-8-stream' and 'tempest-full-py3-centos-8-stream' jobs started failing the below two tests consistently since yesterday (~7 PM CST)
- https://169dddc67bd535a0361f-0632fd6194b48b475d9eb0d8f7720c6c.ssl.cf2.rackcd...
I have filed a bug and, to unblock the gate (nova & tempest), pushed a patch to make these jobs non-voting until the bug is fixed.
- https://review.opendev.org/c/openstack/tempest/+/824740
Please hold rechecks on nova or tempest (or any other affected project).
With the devstack workaround (https://review.opendev.org/c/openstack/devstack/+/824862) in place, the jobs have been made voting again and are working fine: https://review.opendev.org/c/openstack/tempest/+/824962
Looks like http://mirror.centos.org/centos/8-stream/BaseOS/x86_64/os/Packages/systemd-2... exists upstream of us now. Our mirrors haven't updated to pull that in yet but should soon. Then we will also need new CentOS 8 Stream images built, since systemd is included in them, and I'm not sure that systemd will get updated later.
Once that happens you should be able to revert the various workarounds that have been made.
Unfortunately, per the bz (https://bugzilla.redhat.com/show_bug.cgi?id=2037807), that package might not fix it. The fix they applied incorrectly included a '-'. Looks like we need to wait for a different version.

Thanks,
-Alex
Thanks Yatin.
-gmann
ralonsoh mentioned that this is not the same issue as the one raised in -
http://lists.openstack.org/pipermail/openstack-discuss/2022-January/026682.h...
or maybe it is triggered by the same root cause?
-gmann
---- On Thu, 20 Jan 2022 21:20:15 -0600 Alex Schultz <aschultz@redhat.com> wrote ----
On Thu, Jan 20, 2022 at 5:20 PM Clark Boylan <cboylan@sapwetik.org> wrote:
On Thu, Jan 20, 2022, at 12:42 PM, Ghanshyam Mann wrote:
---- On Fri, 14 Jan 2022 11:18:28 -0600 Ghanshyam Mann <gmann@ghanshyammann.com> wrote ----
Hello Everyone,
You might have noticed that the 'tempest-integrated-compute-centos-8-stream' and 'tempest-full-py3-centos-8-stream' jobs started failing the below two tests consistently since yesterday (~7 PM CST)
- https://169dddc67bd535a0361f-0632fd6194b48b475d9eb0d8f7720c6c.ssl.cf2.rackcd...
I have filed a bug and, to unblock the gate (nova & tempest), pushed a patch to make these jobs non-voting until the bug is fixed.
- https://review.opendev.org/c/openstack/tempest/+/824740
Please hold rechecks on nova or tempest (or any other affected project).
With the devstack workaround (https://review.opendev.org/c/openstack/devstack/+/824862) in place, the jobs have been made voting again and are working fine: https://review.opendev.org/c/openstack/tempest/+/824962
Looks like http://mirror.centos.org/centos/8-stream/BaseOS/x86_64/os/Packages/systemd-2... exists upstream of us now. Our mirrors haven't updated to pull that in yet but should soon. Then we will also need new CentOS 8 Stream images built, since systemd is included in them, and I'm not sure that systemd will get updated later.
Once that happens you should be able to revert the various workarounds that have been made.
Unfortunately, per the bz (https://bugzilla.redhat.com/show_bug.cgi?id=2037807), that package might not fix it. The fix they applied incorrectly included a '-'. Looks like we need to wait for a different version.
As the jobs are again failing on RETRY_LIMIT (404s from the CentOS Stream AppStream repo), I have made them non-voting again until they are stable - https://review.opendev.org/c/openstack/tempest/+/825730

-gmann
Thanks, -Alex
Thanks Yatin.
-gmann
ralonsoh mentioned that this is not the same issue as the one raised in -
http://lists.openstack.org/pipermail/openstack-discuss/2022-January/026682.h...
or maybe it is triggered by the same root cause?
-gmann
On 2022-01-21 10:50:33 -0600 (-0600), Ghanshyam Mann wrote: [...]
jobs are again failing on RETRY_LIMIT (404 from CentOS-Stream - AppStream) [...]
Yes, reviewing the mirror-update log[*], it appears we copied a problem state from an official mirror again, which we were publishing from 00:45:01 to 06:57:37 UTC today, so similar to yesterday's event. There's a proposal[**] to switch to a different mirror, asserting that it's somehow got additional verification measures to make it immune to these sorts of inconsistencies, so I suppose it's worth trying.

[*] https://static.opendev.org/mirror/logs/rsync-mirrors/centos.log
[**] https://review.opendev.org/825446

--
Jeremy Stanley
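As a purely illustrative aside, one could pull that rsync log and filter out today's entries to see the publication window for yourself; the exact log line format is an assumption here, so adjust the match as needed:

import urllib.request

LOG_URL = "https://static.opendev.org/mirror/logs/rsync-mirrors/centos.log"

with urllib.request.urlopen(LOG_URL, timeout=30) as resp:
    log_text = resp.read().decode("utf-8", errors="replace")

# Coarse filter: print any line that mentions the day of the incident.
for line in log_text.splitlines():
    if "2022-01-21" in line or "Jan 21" in line:
        print(line)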
participants (4)
- Alex Schultz
- Clark Boylan
- Ghanshyam Mann
- Jeremy Stanley