[tripleo] CI is red
Greetings,

It's been a week that started with a CentOS mirror outage [0], followed by breaking changes caused by CentOS-8.2 [.5], and it has not improved much since. The mirror issues are resolved and the updates required for CentOS-8.2 have been made; here is the latest issue causing your gate jobs to fail.

tripleo-common and python-tripleoclient became out of sync and started to fail unit tests. You can see this in the DLRN builds of your changes. [1]

There was also a promotion to train [2] today, and we noticed that mirrors were failing on container pulls for a bit (train only). This should resolve over time as the mirrors refresh themselves; usually the mirrors handle a promotion more gracefully.

CI status is updated in the $topic in #tripleo. I update the $topic as needed.

Tomorrow is another day..

[0] https://bugs.launchpad.net/tripleo/+bug/1883430
[.5] https://bugs.launchpad.net/tripleo/+bug/1883937
[1] https://bugs.launchpad.net/tripleo/+bug/1884138
http://dashboard-ci.tripleo.org/d/wb8HBhrWk/cockpit?orgId=1&var-launchpad_tags=alert&var-releases=master&var-promotion_names=current-tripleo&var-promotion_names=current-tripleo-rdo&fullscreen&panelId=61
http://status.openstack.org/elastic-recheck/
http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22ERROR:dlrn:Received%20exception%20Error%20in%20build_rpm_wrapper%5C%22%20AND%20tags:%5C%22console%5C%22%20AND%20voting:1&from=864000s
https://6f2b36b80678b13e4394-1a783ef91f61ec7475bcc10015912dcc.ssl.cf2.rackcd...
[2] https://trunk.rdoproject.org/centos7-train/
One more thing.. I forgot to mention that the 3rd-party RDO clouds are experiencing problems or outages, causing 3rd-party jobs to fail as well. Ignore 3rd-party check results until I update the list.

Thanks
OK.. quick update. Upstream seems to be back to its normal GREENISH status. We're working on a fix for the ipa-server install at the moment. Third party is still RED, but we think we're close: the OVB BMCs have been updated, we've reduced the load on Vexxhost, and we're testing out the latest changes in Ironic.

Thanks all, have a good weekend
OK... last update. We're turning off OVB jobs on all non-CI repos and on the openstack-virtual-baremetal repo. We're going to try to lower usage of the 3rd-party clouds significantly until the jobs run consistently green [1]. Once we have some consistent passes, we will start adding them back to various TripleO projects. I'll keep everyone updated.

Thanks :)

[1] https://review.rdoproject.org/r/28173
participants (1)
- Wesley Hayutin