[openstack-dev] [all] Zuul job backlog

Wesley Hayutin whayutin at redhat.com
Wed Oct 3 16:26:50 UTC 2018


On Fri, Sep 28, 2018 at 3:02 PM Matt Riedemann <mriedemos at gmail.com> wrote:

> On 9/28/2018 3:12 PM, Clark Boylan wrote:
> > I was asked to write a followup to this as the long Zuul queues have
> persisted through this week, largely because the situation from last week
> hasn't changed much. We were down the upgraded cloud region while we worked
> around a network configuration bug; then, once that was addressed, we ran
> into neutron port assignment and deletion issues. We think these are both
> fixed and we are running in this region again as of today.
> >
> > Other good news is that our classification rate is up significantly. We
> can use that information to go through the top identified gate bugs:
> >
> > Network Connectivity issues to test nodes [2]. This is the current top
> of the list, but I think its impact is relatively small. What is happening
> here is that jobs fail to connect to their test nodes early in the pre-run
> playbook. Zuul will rerun these jobs for us because they failed in the
> pre-run step. Prior to zuulv3 we had nodepool run a ready script before
> marking test nodes as ready; this script would've caught and filtered out
> these broken network nodes early. We now notice them late, during the
> pre-run of a job.
> >
> > Pip fails to find distribution for package [3]. Earlier in the week we
> had the in-region mirror fail in two different regions due to unrelated
> errors. These mirrors were fixed, and the only other hits for this bug come
> from Ara, which tried to install the 'black' package on python3.5 even
> though this package requires python>=3.6.
> >
> > yum, no more mirrors to try [4]. At first glance this appears to be an
> infrastructure issue because the mirror isn't serving content to yum. On
> further investigation it turned out to be a DNS resolution issue caused by
> the installation of designate in the TripleO jobs. TripleO is aware of this
> issue and is working to correct it.
> >
> > Stackviz failing on py3 [5]. This is a real bug in stackviz caused by
> subunit data being binary rather than utf8 encoded strings. I've written a
> fix for this problem at https://review.openstack.org/606184, but in doing
> so found that this was a known issue back in March and there was already a
> proposed fix, https://review.openstack.org/#/c/555388/3. It would be
> helpful if the QA team could care for this project and get a fix in.
> Otherwise, we should consider disabling stackviz on our tempest jobs
> (though the output from stackviz is often useful).
> >
> > There are other bugs being tracked by e-r. Some are bugs in the
> OpenStack software and I'm sure some are also bugs in the infrastructure. I
> have not yet had the time to work through the others. It would be
> helpful if project teams could prioritize the debugging and fixing of these
> issues.
> >
> > [2] http://status.openstack.org/elastic-recheck/gate.html#1793370
> > [3] http://status.openstack.org/elastic-recheck/gate.html#1449136
> > [4] http://status.openstack.org/elastic-recheck/gate.html#1708704
> > [5] http://status.openstack.org/elastic-recheck/gate.html#1758054
>
> Thanks for the update, Clark.
>
> Another thing this week is that the logstash indexing is behind by at
> least half a day. That's because workers were hitting OOM errors due to
> giant screen log files that aren't formatted properly so that we only
> index INFO+ level logs; the workers were instead trying to index the
> entire files, some of which are 33MB *compressed*. So indexing of those
> identified problematic screen logs has been disabled:
>
> https://review.openstack.org/#/c/606197/
>
> I've reported bugs against each related project.
>
> --
>
> Thanks,
>
> Matt
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Greetings Clark and all,
The TripleO team would like to announce a significant change to the
upstream CI the project has in place today.

TripleO can at times consume a large share of the compute resources [1]
provided by the OpenStack upstream infrastructure team and OpenStack
providers.  The TripleO project has a large code base and a high velocity
of change, which alone can tax the upstream CI system [3]. Additionally,
as with other projects, the issue is particularly acute when gate jobs are
reset at a high rate.  Unlike most other projects in OpenStack, TripleO
uses multiple nodepool nodes in each job to more closely emulate
customer-like deployments.  While using multiple nodes per job helps to
uncover bugs that are not found in other projects, the resources used, the
run time of each job, and usability have proven to be challenging.  It has
been a challenge to maintain job run times, quality and usability for
TripleO, and a challenge for the infra team to provide the required
compute resources for the project.
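
For a sense of why gate resets are costly: Zuul's gate pipeline runs jobs
speculatively for every change queued behind the one being tested, so a
single failure forces the jobs for the changes behind it to be restarted.
A back-of-the-envelope sketch in Python (the queue depth, job count, and
node-hours below are assumed figures for illustration, not measurements):

    # Illustrative only: rough node-hours discarded by a single gate reset.
    queue_depth = 10         # changes queued behind the failing change (assumed)
    jobs_per_change = 8      # gate jobs per change (assumed)
    node_hours_per_job = 5   # e.g. a two-node job running ~2.5 hours

    wasted = queue_depth * jobs_per_change * node_hours_per_job
    print("~%d node-hours re-run after one reset" % wasted)  # ~400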

A simplification of our upstream deployments to check and gate changes is
in order.

The TripleO project has created a single-node, container-based composable
OpenStack deployment [2]. It is the project's intention to replace most of
the TripleO upstream jobs with the Standalone deployment.  We would like to
reduce our multi-node usage to a total of two or three multinode jobs to
handle a basic overcloud deployment, updates and upgrades. Currently in
master we are relying on multiple multi-node scenario jobs to test many of
the OpenStack services in a single job. Our intention is to move these
multinode scenario jobs to single-node job(s) that each test a smaller
subset of services. The goal is to target the specific areas of the
TripleO code base that affect these services and run the jobs only there
(a sketch of that idea follows below). This would replace the existing 2-3
hour two-node job(s) with single-node job(s) for specific services that
complete in about half the time.  This will unfortunately reduce the
overall coverage upstream but still allow us a basic smoke test of the
supported OpenStack services and their deployment upstream.
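
Zuul supports limiting a job to changes that touch particular files, which
is one way this targeting could be implemented. The Python sketch below
only illustrates the matching idea; the patterns and job names are
hypothetical and this is not the actual Zuul configuration:

    # Sketch of path-based job selection; patterns and job names are hypothetical.
    from fnmatch import fnmatch

    job_file_filters = {
        "standalone-telemetry": ["*environments/*telemetry*",
                                 "*docker/services/*ceilometer*"],
        "standalone-octavia": ["*environments/*octavia*",
                               "*docker/services/*octavia*"],
    }

    def jobs_for_change(changed_files):
        """Return the service-specific jobs whose file filters match the change."""
        return sorted(
            job
            for job, patterns in job_file_filters.items()
            if any(fnmatch(path, pat)
                   for path in changed_files for pat in patterns)
        )

    print(jobs_for_change(["environments/services/octavia.yaml"]))
    # ['standalone-octavia']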

Ideally, projects other than TripleO would make use of the Standalone
deployment to test their particular service with containers, upgrades, or
for various other reasons.  Additional projects using this deployment would
help ensure bugs are found and resolved quickly, providing additional
resilience to the upstream gate jobs. The TripleO team will begin review to
scope out and create estimates for the above work starting on October 18,
2018.  One should expect to see updates on our progress posted to the
list.  Below are some details on the proposed changes.

Thank you all for your time and patience!

Performance improvements:
  * Standalone jobs use half the nodes of multinode jobs
  * The standalone job has an average run time of 60-80 minutes, about half
the run time of our multinode jobs
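
As rough arithmetic on the numbers above (these are the quoted averages, so
exact savings will vary by job):

    # Rough per-job comparison based on the averages quoted above.
    multinode_node_hours = 2 * 2.5         # two nodes x ~2.5 hours
    standalone_node_hours = 1 * (70 / 60)  # one node x ~70 minutes

    savings = 1 - standalone_node_hours / multinode_node_hours
    print("~%.0f%% fewer node-hours per job" % (savings * 100))  # ~77%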

Base TripleO Job Definitions (Stein onwards):
Multi-node jobs
  * containers-multinode
  * containers-multinode-updates
  * containers-multinode-upgrades
Single node jobs
  * undercloud
  * undercloud-upgrade
  * standalone

Jobs to be removed (Stein onwards):
Multi-node jobs
  * scenario001-multinode
  * scenario002-multinode
  * scenario003-multinode
  * scenario004-multinode
  * scenario006-multinode
  * scenario007-multinode
  * scenario008-multinode
  * scenario009-multinode
  * scenario010-multinode
  * scenario011-multinode

Jobs that may need to be created to cover additional services[4] (Stein
onwards):
Single node jobs
  * standalone-barbican
  * standalone-ceph
  * standalone-designate
  * standalone-manila
  * standalone-octavia
  * standalone-openshift
  * standalone-sahara
  * standalone-telemetry

[1] https://gist.github.com/notmyname/8bf3dbcb7195250eb76f2a1a8996fb00
[2]
https://docs.openstack.org/tripleo-docs/latest/install/containers_deployment/standalone.html
[3]
http://lists.openstack.org/pipermail/openstack-dev/2018-September/134867.html
[4]
https://github.com/openstack/tripleo-heat-templates/blob/master/README.rst#service-testing-matrix



-- 

Wes Hayutin

Associate Manager

Red Hat

<https://www.redhat.com/>

whayutin at redhat.com    T: +1 919 423 2509    IRC: weshay
