On Thu, Nov 1, 2018 at 5:47 AM Derek Higgins <derekh@redhat.com> wrote:
On Wed, 31 Oct 2018 at 17:22, Alex Schultz <aschultz@redhat.com> wrote:
>
> Hey everyone,
>
> Based on previous emails around this[0][1], I have proposed a possible
> reduction in our usage by switching the scenario001-011 jobs to
> non-voting and removing them from the gate[2]. This will reduce the
> likelihood of causing gate resets and hopefully allow us to land
> corrective patches sooner.  The risk is that we might introduce
> breaking changes in the scenarios once they are officially non-voting,
> and we will still be gating promotions on these scenarios.  This means
> that if they break, they will need the same attention and care to fix,
> so we should be vigilant when the jobs are failing.
>
> The hope is that we can switch these scenarios out for voting
> standalone versions in the next few weeks, but until then I think we
> should proceed by removing them from the gate.  I know this is less
> than ideal, but as most failures with these jobs in the gate are
> either timeouts or unrelated to the changes (or the gate queue), they
> are more of a hindrance than a help at this point.
>
> Thanks,
> -Alex
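
For concreteness, the change Alex describes above boils down to a Zuul
layout tweak along these lines. This is only a rough sketch: the job
names and exact layout are illustrative, not copied from the actual
reviews under [2].

    # Sketch only -- job names are examples, not the real change.
    - project:
        check:
          jobs:
            # scenario jobs stay in check but stop voting:
            - tripleo-ci-centos-7-scenario001-multinode-oooq-container:
                voting: false
        gate:
          queue: tripleo
          jobs:
            # the scenario001-011 jobs are simply dropped from this list
            - tripleo-ci-centos-7-containers-multinode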

While on the topic of reducing the CI footprint:

Something worth considering when pushing up a string of patches would
be to remove a bunch of the check jobs in the first patch of the series.

e.g. if I'm working on t-h-t and have a series of 10 patches, then
while looking for feedback I could remove most of the jobs from
zuul.d/layout.yaml in patch 1, so that the 10 patches don't each run
the entire suite of CI jobs. Once it becomes clear that the series is
nearly ready to merge, I update patch 1 to restore zuul.d/layout.yaml
to its original state.
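
A rough sketch of what the trimmed zuul.d/layout.yaml might look like
(job names and layout are illustrative, not the real t-h-t config):

    # Temporarily trimmed while the series is in review -- patch 1 will
    # restore the full job list before the series merges.
    - project:
        check:
          jobs:
            - openstack-tox-pep8
            # heavier multinode/scenario jobs commented out for now:
            # - tripleo-ci-centos-7-containers-multinode
            # - tripleo-ci-centos-7-scenario001-multinode-oooq-container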

I'm not suggesting everybody do this, but anybody who tends to push
up multiple patch sets together could consider it, to avoid tying up
resources for hours.

>
> [0] http://lists.openstack.org/pipermail/openstack-dev/2018-October/136141.html
> [1] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135396.html
> [2] https://review.openstack.org/#/q/topic:reduce-tripleo-usage+(status:open+OR+status:merged)

Greetings,

Just a quick update. The TripleO CI team is just about done migrating the multinode scenario 1-4 jobs to single-node standalone versions. This update and a few other minor changes have moved the needle on TripleO's upstream resource consumption.

In October of 2017, we had the following footprint:
tripleo: 111256883.96s, 52.45% [1]

Today, our footprint is:
tripleo: 313097590.30s, 36.70% [2]
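
As a back-of-the-envelope check (assuming each percentage is TripleO's
share of total upstream CI seconds over its own measurement window):

    111256883.96s / 0.5245 ~= 212M total seconds in the earlier window
    313097590.30s / 0.3670 ~= 853M total seconds in the recent one

So our share has dropped substantially even though our absolute usage
(and the overall pool) grew between the two windows.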

We are still working on this and should see further improvement over the next couple of months.  I'll update the list again at the end of Stein.

Thanks to Clark, Doug, Alex, Emilien, and Juan for the work to make this happen!
And thank you to the folks on the TripleO CI team; you know who you are :)