[openstack-dev] [infra][tripleo] initial discussion for a new periodic pipeline

Sagi Shnaidman sshnaidm at redhat.com
Tue Mar 21 18:03:27 UTC 2017


Paul,
if we run 750 OVB jobs per day, then adding 12 more will be less than a 2%
increase. I don't believe it will be a serious issue.
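(For the arithmetic: 12 / 750 = 1.6%.)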

Thanks

On Tue, Mar 21, 2017 at 7:34 PM, Paul Belanger <pabelanger at redhat.com>
wrote:

> On Tue, Mar 21, 2017 at 12:40:39PM -0400, Wesley Hayutin wrote:
> > On Tue, Mar 21, 2017 at 12:03 PM, Emilien Macchi <emilien at redhat.com>
> wrote:
> >
> > > On Mon, Mar 20, 2017 at 3:29 PM, Paul Belanger <pabelanger at redhat.com>
> > > wrote:
> > > > On Sun, Mar 19, 2017 at 06:54:27PM +0200, Sagi Shnaidman wrote:
> > > >> Hi, Paul
> > > >> I would say that a real worthwhile try starts from "normal" priority,
> > > >> because we want to run promotion jobs more *often*, not more *rarely*,
> > > >> which is what happens with low priority.
> > > >> In addition, the initial idea in the first mail was to run them almost
> > > >> back to back, not once a day as happens now or as would happen with
> > > >> "low" priority.
> > > >>
> > > > As I've said, my main reluctance is how the gate will react if we
> > > > create a new pipeline with the same priority as our check pipeline.
> > > > I would much rather err on the side of caution, default to 'low',
> > > > see how things react for a day / week / month, then see what it
> > > > would look like at 'normal'.  I want us to be cautious about adding
> > > > a new pipeline, as it dynamically changes how our existing pipelines
> > > > function.
> > > >
> > > > Furthermore, this is actually a capacity issue for
> > > > tripleo-test-cloud-rh1; there are currently too many jobs running
> > > > for the amount of hardware.  If these jobs were running on our
> > > > donated clouds, we could get away with a low priority periodic
> > > > pipeline.
> > >
> > > multinode jobs are running on donated clouds, but as you know, OVB
> > > jobs are not. We want to keep OVB jobs in our promotion pipeline
> > > because they bring high value to the tests (ironic, ipv6, ssl,
> > > probably more).
> > >
> > > Another alternative would be to reduce it to one OVB job (ironic with
> > > introspection + ipv6 + ssl at minimum) and use the 4 multinode jobs
> > > in the promotion pipeline instead of the 3 OVB ones.
> > >
> >
> > I'm +1 on using one OVB job + 4 multinode jobs.
> >
> >
> > >
> > > current: 3 OVB jobs running every night
> > > proposal: 18 OVB jobs per day
> > >
> > > The addition will cost us 15 more jobs on the rh1 load. Would it be
> > > acceptable?
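> > > (For the arithmetic: a 4-hour period means 6 runs per day, and
> > > 6 runs x 3 jobs = 18 jobs/day, i.e. 15 more than the 3 nightly ones
> > > we run today.)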
> > >
> > > > Now, allow me to propose another solution.
> > > >
> > > > The RDO project has its own deployment of zuul, which has the
> > > > ability to run periodic pipelines.  Since tripleo-test-cloud-rh2 is
> > > > still around, and has OVB capability, I would suggest configuring
> > > > this promotion pipeline within RDO, so as not to affect the capacity
> > > > of tripleo-test-cloud-rh1.  This means you can continuously enqueue
> > > > jobs at a rate of once every 4 hours; priority shouldn't matter, as
> > > > yours would be the only jobs running on tripleo-test-cloud-rh2,
> > > > resulting in faster promotions.
> > >
> > > Using RDO would also be an option. I'm just not sure about our
> > > available resources; maybe others can reply on this one.
> > >
> >
> > The purpose of the periodic jobs is twofold:
> > 1. ensure the latest built packages work
> > 2. ensure the tripleo check gates continue to work without error
> >
> > Running the promotion in review.rdoproject.org would not cover #2; the
> > rdoproject jobs would be configured in slightly different ways from
> > upstream tripleo.  Running the promotion in ci.centos has the same
> > issue.
> >
> Right, there is some leg work to use the images produced by
> openstack-infra in RDO, but that is straightforward. It would be the same
> build process that a 3rd party CI system does.  It would be a matter of
> copying nodepool.yaml from openstack-infra/project-config, and (this is
> harder) using nodepool-builder to build the images.  Today RDO uses
> snapshot images.
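>
> For reference, the dib side of that would look roughly like this in
> nodepool.yaml (a sketch only; image names and element lists are
> illustrative, see openstack-infra/project-config for the real thing):
>
>   diskimages:
>     - name: centos-7
>       elements:
>         - centos-minimal
>         - nodepool-base
>       release: '7'
>
>   providers:
>     - name: tripleo-test-cloud-rh2
>       images:
>         - name: centos-7
>           min-ram: 8192
>
> versus the snapshot approach RDO uses today, where the provider image
> points at a base-image plus a setup script instead.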
>
> > Using tripleo-test-cloud-rh2 I think is fine.
> >
> >
> > >
> > > > This also makes sense, as packaging is done in RDO, and you are
> > > > triggering CentOS CI things as a result.
> > >
> > > Yes, it would make sense. Right now we have zero TripleO testing when
> > > doing changes in RDO packages (we only run packstack and puppet jobs,
> > > which is not enough). Again, I think it's a problem of capacity here.
> > >
> >
> > We made a pass at getting multinode jobs running in RDO with tripleo.
> > That was initially not very successful, and we chose to instead focus
> > on upstream.  We *do* have it on our list to gate packages from RDO
> > builds with tripleo.  In the short term that gate will use rdocloud;
> > in the long term we'd also like to gate w/ multinode nodepool jobs in
> > RDO.
> >
> >
> >
> > >
> > > Thoughts?
> > >
> > > >> Thanks
> > > >>
> > > >> On Wed, Mar 15, 2017 at 11:16 PM, Paul Belanger
> > > >> <pabelanger at redhat.com> wrote:
> > > >>
> > > >> > On Wed, Mar 15, 2017 at 03:42:32PM -0500, Ben Nemec wrote:
> > > >> > >
> > > >> > >
> > > >> > > On 03/13/2017 02:29 PM, Sagi Shnaidman wrote:
> > > >> > > > Hi, all
> > > >> > > >
> > > >> > > > I submitted a change: https://review.openstack.org/#/c/443964/
> > > >> > > > but it seems it has reached a point which requires additional
> > > >> > > > discussion.
> > > >> > > >
> > > >> > > > I had a few proposals: increasing the period to 12 hours
> > > >> > > > instead of 4 for a start, and leaving it at the regular
> > > >> > > > periodic *low* precedence.
> > > >> > > > I think we can start from a 12 hour period to see how it
> > > >> > > > goes, although I don't think that only 4 jobs will increase
> > > >> > > > the load on the OVB cloud; it's completely negligible
> > > >> > > > compared to current OVB capacity and load.
> > > >> > > > But making its precedence "low" IMHO completely removes the
> > > >> > > > point of this pipeline, because we already run the
> > > >> > > > experimental-tripleo pipeline with this priority and it can
> > > >> > > > reach wait times like 7-14 hours. So let's assume we start a
> > > >> > > > periodic job: it's queued to run in 12 hours + the "low
> > > >> > > > queue length" - about 20 hours or more in total. That's even
> > > >> > > > worse than the usual periodic job and definitely makes this
> > > >> > > > change useless.
> > > >> > > > I'd like to note as well that these periodic jobs, unlike
> > > >> > > > the "usual" periodic ones, are used for repository
> > > >> > > > promotion, and their value is equal to or higher than that
> > > >> > > > of check jobs, so they need to run with "normal" or even
> > > >> > > > "high" precedence.
> > > >> > >
> > > >> > > Yeah, it makes no sense from an OVB perspective to add these
> > > >> > > as low priority jobs.  Once in a while we've managed to chew
> > > >> > > through the entire experimental queue during the day, but with
> > > >> > > the containers job added it's very unlikely that's going to
> > > >> > > happen anymore.  Right now we have a 4.5 hour wait time just
> > > >> > > for the check queue, then there's two hours of experimental
> > > >> > > jobs queued up behind that.  All of which means if we started
> > > >> > > a low priority periodic job right now it probably wouldn't run
> > > >> > > until about midnight my time, which I think is when the
> > > >> > > regular periodic jobs run now.
> > > >> > >
> > > >> > Let's just give it a try? A 12 hour periodic job with low
> > > >> > priority.  There is nothing saying we cannot iterate on this
> > > >> > after a few days / weeks / months.
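> > > >> >
> > > >> > Roughly (an untested sketch; pipeline and job names are
> > > >> > illustrative), that would be something like this in
> > > >> > zuul/layout.yaml:
> > > >> >
> > > >> >   pipelines:
> > > >> >     - name: periodic-tripleo-promote
> > > >> >       description: Timer-triggered jobs for TripleO promotion.
> > > >> >       manager: IndependentPipelineManager
> > > >> >       precedence: low
> > > >> >       trigger:
> > > >> >         timer:
> > > >> >           - time: '0 */12 * * *'
> > > >> >
> > > >> >   projects:
> > > >> >     - name: openstack/tripleo-ci
> > > >> >       periodic-tripleo-promote:
> > > >> >         - periodic-tripleo-ci-centos-7-ovb-ha
> > > >> >
> > > >> > Bumping precedence later is a one-line change if 'low' turns
> > > >> > out to starve the queue.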
> > > >> >
> > > >> > > >
> > > >> > > > Thanks
> > > >> > > >
> > > >> > > >
> > > >> > > > On Thu, Mar 9, 2017 at 10:06 PM, Wesley Hayutin
> > > >> > > > <whayutin at redhat.com> wrote:
> > > >> > > >
> > > >> > > >
> > > >> > > >
> > > >> > > >     On Wed, Mar 8, 2017 at 1:29 PM, Jeremy Stanley
> > > >> > > >     <fungi at yuggoth.org> wrote:
> > > >> > > >
> > > >> > > >         On 2017-03-07 10:12:58 -0500 (-0500), Wesley Hayutin wrote:
> > > >> > > >         > The TripleO team would like to initiate a
> > > >> > > >         > conversation about the possibility of creating a
> > > >> > > >         > new pipeline in OpenStack Infra to allow a set of
> > > >> > > >         > jobs to run periodically every four hours
> > > >> > > >         [...]
> > > >> > > >
> > > >> > > >         The request doesn't strike me as
> > > >> > > >         contentious/controversial. Why not just propose your
> > > >> > > >         addition to the zuul/layout.yaml file in the
> > > >> > > >         openstack-infra/project-config repo and hash out any
> > > >> > > >         resulting concerns via code review?
> > > >> > > >         --
> > > >> > > >         Jeremy Stanley
> > > >> > > >
> > > >> > > >
> > > >> > > >     Sounds good to me.
> > > >> > > >     We thought it would be nice to walk through it in an
> > > >> > > >     email first :)
> > > >> > > >
> > > >> > > >     Thanks
> > > >> > > >
> > > >> > > >
> > > >> > > > --
> > > >> > > > Best regards
> > > >> > > > Sagi Shnaidman
> > > >> > > >
> > > >> > > >
> > > >> --
> > > >> Best regards
> > > >> Sagi Shnaidman
> > > >
> > >
> > >
> > >
> > > --
> > > Emilien Macchi
> > >



-- 
Best regards
Sagi Shnaidman