[OpenStack-Infra] Status of check-tempest-dsvm-f20 job

Dan Prince dprince at redhat.com
Wed Jun 18 01:32:20 UTC 2014


On Tue, 2014-06-17 at 17:05 -0400, Sean Dague wrote:
> On 06/17/2014 04:16 PM, Ian Wienand wrote:
> > Hi,
> > 
> > I added an item to today's meeting but we didn't get to it.
> > 
> > I'd like to bring up the disabling of the F20-based job, which was
> > done in [1] with some discussion in [2].
> > 
> > It's unclear to me why there are insufficient Fedora nodes.  Is the
> > problem that Fedora is booting too slowly compared to other
> > distributions?  Is there some other Fedora-specific issue we can work
> > on?
> > 
> > Demoting to experimental essentially means stopping the job and
> > letting it regress. When the job was experimental before, I was
> > triggering the run for each devstack change (to try to maintain
> > stability), but doing so also triggers about 4 other experimental
> > jobs, making the load issues even worse.
> > 
> > What needs to happen before we can get this job promoted again?
> > 
> > Thanks
> 
> It was demoted yesterday when devstack and devstack-gate changes were
> stacking up in check waiting on an f20 node to be allocated for a
> non-voting job.
> 
> When we turned it off, devstack changes had been waiting 5 hrs in check
> with no f20 node allocated. One of those was a critical fix for gate
> issues, which we just manually promoted in the gate.
> 
> Because this is how things degrade when we are using all of our quota,
> I'm really wary of adding these back until we discuss the expectations
> here (possibly in Germany). Devstack often ends up being a knob we can
> adjust to dig ourselves out of a gate backup, so subjecting it to extra
> delay when we are at load is something I don't think serves us well.
> 
> If nodepool (conceptually) filled the longest outstanding requests with
> higher priority, I'd be uber happy. This would also help us use our
> capacity more fully, because the mix of nodes we need in any given hour
> changes. But as jeblair said, this is non-trivial to implement.
> Ensuring a minimum number of nodes (maybe 1 or 2) for each class would
> have helped this particular situation: we actually had 0 nodes of that
> type in use or ready at the time.

Would this fix (or something similar) help nodepool to allocate things
more efficiently?

https://review.openstack.org/#/c/88223/

We've seen similar behavior in the TripleO pipeline when the queue gets
full.
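
For illustration only, here is a minimal, hypothetical sketch (not
nodepool's actual code or that review; the names plan_launches,
NodeRequest, and the labels are made up) of the two ideas Sean
describes: keep a small per-label minimum of ready nodes, and spend any
remaining quota on the longest-outstanding requests first.

import time
from dataclasses import dataclass


@dataclass
class NodeRequest:
    label: str            # node type, e.g. "devstack-f20"
    requested_at: float   # epoch seconds when the request entered the queue


def plan_launches(pending, ready_counts, spare_quota, min_ready=1):
    """Return an ordered list of labels to launch nodes for.

    pending      -- outstanding NodeRequests not yet matched to a node
    ready_counts -- {label: number of ready-but-unused nodes}
    spare_quota  -- how many more nodes we may launch right now
    """
    launches = []

    # Pass 1: top every known label up to a small floor of ready nodes,
    # so a low-volume node type (like f20 here) never sits at zero.
    labels = set(ready_counts) | {r.label for r in pending}
    for label in sorted(labels):
        shortfall = min_ready - ready_counts.get(label, 0)
        while shortfall > 0 and spare_quota > 0:
            launches.append(label)
            shortfall -= 1
            spare_quota -= 1

    # Pass 2: spend whatever quota is left on the oldest outstanding
    # requests, so nothing waits hours while newer requests for popular
    # labels keep getting filled.
    for req in sorted(pending, key=lambda r: r.requested_at):
        if spare_quota <= 0:
            break
        launches.append(req.label)
        spare_quota -= 1

    return launches


if __name__ == "__main__":
    now = time.time()
    pending = [
        NodeRequest("devstack-precise", now - 600),    # waiting 10 minutes
        NodeRequest("devstack-f20", now - 5 * 3600),   # waiting ~5 hours
    ]
    ready = {"devstack-precise": 3, "devstack-f20": 0}
    print(plan_launches(pending, ready, spare_quota=3))
    # -> ['devstack-f20', 'devstack-f20', 'devstack-precise']

A real allocator would of course also have to split launches across
providers and respect per-cloud quotas, but the ordering idea is the
same.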

> 
> So I'm in the 'prefer not' camp for devstack right now.
> 
> 	-Sean
> 