[placement] zuul job dependencies for greater good?

Sean Mooney smooney at redhat.com
Mon Feb 25 18:40:45 UTC 2019


On Mon, 2019-02-25 at 18:20 +0000, Sorin Sbarnea wrote:
> I asked the same thing some time ago but we didn't have time to implement it, as it is much harder to do in projects where
> the list of jobs changes quickly.
> 
> Maybe if we had some placeholder jobs like phase1/2/3 it would be easier to migrate to such a setup.
> stage1 - cheap jobs like linters, docs,... - <10min
> stage2 - medium jobs like functional <30min
> stage3 - fat/expensive jobs like tempest, update/upgrade. >30min
yep, I also suggested something similar where we would run all the non-devstack jobs first, then everything else.
whether the second set was conditional or always run was a separate conversation, but I think
there is value in reporting the results of the quick jobs first, then everything else.
I personally would do just two levels.
os-vif, for example, completes all jobs except the one tempest job in under 6 minutes.
granted, I run all the non-integration jobs locally for my own patches, but it would be nice to get
the feedback quicker for other people's patches, as I often find myself checking zuul.openstack.org
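
for what it's worth, a minimal sketch of what a two-level setup could look like using the job.dependencies attribute in a project's .zuul.yaml (the job names below are just illustrative placeholders, not the actual placement job list):

```yaml
# Hypothetical project-pipeline config: the expensive tempest and
# grenade jobs only start once the quick jobs have succeeded.
- project:
    check:
      jobs:
        # stage 1: quick jobs, run immediately
        - openstack-tox-pep8
        - openstack-tox-py37
        - openstack-tox-functional-py37
        # stage 2: expensive jobs, gated on stage 1 passing
        - tempest-full:
            dependencies:
              - openstack-tox-pep8
              - openstack-tox-py37
              - openstack-tox-functional-py37
        - grenade:
            dependencies:
              - openstack-tox-pep8
              - openstack-tox-py37
              - openstack-tox-functional-py37
```

with placeholder stage jobs as Sorin suggests, the dependencies lists would only name the placeholders instead of every quick job, which avoids refactoring when the job list changes.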

> 
> The idea of placeholders is to avoid having to refactor lots of dependencies.
> 
> Cheers
> Sorin
> > On 25 Feb 2019, at 17:47, Chris Dent <cdent+os at anticdent.org> wrote:
> > 
> > 
> > Zuul has a feature that makes it possible to only run some jobs
> > after others have passed:
> > 
> >    https://zuul-ci.org/docs/zuul/user/config.html#attr-job.dependencies
> > 
> > Except for tempest and grenade (which take about an hour to 1.5
> > hours to run, sometimes a lot more) the usual time for any of the
> > placement tests is less than 6 minutes each, sometimes less than 4.
> > 
> > I've been wondering if we might want to consider only running
> > tempest and grenade if the other tests have passed first? So here's
> > this message seeking opinions.
> > 
> > On the one hand this ought to be redundant. The expectation is that
> > a submitter has already done at least one python version worth of
> > unit and functional tests. Fast8 too. On one of my machines 'tox
> > -efunctional-py37,py37,pep8' on warmed up virtualenvs is a bit under
> > 53 seconds. So it's not like it's a huge burden or cpu melting.
> > 
> > But on the other hand, if someone has failed to do that, and they
> > have failing tests, they shouldn't get the pleasure of wasting a
> > tempest or grenade node.
> > 
> > Another argument I've heard for not doing this is if there are
> > failures of different types in different tests, having all that info
> > for the round of fixing that will be required is good. That is,
> > getting a unit failure, fixing that, then submitting again, only to
> > get an integration failure which then needs another round of fixing
> > (and testing) might be rather annoying.
> > 
> > I'd argue that that's important information about unit or functional
> > tests being insufficient.
> > 
> > I'm not at all sold on the idea, but thought it worth "socializing"
> > for input.
> > 
> > Thanks.
> > 
> > -- 
> > Chris Dent                       ٩◔̯◔۶           https://anticdent.org/
> > freenode: cdent                                         tw: @anticdent
> 
> 
