[placement] zuul job dependencies for greater good?
Sorin Sbarnea
ssbarnea at redhat.com
Mon Feb 25 18:20:13 UTC 2019
I asked the same some time ago, but we didn't have time to implement it, as it is much harder to do this on projects where the list of jobs changes a lot.
Maybe if we had some placeholder jobs like stage1/2/3 it would be easier to migrate to such a setup.
stage1 - cheap jobs like linters, docs, ... - <10 min
stage2 - medium jobs like functional - <30 min
stage3 - fat/expensive jobs like tempest, update/upgrade - >30 min
The idea of the placeholders is to avoid having to refactor lots of dependencies.
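Roughly, something like this in a project's .zuul.yaml, using the job.dependencies attribute Chris linked to below (the job names here are only illustrative, not the actual placement job list):

  - project:
      check:
        jobs:
          - openstack-tox-pep8
          - openstack-tox-py37
          - placement-functional-py37
          # expensive jobs only start once the cheap ones have passed
          - tempest-full:
              dependencies:
                - openstack-tox-pep8
                - openstack-tox-py37
          - grenade-py3:
              dependencies:
                - openstack-tox-pep8
                - openstack-tox-py37

With stage placeholders, the dependencies would point at stage1/stage2 instead of individual job names, so adding or removing a job only touches one place.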
Cheers
Sorin
> On 25 Feb 2019, at 17:47, Chris Dent <cdent+os at anticdent.org> wrote:
>
>
> Zuul has a feature that makes it possible to only run some jobs
> after others have passed:
>
> https://zuul-ci.org/docs/zuul/user/config.html#attr-job.dependencies
>
> Except for tempest and grenade (which take about an hour to 1.5
> hours to run, sometimes a lot more) the usual time for any of the
> placement tests is less than 6 minutes each, sometimes less than 4.
>
> I've been wondering if we might want to consider only running
> tempest and grenade if the other tests have passed first? So here's
> this message seeking opinions.
>
> On the one hand this ought to be redundant. The expectation is that
> a submitter has already done at least one python version worth of
> unit and functional tests. Fast8 too. On one of my machines 'tox
> -efunctional-py37,py37,pep8' on warmed up virtualenvs is a bit under
> 53 seconds. So it's not like it's a huge burden or cpu melting.
>
> But on the other hand, if someone has failed to do that, and they
> have failing tests, they shouldn't get the pleasure of wasting a
> tempest or grenade node.
>
> Another argument I've heard for not doing this is that if there are
> failures of different types in different tests, having all that info
> for the round of fixing that will be required is good. That is,
> getting a unit failure, fixing that, then submitting again, only to
> get an integration failure which then needs another round of fixing
> (and testing) might be rather annoying.
>
> I'd argue that that's important information about unit or functional
> tests being insufficient.
>
> I'm not at all sold on the idea, but thought it worth "socializing"
> for input.
>
> Thanks.
>
> --
> Chris Dent ٩◔̯◔۶ https://anticdent.org/
> freenode: cdent tw: @anticdent