On Tue, Feb 26, 2019, at 3:04 AM, Sean Mooney wrote:
On Mon, 2019-02-25 at 19:42 -0500, Clark Boylan wrote:
On Mon, Feb 25, 2019, at 12:51 PM, Ben Nemec wrote:
snip
That said, I wouldn't push too hard in either direction until someone crunched the numbers and figured out how much time it would have saved to not run long tests on patch sets with failing unit tests. I feel like it's probably possible to figure that out, and if so then we should do it before making any big decisions on this.
Clark, this sounds like an interesting topic to dig into in person at the PTG/Forum. Do you think we could do two things in parallel? 1. find a slot, maybe in the infra track, to discuss this; 2. create a new "fast-check" pipeline in Zuul so we can do some experiments.
If we have a second pipeline with almost identical triggers, we can propose in-tree job changes (without merging them) and experiment with how this might work. I can submit a patch to the project-config repo to do that, but wanted to check on the ML first.
Again, to be clear, my suggestion for an experiment is to modify the gate jobs to require approval from Zuul in both the check and fast-check pipelines, and to kick off jobs in both pipelines in parallel, so that initially the check pipeline jobs would not be conditional on the fast-check pipeline jobs.
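For concreteness, a pipeline stanza along these lines could go in project-config. This is only a sketch of what I have in mind: the trigger events and the zero-value Verified report are assumptions, not a tested configuration, and reporting Verified: 0 is just one way to avoid colliding with the check pipeline's +/-1 vote.

```yaml
# Hypothetical "fast-check" pipeline definition (illustrative only).
- pipeline:
    name: fast-check
    description: |
      Runs quick jobs (unit tests, linters) on every new patchset,
      in parallel with the regular check pipeline.
    manager: independent
    trigger:
      gerrit:
        - event: patchset-created
        - event: comment-added
          comment: (?i)^(Patch Set [0-9]+:)?(\s*\n)*\s*recheck\s*$
    success:
      gerrit:
        # Comment without voting so we do not race the check
        # pipeline's Verified +/-1 votes.
        Verified: 0
    failure:
      gerrit:
        Verified: 0
```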
Currently Zuul depends on the Gerrit vote data to determine whether check has been satisfied for gating requirements. Zuul's verification voting options are currently [-2,-1,0,1,2], with +/-1 for check and +/-2 for gate. Where this gets complicated is how you resolve different values from different check pipelines, and how you keep them from racing on updates. This type of setup likely requires a new type of pipeline in Zuul that can coordinate with another pipeline to ensure accurate vote posting. Another approach may be to update Zuul's reporting capabilities to report intermediate results without votes. That said, is there something the dashboard is failing to do that this would address? At any time you can check the Zuul dashboard for an up-to-date status of your in-progress jobs.
The intent is to run exactly the same set of tests we do today, but to have Zuul comment back in two batches, one from each pipeline.
As a step two, I would also be interested in merging all of the tox env jobs into one. I think that could be done by creating a new job that inherits from the base tox job and invokes the run playbooks of all the tox-<env> jobs from a single playbook.
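One possible shape for that, assuming the stock zuul-jobs tox role where tox_envlist is passed straight through to tox (which accepts a comma-separated env list), might be something like the following. The job name and env list are illustrative, not a working proposal:

```yaml
# Hypothetical combined tox job (names are assumptions).
- job:
    name: tox-all
    parent: tox
    description: Run the pep8, py37 and docs tox environments on one node.
    vars:
      # tox itself accepts "tox -e pep8,py37,docs", so if the base job's
      # playbook passes tox_envlist through unmodified this would run all
      # three environments sequentially in a single job.
      tox_envlist: pep8,py37,docs
```

Whether this actually saves wall-clock time versus three parallel jobs would be part of the experiment.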
I can do experiment 2 entirely from the in-repo zuul.yaml file.
I think it would be interesting to do a test with "do not merge" patches to nova or placement and see how that works.