[openstack-dev] [third party] - minimum response time for 3rd party CI responses

Michael Still mikal at stillhq.com
Thu Jul 3 00:44:08 UTC 2014


On Thu, Jul 3, 2014 at 4:33 AM, Luke Gorrie <luke at snabb.co> wrote:
>
> On 30 June 2014 21:04, Kevin Benton <blak111 at gmail.com> wrote:
>>
>> As a maintainer of a small CI system that tends to get backed up during
>> milestone rush hours, it would be nice if we were allowed up to 12 hours.
>> However, as a developer this seems like too long to have to wait for the
>> results of a patch.
>
> Interesting question!
>
> Taking one hundred steps back :-) what is the main purpose of the 3rd party
> reviews, and what are the practical consequences when they are not promptly
> available?

The main purpose is to let change reviewers know that a change might
be problematic for a piece of code not well tested by the gate -- that
might be a driver we don't have hardware for, but it might also simply
be a scenario that is hard to express in the gate (for example, the
nova schema update testing turbo hipster does). In a perfect universe,
change reviewers would use these votes to decide if they should
approve a change or not.

> Is the main purpose to allow 3rd parties to automatically object to changes
> that will cause them problems, and the practical consequence of a slow
> review being that OpenStack may merge code that will cause a problem for
> that third party?

This is also true, but I feel that out-of-tree code objecting is a
secondary use case and not as persuasive as the first.

> How do genuine negative reviews by 3rd party CIs play out in practice? (Do
> the change author and the 3rd party get together offline to work out the
> problem? Or does the change-author treat Gerrit as an edit-compile-run loop
> to fix the problem themselves?) I'd love to see links to such reviews, if
> anybody has some? (I've only seen positive reviews and false-negative
> reviews from 3rd party CIs so far in my limited experience.)

I have seen both. Normally there's a failure, reviewers notice, and
then the developer iterates on fixes by uploading new patch sets.

> Generally it seems like 12 hours is the blink of an eye in terms of the
> whole lifecycle of a change, or alternatively an eternity in terms of
> somebody sitting around waiting to take action on the result.

12 hours is way too long, mostly because a 12-hour delay means
you're not keeping up with the workload (unless the test actually runs
for 12 hours, which I find hard to imagine).
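To make that point concrete, here's a hypothetical back-of-envelope sketch (not from the thread, and the rates are invented for illustration): if patches arrive faster than a CI system can test them during a rush, the backlog and therefore the reporting delay grows every hour, so a consistently long delay usually signals under-capacity rather than a slow test.

```python
def backlog_delay(arrival_per_hour, tests_per_hour, hours):
    """Estimate the reporting delay (in hours) after `hours` of
    sustained load, assuming a single FIFO queue and that each test
    takes 1/tests_per_hour hours to run."""
    backlog = 0.0
    for _ in range(hours):
        backlog += arrival_per_hour       # new patches queued this hour
        backlog = max(0.0, backlog - tests_per_hour)  # patches tested
    # Delay = queued patches / service rate, plus one test duration.
    return backlog / tests_per_hour + 1.0 / tests_per_hour

# A CI that can test 4 patches/hour facing 5 patches/hour during a
# milestone rush falls one patch further behind every hour:
print(backlog_delay(5, 4, 12))  # 3.25 hours of delay after 12 hours
print(backlog_delay(5, 4, 24))  # 6.25 hours after 24 -- growing, not stable
```

The takeaway is that the delay keeps growing for as long as the rush lasts, which is why a 12-hour response time points at capacity rather than test duration.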

My rule of thumb, by the way, is three hours. I'd like to say
something like "not significantly slower than Jenkins", but that's
hard to quantify.

Michael

-- 
Rackspace Australia
