<div dir="ltr">><span style="font-family:arial,sans-serif;font-size:13.333333969116211px">In short, you need to test every single proposed patch to the system fully and consistently; otherwise there's simply no point in running any tests at all, as you will spend an inordinate amount of time tracking down what broke what.</span><div>
<span style="font-family:arial,sans-serif;font-size:13.333333969116211px"><br></span></div><div><span style="font-family:arial,sans-serif;font-size:13.333333969116211px">I agree that every patch should be tested. However, since third-party systems aren't involved in the serial gate merge process, there is still a chance that a patch can break a third-party system after it is merged into master. To catch this with a third-party CI, you also need a job that runs after every merge into master, so the maintainers can immediately identify the patch that caused a post-merge failure and disable their checks until it is fixed.</span></div>
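<div><span style="font-family:arial,sans-serif;font-size:13.333333969116211px">As a sketch of what that could look like in a Zuul v2 layout: a "post" pipeline triggered on ref updates to master, with a third-party job attached to it. The project and job names below are purely illustrative, not an actual configuration.</span></div><div><span style="font-family:arial,sans-serif;font-size:13.333333969116211px"><br></span></div>

```yaml
# Hypothetical excerpt from a Zuul v2 layout.yaml (a sketch;
# the project and job names here are made up for illustration).
pipelines:
  - name: post
    description: Jobs run against each commit after it merges to master.
    manager: IndependentPipelineManager
    trigger:
      gerrit:
        # Fire once per update to the master branch, i.e. per merged patch.
        - event: ref-updated
          ref: ^refs/heads/master$

projects:
  - name: openstack/neutron
    post:
      # Hypothetical third-party job exercising the external plugin.
      - snappco-plugin-post-merge-test
```

<div><span style="font-family:arial,sans-serif;font-size:13.333333969116211px">Because the pipeline runs per merged commit rather than per proposed change, a failure here points directly at the patch that broke the third-party system.</span></div>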
</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Jul 3, 2014 at 10:06 AM, Jay Pipes <span dir="ltr"><<a href="mailto:jaypipes@gmail.com" target="_blank">jaypipes@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="">On 07/03/2014 08:42 AM, Luke Gorrie wrote:<br>
</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="">
On 3 July 2014 02:44, Michael Still <<a href="mailto:mikal@stillhq.com" target="_blank">mikal@stillhq.com</a><br></div><div><div class="h5">
<mailto:<a href="mailto:mikal@stillhq.com" target="_blank">mikal@stillhq.com</a>>> wrote:<br>
<br>
The main purpose is to let change reviewers know that a change might<br>
be problematic for a piece of code not well tested by the gate<br>
<br>
<br>
Just a thought:<br>
<br>
A "sampling" approach could be a reasonable way to stay responsive under<br>
heavy load and still give a strong signal to reviewers about whether a<br>
change is likely to be problematic.<br>
<br>
I mean: Kevin mentions that his CI gets an hours-long queue during peak<br>
review season. One way to deal with that could be skipping some events<br>
e.g. toss a coin to decide whether to test the next revision of a change<br>
that he has already +1'd previously. That would keep responsiveness<br>
under control even when throughput is a problem.<br>
<br>
(A bit like how a router manages a congested input queue or how a<br>
sampling profiler keeps overhead low.)<br>
<br>
Could be worth keeping the rules flexible enough to permit this kind of<br>
thing, at least?<br>
</div></div></blockquote>
<br>
The problem with this is that it assumes all patch sets contain equivalent levels of change, which is incorrect. One patch set may contain changes that significantly affect the SnappCo plugin. A sampling system might miss that important patch set, and you'd then spend a lot of time trying to figure out which patch broke you when a later patch set (built on top of the problematic one that was merged) causes failures that seem unrelated to the patch currently under test.<br>
<br>
In short, you need to test every single proposed patch to the system fully and consistently; otherwise there's simply no point in running any tests at all, as you will spend an inordinate amount of time tracking down what broke what.<br>
<br>
Best,<br>
-jay<div class="HOEnZb"><div class="h5"><br>
<br>
______________________________<u></u>_________________<br>
OpenStack-dev mailing list<br>
<a href="mailto:OpenStack-dev@lists.openstack.org" target="_blank">OpenStack-dev@lists.openstack.<u></u>org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/<u></u>cgi-bin/mailman/listinfo/<u></u>openstack-dev</a><br>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div>Kevin Benton</div>
</div>