<div dir="ltr">On 30 June 2014 21:08, Anita Kuno <span dir="ltr"><<a href="mailto:anteaya@anteaya.info" target="_blank">anteaya@anteaya.info</a>></span> wrote:<br><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="HOEnZb"><div class="h5"><span style="color:rgb(34,34,34)">I am disappointed to realize that Ilya (or stackalytics, I don't know</span><br></div></div>
where this is coming from) is unwilling to cease making up definitions<br>
of success for third party ci systems to allow the openstack community<br>
to arrive at its own definition.<br></blockquote><div><br></div><div>There is indeed a risk that the new dashboards won't give a meaningful view of whether a 3rd party CI is voting correctly or not.</div><div><br></div>
However, there is an elephant in the room and a much more important problem: measuring how accurately a CI votes says far more about a driver author's "Gerrit-fu" and ability to operate a CI system than it does about whether the code they have contributed to OpenStack actually works, and the latter is what actually matters.
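For concreteness, the kind of measurement at stake boils down to something like the rough Python sketch below. This is not how stackalytics actually computes it; the Gerrit REST endpoints are real, but the CI account name and the "Build succeeded" comment convention are assumptions for illustration:

    # Rough sketch of a "voting accuracy" metric: how often a third-party
    # CI's last comment on a merged change reported success. NOT the
    # stackalytics implementation; account name and comment text are
    # assumptions.
    import json
    import requests

    GERRIT = "https://review.openstack.org"
    CI_USERNAME = "some-vendor-ci"  # hypothetical third-party CI account

    def gerrit_get(path):
        # Gerrit prefixes JSON responses with ")]}'" to defeat XSSI;
        # drop that first line before parsing.
        resp = requests.get(GERRIT + path)
        resp.raise_for_status()
        return json.loads(resp.text.split("\n", 1)[1])

    def ci_agreement(project, limit=50):
        """Fraction of recently merged changes where the CI's final
        comment reported success, i.e. agreed with the outcome."""
        changes = gerrit_get(
            "/changes/?q=project:%s+status:merged&n=%d" % (project, limit))
        agreed = total = 0
        for change in changes:
            detail = gerrit_get("/changes/%s/detail" % change["id"])
            verdicts = [m["message"] for m in detail.get("messages", [])
                        if m.get("author", {}).get("username") == CI_USERNAME]
            if not verdicts:
                continue  # the CI never reported on this change at all
            total += 1
            if "Build succeeded" in verdicts[-1]:
                agreed += 1
        return agreed, total

Note that nothing in that number distinguishes a genuinely broken driver from a CI that merely missed Gerrit events or crashed mid-run.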
To my mind the whole 3rd party testing discussion should refocus on helping developers maintain good, working code, and less on waving the threat of "you will be kicked out of OpenStack if you don't keep your swarm of nebulous daemons running 24/7".