[openstack-dev] [TripleO] Reviews: tweaking priorities, and continual-deployment approvals

marios@redhat.com mandreou at redhat.com
Wed Oct 16 07:29:15 UTC 2013


On 16/10/13 03:22, Robert Collins wrote:
> Hi, during the TripleO meeting today we had two distinct discussions
> about reviews.
> 
> Firstly, our stats have been slipping:
> http://russellbryant.net/openstack-stats/tripleo-openreviews.html
> 
> 
> Stats since the last revision without -1 or -2 (ignoring jenkins):
> 
> Average wait time: 2 days, 16 hours, 18 minutes
> 1st quartile wait time: 0 days, 11 hours, 1 minutes
> Median wait time: 1 days, 9 hours, 37 minutes
> 3rd quartile wait time: 5 days, 1 hours, 50 minutes
> 
> 
> Longest waiting reviews (based on oldest rev without nack, ignoring jenkins):
> 
> 7 days, 16 hours, 40 minutes https://review.openstack.org/50010 (Fix a
> couple of default config values)
> 7 days, 4 hours, 21 minutes https://review.openstack.org/50199
> (Utilize pypi-mirror from tripleo-cd)
> 6 days, 2 hours, 28 minutes https://review.openstack.org/50431 (Make
> pypi-mirror more secure and robust)
> 6 days, 1 hours, 36 minutes https://review.openstack.org/50750 (Remove
> obsolete redhat-eventlet.patch)
> 5 days, 1 hours, 50 minutes https://review.openstack.org/51032
> (Updated from global requirements)
> 
> This is holding everyone up, so we want to fix it. When we discussed
> it we found that there were two distinct issues:
>  A - not enough cross-project reviews
>  B - folk working on the kanban-tracked TripleO continuous-deployment
> work had backed off on reviews - and they are among the most prolific
> reviewers.
> 
> A: Cross-project reviews are super important: even if you are only
> really interested in (say) os-*-config, it's hard to think about
> things in context unless you're also up to date with changing code
> (and the design of code) in the rest of TripleO. *It doesn't matter*
> if you aren't confident enough to do a +2 - the only way you get that
> confidence is by reviewing and reading code so you can come up to
> speed, and the only way we increase our team bandwidth is through folk
> doing that in a consistent fashion.
> 
> So, whether your focus is Python APIs, UI, or system plumbing
> in the heart of diskimage-builder, please take the time to review
> systematically across all the projects:
> https://wiki.openstack.org/wiki/TripleO#Review_team
> 
> B: While the progress we're making on delivering a production cloud is
> hugely cool, we need to keep up with our other responsibilities.
> https://wiki.openstack.org/wiki/TripleO#Team_responsibilities is a
> new section I've added based on the meeting. Even folk working on the
> pointy end of the continuous delivery story need to keep pulling on
> the common responsibilities. We said in the meeting that we might
> triage it as follows:
>  - reviews for firedrills first (critical bugs, things
> breaking the CD cloud)
>  - then reviews for the CD cloud
>  - then all reviews for the program
> with a goal of driving them all to 0: if we're on top of things, that
> should never be a burden. If we run out of time, we'll have unblocked
> critical things first, unblocked folk working on the pointy edge
> second - bottlenecks are important to unblock. We'll review how this
> looks next week.
> 


I can at least promise :) to become a more useful member of the team
over time; to be honest there's a lot of _new_ going on very quickly, and
even the workflow is new to me (Gerrit, blueprints, bugs, etc.). The
discussion about triage on yesterday's call (and your email) definitely
makes reviewing a more immediately accessible task for me, and the
clarification around +1/-1 also helps.

thanks, marios





> # The second thing
> 
> The second issue was raised during the retrospective (which will be up
> at https://wiki.openstack.org/wiki/TripleO/TripleOCloud/MVP1and2Retrospective
> a little later today). With a production environment, we want to ensure
> that only released code is running on it - running something from a
> pending review is something to avoid. But the only way we've been
> able to effectively pull things together has been to run ahead of
> reviews :(. A big chunk of that is due to a lack of active +2
> reviewers collaborating with the CD cloud folk - we would get a -core
> putting a patch up, and a +2, but no second +2. We decided in the
> retrospective to try permitting -core to +2 their own patch if it's
> straightforward and part of the current CD story [or a firedrill]. We
> set an explicit 'but be sure you tested this first' criterion on that:
> so folk might try it locally, or even monkey-patch it onto the cloud
> for one run to check it really works [until we have gating on the CD
> story/ies].
> 
> Cheers,
> Rob
> 
> 
> 
