[openstack-dev] [heat] Core criteria, review stats vs reality
robertc at robertcollins.net
Mon Dec 9 19:34:04 UTC 2013
On 10 December 2013 00:31, Steven Hardy <shardy at redhat.com> wrote:
> Hi all,
> So I've been getting concerned about $subject recently, and based on some
> recent discussions so have some other heat-core folks, so I wanted to start
> a discussion where we can agree and communicate our expectations related to
> nomination for heat-core membership (because we do need more core
> reviewers).
Great! (Because I think you do too :).
> The issues I have are:
> - Russell's stats (while very useful) are being used by some projects as
> the principal metric related to -core membership (ref TripleO's monthly
> cull/name&shame, which I am opposed to btw). This is in some cases
> encouraging some stats-seeking in our review process, IMO.
With all due respect - you are entirely wrong, and I am now worried
about the clarity of my emails vis-a-vis review team makeup. I presume
you've read them in detail before referencing them of course - have
you? What can I do to make the actual process clearer?
IMO the primary metric for inclusion is being perceived by a bunch of
existing -cores as being substantial contributors of effective, useful
reviews. The reviewstats numbers are a useful aide-mémoire, nothing
more. Yes, if you don't do enough reviews for an extended period -
several months - then you're likely to no longer be perceived as a
substantial contributor of effective, useful reviews, and no longer
aware enough of the current codebase and design to just step back into
the role.
So it's a gross mischaracterisation to imply that a democratic process
aided by some [crude] stats has been reduced to name & shame, and a
rather offensive one.
Anyone can propose members for inclusion in TripleO-core, and we all
vote - likewise removal. The fact I do a regular summary and propose
some folk and some cleanups is my way of ensuring that we don't get
complacent - that we recognise folk who are stepping up, and give them
guidance if they aren't stepping up in an effective manner - or if
they are no longer being effective enough to be recognised - in my
opinion. If the rest of the TripleO core team agrees with my opinion,
we get changes to -core, if not, we don't. If someone else wants to
propose a -core member, they are welcome to! Hopefully with my taking
point on this, that effort isn't needed - but it's still possible.
> - Review quality can't be measured mechanically - we have some folks who
> contribute fewer, but very high quality reviews, and are also very active
> contributors (so knowledge of the codebase is not stale). I'd like to
> see these people do more reviews, but removing people from core just
> because they drop below some arbitrary threshold makes no sense to me.
In principle, review quality *can* be measured mechanically, but the
stats we have do not do that - and cannot. To assess a reviewer's
impact over time, we'd need mechanical follow-through from reported
defects (both crashes and softer defects like 'not as fast as I want'
or 'feature X is taking too long to develop') to root-cause analysis,
plus their effect on review and contribution rates: what bugs they
prevented entering the codebase, how their reviews kept the codebase
maintainable and flexible, and how their interactions with patch
submitters helped grow the community. NONE of the stats we have today
even vaguely approximate that.
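To make the point concrete, here is a minimal sketch (the record shape and
names are hypothetical illustrations, not the actual reviewstats
implementation): the kind of metric current tooling can compute is
essentially a per-person count, which by construction cannot see defects
prevented or community impact.

```python
from collections import Counter

# Hypothetical review records as (reviewer, vote) pairs. A real tool
# would pull these from Gerrit; this flat shape is an assumption made
# purely for illustration.
reviews = [
    ("alice", 2), ("alice", -1), ("bob", 1),
    ("bob", 1), ("bob", 1), ("carol", -1),
]

def review_counts(reviews):
    """Count reviews per person -- roughly what count-based stats give.

    Note what this *cannot* see: whether a -1 prevented a real bug,
    whether a +2 let a regression through, or whether the comments
    helped the submitter grow. A count is a proxy, nothing more.
    """
    return Counter(reviewer for reviewer, _vote in reviews)

counts = review_counts(reviews)
print(counts.most_common())  # [('bob', 3), ('alice', 2), ('carol', 1)]
```

Optimising for this number is exactly the stats-seeking behaviour the
thread worries about: it rewards volume, not effectiveness.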
> So if you're aiming for heat-core nomination, here's my personal wish-list,
> but hopefully others can provide their input and we can update the wiki with
> the resulting requirements/guidelines:
I'm not aiming for heat-core, so this is just kibitzing - take it for
what it's worth:
> - Make your reviews high-quality. Focus on spotting logical errors,
> reducing duplication, consistency with existing interfaces, opportunities
> for reuse/simplification etc. If every review you do is +1, or -1 for a
> trivial/cosmetic issue, you are not making a strong case for -core IMHO.
There is a tension here. I agree that many trivial/cosmetic issues are
not a big deal in themselves. But in aggregate I would argue that a
codebase with lots of typos, poor idioms, overly large classes and
other shallow-at-the-start issues is one that will become
progressively harder to contribute to.
> - Send patches. Some folks argue that -core membership is only about
> reviews, I disagree - There are many aspects of reviews which require
> deep knowledge of the code, e.g. spotting structural issues, logical
> errors caused by interaction with code not modified by the patch,
> effective use of test infrastructure, etc etc. This deep knowledge comes
> from writing code, not only reviewing it. This also gives us a way to
> verify your understanding and alignment with our stylistic conventions.
I think seeing patches from people is a great way to see whether they
are able to write code that fits with the codebase. I'm not sure it's
a good indicator that folk will be effective, positive reviewers.
Personally, I would look over an extended time - say three months - to
see whether someone *is* noticing structural issues - when reviewing
their reviews. If they aren't, their contributions may still be
valuable, but I wouldn't really want to see two such deep-issue-blind
folk +2 a patch. OTOH if someone is deep-issue-blind, and knows they
are, they may still be useful for taking load off of reviewers who do
spot such issues.
> - Fix bugs. Related to the above, help us fix real problems by testing,
> reporting bugs, and fixing them, or take an existing bug and post a patch
> fixing it. Ask an existing team member to direct you if you're not sure
> which bug to tackle. Sending patches doing trivial cosmetic cleanups is
> sometimes worthwhile, but make sure that's not all you do, as we need
> -core folk who can find, report, fix and review real user-impacting
> problems (not just new features). This is also a great way to build
> trust and knowledge if you're aiming to contribute features to Heat.
I think you have a very different definition of -core to the rest of
OpenStack. I was actually somewhat concerned about the '+2 Guard the
gate stuff' at the summit because it's so easily misinterpreted - and
there is a meme going around (I don't know if it's true or not) that
some people are assessed - performance review stuff within vendor
organisations - on becoming core reviewers.
Core reviewer is not intended to be a gateway to getting features or
ideas into OpenStack projects. It is solely a volunteered contribution
to the project: helping the project accept patches with confidence
about their long-term integrity, and providing explanation and
guidance to people who want to contribute patches so that their
patches can be accepted.
I'm not sure if you were intending it, but your paragraph - as I read
it - implies that patches from core reviewers would be preferentially
treated and reviewed. If so I think this is a very bad thing as it
will build a clique and put pressure on folk to become -core just to
get code landed fast. Which may lead to *exactly* the sort of
behaviour you are concerned about.
The only reason code from a -core reviewer should land faster than
anyone else's is because they have already self-reviewed and fixed
everything that reviewers look for. In all other regards their work
should be treated the same as anyone else's.
> - Engage in discussions related to the project (here on the ML, helping
> users on the general list, in #heat on Freenode, attend our weekly
> meeting if it's not an anti-social time in your TZ)
All important things!
> Anyone have any more thoughts to add here?
I hope my opinions above are useful :) I realise I haven't stated it
above, but I'm a huge believer in 'you get what you measure' and its
corollary, 'if you measure the wrong thing, you're f**ked'. Closed
loops optimise to their metric, and proxy metrics will optimise to the
proxy, not to the primary goal - they can and usually will be gamed.
Robert Collins <rbtcollins at hp.com>
HP Converged Cloud