[openstack-dev] [all] The future of the integrated release

Nikola Đipanov ndipanov at redhat.com
Thu Aug 7 15:59:14 UTC 2014


On 08/07/2014 03:20 PM, Russell Bryant wrote:
> On 08/07/2014 09:07 AM, Sean Dague wrote:
>> I think the difference is
>> slot selection would just be Nova drivers. I
>> think there is an assumption in the old system that everyone in Nova
>> core wants to prioritize the blueprints. I think there are a bunch of
>> folks in Nova core that are happy having signaling from Nova drivers on
>> high priority things to review. (I know I'm in that camp.)
>>
>> Lacking that we all have picking algorithms to hack away at the 500 open
>> reviews. Which basically means it's a giant random queue.
>>
>> Having a few blueprints that *everyone* is looking at also has the
>> advantage that the context for the bits in question will tend to be
>> loaded into multiple people's heads at the same time, so is something
>> that's discussable.
>>
>> Will it fix the issue, not sure, but it's an idea.
> 
> OK, got it.  So, success critically depends on nova-core being willing
> to take review direction and priority setting from nova-drivers.  That
> sort of assumption is part of why I think agile processes typically
> don't work in open source.  We don't have the ability to direct people
> with consistent and reliable results.
> 
> I'm afraid if people doing the review are not directly involved in at
> least ACKing the selection and committing to review something, putting
> stuff in slots seems futile.
> 

Forgive my bluntness here, but here's how I read the slots proposal:

If there is a limited number of slots, and a group of people (nova-core
or nova-drivers) decides which proposals get attention in a given cycle
based on "technical merit alone" (disregarding what Russell notes above -
there is no real way to make people review things), I can easily see
this as a very polite way of saying: core gets their stuff in, and then,
if there is a slot or two left, we'll consider other proposals... but we
can talk about them - sure :).

So why not own up to it? As ttx points out elsewhere in the thread, it
is really about setting expectations.

Now for a small intermezzo - here's a snippet from the wiki of an
OpenStack-like project being run in a slightly different parallel
universe (not called OpenStack though - trademarks may cross boundaries
of parallel universes):

"""
We (The Core) are The Upstream and have control of what gets in. We will
land no fewer than 0 outside features this cycle, and that is all we
will guarantee! If you want your stuff to be considered:

1) Write up a BP and a nice spec so we know what you're thinking; ping
us to read it and comment on it.
2) Write some code and post it to our Gerrit code review system; ping
us to read it and comment on it.
3) (Ideally) run your code in production and tell us how it solves the
particular problem you have, and why other people may have the same
problem. The code is public on Gerrit, so get other people to use it as
well and comment on how useful it is - maybe they will propose changes
and fixes.
4) If The Core sees the feature as something that fits the direction of
the project (you hopefully got that feedback at 1)) and gets the sense
that there is enough community interest (for some easy things non-zero
is fine, for others it may not be), The Core will review it and land it
at some point, after you've fixed their nits too. At this point we are
landing code that has been battle-tested, so hopefully there won't be
many of those.
5) For additional karma, work on bugs and help us, The Core, review and
fix those; as we see that you are an awesome contributor, we are more
likely to take your pings and code seriously.
"""

In the parallel universe, velocity has been capped in a more natural
way: there are no arbitrary numbers (of slots, for example) that cannot
be backed up by actual metrics - there are no numbers at all, for that
matter. Also, cores do not worry about the number of open reviews in
Gerrit - the more the better, as it means there is more stuff being
tried out!

They focus on stability and fixing bugs until there is a feature that is
really interesting and ready for them to land.

If you think about it - the only real difference between the two is how
expectations are communicated.

Now I have no idea if we, as individual projects or as a community, are
ready to set such expectations, but reading all the discussions around
review times and gate failures over the last two cycles or so, it seems
to me that "velocity" is something we want to control and stability is
something we want to increase, and we keep coming up with very
round-about ways of doing it, backed by arbitrary metrics like "number
of (open) reviews" that seem to me to be just a distraction. Why not
just do it, say _very loudly_ that we are doing it, and then... do
it :).

We would need to give up trying to set expectations for our
"stakeholders", but the reality (it seems to me) is that we are in the
business of making cloud software (and are reasonably good at it), not
in the business of outsourcing cloud software project management.

Thanks for reading :),
N.


