[openstack-dev] Hyper-V meeting Minutes

Christopher Yeoh cbkyeoh at gmail.com
Wed Oct 16 11:47:00 UTC 2013


On Wed, Oct 16, 2013 at 6:49 PM, Robert Collins
<robertc at robertcollins.net> wrote:

> On 16 October 2013 20:14, Alessandro Pilotti
> <apilotti at cloudbasesolutions.com> wrote:
> >
>
> > Drivers are IMO not part of the core of Nova, but completely separate
> and decoupled entities, which IMO should be treated that way. As a
> consequence, we frankly don't strictly feel part of Nova, although some
> of us have a pretty strong understanding of how all the Nova pieces work.
>
> I don't have a particular view on whether they *should be* separate
> decoupled entities, but today I repeatedly hear concerns about the
> impact of treating 'internal APIs' as stable things. That's relevant
> because *if* nova drivers are to be separate decoupled entities, the
> APIs they use - and expose - have to be treated as stable things with
> graceful evolution, backwards compatibility etc. Doing anything else
> will lead to deployment issues and race conditions in the gate.
>
>
+1 - I think we really want a strong preference for a stable API if we
start separating parts out (and that has been the case in the past, from
what I can see). Otherwise we end up either with a lot of pain when making
infrastructure changes or with asymmetric gating, which is to be avoided
wherever possible.
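To make the "stable API with graceful evolution" point concrete, here is a
minimal sketch of the kind of contract a split-out driver would need. The
names (ComputeDriver, spawn) merely echo Nova's virt driver interface, and
the (major, minor) versioning scheme is purely illustrative, not how Nova
actually does it:

```python
# Hypothetical sketch of a stable, backwards-compatible driver contract.
# Minor version bumps are additive only; breaking changes need a major bump.

class ComputeDriver:
    """Stable driver-facing API: additions only within a major version."""

    API_VERSION = (1, 1)  # (major, minor)

    def spawn(self, instance, network_info=None):
        # v1.0 took only `instance`; v1.1 added `network_info` with a
        # default, so v1.0 callers and drivers keep working unchanged.
        raise NotImplementedError


def compatible(driver_version, required=(1, 0)):
    """A caller can run against any driver with the same major version
    and a minor version at least as new as the one it requires."""
    return (driver_version[0] == required[0]
            and driver_version[1] >= required[1])
```

The additive-only rule is what lets core and a separated driver deploy and
gate independently without the race conditions Robert mentions.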


> And by their effectiveness [this is more subjective:)]
>  - train more -core reviewers [essentially linear, very easy to predict]
>  - provide patches that are easier to review [many patches are good
> already, has a low upper bound on effectiveness]
>  - split the drivers out [won't help *at all* with changes required in
> core to support a driver feature]
>
>
I'd like to add to that: better tools (which will help both core and
non-core reviewers). For example, rebase hell was mentioned in this thread.
I hit that a fair bit with the Nova v3 API changes, where I'd have a long
series of dependent patches which would get fairly even review attention.
This sometimes had the unfortunate result that many patches in the series
would end up with a single +2 - not enough to merge - and those +2's would
get lost in the inevitable rebase. Now, as reviewers we should probably
know better and follow the dependency chain, reviewing the changesets with
the fewest dependencies first, but we're only human and we don't always
remember to do that. So it'd be nice if gerrit or some other tool showed
changesets to review as a tree rather than a list. We might get more
changesets merged for the same number of reviews if the tools encouraged
the most efficient behaviour.
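As a rough illustration of what such a tool could do, here is a sketch that
orders changesets by dependency depth so the bottom of each series surfaces
first. The data shape (a change-id -> parent-id mapping) is invented for
illustration; a real tool would pull dependency information from gerrit:

```python
# Hypothetical sketch: order changesets so the least-dependent ones come
# first, approximating the "review the bottom of the series first" habit.

def review_order(changes):
    """changes: dict mapping change_id -> parent change_id (None if the
    change is based directly on the target branch). Returns change ids
    sorted by dependency depth, roots first."""
    def depth(cid, seen=()):
        parent = changes.get(cid)
        if parent is None or parent in seen:  # root, or a cycle guard
            return 0
        return 1 + depth(parent, seen + (cid,))
    return sorted(changes, key=depth)


# Two series: c1 <- c2 <- c3 is a dependent chain, x1 stands alone.
series = {"c1": None, "c2": "c1", "c3": "c2", "x1": None}
print(review_order(series))  # roots first: ['c1', 'x1', 'c2', 'c3']
```

Presenting that ordering as an indented tree in the dashboard would make it
obvious which reviews unblock the most patches.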

Another example: when you review a lot of patches, the gerrit dashboard
doesn't seem to show all of the patches you have reviewed. And I get rather
overwhelmed by the volume of email from gerrit about updates to patches
I've reviewed, so I find it's not a great way of working out what to review
next. I'm sure I'm guilty of reviewing some patches and then not getting
back to them for a while because I've effectively lost track of them (which
is where an irc ping is appreciated). Perhaps better tools could help here?

Chris
