[openstack-dev] [Hyper-V] Havana status
Matt Riedemann
mriedem at us.ibm.com
Fri Oct 11 01:57:56 UTC 2013
Getting integration testing hooked up for the hyper-v driver with tempest
should go a long way here, which is a good reason to have it. As has been
mentioned, there is a core team of people who understand the internals of
the hyper-v driver and the subtleties of when it won't work, and only
those with a vested interest in using it will really care about it.
My team has the same issue with the powervm driver. We don't have
community integration testing hooked up yet. We run tempest against it
internally so we know what works and what doesn't, but beyond the standard
code review practices that apply everywhere (strong unit test coverage,
consistency with other projects, hacking rules, etc.), any other reviewer
generally has to take it on faith that what's in there works as it's
supposed to. Sure, there is documentation available on what the native
commands do, and anyone can dig into it to figure things out, but I
wouldn't expect that low a level of review from anyone who doesn't
regularly work on the powervm driver. I think the same is true for
anything here. So the equalizer is a broad, rigorously exercised set of
integration tests, which is where we all need to get to with tempest and
continuous integration.
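To be concrete about what that buys us: at its core, a tempest-style
check just drives the public API and sees whether the virt driver
delivers. A rough sketch of that kind of smoke test in Python against
the Nova v2 REST API, using the requests library (the endpoint, token,
and image/flavor IDs below are placeholders, not anything real):

    import time
    import requests

    # Placeholders only - substitute a real keystone token and endpoint.
    TOKEN = "<keystone-token>"
    COMPUTE = "http://controller:8774/v2/<tenant-id>"
    HEADERS = {"X-Auth-Token": TOKEN, "Content-Type": "application/json"}

    # Ask Nova to boot a guest; the request itself is driver-agnostic.
    body = {"server": {"name": "smoke-test",
                       "imageRef": "<image-uuid>",
                       "flavorRef": "1"}}
    resp = requests.post(COMPUTE + "/servers", json=body, headers=HEADERS)
    server_id = resp.json()["server"]["id"]

    # Whether the guest ever reaches ACTIVE is where the hypervisor
    # driver's correctness actually shows up.
    for _ in range(60):
        server = requests.get(COMPUTE + "/servers/" + server_id,
                              headers=HEADERS).json()["server"]
        if server["status"] in ("ACTIVE", "ERROR"):
            break
        time.sleep(5)
    assert server["status"] == "ACTIVE", "guest never went ACTIVE"

Multiply that by snapshots, resizes, volume attach, and the rest of the
API surface, and you get a level of assurance that no amount of code
reading can give a reviewer.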
We've had the same issues mentioned in the original note, with things
slipping out of releases or taking a long time to get reviewed, and we've
had to fork code internally because of it, code we then have to keep
trying to get merged upstream. It's painful, but it is what it is;
that's the nature of the business.
Personally, my experience has been that the more I give, the more I get.
The more I'm involved in what others are doing and the more I review
others' code, the more I can build relationships that are mutually
beneficial. Sometimes I can only say 'hey, you need unit tests for this'
or 'this doesn't seem right, but I'm not sure'. Yet unless you completely
automate code coverage metrics and build them back into reviews, e.g.
checking that a 1000-line blueprint has 95% code coverage in its tests,
you still need human reviewers on everything, regardless of context. Even
then it's not going to be enough; there will always be a need for people
with a broader vision of the project as a whole who can point out when a
change is going in the wrong direction, even if it fixes a bug.
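For what it's worth, the mechanical half of that coverage idea is easy
to sketch. Something like the following, using the coverage.py API,
could gate a change on a coverage bar; the package name, test directory,
and 95% figure here are made-up illustrations, not a gate anyone
actually runs:

    import sys
    import unittest

    import coverage  # pip install coverage

    THRESHOLD = 95.0  # the hypothetical review bar

    cov = coverage.Coverage(source=["mydriver"])  # package under review
    cov.start()

    # Run the change's unit tests under measurement.
    suite = unittest.defaultTestLoader.discover("tests")
    unittest.TextTestRunner().run(suite)

    cov.stop()
    percent = cov.report()  # prints per-file numbers, returns the total %

    if percent < THRESHOLD:
        print("coverage %.1f%% is under the %.1f%% bar"
              % (percent, THRESHOLD))
        sys.exit(1)

Even a crude gate like that would catch the no-tests-at-all case
automatically; what it can't tell you is whether the assertions mean
anything, which is exactly why the human doesn't go away.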
The point is, I see both sides of the argument; I'm sure many people do.
In a large, complicated project like this it's inevitable. But I think
the quality and adoption of OpenStack speak for themselves, and I believe
a key component of that is the review system, which is only as good as
the people who uphold the standards across the project. I've been on
enough development projects that paid plenty of lip service to code
quality and review standards, which are always the first thing to go when
a deadline looms, and those projects are ultimately always failures.
Thanks,
MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development
Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mriedem at us.ibm.com
3605 Hwy 52 N
Rochester, MN 55901-1407
United States
From: Tim Smith <tsmith at gridcentric.com>
To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
Date: 10/10/2013 07:48 PM
Subject: Re: [openstack-dev] [Hyper-V] Havana status
On Thu, Oct 10, 2013 at 1:50 PM, Russell Bryant <rbryant at redhat.com>
wrote:
Please understand that I only want to help here. Perhaps a good way for
you to get more review attention is to get more karma in the dev community
by helping review other patches. It looks like you don't really review
anything outside of your own stuff, or patches that touch hyper-v. In
the absence of significant interest in hyper-v from others, the only way
to get more attention is by increasing your karma.
NB: I don't have any vested interest in this discussion except that I want
to make sure OpenStack stays "Open", i.e. inclusive. I believe the concept
of "reviewer karma", while seemingly sensible, is actually subtly counter
to the goals of openness, innovation, and vendor neutrality, and would
also lead to overall lower commit quality.
Brian Kernighan famously wrote: "Debugging is twice as hard as writing the
code in the first place." A corollary is that constructing a mental model
of code is hard; perhaps harder than writing the code in the first place.
It follows that reviewing code is not an easy task, especially if one has
not been intimately involved in the original development of the code under
review. In fact, if a reviewer is not intimately familiar with the code
under review, and is therefore only able to perform the functions of a
human compiler and style checker (functions which can be, and typically
are, performed by automated tools), the rigor of their review is at best
less than ideal, and at worst purely symbolic.
It is logical, then, that a reviewer should review changes to code that
he/she is familiar with. Attempts to gamify the implicit review
prioritization system through a "karma" scheme are sadly doomed to fail,
as contributors hoping to get their patches reviewed will have no option
but to "build karma" by reviewing patches to code they are unfamiliar
with, leading to a higher number of low-quality reviews.
So, if a cross-functional "karma" system won't accomplish the desired
result (high-quality reviews of commits across all functional units), what
will it accomplish (besides overall lower commit quality)?
Because the "karma" system inherently favors entrenched (read: heavily
deployed) code, it forms a slippery slope leading to a mediocre
"one-size-fits-all" stack, where contributors of new technologies,
approaches, and hardware/software drivers will see their contributions die
on the vine due to lack of core reviewer attention. If the driver team for
a widely deployed hypervisor (outside of the OpenStack space - they can't
really be expected to have wide OpenStack deployment without a mature
driver) is having difficulty with reviews due to an implicit "karma"
deficit, imagine the challenges that will be faced by the future
SDN/SDS/SDx innovators of the world hoping to find a platform for their
innovation in OpenStack.
Again, I don't have any vested interest in this discussion, except that I
believe the concept of "reviewer karma" to be counter to both software
quality and openness. In this particular case, it would seem that the
simplest solution would be to give one of the hyper-v team members core
reviewer status, but perhaps there are consequences to that which elude
me.
Regards,
Tim
https://review.openstack.org/#/q/reviewer:3185+project:openstack/nova,n,z
--
Russell Bryant
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev