[openstack-dev] Avoiding regression in project governance
kuvaja at hp.com
Thu Mar 12 01:34:08 UTC 2015
> -----Original Message-----
> From: Stefano Maffulli [mailto:stefano at openstack.org]
> Sent: 12 March 2015 00:26
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] Avoiding regression in project governance
> On Wed, 2015-03-11 at 17:59 -0500, Ed Leafe wrote:
> > The longer we try to be both sides of this process, the longer we will
> > continue to have these back-and-forths about stability vs. innovation.
> If I understand your model correctly, it works only for users/operators who
> decide to rely on a vendor to consume OpenStack. There are quite large
> enterprises out there who consume the code directly as it's shipped from
> git.openstack.org, some from trunk, others from the stable release .tgz:
> these guys can't count on companies A, B, C or D to put resources into fixing
> their problems, because they don't talk to those companies.
> One thing I like about your proposal, though, is when you say:
> > So what is "production-ready"? And how would you trust any such
> > designation? I think that it should be the responsibility of groups
> > outside of OpenStack development to make that call.
> This problem has been bugging the European authorities for a long time and
> they've invested quite a lot of money to find tools that would help IT
> managers of the public (and private) sector estimate the quality of open
> source code. It's a big deal in fact when on one hand you have Microsoft and
> IBM sales folks selling your IT managers overpriced stuff that "just works"
> and on the other hand you have this "Linux" thing that nobody has heard of,
> it's gratis and I can find it on the web and many say it "just works", too...
> crazy, right? Well, at the time it was and to some extent, it still is. So the EU
> has funded lots of research in this area.
> One group of researchers that I happen to be familiar with recently
> received another bag of Euros and released code/methodologies to evaluate
> and compare open source projects. The principles they use to evaluate
> software are not that hard to find and are quite objective. For
> example: is there a book published about this project? If there is, chances
> are this project is popular enough for a publisher to sell copies. Is the
> project's documentation translated into multiple languages?
> If so, we can assume the project is popular. How long has the code been
> around? How large is the pool of contributors? Are there training programs
> offered? You get the gist.
> Following up on my previous crazy ideas (did I hear someone yell "keep 'em
> coming"?), probably a set of tags like:
> book-exists (or book-chapter-exists)
> translated-in-1-language (and its bigger brothers like translated-in-5)
> contributor-size-high (or -low; we can set a rule as we do for the
> diversity metric used in incubation/graduation)
> codebase-age-baby, -young and -mature (in classes, like less than 1, 1-3,
> and 3+ years old)
> would help a user understand that Nova or Neutron are different from
> (say) Barbican or Zaqar. These are just statements of fact, not a qualitative
> assessment of any of the projects mentioned. At the same time, I have the
> impression these facts would help our users make up their minds.
Just one thing: is it too late to change the name? "Tag" is pretty overloaded, and I rather like the sound of "badge". It would be nice to see projects working towards different new badges and carrying them proudly after earning them.
Oh, another one: I'm not convinced that 3+ years still means a mature project. I think there is room to look a bit outside our own sandbox and think about where we will be in 2, 3 or 5 years' time. Perhaps we need to change the governance again, or perhaps this could be something that stays flexible all along, but I would hate to call Nova, Swift, Glance etc. "ancient" or "granny" just because they have been around double or triple the mature threshold.
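For what it's worth, the codebase-age classes proposed above are simple enough to state as a rule. A minimal sketch (the thresholds and tag names are taken from the list in the quoted mail; the function name is my own invention):

```python
def codebase_age_badge(age_years: float) -> str:
    """Map a project's codebase age to one of the proposed badges:
    less than 1 year -> baby, 1-3 years -> young, 3+ years -> mature."""
    if age_years < 1:
        return "codebase-age-baby"
    elif age_years < 3:
        return "codebase-age-young"
    return "codebase-age-mature"
```

Which is exactly why the "mature" bucket bothers me: under this rule a 3-year-old project and a 10-year-old one earn the same badge.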