[openstack-dev] Avoiding regression in project governance

Stefano Maffulli stefano at openstack.org
Thu Mar 12 00:26:08 UTC 2015


On Wed, 2015-03-11 at 17:59 -0500, Ed Leafe wrote:
> The longer we try to be both sides of this process, the longer we will
> continue to have these back-and-forths about stability vs. innovation.

If I understand your model correctly, it works only for users/operators
who decide to rely on a vendor to consume OpenStack. There are quite
large enterprises out there who consume the code directly as it's
shipped from git.openstack.org, some from trunk, others from the stable
release .tgz: these folks can't count on companies A, B, C or D to put
resources into fixing their problems, because they don't talk to those
companies.

One thing I do like about your proposal, though, is when you say:

> So what is "production-ready"? And how would you trust any such
> designation? I think that it should be the responsibility of groups
> outside of OpenStack development to make that call. 

This problem has been bugging the European authorities for a long time,
and they've invested quite a lot of money in finding tools that would
help IT managers in the public (and private) sector estimate the
quality of open source code. It's a big deal, after all, when on one
hand you have Microsoft and IBM sales folks selling your IT managers
overpriced stuff that "just works", and on the other hand you have this
"Linux" thing that nobody has heard of, it's gratis, you can find it on
the web, and many say it "just works" too... crazy, right? Well, at the
time it was, and to some extent it still is. So the EU has funded lots
of research in this area.

One group of researchers that I happen to be familiar with recently
received another bag of Euros and released code/methodologies to
evaluate and compare open source projects[1]. The principles they use
to evaluate software are not hard to find and are quite objective. For
example: is there a book published about the project? If there is,
chances are the project is popular enough for a publisher to sell
copies. Is the project's documentation translated into multiple
languages? Then we can assume the project is popular. How long has the
code been around? How large is the pool of contributors? Are there
training programs offered? You get the gist.

Following up on my previous crazy ideas (did I hear someone yell "keep
'em coming"?), probably a set of tags like:

   book-exists (or book-chapter-exists)
   specific-training-offered
   translated-in-1-language (and its bigger brothers translated-in-5
and translated-in-10+languages)
   contributor-size-high (or -low, and we can set a rule as we do for
the diversity metric used in incubation/graduation)
   codebase-age-baby, -young and -mature (in classes, like less than 1,
1-3, and 3+ years old)

would help a user understand that Nova or Neutron are different from
(say) Barbican or Zaqar. These are just statements of fact, not a
qualitative assessment of any of the projects mentioned. At the same
time, I have the impression these facts would help our users make up
their minds.
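Purely as an illustration, tags like the ones above could be computed
mechanically from observable project facts. A minimal sketch in Python
follows; the Project fields, thresholds, and example numbers are all
made up for the sake of the example, not real project data or a real
proposal for the governance tooling:

```python
from dataclasses import dataclass


@dataclass
class Project:
    # All fields are hypothetical inputs that some external group
    # (not the developers themselves) would collect and maintain.
    name: str
    books_published: int
    training_offered: bool
    translations: int      # number of languages the docs exist in
    contributors: int
    age_years: float


def maturity_tags(p: Project) -> list[str]:
    """Map raw project facts to descriptive (not qualitative) tags."""
    tags = []
    if p.books_published > 0:
        tags.append("book-exists")
    if p.training_offered:
        tags.append("specific-training-offered")
    # Pick the largest translation bracket that applies.
    for threshold, tag in [(10, "translated-in-10+languages"),
                           (5, "translated-in-5-languages"),
                           (1, "translated-in-1-language")]:
        if p.translations >= threshold:
            tags.append(tag)
            break
    # The cutoff of 100 is arbitrary; a real rule would be set the way
    # the diversity metric is set for incubation/graduation.
    tags.append("contributor-size-high" if p.contributors >= 100
                else "contributor-size-low")
    # Age classes: <1, 1-3, 3+ years.
    if p.age_years < 1:
        tags.append("codebase-age-baby")
    elif p.age_years <= 3:
        tags.append("codebase-age-young")
    else:
        tags.append("codebase-age-mature")
    return tags


# Example with invented numbers for an established project:
nova = Project("nova", books_published=3, training_offered=True,
               translations=12, contributors=400, age_years=4.5)
print(maturity_tags(nova))
```

The point being: every tag is a reproducible statement of fact, so any
outside group could re-run the evaluation and get the same answer.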

Thoughts?

[1]
http://www.ict-prose.eu/2014/12/09/osseval-prose-open-source-evaluation-methodology-and-tool/