[openstack-dev] [metrics] Summary of the BOF session on software engineering metrics

Jesus M. Gonzalez-Barahona jgb at bitergia.com
Wed Oct 28 10:32:14 UTC 2015

Hi all,

This morning we had the BOF session on software engineering metrics for
OpenStack. Since we agreed on continuing the conversation in this
mailing list, here is a summary of the session, based on the notes
taken by Dani (in CC).

First of all, the link to the slides we used to frame the session:


Now, some comments about what we talked about.

The general idea of the BOF can be found in slide #3: OpenStack is a
leading project in open development metrics, and the idea was to talk
about current experiences with this, and about what we would like to
have in the future.

Current dashboards, collections of metrics, etc.: in addition to the
ones presented in the slides (Stackalytics, Activity Dashboard,
Kibana-based dashboards, Russell Bryant's stats, Bitergia reports,
Jenkins Logstash dashboard), some others, listed below, were mentioned.
If you know of any other, please comment about it in this thread.

- Somebody has been looking at patterns of how people contribute over
time.
- There is some in-house (non-public) machinery to check the total
time to deploy in companies for their customers.
- A tool reporting on unit test coverage was mentioned.

We discussed how each of these efforts is targeted at different uses,
at answering different questions. Some are focused on the contributions
by different actors, such as persons or companies (e.g., Stackalytics);
some provide insight into how development processes are happening and
performing (such as the Activity Board, Russell's stats, or the Jenkins
Logstash dashboard); some are summaries of interest about certain
aspects (such as the Bitergia reports). In most cases they overlap.

It was also mentioned that OpenStack is one of the projects that has
advanced furthest in the idea of open development analytics, thanks in
part to all the previous efforts, and to a certain state of mind that
makes participants in the OpenStack community more aware of the
importance of metrics than in other projects.

Then we discussed use cases of metrics within OpenStack: cases that
are happening now, cases that are happening in other communities but
could be translated, and others not yet happening that could be
interesting. In addition to those mentioned in slide #11, some others
were:

- QA: being able to look at code test coverage, duplication of
testing, and test cycle time.
- Hotspots: places in the code that more people are touching (the more
complex areas).
- Bugs: which files are having more bugs? How complex are they?
- Complexity: which changes are touching more complex areas than
others?
- Operators: stats about backporting fixes to stable branches.
- Metrics in stable branches, such as failures on stable branches.
- Dependencies on feature requests: having a new feature tested
internally and later uploaded upstream, only to find that it breaks
even when it previously worked in master.
- Frequency of rebase.
- How fast fixes for critical vulnerabilities are merged into master.
- External example: Wikimedia Gerrit clean-up days: lists of
changesets, times, etc. to track the impact of the day.
- External example: the Xen community checking the current status of
the time to merge, to see if their perception was in line with real
metrics.
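As an illustration of the "hotspots" idea above, here is a minimal
sketch (not any of the tools mentioned in the session, and the sample
data is made up) that ranks files by how many distinct people have
touched them, given per-commit (author, files) records such as one
could extract from the version control history:

```python
from collections import defaultdict

def hotspots(commits):
    """Rank files by number of distinct authors touching them.

    Files touched by more people are candidate hotspots, which the
    discussion suggested often correspond to more complex areas.
    """
    authors_per_file = defaultdict(set)
    for author, files in commits:
        for f in files:
            authors_per_file[f].add(author)
    # Sort by distinct-author count (descending), then by file name.
    return sorted(((f, len(a)) for f, a in authors_per_file.items()),
                  key=lambda item: (-item[1], item[0]))

# Hypothetical sample data: one (author, files changed) pair per commit.
commits = [
    ("alice", ["nova/compute.py", "nova/api.py"]),
    ("bob",   ["nova/compute.py"]),
    ("carol", ["nova/compute.py", "nova/utils.py"]),
]
print(hotspots(commits))
# → [('nova/compute.py', 3), ('nova/api.py', 1), ('nova/utils.py', 1)]
```

A real analysis would of course feed this from the actual repository
history and combine it with a complexity measure, as discussed above.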

Some other issues that were discussed include:

- Should SonarQube be used, and the results made public when
releasing? Or every day?
- It may be hard to compare metrics in Stackalytics with other
projects, given that it is very specific to OpenStack. On the other
hand, this specificity makes it very well adapted to OpenStack.
- It would be interesting to track contributions coming from operators.
- Having public performance metrics could be of interest, although
that may be a bit beyond the current discussion on software
development metrics.

As a summary, we had time to agree on two points (see slide #13):

* Open development analytics is a core value of the OpenStack
community.
* Let's foster more lively discussions about metrics in OpenStack,
using the openstack-dev mailing list.

Please, comment on any other conclusion that you may propose (whether
you were in the BOF or not) by answering in this thread.

Thanks to all of you who made this BOF possible! Please, those of you
who attended, point out anything that may be missing in these notes.



Bitergia: http://bitergia.com
/me at Twitter: https://twitter.com/jgbarah
