Hi,

I’m reaching out to you about the analysis that the OpenInfra Foundation community managers have been running to understand the challenges that contributors and maintainers in OpenStack are facing, in order to improve the experience for both.

In addition to running surveys, the anonymized and aggregated results of which I shared in my last email[1], we also collected a selection of metrics to get insight into the load and efficiency of OpenStack project teams. We pulled data from the Bitergia OpenStack dashboard[2] with a script[3]. We collected metrics for 13 project teams covering 5 release cycles, from Antelope to Epoxy.

As we’ve been collecting survey results and metrics at the same time, the project lists for the two efforts haven’t been synchronized yet. We picked the initial set of projects for the metrics analysis to get an overview from a combination of well-established and newer teams. As we progress further in the process, we will match the data sets more closely.

We collected the following metrics (a short illustrative sketch of how they can be derived from per-change data follows the backlog numbers below):

- Review Efficiency Index (REI): merged patches over opened patches
- Median and Average Time to Review: these metrics focus on the time passed between the creation of a review and the first time it gets reviewed by someone
- Median and Average Time to Merge: these metrics show the time needed from the creation to the approval of a review
- Median and Average Patchsets per Review: these metrics show the number of revisions a review has throughout its lifecycle
- # of Changes Opened
- # of Changes Closed
- #/% of Changes Opened but not Closed in a Release Cycle
- # of Maintainers: anyone in a project team who voted with CR+2, CR-2 or W+1 in a release cycle in any of the project team’s repos. This may not be an exact match to the people who are listed as core reviewers in the Gerrit core teams for the project team.
- # of Reviewers: anyone who commented or voted in any way on changes in a release cycle.

The intention with building a baseline from a set of metrics is to be able to determine whether or not the current efforts are making any positive impact, or if we should adjust the approach we’ve been taking in any way. To avoid people making comparisons between project teams’ efficiency and performance, we only share anonymized and aggregated data and provide a high-level picture to the community of what we’ve found so far.

- REI:
  * Over the course of the 5 release cycles, most teams had an REI slightly above 1, which means that more changes got closed (merged or abandoned) than opened.
  * Only the Bobcat release had an REI lower than one; the average across the 13 teams was 0.99.

- Patches Opened and Closed
  * On average throughout the 5 release cycles, teams got 412 new reviews, with a minimum of 64 and a maximum of 873.
  * On average throughout the 5 release cycles, teams closed 441 patches, with a minimum of 69 and a maximum of 1012.
  * Collectively over the 5 release cycles, the 13 teams received a total of 5356 new changes and closed 5730 reviews.

- Review backlog
  * On average throughout the 5 release cycles, teams had about 6% of new changes that didn’t get closed (merged or abandoned) within the release cycle, with a minimum of 1.17% and a maximum of 15.8%.
  * Looking at release cycles, from Antelope to Dalmatian the percentage of changes that didn’t close within the release cycle fluctuated between 5% and 8.2%, which then jumped to 12.6% in Epoxy.
  * Of the 5356 new changes throughout the 5 release cycles, 331 took longer than a release cycle to close, or might still be open.
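To make these figures easier to interpret, here is a toy Python sketch, not the actual query script[3], of how the REI, the unresolved backlog and the average versus median time to merge can be derived from per-change data. The field names and sample values below are made up purely for illustration, and the REI is taken here as changes closed (merged or abandoned) over changes opened, as described in the findings above.

    from datetime import datetime
    from statistics import mean, median

    # Hypothetical change records; the real data comes from the Bitergia
    # dashboard[2], these values only exist to make the sketch runnable.
    changes = [
        {"created": datetime(2025, 1, 6),  "merged": datetime(2025, 1, 8),  "status": "MERGED"},
        {"created": datetime(2025, 1, 7),  "merged": datetime(2025, 1, 12), "status": "MERGED"},
        {"created": datetime(2025, 1, 10), "merged": datetime(2025, 4, 10), "status": "MERGED"},
        {"created": datetime(2025, 2, 3),  "merged": None,                  "status": "ABANDONED"},
        {"created": datetime(2025, 2, 10), "merged": None,                  "status": "NEW"},
    ]

    opened = len(changes)
    closed = [c for c in changes if c["status"] in ("MERGED", "ABANDONED")]
    merged = [c for c in changes if c["status"] == "MERGED"]

    rei = len(closed) / opened                           # changes closed over changes opened
    backlog_pct = 100 * (opened - len(closed)) / opened  # opened but not closed

    # Time to merge, in days, for the merged changes only
    merge_days = [(c["merged"] - c["created"]).days for c in merged]
    print(f"REI: {rei:.2f}, backlog: {backlog_pct:.0f}%")
    print(f"time to merge - average: {mean(merge_days):.1f} days, median: {median(merge_days)} days")

Note how the single long-running change in the sample pulls the average time to merge (about 32 days) well above the median (5 days); the same effect shows up in the per-team merge and review times below.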
- Merge Times
  * On average throughout the 5 release cycles, it took teams 129 days to approve a review, with a median of 25 days.
  * The team with the lowest average needed 12 days, while their median settled at 5.7 days per cycle.
  * The team with the lowest median needed 5.2 days, while their average time window was 126 days.
  * The team with the highest average needed 247 days, and also had the highest median, at 88 days.

- Review Times
  * On average throughout the 5 release cycles, it took teams 36 days to conduct the first review on new changes, with a median of 5 days.
  * The team with the lowest average needed 32 days, while their median settled at 0.9 days per cycle.
  * The team with the lowest median needed 0.7 days, while their average time window was 22 days.
  * The team with the highest average needed 88 days, and also had the highest median, at 16 days.

Throughout the survey responses, challenges with review attention got the highest number of votes and turned out to be a challenge in every project team that received responses. This is in line with the relatively long time frame for first reviews shown by the average numbers above. Here I would like to note the difference between ‘average’ and ‘median’ numbers: the latter reflects the most common experience and is not influenced by outlying data points the same way the average is. The overall median review and merge time numbers suggest a relatively good experience for all teams across all 5 release cycles; the average numbers, however, show areas for improvement.

- # of Maintainers
  * On average, teams had 11 active maintainers in the release cycles, with one team having as few as 5 and another as many as 21.

- # of Reviewers
  * On average, teams had 76 active reviewers in the release cycles, with the lowest team at 20 and the highest at 148.

There seem to be about 7x as many reviewers as maintainers throughout the 13 project teams whose data we looked at. Accepting that not every reviewer is present throughout multiple release cycles, or even throughout a single one, there might be an opportunity to engage some of the more casual reviewers to further increase review bandwidth and nurture a group of new maintainers.

The final analysis, with which we will be reaching out to project teams, builds on the combination of survey data and metrics. Please note that we do not have the bandwidth to reach out to every project team at once, as we would like to work with teams on strategy and next steps to address the challenges that we collectively uncover. In the following days, we will start reaching out to the teams that got the most survey responses and will expand our outreach as bandwidth allows. Thank you for your patience and understanding.

Thanks,
Ildikó

[1] https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack....
[2] https://openstack.biterg.io/app/dashboards#/view/Overview
[3] https://opendev.org/opendev/engagement/src/branch/master/tools/metrics/query...
On May 19, 2025, at 10:12, Ildiko Vancsa <ildiko.vancsa@gmail.com> wrote:
Hi,
I’m reaching out to give you an update on the surveys that the OpenInfra Foundation community managers have been running. Our goal has been to understand the challenges that contributors and maintainers in OpenStack face, in order to improve the experience for both.
To protect the anonymity of the survey respondents we are sharing anonymized and aggregated survey results reflecting feedback we received throughout OpenStack.
As a next step, we will reach out to the project teams that received more than one survey response, starting with the team(s) that received the most, to share team-specific feedback and understand how we can help address the challenges raised.
Below please find highlights of the two surveys:
__MAINTAINER SURVEY__
We received 15 responses from 11 project teams, with 36% of teams receiving more than one response.
We asked respondents to rate the below statements between 1 (very bad) and 5 (excellent); the average ratings were:

- Code review - Changes you propose are reviewed in a timely manner: 2.73
- Code review - You receive actionable feedback from other reviewers: 4.26
- Code review - Automated test failures quickly direct you to problems in your changes: 3.6
- Contributor docs - It is comprehensive to cover processes and good practices: 3.73
- Contributor docs - It is up to date: 3.53
53% of respondents said they didn’t refer to their project’s Contributor Guide during the Epoxy release cycle, while 20% referred to it 1-9 times, 13% 10-19 times, and 13% 20+ times.
The survey also asked respondents to mark which of the listed issues they face while trying to land their own changes, as well as while trying to land other contributors’ changes. Please note that respondents were allowed to choose as many options as were applicable to them.
The below numbers are the percentages of respondents who faced each challenge:
- Landing their own changes
  * Trouble getting review attention - 73%
  * Other - 33%
  * No changes pass CI - 13%
  * Need more assistance to proceed with review feedback - 6%
  * Expected to expand on review's scope - 6%
  * Disagreement or conflicting review comments - 6%
- Landing other people’s changes
  * Change owner/stakeholders are unreachable to discuss change - 66%
  * Change owner can't/won't add missing pieces to the change - 66%
  * Change owner's response is slow - 60%
  * Change is lower priority than others - 40%
  * Change is beyond expertise - 40%
  * Change broken, doesn't pass CI - 40%
  * Testing is broken for the project - 20%
  * Other - 13%
  * Change needs a BP/spec - 6%
The below numbers are the percentages of project teams that faced each challenge (a short sketch after these lists illustrates how the respondent-level and team-level figures are derived):
- Landing their own changes
  * Trouble getting review attention - 64%
  * Other - 36%
  * No changes pass CI - 18%
  * Need more assistance to proceed with review feedback - 9%
  * Expected to expand on review's scope - 9%
  * Disagreement or conflicting review comments - 9%
- Landing other people’s changes
  * Change owner/stakeholders are unreachable to discuss change - 82%
  * Change owner can't/won't add missing pieces to the change - 73%
  * Change owner's response is slow - 82%
  * Change is lower priority than others - 45%
  * Change is beyond expertise - 55%
  * Change broken, doesn't pass CI - 45%
  * Testing is broken for the project - 27%
  * Other - 18%
  * Change needs a BP/spec - 9%
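As a rough illustration of how the respondent-level and team-level breakdowns relate (the same applies to the contributor survey numbers further below), here is a small Python sketch. The data shape and team names are made up, and it assumes that a team counts as facing a challenge when at least one of its respondents selected it.

    # Made-up survey responses; each respondent picked zero or more challenges.
    responses = [
        {"team": "team-a", "challenges": {"Trouble getting review attention"}},
        {"team": "team-a", "challenges": {"Other"}},
        {"team": "team-b", "challenges": {"Trouble getting review attention",
                                          "No changes pass CI"}},
        {"team": "team-c", "challenges": set()},
    ]

    challenge = "Trouble getting review attention"
    teams = {r["team"] for r in responses}

    # A respondent counts once; a team counts if any of its respondents picked the issue.
    pct_respondents = 100 * sum(challenge in r["challenges"] for r in responses) / len(responses)
    pct_teams = 100 * len({r["team"] for r in responses if challenge in r["challenges"]}) / len(teams)

    print(f"{challenge}: {pct_respondents:.0f}% of respondents, {pct_teams:.0f}% of teams")

This is also why the team-level figures tend to run a bit higher than the respondent-level ones: a single maintainer-survey respondent is 1 of 15, while a single team is 1 of 11, so the same lone data point weighs more at the team level.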
__CONTRIBUTOR SURVEY__
We received 28 responses from 15 project teams, with 53% of teams receiving more than one response.
We asked respondents to rate the below statements between 1 (very bad) and 5 (excellent); the average ratings were:

- Contributor experience - Changes you propose are reviewed in a timely manner: 2.5
- Contributor experience - You receive actionable feedback from other reviewers: 4.25
- Contributor experience - Automated test failures quickly direct you to problems in your changes: 3.64
- Contributor documentation - You were able to find information about the processes the project team is using: 3.64
- Contributor documentation - It helped you to apply better practices throughout your contribution journey and achieve results faster: 3.57
- Contributor documentation - It is easy to discover: 3.32
- Contributor documentation - It is easy to navigate: 3.36
The survey also asked respondents to mark which of the listed issues they face while trying to land their own changes. Please note that respondents were allowed to choose as many options as were applicable to them.
The below numbers are the percentages of respondents who faced each challenge:

- Trouble getting review attention - 68%
- Other - 36%
- Unable to determine priorities - 18%
- Reviewers keep coming back with new requests - 11%
- Need more clarification on feedback to move forward - 7%
- Reviewer expectation or consensus shifts over time - 7%
- Expected to expand scope to address other project issues - 7%
- Reviewers disagree or give conflicting feedback - 4%
- Asked to make changes deviating from past consensus - 4%
- Reviewer forgets to un-block change - 4%
- Unhelpful or incorrect reviews - 4%
The below numbers are the percentages of project teams that faced each challenge:

- Trouble getting review attention - 73%
- Other - 47%
- Unable to determine priorities - 20%
- Reviewers keep coming back with new requests - 20%
- Need more clarification on feedback to move forward - 13%
- Reviewer expectation or consensus shifts over time - 13%
- Expected to expand scope to address other project issues - 13%
- Reviewers disagree or give conflicting feedback - 7%
- Asked to make changes deviating from past consensus - 7%
- Reviewer forgets to un-block change - 7%
- Unhelpful or incorrect reviews - 7%
While most challenges affect some or many projects, the biggest issues across the board seem to be getting attention from reviewers and delays on both the reviewer and the change owner side. This is not necessarily surprising, knowing that many, if not most, contributors and maintainers work on OpenStack part-time, or even outside of their day jobs. With the limited bandwidth that the community has, prioritizing work items and clear communication are more important than ever for the success and sustainability of the project.
OpenInfra community managers will start reaching out to the project teams with the most responses and begin working on addressing the challenges they face. As we progress with next steps, we will also share practices with the OpenStack community at large to help improve the contributor experience for everyone.
While the official survey deadline has passed, we are still looking for feedback. If you missed filling out the survey(s), please provide your input now!
~ Please fill out this survey separately for every OpenStack project you contributed to during the Epoxy release: https://openinfrafoundation.formstack.com/forms/openstack_contributor_satisf...

~ Please fill out this survey separately for every OpenStack project you were a core reviewer of during the Epoxy release: https://openinfrafoundation.formstack.com/forms/openstack_maintainer_satisfa...
Thanks,
Ildikó
———
Ildikó Váncsa
Director of Community
Open Infrastructure Foundation
On Apr 2, 2025, at 07:27, Jeremy Stanley <fungi@yuggoth.org> wrote:
On 2025-04-01 19:49:13 +0000 (+0000), Jeremy Stanley wrote:
As the next step in this ongoing effort to improve the contributor and maintainer experience, community managers on the OpenInfra Foundation staff worked together to convert the pain points identified in prior phases into a pair of very quick (2-3 minute) anonymous surveys... [...]
Many thanks to Jens for pointing out in IRC that these are not entirely anonymous. The survey does require you to supply a name and E-mail address so that the foundation staff can reach out for clarification if needed on any free-form responses, and to help weed out bogus responses from bots or vandals.
Your responses are treated confidentially, however, as only foundation staff will have access to the raw response data which will be thoroughly anonymized and aggregated before results are shared publicly with the community. The included contact information will only be used for purposes relating to the surveys, not for any other foundation activities.
Please let me know if you have any questions, of course!
--
Jeremy Stanley