Hi,

I’m reaching out to you about the analysis that OpenInfra Foundation community managers have been running to understand the challenges that contributors and maintainers in OpenStack are facing, in order to improve the experience for both. In addition to running surveys, the anonymized and aggregated results of which I shared in my last email[1], we also collected a selection of metrics, similarly to the previous round[2], to get insight into the load and efficiency of OpenStack project teams.

Using the same method as before, we collected metrics for 23 project teams, compared to the 13 teams that served as the initial set in the first round, covering 5 release cycles from Bobcat to Flamingo. This time around we included all the teams that received survey responses in either this or the previous round.

We collected the following metrics:

- Review Efficiency Index (REI): The ratio of merged patches to opened patches
- Median and Average Time to Review: These metrics focus on the time that passes between the creation of a review and the first time someone reviews it
- Median and Average Time to Merge: These metrics show the time needed from the creation of a review to its approval and merging
- Median and Average Patchsets per Review: These metrics show the number of revisions a review has throughout its lifecycle
- # of Changes Opened
- # of Changes Closed
- # of Active Maintainers: Anyone in a project team who voted with CR+2, CR-2 or W+1 in a release cycle in any of the project team’s repos. This may not be an exact match to the people who are listed as core reviewers in the Gerrit core teams for the project team.
- # of Reviewers: Anyone who commented or voted in any way on changes in a release cycle.

We excluded the ‘#/% of Changes Opened but not Closed in a Release Cycle’ metric in this round.
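To give a sense of how metrics like these can be derived, here is a minimal sketch in Python. Please note that this is not the exact tooling we used for the analysis: the change records and their fields (‘created’, ‘first_review’, ‘merged’, ‘status’) are a made-up, simplified stand-in for the data Gerrit exposes, and the REI here is computed as closed (merged or abandoned) changes over opened changes, in line with how the REI results are discussed below.

    # Simplified, hypothetical change records; real data would come from Gerrit.
    from datetime import datetime
    from statistics import mean, median

    changes = [
        {"created": "2025-04-03", "first_review": "2025-04-05", "merged": "2025-04-20", "status": "MERGED"},
        {"created": "2025-04-10", "first_review": "2025-05-30", "merged": None, "status": "NEW"},
        {"created": "2025-05-01", "first_review": "2025-05-02", "merged": "2025-05-03", "status": "MERGED"},
        {"created": "2025-05-15", "first_review": None, "merged": None, "status": "ABANDONED"},
    ]

    def days_between(start, end):
        """Elapsed days between two ISO dates."""
        fmt = "%Y-%m-%d"
        return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

    opened = len(changes)
    closed = sum(1 for c in changes if c["status"] in ("MERGED", "ABANDONED"))
    rei = closed / opened  # Review Efficiency Index for the cycle

    # Time to Review: creation -> first review; Time to Merge: creation -> merge.
    time_to_review = [days_between(c["created"], c["first_review"])
                      for c in changes if c["first_review"]]
    time_to_merge = [days_between(c["created"], c["merged"])
                     for c in changes if c["merged"]]

    print(f"REI: {rei:.2f}")
    # Note how the single change that waited 50 days for its first review
    # pulls the average well above the median -- the average vs median
    # effect discussed further down in this mail.
    print(f"Time to review: median {median(time_to_review)} days, "
          f"average {mean(time_to_review):.1f} days")
    print(f"Time to merge: median {median(time_to_merge)} days, "
          f"average {mean(time_to_merge):.1f} days")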
Highlights in trends comparing the two rounds of metrics analysis:

* The average number of incoming changes per team dropped by 23%
  + With more teams included in the data set, the minimum number of incoming changes dropped to half, while the maximum number remained close to the same. This is due to adding teams to the data set that are less active.
  + Teams were still able to close changes at a similar, slightly higher rate
* The average number of reviewers per team dropped by 24%
  + The average number of maintainers also dropped by 19%
  + The ratio between the number of maintainers and reviewers dropped by 9%
* The review and merge time metrics are comparable with the previous round’s results

To avoid divisive comparisons between project teams’ efficiency and performance, we only share anonymized and aggregated data, and provide the community with a high-level picture of what we’ve found so far.

- REI:
  * Over the course of the 5 release cycles, most teams had an REI slightly above 1, which means that more changes got closed (merged or abandoned) than opened.
  * Still, only the Bobcat release had an REI lower than 1; the average across the 23 teams was 1.09.

- Patches Opened and Closed
  * On average throughout the 5 release cycles, teams received 319 new changes, with a minimum of 33 and a maximum of 902.
  * On average throughout the 5 release cycles, teams closed 343 changes, with a minimum of 32 and a maximum of 952.
  * Collectively over the 5 release cycles, the 23 teams received a total of 7352 new changes and closed 7889.

- Merge Times
  * On average throughout the 5 release cycles, it took teams 125 days to approve a review, with a median of 24 days.
  * The team with the lowest average needed 14 days, while their median settled at 1.6 days per cycle, which was also the lowest across all teams.
  * The team with the highest average needed 240 days, and they also had the highest median, at 84 days.

- Review Times
  * On average throughout the 5 release cycles, it took teams 38 days to conduct the first review on new changes, with a median of 6 days.
  * The team with the lowest average needed 2.5 days, while their median settled at 0.45 days per cycle, which was also the lowest across all teams.
  * The team with the highest average needed 96 days, while their median settled at 9 days.
  * The team with the highest median needed 35 days, while their average settled at 85 days.

Throughout the survey responses, challenges with review attention still got the highest number of votes, and turned out to be a challenge in most of the project teams that received responses. This was also the tendency shown in our analysis following the Epoxy release cycle. The overall median and average numbers between the two rounds are almost the same, even though the per-team averages show more significant differences.

Here I would like to note the difference between ‘average’ (a.k.a. ‘mean’) and ‘median’ numbers: the median shows the most common experience and is not influenced by outlying data points the same way the average is. In that regard, averages may provide a better indication of perceived performance, while medians are closer to the actual day-to-day reality. The overall median review and merge time numbers suggest a relatively good overall experience for all teams across all 5 release cycles; however, the average numbers show areas for improvement.

- # of Active Maintainers
  * On average, teams had 9 active maintainers in the release cycles, with one team having as few as 3 and another having as many as 20.

- # of Reviewers
  * On average, teams had 58 active reviewers in the release cycles, with the lowest at 17 and the highest at 146.

There is a slight decrease in the average number of reviewers compared to the previous round of metrics analysis. There still seem to be 6.4x as many reviewers as maintainers throughout the 23 project teams whose data we looked at. Accepting that not every reviewer is active throughout multiple release cycles, or even throughout a single one, there still seems to be an opportunity to capture some of the more casual reviewers to further increase review bandwidth and to nurture a group of new maintainers.

The final analysis, with which we will be reaching out to project teams, builds on the combination of the survey data and the metrics. Please note that we do not have the bandwidth to reach out to every project team at once, as we would like to work with teams on a strategy and next steps to address the challenges that we collectively uncover. In the coming days, we will start reaching out to the teams that received the most survey responses and will expand our outreach as bandwidth allows. Thank you for your patience and understanding.

Thanks,
Ildikó

[1] https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack....
[2] https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack....

———
Ildikó Váncsa
Director of Community
OpenInfra Foundation