[User-committee] User Survey Data Analysis

Roland Chan roland at aptira.com
Tue Nov 17 01:04:47 UTC 2015


On 17 November 2015 at 11:24, Stefano Maffulli <stefano at openstack.org>
wrote:

> On 11/16/2015 03:50 PM, Roland Chan wrote:
> > I'm requesting further analysis of the user survey data in order to
> > determine what factors may be contributing to the deterioration of the
> > NPS results over the last two surveys.
>
> This is one of the cases where I'm not particularly concerned about the
> change in the score but in its absolute value: the NPS in the latest
> edition could probably be considered a new baseline rather than a
> downward trend, because the sample size of the survey has changed so
> dramatically.
>

Yes, it could. The detractor numbers from the last survey alone still
warrant investigation, in my opinion.
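For context, the "detractor" bucket comes from the standard NPS breakdown: respondents rate 0-10, promoters score 9-10, detractors 0-6, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch of that calculation (with hypothetical ratings, not the actual survey data):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical responses, for illustration only.
sample = [10, 9, 9, 8, 6]
print(nps(sample))  # → 40.0
```

Because every detractor subtracts a full percentage point while passives count for nothing, even a modest shift of respondents from passive to detractor moves the score sharply, which is why the detractor counts are worth examining on their own.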

I'm not considering the change as a trend yet, but clearly something
happened, and ignoring it would be foolhardy. We could be looking at
anything from a significant change in user sentiment to an innocuous
mistake in the execution of the survey process. Or anything in between.

Whatever it is, finding out sooner rather than later would be beneficial,
particularly if we are planning to take action based on the results at some
point.


> Also, I'm not sure exactly how valuable the NPS score is for the thing
> that is called 'openstack' in the context of the survey: is that the
> upstream code? the packages from vendors? the old unmaintained
> releases? neutron? devstack? How valuable is that number, really?
>

It's the user experience. The detail comes from further investigation. I
imagine that the survey data contains some of this information, hence the
request for analysis.

Investigation from other angles, such as the Diversity working group, has
suggested that tools, people and process are all sources of dissatisfaction.


>
> > I don't know what form the data for the last two user surveys is, but if
> > it's in surveymonkey,
>
> it's not in surveymonkey, it's in a custom-built tool managed by the
> Foundation. Access to the data is restricted to very few individuals
> because of its super-sensitive content.
>
>
OK. Then we need to determine whether the people with access are able to
perform the analysis or whether the data can be anonymised and/or
aggregated sufficiently to allow analysis outside of that group.

That will tell us if analysis is feasible.


> > we had some good results using their built in
> > toolset (credit to Lauren for that) with the Diversity survey results.
>
> Speaking of which, I heard rumors that the results were published at the
> Board meeting in Tokyo but I haven't seen that presentation made public
> yet. Did I miss the public announcement?
>

There wasn't an announcement per se. The presentation is in the board
agenda:

https://wiki.openstack.org/wiki/Governance/Foundation/26Oct2015BoardMeeting

Minutes of the meeting don't seem to be out yet, but the presentation
wasn't confidential, so it will be made public.

Roland
