[User-committee] User Survey Data Analysis

Roland Chan roland at aptira.com
Wed Nov 18 13:10:14 UTC 2015


I'm simply calling for analysis. The contributing factors could be anything.
A lot of people around here think distros are bad; that could be it. It
could be something else: verticals, vendors, almost anything.

I haven't the foggiest.

I would counsel against focusing on prod NPS deltas alone. Both prod and
dev/QA scored badly, and we will want to examine the broad spectrum of
results: what worked and what didn't.

Repeatability of the analysis would be useful too, from the perspective of
reducing costs over multiple surveys.
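As a rough illustration of what a repeatable cut might look like, here is a hedged sketch in Python. It assumes the survey export is a CSV with an integer 0-10 "nps_score" column plus segment columns such as "release" or "cloud_size"; all column and file names here are assumptions, not the actual survey schema.

```python
# Sketch of a repeatable NPS-by-segment analysis.
# Assumes a CSV export with an integer 0-10 "nps_score" column and
# segment columns (e.g. "release") -- names are hypothetical.
import csv
from collections import defaultdict


def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        return None
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)


def nps_by_segment(rows, segment):
    """Group responses by a segment column and compute NPS per group."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[segment]].append(int(row["nps_score"]))
    return {value: nps(scores) for value, scores in groups.items()}


def load_responses(path):
    """Load survey responses from a CSV export."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


# Usage (hypothetical file name):
# print(nps_by_segment(load_responses("survey.csv"), "release"))
```

Because the segmenting logic is a plain function over the raw rows, rerunning the same cuts on the next survey's export would only require pointing it at the new file.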

Roland

On Wed, 18 Nov 2015 23:58 Lauren Sell <lauren at openstack.org> wrote:

> I doubt anyone would argue against more data analysis :)
>
> I’d like to ask the independent analyst (who helped produce the report) to
> dig into production stage deployments with low NPS scores and see if there
> are any other correlations.
>
> Jon and I also threw out a few ideas on the list - release version, cloud
> size, projects used - are there any other factors you are specifically
> interested in analyzing?
>
> Cheers,
> Lauren
>
>
> On Nov 17, 2015, at 2:50 PM, Roland Chan <roland at aptira.com> wrote:
>
> That's exactly the sort of analysis required.
>
> Distribution of the data is only needed if the people who currently have
> access to it are unable to perform the analysis for some reason.
>
> First though, the committee must resolve that the analysis is needed. Once
> that is done we can investigate what needs to happen.
>
> Do we have agreement on whether further analysis is warranted?
>
> Roland
>
> On Wed, 18 Nov 2015 02:28 Jonathan Proulx <jon at csail.mit.edu> wrote:
>
>>
>> Perhaps it would be valuable to take a couple more cuts of the
>> deployment data (size, release, projects used etc) by promoters and
>> detractors?
>>
>> That might be informative without requiring further distribution of
>> raw data.  Are there any other specific comparisons people think
>> might be useful?
>>
>> Remembering that this is a voluntary survey, mostly announced through
>> channels frequented by people involved in the community to some extent,
>> I'd expect to see a relatively strong positive bias in the results. So
>> if anyone has actionable ideas on how to get deeper, I'm for it, but
>> I'm also not sure I'd put too much importance on that metric by itself,
>> for reasons others have mentioned.
>>
>> -Jon
>>
>> --
>>
> _______________________________________________
>
>
> User-committee mailing list
> User-committee at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee
>
>
