[User-committee] User Survey
Lauren Sell
lauren at openstack.org
Mon Aug 25 17:40:58 UTC 2014
Of course, sorry for not providing more explanation on the NPS-style question. I heard several board members mention adding a user satisfaction question to the survey, specifically the popular net-promoter score format.
Basically, it would mean adding one question with a 0-10 response range: “How likely is it that you would recommend OpenStack to a friend or colleague?”
We would need to discuss whether NPS is the best fit for our open source community, or if there’s a better approach.
==
More background: http://en.wikipedia.org/wiki/Net_Promoter
"NPS is based on a direct question: How likely is it that you would recommend our company/product/service to a friend or colleague? The scoring for this answer is most often based on a 0 to 10 scale.
Promoters are those who respond with a score of 9 or 10 and are considered loyal enthusiasts. Detractors are those who respond with a score of 0 to 6 - unhappy customers. Scores of 7 and 8 are ignored. NPS is calculated by subtracting the percentage of customers who are Detractors from the percentage of customers who are Promoters."
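To make the arithmetic concrete, here is a minimal sketch of the tally (plain Python; the example list of 0-10 ratings is made up, not survey data):

def net_promoter_score(ratings):
    """Compute NPS from a list of 0-10 ratings.

    Promoters (9-10) and detractors (0-6) are counted; passives (7-8)
    only contribute to the denominator. The result ranges from -100 to +100.
    """
    if not ratings:
        raise ValueError("no ratings to score")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# 5 promoters, 3 passives, 2 detractors out of 10 responses -> NPS of 30.0
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))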
On Aug 22, 2014, at 11:55 AM, Tim Bell <Tim.Bell at cern.ch> wrote:
>
> Lauren,
>
> Can you expand on the NPS-style survey?
>
> Some help with the data processing would certainly be welcome as we're currently in transition with respect to user committee staffing.
>
> Equally, externally driven surveys would be a useful cross-check on the extent to which a voluntary survey is reaching the community. Surveys such as the ones we have run so far are naturally biased toward those who have some time to respond and are in a position to share (even within the constraints of anonymity).
>
> Tim
>
> On 22 Aug 2014, at 19:37, Lauren Sell <lauren at openstack.org> wrote:
>
>> I have a few thoughts, as well.
>>
>> As far as question updates:
>> - In the last Board meeting, there was talk about adding an NPS-style satisfaction question to the survey that we could benchmark over time
>> - One thing that has been enlightening is seeing the number of NFV-related speaking submissions for the Paris Summit. The NFV community is definitely picking up, and I wonder if we should be capturing more or different information in the survey, specifically the types of clouds, workloads and types of organizations (carriers/telcos). We could probably pull in some members of the NFV team for their input to make sure we’re not missing anything…without adding too many questions, of course :)
>> - I was referencing the workloads data for a project the other day, and I think the options could use some cleanup / consolidation. I can take a stab at making edits, but we should probably solicit input from the WTE team or other working groups if you agree we should consider refining the responses.
>>
>> As far as methodology:
>> - I agree with expiring data that has not been updated in 12 months, although it will still be helpful to have a historical view of trends. It may be worth sending a separate email to the administrators of those deployments as well.
>> - I know that in previous cycles it’s taken quite a few hours to analyze the data and qualitative responses, and it’s been understandably difficult for user committee volunteers to pull away from their day jobs to do it. I know Ryan Lane invested quite a few hours in the last cycle, but he won’t be involved this time around. Tom has of course been a huge help, but we also have Foundation budget available to bring in another resource with professional survey experience. I think it would be beneficial to have professional and independent help that we could put under NDA (no ties to vendor companies). I’m not thinking of an analyst firm in this case, but rather an independent researcher or boutique firm that can provide some extra manpower working with the user committee. What do you all think?
>>
>> Separate but related, I’m interested in commissioning a broader market-share / industry report with an analyst firm like 451 or IDC. I think people often conflate the user survey with a comprehensive list of our users, when it’s really an opt-in chance to provide feedback to our development community and broader ecosystem. I think having a macro, statistically relevant view of OpenStack adoption across the market would complement the community user survey and make an interesting companion to it.
>>
>> Thanks!
>> Lauren
>>
>>
>> On Jul 28, 2014, at 2:55 PM, Matt Van Winkle <mvanwink at rackspace.com> wrote:
>>
>>> Sorry for the delay in commenting. Playing catch-up after being out.
>>>
>>> From: Tim Bell <Tim.Bell at cern.ch>
>>> Date: Tuesday, July 15, 2014 1:31 PM
>>> To: "User-committee at lists.openstack.org" <User-committee at lists.openstack.org>
>>> Subject: [User-committee] User Survey
>>>
>>>>
>>>> We have run a user survey prior to each of the OpenStack summits for the past two years.
>>>>
>>>> My default proposal would be to run another one before Paris to see how the community is evolving.
>>>>
>>>
>>> I would agree.
>>>
>>>> In Hong Kong, we had not planned to present, but in the end an impromptu meetup was organised (which led, amongst other things, to the start of the operations meetup).
>>>>
>>>> Given that we’re hoping to organise an operations ‘design’ style track at Paris, I would propose to include a quick highlight presentation in that session along with a more detailed document for those who wish to drill down further.
>>>
>>> This seems like a great fit to me. The only concern I would have is that the user survey captures two voices: those who interface with OpenStack APIs to deploy applications (end users) and those who deploy and manage OpenStack itself. Would those in the first group find their way to the session? It's probably more of a logistical question: making sure that the session times in the new "Ops" pieces fall on a day when the main summit is underway as well. (I'm trying to recall the dates proposed in a meeting in ATL, and I believe there was some overlap between the time for the Ops meetups and the main conference.)
>>>
>>>>
>>>> The other question is the expiry of data for clouds which have not been updated. I think that if the data on a deployment has not been updated within 12 months, it should not be included in the statistics.
>>>>
>>>
>>> Seems reasonable to me
>>>
>>>> Given the timing of the summit at the start of November, a timing such as below would seem reasonable:
>>>>
>>>> - 1st September – Questions finalised
>>>> - 15th September – User Survey pages updated and published on openstack.org. Requests to update sent out.
>>>> - 7th October – Survey closed
>>>> - 4th November – Results published
>>>>
>>>> Any thoughts?
>>>>
>>>
>>> Sounds good – how can I help?
>>>
>>>> Tim
>>>>
>>>>
>>>
>>> Thanks!
>>> Matt
>>
>>
>