[Openstack-track-chairs] the meaning of the 'How to contribute' track

Lauren Sell lauren at openstack.org
Tue Aug 18 21:46:24 UTC 2015


Thanks for the thoughtful reply, Florian. I’ll share some of my thoughts below, as well. 

I’m actually enjoying the great conversation on the mailing list versus just getting survey results :)

> On Aug 18, 2015, at 1:05 PM, Florian Haas <florian.haas at hastexo.com> wrote:
> 
> Warning: longish email ahead, consume only if fully caffeinated.
> 
> Hi everyone,
> 
> If I may humbly weigh in here: while having a survey tool (both for
> track chairs and for attendees) will be great, having an open
> discussion here on the list probably doesn't hurt either.
> 
> On the issue of whether community voting has merit: I'm not a fan of
> conferences selected purely by a small group of curators, and in fact
> recall trying to convince the OSCON folks to consider a community
> voting scheme some time ago, before we had it for the Summit.

I like voting for a few reasons:
- I think knowing your session is going to be posted publicly and voted on by the community puts peer pressure on the quality of the submission
- I like the transparency it provides by publishing all submissions; some people like having access to them to analyze hot issues or even to recruit speakers/topics for other community events
- Of course, it gives the community a feedback mechanism, so as Florian said, it’s not just a small group of people making the decisions
- And finally, it gets people excited about the Summit when the presentations go live and they start to see topics and themes emerge for the event

That said, if ultimately we decide that voting is not the best feedback mechanism and we’re not meeting any of those goals, I’m open to killing it. We continue to iterate (and hopefully improve!) each Summit cycle, and we always want to be responsive to feedback.

> However, I do realize that over the past few summits, the "quality" of
> community votes (defined as the degree to which community voting
> results are a good indicator of talk quality) has been diminishing.
> This is partly due to the fact that we have so many talks available to
> vote on that it's no longer possible for anyone but a select few to
> review all of them, and partly due to people simply voting based on
> company affiliation.

The number of submissions is definitely adding complexity and load to every step of the process.

The ability to filter by track/category is very important in this regard: you can choose to review sessions in the one or two tracks that interest you most. The tool also serves up sessions in random order to help distribute the votes.

And I think there is still room to improve the voting system. For example, the ability to comment on a presentation could provide even more value to track chairs (and speakers) than the quantitative votes.

> So that means that while community voting is generally a great thing
> to have, we'll still need track chair freedom to weed out cases of
> obvious ballot stuffing. All in all, I believe that while the
> system of non-binding community voting followed by track chair
> selection isn't perfect, it's probably the best we can come up with.
> I think it makes sense neither to ditch community voting nor to
> abolish track chairs.
> 
> On another topic raised here, I do not think that the quality of a
> talk can simply be judged by the length or detail of an abstract
> alone. In fact, quite the opposite may be true: I once spoke at a
> conference in Denmark that had a rule of "max 7 words in title, max 2
> sentences in abstract, max 1 sentence in bio". The talks were stellar,
> and I believe there may have been causality, not just correlation: it
> takes real thinking, consideration and expertise to get your point
> across to a reviewer using such a low allotted bandwidth.

I don’t necessarily support the push for longer abstracts (although I don’t know that I would reduce the max either). In my experience as a track chair, if the abstract didn’t provide enough information, I just disregarded the session or, in some exceptional cases, followed up with the speaker. If we want more information, the better approach would probably be to add another question or two, rather than suggesting a minimum length for the abstract.

> To cover another thing mentioned in this thread, I also don't think
> that anonymizing abstracts is a good idea. I know Sir Tim is an
> entertaining speaker, of course I'd prefer him over a hypothetical
> speaker who bores people to tears. I know Josh Durgin is the Ceph RBD
> guy, so of course I'd want him to talk about, say, Ceph/Cinder
> performance optimization rather than someone else. Besides, an
> anonymized system can be easily circumvented. What if you leave names
> out, but still have a bio? "The speaker is an infrastructure lead at
> an international nuclear research facility's computing centre in
> Switzerland." (Rings a Bell, doesn't it?) Or you leave out any
> biographical information altogether, at which point you have zero clue
> whether the speaker is qualified to talk about their subject. So
> whichever way you look at it, having speaker information in
> submissions is a net benefit.

I agree; I don’t think anonymizing the submissions is a good idea, for the reasons you mention above.

Additionally, if we start gathering speaker/session feedback in Tokyo, that feedback wouldn’t be useful in the selection process for Austin if you didn’t know who the speaker was.

> One other thing that we have seen is people submitting several similar
> but distinct talks to give the community and the track chairs some
> opportunity to decide on which one would be the most interesting. I'm
> sure this has been done with the best of intentions (give the
> community/the curators more options to choose from), but it actually
> diminishes the voting and selection experience.
> 
> So all in all, while the *voting* process as such is probably
> "optimal" (in the sense of "as good as it gets", not "perfect"), it is
> the submission process that could be improved. Here are a few rules
> that I think would make sense:
> 
> - Limited submissions per speaker; a count of 3 comes to mind. This
> would include co-speaking submissions, so if you've been recruited for
> two panels, you only get one other talk submission.

I’m also open to trying this out for Austin, but it seems to be a polarizing issue. I’ve gotten feedback that whenever you set artificial limits, people will find a way around them, and ultimately it could end up benefiting larger companies that have more people available to submit.

FWIW, I would probably set the limit at 5, including panels and co-presentations, to start.

> - Tighter abstracts. If someone can't get the content of their talk
> across in 3 sentences, they probably don't have a clear plan for what
> to talk about.
> 
> - Limited speakers per talk. Again, 2-3 would be a reasonable limit,
> possibly 1 more for hands-on labs.

What about panels, or do you just dislike panels in general?

> With that, we would make reviewing and voting more meaningful. We
> would reduce the number of submitted talks, reduce the time required
> to process and review each abstract, and make it possible for people
> to actually vote for talks other than just those of their colleagues.
> 
> Also, on the issue Everett Toews raised, I think we should
> consider a rule in which obvious plagiarism in a talk abstract will
> result not only in non-consideration of all involved speakers for the
> summit they are submitting for, but will also get them sin-binned for
> the immediately subsequent summit.
> 
> Am I making sense? Please feel free to rip my thinking to shreds. :)

> Cheers,
> Florian
> 


Thanks,
Lauren


> _______________________________________________
> Openstack-track-chairs mailing list
> Openstack-track-chairs at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-track-chairs



