[openstack-community] Proposal: remove voting on speaking proposals for Barcelona Summit

Florian Haas florian at hastexo.com
Wed May 18 18:14:47 UTC 2016

On Wed, May 18, 2016 at 7:29 PM, Nick Chase <nchase at mirantis.com> wrote:
> We wrote about another suggestion here:
> https://www.mirantis.com/blog/fixing-openstack-summit-submission-process/
> (Look after "... and here's mine".) I know there were some plusses and
> minuses in the interviews we did about it, but here it is:
> Borodaenko suggests a more radical change.  “I think we should go more in
> the direction used by the scientific community and more mature open source
> communities such as the Linux kernel.”  The process, he explained, works
> like this:
> 1. All submissions are made privately; they cannot be disclosed until after
> the selection process is over, so there’s no campaigning, and no biasing of
> the judges.
> 2. The Peer Review panel is made up of a much larger number of people, and
> it’s known who they are, but not who reviewed what.  So instead of 3 people
> reviewing all 300 submissions for a single track, you might have 20 people
> for each track, each of whom reviews a set of randomly selected submissions.
> So in this case, if each of those submissions was reviewed by 3 judges,
> that’s only 45 per person, rather than 300.
> 3. Judges are randomly assigned proposals, which have all identifying
> information stripped out.  The system will know not to give a judge a
> proposal from his/her own company.
> 4. Judges score each proposal on content (is it an interesting topic?), fit
> for the conference (should we cover this topic at the Summit?), and
> presentation (does it look like it’s been well thought out and will be
> presented well?).  These scores are used to determine which presentations
> get in.
> 5. Proposal authors get back the scores, and the explanations.  In an ideal
> world, authors have a chance to appeal and resubmit with improvements based
> on the comments, to be rescored as in steps 3 and 4, but even if there’s not
> enough time for that, authors will have a better idea of why they did or
> didn’t get in, and can use that feedback to create better submissions for
> next time.
> 6. Scores determine which proposals get in, potentially with a final step
> where a set of publicly known individuals reviews the top scorers to make
> sure that we don’t wind up with 100 sessions on the same topic, but still,
> the scores should be the final arbiter of whether one proposal or the other
> is accepted.
> “So in the end,” he explained, “it’s about the content of the proposal, and
> not who you work for or who knows you or doesn’t know you.  Scientific
> conferences have been doing it this way for many years, and it works very
> well.”

This has some characteristics that are congruent with my own proposal
(one of them being the random selection of a subset of submissions
offered to each reviewer), which I obviously agree with. Others, not
so much.
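For concreteness, here's a rough sketch (Python, with entirely made-up
data) of what the assignment step could look like; the data model and
the load-balancing heuristic are my own assumptions, not anything
specified in the blog post:

import random
from collections import defaultdict

REVIEWS_PER_TALK = 3  # the "3 judges" per submission from the quoted scheme

def assign(submissions, judges, reviews_per_talk=REVIEWS_PER_TALK, seed=None):
    """Randomly assign each anonymized submission to `reviews_per_talk`
    judges, never giving a judge a proposal from their own company."""
    rng = random.Random(seed)
    load = defaultdict(list)  # judge name -> submission ids to review
    for sub_id, company in submissions:
        eligible = [name for name, j_company in judges if j_company != company]
        if len(eligible) < reviews_per_talk:
            raise ValueError("not enough conflict-free judges for %s" % sub_id)
        # Shuffle, then prefer the least-loaded judges so workloads stay even.
        rng.shuffle(eligible)
        eligible.sort(key=lambda name: len(load[name]))
        for judge in eligible[:reviews_per_talk]:
            load[judge].append(sub_id)
    return load

# 300 submissions x 3 reviews each = 900 reviews over 20 judges: ~45 per
# judge, which is where the arithmetic in the quoted post comes from.
subs = [("talk-%d" % i, "company-%d" % (i % 10)) for i in range(300)]
panel = [("judge-%d" % i, "company-%d" % (i % 10)) for i in range(20)]
print(max(len(talks) for talks in assign(subs, panel, seed=1).values()))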

- While I love the idea of giving detailed feedback, and possibly even
an option to refine and resubmit talks, reviewing even "just" 45 talks
is a daunting task that would likely keep a reviewer occupied for a
week if the feedback is expected to be any good (at a modest hour per
proposal, that's 45 hours, i.e. a full work week).

- Anonymizing submissions is a double-edged sword. Yes, of course it
removes all bias related to an individual speaker's person or company
affiliation. However, when you talk to attendees of technical
conferences, most people would much rather listen to a knowledgeable
person who is also a good presenter than to *the* best expert in
their field who can't present worth a damn. Judging the quality of the
oral presentation from the written submission is impossible (not least
because there is no way of telling whether the submission was written
by the presenter), whereas past speaker performance is a good
indicator of future speaker performance. Anonymizing submissions means
that the reliable signal (past performance) is thrown away, while the
impossible judgment (presentation skill from prose) is attempted
anyway. If we determine that the benefits outweigh the downsides,
that's fine, but this downside should be considered.

Then again, at the risk of repeating myself, why not take this to its
logical conclusion and make the review "panel" identical to the group
of submitters? In other words, with submitting a talk comes the
responsibility to review others. This also has the added benefit that,
rather than divvying up talks so that every talk is seen by just one
(naturally flawed) reviewer, you get multiple eyeballs on each and
every talk.
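To put a rough number on that, reusing the toy assign() sketch from
above (assumptions and all): if the panel is simply the set of
submitters, each talk can still get 3 reviews while the per-person
load drops from ~45 to ~3, and the same-company rule already keeps
people away from their own proposals.

# Let the submitters themselves be the panel: 900 reviews spread over
# 300 reviewers is ~3 talks per submitter instead of ~45 per judge.
print(max(len(talks) for talks in assign(subs, subs, seed=1).values()))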

