If you’ve ever proposed to speak at a conference then you’re pretty familiar with your end of the process: you fill out a proposal form describing the session you’d like to present, submit the form, wait for what might feel like a weirdly long period of time, and then hear back about whether your session has been accepted or not.
But what exactly happens behind the scenes during all that waiting time, and how does it translate into creating a conference?
This question comes up a lot, from conference newbies and seasoned speakers alike. It’s also a tricky one to answer, as every conference you propose to is unique in its own way, as are the specific processes used by the organization producing it. While we can’t speak for everyone, we can give you more insight into how eLearning Guild conferences in particular are created.
To take some of the mystery out of this process, the team who works on curating the conference program (David Kelly, Mark Britz, and I) will give you a backstage peek as we start work on DevLearn 2017. Over the upcoming weeks we’ll be sharing behind the scenes posts about each step of the conference creation process to give you more insight into what exactly goes into crafting our events.
And we’re kicking off the series with a post on the very first step – individually reviewing session proposals.
The manual review
What’s important to know is that there is no machine pre-sorting involved here, no system looking for magic combinations of buzzwords to determine if a session makes the first cut. Mark, David, and I each review every single session proposal that comes in – and that’s a lot of reading. While the number of proposals we receive varies from event to event, it’s always in the hundreds. For instance, for this year’s DevLearn we’ll each be reviewing over 600 proposals!
As you can imagine, this is time consuming. So why don’t we split up the review and have each of us only evaluate a third of the sessions?
First, because it helps to have multiple opinions on a proposal. Debate is healthy and each of us comes from different places in L&D, which helps us gain a necessary diversity of perspective and context.
Second, it’s grueling work, and as you can guess, after reading stacks of proposals we can sometimes miss things in them. Having all three of us review every proposal serves as a check and balance on quality.
Finally, this phase is a form of development for each of us. We learn through the conversations we have with each other about every proposal, which helps hone our review skills for each successive event.
What exactly are we looking for?
When we initially review session proposals, we look at each one in isolation. What this means is that we ignore factors like multiple takes on the same theme, how many proposals a speaker has submitted, or how many sessions on a topic we’re looking for. Instead, we first look at each proposal on its own and assess whether or not it’s a good fit for the program.
So, what are we looking for? There are a lot of factors, but here are some of the most common questions we ask ourselves about each proposal:
- Is it audience-focused? People get more out of sessions that don’t just report on a topic or a case study, but also help attendees understand how they can use this information in their own work.
- Does it feel engaging? While a proposal needs to, first and foremost, have substance, it should also be enjoyable and engaging for the audience.
- Is it specific? Proposals that describe exactly what they’re going to cover and bring a unique angle to a topic tend to have more impact than ones that are broad or generic about what they’re going to discuss.
- Has this person spoken at a Guild event before? While experienced speakers who we know deliver value are critical, we also allocate at least 20% of each program to first-time speakers, so looking for high-quality proposals from new voices is important to us. That brings us to the next important factor…
- What do we know about the speaker? Who a proposal comes from also has some influence on how it’s evaluated. While previous experience speaking at a Guild event (or other events) is an easy way to build credibility, it’s by no means the only one – especially for new speakers. Writing about the topic online, participating in industry conversations or Twitter chats, or having easily viewed examples of their work can also help a speaker’s proposal.
- Does it feel like a good fit for this specific event? Each conference has a unique theme that we curate content around. For instance, FocusOn Learning goes narrow and deep on a few hot topics in our industry. If a proposal for that conference isn’t on one of those focus areas, then no matter how good it is, it’s just not the right fit. Because of this, it’s actually not uncommon for us to review a session and comment that while it’s not a fit for this event, it could be great for one of our other conferences.
So this stage is easy, right?
Not exactly. It can be simple to make a decision when the proposal is well written and offers clear value to the event audience – or when it’s extremely light on details or the topic isn’t closely related to the conference.
That said, it’s not always a cut-and-dried situation. So while reviewing some proposals can take just a few minutes, others can take substantially longer as we weigh all the factors involved in why they might or might not work on the program.
Also, as previously mentioned, manually reviewing all of these proposals is a highly rigorous – and highly exhausting – process. All three of us find that if we review too many proposals in a row they all start to bleed together in our minds. To give them all a fair shot, we tend to review a handful and then switch to something else briefly as a palate cleanser before continuing. As you can imagine, what we gain in accuracy this way we lose in speed, but we find the trade-off worth it.
This first review gives us a chance to record our own thoughts about each proposal, but where the real value comes in is in combining our evaluations and comparing our ratings. We’ll cover that process in our next post.