As part of the American Evaluation Association’s new Potent Presentations Initiative (P2i), I’m asking evaluators and non-evaluators to think about some of the most effective conference presentations they’ve seen.
When talking about the ingredients for great presentations, Herb Baum commented, “Do not present results unless they are relevant to the point. At evaluation conferences, I am tired of hearing results demonstrating the effectiveness of a given program. I much prefer hearing why it was challenging to measure the effectiveness of the program and what was done to overcome those challenges.” You can read Herb’s full post here.
That got me thinking… Is it ever acceptable to share the results of your evaluation study when you’re presenting to other evaluators? Under what circumstances? Here’s my guidance so far. If you really, really, really want to share the results of your evaluation:
- Please share why your results are interesting, whether to you personally, to the program staff, or to the larger evaluation field. An evaluator once told me, “Well, it’s not that interesting. It’s just my results.” If you’re not enthusiastic about your own evaluation project, then it’s harder for your audience to feel excited about your presentation. Or, if your results aren’t interesting, don’t share them. Instead, focus on your evaluation approach. This isn’t a research conference.
- Show me, don’t tell me, why your results are one-of-a-kind. Use your body language. Use your voice. Tell a story. Enthusiasm for data is inspiring.
- Explain why you chose your method – but in just a couple of sentences. Was there something unique about your project? Did you encounter obstacles during the evaluation that influenced your choice of method? Did you use Plan B, Plan C, or Plan D when choosing your methods?
- What are you doing next in your analysis? One of my favorite results-filled presentations was by Stacy Merola and Allan Porowski. In their “spare” time (i.e. evenings and weekends), they’re mining a public dataset to learn about predictors of high school dropout. Their enthusiasm for the next steps in their research is contagious, and they always ask their audience for ideas about the analysis. When you’re in the audience, you feel excited to contribute to a just-for-fun, solely-for-learning project.
- What did you learn from this project? Be specific. This is why I came to the conference. This is why I came to your session. “We learned that getting informed consent is important” isn’t so helpful. Instead, “Make sure you use an ‘opt-out’ rather than ‘opt-in’ approach and collect consent forms in September when the rest of the forms requiring parental signatures are being sent home” is much more useful. (Thanks to Jen Hamilton for this great advice about educational evaluations.) Give me something relevant and memorable that I can apply to my own work.
- What challenges did you encounter? What would you do differently next time? Share an anecdote. Make us laugh. We’ve all run into unexpected challenges, and when you’re in the audience, it’s a relief to hear that you’re not alone.
- How are these results similar to or different from those of other evaluation projects you’ve worked on? I don’t need a literature review. I don’t need citations to other studies. Instead, I’m looking for your opinions and perspectives. What’s the take-away message for other evaluators?
What else would you add to this list? What are some of the best data-filled presentations you’ve seen? Who gave those presentations? What made them so effective?
9 Comments
I do appreciate seeing evaluation results when the focus is on data visualization – NOT to tell about the results per se, but to show concrete examples of real-world data sets and how to present those results in ways that render complex findings understandable and accurate for lay audiences.
Hi Susan,
Good point – sharing results is great when the focus is on data visualization.
I also learn a lot from data-filled presentations when the results are shared through effective visualizations (i.e. a handful of clear graphics with large fonts, just a few colors to emphasize key patterns, and all the other elements that make a good visualization).
Thanks for reading.
Ann
One could present results during evaluation conferences to demonstrate the following:
1) The power of data visualization. Susan and Ann said it right.
2) The power of an evaluation method in comparison with others. Suppose you are presenting on this great new methodology you came up with. How can we judge its usefulness? The results could be the answer. It is especially important if you are measuring something that hasn’t been measured before: you’ll need to provide the audience with a benchmark.
3) The impact of evaluation. The purpose of evaluation is not evaluation itself – it is improving the program that’s being evaluated. To keep the evaluation focused on utilization, one needs to know the results. If I am listening to a presentation about evaluation of a program, I want to know what impact the evaluation had and for that I need to know the results even if they are presented very briefly. That is what’s most exciting about evaluation!
One should present just enough results to make the point and opt for informative charts or graphs instead of pages and pages of tables filled with numbers.
I think results are important because they validate that the evaluation methods are sensitive enough to distinguish between programs/interventions that work and those that don’t (and all of the grey areas in between). An entertaining story does make a memorable conference experience, but it does not necessarily promote better evaluation practice. However, I completely agree that an evaluation conference is not the time to go, painful step by painful step, through the findings of an evaluation of a project with no relevance to the audience.
Thanks for nudging me to respond.
Great question, Ann!
I think one of the key things to remember is that it’s an *evaluation* conference, not a conference about the content area/program type.
As such, the audience is interested in how you grappled with difficult evaluation issues, how you presented things in a compelling way that enhanced understanding, why you opted for one evaluation approach over another, that sort of thing – NOT whether the program was any good or not.
Jane Davidson
Agata, Ann, and Jane,
Thanks for sharing your ideas! It’s helpful to have a variety of perspectives as I think through how we should (and shouldn’t) share results when presenting to other evaluators.
Thanks again,
Ann
One further challenge is showing results in relation to evaluation use. Under the guise of use, we often see presentations that are basically results reports. Use is such a fundamentally important topic – any suggestions for helping presenters make the leap to demonstrating use, and for focusing their presentations on how results were leveraged for use, for an audience of evaluators?
Hi Susan,
The first idea that comes to mind is something I learned in Michael Quinn Patton’s pre-conference workshop last November. During the planning phases, he talks to the intended users about all the potential results of the evaluation. How will they respond and jump into action depending on the different numbers? For example, if the evaluation shows X, then the group will do Y. But if the results show A, then the group will do B instead.
I’ve had a lot of success with this strategy because it sets the tone for use early. When the results are ready, the group is ready to make changes based directly on that data.
If I were sharing these examples in a conference presentation, I still wouldn’t share any numbers with the audience…
I’d love to hear your ideas too. Thanks for reading!
Ann
I like this piece “…if your results aren’t interesting, don’t share them. Instead, focus on your evaluation approach.” At least be excited about presenting them if you choose to share.
I also like when the challenges and the “what did you learn” pieces are included, so I can try to avoid similar pitfalls in my own work.
During an evaluation presentation, I also want to know how the evaluator connected with their stakeholders on various levels, and what methods they used for engagement. I like hearing about the overall process, and I am assuming that they used rigor when conducting the evaluation. I’d like to hear some results, but balanced against a discussion of the process.