We were extremely fortunate to have past and present Presidents of the American Evaluation Association as our guest speakers at the 2012 Eastern Evaluation Research Society’s conference – Eleanor Chelimsky, Jennifer Greene, and Rodney Hopson.
Even though the conference was a couple of weeks ago, I’m still thinking about one of Rodney Hopson’s comments. He mentioned that sometimes he wonders whether evaluators, or the evaluation itself, are actually hurting the program rather than helping it.
I’ve certainly had similar experiences. Mostly, I’ve seen program staff get so excited about data that they want to collect more, and more, and then even more data. You can read about one of my experiences here.
It seems like a great idea at first. What’s the harm? More data is better, right?
But… a few months down the road, the program staff and I are swimming in more data than we can handle. And, we often have more data than we really need. After all, my goal as a utilization-focused evaluator is to collect information that will directly influence decisions about the program or the participants. Simple, quick, streamlined data can be more useful than complex, time-consuming data.
Have other evaluators felt like this? Have you ever questioned whether your involvement is hurting rather than helping?
1 Comment
This is so timely for me, Ann! Utilization-focused evaluation is something I strive to achieve as well. I love it when people get excited about data; what I don’t like is not having adequate time to help them learn from the results.
On a general level, people can see which areas may need some work and which areas are doing pretty well, but we as evaluators can also see when it just becomes “data overload.” I heard a definition of data recently (paraphrased): data is just data until you do something with it, and then it becomes information.
It’s more of a capacity issue in my case, but I feel it is a disservice when I am not able to devote enough time and attention to helping make the evaluation results useful and meaningful for the major stakeholders. Annual review and revision of measurement tools (at least) may help with this. I love it when I hear “this doesn’t give us the data we need” *lightbulb*. For now I’m just trying to be grateful for the small victories, but I definitely think we can be hurting in some ways when conducting utilization-focused evaluations if that education piece doesn’t include quality over quantity.