I share an office with an evaluator from Brazil and an evaluator from France. I feel it is my duty to educate them on all things American, like ramen noodles, Slurpees, and American television.
I recently introduced my teammates to Saturday Night Live and a brilliant Scared Straight parody.
We’re researchy people, so you can imagine where our conversation went next… We discussed and debated the Scared Straight program, its evaluation history, and our “good intentions are not enough” mantra for social programs.
By now, about a million studies have shown that juvenile awareness programs like Scared Straight actually have negative effects on youth. There was even a meta-analysis of 9 studies back in 2002 that found that “not only does it fail to deter crime, but it actually leads to more offending behavior.”
While I’m rarely convinced by a single peer-reviewed journal article, I’m usually convinced when several studies point to the same conclusion. And a meta-analysis is always enough to convince me that something is true.
One of my teammates, however, was not convinced. He’s a huge fan of meta-analysis, but he also understands the strengths and weaknesses of these calculations.
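For the curious, the heart of “these calculations” is surprisingly simple arithmetic: weight each study’s effect size by how precisely it was measured, then average. Here’s a minimal sketch of fixed-effect, inverse-variance pooling; the numbers are made up purely for illustration and are not drawn from the actual Scared Straight studies.

```python
import math

# Hypothetical (effect_size, standard_error) pairs, one per study;
# positive values mean *more* offending in the treatment group.
studies = [
    (0.30, 0.15),
    (0.10, 0.20),
    (0.25, 0.10),
]

# Each study is weighted by the inverse of its variance, so more
# precisely measured studies count for more in the pooled estimate.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
print(f"95% CI: [{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")
```

Of course, the arithmetic is the easy part; the debate is about what goes into it, such as which studies get included, how comparable they really are, and whether the underlying trials were any good in the first place.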
So I’m wondering… When you’re a skilled social scientist, and when you have a first-hand understanding of all the things that can and do go wrong in research and evaluation studies, do you ever believe the result of a study?
What does it take to convince an evaluator that something is actually “true?”