
I’ve Been an Evaluator for 20 Years, and I Still Have No Idea What Evaluation Theory Is…

Updated on: May 10th, 2012

I’m pleased to present Jen Hamilton, who is guest-posting for the second time on my blog. Jen posted a few weeks ago about potent presentations (you can read her post here). Now she’s back to share her reflections from the Eastern Evaluation Research Society’s 2012 conference about balancing evaluation theory and practice.

— Ann


“I have to admit something, and it’s kind of embarrassing. I’ve been an evaluator for 20 years, and I still have no idea what evaluation theory is. Generally, I’m at a conference and a presenter is discussing the importance of Theory. I look around the room and nod my head sagely, but I am totally faking it. Like many evaluators, I came into the field via another discipline (statistics, in my case), and therefore missed all those graduate courses that teach Theory.

So I asked myself: how important is this knowledge gap between academics (the keepers of Theory) and practitioners out in the field (many lacking the academic background in evaluation)? Mel Mark, in a very clearly written piece (read it here), said that it’s theory that helps evaluators make good judgments about “what kind of methods to use, under what circumstances, and toward what forms of evaluation influence.” Does my lack of theoretical background therefore mean that evaluators like me are not making good choices?

I think that much of this debate reflects the type of work that academics tend to do versus the type of work that many in-the-field evaluators tend to do. Much of the focus on Theory stresses its importance in areas that do not apply to me. For example, “better understanding the key areas of debate within the field” and “where an evaluation should be going and why,” including “decisions on whether or not to implement some new program.”

As a contract evaluator, I don’t have control over these areas — I am generally hired to provide a client with the specific information they ask for, using the most rigorous design possible. It’s not part of my contract to direct how they will use the information I provide. Nor do I have the luxury of choosing what I will be evaluating — I investigate only the contracts that I am lucky enough to win. And quite honestly, by the time the contract is signed, I am already months behind schedule, so I don’t even have the luxury of time to consider different theoretical approaches.

This was the point I had originally intended to make: Theory is not only impractical for contract researchers, it is also irrelevant. And then I got to the following phrase in Mel Mark’s piece: “in a recent book, Huey-Tsyh Chen joins others who suggest that the choices made in evaluation should be driven by program stage.” This stopped me in my tracks. There is a theory that evaluators should consider the developmental stage of the program when deciding which type of evaluation design to employ. This is a Theory? Maybe I haven’t understood the definition of Theory correctly. I just presented this exact same idea at the Eastern Evaluation Research Society and am publishing it (with Jill Feldman) in an upcoming textbook. Maybe I am a theorist after all.”

—  Jen Hamilton

More about Jen Hamilton
A major emphasis in Dr. Hamilton’s work has been social equity and improving the social, academic, economic, and health outcomes of our Nation’s most vulnerable youth. She brings together stakeholders to investigate complex problems from a variety of perspectives, and works collaboratively with clients to provide the right information at the right time. She specializes in evaluation methodology, with a focus on the design and implementation of rigorous experimental and quasi-experimental designs. Dr. Hamilton is a Scientific Reviewer for the Institute of Education Sciences and NSF, a certified What Works Clearinghouse reviewer, a peer reviewer for numerous journals, and Past-President of the Eastern Evaluation Research Society. Her chapter on program evaluation was published in a major textbook.

