    Researchers vs. Evaluators: How Much Do We Have in Common?

    Updated on: May 24th, 2012
    Data Visualization

    I started working as a research assistant on various psychology, education, and public policy projects during college. While friends spent their summers waitressing or babysitting, I was entering data, cleaning data, and transcribing interviews. Yay. Thankfully, those days are mostly behind me…

    A few years ago, I (unintentionally) accepted an evaluation position, and the contrast between research and evaluation hit me like a ton of bricks. Now that I’m fully adapted to the evaluation field, a few of my researcher friends have asked me to blog about the similarities and differences between researchers and evaluators.

    Researchers and evaluators often look similar on the outside. We might use the same statistical formulas and methods, and we often write reports at the end of projects. But our approaches, motivations, priorities, and questions are a little different.

    The researcher asks:

    • What’s most relevant to my field? How can I contribute new knowledge? What hasn’t been studied before, or hasn’t been studied in my unique environment? What’s most interesting to study?
    • What’s the most rigorous method available?
    • How can I follow APA guidelines when writing my reports and graphing my data?
    • What type of theory or model would describe my results?
    • What are the hypothesized outcomes of the study?
    • What type of situation or context will affect the stimulus?
    • Is there a causal relationship between my independent and dependent variables?
    • How can I get my research plan approved by the Institutional Review Board as fast as possible?

    The evaluator asks:

    • What’s most relevant to the client? How can I make sure that the evaluation serves the information needs of the intended users?
    • What’s the best method available, given my limited budget, limited time, and limited staff capacity? How can I adapt rigorous methods to fit my clients and my program participants?
    • When is the information needed? When’s the meeting in which the decision-makers will be discussing the evaluation results?
    • How can I create a culture of learning within the program, school, or organization that I’m working with?
    • How can I design a realistic, prudent, diplomatic, and frugal evaluation?
    • How can I use graphic design and data visualization techniques to share my results?
    • How can program staff use the results of the evaluation and benefit from the process of participating in an evaluation cycle?
    • What type of report (or handout, dashboard, presentation, etc.) will be the best communication tool for my specific program staff?
    • What type of capacity-building and technical assistance support can I provide throughout the evaluation? What can I teach non-evaluators about evaluation?
    • How can we turn results into action by improving programs, policies, and procedures?
    • How can we use logic models and other graphic organizers to describe the program’s theory of change?
    • What are the intended outcomes of the program, and is there a clear link between the activities and outcomes?
    • How can I keep working in the evaluation field for as long as possible so I can (usually) avoid the Institutional Review Board altogether?

    Researchers and evaluators are both concerned with:

    • Conducting legal and ethical studies
    • Protecting privacy and confidentiality
    • Conveying accurate information
    • Reminding the general public that correlation does not equal causation

    What else would you add to these lists? I’ve been out of the research mindset for a few years, so I’d appreciate feedback on these ideas. Thank you!

