“Who is your boss?” asked a senior legislator shortly after I started as the director of the Office of Performance Evaluations more than 12 years ago. I quickly responded, “JLOC.” (JLOC, or the Joint Legislative Oversight Committee, assigns evaluation projects to our office.) Pleased with my answer, the legislator said, “Correct, and make sure you never forget that.”
Yes, I haven’t forgotten that. At the time, it made perfect sense to me. However, I later realized that JLOC should not be the only audience for our work.
So who are our audiences?
For one, the mission of our office is to promote confidence and accountability in state government, which means our audience has to include the general public and the press. For another, if we want to produce useful evaluations, we have to think about the various stakeholders: policymakers, agency and program officials, people who are directly affected by the program or policy, and lobbyists representing the interest groups that have a stake in the evaluation.
There is one more, a kind of latent audience: our evaluation colleagues and peers. We always want to know what they think about our work, because this helps us gauge our professional credibility.
These audiences have varying levels of interest in our evaluations, depending on their roles and stakes. Not everyone is interested in technical details, although those details are the foundation of our evaluation work. As shown in the table, of our seven audiences, only two are primarily interested in technical details. The others care more about the evaluation’s key message and prefer plain language with clear, simple data visualization.
Tailoring our written products to meet audience needs
To meet the needs of these audiences, our office prepares different products to disseminate evaluation results. As shown in the figure, our experience tells us that the more technical a product is, the fewer people it reaches. For example, the technical appendices of an evaluation report will interest only a small number of people compared to a press release distributed to all media outlets.
Traditionally, our reports (both print and electronic versions) have had four components:
- The transmittal letter is the first page inside the report cover. We use a transmittal letter to draw the attention of policymakers and the press to the most important message of our evaluation. These letters are candid in their message and are written using the simplest of language.
- The executive summary summarizes the entire evaluation, including findings, conclusions, and recommendations. It usually spans two to six pages of the report.
- The main body of the report discusses the evaluation context, explains the program and the policy that are the focus of the evaluation, describes various data analyses, and details findings, conclusions, and recommendations.
- Appendices include the study request(s), study scope, evaluation methodology, selected bibliography, additional details about a particular analysis, and formal responses to the evaluation by the governor and heads of relevant agencies.
In addition to the report, we always prepare a press release for each evaluation we conduct. We distribute our press releases to all media outlets. The purpose of a press release is to keep the public informed about our work so it can judge the value of state policies and programs and hold policymakers and government officials accountable.
In the past couple of years, we have added three other methods to extend our reach to a larger audience and to effectively communicate our evaluation message.
- One-page report highlights, or a one-pager, is an easy-to-understand document detached from the main report. It communicates the most important information (both quantitative and qualitative) of the evaluation to multiple audiences. Depending on the nature of the evaluation, some one-pagers might reflect more quantitative information than others. Knowing clearly what the report’s main message is and using effective data visualization are critical for producing useful one-pagers. For us, preparing one-pagers is the most difficult part of reporting results, but they are also probably the most used written product of our work. The information can also be presented in a Q&A format.
- Fold-out pages allow us to break out of the traditional reporting format of 8.5 x 11 sized paper. Don’t be shy about using fold-out pages in your report if you have something to show and need more space. Examples include illustrating complex flows of funds, budget management processes, and relationships.
- Interactive data visualization can sometimes be useful for engaging certain audience members, such as policymakers who are interested in knowing more details and program managers who may want to test the workings of a particular model or analysis. This can be done by providing links to your website and by using the site during a live presentation. Here are examples one and two.
Of course, there will always be a handful of people in each audience who want to know technical details such as sample size, sampling methodology, standard deviation, and r-squared.
For those folks, you need to be prepared to answer their questions adequately whenever and wherever necessary. However, you don’t need to lose the rest of your audience by cluttering your report and presentations with technical details. Remember, it is not about dumbing down your evaluation message for certain audiences. It is about conveying the message in a format that makes sense to the people to whom it matters most.
This is a great blog post just before the 4th of July, a time when we celebrate the progress we have made toward freedom and democracy in the United States. Evaluators can learn a lot from the systematic process you describe to make evaluation findings and results accessible to so many different kinds of groups (stakeholders). The part you have not included is the ethical mindset that you and your Office take so seriously, and that is behind these different kinds of reports. Now, let me ask you a question: how does your Office define “the public,” and how do you ensure the evaluation includes and is accessible to those with more restricted access?
Thank you very much for the excellent comments, Tessie. You are absolutely correct about the ethical mindset. The American Evaluation Association’s Guiding Principles, the Joint Committee Standards for Program Evaluation, and the Generally Accepted Government Auditing Standards serve as sideboards for me and my staff. Personally, Mahatma Gandhi’s life serves as a great source of courage and ideas. Having mentors is also very useful in navigating tough ethical situations. Over the years, anytime I was in a tough situation, I sought guidance from four evaluators and two political leaders/policymakers whom I consider my mentors.
Because we are an oversight agency of the Idaho State Legislature and our mission is to promote confidence and accountability in state government, we define the public as all people of the state. Of course, certain groups of our state’s population will have more stake in a particular evaluation than others.
We post our reports on the internet, issue press releases, conduct open public meetings with advance notice to the press, and promote our work on social media – I believe this covers a large portion of the public. PDF readers allow blind readers to listen to our reports. When we worked on correctional issues, we sent reports to each correctional library for inmates to review. And of course, we always have a few hardcopy reports available for distribution. A few years back, we did a study on Idaho’s School for the Deaf and Blind. For that study, we hired a translator to translate our surveys and other materials for Spanish-speaking people and to help with the focus groups. We also published that evaluation report in braille.
I hope my answers are helpful. Best wishes and Namaste.