My Ongoing Quest to Define a "Culture of Learning"

Apr 5th, 2012 / Data Visualization

Data, research, and evaluation have been on everyone’s radar for a while now. Schools took data seriously after NCLB in 2001, and nonprofits and their donors are also starting to take a closer look at their numbers.
Data’s exciting, but using data to inform action is even better. So how exactly do we use data?
Everyone seems to agree that building an organizational culture of learning (within the school, youth center, etc.) is important. But how exactly do we build one?
In my quest to find resources about organizational learning, I’ve found that there’s little agreement or clarity about what a “culture of learning” really means.
Here’s my running list of what an organizational “culture of learning” might look like:

  • There are organizational policies and procedures in place that fit Michael Quinn Patton’s description of a fully integrated and highly valued internal evaluation system.
  • Evaluation is institutionalized. Every program, department, project, etc. has a logic model, collects its own data, and uses it.
  • The logic models are made with SMART goals in mind.
  • Programs collect both quantitative and qualitative data.
  • Qualitative data collection, like interviews and focus groups, is culturally competent.
  • Programs use formative and summative assessments to guide decision-making.
  • Formative assessments are woven into the program’s design. For example, an advocacy program in which youth testify to local congressmen will need to teach public-speaking skills, so why not design simple rubrics to assess their progress in public speaking over time? Youth are hungry for feedback and love seeing their own improvement.
  • Staff members take the initiative to monitor data without nudging from evaluators.
  • Evaluation results get used to improve programs.
  • Staff members demonstrate critical thinking when discussing their program.
  • Staff members demonstrate depth of knowledge when discussing their program.
  • Capacity building takes place on a day-to-day basis.
  • Evaluation is conducted because everyone wants to improve programming, not because a report is due to funders. You can read more good reasons for evaluations here.
  • Curiosity about outcomes. Staff members start with some research questions, you find some answers in the data together, and then they have even more questions.
  • Staff members comment, “These numbers are great, but how can we measure progress?” Staff understand that measuring outputs is not the same as measuring outcomes.
  • Staff want feedback, especially individual feedback, so they can improve their work.
  • Staff want to design satisfaction surveys and are genuinely concerned about the participants’ experiences.
  • Staff look forward to discussing results together.
  • Staff learn about another organization’s performance management system or read evaluation reports about other agencies and comment, “But this doesn’t really tell me anything… They didn’t even say whether the youth benefited! They just flashed some numbers and talked about how many people were served.”

What did I miss?
Thanks, Ann Emery
