We like to complain about evaluation use.
People in my business (me included) like to lament the lack of attention that people pay to evaluation. If only we did a better job of identifying stakeholders. If only we could do a better job of engaging them. If only we understood their needs better. If only we had a different relationship with them. If only we presented our information in a different way. If only we chose the appropriate type of evaluation for the setting we were working in. If only we fit the multiple definitions of “evaluation use” to our setting. And so on and so forth. I’m in favor of asking these questions. I do it myself, and I am convinced that asking them leads to more and better evaluation use.
Lately I have been thinking differently.
I’m writing this blog post for two reasons. One is that I want to begin a discussion in the evaluation community that may lead to more and better evaluation use. The second is that writing this post gives me a chance to discern the logic underlying a behavior pattern I seem to have fallen into. As far as I can tell, that logic has two roots: continuous process improvement and complexity.
Continuous process improvement
This comes from the days when I was evaluating programs to help small and medium-sized companies. Back then I was hanging out with industrial engineers who were devotees of Lean Six Sigma and related continuous process improvement methodologies. They taught me an important lesson: if you want to understand a quality problem, never begin by blaming the touch labor. The problem may indeed lie with the people running the machines, but if you start with that assumption, you are unlikely to get to the true root and contributing causes. Better to begin with the assumption that the people running the machines are capable, acting in good faith, and want to do a good job.
Complexity
This comes from all the research and writing that I have been doing on applying complexity in evaluation. That work has brought me to see things in terms of evolutionary biology. For better or worse, my instinct has become to view programs as organisms evolving on a fitness landscape. When I see a program that has been relatively stable, adaptive, and sustainable for a while, I assume that it has successfully evolved to thrive in its environment.
Often, decisions made by policy makers, program designers, and managers seem irrational or counterproductive to me. Whenever I have this reaction, I work at returning to the first root: assume that the touch labor (a.k.a. program designers and managers) is capable, acting in good faith, and has the best interests of the organization at heart. Put another way, the decision makers are acting in ways that support the ability of their programs to thrive in their environments. And those capable people realize that giving evaluation too large a role in their decisions may put the organisms they are nurturing at risk.
Why might too much reliance on evaluation be a bad idea?
The answer begins with a sense of what it means to do a good job running a program. What’s that job like? It involves:
- satisfying multiple interested parties who have different views of desired outcomes;
- pursuing outcomes and goals that are not all consonant with each other;
- balancing multiple outcomes that have different time horizons and importance;
- maintaining good relationships with other organizations and programs with which yours interacts;
- assuring the desirability of the program in other people’s eyes, regardless of whether the program is meeting its stated goals;
- achieving desirable accomplishments that are not related to formal goals;
- doing whatever needs to be done to anticipate and prepare for demands that may pop up in the future; and
- managing successful internal operations with respect to facilities, human resources, and funding.
What is needed to succeed at a job like this? A lot more skill, patience, and understanding than I will ever have, that’s for sure. I can’t begin to list everything needed to do it right, but I can name two items that are germane to this argument. Successful program designers and managers have:
- respect for the limits of their control, authority, and influence, and
- appreciation of the reality that doing a good job means jointly optimizing all of the above, and downplaying some of them, at least for various periods of time.
Into their world come the evaluators – scheming, begging, asking for attention. What do the planners and managers know about evaluation? They know that:
- the results of evaluation can be wrong;
- empirical information is also coming from other sources (budget analysts, planners, think tanks, the academic community, government); and,
- the scope of the evaluation will cover only some, but not all, of the considerations that go into doing a good job of designing and running a program.
Given all this, why would any capable planner or manager put too much stock in information coming from evaluators? Therein lies folly, or so it seems to me.
What can we do about it?
Our field is privileged with an enormous amount of insightful research and theory on topics such as evaluation capacity building, evaluation culture, and evaluation use. I’m not going to review that work here, but I pay a lot of attention to it, and I think we all should. What I do want to do is toss in an idea I have been pondering. My notion is to switch the focus from “evaluation” to “data”.
Every organization I know has a lot of data. The data may not come from a formal activity called “evaluation”. The quality of the data may be suspect. The coverage of the data over the organization’s decision making landscape may be spotty. But there is always data. Organizations do not have problems because they do not use evaluation. Organizations have problems because they do not exploit the data that they have.
I do not think we can advance the use of evaluation by advocating for evaluation use. What we can do is advocate for data use, and then do what we can to make sure that evaluation is swept up in the use process. What does it mean to “advocate for data use”? It means helping organizations to respect the importance of six questions.
- Where is the data?
- What is the data’s quality?
- What is the data’s relevance?
- What meaning can be extracted from the data?
- What collective meaning can be extracted from multiple data sources?
- What implications do those meanings have for the decisions we need to make?
What matters is the active questioning. It’s the belief that the data are important, but that the messages in the data are neither obvious nor trustworthy. It’s the use of data for understanding, explanation, and inspiration. It’s the abandonment of the belief that the message in any data will ever be so clear, so convincing, so overwhelming that it should dictate a course of action. I believe my mantra: Respect data. Trust judgement.