Evaluators have always been concerned about the weakness of their efforts to drive more effective programs, more desirable outcomes, and fewer unintended negative consequences. Broadly speaking, these concerns fall into three categories.
- evaluation methodology,
- “evaluation use,” i.e., the dynamics by which evaluation works its way into decision making, and
- limits on what evaluation can say about why programs operate as they do and what consequences they have.
Complexity as a Trend in Evaluation and Planning
In recent years, the evaluation community has been looking to “complexity” as a source for addressing these difficulties.
- If the world is complex but program designs do not recognize that complexity, those programs cannot be effective.
- If evaluation methodology does not recognize complexity, then the methodology will fail to detect why programs operate as they do, and what outcomes they yield.
- If evaluation fails to produce satisfying explanations of why programs act as they do and what they accomplish, then why should program funders and designers pay attention to evaluators?
The “complexity solution” is by no means the sole path being pursued by evaluators, but it is becoming an increasingly prominent trend as a way of understanding these limitations.
I’m getting worried about what I see as a gap between how evaluators invoke complexity and what complexity scientists have come to know. I think evaluators do too much hand-waving over the field of complexity science, and that we don’t pay enough attention to the field’s research and theory base.
This is problematic for two related reasons. The primary problem is that evaluators will fail to design and execute evaluation in ways that will elucidate the complexity dimensions of program design and impact. The derivative problem is that “complexity” will be a passing fad, to be replaced with the next big idea. This would be a shame because complexity has much to say to improve program design and program evaluation.
Comparing the Trajectory of Statistics and Complexity in Evaluation
I see classical statistics as a model for the role that complexity should play in the field of evaluation. Classical statistics is an enormously powerful approach for understanding how the world works. At the same time, its limitations are well recognized, and it has come to be seen as one of many analytical perspectives, alongside, e.g., Bayesian and qualitative modes of understanding. It is precisely because the value of these other methods is recognized that the true contribution of classical statistics can be realized.
The difference between statistics and complexity is one of directionality. Evaluation grew up steeped in an intellectual environment that was shaped by statistical thinking. The formative question was never: “Is statistics a good way to design research?” Rather, the question has become: “We never questioned whether we should think statistically, but should we, and when?”
In contrast to statistics, “complexity” was never one of the formative paradigms that shaped evaluation theory and practice. For complexity, the question is: How can complexity find its way into evaluation theory and evaluation practice? Is there room for it? What place does it have?
History has led statistics to have a dominant role in evaluation, and we are now questioning that role. Complexity has the opposite problem: it has a minimal role, and its advocates are trying to increase its prominence.