Workshop description
All evaluators deal with unintended events that foul their evaluation plans. Either the program does not work as planned, or the evaluation does not work as planned, or both. Usually we treat these situations as fires, i.e., we exercise our skills to meet the crisis. This workshop steps back from crisis mode and presents a systematic treatment of why these situations pop up, the continuum from “unforeseen” to “unforeseeable”, tactics that can be used along that continuum, and why caution is needed, because anything we do to minimize the effect of surprise may itself be the cause of yet other difficulties. The intent of the workshop is twofold: first, to provide individual attendees with skills and knowledge they can employ in their own practice; second, to further a community of interest among evaluators dedicated to developing a systematic understanding of the phenomenon of unanticipated events that threaten the integrity of evaluation.
This is an abstract of a presentation I gave at AEA 2014. Click here for the slide deck.
How can we measure impact if impact is a moving target? Some examples: 1) a qualitative pre-post design uses semi-structured questions to assess people’s distress during refugee resettlement; 2) a post-test-only quantitative study of innovation adoption requires comparisons between design engineers in different parts of a company; 3) a health awareness program requires a validated survey. But what if, halfway through, different outcomes are suspected: family dynamics among the refugees, skills of production engineers, or mental health in the health awareness case? No problem if the solution is a post-test-only design using existing data or flexible interview protocols. But those are just a few arrows in the quiver of evaluation methods. How can we make the best design choices when so many of those choices assume that impact is stable? The answer depends on making design choices based on an understanding of why and when outcomes shift, and of how much uncertainty surrounds those outcomes.
Here is the slide deck for a presentation I did at a State Department conference: Diplomacy, Development, and Defense — Evaluating Foreign Policy Success.
Evaluation of Unintended Consequences of Development Efforts: Building Evaluation Capacity in Support of Development and Democracy
Common Introduction to all 6 Posts
History and Context
These blog posts are an extension of my efforts to convince evaluators to shift their focus from complex systems to specific behaviors of complex systems. We need to make this switch because there is no practical way to apply the notion of a “complex system” to decisions about program models, metrics, or methodology. But we can make practical decisions about models, metrics, and methodology if we attend to the things that complex systems do. My current favorite list of complex system behaviors that evaluators should attend to is below (a short sketch after the table illustrates why the first item matters in practice):
| Complexity behavior | Posting date |
| --- | --- |
| Power law distributions | up |
| Network effects and fractals | up |
| Unpredictable outcome chains | up |
| Consequence of small changes | up |
| Joint optimization of uncorrelated outcomes | up |
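To make the first behavior concrete, here is a minimal sketch in Python, using simulated data (the distributions and numbers are hypothetical illustrations, not drawn from any of the posts). It shows why power-law-distributed outcomes matter for the choice of metrics: when a few cases dominate, the mean stops describing the typical case.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical outcome data: one bell-shaped set, one heavy-tailed
# (power-law) set with a comparable "typical" scale.
normal_like = rng.normal(loc=10, scale=2, size=10_000)
power_law = (rng.pareto(a=1.5, size=10_000) + 1) * 10

for name, x in [("bell-shaped", normal_like), ("power-law", power_law)]:
    # Share of the total outcome held by the top 1% of cases.
    top1_share = np.sort(x)[-100:].sum() / x.sum()
    print(f"{name:12s} mean={x.mean():7.1f}  median={np.median(x):6.1f}  "
          f"top-1% share={top1_share:.0%}")
# Under the power law the mean and median diverge sharply and the top 1%
# of cases carries a large share of the total -- so "average impact" can
# be a misleading summary statistic.
```

The practical point for metric selection: under heavy tails, medians and tail shares tell the story that means hide.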
Recently I have been pushing the notion that one reason programs have unintended consequences, and why those consequences tend to be undesirable, is that programs attempt to maximize outcomes that are highly correlated, to the detriment of the multiple other benchmarks that recipients of program services need to meet in order to thrive. Details of what I have been thinking are at:
Despite all this writing, I have not been able to come up with a graphic to illustrate what I have in mind. I think I finally might have. The top of the picture illustrates the various benchmarks that the blue thing in the center needs to meet in order to thrive. (The “thing” is what the program is trying to help – people, school systems, county governments, whatever.)
The picture on the top depicts the situation before the program is implemented. There is an assumption (an implicit one, of course) that A, C, D, E, and F can be left alone, but that the blue thing would be better off if B improved. The program is implemented. It succeeds. The blue thing gets a lot better with respect to B. (Bottom of picture.)
The problem is that getting B to improve distorts the resources and processes needed to maintain all the other benchmarks. The blue thing can’t let that happen, so it acts in odd ways to maintain its “health”. Either it works in untested and uncertain ways to maintain those benchmarks (hence the squiggly lines), or it fails to meet them, or both. Programs have unintended consequences because they force the blue thing into this awkward and dysfunctional position.
What I’d like to see is programs that pursue the joint optimization of at least somewhat uncorrelated outcomes. I don’t think it has to be more than one other outcome; even that would help a lot. My belief is that doing so would minimize the distortion in the system, and thus minimize unintended negative outcomes.
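The argument is easier to see with numbers, so here is a toy simulation (all loadings, pool structures, and figures are hypothetical illustrations, not a model from these posts). Outcomes draw, with diminishing returns, on two underlying resource pools; B and C load on the same pool (so they are highly correlated), while E loads on the other. We let a program pick the resource split that maximizes its target outcomes, then measure how far that split is pulled from the pre-program balance.

```python
import numpy as np

def best_share(weights, steps=1001):
    """Grid-search the share s of the budget sent to pool 1, where each
    outcome has diminishing returns: w1*sqrt(s) + w2*sqrt(1 - s)."""
    shares = np.linspace(0, 1, steps)
    scores = weights[0] * np.sqrt(shares) + weights[1] * np.sqrt(1 - shares)
    return shares[int(np.argmax(scores))]

# Hypothetical loadings on (pool 1, pool 2).
B = np.array([1.0, 0.0])   # loads on pool 1
C = np.array([0.9, 0.1])   # also loads on pool 1 -> correlated with B
E = np.array([0.0, 1.0])   # loads on pool 2 -> uncorrelated with B

baseline = 0.5  # pre-program share of pool 1

for label, target in [("maximize B + C (correlated)", B + C),
                      ("maximize B + E (uncorrelated)", B + E)]:
    s = best_share(target)
    print(f"{label}: pool-1 share = {s:.2f}, "
          f"distortion = {abs(s - baseline):.2f}")
# -> the correlated pair pulls nearly the whole budget into pool 1 (large
#    distortion); the uncorrelated pair keeps the split near baseline.
```

The design intuition: correlated targets pull on the same underlying resources, so “succeeding” drags the whole system toward one corner, while an uncorrelated pair leaves the pre-program balance closer to intact.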
This blog post is a pitch for a different way to identify desired program outcomes.
Program Theories as they are Presently Constructed
Go into your archives and pull out your favorite logic models. Or dip into the evaluation literature and find models you like. You will find lots of variability among them in terms of: Continue reading “Joint Optimization of Uncorrelated Outcomes as a Method for Minimizing Undesirable Consequences of Program Action”
I have been developing an interest in “big data” as it may relate to the field of evaluation. The interest comes from two sources. 1) As the Editor of the journal Evaluation and Program Planning, I am on the lookout for cutting-edge material to present to our readers. 2) As someone who has done research and theoretical work on unintended consequences of program action, I see big data as a methodology that may help to reveal such unintended consequences. As a result of these two interests, I’m looking for:
- Examples where a big data approach has revealed consequences of program action that were not anticipated, and
- People who may want to write about big data as it applies to the field of evaluation.
If you can help please get in touch with me at firstname.lastname@example.org
Discussion in the first session of the Azenet Tucson book club focused on theory, using the section on explanatory power together with the introduction to life cycle behavior (p. 49). The most common evaluation activity among our members is evaluation of state or federally funded programs (DOE, SAMHSA, OJJDP, BJA). Common characteristics: Continue reading “Azenet Book Club – Life Cycles, Rigid evaluation requirements, and Implementation theory”
A school breakfast program was organized in a food insecure rural area of Nicaragua to increase school enrollment. In some schools teachers also gave food to younger siblings who came when mothers were bringing students to school. Continue reading “Michael Bamberger has provided a case of unintended consequences: How a school breakfast program became a community nutrition program”