Workshop: Grappling With the Unexpected: From Firefighting to Systematic Action

Workshop slides for "Grappling With the Unexpected: From Firefighting to Systematic Action"

Workshop description: All evaluators deal with unintended events that foul their evaluation plans. Either the program does not work as planned, or the evaluation does not work as planned, or both. Usually we treat these situations as fires, i.e., we exercise our skills to meet the crisis. This workshop steps back from crisis mode and presents a systematic treatment of why these situations arise, the continuum from "unforeseen" to "unforeseeable," the tactics that can be used along that continuum, and why caution is needed: anything we do to blunt the effect of surprise may itself cause other difficulties. The intent of the workshop is twofold: first, to provide attendees with skills and knowledge they can employ in their own practice; second, to further a community of interest among evaluators dedicated to developing a systematic understanding of unanticipated events that threaten the integrity of evaluation.

What to Do When Impacts Shift and Evaluation Design Requires Stability?

This is an abstract of a presentation I gave at AEA 2014. Click here for the slide deck.

How do you measure impact when impact is a moving target? Some examples: (1) a qualitative pre-post design uses semi-structured questions to gauge people's distress during refugee resettlement; (2) a post-test-only quantitative study of innovation adoption requires comparisons between design engineers in different parts of a company; (3) a health awareness program requires a validated survey. But what if, halfway through, different outcomes come under suspicion: family dynamics among the refugees, skills of production engineers, mental health in the health awareness case? No problem if the solution is a post-test-only design using existing data or flexible interview protocols, but those are just a few arrows in the quiver of evaluation methods. How can the full range of design choices be kept open when so many of those choices assume a stable impact? The answer lies in making design choices based on an understanding of why and when outcomes shift, and of how much uncertainty surrounds them.
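To make that closing point concrete, here is a purely hypothetical decision sketch in Python. The function name, categories, and cutoffs are all invented for illustration; nothing here comes from the presentation itself. It encodes the trade-off the abstract describes: designs that commit to an outcome early assume stability, while designs that commit late tolerate shifting impacts.

```python
# Hypothetical sketch: mapping beliefs about outcome stability to a
# family of evaluation designs. Categories and rules are illustrative only.

def suggest_design_family(outcome_uncertainty: str,
                          midstream_shift_likely: bool) -> str:
    """Return a design family suited to the expected stability of outcomes.

    outcome_uncertainty: "low", "medium", or "high" uncertainty about
        which outcomes will matter.
    midstream_shift_likely: True if outcomes may change partway through.
    """
    if outcome_uncertainty == "high" or midstream_shift_likely:
        # Late-committing designs keep options open if impacts move.
        return "post-test-only design on existing data, or flexible interview protocols"
    if outcome_uncertainty == "medium":
        # Hedge: commit to a core instrument, keep qualitative modules adaptable.
        return "pre-post core instrument with adaptable qualitative modules"
    # Stable, well-understood outcomes reward early commitment.
    return "validated survey or structured pre-post comparison"

print(suggest_design_family("high", midstream_shift_likely=True))
```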

Joint Optimization of Uncorrelated Outcomes: Part 6 of 6 Posts on Evaluation, Complex Behavior, and Themes in Complexity Science

Common Introduction to all 6 Posts

History and Context
These blog posts are an extension of my efforts to convince evaluators to shift their focus from complex systems to the specific behaviors of complex systems. We need to make this switch because there is no practical way to apply the notion of a "complex system" to decisions about program models, metrics, or methodology. But we can make practical decisions about models, metrics, and methodology if we attend to the things that complex systems do. My current favorite list of complex system behaviors that evaluators should attend to is:

Complexity behaviors (all six posts are now up):
· Emergence
· Power law distributions
· Network effects and fractals
· Unpredictable outcome chains
· Consequence of small changes
· Joint optimization of uncorrelated outcomes

For a history of my activity on this subject, see: PowerPoint presentations 1, 2, and 3; fifteen-minute AEA "Coffee Break" videos 4, 5, and 6; and a long comprehensive video, 7.


Another post on joint optimization of uncorrelated program goals as a way to minimize unintended negative consequences

Recently I have been pushing the notion that one reason programs have unintended consequences, and one reason those consequences tend to be undesirable, is that programs attempt to maximize outcomes that are highly correlated, to the detriment of the many other benchmarks that recipients of program services need to meet in order to thrive. Details of my thinking are at:

Blog posts
Joint Optimization of Uncorrelated Outcomes as a Method for Minimizing Undesirable Consequences of Program Action

A simple recipe for improving the odds of sustainability: A systems perspective

Article
From Firefighting to Systematic Action: Toward A Research Agenda for Better Evaluation of Unintended Consequences

Despite all this writing, I had not been able to come up with a graphic to illustrate what I have in mind. I think I finally have. The top of the picture illustrates the various benchmarks that the blue thing in the center needs to meet in order to thrive. (The "thing" is what the program is trying to help: people, school systems, county governments, whatever.)

[Figure: joint_optimization. Benchmarks A through F surrounding the "blue thing," before (top) and after (bottom) the program improves B.]

The top of the picture depicts the situation before the program is implemented. There is an assumption (an implicit one, of course) that A, C, D, E, and F can be left alone, but that the blue thing would be better off if B improved. The program is implemented. It succeeds. The blue thing gets a lot better with respect to B (bottom of picture).

The problem is that getting B to improve distorts the resources and processes needed to maintain all the other benchmarks. The blue thing can't let that happen, so it acts in odd ways to maintain its "health." Either it works in untested and uncertain ways to meet each benchmark (hence the squiggly lines), or it fails to meet the benchmark, or both. Programs have unintended consequences because they force the blue thing into this awkward and dysfunctional position.

What I'd like to see is programs that pursue the joint optimization of at least somewhat uncorrelated outcomes. It need not be more than one additional outcome; even that would help a lot. My belief is that doing so would minimize the distortion in the system, and thus minimize unintended negative outcomes.
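As a check on that intuition, here is a toy model in Python. It is a sketch of the argument, not the argument itself, and it leans on one loudly labeled assumption: the strain a system absorbs grows faster than linearly (here, quadratically) with how far any single benchmark is pushed beyond its normal range. All benchmark names and numbers are invented.

```python
# Toy model: six benchmarks (A-F) that the "blue thing" must keep at a
# threshold. Pushing any benchmark above threshold strains the system
# (quadratically, by assumption), and that strain is drained from the
# benchmarks the program ignores.

THRESHOLD = 1.0   # every benchmark starts at its "just healthy" level
EFFORT = 1.0      # total improvement the program tries to buy

def remaining_benchmark_levels(pushes):
    """Levels of the untouched benchmarks after the system rebalances.

    `pushes` maps a benchmark name to how far the program pushes it above
    threshold. The total strain is shared equally by the untouched ones.
    """
    strain = sum(p ** 2 for p in pushes.values())
    untouched = [b for b in "ABCDEF" if b not in pushes]
    return {b: THRESHOLD - strain / len(untouched) for b in untouched}

def distortion(levels):
    """Total shortfall of the untouched benchmarks below threshold."""
    return sum(max(THRESHOLD - v, 0.0) for v in levels.values())

# Program 1: maximize B alone with the full effort.
single = remaining_benchmark_levels({"B": EFFORT})

# Program 2: same total effort, split between B and the uncorrelated E.
joint = remaining_benchmark_levels({"B": EFFORT / 2, "E": EFFORT / 2})

print(f"distortion, B alone: {distortion(single):.2f}")   # 1.00
print(f"distortion, B and E: {distortion(joint):.2f}")    # 0.50
```

The point of the sketch is only the comparison: under the convex-strain assumption, the same program effort, spread across two uncorrelated outcomes, leaves the rest of the system half as distorted as the single-outcome push.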


Joint Optimization of Uncorrelated Outcomes as a Method for Minimizing Undesirable Consequences of Program Action

This blog post is a pitch for a different way to identify desired program outcomes.

Program Theories as They Are Presently Constructed

Go into your archives and pull out your favorite logic models, or dip into the evaluation literature and find models you like. You will find lots of variability among them in terms of: …