Joint Optimization of Uncorrelated Outcomes: Part 6 of 6 Posts on Evaluation, Complex Behavior, and Themes in Complexity Science

Common Introduction to all 6 Posts

History and Context
These blog posts are an extension of my efforts to convince evaluators to shift their focus from complex systems to specific behaviors of complex systems. We need to make this switch because there is no practical way to apply the notion of a “complex system” to decisions about program models, metrics, or methodology. But we can make practical decisions about models, metrics, and methodology if we attend to the things that complex systems do. My current favorite list of complex system behaviors that evaluators should attend to is:

All six posts are now up:

  • Emergence
  • Power law distributions
  • Network effects and fractals
  • Unpredictable outcome chains
  • Consequence of small changes
  • Joint optimization of uncorrelated outcomes

For a history of my activity on this subject, see these PowerPoint presentations: 1, 2, and 3; these fifteen-minute AEA “Coffee Break” videos: 4, 5, and 6; and this long, comprehensive video: 7.

Since I began thinking about complexity and evaluation in this way, I have been uncomfortable with the idea of having just a list of seemingly unconnected items. I have also been unhappy because presentations and lectures are not good vehicles for developing lines of reasoning. I wrote this series of posts to address both dissatisfactions.

From my reading in complexity I identified four themes that seem relevant for evaluation.

  • Pattern
  • Predictability
  • How change happens
  • Adaptive and evolutionary behavior

Others may pick out different themes, but these are the ones that work for me. Boundaries among these themes are not clean, and connections among them abound. But treating them separately works well enough for me, at least for right now. Figure 1 is a visual depiction of my approach to this subject.

Figure 1: Complex Behaviors and Complexity Themes

The black rectangles on the left depict a scenario that pairs a well-defined program with a well-defined evaluation, resulting in a clear understanding of program outcomes. I respect evaluation like this. It yields good information, and there are compelling reasons for working this way. (For reasons why I believe this, see 1 and 2.)

  • The blue region indicates that no matter how clear-cut the program and the evaluation, both are embedded in a web of entities (programs, policies, culture, regulation, legislation, etc.) that interact with our program in unknown and often unknowable ways.
  • The green region depicts what happens over time. The program may be intact, but the contextual web has evolved in unknown and often unknowable ways. Such are the ways of complex systems.
  • Recognizing that we have a complex system, however, does not help us develop program theory, formulate methodology, or analyze and interpret data. For that we need to focus on the behaviors of complex systems, as depicted in the red rows of the table below. The columns show the complexity themes. The Xs in the cells show which themes relate to which complexity behaviors.

Joint optimization of uncorrelated outcomes

| Complexity behavior | Pattern | Predictability | How change happens | Adaptive/evolutionary behavior |
| --- | --- | --- | --- | --- |
| Emergence |  |  |  |  |
| Power law distributions |  |  |  |  |
| Network effects and fractals |  |  |  |  |
| Unpredictable outcome chains |  |  |  |  |
| Consequence of small changes |  |  |  |  |
| Joint optimization of uncorrelated outcomes |  |  | X | X |

One way to think about the programs we evaluate is to see them as part of an ecology in which many different outcomes are being pursued by many different programs. Some of those outcomes will be correlated and some will not. The outcomes pursued by any single program, however, will be correlated. Thus, to say we have a “program” is to say that resources are being invested in a particular set of related outcomes to the exclusion of others. Because of that resource investment, the other programs will have to adapt to scarcity. One cannot predict what those adaptations will be, but one can be sure that adaptations will take place, and that those adaptations will hurt the other programs’ ability to achieve their desired outcomes.
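To make this dynamic concrete, here is a minimal toy simulation of my own (the growth factor, noise level, and number of programs are arbitrary assumptions, not data from any evaluation). Several programs draw on one shared pool of local resources; outside funding compounds one program’s pull on that pool, and the others’ shares shrink.

```python
import random

def simulate(years=10, funded=0, n_programs=4, seed=1):
    """Toy model: programs compete for a fixed pool of local resources.
    Outside funding compounds one program's pull on the pool each year."""
    random.seed(seed)
    pull = [1.0] * n_programs        # each program's claim on local resources
    for year in range(1, years + 1):
        pull[funded] *= 1.3          # assumed effect of outside funding
        # Adaptation is noisy: the direction (shrinking shares for the
        # unfunded programs) is predictable, the exact path is not.
        pull = [max(0.05, p + random.gauss(0, 0.05)) for p in pull]
        total = sum(pull)
        yield year, [p / total for p in pull]

# The funded program's share of local resources climbs; the rest decline.
for year, shares in simulate():
    print(year, ["%.2f" % s for s in shares])
```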

Other than the sense that this reasoning comes across as a bit Hobbesian, I see three objections. The first is that resources for a program often come from the “outside”, e.g., a state paying for local nurses in a primary prevention clinic, or an NGO funding a school system. Leaving aside the question of what “outside” means, there is a problem: no matter where the money comes from, local resources are always needed. For example, outside money may build schools and pay teachers’ salaries, but it is local talent that does the teaching. A second objection is that a program’s outcomes may not all be correlated. The best I can say is that I have never evaluated an outcome chain in which achieving one objective was not correlated with achieving others. (If you have counterexamples, please send them my way.) Finally, what about cooperative and synergistic relationships? They are certainly possible, but if I had to bet my money, I’d bet on a competitive relationship. I’m sure I would win this bet for short- and intermediate-range outcomes. I’m less sure about what an ecosystem would do in the long run.

I constructed Figures 2 and 3 to illustrate the argument I’m making. Figure 2 shows a logic model for an AIDS prevention program. It’s pretty straightforward. The program is implemented. It delivers services. The immediate consequence is to reduce incidence and prevalence. Reduced incidence and prevalence improve a variety of related outcomes. There are feedback loops from program success back to both program design and the services provided. (A small sketch after Figure 2 renders this chain as a directed graph.)

Figure 2: Traditional AIDS Theory of Change
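For readers who like to see the structure explicitly, here is a minimal sketch that renders Figure 2’s logic as a directed graph. The node names are my paraphrases of the figure, and the reachability check is only an illustration.

```python
# Figure 2's logic as a directed graph. The feedback loops are the edges
# that point back to program design and services delivered.
theory_of_change = {
    "program implemented": ["services delivered"],
    "services delivered": ["incidence and prevalence reduced"],
    "incidence and prevalence reduced": ["related outcomes improved"],
    # feedback loops: program success feeds back into design and services
    "related outcomes improved": ["program design", "services delivered"],
    "program design": ["program implemented"],
}

def downstream(graph, start, seen=None):
    """Collect every node reachable from `start`, visiting each node once
    so the feedback loops do not cause infinite recursion."""
    seen = set() if seen is None else seen
    for nxt in graph.get(start, []):
        if nxt not in seen:
            seen.add(nxt)
            downstream(graph, nxt, seen)
    return seen

print(sorted(downstream(theory_of_change, "program implemented")))
```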

But there is another way to depict Figure 2’s logic. Figure 3 shows what happens when an outcome-maximization logic is nested within an adaptive/evolutionary logic.

Figure 3: Evolutionary / Adaptive Theory of Change
Explanation of Figure 3:

  • Top left: The AIDS program depicted in Figure 2 is implemented.
  • Top right: A radar chart depicting the functioning of the various parts of the public health system, of which AIDS treatment and prevention is but one part. The public health system may not be very good, but each part of it is about as good as it can be given its circumstances.
  • Bottom left: The table illustrates what happens when all that special effort is put into the AIDS program. “Resources” means not only money but all the elements in that list. For instance, where would a nurse choose to put his or her skills, given the choice between a poorly functioning prenatal program for women and a better-paying, better-functioning AIDS program?
  • Bottom right: This radar chart shows the functioning of the parts of the public health system after the AIDS program has been working for a while. The AIDS metrics have improved, but the other parts of the system have had to change and shrink to adapt to their new “resource” environment. It is not possible to know in advance how they will function, but it is a safe bet that they will function less well.
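The two radar charts are easy to reproduce. Here is an illustrative matplotlib sketch; the subsystem names and functioning scores are hypothetical stand-ins, not data from any real public health system.

```python
import math
import matplotlib.pyplot as plt

subsystems = ["AIDS", "Prenatal care", "Vaccination", "TB control", "Nutrition"]
before = [0.50, 0.60, 0.55, 0.50, 0.60]   # assumed functioning scores, 0..1
after = [0.85, 0.45, 0.40, 0.35, 0.50]    # AIDS improves; the rest shrink

# One angle per subsystem, repeating the first to close the polygon.
angles = [2 * math.pi * i / len(subsystems) for i in range(len(subsystems))]
angles += angles[:1]

ax = plt.subplot(polar=True)
for scores, label in [(before, "before"), (after, "after")]:
    values = scores + scores[:1]
    ax.plot(angles, values, label=label)
    ax.fill(angles, values, alpha=0.15)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(subsystems)
ax.legend()
plt.show()
```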

For evaluators, the implications of looking at the AIDS program in this way are straightforward:

  • Put a rigorous evaluation of the program in place.
  • Identify the other parts of the public health system.
  • Locate the metrics that indicate how those parts are functioning.
  • Identify the resources that may move among programs.
  • Develop a monitoring system to detect how those resources are flowing.
  • Develop a monitoring system to document how the different parts of the public health system are adapting.

I’m not saying that doing this would be cheap or easy. I am saying that it is not methodologically difficult or exotic. We don’t need to learn any new methodologies or analysis techniques. What we already know will serve us well.
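As a sketch of what such a monitoring system might record, here is a minimal data structure. The subsystems, indicator names, and values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    period: str       # e.g. "2024-Q1"
    subsystem: str    # a part of the public health system
    indicator: str    # a resource or functioning metric
    value: float

log = [
    Observation("Q1", "AIDS program", "nurse FTEs", 10),
    Observation("Q1", "prenatal care", "nurse FTEs", 8),
    Observation("Q4", "AIDS program", "nurse FTEs", 16),
    Observation("Q4", "prenatal care", "nurse FTEs", 4),
]

def change(log, subsystem, indicator):
    """Difference between the first and last recorded values."""
    values = [o.value for o in log
              if o.subsystem == subsystem and o.indicator == indicator]
    return values[-1] - values[0]

# A negative change signals resources flowing away from a subsystem.
print(change(log, "prenatal care", "nurse FTEs"))   # -4
```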

We do, however, need to get our customers to accept the evolutionary/adaptive logic, and therein lies a problem, because having that conversation can be uncomfortable. All those other parts of the public health system are outside the scope of the funder’s mission. Including them in the evaluation makes it very likely that negative consequences will be detected. Thinking this way is strange and unfamiliar. Thus, doing the evaluation this way would add discomfort and cost for the sake of putting the program in a bad light. The methodology is easy. The politics and psychology are not.
