Recently I have been pushing the notion that one reason programs have unintended consequences, and why those consequences tend to be undesirable, is that programs attempt to maximize outcomes that are highly correlated, to the detriment of the multiple other benchmarks that recipients of program services need to meet in order to thrive. Details of what I have been thinking are at:

Blog posts
Joint Optimization of Uncorrelated Outcomes as a Method for Minimizing Undesirable Consequences of Program Action

A simple recipe for improving the odds of sustainability: A systems perspective

Article
From Firefighting to Systematic Action: Toward A Research Agenda for Better Evaluation of Unintended Consequences

Despite all this writing, I had not been able to come up with a graphic to illustrate what I have in mind. I think I finally might have. The top of the picture illustrates the various benchmarks that the blue thing in the center needs to meet in order to thrive. (The “thing” is what the program is trying to help – people, school systems, county governments, whatever.)

[Figure: joint_optimization]

The picture on the top depicts the situation before the program is implemented. There is an assumption made (an implicit one, of course) that A, C, D, E, and F can be left alone, but that the blue thing would be better off if B improved. The program is implemented. It succeeds. The blue thing gets a lot better with respect to B. (Bottom of picture.)

The problem is that getting B to improve distorts the resources and processes needed to maintain all the other benchmarks. The blue thing can’t let that happen, so it acts in odd ways to maintain its “health”. Either it works in untested and uncertain ways to maintain the benchmark (hence the squiggly lines), or it fails to meet the benchmark, or both. Programs have unintended consequences because they force the blue thing into this awkward and dysfunctional position.

What I’d like to see is programs that pursue the joint optimization of at least somewhat uncorrelated outcomes. I don’t think it needs to involve more than one additional outcome; even that would help a lot. My belief is that doing so would minimize the distortion in the system, and thus minimize unintended negative outcomes.
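
To make the intuition concrete, here is a toy sketch in Python. It is my own construction, not anything from the posts above: I assume that improving any single outcome has diminishing returns (modeled as a quadratic resource cost), that the system has a fixed amount of slack before maintenance of the other benchmarks suffers, and that the program can either buy all of its improvement through B or split the same total improvement between B and a roughly uncorrelated outcome E. All of the numbers are illustrative.

```python
# Toy illustration (my own construction, not from the original posts).
# Assumption: improving a single outcome has diminishing returns, so the
# resources diverted from maintenance grow quadratically with the gain bought.

def cost_of_gain(gain: float) -> float:
    """Resources diverted from maintenance to buy `gain` units of improvement."""
    return gain ** 2

SLACK = 3.0              # spare resources before other benchmarks suffer (assumed)
TARGET_TOTAL_GAIN = 2.0  # total improvement the program wants to deliver (assumed)

# Strategy 1: maximize B alone -- all improvement bought on one outcome.
single = cost_of_gain(TARGET_TOTAL_GAIN)

# Strategy 2: jointly optimize B and E -- the same total improvement,
# split between two (roughly uncorrelated) outcomes.
joint = cost_of_gain(TARGET_TOTAL_GAIN / 2) + cost_of_gain(TARGET_TOTAL_GAIN / 2)

for label, diverted in [("B only", single), ("B and E jointly", joint)]:
    shortfall = max(0.0, diverted - SLACK)
    print(f"{label}: diverts {diverted:.1f} units; "
          f"maintenance shortfall = {shortfall:.1f}")

# Output:
#   B only: diverts 4.0 units; maintenance shortfall = 1.0
#   B and E jointly: diverts 2.0 units; maintenance shortfall = 0.0
```

The diminishing-returns assumption is doing the work here: pushing one outcome hard gets disproportionately expensive, so spreading the same total gain across two uncorrelated outcomes diverts less from the maintenance of A, C, D, and F. That is one way to formalize the distortion in the picture, not the only way.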

2 thoughts on “Another post on joint optimization of uncorrelated program goals as a way to minimize unintended negative consequences”

  1. Two top-of-the-head thoughts, Jonny, both from other parts of the systems field.

    Firstly, the topic of ‘sub-optimization’, where you trade off the effectiveness and efficiency of parts of the situation for the good of the whole. These ideas have been knocking around the systems field since the early 1950s, with the work of Charles Hitch at the RAND Corporation. True, this concept largely focuses on trade-offs in process rather than outcome. For instance, here in New Zealand we had a very high-profile hospital chief who, in their quest to increase efficiency, went around every department and increased the efficiency of each. The result was an expensive flop, since, surprise surprise, the most efficient way of running the laundry, nursing shift changes, meal preparation, and consultant visits meant that everything happened at 1pm, causing chaos.

    The other thought comes from critical systems’ key interest in what you do with the negative consequences of an intervention (or, strictly speaking, your reference system) for those not directly involved in that intervention… and the implications for your intervention. Critical systems’ interest in this is primarily around moral legitimacy, but I’ve found the methods useful for more mundane aspects of an intervention, including identifying possible negative outcomes. For instance, I once had to comment on an evaluation where the outcome for a particular stakeholder not involved in the actual intervention was so negative that their response effectively destroyed the intervention. A critical systems analysis would probably have picked up this possible consequence very quickly, and a strategy to deal with it could have been developed.

    Might be worth checking out those two traditions.

    1. Hi Bob—
      I have a few thoughts. First, of course you are right. Both of those systems traditions are worthwhile and provide insight my perspective does not have.

      I have been giving more thought to why I like my joint-optimization way of looking at things. Mostly, because it has a longitudinal, evolutionary sensibility to it. In other words, it captures a dynamic of how the world works. Take your hospital example. The solution is static in a sense. We know there is a problem because the shifts coincide. We can change the shift schedule and avoid the problem. If they had had the right cross-functional team doing the planning, they would have detected the problem and avoided it. Engineers know this well, hence their use of concurrent engineering processes. In my evolutionary take, we cannot know which precise distortion will take place, but: 1) we know that something will happen, and 2) we know why it will happen, i.e., we have a causal mechanism.

      As for your critical systems comment: there is always a need to draw some kind of reasonable boundary to define the system you want to work with. There is no escaping the necessity of keeping some things in and some things out. I have often heard you make the point that people do not pay enough attention to doing this carefully. You were right when you said it, you will be right when you say it again, and, as in the past, people will continue not to take the need seriously enough.
