Another post on joint optimization of uncorrelated program goals as a way to minimize unintended negative consequences

Recently I have been pushing the notion that one reason programs have unintended consequences, and why those consequences tend to be undesirable, is that programs attempt to maximize outcomes that are highly correlated, to the detriment of multiple other benchmarks that recipients of program services need to meet in order to thrive. Details of what I have been thinking are at:

Blog posts
Joint Optimization of Uncorrelated Outcomes as a Method for Minimizing Undesirable Consequences of Program Action

A simple recipe for improving the odds of sustainability: A systems perspective

Article
From Firefighting to Systematic Action: Toward A Research Agenda for Better Evaluation of Unintended Consequences

Despite all this writing, I have not been able to come up with a graphic to illustrate what I have in mind. I think I finally might have. The top of the picture illustrates the various benchmarks that the blue thing in the center needs to meet in order to thrive. (The “thing” is what the program is trying to help – people, school systems, county governments, whatever.)

[Figure: joint_optimization]

The picture on the top connotes the situation before the program is implemented. There is an assumption (an implicit one, of course) that A, C, D, E, and F can be left alone, but that the blue thing would be better off if B improved. The program is implemented. It succeeds. The blue thing gets a lot better with respect to B. (Bottom of picture.)

The problem is that getting B to improve distorts the resources and processes needed to maintain all the other benchmarks. The blue thing can’t let that happen, so it acts in odd ways to maintain its “health”. Either it works in untested and uncertain ways to maintain a benchmark (hence the squiggly lines), or it fails to meet the benchmark, or both. Programs have unintended consequences because they force the blue thing into this awkward and dysfunctional position.

What I’d like to see is programs that pursue the joint optimization of at least somewhat uncorrelated outcomes. It doesn’t have to be more than one additional outcome; even that would help a lot. My belief is that doing so would minimize the distortion in the system, and thus minimize unintended negative outcomes.
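If it helps to make that concrete, here is a minimal sketch (in Python, with made-up data and invented outcome names) of the kind of check I have in mind: look at the correlation structure of the outcomes a program is chasing before deciding whether there really is more than one of them.

```python
# Purely illustrative: made-up data, invented outcome names.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measurements of three candidate outcomes across 50 program sites.
reading = rng.normal(size=50)
math = 0.9 * reading + 0.1 * rng.normal(size=50)   # nearly redundant with reading
attendance = rng.normal(size=50)                    # largely independent

outcomes = np.column_stack([reading, math, attendance])
print(np.round(np.corrcoef(outcomes, rowvar=False), 2))
# High off-diagonal values (reading vs. math) suggest the program is really
# chasing one outcome; a low-correlation column (attendance) is the kind of
# "somewhat uncorrelated" outcome worth jointly optimizing.
```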


Ideological diversity in Evaluation. We don’t have it, and we do need it

I’m about to make a case that the field of Evaluation would benefit from theoreticians and practitioners who are more diverse than they are now with respect to beliefs about what constitutes the social good, and how to get there. Making this argument is not easy for me because it means putting my head over my heart. But I’ll do my best because I think it does matter for the future of Evaluation.

Examples from the Social Sciences
Think of the social sciences – Economics, Sociology, Political Science.

One does not have to have left-wing inclinations to appreciate Marxian critiques of society and the relationships among classes. That understanding can inform anyone’s view of the world, whether or not one thinks that, overall, Capitalism is a good organizing principle for society. On the other end of the spectrum, even a dyed-in-the-wool lefty would (should?) appreciate that self-interest and the profit motive are useful concepts for understanding why society works as it does, and that it does (might?) produce some social good despite its faults. Would the contribution of the field of Economics be as rich as it is if one of those perspectives did not exist?

Or to take an example from Sociology. Functionalists like Talcott Parsons and Robert Merton lean toward the notion that social change can lead to dysfunction. The existence of theory like that can shape (support? further?) go-slow views about the pace of social change. Or think of the conflict theories of people like Max Weber and C. Wright Mills. Those views support the idea that conflict and inequality are inherent in Capitalism. That’s the kind of theory that could support or shape a rather different view about the need for social change.

So what we have is a diversity of theory that is, in some combination, based on and facilitative of different views of how society should operate. I think the disciplines of Economics and Sociology are better off because of that diversity. More important, we are all better off for having access to these different perspectives as we try to figure out how to do the right thing, or even, what the right thing is.

Evaluation
I am convinced that over the long run, if Evaluation is going to make a contribution to society, it has to encompass the kind of diversity exemplified above. Why?

One reason is that stakeholders and interested parties have different beliefs about programs – their very existence, choices of which ones to implement, their makeup, and their desired outcomes. How can Evaluation serve the needs of that diversity if there is too much uniformity in our ranks? Also, what kind of credibility do we have if the world at large comes to see our professional associations and evaluations as supportive of only one perspective on the social good and the role of government?

The argument above deals with the design of evaluations and the collection and interpretation of data. But the importance of diversity extends to Evaluation theory as well.

Explaining the value of diversity in Evaluation theory is harder for me because I don’t have a good idea of how it might play out, but I’ll try. It seems to me that right now, all existing Evaluation theory carries the implicit belief that change is a good thing. Change may not work out as we wish because programs may be weak or have unintended consequences. But fundamentally, change is good and the reason to evaluate is to make the change better. Well, what would Evaluation look like if we had evaluation theory that drew from the Functionalist school of Sociology, which takes such a jaundiced view of social change? I have no idea, and emotionally, I’m not sure I want to know because personally I am in favor of intervention in the service of the social good. But on an intellectual level, I know that evaluation based on a conservative (small “c”) view of change would end up producing some very worthwhile insight that I am sure would not come from our present theory.

Moving from Blather to Action
There are numerous impediments to working toward ideological diversity. Chief among them: I am convinced that almost everyone in our field has politics not too different from mine. We go into the evaluation business because we think that government is good and we want to make it better. That self-selection bias makes us a pretty homogeneous group, one that forms associations that do not throw out the welcome mat for divergent opinion. Maybe the best we can do is make it known that ideological dimensions of diversity are welcome. Even that is not so easy, because what does “dimension of diversity” even mean? Still, I think it’s worth a shot.


Invitation to a Conversation Between Program Funders and Program Evaluators: Complex Behavior in Program Design and Evaluation

Effective programs and useful evaluations require much more appreciation of complex behavior than is currently the case. This state of affairs must change. Evaluation methodology is not the critical inhibitor of that change. Program design is. Our purpose is to begin a dialogue between program funders and evaluators to address this problem.

Current Practice: Common Sense Approach to Program Design and Evaluation
There is sense to successful program design, but that sense is not common sense. And therein lies a problem for program designers, and by extension, for the evaluators who are paid to evaluate the programs envisioned by their customers.

What is common sense?  
“Common sense is a basic ability to perceive, understand, and judge things that are shared by (“common to”) nearly all people and can reasonably be expected of nearly all people without need for debate.”

What is the common sense of program design?
The common sense of program design is usually expressed in one of two forms. One form is a set of columns with familiar labels such as “input”, “throughput”, and “output”. The second is a set of shapes that are connected with 1:1, 1:many, many:1, and many:many relationships. These relationships may be cast in elaborate forms, as, for example, a system dynamics model complete with buffers and feedback loops, or a tangle of participatory impact pathways.
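To show what I mean by the second form, here is a minimal sketch (in Python, with program element names invented purely for illustration) of a model as a set of shapes connected by hypothesized cause-and-effect links:

```python
# Hypothetical program elements and hypothesized cause-and-effect links.
from collections import defaultdict

links = [
    ("funding", "training_sessions"),        # input -> throughput
    ("training_sessions", "staff_skills"),   # one throughput -> many outputs
    ("training_sessions", "staff_morale"),
    ("staff_skills", "client_outcomes"),     # many outputs -> one impact
    ("staff_morale", "client_outcomes"),
]

model = defaultdict(list)
for cause, effect in links:
    model[cause].append(effect)

for cause, effects in model.items():
    print(f"{cause} -> {', '.join(effects)}")
```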

But no matter what the specific form, the elements of these models, and the hypothesized relationships among them, are based on our intuitive understandings of “cause and effect”, that is, on mechanistic views of how programs work. They also assume that the major operative elements of a program can be identified.

To be sure, program designers are aware that their models are simplifications of reality, that models can never be fully specified, and that uncertainties cannot be fully accounted for. Still, inspection of the program models that are produced makes it clear that almost all the thinking that went into developing those models was predominantly in the cause and effect, mechanistic mode. We think about the situation and say to ourselves: “If this happens, it will make (or has made) that happen.” Because the models are like that, so too are the evaluations.

Our common sense conceptualization of programs is based on deep knowledge about the problems being addressed and the methods available to address those problems. Common sense does not mean ignorance or naiveté. It does, however, mean that common sense logic is at play. There is no shame in approaching problems in this manner. We all do it. We are all human.

Including Complex Behavior in Program Design and Evaluation
When it comes to the very small, the very large, or the very fast, 20th Century science has succeeded in getting us to accept that the world is not commonsensical. But we have trouble accepting a non-commonsense view of the world at the scale that is experienced by human beings. Specifically, we do not think in terms of the dynamics of complex behavior. Complex behavior has much to say about why change happens, patterns of change, and program theory. We do not routinely consider these behaviors when we design programs and their evaluations.

There is nothing intuitively obvious about complex behavior. Much of it is not very psychologically satisfying. Some of it has uncomfortable implications for people who must commit resources and bear responsibility for those commitments. Still, program designers must appreciate complex behavior if they are ever going to design effective programs and commission meaningful evaluations of those programs.

Pursuing Change
There is already momentum in the field of evaluation to apply complexity. Our critique of that effort is that current discussions of complexity do not tap the richness of what complexity science has discovered, and that some of the conversation reflects an incorrect understanding of complexity. The purpose of this panel is to bring a more thorough, more research-based understanding of complexity into the conversation.

By “conversation” we mean dialogue between program designers and evaluators with respect to the role that complexity can play in a program’s operations, outcomes, and impacts. This conversation matters because, as we said at the outset, the inhibiting factor is the failure to recognize that complex behavior may be at play in the workings of programs. Methodology is not the problem. Except for a few exotic situations, the familiar tools of evaluation will more than suffice. The question is what program behavior evaluators have license to consider.

Our goal is to pursue a long-term effort to facilitate the necessary discourse. Our strategy is to generate a series of conferences, informal conversations, and empirical tests that will lead to a critical mass of program funders and evaluators who can bring about a long-term change in the rigor with which complexity is applied to program design and evaluation.


Joint Optimization of Uncorrelated Outcomes as a Method for Minimizing Undesirable Consequences of Program Action

This blog post is a pitch for a different way to identify desired program outcomes.

Program Theories as they are Presently Constructed

Go into your archives and pull out your favorite logic models. Or dip into the evaluation literature and find models you like. You will find lots of variability among them in terms of:

Continue reading


A simple recipe for improving the odds of sustainability: A systems perspective

I have been to a lot of conferences that had many sessions on ways to assure program sustainability. There is also a lot of really good research literature on this topic. Also, sustainability is a topic that has been front and center in my own work of late.

Analyses and explanations of sustainability inevitably end up with some fairly elaborate discussions about what factors lead to sustainability, how the program is embedded in its context, and so on. I have no doubt that all these treatments of sustainability have a great deal of merit. I take them seriously in my own work. I think everyone should. That said, I have been toying with another, much simpler approach.

Almost every program I have ever evaluated had only one major outcome that it was after. Sure there are cascading outcomes from proximate to distal. (Outcome to waves of impact, if you like that phrasing better.) And of course many programs have many outcomes at all ranks. But in general the proximate outcomes, even if they are many, tend to be highly correlated. So in essence, there is only one.

What this means is that when a program is dropped into a complex system, that program is designed to move the entire system in the direction of attaining that one outcome. We know how systems work. If enough effort is put in, they can in fact be made to optimize a single objective. But we also know that success like that makes the system as a whole dysfunctional in terms of its ability to adapt to environmental change, meet the needs of multiple stakeholders, maintain effective and efficient internal operations, and so on. As I see it, that means that any effort to optimize one outcome will be inherently unstable. No need to look at the details.

My notion is that in order to increase the probability of sustainability, a program should pursue multiple outcomes that are as uncorrelated as possible. The goal should be joint optimization, even at the expense of sub-optimizing each of the desired outcomes.
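For anyone who wants the idea in miniature, here is a toy sketch (Python, a caricature rather than a model of any real program, with the distortion factor invented) of what happens when a fixed pool of effort is poured into one benchmark versus spread across two:

```python
# A caricature, not a program model: numbers and the distortion factor are invented.
def benchmarks(effort_b, effort_rest):
    """Benchmark B responds to its own effort; the remaining benchmarks erode
    when effort is pulled away from them to push B."""
    b = effort_b
    rest = effort_rest - 0.5 * effort_b   # distortion cost of chasing B
    return b, rest

total_effort = 1.0

# Optimize the single outcome: B looks great, the rest of the system suffers.
print(benchmarks(total_effort, 0.0))                        # (1.0, -0.5)

# Joint optimization: accept a sub-optimal B, keep the other benchmarks viable.
print(benchmarks(0.5 * total_effort, 0.5 * total_effort))   # (0.5, 0.25)
```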

I understand the problems in following my idea. The greater the number of uncorrelated outcomes, the greater the need to coordinate across boundaries, and as I have argued elsewhere in this blog, that is exceedingly difficult. (Why do Policy and Program Planners Assume-away Complexity?)  Also, I am by no means advocating ignoring all that work that has been done on sustainability. Ignoring it is guaranteed to lead to trouble.

Even so, I think the idea I’m proposing has some merit. Look at the outcomes being pursued, and give some thought to how highly correlated they are. What we know about systems tells us that optimization of one outcome may succeed in the short term, but it will not succeed in the long term. Joint optimization of uncorrelated outcomes? That gives us a better fighting chance.


Agent-based Evaluation Guiding Implementation of Solar Technology

AEGIS: Agent-based Evaluation Guiding Implementation of Solar

DE-FOA-0001496: SOLAR ENERGY EVOLUTION AND DIFFUSION STUDIES II – STATE ENERGY STRATEGIES (SEEDSII-SES)

Business contact:
Mr. Vijay Kohli
President
Syntek Technologies
703.522.1025 ext. 201
vkohli@syntek.org
Technical contact:
Jonathan A. Morell, Ph.D.
Director of Evaluation
Syntek Technologies
734 646-8622
jmorell@syntek.org
Confidentiality statement: This proposal includes information and data that shall not be disclosed outside the Government and shall not be duplicated, used, or disclosed – in whole or in part – for any purpose other than to evaluate this proposal. However, if a contract is awarded to this participant as a result of – or in connection with – the submission of this information and data, the Government shall have the right to duplicate, use, or disclose the data to the extent provided in the resulting contract. This restriction does not limit the Government’s right to use information contained in these data if they are obtained from another source without restriction. The entirety of this proposal is subject to this restriction.

Introduction

AEGIS (Agent-based Evaluation Guiding Implementation of Solar) demonstrates a novel approach to doing program evaluation: combining agent-based modeling with traditional program evaluation, and doing so continually, as the evaluation work unfolds. We propose to test the value of this approach for evaluating programs that promote the goals of SEEDS II, Topic 1, specifically, “Development of new approaches to analyze and understand solar diffusion and solar technology evolution; developing and utilizing the significant solar data resources that are available; improvement in applied research program evaluation and portfolio analysis for solar technologies leading to clearer attribution and identification of successes and trends.”

The field of evaluation has historically fallen short in providing the conceptual understanding and instrumental knowledge that policy makers and planners need to design better programs, or to identify and measure impact. Our hypothesis, supported by our work to date, is that agent-based modeling can improve the quality and contribution of evaluation. Specifically, we will increase stakeholder involvement and the adoption of evaluation recommendations. We propose to apply and evaluate our approach on programs that are designed to reduce the soft costs of solar deployment and to overcome barriers to diffusion, commercialization, and acceptance.
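To give a flavor of what we mean by agent-based modeling in this context, here is a deliberately tiny, generic sketch of technology diffusion on a contact network. It is not the AEGIS model; the network, thresholds, and seeding are all invented for illustration.

```python
# Generic threshold-based diffusion toy; not the AEGIS model.
import random

random.seed(1)
N = 100  # number of potential adopters

# Each agent gets a few randomly chosen contacts and a peer-influence threshold.
neighbors = {i: random.sample([j for j in range(N) if j != i], 5) for i in range(N)}
threshold = {i: random.uniform(0.1, 0.5) for i in range(N)}
adopted = {i: random.random() < 0.05 for i in range(N)}  # a handful of early adopters

for step in range(20):
    for i in range(N):
        if not adopted[i]:
            peer_share = sum(adopted[j] for j in neighbors[i]) / len(neighbors[i])
            if peer_share >= threshold[i]:
                adopted[i] = True
    print(step, sum(adopted.values()))  # adoption count over time, typically an S-curve
```

The toy is not the point; the continual coupling described above is. As evaluation data came in, they would update a model of this general kind, and the model in turn would point the evaluation at the agents and mechanisms that matter.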

Scientific Justification and Work to Date

Continue reading


Things to think about when observing programs from a systems perspective

A friend of mine (Donna Podems) is heading up a project that involves providing a structure for a group of on-the-ground observers so they can apply a systems perspective to understanding what programs are doing and what they are accomplishing. She asked me for a brain dump, which I happily provided. What follows is by no means a systematic approach to looking at programs in terms of systems. It’s just a laundry list of ideas that popped into my head and flowed through my fingers. Below is a somewhat cleaned up version of what I sent her.

Hi Donna,

What follows is not a list of independent items. In fact I guarantee there are lots of connections. For instance, “redundancy” and “multiple paths” are not the same thing, but they are related. But time is tight, and I have a Greek meatball recipe to shop for, so let’s assume they are independent. Continue reading
