Common Introduction to all Three Posts
This is the second of three blog posts I have been writing to help me understand how, given the reality of how programs are designed, “complexity” can be used in evaluation. If it helps other people, great. If not, at least it helped me.

Part 1:  Complexity in Evaluation and in Studies on Complexity
In this section I talked about using complexity ideas as practical guides and inspiration for conducting an evaluation, and how those ideas hold up when looked at in terms of what is known from the study of complexity. It is by no means necessary that there be a perfect fit. It’s not even a good idea to try to make it a perfect fit. But the extent of the fit can’t be ignored, either.

Part 2: Complexity in Program Design
The problems that programs try to solve may be complex. The programs themselves may behave in complex ways when they are deployed. But the people who design programs act as if neither their programs, nor the desired outcomes, involve complex behavior. (I know this is an exaggeration, but not all that much. Details to follow.) It’s not that people don’t know better. They do. But there are very powerful and legitimate reasons to assume away complex behavior. So, if such powerful reasons exist, why would an evaluator want to deal with complexity? What’s the value added in the information the evaluator would produce? How might an evaluation recognize complexity and still be useful to program designers?

Part 3: Turning the Wrench: Applying Complexity in Evaluation
This is where the “turning the wrench” phrase in the title of this blog post comes from. Considering what I said in the first two blog posts, how can I make good use of complexity in evaluation? In this regard my approach to complexity is no different than my approach to ANOVA or to doing a content analysis of interview data. I want to put my hands on a tool and make something happen. ANOVA, content analysis and complexity are different kinds of wrenches. The question is which one to use when, and how.

Complex Behavior or Complex System?
I’m not sure what the difference is between a “complex system” and “complex behavior”, but I am sure that unless I try to differentiate the two in my own mind, I’m going to get very confused. From what I have read in the evaluation literature, discussions tend to focus on “complex systems”, complete with topics such as parts, boundaries, part/whole relationships, and so on. My reading in the complexity literature, however, makes scarce use of these concepts. I find myself getting into trouble when talking about complexity with evaluators because their focus is on the “systems” stuff, and mine is on the “complexity” stuff. In these three blog posts I am going to concentrate on “complex behavior” as it appears in the research literature on complexity, not on the nature of “complex systems”. I don’t want to belabor this point because the boundaries are fuzzy, and there is overlap. But I will try to draw that distinction as clearly as I can.

Why do Policy and Program Planners Assume Away Complexity?
The evaluation community has produced a great deal of interesting and insightful writing about complexity. This work comes from an acute understanding that the programs we evaluate either are, or are embedded in, complex systems. The sense is that if we ignore the complex system nature of what we evaluate, we will be invoking incorrect (or at least sub-optimal) program theory, and that we will field methodologies that miss important knowledge about how programs are implemented, how they operate, and what they accomplish. Because we know this, we are not at ease with programs that ignore complexity. I feel this unease too, and I have made my share of efforts to get program designers to appreciate the role of complex systems in their work. My experience is that program designers do understand the importance of thinking in terms of complex systems, but that their understanding seldom translates into practice. Whatever program designers may acknowledge, most programs that evaluators touch will be designed as if complex behaviors were not at play.

If the designers know better, why don’t they act on the knowledge? Because program designers know other things as well. They know that there are good political reasons for the status quo, that there are good economic reasons for it, that there are good sociological reasons for it, that there are good historical reasons for it, and that there are good cultural reasons for it. These factors drive a disconnect between how planners must behave in order to succeed at designing and implementing programs, and how the world works. (I’m summarizing from chapter 3 in my book – “Placing Surprise in the Evaluation Landscape.”)

Why the disconnect? Because while powerful solutions to many problems require coordinated action across multiple domains, often the only practical way to transform an idea for a program into reality is to constrain action within a limited set of boundaries. See the picture for an illustration of what I have in mind. It depicts the reality of program design and implementation.

  • Any single program is nested in a larger landscape of multiple programs, chasing fewer objectives (aka intermediate goals), which in turn are chasing fewer grand goals.
  • Any evaluation can focus on any swath through the program / objective / goal landscape.
  • Any single program being evaluated is interacting with a myriad of other activities that are taking place.
  • Despite the entangled nature of the landscape, programs tend to be designed and executed as if they live in the middle of the picture, i.e. with a logic familiar to us all – internal operations are specified in simple ways and echelons of outcomes are posited.

There is nothing revelatory in the picture. It depicts a reality that everyone knows. So why does it happen? Because:

  • Windows of opportunity for getting programs approved are narrow because budget cycles are yearly, and elections take place every few years. So time is limited for planning and coordinating across organizational boundaries.
  • Competition for resources is fierce. Pallid claims of success will not be rewarded.
  • Resource owners have narrow interests. For example, how much do Congressional committees on transportation care about education or health?
  • Organizational structures limit access to diverse expertise. Someone working on pilot fatigue in aviation would not have easy access to sleep apnea experts in the CDC. Of course these experts may know each other personally from professional meetings and they may be willing to do each other favors. But active, time consuming collaboration? Not so much.
  • As more and more organizational boundaries are crossed, opportunities for personal contact become more and more limited, joint funding becomes difficult, and coordination mechanisms become more turgid, more difficult to manage.
  • Coordination requires overhead costs. The more elaborate the coordination, the greater the costs.

How Can Evaluators Apply Complexity in a way that will Help Program Designers?
I did not present the analysis above as a lament. In fact I see the situation in a very positive light. Stovepipes in organizations are wonderfully adaptive structures for bringing into close proximity the expertise, informal understandings, resources, and reward systems needed for effective action. Coordination across a wide swath of a bureaucracy requires central control, and we all know that complex systems are allergic to central control. I’m in favor of budgeting cycles because without them there would be no course correction when action went badly off track. I’m in favor of elections even if (or maybe especially because of) the changes in policy that they bring about. I’d rather have inefficiency and unpredictability in policy than the coherence that comes from authoritarian control.

I have to admit that in my daily technocratic work as an evaluator, my opinion is not quite so enlightened. I hate it because I know that if I evaluate programs based on the program theory of my customers, I will be using an incomplete program theory, which will result in an incomplete evaluation design, which will produce less than optimal insight for the people who are paying me to do a good job.

And to increase my anguish, I do not think it would be right to try to push my customers into applying complexity in their program design efforts. Why my reluctance? Because I see myself as being in a very conservative business, and I think it would be unethical to see my job as trying to get people to do what they cannot do. In almost all the evaluations I have ever done, the work has begun with a civil servant coming to me and saying something like:

“In the interest of the public good I have invested a considerable amount of political capital, tax dollars, and personal credibility in program X. In doing so I have incurred opportunity costs. I made this investment because I think it is the right thing to do, and because the context in which I am embedded has allowed me to make this choice among the constrained set of choices that I have. I want to know if I did a good job and if not, how to make it better. But, better has to be within that narrow range of improvement that is possible in the context I am working in.”

Or, in another variant of my conversation, someone may say to me:

“I may know the world is complex, but if I am to achieve any good at all, I have to act as if it is simple. I have to act as if single solution prescriptions will solve a problem. I have to act as if something that has been shown to work in one context can be assumed to work in another. And in any case, I will pay you to do this, but I won’t pay you to do anything else.”

Actually, the situation is less stark than I portray it. Sometimes I do have influence over program design, and sometimes I can inject some complexity thinking into the conversation. But not much, and not often. How responsible would I be if my response to these requests was: “Well, you got it all wrong; you should be acting in ways that I know you cannot”?

So here I am. In Part 1 I explained my difficulties in squaring what I know about complexity with how complexity is applied in evaluation. In Part 2 I explained why it is so difficult for program designers to incorporate complexity thinking in their design activities. In both parts I tried to make the point that good evaluation must recognize complex behavior. How to do that?

My answer goes back to the fuzzy distinction I drew in Part 1 between “complex system” and “complex behavior”. One reason the distinction is fuzzy is that complex systems exhibit complex behavior. It may not be possible to get program designers to design their programs as complex systems. But it is possible to do evaluations of programs that: 1) recognize complex behavior, and 2) produce knowledge that program designers will be able to use to good practical advantage. Read on. It’s the subject of Part 3.



9 thoughts on “Drawing on Complexity to do Hands-on Evaluation (Part 2) – Complexity in Program Operation, Simplicity in Program Design”

  1. Part Two of Jonny’s blog traverses the puzzle: why do smart people make dumb decisions? The answer, according to Jonny, is that it’s because they are only dumb to you. It’s related to the debates around whether evaluators should or should not make recommendations.

    But the post interests me for other reasons, because I think it highlights some significant issues within the complexity debates, that Jonny doesn’t touch on.

    I think many evaluators would acknowledge that while the activities and resources associated with programs exist in observable form, the program itself is a human construct. It only exists in the minds of certain players in a situation who chose to draw a boundary around certain activities and resources and give it a name. It’s for that reason I tend to chuckle when people accuse complexity and systems stuff of being ‘too conceptual’ when actually ‘program evaluation’ is based largely on a concept … someone’s idea of how the world might work and how resources and activities might be channeled to address some aspect of that world.

    But most program evaluation in practice treats a program as if it were this incontestable, certain, and homogeneously agreed reality.

    Jonny’s article gets really close to getting to grips with this, but his attempt at resolution is constrained by a particular framing. It’s what I (unfairly and somewhat inaccurately) call the American Systems Approach. This approach tends to treat systems as if they were real entities that (in CAS) demonstrate certain features that can be observed and managed. In contrast, what I call (again unfairly and inaccurately, since the origins are partly American) the British Systems Approach regards systems as essentially conceptual, theoretical notions of how the world might work, or could work, but useful because they allow us to pose questions of real-life situations that display certain behavioural characteristics, leading to some form of resolution. Thus you take a situation (ontology), create a systemic understanding of what is or what might be (epistemology), and then use both to create some form of practical resolution (praxis). The current fashion of addressing so-called ‘wicked problems’ is primarily rooted in this kind of approach, although it often doesn’t formally use complexity or systems concepts.

    Thus the CAS approach may not be the best way of addressing the dilemma Jonny describes in his post; rather, drawing on some of the ‘wicked problem’, soft systems, critical systems, and human system dynamics ideas might be more appropriate. But I’m just declaring my biases here.
