My friend and colleague and partner in crime Sanjeev Sridharan sent a query to a few people asking for reaction to various ideas he has been pondering. I thought it might be a good idea to post my response here.
Sometimes governments invest large amounts of money in grand ideas — ideas that are often not based on evidence or detailed plans. A kind version of this story is that the “innovation” lies in taking risks based primarily on the somewhat vague promises of the original idea. In all likelihood the set of interventions that connect the activities to the long-term outcomes is going to be highly complex, even though the language of accountability is framed as though the intervention is either simple or complicated.
- I talk about part of this issue in Chapter 3 of my book, in a section titled “When is the probability of surprise high?”. I won’t go into details here, but I argue that given economic, political, social, and human capital realities, governments have little choice but to implement unidimensional solutions to multidimensional problems.
It’s the rational choice because the alternative is nothing. The people establishing these programs know better, but they cannot act on that knowledge.
- As for taking risks based on vague promises: I’m in favor of it. The truth is that most important decisions get made that way. Of course I’m in favor of drawing on lots of experience and knowledge to make the most informed choice possible, but the idea of some kind of direct relationship between data and action is illusory for most important decisions. Should the U.S. pass Obama’s health care reform? Should NATO implement a no-fly zone over Libya? Should a university hire professor X or Y? Should Apple market the iPod, or Ford the Edsel? Should Head Start be implemented? (Remember, the data drawn upon to justify Head Start’s establishment came from a pretty small study.) And so on and so forth. There may be lots of good research and theory to help inform these decisions, but given the uncertainties of the world, there is more “taking risks based on vague promises” than my narrow, overly rational mind is comfortable with.
- If I’m right, the value of evaluation is downgraded to helping with incremental improvement, sort of like a continuous process improvement (CPI) situation. The feedback latency can be long or short. Short: I am implementing a new reading program; is the teacher training component working, or do I have to revise it before we scale up to the whole school? Long: NIH has spent years funding multiple studies on getting evidence-based practice implemented. Over time each study provides feedback for the next round of efforts. But in both cases we are talking about incremental improvements. The big changes are a leap of faith, with, hopefully, as much science as possible mixed with the faith.
How does the M&E system differ between intervention systems that are complicated vs. complex?
- This is yet another question I don’t have a very good answer for, but I do deal with something related in Chapter 5 of my book, titled “Shifting from Advance Planning to Early Detection”. There I discuss extending M&E to scope for unanticipated change with respect to: 1) using the data to revise logic models, and 2) scoping for changes in the program’s environment. I don’t make any distinctions between complicated and complex, but the ideas might be useful for what you are cooking up.
Equally, I have a concern that the language of uncertainty and emergence can distract the implementers of this program from thinking more deeply about the mechanisms (non-linear or otherwise) that are necessary to impact long-term outcomes.
- How right you are! I think people are becoming besotted with “complexity”. They are seduced by the sexiness of the exotic ideas to the detriment of using what is known to do some good.
- I do think there are some ideas in complexity that may be duplicated by other frameworks, but where casting them in terms of complexity would be useful. One of my favorites is the notion of fitness landscapes of different shapes. I like to think of programs as organisms evolving on fitness landscapes, complete with competition and cooperation among the various organisms. Sharply peaked landscapes mean small change can result in high payoff or catastrophe, in which case incremental improvement can be risky. With the landscape idea I can also think in terms of local optima, co-evolution, shifting topographies, and the like. But this is a personal decision for me. Could I articulate any one of these notions in other terms, e.g. error variance in environmental change? Of course I could. But I find the fitness landscape framework useful because it’s a neat way of encompassing a lot of different aspects of how a program or an organization functions.
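To make the landscape intuition concrete, here is a toy sketch, entirely my own invention (not from the book): a “program” making small incremental improvements by hill-climbing. On a smooth, single-peaked landscape, incremental change reliably reaches the best outcome; on a sharply peaked one, the same strategy gets stuck on whichever local peak is nearest, so where you start determines where you end up. The function shapes and parameters are arbitrary illustrations.

```python
import math
import random

def smooth(x):
    """A landscape with one broad peak at x = 0.5."""
    return 1 - (x - 0.5) ** 2

def rugged(x):
    """A sharply peaked landscape: many local optima on a gentle slope."""
    return 0.3 * x + 0.7 * abs(math.sin(12 * x))

def hill_climb(f, x0, step=0.01, iters=2000, seed=0):
    """Incremental improvement: accept a small random move only if it helps."""
    rng = random.Random(seed)
    x = x0
    for _ in range(iters):
        candidate = min(1.0, max(0.0, x + rng.uniform(-step, step)))
        if f(candidate) > f(x):
            x = candidate
    return x
```

On `smooth`, climbers from any starting point converge near 0.5; on `rugged`, climbers starting at 0.05 and 0.95 end up on different local peaks, far apart. That is the path dependence and local-optimum trap the landscape metaphor is pointing at.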
- There are also some constructs in CAS that are useful and not duplicated in other frameworks. My favorite candidate is the notion of “edge of chaos”, the region where an entity is adaptive to its environment while at the same time vulnerable. (Of course I have no idea how to use this idea in evaluation except in the most general heuristic fashion, but that’s another story.)
- Another concept is the power law distribution. (I know this is not the exclusive province of CAS, but it does play a big part in discussions of complex behavior.) I think the idea of a steady rate of occurrences over time, with a power law distribution of the size of those occurrences, is a very useful way to think about how frequently programs will have a noticeable effect, how efficiently they will function, and so on. I see it as an aspect of program theory when there are multiple efforts going on in a system to solve the same problem.
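A small simulation shows what this view of program effects implies. The setup is my own illustration, with arbitrary parameters: one occurrence per period (a steady rate over time), with the size of each occurrence drawn from a power law (Pareto) distribution. Most effects turn out tiny, while a handful of occurrences carry most of the total impact, which is exactly why a program can be working as theorized and still only rarely show a noticeable effect.

```python
import random

def simulate_program_effects(n_periods=10_000, alpha=1.5, seed=7):
    """One occurrence per period; each occurrence's size is drawn from
    a Pareto (power-law) distribution with shape parameter alpha."""
    rng = random.Random(seed)
    return [rng.paretovariate(alpha) for _ in range(n_periods)]

sizes = sorted(simulate_program_effects(), reverse=True)

# A small fraction of occurrences carries most of the total effect,
# and the typical (median) occurrence is tiny by comparison.
top_share = sum(sizes[:100]) / sum(sizes)   # share held by the top 1%
median_size = sizes[len(sizes) // 2]
```

With a heavy tail like this, the top 1% of occurrences accounts for a large share of the cumulative effect, and the largest single occurrence dwarfs the median one.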
Any thoughts and examples of differences in accountability systems between complicated and complex would be most welcome.
- Accountability: I have been thinking about an analogue to this problem with respect to condition-based maintenance in big machines – jet engines, locomotives, etc. Accident prevention is also relevant – normal accidents (Perrow’s stuff), multiple paths to the same accident, known historical causal paths but no future prediction, etc. In all of these domains there is a tendency to look at the condition of parts of the system (which is necessary, of course) as a way of understanding the overall system. That’s what accountability is usually all about, at least in its M&E sense. As I see it, the problem gets worse the more granular the observations. To continue the previous example, do I want to determine whether the teachers who went through the training can teach the new curriculum, or do I want to look at the quality of the workshop instructors, the instructional design of the materials, attendance records, and the like? I’d be comfortable with the less granular level, but not with a program theory that articulates a specific path by which the teachers are brought to a level of competence. (Let’s leave aside the question of what “granular” means with respect to system boundaries, and take it all intuitively. Otherwise we open up a huge can of worms.) I talk about this stuff in my book and my logic model workshop. There are implications for program theory (pegging the theory to what we actually know) and methodology (what can we measure, when, etc.).
- Complicated and Complex: I don’t know of any way to look at a system and know if it is complex until it displays complex system behavior. Suppose you were looking at a complex system that was wallowing deep in a steep attractor. The system would be incredibly stable and yet be complex. Of course we know the characteristics of CAS, e.g. non-linear interactions, multiple feedback loops, path dependence, etc. But all those could be operating to form a system that was so stable that you could whack it over the head with data forever, and it would never change. That’s why I don’t find the “complicated / complex” distinction so useful. And I can easily imagine an unstable system that was not complex. All you need is one critical path element that could break easily, and there would be lots of instability with no complexity.
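The classic toy example of this point is the logistic map, which I offer here as my own illustration rather than anything from the original query: the very same non-linear feedback rule can sit in a deep attractor and look utterly stable, or display wildly varying (chaotic) behavior, depending on a single parameter. Watching the stable version from outside, you would never guess the system was capable of complex behavior.

```python
def logistic_trajectory(r, x0=0.2, n=200, discard=100):
    """Iterate the logistic map x -> r * x * (1 - x), discarding an
    initial transient so we observe only the long-run behavior."""
    x = x0
    for _ in range(discard):
        x = r * x * (1 - x)
    trajectory = []
    for _ in range(n):
        x = r * x * (1 - x)
        trajectory.append(x)
    return trajectory

def spread(trajectory):
    """How much the system wanders: range of observed values."""
    return max(trajectory) - min(trajectory)

stable = logistic_trajectory(r=2.5)   # settles into a fixed point
chaotic = logistic_trajectory(r=3.9)  # same rule, never settles
```

The `r=2.5` system parks at a fixed point (spread effectively zero), while the `r=3.9` system ranges over most of the unit interval, even though the governing rule is identical. Stability of observed behavior tells you nothing about whether the underlying system is complex.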