My last blog post dealt with why evaluators should focus on complex behavior as opposed to complex systems. Bob Williams commented that the post made a lot of sense, but that it conveyed the impression that evaluators do not have to worry about complexity theory. Evaluators do need to be concerned with theory, and Bob’s comment got me to begin to crystallize some notions that have been marinating in the back of my brain for some time.
My Starting Point
Recently I have been pounding on the idea that a switch in focus from complex systems to the behavior of complex systems would do a lot to further evaluators’ ability to make practical, operational decisions about program theory, metrics, and methodology. And after all, that’s what it’s all about. We (I, at least) get hired when someone says something like this:
“Jonny, I implemented a program. I don’t know if I did a good job implementing it, and I don’t know what the program is accomplishing. I will pay you money to find out so I can do a better job. But I don’t want to pay you to tell me what I cannot do, even if doing it would be a good idea. I want to pay you to tell me what I can do to improve the program.”
That’s about as practical as it gets, and that is the framework I like to work in. I don’t know how to help a customer by invoking the idea of a “complex system” when it comes to models, metrics, and methodologies. But I do know what to do about complex behavior.
As an example, suppose a customer told me that “growth patterns” and “robustness” were important concepts in the program’s outcomes. If I knew that, I would immediately think of fractals as an important concept. And unlike the idea of a complex system, fractals are something I can act on. I can measure what network patterns arise among the elements that the program is supposed to help. I can build a program theory that considers trade-offs between efficiency and effectiveness. And so on.
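To make that concrete, here is a minimal sketch of what “measuring network patterns” might look like in practice. It assumes Python with the networkx and numpy libraries, and it simulates the network rather than loading real program data; a heavy-tailed degree distribution is one rough signature of the fractal-like, self-similar growth patterns mentioned above.

```python
# A minimal sketch, assuming Python with networkx and numpy installed.
# The "program network" is simulated; in practice it would come from data
# on connections among program participants or sites (hypothetical).
import networkx as nx
import numpy as np

# Stand-in for an observed network among the elements a program serves
G = nx.barabasi_albert_graph(n=500, m=2, seed=42)

# Degree distribution: heavy-tailed (roughly power-law) degrees are one
# signature of fractal-like, self-similar growth in a network
degrees = np.array([d for _, d in G.degree()])
values, counts = np.unique(degrees, return_counts=True)

# Crude log-log slope as a rough scaling exponent; a real analysis would
# use a proper power-law fit with goodness-of-fit tests
slope, intercept = np.polyfit(np.log(values), np.log(counts), 1)
print(f"Approximate scaling exponent: {slope:.2f}")
```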
The result of this thinking has been a laundry list of complex system behaviors that I have found useful in evaluation. I like the list, but when all is said and done, it’s just a list. It looks as if the items are independent. They are not, but the reasons why are not obvious. Also, it looks as if the items have no intellectual foundations, but they do. I think what Bob is reacting to is just this: the seeming simplicity of a list of constructs that seem unmoored from what is known about complexity theory. That needs to be fixed. I have been thinking about this for a while now. This blog post is my initial effort to articulate what I have been thinking.
How much complexity theory do evaluators need to know?
It’s one thing to say that evaluators need to know something about complexity theory. It’s quite another to say what they should know, or how much. Here is an example from statistics. Evaluators use statistical tests all the time, and more often than not, do so in a useful and meaningful way. But how much do they need to know about the general linear model, true and error scores, the nature of normal distributions, sensitivity to various assumptions, and so on? The answer to this question is: “Probably more than they do.” That’s a fair judgment, but the follow-on inquiry is tough: “How much is enough?” I have no idea what the answer is to this question. And so it is with complexity theory. Evaluators would do well to know more about it, rather than rely on Jonny Morell’s favorite list of complex behaviors that he uses in his evaluation work. But what do they need to know, and how much? What follows is a framework that I think might help to formulate an answer.
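As a small illustration of the “sensitivity to various assumptions” point, here is a minimal sketch, assuming Python with numpy and scipy (the data are simulated, not from any evaluation): the same t-test that behaves well on normal data gives unreliable results on heavy-tailed data, where a rank-based test is the safer choice.

```python
# A minimal sketch, assuming Python with numpy and scipy installed.
# All data below are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two pairs of samples drawn from identical populations:
# one pair normal, one pair heavy-tailed (Cauchy)
normal_a = rng.normal(loc=0, scale=1, size=30)
normal_b = rng.normal(loc=0, scale=1, size=30)
heavy_a = rng.standard_cauchy(size=30)
heavy_b = rng.standard_cauchy(size=30)

# The t-test is well behaved when its normality assumption holds...
print(stats.ttest_ind(normal_a, normal_b))

# ...but its p-values are unreliable for heavy-tailed data, where a
# rank-based test such as Mann-Whitney is the safer choice
print(stats.ttest_ind(heavy_a, heavy_b))
print(stats.mannwhitneyu(heavy_a, heavy_b))
```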
Framework for unearthing what evaluators need to know about complexity
I have not totally ignored the “list” problem in what I have been presenting to the evaluation community. I briefly touched on the matter in my AEA Coffee Break sessions when I talked about themes in complexity science. By “themes” I mean concepts that crop up frequently in many discussions of complexity, in many corners of the complexity research universe. These are the concepts that seem to pull the disjointed study of complexity together, and I think they can be used to map my list of complex behaviors onto complexity theory. What I have in mind is a table.
The columns are the themes in complexity; the rows are the complex behaviors.

| Complex Behavior | Predictability | Feedback loops | Patterns of change | Evolution | Where change comes from |
| --- | --- | --- | --- | --- | --- |
| Scaling | | | | | |
| Network effects | | | | | |
| Growth patterns | | | | | |
| Realistic timeframes | | | | | |
| Discontinuous change | | | | | |
| Unpredictable outcomes | | | | | |
| Unpredictable outcome chains | | | | | |
| Consequence of small changes | | | | | |
| Feedback loops among outcomes | | | | | |
| Inequitable distribution of benefits | | | | | |
| Joint optimization of uncorrelated outcomes | | | | | |
I see this table as a rough draft because it has problems. 1) The themes are probably not right. Some might best be combined, and others may need to be added. But the set is OK for now to convey a sense of what I have in mind. 2) I feel better about the list of complex behaviors, but it still needs work.
There are a few ways to work with this table. First, each of the column headings can be understood on its own terms, drawing on research and theory coming from many different sources. Second, it is possible to work along the rows, looking at how a particular complex behavior relates to a particular theme. Third, one could work down the columns to see how a particular theme relates to the complex behaviors that are of interest to evaluators.
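For readers who like to see structure as code, here is a minimal sketch of those three moves, assuming Python with pandas; the behaviors, themes, and the one filled-in cell are placeholders of my own, not settled content.

```python
# A minimal sketch of the behavior-by-theme table, assuming Python with pandas.
# Behaviors, themes, and the sample cell entry are placeholders only.
import pandas as pd

behaviors = ["Scaling", "Network effects", "Growth patterns"]
themes = ["Predictability", "Feedback loops", "Patterns of change"]

# Start with empty cells; each cell would eventually say how a behavior
# relates to a theme
table = pd.DataFrame("", index=behaviors, columns=themes)
table.loc["Scaling", "Predictability"] = "placeholder: how scaling limits prediction"

# Work along a row: one complex behavior across all themes
print(table.loc["Scaling"])

# Work down a column: one theme across all complex behaviors
print(table["Predictability"])
```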
My plan for the next few months is to ponder these matters. I expect progress because the bicycle riding season is upon us, and I do some of my best thinking pedaling through farm country.