A Model for Evaluation of Transformation to a Green Energy Future

I just got back from the IDEAS global assembly, which carried the theme: Evaluation for Transformative Change: Bringing experiences of the Global South to the Global North. The trip prompted me to think about how complexity can be applied to evaluating green energy transformation efforts. I have a longish document (~2000 words) that goes into detail, but here is my quick overview.

Because transformation is a complex process, any theory of change used to understand or measure it must be steeped in the principles of complexity.

The focus must be on the behavior of complex systems, not on “complex systems”. (Complex systems or complex behavior?)

In colloquial terms, a transformation to reliance on green energy can be thought of as a “new normal”. In complexity terms, a “new normal” connotes an “attractor”, i.e. an equilibrium condition to which the system settles back after perturbations. (Why might it be useful to think of programs and their outcomes in terms of attractors?)
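To make the attractor idea concrete, here is a toy simulation (the model and the numbers are invented for illustration, not drawn from any real program): a system state that is perturbed but keeps getting pulled back toward its equilibrium.

```python
# Toy illustration of attractor behavior (invented model, invented numbers):
# a green-energy share that is repeatedly pulled back toward an equilibrium.
def relax_to_attractor(x0, equilibrium=0.8, rate=0.5, steps=30):
    """Each step closes a fraction of the gap to the equilibrium."""
    x = x0
    for _ in range(steps):
        x += rate * (equilibrium - x)  # the pull of the attractor
    return x

# Perturb the system in either direction; it settles back to the "new normal".
print(round(relax_to_attractor(0.3), 3))   # prints 0.8
print(round(relax_to_attractor(0.95), 3))  # prints 0.8
```

No matter where the system starts, it ends up in the same place; that is what makes an attractor a useful image for a "new normal".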

A definition of a transformation to green energy must specify four measurable elements: 1) geographical boundaries, 2) level of energy use, 3) time frame, and 4) level of precision. For instance: “We know that transformation has happened if in place X, 80% of energy use comes from green sources, and has remained at about that level for five years.”
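A definition like that translates directly into a testable check. Here is a minimal sketch, using the hypothetical 80%-for-five-years values from the example (the tolerance of plus or minus five points is my own added assumption):

```python
def transformation_achieved(yearly_green_share, threshold=0.80,
                            tolerance=0.05, years_required=5):
    """True if the green-energy share has stayed at about the threshold
    level for the most recent `years_required` years."""
    recent = yearly_green_share[-years_required:]
    return (len(recent) == years_required and
            all(abs(share - threshold) <= tolerance for share in recent))

# Place X: the share has hovered near 80% for the last five years.
print(transformation_achieved([0.55, 0.70, 0.79, 0.81, 0.80, 0.78, 0.82]))  # True
# A spike that did not persist does not count as transformation.
print(transformation_achieved([0.60, 0.85, 0.62, 0.58, 0.61, 0.60, 0.59]))  # False
```

The point is not this particular function, but that all four elements of the definition show up as explicit, arguable parameters.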

Whether or not that definition is a good one is an empirical question for evaluators to address. What matters is whether the evaluation can provide guidance as to how to improve efforts at transformation.

Knowing if a condition obtains is different from knowing why a condition obtains. To address the “why”, evaluation must produce a program theory that recognizes three complexity behaviors – attractors, sensitive dependence, and emergence.

Because of sensitive dependence, unambiguous relationships among variables may not continue over time or across contexts. Because of emergence, transformation does not come about as a result of a fixed set of interactions among well-defined elements. Still, sensitive dependence and emergence may produce outcomes that remain within identifiable boundaries, i.e. within an attractor space. If they do, that is akin to “predicting an outcome”. If they do not, that is akin to showing that a program theory is wrong.
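Sensitive dependence and attractor boundaries can both be seen in the classic logistic map, a standard toy model from the complexity literature (the seed values and step counts below are arbitrary choices of mine):

```python
# The classic logistic map, a standard toy model of complex behavior,
# not a model of any actual program.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # perturbed by one part in a million

# Sensitive dependence: the tiny initial difference gets amplified...
print(max(abs(x - y) for x, y in zip(a, b)) > 0.01)  # True
# ...yet both trajectories stay inside the same bounded attractor space.
print(all(0.0 <= x <= 1.0 for x in a + b))           # True
```

Point-by-point prediction fails, but the boundary prediction holds, which is exactly the distinction drawn above.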

Models with many elements and connections cannot be used for prediction, or even, for understanding transformation as a holistic construct. Small parts of a large model, however, can be useful for designing research and for understanding the transformation process.

Six tactics can be used for evaluating progress toward transformation: 1) develop a theory of change (TOC) that recognizes complex behavior, 2) measure each individual factor in the model, 3) consider how much change took place in each element of the model, 4) focus on parts of the model, but not the model as a whole, 5) use computer-based modeling, and 6) employ a multiple-comparative case study design.
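To give a flavor of tactic 5, here is a deliberately tiny computer-based model; every element of it (the agents, the peer-sampling rule, the thresholds) is invented for illustration. It shows how an emergent tipping point in green-energy adoption can arise from simple local interactions:

```python
import random

def simulate_adoption(n=200, seed_fraction=0.1, threshold=0.25,
                      sample_size=10, steps=40, rng_seed=42):
    """Each non-adopting agent polls a random sample of peers and adopts
    green energy once the adoption rate in the sample meets its threshold."""
    rng = random.Random(rng_seed)
    adopted = [i < int(n * seed_fraction) for i in range(n)]
    for _ in range(steps):
        snapshot = adopted[:]  # everyone reacts to the same time step
        for i in range(n):
            if not snapshot[i]:
                peers = rng.sample(range(n), sample_size)
                if sum(snapshot[j] for j in peers) / sample_size >= threshold:
                    adopted[i] = True
    return sum(adopted) / n  # final share of adopters

print(simulate_adoption())
```

Depending on the settings, adoption may stall near the seeded level or cascade toward saturation; probing where that tipping point sits is exactly the kind of question a small model like this can explore.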

As all the analysis takes place, interpret the data with respect to the limitations of models, and the implications of emergence, sensitive dependence, and attractor behavior.

A complexity perspective on a theory of change for long term program effects

Lately I have been spending a lot of time thinking about two subjects: 1) models (program, logic, change, etc.) and 2) complex behavior. (Not complex systems. I don’t like that subject.)

It occurred to me that different models are relevant at different time scales. Most of the models one sees in the evaluation world involve clear outcome chains linking short-, intermediate-, and long-range outcomes. Usually those long-range outcomes are aspirational in two ways. 1) Nobody ever stays around long enough to actually evaluate whether the program in question had any impact. Continue reading “A complexity perspective on a theory of change for long term program effects”

What complexity theory do evaluators need to know?

My last blog post dealt with why evaluators should focus on complex behavior as opposed to complex systems. Bob Williams commented that the post made a lot of sense, but that it conveyed the impression that evaluators do not have to worry about complexity theory. Evaluators do need to be concerned with theory, and Bob’s comment got me to begin to crystallize some notions that have been marinating in the back of my brain for some time.

My Starting Point
Recently I have been pounding on the idea that a switch from complex systems to the behavior of complex systems would do a lot to further the abilities of evaluators to make practical, operational decisions about program theory, metrics, and methodology. And after all, that’s what it’s all about. We (I at least) get hired when someone says to me: Continue reading “What complexity theory do evaluators need to know?”

A simple recipe for improving the odds of sustainability: A systems perspective

I have been to a lot of conferences that had many sessions on ways to assure program sustainability. There is also a lot of really good research literature on this topic. Also, sustainability is a topic that has been front and center in my own work of late.

Analyses and explanations of sustainability inevitably end up with some fairly elaborate discussions about what factors lead to sustainability, how the program is embedded in its context, and so on. I have no doubt that all these treatments of sustainability have a great deal of merit. I take them seriously in my own work. I think everyone should. That said, I have been toying with another, much simpler approach.

Almost every program I have ever evaluated had only one major outcome that it was after. Sure there are cascading outcomes from proximate to distal. (Outcome to waves of impact, if you like that phrasing better.) And of course many programs have many outcomes at all ranks. But in general the proximate outcomes, even if they are many, tend to be highly correlated. So in essence, there is only one.

What this means is that when a program is dropped into a complex system, that program is designed to move the entire system in the direction of attaining that one outcome. We know how systems work. If enough effort is put in, they can in fact be made to optimize a single objective. But we also know that success like that makes the system as a whole dysfunctional in terms of its ability to adapt to environmental change, meet the needs of multiple stakeholders, maintain effective and efficient internal operations, and so on. As I see it, that means that any effort to optimize one outcome will be inherently unstable. No need to look at the details.

My notion is that in order to increase the probability of sustainability, a program should pursue multiple outcomes that are as uncorrelated as possible. The goal should be joint optimization, at the expense of sub-optimizing any of the desired outcomes.

I understand the problems in following my idea. The greater the number of uncorrelated outcomes, the greater the need to coordinate across boundaries, and as I have argued elsewhere in this blog, that is exceedingly difficult. (Why do Policy and Program Planners Assume-away Complexity?)  Also, I am by no means advocating ignoring all that work that has been done on sustainability. Ignoring it is guaranteed to lead to trouble.

Even so, I think the idea I’m proposing has some merit. Look at the outcomes being pursued, and give some thought to how highly correlated they are. What we know about systems tells us that optimization of one outcome may succeed in the short term, but it will not succeed in the long term. Joint optimization of uncorrelated outcomes? That gives us a better fighting chance.



A Complex System Perspective on Program Scale-up and Replication

I’m in the process of working up a presentation for the upcoming conference of the American Evaluation Association: Successful Scale-up Of Promising Pilots: Challenges, Strategies, and Measurement Considerations. (It will be a great panel. You should attend if you can.) This is the abstract for my presentation.

Title: Complex System Behavior as a Lens to Understand Program Change Across Scale, Place, and Time
Abstract: Development programs are bedeviled by the challenge of transferability. Whether from a small scale test to widespread use, or across geography, or over time, programs do not work out as planned. They may have different consequences than we expected. They may have larger or smaller impacts than we hoped for. They may morph into programs we only dimly recognize. They may not be implemented at all. The changes often seem random, and indeed, in some sense they are. But coexisting with the randomness, a complex system perspective shows us the sense, the reason, the rationality in the unexpected changes. By thinking in terms of complex system behavior we can attain a different understanding of what it means to explain, or perhaps, sometimes to predict, the mysteries of transferability. That understanding will help us choose methodologies and interpret data. It will also give us new insight on program theory.

There will only be one slide in this presentation.


Based on this slide I’m developing talking points. I know I’ll have to abbreviate it at the presentation, but I do want a coherent story to work from. A rough draft is below. Comments appreciated. Whack away. Continue reading “A Complex System Perspective on Program Scale-up and Replication”

What is the relationship between path dependence and system stability? With explanation of why I care.

I realized it might help to explain what led me to ask this question in the first place. I submitted a proposal to AEA to talk about how traditional evaluation methods can be used in complex systems. Part of that explanation will have to involve understanding the CAS implications of stability in program impact across time and place. See the end of this post for that proposal.

I’m looking for some sources and opinions to help with a question that has been troubling me lately. I’m struggling with the question of the relationship between path dependence and system stability.

Or maybe I mean the relationship between path dependence and the ability to predict a system’s trajectory. I’m not sure about the best way to phrase the question. In any case, read on to see my confusion.

I’m bumping into a lot of people who believe that systems are unstable/unpredictable because of path dependence. This is one of those notions that seems right but smells wrong to me. It seems too simple, and it does not make sense to me because it implies that if systems are predictable there is no path dependence operating.  That can’t be right, can it? Here is a counter example. Continue reading “What is the relationship between path dependence and system stability? With explanation of why I care.”

Understanding sustainability – systems as a framework, thermodynamics as a metaphor

I have been thinking about the research I have been reading on sustainability, and musing about the possibility of a simple framework to help explain it all. Here is what I came up with. Please beat up on it as you deem fit and proper.

I begin with the age old question: Why is it easier to break something than to build it? The answer is rooted in system behavior. Continue reading “Understanding sustainability – systems as a framework, thermodynamics as a metaphor”

Kim Norris’ comment on Jam’s original Evaltalk post about population ecology and program sustainability

Please also look at publications by W.E. Grant. He and his colleagues apply population ecology concepts to a wide range of issues, from sustainability to applied philosophy. For my doctorate we did exactly what you describe, in particular looking at ways that environmental trends influenced the likelihood of ameliorating results vs. chaotic or downward-spiraling results, depending upon population structure and dynamics. Very applicable work.

Kim Norris knorris1@umd.edu

Chad Green’s comment on Jam’s original Evaltalk post about population ecology and program sustainability

This reminds me of a lava lamp metaphor that I created about a year ago to understand programs more as blobs of motivational energy that lose momentum as they rise from the social architecture, hit a technological layer at the top, cool, fall, and influence newly emergent collective action frames while on their way down to the source (collective efficacy?).

These amorphous blobs of emotional energy get recycled in a continual process, leaving a wake of conceptual and practical technological tools like shells on a beach.

That was then. Lately I’ve taken up an interest in meta-metaphors because none of these so-called shells catches my eye. 🙂

Chad Green

Mary Ann Scheirer’s comment on Jam’s original Evaltalk post about population ecology and program sustainability

Hi Jonny – I’m quite interested in the topic of program sustainability, and would be happy to join a group that you start up, if there is interest. I’m up on most of the literature on program sustainability in the health field, which has been growing recently.

One difference between environmental sustainability, such as that of biological populations, and program sustainability, I think, is that human-directed programs are heavily influenced by the human factors – champions, leadership, professional “norms,” etc. So Continue reading “Mary Ann Scheirer’s comment on Jam’s original Evaltalk post about population ecology and program sustainability”