There is an interesting discussion going on in the LinkedIn discussion group of the European Evaluation Society with respect to a question someone asked: How do linear models address the complexity in which we work? I can’t help but weigh in. I also placed a link to this blog post on the EES discussion thread. My thoughts on this topic run in two directions.

1) Putting a lot of stuff in a model, and
2) What does it mean to “address complexity”?

Putting a Lot of Stuff in a Model

I am a big fan of information density. The more information that can be juxtaposed, the greater the amount of meaning that can be conveyed. The countervailing force to this inclination is that I’m also a big fan of information being readable. My solution is to think of rendering a model as an exercise in the joint optimization of two goals:

1) Information density, and
2) Readability.

To achieve this joint optimization, I try to pay as much attention as I possibly can to the principles of good graphic design. The better the graphic design, the more information can be included while the model still remains accessible.

There is a lot of very good writing these days in the field of Evaluation as to how to visually display information, but my advice for success is simple. Read the New York Times and the Wall Street Journal. Whatever they do with their graphics, do that.

Addressing Complexity with Linear Models

In one sense the answer is simple. If a model is linear, then it is not complex. But that’s because the question is posed colloquially, as I or anyone else would pose it in normal conversation. To get a good answer the question has to be posed more precisely, for example: “We write models in two dimensions, mostly with boxes and arrows. Given this limitation in depiction, how do we express complexity?”

My answer, of course, is “it depends”. From the point of view of drawing models, I think in terms of three different aspects of complexity. (“From the point of view of drawing models” is a critical phrase. These three categories are by no means the best way to think of complexity when it comes to what really matters – program theory, methodology, and metrics.)

1)   “Complex” as “elaborate”
2)   Diversity of content
3)   Sensitive dependence on initial conditions

“Complex” means “elaborate”.

What I mean here is that a complex model has lots of stuff in it – many different elements, many different relationships among elements, etc. The kind of thing that, without good graphic design, would look like a mess. The most apt word I can use for models like this is “complicated”. Alas, “complicated” also shows up in the “simple – complicated – complex” triumvirate that is so popular in our field, and that I reject absolutely. So I’ll use the term “elaborate” instead. How to depict a lot of stuff and a lot of relationships? As I said above, use principles of good graphic design, and think of model drawing as an exercise in joint optimization of two goals – information density and readability. (For an explanation of my aversion to the “simple – complicated – complex” formulation, see: Drawing on Complexity to do Hands-on Evaluation (Part 1) – Complexity in Evaluation and in Studies in Complexity.)

Diversity of Content

This is a subset of the “complex/elaborate” category, but it’s important enough to deserve its own section. Here I am not talking about a picture that has lots of the familiar elements we usually see, i.e. boxes and arrows. Rather, it has more diverse elements. This is important because we usually know a lot more about our programs than we depict in our models. Here are some examples.

Probabilities: It would be refreshing to see a model that had a visual indication that some program activity, or some outcome, might happen. I’m not asking for exact probabilities, just an acknowledgement of uncertainty. We have all worked with programs that could apply either previous research or expert opinion to make such a judgment. Why not capture that knowledge in our models?
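To make this concrete, here is a minimal sketch (element names and probabilities are invented) of what such an annotation could look like in practice. It emits Graphviz DOT text in which arrow labels stay deliberately coarse – “likely”, “uncertain” – while line weight tracks the judged probability:

```python
# A minimal sketch, with hypothetical element names and probabilities,
# of annotating a model's arrows with rough likelihoods. Coarse labels
# acknowledge uncertainty without pretending to exact numbers; arrow
# thickness (penwidth) tracks the probability.

links = [
    ("Training", "Skill gain", 0.8),          # prior research: likely
    ("Skill gain", "Behavior change", 0.5),   # expert opinion: coin flip
]

def band(p):
    return "likely" if p >= 0.7 else "uncertain" if p >= 0.4 else "unlikely"

lines = ["digraph model {", "  rankdir=LR;"]
for src, dst, p in links:
    lines.append(
        f'  "{src}" -> "{dst}" [label="{band(p)}", penwidth={1 + 2 * p:.1f}];'
    )
lines.append("}")
print("\n".join(lines))   # pipe to `dot -Tpng -o model.png` to render
```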

Alternate causal paths: I have never seen a model that says: “Here is my desired outcome, and here are the two different ways in which my program might bring about that outcome”.  There have to be many programs where such a model would be a better depiction of reality. (Not to mention the delicious evaluation designs that would be needed to observe and test the different paths.)
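As an illustration, here is a toy sketch of such a model (path names are hypothetical). Enumerating the paths makes explicit what an evaluation design would need to observe in order to tell the two routes apart:

```python
# Hypothetical sketch: one desired outcome, two candidate causal paths.

model = {
    "Program": ["Peer support", "Direct instruction"],
    "Peer support": ["Outcome"],
    "Direct instruction": ["Outcome"],
    "Outcome": [],
}

def paths(node, target, trail=()):
    """Yield every path from node to target as a tuple of element names."""
    trail = trail + (node,)
    if node == target:
        yield trail
    for nxt in model[node]:
        yield from paths(nxt, target, trail)

for p in paths("Program", "Outcome"):
    print(" -> ".join(p))
# Program -> Peer support -> Outcome
# Program -> Direct instruction -> Outcome
```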

Types of relationships: In the LinkedIn discussion Paul Duigan gives the examples of threshold and cutoff-point relationships in addition to garden-variety linear relationships. Those and many others are possible.
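A small sketch of what three such relationship shapes might look like as response functions (the parameters are invented for illustration): linear, threshold (nothing happens until a dose is reached), and cutoff (gains stop past a ceiling):

```python
# Toy response functions for three relationship shapes an arrow might
# stand for; slopes, thresholds, and caps are illustrative only.

def linear(x, slope=1.0):
    return slope * x

def threshold(x, at=0.5, slope=2.0):
    # no response at all until the input crosses the threshold
    return 0.0 if x < at else slope * (x - at)

def cutoff(x, cap=0.6, slope=1.0):
    # response grows linearly, then flatlines past the cutoff point
    return slope * min(x, cap)

for x in (0.2, 0.5, 0.8):
    print(f"x={x:.1f}  linear={linear(x):.2f}  "
          f"threshold={threshold(x):.2f}  cutoff={cutoff(x):.2f}")
```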

Distributions: I think there are heavy policy implications in whether a successful program’s outcomes are symmetrically distributed or long-tailed. Think about the difference between a new math curriculum and a high-tech startup incubator. Success in the first case means some mean improvement, and a (more or less) normal distribution of success around that mean. In the incubator case, success means a few very big successes, a few moderate ones, and a very large number of minimally successful ones. It seems to me that the implications of success for making policy would be very different in these two scenarios. So why not express that in our models?
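A toy simulation of the two success profiles (the numbers are invented) shows why the distinction matters: in the long-tailed case the mean sits far above the median, so most of the “success” lives in a handful of outliers – a very different policy story from a modest, evenly shared improvement:

```python
# Contrast a symmetric success profile (curriculum) with a long-tailed
# one (incubator). All figures are illustrative, not real data.
import random

random.seed(0)
curriculum = [random.gauss(5.0, 1.5) for _ in range(1000)]          # test-score gains
incubator = [random.lognormvariate(0.0, 2.0) for _ in range(1000)]  # return multiples

for name, xs in (("curriculum", curriculum), ("incubator", incubator)):
    xs = sorted(xs)
    mean = sum(xs) / len(xs)
    median = xs[len(xs) // 2]
    print(f"{name}: mean={mean:.1f}  median={median:.1f}  max={xs[-1]:.1f}")
# The incubator's mean far exceeds its median: a few huge wins carry
# the average, while the typical outcome is close to zero.
```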

Critical vs. non-critical: All models I have ever looked at convey the impression that every box and arrow is equally important. Surely that cannot be true. A program may well succeed even if some particular element or process is missing or weak, but will surely fail if some other element or process is missing or weak.
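One way to make “critical” precise, sketched here with a hypothetical model: an element is critical if every causal path to the outcome runs through it, so removing it disconnects the model, while removing a non-critical element leaves at least one path intact:

```python
# Hypothetical model: "Service delivery" is critical (all paths run
# through it); "Outreach" is not (the training path survives its loss).

model = {
    "Funding": ["Staff training", "Outreach"],
    "Staff training": ["Service delivery"],
    "Outreach": ["Service delivery"],
    "Service delivery": ["Outcome"],
    "Outcome": [],
}

def reaches(graph, start, target, skip=None):
    """Can target be reached from start if element `skip` is removed?"""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node in seen or node == skip:
            continue
        seen.add(node)
        stack.extend(graph.get(node, []))
    return False

for element in ("Outreach", "Service delivery"):
    ok = reaches(model, "Funding", "Outcome", skip=element)
    print(f"without {element!r}: outcome "
          f"{'still reachable' if ok else 'unreachable'}")
```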

Undesirable (from someone’s point of view) outcomes: We claim to want to serve the information needs of stakeholders, but it’s not really true. What we serve is the best interests of the stakeholders we care about. Suppose that in order for a program to be successful, it also had to produce some undesirable outcome. What are the odds that our customer would let us include that outcome? Or, how many of us have ever taken Donald Campbell’s advice and brought program opponents into the evaluation process?

And much else besides that I cannot think of right now.

It might be an exaggeration, but not much of one, to admit that I never did an evaluation that included anything in the above list. I’m not proud of myself about this, but it is true. Everyone should do as I say, not as I do, including me.

Back to my point about depicting complexity. If we added the elements above, our models would be very much more elaborate than they are now, and they would better depict some of the complexity in which we work. So why not put them in? Back to the same advice as before: jointly optimize information density and readability, through good graphic design.

Sensitive dependence on initial conditions

From the point of view of depicting a model, this is the root of all evil. Why does sensitive dependence make it difficult to render a model? Let me count the ways.

1)   If small changes in a single model element can have outsize effects, how can we depict causal relationships?

2)   To make matters worse, we can never fully specify a model. That may be OK if we know we have included most of the important elements. But if elements we don’t even think of as being important can affect the model in profound ways, what does that do to drawing boxes and arrows?

3)   There may be multiple causal paths through a model, all leading to the same outcome. “Multiple” may mean a few, or it may mean a very large number. (“All leading to the same outcome” is an important phrase. Complex systems can be exceedingly stable and exceedingly predictable in their outcomes. For an explanation as to why, see: Complexity is about stability and predictability.)

4)   To further complicate matters, it is entirely possible that each time a program operates, success comes through a different causal path. (A toy sketch of points 3 and 4 appears after this list.)

5)   Depending on how a program operates and how its environment fluctuates, outcomes may change. The greater the distance (in terms of time and generations of outcomes) between program and outcome, the less certainty there is as to what will happen.
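Here is the promised toy sketch of points 3 and 4 (path and outcome names are invented): each program cycle may fire a different causal path, yet every run ends at the same outcome:

```python
# Toy simulation: many causal paths, one stable outcome.
import random
from collections import Counter

random.seed(1)
paths_to_outcome = [
    ("mentoring", "confidence", "completion"),
    ("tutoring", "skills", "completion"),
    ("peer group", "belonging", "completion"),
]

# ten program cycles, each taking whichever path happens to fire
taken = Counter(random.choice(paths_to_outcome) for _ in range(10))
for path, n in taken.items():
    print(f"{n} cycle(s) via {' -> '.join(path)}")
# Every run ends at "completion": the outcome is stable and predictable
# even though the route to it varies from cycle to cycle.
```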

So, how to construct models in the face of all these program behaviors? Invoke three tactics: 1) look for stability of causal paths, 2) differentiate types of outcomes, and 3) use visuals that acknowledge ignorance.

Stable paths: I would start by taking the position that to acknowledge sensitive dependence does not require abandoning a search for stable causal paths. Such paths do exist, and in fact, they exist in most of the programs I have ever evaluated. These can be represented with the tried and true graphics that we know and love. Do not get seduced into believing that because sensitive dependence is real, it must necessarily override stable paths. Ban the butterfly!

Types of outcomes: The next step is to think in terms of three kinds of outcomes: 1) stated desired outcomes, 2) unstated but knowable outcomes, and 3) unforeseeable outcomes. Stated desired outcomes constitute the overt reason for the program’s existence. These are the outcomes that someone is betting my tax money on, and hence, they are the high-priority targets for evaluation. Unstated knowable outcomes are those that were not planned for, but about which someone could say: “Based on my experience and knowledge of the research literature, I could have told you that would happen.” Unforeseeable outcomes come from complex system behavior, and are truly unpredictable. Outcome type #2 should, but usually does not, appear when we first make a model. Outcome type #3 cannot show up until after an evaluation has begun. But for both #2 and #3, we should add them to the model as data come in. Why not visually differentiate them? That results in a 2-D model that captures complexity. In the spirit of shameless self-promotion, I recommend my book for #s 2 and 3: Evaluation in the Face of Uncertainty: Anticipating Surprise and Responding to the Inevitable.
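A sketch of one way to do that visual differentiation (the outcomes themselves are hypothetical): emit DOT in which stated outcomes are solid boxes, unstated-but-knowable ones dashed, and genuine surprises dotted, added to the diagram as they turn up in the data:

```python
# Node styles per outcome type; outcome names are invented.
STYLES = {"stated": "solid", "knowable": "dashed", "unforeseen": "dotted"}

outcomes = [
    ("Higher test scores", "stated"),              # in the original model
    ("Teacher turnover drops", "knowable"),        # added once data suggested it
    ("Parent volunteering surges", "unforeseen"),  # a genuine surprise
]

lines = ["digraph outcomes {", "  node [shape=box];"]
for name, kind in outcomes:
    lines.append(f'  "{name}" [style={STYLES[kind]}];')
    lines.append(f'  "Program" -> "{name}";')
lines.append("}")
print("\n".join(lines))
```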

Acknowledge ignorance: I’m a big fan of the Kellogg model – a column full of unconnected inputs leading to a column of unconnected activities, etc. This is a very modest view of causation. The logic is: do a bunch of stuff here and a bunch of stuff will happen there. Well, that’s a model that acknowledges sensitive dependence. It says that we can predict that if a bunch of stuff happens here, a bunch of stuff will happen there, but we cannot know which bunch of stuff; we acknowledge that it can be a different bunch each time, and we do not know whether the path is the same from cycle to cycle. That is acknowledging complexity in a 2-D picture.
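A sketch of that modesty in DOT form (the column contents are hypothetical): columns rendered as clusters of unconnected boxes, with one aggregate arrow per column pair and no element-to-element claims:

```python
# Kellogg-style columns as Graphviz clusters; contents are invented.
columns = {
    "Inputs": ["Funding", "Staff", "Curriculum"],
    "Activities": ["Workshops", "Coaching", "Outreach"],
    "Outcomes": ["Skills", "Behavior", "Status"],
}

lines = ["digraph kellogg {", "  rankdir=LR;", "  compound=true;",
         "  node [shape=box];"]
for i, (col, items) in enumerate(columns.items()):
    lines.append(f'  subgraph cluster_{i} {{ label="{col}";')
    for item in items:
        lines.append(f'    "{item}";')
    lines.append("  }")
# One arrow between whole columns, clipped to the cluster borders,
# deliberately silent about which element drives which.
lines.append('  "Funding" -> "Workshops" [ltail=cluster_0, lhead=cluster_1];')
lines.append('  "Workshops" -> "Skills" [ltail=cluster_1, lhead=cluster_2];')
lines.append("}")
print("\n".join(lines))
```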


2 thoughts on “Depicting Complexity in 2-D”

  1. Thanks for drawing my attention to the EES link. On the whole I agree with your assessment. Your second notion of complex as ‘elaborate’ has a substantial history within the systems field. In fact, the Open University used to have a module that looked at different kinds of elaboration. I commonly use two – Rich Picturing (where the idea is to place all the kinds of things and more on the diagram) and Influence Diagrams, which are more ‘snapshot’ assessments of what is pushing whom where (a kind of visual version of Force Field Analysis). The difference is that they call them maps rather than models…. for fairly obvious reasons.

    I’m a big non-fan of the Kellogg model. It’s what I call an example of a dose model: it imagines a whole bunch of activities at one end that somehow flow mysteriously, without any further help from activities, to some final outcome. That’s why in Australia and New Zealand we tend to use ‘outcome’ or ‘results chain’ models that tend to require actions (and various other factors such as assumptions and contextual factors) between the blobs. I sometimes use the Snyder model approach to modelling when working with groups, which is really a combination of the classic Kellogg model and the results chain model, with bits of network theory thrown in. In fact, Rick Davies has developed a means of using network maps as logic models.

  2. As for “dose”, there is nothing wrong with that. It’s a common enough research design in lots of fields. It can be a good way of showing causation. You are right that it does not reveal the mechanism of action. I’d say that is sometimes important to know and sometimes not. It depends on whether knowing the “dose response” function is adequate for what you want to do.

    As for the Kellogg model, I think it differs from “dose response” because you never really know what you are delivering a “dose” of, even if the unknown dose on each cycle keeps changing but producing the same result. That’s what I like about it so much. It recognizes multiple, unknowable causal paths. Essentially, it recognizes ignorance, of which, as many people would readily tell you, I have a lot.
