A friend of mine (Donna Podems) is heading up a project that involves providing a structure for a group of on-the-ground observers so they can apply a systems perspective to understanding what programs are doing and what they are accomplishing. She asked me for a brain dump, which I happily provided. What follows is by no means a systematic approach to looking at programs in terms of systems. It’s just a laundry list of ideas that popped into my head and flowed through my fingers. Below is a somewhat cleaned-up version of what I sent her.
What follows is not a list of independent items. In fact I guarantee there are lots of connections. For instance, “redundancy” and “multiple paths” are not the same thing, but they are related. But time is tight, and I have a Greek meatball recipe to shop for, so let’s assume they are independent.
Redundancy: Sometimes system processes have backups and sometimes not. For instance, imagine an educational system that in addition to teachers, has a high density of parents with lots of education and experience who could step in to do some teaching in a pinch. The backup process may be formalized and ready to kick in, or simply implicit in the situation, ready to be actuated. But systems that have it may respond to shock in very different ways from systems that don’t.
Capacity: Here we have the old standbys. Input, throughput and output rates. And let’s not forget buffers. How many people can get health care screening in a month? How many have to come in the door in a short time before the system chokes? What does a “short time” mean? Etc. and so forth.
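The capacity idea is easy to sketch. Here is a toy Python model of the screening example — the monthly capacity, buffer size, and arrival numbers are all invented for illustration, not anything empirical:

```python
# Toy queue model: a clinic that can screen 100 people per month (throughput),
# with a waiting-list buffer of 50. Anyone beyond capacity + buffer is turned away.
def run_clinic(arrivals, capacity=100, buffer_size=50):
    waiting, screened, turned_away = 0, 0, 0
    for month_arrivals in arrivals:
        waiting += month_arrivals
        limit = capacity + buffer_size
        if waiting > limit:                # the system chokes: excess demand is lost
            turned_away += waiting - limit
            waiting = limit
        served = min(waiting, capacity)    # throughput cap per month
        screened += served
        waiting -= served
    return screened, waiting, turned_away

# Same 600 people, very different results depending on arrival pattern.
steady = run_clinic([100] * 6)             # a steady trickle over six months
burst = run_clinic([600, 0, 0, 0, 0, 0])   # the same total in one burst
print(steady, burst)
```

The steady stream gets everyone screened; the burst overwhelms the buffer and most people are turned away. That is the "how short is a short time" question in code.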
Feedback loops: I believe people see these things lurking everywhere, and the less attention paid to them the better, mostly because nobody knows how to evaluate them. But, sometimes they do matter. Suppose the agricultural extension service does not pay attention to whether its advice is improving crop yields, and so is not changing the technical advice that it provides? That could certainly explain why the service’s effectiveness decreased over time.
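The extension-service story can be sketched as an open-loop versus closed-loop comparison. All the numbers here (the drift rate of field conditions, the penalty per unit of mismatch) are invented:

```python
# Toy model: field conditions drift each season, and advice is effective only
# to the extent that it matches current conditions.
def simulate(seasons=10, with_feedback=True):
    conditions, advice = 0.0, 0.0
    effectiveness = []
    for _ in range(seasons):
        conditions += 1.0          # the environment keeps changing
        if with_feedback:
            advice = conditions    # the service observes yields and updates its advice
        gap = abs(conditions - advice)
        effectiveness.append(max(0.0, 1.0 - 0.1 * gap))
    return effectiveness

open_loop = simulate(with_feedback=False)   # advice never updates
closed_loop = simulate(with_feedback=True)  # advice tracks conditions
print(open_loop[-1], closed_loop[-1])
```

Without the loop, effectiveness decays season by season exactly as in the example; with it, effectiveness holds steady.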
Multiple paths: It is absolutely true that in big multi-part systems, sensitive dependence on initial conditions is operating. This means that even if the outcome is consistent, the exact way the outcome is achieved may differ. What you need to know is the family of configurations that will give you what you want, not just the exact one. Here is a trivially simple example. Let’s say we have a program to increase girls’ participation in school. We posit five important variables: 1) parents’ motivation, 2) social pressure, 3) cost in terms of school fees, 4) criticality for the family of the work girls do when they are not in school, 5) school capacity in terms of rooms, numbers of teachers, etc. My bet is that any logic model for this program would have lots of boxes and arrows. 1:1 relationships, 1:many relationships, and many:1 relationships. But what it would not have is acknowledgement that different configurations of the model might lead to the same outcome. One might not know what the alternate paths through the model are, but I’d bet anyone a bottle of good scotch that there are multiple paths.
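To make the multiple-paths point concrete, here is a deliberately crude sketch: treat the five variables as on/off switches and invent a rule for when girls attend (the rule is mine, for illustration only, not a real model), then count the configurations that produce the outcome:

```python
from itertools import product

# Hypothetical rule built from the five variables in the text. The specific
# logic is invented; the point is only that several distinct configurations
# satisfy it.
def girls_attend(motivation, pressure, fees_low, work_not_critical, capacity):
    return capacity and fees_low and (motivation or pressure) and work_not_critical

# Enumerate all 2^5 = 32 configurations and keep the ones that succeed.
paths = [cfg for cfg in product([False, True], repeat=5) if girls_attend(*cfg)]
print(len(paths))
```

Even this cartoon rule admits three distinct winning configurations. A realistic model with graded variables and interactions among them would admit far more — which is the bottle-of-scotch bet.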
Critical paths/elements: My diatribe above notwithstanding, there may well be some critical paths, or critical elements, in a system. To take the previous example, I can imagine that if there is strong social pressure not to send the girls to school, the girls won’t go no matter what else is going on. One thing a systems view has to do is to identify these critical paths/elements.
Defining boundaries: You definitely cover this, but I think you make it less of a big deal than it really is. The question of what is in and what is out is really important and not so easy to determine. Take my agricultural example. Say you are working with farmers and the agricultural extension service to get farmers to use some new crop rotation method. Do you want to include changes in the transportation infrastructure or the cost of transport as part of the system? Or what about different levels of uncertainty about crop prices for the different crops that might be raised? Well, maybe. I can see all kinds of good reasons for these things being either in or out.
Environmental conditions: Things that are not part of the system can still impact it. Think of the agricultural example. I would certainly want to keep an eye on national legislation that affected the import tariffs on different kinds of equipment I’d need.
Phase shifts (or state changes, I’m not really sure of the difference): It is a characteristic of systems that sometimes they change incrementally and sometimes sudden dramatic change appears. One reason this is important is program theory. Another is methodology. A third is managing expectations. Take the example of a program to help small and medium-size manufacturers (SMEs, in my jargon). A lot of this is incremental. One gets small productivity changes in lots of companies, profits accumulate, and so it goes. Just keep adding up the outcomes of interest (productivity and money), and everyone is happy. But those SMEs probably have dependencies on each other. Also, they may be slowly contributing to the overall economic and quality of life conditions in their community. These are conditions that are ripe for a state change. Networks are like that. No change observable for a long time, and then wham, the world changes.
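The "no change for a long time, and then wham" pattern is easy to reproduce with a toy network model. The sizes and link counts below are arbitrary:

```python
import random

# Add random links between 200 hypothetical SMEs one at a time and track the
# largest connected cluster. For a while nothing much happens; then the largest
# cluster jumps to span most of the network -- the classic percolation-style shift.
def largest_cluster_growth(n=200, links=300, seed=1):
    random.seed(seed)
    parent = list(range(n))          # union-find forest over the firms

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    sizes = []
    for _ in range(links):
        a, b = random.randrange(n), random.randrange(n)
        parent[find(a)] = find(b)    # link two firms (merge their clusters)
        counts = {}
        for i in range(n):
            root = find(i)
            counts[root] = counts.get(root, 0) + 1
        sizes.append(max(counts.values()))
    return sizes

sizes = largest_cluster_growth()
print(sizes[50], sizes[-1])  # small early on, most of the network by the end
```

Nothing in the link-by-link mechanics changes; the dramatic shift comes purely from accumulation, which is why incremental monitoring can miss it.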
Distributions: One thing that research in complexity tells us is that the world reeks of long tail distributions, sometimes honest to goodness power law distributions, and sometimes just garden variety long tails. Think of two programs. One is the crop yield example above. The other is a business incubator to help start-ups. The outcome in the crop example is likely to be normally (or at least symmetrically) distributed. Crop yields get better by some mean amount, and impact falls off on either side. But the incubator? A few companies will be spectacular successes in terms of job creation, revenue and so on. A few will do OK, and a very large number will have tiny, and ever smaller, outcomes. From an outcome measurement point of view, and from a program theory point of view, and from a policy point of view, the difference in these outcome distributions is really important.
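A quick simulated contrast, with made-up parameters: a symmetric "crop yield" distribution versus a Pareto-style "incubator" one.

```python
import random

random.seed(7)

# Two invented outcome distributions. The means and tail exponent are
# illustrative only.
crops = [random.gauss(10, 2) for _ in range(10_000)]          # symmetric around a mean
startups = [random.paretovariate(1.2) for _ in range(10_000)]  # long-tailed

# What fraction of the total outcome is held by the top 1% of cases?
def top_share(values, frac=0.01):
    ordered = sorted(values, reverse=True)
    k = max(1, int(len(ordered) * frac))
    return sum(ordered[:k]) / sum(ordered)

print(top_share(crops), top_share(startups))
```

In the symmetric case the top 1% of farms holds a sliver of the total impact; in the long-tailed case a handful of firms can hold a third or more of it. The same summary statistic ("mean outcome") tells wildly different policy stories.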
Nested systems: Big fleas have little fleas upon their backs to bite 'em,
And little fleas have littler fleas,
And so on, ad infinitum.
There are a few issues that are important here. First, a judgement needs to be made about whether any kind of nesting should be considered, and if so, which ones really matter. Second, even if nested systems are included, one cannot assume that the behavior of a higher level system can be explained in terms of the lower level systems. This is the classic phenomenon of emergence, but there are other reasons as well. (It is someone’s famous paradox, but I forgot whose.)
Brittle vs. resilient: Some systems don’t break in the face of considerable pressure, but when they do, they fail catastrophically. Other systems can absorb shock and fail incrementally. I don’t think you can rate your systems on a resilience scale (in fact it may be impossible), but you can observe the kind of change the systems you are observing undergo. Assuming that some fail, it would be worth noting the pattern of failure. Bang or whimper?
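The bang-or-whimper contrast can be drawn as two toy load-response curves. The threshold and slope are invented:

```python
# Two hypothetical response curves: performance (0 to 1) as a function of load.
def brittle(load, threshold=0.8):
    # Holds up fully under pressure, then fails catastrophically past a threshold.
    return 1.0 if load < threshold else 0.0

def resilient(load):
    # Absorbs shock and degrades incrementally as load rises.
    return max(0.0, 1.0 - load)

for load in (0.5, 0.79, 0.81):
    print(load, brittle(load), resilient(load))
```

At moderate load the brittle system actually looks better, which is exactly why observing the pattern of failure, not just current performance, matters.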
Adaptation: I am a big fan of looking at systems in terms of evolutionary biology. This means treating them as organisms that are adapting on a fitness landscape. I won’t go into a lecture here, but suffice it to say that the systems you are evaluating probably are evolving. Look for two patterns of change. 1- They evolve because the environment is changing. E.g. the legislative tax climate in the SME example. (Or they fail to evolve, which is also an outcome to be recorded.) 2- They act to change their environment. For example, the girls’ education program makes a point of showing the community how successful the girls are and how life in the community is improving. Essentially, the program is acting to make its environment more salubrious.
Aggregate of small change in many programs: Think of all the examples I used above. Imagine two conditions. 1- They all show a small amount of the desired improvement. 2- They all take place within the same geopolitical entity – maybe a regional authority, or a small state. Maybe it’s a function of the networking effect, or maybe something else. I’m not sure. But it is entirely possible that the aggregate impact of all these small changes will be dramatic.
Scaling of resources: What is the difference between a string quartet and a gas station? The answer is that there is a (more or less) linear relationship between the number of people who can hear a string quartet, and the number of string quartets out there to play. Not so with gas stations. The number one finds in a city does go up with the population, but it is not linear. (I think this is actually a power law function, but I’m not totally sure about that. I’d have to go back and check something I read.) The point is that in evaluation, one might want to consider whether the number of resources is sufficient for a population. Do we have enough teachers, enough police, enough agricultural extension workers, etc.? I bet that schools are like string quartets, and police are like gas stations. (It’s a longish conversation as to why I think this.) But the point is that these kinds of different relationships are common in system scaling situations. I’m sure you don’t have the data to do the math, but the idea is still worthy of consideration when trying to define “success”. “Success” in scaling might mean a very different pattern depending on what is being scaled, and in what setting.
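A sketch of the quartet/gas-station contrast, with an illustrative (not measured) sublinear exponent:

```python
# Linear vs. sublinear scaling of resources with population. The audience size,
# constant, and 0.85 exponent are all assumptions for illustration.
def quartets_needed(pop):
    return pop / 1000            # linear: a quartet's audience size is fixed

def stations_needed(pop, c=0.05, beta=0.85):
    return c * pop ** beta       # sublinear power law: shared infrastructure

for pop in (10_000, 100_000, 1_000_000):
    print(pop, quartets_needed(pop), round(stations_needed(pop)))
```

Doubling the population doubles the quartets needed but less than doubles the gas stations — so "success" in scaling the two kinds of resource should look quite different.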
Inside or outside — where does change come from? My basic inclination (and almost everyone else's too, I bet) is to assume that any big change in a program I’m evaluating must be a result of something that happened to it from the outside. Funding, change in client base, legislation or regulation, and so on and so forth. It must be something, and that’s the reason to do good environmental scanning in any kind of an M&E exercise. But a key insight from complexity is that small changes from within a system can result in very major changes in the system’s behavior. (It’s those pesky feedback loops that I so derided above.) My only point is not to assume that if there is a radical change in a program, it must have resulted from some environmental event. And while I’m on the subject, don’t assume that the feedback loops that matter are the ones that show up in your pictures of program theory. All kinds of invisible ones might be operating.
More: No doubt there is more. If it pops into my head, I’ll let you know.