Blog

Invitation to Participate — Assumptions in Program Design and Evaluation

Bob’s response to our first post reminded us that we forgot to add something important.
We are actively seeking contributions. If you have something to contribute, please contact us.
If you know others who might want to contribute, please ask them to contact us.

Jonny Morell jamorell@jamorell.com
Apollo Nkwake nkwake@gmail.com
Guy Sharrock Guy.Sharrock@crs.org

Introducing a Blog Series on Assumptions in Program Design and Evaluation

Assumptions drive action.
Assumptions can be recognized or invisible.
Assumptions can be right, wrong, or anywhere in between.
Over time assumptions can atrophy, and new ones can arise.
To be effective drivers of action, assumptions must simplify and distort.

Welcome to our blog series dedicated to exploring these assertions. Our intent is to cultivate a community of interest. Our hope is that a loose coalition will form through this blog, and through other channels, that will enrich the wisdom that evaluators and designers bring to their work. Please join us.

Jonny Morell jamorell@jamorell.com
Apollo Nkwake nkwake@gmail.com
Guy Sharrock Guy.Sharrock@crs.org


Complex systems or complex behavior? Part 1 of a 10-part series on how complexity can produce better insight into what programs do, and why

Common Introduction to all sections

This is part 1 of 10 blog posts I’m writing to convey the information I present in various workshops and lectures I deliver about complexity. I’m an evaluator, so I think in terms of evaluation, but I’m convinced that what I’m saying is equally applicable to planning.

I wrote each post to stand on its own, but I designed the collection to provide a wide-ranging view of how research and theory in the domain of “complexity” can contribute to the ability of evaluators to show stakeholders what their programs are producing, and why. I’m going to try to produce a YouTube video on each section. When (if?) I do, I’ll edit the post to include the YouTube URL.

Part | Title | Approximate post date
1 | Complex systems or complex behavior? | up
2 | Complexity has awkward implications for program designers and evaluators | 6/14
3 | Ignoring complexity can make sense | 6/21
4 | Complex behavior can be evaluated using comfortable, familiar methodologies | 6/28
5 | A pitch for sparse models | 7/1
6 | Joint optimization of unrelated outcomes | 7/8
7 | Why should evaluators care about emergence? | 7/16
8 | Why might it be useful to think of programs and their outcomes in terms of attractors? | 7/19
9 | A few very successful programs, or many, connected, somewhat successful programs? | 7/24
10 | Evaluating for complexity when programs are not designed that way | 7/31

Complex systems or complex behavior?

There are two reasons why I am uncomfortable talking about complex systems. One reason is that I have never been able to find an unambiguous definition that everyone (or at least most people) agrees on, and which also captures the range of topics that I think are useful in evaluation. The second reason is that even if I knew what a complex system was, I would have no idea what to do with it when designing or conducting an evaluation.

What I do find useful is a focus on what complex systems do, on how they behave. Those behaviors are something I can work with. To telegraph an example I’ll use in Part 7 (Why should evaluators care about emergence?): when there is emergent behavior, a whole cannot be understood in terms of its parts. Were I to suspect such behavior, my program models would be less granular, my methodology would address different constructs, and my data interpretation would ignore fine-level detail.

Table 1: Cross-reference of complexity themes and complex behaviors that are useful in evaluation

Complex behavior that may be useful in evaluation | Theme: Pattern | Theme: Predictability | Theme: How change happens
Attractors | | |
Emergence | | |
Sensitive dependence | | |
Unpredictable outcome chains | | |
Network effects among outcomes | | |
Joint optimization of uncorrelated outcomes | | |

Not all complex behaviors are useful in evaluation, but some are. Appreciating the application of complexity to evaluation also means attending to themes that cut across the writing in fields such as biology, meteorology, physics, mathematics, economics, and many others. For doing evaluation, I find it useful to think in terms of three themes: 1) pattern, 2) predictability, and 3) how change happens. When I do evaluation, I try to think about how invoking complex behaviors can help me understand a program in terms of those three themes. Table 1 shows the cross-references. In any given evaluation some cells will have content, and some will be empty.

Complexity has awkward implications for program designers and evaluators – Part 2 of a 10-part series on how complexity can produce better insight into what programs do, and why

Common Introduction to all sections

This is part 2 of 10 blog posts I’m writing to convey the information I present in various workshops and lectures I deliver about complexity. I’m an evaluator, so I think in terms of evaluation, but I’m convinced that what I’m saying is equally applicable to planning.

I wrote each post to stand on its own, but I designed the collection to provide a wide-ranging view of how research and theory in the domain of “complexity” can contribute to the ability of evaluators to show stakeholders what their programs are producing, and why. I’m going to try to produce a YouTube video on each section. When (if?) I do, I’ll edit the post to include the YouTube URL.

Part | Title | Approximate post date
1 | Complex systems or complex behavior? | up
2 | Complexity has awkward implications for program designers and evaluators | up
3 | Ignoring complexity can make sense | 6/21
4 | Complex behavior can be evaluated using comfortable, familiar methodologies | 6/28
5 | A pitch for sparse models | 7/1
6 | Joint optimization of unrelated outcomes | 7/8
7 | Why should evaluators care about emergence? | 7/16
8 | Why might it be useful to think of programs and their outcomes in terms of attractors? | 7/19
9 | A few very successful programs, or many, connected, somewhat successful programs? | 7/24
10 | Evaluating for complexity when programs are not designed that way | 7/31


Complexity has awkward implications for program designers and evaluators

Complex behavior is problematic because it has implications for program outcomes that do not conform to common sense, or that challenge accepted processes of program design, or both. Table 1 shows some examples. Table 2 outlines the complex behaviors that explain the outcome patterns.

Table 1: Examples of complexity-driven program behaviors that have awkward implications for program designers

1 Program behavior: Benefits are highly skewed toward a small number of service recipients.

Implications: This pattern may not be an aberration or a fault of the program design. Rather, it may be fundamental to the program and the conditions in which it is operating. Even so, both politics and ideology favor a reasonably balanced distribution of program effects. It’s not pleasant to contemplate that the benefits of an innovation, no matter how valuable that innovation may be, will be distributed in a highly unequal manner.

2 Program behavior: Program effects defy understanding in terms of the outcome chains identified in the program theory.

Implications:

Psychology: As humans, we have a natural desire to take things apart and see how the pieces fit back together. It’s unsettling to think that looking at the pieces will not help us understand what we have taken apart.

Political, economic, and social realities: We live in a world where incremental efforts are needed when a long-term objective is pursued. That reality makes it difficult to admit that we can’t explain how incremental change adds up.

3 Program behavior: A program can be relied upon to produce long-term outcomes, but a chain of intermediate outcomes cannot be identified in advance.

Implications: The difficulty here is like the one above. It does not make sense that we can reliably predict where a program will end up but cannot identify the intermediate steps. And selling that assertion to funders can be no small effort.

4 Program behavior: Achieving program goals induces dysfunctional change in related programs.

Implications: Our funding mechanisms are “stovepiped”, and thus optimized to achieve a single outcome, or at least a set of highly correlated outcomes. It is disconcerting to contemplate the possibility that success within those stovepipes will, of necessity, breed undesirable change in other activities that we care about.

Table 2:  Complex behaviors that explain the program performance
1 Program behavior: Benefits are highly skewed toward a small number of service recipients.

Complex behavior = preferential attachment

Preferential attachment refers to a process by which one “entity” connects with another based on “size”.  One frequently cited example is the Internet, with its set of larger hubs connecting to smaller ones. Snowflakes also follow this pattern, as does wealth.

What do these seemingly different constructs have in common? They can all be thought of as random processes. If you are going to link to a website, is there a greater chance that you will know about larger or smaller sites? If you are an ice particle, are you more or less likely to find and bind to a larger or smaller collection of particles? If you are a business opportunity, are you more or less likely to seek larger or smaller centers of partnering potential? The direction in these examples can be turned around. If you are a large website, is there a higher or lower chance that you will attract potential connections? If you are a snowflake, is there a higher or lower chance that you will bump into ice particles? If you are known to possess resources, is there a higher or lower probability that you will attract partners? Other characteristics of these patterns are that they are fractal and that the sizes of their connections follow a power law distribution.
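
To make the mechanism concrete, below is a minimal sketch of preferential attachment in Python. The growth rule (each new node links to an existing node with probability proportional to that node’s current number of links) is the standard Barabási–Albert-style process; I am using it as an illustration, since no specific model is named above.

```python
import random
from collections import Counter

# Minimal sketch of preferential attachment (Barabasi-Albert style).
# Each new node links to one existing node, chosen with probability
# proportional to that node's current degree ("size attracts size").
def preferential_attachment(n_nodes, seed=42):
    rng = random.Random(seed)
    # 'endpoints' holds one entry per link endpoint, so sampling
    # uniformly from it is equivalent to degree-proportional sampling.
    endpoints = [0, 1]                 # start with two linked nodes
    degrees = Counter({0: 1, 1: 1})
    for new_node in range(2, n_nodes):
        chosen = rng.choice(endpoints)
        degrees[new_node] += 1
        degrees[chosen] += 1
        endpoints.extend([new_node, chosen])
    return degrees

degrees = preferential_attachment(10_000)
print("max degree:", max(degrees.values()))
print("nodes with degree 1:", sum(1 for d in degrees.values() if d == 1))
```

Running it shows the skewed pattern described above: thousands of nodes with a single link and a small number of large hubs, the signature of a power law distribution.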

2 Program behavior: Program effects defy understanding in terms of the outcomes identified in the program theory.

Complex behavior = emergence

Emergence is a phenomenon in which the functioning of an entire unit cannot be explained in terms of how individual parts interact. Contrast an automobile engine with a beehive, a traffic jam, or an economy. I could identify each part of the engine, explain its construction, discuss how an internal combustion engine works, and describe what role each part plays in the operation of the engine. The whole engine may be greater than the sum of its parts, but the unique role of each part remains. No such unique contribution of individual parts exists in beehives, traffic jams, or economies. With these, it may be possible to identify the rules of interaction that have to be in place for emergence to manifest, but it would still be impossible to identify the unique contribution of each part.
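
A toy simulation makes the contrast concrete. The sketch below is a simplified traffic model in the spirit of the Nagel–Schreckenberg cellular automaton (my choice of illustration; no particular model is named above). Each car follows the same three local rules, yet jams emerge that cannot be traced to the unique contribution of any single car.

```python
import random

# Simplified traffic loop (inspired by the Nagel-Schreckenberg model).
# Rules per car: speed up toward a maximum, never hit the car ahead,
# and occasionally slow down at random. Jams emerge from these rules.
ROAD, CARS, VMAX, STEPS = 100, 30, 5, 50
rng = random.Random(1)

positions = sorted(rng.sample(range(ROAD), CARS))   # circular road
speeds = [0] * CARS

for _ in range(STEPS):
    for i in range(CARS):
        gap = (positions[(i + 1) % CARS] - positions[i] - 1) % ROAD
        speeds[i] = min(speeds[i] + 1, VMAX, gap)   # accelerate, but keep the gap
        if speeds[i] > 0 and rng.random() < 0.3:    # random slowdown
            speeds[i] -= 1
    positions = [(p + v) % ROAD for p, v in zip(positions, speeds)]

# Clusters of stopped cars are the emergent jam: no rule says "form a
# jam", and no single car's behavior explains where one appears.
print("cars stopped after", STEPS, "steps:", speeds.count(0))
```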

3 Program behavior: A program can be relied upon to produce long-term outcomes, but a chain of intermediate outcomes cannot be identified in advance.

Complex behaviors = sensitive dependence and attractors

“Sensitive dependence” refers to a phenomenon in which small perturbations can result in a radical change in the trajectory of a system. An “attractor” is a set of conditions that constrains the states in which a system can exist. It is well within the bounds of possibility that sensitive dependence precludes identifying how an entity will move within its attractor, but that the attractor will still constrain the states in which the system can find itself.
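
The logistic map is a convenient toy example of both ideas (my illustration, not one drawn from the text above). Two trajectories that begin almost identically diverge completely within a few dozen steps (sensitive dependence), yet both remain confined to the same bounded set of states (the attractor).

```python
# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
r = 4.0
x, y = 0.400000, 0.400001    # two nearly identical starting points

for step in range(1, 41):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.6f}")

# The gap grows from one millionth to order 1: the path is unpredictable.
# Yet x and y always stay inside [0, 1]: the attractor still constrains
# the states the system can occupy.
```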

4 Program behavior: Achieving program goals induces dysfunctional change in related programs.

Complex behaviors = evolution / adaptation within an ecosystem

One can think of programs as organisms that are attempting to maximize their viability on a fitness landscape. In that sense a program can be thought of as competing with the other programs with which it shares an environment. In situations like this, anything that changes the allocation of resources among the organisms will result in adaptations to new realities. Put in language that resonates more with planners and evaluators: there will be unexpected (and probably undesirable) consequences of implementing an effort to maximize only one outcome. As an example, consider a suite of health care services – AIDS, prenatal care, women’s health, tertiary care, and so on. What would happen to other elements of that health care suite if money, the most interesting jobs, planners’ intellectual effort, and networks of informal relationships all flowed into AIDS efforts?
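
A toy model can show the dynamic at work. The sketch below uses replicator-style dynamics, in which each program’s share of a fixed resource pool grows or shrinks with its fitness relative to the average; the formalism and the numbers are my assumptions, chosen only to illustrate the AIDS example above.

```python
# Replicator-style competition among programs sharing one resource pool.
programs = ["AIDS", "prenatal care", "women's health", "tertiary care"]
share = {p: 0.25 for p in programs}   # equal resources at the start
fitness = {p: 1.0 for p in programs}
fitness["AIDS"] = 1.5                 # one program attracts money, jobs, attention

for _ in range(10):
    avg = sum(share[p] * fitness[p] for p in programs)
    share = {p: share[p] * fitness[p] / avg for p in programs}

for p in programs:
    print(f"{p:15s} {share[p]:.2f}")
# The favored program ends up with most of the resources; the others
# must adapt to shrinking shares -- the dysfunctional change in related
# programs that the table entry describes.
```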

From the point of view of doing evaluation, none of the complex behaviors, or their consequences for programs, are difficult to address. (See Part 4, Complex behavior can be evaluated using comfortable, familiar methodologies.) Getting program designers on board, however, is a different matter. One of my intentions in this blog series is to convince evaluators that program designers are acting rationally when they ignore complexity, but that productive dialogue about complexity can still be had. (See Part 3, Ignoring complexity can make sense.) And in any case, it is possible and worthwhile to evaluate based on complexity even when programs are not designed that way. (See Part 10, Evaluating for complexity when programs are not designed that way.)


Workshop: Logic Models — Beyond the Traditional View


Workshop slides used at the 2011 meeting of the American Evaluation Association.

Workshop description: When should we use logic models? How can we maximize their explanatory value and usefulness as an evaluation tool? This workshop will present three broad topics that will increase the value of using logic models. First, we’ll explore an expanded view of the forms logic models can take, including 1) the range of information that can be included, 2) the use of different forms and scales, 3) the types of relationships that may be represented, and 4) uses of models at different stages of the evaluation life cycle. Second, we’ll examine how to balance visual design against information density in order to make the best use of models with various stakeholders and technical experts. Third, we’ll consider epistemological issues in logic modeling, addressing 1) strengths and weaknesses of ‘models’, 2) relationships between models, measures, and methodologies, and 3) conditions under which logic models are and are not useful. Through lecture and both small and large group discussions, we will move beyond the traditional view of logic models to examine their applicability, value, and relatability to attendees’ experiences.

You will learn:

  • The essential nature of a ‘model’, its strengths and weaknesses;
  • Uses of logic models across the entire evaluation life cycle;
  • The value of using multiple forms and scales of the same logic model for the same evaluation;
  • Principles of good graphic design for logic models;
  • Evaluation conditions under which logic models are, and are not, useful;
  • The relationship among logic models, measurement, and methodology.

Workshop: Grappling With the Unexpected: From Firefighting to Systematic Action

Workshop slides for Grappling With the Unexpected: From Firefighting to Systematic Action

Workshop description: All evaluators deal with unintended events that foul their evaluation plans. Either the program does not work as planned, or the evaluation does not work as planned, or both. Usually we treat these situations as fires, i.e., we exercise our skills to meet the crisis. This workshop steps back from crisis mode and presents a systematic treatment of why these situations pop up, the continuum from “unforeseen” to “unforeseeable”, tactics that can be used along the continuum, and why caution is needed, because anything we do to minimize the effect of surprise may be the cause of yet other difficulties. The intent of the workshop is twofold: first, to provide individual attendees with skills and knowledge they can employ in their own practice; second, to further a community of interest among evaluators dedicated to developing systematic understanding of the phenomenon of unanticipated events that threaten the integrity of evaluation.