Uncovering program assumptions

Apollo M Nkwake
nkwake@gmail.com

 

Assumptions are what we believe to hold true. They may be tacit or explicit. It is okay to assume. In fact, it’s inevitable, because in order to make sense of a complex world, one needs to prioritize the variables and relationships that matter most. The danger comes when the variables that aren’t prioritized are treated as if they don’t exist at all. That is to assume that we haven’t assumed.

Examining our assumptions about how a program should work is essential for program success, since it helps to unmask risks. Program assumptions should be understood in relation to program outputs and outcomes.

An old adage goes: “You can lead a horse to water, but can’t make it drink”.

In his book Utilization-Focused Evaluation, Michael Patton dramatizes the above adage in a way that makes it easy to understand program outputs, outcomes and assumptions:

  • The desired outcome is that the horse drinks the water (assuming the water is safe, the horse is thirsty, or that the horse herder has a theory of what makes horses want to drink water, i.e., the science of horse drinking).
  • The longer-term outcomes are that the horse stays healthy and works effectively.
  • But because program staff know that they can’t make the horse drink the water, they focus on things that they can control: leading the horse to the water, making sure the tank is full, monitoring the quality of the water, and keeping the horse within drinking distance of the water.
  • In short, they focus on the processes of water delivery (outputs) rather than the outcome of water drunk (or the horse staying healthy and productive; emphasis added).
  • Because staff can control processes but cannot guarantee the attainment of outcomes, government rules and regulations get written specifying exactly how to lead a horse to water.
  • Quality awards are made for improving the path to the water, and for keeping the horse happy along the way.
  • Most reporting systems focus on how many horses get led to the water, and how difficult it was to get them there, but never get around to finding out whether the horse drank the water and stayed healthy.

The outputs (horse led to the water) and outcomes (horse drinks the water and stays healthy) are clear in this illustration. What then are the assumptions in this illustration? Here are my suggestions:

Given the “horse drinks water” outcome, the assumptions might be that the horse is thirsty (or that we know when the horse needs to drink), that the water tastes good, that the horse herder understands the relationship between drinking and horse health, and that other horses are not competing for the same water source. And just because one horse drinks the water doesn’t mean all of them will, for all sorts of reasons we might not understand.

Given the “horse stays healthy” outcome, the assumption might be that the water is safe, and so on.

Most monitoring, evaluation and learning systems try to track program outputs and outcomes. But critical assumptions are seldom tracked, and when they are, it is usually factors beyond stakeholders’ control that get tracked, such as no epidemic breaking out and killing all the horses in the community (external assumptions). In the above illustration, the water quality could be checked. Also, the horse’s thirst could be manipulated with a little salty food, and there could be a system of managing the horses so that they all get a drink. These are assumptions within stakeholder influence (internal assumptions).
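To make this concrete, here is a minimal sketch (in Python, with entirely hypothetical field names and example entries drawn from the horse illustration) of how a monitoring system might log internal and external assumptions alongside the results they underpin. It is an illustration of the idea, not a prescription for any particular M&E platform.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str      # what we believe to hold true
    kind: str           # "internal" (within stakeholder influence) or "external"
    linked_result: str  # the output or outcome the assumption underpins
    how_tracked: str    # how the monitoring system will check it

# Hypothetical entries for the horse illustration
assumptions = [
    Assumption("The water is safe to drink", "internal",
               "Horse stays healthy", "Periodic water-quality checks"),
    Assumption("The horse is thirsty when led to the water", "internal",
               "Horse drinks the water", "Observe feeding and salt schedule"),
    Assumption("No epidemic wipes out the horses in the community", "external",
               "Horse stays healthy", "Monitor veterinary alerts"),
]

# A simple report: list each assumption next to the result it supports.
for a in assumptions:
    print(f"[{a.kind}] {a.statement} -> {a.linked_result} (tracked via: {a.how_tracked})")
```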

My point here is that examining (internal and external) assumptions alongside program outputs and outcomes unmasks risks to program success.

Recommended reading:
Evaluation and Program Planning, special issue on Working with assumptions: Existing and emerging approaches for improved program design, monitoring and evaluation

  • Volume overview: Working with assumptions: Existing and emerging approaches for improved program design, monitoring and evaluation (Apollo M. Nkwake, Nathan Morrow)
  • Clarifying concepts and categories of assumptions for use in evaluation (Apollo M. Nkwake, Nathan Morrow)
  • Assumptions at the philosophical and programmatic levels in evaluation (Donna M. Mertens)
  • Interfacing theories of program with theories of evaluation for advancing evaluation practice: Reductionism, systems thinking, and pragmatic synthesis (Huey T. Chen)
  • Assumptions, conjectures, and other miracles: The application of evaluative thinking to theory of change models in community development (Thomas Archibald, Guy Sharrock, Jane Buckley, Natalie Cook)
  • Causal inferences on the effectiveness of complex social programs: Navigating assumptions, sources of complexity and evaluation design challenges (Madhabi Chatterji)
  • Assumption-aware tools and agency: An interrogation of the primary artifacts of the program evaluation and design profession in working with complex evaluands and complex contexts (Nathan Morrow, Apollo M. Nkwake)
  • Conclusion: Agency in the face of complexity and the future of assumption-aware evaluation practice (Nathan Morrow, Apollo M. Nkwake)

 

Joint optimization of unrelated outcomes – Part 6 of a 10-part series on how complexity can produce better insight on what programs do, and why

Common Introduction to all sections

This is part 6 of 10 blog posts I’m writing to convey the information that I present in various workshops and lectures that I deliver about complexity. I’m an evaluator so I think in terms of evaluation, but I’m convinced that what I’m saying is equally applicable for planning.

I wrote each post to stand on its own, but I designed the collection to provide a wide-ranging view of how research and theory in the domain of “complexity” can contribute to the ability of evaluators to show stakeholders what their programs are producing, and why. I’m going to try to produce a YouTube video on each section. When (if?) I do, I’ll edit the post to include the YT URL.

Part Title Approximate post date
1 Complex systems or complex behavior? up
2 Complexity has awkward implications for program designers and evaluators up
3 Ignoring complexity can make sense up
4 Complex behavior can be evaluated using comfortable, familiar methodologies up
5 A pitch for sparse models up
6 Joint optimization of unrelated outcomes up
7 Why should evaluators care about emergence? 7/16
8 Why might it be useful to think of programs and their outcomes in terms of attractors? 7/19
9 A few very successful programs, or many, connected, somewhat successful programs? 7/24
10 Evaluating for complexity when programs are not designed that way 7/31

Joint optimization of unrelated outcomes

This blog post is one of two in the series that discuss the possible advantages of having a less successful program over a more successful one. The other is Part 9: A few very successful programs, or many, connected, somewhat successful programs?

Figure 1: Typical Model for an AIDS Program

Figure 1 is a nice traditional model of an AIDS prevention/treatment program. The program is implemented, and services are provided. Because of careful planning, the quality of service is high. The combined amount and quality of service decreases the incidence and prevalence of AIDS. Decreased incidence and prevalence lead to improvements in quality of life and other similar measures. Because incidence and prevalence decrease, the amount of service provided goes down. However, at whatever level, the quality of service remains high. All these changes can be measured quantitatively. Change in the outcomes also affects the activities of the program, but for the most part, understanding those changes requires qualitative analysis.

There is nothing wrong with this model and this evaluation. I would dearly love to have a chance to do a piece of work like that. Note, however, an aspect of this program that characterizes every program I have ever seen. All the outcomes are highly correlated with each other. Because of the ways in which change happens, this can have some unpleasant consequences.

The unpleasant consequences can be seen by casting the AIDS program within a model that recognizes that the AIDS program is but one of many organisms in a diverse ecosystem of health activities (Figure 2). (For a really good look at the subject of diversity and change, see Scott Page’s Diversity and Complexity.)

Figure 2: Casting the AIDS Model into an Ecosystem Based Health Care Model

Looked at in those terms, the way change happens can have widespread, and probably not very desirable consequences. Table 1 explains the details in the model shown in Figure 2.

Table 1: Explanation of Figure 2-
Upper right
  • The ecosystem of health services is arranged in a radar chart to show how much each service contributes to overall health in the community. There is no scale on the chart because the absolute value of each service’s quality does not matter. All that matters is that, under the circumstances, each service is about as good as it can be.
Upper left
  • This is the model of our very successful AIDS prevention and treatment program, as shown in Figure 1.
Lower left
  • This is a chart of what can happen to health system resources when an overriding priority is put on AIDS, to the exclusion of everything else. Resources flow from the rest of the system to AIDS. When I say “resources” I do not mean just money. I mean everything about a health care system that is needed for the system to function well.
Lower right
  • This radar chart shows the status of the system after the AIDS effort has been in operation for a while. Indeed, the AIDS measures improve. But what of the other services? How do they accommodate nurses choosing to move to AIDS care, or policy makers’ time and intellectual effort being pointed in the AIDS direction, and so on? It seems reasonable to posit that whatever happens to those other services, it will not be to their advantage. Their environment has become resource poor.

This is what happens when a single objective is pursued in a system composed of diverse entities with diverse goals. You will get what you worked for, but the system as a whole may be the worse off for it. What is the solution? The solution is to work at jointly optimizing multiple somewhat unrelated outcomes. “Somewhat” is an important qualifier because the range of objectives cannot be too diverse. In the AIDS example, all health care objectives certainly have some overlap and relationships to each other. It’s not as if the goals to be jointly optimized were as far apart as AIDS and girls’ schooling. Some coherence of focus is needed.

The above advice can be excruciatingly difficult to follow. One problem is that there is nothing obvious about what “joint optimization” means. AIDS prevention, tertiary care, and women’s health – imagine drawing a logic model for the goals of each of these programs. Then imagine the interesting conversations that would ensue on the topic of how much achievement of each goal was appropriate.
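To give a feel for what “joint optimization” could mean in practice, here is a minimal sketch. It uses a made-up resource-allocation problem with invented weights and a deliberately simple criterion (maximize the worst-off outcome under a shared budget). It is one of many possible ways to operationalize the idea, not a recommendation.

```python
import math

# Toy system: three health services share a fixed resource budget.
# Each service's outcome shows diminishing returns (sqrt of its allocation).
# Weights and budget are illustrative only.
services = {"AIDS": 1.0, "maternal_health": 0.9, "tertiary_care": 0.8}
BUDGET = 12  # arbitrary units of money, staff time, and attention

def outcomes(allocation):
    return {s: w * math.sqrt(allocation[s]) for s, w in services.items()}

# Strategy 1: single objective -- every resource flows to AIDS.
single = {"AIDS": BUDGET, "maternal_health": 0, "tertiary_care": 0}

# Strategy 2: joint optimization -- grid search for the allocation that
# maximizes the worst-off service (one of many possible joint criteria).
best, best_min = None, -1.0
for a in range(BUDGET + 1):
    for b in range(BUDGET + 1 - a):
        alloc = {"AIDS": a, "maternal_health": b, "tertiary_care": BUDGET - a - b}
        worst = min(outcomes(alloc).values())
        if worst > best_min:
            best, best_min = alloc, worst

for name, alloc in [("single objective", single), ("joint optimization", best)]:
    out = outcomes(alloc)
    print(name, {k: round(v, 2) for k, v in out.items()},
          "| system total:", round(sum(out.values()), 2))
```

Under these invented numbers, the single-objective allocation produces the best AIDS score but the lowest system-wide total; the joint allocation gives up some AIDS gains in exchange for a healthier ecosystem overall.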

Indeed, one way to look at the simple model depicted by Figure 1 is that it is a program operating within an organizational silo. And as I tried to show in Part 3 (Ignoring complexity can make sense), operating within silos is rational and functional. I am by no means arguing that the model in Figure 2 is in any way better than the model in Figure 1, or that programs must be designed and evaluated with respect to one or the other. My only point in this blog post is to show that there is complex system behavior, in the form of evolutionary adaptation, that is likely to cause unintended undesirable consequences when efforts are made to pursue a set of highly correlated outcomes.

Finally, I know many people are skeptical of the dark scenario I painted above, namely, that the most likely unintended consequences of pursuing a single objective are negative. But I think I’m right. For an explanation, see the section “Why are Unintended Consequences Likely to be Undesirable?” in From Firefighting to Systematic Action: Toward a Research Agenda for Better Evaluation of Unintended Consequences.

A pitch for sparse models – Part 5 of a 10-part series on how complexity can produce better insight on what programs do, and why

Common Introduction to all sections

This is part 5 of 10 blog posts I’m writing to convey the information that I present in various workshops and lectures that I deliver about complexity. I’m an evaluator so I think in terms of evaluation, but I’m convinced that what I’m saying is equally applicable for planning.

I wrote each post to stand on its own, but I designed the collection to provide a wide-ranging view of how research and theory in the domain of “complexity” can contribute to the ability of evaluators to show stakeholders what their programs are producing, and why. I’m going to try to produce a YouTube video on each section. When (if?) I do, I’ll edit the post to include the YT URL.

Part Title Approximate post date
1 Complex systems or complex behavior? up
2 Complexity has awkward implications for program designers and evaluators up
3 Ignoring complexity can make sense up
4 Complex behavior can be evaluated using comfortable, familiar methodologies up
5 A pitch for sparse models up
6 Joint optimization of unrelated outcomes 7/8
7 Why should evaluators care about emergence? 7/16
8 Why might it be useful to think of programs and their outcomes in terms of attractors? 7/19
9 A few very successful programs, or many, connected, somewhat successful programs? 7/24
10 Evaluating for complexity when programs are not designed that way 7/31

Models

I’ll start with my take on the subject of “models”. I do not think of models exclusively in terms of traditional evaluation logic models, or in terms of the box-and-arrow graphics that we use to depict program theory. Rather, I think in terms of how “models” function in the process of scientific inquiry. Table 1 summarizes how I engage models when I do evaluation. [Some writings that influenced my thinking about this topic: 1) Evaluation as technology, not science (Morell), 2) Models in Science (Frigg and Hartmann), 3) The Model Thinker: What You Need to Know to Make Data Work for You (Page), and 4) Timelines as evaluation logic models (Morell).]

Table 1: How Jonny Thinks About Models

Simplification: A model is a simplification of reality that deliberately omits some aspects of a phenomenon’s functioning in order to highlight others. Simplification is required because without it, no methodology could cover all relevant factors.
Ubiquity: Because evaluation is an analytical exercise, there is always a need for some kind of a model. That model may be implicit or explicit, detailed or sparse, comprised of qualitative or quantitative concepts, and designed to drive any number of qualitative or quantitative ways of understanding a program. Also, models can vary in their half-lives. Some will remain relatively constant over an entire evaluation. Some may change with each new piece of data or each new analysis. But there will always be more going on than can be managed in any analysis. There will always be a need to decide what to strip out in order to discern relationships among elements of what is left.
Ignorance: No matter how smart we are, we will never know what all the relevant factors are. We cannot have a complete model no matter how hard we try.
Choice: Models can be cast in different forms and at different levels of detail. The appropriate form is the one that works best for a particular inquiry.
Multiple forms: There is no reason to restrict an inquiry to only one model, or one form of model. In fact, there are many good reasons to use multiple models.
Wrong but useful: George Box was right: “All models are wrong, but some are useful.” (A dated but public version is available online, as is the journal version.)
Outcome focus: I use models to guide decisions about what methodology I should employ, what data I should collect, and how I should interpret the data. I tend not to use models to explain a program; if I did, I would include more detail than I could handle in an evaluation exercise. I do not use models for program advocacy, but if I did, I would use less detail.

A common view of models in evaluation

Considering the above, what should evaluation models look like? This question is unanswerable, but I do have a strong opinion as to what a model should not look like. It should not look like almost all the models I have ever seen. It should not look like Figure 1. I know that no model used by evaluators looks exactly like this, but almost all models I have ever seen have a core logic that is similar. Qualitatively, they are all the same. I do not like these models.

Figure 1: Common, way over specified model

One reason I do not like these models is that they do not recognize complex behavior. Here are some examples of complex behaviors that these kinds of models miss.

 

  • Even a single feedback loop can result in non-linear behavior (see the sketch after this list).
  • Small perturbations in any part of the model may result in a major change in the model’s trajectory.
  • The model as a whole, or regions of it, may combine to generate effects that are not attributable to any single element in the model.
  • Models as depicted in Figure 1 are drawn as networks, but the model is not treated as a network that can exhibit network behavior.
  • The model asserts that intermediate outcomes can be identified, as can paths through those outcomes. It is entirely possible that the precise path cannot be predicted, but that the long-term outcome can.
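To make the first two bullets concrete, here is a minimal sketch that uses the logistic map as a stand-in for an outcome that feeds back on itself. The parameter values are illustrative, not drawn from any real program; the point is only that one feedback loop is enough to produce non-linear behavior, and that a tiny perturbation in the starting condition eventually produces a very different trajectory.

```python
# Logistic map: a single feedback loop in which next period's value depends
# on this period's value. Parameters are purely illustrative.
def trajectory(x0, r=3.9, steps=25):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))  # the feedback loop
    return xs

baseline = trajectory(0.200000)
perturbed = trajectory(0.200001)  # a perturbation in the sixth decimal place

for t in (5, 10, 15, 20, 25):
    gap = abs(baseline[t] - perturbed[t])
    print(f"step {t:2d}: baseline={baseline[t]:.4f} perturbed={perturbed[t]:.4f} gap={gap:.4f}")
```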

Another reason I do not like these models is that they are not modest. Read on.

Recognizing ignorance

Give all the specific detail in Figure 1 a good look. Give it the sniff test. Is it plausible that we know enough about how the program works to specify it at that level of detail? I suppose it’s possible, but I bet not.

Figure 2: Models with successive degrees of ignorance

As an aside, I also think that if models like this are used, they should include information that they always lack. Here are two examples. 1) Are all those multiple arrows equally important? 2) Do those multiple connections represent “and” or “or” relationships? It makes a difference because too many “and” requirements almost certainly portend that the program will fail. These are my favorites from a long list I developed for an analysis of implicit assumptions. If you want them all, go to: Revealing Implicit Assumptions: Why, Where, and How?
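A back-of-the-envelope calculation shows why the “and” versus “or” distinction matters so much. The numbers below are invented and the independence assumption is heroic, but the arithmetic makes the point.

```python
# Illustrative only: suppose each of six preconditions in a model holds with
# probability 0.8, and assume (heroically) that they are independent.
p, n = 0.8, 6

p_and = p ** n               # every precondition must hold
p_or = 1 - (1 - p) ** n      # at least one precondition must hold

print(f"'and' logic: {p_and:.2f}")   # about 0.26 -- the chain probably breaks
print(f"'or' logic:  {p_or:.4f}")    # about 0.9999 -- almost certain to fire
```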

My preference is to use models along the lines of those in Figure 2. From top to bottom, they capture a greater sense of what we do not know because we have not done enough research, or what we cannot know because of the workings of complex behaviors.

Blue model: The story in this model is that there are outcomes that matter, but whose precise relationships cannot be identified. (See the ovals in the “later” column.) The best we can do is think of these outcomes in groups, such that if something happens in one group, something will happen in the subsequent group. We cannot specify relationships among single outcomes within each group, or specific outcomes across groups. Also, it is possible that for each replication of the program, the 1:1 relationships within and across groups may differ. Or there may be no 1:1 relationships at all. Rather, there is emergent behavior in one group that is affecting the other. Put more simply, the best we can say is that “if stuff happens here, stuff will happen there”.
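One hedged illustration of what an analysis under the blue model might look like: rather than estimating element-to-element paths, compare composites of each group. The sketch below uses simulated data (all numbers invented) in which individual 1:1 relationships are weak, yet the group-to-group relationship is clearly visible.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical program sites

# A latent "stuff happened" factor per site drives both groups of outcomes,
# while each individual outcome is mostly noise (weak 1:1 relationships).
latent = rng.normal(size=n)
early = latent[:, None] * 0.4 + rng.normal(size=(n, 4))  # 4 earlier outcomes
later = latent[:, None] * 0.4 + rng.normal(size=(n, 4))  # 4 "later" outcomes

# Element-to-element correlations are weak...
pairwise = np.corrcoef(early.T, later.T)[:4, 4:]
print("typical 1:1 correlation:", round(float(np.mean(np.abs(pairwise))), 2))

# ...but group composites ("stuff here" vs. "stuff there") track each other.
group_r = np.corrcoef(early.mean(axis=1), later.mean(axis=1))[0, 1]
print("group-to-group correlation:", round(float(group_r), 2))
```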

Green model: The story in the middle acknowledges an even greater degree of ignorance. The intermediate outcomes are still there, but the model acknowledges that much else not related to the program might be affecting the long-range outcome. Still, that long-range outcome can be isolated and identified. This may seem like an odd possibility, but I believe it is quite plausible. (See Part 8: Why might it be useful to think of programs and their outcomes in terms of attractors?)

Yellow model: The story at the bottom acknowledges more ignorance still. There, not only are the intermediate outcomes tangled with other activity, but the long-range outcome is as well.

I have no a priori preference for any of these models. The choice would depend on how much we know about the program, what the outcomes were, how much uncertainty we could tolerate, what data were available, what methodologies were available, the actual numbers for “later” and “much later”, and the needs of the stakeholders. What matters, though, is that thinking of models in this way acknowledges the effects of complex behavior on program outcomes, and that it recognizes how little we know about the details of why a program will do what it does. Also, I do not claim that these models are the only ones possible. They are, as they say, for illustrative purposes only. Evaluators can and should be creative in fashioning models that serve the needs of their customers.

Locally right but not globally right

Models can have the odd characteristic of being everywhere locally correct but not globally correct. I tried to illustrate this with the green rectangle in Figure 3. Imagine moving that rectangle over the model. The relationships shown within the rectangle may well behave as the model depicts them, but as the size of the rectangle grows to overlap with the entire model, the fit between model and reality may fade. Several aspects of complex behavior explain why this is so.

Figure 3: Models can be everywhere correct locally but wrong globally

  • Multiple interacting elements may exhibit global behavior that cannot be explained in terms of the sum of its parts. This is the phenomenon of emergence. (Part 7 Why should evaluators care about emergence?)
  • The model is a network, and networks can adapt and change as communication runs along their edges.
  • Because of sensitive dependence, small changes in any part of a system can result in long-term change as the system evolves. The direction of that evolution cannot be predicted. To know it, the system must be run and its behavior observed.
  • All those feedback loops can result in non-linear change.
  • Collections of entities and relationships like this can result in phase shift behavior, a phenomenon where the characteristics of a system can change almost instantaneously.

Summary of common themes

There are two common themes that run through everything I have said in this post.

  • The models limit detail, either by removing specific element-to-element relationships, or by limiting the number and range of elements under investigation.
  • They portray scenarios in which complex behavior is affecting program outcome.

These two themes are related. One of the reasons we should use sparse models is because complex behavior makes it inappropriate to specify too much detail.

 

Complex behavior can be evaluated using comfortable, familiar methodologies – Part 4 of a 10-part series on how complexity can produce better insight on what programs do, and why

Common introduction to all sections

This is part 4 of 10 blog posts I’m writing to convey the information that I present in various workshops and lectures that I deliver about complexity. I’m an evaluator so I think in terms of evaluation, but I’m convinced that what I’m saying is equally applicable for planning.

I wrote each post to stand on its own, but I designed the collection to provide a wide-ranging view of how research and theory in the domain of “complexity” can contribute to the ability of evaluators to show stakeholders what their programs are producing, and why. I’m going to try to produce a YouTube video on each section. When (if?) I do, I’ll edit the post to include the YT URL.

Part Title Approximate post date
1 Complex systems or complex behavior? up
2 Complexity has awkward implications for program designers and evaluators up
3 Ignoring complexity can make sense up
4 Complex behavior can be evaluated using comfortable, familiar methodologies up
5 A pitch for sparse models 7/1
6 Joint optimization of unrelated outcomes 7/8
7 Why should evaluators care about emergence? 7/16
8 Why might it be useful to think of programs and their outcomes in terms of attractors? 7/19
9 A few very successful programs, or many, connected, somewhat successful programs? 7/24
10 Evaluating for complexity when programs are not designed that way 7/31

This blog post will give away much of what is to come in the other parts, but that’s OK. One reason it’s OK is that it’s never a bad thing to cover the same material twice, each time in a somewhat different way. The other reason it’s OK is that before getting into the details of complex behavior and its use in evaluation, an important message needs to be internalized. Namely, that the title of this blog post is in fact correct. Complex behavior can be evaluated using comfortable, familiar methodologies.

Figure 1 illustrates why this is so. It depicts a healthy eating program whose function is to reach out to individuals and teach them about dieting and exercise. Secondary effects are posited because attendees interact with friends and family. It is thought that because of that contact, four kinds of outcomes may occur.

  • Friends and family pick up some of the information that was transmitted to program attendees, and improve their personal health related behavior.
  • Collective change occurs within a family or cohort group, resulting in desirable health improvements, even though the specific changes cannot be identified in advance.
  • There may be community level changes. For instance, consider two examples: 1) An aggregate improvement in the health of people in a community may change their energy for engaging in volunteer behavior. The important outcome is not the number of hours each person puts in. The important outcome is what happens in the community because of those hours. 2) Better health may result in people working more hours and, hence, earning more money. Income is an individual level outcome, but the consequences of increased wealth in the community are a community level outcome.
  • To cap it all off, there is a feedback loop between the accomplishments of the program and what services the program delivers. So over time, the program’s outcomes may change as the program adapts to the changes it has wrought.

Figure 1: Evaluating Complex Behavior With Common, Familiar Methodologies

Even without a formal definition of complexity, I think we would all agree that this is a complex system. There are networks embedded in networks. There are community-level changes that cannot be understood by “summing” specific changes in friends and family. There are influences among the people receiving direct services. Program theory can identify health changes that may occur, but it is incapable of specifying any of the other changes that may occur. There is a feedback loop whereby the effects of the program influence the services the program delivers. And what methodologies are needed to deal with all this complexity? They are in Table 1. Everything there is a method that most evaluators can either apply themselves or easily recruit colleagues who can.

Table 1: Familiar Methodologies to Address Complex Behaviors
Program Behavior Methodology
Feedback between services and impact
  • Service records
  • Budgets and plans
  • Interviews with staff
Community level change
  • Monitoring
  • Observation
  • Open ended interviewing
  • Content analysis of community social media
Direct impact on participants
  • Interviews
  • Exercise logs
  • Food consumption logs
  • Blood pressure / weight measures

There are two exceptions to the “comfortable, familiar methodology” principle. The first would be cases where formal network structure mattered. For instance, imagine that it were not enough to show that network behavior was at play in the healthy eating example, but that the structure of the network and its various centrality measures were important for understanding the program outcomes. In that case one would need specialized expertise and software. The second case would be a scenario where it would further the evaluation if the program were modeled in a computer simulation. Those kinds of models are useless for predicting how a program will behave, but they are very useful for getting a sense of the program’s performance envelope and for testing assumptions about relationships between program and outcome. If any of that mattered, one would need specialized expertise in system dynamics or agent-based modeling, depending on one’s view of how the world works and what information one wanted to know.
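For readers who have not seen one, here is a minimal sketch of what a toy agent-based model of the healthy eating program’s secondary effects might look like. Every number, tie structure, and probability is invented; the point is only that a few lines of simulation let you watch a community-level pattern emerge from individual interactions, which is the kind of “performance envelope” exploration described above.

```python
import random

random.seed(1)

# Toy agent-based sketch: attendees get the program directly; each round,
# anyone holding the "healthy habits" information may pass it to a contact.
N_PEOPLE, N_ATTENDEES, ROUNDS, PASS_PROB = 200, 20, 10, 0.3

# Hypothetical friendship ties: each person knows five random others.
contacts = {i: random.sample(range(N_PEOPLE), 5) for i in range(N_PEOPLE)}
informed = set(range(N_ATTENDEES))  # direct program participants

for r in range(ROUNDS):
    newly = set()
    for person in informed:
        for friend in contacts[person]:
            if friend not in informed and random.random() < PASS_PROB:
                newly.add(friend)
    informed |= newly
    print(f"round {r + 1:2d}: {len(informed)} people reached")

# A crude community-level indicator: the share of the whole community reached,
# which is not just the sum of what happened to the 20 direct attendees.
print("community coverage:", round(len(informed) / N_PEOPLE, 2))
```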

 

 

 

 

 

 

 

Ignoring complexity can make sense – Part 3 of a 10-part series on how complexity can produce better insight on what programs do, and why 

Common Introduction to all sections

This is part 3 of 10 blog posts I’m writing to convey the information that I present in various workshops and lectures that I deliver about complexity. I’m an evaluator so I think in terms of evaluation, but I’m convinced that what I’m saying is equally applicable for planning.

I wrote each post to stand on its own, but I designed the collection to provide a wide-ranging view of how research and theory in the domain of “complexity” can contribute to the ability of evaluators to show stakeholders what their programs are producing, and why. I’m going to try to produce a YouTube video on each section. When (if?) I do, I’ll edit the post to include the YT URL.

Part Title Approximate post date
1 Complex systems or complex behavior? up
2 Complexity has awkward implications for program designers and evaluators up
3 Ignoring complexity can make sense up
4 Complex behavior can be evaluated using comfortable, familiar methodologies 6/28
5 A pitch for sparse models 7/1
6 Joint optimization of unrelated outcomes 7/8
7 Why should evaluators care about emergence? 7/16
8 Why might it be useful to think of programs and their outcomes in terms of attractors? 7/19
9 A few very successful programs, or many, connected, somewhat successful programs? 7/24
10 Evaluating for complexity when programs are not designed that way 7/31

Ignoring complexity can make sense

Complexity is in large measure about connectedness. It is about what happens when processes and entities combine or interact. I believe that understanding complex connectedness will make for better models, and hence for more useful methodologies and data interpretation. Of course I believe this. Why else would I be producing all these blog posts and videos?

Still, I would be remiss if I did not advance a contrary view, i.e. that avoiding the implications of complexity can be functional and rational. In fact, it is usually functional and rational. I don’t think evaluators can do a good job if they fail to appreciate why this is so. It’s all too easy to jump to the conclusion that program designers “should” build complex behavior into their designs. I can make a good argument that they should not.

The difference between Figure 1 and Figure 2 illustrates what I mean. Every evaluation I have been involved with comes out of the organizational structure depicted in Figure 1. A program has internal operations (blue). Those operations produce consequences (pink). There is a feedback loop between what the program does and what it accomplishes. Real world cases may have many more parts, but qualitatively they are all the same picture.

Figure 2 illustrates how programs really operate. The core of the program is still there, color coded in the same pink and blue. However, that program contains embedded detail (dark blue and dark red), and is connected to a great deal of activity and organizational structure outside of its immediate boundaries (green, gray, yellow, and white).

Figure 1
Figure 2

The people working in the program certainly know about these complications. They also know that those complications affect the program they are managing. So why not act on that knowledge? There are good reasons. Think about what would be involved in taking all those relationships into account.

  • Different stakeholders will have different priorities.
  • Different organizational cultures would have to work with each other.
  • Goals among the programs may conflict and would have to be negotiated.
  • Different programs are likely to have different schedules for decision making.
  • The cost of coordination in terms of people, money, and time would increase.
  • Different time horizons for the different activities would have to be reconciled.
  • Interactions among the programs would have to be built into program theory and evaluation.
  • Program designers would have to interact with people they don’t know personally, and don’t trust.
  • Each program will have different contingencies, which instead of affecting a narrow program, would affect the entire suite of programs.

That’s the reality. I’d say it’s rational to work within narrow constraints, no matter how acutely aware people are of the limitations of doing so.

Invitation to Participate — Assumptions in Program Design and Evaluation

Bob’s response to our first post reminded us that we forgot to add something important.
We are actively seeking contributions. If you have something to contribute, please contact us.
If you know others who might want to contribute, please ask them to contact us.

Jonny Morell jamorell@jamorell.com
Apollo Nkwake nkwake@gmail.com
Guy Sharrock Guy.Sharrock@crs.org

Introducing a Blog Series on Assumptions in Program Design and Evaluation

Assumptions drive action.
Assumptions can be recognized or invisible.
Assumptions can be right, wrong, or anywhere in between.
Over time assumptions can atrophy, and new ones can arise.
To be effective drivers of action, assumptions must simplify and distort.

Welcome to our blog series that is dedicated to exploring these assertions. Our intent is to cultivate a community of interest. Our hope is that a loose coalition will form through this blog, and through other channels, that will enrich the wisdom that evaluators and designers bring to their work. Please join us.

Jonny Morell jamorell@jamorell.com
Apollo Nkwake nkwake@gmail.com
Guy Sharrock Guy.Sharrock@crs.org