Assumptions are what we believe to hold true. They may be tacit or explicit. It is fine to assume; in fact, it is inevitable, because making sense of a complex world requires prioritizing the variables and relationships that matter most. The danger comes when the variables we don't prioritize are treated as if they don't exist at all. That is to assume that we haven't assumed.
Examining our assumptions about how a program should work is essential to program success because it unmasks risks. Program assumptions are best understood in relation to program outputs and outcomes.
An old adage goes: "You can lead a horse to water, but you can't make it drink."
In his book Utilization-Focused Evaluation, Michael Patton dramatizes this adage in a way that makes program outputs, outcomes, and assumptions easy to distinguish:
- The desired outcome is that the horse drinks the water (assuming the water is safe, the horse is thirsty, and the herder has some theory of what makes horses want to drink).
- The longer-term outcomes are that the horse stays healthy and works effectively.
- But because program staff know that they can’t make the horse drink the water, they focus on things that they can control:
- Leading the horse to the water, making sure the tank is full, monitoring the quality of water, and keeping the horse within drinking distance of the water.
- In short, they focus on the processes of water delivery (outputs) rather than the outcome of water drunk, or the horse staying healthy and productive (emphasis added).
- Because staff can control processes but cannot guarantee outcomes, government rules and regulations get written specifying exactly how to lead a horse to water.
- Quality awards are made for improving the path to the water, and for keeping the horse happy along the way.
- Most reporting systems focus on how many horses get led to the water and how difficult it was to get them there, but never get around to finding out whether the horses drank the water and stayed healthy.
The outputs (horse led to the water) and outcomes (horse drinks the water and stays healthy) are clear in this illustration. What, then, are the assumptions? Here are my suggestions:
Given the "horse drinks water" outcome: the horse is expected to be thirsty (or we are expected to know when it needs to drink); the water is expected to taste good; the herder is assumed to understand the relationship between drinking and horse health; and other horses are assumed not to be competing for the same water source. And just because one horse drinks the water doesn't mean all of them will, for all sorts of reasons we might not understand.
Given the "horse stays healthy" outcome: the water is expected to be safe, and so on.
Most monitoring, evaluation, and learning (MEL) systems try to track program outputs and outcomes, but critical assumptions are seldom tracked. When they are, it is usually factors beyond stakeholders' control that get tracked, such as no epidemic breaking out and killing all the horses in the community (external assumptions). Yet in the illustration above, the water quality could be checked, the horse's thirst could be encouraged with a little salty food, and a system could be set up to manage the horses so that they all get a drink. These are assumptions within stakeholders' influence (internal assumptions).
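To make the internal/external distinction concrete, here is a minimal, hypothetical sketch of an "assumption registry" a MEL team might keep alongside its output and outcome indicators. All names, fields, and example entries are illustrative, not drawn from any MEL standard:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch: fields are illustrative, not from any MEL framework.
@dataclass
class Assumption:
    statement: str          # what we believe to hold true
    outcome: str            # the outcome this assumption underpins
    scope: str              # "internal" (within stakeholder influence) or "external"
    tracked: bool = False   # is anyone actually monitoring this assumption?

def untracked(assumptions: List[Assumption], scope: str) -> List[str]:
    """List assumptions in a given scope that nobody is monitoring."""
    return [a.statement for a in assumptions if a.scope == scope and not a.tracked]

# Example entries from the horse illustration.
registry = [
    Assumption("Water is safe to drink", "horse stays healthy",
               "internal", tracked=True),
    Assumption("Horse is thirsty when led to water", "horse drinks water",
               "internal"),
    Assumption("No epidemic kills the horses", "horse stays healthy",
               "external"),
]
```

A periodic review could then flag, say, the untracked internal assumptions as the cheapest risks to start monitoring, since they lie within the team's own influence.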
My point here is that examining (internal and external) assumptions alongside program outputs and outcomes unmasks risks to program success.
- Volume overview: Working with assumptions. Existing and emerging approaches for improved program design, monitoring and evaluation (Apollo M. Nkwake, Nathan Morrow)
- Clarifying concepts and categories of assumptions for use in evaluation (Apollo M. Nkwake, Nathan Morrow)
- Assumptions at the philosophical and programmatic levels in evaluation (Donna M. Mertens)
- Interfacing theories of program with theories of evaluation for advancing evaluation practice: Reductionism, systems thinking, and pragmatic synthesis (Huey T. Chen)
- Assumptions, conjectures, and other miracles: The application of evaluative thinking to theory of change models in community development (Thomas Archibald, Guy Sharrock, Jane Buckley, Natalie Cook)
- Causal inferences on the effectiveness of complex social programs: Navigating assumptions, sources of complexity and evaluation design challenges (Madhabi Chatterji)
- Assumption-aware tools and agency: An interrogation of the primary artifacts of the program evaluation and design profession in working with complex evaluands and complex contexts (Nathan Morrow, Apollo M. Nkwake)
- Conclusion: Agency in the face of complexity and the future of assumption-aware evaluation practice (Nathan Morrow, Apollo M. Nkwake)