Assumptions and Risk

Jane Buckley, August 9th, 2019

Assumptions make up a significant share of every person’s everyday thinking. Most are subconscious and implicit, and go unrecognized (when I leave work, my car will be where I left it this morning). Others rise to the surface of our awareness, and we can use that awareness to our advantage by checking the validity of the assumption; for example, I may ask my partner whether my assumption that he purchased milk at the grocery store is correct.

All assumptions represent some amount of risk, suggesting the following questions:

  • Why do some assumptions emerge from our subconscious to be checked while others remain hidden?
  • How can we best surface assumptions and risks for the purpose of program planning and evaluation?

In order to answer these questions, we must first acknowledge that assumptions are an adaptive mechanism the brain uses to manage the sheer volume of stimuli and information it takes in every day. If I had to constantly monitor my car’s position, I wouldn’t be able to write this blog or do much of anything else. Assumptions are the cognitive shortcuts that allow us to be so productive and creative in our thinking.

It makes sense that our brains are always looking for ways to consolidate and use assumptions as much as possible to move us through our environment and lives more efficiently. Following this reasoning, the only reason we would consciously identify and address an assumption is that the risk associated with it may supersede the advantages it offers. This is, in part, an answer to the first question:

  • Assumptions naturally become explicit when their associated risk is a) known and b) understood to be greater than the advantage of maintaining the assumption

To understand how this assumption-risk dynamic plays out in program work, we need to unpack the idea of risk a bit more.

When we are thinking about community development programs (programs focused on health, economic development, youth development, education, etc.), risks include program conditions or components that might:

  1. inhibit positive outcomes
  2. do harm to beneficiaries
  3. waste resources

These three types of risk (among other more context-specific risks) endanger program success and do supersede any advantage gained by maintaining an associated assumption. Therefore, it is critical that programs uncover assumptions associated with the various types of risk so that they can be addressed (checked or consciously accepted) by program staff and leaders.

It can be hard for program staff to uncover assumptions that underlie the programs they work on day in and day out. Even when program staff understand the role and typology of assumptions (paradigmatic, prescriptive, causal), it can be hard to identify them without a more targeted path into the subconscious. For some program teams, using the idea of risk is an effective way of facilitating this brainstorm. For example, one might ask, “What are the outcomes, outside our formal program plans, that have to hold true in order for us to meet our objectives?” Or, “Why do we think this is the best approach given our available resources?” This brings us to an answer to the second question:

  • It is critical that program professionals, who have context and program specific expertise, intentionally engage in surfacing implicit program assumptions in preparation for planning, evaluation and learning. This can be accomplished either by brainstorming assumptions directly and identifying associated risks or by brainstorming possible program risks and identifying the associated assumptions.

 

Uncovering Program Assumptions

Apollo M Nkwake
nkwake@gmail.com

 

Assumptions are what we believe to hold true. They may be tacit or explicit. It is okay to assume. In fact, it’s inevitable, because in order to make sense of a complex world, one needs to prioritize the variables and relationships that matter most. The danger arises when the variables that aren’t prioritized are assumed not to exist at all. That is to assume that we haven’t assumed.

Examining our assumptions about how a program should work is essential for program success, since it helps to unmask risks. Program assumptions should be understood in relation to program outputs and outcomes.

An old adage goes: “You can lead a horse to water, but you can’t make it drink.”

In his book Utilization-Focused Evaluation, Michael Patton dramatizes the above adage in a way that makes it easy to understand program outputs, outcomes and assumptions:

  • The desired outcome is that the horse drinks the water (assuming the water is safe, the horse is thirsty, or that the horse herder has a theory of what makes horses want to drink water, or the science of horse drinking).
  • The longer-term outcomes are that the horse stays healthy and works effectively.
  • But because program staff know that they can’t make the horse drink the water, they focus on things that they can control: leading the horse to the water, making sure the tank is full, monitoring the quality of the water, and keeping the horse within drinking distance of the water.
  • In short, they focus on the processes of water delivery (outputs) rather than the outcome of water drunk (or the horse staying healthy and productive, emphasis added).
  • Because staff can control processes but cannot guarantee the attainment of outcomes, government rules and regulations get written specifying exactly how to lead a horse to water.
  • Quality awards are made for improving the path to water, and for keeping the horse happy along the way.
  • Most reporting systems focus on how many horses get led to the water, and how difficult it was to get them there, but never get around to finding out whether the horse drank the water and stayed healthy.

The outputs (horse led to the water) and outcomes (horse drinks water and stays healthy) are clear in this illustration. What, then, are the assumptions in this illustration? Here are my suggestions:

Given the “horse drinks water” outcome, we might assume that the horse is thirsty (or that we know when the horse needs to drink), that the water tastes good, that the horse herder understands the relationship between horse drinking and horse health, and that other horses are not competing for the same water source. And just because one horse drinks the water, it doesn’t mean all of them will, for all sorts of reasons that we might not understand.

Given the “horse stays healthy” outcome, we might assume that the water is safe, and so on.

Most monitoring, evaluation and learning systems try to track program outputs and outcomes, but critical assumptions are seldom tracked. When they are, it is usually factors beyond stakeholders’ control that get tracked, such as no epidemic breaking out to kill all the horses in the community (external assumptions). In the above illustration, the water quality could be checked, the horse’s thirst could be manipulated with a little salty food, and there could be a system of managing the horses so that they all get a drink. These are assumptions within stakeholder influence (internal assumptions).

My point here is that examining (internal and external) assumptions alongside program outputs and outcomes unmasks risks to program success.

Recommended reading:
Evaluation and Program Planning, special issue on “Working with assumptions: Existing and emerging approaches for improved program design, monitoring and evaluation”

  • Volume overview: Working with assumptions. Existing and emerging approaches for improved program design, monitoring and evaluation     Apollo M. Nkwake, Nathan Morrow
  • Clarifying concepts and categories of assumptions for use in evaluation     Apollo M. Nkwake, Nathan Morrow
  • Assumptions at the philosophical and programmatic levels in evaluation     Donna M. Mertens
  • Interfacing theories of program with theories of evaluation for advancing evaluation practice: Reductionism, systems thinking, and pragmatic synthesis     Huey T. Chen
  • Assumptions, conjectures, and other miracles: The application of evaluative thinking to theory of change models in community development      Thomas Archibald, Guy Sharrock, Jane Buckley, Natalie Cook
  • Causal inferences on the effectiveness of complex social programs: Navigating assumptions, sources of complexity and evaluation design challenges     Madhabi Chatterji
  • Assumption-aware tools and agency; an interrogation of the primary artifacts of the program evaluation and design profession in working with complex evaluands and complex contexts     Nathan Morrow, Apollo M. Nkwake
  • Conclusion: Agency in the face of complexity and the future of assumption-aware evaluation practice     Nathan Morrow, Apollo M. Nkwake

 

Invitation to Participate — Assumptions in Program Design and Evaluation

Bob’s response to our first post reminded us that we forgot to add something important.
We are actively seeking contributions. If you have something to contribute, please contact us.
If you know others who might want to contribute, please ask them to contact us.

Jonny Morell jamorell@jamorell.com
Apollo Nkwake nkwake@gmail.com
Guy Sharrock Guy.Sharrock@crs.org

Introducing a Blog Series on Assumptions in Program Design and Evaluation

Assumptions drive action.
Assumptions can be recognized or invisible.
Assumptions can be right, wrong, or anywhere in between.
Over time assumptions can atrophy, and new ones can arise.
To be effective drivers of action, assumptions must simplify and distort.

Welcome to our blog series that is dedicated to exploring these assertions. Our intent is to cultivate a community of interest. Our hope is that a loose coalition will form through this blog, and through other channels, that will enrich the wisdom that evaluators and designers bring to their work. Please join us.

Jonny Morell jamorell@jamorell.com
Apollo Nkwake nkwake@gmail.com
Guy Sharrock Guy.Sharrock@crs.org