Embracing Uncertainty: The case for mindful development

 

Guy Sharrock, Catholic Relief Services

There is a growing awareness that many aspects of economic and social development are complex, unpredictable, and ultimately uncontrollable. Governments, non-governmental organizations, and international agencies have realized the need for a change in emphasis; a paradigm shift is taking place away from predominantly linear and reductionist models of change to approaches that signal a recognition of the indeterminate, dynamic and interconnected nature of social behavior.

Over the last few years, many international NGOs have adopted a more adaptive approach to project management, often with reference to USAID’s ‘Collaborating, Learning and Adapting’ (CLA) framework and model. In the case of Catholic Relief Services, this work builds on earlier, and related, capacity-strengthening interventions – still ongoing – in which projects are encouraged to embed ‘evaluative thinking’ (ET) (Buckley et al., 2015) into their modus operandi.

Ellen Langer, in her excellent book The Power of Mindful Learning (Langer, 1997), introduces the notion of ‘mindfulness’. This concept, underpinned by many years of research, can be understood as being alert to novelty – intentionally “seeking surprise” (Guijt, 2008) – introducing in a helpful manner a sense of uncertainty to our thinking, and thereby establishing a space for ‘psychologically safe’ learning (Edmondson, 2014) and an openness to multiple perspectives. This seems to me very applicable to the various strands of CLA and ET work in which I have recently been engaged; Langer’s arguments for mindful learning seem as applicable to international development as they are to her own field of research, education. To borrow the language of Lederach et al. (2007), Langer “demystifies” the notion of mindfulness while at the same time offering us the chance to “remystify” the practice of development work that seeks to change behavior and support shifts in social norms. This is both essential and overdue for development interventions occurring in complex settings.

A mindful approach to development would seek to encourage greater awareness, in the present, of how different people on the receiving end of aid adapt (or not) their behavior in response to project interventions; in short, a willingness to go beyond our initial assumptions through a mindful acceptance that data bring not certainty but ambiguity. According to Langer, “in a mindful state, we implicitly recognize that no one perspective optimally explains a situation…we do not seek to select the one response that corresponds to the situation, but we recognize that there is more than one perspective on the information given and we choose from among these” (op. cit.: 108). Mindful development encourages a learning climate in which uncertainty is embraced and stakeholders intentionally surface and value novelty, difference, context, and perspective to generate nuanced understandings of the outcomes of project interventions. Uncertainty is the starting point for addressing complex challenges, together with a willingness to “spend more time not knowing” (Margaret Wheatley, quoted in Kania and Kramer, 2013) before deciding on course corrections, if needed. As Kania and Kramer (ibid.: 7) remark, “Collective impact success favors those who embrace the uncertainty of the journey, even as they remain clear-eyed about their destination.”

References

Buckley, J., Archibald, T., Hargraves, M. and W.M. Trochim. (2015). ‘Defining and Teaching Evaluative Thinking: Insights from Research on Critical Thinking’. American Journal of Evaluation, pp. 1-14.

Edmondson, A. (2014). Building a Psychologically Safe Workplace. Retrieved from: https://www.youtube.com/watch?v=LhoLuui9gX8

Guijt, I. (2008). Seeking Surprise: Rethinking Monitoring for Collective Learning in Rural Resource Management. Published PhD thesis, Wageningen University, Wageningen, The Netherlands.

Kania, J. and M. Kramer. (2013). ‘Embracing Emergence: How Collective Impact Addresses Complexity’. Stanford Social Innovation Review. Stanford University, CA.

Langer, E.J. (1997). The Power of Mindful Learning. Perseus Books, Cambridge, MA.

Lederach, J.P., Neufeldt, R. and H. Culbertson. (2007). Reflective Peacebuilding. A Planning, Monitoring, and Learning Toolkit. Joan B. Kroc Institute for International Peace Studies, University of Notre Dame, South Bend, IN, and Catholic Relief Services, Baltimore, MD.

 

Assumptions and Risk

Jane Buckley, August 9th, 2019

Assumptions make up a significant share of every person’s everyday thinking. Most are subconscious and implicit, and go unrecognized (when I leave work, my car will be where I left it this morning). Others rise to the surface of our awareness, and we can use that awareness to our advantage by checking the validity of the assumption; for example, I may ask my partner whether my assumption that he purchased milk while at the grocery store is correct.

All assumptions represent some amount of risk, suggesting the following questions:

  • Why do some assumptions emerge from our subconscious to be checked while others remain hidden?
  • How can we best surface assumptions and risks for the purpose of program planning and evaluation?

In order to answer these questions, we must first acknowledge that assumptions are an adaptive mechanism the brain uses to manage the sheer volume of stimuli and information it takes in every day. If I had to constantly monitor my car’s position, I wouldn’t be able to write this blog or do much of anything else. Assumptions are the cognitive shortcuts that allow us to be so productive and creative in our thinking.

It makes sense that our brains are always looking for ways to consolidate and use assumptions as much as possible to move us through our environment and our lives more efficiently. Following this reasoning, the only reason we would consciously identify and address an assumption is that the risk associated with it may supersede the advantages it offers. This is, in part, an answer to the first question:

  • Assumptions naturally become explicit when their associated risk is (a) known and (b) understood to be greater than the advantage of maintaining the assumption.

To understand how this assumption-risk dynamic plays out in program work, we need to unpack the idea of risk a bit more.

When we are thinking about community development programs (those focused on health, economic development, youth development, education, etc.), risks include program conditions or components that might:

  1. inhibit positive outcomes
  2. do harm to beneficiaries
  3. waste resources

These three types of risk (among other more context-specific risks) endanger program success and do supersede any advantage gained by maintaining an associated assumption. Therefore, it is critical that programs uncover assumptions associated with the various types of risk so that they can be addressed (checked or consciously accepted) by program staff and leaders.

It can be hard for program staff to uncover the assumptions that underlie the programs they work on day in and day out. Even when program staff understand the role and typology of assumptions (paradigmatic, prescriptive, causal), it can be hard to identify them without a more targeted path into the subconscious. For some program teams, using the idea of risk is an effective way of facilitating this brainstorming. For example, one might ask, “What conditions, outside our formal program plans, have to hold true in order for us to meet our objectives?” Or, “Why do we think this is the best approach given our available resources?” This brings us to an answer to the second question:

  • It is critical that program professionals, who have context and program specific expertise, intentionally engage in surfacing implicit program assumptions in preparation for planning, evaluation and learning. This can be accomplished either by brainstorming assumptions directly and identifying associated risks or by brainstorming possible program risks and identifying the associated assumptions.
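
As a concrete illustration of these two directions of brainstorming, here is a minimal sketch in Python of an assumption-risk register. All of the program content is invented for illustration; the point is simply that each assumption is paired with an associated risk and a risk rating, so that high-risk assumptions are flagged for explicit checking, as argued above.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str        # the implicit belief, made explicit
    associated_risk: str  # what could go wrong if the belief fails
    risk_level: str       # "low", "medium", or "high" (the team's judgment)

# A hypothetical register for an invented youth-education program.
# Entries can arrive from either direction of brainstorming:
# assumption -> associated risk, or known risk -> underlying assumption.
register = [
    Assumption(
        statement="Trained teachers will keep using the new curriculum",
        associated_risk="Positive learning outcomes are inhibited",
        risk_level="high",
    ),
    Assumption(
        statement="Families can spare children from chores to attend sessions",
        associated_risk="Attendance drops and resources are wasted",
        risk_level="medium",
    ),
]

# Flag the assumptions whose risk outweighs the convenience of leaving
# them unexamined -- these are the ones to check or consciously accept.
for a in register:
    if a.risk_level == "high":
        print(f"CHECK: {a.statement} (risk: {a.associated_risk})")
```

In practice the same structure works just as well in a spreadsheet; the code form simply makes the pairing of each assumption with its risk explicit.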

 

Uncovering program assumptions

Apollo M. Nkwake
nkwake@gmail.com

 

Assumptions are what we believe to hold true. They may be tacit or explicit. It is okay to assume; in fact, it is inevitable, because in order to make sense of a complex world, one needs to prioritize the variables and relationships that matter most. The danger arises when the variables that aren’t prioritized are assumed not to exist at all – that is, to assume that we haven’t assumed.

Examining our assumptions about how a program should work is essential for program success, since it helps to unmask risks. Program assumptions should be understood in relation to program outputs and outcomes.

An old adage goes: “You can lead a horse to water, but you can’t make it drink.”

In his book Utilization-Focused Evaluation, Michael Patton dramatizes this adage in a way that makes it easy to understand program outputs, outcomes, and assumptions:

  • The longer-term outcomes are that the horse stays healthy and works effectively.
  • The desired outcome is that the horse drinks the water (assuming the water is safe, the horse is thirsty, or that the horse herder has a theory of what makes horses want to drink water – the science of horse drinking).
  • But because program staff know that they can’t make the horse drink the water, they focus on things that they can control: leading the horse to the water, making sure the tank is full, monitoring the quality of the water, and keeping the horse within drinking distance of the water.
  • In short, they focus on the processes of water delivery (outputs) rather than the outcome of water drunk (or the horse staying healthy and productive – emphasis added).
  • Because staff can control processes but cannot guarantee attaining outcomes, government rules and regulations get written specifying exactly how to lead a horse to water.
  • Quality awards are made for improving the path to water – and keeping the horse happy along the way.
  • Most reporting systems focus on how many horses get led to the water, and how difficult it was to get them there, but never get around to finding out whether the horses drank the water and stayed healthy.

The outputs (horse led to the water) and outcomes (horse drinks the water and stays healthy) are clear in this illustration. What, then, are the assumptions? Here are my suggestions:

Given the ‘horse drinks water’ outcome, perhaps the horse is expected to be thirsty (or we know when the horse needs to drink), the water is expected to taste good, the horse herder is expected to understand the relationship between drinking and horse health, and other horses are not competing for the same water source. And just because one horse drinks the water doesn’t mean all of them will, for all sorts of reasons that we might not understand.

Given the ‘horse stays healthy’ outcome, perhaps the water is expected to be safe, and so on.

Most monitoring, evaluation and learning systems try to track program outputs and outcomes, but critical assumptions are seldom tracked. When they are, it is usually factors beyond stakeholders’ control that are tracked – such as no epidemic breaking out to kill all the horses in the community (external assumptions). In the above illustration, the water quality could be checked, the horse’s thirst could be manipulated with a little salty food, and there could be a system of managing the horses so that they all get a drink. These are assumptions within stakeholder influence (internal assumptions).

My point here is that examining (internal and external) assumptions alongside program outputs and outcomes unmasks risks to program success.
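
To make the internal/external distinction concrete, here is a minimal sketch in Python (the structure and labels are my own illustration, not Patton’s and not a standard M&E schema) that expresses the horse illustration as a simple results chain and marks which assumptions fall within stakeholder influence and could therefore be built into routine monitoring.

```python
# A minimal sketch of the horse illustration as a simple results chain.
# The structure and labels are illustrative, not a standard M&E schema.

output = "Horse is led to the water"
outcomes = ["Horse drinks the water", "Horse stays healthy and works"]

assumptions = {
    # Internal: within stakeholder influence, so they can be monitored.
    "Water is safe and tastes good": "internal",
    "Horse is thirsty (thirst can be encouraged with salty food)": "internal",
    "Horses are managed so each one gets a drink": "internal",
    # External: beyond stakeholder control.
    "No epidemic breaks out among the community's horses": "external",
}

# Track the output and outcomes, plus the internal assumptions --
# the ones the illustration suggests are seldom tracked.
to_monitor = [output, *outcomes] + [
    a for a, kind in assumptions.items() if kind == "internal"
]
for item in to_monitor:
    print("track:", item)
```

The design point is simply that assumptions sit alongside outputs and outcomes in the same structure, so a monitoring plan that omits them shows up as a visible gap rather than an invisible one.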

Recommended reading:
Evaluation and Program Planning, special issue on ‘Working with assumptions: Existing and emerging approaches for improved program design, monitoring and evaluation’

  • Volume overview: Working with assumptions. Existing and emerging approaches for improved program design, monitoring and evaluation – Apollo M. Nkwake, Nathan Morrow
  • Clarifying concepts and categories of assumptions for use in evaluation – Apollo M. Nkwake, Nathan Morrow
  • Assumptions at the philosophical and programmatic levels in evaluation – Donna M. Mertens
  • Interfacing theories of program with theories of evaluation for advancing evaluation practice: Reductionism, systems thinking, and pragmatic synthesis – Huey T. Chen
  • Assumptions, conjectures, and other miracles: The application of evaluative thinking to theory of change models in community development – Thomas Archibald, Guy Sharrock, Jane Buckley, Natalie Cook
  • Causal inferences on the effectiveness of complex social programs: Navigating assumptions, sources of complexity and evaluation design challenges – Madhabi Chatterji
  • Assumption-aware tools and agency; an interrogation of the primary artifacts of the program evaluation and design profession in working with complex evaluands and complex contexts – Nathan Morrow, Apollo M. Nkwake
  • Conclusion: Agency in the face of complexity and the future of assumption-aware evaluation practice – Nathan Morrow, Apollo M. Nkwake

 

Invitation to Participate — Assumptions in Program Design and Evaluation

Bob’s response to our first post reminded us that we forgot to add something important.
We are actively seeking contributions. If you have something to contribute, please contact us.
If you know others who might want to contribute, please ask them to contact us.

Jonny Morell jamorell@jamorell.com
Apollo Nkwake nkwake@gmail.com
Guy Sharrock Guy.Sharrock@crs.org

Introducing a Blog Series on Assumptions in Program Design and Evaluation

Assumptions drive action.
Assumptions can be recognized or invisible.
Assumptions can be right, wrong, or anywhere in between.
Over time assumptions can atrophy, and new ones can arise.
To be effective drivers of action, assumptions must simplify and distort.

Welcome to our blog series, dedicated to exploring these assertions. Our intent is to cultivate a community of interest. Our hope is that a loose coalition will form, through this blog and through other channels, that will enrich the wisdom evaluators and designers bring to their work. Please join us.

Jonny Morell jamorell@jamorell.com
Apollo Nkwake nkwake@gmail.com
Guy Sharrock Guy.Sharrock@crs.org