The application of Social-Ecological Systems (SES) thinking to evaluation

 

Aaron Zazueta a@zazuetagroup.com    

Among the many approaches to complex systems thinking, proponents of Social-Ecological Systems (SES) have developed a set of concepts for understanding and modeling the interlinked dynamics of social and environmental change. When addressing the transformation of large complex systems, I find particularly useful the SES concepts of boundaries, domains, scales, agents, adaptive behavior, emergence, and system development trajectory. I have used these concepts to construct theories of change (TOCs) that help me understand how, and to what extent, projects, programs, or policies interact with social-ecological systems to steer development processes in the direction of a given trajectory or set of policy goals. The notion that all systems are composed of interconnected subsystems helps focus attention on the phenomena relevant to the desired policy goals. The concept of domains helps further identify the critical conditions that can enable or hamper change in the direction of a given development trajectory (such conditions can include the presence of sound governance, the availability of knowledge and technology to solve a problem, or the necessary institutional capacities). Because domains cut across the whole SES, the concept is also helpful for tracing system interactions across different scales of space and time.
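As one illustration of how these concepts can be made operational when mapping a TOC, the sketch below encodes domains, enabling conditions, and scales as simple data structures. It is only a hypothetical sketch; the domain name, scales, and conditions are invented for illustration and are not drawn from any specific evaluation.

```python
from dataclasses import dataclass, field

@dataclass
class Condition:
    """An enabling (or hampering) condition within a domain, observed at a given scale."""
    name: str
    scale: str        # e.g. "local", "national", "regional"
    enabling: bool    # True if the condition currently supports the desired trajectory

@dataclass
class Domain:
    """A cross-cutting SES domain (governance, knowledge and technology, capacities, ...)."""
    name: str
    conditions: list = field(default_factory=list)

    def gaps(self):
        """Conditions that currently hamper movement along the desired trajectory."""
        return [c for c in self.conditions if not c.enabling]

# Hypothetical example: a governance domain assessed at two scales.
governance = Domain("governance", [
    Condition("transparent land-use rules", scale="national", enabling=True),
    Condition("local enforcement capacity", scale="local", enabling=False),
])

for gap in governance.gaps():
    print(f"TOC should address: {gap.name} ({gap.scale} scale)")
```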

Agents and their adaptive behavior underlie the phenomena that make up the system and its components. SES scholars assume that systems operate through the actions and reactions of agents (the agents’ adaptive behavior). While agents command different resources and are influenced differently by the conditions in the various domains, they are linked, either directly or through other agents. Under the right conditions, even relatively minor agents can generate behaviors that reverberate across the system. The aggregated adaptive behavior of agents, responding to other agents and to factors external to the system, results in the emergence of system-level patterns that can be quite different from the behaviors of the individual agents. Based on these concepts, it is possible to develop a model of the conditions that are likely to steer agent behavior in a given direction and of the extent to which an intervention has contributed, or is likely to contribute, to adaptive behavior consistent with the desired long-term policy goals.
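The link between individual adaptive behavior and system-level emergence is commonly explored through agent-based simulation. The toy model below is a generic threshold-adoption sketch, offered only to illustrate how simple local rules of imitation can tip into an abrupt system-wide pattern; it is not the author’s model, and all parameters are arbitrary.

```python
import random

def run_adoption_model(n_agents=200, neighbors=8, steps=50, seed=1):
    """Toy threshold model: each agent adopts a practice once enough of its
    randomly sampled 'neighbors' have adopted it. Individual rules are simple,
    but the system-level adoption curve can tip abruptly (emergence)."""
    random.seed(seed)
    thresholds = [random.uniform(0.1, 0.5) for _ in range(n_agents)]
    adopted = [i < 10 for i in range(n_agents)]          # a few initial adopters
    history = []
    for _ in range(steps):
        new_state = adopted[:]
        for i in range(n_agents):
            if adopted[i]:
                continue
            sample = random.sample(range(n_agents), neighbors)
            share = sum(adopted[j] for j in sample) / neighbors
            if share >= thresholds[i]:                   # adaptive response to other agents
                new_state[i] = True
        adopted = new_state
        history.append(sum(adopted) / n_agents)
    return history

print(run_adoption_model()[:10])   # system-level adoption share over the first 10 steps
```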

SES thinking also assumes that the adaptive behavior of agents contributes to varying degrees of unpredictability and non-linearity. It is thus important not to expect that, in complex systems, outputs or results will correspond neatly to inputs. When dealing with complex systems, therefore, effective development interventions are those that behave like other agents in the system and adopt adaptive management as the approach to steering the system’s development trajectory. Adaptive management entails clear long-term goals (or a trajectory direction), identification of alternative management objectives, development of a set of hypotheses of causation, and procedures for collecting data to adjust those hypotheses (ongoing evaluation).
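Read as a procedure, the adaptive management cycle described above could be sketched roughly as follows. The function names, update rule, and example objectives are placeholders invented for illustration, not a prescribed implementation.

```python
import random

def adaptive_management(objectives, hypotheses, act_and_observe, update, cycles=5):
    """Schematic adaptive-management loop: pick the management objective the current
    hypotheses favor, intervene, collect monitoring data, and revise the hypotheses
    (ongoing evaluation). Placeholder logic only."""
    for _ in range(cycles):
        objective = max(objectives, key=lambda o: hypotheses.get(o, 0.0))
        evidence = act_and_observe(objective)                 # intervene and monitor
        hypotheses = update(hypotheses, objective, evidence)  # adjust hypotheses of causation
    return hypotheses

# Hypothetical usage: two candidate objectives; 'evidence' is a noisy progress score.
random.seed(0)
true_effect = {"restore wetlands": 0.7, "subsidize efficient irrigation": 0.3}

def act_and_observe(objective):
    return true_effect[objective] + random.uniform(-0.1, 0.1)

def update(hypotheses, objective, evidence):
    revised = dict(hypotheses)
    revised[objective] = 0.5 * revised.get(objective, 0.5) + 0.5 * evidence
    return revised

print(adaptive_management(list(true_effect), {o: 0.5 for o in true_effect},
                          act_and_observe, update))
```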

 

Traditions of ‘Complexity and Systems Science’?

Martin Reynolds (The Open University), Applied Systems Thinking in Practice (ASTiP) Group, School of Engineering and Innovation, The Open University, Walton Hall, Milton Keynes MK7 6AA, United Kingdom. +44 (0) 1908 654894 | martin.reynolds@open.ac.uk

From a systems thinking in practice (STiP) tradition, I would first like to change the formulation from ‘complexity and systems science’ to ‘complexity science and systems thinking’ (cf. Reynolds et al., 2016). The revised formulation matters for two reasons in appreciating the respective lineages. First, contemporary ideas on complexity, including the ‘butterfly effect’ and ‘complex adaptive systems’, are very much rooted in the scientific tradition dating from Warren Weaver’s 1947 paper ‘Science and Complexity’. Second, contemporary systems thinking should be regarded as a transdisciplinary endeavour inclusive of systems science and complexity science, but reaching far beyond the confines of a scientific discipline (Reynolds and Howell, 2020). Note that systems science and complexity science have many common lineages, including pioneering work around cybernetics in the 1940s. Appreciating the value of complexity science and systems thinking requires, in my view, attention to the ontological and epistemological dimensions of how complexity and systems are appreciated.

Complexity as used in complexity science invokes the scientific ontological (real-world) premise that everything connects. Ideas of uncertainty and emergence are tied to appreciating reality as an infinite network of interconnections, the effects of which are impossible to predict precisely. Systems science might be regarded as an endeavour to systematically bound those interconnections that an impartial observer recognizes as relevant to a particular situation of interest. By so doing, the ensuing bounded systems can be subjected to scientific analysis. In both systems science and complexity science, the key epistemological driver is positivism: there is an assumed direct representation between reality and systems (ontological realism; e.g. ‘the’ health system), subject to inquiry by an impartial, ‘objective’ observer (the scientist).

In contrast, complexity as used in a STiP tradition is an effect of contrasting human perspectives on the framing of interconnections, rather than an effect of the interconnections directly. In the STiP tradition, ‘systems’ as ontological representations of reality are legitimate, but the representations are always nominal (named by a human ‘observer’), provisional (with boundaries subject to change from other observers), and secondary. Nominal systems such as (i) natural systems (individual organisms, ecosystems, the solar system, etc.) or (ii) engineered (purposive) systems (mechanical devices ranging from computers to heating systems) are secondary to a primary understanding and active use of systems as conceptual constructs, which may be referred to as (iii) human (purposeful) systems. Purposeful systems (where the bounded purposes are subject to ongoing adaptive change) are a powerful tool of contemporary STiP. As distinct from ‘seeing’ reality only as natural or engineered systems, purposeful systems enable such viewings to be tamed within a primary framing of a learning system (as an epistemological construct). Such primary framings enable organisations and interventions in education, health, and other fields to be not only evaluated but (re)designed. The STiP tradition, founded on epistemological constructivism, recognizes complexity as an effect of contrasting viewpoints on reality. Complexity here is a second-order attribute of interconnections in situations of interest – the indirect human framings of interconnections. Complexity in complexity science is a first-order attribute of the interconnections themselves.

The difference is significant for all practitioners in all professional fields. In a STiP tradition, complexity exists in all situations (since no human situation comprises only one perspective). Each individual or group of individuals frames things differently depending on lifeworld experiences, including, amongst other demographics, ethnic backgrounds. STiP flushes out the framings of situations in terms of transparent purposeful systems in order to help improve those situations through more meaningful conversation amongst practitioners. In increasingly uncertain times, in which racism is being called out internationally following the killing of Black American George Floyd, it is perhaps worth recalling the founding principle of STiP, which takes its cue from C. West Churchman: “a systems approach begins when you see the world through the eyes of another” (1968, p. 23).

Constructing a Deep Complexity and Systems Science Foundation for the Field of Evaluation

Jonny Morell, PhD
Meg Hargreaves, PhD

This section of the blog Evaluation Uncertainty, Surprises in Programs and their Evaluations is an effort to continue and expand prior work setting evaluation theory and evaluation practice within a deep understanding of complexity and systems science research and theory. Four beliefs motivate our current efforts.

  • Not all evaluation needs to invoke systems concepts, but much of it does.
  • When system concepts are needed, evaluators must use those concepts to make operational decisions about the theoretical frameworks and models they construct, the methodologies they devise, and the meaning they draw from data.
  • To make those decisions wisely, evaluators require a deep-seated understanding of the research and theory generated by complexity and systems science.
  • At present, too little of evaluation’s reliance on systems is based on such deep understanding.

“This section of the blog” and “continue and expand” are important phrases in the opening paragraph. We hope that what we are doing here will spur a wider range of discussion and inquiry.

We envision two kinds of contributions to this blog. To start, posts will say nothing at all about evaluation. Rather, evaluators will provide explanations of the intellectual domains within complexity and systems science that affect their work. We see those contributions as providing an imperfect but reasonably wide view of the domains of complexity and systems science that influence how evaluation is done. We want these posts to be short, somewhere between one and four paragraphs.

Once there is a sufficient variety of material, contributions will broaden to include a wider set of evaluators, more discussion of complexity and systems roots, and examples of how research and theory in the fields of complexity and systems have affected practical decisions about theoretical frameworks, models, metrics, and methodologies.