I am presenting a professional development workshop at the upcoming meeting of the European Evaluation Association. The workshop description appears below.

The workshop will combine lectures, group discussions, and breakout exercises to give participants the understanding needed to recognize how complex behavior may play a role in the models they develop, the methodologies they devise, and the messages they extract from their data. Without this understanding, evaluators cannot correctly describe what programs are doing, and why. If they get it wrong, so too will evaluation users and stakeholders.

Understanding the application of Complexity Science to Evaluation requires understanding the relationship between specific constructs and general themes. Each specific construct is useful in its own right, but its full value lies in appreciating the epistemology in which that construct is embedded. This workshop will treat both specific constructs that can be applied to evaluation, and the epistemological themes reflected by the application of those constructs. The general themes are: 1) patterns in observed consequences of program and policy behavior, 2) predictability of those consequences, and 3) reasons why those consequences came about. The specific constructs will include: stigmergy, attractors, emergence, phase transition, self-organization, and sensitive dependence. Thus, the intellectual structure of the workshop can be described as a matrix:

                           Pattern     Predictability     Change

  Stigmergy
  Attractors
  Emergence
  Phase transition
  Self-organization
  Sensitive dependence

To illustrate, consider an example involving statistics. One might make two statements:

  • I am going to apply statistical thinking to my data.
  • I am going to use logistic regression to analyze my data.

The first statement has meaning because it reflects “statistical thinking” – an epistemology that embodies beliefs about how the world works: that observations contain true score and error, that samples can be used to characterize populations, that probability estimates can be attached to observations, that group characteristics often matter more than individual characteristics, and so on. Without an appreciation of statistical thinking, it would be impossible to derive meaning from a statistical analysis of data.

On the other hand, statistical thinking alone is not enough. It does not explain what a researcher or an evaluator actually did. For that, one needs to make a statement such as: “I was analyzing an accident prevention program. To see if the program had an effect, I used logistic regression because that tool is a good way to model how the probability of an accident changes with exposure to the program.” To be able to make a statement like that, I would have to know what logistic regression is.
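To make the second statement concrete, here is a minimal sketch of a logistic regression fit on invented data (hypothetical safety-training hours versus whether an accident occurred), implemented with plain-Python gradient descent rather than a statistics package:

```python
import math

def sigmoid(z: float) -> float:
    """Logistic function: maps any real number to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit a one-predictor logistic regression by gradient descent.
    Returns (intercept, slope)."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad0 = grad1 = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(b0 + b1 * x) - y   # prediction error for this case
            grad0 += err
            grad1 += err * x
        b0 -= lr * grad0 / n
        b1 -= lr * grad1 / n
    return b0, b1

# Invented data: hours of safety training (x) versus whether an accident
# occurred in the follow-up period (1 = accident, 0 = no accident).
hours    = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
accident = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]

b0, b1 = fit_logistic(hours, accident)
p_low  = sigmoid(b0 + b1 * 1)   # estimated accident risk with little training
p_high = sigmoid(b0 + b1 * 8)   # estimated accident risk with much training
```

With this toy data the fitted slope is negative, so the model estimates a lower accident probability for cases with more training — exactly the kind of specific, inspectable claim the second statement makes possible.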

As with statistics, so it is with complexity. We in the Evaluation community tend to invoke complexity in a way that is analogous to the first statement. We recognize that the world behaves in complex ways, and that to do good evaluation we must appreciate complex behavior. But we are not very good at knowing the specifics, choosing among them, or using that knowledge to develop models, devise methodologies, or interpret data. Consider three examples of how understanding constructs from Complexity Science may affect decisions about evaluation.

Example #1 – Causality: What can be said about causality? Evaluators commonly work with models that specify linked chains (or networks) of intermediate outcomes. Based on those models, methodologies are developed and data are interpreted. This tactic carries the implicit assumption that it is possible to identify intermediate outcomes, that the path among those outcomes can be specified, and that knowing which outcomes have been achieved can provide actionable guidance for program improvement and replication. An appreciation of complex behavior, however, suggests other models that would drive other methodologies and data interpretations. Depending on what one thought about sensitive dependence and attractor shapes, one might entertain other hypotheses. One would be that it is possible to determine intermediate change in retrospect, but impossible to specify a causal path in advance. Another might be that there are numerous possible paths to success, that it might be possible to identify some common characteristics of those paths, and that travel along any of them would lead to more or less the same outcome.
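The hypothesis that change is traceable in retrospect but unspecifiable in advance can be illustrated with the classic logistic map, a one-line model that exhibits sensitive dependence (the parameter value here is the standard textbook choice for the chaotic regime, not drawn from any evaluation):

```python
# Sensitive dependence illustrated with the logistic map x' = r*x*(1-x).
# In the chaotic regime (r = 3.9), two trajectories whose starting points
# differ by one part in a million diverge completely within a few dozen
# steps: each path can be recorded after the fact, but which path will be
# taken cannot be predicted from any realistic measurement of the start.

def trajectory(x0: float, r: float = 3.9, steps: int = 50):
    """Iterate the logistic map and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.200000)
b = trajectory(0.200001)   # initial conditions differ by only 1e-6
max_gap = max(abs(x - y) for x, y in zip(a, b))
```

After fifty iterations the two trajectories are no longer meaningfully related, even though the rule generating them is completely deterministic and known.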

Example #2 – Innovation adoption: Imagine an evaluation that required assessing the success of an innovation adoption effort. A successful adoption pattern would be expected to follow the extensively demonstrated “S” curve. There would be one inflection point when the rate sharply increased, and a second when the rate flattened as a function of the percentage of adoption in the population. Knowing that, I might design an evaluation that looked at factors such as the number of potential users who were contacted, the number of influencers who were involved, the number of demonstrations conducted, and so on. But I could also make an effort to trace the network of contacts among adopters, expecting to see a scale-free, hub-dominated pattern if adoption were fueled by a preferential attachment process.
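As a minimal sketch of where that “S” curve comes from, the following simulation lets new adoptions per period depend both on current adopters (word of mouth) and on the remaining pool of non-adopters; the population size and rate are invented for illustration:

```python
# Discrete-time logistic diffusion: slow start, rapid middle, saturating end.

def simulate_adoption(pop=10_000, seed_adopters=10, rate=0.5, periods=40):
    """Return cumulative adopters per period under logistic growth."""
    adopters = [float(seed_adopters)]
    while len(adopters) <= periods:
        a = adopters[-1]
        adopters.append(a + rate * a * (1 - a / pop))  # both factors matter
    return adopters

curve = simulate_adoption()
new_per_period = [b - a for a, b in zip(curve, curve[1:])]
peak = new_per_period.index(max(new_per_period))  # period of fastest adoption
```

The per-period adoption rate is low at both ends and peaks in the middle, which is what produces the two inflection points of the cumulative “S” curve.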

Example #3 – Transformation as an outcome: Outcome models can be complicated because they can be characterized by nonlinear patterns and contextual tipping points that are not measured by change in the outcomes of interest. This has implications for various elements in a model of outcome change. For instance, the importance we attribute to contextual tipping points (also known as phase transitions) may affect our beliefs about the change elements we put into the outcome model. As an example, consider measurement of conversion from gasoline-powered to electric-powered vehicles. As of this writing, the percentage of electric vehicle use is minuscule. But manufacturers’ commitments to those vehicles, and the fact that many countries have mandated their use, may in fact be tipping points on the path to transformation. An outcome model that recognized the possibility of such changes, and hence included them as an element of transformation outcome, would not look like a model that ignored that possibility. And different models shape methodology and data interpretation. Or consider the challenge of time horizons for prediction and our theory as to when an inflection point might take place. If we believed that a tipping point would occur early in the change process, we would have much more confidence in any statements we might make about whether our efforts are succeeding in bringing about transformation.
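A deliberately stylized sketch of such a contextual tipping point: in the toy model below, an EV market share below a critical level fades for lack of supporting infrastructure, while a share just above that level becomes self-sustaining. The threshold and rates are invented, not calibrated to any real market:

```python
# Toy tipping-point (phase transition) model. Growth is self-sustaining
# only once the share passes a critical threshold; below it, interest
# fades. Two nearly identical starting points end in opposite regimes.

def final_share(initial_share, threshold=0.05, growth=0.4,
                decay=0.02, periods=300):
    """Return the long-run share after `periods` steps of the toy dynamic."""
    x = initial_share
    for _ in range(periods):
        if x > threshold:
            x = x + growth * x * (1 - x)   # self-sustaining diffusion
        else:
            x = max(0.0, x - decay * x)    # fades without critical mass
    return x

below = final_share(0.04)   # starts just under the tipping point
above = final_share(0.06)   # starts just over it
```

A 2-percentage-point difference in the starting condition produces near-total transformation in one case and near-total reversion in the other, which is why an outcome model that includes tipping points can look very different from one that does not.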

Learning objectives
Attendees will learn:

  • to apply specific complex behaviors when doing evaluation,
  • the historical and multi-disciplinary sources of the concept of “complexity”,
  • the range of complex behaviors that can be applied when doing evaluation,
  • how specific elements of complex behavior contribute to understanding pattern, predictability, and change, and
  • how and why common and familiar evaluation methodologies are sufficient for factoring complexity into routine evaluation practice.

Complexity-Informed Evaluation – An Exploration in Understanding Pattern, Predictability, and How Change Happens
