People have been asking me if I could briefly summarize my work on evaluation in the face of uncertainty. It took me a while, but I finally came up with the following.
NEED FOR CREATIVE THINKING ABOUT UNEXPECTED PROGRAM BEHAVIOR
I believe:
1) In powerful evaluation methodology.
2) Powerful methodology must encompass a broad range of quantitative and qualitative data collection techniques and research designs.
3) In order to have powerful evaluation designs, advance planning and rigorous implementation strategies are needed.
I know that:
1) Programs change in unexpected ways.
2) Program change may render many parts of an evaluation obsolete.
So how can we maintain maximum evaluation power in the face of uncertain program behavior? Answering this question has been the motivation for my work over the past few years.
I developed a theory of unexpected program behavior in which there is a continuum from events that might reasonably be foreseen, to those that are impossible to foresee. By “impossible” I mean really impossible, because they spring from the dynamics of complex adaptive systems.
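To give a feel for why “really impossible” is not an exaggeration, here is a toy sketch in Python (a deliberately simplified illustration, not drawn from my case studies). It uses the logistic map, a classic stand-in for complex adaptive dynamics: two runs that begin a millionth apart soon bear no resemblance to each other, so no feasible measurement of starting conditions could predict the later states.

```python
# A toy illustration of "really impossible": the logistic map, a one-line
# stand-in for complex adaptive dynamics. Two runs that start almost
# identically diverge until they bear no resemblance to each other.

def logistic_map(x0, r=3.9, steps=30):
    """Iterate x -> r * x * (1 - x); chaotic for r near 3.9."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

run_a = logistic_map(0.500000)  # one "program trajectory"
run_b = logistic_map(0.500001)  # a near-identical starting condition

for t in (0, 10, 20, 30):
    print(f"step {t:2d}: a={run_a[t]:.4f}  b={run_b[t]:.4f}")
# The gap grows from one part in a million to total divergence by step 30:
# no feasible precision in measuring the starting state would have helped.
```

Real programs are vastly messier than one equation, but the lesson carries over: past some horizon, prediction fails in principle, not just in practice.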
The same factors that make for uncertainty in program behavior also make for uncertainty in evaluation behavior. This is because both are similar social constructions — collections of people and resources, organized for a particular purpose, and set within a social / organizational / political / economic context.
I propose a variety of methodologies that are differentially useful at different points along the continuum. None of the specific methods are esoteric or innovative, although some come from other fields and may be unfamiliar to evaluators, and others may put familiar methods to innovative uses. What’s important is the relationships among these methods, and the value of setting them within a theory of unexpected behavior in programs and evaluations.
The theory and techniques are informed by a set of eighteen case studies, which have been analyzed and used to support various parts of my arguments. Two a priori frameworks are used to organize the data: 1) the relationship between program and evaluation life cycles, and 2) social/organizational factors (e.g., whether an issue springs from internal program behavior or from the program’s environment). Three other categorizations emerged from the data analysis: 1) pilot and feasibility tests, 2) resistance to evaluation, and 3) incorrect assumptions early in the evaluation life cycle.
GOOD REASONS NOT TO FOLLOW MY ADVICE
Evaluation designs and their execution follow the same dynamics that drive any system. This means that perturbing the system, or adding elements to it, can create complications of its own. For instance, it may be desirable to add a clinical records review to interviews with clients, thereby ensuring data availability if the interviews fail to materialize. But adding the review adds time and cost to the evaluation, which may mean missing a window of opportunity to educate stakeholders, or leaving fewer resources for data analysis. A framework is provided to help guide decisions about what changes to make, and what positive and negative consequences they might have.
I’m focusing on two initiatives to further what I have done so far.
Agent-based modeling and complex adaptive systems
1) Continue the work my colleagues and I have been doing to tightly integrate traditional evaluation with agent-based modeling and simulation, with knowledge flowing continually back and forth between the model and the empirical data collection (a minimal sketch of this loop appears below). I want to do this for two reasons. First, it would give evaluators greater lead time in scouting for unanticipated changes that may be affecting the programs they are evaluating. Lead time is critical because the longer the lead time, the greater the possibilities for adjusting evaluation designs. Second, it is worth exploring whether the principles of complex systems can explain program behavior better than our traditional methods of understanding how social systems work. If anyone has a lead on funding to do this work, please let me know.
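To make that feedback loop concrete, here is a minimal sketch. Everything in it, from the dropout scenario to the parameter values and function names, is hypothetical; it is not a description of the models my colleagues and I have built. A one-parameter “agent-based” dropout model is re-run against each wave of empirical data, the parameter is nudged toward what was observed, and a widening gap between model and data serves as the early warning that buys lead time.

```python
import random

# Minimal, hypothetical sketch of the model <-> data feedback loop: the
# "program" is a pool of clients who drop out with some probability, the
# "model" is a single parameter, and calibration is a crude nudge toward
# each new round of observed data.

def simulate_dropout(drop_prob, n_clients=200, rng=random.Random(42)):
    """Agent-based step: each client independently drops out or stays."""
    return sum(rng.random() < drop_prob for _ in range(n_clients)) / n_clients

def update_model(model_prob, observed_rate, learning_rate=0.5):
    """Feed empirical data back into the model by nudging its parameter."""
    return model_prob + learning_rate * (observed_rate - model_prob)

model_prob = 0.10                     # model's initial belief about dropout
observed = [0.12, 0.15, 0.22, 0.30]   # hypothetical rates from each data wave

for wave, obs in enumerate(observed, start=1):
    predicted = simulate_dropout(model_prob)
    model_prob = update_model(model_prob, obs)
    # Lead time comes from the comparison: a widening gap between what the
    # model expects and what the data show is an early warning that the
    # program is changing and the evaluation design may need adjustment.
    gap = abs(obs - predicted)
    flag = "  <-- investigate: model and program diverging" if gap > 0.05 else ""
    print(f"wave {wave}: predicted={predicted:.2f} observed={obs:.2f}{flag}")
```

In a real application the model would be far richer and the calibration more principled, but the structure of the loop is the point: simulate, compare to data, update, and flag divergence early enough to adjust the evaluation design.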
Find more examples
2) Collect as many additional cases of unexpected program behavior as possible, and use the data to refine or change the views I have been advocating. I am on a determined hunt for more examples in order to advance my thinking about evaluating in the face of surprise. If you have any, or know people who might, please contact me.
Links to what I have written recently are at: http://www.jamorell.com