I have been toying with an idea about thinking of “fidelity” in terms of “attractors” as they are cast in complex adaptive systems (CAS). I have yet to convince myself that this idea makes any sense in either a formal or a metaphorical sense. I am even less convinced that even if it does make sense, that it is at all useful in helping to do better evaluation. I’m open to suggestions, so please whack away.
I got interested in this topic because now that “Evaluation in the Face of Uncertainty” has been published, I have been pondering how to advance some of the ideas I set out in that book. The “fidelity/attractor” idea is one of the trails I have been sniffing.
What is the problem?
I think there are two big ideas floating around in our field that are tugging at each other. The first is the notion of “fidelity”, i.e. the notion that for a program to be successful as it moves into different settings, it must maintain a core set of characteristics. (Let’s assume we know what those are.) The second idea is that things are never the same as they move through space and time. We know from research on innovation adoption that “reinvention” is common. And certainly, all our recent talk about developmental evaluation, systems, complexity, and so on speaks to the belief that programs are in a constant state of flux. I’m groping for a way to bring “fidelity” and “change” under a single conceptual umbrella.
What are attractors?
“An attractor is a set towards which a dynamical system evolves over time. That is, points that get close enough to the attractor remain close even if slightly disturbed.” (http://en.wikipedia.org/wiki/Attractor#Limit_cycle).
Closely related to the notion of an “attractor” is the idea of phase space: “In mathematics and physics, a phase space, introduced by Willard Gibbs in 1901, is a space in which all possible states of a system are represented, with each possible state of the system corresponding to one unique point in the phase space.” (http://en.wikipedia.org/wiki/Phase_space). Attractors come in three general categories.
A “point attractor” is an attractor where once all energy dissipates from a system, it settles down to a single point. A good example is a pendulum.
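To make the pendulum picture concrete, here is a minimal sketch (plain Python, simple Euler integration; the damping and other parameters are purely illustrative). It starts a damped pendulum from two very different states, and both trajectories dissipate down to the same rest point.

```python
import math

def damped_pendulum(theta, omega, damping=0.5, g_over_l=9.8, dt=0.01, steps=5000):
    """Euler-integrate theta'' = -damping*theta' - (g/l)*sin(theta).

    Returns the (angle, angular velocity) state after `steps` steps.
    """
    for _ in range(steps):
        accel = -damping * omega - g_over_l * math.sin(theta)
        theta += omega * dt
        omega += accel * dt
    return theta, omega

# Two very different starting states...
a = damped_pendulum(theta=1.0, omega=0.0)
b = damped_pendulum(theta=-2.0, omega=3.0)
# ...both settle (to within numerical error) onto the same point attractor:
# the rest state (angle 0, velocity 0).
```

The attractor here is a single point in phase space; no matter where (within limits) the system starts, it ends up there.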
A “periodic/limit cycle attractor” is an attractor in which there is continual movement, but all points can be identified. A planet in orbit would be a good example. Another example would be the cyclic behavior defined by predator/prey relationships. “An example of a limit cycle is a predator-prey system. Imagine a lake with trout and a smaller number of pike. Those pike eat the young trout. Because there is so much food the number of pike increases. This increase in pike and all the young trout they eat means that the number of trout decreases. The drop in trout numbers means that the pike have less to eat and the pike numbers then decrease. This allows the trout numbers to increase again and the cycle begins over. The populations of trout and pike rise and fall in a cyclic fashion.” (http://en.wikipedia.org/wiki/Attractor#Limit_cycle.) (By the way, this relationship can very easily transform into formally chaotic behavior if the parameters change a bit, but that’s another story.)
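The quoted trout/pike dynamic is the classic Lotka–Volterra predator-prey model, and a rough sketch of it is easy to write down (plain Python, Euler steps; the parameter values are made up, chosen only so the cycle shows up quickly):

```python
def lotka_volterra(prey, pred, a=1.0, b=0.5, c=0.5, d=0.2, dt=0.001, steps=8000):
    """Euler-integrate prey' = a*prey - b*prey*pred, pred' = c*prey*pred - d*pred.

    Returns the list of (prey, pred) states, which traces a loop in phase space.
    """
    history = []
    for _ in range(steps):
        d_prey = (a * prey - b * prey * pred) * dt
        d_pred = (c * prey * pred - d * pred) * dt
        prey += d_prey
        pred += d_pred
        history.append((prey, pred))
    return history

history = lotka_volterra(prey=1.0, pred=1.0)
prey_vals = [p for p, _ in history]
# Prey numbers rise while predators are scarce, peak, then fall as the
# predators multiply: the peak sits in the interior of the run, not at
# either end, which is the cycle turning over.
```

Unlike the pendulum, the attractor here is not a point but a closed loop: the system keeps moving forever, yet every state it visits is predictable.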
“Strange attractors are unique from other phase-space attractors in that one does not know exactly where on the attractor the system will be. Two points on the attractor that are near each other at one time will be arbitrarily far apart at later times. The only restriction is that the state of the system remain on the attractor. Strange attractors are also unique in that they never close on themselves — the motion of the system never repeats (non-periodic). The motion we are describing on these strange attractors is what we mean by chaotic behavior.” (http://www.stsci.edu/~lbradley/seminar/attractors.html).
What do attractors have to do with fidelity?
I wonder if it helps in evaluation to think of program fidelity as conforming to one or another type of attractor. Doing so would help with program theory, because it would clarify which states of the system we could expect to produce the intended effect. It would help with methodology, because different types of programs may require different methodologies.
Point attractor fidelity would be a situation in which, for a program to be effective, it had to operate within very narrow limits of variation. (I suppose this could be more like a limit or strange attractor operating in a very small phase space, but for me, thinking of such narrow variation in terms of a “point” is nice.) If data or theory supported the idea that fidelity had to be this precise, then the evaluation would have to contain a few elements.
First, the process evaluation would have to contain a very rigorous effort to see if the program’s supporting systems (e.g. regulations, funding levels, collaborations with related services), and internal operations (e.g. cross linkages, management control, expertise of service providers, client tracking) allowed the organization to monitor key elements of fidelity within very narrow parameters.
Second, program theory would say that this treatment may not be very useful in messy real-world situations. In this case we may want to test the applicability of the program in a variety of settings. In other words, even if the treatment were already proven in test settings, we may need to put considerable effort into testing outcomes in a variety of diverse contexts. This would add a lot to the cost and difficulty of doing the evaluation, but it may be necessary because we do not know much about the characteristics of programs and systems that nudge a program away from a narrow range of critical values.
Periodic attractor fidelity reflects a program theory that says that the way a treatment is administered can “wander around” over time, but in fairly predictable ways.
For instance, I can imagine a treatment program that is somewhat analogous to the fish example above. Imagine an industrial setting in which historically poor labor-management relations and a blame-based culture prevented the kind of collaborative problem solving that is needed to get to the root causes of accidents, thus preventing substantive improvements in safety. After all, why would labor report on their own bad behavior to management if doing so would result in punishment? (This hypothetical example is loosely based on some real programs I am evaluating.) An innovative program is established to give labor relief from certain kinds of disciplinary actions in order to foster collaborative problem solving. What might happen? The greater the relief from discipline, the greater the participation and the better the generation of corrective actions in the furtherance of safety. But too much relief from discipline might bring the company to a point where labor really did engage in too much dangerous behavior, thus prompting management to tighten up on discipline. This tightening, of course, would lower participation rates, thus reducing the amount and/or quality of safety improvements that could take place. After a while the decrease in effectiveness is perceived, and memory of all the problems caused by too little discipline fades. At that point relief from discipline goes up, as does the problem solving. Over time (maybe even a period of years), the cycle repeats.
In essence, “program fidelity” in the example consists of a narrow range of two parameters — manager and worker behavior. We can expect continual cycles of program impact based on inevitable tensions between those two parameters. That’s the program theory. From a methodological point of view, we need to observe those two parameters carefully and to understand the psychological, social, and organizational variables that affect the interplay between them. Why? Because if we understood that interplay, we could give stakeholders good advice on keeping both parameters at optimal levels.
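For what it’s worth, the seesaw I am describing can be caricatured in a few lines of code. Everything here is hypothetical: `relief` stands for relief from discipline and `risk` for perceived dangerous behavior, both measured as deviations from some baseline, with made-up gain parameters. More relief lets perceived risk creep up; more perceived risk prompts management to pull relief back, and the two chase each other around a loop.

```python
def discipline_cycle(relief, risk, gain_risk=0.4, gain_mgmt=0.6, dt=0.01, steps=2000):
    """Toy cyclic dynamic for the hypothetical safety program.

    relief' = -gain_mgmt * risk   (management tightens when risk is up)
    risk'   =  gain_risk * relief (risk creeps up when relief is generous)
    """
    history = []
    for _ in range(steps):
        d_risk = gain_risk * relief * dt
        d_relief = -gain_mgmt * risk * dt
        risk += d_risk
        relief += d_relief
        history.append((relief, risk))
    return history

history = discipline_cycle(relief=1.0, risk=0.0)
# relief and risk chase each other: risk builds while relief is generous,
# relief then swings below baseline as management tightens, risk subsides,
# relief recovers, and the cycle repeats.
```

This is a pure cartoon of the program theory, not a model of any real program; the only point is that “fidelity” here is a cycle through a band of states rather than a fixed point.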
Strange attractor fidelity is a situation in which program theory says that small changes in program operation can result in very large changes in the extent to which a treatment conforms to a predetermined set of operating characteristics.
As an example, take another scenario that is loosely based on some real evaluation I am doing. An organization is seeking ways to address serious problems that have proved impervious to many efforts at solution. The organization comes up with the idea that a diverse group of people, pulled from across its membership, and well trained in problem solving, might be able to come up with creative effective solutions. As the organization develops the program, it decides that a large number of elements need to be attended to in order to make the program work: 1) good group leadership, 2) good group followership, (i.e. members who understand how to participate in groups), 3) analytical expertise distributed in group members, 4) management’s ability to choose important but solvable problems for the groups to work on, 5) ability of the group to redefine or partition the problem, 6) available data, 7) motivation / reward system for participation, 8) selection process that brings people into the groups who have certain levels of skill, interest, and motivation, 9) a similar selection process for group leaders.
That is a lot of characteristics, and it is easy to imagine many interactions among them. What this means is that a great deal of variation in group effectiveness can come from small changes in any of the characteristics. Thus we can’t identify a single set of values for any one of these characteristics that would make for successful problem solving. An implication is that our program theory would not be very precise. The best we could do is identify some very broad ranges of variation for each parameter and some boundaries for a global measure of “problem solving ability”, which, in essence, would define the phase space in which the program had to operate. From a methodological point of view, this means we need a good global measure of “group problem solving ability” because it is impossible (and meaningless) to try to construct one from its constituent parts.
The above is as far as I have taken my musings on thinking about fidelity in terms of attractors. As I said in the beginning, I have not convinced myself that it is worth thinking of fidelity in this way. One big problem I see is that the typology I am using implies knowing a lot about treatments and their administration. The state of knowledge for almost all programs may well be that we can never know enough to apply the attractor idea.
So that’s why I need help. Comments are very much appreciated. Thanks to all who are willing to help.