I have been toying with an idea about thinking of “fidelity” in terms of “attractors” as they are cast in complex adaptive systems (CAS). I have yet to convince myself that this idea makes sense in either a formal or a metaphorical way. I am even less convinced that, even if it does make sense, it is at all useful in helping to do better evaluation. I’m open to suggestions, so please whack away.
I got interested in this topic because now that “Evaluation in the Face of Uncertainty” has been published, I have been pondering how to advance some of the ideas I set out in that book. The “fidelity/attractor” idea is one of the trails I have been sniffing.
What is the problem?
I think there are two big ideas floating around in our field that are tugging at each other. The first is the notion of “fidelity”, i.e. the notion that for a program to be successful as it moves into different settings, it must maintain a core set of characteristics. (Let’s assume we know what those are.) The second idea is that things are never the same as they move through space and time. We know from research on innovation adoption that “reinvention” is common. And certainly, all our recent talk about developmental evaluation, systems, complexity, and so on speaks to the belief that programs are in a constant state of flux. I’m groping for a way to bring “fidelity” and “change” under a single conceptual umbrella.
What are attractors?
“An attractor is a set towards which a dynamical system evolves over time. That is, points that get close enough to the attractor remain close even if slightly disturbed.” (http://en.wikipedia.org/wiki/Attractor#Limit_cycle).
Closely related to the notion of an “attractor” is the idea of phase space: “In mathematics and physics, a phase space, introduced by Willard Gibbs in 1901, is a space in which all possible states of a system are represented, with each possible state of the system corresponding to one unique point in the phase space.” (http://en.wikipedia.org/wiki/Phase_space). Attractors come in three general categories.
A “point attractor” is an attractor where, once all energy dissipates from a system, the system settles down to a single point. A good example is a pendulum with friction, which eventually hangs motionless no matter how it was set swinging.
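To make the pendulum example concrete, here is a minimal simulation (my own toy numbers, nothing from the post itself): two quite different starting states both end up at the same rest point.

```python
# A minimal sketch of a point attractor: a pendulum with friction.
# Wherever it starts, energy dissipates and the system settles to the
# single rest point (angle 0, velocity 0). All parameters are arbitrary.
import math

def damped_pendulum(theta0, omega0, damping=0.5, g_over_l=9.8,
                    dt=0.01, steps=5000):
    """Euler-integrate theta'' = -damping*theta' - (g/l)*sin(theta)."""
    theta, omega = theta0, omega0
    for _ in range(steps):
        alpha = -damping * omega - g_over_l * math.sin(theta)
        omega += alpha * dt
        theta += omega * dt
    return theta, omega

# Two quite different starting states end up at (almost) the same point.
a = damped_pendulum(1.0, 0.0)
b = damped_pendulum(-2.0, 3.0)
```

The whole phase space drains toward one point; that is what makes it a point attractor.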
A “periodic/limit cycle attractor” is an attractor in which there is continual movement, but all points can be identified. A planet in orbit would be a good example. Another example would be the cyclic behavior defined by predator/prey relationships. “An example of a limit cycle is a predator-prey system. Imagine a lake with trout and a smaller number of pike. Those pike eat the young trout. Because there is so much food the number of pike increases. This increase in pike and all the young trout they eat means that the number of trout decreases. The drop in trout numbers means that the pike have less to eat and the pike numbers then decrease. This allows the trout numbers to increase again and the cycle begins over. The populations of trout and pike rise and fall in a cyclic fashion.” (http://en.wikipedia.org/wiki/Attractor#Limit_cycle.) (By the way, this relationship can very easily transform into formally chaotic behavior if the parameters change a bit, but that’s another story.)
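The trout/pike story can be sketched with the classic Lotka-Volterra predator-prey equations. The rates below are invented for illustration, not fitted to any real populations; the point is only the cyclic rise and fall.

```python
# A minimal sketch of the trout/pike cycle as a Lotka-Volterra
# predator-prey system. All rates are invented for illustration.

def predator_prey(trout=10.0, pike=5.0, dt=0.001, steps=30000):
    """trout' = a*trout - b*trout*pike;  pike' = c*trout*pike - d*pike"""
    a, b, c, d = 1.0, 0.1, 0.02, 0.4
    trout_vals = []
    for _ in range(steps):
        d_trout = (a * trout - b * trout * pike) * dt
        d_pike = (c * trout * pike - d * pike) * dt
        trout, pike = trout + d_trout, pike + d_pike
        trout_vals.append(trout)
    return trout_vals

# The trout population rises well above and falls well below its
# starting value, over and over, without ever settling down.
trout_vals = predator_prey()
```

Plotting trout against pike would show the system tracing a closed loop in phase space, which is the cycle the quote describes.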
“Strange attractors are unique from other phase-space attractors in that one does not know exactly where on the attractor the system will be. Two points on the attractor that are near each other at one time will be arbitrarily far apart at later times. The only restriction is that the state of the system remain on the attractor. Strange attractors are also unique in that they never close on themselves — the motion of the system never repeats (non-periodic). The motion we are describing on these strange attractors is what we mean by chaotic behavior.” (http://www.stsci.edu/~lbradley/seminar/attractors.html).
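The “near each other at one time, arbitrarily far apart later” property can be shown with the Lorenz system, the textbook strange attractor (its standard parameters, not anything from the post):

```python
# A minimal sketch of sensitive dependence on a strange attractor:
# the Lorenz system with its classic parameters. Two starting points
# that differ by one part in a million end up far apart, yet both
# trajectories stay on the same bounded attractor.

def lorenz_step(state, dt=0.002, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

a = (1.0, 1.0, 1.0)
b = (1.000001, 1.0, 1.0)      # almost identical start
max_gap = 0.0
for _ in range(20000):
    a = lorenz_step(a)
    b = lorenz_step(b)
    gap = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    max_gap = max(max_gap, gap)
# max_gap grows from a millionth to order ten, while both trajectories
# remain confined to the attractor.
```

So the attractor constrains where the system can be without ever telling you exactly where it is, which is the property the quote is getting at.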
What do attractors have to do with fidelity?
I wonder if it helps in evaluation to think of program fidelity as conforming to one or another type of attractor. Doing so would help with program theory because it would help us understand the states of the system that we could consider as being able to have the expected effect. It would help with methodology because different types of programs may require different methodologies.
Point attractor fidelity would be a situation in which, for a program to be effective, it must operate within very narrow limits of variation. (I suppose this could be more like a limit or strange attractor operating in a very small phase space, but for me thinking of such narrow variation in terms of a “point” is nice.) If data or theory supported the idea that fidelity had to be this precise, then the evaluation would have to contain a few elements.
First, the process evaluation would have to contain a very rigorous effort to see if the program’s supporting systems (e.g. regulations, funding levels, collaborations with related services), and internal operations (e.g. cross linkages, management control, expertise of service providers, client tracking) allowed the organization to monitor key elements of fidelity within very narrow parameters.
Second, program theory would say that this treatment may not be very useful in messy real world situations. In this case we may want to test the applicability of the program in a variety of settings. In other words, even if the treatment were already proven in test settings, we may need to put considerable effort into testing outcomes in a variety of diverse contexts. This would add a lot to the cost and difficulty of doing the evaluation, but it may be necessary because we do not know much about the characteristics of programs and systems that nudge a program away from a narrow range of critical values.
Periodic attractor fidelity reflects a program theory that says that the way a treatment is administered can “wander around” over time, but in fairly predictable ways.
For instance, I can imagine a treatment program that is somewhat analogous to the fish example above. Imagine an industrial setting in which historically poor labor–management relations and a blame-based culture prevented the kind of collaborative problem solving that is needed to get to the root causes of accidents, thus preventing substantive improvements in safety. After all, why would labor report on their own bad behavior to management if doing so would result in punishment? (This hypothetical example is loosely based on some real programs I am evaluating.) An innovative program is established to give labor relief from certain kinds of disciplinary actions in order to foster collaborative problem solving. What might happen? The greater the relief from discipline, the greater the participation and the better the generation of corrective actions in the furtherance of safety. But too much relief from discipline might bring the company to a point where labor really did engage in too much dangerous behavior, thus prompting management to tighten up on discipline. This tightening, of course, would lower participation rates, thus reducing the amount and/or quality of safety improvements that could take place. After a while the decrease in effectiveness is perceived, and also, memory of all the problems caused by too little discipline fades. At that point relief from discipline goes up, as does the problem solving. Over time (maybe even a period of years), the cycle repeats.
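The verbal loop above can be caricatured as a deliberately toy two-variable model, every number in it invented, just to show that lagged mutual feedback of this kind produces a repeating cycle rather than a settled level:

```python
# A toy model of the example's cycle: R is relief from discipline,
# M is management tightening. All parameters are invented; the point
# is only that when each variable chases the other with a lag, the
# system cycles instead of converging.

R, M = 0.8, 0.2          # start with generous relief, little tightening
relief_history = []
dt = 0.01
for _ in range(5000):
    dR = (0.5 - M) * dt  # relief drifts up while tightening is low, down when high
    dM = (R - 0.5) * dt  # tightening grows while relief (hence risky behavior) is high
    R, M = R + dR, M + dM
    relief_history.append(R)
# Relief keeps swinging widely around its middle value instead of settling.
```

Nothing about the real programs is captured here; the sketch only shows the shape of the dynamic the paragraph describes.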
In essence, “program fidelity” in the example consists of a narrow range of two parameters — manager and worker behavior. We can expect continual cycles of program impact based on inevitable tensions between those two parameters. That’s the program theory. From a methodological point of view, we have a need to carefully observe those two parameters and to understand the psychological, social and organizational variables that affect the interplay between them. Why? Because if we understood that interplay, we could give stakeholders good advice on keeping those parameters at their optimal levels.
Strange attractor fidelity is a situation in which program theory says that small changes in program operation can result in very large changes in the extent to which a treatment conforms to a predetermined set of operating characteristics.
As an example, take another scenario that is loosely based on some real evaluation I am doing. An organization is seeking ways to address serious problems that have proved impervious to many efforts at solution. The organization comes up with the idea that a diverse group of people, pulled from across its membership, and well trained in problem solving, might be able to come up with creative, effective solutions. As the organization develops the program, it decides that a large number of elements need to be attended to in order to make the program work:
1) good group leadership,
2) good group followership (i.e. members who understand how to participate in groups),
3) analytical expertise distributed among group members,
4) management’s ability to choose important but solvable problems for the groups to work on,
5) ability of the group to redefine or partition the problem,
6) available data,
7) a motivation / reward system for participation,
8) a selection process that brings people into the groups who have certain levels of skill, interest, and motivation,
9) a similar selection process for group leaders.
These are a lot of characteristics, and it is easy to imagine lots of interactions among them. What this means is that a great deal of variation in group effectiveness can come from small changes in any of the characteristics. Thus we can’t identify a single set of values for any one of these characteristics that would make for successful problem solving. An implication is that our program theory would not be very precise. The best we could do is identify some very broad ranges of variation for each parameter and some boundaries for a global measure of “problem solving ability”, which, in essence, would define the phase space in which the program had to operate. From a methodological point of view, this means we need a good global measure of “group problem solving ability” because it is impossible (and meaningless) to try to construct one from its constituent parts.
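The “broad ranges plus a global boundary” idea can be sketched as a simple check. Every name, band, and score below is a placeholder of my own, not a real program measure:

```python
# A minimal sketch of "fidelity as a region of phase space": instead of
# matching each characteristic 1:1, check that every characteristic sits
# within a broad band AND that one global measure stays in bounds.
# All names, bands, and scores are hypothetical placeholders.

def within_fidelity_region(state, bands, global_score, score_bounds):
    """state: characteristic -> measured value; bands: characteristic -> (lo, hi)."""
    in_bands = all(bands[k][0] <= v <= bands[k][1] for k, v in state.items())
    lo, hi = score_bounds
    return in_bands and lo <= global_score <= hi

site = {"leadership": 0.7, "data_access": 0.4, "motivation": 0.6}
bands = {"leadership": (0.3, 1.0), "data_access": (0.2, 0.9), "motivation": (0.4, 1.0)}
ok = within_fidelity_region(site, bands, global_score=0.65, score_bounds=(0.5, 1.0))
```

No single characteristic has a required value; fidelity is a judgment about the region the whole state occupies, plus the one global measure the paragraph calls for.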
The above is as far as I have taken my musings on thinking about fidelity in terms of attractors. As I said in the beginning, I have not convinced myself that it is worth thinking of fidelity in this way. One big problem I see is that the typology I am using implies knowing a lot about treatments and their administration. The state of knowledge for almost all programs may well be that we can never know enough to apply the attractor idea.
So that’s why I need help. Comments are very much appreciated. Thanks to all who are willing to help.
3 thoughts on “Is it useful to think of “fidelity” in terms of “attractors?””
Interesting. I tend to share your caution raised in your opening paragraph. Maybe fidelity can be expressed in terms of attractors, but so what? As time goes on and as systems ideas – especially those from the complexity arena – start gaining ground in the evaluation field, I think it is increasingly important we address the “so what?” question.
I get a lot of examples of the use of systems ideas in evaluation sent to me. I would say around half of the time, systems ideas are used to describe phenomena or to explain behaviours that reveal insights that established evaluation approaches would struggle to expose. In which case, the use of systems ideas passes my “so what” test. The other half reveal insights that are either well known (so I guess are not really insights) or are perfectly capable of being described by methods well established in the evaluation field. In which case, they don’t pass my “so what” test. This is not a new idea; Michael Scriven long ago raised this issue in the Evaluation Thesaurus.
So what I’d suggest you ponder on (and you are probably one of the few people in the evaluation field who can do this with any degree of rigour) is what an “attractor” based approach can do that will allow evaluators to make as good as, or preferably better, judgments of worth compared with more common (and perhaps simpler) organisational and social inquiry methods.
I’m with Bob in not quite seeing the value in this. It could be that an implementation behaves something like a periodic or strange attractor, but I find it hard to think of a situation in which this is desirable. So it could be a descriptive model, but not a normative one. More significantly, even as a descriptive model I do not think it would have much predictive power. The attractor concept from chaos theory is based on a complicated but fundamentally deterministic process. Human systems would be far too messy (goal-oriented, irrational at times, etc.) to have the mathematical fidelity needed for chaos theory type patterns to evolve.
What might be interesting would be to explore the analogy between fidelity in implementation and fidelity in human relationships. Should it be fidelity with respect to detailed rules, or fidelity with respect to maintaining the overall understanding and commitment between the parties? Is fidelity absolute, or does it vary by culture and with time?
Thank you for the thought-provoking post.
The best I can say is that it is useful for me. I used to think of fidelity in terms of a set of 1:1 relationships. Here are X critical characteristics of a treatment, let’s match conformance to each one of those, more conformance to more characteristics is better. It’s easy enough to think of cases where this is true.
My problem is that I am beginning to think there are many situations where: 1) it is possible to identify critical characteristics, but 2) it is NOT true that program effectiveness is a function of 1:1 conformance to those characteristics. (I’d bet my example above is one of them.) It’s not a question of what is desirable, it is a question of how the world works.
But if the world works that way, what are we to do about the notion of “fidelity”? Does it lose all meaning? I can’t believe that, so I’m struggling with a way to understand the concept. I don’t have it all worked out in my mind, but I’m sniffing up a trail in which fidelity is defined as the phase space in which all the individual elements of fidelity live. The question then is whether the individual characteristics of fidelity are operating within ranges that keep within that phase space. If they are, we can say that fidelity to the treatment design is maintained. If not, not.
As for it being a deterministic process, that’s fine with me. I’m perfectly content to say that there is a set of conditions that are needed for a treatment to be effective, and that it is possible to determine if those conditions are met. What I have trouble with is saying that “that set of conditions being met” means a set of 1:1 correspondences.