There is a great deal of conversation in our field about the need to move from linear-based to system-based evaluation. The discussion then turns to what system-based evaluation is and how to go about doing it. The reasoning is one of transformation from a “linear” to a “system” state. Something is missing in this line of reasoning, namely, what “linear” means. This language leads us astray because many models that people disparage as “linear” are in fact deeply system based. First, I make the case that seemingly linear models can exhibit non-linear behavior. Then I discuss how system-based frameworks can be applied to those seemingly linear models.
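A toy simulation can make this claim concrete. Everything below is an illustrative sketch of my own (the variable names, coefficients, and feedback strength are invented, not taken from any real evaluation model): a plain output-to-outcome chain produces straight-line growth, while the same chain with a single feedback loop from outcome back to output produces accelerating, non-linear growth.

```python
# Illustrative sketch only: a "linear" logic-model chain vs. the same
# chain with one feedback loop. All names and coefficients are invented.

def run(steps, feedback=0.0):
    output, outcome = 1.0, 0.0
    history = []
    for _ in range(steps):
        outcome += 0.5 * output        # linear chain: outcome accumulates in proportion to output
        output += feedback * outcome   # feedback loop: outcome boosts next period's output
        history.append(outcome)
    return history

linear = run(10)                 # constant increments: a straight line
looped = run(10, feedback=0.1)   # same model plus one loop: growth accelerates
```

The point is not the numbers but the structure: the model’s components and arrows are unchanged, yet one loop is enough to make its behavior non-linear.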

3 thoughts on “Recognizing the System Behavior of Seemingly Linear Models”

  1. This is nicely explained and clear. I’ve been saying this for many years, and so have people such as Patricia Rogers. I have one concern, and it is a deep one that applies right across the debate around attribution and contribution. I make a distinction between evaluation and research. It’s not a zero-sum game, more like Japanese paper folding; all research contains evaluative elements, and all evaluation contains research elements. It boils down to purpose. Research (with a capital R) tends to address questions that start with ‘what’ or ‘why’. Evaluation questions need to have some aspect of value (e.g., significance, worth, merit) assessed against some identified criteria. I’ve often said that the reason that evaluation is in so much trouble is not that the values challenge the status quo, but because we end up doing cheap, unreliable and overly short research that everyone can pick holes in. The more complex the relationship between elements of the evaluand, the more expensive and demanding the necessary data collection. So in a sense, if this problem is one of imbalance between the research and evaluative elements of our Evaluations, then what you are arguing for is essentially a feedback loop that requires more evaluation. Researching the nature of these individual relationships between components and their potential dynamics risks further worsening the quality of the evaluative aspect, given that time and cost are indeed zero-sum.

    All this is one reason why I tend to urge the adoption of systems ideas to the evaluative aspect of an Evaluation rather than the research aspect. Leave the how and why to researchers, who have the time, skills and budgets to untangle all this with some degree of rigour. Let us focus on the ‘so what’ judgments of value.

    1. Oh apologies. Poor editing on my part. This section should have ended with a different word:

      I’ve often said that the reason that Evaluation is in so much trouble is not that the values challenge the status quo, but because we end up doing cheap, unreliable and overly short research that everyone can pick holes in. The more complex the relationship between elements of the evaluand, the more expensive and demanding the necessary data collection. So in a sense, if this problem is one of imbalance between the research and evaluative elements of our Evaluations, then what you are arguing for is essentially a feedback loop that requires more research.

    2. Hello twentytwokorokoro,

      Thank you for taking the time and effort to respond to my post. I appreciate it because you touched on many topics that are near and dear to my heart.

      EVALUATION AND RESEARCH

      The question is not evaluation and research. It is technology and science. I believe that evaluation counts as technology, not science. In the broadest of terms, technologists care about what works. Scientists care about what is true. (Evaluation as Social Technology  https://evaluationuncertainty.com/2024/03/13/evaluation-as-social-technology-2/)

      There is a world of difference because correct prediction can be based on incorrect theory, and correct theory may not be able to predict. There is a lag relationship in both directions. Scientists do not want to depend on the latest and greatest, and often unproved, technology. They wait to make sure the technology will work as advertised. When do technologists need science? When their technology fails, which means the technology is being applied in settings where the limits of the existing (and possibly incorrect) theory come into play.

      WHY IS EVALUATION IN TROUBLE?

      Of course, what you said is right. But there is more. I don’t see any reason why technology-based inquiry needs to be methodologically inadequate. But evaluation often is inadequate, for the reasons you cite.

      But in addition to those reasons, even the outcomes most proximate to program outputs might take a long time to manifest. How many logic models have you seen that specify (however imprecisely) the timing between outputs and first-tier outcomes, and beyond? Not many, I’d venture. This gets to your comment about “cheap”. The longer the timeline, the more expensive the budget. Also, even with all the money in the world, sponsors are under considerable pressure to get fast results. What to do if the time from output to outcome is two years but the budget cycle is one year? Money won’t get anyone out of that trap.

      SYSTEMS IN EVALUATION AND RESEARCH

      As I said above, the issue is not evaluation and research, but evaluation as technology, and science dealing with the what and why. Especially in the social sciences, systems issues are often relevant, and when they are, systems need to be brought into theory and research design. Two examples that spring to mind are economics and organizational studies. I have seen a lot of writing about economies as complex systems. I have also seen organizational studies that cast inter-organizational behavior in terms of birth and death rates, predator-prey relationships and ecosystem behavior. A great deal of this reasoning is rooted in complex system behavior.
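The predator-prey reasoning mentioned above can be sketched with a minimal discrete-time Lotka-Volterra model (a standard textbook system; the parameter values and starting populations here are arbitrary illustrative choices, not drawn from any organizational study):

```python
# Minimal Euler-step Lotka-Volterra sketch of predator-prey dynamics.
# Parameters and starting populations are arbitrary illustrative choices.

def lotka_volterra(steps, prey=35.0, pred=6.0,
                   a=0.1, b=0.02, c=0.3, d=0.01, dt=0.1):
    """Prey grow at rate a and are consumed at rate b;
    predators die at rate c unless sustained by prey at rate d."""
    traj = []
    for _ in range(steps):
        dprey = a * prey - b * prey * pred
        dpred = d * prey * pred - c * pred
        prey += dt * dprey
        pred += dt * dpred
        traj.append((prey, pred))
    return traj

traj = lotka_volterra(500)  # the two populations cycle
```

Near the equilibrium (prey = c/d, predators = a/b) the two populations cycle: prey decline while predators are abundant, predators then starve, and prey recover. Birth-death and ecosystem analogies of this kind are one way “complex system behavior” gets formalized.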
