How/Can Evaluators Plan For Change Over Time?

I see two genres of futuring. One is based on the belief that, however uncertainly, we can envision a future and plan to get there. I call this the “Locksley Hall” approach, after the lines in Tennyson’s poem “Locksley Hall”: “For I dipt into the future, far as human eye could see, / Saw the Vision of the world, and all the wonder that would be.” The second genre relies on the behavior of complex systems and is much less sanguine about futuring. We plan; God laughs.

Agent-based Modeling as an Evaluation Methodology

This paper presents our experiment in applying agent-based modeling to an evaluation scenario. It is based on a real evaluation, although we had to add quite a bit of detail to build the executable model. It’s a small-scale exercise that we hope will convey how agent-based modeling differs from equation-based modeling, and why the agent-based approach yields knowledge that would otherwise be unavailable.
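To make the contrast concrete, here is a minimal sketch (in Python) of the two modeling styles applied to a hypothetical program-adoption scenario; the agents, parameters, and adoption rule are illustrative assumptions, not the model from the paper. The equation-based version tracks one aggregate adoption curve, while the agent-based version lets adoption emerge from individual contacts among heterogeneous participants.

```python
import random

# Equation-based view: one aggregate adoption curve (logistic growth).
def logistic_adoption(p0, rate, steps):
    """Return the adoption fraction over time from a single aggregate equation."""
    p = p0
    series = [p]
    for _ in range(steps):
        p = p + rate * p * (1 - p)
        series.append(p)
    return series

# Agent-based view: adoption emerges from individual interactions.
class Participant:
    def __init__(self, adopted=False, susceptibility=0.1):
        self.adopted = adopted
        self.susceptibility = susceptibility  # chance of adopting per step with an adopter contact

def run_abm(n_agents=100, n_seeds=5, steps=30, contacts_per_step=3, seed=42):
    rng = random.Random(seed)
    agents = [Participant(adopted=(i < n_seeds), susceptibility=rng.uniform(0.05, 0.2))
              for i in range(n_agents)]
    history = []
    for _ in range(steps):
        for agent in agents:
            if agent.adopted:
                continue
            # Each non-adopter meets a few randomly chosen peers this step.
            peers = rng.sample(agents, contacts_per_step)
            if any(p.adopted for p in peers) and rng.random() < agent.susceptibility:
                agent.adopted = True
        history.append(sum(a.adopted for a in agents) / n_agents)
    return history

if __name__ == "__main__":
    print("equation-based:", [round(x, 2) for x in logistic_adoption(0.05, 0.4, 10)])
    print("agent-based:   ", [round(x, 2) for x in run_abm(steps=10)])
```

Because each agent carries its own susceptibility and contact history, the agent-based run can surface variation and path dependence that the single aggregate equation averages away.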

Evaluations as Experiments in Systems: Value for Evaluation, Value for System Science

Evaluation is well stocked with knowledge about how to evaluate programs in terms of systems. Our stock is much thinner when it comes to evaluating the logic of the systems themselves. But separating the two and then bringing them back together can advance our understanding of both programs and systems. This assertion is illustrated with four examples: 1) causal chains, 2) stocks and flows, 3) network development and structure, and 4) complex systems with an attractor/equilibrium focus.
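As one illustration of the second example, here is a small stock-and-flow sketch in Python; the “enrolled participants” stock, the constant inflow, and the fractional outflow are hypothetical assumptions chosen for illustration. It also hints at the fourth example, since the stock settles toward an equilibrium where inflow equals outflow.

```python
def simulate_stock(initial_stock=50.0, inflow_rate=10.0, outflow_fraction=0.15, steps=24):
    """Track a single stock (e.g., enrolled participants) over discrete time steps.

    inflow_rate: constant number entering per step (e.g., referrals per month)
    outflow_fraction: share of the current stock leaving per step (e.g., attrition)
    """
    stock = initial_stock
    history = [stock]
    for _ in range(steps):
        inflow = inflow_rate
        outflow = outflow_fraction * stock
        stock = stock + inflow - outflow
        history.append(stock)
    return history

if __name__ == "__main__":
    series = simulate_stock()
    # Equilibrium occurs where inflow equals outflow: inflow_rate / outflow_fraction.
    print("equilibrium stock ~", round(10.0 / 0.15, 1))
    print([round(s, 1) for s in series[:6]], "...", round(series[-1], 1))
```

With an inflow of 10 per step and 15% attrition per step, the stock drifts toward roughly 67 regardless of where it starts, which is the kind of system-level behavior worth examining apart from any particular program’s details.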

Evaluation Influence During Program Design and Funding: An Ecosystem / Environment Perspective

Lately I have been thinking about the Evaluators’ Eternal Problem, namely, how to influence decisions that are baked in during the funding and planning stages of programming. Woe unto us, we evaluators have precious little influence over what happens when those decisions get made. In this post I present an ecosystem/environment approach for addressing our EEP.

Evaluating Systems as Systems and Using that Knowledge to Inform Program Evaluation

There is much talk about how our programs are (or should be) thought of in terms of systems, and quite a bit of progress is being made toward that end. But there is a difference between:

• evaluating programs in terms of systems, and
• evaluating the systems themselves, i.e., abstracting the system structure from the details of a program.

The purpose of this document is to take a stab at the latter. Why bother? For two reasons. First, understanding the system that underlies a program will help us understand the program. Second, similar system structures may indicate similarities across seemingly disparate programs.

The Logic in Logic Models Part 2

This is the second of two blog posts on the logic in logic models. (More will come in the future.) I discuss three levels of model specificity. The first is the siloed model, which is specific only at a high level of abstraction (e.g., outputs --> outcomes). This form lists elements within each high-level category but does not specify the relationships among them. The second form is the “box and arrow” layout that is so common in evaluation. The third adds to the “box and arrow” form additional information about relationships, e.g., designations of how strong or likely each relationship is.
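A rough sketch of the three forms as data structures may make the progression easier to see; the program elements, strength labels, and likelihood numbers below are hypothetical, chosen purely for illustration. The siloed list names elements, the box-and-arrow form adds edges, and the third form annotates each edge.

```python
# Level 1: a siloed model; elements are grouped, but relationships are unspecified.
siloed_model = {
    "outputs": ["training sessions delivered", "materials distributed"],
    "outcomes": ["knowledge gain", "behavior change"],
}

# Level 2: a box-and-arrow model; each edge names which element leads to which.
box_and_arrow = [
    ("training sessions delivered", "knowledge gain"),
    ("materials distributed", "knowledge gain"),
    ("knowledge gain", "behavior change"),
]

# Level 3: the same edges annotated with how strong or likely each link is.
annotated_model = [
    {"from": "training sessions delivered", "to": "knowledge gain",
     "strength": "strong", "likelihood": 0.8},
    {"from": "materials distributed", "to": "knowledge gain",
     "strength": "weak", "likelihood": 0.4},
    {"from": "knowledge gain", "to": "behavior change",
     "strength": "moderate", "likelihood": 0.6},
]

for edge in annotated_model:
    print(f"{edge['from']} -> {edge['to']}: {edge['strength']} (p~{edge['likelihood']})")
```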

The Logic in Logic Models Part 1: Extending Models in the Directions of Less and More Specificity

I’m working on a series about the logic that can/should be contained in logic models. This is the first post in the series. Its message is that the range of knowledge that can be reflected in a model can be extended in two directions. One is toward less specificity and detail; the other is toward more specificity, by designating connections in AND/OR terms. Either tactic can be appropriate, depending on the circumstances.
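As a toy illustration of the “more specificity” direction, the sketch below encodes a hypothetical outcome that requires training AND coaching, OR incentives alone; the element names and the rule itself are invented for this example, not drawn from any particular program.

```python
# AND/OR logic for a single outcome: (training AND coaching) OR incentives.
def outcome_supported(training: bool, coaching: bool, incentives: bool) -> bool:
    """Return True when the hypothetical outcome's support conditions are met."""
    return (training and coaching) or incentives

# Enumerate the combinations to show how AND/OR designations sharpen the model
# compared with a plain box-and-arrow layout, where every arrow merely "contributes".
for training in (False, True):
    for coaching in (False, True):
        for incentives in (False, True):
            print(f"training={training}, coaching={coaching}, incentives={incentives}"
                  f" -> supported={outcome_supported(training, coaching, incentives)}")
```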