The graphic superimposes a chessboard on a random walk. It symbolizes a core challenge in evaluation. 

The Chessboard   

Program outcomes are predictable in the commonsense meaning of the word: “If I provide service X, outcome A will occur.” That statement is a model, X → A, and it is the foundation of almost every evaluation model I have seen. The models we build are more elaborate and more subtle. They contain 1:1, 1:many, many:1, and many:many relationships. They often have feedback loops. Sometimes they acknowledge that a relationship exists but cannot be specified. But at root, the “if X, then A” logic prevails. This logical foundation of our models serves us well. I use it all the time and I recommend that everyone use it. Hence the chessboard graphic, which I offer as a metaphor for “if X, then A” logic.
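To make the structure concrete, here is a minimal sketch in Python of that logic as a directed graph. Everything in it is illustrative: the node names (service_X, outcome_A, and so on) and the link and downstream helpers are invented for this example, not part of any evaluation toolkit or any real program's model.

```python
# A sketch of "if X, then A" model structure as a directed graph.
# All names are hypothetical; the point is the structure, not the program.
from collections import defaultdict

logic_model = defaultdict(list)  # cause -> list of effects

def link(cause, effect):
    """Record an 'if cause, then effect' relationship."""
    logic_model[cause].append(effect)

link("service_X", "outcome_A")   # the simple X -> A statement
link("service_X", "outcome_B")   # 1:many
link("service_Y", "outcome_A")   # many:1
link("outcome_A", "service_X")   # a feedback loop

def downstream(node, seen=None):
    """Everything reachable from a node; feedback loops are tolerated."""
    seen = set() if seen is None else seen
    for effect in logic_model.get(node, []):
        if effect not in seen:
            seen.add(effect)
            downstream(effect, seen)
    return seen

print(sorted(downstream("service_X")))  # ['outcome_A', 'outcome_B', 'service_X']
```

A plain mapping of causes to effects is enough to hold 1:1, 1:many, many:1, many:many, and feedback relationships; nothing fancier is needed to capture the chessboard's rule-governed moves.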

The Random Walk

The programs we evaluate exhibit complex system behavior. They can manifest emergent patterns. Local change can affect long-term system-wide direction. Attractor shape and self-organization can explain both resistance to change and sustainability. Network structure can explain robustness. Scaling parameters can describe resource needs. Consequential threats to a program’s viability, or opportunities for growth, may be hiding in the tails of log-linear distributions. These phenomena are real. We must incorporate them into our models if we are to give our customers knowledge they can use. Hence the random walk graphic, which I offer here as a metaphor for complexity.
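For readers who would rather see the metaphor than take it on faith, here is a minimal sketch of a random walk, assuming nothing beyond Python's standard library. The step rule, run length, and seeds are arbitrary choices for illustration, not a model of any actual program.

```python
# A sketch of the random-walk metaphor: one simple local rule,
# widely divergent long-run trajectories.
import random

def random_walk(steps=1000, seed=None):
    """Sum of +1/-1 steps; returns where the walk ends up."""
    rng = random.Random(seed)
    position = 0
    for _ in range(steps):
        position += rng.choice((-1, 1))
    return position

# Ten walks, identical rule, different random seeds:
print([random_walk(seed=s) for s in range(10)])
# The endpoints scatter widely -- local steps accumulate into a
# system-wide direction that no single step predicts.
```

Every walk follows the identical local rule, yet the long-run positions differ sharply, which is the “local change can affect long-term system-wide direction” point in miniature.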

The Evaluator’s Challenge

I have never understood why evaluators feel a compulsion to compose a single model. Stakeholders are allowed to differ on what outcomes a program will produce and on how the program will produce those outcomes. Models are hypotheses. Why test only one if a plausible alternative hypothesis can also be tested? In any case, different models provide different understandings. Why limit the range of understanding? It is foolish to ignore the knowledge that can come from a chessboard model. It is also foolish to ignore the reality that programs exhibit complex behavior. The trick is to do both in a way that meets time and budget constraints. Hence a graphic that superimposes a metaphor for X → A understanding on a metaphor for understanding complex behavior.