I have just published Evaluation in the Face of Uncertainty: Anticipating Surprise and Responding to the Inevitable. One feature of the book is a collection of cases in which evaluators had to adjust to unexpected program behaviors. But I do not have enough examples to continue analyzing patterns and categories. If you have any examples, please post them here or send me email at jamorell@jamorell.com.
It’s great to see this blog up and running about uncertainty, which is of course a fact of life! What are some tactics for readjusting evaluation techniques? Does that get in the way of the preliminary evaluation, or is it simply necessary to get the job done? I’m always thinking about how evaluative techniques can be built in from the design phase, since we always need to be able to analyze what we’ve done in order to know whether it’s effective at various scales. I’m curious whether there are trends in the unintended consequences of evaluation that might inform such a process.