
Workshop: Grappling With the Unexpected: From Firefighting to Systematic Action

Workshop slides for Grappling With the Unexpected: From Firefighting to Systematic Action

Workshop description: All evaluators deal with unintended events that foul their evaluation plans. Either the program does not work as planned, the evaluation does not work as planned, or both. Usually we treat these situations as fires, i.e., we exercise our skills to meet the crisis. This workshop steps back from crisis mode and presents a systematic treatment of why these situations arise, the continuum from “unforeseen” to “unforeseeable”, the tactics that can be used along that continuum, and why caution is needed: anything we do to minimize the effect of surprise may itself cause other difficulties. The intent of the workshop is twofold: first, to provide individual attendees with skills and knowledge they can employ in their own practice; second, to further a community of interest among evaluators dedicated to developing a systematic understanding of unanticipated events that threaten the integrity of evaluation.

What to Do When Impacts Shift and Evaluation Design Requires Stability?

This is an abstract of a presentation I gave at AEA 2014. Click here for the slide deck.

How can impact be measured if impact is a moving target? Some examples: 1) a qualitative pre-post design uses semi-structured questions to assess people’s distress during refugee resettlement; 2) a post-test-only quantitative study of innovation adoption requires comparisons between design engineers in different parts of a company; 3) a health awareness program requires a validated survey. But what if, halfway through, different outcomes are suspected: family dynamics among the refugees, skills of production engineers, mental health in the health awareness case? There is no problem if the solution is a post-test-only design using existing data or flexible interview protocols, but those are just a few arrows in the quiver of evaluation methods. How can the full range of design choices be exploited when so many of those choices assume a stable impact? The answer depends on making design choices based on an understanding of why and when outcomes shift, and on the amount of uncertainty about those outcomes.

Systems as Program Theory and as Methodology: A Hands-On Approach over the Evaluation Life Cycle: Workshop at the American Evaluation Association Summer Institute

Description: This workshop will provide an opportunity to learn how to use a systems approach when designing and conducting evaluation. The presentation will be practical; it is intended to give participants a hands-on ability to make pragmatic choices about developing and doing evaluation. Topics covered will be: 1) What do systems “look like” in terms of form and structure? 2) How do systems behave? 3) How can systems be used to develop program theory, as a methodology, and as a framework for data interpretation? 4) How should a systems approach be used along different parts of the evaluation life cycle, from initial design to reporting? The workshop will be built around real evaluation cases, and participants will be expected to work in groups to apply the material presented.

Slide deck
Example

System design: Requirements, complexity, and cost

Systems that meet a relatively small number of requirements will usually give people most of what they need, if not most of what they want. But people, many of whom should know better, insist on having it all, and thus doom themselves to building systems that fail: either the systems are built and do not perform to expectations, or they are never finished. Why is this so, and how can people be brought to appreciate the “requirements trap”? The problem exists because: Full article
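As an aside, here is a minimal sketch (not from the article itself) of one way to see the requirements trap in numbers. It assumes, purely for illustration, that each requirement adds roughly constant value while integration effort grows with the number of pairwise interactions among requirements; under that assumption, value per unit of effort falls quickly as the requirement count grows.

```python
# Toy model of the "requirements trap" (illustrative assumptions only):
# each requirement contributes equal value, while integration effort scales
# with the number of requirement pairs that must be reconciled, n*(n-1)/2.

def pairwise_interactions(n: int) -> int:
    """Number of requirement pairs that must be reconciled with each other."""
    return n * (n - 1) // 2

def value(n: int, value_per_requirement: float = 10.0) -> float:
    """Assumed: each requirement adds the same amount of user value."""
    return n * value_per_requirement

def integration_effort(n: int, effort_per_interaction: float = 1.0) -> float:
    """Assumed: effort grows with pairwise interactions among requirements."""
    return pairwise_interactions(n) * effort_per_interaction

if __name__ == "__main__":
    for n in (5, 10, 20, 40, 80):
        v, e = value(n), integration_effort(n)
        print(f"{n:>3} requirements: value {v:6.0f}, effort {e:7.0f}, "
              f"value/effort {v / max(e, 1.0):.2f}")
```

Under these assumptions the value-to-effort ratio drops from about 5.0 at 5 requirements to about 0.25 at 80, which is one way to make concrete the claim that a small set of requirements delivers most of what people need.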