
How can the concept of “attractors” be useful in evaluation? Part 8 of a 10-part series on how complexity can produce better insight into what programs do, and why

Common Introduction to all sections

This is part 8 of 10 blog posts I’m writing to convey the information I present in various workshops and lectures about complexity. I’m an evaluator, so I think in terms of evaluation, but I’m convinced that what I’m saying is equally applicable to planning.

I wrote each post to stand on its own, but I designed the collection to provide a wide-ranging view of how research and theory in the domain of “complexity” can contribute to the ability of evaluators to show stakeholders what their programs are producing, and why. I’m going to try to produce a YouTube video on each section. When (if?) I do, I’ll edit the post to include the YT URL.

Part | Title | Approximate post date
1 | Complex systems or complex behavior? | up
2 | Complexity has awkward implications for program designers and evaluators | 6/14
3 | Ignoring complexity can make sense | 6/21
4 | Complex behavior can be evaluated using comfortable, familiar methodologies | 6/28
5 | A pitch for sparse models | 7/1
6 | Joint optimization of unrelated outcomes | 7/8
7 | Why should evaluators care about emergence? | 7/16
8 | Why might it be useful to think of programs and their outcomes in terms of attractors? | 7/19
9 | A few very successful programs, or many, connected, somewhat successful programs? | 7/24
10 | Evaluating for complexity when programs are not designed that way | 7/31

A few very successful programs, or many, connected, somewhat successful programs? Part 9 of a 10-part series on how complexity can produce better insight into what programs do, and why

Common Introduction to all sections

This is part 9 of 10 blog posts I’m writing to convey the information I present in various workshops and lectures about complexity. I’m an evaluator, so I think in terms of evaluation, but I’m convinced that what I’m saying is equally applicable to planning.

I wrote each post to stand on its own, but I designed the collection to provide a wide-ranging view of how research and theory in the domain of “complexity” can contribute to the ability of evaluators to show stakeholders what their programs are producing, and why. I’m going to try to produce a YouTube video on each section. When (if?) I do, I’ll edit the post to include the YT URL.

Part | Title | Approximate post date
1 | Complex systems or complex behavior? | up
2 | Complexity has awkward implications for program designers and evaluators | 6/14
3 | Ignoring complexity can make sense | 6/21
4 | Complex behavior can be evaluated using comfortable, familiar methodologies | 6/28
5 | A pitch for sparse models | 7/1
6 | Joint optimization of unrelated outcomes | 7/8
7 | Why should evaluators care about emergence? | 7/16
8 | Why might it be useful to think of programs and their outcomes in terms of attractors? | 7/19
9 | A few very successful programs, or many, connected, somewhat successful programs? | 7/24
10 | Evaluating for complexity when programs are not designed that way | 7/31

Evaluating for complexity when programs are not designed that way. Part 10 of a 10-part series on how complexity can produce better insight into what programs do, and why

Common Introduction to all sections

This is part 10 of 10 blog posts I’m writing to convey the information I present in various workshops and lectures about complexity. I’m an evaluator, so I think in terms of evaluation, but I’m convinced that what I’m saying is equally applicable to planning.

I wrote each post to stand on its own, but I designed the collection to provide a wide-ranging view of how research and theory in the domain of “complexity” can contribute to the ability of evaluators to show stakeholders what their programs are producing, and why. I’m going to try to produce a YouTube video on each section. When (if?) I do, I’ll edit the post to include the YT URL.

Part | Title | Approximate post date
1 | Complex systems or complex behavior? | up
2 | Complexity has awkward implications for program designers and evaluators | 6/14
3 | Ignoring complexity can make sense | 6/21
4 | Complex behavior can be evaluated using comfortable, familiar methodologies | 6/28
5 | A pitch for sparse models | 7/1
6 | Joint optimization of unrelated outcomes | 7/8
7 | Why should evaluators care about emergence? | 7/16
8 | Why might it be useful to think of programs and their outcomes in terms of attractors? | 7/19
9 | A few very successful programs, or many, connected, somewhat successful programs? | 7/24
10 | Evaluating for complexity when programs are not designed that way | 7/31

 

Workshop: Logic Models — Beyond the Traditional View


Workshop slides used at the 2011 meeting of the American Evaluation Association.

Workshop description: When should we use logic models? How can we maximize their explanatory value and usefulness as an evaluation tool? This workshop presents three broad topics that will increase the value of using logic models. First, we will explore an expanded view of the forms logic models can take, including 1) the range of information that can be included, 2) the use of different forms and scales, 3) the types of relationships that may be represented, and 4) uses of models at different stages of the evaluation life cycle. Second, we will examine how to balance visual design against information density in order to make the best use of models with various stakeholders and technical experts. Third, we will consider epistemological issues in logic modeling, addressing 1) strengths and weaknesses of ‘models’, 2) relationships between models, measures, and methodologies, and 3) conditions under which logic models are and are not useful. Through lecture and both small- and large-group discussions, we will move beyond the traditional view of logic models to examine their applicability, value, and relatability to attendees’ experiences.

You will learn:

  • The essential nature of a ‘model’ and its strengths and weaknesses;
  • Uses of logic models across the entire evaluation life cycle;
  • The value of using multiple forms and scales of the same logic model for the same evaluation;
  • Principles of good graphic design for logic models;
  • Evaluation conditions under which logic models are, and are not, useful;
  • The relationship among logic models, measurement, and methodology.

Workshop: Grappling With the Unexpected: From Firefighting to Systematic Action

Workshop slides for: Grappling With the Unexpected: From Firefighting to Systematic Action

Workshop description: All evaluators deal with unintended events that foul their evaluation plans. Either the program does not work as planned, or the evaluation does not work as planned, or both. Usually we treat these situations as fires, i.e., we exercise our skills to meet the crisis. This workshop steps back from crisis mode and presents a systematic treatment of why these situations pop up, the continuum from “unforeseen” to “unforeseeable”, the tactics that can be used along that continuum, and why caution is needed: anything we do to minimize the effect of surprise may itself cause other difficulties. The intent of the workshop is twofold: first, to provide individual attendees with skills and knowledge they can employ in their own practice; second, to further a community of interest among evaluators dedicated to developing a systematic understanding of the phenomenon of unanticipated events that threaten the integrity of evaluation.

What to Do When Impacts Shift and Evaluation Design Requires Stability?

This is an abstract of a presentation I gave at AEA 2014. Click here for the slide deck.

How do we measure impact if impact is a moving target? Some examples: 1) a qualitative pre-post design uses semi-structured questions to determine people’s distress during refugee resettlement; 2) a post-test-only quantitative study of innovation adoption requires comparisons between design engineers in different parts of a company; 3) a health awareness program requires a validated survey. But what if, halfway through, different outcomes are suspected: family dynamics among the refugees; skills of production engineers; mental health in the health awareness case? No problem if the solution is a post-test-only design using existing data or flexible interview protocols. But those are just a few arrows in the quiver of evaluation methods. How can we make the best design choices when so many of those choices assume stability of impact? The answer depends on making design choices based on an understanding of why and when outcomes shift, and of how much uncertainty there is about those outcomes.