How does political ideology affect program theories, methodologies, and metrics? Participants will be randomly assigned to groups and asked to sketch an evaluation based on one of three positions. 1) Government has an obligation to alleviate social inequities and thereby promote the public good. 2) Government’s role is to uphold civil order so that people can pursue their own goals, with the consequences of their actions being their own personal responsibility. In general, less government is better. 3) The family is the primary unit of social cohesion, and there resides the locus of decisions about issues such as health and education. Government can be active or passive, as long as it supports the centrality of the family as the locus of moral authority and daily living. During report-backs, we will compare how the evaluation designs differ with respect to program theory, metrics, and methodology.
Evaluation is at its core a conservative business. We do not develop new programs. We do not plan innovative policies. We spend our time, our intellectual capital, and our resources on assessing programs that other people have already committed to. We are constrained to evaluate within the narrow frameworks given to us by our customers – i.e., by stakeholders who have the power, authority, influence, position, and drive to make innovation happen. We should not denigrate what those stakeholders do or who they are. Bringing about change is extraordinarily difficult, and those committed to effecting change are acting in good faith to do good as they see that good. Providing empirical understanding of the consequences of those actions, and guidance for improvement, is a noble pursuit. Still, although evaluation may be conservative in its scope, the choices we make within that scope are sensitive to a wide range of ideologies as we guide stakeholders through exercises to articulate program theory, make choices about what to measure, and decide upon a logic in which to embed our observations. These ideologies have led to well-known debates in our field. How much should we attend to the needs of stakeholders who are not involved in program development or execution? What effort should be put into creating knowledge rather than guiding decision makers? How far should we stray from measuring the outcomes that were planned for the program at hand? How tied should we be to the values of our customers? We cannot do our jobs well without confronting these questions. But as a practical (and perhaps as a moral) matter, we cannot (or perhaps should not) stray too far from the needs of the people who pay us. How then to best serve those needs? We contend that to provide the greatest value to our customers, we should base our work on an appreciation of the hidden assumptions in program theory that emanate from political ideology.
Program theory drives metrics and methodologies, and metrics and methodologies frame our findings and our conclusions. If we are trapped within our ideologies, we cannot help our customers understand the full scope of the changes they are trying to implement. We may or may not succeed in getting our customers to include that scope in their operations, but we cannot even try without self-knowledge. We believe that precious little of that self-knowledge resides within the intellectual stock of the individual members of AEA (after all, are we any different from all other individuals?) or within our membership collectively. That needs to change. We see this Think Tank as a small step in bringing about that change.