Big Data in Evaluation

I have been developing an interest in “big data” as it may relate to the field of evaluation. The interest comes from two sources. First, as the Editor of the journal Evaluation and Program Planning, I am on the lookout for cutting-edge material to present to our readers. Second, as someone who has done research and theoretical work on the unintended consequences of program action, I see big data as a methodology that may help reveal such consequences. As a result of these two interests, I’m looking for:

  • Examples where a big data approach has revealed consequences of program action that were not anticipated, and
  • People who may want to write about big data as it applies to the field of evaluation.

If you can help, please get in touch with me at jamorell@jamorell.com.

Michael Bamberger has provided a case of unintended consequences: How a school breakfast program became a community nutrition program

A school breakfast program was organized in a food-insecure rural area of Nicaragua to increase school enrollment. In some schools, teachers also gave food to younger siblings who came along when mothers were bringing students to school.

Azenet Book Club – Life Cycles, Rigid evaluation requirements, and Implementation theory

Discussion in the first session of the Azenet Tucson book club centered on the theory and explanatory power section, along with the introduction to life cycle behavior (p. 49). The most common evaluation activity among our members is evaluation of state or federally funded programs (DOE, SAMHSA, OJJDP, BJA). Common characteristics:

  • programs have a few years to implement an ‘evidence-based practice’
  • evaluation is closely structured around performance measures, often with online reporting requirements
  • some projects require comparison groups or other hard-to-implement designs

While these programs are start-ups and are supposed to mature, the rigidity of the reporting requirements means that evaluation often leaps right over implementation issues to outcomes, substituting ‘fidelity’ measures and specific program monitoring (dichotomous: ‘did they or didn’t they do X?’) or ‘sustainability’. Even SAMHSA’s Strategic Prevention Framework, which is trying to build coalitions to address local substance abuse issues, doesn’t use implementation theory or program life cycle ideas, at least here in Arizona. So (surprise!) when we don’t have any theory, the plan doesn’t include any way to measure or record what happened as the program or coalition weathered maturation changes, staff turnover, and the like, and there’s no way to report on it.

Do you have specific recommendations for theory on social and educational program implementation stages and/or program maturation? I think this discussion will continue throughout our reading.

Arizona Evaluation Book Group – Reading The Book

Our book group is part of the Tucson, Arizona contingent of Azenet, an AEA affiliate. Reading chapters 1–4 of Evaluation in the Face of Uncertainty has stimulated the rich discussion and experience sharing that we had hoped for among new and experienced evaluators. As JM anticipated in the introduction, some of us read the cases as they were cited, while others are waiting to read them later and are inserting our own experiences. None of us had previously discussed common problems, surprises, and solutions in a systematic way. Can people learn to handle surprises before becoming evaluators or researchers? We agreed that the book would be a great read for an Evaluation II course. We’ve begun to call it Uncertainty in the Face of Evaluation…. More on our discussion in the next post.

Surprise in Evaluation: Values and Valuing as Expressed in Political Ideology, Program Theory, Metrics, and Methodology (AEA 2011 – Think Tank Proposal)

Submitted by
Jonny Morell
Joanne Farley
Tarek Azzam

Abstract:
How does political ideology affect program theories, methodologies, and metrics?