Discussion in the first session of the Azenet Tucson book club — the Theory chapter, pairing the ‘explanatory power’ section with the introduction to life cycle behavior (p. 49). The most common evaluation activity among our members is evaluation of state- or federally funded programs (DOE, SAMHSA, OJJDP, BJA). Common characteristics:
- programs have a few years to implement an ‘evidence-based practice’
- evaluation is closely structured around performance measures, often with online reporting requirements
- some projects require comparison groups or other hard-to-implement designs
While these programs are ‘start-ups’ and are supposed to mature, the rigidity of the reporting requirements means that evaluation often leaps right over implementation issues to outcomes, substituting ‘fidelity’ measures and specific program monitoring (dichotomous: ‘did they or didn’t they do X?’) or ‘sustainability’. Even the Strategic Prevention Framework work (SAMHSA’s effort to build coalitions to address local substance abuse issues) doesn’t use implementation theory or program life cycle ideas, at least here in Arizona. So (surprise!) when we don’t have any theory, the evaluation plan doesn’t include any way to measure or record what happened as the program or coalition weathered maturation changes, staff turnover, etc., and there’s no way to report on it.

Do you have specific recommendations for implementation-stages theory and/or program maturation in social and educational programs? I think this discussion will continue throughout our reading.