I could use some help muddling through a question that has been rattling around in the inner recesses of my brain. I have no data to support this, but I’d bet that when people look at the results of an evaluation they think in terms of normal, or at least symmetrical, distributions. Even if the evaluation were qualitative, the thinking runs along the lines of: “The program seems to have done X, Y, and Z. If I do it again it will do something close to X, Y, and Z. If I get lucky it will do a lot better. If I get unlucky it will do a lot worse, but X, Y, and Z seems like a good bet.”
My impression is that this kind of thinking drives policy decisions about whether to repeat or scale up a program. This way of looking at the world makes a lot of sense if we are talking about infection control protocols in hospitals, student achievement with a new math curriculum, and much else besides.
But what if the effects of the program we are dealing with are more like new business start-ups? Those effects are not symmetrically distributed around a mean. They are power-law distributed: a very small number of very large effects, a few mid-sized effects, and a very large number of small effects. In that case one would have very different expectations about what would happen if the program were repeated or scaled up.
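The contrast can be made concrete with a small simulation. This is a minimal sketch with invented parameters (the normal mean of 10 and the Pareto shape of 1.5 are assumptions for illustration, not estimates from any real program): it draws many hypothetical replications of a program under each assumption and compares the average outcome, the typical (median) outcome, and how much of the total effect is concentrated in the top 1% of replications.

```python
import random

random.seed(42)

# Invented parameters, for illustration only.
n = 10_000
# Symmetric world: effects cluster around a mean of 10.
normal_effects = [random.gauss(10, 2) for _ in range(n)]
# Power-law world: Pareto-distributed effects (shape alpha=1.5, scale 5).
pareto_effects = [5 * random.paretovariate(1.5) for _ in range(n)]

def summarize(name, xs):
    xs = sorted(xs)
    mean = sum(xs) / len(xs)
    median = xs[len(xs) // 2]
    # Share of the total effect produced by the top 1% of replications.
    top1_share = sum(xs[-(len(xs) // 100):]) / sum(xs)
    print(f"{name}: mean={mean:.1f}, median={median:.1f}, "
          f"top 1% of replications holds {top1_share:.0%} of total effect")

summarize("normal   ", normal_effects)
summarize("power law", pareto_effects)
```

In the normal world the mean and median nearly coincide and the top 1% holds roughly its proportional share, so "close to last time" is a sound expectation for the next replication. In the power-law world the median falls well below the mean and a large slice of the total effect sits in a handful of replications, so a typical repeat of the program looks worse than the average, even though the program portfolio as a whole may be very valuable.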
What I have been wondering is whether there are ways, within an evaluation, to provide some insight into which kind of distribution we are dealing with. That would be easy enough if a program had been replicated many times, because there would be real data to plot. But that case is not very interesting; we would already know the answer. This has been bothering me a lot lately, so I’m open to suggestions.