I want to spark a discussion of what evaluation might look like if it were practiced by people who were working from different ideological frameworks. It has been difficult for me to frame this post because my own politics are distinctly medium rare, and I don’t have the imagination needed to think deeply from other points of view. Still, I think that there are three reasons why this is an important exercise for the members of AEA.

1) I have never taken a poll, but my bet is that the members of AEA cluster on the left end of the ideological spread. (Go ahead, prove me wrong. Someone should find out.) I’m not sure we serve our customers and stakeholders well if we do our work from too homogeneous a point of view. 2) The reason we don’t serve our customers and stakeholders well is that the nature of the data we develop, the findings we produce, and the outcome of our efforts at utilization all combine to provide overly restricted choices relative to the policy decisions that can be made. 3) As an association that cares about the public good, what good are we if we provide weak guidance?

As I see it, the problems with a restricted range touch on all aspects of what we do along the entire evaluation project life cycle. 1) What programs or groups of programs do we choose to evaluate? 2) When we design evaluations, what bodies of research literature, groups of experts, and theories do we query? 3) How do we choose which stakeholder groups to involve, and how do we determine the relative importance of each? 4) What positive and negative outcomes do we invest effort in trying to measure? 5) What time frame do we choose for measuring program effects? 6) How much effort do we put into measuring various opportunity costs? 7) How do we interpret data?

So, I propose a thought experiment that involves a collective effort to fill in a table. I define the rows by various political ideologies. I imagine a group of seven categories. 1) Marxist, 2) Socialist, 3) Center left (sort of like European Social Democrats), 4) Center right (sort of like European Christian Democrats), 5) Social conservative, 6) Conservative in the Edmund Burke sense, and 7) Small government conservative.

As for the columns, I came up with eleven programs that evaluators might be called upon to evaluate. 1) R&D investment by government for work that is near the R side of the R&D continuum. 2) Tax subsidies to wind energy. 3) International development programs to build up civil society. 4) International development programs for improving agriculture. 5) Web 2.0 programs for government to provide services to citizens. 6) Organizational change efforts to make government agencies more effective. 7) The impact of rule making in federal regulatory agencies. 8) Early childhood education. 9) Initiatives to teach underserved populations to advocate for themselves. 10) Family support for military families. 11) Immunization promotion.

I certainly don’t think that we should all start filling out a 7×11 matrix, particularly since I started this post by admitting that I can’t do it myself. But I do think it would be of value to all of us if people who resonated to any of the cells took a crack at filling them in. At least, I know that it would be valuable for me.

4 thoughts on “Unexpected program outcomes as a function of the ideologies driving an evaluation – Some questions for the AEA”

  1. Jonny, this is an incredibly hard one to reflect on, partly due to the fundamental attribution error (we attribute the actions of others to internal dispositions, but our own actions to situational factors).

    Most of my work is done for central government, so your question got me wondering whether and how my choice of what to work on (and how to do so) are influenced by which political parties are in power, what agendas they are driving, and how they fit with my own values and sense of justice. Hmmmm. I don’t see huge differences in how I work or what I work on based on who’s in power, but on reflection, this may be because the work I’m involved in is often driving at similar outcomes – maybe the political parties here don’t differ much on these, so I don’t notice a huge shift that forces me to rethink what I am working on or how.

    Would it be different if I did more work in economic development, or defense, or criminal justice? Maybe.

    I looked at the list of hypothetical programs you listed and saw some things that have impacted my thinking MORE (I think) than my own political ideology. Like the reality of a worldwide economic recession; like the evidence on climate change; like the realities and limitations of my own strengths and limitations (expertise, competence, and which audiences would see me as credible); like the kinds of evaluative thinking that make sense to me …

    Something else that occurs to me is that one’s approaches to a lot of these projects would be heavily influenced by one’s understandings of the relevant evidence on various topics – climate change is a big one – rather than by political leanings per se. My understanding is that those on the right of the political spectrum are more likely to be climate change skeptics. [This seems to me to be more of a spurious correlation – I don’t see how this is inherent in a right-wing view of the world. Is it??] If one is a climate change skeptic, one would tend to approach the evaluation of wind energy, for example, quite differently.

    Not sure this is anywhere near an answer to your question, Jonny, but hopefully others will chime in!

  2. Hi Jane–
    Thanks for your response. A few of your comments got me to thinking.

    Your comment on climate change reminded me of an exercise I did at AEA. I conjured a little program that did nothing more than teach literacy skills to immigrants. We then developed methodologies based on different logic models I wrote. All the models had the same short range outcomes — better literacy skills, higher quality of life, etc. From there we went on to the pie in the sky outcomes such as business start-ups, higher GDP, and other similar fantasies. In one model the analysis was split by legal and illegal immigrants. The short range outcomes for all were the same. But the longer range outcomes differed. For instance, we postulated that by improving job skills (a program success), the program would have the negative impact of squeezing native born and legal immigrants out of the job market, with subsequent negative impact on the fantasy outcomes. I realize that there is a lot of controversy about the effect of immigration, but hypothesizing those negative effects is certainly within the bounds of reason. Now I ask you, would those outcomes ever end up in the logic model unless a deliberate effort were made to include anti-immigration stakeholders in the logic model development process? (If anyone is interested in the details of this exercise, see slides 41 – 45 of my logic model workshop. The file can be downloaded from my digital scrapbook, aka www.jamorell.com.)

    As another example, think of the work I am doing in federal regulatory agencies. There is a lot of interesting work to be done on determining the impact of those regulations on safety. Everybody is in favor of safety, but what about other outcomes that some people might be less sanguine about? The cost of doing business, suppression of business’ ability to adjust to changing circumstances, “regulatory capture” which sets up perverse impacts of more regulation, etc. I could go on and on with a list like this. Further, what if someone had an ideological commitment to small government? There are plenty of rational people who have this belief. (Some of my best friends actually, misguided as they are, and as immune to rational argument as they are.) They would hypothesize a whole set of negative outcomes based on the evils of big government. This strikes me as a very good example of how ideology can affect how an evaluation is designed.

    There is another issue here. I firmly believe that “evaluation” is an inherently conservative business. We don’t get to practice our craft unless a commitment has been made to action. We don’t make that commitment, others do. Then they pay us to tell them what they have done. I don’t mean this in a bad way. Policy makers have a truly admirable interest in knowing how to improve. But it is still true that we evaluators don’t get our grubby hands on a program until people with power and authority have decided to invest their financial, political, and personal capital in a set direction. Yes, yes, I know I overstate the case and that evaluators often do have influence over fundamental decisions. But I bet that if you took a random sample of all evaluations that are done, and traced them back, you would see that I am mostly correct. (Any graduate students out there looking for a juicy thesis topic?)

    Finally, more on the subject of global warming. Why the skepticism? I think I know, and if I’m right there are some pretty heavy implications for knowledge use. As I see it, global warming combines two streams that make for skepticism. First, who funds the research? Big Government of course! And we know how much trust people have in government these days. And who does the work? Scientists! Scientists who do work that nobody else can understand, thus requiring people to believe them based on faith in science. How far does that get you? One day hormone replacement therapy is good for women. The next day it is not. One day PSA tests are a good screening method for prostate cancer. The next day they are not. And on and on with a never ending stream of contradictory scientific findings. Government and scientists. A pretty smarmy combination, wouldn’t you say? Well, you and I and most of our buddies wouldn’t, but there is the rest of the world.

  3. Jonny, I think some of the examples you give really highlight the importance of Carol Weiss’s idea of deliberately developing a “negative logic model” to show what ghastly chain of events might happen if the whole thing was a disaster. This is a great way of identifying potential negative impacts and a very good reason for involving vocal critics in some of the evaluation groundwork to help identify what should be watched for in a thorough evaluation.

    It seems obvious to me that a smart evaluator would include these perspectives at the front end so that they are systematically investigated, and so that there aren’t all those “well, you didn’t look at that”-type criticisms from the nay-sayers when the evaluation findings are in.

    But the real question, I suppose, is whether an evaluator with a strong political view can be sufficiently even-handed and open-minded in gathering and interpreting the evidence to unearth findings that would run counter to his or her political position. It’s one thing to identify potential negative effects, but slippage can also occur with how seriously these are investigated in reality. This is the real acid test of whether a personal or political belief or point of view (which we all have, on various topics) translates into an error-producing (i.e. invalid conclusion-causing) bias in evaluation.

    There’s another important parallel to be drawn here, and that is whether and how genuinely outcomes that are of value in a particular cultural context are included in an evaluation. Oftentimes – and historically this has been a pervasive problem – these are passed over by the evaluators as unimportant or as “PC window dressing,” and the evaluation has gone after only those outcomes that are considered of value in a so-called mainstream setting.

    This dismissing of local cultural values (not just what is “valued” by some people but what is demonstrably valuable in a particular context) is another reflection of a personal perspective getting in the way of good evaluation by excluding or seriously underweighting important outcomes. The personal perspective is in many ways part of a political ideology, but at another level it’s just a set of cultural blinkers. Whatever it is, the end result is the same: an evaluation that has – or is in danger of having – invalid conclusions because it has failed to take into account some important evaluative criterion.

  4. I think the responses above definitely show a need for reflection in this area. The tone deafness of some of the responses, in their defense of evaluation positions and content not being affected by political leanings, is dramatic. I’m not sure the listing of distinctive leanings is complete either. There isn’t anything on the list for libertarians, or for fundamentalist conservatives. For that matter, one would be hard pressed to find any conservative who would identify with Edmund Burke. On the other hand, the choices on the left are pretty restricted as well.

    BTW, it isn’t a mistrust of science that has people skeptical about Global Warming. It’s all of the snow, ice and freezing weather. Top that off with the unbalanced presentation of the science, the forged science, and the whole doomsday scenario around it, and it’s no wonder people are skeptical. And if that weren’t enough, there is the high priest of Global Warming (and inventor of the internet) making millions of dollars off of the scare while trotting about the globe in a private greenhouse-gas-producing airplane. The real surprise would be if people weren’t skeptical.
