How does political ideology affect program theories, methodologies, and metrics? Participants will be randomly assigned to groups and asked to sketch an evaluation based on one of three positions. 1) Government has an obligation to alleviate social inequities and thereby promote the public good. 2) Government’s role is to uphold civil order so that people can pursue their own goals, with the consequences of their actions being their own personal responsibility. In general, less government is better. 3) The family is the primary unit of social cohesion, and there resides the locus of decisions about issues such as health and education. Government can be active or passive, as long as it supports the centrality of the family as the locus of moral authority and daily living. During report-backs we will compare how the evaluation designs differ with respect to program theory, metrics, and methodology.
Evaluation is at its core a conservative business. We do not develop new programs. We do not plan innovative policies. We spend our time, our intellectual capital, and our resources on assessing programs that other people have already committed to. We are constrained to evaluate within the narrow frameworks given to us by our customers – i.e. by stakeholders who have the power, authority, influence, position, and drive to make innovation happen. We should not denigrate what those stakeholders do or who they are. Bringing about change is extraordinarily difficult, and those committed to effecting change are acting in good faith to do good as they see that good. Providing empirical understanding of the consequences of those actions, and guidance for improvement, is a noble pursuit. Still, although evaluation may be conservative in its scope, the choices we make within that scope are sensitive to a wide range of ideologies as we guide stakeholders through exercises to articulate program theory, make choices about what to measure, and decide upon a logic in which to embed our observations. These ideologies have led to well-known debates in our field. How much should we attend to the needs of stakeholders who are not involved in program development or execution? What effort should be put into creating knowledge rather than guiding decision makers? How far should we stray from measuring the outcomes that were planned for the program at hand? How tied should we be to the values of our customers? We cannot do our jobs well without confronting these questions. But as a practical (and perhaps moral) matter, we cannot (or perhaps should not) stray too far from the needs of the people who pay us. How, then, to best serve those needs? We contend that to provide the greatest value to our customers, we should base our work on an appreciation of the hidden assumptions in program theory that emanate from political ideology.
Program theory drives metrics and methodologies, and metrics and methodologies frame our findings and our conclusions. If we are trapped within our ideologies, we cannot help our customers understand the full scope of the changes they are trying to implement. We may or may not succeed in getting our customers to include that scope in their operations, but we cannot even try without self-knowledge. We believe that precious little of that self-knowledge resides within the intellectual stock of the individual members of AEA (after all, are we any different from all other individuals?), or within our membership collectively. That needs to change. We see this Think Tank as a small step in bringing about that change.
12 thoughts on “Surprise in Evaluation: Values and Valuing as Expressed in Political Ideology, Program Theory, Metrics, and Methodology (AEA 2011 – Think Tank Proposal)”
I can’t attend your workshop, but I would if I could. I believe that your main point ranks among the top issues of our field (and our times).
It would be fascinating to participate in this workshop. Here’s why: I am just not sure how self-aware we (people) are of our political ideologies. I think of ideology as the set of “unknown knowns” that Zizek has written
about (and that Rumsfeld infamously left out of his press conference on evidence of weapons of mass destruction). That is, our knowledge of the truth of the matter basically falls into one of four categories:
(1) “Known knowns”
Some things we know, and our knowledge can be verified scientifically/empirically.
(2) “Known unknowns”
Some things we consciously know that we do NOT know, and have not been verified scientifically/empirically.
(3) “Unknown unknowns”
For some things, we are not even aware that we don’t know about them.
(4) “Unknown knowns”
For some things, we are not even aware that we “know” them to be true, and our interpretation of empirical evidence is always construed to support our (ideological) “knowledge.” We are not even aware that we are doing this, and our ideological assumptions remain unquestioned.
I think that #4 above is a good definition of ideology. Our reasons for believing things to be true in such manner were developed over our lifespans, usually beginning in early childhood. Because we are largely unaware of our ideologies, “unknown knowns” are most resistant to change in the face of empirical evidence.
If true, then it is of the utmost importance that we understand the political ideologies of stakeholders, because then we can predict their interpretations of evaluation data and their likely resistance to contrary findings. Similarly, we need to understand our own ideologies.
I’d say that ideology being in the “unknown known” category is not always true. I am sure that Milton Friedman knew very well what his ideological lens was. Paul Krugman is pretty much a classic liberal, and I’m sure he is well aware of the assumptions he is making when he writes. This is certainly true for scholars and researchers all along the political spectrum. I don’t see this as a problem. But there are aspects to all this that I do see as problematic. The first is the situation you mention, where we are not aware of the workings of our ideologies. The second is a collective problem in the field of evaluation. I think that Economics, Political Science, and Sociology serve us well because of the diverse viewpoints that frame people’s work. I don’t think that happens in evaluation. That’s one of the reasons there are so many “unexpected outcomes” that I try so hard to deal with in my book.
I also think that evaluation theory would have developed differently if it had been informed by a broader range of political beliefs.
I’m missing something — having trouble reconciling this:
“We believe that precious little of that self-knowledge resides within the intellectual stock of the individual members of AEA (after all, are we any different from all other individuals?), or within our membership collectively.”
“I’d say that ideology being in the “unknown known” category is not always true. [SNIP] and I’m sure he is well aware of the assumptions he is making when he writes. This is certainly true for scholars and researchers all along the political spectrum. I don’t see this as a problem.”
Authors wrote: How does political ideology affect program theories, methodologies, and metrics?
My answer is: one way this effect shows up is in proposal writing! For example, I find some political assumptions in this very proposal. They are:
1. We should not denigrate what those stakeholders do or who they are
2. Providing empirical understanding of the consequences of those actions, and guidance for improvement, is a noble pursuit.
3. We should base our work on an appreciation of the hidden assumptions in program theory that emanate from political ideology.
4. We should base our work on an appreciation of the hidden assumptions in program theory that emanate from political ideology.
What do you think? Am I misunderstanding something in this answer?
Excuse me, the fourth ideological assumption is:
We cannot (or perhaps should not) stray too far from the needs of the people who pay us.
Well, just to complicate the discussion, I’ll add a few remarks. First, Charles, I do not think there is a True category of “known knowns.” The suggestion that we do know some things that have been scientifically/empirically verified is problematic. I didn’t raise this in our discussion about putting the panel together, but will raise it here. I do not believe that any knowledge is certain, and the terms “science” and “empirical” are themselves deeply political in nature (the literature in the philosophy of science, and even in evaluation, has demonstrated the diversity of views on what science is and what it means or takes to accept a claim as “true” knowledge). Jonathan, I think it is interesting that you went outside of evaluation to identify individuals who are exceptions to the “unknown knowns” category Paul refers to. And while I think that Political Science, Economics, Sociology, etc. have lots to contribute to evaluation, they are plagued as much as evaluation is by ideological differences. When the social sciences split into separate and discrete disciplines (around Mill’s time), they carried with them ideological methodological assumptions that emerged originally from political theorists of different ilks. Take Hobbes, Hume, Mill, and their descendants as one example of the interrelation between political ideology and Methodology (note the upper case “M”). Then look at the German expressivists such as Hegel, Winch, Herder and, more recently, Alasdair McIntyre and Charles Taylor as examples of philosophies leading to Methodologies which reject the modern notion of science. There are other schools of thought that constitute Methodologies (or at least lead to Methodologies for gaining knowledge). All of these theorists were quite aware of what their assumptions were and where they led in terms of our “knowing” more about the human condition. But are these different schools of thought ideological, and what implications do they have for science and for political choices?
Jurgen Habermas explains ideology as functioning in much the same way a neurotic sees the world, explains it, and acts on it. Ideology, like neurosis, is in short distorted thinking. Interesting comparison.
I don’t believe in a True category of “known knowns” either, though I do believe that there are levels of validity in knowledge. To me, the things we know most about are the “known knowns.” I was using the 2×2 table to quickly define some terms.
I think Habermas is approaching a truth of the origins of ideology — it arises as much from our (mid-brain) psyches as it does from (fore-brain) intellectual thought. That is what makes ideology so resistant to (evaluation) evidence that is contrary to the ideology. I think we can learn a lot about the nature of an individual’s ideology when he/she is presented with contrary evidence.
Joann – you say: “I think it is interesting you went outside of evaluation to identify individuals who are exceptions to the “unknown knowns” category Paul refers to.” This is my point exactly. Go to any of the social sciences, and people will tell you quite clearly how ideology informs their work, and will be happy and able to point to the other practitioners whose ideology is different. My problem is that I don’t think there is enough diversity like this in Evaluation. I could be wrong, it is as they say, an empirical question. But I know where I’d put my money.
Charles and Jonathan,
Thank you for your remarks. They have given me much to think about.
Charles, you note: “I don’t believe in a True category of “known knowns” either, though I do believe that there are levels of validity in knowledge. To me, the things we know most about are the “known knowns.” ” I too believe there are varying levels of validity in knowledge, but “validity” is the crucial term here. Our notions of validity are tied to what we accept as truth claims; this is tied to how we think we come to really “know” something; and, in the end, this is tied to overarching theories of human nature or history, which in turn translate into political theories, public philosophies or, for some, ideologies — belief systems emanating from distorted, perhaps a priori, assumptions about human nature and/or history and society. In explaining the degree of validity a claim has, you implicitly or explicitly begin to unpack what it means for us to “know,” and thus what kind of being it is that must know in this fashion, etc.
Jonathan, I both agree and disagree with you. Evaluators have done little to tie their conceptions of Methodology to public philosophies and political theories, which are ideologies to some theorists, e.g., critical theorists. I think our panel will be great in showing that these do, though quietly and subtly, influence our disciplinary thinking. I think the next step is to continue clarifying how certain disciplinary choices (e.g., framing questions, Methodologies, metrics, outcomes of focus) relate back to social, psychological, economic, etc. theories, and how these in turn emerge from theories of human nature and society. Only then can we as a discipline enter into a more profound discourse about such vital issues and bring to light these theoretical relationships so that they can be examined and critiqued within these broader contexts of philosophical/theoretical thought. I think that evaluation abounds with theoretical contexts that influence political thought and thus evaluation praxis. It is just rare that the genealogy is traced back in a thoughtful and reflective manner.
I don’t have much time to respond. Some quick thoughts:
Joann, I think we are in agreement. I believe that evaluation and the social sciences are about establishing “truth,” not “Truth.” Paradoxically, the closer to “Truth” we come, the more political/philosophical we must become, and then we’re talking about human nature and what we should value.
I think that evaluators need to come to some agreement on human nature by coming to some consensus on what they value. Specifically, I think evaluators need to rank-order their values first, then come to consensus on a general rank-ordering, if that makes sense. I think that would be better than just stating what we value as evaluators. The Guiding Principles help, but then again, the APA has a well-developed set of written ethical standards, and yet those standards did not prevent the APA from condoning torture.
I’d like to pick up on Moein’s comments, because that’s just how I felt when I read the submission. From a critical systems framework, the fact that we regard those who pay our bills as “customers” is aligned with a particular ideological standpoint. To me it goes to the heart of whether evaluators operate as “professionals” (conscious of, and challenging, the boundaries we and others set around our endeavors) or as “tradespeople,” exercising a craft but working primarily, and largely unchallengingly, in the interests of a single stakeholder.