Some Musings on Evaluation Use in the Current Political Context

This blog post is my effort to consolidate and organize some back and forth I have been having about evaluation use. It was spurred by a piece on NPR about the Administration’s position on an after-school program (Trump’s Budget Proposal Threatens Funding For Major After-School Program). In large measure the piece dealt with whether the program was effective. Arguments abounded about stated and unstated goals, and about the messages contained in a variety of evaluations. Needless to say, the political inclinations of different stakeholders had a lot to do with which evaluations were cited. Below are the notions that popped into my head as a result of hearing the piece and talking to others about it.

Selective Use of Data
Different stakeholders glommed onto different evaluations to make their arguments. Continue reading

Posted in Uncategorized | Leave a comment

Invitation to a Conversation Between Program Funders and Program Evaluators: Complex Behavior in Program Design and Evaluation

Effective programs and useful evaluations require much more appreciation of complex behavior than is currently the case. This state of affairs must change. Evaluation methodology is not the critical inhibitor of that change. Program design is. Our purpose is to begin a dialogue between program funders and evaluators to address this problem.

Current Practice: Common Sense Approach to Program Design and Evaluation
There is sense to successful program design, but that sense is not common sense. And therein lies a problem for program designers, and by extension, for the evaluators who are paid to evaluate the programs envisioned by their customers.

What is common sense?  
“Common sense is a basic ability to perceive, understand, and judge things that are shared by (“common to”) nearly all people and can reasonably be expected of nearly all people without need for debate.”

What is the common sense of program design?
The common sense of program design is usually expressed in one of two forms. One form is a set of columns with familiar labels such as “input”, “throughput”, and “output”. The second is a set of shapes connected by 1:1, 1:many, many:1, and many:many relationships. These relationships may be cast in elaborate forms, as, for example, a system dynamics model complete with buffers and feedback loops, or a tangle of participatory impact pathways.
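To make the second form concrete, here is a minimal sketch of what a system dynamics rendering can look like in code. It is a toy of my own construction, not drawn from any real program; the stock, flows, and parameter values are all hypothetical.

```python
# Toy system dynamics model: one stock, one inflow, and a balancing feedback loop.
# Hypothetical example for illustration only; parameter values are arbitrary.

def simulate(steps=50, dt=1.0):
    stock = 10.0        # e.g., active program participants
    capacity = 100.0    # a "buffer": crowding limits how many can be served
    trajectory = []
    for _ in range(steps):
        inflow = 5.0                                 # constant recruitment
        outflow = 0.05 * stock * (stock / capacity)  # attrition rises with crowding (feedback)
        stock += (inflow - outflow) * dt
        trajectory.append(stock)
    return trajectory

if __name__ == "__main__":
    print(f"stock after 50 steps: {simulate()[-1]:.1f}")  # approaches the capacity of 100
```

Even this toy shows the style of reasoning: every element is an identified part, and every relationship is a specified mechanism.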

But no matter what the specific form, the elements of these models, and the hypothesized relationships among them, are based on our intuitive understandings of “cause and effect”, that is, on mechanistic views of how programs work. They also assume that the major operative elements of a program can be identified.

To be sure, program designers are aware that their models are simplifications of reality, that models can never be fully specified, and that uncertainties cannot be fully accounted for. Still, inspection of the program models that are produced makes it clear that almost all the thinking that went into developing those models was predominantly in the cause and effect, mechanistic mode. We think about the situation and say to ourselves: “If this happens, it will make (or has made) that happen.” Because the models are like that, so too are the evaluations.

Our common sense conceptualization of programs is based on deep knowledge about the problems being addressed and the methods available to address those problems. Common sense does not mean ignorance or naiveté. It does, however, mean that common sense logic is at play. There is no shame in approaching problems in this manner. We all do it. We are all human.

Including Complex Behavior in Program Design and Evaluation
When it comes to the very small, the very large, or the very fast, 20th-century science has succeeded in getting us to accept that the world is not commonsensical. But we have trouble accepting a non-commonsensical view of the world at the scale experienced by human beings. Specifically, we do not think in terms of the dynamics of complex behavior. Complex behavior has much to say about why change happens, patterns of change, and program theory. We do not routinely consider these behaviors when we design programs and their evaluations.

There is nothing intuitively obvious about complex behavior. Much of it is not very psychologically satisfying. Some of it has uncomfortable implications for people who must commit resources and bear responsibility for those commitments. Still, program designers must appreciate complex behavior if they are ever going to design effective programs and commission meaningful evaluations of those programs.

Pursuing Change
There is already momentum in the field of evaluation to apply complexity. Our critique of that effort is that current discussions of complexity do not tap the richness of what complexity science has discovered, and that some of the conversation reflects an incorrect understanding of complexity. The purpose of this effort is to bring a more thorough, more research-based understanding of complexity into the conversation.

By “conversation” we mean dialogue between program designers and evaluators with respect to the role that complexity can play in a program’s operations, outcomes, and impacts. This conversation matters because, as we said at the outset, the inhibiting factor is the failure to recognize that complex behavior may be at play in the workings of programs. Methodology is not the problem. Except for a few exotic situations, the familiar tools of evaluation will more than suffice. The question is what program behavior evaluators have license to consider.

Our goal is to pursue a long-term effort to facilitate the necessary discourse. Our strategy is to generate a series of conferences, informal conversations, and empirical tests that will lead to a critical mass of program funders and evaluators who can bring about a long-term change in the rigor with which complexity is applied to program design and evaluation.


Posted in Uncategorized | 2 Comments

Joint Optimization of Uncorrelated Outcomes as a Method for Minimizing Undesirable Consequences of Program Action

This blog post is a pitch for a different way to identify desired program outcomes.

Program Theories as they are Presently Constructed

Go into your archives and pull out your favorite logic models. Or dip into the evaluation literature and find models you like. You will find lots of variability among them in terms of: Continue reading

Posted in Uncategorized | 1 Comment

A simple recipe for improving the odds of sustainability: A systems perspective

I have been to a lot of conferences that had many sessions on ways to assure program sustainability. There is also a lot of really good research literature on the topic, and sustainability has been front and center in my own work of late.

Analyses and explanations of sustainability inevitably end up with some fairly elaborate discussions about what factors lead to sustainability, how the program is embedded in its context, and so on. I have no doubt that all these treatments of sustainability have a great deal of merit. I take them seriously in my own work. I think everyone should. That said, I have been toying with another, much simpler approach.

Almost every program I have ever evaluated had only one major outcome that it was after. Sure, there are cascading outcomes from proximate to distal. (Outcome to waves of impact, if you like that phrasing better.) And of course many programs have many outcomes at all ranks. But in general the proximate outcomes, even if they are many, tend to be highly correlated. So in essence, there is only one.

What this means is that when a program is dropped into a complex system, that program is designed to move the entire system in the direction of attaining that one outcome. We know how systems work. If enough effort is put in, they can in fact be made to optimize a single objective. But we also know that success like that makes the system as a whole dysfunctional in terms of its ability to adapt to environmental change, meet the needs of multiple stakeholders, maintain effective and efficient internal operations, and so on. As I see it, that means that any effort to optimize one outcome will be inherently unstable. No need to look at the details.

My notion is that in order to increase the probability of sustainability, a program should pursue multiple outcomes that are as uncorrelated as possible. The goal should be joint optimization, even at the expense of sub-optimizing each of the desired outcomes individually.
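As a rough sketch of how “as uncorrelated as possible” might be checked in practice, one could look at the correlation matrix of candidate outcome measures before committing to them. Everything below is simulated; the measure names and the 0.7 cutoff are hypothetical choices of mine.

```python
# Sketch: screen candidate outcome measures for redundancy.
# All data are simulated; in practice these would be pilot or archival measures.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of observations

reading = rng.normal(size=n)
math = 0.8 * reading + 0.2 * rng.normal(size=n)  # nearly redundant with reading
attendance = rng.normal(size=n)                  # a largely independent outcome

labels = ["reading", "math", "attendance"]
corr = np.corrcoef(np.column_stack([reading, math, attendance]), rowvar=False)

for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        note = "  <- effectively one outcome" if abs(corr[i, j]) > 0.7 else ""
        print(f"{labels[i]} vs {labels[j]}: r = {corr[i, j]:+.2f}{note}")
```

If most pairs land above the cutoff, the program is in effect optimizing a single outcome, whatever its logic model says.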

I understand the problems in following my idea. The greater the number of uncorrelated outcomes, the greater the need to coordinate across boundaries, and as I have argued elsewhere in this blog, that is exceedingly difficult. (Why do Policy and Program Planners Assume-away Complexity?)  Also, I am by no means advocating ignoring all that work that has been done on sustainability. Ignoring it is guaranteed to lead to trouble.

Even so, I think the idea I’m proposing has some merit. Look at the outcomes being pursued, and give some thought to how highly correlated they are. What we know about systems tells us that optimization of one outcome may succeed in the short term, but it will not succeed in the long term. Joint optimization of uncorrelated outcomes? That gives us a better fighting chance.


Posted in Uncategorized | 3 Comments

Things to think about when observing programs from a systems perspective

A friend of mine (Donna Podems) is heading up a project that involves providing a structure for a group of on-the-ground observers so they can apply a systems perspective to understanding what programs are doing and what they are accomplishing. She asked me for a brain dump, which I happily provided. What follows is by no means a systematic approach to looking at programs in terms of systems. It’s just a laundry list of ideas that popped into my head and flowed through my fingers. Below is a somewhat cleaned up version of what I sent her.

Hi Donna,

What follows is not a list of independent items. In fact I guarantee there are lots of connections. For instance, “redundancy” and “multiple paths” are not the same thing, but they are related. But time is tight, and I have a Greek meatball recipe to shop for, so let’s assume they are independent. Continue reading

Posted in Uncategorized | Leave a comment

Depicting Complexity in 2-D

There is an interesting discussion going on in the LinkedIn discussion group of the European Evaluation Society with respect to a question someone asked: How do linear models address the complexity in which we work? I can’t help but weigh in. I also placed a link to this blog post on the EES discussion thread. My thoughts on this topic run in two directions:

1) Putting a lot of stuff in a model, and
2) What does it mean to “address complexity”?

Putting a Lot of Stuff in a Model

I am a big fan of information density. The more information that can be juxtaposed, the greater the amount of meaning that can be conveyed. The countervailing force to this inclination is that I’m also a big fan of information being readable. My solution is to think of rendering a model as an exercise in the joint optimization of two goals: Continue reading

Posted in Uncategorized | 2 Comments

Drawing on Complexity to do Hands-on Evaluation (Part 3) – Turning the Wrench

Common Introduction to all Three Posts
What is the Contribution of Complexity to Evaluation?
Drawing from Research and Theory in Complexity Studies

Common Introduction to all Three Posts

This is the third of three blog posts I have been writing to help me understand how “complexity” can be used in evaluation. If it helps other people, great. If not, at least it helped me.

Part 1:  Complexity in Evaluation and in Studies on Complexity
In this section I talked about using complexity ideas as practical guides and inspiration for conducting an evaluation, and how those ideas hold up when looked at in terms of what is known from the study of complexity. It is by no means necessary that there be a perfect fit. It’s not even a good idea to try to make it a perfect fit. But the extent of the fit can’t be ignored, either.

Part 2: Complexity in Program Design
The problems that programs try to solve may be complex. The programs themselves may behave in complex ways when they are deployed. But the people who design programs act as if neither their programs, nor the desired outcomes, involve complex behavior. (I know this is an exaggeration, but not all that much. Details to follow.) It’s not that people don’t know better. They do. But there are very powerful and legitimate reasons to assume away complex behavior. So, if such powerful reasons exist, why would an evaluator want to deal with complexity? What’s the value added in the information the evaluator would produce? How might an evaluation recognize complexity and still be useful to program designers?

Part 3: Turning the Wrench: Applying Complexity in Evaluation
This is where the “turning the wrench” phrase comes from in the title of this blog post. Considering what I said in the first two blog posts, how can I make good use of complexity in evaluation? In this regard my approach to complexity is no different than my approach to ANOVA or to doing a content analysis of interview data. I want to put my hands on a tool and make something happen. ANOVA, content analysis and complexity are different kinds of wrenches. The question is which one to use when, and how.
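Just to keep the metaphor honest, here is the ANOVA wrench in its plainest form: a one-way test across three program sites. The data and site names are simulated for illustration; nothing here comes from a real evaluation.

```python
# The "ANOVA wrench": a one-way F test across three hypothetical program sites.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
site_a = rng.normal(loc=50, scale=10, size=30)  # simulated outcome scores
site_b = rng.normal(loc=55, scale=10, size=30)
site_c = rng.normal(loc=50, scale=10, size=30)

f_stat, p_value = f_oneway(site_a, site_b, site_c)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```

The point of this post is to give the complexity wrench the same hands-on treatment.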

Complex Behavior or Complex System?
I’m not sure what the difference is between a “complex system” and “complex behavior”, but I am sure that unless I try to differentiate the two in my own mind, I’m going to get very confused. From what I have read in the evaluation literature, discussions tend to focus on “complex systems”, complete with topics such as parts, boundaries, part/whole relationships, and so on. My reading in the complexity literature, however, makes scarce use of these concepts. I find myself getting into trouble when talking about complexity with evaluators because their focus is on the “systems” stuff, and mine is on the “complexity” stuff. In these three blog posts I am going to concentrate on “complex behavior” as it appears in the research literature on complexity, not on the nature of “complex systems”. I don’t want to belabor this point because the boundaries are fuzzy, and there is overlap. But I will try to draw that distinction as clearly as I can. Continue reading

Posted in Uncategorized | 1 Comment

Drawing on Complexity to do Hands-on Evaluation (Part 2) – Complexity in Program Operation, Simplicity in Program Design

Common Introduction to all Three Posts
Why do Policy and Program Planners Assume Away Complexity?
How Can Evaluators Apply Complexity in a way that will Help Program Designers?

Common Introduction to all Three Posts
This is the second of three blog posts I have been writing to help me understand how, given the reality of how programs are designed, “complexity” can be used in evaluation. If it helps other people, great. If not, at least it helped me.

Part 1:  Complexity in Evaluation and in Studies on Complexity
In this section I talked about using complexity ideas as practical guides and inspiration for conducting an evaluation, and how those ideas hold up when looked at in terms of what is known from the study of complexity. It is by no means necessary that there be a perfect fit. It’s not even a good idea to try to make it a perfect fit. But the extent of the fit can’t be ignored, either.

Part 2: Complexity in Program Design
The problems that programs try to solve may be complex. The programs themselves may behave in complex ways when they are deployed. But the people who design programs act as if neither their programs, nor the desired outcomes, involve complex behavior. (I know this is an exaggeration, but not all that much. Details to follow.) It’s not that people don’t know better. They do. But there are very powerful and legitimate reasons to assume away complex behavior. So, if such powerful reasons exist, why would an evaluator want to deal with complexity? What’s the value added in the information the evaluator would produce? How might an evaluation recognize complexity and still be useful to program designers?

Part 3: Turning the Wrench: Applying Complexity in Evaluation
This is where the “turning the wrench” phrase comes from in the title of this blog post. Considering what I said in the first two blog posts, how can I make good use of complexity in evaluation? In this regard my approach to complexity is no different than my approach to ANOVA or to doing a content analysis of interview data. I want to put my hands on a tool and make something happen. ANOVA, content analysis and complexity are different kinds of wrenches. The question is which one to use when, and how.

Complex Behavior or Complex System?
I’m not sure what the difference is between a “complex system” and “complex behavior”, but I am sure that unless I try to differentiate the two in my own mind, I’m going to get very confused. From what I have read in the evaluation literature, discussions tend to focus on “complex systems”, complete with topics such as parts, boundaries, part/whole relationships, and so on. My reading in the complexity literature, however, makes scarce use of these concepts. I find myself getting into trouble when talking about complexity with evaluators because their focus is on the “systems” stuff, and mine is on the “complexity” stuff. In these three blog posts I am going to concentrate on “complex behavior” as it appears in the research literature on complexity, not on the nature of “complex systems”. I don’t want to belabor this point because the boundaries are fuzzy, and there is overlap. But I will try to draw that distinction as clearly as I can. Continue reading

Posted in Uncategorized | 2 Comments

Drawing on Complexity to do Hands-on Evaluation (Part 1) – Complexity in Evaluation and in Studies on Complexity

This is the first of three blog posts I am writing to help me understand how “complexity” can be used in evaluation. If it helps other people, great. If not, at least it helped me.

Common Introduction to all Three Posts
Practicality and Theory
The Value and Dangers of Using Evaluation Program Theory
Complexity as an Aspect of Evaluation Program Theory
Appropriate but Incorrect Application of Scientific Concepts to Achieve Practical Ends

Continue reading

Posted in Uncategorized | 3 Comments

Three Coming Blog Posts on Applying Complex Behavior in Evaluation

During each of the first three weeks in January I will be publishing a blog post on how complexity can be applied in evaluation. They are not ready yet, but they are close. Below is the common introduction that I will be using for each of the posts.

Common Introduction to all Three Posts

Part 1:  Complexity in Evaluation and in Studies on Complexity
In this section I will talk about using complexity ideas as practical guides and inspiration for conducting evaluation, and how those ideas hold up when looked at in terms of what is known from the study of complexity. It is by no means necessary that there be a perfect fit. It’s not even a good idea to try to make it a perfect fit. But the extent of the fit can’t be ignored, either.

Part 2: Complexity in Program Design
The problems that programs try to solve may be complex. The programs themselves may behave in complex ways when they are deployed. But the people who design programs act as if neither their programs, nor the desired outcomes, involve complex behavior. (I know this is an exaggeration, but not all that much. Details to follow.) It’s not that people don’t know better. They do. But there are very powerful and legitimate reasons to assume away complex behavior. So, if such powerful reasons exist, why would an evaluator want to deal with complexity? What’s the value added in the information the evaluator would produce? How might an evaluation recognize complexity and still be useful to program designers?

Part 3: Turning the Wrench: Applying Complexity in Evaluation
Considering what I said in the first two blog posts, how can I make good use of complexity in evaluation? In this regard my approach to complexity is no different than my approach to ANOVA or to doing a content analysis of interview data. I want to put my hands on a tool and make something happen. ANOVA, content analysis and complexity are different kinds of wrenches. The question is which one to use when, and how.

Complex Behavior or Complex System?
I’m not sure what the difference is between a “complex system” and “complex behavior”, but I am sure that unless I try to differentiate the two in my own mind, I’m going to get very confused. From what I have read in the evaluation literature, discussions tend to focus on “complex systems”, complete with topics such as parts, boundaries, part/whole relationships, and so on. My reading in the complexity literature, however, makes scarce use of these concepts. I find myself getting into trouble when talking about complexity with evaluators because their focus is on the “systems” stuff, and mine is on the “complexity” stuff. In these three blog posts I am going to concentrate on “complex behavior” as it appears in the research literature on complexity, not on the nature of “complex systems”. I don’t want to belabor this point because the boundaries are fuzzy, and there is overlap. But I will try to draw that distinction as clearly as I can.

Posted in Uncategorized | 1 Comment

A Complex System Perspective on Program Scale-up and Replication

I’m in the process of working up a presentation for the upcoming conference of the American Evaluation Association: Successful Scale-up of Promising Pilots: Challenges, Strategies, and Measurement Considerations. (It will be a great panel. You should attend if you can.) This is the abstract for my presentation:

Title: Complex System Behavior as a Lens to Understand Program Change Across Scale, Place, and Time
Abstract: Development programs are bedeviled by the challenge of transferability. Whether from a small-scale test to widespread use, or across geography, or over time, programs do not work out as planned. They may have different consequences than we expected. They may have larger or smaller impacts than we hoped for. They may morph into programs we only dimly recognize. They may not be implemented at all. The changes often seem random, and indeed, in some sense they are. But coexisting with the randomness, a complex system perspective shows us the sense, the reason, the rationality in the unexpected changes. By thinking in terms of complex system behavior we can attain a different understanding of what it means to explain, or perhaps, sometimes to predict, the mysteries of transferability. That understanding will help us choose methodologies and interpret data. It will also give us new insight on program theory.

There will only be one slide in this presentation.

[Slide: the single slide for the presentation]

Based on this slide I’m developing talking points. I know I’ll have to abbreviate it at the presentation, but I do want a coherent story to work from. A rough draft is below. Comments appreciated. Whack away. Continue reading

Posted in Uncategorized | 3 Comments

Case Study Example for Workshop 18: Systems as Program Theory and as Methodology


Case Study Example for Workshop 18: Systems as Program Theory and as Methodology: A Hands on Approach over the Evaluation Life Cycle

This case was developed for a workshop at the American Evaluation Association’s 2015 Summer Evaluation Institute.

Construction of the Case
This is the example we will use throughout this workshop to illustrate how knowledge of system behavior can be applied in evaluation. The example is hypothetical. I made it up to resemble a plausible evaluation scenario that we may face, but it is elaborated to make sure it contains all the elements needed to explain the topics in the workshop. I am sure that none of us (me included) have ever been involved in an evaluation that is as far-reaching and in-depth as the example here. But I am sure that all of us have been involved in evaluations that are similar to parts of the example, and, if you are like me, I bet you have dreamed of being involved in an evaluation of the size and scope of the example.

There are three initiatives. One aimed at adults. One aimed at mothers and young children. One aimed at teens. Each initiative has several individual programs that share some common outcomes, and which also have some unique outcomes.

All three initiatives are deliberately implemented Continue reading

Posted in Uncategorized | Leave a comment

Timelines, Critical Incidents and Systems: A Nice Way to Understand Programs

I have been involved in evaluating distracted driving programs for transportation workers. While working on the evaluations I developed an interesting way to understand how programs are acting and what they are doing. The method is based on a few principles, one set focusing on the nature of timelines and schedules, and the other on data collection.

Timelines and schedules
Timelines and schedules matter. These documents are constructed to meet a few objectives. They have to:

  • Provide a reasonable plan whose tasks can be executed.
  • Represent a reasonable correspondence between budget and work.
  • Satisfy customers’ desires for when the work will be completed, and for how much.

In a sense there is a conflict between the first two objectives and the third, resulting in overly optimistic assessments of budgets and timelines. That’s the direction of the bias in our estimates. Continue reading

Posted in Uncategorized | 1 Comment

Complexity is about stability and predictability

Table of Contents

Complexity is About Stability and Predictability

Example 1: Attractors
Example 2: Strange Attractors
Example 3: Fractals
Example 4: Phase Transitions
Example 5: Logistic Maps
Example 6: Power Laws
Example 7: Cross Linkages
Example 8: Emergence

What Does All This Mean for Evaluators?

Example 1: Attractors
Example 2: Strange Attractors
Example 3: Power Laws
Example 4: Timeframes, Attractors, and Power Laws
Example 5: Emergence
Example 6: Fractals
Example 7: Phase Shifts

Acknowledgements

Complexity is About Stability and Predictability

Figure 1: Ban the Butterfly

I have been thinking about how complexity is discussed in evaluation circles. A common theme seems to be that because programs are complex, we can’t generalize evaluation findings over space and time, given the inherent uncertainties that reside in complex systems. (Sensitive dependence on initial conditions, evolving environments, etc.) The more I think about the emphasis on instability and unpredictability, the less I like it. See Figure 1. Ban the butterfly! Continue reading
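A small illustration of why I want to ban the butterfly (my own toy, using the logistic map that appears in the table of contents above): the very same equation can be boringly stable or exquisitely sensitive to initial conditions, depending on a single parameter.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n)
# At r = 2.8 trajectories forget their starting point and settle on a fixed value.
# At r = 4.0 a difference of 0.000001 in the start yields a completely different path.

def iterate(r, x, steps=60):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

for r in (2.8, 4.0):
    a = iterate(r, 0.300000)
    b = iterate(r, 0.300001)
    print(f"r = {r}: from 0.300000 -> {a:.6f}, from 0.300001 -> {b:.6f}")
```

Sensitive dependence is real, but it is only part of the story; stability and predictability live in the same mathematics.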

Posted in Uncategorized | 5 Comments

What is the relationship between path dependence and system stability? With explanation of why I care.

I realized it might help to explain what led me to ask this question in the first place. I submitted a proposal to AEA to talk about how traditional evaluation methods can be used in complex systems. Part of that explanation will have to involve understanding the complex adaptive system (CAS) implications of stability in program impact across time and place. See the end of this post for that proposal.

I’m looking for some sources and opinions to help with a question that has been troubling me lately. I’m struggling with the question of the relationship between:

  • path dependence and
  • system stability.

Or maybe I mean the relationship between path dependence and the ability to predict a system’s trajectory. I’m not sure about the best way to phrase the question. In any case, read on to see my confusion.

I’m bumping into a lot of people who believe that systems are unstable/unpredictable because of path dependence. This is one of those notions that seems right but smells wrong to me. It seems too simple, and it does not make sense to me because it implies that if systems are predictable there is no path dependence operating. That can’t be right, can it? Here is a counterexample. Continue reading
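For what it’s worth, a standard illustration of the two coexisting (my sketch, not necessarily the counterexample behind the link) is a Pólya urn: every draw depends on the entire history of draws, yet each run converges to a stable limiting fraction.

```python
# Pólya urn: start with one red and one blue ball; each ball drawn is returned
# along with another ball of the same color. The process is strongly path
# dependent (early draws shape the limit), yet every run settles to a stable fraction.
import random

def polya_run(draws=10_000, seed=None):
    rng = random.Random(seed)
    red, blue = 1, 1
    for _ in range(draws):
        if rng.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
    return red / (red + blue)

for seed in range(3):
    print(f"run {seed}: limiting red fraction ~= {polya_run(seed=seed):.3f}")
```

Each run is stable and, late in its history, quite predictable; which limit a run lands on is where the path dependence lives.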

Posted in System Stability and Sustainability | 12 Comments

Another post on joint optimization of uncorrelated program goals as a way to minimize unintended negative consequences

Recently I have been pushing the notion that one reason why programs have unintended consequences, and why those consequences tend to be undesirable, is that programs attempt to maximize outcomes that are highly correlated, to the detriment of multiple other benchmarks that recipients of program services need to meet in order to thrive. Details of what I have been thinking are at:

Blog posts
Joint Optimization of Uncorrelated Outcomes as a Method for Minimizing Undesirable Consequences of Program Action

A simple recipe for improving the odds of sustainability: A systems perspective

Article
From Firefighting to Systematic Action: Toward A Research Agenda for Better Evaluation of Unintended Consequences

Despite all this writing, I have not been able to come up with a graphic to illustrate what I have in mind. I think I finally might have. The top of the picture illustrates the various benchmarks that the blue thing in the center needs to meet in order to thrive. (The “thing” is what the program is trying to help – people, school systems, county governments, whatever.)

[Figure: the blue thing and its benchmarks, before (top) and after (bottom) the program]

The picture on the top connotes the situation before the program is implemented. There is an assumption made (an implicit one, of course) that A, C, D, E, and F can be left alone, but that the blue thing would be better off if B improved. The program is implemented. It succeeds. The blue thing gets a lot better with respect to B. (Bottom of picture.)

The problem is that getting B to improve distorts the resources and processes needed to maintain all the other benchmarks. The blue thing can’t let that happen, so it acts in odd ways to maintain its “health”. Either it works in untested and uncertain ways to maintain the benchmark (hence the squiggly lines), or it fails to meet the benchmark, or both. Programs have unintended consequences because they force the blue thing into this awkward and dysfunctional position.

What I’d like to see is programs that pursue the joint optimization of at least somewhat uncorrelated outcomes. I don’t think it has to be more than one other outcome, but even that would help a lot. My belief is that doing so would minimize the distortion in the system, and thus minimize unintended negative outcomes.
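Here is a back-of-the-envelope sketch of that intuition. It is entirely made up: hypothetical benchmarks A through F, a fixed effort budget, and an arbitrary baseline each benchmark needs to stay healthy.

```python
# Toy model of the "blue thing": six benchmarks sharing a fixed effort budget.
# All numbers are hypothetical; the point is the shape of the trade-off.
BENCHMARKS = ["A", "B", "C", "D", "E", "F"]
BASELINE = 1.0  # effort a benchmark needs just to stay healthy
BUDGET = 8.0    # total effort available

def health(allocation):
    return {b: ("met" if allocation[b] >= BASELINE else "FAILING") for b in BENCHMARKS}

# Strategy 1: maximize B alone; everything else gets starvation rations.
maximize_b = {b: 0.5 for b in BENCHMARKS}
maximize_b["B"] = BUDGET - 0.5 * 5  # B soars, the rest fail

# Strategy 2: jointly optimize B and E; keep everyone else at baseline.
joint = {b: BASELINE for b in BENCHMARKS}
surplus = BUDGET - BASELINE * len(BENCHMARKS)
joint["B"] += surplus / 2
joint["E"] += surplus / 2

for name, alloc in [("maximize B", maximize_b), ("joint B+E", joint)]:
    print(name, health(alloc))
```

B improves less under the joint strategy, but nothing else is forced into the squiggly-line condition.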


Posted in Uncategorized | 2 Comments

Ideological diversity in Evaluation. We don’t have it, and we do need it

I’m about to make a case that the field of Evaluation would benefit from theoreticians and practitioners who are more diverse than they are now with respect to beliefs about what constitutes the social good, and how to get there. Making this argument is not easy for me because it means putting head over heart. But I’ll do my best because I think it does matter for the future of Evaluation.

Examples from the Social Sciences
Think of the social sciences – Economics, Sociology, Political Science.

One does not have to have left-wing inclinations to appreciate Marxian critiques of society and the relationships among classes. That understanding can inform anyone’s view of the world whether or not you think that, overall, Capitalism is a good organizing principle for society. On the other end of the spectrum, even a dyed-in-the-wool lefty would (should?) appreciate that self-interest and the profit motive are useful concepts for understanding why society works as it does, and that it does (might?) produce some social good despite its faults. Would the contribution of the field of Economics be as rich as it is if one of those perspectives did not exist?

Or to take an example from Sociology. Functionalists like Talcott Parsons and Robert Merton lean toward the notion that social change can lead to dysfunction. The existence of theory like that can shape (support? further?) go-slow views about the pace of social change. Or think of the conflict theories of people like Max Weber and C. Wright Mills. Those views support the idea that conflict and inequality are inherent in Capitalism. That’s the kind of theory that could support or shape a rather different view about the need for social change.

So what we have is a diversity of theory that is in some combination based on, or facilitative of, different views of how society should operate. I think the disciplines of Economics and Sociology are better off because of that diversity. More important, we are all better off for having access to these different perspectives as we try to figure out how to do the right thing, or even, what the right thing is.

Evaluation
I am convinced that over the long run, if Evaluation is going to make a contribution to society, it has to encompass the kind of diversity illustrated in the examples above. Why?

One reason is that stakeholders and interested parties have different beliefs about programs – their very existence, choices of which ones to implement, their makeup, and their desired outcomes. How can Evaluation serve the needs of that diversity if there is too much uniformity in our ranks? Also, what kind of credibility do we have if the world at large comes to see our professional associations and evaluations as supportive of only one perspective on the social good and the role of government?

The argument above deals with the design of evaluations and the collection and interpretation of data. But the importance of diversity extends to Evaluation theory as well.

Explaining the value of diversity in Evaluation theory is harder for me because I don’t have a good idea of how it might play out, but I’ll try. It seems to me that right now, all existing Evaluation theory carries the implicit belief that change is a good thing. Change may not work out as we wish because programs may be weak or have unintended consequences. But fundamentally, change is good and the reason to evaluate is to make the change better. Well, what would Evaluation look like if we had evaluation theory that drew from the Functionalist school of Sociology, which takes such a jaundiced view of social change? I have no idea, and emotionally, I’m not sure I want to know because personally I am in favor of intervention in the service of the social good. But on an intellectual level, I know that evaluation based on a conservative (small “c”) view of change would end up producing some very worthwhile insight that I am sure would not come from our present theory.

Moving from Blather to Action
There are numerous impediments to working toward ideological diversity. Mostly, I am convinced that almost everyone in our field has politics that are not too much different from mine. We go into the evaluation business because we think that government is good and we want to make it better. That self-selection bias makes us a pretty homogeneous group that forms into associations that do not throw out the welcome mat for divergent opinion. Maybe the best we can do is make it known that ideological dimensions of diversity are welcome. That itself is not so easy because what does “dimension of diversity” even mean? Still, I think it’s worth a shot.


Posted in Uncategorized | 9 Comments

Agent-based Evaluation Guiding Implementation of Solar Technology


AEGIS: Agent-based Evaluation Guiding Implementation of Solar

DE-FOA-0001496: SOLAR ENERGY EVOLUTION AND DIFFUSION STUDIES II – STATE ENERGY STRATEGIES (SEEDSII-SES)

Business contact:
Mr. Vijay Kohli
President
Syntek Technologies
703.522.1025 ext. 201
vkohli@syntek.org
Technical contact:
Jonathan A. Morell, Ph.D.
Director of Evaluation
Syntek Technologies
734 646-8622
jmorell@syntek.org
Confidentiality statement: This proposal includes information and data that shall not be disclosed outside the Government and shall not be duplicated, used, or disclosed – in whole or in part – for any purpose other than to evaluate this proposal. However, if a contract is awarded to this participant as a result of – or in connection with – the submission of this information and data, the Government shall have the right to duplicate, use, or disclose the data to the extent provided in the resulting contract. This restriction does not limit the Government’s right to use information contained in these data if they are obtained from another source without restriction. The entirety of this proposal is subject to this restriction.

Introduction

AEGIS (Agent-based Evaluation Guiding Implementation of Solar) demonstrates a novel approach to doing program evaluation: combining agent-based modeling with traditional program evaluation, and doing so continually, as the evaluation work unfolds. We propose to test the value of this approach for evaluating programs that promote the goals of SEEDS II, Topic 1, specifically, “Development of new approaches to analyze and understand solar diffusion and solar technology evolution; developing and utilizing the significant solar data resources that are available; improvement in applied research program evaluation and portfolio analysis for solar technologies leading to clearer attribution and identification of successes and trends.”

The field of evaluation has historically fallen short in providing the conceptual understanding and instrumental knowledge that policy makers and planners need to design better programs, or to identify and measure impact. Our hypothesis, supported by our work to date, is that agent-based modeling can improve the quality and contribution of evaluation. Specifically, we will increase stakeholder involvement and the adoption of evaluation recommendations. We propose to apply and evaluate our approach on programs that are designed to reduce the soft costs of solar deployment and to overcome barriers to diffusion, commercialization, and acceptance.
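As a hedged sketch of what the agent-based half of this approach can look like (my own minimal toy, not the AEGIS model itself), consider a threshold model of solar adoption: each household adopts once enough of its contacts have, and the evaluator watches how the diffusion curve responds to a subsidy-like nudge.

```python
# Minimal toy agent-based model of solar adoption (illustrative only; not the
# AEGIS model). A household adopts when the adopting share of its contacts,
# plus a program "nudge," exceeds its personal threshold.
import random

def simulate(n=500, contacts=8, nudge=0.0, steps=30, seed=42):
    rng = random.Random(seed)
    thresholds = [rng.random() for _ in range(n)]
    adopted = [i < 5 for i in range(n)]  # a few seed adopters
    links = [rng.sample(range(n), contacts) for _ in range(n)]
    for _ in range(steps):
        for i in range(n):
            if not adopted[i]:
                share = sum(adopted[j] for j in links[i]) / contacts
                if share + nudge >= thresholds[i]:
                    adopted[i] = True
    return sum(adopted) / n

print(f"adoption without program: {simulate(nudge=0.00):.0%}")
print(f"adoption with a modest nudge: {simulate(nudge=0.15):.0%}")
```

Coupling a model like this to field data, and re-fitting it as the evaluation unfolds, is the kind of continual interplay the proposal describes.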

Scientific Justification and Work to Date

Continue reading

Posted in Uncategorized | Leave a comment