AEGIS: Agent-based Evaluation Guiding Implementation of Solar
DE-FOA-0001496: SOLAR ENERGY EVOLUTION AND DIFFUSION STUDIES II – STATE ENERGY STRATEGIES (SEEDSII-SES)
Business contact: Mr. Vijay Kohli, President, Syntek Technologies, 703.522.1025 ext. 201, vkohli@syntek.org
Technical contact: Jonathan A. Morell, Ph.D., Director of Evaluation, Syntek Technologies, 734.646.8622, jmorell@syntek.org
Confidentiality statement: This proposal includes information and data that shall not be disclosed outside the Government and shall not be duplicated, used, or disclosed – in whole or in part – for any purpose other than to evaluate this proposal. However, if a contract is awarded to this participant as a result of – or in connection with – the submission of this information and data, the Government shall have the right to duplicate, use, or disclose the data to the extent provided in the resulting contract. This restriction does not limit the Government’s right to use information contained in these data if they are obtained from another source without restriction. The entirety of this proposal is subject to this restriction.
Introduction
AEGIS (Agent-based Evaluation Guiding Implementation of Solar) demonstrates a novel approach to program evaluation: combining agent-based modeling with traditional evaluation methods, and doing so continually as the evaluation work unfolds. We propose to test the value of this approach for evaluating programs that promote the goals of SEEDS II, Topic 1, specifically, “Development of new approaches to analyze and understand solar diffusion and solar technology evolution; developing and utilizing the significant solar data resources that are available; improvement in applied research program evaluation and portfolio analysis for solar technologies leading to clearer attribution and identification of successes and trends.”
The field of evaluation has historically fallen short in providing the conceptual understanding and instrumental knowledge that policy makers and planners need to design better programs, or to identify and measure impact. Our hypothesis, supported by our work to date, is that agent-based modeling can improve the quality and contribution of evaluation; specifically, that it will increase stakeholder involvement and the adoption of evaluation recommendations. We propose to apply and evaluate our approach on programs designed to reduce the soft costs of solar deployment and to overcome barriers to diffusion, commercialization, and acceptance.
Scientific Justification and Work to Date
The scientific basis and theoretical justification for AEGIS have been articulated in publications by two members of the research team, Dr. Jonathan A. Morell (a psychologist specializing in program evaluation) and Dr. H. Van Dyke Parunak (a computer scientist specializing in complex systems and agent-based modeling).[1] Our ongoing research in education evaluation is funded by the Faster Forward Fund.[2] The work to date and its intellectual underpinnings fall into two categories: modeling and simulation as an evaluation tool, and agent-based simulation as the preferred modeling methodology. The third member of our team, Dr. Gretchen Jordan, is an expert in evaluating energy programs.
Modeling and Simulation
Modeling and simulation are used in evaluation[3], but we know of no examples in which empirical data collection and modeling nurture each other as an evaluation proceeds from initial design through data analysis (Figure 1).
Caution is needed when using models because they cannot be relied on for accurate prediction, for two reasons. Orrell emphasizes that the world is full of sensitive dependence on initial conditions, which compromises the predictive power of models per se. Weisberg identifies the limitations of traditional statistical reasoning when studying the human condition.[4] But models can be very useful for probing why systems work as they do, and for conjecturing about how they may behave and what they might produce. To be maximally useful, though, the models have to evolve with the phenomena they describe. Otherwise the gap between model and reality quickly becomes too large and the models lose their value.
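To make the loop in Figure 1 concrete, the following Python fragment is a minimal sketch of one way a model can be kept in step with incoming evaluation data: each wave of observations is compared with the model's prediction, and a parameter is recalibrated accordingly. The diffusion rule, the influence parameter, the learning rate, and the observed values are hypothetical placeholders, not elements of a committed AEGIS design.

```python
# Minimal sketch of the model/data loop in Figure 1: as each wave of evaluation
# data arrives, a model parameter is recalibrated so the simulation keeps pace
# with the program it describes. All values below are hypothetical placeholders.

def simulate_adoption(influence, periods=10, seed_share=0.02):
    """Toy diffusion curve: adoption grows in proportion to current adopters."""
    share = seed_share
    curve = []
    for _ in range(periods):
        share = min(1.0, share * (1 + influence))
        curve.append(share)
    return curve

def recalibrate(influence, observed_share, predicted_share, learning_rate=0.5):
    """Nudge the influence parameter toward whatever reproduces the observation."""
    error = observed_share - predicted_share
    return max(0.0, influence + learning_rate * error)

influence = 0.30                       # initial guess from program theory
observed_waves = [0.025, 0.04, 0.07]   # stand-in for successive data collections
for wave, observed in enumerate(observed_waves, start=1):
    predicted = simulate_adoption(influence, periods=wave)[-1]
    influence = recalibrate(influence, observed, predicted)
    print(f"wave {wave}: observed={observed:.3f} predicted={predicted:.3f} "
          f"updated influence={influence:.2f}")
```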
Agent-based Simulation
Above we advocated for a more prominent place for modeling and simulation in evaluation. Here, we advocate for a particular kind of modeling and simulation: an agent-based perspective on complexity. Figure 2 and Figure 3 illustrate the shift we seek to facilitate. Figure 2 is a simplified traditional logic model that we have used to evaluate the introduction of a new “proven best practice”.
Figure 3 is a screenshot of a NetLogo simulation of the static model. The rectangles in the lower left show people’s adoption of the innovation (“percent implementation”) and their confidence in it. These are the data we would expect from a traditional model. The average values could be trusted and useful, but they would tell us nothing about the behavior of individual actors. The black rectangle, however, relates the state of each individual user to the group averages, and reveals a bimodal distribution for “confidence in the innovation,” but not for “use.” Understanding individual behaviors, and the relationship between individual and group behavior, has implications for evaluation that could never be anticipated without an agent-based approach.
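The following Python fragment is a rough stand-in for the kind of agent model Figure 3 depicts, written only to make the distinction concrete. The attributes (“use” and “confidence”), the bounded-confidence influence rule, and all parameter values are illustrative assumptions rather than the actual NetLogo model; the point is that the group averages a traditional evaluation reports can mask structure, such as distinct confidence clusters, that only appears at the individual level.

```python
# Illustrative agent sketch (not the AEGIS NetLogo model): each agent has an
# observable behavior ("use") and a private belief ("confidence"); assumed
# rules and parameters are chosen only to show averages vs. individual detail.
import random
import statistics

class Practitioner:
    """One potential adopter of the 'proven best practice'."""
    def __init__(self):
        self.uses_innovation = False              # observable behavior
        self.confidence = random.uniform(0, 1)    # private belief in the innovation

    def step(self, peer):
        # Bounded-confidence influence rule (an assumption): agents move toward
        # peers only when views are already close, which can split the
        # population into distinct confidence clusters.
        if abs(peer.confidence - self.confidence) < 0.3:
            self.confidence += 0.3 * (peer.confidence - self.confidence)
        self.confidence = min(max(self.confidence + random.gauss(0, 0.02), 0.0), 1.0)
        # Use depends partly on confidence, partly on other pressures.
        self.uses_innovation = random.random() < 0.3 + 0.5 * self.confidence

agents = [Practitioner() for _ in range(200)]
for tick in range(100):
    for a in agents:
        a.step(random.choice(agents))

# Group-level averages: what a traditional logic-model evaluation would report.
print("percent implementation:", sum(a.uses_innovation for a in agents) / len(agents))
print("mean confidence:", statistics.mean(a.confidence for a in agents))

# Individual-level distribution: the structure that group averages hide.
histogram = [0] * 10
for a in agents:
    histogram[min(int(a.confidence * 10), 9)] += 1
print("confidence histogram:", histogram)
```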
Potential Impact and Metrics
We expect the incorporation of agent-based modeling into evaluation to have two major impacts: increased stakeholder involvement in the evaluation process, and increased adoption of the recommendations that process develops.
Stakeholder involvement and input will increase because stakeholders will be engaged in the model building and interpretation of output. Technical evaluation naturally involves sophisticated theories and statistical analysis that are largely inaccessible to stakeholders. However, we have found that constructing and interacting with an executable model of the process engages people’s attention and stimulates their involvement in the process. They will become engaged more deeply, and interact with the evaluation team more often, than in conventional evaluation.
A second major impact is increased adoption of evaluation recommendations by stakeholders. This impact is driven by their engagement, and also by the ability of agent-based evaluation to detect design shortfalls and changing program circumstances that more traditional approaches miss, thus yielding higher-quality evaluation results that are more compelling to stakeholders.
In principle, both stakeholder involvement and recommendation adoption can be measured quantitatively. Doing so, however, would require running parallel evaluations of the same system, one with and one without the support of agent-based modeling, and such a comparison would be costly and organizationally difficult. Instead, because the project principals have extensive experience in large-scale program evaluation and networks of colleagues engaged in similar work, we will use a peer-review process, with structured questionnaires, to compare the agent-based evaluations we perform in AEGIS with previous engagements.
Proposed Research
The purpose of the proposed research is to determine the impact of combining an agent-based modeling capability with traditional evaluation methods on various aspects of the evaluation, including: program theory, evaluation design, data analysis, data interpretation, stakeholder involvement, and use of the evaluation results.
There is a considerable research literature specifically on the diffusion of solar technologies,[5] and precedent for using agent-based modeling in studies of solar innovation adoption.[6] There is also a rich literature on explanatory and predictive models of innovation adoption.[7] There is not, however, research that combines elements of these three areas: agent-based models, theories of innovation adoption, and specific research on the adoption of solar technologies. We will effect this synthesis and test its value in promoting the adoption of solar technologies. To this end we will recruit a partner who is engaged in solar technology adoption activities. An initial candidate, not yet confirmed, is DTE Energy, with whom we have connections through the University of Michigan, and its SolarCurrents program, which obtains easement rights to locate large (100 kW to 500 kW) solar arrays on suitable property in southeast Michigan. Previous phases of this program offered financial incentives to customers who installed photovoltaic systems with a capacity of 1 kilowatt (kW) to 20 kW. Another candidate is the Solar Gardens program of Consumers Energy. These are only two examples of the strong commitment to solar diffusion in southeast Michigan that will provide our team with a rich array of programs on which to test our approach.
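As a purely illustrative sketch of the synthesis we intend, the Python fragment below combines a Bass-style diffusion rule (external and peer influence, a standard element of innovation-adoption theory) with a solar-specific economic lever (simple payback under an incentive). Every name, formula, and parameter value in it is an assumption chosen to show the structure of such a model, not a description of any partner program; in the actual project the adoption rule, cost data, and incentive design would come from the program being evaluated and its stakeholders.

```python
# Illustrative synthesis sketch: agent-based model + innovation-adoption theory
# + a solar-specific program lever. All parameters and formulas are assumptions.
import random

P_INNOVATE = 0.01   # spontaneous adoption (Bass "p", external influence)
Q_IMITATE = 0.30    # peer influence (Bass "q", internal influence)

class Household:
    def __init__(self):
        self.adopted = False
        # Tolerated simple payback (years); heterogeneity stands in for
        # Rogers-style adopter categories.
        self.payback_tolerance = random.uniform(5, 15)

def simple_payback(system_cost, incentive, annual_savings):
    """Years to recoup net cost: one 'soft cost' lever a program can move."""
    return (system_cost - incentive) / annual_savings

def step(households, incentive):
    peer_share = sum(h.adopted for h in households) / len(households)
    payback = simple_payback(system_cost=18000, incentive=incentive,
                             annual_savings=1500)
    for h in households:
        if h.adopted:
            continue
        # Adoption requires both an acceptable payback and Bass-style influence.
        if payback <= h.payback_tolerance:
            if random.random() < P_INNOVATE + Q_IMITATE * peer_share:
                h.adopted = True

households = [Household() for _ in range(1000)]
for year in range(15):
    step(households, incentive=4000)   # e.g., a rebate program being evaluated
    print(year, sum(h.adopted for h in households))
```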
Working with our partner, we will identify a solar adoption effort and implement an evaluation. Our approach will differ from traditional evaluation, however, by including an agent-based modeling component, as shown in Figure 1. While the evaluation of the program in question will be worthwhile in its own right, we emphasize that the research question is the value of combining agent-based modeling with traditional methods when evaluating solar promotion efforts.
For two reasons, it is premature to specify the model before a specific evaluation scenario is identified. First, moving from a generic awareness of relevant issues to the specifics needed for designing evaluations and models requires detailed understanding of the situation at hand. Second, if the evaluation is to be both valid and useful, stakeholders must be involved in its design. In fact, the role of the stakeholders in interpreting model output, and in suggesting changes, will itself be one of the questions our research design addresses.
It is possible, however, to identify some elements that are likely to be salient in whatever evaluation design and accompanying model are developed. These include 1) the positions of various stakeholders, e.g., realtors, lenders, utilities, installers, equipment owners/leasers, and buyers; 2) environmental factors such as the regulatory climate (state and local) and cost drivers; and 3) the specific nature of whatever solar promotion efforts are taking place. For instance, the evaluation and accompanying model for a “group buy-in” program would be quite different from one for a program based on building public awareness or developing community coalitions.
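One possible way to represent such elements as inputs to an agent-based model is sketched below in Python. The field names, categories, and example values are assumptions that would be replaced, with stakeholder input, once a specific program is selected.

```python
# Illustrative configuration structure (not a committed design) for the salient
# elements above: stakeholder positions, the regulatory/cost environment, and
# the nature of the solar promotion effort. All names and values are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class StakeholderType:
    role: str            # e.g., "realtor", "lender", "utility", "installer", "buyer"
    count: int
    influence: float     # relative weight in adoption decisions (assumed 0-1 scale)

@dataclass
class Environment:
    state_policy: str            # e.g., net-metering or interconnection rules
    local_permitting_days: int
    installed_cost_per_watt: float

@dataclass
class ProgramDesign:
    kind: str                    # "group buy-in", "public awareness", "community coalition", ...
    incentive_per_system: float = 0.0
    outreach_channels: List[str] = field(default_factory=list)

@dataclass
class ScenarioConfig:
    stakeholders: List[StakeholderType]
    environment: Environment
    program: ProgramDesign

# Example configuration for a hypothetical group buy-in campaign.
scenario = ScenarioConfig(
    stakeholders=[StakeholderType("installer", 12, 0.6),
                  StakeholderType("buyer", 5000, 0.3)],
    environment=Environment("net metering", local_permitting_days=30,
                            installed_cost_per_watt=3.2),
    program=ProgramDesign("group buy-in", incentive_per_system=2000,
                          outreach_channels=["community meetings", "mailers"]),
)
```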
Core Members of the Research Group
All members of the team have worked with one another over an extended period of time, across multiple projects.
Jonathan A. Morell, PhD, is an organizational psychologist with extensive experience in the theory and practice of program evaluation. Jonny’s current hands-on evaluation work includes safety programs in industry, R&D evaluation, tools to monitor R&D outcomes, and evaluation capacity building. Jonny has also made theoretical contributions to the field involving evaluation methods for programs that exhibit unexpected behaviors and the application of complexity theory to evaluation. His views are set out in his recent book, Evaluation in the Face of Uncertainty: Anticipating Surprise and Responding to the Inevitable (Guilford, 2010), and in his journal articles and blog posts. Details of his work can be found at his website, www.jamorell.com, and his blog, www.evaluationuncertainty.com.
Van Dyke Parunak, PhD, is Vice President for Technology Innovation at AxonAI. He has done extensive research in chaos and complex systems, artificial intelligence, distributed computing, and human interfaces, and is known internationally for his work on agent-based modeling. He is the author or co-author of more than 100 technical articles and reports (available at www.abcresearch.org/papers), and has collaborated with Dr. Morell on several previous projects applying agents and complexity theory to evaluation.
Gretchen Jordan, PhD, is an independent consultant whose work focuses on a systems view of innovation and on program and evaluation design that considers the full range of research, development, and market adoption initiatives and the logical connections among them. Dr. Jordan retired as a Principal Member of Technical Staff at Sandia National Laboratories in 2012. Since 1993 she has worked on evaluation and performance measurement at the project, program, and portfolio levels with the U.S. Department of Energy (DOE) Energy Efficiency and Renewable Energy (EERE) offices, the DOE Office of Science, and the Sandia Science and Technology Strategic Management Unit.