During each of the first three weeks in January I will be publishing a blog post on how complexity can be applied in evaluation. They are not ready yet, but they are close. Below is the common introduction that I will be using for each of the posts.
Common Introduction to all Three Posts
Part 1: Complexity in Evaluation and in Studies on Complexity
In this section I will talk about using complexity ideas as practical guides and inspiration for conducting evaluation, and about how those ideas hold up against what is known from the study of complexity. A perfect fit is by no means necessary. It’s not even a good idea to try to make it a perfect fit. But the extent of the fit can’t be ignored, either.
Part 2: Complexity in Program Design
The problems that programs try to solve may be complex. The programs themselves may behave in complex ways when they are deployed. But the people who design programs act as if neither their programs, nor the desired outcomes, involve complex behavior. (I know this is an exaggeration, but not all that much. Details to follow.) It’s not that people don’t know better. They do. But there are very powerful and legitimate reasons to assume away complex behavior. So, if such powerful reasons exist, why would an evaluator want to deal with complexity? What’s the value added in the information the evaluator would produce? How might an evaluation recognize complexity and still be useful to program designers?
Part 3: Turning the Wrench: Applying Complexity in Evaluation
Considering what I said in the first two blog posts, how can I make good use of complexity in evaluation? In this regard my approach to complexity is no different from my approach to ANOVA or to doing a content analysis of interview data. I want to put my hands on a tool and make something happen. ANOVA, content analysis, and complexity are different kinds of wrenches. The question is which one to use when, and how.
Complex Behavior or Complex System?
I’m not sure what the difference is between a “complex system” and “complex behavior”, but I am sure that unless I try to differentiate the two in my own mind, I’m going to get very confused. From what I have read in the evaluation literature, discussions tend to focus on “complex systems”, complete with topics such as parts, boundaries, part/whole relationships, and so on. The complexity literature I have read, however, makes scarce use of these concepts. I find myself getting into trouble when talking about complexity with evaluators because their focus is on the “systems” stuff, and mine is on the “complexity” stuff. In these three blog posts I am going to concentrate on “complex behavior” as it appears in the research literature on complexity, not on the nature of “complex systems”. I don’t want to belabor this point because the boundaries are fuzzy and there is overlap. But I will try to draw the distinction as clearly as I can.
One thought on “Three Coming Blog Posts on Applying Complexity Behavior in Evaluation”
I look forward to reading your posts. I saw your presentation on this topic at the evaluation meetings in Denver. At the time I was working on a review of 18 programs attempting to scale maternal or child health interventions nationally, so it was inevitable that I would end up studying complex adaptive systems. Since then I have tried to distill what I learned into my evaluation practice. I have come up with eight questions, for inclusion in a midterm review or end-of-project evaluation, about how projects work within complex adaptive systems. I will be interested in comparing your thoughts and mine. Here is the presentation if you are interested. http://www.slideshare.net/SocialDimensions/evaluation-amidst-complexity-8-questions-evaluators-should-ask