Matt Keene and I have been having a back-and-forth about how management systems are built relative to the way systems actually behave. Below is a record of our conversation to date.
Desirability of Narrow Rigid Planning
First, I don’t place any negative value judgments on the need for rigid planning; there are good reasons for it, given the nature of the world. Second, I don’t think environmental management differs from any other kind of management in this regard. Everyone is in the same boat. Third, the problem is not management systems; to blame the systems is to shoot the messenger. Management systems match the world in which management takes place, and it is that world that is giving us trouble. Continue reading
I realized it might help to explain what led me to ask this question in the first place. I submitted a proposal to AEA to talk about how traditional evaluation methods can be used in complex systems. Part of that explanation will have to involve understanding the complex adaptive system (CAS) implications of stability in program impact across time and place. See the end of this post for that proposal.
I’m looking for some sources and opinions to help with a question that has been troubling me lately. I’m struggling with the relationship between:
- path dependence, and
- system stability.
Or maybe I mean the relationship between path dependence and the ability to predict a system’s trajectory. I’m not sure of the best way to phrase the question. In any case, read on to see my confusion.
I’m bumping into a lot of people who believe that systems are unstable and unpredictable because of path dependence. This is one of those notions that seems right but smells wrong to me. It seems too simple, and it implies that if a system is predictable, no path dependence is operating. That can’t be right, can it? Here is a counter example. Continue reading
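One standard way to see how path dependence and stability can coexist is the classic Pólya urn (my own illustration, not the counter example from the post): early draws largely fix each run’s long-run composition, which is path dependence, yet every run settles onto a stable, predictable trajectory. A minimal sketch, with the urn parameters and step counts chosen purely for illustration:

```python
import random

def polya_urn(steps, seed):
    """Simulate a Polya urn: draw a ball at random, return it along with
    one more ball of the same color. Returns the fraction of red balls
    after each step."""
    rng = random.Random(seed)
    red, blue = 1, 1
    fractions = []
    for _ in range(steps):
        if rng.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
        fractions.append(red / (red + blue))
    return fractions

# Two runs with different histories. Each is path dependent (the limit
# depends on early draws and differs across runs), yet within a run the
# red fraction stabilizes, so the late trajectory is quite predictable.
run_a = polya_urn(10_000, seed=1)
run_b = polya_urn(10_000, seed=2)
```

The point of the sketch is that "predictable" and "path dependent" answer different questions: the early history determines *which* stable state the system approaches, while the approach itself is stable and forecastable.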
What I tried to do is look at the link between the intellectual traditions of evaluation and the sociology of evaluation from the point of view of evolutionary biology. For simplicity I’m assuming only two points in time in what is a long continuous process, and only two intellectual traditions. I’m also ignoring the possibility of linkages and networks among multiple traditions, and the many reasons other than evaluators’ behavior why the market for evaluation may change. With those gross simplifications, my notion goes like this. Continue reading
In addition to my hissy fit about the “agreement x certainty” matrix I have also been in a bit of a lather about the typology in the Cynefin model that identifies four states of systems – simple, complicated, complex, and chaotic. I like this model and I think it is exceedingly useful for helping people understand the program and evaluation scenarios they are working in. But at the same time I always bristled at it, and I finally figured out why.
As I see it, the way Cynefin draws on complexity concepts only partially overlaps with the way complexity science deals with complexity. Those four domains only partially overlap with what CAS researchers and theoreticians think of as complex systems. Continue reading
ECLIPS (Evaluation Communities of Learning, Inquiry, and Practice about Systems) is a project sponsored by the National Science Foundation to improve the evaluation programs that support Science, Technology, Engineering and Math (STEM). The project is housed at InSites. Beverly Parsons is the PI. I am on the advisory board. Recently some of the project staff (Beverly Parsons, Pat Jessup and Marah Moore) and I had a rambling conversation about the “certainty x agreement” graph that is becoming so common among those of us who are trying to apply systems concepts in evaluation. I am not a big fan of the graph. Others are. Below is a somewhat cleaned up and edited version of our back and forth on this topic. We used colors to differentiate our responses.
Here is how to read this: black is my original post, which the discussion breaks up, so to read the original you have to go through and read only the black text first. Blue marks my responses to what other people said. All the other colors are comments from Bev, Pat, and Marah; I don’t remember who used which color.
Discussion in the first session of the Azenet Tucson book club covered the theory material – the section on using explanatory power, along with the introduction to life cycle behavior (p. 49). The most common evaluation activity among our members is evaluation of state or federally funded programs (DOE, SAMHSA, OJJDP, BJA). Common characteristics: Continue reading
I could use some help muddling through a question that has been rattling around in the inner recesses of my brain. I have no data to support this, but I’d bet that when people look at the results of an evaluation they think in terms of normal, or at least symmetrical, distributions. Even if the evaluation were qualitative, the thinking is along the lines of: “The program seems to have done X, Y and Z. If I do it again it will do something close to X, Y and Z. If I get lucky it will do a lot better. If I get unlucky it will do a lot worse, but X, Y and Z seem like a good bet”. Continue reading
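To see why the symmetric-bet intuition can mislead, here is a toy simulation (my own illustration, not anything from the post): if a program’s outcome arises from many small influences that compound multiplicatively, the outcome distribution is lognormal and right-skewed, so the mean result sits well above the typical replication. All parameters here are illustrative assumptions.

```python
import random
import statistics

rng = random.Random(0)

# Hypothetical replications of a program's outcome. Many small effects
# compounding multiplicatively yield a lognormal, right-skewed
# distribution: a few big wins pull the mean up.
outcomes = [rng.lognormvariate(0.0, 1.0) for _ in range(10_000)]

mean = statistics.fmean(outcomes)     # pulled up by the right tail
median = statistics.median(outcomes)  # the "typical" replication
share_below_mean = sum(x < mean for x in outcomes) / len(outcomes)
```

Under this toy model, well over half of the replications fall below the mean, so "something close to X, Y and Z" is not the safe bet a symmetric distribution would make it.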