A rolling conversation about the “Agreement x Certainty” space.

ECLIPS (Evaluation Communities of Learning, Inquiry, and Practice about Systems) is a project sponsored by the National Science Foundation to improve the evaluation of programs that support Science, Technology, Engineering and Math (STEM). The project is housed at InSites. Beverly Parsons is the PI. I am on the advisory board. Recently I and some of the project staff (Beverly Parsons, Pat Jessup and Marah Moore) had a rambling conversation about the “certainty x agreement” graph that is becoming so common among those of us who are trying to apply systems concepts in evaluation. I am not a big fan of the graph. Others are. Below is a somewhat cleaned up and edited version of our back and forth on this topic. We used colors to differentiate our responses.

Here is how to read this. Black is my original post. The discussion breaks up the post, so to read my original you have to go through and read only the black first. Blue marks my responses to what other people said. All the other colors are comments Bev, Pat, and Marah made. I don’t remember who used which color.


Bev and Pat –

I feel like a real party pooper about all this, or maybe the male equivalent of the Wicked Witch of the West, but the more I think about this framework, the less I like it. I’m very ambivalent about what practical advice I would give anyone on using the “certainty by agreement” framework. I like to intellectualize a lot, but when it comes to hands-on evaluation I am a very practical fellow. “Do what works” is what I do, and it’s what I tell others to do. In that vein, if “agreement by certainty” helps someone do evaluation, I say “go for it”. But my other side objects for three reasons.

  • It is wrong.
  • It leads to a misunderstanding of systems.
  • It diverts people from the ways in which the intellectual contribution of systems perspectives can help evaluation.

If I had to sum up my problem with “certainty by agreement” I would say this:

  • “certainty” and “agreement” are not legitimate systems concepts.
  • If they were not legitimate but were useful, I’d be OK with using them. But they are also not useful. (For me, at least. As I said, if they work for others, that’s fine with me.)

I’m not sure that the certainty-agreement framework claims to be a science-based systems concept.

I thought we were presenting it in systems terms. If not, then that assuages a lot of my heartburn. But the social context (ECLIPS) screams “systems”. And notions in the graph (e.g. “organized,” “adaptive,” “self-organizing”) are systems concepts. So to me it walks like a duck and quacks like a duck.

I’m seeing a difference between the content that is being presented, i.e., the types of system dynamics, and the form that is being used to present these concepts. It seems to me that the graph is being used as a means/tool to help explain the three types of system dynamics rather than presenting agreement-certainty as systems concepts. Or maybe that’s because I don’t tend to focus on agreement-certainty as something to measure or get real specific about, but more as a general frame within which to look at the shifts in the system dynamics. Even though certainty-agreement are on this scale of high-low, I don’t tend to think about these as something we measure specifically but as conditions to consider to help me understand how the dynamics could change as conditions change. I guess I’m one of those evaluators who use visuals like this to help me understand concepts rather than trying to apply them concretely. I tend to think that there will be turmoil of various sorts where things are changing and morphing from one type of dynamic to the other. The boundary areas of the various dynamics are where things usually are most contested.

4-1-13 BP: Yes, Pat, that is the point I was thinking about as well. My point about social science disciplines wasn’t very clear or possibly not even useful/accurate, since systems concepts are in social sciences too. Since the systems dynamics orientation comes from the “hard” sciences, I was focused on building that bridge. Your point is the more important one. The correlation between A&C and the types of system dynamics is only moderately present. The point is simply to give a visual representation that gives a newcomer a way to think about the dynamics in relationship to something they are familiar with. The visual also is limited because it makes the dynamics seem too separate but that’s the challenge of many visuals. It only conveys a part of the concept.

It seems that what is in the visual is systems concepts meeting social science disciplines. So what we have here is a combination of the two.

I don’t see it as a combination any more than systems in ecology is a combination of systems and ecology. It is simply that some social phenomena do constitute systems and can be explained in those terms. By the way, one of my favorite books is Edge of Organization. He takes all the well-known theories of organizational behavior and adds a CAS perspective to them. It’s great.

What it is attempting to show is that three types of system dynamics (static or dynamic, self-organizing, and random) can morph from one to the other as conditions change.

You bet. But if that is not a systems view I don’t know what is. The duck is quacking.

It doesn’t say anything about what the characteristics are of complex adaptive systems for example.

Rather it says that as conditions change, they can move a dynamic or dynamical system that is primarily in one state to a bifurcation that transforms it into a different state.

Dynamical. Bifurcation. Quack!

The overlay of certainty and agreement is saying that if you are in a socially defined situation where there is a lot of agreement among people and things tend to happen in a fairly predictable way, a cause and effect model will likely work fairly well to use as a frame to understand the situation.

One thing to think about is what it means to say “things happen in a fairly predictable way”. The timeframe matters. There could be great predictability over a short timeframe and little over a longer one. Or the opposite may be true: in the short term there may be a lot of unpredictability, but there are long-term regularities. Think climate and weather. Short-term forecasts (weather) are loaded with uncertainty. Most climate scientists would say that despite this, there is long-term predictability on climate. (I admit this is a bit of a stretched example because it assumes that “weather” aggregates to “climate” in the long run, and this is not really true.)

The other thing I think about is what “agreement” means. My difficulty is that “agreement” is a layered multidimensional concept that AxC treats in a unitary fashion.  As an example, what is the level of aggregation? People? Organizations? Coalitions? It’s easy to see high agreement among some levels of aggregation mixed with low levels of agreement at others.

There is also the problem of the extent to which “agreement” drives behavior. It may not. Let’s say there is great disagreement among people who work for different organizations, but the organizations are following a policy that aligns their behavior. (This happens all the time.) How would we classify the degree of “agreement” here?

Want a depressing example? Think Congress. There is great agreement by every one of the 535 members of Congress that the legislative system is not working. But individual beliefs do not affect what will happen. Some people think it is a good thing the system is not working. After all, government is bad, so less working government is better. Politics and ideology trump psychology. I may think the system is not working, but I am loyal to my caucus, whose goals I believe in. So I agree the system is not working, but I also agree that it’s worth putting up with it to further the goals of my party.

To think of AxC usefully it is necessary to agree on an approximate level of aggregation, definitions, etc. This can be done. It’s akin to deciding what boundaries to consider in a systems analysis. We do this all the time. But AxC alone does not help very much unless this kind of scoping exercise is done.

Yes, of course. Each of the variables has to be unpacked. As a starting point for thinking about it, I find it useful to say, hmm, if there is moderate agreement and predictability/certainty, there are likely to be some patterns to look at that better fit a model that builds from complex adaptive systems theory (and has the kinds of characteristics you were describing—emergence, self-organizing, attractors, adaptation, feedback loops).

I’m not so sure. I can easily see CAS behavior in other regions of the framework.

Here is an example of CAS behavior in a region near the origin of AxC. Think of the “neighborhood segregation” simulation. Drop red and blue square-shaped families randomly into a neighborhood. Tell them to look around and follow three simple rules: 1) if 2 of my neighbors are my color, stay put; 2) if fewer than 2 of my neighbors are my color, move; 3) ignore the color of other families in the neighborhood that don’t touch me. There is complete agreement on the rules. There is complete certainty on how the rules will be followed. The emergent behavior is totally predictable – an almost totally segregated neighborhood after a few iterations of the simulation. Yes, it is true that the results are predictable, just as we would expect in that region of the graph. But the reason for the predictability is emergent behavior from interacting autonomous agents. That is CAS behavior.
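The simulation above can be sketched in a few lines of code. This is a minimal, hypothetical version (grid size, empty-lot fraction, and the choice of where movers go are my own assumptions, not part of the original thought experiment):

```python
import random

def make_grid(n=20, empty_frac=0.2, seed=1):
    """Random n x n neighborhood of Red/Blue families with some empty lots."""
    random.seed(seed)
    cells = [None if random.random() < empty_frac
             else random.choice("RB") for _ in range(n * n)]
    return [cells[i * n:(i + 1) * n] for i in range(n)]

def neighbor_counts(grid, r, c):
    """Return (like-colored, occupied) counts among the 8 touching lots."""
    n, me = len(grid), grid[r][c]
    like = occupied = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr, dc) != (0, 0) and 0 <= rr < n and 0 <= cc < n:
                if grid[rr][cc] is not None:
                    occupied += 1
                    like += grid[rr][cc] == me
    return like, occupied

def step(grid):
    """One iteration: families with fewer than 2 like neighbors move to a random empty lot."""
    n = len(grid)
    movers = [(r, c) for r in range(n) for c in range(n)
              if grid[r][c] is not None and neighbor_counts(grid, r, c)[0] < 2]
    empties = [(r, c) for r in range(n) for c in range(n) if grid[r][c] is None]
    for r, c in movers:
        if not empties:
            break
        i = random.randrange(len(empties))
        er, ec = empties[i]
        grid[er][ec], grid[r][c] = grid[r][c], None
        empties[i] = (r, c)  # the vacated lot becomes available

def mean_similarity(grid):
    """Average fraction of occupied neighboring lots sharing each family's color."""
    n, fracs = len(grid), []
    for r in range(n):
        for c in range(n):
            if grid[r][c] is not None:
                like, occ = neighbor_counts(grid, r, c)
                if occ:
                    fracs.append(like / occ)
    return sum(fracs) / len(fracs)

grid = make_grid()
before = mean_similarity(grid)  # near 0.5: the colors start well mixed
for _ in range(30):
    step(grid)
after = mean_similarity(grid)   # higher: clusters have emerged
```

Nobody in this little world wants segregation per se, and every agent follows the agreed rules with complete certainty; the clustering is emergent behavior from interacting autonomous agents, which is exactly the point.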

This doesn’t sound like autonomous agents. It sounds like they are being controlled by these rules that have been imposed on them.

In situations where there is low agreement and certainty, you don’t have much chance of finding patterns until you start to dig more deeply. Here is where you have to come back to recognizing that these three types of dynamics are not neatly separated as in the visual.

It seems to me that there could very well be patterns. As an example: let’s say there is low agreement and low certainty, but a great deal of bargaining and negotiation is going on. There is a very obvious pattern: “Talk to the other guy, formulate a response, communicate, repeat”.

Again, I think we are thinking about agreement in different ways.

Or for another example, maybe the opposite is happening. Each group eyes the other and goes about setting mechanisms in motion to defend itself. Another pattern: “Spy on the other guy, see what he is doing, formulate a defense, implement defense, spy on the other guy again”. That seems like patterned behavior to me.

In this example there is good agreement on the rules. Yes, I would put that in one of the other sections of the visual.

Of course these examples assume a particular definition of “certainty”, which is: “I know what I want to do but I don’t know what the other guy wants to do”. That is quite different from “I don’t know what I want to do”, which is different from “I know my goals but I don’t know how to implement them”. So as with the notion of “agreement” there are lots of definitional issues to consider, and the different definitions lead to different ways of thinking about systems. That’s OK with me, but I don’t think you can plop AxC in front of people and not get into the definition stuff.

Agree.

You can, however, watch to see if some of what seems unpatterned starts to reach a conversion point where it does take on some patterns and moves to either of the other two states.

This is exactly right, but it also highlights the limitation of thinking only in terms of AxC.

Agree. It is a starting point for discussion.

Read on (a few paragraphs below) for my discussion of entropy and Maxwell’s demon with respect to patterns and randomness. But for now I’ll just say that I am using “unpatterned” to mean some kind of dynamic tension where no change takes place, rather than random in the sense of a gas in equilibrium, where each molecule might move, but the system itself is truly unpatterned. That said, the comment about reaching a conversion point is exactly right. I’ll give you an example.

Let’s think of Congress again. (And let us stop to shed a tear.) We have two broad groups driving the show: the Democrats and the Tea Party. (Oversimplified, I know, but it works for my example.) Each is firm in its beliefs. But while firm, the world being what it is, we can assume fluctuations around the “mean” of the true score of the belief. So at any given time:

  • The Democrats may think: “Maybe Paul Krugman is wrong and short- to medium-term deficits really are a problem.”
  • And some of my Tea Party friends may think: “Maybe we are wrong that small government and low taxes on the rich are the only way to improve the social good.”

If these random fluctuations aligned, even briefly, there may well be a brief but meaningful dialogue between the groups. “Brief”, however, may lead to mutually rewarding conversation that leads to deeper, longer-range negotiation. (An example of path dependence.)

What has happened? The situation was static but unstable. The brief closing of the gap in belief brought about a very welcome opportunity to change the system. Once people started talking to each other the situation became even less stable, i.e. less predictable.

Moreover, at the same time that predictability goes down, clear observable patterns appear, aka the negotiation process.

Systems Concepts

Pick any source that pleases your fancy about complex systems. Therein will be found many juicy concepts: edge of chaos, attractors, adaptation, feedback loops, latencies of feedback loops, path dependence, and on and on. Nowhere will you find concepts of “certainty” and “agreement”. They are just not there in the literature, the theorizing, or the empirical research. “Agreement” is a social psychological construct. “Certainty” is a psychological construct. Or if you want to think on a group level, both are social psychological constructs. Neither has anything to do with complex systems. It’s like saying that some notion in sociology, say “role conflict”, has something to do with chemistry.

I agree. Certainty and agreement aren’t system concepts that come from the study of complexity in the hard sciences. I don’t think they are claimed to be.

Again, quack.

They are a couple of dimensions in social situations that can be brought together with systems concepts to help bring understanding of patterns that are arising from system dynamics.

These social/psychological constructs are practical vehicles to help the evaluator (or others) get some clues as to models that might be useful.

Of course this is true, but why only those two concepts? What makes them so important that others (e.g. power, role conflict, economic self-interest, culture) are not brought into the discussion? And by using the “AxC” framework, we are elevating those two concepts above all others. And in any case this whole discussion is about constructs, not theory. If we want to understand systems, why not talk about market behavior, or social exchange theory, or stages of organizational growth and decline, or organizational culture? Somehow, agreement and certainty trump all the other constructs and ignore social science theory. I just don’t get that.

Maybe it’s time for someone else to work on bringing in some other concepts. Maybe when Ralph Stacey started this line of thinking, it was his early attempt to delve into these concepts and connections. (Remember, he doesn’t use this display or discussion in the later editions of his book.)

Low Certainty and Low Agreement Can be Very Stable

The “certainty” / “agreement” framework says that conditions of low certainty and low agreement are unstable.

I don’t know if that is the case.

After looking at it again I realized that you are completely right. See below on my view of randomness.

It’s simply saying that in situations where you can’t see patterns, you can’t expect to predict cause and effect relationships.

The situation may indeed be stable in its unpredictability/uncertainty and lack of agreement.

Do we really want to call random behavior stable? In some sense it is stable. Think of a gas in a perfectly isolated container. (Impossible in the real world, but theory is theory.) There is no Maxwell’s demon. Entropy maximizes and the system is as stable as a system can possibly get. But I don’t think that is the kind of stability we really mean when we talk about systems. Maybe we should be talking about dynamic stability, but that is another story, as in my government example above.

Yes, dynamic stability would be that patterns in dynamic (living) situations keep recurring over time.

In any case as an outside observer, how would you differentiate between stability due to randomness and stability due to the balancing of forces? For that matter there is quite a bit of stability near the origin as well. So here we have three kinds of stability. From the point of view of whether or not the system changes we may not care. No change is no change. But for us, who want to evaluate a program, the type of stability makes a big difference in the metrics and methodologies we would use. AxC does not address any of this.

Good reason not to think this visual can help us in all situations. Here’s where that old saying applies: when you have a hammer, you treat everything as a nail.

But I can think of lots of conditions of low certainty and low agreement that are very stable. Some examples:

  • A Nash equilibrium is at work. Nobody is happy with the status quo, but any group that changes course on its own will make things worse for itself. We have the best available solution for the greatest number of groups, even though everyone may hate it. I suppose you could call that a state of “high agreement”, but that seems to stretch the concept.
  • There is a federal regulation or piece of legislation preventing change. Nobody agrees on goals and everyone is uncertain about the effectiveness of what is in place or of whatever might be put in place, but none of that matters. Regulation and legislation will maintain the status quo.

Agree.

  • One interested party is so powerful that, because it likes the status quo, nothing is going to change. (Agree.)
  • Despite low certainty and low agreement, one thing that people are certain about is that nobody can come up with a better solution. (This happens all the time. People complain and complain until you say to them: OK, what would you do? Then they shut up because they have no idea.)
  • The status quo is supported by so many internal cross linkages that the system can’t be changed very easily.
  • Various stakeholders have multiple conflicting interests, such that only a small subset of those interests shows low certainty and low agreement across stakeholders.
  • Maybe there is no reason to change. Suppose there is low certainty and low agreement, but things are working well enough that change is not worth the effort?
  • What if there is low certainty and low agreement, but people are afraid of the unintended consequences of upsetting the applecart? Nobody is willing to take a chance. Stability ensues.
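The Nash equilibrium case in the first bullet can be made concrete with a toy payoff table. The numbers below are hypothetical, chosen only to illustrate the structure: two groups each choose to keep or challenge the status quo, and the status quo is the one configuration where neither group gains by moving alone, even though both would prefer a coordinated change:

```python
# (row_payoff, col_payoff) for each pair of choices; the numbers are illustrative.
PAYOFFS = {
    ("keep", "keep"): (1, 1),   # the disliked status quo
    ("keep", "push"): (3, 0),   # a lone challenger gets burned...
    ("push", "keep"): (0, 3),
    ("push", "push"): (2, 2),   # ...though joint change beats the status quo
}

def is_nash(profile, payoffs, actions=("keep", "push")):
    """True if neither player gains by unilaterally switching its action."""
    row, col = profile
    row_pay, col_pay = payoffs[profile]
    no_row_gain = all(payoffs[(a, col)][0] <= row_pay for a in actions)
    no_col_gain = all(payoffs[(row, a)][1] <= col_pay for a in actions)
    return no_row_gain and no_col_gain

assert is_nash(("keep", "keep"), PAYOFFS)      # stable despite being disliked
assert not is_nash(("push", "push"), PAYOFFS)  # the better outcome is unstable
```

Low agreement, low certainty about what the other side will do, and yet the configuration is perfectly stable, a kind of stability the AxC picture has no way to represent.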

Yes. You’ve got definitions of certainty and agreement running through this that may well be different than the originally intended ones that led to this visual. You’re making the case very well for clarification of definitions.

I suppose it is possible to contort some of these examples into the “agreement-certainty” framework. For example, take the last bullet. There is low certainty and low agreement about what should be done, but there is agreement that change is risky. Or take the “multiple interests” example. One could say that there is enough agreement and enough certainty, on enough of the interests, that the situation is stable. But really, this involves a lot of stretching past the normal concepts of “agreement” and “certainty”. And in any case they are still social psychological constructs that don’t have a place in complex systems.

Hmm.

High Certainty and High Agreement Can be Very Unstable

The “certainty” / “agreement” framework says that conditions of high certainty and high agreement are stable.

Here again, I don’t think that the certainty/agreement framework is saying that this condition is stable.

True. I guess I misinterpreted “organized,” “planned,” and “controlled” as stable. You are right that organized, planned, controlled systems can change, and sometimes change rapidly.

It can move to instability and bifurcation that flip it into either the self-organizing or the random dynamic.

Indeed true.

I’m not sure there are any scenarios worth thinking about where there is really high agreement. But leaving that aside for now, it’s not hard to imagine scenarios where high certainty and high agreement can be highly unstable.

Agree.

Environmental Reasons Why High Certainty and High Agreement Can be Unstable

One reason for instability under high certainty and high agreement has to do with the environment. I imagine a condition in which the environment has been stable for a long time. Because of this stability, whatever programs are nested in that environment have settled into scenarios of high agreement and high certainty. The world looks stable. And then the environment changes in an unforeseen way, and whatever levels of certainty and agreement there were disappear suddenly.

Yes, exactly the point. Over long periods of time, you have been able to predict patterns using certain models. But here is where you are continually attending to the environment and what might flip it out of this stability, changing the dynamic.

Good point. Back to my previous comment.

Some examples of how this happens:

  • The organization of the United States government.  If I told you on 9/10/01 that the United States Coast Guard would leave the DOT and end up in a non-existent thing called the Department of Homeland Security, you would send me to the loony bin. On that date there was very high certainty and very high agreement about the basic organizational structure of the Executive Branch.
  • Medical ethics. Obviously there was always disagreement on what is right and just, but think how much new argument appeared when gene sequencing became cheap and widespread.
  • I’m sure there are lots of other examples but I can’t think of any right now.

Indeed. Agree with these.

Internal Reasons Why High Certainty and High Agreement Can be Unstable

Actually this framework is attempting to say that high certainty and high agreement can be unstable. That’s what the boundary areas between these dynamics are all about.

This is a pretty good example of why using agreement and certainty alone misleads about systems behavior. What we are really talking about here is the notion of “edge of chaos”, or somehow being close to a phase transition from one ordered state to another. The problem is that it is impossible to know if a system that is behaving in a stable manner is deep in some attractor, or close to a boundary condition. If we want to deal with this we have to deal with what boundary conditions are all about in CAS terms. And that cannot be done by talking only about agreement and certainty. But by focusing attention on that picture, we channel thinking in that narrow direction.

Those boundary areas are those bifurcation points where a system that has been following a certain pattern can flip into a different one of these three basic patterns.

You bet. See above.

Another route to instability under conditions of high certainty and high agreement deals with the internal workings of systems, not the environment in which they reside. Imagine a situation where lots of stakeholders are involved. Each has its own internal position, and over time they have worked out a modus vivendi with each other. They have settled into a state of high certainty and high agreement.

But maybe the reason for the stability is that nobody can envision any other possibilities. Then for whatever reason, there is some change within one of the stakeholders. Maybe nothing big, but a discernible difference in what that stakeholder wants or how sure they are of their objectives. It’s not hard to see how a small perturbation like that could ripple through the system in such a way that each group begins to shift, maybe just a little at first. But over time, well, you can see a pretty fast change from high certainty and high agreement to the opposite ends of each scale.

Yup. Exactly.

(PS – This explanation touches on the idea that systems can be stable and robust, or stable and brittle. Now those are system concepts. But they have nothing uniquely to do with agreement or certainty.

Agree. They are not intended to be uniquely related to agreement or certainty.)

But this is all really important for understanding systems, and focus on AxC blinds people to all of it.

Yes, that’s why I usually move quickly beyond this visual to these other concepts.

Practical Value of Thinking in Terms of Certainty and Agreement

I have trouble understanding how to measure uncertainty and agreement in a way that leads to increased understanding of system behavior. To illustrate, I’m thinking of a scenario where there are ten stakeholders. Now I imagine two cases.

  • There are two coalitions of five groups each. There is high agreement and certainty within each but there are very large and serious differences between the two coalitions.
  • None of the ten groups agrees, but the differences in agreement and certainty between any two are relatively small.

Which condition has a greater degree of certainty and agreement?

I don’t think that is what this is about.

Why not? If we care about systems behavior this is a critical distinction.

This isn’t used to determine which of these conditions has more or less certainty and agreement, but rather to say that a causal model will probably work pretty well within these groups for those things where there is good agreement and where there has been predictability in the past. (Of course this depends on how we are defining these terms. I think we are using different definitions.)

See way above. This is one of the things I have trouble with.

Suppose the totals in each scenario were the same? Does that mean the scenarios are the same? Of course not. They are radically different.
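A quick computation shows how a single aggregate number hides the difference. The stakeholder positions below are hypothetical, placed on an arbitrary 0-10 opinion scale purely for illustration:

```python
from itertools import combinations
from statistics import mean

# Hypothetical positions on a 0-10 opinion scale (my numbers, for illustration).
two_coalitions = [2, 2, 2, 2, 2, 8, 8, 8, 8, 8]  # scenario 1: two tight blocs
even_spread = [0.5 + k for k in range(10)]       # scenario 2: everyone differs a little

def pairwise_gaps(positions):
    """Absolute opinion gap for every pair of stakeholders."""
    return [abs(a - b) for a, b in combinations(positions, 2)]

gaps1 = pairwise_gaps(two_coalitions)
gaps2 = pairwise_gaps(even_spread)

# The single-number "disagreement" scores are nearly the same...
print(round(mean(gaps1), 2), round(mean(gaps2), 2))  # 3.33 3.67

# ...but the structures are radically different: scenario 1 has only two kinds
# of pair (perfect agreement within a bloc, a gulf of 6 between blocs), while
# scenario 2 has nine distinct gap sizes and no gulf at all.
print(sorted(set(gaps1)))                            # [0, 6]
```

Any evaluation design would treat these two situations completely differently, yet an aggregate agreement score barely distinguishes them.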

Agree.

That’s not what the diagram is saying. What might the first scenario be like?

  • It might be very stable over a very long period of time. Then with what seems like a minor shift in power of one coalition, the system goes through a phase transition to another stable state, more to the liking of the other coalition.
  • Suppose the power of the groups remained about equal over time? I can easily see a continual flipping back and forth.

Yup, that’s the point of showing these different dynamics in this fashion with the phase transitions.

Phase transitions are truly critical concepts in systems. Looking at AxC teaches us nothing about them. Why? Because the picture does not say anything about whether change is gradual or rapid. In any case it’s easy to see how sometimes gradual change will take place and sometimes rapid change will take place. But that whole issue of change is nowhere to be found in AxC.

Certainty and agreement are just two dimensions that some people find useful in helping to understand these shifts in system dynamics.

In that case I am in favor of using them. But why only them?

I wouldn’t advocate using only them. They’re just a place to start that some people find useful for conversation.

I can’t imagine this kind of system behavior in the second scenario. So even if we could measure agreement and certainty (which I am sure we can), just knowing amounts does not help to understand the stability of the system.

I agree. I don’t think that’s what the diagram is intended to do.

I think that at least implicitly we are giving ECLIPS members the message that AxC does explain this. If I misperceive, then I’m wrong. It has happened before. Just ask my children.

We have to introduce other ideas that don’t show up in systems science: the political science notions of “power”, “alignment of interests in coalitions”, and the “distribution of power across coalitions”. I just cannot figure out how using only the concepts of agreement and certainty would help me to differentiate the two scenarios.

Agree. It doesn’t.

So why put so much time and effort into using the AxC framework?

But as I said at the outset, if other people find the “certainty by agreement” framework useful, I’m for it. I am a big fan of the proverbial honey bee who is ignorant of the laws of physics which prove he cannot fly, and thereby happily goes about making a little honey every day.

Well, you did ask.

This entry was posted in Complex Systems, Programs, and Evaluation, Uncategorized.

4 Responses to A rolling conversation about the “Agreement x Certainty” space.

  1. Bob Williams says:

    Great discussion and I really appreciate the tone of the discussion.

    Given how ubiquitous the AxC diagram has become in US evaluation’s attempt to incorporate systems ideas, this discussion is long overdue. In this discussion I am totally on Jonny’s side, especially about the notions of “agreement”. I don’t think the table says much beyond “stuff happens”, which doesn’t move anyone on very much. Like Jonny I think it potentially oversimplifies and mis-states what complexity science is about. Its role in the development of complex systems thinking has been marginal. As far as I can see its origin is in a single page in a 300-page early book by Stacey. He barely refers to it in the text and indeed (from memory) only describes one axis of it. He produces no evidence that it has any ontological validity – and the discussion that’s just ensued makes me understand why. One of the reasons why it is not in later editions of his books is that he has also disagreed with the way in which it has been used by others. I don’t buy the line that people find it useful even if it is not accurate – that it has epistemological use even if it’s ontologically dodgy. That’s how the apologists for Myers-Briggs argue, with equally risky consequences, although at least people aren’t hired or fired on the basis of the Stacey diagram – but it is quite likely that some situations are misanalysed along the lines of Jonny’s examples. Like Jonny I think that if it has a use it is very minor, yet it seems to have a very significant role in US explanations of systems ideas. I’ve seen quite a few papers and presentations that start with this diagram, which inevitably creates the impression that this comprises a core systems concept. Which I don’t think it is.

    One of the things that concerns me a lot is how evaluation has tended to grab things from the systems field and use them without fundamentally understanding where those ideas came from and what they were developed to do. One of the problems that flows from this is the use of concepts that are precise in the systems field yet get muddy when taken out of their original systems context by overuse of metaphor. Metaphors by their very nature make fairly precise meanings vague – so what was once an insightful, sharp tool that could do a specific job within the evaluation field comes to mean so many possible things to so many people that it generates few insights and potentially dilutes the potential use of systems ideas in evaluation.

  2. Very interesting conversation about theory and practice. It certainly brings to mind, “none is true, some are useful.” Of course I’m neither an evaluator nor a traditional systems scientist, but I can tell you how we use what we call the Landscape Diagram to help people see into human systems dynamics.
    The LD speaks to three systemic insights that we think are important, but are hard for people to see.
    1) Even though you know things are happening at multiple scales, it is sometimes helpful to zoom into one scale. The LD assumes, as we use it, only one level of interaction at a time. I could draw one about the confusion an individual student is feeling, a different one about the state of the classroom, a third for the building, others for world peace. The LD takes a single-level slice of a complex system, so you can see some of the underlying dynamics and patterns as they emerge. In order to talk in any kind of rigorous way about interactions, it is helpful to have some conception of the differences between (or among) the things that interact. The LD is simple and self-evident enough to most normal people that it can help them understand, in a more dynamical way, the problems of a child who’s stuck while inside a system that is roiling.
    2) Complex systems (not deterministic chaotic ones, btw) are under the simultaneous influence of an unknowable number of variables. We call this “high dimension” for short. We find that it is hard enough for normal people to separate and consider the influence of a single variable (not most evaluators, btw), much less deal with multiple ones at the same time. Even we have some difficulty thinking about a continuum from stable to unstable, and we know what we’re talking about. The LD makes a first step toward understanding by helping people see their very confusing situations through two variables at the same time. We aren’t stuck on “agreement” and “certainty” as primary, so we help people name other two-variable pairs that they can play with as they see, understand, and influence patterns in their systems.
    3) There is a strong relationship between the level of constraint and stability of a system at a given scale (at least in human systems, I won’t vouch for the others). When I talk to engineers, I talk about degrees of freedom. That turns out to be risky when I’m talking to statisticians because some of them think “degrees of freedom” means something different, though it may still be useful. Sorry for the aside. . .

    In a very practical, concrete way, the more you constrain a system, the more predictable you make that scale of the system (at least in the short run). The less constraint, the less predictability. Of course there are boundary conditions here, where too much constraint pushes a bifurcation or where too little constraint leads to entropy, but for the most part, in the middle states, more constraint means more control. That is why we have renamed the zones of the LD as stable, emergent, and unstable. Your counterexamples above are talking cross-scale, so they don’t hold water here, as the LD (as we use it) functions at a single scale at a time. Think of the student and the classroom: one scale can sit in one place on the LD at the same time another scale sits in another place. While we know the scales are interrelated, we need something else to talk about that, because the LD doesn’t help you with inter-scale relationships, as you so well point out above.

    The reason we take A and C lightly is that we think the LD is about levels of constraint, and it just happens that Ralph focused on the constraints for decision making in groups and within that landed on agreement and certainty as the key ones. We think the labels are merely artifacts, and the process of looking for and labeling the two most important constraining variables in a complex system is an incredibly enlightening process for most folks.

    Now, you all know my work, so you won’t be surprised that when I say “constraints” I’m talking about the CDE: containers that bound the system, differences that manifest patterns and set potential for change, and exchanges that connect across differences to relieve tension and create new patterns. When we use the LD as an indicator of system constraint – and so of system stability, predictability, and openness to control – we are helping people see into the ways that stretching or shrinking a container, focusing on more or fewer differences, or tightening or loosening exchanges will influence a system to be more or less predictable. Given that, people can make some informed choices about how they act to influence it.

    As I said, I can tell you how we use the LD to help people see what we think are critical features of complex human systems dynamics. But that doesn’t tell you how it can or should be used by evaluators. It also doesn’t certify it as a bona fide characteristic of systems. You guys can work that out.

    One more word: one thing that does not teach people about scaling, high dimensionality, and constraints is the Cynefin Model. Or at least I’ve never seen it or heard it described in a way that opened up the dynamics of systems to inquiry. IMHO it gives a set of categories and ways to move in and out of those categories without challenging how people see what they see in the systems they influence. But I think at least some of us have had that discussion before.

    Thanks for surfacing these important differences within and between theories and practices. Every time I talk to any of you I learn something new, and it is a gift.

  3. gkheoyang says:

    I certainly didn’t mean this to stop the conversation. What are your thoughts? Thanks.

