A Complex System View of Technology Acquisition Choice

I am involved in a project to help people make a single choice among multiple technologies. They must commit to one, so there is no waffling. This is one more of many such exercises I have been involved in over the course of my career, and I have never been fully satisfied with any of them. On an intuitive level, everyone knows they cannot make the best choice, but everyone thinks that they should be able to. I finally figured out why they cannot. I don’t mean that people are not smart enough. I mean that it is impossible. The behavior of complex systems makes it impossible.

A Workable, Effective Solution
If there is a technology choice with only a few criteria, and it is absolutely clear which criterion is truly critical, and there is good data on performance, then yes, it is possible to make the best choice. But how many situations like that are there? So, what to do in the majority of cases?

Before I get into a longish esoteric discussion, I’ll jump to a simple, practical method for making a technology choice. The answer is that we accept the reality of how human beings make decisions. We satisfice within a context of bounded rationality. As Herbert Simon put it, “decision makers can satisfice either by finding optimum solutions for a simplified world, or by finding satisfactory solutions for a more realistic world”.

With respect to technology choice, satisficing dictates two decision-making strategies that can be used alone or in combination (a minimal sketch of both follows the list).

  • Find a few acceptable technology choices and pick the one you are most comfortable with.
  • Aggregate the requirements into broad enough categories and accept the imprecision that such aggregation requires.
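
To make the two strategies concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the candidate technologies, the three aggregated categories, the scores, and the thresholds. The point is only the shape of the satisficing move: screen candidates against “good enough” levels on a few broad categories, then choose among whatever passes.

# Minimal satisficing sketch (hypothetical candidates, categories, scores, thresholds).
# Strategy 1: keep every option that clears "good enough" levels, then pick by judgment.
# Strategy 2: the levels are set on broad, aggregated categories, accepting the
#             imprecision that aggregation brings.

candidates = {
    "Tech A": {"detection": 7, "human_factors": 6, "cost": 5},
    "Tech B": {"detection": 9, "human_factors": 4, "cost": 8},
    "Tech C": {"detection": 6, "human_factors": 8, "cost": 7},
}

thresholds = {"detection": 6, "human_factors": 5, "cost": 5}  # "good enough", on a 1-10 scale

acceptable = [
    name for name, scores in candidates.items()
    if all(scores[category] >= level for category, level in thresholds.items())
]

print("Acceptable options:", acceptable)
# The final pick among the acceptable options rests on judgment and comfort,
# not on a joint optimization across all the detailed criteria.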

And now for my explanation of why this simple solution is not merely an efficient convenience but a necessity. Of course, there is too much going on in the world for our humble intellect to find and understand. But it is more than the volume of information and our limited capacities. It is how that information is structured.

What are the System-based Reasons why an Optimal Choice Cannot be Made?
To begin, I need to define what I mean by “best choice”. I mean it in a technical optimization sense, where there is a true joint optimization of all relevant criteria. “Best” can also have a social psychological meaning, i.e., a situation where most interested parties are as satisfied as they can be with the collective choice that was made. But although I am a social psychologist, I’ll stick to a definition of “best” that is near and dear to the hearts of my engineer friends.

Choice Criteria are Networked
Why can’t a best choice be made? The answer is that choice criteria are networked and that the nodes of the network are subject to environmental influences. The result is sensitive dependence and emergent behavior. To illustrate with an example, see Table 1. It contains a list of choice criteria that I adapted from a project I’m working on.

Table 1: Technology Choice Criteria (high-level criteria with their detailed criteria)

Signal detection capability
  1. Data analysis capability
  2. Number of signal types detected
  3. Signal resolution for each data type

Human factors
  4. Usability for operator
  5. Training requirements
  6. Visual presentation quality of output

Interoperability
  7. Data export formats
  8. Data import formats

Operating environment
  9. Time of day
  10. Temperature
  11. Weather conditions

Market
  12. Competing technologies
  13. Market demand
  14. Initial cost
  15. Life cycle cost
  16. Compatibility with technology trends
  17. Synergy with other technologies in places where implemented

Pull just four elements from the list (see the picture): 1) number of signal types detectable, 2) weather, 3) cost, and 4) training requirements. A wider range of detection needs, the ability to work in bad weather, and low training requirements will all increase cost. The ability to work in adverse weather conditions may affect the types of signal detection that can be used. The greater the diversity of information, the greater the training requirements. What would happen to all the tangled dependencies if new hiring drove up the burden on training, or if a need for higher resolution imaging asserted itself, or if requirements for operation in adverse weather conditions were relaxed? Scale this up to the dependencies among all seventeen choice criteria, and even a casual look makes it obvious why a strict ordering of criteria is impossible.
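
For readers who like to see the mechanics, here is a small and entirely hypothetical Python sketch of those four criteria as a network. The edge weights are invented; the only claim is structural: a change introduced at one node does not stay at that node, and different local changes spread in different ways.

# Hypothetical sketch of four networked criteria. The edge weights are invented;
# the point is only that a local change does not stay local.
influence = {
    "training":     {"cost": 0.6},                        # heavier training burden raises cost
    "weather":      {"signal_types": -0.4, "cost": 0.5},  # adverse-weather operation constrains signals, raises cost
    "signal_types": {"training": 0.5, "cost": 0.4},       # more signal types to master: more training, more cost
}

def propagate(shock, rounds=3):
    """Spread a local change through the dependency network, round by round."""
    total = dict(shock)      # accumulated effect on each criterion
    frontier = dict(shock)   # change still to be passed along outgoing edges
    for _ in range(rounds):
        next_frontier = {}
        for source, delta in frontier.items():
            for target, weight in influence.get(source, {}).items():
                next_frontier[target] = next_frontier.get(target, 0.0) + weight * delta
        for node, delta in next_frontier.items():
            total[node] = total.get(node, 0.0) + delta
        frontier = next_frontier
    return total

# New hiring drives up the training burden...
print(propagate({"training": 1.0}))
# ...versus relaxing the adverse-weather requirement.
print(propagate({"weather": -1.0}))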

Network Behavior
Node relationships in networks are prone to sensitive dependence. This means that local differences in any one node (or in a small number of nodes) might ripple through the system and affect relationships among many of the nodes. And the nature of those large-scale changes may differ as a function of different local changes. Moreover, networks can be adaptive in the sense that as influence is transmitted across edges, node and edge relationships can rearrange. I am not claiming that sensitive dependence or network adaptivity will always be at play in networks, only that they often are. Given what I know about interactions among technology requirements, it is hard for me to believe that they are not at play in a network of technology choice criteria.

There is yet another network phenomenon that I am convinced makes a strict ordering of criteria impossible, but which I won’t push too hard because I can’t make a strong case for it. I suspect that the “best” technology choice is not an additive function of its component requirements. Rather, “best” is an emergent characteristic of network behavior. Or put differently, each requirement loses its unique identity.

Influences on Network Nodes
If local change in a network of choice criteria can have such profound effects, how certain can we be that those kinds of changes will occur? Very certain. Consider just a few of the endless possibilities that may affect one or a few choice criteria.

  • Technology costs may rise or fall with market conditions.
  • A competing technology standard may become ascendant.
  • Funds for technology acquisition may increase or decrease.
  • The importance of the reasons for the technology choice may change.
  • Domains (location, business conditions, etc.) where the technology is desirable may narrow or broaden.
  • Choices are based on the best knowledge one has at the time about each relevant criterion. But the discovery of more extensive, or more accurate, knowledge is always a possibility.
  • And many, many more.

Summary
In the first section of this blog post I made the observation that people have an intuitive appreciation for the difficulty of making an optimal choice among competing technology acquisition candidates. In the second section I provided a complex system, network-based justification for this intuitive appreciation. I laid all this out to make the point that it is not merely difficult to make an optimal choice, it is impossible, and that therefore choices need to be made via a process of satisficing rather than optimizing. With respect to technology choice, satisficing dictates two decision-making strategies that can be used alone or combined.

  • Find a few acceptable technology choices and pick the one you are most comfortable with.
  • Aggregate the requirements into broad enough categories and accept the imprecision that such aggregation requires.

What does evaluation gain from thinking in alien terms? An argument for taking complexity and evolutionary biology seriously

For a long time, I have been arguing that if “complexity” is to be useful in evaluation, evaluators should focus on what complex systems do, rather than on what complex systems are. This is because by focusing on behavior, we can make practical decisions about models, methodologies, and metrics.

I still believe this, but I’m also coming to appreciate that thinking within research traditions also matters. I’m not advocating a return to a “complex system” focus, but I do see value in adopting the perspectives of people who do research and develop theory in the domain of complexity. And by extension, this is also true for evolutionary biology, another field that I have been promoting as being useful for evaluators.

Continue reading “What does evaluation gain from thinking in alien terms? An argument for taking complexity and evolutionary biology seriously”

A Model for Evaluation of Transformation to a Green Energy Future

I just got back from the IDEAS global assembly, which carried the theme: Evaluation for Transformative Change: Bringing experiences of the Global South to the Global North. The trip prompted me to think about how complexity can be applied to evaluating green energy transformation efforts. I have a longish document (~2000 words) that goes into detail, but here is my quick overview.

Because transformation is a complex process, any theory of change used to understand or measure it must be steeped in the principles of complexity.

The focus must be on the behavior of complex systems, not on “complex systems”. (Complex systems or complex behavior?)

In colloquial terms, a transformation to reliance on green energy can be thought of as a “new normal”. In complexity terms, “new normal” connotes an “attractor”, i.e. an equilibrium condition to which the system settles back after perturbations. (Why might it be useful to think of programs and their outcomes in terms of attractors?)

A definition of a transformation to green energy must specify four measurable elements: 1) geographical boundaries, 2) level of energy use, 3) time frame, and 4) level of precision. For instance: “We know that transformation has happened if in place X, 80% of energy use comes from green sources, and has remained at about that level for five years.”
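
A definition like this can be made operational in a few lines. The sketch below is a toy with invented yearly figures and an invented tolerance, but it shows how the four elements translate into a condition an evaluator could actually check.

# Hypothetical yearly shares of green energy for "place X" (fractions of total use).
yearly_share = {2019: 0.78, 2020: 0.82, 2021: 0.81, 2022: 0.79, 2023: 0.83}

TARGET = 0.80       # level of energy use
TOLERANCE = 0.03    # level of precision ("about that level")
YEARS_REQUIRED = 5  # time frame

def transformed(shares, target=TARGET, tol=TOLERANCE, years=YEARS_REQUIRED):
    """True if the share stayed within tolerance of the target for the required span."""
    recent = sorted(shares)[-years:]
    if len(recent) < years:
        return False
    return all(abs(shares[year] - target) <= tol for year in recent)

print(transformed(yearly_share))  # True for these invented numbers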

Whether or not that definition is a good one is an empirical question for evaluators to address. What matters is whether the evaluation can provide guidance as to how to improve efforts at transformation.

Knowing if a condition obtains is different from knowing why a condition obtains. To address the “why”, evaluation must produce a program theory that recognizes three complexity behaviors – attractors, sensitive dependence, and emergence.

Because of sensitive dependence, unambiguous relationships among variables may not continue over time or across contexts. Because of emergence, transformation does not come about as a result of a fixed set of interactions among well-defined elements. Still, sensitive dependence and emergence may produce outcomes that stay within identifiable boundaries, i.e. within an attractor space. If they do, that is akin to “predicting an outcome”. If they do not, that is akin to showing that a program theory is wrong.

Models with many elements and connections cannot be used for prediction, or even for understanding transformation as a holistic construct. Small parts of a large model, however, can be useful for designing research and for understanding the transformation process.

Six tactics can be used for evaluating progress toward transformation: 1) develop a TOC that recognizes complex behavior, 2) measure each individual factor in the model, 3) consider how much change took place in each element of the model, 4) focus on parts of the model, but not the model as a whole, 5) use computer-based modeling, 6) employ a multiple-comparative case study design.

As all the analysis takes place, interpret the data with respect to the limitations of models, and the implications of emergence, sensitive dependence, and attractor behavior.

Converting an intellectual understanding of complexity into practical tools

This is an abstract of a presentation I gave at AEA 2014. Click here for the slide deck.

How can “complexity” be used to identify program theory, specify data collection, interpret findings, and make recommendations? Substitute “multiple regression” for “complexity”, and we know the answer, because our familiarity with regression is practical and instrumental. This presentation will nudge our understanding of “complexity” closer to our comfort level with the tools we already know and love so well. It will then present a brief overview of key concepts in complexity, starting with agents and iteration, and identify some of the many concepts that derive from probing those two ideas (e.g. attractors, state changes, fractals, evolution, network behavior, and power laws). Finally, a few elements of complexity will be chosen, and examples given of how they can be applied in evaluation.

Consequences of Small Change: Part 5 of 6 Posts on Evaluation, Complex Behavior, and Themes in Complexity Science

Common Introduction to all 6 Posts

History and Context
These blog posts are an extension of my efforts to convince evaluators to shift their focus from complex systems to specific behaviors of complex systems. We need to make this switch because there is no practical way to apply the notion of a “complex system” to decisions about program models, metrics, or methodology. But we can make practical decisions about models, metrics, and methodology if we attend to the things that complex systems do. My current favorite list of complex system behavior that evaluators should attend to is:

Complexity behavior (posting date):
  • Emergence: up
  • Power law distributions: Sept. 21
  • Network effects and fractals: Sept. 28
  • Unpredictable outcome chains: Oct. 5
  • Consequence of small changes: Oct. 12
  • Joint optimization of uncorrelated outcomes: Oct. 19

For a history of my activity on this subject see: PowerPoint presentations: 1, 2, and 3; fifteen minute AEA “Coffee Break” videos 4, 5, and 6; long comprehensive video: 7.

Since I began thinking of complexity and evaluation in this way I have been uncomfortable with the idea of just having a list of seemingly unconnected items. I have also been unhappy because presentations and lectures are not good vehicles for developing lines of reasoning. I wrote these posts to address these dissatisfactions.

From my reading in complexity I have identified four themes that seem relevant for evaluation.

  • Pattern
  • Predictability
  • How change happens
  • Adaptive and evolutionary behavior

Continue reading “Consequences of Small Change: Part 5 of 6 Posts on Evaluation, Complex Behavior, and Themes in Complexity Science”

Unspecifiable Outcome Chains: Part 4 of 6 Posts on Evaluation, Complex Behavior, and Themes in Complexity Science

Common Introduction to all 6 Posts

History and Context
These blog posts are an extension of my efforts to convince evaluators to shift their focus from complex systems to specific behaviors of complex systems. We need to make this switch because there is no practical way to apply the notion of a “complex system” to decisions about program models, metrics, or methodology. But we can make practical decisions about models, metrics, and methodology if we attend to the things that complex systems do. My current favorite list of complex system behavior that evaluators should attend to is:

Complexity behavior (posting date):
  • Emergence: up
  • Power law distributions: up
  • Network effects and fractals: up
  • Unpredictable outcome chains: up
  • Consequence of small changes: Oct. 12
  • Joint optimization of uncorrelated outcomes: Oct. 19

For a history of my activity on this subject see: PowerPoint presentations: 1, 2, and 3; fifteen minute AEA “Coffee Break” videos 4, 5, and 6; long comprehensive video: 7.

Since I began thinking of complexity and evaluation in this way I have been uncomfortable with the idea of just having a list of seemingly unconnected items. I have also been unhappy because presentations and lectures are not good vehicles for developing lines of reasoning. I wrote these posts to address these dissatisfactions.

From my reading in complexity I have identified four themes that seem relevant for evaluation.

  • Pattern
  • Predictability
  • How change happens
  • Adaptive and evolutionary behavior

Others may pick out different themes, but these are the ones that work for me. Boundaries among these themes are not clean, and connections among them abound. But treating them separately works well enough for me, at least for right now.

Figure 1 is a visual depiction of my approach to this subject.

Figure 1: Complex Behaviors and Complexity Themes.
  • The black rectangles on the left depict a scenario that pairs a well-defined program with a well-defined evaluation, resulting in a clear understanding of program outcomes. I respect evaluation like this. It yields good information, and there are compelling reasons for working this way. (For reasons why I believe this, see 1 and 2.)
  • The blue region is there to indicate that no matter how clear-cut the program and the evaluation are, both are embedded in a web of entities (programs, policies, culture, regulation, legislation, etc.) that interact with our program in unknown and often unknowable ways.
  • The green region depicts what happens over time. The program may be intact, but the contextual web has evolved in unknown and often unknowable ways. Such are the ways of complex systems.
  • Recognizing that we have a complex system, however, does not help us develop program theory, formulate methodology, or analyze and interpret data. For that, we need to focus on the behaviors of complex systems, as depicted in the red text in the table. Note that the complex behaviors form the rows of the table, the columns show the complexity themes, and the Xs in the cells show which themes relate to which complexity behaviors.

Unspecifiable Outcome Chains

[Table: the six complexity behaviors (rows) crossed with the four complexity themes (columns: Pattern, Predictability, How change happens, Adaptive evolutionary behavior). Xs mark the themes each behavior speaks to; in this post, “Unspecifiable outcome chains” carries two Xs.]

People are enamored of the “butterfly effect”, but I hate it. It is beyond me why evaluators are so drawn to the idea of instability. In my world you can hit programs over the head with data as hard as you can, and they still do not change. My problem is too much stability, not too little. And yet, the notion of sensitive dependence has its place in evaluation. That place is not in uncertainty about what will happen, but in uncertainty about the order in which things will happen. I don’t know how frequent a problem this is in evaluation, but I’m pretty sure it exists. I am very sure that evaluators would do well to consider the possibility when they develop program theory.

In what follows I am going to adopt a common view of butterflies and instability. It’s the one that opens Wikipedia’s entry on the butterfly effect: “In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state.” Needless to say this is a very simplistic approach to a very complicated and controversial subject. Read the rest of the Wikipedia entry to get a sense of the issues involved. If you really want to get into it, go to: Chaos.

The reason for the difficulty in understanding outcome order is that we can be too confident in our estimations of what fits where in outcome chains. We think the sequence is invariant, and most often it probably is. I am convinced, though, that there are times when small perturbations can affect the order. Stated differently, the sequence of outcomes is subject to small random fluctuations in the environment.
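
Here is a toy simulation of that claim (the outcomes, growth rates, and noise are all invented). Three intermediate outcomes grow at the same average rate; small random fluctuations decide the order in which they cross a threshold, even though all of them get there eventually.

import random

def order_of_outcomes(seed, steps=40, threshold=10.0):
    """Return the order in which three invented intermediate outcomes cross a threshold.

    Each outcome grows at roughly the same rate; small random fluctuations
    ("environmental noise") decide which one crosses first.
    """
    random.seed(seed)
    levels = {"job satisfaction": 0.0, "student engagement": 0.0, "parent involvement": 0.0}
    crossed = []
    for _ in range(steps):
        for name in levels:
            levels[name] += 0.3 + random.uniform(-0.1, 0.1)  # same mean growth, small noise
            if levels[name] >= threshold and name not in crossed:
                crossed.append(name)
    return crossed

# Identical program theory, different tiny perturbations, often a different ordering.
for seed in (1, 2, 3):
    print(seed, order_of_outcomes(seed))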

I’ll illustrate with a hypothetical example. A friend of mine who does a lot of educational evaluation assures me that it makes some sense. The program in question is designed to improve teachers’ classroom management skills. Figure 2 compares two versions of the program theory. The top of the figure takes the form of a linear sequence. It’s sophisticated in the way it mixes unambiguous relationships and uncertain relationships. The dashed arrows indicate unambiguous relationships: for instance, classroom management leads to job satisfaction, which in turn leads to less tension between teachers and principals. Solid black arrows show ambiguous relationships. For instance, “student satisfaction” is an input to an unspecified collection of other intermediate outcomes.

The bottom form of the model acknowledges limitations in the program theory. It depicts a situation in which better classroom management makes itself felt in a cloud of outcomes that affect each other in elaborate ways, both directly via 1:1 and 1:many relationships, and also via proximate and distal feedback loops. Also, note the two different network possibilities – red solid, and blue dashed. I did that to emphasize that any number of relationships are possible. It would make the picture too complicated, but it is also the case that the network of relationships will be different in each setting where the classroom management program is implemented.

Figure 2: Traditional and Complex Theories of Change

What will the relationships be in any particular setting? That is an unanswerable question because too many factors will be operating to specify the conditions. All we know is that better classroom management leads to any number of student performance outcomes, which in turn will lead to higher test scores.

If there is so much confusion about intermediate outcomes, why might we be able to reliably expect that the classroom management program will result in higher test scores? Complexity provides two ways to think about this: 1) emergence, and 2) attractor space.

Emergence: A good way to explain emergence is to start with a counterexample. Think of a car. Of course the car is more than the sum of its parts. But it is also true that the unique function of each part, and its contribution to the car, can be explained. If someone asked me what a “cylinder” is, I could describe it. I could describe what it does. When I got finished, you would know how the part “cylinder” contributes to the system called a “car”.

In contrast, think about trying to explain a traffic jam only in terms of the movement of each individual car. The jam moves in the opposite direction to the cars of which it is composed. The jam grows at the back as cars slow down, and the front recedes as cars peel off. Looked at as a whole, the traffic jam is clearly something qualitatively different from the movement of any car in the jam. (NetLogo has good one- and two-lane simulations that are worth looking at.) In the “classroom management” case, we might consider “better test scores” as an emergent outcome – one that cannot be explained in terms of the particulars of any of its parts.
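
For readers who do not want to open NetLogo, here is a stripped-down, one-lane sketch in Python. It is not NetLogo’s model, just the simplest possible rule: a car moves forward one cell only if the cell ahead is empty. Watch the rows of output: every car only ever moves forward (to the right), yet the block of cars, the jam, drifts backward (to the left).

# Minimal one-lane traffic sketch on a ring road. '#' is a car, '.' is empty road.
ROAD_LENGTH = 30
road = [False] * ROAD_LENGTH
for pos in (2, 4, 6, 8, 10, 11, 12, 13, 14):  # a few free-flowing cars behind a small jam
    road[pos] = True

def step(road):
    """Advance every car one cell if, and only if, the cell ahead is empty."""
    new = [False] * len(road)
    for i, occupied in enumerate(road):
        if not occupied:
            continue
        ahead = (i + 1) % len(road)
        if road[ahead]:
            new[i] = True       # blocked: the car stays put
        else:
            new[ahead] = True   # free: the car moves forward one cell
    return new

for _ in range(8):
    print("".join("#" if cell else "." for cell in road))
    road = step(road)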

Attractor space: There are two ways to think about “attractors” in complexity. The most formal is a mathematical formulation concerning the evolution of a dynamical system over time. As Wikipedia puts it: “In the mathematical field of dynamical systems, an “attractor” is a set of numerical values toward which a system tends to evolve, for a wide variety of starting conditions of the system. System values that get close enough to the attractor values remain close even if slightly disturbed.”

However, there is a more metaphorical, but still useful, way to think about this. Namely, that an attractor is a “space” that defines how something will move through it. There may be many paths within the attractor, but depending on its “shape”, many paths through it will lead to the same place. Marry this to the open systems notion of “equifinality” and it’s not hard to think in terms of a set of causal relationships among a defined set of variables that will lead to the same outcome. In theory there could be an infinite number of elements and paths that would lead to higher test scores, but that does not matter. What matters is that a particular set of outcomes constitutes meaningful intermediate outcomes for a particular program, that it makes sense to measure those outcomes, and that many different combinations of those intermediate outcomes can be relied upon to produce better test scores.
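
Here is the formal sense of an attractor in a toy numerical example; it models nothing real. A simple update rule pulls a wide range of starting values toward the same fixed point, and a value that is nudged away after settling drifts back.

# Toy dynamical system: x_next = 0.5 * x + 1. Its attractor (fixed point) is x = 2,
# since 2 = 0.5 * 2 + 1. Many starting conditions end up in the same neighborhood.

def step(x):
    return 0.5 * x + 1

for start in (-10.0, 0.0, 7.5):
    x = start
    for _ in range(20):
        x = step(x)
    print(f"start={start:6.1f} -> x after 20 steps = {x:.4f}")

# Disturb the system after it has settled; it drifts back toward the attractor.
x = 2.0 + 0.5
for _ in range(10):
    x = step(x)
print(f"after a small disturbance and 10 more steps: x = {x:.6f}")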

While I am not sure which way to think about the bottom scenario in Figure 2, I do know that there is an important difference between the “emergence” and “attractor” perspective. With emergence, the specific intermediate outcomes do not matter very much. Which ones are manifest and which ones are not is irrelevant to the emergent result. That may be elegant in its way, but it is not all that satisfying to program funders. After all, they do want to know what intermediate outcomes were produced. The attractor way of looking at it does focus attention on which of those intermediate outcomes were manifest, and in what order. It may not be possible to assure the funders that the same outcomes and order will appear the next time, but it is possible to give them some pretty good understanding of what happened. The logic of generality and external validity notwithstanding, knowing what happened in one case can be awfully useful for planning the future.

Networks and Fractals: Part 3 of 6 Posts on Evaluation, Complex Behavior, and Themes in Complexity Science

Common Introduction to all 6 Posts

History and Context
These blog posts are an extension of my efforts to convince evaluators to shift their focus from complex systems to specific behaviors of complex systems. We need to make this switch because there is no practical way to apply the notion of a “complex system” to decisions about program models, metrics, or methodology. But we can make practical decisions about models, metrics, and methodology if we attend to the things that complex systems do. My current favorite list of complex system behavior that evaluators should attend to is:

Complexity behavior (posting date):
  • Emergence: up
  • Power law distributions: up
  • Network effects and fractals: up
  • Unpredictable outcome chains: Oct. 5
  • Consequence of small changes: Oct. 12
  • Joint optimization of uncorrelated outcomes: Oct. 19

For a history of my activity on this subject see: PowerPoint presentations: 1, 2, and 3; fifteen minute AEA “Coffee Break” videos 4, 5, and 6; long comprehensive video: 7.

Since I began thinking of complexity and evaluation in this way I have been uncomfortable with the idea of just having a list of seemingly unconnected items. I have also been unhappy because presentations and lectures are not good vehicles for developing lines of reasoning. I wrote these posts to address these dissatisfactions.

From my reading in complexity I have identified four themes that seem relevant for evaluation.

  • Pattern
  • Predictability
  • How change happens
  • Adaptive and evolutionary behavior

Others may pick out different themes, but these are the ones that work for me. Boundaries among these themes are not clean, and connections among them abound. But treating them separately works well enough for me, at least for right now.

Figure 1 is a visual depiction of my approach to this subject.

Figure 1: Complex Behaviors and Complexity Themes
  • The black rectangles on the left depict a scenario that pairs a well-defined program with a well-defined evaluation, resulting in a clear understanding of program outcomes. I respect evaluation like this. It yields good information, and there are compelling reasons for working this way. (For reasons why I believe this, see 1 and 2.)
  • The blue region is there to indicate that no matter how clear-cut the program and the evaluation are, both are embedded in a web of entities (programs, policies, culture, regulation, legislation, etc.) that interact with our program in unknown and often unknowable ways.
  • The green region depicts what happens over time. The program may be intact, but the contextual web has evolved in unknown and often unknowable ways. Such are the ways of complex systems.
  • Recognizing that we have a complex system, however, does not help us develop program theory, formulate methodology, or analyze and interpret data. For that, we need to focus on the behaviors of complex systems, as depicted in the red text in the table. Note that the complex behaviors form the rows of the table, the columns show the complexity themes, and the Xs in the cells show which themes relate to which complexity behaviors.

Network Structure and Fractals

[Table: the six complexity behaviors (rows) crossed with the four complexity themes (columns: Pattern, Predictability, How change happens, Adaptive evolutionary behavior). Xs mark the themes each behavior speaks to; in this post, “Network effects and fractals” carries one X.]

Any time a program is concerned with the spread of its services or impact over time, the subject of networks becomes a candidate for attention by evaluators. This is because: 1) It can be of value to know the pattern that describes how a program moves from one adopting location to another. 2) Network structure is a useful concept when the spread of a program constitutes an outcome to be evaluated. 3) Network structure is a useful construct when evaluating the degree to which a set of connections is efficient, effective, and resilient in the face of breakage. Continue reading “Networks and Fractals: Part 3 of 6 Posts on Evaluation, Complex Behavior, and Themes in Complexity Science”

Power Law Distributions: Part 2 of 6 Posts on Evaluation, Complex Behavior, and Themes in Complexity Science

Common Introduction to all 6 Posts

History and Context
These blog posts are an extension of my efforts to convince evaluators to shift their focus from complex systems to specific behaviors of complex systems. We need to make this switch because there is no practical way to apply the notion of a “complex system” to decisions about program models, metrics, or methodology. But we can make practical decisions about models, metrics, and methodology if we attend to the things that complex systems do. My current favorite list of complex system behavior that evaluators should attend to is:

Complexity behavior (posting date):
  • Emergence: up
  • Power law distributions: up
  • Network effects and fractals: Sept. 28
  • Unpredictable outcome chains: Oct. 5
  • Consequence of small changes: Oct. 12
  • Joint optimization of uncorrelated outcomes: Oct. 19

For a history of my activity on this subject see: PowerPoint presentations: 1, 2, and 3; fifteen minute AEA “Coffee Break” videos 4, 5, and 6; long comprehensive video: 7.

Since I began thinking of complexity and evaluation in this way I have been uncomfortable with the idea of just having a list of seemingly unconnected items. I have also been unhappy because presentations and lectures are not good vehicles for developing lines of reasoning. I wrote these posts to address these dissatisfactions.

From my reading in complexity I have identified four themes that seem relevant for evaluation.

  • Pattern
  • Predictability
  • How change happens
  • Adaptive and evolutionary behavior

Continue reading “Power Law Distributions: Part 2 of 6 Posts on Evaluation, Complex Behavior, and Themes in Complexity Science”

Emergence: Part 1 of 6 Posts on Evaluation, Complex Behavior, and Themes in Complexity Science

Common Introduction to all 6 Posts

History and Context
These blog posts are an extension of my efforts to convince evaluators to shift their focus from complex systems to specific behaviors of complex systems. We need to make this switch because there is no practical way to apply the notion of a “complex system” to decisions about program models, metrics, or methodology. But we can make practical decisions about models, metrics, and methodology if we attend to the things that complex systems do. My current favorite list of complex system behavior that evaluators should attend to is:

Complexity behavior (posting date):
  • Emergence: up
  • Power law distributions: Sept. 21
  • Network effects and fractals: Sept. 28
  • Unpredictable outcome chains: Oct. 5
  • Consequence of small changes: Oct. 12
  • Joint optimization of uncorrelated outcomes: Oct. 19

For a history of my activity on this subject see: PowerPoint presentations: 1, 2, and 3; fifteen minute AEA “Coffee Break” videos 4, 5, and 6; long comprehensive video: 7.

Since I began thinking of complexity and evaluation in this way I have been uncomfortable with the idea of just having a list of seemingly unconnected items. I have also been unhappy because presentations and lectures are not good vehicles for developing lines of reasoning. I wrote these posts to address these dissatisfactions. From my reading in complexity I have identified four themes that seem relevant for evaluation.

  • Pattern
  • Predictability
  • How change happens
  • Adaptive and evolutionary behavior

Continue reading “Emergence: Part 1 of 6 Posts on Evaluation, Complex Behavior, and Themes in Complexity Science”