A friend of mine sent me a very good report on promoting data use. It is not the first thing I have read on this subject, but I thought it was one of the better ones. It is full of the kind of advice we would all be well-advised to follow, e.g. engage leaders, set expectations, make data accessible, encourage data use in decision making, establish strong sponsorship for data literacy programs, and so on. Although the report is not set in an evaluation context, much of what it says resonates with themes that have been in our community for a long time with respect to evaluative thinking, evaluation use, and so on. The more I thought about it, though, the more I concluded that advice of this nature is at best weak and at worst counterproductive, because it does not recognize some very good reasons why data are not used or are used poorly.
Information inventory versus processing capacity
I have never worked in a setting that was not suffused with data. This is true for individual topics and exponentially truer any time there is a need to draw on multiple data sources. We almost never have a problem of data insufficiency. On the contrary, our problem is too much data paired with limited time and attention to make sense of the data we have. These limits exist in the "processing capacity" of individuals and of the groups and coalitions that are often the true "decision makers" in organizational settings.
Attention to what?
Because there will always be more data than can be attended to, how can we set priorities? The answer matters because attending to the wrong data may be worse than deciding based only on expert judgement. My belief is that outside of crisis settings, it is not at all obvious which data matter. We are also frequently ignorant of what knowledge is needed to make a decision, either because our theories are weak, or because our experience does not cover the contingency, or because we cannot predict how a situation will develop.
Effort put into attending to data incurs opportunity costs
Time and effort invested in attending to data is time and effort that cannot be put into other activities. Many of those other activities are likely to be more important than scrutinizing data.
Too much change is dysfunctional even if each change is desirable
Some degree of stability is essential, and attention to data is more likely than not to motivate change. (I can’t prove this, but I believe it.) I can easily imagine a scenario in which too much change will limit an organization’s ability to function to a degree that outweighs the value of each individual change.
Culture of Data Use
Of course we should put effort into tactics such as engaging leaders, making data accessible, and so on. I am in favor of employing these tactics; the more the better. But they need to be embedded in an appreciation of how organizations work and how people process information. I am arguing for a culture of data use that resonates with what we know about information use and decision making. Here are some examples that spring to mind.
Illusory pattern
We fallible humans are good at perceiving patterns in random noise. We would be better off if, whenever we thought we detected a pattern, we took a step back and asked ourselves: could it be that, no matter how much it looks like a pattern, it is not?
Cognitive bias
Generations of social psychologists have taken delight in discovering systematic errors that our brains introduce into rational decision making. Whenever we try to make sense of data, we should stay aware of a few findings from the cognitive bias research: 1) we favor the most recent information we receive (recency bias); 2) we give more weight to information that confirms our existing beliefs (confirmation bias); 3) we view past events as having been predictable ("I knew it all along") (hindsight bias); and 4) we have too much faith in our own ability to make correct decisions (overconfidence).
Multiple perspectives
How many different perspectives have a hand in choosing what information to look at and what message that information holds? Diversity matters. A decision need not (and in most cases should not) be a democratic vote, but the entity (person, group, or coalition) that makes the decision needs to make a good-faith effort to take diverse opinions seriously.
Total change versus specific change
Implementing a change can have two types of consequences. The first is the desired objective, the ostensible reason the effort was made. The second is the disruptive effect the change has on other activities. No matter how worthwhile any single change may be, its value needs to be judged in terms of its overall consequences, including how much it interferes with organizational functioning.
Avoiding Perverse Outcomes
My concern is that effective tactics for promoting information use can have perverse consequences if they are not embedded in an appropriate culture of data use. The problem is twofold. First, if a great deal of data is easily accessible, there may be a desire to use it; after all, we all like to play with our toys. Second, beyond that instrumental pull, applying the tactics in that report will induce an expectation that the data should be used. And as I have tried to show, as much as I like using data, using it in the wrong way can have very unpleasant consequences.
You make a lot of great points here. Naming the cognitive biases made me smile. I absolutely agree that evaluators do not pay enough attention to “unintended consequences” (what you call total vs. specific change) since they are less valued by funders, leaders, etc. And the “illusory pattern” made me think of the frustrating but sound statistical practice of “failing to reject the null hypothesis”, but never “accepting” it (i.e., we must be skeptical of what we can claim). And I, of course, agree with the information overload we all face. But I have to say that I find the point on “opportunity costs” concerning. It reads to me like the same argument against funding overhead… all $ and time should go to direct service, deliver the goods, make sure program participants get what they need. Not only do I find that problematic because that assumes the services are good (see point above on unintended consequences!), but it seems to assert organizations shouldn’t make time for intentionality, reflection, strategy… all of what I think are just smart practices in ALL domains of work (and not unique to our sector). I wonder if perhaps the “use” conversation should be more about that, not forcing use of specific data points, but using the evaluation process to encourage reflection and prioritization. Evaluators can help stakeholders wade through the sea of data to figure out what matters most to them. This then is less about the evaluator’s agenda of data use, and more about the evaluator as a facilitator to encourage reflection and help build that appropriate positive data culture you say is needed.
Great post, Jonny. Particularly applicable in view of EvalPartners' "Evidence Matter" campaign. Simply piling up evidence is not going to address the problem of what makes for successful policymaking and programming.