
What Makes You Think You’re So Smart?

Vol. 104 No. 1 (2020) | A Clearer View

In recent years, scholars have taken new interest in people’s ability to reason rationally. The conventional take from economic theory is that, as rationally motivated individuals, people generally make appropriate decisions. However, this view has been questioned by much psychological research — as well as work from some economists — that demonstrates numerous instances in which human judgments and decisions are biased by external influences that we often don’t even recognize. In other words, rational may be reasonable, but rational is not reality.

People no doubt make errors in judgment. At the same time, we know people are capable of remarkable intellectual achievements. The tension between the two prompts a legitimate question: “If people are so demonstrably dumb, how did we get to the moon?” The Knowledge Illusion, by scholars Steven Sloman and Philip Fernbach, suggests one answer.

The Knowledge Illusion explores two key ideas. The first is that we all overestimate our knowledge. The second is that the knowledge we access in making decisions resides not only in our heads but also outside them — in our bodies, our environment, and the other people in the communities in which we live and participate. Moreover, we are typically unaware of the sources of the knowledge that shape our decisions.

To demonstrate how we overestimate our knowledge, the authors use a research paradigm developed by psychologists Frank Keil and Leonid Rozenblit. In this experimental paradigm, participants are first asked how well they understand the workings of a familiar household item, such as a flush toilet or a coffee machine. Participants rate their level of understanding of the device on a scale from 1 to 7, with higher ratings indicating greater self-assessed knowledge.

Next, participants are asked to provide an explanation of how the device works. Typically, participants have some difficulty with this task and don’t always provide good explanations.

Third, after making their explanations explicit, participants are asked to reassess their level of knowledge about the device on the same scale used in the first task. The main result is that participants’ reassessments of their knowledge are typically lower than their initial judgments. In other words, having to make an explicit statement of understanding leads people to realize that their knowledge is not as complete as they had thought. The study suggests that people are generally overconfident in their beliefs about what they know and suffer from “the knowledge illusion.”
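For readers who want the logic of the paradigm in concrete form, a minimal sketch follows. It is purely illustrative: the ratings below are invented for the example and are not data from Keil and Rozenblit’s studies.

```python
# Hypothetical sketch (not the authors' materials): tallying pre- and
# post-explanation self-ratings on the 1-7 scale used in the
# Rozenblit-Keil paradigm. All numbers are invented for illustration.

pre_ratings = [6, 5, 7, 4, 6, 5]    # rated understanding before explaining
post_ratings = [4, 3, 5, 3, 4, 3]   # rated understanding after explaining

drops = [pre - post for pre, post in zip(pre_ratings, post_ratings)]
mean_drop = sum(drops) / len(drops)

print(f"Mean drop in self-rated knowledge: {mean_drop:.2f} scale points")
# A positive mean drop is the paradigm's signature result: being forced
# to explain a device reveals that one's understanding was overrated.
```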

Why, then, do people not systematically make more mistakes when taking actions that are guided by explicit reasoning? The authors say it’s because people have access to more information than what is revealed by their explicit attempts to reason. Cognition not only makes use of the information in the head, but also draws on the information suggested by the movements of one’s body, the comments and actions of others, and sources throughout the environment — what the authors dub “the community.”

In other words, cognition captures information from a wide range of sources that are both internal and external to the individual. We typically are not aware of this process. But we don’t need to be: There is no real need to understand how information gained from the “outside” works, because we can learn to simply make use of it. For example, when driving an automobile, you don’t need to know how certain devices (such as lights or gears) function; you just need to know how to flip the right switch or press the right pedal. Your understanding may not be explicit, but you are in an environment that does not require you to make it so. Clearly, in today’s world we interact with many devices that don’t require explicit knowledge, and they are useful precisely because of this fact.

As the authors tell us in the very subtitle of the book, “we never think alone.” We may think that our actions reflect only the thoughts in our own heads but, in fact, we call on much information that lies outside them. The bias is not so much that we are overconfident in our knowledge as that we ignore the fact that our actions draw on these external sources of information.

The authors, both cognitive scientists (Sloman is a professor of cognitive, linguistic, and psychological sciences at Brown University, and Fernbach is a professor of marketing at the University of Colorado’s Leeds School of Business), also stress the notion that people cope with the complexity of the environments in which they live by developing causal mental models of phenomena. Almost by definition, these models cannot match the complexity of the actual phenomena they address. Such models nonetheless serve a useful goal: namely, to guide the actions people take. Once again, our impoverished representations of causal relationships can be corrected and augmented by information that lies “outside” the head.

In a discussion of collective decision making, Sloman and Fernbach helpfully analogize to beehives. Each individual bee knows what it has to do but is ignorant about what others do. Collectively, however, the actions of all the bees contribute to the greater good of the hive, which benefits from all the different individual inputs. Here, the authors argue that, at an individual level, we are typically overconfident in our knowledge but that, at the aggregate level, our mistaken beliefs can be moderated by information from external sources.

The goal of thought is to aid in action, and society’s “beehive” way of overcoming the limitations of the way we individually access and process information has much to recommend it; even when individuals are not clear about the way to move forward, their contributions to the collective help create progress. There is, however, one notable caveat: The information synthesized by the community must actually solve the problems at hand rather than create dysfunctional consequences. It is not clear that one can rely on this mechanism when facing entirely new problems or during periods of environmental turbulence.

Sloman and Fernbach write well for a general audience, and their explanations of many phenomena were a pleasure to read. The early chapters of the book convincingly explicate their thesis about the chronic overestimation of knowledge and the reliance on “external” sources of information. “Because we confuse the knowledge in our heads with the knowledge we have access to,” they say, “we are largely unaware of how little we understand. We live with the belief that we understand more than we do.” They go on to posit that “many of society’s most pressing problems stem from this illusion” (p. 129).

In my view, what Sloman and Fernbach achieve in those first chapters of the book is to provide a general and interesting hypothesis that appears to explain why people are generally overconfident in their knowledge. People are unaware of the extent to which they depend on knowledge in “the community.”

But while this broad hypothesis seems plausible, it does not clearly explain human irrationality across a wide range of tasks. Too much depends on the validity of the information that is accessed in the community. Is the community’s information always “correct,” or does it also bias thoughts in systematic ways? When and why does each outcome occur? Sloman and Fernbach don’t tell us and, indeed, it would be difficult for them to do so. Instead, they ask us to accept their general hypothesis and then explore the different situations and problems it can illuminate.

The thesis is both comprehensive and bold, and the authors — who are clearly accomplished scientists — provide many arguments that seem to make sense. Indeed, the reader is presented with many well-articulated passages that support the thesis. The book is also fun to read.

But what should we make of the fact that we are typically unaware of the border between internal and external information? Much clearly depends on the nature of the external information that we access. Is this relevant or irrelevant to the situation at hand? More importantly, does it lead to decisions that are or are not appropriate?

My comments might seem unfair because — in the second half of the book — Sloman and Fernbach deal with a range of important issues for which the hypothesis does seem to apply quite nicely. These include chapters on technology, science, and politics, and — perhaps most effectively — meditations on what it means for a person to be “smart.”

In the realm of technology, the authors emphasize the importance of the internet as a source of external information. Merely consulting the internet about a topic apparently makes people feel more knowledgeable about that topic than about topics they have not searched. Society’s newfound dependence on the internet thus suggests that people today may feel more knowledgeable than they did just a few years ago.

The internet can be seen as a force for good — people have rapid access to a wide range of information, and it may not matter whether they are deluded into thinking that they really know more. However, as others have articulated, the internet undoubtedly plays a role in creating “bubbles” that capture people’s minds and cause them to resist new information. It has also undoubtedly contributed to the dissemination of “fake” news.

Sloman and Fernbach also have interesting comments about the use and growth of artificial intelligence and, in particular, of machine-learning algorithms. They view as unreasonable the fear that these algorithms will eventually replace humans. Although the authors admit algorithms are becoming faster and more comprehensive, they also believe algorithms are just useful tools that cannot really challenge humans. That is because, as tools, the products of artificial intelligence will always lack “intentionality,” which the authors view as the distinctive aspect of human thought. These insights are interesting but fail to connect to the authors’ central hypothesis.

By contrast, the authors’ comments about what makes a person “smart” readily and effectively illustrate their hypothesis. In brief, Sloman and Fernbach question whether a person’s intelligence can be represented by a single number, such as the score on an IQ test. Instead, they argue, one should think about people in the context of the community or group in which they are active, and then ask what the person’s knowledge adds to what is needed by the group. If the group is lacking in the skills displayed by the individual, then that individual should be considered “smart.” If, on the other hand, the person’s abilities are already present among other group members, then that person would not be considered “smart.” In other words, the assessment of intelligence is conditional on the characteristics of particular groups.

What is interesting about this suggestion is the notion that intelligence is a consequence of a person’s unique informational input to the community. Change the community, and you may well change the assessment of intelligence. There are at least two implications to this line of reasoning: first, that groups of decision-makers should seek diversity in their members; and, second, that diversity is typically more valuable than particular expertise if the latter is already represented in the group. Interestingly, other researchers have reached similar conclusions.
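One way to make this relational notion concrete is a small set-theoretic sketch. The code below is my own toy formalization, not a model the authors propose; the skills and groups in it are hypothetical.

```python
# Toy formalization (mine, not the authors'): read "smart" as the set of
# skills a person contributes beyond what the group already covers.

def marginal_contribution(person_skills: set, group_skills: set) -> set:
    """Return the skills the person adds that the group lacks."""
    return person_skills - group_skills

group_a = {"statistics", "contract law", "public speaking"}
group_b = {"machine learning", "statistics", "contract law"}
alice = {"statistics", "machine learning"}

print(marginal_contribution(alice, group_a))  # {'machine learning'}
print(marginal_contribution(alice, group_b))  # set()
# The same person counts as "smart" in group_a but adds nothing new to
# group_b: the assessment is conditional on the group, not the person.
```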

At the close of the book, the authors ask about the extent to which overconfidence is functional or dysfunctional in the context of an individual’s decision-making activities. Is the person whose confidence is well calibrated to his or her knowledge better off than the person who is systematically overconfident? At first, it is tempting to say that overconfidence is a bias and, as such, should be roundly avoided. However, from a dynamic perspective, acting on overconfident beliefs can lead to a willingness to engage in risky behaviors that, on occasion, may yield significant positive outcomes. Consider, for example, a tennis player who, rather than playing with an accurate assessment of her ability, is not afraid to believe that she is better than she actually is; that belief may itself result in better play. In other words, overconfidently assessing one’s level of competence can have self-fulfilling effects.

Across a range of activities, then, those with accurate self-assessments may, to some extent, “handicap” themselves relative to people who are overconfident. On the other hand, those whose acts are born of overconfidence may handicap themselves in other ways. The discussion raises the question of whether there is an optimal level of overconfidence and how that level might vary across different tasks and contexts.

To summarize, Sloman and Fernbach have proposed a bold and interesting hypothesis about how we think. Although data — such as the experiments by Keil and Rozenblit — support part of the hypothesis at the individual level, its broader societal implications are more open to interpretation. Regardless, the authors deserve thanks for raising issues that are illuminated by their cognitive perspective, and their book is a worthwhile and enjoyable read.