Climate Change 2001:
Working Group II: Impacts, Adaptation and Vulnerability

2.6.5.3. Overconfidence

Overconfidence is another cognitive illusion that has been reported to plague experts' judgments. In the 1970s and 1980s, a considerable amount of evidence was amassed for the view that people suffer from an overconfidence bias. The common finding is that respondents are correct less often than their confidence assessments imply.
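To make this concrete, the following minimal sketch (in Python) shows the standard over/underconfidence score used in calibration research: mean stated confidence minus the proportion of correct answers. The confidence and answer data are hypothetical and are not drawn from the studies cited.

    # Overconfidence score: mean stated confidence minus actual hit rate.
    # All data below are hypothetical, for illustration only.
    confidences = [0.9, 0.8, 1.0, 0.7, 0.9, 0.6, 0.8, 1.0]  # stated P(correct)
    correct     = [1,   1,   0,   1,   0,   1,   0,   1]    # 1 = answered correctly

    mean_confidence = sum(confidences) / len(confidences)
    hit_rate = sum(correct) / len(correct)

    # Positive values indicate overconfidence; zero is perfect calibration.
    print(f"mean confidence = {mean_confidence:.2f}, hit rate = {hit_rate:.2f}, "
          f"gap = {mean_confidence - hit_rate:+.2f}")

On these illustrative numbers, mean confidence exceeds the hit rate by about 0.21: the respondent is correct less often than the stated confidence implies.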

However, "ecological" theorists (cf. McClelland and Bolger, 1994) claim that overconfidence is an artifact of artificial experimental tasks and nonrepresentative sampling of stimulus materials. Gigerenzer et al. (1991) and Juslin (1994) claim that individuals are well adapted to their environments and do not make biased judgments. Overconfidence is observed because the typical general knowledge quiz used in most experiments contains a disproportionate number of misleading items. These authors have found that when knowledge items are randomly sampled, the overconfidence phenomenon disappears. Juslin et al. (2000) report a meta-analysis comparing 35 studies in which items were randomly selected from a defined domain with 95 studies in which items were selected by experimenters. Although overconfidence was evident for selected items, it was close to zero for randomly sampled items—which suggests that overconfidence is not simply a ubiquitous cognitive bias. This analysis suggests that the appearance of overconfidence may be an illusion created by research, not a cognitive failure by respondents.

Furthermore, for judgments of repeated events (e.g., by weather forecasters, horse-race bookmakers, and tournament bridge players), experts make well-calibrated forecasts. In these cases, respondents may be identifying relative frequencies for sets of similar events rather than judging the likelihood of individual events. Comparing studies of the calibration of probability assessments for individual events (e.g., Wright and Ayton, 1992) with studies of subjective assessments for repetitive predictions of events (Murphy and Winkler, 1984), calibration is relatively poor in the former and relatively good in the latter.
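Calibration for repeated events can be checked directly, as in evaluations of weather forecasters: forecasts are grouped by stated probability, and within each group the stated value is compared with the observed relative frequency of the event. The following sketch illustrates the computation; the forecast and outcome data are invented for illustration and are not drawn from the studies cited.

    from collections import defaultdict

    forecasts = [0.1, 0.1, 0.3, 0.3, 0.3, 0.7, 0.7, 0.9, 0.9, 0.9]  # stated P(rain)
    outcomes  = [0,   0,   0,   1,   0,   1,   1,   1,   1,   0]    # 1 = it rained

    # Group the outcomes by the stated forecast probability.
    groups = defaultdict(list)
    for p, y in zip(forecasts, outcomes):
        groups[p].append(y)

    # A well-calibrated forecaster's observed frequencies track the stated values.
    for p in sorted(groups):
        observed = sum(groups[p]) / len(groups[p])
        print(f"stated {p:.1f} -> observed {observed:.2f} (n={len(groups[p])})")

Such a check is possible only because repeated, similar events supply a reference class; as the next paragraph notes, no analogous check exists for unique events.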

It might be concluded that a frequentist rather than a Bayesian approach should be adopted when attempting to elicit judgment. However, for some events no obvious reference class exists, so their likelihood cannot be assessed by the frequentist approach; this applies particularly to novel situations for which there is no actuarial history. One might well be able to account for the (no doubt varying) subjective probabilities offered by a sample of people by identifying mental heuristics. Without a reference class, however, we have no means of evaluating the validity of any judgments that might be offered. Consequently, any probability given to a unique event remains somewhat ambiguous.


