Book contents
- Frontmatter
- Contents
- List of Contributors
- Preface
- Introduction – Heuristics and Biases: Then and Now
- PART ONE THEORETICAL AND EMPIRICAL EXTENSIONS
- PART TWO NEW THEORETICAL DIRECTIONS
- PART THREE REAL-WORLD APPLICATIONS
- 33 The Hot Hand in Basketball: On the Misperception of Random Sequences
- 34 Like Goes with Like: The Role of Representativeness in Erroneous and Pseudo-Scientific Beliefs
- 35 When Less Is More: Counterfactual Thinking and Satisfaction among Olympic Medalists
- 36 Understanding Misunderstanding: Social Psychological Perspectives
- 37 Assessing Uncertainty in Physical Constants
- 38 Do Analysts Overreact?
- 39 The Calibration of Expert Judgment: Heuristics and Biases Beyond the Laboratory
- 40 Clinical versus Actuarial Judgment
- 41 Heuristics and Biases in Application
- 42 Theory-Driven Reasoning about Plausible Pasts and Probable Futures in World Politics
- References
- Index
39 - The Calibration of Expert Judgment: Heuristics and Biases Beyond the Laboratory
from PART THREE - REAL-WORLD APPLICATIONS
Published online by Cambridge University Press: 05 June 2012
Summary
The study of how people use subjective probabilities is a remarkably modern concern, largely motivated by the increasing use of expert judgment during and after World War II (Cooke, 1991). Experts are often asked to quantify the likelihood of events such as a stock market collapse, a nuclear plant accident, or a presidential election (Ayton, 1992; Baron, 1998; Hammond, 1996). For applications such as these, it is essential to know how well the probabilities experts attach to various outcomes match the relative frequencies of those outcomes; that is, whether experts are properly “calibrated.” Despite this, relatively few studies have evaluated how well descriptive theories of probabilistic reasoning capture the behavior of experts in their natural environment. In this chapter, we examine the calibration of expert probabilistic predictions “in the wild” and assess how well the heuristics and biases perspective on judgment under uncertainty can account for the findings. We then review alternative theories of calibration in light of the expert data.
Calibration and Miscalibration
Miscalibration presents itself in a number of forms. Figure 39.1 displays four typical patterns of miscalibrated probability judgments. The solid diagonal, the identity line or line of perfect calibration, indicates the set of points at which judged probability and relative frequency coincide. The line marked A, on which every judgment exceeds the corresponding relative frequency, represents overprediction bias; the line marked B, on which every judgment falls below the corresponding relative frequency, represents underprediction bias.
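The calibration curves described above can be computed directly from a set of probability judgments and the corresponding binary outcomes. The following sketch (an illustration of the general technique, not code from the chapter; the function name and data are hypothetical) bins judged probabilities and compares each bin's mean judgment with the relative frequency of the outcome in that bin:

```python
# Hypothetical illustration of a calibration curve: group judged
# probabilities into bins and compare each bin's mean judgment with
# the observed relative frequency of the predicted outcome.

def calibration_curve(probs, outcomes, n_bins=10):
    """Return (mean judged probability, relative frequency) pairs,
    one per non-empty probability bin."""
    bins = [[] for _ in range(n_bins)]
    for p, o in zip(probs, outcomes):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into top bin
        bins[i].append((p, o))
    curve = []
    for b in bins:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            rel_freq = sum(o for _, o in b) / len(b)
            curve.append((mean_p, rel_freq))
    return curve

# Overprediction bias (pattern A in Figure 39.1): judged probabilities
# exceed relative frequencies at every point; underprediction (pattern B)
# is the reverse. The toy data below show overprediction.
probs = [0.9, 0.9, 0.9, 0.9, 0.7, 0.7, 0.7, 0.7]
outcomes = [1, 1, 0, 0, 1, 0, 0, 0]  # hit rates of 0.5 and 0.25
for mean_p, freq in calibration_curve(probs, outcomes):
    print(round(mean_p, 2), round(freq, 2))
```

Plotting each `(mean_p, rel_freq)` pair against the identity line reproduces the kind of diagram shown in Figure 39.1: points below the diagonal indicate overprediction, points above it underprediction.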
Heuristics and Biases: The Psychology of Intuitive Judgment, pp. 686–715. Publisher: Cambridge University Press. Print publication year: 2002.