The COVID-19 crisis has forced healthcare professionals to make tragic decisions concerning which patients to save, and it has foregrounded the influence of self-serving bias in debates on how to allocate scarce resources. A utilitarian principle favors allocating scarce resources such as ventilators toward younger patients, as this is expected to save more years of life. Some view this as ageist, instead favoring age-neutral principles, such as “first come, first served”. Which approach is fairer? The “veil of ignorance” is a moral reasoning device designed to promote impartial decision-making by reducing decision-makers’ use of potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here we apply veil-of-ignorance reasoning to the COVID-19 ventilator dilemma, asking participants which policy they would prefer if they did not know whether they were younger or older. Two studies (pre-registered; online samples; Study 1, N=414; Study 2 replication, N=1,276) show that veil-of-ignorance reasoning shifts preferences toward saving younger patients. The effect on older participants is dramatic, reversing their opposition toward favoring the young, thereby eliminating self-serving bias. These findings offer guidance on removing self-serving biases to healthcare policymakers and frontline personnel charged with allocating scarce medical resources during times of crisis.
In 2020, most countries around the world adopted various measures aimed at combating the coronavirus (i.e., COVID-19) or reducing risky behavior that may spread the virus. In the current study (N = 215), we examined compliance with COVID-19 prevention guidelines from a risk-taking perspective, differentiating active from passive risk taking. In the corona context, active risk taking involves actions that may cause disease contraction, such as shaking hands, while passive risk taking involves the acceptance of risk brought on by inaction, as in not using an alcohol-gel disinfectant. We found that personal tendencies for passive and active risk taking predicted passive and active corona-related risk taking, respectively. Furthermore, compliance with COVID-19 prevention measures was also related to differences in self-control, with low initiation self-control predicting passive corona risk taking and low inhibition self-control predicting active corona risk taking. Thus, while not complying with COVID-19 prevention measures puts people at risk, differentiating between active and passive risks helps predict each type of risk behavior accurately.
We conducted a replication of Shafir (1993), who showed that people are inconsistent in their preferences when faced with choosing versus rejecting decision-making scenarios. The effect was demonstrated using an enrichment paradigm, asking subjects to choose between enriched and impoverished alternatives, with the enriched alternative having both more positive and more negative features than the impoverished alternative. Using eight different decision scenarios, Shafir found support for a compatibility principle: subjects chose and rejected enriched alternatives in choose and reject decision scenarios (d = 0.32 [0.23, 0.40]), respectively, and indicated greater preference for the enriched alternative in the choice task than in the rejection task (d = 0.38 [0.29, 0.46]). In a preregistered very close replication of the original study (N = 1026), we found no consistent support for the hypotheses across the eight problems: two showed similar effects, two showed opposite effects, and four showed no effects (overall d = −0.01 [−0.06, 0.03]). Seeking alternative explanations, we tested an extension and found support for the accentuation hypothesis.
The classic preference reversal phenomenon, where monetary evaluations contradict risky choices, has been argued to arise due to a focus on outcomes during the evaluation of alternatives, leading to overpricing of long-shot options. Such an explanation makes the implicit assumption that attentional shifts drive the phenomenon. We conducted an eye-tracking study to causally test this hypothesis by comparing a treatment based on cardinal, monetary evaluations with a different treatment avoiding a monetary frame. We find a significant treatment effect in the form of a shift in attention toward outcomes (relative to probabilities) when evaluations are monetary. Our evidence suggests that attentional shifts resulting from the monetary frame of evaluations are a driver of preference reversals.
The MPG illusion and the time-saving bias both show that people misjudge the gains from increases in efficiency or speed, because people falsely believe that efficiency and speed are linearly related to consumption (e.g., gallons of fuel or journey time). This efficiency-consumption gap (ECG) has been demonstrated consistently in various situations. In parallel, people have also been found to show diminished sensitivity to increases in magnitude when judging under separate versus joint evaluation modes (SE vs. JE). We show that these “two wrongs can make a right”: when people judge efficiency upgrades under SE, their subjective judgments follow a concave curve that closely resembles the curvilinear pattern of efficiency upgrades, making their preferences (artificially) less biased than they are under JE. In two studies we show that when people are asked for their willingness-to-pay (WTP) for two options (a smaller vs. a larger upgrade) to products or services, WTPs differ less under SE than under JE. This means that people exhibit lower sensitivity to upgrade size under SE, which leads to a de-biasing effect. We show that because JE follows a linear trend, it yields biased preferences for efficiency measures, but not for consumption measures. In contrast, SE yields biased preferences for consumption measures, but not for efficiency measures.
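The curvilinear relation behind the MPG illusion can be made concrete with a small worked example. The sketch below is illustrative only (the 10,000-mile distance and the particular MPG values are assumptions, not figures from the study): it shows that equal-sized MPG increments yield sharply diminishing fuel savings, because consumption is a 1/x function of efficiency, not a linear one.

```python
# Worked example of the efficiency-consumption gap: consumption (gallons)
# is a curvilinear (1/x) function of efficiency (MPG), so equal MPG
# increments save very different amounts of fuel.
# The 10,000-mile distance is an illustrative assumption.

MILES = 10_000

def gallons(mpg: float) -> float:
    """Fuel consumed over MILES miles at a given efficiency."""
    return MILES / mpg

def saving(old_mpg: float, new_mpg: float) -> float:
    """Gallons saved by upgrading from old_mpg to new_mpg."""
    return gallons(old_mpg) - gallons(new_mpg)

# Three upgrades of identical size (+10 MPG) save 500, 167, and 83 gallons,
# respectively -- the concave pattern that SE judgments happen to mimic.
for old, new in [(10, 20), (20, 30), (30, 40)]:
    print(f"{old} -> {new} MPG saves {saving(old, new):.0f} gallons")
```

A judge who assumes linearity would value all three upgrades equally, which is the bias the abstract describes under JE.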
Governments use taxes to discourage undesired behaviors and encourage desired ones. One target of such interventions is reckless behavior, such as texting while driving, which in most cases is harmless but sometimes leads to catastrophic outcomes. Past research has demonstrated how interventions can backfire when the tax on one reckless behavior is set too high whereas other less attractive reckless actions remain untaxed. In the context of experience-based decisions, this undesirable outcome arises from people behaving as if they underweighted rare events, which according to a popular theoretical account can result from basing decisions on a small, random sample of past experiences. Here, we reevaluate the adverse effect of overtaxation using an alternative account focused on recency. We show that a reinforcement-learning model that weights recently observed outcomes more strongly than those observed in the past can provide an equally good account of people’s behavior. Furthermore, we show that there exist two groups of individuals who show qualitatively distinct patterns of behavior in response to the experience of catastrophic outcomes. We conclude that targeted interventions tailored for a small group of myopic individuals who disregard catastrophic outcomes soon after they have been experienced can be nearly as effective as an omnibus intervention based on taxation that affects everyone.
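A minimal sketch may clarify what a recency-weighted reinforcement-learning model means here. The standard device is a delta-rule update with a constant learning rate, under which the outcome observed k steps ago receives weight proportional to (1 − α)^k, so recent experiences dominate the value estimate. The learning rate and the outcome values below are illustrative assumptions, not the parameters or stimuli fitted in the paper.

```python
# Sketch of a recency-weighted value estimate via the delta rule:
#   Q <- Q + alpha * (r - Q)
# With constant alpha, past outcomes are discounted exponentially,
# so a rare catastrophic outcome fades quickly from the estimate.
# alpha=0.3 and the outcome values are illustrative assumptions.

def recency_weighted_value(outcomes, alpha=0.3, q0=0.0):
    """Return the running value estimate after observing `outcomes` in order."""
    q = q0
    for r in outcomes:
        q += alpha * (r - q)  # recent outcomes get exponentially more weight
    return q

# A catastrophic loss (-100) among small gains (+1): a few benign
# outcomes later, the estimate has largely recovered -- the "myopic"
# pattern the abstract attributes to one subgroup of individuals.
history = [1, 1, -100, 1, 1, 1, 1, 1]
print(recency_weighted_value(history))
```

This is the sense in which recency can mimic underweighting of rare events: the catastrophe is fully experienced, but its influence decays with every subsequent benign outcome.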
The scale distortion theory of anchoring argues that people are influenced by a previously considered numeric value, an anchor, because the anchor distorts the scale on which a subsequent judgment is made. The distortion of the scale due to the anchor is a momentary effect that would be overridden if the scale were distorted again, for example, by consideration of a different value on the same scale. In the present study, participants compared thirteen random anchors on the same scale to thirteen different objects. Subsequent numeric estimates of the objects’ attributes were influenced by the corresponding anchors even though the anchors were separated from the estimates by twelve questions pertaining to different values on the same scale. The numeric value considered immediately before the estimate did not have a considerable effect on the judgment. While the anchoring effect was robust, it cannot be easily explained by scale distortion. Other possible theories of the anchoring effect are compatible with the results.
Are groups of people better able to minimize a collective loss if there is a collective target that must be reached or if every small contribution helps? In this paper we investigate whether cooperation in social dilemmas can be increased by structuring the problem as a step-level social dilemma rather than a linear social dilemma and whether cooperation can be increased by manipulating endowment asymmetry between individuals. In two laboratory experiments using ‘Public Bad’ games, we found that individuals defect less and are better able to minimize collective and personal costs in a step-level social dilemma than in a linear social dilemma. We found that the level of cooperation is not affected by an ambiguous threshold: even when participants cannot be sure about the optimal cooperation level, cooperation remains high in the step-level social dilemmas. We find mixed results for the effect of asymmetry on cooperation. These results imply that presenting social dilemmas as step-level games and reducing asymmetry can help solve environmental dilemmas in the long term.
People often use tools for tasks, and sometimes there is uncertainty about whether a given task can be completed with a given tool. This project explored whether, when, and how people’s optimism about successfully completing a task with a given tool is affected by the contextual salience of a better or worse tool. In six studies, participants were faced with novel tasks. For each task, they were assigned a tool but also exposed to a comparison tool that was better or worse in utility (or sometimes similar in utility). In some studies, the tool comparisons were essentially social comparisons, because the tool was assigned to another person. In other studies, the tool comparisons were merely counterfactual rather than social. The studies revealed contrast effects on optimism, and the effect worked in both directions. That is, worse comparison tools boosted optimism and better tools depressed optimism. The contrast effects were observed regardless of the general type of comparison (e.g., social, counterfactual). The comparisons also influenced discrete decisions about which task to attempt (for a prize), which is an important finding for ruling out superficial scaling explanations for the contrast effects. It appears that people fail to exclude irrelevant tool-comparison information from consideration when assessing their likelihood of success on a task, resulting in biased optimism and decisions.
Understanding how sustainable preference change can be achieved is of both scientific and practical importance. Recent work shows that merely responding or not responding to objects during go/no-go training can influence preferences for these objects right after the training, when people choose with a time limit. Here we examined whether and how such immediate preference change in fast choices can affect choices without a time limit one week later. In two preregistered experiments, participants responded to go food items and withheld responses toward no-go food items during a go/no-go training. Immediately after the training, they made consumption choices for half of the items (with a time limit in Experiment 1; without a time limit in Experiment 2). One week later, participants chose again (without a time limit in both experiments). Half of the choices had been presented immediately after the training (repeated choices), while the other half had not (new choices). Participants preferred go over no-go items both immediately after the training and one week later. Furthermore, the effect was observed for both repeated and new choices after one week, revealing a direct effect of mere (non)responses on preferences one week later. Exploratory analyses revealed that the effect after one week is related to the memory of stimulus-response contingencies immediately after the training, and that this memory is impaired by making choices. These findings show that mere action versus inaction can directly induce preference change that lasts for at least one week, and that memory of stimulus-response contingencies may play a crucial role in this effect.