Public health officials have faced resistance in their efforts to promote mask-wearing to counter the spread of COVID-19. One approach to promoting behavior change is to alert people to the fact that a behavior is common (a descriptive norm). However, partisan differences in pandemic mitigation behavior mean that Americans may be especially (in)sensitive to information about behavioral norms depending on the party affiliation of the group in question. In July–August 2020, we tested the effects of providing information to respondents about how many Americans, co-partisans, or out-partisans report wearing masks regularly on both mask-wearing intentions and the perceived effectiveness of masks. Learning that a majority of Americans report wearing masks regularly increases mask-wearing intentions and perceived effectiveness, though the effects of this information are not distinguishable from other treatments.
In this study, we tested the validity of two scales addressing conspiratorial thinking that may influence behaviours related to public health and the COVID-19 pandemic. Using the COVIDiSTRESSII Global Survey data from 12 261 participants, we validated the 4-item Conspiratorial Thinking Scale and the 3-item Anti-Expert Sentiment Scale across 24 languages and dialects, each used by at least 100 participants. We employed confirmatory factor analysis, measurement invariance testing and measurement alignment to assess internal consistency. To test the convergent validity of the two scales, we assessed correlations with trust in seven agents related to government, science and public health. Although scalar invariance was not achieved in the initial measurement invariance test, we found that both scales can be employed in further international studies with measurement alignment. Moreover, both conspiratorial thinking and anti-expert sentiments were significantly and negatively correlated with trust in all agents. Findings from this study provide supporting evidence for the validity of both scales across 24 languages for future large-scale international research.
Political elites sometimes seek to delegitimize election results using unsubstantiated claims of fraud. Most recently, Donald Trump sought to overturn his loss in the 2020 US presidential election by falsely alleging widespread fraud. Our study provides new evidence demonstrating the corrosive effect of fraud claims like these on trust in the election system. Using a nationwide survey experiment conducted after the 2018 midterm elections – a time when many prominent Republicans also made unsubstantiated fraud claims – we show that exposure to claims of voter fraud reduces confidence in electoral integrity, though not support for democracy itself. The effects are concentrated among Republicans and Trump approvers. Worryingly, corrective messages from mainstream sources do not measurably reduce the damage these accusations inflict. These results suggest that unsubstantiated voter-fraud claims undermine confidence in elections, particularly when the claims are politically congenial, and that their effects cannot easily be mitigated by fact-checking.
Psychological attachment to political parties can bias people’s attitudes, beliefs, and group evaluations. Studies from psychology suggest that self-affirmation theory may ameliorate this problem in the domain of politics on a variety of outcome measures. We report a series of studies conducted by separate research teams that examine whether a self-affirmation intervention affects a variety of outcomes, including political or policy attitudes, factual beliefs, conspiracy beliefs, affective polarization, and evaluations of news sources. The different research teams use a variety of self-affirmation interventions, research designs, and outcomes. Despite these differences, the research teams consistently find that self-affirmation treatments have little effect. These findings suggest considerable caution is warranted for researchers who wish to apply the self-affirmation framework to studies that investigate political attitudes and beliefs. By presenting the “null results” of separate research teams, we hope to spark a discussion about whether and how the self-affirmation paradigm should be applied to political topics.
Studies of the American public demonstrate that partisans often diverge not only on questions of opinion but also on matters of fact. However, little is known about partisan divergence in factual beliefs among the government officials who make real policy decisions, or how it compares to belief polarization among the public. This letter describes the first systematic comparison of factual belief polarization between the public and government officials, which we conducted using a paired survey approach. The results indicate that political elites are consistently more accurately informed than the public across a wide range of politically contentious facts. However, this increase in accuracy does not translate into reduced factual belief polarization. These findings demonstrate that a more informed political elite does not necessarily mitigate partisan factual disagreement in policy making.
Majorities of citizens in high-income countries often oppose foreign aid spending. One popular explanation is that the public overestimates the percentage and amount of taxpayer funds that go toward overseas aid. Does expressing aid flows in dollar and/or percentage terms shift public opinion toward aid? We report the results of an experiment examining differences in support for aid spending as a function of the information American and British respondents receive about foreign aid spending. In both nations, providing respondents with information about foreign aid spending as a percentage of the national budget significantly reduces support for cuts. The findings suggest that support for aid can be increased, but significant opposition to aid spending remains.
Misinformation can be very difficult to correct and may have lasting effects even after it is discredited. One reason for this persistence is the manner in which people make causal inferences based on available information about a given event or outcome. As a result, false information may continue to influence beliefs and attitudes even after being debunked if it is not replaced by an alternate causal explanation. We test this hypothesis using an experimental paradigm adapted from the psychology literature on the continued influence effect and find that a causal explanation for an unexplained event is significantly more effective than a denial even when the denial is backed by unusually strong evidence. This result has significant implications for how to most effectively counter misinformation about controversial political events and outcomes.