Political actors face a trade-off when they try to influence the beliefs of voters about the effects of policy proposals. They want to sway voters maximally, yet voters may discount predictions that are inconsistent with what they already hold to be true. Should political actors moderate or exaggerate their predictions to maximize persuasion? I extend the Bayesian learning model to account for confirmation bias and show that only under strong confirmation bias are predictions far from the priors of voters self-defeating. I use a preregistered survey experiment to determine whether and how voters discount predictions conditional on the distance between their prior beliefs and the predictions. I find that voters assess predictions far from their prior beliefs as less credible and, consequently, update less. The paper has important implications for strategic communication by showing theoretically and empirically that the prior beliefs of voters constrain political actors.
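To make the mechanism concrete, here is a minimal Python sketch of Bayesian updating with a confirmation-bias discount. The exponential decay form, and the names posterior_mean, base_weight, and bias, are illustrative assumptions, not the paper's model.

```python
import numpy as np

# A minimal sketch of Bayesian updating with a confirmation-bias discount.
# The exponential decay form is a hypothetical illustration, not the paper's
# model: the weight a voter puts on a prediction shrinks with the
# prediction's distance from the voter's prior belief.

def posterior_mean(prior, prediction, base_weight=0.5, bias=0.0):
    """Posterior belief after hearing a prediction.

    bias = 0 recovers standard Bayesian averaging of prior and signal;
    larger values mean the voter discounts far-from-prior predictions.
    """
    distance = abs(prediction - prior)
    weight = base_weight * np.exp(-bias * distance)  # credibility discount
    return prior + weight * (prediction - prior)

prior = 0.0
for bias in (0.0, 0.5, 2.0):  # no, weak, and strong confirmation bias
    shifts = [posterior_mean(prior, p, bias=bias) - prior for p in (1, 2, 4)]
    print(f"bias={bias}: shifts for predictions 1, 2, 4 -> {np.round(shifts, 3)}")
```

With bias = 0 the belief shift grows monotonically with the prediction's distance from the prior, so exaggeration always pays; under strong bias the most extreme prediction moves beliefs least, which is the sense in which far-from-prior predictions can become self-defeating.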
Applying the BIC in practice is far from straightforward because it requires the regularization of space-time infinities by implementing some cosmic “measure.” Furthermore, a suitable physical quantity must be chosen as a proxy for the number of reference-class observers in a given space-time region. Unfortunately, the choices made in this procedure are prone to being exploited – often unintentionally – by researchers as so-called researcher degrees of freedom (a term from the social science literature) to yield the results that best conform to their theoretical preferences. In light of this difficulty, the prospects look dim for obtaining compelling evidence in favor of any specific multiverse theory by testing whether our observations are those that typical multiverse inhabitants would make. As it turns out, the multiverse theories with the best chances of being successfully tested empirically are those that do not behave like typical multiverse theories in important respects – i.e., those according to which all universes in the multiverse are similar or identical in a significant number of ways.
The second trap is a consequence of the strategic dilemma. Some negotiation behaviors come more naturally to us than others. If the task seems more coherent than it really is, we may not notice the dilemma, which can lead to an illusion of competence. When we do not grasp the full picture of the task and the dilemma, we do not develop the full skill set needed to address them. This is especially true of cooperation, an innate ability, but one at which humans must persevere to become highly skilled.
Our modern observation-based approaches to the study of the human condition were shaped by the Scottish Enlightenment. Political Economy emerged as a discipline of its own in the nineteenth century, then fragmented further around the dawn of the twentieth century. Today, we see Political Economy’s pieces being reassembled and reunited with their philosophical roots. This issue pauses to reflect on the history of this new but also old field of study.
How we think we read stories or real-life situations and how we actually read them are often very different. This chapter explores what the differences are and how they can get in the way of effectively interpreting case stories. You will see how applying a systematic approach to reading case stories helps you become more self-aware and skilful in your interpretive practices. Following a systematic approach will enable you to separate observations from interpretations or evaluations and make you less likely to jump to conclusions. The approach presented in this chapter is the ‘SNAAPI’ steps, a simple five-step inductive reasoning–based process that will help you make sense of both the case stories in this book and the real-life situations you will encounter in schools. The chapter also introduces three variants of the SNAAPI steps that you can use when you want a more specialised engagement with a case story. All the interpretive approaches can be undertaken individually, but you will gain the most benefit from discussing your thinking with others at all stages of the process.
This paper focuses on the effects of entrepreneurial overconfidence on new venture creation. By analyzing Global Entrepreneurship Monitor data and using the theory of planned behavior as a framework, the study provides new evidence on the relative or absolute nature of overconfidence in entrepreneurial skills and on the effect of overprecision on new venture creation. The study newly links overprecision of supporting beliefs to venture creation and shows that nascent entrepreneurs’ overconfidence is based on a self-focusing attitude. The results confirm that overconfidence is not a single construct and highlight the differences between forms of overconfidence habitually confused in the entrepreneurship literature.
Real-world policymakers face pressure to take action, to legislate, and to attempt to solve problems even in imperfect ways. What kind of paternalistic policies can we reasonably expect policymakers to create? We argue that public-choice pressures will tend to produce suboptimal paternalistic policies, even if we assume behavioral paternalists’ conclusions about human behavior are generally correct. Rational ignorance, bureaucratic self-interest, concentrated benefits and diffuse costs, the influence of rent-seekers and moralists, and other factors will tend to shape policy in undesirable ways. If policymakers are susceptible to biases such as those attributed to regular people, the results could be even worse. Biases with the potential to adversely affect policymaking include action bias, overconfidence, confirmation bias, availability and salience effects, affect and prototype heuristics, and present bias. Because the political sphere offers weak incentives for the self-correction of biases, we expect such biases to be more significant in the public than in the private sphere.
This final chapter summarises the arguments and evidence presented in the previous nine chapters; it is thus, in the main, a collection of short summaries of those chapters. The chapter concludes by contending that the notion that we ought to, and often do, reciprocate is one that most people can accept. It is acknowledged once again that there are many possible dark sides to reciprocity, but that it more often than not serves the better angels of our nature and, in the process, generates significant group and, by extension, individual benefits. Consequently, it is advised that policies, institutions, organisations and sectors be designed to encourage and sustain this most fundamental motivator of human behaviour.
During analysis in engineering design, systematic thinking errors, so-called cognitive biases, can lead to an inaccurate understanding of the design problem. With a simplified version of the Analysis of Competing Hypotheses (ACH) method and a simplified decision matrix, confirmation bias in particular can be minimized. To evaluate this method, it was taught to experienced design engineers and mechanical engineering students. In the experimental evaluation, the participants analysed a real technical problem, and their procedures and results were compared with a previously conducted study that used the same task. The design engineers did not change their approach and did not further improve their analysis success. The students, however, profited considerably from the training: they mentioned twice as much supporting evidence and six times as much contradicting evidence, indicating a more extensive analysis. As a result, the trained students showed significantly fewer signs of confirmation bias than untrained students. The findings suggest that debiasing strategies should be introduced early in engineering design education.
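For readers unfamiliar with ACH, the following is a minimal Python sketch of a simplified consistency matrix in its spirit. The hypotheses, evidence items, and scoring scheme are invented for illustration; the paper's simplified method and task are not reproduced here.

```python
# A minimal sketch of a simplified Analysis of Competing Hypotheses (ACH)
# consistency matrix. The hypotheses, evidence items, and scores below are
# invented examples, not the study's materials.

CONSISTENT, NEUTRAL, INCONSISTENT = 1, 0, -1

hypotheses = ["bearing wear", "misalignment", "lubricant failure"]
evidence = {
    "vibration spectrum peak": [CONSISTENT, CONSISTENT, NEUTRAL],
    "temperature within spec": [NEUTRAL, NEUTRAL, INCONSISTENT],
    "recent reassembly":       [NEUTRAL, CONSISTENT, NEUTRAL],
}

# ACH ranks hypotheses by how little evidence contradicts them, which pushes
# the analyst to weigh disconfirming evidence rather than collect support
# for a favored hypothesis.
for i, name in enumerate(hypotheses):
    scores = [row[i] for row in evidence.values()]
    contradictions = scores.count(INCONSISTENT)
    print(f"{name}: {contradictions} contradiction(s), net score {sum(scores)}")
```

Scoring every hypothesis against every piece of evidence, rather than checking one favored hypothesis, is what counteracts a purely confirmatory search.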
Psychologists have demonstrated the value of diversity – particularly diversity of viewpoints – for enhancing creativity, discovery, and problem solving. But one key type of viewpoint diversity is lacking in academic psychology in general and social psychology in particular: political diversity. This article reviews the available evidence and finds support for four claims: (1) Academic psychology once had considerable political diversity, but has lost nearly all of it in the last 50 years. (2) This lack of political diversity can undermine the validity of social psychological science via mechanisms such as the embedding of liberal values into research questions and methods, steering researchers away from important but politically unpalatable research topics, and producing conclusions that mischaracterize liberals and conservatives alike. (3) Increased political diversity would improve social psychological science by reducing the impact of bias mechanisms such as confirmation bias, and by empowering dissenting minorities to improve the quality of the majority's thinking. (4) The underrepresentation of non-liberals in social psychology is most likely due to a combination of self-selection, hostile climate, and discrimination. We close with recommendations for increasing political diversity in social psychology.
Diagnostic errors can have tremendous consequences because they can result in a fatal chain of wrong decisions. Experts assume that physicians' desire to confirm a preliminary diagnosis while failing to seek contradictory evidence is an important reason for wrong diagnoses. This tendency is called ‘confirmation bias’.
To study whether psychiatrists and medical students are prone to confirmation bias and whether confirmation bias leads to poor diagnostic accuracy in psychiatry, we presented an experimental decision task to 75 psychiatrists and 75 medical students.
A total of 13% of psychiatrists and 25% of students showed confirmation bias when searching for new information after having made a preliminary diagnosis. Participants conducting a confirmatory information search were significantly less likely to make the correct diagnosis than participants searching in a disconfirmatory or balanced way [multiple logistic regression: odds ratio (OR) 7.3, 95% confidence interval (CI) 2.53–21.22, p<0.001; OR 3.2, 95% CI 1.23–8.56, p=0.02]. Psychiatrists conducting a confirmatory search made a wrong diagnosis in 70% of cases, compared with 27% for a disconfirmatory and 47% for a balanced information search (students: 63%, 26% and 27%). Participants choosing the wrong diagnosis also prescribed different treatment options from participants choosing the correct diagnosis.
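As a rough check on how the reported proportions relate to the odds ratios, here is a back-of-the-envelope Python sketch. The crude, unadjusted ratios below use the psychiatrists' error rates alone; the paper's ORs come from a multiple logistic regression over all participants, so the figures will not match exactly.

```python
# Crude, unadjusted odds ratios from the psychiatrists' reported error rates.
# The paper's ORs are regression-adjusted, so these are only approximations.

def odds(p):
    return p / (1 - p)

wrong_confirmatory, wrong_disconfirmatory, wrong_balanced = 0.70, 0.27, 0.47

print(odds(wrong_confirmatory) / odds(wrong_disconfirmatory))  # ~6.3 (reported OR 7.3)
print(odds(wrong_confirmatory) / odds(wrong_balanced))         # ~2.6 (reported OR 3.2)
```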
Confirmatory information search harbors the risk of wrong diagnostic decisions. Psychiatrists should be aware of confirmation bias and instructed in techniques to reduce bias.
Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing, but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found.