Scholars have explored the impact of the unconscious on legal decision-making. Research has shown that, as with all human beings, intuitive reactions play a significant role in judges’ and arbitrators’ decision-making. This article examines the unconscious intuitive processes in the arbitration process and offers suggestions to foster a more robust deliberative overlay and improve the quality of decisions by arbitrators. It also provides suggestions for counsel’s consideration to aid them in capitalizing on these unconscious influences. In order to provide a context that reflects actual arbitrator decision-making, the results of a survey of arbitrators are reported; they illustrate and amplify the psychological influences discussed.
Empirical research into investment treaty conflict is simultaneously promising and potentially perilous. This chapter identifies both its costs and benefits while striving to provide a clear set of guidelines for quality research in an effort to identify the potential uses and abuses of empiricism in international investment law. Empirical research is not immune from the polarization within the field, but certain steps can ensure that empirical work is not influenced by narrow or ideological perspectives. First, we need to understand norms of quality social science to enable a data-driven, rather than emotive, conversation. Second, we need to create time and space for balanced contemplation that cuts across ideological groupings, rather than holding conferences and events attended by one selected segment, and ensure that alternative perspectives are welcomed. Third, we need to work on developing empathic dialogue to engage productively about empirical research and normative reform, including focusing upon aspects that are valuable and those that require development. The objective should be to organize conversations about international investment law around data to engage productively, so that reason and intuition can interact to create solutions that are constructive and sustainable for the longer term.
Chapter 4 examines institutional and intellectual trends in the early Qing, up to the end of Kangxi’s reign. It focuses on the intellectual and political response to the Ming collapse, which spurred a large wave of arguments, both scholarly and political, in favor of light taxes and a noninterventionist state. In contrast to the heavily moralistic tone of Ming fiscal conservatism, the trauma of the Ming-Qing transition drove early Qing elites toward a fiscal worldview that was not only more “realist” but also far more hostile toward state taxation. This hostility stemmed directly from a mainstream historical interpretation of the Ming collapse that placed much of the blame on late Ming tax increases, which, in turn, seemed to have cognitive roots in Qing elites’ deep-rooted moral skepticism of state extraction. Mindful of this “history lesson,” and of ethnic tensions between Manchu and Han populations, the Qing political elite committed itself, both rhetorically and institutionally, to very low agricultural tax quotas. At the same time, however, no such commitment was made towards nonagricultural taxes due to the specific circumstances of the Ming collapse.
This chapter discusses the categories of cognitive heuristic and cognitive bias. These categories have come to define a burgeoning research program in cognitive science (the “heuristics and biases” program) and are widely considered to be universal features of human thought. On closer inspection, both categories are found to be too heterogeneous to identify real cognitive kinds, though some of their sub-categories may. In particular, the chapter examines the construct of the “myside heuristic” (closely related to the phenomenon often known as “confirmation bias”). This is found to be a better candidate for being a cognitive kind, since it seems to pertain to a specific feature of human cognitive architecture. Moreover, the myside heuristic, which (roughly speaking) attaches more weight to one’s own opinions than to contrary opinions, can be rational in certain contexts. Thus, distinguishing the heuristic from a corresponding bias can only be done against the background of a cognitive task or problem. This constitutes another instance of contextual or environmental individuation of a cognitive construct, making it unlikely that it will correspond to a neural kind.
This study investigates the amount and valence of information selected during single item evaluation. One hundred and thirty-five participants evaluated a cell phone by reading hypothetical customer reports. Some participants were first asked to provide a preliminary rating based on a picture of the phone and some technical specifications. The participants who were given the customer reports only after they made a preliminary rating exhibited valence bias in their selection of customer reports. In contrast, the participants who did not make an initial rating sought subsequent information in a more balanced, albeit still selective, manner. The preliminary raters used the least amount of information in their final decision, resulting in faster decision times. The study appears to support the notion that selective exposure is utilized in order to develop cognitive coherence.
This chapter explicates the Bayesian foundations of iterative research, where scholars move back and forth between theory revision, data collection, and data analysis. In this style of research, attention to likelihood ratios guards against common forms of confirmation bias, while Occam factors help to control ad hoc hypothesizing.
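The role likelihood ratios play in guarding against confirmation bias can be made concrete with a small sketch (a hypothetical illustration in Python, not taken from the chapter): evidence supports a hypothesis only insofar as it is more probable under that hypothesis than under its rival, so the updater is forced to consider the disconfirming denominator.

```python
# Bayesian updating via the likelihood ratio: posterior odds = prior odds * LR.
# Attending to how probable the evidence is under the RIVAL hypothesis
# (the denominator) is what guards against confirmation bias.

def posterior_probability(prior_h1, p_evidence_given_h1, p_evidence_given_h2):
    """Return P(H1 | evidence) for two exhaustive hypotheses H1 and H2."""
    prior_odds = prior_h1 / (1 - prior_h1)
    likelihood_ratio = p_evidence_given_h1 / p_evidence_given_h2
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Evidence that fits H1 well (0.8) but fits H2 equally well (0.8) has
# LR = 1 and is non-diagnostic: the posterior stays at the prior.
non_diagnostic = posterior_probability(0.3, 0.8, 0.8)
# The same evidence becomes strong support for H1 once it is improbable
# under H2 (LR = 4), and the posterior rises accordingly.
diagnostic = posterior_probability(0.3, 0.8, 0.2)
```

A confirmation-biased reasoner, by contrast, in effect evaluates only the numerator; the likelihood-ratio form makes that omission visible.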
Chapter 4 provides original data on the way sexual assault was adjudicated across the country in the wake of the Dear Colleague Letter. The chapter presents data gathered from eighty-five of the top colleges and universities over a twenty-seven-month period, from October 2014 to January 2017. It asks about rights deemed fundamental in a criminal trial including: the right to a live hearing, the right to question the opposing party, the right to appeal, and the right to remain silent.
In this chapter, we discuss some of the common psychological or behavioural factors that influence risk analysis and risk management. We give examples of cases where behavioural biases created a risk management failure, and some ways in which the negative impact of biases can be mitigated. Biases are categorized, loosely, as relating to (i) self-deception, (ii) information processing (both forms of cognitive bias), and (iii) social bias, relating to the pressures created by social norms and expectations. We give examples of a range of common behavioural biases in risk management, and we briefly describe some strategies for overcoming the distortions created by behavioural factors in decision-making. Next, we present the foundational concepts of Cumulative Prospect Theory, which provides a mathematical framework for decision-making that reflects some universal cognitive biases.
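The two building blocks of Cumulative Prospect Theory can be sketched briefly. The functional forms and parameter values below are the well-known median estimates from Tversky and Kahneman (1992), used here purely as an illustration rather than as the chapter's own calibration, and the full theory additionally applies the weighting function cumulatively over ranked outcomes:

```python
# Cumulative Prospect Theory building blocks (Tversky & Kahneman, 1992
# functional forms; parameters are their median estimates).

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """S-shaped value function relative to a reference point of 0:
    concave for gains, convex and steeper for losses (lam > 1 encodes
    loss aversion)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: small probabilities are
    overweighted, moderate-to-large ones underweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Loss aversion: a 100-unit loss looms larger than a 100-unit gain.
loss_looms_larger = -value(-100) > value(100)
# A 1% chance is treated as weightier than 1%; a 90% chance as less than 90%.
rare_overweighted = weight(0.01) > 0.01 and weight(0.9) < 0.9
```

Together these two functions reproduce the characteristic pattern of risk aversion over likely gains and risk seeking over likely losses.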
There are systems in place, or should be, for our government officials to make the decisions that affect our health. But as the Flint water crisis and our decision to go to war in Iraq in 2003 demonstrate, the rules for these systems are lacking. Officials and CIA analysts fall prey to their unconscious biases just as readily as anyone. The Flint water crisis is described with all the evidence in place showing that confirmation bias (and racial bias) drove the decision not to stop the public from drinking the water, using snippets from actual emails at the governor’s office as illustration. The same failings led to war in Iraq, though the CIA has since created processes to prevent its analysts from making the same errors again. The chapter ends with a light-hearted example of a decision-making tool the CIA developed in the aftermath of the Iraq invasion: the Analysis of Competing Hypotheses. I use the tool to determine whether my dog made a mess on the floor, a more everyday example of decision-making than airplane crashes, war, or public health (graphic to illustrate). This brings home the point that bias can be avoided; it only takes effort.
This chapter delves into the reasons for attending to the cognitive constraints of the political decision maker, whether average citizen or member of the ruling elite. The main focus of our discussion is the concept of bounded rationality and other cognitive strategies that humans have evolved in order to make good enough political decisions, if not optimal ones. The discussion includes a review of many instances where cognitive shortcuts, or heuristics, influence decisions by reducing the burden associated with making choices in highly complex information environments. The downside, of course, is that these shortcuts can also lead citizens and leaders astray, fomenting biases, even as they help simplify a decision. Understanding how cognitive limitations affect the ability of citizens and elites to make good decisions is the key to solving a large number of puzzles in our politics. The chapter also addresses how, if at all, one could overcome these biases.
In Chapter 8 I look at the strategies that people use to gather and interpret evidence, focusing on the classic confirmation bias. I argue that this ‘bias’ covers various different strategies, some of which are reasonable, whereas others are genuine biases. One danger is when investigators misinterpret evidence and these errors cascade undetected through the evaluation process, distorting the evidence presented to the ultimate decision-makers. I argue that, without meta-level insight into the way evidence is gathered and assessed, we risk distorting the impact of that evidence, sometimes with disastrous consequences.
How do we make sense of complex evidence? What are the cognitive principles that allow detectives to solve crimes, and lay people to puzzle out everyday problems? To address these questions, David Lagnado presents a novel perspective on human reasoning. At heart, we are causal thinkers driven to explain the myriad ways in which people behave and interact. We build mental models of the world, enabling us to infer patterns of cause and effect, linking words to deeds, actions to effects, and crimes to evidence. But building models is not enough; we need to evaluate these models against evidence, and we often struggle with this task. We have a knack for explaining, but less skill at evaluating. Fortunately, we can improve our reasoning by reflecting on inferential practices and using formal tools. This book presents a system of rational inference that helps us evaluate our models and make sounder judgments.
Political actors face a trade-off when they try to influence the beliefs of voters about the effects of policy proposals. They want to sway voters maximally, yet voters may discount predictions that are inconsistent with what they already hold to be true. Should political actors moderate or exaggerate their predictions to maximize persuasion? I extend the Bayesian learning model to account for confirmation bias and show that only under strong confirmation bias are predictions far from the priors of voters self-defeating. I use a preregistered survey experiment to determine whether and how voters discount predictions conditional on the distance between their prior beliefs and the predictions. I find that voters assess predictions far from their prior beliefs as less credible and, consequently, update less. The paper has important implications for strategic communication by showing theoretically and empirically that the prior beliefs of voters constrain political actors.
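The mechanism can be illustrated with a stylized sketch (an illustration only, not the paper's actual model): a voter moves their belief toward a prediction, but discounts the prediction's credibility the further it lies from their prior, so under strong bias an extreme prediction can shift beliefs less than a moderate one.

```python
import math

# Stylized confirmation-biased updating: the weight given to a
# prediction decays with its distance from the voter's prior belief.
# Functional form and parameter are invented for illustration.

def update(prior, prediction, bias=0.0):
    """Shift the prior toward the prediction; with bias > 0,
    credibility (the shift weight) decays with distance."""
    distance = abs(prediction - prior)
    credibility = math.exp(-bias * distance)  # equals 1.0 when bias == 0
    return prior + credibility * (prediction - prior)

# Unbiased voter: adopts the prediction wholesale.
unbiased = update(prior=0.0, prediction=10.0, bias=0.0)
# Biased voter: a moderate prediction (4.0) ends up moving the belief
# further than an extreme one (10.0), making exaggeration self-defeating.
moderate = update(prior=0.0, prediction=4.0, bias=0.5)
extreme = update(prior=0.0, prediction=10.0, bias=0.5)
```

This reproduces the paper's qualitative claim: only when the discounting is steep enough does exaggeration beyond the voters' priors become counterproductive.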
Applying the BIC in practice is far from straightforward and fraught with difficulties because it requires the regularization of space-time infinities by implementing some cosmic “measure.” Furthermore, a suitable physical quantity must be chosen as proxy for the number of reference class observers in some given space-time region. Unfortunately, the choices made in this procedure are prone to being exploited – often unintentionally – by the researchers as so-called researcher degrees of freedom (a term from the social science literature) to yield those results that would best conform to their theoretical preferences. In the light of this difficulty, the prospects for obtaining compelling evidence in favor of any specific multiverse theory by testing whether our observations are those that typical multiverse inhabitants would make do look bad. As it turns out, the multiverse theories that have the best chances of being successfully tested empirically are those that do not behave as typical multiverse theories in important respects – i.e., those multiverse theories according to which all universes in the multiverse are similar or identical in a significant number of ways.
The second trap is the consequence of the strategic dilemma. Some negotiation behaviors come more naturally to us than others. If the task seems more coherent than it is, we might not notice the dilemma. This can lead to the illusion of competence. When we do not realize the full picture of the task and the dilemma, we do not develop the full skill set needed to address them. This is especially true in relation to cooperation, an innate ability, but one at which humans have to persevere in order to become highly skilled.
Our modern observation-based approaches to the study of the human condition were shaped by the Scottish Enlightenment. Political Economy emerged as a discipline of its own in the nineteenth century, then fragmented further around the dawn of the twentieth century. Today, we see Political Economy’s pieces being reassembled and reunited with their philosophical roots. This issue pauses to reflect on the history of this new but also old field of study.
How we think we read stories or real-life situations, and how we actually read them are often very different. This chapter explores what the differences are, and how they can get in the way of effectively interpreting case stories. You will see how applying a systematic approach to reading case stories helps you become more self-aware and skilful in your interpretive practices. Following a systematic approach will enable you to separate observations from interpretations or evaluations and make you less likely to jump to conclusions. The approach presented in this chapter is the ‘SNAAPI’ steps, a simple five-step inductive reasoning–based process that will help you make sense of both the case stories in this book and the real-life situations you will encounter in schools. The chapter will also introduce three variants of the SNAAPI steps that you can use when you want to be more specialised in your engagement with a case story. All the interpretive approaches can be undertaken individually, but you will gain most benefit from discussing your thinking with others at all stages of the process.
This paper focuses on the effects of entrepreneurial overconfidence at new venture creation. By analyzing Global Entrepreneurship Monitor data and using the theory of planned behavior as a framework, the study provides new evidence on the relative or absolute nature of overconfidence in entrepreneurial skills and the effect of overprecision on new venture creation. Overprecision of supporting beliefs is newly linked to venture creation, and it is shown that nascent entrepreneurs’ overconfidence is based on a self-focusing attitude. The results confirm that overconfidence is not a single construct and highlight the differences between the forms of overconfidence habitually confused in the entrepreneurship literature.
Real-world policymakers face pressure to take action, to legislate, and to attempt to solve problems even in imperfect ways. What kind of paternalistic policies can we reasonably expect policymakers to create? We argue that public-choice pressures will tend to produce suboptimal paternalistic policies, even if we assume behavioral paternalists’ conclusions about human behavior are generally correct. Rational ignorance, bureaucratic self-interest, concentrated benefits and diffuse costs, the influence of rent-seekers and moralists, and other factors will tend to shape policy in undesirable ways. If policymakers are susceptible to biases such as those attributed to regular people, the results could be even worse. Biases with the potential to adversely affect policymaking include action bias, overconfidence, confirmation bias, availability and salience effects, affect and prototype heuristics, and present bias. Because the political sphere offers weak incentives for the self-correction of biases, we expect such biases to be more significant in the public than in the private sphere.
This final chapter summarises the arguments and evidence presented in the previous nine chapters; it is thus, in the main, a collection of short summaries of those chapters. The chapter finishes by contending that the notion that we ought to, and often do, reciprocate is one that most people can accept. It is acknowledged once again that there are many possible dark sides to reciprocity, but that it more often than not serves the better angels of our nature, and in the process generates significant group and, by extension, individual benefits. Consequently, it is advised that policies, institutions, organisations and sectors should be designed to encourage and sustain this most fundamental motivator of human behaviour.