Under what conditions are people more likely to support judicial invalidation of legislative acts? We theorize that constitutional recency confers greater democratic legitimacy on constitutional provisions, reducing concerns that judges may use dated language to impose their own will on a living majority. Exploiting differences among US state constitutions, we show in a pre-registered vignette experiment and conjoint analysis that Americans are more supportive of judicial review and original intent interpretation when presented with a younger constitutional provision or constitution. These results imply that Americans might alter their approach to the US Constitution if it were changed as easily and as often as a typical state constitution.
Threat perception provokes a range of behaviour, from cooperation to conflict. Correctly interpreting others’ behaviour, and responding optimally, is thought to be aided by ‘stepping into their shoes’ (i.e. mentalising) to understand the threats they have perceived. But IR scholarship on the effects of attempting this exercise has yielded mixed findings. One missing component in this research is a clear understanding of the link between effort and accuracy. I use a US-based survey experiment (study N = 839; pilot N = 297) and a novel analytic approach to study mentalising accuracy in the domain of threat perception. I find that accurately estimating why someone feels threatened by either climate change or illegal immigration is conditional on sharing a belief in the issue’s overall dangerousness. Similar beliefs about dangerousness are not proxies for shared political identities, and accuracy for those with dissimilar beliefs does not exceed chance. Focusing first on the emotional states of those who felt threatened did not significantly improve accuracy. These findings suggest that: (1) effort does not guarantee accuracy in estimating the threats others see; (2) emotion understanding may not be a solution to threat mis-estimation; and (3) misperception can arise from basic task difficulty, even without information constraints or deception.
It is no longer news to argue that economics is performative: it does not only describe markets but takes part in producing or manufacturing them. Once accepted as close to a matter of fact, however, the performativity argument risks becoming too much of a general statement. So, what’s next for performativity? This chapter turns performativity into an empirical research agenda, moving from demonstrating that performativity exists to putting the argument to the test and investigating sites and modes of performativity. We need to recognise the performativity of research as constitutive, that is, how knowledge production has an effect on the world, while simultaneously being aware that this is not by itself enough to effectively shape actual markets. To find empirical and analytical ways to observe performativity in action, we return to one of the original sites from which the performativity-of-economics argument was developed: economic and marketing experiments. We find that much more is made in these settings than actual markets, for instance economics as a discipline and thus knowledge of markets, and also much less, since the markets produced in these settings do not always move beyond them.
This article investigates the effect of priming the existence of corrupt connections to the bureaucracy and of trusted references on the demand for intermediary services. We performed an experimental survey with undergraduate students in Caracas, Venezuela. Participants were presented with a hypothetical situation in which they need to obtain the apostille of their professional degrees in order to migrate and are considering whether to hire an intermediary (“gestor”). The survey randomly reveals the existence of an illicit connection between the gestor and the bureaucracy and whether a trusted individual referred the intermediary. Our findings are not consistent with the “market maker” hypothesis that revealing the existence of illicit connections increases demand. Consistent with the view that trust is a key element in inherently opaque transactions, we find that the demand for intermediaries is price inelastic when gestores are referred by trusted individuals.
The human sciences should seek generalisations wherever possible. For ethical and scientific reasons, it is desirable to sample more broadly than ‘Western, educated, industrialised, rich, and democratic’ (WEIRD) societies. However, restricting the target population is sometimes necessary; for example, young children should not be recruited for studies on elderly care. Under which conditions is unrestricted sampling desirable or undesirable? Here, we use causal diagrams to clarify the structural features of measurement error bias and target population restriction bias (or ‘selection restriction’), focusing on threats to valid causal inference that arise in comparative cultural research. We define any study exhibiting such biases, or confounding biases, as weird (wrongly estimated inferences owing to inappropriate restriction and distortion). We explain why statistical tests such as configural, metric and scalar invariance cannot address the structural biases of weird studies. Overall, we examine how the workflows for causal inference provide the necessary preflight checklists for ambitious, effective and safe comparative cultural research.
Confounding bias arises when a treatment and outcome share a common cause. In randomised controlled experiments (trials), treatment assignment is random, ostensibly eliminating confounding bias. Here, we use causal directed acyclic graphs to unveil eight structural sources of bias that nevertheless persist in these trials. This analysis highlights the crucial role of causal inference methods in the design and analysis of experiments, ensuring the validity of conclusions drawn from experimental data.
What are the consequences of including a “don't know” (DK) response option to attitudinal survey questions? Existing research, based on traditional survey modes, argues that it reduces the effective sample size without improving the quality of responses. We contend that it can have important effects not only on estimates of aggregate public opinion, but also on estimates of opinion differences between subgroups of the population who have different levels of political information. Through a pre-registered online survey experiment conducted in the United States, we find that the DK response option has consequences for opinion estimates in the present day, where most organizations rely on online panels, but mainly for respondents with low levels of political information and on low salience issues. These findings imply that the exclusion of a DK option can matter, with implications for assessments of preference differences and our understanding of their impacts on politics and policy.
This chapter shows how human obedience is captured in an experimental setup, and how such research methodology can help us understand, at the neurological level, how people can comply with orders to hurt another person. By reviewing past experimental research, such as the rat decapitation study of Landis, the studies of Stanley Milgram on destructive obedience, and the Utrecht studies on obedience to non-ethical requests, this chapter shows that under certain circumstances, a majority of individuals could be coerced into inflicting physical or psychological harm on others at levels generally deemed unacceptable, even without any tangible social pressures such as the threat of a military court or job loss. The chapter also describes a novel method in which people can administer real painful electric shocks to someone else in exchange for a small monetary reward, and explains how such a method allows neuroscience investigations focusing on the neural mechanisms associated with obedience.
Over the years, the Serengeti has been a model ecosystem for answering basic ecological questions about the distribution and abundance of organisms, populations, and species, and about how different species interact with each other and with their environment. Tony Sinclair and many other researchers have addressed some of these questions, and continue to work on understanding important biotic and abiotic linkages that influence ecosystem functioning. In common with all types of scientific inquiry, ecologists use predictions to test hypotheses about ecological processes; this approach is highlighted by Sinclair’s research that explored why buffalo and wildebeest populations were rapidly expanding. Like other scientists, ecologists use observation, modeling, and experimentation to generate and test hypotheses. However, in contrast with much biological inquiry, ecologists ask questions that link numerous levels of the biological hierarchy, from molecular to global ecology.
In working with network data, data acquisition is often the most basic yet the most important and challenging step. The availability of data and norms around data vary drastically across different areas and types of research. A team of biologists may spend more than a decade running assays to gather a cell’s interactome; another team of biologists may only analyze publicly available data. A social scientist may spend years conducting surveys of underrepresented groups. A computational social scientist may examine the entire network of Facebook. An economist may comb through large financial documents to gather tables of data on stakes in corporate holdings. In this chapter, we move one step along the network study life-cycle. Key to data gathering are good record-keeping and data provenance. Good data gathering sets us up for future success; otherwise it is garbage in, garbage out, making it critical to ensure that the best-quality and most appropriate data are used to power your investigation.
The image of Darwin as a lone thinker, a theoretician who worked largely in isolation rather than a hands-on scientist, has no single origin but is stubbornly persistent. Modern accounts that do feature him as a practical researcher tend to emphasize the domestic setting of his work, focusing on experiments that can be replicated in a modern house, garden, or school. But contemporary evidence, in particular from Darwin’s extensive correspondence, demonstrates that he was an ingenious and innovative experimenter, keenly aware of advances in science, and often at the cutting edge both in the nature of his investigations and in the technologies he employed. Far from working alone on gathering facts and grinding out his theories, Darwin was expert at cultivating and exploiting contacts. He actively sought collaboration with all sorts of people around the world, both asking for their help and encouraging their own investigations. Although he rarely travelled after settling in the village of Down in Kent as a young married man, Darwin’s version of ‘working from home’ was far from solitary: he was surrounded not only by a large and happy family but by governesses, gardeners, friends, neighbors, and visitors, who acted as critics, assistants, editors, and even as research subjects.
Featuring an explanation of Enlightenment thought on agronomy and political economy, Chapter 3 examines the efforts to make Mauritius a self-sufficient island through the importation and naturalisation of plants (primarily foodstuffs and fodder, along with some industrial materials and a few ornamental plants). It explores how newly introduced and ‘old’ plants were cultivated and how local knowledge, which was gathered together with the plants in their countries of origin, was implemented. It highlights the practical significance of knowledge about plants and their cultivation held by Malagasy and other non-European communities across Asia. It seeks to understand how settlers sought to cultivate foreign, newly introduced staple crops, such as rice, root vegetables, and fruit trees. Stressing the importance of non-European knowledge, the chapter looks at the interplay between the practical implementation of this knowledge and environmental factors. The chapter reveals that cultivation techniques were difficult to implement and often led to a crop’s failure.
Why might female leaders of democratic countries commit more money, equipment, soldiers, and other resources to interstate conflicts than male leaders? We argue that gender bias in the process of democratic election helps explain this behavior. Since running for office is generally more costly for women than for men, only women who place a higher value on winning competitions will choose to run. After election, they also devote more resources to pursuing victory in conflict situations. To provide microfoundational evidence for this claim, we analyze data from an online laboratory game featuring real-time group play in which participants chose to run for election, conducted a simple campaign, and represented their group in a contest game if elected. Women with a higher nonmonetary value to winning were more likely to self-select into candidacy, and when victorious, they spent more resources on intergroup contests than male elected leaders. The data suggest that electoral selection plays an important role in observed differences between male and female leaders in the real world.
Public decision-makers incorporate algorithm decision aids, often developed by private businesses, into the policy process, in part, as a method for justifying difficult decisions. Ethicists have worried that over-trust in algorithm advice and concerns about punishment if departing from an algorithm’s recommendation will result in over-reliance and harm democratic accountability. We test these concerns in a set of two pre-registered survey experiments in the judicial context conducted on three representative U.S. samples. The results show no support for the hypothesized blame dynamics, regardless of whether the judge agrees or disagrees with the algorithm. Algorithms, moreover, do not have a significant impact relative to other sources of advice. Respondents who are generally more trusting of elites assign greater blame to the decision-maker when they disagree with the algorithm, and they assign more blame when they think the decision-maker is abdicating their responsibility by agreeing with an algorithm.
Survey experiments often yield intention-to-treat effects that are statistically and/or practically “non-significant.” There has been a commendable shift toward publishing such results, whether to avoid the “file drawer problem” or to encourage studies that conclude in favor of the null hypothesis. But how can researchers more confidently adjudicate between true and erroneous nonsignificant results? Guidance on this critically important question has yet to be synthesized into a single, comprehensive text. The present essay therefore highlights seven “alternative explanations” that can lead to (erroneous) nonsignificant findings. It details how researchers can more rigorously anticipate and investigate these alternative explanations in the design and analysis stages of their studies, and also offers recommendations for subsequent studies. Researchers are thus provided with a set of strategies for better designing their experiments, and more thoroughly investigating their survey-experimental data, before concluding that a given result is indicative of “no significant effect.”
This chapter follows the definition of ‘empirical legal studies’ as research which applies quantitative methods to questions about the relationship between law and society, in particular with the aim of drawing conclusions about causal connections between variables. Comparative law does not typically frame its research questions in terms of causal inference. Yet, implicitly, it is very much interested in such topics, for example when it explores the determinants of legal differences between countries or when it evaluates how far one legal solution may be said to be preferable. It is thus valuable that significant progress has been made in empirical approaches to comparative law that may be able to establish robust causal links between law and society. This chapter outlines the main types of such studies: experiments, cross-sectional studies, panel data analysis and quasi-experiments. However, it also shows that such studies face a number of methodological problems. The chapter concludes that it may often be most promising to combine different methods in order to reach a valid empirical result.
Across the globe, women are underrepresented in elected politics. The study's case countries of Australia (ranked 33), Canada (61) and the United States (66) rank poorly for women's political representation. Drawing on role strain and gender-mainstreaming theories and applying large-scale survey experiments, we examine public opinion on non-quota mechanisms to bolster women's political participation. The experimental design manipulates the politician's gender and level of government (federal/local) before asking about non-quota supports to help the politician. We find public support for policies aimed at lessening work–family role strain is higher for a woman politician; these include a pay raise, childcare subsidies and housework allowances. This support is amplified among women who are presented with a woman politician in our experiment, providing evidence of a gender-affinity effect. The study's findings contribute to scholarship on gender equality and point to gender-mainstreaming mechanisms to help mitigate the gender gap in politics.
I create and test three interventions meant to dampen negative partisanship without diminishing positive partisanship, drawing on cross-cutting identities, unifying identities, and party leaders.
At a time in U.S. politics when advocacy groups are increasingly relying on supporters to help advance their agendas, Chapter 6 considers how intersectional advocates mobilize their supporters. While membership in women’s advocacy organizations has decreased over the years, supporters who volunteer their time to advocacy organizations to advance their policy goals have been largely overlooked. Yet these supporters are important contributors to intersectional advocacy. Chapter 6 presents two original survey experiments with supporters of an organization that engages in intersectional advocacy. Each experiment contains authentic policy platforms that present to supporters either an intersectional advocacy approach or a traditional single-issue policy approach. The findings from these experiments answer the final question: does intersectional advocacy resonate with the intersectionally marginalized populations it aims to serve, and if so, to what extent does it mobilize them to participate in the policymaking process? This chapter highlights the role of supporters in advancing these policy efforts while showcasing tangible
Chapter 5 moves from theory into evidence and discusses how empirical economists think about causality. First, the chapter covers common issues that make it difficult to have confidence in causal claims based on associational evidence alone. Then, experimental evidence is discussed: how to run an experiment, common pitfalls that can undermine confidence in experimental evidence, and what can be done to avoid them. Next, major experimental studies on the impact of health insurance are described. Finally, the chapter discusses the concept of quasi-experimental evidence and how it fits into economics. The end of chapter supplement discusses ethics in research with human subjects and the role of institutional review boards.