The Vietnam draft lottery exposed millions of men to risk of induction at a time when the Vietnam War was becoming increasingly unpopular. We study the long-term effects of draft risk on political attitudes and behaviors of men who were eligible for the draft in 1969–1971. Our 2014–2016 surveys of men who were eligible for the Vietnam draft lotteries reveal no appreciable effect of draft risk across a wide range of political attitudes. These findings are bolstered by analysis of a vast voter registration database, which shows no differences in voting rates or tendency to register with the Democratic or Republican parties. The pattern of weak long-term effects is in line with studies showing that the long-term economic effects of Vietnam draft risk dissipated over time and offers a counterweight to influential observational studies that report long-term persistence in the effects of early experiences on political attitudes.
This paper evaluates the state of contact hypothesis research from a policy perspective. Building on Pettigrew and Tropp's (2006) influential meta-analysis, we assemble all intergroup contact studies that feature random assignment and delayed outcome measures, of which there are 27 in total, nearly two-thirds of which were published following the original review. We find the evidence from this updated dataset to be consistent with Pettigrew and Tropp's (2006) conclusion that contact “typically reduces prejudice.” At the same time, our meta-analysis suggests that contact's effects vary, with interventions directed at ethnic or racial prejudice generating substantially weaker effects. Moreover, our inventory of relevant studies reveals important gaps, most notably the absence of studies addressing adults' racial or ethnic prejudices, an important limitation for both theory and policy. We also call attention to the lack of research that systematically investigates the scope conditions suggested by Allport (1954) under which contact is most influential. We conclude that these gaps in contact research must be addressed empirically before this hypothesis can reliably guide policy.
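The pooling step behind a meta-analysis of this kind can be sketched in a few lines. Below is a minimal random-effects (DerSimonian-Laird) routine applied to hypothetical study-level effect sizes and variances; the numbers are illustrative only and are not data from the paper.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling via the DerSimonian-Laird estimator."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q measures between-study heterogeneity
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # Reweight each study by total (within + between) variance
    w_star = [1.0 / (v + tau2) for v in variances]
    est = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return est, se, tau2

# Hypothetical standardized prejudice effects from five contact studies
effects = [-0.30, -0.25, -0.10, -0.05, 0.02]
variances = [0.02, 0.03, 0.01, 0.04, 0.02]
est, se, tau2 = dersimonian_laird(effects, variances)
```

When estimated heterogeneity (`tau2`) is zero, the random-effects estimate collapses to the fixed-effect inverse-variance average; positive `tau2` widens study weights and the pooled standard error.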
This study evaluates the turnout effects of direct mail sent in advance of the 2014 New Hampshire Senate election. Registered Republican women were sent up to 10 mailings from a conservative advocacy group that encouraged participation in the upcoming election. We find that mail raises turnout, but no gains are achieved beyond five mailers. This finding is shown to be consistent with other experiments that have sent large quantities of mail. We interpret these results in light of marketing research on repetitive messaging.
Missing outcome data plague many randomized experiments. Common solutions rely on ignorability assumptions that may not be credible in all applications. We propose a method for confronting missing outcome data that makes fairly weak assumptions but can still yield informative bounds on the average treatment effect. Our approach is based on a combination of the double sampling design and nonparametric worst-case bounds. We derive a worst-case bounds estimator under double sampling and provide analytic expressions for variance estimators and confidence intervals. We also propose a method for covariate adjustment using poststratification and a sensitivity analysis for nonignorable missingness. Finally, we illustrate the utility of our approach using Monte Carlo simulations and a placebo-controlled randomized field experiment on the effects of persuasion on social attitudes with survey-based outcome measures.
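The worst-case bounding logic can be illustrated with a minimal sketch. The function below computes Manski-style bounds on each group mean for an outcome bounded in [0, 1], using hypothetical summary statistics; the paper's double-sampling refinement (which narrows these bounds by following up a random subsample of nonrespondents), its variance estimators, and its covariate adjustment are not reproduced here.

```python
def worst_case_bounds(y_obs_mean, response_rate, y_min=0.0, y_max=1.0):
    """Bound a group mean by filling in missing outcomes with the
    logical extremes of the outcome's support."""
    lo = response_rate * y_obs_mean + (1 - response_rate) * y_min
    hi = response_rate * y_obs_mean + (1 - response_rate) * y_max
    return lo, hi

# Hypothetical summaries: treated respondents average 0.6 with an 80%
# response rate; control respondents average 0.5 with a 70% rate
t_lo, t_hi = worst_case_bounds(0.6, 0.8)
c_lo, c_hi = worst_case_bounds(0.5, 0.7)
# ATE bounds pair the extremes of the two group means
ate_lo, ate_hi = t_lo - c_hi, t_hi - c_lo
```

With these inputs the average treatment effect is bounded in roughly [-0.17, 0.33]; an interval that wide is exactly what motivates double sampling, which shrinks the missing fraction and hence the bounds.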
Regression discontinuity (RD) designs enable researchers to estimate causal effects using observational data. These causal effects are identified at the point of discontinuity that distinguishes those observations that do or do not receive the treatment. One challenge in applying RD in practice is that data may be sparse in the immediate vicinity of the discontinuity. Expanding the analysis to observations outside this immediate vicinity may improve the statistical precision with which treatment effects are estimated, but including more distant observations also increases the risk of bias. Model specification is another source of uncertainty; as the bandwidth around the cutoff point expands, linear approximations may break down, requiring more flexible functional forms. Using data from a large randomized experiment conducted by Gerber, Green, and Larimer (2008), this study attempts to recover an experimental benchmark using RD and assesses the uncertainty introduced by various aspects of model and bandwidth selection. More generally, we demonstrate how experimental benchmarks can be used to gauge and improve the reliability of RD analyses.
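The basic local linear estimator that bandwidth selection feeds into can be sketched as follows. This is a bare-bones version on noiseless synthetic data with a known jump, not the paper's analysis; the bandwidth and data are illustrative.

```python
def ols_line(xs, ys):
    """Least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def rd_estimate(x, y, cutoff, bandwidth):
    """Local linear RD: fit each side of the cutoff within the
    bandwidth, then take the gap between the two fits at the cutoff."""
    left = [(xi, yi) for xi, yi in zip(x, y)
            if cutoff - bandwidth <= xi < cutoff]
    right = [(xi, yi) for xi, yi in zip(x, y)
             if cutoff <= xi <= cutoff + bandwidth]
    a_l, b_l = ols_line([p[0] for p in left], [p[1] for p in left])
    a_r, b_r = ols_line([p[0] for p in right], [p[1] for p in right])
    return (a_r + b_r * cutoff) - (a_l + b_l * cutoff)

# Synthetic running variable with a known discontinuity of 2 at x = 0
xs = [i / 10 for i in range(-10, 11)]
ys = [1 + 0.5 * xi + (2 if xi >= 0 else 0) for xi in xs]
tau = rd_estimate(xs, ys, cutoff=0.0, bandwidth=1.0)
```

The bandwidth trade-off the abstract describes lives in the `bandwidth` argument: widening it adds observations (less variance) but makes the linear approximation on each side less trustworthy (more bias).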
Randomized experiments commonly compare subjects receiving a treatment to subjects receiving a placebo. An alternative design, frequently used in field experimentation, compares subjects assigned to an untreated baseline group to subjects assigned to a treatment group, adjusting statistically for the fact that some members of the treatment group may fail to receive the treatment. This article shows the potential advantages of a three-group design (baseline, placebo, and treatment). We present a maximum likelihood estimator of the treatment effect for this three-group design and illustrate its use with a field experiment that gauges the effect of prerecorded phone calls on voter turnout. The three-group design offers efficiency advantages over two-group designs while at the same time guarding against unanticipated placebo effects (which would undermine the placebo-treatment comparison) and unexpectedly low rates of compliance with the treatment assignment (which would undermine the baseline-treatment comparison).
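The two pairwise comparisons that the three-group design combines can be sketched with simplified method-of-moments estimators; the paper's maximum likelihood estimator pools these comparisons efficiently and is not reproduced here. All rates below are hypothetical.

```python
def cace_baseline(itt, contact_rate):
    """Baseline vs. treatment: scale the intent-to-treat effect by the
    rate of successful contact (the classic Wald/IV adjustment)."""
    return itt / contact_rate

def cace_placebo(y_treated_contacted, y_placebo_contacted):
    """Placebo vs. treatment: compare contacted subjects directly,
    since contact is realized symmetrically in both groups."""
    return y_treated_contacted - y_placebo_contacted

# Hypothetical turnout: 32% in the treatment group vs. 30% at baseline,
# with half of the treatment group successfully contacted
est_baseline = cace_baseline(itt=0.32 - 0.30, contact_rate=0.5)
# Among contacted subjects: 34% turnout under treatment, 30% under placebo
est_placebo = cace_placebo(0.34, 0.30)
```

In this stylized example both comparisons recover the same 4-point effect among contacted subjects; the design's insurance value comes from having both available when one assumption (no placebo effect, or adequate compliance) fails.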
In the social sciences, randomized experimentation is the optimal research design for establishing causation. However, for a number of practical reasons, researchers are sometimes unable to conduct experiments and must rely on observational data. In an effort to develop estimators that can approximate experimental results using observational data, scholars have given increasing attention to matching. In this article, we test the performance of matching by gauging the success with which matching approximates experimental results. The voter mobilization experiment presented here comprises a large number of observations (60,000 randomly assigned to the treatment group and nearly two million assigned to the control group) and a rich set of covariates. This study is analyzed in two ways. The first method, instrumental variables estimation, takes advantage of random assignment in order to produce consistent estimates. The second method, matching estimation, ignores random assignment and analyzes the data as though they were nonexperimental. Matching is found to produce biased results in this application because even a rich set of covariates is insufficient to control for preexisting differences between the treatment and control group. Matching, in fact, produces estimates that are no more accurate than those generated by ordinary least squares regression. The experimental findings show that brief paid get-out-the-vote phone calls do not increase turnout, while matching and regression show a large and significant effect.
The debate about the cost-effectiveness of randomized field experimentation ignores one of the most important potential uses of experimental data. This article defines and illustrates “downstream” experimental analysis—that is, analysis of the indirect effects of experimental interventions. We argue that downstream analysis may be as valuable as conventional analysis, perhaps even more so in the case of laboratory experimentation.
If the publication decisions of journals are a function of the statistical significance of research findings, the published literature may suffer from “publication bias.” This paper describes a method for detecting publication bias. We point out that to achieve statistical significance, the effect size must be larger in small samples. If publications tend to be biased against statistically insignificant results, we should observe that the effect size diminishes as sample sizes increase. This proposition is tested and confirmed using the experimental literature on voter mobilization.
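The diagnostic described above amounts to a meta-regression of reported effect sizes on a decreasing function of sample size. A minimal sketch on hypothetical study-level data:

```python
import math

def slope(xs, ys):
    """Least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Hypothetical studies: (sample size, estimated effect in pct points)
studies = [(200, 8.0), (500, 6.5), (1000, 4.0), (5000, 2.5), (20000, 1.5)]
# Regress effect size on 1/sqrt(n), which is proportional to the
# standard error and hence to the significance threshold
b = slope([1 / math.sqrt(n) for n, _ in studies],
          [e for _, e in studies])
```

A positive slope (larger effects in smaller samples, as in this toy data) is the signature of publication bias; in an unbiased literature, effect size and sample size should be unrelated.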
Party identification has been studied extensively using both individual- and aggregate-level data. This paper attempts to formulate a statistical model that can account for the range of empirical generalizations that have emerged from aggregate time series and panel surveys. Using Monte Carlo simulation, we show that only certain types of data generation processes can account for these empirical regularities. Deciding which of the remaining types best explains the data means investigating the ways in which individual-level partisanship behaves over time. Partisanship at the aggregate-level tends to be highly autocorrelated, reequilibrating slowly in the wake of each perturbation. Working downward from the analysis of aggregate data, previous researchers argued that aggregate partisanship is fractionally integrated and contended that dynamics at the individual level are therefore heterogeneous. Using data from three panel surveys, we present the first direct assessment of individual-level dynamics. We also investigate the hypothesis that these dynamics vary among individuals, a claim that motivates much recent work on fractionally integrated time series. The model that best explains the observed characteristics of party identification is one in which individuals respond in similar ways to external shocks, reequilibrate rapidly thereafter, and seldom change their equilibrium level of partisan attachment.
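The individual-level process the model favors, similar responses to shocks followed by rapid reequilibration around a stable attachment level, can be simulated as a simple AR(1). The parameters below are illustrative, not estimates from the paper.

```python
import random

def simulate_partisanship(mu, rho, sigma, periods, seed=0):
    """AR(1) around a fixed equilibrium mu:
    x_t = mu + rho * (x_{t-1} - mu) + e_t, with e_t ~ N(0, sigma)."""
    rng = random.Random(seed)
    x = mu
    path = []
    for _ in range(periods):
        x = mu + rho * (x - mu) + rng.gauss(0.0, sigma)
        path.append(x)
    return path

# Rapid reequilibration: with rho = 0.3, a shock decays to under 3% of
# its initial size within three periods (0.3 ** 3 = 0.027)
path = simulate_partisanship(mu=0.0, rho=0.3, sigma=1.0, periods=500)
```

Aggregating many such series dampens the idiosyncratic shocks, which is why aggregate partisanship can look highly persistent even when each individual reequilibrates quickly.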
Field experiments on voter mobilization enable researchers to test theoretical propositions while at the same time addressing practical questions that confront campaigns. This confluence of interests has led to increasing collaboration between researchers and campaign organizations, which in turn has produced a rapid accumulation of experiments on voting. This new evidence base makes possible translational works such as Get Out the Vote: How to Increase Voter Turnout that synthesize the burgeoning research literature and convey its conclusions to campaign practitioners. However, as political groups develop their own in-house capacity to conduct experiments whose results remain proprietary and may be reported selectively, the accumulation of an unbiased, public knowledge base is threatened. We discuss these challenges and the ways in which research that focuses on practical concerns may nonetheless speak to enduring theoretical questions.
Across the social sciences, growing concerns about research transparency have led to calls for pre-analysis plans (PAPs) that specify in advance how researchers intend to analyze the data they are about to gather. PAPs promote transparency and credibility by helping readers distinguish between exploratory and confirmatory analyses. However, PAPs are time-consuming to write and may fail to anticipate contingencies that arise in the course of data collection. This article proposes the use of “standard operating procedures” (SOPs)—default practices to guide decisions when issues arise that were not anticipated in the PAP. We offer an example of an SOP that can be adapted by other researchers seeking a safety net to support their PAPs.
We report the results of a field experiment conducted in New York City during the 2013 election cycle, examining the impact of nonpartisan messages on donations from small contributors. Using information from voter registration and campaign finance records, we built a forecasting model to identify voters with an above-average probability of donating. A random sample of these voters received one of four messages asking them to donate to a candidate of their choice. Half of these treatments reminded voters that New York City's campaign finance program matches small donations with public funds. Candidates’ financial disclosures to the city's Campaign Finance Board reveal that only the message mentioning policy (in generic terms) increased donations. Surprisingly, reminding voters that matching funds multiplied the value of their contribution had no effect. Our experiment sheds light on the motivations of donors and represents the first attempt to assess nonpartisan appeals to contribute.
A small but growing social science literature examines the correspondence between experimental results obtained in lab and field settings. This article reviews this literature and reanalyzes a set of recent experiments carried out in parallel in both the lab and field. Using a standardized format that calls attention to both the experimental estimates and the statistical uncertainty surrounding them, the study analyzes the overall pattern of lab-field correspondence, which is found to be quite strong (Spearman's ρ = 0.73). Recognizing that this correlation may be distorted by the ad hoc manner in which lab-field comparisons are constructed (as well as the selective manner in which results are reported and published), the article concludes by suggesting directions for future research, stressing in particular the need for more systematic investigation of treatment effect heterogeneity.
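The correspondence statistic reported above is straightforward to compute from ranks. A minimal sketch on hypothetical lab and field estimates (six made-up interventions, no ties):

```python
def ranks(xs):
    """Assign ranks 1..n (assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman(xs, ys):
    """Spearman's rho via the rank-difference formula (no ties):
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical effect estimates for six parallel interventions
lab = [0.10, 0.25, 0.05, 0.40, 0.15, 0.30]
field = [0.08, 0.20, 0.02, 0.30, 0.25, 0.22]
rho = spearman(lab, field)
```

Because the statistic depends only on rank order, it captures whether lab results correctly sort interventions by effectiveness even when the magnitudes of lab and field estimates differ.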