Existing approaches to estimating ideal points offer no method for consistent estimation or inference without relying on strong parametric assumptions. In this paper, I introduce a nonparametric approach to ideal-point estimation and inference that goes beyond these limitations. I show that some inferences about the relative positions of two pairs of legislators can be made with minimal assumptions. This information can be combined across different possible choices of the pairs to provide estimates and perform hypothesis tests for all legislators without additional assumptions. I demonstrate the usefulness of these methods in two applications to Supreme Court data, one testing for ideological movement by a single justice and the other testing for multidimensional voting behavior in different decades.
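As a rough illustration of the intuition only (not the paper's nonparametric estimator), the sketch below compares the vote agreement rate of one pair of legislators with that of another pair and bootstraps over votes to gauge uncertainty. The vote matrix, the pair indices, and the agreement statistic are all assumptions made for the example.

```python
# Illustrative only: compare how often two pairs of voters agree, with a
# bootstrap over votes. Names and the agreement statistic are assumptions,
# not the paper's nonparametric ideal-point estimator.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vote matrix: rows = legislators, columns = votes (1 = yea, 0 = nay)
votes = rng.integers(0, 2, size=(4, 200))
A, B, C, D = 0, 1, 2, 3  # indices of the two pairs being compared

def agreement(v, i, j):
    """Share of votes on which legislators i and j cast the same vote."""
    return np.mean(v[i] == v[j])

obs_diff = agreement(votes, A, B) - agreement(votes, C, D)

# Bootstrap over votes to gauge uncertainty in the difference of agreement rates
diffs = []
for _ in range(2000):
    cols = rng.integers(0, votes.shape[1], size=votes.shape[1])
    boot = votes[:, cols]
    diffs.append(agreement(boot, A, B) - agreement(boot, C, D))

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"difference in agreement: {obs_diff:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```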
Computers hold the potential to draw legislative districts in a neutral way. Existing approaches to automated redistricting may introduce bias and encounter difficulties when drawing districts of large and even medium-sized jurisdictions. We present a new algorithm that neutrally generates legislative districts that are contiguous, balanced, and relatively compact, without indications of bias. The algorithm avoids the kinds of bias found in prior algorithms and advances on previously published redistricting algorithms because it is computationally more efficient. We use the new algorithm to draw 10,000 maps of congressional districts in Mississippi, Virginia, and Texas. We find it unlikely that the number of majority-minority districts observed in these states' actual congressional maps would arise from a neutral redistricting process.
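For intuition only, here is a toy sketch of neutrally partitioning a grid of units into contiguous, equally sized districts by random region growing. The grid size, district count, and growth rule are assumptions made for the example; this is not the algorithm presented in the paper.

```python
# Toy illustration: randomly grow N_DISTRICTS contiguous, equally sized
# districts on a small grid, restarting if the growth gets stuck.
import random

ROWS, COLS, N_DISTRICTS = 6, 6, 4
TARGET = (ROWS * COLS) // N_DISTRICTS

def neighbors(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < ROWS and 0 <= c + dc < COLS:
            yield (r + dr, c + dc)

def draw_plan(seed=None):
    rng = random.Random(seed)
    while True:  # restart if the random growth paints itself into a corner
        seeds = rng.sample([(r, c) for r in range(ROWS) for c in range(COLS)], N_DISTRICTS)
        assignment = {cell: d for d, cell in enumerate(seeds)}
        districts = {d: {cell} for d, cell in enumerate(seeds)}
        progress = True
        while progress and len(assignment) < ROWS * COLS:
            progress = False
            for d in rng.sample(range(N_DISTRICTS), N_DISTRICTS):
                if len(districts[d]) >= TARGET:
                    continue
                frontier = [n for cell in districts[d] for n in neighbors(cell)
                            if n not in assignment]
                if frontier:
                    pick = rng.choice(frontier)
                    assignment[pick] = d
                    districts[d].add(pick)
                    progress = True
        if len(assignment) == ROWS * COLS:
            return assignment

plan = draw_plan(seed=1)
for r in range(ROWS):
    print(" ".join(str(plan[(r, c)]) for c in range(COLS)))
```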
Despite the popularity of unsupervised techniques for political science text-as-data research, the importance and implications of preprocessing decisions in this domain have received scant systematic attention. Yet, as we show, such decisions have profound effects on the results of real models for real data. We argue that substantive theory is typically too vague to be of use for feature selection, and that the supervised literature is not necessarily a helpful source of advice. To aid researchers working in unsupervised settings, we introduce a statistical procedure and software that examines the sensitivity of findings under alternate preprocessing regimes. This approach complements a researcher’s substantive understanding of a problem by providing a characterization of the variability that changes in preprocessing choices may induce when analyzing a particular dataset. By making scholars aware of the degree to which their results are likely to be sensitive to preprocessing decisions, it also aids replication efforts.
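The general idea can be illustrated by re-running a simple text pipeline under alternative preprocessing regimes and measuring how much the document representations shift. The toy corpus, the grid of choices, and the comparison statistic below are assumptions made for the example, not the authors' procedure or software.

```python
# Sketch: vary preprocessing choices and compare the resulting
# document-by-document similarity matrices against a baseline regime.
from itertools import product

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The minister announced new tax cuts for small businesses.",
    "Opposition leaders criticized the proposed tax reform bill.",
    "The central bank raised interest rates to curb inflation.",
    "Voters listed inflation and taxes as their top concerns.",
]

choices = {
    "lowercase": [True, False],
    "stop_words": [None, "english"],
    "ngram_range": [(1, 1), (1, 2)],
}

baseline = None
for lower, stop, ngrams in product(*choices.values()):
    vec = CountVectorizer(lowercase=lower, stop_words=stop, ngram_range=ngrams)
    dtm = vec.fit_transform(docs)
    sims = cosine_similarity(dtm)  # document-by-document similarity matrix
    if baseline is None:
        baseline = sims  # first regime serves as the reference point
        continue
    drift = abs(sims - baseline).mean()  # mean shift relative to the baseline regime
    print(f"lowercase={lower}, stop_words={stop}, ngrams={ngrams}: "
          f"mean |Δ similarity| = {drift:.3f}")
```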
Representative democracy entails the aggregation of multiple policy issues by parties into competing bundles of policies, or “manifestos,” which are then evaluated holistically by voters in elections. This aggregation process obscures the multidimensional policy preferences underlying a voter’s single choice of party or candidate. We address this problem through a conjoint experiment based on the actual party manifestos in Japan’s 2014 House of Representatives election. By juxtaposing sets of issue positions as hypothetical manifestos and asking respondents to choose one, our study identifies the effects of specific positions on the overall assessment of manifestos, heterogeneity in preferences among subgroups of respondents, and the popularity ranking of manifestos. Our analysis uncovers important discrepancies between voter preferences and the portrayal of the election results by politicians and the media as providing a policy mandate to the Liberal Democratic Party, underscoring the potential danger of inferring public opinion from election outcomes alone.
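A hedged sketch of one standard way such forced-choice conjoint data are analyzed: a linear probability model on stacked profiles whose coefficients approximate average marginal component effects (AMCEs). The attributes, levels, and simulated choices below are illustrative assumptions, not the study's manifesto data or estimates.

```python
# Simulate pairs of hypothetical manifestos, a forced choice between them,
# and an AMCE-style regression of "profile chosen" on attribute dummies.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_tasks = 2000  # hypothetical respondent-task observations (pairs of profiles)

levels = {
    "consumption_tax": ["raise", "freeze"],
    "defense": ["expand", "status_quo"],
    "nuclear_power": ["restart", "phase_out"],
}

def draw_profiles(n):
    return pd.DataFrame({k: rng.choice(v, size=n) for k, v in levels.items()})

left, right = draw_profiles(n_tasks), draw_profiles(n_tasks)

def utility(df):
    # Assumed "true" preferences, used only to generate the toy choices
    return (0.3 * (df["consumption_tax"] == "freeze")
            + 0.1 * (df["defense"] == "status_quo")
            + 0.2 * (df["nuclear_power"] == "phase_out")
            + rng.normal(scale=0.5, size=len(df)))

chose_left = (utility(left) > utility(right)).astype(int)

# Stack the two profiles per task into long format: one row per profile shown
data = pd.concat([left.assign(chosen=chose_left),
                  right.assign(chosen=1 - chose_left)], ignore_index=True)

# Linear probability model; coefficients approximate AMCEs of each position
model = smf.ols("chosen ~ C(consumption_tax) + C(defense) + C(nuclear_power)",
                data=data).fit()
print(model.params.round(3))
```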
We introduce a model that extends the standard vote choice model to encompass text. In our model, votes and speech are generated from a common set of underlying preference parameters. We estimate the parameters with a sparse Gaussian copula factor model that estimates the number of latent dimensions, is robust to outliers, and accounts for zero inflation in the data. To illustrate its workings, we apply our estimator to roll call votes and floor speech from recent sessions of the US Senate. We uncover two stable dimensions: one ideological and the other reflecting Senators’ leadership roles. We then show how the method can leverage common speech in order to impute missing data, recovering reliable preference estimates for rank-and-file Senators given only leadership votes.
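As a deliberately simplified stand-in, the sketch below stacks simulated vote-based and speech-based features for each senator and extracts two latent dimensions with ordinary factor analysis. The simulated inputs and the use of plain factor analysis (rather than the paper's sparse Gaussian copula factor model) are assumptions made for illustration.

```python
# Stand-in sketch: joint factor analysis of vote- and speech-based features.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n_senators = 100

# Two hypothetical latent traits: ideology and a leadership-related dimension
ideology = rng.normal(size=n_senators)
leadership = rng.normal(size=n_senators)

# Simulated observed features: roll-call summaries load on ideology,
# speech summaries load on both traits, each with noise
votes = np.column_stack([ideology * w + rng.normal(scale=0.5, size=n_senators)
                         for w in (1.0, 0.8, 0.9)])
speech = np.column_stack([ideology * 0.6 + leadership * w
                          + rng.normal(scale=0.5, size=n_senators)
                          for w in (1.0, 0.9)])

X = StandardScaler().fit_transform(np.column_stack([votes, speech]))

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(X)  # per-senator positions on two latent dimensions

# Recovered factors are identified only up to rotation and sign, so check
# correlations of each estimated dimension with both generating traits
for k in range(2):
    print(f"factor {k}: |corr ideology| = "
          f"{abs(np.corrcoef(scores[:, k], ideology)[0, 1]):.2f}, "
          f"|corr leadership| = {abs(np.corrcoef(scores[:, k], leadership)[0, 1]):.2f}")
```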
In this paper, I introduce a Bayesian model for detecting changepoints in a time series of overdispersed counts, such as contributions to candidates over the course of a campaign or counts of terrorist violence. To avoid having to specify the number of changepoints ex ante, the model incorporates a hierarchical Dirichlet process prior that estimates the number of changepoints as well as their locations. This allows researchers to discover salient structural breaks and perform inference on the number of such breaks in a given time series. I demonstrate the usefulness of the model with applications to campaign contributions in the 2012 U.S. Republican presidential primary and incidences of global terrorism from 1970 to 2015.
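For intuition, a much simpler stand-in: a brute-force search for a single changepoint in an overdispersed count series, scoring each split with moment-matched negative binomial likelihoods. The simulated series and the maximum-likelihood search are assumptions made for the example; the paper's model instead infers the number and locations of changepoints under a hierarchical Dirichlet process prior.

```python
# Brute-force single-changepoint search on simulated overdispersed counts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulated counts with a shift in level at t = 60
y = np.concatenate([rng.negative_binomial(n=5, p=0.5, size=60),   # mean ~5
                    rng.negative_binomial(n=5, p=0.2, size=40)])  # mean ~20

def segment_loglik(x):
    """Negative binomial log-likelihood with moment-matched parameters."""
    m, v = x.mean(), x.var()
    if v <= m:  # no overdispersion detected; fall back to Poisson
        return stats.poisson.logpmf(x, m).sum()
    r, p = m * m / (v - m), m / v
    return stats.nbinom.logpmf(x, r, p).sum()

# Evaluate every admissible split point and keep the most likely one
splits = range(5, len(y) - 5)
scores = [segment_loglik(y[:t]) + segment_loglik(y[t:]) for t in splits]
best = list(splits)[int(np.argmax(scores))]
print(f"estimated changepoint at t = {best}")
```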
Multiple imputation (MI) is often presented as an improvement over listwise deletion (LWD) for regression estimation in the presence of missing data. Against a common view, we demonstrate anew that the complete case estimator can be unbiased, even if data are not missing completely at random. As long as the analyst can control for the determinants of missingness, MI offers no benefit over LWD for bias reduction in regression analysis. We highlight the conditions under which MI is most likely to improve the accuracy and precision of regression results, and develop concrete guidelines that researchers can adopt to increase transparency and promote confidence in their results. While MI remains a useful approach in certain contexts, it is no panacea, and access to imputation software does not absolve researchers of their responsibility to know the data.
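The central claim can be illustrated with a small simulation: when missingness depends only on an observed covariate that is included in the regression, the complete-case (listwise deletion) estimator recovers the true coefficient even though the data are not missing completely at random. The data-generating process below is an illustrative assumption, not the paper's analysis.

```python
# Simulation sketch: missingness in y depends on an observed covariate z;
# complete-case OLS that controls for z remains (approximately) unbiased.
import numpy as np

rng = np.random.default_rng(11)
n, reps, true_beta = 2000, 500, 1.5
estimates = []

for _ in range(reps):
    z = rng.normal(size=n)                  # determinant of missingness, observed
    x = 0.5 * z + rng.normal(size=n)        # regressor of interest
    y = true_beta * x + 1.0 * z + rng.normal(size=n)

    # Missingness depends only on z (so the data are not MCAR)
    miss = rng.random(n) < 1 / (1 + np.exp(-2 * z))
    keep = ~miss

    # Complete-case OLS of y on x and z (the missingness determinant is controlled)
    X = np.column_stack([np.ones(keep.sum()), x[keep], z[keep]])
    beta_hat = np.linalg.lstsq(X, y[keep], rcond=None)[0]
    estimates.append(beta_hat[1])

print(f"true beta = {true_beta}, mean complete-case estimate = {np.mean(estimates):.3f}")
```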
The sole purpose of the enhanced standard analysis (ESA) is to prevent so-called untenable assumptions in Qualitative Comparative Analysis (QCA). One source of such assumptions can be statements of necessity. QCA realists, the majority of QCA researchers, have elaborated a set of criteria for meaningful claims of necessity: empirical consistency, empirical relevance, and conceptual meaningfulness. I show that once Thiem’s (2017) data mining approach to detecting supersets is constrained by adhering to those standards, no CONSOL effect of Schneider and Wagemann’s ESA exists. QCA idealists, challenging most of QCA realists’ conventions, argue that separate searches for necessary conditions are futile because the most parsimonious solution formula reveals the minimally necessary disjunction of minimally sufficient conjunctions. Engaging with this perspective, I address several unresolved empirical and theoretical issues that seem to prevent the QCA idealist position from becoming mainstream.
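For readers unfamiliar with the two empirical criteria mentioned here, the sketch below computes consistency of necessity and relevance of necessity (RoN) for a hypothetical fuzzy-set condition, using formulas as commonly given in the fsQCA literature; the membership scores are made up for illustration.

```python
# Fuzzy-set QCA sketch: empirical criteria for "X is necessary for Y".
import numpy as np

# Hypothetical fuzzy-set membership scores for condition X and outcome Y
X = np.array([0.9, 0.8, 0.5, 0.9, 0.6, 0.4, 0.8, 0.9])
Y = np.array([0.8, 0.7, 0.6, 0.9, 0.5, 0.3, 0.6, 0.8])

# Consistency of necessity: the degree to which Y is a subset of X
consistency = np.minimum(X, Y).sum() / Y.sum()

# Relevance of necessity (RoN): penalizes trivially large ("irrelevant") conditions
relevance = (1 - X).sum() / (1 - np.minimum(X, Y)).sum()

print(f"consistency of necessity = {consistency:.3f}, RoN = {relevance:.3f}")
```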