In recent years, scholars, journals, and professional organizations in political science have been working to improve research transparency. Although better transparency is a laudable goal, the implementation of standards for reproducibility still leaves much to be desired. This article identifies two practices that political science should adopt to improve research transparency: (1) journals must provide detailed replication guidance and run provided material; and (2) authors must begin their work with replication in mind. We focus on problems that occur when scholars provide research materials to journals for replication, and we outline best practices regarding documentation and code structure for researchers to use.
The aim of the current study was to explore the effect of gender, age at onset, and duration on the long-term course of schizophrenia.
Twenty-nine centers from 25 countries representing all continents participated in the study that included 2358 patients aged 37.21 ± 11.87 years with a DSM-IV or DSM-5 diagnosis of schizophrenia; the Positive and Negative Syndrome Scale as well as relevant clinicodemographic data were gathered. Analysis of variance and analysis of covariance were used, and the methodology corrected for the presence of potentially confounding effects.
Age at onset was 3 years later in females (P < .001), who also showed lower rates of negative symptoms (P < .01) and higher depression/anxiety measures (P < .05) at some stages. Age at onset followed a single-peaked distribution in both genders, with earlier-onset patients tending to advance more slowly through illness stages (P = .001). No significant effects were found for duration of illness.
Our results confirmed a later onset and a possibly more benign course and outcome in females. Age at onset manifested a single peak in both genders, and, surprisingly, earlier onset was related to slower progression of the illness. No effect of duration was detected. These results are partially in accord with the literature, but they also differ from it as a consequence of the different starting point of our methodology (a novel staging model), which in our opinion removed the impact of confounding effects. Future research should focus on the therapeutic policy and implications of these results in more representative samples.
The integrity of democratic elections, both in the United States and abroad, is an important problem. In this Element, we present a data-driven approach that evaluates the performance of the administration of a democratic election, before, during, and after Election Day. We show that this data-driven method can help to improve confidence in the integrity of American elections.
Building on past research, we implement a hierarchical latent class model to analyze political participation from a comparative perspective. Our methodology allows simultaneously: (i) estimating citizens’ propensity to engage in conventional and unconventional modes of participation; (ii) classifying individuals into underlying “types” capturing within- and cross-country variations in participation; and (iii) assessing how this classification varies with micro- and macro-level factors. We apply our model to Latin American survey data. We show that our method outperforms alternative approaches used to study participation and derive typologies of political engagement. Substantively, we find that the distribution of participatory types is similar throughout the continent, and that it correlates strongly with respondents’ socio-demographic characteristics and crime victimization.
Does attentiveness matter in survey responses? Do more attentive survey participants give higher quality responses? Using data from a recent online survey that identified inattentive respondents using instructed-response items, we demonstrate that ignoring attentiveness provides a biased portrait of the distribution of critical political attitudes and behavior. We show that this bias occurs in the context of both typical closed-ended questions and in list experiments. Inattentive respondents are common and are more prevalent among the young and less educated. Those who do not pass the trap questions interact with the survey instrument in distinctive ways: they take less time to respond; are more likely to report nonattitudes; and display lower consistency in their reported choices. Inattentiveness does not occur completely at random and failing to properly account for it may lead to inaccurate estimates of the prevalence of key political attitudes and behaviors, of both sensitive and more prosaic nature.
With the discipline’s push toward data access and research transparency (DA-RT), journal replication archives are becoming increasingly common. As researchers work to ensure that replication materials are provided, they also should pay attention to the content—rather than simply the provision—of journal archives. Based on our experience in analyzing and handling journal replication materials, we present a series of recommendations that can make them easier to understand and use. The provision of clear, functional, and well-documented replication materials is key for achieving the goals of transparent and replicable research. Furthermore, good replication materials enhance the development of extensions and related research by making state-of-the-art methodologies and analyses more accessible.
For more than a decade, increased scrutiny has been placed on the administration and integrity of democratic elections throughout the world (Levin and Alvarez 2012). The surge of interest in electoral integrity seems to be fueled by a number of different factors: an increase in the number of nations conducting elections, more concerns about election administration and voting technology, the increased use of social media, and a growing number of scholars throughout the world who are interested in the study of integrity and the possible manipulation of elections (Alvarez, Hall, and Hyde 2008).
Although there are many ways that the integrity of elections can be assessed – for example, by studying the opinions of voters about their confidence in the conduct of elections (Alvarez, Atkeson, and Hall 2012) or through election monitoring (Bjornlund 2004; Hyde 2007, 2011; Kelley 2013) – many methodologists, statisticians, and computer scientists have contributed to the new and growing literature on “election forensics”. This body of research involves the development of a growing suite of tools – some as simple as looking at the distributions of variables, such as turnout in an election, and others that use more complex multivariate statistical models – to sift through observational data from elections to detect anomalies or outliers as potential indicators for election fraud and manipulation (Levin et al. 2009; Alvarez et al. 2014).
The literature on election forensics now has advanced a somewhat dizzying array of methods for detecting election anomalies, without providing guidance for when particular methods might best be utilized by analysts. That is, when is it best to look for anomalies in distributions of voter turnout? When should digit tests (such as Benford's Law) be applied? What about the use of regression models to detect outliers, either in single or multiple contests? How much statistical power do distributional tests have in common settings where you want to try to detect election outliers? These questions have motivated some of our recent research and have led us to consider the use of new techniques, such as machine learning, for the detection of election manipulation in nations like Venezuela (Alvarez et al. 2014).
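The digit tests mentioned above can be made concrete with a short sketch. The following is a generic toy version of a second-digit Benford test in Python – not the specific implementation used in any of the cited studies – and the vote counts in the usage example are hypothetical.

```python
import math

def benford_second_digit_probs():
    """Expected second-digit frequencies under Benford's law:
    P(d) = sum over k = 1..9 of log10(1 + 1 / (10k + d))."""
    return [sum(math.log10(1 + 1 / (10 * k + d)) for k in range(1, 10))
            for d in range(10)]

def second_digit_chi2(counts):
    """Chi-square statistic comparing the observed second digits of
    vote counts (counts of 10 or more only) against the Benford
    expectation. Large values flag distributions worth closer scrutiny."""
    digits = [int(str(c)[1]) for c in counts if c >= 10]
    n = len(digits)
    expected = benford_second_digit_probs()
    observed = [digits.count(d) for d in range(10)]
    return sum((observed[d] - n * p) ** 2 / (n * p)
               for d, p in enumerate(expected))

# Hypothetical precinct-level vote counts:
counts = [1253, 842, 967, 410, 1188, 733, 2054, 391, 876, 1129]
stat = second_digit_chi2(counts)
```

In practice the statistic would be compared against a chi-square distribution with 9 degrees of freedom, and – as the literature emphasizes – a significant result is a flag for closer inspection, not proof of fraud.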
Machine learning procedures use statistical tools to find patterns in the data that reveal new and relevant information that may prove useful for performing an action or task.
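The pattern-finding idea can be illustrated with a minimal unsupervised example: k-means clustering on hypothetical precinct-level (turnout, vote-share) pairs. This is a generic sketch, not the machine-learning pipeline of the cited Venezuela study; the precinct data below are invented for illustration.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain 2-D k-means. Groups precincts with similar
    (turnout, vote-share) profiles; small or extreme clusters can
    then be inspected as potential anomalies."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: (p[0] - centers[i][0]) ** 2
                                + (p[1] - centers[i][1]) ** 2)
            groups[j].append(p)
        # Recompute each center as the mean of its group.
        centers = [(sum(p[0] for p in g) / len(g),
                    sum(p[1] for p in g) / len(g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Hypothetical precincts: a typical cluster plus two near-unanimous,
# near-universal-turnout precincts of the kind forensic work flags.
precincts = [(0.55, 0.48), (0.52, 0.51), (0.58, 0.47), (0.54, 0.50),
             (0.98, 0.95), (0.97, 0.93)]
centers, groups = kmeans(precincts, k=2)
```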
This volume has its origins in the rapid and staggering changes occurring in computational social research. As one of the editors of Political Analysis (an academic journal that publishes research articles in political methodology) and Analytical Methods for Social Research (a book series), I know I am witnessing a major shift in social science research methodology. Researchers have vast (and complex) arrays of data to work with; we have incredible tools to sift through the data and recognize patterns in that data; there are now many sophisticated models that we can use to make sense of those patterns; and we have extremely powerful computational systems that help us accomplish these tasks quickly.
When I was in graduate school in the late 1980s and early 1990s, those of us who worked with survey and public opinion polling data were considered “big-N” researchers in the social sciences. When I teach introductory research methods in my graduate seminars at Caltech, I will often have students read the 1978 American Political Science Review paper by Steven J. Rosenstone and Raymond E. Wolfinger, “The Effect of Registration Laws on Voter Turnout.” Today this paper seems straightforward to students: Rosenstone and Wolfinger simply collected information on state-by-state voter registration and administrative practices, and merged that with the November 1972 U.S. Census Bureau Current Population Survey voting supplement, which the authors report as having more than 93,000 respondents. They then tested, using a relatively simple binary probit model, for the effects of various registration and election administration procedures on whether the survey respondent reported having voted in the 1972 federal general elections.
Most students of statistics, methodology, or econometrics today are familiar with the binary probit model and its near-cousin, binary logit. These are techniques that model the probability that an outcome occurs (here, whether a voter turned out in an election) based on the covariates or regressors on the right-hand side of the model. The parameters of the probit and logit models are typically fit via maximum-likelihood optimization. Today a student could use an off-the-shelf statistics software package and replicate the original Rosenstone-Wolfinger analysis, literally in the blink of an eye, on his or her laptop computer.
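The mechanics of that maximum-likelihood fit can be sketched in a few lines. The toy below fits a binary logit by plain gradient ascent on the log-likelihood (packaged software typically uses faster Newton-type optimizers), and the turnout data are simulated, not the Rosenstone-Wolfinger sample.

```python
import math
import random

def fit_logit(X, y, lr=0.5, iters=2000):
    """Fit a binary logit by maximum likelihood using plain gradient
    ascent on the log-likelihood; one coefficient per column of X."""
    n, k = len(X), len(X[0])
    beta = [0.0] * k
    for _ in range(iters):
        grad = [0.0] * k
        for xi, yi in zip(X, y):
            # Predicted probability under the current coefficients.
            p = 1 / (1 + math.exp(-sum(b * x for b, x in zip(beta, xi))))
            for j in range(k):
                grad[j] += (yi - p) * xi[j]  # score contribution
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta

# Simulated data: Pr(voted) = inverse-logit(0.5 + 1.5 * x).
rng = random.Random(1)
X = [[1.0, rng.uniform(-2, 2)] for _ in range(400)]
y = [1 if rng.random() < 1 / (1 + math.exp(-(0.5 + 1.5 * xi[1]))) else 0
     for xi in X]
beta = fit_logit(X, y)  # recovers the true coefficients approximately
```

A probit differs only in replacing the inverse-logit link with the standard normal CDF; the estimation logic is the same.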
Quantitative social science research is changing rapidly. Researchers have vast and complex arrays of data with which to work: we have incredible tools to sift through the data and recognize patterns in that data; there are now many sophisticated models that we can use to make sense of those patterns; and we have extremely powerful computational systems that help us accomplish these tasks quickly. This book focuses on some of the extraordinary work being conducted in computational social science - in academia, government, and the private sector - while highlighting current trends, challenges, and new directions. Thus, Computational Social Science showcases the innovative methodological tools being developed and applied by leading researchers in this new field. The book shows how academics and the private sector are using many of these tools to solve problems in social science and public policy.