We show that a common method of predicting individuals’ race in administrative records, Bayesian Improved Surname Geocoding (BISG), produces misclassification errors that are strongly correlated with demographic and socioeconomic factors. In addition to the high error rates for some racial subgroups, the misclassification rates are correlated with the political and economic characteristics of a voter’s neighborhood. Racial and ethnic minorities who live in wealthy, highly educated, and politically active areas are most likely to be misclassified as white by BISG. Inferences about the relationship between sociodemographic factors and political outcomes, like voting, are likely to be biased in models using BISG to infer race. We develop an improved method in which the BISG estimates are incorporated into a machine learning model that accounts for class imbalance and incorporates individual and neighborhood characteristics. Our model decreases the misclassification rates among non-white individuals, in some cases by as much as 50%.
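The BISG method discussed above combines a surname-based race prior with the racial composition of the individual's geographic area via Bayes' rule. A minimal sketch of that update, using hypothetical illustrative probabilities rather than real census data:

```python
# Minimal sketch of the core BISG update. All numbers are hypothetical
# illustrations, not real census figures. BISG applies Bayes' rule:
#   P(race | surname, geo) ∝ P(race | surname) * P(geo | race)

def bisg_posterior(p_race_given_surname, p_geo_given_race):
    """Return the normalized posterior P(race | surname, geography)."""
    unnormalized = {
        race: p_race_given_surname[race] * p_geo_given_race[race]
        for race in p_race_given_surname
    }
    total = sum(unnormalized.values())
    return {race: v / total for race, v in unnormalized.items()}

# Illustrative inputs: the surname prior leans white, but the
# neighborhood's composition shifts the posterior toward black.
p_race_given_surname = {"white": 0.70, "black": 0.20, "hispanic": 0.10}
p_geo_given_race = {"white": 0.001, "black": 0.010, "hispanic": 0.002}

posterior = bisg_posterior(p_race_given_surname, p_geo_given_race)
```

Because the geographic term can dominate the surname prior, individuals living in areas whose composition differs from their own group are the ones most at risk of the misclassification the abstract describes.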
We propose and explore the possibility that language models can be studied as effective proxies for specific human subpopulations in social science research. Practical and research applications of artificial intelligence tools have sometimes been limited by problematic biases (such as racism or sexism), which are often treated as uniform properties of the models. We show that the “algorithmic bias” within one such tool—the GPT-3 language model—is instead both fine-grained and demographically correlated, meaning that proper conditioning will cause it to accurately emulate response distributions from a wide variety of human subgroups. We term this property algorithmic fidelity and explore its extent in GPT-3. We create “silicon samples” by conditioning the model on thousands of sociodemographic backstories from real human participants in multiple large surveys conducted in the United States. We then compare the silicon and human samples to demonstrate that the information contained in GPT-3 goes far beyond surface similarity. It is nuanced, multifaceted, and reflects the complex interplay between ideas, attitudes, and sociocultural context that characterizes human attitudes. We suggest that language models with sufficient algorithmic fidelity thus constitute a novel and powerful tool to advance understanding of humans and society across a variety of disciplines.
Low political support for religious minority groups in the United States is often explained as a matter of social distance or unfamiliarity between religious traditions. Observable differences between beliefs and behaviors of religious minority groups and the cultural mainstream are thought to demarcate group boundaries. However, little scholarship has examined why some practices become symbolic boundaries that reduce support for religious accommodation in public policy, while nearly identical practices are tolerated. We hypothesize that politics is an important component of the process by which some religious practices are transformed into demarcations between “us” and “them.” We conduct an original survey experiment in which people are exposed to an identical policy demand—women-only swim times at a local public pool—attributed to three different religious denominations (Muslim, Jewish, and Pentecostal). We find that people are less supportive of women-only swim times when the requesting religion is not a part of their partisan coalition.
Women earn approximately half of all bachelor’s degrees in political science, but they comprise only 22% of full professors. Scholars have offered various likely explanations and proposed many interventions to improve women’s advancement. This article reviews existing research regarding the effectiveness of these interventions. We find that many of the proposed interventions have yet to be fully evaluated. Furthermore, some of the policies that have been evaluated turn out to be ineffective. Women’s mentoring and networking workshops are the most promising of the fully tested interventions. The potential for failure underscores the need for additional evaluation of any proposed intervention before widespread implementation.