In an important contribution to scholarship on measuring democratic performance, Little and Meng suggest that bias among expert coders accounts for erosion in ratings of democratic quality and performance observed in recent years. Drawing on 19 waves of survey data on US democracy from academic experts and from the public collected by Bright Line Watch (BLW), this study looks for but does not find manifestations of the type of expert bias that Little and Meng posit. Although we are unable to provide a direct test of Little and Meng’s hypothesis, several analyses provide reassurance that expert samples are an informative source to measure democratic performance. We find that respondents who have participated more frequently in BLW surveys, who have coded for V-Dem, and who are vocal about the state of American democracy on Twitter are no more pessimistic than other participants.
When a party or candidate loses the popular vote but still wins the election, do voters view the winner as legitimate? This scenario, known as an electoral inversion, describes the winners of two of the last six presidential elections in the United States. We report results from two experiments testing the effect of inversions on democratic legitimacy in the US context. Our results indicate that inversions significantly decrease the perceived legitimacy of winning candidates. Strikingly, this effect does not vary with the margin by which the winner loses the popular vote, nor by whether the candidate benefiting from the inversion is a co-partisan. The effect is driven by Democrats, who punish inversions regardless of candidate partisanship; few effects are observed among Republicans. These results suggest that the experience of inversions increases sensitivity to such outcomes among supporters of the losing party.
To enhance the performance evaluation of Clinical and Translational Science Award (CTSA) hubs, we examined the utility of advanced bibliometric measures that go beyond simple publication counts to demonstrate the impact of translational research output.
Methods:
The sampled data included North Carolina Translational and Clinical Science Institute (NC TraCS)-supported publications produced between September 2008 and March 2017. We adopted advanced bibliometric measures and a state-of-the-art bibliometric network analysis tool to assess research productivity, citation impact, the scope of research collaboration, and the clusters of research topics.
Results:
In total, 754 NC TraCS-supported publications generated over 24,000 citations by April 2017, an average of 33 cites per article. NC TraCS-supported research papers received more than twice as many cites per year as the average National Institutes of Health-funded research publication from the same field and time period. We identified the most productive researchers and their networks within the CTSA hub. Findings demonstrated the impact of NC TraCS in facilitating interdisciplinary collaborations within the CTSA hub and across the CTSA consortium and in connecting researchers with the right peers and organizations.
Conclusion:
Both improved bibliometric measures and bibliometric network analysis can bring new perspectives to CTSA evaluation via citation influence and the scope of research collaborations.
This chapter shows that preferences do not differ greatly when we separate students out by their race/ethnicity, gender, or socioeconomic background. All groups favor applicants and faculty candidates from underrepresented minority racial/ethnic and socioeconomic groups. The one area where we see preference polarization is with respect to gender non-binary applicants and faculty candidates. Women tend to favor gender non-binary individuals but men disfavor them, consistent with intolerance among men toward gender non-conformity.
This chapter describes the preferences we estimate on attitudes toward undergraduate admissions and faculty recruitment across our full population of student participants. It shows that students prioritize academic and professional achievement most, but also that they give preference to all underrepresented minority racial and ethnic groups over whites, to women and gender non-binary applicants over men, and to applicants from disadvantaged socioeconomic backgrounds over the wealthy. They also give preference to recruited varsity athletes and to legacy applicants.
This chapter reports results from similar conjoint experiments conducted at the United States Naval Academy and at the London School of Economics. At both institutions, we find pro-diversity preferences that largely complement those from other schools. However, at the Naval Academy we find no preferences in favor of women applicants, despite the fact that women are underrepresented among students at the Academy (whereas they make up majorities at most undergraduate institutions), and we find that preferences against gender non-binary applicants and faculty candidates are far stronger at the Naval Academy than at other institutions. At the London School of Economics, we find positive but smaller preferences in favor of Black applicants, but none for East Asian or South Asian applicants, and we find strong preferences in favor of applicants from disadvantaged socioeconomic backgrounds.
Fully randomized conjoint analysis can mitigate many of the shortcomings of traditional survey methods in estimating attitudes on controversial topics. This chapter explains how we applied conjoint analysis at seven universities and describes the population of participants in our experiments.
The concluding chapter provides a summary of the results reported in the previous chapters, emphasizing the overall preferences in favor of racial/ethnic, gender, and socioeconomic diversity and the broad consensus around these preferences across groups of participants. The chapter then reviews scholarship on how diversity affects campus communities and individual students and faculty, emphasizing that effects at the community level are widely regarded to be positive whereas deeper debates surround impact at the individual level. The chapter concludes by considering current challenges to affirmative action in college admissions in the courts and from those arguing for diversity of viewpoints rather than demographics.
The demographic composition of campuses has changed dramatically in recent decades, both among students and faculty. This chapter documents those trends as well as persistent demographic inequalities. It then reviews the policies that created such inequalities as well as more recent attempts to mitigate them. It also reviews recent protests and controversies surrounding campus diversity.
This chapter shows that the rate of return to academic achievement (for students) or professional achievement (for faculty) does not differ across key demographic categories, by race/ethnicity or gender. That is, whites, blacks, Hispanics, Asians, and Native Americans all receive commensurate increases in likelihood of selection in our experiments for similar increases in academic achievement, and women, men, and gender non-binary faculty candidates are rewarded at commensurate rates for stronger professional achievement.
Debates over diversity on campus are intense, they command media attention, and the courts care about how efforts to increase diversity affect students’ experiences and attitudes. Yet we know little about what students really think because measuring attitudes on politically charged issues is challenging. This book adopts an innovative approach to addressing this challenge.
This chapter shows that, even across our deepest political divides, we find little polarization of preferences on admissions and faculty recruitment. When we break out participants by party, preferences differ: Democrats favor all underrepresented minority groups, whereas Republicans are, statistically, indifferent toward non-whites and women (although they disfavor gender non-binary applicants). Most surprisingly, when we break out participants by whether they state support for, or opposition to, consideration of race in college admissions on a conventional survey question, both groups give preference to members of underrepresented minority racial/ethnic groups relative to whites, and to women relative to men, in our conjoint experiments. Preferences as revealed in holistic choices differ from those as revealed in standard surveys.