
Expert Bias and Democratic Erosion: Assessing Expert Perceptions of Contemporary American Democracy

Published online by Cambridge University Press:  11 January 2024

Olivier Bergeron-Boutin
Affiliation:
Dartmouth College, USA
John M. Carey
Affiliation:
Dartmouth College, USA
Gretchen Helmke
Affiliation:
University of Rochester, USA
Eli Rau
Affiliation:
Vanderbilt University, USA

Abstract

In an important contribution to scholarship on measuring democratic performance, Little and Meng suggest that bias among expert coders accounts for erosion in ratings of democratic quality and performance observed in recent years. Drawing on 19 waves of survey data on US democracy from academic experts and from the public collected by Bright Line Watch (BLW), this study looks for but does not find manifestations of the type of expert bias that Little and Meng posit. Although we are unable to provide a direct test of Little and Meng’s hypothesis, several analyses provide reassurance that expert samples are an informative source to measure democratic performance. We find that respondents who have participated more frequently in BLW surveys, who have coded for V-Dem, and who are vocal about the state of American democracy on Twitter are no more pessimistic than other participants.

Type
Comment and Controversy
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of American Political Science Association

Little and Meng challenge the thesis that democracies are eroding globally. Summoning salient metrics of democratic performance and comparing them with democracy indicators built largely from expert assessments drawn from V-Dem, Little and Meng detect divergence in recent years between what they characterize as “objective” versus “subjective” measures.

They then consider two accounts for how such divergence could arise: (1) would-be autocrats have grown increasingly subtle, channeling their transgressions into actions that “fly under the radar” of objective metrics but nevertheless represent threats to democracy (and, presumably, could eventually manifest as objective erosion); and (2) media have increasingly focused on the prospect of democratic erosion and, correspondingly, the coders who provide assessments for democracy indicators have grown more sensitive to transgressions against democratic norms.

Little and Meng (2023) note that these explanations are not mutually exclusive and that they are open to both. However, they lean toward the latter: “we argue that coder bias likely explains at least some of the discrepancy.”

We (i.e., most of the coauthors) are part of Bright Line Watch (BLW), an organization that was formed in 2017 specifically to focus attention and energy on the question of whether US democracy faces existential threats. In that sense, we embody exactly the heightened attentiveness to erosion that, by Little and Meng’s account, could be driving bias in expert assessments of democracy—at least for the United States. In short, if there is a problem here, we might be part of it.

BLW regularly conducts parallel surveys of two distinct respondent pools. Our “expert” respondents are drawn from political science faculty at all US universities. We also poll a representative sample of the American public assembled by the survey firm YouGov.[1] This article probes Little and Meng’s concerns about how expert bias might operate by leveraging two types of comparisons within our data. The first type considers comparisons across our expert pool; the second compares the attitudes of the expert respondents with those of the public sample. Although it is impossible to directly refute the proposition that experts are alarmist relative to some ground truth about the actual state of American democracy, our data allow us to counter several implications of the Little and Meng argument. Specifically, we show that:

  • Comparing those experts who regularly self-select into BLW surveys to those experts who only rarely participate, there is little evidence that the former are more alarmist about democracy than the latter.

  • Likewise, comparing those experts who are more active on “democracy Twitter” to those who are less engaged or immersed, there is scant support for the implication that the former are more pessimistic about democracy.

  • We do not find that experts in BLW’s survey sample who also participate in coding for V-Dem (a principal research consortium whose democracy ratings have pointed to a “democratic recession” in recent years) are more despairing about democracy than BLW experts who do not or would not engage with V-Dem.

  • Contrary to the broader implications of Little and Meng’s argument, comparisons with the public sample reveal that BLW experts are consistently more optimistic about US democracy overall. We confirm that our expert assessments correlate more closely with Democratic partisans among the public sample than with Republicans and that, although the Democrat–expert alignment has not changed during the past six years, Republican and expert assessments have diverged dramatically.

ARE HIGHLY ENGAGED EXPERTS ALSO MORE PESSIMISTIC?

Unlike existing expert surveys on democracy across countries (e.g., V-Dem, Freedom House, and Polity), our expert pool is drawn entirely from American universities. Our mailing list includes approximately 10,000 unique email addresses, and typically 500 to 1,000 respondents complete the surveys. In any given wave, about 5% to 10% of the discipline participates in our surveys; thus, BLW’s expert surveys are completed by a much larger pool of experts than any of the existing global measures of democracy.[2]

Given how BLW recruits expert respondents, self-selection is a possible source of bias. If experts who participate in our surveys are more concerned about the state of US democracy than those who do not participate, our measures could be systematically overstating the extent to which the discipline as a whole perceives threats to American democracy. Moreover, if political scientists indeed are selecting in and out of participation on the basis of their level of concern, then respondents who participate sporadically—or even only once—should be less pessimistic than those who participate regularly, who constitute a disproportionate share of responses in any given survey.

To test this, we exploited the fact that each BLW expert respondent was assigned a unique participant ID that is stable across survey waves.[3] For each survey wave, we computed the mean evaluation of US democracy on a 100-point scale among respondents who participated in only that one survey wave, respondents who participated in a total of two waves, respondents who participated in three waves, and so forth. In figure 1, each facet shows the mean rating on the 0–100 scale item among respondents who participated in 1, 2, 3…14 surveys of the 18 survey waves overall.[4] Contrary to the self-selection hypothesis, figure 1 shows no pattern by which frequent responders are more or less sanguine about US democracy than infrequent responders.[5]

Figure 1 Ratings of American Democracy in Waves 2–5, by Expert Participation

Horizontal lines show unconditional means by wave. Point size represents the number of respondents.
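For readers who want to reproduce this kind of tabulation on comparable panel data, a minimal sketch in Python follows. The file name and column names (respondent_id, wave, rating) are hypothetical placeholders, not BLW's actual data schema.

```python
import pandas as pd

# Hypothetical respondent-level panel: one row per respondent per wave,
# with a stable participant ID and a 0-100 democracy rating.
df = pd.read_csv("blw_expert_waves.csv")  # columns: respondent_id, wave, rating

# Total number of waves in which each respondent participated.
df["n_waves"] = df.groupby("respondent_id")["wave"].transform("nunique")

# Mean rating within each wave, broken out by participation frequency:
# the quantity plotted in each facet of figure 1.
means = (
    df.groupby(["wave", "n_waves"])["rating"]
      .agg(mean_rating="mean", n_respondents="size")
      .reset_index()
)
print(means)
```

Under the self-selection hypothesis, mean_rating would decline as n_waves increases within each wave; figure 1 shows no such gradient.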

MEDIA IMMERSION

Little and Meng suggest that immersion in a media environment that is pessimistic about democracy might generate expert-coder bias. Media sources increasingly indicate that democracy is under threat. Democracy experts bathe in this discourse and react to it. The same experts also generate the subjective assessments used to measure erosion. If all of this is the case, then we might expect to see that experts who are more heavily marinated in democracy-alarmist media are particularly pessimistic about democracy. We call this the consumption hypothesis. We agree with Little and Meng that the media-consumption narrative is plausible, although we struggle to imagine a research design that provides a rigorous causal test. Moreover, we note that much of the media coverage about threats to democracy is rooted in the work of academics themselves. As such, media coverage should not be taken as an exogenous force to which and by which political scientists and other experts happen to be exposed and influenced. Rather, at least part of the concern expressed by the academic world is causally prior to media coverage.

A second possibility is that experts have balanced information diets that neither overstate nor understate the threat to democracy but that the bundle of information that experts produce for public consumption (e.g., academic articles, op-eds, and social media posts) is disproportionately of the “threat-to-democracy” genre. We call this the production hypothesis.[6]

Data from the 17th wave of BLW surveys, fielded in October 2022, touch on this theme of skewed scholarly production. In an attempt to detect any selection effects that may skew the public face of scholarship on the state of democracy, we asked our expert sample whether and how often they use Twitter. We also asked how often respondents tweeted about issues related to the state of American democracy. Twitter is a relevant platform for this test because of its role as a digital “town square” for academic communities, including the political science community (Bisbee, Larson, and Munger 2022). It is plausible that excessive pessimism regarding the state of American democracy may attract a substantial audience, given the propensity to consume negative news (Robertson et al. 2023; Sacerdote, Sehgal, and Cook 2020).

In our October 2022 survey of 682 political scientists, 40% of the 626 who answered the question reported that they did not use Twitter at all (non-users), 49% used the application but did not tweet regularly about American democracy (Twitter consumers), and 11% tweeted about American democracy once a week or more (democracy tweeters). We asked all of the experts to rate the then-current performance of US democracy on a 100-point scale and also to make projections five and 10 years into the future. If the production hypothesis is correct, we would expect to see a group of highly concerned experts who select into frequent public discussions of the state of US democracy and who skew public-facing scholarship in the direction of alarmism. That is, we would expect our democracy-tweeter experts to assess American democracy more negatively than those who are “less online.” Democratic pessimism among the Twitter-consumer experts would be consistent with the consumption hypothesis. Figure 2 illustrates the democracy ratings as of October 2022 as well as the future assessments of each group.

Figure 2 Expert Ratings of Democracy by Twitter Usage

Vertical error bars are 95% confidence intervals.

On current assessments of democratic performance, the mean rating among non-users was 65; among Twitter consumers, 68; and among democracy tweeters, 66. The difference between Twitter consumers and non-users reached conventional statistical significance (p=0.04), with consumers more optimistic than non-users—contrary to the consumption hypothesis. Projecting into the future, all three groups anticipated democratic erosion. At five and 10 years out, Twitter consumers were the most optimistic, followed by non-users, with democracy tweeters the most pessimistic. Differences between non-users and either of the Twitter-engaged groups never reached statistical significance. The differences between Twitter consumers and democracy tweeters were significant at p=0.04 and p=0.03, respectively. However, we cannot imagine any theoretical account by which a limited amount of Twitter immersion should cause democratic optimism whereas further increasing Twitter immersion should cause pessimism. In summary, the differences that we observed in both current assessments and future projections did not map onto an account by which more Twitter immersion produces more pessimism.
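These pairwise contrasts can be computed with standard two-sample tests. The sketch below uses Welch's unequal-variance t-test as one plausible choice; the file name, column names, and category labels are hypothetical stand-ins for the wave-17 data.

```python
import pandas as pd
from scipy import stats

# Hypothetical wave-17 extract: one row per expert, with a Twitter-usage
# category and a 0-100 rating of current US democratic performance.
df = pd.read_csv("blw_wave17.csv")  # columns: twitter_group, rating_now

groups = {name: sub["rating_now"].dropna()
          for name, sub in df.groupby("twitter_group")}

# Pairwise Welch t-tests mirroring the contrasts discussed in the text.
pairs = [("non_user", "consumer"),
         ("non_user", "democracy_tweeter"),
         ("consumer", "democracy_tweeter")]
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b], equal_var=False)
    print(f"{a} (mean {groups[a].mean():.1f}) vs "
          f"{b} (mean {groups[b].mean():.1f}): t = {t:.2f}, p = {p:.3f}")
```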

It is important to note that the data presented so far are drawn from survey questions not purposely designed to test Little and Meng’s proposition about expert bias. Rather, Little and Meng’s intervention opened an important discussion and prompted us to reexamine data collected for other purposes, seeking leverage on this new debate. There could be a pessimism effect big enough to cause the observed decline in certain countries’ V-Dem polyarchy indices but one limited enough to evade our imperfect searchlight. We note that the more dire claims about the state of democracy in recent V-Dem reports (e.g., Boese-Schlosser et al. 2022) rely on shifting the unit of analysis from the country to the individual citizen. Thus, recent declines in V-Dem’s polyarchy scores for a few large countries (in particular, India, where 17.7% of the world’s population lives) overshadow democratic improvements in smaller countries. As Little and Meng observe, the average V-Dem polyarchy score across countries has remained fairly constant since 1990—in fact, the trend line is remarkably similar to that of Little and Meng’s proposed alternative measures. Nonetheless, we still may be concerned that a pessimism bias is driving or exaggerating the appearance of backsliding in the countries where V-Dem has registered declining polyarchy scores.

In our most recent BLW survey, conducted in June–July 2023, we sought to determine more directly whether the specific political scientists who generate expert assessments on which contemporary democracy scholarship is based are systematically biased toward democratic pessimism.[7] Specifically, at the end of a BLW expert survey, we included questions—designed in collaboration with Little and Meng—that asked respondents whether they had ever been invited to serve as a coder for V-Dem. We also asked whether they had served (if invited) and about their willingness to serve (if not invited). Of the 544 expert respondents who completed this section of our survey, 484 (89%) had never been invited to serve as V-Dem coders;[8] 16 (3%) had been invited but did not participate; and 44 (8%) had been invited and served as coders.

Prior to being asked about V-Dem participation, our expert respondents had rated on a 100-point scale the quality of democracy in the United States and in a random subset of six countries drawn from the following list: Brazil, Hungary, India, Italy, Israel, Kenya, Mexico, Peru, the Philippines, Poland, Turkey, and the United Kingdom.

Expert-coder bias could operate through either V-Dem invitations (i.e., experts who are invited are more pessimistic than those who are not invited) or self-selection (i.e., experts who choose to participate are more pessimistic than those who decline). Figure 3 shows the average ratings for each country. The left panel compares ratings from expert respondents in our sample who had and had not been invited by V-Dem. The right panel compares ratings from those willing versus those unwilling to participate with V-Dem.

Figure 3 Invitation Bias and Participation Bias Among V-Dem Coders

We find no evidence of bias toward pessimism at either stage of selection into the V-Dem coder pool, invitation or participation. The average democracy ratings among those who were invited to code for V-Dem were higher than the average ratings among uninvited experts for 12 of the 13 countries (the exception was Turkey). The difference reached statistical significance for Brazil and Peru. With regard to participation, the average ratings were higher among those willing to code for V-Dem than among those who were unwilling for all 13 countries, with statistically significant differences for the United States, Brazil, Mexico, and Poland. Overall, those political scientists whom V-Dem targets and those inclined to participate if asked appeared more, not less, sanguine about democracy around the world than experts outside of that V-Dem coder pool.
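A minimal sketch of this two-panel comparison follows, assuming a hypothetical long-format file with one row per expert-country rating and boolean flags for invitation and willingness; all names are illustrative.

```python
import pandas as pd
from scipy import stats

# Hypothetical long-format extract: one row per expert-country rating,
# with boolean flags for V-Dem invitation and willingness to code.
df = pd.read_csv("blw_country_ratings.csv")
# columns: respondent_id, country, rating, invited, willing

def compare_by_flag(df, flag):
    """Country-level mean ratings for flag==True vs flag==False,
    with a Welch t-test per country."""
    rows = []
    for country, d in df.groupby("country"):
        yes = d.loc[d[flag], "rating"].dropna()
        no = d.loc[~d[flag], "rating"].dropna()
        t, p = stats.ttest_ind(yes, no, equal_var=False)
        rows.append({"country": country,
                     f"{flag}_mean": yes.mean(),
                     f"not_{flag}_mean": no.mean(),
                     "p_value": p})
    return pd.DataFrame(rows)

print(compare_by_flag(df, "invited"))   # left panel of figure 3
print(compare_by_flag(df, "willing"))   # right panel of figure 3
```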

There are three important caveats to our analytical strategy. First, those experts who code do so for specific countries for which they have the greatest expertise. Unfortunately, our survey sample does not provide a sufficient number of “direct hits” (i.e., V-Dem coders who also rated their specific country of expertise in our survey) to allow comparison with ratings from the broader set of political scientists. Second, our sample of political scientists may be vulnerable to self-selection bias; it is possible that experts who declined to take part in our survey hold systematically different attitudes from those who did.[9] Third, it also is possible that political science as a whole is unduly pessimistic about democracy across the world, in which case the baseline against which we are comparing V-Dem coders does not reflect a ground truth about democracy around the world. If such a bias were new or increased in recent years, it could cause a universal shift in coder standards—according to Little and Meng’s formulation—that our approach might not detect.

EXPERT VERSUS PUBLIC ASSESSMENTS OF DEMOCRACY

In addition to our expert surveys, BLW also routinely polls the general public about the state of democracy in the United States and, occasionally, other countries. In recent ratings of democracy in 13 countries, expert assessments of democratic performance exhibit far less compression than those of the public and offer more precise estimates (with lower variance as a group)—as we would expect and hope if the experts are actually better informed.

Figure 4 shows mean democracy ratings on the 100-point scale, with 95% confidence intervals, for 13 countries plus the United States as of October 2022. The rank ordering of countries by experts and the public is almost identical, with North Korea at the bottom and Canada at the top. However, mean expert assessments range from 2 to 84, whereas the public’s assessments range from 18 to 59. The expert assessments are far more precise, with country-level standard deviations ranging from 6.7 (North Korea) to 20.7 (Israel), compared with 21.8 (Great Britain) to 26.6 (Israel) for the public.

Figure 4 Expert and Public Ratings of Democracy in 13 Countries

Horizontal error bars are 95% confidence intervals. Data are from Wave 17 (October 2022) of BLW surveys.
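The compression and precision comparison reduces to country-level means and standard deviations by sample. A minimal sketch follows, assuming a hypothetical pooled file; the file name and column names are illustrative.

```python
import pandas as pd

# Hypothetical pooled extract from wave 17: one row per respondent-country
# rating, with a sample indicator ("expert" or "public").
df = pd.read_csv("blw_wave17_countries.csv")  # columns: sample, country, rating

# Country-level means and standard deviations for each sample.
summary = (df.groupby(["sample", "country"])["rating"]
             .agg(["mean", "std"])
             .reset_index())

# Compression across countries (spread of country means) and precision
# within countries (spread of country-level SDs), by sample.
for name, d in summary.groupby("sample"):
    print(f"{name}: means {d['mean'].min():.0f} to {d['mean'].max():.0f}; "
          f"SDs {d['std'].min():.1f} to {d['std'].max():.1f}")
```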

Returning our focus to the United States, figure 5 shows the time series from 2017 through mid-2023 of mean responses, with 95% confidence intervals, on the same 100-point scale from our expert respondents (shown in green) and the public (shown in purple).[10]

Figure 5 Expert and Public Ratings of US Democracy, 2017–2023

Vertical error bars are 95% confidence intervals.

If Little and Meng’s thesis about expert bias is correct, we might expect experts to be more pessimistic than the public, or at least expect the relative optimism of experts, compared to the public, to have declined. In fact, relative to the public, our experts consistently rate American democracy approximately 10 points higher. Nor do we observe any evidence that experts are increasingly pessimistic, relative to the public as a benchmark, over time. Indeed, BLW’s most recent survey, conducted in June–July 2023, shows the largest optimism gap yet between experts and the public.

Next, we consider how expert ratings compare to those of the public across a range of democratic principles. BLW regularly surveys both groups on 30 principles related to elections and voting; citizen rights and protections; and accountability, institutions, and norms. A full description of each principle is included in the Appendix. Figure 6 illustrates the proportion of experts (green circles) and the public (purple squares) who, in June–July 2023, rated the United States as fully or mostly meeting each standard.

Figure 6 Expert and Public Ratings of 30 Indicators of US Democratic Performance

Horizontal error bars are 95% confidence intervals. Statements are in descending order of performance ratings for experts surveyed in June–July 2023.

The pattern of expert discernment and precision relative to the general public is similar to what is observed in figure 4. Expert assessments range from 9% (districts not biased) to 90% (government statistics not politically influenced). The range in these responses highlights the value of expert-coded measures of specific, concrete variables. Expert ratings of American democracy overall tend to cluster between 65 and 70 on the 100-point scale. However, when the experts are asked about specific components, their assessments range from widespread confidence (e.g., in election integrity and many civil liberties) to near-consensus that certain principles are not met (e.g., unbiased districts or adherence to norms of mutual respect and cooperation). By contrast, the public’s assessments across the 30 standards of performance are relatively clustered. Reviewing the percentage of respondents who agree that the United States fully or mostly meets each standard, the range is from 15% to 56%, with 20 of the 30 items falling between 25% and 50%.[11]

One interpretation of figure 6 is that many respondents in our public sample may have a general sense of the state of democracy and work backward from that overall rating to assess the individual indicators of performance on which we query them. There are notable exceptions, particularly regarding issues for which there have been salient elite cues, such as fraud-free elections and equal voting rights. Nonetheless, the broad pattern appears to be reasoning from the general to the specific. By contrast, the experts—informed and opinionated—address each standard independently.

EXPERT ASSESSMENTS AND PARTISANSHIP

A final possible source of bias in expert assessments (albeit one that was not directly raised by Little and Meng) is that the BLW expert sample may be systematically more aligned with one partisan group than with another. BLW does not ask our expert respondents about their partisanship, but we do ask respondents in our public sample.[12] Figure 7 shows the correlation coefficients between three pairs of subgroups (experts and Democrats, experts and Republicans, and experts and partisan Independents) across the 30 performance indicators for each survey as far back as October 2017.[13] A few patterns are worth noting.

Figure 7 Correlation Between Experts and Partisan Groups on 30 Indicators of Democratic Performance, by Wave

The figure shows, for each survey wave, the correlation across the 30 indicators between the proportion of experts and the proportion of each partisan group rating the performance of US democracy as positive. Each point is the correlation coefficient for a given wave.
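Each point in figure 7 can be computed as a Pearson correlation across the 30 indicators within a wave. A minimal sketch follows, assuming a hypothetical summary file of group-level proportions; all names are illustrative.

```python
import pandas as pd

# Hypothetical summary file: for each wave and each of the 30 indicators,
# the proportion of each group rating US performance positively.
df = pd.read_csv("blw_indicator_proportions.csv")
# columns: wave, indicator, experts, democrats, republicans, independents

# One point in figure 7: the correlation, across the 30 indicators within
# a wave, between expert proportions and a partisan group's proportions.
corrs = df.groupby("wave").apply(
    lambda d: pd.Series({g: d["experts"].corr(d[g])
                         for g in ["democrats", "republicans", "independents"]})
)
print(corrs)
```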

First, assessments among our expert sample consistently align with those of Democrats more than with Independents and far more than with Republicans. The correlation coefficients between expert and Democratic assessments on the 30 principles hover around 0.80, with no dramatic change over time.

The second broad pattern is that what began as a moderate alignment between expert and Republican assessments decreased drastically beginning around Fall 2019, which coincides with the process that led to the first impeachment of then-President Donald J. Trump over his pressuring of Ukraine to investigate the Biden family. A second even more precipitous decrease followed in late 2020, coinciding with the immediate aftermath of the 2020 election (and Trump’s second impeachment). The sharp divergence between expert and Republican assessments largely derives from growing pessimism among Republicans and optimism among experts after the 2020 election. Republicans became more concerned about election fraud and free-speech protections; experts became less concerned about politically motivated investigations and the use of government agencies to monitor and attack political opponents. On one measure—whether politicians publicly concede defeat—experts became much more concerned than Republicans after the 2020 election.

The Republican–expert correlation rebounded sharply in November 2022, in the wake of the midterm elections, and remained approximately level in June–July 2023. This shift corresponds to increasing confidence among Republicans in fraud-free elections (recovering from a low of 18% to 35%) and increasing confidence among experts that politicians concede defeat (recovering from a low of 34% to 57%).

Third, the correlation between partisan Independents and our experts, which held steady into 2020, eroded more gradually during the next two years—although never as sharply as among Republicans—and partially recovered in late 2022 and 2023.

There are at least two competing interpretations of these patterns. The more obvious is that our sample of experts likely skews Democratic. This is unsurprising because American university faculty members are well known to skew overwhelmingly Democratic (Langbert and Stevens 2021). This skew predates the beginning of our time series and continues throughout the study period. Another possibility—albeit one that is difficult to establish systematically—is that the partisan groups differ in the soundness of their assessments of performance on our 30 democratic principles, with Democrats being more accurate than Republicans and Independents. We note that the Democrat–expert correlation remains steady throughout, whereas high-salience political events since 2019 appear to have breached any common ground that our experts shared with Republican partisans on democratic performance.

CONCLUSION

Our data provide a unique opportunity to delve into Little and Meng’s concerns about expert bias; however, it is important to highlight the limitations of our conclusions. BLW began conducting surveys only in 2017, after the onset of the period in which Little and Meng suggest coder bias may have emerged. Moreover, BLW data focus mostly on the United States, whereas Little and Meng focus on cross-national democracy indices. Finally, nothing presented here directly refutes the Little and Meng proposition that our expert assessments are increasingly biased toward pessimism relative to a ground truth. However, relative to the discipline of political science more generally and relative to the public from 2017 to 2023, we find no evidence of particular pessimism among our experts. The most highly engaged experts—whether with BLW surveys, Twitter, or V-Dem—evaluate American democracy about the same as less-engaged experts. The experts are more optimistic than the public as a whole, and expert assessments—of both specific elements of American democracy and overall democratic performance in other countries—display properties of discernment and precision that are reassuring for a highly informed sample.


ACKNOWLEDGMENTS

The authors are grateful for constructive input and feedback from Andrew Little, Anne Meng, Brendan Nyhan, and Susan Stokes, as well as from the PS editorial team and anonymous reviewers.

DATA AVAILABILITY STATEMENT

Research documentation and data that support the findings of this study are openly available at the PS: Political Science and Politics Harvard Dataverse at https://doi.org/10.7910/DVN/0H0X7O.

CONFLICTS OF INTEREST

The authors declare that there are no ethical issues or conflicts of interest in this research.

Appendix: Full statements of 30 democratic principles

  1. Government officials are legally sanctioned for misconduct

  2. Government officials do not use public office for private gain

  3. Government agencies are not used to monitor, attack, or punish political opponents

  4. All adult citizens enjoy the same legal and political rights

  5. Government does not interfere with journalists or news organizations

  6. Government effectively prevents private actors from engaging in politically-motivated violence or intimidation

  7. Government protects individuals’ right to engage in unpopular speech or expression

  8. Political competition occurs without criticism of opponents’ loyalty or patriotism

  9. Elections are free from foreign influence

  10. Parties and candidates are not barred due to their political beliefs and ideologies

  11. All adult citizens have equal opportunity to vote

  12. All votes have equal impact on election outcomes

  13. Elections are conducted, ballots counted, and winners determined without pervasive fraud or manipulation

  14. Executive authority cannot be expanded beyond constitutional limits

  15. The legislature is able to effectively limit executive power

  16. The judiciary is able to effectively limit executive power

  17. The elected branches respect judicial independence

  18. Voter participation in elections is generally high

  19. Information about the sources of campaign funding is available to the public

  20. Public policy is not determined by large campaign contributions

  21. Citizens can make their opinions heard in open debate about policies that are under consideration

  22. The geographic boundaries of electoral districts do not systematically advantage any particular political party

  23. Even when there are disagreements about ideology or policy, political leaders generally share a common understanding of relevant facts

  24. Elected officials seek compromise with political opponents

  25. Citizens have access to information about candidates that is relevant to how they would govern

  26. Government protects individuals’ right to engage in peaceful protest

  27. Law enforcement investigations of public officials or their associates are free from political influence or interference

  28. Government statistics and data are produced by experts who are not influenced by political considerations

  29. The law is enforced equally for all persons

  30. Incumbent politicians who lose elections publicly concede defeat

Footnotes

1. Every Bright Line Watch survey instrument is reviewed—and all have been deemed exempt—by the Institutional Review Boards of Dartmouth College, the University of Chicago, and the University of Rochester. Every participant in each survey is informed that all responses will remain anonymous.

2. Little and Meng (2023) note that the average number of coders for V-Dem per country in the 2010s was approximately 11. By contrast, there are almost 6,000 unique respondent IDs across the 18 BLW survey waves, indicating that more than half of those invited have participated at least once.

3. We used a survey firm to manage survey invitations and link each email address to a unique respondent ID to avoid sending follow-up invitations to those who have already responded to a given survey. We had access only to the unique respondent IDs and do not know to whom they correspond.

4. Only a few respondents have taken 15 or more surveys (thank you, one and all!). To limit crowding in the figure, we show markers for only those who have taken up to 14 surveys, and we include facets for only the first four waves that included the 100-point scale rating. An analogous figure for all survey waves shows the same pattern.

5. Of course, those who have never participated could be systematically different from those who have.

6. Two mechanisms may explain this phenomenon. First, experts who evaluate the threat to democracy to be low may choose to select out of public discussions of the topic (e.g., because they feel social pressure to conform). Second, even assuming a uniform propensity to engage in the debate, if the most noteworthy pieces of information attract more discussion, this may lead to an overrepresentation of threats to democracy.

7. See the Bright Line Watch full report on this survey, and its paired survey of a representative sample of Americans, at http://brightlinewatch.org/uncharted-territory-the-aftermath-of-presidential-indictments.

8. Of these, 227 (48%) respondents expressed a willingness to participate if asked.

9. Moreover, the differences between V-Dem invitees and V-Dem–willing experts may themselves differ between political scientists who participate in our surveys and those who do not.

10. The first wave of BLW surveys (i.e., February 2017) asked expert respondents to rate the performance of US democracy on a 1–10 scale. We switched to the 100-point scale for our second expert survey in May 2017 and added a public sample in October 2017.

11. Respondents in both samples could select “not sure.” Typically, fewer than 1% of experts answered “not sure.” Among the public, the interquartile range of proportions of “not sure” answers across all survey waves was 10% to 17%. We excluded “not sure” responses.

12. We repeatedly imposed on our expert sample, asking for their time every few months. We want to unmistakably signal to potential respondents that we do so because their specific expertise brings unique value to the exercise.

13. Consistent with conventional practice, we categorized as Independent only those who do not lean toward one major party or the other. Leaners were categorized with their proximate partisans.

REFERENCES

Bisbee, James, Larson, Jennifer, and Munger, Kevin. 2022. “#polisci Twitter: A Descriptive Analysis of How Political Scientists Use Twitter in 2019.” Perspectives on Politics 20 (3): 879–900. https://doi.org/10.1017/S1537592720003643.
Boese-Schlosser, Vanessa A., Alizada, Nazifa, Lundstedt, Marin, Morrison, Kelly, Natsika, Natalia, Sato, Yuko, Tai, Hugo, and Lindberg, Staffan I. 2022. “Autocratization Changing Nature?” Democracy Report, March 1. Varieties of Democracy Institute (V-Dem). http://doi.org/10.2139/ssrn.4052548.
Langbert, Mitchell, and Stevens, Sean. 2021. “Partisan Registration of Faculty in Flagship Colleges.” Studies in Higher Education 47 (8): 1750–60. https://doi.org/10.1080/03075079.2021.1957815.
Little, Andrew T., and Meng, Anne. 2023. “Measuring Democratic Backsliding.” PS: Political Science & Politics, this issue.
Robertson, Claire E., Pröllochs, Nicolas, Schwarzenegger, Kaoru, Pärnamets, Philip, Van Bavel, Jay J., and Feuerriegel, Stefan. 2023. “Negativity Drives Online News Consumption.” Nature Human Behaviour 7: 812–22. https://doi.org/10.1038/s41562-023-01538-4.
Sacerdote, Bruce E., Sehgal, Ranjan, and Cook, Molly. 2020. “Why Is All COVID-19 News Bad News?” National Bureau of Economic Research, Working Paper No. 28110. https://doi.org/10.3386/w28110.
