
2 - Misinformation, Disinformation, and Online Propaganda

Published online by Cambridge University Press:  24 August 2020

Nathaniel Persily, Stanford University, California
Joshua A. Tucker, New York University

Summary

The research literature on misinformation, disinformation, and propaganda is vast and sprawling. This chapter discusses descriptive research on the supply and availability of misinformation, patterns of exposure and consumption, and what is known about mechanisms behind its spread through networks. It provides a brief overview of the literature on misinformation in political science and psychology, which provides a basis for understanding the phenomena discussed here. It then examines what we know about the effects of misinformation and how it is studied. It concludes with a discussion of gaps in our knowledge and future directions in research in this area.

Type: Chapter
Information: Social Media and Democracy: The State of the Field, Prospects for Reform, pp. 10-33
Publisher: Cambridge University Press
Print publication year: 2020
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

Introduction

Not long ago, the rise of social media inspired great optimism about its potential for flattening access to economic and political opportunity, enabling collective action, and facilitating new forms of expression. Its increasingly widespread use ushered in a wave of commentary and scholarship seeking to meld well-established bodies of knowledge on mass media, economics, and social movements with the affordances of this new communication technology. Several political upheavals and an election later, the outlook in both the popular press and scholarly discussions is decidedly less optimistic. Facebook and Twitter are more likely to be discussed as incubators of “fake news” and propaganda than as tools for empowerment and social change. The resulting research focus has changed, too, with scholars looking to earlier literatures on misperceptions and persuasion for insight into the challenges of the present.

The terms “misinformation,” “disinformation,” and “propaganda” are sometimes used interchangeably, with shifting and overlapping definitions. All three concern false or misleading messages spread under the guise of informative content, whether in the form of elite communication, online messages, advertising, or published articles. For the purposes of this chapter, we define misinformation as a claim that contradicts or distorts common understandings of verifiable facts. This is distinct conceptually from rumors or conspiracy theories, whose definitions do not hinge on the truth value of the claims being made. Instead, rumors are understood as claims whose power arises from social transmission itself (Berinsky 2015). Conspiracy theories have specific characteristics, such as the belief that a hidden group of powerful individuals exerts control over some aspect of society (Sunstein and Vermeule 2009).

Misinformation, by contrast, is false by definition. Determining what is “false” can, of course, raise thorny epistemological issues: Virtually all our experience of the world beyond our immediate perception is mediated in some way, whether by institutions such as the media or by connections with other individuals, raising concerns about the attainability of objectively verifiable truth in any context outside of the scientific method. Empirical research thus tends to focus by necessity on claims that can be either directly verified (Is Barack Obama an American citizen? Is the election on Wednesday?) or bolstered by a reasonable consensus of appropriate experts or authorities (Did GDP increase last year?). This approach limits but does not eliminate controversy on factual matters. It implies, moreover, that unverifiable claims cannot strictly speaking be classified as misinformation.

Following Tucker et al. (2018), we define disinformation as the subset of misinformation that is deliberately propagated. This is a question of intent: Disinformation is meant to deceive, while misinformation may be inadvertent or unintentional. Proving intent can sometimes be more difficult than proving whether something is true, but, practically speaking, organized attempts to propagate misinformation by political actors – whether domestic or foreign – are typically thought of as disinformation. Another type of disinformation is the false content known as “fake news,” or deliberately misleading articles designed to mimic the look of actual articles from established news organizations. Finally, Tucker et al. (2018) define propaganda as information that can be true but is used to “disparage opposing viewpoints.” In this chapter, we will refer to any communications that are intended to persuade people to support one political group over another as propaganda.

The research literature on misinformation, disinformation, and propaganda is vast and sprawling. This chapter will focus on empirical findings that focus on the production, supply, consumption, and dissemination of these materials online. While the spread of various forms of dubious and misleading content is closely related to important phenomena such as political polarization, incivility, hate speech, trolls, and bots, this chapter will not cover them in depth. In addition, research on efforts to correct misinformation and rumors is covered in Chapter 8 in this volume.

We will focus, instead, on two main types of findings: First, we discuss descriptive research on various types of misinformation and propaganda. This covers the supply and availability of misinformation, patterns of exposure and consumption, and what is known about mechanisms behind its spread through networks. One theme of this chapter is that, while social science research traditionally places a premium on causal identification of effects – appropriately so, given the task of evaluating falsifiable hypotheses – the state of knowledge about social media and misinformation is unsettled enough that assembling basic descriptive data remains a valuable and pressing task. This is true for at least two reasons. First, both the experience of social media and its effects are likely to be highly heterogeneous, raising questions about the generalizability of experimental estimates. Second, the most informative data are often proprietary and unavailable to the public, leaving researchers to make assumptions about the overall landscape about which they are testing counterfactual propositions.

These descriptive findings contextualize and inform the nascent literature on the effects of exposure to online misinformation. Owing to practical and ethical restrictions, such research is necessarily conducted in artificial settings, often with convenience samples, but it provides an opportunity to check intuitions about the hypothetical effects of content such as fake news stories seen on Facebook. Combining estimates of effect size with what is known about the spread and prevalence of similar content during specific time periods, it might be possible to check intuitions about its role in real-world outcomes. In these experiments, the dependent variables that are typically studied relate either to beliefs about the claims made (i.e., misperceptions) or to behaviors ranging from sharing and engagement on social media to voter turnout and vote choice.

In the following section, we provide a brief overview of the literature on misinformation in political science and psychology, which provides a basis for understanding the phenomena discussed in this chapter. We then turn to what we know about the production of disinformation and the supply and availability of misinformation more broadly online. We then focus on the consumption side, with a section on exposure and its correlates on the individual level. One important factor determining exposure is how misinformation is spread and disseminated, which we cover next. The penultimate section looks at what we know about the effects of misinformation and how it is studied. We conclude with a discussion of gaps in our knowledge and future directions in research in this area.

Misinformation and Misperceptions

The misinformation literature in political science can be said to begin with the canonical study by Kuklinski et al. (2000). Over two experiments, the authors demonstrated that subjects tend to hold incorrect beliefs about various aspects of welfare policy, such as the portion of the federal budget dedicated to welfare programs and the percentage of recipients who are African American. These beliefs were related to policy views; moreover, the misinformed were more likely to be confident in their beliefs than the correctly informed. This study was among the first to explore the link between misinformation and misperceptions, that is, people with incorrect (rather than nonexistent) factual beliefs.

This strand of research was picked up a few years later as real-world developments suggested the possibility that large portions of the public were misinformed about basic factual matters. When surveys showed that Republicans were substantially more likely to believe that there were weapons of mass destruction in Iraq, even after the administration had abandoned that rationale, scholars took note (Shapiro and Bloch-Elkon 2006; Jacobson 2010). Similarly, as commentators and conservative websites raised the specter of “death panels” during the debate on health reform, analysts noticed that the belief was hardest to dislodge among those who considered themselves the most knowledgeable (Nyhan 2010). As these two examples illustrated, not only did misperceptions seem to be commonly held but they sometimes seemed to originate from concerted efforts to persuade people of those beliefs. Perhaps as a result of the political nature of those efforts, the misinformed views were not evenly distributed across the population: They were concentrated among partisans and among those with higher levels of knowledge and exposure to political discourse. This pattern has recurred repeatedly, as with the myth that Obama is a Muslim.

These findings should come as no surprise to those familiar with the well-developed literature on partisan bias, which can be traced back to at least The American Voter (Campbell et al. 1960) and explores the extent to which “perceptual screens” color individuals’ beliefs about objective facts such as the state of the economy (Achen and Bartels 2017). Today, such phenomena tend to be examined within the framework of motivated reasoning, which views partisan gaps in attitudes and factual beliefs as a function of protective mechanisms such as confirmation bias and selective avoidance (Taber and Lodge 2006; Flynn, Nyhan, and Reifler 2017). What distinguishes much of the later research on misinformation is its focus on misperceptions in factual beliefs and their effect on opinion – rather than the effects of partisan or other commitments on factual beliefs.

This focus on the effects of misinformation, particularly on attitudes and opinions, led to a neglect in basic research on the prevalence, supply, and spread of this content, particularly on social media. As public polls remind us every day, misperceptions are common; but where do they come from? Traditional approaches to mass media imply a broadcast model in which propaganda and disinformation are disseminated from the top down by governments possessing the most powerful megaphone. Today, however, misinformation originating from any number of small outlets can spread organically through existing social networks online. Who is producing it, and why? What kind of misinformation is being published, and what are the processes by which it is shared by individuals?

Production of Disinformation

Because of the nature of disinformation, inquiry into its producers has been limited; those who intend to mislead others also tend to mask their identity.[1] Scholarship addressing producers has focused on a few key groups that have distorted the information ecosystem in recent years. These include Macedonian teenagers publishing pro-Trump fake news during the 2016 election, the Internet Research Agency’s Russian “troll farm,” and collaborative anonymous networks such as those found on forums like 4chan or 8chan. Research in this area is descriptive and complements the rich work on the sociology of news production by tracking the production of its antithesis. Academic work is complemented by reports from journalists, intelligence agencies, and social media platforms themselves. Key facets revealed in this work include the identities and physical locations of much of the systematic disinformation production in recent years; motives for producing disinformation; and the nature of producers’ organizational practices and methods of dissemination (we cover the latter in greater depth in the section “Spread and Dissemination of Misinformation”). Even so, we are left with a still-incomplete picture because disinformation producers are, by nature, a difficult group of actors to study. Further, as with other questions we discuss – supply, consumption, dissemination – our knowledge is concentrated on Western, and especially US-specific, contexts.

One group of disinformation producers that has received attention in recent years was based in Macedonia. In the town of Veles, roughly 100 pro-Trump fake news sites were registered and operated in the run-up to the 2016 election. These publications used American-sounding domains such as USADailyPolitics.com, WorldPoliticus.com, and DonaldTrumpNews.com. Our knowledge about the Macedonian case is largely drawn from journalistic accounts, such as those published in Wired (Subramanian 2017), BBC News (Kirby 2016), and BuzzFeed News (Silverman and Alexander 2016). Notably, these reports suggest that the motive behind most of the pro-Trump fake news publication in 2016 was profit (Tynan 2016): teenagers producing these stories earned up to $8,000 per month (twenty times the typical wage in Veles during this time period).[2] Because these producers were driven by profit rather than ideology, the preponderance of pro-Trump content in 2016 was apparently driven by superior engagement metrics relative to left-leaning fake news (Bakir and McStay 2018).

Another source of disinformation that has received considerable scrutiny is Russia’s Internet Research Agency (IRA), a “troll factory” propaganda effort (Bastos and Farkas 2019). The IRA came into the limelight in part due to congressional investigations into Russian meddling in the 2016 US elections. As in the Macedonian case, journalists have contributed significantly to our knowledge of the IRA via interviews with former employees. In addition, academics have conducted several analyses of IRA Twitter activity, which largely corroborate these interviews.

Journalistic accounts show that the IRA operated in an industrialized fashion, with division of labor based on geographic targets and platform specialization (Volchek and Sindelar 2015). According to interviews, individual operators were responsible for multiple fake accounts and a high volume of expected contributions – ranging from fifty comments daily on news articles, to the maintenance of six Facebook pages with three daily posts, to the maintenance of ten Twitter accounts with at least fifty daily tweets (Dawson and Innes 2019). Workers also were reportedly given daily topics to focus on and keywords to include. The IRA reportedly experienced high worker turnover. While driven by Russian interests at the organizational level, individual workers were probably not typically ideologically invested in the work (Koreneva 2015).

Owing to congressional interest, Twitter has made publicly available datasets of accounts linked to the IRA. According to Twitter, 3,814 accounts were operated by the IRA (Twitter 2018). Analyses of the data corroborate and expand on interviews conducted by news agencies. For instance, accounts of heavy workloads and little personal investment are backed up by data showing significant messaging repetition (Dawson and Innes 2019).[3] Likewise, through the analysis of metadata, Boyd et al. (2018) show that “the IRA’s operations were largely unsophisticated and ‘low-budget’ in nature, with no serious attempts at point-of-origin obfuscation being taken” (p. 1). Activity patterns for IRA-linked accounts map directly onto standard Moscow business hours. Boyd et al. (2018) also show that IRA Twitter activity displays markedly different linguistic patterns from standard English-language tweets, suggesting little attention to masking its foreign origin. Descriptions of the IRA as an assembly line are supported by studies showing that Twitter handles were sorted into one of several groups and then used interchangeably based on strategic goals (e.g., influencing different demographic targets in the United States) and Twitter bans (Linvill et al. 2019). Farkas and Bastos (2018) similarly show that the IRA maintained a range of different spoofed account types for various tasks, including “openly pro-Russian profiles, local American and German news sources, pro-Trump conservatives, and Black Lives Matter activists.”

Analyses of these Twitter datasets also demonstrate a range of tactics the IRA employed to influence Americans and other targets. A study by Yin et al. (2018) showed that IRA accounts shared significantly more junk news, particularly around the 2016 election, than other users but still only accounted for 6 percent of all links. These accounts also shared local news fifteen times more often than politically interested users and ninety times more than average users in an attempt to take advantage of trust in local news sources. These accounts frequently impersonated American partisans (Yin et al. 2018) but only shared disinformation around 20 percent of the time, spending the rest of the time mimicking interests and values of the social identity being spoofed (Dawson and Innes 2019), making definitive attribution as Russia-based disinformation more difficult for those exposed.

Finally, there are also more nebulous groups of producers that are not geographically bound or centrally organized. For instance, Marwick and Lewis (2017) provide a qualitative analysis of the communities that foster far-right extremists online, detailing the spaces where these actors convene and the tactics they sometimes employ to spread disinformation. The authors argue that participatory media are key to these producers’ ability to manipulate mainstream media, allowing those with fringe views to collaborate on the production and dissemination of content. First, those interested in spreading far-right ideologies and the disinformation that supports them often frequent anonymous discussion spaces such as 8chan and 4chan. Many ideological blogs and websites are also important hubs for politically motivated conspiracy theories, ranging from Infowars to The Daily Stormer. These sites constitute a network in that they link to one another and engage with each other’s content. Finally, mainstream social media platforms (Twitter, Facebook, YouTube) are used by members of these groups to spread disinformation and conspiracy theories to larger numbers of people and to seed topics for journalists. Producers in this broad grouping display a mix of ideological and economic motives, and some may be motivated simply by “the enjoyment they get at the expense of others” (Marwick and Lewis 2017).

Supply of Misinformation

In the academic research literature, there are very few studies that have attempted to estimate quantities related to the supply or availability of misinformation online. This is due in part to the inherent challenge of establishing a “ground truth” standard for what constitutes misinformation or subsets of interest such as fake news; contested judgments about the veracity of a subset of published articles must be used to draw inferences about the production and availability of similar content. Since not all content is shared or consumed equally (or at all), there is additionally a concern about a biased or incomplete search of the set of potential sources of misinformation. As with all research in this area, inferences about the processes behind the consumption of misinformation begin at the end, with observations of the public dissemination of particular dubious content. The challenge is to move backwards through multiple rounds of selection to begin to describe the choice environments people are confronted with in the first place.

With these challenges in mind, one approach to answering these questions is to explore the role of misinformation in the overall media ecosystem. Such studies constitute a common strand in communication research, where media ecology explores the relationships between networked actors and how they influence each other. In a major study on the role of online propaganda and disinformation in the 2016 US presidential election, Benkler, Faris, and Roberts (2018) use articles from online media sources and the links between them to conduct an analysis of the role of partisanship and disinformation in the coverage of the candidates and issues during the campaign. They find, first, that there seems to be a connection between a strong partisan slant and the publishing of dubious content. Moreover, publishers often classified as “hyperpartisan,” as well as those known to produce fake content, appear distinct in that their stories are shared much more often on social media. Additional research has verified the existence of a dense “fake news” ecosystem in which automated bots appear to play an important role (Hindman and Barash 2018). These findings confirm the initial reporting of Silverman (2016), whose investigations of fake news generated much of the initial journalistic and scholarly interest in the phenomenon. Beyond the importance of social media as a locus of dissemination, Benkler and colleagues find that looking at sites that were more popular on Facebook than on Twitter reveals a list strikingly similar to commonly referenced fake news purveyors such as Ending the Fed, Bipartisan Report, and Western Journalism.

These characteristics of publishers provide some clues about the sources and dynamics of the online misinformation ecosystem. Yet what was the partisan lean of the stories being produced? In a study of fake news consumption behavior, Guess, Nyhan, and Reifler (2018) estimate the proportion of stories published by fake news domains that were pro-Trump and pro-Clinton during the 2016 campaign. To obtain supply-side data, the authors use a list of “fake news” domains and scrape the text of all articles from those domains still available on the Internet Archive’s Wayback Machine. Between June and the election, nearly 500,000 such articles were published. Then, using supervised learning on a hand-coded set of articles, the authors estimate that 93.5 percent of the supply for which they could gauge slant was pro-Trump in orientation. Of course, these estimates assume that all articles from fake news domains are themselves false or dubious; this is likely not true. Nonetheless, these findings point to a large absolute number of articles being generated by these producers and a highly lopsided slant that tended to favor Donald Trump over Hillary Clinton.
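
To make the supervised learning step concrete, the sketch below shows one common way such an estimate could be produced: train a text classifier on the hand-coded sample and apply it to the full scraped corpus. The file names, label values, and model choices here are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: estimate the share of pro-Trump articles in a scraped corpus
# by training a classifier on a small hand-coded sample. Labels, file names,
# and model choices are illustrative assumptions, not the published pipeline.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

coded = pd.read_csv("hand_coded_articles.csv")   # columns: text, slant ("pro_trump"/"pro_clinton")
corpus = pd.read_csv("scraped_articles.csv")     # column: text (full scraped supply)

model = make_pipeline(
    TfidfVectorizer(max_features=50_000, ngram_range=(1, 2), stop_words="english"),
    LogisticRegression(max_iter=1000),
)
model.fit(coded["text"], coded["slant"])

corpus["predicted_slant"] = model.predict(corpus["text"])
share_pro_trump = (corpus["predicted_slant"] == "pro_trump").mean()
print(f"Estimated pro-Trump share of the supply: {share_pro_trump:.1%}")
```

In practice, any such classifier would also need to be validated against held-out hand-coded articles before the aggregate share is trusted.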

In line with this finding, Benkler et al. (2018) reveal highly asymmetric patterns in the online media ecosystem. The authors argue that partisan conservative media represent a cluster that encompasses far-right, hyperpartisan, and outright fake sites as well as more mainstream conservative outlets that serve to amplify extreme and/or misleading content and fail to check factual excesses. By contrast – and like mainstream media in general – left-wing partisan media tend to be more constrained by journalistic practices and norms than their right-wing counterparts. This creates an asymmetry in the ideological valence of extremist content and misinformation that circulate on social media.

Consumption of Misinformation

Why were these articles being generated? Quite simply, there is and was demand for them. Given an increasingly fragmented media ecosystem and the power of social media gatekeepers to drive traffic, this does not necessarily mean that people are clamoring for a continuous stream of fake news. It potentially means, however, that, within a multifaceted information environment with streams of news and other distractions, misinformation – often designed to be vivid and compelling – can often command people’s limited attention (and therefore clicks). That the contours and incentives of social media ranking algorithms can change drastically and without warning is well known, as the once-mighty clickbait purveyors Upworthy and Demand Media discovered. Yet, within any given regime, it appears that producers of misinformation, disinformation, and fake news were able to tailor content to maximize engagement and, with it, online advertising revenue.

Both to understand how this targeting process works and to address questions about the prevalence and impact of online misinformation, it is necessary to move beyond macro-level analyses of the availability of content and how it is situated within a larger media ecosystem. To explore these demand-side dynamics more fully, a number of studies have focused on analyzing the consumption of misinformation by individuals. The simplest approach is to ask people in surveys whether they recall having seen, clicked on, or read a particular article, such as a fake news story in 2016. However, relying on self-reported survey measures can lead to biased conclusions due to faulty recall, social desirability concerns, and other sources of misreporting (Prior 2013; Guess 2015). These problems are likely exacerbated when eliciting responses about individual news items rather than exposure to a news source in general. Anticipating this challenge, Allcott and Gentzkow (2017) included “placebo” fake news stories – articles designed to look like “fake news” that had not actually been published – to estimate the baseline level of false recall by respondents. They found that 14 percent of respondents reported seeing the “fake fake” articles and 15 percent reported seeing the “real fake” articles. After correcting for misreporting, their estimate is that the average American adult saw and remembered slightly more than one fake news article in 2016.
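
The logic of the placebo correction can be reduced to a simple subtraction: recall of fake stories that actually circulated, minus recall of the invented placebo stories, approximates genuine recall. The sketch below uses the percentages quoted above; extrapolating from this per-story rate to the roughly one article remembered per adult depends on further details of the authors' procedure (such as the number and circulation of stories asked about) that are not reproduced here.

```python
# Illustrative back-of-the-envelope version of the placebo correction described
# above, using the percentages quoted in the text. This shows only the core
# subtraction, not the authors' full estimation procedure.
reported_real_fake = 0.15   # share recalling fake stories that actually circulated
reported_placebo = 0.14     # share recalling invented "placebo" fake stories

corrected_recall_per_story = reported_real_fake - reported_placebo  # ~1 percentage point
print(f"Corrected recall per story asked about: {corrected_recall_per_story:.2%}")
```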

Even this clever estimation approach is subject to the limitations of surveys – namely, the ability to ask only about the recall of a relatively small sample of articles. A more direct way of studying consumption patterns is to obtain web visit data, either in aggregated form from analytics firms or from individual-level tracking data. Guess, Nyhan, and Reifler (2018) collected these kinds of tracking data, linked to a national online survey, over five weeks near the end of the 2016 campaign (N = 2,525). Using lists derived from existing research, the authors estimate that approximately 27 percent of Americans were exposed to at least one fake news article – a potentially large number representing more than 65 million people in the United States. However, as a share of a broad category of “hard news” visits, the proportion is small, on the order of 2 percent. Other studies have found comparable results using aggregated traffic data from various sources. Analyzing comScore multiplatform data covering desktop and mobile visits, Nelson and Taneja (2018) find that the ratio of monthly visits to “real” news sites versus fake news sites was 40 to 1. The average audience size for a fake news site in a given month was 675,000 – a far cry from the kinds of numbers generated by “engagement” metrics on social media. In another study, from Microsoft Research, anonymized web visit data from Internet Explorer 11 and Edge browsers in the United States were checked against a list of fake news domains over a period from July to November 2016 (Fourney et al. 2017). In line with the other estimates, the authors find, for example, that, on a given day, 0.34 percent of users sending data visited a fake news site.

How did people encounter this information? In the following section, we explore research findings on the spread and dissemination of misinformation online in general. Given the prominent role of social media in narratives about fake news, we also consider existing evidence on its prevalence on these platforms. Facebook-wide data on the prevalence and availability of misinformation are sparse, but Twitter’s open application programming interface (API) allows for estimates of the amount of fake news a typical user may have seen. Grinberg et al. (2019) match Twitter accounts to American voter file data and use follow patterns to estimate how much of people’s feeds may have contained fake news in 2016. In line with the other estimates, prevalence is low – 5 percent of political content originated from fake news sources – although, due to the lack of exposure measures on Twitter, this fraction constitutes potential exposure to online misinformation. Although, as we discuss in the next section, Facebook appears to be much more powerful as a dissemination mechanism for misinformation, these results from Twitter are still striking. While about double the Guess, Nyhan, and Reifler (2018) estimate, the figure is on the same order of magnitude and much lower than data from shares would imply; and the findings do not appear to be an outlier: According to a study from the Politoscope Project, 4,888 out of 60 million tweets (less than 0.01 percent) during the 2017 French presidential election contained a link to a story determined by online fact-checkers to be false.

If these empirical findings seem at odds with popular narratives about fake news and online misinformation, it may be because the averages obscure another recurring finding: the highly skewed nature of consumption patterns. This can be illustrated in multiple ways. Looking specifically at fake news articles with a clear pro-Trump slant, Guess, Nyhan and Reifler find in their web consumption data that more than 40 percent of Trump supporters (as determined by the linked survey data) read at least one article, compared to less than 15 percent of Clinton supporters. Even more striking is that, when respondents are grouped according to the overall ideological lean of their news consumption habits, more than 65 percent in the most conservative decile visited at least one fake news website. Grinberg and colleagues similarly find that 1 percent of Twitter users in their sample accounted for roughly 80 percent of the potential fake news exposures that they identified. Regardless of the data or approach, it appears that fake news consumption is relatively rare but highly concentrated among key subgroups.

In Europe, the story appears to be similar. Evidence so far suggests that the reach of fake news sites was limited: Using analytics data from comScore and CrowdTangle, Fletcher et al. (2018) found that their sample of fake news sites in France and Italy had an average monthly reach of 3.5 percent. (For comparison, that number for the major newspaper Le Figaro was 22.3 percent and for La Repubblica it was 50.9 percent.) People also spent less time on these sites than on news sites. Interestingly, they found that, despite these consumption figures, social media engagement metrics for a subset of these fake news sites approached or even exceeded those of “real” news sites. This suggests another explanation for the divergence between consumption figures and perceptions of widespread dissemination: A small fraction of both producers and consumers of fake news can generate the vast bulk of online engagement, even if most people never encounter it. Similarly, in an analysis of junk news during the 2019 EU parliamentary elections, Marchal et al. (n.d.) show that less than 4 percent of EU-related news links circulating on Twitter during this time were from disreputable sources. Notably, though, the Polish Twitter sphere stood out, with junk news comprising 21 percent of traffic. Still, just as Fletcher et al. (2018) found, individual stories from these outlets often surpassed traditional news engagement metrics on Facebook. The most “successful” junk news stories were found to center on populist, anti-immigration, and Islamophobic themes.

In sum, the consumption of various forms of online misinformation at the individual level is relatively limited as a share of people’s overall information diets, on average. However, this can mask differences between subgroups; people with strongly partisan news consumption habits may be much more likely to encounter and consume pro-attitudinal misinformation. Given the heterogeneity and skew of the prevalence of misinformation online, it is important to be cautious about making generalizations on the basis of highly aggregated data and “engagement” metrics, which can be difficult to interpret and whose magnitudes can be misleading.

Spread and Dissemination of Misinformation

How does misinformation spread online? Researchers have most often tried to address this question by turning to Twitter and analyzing retweet networks for links to articles from low-credibility sources or for content found by fact-checkers to be false (Shao et al. 2018; Vosoughi et al. 2018). By analyzing these networks, this body of work identifies key actors in online diffusion. Some of these studies have identified an amplifying role for social bots in these online networks (Bessi and Ferrara 2016; Ferrara 2017; Gorwa 2017; Shao et al. 2017). Using the Botometer machine-learning algorithm to detect social bots, for instance, Shao et al. (2017) find that relatively few users – likely bots – account for a great deal of the traffic surrounding pieces of misinformation. These bots spread misinformation with specific strategies. First, they amplify false content in the early stages of dissemination, before it achieves organic spread. Second, bots single out influential accounts, trying to leverage their influence by gaining their attention through replies and mentions. These studies find that people retweet bots just as much as other humans, suggesting the strategies are at least in part effective.
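
As a rough illustration of the kind of descriptive statistic this work reports, the sketch below takes a hypothetical table of accounts with bot scores (from any bot-detection classifier, not necessarily Botometer) and counts of low-credibility link shares, then computes the fraction of that traffic attributable to likely bots. The data, column names, and 0.8 threshold are all assumptions made for the example.

```python
# Minimal sketch: estimate what share of low-credibility link shares comes from
# accounts flagged as likely bots. The input data and the 0.8 threshold are
# illustrative assumptions; bot scores could come from any detection model.
import pandas as pd

accounts = pd.DataFrame({
    "account":        ["a", "b", "c", "d", "e"],
    "bot_score":      [0.95, 0.10, 0.88, 0.35, 0.92],  # hypothetical classifier output
    "misinfo_shares": [420, 3, 310, 12, 275],           # shares of low-credibility links
})

likely_bot = accounts["bot_score"] >= 0.8
bot_share = accounts.loc[likely_bot, "misinfo_shares"].sum() / accounts["misinfo_shares"].sum()
print(f"Share of low-credibility traffic from likely bots: {bot_share:.1%}")
```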

In addition to social bots, Andrews et al. (2016) identify “breaking news” sites as key propagators of misinformation on Twitter. Twitter users attribute trust to these accounts, which mimic legitimate news sources and project an air of authority. This allows “breaking news” sites to build large, credulous follower bases, to which they broadcast misinformation in a definitive tone. In other words, a combination of social media users’ psychology – how they attribute credibility – and platform affordances can help spread misinformation.

Aside from key actors, then, other mechanisms of diffusion include a mix of biases – cognitive, social, and algorithmic (Shao et al. 2017). Information diffusion tends to be bounded by limited attention resources; information disseminated during an “attention burst” – a period of demand for a given topic – is more likely to gain traction (Ciampaglia, Flammini, and Menczer 2015). Beyond these basic cognitive constraints, social media users are often embedded in homogeneous clusters, mixed findings on echo chambers notwithstanding (Guess, Lyons et al. 2018). These network configurations can encourage exposure to and dissemination of agreeable misinformation (Del Vicario, Bessi, and Zollo 2016; Shin et al. 2017). Likewise, social media users place a great amount of trust in their close friends. When it comes to expressing trust in news shared on Facebook, for example, research suggests the person who shared it matters more than the news organization that produced it (American Press Institute 2017). Users are more likely to think news is accurate and well balanced when it is shared by someone they trust, which may encourage the spread of misinformation, especially because platforms surface these close friends’ posts in the name of engagement. Algorithmic bias, then, arises from the design of most social media platforms, which prioritize engagement and favor popular content over trustworthy content in users’ feeds (Ciampaglia et al. 2018).

Much of the research on dissemination and spread lacks individual-level data, limiting the kinds of conclusions that can be drawn about the types of people who are more likely to share online misinformation. One recent exception is Guess, Nagler, and Tucker (2019), who examine the individual-level determinants of fake news sharing behavior on Facebook. By combining anonymized profile data with a representative survey of Americans, they find that the most consistent predictor of sharing a fake news article to one’s friends is age: Those in the oldest age groups were much more likely to post links to fake news. As the authors discuss, this observational study is not able to disentangle the mechanisms behind the age effect, but one possibility is that age is a proxy for digital media literacy, which may be related to perceptions of source credibility and therefore the likelihood of believing dubious information posted on social media. Grinberg et al. (2019) similarly find evidence for an association with age. The authors also uncover an important empirical regularity that parallels their findings on exposure: The sharing of fake news on Twitter reflects an extreme power-law pattern in which 0.1 percent of users in their sample shared 80 percent of the content.
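
This kind of "supersharer" concentration is straightforward to compute from per-user share counts: sort users by how many fake news links they shared and ask what fraction of all shares the top 0.1 percent account for. The sketch below uses synthetic heavy-tailed data purely for illustration; with real data, the share counts would simply be one observed value per user.

```python
# Sketch of the concentration statistic: what fraction of all fake news shares
# comes from the top 0.1% of sharers? The lognormal synthetic data are purely
# illustrative stand-ins for observed per-user share counts.
import numpy as np

rng = np.random.default_rng(0)
shares = rng.lognormal(mean=0.0, sigma=2.5, size=100_000).astype(int)  # heavy-tailed counts

sorted_shares = np.sort(shares)[::-1]
top_k = max(1, int(0.001 * len(sorted_shares)))        # top 0.1% of users
top_share = sorted_shares[:top_k].sum() / sorted_shares.sum()
print(f"Top 0.1% of users account for {top_share:.1%} of shares")
```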

As a useful point of comparison, several studies have examined the diffusion of accurate information alongside misinformation, thus examining patterns at a more generalizable level (while also moving beyond examinations of single cases of misinformation, such as the Haitian earthquake of 2010 [Oh, Kwon, and Rao 2010] and the Boston Marathon bombing of 2013 [Starbird et al. 2014]). “Truth and falsity spread differently,” Vosoughi, Roy, and Aral (2018) find, and “factors of human judgment explain these differences.” These researchers examine comprehensive data on all fact-checked rumors on Twitter from its inception in 2006 through 2017. Falsehoods traveled farther, faster, deeper, and more broadly on Twitter during this time. Furthermore, misinformation spread virally – not simply through broadcast dynamics but through peer-to-peer processes. Importantly, false political news spread deeper and more broadly, and was more viral, than any other category of misinformation.

Why does misinformation beat out its competitors? Contrary to expectations, characteristics of individual posters appear to play no role in falsehoods’ greater velocity: Users who spread misinformation were more likely to have unverified accounts, to have fewer followers, and to be less active on the platform (Vosoughi et al. 2018). Instead, novelty seems to be the biggest driver of misinformation’s diffusion. Using topic modeling, the researchers find that pieces of misinformation that users decided to pass along offered “significantly higher information uniqueness” than other tweets they had seen in recent weeks, and, accordingly, users expressed greater surprise and disgust when passing along misinformation.

One of the key limitations of studies of diffusion is the public availability of data across platforms. Most researchers therefore rely on Twitter data, overlooking how misinformation spreads on the world’s largest social media site, Facebook. Del Vicario et al. (2016) fill part of this gap by scraping all posts from sixty-seven public Facebook pages – about half devoted to conspiracy theories and half to science news. Although more limited in scope, these data can serve as a Facebook-based proxy for the explorations of high- and low-quality information diffusion conducted elsewhere. For both science news and conspiracy theory pages, diffusion was primarily driven by selective exposure. In other words, users preferred one or the other, generating something approaching separate echo chambers. However, while science news typically reached a high level of diffusion quickly and tapered off, the opposite was true for conspiracy theory content – this form of misinformation spread slowly, but interest increased over its lifetime. This general differential diffusion pattern is echoed in the findings of Shin et al. (2018), who conducted time series analysis of seventeen political rumors that circulated on Twitter during the 2012 US election period. They find that true information exhibited a single initial spike of sharing, while false rumors periodically resurfaced, often repackaged by partisan websites as “news,” and often became more extreme and exaggerated over time.

An important footnote to these findings on the prevalence of misinformation on social media is that they are based on snapshots in time. A more dynamic perspective shows how engagement with and referrals to fake news – but not other types of content – have markedly declined on Facebook since 2016 (Allcott, Gentzkow, and Yu 2018; Guess et al. 2019). This suggests that internal efforts by Facebook to reduce the spread of misinformation on its platform may be working and illustrates the challenges of studying a “moving target” whose availability and effects are subject to algorithmic changes (Munger 2018).

In addition to Facebook, a number of venues for online misinformation remain understudied relative to the size of their user bases, including Reddit, Pinterest, and, perhaps most critically, YouTube (Song and Gruzd 2017; Donzelli et al. 2018). Meanwhile, examinations of the flow of misinformation across multiple platforms (Thorson and Wells 2015; Bode and Vraga 2018) are essentially nonexistent to date. These gaps mean that research on misinformation diffusion presents a number of opportunities for future work. More broadly, research into the behavior behind sharing (and beyond digital trace data) is needed. Researchers might employ functional neuroimaging to better understand engagement patterns, for instance, as others have for traditional news content (Scholz et al. 2017). Similarly, research to date has overlooked the potential for two-step flows of misinformation (Druckman, Levendusky, and McLain 2018): How does misinformation make the jump from online to interpersonal discussions, and what happens when it does?

Effects of Misinformation

Questions about misinformation’s spread logically lead to questions about its effects. If misinformation can spread quickly, aided by human and technological biases, how great a danger does it ultimately pose? How, and to what extent, does it influence those exposed? While researchers have not yet examined the flow of misinformation from online to offline discussion, they have examined its potential to set the agenda of legitimate news providers. By computationally analyzing a dataset of news-like content pulled from online sources such as Google News, Vargo, Guo, and Amazeen (2018) show that, while fake news websites are not excessively influential over the media landscape at large, they do at times set the issue agenda for partisan news sources, and partisan media were especially responsive to fake news agenda-setting in the 2016 election year. Agenda-setting power matters because it influences which issues capture the public’s attention.

That said, the actual persuasive effects of online misinformation have been particularly difficult to study. Field experiments, which could provide clear evidence, are infeasible for ethical reasons (Gerber and Green 2012). Some studies have relied on observational data to suggest that use of online media sources may drive misperceptions (Garrett 2011), but such designs may be subject to inaccurate reporting, reverse causation, or unobserved confounds. There is some evidence from survey experiments that misinformation seen online is believed (Pereira and Van Bavel 2018), particularly by partisans (although partisans may be able to identify even agreeable headlines as fake when they are “blatantly inaccurate” [Pennycook and Rand 2018]), by those less prone to analytic reasoning (Pennycook and Rand 2018), and by those previously exposed to the misinformation (Pennycook et al. 2018). Yet these experiments require strong assumptions about homogeneous treatment effects in order to generalize to the real world; not everyone is exposed to misinformation, nor do they necessarily pay attention when they are. As an exception to these designs, Kim and Kim (2018) leverage variation in survey timing to estimate the causal effect of misinformation diffusion. They analyze data collected around a surge in the circulation of online misinformation about Obama’s supposed Muslim faith, using a difference-in-differences strategy to compare belief changes over time for those responding before and after the rumor circulated. They find that the rumor’s diffusion increased public belief that Obama is a Muslim by 4 to 8 percentage points, although the case (occurring during the 2008 campaign) predates the rise of social media in today’s information environment.
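
As a minimal sketch of the difference-in-differences logic behind this kind of design (not the authors' actual specification), the simulated example below compares belief in a rumor across two groups before and after a rumor surge. The variable names, two-group structure, and effect sizes are illustrative assumptions; the simulated 6-point effect is simply chosen to fall within the 4 to 8 percentage point range quoted above.

```python
# Minimal difference-in-differences sketch in the spirit of the design described
# above. Variable names, the two-group/two-period structure, and the simulated
# 6-point effect are illustrative assumptions, not the published analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 4000
df = pd.DataFrame({
    "post":    rng.integers(0, 2, n),   # 1 = interviewed after the rumor surge
    "treated": rng.integers(0, 2, n),   # 1 = group plausibly exposed to the rumor
})
# Simulate a belief outcome with a 6-percentage-point bump for exposed respondents post-surge
df["believes_rumor"] = (
    0.20 + 0.05 * df["post"] + 0.10 * df["treated"]
    + 0.06 * df["post"] * df["treated"]
    + rng.normal(0, 0.3, n)
)

did = smf.ols("believes_rumor ~ post * treated", data=df).fit()
print(did.params["post:treated"])   # the difference-in-differences estimate
```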

Regardless, the effects of misinformation on candidate preferences themselves and, moreover, the effects on electoral outcomes or other behavior have yet to be reliably detected (Aral and Eckles 2019). Looking to the literature on campaign effects may be instructive here. Using a meta-analysis of forty field experiments and nine original field experiments, Kalla and Broockman (2018) show that the best estimate for the effect of campaign contact and advertising is zero. This work shows that persuasive effects are only likely to appear in rare cases, such as when a candidate takes an unusually unpopular stance and opposing campaigns invest heavily in finding persuadable cross-pressured voters. This rare exception in ad effects identified by Kalla and Broockman, however, might similarly suggest circumstances under which online misinformation is most likely to be persuasive. A disinformation campaign is most likely to be effective if it attributes especially unacceptable positions, rhetoric, or behavior to a politician and then works to identify and target the subgroups of voters most vulnerable to persuasion (Zuiderveen Borgesius et al. 2018).

In any event, the most important effects of misinformation may extend beyond direct persuasion. Media effects research more broadly suggests that exposure to fake news and other misinformation may do most of its damage by increasing cynicism and apathy while feeding extremism and affective polarization (Garrett et al. 2014; Lau et al. 2017; Tsfati and Nir 2017; Lazer et al. 2018; Suhay et al. 2018). These less obvious effects of misinformation have rarely been examined, but a study by Van Duyn and Collier (2018) shows that even the elite discourse surrounding fake news may reduce trust in the media and worsen the public’s ability to accurately identify real news. These sorts of second-order effects of misinformation in the media landscape deserve a great deal more attention from researchers. The contemporary Russian model for propaganda, “the firehose of falsehood,” for instance, employs rapid, continuous, and repetitive messaging across a high number of channels, while lacking any commitment to consistency (Paul and Matthews 2016). This method seeks to confuse and overwhelm audiences; its mental fatigue and cumulative effects will be difficult but important to measure.

A Global Phenomenon

While our review has highlighted the US focus of this research area, the perils of misinformation, disinformation, and online propaganda are truly a global issue. In this section, we briefly review what is known about the dissemination of misinformation in the rest of the world – across Europe, a range of authoritarian countries, and finally democracies in the global South.

In addition to the studies of fake news’ reach in Europe (Fletcher et al. 2018; Marchal et al. n.d.), scholars at the Oxford Internet Institute have published reports detailing case studies of “computational propaganda” around the world (including Brazil, Canada, China, Germany, Poland, Taiwan, Russia, Ukraine, and the United States), combining expert interviews with computational analysis of posts on a variety of social media platforms (Woolley and Howard 2017). This set of findings shows that in many political contexts social media platforms are dominated by government-organized disinformation campaigns (e.g., in Russia and Poland). Notably, these case studies find that the disinformation campaigns waged over Ukraine may be the most advanced, with manipulation efforts dating back to the early 2000s. The aggregation of the case studies, with even more cases added in the following year (Bradshaw and Howard 2018), allows comparison across authoritarian and democratic regimes. These authors find that, across twenty-eight countries, every authoritarian regime has targeted its own population via social media influence campaigns but only a handful have targeted public user bases in other countries. Most democracies, on the other hand, were found to target foreign publics, while their national political parties targeted domestic voters (Bradshaw and Howard 2018).

Finally, the rise of WhatsApp, and its potential to sow misinformation via its closed messaging system, has drawn interest from scholars focusing on the global South, where the messaging app is especially popular. India and Brazil, in particular, are believed to be hotbeds of WhatsApp misinformation. A handful of studies have begun to reveal the dynamics of misinformation on this unique platform and help describe the misinformation landscape in these countries more broadly. Narayanan et al. (n.d.) conducted content analysis of information shared in India in the lead-up to the 2019 election. They find that more than 25 percent of the Facebook content shared by the Bharatiya Janata Party (BJP) and a fifth of the Indian National Congress’s content was classified as junk news. In a cross-platform comparison, misinformation on WhatsApp tended to be visual, while on Facebook it was more likely to take the form of links to conspiratorial or extremist news sites. Work conducted in Brazil examining the text content of political WhatsApp groups during the 2018 presidential campaign found that messages containing misinformation spread more quickly within groups but took longer to cross group boundaries (Resende et al. 2019). Examining attention cascades (i.e., message chains) across 120 Brazilian WhatsApp groups, Caetano et al. (2019) present complementary results: Cascades containing false information “tend to be deeper, reach more users, and last longer in political groups than in non-political groups.”

Examinations of misinformation in Africa are conspicuously absent from much of this work, though recent work, for instance, has analyzed Ebola rumors on Twitter in Guinea, Liberia, and Nigeria (Oyeyemi, Gabarron, and Wynn 2014). Beyond geographic regions, however, studies of misinformation effects in the rest of the world are also lacking. In light of reports of murders resulting from false WhatsApp rumors in India (Purohit 2019), Sri Lanka (Fisher 2019), and elsewhere, the potential behavioral effects of political misinformation in these areas are particularly salient.

Conclusion

Even though it is a relatively new area of study, the research literature on online misinformation has already generated useful evidence and insights. It is clear, for example, that the prevalence of misinformation is limited in comparison to other forms of online content and that it is highly concentrated both in the media ecosystem and among the types of people who consume it. Furthermore, while mechanisms behind the spread of misinformation, disinformation, and fake news are not yet fully understood, there is evidence that sheer novelty – rather than the falsity of the information – may play a role in people’s decisions to share or forward content to their friends or followers; and, based on what we currently know, caution should be exercised when claims are made about the effects of misinformation, especially on behaviors such as voting. We suggest a focus on system-level outcomes such as trust and cynicism, which, while more difficult to identify, may be of greater long-term importance for society.

A number of gaps in our understanding remain to be explored by researchers. One is the puzzle that “engagement” metrics on social media are orders of magnitude larger than both aggregate traffic statistics and consumption data would imply. A possible resolution to this apparent discrepancy is that some share of the already-small fraction of the population that encounters misinformation online engages with it frequently and repeatedly. This would explain the skewed patterns of both consumption and sharing and suggests the existence of a relatively small number of accounts, likely on social media (and possibly not human), that drive a vastly disproportionate share of engagement and dissemination activity (Grinberg et al. 2019). A focus on counts or averages obscures this possible reality; moreover, since site analytics track raw visits, this activity (perhaps by design) could drive the incentives of online publishers to produce more misinformation.

Of course, online misinformation, disinformation, and propaganda are not unique to the United States. As reflected in this chapter, much of the current evidence comes from research conducted by American researchers focusing on the United States, though a growing body of work examines the problems of online misinformation in other national contexts. Part of the reason for this imbalance is that American awareness of, and interest in, the topic spiked after postelection narratives focused on the role of “fake news” and Russian disinformation tactics in 2016. That interest was likely reinforced by the fact that subsequent responses by US technology companies initially focused on their role in the American campaign.

That said, there is growing concern that misinformation spreading through rapidly adopted social media platforms in developing countries, especially on mobile devices, is deepening social divisions and even fueling violence. Rigorous evidence on this question does not yet exist, but understanding the mechanisms driving the spread of weaponized online propaganda intended to sow discord in contexts with less-established media institutions, and the interventions that might stop it, should be a major priority for future research. Studies should also take stock of evidence across countries so that we can begin to understand the conditions under which misinformation thrives online. What institutional, social, technological, contextual, and other factors increase the likelihood that falsehoods will be widely disseminated online? Relatedly, what determines whether mainstream institutions – including media organizations but also political parties – adapt themselves to such messages or choose instead to push back against them?

Ultimately, such questions may be easier to study with access to better data. Much of the existing research cited in this chapter is designed to overcome barriers to direct observation of online misinformation and the factors correlated with its spread. For example, inferences can be drawn from samples, but, given the high degree of concentration and skew, a better understanding of key subgroups would benefit from observing behavioral data from the entire population of interest. Furthermore, while multiple studies suggest that Facebook played a more important role than Twitter in driving consumption of fake news, our best evidence comes from the open API offered by the latter. Bridging these major gaps in knowledge, potentially via privacy-preserving arrangements between academics and the social platforms themselves (King and Persily 2018), will help to develop our understanding of this important and ever-evolving topic.

Footnotes

This project received funding for Benjamin A. Lyons’ time from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement No. 682758).

1 In this section, we focus on disinformation rather than misinformation writ large because “production” implies an element of intent inherent in our definition of the former.

2 Subsequent reporting has revealed apparent connections between the Macedonians who built the pro-Trump media ecosystem in Veles and a network of American and British political consultants and writers. These connections, along with alleged ties to the Russian “troll factory,” are currently being investigated (Silverman et al. 2018).

3 Dawson and Innes are unique in analyzing IRA propaganda efforts in Europe as opposed to the United States.

References

Achen, C. H., & Bartels, L. M. (2017). Democracy for Realists: Why Elections Do Not Produce Responsive Government, Vol. 4. Princeton: Princeton University Press.Google Scholar
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211236.Google Scholar
Allcott, H., Gentzkow, M., & Yu, C. (2018). Trends in the diffusion of misinformation on social media. arXiv.org. https://arxiv.org/abs/1809.05901Google Scholar
American Press Institute. (2017). “Who Shared It?” How Americans Decide What News to Trust on Social Media. American Press Institute report. www.americanpressinstitute.org/publications/reports/survey-research/trust-social-media/Google Scholar
Andrews, C., Fichet, E., Ding, Y., Spiro, E. S., & Starbird, K. (2016). Keeping up with the tweetdashians: The impact of “official” accounts on online rumoring. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (pp. 452465).CrossRefGoogle Scholar
Aral, S., & Eckles, D. (2019). Protecting elections from social media manipulation. Science, 365(6456), 858861. https://science.sciencemag.org/content/365/6456/858Google Scholar
Bakir, V., & McStay, A. (2018). Fake news and the economy of emotions: Problems, causes, solutions. Digital Journalism, 6(2), 154175.Google Scholar
Bastos, M., & Farkas, J. (2019). “Donald Trump is my president!”: The internet research agency propaganda machine. Social Media and Society, 5(3). https://doi.org/10.1177/2056305119865466Google Scholar
Benkler, Y., Faris, R., & Roberts, H. (2018). Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford: Oxford University Press. https://books.google.com/books?id=6hhnDwAAQBAJCrossRefGoogle Scholar
Berinsky, A. J. (2015). Rumors and health care reform: Experiments in political misinformation. British Journal of Political Science, 47(2), 241262. www.cambridge.org/core/journals/british-journal-of-political-science/article/rumors-and-health-care-reform-experiments-in-political-misinformation/8B88568CD057242D2D97649300215CF2CrossRefGoogle Scholar
Bessi, A., & Ferrara, E. (2016). Social bots distort the 2016 US presidential election online discussion. First Monday, 21(11). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2982233Google Scholar
Bode, L., & Vraga, E. K. (2018). Studying politics across media. Political Communication, 35(1), 17.Google Scholar
Boyd, R. L., Spangher, A., Fourney, A. et al. (2018). Characterizing the internet research agency’s social media operations during the 2016 US presidential election using linguistic analyses. Working paper. http://test.adamfourney.com/papers/boyd_psyarxiv2018.pdfGoogle Scholar
Bradshaw, S., & Howard, P. N. (2018). Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation. The Computational Propaganda Project report. http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/07/ct2018.pdfGoogle Scholar
Caetano, J. A., Magno, G., Gonçalves, M., Almeida, J., Marques-Neto, H. T., & Almeida, V. (2019). Characterizing attention cascades in WhatsApp groups. arXiv.org. arXiv:1905.00825Google Scholar
Campbell, A., Converse, P. E., Miller, W. E., & Stokes, D. E. (1960). The American Voter. Chicago: University of Chicago Press.Google Scholar
Ciampaglia, G. L., Flammini, A., & Menczer, F. (2015). The production of information in the attention economy. Scientific Reports, 5, 9452. https://doi.org/10.1038/srep09452CrossRefGoogle ScholarPubMed
Ciampaglia, G. L., Nematzadeh, A., Menczer, F., & Flammini, A. (2018). How algorithmic popularity bias hinders or promotes quality. Scientific Reports, 8(1), 15951.Google Scholar
Dawson, A., & Innes, M. (2019). How Russia’s Internet Research Agency built its disinformation campaign. The Political Quarterly, 90(2), 245256.CrossRefGoogle Scholar
Del Vicario, M., Bessi, A., & Zollo, F. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3), 554559. https://doi.org/10.1073/pnas.1517441113Google Scholar
Donzelli, G., Palomba, G., Federigi, I. et al. (2018). Misinformation on vaccination: A quantitative analysis of YouTube videos. Human Vaccines & Immunotherapeutics, 14(7), 16541659.Google Scholar
Druckman, J. N., Levendusky, M. S., & McLain, A. (2018). No need to watch: How the effects of partisan media can spread via interpersonal discussions. American Journal of Political Science, 62(1), 99112.Google Scholar
Farkas, J., & Bastos, M. (2018). State propaganda in the age of social media: Examining strategies of the internet research agency. In 7th European Communication Conference (ECC).Google Scholar
Ferrara, E. (2017). Disinformation and social bot operations in the run up to the 2017 French presidential election. Working paper. https://arxiv.org/pdf/1707.00086&gtGoogle Scholar
Fisher, M. (2019). Sri Lanka blocks social media, fearing more violence. New York Times, April 21. www.nytimes.com/2019/04/21/world/asia/sri-lanka-social-media.htmlGoogle Scholar
Fletcher, R., Cornia, A., Graves, L., & Nielsen, R. K. (2018). Measuring the Reach of “Fake News” and Online Disinformation in Europe. Reuters Institute factsheet.Google Scholar
Flynn, D., Nyhan, B., & Reifler, J. (2017). The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics. Political Psychology, 38(S1), 127150. https://doi.org/10.1111/pops.12394Google Scholar
Fourney, A., Racz, M. Z., Ranade, G., Mobius, M., & Horvitz, E. (2017). Geographic and temporal trends in fake news consumption during the 2016 US presidential election. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, vol. 17 (pp. 610).Google Scholar
Garrett, R. K. (2011). Troubling consequences of online political rumoring. Human Communication Research, 37(2), 255274.CrossRefGoogle Scholar
Garrett, R. K., Gvirsman, S. D., Johnson, B. K., Tsfati, Y., Neo, R., & Dal, A. (2014). Implications of pro-and counterattitudinal information exposure for affective polarization. Human Communication Research, 40(3), 309332.CrossRefGoogle Scholar
Gerber, A. S., & Green, D. P. (2012). Field Experiments: Design, Analysis, and Interpretation. New York: W. W. Norton.Google Scholar
Gorwa, R. (2017). Computational propaganda in Poland: False amplifiers and the digital public sphere. Project on Computational Propaganda Working Paper Series, Oxford.Google Scholar
Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B., & Lazer, D. (2019). Fake news on Twitter during the 2016 US presidential election. Science.Google Scholar
Guess, A. (2015). Measure for measure: An experimental test of online political media exposure. Political Analysis, 23(1), 5975.Google Scholar
Guess, A., Lyons, B., Montgomery, J., Nyhan, B., & Reifler, J. (2019). Fake News, Facebook Ads, and Misperceptions: Assessing Information Quality in the 2018 U.S. Midterm Election Campaign. Democracy Fund report. www.dartmouth.edu/~nyhan/fake-news-2018.pdfGoogle Scholar
Guess, A., Lyons, B., Nyhan, B., & Reifler, J. (2018). Avoiding the Echo Chamber about Echo Chambers: Why Selective Exposure to Like-Minded Political News Is Less Prevalent Than You Think. Knight Foundation report, February 12. https://kf-site-production.s3.amazonaws.com/media_elements/files/000/000/133/original/Topos_KF_White-Paper_Nyhan_V1.pdfGoogle Scholar
Guess, A., Nagler, J., & Tucker, J. (2019). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances, 5(1). http://advances.sciencemag.org/content/5/1/eaau4586Google Scholar
Guess, A., Nyhan, B., & Reifler, J. (2018). Fake News Consumption and Behavior in the 2016 US Presidential Election. Unpublished manuscript.Google Scholar
Hindman, M., & Barash, V. (2018). Disinformation, “Fake News” and Influence Campaigns on Twitter. Knight Foundation report, October. https://kf-site-production.s3.amazonaws.com/media_elements/files/000/000/238/original/KF-DisinformationReport-final2.pdfGoogle Scholar
Jacobson, G. C. (2010). Perception, memory, and partisan polarization on the Iraq war. Political Science Quarterly, 125(1), 3156.CrossRefGoogle Scholar
Kalla, J. L., & Broockman, D. E. (2018). The minimal persuasive effects of campaign contact in general elections: Evidence from 49 field experiments. American Political Science Review, 112(1), 148166.Google Scholar
Kim, J. W., & Kim, E. (2018). Identifying the effect of online rumoring: Evidence from circulation of the “Obama-is-a-Muslim” myth on the Internet. Quarterly Journal of Political Science, 14(3), 293311.Google Scholar
King, G., & Persily, N. (2018). A new model for industry-academic partnerships. (Current version: GaryKing.org/partnerships)Google Scholar
Kirby, E. J. (2016). The city getting rich from fake news. BBC News, 5.Google Scholar
Koreneva, M. (2015). Trolling for Putin: Russia’s information war explained. Yahoo. www.yahoo.com/news/trolling-putin-russias-information-war-explained-063716887.htmlGoogle Scholar
Kuklinski, J. H., Quirk, P. J., Jerit, J., Schwieder, D., & Rich, R. F. (2000). Misinformation and the currency of democratic citizenship. Journal of Politics, 62(3), 790816.Google Scholar
Lau, R. R., Andersen, D. J., Ditonto, T. M., Kleinberg, M. S., & Redlawsk, D. P. (2017). Effect of media environment diversity and advertising tone on information search, selective exposure, and affective polarization. Political Behavior, 39(1), 231255.Google Scholar
Lazer, D. M., Baum, M. A., Benkler, Y. et al. (2018). The science of fake news. Science, 359(6380), 10941096.Google Scholar
Linvill, D. L., Boatwright, B. C., Grant, W. J., & Warren, P. L. (2019). “The Russians are hacking my brain!”: Investigating Russia’s internet research agency twitter tactics during the 2016 United States presidential campaign. Computers in Human Behavior.CrossRefGoogle Scholar
Marchal, N., Kollanyi, B., Neudert, L.-M., & Howard, P. N. (n.d.). Junk news during the EU parliamentary elections: Lessons from a seven-language study of Twitter and Facebook.Google Scholar
Marwick, A., & Lewis, R. (2017). Media Manipulation and Disinformation Online. New York: Data & Society Research Institute.Google Scholar
Narayanan, V., Kollanyi, B., Hajela, R., Barthwal, A., Marchal, N., & Howard, P. N. (n.d.). News and information over Facebook and WhatsApp during the Indian election campaign.Google Scholar
Nelson, J. L., & Taneja, H. (2018). The small, disloyal fake news audience: The role of audience availability in fake news consumption. New Media and Society, https://doi.org/10.1177/1461444818758715Google Scholar
Nyhan, B. (2010). Why the “death panel” myth wouldn’t die: Misinformation in the health care reform debate. The Forum, 8(1).Google Scholar
Oh, O., Kwon, K. H., & Rao, H. R. (2010). An exploration of social media in extreme events: Rumor theory and twitter during the Haiti earthquake 2010. In ICIS (Vol. 231).Google Scholar
Oyeyemi, S. O., Gabarron, E., & Wynn, R. (2014). Ebola, Twitter, and misinformation: A dangerous combination? BMJ, 349, g6178.CrossRefGoogle ScholarPubMed
Paul, C., & Matthews, M. (2016). The Russian “firehose of falsehood” propaganda model. RAND Corporation.Google Scholar
Pennycook, G., Cannon, T., & Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake news. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2958246CrossRefGoogle Scholar
Pennycook, G., & Rand, D. G. (2018). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition.Google Scholar
Pereira, A., & Van Bavel, J. (2018). Identity concerns drive belief in fake news. PsyArXiv. https://psyarxiv.com/7vc5d/Google Scholar
Prior, M. (2013). The challenge of measuring media exposure: Reply to Dilliplane, Goldman, and Mutz. Political Communication, 30(4), 620634.Google Scholar
Purohit, K. (2019). WhatsApp rumours have led to 30 deaths in India: Who’s next? South China Morning Post, February 25. www.scmp.com/week-asia/society/article/2187612/whatsapp-rumours-have-led-30-deaths-india-social-mediaGoogle Scholar
Resende, G., Melo, P., Sousa, H. et al. (2019). (Mis)information dissemination in WhatsApp: Gathering, analyzing and countermeasures. In The World Wide Web Conference (pp. 818828).Google Scholar
Scholz, C., Baek, E. C., O’Donnell, M. B., Kim, H. S., Cappella, J. N., & Falk, E. B. (2017). A neural model of valuation and information virality. Proceedings of the National Academy of Sciences, 201615259.CrossRefGoogle Scholar
Shao, C., Ciampaglia, G. L., Varol, O., Flammini, A., & Menczer, F. (2017). The spread of fake news by social bots. arXiv.org preprint arXiv:1707.07592, 96104.Google Scholar
Shao, C., Hui, P.-M., Wang, L. et al. (2018). Anatomy of an online misinformation network. PloS ONE, 13(4), e0196087.Google Scholar
Shapiro, R. Y., & Bloch-Elkon, Y. (2006). Political polarization and the rational public. Paper presented at the annual conference of the American Association for Public Opinion Research, Montreal, Canada.Google Scholar
Shin, J., Jian, L., Driscoll, K., & Bar, F. (2017). Political rumoring on Twitter during the 2012 US presidential election: Rumor diffusion and correction. New Media and Society, 19(8), 12141235.CrossRefGoogle Scholar
Shin, J., Jian, L., Driscoll, K., & Bar, F. (2018). The diffusion of misinformation on social media: Temporal pattern, message, and source. Computers in Human Behavior, 83, 278287.Google Scholar
Silverman, C. (2016). This analysis shows how fake election news stories outperformed real news on Facebook, BuzzFeed News, November 16. www.buzzfeed.com/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook?utm_term=.ohXvLeDzK#.cwwgb7EX0Google Scholar
Silverman, C., & Alexander, L. (2016). How teens in the Balkans are duping Trump supporters with fake news. BuzzFeed News, 3.Google Scholar
Silverman, C., Feder, J. L., Cvetkovska, S., & Belford, A. (2018). Macedonia’s pro-Trump fake news industry had American links, and is under investigation for possible Russia ties. BuzzFeed News, 7. www.buzzfeednews.com/article/craigsilverman/american-conservatives-fake-news-macedonia-paris-wade-libertGoogle Scholar
Song, M. Y.-J., & Gruzd, A. (2017). Examining sentiments and popularity of pro-and antivaccination videos on YouTube. In Proceedings of the 8th International Conference on Social Media and Society (p. 17).Google Scholar
Starbird, K., Maddock, J., Orand, M., Achterman, P., & Mason, R. M. (2014). Rumors, false flags, and digital vigilantes: Misinformation on Twitter after the 2013 Boston Marathon bombing. iConference 2014 Proceedings. www.ideals.illinois.edu/handle/2142/47257Google Scholar
Subramanian, S. (2017). Inside the Macedonian fake-news complex. Wired, 15.Google Scholar
Suhay, E., Bello-Pardo, E., & Maurer, B. (2018). The polarizing effects of online partisan criticism: Evidence from two experiments. The International Journal of Press/Politics, 23(1), 95115.Google Scholar
Sunstein, C. R., & Vermeule, A. (2009). Conspiracy theories: Causes and cures. Journal of Political Philosophy, 17(2), 202227. https://doi.org/10.1111/j.1467-9760.2008.00325.xCrossRefGoogle Scholar
Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755769.CrossRefGoogle Scholar
Thorson, K., & Wells, C. (2015). Curated flows: A framework for mapping media exposure in the digital age. Communication Theory, 26(3), 309328.Google Scholar
Tsfati, Y., & Nir, L. (2017). Frames and reasoning: Two pathways from selective exposure to affective polarization. International Journal of Communication, 11, 22.Google Scholar
Tucker, J., Guess, A., Barberá, P. et al. (2018). Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature. Hewlett Foundation report, March. https://hewlett.org/wp-content/uploads/2018/03/Social-Media-Political-Polarization-and-Political-Disinformation-Literature-Review.pdfCrossRefGoogle Scholar
Twitter. (2018). Update on Twitter’s review of the 2016 US election. Twitter [blog post].Google Scholar
Tynan, D. (2016). How Facebook powers money machines for obscure political “news” sites. The Guardian, 24.Google Scholar
Van Duyn, E., & Collier, J. (2018). Priming and fake news: The effects of elite discourse on evaluations of news media. Mass Communication and Society, 120.Google Scholar
Vargo, C. J., Guo, L., & Amazeen, M. A. (2018). The agenda-setting power of fake news: A big data analysis of the online media landscape from 2014 to 2016. New Media and Society, 20(5), 20282049.Google Scholar
Volchek, D., & Sindelar, D. (2015). One professional Russian troll tells all. Radio Free Europe, March 25. www.rferl.org/a/how-to-guide-russian-trolling-trolls/26919999.htmlGoogle Scholar
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 11461151. https://doi.org/10.1126/science.aap9559CrossRefGoogle ScholarPubMed
Woolley, S. C., & Howard, P. N. (2017). Computational propaganda worldwide: Executive summary. Computational Propaganda Research Project, 2017–11.Google Scholar
Yin, L., Roscher, F., Bonneau, R., Nagler, J., & Tucker, J. A. (2018). Your Friendly Neighborhood Troll: The Internet Research Agency’s Use of Local and Fake News in the 2016 US Presidential Campaign. Social Media and Political Participation Lab (SMaPP), New York University data report.Google Scholar
Zuiderveen Borgesius, F. J., Moller, J., Kruikemeier, S. et al. (2018). Online political microtargeting: Promises and threats for democracy. Utrecht Law Review, 14, 82.Google Scholar
