
Part IV - The Policy Problem

Published online by Cambridge University Press:  06 October 2020

W. Lance Bennett, University of Washington
Steven Livingston, George Washington University, Washington DC

The Disinformation Age: Politics, Technology, and Disruptive Communication in the United States, pp. 151–210
Publisher: Cambridge University Press
Print publication year: 2020
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/).

6 How Digital Disinformation Turned Dangerous

Dave Karpf

They say history doesn’t repeat itself, but it rhymes.

Writing for WIRED magazine in January 1997, Tom Dowe reflected on the spread of online rumors, conspiracy theories, and outright lies that Bill Clinton had faced in the 1996 election. His article, titled “News You Can Abuse,” will spark a sense of déjà vu for any reader familiar with the digital misinformation practices that surfaced throughout the 2016 election:

The Net is opening up new terrain in our collective consciousness, between old-fashioned “news” and what used to be called the grapevine – rumor, gossip, word of mouth. Call it paranews – information that looks and sounds like news, that might even be news. Or a carelessly crafted half-truth. Or the product of a fevered, Hofstadterian mind working overtime. It’s up to you to figure out which. Like a finely tuned seismograph, an ever more sophisticated chain of Web links, email chains, and newsgroups is now in place to register the slightest tremor in the zeitgeist, no matter how small, distant, or far-fetched. And then deliver it straight to the desktop of anyone, anywhere who agrees with the opening button on the National Enquirer Web site “I Want to Know!”1

The parallels to today’s digital news controversies are so obvious that they ruin the punchline. It would appear as though online misinformation, disinformation, and “fake news” have been spreading about Democratic candidates named Clinton since the very first internet-mediated election. And even back in 1997, Dowe was raising some of the same concerns that we face today: “When the barriers come down, when people cease to trust the authorities,” he writes, “they – some of them, anyway – become at once more skeptical and more credulous. And on the Net right now – hell, in America – there’s plenty of evidence of that.”

Is Dowe’s “paranews” really all that different from the weaponized disinformation campaigns that we witnessed in 2016? A cynic might conclude that the key difference between the two cycles is that 1996’s Clinton won and 2016’s Clinton lost. (How different, after all, would the contents of this volume be if the election had narrowly swung the other way?) But such cynicism is both unwarranted and unproductive. The online rumor mills of the early Web are substantially different from the industrialized digital disinformation and misinformation operations that trouble us today. The real value of reflecting on the paranews of 1996 is that it provides a helpful point of comparison to see just how much the digital context has changed.

The Internet is not new media any longer. The World Wide Web has over a twenty-five-year history. Digital media is no longer our looming technological future. It has a track record from which we can make observations and draw lessons. We need no longer make static comparisons between mainstream/mass media and digital/social media. We can instead make apples-to-apples comparisons within the digital era, identifying commonalities and differences between today’s digital landscape and the digital media of past decades.

The purpose of this chapter is to explore how the digital media landscape has changed over time, and how these changes impact the status of fake news, misinformation, and disinformation. The chapter focuses on three major developments that make today’s digital disinformation and propaganda more dangerous than it was in decades past. First, rumors and misinformation spread at a different rate, and by different mechanisms. Second, there is both more profit and more power in online disinformation today than there was two decades ago. Third, online misinformation has now been with us long enough to alter elite permission structures. The chapter concludes by discussing what digital platforms, policymakers, and journalists can do to confront these changing circumstances in the years ahead.

Mechanisms of Diffusion

Both the Internet and the broader media system differed substantially between 1996 and 2016. The Internet that Tom Dowe was describing was populated by different technologies with different affordances, encouraging different behaviors. It was an Internet of desktop computers and America Online CD-ROMs, an Internet of dial-up modems and search engines that were laughably bad at providing accurate results. The “new media” of 1996 was characterized by the expansion of cable television and the growth of conservative talk radio. Fox News Channel debuted in October 1996, attempting to copy CNN’s successful business model. Today’s disinformation can spread more quickly because of a set of structural changes to the overall media system.

Consider how online rumors and disinformation spread in the mid-1990s Internet: one could (a) spread salacious gossip through email forwarding chains, or (b) post made-up stories on a website, or (c) make false claims in an online chatroom. Each of these options is self-limiting for the spread of online rumors.

Chain emails are traceable and relatively costly. You know who forwarded them to you, and you probably have some experience with the veracity of the stories they share. Email forwarding is a relatively high-bar activity in the digital landscape. In today’s terms, it takes more work to forward an email to 100 friends than it does to “like” or retweet a post, sharing it with everyone in your network who is then algorithmically exposed to your social media activity. These are structural characteristics of email forwarding chains. Conspiracy theories via email, in other words, are spread by the known conspiratorial thinkers in one’s network; they can be discounted by recipients accordingly.

Conspiratorial websites in the mid-1990s also had a sharply limited audience. This was the pre-Google Internet, where search was time consuming and difficult. Online writers sought to build traffic by forming “web rings” with fellow travelers, and by filling their websites with keywords that might be typed into the Yahoo/Alta Vista search box. Incidental exposure to conspiratorial websites was thus limited. If you wanted to find information about all manner of Clinton conspiracies in 1997, there were websites to indulge your interests. But you would have had to look pretty hard. Again, these are structural characteristics of the World Wide Web of the 1990s that matter for how gossip, propaganda, and disinformation spread through the system.

Chat rooms face a parallel set of constraints. Chats are segregated by topic and occupied by small groups, making them a poor vector for incidental exposure to misinformation and disinformation. The Internet of 1997 provided a virtual space where adherents to all sorts of Clinton conspiracy theories could gather and swap tall tales. But if they entered a random AOL chatroom to post their screeds, they would not find much of an audience. Disinformation efforts via chatroom are liable to fail because they will appear as off-topic ramblings, inserted into an online conversation among a small group of participants who can just move elsewhere.

The result is that conspiracy theories on the web of the 1990s had quite a lot in common with conspiracy theories in previous media. Dowe’s reference to the National Enquirer is instructive. Salacious gossip and misinformation did not begin with the Internet. They were spread through tabloids, and through radio programs, and through newsletters. The early Web made misinformation easier to find. It made it easier to interact with like-minded conspiratorial thinkers. But it was a difference in degree, rather than a difference in kind.

Now consider how these limiting conditions of the early Web compare to the industrial production of misinformation in the 2016 election. As Samanth Subramanian documents in his WIRED article, “Inside the Macedonian Fake News Complex,” the 2016 election featured entire websites set up with the semblance of reputable news outlets.2 These websites invented salacious stories, engineered to maximize social sharing and public exposure. They advertised cheaply on Facebook, boosting their visibility in news feeds. NewYorkTimesPolitics.com was one such fake news website, designed to resemble the real New York Times website, and featuring plagiarized articles on American politics. Unlike the chain emails of 1997, these stories were shared through social media, spreading faster and farther while presenting fewer signals of their (lack of) source credibility.

Meanwhile, employees of Russia’s Internet Research Agency (IRA) piloted swarms of automated and semi-automated social media accounts with fake, US-based profiles. These accounts sought to influence the public dialogue and amplify disagreement and discontent in online discourse. They liked, shared, and retweeted social media posts. They attacked authors and spread misinformation in comment threads, manufacturing the appearance of broader social distrust of Hillary Clinton’s candidacy. Where the chatrooms of 1996 were a poor vector for spreading disinformation because their capacity for amplification was limited, in 2016 the deliberate amplification of conspiracy theories and mistrust helped propel topics deemed harmful to Hillary Clinton into the broader public sphere.

Alongside the different affordances of the modern Internet, we also have to reckon with the Internet’s changing status within the broader media ecosystem. As Yochai Benkler makes clear in Chapter 2 of this book, American political journalism has changed drastically over the past few decades. Newspapers have been hollowed out. Conservative outlets from Fox News (founded, incidentally, in October 1996) to Breitbart now play a central role in fueling the spread of conservative propaganda and strategic misinformation. More broadly, as Andrew Chadwick suggests in his 2013 book, The Hybrid Media System, digital media has changed the rhythms of news production, converting traditional news cycles into what Chadwick terms “political information cycles.”3 Episodes of political contention now move back and forth between social media, television, radio, and newsprint. Online conspiracy theories do not remain isolated online – trending hashtags and artificially boosted clickbait stories can become the topic of the nightly newscast, dramatically expanding the reach of rumors and misinformation.

Conspiracy theories on the early Web were treated by the broader media system much like rumors in the pages of the National Enquirer or other tabloids. They did not set the mainstream news agenda. They were not incorporated into newsgathering routines. They were at best an oddity, or a whisper that might lead to a story pitch. But digital news was not yet a competitor, either for eyeballs or for advertising revenue. This was a pre-blogosphere Internet. Conspiracy theorists could not influence news routines by swamping comment threads on news websites. News organizations were not yet monitoring clicks or hyperlinks to judge the news value of a given story. The digital challenges to traditional journalism were not yet viewed as a looming threat by newsrooms. As Paul Starr notes in Chapter 3 of this volume, the Internet of the 1990s was characterized by a sense of naïve technological optimism, particularly amongst its vocal advocates and early adopters who believed the technology would soon usher in a new era of rational and critical civic discussion. The Web was decentralized and barely populated. The dotcom boom was still in its first year. Conspiracy theories online were an odd sideshow, rather than an outright social ill.

By 2016, in contrast, major news organizations had adapted to the hybrid media system, modifying their news routines to incorporate trending topics and viral stories into their agenda-setting process. The fact of a viral story is itself news, regardless of the story’s underlying veracity. The conservative ecosystem of media organizations (digital, television, and radio) stokes these stories, decrying the lack of coverage in mainstream outlets and demanding coverage of “both sides” of the manufactured controversy. The hybrid media system is much more vulnerable to strategic misinformation and disinformation than the industrial broadcast media system that still dominated American politics in 1996.

What has changed, then, is the exposure rate, the traceability, and the lateral impact of misinformation. Digital misinformation has become progressively less traceable, less costly, and more spreadable, while developing a more substantial role in traditional news organizations. When you were handed a John Birch Society newsletter, you could see quite clearly where the newsletter came from and who gave it to you. Those newsletters were filled with disinformation, but they did not travel far and they did not set the agenda for the nightly news. The early Web had many of the same qualities. Today’s social media has become unmoored from those limitations.

And the reason why it has become so unmoored leads to my second observation.

Profit and Power

To state it plainly, fake news in the 1990s was a hobby. Today it is an industry.

As Subramanian notes in his article on the Macedonian fake news industry, “Between August and November, [young Veles resident] Boris earned nearly $16,000 off his two pro-Trump websites. The average monthly salary in Macedonia is $371.” The mechanics of this money-making operation are entirely determined by how online advertising revenue is generated through Google and Facebook. The purveyors of these manufactured stories would pay Facebook to promote their content in the news feed. Scandalous headlines generated clicks, comments, and shares, and each visitor to the website generated profit through Google AdSense. Though there is now some controversy as to the nature of the relationship between Veles residents and Russian information operations, the Macedonians claimed contemporaneously that they were not particularly interested in supporting Trump or opposing Clinton – they just found that anti-Clinton fake stories generated more traffic (and thus, more advertising revenue).4

The incident is a testament to a broader phenomenon in today’s hybrid media system. The dynamics of mass attention and of advertising profitability are overwhelmingly shaped by the algorithmic decisions of two corporations: Google and Facebook. As journalist Joshua Micah Marshall describes in “A Serf on Google’s Farm,” his 2017 essay about Google’s involvement with his digital news site, Talking Points Memo (TPM): “Google has directly or indirectly driven millions of dollars of revenue to TPM over more than a decade. … few publishers really want to talk about the depths or mechanics of Google’s role in news publishing.” He details the degree to which Google is implicated in the news, owning as it does “1) The system for running ads [DoubleClick], 2) the top purchaser of ads [AdExchange], 3) the most pervasive audience data service [Google Analytics], 4) all search [Google.com], 5) our [TPM’s] email.”5 Marshall goes on to describe how Google’s sheer market power can dictate the success or failure of digital news organizations. TPM was blacklisted by Google for violating the company’s ban on hate speech. This was a false positive – TPM was reporting on incidents of white supremacist violence, and the reporting was coded as hate speech – but it was a potential economic catastrophe for the news site, because Google is the center of the digital advertising economy.

Facebook, likewise, has arguably become the central vector for the social sharing of news and information. Changes to Facebook’s algorithmic weighting can create or destroy the market for particular forms of journalism. As I discuss in Analytic Activism, this was the major public lesson of Upworthy.com, a social news site that specialized in developing Facebook-friendly headlines to drive attention to stories and videos with social impact.6 In 2013, Upworthy was the fastest-growing website in history. Then Facebook debuted a new video feature and penalized websites that linked to videos outside of the Facebook ecosystem. Upworthy immediately lost roughly two-thirds of its monthly visitors.

Herein lies the problem with the “marketplace of ideas” arguments that frequently appear in current debates over the negative consequences of online speech. The Web of the 1990s could arguably be thought of as a neutral marketplace of ideas, one in which anyone with a dial-up connection and a bit of training in HTML could write online and potentially find a modest audience. The “Safe Harbor” provision of the Communications Decency Act (Section 230) was designed to help protect free speech by shielding websites from liability for the content that visitors posted to them. That was a reasonable and appropriate provision at the time. But in the intervening years, the Internet has recentralized around a handful of quasi-monopolistic platforms. And in the meantime, online advertising has experienced massive growth, while the advertising markets that supported the industrial broadcast news system have been cannibalized.7

Consider how these changes have impacted the status of online rumors and disinformation. Dowe’s 1997 article quotes digital pioneer Esther Dyson, who tells the author, “the Net is terrible at propaganda, but it’s wonderful at conspiracy.” This is a remarkable statement, viewed in retrospect. The Internet of 2016 is clearly quite good at propaganda – at least as good as the mass media of decades past! Part of this change is because the broader public has come online. It was a terrible propaganda channel in 1997 because there was not yet a mass audience to be propagandized. Fake news in the 1990s was a hobby because the Internet in the 1990s was confined to hobbyists. Digital media today is everywhere, always on, and always with us.

Alongside this secular expansion in Internet use, the technologies of digital ad targeting have also advanced greatly in the intervening twenty years.8 As the masses came online, the Web became more valuable as a substitute for and complement to mass media. The Web has also become more valuable as data, providing insights into what we read, what we purchase, and where we are physically located at all times. Cookie-based and geolocal tracking provide a wealth of data, which in turn has funneled additional investments into online media. While today’s digital advertising is still far less precise than its marketers routinely claim9 (Google and Facebook do not actually know you better than you know yourself), the digital advertising economy now determines which speech is profitable, and thus which types of journalism, propaganda, public information and disinformation will receive broad dissemination. The platform monopolists are too big to be neutral; their algorithmic choices are market-makers, with an indelible impact upon the marketplace of ideas.

The result is a situation in which there can be strong economic incentives for misinformation and disinformation campaigns. The online marketplace does not reward the best ideas, or the most thorough reporting. It rewards the stories that perform best on Facebook, Twitter, Google, and YouTube. It rewards user engagement, and social sharing, and time-on-site. Meanwhile there are also compelling strategic incentives for misinformation and disinformation campaigns. The Russian Internet Research Agency is not designed to make money.10 It is designed to spread mistrust and discontent online. And the logic of troll farms like the IRA is that, now that so much of the public is online, disrupting online media can be a high-value propaganda goal. The marketplace for speech will permanently malfunction if lies are made more profitable than truths.

This is not a problem inherent to the Internet or social media. It has gotten worse because of specific policy decisions that have protected and rewarded bad social behaviors. It can be fixed through different policy decisions – the fake news industry in Macedonia disappeared after the 2016 election, as Google implemented new policies that excluded the fake news websites from the AdWords program. In 2011, Google likewise dramatically curtailed “content farms” through the quasi-regulatory act of adjusting the company’s search algorithms.11 Regulation ought to come from the government, but in the absence of government oversight, the platform monopolies play an uncomfortable, quasi-regulatory role. To be clear, Facebook and Google are not going to create voluntary rules that do much to curtail their own power or profit. But they can, and do, slowly respond to the worst abuses of their platforms in order to safeguard their reputations.

The more urgent issue is that government regulators in the United States have essentially abandoned their posts. At the time of writing, the Federal Election Commission (FEC) does not have enough commissioners to even make quorum.12 Thus, the main regulatory agency tasked with determining what forms of electoral communication are supported by law is no longer capable of regulating. The Federal Trade Commission (FTC) has levied fines against Facebook and Google, but is so drastically understaffed that it mostly enforces violations of decades-old laws rather than crafting new regulatory regimes for today’s Internet. While there have been congressional hearings into the role of “Big Tech” in spreading disinformation and propaganda, those hearings have mostly been turned into partisan spectacles. The hearings have even become a vector for their own set of conspiracy theories, with a few Republican politicians advancing the baseless claim that Facebook, Google, and Twitter are suppressing conservative content to support a progressive ideological agenda. In the near term, if the marketplace for disinformation is going to be seriously regulated, those regulations will likely be created and enforced by the platforms themselves, rather than by elected officials.

And this in turn leads to my third observation: the greatest threat posed by online misinformation is the lateral effect it has on the behavior of political elites.

Online Disinformation and the Dissolution of Load-Bearing Norms

Online disinformation and propaganda were clearly a bigger problem in the 2016 election than in the 1996 election. But it still bears exploring just what the nature of the disinformation problem is. Why, really, does it matter that online gossip, propaganda, and strategic untruths are spreading faster and farther than ever before? Where is the impact of digital disinformation most keenly felt? I would argue, perhaps counter-intuitively, that the direct impact of digital disinformation is quite limited, particularly within the context of a presidential election. There is, however, a second-order effect which is quite threatening to the foundations of democratic governance. Political elites are learning just how much they can get away with in the absence of a well-informed public.

The literature on persuasive effects in US general election campaigns is overwhelmingly clear: even for the most sophisticated, large-scale campaigns, it is tremendously difficult to change voters’ minds. In a recent meta-analysis of field experiments in American elections, published in the American Political Science Review, Joshua Kalla and David Broockman conclude “the best estimate of the effects of campaign contact and advertising on Americans’ candidate choices in general elections is zero.”13 In particular, they find that “when a partisan cue and competing frames are present, campaign contact and advertising are unlikely to influence voters’ choices.” In effect, they are arguing that the sheer volume of campaign communications in US elections, combined with the established partisan preferences of the mass electorate, reduce the marginal effect of campaign persuasive tactics to practically nil. “Voters in general elections appear to bring their vote choice into line with their predispositions close to election day and are difficult to budge from there.”14

Kalla and Broockman’s research is not specifically focused on disinformation or on the 2016 presidential election, but the implication is clear: if well-funded, sophisticated voter persuasion efforts launched by seasoned campaign professionals in collaboration with social scientists have little-to-no effect in general elections, we ought to remain skeptical that less well-funded disinformation efforts launched by Russian trolls, Macedonian teens, or the Trump campaign itself would have substantial impacts on voter behavior. Persuasion in a general election is unlike commercial branding or marketing efforts, where consumer awareness is low and consumer preferences are weak. There is no reason to believe the direct impact of microtargeted digital propaganda and misinformation is larger than the direct impact of microtargeted campaign outreach and persuasion campaigns.

At a more foundational level, discussions of media and disinformation are often premised upon the assertion that a well-informed public is a necessary component of a functioning democracy. Misinformation, disinformation, and propaganda are viewed as toxic to a healthy democracy, because they weaken the informational health of the body politic. But there is a contradiction in this premise that we too often ignore. As Michael Schudson documents in The Good Citizen: A History of American Civic Life, American democracy cannot require a well-informed public, because no such public has existed in American history.15 Though we routinely hearken back to memories of a past golden era in which citizens were better-informed, civically minded, and more engaged in public life, our lived reality has always been messier. The engaged, attentive public is one of the grand myths of American civic life.

The fundamental tension here is that the myth of the attentive public is itself a necessary precondition for a functional democracy. As Vincent Mosco writes in The Digital Sublime, myths

… are neither true nor false, but living or dead. A myth is alive if it continues to give meaning to human life, if it continues to represent some important part of the collective mentality of a given age, and if it continues to render socially and intellectually tolerable what would otherwise be experienced as incoherence.16

American democracy does not require a well-informed public. What it requires are political elites (including media elites) who behave as though an attentive public is watching, rewarding or penalizing them for their actions. In the absence of this myth, there is little preventing political elites from outright graft and corruption.

The great irony of our current moment is that digital misinformation’s most dangerous impact comes not through directly deceiving voters and altering their vote choice, but through indirectly exposing to political elites that voters are inattentive and therefore will not keep misbehaving politicians in check. A politician can run on a platform of deficit reduction and then propose legislation that explodes the deficit. A politician can vote for health care legislation that removes the protections for preexisting conditions and then run advertisements claiming the exact opposite. A politician can spend years strategically refusing to ever work with the opposition party on any legislation, specifically so he can blame his opponents for the lack of bipartisan collaboration. If the public is not paying attention, and if traditional media gatekeepers no longer serve as arbiters of political reality, then there is no incentive for engaging in the difficult, messy, and risky work of actual governance. The well-informed public is a myth, but it is a load-bearing myth. Faith in this mythology is a necessary component of a well-functioning democracy.

We are governed both by laws and by norms. The force of law is felt through the legal system – break the law and you risk imprisonment or financial penalties. The force of norms is felt through social pressure – violate norms and you will be ostracized. The myth of the well-informed public anchors a set of norms about elite behavior: politicians should not lie to the press; they should keep their campaign promises; they should consistently pursue a set of goals that are justifiable in terms of promoting the public good, not merely in terms of increasing their own odds of winning the next election. And while laws change formally through the legislative process, norms change informally and in haphazard fashion. When someone breaks a long-held norm and faces no consequence, when they test out part of the mythology and find that it can be violated without consequence, the myth is imperiled and the norm ceases to operate.

The conspiracy theorists of 1996 were confined to small corners of the Web, just as the conspiracy theorists of 1976 were ostracized from polite society. Things were very different in 2016. During the 2016 presidential race, Donald Trump appeared on conspiracy theorist Alex Jones’s radio program and told him “your reputation is amazing.” Trump also made Steve Bannon, executive chairman of Breitbart News (a far-right website trafficking in conspiracy theories, misinformation, and disinformation), White House chief strategist.

This is a trend that predates the modern social Web. It can be traced back to at least the 1990s, gaining traction in the aftermath of Newt Gingrich’s 1994 “Republican revolution.” It coincides with the rise of the World Wide Web, but I would caution against drawing the conclusion that the Internet is what is driving it. Rather, it is a noteworthy accident of history that the rise of the Web immediately follows the fall of the Soviet Union. Governing elites in the United States no longer had to fear how their behavior would be read by a hostile foreign adversary. They almost immediately began testing old norms of good governance and bipartisan cooperation, and found that the violation of these norms did not carry a social penalty. Our politicians have learned that they can tell blatant lies on the Senate floor and in campaign commercials, and neither the media nor the mass public will exact a cost for their actions. In the meantime, online misinformation has provided ongoing additional evidence that the mass public was not paying close attention and that the myth of the well-informed public could be blithely cast away with little immediate consequence.

Social trust in government and the media is eroding. Technology plays a part in all of this. But changing media technology is more of an ensemble cast member than a headlining star in the narrative. The threat we face today is not that the political knowledge of the citizenry has declined due to online misinformation. The direct effects of misinformation on social media are small, just as the direct effects of all other forms of propaganda have been small. The great danger is that the current digital media environment is exposing the myth of the attentive public, increasing the pace at which political elites learn they can violate the norms of governance with impunity.

Conclusion

Writing in 1997, Tom Dowe remarked, “When the barriers come down, when people cease to trust the authorities, they – some of them, anyway – become at once more skeptical and more credulous.” Over the intervening twenty years, the barriers have been in a perpetual state of decline. Trust in all sorts of authority has slipped as well. The credulous skeptics have only gotten more vocal and prominent. The early Web, as Esther Dyson states in Dowe’s article, was “terrible at propaganda, but wonderful at conspiracy.” Today’s Internet excels at both. And though digital propaganda may not directly change many voters’ minds, its second-order effects hasten the erosion of the very foundations of American democracy. What, if anything, can be done to reverse this trend?

The path to repairing our load-bearing democratic myths and constructing a healthier information ecosystem is neither simple nor straightforward. No single political leader, tech company, or journalistic organization can fix these issues on their own. But there is a role to be played by each. Here is what I imagine those roles might look like.

First, there are the platform monopolies – Google, Facebook, and Twitter.17 In the immediate future, it seems the platforms are going to shoulder an uncomfortable burden. The US government is facing a crisis of competence; the regulatory state is in disarray: the FEC no longer operates. Other government agencies are mired in scandals, run by political appointees whose main qualifications tend to be their personal ties to the Trump organization. Google, Facebook, and Twitter should not be determining how we regulate disinformation and propaganda. Such regulatory decisions are beyond what is appropriate to their role and beyond their expertise – the boundaries of acceptable political speech should not be determined by a handful of profit-maximizing firms. But in the near future, there is little hope of genuine regulatory oversight. The platforms will be blamed for the ways in which they are misused in the next election, so they will need to take an active role in determining and enforcing the boundaries of appropriate behavior. In the long term this is an untenable situation, but in the short term the platforms stand in as self-regulators of last resort.

Next, there are the political elites. We are going to need our politicians to start believing in the myth of the attentive public again – not because the public is in fact closely watching, but because American democracy only works when our elected officials behave as though they are under close and meaningful scrutiny. Disinformation and propaganda can reduce the public sphere to endless static and noise. It can drown out the very notion of an overriding public interest. But it can only do so if our political elites choose to behave as though it does. If American democracy is to survive, we are going to need public officials who take the public compact seriously. If the regulatory state is going to reclaim its important role, we are going to need to start repairing our regulatory capacity.

Finally, there are the journalistic organizations. As other authors in this volume have noted, the past twenty years have been a time of rapid change within the journalism industry. Much of that change has been more negative than was once predicted. Today’s journalism not only has to defend itself against being labeled “fake news” and “the enemy of the people,” it also has to compete with partisan propagandists in the struggle for relevance, attention, and revenue. Today’s media organizations should hold tight to journalistic principles and editorial judgment. That is what makes them different from the propagandists. The temptation to chase every controversy in service of more eyeballs and more clicks is neither healthy nor productive. Disinformation and propaganda campaigns thrive by creating controversies which then become news stories by virtue of their virality. Media organizations are at their strongest when they prioritize issues of public importance, and when they fulfill their role as watchdogs of political elites. They should focus on this mission not just because it is morally right, but also because it is what distinguishes them from the cheap content farms and partisan propagandists.

Today’s misinformation is not identical to the misinformation of the early Web, nor has it proceeded in a linear fashion. Rather, as the Internet has changed and the decades have passed, the quality and character of online misinformation has changed as well. Today’s misinformation travels further and faster. It is less traceable and harder for well-meaning individuals to evaluate on their own. Today’s misinformation is a strategic asset, at least for campaigns and particular digital media companies. Public mistrust is good for (some) politicians, at least those who traffic in authoritarian populist appeals. Jettisoning the myth of the well-informed public has worked out very well for some political elites. But it is also worth reminding ourselves that today’s Internet is not a finished product. The current version of the social Web does indeed seem to further accelerate public mistrust. This was not always true of the Internet. It is still changing. It is still governable.

The great conundrum we face is that our current political moment routinely and repeatedly reveals that the myth of the well-informed, attentive public can be easily rejected without immediate consequence. Myths are not true or false, but living or dead. Twenty years of online misinformation at an ever-accelerating pace threatens to kill this myth, and there will be consequences. The norms and assumptions governing elite behavior are everywhere tested, and everywhere proven to be easily violated without consequence. We can see, through digital trace data, that misinformation and lies are more clickable than policy details and truths. We can see, through high-profile examples, that political elites can adopt win-at-all-cost strategies and face no social penalty.

Online misinformation is not new. But today’s online misinformation is different, and dangerous. We can construct policy frameworks that change the Web and incentivize pro-social behavior and penalize misinformation. But it will be a long and winding path, requiring leadership and commitment from platforms, political elites, and journalistic organizations. Disinformation is a threat to American democracy, not because of how well it works, but because of what it reveals and enables.

7 Policy Lessons from Five Historical Patterns in Information Manipulation

Heidi Tworek

Comparisons between today and 1930s Nazi Germany are legion. Hardly a day passes without someone comparing Trump’s praise of Twitter as a way to reach the people directly to Nazi Propaganda Minister Joseph Goebbels’ purportedly similar praise of radio. In 2017, Daniel Ziblatt drew on his political science work about conservatives in the Weimar Republic (and their use of media) to coauthor with Steven Levitsky the popular book How Democracies Die.1 That same year, Timothy Snyder wrote a pamphlet-style book with twenty rules for how to survive fascism, drawing on his work on the 1930s and World War II.2

This does not mean that today is destined to be a rerun of the interwar period. But the resonances suggest historical patterns. These patterns can make us more critical about assertions of radical novelty in the present. If we fall into the trap of believing the novelty hype, we miss multiple important points. First, we might forget the path dependency of the current Internet.3 Second, we might misdiagnose contemporary issues with social media platforms by thinking about them too narrowly as content problems, rather than within a broader context of international relations, economics, and society. Third, we might focus on day-to-day minutiae rather than underlying structures. Fourth, we might think short-term rather than long-term about the unintended consequences of regulation. Finally, we might inadvertently project nostalgia onto the past as a “Golden Age” that it never was.

Some aspects of the Internet are unprecedented: the scale of its reach, the microtargeting, the granular level of surveillance, and the global preeminence of US-based platforms. But many patterns look surprisingly familiar – for instance: oligopolistic companies, political influence, and short-term thinking that focuses on media above and beyond broader societal problems. This chapter will explore five patterns from history that can help us to understand the present.

I developed the framework in this chapter for my testimony before the International Grand Committee on Big Data, Privacy, and Democracy in Ottawa, Canada in May 2019. This committee was formed in fall 2018, growing out of the work of the British Digital, Culture, Media and Sport (DCMS) Committee, which had been investigating the role of Facebook and social media in the Brexit referendum. In a highly unusual move, the British committee had travelled to Washington, D.C. to question representatives from social media companies. The committee had subpoenaed Mark Zuckerberg to appear before it in the United Kingdom. Zuckerberg declined. In response, Britain teamed up with Canada’s Standing Committee on Access to Information, Privacy and Ethics (ETHI), gathering twenty-four representatives from nine countries in total for hearings in London in November 2018.4 The second committee meeting, in Ottawa, included representatives from Canada and ten other countries, ranging from St. Lucia to Mexico to Estonia. Again, Mark Zuckerberg and Sheryl Sandberg were subpoenaed, and again they did not appear.5 When invited to testify before the committee, I worked on a framework that would provide a usable history for policymakers, but not one that simplified for the sake of political point-scoring. It is all too tempting to create a highlights reel from the past; it is far more productive to examine the history and bring that as evidence to the table.

Historian Sam Haselby has suggested a key distinction between history and the past:

Think of history as the depth and breadth of human experience, as what actually happened. History makes the world, or a place and people, what it is, or what they are. In contrast, think of the past as those bits and pieces of history that a society selects in order to sanction itself, to affirm its forms of government, its institutions and dominant morals.6

This chapter uses history rather than the past to discuss five patterns in the relationship between media and democracy. The history does not provide simple lessons that can be applied universally regardless of context. Instead, the history of media and democracy is messy and often counterintuitive. It often does not offer politically convenient answers. What history can give us is a long-term perspective, a way to ask broader questions, and another analytical approach to the current moment.

Five Historical Patterns
1. Disinformation is an international relations problem.

Information warfare may seem new. In fact, it is a long-standing feature of the international system. Countries feeling encircled or internationally weak may use communications to project international prowess. This was as true for Germany in the past as it is for Russia today. We are returning to a world of geopolitical jockeying over news. If the causes of information warfare are geopolitical, so are many of the solutions: they must address the underlying foreign policy reasons why states engage in information warfare. That requires understanding when and why states use information warfare to achieve geopolitical goals.

Germans, for example, did not always care about international news. In the 1860s and 1870s, Germany was just unifying into a nation-state. Chancellor Otto von Bismarck cared about international relations. But he also cared about achieving German unification and then maintaining Germany’s status within Europe. Bismarck tried to influence journalists, particularly in London, Paris, and Berlin. He also intervened to ensure that Germany had its own semi-official news agency. But Bismarck did not mind that the global news supply system developed in such a way that British and French firms collected and disseminated most of the news from outside Europe, even for the German news agency.

Only from the 1890s did German politicians and business owners start to care about and object to this system. They believed that it enclosed Germany at a time when the country wanted to become an imperial and global power. The news supply system had not become less effective from a media perspective; it had become so from a political perspective. Germans turned to information to push this agenda: many Germans were convinced that they had lost the world war of words and now needed to send news around the globe to counter Allied propaganda.7

For a historian, it is strange to see Americans so surprised that information falls under foreign policy. There is a long, often forgotten history of “active measures” or disinformation.8 “Psychological warfare” was a key concept for the CIA during the Cold War and the Department of Defense during the Vietnam War.9 After the Vietnam War, the Carter and Reagan administrations both incorporated information into their national security strategies. By 2000, these strategies for active engagement abroad were known in the Department of Defense’s Joint Staff Officers’ Guide under the acronym of DIME: diplomatic, informational, military, and economic power.10

This historical perspective makes recent Russian efforts seem less of an anomaly. If information has long formed part of international relations, we should not be surprised to see Iran, Saudi Arabia, and other states using social media to fight perspectives they dislike.

2. We must pay attention to physical infrastructure.

It seems so easy to access information on smartphones and wireless devices that we forget the very physical infrastructure underpinning our current system. That current system also perpetuates inequalities in communication stretching back at least to the submarine cables and steamships carrying the post in the mid-nineteenth century.

The first submarine cable was laid between the United Kingdom and France in 1851. After two unsuccessful attempts, a transatlantic cable was completed in 1866. In the interwar period, Austrian writer Stefan Zweig would pick that event as one of his Sternstunden der Menschheit (Decisive Moments in History).11 Cables spread rapidly around the world. But they followed specific patterns. Instead of connecting previously unconnected places, they created denser networks where networks already existed. Cables quickly connected British imperial territories to London. The Atlantic soon housed the most cables. The major company laying cables was a conglomerate, the Eastern and Associated Telegraph Companies, headquartered in London, but with Anglo-American financial backing.12

The company focused on places that seemed profitable. Unsurprisingly, these were places with trade connections. Cable entrepreneurs laid cables where business already existed. In one instance, the managing director of the biggest multinational cable company, James Anderson, argued against a proposed cable from the Cape of Good Hope to Australia, via Mauritius. He said the Eastern and Associated Telegraph Companies simply did not lay cables where there was “not even a sandbank on which to catch fish.”13 Market orientation shut out connections where massive profits could not be made.

Cable entrepreneurs differed from current social media platforms in one key way: men like Anderson thought that telegraphy was a communications medium for elites and that most people simply would not pay for international telegrams. Telegrams were highly priced and only about ninety businesses made regular use of transatlantic cables in the first few decades of their existence (alongside governments and the press). Cable entrepreneurs subscribed to the paradigm of high-cost, low-volume, which differs from today’s social media unicorns who seek rapid growth and billions of users above all else. But those cable entrepreneurs created infrastructure systems that have influenced communications networks until today.

These apparently global communications infrastructures had imperial roots. Africa in particular seemed less important for telegraph companies, because there would be fewer high-paying clients than in white dominions. Britain’s “All-Red Route” around the world was completed in 1902 and enabled the British to send cables around the world while only touching on imperial soil. Of the entire African continent, the cables only landed in South Africa. Other cables spread up the coasts of Africa but with far less density than across the Atlantic. Racist beliefs about African colonial subjects’ inability to communicate dovetailed with imperial communications governance.

Submarine cables set precedents for later communications networks in the twentieth century, like telephone cables and fiber-optic internet cables.14 Cables were generally laid on ocean beds that had already been explored, as this saved money. This also followed the pattern of laying cables where proven markets for communication already existed. Fiber-optic internet cable networks resembled submarine cable networks until very recently. Africa had far fewer cables and much less Internet coverage.

These precedents are crucial in understanding our current Internet. The Internet may seem wireless, but fiber-optic cables actually carry 95 to 99 percent of international data. Thinking about the history of infrastructure pushes us to look at the full spectrum of platform companies’ businesses. It turns out, for instance, that Google and Facebook are also infrastructure providers: Google partly owns 8.5 percent of all submarine cables.15 Just as the Eastern and Associated Telegraph Companies eventually expanded to Africa, so too are Facebook and Google expanding: both companies intend to lay cables to Africa.16 Around a quarter to a third of Africans have internet access at present; by supplying the cables, Google and Facebook hope to increase the capacity of cables to Africa, lower the cost, and massively increase the market for their products.

Google is fully funding a cable from Portugal to South Africa via Nigeria. The company will name the cable Equiano, in honor of Olaudah Equiano, a Nigerian former slave who campaigned for the abolition of slavery in the eighteenth century. Equiano wrote about his experiences and travelled to London to push for the end of slavery.17 There is much irony in the name. The cable will land in South Africa, formerly a white dominion, and the site where Britain’s All-Red Route landed in 1902. An American-based company has appropriated the name of a former slave, while the cable itself represents an attempt by a Western company to appropriate provision of African communications.

More broadly, the cable ramps up competition between larger powers (the United States and China) over communications space. Chinese firm Huawei built around 70 percent of Africa’s 4G connections.18 Laying cables is part of a broader infrastructural competition over the supply of internet access to Africa. Beyond Africa, the Chinese government and Chinese companies are investing in 5G infrastructure while building international information networks through the news agency Xinhua, and a Belt and Road News Network (to accompany the Belt and Road Initiative’s other infrastructural projects). China aims to set the standards for 5G networks as a way to assert greater control over the next phase of global communications.

In the 1970s, Third World nations from Africa and Latin America called for a New International Information Order (later the New World Information and Communication Order).19 This was supposed to push back against Western dominance of news supply. It paid more attention to news firms such as news agencies than infrastructure. Now, however, African nations seem less concerned about China providing internet connectivity. Emeka Umejei from the American University of Nigeria noted in March 2019 that “most policymakers and politicians in Africa … don’t really care” about allegations that Huawei had installed listening devices in the African Union’s headquarters, a complex built by Chinese companies. Umejei called Africa “a pawn on the global chessboard in the ongoing geopolitical context.”20

China follows in a long tradition of states that see infrastructure and information as inextricably intertwined. These states invest in infrastructure for informational, geopolitical, and economic gains. The increasing contemporary attention to infrastructure parallels developments in the 1890s. Prior to that decade, most states were content with the submarine cable system and saw it as a neutral conduit of information. As international competition began to heat up between countries like Britain and Germany in the 1890s, both states started to see cables as the locus for growing geopolitical jockeying. Many states worried that cables were not neutral conduits of content. They feared, moreover, that states might subject cables to surveillance, that they might censor content, and that they might even cut cables in the event of a war.

A few decades later, these concerns led to infrastructure warfare. One of Britain’s first acts during World War I was to cut the submarine cables connecting Germany to the world. In retaliation, Germany devoted massive resources to cutting British cables with its submarines throughout the war. From May 1915 to April 1917 (when the United States entered the war), the German Navy cut every cable starting from Britain, except those across the Atlantic. These were sophisticated efforts. On occasion, the Germans even used a rheostat to emit false electrical signals about where the break in a submarine cable had occurred, which made it harder to repair the cables swiftly.21 Cables were as much a part of the war as other weapons.

Internet infrastructure receives surprisingly little attention in the press and scholarly communities. Perhaps cables seem too far removed from our everyday experiences with wireless smartphones. But these cables make international communication possible, and we ignore them at our peril. Information warfare is enabled by infrastructure, whether submarine cables a century ago or fiber-optic cables today.

Just as the history encourages us to look at infrastructures, it also encourages us to look at the structures enabling content dissemination. The history of the media industry should push us to pay attention to business structures as a crucial determinant of content.

3. Business structures are often more crucial than individual pieces of content.

The third historical pattern is that business structures are often more crucial than individual pieces of content. It is tempting to focus on the harm created by particular viral posts, but that virality is enabled by a few major companies who control the bottlenecks of information. Only 29 percent of Americans or Brits understand that their Facebook news feed is algorithmically organized; the most aware are the Finns at 39 percent.22 This control affords social media platforms huge power.

That power stems from the market dominance of platform and social media companies. Amazon, Apple, Alphabet (the parent company of Google and YouTube), Facebook (which also owns Instagram and WhatsApp), and Microsoft (owner of LinkedIn) together comprise one-seventh of the total value of the American stock market.23 That concentration of companies in a particular sector of the stock market is unprecedented.

However, business history can help us to understand how such circumstances affect content. For over a decade, business historians have been calling for scholars of management and entrepreneurship to take history seriously.24 This is no less true for the media business. It is notable that the runaway hit book of 2019 about the platforms was written by an emerita professor from Harvard Business School, Shoshana Zuboff. Zuboff argues that the companies accumulate data and are already using it to nudge our behavior. She calls this phenomenon “surveillance capitalism” because the companies surveil online behavior in order to monetize it. The ability to track people’s behavior across the Internet became key to the companies’ success.25 Some critics, like Evgeny Morozov, argue that Zuboff’s book mischaracterizes the capitalist aspect of the companies’ business model, which may be less effective in its targeting and advertising than it might seem.26

Business history offers several new ways to understand current problems. First, it pushes us to understand that bottlenecks have always existed in modern news delivery. Now it is Facebook, Google, and company; but those companies' role as a bottleneck for news resembles that of news agencies in the nineteenth and twentieth centuries. From the mid-nineteenth century, news agencies such as Reuters used the new technology of submarine cables to gather news from around the world and telegraph it back home for newspapers to print. Because foreign correspondents and telegrams were so expensive, only a few news agencies existed. They became gatekeepers controlling the flow of information, and they possessed astonishing power. In 1926, 90 percent of all German newspapers had no correspondents abroad or in Berlin; they received all their national and international news through news agencies or syndicate services. The bottleneck may now be algorithmic, but the problem of a few companies dominating news and determining how it is delivered is an old one.

Ironically, news agencies have become more powerful in print media again over the last few decades. More and more newspapers have cut foreign correspondents, so more newspapers print wire stories than ever, even large newspapers like the Globe & Mail. On July 22, 2019, for example, the Globe & Mail front section included nine international stories; eight of them came from non-Canadian news agencies or the New York Times.27 This concern is long-standing. In 2008, journalist Nick Davies published a book criticizing British newspapers’ excessive reliance on news agencies for information.28

Second, a business history approach shows how ownership can affect overall directions in content. New business structures like vertical integration and cross-subsidies were able to create concentration and corresponding power in the news market. One key example of this in Weimar Germany was Alfred Hugenberg. Hugenberg began as a local bureaucrat, then moved into heavy industry in the Ruhr region of western Germany before starting to accumulate a media empire just before 1914. Unlike other newspaper magnates like William Randolph Hearst or Lord Northcliffe, Hugenberg succeeded by importing techniques of vertical integration from heavy industry firms like Krupp.

Hugenberg used vertical integration to incorporate all aspects of the newspaper business from paper to advertising. In 1916, he purchased the ailing publishing house, August Scherl, which published many leading newspapers, like Berliner Lokal-Anzeiger and Der Tag, and popular magazines, like Die Gartenlaube and Berliner Illustrierte Nachtausgabe. Hugenberg founded the advertising agency, Allgemeine Anzeigen GmbH (ALA), in 1917 and owned numerous paper companies. In 1927, Hugenberg purchased Universum-Film AG (UFA), which produced and distributed films and cinema news reels called Wochenschauen. UFA was a 1920s YouTube (without the user-generated content). At that time, cinema newsreels were a new and critical form of news consumption. Largely forgotten today, they ran before every film. Most newspaper readers and cinema goers probably had little idea that Hugenberg owned their entire media diet.

The hidden networks of Hugenberg’s media products extended to a news agency, Telegraph Union. This was a loss-making company that received cross-subsidies from other, more successful firms in Hugenberg’s portfolio. From the early 1920s, newspapers faced increasing financial issues (due to rising paper prices, hyperinflation, and increased fixed costs), and Hugenberg’s companies offered subsidies to small newspapers as long as they subscribed to Telegraph Union. Even ostensibly nonpartisan papers often unwittingly presented a nationalist take by printing news from Telegraph Union, particularly in the provinces. The agency’s increasing success polarized the supply of information.

Hugenberg shaped his media empire as a right-wing enterprise with no party affiliation, believing that readers would stop reading newspapers that too obviously pushed one political party or industrial sector. Instead, Hugenberg’s media enterprises supported antisocialist and nationalist politics in general. From 1920 onward, every editor working for Hugenberg’s Telegraph Union was contractually obliged to “campaign for the route of political and economic reconstruction of Germany without party-political or other ties on a national basis.”29

Telegraph Union exerted tremendous power by framing events and setting news agendas. That power did not translate into political success: Hugenberg's political party lost half its vote share from 1928 to 1933 (falling from 14 to 7 percent). But these dynamics undermined the shared space for news within the increasingly febrile Weimar Republic and unintentionally laid the groundwork for the even more nationalist Nazis.

We tend to remember the Weimar Republic’s vibrant urban media culture, which was mostly liberal or left wing, but the business structures of Hugenberg’s media world were equally important. Similar problems plague our current analysis, where journalists and policy analysts still focus on celebrating the “Trump bump” in New York Times subscriptions and have only just started to understand the problems in local news beyond major urban centers. These analysts have not yet devoted sufficient energy to understanding the long-term trends, like those fostered by right-wing talk radio or other innovative conservative media initiatives and business structures. By contrast, historians like Nicole Hemmer and Brian Rosenwald are tracing the long-term dynamics of how conservative media activists and formats like talk radio might have been more important for explaining the rise of Trump than Fox News.30 And Jen Schradie’s work demonstrates that conservative activists have taken advantage of our new media environment more ably than groups on the left.31 These dynamics perhaps parallel Hugenberg’s successes (and maybe also his electoral failures because he was outmaneuvered by the further-right forces of the Nazi Party).

A focus on funding and business illuminates contemporary dynamics too. Many of the suggested reforms to social media companies are really about the companies' business model. The companies optimize for engagement: they are content-agnostic. This means they prioritize content that generates engagement and more time spent on the site, which in turn generates more advertising dollars. It does not matter if that content is extremist or cat videos. The companies are also incentivized not to investigate whether their content has problematic effects on users or, indeed, to reveal exactly how many people engage with which content and with what intensity. One obvious example is President Donald Trump's assertions of "conservative bias" at social media companies. The companies could publish investigations, which would almost certainly reveal that the claim is flawed; President Trump is highly unlikely to accept that finding. In August 2019, former Senator Jon Kyl, a Republican who represented Arizona, published a report commissioned by Facebook on the issue. Kyl's short report drew from interviews with over 100 unnamed groups and individuals to enumerate conservative concerns;32 it focused on conservatives' subjective experience of the platform without statistics published by Facebook itself. Facebook has not commissioned similar investigations for marginalized groups or even Democrats. The companies currently continue with models that optimize for engagement, no matter the externalities. Nicholas John has termed this "agnotology": the counterintuitive idea that the companies' business model requires them to assert high engagement or effective algorithms but not to investigate the full effects or to reveal transparent numbers.33
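To make this content-agnostic logic concrete, the toy sketch below ranks a feed purely by predicted engagement. It is entirely hypothetical: the field names, weights, and scoring function are invented for illustration, and no platform publishes its actual ranking code.

```python
# Hypothetical illustration of content-agnostic, engagement-optimized ranking.
# All names and weights are invented for this sketch.
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    topic: str                      # e.g. "cat video", "outrage-bait conspiracy post"
    predicted_clicks: float
    predicted_shares: float
    predicted_watch_minutes: float

def engagement_score(post: Post) -> float:
    # The ranker never inspects `topic`: only predicted engagement signals matter.
    return (1.0 * post.predicted_clicks
            + 3.0 * post.predicted_shares
            + 0.5 * post.predicted_watch_minutes)

def rank_feed(posts: List[Post]) -> List[Post]:
    # Highest predicted engagement appears first in the feed.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("cat video", predicted_clicks=120, predicted_shares=10, predicted_watch_minutes=40),
    Post("outrage-bait conspiracy post", predicted_clicks=90, predicted_shares=60, predicted_watch_minutes=55),
])
print([p.topic for p in feed])
# -> ['outrage-bait conspiracy post', 'cat video']
```

In this illustration, the outrage-bait post outranks the cat video simply because it is predicted to generate more shares and watch time; nothing in the ranking logic distinguishes between the two kinds of content, which is precisely the sense in which such systems are content-agnostic.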

The importance of ownership also extends to more conventional media products. Rupert Murdoch's Fox News and newspaper outlets are an obvious example. Oligarchs and publishers loyal to Viktor Orbán have silenced dissenting voices by purchasing Hungarian media outlets; in November 2018, nearly 500 media companies were transferred to a nonprofit foundation led by a publisher close to Orbán.34 We ignore newspapers, TV, and radio at our peril. Although their power is diminished, it remains considerable.

If media history reminds us to look at business structures, the present shows how transnational those structures can be. Far-right news outlets like Rebel Media in Canada seem to be funded by the American anti-Muslim, far-right think tank, the Middle East Forum.35 And Rebel Media was at one point paying Tommy Robinson, a leading far-right figure in the United Kingdom who founded the English Defence League in 2009.36 There are also currently questions over a Saudi Arabian partial purchase of the Evening Standard, a London newspaper edited from 2017 to 2020 by the former Chancellor of the Exchequer, George Osborne.37 Meanwhile, Chinese media influence reaches far, as one project (Chinfluence) is investigating in Eastern Europe. While most coverage appears to continue unaffected, Chinese ownership of Czech media led to much more positive coverage of China.38 The history of Hugenberg reminds us that we may not find the smoking gun of an owner telling journalists what to print; broader direction and ownership structures are enough. For tech companies too, business models explain much of the content we see. Alternative business models may solve more problems online than tinkering around the edges.

4. We need to design robust regulatory institutions and democracy-proof our solutions.

It is understandable that politicians worry in particular about elections and interference during campaigns, and many of the initiatives to counter disinformation focus on political consequences, such as the EU Code of Practice on Disinformation, US proposals for an Honest Ads Act, or the Canadian Elections Modernization Act. The German Network Enforcement Law (Netzwerkdurchsetzungsgesetz or NetzDG) enforces twenty-two statutes of German speech law online; it was passed swiftly before a German election in fall 2017 to show government action against social media companies.

However, the focus on the next election and the short term can obscure the long-term consequences of regulatory action. Often the most important developments take years to understand. Talk radio in the United States is a good example; another is the unintended consequences of regulating spoken content on the radio in Weimar Germany. Bureaucrats aimed to save democracy by increasing state supervision over content. This was meant to prevent seditious material that would bolster anti-democratic sentiment and actions. Ironically, however, these regulations ensured that the Nazis could far more swiftly co-opt radio content once they came to power in January 1933.39 Well-intentioned regulation had tragic, unintended consequences.

Weimar bureaucrats actively attempted to shape the media to save German democracy. They tried everything, ranging from subsidies to laws banning particular newspapers. A Law for the Protection of the Republic was passed in 1922; while the Weimar Republic had press freedom, this legislation foresaw the restriction of that freedom in exceptional circumstances. Nearly a decade later, in 1931, with rising violence on the streets, emergency decrees banned entire editions of newspapers for seditious content. Between 1930 and 1932 there were 284 bans in total, including ninety-nine for Nazi papers and seventy-seven for Rote Fahne (the Communist newspaper).40

Officials also tried to withhold official government news from Alfred Hugenberg’s anti-republican newspapers, particularly Berliner Lokal-Anzeiger, Berliner Illustrierte Nachtausgabe, and Der Tag. In December 1929, the Prussian Ministry of the Interior decreed that it would stop supplying these three newspapers with official publications due to their “invidious and extremely provocative way” of attacking the government and form of the state.41 The Social Democratic Minister of the Interior, Carl Severing, hoped that removing official material from the “anti-state press … would lead without further ado to a corresponding reorientation of the reading public.”42 Other ministries (like the Finance Ministry) disagreed and found it “improbable” that readers would subscribe to different papers, just to get official news.43 In Weimar Germany at least, bans seemed to exert no measurable effect on readership. Hugenberg’s newspapers started to receive government news again in 1932, after bringing a court case on the matter.44

We see a similar debate now about banning various figures like Alex Jones from social media. Will it amplify their message or remove them from view? Will it stoke claims of "conservative bias"? Will bans change users' habits or not? Multiple European countries like France and Germany have either enacted or are currently considering regulations enforcing bans on hate speech online. In the case of Germany's law, NetzDG, a prominent AfD politician, Beatrix von Storch, had a social media post removed the day that the law came into force. This prompted considerable discussion amongst journalists and ironically amplified von Storch's message, as well as giving prominence to the AfD's assertions that they were being censored by both mainstream news outlets and social media.45 In fact, removing whole networks can be counterproductive by pushing them to migrate to another platform and amplifying their sense of victimization. Wholesale banning may be less effective on social media platforms than other strategies, such as banning small groups of users from online hate clusters (groups of users propagating hate speech).46

Other regulatory debates similarly focus on removal over other possible solutions. The European Union plans to introduce terrorist content regulation that will require social media companies to remove terrorist content within one hour. The regulation does not define terrorism and leaves it to member-states to do so.47 It is troubling if legislation allows leaders like Viktor Orbán to define terrorism as they please. A historical view reminds us that any media legislation has to stand in the long term. Some might like a hate speech law requiring removals under President Emmanuel Macron; but would they like it under a President Marine Le Pen?

Any productive approach to regulation should consider how to democracy-proof our systems. Institutional design is key here. Robust institutions would, for instance, consistently include civil society. They would bolster data security and privacy. They would also be designed not to lock in the current big players and shut down possibilities for further innovation.

In the United States, for example, campaign finance reform would likely prove more effective than other suggestions. It does not appear to address tech companies directly, but it would address their increasingly important role in campaigns. Both Democrats and Republicans now outsource communication to companies like Facebook. (Facebook embedded employees in the Trump campaign, for instance.)48 At the same time, campaign finance reform would address longer-standing issues of influence from billionaires and hidden campaigners, as discussed in other chapters in this volume. These reforms would affect all candidates and charge the Federal Election Commission (FEC) with examining financial flows rather than content.

Other suggestions specifically for social media include regulating for transparency before intervening in content. A French proposal in May 2019 suggested the creation of a regulator who would enforce transparency and accountability from the largest social media companies. The idea is to create an ex ante regulator who will enable greater transparency from the companies and more involvement from civil society. The proposal followed a unique experiment in which French civil servants were embedded at Facebook for several months.49 This regulator would also enable third-party access for researchers. Such proposals are less interventionist than many other suggestions and less appealing to many of those clamoring for the regulation of content. Such calls are particularly understandable from people who have suffered extensively from doxing or abuse online. But it is worth considering whether less interventionist solutions will better uphold democracy in the long run. It is also worth considering whether much of the abuse is enabled by the particular business models of social media and the companies' lack of incentives to enforce their terms of service, which often already ban the behavior of abusive users.

One thing historians know is that humans are consistently terrible at predicting the future. We cannot foresee all the unintended consequences of our well-intentioned interventions. That does not mean we should do nothing, but it does warn us to democracy-proof our policy solutions. Or we might find ourselves undermining the very freedoms that we seek to protect.

5. Solutions must address the societal divisions exploited on social media.

The seeds of authoritarianism need fertile soil to grow; if we don’t address underlying economic and social issues, communications cannot obscure discontent forever. It would be an extreme oversimplification, for example, to attribute the rise of the Nazis to media strategies. The Great Depression, political unrest, discontent stoked after the loss of World War I and the Versailles Treaty, and elite machinations all played essential roles.50

Media amplified certain aspects of discontent and contributed to systemic instability. The continual coverage of scandals by papers across the political spectrum conveyed a sense of a democratic system that was not working. Historian Corey Ross has argued that German interwar obsessions with propaganda undermined the Weimar Republic “not only by nourishing right-wing notions of an authoritarian Volksgemeinschaft, but also by eroding democratic conceptualizations of public opinion across the political spectrum.”51 These attitudes mattered, but political behavior also dovetailed with people’s lived experiences of hyperinflation, unemployment, and street violence.

Media effects research over the past century warns us to beware of simple assumptions that equate exposure to media with political outcomes. So does historical research on the Weimar Republic. Bernhard Fulda examined a small town in Germany with one – right-wing – newspaper, which recommended that its readers vote one way in a referendum in the mid-1920s.52 The majority of the town voted the other way. Another study has found that Hitler's speeches appeared to have a negligible effect on how people voted (other than possibly in the presidential election of 1932). This suggests that media coverage of Hitler's charismatic speeches was less influential than scholars had previously assumed.53 Many other economic and social factors clearly shaped voter behavior. This does not mean that media do not matter. It means that we must be careful not to over-ascribe efficacy to individual pieces of content. The same is true for social media.54

Just as media in the Weimar Republic exploited or deepened extant social divisions, social media today often does the same. What has changed is the algorithmic and microtargeted delivery of news. Algorithms amplify particular pieces of content to increase engagement; Russian trolls, for example, have used this to their advantage by focusing on stoking controversy around issues such as Black Lives Matter or vaccination. People are most likely to share material online that angers them. The negative emotion of anger decreases our analytical functions, so we are more likely to believe the material; we are also more likely to repost it. As social media companies optimize for content that increases engagement, their algorithms may supply more material that angers us, inspiring sharing and engagement.55 The algorithmic bias toward anger is new; our anger-inspired analytical biases are not. Social media may amplify anger, but that anger also stems from real-world experiences of current conditions. As we continue to debate how best to address legacy and social media, we should not focus on those problems to the exclusion of others. Sometimes, media scholars are the people best placed to argue that other policy areas matter more. If we do not address pressing issues like growing inequality and climate change, improved social media communication will not stem discontent.

Conclusion

Over the past decade, I worked on a book about Germany's attempts – which nearly succeeded – to control world communications from 1900 to 1945.56 Amongst other things, I explain how Germany's democracy, with its vibrant media landscape, could descend into an authoritarian, Nazi regime spreading anti-Semitic, homophobic, and racist content around the world.

While I was writing this book, the present caught up with history. Far-right groups in Germany and around the world revived Nazi terminology like Lügenpresse (lying press) or Systempresse (system press) to decry the media. News was falsified for political and economic purposes. Minority groups were targeted and blamed for societal ills that they did not cause. As with radio, internet technologies designed with utopian aims have become tools for demagogues and dictators.

As these events unfolded, scholars tried to combat erroneous assertions of novelty. As Michael Schudson and Barbie Zelizer wrote in 2018, “To act as if today’s fake news environment is fundamentally different from that of earlier times misreads how entrenched fake news and broader attitudes toward fakery have been.”57 Attitudes toward fakery have changed over time and depending upon the medium. Andie Tucher has shown that in the late nineteenth and early twentieth centuries, faking in photography was prized as a way to make something appear more real.58 John Maxwell Hamilton and I have explored different forms of falsification in the history of news: faking for political purposes, both domestic and international; and faking for economic purposes, either to increase a newspaper’s circulation or to boost a product.59

What I have discussed in this chapter is not the content itself, but rather the structural conditions enabling falsification or disinformation. First, disinformation is also an international relations problem. Second, physical infrastructure matters. Third, business structures are more important than individual pieces of content. Fourth, robust regulatory institutions must take a long-term view that balances protecting freedom of expression with protecting democracy. Fifth, media exploit extant societal divisions.

Five years ago, the question was whether we would regulate social media. Now the questions are when and how. That development is a good one. But for regulation to protect democracy, we should also consider the questions raised by broader historical patterns.

8 Why It Is So Difficult to Regulate Disinformation Online

Ben Epstein

Efforts to strategically spread false information online are dangerous and spreading fast. In 2018, a global inventory of social media manipulation found evidence of formally organized disinformation campaigns in forty-eight nations, up from twenty-one a year earlier.1 While disinformation is not new, the ways in which it is now created and spread online, especially through social media platforms, increase the speed and potency of false information. As a report from the Eurasia Center, a think tank housed within the Atlantic Council, argues: "There is no one fix, or set of fixes, that can eliminate weaponization of information and the intentional spread of disinformation. Still, policy tools, changes in practices, and a commitment by governments, social-media companies, and civil society to exposing disinformation, and building long-term social resilience to disinformation, can mitigate the problem."2 In other words, false information purposefully spread online is actually a series of major problems that require an all-hands-on-deck approach.

The 2016 election and the revelations in the years since about the breadth of disinformation have opened many eyes to the potential impact of strategic dissemination of false information online.3 As this complex problem has gained greater attention, proposed interventions have spread at 5G speed. Heidi Tworek correctly notes in her chapter that five years ago there was a question about whether social media was going to be regulated. Today, that question has morphed into how and when. Tworek uses historical examples from Germany to provide greater context for the current disinformation age and outlines five historical patterns that create the structural conditions that enable disinformation. First, disinformation is a part of information warfare, which has been a long-standing feature of the international system. She argues that if the causes of disinformation are international in origin, some of the solutions must also be international in design. Second, physical infrastructure matters. The architecture of political communication spans a hybrid media system that includes traditional media along with digital forms, all of which have been used extensively for coordinated disinformation.4 Online disinformation is a strategy disseminated through the very infrastructure of the Internet, and effective regulation of disinformation requires an understanding of the organization and control of that infrastructure. Third, business structures are more important than individual pieces of content. In other words, as the main sources of information, those companies with market dominance must be understood as fundamental to the form that disinformation takes. Fourth, regulatory institutions must be "democracy-proof," with clarity of purpose, a long-term view allowing room for innovation, and structural guards against any takeover by those who would use such tools to increase disinformation for their own ends. Fifth, media exploit societal divisions, and it is these divisions that fuel so much of the disinformation spread online.

Disinformation is neither a new problem nor a simple one. This chapter aims to build on Tworek's historical patterns and apply them to the modern disinformation age in order to clarify the challenges to effective disinformation regulation and to offer lessons that could help future regulatory efforts. This chapter identifies three challenges to effective regulation of online disinformation. First, how to define the problem of disinformation in a way that allows regulators to distinguish it from other types of false information online. Second, which organizations should be responsible for regulating disinformation. As Tworek notes, the international nature of online disinformation, the physical structure of the Internet, and the business models of dominant online platforms necessitate difficult choices regarding who should be in control of these decisions. Specifically, what regulatory role should belong to central governments, international organizations, independent commissions, or the dominant social media companies themselves? Finally, we must ask what elements are necessary for effective disinformation regulation.

An analysis of these challenges yields four standards for effective disinformation regulation. First, disinformation regulation should target the negative effects of disinformation while consciously minimizing any additional harm caused by the regulation itself. Second, regulation should be proportional to the harm caused by the disinformation and powerful enough to cause change. Third, effective regulation must be nimble, and better able to adapt to changes in technology and disinformation strategies than previous communication regulations. And fourth, effective regulations should be as independent as possible from political leaders and from the leadership of the dominant social media and internet companies, and should be guided by ongoing research in this field.

Challenge 1: Defining the Problem

Terminology and definitions matter, especially as problems are identified and responses are considered. Disinformation is one of a few related, and often confused, types of false and misleading information spread online. There are many types of misleading information that can be dangerous to democratic institutions and nations. A number of recent studies have attempted to identify the definitional challenges associated with false or misleading information online in order to produce useful definitions for the purpose of more clearly understanding the problem.5 There are two axes upon which inaccurate information should be evaluated: its truthfulness and the motivation behind its creation.6 False information falls into two broad categories, disinformation and misinformation, depending on whether the information was spread intentionally or not. This chapter uses the definitions from Claire Wardle's essential glossary of the information disorder, which was also adopted by the High Level Expert Group (HLEG) on disinformation convened by the European Commission:7

  • Disinformation: False information that is deliberately created or disseminated with the express purpose of causing harm or making a profit.

  • Misinformation: Information that is false, but spread unintentionally and without intent to cause harm.

While helpful, these two baskets encompass a wide variety of information, only some of which has led to calls for greater scrutiny and regulation. The hodgepodge of terms and uses has been described as information disorder.8 Wardle describes seven different types of mis- and disinformation and offers a matrix that details types of false information (satire, misleading, manipulated, fabricated, impostor, false, etc.), the motivations of those who create it (profit, politics, poor journalism, passion, partisanship, parody, etc.), and the different ways that the content is disseminated (human vs. bot).9 Put simply, there is a need to recognize the difference between the false and misleading information spread by Russian troll farms to influence the 2016 election and satirical articles from The Onion.

The definitional challenges to creating effective regulation aimed at misleading and harmful information are further complicated because the term that has captured the popular imagination is neither misinformation nor disinformation. It is fake news. Hossein Derakhshan and Claire Wardle document the dramatic increase in the use of the term fake news by politicians, the public, and scholars alike, especially since the 2016 election.10 The increase in attention paid to fake news coincided with President Trump's weaponizing of the term.11

Fake news may be the catch-all phrase that has recently rung alarm bells the loudest; however, it cannot effectively be treated as the definitive embodiment of false information online because of its variety of forms, definitions, and uses. Fake news is a term that is great for clickbait but terrible as a target for effective regulation. It is a confusing and overly broad term that should be minimized in academic work and should not be used in any thoughtful discussion of regulatory efforts.12

Disinformation is the appropriate term for issues arising from intentional and harmful false information and is better suited for regulatory laws and legal action, because those responsible can potentially be identified. Disinformation can take many forms and may be conducted for economic or political gain. An example of disinformation for economic gain was the pro-Trump disinformation campaign spread by students in Veles, a town of 55,000 people in the country recently renamed North Macedonia – a campaign that was not ideological but instead was based purely on which messages received the most clicks and attention.13 Politically motivated disinformation can target electoral results or other sociopolitical outcomes, like the efforts by the Myanmar military to support a horrific ethnic cleansing campaign against the Rohingya, a Muslim minority group. For over half a decade, members of the Myanmar military conducted a disinformation campaign on Facebook that targeted the Rohingya and paved the way for brutal attacks, persecution, and rape, all on a colossal scale. The disinformation campaign was particularly effective because Facebook is so widely used in Myanmar, and many of its 18 million internet users regularly confuse the social media platform with the Internet itself.14

The High Level Expert Group (HLEG) assembled by the European Commission helpfully described how disinformation

includes forms of speech that fall outside already illegal forms of speech, notably defamation, hate speech, incitement to violence, etc. but can nonetheless be harmful. It is a problem of state or nonstate political actors, for-profit actors, citizens individually or in groups, as well as infrastructures of circulation and amplification through news media, platforms, and underlying networks, protocols and algorithms.15

Disinformation can take many forms and is linked to a varied group of actors who create it and a variety of platforms used to disseminate it. However, disinformation is always spread deliberately by a particular group of responsible actors and has the potential to cause harm. Recognizing these consistent traits serves as the starting point for any effective regulatory action.

Challenge 2: Who Should be in Control of the Regulation?

Regardless of the specific goals of effective regulation, the practical nature of implementation must be addressed. That involves determining who should do the regulating, and whether regulation is actually necessary at all. Any regulation must serve a particular purpose. Traditionally, regulations are put in place to protect or assist a population or a group within a population, and that need is clearly present here. Concerns about various types of false or misleading information online, and the need to address them, are widespread.16 When it comes to combating disinformation, there are three main options that have been adopted internationally: no regulation, self-regulation by industry leaders, or government regulation.

A system of minimal or no regulation is the starting position for many nations in the Western world, and is supported by free-market arguments about the benefits of letting the consumers and corporations make the decisions on both efficiency and ethical grounds. It is also articulated by a wide variety of lawyers, technology experts, media companies, and free speech campaigners, who have argued that hastily created domestic measures outlawing disinformation efforts may prove ineffective, counterproductive, or could manifest themselves as thinly veiled government censorship.17

Often an opposition to government regulation or action is coupled with a push to empower individuals and the public at large to develop skills to improve their digital literacy, in order to be better prepared when they encounter false information online.18 Research into media and digital literacy is extensive, and a number of important studies have specifically focused on understanding how we can identify and minimize the effects of false information online, especially when encountered on social media.19 However, this is all directed at helping people become better able to identify misinformation. As stated earlier, disinformation is much better suited for regulatory action because it is carried out with intent and, as such, there are groups or individuals who are responsible.

Government Regulation

The fight against online disinformation campaigns requires systematic interventions, and governments are often identified as the organizations with the size and resources to address the scale of the problem. Government regulation can take on many forms and, as of early 2019, forty-four different nations had taken some action regarding various forms of false information online. However, only eight of these nations had even considered actions specifically aimed at limiting harmful disinformation originating from either inside or outside the country.20

Governments are also notoriously slow to respond to complex problems, especially those involving newer technology, and the government response to disinformation is no different.21 Nearly three years after the 2016 US election, which featured a massive and successful disinformation campaign run by the Russian government to influence the election in favor of Donald Trump, the US Defense Department announced a program to identify disinformation posts sent on social networks in the USA. The Defense Advanced Research Projects Agency (DARPA) will test a program that aims to identify false posts and news stories that are systematically spread through social media at a massive scale. The agency eventually aims to be able to scour upwards of half a million posts, though the rollout will take years and the program will not be fully functional until well after the 2020 election, if ever.22 Relative to the speed of innovations in technology and disinformation strategies, the proposal put forth by the US Department of Defense moves at a glacial pace.

Beyond efficiency concerns, another daunting challenge to effective government regulation is finding the right balance between the expertise needed to regulate today's complicated, hybrid media environment and the independence from industry leaders needed to create policies that are as objective as possible.23 There is a long history of industry leaders influencing communication policy and regulations. In the American context, the Federal Communications Commission (FCC) and the Federal Radio Commission (FRC) were both heavily influenced by industry leaders, as were many efforts at internet regulation over the past decade, such as net neutrality decisions. Perhaps this should not be surprising when we realize how many of the members who have served on the FCC over the past eighty-five years came from careers working for the companies they were then asked to regulate.24 Nevertheless, government policies and actions often have unparalleled legal, economic, and political force, and have the potential to create the most sweeping and lasting changes.

Action taken at a national or even regional level, like the EU, may be insufficient to tackle many challenges caused by disinformation for a number of reasons, not least that political parties in many nations are aligned with movements spreading disinformation and hate speech; any new government standards therefore run the risk of being branded as repressive and politically motivated by these politicians and their supporters. This governmental role is further complicated by the international nature of disinformation that Tworek describes.

In one tragic example, days after members of the Sudanese military massacred a number of pro-democracy protesters in Khartoum in June 2019, an online disinformation campaign emerged from an unlikely source, an obscure digital marketing company based in Cairo, Egypt. The company, run by a former military officer, conducted a covert disinformation campaign, offering people $180 per month to post pro-military messages on fake accounts on Facebook, Twitter, Instagram, and Telegram. As investigators from Facebook pulled at the string of this company, they discovered that it was part of a much larger campaign targeting people in at least nine nations in the Middle East and North Africa, emanating from multiple mirror organizations existing in multiple countries. Campaigns like this have become increasingly common, used both by powerful states like Russia and China, and smaller firms, aimed at thwarting democratic movements and supporting authoritarian regimes.25

This recent Sudanese case involves every one of Tworek's historical patterns and raises the question: what form of regulation could best limit the harmful effects of these anti-democratic disinformation campaigns? In this case, the platforms used to post messages were central to the campaign, and therefore such platforms must be included either in externally enforced self-regulation, in the mode of the EU Code of Practice on Disinformation, or in traditional regulation that has the power to impose fines and penalties.

Internet infrastructure, communication, commerce, politics, and false information all extend beyond borders, yet decisions about policies and regulations are often national in origin and enforcement. For over two decades, scholars have explored the jurisdictional complexities of internet regulation.26 While there are exceptions, such as the high-level group organized by the EU and the longstanding efforts of the Internet Corporation for Assigned Names and Numbers (ICANN), most internet regulation is national, and many nations hold different cultural, political, and ethical positions regarding whether, when, and how to regulate.27

There is a wide variety of positions about whether or not governments should actively regulate what is or is not true online. However, there is no question that the problem is pervasive. The 2018 Digital News Report found that a large portion of citizens across the world had been exposed, in the week preceding the survey, to information that was completely made up, either for political or for commercial reasons.28 But there is a wide discrepancy in how people around the globe feel about the role of governments in fighting misinformation.29 It is widely understood that privacy rights have been weighed more heavily against the interests of content providers in places like Europe than in America. These values have helped to shape different government actions regarding the Internet more broadly, and online disinformation in particular.30

The First Amendment has been a consistent source of resistance to media regulation throughout American history, especially for content creators. While the protections of the First Amendment have extended much more broadly to print media than broadcast, the Internet has generally been regulated lightly. Beyond the First Amendment protections, any interventions that aim to regulate content creators or internet service providers (ISPs) will confront the long-standing legal protections provided by Section 230 of the Communications Decency Act of 1996 (CDA 230). CDA 230 is a key legal provision which broadly shields platforms from legal liability for the actions of third-party users of their services, and it has been seen as a cornerstone supporting free expression on the Web. CDA 230 has also been used to inhibit platform responsiveness to the harms posed by harassment, defamation, child pornography, and a host of other activities online. Therefore, the escalating debates on how to address disinformation online will join a long history of efforts to reform or eliminate the shield provided by CDA 230.31

Though there are legal and constitutional challenges that inhibit government action in the United States, decisions made there will have a disproportionate impact on the rest of the world. This is because the majority of major global content providers and social media platforms were founded in and primarily operate out of the USA. Thus Facebook, Twitter, Google, Apple, and Amazon, all dominant global players, could be affected by actions taken in the United States. While each of these companies and platforms has been affected by regional or national policies in various parts of the world, the United States would have more authority than any other country to force structural change or to mandate action regarding disinformation online.

The Power of the Platforms and Self-Regulation

The physical infrastructure and business models that Tworek notes are often overlooked when it comes to the causes of disinformation and potentially effective regulations. This is exemplified by the small number of dominant platforms that act as the lungs of disinformation campaigns. These platforms have been designed to keep users interested, engaged, and logged on as long as possible through the use of sticky content. This content is supported by black-box algorithms that drive the experiences of users and that must play a role in potential regulatory decisions. Algorithms are one of the most important curators of internet users' media intake in the modern hybrid media system.32

It has been shown that algorithms often steer users to extreme content, especially on Facebook and YouTube, two of the most prominent platforms used for spreading disinformation around the world.33 One employee of Google-owned YouTube compiled a grouping of YouTube videos associated with the alt-right, a loosely connected right-wing movement in the USA that peddles misogynistic, nativist, white supremacist, Islamophobic, and anti-Semitic rhetoric, including conspiracy theories and disinformation campaigns. The analysis found that alt-right videos on YouTube were extraordinary in size and reach, comparable to music, sports, and gaming channels, and aided by algorithms.34

Some nations are trying different ways to reduce the power of these platforms. In some instances, nations are attempting to force platforms to counter the effects of their very successful business models. In March 2018, after the Cambridge Analytica scandal, in which Facebook allowed the company to harvest tens of millions of users' data for "psychological profiling" and use it for political purposes, Germany sought to stop the disinformation spread on Facebook. While the goal is a good one, the means that Germany chose was to try to gain access to the black box that is Facebook's algorithm. There are many concerns about this approach. First, the legality of forcing Facebook to disclose its proprietary algorithm is far from a given. Second, it is unlikely that making such information more transparent would actually help Facebook users identify and avoid disinformation spread on their pages as much as other efforts, like making the funding of political ads on Facebook more obvious. Third, this approach is not targeted directly at disinformation. And finally, this effort could potentially be counterproductive, as greater transparency of Facebook's algorithm could give greater power to those who would seek to create disinformation campaigns in the future.35

Government action often extends to related areas, including limiting the size and reach of individual companies or their use of data, or protecting the privacy of users.36 For instance, there have been increasing calls for the breakup of massive media companies like Facebook, Amazon, and Google.37 In September 2019, official antitrust investigations were launched by multiple states into Facebook and Alphabet, the parent company of Google.38 Meanwhile, the FBI, the Department of Homeland Security, and the Director of National Intelligence have met with leaders from platforms like Facebook, Google, Microsoft, and Twitter to focus on national security issues on the platforms in connection with the 2020 election.39 There is no question about the power of the dominant platforms. The only question is whether they will be in charge of self-regulation or whether governments or international commissions will take the reins.

Self-Regulation

Mark Zuckerberg once stated that, "in a lot of ways Facebook is more like a government than a traditional company. We have this large community of people, and more than other technology companies we're really setting policies."40 He was right. And this reality aptly describes other behemoth social media and internet companies like Google, Amazon, Apple, Microsoft, Twitter, WeChat, and Alibaba that play central roles in the spreading of information, fake or otherwise. Facebook and other content companies make and enforce policies about online content every day, and the option of allowing, or aiding, a self-regulatory approach is a path that many support. As the 2018 Digital News Report found, far more online news consumers prefer media or tech companies working to identify real and false news than governments.41

Self-regulation of internet content is far from a new option and has evolved with the growth of numerous institutions and self-regulatory systems over the past two decades.42 One advantage of self-regulation is that media companies simply understand how they work best and are often motivated to provide effective self-regulation in lieu of potential government action that could be more disruptive to their services or business. There are also legal reasons in many nations why more heavy-handed government regulation is either more difficult or flatly illegal.

All of these considerations led the European Commission, the executive branch of the European Union, to adopt its standard policy-making path for addressing emerging issues involving technological challenges, a path that was then used to create the EU Code of Practice (CoP) on Disinformation. The CoP on Disinformation was put into practice in early 2019, a few months before the EU parliament elections in May 2019.43 Importantly, the EU CoP preferred self-regulation over traditional government-directed regulation to target and reduce disinformation at this stage because self-regulation was seen as faster and more flexible, and because there was no tested top-down solution for the problem of disinformation.44

The options for control are not a binary choice between autonomous self-regulation by the powerful platforms themselves and legislation handed down by national or international governmental bodies. Independent commissions are likely to play an important role in the regulation of disinformation moving forward because they can have greater impartiality from government or corporate control; can potentially act more nimbly than governments; and can have the authority to hold companies or individuals accountable. In March 2019, Mark Zuckerberg surprised some by admitting that his platform had too much control. He stated that he supported increasing regulatory action specifically aimed at protecting election integrity, privacy, data portability, and harmful content including disinformation. He also went further, promising to establish an independent group working within Facebook to help guide these efforts. In September 2019, Facebook unveiled its plans for a new independent board that could have the power to review appeals made by users and make decisions that could not be overruled, even by Zuckerberg. This Facebook "Supreme Court" is not initially focused on curbing disinformation on the platform, but could evolve into a larger board with multiple foci. Regardless, it serves as an example of a powerful independent group working within a company with broad authority to make and enforce reforms.

Challenge 3: What Should Effective Regulation Look Like?

Regulation is often as tricky as it is controversial. Tworek offers extremely helpful, historically grounded guideposts for effective disinformation regulation. As she describes, effective regulation should be forward-thinking, adaptable, clear in focus, and responsive to changes in technology and the international nature of both online communication and disinformation campaigns. Perhaps most challenging, effective regulation of disinformation should aim to protect the democratic ideals, structures, and nations that have been threatened, but should also remain "democracy-proof" enough to avoid the takeover of regulatory efforts by powerful actors who would aim to use such tools, through political means or otherwise, to further their disinformation goals. Therefore, it should remain vigilantly independent.45 The stakes are as high as the difficulties faced.

Disinformation strategies and the digital tools and platforms that are used to spread it are changing quickly, yet regulatory action is notoriously slow. Margaret O’Mara, historian and expert on the history of the technology industry, sums it up well: “Technology will always move faster than lawmakers are able to regulate. The answer to the dilemma is to listen to the experts at the outset, and be vigilant in updating laws to match current technological realities.”46 Many of the most important regulatory frameworks governing the Internet today originated in the 1990s, when the Internet was a far cry from what it is today, and today’s leading social media platforms and online disinformation campaigns were nonexistent.47 It is important that regulations, though long overdue, are clearly targeted and proportional. Some nations, like Germany, have been quick to act. However, there are concerns that some of the early regulatory steps may be excessive and potentially ineffective.

Another concern is whether the regulatory teeth are proportional to the harms found, and large enough to change the actions of some of the most profitable and influential companies on earth. Recent instances in the USA, aimed at penalizing major platforms for past inaction, serve as a good example. After a spiraling investigation sparked by the Cambridge Analytica scandal, the Federal Trade Commission (FTC) levied a five-billion-dollar fine, its largest ever, on Facebook in July 2019. While large in absolute dollars, it is less than a third of the $16 billion in revenue Facebook earned in the second quarter of 2019 alone. It is also notable that, although the FTC considered a much larger fine along with the requirement for changes in Facebook's practices, both were scrapped due to fears of a drawn-out court battle. Two months later, Google agreed to pay $170 million in fines to the FTC for violating the 1998 Children's Online Privacy Protection Act due to data collected from children by YouTube, a part of Google. Alphabet, the parent company of Google, is set to make over $160 billion in revenue in 2019, $20 billion of which will be generated by YouTube. A fine of $170 million is a drop in the bucket.48 While neither of these regulatory actions is focused on disinformation, they are examples of how recent efforts to regulate internet companies and social media platforms over data or privacy issues are using outdated policy and ineffective penalties.

Thankfully, the work of providing thoughtful and comprehensive suggestions for effective policy aimed at disinformation has already begun. The most rigorous efforts so far have emanated from Europe. Wardle and Derakhshan produced one of the first of these efforts with their 2017 report for the Council of Europe, which aimed to define the major issues involved in what they label "information disorder" and to analyze its implications for democracy and for various stakeholders.49 They go on to offer suggestions for what technology companies, media companies, national governments, education ministries, and the public at large could do moving forward.

In November 2018, the Truth, Trust and Technology Commission at the London School of Economics and Political Science published a report called "Tackling the Information Crisis: A Policy Framework for Media System Resilience." In this report, the commission defined "five giant evils" of the information crisis that affect the public and should be targeted by thoughtful policy: confusion, cynicism, fragmentation, irresponsibility, and apathy. To fight against these evils, the report details short-, medium-, and long-term recommendations for the United Kingdom, which include an independent platform agency, established by law, to conduct research, report findings publicly, coordinate with different government agencies, collect data and information from all major platforms, and impose fines and penalties.50 The foundation of solid research included in the commission report is an important place to start. While there is a lot of good scholarship on disinformation, research gaps remain.51

A few months after the report, the UK government's Home Office and the Department for Digital, Culture, Media and Sport followed up these proposals in a white paper that called for a new system of regulation for tech companies aiming to prevent a wide variety of online harms, including disinformation. The white paper outlines government proposals for consultation in advance of passing new legislation. In short, it calls for an independent regulator that will draw up codes of conduct for tech companies, outlining a new statutory "duty of care" toward their users, with the threat of penalties for noncompliance including heavy fines, naming and shaming, the possibility of being blocked, and personal liability for managers. It notably describes its approach as risk-based and proportionate, though both terms are subjective.52

The white paper thus sets out expectations for companies that serve as guidelines for future regulatory action and codes of practice. However, any intervention aimed at fighting the harmful effects of disinformation must avoid creating more harm than it reduces. In particular, many groups have already voiced concerns about regulation's potential to chill innovation, and about a slippery slope toward censorship and free speech violations resulting from efforts to reduce the effects of disinformation.53 Proof of harm caused by disinformation is not always clear-cut, and the potential for major restrictions on free speech grows as subjective judgements accumulate. It is also unclear how to regulate problematic information spread with differing intentions, such as anti-vaccination content, which spreads across the world like a disease yet often lacks a clear economic or political motivation.54

The Lessons Learned from the Challenges of Regulating Disinformation

The distance between thoughtful recommendations to combat disinformation and effective regulatory policies is vast, owing to political complications, divergent philosophies about the dangers and threats to democratic processes and ideals, and regional differences. In addition, online disinformation does not exist in isolation; it is bound up with other concerns that have led many to call for reform and regulation, including data security, privacy, and the outsized power and influence of platforms like Facebook and YouTube.55 The EU General Data Protection Regulation (GDPR), in effect since May 2018, is a case in point. The GDPR is arguably the most important change in data privacy regulation in decades and can affect disinformation efforts in a number of ways, most notably by constraining the platforms and companies used to spread disinformation.56

There are many reasons why regulating disinformation online is difficult, but the time for simply admiring the problem is over.57 This chapter has detailed the complex challenges facing those who seek to design and implement effective disinformation regulation. The first set of challenges centers on definitions: distinguishing between misinformation and disinformation, and explaining why disinformation is ripe for regulation while misinformation is not. The second is determining who should control regulations and their implementation; governments, independent commissions, or self-regulation by the social media and internet companies themselves could all play a role. Finally, there is the question of what effective disinformation regulation should look like, and what it should avoid.

The challenges are real and daunting, but thoughtful efforts toward disinformation regulation have already begun. When we distill these early efforts down to their consistent themes, and view them through Tworek's historical lens, four standards for effective disinformation regulation stand out. First is a regulatory Hippocratic oath: disinformation regulation should target the negative effects of disinformation while minimizing any additional harm caused by the regulation itself. Second, regulation should be proportional to the size of the harm caused by the disinformation and to the economic realities of the companies potentially subject to regulation. Third, effective regulation must be nimble, able to adapt to changes in technology and disinformation strategies more readily than previous communication regulations have. Fourth, effective regulations should be determined by independent agencies or organizations guided by ongoing research in this field.

It is extremely difficult to regulate online disinformation effectively. However, understanding the complex sources of the regulatory challenges, and the historical patterns that have contributed to them, will help current and future efforts to curb the harms caused by online disinformation. The Eurasia Center was correct: there is no single fix, or set of fixes, that will completely mitigate the dangers of strategic disinformation campaigns. However, the four standards identified in this chapter can serve as a guide as online disinformation, and the regulatory efforts to stop it, continue into the future.
