There is abundant anecdotal evidence that nondemocratic regimes are harnessing new digital technologies known as social media bots to facilitate policy goals. However, few previous attempts have been made to systematically analyze the use of bots that are aimed at a domestic audience in autocratic regimes. We develop two alternative theoretical frameworks for predicting the use of pro-regime bots: one focused on bot deployment in response to offline protest and the other on bot deployment in response to online protest. We then test the empirical implications of these frameworks with an original collection of Twitter data generated by Russian pro-government bots. We find that online opposition activities produce stronger reactions from bots than offline protests. Our results provide a lower bound on the effects of bots on the Russian Twittersphere and highlight the importance of bot detection for the study of political communication on social media in nondemocratic regimes.
Debates around the effectiveness of high-profile Twitter account suspensions and similar bans on abusive users across social media platforms abound. Yet we know little about the effectiveness of warning a user about the possibility of suspending their account as opposed to outright suspensions in reducing hate speech. With a pre-registered experiment, we provide causal evidence that a warning message can reduce the use of hateful language on Twitter, at least in the short term. We design our messages based on the literature on deterrence, and test versions that emphasize the legitimacy of the sender, the credibility of the message, and the costliness of being suspended. We find that the act of warning a user of the potential consequences of their behavior can significantly reduce their hateful language for one week. We also find that warning messages that aim to appear legitimate in the eyes of the target user seem to be the most effective. In light of these findings, we consider the policy implications of platforms adopting a more aggressive approach to warning users that their accounts may be suspended as a tool for reducing hateful speech online.
Do online social networks affect political tolerance in the highly polarized climate of postcoup Egypt? Taking advantage of the real-time networked structure of Twitter data, the authors find that not only is greater network diversity associated with lower levels of intolerance, but also that longer exposure to a diverse network is linked to less expression of intolerance over time. The authors find that this relationship persists in both elite and non-elite diverse networks. Exploring the mechanisms by which network diversity might affect tolerance, the authors offer suggestive evidence that social norms in online networks may shape individuals’ propensity to publicly express intolerant attitudes. The findings contribute to the political tolerance literature and enrich the ongoing debate over the relationship between online echo chambers and political attitudes and behavior by providing new insights from a repressive authoritarian context.
“Clickbait” media has long been decried as an unfortunate consequence of the rise of digital journalism. But little is known about why readers choose to read clickbait stories. Is it merely curiosity, or might voters think such stories are more likely to provide useful information? We conduct a survey experiment in Italy, where a major political party enthusiastically embraced the aesthetics of new media and encouraged its supporters to distrust legacy outlets in favor of online news. We offer respondents a monetary incentive for correct answers in order to manipulate the relative salience of the motivation for accurate information. This incentive increases differences in the preference for clickbait; older and less educated subjects become even more likely to opt to read a story with a clickbait headline when the incentive to produce a factually correct answer is higher. Our model suggests that a politically relevant subset of the population prefers clickbait media because they trust it more.
Does social media educate voters, or mislead them? This study measures changes in political knowledge among a panel of voters surveyed during the 2015 UK general election campaign while monitoring the political information to which they were exposed on the Twitter social media platform. The study's panel design permits identification of the effect of information exposure on changes in political knowledge. Twitter use led to higher levels of knowledge about politics and public affairs, as information from news media improved knowledge of politically relevant facts, and messages sent by political parties increased knowledge of party platforms. But in a troubling demonstration of campaigns' ability to manipulate knowledge, messages from the parties also shifted voters' assessments of the economy and immigration in directions favorable to the parties' platforms, leaving some voters with beliefs further from the truth at the end of the campaign than they were at its beginning.
The goal of this book is to synthesize the existing research on social media and democracy. We present reviews of the literature on disinformation, polarization, echo chambers, hate speech, bots, political advertising, and new media. In addition, we canvass the literature on reform proposals to address the widely perceived threats to democracy. We seek to examine the current state of knowledge on social media and democracy, to identify the many knowledge gaps and obstacles to research in this area, and to chart a course for future research. We hope to advocate for this new field of study and to suggest that universities, foundations, private firms, and governments should commit to funding and supporting this research.
Responding to an environment of panic surrounding social media’s effect on democracy, regulators and other political actors are rushing to fill the policy void with proposals based on anecdote and folk wisdom emerging from whatever is the most recent scandal. The need for real-time production of rigorous, policy-relevant scientific research on the effects of new technology on political communication has never been more urgent. This book represents a clarion call for making social media data available for research, with results concomitantly released in the public domain, even while recognizing the importance of privacy and the business interests of the firms. We hope this concluding chapter, as well as the entire volume, can be helpful in providing a path to do so.
Over the last five years, widespread concern about the effects of social media on democracy has led to an explosion in research across disciplines and corners of academia. This book is the first of its kind to take stock of this emerging multidisciplinary field by synthesizing what we know, identifying what we do not know and the obstacles to future research, and charting a course for future inquiry. Chapters by leading scholars cover major topics – from disinformation to hate speech to political advertising – and situate recent developments in the context of key policy questions. In addition, the book canvasses existing reform proposals in order to address widely perceived threats that social media poses to democracy. This title is also available as Open Access on Cambridge Core.
A growing body of research explores the factors that affect when corrupt politicians are held accountable by voters. Most studies, however, focus on one or a few factors in isolation, leaving our understanding of whether these factors condition each other incomplete. To address this, we embedded rich conjoint candidate choice experiments into surveys in Argentina, Chile, and Uruguay. We test the importance of two contextual factors thought to mitigate voters’ punishment of corrupt politicians: how widespread corruption is and whether it brings side benefits. Like other scholars, we find that corruption decreases candidate support substantially. But we also find that information that corruption is widespread does not lessen the sanction applied against corruption, whereas information about the side benefits from corruption does, and does so to a similar degree as the mitigating role of permissive attitudes toward bribery. Moreover, those who stand to gain from these side benefits are less likely to sanction corruption.
Michael Jordan supposedly justified his decision to stay out of politics by noting that Republicans buy sneakers too. In the social media era, the name of the game for celebrities is engagement with fans. So why do celebrities risk talking about politics on social media, which is likely to antagonize a portion of their fan base? With this question in mind, we analyze approximately 220,000 tweets from 83 celebrities who chose to endorse a presidential candidate in the 2016 U.S. presidential election campaign to assess whether there is a cost – defined in terms of engagement on Twitter – for celebrities who discuss presidential candidates. We also examine whether celebrities behave similarly to other campaign surrogates in being more likely to take on the “attack dog” role by going negative more often than going positive. More specifically, we document how often celebrities of distinct political preferences tweet about Donald Trump, Bernie Sanders, and Hillary Clinton, and we show that celebrities do indeed often go negative and that followers of opinionated celebrities do not withhold engagement when entertainers become politically mobilized. Interestingly, in some cases political content from celebrities actually turns out to be more popular than typical lifestyle tweets.