
Testing the Effect of Information on Discerning the Veracity of News in Real Time

Published online by Cambridge University Press:  08 November 2023

Kevin Aslett*
Affiliation:
School of Politics, Security and International Affairs, University of Central Florida, Orlando, USA
Zeve Sanderson
Affiliation:
Center for Social Media and Politics, New York University, New York, USA
William Godel
Affiliation:
Center for Social Media and Politics, New York University, New York, USA
Nathaniel Persily
Affiliation:
Law School, Stanford University, Stanford, USA
Jonathan Nagler
Affiliation:
Center for Social Media and Politics, New York University, New York, USA Wilf Family Department of Politics, New York University, New York, USA
Richard Bonneau
Affiliation:
Center for Social Media and Politics, New York University, New York, USA
Joshua A. Tucker
Affiliation:
Center for Social Media and Politics, New York University, New York, USA Wilf Family Department of Politics, New York University, New York, USA
*Corresponding author: Kevin Aslett; Email: kevin.aslett@ucf.edu

Abstract

Despite broad adoption of digital media literacy interventions that provide online users with more information when consuming news, relatively little is known about the effect of this additional information on the discernment of news veracity in real time. A comprehensive understanding of how information affects discernment of news veracity has been hindered by challenges of external and ecological validity. Using a series of pre-registered experiments, we measure this effect in real time. Access to the full article (relative to solely the headline/lede) and access to source information each improve an individual's ability to correctly discern the veracity of news. We also find that encouraging individuals to search online increases belief in both false/misleading and true news. Taken together, we provide a generalizable method for measuring the effect of information on news discernment, as well as crucial evidence for practitioners developing strategies for improving the public's digital media literacy.

Type
Research Article
Creative Commons
Creative Commons License: CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of American Political Science Association

Introduction

The risks of misinformation have become especially acute during the COVID-19 pandemic and in the context of rising antidemocratic movements. These forces have led scholars, civil society groups, and social media companies to develop novel intervention and mitigation strategies. Among those, perhaps the most common are digital media literacy interventions that seek to provide people with more context or information about an article. These interventions are based on the assumption that providing more information will prompt news consumers to rely on cognitive shortcuts and heuristics, which can often be helpful (Gigerenzer and Selten, 2002), especially in the case of discerning the veracity of news (Pennycook and Rand, 2019a). While this assumption underlies the design of these digital media literacy interventions, we have yet to establish whether individuals use informational cues to improve their news veracity discernment in the time period during which they are most likely to be exposed to a news article (within 48 hours of publication; Vosoughi et al., 2018).

We present results from multiple studies that test whether cues from additional textual, source, or external information improve news veracity discernment in real time.Footnote 1 By measuring these effects with similar research designs, we can also directly compare the effect sizes – rather than simply the direction – of different cues. To ascertain whether individuals use cues from additional information, we measure the marginal effect of providing three different types of information on the discernment of news veracity: providing the reader with the entire article rather than solely the headline/lede (textual cues); providing the reader an article with the source rather than without the source (source cues); and encouraging readers to search online for external information beyond the article itself rather than not encouraging online search (external cues). Importantly, our articles are selected and evaluated within 48 hours of publication – the period in which news consumption is most likely to take place. Taken together, these experiments capture the effect of three informational cues in real time.

Popular digital media literacy approaches are built on the assumption that additional information improves the ability of individuals to correctly discern the veracity of news, but we have little empirical evidence with which to evaluate this assumption. The lack of evidence risks introducing interventions that lack efficacy or, worse, produce unintended consequences. For example, a media literacy guide released by Facebook instructed individuals to rely on textual cues by looking through a news article for specific textual features, such as words written in all caps. The NewsGuard web extension focuses individuals' attention on source cues by providing easy-to-interpret source reliability information when consuming news (Aslett et al., 2022). In addition to focusing the attention of news consumers on information within an article, digital media literacy interventions have also encouraged individuals to utilize information external to the article itself, such as the United States Surgeon General's suggestion that individuals search online to help discern the veracity of news.

Previous studies have measured the effect of textual and source information (Austin and Dong, 1994; Dias et al., 2020; Kim et al., 2019) months after publication. Here, we present a new method to measure this effect directly after publication. Measuring belief in misinformation well after publication may produce different results than measuring it while the misinformation is most likely to be consumed "in the wild." The believability of misinformation – and thus the incentives for actors to produce it – may depend on the general information environment at the time of publication. Our studies, which source articles and collect evaluations in real time, situate evaluations within the information environment that the misinformation was generated to interact with, enabling more accurate measurement of levels of belief and of intervention effects.

In addition to the benefit of measuring this effect in real time, we also remove the researcher from the article selection process. Researchers in previous studies utilized articles that they themselves selected, regardless of popularity (Allcott and Gentzkow, 2017; Clayton et al., 2019; Pennycook and Rand, 2020), or that they themselves created (Moravec et al., 2018; Pennycook et al., 2018), potentially introducing sampling biases. Stimulus samples without clear selection criteria have been found to limit the validity and robustness of estimates in previous work measuring news discernment (Clemm von Hohenberg, 2020). To ameliorate these potential issues, we remove the researcher from the selection process and ensure that we only select the most popular pieces of true and false/misleading news in a given time period. Taken together, our methodological innovations allow us to measure the effects of additional pieces of information about popular news articles in real time.

Our experiments produce three important findings. First, we find that access to the full article, rather than just the headline/lede, improves the ability of an individual to correctly discern the veracity of news. To our knowledge, this is the first test of this effect; it shows that individuals use cues from the text of an article beyond what they learn from the headline/lede to help them evaluate news. This finding also suggests that studies relying solely on headlines and ledes may underestimate people's ability to identify the veracity of news when reading the whole article. Second, we find that source information also improves the ability of an individual to correctly discern the veracity of news. This effect is strongest when respondents receive source cues by traveling to the article's URL, suggesting that cues on webpages, such as the quality of the design or the type of advertisements, may be important. Past studies predominantly did not send respondents to the online news website when delivering source cues, so this result may explain why some studies identified no source effect on news veracity discernment (Dias et al., 2020). Finally, we find that being encouraged to search online for additional information increases the ability to identify the veracity of true news stories, but reduces the ability to identify the veracity of false news. In addition to contributing to the growing literature on people's ability to discern the veracity of news, these findings provide novel and important evidence for practitioners supporting digital media literacy initiatives.

Theory and hypotheses

In this manuscript, we test whether online users utilize textual, source, and external cues to help discern the veracity of both true and false/misleading news, building on recent literature advocating for testing intervention efficacy on veracity discernment beyond a narrow focus on misinformation (Guay et al., 2022). To this end, we measure the effect of providing three types of information to news consumers directly after publication: (H1) information in the text (relative to information contained solely in the headline/lede); (H2) source information; and (H3) searching online for additional information beyond the article itself.

Information in the text

We begin by investigating the possible effect of cues contained in the text of news articles, as opposed to solely in the headline or lede. News consumers likely use textual content to help them identify the veracity of an online news article through a number of specific cues: the confidence shown in claims (Zhang et al., 2018), quotes from outside experts (Sundar, 1998), logical fallacies (Frankena, 1939), grammatical errors (Jahanbakhsh et al., 2021), the emotional tone of the text (Zhang et al., 2018), or the claim appearing motivated or biased (Jahanbakhsh et al., 2021). Although these specific cues have been identified as helpful, no one has measured the aggregate effect of textual cues on the ability of individuals to discern the veracity of news. To do so, we test whether providing the full text relative to solely the headline/lede improves the discernment of news veracity.

Testing whether textual cues aid the discernment of news veracity has two important implications. First, many digital media literacy interventions encourage news consumers to inspect the full text. Second, the proportion of news that is consumed solely in headline/lede form on social feeds has increased,Footnote 2 and it is important to identify whether losing these textual cues impairs people's ability to correctly discern news veracity.

In Study 1, we test the effect of textual information by asking four groups of respondents to evaluate the same news articles in the same 24-hour period, varying whether they receive the full text of the article. To isolate the effect of textual information, we measured the textual effect both with and without the source. Varying the presence of the source is necessary to measure the textual effect per se; for example, the effect of providing the full text may be weak or non-existent when the source is available, because cues from the source of the article might outweigh cues from its text. Therefore, we test the effect of the full text in two different contexts (source present versus source hidden); Figure 1 outlines the different variations of text and source information provided to the four groups of respondents. Each respondent is randomly assigned to one treatment category and then evaluates all articles in that format.

Figure 1. Two-by-two table of the different combinations of text and source information provided to each group of respondents in Study 1.

By comparing evaluations of the same articles with different amounts of information available to the respondent, we can test the marginal effect of full-text information in different formats. We pre-registered the following two hypotheses:

H1.1: Respondents who are given the full text of an article to evaluate are more likely to match the veracity assessment of fact-checkers than those who are only given the headline/lede to evaluate (when source information is not present). To test this hypothesis, we compare the number of “correct” evaluations of articles in article format 1 to the number of “correct” evaluations of articles in article format 2.

H1.2: Respondents who are given the full text of an article to evaluate are more likely to match the veracity assessment of fact-checkers than those who are only given the headline/lede to evaluate (when source information is present). To test this hypothesis, we compare the number of “correct” evaluations of articles in article format 3 to the number of “correct” evaluations of articles in article format 4.
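To make these comparisons concrete, the following is a minimal sketch of how a format-versus-format test could be computed. It deliberately simplifies: it treats evaluations as independent (the paper's pre-registered models, described under Results, instead cluster standard errors by respondent), and the table and column names (`evals`, `article_format`, `correct`) are hypothetical, not drawn from the replication data. The format numbering follows the descriptions in the hypotheses.

```python
# Simplified sketch (not the paper's estimator): compare the share of
# evaluations matching the fact-checkers across two article formats with a
# two-sample z-test for proportions, ignoring respondent-level clustering.
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

def compare_formats(evals: pd.DataFrame, fmt_a: int, fmt_b: int):
    """Share of 'correct' evaluations in format fmt_a vs. format fmt_b."""
    sub = evals[evals["article_format"].isin([fmt_a, fmt_b])]
    grouped = sub.groupby("article_format")["correct"]
    n_correct = grouped.sum()    # correct evaluations per format
    n_total = grouped.count()    # all evaluations per format
    z, p = proportions_ztest(n_correct.values, n_total.values)
    return (n_correct / n_total).to_dict(), z, p

# H1.1: format 2 (full text, no source) vs. format 1 (headline/lede, no source)
# H1.2: format 4 (full text, source) vs. format 3 (headline/lede, source)
# rates, z, p = compare_formats(evals, 2, 1)
```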

Source information

Second, we turn to source information. Trust in information hinges at least partly on the reliability of its source (Druckman, 2022). Source information may affect one's belief in an online news article through two specific cues: the reputation of the publisher/domain and the quality of the webpage design. Traditional media provide cues through authenticity and reputation (Althaus and Tewksbury, 2000; Flanagin and Metzger, 2000), as well as the professionalism of the design of the website (Flanagin and Metzger, 2007; Fogg et al., 2001); conversely, low-quality news sources likely have either an unknown or negative reputation among most news consumers and less professional web design. Previous academic work on the effect of source information is mixed. Some studies found that source information affects the perceived veracity of information (Baum and Groeling, 2009; Berinsky, 2017; Kim et al., 2019; Sundar and Nass, 2001; Swire et al., 2017), while others found no effect (Austin and Dong, 1994; Dias et al., 2020; Jakesch et al., 2019). Past work, however, has not tested these effects in real time.

In addition, a number of digital media literacy interventions emphasize the credibility of sources, but the extent to which individuals utilize source cues remains unclear (Aslett et al., 2022). To determine whether individuals rely on source cues to help them assess whether an article is true, we measure the effect of providing source information from mainstream and low-quality news sources. Given the lack of pre-existing consensus, we pre-registered four different hypotheses about the effect of source information on belief that a news article is true.

Similar to Study 1, we isolate the effect of low-quality and mainstream news source cues by varying whether the full text or only the headline/lede is provided. It is possible that the effect of source cues differs by context. For example, the source cue could be strongest when the full article is available, because cues from the professionalism of the website's design may come across more strongly when respondents see the full website rather than solely the headline. Alternatively, a headline/lede contains less textual information, so readers may have to rely more heavily on source cues to make a veracity judgment. Figure 1 outlines the different variations of text and source information provided to the four groups of respondents.

The logic behind our pre-registered hypotheses is as follows. We expect that for articles from mainstream news sources, the source cue will make people more likely to believe an article is true, because mainstream news sources tend to produce true articles. Thus, the same article (or headline/lede) with a mainstream news source identified will be more likely to seem true. The converse, however, holds for low-quality news sources: knowing that an article (or headline/lede) is from a low-quality news source should make people less likely to believe the article is true. We therefore pre-registered the following hypotheses:

H2.1: Respondents are less likely to rate an article published by a low-quality news source as true when the source information is provided than when it is not provided (only the headline/lede is provided to all respondents). To test H2.1, we compare the number of "true" ratings of articles in article format 1 to the number of "true" ratings of articles in article format 3 for articles from low-quality news sources.

H2.2: Respondents are more likely to rate an article published by a mainstream news source as true when the source information is provided than when it is not provided (only the headline/lede is provided to all respondents). To test H2.2, we compare the number of "true" ratings of articles in article format 1 to the number of "true" ratings of articles in article format 3 for articles from mainstream news sources.

H2.3: Respondents are less likely to rate an article published by a low-quality news source as true when the source information is provided than when it is not provided (the full text of the article is provided to all respondents). To test H2.3, we compare the number of "true" ratings of articles in article format 2 to the number of "true" ratings of articles in article format 4 for articles from low-quality news sources.

H2.4: Respondents are more likely to rate an article published by a mainstream news source as true when the source information is provided than when it is not provided (the full text of the article is provided to all respondents). To test H2.4, we compare the number of "true" ratings of articles in article format 2 to the number of "true" ratings of articles in article format 4 for articles from mainstream news sources.

Searching for additional information

In recent years, a growing body of literature has tested the efficacy of interventions mitigating belief in misinformation (Guess and Munger, 2020; Pennycook et al., 2020; Roozenbeek and Van der Linden, 2019), but no work has directly tested the impact of using a search engine to evaluate news veracity in real time. Over the last few decades, users have become increasingly reliant on search engines to fact-check news stories they see online (Dutton et al., 2017), and social media companies,Footnote 3 civil society,Footnote 4 and government agenciesFootnote 5 have all encouraged users to research suspect news using search engines, with the goal of reducing belief in misinformation. In view of the increased use of and trust in search engines, as well as digital media literacy campaigns advocating for their use when encountering suspect news, it is critical that we understand the effect of online search on veracity discernment.

It is possible that when searching for information about a true news story, one will come into contact with similar articles that corroborate the claims in the initial article, thus increasing belief in true articles. However, this dynamic may also hold for false news articles. Recent qualitative work has found that when searching for information about false stories, individuals can fall into "data voids" (Golebiewski and boyd, 2019) in which only information from other non-credible sources appears. Online search for false articles may be especially problematic for those already predisposed to believing misinformation, such as those whose ideological perspective is consistent with that of the original false news article (Allcott and Gentzkow, 2017; Moravec et al., 2018). This vulnerability may also be compounded by recent findings suggesting that individuals may be more likely to find ideologically congruent news when using search engines such as Google (Robertson et al., 2018). Indeed, past research has shown that seeking out ideologically congruent information online leads some to adopt inaccurate beliefs (Peterson and Iyengar, 2021), but we are unaware of any prior research estimating the effect of searching online on news veracity discernment.

We set out to measure whether the additional information provided to a reader by online search affects their ability to correctly identify the veracity of the original news article. Given the lack of previous work directly measuring the relationship between online search and news discernment, it is important to first determine the direction of the effect, if any. Therefore, our pre-registered hypotheses strictly identify the predicted direction of the online search effect, but not the mechanism. Importantly, we test this effect in real time, so that respondents engage with the same search-engine environment they would likely encounter "in the wild." We pre-registered and tested three hypotheses:

H3.1: Individuals who are asked to search for evidence to help them evaluate a true news article are more likely to correctly rate the article as true (i.e., matching the fact-checker assessment) than respondents who are not asked to search for evidence to help them evaluate that same true news article.

H3.2: Individuals who are asked to search for evidence to help them evaluate a false/misleading news article are less likely to correctly rate the article as false/misleading (i.e., matching the fact-checker assessment) than those who are not asked to search for evidence to help them evaluate that same false news article.

H3.3: Individuals who are asked to search for evidence to help them evaluate a false/misleading news article are more likely to rate this story as true (i.e., incorrectly answer the assessment question) than respondents who are not asked to search for evidence to help them evaluate that same false news article.

We test the effect of searching for additional information online in our second study by asking two groups of respondents to evaluate the same news articles in the same 24-hour period. Those in the control group evaluate an online news article with the full text and source information on the website (Article Format 4 in Figure 1). Those in the treatment group evaluate the same articles in the same format and time frame, but are encouraged to seek out additional information online to help them evaluate the veracity of the article. Study 2 is outlined in Figure 2.

Figure 2. Diagram outlining Study 2.

In a step towards identifying a mechanism, we also explore whether partisan selection could explain the search effect on false/misleading articles. We run an exploratory (not pre-registered) analysis investigating whether the search effect is concentrated among those who are ideologically congruent with the perspective of the misinformation, as previous work would suggest.

Sampling and demographic characteristics

To test these hypotheses, we recruited survey subjects using Qualtrics. The groups of survey respondents were balanced every day in each article group by ideology, gender, age, and education. In both Studies 1 and 2, each respondent was asked to evaluate three different popular articles published within the previous 48 hours. Current methods for sourcing stimuli risk selection effects by using articles that have already been fact-checked or that were created by researchers themselves. To address these concerns, we created a transparent, replicable, and pre-registered article selection process that sources popular false/misleading and true articles from across the ideological spectrum within 24–48 hours of their publication. More specifically, we sourced the most popular article that had been published in the past 24 hours from each of the following "streams" of news: liberal mainstream news domains; conservative mainstream news domains; liberal low-quality news domains; conservative low-quality news domains; and low-quality news domains with no clear political orientation. Each day of the study, we took the most popular online articles from these five streams (using CrowdTangle for the mainstream sources and RSS feeds for the low-quality ones) that had appeared in the previous 24 hours and sent them to our respondents recruited by Qualtrics. Articles chosen by this algorithm therefore represent the most popular mainstream and low-quality news from across the ideological spectrum. These articles were also sent to a panel of six professional fact-checkers, and each article's veracity was determined by the mode of their evaluations. A full explanation of this process can be found in the Supplementary Materials, Section S.
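As a rough illustration of the selection step just described, here is a minimal sketch. It assumes each stream yields a popularity-ranked list of articles from the past 24 hours and that each article receives one label per fact-checker; all function and field names are hypothetical, and the study's actual pipeline is documented in the Supplementary Materials, Section S.

```python
# Illustrative sketch of the daily article-selection step. Assumes upstream
# code has already ranked each stream's articles by popularity (CrowdTangle
# for mainstream streams, RSS feeds for low-quality streams).
from collections import Counter

STREAMS = [
    "liberal_mainstream",
    "conservative_mainstream",
    "liberal_low_quality",
    "conservative_low_quality",
    "low_quality_no_orientation",
]

def select_daily_articles(ranked: dict[str, list[dict]]) -> list[dict]:
    """Take the single most popular article from each of the five streams."""
    return [ranked[stream][0] for stream in STREAMS]

def modal_veracity(labels: list[str]) -> str:
    """Assign veracity as the mode of the six professional fact-checkers' labels."""
    return Counter(labels).most_common(1)[0][0]

# e.g. modal_veracity(["true", "true", "false/misleading",
#                      "true", "true", "false/misleading"]) -> "true"
```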

Results

We now present the effect of providing respondents with three types of informational cues on evaluations of news veracity. To do so, we fit an OLS regression model with standard errors clustered at the respondent level to predict either correctly discerning the veracity of a news article (i.e., matching the evaluation of the professional fact-checkers) or rating an article as true, depending on the hypothesis we are testing. We control for a number of pre-registered variables: education level, age, gender, income, and ideology,Footnote 6 but also report the results from models that do not condition on these covariates (Supplementary Materials, Section I). We run all analyses using logistic regression and find similar results (Supplementary Materials, Section K).Footnote 7 We also run all analyses in Figures 3, 4, and 5a substituting a 7-point ordinal scale of veracity for the dichotomous measure, and all of our findings hold.Footnote 8 In each figure in this section, the first line of the y-axis label (in brackets) denotes the hypothesis and the type of information for which we are measuring an effect. In the next line of the y-axis label (in braces), we list the type of news articles on which we are testing this effect. In the final line of the y-axis label, we state in parentheses the other type(s) of information that are constant across the control and treatment groups.Footnote 9
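As a concrete illustration of this modeling setup, the sketch below fits a linear probability model with respondent-clustered standard errors, plus the logistic-regression robustness check. The data frame is synthetic and all column names are illustrative stand-ins for the replication data's variables; this is not the study's actual code.

```python
# Minimal sketch: OLS (linear probability model) with standard errors
# clustered at the respondent level, using synthetic placeholder data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_resp, k = 200, 3                       # each respondent evaluates 3 articles
n = n_resp * k
df = pd.DataFrame({
    "respondent_id": np.repeat(np.arange(n_resp), k),
    "treatment": rng.integers(0, 2, n),  # e.g., received full text vs. headline/lede
    "education": rng.integers(1, 6, n),
    "age": rng.integers(18, 90, n),
    "gender": rng.integers(0, 2, n),
    "income": rng.integers(1, 10, n),
    "ideology": rng.integers(1, 8, n),
    "correct": rng.integers(0, 2, n),    # 1 if evaluation matched the fact-checkers
})

formula = "correct ~ treatment + education + age + gender + income + ideology"

# Pre-registered-style linear model with respondent-clustered standard errors
ols = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["respondent_id"]}
)
print(ols.params["treatment"])           # estimated marginal effect of the cue

# Logistic-regression robustness check (cf. Supplementary Materials, Section K)
logit = smf.logit(formula, data=df).fit(
    disp=0, cov_type="cluster", cov_kwds={"groups": df["respondent_id"]}
)
```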

Figure 3. Marginal effects of providing the full text.

Figure 4. Marginal effects of providing the source.

Figure 5. Marginal effects of searching for additional information.

Marginal effect of providing full text

Using our results from Study 1, we begin by assessing the marginal effect of the full text of the article on news veracity discernment in real time. As Figure 3 shows, providing the full text to respondents improves the discernment of news veracity. When the source is not provided to respondents (H1.1), providing the full text of the article increases the likelihood that a respondent correctly discerns the veracity of an online news article by 0.089. When source information is available (H1.2), providing the full text has a smaller but still significant effect, increasing the likelihood of correctly discerning the veracity of an article by 0.059.

Marginal effect of source information

Similar to the marginal effect of the full text of the article, we also find that source information improves the discernment of news veracity in real time. Rather than testing the effect of source information on the discernment of news veracity, we focus on the effect of source information on the belief that a news article from a mainstream source or a low-quality source is true. Figure 4 shows that, as expected, when the full text is available to respondents, providing source information for an article from a low-quality source (H2.3) decreases the likelihood that one rates that article as true by 0.07, while providing source information for an article from a mainstream source (H2.4) increases the likelihood that one rates it as true by 0.043. If we restrict respondents to evaluating solely the headline/lede rather than the full text, Figure 4 shows that source cues for articles from low-quality sources remain strong: providing source information for an article from a low-quality source (H2.1) decreases the likelihood of rating the article as true by 0.06, but the effect of source information for an article from a mainstream source dissipates and is no longer statistically significant (H2.2). These results suggest that source effects are stronger when respondents visit the website and see the full text than when they only evaluate the headline/lede of an article. This may explain why previous studies investigating source effects without the full web page (Austin and Dong, 1994; Jakesch et al., 2019; Pennycook and Rand, 2019b) did not find any source cue effects. The format in which source cues are provided matters: the website itself appears to convey strong cues that may enhance the source effect.

Marginal effect of searching for additional information

Contrary to the marginal effect of providing the full text of an article or its source, we find that seeking out additional information has a mixed effect on respondents' ability to discern the veracity of news in real time. Figure 5a shows that encouraging respondents to search for information increases the likelihood of rating true articles as true by 0.071 (H3.1), but has no effect on the likelihood of correctly identifying false/misleading news as false/misleading (H3.2). More worryingly, Figure 5b shows that encouraging respondents to search for information increases the likelihood of rating false/misleading articles as true by 0.059 (H3.3).

To investigate whether the search effect in H3.3 can be explained by partisans seeking out ideologically congruent information online, we measured the marginal effect of seeking additional information among individuals who were ideologically congruent and ideologically incongruent with the original false/misleading article.Footnote 10 Figure 6 shows that encouraging respondents to search for information increases the likelihood of rating false/misleading articles as true by 0.09 among ideologically congruent respondents and by 0.12 among ideologically incongruent respondents. If seeking out attitudinally congruent information explained this effect, we would expect it to be much stronger among ideologically congruent individuals than among ideologically incongruent individuals. Instead, the effect is slightly stronger among ideologically incongruent respondents, although not statistically different from the effect among those who were ideologically congruent.
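A sketch of this exploratory comparison, again on synthetic placeholder data: subgroup regressions recover the two marginal effects, and an interaction term tests whether they differ. The `congruent` indicator and the other names are hypothetical (ideological congruence is tabulated as described in the Supplementary Materials, Section R).

```python
# Sketch of the exploratory congruence analysis on synthetic data:
# does the search effect on rating false/misleading articles as true
# differ between ideologically congruent and incongruent respondents?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600
df_false = pd.DataFrame({
    "respondent_id": np.repeat(np.arange(200), 3),
    "search_treatment": rng.integers(0, 2, n),  # encouraged to search online
    "congruent": rng.integers(0, 2, n),         # ideology matches article's slant
    "rated_true": rng.integers(0, 2, n),        # rated the false article as true
})

# Marginal effect of search within each subgroup (as in Figure 6)
for label, sub in df_false.groupby("congruent"):
    fit = smf.ols("rated_true ~ search_treatment", data=sub).fit(
        cov_type="cluster", cov_kwds={"groups": sub["respondent_id"]}
    )
    print("congruent" if label else "incongruent", fit.params["search_treatment"])

# Interaction test: do the two subgroup effects differ significantly?
inter = smf.ols("rated_true ~ search_treatment * congruent", data=df_false).fit(
    cov_type="cluster", cov_kwds={"groups": df_false["respondent_id"]}
)
print(inter.pvalues["search_treatment:congruent"])
```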

Figure 6. Marginal effects of searching for additional information, by ideological congruence with the false/misleading article.

Discussion

We offer new findings regarding the marginal effect of providing additional information on the discernment of news veracity in real time. First, we present novel empirical results showing that access to the full text of an article (as opposed to just the headline/lede) improves news veracity discernment. Second, we offer an original contribution to the mixed findings surrounding the effect of source information by testing stimuli in different formats (headline/lede and full text) and in real time. Third, we find that encouraging individuals to search for information increases the likelihood that an individual rates a true article as true, but it also increases the likelihood that an individual rates a false/misleading article as true by a similar magnitude.

The effects of these three informational cues emphasize the importance of how individuals come into contact with news stories online. Individuals may be more likely to believe misinformation if they are only exposed to the headline/lede of an article rather than the full text. In addition, these findings help us assess previous studies that expose respondents strictly to the headlines/ledes of articles rather than the full article: such studies are likely underestimating the ability of news consumers to correctly discern the veracity of news relative to when individuals have access to the full article.

Our finding about source information may explain past conflicting results. We find that source information is particularly important when the cue is strong (individuals access the full news story on the website), whereas it is weaker when the source cue is solely the logo of a publisher. Our study design helps disentangle past mixed results from studies that attempted to identify the effect of source information using different stimulus presentations.

Our finding that online search increases belief in false/misleading information is particularly concerning given that current digital media literacy guides recommend that individuals search for information when they come into contact with suspect news. It is also notable that, in an exploratory analysis, we do not find that this effect is concentrated solely among individuals who are ideologically congruent with the false article. Given that search engines remain an understudied but central component of the online information environment, we believe assessing the mechanism underlying this finding is a critical subject for future research.

Our findings are enabled by an innovative and transparent survey design that can be replicated to measure the efficacy of interventions designed to improve news veracity discernment in real time. Increasing the external and ecological validity of efficacy studies is crucial to identifying interventions that can reduce the harms of the online information environment. To this end, three features of our study design offer contributions to this growing topic of inquiry: measuring discernment in real time, selecting popular articles without researcher discretion, and maintaining consistent participant recruitment. More generally, this study underscores the importance of evidence-based interventions that are thoroughly tested, rather than intuitively designed.

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.1017/XPS.2023.20

Data availability

The data, code, and any additional materials required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at doi: 10.7910/DVN/1ONUFG (Aslett et al., 2023).

Competing interests

The authors whose names are listed on this paper certify that they have NO affiliations with or involvement in any organization or entity with any financial interest (such as honoraria; educational grants; participation in speakers’ bureaus; membership, employment, consultancies, stock ownership, or other equity interest; and expert testimony or patent-licensing arrangements), or non-financial interest (such as personal or professional relationships, affiliations, knowledge, or beliefs) in the subject matter or materials discussed in this manuscript.

Ethics statement

This study was approved by an independent ethics board at NYU (IRB-FY2019-3511). We also affirm that the research adheres to the APSA’s Principles and Guidance for Human Subjects Research. Section T of the Supplementary Materials expands on this ethics statement.

Footnotes

This article has earned badges for transparent research practices: Open Data and Open Materials. For details see the Data Availability Statement.

1 Pre-registrations are located here: https://osf.io/crdpg; https://osf.io/z92ad

2 Almost 3 in 10 American adults often consumed news on social media sites in 2019 (Pew Research Center, 2019), compared to only 1 in 10 Americans in 2008 (Pew Research Center, 2008).

3 See Constine (2017).

6 Variable explanations are in the Supplementary Materials, Section Q.

7 Model results are located in the Supplementary Materials, Section H.

8 Model results are located in the Supplementary Materials, Section J.

9 We report adjusted p-values to account for multiple hypothesis testing (Benjamini and Hochberg, 1995) in the Supplementary Materials, Section L. Our results are unchanged when applying these multiple hypothesis testing corrections.
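For reference, this adjustment can be reproduced with statsmodels' multiple-testing helper; the p-values below are hypothetical placeholders, not the study's results.

```python
# Benjamini-Hochberg false-discovery-rate adjustment over a set of p-values
from statsmodels.stats.multitest import multipletests

raw_p = [0.001, 0.012, 0.030, 0.041, 0.260]  # hypothetical raw p-values
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")
```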

10 Supplementary Materials, Section R details how ideological congruence was tabulated.

References

Allcott, H., and Gentzkow, M. 2017. "Social Media and Fake News in the 2016 Election." Journal of Economic Perspectives 31(2): 211–236.
Althaus, S. L., and Tewksbury, D. 2000. "Patterns of Internet and Traditional News Media Use in a Networked Community." Political Communication 17(1): 21–45.
Aslett, K., Godel, W., Sanderson, Z., Nagler, J., Bonneau, R., Persily, N., and Tucker, J. 2023. Replication Data for: Testing the Effect of Information on Discerning the Veracity of News in Real-Time. https://doi.org/10.7910/DVN/1ONUFG
Aslett, K., Guess, A. M., Bonneau, R., Nagler, J., and Tucker, J. A. 2022. "News Credibility Labels Have Limited Average Effects on News Diet Quality and Fail to Reduce Misperceptions." Science Advances 8(18): eabl3844. https://doi.org/10.1126/sciadv.abl3844
Austin, E. W., and Dong, Q. 1994. "Source v. Content Effects on Judgments of News Believability." Journalism Quarterly 71(4): 973–983.
Baum, M. A., and Groeling, T. 2009. "Shot by the Messenger: Partisan Cues and Public Opinion Regarding National Security and War." Political Behavior 31(2): 157–186.
Benjamini, Y., and Hochberg, Y. 1995. "Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing." Journal of the Royal Statistical Society: Series B (Methodological) 57(1): 289–300.
Berinsky, A. J. 2017. "Rumors and Health Care Reform: Experiments in Political Misinformation." British Journal of Political Science 47(2): 241–262.
Clayton, K., Blair, S., Busam, J. A., Forstner, S., Glance, J., Green, G., Kawata, A., Kovvuri, A., Martin, J., Morgan, E., et al. 2019. "Real Solutions for Fake News? Measuring the Effectiveness of General Warnings and Fact-Check Tags in Reducing Belief in False Stories on Social Media." Political Behavior: 1–23.
Clemm von Hohenberg, B. 2020. Truth and Bias: Robust Findings? OSF. https://osf.io/yj2rn/
Constine, J. 2017. Facebook Puts Link to 10 Tips for Spotting 'False News' Atop Feed. https://techcrunch.com/2017/04/06/facebook-puts-link-to10-tips-for-spotting-false-news-atop-feed/
Dias, N., Pennycook, G., and Rand, D. G. 2020. "Emphasizing Publishers Does Not Effectively Reduce Susceptibility to Misinformation on Social Media." Harvard Kennedy School Misinformation Review 1(1).
Druckman, J. N. 2022. "A Framework for the Study of Persuasion." Annual Review of Political Science 25: 65–88.
Dutton, W. H., Reisdorf, B., Dubois, E., and Blank, G. 2017. Search and Politics: The Uses and Impacts of Search in Britain, France, Germany, Italy, Poland, Spain, and the United States.
Flanagin, A. J., and Metzger, M. J. 2000. "Perceptions of Internet Information Credibility." Journalism & Mass Communication Quarterly 77(3): 515–540.
Flanagin, A. J., and Metzger, M. J. 2007. "The Role of Site Features, User Attributes, and Information Verification Behaviors on the Perceived Credibility of Web-Based Information." New Media & Society 9(2): 319–342.
Fogg, B. J., Marshall, J., Laraki, O., Osipovich, A., Varma, C., Fang, N., Paul, J., Rangnekar, A., Shon, J., Swani, P., et al. 2001. "What Makes Web Sites Credible? A Report on a Large Quantitative Study." In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 61–68.
Frankena, W. K. 1939. "The Naturalistic Fallacy." Mind 48(192): 464–477.
Gigerenzer, G., and Selten, R. 2002. Bounded Rationality: The Adaptive Toolbox. MIT Press.
Golebiewski, M., and boyd, D. 2019. "Data Voids: Where Missing Data Can Easily Be Exploited." Data & Society.
Guay, B., Pennycook, G., Rand, D., et al. 2022. How to Think about Whether Misinformation Interventions Work.
Guess, A., and Munger, K. 2020. "Digital Literacy and Online Political Behavior." Charlottesville: OSF Preprints. Retrieved April 13, 2020.
Jahanbakhsh, F., Zhang, A. X., Berinsky, A. J., Pennycook, G., Rand, D. G., and Karger, D. R. 2021. "Exploring Lightweight Interventions at Posting Time to Reduce the Sharing of Misinformation on Social Media." Proceedings of the ACM on Human-Computer Interaction 5(CSCW1): 1–42.
Jakesch, M., Koren, M., Evtushenko, A., and Naaman, M. 2019. "The Role of Source and Expressive Responding in Political News Evaluation." In Computation and Journalism Symposium.
Kim, A., Moravec, P. L., and Dennis, A. R. 2019. "Combating Fake News on Social Media with Source Ratings: The Effects of User and Expert Reputation Ratings." Journal of Management Information Systems 36(3): 931–968.
Moravec, P., Minas, R., and Dennis, A. R. 2018. "Fake News on Social Media: People Believe What They Want to Believe When It Makes No Sense at All." Kelley School of Business Research Paper (18-87).
Pennycook, G., Cannon, T. D., and Rand, D. G. 2018. "Prior Exposure Increases Perceived Accuracy of Fake News." Journal of Experimental Psychology: General 147(12): 1865–1880.
Pennycook, G., McPhetres, J., Zhang, Y., and Rand, D. 2020. "Fighting COVID-19 Misinformation on Social Media: Experimental Evidence for a Scalable Accuracy Nudge Intervention." PsyArXiv Preprints.
Pennycook, G., and Rand, D. G. 2019a. "Fighting Misinformation on Social Media Using Crowdsourced Judgments of News Source Quality." Proceedings of the National Academy of Sciences 116(7): 2521–2526.
Pennycook, G., and Rand, D. G. 2019b. "Lazy, Not Biased: Susceptibility to Partisan Fake News Is Better Explained by Lack of Reasoning than by Motivated Reasoning." Cognition 188: 39–50.
Pennycook, G., and Rand, D. G. 2020. "Who Falls for Fake News? The Roles of Bullshit Receptivity, Overclaiming, Familiarity, and Analytic Thinking." Journal of Personality 88(2): 185–200.
Peterson, E., and Iyengar, S. 2021. "Partisan Gaps in Political Information and Information-Seeking Behavior: Motivated Reasoning or Cheerleading?" American Journal of Political Science 65(1): 133–147.
Pew Research Center. 2008. The Pew Research Center 2008 Biennial Media Consumption Survey.
Pew Research Center. 2019. American Trends Panel Wave 51.
Robertson, R. E., Lazer, D., and Wilson, C. 2018. "Auditing the Personalization and Composition of Politically-Related Search Engine Results Pages." In Proceedings of the 2018 World Wide Web Conference, 955–965.
Roozenbeek, J., and Van der Linden, S. 2019. "Fake News Game Confers Psychological Resistance against Online Misinformation." Palgrave Communications 5(1): 1–10.
Sundar, S. S. 1998. "Effect of Source Attribution on Perception of Online News Stories." Journalism & Mass Communication Quarterly 75(1): 55–68.
Sundar, S. S., and Nass, C. 2001. "Conceptualizing Sources in Online News." Journal of Communication 51(1): 52–72.
Swire, B., Berinsky, A. J., Lewandowsky, S., and Ecker, U. K. 2017. "Processing Political Misinformation: Comprehending the Trump Phenomenon." Royal Society Open Science 4(3): 160802.
Vosoughi, S., Roy, D., and Aral, S. 2018. "The Spread of True and False News Online." Science 359(6380): 1146–1151. https://doi.org/10.1126/science.aap9559
Zhang, A. X., Ranganathan, A., Metz, S. E., Appling, S., Sehat, C. M., Gilmore, N., Adams, N. B., Vincent, E., Lee, J., Robbins, M., et al. 2018. "A Structured Response to Misinformation: Defining and Annotating Credibility Indicators in News Articles." In Companion Proceedings of the Web Conference 2018, 603–612.
