Partisan motivated reasoning and misinformation in the media: Is news from ideologically uncongenial sources more suspicious?

Abstract

In recent years, concerns about misinformation in the media have skyrocketed. President Donald Trump has repeatedly claimed that various news outlets are disseminating ‘fake news’ for political purposes. But when the information contained in mainstream media news reports provides no clear clues about its truth value or any indication of a partisan slant, do people rely on the congeniality of the news outlet to judge whether the information is true or false? In a survey experiment, we presented partisans (Democrats and Republicans) and ideologues (liberals and conservatives) with a news article excerpt that varied by source shown (CNN, Fox News, or no source) and content (true or false information), and measured the perceived accuracy of the information contained in the article. Our results suggest that participants do not blindly judge the content of articles based on the news source, regardless of their own partisanship and ideology. Contrary to prevailing views on the polarization and politicization of news outlets, and on voters' growing propensity to engage in ‘partisan motivated reasoning,’ source cues are not as important as the information itself for partisans on both sides of the aisle.

Footnotes

An earlier version of this paper was presented at the 2017 Political Psychology APSA Pre-Conference, 30 August 2017, at the University of California, Berkeley. We thank John Carey, D.J. Flynn, Jamie Druckman, Brendan Nyhan, Joanne Miller, and Sean Westwood for useful comments.

1. Introduction

In recent years, concerns about misinformation in the media have skyrocketed. President Donald Trump has repeatedly claimed that various major news outlets including CNN, The New York Times, ABC News, and MSNBC are disseminating ‘fake news’ for political purposes (Coll, 2017; Nelson, 2017; Wang, 2017). But how do people judge whether the news from mainstream media networks contains true or false information? We examine this question based on a pre-registered survey experiment conducted during the first year of the Trump presidency. Our focus is on ‘ambiguous’ false information in the news – the kind of information major news networks could disseminate regardless of their intention. We define the content of news as ‘ambiguous’ if it provides no clear clues about its truth value or about its partisan slant.

Examining this question is important in the context of growing concerns about the polarization of American politics (e.g., Poole and Rosenthal, 1984; Layman et al., 2006; Cohn, 2014; Doherty, 2014) and the politicization of the mainstream media in the USA (McCright and Dunlap, 2011; Levendusky, 2013a; Mitchell et al., 2014). While Americans increasingly exhibit low trust in the media in general (Swift, 2016; Silverman, 2017; Knight Foundation, 2018), a more concerning trend is that they trust the individual media sources that they use (Daniller et al., 2017; Pennycook and Rand, 2019a). If individuals automatically judge content from ideologically uncongenial sources as biased or untrustworthy, and content from congenial sources as undeniably factual, systematic errors in collective public opinion may arise (Flynn et al., 2017). Indeed, these errors could have detrimental effects on important aspects of democracy, such as voting and political beliefs (Dellavigna and Kaplan, 2007).

Given these concerns, a growing number of scholars have studied misinformation about politics and policy in recent years (e.g., Nyhan and Reifler, 2010; Berinsky, 2015; Bullock et al., 2015; Prior et al., 2015; Swire et al., 2017). The trustworthiness of mainstream media outlets has also become a popular topic of debate on social media and among journalists (e.g., Nazaryan, 2017; Rymel, 2017), and several studies have investigated whether and how people evaluate the credibility of mainstream sources (e.g., Carr et al., 2014; Kim, 2015; Pennycook and Rand, 2019a). Nevertheless, to the best of our knowledge, no previous research has directly examined the impact of misinformation that major news networks, such as CNN and Fox News, could intentionally or unintentionally be spreading and, more importantly, how partisanship and ideology moderate people's susceptibility to believing this sort of information.

To address this, we presented study participants – partisans (Democrats and Republicans) and ideologues (liberals and conservatives) – with a news article excerpt that varied by source shown (CNN, Fox News, or no source) and content (true or false information), and measured their perceived accuracy of the information contained in the article. Our goal was to investigate how the effects of these treatment variables vary by study participants' partisanship and ideology.

The results of our experiment suggest that people do not blindly judge articles based on the news source, regardless of their own partisanship or ideology. Rather, we find that, contrary to prevailing ideas about the increasing polarization of Americans' trust in the media and about voters' growing propensity to engage in ‘partisan motivated reasoning,’ source cues are not as important as the information itself. In other words, there is little stopping American consumers of the mass media from trusting and believing the information they read, true and false alike, from any major news source. While this conclusion may lead us to be optimistic about individuals' ability to overlook partisan or ideological cues when processing information from the media, we argue that it underlines the importance of preventing any type of false content from reaching people in the first place.

2. Motivated reasoning, misinformation, and the media

Our experiment draws on several bodies of existing literature – studies on the theory of partisan motivated reasoning, research on the impacts of various media sources and news content on opinion formation, and literature on misinformation about politics and policy. In this section, we first introduce these studies and discuss our theoretical contributions. We then specify our hypothesis.

2.1 Literature review

Scholarship on partisan motivated reasoning suggests that people interpret evidence or information from ideologically congenial sources as stronger than information from opposing sources (e.g., Druckman et al., 2013; Bolsen et al., 2014), or that exposure to congenial information reduces perceptions that the information is biased (Kelly, 2019). Several studies explore this phenomenon in the media by investigating how individuals respond to messages from different news networks. By presenting participants with identical written or televised news reports but varying the source attributed to the reports, these studies show that people use news network as a heuristic to shape their opinions about the content (Gussin and Baum, 2004; Baum and Gussin, 2005, 2007; Turner, 2007; Levendusky, 2013a, 2013b; Druckman et al., 2015). Baum and Gussin (2007), for example, demonstrate that participants who read a news transcript about the 2004 presidential campaign attributed to Fox News saw it as more favorable toward Bush, relative to Kerry, than participants who read an identical transcript attributed to CNN.

A separate but related body of research explores whether partisans and ideologues engage in motivated reasoning when processing corrections to political misperceptions. Berinsky (2015), for example, manipulates the source of rumor corrections concerning the Affordable Care Act (ACA), and finds that corrections from an ‘unexpected’ (Republican) individual who might otherwise be expected to oppose the ACA are most effective at correcting the rumors.1 Nyhan and Reifler (2013) examine the extent to which corrections succeed in reducing misperceptions about the 2012 presidential election by varying the news source of mock newspaper articles with misperceptions and corrections.2 They find mixed effects with regard to ideology; for liberals, a correction to misinformation is just as persuasive when it comes from MSNBC as when it comes from Fox News, but for conservatives, a combination of MSNBC as the media outlet and a liberal think tank as the correction source is significantly less persuasive than any other combination of outlet and speaker source at reducing misperceptions.

While these studies often use widely-shared rumors or conspiracy theories as treatment materials, the literature directly testing whether media source impacts belief in false information is sparse. In a recent study on prior exposure and perceived accuracy of fake news on social media, Pennycook et al. (2018) find that including a mainstream liberal or conservative source on a fake news article headline has no effect on participants' perceived accuracy of the claim made in the headline.3 Here, the authors focus on demonstrably false news headlines published primarily by fake news websites.

2.2 Theoretical contributions

Despite the recent growth of research on these topics, the area in which all of these studies interact has not been investigated systematically. Specifically, we address the following important gaps in the literature:

First, while several studies show that people rely on news source (specifically, whether the source is politically congenial or not) to form their opinions and attitudes about news content, this research generally does not focus on false information. Although some studies test how the source of corrections to political misinformation impacts belief in false content, there is little research evaluating whether the source impacts people's initial perceptions of whether the content of a news report is accurate (i.e., in the case of false information, before any correction is presented). It therefore remains unclear whether it is the source (congenial or uncongenial) or the content (true or false) of news articles that drives individuals' initial perceptions of whether the news they encounter is credible.

Second, few existing studies examine individuals' perceptions of the source vs content credibility of mainstream media sources. As we noted above, the new studies that have begun to explore this question use fake news headlines as treatment materials (Pennycook and Rand, 2019b; Pennycook et al., 2018) rather than the type of misleading content that major news outlets could actually spread.

To fill these gaps in the literature, our experiment not only randomized the source and content of a news article but also focused on the ‘ambiguous’ false information in the mainstream media that we mentioned briefly in the introduction. Pairing ambiguous content with major news outlets as the information source is an important test of partisan motivated reasoning in the mainstream media because, simply put, mainstream outlets do not systematically disseminate ‘fake news.’ Tabloid newspapers and malicious online media are most responsible for the deliberate spread of undeniably false information, or provocative news that people most likely regard as slanted. These outlets are motivated to generate intense short-term profit by providing content that maximizes partisan utility regardless of its truth value. In contrast, we have every reason to believe that major news networks avoid disseminating groundless news as much as possible, because doing so would damage their reputation as trustworthy news sources.

Nevertheless, mainstream sources have been known to unintentionally publish information that is false, or to intentionally or unintentionally exclude or alter certain details. For example, ABC News suspended one of its investigative journalists for publishing an inaccurate news report about Trump's involvement with Russia during the 2016 presidential campaign (Wang, 2017). Likewise, MSNBC aired a false story about the Kate Steinle murder verdict in December 2017 (Darcy, 2017), and Fox News published a bogus report about Russian military power that it got from a tabloid newspaper. In fact, long lists of severe cases of misreporting by the mainstream media have been compiled every year since 2013 (Mantzarlis, 2018).

What makes this type of information so troubling is that when it is initially published, it often fails to provide any clues that it should be read with scrutiny. Furthermore, there is no guarantee that readers will be exposed to any subsequent corrections that may or may not be made by the original source.

Another problem with this ambiguous content is that because it is published by major news networks which, on the whole, try to pursue journalistic objectivity, it often lacks clear partisan cues that could alert readers about the trustworthiness of the information. While articles disseminated by fake news websites, citizen journalism, and other non-mainstream sources often have a clear partisan or ideological leaning, mainstream sources are more likely to publish – or try to publish – objective reports. When average news consumers encounter such content, they often cannot rely on their pre-existing partisan or ideological notions to judge what they read.

In sum, when the mainstream media publish ambiguous information about an issue which provides neither a clear indication that the information is false nor any specific partisan or ideological cues, it is difficult for people to know for sure whether they should view the information they encounter with suspicion.

2.3 Hypothesis

Consistent with the theory of motivated reasoning, we predict that people rely on heuristics, such as whether a report is published by an ideologically congenial source, to judge news articles. Specifically, we test the following hypothesis:4

Hypothesis 1:

When the false statement is presented with the Fox News [CNN] header, Republicans and conservatives [Democrats and liberals], as compared to Democrats and liberals [Republicans and conservatives], are more likely to think that the statement is accurate.

We selected CNN and Fox News to use as our news sources for several reasons. First, Pew Research Center data show that CNN is the most favored news outlet among consistent liberals, while Fox News is the most favored outlet among consistent conservatives (Mitchell et al., 2014). Data from the same Pew survey also show that within ideological groups, ‘consistently liberal’ and ‘mostly liberal’ audiences trust CNN more than they distrust it, and distrust Fox News more than they trust it.5 Likewise, ‘consistently conservative’ and ‘mostly conservative’ groups trust Fox News more than they distrust it, and distrust CNN more than they trust it. In addition, previous research finds that trust in media outlets including CNN and Fox News predicts individuals' vote intentions in the 2016 election with 88% accuracy (The Economist, 2016). Finally, we draw on existing experiments that use a label-switching approach with CNN and Fox News as media outlets on opposite sides of the ideological/partisan spectrum (Baum and Gussin, 2007; Turner, 2007; Baum and Groeling, 2009).

We note that whether these sources actually disseminate false information is not important for the purpose of our research. What matters is whether American people think that the news content provided by these major media networks is believable. Given that President Trump tweeted, ‘Any negative polls are fake news, just like the CNN, ABC, NBC polls in the election’ (6 February 2017), it is sensible to assume that some American citizens believe that even major news networks provide false information, and that they rely on an article's source cue alone without carefully reading its content to make a judgment about the information's truth value.

3. Research design

We administered a randomized survey experiment on 21–22 July 2017.6 The study participants were workers who had registered at the online marketplace, Amazon Mechanical Turk, or MTurk (http://www.mturk.com). Although survey samples obtained from MTurk are not probability samples, several studies have shown that the estimates of treatment effects in experiments obtained from MTurk participants mirror those of nationally representative samples (e.g., Horton et al., 2011; Berinsky et al., 2012; Mullinix et al., 2015; Coppock et al., 2018). Most importantly, Coppock et al. (2018) recently replicated 27 existing studies using MTurk and show a high correspondence between the MTurk results and results based on nationally representative samples.7

The total number of valid responses is 3,932. Since our treatment materials are news stories with only subtle differences, we expected that the treatment effects could be relatively small. For this reason, our sample size is larger than that of typical MTurk experiments on political misinformation.8 Each participant was paid $0.70 to complete our survey, and the average response time was just over 4 min.

We note that MTurk samples tend to include a higher percentage of Democrats or liberals than the general population. While we acknowledge this limitation to external validity, our sample is large enough to overcome a potential lack of statistical power and is thus valid for making causal inferences within subgroups defined by participants' partisanship or ideology. Specifically, the number of participants within each group is 1,078 for conservatives, 2,081 for liberals, 1,079 for Republicans, and 2,107 for Democrats.9

3.1 Treatment materials

All study participants were first asked to answer a series of basic demographic and attitudinal questions. Then, they were randomly exposed to one of six versions of an article excerpt on health care reform. The content of the article excerpt was either true information (control condition) or false information, and the source was either CNN or Fox News (as indicated by a large header across the top of the page), or was not included (control condition). The article title, author, and date were constant across all treatment groups. Consistent with our concept of ‘ambiguous’ false information, the article title and author name were intended to avoid cuing partisanship or ideology.

Within the news report itself, the first two sentences of the article were also constant across all treatment groups; the third and final sentence differed across treatment groups in that it contained either true or false information. Figure 1 shows the article excerpt containing false information with the Fox News header, and Figure 2 shows the article excerpt containing true information with the CNN header. The remaining four excerpts are shown in the Supplementary Materials.

Figure 1. Sample treatment article (Fox News, false information condition).

Figure 2. Sample treatment article (CNN, true information condition).

In selecting an article topic to use for our experiment, we focused on the issue of health care reform. Debate over repealing and replacing the ACA permeated the news media following President Donald Trump's election, and it was particularly salient in the summer of 2017, when Republicans in Congress were fighting to pass bills that would redesign health care in the USA. Health care was – and continues to be – a complex topic for average Americans to understand; members of Congress offered dozens of amendments and revisions to these acts for months on end (Henry J. Kaiser Family Foundation, 2017; Park et al., 2017), which led to nearly constant news coverage of the health care debate by mainstream media sources in 2017.

To present participants with ambiguous but false information, we focused on one provision of the new health care bills proposed by the House and Senate that never changed: the under-26 coverage provision. Under the ACA, this provision stipulates that young adults can remain on their parents' health insurance plan until they turn 26 years old. The under-26 coverage provision has remained in place in all new versions of the health care bills proposed by the House and Senate, and we presented this claim in our true information materials. In our false information materials, we altered this information to state that in the new health care bills proposed by the House and Senate, young adults would lose coverage through their parents' health insurance plan when they turn 18. The exact text of the true and false information presented in our experiment is shown in the third sentence of the article excerpts in Figures 1 and 2.

There are two other notable reasons why we focused on the under-26 coverage provision. First, it is one of the few issues in the health care debate that enjoys wide bipartisan support (Kirzinger et al., 2016). In order to estimate the effects of news source by partisanship or ideology, we sought to avoid eliciting strong partisan or ideological responses based on news content (e.g., Kelly, 2019). In other words, we tried to manipulate the content of the excerpts only with respect to whether the information was true or false. By presenting participants with false information about an issue for which individuals' pre-existing notions are not strongly divided by their partisanship or ideology, we isolate the impact of source on belief in the misinformation irrespective of whether the statement itself is congenial to one ideological group over another.10

We also focused on this provision because public knowledge about the under-26 coverage provision is generally high. A nationally representative survey conducted by Gross et al. (2012) finds that 52% of Americans identify the under-26 coverage provision as part of the ACA with ‘high certainty,’ and that 81% believe it is part of the ACA regardless of their certainty. All other provisions included in this survey were correctly identified as part of the ACA with high certainty by less than 40% of participants. These findings suggest that, on the whole, participants have a reasonably good understanding of this specific provision, which implies that the magnitude of measurement error in our dependent variable (the perceived accuracy of the news content) is relatively small.

3.2 Outcome measure

After reading the article containing true or false information, all participants were first asked about their interest in reading the rest of the article.11 They were then asked the following question to measure our outcome variable:

‘How accurate is the following statement? In the new health care bills proposed by the House and Senate, young adults would lose coverage through their parents’ health insurance plan when they turn 18.’

The response options were ‘very accurate’ (4), ‘somewhat accurate’ (3), ‘not very accurate’ (2), and ‘not at all accurate’ (1). Note that the statement is false. In the literature on misinformation, this question format is a common and standard way to measure how accurate study participants perceive a false statement to be (e.g., Nyhan and Reifler, 2010; Kuru et al., 2017; Pennycook and Rand, 2017, 2018, 2019b; Pennycook et al., 2018).12
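As a minimal sketch, the coding of the four response labels onto the 1–4 accuracy scale described above could look as follows (the function and dictionary names are ours, not the study's):

```python
# Hypothetical coding of the four survey response labels onto the 1-4
# perceived-accuracy scale used as the dependent variable.
ACCURACY_SCALE = {
    "very accurate": 4,
    "somewhat accurate": 3,
    "not very accurate": 2,
    "not at all accurate": 1,
}

def code_response(label: str) -> int:
    """Map a verbatim response label to its numeric accuracy score."""
    return ACCURACY_SCALE[label.strip().lower()]
```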

Finally, the survey concluded with a question intended to assess the quality of our responses, as well as an opportunity to provide written feedback to the survey. Participants also saw a debriefing message about the nature of the study that differed by treatment condition (true vs false) and clarified that the information was fabricated for the purpose of research.

3.3 Statistical models

To conduct our analysis, we begin by dividing our samples by either ideology (liberal or conservative) or partisanship (Democrat or Republican) to explore whether the treatment effects differ by study participants' ideology or partisanship. Although the two measures are related, they comprise slightly different subsets of our sample.13

We then run the following OLS regression model for each subset of participants:

(1)$$Y_i = b_0 + b_1\cdot \mathrm{False} + b_2\cdot \mathrm{CNN} + b_3\cdot \mathrm{Fox\ News} + b_4\cdot \mathrm{False}\cdot \mathrm{CNN} + b_5\cdot \mathrm{False}\cdot \mathrm{Fox\ News} + \epsilon_i,$$

where $Y_i$ is our measure of the perceived accuracy of the false statement.14 The parameter for the base category (no source, true information) is $b_0$. The model includes three dichotomous variables, False (= 1 if the false information is presented, = 0 otherwise), CNN (= 1 if the CNN header is shown, = 0 otherwise), and Fox News (= 1 if the Fox News header is shown, = 0 otherwise), their interactions, and a random error term.

Finally, we compare the estimates of $b_4$ and $b_5$ between Democrats (or liberals) and Republicans (or conservatives) to test the hypothesis that perceived accuracy of the false information from a given source differs between participants with different political preferences. The coefficient $b_4$ measures the interaction effect of exposure to false information with the CNN header on participants' perceived accuracy of the false statement, whereas the coefficient $b_5$ measures the interaction effect of exposure to false information with the Fox News header.15

For ease of estimation, we run a single model with all the variables in Equation 1, a variable measuring each respondent's partisanship (Democrat or Republican) or ideology (liberal or conservative), and its interactions with all the included variables. This triple-interaction model is mathematically equivalent to running two separate regression models and comparing the differences between the estimates for each variable included in Equation 1.
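Because Equation 1 is saturated for the 2 (content) × 3 (source) design, each OLS coefficient reduces to a contrast of the six condition means; in particular, b4 and b5 are difference-in-differences. A minimal pure-Python sketch with invented numbers (not the study's data) illustrates this:

```python
from statistics import mean

# Invented records for the 2x3 design: (false_info, source, perceived_accuracy).
# With dummies for False, CNN, Fox News and their interactions, the model is
# saturated, so each coefficient is a contrast of the six cell means.
data = [
    (0, "none", 1.6), (0, "none", 1.8),   # true info, no source (baseline)
    (1, "none", 2.9), (1, "none", 3.1),   # false info, no source
    (0, "cnn",  1.7), (0, "cnn",  1.9),
    (1, "cnn",  2.8), (1, "cnn",  3.0),
    (0, "fox",  1.5), (0, "fox",  1.7),
    (1, "fox",  2.6), (1, "fox",  2.8),
]

def cell(false_info, source):
    """Mean perceived accuracy within one experimental cell."""
    return mean(y for f, s, y in data if f == false_info and s == source)

b0 = cell(0, "none")                        # baseline: true info, no source
b1 = cell(1, "none") - b0                   # main effect of false info
b2 = cell(0, "cnn") - b0                    # main effect of the CNN header
b3 = cell(0, "fox") - b0                    # main effect of the Fox News header
b4 = cell(1, "cnn") - cell(0, "cnn") - b1   # False x CNN (diff-in-diff)
b5 = cell(1, "fox") - cell(0, "fox") - b1   # False x Fox News (diff-in-diff)
```

Estimating this separately within each partisan (or ideological) subgroup and differencing the resulting b4 and b5 estimates reproduces the triple-interaction comparison described above.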

3.4 Exploratory analysis

In addition to the confirmatory analyses based on our pre-registered hypothesis, we undertake further exploratory analyses to examine the robustness of our findings and to explore the heterogeneity of the treatment effects among different types of participants. The results of these additional analyses are presented in the Supplementary Materials.

First, we added a set of additional pre-treatment demographic variables to the triple-interaction model noted above. Adding these covariates improves the efficiency of our estimation.16 More importantly, we can control for the association between our main moderator, partisanship or ideology, and other demographic variables. Specifically, we added a set of dummy variables for participants' age group (18–24 [baseline], 25–34, 35–44, 45–54, 55 or older), gender (male [baseline], female, or non-binary), level of education (without a college/university degree [baseline], with a college/university degree), race (white [baseline], non-white), and region of residence (East North Central [baseline], East South Central, Middle Atlantic, Mountain, New England, Pacific, South Atlantic, West North Central, West South Central).
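As an illustration, the dummy coding for one of these covariates can be sketched as follows, with the first listed category as the omitted baseline (the variable and category labels are ours):

```python
# Hypothetical dummy coding for the age-group covariate: one 0/1 indicator
# per non-baseline category, with "18-24" as the omitted baseline.
AGE_GROUPS = ["18-24", "25-34", "35-44", "45-54", "55+"]

def age_dummies(age_group: str) -> dict:
    """Return 0/1 indicators for every non-baseline age category."""
    return {g: int(age_group == g) for g in AGE_GROUPS[1:]}
```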

Second, we excluded participants who could have used search engines to check the accuracy of the statements. At the end of our survey, we included the following question: ‘It is essential for the validity of our research that we know whether participants looked up any information online during the study. Did you look up any information during the study? Please be honest; you will still be paid and you will not be penalized in any way if you did.’ About 6% of participants reported that they had looked up the information. We ran our analyses after excluding these participants.

Third, we focused on high-quality participants by excluding ‘speeders’ who completed the survey faster than the first quartile of the distribution of response time, which was 2 min. In other words, we excluded about 25% of participants who might have spent insufficient time reading the treatment materials and/or survey questions. Note that the median response time for the survey was 3 min, which, in a survey with 20 questions and an article excerpt, leaves little time to look up information on an online search engine.
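The speeder exclusion can be sketched as follows, using one common linear-interpolation rule for the lower quartile (the response times below are invented; in the actual survey the first quartile was 2 min):

```python
# Sketch of the 'speeder' exclusion: drop respondents whose completion time
# falls below the first quartile of the response-time distribution.
# Times are hypothetical, in seconds.
times = [95, 110, 120, 150, 180, 200, 240, 300]

def first_quartile(values):
    """25th percentile via linear interpolation between sorted positions."""
    s = sorted(values)
    k = (len(s) - 1) * 0.25
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

q1 = first_quartile(times)
kept = [t for t in times if t >= q1]  # respondents retained for analysis
```

Note that quartile definitions vary slightly across software; any reasonable rule drops roughly the fastest 25% of respondents, as in the paper.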

Finally, we conducted some exploratory analyses based on other subgroups in our sample. Although some of the sample sizes are relatively small within these subgroups, they highlight some of the other characteristics that may help explain which individuals are more or less susceptible to believing false information from different news sources. Specifically, we subset our data by participants' level of interest in politics, their level of trust in the media, and their level of political knowledge (measured by five political knowledge test questions).

4. Results

Figure 3 presents the results of our regression analysis graphically, showing the average perceived accuracy of the false statement compared to the baseline control condition (true information, no source presented), broken down by partisanship (top) and ideology (bottom). The point estimates are indicated by the dots, while the 95% confidence intervals are indicated by the horizontal lines. The coefficients that are significant at the 0.05 level are shown in black, while insignificant coefficients are in grey. As the figure shows, exposure to false information (without the source cue) is the single variable showing the largest and most highly significant effect on individuals' rating of the false statement as accurate. Figure 3 also shows that this pattern is independent of participants' ideology or partisanship. Regardless of their political preferences, individuals exposed to false information are significantly more likely to rate the false statement as accurate than individuals exposed to true information.

Figure 3. Average perceived accuracy of the false statement compared to the baseline control condition (true information, no source presented) among Republicans and Democrats (top) and conservatives and liberals (bottom).

The coefficients that are relevant to our hypothesis are, however, those for False × CNN ($b_4$ in the model) and False × Fox News ($b_5$ in the model) and, more importantly, the differences in these coefficients between subgroups of participants of a different ideology or partisanship. The results are contrary to our expectation in Hypothesis 1. Figure 3 shows that among participants who were exposed to false information originating from CNN or Fox News (compared to the baseline of no source), there are few significant differences by partisanship (top) or by ideology (bottom). Specifically, the coefficient estimates for the interaction of the false information condition and the CNN condition ($\hat{b}_4$), and for the interaction of the false information condition and the Fox News condition ($\hat{b}_5$), are all small and statistically insignificant, with one exception. As the top half of Figure 3 shows, Democrats are less likely to think that the false statement is accurate, compared to the baseline condition, when it is attributed to Fox News ($\hat{b}_5 = -0.20$, $p < 0.05$). However, we are not able to reject the null hypothesis of no difference between Democrats and Republicans (the leftmost panel in Figure 3). In terms of the effect of exposure to false information originating from CNN, Republicans are not significantly less likely to think the false statement is accurate, as compared to the baseline condition.17 Again, this result is contrary to our expectation.

As we noted above, we also undertake exploratory analyses to verify the robustness of our results and to further examine treatment effect heterogeneity among various subsets of participants. Overall, the results of these additional analyses (presented in the Supplementary Materials) suggest that our main results are robust to different model specifications and that there is no substantial treatment effect heterogeneity.

5. Conclusion

Our most consistent – but unexpected – finding is that mere exposure to ambiguous but false information explains study participants' rating of a false statement as accurate, regardless of the news source from which the information originates and of participants' political beliefs. These findings run contrary to existing studies that emphasize the role of partisan motivated reasoning in how individuals process information (e.g., Druckman et al., 2013), and to research suggesting the important roles that heuristics (including news sources) play in opinion formation (e.g., Baum and Gussin, 2007). Rather, our experiment demonstrates that whether the news is from a congenial or uncongenial source is less important than the content itself (false or true information) in explaining study participants' belief in a false statement.

Our results add to a growing literature that questions the validity and applicability of the theory of partisan motivated reasoning. Leeper and Slothuus (2014) argue that the extent to which motivated reasoning operates in the context of partisanship is contingent on citizens' individual predispositions and goals. In many cases, individuals lack the motivation or the cognitive effort required to engage in motivated reasoning when forming opinions. Furthermore, Guess and Coppock (2015) find that instead of engaging in motivated reasoning, individuals exposed to information about various contentious issues update their beliefs in parallel, regardless of their ideological predispositions. In line with these studies, our findings suggest that partisan motivated reasoning is conditional on context and the type of information involved.

Our results may also be consistent with Bullock's (2011) finding that when individuals are exposed to both factual information and elite partisan cues, they are capable of adopting policy views that are independent of the views of partisan elites. Although Bullock's findings led him to be optimistic about partisans' ability to evaluate information regardless of its source, our findings tell a more cautionary tale. We argue that people tend to believe information uncritically when they are exposed to it, even false information (e.g., Hasher et al., 1977; Gilbert et al., 1993). Pennycook et al. (2018) recently found evidence of this phenomenon as it pertains to fake news: including a news source has no impact on participants' perceived accuracy of fake news headlines. Our research corroborates this account and suggests that the power of exposure to false content may extend beyond fake news headlines to plausible, mainstream news articles.

In an era of increasing political polarization and supposedly low trust in the mass media, people still tend to believe what they read. Considering that holding false beliefs about current events, politicians, and policies can impact people's subsequent political behaviors and create macro-level errors in public opinion (Flynn et al., 2017), this is a cause for concern. The implication of our research, therefore, is that major media outlets should attempt to prevent any false information from reaching readers or viewers in the first place, rather than emphasizing the sources of the information or focusing on who takes in information from which news outlet(s). Specifically, journalists have an obligation to ensure that all published material is fully supported by facts. Indeed, there is little that prevents Americans from swallowing information fed to them from mass media sources. For better or for worse, content appears to ‘trump’ source, partisanship, and ideology among American consumers of the mass media.

1 See also Nyhan and Reifler (2010).

2 See Nyhan and Reifler (2010) for a related study.

3 See Pennycook and Rand (2019b) for a related study.

4 We also test a related hypothesis (Hypothesis 2) about the effects of source vs content on study participants' interest in reading the rest of the article. The derivation of this hypothesis and the results of the empirical tests are presented in the Supplementary Materials. The hypotheses were pre-registered at Evidence in Governance and Politics, EGAP (ID: 20170720AC).

5 See also Pennycook and Rand (2019a).

6 Our study was approved by the Committee for the Protection of Human Subjects at Dartmouth College on 20 July 2017 (ID: STUDY00030452).

7 Still, replicating our findings on a probability sample would be a fruitful avenue for future research.

8 We excluded participants under 18 years of age and those who resided outside the USA. We also kept only the first completion if there were multiple completed responses with the same IP address. The number of IP addresses with multiple responses is very small (only 72).

9 The question wording for ideology was: ‘When it comes to politics, would you describe yourself as liberal, conservative, or neither liberal nor conservative?’ Those who chose ‘Very conservative [liberal],’ ‘Somewhat conservative [liberal],’ or ‘Slightly conservative [liberal],’ were coded as conservatives [liberals]. Those who chose ‘Moderate; middle of the road’ were excluded from our analysis. The question wording for partisanship was: ‘Generally speaking, do you think of yourself as a Republican, a Democrat, an Independent, or something else?’ Those who chose ‘Strong Republican [Democrat],’ ‘Republican [Democrat],’ or ‘Independent, leaning Republican [Democrat],’ were coded as Republicans [Democrats]. Those who chose ‘Something else’ were excluded from our analysis.

10 A potential extension of this design is to measure individuals' perceived accuracy of false information that is not politically neutral and identify differential effects of source congruence vis-à-vis content congruence. In a recent paper, for example, Kelly (2019) examines the impact of content congruence independent of source.

11 Our hypothesis and results related to this outcome measure can be found in the Supplementary Materials.

12 It is possible that participants saw the question measuring perceived accuracy of the false statement as an attention check, which tests whether or not they had read the article, rather than a question asking how accurate they personally rated the statement. It is also possible that they based their answers to the question on the information they had just read, rather than their actual beliefs. Hypothetically, this could be a particular matter of concern when MTurk samples are used. Since any researchers who post experimental studies to MTurk can withhold payment if they believe that participants have not completed tasks properly, MTurk workers could have an incentive to respond to factual belief questions based on what they think the researchers want to hear. However, previous research testing for this exact issue finds that the effects of corrections to misinformation are nearly identical among samples of MTurk workers and Morning Consult poll participants, who lack the potential monetary incentive to treat belief accuracy questions as attention checks or to rely exclusively on information provided to them by the researchers when offering responses (Nyhan et al., 2017). Additionally, Mummolo and Peterson (2019) find that ‘experimenter demand effects’ are rare in online survey experiments. Their results suggest that study participants do not adjust their behavior to align with researchers' expectations.

13 While unpacking the differences between partisanship and ideology is outside the scope of our study, Lupton et al. (2018) argue that core political values including egalitarianism and moral traditionalism moderate the relationship between the two.

14 The estimated treatment effects for each subset of participants are causal because of randomization, but the differences in the coefficient estimates between the subsets are not causal because participants are not divided based on randomization.

15 We did not preregister hypotheses concerning how $b_2$ and $b_3$ are different between Democrats/liberals and Republicans/conservatives in terms of the perceived accuracy of information, because our main interest is to understand individuals' susceptibility to false information conditional on news sources.

16 In retrospect, we should have constructed strata (or subgroups) based on key demographic variables and partisanship (or ideology), and then randomly assigned participants to one of the 2 × 3 treatment groups within each stratum. An important advantage of this block randomization over the covariate balancing that we do for our analysis is that it does not depend on the specification of a regression model.
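Block randomization of the sort described in this footnote can be sketched as follows; the participant list, stratum definition, and cell labels are hypothetical and do not reproduce the study's actual procedure:

```python
import random
from collections import Counter

random.seed(42)

# The six cells of the 2 x 3 design: content (true/false) x source cue.
TREATMENTS = [(content, source)
              for content in ("true", "false")
              for source in ("none", "cnn", "fox")]

def block_randomize(participants, stratum_key):
    """Shuffle within each stratum, then deal members evenly into cells."""
    assignment = {}
    strata = {}
    for p in participants:
        strata.setdefault(stratum_key(p), []).append(p)
    for members in strata.values():
        random.shuffle(members)
        for i, p in enumerate(members):
            assignment[p] = TREATMENTS[i % len(TREATMENTS)]
    return assignment

# Hypothetical participant list: (id, partisanship) pairs, 300 per stratum.
participants = [(f"id{i}", "Democrat" if i % 2 == 0 else "Republican")
                for i in range(600)]
assignment = block_randomize(participants, stratum_key=lambda p: p[1])

# Every treatment cell receives an equal share of each stratum by design,
# so treatment is balanced on partisanship without any model adjustment.
counts = Counter((party, cell) for (pid, party), cell in assignment.items())
print(counts[("Democrat", ("false", "cnn"))])  # 50 of 300 Democrats
```

Because each stratum is dealt evenly across the six cells, balance on the blocking variable holds exactly rather than only in expectation, which is the advantage over post hoc covariate adjustment noted above.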

17 This finding stands in contrast to studies showing that Republicans are more susceptible to motivated reasoning than Democrats (e.g., Nyhan and Reifler, 2010).

Supplementary materials

The supplementary materials for this article and a complete replication package can be found at https://doi.org/10.1017/S1468109919000082 and https://doi.org/10.7910/DVN/K1R14D.

This article was reviewed in the “Result Blind Review” category.

References

Baum, MA and Gussin, P (2005) Issue bias: How issue coverage and media bias affect voter perceptions of elections. Paper Presented at 2004 meeting of the American political science association, Chicago, IL. Available at https://sites.hks.harvard.edu/fs/mbaum/documents/IssueBiasAPSA05.pdf (Accessed 18 April 2019).
Baum, MA and Gussin, P (2007) In the eye of the beholder: how information shortcuts shape individual perceptions of bias in the media. Quarterly Journal of Political Science 3, 1–31.
Baum, MA and Groeling, T (2009) Shot by the messenger: partisan cues and public opinion regarding national security and war. Political Behavior 31, 157–186.
Berinsky, AJ (2015) Rumors and health care reform: experiments in political misinformation. British Journal of Political Science 47, 241–262.
Berinsky, AJ, Huber, GA, Lenz, GS and Alvarez, RM (2012) Evaluating online labor markets for experimental research: Amazon.com's mechanical turk. Political Analysis 20, 351–368.
Bolsen, T, Druckman, JN and Lomax Cook, F (2014) The influence of partisan motivated reasoning on public opinion. Political Behavior 36, 235–262.
Bullock, JG (2011) Elite influence on public opinion in an informed electorate. American Political Science Review 105, 496–515.
Bullock, JG, Gerber, AS, Hill, SJ and Huber, GA (2015) Partisan bias in factual beliefs about politics. Quarterly Journal of Political Science 10, 519–578.
Carr, DJ, Barnidge, M, Gu Lee, B and Jean Tsang, S (2014) Cynics and skeptics: evaluating the credibility of mainstream and citizen journalism. Journalism & Mass Communication Quarterly 91, 452–470.
Cohn, N (2014) Polarization is dividing American society, not just politics. June 12. Available at https://nyti.ms/1ldRFEk. The New York Times. (Accessed 18 April 2019).
Coll, S (2017) Donald Trump's ‘Fake News’ tactics. December 11. Available at https://www.newyorker.com/magazine/2017/12/11/donald-trumps-fake-news-tactics. The New Yorker. (Accessed 18 April 2019).
Coppock, A, Leeper, TJ and Mullinix, KJ (2018) Generalizability of heterogeneous treatment effect estimates across samples. Proceedings of the National Academy of Sciences of the USA 115, 12441–12446.
Daniller, A, Allen, D, Tallevi, A and Mutz, DC (2017) Measuring trust in the press in a changing media environment. Communication Methods and Measures 11, 76–85.
Darcy, O (2017) Fox news tweets correction on MSNBC report after Twitter users call out error. December 4. Available at http://money.cnn.com/2017/12/04/media/fox-news-msnbc-correction-error/index.html. CNN. (Accessed 18 April 2019).
Dellavigna, S and Kaplan, E (2007) The Fox News effect: media bias and voting. Quarterly Journal of Economics 122, 1187–1234.
Doherty, C (2014) Polarization in American politics. June 12. Available at http://pewrsr.ch/TNl6mr. Pew Research Center. (Accessed 18 April 2019).
Druckman, JN, Peterson, E and Slothuus, R (2013) How elite partisan polarization affects public opinion formation. American Political Science Review 107, 57–79.
Druckman, J, Levendusky, M and McLain, A (2015) No need to watch: how the effects of partisan media can spread via inter-personal discussions. Institute for Policy Research at Northwestern University WP-15-12. Available at http://www.ipr.northwestern.edu/publications/docs/workingpapers/2015/IPR-WP-15-12.pdf (Accessed 18 April 2019).
Flynn, DJ, Nyhan, B and Reifler, J (2017) The nature and origins of misperceptions: understanding false and unsupported beliefs about politics. Advances in Political Psychology 38, 127–150.
Gilbert, DT, Tafarodi, RW and Malone, PS (1993) You can't not believe everything you read. Journal of Personality and Social Psychology 65, 221–233.
Gross, W, Stark, TH, Krosnick, J, Pasek, J, Sood, G, Tompson, T, Agiesta, J and Junius, D (2012) Americans’ Attitudes Toward the Affordable Care Act: Would Better Public Understanding Increase or Decrease Favorability? Report prepared for the Political Psychology Research Group. Available at https://pprg.stanford.edu/wp-content/uploads/Health-Care-2012-Knowledge-and-Favorability.pdf (Accessed 18 April 2019).
Guess, A and Coppock, A (2015) Back to Bayes: confronting the evidence on attitude polarization. Working paper. Available at https://webspace.princeton.edu/users/aguess/GuessCoppockBack2BayesV3.pdf (Accessed 18 April 2019).
Gussin, P and Baum, MA (2004) In the eye of the beholder: an experimental investigation into the foundations of the hostile media phenomenon. Paper presented at the 2004 annual meeting of the American Political Science Association, Chicago, IL. Available at https://sites.hks.harvard.edu/fs/mbaum/documents/EyeoftheBeholder.pdf (Accessed 18 April 2019).
Hasher, L, Goldstein, D and Toppino, T (1977) Frequency and the conference of referential validity. Journal of Verbal Learning and Verbal Behavior 16, 107–112.
Henry J. Kaiser Family Foundation (2017) Compare proposals to replace The Affordable Care Act. September 25. Available at https://www.kff.org/interactive/proposals-to-replace-the-affordable-care-act/. The Henry J. Kaiser Family Foundation. (Accessed 18 April 2019).
Horton, JJ, Rand, DG and Zeckhauser, RJ (2011) The online laboratory: conducting experiments in a real labor market. Experimental Economics 14, 399–425.
Kelly, D (2019) Evaluating the news: (mis)perceptions of objectivity and credibility. Political Behavior 41, 445–471.
Kim, M (2015) Partisans and controversial news online: comparing perceptions of bias and credibility in news content from blogs and mainstream media. Mass Communication and Society 18, 17–36.
Kirzinger, A, Sugarman, E and Brodie, M (2016) Kaiser health tracking poll: November 2016. December 1. Available at http://www.kff.org/health-costs/poll-finding/kaiser-health-tracking-poll-november-2016/. The Henry J. Kaiser Family Foundation. (Accessed 18 April 2019).
Knight Foundation (2018) Perceived Accuracy and Bias in the News Media. Gallup. Report Available at https://knightfoundation.org/reports/perceived-accuracy-and-bias-in-the-news-media (Accessed 18 April 2019).
Kuru, O, Pasek, J and Traugott, MW (2017) Motivated reasoning in the perceived credibility of public opinion polls. Public Opinion Quarterly 81, 422–446.
Layman, GC, Carsey, TM and Menasce Horowitz, J (2006) Party polarization in American politics: characteristics, causes, and consequences. Annual Review of Political Science 9, 83–110.
Leeper, TJ and Slothuus, R (2014) Political parties, motivated reasoning, and public opinion formation. Advances in Political Psychology 35, 129–156.
Levendusky, M (2013a) How Partisan Media Polarize America. Chicago, IL: University of Chicago Press.
Levendusky, MS (2013b) Why do partisan media polarize viewers? American Journal of Political Science 57, 611–623.
Lupton, RN, Smallpage, SM and Enders, AM (2018) Values and political predispositions in the age of polarization: examining the relationship between partisanship and ideology in the United States, 1988–2012. Forthcoming, British Journal of Political Science.
Mantzarlis, A (2018) The funny, the weird and the serious: 33 media corrections from 2018. December 18. Poynter. Available at https://www.poynter.org/fact-checking/2018/the-funny-the-weird-and-the-serious-33-media-corrections-from-2018/ (Accessed 18 April 2019).
McCright, AM and Dunlap, RE (2011) The politicization of climate change and polarization in the American public's views of global warming, 2001–2010. The Sociological Quarterly 52, 155–194.
Mitchell, A, Gottfried, J, Kiley, J and Eva Matsa, K (2014) Political polarization & media habits. Pew Research Center. October 21. Available at http://www.journalism.org/2014/10/21/political-polarization-media-habits/ (Accessed 18 April 2019).
Mullinix, KJ, Leeper, TJ, Druckman, JN and Freese, J (2015) The generalizability of survey experiments. Journal of Experimental Political Science 2, 109–138.
Mummolo, J and Peterson, E (2019) Demand effects in survey experiments: an empirical assessment. American Political Science Review 113, 517–529.
Nazaryan, A (2017) Fox news pounded in ratings as truth mounts surprising comeback. May 23. Available at http://www.newsweek.com/fox-news-pounded-ratings-truth-mounts-surprising-comeback-614170. Newsweek. (Accessed 18 April 2019).
Nelson, L (2017) Trump claims his base is ‘Getting Stronger’ despite ‘Fake News’. August 7. Available at http://www.politico.com/story/2017/08/07/trump-new-york-times-criticism-241378. Politico. (Accessed 18 April 2019).
Nyhan, B and Reifler, J (2010) When corrections fail: the persistence of political misperceptions. Political Behavior 32, 303–330.
Nyhan, B and Reifler, J (2013) Which Corrections Work? Research Results and Practice Recommendations. Report for the New America Foundation. Available at http://www.dartmouth.edu/bnyhan/nyhan-reifler-report-naf-corrections.pdf (Accessed 18 April 2019).
Nyhan, B, Porter, E, Reifler, J and Wood, TJ (2017) Taking corrections literally but not seriously? The effects of information on factual beliefs and candidate favorability. Working paper. Available at SSRN https://papers.ssrn.com/sol3/papers.cfm?abstractid=2995128.
Park, H, Parlapiano, A and Sanger-Katz, M (2017) The three plans to repeal Obamacare that failed in the senate this week. July 28. Available at https://nyti.ms/2tXjwAm. The New York Times. (Accessed 18 April 2019).
Pennycook, G and Rand, DG (2017) The implied truth effect: attaching warnings to a subset of fake news stories increases perceived accuracy of stories without warnings. Working paper. Available at https://papers.ssrn.com/sol3/papers.cfm?abstractid=3035384.
Pennycook, G and Rand, DG (2018) Who falls for fake news? the roles of analytic thinking, motivated reasoning, political ideology, and bullshit receptivity. Working paper. Available at SSRN https://papers.ssrn.com/sol3/papers.cfm?abstractid=3023545.
Pennycook, G and Rand, DG (2019a) Fighting misinformation on social media using crowdsourced judgments of news source quality. Proceedings of the National Academy of Sciences 116, 2521–2526.
Pennycook, G and Rand, DG (2019b) Lazy, not biased: susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition 188, 39–50.
Pennycook, G, Cannon, TD and Rand, DG (2018) Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General 147, 1865–1880.
Poole, KT and Rosenthal, H (1984) The polarization of American politics. The Journal of Politics 46, 1061–1079.
Prior, M, Sood, G and Khanna, K (2015) You cannot be serious: the impact of accuracy incentives on partisan bias in reports of economic perceptions. Quarterly Journal of Political Science 10, 489–518.
Rymel, T (2017) If we can't trust media, who can we trust? February 24. Available at https://www.huffpost.com/entry/if-we-cant-trust-media-who-can-we-trustus58b0757fe4b02f3f81e446c4. The Huffington Post. (Accessed 18 April 2019).
Silverman, C (2017) Trump is causing democrats to trust media more, while republicans are endorsing more extreme views, says a new study. December 4. Available at https://www.poynter.org/news/trump-causing-democrats-trust-media-more-while-republicans-are-endorsing-more-extreme-views. Poynter. (Accessed 18 April 2019).
Swift, A (2016) Americans’ Trust in Mass Media Sinks to New Low. Gallup. September 14. Available at http://www.gallup.com/poll/195542/americans-trust-mass-media-sinks-new-low.aspx (Accessed 18 April 2019).
Swire, B, Berinsky, AJ, Lewandowsky, S and Ecker, UKH (2017) Processing political misinformation: comprehending the trump phenomenon. Royal Society Open Science 4, 160802.
The Economist (2016) How Americans’ media habits can predict how they will vote. November 7. Available at https://www.economist.com/blogs/democracyinamerica/2016/11/voters-and-media. The Economist. (Accessed 18 April 2019).
Turner, J (2007) The messenger overwhelming the message: ideological cues and perceptions of bias in television news. Political Behavior 29, 441–464.
Wang, AB (2017) ABC news apologizes for ‘serious error’ in Trump report and suspends Brian Ross for four weeks. December 3. Available at http://wapo.st/2AsEMTq. The Washington Post. (Accessed 18 April 2019).

Katherine Clayton is a pre-doctoral research fellow in the Program in Quantitative Social Science at Dartmouth College, a graduate of the Dartmouth College Class of 2018, and an incoming doctoral student in political science at Stanford University. Her research focuses on political behavior in the U.S. and Europe.

Jase Davis is a Dartmouth College Class of 2018 graduate.

Kristen Hinckley is a Dartmouth College Class of 2017 graduate.

Yusaku Horiuchi is a Professor of Government and the Mitsui Professor of Japanese Studies at Dartmouth College. His research and teaching interests include comparative politics, political behavior, and political methodology.