Presidential elections can be forecast using information from political and economic conditions, polls, and a statistical model of changes in public opinion over time. However, these “knowns” about how to make a good presidential election forecast come with many unknowns, owing to the challenges of evaluating forecast calibration and communication. We highlight how incentives may shape forecasts, and particularly forecast uncertainty, in light of these calibration challenges. We illustrate the difficulties of creating, communicating, and evaluating election predictions, using The Economist and FiveThirtyEight forecasts of the 2020 election as examples, and offer recommendations for forecasters and scholars.
This chapter offers our first empirical analyses of media coverage of policy, across the various policy domains and news organizations. We first compare the aggregated “media signals” to actual changes in policy. Does aggregated coverage follow policy over time? Does this relationship vary across domains? Given the multiple measures developed in the previous chapter, we also consider whether and how the choice of measure matters for what we observe. The chapter centers on figures depicting the ebb and flow of policy and media coverage over time, offering the first large-scale comparison of policy change, and media coverage of policy change, across six domains over a forty-year period. Do patterns vary across newspapers? What about other media, particularly television coverage? Does it match what we see in newspapers? The chapter offers some critical diagnostics, assessing the degree to which media coverage has followed public policy and, relatedly, whether media coverage reliably includes the information citizens need to respond to policy change.
This chapter spells out how we believe the mass media cover public policy, particularly the outputs government produces. Although a considerable body of work details a range of biases in coverage and a lack of policy content, we posit that the mass media can and do track trends in policy, at least in highly salient policy areas that attract a lot of attention. Put differently, even though media coverage can be biased and inaccurate, a signal of important policy actions can still emerge amidst the noise. News organizations have a professional and economic interest in providing one, at least up to a point. We are especially interested in media coverage of policy change. This is in part because we suppose that the media often report on changes in policy, not levels, much as research on news coverage of other areas, such as economic conditions, has revealed. (Change also seems easier to measure directly.) The conceptualization and theory in this chapter guide both the measurement and the analyses that follow.
Chapter 3 laid out the building blocks for our measures of the media policy signal and presented a preliminary version of that signal across newspapers, television, and social media content. We now turn to a series of refinements and robustness tests, critical checks on the accuracy of our media policy signal measures. We begin with some comparisons between crowdsourced codes and those produced by trained student coders. Assessing the accuracy of crowdsourced data is important for the dictionary-based measures in the preceding chapter and for the comparisons with machine-learning-based measures introduced in this chapter. We then turn to crowdsourced content analyses of the degree to which extracted content reflects past, present, or future changes in spending. Our measures likely reflect some combination of these spending changes, and understanding the balance of each will be important for analyses in subsequent chapters. Finally, we present comparisons of dictionary-based measures and those based on machine-learning, using nearly 30,000 human-coded sentences and random forest models to replicate that coding across our entire corpus.
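The machine-learning replication step described above can be sketched in miniature. The following is an illustrative example only, using scikit-learn with invented toy sentences; the book's actual corpus, features, coding scheme, and model specification are not reproduced here.

```python
# Hypothetical sketch: train a random forest on human-coded sentences,
# then apply it to uncoded text. Toy data only, not the book's corpus.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Invented human-coded sentences: 1 = spending increase, 0 = spending decrease
sentences = [
    "Congress boosted defense spending by ten percent",
    "The budget cuts slashed funding for education programs",
    "Lawmakers expanded health care appropriations this year",
    "The agency's budget was reduced sharply",
]
labels = [1, 0, 1, 0]

# Text features plus a random forest, chained into one pipeline
model = make_pipeline(
    TfidfVectorizer(),
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(sentences, labels)

# Replicate the human coding across new, uncoded sentences
new_sentences = ["Spending on welfare programs increased again"]
predictions = model.predict(new_sentences)
```

In practice the training set would be the roughly 30,000 human-coded sentences mentioned above, and the fitted model would then be applied across the entire news corpus.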
Does media coverage matter for the functioning of representative democracy? Do people notice news coverage? Do they take it into account? In particular, do citizens use the information that media content conveys to update their policy preferences? These questions are the central motivation for this book. In this chapter we try to provide some answers. We begin by introducing our principal measures of public preferences from the General Social Survey. We then consider a smaller, unique body of data on public perceptions of policy change, from the American National Election Studies. These data allow us some preliminary insight into whether the public notices government spending and media coverage of government spending. The remainder of the chapter presents results of analyses of public preferences, first to establish the effects of spending on preferences, and then to assess the role of the media signal. Results document thermostatic public responsiveness, as found in previous research, and show that news coverage is a critical mediating force.
Preceding chapters have provided evidence that media coverage frequently reflects public policy, and that public preferences respond to a combination of policy and the media “policy signal.” We believe those results speak to important questions about the nature and functioning of representative democracy. A good number of questions nevertheless remain, and this chapter addresses those that seem to us most pressing. First, we consider the impact that trends in media consumption have on public responsiveness. Second, we consider heterogeneity in public responsiveness to the media policy signal. Third, we reconsider the causal relationships between policy, news coverage, and the public. Fourth and finally, we investigate several of the domain-specific media effects identified in Chapter 6. Media coverage of policy matters, but to varying degrees and in different ways. We offer additional analyses here to help illuminate some of these domain-level differences in information flows.
This chapter provides an introduction to the ideas and literatures that guide the analyses that follow. We consider past work on the potential role of media coverage in representative democracy and public responsiveness.
This chapter moves from theory to practice and implements a measure of media coverage. We introduce our database of news coverage and describe the unique “layered dictionary” approach used to identify sentences on the direction of policy change. The focus on change in policy, rather than levels, is critical, and we discuss it in some detail. We also compare the application of dictionary and supervised machine-learning approaches to the content analysis of news coverage. This chapter is necessarily technical, but it also is an opportunity to introduce these methods to a broader audience. We escort readers through the various available approaches, our implementation of them, and an assessment of the outputs they produce. We end the chapter with some substantive findings: the overall amount of coverage of policy change in newspapers and television, and the general trends in the aggregated “media signals” generated by the different approaches.
Around the world, there are increasing concerns about the accuracy of media coverage. It is vital in representative democracies that citizens have access to reliable information about what is happening in government policy, so that they can form meaningful preferences and hold politicians accountable. Yet much research and conventional wisdom questions whether the necessary information is available, consumed, and understood. This study is the first large-scale empirical investigation into the frequency and reliability of media coverage in five policy domains, and it provides tools that can be exported to other areas, in the US and elsewhere. Examining decades of government spending, media coverage, and public opinion in the US, this book assesses the accuracy of media coverage, and measures its direct impact on citizens' preferences for policy. This innovative study has far-reaching implications for those studying and teaching politics as well as for reporters and citizens.
It is fairly well known that proper time series analysis requires that estimated equations be balanced. Numerous scholars mistake this to mean that one cannot mix orders of integration. Previous studies have clarified the distinction between equation balance and mixing orders of integration, and shown that mixing orders of integration does not increase the risk of type I error when using general error correction/autoregressive distributed lag (GECM/ADL) models, so long as equations are balanced (and other modeling assumptions are met). This paper builds on that research to assess the consequences for type II error when employing those models. Specifically, we consider cases where a true relationship exists, the left- and right-hand sides of the equation mix orders of integration, and the equation still is balanced. In the asymptotic case, we find that the different orders of integration do not preclude identification of the true relationship using the GECM/ADL. We then highlight that estimation is trickier in practice, with finite samples, as the data sometimes do not reveal the underlying process. Even in these cases, however, simulations show that researchers will typically draw accurate inferences as long as they select their models based on the observed characteristics of the data and test to be sure that standard model assumptions are met. We conclude by considering the implications for researchers analyzing or conducting simulations with time series data.
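The kind of balanced equation at issue can be illustrated with a small simulation sketch. The data-generating process and parameter values below are invented for illustration and are not the paper's actual design: an I(1) regressor x drives a stationary y through its first difference, so the GECM Δy_t = α + φ·y_{t-1} + β·Δx_t + ε_t is balanced even though y and x have different orders of integration, and OLS recovers the true relationship.

```python
# Illustrative simulation (not the paper's design): mixed orders of
# integration in a balanced GECM, estimated by OLS.
import numpy as np

rng = np.random.default_rng(0)
T = 5000
x = np.cumsum(rng.normal(size=T))  # I(1) random walk regressor
dx = np.diff(x)                    # Δx_t, which is stationary

# True DGP: y_t = 0.5·y_{t-1} + 1.0·Δx_t + ε_t, so y is I(0)
y = np.zeros(T)
eps = rng.normal(size=T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 1.0 * dx[t - 1] + eps[t]

# Balanced GECM regression: Δy_t on a constant, y_{t-1}, and Δx_t
dy = np.diff(y)
X = np.column_stack([np.ones(T - 1), y[:-1], dx])
alpha, phi, beta = np.linalg.lstsq(X, dy, rcond=None)[0]
# True values: phi = 0.5 - 1 = -0.5 (error-correction rate), beta = 1.0
```

With a long series the estimates land close to the true error-correction rate and short-run effect; the finite-sample difficulties discussed above arise when T is small enough that the data do not clearly reveal the underlying process.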