
Using screeners to measure respondent attention on self-administered surveys: Which items and how many?

  • Adam J. Berinsky (a1), Michele F. Margolis (a2), Michael W. Sances (a3) and Christopher Warshaw (a4)


Inattentive respondents introduce noise into data sets, weakening correlations between items and increasing the likelihood of null findings. “Screeners” have been proposed as a way to identify inattentive respondents, but questions remain regarding their implementation. First, what is the optimal number of Screeners for identifying inattentive respondents? Second, what types of Screener questions best capture inattention? In this paper, we address both of these questions. Using item-response theory to aggregate individual Screeners, we find that four Screeners are sufficient to identify inattentive respondents. Moreover, a combination of two grid and two multiple-choice questions works well. Our findings have relevance for applied survey research in political science and other disciplines. Most importantly, our recommendations enable the standardization of Screeners on future surveys.
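The aggregation step described above treats each Screener as a binary item and the respondent's attentiveness as a latent trait. As a minimal sketch of that idea, the following estimates a respondent's attentiveness under a two-parameter logistic (2PL) IRT model by grid-search maximum likelihood. The item parameters `ITEMS` are illustrative placeholders, not estimates from the paper, and the authors' actual estimation is model-based (Bayesian), not this simple MLE.

```python
import math

# Hypothetical 2PL item parameters (discrimination a_j, difficulty b_j) for four
# Screeners; illustrative values only, not estimates from the paper.
ITEMS = [(1.5, -0.5), (1.2, 0.0), (2.0, 0.3), (1.8, 0.8)]

def p_pass(theta, a, b):
    """2PL probability that a respondent with attentiveness theta passes an item."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def log_lik(theta, responses):
    """Log-likelihood of a 0/1 vector of Screener passes, given theta."""
    ll = 0.0
    for y, (a, b) in zip(responses, ITEMS):
        p = p_pass(theta, a, b)
        ll += math.log(p) if y else math.log(1.0 - p)
    return ll

def attentiveness(responses, lo=-4.0, hi=4.0, steps=801):
    """Grid-search MLE of latent attentiveness over [lo, hi]."""
    best_theta, best_ll = lo, float("-inf")
    for i in range(steps):
        theta = lo + (hi - lo) * i / (steps - 1)
        ll = log_lik(theta, responses)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta
```

A respondent who passes all four Screeners receives a higher attentiveness estimate than one who passes two, who in turn scores above one who passes none, which is the ordering the aggregation is meant to deliver.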





Supplementary materials
Berinsky et al. supplementary material: Online Appendix (PDF, 334 KB)


