
Mode Matters: Evaluating Response Comparability in a Mixed-Mode Survey*

Published online by Cambridge University Press: 02 July 2015

Abstract

This paper examines the effects of survey mode on patterns of survey response, paying special attention to the conditions under which mode effects are more or less consequential. We use the Youth Participatory Politics survey, a study administered either online or over the phone to 2,920 young people. Our results provide consistent evidence of mode effects. The internet sample exhibits higher rates of item non-response and “no opinion” responses, and considerably lower levels of differentiation in the use of rating scales. These differences remain even after accounting for how respondents selected into the mode of survey administration. We demonstrate the substantive implications of mode effects in the context of items measuring political knowledge and racial attitudes. We conclude by discussing the implications of our results for comparing data obtained from surveys conducted with different modes, and for the design and analysis of multi-mode surveys.
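
To make the abstract's outcome measures concrete, the following is a minimal sketch, in Python with hypothetical data, of how per-respondent item non-response and rating-scale differentiation might be computed. The response matrix, its dimensions, and the use of the standard deviation of a respondent's ratings as the differentiation measure are illustrative assumptions, not the authors' actual data or code.

```python
import numpy as np
import pandas as pd

# Hypothetical response matrix: 100 respondents x 10 five-point
# rating-scale items; NaN marks item non-response. The sizes and
# missingness rate are illustrative, not from the YPP survey.
rng = np.random.default_rng(0)
responses = pd.DataFrame(
    rng.integers(1, 6, size=(100, 10)).astype(float),
    columns=[f"item_{i}" for i in range(10)],
)
responses = responses.mask(rng.random((100, 10)) < 0.05)  # inject ~5% missingness

# Item non-response rate: the share of items each respondent skipped.
item_nonresponse = responses.isna().mean(axis=1)

# Scale-use differentiation (one common operationalization): the
# standard deviation of a respondent's ratings across items.
# Respondents who "straight-line" (give the same rating everywhere)
# score near zero; heavier use of the scale's range scores higher.
differentiation = responses.std(axis=1)

print(f"mean item non-response rate: {item_nonresponse.mean():.3f}")
print(f"mean differentiation (SD of ratings): {differentiation.mean():.2f}")
```

Under this operationalization, straight-lining respondents score near zero on differentiation; the abstract reports that lower differentiation of this kind was more common in the internet sample.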

Type: Original Articles

Copyright: © The European Political Science Association 2015


Footnotes

* Benjamin T. Bowyer is a Senior Researcher in the Civic Engagement Research Group, School of Education, Mills College, 5000 MacArthur Boulevard, MB-56, Oakland, CA 94613 (bbowyer@mills.edu). Jon C. Rogowski is an Assistant Professor in the Department of Political Science, Washington University, Campus Box 1063, One Brookings Drive, St. Louis, MO 63130 (jrogowski@wustl.edu). The data used in this project were collected as part of the Youth Participatory Politics Study funded by the MacArthur Foundation under the supervision of Cathy Cohen and Joseph Kahne, principal investigators. The authors are grateful to Matthew DeBell, Chris Evans, Ellen Middaugh, and Catherine de Vries for thoughtful discussion and helpful comments. To view supplementary material for this article, please visit http://dx.doi.org/10.1017/psrm.2015.28

Supplementary material

Bowyer et al. datasets (link)
Bowyer and Rogowski supplementary material: Supplementary Appendix (PDF, 106 KB)