
Information Equivalence in Survey Experiments

  • Allan Dafoe, Baobao Zhang, and Devin Caughey

Abstract

Survey experiments often manipulate the description of attributes in a hypothetical scenario, with the goal of learning about those attributes’ real-world effects. Such inferences rely on an underappreciated assumption: experimental conditions must be information equivalent (IE) with respect to background features of the scenario. IE is often violated because subjects, when presented with information about one attribute, update their beliefs about others too. Labeling a country “a democracy,” for example, affects subjects’ beliefs about the country’s geographic location. When IE is violated, the effect of the manipulation need not correspond to the quantity of interest (the effect of beliefs about the focal attribute). We formally define the IE assumption, relating it to the exclusion restriction in instrumental-variable analysis. We show how to predict IE violations ex ante and diagnose them ex post with placebo tests. We evaluate three strategies for achieving IE. Abstract encouragement is ineffective. Specifying background details reduces imbalance on the specified details and highly correlated details, but not others. Embedding a natural experiment in the scenario can reduce imbalance on all background beliefs, but raises other issues. We illustrate with four survey experiments, focusing on an extension of a prominent study of the democratic peace.
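The placebo-test logic described in the abstract can be illustrated with a minimal sketch. Assume a hypothetical respondent-level dataset containing the randomized scenario label (e.g., "democracy" vs. "non-democracy") and post-treatment measures of background beliefs the vignette left unspecified (e.g., whether the country is in Europe); under IE, randomization should leave these beliefs balanced across conditions. The file name and column names below are illustrative, not drawn from the paper's replication materials.

```python
# A minimal sketch of an ex-post placebo test for information equivalence (IE).
# All file and column names are hypothetical; the paper's own analyses use its
# replication archive (Dafoe, Zhang, and Caughey 2017).
import pandas as pd
from scipy import stats

# One row per respondent: the randomized scenario label and measures of
# background beliefs that the vignette never specified.
df = pd.read_csv("survey_responses.csv")

treated = df[df["condition"] == "democracy"]
control = df[df["condition"] == "non-democracy"]

placebo_beliefs = [
    "believes_country_in_europe",
    "perceived_military_strength",
    "perceived_trade_with_us",
]

for belief in placebo_beliefs:
    # Under IE, a two-sample test should show no systematic difference in these
    # background beliefs across the randomized conditions.
    t_stat, p_value = stats.ttest_ind(treated[belief], control[belief], nan_policy="omit")
    diff = treated[belief].mean() - control[belief].mean()
    print(f"{belief}: diff = {diff:.3f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```

Large or systematic imbalances on such placebo outcomes are evidence that the manipulation conveyed information beyond the focal attribute, so its effect need not identify the quantity of interest.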



Footnotes


Authors’ note: Replication files for this paper can be downloaded from Dafoe, Zhang, and Caughey (2017). Further materials can be found at allandafoe.com/ie. The main studies reported in this paper have been preregistered and preanalysis plans have been posted. Superscripted capital letters indicate relevant portions of the Supplementary Information (see Section A, available at https://doi.org/10.1017/pan.2018.9). For helpful comments, we would like to thank Peter Aronow, Cameron Ballard-Rosa, Adam Berinsky, Matthew Blackwell, David Broockman, Alex Debs, Chris Fariss, Alan Gerber, Donald Green, Sophia Hatz, Dan Hopkins, Susan Hyde, Josh Kalla, Gary King, Audrey Latura, Jason Lyall, Neil Malhotra, Elizabeth Menninga, Nuno Monteiro, Brendan Nyhan, Jonathan Renshon, Bruce Russett, Cyrus Samii, Jas Sekhon, Maya Sen, Robert Trager, Mike Tomz, Jessica Weeks, Teppei Yamamoto, Sean Zeigler, Thomas Zeitzoff, and participants of the University of North Carolina Research Series, the Yale Institution for Social and Policy Studies Experiments Workshop, the Yale International Relations Workshop, the University of Konstanz Communication, Networks and Contention Workshop, the Polmeth 2014 and 2015 Summer Methods Meetings, the Survey Experiments in Peace Science Workshop, the West Coast Experiments Conference, and the Comparative Political Economy and Conjoint Analysis workshop at the University of Zurich. For support, we acknowledge the MacMillan Institute at Yale University and the National Science Foundation Graduate Research Fellowship Program. Yale IRB has granted exemption to the survey experiments reported in this paper under IRB Protocol # 1302011471.

Contributing Editor: Jonathan N. Katz


References

Acharya, Avidit, Blackwell, Matthew, and Sen, Maya. Forthcoming. Analyzing causal mechanisms in survey experiments. Political Analysis.
Angrist, Joshua D., Imbens, Guido W., and Rubin, Donald B.. 1996. Identification of causal effects using instrumental variables. Journal of the American Statistical Association 91(434):444–455.
Aronow, Peter M., and Samii, Cyrus. 2016. Does regression produce representative estimates of causal effects? American Journal of Political Science 60(1):250–267.
Baker, Andy. 2015. Race, paternalism, and foreign aid: evidence from US public opinion. American Political Science Review 109(1):93–109.
Bansak, Kirk, Hainmueller, Jens, Hopkins, Daniel J., and Yamamoto, Teppei. 2018. Beyond the breaking point? Survey satisficing in conjoint experiments. Political Analysis 26(1):112–119.
Barabas, Jason, and Jerit, Jennifer. 2010. Are survey experiments externally valid? American Political Science Review 104(2):226–242.
Brader, Ted, Valentino, Nicholas A., and Suhay, Elizabeth. 2008. What triggers public opposition to immigration? Anxiety, group cues, and immigration threat. American Journal of Political Science 52(4):959–978.
Butler, Daniel M., and Homola, Jonathan. 2017. An empirical justification for the use of racially distinctive names to signal race in experiments. Political Analysis 25(1):122–130.
Butler, Daniel M., and Powell, Eleanor Neff. 2014. Understanding the party brand: experimental evidence on the role of valence. Journal of Politics 76(2):492–505.
Chong, Dennis, and Druckman, James N.. 2010. Dynamic public opinion: communication effects over time. American Political Science Review 104(4):663–680.
Dafoe, Allan, and Weiss, Jessica. 2018. Provocation, public opinion, and international crises: evidence from China. http://www.allandafoe.com/china.
Dafoe, Allan, Hatz, Sophia, and Zhang, Baobao. Coercion and provocation. Unpublished working paper. http://www.allandafoe.com/provocation.
Dafoe, Allan, Zhang, Baobao, and Caughey, Devin. 2015. Confounding in survey experiments. Paper presented at the Annual Meeting of the Society for Political Methodology, University of Rochester, Rochester, NY, July 23.
Dafoe, Allan, Zhang, Baobao, and Caughey, Devin. 2017. Replication data for: Information equivalence in survey experiments. https://doi.org/10.7910/DVN/KVZXE8, Harvard Dataverse, V1, UNF:6:pUX5QK8MgtHBJ2cJQwYiyw==.
DeSante, Christopher D. 2013. Working twice as hard to get half as far: race, work ethic, and America’s deserving poor. American Journal of Political Science 57(2):342–356.
Dunning, Thad. 2012. Natural Experiments in the Social Sciences: A Design-Based Approach. New York: Cambridge University Press.
Gaines, Brian J., Kuklinski, James H., and Quirk, Paul J.. 2007. The logic of the survey experiment reexamined. Political Analysis 15(1):1–20.
Gilens, Martin. 2002. An anatomy of survey-based experiments. In Navigating Public Opinion: Polls, Policy, and the Future of American Democracy, ed. Manza, Jeff, Cook, Fay Lomax, and Page, Benjamin I.. New York: Oxford University Press, pp. 232–250.
Hainmueller, Jens, and Hiscox, Michael J.. 2010. Attitudes toward highly skilled and low-skilled immigration: evidence from a survey experiment. American Political Science Review 104(1):61–84.
Hainmueller, Jens, Hangartner, Dominik, and Yamamoto, Teppei. 2015. Validating vignette and conjoint survey experiments against real-world behavior. Proceedings of the National Academy of Sciences 112(8):2395–2400.
Hainmueller, Jens, Hopkins, Daniel J., and Yamamoto, Teppei. 2014. Causal inference in conjoint analysis: understanding multidimensional choices via stated preference experiments. Political Analysis 22(1):1–30.
Imai, Kosuke, Keele, Luke, Tingley, Dustin, and Yamamoto, Teppei. 2011. Unpacking the black box of causality: learning about causal mechanisms from experimental and observational studies. American Political Science Review 105(4):765–789.
Johns, Robert, and Davies, Graeme A. M.. 2012. Democratic peace or clash of civilizations? Target states and support for war in Britain and the United States. Journal of Politics 74(4):1038–1052.
Jones, Benjamin F., and Olken, Benjamin A.. 2009. Hit or miss? The effect of assassinations on institutions and war. American Economic Journal: Macroeconomics 1(2):55–87.
Kahneman, Daniel, and Tversky, Amos. 1973. On the psychology of prediction. Psychological Review 80(4):237–251.
Kertzer, Joshua D., and Brutger, Ryan. 2016. Decomposing audience costs: bringing the audience back into audience cost theory. American Journal of Political Science 60(1):234–249.
King, Gary, and Zeng, Langche. 2006. The dangers of extreme counterfactuals. Political Analysis 14(2):131–159.
Krosnick, Jon A. 1999. Survey research. Annual Review of Psychology 50(1):537–567.
Latura, Audrey. 2015. Material and normative factors in women’s professional advancement: experimental evidence from a childcare policy intervention. Paper presented at the American Politics Research Workshop, Harvard University, April 28. http://lists.fas.harvard.edu/pipermail/gov3004-list/attachments/20150427/ea95d274/attachment-0001.pdf.
Marsden, Peter V., and Wright, James D.. 2010. Handbook of Survey Research. Bingley: Emerald Group Publishing.
Middleton, Joel A., Scott, Marc A., Diakow, Ronli, and Hill, Jennifer L.. 2016. Bias amplification and bias unmasking. Political Analysis 24(4):307–323.
Mintz, Alex, and Geva, Nehemia. 1993. Why don’t democracies fight each other? An experimental study. Journal of Conflict Resolution 37(3):484–503.
Mutz, Diana C. 2011. Population-Based Survey Experiments. Princeton, NJ: Princeton University Press.
Renshon, Jonathan, Dafoe, Allan, and Huth, Paul. 2018. Leader influence and reputation formation in world politics. American Journal of Political Science 62(2):325–339.
Sekhon, Jasjeet S. 2009. Opiates for the matches: matching methods for causal inference. Annual Review of Political Science 12(1):487–508.
Sen, Maya, and Wasow, Omar. 2016. Race as a ‘bundle of sticks’: designs that estimate effects of seemingly immutable characteristics. Annual Review of Political Science 19:499–522.
Sher, Shlomi, and McKenzie, Craig R. M.. 2006. Information leakage from logically equivalent frames. Cognition 101(3):467–494.
Sniderman, Paul M., and Grob, Douglas B.. 1996. Innovations in experimental design in attitude surveys. Annual Review of Sociology 22:377–399.
Tomz, Michael, and Weeks, Jessica L.. 2013. Public opinion and the democratic peace. American Political Science Review 107(4):849–865.
Weiss, Jessica, and Dafoe, Allan. 2018. Authoritarian audiences and government rhetoric in international crises: evidence from China. http://www.allandafoe.com/china.

Supplementary materials

Dafoe et al. supplementary material 1 (2.5 MB)

