
Observing the Counterfactual? The Search for Political Experiments in Nature

Published online by Cambridge University Press:  04 January 2017

Gregory Robinson*
Affiliation:
Department of Political Science, Binghamton University, State University of New York, PO Box 6000, Binghamton, NY 13902-6000
John E. McNulty
Affiliation:
Department of Political Science, Binghamton University, State University of New York, PO Box 6000, Binghamton, NY 13902-6000. e-mail: jmcnulty@binghamton.edu
Jonathan S. Krasno
Affiliation:
Department of Political Science, Binghamton University, State University of New York, PO Box 6000, Binghamton, NY 13902-6000. e-mail: jkrasno@binghamton.edu
*
e-mail: grobinso@binghamton.edu (corresponding author)

Abstract

A search of recent political science literature and conference presentations reveals substantial fascination with the concept of the natural experiment. However, the research that purports to analyze natural experiments employs a wide array of definitions and applications. In this introductory essay to the special issue, we attempt to define natural experiments and discuss related issues of research design. We also briefly explore the basic methodological issues surrounding the appropriate analysis of natural experiments and give an overview of different analytic techniques. The overarching aim of this essay, and of the issue as a whole, is to encourage applied researchers to look for natural experiments in their own work and to think more systematically about research design.

Type
Research Article
Copyright
Copyright © The Author 2009. Published by Oxford University Press on behalf of the Society for Political Methodology 

Footnotes

Authors' note: A previous draft of this essay was presented at the 2008 Meeting of the American Political Science Association, Boston, MA. Thanks go to Chris Zorn for turning over the pages of Political Analysis to us and our contributors. The authors thank Ryan Enos, Seth Hill, Amy Lerman, Jeff Lewis, James Lo, Michael McDonald, Nate Monroe, and two anonymous reviewers for their helpful comments. Any remaining errors belong to the authors.
