
Standards for Experimental Research: Encouraging a Better Understanding of Experimental Methods

Published online by Cambridge University Press: 12 January 2016

Diana C. Mutz
Affiliation:
Political Science and Communication, University of Pennsylvania, Philadelphia, PA, USA; email: mutz@sas.upenn.edu
Robin Pemantle
Affiliation:
Department of Mathematics, University of Pennsylvania, Philadelphia, PA, USA

Abstract

In this essay, we closely examine three aspects of the Reporting Guidelines for this journal, as described by Gerber et al. (2014, Journal of Experimental Political Science 1 (1): 81–98) in its inaugural issue. The first two concern manipulation checks and the circumstances under which reporting response rates is appropriate. The third and most critical issue concerns the committee's recommendations for detecting errors in randomization, an area where there is evidence of widespread confusion about experimental methods throughout our major journals. Given that one goal of the Journal of Experimental Political Science is to promote best practices and a better understanding of experimental methods across the discipline, we recommend changes to the Standards that will allow the journal to play a leading role in correcting these misunderstandings.
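The statistical point at the heart of the randomization-check debate can be made concrete with a short simulation. The sketch below is an illustrative Python example (not code from the article; variable names and parameter values are assumptions for the illustration) showing that when random assignment is carried out correctly, a conventional t-test for baseline "balance" on a pre-treatment covariate still rejects at roughly its nominal 5% rate, so an occasional significant imbalance is expected by chance and is not, by itself, evidence of an error in randomization.

    import numpy as np
    from scipy import stats

    # Simulate many correctly randomized experiments and count how often a
    # baseline balance test on a pre-treatment covariate comes up "significant."
    rng = np.random.default_rng(2016)      # illustrative seed and settings
    n, sims, alpha = 200, 5000, 0.05

    rejections = 0
    for _ in range(sims):
        covariate = rng.normal(size=n)          # pre-treatment covariate
        treated = rng.permutation(n) < n // 2   # random assignment, half treated
        result = stats.ttest_ind(covariate[treated], covariate[~treated])
        rejections += result.pvalue < alpha     # "imbalance" despite flawless randomization

    # The printed share is close to alpha (about 0.05) even though every
    # simulated experiment was randomized correctly.
    print(f"Share of significant balance tests: {rejections / sims:.3f}")

With several covariates the problem compounds: testing k independent covariates produces at least one nominally significant imbalance with probability 1 − (1 − α)^k, which is one reason the clinical-trials literature cited below (e.g., Senn 1994; Boers 2011) cautions against significance testing for baseline balance.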

Type
Research Article
Copyright
Copyright © The Experimental Research Section of the American Political Science Association 2016 


References

Abelson, R. 1995. Statistics as Principled Argument. Hillsdale, NJ: L. Erlbaum Associates.
Arceneaux, K. 2012. "Cognitive Biases and the Strength of Political Arguments." American Journal of Political Science 56 (2): 271–85.
Arceneaux, K. and Kolodny, R. 2009. "Educating the Least Informed: Group Endorsements in a Grassroots Campaign." American Journal of Political Science 53 (4): 755–70.
Barabas, J., Pollock, W. and Wachtel, J. 2011. "Informed Consent: Roll-Call Knowledge, the Mass Media, and Political Representation." Paper presented at the Annual Meeting of the American Political Science Association, Seattle, WA, September 1–4. (http://www.jasonbarabas.com/images/BarabasPollockWachtel_RewardingRepresentation.pdf), accessed September 1, 2014.
Berinsky, A. J., Margolis, M. F. and Sances, M. W. 2014. "Separating the Shirkers from the Workers? Making Sure Respondents Pay Attention on Self-Administered Surveys." American Journal of Political Science 58 (3): 739–53.
Berinsky, A. J. and Mendelberg, T. 2005. "The Indirect Effects of Discredited Stereotypes in Judgments of Jewish Leaders." American Journal of Political Science 49 (4): 845–64.
Boers, M. 2011. "In Randomized Trials, Statistical Tests Are Not Helpful to Study Prognostic (Im)balance at Baseline." Lett Ed Rheumatol 1 (1): e110002. doi:10.2399/ler.11.0002.
Butler, D. M. and Broockman, D. E. 2011. "Do Politicians Racially Discriminate Against Constituents? A Field Experiment on State Legislators." American Journal of Political Science 55 (3): 463–77.
Cover, A. D. and Brumberg, B. S. 1982. "Baby Books and Ballots: The Impact of Congressional Mail on Constituent Opinion." The American Political Science Review 76 (2): 347–59.
Cozby, P. C. 2009. Methods of Behavioral Research. 10th ed. New York, NY: McGraw-Hill.
Fisher, R. A. 1935. The Design of Experiments. London: Oliver and Boyd.
Foschi, M. 2007. "Hypotheses, Operationalizations, and Manipulation Checks." Chapter 5 in Laboratory Experiments in the Social Sciences, eds. Webster, M. Jr. and Sell, J., pp. 113–140. New York: Elsevier.
Franco, A., Malhotra, N. and Simonovits, G. 2014. "Publication Bias in the Social Sciences: Unlocking the File Drawer." Science 345 (6203): 1502–5.
Franklin, C. 1991. "Efficient Estimation in Experiments." The Political Methodologist 4 (1): 13–15.
Gerber, A., Arceneaux, K., Boudreau, C., Dowling, C., Hillygus, S., Palfrey, T., Biggers, D. R. and Hendry, D. J. 2014. "Reporting Guidelines for Experimental Research: A Report from the Experimental Research Section Standards Committee." Journal of Experimental Political Science 1 (1): 81–98.
Gosnell, H. F. 1942. Grass Roots Politics. Washington, DC: American Council on Public Affairs.
Green, D. P. and Gerber, A. S. 2002. "Reclaiming the Experimental Tradition in Political Science." In Political Science: The State of the Discipline, 3rd ed., eds. Milner, H. V. and Katznelson, I., pp. 805–32. New York: W.W. Norton & Co.
Harbridge, L., Malhotra, N. and Harrison, B. F. 2014. "Public Preferences for Bipartisanship in the Policymaking Process." Legislative Studies Quarterly 39 (3): 327–55.
Hiscox, M. J. 2006. "Through a Glass and Darkly: Attitudes Toward International Trade and the Curious Effects of Issue Framing." International Organization 60 (3): 755–80.
Hutchings, V. L., Valentino, N. A., Philpot, T. S. and White, I. K. 2004. "The Compassion Strategy: Race and the Gender Gap in Campaign 2000." Public Opinion Quarterly 68 (4): 512–41.
Hyde, S. D. 2010. "Experimenting in Democracy Promotion: International Observers and the 2004 Presidential Elections in Indonesia." Perspectives on Politics 8 (2): 511–27.
Imai, K., King, G. and Stuart, E. 2008. "Misunderstandings between Experimentalists and Observationalists about Causal Inference." Journal of the Royal Statistical Society, Series A 171 (2): 481–502.
Ioannidis, J. P. A. 2005. "Why Most Published Research Findings Are False." PLoS Medicine 2 (8): e124. doi:10.1371/journal.pmed.0020124.
Kessler, J. B. and Meier, S. 2014. "Learning from (Failed) Replications: Cognitive Load Manipulations and Charitable Giving." Journal of Economic Behavior and Organization 102 (June): 10–13.
Ladd, J. M. 2010. "The Neglected Power of Elite Opinion Leadership to Produce Antipathy Toward the News Media: Evidence from a Survey Experiment." Political Behavior 32 (1): 29–50.
Malhotra, N. and Popp, E. 2012. "Bridging Partisan Divisions over Antiterrorism Policies: The Role of Threat Perceptions." Political Research Quarterly 65 (1): 34–47.
Michelbach, P. A., Scott, J. T., Matland, R. E. and Bornstein, B. H. 2003. "Doing Rawls Justice: An Experimental Study of Income Distribution Norms." American Journal of Political Science 47 (3): 523–39.
Moher, D., Hopewell, S., Schulz, K. F., Montori, V., Gøtzsche, P. C., Devereaux, P. J., Elbourne, D., Egger, M. and Altman, D. G. 2010. "CONSORT 2010 Explanation and Elaboration: Updated Guidelines for Reporting Parallel Group Randomised Trials." Journal of Clinical Epidemiology 63 (8): e1–e37.
Morgan, K. and Rubin, D. 2012. "Rerandomization to Improve Covariate Balance in Experiments." Annals of Statistics 40 (2): 1263–82.
Mutz, D. C. and Pemantle, R. 2011. "The Perils of Randomization Checks in the Analysis of Experiments." Paper presented at the Annual Meetings of the Society for Political Methodology, July 28–30. (http://www.math.upenn.edu/~pemantle/papers/Preprints/perils.pdf), accessed September 1, 2014.
Newsted, P. R., Todd, P. and Zmud, R. W. 1997. "Measurement Issues in the Study of Human Factors in Management Information Systems." Chapter 16 in Human Factors in Management Information Systems, ed. Carey, J., pp. 211–242. New York: Ablex.
Perdue, B. C. and Summers, J. O. 1986. "Checking the Success of Manipulations in Marketing Experiments." Journal of Marketing Research 23 (4): 317–26.
Permutt, T. 1990. "Testing for Imbalance of Covariates in Controlled Experiments." Statistics in Medicine 9 (12): 1455–62.
Prior, M. 2009. "Improving Media Effects Research through Better Measurement of News Exposure." Journal of Politics 71 (3): 893–908.
Sances, M. W. 2012. "Is Money in Politics Harming Trust in Government? Evidence from Two Survey Experiments." (http://www.tessexperiments.org/data/SancesSSRN.pdf), accessed January 20, 2015.
Sansone, C., Morf, C. C. and Panter, A. T. 2008. The Sage Handbook of Methods in Social Psychology. Thousand Oaks, CA: Sage Publications.
Saperstein, A. and Penner, A. M. 2012. "Racial Fluidity and Inequality in the United States." American Journal of Sociology 118 (3): 676–727.
Scherer, N. and Curry, B. 2010. "Does Descriptive Race Representation Enhance Institutional Legitimacy? The Case of the U.S. Courts." Journal of Politics 72 (1): 90–104.
Schulz, K. F., Altman, D. G. and Moher, D., for the CONSORT Group. 2010. "CONSORT 2010 Statement: Updated Guidelines for Reporting Parallel Group Randomised Trials." British Medical Journal 340: c332.
Senn, S. 1994. "Testing for Baseline Balance in Clinical Trials." Statistics in Medicine 13: 1715–26.
Senn, S. 2013. "Seven Myths of Randomisation in Clinical Trials." Statistics in Medicine 32 (9): 1439–50. doi:10.1002/sim.5713.
Thye, S. 2007. "Logic and Philosophical Foundations of Experimental Research in the Social Sciences." Chapter 3 in Laboratory Experiments in the Social Sciences, pp. 57–86. Burlington, MA: Academic Press.
Valentino, N. A., Hutchings, V. L. and White, I. K. 2002. "Cues That Matter: How Political Ads Prime Racial Attitudes during Campaigns." The American Political Science Review 96 (1): 75–90.