
31 - Treatment Effects

Published online by Cambridge University Press:  05 June 2012

Brian J. Gaines, University of Illinois at Urbana–Champaign
James H. Kuklinski, University of Illinois at Urbana–Champaign

Edited by James N. Druckman, Northwestern University; Donald P. Green, Yale University; James H. Kuklinski, University of Illinois at Urbana–Champaign; Arthur Lupia, University of Michigan, Ann Arbor

Summary

Within the prevailing Fisher-Neyman-Rubin framework of causal inference, causal effects are defined as comparisons of potential outcomes under different treatments. In most contexts, it is impossible or impractical to observe multiple outcomes (realizations of the variable of interest) for any given unit. Given this fundamental problem of causality (Holland 1986), experimentalists approximate the hypothetical treatment effect by comparing averages of groups or, sometimes, averages of differences of matched cases. Hence, they often use (Ȳ | t = 1) − (Ȳ | t = 0) to estimate E[(Y_i | t = 1) − (Y_i | t = 0)], labeling the former quantity the treatment effect or, more accurately, the average treatment effect.
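The logic can be sketched in a short simulation (entirely hypothetical data, not from the chapter): each unit has two potential outcomes, only one of which is observed, and the difference in group means estimates the average of the unit-level differences.

```python
import random

random.seed(42)

# Hypothetical potential outcomes for 1,000 units: Y_i(0) and Y_i(1).
# The unit-level treatment effect averages roughly 2.0 by construction.
n = 1000
y0 = [random.gauss(10, 2) for _ in range(n)]
y1 = [y + random.gauss(2, 1) for y in y0]

# Randomly assign half the units to treatment. Only one potential
# outcome per unit is observed -- the fundamental problem of causality.
treated_set = set(random.sample(range(n), n // 2))
t = [i in treated_set for i in range(n)]
observed = [y1[i] if t[i] else y0[i] for i in range(n)]

# Difference-in-means estimator of the average treatment effect.
mean_treated = sum(observed[i] for i in range(n) if t[i]) / sum(t)
mean_control = sum(observed[i] for i in range(n) if not t[i]) / (n - sum(t))
ate_hat = mean_treated - mean_control

# The true average treatment effect, computable only because this is a
# simulation in which both potential outcomes are known.
true_ate = sum(y1[i] - y0[i] for i in range(n)) / n
```

With a sample of this size, the difference-in-means estimate lands close to the true (but in practice unobservable) average effect.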

The rationale for substituting group averages originates in the logic of the random assignment experiment: each unit has different potential outcomes; units are randomly assigned to one treatment or another; and, in expectation, control and treatment groups should be identically distributed. To make causal inferences in this manner requires that one unit's outcomes not be affected by another unit's treatment assignment. This requirement has come to be known as the stable unit treatment value assumption.
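The role of random assignment can also be checked directly in a toy simulation (hypothetical numbers): holding a fixed set of heterogeneous potential outcomes, the difference-in-means estimate averaged over many random assignments centers on the true average treatment effect.

```python
import random

random.seed(0)

# Fixed, heterogeneous potential outcomes for 20 hypothetical units.
n = 20
y0 = [float(i) for i in range(n)]
y1 = [y0[i] + (1.5 if i % 2 == 0 else 0.5) for i in range(n)]
true_ate = sum(a - b for a, b in zip(y1, y0)) / n  # exactly 1.0

# Repeat the random assignment many times; each draw yields a noisy
# difference-in-means estimate, but in expectation the estimator is
# unbiased for the true average treatment effect.
estimates = []
for _ in range(10_000):
    treated = set(random.sample(range(n), n // 2))
    mean_t = sum(y1[i] for i in treated) / (n // 2)
    mean_c = sum(y0[i] for i in range(n) if i not in treated) / (n // 2)
    estimates.append(mean_t - mean_c)

avg_estimate = sum(estimates) / len(estimates)
```

Note the simulation quietly assumes what the text calls the stable unit treatment value assumption: each unit's outcome here depends only on its own assignment.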

Until recently, experimenters have reported average treatment effects as a matter of routine. Unfortunately, this difference of averages often masks as much as it reveals. Most crucially, it ignores heterogeneity in treatment effects, whereby the treatment affects (or would affect if it were actually experienced) some units differently from others.
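The heterogeneity point can be made concrete with a deliberately extreme toy example (hypothetical numbers): an average treatment effect of zero is consistent with every unit being strongly affected.

```python
# Hypothetical unit-level effects: the treatment raises the outcome by 2
# for one half of the sample and lowers it by 2 for the other half.
n = 100
effects = [2.0 if i < n // 2 else -2.0 for i in range(n)]

ate = sum(effects) / n                            # 0.0: "no effect" on average
subgroup_a = sum(effects[: n // 2]) / (n // 2)    # +2.0
subgroup_b = sum(effects[n // 2:]) / (n // 2)     # -2.0
```

Reporting only the overall average here would suggest the treatment is inert, when in fact it moves every unit.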

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2011


References

Albertson, Bethany, and Lawrence, Adria. 2009. “After the Credits Roll: The Long-Term Effects of Educational Television on Public Knowledge and Attitudes.” American Politics Research 32: 275–300.
Angrist, Joshua, Bettinger, Eric, Bloom, Erik, King, Elizabeth, and Kremer, Michael. 2002. “Vouchers for Private Schooling in Colombia: Evidence from a Randomized Natural Experiment.” American Economic Review 92: 1535–58.
Angrist, Joshua D., Imbens, Guido W., and Rubin, Donald B.. 1996. “Identification of Causal Effects Using Instrumental Variables.” Journal of the American Statistical Association 91: 444–55.
Ansolabehere, Stephen D., Iyengar, Shanto, and Simon, Adam. 1999. “Replicating Experiments Using Aggregate and Survey Data: The Case of Negative Advertising and Turnout.” American Political Science Review 93: 901–9.
Ansolabehere, Stephen D., Iyengar, Shanto, Simon, Adam, and Valentino, Nicholas. 1994. “Does Attack Advertising Demobilize the Electorate?” American Political Science Review 88: 829–38.
Braumoeller, Bear G. 2006. “Explaining Variance; Or, Stuck in a Moment We Can't Get Out of.” Political Analysis 14: 268–90.
Caucutt, Elizabeth M. 2002. “Educational Vouchers When There Are Peer Group Effects – Size Matters.” International Economic Review 43: 195–222.
Fisher, Ronald Aylmer. 1935. The Design of Experiments. Edinburgh: Oliver and Boyd.
Gaines, Brian J., Kuklinski, James H., and Quirk, Paul J.. 2007. “The Logic of the Survey Experiment Reexamined.” Political Analysis 15: 1–20.
Gerber, Alan S., and Green, Donald P.. 2000. “The Effects of Personal Canvassing, Telephone Calls, and Direct Mail on Voter Turnout: A Field Experiment.” American Political Science Review 94: 653–64.
Gerber, Alan S., Green, Donald P., and Larimer, Christopher W.. 2008. “Social Pressure and Vote Turnout: Evidence from a Large-Scale Field Experiment.” American Political Science Review 102: 33–48.
Hansen, Ben B., and Bowers, Jake. 2009. “Attributing Effects to a Cluster-Randomized Get-Out-the-Vote Campaign.” Journal of the American Statistical Association 104: 873–85.
Heckman, James J. 1976. “The Common Structure of Statistical Models of Truncation, Sample Selection and Limited Dependent Variables and a Simple Estimator for Such Models.” Annals of Economic and Social Measurement 5: 475–92.
Holland, Paul W. 1986. “Statistics and Causal Inference.” Journal of the American Statistical Association 81: 945–60.
Hovland, Carl I. 1959. “Reconciling Conflicting Results Derived from Experimental and Survey Studies of Attitude Change.” American Psychologist 14: 8–17.
Howell, William G., Wolf, Patrick J., Campbell, David E., and Peterson, Paul E.. 2002. “School Vouchers and Academic Performance: Results from Three Randomized Field Trials.” Journal of Policy Analysis and Management 21: 191–217.
Ladd, Helen F. 2002. “School Vouchers: A Critical View.” Journal of Economic Perspectives 16: 3–24.
Lau, Richard R., Sigelman, Lee, Heldman, Caroline, and Babbitt, Paul. 1999. “The Effects of Negative Political Advertisements: A Meta-Analytic Assessment.” American Political Science Review 93: 851–76.
Levin, Henry M. 2002. “A Comprehensive Framework for Evaluating Educational Vouchers.” Educational Evaluation and Policy Analysis 24: 159–74.
Little, Roderick J., Long, Qi, and Lin, Xihong. 2008. “Comment [on Shadish et al.].” Journal of the American Statistical Association 103: 1344–50.
Manski, Charles F. 1995. Identification Problems in the Social Sciences. Cambridge, MA: Harvard University Press.
Nickerson, David W. 2005. “Scalable Protocols Offer Efficient Design for Field Experiments.” Political Analysis 13: 233–52.
Nickerson, David W. 2008. “Is Voting Contagious? Evidence from Two Field Experiments.” American Political Science Review 102: 49–76.
Orbell, John, and Dawes, Robyn M.. 1991. “A ‘Cognitive Miser’ Theory of Cooperators' Advantage.” American Political Science Review 85: 515–28.
Rouse, Cecilia Elena. 1998. “Private School Vouchers and Student Achievement: An Evaluation of the Milwaukee Parental Choice Program.” Quarterly Journal of Economics 113: 553–602.
Rubin, Donald B. 1990a. “Comment: Neyman (1923) and Causal Inference in Experiments and Observational Studies.” Statistical Science 5: 472–80.
Rubin, Donald B. 1990b. “Formal Modes of Statistical Inference for Causal Effects.” Journal of Statistical Planning and Inference 25: 279–92.
Shadish, William R., Clark, M. H., and Steiner, Peter M.. 2008. “Can Nonrandomized Experiments Yield Accurate Answers? A Randomized Experiment Comparing Random and Nonrandom Assignments.” Journal of the American Statistical Association 103: 1334–43.
Tobin, James. 1958. “Estimation of Relationships for Limited Dependent Variables.” Econometrica 26: 24–36.
Transue, John E., Lee, Daniel J., and Aldrich, John H.. 2009. “Treatment Spillover Effects across Survey Experiments.” Political Analysis 17: 143–61.
