
9 - Using Regression Discontinuity Designs in Crime Research

Published online by Cambridge University Press: 05 June 2014

Emily G. Owens (Cornell University)
Jens Ludwig (University of Chicago)
Brandon C. Welsh (Northeastern University)
Anthony A. Braga (Rutgers University, New Jersey)
Gerben J. N. Bruinsma (Netherlands Institute for the Study of Crime and Law Enforcement)

Summary

INTRODUCTION

Some of the most interesting and important questions in criminology are causal in nature: Does neighborhood disadvantage influence an individual’s risk of criminal involvement? Does drug treatment work? Do changes in the threat of punishment deter crime? Distinguishing between causation and mere correlation is central to developing, testing, and refining our theories about the determinants of criminal behavior, most of which build on basic facts about how different constructs are causally related to one another. The distinction between statistical correlations and causal relationships is also of more than just academic interest, as most policy decisions hinge on causal questions. Getting the wrong answers to these questions leads to misguided policies that divert resources away from more effective alternatives and sometimes even impose direct harm on society.

Although everyone in empirical criminology recognizes the challenges to valid causal inference created by the threat of omitted variables, the field remains divided about the most constructive way to proceed. One camp is often viewed as having adopted a purist, “randomized-trial-or-bust” perspective – a group that Sampson (2010) calls “randomistas.” Another camp consists of those we would call “research pluralists,” who are happy to consider findings from any sort of research design on a case-by-case basis.
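To make the design at issue concrete: a regression discontinuity (RD) study exploits a rule that assigns treatment whenever a “running variable” crosses a known cutoff, comparing outcomes for units just above and below that threshold (Thistlethwaite and Campbell 1960; Imbens and Lemieux 2008). Hjalmarsson (2009b), for instance, uses the age of criminal majority as such a cutoff. The sketch below is a minimal illustration on simulated data; the variable names, the cutoff of 18, the bandwidth, and the jump size are hypothetical choices made for exposition, not values taken from the chapter.

```python
# Minimal sharp regression discontinuity (RD) sketch on simulated data.
# All names and magnitudes here are illustrative, not from the chapter.
import numpy as np

rng = np.random.default_rng(0)

n = 5_000
age = rng.uniform(16.0, 20.0, n)   # running variable
cutoff = 18.0                      # hypothetical age-of-majority cutoff
treated = age >= cutoff            # sharp assignment rule

# Simulated outcome: a smooth trend in age plus a -0.10 jump at the
# cutoff (e.g., offending drops once adult penalties apply).
offending = (0.40 - 0.03 * (age - cutoff)
             - 0.10 * treated
             + rng.normal(0.0, 0.15, n))

def intercept_at_cutoff(x, y):
    """OLS intercept of y regressed on (x - cutoff), i.e., the fitted
    value of y at the cutoff itself."""
    X = np.column_stack([np.ones_like(x), x - cutoff])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[0]

# Local linear RD estimate: fit a line on each side of the cutoff
# within a bandwidth and compare the two fitted values at the cutoff.
bandwidth = 1.0
window = np.abs(age - cutoff) <= bandwidth
left = window & ~treated
right = window & treated
rd_estimate = (intercept_at_cutoff(age[right], offending[right])
               - intercept_at_cutoff(age[left], offending[left]))
print(f"Estimated jump at the cutoff: {rd_estimate:.3f}")  # ~ -0.10
```

In applied work the bandwidth would be chosen by a data-driven rule rather than fixed by hand (Imbens and Kalyanaraman 2009), and the density of the running variable near the cutoff would be inspected for signs of manipulation (McCrary 2008).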

Type: Chapter
In: Experimental Criminology: Prospects for Advancing Science and Public Policy, pp. 194–222
Publisher: Cambridge University Press
Print publication year: 2013


References

Angrist, Joshua D., and Pischke, Jörn-Steffen. 2009. Mostly Harmless Econometrics: An Empiricist’s Companion. Princeton, NJ: Princeton University Press.
Becker, Gary. 1968. “Crime and Punishment: An Economic Approach.” Journal of Political Economy 76(2): 169–217.
Berk, Richard. 2010. “Recent Perspectives on the Regression Discontinuity Design.” In Handbook of Quantitative Criminology, edited by Piquero, A. and Weisburd, D., pp. 563–79. New York: Springer.
Berk, Richard, Barnes, Geoffrey, Ahlman, Lindsay, and Kurtz, Ellen. 2010. “When Second Best Is Good Enough: A Comparison between a True Experiment and a Regression Discontinuity Quasi-Experiment.” Journal of Experimental Criminology 6(2): 191–208.
Berk, Richard A., and Leeuw, Jan de. 1999. “An Evaluation of California’s Inmate Classification System Using a Generalized Regression Discontinuity Design.” Journal of the American Statistical Association 94(448): 1045–52.
Berk, Richard A., and Rauma, David. 1983. “Capitalizing on Nonrandom Assignment to Treatments: A Regression-Discontinuity Evaluation of a Crime-Control Program.” Journal of the American Statistical Association 78(381): 21–7.
Chen, M. Keith, and Shapiro, Jesse. 2007. “Do Harsher Prison Conditions Reduce Recidivism? A Discontinuity-Based Approach.” American Law and Economics Review 9(1): 1–29.
Cook, Thomas D., and Wong, Vivian C. 2008. “Empirical Tests of the Validity of the Regression Discontinuity Design.” Annales d’Economie et de Statistique (91/92): 127–50.
Gerber, Alan S., Green, Donald P., and Kaplan, Edward H. 2004. “The Illusion of Learning from Observational Research.” In Problems and Methods in the Study of Politics, edited by Shapiro, Ian, Smith, Rogers M., and Masoud, Tarek E., pp. 251–73. New York: Cambridge University Press.
Heckman, James, and Hotz, V. Joseph. 1989. “Choosing among Alternative Nonexperimental Methods for Estimating the Impact of Social Programs: The Case of Manpower Training.” Journal of the American Statistical Association 84(408): 862–74.
Heckman, James, LaLonde, Robert, and Smith, Jeff. 1999. “The Economics and Econometrics of Active Labor Market Programs.” In Handbook of Labor Economics, edited by Ashenfelter, O. and Card, D., pp. 1865–2097. Philadelphia: Elsevier.
Hjalmarsson, Randi. 2009a. “Juvenile Jails: A Path to the Straight and Narrow or Hardened Criminality?” Journal of Law and Economics 52(4): 779–809.
Hjalmarsson, Randi. 2009b. “Crime and Expected Punishment: Changes in Perceptions at the Age of Criminal Majority.” American Law and Economics Review 11(1): 209–48.
Holland, Paul. 1988. “Causal Inference, Path Analysis, and Recursive Structural Equations Models.” Sociological Methodology 18: 449–84.
Imbens, Guido, and Kalyanaraman, Karthik. 2009. “Optimal Bandwidth Choice for the Regression Discontinuity Estimator.” NBER Working Paper No. 14726. Cambridge: National Bureau of Economic Research.
Imbens, Guido, and Lemieux, Thomas. 2008. “Regression Discontinuity Designs: A Guide to Practice.” Journal of Econometrics 142(2): 615–35.
LaLonde, Robert. 1986. “Evaluating the Econometric Evaluations of Training Programs.” American Economic Review 76(4): 604–20.
Lee, David, and Lemieux, Thomas. 2010. “Regression Discontinuity Designs in Economics.” Journal of Economic Literature 48(2): 281–355.
Lee, David, and McCrary, Justin. 2005. “Crime, Punishment, and Myopia.” NBER Working Paper No. 11491. Cambridge: National Bureau of Economic Research.
Lerman, Amy E. 2009. “The People Prisons Make: Effects of Incarceration on Criminal Psychology.” In Do Prisons Make Us Safer? The Benefits and Costs of the Prison Boom, edited by Raphael, Steven and Stoll, Michael A., pp. 152–76. New York: Russell Sage Foundation.
Ludwig, Jens, and Miller, Douglas. 2007. “Does Head Start Improve Children’s Life Chances? Evidence from a Regression Discontinuity Design.” Quarterly Journal of Economics 122(1): 159–208.
Marie, Olivier, Walmsley, Rachel, and Moreton, Karen. 2011. “The Effect of Early Release of Prisoners on Home Detention Curfew on Recidivism.” United Kingdom Ministry of Justice Report.
McCrary, Justin. 2008. “Manipulation of the Running Variable in the Regression Discontinuity Design: A Density Test.” Journal of Econometrics 142(2): 698–714.
McKenzie, David, Gibson, John, and Stillman, Steven. 2010. “How Important Is Selection? Experimental versus Nonexperimental Measures of Income Gains from Migration.” Journal of the European Economic Association 8(4): 913–45.
Rauma, David, and Berk, Richard A. 1987. “Remuneration and Recidivism: The Long-term Impact of Unemployment Compensation on Ex-offenders.” Journal of Quantitative Criminology 3(1): 3–27.
Rubin, Donald. 1977. “Assignment to a Treatment Group on the Basis of a Covariate.” Journal of Educational Statistics 2(1): 1–26.
Sampson, Robert. 2010. “Gold Standard Myths: Observations on the Experimental Turn in Quantitative Criminology.” Journal of Quantitative Criminology 26(4): 489–500.
Smith, Jeffrey, and Todd, Petra. 2005. “Does Matching Overcome LaLonde’s Critique of Nonexperimental Estimators?” Journal of Econometrics 125(1–2): 305–53.
Thistlethwaite, Donald L., and Campbell, Donald T. 1960. “Regression-Discontinuity Analysis: An Alternative to the Ex-Post Facto Experiment.” Journal of Educational Psychology 51: 309–17.
Wilde, Elizabeth T., and Hollister, R. 2007. “How Close Is Close Enough? Testing Nonexperimental Estimates of Impact against Experimental Estimates of Impact with Education Test Scores as Outcomes.” Journal of Policy Analysis and Management 26: 455–77.
