
3 - Critical Thinking in Quasi-Experimentation

Published online by Cambridge University Press: 05 June 2012

William R. Shadish (University of California, Merced)
Robert J. Sternberg (Yale University, Connecticut)
Henry L. Roediger III (Washington University, St. Louis)
Diane F. Halpern (Claremont McKenna College, California)

Summary

All experiments are about discovering the effects of causes. In this sense, humans have always been experimenters, from early humans seeing whether striking one stone against another would start a fire, to the modern cook trying out new ingredients in a recipe to see how they change the taste. All experiments have in common the deliberate manipulation of an assumed cause (striking a stone, adding a new ingredient), followed by observation of the effects (fire, taste). This common thread holds for all modern scientific experiments, including the randomized experiments discussed in the previous chapter and the quasi-experiments described in the present chapter.

This chapter focuses on critical thinking about causation in quasi-experiments. The focus on causation is not because other kinds of critical thinking are unimportant in quasi-experiments. On the contrary, every kind of critical thinking described in the previous chapter for randomized experiments must also be done in quasi-experiments: choosing good independent and dependent variables, identifying useful populations of participants and settings to study, ensuring that the assumptions of statistical tests are met, and thinking about ways in which the results might generalize. The quasi-experimenter, however, has one more task: the critical thinking that takes the place of random assignment.
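
To make concrete what random assignment buys the experimenter, and therefore what the quasi-experimenter's extra critical thinking must compensate for, here is a minimal simulation sketch. It is not from the chapter: the unobserved "motivation" confounder and the self-selection probabilities are illustrative assumptions. With no true treatment effect, random assignment yields an estimate near zero, while self-selection into treatment (as in a nonequivalent-groups quasi-experiment) produces a sizable spurious effect.

import random

random.seed(42)

def estimate_effect(n=20_000, true_effect=0.0, randomized=True):
    # Each simulated participant has an unobserved "motivation" score that raises
    # the outcome on its own. Under random assignment it is balanced across groups;
    # under self-selection, more motivated participants are likelier to be treated.
    treated_outcomes, control_outcomes = [], []
    for _ in range(n):
        motivation = random.gauss(0, 1)                      # unobserved confounder
        if randomized:
            treated = random.random() < 0.5                  # coin-flip assignment
        else:
            treated = random.random() < (0.8 if motivation > 0 else 0.2)
        outcome = true_effect * treated + motivation + random.gauss(0, 1)
        (treated_outcomes if treated else control_outcomes).append(outcome)
    # Naive estimate: difference between group means
    return (sum(treated_outcomes) / len(treated_outcomes)
            - sum(control_outcomes) / len(control_outcomes))

print("randomized estimate:    %+.2f" % estimate_effect(randomized=True))   # close to zero
print("self-selected estimate: %+.2f" % estimate_effect(randomized=False))  # spurious positive effect

In this sketch, the spurious effect comes entirely from who ends up in which group, which is exactly the gap that critical thinking about causation in quasi-experiments is meant to close.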

All tasks in life are easier when you have the proper tools. This chapter describes some of the basic tools we use for the task of critical thinking about causation in quasi-experiments.

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2006

References

Campbell, D. T. (1957). Factors relevant to the validity of experiments in social settings. Psychological Bulletin, 54, 297–312.CrossRefGoogle ScholarPubMed
Campbell, D. T., & Erlebacher, A. E. (1970). How regression artifacts can mistakenly make compensatory education programs look harmful. In Hellmuth, J. (Vol. Ed.), The disadvantaged child: Vol. 3. Compensatory education: A national debate (pp. 185–225). New York: Brunner/Mazel.Google Scholar
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.Google Scholar
Cicirelli, V. G., and Associates. (1969). The impact of Head Start: An evaluation of the effects of Head Start on children's cognitive and affective development, vols. 1 and 2. A Report to the Office of Economic Opportunity. Athens: Ohio University and Westinghouse Learning Corporation.Google Scholar
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.Google Scholar
Cordray, D. W. (1986). Quasi-experimental analysis: A mixture of methods and judgment. In Trochim, W. M. K. (Ed.), Advances in quasi-experimental design and analysis (pp. 9–27). San Francisco: Jossey-Bass.Google Scholar
Eells, E. (1991). Probabilistic causality. New York: Cambridge University Press.CrossRefGoogle Scholar
Faust, D. (1984). The limits of scientific reasoning. Minneapolis: University of Minnesota Press.Google Scholar
Folkman, J. (1996). Fighting cancer by attacking its blood supply. Scientific American, 275, 150–154.CrossRefGoogle ScholarPubMed
Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81, 945–970.CrossRefGoogle Scholar
Lewis, D. (1973). Causation. Journal of Philosophy, 70, 556–567.CrossRefGoogle Scholar
Locke, J. (1975). An Essay Concerning Human Understanding. Oxford, England: Clarendon Press.Google Scholar
Mackie, J. L. (1974). The cement of the universe: A study of causation. Oxford, England: Oxford University Press.Google Scholar
Magidson, J. (1977). Toward a causal model approach for adjusting for preexisting differences in the nonequivalent control group situation. Evaluation Quarterly, 1, 399–420.CrossRefGoogle Scholar
Magidson, J. (2000). On models used to adjust for preexisting differences. In Bickman, L. (Ed.), Research design: Donald Campbell's legacy (Vol. 2, pp. 181–194). Thousand Oaks, CA: Sage.Google Scholar
Mark, M. M. (1986). Validity typologies and the logic and practice of quasi-experimentaton. In Trochim, W. M. K. (Ed.), Advances in quasi-experimental design and analysis (pp. 47–66). San Francisco: Jossey-Bass.Google Scholar
Popper, K. R. (1959). The logic of scientific discovery. New York: Basic Books.Google Scholar
Poulton, E. C. (1982). Influential companions: Effects of one strategy on another in the within-subjects designs of cognitive psychology. Psychological Bulletin, 91, 673–690.CrossRefGoogle Scholar
Rindskopf, D. (2000). Plausible rival hypotheses in measurement, design, and scientific theory. In Bickman, L. (Ed.), Research design: Donald Campbell's legacy (Vol. 1, pp. 1–12). Thousand Oaks, CA: Sage.Google Scholar
Rowe, P. M. (1999). What is all the hullabaloo about endostatin?The Lancet, 353, 732.CrossRefGoogle ScholarPubMed
Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66, 688–701.CrossRefGoogle Scholar
Sackett, D. L. (1979). Bias in analytic research. Journal of Chronic Diseases, 32, 51–63.CrossRefGoogle ScholarPubMed
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.Google Scholar
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.CrossRefGoogle ScholarPubMed
