
2 - Assessment Through the Lens of “Opportunity to Learn”

Published online by Cambridge University Press: 05 June 2012

Diana C. Pullin
Professor in the Lynch School of Education and an affiliate professor of law, Boston College
Edward H. Haertel
Jacks Family Professor of Education, Stanford University

Edited by
Pamela A. Moss, University of Michigan, Ann Arbor
Diana C. Pullin, Boston College, Massachusetts
James Paul Gee, University of Wisconsin, Madison
Edward H. Haertel, Stanford University, California
Lauren Jones Young, The Spencer Foundation, Chicago

Summary

Educational tests are sometimes viewed as no more than measuring instruments, neutral indicators of learning outcomes. For more than a century, though, tests and assessments have been used in the United States to influence curriculum, allocate educational resources and opportunities, and shape classroom instructional practices (Haertel and Herman 2005). This chapter argues that the idea of opportunity to learn (OTL) offers a useful lens through which to understand these many consequences of testing policies and practices, both positive and negative. Whenever assessment affects instructional content, resources, or processes, whether by design or otherwise, it affects OTL.

After framing the interplay of assessment with conceptions of OTL in terms of (1) content taught; (2) adequacy and allocation of educational resources; and (3) teaching practices, the chapter turns to five cases that illustrate some of these intersections. First considered is the intelligence-testing movement of the early twentieth century. This was a well-intentioned but unfortunate attempt to use testing to guide more efficient resource allocation. Second is Tyler's Eight-Year Study in the 1930s. This study reflected the designers' deep understanding that neither curriculum content nor instructional practices could be changed fundamentally unless consequential examinations were changed at the same time. The third case considered is the minimum competency testing (MCT) movement of the 1970s and 1980s, which prompted litigation leading to the legal requirement that students have a fair opportunity to learn what is covered on a high school graduation test.

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2008

References

American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. 1999. Standards for educational and psychological testing. Washington, D.C.: American Educational Research Association.
Baxter, G. P. and Glaser, R. 1998. Investigating the cognitive complexity of science assessments. Educational Measurement: Issues and Practice 17(3): 37–45.
Black, P. and Wiliam, D. 1998. Assessment and classroom learning. Assessment in Education 5: 7–74.
Bloom, B. 1971. Mastery learning. New York: Holt, Rinehart, and Winston.
Brown v. Board of Education of Topeka, Kansas, 347 U.S. 483 (1954).
Carnegie Forum on Education and the Economy. 1986. A nation prepared: Teachers for the 21st century (Report of the Task Force on Teaching as a Profession). Washington, D.C.: The Forum.
Carnoy, M., Elmore, R., and Siskin, L. 2003. The new accountability: High schools and high stakes testing. New York: RoutledgeFalmer.
Chapman, P. D. 1988. Schools as sorters: Lewis M. Terman, applied psychology, and the intelligence testing movement, 1890–1930. New York: New York University Press.
Cohen, D. K. and Hill, H. C. 2001. Learning policy: When state education reform works. New Haven: Yale University Press.
Cohen, D., Raudenbush, S., and Ball, D. 2003. Resources, instruction, and research. Educational Evaluation and Policy Analysis 25: 119–42.
Debra P. v. Turlington, 644 F. 2d 397 (5th Cir. 1981).
Elmore, R. 2002. Testing trap. Harvard Magazine 105: 35–37.
Elmore, R. 2004. School reform from the inside out: Policy, practice, and performance. Cambridge: Harvard Education Press.
Frederiksen, N. 1984. The real test bias: Influences of testing on teaching and learning. American Psychologist 39: 193–202.
Fuhrman, S., ed. 2001. From the capitol to the classroom: Standards-based reform in the states (Yearbook of the National Society for the Study of Education). Chicago: National Society for the Study of Education.
Glaser, R. 1963. Instructional technology and the measurement of learning outcomes: Some questions. American Psychologist 18: 519–21.
Glaser, R. 1994. Criterion-referenced tests: Part I. Origins. Educational Measurement: Issues and Practice 13(4): 9–11.
Guiton, G. and Oakes, J. 1995. Opportunity to learn and conceptions of educational equality. Educational Evaluation and Policy Analysis 17: 323–36.
Haertel, E. H. 1989. Student achievement tests as tools of educational policy: Practices and consequences. In Test policy and test performance: Education, language, and culture, edited by Gifford, B. R., 35–63. Boston: Kluwer Academic Publishers.
Haertel, E. H. 1999a. Performance assessment and educational reform. Phi Delta Kappan 80: 662–66.
Haertel, E. H. 1999b. Validity arguments for high-stakes testing: In search of the evidence. Educational Measurement: Issues and Practice 18(4): 5–9.
Haertel, E. H. and Calfee, R. C. 1983. School achievement: Thinking about what to test. Journal of Educational Measurement 20: 119–32.
Haertel, E. H. and Herman, J. L. 2005. A historical perspective on validity arguments for accountability testing. In Uses and misuses of data for educational accountability and improvement (Yearbook of the National Society for the Study of Education), issue 2, edited by Herman, J. L. and Haertel, E. H., 1–34. Malden, Mass.: Blackwell.
Hancock v. Driscoll, 443 Mass. 428, 822 N.E. 2d 1134 (2005).
Heubert, J. P. and Hauser, R. M., eds. 1999. High stakes: Testing for tracking, promotion, and graduation. Washington, D.C.: National Academy Press.
Kirp, D., Yudof, M., Levin, B., and Moran, R. 2001. Educational policy and the law (4th ed.). Stamford, Conn.: Wadsworth.
Kirst, M. W. and Mazzeo, C. 1996. The rise, fall, and rise of state assessment in California, 1993–96. Phi Delta Kappan 78: 319–23.
Koretz, D. 2005. Alignment, high stakes, and the inflation of test scores. In Uses and misuses of data for educational accountability and improvement (Yearbook of the National Society for the Study of Education), issue 2, edited by Herman, J. L. and Haertel, E. H., 99–118. Malden, Mass.: Blackwell.
Lange, P. C., ed. 1967. Programmed instruction in the schools: An application of programming principles in “individually prescribed” instruction (Yearbook of the National Society for the Study of Education), issue 2, edited by Richey, H. G., Coulson, M. M., and Lange, P. C. Chicago: University of Chicago Press.
Linn, R. L. 1993. Educational assessment: Expanded expectations and challenges. Educational Evaluation and Policy Analysis 15: 1–16.
Linn, R. L. 2000. Assessments and accountability. Educational Researcher 29(2): 4–16.
Madaus, G., ed. 1983. The courts, validity, and minimum competency testing. Boston: Kluwer-Nijhoff.
Madaus, G. F. 1988. The influence of testing on the curriculum. In Critical issues in curriculum (Yearbook of the National Society for the Study of Education), issue 1, edited by Rehage, K. J., Westbury, I., and Purves, A. C., 83–121. Chicago: University of Chicago Press.
Madaus, G. F., Stufflebeam, D., and Scriven, M. S. 1983. Program evaluation: An historical overview. In Evaluation models: Viewpoints on educational and human services evaluation, edited by Madaus, G. F., Scriven, M. S., and Stufflebeam, D., 3–22. Norwell, Mass.: Kluwer Academic Publishers.
McDonnell, L. M. 1995. Opportunity to learn as a research concept and a policy instrument. Educational Evaluation and Policy Analysis 17: 305–22.
McDonnell, L. M. 2004. Politics, persuasion, and educational testing. Cambridge: Harvard University Press.
McGuinn, P. 2006. No child left behind and the transformation of federal education policy, 1965–2005. Lawrence: The University Press of Kansas.
McNeil, L. M. 2000. Contradictions of school reform: The educational costs of standardized testing. New York: Routledge.
Melnick, S. and Pullin, D. 2000, September/October. Can you take dictation? Prescribing teacher quality through testing. Journal of Teacher Education 51: 262–75.
Messick, S. 1984. The psychology of educational measurement. Journal of Educational Measurement 21: 215–37.
Mislevy, R. J. 1996. Test theory reconceived. Journal of Educational Measurement 33: 379–416.
Moss, P. A., Pullin, D., Gee, J. P., and Haertel, E. H. 2005. The idea of testing: Psychometric and sociocultural perspectives. Measurement: Interdisciplinary Research and Perspectives 3: 63–83.
National Commission on Excellence in Education (NCEE). 1983. A nation at risk: The imperative for educational reform. Washington, D.C.: U.S. Government Printing Office.
National Council on Education Standards and Testing (NCEST). 1992. Raising standards for American education: A report to Congress, the Secretary of Education, the National Education Goals Panel, and the American people. Washington, D.C.: U.S. Government Printing Office.
National Research Council. 2001. Testing teacher candidates: The role of licensure tests in improving teacher quality, edited by Mitchell, K. et al. Washington, D.C.: National Academy Press.
Office of Technology Assessment. 1992. Testing in American schools: Asking the right questions (OTA-SET-519). Washington, D.C.: U.S. Government Printing Office.
Popham, W. J. 1994. The instructional consequences of criterion-referenced clarity. Educational Measurement: Issues and Practice 13(4): 15–18, 30.
Porter, A. 1995. The uses and misuses of opportunity-to-learn standards. Educational Researcher 24: 21–27.
Porter, A. 2002. Measuring the content of instruction: Uses in research and practice. Educational Researcher 30: 3–14.
Porter, A. and Smithson, J. 2001. Are content standards being implemented in the classroom? A methodology and some tentative answers. In From the capitol to the classroom: Standards-based reform in the states (Yearbook of the National Society for the Study of Education), issue 2, edited by Fuhrman, S. H., 60–80. Chicago: University of Chicago Press.
Pullin, D. 2002. Testing individuals with disabilities: Reconciling social science and social policy. In Assessing individuals with disabilities, edited by Ekstrom, R. and Smith, D., 11–32. Washington, D.C.: American Psychological Association.
Pullin, D. 2004, September/October. Accountability, autonomy, and academic freedom in educator preparation programs. Journal of Teacher Education 55: 300–12.
Pullin, D. 2007, Winter. Ensuring an adequate education: Opportunity to learn, law and social science. Boston College Third World Law Journal 27: 83–130.
Resnick, L. B. and Resnick, D. P. 1992. Assessing the thinking curriculum: New tools for educational reform. In Changing assessments: Alternative views of aptitude, achievement, and instruction, edited by Gifford, B. and O'Connor, M., 37–75. Norwell, Mass.: Kluwer.
Rose v. Council for Better Education, 790 S.W. 2d 186 (Kentucky Supreme Court, 1989).
Sax, G. and Collet, L. S. 1968. An empirical comparison of the effects of recall and multiple-choice tests on student achievement. Journal of Educational Measurement 5: 169–73.
Shavelson, R. J., Baxter, G. P., and Pine, J. 1992. Performance assessment: Political rhetoric and measurement reality. Educational Researcher 21(4): 22–27.
Shepard, L. A. 2000. The role of assessment in a learning culture. Educational Researcher 29(7): 4–14.
Smith, M. and O'Day, J. 1993. Systemic school reform. In Designing coherent education policy, edited by Fuhrman, S. San Francisco: Jossey-Bass.
Smith, E. R., Tyler, R. W., and the Evaluation Staff. 1942. Appraising and recording student progress, vol. III, The Adventure in American Education Series. New York: Harper and Bros.
Spillane, J. 2004. Standards deviation: How schools misunderstand educational policy. Cambridge: Harvard University Press.
Terman, L. M. 1919. The intelligence of school children. Boston: Houghton Mifflin.
Tyler, R. W. 1949. Basic principles of curriculum and instruction. Chicago: University of Chicago Press.
Vinovskis, M. A. 1999. History and educational policymaking. New Haven: Yale University Press.
Wang, J. 1998. Opportunity to learn: The impacts and policy implications. Educational Evaluation and Policy Analysis 20: 137–56.
Wiggins, G. 1992. Creating tests worth taking. Educational Leadership 49(8): 26–33.
Wilson, S. 2003. California dreaming: Reforming mathematics education. New Haven: Yale University Press.
