
9 - Using the Attribute Hierarchy Method to Make Diagnostic Inferences About Examinees' Cognitive Skills

Summary

INTRODUCTION

Many educational assessments are based on cognitive problem-solving tasks. Cognitive diagnostic assessments are designed to model examinees' cognitive performances on these tasks and yield specific information about their problem-solving strengths and weaknesses. Although most psychometric models are based on latent trait theories, a cognitive diagnostic assessment requires a cognitive information processing approach to model the psychology of test performance because the score inference is specifically targeted to examinees' cognitive skills. Latent trait theories posit that a small number of stable underlying characteristics or traits can be used to explain test performance. Individual differences on these traits account for variation in performance over a range of testing situations (Messick, 1989). Trait performance is often used to classify or rank examinees because these traits are specified at a large grain size and are deemed to be stable over time. Cognitive information processing theories require a much deeper understanding of trait performance, where the psychological features of how a trait can produce a performance become the focus of inquiry (cf. Anderson et al., 2004). With a cognitive approach, problem solving is assumed to require the processing of information using relevant sequences of operations. Examinees are expected to differ in the knowledge they possess and the processes they apply, thereby producing response variability in each test-taking situation. Because these knowledge structures and processing skills are specified at a small grain size and are expected to vary among examinees within any testing situation, cognitive theories and models can be used to understand and evaluate specific cognitive skills that affect test performance.
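The attribute hierarchy method (Leighton, Gierl, & Hunka, 2004) operationalizes this small-grain-size view by ordering the cognitive attributes required for a task into a prerequisite hierarchy, which in turn constrains the attribute-mastery patterns examinees are expected to display. The sketch below is purely illustrative and not taken from the chapter: it assumes a hypothetical four-attribute linear hierarchy, derives the reachability matrix from the adjacency matrix, and enumerates the mastery patterns consistent with that hierarchy.

```python
import itertools
import numpy as np

# Illustrative example only: four hypothetical attributes A1 -> A2 -> A3 -> A4,
# where an arrow means the earlier attribute is a prerequisite of the later one.
# adjacency[k, j] = 1 if attribute k is a direct prerequisite of attribute j.
adjacency = np.array([
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
])
n = adjacency.shape[0]

# Reachability matrix via Warshall's algorithm: reach[k, j] is True when
# attribute k is a direct or indirect prerequisite of attribute j (the
# diagonal is set so each attribute trivially "reaches" itself).
reach = adjacency.astype(bool) | np.eye(n, dtype=bool)
for k in range(n):
    reach |= np.outer(reach[:, k], reach[k, :])

def is_consistent(pattern):
    """A mastery pattern is consistent with the hierarchy when every mastered
    attribute has all of its prerequisites mastered as well."""
    for j in range(n):
        if pattern[j] and not all(pattern[p] for p in np.where(reach[:, j])[0]):
            return False
    return True

# Enumerate all 2**n mastery patterns and keep the hierarchy-consistent ones.
consistent = [p for p in itertools.product((0, 1), repeat=n) if is_consistent(p)]
print(consistent)
# For the linear hierarchy only the 5 nested patterns survive:
# (0,0,0,0), (1,0,0,0), (1,1,0,0), (1,1,1,0), (1,1,1,1)
```

Restricting the expected patterns in this way is what allows departures from the hierarchy in observed responses to be interpreted diagnostically, rather than simply as measurement error.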

References
American Educational Research Association (AERA), American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: AERA.
Anderson, J.R. (2005). Human symbol manipulation within an integrated cognitive architecture. Cognitive Science, 29, 313–341.
Anderson, J.R., Bothell, D., Byrne, M.D., Douglass, S., Lebiere, C., & Qin, Y. (2004). An integrated theory of the mind. Psychological Review, 111, 1036–1060.
Anderson, J.R., Reder, L.M., & Simon, H.A. (2000). Applications and misapplications of cognitive psychology to mathematics education. Retrieved June 7, 2006, from http://act-r.psy.cmu.edu/publications.
Anderson, J.R., & Shunn, C.D., (2000). Implications of the ACT-R learning theory: No magic bullets. In Glaser, R. (Ed.), Advances in instructional psychology: Educational design and cognitive science (Vol. 5, pp. 1–33). Mahwah, NJ: Erlbaum.
Bransford, J.D., Brown, A.L., & Cocking, R.R. (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.
Brown, J.S., & Burton, R.R. (1978). Diagnostic models for procedural bugs in basic mathematics skills. Cognitive Science, 2, 155–192.
Cui, Y., Leighton, J.P., Gierl, M.J., & Hunka, S. (2006, April). A person-fit statistic for the attribute hierarchy method: The hierarchy consistency index. Paper presented at the annual meeting of the National Council on Measurement in Education, San Francisco.
Dawson, M.R.W. (1998). Understanding cognitive science. Malden, MA: Blackwell.
Donovan, M.S., Bransford, J.D., & Pellegrino, J.W. (1999). How people learn: Bridging research and practice. Washington, DC: National Academy Press.
Embretson, S.E. (1999). Cognitive psychology applied to testing. In Durso, F.T., Nickerson, R.S., Schvaneveldt, R.W., Dumais, S.T., Lindsay, D.S., & Chi, M.T.H. (Eds.), Handbook of applied cognition (pp. 629–660). New York: Wiley.
Ericsson, K.A., & Simon, H.A. (1993). Protocol analysis: Verbal reports as data. Cambridge, MA: The MIT Press.
Fodor, J.A. (1983). The modularity of mind. Cambridge, MA: MIT Press.
Gierl, M.J., Leighton, J.P., & Hunka, S. (2000). Exploring the logic of Tatsuoka's rule-space model for test development and analysis. Educational Measurement: Issues and Practice, 19, 34–44.
Gierl, M.J., Bisanz, J., & Li, Y.Y. (2004, April). Using the multidimensionality-based DIF analysis framework to study cognitive skills that elicit gender differences. Paper presented at the annual meeting of the National Council on Measurement in Education, San Diego.
Gierl, M.J., Cui, Y., & Hunka, S. (2007, April). Using connectionist models to evaluate examinees' response patterns on tests using the Attribute Hierarchy Method. Paper presented at the annual meeting of the National Council on Measurement in Education, Chicago.
Glaser, R., Lesgold, A., & Lajoie, S. (1987). Toward a cognitive theory for the measurement of achievement. In Ronning, R.R., Glover, J.A., Conoley, J.C., & Witt, J.C. (Eds.), The influence of cognitive psychology on testing (pp. 41–85). Hillsdale, NJ: Erlbaum.
Goodman, D.P., & Hambleton, R.K. (2004). Student test score reports and interpretative guides: Review of current practices and suggestions for future research. Applied Measurement in Education, 17, 145–220.
Hunt, E. (1995). Where and when to represent students this way and that way: An evaluation of approaches to diagnostic assessment. In Nichols, P.D., Chipman, S.F., & Brennan, R.L. (Eds.), Cognitively diagnostic assessment (pp. 411–429). Hillsdale, NJ: Erlbaum.
Kuhn, D. (2001). Why development does (and does not) occur: Evidence from the domain of inductive reasoning. In McClelland, J.L. & Siegler, R. (Eds.), Mechanisms of cognitive development: Behavioral and neural perspectives (pp. 221–249). Hillsdale, NJ: Erlbaum.
Leighton, J.P. (2004). Avoiding misconceptions, misuse, and missed opportunities: The collection of verbal reports in educational achievement testing. Educational Measurement: Issues and Practice, 23, 6–15.
Leighton, J.P., & Gierl, M.J. (in press). Defining and evaluating models of cognition used in educational measurement to make inferences about examinees' thinking processes. Educational Measurement: Issues and Practice.
Leighton, J.P., Gierl, M.J., & Hunka, S. (2004). The attribute hierarchy model: An approach for integrating cognitive theory with assessment practice. Journal of Educational Measurement, 41, 205–236.
Leighton, J.P., & Gokiert, R. (2005, April). The cognitive effects of test item features: Identifying construct irrelevant variance and informing item generation. Paper presented at the annual meeting of the National Council on Measurement in Education, Montréal, Canada.
Messick, S. (1989). Validity. In Linn, R.L. (Ed.), Educational measurement (3rd ed.; pp. 13–103). New York: American Council on Education/Macmillan.
Mislevy, R.J. (1996). Test theory reconceived. Journal of Educational Measurement, 33, 379–416.
Mislevy, R.J., Steinberg, L.S., & Almond, R.G. (2003). On the structure of educational assessments. Measurement: Interdisciplinary Research and Perspectives, 1, 3–62.
National Research Council. (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press.
Nichols, P. (1994). A framework for developing cognitively diagnostic assessments. Review of Educational Research, 64, 575–603.
Nichols, P., & Sugrue, B. (1999). The lack of fidelity between cognitively complex constructs and conventional test development practice. Educational Measurement: Issues and Practice, 18, 18–29.
Norris, S.P., Leighton, J.P., & Phillips, L.M. (2004). What is at stake in knowing the content and capabilities of children's minds? A case for basing high stakes tests on cognitive models. Theory and Research in Education, 2, 283–308.
Pellegrino, J.W. (1988). Mental models and mental tests. In Wainer, H. & Braun, H.I. (Eds.), Test validity (pp. 49–60). Hillsdale, NJ: Erlbaum.
Pellegrino, J.W. (2002). Understanding how students learn and inferring what they know: Implications for the design of curriculum, instruction, and assessment. In Smith, M.J. (Ed.), NSF K-12 Mathematics and Science Curriculum and Implementation Centers Conference Proceedings (pp. 76–92). Washington, DC: National Science Foundation and American Geological Institute.
Pellegrino, J.W., Baxter, G.P., & Glaser, R. (1999). Addressing the “two disciplines” problem: Linking theories of cognition and learning with assessment and instructional practices. In Iran-Nejad, A. & Pearson, P.D. (Eds.), Review of Research in Education (pp. 307–353). Washington, DC: American Educational Research Association.
Poggio, A., Clayton, D.B., Glasnapp, D., Poggio, J., Haack, P., & Thomas, J. (2005, April). Revisiting the item format question: Can the multiple choice format meet the demand for monitoring higher-order skills? Paper presented at the annual meeting of the National Council on Measurement in Education, Montreal, Canada.
Royer, J.M., Cisero, C.A., & Carlo, M.S. (1993). Techniques and procedures for assessing cognitive skills. Review of Educational Research, 63, 201–243.
Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.
Snow, R.E., & Lohman, D.F. (1989). Implications of cognitive psychology for educational measurement. In Linn, R.L. (Ed.), Educational measurement (3rd ed., pp. 263–331). New York: American Council on Education/Macmillan.
Taylor, K.L., & Dionne, J-P. (2000). Accessing problem-solving strategy knowledge: The complementary use of concurrent verbal protocols and retrospective debriefing. Journal of Educational Psychology, 92, 413–425.
Tatsuoka, K.K. (1983). Rule space: An approach for dealing with misconceptions based on item response theory. Journal of Educational Measurement, 20, 345–354.
Tatsuoka, K.K. (1995). Architecture of knowledge structures and cognitive diagnosis: A statistical pattern recognition and classification approach. In Nichols, P.D., Chipman, S.F., & Brennan, R.L. (Eds.), Cognitively diagnostic assessment (pp. 327–359). Hillsdale, NJ: Erlbaum.
Tatsuoka, M.M., & Tatsuoka, K.K. (1989). Rule space. In Kotz, S. & Johnson, N.L. (Eds.), Encyclopedia of statistical sciences (pp. 217–220). New York: Wiley.
VanderVeen, A.A., Huff, K., Gierl, M., McNamara, D.S., Louwerse, M., & Graesser, A. (in press). Developing and validating instructionally relevant reading competency profiles measured by the critical reading section of the SAT. In McNamara, D.S. (Ed.), Reading comprehension strategies: Theories, interventions, and technologies. Mahwah, NJ: Erlbaum.
Webb, N.L. (2006). Identifying content for student achievement tests. In Downing, S.M. & Haladyna, T.M. (Eds.), Handbook of test development (pp. 155–180). Mahwah, NJ: Erlbaum.