
5 - Construct Validity and Cognitive Diagnostic Assessment

Published online by Cambridge University Press: 23 November 2009

Xiangdong Yang, Research Associate, Center for Educational Testing and Evaluation, University of Kansas
Susan E. Embretson, Professor of Psychology, Georgia Institute of Technology
Jacqueline Leighton, University of Alberta
Mark Gierl, University of Alberta

Summary

INTRODUCTION

Cognitive diagnostic assessment (CDA) has become a major focus in psychological and educational measurement. Rather than inferring a general response tendency or behavioral consistency of an examinee over a target domain of measurement, diagnostic assessment provides a detailed account of the underlying cognitive basis of the examinee's performance by mining the richer information afforded by specific response patterns. Sophisticated measurement procedures, such as the rule-space methodology (Tatsuoka, 1995), the attribute hierarchy method (Leighton, Gierl, & Hunka, 2004), the tree-based regression approach (Sheehan, 1997a, 1997b), and knowledge space theory (Doignon & Falmagne, 1999), as well as specially parameterized psychometric models (de la Torre & Douglas, 2004; DiBello, Stout, & Roussos, 1995; Draney, Pirolli, & Wilson, 1995; Hartz, 2002; Junker & Sijtsma, 2001; Maris, 1999), have been developed for inferring diagnostic information.

Although measurement models for diagnostic testing have become increasingly available, cognitive diagnosis must be evaluated by the same measurement criteria (e.g., construct validity) as traditional trait measures. Because the goal is to infer detailed information about an individual's skill profile, we are concerned not only with how many items an examinee has solved correctly, but also with the pattern of responses to items that differ in the knowledge, skills, or cognitive processes required for solution. As with traditional tests, empirical evidence and theoretical rationales that elaborate the underlying basis of item responses are required to support the inferences and interpretations drawn from diagnostic assessments.
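
To make this pattern-based inference concrete, the following is a minimal sketch of a conjunctive diagnostic classification model in the spirit of the DINA ("deterministic inputs, noisy AND gate") model discussed by Junker and Sijtsma (2001). The Q-matrix, the slip and guess parameters, and the response pattern below are hypothetical values chosen purely for illustration; an operational analysis would estimate the item parameters from response data.

import numpy as np
from itertools import product

# Hypothetical Q-matrix: rows are items, columns are attributes;
# Q[j, k] = 1 means item j requires attribute k for solution.
Q = np.array([
    [1, 0, 0],  # item 1 requires attribute 1 only
    [1, 1, 0],  # item 2 requires attributes 1 and 2
    [0, 1, 1],  # item 3 requires attributes 2 and 3
])

# Hypothetical item parameters.
slip = np.array([0.10, 0.15, 0.10])   # P(incorrect | all required attributes mastered)
guess = np.array([0.20, 0.10, 0.25])  # P(correct | some required attribute not mastered)

def p_correct(alpha):
    """Per-item probability of a correct response for mastery pattern alpha."""
    alpha = np.asarray(alpha)
    # Conjunctive ("noisy AND") rule: eta[j] = 1 iff the examinee has
    # mastered every attribute that item j requires.
    eta = np.all(Q <= alpha, axis=1)
    return np.where(eta, 1.0 - slip, guess)

def pattern_likelihood(x, alpha):
    """Likelihood of the observed 0/1 response vector x given pattern alpha."""
    p = p_correct(alpha)
    return float(np.prod(np.where(np.asarray(x) == 1, p, 1.0 - p)))

# Classify one observed response pattern by maximum likelihood over
# all 2^K candidate attribute profiles (K = 3 here).
x = [1, 1, 0]
profiles = list(product((0, 1), repeat=Q.shape[1]))
best = max(profiles, key=lambda alpha: pattern_likelihood(x, alpha))
print(best, round(pattern_likelihood(x, best), 4))

Run on the pattern (1, 1, 0), the sketch classifies the examinee as having mastered the first two attributes but not the third. It is precisely this kind of skill-profile inference, rather than a single total score, that must be supported by construct validity evidence.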

Type: Chapter
Information: Cognitive Diagnostic Assessment for Education: Theory and Applications, pp. 119–145
Publisher: Cambridge University Press
Print publication year: 2007


References

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: AERA.
The American Heritage Dictionary of the English Language (4th ed.). (2000). Boston: Houghton Mifflin.
Anderson, J. R. (1990). Analysis of student performance with the LISP tutor. In Frederiksen, N., Glaser, R., Lesgold, A., & Shafto, M. G. (Eds.), Diagnostic monitoring of skill and knowledge acquisition (pp. 27–50). Hillsdale, NJ: Erlbaum.
Britton, B. K., & Tidwell, P. (1995). Cognitive structure testing: A computer system for diagnosis of expert-novice differences. In Nichols, P., Chipman, S. F., & Brennan, R. L. (Eds.), Cognitively diagnostic assessment (pp. 251–278). Hillsdale, NJ: Erlbaum.
Brown, J. S., & Burton, R. R. (1978). Diagnostic models for procedural bugs in basic mathematical skills. Cognitive Science, 2, 155–192.
Carpenter, P. A., Just, M. A., & Shell, P. (1990). What one intelligence test measures: A theoretical account of the processing in the Raven Progressive Matrices Test. Psychological Review, 97, 404–431.
Chi, M. T. H., Glaser, R., & Farr, M. (1988). The nature of expertise. Hillsdale, NJ: Erlbaum.
Collins, A. M., & Loftus, E. F. (1975). A spreading-activation theory of semantic processing. Psychological Review, 82, 407–428.
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302.
Das, J. P., Naglieri, J. A., & Kirby, J. R. (1994). Assessment of cognitive processes: The PASS theory of intelligence. Needham Heights, MA: Allyn & Bacon.
Dayton, C. M., & Macready, G. B. (1976). A probabilistic model for validation of behavioral hierarchies. Psychometrika, 41, 189–204.
de la Torre, J., & Douglas, J. A. (2004). Higher-order latent trait models for cognitive diagnosis. Psychometrika, 69, 333–353.
DiBello, L., Stout, W., & Roussos, L. (1995). Unified cognitive/psychometric diagnostic assessment likelihood-based classification techniques. In Nichols, P., Chipman, S. F., & Brennan, R. L. (Eds.), Cognitively diagnostic assessment (pp. 361–389). Hillsdale, NJ: Erlbaum.
Doignon, J. P., & Falmagne, J. C. (1999). Knowledge spaces. Berlin: Springer-Verlag.
Draney, K. L., Pirolli, P., & Wilson, M. (1995). A measurement model for complex cognitive skill. In Nichols, P., Chipman, S. F., & Brennan, R. L. (Eds.), Cognitively diagnostic assessment (pp. 103–126). Hillsdale, NJ: Erlbaum.
Embretson, S. E. (1983). Construct validity: Construct representation versus nomothetic span. Psychological Bulletin, 93, 179–197.
Embretson, S. E. (Ed.). (1985). Test design: Developments in psychology and psychometrics. Academic Press.
Embretson, S. E. (1994). Application of cognitive design systems to test development. In Reynolds, C. R. (Ed.), Cognitive assessment: A multidisciplinary perspective (pp. 107–135). New York: Plenum Press.
Embretson, S. E. (1995). The role of working memory capacity and general control processes in intelligence. Intelligence, 20, 169–190.
Embretson, S. E. (1998). A cognitive design system approach to generating valid tests: Application to abstract reasoning. Psychological Methods, 3, 300–326.
Embretson, S. E., & Waxman, M. (1989). Models for processing and individual differences in spatial folding. Unpublished manuscript.
Ericsson, K. A., & Charness, N. (1994). Expert performance: Its structure and acquisition. American Psychologist, 49, 725–747.
Guttman, L. (1971). Measurement as structural theory. Psychometrika, 36, 329–347.
Hartz, S. (2002). A Bayesian framework for the unified model for assessing cognitive abilities: Blending theory with practicality. Unpublished doctoral dissertation, University of Illinois at Urbana-Champaign.
Irvine, S. H., & Kyllonen, P. C. (Eds.). (2002). Item generation for test development. Mahwah, NJ: Erlbaum.
Johnson, P. J., Goldsmith, T. E., & Teague, K. W. (1995). Similarity, structure, and knowledge: A representational approach to assessment. In Nichols, P., Chipman, S. F., & Brennan, R. L. (Eds.), Cognitively diagnostic assessment (pp. 221–250). Hillsdale, NJ: Erlbaum.
Junker, B., & Sijtsma, K. (2001). Cognitive assessment models with few assumptions, and connections with nonparametric item response theory. Applied Psychological Measurement, 25, 258–272.
Kane, M. T. (1982). A sampling model for validity. Applied Psychological Measurement, 6, 125–160.
Leighton, J. P., Gierl, M. J., & Hunka, S. (2004). The attribute hierarchy method for cognitive assessment: A variation on Tatsuoka's rule-space approach. Journal of Educational Measurement, 41, 205–236.
Lohman, D. F., & Ippel, M. J. (1993). Cognitive diagnosis: From statistically based assessment toward theory-based assessment. In Frederiksen, N., Mislevy, R. J., & Bejar, I. (Eds.), Test theory for a new generation of tests (pp. 41–71). Hillsdale, NJ: Erlbaum.
Maris, E. (1999). Estimating multiple classification latent class models. Psychometrika, 64, 187–212.
Marshall, S. P. (1990). Generating good items for diagnostic tests. In Frederiksen, N., Glaser, R., Lesgold, A., & Shafto, M. G. (Eds.), Diagnostic monitoring of skill and knowledge acquisition (pp. 433–452). Hillsdale, NJ: Erlbaum.
Messick, S. (1989). Validity. In Linn, R. L. (Ed.), Educational measurement (pp. 13–103). New York: Macmillan.
Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons' responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741–749.
Mislevy, R. J. (1993). Foundations of a new test theory. In Frederiksen, N., Mislevy, R. J., & Bejar, I. I. (Eds.), Test theory for a new generation of tests (pp. 19–39). Hillsdale, NJ: Erlbaum.
Mislevy, R. J. (1996). Test theory reconceived. Journal of Educational Measurement, 33, 379–416.
Naveh-Benjamin, M., Lin, Y., & McKeachie, W. J. (1995). Inferring students' cognitive structures and their development using the “fill-in-the-structure” (FITS) technique. In Nichols, P., Chipman, S. F., & Brennan, R. L. (Eds.), Cognitively diagnostic assessment (pp. 279–304). Hillsdale, NJ: Erlbaum.
Proctor, C. H. (1970). A probabilistic formulation and statistical analysis for Guttman scaling. Psychometrika, 35, 73–78.
Raven, J. C. (1965). Advanced progressive matrices, sets I and II. London: H. K. Lewis. (Distributed in the United States by The Psychological Corporation, San Antonio, TX.)
Rumelhart, D. E. (1980). Schemata: The building blocks of cognition. In Spiro, R. J., Bruce, B. C., & Brewer, W. F. (Eds.), Theoretical issues in reading comprehension (pp. 33–57). Hillsdale, NJ: Erlbaum.
Samejima, F. (1995). A cognitive diagnosis method using latent trait models: Competency space approach and its relationship with DiBello and Stout's unified cognitive-psychometric diagnosis model. In Nichols, P., Chipman, S. F., & Brennan, R. L. (Eds.), Cognitively diagnostic assessment (pp. 391–410). Hillsdale, NJ: Erlbaum.
Sheehan, K. M. (1997a). A tree-based approach to proficiency scaling (ETS Research Report No. RR-97-2). Princeton, NJ: Educational Testing Service.
Sheehan, K. M. (1997b). A tree-based approach to proficiency scaling and diagnostic assessment (ETS Research Report No. RR-97-9). Princeton, NJ: Educational Testing Service.
Shute, V. J., & Psotka, J. (1996). Intelligent tutoring systems: Past, present, and future. In Jonassen, D. H. (Ed.), Handbook of educational communications and technology (pp. 570–600). New York: Macmillan.
Snow, R. E., & Lohman, D. F. (1993). Cognitive psychology, new test design, and new test theory: An introduction. In Frederiksen, N., Mislevy, R. J., & Bejar, I. I. (Eds.), Test theory for a new generation of tests (pp. 1–17). Hillsdale, NJ: Erlbaum.
Tatsuoka, K. K. (1995). Architecture of knowledge structures and cognitive diagnosis: A statistical pattern recognition and classification approach. In Nichols, P., Chipman, S. F., & Brennan, R. L. (Eds.), Cognitively diagnostic assessment (pp. 327–359). Hillsdale, NJ: Erlbaum.
Templin, J. (2004). Generalized linear mixed proficiency models for cognitive diagnosis. Unpublished doctoral dissertation, University of Illinois at Urbana-Champaign.
White, B., & Frederiksen, J. (1987). Qualitative models and intelligent learning environments. In Lawler, R., & Yazdani, M. (Eds.), AI and education (pp. 281–305). Norwood, NJ: Ablex.
Yang, X. (2003). Inferring diagnostic information from abstract reasoning test items. Unpublished doctoral dissertation, University of Kansas, Lawrence.
