American Educational Research Association (AERA), American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: AERA.
Anderson, J.R. (2005). Human symbol manipulation within an integrated cognitive architecture. Cognitive Science, 29, 313–341.
Anderson, J.R., Bothell, D., Byrne, M.D., Douglass, S., Lebiere, C., & Qin, Y. (2004). An integrated theory of the mind. Psychological Review, 111, 1036–1060.
Anderson, J.R., Reder, L.M., & Simon, H.A. (2000). Applications and misapplications of cognitive psychology to mathematics education. Retrieved June 7, 2006, from http://act-r.psy.cmu.edu/publications.
Anderson, J.R., & Schunn, C.D. (2000). Implications of the ACT-R learning theory: No magic bullets. In R. Glaser (Ed.), Advances in instructional psychology: Educational design and cognitive science (Vol. 5, pp. 1–33). Mahwah, NJ: Erlbaum.
Bransford, J.D., Brown, A.L., & Cocking, R.R. (Eds.). (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.
Brown, J.S., & Burton, R.R. (1978). Diagnostic models for procedural bugs in basic mathematics skills. Cognitive Science, 2, 155–192.
Cui, Y., Leighton, J.P., Gierl, M.J., & Hunka, S.M. (2006, April). A person-fit statistic for the attribute hierarchy method: The hierarchy consistency index. Paper presented at the annual meeting of the National Council on Measurement in Education, San Francisco.
Dawson, M.R.W. (1998). Understanding cognitive science. Malden, MA: Blackwell.
Donovan, M.S., Bransford, J.D., & Pellegrino, J.W. (Eds.). (1999). How people learn: Bridging research and practice. Washington, DC: National Academy Press.
Embretson, S.E. (1999). Cognitive psychology applied to testing. In F.T. Durso, R.S. Nickerson, R.W. Schvaneveldt, S.T. Dumais, D.S. Lindsay, & M.T.H. Chi (Eds.), Handbook of applied cognition (pp. 629–660). New York: Wiley.
Ericsson, K.A., & Simon, H.A. (1993). Protocol analysis: Verbal reports as data. Cambridge, MA: The MIT Press.
Fodor, J.A. (1983). The modularity of mind. Cambridge, MA: MIT Press.
Gierl, M.J., Leighton, J.P., & Hunka, S.M. (2000). Exploring the logic of Tatsuoka's rule-space model for test development and analysis. Educational Measurement: Issues and Practice, 19, 34–44.
(2004, April). Using the multidimensionality-based DIF analysis framework to study cognitive skills that elicit gender differences. Paper presented at the annual meeting of the National Council on Measurement in Education, San Diego.
Gierl, M.J., Cui, Y., & Hunka, S.M. (2007, April). Using connectionist models to evaluate examinees' response patterns on tests using the attribute hierarchy method. Paper presented at the annual meeting of the National Council on Measurement in Education, Chicago.
Glaser, R., Lesgold, A., & Lajoie, S. (1987). Toward a cognitive theory for the measurement of achievement. In R.R. Ronning, J.A. Glover, J.C. Conoley, & J.C. Witt (Eds.), The influence of cognitive psychology on testing (pp. 41–85). Hillsdale, NJ: Erlbaum.
Goodman, D.P., & Hambleton, R.K. (2004). Student test score reports and interpretive guides: Review of current practices and suggestions for future research. Applied Measurement in Education, 17, 145–220.
Hunt, E. (1995). Where and when to represent students this way and that way: An evaluation of approaches to diagnostic assessment. In P.D. Nichols, S.F. Chipman, & R.L. Brennan (Eds.), Cognitively diagnostic assessment (pp. 411–429). Hillsdale, NJ: Erlbaum.
Kuhn, D. (2001). Why development does (and does not) occur: Evidence from the domain of inductive reasoning. In J.L. McClelland & R.S. Siegler (Eds.), Mechanisms of cognitive development: Behavioral and neural perspectives (pp. 221–249). Hillsdale, NJ: Erlbaum.
Leighton, J.P. (2004). Avoiding misconceptions, misuse, and missed opportunities: The collection of verbal reports in educational achievement testing. Educational Measurement: Issues and Practice, 23, 6–15.
Leighton, J.P., & Gierl, M.J. (in press). Defining and evaluating models of cognition used in educational measurement to make inferences about examinees' thinking processes. Educational Measurement: Issues and Practice.
Leighton, J.P., Gierl, M.J., & Hunka, S.M. (2004). The attribute hierarchy model: An approach for integrating cognitive theory with assessment practice. Journal of Educational Measurement, 41, 205–236.
Leighton, J.P., & Gokiert, R.J. (2005, April). The cognitive effects of test item features: Identifying construct irrelevant variance and informing item generation. Paper presented at the annual meeting of the National Council on Measurement in Education, Montréal, Canada.
Messick, S. (1989). Validity. In R.L. Linn (Ed.), Educational measurement (3rd ed., pp. 13–103). New York: American Council on Education/Macmillan.
Mislevy, R.J. (1996). Test theory reconceived. Journal of Educational Measurement, 33, 379–416.
Mislevy, R.J., Steinberg, L.S., & Almond, R.G. (2003). On the structure of educational assessments. Measurement: Interdisciplinary Research and Perspectives, 1, 3–62.
National Research Council. (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press.
Nichols, P.D. (1994). A framework for developing cognitively diagnostic assessments. Review of Educational Research, 64, 575–603.
Nichols, P., & Sugrue, B. (1999). The lack of fidelity between cognitively complex constructs and conventional test development practice. Educational Measurement: Issues and Practice, 18, 18–29.
Norris, S.P., Leighton, J.P., & Phillips, L.M. (2004). What is at stake in knowing the content and capabilities of children's minds? A case for basing high stakes tests on cognitive models. Theory and Research in Education, 2, 283–308.
Pellegrino, J.W. (1988). Mental models and mental tests. In H. Wainer & H.I. Braun (Eds.), Test validity (pp. 49–60). Hillsdale, NJ: Erlbaum.
Pellegrino, J.W. (2002). Understanding how students learn and inferring what they know: Implications for the design of curriculum, instruction, and assessment. In (Ed.), NSF K-12 Mathematics and Science Curriculum and Implementation Centers Conference Proceedings (pp. 76–92). Washington, DC: National Science Foundation and American Geological Institute.
Pellegrino, J.W., Baxter, G.P., & Glaser, R. (1999). Addressing the “two disciplines” problem: Linking theories of cognition and learning with assessment and instructional practices. In A. Iran-Nejad & P.D. Pearson (Eds.), Review of Research in Education (pp. 307–353). Washington, DC: American Educational Research Association.
(2005, April). Revisiting the item format question: Can the multiple choice format meet the demand for monitoring higher-order skills? Paper presented at the annual meeting of the National Council on Measurement in Education, Montréal, Canada.
Royer, J.M., Cisero, C.A., & Carlo, M.S. (1993). Techniques and procedures for assessing cognitive skills. Review of Educational Research, 63, 201–243.
Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.
Snow, R.E., & Lohman, D.F. (1989). Implications of cognitive psychology for educational measurement. In R.L. Linn (Ed.), Educational measurement (3rd ed., pp. 263–331). New York: American Council on Education/Macmillan.
Taylor, K.L., & Dionne, J.-P. (2000). Accessing problem-solving strategy knowledge: The complementary use of concurrent verbal protocols and retrospective debriefing. Journal of Educational Psychology, 92, 413–425.
Tatsuoka, K.K. (1983). Rule space: An approach for dealing with misconceptions based on item response theory. Journal of Educational Measurement, 20, 345–354.
Tatsuoka, K.K. (1995). Architecture of knowledge structures and cognitive diagnosis: A statistical pattern recognition and classification approach. In P.D. Nichols, S.F. Chipman, & R.L. Brennan (Eds.), Cognitively diagnostic assessment (pp. 327–359). Hillsdale, NJ: Erlbaum.
Tatsuoka, M.M., & Tatsuoka, K.K. (1989). Rule space. In S. Kotz & N.L. Johnson (Eds.), Encyclopedia of statistical sciences (pp. 217–220). New York: Wiley.
VanderVeen, A.A., Huff, K., Gierl, M., McNamara, D.S., Louwerse, M., & Graesser, A. (in press). Developing and validating instructionally relevant reading competency profiles measured by the critical reading section of the SAT. In D.S. McNamara (Ed.), Reading comprehension strategies: Theories, interventions, and technologies. Mahwah, NJ: Erlbaum.
Webb, N.L. (2006). Identifying content for student achievement tests. In S.M. Downing & T.M. Haladyna (Eds.), Handbook of test development (pp. 155–180). Mahwah, NJ: Erlbaum.