
Partial measurement equivalence of French and English versions of the Canadian Study of Health and Aging neuropsychological battery

Published online by Cambridge University Press:  01 May 2009

HOLLY A. TUOKKO*
Affiliation:
Department of Psychology, University of Victoria, Victoria, British Columbia, Canada Centre on Aging, University of Victoria, Victoria, British Columbia, Canada
PAK HEI BENIDITO CHOU
Affiliation:
Centre on Aging, University of Victoria, Victoria, British Columbia, Canada
STEPHEN C. BOWDEN
Affiliation:
Department of Psychology, School of Behavioural Science, University of Melbourne, Parkville, Victoria, Australia
MARTINE SIMARD
Affiliation:
École de Psychologie, Université Laval, Québec City, Québec, Canada
BERNADETTE SKA
Affiliation:
Faculté de Médecine—École d’orthophonie et audiologie, Université de Montréal, Montréal, Québec, Canada
MARGARET CROSSLEY
Affiliation:
Department of Psychology, University of Saskatchewan, Saskatoon, Saskatchewan, Canada

Abstract

Neuropsychological batteries are often translated for use across populations differing in preferred language. Yet, equivalence in construct measurement across groups cannot be assumed. To address this issue, we examined data from the Canadian Study of Health and Aging, a large study of older adults. We tested the hypothesis that the latent variables underlying the neuropsychological battery administered in French or English were the same (invariant). The best-fitting baseline model, established in the English-speaking Exploratory sample (n = 716), replicated well in the English-speaking Validation sample (n = 715) and the French-speaking sample (n = 446). Across the English- and French-speaking samples, two of the factors, Long-term Retrieval and Visuospatial Speed, displayed invariance, that is, reflected the same constructs measured on the same scales. In contrast, the Verbal Ability factor showed only partial invariance, reflecting differences in the relative difficulty of some tests of language functions. This empirical demonstration of partial measurement invariance lends support to the continued use of these translated measures in clinical and research contexts and illustrates a framework for detailed evaluation of the generality of models of cognition and psychopathology across groups of any sort. (JINS, 2009, 15, 416–425.)

Type: Research Articles
Copyright: © The International Neuropsychological Society 2009

