This study compared level of education with tests from multiple cognitive domains as proxies for cognitive reserve.
Method:
The participants were educationally, ethnically, and cognitively diverse older adults enrolled in a longitudinal aging study. We examined independent and interactive effects of education, baseline cognitive scores, and MRI measures of cortical gray matter change on longitudinal cognitive change.
Results:
Baseline episodic memory was related to cognitive decline independent of brain and demographic variables and moderated (weakened) the impact of gray matter change. Education moderated (strengthened) the gray matter change effect. Non-memory cognitive measures did not incrementally explain cognitive decline or moderate gray matter change effects.
Conclusions:
Episodic memory showed strong construct validity as a measure of cognitive reserve. Education effects on cognitive decline were dependent upon the rate of atrophy, indicating education effectively measures cognitive reserve only when atrophy rate is low. Results indicate that episodic memory has clinical utility as a predictor of future cognitive decline and better represents the neural basis of cognitive reserve than other cognitive abilities or static proxies like education.
This study investigated the latent factor structure of the NIH Toolbox Cognition Battery (NIHTB-CB) and its measurement invariance across clinical diagnosis and key demographic variables including sex, race/ethnicity, age, and education for a typical Alzheimer’s disease (AD) research sample.
Method:
The NIHTB-CB iPad English version, consisting of 7 tests, was administered to 411 participants aged 45–94 with clinical diagnosis of cognitively unimpaired, dementia, mild cognitive impairment (MCI), or impaired not MCI. The factor structure of the whole sample was first examined with exploratory factor analysis (EFA) and further refined using confirmatory factor analysis (CFA). Two groups were classified for each variable (diagnosis or demographic factors). The confirmed factor model was next tested for each group with CFA. If the factor structure was the same between the groups, measurement invariance was then tested using a hierarchical series of nested two-group CFA models.
Results:
A two-factor model capturing fluid cognition (executive function, processing speed, and memory) versus crystallized cognition (language) fit well for the whole sample and each group except for those with age < 65. This model generally had measurement invariance across sex, race/ethnicity, and education, and partial invariance across diagnosis. For individuals with age < 65, the language factor remained intact while fluid cognition separated into two factors: (1) executive function/processing speed and (2) memory.
Conclusions:
The findings mostly supported the utility of the battery in AD research, yet revealed challenges in measuring memory for AD participants and longitudinal change in fluid cognition.
Alzheimer’s disease (AD) studies are increasingly targeting earlier (pre)clinical populations, in which the expected degree of observable cognitive decline over a certain time interval is reduced as compared to the dementia stage. Consequently, endpoints to capture early cognitive changes require refinement. We aimed to determine the sensitivity to decline of widely applied neuropsychological tests at different clinical stages of AD as outlined in the National Institute on Aging – Alzheimer’s Association (NIA-AA) research framework.
Method:
Amyloid-positive individuals (as determined by positron emission tomography or cerebrospinal fluid) with longitudinal neuropsychological assessments available were included from four well-defined study cohorts and subsequently classified among the NIA-AA stages. For each stage, we investigated the sensitivity to decline of 17 individual neuropsychological tests using linear mixed models.
Results:
1103 participants (age = 70.54 ± 8.7, 47% female) were included: n = 120 Stage 1, n = 206 Stage 2, n = 467 Stage 3 and n = 309 Stage 4. Neuropsychological tests were differentially sensitive to decline across stages. For example, Category Fluency captured significant 1-year decline as early as Stage 1 (β = −.58, p < .001). Word List Delayed Recall (β = −.22, p < .05) and Trail Making Test (β = 6.2, p < .05) became sensitive to 1-year decline in Stage 2, whereas the Mini-Mental State Examination did not capture 1-year decline until Stage 3 (β = −1.13, p < .001) and 4 (β = −2.23, p < .001).
Conclusions:
We demonstrated that commonly used neuropsychological tests differ in their ability to capture decline depending on clinical stage within the AD continuum (preclinical to dementia). This implies that stage-specific cognitive endpoints are needed to accurately assess disease progression and increase the chance of successful treatment evaluation in AD.
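The stage-wise sensitivity analysis above fit linear mixed models to each test. As a minimal, illustrative sketch (all data here are simulated, and the mixed-model fit is simplified to a pooled ordinary-least-squares slope, ignoring subject-level random effects), estimating an annual-decline slope for two hypothetical stages might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_cohort(n_subjects, annual_decline, n_visits=4, noise_sd=1.0):
    """Simulate yearly longitudinal test scores for one hypothetical stage.

    Each subject has a random baseline score and declines by
    `annual_decline` points per year, plus visit-level noise.
    Returns flat (times, scores) arrays, one entry per visit.
    """
    times, scores = [], []
    for _ in range(n_subjects):
        intercept = rng.normal(25.0, 2.0)      # subject's baseline score
        for t in range(n_visits):              # yearly assessments
            times.append(float(t))
            scores.append(intercept + annual_decline * t
                          + rng.normal(0.0, noise_sd))
    return np.array(times), np.array(scores)

def decline_slope(times, scores):
    """Estimate the annual-change slope by ordinary least squares."""
    X = np.column_stack([np.ones_like(times), times])
    beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return beta[1]

# Hypothetical stages with different true rates of decline
t1, s1 = simulate_cohort(120, annual_decline=-0.1)   # earlier stage
t3, s3 = simulate_cohort(467, annual_decline=-1.1)   # later stage
slope_early = decline_slope(t1, s1)
slope_late = decline_slope(t3, s3)
```

A test is "sensitive to decline" at a given stage when the estimated slope is reliably negative there; in this toy setup, the later-stage cohort shows the steeper estimated slope.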
The utility of informant-based measures of cognitive decline for accurately describing objective cognitive performance in Parkinson’s disease (PD) without dementia is uncertain. Given the clinical relevance of this information, the purpose of this study was to examine the relationship between informant-based reports of patient cognitive decline on the Informant Questionnaire of Cognitive Decline in the Elderly (IQCODE) and objective cognition in non-demented PD, controlling for cognitive status (i.e., PD with mild cognitive impairment [PD-MCI] vs. PD with normal cognition [PD-NC]).
Method:
One hundred thirty-nine non-demented participants with PD (PD-MCI n = 38; PD-NC n = 101) were administered measures of language, executive function, attention, learning, delayed recall, visuospatial function, mood, and motor function. Each participant identified an informant to complete the IQCODE and a mood questionnaire.
Results:
Greater levels of informant-based responses of patient cognitive decline on the IQCODE were significantly associated with worse objective performance on measures of global cognition, attention, learning, delayed recall, and executive function in the overall sample, above and beyond covariates and cognitive status. However, the IQCODE was not significantly associated with language or visuospatial function.
Conclusions:
Results indicate that informant responses, as measured by the IQCODE, may provide adequate information on a wide range of cognitive abilities in non-demented PD, including those with MCI and normal cognition. Findings have important clinical implications for the utility of the IQCODE in the identification of PD patients in need of further evaluation, monitoring, and treatment.
To investigate the impact of cognitive impairment on spoken language produced by speakers with multiple sclerosis (MS) with and without dysarthria.
Method:
Sixty speakers comprised operationally defined groups. Speakers produced a spontaneous speech sample to obtain speech timing measures of speech rate, articulation rate, and silent pause frequency and duration. Twenty listeners judged the overall perceptual severity of the samples using a visual analog scale that ranged from no impairment to severe impairment (speech severity). A 2 × 2 factorial design examined main and interaction effects of dysarthria and cognitive impairment on speech timing measures and speech severity in individuals with MS. Each speaker group with MS was further compared to a healthy control group. Exploratory regression analyses examined relationships between cognitive and biopsychosocial variables and speech timing measures and perceptual judgments of speech severity, for speakers with MS.
Results:
Speech timing was significantly slower for speakers with dysarthria compared to speakers with MS without dysarthria. Silent pause durations also significantly differed for speakers with both dysarthria and cognitive impairment compared to MS speakers without either impairment. Significant interactions between dysarthria and cognitive factors revealed comorbid dysarthria and cognitive impairment contributed to slowed speech rates in MS, whereas dysarthria alone impacted perceptual judgments of speech severity. Speech severity was strongly related to pause duration.
Conclusions:
The findings suggest that the manner in which dysarthria and cognitive symptoms manifest in objective acoustic measures of speech timing, and in perceptual judgments of speech severity, is complex.
The Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) is commonly used to assist with post-concussion return-to-play decisions for athletes. Additional investigation is needed to determine whether embedded indicators used to determine the validity of scores are influenced by the presence of neurodevelopmental disorders (NDs).
Method:
This study examined standard and novel ImPACT validity indicators in a large sample of high school athletes (n = 33,772) with or without self-reported ND.
Results:
Overall, 7.1% of athletes’ baselines were judged invalid based on standard ImPACT validity criteria. When analyzed by group (healthy, ND), there were significantly more invalid ImPACT baselines for athletes with an ND diagnosis or special education history (between 9.7% and 54.3% for standard and novel embedded validity criteria) when compared to athletes without NDs. ND history was a significant predictor of invalid baseline performance above and beyond other demographic characteristics (i.e., age, sex, and sport), although it accounted for only a small percentage of variance. Multivariate base rates are presented stratified for age, sex, and ND.
Conclusions:
These data provide evidence of higher than normal rates of invalid baselines in athletes who report ND (based on both the standard and novel embedded validity indicators). Although ND accounted for a small percentage of variance in the prediction of invalid performance, negative consequences (e.g., extended time out of sports) of incorrect decision-making should be considered for those with neurodevelopmental conditions. Also, reasons for the overall increase noted here, such as decreased motivation, “sandbagging”, or disability-related cognitive deficit, require additional investigation.
Multiple studies have found evidence of task non-specific slow drift rate in ADHD, and slow drift rate has rapidly become one of the most visible cognitive hallmarks of the disorder. In this study, we use the diffusion model to determine whether atypicalities in visuospatial cognitive processing exist independently of slow drift rate.
Method:
Eight- to twelve-year-old children with (n = 207) and without ADHD (n = 99) completed a 144-trial mental rotation task.
Results:
Performance of children with ADHD was less accurate and more variable than non-ADHD controls, but there were no group differences in mean response time. Drift rate was slower, but nondecision time was faster for children with ADHD. A Rotation × ADHD interaction for boundary separation was also found in which children with ADHD did not strategically adjust their response thresholds to the same degree as non-ADHD controls. However, the Rotation × ADHD interaction was not significant for nondecision time, which would have been the primary indicator of a specific deficit in mental rotation per se.
Conclusions:
Poorer performance on the mental rotation task was due to slow rate of evidence accumulation, as well as relative inflexibility in adjusting boundary separation, but not to impaired visuospatial processing specifically. We discuss the implications of these findings for future cognitive research in ADHD.
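The diffusion-model account above can be illustrated with a minimal random-walk simulation. This is a toy sketch, not the fitting procedure used in the study; all parameter values are hypothetical, chosen only to mirror the direction of the reported group differences (slower drift rate and faster nondecision time in the ADHD group):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ddm_trial(drift, boundary, nondecision, dt=0.001, noise=1.0):
    """Simulate one diffusion-model trial.

    Evidence starts at 0 and accumulates with rate `drift` plus Gaussian
    noise until it crosses +boundary (correct) or -boundary (error).
    Returns (correct, response_time_in_seconds).
    """
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return evidence > 0, nondecision + t

def summarize(drift, boundary, nondecision, n_trials=300):
    """Mean accuracy and mean RT over many simulated trials."""
    results = [simulate_ddm_trial(drift, boundary, nondecision)
               for _ in range(n_trials)]
    acc = float(np.mean([c for c, _ in results]))
    rt = float(np.mean([t for _, t in results]))
    return acc, rt

# Hypothetical parameter sets for a control-like and an ADHD-like profile
acc_ctrl, rt_ctrl = summarize(drift=2.0, boundary=1.0, nondecision=0.35)
acc_adhd, rt_adhd = summarize(drift=1.2, boundary=1.0, nondecision=0.30)
```

With boundary separation held equal, the slower drift rate alone lowers accuracy, while the faster nondecision time partly offsets the longer decision time, consistent with the pattern of less accurate responding without a difference in mean response time.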
Reading difficulties are one of the most significant challenges for children with neurofibromatosis type 1 (NF1). The aims of this study were to identify and categorize the types of reading impairments experienced by children with NF1 and to establish predictors of poor reading in this population.
Method:
Children aged 7–12 years with NF1 (n = 60) were compared with typically developing children (n = 36). Poor word readers with NF1 were classified according to impairment type (i.e., phonological, surface, mixed), and their reading subskills were compared. A hierarchical multiple regression was conducted to identify predictors of word reading.
Results:
Compared to controls, children with NF1 demonstrated significantly poorer literacy abilities. Of the 49 children with NF1 classified as poor readers, 20 (41%) were classified with phonological dyslexia, 24 (49%) with mixed dyslexia, and 5 (10%) fell outside classification categories. Children with mixed dyslexia displayed the most severe reading impairments. Stronger working memory, better receptive language, and fewer inattentive behaviors predicted better word reading skills.
Conclusions:
The majority of children with NF1 experience deficits in key reading skills which are essential for them to become successful readers. Weaknesses in working memory, receptive language, and attention are associated with reading difficulties in children with NF1.
Demographic trends and the globalization of neuropsychology have led to a push toward inclusivity and diversity in neuropsychological research in order to maintain relevance in the healthcare marketplace. However, in a review of neuropsychological journals, O’Bryant et al. found systematic under-reporting of sample characteristics vital for understanding the generalizability of research findings. We sought to update and expand the findings reported by O’Bryant et al.
Method:
We evaluated 1648 journal articles published between 2016 and 2019 from 7 neuropsychological journals. Of these, 1277 were original research or secondary analyses and were examined further. Articles were coded for reporting of age, sex/gender, years of education, ethnicity/race, socioeconomic status (SES), language, and acculturation. Additionally, we recorded information related to sample size, country, and whether the article focused on a pediatric or adult sample.
Results:
Key variables such as age and sex/gender (both over 95%) as well as education (71%) were frequently reported. Language (20%) and race/ethnicity (36%) were modestly reported, while SES (13%) and acculturation (<1%) were rarely reported. SES was more commonly reported in pediatric than adult samples, and the opposite was true for education. There were differences between the present results and those of O’Bryant et al., though the same general trends remained.
Conclusions:
Reporting of demographic data in neuropsychological research appears to be slowly changing toward greater comprehensiveness, though clearly more work is needed. Greater systematic reporting of such data is likely to be beneficial for the generalizability and contextualization of neurocognitive function.
This study examines the relationship of serum total tau, neurofilament light (NFL), ubiquitin carboxyl-terminal hydrolase L1 (UCH-L1), and glial fibrillary acidic protein (GFAP) with neurocognitive performance in service members and veterans with a history of traumatic brain injury (TBI).
Method:
Service members and veterans (N = 488) with a history of uncomplicated mild TBI (n = 172) or complicated mild, moderate, severe, or penetrating TBI (sTBI; n = 126), together with injured controls (n = 116) and non-injured controls (n = 74), were prospectively enrolled from Military Treatment Facilities. Participants completed a blood draw and neuropsychological assessment one year or more post-injury. Six neuropsychological composite scores and the presence/absence of mild neurocognitive disorder (MNCD) were evaluated. Within each group, stepwise hierarchical regression models were conducted.
Results:
Within the sTBI group, increased serum UCH-L1 was related to worse immediate memory and delayed memory performance (R²Δ = .065–.084, ps < .05), while increased GFAP was related to worse perceptual reasoning (R²Δ = .030, p = .036). Unexpectedly, within injured controls, UCH-L1 and GFAP were inversely related to working memory (R²Δ = .052–.071, ps < .05), and NFL was related to executive functioning (R²Δ = .039, p = .021) and MNCD (Exp(B) = 1.119, p = .029).
Conclusions:
Results suggest GFAP and UCH-L1 could play a role in predicting poor cognitive outcome following complicated mild and more severe TBI. Further investigation of blood biomarkers and cognition is warranted.
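The R²Δ statistics above quantify the incremental variance a biomarker explains after demographic covariates are entered. A minimal sketch of that hierarchical-regression step, using simulated data and ordinary least squares (the variable names and effect sizes here are hypothetical, not the study's data), might look like:

```python
import numpy as np

rng = np.random.default_rng(7)

def r_squared(X, y):
    """R² of an ordinary-least-squares fit of y on X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Simulated data: a covariate step, then a biomarker step
n = 126                                   # e.g., size of an sTBI-like group
age = rng.normal(35, 8, n)
education = rng.normal(14, 2, n)
biomarker = rng.normal(0, 1, n)           # standardized serum level
memory = -0.3 * biomarker + 0.05 * education + rng.normal(0, 1, n)

base = np.column_stack([age, education])              # step 1: covariates
full = np.column_stack([age, education, biomarker])   # step 2: + biomarker

r2_base = r_squared(base, memory)
r2_full = r_squared(full, memory)
delta_r2 = r2_full - r2_base              # incremental variance explained
```

Because adding a predictor can never reduce in-sample R², the inferential question is whether ΔR² is larger than chance, which the study assessed with the reported p-values.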