Head impact exposure (HIE) in youth football is a public health concern. The objective of this study was to determine if one season of HIE in youth football was related to cognitive changes.
Method:
Over 200 participants (ages 9–13) wore instrumented helmets for practices and games to measure the amount of HIE sustained over one season. Pre- and post-season neuropsychological tests were completed. Test score changes were calculated adjusting for practice effects and regression to the mean and used as the dependent variables. Regression models were calculated with HIE variables predicting neuropsychological test score changes.
Results:
For the full sample, a small effect was found, with season-average rotational values predicting changes in list learning such that HIE was related to negative score change: standardized beta (β) = -.147, t(205) = -2.12, p = .035. When analyzed by age clusters (9–10, 11–13) and with participant weight added to the models, the R2 values increased. Splitting groups by weight (median split) showed that heavier members of the 9–10 cohort had significantly greater change than lighter members. Additionally, significantly more participants had clinically meaningful negative changes: χ2 = 10.343, p = .001.
Conclusion:
These findings suggest that in the 9–10 age cluster, the average seasonal level of HIE was negatively related to cognitive change over one season, a relationship not found in the older group. The mediating effects of age and weight have not been explored previously and appear to contribute to the effects of HIE on cognition in youth football players.
In neurological diseases, metacognitive judgements have been widely used to assess the degree of disease awareness. However, little research of this type has yet focused on multiple sclerosis (MS).
Method:
We investigated item-by-item metacognitive predictions (feeling-of-knowing judgements) in episodic and semantic memory, and global metacognitive predictions in standard neuropsychological tests pertinent to MS (processing speed and verbal fluency). Twenty-seven relapsing–remitting MS (RR-MS) patients and 27 comparison participants took part.
Results:
We found that RR-MS patients were as accurate as the comparison group on our episodic and semantic item-by-item judgements. However, for the global predictions, the MS group initially overestimated their performance (ds = .64), but only on a task on which performance was also impaired (ds = .89; processing speed). We suggest that MS patients, under certain conditions, show inaccurate metacognitive knowledge. However, postdictions and item-by-item predictions indicate that their online metacognitive processes are no different from those of participants without MS.
Conclusion:
We conclude that there is no monitoring deficit in RR-MS and as such these patients should benefit from adaptive strategies and symptom education.
Neurodegenerative diseases (NDDs), such as Alzheimer’s disease, frontotemporal dementia, dementia with Lewy bodies, and Huntington’s disease, inevitably lead to impairments in higher-order cognitive functions, including the perception of emotional cues and decision-making behavior. Such impairments are likely to cause risky daily life behavior, for instance, in traffic. Impaired recognition of emotional expressions, such as fear, is considered a marker of impaired experience of emotions. Lower fear experience can, in turn, be related to risk-taking behavior. The aim of our study was to investigate whether impaired emotion recognition in patients with NDD is indeed related to unsafe decision-making in risky everyday life situations, which has not been investigated yet.
Methods:
Fifty-one patients with an NDD were included. Emotion recognition was measured with the Facial Expressions of Emotions: Stimuli and Test (FEEST). Risk-taking behavior was measured with driving simulator scenarios and the Action Selection Test (AST). Data from matched healthy controls were used: FEEST (n = 182), AST (n = 36), and driving simulator (n = 18).
Results:
Compared to healthy controls, patients showed significantly worse emotion recognition, particularly of anger, disgust, fear, and sadness. Furthermore, patients took significantly more risks in the driving simulator rides and the AST. Only poor recognition of fear was related to a higher amount of risky decisions in situations involving a direct danger.
Conclusions:
To determine whether patients with an NDD are still fit to drive, it is crucial to assess their ability to make safe decisions. Measuring emotion recognition may be a valuable contribution to this judgment.
The criteria for objective memory impairment in mild cognitive impairment (MCI) are vaguely defined. Aggregating the number of abnormal memory scores (NAMS) is one way to operationalise memory impairment, which we hypothesised would predict progression to Alzheimer’s disease (AD) dementia.
Methods:
As part of the Australian Imaging, Biomarkers and Lifestyle Flagship Study of Ageing, 896 older adults who did not have dementia were administered a psychometric battery including three neuropsychological tests of memory, yielding 10 indices of memory. We calculated the number of memory scores corresponding to z ≤ −1.5 (i.e., NAMS) for each participant. Incident diagnosis of AD dementia was established by consensus of an expert panel after 3 years.
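The NAMS metric described above reduces to a simple count over a participant's z-scores. A minimal sketch, assuming a z ≤ −1.5 cutoff as stated; the function name and example z-scores are illustrative, not taken from the AIBL data:

```python
def count_abnormal_scores(z_scores, cutoff=-1.5):
    """Number of Abnormal Memory Scores (NAMS): count how many of a
    participant's memory indices fall at or below the z-score cutoff."""
    return sum(1 for z in z_scores if z <= cutoff)

# Hypothetical participant with 10 memory indices, as in the battery above
participant_z = [-0.3, -1.6, -2.0, 0.4, -1.5, -0.9, -1.7, 0.1, -0.2, -1.4]
nams = count_abnormal_scores(participant_z)  # 4 scores at or below -1.5
```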
Results:
Of the 722 (80.6%) participants who were followed up, 54 (7.5%) developed AD dementia. There was a strong correlation between NAMS and probability of developing AD dementia (r = .91, p = .0003). Each abnormal memory score conferred an additional 9.8% risk of progressing to AD dementia. The area under the receiver operating characteristic curve for NAMS was 0.87 [95% confidence interval (CI) .81–.93, p < .01]. The odds ratio for NAMS was 1.67 (95% CI 1.40–2.01, p < .01) after correcting for age, sex, education, estimated intelligence quotient, subjective memory complaint, Mini-Mental State Exam (MMSE) score and apolipoprotein E ϵ4 status.
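The area under the ROC curve reported above summarizes how well NAMS separates progressors from non-progressors: it equals the probability that a randomly chosen progressor outscores a randomly chosen non-progressor. A minimal sketch of that computation, with made-up labels and scores rather than the study's data:

```python
def roc_auc(labels, scores):
    """AUC as the probability that a positive case outscores a negative
    case, counting ties as one half (equivalent to the rank-sum formula)."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical data: 1 = progressed to AD dementia, scores = NAMS values
labels = [1, 1, 0, 0, 0]
scores = [5, 3, 4, 2, 1]
auc = roc_auc(labels, scores)
```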
Conclusions:
Aggregation of abnormal memory scores may be a useful way of operationalising objective memory impairment, predicting incident AD dementia and providing prognostic stratification for individuals with MCI.
With longitudinal executive function (EF) data from the Victoria Longitudinal Study, we investigated three research goals pertaining to key characteristics of EF in non-demented aging: (a) examining variability in EF longitudinal trajectories, (b) establishing trajectory classes, and (c) identifying biomarker predictors discriminating these classes.
Method:
We used a trajectory analyses sample (n = 781; M age = 71.42) for the first and second goals and a prediction analyses sample (n = 570; M age = 70.10) for the third goal. Eight neuropsychological EF measures were used as indicators of three EF dimensions: inhibition, updating, and shifting. Data-driven classification analyses were applied to the full trajectory distribution. Machine learning prediction analyses tested 15 predictors from genetic, functional, lifestyle, mobility, and demographic risk domains.
Results:
First, we observed (a) significant variability in EF trajectories over a 40-year band of aging and (b) significantly variable patterns of EF decline. Second, a four-class EF trajectory model was observed, with classes differentiated by an algorithm combining level and slope information. Third, the highest class was discriminated from the lowest by several prediction factors: more education, more novel cognitive activity, lower pulse pressure, younger age, faster gait, lower body mass index, and better balance.
Conclusion:
First, with longitudinal variability in EF aging, the data-driven approach showed that long-term trajectories can be differentiated into separable classes. Second, prediction analyses discriminated class membership by a combination of multiple biomarkers from demographic, lifestyle, functional, and mobility domains of risk for brain and cognitive aging decline.
Mobility limitation and cognitive decline are related. Metabolic syndrome (MetS), the clustering of three or more cardiovascular risk factors, is associated with decline in both mobility and cognition. However, the interrelationship among MetS, mobility, and cognition is unknown. This study investigated a proposed pathway in which cognition moderates the relationship between MetS and mobility.
Method:
Adults ages 45–90 years were recruited. MetS risk factors and mobility performance (Short Physical Performance Battery (SPPB) and gait speed) were evaluated. Cognition was assessed using a comprehensive neuropsychological battery. A factor analysis of neuropsychological test scores yielded three factors: executive function, explicit memory, and semantic/contextual memory. Multivariable linear regression models were used to examine the relationship among MetS, mobility, and cognition.
Results:
Of the 74 participants (average age 61 ± 9 years; 41% female; 69% White), 27 (36%) manifested MetS. Mean SPPB score was 10.9 ± 1.2 out of 12, and mean gait speed was 1.0 ± 0.2 m/s. There were no statistically significant differences in mobility by MetS status. However, an increase in any one of the MetS risk factors was associated with decreased mobility performance after adjusting for age and gender (SPPB score: β (SE) = -.17 (.08), p < .05; gait speed: -.03 (.01), p < .01). Further adjusting for cognitive factors (SPPB score: explicit memory .31 (.14), p = .03; executive function .45 (.13), p < .01; gait speed: explicit memory .04 (.02), p = .03; executive function .06 (.02), p < .01) moderated the relationships between the number of metabolic risk factors and mobility.
Conclusion:
The relationship between metabolic risk factors and mobility may be moderated by cognitive performance, specifically through executive function and explicit memory.
The assessment of cognitive functions such as prospective memory, episodic memory, attention, and executive functions benefits from an ecologically valid approach to better understand how performance outcomes generalize to everyday life. Immersive virtual reality (VR) is considered capable of simulating real-life situations to enhance ecological validity. The present study attempted to validate the Virtual Reality Everyday Assessment Lab (VR-EAL), an immersive VR neuropsychological battery, against an extensive paper-and-pencil neuropsychological battery.
Methods:
Forty-one participants (21 females) were recruited: 18 gamers and 23 non-gamers who attended both an immersive VR and a paper-and-pencil testing session. Bayesian Pearson’s correlation analyses were conducted to assess construct and convergent validity of the VR-EAL. Bayesian t-tests were performed to compare VR and paper-and-pencil testing in terms of administration time, similarity to real-life tasks (i.e., ecological validity), and pleasantness.
Results:
VR-EAL scores were significantly correlated with their equivalent scores on the paper-and-pencil tests. The participants’ reports indicated that the VR-EAL tasks were significantly more ecologically valid and pleasant than the paper-and-pencil neuropsychological battery. The VR-EAL battery also had a shorter administration time.
Conclusion:
The VR-EAL appears to be an effective neuropsychological tool for the assessment of everyday cognitive functions: it offers enhanced ecological validity and a highly pleasant testing experience, and it does not induce cybersickness.
The purpose of this study was to evaluate whether loss of consciousness (LOC), retrograde amnesia (RA), and anterograde amnesia (AA) independently influence a particular aspect of post-concussion cognitive functioning—across-test intra-individual variability (IIV), or cognitive dispersion.
Method:
Concussed athletes (N = 111) were evaluated, on average, 6.04 days post-injury (SD = 5.90; Mdn = 4 days; Range = 1–26 days) via clinical interview and neuropsychological assessment. Primary outcomes of interest included two measures of IIV—an intra-individual standard deviation (ISD) score and a maximum discrepancy (MD) score—computed from 18 norm-referenced variables.
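The two IIV indices described above are straightforward to compute from a participant's standardized scores. A minimal sketch, assuming ISD is the standard deviation across the battery and MD is the highest minus the lowest score; the function and example T-scores are hypothetical:

```python
import statistics

def iiv_measures(scores):
    """Across-test intra-individual variability (IIV):
    ISD = intra-individual standard deviation across the battery,
    MD  = maximum discrepancy (highest minus lowest score)."""
    isd = statistics.stdev(scores)   # sample SD across tests
    md = max(scores) - min(scores)   # range of performance
    return isd, md

# Hypothetical participant: norm-referenced T-scores on five tests
isd, md = iiv_measures([50, 42, 61, 55, 47])
```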
Results:
Analyses of covariance (ANCOVAs) adjusting for time since injury and sex revealed a significant effect of LOC on the ISD (p = .018, ηp2 = .051) and MD (p = .034, ηp2 = .041) scores, such that athletes with LOC displayed significantly greater IIV than athletes without LOC. In contrast, measures of IIV did not significantly differ between athletes who did and did not experience RA or AA (all p > .05).
Conclusions:
LOC, but not RA or AA, was associated with greater variability, or inconsistencies, in cognitive performance acutely following concussion. Though future studies are needed to verify the clinical significance of these findings, our results suggest that LOC may contribute to post-concussion cognitive dysfunction and may be a risk factor for less efficient cognitive functioning.
The Weigl Colour-Form Sorting Test is a brief, widely used test of executive function. So far, it is unknown whether this test is specific to frontal lobe damage. Our aim was to investigate Weigl performance in patients with focal, unilateral, left or right, frontal, or non-frontal lesions.
Method:
We retrospectively analysed data from patients with focal, unilateral, left or right, frontal (n = 37), or non-frontal (n = 46) lesions who had completed the Weigl. Pass/failure (two correct solutions vs. fewer than two correct solutions) and errors were analysed.
Results:
A significantly greater proportion of frontal patients than non-frontal patients failed the Weigl (p < 0.001). Among patients who failed the test, a significantly greater proportion of frontal patients provided the same solution twice. No significant differences in Weigl performance were found between patients with left versus right hemisphere lesions or left versus right frontal lesions. There was no significant correlation between performance on the Weigl and tests tapping fluid intelligence.
Conclusions:
The Weigl is specific to frontal lobe lesions and not underpinned by fluid intelligence. Both pass/failure on this test and error types are informative. Hence, the Weigl is suitable for assessing frontal lobe dysfunction.