The Society of Thoracic Surgeons Congenital Heart Surgery Database is the largest congenital heart surgery database worldwide but does not provide information beyond the primary episode of care. Linkage to hospital electronic health records would capture complications and comorbidities, along with long-term outcomes, for patients undergoing CHD surgery. The current study explores linkage success between the Society of Thoracic Surgeons Congenital Heart Surgery Database and electronic health record data in North Carolina and Georgia.
Methods:
The Society of Thoracic Surgeons Congenital Heart Surgery Database was linked to hospital electronic health records from four congenital heart surgery centers in North Carolina using indirect identifiers such as date of birth, sex, and admission and discharge dates, from 2008 to 2013. Indirect linkage was performed at the admissions level and compared to two other linkages using a “direct identifier,” the medical record number: (1) linkage between the Society of Thoracic Surgeons Congenital Heart Surgery Database and electronic health records from a subset of patients at one North Carolina institution and (2) linkage between Society of Thoracic Surgeons data from two Georgia facilities and Georgia’s CHD repository, which also uses direct identifiers for linkage.
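As a rough illustration of the indirect, admission-level linkage described above (not the study’s actual code or schemas), the sketch below joins two small, made-up tables on date of birth, sex, and admission/discharge dates and reports the resulting linkage rate; all table and column names are assumptions.

```python
import pandas as pd

# Hypothetical extracts; the real STS and hospital EHR schemas will differ.
sts = pd.DataFrame({
    "sts_record_id": [101, 102, 103],
    "dob": ["2008-03-14", "2009-07-02", "2010-11-25"],
    "sex": ["F", "M", "M"],
    "admit_date": ["2008-04-01", "2009-08-15", "2011-01-10"],
    "discharge_date": ["2008-04-20", "2009-09-01", "2011-01-30"],
})
ehr = pd.DataFrame({
    "ehr_admission_id": ["A1", "A2", "A3"],
    "dob": ["2008-03-14", "2009-07-02", "2012-05-05"],
    "sex": ["F", "M", "F"],
    "admit_date": ["2008-04-01", "2009-08-15", "2012-06-01"],
    "discharge_date": ["2008-04-20", "2009-09-01", "2012-06-20"],
})

# Deterministic join on the four indirect identifiers, at the admission level.
keys = ["dob", "sex", "admit_date", "discharge_date"]
linked = sts.merge(ehr, on=keys, how="inner")

# Linkage rate: proportion of STS admissions that found an EHR match.
rate = linked["sts_record_id"].nunique() / sts["sts_record_id"].nunique()
print(linked[["sts_record_id", "ehr_admission_id"]])
print(f"Linked {rate:.1%} of STS admissions")
```

Deterministic joins of this kind miss admissions whose indirect identifiers disagree between sources, one plausible reason an indirect linkage rate can fall below direct, medical-record-number matching.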
Results:
Indirect identifiers successfully linked 79% (3,692/4,685) of Society of Thoracic Surgeons Congenital Heart Surgery Database admissions across the four North Carolina hospitals. Direct linkage techniques successfully matched the Society of Thoracic Surgeons Congenital Heart Surgery Database to 90.2% of electronic health records from the North Carolina subsample. Linkage between Society of Thoracic Surgeons data and Georgia’s CHD repository was 99.5% (7,544/7,585).
Conclusions:
Linkage methodology was successfully demonstrated between surgical data and hospital-based electronic health records in North Carolina and Georgia, uniting granular procedural details with clinical, developmental, and economic data. Indirect identifiers linked most patients, consistent with similar linkages in adult populations. Future directions include applying these linkage techniques with other data sources and exploring long-term outcomes in linked populations.
Wild oat is a long-standing weed problem in Australian grain cropping systems, potentially reducing the yield and quality of winter grain crops significantly. The effective management of wild oat requires an integrated approach comprising diverse control techniques that suit specific crops and cropping situations. This research aimed to construct and validate a bioeconomic model that enables the simulation and integration of weed control technologies for wild oat in grain production systems. The Avena spp. integrated management (AIM) model was developed with a simple interface to provide outputs of biological and economic data (crop yields, weed control costs, emerged weeds, weed seedbank, gross margins) on wild oat management in a cropping rotation. Uniquely, AIM was validated against real-world data on wild oat management in a wheat and sorghum cropping rotation, where the model was able to reproduce the patterns of wild oat population changes as influenced by weed control and agronomic practices. Correlation coefficients for 12 comparison scenarios ranged between 0.55 and 0.96. With accurate parameterization, AIM is thus able to make useful predictions of the effectiveness of individual and integrated weed management tactics for wild oat control in grain cropping systems.
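AIM itself is far more detailed than anything that fits here, but the toy sketch below illustrates, under entirely assumed parameter values, the kind of annual seedbank update a bioeconomic weed model iterates and the correlation-based comparison of predicted against observed densities used for validation; the “observed” numbers are invented placeholders, not data from the paper.

```python
import numpy as np

# Assumed, illustrative parameters -- not AIM's calibrated values.
germination = 0.6        # fraction of the seedbank emerging each season
control_efficacy = 0.99  # fraction of emerged plants killed by the tactic
seeds_per_survivor = 100
seed_decay = 0.3         # annual loss of ungerminated seeds in the soil

def simulate_seedbank(initial_seedbank, years):
    """Annual wild oat seedbank update under a single control tactic."""
    seedbank = initial_seedbank
    trajectory = []
    for _ in range(years):
        emerged = germination * seedbank
        survivors = emerged * (1 - control_efficacy)
        new_seeds = survivors * seeds_per_survivor
        carryover = (seedbank - emerged) * (1 - seed_decay)
        seedbank = carryover + new_seeds
        trajectory.append(seedbank)
    return np.array(trajectory)

predicted = simulate_seedbank(initial_seedbank=1000, years=6)

# Validation in the spirit of the paper: correlate predicted against observed
# seedbank densities (the observed values here are invented placeholders).
observed = np.array([900, 800, 700, 620, 540, 480])
r = np.corrcoef(predicted, observed)[0, 1]
print(predicted.round(0), f"r = {r:.2f}")
```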
The frequent prescribing of psychotropics and high prevalence of polypharmacy among older adults with intellectual disabilities require close monitoring.
Aims
To describe change in prevalence, predictors and health outcomes of psychotropic use during the four waves (2009/2010, 2013/2014, 2016/2017, 2019/2020) of the Intellectual Disability Supplement to the Irish Longitudinal Study on Ageing (IDS-TILDA).
Method
Eligible participants were adults (≥40 years) with intellectual disabilities who participated in all four waves of IDS-TILDA and who reported medication use for the entire period. Differences between groups were tested using Cochran's Q test for binary variables and the McNemar–Bowker test for variables with more than two categories. Generalised estimating equation models were used to assess associations between psychotropic use, participants’ characteristics and health outcomes.
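As a hedged sketch of the generalised estimating equation approach described (not the IDS-TILDA analysis code), the example below fits a binomial GEE with an exchangeable working correlation using statsmodels; the variable names, covariates and synthetic data are placeholders, and the study’s actual model specification may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic long-format data standing in for the repeated-measures cohort:
# one row per participant per wave (variable names are assumptions).
rng = np.random.default_rng(42)
n, waves = 433, 4
df = pd.DataFrame({
    "participant_id": np.repeat(np.arange(n), waves),
    "wave": np.tile(np.arange(1, waves + 1), n),
    "adl_dependence": rng.integers(0, 2, n * waves),
    "age_group": rng.integers(0, 3, n * waves),
})
logit = -0.5 + 0.8 * df["adl_dependence"] + 0.1 * df["wave"]
df["psychotropic_polypharmacy"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Binomial GEE with an exchangeable working correlation, clustering the
# repeated observations within participants across the four waves.
model = smf.gee(
    "psychotropic_polypharmacy ~ wave + age_group + adl_dependence",
    groups="participant_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()

# Odds ratios with 95% confidence intervals, in the form the abstract reports.
ci = np.asarray(result.conf_int())
summary = pd.DataFrame({
    "OR": np.exp(result.params),
    "2.5%": np.exp(ci[:, 0]),
    "97.5%": np.exp(ci[:, 1]),
}, index=result.params.index)
print(summary)
```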
Results
Across waves (433 participants) there were no significant differences in prevalence of psychotropic use (61.2–64.2%) and psychotropic polypharmacy (42.7–38.3%). Antipsychotics were the most used subgroup, without significant change in prevalence between waves (47.6–44.6%). A significant decrease was observed for anxiolytics (26.8–17.6%; P < 0.001) and hypnotics/sedatives (14.1–9.0%; P < 0.05). A significant increase was recorded for antidepressants (28.6–35.8%; P < 0.001) and mood-stabilising agents (11.5–14.6%; P < 0.05). Psychotropic polypharmacy (≥2 psychotropics) was significantly associated with moderate to total dependence in performing activities of daily living over the 10-year period (OR = 1.80, 95% CI 1.21–2.69; P < 0.05).
Conclusions
The study indicates that, over 10 years in a cohort of older adults with intellectual disabilities, usage of some classes of psychotropic increased, usage of others decreased, and the relatively high rate of antipsychotic use did not change, with a consequent risk of psychotropic polypharmacy and medication-related harm.
Metacognition is defined as the ability to observe, monitor, and make judgments about one’s own cognitive status. Judgments of learning (JOLs) and retrospective confidence judgments (RCJs) are two elements of metacognition related to memory, or metamemory. JOLs refer to one’s predictions of their memory performance prior to completing a memory task, while RCJs describe one’s subjective assessment of their memory performance after they have completed the task. Traumatic brain injury (TBI) is known to negatively impact general metacognitive functioning. However, the nuanced effects of TBI on constituent metacognitive subprocesses like JOLs and RCJs remain unclear. This study aimed to characterize patterns of brain activity that occur when individuals with TBI render JOLs and RCJs during a metamemory task. Differences between JOL- and RCJ-related patterns of activation were also explored.
Participants and Methods:
Twenty participants with moderate-to-severe TBI completed a metacognition task while undergoing functional magnetic resonance imaging (fMRI). Participants were first exposed to target slides with a set of polygons placed in specific locations, then asked to identify the target slides within a set of distractors. Before identifying the target slides, participants rated how well they believed they would remember the polygons’ shape and location (JOL). After answering, they rated how confident they were that the answer they provided was correct (RCJ). First-level time series analyses of fMRI data were conducted for each participant using FSL FEAT. Higher-level random effects modeling was then performed to assess average activation across all participants. Finally, contrasts were applied to examine and compare JOL- and RCJ-specific patterns of activation.
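The study’s analyses were run in FSL FEAT; purely as a conceptual sketch of what an event-related first-level contrast involves, the code below builds HRF-convolved JOL and RCJ regressors, fits an ordinary least-squares GLM to a simulated voxel time series, and evaluates an RCJ > JOL contrast. The onset times, scan length, HRF shape and simulated data are all assumptions, not the study’s design.

```python
import numpy as np

# Conceptual sketch of an event-related GLM contrast (RCJ > JOL),
# not the actual FSL FEAT pipeline used in the study.
rng = np.random.default_rng(0)
n_vols, tr = 200, 2.0  # assumed scan length (volumes) and repetition time (s)

def hrf(t):
    """Very simple canonical-like haemodynamic response (gamma shape)."""
    return (t ** 5) * np.exp(-t) / 120.0

def regressor(onsets, duration=2.0):
    """Boxcar at the given onsets (seconds) convolved with the HRF."""
    dt = 0.1
    t = np.arange(0, n_vols * tr, dt)
    box = np.zeros_like(t)
    for onset in onsets:
        box[(t >= onset) & (t < onset + duration)] = 1.0
    conv = np.convolve(box, hrf(np.arange(0, 30, dt)))[: len(t)]
    step = round(tr / dt)
    return conv[::step][:n_vols]  # resample to one value per volume

jol = regressor(onsets=np.arange(10, 380, 40))   # assumed JOL onsets
rcj = regressor(onsets=np.arange(30, 380, 40))   # assumed RCJ onsets
X = np.column_stack([np.ones(n_vols), jol, rcj])  # design matrix

# Simulated voxel time series (placeholder for real preprocessed data).
y = 0.8 * rcj + rng.normal(0, 1, n_vols)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
contrast = np.array([0, -1, 1])  # RCJ > JOL
print(f"RCJ > JOL contrast estimate: {contrast @ beta:.3f}")
```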
Results:
JOLs were associated with activation of the left frontal gyri, bilateral anterior cingulate, left insula, and right putamen (p < 0.01). RCJs were associated with activation of the bilateral frontal gyri, bilateral posterior and anterior cingulate, left insula, right putamen, and left thalamus (p < 0.01). Compared to RCJs, JOLs demonstrated greater left insula activation (p < 0.01). Compared to JOLs, RCJs demonstrated greater activation of the left superior frontal gyrus, bilateral middle frontal gyrus, and bilateral anterior cingulate (p < 0.01).
Conclusions:
The areas of activation found in this study were consistent with structures previously identified in the broader metacognition literature. Overall, RCJs produced activity in a greater number of regions that was more bilaterally distributed compared to JOLs. Moreover, several regions that were active during both metacognitive subprocesses tended to be even more active during RCJs. One hypothesis for this observation is that RCJs, unlike JOLs, additionally involve reflecting on one’s immediate memory of completing the task, which may require greater recruitment of resources. Importantly, these findings suggest that, while different metacognitive subprocesses may recruit similar brain circuitry, some subprocesses may require more potent and widespread activation of this circuitry than others. As such, subprocesses with greater activational needs and complexity, such as RCJs, may be more susceptible to damage caused by TBI. Future research should aim to compare patterns of activation associated with certain metacognitive subprocesses between survivors of TBI and healthy controls.
The Functional Assessment of Cancer Therapy-Cognitive scale (FACT-Cog) is one of the most frequently used patient-reported outcome (PRO) measures of cancer-related cognitive impairment (CRCI) and of CRCI-related impact on quality of life (QOL). Previous studies using the FACT-Cog found that >75% of women with breast cancer (BCa) experience CRCI. Distress tolerance (DT) is a complex construct that encompasses both the perceived capacity (i.e., cognitive appraisal) and the behavioral act of withstanding uncomfortable/aversive/negative emotional or physical experiences. Low DT is associated with psychopathology and executive dysfunction. We previously found that women with BCa with better DT skills reported less CRCI on the FACT-Cog. However, this relationship has not been tested using a performance-based cognitive measure. Therefore, the aims of this study were to: (1) assess the relationship between the FACT-Cog and the Telephone Interview for Cognitive Status (TICS), a performance-based cognitive measure; and (2) test whether the association between DT and CRCI (using the FACT-Cog) was replicated with the TICS.
Participants and Methods:
Participants completed the Distress Tolerance Scale (DTS), the FACT-Cog, and the TICS after undergoing BCa surgery and prior to starting adjuvant therapy [101 women, age >50 years, M(SD) = 61.15 (7.76), 43% White Non-Hispanic, 34.4% White Hispanic, 10.8% Black, with nonmetastatic BCa, 55.4% lumpectomy, 36.6% mastectomy; median 29 days post-surgery].
Results:
Although there was a significant correlation between the TICS total score and the FACT-Cog QOL subscale (r = 0.347, p < 0.001), the TICS total score was not correlated with scores on the FACT-Cog perceived cognitive impairment (CogPCI), perceived cognitive abilities (CogPCA), or comments from others (CogOth) subscales. However, the TICS memory item, a 10-word list immediate recall task, had a weak but statistically significant correlation with CogPCI (r = 0.237, p = 0.032), CogOth (r = 0.223, p = 0.044), and CogPCA (r = 0.233, p = 0.036). Next, the sample was divided based on the participant’s score on the TICS memory item (i.e., below vs. above the sample mean of 5.09). Results of independent samples t-tests demonstrated significant differences in mean scores for CogPCI, t(80) = -2.09, p = 0.04, Mdiff = -7.65, Cohen’s d = 0.483, and CogQOL, t(80) = -2.57, p = 0.01, Mdiff = -2.38, Cohen’s d = 0.593. A hierarchical linear regression found that DTS subscale and total scores did not significantly predict performance on the TICS. However, DTS continued to be a significant predictor of poorer FACT-Cog PCI scores while controlling for TICS scores.
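The reported analyses can be sketched roughly as follows with pandas, scipy and statsmodels; the synthetic data and column names (e.g. tics_memory, CogPCI, dts_total) are hypothetical stand-ins for the study variables, and the actual analysis details may differ.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Synthetic stand-in data; the real TICS, FACT-Cog and DTS scores would be
# loaded from the study database instead.
rng = np.random.default_rng(0)
n = 101
df = pd.DataFrame({
    "tics_total": rng.normal(33, 4, n),
    "tics_memory": rng.normal(5.1, 1.5, n),
    "dts_total": rng.normal(3.5, 0.8, n),
})
df["CogPCI"] = 50 + 2 * df["tics_memory"] + rng.normal(0, 10, n)
df["CogQOL"] = 10 + 0.3 * df["tics_total"] + rng.normal(0, 3, n)

# (1) Correlations between the TICS memory item and FACT-Cog subscales.
for subscale in ["CogPCI", "CogQOL"]:
    r, p = stats.pearsonr(df["tics_memory"], df[subscale])
    print(f"{subscale}: r = {r:.3f}, p = {p:.3f}")

# (2) Split at the TICS memory item mean and compare subscale means.
below = df[df["tics_memory"] < df["tics_memory"].mean()]
above = df[df["tics_memory"] >= df["tics_memory"].mean()]
for subscale in ["CogPCI", "CogQOL"]:
    t, p = stats.ttest_ind(below[subscale], above[subscale])
    print(f"{subscale}: t = {t:.2f}, p = {p:.3f}")

# (3) Hierarchical regression: does distress tolerance (DTS) predict CogPCI
# after controlling for objective performance (TICS)?
step1 = smf.ols("CogPCI ~ tics_total", data=df).fit()
step2 = smf.ols("CogPCI ~ tics_total + dts_total", data=df).fit()
print(f"R^2 step 1 = {step1.rsquared:.3f}, step 2 = {step2.rsquared:.3f}")
```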
Conclusions:
We found a weak relationship between self-reported cognitive impairment and objective cognitive performance (TICS). However, greater self-reported PCI and its impact on QOL were found in participants who scored below the sample mean on a recall task from the TICS. Although perceived ability to tolerate distress continued to predict self-reported PCI on the FACT-Cog, it did not predict overall performance on the TICS. Therefore, responses on the FACT-Cog may be more representative of an individual’s ability to tolerate distress related to perceived CRCI than of actual overall cognitive ability or impairment.
The process of metacognitive monitoring refers to one’s ability to make rapid, in-the-moment self-assessments of their cognitive performance. An area of interest within this literature concerns metacognitive accuracy (MA), or the extent to which an individual can discern when their own judgments are incorrect or correct. Much of the work in this area has focused on either school-aged samples or clinical samples, with findings of impairment in metacognitive processes associated with traumatic brain injury, schizophrenia, cerebrovascular accidents, and Alzheimer’s disease. Notably, decreased working memory and executive functioning are frequently reported in samples with low MA, suggesting a possible reliance on basic cognitive resources in the facilitation of metacognitive processes. Thus, the goal of this investigation was to elucidate potential relationships between individual domains of cognition and higher-order MA. We hypothesized that performance on measures of working memory and executive function would be positively associated with measures of MA.
Participants and Methods:
Data from 87 undergraduate students who volunteered in research for class credit were used. All participants completed a computerized metamemory task in which six lists of 12 words, each paired with a varying point value, were first presented to the participants. After each list, participants were instructed to score as many points as possible by recalling words they could remember. After a brief delay, participants completed a recognition task using the words presented earlier and provided a retrospective confidence judgment (RCJ) following each item. A metric for MA, meta d', was calculated using signal-detection theory analysis from the reported RCJs and recognition task performance. Participants also completed neuropsychological tests of attention (Trails A), working memory (WM; Backward Digits), executive function (EF; Trails B), mental flexibility (MF; Trails B/A Ratio), and processing speed (Symbol Digit Modalities). A sequential multiple regression was performed with meta d’ serving as the criterion, with education, age, and performance on neuropsychological measures entered as predictors.
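Meta d' is estimated with a model-based fit (e.g., the Maniscalco and Fleming approach), which is beyond a short example; the sketch below instead shows the underlying type-1 sensitivity calculation from signal-detection theory and a crude confidence-based type-2 index, using made-up trial data. It is an illustration of the general technique, not the study’s computation.

```python
import numpy as np
from scipy.stats import norm

# Made-up recognition outcomes and confidence ratings (1-4) per trial.
is_old = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])      # target present?
said_old = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 0])    # participant response
confidence = np.array([4, 3, 2, 4, 3, 2, 4, 3, 4, 1])  # RCJ per trial

# Type-1 sensitivity: d' = z(hit rate) - z(false-alarm rate),
# with a small correction to avoid rates of exactly 0 or 1.
def rate(count, n):
    return (count + 0.5) / (n + 1.0)

hits = rate(np.sum((is_old == 1) & (said_old == 1)), np.sum(is_old == 1))
fas = rate(np.sum((is_old == 0) & (said_old == 1)), np.sum(is_old == 0))
d_prime = norm.ppf(hits) - norm.ppf(fas)

# Crude type-2 index: are correct responses endorsed with higher confidence
# than errors? (A stand-in for meta d', which requires a model-based fit.)
correct = (is_old == said_old)
type2_gap = confidence[correct].mean() - confidence[~correct].mean()

print(f"d' = {d_prime:.2f}, confidence gap (correct - error) = {type2_gap:.2f}")
```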
Results:
The model indicated that a moderate percentage of the variability (R2 = .201) in metacognitive accuracy could be attributed to the combination of predictors in the model (F(7, 79) = 2.843, p = .011). Examination of the regression coefficients indicated that only measures of attention (β = .638, p = .01), MF (β = .473, p = .041), and WM (β = .244, p = .024) were significantly related to MA after controlling for all other variables in the model.
Conclusions:
The model suggests that working memory, attention, and mental flexibility increased in a linear fashion as MA increased. Our hypotheses were partially supported: while working memory predicted MA, its contribution to the overall model was the smallest among the significant predictors. While executive function was not a significant contributor to the model, MF (a component of EF) was. The largest contributor to the model was attention, which supports prior findings in the literature. This outcome suggests that metacognitive processes in neurotypical students, while separate from EF, may rely on other, more basic cognitive processes. These results may prove beneficial in guiding the development of rehabilitative interventions for MA in clinical samples.
Metacognition refers to one’s ability to make online, in-the-moment judgments regarding their own cognitive performance, and has significant implications for one’s ability to function in daily life. It has been documented that individuals with traumatic brain injury (TBI) often present with metacognitive deficits and are slower than neurotypical peers in making such judgments. Preliminary attempts have been made to determine how neural contributions to metacognitive functioning differ after injury. Studies thus far have found unique roles of prefrontal gray matter volume and inter-network connectivity in metacognitive functioning after injury, but functional activation directly associated with metacognitive processing has yet to be investigated. This event-related functional magnetic resonance imaging (fMRI) study aimed to document differences in functional activation between adults with TBI and neurotypical peers when completing metacognitive confidence judgments.
Participants and Methods:
Sixteen adults with moderate to severe TBI and 10 healthy adults (HCs) completed a metacognitive task while in the fMRI scanner. All participants were exposed to target slides with polygons arranged in various positions, then asked to identify the target slide from a group including 3 other distractor slides. Following each response, participants provided a metacognitive retrospective confidence judgment (RCJ) by rating their confidence that the answer they provided was correct. Meta d', a signal-detection based metric of metacognitive accuracy, was calculated. FSL FEAT was used for processing and analysis of the imaging data. Contrasts were created to model activation that was greater when RCJs were made compared to target recognition; mixed-effects modeling was then used to investigate group differences. Cluster-based thresholding (z > 2.3, p < 0.01) was used for multiple-comparison correction.
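Cluster-based thresholding in FEAT assesses cluster significance with Gaussian random field theory; as a simplified illustration of the thresholding step only, the sketch below thresholds a toy z-statistic map at z > 2.3 and labels contiguous clusters with scipy. The map is simulated and the cluster-level inference is omitted.

```python
import numpy as np
from scipy import ndimage

# Toy group-level z-statistic map (placeholder for the real 3D volume).
rng = np.random.default_rng(1)
z_map = rng.normal(0, 1, size=(20, 20, 20))
z_map[5:9, 5:9, 5:9] += 3.0  # an artificial "active" region

# Voxel-level threshold z > 2.3, then label contiguous clusters.
supra = z_map > 2.3
labels, n_clusters = ndimage.label(supra)
sizes = ndimage.sum(supra, labels, index=np.arange(1, n_clusters + 1))

# In FEAT, each cluster's size is then tested against a GRF-based null
# distribution; here we only report the cluster count and largest extent.
print(f"{n_clusters} clusters above z = 2.3; largest has {int(sizes.max())} voxels")
```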
Results:
Healthy controls performed significantly better on the target identification task (p<0.01), and were faster at making RCJs (p=0.03). Individuals with TBI had greater meta d’ scores (p=0.03). Significant activation beyond what was present during target recognition (RCJ>recognition) was found in left supramarginal gyrus, left posterior cingulate, and left cerebellum when individuals with TBI made RCJs, while HCs showed significant activation in the left precuneus, and bilateral superior temporal gyri. Individuals with TBI demonstrated more activation in the lateral occipital cortex bilaterally and the left cerebellum than HCs when completing RCJs. HCs presented with more activation in the left supramarginal gyrus than the TBI group when making RCJs.
Conclusions:
The areas of activation present in both the TBI and HC groups are consistent with previous imaging findings from studies of healthy samples. Interestingly, two structures previously implicated in self-directed cognition and consciousness, the posterior cingulate and precuneus, were differentially activated by the groups. The lack of a common network between the two groups suggests that survivors may rely on separate neural substrates to facilitate metacognition after injury. The TBI group was found to recruit more functional areas when completing the RCJs. These findings, paired with the behavioral data indicating metacognitive performance differences, suggest that neural recruitment may occur after injury to allow survivors to engage in making metacognitive judgments. Future qualitative investigations of the metacognitive judgments are needed to determine the compensatory nature of this postinjury recruitment.
The objectives of this study were to investigate the effect of level and timing of silage supplementation during early lactation on animal performance and dry matter intake (DMI). Two farmlets were established with high (1253 kg DM/ha) and low (862 kg DM/ha) grass availability at turnout. In spring, cows were assigned to one of two treatments as they calved over 2 years: high grass (HG) and low grass (LG). During period 1 (P1; weeks 1–6), cows on the HG treatment were offered a high daily herbage allowance (DHA) with low silage, and cows on the LG treatment were offered a low DHA with high silage. In period 2 (P2; weeks 7–12), half of the cows from the HG treatment in P1 switched to the LG treatment, while 20 LG cows from P1 switched to the HG treatment. Cows on the HG treatment in P2 received a high DHA with no silage, and cows on the LG treatment received a low DHA with 3 kg DM/cow silage. Grass DMI was significantly higher for the HG treatment during both periods (+1.6 and +3.4 kg DM/cow/day, respectively). Cows on the HG treatment produced +0.9 kg milk/cow/day and had a higher milk protein concentration (+1.1 g/kg milk) compared to cows on the LG treatment during period 2. Differences in animal performance observed in period 2 were maintained throughout the 8-week carryover period.
The importance of an early diagnosis of dementia is not limited to the clinical management through treatment with anti-dementia medications. A crucial component of dementia care is to enable a person with dementia to make decisions in respect of their own care and treatment. An early diagnosis provides the opportunity for timely discussions about future care needs and the chance for the individual to consider their advance care plan (ACP) at a time when the person retains capacity or, at least, can be an active participant. A person may wish to consult a solicitor or create their own advance decision, lasting power of attorney or will, while they still have capacity to do so.
In this chapter, we will consider the pathway for diagnosing a person with dementia and the legal corollaries of such a diagnosis, rather than the organisation or implementation of advance care plans. The considerations are universal when applied to settings where a person is first diagnosed with a dementing illness. Their importance cannot be overstated in the context of the progressive and deteriorating trajectory of the illness.
Clinicians are less likely to be familiar with the provisions of the Care Act compared with the MHA or MCA. While the Act is primarily the domain of social workers and local authorities, its effects are so widespread that a general overview of it is helpful in planning care and providing safeguards for people with dementia. The importance to clinicians arises because so much hinges on the assessment of the person’s needs and on that assessment being carried out in accordance with the Care Act. In a typical case in the Court of Protection, the key documents before the court will be determined by the Care Act assessment. These relate to what the person needs and whether they have the capacity to accept or decline the services required. We discuss the main provisions of the Care Act, which places a series of duties and responsibilities on local authorities concerning care and support for adults, as well as the safeguarding provisions in the Act. We then discuss the role of Continuing Healthcare, which is legally underpinned by the NHS Act 2006 and the Care Act, with the overlapping legal schemes essentially working in parallel.
Individuals with dementia may encounter the Criminal Justice System (CJS), including the police and courts, in different capacities: as victims, witnesses or perpetrators of crimes (related or not to the diagnosis). The Living Well with Dementia strategy makes the following statement: ‘People with dementia access all services and so need informed understanding and support from all the services they come into contact with, not only from specialist dementia services. Awareness and skills are therefore needed in all sections of the workforce and society (e.g. housing, emergency services, employers, utilities, public sector services, GP receptionists, criminal justice system staff), not just those involved with dementia care.’ While we think it is important to highlight this policy initiative, this chapter does not analyse how it has been implemented within the CJS. We are limiting our discussion to the legal issues relating to four areas: crimes committed against people with dementia; crimes committed by people with dementia; dementia in secure settings; and discharge of restricted patients on conditions that amount to deprivation of liberty.
Clinicians need to be vigilant about whether the court’s intervention is required because of a dispute or specific legal requirement in relation to their patient. Circumstances may arise when it is necessary to obtain authority from a court regarding the lawfulness of a treatment (either to be given or withdrawn) when a patient refuses, lacks capacity or there is a difference of opinion regarding best interests. In other cases, a judgment from the court may protect a clinician from claims that they have acted unlawfully. Of course, the courts are also there to safeguard the welfare of the patient. We discuss the role of the First-tier Tribunal (Mental Health) and the decision-making capacity required of patients to participate in tribunal proceedings. We then explain the Court of Protection and its powers, the pathways for application to the court, and the evidence that a clinician may be required to provide. We consider common health and welfare cases that the Court of Protection may be asked to decide on and then discuss the role of the inherent jurisdiction of the High Court in protecting the vulnerable but capacitous.
Discharge planning of older people with dementia to a domestic or care home setting can present difficult practical, legal and ethical dilemmas to the hospital clinician. There may be a different and challenging profile of risks whichever strategy is pursued, but undoubtedly the issue of where someone lives or the care they receive has profound personal importance. Decision-making around these issues exemplifies the tension between preserving autonomy and protection of the individual. A hospital admission can act as a watershed point whereby a view is taken that the person requires a different approach to their care. If the person comes from a domestic setting, this may lead to instigating or modifying an existing care package or moving to a care home. Furthermore, it is generally accepted that the services available in the community for this large and growing patient group are inadequate, the applicable legal framework itself is often complex and unwieldy and, inevitably, the planning process involves more than a single agency. We discuss the key legislation, guidance and processes relating to discharge of a person with dementia from both general and psychiatric hospital settings.
The ability to make decisions (and thereby its assessment) can be complex. It may be affected by a combination of factors that vary between individuals. Even when cognitive functioning may be compromised (for instance, by dementia), a person may still be able to express important deep-rooted values underpinning their decisions. The circumstances may demand that these different elements are explored in greater or lesser detail in reaching conclusions about an individual’s decision-making capacity. This may add to the complexity of an assessment. From the outset, to make an adequately informed treatment or other decision, you must have sufficient information, be able to make the decision free from coercion and have mental capacity.
Notwithstanding the challenges, with appropriate thought, preparation and attention to documentation, it should be possible to record a legally defensible assessment of capacity for most situations that arise in clinical practice. The purpose of this chapter is to provide an accessible approach to capacity assessment and its recording.