Substance use (SU) and substance use disorders (SUDs) are prevalent public health problems among emerging adult populations. Emerging adulthood is a time when young people are growing in their independence and exploring their identities, social connections, and future opportunities. It is also a developmental period characterized by experimentation and engagement in alcohol and drug use. The aim of this book chapter is to discuss and provide examples of prevention research to address SU/SUD among emerging adults. We utilize ecodevelopmental and multicultural frameworks to discuss approaches to prevention research. Next, we describe prevention research in two areas: risk and protective factor research and intervention development. In the area of risk and protective factor research, we review studies testing risk and protective factors for SU/SUD among Latinx emerging adults. Finally, we share the development of two intervention studies: one designed to address alcohol-related sexual assault and one applying a cognitive-behavioral model to mild-to-moderate substance use disorder. Implications for future prevention research are also discussed.
Preschool anxiety is highly prevalent and well known to predict risk for future psychopathology. The present study explores whether a diagnosis of an anxiety disorder in preschool interacts with two well-known protective factors, (a) social skills and (b) cognitive ability, to longitudinally predict psychopathology among a sample of 207 children measured at preschool (Mage = 4.34 years) and early childhood (Mage = 6.61 years). To assess social skills and cognitive ability, we utilized the Social Skills Rating Scale and the Differential Abilities Scale, respectively. To assess psychopathology, we utilized the parent report of the Preschool Age Psychiatric Assessment. Hierarchical linear regression models revealed significant interactions of both social skills and cognitive ability with preschool anxiety. We observed that social skills protected against emergent psychopathology for children both with and without anxiety, although this association was stronger for children with preschool anxiety. By contrast, cognitive ability served as a protective factor against future psychopathology primarily among children without preschool anxiety. Results from this study identify targets for future intervention and inform our understanding of how preschool anxiety, a common disorder among young children, shapes future psychopathology risk in childhood.
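For readers who want to see the analytic logic, the following is a minimal sketch of a hierarchical moderation analysis of the kind described above, written in Python with statsmodels. The file and column names (preschool_followup.csv, psychopathology_t2, anxiety_dx, social_skills) are hypothetical, not the authors' own code or data.

```python
# Hypothetical sketch of a hierarchical (two-step) moderation analysis:
# does preschool anxiety moderate the protective effect of social skills?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("preschool_followup.csv")  # hypothetical data file

# Step 1: main effects of the anxiety diagnosis and the protective factor
base = smf.ols("psychopathology_t2 ~ anxiety_dx + social_skills", data=df).fit()

# Step 2: add the interaction term; a significant coefficient indicates
# that the protective association differs by anxiety status
full = smf.ols("psychopathology_t2 ~ anxiety_dx * social_skills", data=df).fit()

print(f"R2 change: {full.rsquared - base.rsquared:.3f}")
print(full.summary().tables[1])  # inspect the anxiety_dx:social_skills term
```

The same two-step comparison would then be repeated with cognitive ability in place of social skills.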
New machine-vision technologies like the John Deere See & Spray™ could provide the opportunity to reduce herbicide use by detecting weeds and target-spraying herbicides simultaneously. Experiments were conducted for 2 yr in Keiser, AR, and Greenville, MS, to compare residual herbicide timings and targeted spray applications versus traditional broadcast herbicide programs in glyphosate/glufosinate/dicamba-resistant soybean. Treatments utilized consistent herbicides and rates with a preemergence (PRE) application followed by an early postemergence (EPOST) dicamba application followed by a mid-postemergence (MPOST) glufosinate application. All treatments included a residual at PRE and excluded or included a residual EPOST and MPOST. Additionally, the herbicide application method was considered, with traditional broadcast applications, broadcast residual + targeted applications of postemergence herbicides (dual tank), or targeted applications of all herbicides (single tank). Targeted applications provided comparable control to broadcast applications with a ≤1% decrease in efficacy and overall control ≥93% for Palmer amaranth, broadleaf signalgrass, morningglory species, and purslane species. Additionally, targeted sprays slightly reduced soybean injury, by at most 5 percentage points across all evaluations, and these effects did not translate to a yield increase at harvest. The relationship between weed area and targeted sprayed area also indicates that nozzle angle can influence potential herbicide savings, with narrower nozzle angles spraying less area. On average, targeted sprays saved between 28.4% and 62.4% on postemergence herbicides. On the basis of these results, with specific machine settings, targeted application programs could reduce the amount of herbicide applied while providing weed control comparable to that of traditional broadcast applications.
We conducted a quantitative analysis of the microbial burden and prevalence of epidemiologically important pathogens (EIP) found on long-term care facilities (LTCF) environmental surfaces.
Methods:
Microbiological samples were collected using RODAC plates (25 cm²/plate) from resident rooms and common areas in five LTCFs. EIP were defined as MRSA, VRE, C. difficile, and multidrug-resistant (MDR) Gram-negative rods (GNRs).
Results:
Rooms of residents with reported colonization had markedly higher EIP counts per RODAC plate (8.32 CFU, 95% CI 8.05, 8.60) than rooms of non-colonized residents (0.78 CFU, 95% CI 0.70, 0.86). Sixty-five percent of the resident rooms and 50% of the common areas were positive for at least one EIP. If a resident was labeled by the facility as colonized with an EIP, we found that EIP in only 30% of the rooms. MRSA was the most common EIP recovered, followed by C. difficile and MDR-GNR.
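For illustration, a group mean count with a normal-approximation 95% CI of this kind could be computed as in the sketch below; the data file and column names (rodac_counts.csv, cfu, colonized) are hypothetical.

```python
# Hypothetical sketch: mean EIP counts per RODAC plate, with 95% CIs,
# for rooms of colonized vs. non-colonized residents.
import numpy as np
import pandas as pd

plates = pd.read_csv("rodac_counts.csv")  # one row per plate sampled
for status, grp in plates.groupby("colonized"):
    mean = grp["cfu"].mean()
    se = grp["cfu"].std(ddof=1) / np.sqrt(len(grp))
    lo, hi = mean - 1.96 * se, mean + 1.96 * se
    print(f"colonized={status}: {mean:.2f} CFU (95% CI {lo:.2f}, {hi:.2f})")
```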
Discussion:
We found frequent environmental contamination with EIP in LTCFs. Colonization status of a resident was a strong predictor of higher levels of EIP being recovered from his/her room.
From early on, infants show a preference for infant-directed speech (IDS) over adult-directed speech (ADS), and exposure to IDS has been correlated with language outcome measures such as vocabulary. The present multi-laboratory study explores this issue by investigating whether there is a link between early preference for IDS and later vocabulary size. Infants’ preference for IDS was tested as part of the ManyBabies 1 project, and follow-up CDI data were collected from a subsample of this dataset at 18 and 24 months. A total of 341 (18 months) and 327 (24 months) infants were tested across 21 laboratories. Neither the preregistered analyses with North American and UK English samples nor exploratory analyses with a larger sample found evidence for a relation between IDS preference and later vocabulary. We discuss implications of this finding in light of recent work suggesting that IDS preference measured in the laboratory has low test-retest reliability.
If livestock at risk of poor welfare could be identified using a risk assessment tool, more targeted response strategies could be developed by enforcement agencies to facilitate early intervention, prompt welfare improvement and a decrease in reoffending. This study aimed to test the ability of an Animal Welfare Risk Assessment Tool (AWRAT) to identify livestock at risk of poor welfare in extensive farming systems in Australia. Following farm visits for welfare- and non-welfare-related reasons, participants completed a single welfare rating (WR) and an assessment using the AWRAT for the farm just visited. A novel algorithm was developed to generate an AWRAT-Risk Rating (AWRAT-RR) based on the AWRAT assessment. Using linear regression, the relationship between the AWRAT-RR and the WR was tested. In this preliminary testing, the AWRAT performed well in identifying farms with poor livestock welfare. As the AWRAT relies upon observation, intra- and inter-observer agreement were assessed in an observation study, in which a set of photographs of farm features was rated on two occasions. Intra-observer reliability was good, with 83% of intra-class correlation coefficients (ICCs) for observers ≥ 0.8. Inter-observer reliability was moderate, with an ICC of 0.67. The AWRAT provides a structured framework to improve consistency in livestock welfare assessments. Further research is necessary to determine the AWRAT’s ability to identify livestock at risk of poor welfare by studying animal welfare incidents and reoffending over time.
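An observer-agreement analysis of this kind can be sketched briefly; the Python example below uses the pingouin package, and the file and column names (awrat_photo_ratings.csv, photo, rater, occasion, score) are hypothetical.

```python
# Hypothetical sketch of the observer-agreement analysis for photo ratings.
import pandas as pd
import pingouin as pg

ratings = pd.read_csv("awrat_photo_ratings.csv")  # long format: one row per rating

# Inter-observer reliability: agreement across raters on the first occasion
inter = pg.intraclass_corr(
    data=ratings[ratings["occasion"] == 1],
    targets="photo", raters="rater", ratings="score",
)
print(inter[["Type", "ICC", "CI95%"]])

# Intra-observer reliability: for each rater, treat the two rating occasions
# as repeated "raters" of the same photos
for rater, sub in ratings.groupby("rater"):
    intra = pg.intraclass_corr(
        data=sub, targets="photo", raters="occasion", ratings="score"
    )
    print(rater, float(intra.loc[intra["Type"] == "ICC3", "ICC"].iloc[0]))
```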
The objective of this study was to identify factors more commonly observed on farms with poor livestock welfare compared to farms with good welfare. Potentially, these factors may be used to develop an animal welfare risk assessment tool (AWRAT) that could be used to identify livestock at risk of poor welfare. Identifying livestock at risk of poor welfare would facilitate early intervention and improve strategies to promptly resolve welfare issues. This study focuses on cattle, sheep and goats in non-dairy extensive farming systems in Australia. To assist with identifying potential risk factors, a survey was developed presenting 99 factors about the farm, farmers, animals and various aspects of management. Based on their experience, key stakeholders, including veterinarians, stock agents, consultants, extension and animal welfare officers, were asked to consider a farm where the welfare of the livestock was either high or low and rate the likelihood of observing these factors. Of the 141 responses, 65% were for farms with low welfare. Only 6% of factors had ratings that were not significantly different between high and low welfare surveys, and these were not considered further. Factors from low welfare surveys with median ratings in the lowest 25% were considered potential risks (n = 49). Considering correlation, ease of verification and the different livestock farming systems in Australia, 18 risk factors relating to farm infrastructure, nutrition, treatment and husbandry were selected. The AWRAT requires validation in future studies.
Inhibitory control plays an important role in children’s cognitive and socioemotional development, including their psychopathology. It has been established that contextual factors such as socioeconomic status (SES) and parents’ psychopathology are associated with children’s inhibitory control. However, the relations between the neural correlates of inhibitory control and contextual factors have been rarely examined in longitudinal studies. In the present study, we used both event-related potential (ERP) components and time-frequency measures of inhibitory control to evaluate the neural pathways between contextual factors, including prenatal SES and maternal psychopathology, and children’s behavioral and emotional problems in a large sample of children (N = 560; 51.75% females; Mage = 7.13 years; Rangeage = 4–11 years). Results showed that theta power, which was positively predicted by prenatal SES and was negatively related to children’s externalizing problems, mediated the longitudinal and negative relation between them. ERP amplitudes and latencies did not mediate the longitudinal association between prenatal risk factors (i.e., prenatal SES and maternal psychopathology) and children’s internalizing and externalizing problems. Our findings increase our understanding of the neural pathways linking early risk factors to children’s psychopathology.
Suicidal thoughts and behaviors are elevated among active-duty service members (ADSM) and veterans compared to the general population. Hence, it is a priority to examine maintenance factors underlying suicidal ideation among ADSM and veterans to develop effective, targeted interventions. In particular, interpersonal risk factors, hopelessness, and overarousal have been robustly connected to suicidal ideation and intent.
Methods
To identify the suicidal ideation risk factors that are most relevant, we employed network analysis to examine between-subjects (cross-sectional), contemporaneous (within seconds), and temporal (across four hours) group-level networks of suicidal ideation and related risk factors in a sample of ADSM and veterans (participant n = 92, observations n = 10 650). Participants completed ecological momentary assessment (EMA) surveys four times a day for 30 days, where they answered questions related to suicidal ideation, interpersonal risk factors, hopelessness, and overarousal.
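To give a flavor of the temporal-network estimation, the sketch below fits a lag-1 vector autoregression to one participant's within-person-centered EMA series using statsmodels; published group-level analyses typically use multilevel VAR instead, and all file and column names here are hypothetical.

```python
# Hypothetical sketch of a temporal network: lag-1 (VAR) relations among
# EMA-measured risk factors. Group-level studies typically use multilevel VAR.
import pandas as pd
from statsmodels.tsa.api import VAR

ema = pd.read_csv("ema_surveys.csv")  # one row per completed EMA prompt
symptoms = ["ideation", "belonging", "hopelessness", "agitation", "ineffective"]

# Within-person centering separates temporal dynamics from stable
# between-person differences
ema[symptoms] = ema.groupby("participant")[symptoms].transform(lambda s: s - s.mean())

# Fit a VAR(1) to a single participant's series for illustration
series = ema.loc[ema["participant"] == 1, symptoms].reset_index(drop=True)
model = VAR(series).fit(1)
print(model.params)  # lag-1 coefficients = directed edges of the temporal network
```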
Results
The between-subjects and contemporaneous networks identified agitation, not feeling close to others, and ineffectiveness as the most central symptoms. The temporal network revealed that feeling ineffective was most likely to influence other symptoms in the network over time.
Conclusion
Our findings suggest that ineffectiveness, low belongingness, and agitation are important drivers of moment-to-moment and longitudinal relations between risk factors for suicidal ideation in ADSM and veterans. Targeting these symptoms may disrupt suicidal ideation.
The objective of this study was to determine the effects of the non-classic psychedelic ibogaine on cognitive functioning. Ibogaine is an indole alkaloid derived from the Tabernanthe iboga plant, indigenous to Africa and traditionally used in spiritual and healing ceremonies. Ibogaine has primarily been studied with respect to its clinical efficacy in reducing substance addiction. There are, however, indications that it may also enhance recovery from traumatic experiences. Ibogaine is a Schedule I substance in the USA.
Participants and Methods:
Participants were U.S. Special Operations Veterans who had independently and voluntarily referred themselves for an ibogaine retreat at a specialized clinic outside the USA prior to learning about this observational study. After meeting rigorous screening requirements, 30 participants were enrolled, all endorsing histories of combat and repeated blast exposure, as well as traumatic brain injury. Participants were seen in person pre-treatment, post-treatment, and one-month post-treatment for neuropsychological testing, neuroimaging, and collection of clinical outcome measures. All 30 participants were seen pre- and post-treatment, of whom 27 were also able to return one-month post-treatment.
The neuropsychological battery included the Hopkins Verbal Learning Test (HVLT), the Brief Visuospatial Memory Test - Revised (BVMT-R), the Wechsler Adult Intelligence Scale - Fourth Edition (WAIS-IV) Working Memory Index (Digit Span and Arithmetic) and Processing Speed Index (Symbol Search and Coding), and the Delis-Kaplan Executive Function System (D-KEFS) tests of Verbal Fluency (VF), Trail Making (TMT), Color Word (CW), and Tower Test (TT). For repeated measures, alternate forms were used whenever possible.
Results:
Repeated-measures ANOVA revealed significant effects of time, with post-treatment improvements across multiple measures including processing speed (WAIS-IV PSI; F(2,25) = 26.957, p < .001), executive functions (CW Conditions 3 and 4: F(1.445,25) = 11.383, p < .001 and F(1.381,25) = 7.687, p = .004, respectively), verbal fluency (VF Condition 3 correct and accuracy: F(2,25) = 6.419, p = .003 and F(2,25) = 153.076, p < .001, respectively), and verbal learning (HVLT Total Recall (alternate forms used at each time point): F(1.563,23) = 6.958, p = .004). Score progression graphs are presented. Performance on all other cognitive measures did not significantly change following treatment.
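For readers unfamiliar with the analysis, a repeated-measures ANOVA with a sphericity correction (suggested by the fractional degrees of freedom above) can be run as in the hypothetical sketch below; the file and column names are illustrative only.

```python
# Hypothetical sketch: repeated-measures ANOVA across the three time points
# (pre-treatment, post-treatment, one-month follow-up) for one measure.
import pandas as pd
import pingouin as pg

scores = pd.read_csv("ibogaine_neuropsych.csv")  # long format: one row per score
psi = scores[scores["measure"] == "WAIS_IV_PSI"]

aov = pg.rm_anova(
    data=psi, dv="score", within="time", subject="participant",
    correction=True,  # Greenhouse-Geisser correction if sphericity is violated
)
print(aov)
```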
Conclusions:
To our knowledge, this is the first prospective study examining neuropsychological test performance following ibogaine use at post-treatment and one-month post-treatment time points. Our results indicated that several cognitive domains improved either post-treatment or one month post-treatment, suggesting ibogaine’s therapeutic potential for cognition in the context of traumatic brain injury and mood disorders. Potential explanations include neuroplastic changes, reduction of PTSD and mood-related effects on cognitive functioning, and practice effects. While we found no evidence of negative cognitive consequences for up to one month after a single ibogaine treatment, further study of this substance is necessary to clarify its clinical utility and safety parameters.
Face-to-face administration is the “gold standard” for both research and clinical cognitive assessments. However, many factors may impede or prevent face-to-face assessments, including distance to the clinic and limitations in mobility, eyesight, or transportation. The COVID-19 pandemic further widened gaps in access to care and clinical research participation. Alternatives to face-to-face assessments may provide an opportunity to alleviate the burden caused by both the COVID-19 pandemic and longer-standing social inequities. The objectives of this study were to develop and assess the feasibility of a telephone- and video-administered version of the Uniform Data Set (UDS) v3 cognitive batteries for use by NIH-funded Alzheimer’s Disease Research Centers (ADRCs) and other research programs.
Participants and Methods:
Ninety-three individuals (M age: 72.8 years; education: 15.6 years; 72% female; 84% White) enrolled in our ADRC were included. Their most recent adjudicated cognitive status was normal cognition (N=44), MCI (N=35), mild dementia (N=11), or other (N=3). They completed portions of the UDSv3 cognitive battery, plus the RAVLT, either by telephone or video format within approximately 6 months (M: 151 days) of their annual in-person visit, where they completed the same in-person cognitive assessments. Some measures were substituted (Oral Trails for TMT; Blind MoCA for MoCA) to allow for phone administration. Participants also answered questions about the pleasantness, difficulty level, and preference for administration mode. Cognitive testers provided ratings of perceived validity of the assessment. Participants’ cognitive status was adjudicated by a group of cognitive experts blinded to the most recent in-person cognitive status.
Results:
When results from video and phone modalities were combined, the remote assessments were rated as pleasant as the in-person assessment by 74% of participants, and 75% rated the level of difficulty of completing the remote cognitive assessment the same as the in-person testing. Overall perceived validity of the testing session, determined by cognitive assessors (video = 92%; phone = 87.5%), was good. There was generally good concordance between test scores obtained remotely and in person (r = .3–.8; p < .05), regardless of whether they were administered by phone or video, though individual test correlations differed slightly by mode. Substituted measures also generally correlated well, with the exception of TMT-A and OTMT-A (p > .05). Agreement between adjudicated cognitive status obtained remotely and cognitive status based on in-person data was generally high (78%), with slightly better concordance between video/in-person (82%) vs. phone/in-person (76%).
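Concordance checks of this kind reduce to paired correlations and percent agreement, as in the hypothetical sketch below (the file and column names are illustrative, not from the study).

```python
# Hypothetical sketch: remote vs. in-person concordance per test, plus
# percent agreement for adjudicated cognitive status.
import pandas as pd
from scipy import stats

pairs = pd.read_csv("remote_vs_inperson.csv")  # one row per participant

for test in ["ravlt_total", "blind_moca", "oral_tmt_a"]:
    r, p = stats.pearsonr(pairs[f"{test}_remote"], pairs[f"{test}_inperson"])
    print(f"{test}: r = {r:.2f}, p = {p:.3f}")

agreement = (pairs["status_remote"] == pairs["status_inperson"]).mean()
print(f"adjudication agreement: {agreement:.0%}")
```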
Conclusions:
This pilot study provided support for the use of telephone- and video-administered cognitive assessments using the UDSv3 among individuals with normal cognitive function and some degree of cognitive impairment. Participants found the experience similarly pleasant and no more difficult than in-person assessment. Test scores obtained remotely correlated well with those obtained in person, with some variability across individual tests. Adjudication of cognitive status did not differ significantly whether it was based on data obtained remotely or in person. The study was limited by its small sample size, large test-retest window, and lack of randomization to test-modality order. Current efforts are underway to more fully validate this battery of tests for remote assessment. Funded by: P30 AG072947 & P30 AG049638-05S1
There is a pressing need for sensitive, non-invasive indicators of cognitive impairment in those at risk for Alzheimer’s disease (AD). One group at increased risk for AD is APOEε4 carriers. One study found that cognitively normal APOEε4 carriers are less likely to produce low-frequency (i.e., less common) words on semantic fluency tasks relative to non-carriers, but this finding has not yet been replicated. This study aims to replicate these findings within the Wake Forest ADRC clinical core population and examine whether they extend to additional semantic fluency tasks.
Participants and Methods:
This sample includes 221 APOEε4 non-carriers (165 females, 56 males; 190 White, 28 Black/African American, 3 Asian; Mage = 69.55) and 79 APOEε4 carriers (59 females, 20 males; 58 White, 20 Black/African American, 1 Asian; Mage = 65.52) who had been adjudicated as cognitively normal at baseline. Semantic fluency data for both the animal task and the vegetable task were scored for total number of items as well as mean lexical frequency (obtained via the SUBTLEXus database). Demographic variables and additional cognitive variables (MMSE, MoCA, AMNART scores) were also included from the participants’ baseline visit.
Results:
APOEε4 carriers and non-carriers did not differ on years of education, AMNART scores, or gender (ps > 0.05). APOEε4 carriers were slightly younger and included more Black/African American participants (ps < 0.05). Stepwise linear regression was used to determine the variance in total fluency score and mean lexical frequency accounted for by APOEε4 status after including relevant demographic variables (age, sex, race, years of education, and AMNART score). As expected, demographic variables accounted for significant variance in total fluency score (p < 0.0001). Age accounted for significant variance in total fluency score for both the animal task (β = -0.32, p < 0.0001) and the vegetable task (β = -0.29, p < 0.0001), but interestingly, not the lexical frequency of words produced. After accounting for demographic variables, APOEε4 status did not account for additional variance in lexical frequency for either fluency task (ps > 0.05). Interestingly, APOEε4 status was a significant predictor of total words for the vegetable semantic fluency task only (β = 0.13, p = 0.01), resulting in a model that accounted for more variance (R2 = 0.25, F(6, 292) = 16.11, p < 0.0001) in total words than demographic variables alone (R2 = 0.23, F(5, 293) = 17.75, p < 0.0001).
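The model comparison described here can be framed as a two-step regression; the Python example below is illustrative only, with hypothetical file and column names (fluency_baseline.csv, veg_total, apoe4_carrier).

```python
# Hypothetical sketch: does APOE e4 carrier status add variance in vegetable
# fluency beyond demographic covariates?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fluency_baseline.csv")  # one row per participant

demo = smf.ols("veg_total ~ age + sex + race + education + amnart", data=df).fit()
full = smf.ols(
    "veg_total ~ age + sex + race + education + amnart + apoe4_carrier", data=df
).fit()

print(f"R2: demographics {demo.rsquared:.2f} -> + carrier status {full.rsquared:.2f}")
print(f"p(apoe4_carrier) = {full.pvalues['apoe4_carrier']:.3f}")
```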
Conclusions:
Unsurprisingly, we found that age, AMNART, and education were significant predictors of total word fluency. One unexpected finding was that age did not predict lexical frequency; that is, regardless of age, participants tended to retrieve words of the same lexical frequency, which stands in contrast to the notion that retrieval efficiency for infrequent words declines with age. With regard to APOEε4, we did not replicate existing work demonstrating differences in lexical frequency on semantic fluency tasks between ε4 carriers and non-carriers, possibly due to differences in the demographic characteristics of the sample.
The U.S. Department of Agriculture–Agricultural Research Service (USDA-ARS) has been a leader in weed science research covering topics ranging from the development and use of integrated weed management (IWM) tactics to basic mechanistic studies, including biotic resistance of desirable plant communities and herbicide resistance. ARS weed scientists have worked in agricultural and natural ecosystems, including agronomic and horticultural crops, pastures, forests, wild lands, aquatic habitats, wetlands, and riparian areas. Through strong partnerships with academia, state agencies, private industry, and numerous federal programs, ARS weed scientists have made contributions to discoveries in the newest fields of robotics and genetics, as well as the traditional and fundamental subjects of weed–crop competition and physiology and integration of weed control tactics and practices. Weed science at ARS is often overshadowed by other research topics; thus, few are aware of the long history of ARS weed science and its important contributions. This review is the result of a symposium held at the Weed Science Society of America’s 62nd Annual Meeting in 2022 that included 10 separate presentations in a virtual Weed Science Webinar Series. The overarching themes of management tactics (IWM, biological control, and automation), basic mechanisms (competition, invasive plant genetics, and herbicide resistance), and ecosystem impacts (invasive plant spread, climate change, conservation, and restoration) represent core ARS weed science research that is dynamic and efficacious and has been a significant component of the agency’s national and international efforts. This review highlights current studies and future directions that exemplify the science and collaborative relationships both within and outside ARS. Given the constraints of weeds and invasive plants on all aspects of food, feed, and fiber systems, there is an acknowledged need to face new challenges, including agriculture and natural resources sustainability, economic resilience and reliability, and societal health and well-being.
On 3–4 October 2022, the Memorial Sloan Kettering Cancer Center Supportive Care Service and Department of Psychiatry and Behavioral Sciences hosted the Third Annual United States (US) Celebration of World Hospice and Palliative Care Day (WHPCD). The purpose of this article is to reflect on the event within the broader context of the international WHPCD theme: “healing hearts and communities.” We describe lessons learned in anticipation of the fourth annual conference to be held on 3–4 October 2023.
Methods
Description of the third annual event, conference planning team reflection, and attendee evaluation responses.
Results
The Worldwide Hospice Palliative Care Alliance launched WHPCD in 2005 as an annual unified day of action to celebrate and support hospice and palliative care globally. Since 2020, the conference has attracted an increasing number of attendees from around the world. Two primary aims continue to guide the event: community building and wisdom sharing. Fifty-two interprofessional palliative care experts, advocates, patients, and caregivers provided 13 unique interactive sessions. Four hundred and fifty-eight multidisciplinary registrants from at least 17 countries joined the program. Free registration for colleagues in low- and middle-income countries, students and trainees, and individuals experiencing financial hardship remains a cornerstone of inclusion and equitable access to the event.
Significance of results
The US WHPCD celebration provides a virtual platform that offers opportunities for scientific dissemination and collective reflection on hospice and palliative care delivery amid significant local and global changes in clinical practice, research, policy and advocacy, and population health. We remain committed to ensuring an internationally relevant, culturally diverse, and multidisciplinary agenda that will continue to draw increased participation worldwide during future annual events.
Five experiments addressed a controversy in the probability judgment literature that centers on the efficacy of framing probabilities as frequencies. The natural frequency view predicts that frequency formats attenuate errors, while the nested-sets view predicts that highlighting the set-subset structure of the problem reduces error, regardless of problem format. This study tested these predictions using a conjunction task. Previous studies reporting that frequency formats reduced conjunction errors confounded reference class with problem format. After controlling this confound, the present study’s findings show that conjunction errors can be reduced using either a probability or a frequency format, that frequency effects depend upon the presence of a reference class, and that frequency formats do not promote better statistical reasoning than probability formats.
Since the vaccine rollout, research has focused on vaccine safety and efficacy, with large clinical trials confirming that vaccines are generally effective against symptomatic COVID-19 infection. However, breakthrough infections can still occur, and the effectiveness of vaccines against transmission from infected vaccinated people to susceptible contacts is unclear.
Health Technology Wales (HTW) collaborated with the Wales COVID-19 Evidence Centre to identify and examine evidence on the transmission risk of SARS-CoV-2 from vaccinated people to unvaccinated or vaccinated people.
Methods
We conducted a systematic literature search for evidence on vaccinated people exposed to SARS-CoV-2 in any setting. Outcome measures included transmission rate, cycle threshold (Ct) values, and viral load. We identified a rapid review by the University of Calgary that was the main source of our outcome data. Nine studies published following the rapid review were also identified and included.
Results
In total, 35 studies were included in this review: one randomized controlled trial (RCT), one post-hoc analysis of an RCT, 13 prospective cohort studies, 16 retrospective cohort studies, and four case-control studies.
All studies reported a reduction in transmission of the B.1.1.7 (Alpha) variant from partially and fully vaccinated individuals. More recent evidence is uncertain regarding the effects of vaccination on transmission of the B.1.617.2 (Delta) variant. Overall, vaccine effectiveness in reducing transmission appears to increase with full vaccination compared with partial vaccination. Most of the direct evidence is limited to transmission in household settings; therefore, there is a gap in the evidence on risk of transmission in other settings. One UK study found protection against onward transmission waned within 3 months after the second vaccination.
Conclusions
Early findings, which focused on the Alpha variant, showed a reduction in transmission from vaccinated people. There is limited evidence on the effectiveness of vaccination against transmission of the Delta variant; therefore, alternative preventative measures to reduce transmission may still be required.
We compared the effectiveness of 4 sampling methods to recover Staphylococcus aureus, Klebsiella pneumoniae and Clostridioides difficile from contaminated environmental surfaces: cotton swabs, RODAC culture plates, sponge sticks with manual agitation, and sponge sticks with a stomacher. Organism type was the most important factor in bacterial recovery.
Many male prisoners have significant mental health problems, including anxiety and depression. High proportions struggle with homelessness and substance misuse.
Aims
This study aims to evaluate whether the Engager intervention improves mental health outcomes following release.
Method
The design is a parallel randomised superiority trial that was conducted in the North West and South West of England (ISRCTN11707331). Men serving a prison sentence of 2 years or less were individually allocated 1:1 to either the intervention (Engager plus usual care) or usual care alone. Engager included psychological and practical support in prison, on release and for 3–5 months in the community. The primary outcome was the Clinical Outcomes in Routine Evaluation Outcome Measure (CORE-OM), 6 months after release. Primary analysis compared groups based on intention-to-treat (ITT).
Results
In total, 280 men were randomised out of the 396 who were potentially eligible and agreed to participate; 105 did not meet the mental health inclusion criteria. There was no difference between groups (92 in each arm) in the ITT complete-case analysis for change in the CORE-OM score (mean difference 1.1, 95% CI –1.1 to 3.2, P = 0.325) or in secondary analyses. There were no consistent clinically significant between-group differences for secondary outcomes. Full delivery was not achieved, with 77% (108/140) receiving community-based contact.
Conclusions
Engager is the first trial of a collaborative care intervention adapted for prison leavers. The intervention was not shown to be effective using standard outcome measures. Further testing of different support strategies for prison leavers with mental health problems is needed.
The rapid spread of coronavirus disease 2019 (COVID-19) required swift preparation to protect healthcare personnel (HCP) and patients, especially considering shortages of personal protective equipment (PPE). Due to the lack of a pre-existing biocontainment unit, we needed to develop a novel approach to placing patients in isolation cohorts while working with the pre-existing physical space.
Objectives:
To prevent disease transmission to non–COVID-19 patients and HCP caring for COVID-19 patients, to optimize PPE usage, and to provide a comfortable and safe working environment.
Methods:
An interdisciplinary workgroup developed a combination of approaches to convert existing spaces into COVID-19 containment units with high-risk zones (HRZs). We developed standard workflows and visual management in conjunction with updated staff training. The infection prevention team created PPE standard practices for ease of use, conservation, and staff safety.
Results:
The interventions resulted in one possible case of patient-to-HCP transmission and no cases of patient-to-patient transmission. PPE usage decreased with the HRZ model while maintaining a safe environment of care. Staff on the COVID-19 units were extremely satisfied with PPE availability (76.7%) and efforts to protect them from COVID-19 (72.7%). Moreover, 54.8% of HCP working in the COVID-19 unit agreed that PPE monitors played an essential role in staff safety.
Conclusions:
The HRZ model of containment unit is an effective method to prevent the spread of COVID-19, with several benefits. It is easily implemented and scaled to accommodate census changes. Our experience suggests that other institutions can create similarly protective spaces without modifying existing physical structures.