Hurricane Maria caused catastrophic damage in Puerto Rico, increasing the risk for morbidity and mortality in the post-impact period. We aimed to establish a syndromic surveillance system to describe the number and type of visits at 2 emergency health-care settings in the same hospital system in Ponce, Puerto Rico.
We implemented a hurricane surveillance system by interviewing patients with a short questionnaire about the reason for their visit at a hospital emergency department and an associated urgent care clinic in the 6 months after Hurricane Maria. We then evaluated the system by comparing findings with data from the electronic medical record (EMR) system for the same time period.
The hurricane surveillance system captured information from 5116 participants across the 2 sites, representing 17% of all visits captured in the EMR for the same period. Most visits were associated with acute illness/symptoms (79%), followed by injury (11%). The hurricane surveillance and EMR data were similar, proportionally, by sex, age, and visit category.
The hurricane surveillance system provided timely and representative data about the number and type of visits at 2 sites. This system, or an adapted version using available electronic data, should be considered in future disaster settings.
The rocky shores of the north-east Atlantic have long been studied. Our focus is from Gibraltar to Norway, plus the Azores and Iceland. Phylogeographic processes shape biogeographic patterns of biodiversity. Long-term and broadscale studies have shown the responses of biota to past climate fluctuations and more recent anthropogenic climate change. Inter- and intra-specific species interactions along sharp local environmental gradients shape distributions and community structure, and hence ecosystem functioning. Shifts from domination by fucoids in sheltered areas to barnacles and mussels in exposed areas are mediated by grazing by patellid limpets. Further south, fucoids become increasingly rare, with species disappearing or restricted to estuarine refuges, caused by greater desiccation and grazing pressure. Mesoscale processes influence bottom-up nutrient forcing and larval supply, hence affecting species abundance and distribution, and can be proximate factors setting range edges (e.g., the English Channel, the Iberian Peninsula). Impacts of invasive non-native species are reviewed. Knowledge gaps, such as work on rockpools and host–parasite dynamics, are also outlined.
The aim of this study was to describe individuals seeking care for injury at a major emergency department (ED) in southern Puerto Rico in the months after Hurricane Maria on September 20, 2017.
After informed consent, we used a modified version of the Natural Disaster Morbidity Surveillance Form to determine why patients were visiting the ED during October 16, 2017–March 28, 2018. We analyzed visits where injury was reported as the primary reason for visit and whether it was hurricane-related.
Among 5 116 patients, 573 (11%) reported injury as the primary reason for a visit. Of these, 10% were hurricane-related visits. The most common types of injuries were abrasions, lacerations, and cuts (43% of all injury visits and 50% of hurricane-related visits). The most common mechanisms of injury were falls, slips, and trips (268, 47%) and being struck by or against an object (88, 15%). Most injury visits occurred during the first 3 months after the hurricane.
Surveillance after Hurricane Maria identified injury as the reason for a visit for about 1 in 10 patients visiting the ED, providing evidence on the patterns of injuries in the months following a hurricane. Public health and emergency providers can use this information to anticipate health care needs after a disaster.
Item 9 of the Patient Health Questionnaire-9 (PHQ-9) queries about thoughts of death and self-harm, but not suicidality. Although it is sometimes used to assess suicide risk, most positive responses are not associated with suicidality. The PHQ-8, which omits Item 9, is thus increasingly used in research. We assessed equivalency of total score correlations and the diagnostic accuracy to detect major depression of the PHQ-8 and PHQ-9.
We conducted an individual patient data meta-analysis. We fit bivariate random-effects models to assess diagnostic accuracy.
16 742 participants (2097 major depression cases) from 54 studies were included. The correlation between PHQ-8 and PHQ-9 scores was 0.996 (95% confidence interval 0.996 to 0.996). The standard cutoff score of 10 for the PHQ-9 maximized sensitivity + specificity for the PHQ-8 among studies that used a semi-structured diagnostic interview reference standard (N = 27). At cutoff 10, the PHQ-8 was less sensitive by 0.02 (−0.06 to 0.00) and more specific by 0.01 (0.00 to 0.01) among those studies (N = 27), with similar results for studies that used other types of interviews (N = 27). For all 54 primary studies combined, across all cutoffs, the PHQ-8 was less sensitive than the PHQ-9 by 0.00 to 0.05 (0.03 at cutoff 10), and specificity was within 0.01 for all cutoffs (0.00 to 0.01).
PHQ-8 and PHQ-9 total scores were similar. Sensitivity may be minimally reduced with the PHQ-8, but specificity is similar.
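The diagnostic-accuracy comparison above rests on computing sensitivity and specificity of a questionnaire cutoff against a reference-standard diagnosis. A minimal sketch of that calculation, using entirely hypothetical scores and diagnoses (not the study's data):

```python
# Illustrative sketch: sensitivity and specificity of a questionnaire
# cutoff (score >= cutoff counts as a positive screen) against a
# reference-standard diagnosis. All data below are hypothetical.

def sens_spec(scores, diagnoses, cutoff):
    """Return (sensitivity, specificity) for the given cutoff."""
    tp = sum(1 for s, d in zip(scores, diagnoses) if s >= cutoff and d)
    fn = sum(1 for s, d in zip(scores, diagnoses) if s < cutoff and d)
    tn = sum(1 for s, d in zip(scores, diagnoses) if s < cutoff and not d)
    fp = sum(1 for s, d in zip(scores, diagnoses) if s >= cutoff and not d)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical PHQ-style scores and reference diagnoses.
scores    = [3, 12, 10, 7, 15, 4, 11, 9, 18, 2]
diagnoses = [False, False, True, False, True, False, True, True, True, False]

sens, spec = sens_spec(scores, diagnoses, cutoff=10)
```

Sweeping `cutoff` over the score range and picking the value that maximizes `sens + spec` reproduces the "maximized sensitivity + specificity" criterion described above.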
Quality Improvement and Patient Safety (QIPS) plays an important role in addressing shortcomings in optimal healthcare delivery. However, there is little published guidance available for emergency department (ED) teams with respect to developing their own QIPS programs. We sought to create recommendations for established and aspiring ED leaders to use as a pathway to better patient care through programmatic QIPS activities, starting internally and working towards interdepartmental collaboration.
An expert panel comprising ten ED clinicians with QIPS and leadership expertise was established. A scoping review was conducted to identify published literature on establishing QIPS programs and frameworks in healthcare. Stakeholder consultations were conducted among Canadian healthcare leaders, and recommendations were drafted by the expert panel based on all the accumulated information. These were reviewed and refined at the 2018 CAEP Academic Symposium in Calgary using in-person and technologically supported feedback.
Recommendations include: creating a sense of urgency for improvement; engaging relevant stakeholders and leaders; creating a formal local QIPS Committee; securing funding and resources; obtaining local data to guide the work; supporting QIPS training for team members; encouraging interprofessional, cross-departmental, and patient collaborations; using an established QIPS framework to guide the work; developing reward mechanisms and incentive structures; and considering starting small by focusing on a project rather than a program.
A list of 10 recommendations is presented as guiding principles for the establishment and sustainable deployment of QIPS activities in EDs throughout Canada and abroad. ED leaders are encouraged to implement our recommendations in an effort to improve patient care.
Experiments were initiated to characterize a waterhemp population (CHR) discovered in a central Illinois corn field after it was not controlled by the 4-hydroxyphenylpyruvate dioxygenase (HPPD) inhibitor topramezone. Field experiments conducted during 2014–2015 indicated that acetolactate synthase (ALS)-, protoporphyrinogen oxidase (PPO)-, photosystem II (PSII)-, and HPPD-inhibiting herbicides and the synthetic auxin 2,4-D did not control the CHR population. Laboratory experiments confirmed target site–based resistance mechanisms to ALS- and PPO-inhibiting herbicides. Herbicide doses required to reduce dry biomass 50% (GR50) were determined in greenhouse dose–response experiments, and indicated 16-fold resistance to the HPPD inhibitor mesotrione, 9.5-fold resistance to the synthetic auxin 2,4-D, and 252-fold resistance to the PSII inhibitor atrazine. Complementary results from field, laboratory, and greenhouse investigations indicate that the CHR population has evolved resistance to herbicides from five sites of action (SOAs): ALS-, PPO-, PSII-, and HPPD-inhibiting herbicides and 2,4-D. Herbicide use history for the field in which CHR was discovered indicates no previous use of 2,4-D.
Veterans are at high risk for suicide; emotion dysregulation may confer additional risk. Dialectical behaviour therapy (DBT) is a well-supported intervention for suicide attempt reduction in individuals with emotion dysregulation, but is complex and multi-component. The skills group component of DBT (DBT-SG) has been associated with reduced suicidal ideation and emotion dysregulation. DBT-SG for Veterans at risk for suicide has not been studied.
This study sought to evaluate the feasibility and acceptability of DBT-SG in Veterans and to gather preliminary evidence for its efficacy in reducing suicidal ideation and emotion dysregulation and increasing coping skills.
Veterans with suicidal ideation and emotion dysregulation (N = 17) enrolled in an uncontrolled pilot study of a 26-week DBT-SG as an adjunct to mental health care-as-usual.
Veterans attended an average of 66% of DBT-SG sessions. Both Veterans and their primary mental health providers believed DBT-SG promoted Veterans’ use of coping skills to reduce suicide risk, and they were satisfied with the treatment. Paired-sample t-tests comparing baseline scores with later scores indicated suicidal ideation and emotion dysregulation decreased at post-treatment (d = 1.88 and 2.75, respectively) and stayed reduced at 3-month follow-up (d = 2.08 and 2.59, respectively). Likewise, skillful coping increased at post-treatment (d = 0.85) and was maintained at follow-up (d = 0.91).
An uncontrolled pilot study indicated DBT-SG was feasible, acceptable, and demonstrated potential efficacy in reducing suicidal ideation and emotion dysregulation among Veterans. A randomized controlled study of DBT-SG with Veterans at risk for suicide is warranted.
Objectives: Insomnia is associated with neuropsychological dysfunction. Evidence points to the role of nocturnal light exposure in disrupted sleep patterns, particularly blue light emitted by smartphones and computers used before bedtime. This study aimed to test whether blocking nocturnal blue light improves neuropsychological function in individuals with insomnia symptoms.
Methods: This study used a randomized, placebo-controlled crossover design. Participants were randomly assigned to a 1-week intervention with amber lenses worn in wrap-around frames (to block blue light) or a 1-week intervention with clear lenses (control), and switched conditions after a 4-week washout period. Neuropsychological function was evaluated with tests from the NIH Toolbox Cognition Battery at three time points: (1) baseline (BL), (2) following the amber lenses intervention, and (3) following the clear lenses intervention. Within-subjects general linear models contrasted neuropsychological test performance following the amber lenses and clear lenses conditions with BL performance.
Results: Fourteen participants (mean (standard deviation, SD) age = 46.5 (11.4)) with symptoms of insomnia completed the protocol. Compared with BL, individuals performed better on the List Sorting Working Memory task after the amber lenses intervention, but similarly after the clear lenses intervention (F = 5.16; p = 0.014; η2 = 0.301). A similar pattern emerged on the Pattern Comparison Processing Speed test (F = 7.65; p = 0.002; η2 = 0.370). Consideration of intellectual ability indicated that treatment with amber lenses “normalized” performance on each test from approximately 1 SD below expected performance to expected performance.
Conclusions: Using a randomized, placebo-controlled crossover design, we demonstrated improvement in processing speed and working memory with a nocturnal blue light blocking intervention among individuals with insomnia symptoms. (JINS, 2019, 25, 668–677)
We sought to define the prevalence of echocardiographic abnormalities in long-term survivors of paediatric hematopoietic stem cell transplantation and determine the utility of screening in asymptomatic patients. We analysed echocardiograms performed on survivors who underwent hematopoietic stem cell transplantation from 1982 to 2006. A total of 389 patients were alive in 2017, with 114 having an echocardiogram obtained ⩾5 years post-infusion. A total of 95 patients had an echocardiogram performed for routine surveillance. The mean time post-hematopoietic stem cell transplantation was 13 years. Of 95 patients, 77 (82.1%) had ejection fraction measured, and 10/77 (13.0%) had ejection fraction z-scores ⩽−2.0, which is abnormally low. Those patients with abnormal ejection fraction were significantly more likely to have been exposed to anthracyclines or total body irradiation. Among individuals who received neither anthracyclines nor total body irradiation, only 1/31 (3.2%) was found to have an abnormal ejection fraction of 51.4%, z-score −2.73. In the cohort of 77 patients, the negative predictive value of having a normal ejection fraction given no exposure to total body irradiation or anthracyclines was 96.7% (95% confidence interval, 83.3–99.8%). Systolic dysfunction is relatively common in long-term survivors of paediatric hematopoietic stem cell transplantation who have received anthracyclines or total body irradiation. Survivors who are asymptomatic and did not receive radiation or anthracyclines likely do not require surveillance echocardiograms, unless otherwise indicated.
Takayasu’s arteritis is a rare idiopathic arteritis causing stenosis or aneurysms of the aorta, pulmonary arteries, and their branches. It usually occurs in women, but has been described in children.
The objective of this study was to determine the clinical presentation, demographic profile, vascular involvement, origins, management, and outcome of children diagnosed with Takayasu’s arteritis at a Southern African tertiary care centre between 1993 and 2015.
This is a retrospective analysis of all children with Takayasu’s arteritis captured on a computerised electronic database during the study period.
A total of 55 children were identified. The female:male ratio was 3.2:1, and the mean age was 9.7±3.04 years. Most originated outside the provincial borders of the study centre. The majority presented with hypertension and heart failure. In all, 37 (67%) patients had a cardiomyopathy, with a mean fractional shortening of 15±5%. A positive purified protein derivative test was documented in 73%. Abdominal aorta and renal artery stenosis were the predominant angiographic lesions. A total of 23 patients underwent 30 percutaneous interventions of the aorta and the pulmonary and renal arteries, comprising eight stent placements and 22 balloon angioplasties; seven patients underwent nephrectomy. All patients received empiric tuberculosis treatment, immunosuppressive therapy, and anti-hypertensive agents as required. Overall, there was a significant reduction in systolic blood pressure and improvement in fractional shortening (p<0.05) with all treatments.
Takayasu’s arteritis is more common in girls and frequently manifests with hypertension and heart failure. The abdominal aorta and renal arteries are mostly affected. Immunosuppressive, anti-hypertensive, and vascular intervention therapies improve blood pressure control and cardiac function.
The Sahara was wetter and greener during multiple interglacial periods of the Quaternary, when some have suggested it featured very large (mega) lakes, ranging in surface area from 30,000 to 350,000 km2. In this paper, we review the physical and biological evidence for these large lakes, especially during the African Humid Period (AHP) 11–5 ka. Megalake systems from around the world provide a checklist of diagnostic features, such as multiple well-defined shoreline benches, wave-rounded beach gravels where coarse material is present, landscape smoothing by lacustrine sediment, large-scale deltaic deposits, and in places, tufas encrusting shorelines. Our survey reveals no clear evidence of these features in the Sahara, except in the Chad basin. Hydrologic modeling of the proposed megalakes requires mean annual rainfall ≥1.2 m/yr and a northward displacement of tropical rainfall belts by ≥1000 km. Such a profound displacement is not supported by other paleo-climate proxies and comprehensive climate models, challenging the existence of megalakes in the Sahara. Rather than megalakes, isolated wetlands and small lakes are more consistent with the Sahelo-Sudanian paleoenvironment that prevailed in the Sahara during the AHP. A pale-green and discontinuously wet Sahara is the likelier context for human migrations out of Africa during the late Quaternary.
OBJECTIVES/SPECIFIC AIMS: Neurological injury remains the main limiting factor for overall recovery after cardiac arrest (CA). Currently available indicators of neurological injury are inadequate for early prognostication after return of spontaneous circulation (ROSC). High diversification of brain mitochondrial cardiolipins (CL) makes them unique candidates to quantify brain injury and to predict prognosis early after ROSC. METHODS/STUDY POPULATION: CL content in plasma in 39 patients within 6 hours of ROSC and 10 healthy subjects, as well as CL content in human heart and brain specimens, was quantified using a high-resolution liquid chromatography mass spectrometry method. The quantities of brain-type CL species were correlated with clinical parameters of brain injury severity, permitting derivation of a cerebral CL score (C-score) using linear regression. The C-score and a single CL species (70:5) were evaluated in patients with varying neurological injury and outcome. Using a rat model of CA, CL was quantified in the plasma and brain of rats using similar methods and results compared with controls. RESULTS/ANTICIPATED RESULTS: We found that the brain and the heart fell on extreme ends of the CL diversity spectrum, with 26 species of CL exclusively present in human brain but not heart. Nine of these 26 species were present in plasma within 6 hours of ROSC, with quantities correlating with greater brain injury. The C-score correlated with early neurologic injury and predicted discharge neurologic/functional outcome. CL (70:5) emerged as a potential point-of-care marker that alone was predictive of injury severity and outcome nearly as well as the C-score. Using a rat CA model, we showed a significant reduction in hippocampal CL content corresponding to CL released from the brain into systemic circulation. The C-score was significantly increased in 10-minute versus 5-minute no-flow CA and naïve controls.
DISCUSSION/SIGNIFICANCE OF IMPACT: CA results in appearance and accumulation of CL in plasma, proportional to injury severity. Quantitation of brain-type CL species in plasma can be used to prognosticate neurological injury within 6 hours after ROSC.
Protected areas are central to global efforts to prevent species extinctions, with many countries investing heavily in their establishment. Yet the designation of protected areas alone can only abate certain threats to biodiversity. Targeted management within protected areas is often required to achieve fully effective conservation within their boundaries. It remains unclear what combination of protected area designation and management is needed to remove the suite of processes that imperil species. Here, using Australia as a case study, we use a dataset on the pressures facing threatened species to determine the role of protected areas and management in conserving imperilled species. We found that protected areas that are not resourced for threat management could remove one or more threats to 1,185 (76%) species and all threats to very few (n = 51, 3%) species. In contrast, a protected area network that is adequately resourced to manage threatening processes within their boundary could remove one or more threats to almost all species (n = 1,551; c. 100%) and all threats to almost half (n = 740, 48%). However, 815 (52%) species face one or more threats that require coordinated conservation actions that protected areas alone could not remove. This research shows that investing in the continued expansion of Australia's protected area network without providing adequate funding for threat management within and beyond the existing protected area network will benefit few threatened species. These findings highlight that as the international community expands the global protected area network in accordance with the 2020 Strategic Plan for Biodiversity, a greater emphasis on the effectiveness of threat management is needed.
Different diagnostic interviews are used as reference standards for major depression classification in research. Semi-structured interviews involve clinical judgement, whereas fully structured interviews are completely scripted. The Mini International Neuropsychiatric Interview (MINI), a brief fully structured interview, is also sometimes used. It is not known whether interview method is associated with probability of major depression classification.
To evaluate the association between interview method and odds of major depression classification, controlling for depressive symptom scores and participant characteristics.
Data collected for an individual participant data meta-analysis of Patient Health Questionnaire-9 (PHQ-9) diagnostic accuracy were analysed and binomial generalised linear mixed models were fit.
A total of 17 158 participants (2287 with major depression) from 57 primary studies were analysed. Among fully structured interviews, odds of major depression were higher for the MINI compared with the Composite International Diagnostic Interview (CIDI) (odds ratio (OR) = 2.10; 95% CI = 1.15–3.87). Compared with semi-structured interviews, fully structured interviews (MINI excluded) were non-significantly more likely to classify participants with low-level depressive symptoms (PHQ-9 scores ≤6) as having major depression (OR = 3.13; 95% CI = 0.98–10.00), similarly likely for moderate-level symptoms (PHQ-9 scores 7–15) (OR = 0.96; 95% CI = 0.56–1.66) and significantly less likely for high-level symptoms (PHQ-9 scores ≥16) (OR = 0.50; 95% CI = 0.26–0.97).
The MINI may identify more people as depressed than the CIDI, and semi-structured and fully structured interviews may not be interchangeable methods, but these results should be replicated.
Declaration of interest
Drs Jetté and Patten declare that they received a grant, outside the submitted work, from the Hotchkiss Brain Institute, which was jointly funded by the Institute and Pfizer. Pfizer was the original sponsor of the development of the PHQ-9, which is now in the public domain. Dr Chan is a steering committee member or consultant of Astra Zeneca, Bayer, Lilly, MSD and Pfizer. She has received sponsorships and honorarium for giving lectures and providing consultancy and her affiliated institution has received research grants from these companies. Dr Hegerl declares that within the past 3 years, he was an advisory board member for Lundbeck, Servier and Otsuka Pharma; a consultant for Bayer Pharma; and a speaker for Medice Arzneimittel, Novartis, and Roche Pharma, all outside the submitted work. Dr Inagaki declares that he has received grants from Novartis Pharma, lecture fees from Pfizer, Mochida, Shionogi, Sumitomo Dainippon Pharma, Daiichi-Sankyo, Meiji Seika and Takeda, and royalties from Nippon Hyoron Sha, Nanzando, Seiwa Shoten, Igaku-shoin and Technomics, all outside of the submitted work. Dr Yamada reports personal fees from Meiji Seika Pharma Co., Ltd., MSD K.K., Asahi Kasei Pharma Corporation, Seishin Shobo, Seiwa Shoten Co., Ltd., Igaku-shoin Ltd., Chugai Igakusha and Sentan Igakusha, all outside the submitted work. All other authors declare no competing interests. No funder had any role in the design and conduct of the study; collection, management, analysis and interpretation of the data; preparation, review or approval of the manuscript; and decision to submit the manuscript for publication.
Children with chronic illness often experience difficulties at school, yet little is known about the impact of the child's illness on siblings’ school experiences. This study investigated parents’ perceptions of siblings’ school experiences and school support. We conducted semi-structured telephone interviews with 27 parents of children with a chronic illness who had a sibling or siblings (4–25 years), representing the experiences of 31 siblings. Interviews were audio-recorded, transcribed, and analysed using content analysis. Parents believed that 14 of 31 (45.2%) siblings had school difficulties related to the ill child, such as increased anxiety or stress at school, lack of attention from teachers, and changes in behaviour as a result of increased carer responsibilities. Parents identified increased absenteeism due to the ill child's hospitalisation and the impact of parent absences on sibling school functioning. Parents described general and psychological support from the school, and the importance of monitoring the sibling at school and focusing on their unique needs. Overall, our findings suggest the need for a school-based sibling support model that combines psycho-education for siblings and school personnel, individualised sibling psychological support, and shared school and parent responsibility in normalising the sibling experience and providing consistent support.
22q11.2 deletion syndrome (22q11.2DS) is associated with high rates of neurodevelopmental disorder; however, the links between developmental coordination disorder (DCD), intellectual function and psychiatric disorder remain unexplored.
To establish the prevalence of indicative DCD in children with 22q11.2DS and examine associations with IQ, neurocognition and psychopathology.
Neurocognitive assessments and psychiatric interviews of 70 children with 22q11.2DS (mean age 11.2, s.d. = 2.2) and 32 control siblings (mean age 11.5, s.d. = 2.1) were carried out in their homes. Nine children with 22q11.2DS and indicative DCD were subsequently assessed in an occupational therapy clinic.
Indicative DCD was found in 57 (81.4%) children with 22q11.2DS compared with 2 (6.3%) control siblings (odds ratio (OR) = 36.7, P < 0.001). Eight of nine (89%) children with indicative DCD met DSM-5 criteria for DCD. Poorer coordination was associated with increased numbers of anxiety (P < 0.001), attention-deficit hyperactivity disorder (ADHD) (P < 0.001) and autism-spectrum disorder (ASD) symptoms (P < 0.001) in children with 22q11.2DS. Furthermore, 100% of children with 22q11.2DS and ADHD had indicative DCD (20 of 20), as did 90% of children with anxiety disorder (17 of 19) and 96% of children who screened positive for ASD (22 of 23). The Developmental Coordination Disorder Questionnaire score was related to sustained attention (P = 0.006), even after history of epileptic fits (P = 0.006) and heart problems (P = 0.009) was taken into account.
Clinicians should be aware of the high risk of coordination difficulties in children with 22q11.2DS and its association with risk of mental disorder and specific neurocognitive deficits.
Whether monozygotic (MZ) and dizygotic (DZ) twins differ from each other in a variety of phenotypes is important for genetic twin modeling and for inferences made from twin studies in general. We analyzed whether there were differences in individual, maternal and paternal education between MZ and DZ twins in a large pooled dataset. Information was gathered on individual education for 218,362 adult twins from 27 twin cohorts (53% females; 39% MZ twins), and on maternal and paternal education for 147,315 and 143,056 twins respectively, from 28 twin cohorts (52% females; 38% MZ twins). Together, we had information on individual or parental education from 42 twin cohorts representing 19 countries. The original education classifications were transformed to education years and analyzed using linear regression models. Overall, MZ males had 0.26 (95% CI [0.21, 0.31]) years and MZ females 0.17 (95% CI [0.12, 0.21]) years longer education than DZ twins. The zygosity difference became smaller in more recent birth cohorts for both males and females. Parental education was somewhat longer for fathers of DZ twins in cohorts born in 1990–1999 (0.16 years, 95% CI [0.08, 0.25]) and 2000 or later (0.11 years, 95% CI [0.00, 0.22]), compared with fathers of MZ twins. The results show that the years of both individual and parental education are largely similar in MZ and DZ twins. We suggest that the socio-economic differences between MZ and DZ twins are so small that inferences based upon genetic modeling of twin data are not affected.
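The zygosity comparison above uses linear regression of education years on zygosity. With a single binary predictor, the OLS coefficient for an MZ indicator reduces to the difference in group means, which the following sketch illustrates on hypothetical data (not the pooled twin-cohort dataset):

```python
# Illustrative sketch: estimating a zygosity difference in education
# years. For one binary predictor, the OLS regression coefficient for
# the MZ indicator equals mean(MZ) - mean(DZ). Data are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

def zygosity_effect(edu_years, is_mz):
    """OLS slope for a binary MZ indicator: mean(MZ) - mean(DZ)."""
    mz = [e for e, m in zip(edu_years, is_mz) if m]
    dz = [e for e, m in zip(edu_years, is_mz) if not m]
    return mean(mz) - mean(dz)

edu   = [12, 14, 16, 12, 13, 15, 11, 14]   # hypothetical education years
is_mz = [True, True, True, True, False, False, False, False]

effect = zygosity_effect(edu, is_mz)
```

In the actual analysis, covariates such as birth cohort and sex would be added to the regression, so the coefficient is an adjusted rather than a raw mean difference.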
To determine the impact of recurrent Clostridium difficile infection (RCDI) on patient behaviors following illness.
Using a computer algorithm, we searched the electronic medical records of 7 Chicago-area hospitals to identify patients with RCDI (2 episodes of CDI within 15 to 56 days of each other). RCDI was validated by medical record review. Patients were asked to complete a telephone survey. The survey included questions regarding general health, social isolation, symptom severity, emotional distress, and prevention behaviors.
In total, 119 patients completed the survey (32%). On average, respondents were 57.4 years old (standard deviation, 16.8); 57% were white, and ~50% reported hospitalization for CDI. At the time of their most recent illness, patients rated their diarrhea as high severity (58.5%) and their exhaustion as extreme (30.7%). Respondents indicated that they were very worried about getting sick again (41.5%) and about infecting others (31%). Almost 50% said that they have washed their hands more frequently (47%) and have increased their use of soap and water (45%) since their illness. Some of these patients (22%–32%) reported eating out less, avoiding certain medications and public areas, and increasing probiotic use. Most behavioral changes were unrelated to disease severity.
Having had RCDI appears to increase prevention-related behaviors in some patients. While some behaviors are appropriate (eg, handwashing), others are not supported by evidence of decreased risk and may negatively impact patient quality of life. Providers should discuss appropriate prevention behaviors with their patients and should clarify that other behaviors (eg, eating out less) will not affect their risk of future illness.