Recent information indicates that the number of forensic patients in state hospitals has been increasing, largely driven by an increase in patients referred to state hospitals as incompetent to stand trial (IST). This survey was intended to broaden the understanding of IST population trends on a national level.
The authors developed a 30-question survey to gather specific information on IST commitments in each state and the District of Columbia. The survey was administered to all 50 states and the District of Columbia via email. Specific individuals identified as primary administrators responsible for the care and evaluation of IST admissions in each state were contacted.
A total of 50 out of the 51 jurisdictions contacted completed the survey. Fully 82% of states indicated that referrals for competency evaluation were increasing. Additionally, 78% of respondents thought referrals for competency restoration were increasing. When asked to rank factors that led to an increase, the highest ranked response was inadequate general mental health services in the community. Inadequate crisis services were the second ranked reason. Inadequate number of inpatient psychiatric beds in the community was the third highest, with inadequate assertive community treatment services ranking fourth.
Understanding the national trend and causes behind the recent surge in referrals for IST admissions will benefit states searching for ways to remedy this crisis. Our survey indicates that most states are facing this issue and that it is largely related to insufficient services in the community.
To evaluate whether clinical cultures are an appropriate surrogate for surveillance cultures to measure the effect of interventions on the incidence of MRSA and VRE in the hospital.
Cross-sectional, quasi-experimental, retrospective analysis.
Setting and population:
Convenience sample of patients admitted between January 1, 2002, and June 30, 2011, to the medical intensive care unit (MICU) and surgical intensive care unit (SICU) of an acute-care hospital in the United States.
Asynchronously in the MICU and SICU, we introduced (1) universal glove and gown use, (2) bundled intervention to prevent central-line–associated bloodstream infection, and (3) daily chlorhexidine gluconate bathing.
We observed a statistically significant correlation between surveillance- and clinical-culture-based incidence rates of MRSA in the MICU (0.32; P < .001) and the SICU (0.37; P < .001) but not for VRE in either the MICU (0.16; P = .11) or the SICU (0.15; P = .12). For VRE, but not for MRSA, incidence density rates based on surveillance cultures were 2- to 4-fold higher than those based on clinical cultures. When evaluating the impacts of the interventions, different effect estimates were noted for universal glove and gown use on MRSA acquisition in the MICU, and for VRE acquisition in both the MICU and the SICU, based on surveillance versus clinical cultures.
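The correlations above compare paired incidence-rate series derived from surveillance versus clinical cultures. A minimal sketch of such a comparison is below; the monthly rates are illustrative only (not study data), and the abstract does not state which correlation estimator was used, so a plain Pearson coefficient is assumed here:

```python
import math

def pearson(x, y):
    """Pearson correlation between two incidence-rate series
    (e.g., monthly MRSA acquisitions per 1,000 patient-days)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative monthly rates (not study data)
surveillance = [4.1, 3.8, 5.0, 4.4, 3.2, 4.9]
clinical = [1.2, 1.0, 1.6, 1.3, 0.9, 1.4]
print(round(pearson(surveillance, clinical), 2))
```

A low coefficient, as reported for VRE, would indicate that the two culture types do not track each other month to month, which is the study's central concern.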
For multidrug-resistant organism acquisition, surveillance cultures should be used when feasible because clinical cultures may not be an appropriate surrogate. Clinical or surveillance-based end points for infection control interventions should reflect the conceptual model from colonization to infection and where an intervention might have an effect, rather than considering them interchangeable.
Herbicide applications performed with pulse width modulation (PWM) sprayers to deliver specific spray droplet sizes could maintain product efficacy, minimize potential off-target movement, and increase flexibility in field operations. Given the continuous expansion of herbicide-resistant Palmer amaranth populations across the southern and midwestern United States, efficacious and cost-effective means of application are needed to maximize Palmer amaranth control. Experiments were conducted in two locations in Mississippi (2016, 2017, and 2018) and one location in Nebraska (2016 and 2017) for a total of seven site-years. The objective of this study was to evaluate the influence of a range of spray droplet sizes [150 μm (Fine) to 900 μm (Ultra Coarse)] on lactofen and acifluorfen efficacy for Palmer amaranth control. The results of this research indicated that spray droplet size did not influence lactofen efficacy on Palmer amaranth. Palmer amaranth control and percent dry biomass reduction remained consistent with lactofen applied across the aforementioned droplet size range. Therefore, larger spray droplets should be used as part of a drift mitigation approach. In contrast, acifluorfen application with 300 μm (Medium) spray droplets provided the greatest Palmer amaranth control. Although percent biomass reduction was numerically greater with 300 μm (Medium) droplets, results did not differ with respect to spray droplet size, possibly due to initial plant injury, causing weight loss, followed by regrowth. Overall, 900 μm (Ultra Coarse) droplets could be used effectively without compromising lactofen efficacy on Palmer amaranth, and 300 μm (Medium) droplets should be used to achieve maximum Palmer amaranth control with acifluorfen.
Neuroanatomical abnormalities in first-episode psychosis (FEP) tend to be subtle and widespread. The vast majority of previous studies have used small samples, and therefore may have been underpowered. In addition, most studies have examined participants at a single research site, and therefore the results may be specific to the local sample investigated. Consequently, the findings reported in the existing literature are highly heterogeneous. This study aimed to overcome these issues by testing for neuroanatomical abnormalities in individuals with FEP that are expressed consistently across several independent samples.
Structural magnetic resonance imaging data were acquired from a total of 572 individuals with FEP and 502 age- and gender-comparable healthy controls at five sites. Voxel-based morphometry was used to investigate differences in grey matter volume (GMV) between the two groups. Statistical inferences were made at p < 0.05 after family-wise error correction for multiple comparisons.
Patients with FEP showed a widespread pattern of decreased GMV in fronto-temporal, insular and occipital regions bilaterally; these decreases were not dependent on anti-psychotic medication. GMV in the region with the most pronounced decrease, the gyrus rectus, was negatively correlated with the severity of positive and negative symptoms.
This study identified a consistent pattern of fronto-temporal, insular and occipital abnormalities in five independent FEP samples; furthermore, the extent of these alterations is dependent on the severity of symptoms and duration of illness. This provides evidence for reliable neuroanatomical alterations in FEP, expressed above and beyond site-related differences in anti-psychotic medication, scanning parameters and recruitment criteria.
To determine the use and perceived value of different information sources that patients may use to support identification of medicine side effects; to explore associations between coping styles and use of information sources.
Side effects from medicines can have a considerable negative impact on people's daily lives. As a result of an ageing UK population and attendant multi-morbidity, an increasing number of medicines are being prescribed for patients, leading to increased risk of unintended side effects.
A cross-sectional survey of patients who use medicine, recruited from community pharmacies. The survey sought views on attributes of various information sources, their predicted and actual use, incorporating a shortened Side Effects Coping Questionnaire (SECope) scale and the abbreviated Miller Behavioural Style Scale (MBSS).
Of 935 questionnaires distributed, 230 (25.0%) were returned; 61.3% of respondents were female, 44.7% were retired and 84.6% used at least one medicine regularly. Overall, 69.6% had experienced a side effect, with 57.5% of these stopping the medicine. Patient information leaflets (PILs) and GPs were the most widely used sources, both as predicted and in practice, despite GPs being judged relatively less accessible and PILs less trustworthy, particularly by regular medicine users. Pharmacists, considered both easy to access and trustworthy, were used by few in practice, while the internet was considered easy to access but less trustworthy, and was also little used. SECope sub-scales for non-adherence and information seeking showed positive associations with stopping a medicine and seeking information from a health professional. More high monitors than low monitors stopped a medicine themselves, but there were no differences in use of information sources. Information seeking following a side effect is a common strategy, potentially predicted by the SECope but not the MBSS. Limited GP accessibility could contribute to high internet use. Further research could determine how the trustworthiness of PILs can be improved.
To date, there are no published data on the association of patient-centered outcomes and accurate public-safety answering point (PSAP) dispatch in an American population. The goal of this study is to determine if PSAP dispatcher recognition of out-of-hospital cardiac arrest (OHCA) is associated with neurologically intact survival to hospital discharge.
This retrospective cohort study is an analysis of prospectively collected Quality Assurance/Quality Improvement (QA/QI) data from the San Antonio Fire Department (SAFD; San Antonio, Texas USA) OHCA registry from January 2013 through December 2015. Exclusion criteria were: Emergency Medical Services (EMS)-witnessed arrest, traumatic arrest, age <18 years old, no dispatch type recorded, and missing outcome data. The primary exposure was dispatcher recognition of cardiac arrest. The primary outcome was neurologically intact survival (defined as Cerebral Performance Category [CPC] 1 or 2) to hospital discharge. The secondary outcomes were: bystander cardiopulmonary resuscitation (CPR), automated external defibrillator (AED) use, and prehospital return of spontaneous circulation (ROSC).
Of 3,469 consecutive OHCA cases, 2,569 were included in this analysis. The PSAP correctly dispatched 1,964/2,569 (76.4%) of confirmed OHCA cases; the remaining 605/2,569 (23.6%) were dispatched as another chief complaint. Neurologically intact survival to hospital discharge occurred in 99/1,964 (5.0%) of the recognized cardiac arrest group and 28/605 (4.6%) of the unrecognized cardiac arrest group (OR = 1.09; 95% CI, 0.71–1.70). Bystander CPR occurred in 975/1,964 (49.6%) of the recognized cardiac arrest group versus 138/605 (22.8%) of the unrecognized cardiac arrest group (OR = 3.34; 95% CI, 2.70–4.11).
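The reported odds ratios follow directly from these counts; a minimal sketch is below, assuming the standard Wald-interval formula for the confidence limits (the abstract does not state which CI method was used):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Wald 95% CI from a 2x2 table:
    a = outcome present, exposed;   b = outcome absent, exposed;
    c = outcome present, unexposed; d = outcome absent, unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Survival: 99/1,964 recognized vs 28/605 unrecognized
print(odds_ratio_ci(99, 1964 - 99, 28, 605 - 28))    # OR ~ 1.09

# Bystander CPR: 975/1,964 vs 138/605
print(odds_ratio_ci(975, 1964 - 975, 138, 605 - 138))  # OR ~ 3.34
```

Both point estimates and intervals reproduce the reported values to within rounding.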
This study found no association between PSAP dispatcher identification of OHCA and neurologically intact survival to hospital discharge. Dispatcher identification of OHCA remains an important, but not singularly decisive link in the OHCA chain of survival.
Delegation of powers represents a grant of authority by politicians to one or more agents whose powers are determined by the conditions in enabling statutes. Extant empirical studies of this problem have relied on labor-intensive content analysis that ultimately restricts our knowledge of how delegation has responded to politics and institutional change in recent years. We present a machine learning approach to the empirical estimation of authority and constraint in European Union (EU) legislation, and demonstrate its ability to accurately generate the same discretionary measures used in an original study directly from all EU directives and regulations enacted between 1958 and 2017. We assess validity by training our classifier on a random sample of only 10% of hand-coded provisions and replicating an important substantive finding. While our principal interest lies in delegation, our method is extensible to any context in which human coding has been profitably produced.
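The general approach, hand-coding a small sample of provisions, training a text classifier on it, and predicting labels for the remainder, can be sketched as follows. This is a toy multinomial naive Bayes illustration, not the authors' classifier, and the example provisions and labels are entirely hypothetical:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train a multinomial naive Bayes classifier.
    docs: list of (text, label) pairs -- the hand-coded sample."""
    word_counts = defaultdict(Counter)  # label -> token counts
    label_counts = Counter()            # label -> document count
    vocab = set()
    for text, label in docs:
        tokens = text.lower().split()
        word_counts[label].update(tokens)
        label_counts[label] += 1
        vocab.update(tokens)
    return word_counts, label_counts, vocab

def predict(model, text):
    """Label a new provision by maximum log-posterior,
    with Laplace (add-one) smoothing for unseen tokens."""
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for tok in text.lower().split():
            score += math.log(
                (word_counts[label][tok] + 1) / (total_words + len(vocab))
            )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical hand-coded provisions; texts and labels are illustrative only.
coded = [
    ("the commission may adopt implementing measures", "delegation"),
    ("the agency may determine detailed procedures", "delegation"),
    ("the commission shall report annually to the parliament", "constraint"),
    ("any measure shall be subject to prior approval", "constraint"),
]
model = train_nb(coded)
print(predict(model, "the commission may adopt detailed rules"))  # → delegation
```

In the study's setting, the hand-coded 10% sample plays the role of `coded`, and the trained classifier scores every remaining provision in the corpus.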
We studied the association between chlorhexidine gluconate (CHG) concentration on skin and resistant bacterial bioburden. CHG was almost always detected on the skin, and detection of methicillin-resistant Staphylococcus aureus, carbapenem-resistant Enterobacteriaceae, and vancomycin-resistant Enterococcus on skin sites was infrequent. However, we found no correlation between CHG concentration and bacterial bioburden.
The period before the formation of a persecutory delusion may provide causal insights. Patient accounts are invaluable in informing this understanding.
To inform the understanding of delusion formation, we asked patients about the occurrence of potential causal factors – identified from a cognitive model – before delusion onset.
A total of 100 patients with persecutory delusions completed a checklist about their subjective experiences in the weeks before belief onset. The checklist included items concerning worry, images, low self-esteem, poor sleep, mood dysregulation, dissociation, manic-type symptoms, aberrant salience, hallucinations, substance use and stressors. Time to reach certainty in the delusion was also assessed.
Most commonly it took patients several months to reach delusion certainty (n = 30), although other patients took a few weeks (n = 24), years (n = 21), knew instantly (n = 17) or took a few days (n = 6). The most frequent experiences occurring before delusion onset were: low self-confidence (n = 84); excessive worry (n = 80); not feeling like normal self (n = 77); difficulties concentrating (n = 77); going over problems again and again (n = 75); being very negative about the self (n = 75); images of bad things happening (n = 75); and sleep problems (n = 75). The average number of experiences occurring was high (mean 23.5, s.d. = 8.7). The experiences clustered into six main types, with patients reporting an average of 5.4 (s.d. = 1.0) different types.
Patients report numerous different experiences in the period before full persecutory delusion onset that could be contributory causal factors, consistent with a complex multifactorial view of delusion occurrence. This study, however, relied on retrospective self-report and could not determine causality.
Places such as Poverty Point, Mound City, and Chaco Canyon remind us that the siting of ritual infrastructure in ancient North America was a matter of cosmological precedent. The cosmic gravity of these places gathered persons periodically in numbers that challenged routine production. Ritual economies intensified, but beyond the material demands of hosting people, the siting of these places and the timing of gatherings were cosmic work that preconfigured these outcomes. A first millennium AD civic-ceremonial center on the northern Gulf Coast of Florida illustrates the rationale for holding feasts on the end of a parabolic dune that it shared with an existing mortuary facility. Archaeofauna from large pits at Shell Mound support the inference that feasts were timed to summer solstices. Gatherings were large, judging from the infrastructure in support of feasts and efforts to intensify production through oyster mariculture and the construction of a large tidal fish trap. The 250-year history of summer solstice feasts at Shell Mound reinforces the premise that ritual economies were not simply the amplification of routine production. It also suggests that the ecological potential for intensification was secondary to the cosmic significance of solstice-oriented dunes and their connection to mortuary and world-renewal ceremonialism.
Being a family caregiver, and in particular giving care to someone with dementia, impacts mental and physical health and potentially reduces the ability of caregivers to “live well.” This paper examines whether three key psychological resources—self-efficacy, optimism, and self-esteem—are associated with better outcomes for caregivers of people with dementia.
Design and Participants:
Caregivers of 1,283 people with mild-to-moderate dementia in the Improving the Experience of Dementia and Enhancing Active Life (IDEAL) project responded to measures of self-efficacy, optimism, and self-esteem, and “living well” (quality of life, life satisfaction, and well-being). Multivariate linear regression was used to examine the association between psychological resources and “living well”.
Self-efficacy, optimism, and self-esteem were all independently associated with better capability to “live well” for caregivers. This association persisted when accounting for a number of potential confounding variables (age group, sex, and hours of caregiving per day).
Low self-efficacy, optimism, and self-esteem might present a risk of poor outcomes for caregivers of people with dementia. These findings encourage us to consider how new or established interventions might increase the psychological resilience of caregivers.
The objective of this study was to systematically assess the literature regarding postnatal healthcare utilization and barriers/facilitators of healthcare in neonatal abstinence syndrome (NAS) children.
A systematic search was performed in PubMed, Cochrane Database of Systematic Reviews, PsycINFO, Cumulative Index to Nursing and Allied Health Literature (CINAHL), and Web of Science to identify peer-reviewed research. Eligible studies were peer-reviewed articles reporting on broad aspects of primary and specialty healthcare utilization and access in NAS children. Three investigators independently reviewed all articles and extracted data. Study bias was assessed using the Newcastle–Ottawa Assessment Scale and the National Institutes of Health Study Quality Assessment Tool.
This review identified 14 articles that met criteria. NAS children have poorer outpatient appointment adherence and a higher rate of being lost to follow-up. These children have overall poorer health, indicated by a significantly higher risk of ER visits, hospital readmission, and early childhood mortality compared with non-NAS infants. Intensive multidisciplinary support provided through outpatient weaning programs facilitates healthcare utilization and could serve as a model, applicable to other healthcare fields, for improving health in this population.
This review investigated the difficulties in accessing outpatient care, as well as the utilization of such care, for NAS infants. NAS infants tend to have decreased access to and utilization of outpatient healthcare following discharge from the birth hospitalization. Outpatient weaning programs have proven effective; however, these programs require intensive resources and care coordination that have yet to be implemented in other areas of healthcare for NAS children.
An improved understanding of diagnostic and treatment practices for patients with rare primary mitochondrial disorders can support benchmarking against guidelines and establish priorities for evaluative research. We aimed to describe physician care for patients with mitochondrial diseases in Canada, including variation in care.
We conducted a cross-sectional survey of Canadian physicians involved in the diagnosis and/or ongoing care of patients with mitochondrial diseases. We used snowball sampling to identify potentially eligible participants, who were contacted by mail up to five times and invited to complete a questionnaire by mail or internet. The questionnaire addressed: personal experience in providing care for mitochondrial disorders; diagnostic and treatment practices; challenges in accessing tests or treatments; and views regarding research priorities.
We received 58 survey responses (52% response rate). Most respondents (83%) reported spending 20% or less of their clinical practice time caring for patients with mitochondrial disorders. We identified important variation in diagnostic care, although assessments frequently reported as diagnostically helpful (e.g., brain magnetic resonance imaging, MRI/MR spectroscopy) were also recommended in published guidelines. Approximately half (49%) of participants would recommend “mitochondrial cocktails” for all or most patients, but we identified variation in responses regarding specific vitamins and cofactors. A majority of physicians recommended studies on the development of effective therapies as the top research priority.
While Canadian physicians’ views about diagnostic care and disease management are aligned with published recommendations, important variations in care reflect persistent areas of uncertainty and a need for empirical evidence to support and update standard protocols.
The first episode of psychosis is a critical period in the emergence of cardiometabolic risk.
We set out to explore the influence of individual and lifestyle factors on cardiometabolic outcomes in early psychosis.
This was a prospective cohort study of 293 UK adults presenting with first-episode psychosis investigating the influence of sociodemographics, lifestyle (physical activity, sedentary behaviour, nutrition, smoking, alcohol, substance use) and medication on cardiometabolic outcomes over the following 12 months.
Rates of obesity and glucose dysregulation rose from 17.8% and 12%, respectively, at baseline to 23.7% and 23.7% at 1 year. Little change was seen over time in the 76.8% tobacco smoking rate or in the quarter of participants who were sedentary for over 10 h daily. We found no association of baseline lifestyle or type of antipsychotic medication prescribed with either baseline or 1-year cardiometabolic outcomes. Median haemoglobin A1c (HbA1c) rose by 3.3 mmol/mol in participants from Black and minority ethnic (BME) groups, with little change observed in their White counterparts. At 12 months, one-third of those with BME heritage exceeded the threshold for prediabetes (HbA1c >39 mmol/mol).
Unhealthy lifestyle choices are prevalent in early psychosis and cardiometabolic risk worsens over the next year, creating an important window for prevention. We found no evidence, however, that preventative strategies should be preferentially directed based on lifestyle habits. Further work is needed to determine whether clinical strategies should allow for differential patterns of emergence of cardiometabolic risk in people of different ethnicities.
We used a survey to characterize contemporary infection prevention and antibiotic stewardship program practices across 64 healthcare facilities, and we compared these findings to those of a similar 2013 survey. Notable findings include decreased frequency of active surveillance for methicillin-resistant Staphylococcus aureus, frequent active surveillance for carbapenem-resistant Enterobacteriaceae, and increased support for antibiotic stewardship programs.
Item 9 of the Patient Health Questionnaire-9 (PHQ-9) queries about thoughts of death and self-harm, but not suicidality. Although it is sometimes used to assess suicide risk, most positive responses are not associated with suicidality. The PHQ-8, which omits Item 9, is thus increasingly used in research. We assessed equivalency of total score correlations and the diagnostic accuracy to detect major depression of the PHQ-8 and PHQ-9.
We conducted an individual patient data meta-analysis. We fit bivariate random-effects models to assess diagnostic accuracy.
16 742 participants (2097 major depression cases) from 54 studies were included. The correlation between PHQ-8 and PHQ-9 scores was 0.996 (95% confidence interval 0.996 to 0.996). The standard cutoff score of 10 for the PHQ-9 maximized sensitivity + specificity for the PHQ-8 among studies that used a semi-structured diagnostic interview reference standard (N = 27). At cutoff 10, the PHQ-8 was less sensitive by 0.02 (−0.06 to 0.00) and more specific by 0.01 (0.00 to 0.01) among those studies (N = 27), with similar results for studies that used other types of interviews (N = 27). For all 54 primary studies combined, across all cutoffs, the PHQ-8 was less sensitive than the PHQ-9 by 0.00 to 0.05 (0.03 at cutoff 10), and specificity was within 0.01 for all cutoffs (0.00 to 0.01).
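Finding the cutoff that maximizes sensitivity + specificity, as described above, amounts to maximizing Youden's J over candidate scores. A minimal sketch is below; the scores and diagnoses are illustrative only, not study data:

```python
def best_cutoff(scores, depressed):
    """Choose the score cutoff maximizing sensitivity + specificity
    (Youden's J); 'positive' means score >= cutoff."""
    best_c, best_j = None, -1.0
    for c in range(0, 28):  # PHQ-8/9 totals span 0-24 / 0-27
        tp = sum(s >= c and d for s, d in zip(scores, depressed))
        fn = sum(s < c and d for s, d in zip(scores, depressed))
        tn = sum(s < c and not d for s, d in zip(scores, depressed))
        fp = sum(s >= c and not d for s, d in zip(scores, depressed))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        if sens + spec - 1 > best_j:
            best_c, best_j = c, sens + spec - 1
    return best_c

# Illustrative questionnaire totals and diagnoses (not study data)
scores = [3, 5, 9, 10, 11, 12, 14, 18, 6, 10, 15, 2]
depressed = [False, False, False, True, True, True,
             True, True, False, True, True, False]
print(best_cutoff(scores, depressed))  # → 10
```

Applied separately to PHQ-8 and PHQ-9 totals against a diagnostic-interview reference standard, this procedure yields the optimal cutoffs the meta-analysis compares.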
PHQ-8 and PHQ-9 total scores were similar. Sensitivity may be minimally reduced with the PHQ-8, but specificity is similar.
Microstructural analysis and bulk dielectric property analysis (real and imaginary permittivity at 95 GHz) were performed at temperatures ranging from 25 to 550 °C for ceramic composites comprising a hot-pressed aluminum nitride matrix (containing yttria and trace carbon as sintering additives) with molybdenum powder as a millimeter-wave radiation-absorbing additive. Loading percentages in the range of 0.25 vol% to 4.0 vol% Mo were characterized. For the temperature regime evaluated, the temperature-related changes in real and imaginary components of permittivity were found to be relatively modest compared with those driven by Mo loading. Energy-dispersive X-ray spectroscopic analysis of Mo grains and surrounding regions showed the presence of a mixed-phase layer, containing Mo2C, at the AlN–Mo interface. The Mo2C-containing mixed-phase layer, typically a few micrometers thick, surrounded the Mo grains. Further characterization of this mixed-phase layer is required to determine its contribution to the dielectric properties of the composite.