Herbicides with soil-residual activity have the potential to carry over into subsequent crops, injuring sensitive crops and, if severe, limiting productivity. The increased use of soil-residual herbicides in the United States for management of troublesome weeds in corn and soybean cropping systems may result in more cases of carryover. Soil management practices affect the soil environment differently, potentially influencing herbicide degradation and the likelihood of carryover. Field experiments were conducted at three sites in 2019 and 2020 to determine the effects of corn (clopyralid and mesotrione) and soybean (fomesafen and imazethapyr) herbicides applied in the fall at reduced rates (25% and 50% of labeled rates) and three soil management practices (tillage, no-tillage, and a fall-established cereal rye cover crop) on subsequent growth and productivity of the cereal rye cover crop and the soybean and corn crops, respectively. Most response variables (cereal rye biomass and crop canopy cover at cover crop termination in the spring, early-season crop stand, herbicide injury ratings, and crop yield) were not affected by herbicide carryover. Corn yield was lower when soil was managed with a cereal rye cover crop compared to tillage at all three sites, while yield was lower for no-till compared to tillage at two sites. Soybean yield was lower when managed with a cereal rye cover crop compared to tillage and no-till at one site. Findings from this research indicate a low carryover risk for these herbicides across site-years when label rotational restrictions are followed and environmental conditions favor herbicide degradation, regardless of soil management practice, on silt loam or silty clay loam soils in the Midwestern United States.
There has been a notable increase in requests for psychiatric reports from District Courts for persons remanded to Ireland’s main remand prison, Cloverhill. We aimed to determine whether reports were prepared for persons with severe mental illness and whether they led to therapeutic benefits such as diversion to healthcare. Measures of equitability between Cloverhill and other District Courts were also explored.
For District Court-requested reports completed by the Prison Inreach and Court Liaison Service (PICLS) at Cloverhill Prison from 2015 to 2017, we recorded clinical variables and therapeutic outcomes such as diversion to inpatient psychiatric settings.
Of 236 cases, over half were diverted to inpatient or outpatient psychiatric care. In one-third of remand episodes, the person was admitted to a psychiatric hospital, mainly in non-forensic settings. Nearly two-thirds had major mental illness, mainly schizophrenia and related conditions. Almost half had active psychosis. Cases in Cloverhill District Court and other District Courts were similarly likely to have active psychosis (47% overall) and hospital admission (33% overall). Voluntary reports were more likely to identify active psychosis, with over 90% of such cases diverted to inpatient or outpatient community treatment settings.
This is the first large-scale study of diversion outcomes following requests for psychiatric advice from District Courts in Ireland. Requests were mainly appropriate. Over half led to diversion from the criminal justice system to healthcare settings. A complementary network of diversion initiatives is needed at every stage of the criminal justice system to divert mentally ill individuals to appropriate settings at the earliest possible stage.
Infants who require open heart surgery are at increased risk for developmental delays, including gross motor impairments, which may have implications for later adaptive skills and cognitive performance. We sought to evaluate the feasibility and efficacy of a tummy time intervention to improve motor skill development in infants after cardiac surgery.
Infants <4 months of age who underwent cardiac surgery were randomly assigned, prior to hospital discharge, to tummy time instruction with or without outpatient reinforcement, or to standard of care. The Alberta Infant Motor Scale (AIMS) was administered to each infant prior to and 3 months after discharge. Groups were compared, and the association between parent-reported tummy time at home and change in motor scores at follow-up was examined.
Parents of infants (n = 64) who had cardiac surgery at a median age of 5 days were randomly assigned to tummy time instruction (n = 20), tummy time + outpatient reinforcement (n = 21), or standard of care (n = 23). Forty-nine (77%) returned for follow-up. At follow-up, reported daily tummy time was not significantly different between groups (p = 0.17). Fifteen infants had <15 minutes of tummy time daily. Infants who received >15 minutes of tummy time daily had a significantly greater improvement in motor scores than infants with <15 minutes of tummy time daily (p = 0.01).
In infants following cardiac surgery, <15 minutes of tummy time daily is associated with increased motor skill impairment. Further research is needed to elucidate the best strategies to optimise parental compliance with tummy time recommendations.
A novel paediatric disease, multi-system inflammatory syndrome in children, has emerged during the coronavirus disease 2019 (COVID-19) pandemic.
To describe the short-term evolution of cardiac complications and associated risk factors in patients with multi-system inflammatory syndrome in children.
Retrospective single-centre study of confirmed multi-system inflammatory syndrome in children treated from 29 March, 2020 to 1 September, 2020. Cardiac complications during the acute phase were defined as decreased systolic function, coronary artery abnormalities, pericardial effusion, or mitral and/or tricuspid valve regurgitation. Patients with and without cardiac complications were compared using chi-square, Fisher’s exact, and Wilcoxon rank-sum tests.
Thirty-nine children with median (interquartile range) age 7.8 (3.6–12.7) years were included. Nineteen (49%) patients developed cardiac complications including systolic dysfunction (33%), valvular regurgitation (31%), coronary artery abnormalities (18%), and pericardial effusion (5%). At the time of the most recent follow-up, at a median (interquartile range) of 49 (26–61) days, cardiac complications had resolved in 16/19 (84%) patients. Two patients had persistent mild systolic dysfunction and one patient had a persistent coronary artery abnormality. Children with cardiac complications were more likely to have higher N-terminal pro-B-type natriuretic peptide (p = 0.01), higher white blood cell count (p = 0.01), higher neutrophil count (p = 0.02), severe lymphopenia (p = 0.05), use of milrinone (p = 0.03), and intensive care requirement (p = 0.04).
Patients with multi-system inflammatory syndrome in children had a high rate of cardiac complications in the acute phase, with associated inflammatory markers. Although cardiac complications resolved in 84% of patients, further long-term studies are needed to assess if the cardiac abnormalities (transient or persistent) are associated with major cardiac events.
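To make the group comparisons named in the methods concrete, here is a minimal sketch of those tests (chi-square, Fisher's exact, Wilcoxon rank-sum) using entirely hypothetical values, not the study's data:

```python
# Illustrative sketch (hypothetical data): the comparisons named in the
# methods above -- chi-squared / Fisher's exact for categorical variables
# and the Wilcoxon rank-sum test for skewed continuous variables.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact, ranksums

# Hypothetical 2x2 table: milrinone use vs. cardiac complications
table = np.array([[8, 2],    # complications: milrinone yes / no
                  [3, 26]])  # no complications: milrinone yes / no

chi2, p_chi2, _, expected = chi2_contingency(table)
# Fisher's exact test is preferred when expected cell counts are small (<5)
_, p_fisher = fisher_exact(table)

# Hypothetical NT-proBNP values (pg/mL), compared with the rank-sum test
ntbnp_complications = [5200, 11000, 860, 15400, 2300]
ntbnp_none = [410, 980, 150, 2200, 630]
stat, p_wilcoxon = ranksums(ntbnp_complications, ntbnp_none)

print(f"chi2 p={p_chi2:.3f}, Fisher p={p_fisher:.3f}, rank-sum p={p_wilcoxon:.3f}")
```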
To determine the impact of electronic health record (EHR)–based interventions and test restriction on Clostridioides difficile tests (CDTs) and hospital-onset C. difficile infection (HO-CDI).
Quasi-experimental study in 3 hospitals.
A 957-bed academic hospital (hospital A) and two academic-affiliated community hospitals of 354 beds (hospital B) and 175 beds (hospital C).
Three EHR-based interventions were sequentially implemented: (1) an alert when ordering a CDT if laxatives had been administered within the prior 24 hours (January 2018); (2) cancellation of CDT orders after 24 hours (October 2018); and (3) contextual rule-driven order questions requiring justification when a laxative had been administered or EHR documentation of diarrhea was lacking (July 2019). In February 2019, hospital C implemented a gatekeeper intervention requiring approval for all CDTs after hospital day 3. The impact of the interventions on C. difficile testing and HO-CDI rates was estimated using an interrupted time-series analysis.
C. difficile testing was already declining in the preintervention period (annual change in incidence rate [IR], 0.79; 95% CI, 0.72–0.87) and did not decrease further with the EHR interventions. The laxative alert was temporally associated with a trend reduction in HO-CDI (annual change in IR from baseline, 0.85; 95% CI, 0.75–0.96) at hospitals A and B. The gatekeeper intervention at hospital C was associated with level (IRR, 0.50; 95% CI, 0.42–0.60) and trend reductions in C. difficile testing (annual change in IR, 0.91; 95% CI, 0.85–0.98) and with level (IRR, 0.42; 95% CI, 0.22–0.81) and trend reductions in HO-CDI (annual change in IR, 0.68; 95% CI, 0.50–0.92) relative to the baseline period.
Test restriction was more effective than EHR-based clinical decision support at reducing C. difficile testing in our 3-hospital system.
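The interrupted time-series analysis described above can be sketched as a segmented Poisson regression with level-change and trend-change terms. The following is a minimal illustration with simulated counts and placeholder variable names, not the authors' model specification:

```python
# Minimal interrupted time-series sketch (hypothetical data): a Poisson
# model of monthly C. difficile test counts with an offset for patient-days,
# a level-change term, and a post-intervention trend term.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_months = 48
intervention_month = 24  # e.g., the gatekeeper intervention goes live

df = pd.DataFrame({
    "month": np.arange(n_months),  # baseline trend
    "post": (np.arange(n_months) >= intervention_month).astype(int),  # level change
})
df["months_since"] = np.maximum(0, df["month"] - intervention_month)  # trend change
df["patient_days"] = rng.integers(9000, 11000, n_months)
df["tests"] = rng.poisson(30, n_months)  # placeholder counts

X = sm.add_constant(df[["month", "post", "months_since"]])
model = sm.GLM(df["tests"], X, family=sm.families.Poisson(),
               offset=np.log(df["patient_days"])).fit()

# exp(coef) gives incidence-rate ratios: "post" is the level change,
# "months_since" the change in trend after the intervention.
print(np.exp(model.params))
```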
To understand how the different data collection methods of the Alberta Health Services Infection Prevention and Control Program (IPC) and the National Surgical Quality Improvement Program (NSQIP) affect reported rates of surgical site infections (SSIs) following total hip replacements (THRs) and total knee replacements (TKRs).
Retrospective cohort study.
Four hospitals in Alberta, Canada.
Patients who underwent THR or TKR surgery between September 1, 2015, and March 31, 2018.
Demographic information and complex SSIs reported by IPC and NSQIP were compared; IPC and NSQIP data were then matched, and percent agreement and Cohen’s κ were calculated. Statistical analyses were performed for age, gender, and complex SSIs. A P value <.05 was considered significant.
In total, 7,549 IPC and 2,037 NSQIP patients were compared. The complex SSI rate for NSQIP was higher than for IPC (THR: 1.19 vs 0.68 [P = .147]; TKR: 0.92 vs 0.80 [P = .682]). After matching, 7 SSIs were identified by both IPC and NSQIP; 3 were identified only by IPC, and 12 were identified only by NSQIP (positive agreement, 0.48; negative agreement, 1.0; κ = 0.48).
Different approaches to monitoring SSIs may lead to different results and trending patterns. NSQIP reports total SSI rates that are consistently higher than those reported by IPC. If the two systems are compared at any point in time, confidence in the data may be eroded. Stakeholders need to be aware of these variations, and education should be provided to facilitate an understanding of the differences and a consistent approach to SSI surveillance monitoring over time.
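The agreement statistics in these results can be reproduced from the reported counts. The sketch below uses the cell counts from the results above (7 flagged by both systems, 3 IPC-only, 12 NSQIP-only) and assumes the matched cohort of 1,798 patients reported in the companion abstract later in this section:

```python
# Sketch reproducing the agreement statistics above from the reported
# counts, assuming a matched cohort of n = 1,798 (from the companion
# abstract below).
a, b, c = 7, 3, 12           # SSIs: both systems / IPC only / NSQIP only
n = 1798                     # matched patients (assumed from companion abstract)
d = n - (a + b + c)          # negative in both systems

po = (a + d) / n                                      # observed agreement
pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
kappa = (po - pe) / (1 - pe)                          # Cohen's kappa ~ 0.48

pos_agreement = 2 * a / (2 * a + b + c)   # ~0.48
neg_agreement = 2 * d / (2 * d + b + c)   # ~1.00

print(f"kappa={kappa:.2f}, +agree={pos_agreement:.2f}, -agree={neg_agreement:.2f}")
```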
The world is astoundingly variable, and organisms – from individuals to whole communities – must respond to variability to survive. One example of nature’s variability is the fluctuation in populations of spruce budworm, Choristoneura fumiferana Clemens (Lepidoptera: Tortricidae), which cycle every 35 years. In this study, we examined how a parasitoid community altered its parasitism of budworm and other caterpillar species in response to these fluctuations. Budworm and other caterpillar species were sampled from balsam fir (Pinaceae) in three plots over 14 years in Atlantic Canada and then reared to identify any emerging parasitoids. We found that the parasitoid community generally showed an indiscriminate response (i.e., no preference, with parasitism rates dictated by host frequencies) to changes in budworm frequencies relative to other caterpillar species on balsam fir. We also observed changes in the topology and distributions of interaction strengths between the parasitoids, budworm, and other caterpillar species as budworm frequencies fluctuated. Our study contributes to the hypothesis that hardwood trees are a critical part of the budworm–parasitoid food web, with parasitoids attacking other caterpillar species on hardwood trees when budworm populations are low. Taken together, our results show that a parasitoid community collectively alters species interactions in response to variable budworm frequencies, thereby fundamentally shifting food-web pathways.
The first demonstration of laser action in ruby was made in 1960 by T. H. Maiman of Hughes Research Laboratories, USA. Many laboratories worldwide began the search for lasers using different materials, operating at different wavelengths. In the UK, academia, industry and the central laboratories took up the challenge from the earliest days to develop these systems for a broad range of applications. This historical review looks at the contribution the UK has made to the advancement of the technology, the development of systems and components and their exploitation over the last 60 years.
Current debates about surveillance demonstrate the complexity of political controversies whose uncertainty and moral ambiguities render normative consensus difficult to achieve. The question of how to study political controversies remains a challenge for IR scholars. Critical security studies scholars have begun to examine political controversies around surveillance by exploring changing security practices in the everyday. Yet, (de)legitimation practices have hitherto not been the focus of analysis. Following recent practice-oriented research, we develop a conceptual framework based on the notion of ‘narrative legitimation politics’. We first introduce the concept of ‘tests’ from Boltanski's pragmatic sociology to categorise the discursive context and different moral reference points (truth, reality, existence). Second, we combine pragmatic sociology with narrative analysis to enable the study of dominant justificatory practices. Third, we develop the framework through a practice-oriented exploration of the Snowden controversy with a focus on the US and Germany. We identify distinct justificatory practices in each test format linked to narrative devices (for example, plots, roles, metaphors) whose fluid, contested dynamics have the potential to effect change. The framework is particularly relevant for IR scholars interested in legitimacy issues, the normativity of practices, and the power of narratives.
Equilibrium solutions for hollow vortices in straining flow in a corner are obtained by solving a free-boundary problem. Conformal maps from a canonical doubly connected annular domain to the physical plane combining the Schottky–Klein prime function with an appropriate algebraic map lead to a problem similar to Pocklington's propagating hollow dipole. The result is a two-parameter family of solutions depending on the corner angle and on the non-dimensional ratio of strain to circulation.
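For readers unfamiliar with the prime function, the doubly connected case used here reduces to a classical infinite product. The LaTeX below states that standard form as background only; it is not the paper's specific corner map, which composes the prime function with an algebraic map:

```latex
% Classical product form of the Schottky--Klein prime function on the
% concentric annulus \rho < |\zeta| < 1 (background only; the paper's
% conformal map composes this with an algebraic corner map, not shown):
P(\zeta,\rho) \;=\; (1-\zeta)\,\prod_{k=1}^{\infty}
  \bigl(1-\rho^{2k}\zeta\bigr)\bigl(1-\rho^{2k}\zeta^{-1}\bigr).
```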
The media and scientific literature are increasingly reporting an escalation of large carnivore attacks on humans, mainly in so-called developed regions such as Europe and North America. Although large carnivore populations have generally increased in developed countries, increased numbers are not solely responsible for the observed rise in the number of attacks. Of the eight bear species inhabiting the world, two (the Andean bear and the giant panda) have never been reported to attack humans, whereas the other six have: sun bears Helarctos malayanus, sloth bears Melursus ursinus, Asiatic black bears Ursus thibetanus, American black bears Ursus americanus, brown bears Ursus arctos, and polar bears Ursus maritimus. This chapter provides insights into the causes, and consequently the prevention, of bear attacks on people. Prevention, together with information that encourages appropriate human behavior when sharing the landscape with bears, is of paramount importance in reducing both potentially fatal human–bear encounters and their consequences for bear conservation.
Background: In Alberta, Canada, surgical site infections (SSIs) following total hip (THR) and knee replacements (TKR) are reported using 2 data sources: infection prevention and control (IPC), which surveys all THRs and TKRs using NHSN definitions and Canadian International Classification of Diseases, Tenth Revision (ICD-10-CA) codes, and the National Surgical Quality Improvement Program (NSQIP), which uses a systematic sampling process involving an 8-day cycle schedule, modified NHSN definitions, and current procedural terminology (CPT) codes. We compared the similarities and discrepancies in THR/TKR SSI reporting. Methods: A retrospective multisite cohort study of IPC and NSQIP THR/TKR SSI data at 4 hospitals was performed. SSI data were collected between September 1, 2015, and March 31, 2018. Demographic information and complex and total SSIs reported by IPC and NSQIP were compared for both THR and TKR surgeries. To determine whether both data sources reported similar trends over time, total SSIs by quarter were compared. Univariate analyses using a t test for age and the χ2 test for gender were performed for complex SSIs and total SSIs. The Pearson correlation and the Shapiro–Wilk test were used to assess the THR and TKR trends between the 2 data sources. A P value of <.05 was considered significant. Results: Following the removal of duplicates and missing data, 7,549 IPC and 2,037 NSQIP patients were compared. Age, gender, and other demographic parameters were not significantly different. Total THR and TKR SSIs per 100 procedures using NSQIP data were significantly higher than the same rates using IPC data: THR, 2.25 versus 0.92 (P < .05) and TKR, 3.43 versus 1.26 (P < .05). Both IPC and NSQIP data indicated increasing total THR SSI rates over time, but with different magnitudes (r = 0.658). For total TKR SSIs, the IPC rate decreased, whereas the NSQIP rate increased over the same period (r = 0.374). When superficial SSIs were excluded, the rates reported by IPC and NSQIP by hospital and by procedure type were more comparable, with trends toward higher rates reported by NSQIP for THR than for TKR: THR, 1.19 versus 0.68 (P = .15) and TKR, 0.92 versus 0.80 (P = .68). Conclusions: Different approaches used to monitor SSIs following surgeries may lead to different results and trend patterns. NSQIP reports total SSI rates that are significantly higher than IPC rates in the Alberta orthopedic population, predominantly as a result of increased identification of superficial SSIs. Because the diagnosis of superficial SSIs may be less reliable, SSI reporting should focus on complex infections.
Background: The standardized infection ratio (SIR) is the nationally adopted metric used to track and compare catheter-associated urinary tract infections (CAUTIs) and central-line–associated bloodstream infections (CLABSIs). Despite its widespread use, the SIR may not be suitable for all settings and may not capture all catheter harm. Our objective was to examine the correlation between the SIR and device use for CAUTIs and CLABSIs across community hospitals in a regional network. Methods: We compared the SIR and the standardized utilization ratio (SUR) for CAUTIs and CLABSIs across 43 hospitals in the Duke Infection Control Outreach Network (DICON) using scatter plots and calculated R² values. Hospitals were stratified into large (>70,000 patient days), medium (30,000–70,000 patient days), and small (<30,000 patient days) hospitals based on DICON’s benchmarking for community hospitals. Results: We reviewed 24 small, 11 medium, and 8 large hospitals within DICON. Scatter plots comparing SIRs and SURs for CLABSIs and CAUTIs across our network hospitals are shown in Figs. 1 and 2. We detected a weak positive overall correlation between SIR and SUR for CLABSIs (0.33; R² = 0.11), but no correlation between SIR and SUR for CAUTIs (−0.07; R² = 0.00). Of 15 hospitals with SUR >1, 7 reported SIR <1 for CLABSIs, whereas 10 of 13 hospitals with SUR >1 reported SIR <1 for CAUTIs. Smaller hospitals showed a better correlation between CLABSI SIR and SUR (0.37) than medium and large hospitals (0.19 and 0.22, respectively). Conversely, smaller hospitals showed no correlation between CAUTI SIR and SUR, whereas medium and large hospitals showed a negative correlation (−0.31 and −0.39, respectively). Conclusions: Our data reveal a weak positive correlation between SIR and SUR for CLABSIs, suggesting that central-line use impacts the CLABSI SIR to some extent. However, we detected no correlation between SIR and SUR for CAUTIs in smaller hospitals and a negative correlation for medium and large hospitals. Some hospitals with low CAUTI SIRs might actually have higher device use, and vice versa. Therefore, the SIR alone does not adequately reflect preventable harm related to urinary catheters. Public reporting of the SIR may incentivize hospitals to focus more on urine culture stewardship than on reducing device utilization.
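The correlation analysis above is a straightforward Pearson r with R² = r². A minimal sketch with hypothetical per-hospital values (not DICON data) follows:

```python
# Minimal sketch (hypothetical values): Pearson r between hospital-level
# SIR and SUR, with R^2 = r^2, as in the analysis described above.
import numpy as np

# Hypothetical per-hospital CLABSI values (one SUR/SIR pair per hospital)
sur = np.array([0.8, 1.1, 0.9, 1.4, 0.7, 1.2, 1.0, 0.6])
sir = np.array([0.5, 1.3, 0.7, 1.1, 0.9, 1.5, 0.4, 0.8])

r = np.corrcoef(sur, sir)[0, 1]
print(f"r={r:.2f}, R^2={r**2:.2f}")

# Hospitals with high device use but low infection ratios (SUR>1, SIR<1)
# are the discordant cases highlighted in the results above.
discordant = np.sum((sur > 1) & (sir < 1))
print(f"discordant hospitals: {discordant}")
```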
Background: Chlorhexidine bathing reduces bacterial skin colonization and prevents infections in specific patient populations. As chlorhexidine use becomes more widespread, concerns about bacterial tolerance to chlorhexidine have increased; however, testing for chlorhexidine minimum inhibitory concentrations (MICs) is challenging. We adapted a broth microdilution (BMD) method to determine whether chlorhexidine MICs changed over time among 4 important healthcare-associated pathogens. Methods: Antibiotic-resistant bacterial isolates (Staphylococcus aureus from 2005 to 2019 and Escherichia coli, Klebsiella pneumoniae, and Enterobacter cloacae complex from 2011 to 2019) were collected through Emerging Infections Program surveillance in 2 sites (Georgia and Tennessee) or through public health reporting in 1 site (Orange County, California). A convenience sample of isolates was collected from facilities with varying amounts of chlorhexidine use. We performed BMD testing using laboratory-developed panels with chlorhexidine digluconate concentrations ranging from 0.125 to 64 μg/mL. After successfully establishing reproducibility with quality control organisms, 3 laboratories performed MIC testing. For each organism, epidemiological cutoff values (ECVs) were established using ECOFFinder. Results: Among 538 isolates tested (129 S. aureus, 158 E. coli, 142 K. pneumoniae, and 109 E. cloacae complex), the S. aureus, E. coli, K. pneumoniae, and E. cloacae complex ECVs were 8, 4, 64, and 64 µg/mL, respectively (Table 1). Overall, 14 isolates had an MIC above the ECV (12 E. coli and 2 E. cloacae complex). The MIC50 of each species is reported over time (Table 2). Conclusions: Using an adapted BMD method, we found that chlorhexidine MICs did not increase over time among a limited sample of S. aureus, E. coli, K. pneumoniae, and E. cloacae complex isolates. Although these results are reassuring, continued surveillance for elevated chlorhexidine MICs in isolates from patients with well-characterized chlorhexidine exposure is needed as chlorhexidine use increases.
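To illustrate how an MIC is read from the twofold dilution series described above (0.125–64 µg/mL), here is a small sketch with hypothetical panel readings; the MIC is the lowest concentration with no visible growth, and the MIC50 of a set of isolates is the median MIC:

```python
# Illustrative sketch (hypothetical readings): reading MICs from a twofold
# broth-microdilution series spanning 0.125-64 ug/mL.
import statistics

concentrations = [0.125 * 2**i for i in range(10)]  # 0.125 ... 64 ug/mL

def read_mic(growth):
    """growth[i] is True if the isolate grew at concentrations[i]."""
    for conc, grew in zip(concentrations, growth):
        if not grew:
            return conc            # lowest concentration inhibiting growth
    return float("inf")            # grew at every concentration (MIC > 64)

# Hypothetical panel readings for three isolates
panel = [
    [True, True, True, False, False, False, False, False, False, False],   # MIC 1
    [True, True, True, True, True, False, False, False, False, False],     # MIC 4
    [True, True, False, False, False, False, False, False, False, False],  # MIC 0.5
]
mics = [read_mic(g) for g in panel]
print("MICs:", mics, "MIC50:", statistics.median(mics))
```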
Background: In Alberta, Canada, surgical site infections (SSIs) following total hip and knee replacements (THRs and TKRs) are reported using the infection prevention and control (IPC) surveillance system, which surveys all THRs and TKRs using the NHSN definitions, and the National Surgical Quality Improvement Program (NSQIP), which uses different definitions and sampling strategies. Deterministic matching of patient data from these sources was used to examine the overlap and discrepancies in SSI reporting. Methods: A retrospective multisite cohort study of IPC and NSQIP superficial, deep, and organ-space THR/TKR SSI data collected 30 days postoperatively from September 1, 2015, to March 31, 2018, was undertaken. To identify patients with procedures captured by both IPC and NSQIP, data were cleaned, duplicates were removed, and patients were matched 1:1 using year of birth, procedure facility, type, side, date, and time. Positive and negative agreement were assessed, and Cohen κ values were calculated. The definitions and data capture methods used by IPC and NSQIP were also compared. Results: There were 7,549 IPC patients and 2,037 NSQIP patients, of whom 1,798 were matched (23.8% of IPC and 88.3% of NSQIP patients). Overall, 17 SSIs were identified by both IPC and NSQIP, including 9 superficial and 8 complex by IPC and 6 superficial and 11 complex by NSQIP. Also, 7 SSIs were identified only by IPC, of which 5 were superficial, and 36 SSIs were identified only by NSQIP, of which 28 were superficial (positive agreement, 0.44; negative agreement, 0.99; κ = 0.43). Excluding superficial SSIs, 7 SSIs were identified by both IPC and NSQIP; 3 were identified only by IPC; and 12 were identified only by NSQIP (positive agreement, 0.48; negative agreement, 1.00; κ = 0.48). Conclusions: THR/TKR SSI rates reported by IPC and NSQIP were not comparable in this matched dataset. NSQIP identifies more superficial SSIs. Variations in data capture methods and definitions accounted for most of the discordance. Both surveillance systems are critically involved in improving patient outcomes following surgery. However, stakeholders need to be aware of these variations, and education should be provided to facilitate an understanding of the differences and their interpretation. Future work should explore other surgical procedures and larger datasets.
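The deterministic 1:1 matching described in the methods amounts to an exact join on the stated key fields. A minimal sketch with hypothetical column names (the abstract does not give the actual schema) follows:

```python
# Sketch of deterministic 1:1 matching (hypothetical column names): join
# records from the two surveillance systems on year of birth, facility,
# procedure type, side, date, and time, as described above.
import pandas as pd

keys = ["birth_year", "facility", "proc_type", "side", "proc_date", "proc_time"]

ipc = pd.DataFrame(columns=keys + ["ipc_ssi"])      # IPC records (placeholder)
nsqip = pd.DataFrame(columns=keys + ["nsqip_ssi"])  # NSQIP records (placeholder)

# Drop duplicates first so the join is truly 1:1, then keep exact matches only
matched = (
    ipc.drop_duplicates(subset=keys)
       .merge(nsqip.drop_duplicates(subset=keys), on=keys,
              how="inner", validate="one_to_one")
)
print(f"matched patients: {len(matched)}")
```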
The criteria for objective memory impairment in mild cognitive impairment (MCI) are vaguely defined. Aggregating the number of abnormal memory scores (NAMS) is one way to operationalise memory impairment, which we hypothesised would predict progression to Alzheimer’s disease (AD) dementia.
As part of the Australian Imaging, Biomarkers and Lifestyle Flagship Study of Ageing, 896 older adults who did not have dementia were administered a psychometric battery including three neuropsychological tests of memory, yielding 10 indices of memory. We calculated the number of memory scores corresponding to z ≤ −1.5 (i.e., NAMS) for each participant. Incident diagnosis of AD dementia was established by consensus of an expert panel after 3 years.
Of the 722 (80.6%) participants who were followed up, 54 (7.5%) developed AD dementia. There was a strong correlation between NAMS and probability of developing AD dementia (r = .91, p = .0003). Each abnormal memory score conferred an additional 9.8% risk of progressing to AD dementia. The area under the receiver operating characteristic curve for NAMS was 0.87 [95% confidence interval (CI) .81–.93, p < .01]. The odds ratio for NAMS was 1.67 (95% CI 1.40–2.01, p < .01) after correcting for age, sex, education, estimated intelligence quotient, subjective memory complaint, Mini-Mental State Exam (MMSE) score and apolipoprotein E ϵ4 status.
Aggregation of abnormal memory scores may be a useful way of operationalising objective memory impairment, predicting incident AD dementia and providing prognostic stratification for individuals with MCI.
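The NAMS metric itself is simple to operationalise: count how many of a participant's 10 memory indices fall at z ≤ −1.5, then evaluate how well that count discriminates incident AD dementia. A minimal sketch with random placeholder data (which will yield AUC ≈ 0.5, unlike the 0.87 reported on the study's real data) follows:

```python
# Minimal sketch (random placeholder data): computing NAMS and its
# discrimination of incident AD dementia, as described above.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_participants, n_indices = 722, 10

z_scores = rng.normal(size=(n_participants, n_indices))  # placeholder z scores
nams = (z_scores <= -1.5).sum(axis=1)                    # abnormal scores per person

# Placeholder outcome: 54 of 722 progress to AD dementia, as in the abstract
progressed = np.zeros(n_participants, dtype=int)
progressed[rng.choice(n_participants, 54, replace=False)] = 1

# Random data gives AUC near 0.5; the study reports 0.87 on real data
auc = roc_auc_score(progressed, nams)
print(f"NAMS AUC (placeholder data): {auc:.2f}")
```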