Prevention of Clostridioides difficile infection (CDI) is a national priority and may be facilitated by deployment of the Targeted Assessment for Prevention (TAP) Strategy, a quality improvement framework providing a focused approach to infection prevention. This article describes the process and outcomes of TAP Strategy implementation for CDI prevention in a healthcare system.
Hospital A was identified based on CDI surveillance data indicating an excess burden of infections above the national goal; hospitals B and C participated as part of systemwide deployment. TAP facility assessments were administered to staff to identify infection control gaps and inform CDI prevention interventions. Retrospective analysis was performed using negative binomial interrupted time-series (ITS) regression to assess the overall effect of targeted CDI prevention efforts. Analysis included hospital-onset, laboratory-identified C. difficile event data for 18 months before and after implementation of the TAP facility assessments.
The systemwide monthly CDI rate significantly decreased at the intervention (β2, −44%; P = .017), and the postintervention CDI rate trend showed a sustained decrease (β1 + β3; −12% per month; P = .008). At an individual hospital level, the CDI rate trend significantly decreased in the postintervention period at hospital A only (β1 + β3, −26% per month; P = .003).
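The reported coefficients map onto the standard four-term ITS specification. The sketch below is a minimal illustration using statsmodels, with hypothetical column names ('cdi_count', 'patient_days', 'time', 'post', 'time_after') standing in for the study's data, showing how such a model might be fit and how the level change (β2) and postintervention trend (β1 + β3) are recovered on the percentage scale:

```python
# Minimal sketch of the ITS model described above (assumed column names:
# 'cdi_count' = monthly hospital-onset LabID events, 'patient_days',
# 'time' = month index, 'post' = 0/1 post-implementation indicator,
# 'time_after' = months since implementation, 0 beforehand).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_its(df: pd.DataFrame):
    # Negative binomial regression with patient-days as an exposure offset:
    # log E[count] = b0 + b1*time + b2*post + b3*time_after + log(patient_days)
    fit = smf.glm(
        "cdi_count ~ time + post + time_after",
        data=df,
        family=sm.families.NegativeBinomial(),  # dispersion left at default here
        offset=np.log(df["patient_days"]),
    ).fit()
    b = fit.params
    level_change = np.exp(b["post"]) - 1                   # beta2: shift at intervention
    post_trend = np.exp(b["time"] + b["time_after"]) - 1   # beta1 + beta3: monthly trend after
    return fit, level_change, post_trend
```

Under this log-link parameterisation, exp(β2) − 1 corresponds to the level change at the intervention (the −44% above), and exp(β1 + β3) − 1 to the monthly postintervention trend (the −12% per month above).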
This project demonstrates TAP Strategy implementation in a healthcare system, yielding a significant decrease in the laboratory-identified C. difficile rate trend in the postintervention period at the system level and at hospital A. It highlights the potential benefit of directing prevention efforts to facilities with the highest burden of excess infections to more efficiently reduce CDI rates.
This study investigated whether individual differences in receptive vocabulary, speech perception and production, and nonword repetition at age 2 years, 4 months to 3 years, 4 months predicted phonological awareness 2 years later. One hundred twenty-one children were tested twice. During the first testing period (Time 1), children’s receptive vocabulary, speech perception and production, and nonword repetition were measured. Nonword repetition accuracy in the present study was distinct from other widely used measures of nonword repetition in that it focused on narrow transcription of diphone sequences in each nonword that differed systematically in phonotactic probability. At the second testing period (Time 2), children’s phonological awareness was measured. The best predictors of phonological awareness were a measure of speech production and a measure of phonological processing derived from performance on the nonword repetition task. The results of this study suggest that nonword repetition accuracy provides an implicit measure of phonological skills that are indicative of later phonological awareness at an age when children are too young to perform explicit phonological awareness tasks reliably.
To describe pathogen distribution and rates for central-line–associated bloodstream infections (CLABSIs) from different acute-care locations during 2011–2017 to inform prevention efforts.
CLABSI data from the Centers for Disease Control and Prevention (CDC) National Healthcare Safety Network (NHSN) were analyzed. Percentages and pooled mean incidence density rates were calculated for a variety of pathogens and stratified by acute-care location groups (adult intensive care units [ICUs], pediatric ICUs [PICUs], adult wards, pediatric wards, and oncology wards).
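Pooled mean incidence density rates of this kind are computed by pooling numerators and denominators within each stratum before dividing, rather than averaging facility-level rates. A minimal sketch, with illustrative column and stratum names:

```python
# Sketch of the NHSN-style pooled mean rate: sum events and central-line days
# within each stratum, then express CLABSIs per 1,000 central-line days.
# Column and stratum names are illustrative.
import pandas as pd

def pooled_mean_rate(df: pd.DataFrame, strata: list) -> pd.DataFrame:
    g = df.groupby(strata)[["clabsi_count", "central_line_days"]].sum()
    g["clabsis_per_1000_line_days"] = 1000 * g["clabsi_count"] / g["central_line_days"]
    return g.reset_index()

# e.g., pooled_mean_rate(events, ["location_group", "pathogen"])
```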
From 2011 to 2017, 136,264 CLABSIs were reported to the NHSN by adult and pediatric acute-care locations; adult ICUs and wards reported the most CLABSIs: 59,461 (44%) and 40,763 (30%), respectively. In 2017, the most common pathogens were Candida spp/yeast in adult ICUs (27%) and Enterobacteriaceae in adult wards, pediatric wards, oncology wards, and PICUs (23%–31%). Most pathogen-specific CLABSI rates decreased over time, excepting Candida spp/yeast in adult ICUs and Enterobacteriaceae in oncology wards, which increased, and Staphylococcus aureus rates in pediatric locations, which did not change.
The pathogens associated with CLABSIs differ across acute-care location groups. Pathogen-targeted prevention efforts, such as those aimed at preventing Candida spp/yeast and Enterobacteriaceae CLABSIs, might augment current prevention strategies and further reduce national rates.
The duodenum lies in front of the right kidney and renal vessels, the right psoas muscle, the inferior vena cava, and the aorta (Figure 26.1).
The duodenum is approximately 25 cm in length. It is the most fixed part of the small intestine and has no mesentery. It is anatomically divided into four parts:
The superior or first portion is intraperitoneal along the anterior half of its circumference. Superiorly, the first portion is attached to the hepatoduodenal ligament. The posterior wall is associated with the gastroduodenal artery, common bile duct, and the portal vein.
The descending or second portion is retroperitoneal and shares a medial border with the head of the pancreas. It is bordered posteriorly by the medial surface of the right kidney, the right renal vessels, and the inferior vena cava. The hepatic flexure and transverse colon cross anteriorly. The common bile duct and main pancreatic duct drain into the medial wall of the descending duodenum.
The transverse or third portion is also entirely retroperitoneal. Posteriorly, it is bordered by the inferior vena cava and the aorta. The superior mesenteric vessels cross in front of this portion of the duodenum.
The ascending or fourth portion of the duodenum is approximately 2.5 cm in length and is primarily retroperitoneal, except for the most distal segment. It crosses anterior to and ascends to the left of the aorta to join the jejunum at the ligament of Treitz.
The common bile duct courses laterally within the hepatoduodenal ligament and lies posterior to the first portion of the duodenum and pancreatic head, becoming partially invested within the parenchyma of the pancreatic head. The main pancreatic duct then joins the common bile duct to drain into the ampulla of Vater within the second portion of the duodenum. The ampulla of Vater is located approximately 7 cm from the pylorus. The accessory pancreatic duct drains approximately 2 cm proximal to the ampulla of Vater.
The vascular supply to the duodenum is intimately associated with the head of the pancreas. The head of the pancreas and the second portion of the duodenum derive their blood supply from the anterior and posterior pancreaticoduodenal arcades (Figure 26.2). These arcades lie on the surface of the pancreas near the duodenal C loop. Attempts to separate these two organs at this location usually result in ischemia of the duodenum.
Sulfur-bearing monazite-(Ce) occurs in silicified carbonatite at Eureka, Namibia, forming rims up to ~0.5 mm thick on earlier-formed monazite-(Ce) megacrysts. We present X-ray photoelectron spectroscopy data demonstrating that sulfur is accommodated predominantly in monazite-(Ce) as sulfate, via a clino-anhydrite-type coupled substitution mechanism. Minor sulfide and sulfite peaks in the X-ray photoelectron spectra, however, also indicate that more complex substitution mechanisms incorporating S²⁻ and S⁴⁺ are possible. Incorporation of S⁶⁺ through clino-anhydrite-type substitution results in an excess of M²⁺ cations, which previous workers have suggested is accommodated by auxiliary substitution of OH⁻ for O²⁻. However, Raman data show no indication of OH⁻, and instead we suggest charge imbalance is accommodated through F⁻ substituting for O²⁻. The accommodation of S in the monazite-(Ce) results in considerable structural distortion that may account for relatively high contents of ions with radii beyond those normally found in monazite-(Ce), such as the heavy rare earth elements, Mo, Zr and V. In contrast to S-bearing monazite-(Ce) in other carbonatites, S-bearing monazite-(Ce) at Eureka formed via a dissolution–precipitation mechanism during prolonged weathering, with S derived from an aeolian source. While large S-bearing monazite-(Ce) grains are likely to be rare in the geological record, secondary S-bearing monazite-(Ce) formed under these conditions may be a feasible mineral for dating palaeo-weathering horizons.
This paper summarizes a multi-state, multi-year study assessing the potential for local agriculture in northern New England. While the region is largely rural, its agricultural sector differs greatly from that of the rest of the United States, and demand for locally produced food has been increasing. To assess this unique economic landscape, researchers and Cooperative Extension at the Universities of Maine, New Hampshire, and Vermont investigated four key areas: (1) local food capacities, (2) constraints to agricultural expansion, (3) consumer preferences for local and organic produce, and (4) the role of intermediaries as alternative local food outlets. The project included input from local farmers, Extension members, restaurants, and the general public. We present the four research areas in a sequential, overlapping fashion. The timing of our research was such that each step in the process informed the next and can be used as a template for assessing a region's potential for local agricultural production.
The introduction of agriculture is a key defining element of the Neolithic, yet considerable debate persists concerning the nature and significance of early farming practices in north-west Europe. This paper reviews archaeobotanical evidence from 95 Neolithic sites (c. 4000–2200 cal BC) in Wales, focusing on wild plant exploitation, the range of crops present, and the significance of cereals in subsistence practices. Cereal cultivation practices in Early Neolithic Wales are also examined using cereal grain stable carbon (δ¹³C) and nitrogen (δ¹⁵N) isotope analysis. The Early Neolithic period witnessed the widespread uptake of cereals alongside considerable evidence for continued wild plant exploitation, notably hazelnuts and wild fruits. The possibility that wild plants and woodlands were deliberately managed or altered to promote the growth of certain plants is outlined. Small cereal grain assemblages, with little evidence for chaff and weed seeds, are common in the Early Neolithic, whereas cereal-rich sites are rare. Emmer wheat was the dominant crop in the Early Neolithic, while other cereal types were recorded in small quantities. Cereal nitrogen isotope (δ¹⁵N) values from Early Neolithic sites provided little evidence for intensive manuring. We suggest that cultivation conditions may have been less intensive when compared to other areas of Britain and Europe. In the later Neolithic period, there is evidence for a decline in the importance of cereals. Finally, the archaeobotanical and crop isotope data from this study are considered within a wider European context.
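For reference, the δ¹³C and δ¹⁵N values discussed above follow standard delta notation, expressed in per mil relative to an international standard (VPDB for carbon, atmospheric N₂ for nitrogen):

$$\delta = \left(\frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1\right) \times 1000\ \text{‰}$$

where R is the corresponding heavy-to-light isotope ratio (¹³C/¹²C or ¹⁵N/¹⁴N).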
Thermal infrared data collected by the Thermal Emission Spectrometer (TES) and Thermal Emission Imaging System (THEMIS) instruments have significantly impacted the understanding of martian surface mineralogy. Spatial/temporal variations in igneous lithologies; the discovery of quartz, carbonates, and chlorides; and the widespread identification of amorphous, silica-enriched materials reveal a planet that has experienced a diversity of primary and secondary geologic processes including igneous crustal evolution, regional sedimentation, aqueous alteration, and glacial/periglacial activity.
To describe common pathogens and antimicrobial resistance patterns for healthcare-associated infections (HAIs) that occurred during 2015–2017 and were reported to the Centers for Disease Control and Prevention’s (CDC’s) National Healthcare Safety Network (NHSN).
Data from central line-associated bloodstream infections (CLABSIs), catheter-associated urinary tract infections (CAUTIs), ventilator-associated events (VAEs), and surgical site infections (SSIs) were reported from acute-care hospitals, long-term acute-care hospitals, and inpatient rehabilitation facilities. This analysis included device-associated HAIs reported from adult location types, and SSIs among patients ≥18 years old. Percentages of pathogens with nonsusceptibility (%NS) to selected antimicrobials were calculated for each HAI type, location type, surgical category, and surgical wound closure technique.
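The %NS metric is a pooled proportion: among isolates tested against a given agent, the percentage interpreted as intermediate or resistant (the NHSN convention for nonsusceptibility). A minimal sketch, assuming a hypothetical isolate-level table in which each row carries one pathogen–antimicrobial interpretation:

```python
# Sketch of the %NS calculation: among isolates tested against a given agent,
# the percentage interpreted as intermediate or resistant (NHSN convention).
# Table layout and column names are illustrative.
import pandas as pd

def percent_ns(df: pd.DataFrame, strata: list) -> pd.Series:
    tested = df[df["interpretation"].isin(["S", "I", "R"])].copy()
    tested["nonsusceptible"] = tested["interpretation"].isin(["I", "R"])
    return 100 * tested.groupby(strata)["nonsusceptible"].mean()

# e.g., percent_ns(isolates, ["hai_type", "location_type", "pathogen", "drug"])
```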
Overall, 5,626 facilities performed adult HAI surveillance during this period, most of which were general acute-care hospitals with <200 beds. Escherichia coli (18%), Staphylococcus aureus (12%), and Klebsiella spp (9%) were the 3 most frequently reported pathogens. Pathogens varied by HAI and location type, with oncology units having a distinct pathogen distribution compared to other settings. The %NS for most pathogens was significantly higher among device-associated HAIs than SSIs. In addition, pathogens from long-term acute-care hospitals had a significantly higher %NS than those from general hospital wards.
This report provides an updated national summary of pathogen distributions and antimicrobial resistance among select HAIs and pathogens, stratified by several factors. These data underscore the importance of tracking antimicrobial resistance, particularly in vulnerable populations such as long-term acute-care hospitals and intensive care units.
To describe common pathogens and antimicrobial resistance patterns for healthcare-associated infections (HAIs) among pediatric patients that occurred in 2015–2017 and were reported to the Centers for Disease Control and Prevention’s National Healthcare Safety Network (NHSN).
Antimicrobial resistance data were analyzed for pathogens implicated in central line-associated bloodstream infections (CLABSIs), catheter-associated urinary tract infections (CAUTIs), ventilator-associated pneumonias (VAPs), and surgical site infections (SSIs). This analysis was restricted to device-associated HAIs reported from pediatric patient care locations and SSIs among patients <18 years old. Percentages of pathogens with nonsusceptibility (%NS) to selected antimicrobials were calculated by HAI type, location type, and surgical category.
Overall, 2,545 facilities performed surveillance of pediatric HAIs in the NHSN during this period. Staphylococcus aureus (15%), Escherichia coli (12%), and coagulase-negative staphylococci (12%) were the 3 most commonly reported pathogens associated with pediatric HAIs. Pathogens and the %NS varied by HAI type, location type, and/or surgical category. Among CLABSIs, the %NS was generally lowest in neonatal intensive care units and highest in pediatric oncology units. Staphylococcus spp were particularly common among orthopedic, neurosurgical, and cardiac SSIs; however, E. coli was more common in abdominal SSIs. Overall, antimicrobial nonsusceptibility was less prevalent in pediatric HAIs than in adult HAIs.
This report provides an updated national summary of pathogen distributions and antimicrobial resistance patterns among pediatric HAIs. These data highlight the need for continued antimicrobial resistance tracking among pediatric patients and should encourage the pediatric healthcare community to use such data when establishing policies for infection prevention and antimicrobial stewardship.
Quaternary processes and environmental changes are often difficult to assess in remote subantarctic islands due to high surface erosion rates and overprinting of sedimentary products in locations that can be a challenge to access. We present a set of high-resolution, multichannel seismic lines and complementary multibeam bathymetry collected off the eastern (leeward) side of the subantarctic Auckland Islands, about 465 km south of New Zealand's South Island. These data constrain the erosive and depositional history of the island group, and they reveal an extensive system of sediment-filled valleys that extend offshore to depths that exceed glacial low-stand sea level. Although shallow marine U-shaped valleys and moraines are imaged, the rugged offshore geomorphology of the paleovalley floors and the stratigraphy of infill sediments suggest that the valley floors were shaped by subaerial fluvial erosion and subsequently filled by lacustrine, fjord, and fluvial sedimentary processes.
Basal ice of glaciers and ice sheets frequently contains a well-developed stratification of distinct, semi-continuous, alternating layers of debris-poor and debris-rich ice. Here, the nature and distribution of shear within stratified basal ice are assessed through the anisotropy of magnetic susceptibility (AMS) of samples collected from Matanuska Glacier, Alaska. Generally, the AMS reveals consistent moderate-to-strong fabrics reflecting simple shear in the direction of ice flow; however, AMS is also dependent upon debris content and morphology. While sample anisotropy is statistically similar throughout the sampled section, debris-rich basal ice composed of semi-continuous mm-scale layers (the stratified facies) possesses well-defined triaxial to oblate fabrics reflecting shear in the direction of ice flow, whereas debris-poor ice containing mm-scale star-shaped silt aggregates (the suspended facies) possesses nearly isotropic fabrics. Thus, deformation within the stratified basal ice appears concentrated in debris-rich layers, likely the result of decreased crystal size and greater availability of unfrozen water associated with high debris content. These results suggest that variations in debris content over small spatial scales influence ice rheology and deformation in the basal zone.
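Fabric strength and shape of this kind are conventionally quantified with Jelinek's (1981) corrected degree of anisotropy P′ and shape parameter T, computed from the principal susceptibilities of each sample. A minimal sketch (function and argument names illustrative):

```python
# Sketch of Jelinek's (1981) AMS fabric parameters, computed from the
# principal susceptibilities k1 >= k2 >= k3 of a sample. T > 0 indicates
# oblate fabrics, T < 0 prolate; P' increases with anisotropy degree.
import numpy as np

def jelinek_params(k1: float, k2: float, k3: float) -> tuple:
    n1, n2, n3 = np.log(k1), np.log(k2), np.log(k3)
    n_mean = (n1 + n2 + n3) / 3
    # Corrected degree of anisotropy
    p_prime = np.exp(np.sqrt(2 * ((n1 - n_mean) ** 2 +
                                  (n2 - n_mean) ** 2 +
                                  (n3 - n_mean) ** 2)))
    # Shape parameter: oblate (T -> +1) vs. prolate (T -> -1)
    t_shape = (2 * n2 - n1 - n3) / (n1 - n3)
    return p_prime, t_shape
```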
The rocky shores of the north-east Atlantic have long been studied. Our focus is from Gibraltar to Norway plus the Azores and Iceland. Phylogeographic processes shape biogeographic patterns of biodiversity. Long-term and broadscale studies have shown the responses of biota to past climate fluctuations and more recent anthropogenic climate change. Inter- and intra-specific interactions along sharp local environmental gradients shape distributions and community structure and hence ecosystem functioning. Shifts in domination from fucoids in shelter to barnacles/mussels in exposure are mediated by grazing by patellid limpets. Further south, fucoids become increasingly rare, with species disappearing or becoming restricted to estuarine refuges owing to greater desiccation and grazing pressure. Mesoscale processes influence bottom-up nutrient forcing and larval supply, hence affecting species abundance and distribution, and can be proximate factors setting range edges (e.g., the English Channel, the Iberian Peninsula). Impacts of invasive non-native species are reviewed. Knowledge gaps, such as work on rockpools and host–parasite dynamics, are also outlined.
Paediatric hearing loss rates in Ghana are currently unknown.
A cross-sectional study was conducted in peri-urban Kumasi, Ghana; children (aged 3–15 years) were recruited from randomly selected households. Selected children underwent otoscopic examination prior to in-community pure tone screening using the portable ShoeBox audiometer. The LittlEARS auditory questionnaire was also administered to caregivers and parents.
Data were collected from 387 children. After conditioning, 362 children were screened using monaural pure tones presented at 25 dB. Twenty-five children could not be conditioned to behavioural audiometric screening. Eight children were referred based on audiometric screening results. Of those, four were identified as having hearing loss. Four children scored less than the maximum mark of 35 on the LittlEARS questionnaire. Of those, three had hearing loss as identified through pure tone screening. The predominant physical finding on otoscopy was ear canal cerumen impaction.
Paediatric hearing loss is prevalent in Ghana, and should be treated as a public health problem warranting further evaluation and epidemiological characterisation.
The pig industry faces many animal welfare issues. Among these, biting behaviour has a high incidence. It is indicative of an existing problem in biters and is a source of physical damage and psychological stress for the victims. We categorize this behaviour into aggressive and non-aggressive biting, the latter often being directed towards the tail. This review focusses specifically on predisposing factors in early life, comprising the prenatal and postnatal periods up to weaning, for the expression of aggressive and non-aggressive biting later in life. The influence of personality and coping style has been examined in a few studies; findings vary across them, so further evaluation is needed. Regarding the effect of environmental factors, the number of scientific papers is low (fewer than five papers for most factors). No clear influence of prenatal factors has been identified to date. Aggressive biting is reduced by undernutrition, cross-fostering and socialization before weaning. Non-aggressive biting is increased by undernutrition, social stress due to competition and cross-fostering. These latter three factors are highly dependent on litter size at birth. The use of familiar odours may contribute to reducing biting when pigs are moved from one environment to another by alleviating the level of stress associated with novelty. Even though the current environment in which pigs express biting behaviours is of major importance, the pre-weaning environment should be optimized to reduce the likelihood of this problem.
Despite the frequency with which refugees suffer bereavement, there is a dearth of research into the prevalence and predictors of problematic grief reactions in this population. To address this gap, we report a nationally representative, population-based study of refugees to determine the prevalence of probable prolonged grief disorder (PGD) and its associated problems.
This study recruited participants from the Building a New Life in Australia (BNLA) prospective cohort study of refugees admitted to Australia between October 2013 and February 2014. The current data were collected in 2015–2016, and comprised 1767 adults, as well as 411 children of the adult respondents. Adult refugees were assessed for trauma history, post-migration difficulties, probable PGD, post-traumatic stress disorder (PTSD) and mental illness. Children were administered the Strengths and Difficulties Questionnaire.
In this cohort, 38.1% of refugees reported bereavement, of whom 15.8% reported probable PGD; this represents 6.0% of the entire cohort. Probable PGD was associated with a greater likelihood of mental illness, probable PTSD, severe mental illness, current unemployment, and reported disability. Children of refugees with probable PGD reported more psychological difficulties than those whose parents did not have probable PGD. Probable PGD was also associated with a history of imprisonment, torture, and separation from family. Only 56.3% of refugees with probable PGD had received psychological assistance.
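As a consistency check, the cohort-wide prevalence follows directly from the two reported proportions:

$$0.381 \times 0.158 = 0.0602 \approx 6.0\%$$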
Bereavement and probable PGD appear highly prevalent in refugees, and PGD seems to be associated with disability in refugees and psychological problems in their children. The low rate of access to mental health assistance for these refugees highlights the need to address this issue in refugee populations.
A ²³⁰Th/U-dated stalagmite from Hulu Cave was analyzed for δ¹⁸O, δ¹³C, and trace elements. A ~10-yr-resolution δ¹⁸O record, spanning 51.7–42.6 ka, revealed Dansgaard-Oeschger (DO) events 14 to 11. A similar rapid transition and synchronous timing of the onset of DO 12 is evident between the Greenland and Hulu Cave records, which suggests a common forcing mechanism of DO cycles in the North Atlantic and monsoonal region of Asia. Centennial-scale monsoonal oscillations in the cave δ¹⁸O record are indicative of hydroclimatic instability during interstadials. After removing the signals of remote moisture sources, the proportion of moisture from nearby sources is found to be higher during stadials than during interstadials. To explain this, we propose that the movement of the westerly jet is an important control on the balance of nearby and distant moisture sources in East Asia. In addition, the records of δ¹³C and trace element ratios, which are proxies of local environmental changes, resemble the δ¹⁸O record on the scale of DO cycles, as well as on even shorter timescales. This suggests that hydrological processes and biological activity at the cave site respond sensitively to the monsoonal changes.
Tree-ring reconstructions of temperature often target trees at altitudinal or latitudinal tree line where annual growth is broadly expected to be limited by and respond to temperature variability. Based on this principle, regions with sparse tree line would seem to be restricted in their potential to reconstruct past temperatures. In the northeastern United States, there are only two published temperature reconstructions. Previous work in the region reconstructing moisture availability, however, has shown that using a greater diversity of species can improve reconstruction model skill. Here, we use a network of 228 tree-ring records composed of 29 species to test the hypothesis that an increase in species diversity among the pool of predictors improves reconstructions of past temperatures. Chamaecyparis thyoides alone explained 31% of the variability in observed cool-season minimum temperatures, but a multispecies model increased the explained variance to 44%. Liriodendron tulipifera, a species not previously used for temperature reconstructions, explained a similar amount of variance as Chamaecyparis thyoides (12.9% and 20.8%, respectively). Increasing the species diversity of tree proxies has the potential for improving reconstruction of paleotemperatures in regions lacking latitudinal or elevational tree lines provided that long-lived hardwood records can be located.
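The comparison described is, in essence, a nested regression test: does adding chronologies from more species raise the variance explained in the instrumental temperature series? A minimal sketch, with hypothetical inputs ('temp' as the observed cool-season minimum temperature series and each remaining column one species' ring-width chronology over the common period):

```python
# Hedged sketch of the single-species vs. multispecies comparison: fit an
# OLS calibration model and compare explained variance (R^2). Inputs and
# column names are illustrative, not the study's actual data.
import pandas as pd
import statsmodels.api as sm

def explained_variance(df: pd.DataFrame, predictors: list) -> float:
    X = sm.add_constant(df[predictors])
    return sm.OLS(df["temp"], X).fit().rsquared

# Single-species model vs. a multispecies pool:
# r2_single = explained_variance(df, ["chamaecyparis_thyoides"])
# r2_multi  = explained_variance(df, ["chamaecyparis_thyoides",
#                                     "liriodendron_tulipifera", ...])
```

In practice such models are screened with cross-validation (e.g., split-period calibration/verification) before being used for reconstruction; the sketch shows only the variance-explained comparison.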
Immune system markers may predict affective disorder treatment response, but whether an overall immune system marker predicts bipolar disorder treatment effect is unclear.
Bipolar CHOICE (N = 482) and LiTMUS (N = 283) were similar comparative effectiveness trials treating patients with bipolar disorder for 24 weeks with four different treatment arms (standard-dose lithium, quetiapine, moderate-dose lithium plus optimised personalised treatment (OPT) and OPT without lithium). We performed secondary mixed effects linear regression analyses adjusted for age, gender, smoking and body mass index to investigate relationships between pre-treatment white blood cell (WBC) levels and clinical global impression scale (CGI) response.
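A minimal sketch of the kind of adjusted mixed-effects model described, assuming a hypothetical long-format table of repeated CGI measurements per participant over the 24 weeks (all column names illustrative):

```python
# Sketch of a covariate-adjusted mixed-effects linear regression relating
# baseline WBC category and treatment arm to CGI response, with a random
# intercept per participant. Column names are assumptions, not the trials'.
import statsmodels.formula.api as smf

def fit_cgi_model(df):
    model = smf.mixedlm(
        "cgi ~ wbc_group * treatment_arm + age + gender + smoking + bmi",
        data=df,
        groups=df["participant_id"],  # random intercept for repeated measures
    )
    return model.fit()
```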
Compared to participants with WBC counts of 4.5–10 × 10⁹/l, participants with WBC < 4.5 × 10⁹/l or WBC ≥ 10 × 10⁹/l showed similar improvement within each specific treatment arm and in gender-stratified analyses.
An overall immune system marker did not predict differential treatment response to four different treatment approaches for bipolar disorder all lasting 24 weeks.
Research was conducted from 2013 to 2015 across three sites in Mississippi to evaluate corn response to sublethal paraquat or fomesafen (105 and 35 g ai ha⁻¹, respectively) applied PRE, or to corn at the V1, V3, V5, V7, or V9 growth stages. Fomesafen injury to corn at 3 d after treatment (DAT) ranged from 0% to 38%, and declined over time. Compared with the nontreated control (NTC), corn height 14 DAT was reduced approximately 15% due to fomesafen exposure at V5 or V7. Exposure at V1 or V7 resulted in 1,220 and 1,110 kg ha⁻¹ yield losses, respectively, compared with the NTC, but yield losses were not observed at any other growth stage. Fomesafen exposure at any growth stage did not affect corn ear length or number of kernel rows relative to the NTC. Paraquat injury to corn ranged from 26% to 65%, depending on growth stage and evaluation interval. Corn exposure to paraquat at V3 or V5 consistently caused greater injury across evaluation intervals, compared with other growth stages. POST timings of paraquat exposure resulted in corn height reductions of 13% to 50%, except at V7, which was most likely due to rapid internode elongation at that stage. Likewise, yield loss occurred after all exposure times of paraquat except PRE, compared with the NTC. Corn yield was reduced 1,740 to 5,120 kg ha⁻¹ compared with the NTC, generally worsening as exposure time was delayed. Paraquat exposure did not reduce corn ear length, compared with the NTC, at any growth stage. However, paraquat exposure at V3 or V5 was associated with reduction of kernel rows by 1.1 and 1.7, respectively, relative to the NTC. Paraquat and fomesafen applications near corn should be avoided if conditions are conducive for off-target movement, because significant injury and yield loss can result.