Background: Face masks are a major tool for reducing the spread of COVID-19 because (1) they protect the wearer from virus-laden aerosols in the environment and (2) they reduce aerosol emissions to the environment from infected individuals. Methods that quantify fitted mask filtration efficiency for protection of the wearer are well established (eg, Sickbert-Bennett et al, JAMA Intern Med 2020;180:1607). In contrast, current methods for assessing face-mask containment efficiency are generally semiquantitative and rely on measurement of a very low concentration of aerosols emitted from a healthy or infected human, or on the use of mannequins in which a high concentration of surrogate aerosols can be introduced inside the mask. Methods: Expanding on our standard methods for fitted face-mask filtration efficiency, we designed a small-volume, low-ventilation chamber to accommodate a seated study participant. The study participant wore a ported face mask to allow introduction of a stream of 0.05-μm NaCl particles at a constant concentration (TSI 8026 particle generator) into the mask space. The ambient chamber concentration was continuously measured by a TSI 3775 condensation particle counter sampling 2 feet (~0.6 m) in front of the participant’s head over a series of three 3-minute periods: (1) resting, (2) reading out loud, and (3) repeated forceful coughing (2 × 10 coughs; ~450 L/min peak flows). Figure 1 shows a raw data sample for the coughing procedure. Containment efficiency (%) for each mask and procedure was determined as 100 × (1 − [the average of all 1-second ambient concentration values between 30 and 180 seconds, divided by the same average for the “no mask” condition]). Results: Table 1 shows the average percentage containment efficiency over 2 study days for each mask and procedure in an adult male. The 2 ear-loop masks (KN95 and procedure mask) showed the greatest reduction in containment efficiency during coughing relative to resting breathing, likely owing to decreased mask fit caused by the transient pressure increases inside the mask during coughs. The N95 was least affected by the introduction of reading and/or coughing, maintaining near-95% containment efficiency throughout. Conclusions: Our preliminary data on the fitted containment efficiency of masks under different conditions suggest that containment efficiency closely mirrors masks’ performance for personal protection. This information may aid in providing optimal source control in indoor environments.
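The containment-efficiency formula above is a straightforward ratio of time-averaged particle counts. A minimal sketch in Python of that calculation (hypothetical function and variable names; the 1-second sampling and the 30–180-second averaging window follow the abstract):

```python
import numpy as np

def containment_efficiency(masked_counts, no_mask_counts, t_start=30, t_end=180):
    """Containment efficiency (%) as defined in the abstract:
    100 * (1 - mean ambient concentration with the mask on, divided by the
    mean for the 'no mask' condition), using 1-second samples from 30-180 s."""
    masked = np.asarray(masked_counts, dtype=float)[t_start:t_end]
    unmasked = np.asarray(no_mask_counts, dtype=float)[t_start:t_end]
    return 100.0 * (1.0 - masked.mean() / unmasked.mean())

# Hypothetical example: a mask that leaks roughly 5% of the unmasked signal
rng = np.random.default_rng(0)
no_mask = rng.poisson(2000, size=200)  # 1-second CPC counts, no mask worn
masked = rng.poisson(100, size=200)    # the same procedure with the mask on
print(f"{containment_efficiency(masked, no_mask):.1f}% containment")
```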
This article presents empirical findings about the distinctiveness of smaller voluntary sector organisations (VSOs) involved in welfare service provision, based on in-depth, qualitative case study research. We identify a series of organisational features and practices that can distinguish smaller VSOs from larger organisations, including how they are governed and managed, their approach to their work, and their position relative to other providers. To explain our findings, we draw on the concept of stakeholder ambiguity, posited by Billis and Glennerster (1998) and commonly cited in relation to distinctiveness. We identified several manifestations of stakeholder ambiguity and confirm the concept’s explanatory importance, although we argue that our understanding of distinctiveness is enhanced when stakeholder ambiguity is considered alongside other closely related features, such as embeddedness in a local geographic community and informal, familial, care-based organisational cultures. Our findings also highlight the fragility of smaller VSOs. We argue that this combination of distinctiveness and fragility creates a tension for social policy makers, many of whom recognise the value of smaller VSOs and the risks they face but must weigh this against the requirement to allocate resources for statutory services as effectively as possible.
Although dissemination and implementation (D&I) science is a growing field, many health researchers with relevant D&I expertise do not self-identify as D&I researchers. The goal of this work was to analyze the distribution, clustering, and recognition of D&I expertise in an academic institution.
Methods:
A snowball survey was administered to investigators at the University of Rochester with experience and/or interest in D&I research. Respondents were asked to identify their level of D&I expertise and to nominate others who were experienced and/or active in D&I research. We used social network analysis to examine the nomination networks.
Results:
Sixty-eight participants provided information about their D&I expertise. Thirty-eight percent of survey respondents self-identified as D&I researchers, 24% reported conducting D&I research under different labels, and 38% were familiar with D&I concepts. D&I researchers were, on average, the most central actors in the network (nominated most often by other survey participants) and had the highest within-group density, indicating wide recognition both by colleagues and among themselves. Researchers who applied D&I under different labels had the highest within-group reciprocity (25%) and the highest between-group reciprocity (29%), the latter with researchers familiar with D&I. Participants showed a significant tendency to nominate peers within their own departments and expertise categories.
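As a hedged illustration of the network metrics reported here (not the study’s analysis code; the toy network and group labels are hypothetical), in-degree centrality, within-group density, and reciprocity of a nomination network can be computed with networkx:

```python
import networkx as nx

# Hypothetical nomination network: an edge A -> B means A nominated B
G = nx.DiGraph([("a", "b"), ("b", "a"), ("c", "b"),
                ("d", "b"), ("d", "e"), ("e", "d")])

# Hypothetical expertise groups mirroring the survey's three categories
groups = {"a": "d&i", "b": "d&i", "c": "other-label",
          "d": "other-label", "e": "familiar"}

# Centrality: who is nominated most often (normalized in-degree)
centrality = nx.in_degree_centrality(G)

# Within-group density: observed within-group ties / possible within-group ties
di_nodes = [n for n, g in groups.items() if g == "d&i"]
within_density = nx.density(G.subgraph(di_nodes))

# Reciprocity: share of ties that are returned (A -> B and B -> A)
overall_reciprocity = nx.reciprocity(G)

print(centrality, within_density, overall_reciprocity)
```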
Conclusions:
Identifying and engaging unrecognized clusters of expertise related to D&I research may provide opportunities for mutual learning and dialog and will be critical to bridging across departmental and topic area silos and building capacity for D&I in academic settings.
Objective:
To estimate population-based rates and to describe clinical characteristics of hospital-acquired (HA) influenza.
Design:
Cross-sectional study.
Setting:
US Influenza Hospitalization Surveillance Network (FluSurv-NET) during 2011–2012 through 2018–2019 seasons.
Methods:
Patients were identified through provider-initiated or facility-based testing. HA influenza was defined as a positive influenza test date and respiratory symptom onset both >3 days after admission. Patients with a positive test date >3 days after admission but a missing respiratory symptom onset date were classified as possible HA influenza.
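The case definition is a simple temporal rule; a minimal sketch of how it might be encoded (hypothetical function and field names, not the surveillance system’s code):

```python
from datetime import date
from typing import Optional

def classify_influenza_case(admit: date, positive_test: date,
                            symptom_onset: Optional[date]) -> str:
    """Apply the abstract's definition: HA influenza requires both the positive
    test date and respiratory symptom onset >3 days after admission; a late
    positive test with a missing onset date is classified as possible HA."""
    if (positive_test - admit).days <= 3:
        return "not hospital-acquired"
    if symptom_onset is None:
        return "possible hospital-acquired"
    if (symptom_onset - admit).days > 3:
        return "hospital-acquired"
    return "not hospital-acquired"

print(classify_influenza_case(date(2019, 1, 1), date(2019, 1, 6), date(2019, 1, 5)))
# -> hospital-acquired
```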
Results:
Among 94,158 influenza-associated hospitalizations, 353 (0.4%) had HA influenza. The overall adjusted rate of HA influenza was 0.4 per 100,000 persons. Among HA influenza cases, 50.7% were 65 years of age or older, and 52.0% of children and 95.7% of adults had underlying conditions; 44.9% overall had received influenza vaccine prior to hospitalization. Overall, 34.5% of HA cases received ICU care during hospitalization, 19.8% required mechanical ventilation, and 6.7% died. After including possible HA cases, prevalence among all influenza-associated hospitalizations increased to 1.3% and the adjusted rate increased to 1.5 per 100,000 persons.
Conclusions:
Over 8 seasons, rates of HA influenza were low but were likely underestimated because testing was not systematic. A high proportion of patients with HA influenza were unvaccinated and had severe outcomes. Annual influenza vaccination and implementation of robust hospital infection control measures may help to prevent HA influenza and its impacts on patient outcomes and the healthcare system.
Liben Lark Heteromirafra archeri is a ‘Critically Endangered’ species threatened by the loss and degradation of grassland at the Liben Plain, southern Ethiopia, one of only two known sites for the species. We use field data from nine visits between 2007 and 2019 and satellite imagery to quantify changes over time in the species’ abundance and in the extent and quality of its habitat. We estimate that the population fell from around 279 singing males (95% CL: 182–436) in 2007 to around 51 (14–144) in 2013, after which too few birds were recorded to estimate population size. Arable cultivation first appeared on the plain in the early 1990s and by 2019 more than a third of the plain had been converted to crops. Cultivation was initially confined to the fertile black soils but from 2008 began to spread into the less fertile red soils that cover most of the plain. Liben Larks strongly avoided areas with extensive bare ground or trees and bushes, but the extent of these did not change significantly over the survey period. A plausible explanation for the species’ decline is that grassland degradation, caused before 2007 by continuous high-pressure grazing by livestock, reduced its rates of reproduction or survival to a level that could not support its previous population. Since 2015, communal kalos (grazing exclosures) have been established to generate forage and other resources in the hope of also providing breeding habitat for Liben Larks. Grass height and density within four grassland kalos in 2018 greatly exceeded those in the surrounding grassland, indicating that the plain retains the potential to recover rapidly if appropriately managed. Improvement of grassland structure through the restitution of traditional and sustainable rangeland management regimes and the reversion of cereal agriculture to grassland are urgently needed to avert the species’ extinction.
Background: Carbapenem-resistant Enterobacteriaceae (CRE) are increasingly common in the United States and have the potential to spread widely across healthcare networks. Only a fraction of patients with CRE carriage (ie, infection or colonization) are identified by clinical cultures. Interventions to reduce CRE transmission can be explored with agent-based models (ABMs) composed of unique agents (eg, patients) represented by a synthetic population, a model-generated representation of the real population. We used electronic health record data to determine CRE carriage risk, and we discuss how these results can inform CRE transmission parameters for hospitalized agents in a regional healthcare-network ABM. Methods: We reviewed the laboratory data of patients admitted during July 1, 2016−June 30, 2017, to any of 7 short-term acute-care hospitals of a regional healthcare network in North Carolina (N = 118,022 admissions) to find clinically detected cases of CRE carriage. A case was defined as the first occurrence of Enterobacter spp, Escherichia coli, or Klebsiella spp resistant to any carbapenem isolated from a clinical specimen in an admitted patient. We used Poisson regression to estimate clinically detected CRE carriage risk according to variables common to both the electronic health records and the ABM synthetic population, including patient demographics, systemic antibiotic administration, intensive care unit stay, comorbidities, length of stay, and admitting hospital size. Results: We identified 58 cases (0.05% of all admissions) of CRE carriage. Among these cases, 30 (52%) were ≥65 years of age and 37 (64%) were female. During their admission, 47 cases (81%) were administered systemic antibiotics and 18 cases (31%) had an intensive care unit stay. Patients administered systemic antibiotics and those with an intensive care unit stay had CRE carriage risks 6.5 times (95% CI, 3.4–12.5) and 4.9 times (95% CI, 2.8–8.5) higher, respectively, than patients without these exposures (Fig. 1). Patients ≥50 years of age, those with a higher Elixhauser comorbidity index score, and those with a longer length of stay also had increased CRE carriage risk. Conclusions: Among admissions in our dataset, CRE carriage risk was associated with systemic antibiotic exposure, intensive care unit stay, higher Elixhauser comorbidity index score, and longer length of stay. We will use these risk estimates in the ABM to inform agents’ CRE carriage status upon hospital admission and the CRE transmission parameters for short-term acute-care hospitals. We will explore CRE transmission interventions in the parameterized regional healthcare-network ABM and assess the impact of underestimating CRE carriage.
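As a hedged sketch of the modeling step (not the authors’ code; the data file and column names are assumptions), a Poisson regression with robust standard errors, which yields risk ratios for a binary outcome such as clinically detected CRE carriage, could be fit with statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical admission-level data; cre_case is 1 for clinically detected carriage
df = pd.read_csv("admissions.csv")  # columns below are assumed, not from the abstract

model = smf.glm(
    "cre_case ~ systemic_abx + icu_stay + age_ge_50 + elixhauser + los_days"
    " + hospital_size",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC1")  # robust (sandwich) errors for the modified Poisson approach

# Exponentiated coefficients are risk ratios (the abstract reports ~6.5 for
# systemic antibiotics and ~4.9 for an ICU stay)
print(np.exp(model.params))
print(np.exp(model.conf_int()))  # 95% CIs on the risk-ratio scale
```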
Funding: This work was supported by Centers for Disease Control and Prevention (CDC) Cooperative Agreement number U01CK000527. The conclusions, findings, and opinions expressed do not necessarily reflect the official position of CDC.
Shanidar Cave in Iraqi Kurdistan became an iconic Palaeolithic site following Ralph Solecki's mid-twentieth-century discovery of Neanderthal remains. Solecki argued that some of these individuals had died in rockfalls and, controversially, that others were interred with formal burial rites, including one with flowers. Recent excavations have revealed the articulated upper body of an adult Neanderthal located close to the ‘flower burial’ location, the first articulated Neanderthal discovered in over 25 years. Stratigraphic evidence suggests that the individual was intentionally buried. This new find offers a rare opportunity to investigate Neanderthal mortuary practices utilising modern archaeological techniques.
Mental health problems are prevalent among therapists and may have a negative impact on therapist effectiveness. To counteract such problems, therapist self-care (for example, striking a balance between personal and professional demands and seeking personal therapy) has received increased attention. Conceptually, self-care can be considered part of a personal practice model, focusing on techniques that therapists engage with self-experientially for their personal and/or professional development. However, studies of the self-application of specific treatment techniques are lacking. We aimed to explore the use, and perceived usefulness, of cognitive behavioural therapy (CBT) techniques for self-care to prevent or treat therapists’ own mental health problems. Participants were practising therapists (n = 228) of various professional backgrounds in Sweden. Data were collected using a web-based survey. Descriptive statistics were calculated, and non-parametric analyses were conducted to investigate associations of 13 CBT techniques with therapist characteristics. Use of CBT techniques for self-care was highly prevalent among participants, and they perceived the techniques as useful, irrespective of characteristics such as gender, age, profession, years since graduation, clinical experience, level of training in CBT, and previous experience of personal CBT. The high prevalence of therapists’ use of treatment techniques for self-care is encouraging. Therapist self-care, including the self-application of treatment techniques, may be an important factor in therapist effectiveness, which calls for further development of personal practice models with respect to self-care and for future studies investigating associations between therapist mental health, self-care, effectiveness, and patient outcomes.
Key learning aims
(1) Therapist self-care using cognitive behavioural therapy (CBT) techniques to prevent or treat therapists’ own mental health problems may influence therapist effectiveness. However, studies of the self-application of treatment techniques are lacking.
(2) In the present survey study, the use of CBT techniques for self-care was highly prevalent among practising therapists, and they perceived the techniques as useful, irrespective of characteristics such as gender, age, profession, years since graduation, clinical experience, level of training in CBT, and previous experience of personal CBT.
(3) Almost all therapists believed that it was a good idea to self-apply CBT techniques for their own sake and for the benefit of their patients.
Coral reefs have experienced extensive degradation across the world over the last 50 years as a result of a variety of stressors operating at a range of spatial and temporal scales. To assess whether declines are continuing, or whether reefs are recovering, detailed baseline information is required across wide spatial scales. Unfortunately, for some regions this information is not readily available, making future reef trajectories difficult to determine. Here we characterized the current benthic community state of coral reefs in the Wakatobi region of Indonesia, one of the most biodiverse marine regions in the world. We surveyed 10 reef sites (at 5, 10, and 15 m depth) to explore spatial variation in coral reef benthic communities and provide a detailed baseline. Previous data (2002–2011) were available for coral, sponges, algae, and soft coral at six of our study sites; using this information, we determined whether any changes had occurred in the dominance of these benthic groups. We found that benthic assemblage composition differed significantly over relatively small spatial scales (2–10 km) and that hard coral cover was highly variable, ranging from 7% to 48% (average 19.5% ± 1.5 SE). Although coral cover appears to have declined since 2002 at all sites for which data were available, we found little evidence for widespread increases in other benthic groups or for regime shifts. Our study provides a comprehensive baseline dataset for the region that can be used in the future to determine rates of change in benthic communities.
Chapter 1 discusses the terminology of the name ‘Third Intermediate Period’ and surveys previous archaeological thought and theory, showing which ideas have shaped the discussions of and approaches to Third Intermediate Period archaeology, history, and culture. Chapter 1 also provides a discussion of the complex and disputed chronology of the Third Intermediate Period, outlining those areas that are agreed upon and those that are still debated.
Chapter 5 discusses the evidence presented in the preceding four chapters and its overall significance for understanding the development of Egypt during the Third Intermediate Period. The chapter discusses a series of interconnected characteristics identified within Third Intermediate Period culture and society, relating to: the political and economic power of regions; the nucleation of both settlements and people; self-sufficiency at the collective and individual level; defence, both physical and spiritual; regionality in terms of settlement development and material culture; and, finally, elite emulation through objects. These characteristics are also discussed in relation to the themes of continuity and change/transition compared with the preceding New Kingdom, and within aspects of the (Libyan) north and (Egyptian) south socio-cultural and socio-geographical divide.
Chapter 2 establishes the theoretical and archaeological context for the study of landscape and settlements in the Third Intermediate Period. It discusses the approaches to, and the problems inherent in, Egyptian settlement studies regarding landscape reconstruction, the preservation of ancient sites, and the definition of the concept of ‘site’. The chapter constructs a framework for understanding settlement archaeology in the Third Intermediate Period through the analysis of a dataset of Third Intermediate Period textual and archaeological material from landscapes and settlements, and it outlines landscape-archaeology theory in order to establish a methodology for approaching Egyptian settlement patterns and for defining what constitutes a ‘site’ in Egyptian settlement archaeology. The comprehensive record of surveys, excavation reports, and artefact/textual source analyses compiled in Appendix 1 is used in Chapter 2 to evaluate the potential for conducting landscape archaeology: whether settlement patterns are visible, the extent to which they differ from those of the New Kingdom, and the factors that may have influenced them, with due regard to the limitations of the data.
Chapter 4 demonstrates links back to Ramesside object preferences and forward to precursors of Late Period object typologies. The material culture of everyday life and the social practices of the people living at the time demonstrate that the Third Intermediate Period was a distinctly defined cultural phase within Egyptian society and Egyptology. There were changes in artefact usage and material culture, with implications for understanding the object world of the period and the lifecycles of the Third Intermediate Period population. The domestic material culture also demonstrates aspects of regionality related to the political fragmentation of the country. The ceramics of the period reveal continuity and change in storage, dining, and drinking cultures. Alongside ceramics, Chapter 4 also covers objects of personal adornment, tools, weapons, and re-used and salvaged stone. The artefacts and object world of the settlements allow exploration of the social status of the population, their religious beliefs, the extent of elite emulation and self-sufficiency in the replication of elite objects, the extent of object re-use and recycling, and the creation and availability of materials for object manufacture.