Dynamic changes in microRNAs in oocyte and cumulus cells before and after maturation may explain the spatiotemporal post-transcriptional gene regulation within bovine follicular cells during the oocyte maturation process. miR-20a has been previously shown to regulate proliferation and differentiation as well as progesterone levels in cultured bovine granulosa cells. In the present study, we aimed to demonstrate the function of miR-20a during the bovine oocyte maturation process. Maturation of cumulus–oocyte complexes (COCs) was performed at 39°C in a humidified atmosphere with 5% CO2 in air. The expression of miR-20a was investigated in the cumulus cells and oocytes at 22 h post culture. The functional role of miR-20a was examined by modulating its expression in COCs during in vitro maturation (IVM). We found that miR-20a expression increased in cumulus cells but decreased in oocytes after IVM. Overexpression of miR-20a increased the oocyte maturation rate. Although not statistically significant, miR-20a overexpression during IVM increased progesterone levels in the spent medium. This was further supported by the expression of the STAR and CYP11A1 genes in cumulus cells. The phenotypes observed due to overexpression of miR-20a were validated by BMP15 supplementation during IVM and subsequent transfection of BMP15-treated COCs with a miR-20a mimic or BMPR2 siRNA. We found that miR-20a mimic or BMPR2 siRNA transfection rescued BMP15-reduced oocyte maturation and progesterone levels. We concluded that miR-20a regulates oocyte maturation by increasing cumulus cell progesterone synthesis through simultaneous suppression of BMPR2 expression.
Genetic susceptibility to late maturity alpha-amylase (LMA) in wheat (Triticum aestivum L.) results in increased alpha-amylase activity in mature grain when cool conditions occur during late grain maturation. Farmers are forced to sell wheat grain with elevated alpha-amylase at a discount because it has an increased risk of poor end-product quality. This problem can result from either LMA or preharvest sprouting, the germination of grain on the mother plant when rain occurs before harvest. Whereas preharvest sprouting is a well-understood problem, little is known about the risk LMA poses to North American wheat crops. To examine this, LMA susceptibility was characterized in a panel of 251 North American hard spring wheat lines representing ten geographical areas. It appears that there is substantial LMA susceptibility in North American wheat, since only 27% of the lines showed reproducible LMA resistance following cold-induction experiments. A preliminary genome-wide association study detected six significant marker-trait associations. LMA in North American wheat may result from genetic mechanisms similar to those previously observed in Australian and International Maize and Wheat Improvement Center (CIMMYT) germplasm, since two of the detected QTLs, QLMA.wsu.7B and QLMA.wsu.6B, co-localized with previously reported loci. The Reduced height (Rht) loci also influenced LMA. Elevated alpha-amylase levels were significantly associated with the presence of the wild-type (tall) alleles, rht-B1a and rht-D1a, in both cold-treated and untreated samples.
The transplantation of endocrine organs can be regarded as the oldest form of transplantation in modern medical history. By the end of the nineteenth and beginning of the twentieth centuries, a large research focus was placed on endocrine transplantation. Before complex endocrine secretion and function were even understood, researchers attempted to cure endocrine diseases and infertility through transplantation of the endocrine glands and gonads. Hence, most endocrine organs were transplanted in that period, including the thyroid, the adrenal gland, the testis and the ovary. Even though the principles of transplant rejection were not yet understood at that time, researchers already observed that successful transplantations occurred almost exclusively in experiments with autografts. The first published allogeneic ovarian transplantations in animals were performed by Paul Bert in the 1860s.
Subglacial hydrological systems require innovative technological solutions to access and observe. Wireless sensor platforms can be used to collect and return data, but their performance in deep and fast-moving ice requires quantification. We report experimental results from Cryoegg: a spherical probe that can be deployed into a borehole or moulin and transit through the subglacial hydrological system. The probe measures temperature, pressure and electrical conductivity in situ and returns all data wirelessly via a radio link. We demonstrate Cryoegg's utility in studying englacial channels and moulins, including in situ salt dilution gauging. Cryoegg uses VHF radio to transmit data to a surface receiving array. We demonstrate transmission through up to 1.3 km of cold ice – a significant improvement on the previous design. The wireless transmission uses Wireless M-Bus on 169 MHz; we present a simple radio link budget model for its performance in cold ice and experimentally confirm its validity. Cryoegg has also been tested successfully in temperate ice. The battery capacity should allow measurements to be made every 2 h for more than a year. Future iterations of the radio system will enable Cryoegg to transmit data through up to 2.5 km of ice.
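As an illustration of the kind of simple radio link budget referred to above, the sketch below combines a spreading-loss term with a bulk attenuation term for cold ice at the 169 MHz Wireless M-Bus frequency. All parameter values (transmit power, antenna gains, ice attenuation rate, receiver sensitivity) are illustrative assumptions, not figures from the study.

```python
# Minimal one-way link budget sketch for a probe transmitting through cold ice.
# Parameter defaults are assumptions for illustration only.
import math

def link_margin_db(distance_m,
                   freq_hz=169e6,              # Wireless M-Bus carrier used by Cryoegg
                   tx_power_dbm=14.0,          # assumed transmit power
                   tx_gain_dbi=0.0,            # assumed probe antenna gain
                   rx_gain_dbi=2.0,            # assumed surface-array antenna gain
                   ice_atten_db_per_m=0.015,   # assumed bulk attenuation of cold ice
                   rx_sensitivity_dbm=-120.0): # assumed receiver sensitivity
    """Return the link margin (dB) after spreading and ice absorption losses."""
    c_ice = 3e8 / 1.78                  # approx. phase velocity in ice (n ~ 1.78)
    wavelength = c_ice / freq_hz
    spreading_loss = 20 * math.log10(4 * math.pi * distance_m / wavelength)
    absorption_loss = ice_atten_db_per_m * distance_m
    rx_power = tx_power_dbm + tx_gain_dbi + rx_gain_dbi - spreading_loss - absorption_loss
    return rx_power - rx_sensitivity_dbm

if __name__ == "__main__":
    for d in (500, 1300, 2500):
        print(f"{d:5d} m of ice: margin = {link_margin_db(d):6.1f} dB")
```

With these assumed numbers the margin shrinks from tens of dB at 1.3 km to only a few dB at 2.5 km, which is the qualitative behaviour such a budget is meant to capture.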
In drafting the first Australian class actions regime under Part IVA of the Federal Court of Australia Act 1976 (Cth) (FCAA), the Commonwealth legislature had the difficult task of creating a procedure that was appropriate for the Australian jurisdiction, including being in keeping with its litigation culture, while also learning from the procedures already employed in the United States and Canada. After twenty-seven years of federal class actions, it can be said that Australia has fashioned in Part IVA an effective and efficient framework for resolving mass litigation, accompanied by a robust body of jurisprudence. Equally, class action practice in Australia has evolved in ways that would have been beyond the reasonable comprehension of those who initially drafted Part IVA, third-party litigation funding being a key development. This chapter tells the story of Part IVA’s creation and maturation, providing an overview of the jurisprudence that has characterised its evolution, as well as an account of contentious issues at the forefront of modern class action practice.
Years of sport participation (YoP) is conventionally used to estimate cumulative repetitive head impacts (RHI) experienced by contact sport athletes. The relationship of this measure to other estimates of head impact exposure and the potential associations of these measures with neurobehavioral functioning are unknown. We investigated the association between YoP and the Head Impact Exposure Estimate (HIEE), and whether associations between the two estimates of exposure and neurobehavioral functioning varied.
Former American football players (N = 58; age = 37.9 ± 1.5 years) completed in-person evaluations approximately 15 years following sport discontinuation. Assessments consisted of neuropsychological assessment and structured interviews of head impact history (i.e., HIEE). General linear models were fit to test the association between YoP and the HIEE, and their associations with neurobehavioral outcomes.
YoP was weakly correlated with the HIEE, p = .005, R2 = .13. Higher YoP was associated with worse performance on the Symbol Digit Modalities Test, p = .004, R2 = .14, and Trail Making Test-B, p = .001, R2 = .18. The HIEE was associated with worse performance on the Delayed Recall trial of the Hopkins Verbal Learning Test-Revised, p = .020, R2 = .09, self-reported cognitive difficulties (Neuro-QoL Cognitive Function), p = .011, R2 = .10, psychological distress (Brief Symptom Inventory-18), p = .018, R2 = .10, and behavioral regulation (Behavior Rating Inventory of Executive Function for Adults), p = .017, R2 = .10.
YoP was marginally associated with the HIEE, a comprehensive estimate of head impacts sustained over a career. Associations between each exposure estimate and neurobehavioral functioning outcomes differed. Findings have meaningful implications for efforts to accurately quantify the risk of adverse long-term neurobehavioral outcomes potentially associated with RHI.
Due to shortages of N95 respirators during the coronavirus disease 2019 (COVID-19) pandemic, it is necessary to estimate the number of N95s required for healthcare workers (HCWs) to inform manufacturing targets and resource allocation.
We developed a model to determine the number of N95 respirators needed for HCWs, both in a single acute-care hospital and across the United States.
For an acute-care hospital with 400 all-cause monthly admissions, the number of N95 respirators needed to manage COVID-19 patients admitted during a month ranges from 113 (95% interpercentile range [IPR], 50–229) if 0.5% of admissions are COVID-19 patients to 22,101 (95% IPR, 5,904–25,881) if 100% of admissions are COVID-19 patients (assuming single use per respirator, and 10 encounters between HCWs and each COVID-19 patient per day). The number of N95s needed decreases to a range of 22 (95% IPR, 10–43) to 4,445 (95% IPR, 1,975–8,684) if each N95 is used for 5 patient encounters. Varying monthly all-cause admissions to 2,000 requires 6,645–13,404 respirators with a 60% COVID-19 admission prevalence, 10 HCW–patient encounters, and reusing N95s 5–10 times. Nationally, the number of N95 respirators needed over the course of the pandemic ranges from 86 million (95% IPR, 37.1–200.6 million) to 1.6 billion (95% IPR, 0.7–3.6 billion) as 5%–90% of the population is exposed (single-use). This number ranges from 17.4 million (95% IPR, 7.3–41 million) to 312.3 million (95% IPR, 131.5–737.3 million) using each respirator for 5 encounters.
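A minimal sketch of the point-estimate arithmetic underlying a demand model of this kind is given below. The mean length of stay and other defaults are illustrative assumptions, and the sketch omits the stochastic components that produce the interpercentile ranges reported above.

```python
# Rough point estimate of monthly N95 demand for COVID-19 care in one hospital.
# All defaults (especially mean length of stay) are illustrative assumptions.
def n95_needed(monthly_admissions,
               covid_fraction,
               encounters_per_patient_day=10,  # HCW-patient encounters per day
               mean_los_days=7,                # assumed mean COVID-19 length of stay
               encounters_per_respirator=1):   # 1 = single use; 5 = extended use/reuse
    covid_admissions = monthly_admissions * covid_fraction
    total_encounters = covid_admissions * mean_los_days * encounters_per_patient_day
    return total_encounters / encounters_per_respirator

# e.g. 400 monthly admissions, 60% COVID-19 prevalence, each N95 used for 5 encounters
print(round(n95_needed(400, 0.60, encounters_per_respirator=5)))
```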
We quantified the number of N95 respirators needed for a given acute-care hospital and nationally during the COVID-19 pandemic under varying conditions.
Background: Measles is a highly contagious virus that reemerged in 2019 with the highest number of reported cases in the United States since 1992. Beginning in March 2019, The Johns Hopkins Hospital (JHH) responded to an influx of patients with concern for measles as a result of outbreaks in Maryland and the surrounding states. We report the response of the JHH Department of Hospital Epidemiology and Infection Control (HEIC) to this measles outbreak using a multidisciplinary measles incident command system (ICS). Methods: The JHH HEIC and the Johns Hopkins Office of Emergency Management established the HEIC Clinical Incident Command Center and coordinated a multipronged response to the measles outbreak with partners from occupational health services, microbiology, the adult and pediatric emergency departments, marketing and communication, and local and state public health departments. The multidisciplinary structure rapidly developed, approved, and disseminated tools to improve the ability of frontline providers to quickly identify, isolate, and determine testing needs for patients suspected to have measles infection and to reduce the risk of secondary transmission. The tools included a triage algorithm, visitor signage, staff and patient vaccination guidance and clinics, and standard operating procedures for measles evaluation and testing. The triage algorithms were developed for phone or in-person use and assessed measles exposure history, immune status, and symptoms, and provided guidance regarding isolation and the need for testing. The algorithms were distributed to frontline providers in clinics and emergency rooms across the Johns Hopkins Health System. The incident command team also distributed resources to community providers to reduce patient influx to JHH and staged an outdoor measles evaluation and testing site in the event of a case influx that would exceed emergency department resources. Results: From March 2019 through June 2019, 37 patients presented with symptoms or concern for measles. Using the ICS tools and algorithms, JHH rapidly identified, isolated, and tested 11 patients with high suspicion for measles, 4 of whom were confirmed positive. Of the other 26 patients not tested, none developed measles infection. Exposures were minimized, and there were no secondary measles transmissions among patients. Conclusions: Using the ICS and development of tools and resources to prevent measles transmission, including a patient triage algorithm, the JHH team successfully identified, isolated, and evaluated patients with high suspicion for measles while minimizing exposures and secondary transmission. These strategies may be useful to other institutions and locales in the event of an emerging or reemerging infectious disease outbreak.
Disclosures: Aaron Milstone reports consulting for Becton Dickinson.
Background: Hospital-onset bacteremia and fungemia (HOB) may be a preventable hospital-acquired condition and a potential healthcare quality measure. We developed and evaluated a tool to assess the preventability of HOB and compared it to a more traditional consensus panel approach. Methods: A 10-member healthcare epidemiology expert panel independently rated the preventability of 82 hypothetical HOB case scenarios using a 6-point Likert scale (range, 1 = “Definitely or Almost Certainly Preventable” to 6 = “Definitely or Almost Certainly Not Preventable”). Ratings on the 6-point scale were collapsed into 3 categories: Preventable (1–2), Uncertain (3–4), or Not preventable (5–6). Consensus was defined as concurrence on the same category among ≥70% of expert raters. Cases without consensus were deliberated via teleconference, web-based discussion, and a second round of rating. The proportion meeting consensus, overall and by predefined HOB source attribution, was calculated. A structured HOB preventability rating tool was developed to explicitly account for patient intrinsic and extrinsic healthcare-related risks (Fig. 1). Two additional physician reviewers independently applied this tool to adjudicate the same 82 case scenarios. The tool was iteratively revised based on reviewer feedback, followed by repeat independent tool-based adjudication. Interrater reliability was evaluated using the kappa statistic. The proportion of cases where the tool-based preventability category matched expert consensus was calculated. Results: After expert panel round 1, consensus criteria were met for 29 cases (35%), which increased to 52 (63%) after round 2. Expert consensus was achieved more frequently for respiratory or surgical site infections than for urinary tract and central-line–associated bloodstream infections (Fig. 2a). Most likely to be rated preventable were vascular catheter infections (64%) and contaminants (100%). For tool-based adjudication, following 2 rounds of rating with interim tool revisions, agreement between the 2 reviewers was 84% for cases overall (κ, 0.76; 95% CI, 0.64–0.88) and 87% for the 52 cases with expert consensus (κ, 0.79; 95% CI, 0.65–0.94). Among cases with expert consensus, the tool-based rating matched expert consensus in 40 of 52 (77%) and 39 of 52 (75%) cases for reviewer 1 and reviewer 2, respectively. The proportion of cases rated “uncertain” was lower among tool-adjudicated cases with reviewer agreement (15 of 69) than among cases with expert consensus (23 of 52) (Fig. 2b). Conclusions: Healthcare epidemiology experts hold varying perspectives on HOB preventability. Structured tool-based preventability rating had high interreviewer reliability, matched expert consensus in most cases, and rated fewer cases with uncertain preventability compared to expert consensus. This tool is a step toward standardized assessment of preventability in future HOB evaluations.
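The interrater-reliability measure cited above (Cohen's kappa) can be illustrated with a short sketch: percent agreement and kappa for two reviewers assigning cases to the three collapsed categories. The ratings below are invented toy data, not the study's case adjudications.

```python
# Percent agreement and Cohen's kappa for two raters over three categories.
# The example ratings are made up for demonstration purposes.
from collections import Counter

CATEGORIES = ("Preventable", "Uncertain", "Not preventable")

def cohens_kappa(ratings_a, ratings_b):
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    # Chance agreement expected from each rater's marginal category frequencies
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in CATEGORIES)
    return observed, (observed - expected) / (1 - expected)

reviewer1 = ["Preventable", "Uncertain", "Not preventable", "Preventable", "Uncertain"]
reviewer2 = ["Preventable", "Preventable", "Not preventable", "Preventable", "Uncertain"]
agreement, kappa = cohens_kappa(reviewer1, reviewer2)
print(f"agreement = {agreement:.0%}, kappa = {kappa:.2f}")
```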
Following up on earlier work demonstrating improved performance in numerical stellarator coil design optimization through the use of stochastic optimization (Lobsien et al., Nucl. Fusion, vol. 58 (10), 2018, 106013), it is demonstrated here that significant further improvements can be made – lower field errors and improved robustness – for a Wendelstein 7-X test case. This is done by increasing the sample size and applying fully three-dimensional perturbations, but most importantly, by changing the design sequence in which the optimization targets are applied: optimization for field error is conducted first, with coil shape penalties only added to the objective function at a later step in the design process. A robust, feasible coil configuration with a maximum local field error of 3.66% and an average field error of 0.95% is achieved here, compared with the maximum local field error of 6.08% and average field error of 1.56% found in our earlier work. These new results are compared to those found without stochastic optimization using the FOCUS and ONSET suites. The relationship between local minima in the optimization space and coil shape penalties is also discussed.
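The staged design sequence described above can be illustrated schematically: minimise a field-error term first, then re-optimise with coil-shape penalties added, warm-starting from the first solution. The objective functions in this sketch are simple stand-ins, not the FOCUS/ONSET physics objectives, and the penalty weight is an arbitrary illustrative choice.

```python
# Toy two-stage optimization illustrating "field error first, shape penalties later".
# The quadratic objectives below are stand-ins for the real coil-design targets.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 6))      # toy linear map from coil parameters to field error
b = rng.normal(size=20)

def field_error(x):
    return np.sum((A @ x - b) ** 2)

def shape_penalty(x):
    return np.sum(x ** 2)          # stand-in for coil length/curvature penalties

# Stage 1: optimise for field error only
x1 = minimize(field_error, np.zeros(6)).x
# Stage 2: add the shape penalty, warm-started from the stage-1 coils
x2 = minimize(lambda x: field_error(x) + 0.1 * shape_penalty(x), x1).x

print(f"stage 1: error={field_error(x1):.3f}, penalty={shape_penalty(x1):.3f}")
print(f"stage 2: error={field_error(x2):.3f}, penalty={shape_penalty(x2):.3f}")
```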
Studies suggest that alcohol consumption and alcohol use disorders have distinct genetic backgrounds.
We examined whether polygenic risk scores (PRS) for consumption and problem subscales of the Alcohol Use Disorders Identification Test (AUDIT-C, AUDIT-P) in the UK Biobank (UKB; N = 121 630) correlate with alcohol outcomes in four independent samples: an ascertained cohort, the Collaborative Study on the Genetics of Alcoholism (COGA; N = 6850), and population-based cohorts: Avon Longitudinal Study of Parents and Children (ALSPAC; N = 5911), Generation Scotland (GS; N = 17 461), and an independent subset of UKB (N = 245 947). Regression models and survival analyses tested whether the PRS were associated with the alcohol-related outcomes.
In COGA, AUDIT-P PRS was associated with alcohol dependence, AUD symptom count, maximum drinks (R2 = 0.47–0.68%, p = 2.0 × 10−8–1.0 × 10−10), and increased likelihood of onset of alcohol dependence (hazard ratio = 1.15, p = 4.7 × 10−8); AUDIT-C PRS was not an independent predictor of any phenotype. In ALSPAC, the AUDIT-C PRS was associated with alcohol dependence (R2 = 0.96%, p = 4.8 × 10−6). In GS, AUDIT-C PRS was a better predictor of weekly alcohol use (R2 = 0.27%, p = 5.5 × 10−11), while AUDIT-P PRS was more associated with problem drinking (R2 = 0.40%, p = 9.0 × 10−7). Lastly, AUDIT-P PRS was associated with ICD-based alcohol-related disorders in the UKB subset (R2 = 0.18%, p < 2.0 × 10−16).
AUDIT-P PRS was associated with a range of alcohol-related phenotypes across population-based and ascertained cohorts, while AUDIT-C PRS showed less utility in the ascertained cohort. We show that AUDIT-P is genetically correlated with both use and misuse and demonstrate the influence of ascertainment schemes on PRS analyses.
Early detection and intervention strategies in patients at clinical high-risk (CHR) for syndromal psychosis have the potential to contain the morbidity of schizophrenia and similar conditions. However, research criteria that have relied on severity and number of positive symptoms are limited in their specificity and risk high false-positive rates. Our objective was to examine the degree to which measures of recency of onset or intensification of positive symptoms [a.k.a., new or worsening (NOW) symptoms] contribute to predictive capacity.
We recruited 109 help-seeking individuals whose symptoms met criteria for the Progression Subtype of the Attenuated Positive Symptom Psychosis-Risk Syndrome defined by the Structured Interview for Psychosis-Risk Syndromes, and followed them every three months for two years or until the onset of syndromal psychosis.
Forty-one (40.6%) of 101 participants meeting CHR criteria developed a syndromal psychotic disorder [mostly (80.5%) schizophrenia], with half converting within 142 days (interquartile range: 69–410 days). Patients with more NOW symptoms were more likely to convert (converters: 3.63 ± 0.89; non-converters: 2.90 ± 1.27; p = 0.001). Patients with stable attenuated positive symptoms were less likely to convert than those with NOW symptoms. Considered in isolation, new symptoms, but not worsening symptoms, also predicted conversion.
Results suggest that the severity and number of attenuated positive symptoms are less predictive of conversion to syndromal psychosis than the timing of their emergence and intensification. These findings also suggest that the earliest phase of psychotic illness involves a rapid, dynamic process, beginning before the syndromal first episode, with potentially substantial implications for CHR research and understanding the neurobiology of psychosis.
In this article, we describe two experiments measuring the impact of a collection of interventions informed by behavioural sciences to reduce unemployment. In a small-scale pilot study (n = 2,383) run in partnership with a Jobcentre in the UK, we found that small changes to the way jobseekers interacted with employment advisers showed promising effects. Based on these findings, we refined our intervention and tested it in a second, larger trial (n = 88,033) across 12 Jobcentres in the UK. We found that our intervention significantly increased off-flow from benefits. These experiments demonstrate that policies and programmes aimed at reducing unemployment can benefit greatly from a deeper understanding of the behaviours of jobseekers and employment advisers. Further, we suggest that this approach could have positive implications for other areas of public policy.
We used multivariable analyses to assess whether meeting core elements was associated with antibiotic utilization. Compliance with 7 elements versus not doing so was associated with higher use of broad-spectrum agents for community-acquired infections [days of therapy per 1,000 patient days: 155 (39) vs 133 (29), P = .02] and anti-methicillin-resistant S. aureus agents [days of therapy per 1,000 patient days: 145 (37) vs 124 (30), P = .03].
A lasting legacy of the International Polar Year (IPY) 2007–2008 was the promotion of the Permafrost Young Researchers Network (PYRN), initially an IPY outreach and education activity by the International Permafrost Association (IPA). With the momentum of IPY, PYRN developed into a thriving network that still connects young permafrost scientists, engineers, and researchers from other disciplines. This research note summarises (1) PYRN’s development since 2005 and the IPY’s role, (2) the first 2015 PYRN census and survey results, and (3) PYRN’s future plans to improve international and interdisciplinary exchange between young researchers. The review concludes that PYRN is an established network within the polar research community that has continually developed since 2005. PYRN’s successful activities were largely fostered by IPY. With >200 of the 1200 registered members active and engaged, PYRN is capitalising on the availability of social media tools and rising to meet environmental challenges while maintaining its role as a successful network honouring the legacy of IPY.
Samuel Pursch, Research and strategy advisor working on governance and social development in Myanmar and across Southeast Asia.
Andrea Woodhouse, Senior social development specialist at the World Bank.
Michael Woolcock, Lead social scientist with the World Bank's Development Research Group, and a (part-time) Lecturer in Public Policy at Harvard University's Kennedy School of Government.
Matthew Zurstrassen, Development professional working on social research and justice programs in the Asia Pacific Region, who managed research for the World Bank's “Livelihoods and Social Change in Rural Myanmar” study.
Myanmar has undergone significant reforms in recent years. A commonly accepted view is that, unlike many of the political shifts experienced elsewhere in the world in the twenty-first century, their impetus came not “from below” but from national elites, prompted by military decisions to open the country to the world and begin to democratize (Pederson 2012, Fink 2014). Much of the literature on Myanmar's transition thus focuses on national dynamics, seeking insights on what has changed, why, and how, by examining shifts among political elites, the business community, and the upper echelons of the Tatmadaw (see Pederson 2012; ICG 2012; Jones 2014). These changes emerged from a variety of elite-led processes, including the drafting of a new constitution in 2008, and accelerated under the Thein Sein-led government starting in 2011. Yet while analysing the motives and strategies of elites is vital for understanding the national impetus behind Myanmar's reforms, it leaves little space for assessing how the transition has played out among the broader populace, particularly in the rural villages where seventy per cent of Myanmar's people live. It also overlooks how the prevailing social institutions at the local level have responded to the various forms and sources of contention (actual and/or potential) inherently accompanying such major changes, and the associated implications for policy and practice in Myanmar.
This paper seeks to contribute to research on Myanmar's social transformation by analysing how governance reforms and changes in the life experiences of people in rural communities are altering the social contract at the village level. The paper argues that the nature and extent of the “social contract”—i.e., the terms on which citizens interact with one another, and the basis on which contending views of citizens’ core rights and responsibilities are negotiated with and legitimately upheld by the state—is being re-written in Myanmar. Three areas of change, especially since 2011, have affected how citizens in rural areas interact with the state: village governance, citizens’ expectations of the state, and connectivity. Responding to these challenges will require strategies informed by the best available evidence.
Assertive Community Treatment (ACT) is an evidence-based treatment program for people with severe mental illness developed in high-income countries. We report the first randomized controlled trial of ACT in mainland China.
Sixty outpatients with schizophrenia with severe functional impairments or frequent hospitalizations were randomly assigned to ACT (n = 30) or standard community treatment (n = 30). The severity of symptoms and level of social functioning were assessed at baseline and every 3 months during the 1-year study. The primary outcome was the duration of hospital readmission. Secondary outcomes included a pre-post change in symptom severity, the rates of symptom relapse and gainful employment, social and occupational functioning, and quality of life of family caregivers.
Based on a modified intention-to-treat analysis, the outcomes for ACT were significantly better than those of standard community treatment. ACT patients were less likely to be readmitted [3.3% (1/30) v. 25.0% (7/28), Fisher's exact test p = 0.023], had a shorter mean readmission time [2.4 (13.3) v. 30.7 (66.9) days], were less likely to relapse [6.7% (2/30) v. 28.6% (8/28), Fisher's exact test p = 0.038], and had shorter mean time in relapse [3.5 (14.6) v. 34.4 (70.6) days]. The ACT group also had significantly longer times re-employed and greater symptomatic improvement and their caregivers experienced a greater improvement in their quality of life.
Our results show that culturally adapted ACT is both feasible and effective for individuals with severe schizophrenia in urban China. Replication studies with larger samples and longer duration of follow up are warranted.
Collaborative quality improvement and learning networks have improved healthcare quality and value across specialities. Motivated by these successes, the Pediatric Acute Care Cardiology Collaborative (PAC3) was founded in late 2014 with an emphasis on improving outcomes of paediatric cardiology patients within cardiac acute care units; acute care encompasses all hospital-based inpatient non-intensive care. PAC3 aims to deliver higher quality and greater value care by facilitating the sharing of ideas and building alignment among its member institutions. These aims are intentionally aligned with the work of other national clinical collaborations, registries, and parent advocacy organisations. The mission and early work of PAC3 are exemplified by the formal partnership with the Pediatric Cardiac Critical Care Consortium (PC4), as well as the creation of a clinical registry, which links with the PC4 registry to track practices and outcomes across the entire inpatient encounter from admission to discharge. Capturing the full inpatient experience allows detection of outcome differences related to variation in care delivered outside the cardiac ICU and the development of benchmarks for cardiac acute care. We aspire to improve patient outcomes such as morbidity, hospital length of stay, and re-admission rates, while working to advance patient and family satisfaction. We will use quality improvement methodologies consistent with the Model for Improvement to achieve these aims. Membership currently includes 36 centres across North America, of which 26 are also members of PC4. In this report, we describe the development of PAC3, including the philosophical, organisational, and infrastructural elements that will enable a paediatric acute care cardiology learning network.