A geochemical and biostratigraphic approach has been applied to investigate the spatial and stratigraphic variability of Palaeogene sandstones from key wells in Taranaki Basin, New Zealand. Chronostratigraphic control is predominantly based on miospore zonation, while differences in the composition of Paleocene and Eocene sandstones are supported by geochemical evidence. Stratigraphic changes are manifested by a significant decrease in Na2O across the New Zealand miospore PM3b/MH1 early Eocene zonal boundary, at approximately 53.5 Ma. The change in Na2O is associated with a decrease in baseline concentrations of many other major (MnO, CaO, TiO2) and trace elements, and is interpreted to reflect a significant change in sandstone maturity. Paleocene sandstones are characterized by abundant plagioclase (albite and locally Na–Ca plagioclase), significant biotite and a range of heavy minerals, while Eocene sandstones are typically quartzose, with K-feldspar dominant over plagioclase, low mica contents and rare heavy minerals comprising a resistant suite. This change could reflect a change in provenance from local plutonic basement during the Paleocene Epoch to relatively quartz- and K-feldspar-rich granitic sources during Eocene time. However, significant quartz enrichment of Eocene sediment was also likely due to reworking and winnowing during transport along the palaeoshoreface and enhanced chemical weathering, driven in part by long-term global warming associated with the Early Eocene Climatic Optimum. The broad-ranging changes in major-element composition overprint local variations in sediment provenance, which are only detectable from the immobile trace-element geochemistry.
Field trials were conducted in North Carolina in 2017 and Louisiana and Mississippi in 2018 to determine the effect of pre-transplanting applications of diquat on sweetpotato crop tolerance, yield, and storage root quality. In North Carolina, treatments consisted of two rates of diquat (560 or 1,120 g ai ha-1) alone or mixed with 107 g ai ha-1 flumioxazin and applied 1 d before transplanting (DBP), sequential applications of diquat (560 or 1,120 g ha-1) 1 and 17 DBP, 107 g ha-1 flumioxazin alone, and a non-treated check. In Louisiana and Mississippi, treatments consisted of diquat (560 or 1,120 g ha-1) applied 1 DBP either alone or followed by (fb) rehipping rows or 107 g ha-1 flumioxazin immediately prior to transplanting. Additional treatments included 546 g ha-1 paraquat applied 1 DBP and a non-treated check. In North Carolina, injury was ≤ 3% for all treatments through 23 d after transplanting (DAP), and no injury was observed after 23 DAP. Visual sweetpotato stunting pooled across Mississippi and Louisiana ranged from 1 to 14%, 0 to 6%, and 0 to 3% at 2, 4, and 6 weeks after transplanting (WAP), respectively, and no crop injury was observed after 6 WAP. Diquat applied 1 DBP and not fb rehipping resulted in greater crop injury (12%) than comparable treatments that were rehipped (2%). In North Carolina, single and sequential diquat applications resulted in reduced no. 1 sweetpotato yield (24,230 and 24,280 kg ha-1, respectively) compared to the non-treated check, but no. 1 yield with diquat plus flumioxazin (26,330 kg ha-1) was similar to the non-treated check. No. 1 yield did not differ by treatment in Louisiana and Mississippi.
The aerodynamic environment of re-entry vehicles is central to the design of novel drag reduction strategies, and the combined spike and jet concept has shown promise for drag reduction in supersonic flows. In this paper, the drag reduction mechanism induced by a combined spike and lateral jet at a freestream Mach number of 5.9332 is investigated numerically by means of the two-dimensional axisymmetric Navier-Stokes equations coupled with the shear stress transport (SST) k-ω turbulence model, and the effects of the location and number of lateral jets on the drag of the blunt body are evaluated. The results show that the drag force on the blunt body is reduced most substantially when dual lateral jets are employed, with a maximum reduction of 38.81% when the locations of the first and second lateral jets are arranged suitably. The interaction between the leading shock wave and the first lateral jet has a strong impact on the drag force reduction: the stronger the interaction, the larger the reduction. Owing to the lateral jet, the pressure at the reattachment point of the blunt body decreases sharply, as does the temperature near the walls of the spike and the blunt body, implying that multiple lateral jets are beneficial for drag reduction.
Heat shock proteins (HSPs) are highly conserved stress proteins that are expressed in response to stress. Two studies were carried out to investigate whether HSP genes in hair follicles from beef calves can serve as indicators of heat stress (HS). In study 1, hair follicles were harvested from three male Hanwoo calves (aged 172.2 ± 7.20 days) on six dates over the period of 10 April to 9 August 2017. These days provided varying temperature–humidity indices (THIs). In study 2, 16 male Hanwoo calves (aged 169.6 ± 4.60 days, with a BW of 136.9 ± 6.23 kg) were maintained (4 calves per experiment) in environmentally controlled chambers. A completely randomized design was used with a 2 × 4 factorial arrangement involving two periods (thermoneutral: TN; HS) and four THI treatment groups (threshold: THI = 68 to 70; mild: THI = 74 to 76; moderate: THI = 81 to 83; severe: THI = 88 to 90). The calves in the different groups were subjected to ambient temperature (22°C) for 7 days (TN) and subsequently to the temperature and humidity corresponding to the target THI level for 21 days (HS). Every three days (at 1400 h) during both the TN and HS periods, the heart rate (HR) and rectal temperature (RT) of each individual were measured, and hair follicles were subsequently collected from the tail of each individual. In study 1, the high variation (P < 0.0001) in THI indicated that the external environment influenced HS to different extents. The expression levels of the HSP70 and HSP90 genes at the high-THI level were higher (P = 0.0120 and P = 0.0002, respectively) than those at the low-THI level. In study 2, no differences in the THI (P = 0.2638), HR (P = 0.2181) or RT (P = 0.3846) were found among the groups during the TN period, whereas differences in all three indices (P < 0.0001 for each) were observed during the HS period. 
The expression levels of the HSP70 (P = 0.0010, moderate; P = 0.0065, severe) and HSP90 (P = 0.0040, severe) genes increased after rapid exposure to heat-stress conditions (moderate and severe levels). We conclude that HSP gene expression in hair follicles provides precise and accurate data for evaluating HS and can be considered a novel indicator of HS in Hanwoo calves maintained both outdoors and in climatic chambers.
The practice of foodborne illness outbreak investigations has evolved, shifting away from large-scale community case-control studies towards more focused case exposure assessments and sub-cluster investigations to identify contaminated food sources. Criteria to include or exclude cases are established to increase the efficiency of epidemiological analyses and traceback activities, but these criteria can also affect the investigator's ability to implicate a suspected food vehicle. A 2010 outbreak of Salmonella ser. Hvittingfoss infections associated with a chain of quick-service restaurants (Chain A) provided a useful case study on the impact of exclusion criteria on the ability to identify a food vehicle. In the original investigation, a case-control study of restaurant-associated cases and well meal companions was conducted at the ingredient level to identify a suspected food vehicle; however, 21% of cases and 22% of well meal companions were excluded for eating at Chain A restaurants more than once during the outbreak. The objective of this study was to explore how this decision affected the results of the outbreak investigation.
It is unclear what session frequency is most effective in cognitive–behavioural therapy (CBT) and interpersonal psychotherapy (IPT) for depression.
To compare the effects of once-weekly and twice-weekly sessions of CBT and IPT for depression.
We conducted a multicentre randomised trial from November 2014 through December 2017. We recruited 200 adults with depression across nine specialised mental health centres in the Netherlands. This study used a 2 × 2 factorial design, randomising patients to once- or twice-weekly sessions of CBT or IPT over 16–24 weeks, up to a maximum of 20 sessions. The main outcome measure was depression severity, measured with the Beck Depression Inventory-II at baseline, before session 1, and at 2 weeks and at 1, 2, 3, 4, 5 and 6 months after the start of the intervention. Intention-to-treat analyses were conducted.
Compared with patients who received weekly sessions, patients who received twice weekly sessions showed a statistically significant decrease in depressive symptoms (estimated mean difference between weekly and twice weekly sessions at month 6: 3.85 points, difference in effect size d = 0.55), lower attrition rates (n = 16 compared with n = 32) and an increased rate of response (hazard ratio 1.48, 95% CI 1.00–2.18).
In clinical practice settings, delivery of twice weekly sessions of CBT and IPT for depression is a way to improve depression treatment outcomes.
Psychotic experiences (PEs) are reported by a significant minority of adolescents and are associated with the development of psychiatric disorders. The aims of this study were to examine associations between PEs and a range of factors including psychopathology, adversity and lifestyle, and to investigate mediating effects of coping style and parental support on associations between adversity and PEs in a general population adolescent sample.
Cross-sectional data were drawn from the Irish centre of the Saving and Empowering Young Lives in Europe study. Students completed a self-report questionnaire and 973 adolescents, of whom 522 (53.6%) were boys, participated. PEs were assessed using the 7-item Adolescent Psychotic Symptom Screener.
Of the total sample, 81 adolescents (8.7%) were found to be at risk of PEs. In multivariate analysis, associations were found between PEs and the number of adverse events reported (OR 4.48, CI 1.41–14.25; p = 0.011), maladaptive/pathological internet use (OR 2.70, CI 1.30–5.58; p = 0.007), alcohol intoxication (OR 2.12, CI 1.10–4.12; p = 0.025) and anxiety symptoms (OR 4.03, CI 1.57–10.33; p = 0.004). There were small mediating effects of parental supervision, parental support and maladaptive coping on associations between adversity and PEs.
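The odds ratios above come from the study's multivariate models; as a rough illustration of how an odds ratio and its Wald 95% CI are derived from a basic 2 × 2 table, here is a minimal sketch with entirely hypothetical counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts (illustration only): 20 of 81 at-risk
# adolescents vs 100 of 892 others reporting an exposure.
or_, lo, hi = odds_ratio_ci(20, 61, 100, 792)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 2.6 1.5 4.48
```

A multivariate model additionally adjusts each OR for the other covariates, which a single 2 × 2 table cannot do.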
We have identified potential risk factors for PEs from multiple domains including adversity, mental health and lifestyle factors. The mediating effect of parental support on associations between adversity and PEs suggests that poor family relationships may account for some of this mechanism. These findings can inform the development of interventions for adolescents at risk.
Making dairy farming more cost-effective and reducing nitrogen pollution of the environment could both be achieved through a reduced input of dietary protein, provided productivity is not compromised. This could be accomplished by balancing dairy rations for essential amino acids (EAA) rather than their aggregate, the metabolizable protein (MP). This review revisits the estimations of the major true protein secretions in dairy cows (milk protein yield (MPY), metabolic fecal protein (MFP), endogenous urinary loss and scurf) and their associated AA composition. The combined efficiency with which MP (EffMP) or EAA (EffAA) is used to support protein secretions is calculated as the sum of true protein secretions (MPY + MFP + scurf) divided by the net supply (adjusted to remove the endogenous urinary excretion: MPadj and AAadj). Using the proposed protein and AA secretions, EffMP and EffAA were predicted through meta-analyses (807 treatment means) and validated using an independent database (129 treatment means). The effects of MPadj or AAadj, plus digestible energy intake (DEI), days in milk (DIM) and parity (primiparous v. multiparous), were significant in all models. Models using (MPadj, MPadj × MPadj, DEI and DEI × DEI) or (MPadj/DEI and MPadj/DEI × MPadj/DEI) had similar corrected Akaike's information criteria, but the model using MPadj/DEI performed better in the validation database. This ratio was, therefore, also used in fitting equations to predict EffAA. These equations predicted EffAA well in the validation database, except for Arg, which had a strong slope bias. Predictions of MPY from predicted EffMP based on MPadj/DEI, MPadj/DEI × MPadj/DEI, DIM and parity yielded a better fit than direct predictions of MPY based on MPadj, MPadj × MPadj, DEI, DIM and parity. Predictions of MPY based on each EffAA yielded fairly similar results among AA. 
It is proposed to weight the mean of the MPY predictions obtained from each EffAA towards the lowest prediction, to retain the potential limitation from the AA in shortest supply. Overall, the revisited estimations of endogenous urinary excretion and MFP, the revised AA composition of protein secretions and the inclusion of a variable combined EffAA (based on AAadj/DEI, AAadj/DEI × AAadj/DEI, DIM and parity) offer the potential to improve predictions of MPY, identify which AA are potentially in short supply and, therefore, improve the AA balance of dairy rations.
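The combined efficiency defined in this review (true protein secretions divided by adjusted net supply) can be sketched as a one-line calculation; the numbers below are hypothetical, purely for illustration:

```python
def combined_efficiency(mpy, mfp, scurf, mp_adj):
    """Combined efficiency of metabolizable protein use:
    the sum of true protein secretions (milk protein yield +
    metabolic fecal protein + scurf) divided by the net supply
    adjusted to remove endogenous urinary excretion (MPadj)."""
    return (mpy + mfp + scurf) / mp_adj

# Hypothetical daily values in g/day (not from the review):
eff_mp = combined_efficiency(mpy=900, mfp=250, scurf=20, mp_adj=1800)
print(round(eff_mp, 2))  # 0.65
```

The same ratio applies per essential amino acid (EffAA) by substituting the AA content of each secretion and the adjusted AA supply (AAadj).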
Feeding management of the postnatal and preweaning calf has an important impact on calf growth and development during this critical period and affects the health and well-being of the calves. After birth, an immediate and sufficient colostrum supply is a prerequisite for successful calf rearing. Colostrum provides high amounts of nutrients as well as non-nutrient factors that promote the immune system and intestinal maturation of the calf. The maturation and function of the neonatal intestine enable the calf to digest and absorb the nutrients provided by colostrum and milk. Therefore, colostrum intake supports the start of anabolic processes in several tissues, stimulating postnatal body growth and organ development. After the colostrum feeding period, an intensive milk feeding protocol, that is, at least 20% of BW milk intake/day, is required to realise the calf's potential for growth and organ development during the preweaning period. Insufficient milk intake delays postnatal growth and may have detrimental effects on organ development, for example, the intestine and the mammary gland. The somatotropic axis, the main postnatal endocrine regulatory system for body growth, is stimulated by the intake of high amounts of colostrum and milk and indicates the promotion of anabolic metabolism in calves. The development of the forestomach is an important issue during the preweaning period in calves, and forestomach maturation is best achieved by solid feed intake. Unfortunately, intensive milk-feeding programmes compromise solid feed intake during the first weeks of life. In the more natural situation for beef calves, when milk and solid feed intake occur at the same time, calves benefit from the high milk intake as evidenced by enhanced body growth and organ maturation without impaired forestomach development during weaning. 
To realise an intensive milk-feeding programme, it is recommended that the weaning process should not start too early and that solid feed intake should remain high despite intensive milk feeding. A feeding concept based on intensive milk feeding prevents hunger and abnormal behaviour of the calves and fits the principles of animal welfare during preweaning calf rearing. Studies on milk performance in dairy cows indicate that feeding management during early calf rearing influences lifetime performance. Therefore, an intensive milk-feeding programme affects immediate as well as long-term performance, probably by programming metabolic pathways during the preweaning period.
Methane (CH4) production is a ubiquitous, apparently unavoidable side effect of fermentative fibre digestion by the symbiotic microbiota of mammalian herbivores. Here, a data compilation is presented of in vivo CH4 measurements in individuals of 37 mammalian herbivore species fed forage-only diets, from the literature and from hitherto unpublished measurements. In contrast to previous claims, absolute CH4 emissions scaled linearly with DM intake, and CH4 yields (per DM or gross energy intake) did not vary significantly with body mass. CH4 physiology hence cannot be construed to represent an intrinsic ruminant or herbivore body size limitation. The dataset does not support traditional dichotomies of CH4 emission intensity between ruminants and nonruminants, or between foregut and hindgut fermenters. Several rodent hindgut fermenters and nonruminant foregut fermenters emit as much CH4 as ruminants of similar size, intake level, digesta retention or gut capacity. By contrast, equids, macropods (kangaroos) and rabbits produce little CH4 and have low CH4 : CO2 ratios for their size, intake level, digesta retention or gut capacity, ruling out these factors as explanations for the interspecific variation. These findings lead to the conclusion that still unidentified host-specific factors, other than digesta retention characteristics or the presence of rumination or a foregut, influence CH4 production. Measurements of CH4 yield per unit of digested fibre indicate that the amount of CH4 produced during fibre digestion varies not only across but also within species, possibly pointing towards variation in microbiota functionality. Recent findings on the genetic control of microbiome composition, including methanogens, raise the question of what benefits methanogens provide to many species (but apparently not to the same extent to all), benefits that possibly prevented the evolution of low-methanogenic microbiota across mammals.
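The claim that absolute emissions scale linearly with intake is equivalent to a log–log slope of 1 against DM intake and a CH4 yield that is constant across intake levels; a minimal sketch with synthetic data (not the compiled dataset, and with a made-up yield constant):

```python
import numpy as np

# Synthetic illustration only: if absolute CH4 emission is
# proportional to dry-matter intake, CH4 = k * DMI, then the slope
# of log(CH4) vs log(DMI) is 1 and the CH4 yield (CH4 / DMI) is
# independent of intake level and hence of body mass.
rng = np.random.default_rng(0)
dmi = np.exp(rng.uniform(np.log(0.05), np.log(15.0), 200))  # kg DM/day
ch4 = 20.0 * dmi  # hypothetical constant yield k = 20 per kg DM

slope, intercept = np.polyfit(np.log(dmi), np.log(ch4), 1)
print(round(slope, 3))                      # 1.0 for proportional data
print(bool(np.allclose(ch4 / dmi, 20.0)))   # True: yield is constant
```

Deviations of the fitted log–log slope from 1 in real data would indicate allometric (non-linear) scaling of emissions with intake.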
To further understand the contribution of feedstuff ingredients to gut health in swine, gut histology and intestinal bacterial profiles associated with the use of two high-quality protein sources, microbially enhanced soybean meal (MSBM) and Menhaden fishmeal (FM), were assessed. Weaned pigs were fed one of three experimental diets: (1) basic diet containing corn and soybean meal (Negative Control (NEG)), (2) basic diet + fishmeal (FM; Positive Control (POS)) and (3) basic diet + MSBM (MSBM). Phase I POS and MSBM diets (d 0 to d 7 post-wean) included FM or MSBM at 7.5%, while Phase II POS and MSBM diets (d 8 to d 21) included FM or MSBM at 5.0%. Gastrointestinal tissue and ileal digesta were collected from euthanised pigs at d 21 (eight pigs/diet) to assess gut histology and intestinal bacterial profiles, respectively. Data were analysed using Proc Mixed in SAS, with pig as the experimental unit and pig (treatment) as the random effect. Histological and immunohistochemical analyses of stomach and small intestinal tissue using haematoxylin–eosin, Periodic Acid Schiff/Alcian blue and inflammatory cell staining did not reveal detectable differences in host response to dietary treatment. Ileal bacterial composition profiles were obtained from next-generation sequencing of PCR-generated amplicons targeting the V1 to V3 regions of the 16S rRNA gene. Lactobacillus-affiliated sequences were found to be the most highly represented across treatments, with an average relative abundance of 64.0%, 59.9% and 41.8% in samples from pigs fed the NEG, POS and MSBM diets, respectively. Accordingly, the three most abundant Operational Taxonomic Units (OTUs) were affiliated to Lactobacillus, showing a distinct abundance pattern relative to dietary treatment. One OTU (SD_Ssd_00001), most closely related to Lactobacillus amylovorus, was found to be more abundant in NEG and POS samples compared to MSBM (23.5% and 35.0% v. 9.2%, respectively). 
Another OTU (SD_Ssd_00002), closely related to Lactobacillus johnsonii, was more highly represented in POS and MSBM samples compared to NEG (14.0% and 15.8% v. 0.1%). Finally, OTU Sd_Ssd-00011, with highest sequence identity to Lactobacillus delbrueckii, was found in highest abundance in ileal samples from MSBM-fed pigs (1.9% and 3.3% v. 11.3% in POS, NEG and MSBM samples, respectively). There was no effect of protein source on bacterial taxa at the genus level or on diversity based on principal component analysis. Dietary protein source may provide an opportunity to enhance the presence of specific members of the Lactobacillus genus that are associated with immune-modulating properties without altering overall intestinal bacterial diversity.
Here we use polarimetric measurements from an Autonomous phase-sensitive Radio-Echo Sounder (ApRES) to investigate ice fabric within Whillans Ice Stream, West Antarctica. The survey traverse is bounded at one end by the suture zone with the Mercer Ice Stream and at the other end by a basal ‘sticky spot’. Our data analysis employs a phase-based polarimetric coherence method to estimate horizontal ice fabric properties: the fabric orientation and the magnitude of the horizontal fabric asymmetry. We infer an azimuthal rotation in the prevailing horizontal c-axis between the near-surface (z ≈ 10–50 m) and deeper ice (z ≈ 170–360 m), with the near-surface orientated closer to perpendicular to flow and deeper ice closer to parallel. In the near-surface, the fabric asymmetry increases toward the center of Whillans Ice Stream which is consistent with the surface compression direction. By contrast, the fabric orientation in deeper ice is not aligned with the surface compression direction but is consistent with englacial ice reacting to longitudinal compression associated with basal resistance from the nearby sticky spot.
In economic evaluation, the healthcare perspective has gradually given way to the societal perspective, which is often advocated as support for making optimal societal decisions. In practice, economic evaluations conducted from the societal perspective ignore, fail to measure and/or fail to monetize many of the costs that fall outside the healthcare sector. To limit bias and increase decision-supportive power, researchers could strengthen their evaluations by adhering to a few basic principles. Five “pillars for the societal perspective” are proposed. First, who bears a cost and who does not is irrelevant. Second, it is imperative to consider including costs for sectors outside the healthcare sector. Third, both high-frequency costs and costs with high unit prices should be considered. Fourth, double counting should be avoided. Fifth, researchers should reflect on choices related to costs, i.e. cost omission and problems with identifying, measuring, and valuing costs.
Technological progress has enabled researchers to use new unobtrusive measures of relationships between actors in social network analysis. However, research on how these unobtrusive measures of peer connections relate to traditional sociometric nominations in adolescents is scarce. Therefore, the current study compared traditional peer nominated networks with more unobtrusive measures of peer connections: Communication networks that consist of instant messages in an online social platform and proximity networks based on smartphones’ Bluetooth signals that measure peer proximity. The three social network types were compared in their coverage, stability, overlap, and the extent to which the networks exhibit the often observed sex segregation in adolescent social networks.
Two samples were derived from the MyMovez project: a longitudinal sample of 444 adolescents who participated in the first three waves of the first year of the project (Y1; 51% male; Mage = 11.29, SDage = 1.26) and a cross-sectional sample of 774 adolescents who participated in the fifth wave in the third year (Y3; 48% male; Mage = 10.76, SDage = 1.23). In the project, all participants received a research smartphone and a wrist-worn accelerometer. On the research smartphone, participants received daily questionnaires such as peer nomination questions (i.e., nominated network). In addition, the smartphone automatically scanned for other smartphones via Bluetooth signal every 15 minutes throughout the day (i.e., proximity network). In the Y3 sample, the research smartphone also had a social platform in which participants could send messages to each other (i.e., communication network).
The results show that, of the three network types, nominated networks provided data for the most participants, but in these networks participants had the lowest number of connections with peers. Nominated networks proved more stable over time than proximity or communication networks; that is, more connections remained the same in nominated networks than in proximity networks over the three waves of Y1. The overlap between the three networks was rather small, indicating that the networks measured different types of connections. Nominated and communication networks were segregated by sex, whereas this was less the case in proximity networks.
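One common way to quantify overlap between two such networks (the study's exact overlap metric is not specified here) is the Jaccard index of their edge sets; a sketch with hypothetical node ids, not MyMovez data:

```python
def jaccard_overlap(edges_a, edges_b):
    """Jaccard index of two undirected edge sets: |A & B| / |A | B|.
    Each edge is stored as a frozenset so that (i, j) and (j, i)
    count as the same connection."""
    a = {frozenset(e) for e in edges_a}
    b = {frozenset(e) for e in edges_b}
    return len(a & b) / len(a | b)

# Hypothetical edges (made-up node ids, illustration only):
nominated = [(1, 2), (2, 3), (3, 4)]
proximity = [(2, 1), (3, 5), (4, 5)]
print(jaccard_overlap(nominated, proximity))  # 0.2: small overlap
```

A value near 0 indicates the two measures capture largely different connections, consistent with the small overlap reported above.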
The communication and proximity networks seem to be promising unobtrusive measures of peer connections and are less of a burden to the participant compared to a nominated network. However, given the structural differences between the networks and the number of connections per wave, the communication and proximity networks should not be used as direct substitutes for sociometric nominations, and researchers should bear in mind what type of connections they wish to assess.
Clostridioides difficile infection (CDI) is the most frequently reported hospital-acquired infection in the United States. Bioaerosols generated during toilet flushing are a possible mechanism for the spread of this pathogen in clinical settings.
To measure the bioaerosol concentration from toilets of patients with CDI before and after flushing.
In this pilot study, bioaerosols were collected 0.15 m, 0.5 m, and 1.0 m from the rims of the toilets in the bathrooms of hospitalized patients with CDI. Inhibitory, selective media were used to detect C. difficile and other facultative anaerobes. Room air was collected continuously for 20 minutes with a bioaerosol sampler before and after toilet flushing. Wilcoxon rank-sum tests were used to assess the difference in bioaerosol production before and after flushing.
Rooms of patients with CDI at University of Iowa Hospitals and Clinics.
Bacteria were positively cultured from 8 of 24 rooms (33%). In total, 72 preflush and 72 postflush samples were collected; 9 of the preflush samples (13%) and 19 of the postflush samples (26%) were culture positive for healthcare-associated bacteria. The predominant species cultured were Enterococcus faecalis, E. faecium, and C. difficile. Compared to the preflush samples, the postflush samples showed significant increases in the concentrations of the 2 large particle-size categories: 5.0 µm (P = .0095) and 10.0 µm (P = .0082).
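A Wilcoxon rank-sum comparison of preflush and postflush concentrations, as used in this pilot, can be sketched as follows; the concentration values here are synthetic, not the study's measurements:

```python
from scipy.stats import ranksums

# Synthetic particle concentrations (hypothetical units and values):
# postflush samples shifted upward relative to preflush, as reported
# for the large (5.0 and 10.0 um) particle-size bins.
preflush  = [2.1, 1.8, 2.4, 2.0, 1.7, 2.2, 1.9, 2.3]
postflush = [3.0, 2.8, 3.4, 2.9, 3.6, 3.1, 2.7, 3.3]

stat, p = ranksums(preflush, postflush)
# A negative statistic means the first sample ranks lower overall.
print(stat < 0, p < 0.05)  # True True: postflush concentrations higher
```

The rank-sum test makes no normality assumption, which suits small bioaerosol samples with skewed concentration distributions.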
Bioaerosols produced by toilet flushing potentially contribute to hospital environmental contamination. Prevention measures (eg, toilet lids) should be evaluated as interventions to prevent toilet-associated environmental contamination in clinical settings.
To determine whether the Society for Healthcare Epidemiology of America (SHEA) and the Infectious Diseases Society of America (IDSA) Clostridioides difficile infection (CDI) severity criteria adequately predict poor outcomes.
Retrospective validation study.
Setting and participants: Patients with CDI in the Veterans’ Affairs Health System from January 1, 2006, to December 31, 2016.
For the 2010 criteria, patients with leukocytosis or a serum creatinine (SCr) value ≥1.5 times the baseline were classified as severe. For the 2018 criteria, patients with leukocytosis or a SCr value ≥1.5 mg/dL were classified as severe. Poor outcomes were defined as hospital or intensive care admission within 7 days of diagnosis, colectomy within 14 days, or 30-day all-cause mortality; they were modeled as a function of the 2010 and 2018 criteria separately using logistic regression.
We analyzed data from 86,112 episodes of CDI. Severity was unclassifiable in a large proportion of episodes diagnosed in subacute care (2010, 58.8%; 2018, 49.2%). Sensitivity ranged from 0.48 for subacute care using 2010 criteria to 0.73 for acute care using 2018 criteria. Areas under the curve were poor and similar (0.60 for subacute care and 0.57 for acute care) for both versions, but negative predictive values were >0.80.
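Sensitivity and negative predictive value, as reported above, follow directly from the counts of a 2 × 2 classification table; a sketch with hypothetical counts chosen only to echo the reported magnitudes, not the study's data:

```python
def sensitivity_npv(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): the share of poor-outcome
    episodes that the criteria classified as severe.
    NPV = TN / (TN + FN): the share of episodes classified
    non-severe that indeed had no poor outcome."""
    return tp / (tp + fn), tn / (tn + fn)

# Hypothetical counts (illustration only): 730 of 1,000 poor-outcome
# episodes flagged severe; 4,200 of 5,000 good-outcome episodes
# classified non-severe.
sens, npv = sensitivity_npv(tp=730, fn=270, tn=4200, fp=800)
print(round(sens, 2), round(npv, 2))  # 0.73 0.94
```

A high NPV with modest sensitivity, as in the study, means a non-severe classification is reassuring even though many poor outcomes are missed by the severe flag.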
Model performances across care settings and criteria versions were generally poor but had reasonably high negative predictive value. Many patients in the subacute-care setting, an increasing fraction of CDI cases, could not be classified. More work is needed to develop criteria to identify patients at risk of poor outcomes.
A direct construction of equilibrium magnetic fields with toroidal topology at arbitrary order in the distance from the magnetic axis is carried out, yielding an analytical framework able to explore the landscape of possible magnetic flux surfaces in the vicinity of the axis. This framework can provide meaningful analytical insight into the character of high-aspect-ratio stellarator shapes, such as the dependence of the rotational transform and the plasma beta limit on geometrical properties of the resulting flux surfaces. The approach developed here is based on an asymptotic expansion on the inverse aspect ratio of the ideal magnetohydrodynamics equation. The analysis is simplified by using an orthogonal coordinate system relative to the Frenet–Serret frame at the magnetic axis. The magnetic field vector, the toroidal magnetic flux, the current density, the field line label and the rotational transform are derived at arbitrary order in the expansion parameter. Moreover, a comparison with a near-axis expansion formalism employing an inverse coordinate method based on Boozer coordinates (the so-called Garren–Boozer construction) is made, where both methods are shown to agree at lowest order. Finally, as a practical example, a numerical solution using a W7-X equilibrium is presented, and a comparison between the lowest-order solution and the W7-X magnetic field is performed.
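The expansion structure described above can be sketched schematically; the symbols below are illustrative and not necessarily the paper's exact notation. With the inverse aspect ratio as the small parameter, each equilibrium quantity is expanded about the magnetic axis and the ideal MHD equilibrium conditions are imposed order by order:

```latex
% Schematic only; notation is illustrative, not the paper's.
\mathbf{B} = \mathbf{B}_0 + \epsilon\,\mathbf{B}_1
           + \epsilon^2\,\mathbf{B}_2 + \cdots,
\qquad \epsilon = \frac{a}{R} \ll 1,
% subject, at each order, to the ideal MHD equilibrium conditions
\nabla p = \mathbf{J}\times\mathbf{B}, \qquad
\mu_0\,\mathbf{J} = \nabla\times\mathbf{B}, \qquad
\nabla\cdot\mathbf{B} = 0 .
```

Solving successive orders then yields the flux surfaces, rotational transform and current density in the vicinity of the axis.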
Tuberous sclerosis complex is a rare genetic disorder leading to the growth of hamartomas in multiple organs, including cardiac rhabdomyomas. Children with symptomatic cardiac rhabdomyoma require frequent admissions to intensive care units and have major complications, namely arrhythmias, cardiac outflow tract obstruction and heart failure, affecting quality of life and incurring high healthcare costs. Currently, there is no standard pharmacological treatment for this condition, and management consists of a conservative approach and supportive care. Everolimus has shown positive effects on subependymal giant cell astrocytomas, renal angiomyolipoma and refractory seizures associated with tuberous sclerosis complex. However, evidence supporting its efficacy in symptomatic cardiac rhabdomyoma is limited to case reports. The ORACLE trial is the first randomised clinical trial assessing the efficacy of everolimus as a specific therapy for symptomatic cardiac rhabdomyoma.
ORACLE is a phase II, prospective, randomised, placebo-controlled, double-blind, multicentre trial. A total of 40 children with symptomatic cardiac rhabdomyoma secondary to tuberous sclerosis complex will be randomised to receive oral everolimus or placebo for 3 months. The primary outcome is a 50% or greater reduction in tumour size relative to baseline. Secondary outcomes include the presence of arrhythmias, pericardial effusion, intracardiac obstruction, adverse events, progression of tumour reduction and effect on heart failure.
ORACLE protocol addresses a relevant unmet need in children with tuberous sclerosis complex and cardiac rhabdomyoma. The results of the trial will potentially support the first evidence-based therapy for this condition.
The concept of compression-only cardiopulmonary resuscitation (CO-CPR) evolved from a perception that lay rescuers may be less likely to perform mouth-to-mouth ventilations during an emergency. This study describes the efficacy of bystander compression-and-ventilation cardiopulmonary resuscitation (CV-CPR) in cardiac arrest following drowning.
The aim of this investigation is to test the hypothesis that bystander cardiopulmonary resuscitation (CPR) utilizing compressions and ventilations results in improved survival for cases of cardiac arrest following drowning compared to CPR involving compressions only.
The Cardiac Arrest Registry for Enhanced Survival (CARES) was queried for patients who suffered cardiac arrest following drowning from January 1, 2013 through December 31, 2017, and in whom data were available on the type of bystander CPR delivered (ie, CV-CPR or CO-CPR). The primary outcome of interest was neurologically favorable survival, as defined by cerebral performance category (CPC).
Neurologically favorable survival was statistically significantly associated with CV-CPR in pediatric patients aged five to 15 years (aOR = 2.68; 95% CI, 1.10–6.77; P = .03), as was survival to hospital discharge across all age groups (aOR = 1.54; 95% CI, 1.01–2.36; P = .046). There was a trend with CV-CPR toward neurologically favorable survival across all age groups (aOR = 1.35; 95% CI, 0.86–2.10; P = .19) and toward survival to hospital admission across all age groups (aOR = 1.29; 95% CI, 0.91–1.84; P = .157).
In cases of cardiac arrest following drowning, bystander CV-CPR was statistically significantly associated with neurologically favorable survival in children aged five to 15 years and survival to hospital discharge.