Identifying risk factors for individuals in a clinical high-risk (CHR) state for psychosis is vital to prevention and early intervention efforts. Among prodromal abnormalities, cognitive functioning has shown intermediate levels of impairment in CHR relative to first-episode psychosis and healthy controls, highlighting a potential role as a risk factor for transition to psychosis and other negative clinical outcomes. The current study used the AX-CPT, a brief 15-min computerized task, to determine whether cognitive control impairments in CHR individuals at baseline could predict clinical status at 12-month follow-up.
Baseline AX-CPT data were obtained from 117 CHR individuals participating in two studies, the Early Detection, Intervention, and Prevention of Psychosis Program (EDIPPP) and the Understanding Early Psychosis Programs (EP), and were used to predict clinical status at 12-month follow-up. At 12 months, 19 individuals converted to a first episode of psychosis (CHR-C), 52 remitted (CHR-R), and 46 had persistent sub-threshold symptoms (CHR-P). Binary logistic regression and multinomial logistic regression were used to test prediction models.
Baseline AX-CPT performance (d-prime context) was less impaired in CHR-R compared to the CHR-P and CHR-C patient groups. AX-CPT predictive validity was robust (0.723) for discriminating converters v. non-converters, and even greater (0.771) when predicting the three CHR subgroups.
These longitudinal outcome data indicate that cognitive control deficits as measured by AX-CPT d-prime context are a strong predictor of clinical outcome in CHR individuals. The AX-CPT is a brief, easily implemented and cost-effective measure that may be valuable for large-scale prediction efforts.
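The discrimination figures quoted above are areas under the ROC curve for a logistic-style prediction. The sketch below illustrates the converters-vs-non-converters case with a rank-based AUC on synthetic d-prime context scores; the group means, spread, and all data here are illustrative assumptions, not the study's data.

```python
import random

random.seed(0)
n, n_conv = 117, 19  # study sample: 117 CHR individuals, 19 converters
# Hypothetical d-prime context scores (assumption: converters more impaired,
# i.e. lower d-prime on average; means and SD chosen only for illustration).
converters = [random.gauss(1.8, 0.7) for _ in range(n_conv)]
others = [random.gauss(2.6, 0.7) for _ in range(n - n_conv)]

def auc_lower_is_risk(pos, neg):
    """AUC as the probability that a randomly chosen converter has a
    lower d-prime score than a randomly chosen non-converter."""
    wins = sum((p < q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

print(f"discrimination (AUC): {auc_lower_is_risk(converters, others):.3f}")
```

This pairwise-comparison form of the AUC is equivalent to the Mann-Whitney U statistic and needs no fitted model, which makes it a convenient check on a logistic regression's reported discrimination.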
We have observed the G23 field of the Galaxy And Mass Assembly (GAMA) survey using the Australian Square Kilometre Array Pathfinder (ASKAP) in its commissioning phase to validate the performance of the telescope and to characterise the detected galaxy populations. This observation covers ~48 deg2 with a synthesised beam of 32.7 arcsec by 17.8 arcsec at 936 MHz, and ~39 deg2 with a synthesised beam of 15.8 arcsec by 12.0 arcsec at 1320 MHz. At both frequencies, the root-mean-square (r.m.s.) noise is ~0.1 mJy/beam. We combine these radio observations with the GAMA galaxy data, which include spectroscopy of galaxies that are i-band selected with a magnitude limit of 19.2. Wide-field Infrared Survey Explorer (WISE) infrared (IR) photometry is used to determine which galaxies host an active galactic nucleus (AGN). In properties including source counts, mass distributions, and the IR versus radio luminosity relation, the ASKAP-detected radio sources behave as expected. Radio galaxies have higher stellar mass and luminosity in IR, optical, and UV than other galaxies. We apply optical and IR AGN diagnostics and find that they disagree for ~30% of the galaxies in our sample. We suggest possible causes for the disagreement. Some cases can be explained by optical extinction of the AGN, but for more than half of the cases we do not find a clear explanation. Radio sources are more likely (~6%) to have an AGN than radio-quiet galaxies (~1%), but the majority of AGN are not detected in radio at this sensitivity.
Little is known about the implications of accessing an outdoor range for broiler chicken welfare, particularly in relation to the distance ranged from the shed. Therefore, we monitored individual ranging behaviour of commercial free-range broiler chickens and identified relationships with welfare indicators. The individual ranging behaviour of 305 mixed-sex Ross 308 broiler chickens was tracked on a commercial farm from the second day of range access to slaughter age (from 16 to 42 days of age) by radio frequency identification (RFID) technology. The radio frequency identification antennas were placed at pop-holes and on the range at 2.7 and 11.2 m from the home shed to determine the total number of range visits and the distance ranged from the shed. Chickens were categorised into close-ranging (CR) or distant-ranging (DR) categories based on the frequency of visits less than or greater than 2.7 m from the home shed, respectively. Half of the tracked chickens (n=153) were weighed at 7 days of age, and from 14 days of age their body weight (BW), foot pad dermatitis (FPD), hock burn (HB) and gait scores were assessed weekly. The remaining tracked chickens (n=152) were assessed for fear and stress responses before (12 days of age) and after range access was provided (45 days of age) by quantifying their plasma corticosterone response to capture and 12 min confinement in a transport crate followed by behavioural fear responses to a tonic immobility (TI) test. Distant-ranging chickens could be predicted based on lighter BW at 7 and 14 days of age (P=0.05), that is, before range access was first provided. After range access was provided, DR chickens weighed less every week (P=0.001), had better gait scores (P=0.01) and reduced corticosterone response to handling and confinement (P<0.05) compared to CR chickens.
Longer and more frequent range visits were correlated with the number of visits further from the shed (P<0.01); hence distant ranging was correlated with the amount of range access, and consequently the relationships between ranging frequency, duration and distance were strong. These relationships indicate that longer, more frequent and greater ranging from the home shed was associated with improved welfare. Further research is required to identify whether these relationships between ranging behaviour and welfare are causal.
Introduction: Although use of point of care ultrasound (PoCUS) protocols for patients with undifferentiated hypotension in the Emergency Department (ED) is widespread, our previously reported SHoC-ED study showed no clear survival or length of stay benefit for patients assessed with PoCUS. In this analysis, we examine if the use of PoCUS changed fluid administration and rates of other emergency interventions between patients with different shock types. The primary comparison was between cardiogenic and non-cardiogenic shock types. Methods: A post-hoc analysis was completed on the database from an RCT of 273 patients who presented to the ED with undifferentiated hypotension (SBP < 100 mmHg or shock index > 1) and who had been randomized to receive standard care with or without PoCUS in 6 centres in Canada and South Africa. PoCUS-trained physicians performed scans after initial assessment. Shock categories and diagnoses, recorded at 60 minutes after ED presentation, were used to allocate patients into subcategories of shock for analysis of treatment. We analyzed actual care delivered including initial IV fluid bolus volumes (mL), rates of inotrope use and major procedures. Standard statistical tests were employed. Sample size was powered at 0.80 (α:0.05) for a moderate difference. Results: Although there were expected differences in the mean fluid bolus volume between patients with non-cardiogenic and cardiogenic shock, there was no difference in fluid bolus volume between the control and PoCUS groups (non-cardiogenic control 1878 mL (95% CI 1550 – 2206 mL) vs. non-cardiogenic PoCUS 1687 mL (1458 – 1916 mL); and cardiogenic control 768 mL (194 – 1341 mL) vs. cardiogenic PoCUS 981 mL (341 – 1620 mL)). Likewise, there were no differences in rates of inotrope administration, or major procedures for any of the subcategories of shock between the control group and PoCUS group patients. The most common subcategory of shock was distributive.
Conclusion: Despite differences in care delivered by subcategory of shock, we did not find any significant difference in actual care delivered between patients who were examined using PoCUS and those who were not. This may help to explain the previously reported lack of outcome difference between groups.
Introduction: Point of care ultrasound has been reported to improve diagnosis in non-traumatic hypotensive ED patients. We compared diagnostic performance of physicians with and without PoCUS in undifferentiated hypotensive patients as part of an international prospective randomized controlled study. The primary outcome was diagnostic performance of PoCUS for cardiogenic vs. non-cardiogenic shock. Methods: SHoC-ED recruited hypotensive patients (SBP < 100 mmHg or shock index > 1) in 6 centres in Canada and South Africa. We describe previously unreported secondary outcomes relating to diagnostic accuracy. Patients were randomized to standard clinical assessment (No PoCUS) or PoCUS groups. PoCUS-trained physicians performed scans after initial assessment. Demographics, clinical details and findings were collected prospectively. Initial and secondary diagnoses including shock category were recorded at 0 and 60 minutes. Final diagnosis was determined by independent blinded chart review. Standard statistical tests were employed. Sample size was powered at 0.80 (α:0.05) for a moderate difference. Results: 273 patients were enrolled with follow-up for primary outcome completed for 270. Baseline demographics and perceived category of shock were similar between groups. 11% of patients were determined to have cardiogenic shock. PoCUS had a sensitivity of 80.0% (95% CI 54.8 to 93.0%), specificity 95.5% (90.0 to 98.1%), LR+ve 17.9 (7.34 to 43.8), LR-ve 0.21 (0.08 to 0.58), Diagnostic OR 85.6 (18.2 to 403.6) and accuracy 93.7% (88.0 to 97.2%) for cardiogenic shock. Standard assessment without PoCUS had a sensitivity of 91.7% (64.6 to 98.5%), specificity 93.8% (87.8 to 97.0%), LR+ve 14.8 (7.1 to 30.9), LR-ve 0.09 (0.01 to 0.58), Diagnostic OR 166.6 (18.7 to 1481) and accuracy of 93.6% (87.8 to 97.2%). There was no significant difference in sensitivity (-11.7% (-37.8 to 18.3%)) or specificity (1.73% (-4.67 to 8.29%)).
Diagnostic performance was also similar between other shock subcategories. Conclusion: As reported in other studies, PoCUS-based assessment performed well diagnostically in undifferentiated hypotensive patients, especially as a rule-in test. However, performance was similar to standard (non-PoCUS) assessment, which was excellent in this study.
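All of the PoCUS accuracy measures quoted above follow from a single 2x2 table. The counts below (TP=12, FN=3, FP=5, TN=107) are an assumption, chosen to be consistent with the reported percentages rather than taken from the paper, but they reproduce every reported point estimate.

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Standard diagnostic-accuracy measures from a 2x2 table."""
    sens = tp / (tp + fn)                  # sensitivity
    spec = tn / (tn + fp)                  # specificity
    lr_pos = sens / (1 - spec)             # positive likelihood ratio
    lr_neg = (1 - sens) / spec             # negative likelihood ratio
    dor = (tp * tn) / (fn * fp)            # diagnostic odds ratio
    acc = (tp + tn) / (tp + fn + fp + tn)  # overall accuracy
    return sens, spec, lr_pos, lr_neg, dor, acc

# Assumed counts consistent with the reported PoCUS figures for cardiogenic shock.
sens, spec, lr_pos, lr_neg, dor, acc = diagnostic_metrics(tp=12, fn=3, fp=5, tn=107)
print(f"sens={sens:.1%} spec={spec:.1%} LR+={lr_pos:.1f} "
      f"LR-={lr_neg:.2f} DOR={dor:.1f} acc={acc:.1%}")
# → sens=80.0% spec=95.5% LR+=17.9 LR-=0.21 DOR=85.6 acc=93.7%
```

The very large LR+ (17.9) is what makes PoCUS a strong rule-in test: a positive scan multiplies the pre-test odds of cardiogenic shock by roughly eighteen.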
After five positive randomized controlled trials showed benefit of mechanical thrombectomy in the management of acute ischemic stroke with emergent large-vessel occlusion, a multi-society meeting was organized during the 17th Congress of the World Federation of Interventional and Therapeutic Neuroradiology in October 2017 in Budapest, Hungary. This multi-society meeting was dedicated to establishing standards of practice in acute ischemic stroke intervention, aiming for a consensus on the minimum requirements for centers providing such treatment. In an ideal situation, all patients would be treated at a center offering a full spectrum of neuroendovascular care (a level 1 center). However, for geographical reasons, some patients are unable to reach such a center in a reasonable period of time. With this in mind, the group paid special attention to defining recommendations on the prerequisites of organizing stroke centers providing mechanical thrombectomy for acute ischemic stroke, but not for other neurovascular diseases (level 2 centers). Finally, some centers will have a stroke unit and offer intravenous thrombolysis, but not any endovascular stroke therapy (level 3 centers). Together, these level 1, 2, and 3 centers form a complete stroke system of care. The multi-society group provides recommendations and a framework for the development of mechanical thrombectomy services worldwide.
We assessed whether paternal demographic, anthropometric and clinical factors influence the risk of an infant being born large-for-gestational-age (LGA). We examined the data on 3659 fathers of term offspring (including 662 LGA infants) born to primiparous women from Screening for Pregnancy Endpoints (SCOPE). LGA was defined as birth weight >90th centile as per INTERGROWTH 21st standards, with the reference group being infants ⩽90th centile. Associations between paternal factors and likelihood of an LGA infant were examined using univariable and multivariable models. Men who fathered LGA babies were 180 g heavier at birth (P<0.001) and were more likely to have been born macrosomic (P<0.001) than those whose infants were not LGA. Fathers of LGA infants were 2.1 cm taller (P<0.001) and 2.8 kg heavier (P<0.001), but had a similar body mass index (BMI). In multivariable models, increasing paternal birth weight and height were independently associated with greater odds of having an LGA infant, irrespective of maternal factors. One unit increase in paternal BMI was associated with 2.9% greater odds of having an LGA boy but not girl; however, this association disappeared after adjustment for maternal BMI. There were no associations between paternal demographic factors or clinical history and infant LGA. In conclusion, fathers who were heavier at birth and were taller were more likely to have an LGA infant, but maternal BMI had a dominant influence on LGA.
We investigated risk factors for severe acute lower respiratory infections (ALRI) among hospitalised children <2 years, with a focus on the interactions between virus and age. Statistical interactions between age and respiratory syncytial virus (RSV), influenza, adenovirus (ADV) and rhinovirus on the risk of ALRI outcomes were investigated. Of 1780 hospitalisations, 228 (12.8%) were admitted to the intensive care unit (ICU). The median (range) length of stay (LOS) in hospital was 3 (1–27) days. An increase of 1 month of age was associated with a decreased risk of ICU admission (rate ratio (RR) 0.94; 95% confidence interval (CI) 0.91–0.98) and with a decrease in LOS (RR 0.96; 95% CI 0.95–0.97). Associations between RSV, influenza, ADV positivity and ICU admission and LOS were significantly modified by age. Children <5 months old were at the highest risk from RSV-associated severe outcomes, while children >8 months were at greater risk from influenza-associated ICU admissions and long hospital stay. Children with ADV had increased LOS across all ages. In the first 2 years of life, the effects of different viruses on ALRI severity vary with age. Our findings help to identify specific ages that would most benefit from virus-specific interventions such as vaccines and antivirals.
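A per-month rate ratio like those above compounds multiplicatively with age, which is why small monthly effects translate into substantial differences across the first year of life. A quick illustrative check (the six-month horizon is an arbitrary choice, not from the study):

```python
# Per-month rate ratio for ICU admission reported above.
rr_per_month = 0.94

# Multiplicative compounding: the implied rate ratio for a child
# 6 months older is rr_per_month ** 6.
months = 6
cumulative = rr_per_month ** months
print(f"implied rate ratio over {months} months of age: {cumulative:.2f}")
# → implied rate ratio over 6 months of age: 0.69
```

So a 6-month-older child has roughly a 31% lower ICU admission rate under this model, holding other covariates fixed.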
Excessive abdominal fat might be associated with more severe metabolic disorders in Holstein cows. Our hypothesis was that there are genetic differences between cows with low and high abdominal fat deposition and a normal cover of subcutaneous adipose tissue. The objective of this study was to assess the genetic basis for variation in visceral adiposity in US Holstein cows. The study included adult Holstein cows sampled from a slaughterhouse (Green Bay, WI, USA) during September 2016. Only animals with a body condition score between 2.75 and 3.25 were considered. The extent of omental fat at the level of the insertion of the lesser omentum over the pylorus area was assessed. A group of 100 Holstein cows with an omental fold <5 mm in thickness and minimum fat deposition throughout the entire omentum, and a second group of 100 cows with an omental fold ⩾20 mm in thickness and with a marked fat deposition observed throughout the entire omentum, were sampled. A small piece of muscle from the neck was collected from each cow into a sterile container for DNA extraction. Samples were submitted to a commercial laboratory for interrogation of genome-wide genomic variation using the Illumina BovineHD BeadChip. Genome-wide association analysis was performed to test potential associations between fat deposition and genomic variation. A univariate mixed linear model analysis was performed using genome-wide efficient mixed model association to identify single nucleotide polymorphisms (SNPs) significantly associated with variation in visceral fat deposition. The chip heritability was 0.686 and the estimated additive genetic and residual variance components were 0.427 and 0.074, respectively. In total, 11 SNPs defining four quantitative trait locus (QTL) regions were found to be significantly associated with visceral fat deposition (P<0.00001).
Two of the QTL regions were detected with four and five significantly associated SNPs, respectively, whereas the QTL on BTA12 and BTA19 were each supported by only one significantly associated SNP. No enriched gene ontology terms were found within the gene networks harboring these genes when supplied to DAVID using either the Bos taurus or human gene ontology databases. We conclude that excessive omental fat in Holstein cows with similar body condition scores is not caused by a single Mendelian locus and that the trait appears to be at least moderately heritable; consequently, selection to reduce excessive omental fat is potentially possible, but would require the generation of predicted transmitting abilities from larger and random samples of Holstein cattle.
Amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD) represent a disease continuum with common genetic causes and molecular pathology. We recently identified mutations in the T-cell restricted intracellular antigen-1 (TIA1) protein as a cause of ALS +/− FTD. TIA1 is an RNA-binding protein containing a low complexity domain (LCD) that promotes the assembly of membrane-less organelles, such as stress granules (SG). Whole exome sequencing of two family members with fALS/FTD revealed a novel missense mutation in the TIA1 LCD (P362L). Subsequent screening identified five more TIA1 mutations in six additional ALS patients, but none in controls. All mutation carriers presented with weakness, behavioral abnormalities or language impairments and had a final diagnosis of ALS +/− FTD. Autopsy on five TIA1 mutation carriers showed widespread neurodegeneration with TDP-43 pathology. Round eosinophilic inclusions in lower motor neurons were a consistent feature. Cellular assays revealed abnormal SG dynamics in the presence of TIA1 mutations. In summary, missense mutations in the LCD of TIA1 are a newly recognized cause of ALS/FTD with TDP-43 pathology and strengthen the role of RNA metabolism in the pathogenesis of this disease.
Multimorbidity is common but little is known about its relationship with obstructive sleep apnea (OSA).
Men Androgen Inflammation Lifestyle Environment and Stress Study participants underwent polysomnography. Chronic diseases (CDs) were determined by biomedical measurement (diabetes, dyslipidaemia, hypertension, obesity), or self-report (depression, asthma, cardiovascular disease, arthritis). Associations between CD count, multimorbidity, apnea-hypopnea index (AHI), OSA severity and quality-of-life (QoL; mental & physical component scores) were determined using multinomial regression analyses, after adjustment for age.
Of the 743 men participating in the study, overall 58% had multimorbidity (2+ CDs), and 52% had OSA (11% severe). About 70% of those with multimorbidity had undiagnosed OSA. Multimorbidity was associated with AHI and undiagnosed OSA. Elevated CD count was associated with higher AHI value and increased OSA severity.
We demonstrate an independent association between the presence of OSA and multimorbidity in this representative sample of community-based men. This effect was strongest in men with moderate to severe OSA and three or more CDs, and appeared to produce a greater reduction in QoL when both conditions were present together.
The Binary Population and Spectral Synthesis suite of binary stellar evolution models and synthetic stellar populations provides a framework for the physically motivated analysis of both the integrated light from distant stellar populations and the detailed properties of those nearby. We present a new version 2.1 data release of these models, detailing the methodology by which Binary Population and Spectral Synthesis incorporates binary mass transfer and its effect on stellar evolution pathways, as well as the construction of simple stellar populations. We present key tests of the latest Binary Population and Spectral Synthesis model suite, demonstrating its ability to reproduce the colours and derived properties of resolved stellar populations, including well-constrained eclipsing binaries. We consider observational constraints on the ratio of massive star types and the distribution of stellar remnant masses. We describe the identification of supernova progenitors in our models, and demonstrate good agreement with the properties of observed progenitors. We also test our models against photometric and spectroscopic observations of unresolved stellar populations, both in the local and distant Universe, finding that binary models provide a self-consistent explanation for observed galaxy properties across a broad redshift range. Finally, we carefully describe the limitations of our models, and areas where we expect to see significant improvement in future versions.
The Taipan galaxy survey (hereafter simply ‘Taipan’) is a multi-object spectroscopic survey starting in 2017 that will cover 2π steradians over the southern sky (δ ≲ 10°, |b| ≳ 10°), and obtain optical spectra for about two million galaxies out to z < 0.4. Taipan will use the newly refurbished 1.2-m UK Schmidt Telescope at Siding Spring Observatory with the new TAIPAN instrument, which includes an innovative ‘Starbugs’ positioning system capable of rapidly and simultaneously deploying up to 150 spectroscopic fibres (and up to 300 with a proposed upgrade) over the 6° diameter focal plane, and a purpose-built spectrograph operating in the range from 370 to 870 nm with resolving power R ≳ 2000. The main scientific goals of Taipan are (i) to measure the distance scale of the Universe (primarily governed by the local expansion rate, H0) to 1% precision, and the growth rate of structure to 5%; (ii) to make the most extensive map yet constructed of the total mass distribution and motions in the local Universe, using peculiar velocities based on improved Fundamental Plane distances, which will enable sensitive tests of gravitational physics; and (iii) to deliver a legacy sample of low-redshift galaxies as a unique laboratory for studying galaxy evolution as a function of dark matter halo and stellar mass and environment. The final survey, which will be completed within 5 years, will consist of a complete magnitude-limited sample (i ⩽ 17) of about 1.2 × 106 galaxies supplemented by an extension to higher redshifts and fainter magnitudes (i ⩽ 18.1) of a luminous red galaxy sample of about 0.8 × 106 galaxies. Observations and data processing will be carried out remotely and in a fully automated way, using purpose-built automated ‘virtual observer’ software and an automated data reduction pipeline.
The Taipan survey is deliberately designed to maximise its legacy value by complementing and enhancing current and planned surveys of the southern sky at wavelengths from the optical to the radio; it will become the primary redshift and optical spectroscopic reference catalogue for the local extragalactic Universe in the southern sky for the coming decade.
Background: Patients who require hospitalization for a mild or moderate traumatic brain injury (TBI) are often discharged home with uncertainty around their full recovery. This study examines the frequency and severity of common post-TBI symptoms, as assessed by the Rivermead Post-Concussion Symptoms Questionnaire (RPCQ). Methods: All adult TBI inpatients discharged home from the Neurosurgery service were interviewed by phone at two weeks by a rehab-based nurse practitioner. RPCQ components (cognitive, emotional, and somatic) were analyzed; findings and management recommendations were communicated to family practitioners and the treating neurosurgeon. Results: Of 46 patients, 52% had cognitive symptoms, 91% had somatic symptoms, and 100% had emotional symptoms. Fatigue was the most common symptom (67%). Double vision was the least common symptom (4%). Recommendations for managing symptoms, return to work, and need for formal clinical assessment were provided for 37% of cases. Conclusions: All patients admitted to neurosurgery with mild or moderate TBI had symptoms at two weeks. The RPCQ is a low-cost structured evaluative tool which highlights needs and provides guidance for patients and caregivers; it also seems effective in identifying those who may require formal clinical assessment.
Introduction: Point of care ultrasound (PoCUS) is an established tool in the initial management of patients with undifferentiated hypotension in the emergency department (ED). While PoCUS protocols have been shown to improve early diagnostic accuracy, there is little published evidence for any mortality benefit. We report the findings from our international multicenter randomized controlled trial, assessing the impact of a PoCUS protocol on survival and key clinical outcomes. Methods: Recruitment occurred at 7 centres in North America (4) and South Africa (3). Scans were performed by PoCUS-trained physicians. Screening at triage identified patients (SBP < 100 mmHg or shock index > 1), randomized to PoCUS or control (standard care and no PoCUS) groups. Demographics, clinical details and study findings were collected prospectively. Initial and secondary diagnoses were recorded at 0 and 60 minutes, with ultrasound performed in the PoCUS group prior to secondary assessment. The primary outcome measure was 30-day/discharge mortality. Secondary outcome measures included diagnostic accuracy, changes in vital signs, acid-base status, and length of stay. Categorical data were analyzed using Fisher's exact test, and continuous data by Student's t-test and multi-level log-regression testing (GraphPad/SPSS). Final chart review was blinded to initial impressions and PoCUS findings. Results: 258 patients were enrolled with follow-up fully completed. Baseline comparisons confirmed effective randomization. There was no difference between groups for the primary outcome of mortality; PoCUS 32/129 (24.8%; 95% CI 14.3-35.3%) vs. Control 32/129 (24.8%; 95% CI 14.3-35.3%); RR 1.00 (95% CI 0.869 to 1.15; p=1.00). There were no differences in the secondary outcomes; ICU and total length of stay. Our sample size has a power of 0.80 (α:0.05) for a moderate effect size. Other secondary outcomes are reported separately.
Conclusion: This is the first RCT to compare PoCUS to standard care for undifferentiated hypotensive ED patients. We did not find any mortality or length of stay benefits with the use of a PoCUS protocol, though a larger study is required to confirm these findings. While PoCUS may have diagnostic benefits, these may not translate into a survival benefit.
Introduction: Point of Care Ultrasound (PoCUS) protocols are commonly used to guide resuscitation for emergency department (ED) patients with undifferentiated non-traumatic hypotension. While PoCUS has been shown to improve early diagnosis, there is minimal evidence for any outcome benefit. We completed an international multicenter randomized controlled trial (RCT) to assess the impact of a PoCUS protocol on key resuscitation markers in this group. We report diagnostic impact and mortality elsewhere. Methods: The SHoC-ED1 study compared the addition of PoCUS to standard care within the first hour in the treatment of adult patients presenting with undifferentiated hypotension (SBP < 100 mmHg or a Shock Index > 1.0) with a control group that did not receive PoCUS. Scans were performed by PoCUS-trained physicians. 4 North American and 3 South African sites participated in the study. Resuscitation outcomes analyzed included volume of fluid administered in the ED, changes in shock index (SI), modified early warning score (MEWS), venous acid-base balance, and lactate, at one and four hours. Comparisons utilized a t-test as well as stratified binomial log-regression to assess for any significant improvement in resuscitation among the outcomes. Our sample size was powered at 0.80 (α:0.05) for a moderate effect size. Results: 258 patients were enrolled with follow-up fully completed. Baseline comparisons confirmed effective randomization. There was no significant difference in mean total volume of fluid received between the control (1658 ml; 95%CI 1365-1950) and PoCUS groups (1609 ml; 1385-1832; p=0.79). Significant improvements were seen in SI, MEWS, lactate and bicarbonate with resuscitation in both the PoCUS and control groups; however, there was no difference between groups. Conclusion: SHoC-ED1 is the first RCT to compare PoCUS to standard of care in hypotensive ED patients.
No significant difference in fluid administered or markers of resuscitation was found when comparing the use of a PoCUS protocol with standard of care in the resuscitation of patients with undifferentiated hypotension.
Introduction: Point of care ultrasonography (PoCUS) is an established tool in the initial management of hypotensive patients in the emergency department (ED). It has been shown to rule out certain shock etiologies and improve diagnostic certainty; however, evidence on benefit in the management of hypotensive patients is limited. We report the findings from our international multicenter RCT assessing the impact of a PoCUS protocol on diagnostic accuracy, as well as other key outcomes including mortality, which are reported elsewhere. Methods: Recruitment occurred at 4 North American and 3 Southern African sites. Screening at triage identified patients (SBP < 100 mmHg or shock index > 1) who were randomized to either PoCUS or control groups. Scans were performed by PoCUS-trained physicians. Demographics, clinical details and findings were collected prospectively. Initial and secondary diagnoses were recorded at 0 and 60 minutes, with ultrasound performed in the PoCUS group prior to secondary assessment. Final chart review was blinded to initial impressions and PoCUS findings. Categorical data were analyzed using Fisher's exact test (two-tailed). Our sample size was powered at 0.80 (α:0.05) for a moderate effect size. Results: 258 patients were enrolled with follow-up fully completed. Baseline comparisons confirmed effective randomization. The perceived shock category changed more frequently in the PoCUS group 20/127 (15.7%) vs. control 7/125 (5.6%); RR 2.81 (95% CI 1.23 to 6.42; p=0.0134). There was no significant difference in change of diagnostic impression between groups PoCUS 39/123 (31.7%) vs control 34/124 (27.4%); RR 1.16 (95% CI 0.786 to 1.70; p=0.4879). There was no significant difference in the rate of correct category of shock between PoCUS (118/127; 93%) and control (113/122; 93%); RR 1.00 (95% CI 0.936 to 1.08; p=1.00), or for correct diagnosis; PoCUS 90/127 (70%) vs control 86/122 (70%); RR 0.987 (95% CI 0.671 to 1.45; p=1.00).
Conclusion: This is the first RCT to compare PoCUS to standard care for undifferentiated hypotensive ED patients. We found that the use of PoCUS did change physicians’ perceived shock category. PoCUS did not improve diagnostic accuracy for category of shock or diagnosis.
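The relative risks quoted in the results above can be checked directly from the reported 2x2 counts. The sketch below uses the Katz log-based confidence interval (an assumption about the method, as the abstract does not name it), applied to the change in perceived shock category (PoCUS 20/127 vs. control 7/125); it reproduces the reported RR 2.81 (95% CI 1.23 to 6.42) to within rounding.

```python
import math

def relative_risk_ci(a, n1, b, n2, z=1.96):
    """Relative risk of event (a/n1 vs. b/n2) with a Katz log-based 95% CI."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    lo, hi = (rr * math.exp(s * z * se) for s in (-1, 1))
    return rr, lo, hi

# Perceived shock category changed: PoCUS 20/127 vs. control 7/125.
rr, lo, hi = relative_risk_ci(20, 127, 7, 125)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

The interval excludes 1, which is why the abstract reports this as the one statistically significant difference between groups.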
We have recently released version 2.0 of the Binary Population and Spectral Synthesis (BPASS) population synthesis code. This is designed to construct the spectra and related properties of stellar populations built from ~200,000 detailed, individual stellar models of known age and metallicity. The output products enable a broad range of theoretical predictions for individual stars, binaries, resolved and unresolved stellar populations, supernovae and their progenitors, and compact remnant mergers. Here we summarise key applications that demonstrate that binary populations typically reproduce observations better than single star models.
Urban slum environments in the tropics are conducive to the proliferation and the spread of rodent-borne zoonotic pathogens to humans. Calodium hepaticum (Bancroft, 1893) is a zoonotic nematode known to infect a variety of mammalian hosts, including humans. Norway rats (Rattus norvegicus) are considered the most important mammalian host of C. hepaticum and are therefore a potentially useful species to inform estimates of the risk to humans living in urban slum environments. There is a lack of studies systematically evaluating the role of demographic and environmental factors that influence both carriage and intensity of infection of C. hepaticum in rodents from urban slum areas within tropical regions. Carriage and the intensity of infection of C. hepaticum were studied in 402 Norway rats over a 2-year period in an urban slum in Salvador, Brazil. Overall, prevalence in Norway rats was 83% (337/402). Independent risk factors for C. hepaticum carriage in R. norvegicus were age and valley of capture. Of those infected, the proportion with gross liver involvement (i.e. >75% of the liver affected, a proxy for a high intensity of infection) was low (8%, 26/337). Sixty soil samples were collected from ten locations to estimate levels of environmental contamination and provide information on the potential risk to humans of contracting C. hepaticum from the environment. Sixty percent (6/10) of the sites were contaminated with C. hepaticum. High carriage levels of C. hepaticum within Norway rats and sub-standard living conditions within slum areas may increase the risk to humans of exposure to the infective eggs of C. hepaticum. This study supports the need for further studies to assess whether humans are becoming infected within this community and whether C. hepaticum is posing a significant risk to human health.