Early in a foodborne disease outbreak investigation, illness incubation periods can help focus case interviews, case definitions, and clinical and environmental evaluations, and can help predict an aetiology. Data describing incubation periods are limited. We examined foodborne disease outbreaks from laboratory-confirmed, single-aetiology, enteric bacterial and viral pathogens reported to United States foodborne disease outbreak surveillance from 1998 to 2013. We grouped pathogens by clinical presentation and analysed the reported median incubation period among all illnesses from the implicated pathogen for each outbreak as the outbreak incubation period. Outbreaks from preformed bacterial toxins (Staphylococcus aureus, Bacillus cereus and Clostridium perfringens) had the shortest outbreak incubation periods (4–10 h medians), distinct from that of Vibrio parahaemolyticus (17 h median). Norovirus, Salmonella and Shigella had longer but similar outbreak incubation periods (32–45 h medians); Campylobacter and Shiga toxin-producing Escherichia coli had the longest among the bacteria (62–87 h medians); hepatitis A had the longest overall (672 h median). Our results can help guide diagnostic and investigative strategies early in an outbreak investigation to suggest or rule out specific aetiologies or, when the pathogen is known, the likely timeframe for exposure. They also point to possible differences in pathogenesis among pathogens causing broadly similar syndromes.
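The outbreak incubation period defined above (the median incubation across all illnesses from the implicated pathogen in one outbreak) can be sketched in a few lines; the exposure and onset times below are hypothetical, chosen only for illustration:

```python
from statistics import median

def outbreak_incubation_hours(exposure_h, onsets_h):
    """Outbreak incubation period (hours): the median incubation
    across all ill persons in a single outbreak.

    exposure_h: time of the shared exposure (hours, arbitrary origin)
    onsets_h:   symptom-onset times for each ill person (same origin)
    """
    return median(onset - exposure_h for onset in onsets_h)

# Hypothetical outbreak: shared meal at t = 0 h, onsets clustered 30-50 h
# later, in the 32-45 h range reported for norovirus/Salmonella/Shigella.
print(outbreak_incubation_hours(0, [30, 36, 40, 44, 50]))  # -> 40
```

With many such outbreak medians per pathogen, the pathogen-level ranges quoted in the abstract are simply summaries of this per-outbreak statistic.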
Feed represents a substantial proportion of production costs in the dairy industry and is a useful target for improving overall system efficiency and sustainability. The objective of this study was to develop methodology to estimate the economic value for a feed efficiency trait and the associated methane production relevant to Canada. The approach quantifies the level of economic savings achieved by selecting animals that convert consumed feed into product while minimizing the feed energy used for inefficient metabolism, maintenance and digestion. We define a selection criterion trait called Feed Performance (FP) as a 1 kg increase in more efficiently used feed in a first parity lactating cow. The impact of a change in this trait on the total lifetime value of more efficiently used feed via correlated selection responses in other life stages is then quantified. The resulting improved conversion of feed was also applied to determine the resulting reduction in output of emissions (and their relative value based on a national emissions value) under an assumption of constant methane yield, where methane yield is defined as kg methane/kg dry matter intake (DMI). Overall, increasing the FP estimated breeding value by one unit (i.e. 1 kg of more efficiently converted DMI during the cow’s first lactation) translates to a total lifetime saving of 3.23 kg in DMI and 0.055 kg in methane, with economic values of CAD $0.82 and CAD $0.07, respectively. Therefore, the estimated total economic value for FP is CAD $0.89/unit. The proposed model is robust and could also be applied to determine the economic value for feed efficiency traits within a selection index in other production systems and countries.
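The headline arithmetic above can be reproduced directly from the reported figures. A minimal sketch, with the caveat that the per-kg prices below are back-calculated from the abstract's CAD totals and are illustrative assumptions, not published model inputs:

```python
# Reported lifetime selection response per unit of Feed Performance EBV.
dmi_saved_kg = 3.23       # lifetime dry matter intake saved (kg)
methane_saved_kg = 0.055  # lifetime methane reduction (kg), constant methane yield

# Illustrative per-kg values back-calculated from the abstract's CAD totals
# (assumptions for this sketch only).
feed_price_per_kg = 0.82 / dmi_saved_kg         # ~0.254 CAD per kg DMI
methane_value_per_kg = 0.07 / methane_saved_kg  # ~1.27 CAD per kg CH4

feed_value = dmi_saved_kg * feed_price_per_kg              # CAD 0.82
emissions_value = methane_saved_kg * methane_value_per_kg  # CAD 0.07
total_economic_value = feed_value + emissions_value
print(round(total_economic_value, 2))  # -> 0.89 CAD per unit of FP
```

The structure (feed saving plus emissions value, each priced separately) is what makes the model portable to other production systems and countries: only the price inputs change.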
Introduction: Although use of point of care ultrasound (PoCUS) protocols for patients with undifferentiated hypotension in the Emergency Department (ED) is widespread, our previously reported SHoC-ED study showed no clear survival or length of stay benefit for patients assessed with PoCUS. In this analysis, we examine whether the use of PoCUS changed fluid administration and rates of other emergency interventions between patients with different shock types. The primary comparison was between cardiogenic and non-cardiogenic shock types. Methods: A post-hoc analysis was completed on the database from an RCT of 273 patients who presented to the ED with undifferentiated hypotension (SBP < 100 mmHg or shock index > 1) and who had been randomized to receive standard care with or without PoCUS in 6 centres in Canada and South Africa. PoCUS-trained physicians performed scans after initial assessment. Shock categories and diagnoses recorded at 60 minutes after ED presentation were used to allocate patients into subcategories of shock for analysis of treatment. We analyzed actual care delivered including initial IV fluid bolus volumes (mL), rates of inotrope use and major procedures. Standard statistical tests were employed. Sample size was powered at 0.80 (α:0.05) for a moderate difference. Results: Although there were expected differences in the mean fluid bolus volume between patients with non-cardiogenic and cardiogenic shock, there was no difference in fluid bolus volume between the control and PoCUS groups (non-cardiogenic control 1878 mL (95% CI 1550 – 2206 mL) vs. non-cardiogenic PoCUS 1687 mL (1458 – 1916 mL); cardiogenic control 768 mL (194 – 1341 mL) vs. cardiogenic PoCUS 981 mL (341 – 1620 mL)). Likewise, there were no differences in rates of inotrope administration or major procedures for any of the subcategories of shock between the control group and PoCUS group patients. The most common subcategory of shock was distributive.
Conclusion: Despite differences in care delivered by subcategory of shock, we did not find any significant difference in actual care delivered between patients who were examined using PoCUS and those who were not. This may help to explain the previously reported lack of outcome difference between groups.
Introduction: Point of care ultrasound (PoCUS) has been reported to improve diagnosis in non-traumatic hypotensive ED patients. We compared the diagnostic performance of physicians with and without PoCUS in undifferentiated hypotensive patients as part of an international prospective randomized controlled study. The primary outcome was diagnostic performance of PoCUS for cardiogenic vs. non-cardiogenic shock. Methods: SHoC-ED recruited hypotensive patients (SBP < 100 mmHg or shock index > 1) in 6 centres in Canada and South Africa. We describe previously unreported secondary outcomes relating to diagnostic accuracy. Patients were randomized to standard clinical assessment (no PoCUS) or PoCUS groups. PoCUS-trained physicians performed scans after initial assessment. Demographics, clinical details and findings were collected prospectively. Initial and secondary diagnoses, including shock category, were recorded at 0 and 60 minutes. Final diagnosis was determined by independent blinded chart review. Standard statistical tests were employed. Sample size was powered at 0.80 (α:0.05) for a moderate difference. Results: 273 patients were enrolled, with follow-up for the primary outcome completed for 270. Baseline demographics and perceived category of shock were similar between groups. 11% of patients were determined to have cardiogenic shock. PoCUS had a sensitivity of 80.0% (95% CI 54.8 to 93.0%), specificity 95.5% (90.0 to 98.1%), LR+ve 17.9 (7.34 to 43.8), LR-ve 0.21 (0.08 to 0.58), diagnostic OR 85.6 (18.2 to 403.6) and accuracy 93.7% (88.0 to 97.2%) for cardiogenic shock. Standard assessment without PoCUS had a sensitivity of 91.7% (64.6 to 98.5%), specificity 93.8% (87.8 to 97.0%), LR+ve 14.8 (7.1 to 30.9), LR-ve 0.09 (0.01 to 0.58), diagnostic OR 166.6 (18.7 to 1481) and accuracy of 93.6% (87.8 to 97.2%). There was no significant difference in sensitivity (-11.7% (-37.8 to 18.3%)) or specificity (1.73% (-4.67 to 8.29%)).
Diagnostic performance was also similar between other shock subcategories. Conclusion: As reported in other studies, PoCUS-based assessment performed well diagnostically in undifferentiated hypotensive patients, especially as a rule-in test. However, performance was similar to standard (non-PoCUS) assessment, which was excellent in this study.
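All of the diagnostic metrics quoted above derive from a standard 2×2 table. A minimal sketch; the counts below are hypothetical, chosen only because they reproduce the reported PoCUS-arm point estimates, and are not the study's actual data:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Standard test-performance metrics from 2x2 counts."""
    sens = tp / (tp + fn)                  # sensitivity
    spec = tn / (tn + fp)                  # specificity
    lr_pos = sens / (1 - spec)             # positive likelihood ratio
    lr_neg = (1 - sens) / spec             # negative likelihood ratio
    dor = lr_pos / lr_neg                  # diagnostic odds ratio
    acc = (tp + tn) / (tp + fn + fp + tn)  # raw accuracy
    return sens, spec, lr_pos, lr_neg, dor, acc

# Hypothetical counts consistent with the PoCUS-arm estimates:
# 12 true positives, 3 false negatives, 5 false positives, 106 true negatives.
sens, spec, lr_pos, lr_neg, dor, acc = diagnostic_metrics(12, 3, 5, 106)
print(f"sens={sens:.1%} spec={spec:.1%} LR+={lr_pos:.1f} LR-={lr_neg:.2f}")
```

The "rule-in" remark follows directly from these formulas: a high specificity drives a large LR+ve, so a positive PoCUS finding sharply raises the probability of cardiogenic shock.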
The field of psychiatry would benefit significantly from developing objective biomarkers that could facilitate the early identification of heterogeneous subtypes of illness. Critically, although machine learning pattern recognition methods have been applied recently to predict many psychiatric disorders, these techniques have not been utilized to predict subtypes of posttraumatic stress disorder (PTSD), including the dissociative subtype of PTSD (PTSD + DS).
Using Multiclass Gaussian Process Classification within PRoNTo, we examined the classification accuracy of: (i) the mean amplitude of low-frequency fluctuations (mALFF; reflecting spontaneous neural activity during rest); and (ii) seed-based amygdala complex functional connectivity within 181 participants [PTSD (n = 81); PTSD + DS (n = 49); and age-matched healthy trauma-unexposed controls (n = 51)]. We also computed mass-univariate analyses in order to observe regional group differences [false-discovery-rate (FDR)-cluster corrected p < 0.05, k = 20].
We found that the extracted features could accurately classify PTSD, PTSD + DS, and healthy controls, using both resting-state mALFF (91.63% balanced accuracy, p < 0.001) and amygdala complex connectivity maps (85.00% balanced accuracy, p < 0.001). These results were replicated using independent machine learning algorithms/cross-validation procedures. Moreover, areas weighted as being most important for group classification also displayed significant group differences at the univariate level. Here, whereas the PTSD + DS group displayed increased activation within emotion regulation regions, the PTSD group showed increased activation within the amygdala, globus pallidus, and motor/somatosensory regions.
The current study has significant implications for advancing machine learning applications within the field of psychiatry, as well as for developing objective biomarkers indicative of diagnostic heterogeneity.
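Balanced accuracy, the figure of merit reported above, is the mean of per-class recall, which protects against the unequal group sizes (81/49/51) inflating raw accuracy. A minimal pure-Python sketch with made-up labels, purely for illustration:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean per-class recall: robust to unequal class sizes."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

# Toy 3-class example with unequal class sizes (hypothetical labels).
y_true = ["PTSD"] * 4 + ["PTSD+DS"] * 2 + ["control"] * 2
y_pred = ["PTSD", "PTSD", "PTSD", "control",
          "PTSD+DS", "PTSD+DS", "control", "control"]
print(balanced_accuracy(y_true, y_pred))  # each class contributes equally
```

A classifier that always predicted the majority class would score 1/3 here rather than the 50% raw accuracy it would achieve, which is why balanced accuracy is the appropriate metric for this three-group design.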
Research has shown both production and health benefits from the use of chicory (Cichorium intybus) in ruminant diets. Despite this, little is known about the effects of this forage, which differs from ryegrass in fatty acid profile and secondary plant compounds, on beef stability, fatty acid composition or sensory properties. An experiment was conducted to investigate whether the inclusion of chicory in the diet of grazing beef steers would alter these three properties in the M. longissimus muscle when compared with beef steers grazing perennial ryegrass (Lolium perenne). Triplicate 2 ha plots were established with a chicory (cv. Puna II)/perennial ryegrass mix or a perennial ryegrass control. A core group of 36 Belgian Blue-cross steers was used within a 2-year beef finishing experiment (n = 6/replicate plot). In the second grazing year, steers were slaughtered as they reached a target fat class of 3. Muscle pH was checked 2 and 48 h post-slaughter. A section of the hindloin joint containing the M. longissimus lumborum muscle was removed, a 20 mm-thick steak was cut, and muscle samples were taken for vitamin E and fatty acid analysis. The remaining section of the loin was vacuum packed in modified atmosphere packs and subjected to simulated retail display. A section of the conditioned loin was used for sensory analysis. Data on pH, vitamin E concentration and colour stability in a simulated retail display showed there were no effects of including chicory in the diet of grazing beef steers on meat stability. There were also no differences found in the fatty acid composition or the overall eating quality of the steaks from the two treatments. In conclusion, there were no substantive effects of including chicory in the swards of grazing beef cattle on meat stability, fatty acid composition or sensory properties of the M. longissimus muscle when compared with beef steers grazing ryegrass-only swards.
Introduction: Point of care ultrasound (PoCUS) is an established tool in the initial management of patients with undifferentiated hypotension in the emergency department (ED). While PoCUS protocols have been shown to improve early diagnostic accuracy, there is little published evidence for any mortality benefit. We report the findings from our international multicenter randomized controlled trial, assessing the impact of a PoCUS protocol on survival and key clinical outcomes. Methods: Recruitment occurred at 7 centres in North America (4) and South Africa (3). Scans were performed by PoCUS-trained physicians. Screening at triage identified patients (SBP < 100 mmHg or shock index > 1), who were randomized to PoCUS or control (standard care without PoCUS) groups. Demographics, clinical details and study findings were collected prospectively. Initial and secondary diagnoses were recorded at 0 and 60 minutes, with ultrasound performed in the PoCUS group prior to secondary assessment. The primary outcome measure was 30-day/discharge mortality. Secondary outcome measures included diagnostic accuracy, changes in vital signs, acid-base status, and length of stay. Categorical data were analyzed using Fisher's test, and continuous data by Student's t-test and multi-level log-regression testing (GraphPad/SPSS). Final chart review was blinded to initial impressions and PoCUS findings. Results: 258 patients were enrolled with follow-up fully completed. Baseline comparisons confirmed effective randomization. There was no difference between groups for the primary outcome of mortality: PoCUS 32/129 (24.8%; 95% CI 14.3-35.3%) vs. control 32/129 (24.8%; 95% CI 14.3-35.3%); RR 1.00 (95% CI 0.869 to 1.15; p=1.00). There were no differences in the secondary outcomes of ICU and total length of stay. Our sample size has a power of 0.80 (α:0.05) for a moderate effect size. Other secondary outcomes are reported separately.
Conclusion: This is the first RCT to compare PoCUS to standard care for undifferentiated hypotensive ED patients. We did not find any mortality or length of stay benefits with the use of a PoCUS protocol, though a larger study is required to confirm these findings. While PoCUS may have diagnostic benefits, these may not translate into a survival benefit.
Introduction: Point of Care Ultrasound (PoCUS) protocols are commonly used to guide resuscitation for emergency department (ED) patients with undifferentiated non-traumatic hypotension. While PoCUS has been shown to improve early diagnosis, there is minimal evidence for any outcome benefit. We completed an international multicenter randomized controlled trial (RCT) to assess the impact of a PoCUS protocol on key resuscitation markers in this group. We report diagnostic impact and mortality elsewhere. Methods: The SHoC-ED1 study compared the addition of PoCUS to standard care within the first hour in the treatment of adult patients presenting with undifferentiated hypotension (SBP < 100 mmHg or a shock index > 1.0) with a control group that did not receive PoCUS. Scans were performed by PoCUS-trained physicians. 4 North American and 3 South African sites participated in the study. Resuscitation outcomes analyzed included volume of fluid administered in the ED, changes in shock index (SI), modified early warning score (MEWS), venous acid-base balance, and lactate, at one and four hours. Comparisons utilized Student's t-test as well as stratified binomial log-regression to assess for any significant improvement in resuscitation among the outcomes. Our sample size was powered at 0.80 (α:0.05) for a moderate effect size. Results: 258 patients were enrolled with follow-up fully completed. Baseline comparisons confirmed effective randomization. There was no significant difference in mean total volume of fluid received between the control (1658 mL; 95% CI 1365-1950) and PoCUS groups (1609 mL; 1385-1832; p=0.79). Significant improvements were seen in SI, MEWS, lactate and bicarbonate with resuscitation in both the PoCUS and control groups; however, there was no difference between groups. Conclusion: SHoC-ED1 is the first RCT to compare PoCUS to standard of care in hypotensive ED patients.
No significant difference in fluid used or markers of resuscitation was found when comparing the use of a PoCUS protocol to standard of care in the resuscitation of patients with undifferentiated hypotension.
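The SHoC-ED abstracts share one screening rule: SBP < 100 mmHg or shock index > 1.0, where shock index is heart rate divided by systolic blood pressure. The eligibility check can be sketched as follows (the example vital signs are hypothetical):

```python
def shock_index(heart_rate_bpm, sbp_mmhg):
    """Shock index = heart rate (bpm) / systolic BP (mmHg)."""
    return heart_rate_bpm / sbp_mmhg

def eligible(heart_rate_bpm, sbp_mmhg):
    """SHoC-ED screening rule: SBP < 100 mmHg or shock index > 1.0."""
    return sbp_mmhg < 100 or shock_index(heart_rate_bpm, sbp_mmhg) > 1.0

print(eligible(112, 104))  # HR 112, SBP 104: SI ~1.08, so eligible -> True
print(eligible(70, 120))   # normal vitals -> False
```

The second arm of the rule captures compensated shock: a patient maintaining SBP above 100 mmHg through tachycardia still screens positive once heart rate exceeds systolic pressure.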
Introduction: Point of care ultrasonography (PoCUS) is an established tool in the initial management of hypotensive patients in the emergency department (ED). It has been shown to rule out certain shock etiologies and improve diagnostic certainty; however, evidence of benefit in the management of hypotensive patients is limited. We report the findings from our international multicenter RCT assessing the impact of a PoCUS protocol on diagnostic accuracy, as well as other key outcomes, including mortality, which are reported elsewhere. Methods: Recruitment occurred at 4 North American and 3 Southern African sites. Screening at triage identified patients (SBP < 100 mmHg or shock index > 1) who were randomized to either PoCUS or control groups. Scans were performed by PoCUS-trained physicians. Demographics, clinical details and findings were collected prospectively. Initial and secondary diagnoses were recorded at 0 and 60 minutes, with ultrasound performed in the PoCUS group prior to secondary assessment. Final chart review was blinded to initial impressions and PoCUS findings. Categorical data were analyzed using Fisher's two-tailed test. Our sample size was powered at 0.80 (α:0.05) for a moderate effect size. Results: 258 patients were enrolled with follow-up fully completed. Baseline comparisons confirmed effective randomization. The perceived shock category changed more frequently in the PoCUS group: 20/127 (15.7%) vs. control 7/125 (5.6%); RR 2.81 (95% CI 1.23 to 6.42; p=0.0134). There was no significant difference in change of diagnostic impression between groups: PoCUS 39/123 (31.7%) vs. control 34/124 (27.4%); RR 1.16 (95% CI 0.786 to 1.70; p=0.4879). There was no significant difference in the rate of correct category of shock between PoCUS (118/127; 93%) and control (113/122; 93%); RR 1.00 (95% CI 0.936 to 1.08; p=1.00), or for correct diagnosis: PoCUS 90/127 (70%) vs. control 86/122 (70%); RR 0.987 (95% CI 0.671 to 1.45; p=1.00).
Conclusion: This is the first RCT to compare PoCUS to standard care for undifferentiated hypotensive ED patients. We found that the use of PoCUS did change physicians’ perceived shock category. PoCUS did not improve diagnostic accuracy for category of shock or diagnosis.
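The risk ratios above (e.g. RR 2.81, 95% CI 1.23 to 6.42 for a change in perceived shock category) follow from the standard log-normal approximation for a ratio of two proportions; a sketch using the reported counts, with the caveat that the published intervals may use a slightly different method, so the last digit can differ:

```python
import math

def risk_ratio(a, n1, b, n2, z=1.96):
    """Risk ratio of a/n1 vs b/n2 with an approximate 95% CI
    (log-normal approximation, no continuity correction)."""
    rr = (a / n1) / (b / n2)
    se_log = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Reported counts: perceived shock category changed in 20/127 (PoCUS)
# vs. 7/125 (control).
rr, lo, hi = risk_ratio(20, 127, 7, 125)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Because the lower bound exceeds 1, the increase in changed shock-category perception in the PoCUS group is statistically significant, matching the reported p=0.0134.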
Although most non-typhoidal Salmonella illnesses are self-limiting, antimicrobial treatment is critical for invasive infections. To describe resistance in Salmonella that caused foodborne outbreaks in the United States, we linked outbreaks submitted to the Foodborne Disease Outbreak Surveillance System to isolate susceptibility data in the National Antimicrobial Resistance Monitoring System. Resistant outbreaks were defined as those linked to one or more isolates with resistance to at least one antimicrobial drug. Multidrug resistant (MDR) outbreaks had at least one isolate resistant to three or more antimicrobial classes. Twenty-one per cent (37/176) of linked outbreaks were resistant. In outbreaks attributed to a single food group, 73% (16/22) of resistant outbreaks and 46% (31/68) of non-resistant outbreaks were attributed to foods from land animals (P < 0·05). MDR Salmonella with clinically important resistance caused 29% (14/48) of outbreaks from land animals and 8% (3/40) of outbreaks from plant products (P < 0·01). In our study, resistant Salmonella infections were more common in outbreaks attributed to foods from land animals than outbreaks from foods from plants or aquatic animals. Antimicrobial susceptibility data on isolates from foodborne Salmonella outbreaks can help determine which foods are associated with resistant infections.
Eta Carinae is one of the most massive observable binaries. Yet determination of its orbital and physical parameters is hampered by obscuring winds. However, the effects of the strong colliding winds change with phase due to the high orbital eccentricity. We aimed to improve measures of the orbital parameters and to determine the mechanisms that produce the relatively brief, phase-locked minimum detected throughout the electromagnetic spectrum. We conducted intense monitoring of the He ii λ4686 line in η Carinae for 10 months in 2014, gathering ~300 high S/N spectra with ground- and space-based telescopes. We also used published spectra at the FOS4 SE polar region of the Homunculus, which views the minimum from a different direction. We used a model in which the He ii λ4686 emission is produced by two mechanisms: a) one linked to the intensity of the wind-wind collision, which occurs along the whole orbit and is proportional to the inverse square of the separation between the companion stars; and b) the other produced by the ‘bore hole’ effect, which occurs at phases across the periastron passage. The opacity (computed from 3D SPH simulations) convolved with the emission reproduces the behavior of equivalent widths for both direct and reflected light. Our main results are: a) a demonstration that the He ii λ4686 light curve is exquisitely repeatable from cycle to cycle, contrary to previous claims of large changes; b) an accurate determination of the longitude of periastron, indicating that the secondary star is ‘behind’ the primary at periastron, settling a dispute extended over the past decade; c) a determination of the time of periastron passage, at ~4 days after the onset of the deep light curve minimum; and d) a demonstration that the minimum is simultaneous for observers at different lines of sight, indicating that it is not caused by an eclipse of the secondary star, but rather by the immersion of the wind-wind collision interior to the inner wind of the primary.
The main question that Firestone & Scholl (F&S) pose is whether “what and how we see is functionally independent from what and how we think, know, desire, act, and so forth” (sect. 2, para. 1). We synthesize a collection of concerns from an interdisciplinary set of coauthors regarding F&S's assumptions and appeals to intuition, resulting in their treatment of visual perception as context-free.
To date, few studies have quantified the risk of methicillin-resistant Staphylococcus aureus (MRSA) skin and soft tissue infections (SSTIs) for MRSA-colonized patients after discharge from hospital. Our retrospective, case-control study identified independent risk factors for the development of MRSA SSTIs among such patients in an acute care hospital, detected by active MRSA nasal screening (PCR on admission and bacteriological cultures on discharge). Cases were MRSA-colonized patients aged ⩾18 years who developed a MRSA SSTI post-discharge, and controls were those who did not develop a MRSA SSTI post-discharge. Controls were matched to cases by length of follow-up (±10 days) for up to 18 months. Potential demographic and clinical risk factors for MRSA infection were identified using electronic queries and manual chart abstraction; data were compared by standard statistical tests, and variables with P values ⩽0·05 in bivariable analysis were entered into a logistic regression model. Multivariable analysis demonstrated that prior hospital admission within 12 months (P = 0·02), prior MRSA infection (P = 0·05), and previous myocardial infarction (P = 0·01) were independently predictive of a MRSA SSTI post-discharge. Identification of MRSA colonization upon admission and recognition of risk factors could help identify a high-risk population that could benefit from MRSA SSTI prevention strategies.
The rearing period has a key influence on the later performance of cattle, affecting future fertility and longevity. Producers usually aim to breed replacement heifers by 15 months to calve at 24 months. An age at first calving (AFC) close to 2 years (23 to 25 months) is optimum for economic performance as it minimises the non-productive period and maintains a seasonal calving pattern. This is rarely achieved in either dairy or beef herds, with average AFC for dairy herds usually between 26 and 30 months. Maintaining a low AFC requires good heifer management with adequate growth to ensure an appropriate BW and frame size at calving. Puberty should occur at least 6 weeks before the target breeding age to enable animals to undergo oestrous cycles before mating. Cattle reach puberty at a fairly consistent, but breed-dependent, proportion of mature BW. Heifer fertility is a critical component of AFC. In US Holsteins, the conception rate peaked at 57% at 15 to 16 months, declining in older heifers. Wide variations in growth rates on the same farm often lead to some animals having delayed first breeding and/or conception. Oestrous synchronisation regimes and sexed semen can both be used, but unless heifers have been previously well managed the success rates may be unacceptably low. Altering the nutritional input above or below that needed for maintenance at any stage from birth to first calving clearly alters the average daily gain (ADG) in weight. In general, an ADG of around 0.75 kg/day seems optimal for dairy heifers, with lower rates delaying puberty and AFC. There is some scope to vary ADG at different ages provided animals reach an adequate size by calving. Major periods of nutritional deficiency and/or severe calfhood disease will, however, compromise development with long-term adverse consequences. Infectious disease can also cause pregnancy loss/abortion.
First lactation milk yield may be slightly lower in younger calving cows but lifetime production is higher as such animals usually have good fertility and survive longer. There is now extensive evidence that as long as the AFC is >23 months then future performance is not adversely influenced. On the other hand, delayed first calving >30 months is associated with poor survival. Underfeeding of young heifers reduces their milk production potential and is a greater problem than overfeeding. Farmers are more likely to meet the optimum AFC target of 23 to 25 months if they monitor growth rates and adjust feed accordingly.
Co-morbid major depression occurs in approximately 10% of people suffering from a chronic medical condition such as cancer. Systematic integrated management that includes both identification and treatment has been advocated. However, we lack information on the cost-effectiveness of this combined approach, as published evaluations have focused solely on the systematic (collaborative care) treatment stage. We therefore aimed to use the best available evidence to estimate the cost-effectiveness of systematic integrated management (both identification and treatment) compared with usual practice, for patients attending specialist cancer clinics.
We conducted a cost-effectiveness analysis using a decision analytic model structured to reflect both the identification and treatment processes. Evidence was taken from reviews of relevant clinical trials and from observational studies, together with data from a large depression screening service. Sensitivity and scenario analyses were undertaken to determine the effects of variations in depression incidence rates, time horizons and patient characteristics.
Systematic integrated depression management generated more costs than usual practice, but also more quality-adjusted life years (QALYs). The incremental cost-effectiveness ratio (ICER) was £11 765 per QALY. This finding was robust to tests of uncertainty and variation in key model parameters.
Systematic integrated management of co-morbid major depression in cancer patients is likely to be cost-effective at widely accepted threshold values and may be a better way of generating QALYs for cancer patients than some existing medical and surgical treatments. It could usefully be applied to other chronic medical conditions.
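The headline figure above is an incremental cost-effectiveness ratio, ICER = Δcost / ΔQALY, compared against a willingness-to-pay threshold. A minimal sketch; the per-patient cost and QALY values below are hypothetical, chosen only to reproduce the reported £11,765/QALY, and are not taken from the published model:

```python
def icer(cost_new, cost_usual, qaly_new, qaly_usual):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_usual) / (qaly_new - qaly_usual)

# Hypothetical per-patient figures for illustration only:
# integrated management costs 1000 more and yields 0.085 more QALYs.
value = icer(cost_new=1500.0, cost_usual=500.0,
             qaly_new=1.385, qaly_usual=1.300)
print(round(value))    # cost (GBP) per QALY gained
print(value < 20000)   # below a commonly cited UK threshold -> True
```

The cost-effectiveness claim rests on exactly this comparison: an ICER of about £11,765/QALY sits well below thresholds commonly used in UK decision-making.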
We present the results of an all-sky survey for binary systems among massive stars, conducted with the HST Fine Guidance Sensors (FGS). The sample of 225 stars comprises mainly Galactic O- and B-type stars and Luminous Blue Variables, plus a few luminous stars in the LMC. The FGS TRANS mode observations are sensitive to companions with an angular separation of 0.01–1 arcsec and brighter than Δm = 5 mag. The FGS observations resolved 52 binary and 6 triple star systems and detected partially resolved binaries in 7 additional targets, yielding a companion detection frequency of 29%. We also gathered literature results on the numbers of close spectroscopic binaries and wider astrometric binaries among the sample. These results confirm the high multiplicity fraction. The period distribution is essentially flat in increments of log P, although there remains an observational gap in detections for periods of years to decades.