Homeless shelter residents and staff may be at higher risk of SARS-CoV-2 infection. However, SARS-CoV-2 infection estimates in this population have been reliant on cross-sectional or outbreak investigation data. We conducted routine surveillance and outbreak testing in 23 homeless shelters in King County, Washington, to estimate the occurrence of laboratory-confirmed SARS-CoV-2 infection and risk factors during 1 January 2020–31 May 2021. Symptom surveys and nasal swabs were collected for SARS-CoV-2 testing by RT-PCR for residents aged ≥3 months and staff. We collected 12,915 specimens from 2,930 unique participants. We identified 4.74 (95% CI 4.00–5.58) SARS-CoV-2 infections per 100 individuals (residents: 4.96, 95% CI 4.12–5.91; staff: 3.86, 95% CI 2.43–5.79). Most infections were asymptomatic at the time of detection (74%) and detected during routine surveillance (73%). Outbreak testing yielded higher test positivity than routine surveillance (2.7% versus 0.9%). Among those infected, residents were less likely to report symptoms than staff. Participants who were vaccinated against seasonal influenza and were current smokers had lower odds of having an infection detected. Active surveillance that includes SARS-CoV-2 testing of all persons is essential in ascertaining the true burden of SARS-CoV-2 infections among residents and staff of congregate settings.
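The headline estimate is a cumulative incidence with a binomial confidence interval. As a minimal sketch of the arithmetic in Python: the infection count of 139 is back-calculated from the reported 4.74 per 100 among 2,930 participants and is an assumption, not a count stated in the abstract.

```python
# Sketch: cumulative incidence per 100 with an exact (Clopper-Pearson)
# 95% CI. The 139 infections are back-calculated from 4.74 per 100 among
# 2,930 participants -- an assumption, not a count stated in the abstract.
from scipy.stats import binomtest

cases, n = 139, 2930
ci = binomtest(cases, n).proportion_ci(confidence_level=0.95, method="exact")
print(f"{100 * cases / n:.2f} per 100 "
      f"(95% CI {100 * ci.low:.2f}-{100 * ci.high:.2f})")
```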
This paper provides an overview and appraisal of the International Design Engineering Annual (IDEA) challenge, a virtually hosted design hackathon run with the aim of generating a design research dataset that can provide insights into design activities at virtually hosted hackathons. The resulting dataset consists of more than 200 prototypes with over 1,300 connections, providing insights into the products, processes, and people involved in the design process. The paper also provides recommendations for future deployments of virtual hackathons for design research.
OBJECTIVES/GOALS: The goal of this study was to understand the impact of a high-sodium diet on gene networks in the kidney that correlate with blood pressure in female primates, and to translate those findings to women. METHODS/STUDY POPULATION: Sodium-naïve female baboons (n=7) were fed a low-sodium (LS) diet for 6 weeks followed by a high-sodium (HS) diet for 6 weeks. Sodium intake, serum 17 beta-estradiol, and ultrasound-guided kidney biopsies for RNA-Seq were collected at the end of each diet. Blood pressure was continuously measured for 64-hour periods throughout the study by implantable telemetry devices. Weighted gene co-expression network analysis was performed on RNA-Seq data to identify transcripts correlated with blood pressure on each diet. Network analysis was performed on transcripts highly correlated with BP, and in silico findings were validated by immunohistochemistry of kidney tissues. RESULTS/ANTICIPATED RESULTS: On the LS diet, Na+ intake and serum 17 beta-estradiol concentration correlated with BP. Cell-type composition of renal biopsies was consistent among all animals for both diets. Kidney transcriptomes differed by diet; unbiased weighted gene co-expression network analysis revealed modules of genes correlated with BP on the HS diet. Network analysis of module genes showed causal networks linking hormone receptors, proliferation and differentiation, methylation, hypoxia, insulin and lipid regulation, and inflammation as regulators underlying variation in BP on the HS diet. Our results show variation in BP correlated with novel kidney gene networks with master regulators PPARG and MYC in female baboons on a HS diet. DISCUSSION/SIGNIFICANCE: Previous studies in primates to identify molecular networks dysregulated by a HS diet focused on males. Current clinical guidelines do not offer sex-specific treatment plans for sodium-sensitive hypertension. This study leveraged variation in BP as a first step to identify correlated kidney regulatory gene networks in female primates after a HS diet.
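The network step here is typically done with the WGCNA R package; as a rough illustration of the underlying idea (cluster co-expressed genes into modules, summarize each module by its first principal component, or "eigengene", and correlate eigengenes with the trait), here is a hypothetical Python sketch on simulated data, not the study's actual pipeline:

```python
# Illustrative sketch of the co-expression-module idea behind WGCNA, on
# simulated data; the real analysis would use the WGCNA R package.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_samples, n_genes = 14, 200              # hypothetical: 7 animals x 2 diets
expr = rng.normal(size=(n_samples, n_genes))
bp = rng.normal(110, 10, size=n_samples)  # blood pressure per sample

# 1. Cluster genes into modules using correlation distance.
dist = np.clip(1 - np.corrcoef(expr.T), 0, 2)
condensed = dist[np.triu_indices(n_genes, 1)]
modules = fcluster(linkage(condensed, method="average"), t=5,
                   criterion="maxclust")

# 2. Summarize each module by its first principal component (the
#    "eigengene") and correlate that eigengene with blood pressure.
for m in np.unique(modules):
    sub = expr[:, modules == m]
    sub = (sub - sub.mean(axis=0)) / sub.std(axis=0)
    eigengene = np.linalg.svd(sub, full_matrices=False)[0][:, 0]
    r, p = pearsonr(eigengene, bp)
    print(f"module {m}: r = {r:+.2f}, p = {p:.2f}")
```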
Optical tracking systems typically trade off astrometric precision against field of view. In this work, we showcase a networked approach to optical tracking, using very wide field-of-view imagers with relatively low astrometric precision, applied to the scheduled OSIRIS-REx slingshot manoeuvre around Earth on 22 Sep 2017. As part of a trajectory designed to take OSIRIS-REx to NEO 101955 Bennu, this flyby event was viewed from 13 remote sensors spread across Australia and New Zealand to enable triangulation of the observations. Each observatory in this network was constructed to be as lightweight and portable as possible, with hardware based on the successful design of the Desert Fireball Network. Over a 4-h collection window, we gathered 15 439 images of the night sky in the predicted direction of the OSIRIS-REx spacecraft. Using a specially developed streak detection and orbit determination data pipeline, we detected 2 090 line-of-sight observations. Our fitted orbit was determined to be within about 10 km of orbital telemetry along the observed 109 262 km length of the OSIRIS-REx trajectory, demonstrating the capability of a networked approach to Space Surveillance and Tracking.
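The triangulation underlying such a network reduces to finding the point closest, in a least-squares sense, to all simultaneous sight lines. A minimal sketch follows; the station positions and target are invented for illustration:

```python
# Sketch: least-squares triangulation from several observatories' lines of
# sight. Each sight line is a ray x(t) = station + t*direction; the point
# minimizing the summed squared distance to all rays solves a 3x3 linear
# system. Station and target coordinates below are invented.
import numpy as np

def triangulate(stations, directions):
    """stations: (N, 3) positions; directions: (N, 3) line-of-sight vectors."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for s, d in zip(stations, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector onto plane normal to ray
        A += P
        b += P @ s
    return np.linalg.solve(A, b)

stations = np.array([[0.0, 0.0, 0.0], [500.0, 0.0, 0.0], [0.0, 800.0, 0.0]])
target = np.array([200.0, 300.0, 70000.0])
print(triangulate(stations, target - stations))  # ~ [200, 300, 70000]
```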
We present 0.″2–0.″4 resolution ALMA images of the submillimeter dust continuum and the CO, H2O, and H2O+ line emission in a z = 3.63 strongly lensed dusty starburst. We construct the lens model for the system with an MCMC technique. While the average magnification for the dust continuum is about 11, the magnification of the line emission varies from 5 to 22 across the source, resolving the source down to sub-kpc scales. The ISM content reveals that the system is a pre-coalescence major merger of two ultra-luminous infrared galaxies, both with large molecular gas reservoirs. The approaching galaxy in the south shows no apparent kinematic structure, with a half-light radius of 0.4 kpc, while the receding one resembles a 1.2 kpc rotating disk; the two are separated by a projected distance of 1.3 kpc. The distribution of dust and gas emission suggests a large amount of cold ISM concentrated in the interacting region.
We hypothesized that a computerized clinical decision support tool for Clostridium difficile testing would reduce unnecessary inpatient tests, resulting in fewer laboratory-identified events. Census-adjusted interrupted time-series analyses demonstrated significant reductions following this intervention: 41% fewer tests and 31% fewer hospital-onset C. difficile infection laboratory-identified events.
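A census-adjusted interrupted time-series analysis of this kind can be sketched as segmented Poisson regression with the log patient-census as an offset; the monthly counts below are simulated, not the study's data:

```python
# Sketch: census-adjusted interrupted time series as a segmented Poisson
# regression with log(patient-days) offset. All data below are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
months = np.arange(24)
post = (months >= 12).astype(float)          # intervention at month 12
census = rng.integers(9000, 11000, size=24)  # patient-days per month
tests = rng.poisson(0.004 * np.exp(-0.5 * post) * census)  # built-in drop

# Columns: intercept, baseline trend, level change, slope change.
X = sm.add_constant(np.column_stack([months, post, (months - 12) * post]))
fit = sm.GLM(tests, X, family=sm.families.Poisson(),
             offset=np.log(census)).fit()
print(f"level change at intervention: {np.exp(fit.params[2]) - 1:+.0%}")
```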
Introduction: Point of care ultrasound (PoCUS) has become an established tool in the initial management of patients with undifferentiated hypotension in the emergency department (ED). Current established protocols (e.g. RUSH and ACES) were developed by expert user opinion rather than objective, prospective data. Recently the SHoC Protocol was published, recommending 3 core scans: cardiac, lung, and IVC, plus other scans when clinically indicated. We report the abnormal ultrasound findings from our international multicenter randomized controlled trial to assess whether the recommended 3 core SHoC protocol scans were chosen appropriately for this population. Methods: Recruitment occurred at seven centres in North America (4) and South Africa (3). Screening at triage identified patients (SBP<100 or shock index >1), who were randomized to PoCUS or control (standard care with no PoCUS) groups. All scans were performed by PoCUS-trained physicians within one hour of arrival in the ED. Demographics, clinical details and study findings were collected prospectively. A threshold incidence for positive findings of 10% was established as significant for the purposes of assessing the appropriateness of the core recommendations. Results: 138 patients had a PoCUS screen completed. All patients had cardiac, lung, IVC, aorta, abdominal, and pelvic scans. Reported abnormal findings included hyperdynamic LV function (59; 43%); small collapsing IVC (46; 33%); pericardial effusion (24; 17%); pleural fluid (19; 14%); hypodynamic LV function (15; 11%); large poorly collapsing IVC (13; 9%); peritoneal fluid (13; 9%); and aortic aneurysm (5; 4%). Conclusion: The 3 core SHoC Protocol recommendations included appropriate scans to detect all pathologies recorded at a rate greater than 10%. The 3 most frequent findings were cardiac and IVC abnormalities, followed by lung. Peritoneal fluid was seen at a rate of 9%, and aortic aneurysms were rare. These data, from the first RCT to compare PoCUS to standard care for undifferentiated hypotensive ED patients, support the use of the prioritized SHoC protocol, though a larger study is required to confirm these findings.
Introduction: Point of care ultrasound (PoCUS) is an established tool in the initial management of patients with undifferentiated hypotension in the emergency department (ED). While PoCUS protocols have been shown to improve early diagnostic accuracy, there is little published evidence for any mortality benefit. We report the findings from our international multicenter randomized controlled trial, assessing the impact of a PoCUS protocol on survival and key clinical outcomes. Methods: Recruitment occurred at 7 centres in North America (4) and South Africa (3). Scans were performed by PoCUS-trained physicians. Screening at triage identified patients (SBP<100 or shock index >1), who were randomized to PoCUS or control (standard care with no PoCUS) groups. Demographics, clinical details and study findings were collected prospectively. Initial and secondary diagnoses were recorded at 0 and 60 minutes, with ultrasound performed in the PoCUS group prior to secondary assessment. The primary outcome measure was 30-day/discharge mortality. Secondary outcome measures included diagnostic accuracy, changes in vital signs, acid-base status, and length of stay. Categorical data were analyzed using Fisher's exact test, and continuous data by Student's t-test and multilevel log-regression testing (GraphPad/SPSS). Final chart review was blinded to initial impressions and PoCUS findings. Results: 258 patients were enrolled with follow-up fully completed. Baseline comparisons confirmed effective randomization. There was no difference between groups for the primary outcome of mortality: PoCUS 32/129 (24.8%; 95% CI 14.3-35.3%) vs. control 32/129 (24.8%; 95% CI 14.3-35.3%); RR 1.00 (95% CI 0.869 to 1.15; p=1.00). There were no differences in the secondary outcomes of ICU and total length of stay. Our sample size has a power of 0.80 (α:0.05) for a moderate effect size. Other secondary outcomes are reported separately. Conclusion: This is the first RCT to compare PoCUS to standard care for undifferentiated hypotensive ED patients. We did not find any mortality or length of stay benefit with the use of a PoCUS protocol, though a larger study is required to confirm these findings. While PoCUS may have diagnostic benefits, these may not translate into a survival benefit.
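The primary-outcome comparison is a 2x2 analysis; a short sketch reproducing the reported risk ratio and Fisher's exact p-value from the counts above:

```python
# Sketch: 30-day mortality, PoCUS 32/129 vs control 32/129 (counts from
# the abstract), analyzed with Fisher's exact test and a risk ratio.
from scipy.stats import fisher_exact

pocus_dead, pocus_n = 32, 129
ctrl_dead, ctrl_n = 32, 129
table = [[pocus_dead, pocus_n - pocus_dead],
         [ctrl_dead, ctrl_n - ctrl_dead]]
_, p = fisher_exact(table)
rr = (pocus_dead / pocus_n) / (ctrl_dead / ctrl_n)
print(f"RR {rr:.2f}, Fisher's exact p = {p:.2f}")  # RR 1.00, p = 1.00
```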
Introduction: Point of Care Ultrasound (PoCUS) protocols are commonly used to guide resuscitation for emergency department (ED) patients with undifferentiated non-traumatic hypotension. While PoCUS has been shown to improve early diagnosis, there is minimal evidence for any outcome benefit. We completed an international multicenter randomized controlled trial (RCT) to assess the impact of a PoCUS protocol on key resuscitation markers in this group. We report diagnostic impact and mortality elsewhere. Methods: The SHoC-ED1 study compared the addition of PoCUS to standard care within the first hour in the treatment of adult patients presenting with undifferentiated hypotension (SBP<100 mmHg or a shock index >1.0) with a control group that did not receive PoCUS. Scans were performed by PoCUS-trained physicians. Four North American and 3 South African sites participated in the study. Resuscitation outcomes analyzed included volume of fluid administered in the ED and changes in shock index (SI), modified early warning score (MEWS), venous acid-base balance, and lactate at one and four hours. Comparisons utilized a t-test as well as stratified binomial log-regression to assess for any significant improvement in resuscitation among the outcomes. Our sample size was powered at 0.80 (α:0.05) for a moderate effect size. Results: 258 patients were enrolled with follow-up fully completed. Baseline comparisons confirmed effective randomization. There was no significant difference in mean total volume of fluid received between the control (1658 ml; 95% CI 1365-1950) and PoCUS groups (1609 ml; 95% CI 1385-1832; p=0.79). Significant improvements were seen in SI, MEWS, lactate and bicarbonate with resuscitation in both the PoCUS and control groups; however, there was no difference between groups. Conclusion: SHoC-ED1 is the first RCT to compare PoCUS to standard of care in hypotensive ED patients. No significant difference in fluid used or markers of resuscitation was found when comparing the use of a PoCUS protocol to standard of care in the resuscitation of patients with undifferentiated hypotension.
Introduction: Point of care ultrasonography (PoCUS) is an established tool in the initial management of hypotensive patients in the emergency department (ED). It has been shown to rule out certain shock etiologies and improve diagnostic certainty; however, evidence of benefit in the management of hypotensive patients is limited. We report the findings from our international multicenter RCT assessing the impact of a PoCUS protocol on diagnostic accuracy, as well as other key outcomes, including mortality, which are reported elsewhere. Methods: Recruitment occurred at 4 North American and 3 Southern African sites. Screening at triage identified patients (SBP<100 mmHg or shock index >1), who were randomized to either PoCUS or control groups. Scans were performed by PoCUS-trained physicians. Demographics, clinical details and findings were collected prospectively. Initial and secondary diagnoses were recorded at 0 and 60 minutes, with ultrasound performed in the PoCUS group prior to secondary assessment. Final chart review was blinded to initial impressions and PoCUS findings. Categorical data were analyzed using Fisher's two-tailed exact test. Our sample size was powered at 0.80 (α:0.05) for a moderate effect size. Results: 258 patients were enrolled with follow-up fully completed. Baseline comparisons confirmed effective randomization. The perceived shock category changed more frequently in the PoCUS group, 20/127 (15.7%), vs. control, 7/125 (5.6%); RR 2.81 (95% CI 1.23 to 6.42; p=0.0134). There was no significant difference in change of diagnostic impression between groups: PoCUS 39/123 (31.7%) vs control 34/124 (27.4%); RR 1.16 (95% CI 0.786 to 1.70; p=0.4879). There was no significant difference in the rate of correct category of shock between PoCUS (118/127; 93%) and control (113/122; 93%); RR 1.00 (95% CI 0.936 to 1.08; p=1.00), or for correct diagnosis: PoCUS 90/127 (70%) vs control 86/122 (70%); RR 0.987 (95% CI 0.671 to 1.45; p=1.00). Conclusion: This is the first RCT to compare PoCUS to standard care for undifferentiated hypotensive ED patients. We found that the use of PoCUS did change physicians' perceived shock category, but did not improve diagnostic accuracy for category of shock or diagnosis.
An understanding of environmental factors governing patchy weed distribution in fields could prove to be a valuable tool in weed management. The objectives of this research were to investigate the relationships between weed distribution patterns and environmental properties in two Mississippi soybean fields and to construct models based on those relationships to predict weed distribution. Two months before planting, fields were soil sampled on a 60- by 60-m coordinate grid, and samples were analyzed for calcium, magnesium, potassium, sodium, phosphorus, zinc, cation exchange capacity, percent organic matter, and soil pH. The relative elevation of each sample location was also recorded. Approximately 8 wk after planting, weed populations were estimated on a 30- by 30-m grid overlaid on the soil sample grid. Punctual kriging was used to estimate environmental values at each weed sample location. Discriminant analysis techniques were used to evaluate the associations between environmental characteristics and weed population densities of sample areas within each field. Generally, as sicklepod and pitted morningglory infestations increased, the prediction accuracy of the discriminant functions also increased; however, horsenettle infestations were not closely correlated with the environmental properties. Discriminant functions reasonably predicted the presence or absence of sicklepod and pitted morningglory within the field. However, validation of the functions across years within the same field and with data collected from the other field resulted in poor classification for all species. Prediction of weed infestations from environmental properties was thus specific to each field, year, and species.
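The two-step workflow (interpolate soil properties from the coarse grid to each weed sample point, then classify presence or absence) can be sketched as follows; Gaussian-process regression stands in for punctual kriging, and all data are simulated:

```python
# Sketch of the two-step workflow on simulated data: interpolate a soil
# property from a 60-m grid to 30-m weed-sample locations (GP regression
# as a stand-in for punctual kriging), then discriminate weed presence.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
soil_xy = np.mgrid[0:300:60, 0:300:60].reshape(2, -1).T.astype(float)
soil_ph = 5.5 + 0.004 * soil_xy[:, 0] + rng.normal(0, 0.1, len(soil_xy))

# Interpolate pH onto the denser weed-sampling grid (alpha ~ nugget term).
weed_xy = np.mgrid[0:300:30, 0:300:30].reshape(2, -1).T.astype(float)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=60.0), alpha=0.01)
ph_at_weeds = gp.fit(soil_xy, soil_ph).predict(weed_xy)

# Simulated presence/absence that depends on pH, then LDA classification.
present = (ph_at_weeds + rng.normal(0, 0.2, len(weed_xy)) > 6.1).astype(int)
lda = LinearDiscriminantAnalysis().fit(ph_at_weeds[:, None], present)
print(f"training accuracy: {lda.score(ph_at_weeds[:, None], present):.2f}")
```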
At the Symposium in April last year the Society met to consider how the design and operation of quiet aircraft relate to the often conflicting economic and environmental pressures. The prospects for quieter aircraft and engines were discussed from many technical points of view. The message emerging from the discussion was that, compared with the improvements made in the past 10 years, which would be available now at the cost of replacing existing noisy, older-generation installations, further reductions in noise at source would be very much harder to win. Moreover, they would be made against a steeply rising trend of overall detriment to performance and direct operating cost.
As far as the possibilities for further reductions in the noise of subsonic propulsion engines are concerned, if we are to have more than modest, evolutionary improvements in levels, there will need to be radical changes affecting engine design, such as a definite move towards lower jet velocities.
Civilian suicide rates vary by occupation in ways related to occupational stress exposure. Comparable military research finds suicide rates elevated in combat arms occupations. However, no research has evaluated variation in this pattern by deployment history, the indicator of occupation stress widely considered responsible for the recent rise in the military suicide rate.
Method
The joint associations of Army occupation and deployment history in predicting suicides were analysed in an administrative dataset for the 729 337 male enlisted Regular Army soldiers in the US Army between 2004 and 2009.
Results
There were 496 suicides over the study period (22.4/100 000 person-years). Only two occupational categories, both in combat arms, had significantly elevated suicide rates: infantrymen (37.2/100 000 person-years) and combat engineers (38.2/100 000 person-years). However, the suicide rates in these two categories were significantly lower when currently deployed (30.6/100 000 person-years) than never deployed or previously deployed (41.2–39.1/100 000 person-years), whereas the suicide rate of other soldiers was significantly higher when currently deployed and previously deployed (20.2–22.4/100 000 person-years) than never deployed (14.5/100 000 person-years), resulting in the adjusted suicide rate of infantrymen and combat engineers being most elevated when never deployed [odds ratio (OR) 2.9, 95% confidence interval (CI) 2.1–4.1], less so when previously deployed (OR 1.6, 95% CI 1.1–2.1), and not at all when currently deployed (OR 1.2, 95% CI 0.8–1.8). Adjustment for a differential ‘healthy warrior effect’ cannot explain this variation in the relative suicide rates of never-deployed infantrymen and combat engineers by deployment status.
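For readers unfamiliar with the units, the rate arithmetic can be sketched as follows; the person-year denominator is back-calculated from 496 suicides at 22.4 per 100,000 and is therefore an assumption:

```python
# Sketch: suicides per 100,000 person-years with a Poisson-approximate
# 95% CI. The denominator is back-calculated from 496 suicides at
# 22.4/100,000 -- an assumption, not a figure stated in the abstract.
import math

suicides = 496
person_years = 2_214_286            # ~ 496 / (22.4 / 100_000)
rate = suicides / person_years * 100_000
se_log = 1 / math.sqrt(suicides)    # SE of log(rate) for a Poisson count
lo, hi = (rate * math.exp(s * 1.96 * se_log) for s in (-1, 1))
print(f"{rate:.1f} per 100,000 person-years (95% CI {lo:.1f}-{hi:.1f})")
```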
Conclusions
Efforts are needed to elucidate the causal mechanisms underlying this interaction to guide preventive interventions for soldiers at high suicide risk.
We describe the efficacy of enhanced infection control measures, including those recommended in the Centers for Disease Control and Prevention’s 2012 carbapenem-resistant Enterobacteriaceae (CRE) toolkit, to control concurrent outbreaks of carbapenemase-producing Enterobacteriaceae (CPE) and extensively drug-resistant Acinetobacter baumannii (XDR-AB).
Design
Before-after intervention study.
Setting
Fifteen-bed surgical trauma intensive care unit (ICU).
Methods
We investigated the impact of enhanced infection control measures in response to clusters of CPE and XDR-AB infections in an ICU from April 2009 to March 2010. Polymerase chain reaction was used to detect the presence of blaKPC and resistance plasmids in CRE. Pulsed-field gel electrophoresis was performed to assess XDR-AB clonality. Enhanced infection-control measures were implemented in response to ongoing transmission of CPE and a new outbreak of XDR-AB. Efficacy was evaluated by comparing the incidence rate (IR) of CPE and XDR-AB before and after the implementation of these measures.
Results
The IR of CPE for the 12 months before the implementation of enhanced measures was 7.77 cases per 1,000 patient-days, whereas the IR of XDR-AB for the 3 months before implementation was 6.79 cases per 1,000 patient-days. All examined CPE shared endemic blaKPC resistance plasmids, and 6 of the 7 XDR-AB isolates were clonal. Following institution of enhanced infection control measures, the CPE IR decreased to 1.22 cases per 1,000 patient-days (P = .001), and no more cases of XDR-AB were identified.
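The before/after rate comparison can be checked with an exact conditional test: given the total case count, the pre-period count is binomial with success probability proportional to pre-period patient-days. The counts and denominators below are hypothetical, chosen only to match the reported rates:

```python
# Sketch: exact conditional comparison of two incidence rates. All counts
# and patient-day denominators are hypothetical, chosen to match the
# reported 7.77 and 1.22 cases per 1,000 patient-days.
from scipy.stats import binomtest

pre_cases, pre_days = 35, 4505    # 35/4505 ~ 7.77 per 1,000 patient-days
post_cases, post_days = 5, 4098   # 5/4098  ~ 1.22 per 1,000 patient-days
p0 = pre_days / (pre_days + post_days)
p = binomtest(pre_cases, pre_cases + post_cases, p0,
              alternative="greater").pvalue
ratio = (pre_cases / pre_days) / (post_cases / post_days)
print(f"rate ratio = {ratio:.1f}, exact one-sided p = {p:.4f}")
```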
Conclusions
Use of infection control measures described in the Centers for Disease Control and Prevention’s 2012 CRE toolkit was associated with a reduction in the IR of CPE and an interruption in XDR-AB transmission.
The case-control approach is a powerful method for investigating factors that may explain a particular event. It is extensively used in epidemiology to study disease incidence, one of the best-known examples being Bradford Hill and Doll's investigation of the possible connection between cigarette smoking and lung cancer. More recently, case-control studies have been increasingly used in other fields, including sociology and econometrics. With a particular focus on statistical analysis, this book is ideal for applied and theoretical statisticians wanting an up-to-date introduction to the field. It covers the fundamentals of case-control study design and analysis as well as more recent developments, including two-stage studies, case-only studies and methods for case-control sampling in time. The latter have important applications in large prospective cohorts which require case-control sampling designs to make efficient use of resources. More theoretical background is provided in an appendix for those new to the field.
This book is about the planning and analysis of a special kind of investigation: a case-control study. We use this term to cover a number of different designs. In the simplest form, individuals with an outcome of interest, possibly rare, are observed and information about their past experience is obtained. In addition, corresponding data are obtained on suitable controls, in the hope of explaining what influences the outcome. In this book we are largely concerned with binary outcomes, for example those indicating disease diagnosis or death. Such studies are reasonably called retrospective, as contrasted with prospective studies, in which one records explanatory features and then waits to see what outcome arises. In retrospective studies we study the causes of effects; in prospective studies we study the effects of causes. We also discuss some extensions of case-control studies to incorporate temporality, which may be more appropriately viewed as a form of prospective study. The key aspect of all these designs is that they involve a sample of the underlying population that motivates the study, in which individuals with certain outcomes are strongly over-represented.
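The basic quantity estimated from such a design is the odds ratio, which, unlike absolute risk, is unaffected by sampling on the outcome; a small sketch with invented counts:

```python
# Sketch: odds ratio with a Woolf (log-scale) 95% CI from a case-control
# 2x2 table. The odds ratio is invariant to outcome-based sampling, which
# is what makes the retrospective design informative. Counts are invented.
import math

exposed_cases, unexposed_cases = 80, 20        # sampled because of outcome
exposed_controls, unexposed_controls = 50, 50  # sampled from the population
odds_ratio = (exposed_cases * unexposed_controls) / \
             (unexposed_cases * exposed_controls)
se_log = math.sqrt(sum(1 / x for x in (exposed_cases, unexposed_cases,
                                       exposed_controls, unexposed_controls)))
lo, hi = (odds_ratio * math.exp(s * 1.96 * se_log) for s in (-1, 1))
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```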
While we shall concentrate on the many special issues raised by such studies, we begin with a brief survey of the general themes of statistical design and analysis. We use a terminology deriving in part from epidemiological applications although the ideas are of much broader relevance.
We start the general discussion by considering a population of study individuals, patients, say, assumed to be statistically independent.