Although hypofractionated radiotherapy has been standardised in early breast cancer, even in the post-mastectomy setting, no such consensus has been reached for locally advanced breast cancer (LABC), probably owing to complex planning and field matching. This study provides a dosimetric evaluation and compares toxicity, response and disease-free survival (DFS) between hypofractionated and conventional radiotherapy in post-mastectomy LABC.
In total, 222 female breast cancer patients were randomly assigned, after modified radical mastectomy (MRM) along with neoadjuvant and/or adjuvant chemotherapy, to either hypofractionated radiotherapy (n = 120) delivering 40 Gy in 15 fractions over 3 weeks or conventional radiotherapy (n = 102) delivering 50 Gy in 25 fractions over 5 weeks. All patients were planned with treatment planning software and assessed regularly during and after treatment.
Median follow-up was 178 weeks in the conventional arm (CRA) and 182 weeks in the hypofractionation arm (HFA). A dosimetric difference existed between the two arms: despite similar dose coverage [planning target volume (PTV) D90 92·04% in CRA versus 92·5% in HFA; p = 0·49], the average dose in HFA was lower than in CRA (p < 0·001), as was the maximum clinical target volume (CTV) dose (p < 0·001). Similarly, the average lung dose in HFA was lower than in CRA (9·9 versus 10·84 Gy; p = 0·06), whereas lung V20Gy and heart V30Gy did not differ. Radiation toxicity was comparable, with similar mean time to onset of toxicity [CRA: 7 weeks, HFA: 10 weeks; hazard ratio 0·64, 95% confidence interval (CI) = 0·28–1·45]. Three-year recurrence rates were alike in the two arms (CRA: 4·9%, HFA: 5·8%; p = 0·76). Mean DFS was 230 weeks in CRA and 235 weeks in HFA, with a hazard ratio of 1·01 (95% CI = 0·32–3·19; p = 0·987).
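For readers unfamiliar with the dose–volume notation used here: Dx is the dose received by at least x% of a structure's volume, and VxGy is the fraction of the volume receiving at least x Gy. A minimal sketch of these computations on hypothetical per-voxel doses (illustrative values only, not study data):

```python
import numpy as np

def d_x(doses, x):
    """D_x: dose (Gy) received by at least x% of the structure volume,
    i.e. the (100 - x)th percentile of the per-voxel dose distribution."""
    return np.percentile(doses, 100 - x)

def v_gy(doses, threshold_gy):
    """V_xGy: fraction of the structure volume receiving >= threshold_gy."""
    return float(np.mean(doses >= threshold_gy))

# Hypothetical per-voxel lung doses (Gy); equal voxel volumes assumed.
rng = np.random.default_rng(0)
lung_dose = rng.gamma(shape=2.0, scale=5.0, size=100_000)

print(f"D90   = {d_x(lung_dose, 90):.2f} Gy")
print(f"V20Gy = {100 * v_gy(lung_dose, 20):.1f}% of lung volume")
```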
Although the biologically effective dose (BED) of hypofractionation is lower than that of conventional fractionation, toxicity, locoregional recurrence, distant failure rates and DFS were indistinguishable between the two modalities.
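For context, the BED comparison follows from the linear-quadratic model. Assuming α/β = 4 Gy for breast tissue (a commonly quoted value; the abstract does not state the one used):

$$\mathrm{BED} = nd\left(1 + \frac{d}{\alpha/\beta}\right)$$

$$\mathrm{BED}_{\text{conv}} = 25 \times 2\,\mathrm{Gy}\left(1 + \tfrac{2}{4}\right) = 75\,\mathrm{Gy}_4, \qquad \mathrm{BED}_{\text{hypo}} = 15 \times 2.67\,\mathrm{Gy}\left(1 + \tfrac{2.67}{4}\right) \approx 66.8\,\mathrm{Gy}_4$$

so the hypofractionated schedule delivers a lower BED, consistent with the conclusion above.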
To assess the safety of, and subsequent allergy documentation associated with, an antimicrobial stewardship intervention consisting of test-dose challenge procedures prompted by an electronic guideline for hospitalized patients with reported β-lactam allergies.
Retrospective cohort study.
Large healthcare system consisting of 2 academic and 3 community acute-care hospitals between April 2016 and December 2017.
We evaluated β-lactam antibiotic test-dose outcomes, including adverse drug reactions (ADRs), hypersensitivity reactions (HSRs), and electronic health record (EHR) allergy record updates. HSR predictors were examined using a multivariable logistic regression model. Modification of the EHR allergy record after test doses was assessed in terms of relevant allergy entries added, deleted, and/or specified.
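As an illustrative sketch of this kind of multivariable model (not the study's code; predictor names and data are hypothetical), odds ratios and 95% CIs can be obtained by exponentiating logistic regression coefficients, e.g. with statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis dataset: one row per test-dose patient.
# With random data the fitted ORs will hover near 1; this shows mechanics only.
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "hsr": rng.integers(0, 2, n),                  # outcome: confirmed HSR
    "cephalosporin_history": rng.integers(0, 2, n),
    "age": rng.normal(60, 15, n),
    "female": rng.integers(0, 2, n),
})

X = sm.add_constant(df[["cephalosporin_history", "age", "female"]])
fit = sm.Logit(df["hsr"], X).fit(disp=0)

ors = np.exp(fit.params)     # exponentiated coefficients = odds ratios
ci = np.exp(fit.conf_int())  # exponentiated Wald confidence intervals
print(pd.concat([ors.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```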
We identified 1,046 test doses: 809 (77%) to cephalosporins, 148 (14%) to penicillins, and 89 (9%) to carbapenems. Overall, 78 patients (7.5%; 95% confidence interval [CI], 5.9%–9.2%) had signs or symptoms of an ADR, and 40 (3.8%; 95% CI, 2.8%–5.2%) had confirmed HSRs. Most HSRs occurred at the second (ie, full-dose) step (68%) and required no treatment beyond drug discontinuation (58%); 3 HSR patients were treated with intramuscular epinephrine. Reported cephalosporin allergy history was associated with increased odds of HSR (odds ratio [OR], 2.96; 95% CI, 1.34–6.58). Allergies were updated for 474 patients (45%), with records specified (82%), deleted (16%), and added (8%).
This antimicrobial stewardship intervention using β-lactam test-dose procedures was safe. Overall, 3.8% of patients with β-lactam allergy histories had an HSR; cephalosporin allergy histories conferred a 3-fold increased risk. Encouraging EHR documentation might improve this safe, effective, and practical acute-care antibiotic stewardship tool.
OBJECTIVES/SPECIFIC AIMS: Our goals were to understand the pattern, location, and extent of cardiac replacement fibrosis seen as late gadolinium enhancement (LGE) on cardiovascular magnetic resonance imaging (CMR) in a large cohort of cancer patients treated with anthracyclines and/or trastuzumab. METHODS/STUDY POPULATION: We performed a retrospective cohort study of consecutive adult cancer patients treated with anthracyclines and/or trastuzumab from 2004 through 2017. CMRs were analyzed for the presence, location, and pattern of LGE. RESULTS/ANTICIPATED RESULTS: Of 238 patients, 220 (92.4%) had no LGE. Among the 18 (7.6%) patients with LGE, 13 (72.2%) were ischemic in pattern (myocardial infarctions); 10 of these had known coronary artery disease (CAD). Of the 5 (27.8%) patients with non-ischemic LGE, the etiologies were known for 4: myocarditis, cardiac sarcoidosis, eosinophilic myocarditis, and acute myocardial calcification. Only 4 (1.7%) patients had unexpected LGE, of which 3 were unrecognized myocardial infarctions. DISCUSSION/SIGNIFICANCE OF IMPACT: The assessment of fibrosis helps to diagnose the cause of left ventricular systolic dysfunction (LVSD) in cancer patients treated with potentially cardiotoxic medications. This is necessary because, currently, the cause of LVSD in cancer patients cannot be established conclusively even though the cause is closely linked to patient outcomes. Our results demonstrate that cancer treatment-related LVSD is not associated with fibrosis. A minority of cancer patients with LVSD have fibrosis related to other causes, most commonly CAD. Identification of the correct cause of LVSD in cancer patients treated with cardiotoxic medications allows for appropriate treatment. This, in turn, could improve patient outcomes.
Previous cross-sectional studies have reported obesity rates in children with CHD and in the general paediatric population. We reviewed longitudinal data to identify factors predisposing children to the development of obesity, hypothesising that age may be an important risk factor for body mass index growth.
Retrospective electronic health records were reviewed for all CHD patients aged 5–20 years seen between 2011 and 2015, and for age-, sex-, and race/ethnicity-matched controls. Subjects were stratified into age cohorts of 5–10, 11–15, and 15–20 years. Annualised change in body mass index percentile (BMI%) over this period was compared using the paired Student's t-test, as sketched below. Linear regression analysis was performed within the CHD population.
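A minimal sketch of the paired comparison described above, with simulated values standing in for the matched CHD/control pairs (effect sizes loosely mirror the reported means; these are not study data):

```python
import numpy as np
from scipy import stats

# Annualised BMI-percentile change (%/year) for 50 hypothetical matched pairs.
rng = np.random.default_rng(2)
chd_change = rng.normal(4.1, 8.0, size=50)
control_change = rng.normal(1.7, 8.0, size=50)

# Paired Student's t-test: each CHD subject is compared with their own match.
t_stat, p_value = stats.ttest_rel(chd_change, control_change)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```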
A total of 223 CHD patients and 223 matched controls met the inclusion criteria for analysis. Prevalence of combined overweight/obesity did not differ significantly between the CHD cohort (24.6–25.8%) and matched controls (23.3–29.1%). Univariate analysis demonstrated a significant difference in BMI% change in the 5–10 years age cohort (CHD +4.1%/year, control +1.7%/year, p=0.04), in males (CHD +1.8%/year, control −0.3%/year, p=0.01), and in patients status-post surgery (CHD 2.03%/year versus control 0.37%/year, p=0.02). Linear regression analysis within the CHD subgroup demonstrated that age 5–10 years (+4.80%/year, p<0.001) and status-post surgery (+3.11%/year, p=0.013) were associated with increased BMI% growth.
Prevalence rates of overweight/obesity did not differ between children with CHD and the general paediatric population over a 5-year period. Longitudinal data suggest that CHD patients aged 5–10 years and those status-post surgery may be at increased risk of BMI% growth relative to peers with structurally normal hearts.
To validate a system to detect ventilator-associated events (VAEs) autonomously and in real time.
Retrospective review of ventilated patients using a secure informatics platform to identify VAEs (ie, automated surveillance) compared to surveillance by infection control (IC) staff (ie, manual surveillance), including development and validation cohorts.
The Massachusetts General Hospital, a tertiary-care academic health center, during January–March 2015 (development cohort) and January–March 2016 (validation cohort).
Ventilated patients in 4 intensive care units.
The automated process included (1) analysis of physiologic data to detect increases in positive end-expiratory pressure (PEEP) and fraction of inspired oxygen (FiO2); (2) querying the electronic health record (EHR) for leukopenia or leukocytosis and antibiotic initiation data; and (3) retrieval and interpretation of microbiology reports. The cohorts were evaluated as follows: (1) manual surveillance by IC staff with independent chart review; (2) automated surveillance detection of ventilator-associated condition (VAC), infection-related ventilator-associated complication (IVAC), and possible VAP (PVAP); (3) senior IC staff adjudicated manual surveillance–automated surveillance discordance. Outcomes included sensitivity, specificity, positive predictive value (PPV), and manual surveillance detection errors. Errors detected during the development cohort resulted in algorithm updates applied to the validation cohort.
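As a simplified sketch of step (1): the NHSN ventilator-associated condition trigger requires, after at least 2 days of stable or decreasing daily minimum settings, a rise in daily minimum PEEP of ≥3 cm H2O or in FiO2 of ≥0.20 sustained for ≥2 days. The thresholds below follow those published criteria; the study's exact implementation is not given in the abstract:

```python
def detect_vac(daily_min_peep, daily_min_fio2):
    """Indices of calendar days meeting a simplified VAC trigger.

    Inputs are parallel lists of daily minimum PEEP (cm H2O) and FiO2
    (fraction of 1.0), one entry per calendar day of ventilation.
    """
    events = []
    for d in range(2, len(daily_min_peep) - 1):
        # >= 2 days of stability: day d-1 no higher than day d-2.
        stable_peep = daily_min_peep[d - 1] <= daily_min_peep[d - 2]
        stable_fio2 = daily_min_fio2[d - 1] <= daily_min_fio2[d - 2]
        base_peep, base_fio2 = daily_min_peep[d - 1], daily_min_fio2[d - 1]
        # Rise sustained on day d and day d+1.
        peep_rise = all(daily_min_peep[d + i] - base_peep >= 3 for i in (0, 1))
        fio2_rise = all(daily_min_fio2[d + i] - base_fio2 >= 0.20 for i in (0, 1))
        if (stable_peep and peep_rise) or (stable_fio2 and fio2_rise):
            events.append(d)
    return events

# Stable at PEEP 5 for two days, then a sustained jump to 8 -> VAC on day 2.
print(detect_vac([5, 5, 8, 8, 8], [0.40, 0.40, 0.40, 0.40, 0.40]))  # [2]
```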
In the development cohort, there were 1,325 admissions, 479 ventilated patients, 2,539 ventilator days, and 47 VAEs. In the validation cohort, there were 1,234 admissions, 431 ventilated patients, 2,604 ventilator days, and 56 VAEs. With manual surveillance, in the development cohort, sensitivity was 40%, specificity was 98%, and PPV was 70%. In the validation cohort, sensitivity was 71%, specificity was 98%, and PPV was 87%. With automated surveillance, in the development cohort, sensitivity was 100%, specificity was 100%, and PPV was 100%. In the validation cohort, sensitivity was 85%, specificity was 99%, and PPV was 100%. Manual surveillance detection errors included missed detections, misclassifications, and false detections.
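For reference, the sensitivity, specificity, and PPV figures come from event-level agreement between each surveillance method and the adjudicated reference standard. With hypothetical 2×2 counts (not the study's):

```python
def surveillance_metrics(tp, fp, fn, tn):
    """Standard performance measures from a 2x2 agreement table."""
    sensitivity = tp / (tp + fn)  # fraction of true VAEs detected
    specificity = tn / (tn + fp)  # fraction of non-VAEs correctly cleared
    ppv = tp / (tp + fp)          # fraction of detections that are true VAEs
    return sensitivity, specificity, ppv

# Illustrative counts only.
sens, spec, ppv = surveillance_metrics(tp=48, fp=0, fn=8, tn=375)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}, PPV {ppv:.0%}")
```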
Manual surveillance is vulnerable to human error. Automated surveillance is more accurate and more efficient for VAE surveillance.
Children with CHD and acquired heart disease have unique, high-risk physiology. They may have a higher risk of adverse tracheal-intubation-associated events, as compared with children with non-cardiac disease.
Materials and methods
We sought to evaluate the occurrence of adverse tracheal-intubation-associated events in children with cardiac disease compared to children with non-cardiac disease. A retrospective analysis of tracheal intubations from 38 international paediatric ICUs was performed using the National Emergency Airway Registry for Children (NEAR4KIDS) quality improvement registry. The primary outcome was the occurrence of any tracheal-intubation-associated event. Secondary outcomes included the occurrence of severe tracheal-intubation-associated events, multiple intubation attempts, and oxygen desaturation.
A total of 8,851 intubations were reported between July 2012 and March 2016. Cardiac patients were younger, more likely to have haemodynamic instability, and less likely to have respiratory failure as an indication for intubation. The overall frequency of tracheal-intubation-associated events did not differ (cardiac: 17% versus non-cardiac: 16%, p=0.13), nor did the rate of severe tracheal-intubation-associated events (cardiac: 7% versus non-cardiac: 6%, p=0.11). Tracheal-intubation-associated cardiac arrest occurred more often in cardiac patients (2.80% versus 1.28%; p<0.001), even after adjusting for patient and provider differences (adjusted odds ratio 1.79; p=0.03). Multiple intubation attempts occurred less often in cardiac patients (p=0.04), and oxygen desaturations occurred more often, even after excluding patients with cyanotic heart disease.
The overall incidence of adverse tracheal-intubation-associated events in cardiac patients was not different from that in non-cardiac patients. However, the presence of a cardiac diagnosis was associated with a higher occurrence of both tracheal-intubation-associated cardiac arrest and oxygen desaturation.
An estimated 293,300 healthcare-associated cases of Clostridium difficile infection (CDI) occur annually in the United States. To date, research has focused on developing risk prediction models for CDI that work well across institutions. However, this one-size-fits-all approach ignores important hospital-specific factors. We focus on a generalizable method for building facility-specific models. We demonstrate the applicability of the approach using electronic health records (EHR) from the University of Michigan Hospitals (UM) and the Massachusetts General Hospital (MGH).
We utilized EHR data from 191,014 adult admissions to UM and 65,718 adult admissions to MGH. We extracted patient demographics, admission details, patient history, and daily hospitalization details, resulting in 4,836 features from patients at UM and 1,837 from patients at MGH. We used L2 regularized logistic regression to learn the models, and we measured the discriminative performance of the models on held-out data from each hospital.
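A minimal sketch of this modeling setup in scikit-learn, with synthetic data standing in for the extracted EHR features (all sizes and names are illustrative; random labels will give AUROC near 0.5):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical design matrix: one row per admission, binary CDI label.
rng = np.random.default_rng(3)
X = rng.normal(size=(10_000, 200))            # extracted EHR features
y = (rng.random(10_000) < 0.02).astype(int)   # rare outcome, ~2% positive

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# L2 regularization; C is the inverse regularization strength (1/lambda).
model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]
print(f"held-out AUROC: {roc_auc_score(y_test, scores):.3f}")

# Facility-specific signal: the top-weighted features differ per hospital.
top = np.argsort(np.abs(model.coef_[0]))[::-1][:10]
print("top feature indices:", top)
```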
Using the UM and MGH test data, the models achieved area under the receiver operating characteristic curve (AUROC) values of 0.82 (95% confidence interval [CI], 0.80–0.84) and 0.75 (95% CI, 0.73–0.78), respectively. Some predictive factors were shared between the 2 models, but many of the top predictive factors differed between facilities.
A data-driven approach to building models for estimating daily patient risk for CDI was used to build institution-specific models at 2 large hospitals with different patient populations and EHR systems. In contrast to traditional approaches that focus on developing models that apply across hospitals, our generalizable approach yields risk-stratification models tailored to an institution. These hospital-specific models allow for earlier and more accurate identification of high-risk patients and better targeting of infection prevention strategies.
Electronic health records (EHRs) offer significant advantages over paper charts, such as ease of portability, facilitated communication, and a decreased risk of medical errors; however, important ethical concerns related to patient confidentiality remain. Although legal protections have been implemented, in practice EHRs may still be prone to breaches that threaten patient privacy. Safeguards are essential and have been implemented, especially in sensitive areas such as mental illness, substance abuse, and sexual health. Features of one institutional model are described that illustrate efforts to ensure both adequate transparency and patient confidentiality. Trust and the therapeutic alliance are critical to the provider–patient relationship and to quality healthcare services. All of the benefits of an EHR are possible only if patients retain confidence in the security and accuracy of their medical records.
To determine the impact of methicillin-resistant Staphylococcus aureus and vancomycin-resistant Enterococcus (MRSA/VRE) designations, or flags, on selected hospital operational outcomes.
Retrospective cohort study of inpatients admitted to the Massachusetts General Hospital during 2010–2011.
Operational outcomes were time to bed arrival, acuity-unrelated within-hospital transfers, and length of stay. Covariates considered included demographic and clinical characteristics: age, gender, severity of illness on admission, admit day of week, residence prior to admission, hospitalization within the prior 30 days, clinical service, and discharge destination.
Overall, 81,288 admissions were included. After adjusting for covariates, patients with an MRSA/VRE flag at the time of admission experienced a mean delay in time to bed arrival of 1.03 hours (9.63 hours [95% CI, 9.39–9.88] vs 8.60 hours [95% CI, 8.47–8.73]). These patients had 1.19 times the odds of experiencing an acuity-unrelated within-hospital transfer (95% CI, 1.13–1.26) and a mean length of stay 1.76 days longer (7.03 days [95% CI, 6.82–7.24] vs 5.27 days [95% CI, 5.15–5.38]) than patients with no MRSA/VRE flag.
MRSA/VRE designation was associated with delays in time to bed arrival, increased likelihood of acuity-unrelated within-hospital transfers and extended length of stay. Efforts to identify patients who have cleared MRSA/VRE colonization are critically important to mitigate inefficient use of resources and to improve inpatient flow.
Human lymphatic filariasis (LF) is a major cause of disability globally. The success of global elimination programmes for LF depends upon the effectiveness of tools for diagnosis and treatment. In this study on stage-specific antigen detection in brugian filariasis, L3, adult worm (AW) and microfilarial antigenaemia were detected in around 90–95% of microfilariae carriers (MF group), 50–70% of adenolymphangitis (ADL) patients, 10–25% of chronic pathology (CP) patients and 10–15% of endemic normal (EN) controls. The sensitivity of circulating filarial antigen (CFA) detection in serum samples from the MF group was up to 95%. In sera from ADL patients, unexpectedly, less antigen reactivity was observed. In the CP group, all CFA-positive individuals were from CP grades I and II and none from grades III or IV, suggesting that with chronicity the AWs lose fecundity and start to disintegrate and die. Amongst EN subjects, 10–15% had CFA, indicating that a few of them harbour filarial AWs; thus they might not be truly immune, as has been conventionally believed. The specificity of antigen detection was 100% when tested with sera from various other protozoan and non-filarial helminthic infections.
Despite published catheter-associated urinary tract infection (CAUTI) prevention guidelines, inappropriate catheter use is common. We surveyed housestaff at a teaching hospital about their knowledge of CAUTIs and found that most are aware of prevention guidelines; however, their application of the guidelines to clinical scenarios, and their catheter practices, fall short of national goals.
We have selected cold, massive (M > 100 M⊙) cores as candidates for early phases of star formation from millimeter continuum surveys, requiring no associations at shorter wavelengths. We compared the millimeter continuum peak positions with IR and radio catalogs and excluded cores that had sources associated with their peaks. We compiled a list of 173 cores distributed over 117 regions that are candidates for very early phases of Massive Star Formation (MSF). With the Spitzer and Herschel archives now available, these cores can be characterized further. We are compiling this data set to construct complete spectral energy distributions (SEDs) in the mid- and far-infrared with good spatial resolution and broad spectral coverage. This allows us to disentangle these complex regions and model the SEDs of the deeply embedded protostars/clusters. We present a status report on our efforts: a preview of the IR properties of all cores and their embedded sources, inferred from grey-body fits to the compiled SEDs.
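As an illustration of the grey-body fitting mentioned above (a sketch, not the authors' pipeline): in the optically thin limit the flux density follows S_ν ∝ ν^β B_ν(T), which can be fit to far-IR/millimeter photometry:

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI: Planck, light speed, Boltzmann

def greybody(nu, log_amp, temp, beta):
    """Optically thin grey body: S_nu = amp * nu**beta * B_nu(temp)."""
    b_nu = 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * temp))
    return 10.0**log_amp * nu**beta * b_nu

# Hypothetical far-IR/mm photometry (wavelengths in microns -> frequencies).
wavelengths_um = np.array([70.0, 160.0, 250.0, 350.0, 500.0, 1200.0])
nu = C / (wavelengths_um * 1e-6)

# Synthetic "observations" with 10% noise, generated from known parameters.
truth = greybody(nu, log_amp=-30.0, temp=18.0, beta=1.8)
flux = truth * (1 + 0.1 * np.random.default_rng(4).normal(size=nu.size))

popt, _ = curve_fit(greybody, nu, flux, p0=(-30.0, 20.0, 2.0),
                    sigma=0.1 * flux, absolute_sigma=True)
print(f"fitted T = {popt[1]:.1f} K, beta = {popt[2]:.2f}")
```

With only a handful of photometric points, temperature and β are strongly degenerate, which is why good spatial resolution and broad spectral coverage matter for these fits.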
To maximize heterosis, it is important to understand the genetic diversity of germplasm and to associate useful phenotypic traits such as fertility restoration for hybrid rice breeding. The objectives of the present study were to characterize genetic diversity within a set of rice germplasm groups using coefficient of parentage (COP) values and 58 simple sequence repeat (SSR) markers for 124 genotypes with different attributes such as resistance/tolerance to various biotic and abiotic stresses. These lines were also used to identify prospective restorers and maintainers for a wild-abortive cytoplasmic male sterile (CMS) line. The mean COP value for all the lines was 0.11, indicating that the genotypes do not share common ancestry. The SSR analysis generated a total of 268 alleles, with an average of 4.62 alleles per locus. The mean polymorphism information content value was 0.53, indicating that the selected markers were highly polymorphic. Grouping based on COP analysis revealed three major clusters corresponding to the indica, tropical japonica and japonica lines. A similar grouping pattern, with some variation, was also observed for the SSR markers. Fertility restoration phenotypes, based on test crosses of the 124 genotypes with a CMS line, identified 23 maintainers, 58 restorers and 43 genotypes that were either partial maintainers or partial restorers. This study demonstrates that COP analysis along with molecular marker analysis may facilitate better organization of germplasm diversity and its use in hybrid rice breeding. Potential restorers identified in the study can be used for breeding high-yielding, stress-tolerant, medium-duration rice hybrids, while the maintainers would prove useful for developing new rice CMS lines.
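For reference, per-locus polymorphism information content is conventionally computed from allele frequencies using the Botstein et al. (1980) formula; a minimal sketch with hypothetical frequencies:

```python
def pic(freqs):
    """Polymorphism information content for one locus (Botstein et al. 1980):
    PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2,
    where p_i are allele frequencies summing to 1."""
    homozygosity = sum(p * p for p in freqs)
    cross = sum(2 * freqs[i]**2 * freqs[j]**2
                for i in range(len(freqs))
                for j in range(i + 1, len(freqs)))
    return 1 - homozygosity - cross

# Hypothetical 4-allele SSR locus (the study averaged 4.62 alleles/locus).
print(f"PIC = {pic([0.4, 0.3, 0.2, 0.1]):.2f}")  # -> 0.65
```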
This chapter discusses some operational considerations relevant to managing a modeling environment for analyzing systemic risk. Challenges in data management, model hosting and data security are described. Suggestions for establishing a frame of reference for an assessment and for visualizing model outputs are presented. Three operating models for a risk modeling forum that would help decision makers build consensus around data-driven analyses are described.
The nation's exposure to financial risk arising from a broad range of diverse and additive effects has gained recent attention from leaders in government and the financial services industry. Models of the financial system are useful to help decision makers understand what is currently happening, and what conditions may exist in the future. They also play an important role in helping policy makers understand the potential impact of regulatory actions that they may consider.
There are numerous finance, economic, and risk models that have been developed to represent aspects of the nation's systemic risk. Many of those models are based on different assumptions, or focus on different aspects of the economy. At times, decision makers may seek to form aggregated views from a collection of disparate models. It can also be informative to compare outputs from related models to gain understanding of the ramifications of differing assumptions, and the range of uncertainty embedded in model outputs.
This chapter discusses a few of the operational considerations that must be managed to allow an array of such models to be brought together, both to form aggregated views of the composite situation and to perform unbiased comparisons of any conflicting forecasts they might produce.