Bloodstream infections (BSIs) are a frequent cause of morbidity in patients with acute myeloid leukemia (AML), due in part to the presence of central venous access devices (CVADs) required to deliver therapy.
To determine the differential risk of bacterial BSI during neutropenia by CVAD type in pediatric patients with AML.
We performed a secondary analysis in a cohort of 560 pediatric patients (1,828 chemotherapy courses) receiving frontline AML chemotherapy at 17 US centers. The exposure was CVAD type at course start: tunneled externalized catheter (TEC), peripherally inserted central catheter (PICC), or totally implanted catheter (TIC). The primary outcome was course-specific incident bacterial BSI; secondary outcomes included mucosal barrier injury (MBI)-BSI and non-MBI BSI. Poisson regression was used to compute adjusted rate ratios comparing BSI occurrence during neutropenia by line type, controlling for demographic, clinical, and hospital-level characteristics.
The rate of BSI did not differ by CVAD type: 11 BSIs per 1,000 neutropenic days for TECs, 13.7 for PICCs, and 10.7 for TICs. After adjustment, there was no statistically significant association between CVAD type and BSI: PICC incident rate ratio [IRR] = 1.00 (95% confidence interval [CI], 0.75–1.32) and TIC IRR = 0.83 (95% CI, 0.49–1.41) compared to TEC. When MBI and non-MBI were examined separately, results were similar.
In this large, multicenter cohort of pediatric AML patients, we found no difference in the rate of BSI during neutropenia by CVAD type. This may be due to a risk profile for BSI that is unique to AML patients.
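The crude (unadjusted) rate comparison reported above can be reproduced directly from the per-1,000-day rates. Note that the study's IRRs come from Poisson regression with covariate adjustment, so they differ from these naive ratios; this sketch is purely illustrative:

```python
# Crude incidence-rate ratios (IRRs) from the course-level rates reported
# above (BSIs per 1,000 neutropenic days). These are unadjusted; the study's
# IRRs were estimated with Poisson regression controlling for covariates.

def crude_irr(rate_exposed: float, rate_reference: float) -> float:
    """Unadjusted incidence-rate ratio between two groups."""
    return rate_exposed / rate_reference

# Reported rates by line type (BSIs per 1,000 neutropenic days):
rate_tec, rate_picc, rate_tic = 11.0, 13.7, 10.7

print(round(crude_irr(rate_picc, rate_tec), 2))  # PICC vs TEC -> 1.25
print(round(crude_irr(rate_tic, rate_tec), 2))   # TIC vs TEC -> 0.97
```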
Paediatric residents are often taught cardiac anatomy with two-dimensional images of heart specimens, or via imaging such as echocardiography or computed tomography. This study aimed to determine if the use of a structured, interactive teaching session using heart specimens exhibiting congenital heart disease (CHD) would be effective in teaching the concepts of cardiac anatomy.
The interest amongst paediatric residents in a cardiac anatomy session using heart specimens was assessed initially by circulating a survey. Next, four major cardiac lesions were identified to be of interest: atrial septal defect, ventricular septal defect, tetralogy of Fallot, and transposition. A list of key structures and anatomic concepts for these lesions was developed, and appropriate specimens demonstrating these features were identified by a cardiac morphologist. A structured, interactive teaching session was then held with the paediatric residents using the cardiac specimens. The same 10-question assessment was administered at the beginning and end of the session.
The initial survey demonstrated that all the paediatric residents had an interest in a cardiac anatomy teaching session. A total of 24 participated in the 2-hour session. The median pre-test score was 45%, compared to a median post-test score of 90% (p < 0.01). All paediatric residents who completed a post-session survey indicated that the session was a good use of educational time and contributed to increasing their knowledge base. They expressed great interest in future sessions.
A 2-hour hands-on cardiac anatomy teaching session using cardiac specimens can successfully highlight key anatomic concepts for paediatric residents.
An end of summer snowline (EOSS) photographic dataset for Aotearoa New Zealand contains over four decades of equilibrium line altitude (ELA) observations for more than 50 index glaciers. This dataset provides an opportunity to create a climatological ELA reference series that has several applications. Our work screened out EOSS sites that had low temporal coverage and also removed limited observations when the official survey did not take place. Snowline data from 41 of 50 glaciers in the EOSS dataset were retained and included in a normalised master snowline series that spans 1977–2020. Application of the regionally representative normalised master snowline series in monthly and seasonally resolved climate response function analyses showed consistently strong relationships with austral warm-season temperatures for land-based stations west of the Southern Alps and the central Tasman Sea. There is a trend towards higher regional snowlines since the 1990s that has been steepening in recent decades. If contemporary decadal normalised master snowline series trends are maintained, the average Southern Alps snowline elevation will be displaced at least 200 m higher than normal by the 2025–2034 decade. More frequent extremely high snowlines are expected to drive more extreme cumulative mass-balance losses that will reduce the glacierised area of Aotearoa New Zealand.
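The construction of a master series from heterogeneous per-glacier snowline records can be sketched as a standardise-then-average scheme; this is an illustrative simplification, not the exact EOSS normalisation methodology, and the toy values are invented:

```python
import numpy as np

def normalised_master_series(snowlines: np.ndarray) -> np.ndarray:
    """Combine per-glacier ELA records (glaciers x years) into one master
    series: standardise each glacier's record to zero mean / unit variance,
    then average across glaciers, ignoring missing years (NaNs).
    A simplified sketch of the general approach, not the EOSS method."""
    mean = np.nanmean(snowlines, axis=1, keepdims=True)
    std = np.nanstd(snowlines, axis=1, keepdims=True)
    return np.nanmean((snowlines - mean) / std, axis=0)

# Toy example: 3 glaciers x 5 years of ELA observations (m), one missing year
elas = np.array([[1800., 1820., 1850., np.nan, 1900.],
                 [1600., 1610., 1640., 1660., 1700.],
                 [2000., 2005., 2030., 2050., 2080.]])
print(normalised_master_series(elas).round(2))  # rising normalised snowline
```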
To describe the cumulative seroprevalence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) antibodies during the coronavirus disease 2019 (COVID-19) pandemic among employees of a large pediatric healthcare system.
Design, setting, and participants:
Prospective observational cohort study open to adult employees at the Children’s Hospital of Philadelphia, conducted April 20–December 17, 2020.
Employees were recruited starting with high-risk exposure groups, utilizing e-mails, flyers, and announcements at virtual town hall meetings. At baseline, 1 month, 2 months, and 6 months, participants reported occupational and community exposures and gave a blood sample for SARS-CoV-2 antibody measurement by enzyme-linked immunosorbent assays (ELISAs). A post hoc Cox proportional hazards regression model was performed to identify factors associated with increased risk for seropositivity.
In total, 1,740 employees were enrolled. At 6 months, the cumulative seroprevalence was 5.3%, which was below estimated community point seroprevalence. Seroprevalence was 5.8% among employees who provided direct care and was 3.4% among employees who did not perform direct patient care. Most participants who were seropositive at baseline remained positive at follow-up assessments. In a post hoc analysis, direct patient care (hazard ratio [HR], 1.95; 95% confidence interval [CI], 1.03–3.68), Black race (HR, 2.70; 95% CI, 1.24–5.87), and exposure to a confirmed case in a nonhealthcare setting (HR, 4.32; 95% CI, 2.71–6.88) were associated with statistically significant increased risk for seropositivity.
Employee SARS-CoV-2 seroprevalence rates remained below the point-prevalence rates of the surrounding community. Provision of direct patient care, Black race, and exposure to a confirmed case in a nonhealthcare setting conferred increased risk. These data can inform occupational protection measures to maximize protection of employees within the workplace during future COVID-19 waves or other epidemics.
We identified quality indicators (QIs) for care during transitions of older persons (≥ 65 years of age). Through systematic literature review, we catalogued QIs related to older persons’ transitions in care among continuing care settings and between continuing care and acute care settings and back. Through two Delphi survey rounds, experts ranked relevance, feasibility, and scientific soundness of QIs. A steering committee reviewed QIs for their feasible capture in Canadian administrative databases. Our search yielded 326 QIs from 53 sources. A final set of 38 feasible indicators to measure in current practice was included. The highest proportions of indicators were for the emergency department (47%) and the Institute of Medicine (IOM) quality domain of effectiveness (39.5%). Most feasible indicators were outcome indicators. Our work highlights a lack of standardized transition QI development in practice, and the limitations of current free-text documentation systems in capturing relevant and consistent data.
ABSTRACT IMPACT: This work will standardize necessary image pre-processing for diagnostic and prognostic clinical workflows dependent on quantitative analysis of conventional magnetic resonance imaging. OBJECTIVES/GOALS: Conventional magnetic resonance imaging (MRI) poses challenges for quantitative analysis due to a lack of uniform inter-scanner voxel intensity values. Head and neck cancer (HNC) applications in particular have not been well investigated. This project aims to systematically evaluate voxel intensity standardization (VIS) methods for HNC MRI. METHODS/STUDY POPULATION: We utilize two separate cohorts of HNC patients, where T2-weighted (T2-w) MRI sequences were acquired before beginning radiotherapy for five patients in each cohort. The first cohort corresponds to patients with images taken at various institutions with a variety of non-uniform acquisition scanners and parameters. The second cohort corresponds to patients from a prospective clinical trial with uniformity in both scanner and acquisition parameters. Regions of interest from a variety of healthy tissues assumed to have minimal interpatient variation were manually contoured for each image and used to compare differences between a variety of VIS methods for each cohort. Towards this end, we implement a new metric for cohort intensity distributional overlap to compare region of interest similarity in a given cohort. RESULTS/ANTICIPATED RESULTS: Using a simple and interpretable metric, we have systematically investigated the effects of various commonly implementable VIS methods on T2-w sequences for two independent cohorts of HNC patients based on region of interest intensity similarity. We demonstrate VIS has a substantial effect on T2-w images where non-uniform acquisition parameters and scanners are utilized. Conversely, it has a modest to minimal impact on T2-w images generated from the same scanner with the same acquisition parameters.
Moreover, with a few notable exceptions, there does not seem to be a clear advantage or disadvantage to using one VIS method over another for T2-w images with non-uniform acquisition parameters. DISCUSSION/SIGNIFICANCE OF FINDINGS: Our results inform which VIS methods should be favored in HNC MRI and may indicate VIS is not a critical factor to consider in circumstances where similar acquisition parameters can be utilized. Moreover, our results can help guide downstream quantitative imaging tasks that may one day be implemented in clinical workflows.
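Z-score normalisation against a reference region is one commonly implemented VIS method of the kind evaluated above; the sketch below is illustrative only, and is not necessarily one of the specific methods the study tested:

```python
import numpy as np

def zscore_normalize(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Rescale voxel intensities to zero mean / unit variance computed
    within a reference region of interest (ROI), a simple VIS approach."""
    roi = image[mask]
    return (image - roi.mean()) / roi.std()

# Toy 'image': hypothetical T2-w intensities with a reference ROI mask
rng = np.random.default_rng(0)
img = rng.normal(loc=300.0, scale=40.0, size=(8, 8))
mask = np.zeros_like(img, dtype=bool)
mask[2:6, 2:6] = True  # reference tissue region

norm = zscore_normalize(img, mask)
print(round(float(norm[mask].mean()), 6))  # ~0 within the ROI after VIS
```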
The only complete inventory of New Zealand glaciers was based on aerial photography starting in 1978. While there have been partial updates using 2002 and 2009 satellite data, most glaciers are still represented by the 1978 outlines in contemporary global glacier databases. The objective of this project is to establish an updated glacier inventory for New Zealand. We have used Landsat 8 OLI satellite imagery from February and March 2016 for delineating clean glaciers using a semi-automatic band ratio method and debris-covered glaciers using a maximum likelihood classification. The outlines have been checked against Sentinel-2 MSI data, which have a higher resolution. Manual post-processing was necessary due to misclassifications (e.g. lakes, clouds), mapping in shadowed areas, and combining the clean and debris-covered parts into single glaciers. New Zealand glaciers covered an area of 794 ± 34 km2 in 2016, of which 10% was debris-covered. Of the 2918 glaciers, seven are >10 km2, while 71% are <0.1 km2. The debris cover on these largest glaciers is >40%. Only 15 glaciers are located on the North Island. For a selection of glaciers, we were able to calculate the area reduction between the 1978 and 2016 inventories.
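Band-ratio mapping of clean ice, as used above, amounts to thresholding the ratio of a visible band to a shortwave-infrared band, since snow and ice are bright in the visible but strongly absorbing in SWIR. The band choice and threshold in this sketch are illustrative, not the values used for the inventory:

```python
import numpy as np

def band_ratio_glacier_mask(red: np.ndarray, swir: np.ndarray,
                            threshold: float = 2.0) -> np.ndarray:
    """Classify clean glacier ice/snow where red/SWIR exceeds a threshold.
    The threshold (2.0 here) is illustrative; in practice it is tuned per
    scene, and debris-covered ice needs a separate classification."""
    ratio = red / np.maximum(swir, 1e-6)  # guard against division by zero
    return ratio > threshold

# Toy reflectance arrays: snow/ice is bright in red, dark in SWIR
red  = np.array([[0.8, 0.8], [0.3, 0.2]])
swir = np.array([[0.1, 0.5], [0.1, 0.2]])
print(band_ratio_glacier_mask(red, swir))  # ice in the left column only
```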
Registry-based trials have emerged as a potentially cost-saving study methodology. Early estimates of cost savings, however, conflated the benefits associated with registry utilisation and those associated with other aspects of pragmatic trial designs, which might not all be as broadly applicable. In this study, we sought to build a practical tool that investigators could use across disciplines to estimate the ranges of potential cost differences associated with implementing registry-based trials versus standard clinical trials.
We built simulation Markov models to compare unique costs associated with data acquisition, cleaning, and linkage under a registry-based trial design versus a standard clinical trial. We conducted one-way, two-way, and probabilistic sensitivity analyses, varying study characteristics over broad ranges, to determine thresholds at which investigators might optimally select each trial design.
Registry-based trials were more cost-effective than standard clinical trials 98.6% of the time. Data-related cost savings ranged from $4300 to $600,000 with variation in study characteristics. Cost differences were most sensitive to the number of patients in a study, the number of data elements per patient available in a registry, and the speed with which research coordinators could manually abstract data. Registry incorporation resulted in cost savings when as few as 3768 independent data elements were available and when manual data abstraction took as little as 3.4 seconds per data field.
Registries offer important resources for investigators. When available, their broad incorporation may help the scientific community reduce the costs of clinical investigation. We offer here a practical tool for investigators to assess potential cost savings.
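The trade-off the model captures can be illustrated with a toy cost comparison: manual abstraction cost scales with patients, data elements, and abstraction time, while a registry substitutes a fixed linkage cost plus per-patient cleaning. All parameter values below are hypothetical and far simpler than the paper's Markov simulation:

```python
def manual_abstraction_cost(n_patients: int, elements_per_patient: int,
                            seconds_per_field: float,
                            coordinator_rate_per_hour: float) -> float:
    """Data-acquisition cost if coordinators manually abstract every field."""
    hours = n_patients * elements_per_patient * seconds_per_field / 3600
    return hours * coordinator_rate_per_hour

def registry_cost(fixed_linkage_cost: float, n_patients: int,
                  per_patient_cleaning_cost: float) -> float:
    """Data cost under a registry-based design: linkage plus cleaning."""
    return fixed_linkage_cost + n_patients * per_patient_cleaning_cost

# Hypothetical inputs (not taken from the paper's model):
manual = manual_abstraction_cost(1000, 200, 10.0, 40.0)  # ~ $22,222
registry = registry_cost(15_000, 1000, 2.0)              # $17,000
print(manual > registry)  # registry wins under these assumptions
```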
The goal of pharmacological treatment is a desired response, known as the target effect (e.g. bispectral index of 50). An understanding of the concentration–response relationship (i.e. pharmacodynamics (PD)) can be used to predict the target concentration (e.g. propofol 4 mg/L) required to achieve this target effect in a typical individual. Pharmacokinetic (PK) knowledge (e.g. clearance, volume) then determines the dose that will achieve the target concentration. Each individual, however, is somewhat different and there is variability associated with all parameters used in PK and PD equations (known as models). Covariate information (e.g. weight, age, pathology, drug interactions, pharmacogenomics) can be used to help predict the dose in a specific patient. The Holy Grail of clinical pharmacology is prediction of drug PK and PD in the individual patient (Fig. 4.1), and this requires knowledge of the covariates that contribute to variability.
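The target-concentration approach described above reduces to two standard relationships: a loading dose scaled by the volume of distribution, and a steady-state infusion rate scaled by clearance. The sketch below uses the propofol target from the text, but the clearance and volume figures are purely illustrative:

```python
def maintenance_infusion_rate(target_conc_mg_per_L: float,
                              clearance_L_per_min: float) -> float:
    """At steady state, infusion rate = clearance x target concentration."""
    return target_conc_mg_per_L * clearance_L_per_min

def loading_dose(target_conc_mg_per_L: float, vd_L: float) -> float:
    """Loading dose = volume of distribution x target concentration."""
    return target_conc_mg_per_L * vd_L

# Illustrative values only: 4 mg/L target (from the text); clearance of
# 1.9 L/min and central volume of 20 L are hypothetical, not typical values.
print(maintenance_infusion_rate(4.0, 1.9))  # 7.6 mg/min
print(loading_dose(4.0, 20.0))              # 80.0 mg
```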
Indications for TIVA in children are essentially the same as in adults, with the additional benefits of reduced emergence delirium and possibly reduced cognitive dysfunction.[1,2] Fears that children may develop propofol infusion syndrome during routine anaesthesia have not eventuated.
Obesity is a chronic disease characterised by the presence of excessive body fat that increases the risk of health problems. Traditionally, the administration of TIVA and TCI in the obese has been performed using dose schemes extrapolated from non-obese patients. Such schemes have proven inadequate in the obese and are commonly associated with overdose.[1,2] Dosing strategies for IV anaesthetics in obese patients require approaches that differ from those used in lean patients due to the physiological and pharmacological changes associated with obesity. This chapter is intended to be a practical guide for anaesthetists who wish to undertake TIVA and TCI in obese patients.
Recent years have seen an exponential increase in the variety of healthcare data captured across numerous sources. However, mechanisms to leverage these data sources to support scientific investigation have remained limited. In 2013 the Pediatric Heart Network (PHN), funded by the National Heart, Lung, and Blood Institute, developed the Integrated CARdiac Data and Outcomes (iCARD) Collaborative with the goals of leveraging available data sources to aid in efficiently planning and conducting PHN studies; supporting integration of PHN data with other sources to foster novel research otherwise not possible; and mentoring young investigators in these areas. This review describes lessons learned through the development of iCARD, initial efforts and scientific output, challenges, and future directions. This information can aid in the use and optimisation of data integration methodologies across other research networks and organisations.
Objective: Concussion in children and adolescents is a prevalent problem with implications for subsequent physical, cognitive, behavioral, and psychological functioning, as well as quality of life. While these consequences warrant attention, most concussed children recover well. This study aimed to determine what pre-injury, demographic, and injury-related factors are associated with optimal outcome (“wellness”) after pediatric concussion. Method: A total of 311 children 6–18 years of age with concussion participated in a longitudinal, prospective cohort study. Pre-morbid conditions and acute injury variables, including post-concussive symptoms (PCS) and cognitive screening (Standardized Assessment of Concussion, SAC), were collected in the emergency department, and a neuropsychological assessment was performed at 4 and 12 weeks post-injury. Wellness, defined by the absence of PCS and cognitive inefficiency and the presence of good quality of life, was the main outcome. Stepwise logistic regression was performed using 19 predictor variables. Results: 41.5% and 52.2% of participants were classified as being well at 4 and 12 weeks post-injury, respectively. The final model indicated that children who were younger, who sustained sports/recreational injuries (vs. other types), who did not have a history of developmental problems, and who had better acute working memory (SAC concentration score) were significantly more likely to be well. Conclusions: Determining the variables associated with wellness after pediatric concussion has the potential to clarify which children are likely to show optimal recovery. Future work focusing on wellness and concussion should include appropriate control groups and document more extensively pre-injury and injury-related factors that could additionally contribute to wellness. (JINS, 2019, 25, 375–389)
We evaluated whether a diagnostic stewardship initiative consisting of antimicrobial stewardship program (ASP) preauthorization paired with education could reduce false-positive hospital-onset (HO) Clostridioides difficile infection (CDI).
Single center, quasi-experimental study.
Tertiary academic medical center in Chicago, Illinois.
Adult inpatients were included in the intervention if they were admitted between October 1, 2016, and April 30, 2018, and were eligible for C. difficile preauthorization review. Patients admitted to the stem cell transplant (SCT) unit were not included in the intervention and were therefore considered a contemporaneous noninterventional control group.
The intervention consisted of requiring prescriber attestation that diarrhea had met CDI clinical criteria, ASP preauthorization, and verbal clinician feedback. Data were compared for the 33 months before and 19 months after implementation. Facility-wide HO-CDI incidence rates (IR) per 10,000 patient days (PD) and standardized infection ratios (SIR) were extracted from hospital infection prevention reports.
During the entire 52-month period, the mean facility-wide HO-CDI-IR was 7.8 per 10,000 PD and the SIR was 0.9 overall. The mean ± SD HO-CDI-IR (8.5 ± 2.0 vs 6.5 ± 2.3; P < .001) and SIR (0.97 ± 0.23 vs 0.78 ± 0.26; P = .015) decreased from baseline during the intervention. Segmented regression models identified significant decreases in HO-CDI-IR (Pstep = .06; Ptrend = .008) and SIR (Pstep = .1; Ptrend = .017) trends concurrent with decreases in oral vancomycin (Pstep < .001; Ptrend < .001). HO-CDI-IR within a noninterventional control unit did not change (Pstep = .125; Ptrend = .115).
A multidisciplinary, multifaceted intervention leveraging clinician education and feedback reduced the HO-CDI-IR and the SIR in select populations. Institutions may consider interventions like ours to reduce false-positive C. difficile NAAT tests.
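Segmented (interrupted time-series) regression of the kind reported above models an immediate level change (step) plus a slope change (trend) at the intervention point. A minimal least-squares sketch on synthetic monthly data, not the study's actual model or data:

```python
import numpy as np

def segmented_regression(y: np.ndarray, t: np.ndarray,
                         t_intervention: float) -> np.ndarray:
    """Fit y = b0 + b1*t + b2*step + b3*(t - t_int)*step by least squares.
    b2 estimates the immediate level change at the intervention (the 'step'
    p value above); b3 estimates the change in slope (the 'trend' p value)."""
    step = (t >= t_intervention).astype(float)
    X = np.column_stack([np.ones_like(t), t, step, (t - t_intervention) * step])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic monthly HO-CDI rates: flat baseline for 33 months,
# then a downward trend of 0.1 per month after the intervention
t = np.arange(52, dtype=float)
y = np.where(t < 33, 8.5, 8.5 - 0.1 * (t - 33))
b0, b1, b2, b3 = segmented_regression(y, t, 33)
print(round(b3, 3))  # recovered post-intervention slope change, about -0.1
```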
As referrals to specialist palliative care (PC) grow in volume and diversity, an evidence-based triage method is needed to enable services to manage waiting lists in a transparent, efficient, and equitable manner. Discrete choice experiments (DCEs) have not to date been used among PC clinicians, but may serve as a rigorous and efficient method to explore and inform the complex decision-making involved in PC triage. This article presents the protocol for a novel application of an international DCE as part of a mixed-method research program, ultimately aiming to develop a clinical decision-making tool for PC triage.
Five stages of protocol development were undertaken: (1) identification of attributes of interest; (2) creation and (3) execution of a pilot DCE; and (4) refinement and (5) planned execution of the final DCE.
Six attributes of interest to PC triage were identified and included in a DCE that was piloted with 10 palliative care practitioners. The pilot was found to be feasible, with an acceptable cognitive burden, but refinements were made, including the creation of an additional attribute to allow independent analysis of concepts involved. Strategies for recruitment, data collection, analysis, and modeling were confirmed for the final planned DCE.
Significance of results
This DCE protocol serves as an example of how the sophisticated DCE methodology can be applied to health services research in PC. Discussion of key elements that improved the utility, integrity, and feasibility of the DCE provide valuable insights.
The outermost “crust” and an underlying, compositionally distinct, and denser layer, the “mantle,” constitute the silicate portion of a terrestrial planet. The “lithosphere” is the planet’s high-strength outer shell. The crust records the history of shallow magmatism, which along with temporal changes in lithospheric thickness, provides information on a planet’s thermal evolution. We focus on the basic structure and mechanics of Mercury’s crust and lithosphere as determined primarily from gravity and topography data acquired by the MESSENGER mission. We first describe these datasets: how they were acquired, how the data are represented on a sphere, and the limitations of the data imparted by MESSENGER’s highly eccentric orbit. We review different crustal thickness models obtained by parsing the observed gravity signal into contributions from topography, relief on the crust–mantle boundary, and density anomalies that drive viscous flow in the mantle. Estimates of lithospheric thickness from gravity–topography analyses are at odds with predictions from thermal models, thus challenging our understanding of Mercury’s geodynamics. We show that, like those of the Moon, Mercury's ellipsoidal shape and geoid are far from hydrostatic equilibrium, possibly the result of Mercury's peculiar surface temperature distribution and associated buoyancy anomalies and thermoelastic stresses in the interior.
Impact craters are the dominant landform on Mercury and range from the largest basins to the smallest young craters. Peak-ring basins are especially prevalent on Mercury, although basins of all forms are far undersaturated, probably the result of the extensive volcanic emplacement of intercrater plains and younger smooth plains between about 4.1 and 3.5 Ga. This chapter describes the geology of the two largest well-preserved basins, Caloris and Rembrandt, and the three smaller Raditladi, Rachmaninoff, and Mozart basins. We describe analyses of crater size–frequency distributions and relate them to populations of asteroid impactors (Late Heavy Bombardment in early epochs and the near-Earth asteroid population observable today during most of Mercury’s history), to secondary cratering, and to exogenic and endogenic processes that degrade and erase craters. Secondary cratering is more important on Mercury than on other solar system bodies and shaped much of the surface on kilometer and smaller scales, compromising our ability to use craters for relative and absolute age-dating of smaller geological units. Failure to find “vulcanoids” and satellites of Mercury suggests that such bodies played a negligible role in cratering Mercury. We describe an absolute cratering chronology for Mercury’s geological evolution as well as its uncertainties.