We identified quality indicators (QIs) for care during transitions of older persons (≥ 65 years of age). Through systematic literature review, we catalogued QIs related to older persons’ transitions in care among continuing care settings and between continuing care and acute care settings and back. Through two Delphi survey rounds, experts ranked relevance, feasibility, and scientific soundness of QIs. A steering committee reviewed QIs for their feasible capture in Canadian administrative databases. Our search yielded 326 QIs from 53 sources. A final set of 38 feasible indicators to measure in current practice was included. The highest proportions of indicators were for the emergency department (47%) and the Institute of Medicine (IOM) quality domain of effectiveness (39.5%). Most feasible indicators were outcome indicators. Our work highlights a lack of standardized transition QI development in practice, and the limitations of current free-text documentation systems in capturing relevant and consistent data.
ABSTRACT IMPACT: This work will standardize necessary image pre-processing for diagnostic and prognostic clinical workflows dependent on quantitative analysis of conventional magnetic resonance imaging. OBJECTIVES/GOALS: Conventional magnetic resonance imaging (MRI) poses challenges for quantitative analysis due to a lack of uniform inter-scanner voxel intensity values. Head and neck cancer (HNC) applications in particular have not been well investigated. This project aims to systematically evaluate voxel intensity standardization (VIS) methods for HNC MRI. METHODS/STUDY POPULATION: We utilize two separate cohorts of HNC patients, where T2-weighted (T2-w) MRI sequences were acquired before beginning radiotherapy for five patients in each cohort. The first cohort corresponds to patients with images taken at various institutions with a variety of non-uniform acquisition scanners and parameters. The second cohort corresponds to patients from a prospective clinical trial with uniformity in both scanner and acquisition parameters. Regions of interest from a variety of healthy tissues assumed to have minimal interpatient variation were manually contoured for each image and used to compare differences between a variety of VIS methods for each cohort. Towards this end, we implement a new metric for cohort intensity distributional overlap to compare region of interest similarity in a given cohort. RESULTS/ANTICIPATED RESULTS: Using a simple and interpretable metric, we have systematically investigated the effects of various commonly implementable VIS methods on T2-w sequences for two independent cohorts of HNC patients based on region of interest intensity similarity. We demonstrate VIS has a substantial effect on T2-w images where non-uniform acquisition parameters and scanners are utilized. Conversely, it has a modest to minimal impact on T2-w images generated from the same scanner with the same acquisition parameters.
Moreover, with a few notable exceptions, there does not seem to be a clear advantage or disadvantage to using one VIS method over another for T2-w images with non-uniform acquisition parameters. DISCUSSION/SIGNIFICANCE OF FINDINGS: Our results inform which VIS methods should be favored in HNC MRI and may indicate VIS is not a critical factor to consider in circumstances where similar acquisition parameters can be utilized. Moreover, our results can help guide downstream quantitative imaging tasks that may one day be implemented in clinical workflows.
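As a concrete sketch of the kind of comparison described above, the snippet below implements one common VIS approach (z-score normalization against a reference region) and a simple histogram-intersection measure of cohort intensity distributional overlap. This is an illustrative reconstruction under stated assumptions, not the authors' exact metric or pipeline; the function names and bin count are hypothetical.

```python
import numpy as np

def zscore_normalize(image, roi_mask):
    """One common VIS method (illustrative, not necessarily one the study
    evaluated): rescale voxel intensities by the mean/std of a reference ROI."""
    vals = image[roi_mask]
    return (image - vals.mean()) / vals.std()

def cohort_overlap(rois, bins=32):
    """A simple overlap metric: mean pairwise histogram intersection of
    patients' ROI intensity distributions (1.0 = identical, 0.0 = disjoint)."""
    edges = np.histogram_bin_edges(np.concatenate(rois), bins=bins)
    hists = [np.histogram(r, bins=edges)[0] / len(r) for r in rois]
    scores = [np.minimum(hists[i], hists[j]).sum()
              for i in range(len(hists)) for j in range(i + 1, len(hists))]
    return float(np.mean(scores))
```

With non-uniform scanners, raw ROI histograms barely overlap; after z-score normalization the overlap score rises toward 1, which mirrors the abstract's finding that VIS matters most when acquisition is heterogeneous.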
The only complete inventory of New Zealand glaciers was based on aerial photography starting in 1978. While there have been partial updates using 2002 and 2009 satellite data, most glaciers are still represented by the 1978 outlines in contemporary global glacier databases. The objective of this project is to establish an updated glacier inventory for New Zealand. We have used Landsat 8 OLI satellite imagery from February and March 2016 for delineating clean glaciers using a semi-automatic band ratio method and debris-covered glaciers using a maximum likelihood classification. The outlines have been checked against Sentinel-2 MSI data, which have a higher resolution. Manual post-processing was necessary due to misclassifications (e.g. lakes, clouds), mapping in shadowed areas, and combining the clean and debris-covered parts into single glaciers. New Zealand glaciers covered an area of 794 ± 34 km2 in 2016 with a debris-covered area of 10%. Of the 2918 glaciers, seven glaciers are >10 km2 while 71% are <0.1 km2. The debris cover on those largest glaciers is >40%. Only 15 glaciers are located on the North Island. For a selection of glaciers, we were able to calculate the area reduction between the 1978 and 2016 inventories.
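The band-ratio step above can be sketched in a few lines. Snow and ice are bright in the red band and dark in the shortwave infrared, so their ratio separates clean ice from rock and vegetation. The threshold below is illustrative; the study's value is scene-specific and not stated in the abstract.

```python
import numpy as np

def band_ratio_glacier_mask(red, swir, threshold=2.0):
    """Semi-automatic band-ratio classification of clean ice (a common
    sketch, not the authors' exact pipeline): red/SWIR reflectance above
    a tuned threshold flags snow and ice pixels."""
    ratio = red / np.maximum(swir, 1e-6)  # guard against division by zero
    return ratio > threshold
```

The resulting binary mask still needs the manual post-processing the abstract describes (removing lakes and clouds, fixing shadowed areas) before outlines are merged with the debris-covered parts.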
Registry-based trials have emerged as a potentially cost-saving study methodology. Early estimates of cost savings, however, conflated the benefits associated with registry utilisation and those associated with other aspects of pragmatic trial designs, which might not all be as broadly applicable. In this study, we sought to build a practical tool that investigators could use across disciplines to estimate the ranges of potential cost differences associated with implementing registry-based trials versus standard clinical trials.
We built simulation Markov models to compare unique costs associated with data acquisition, cleaning, and linkage under a registry-based trial design versus a standard clinical trial. We conducted one-way, two-way, and probabilistic sensitivity analyses, varying study characteristics over broad ranges, to determine thresholds at which investigators might optimally select each trial design.
Registry-based trials were more cost-effective than standard clinical trials 98.6% of the time. Data-related cost savings ranged from $4300 to $600,000 with variation in study characteristics. Cost differences were most sensitive to the number of patients in a study, the number of data elements per patient available in a registry, and the speed with which research coordinators could manually abstract data. Registry incorporation resulted in cost savings when as few as 3768 independent data elements were available and when manual data abstraction took as little as 3.4 seconds per data field.
Registries offer important resources for investigators. When available, their broad incorporation may help the scientific community reduce the costs of clinical investigation. We offer here a practical tool for investigators to assess potential cost savings.
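The core trade-off the model captures can be sketched with simple arithmetic: manual abstraction cost scales with patients × data elements × seconds per field × coordinator wage, and a registry wins once that labor exceeds the fixed cost of registry data acquisition, cleaning, and linkage. This is a deliberately simplified sketch, not the authors' full Markov simulation, and all parameter values below are hypothetical.

```python
def manual_abstraction_cost(n_patients, elements_per_patient,
                            seconds_per_field, hourly_wage):
    """Coordinator labor cost of hand-abstracting every data field."""
    hours = n_patients * elements_per_patient * seconds_per_field / 3600.0
    return hours * hourly_wage

def registry_is_cheaper(n_patients, elements_per_patient, seconds_per_field,
                        hourly_wage, registry_data_cost):
    """Registry design wins when avoided abstraction labor exceeds the fixed
    cost of registry data acquisition, cleaning, and linkage (simplified)."""
    labor = manual_abstraction_cost(n_patients, elements_per_patient,
                                    seconds_per_field, hourly_wage)
    return labor > registry_data_cost
```

This structure also explains the thresholds reported above: with enough data elements, even a few seconds of abstraction time per field accumulates into savings that outweigh registry setup costs.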
The goal of pharmacological treatment is a desired response, known as the target effect (e.g. bispectral index of 50). An understanding of the concentration–response relationship (i.e. pharmacodynamics (PD)) can be used to predict the target concentration (e.g. propofol 4 mg/L) required to achieve this target effect in a typical individual. Pharmacokinetic (PK) knowledge (e.g. clearance, volume) then determines the dose that will achieve the target concentration. Each individual, however, is somewhat different and there is variability associated with all parameters used in PK and PD equations (known as models). Covariate information (e.g. weight, age, pathology, drug interactions, pharmacogenomics) can be used to help predict the dose in a specific patient. The Holy Grail of clinical pharmacology is prediction of drug PK and PD in the individual patient (Fig. 4.1) and this requires knowledge of the covariates that contribute to variability.
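The PK step described above rests on two standard relationships: a loading dose fills the distribution volume to the target concentration (dose = V × Ct), and a steady-state infusion replaces what clearance removes (rate = CL × Ct). A minimal sketch, using the chapter's propofol target of 4 mg/L with illustrative (not patient-specific) values for V and CL:

```python
def loading_dose_mg(target_conc_mg_per_L, volume_L):
    """Dose needed to fill the distribution volume to the target: dose = V * Ct."""
    return volume_L * target_conc_mg_per_L

def maintenance_rate_mg_per_h(target_conc_mg_per_L, clearance_L_per_h):
    """Steady-state infusion rate that replaces cleared drug: rate = CL * Ct."""
    return clearance_L_per_h * target_conc_mg_per_L

# Illustrative only: V = 20 L and CL = 100 L/h are hypothetical round numbers,
# not typical propofol parameters for any given patient.
dose = loading_dose_mg(4.0, 20.0)        # 80 mg
rate = maintenance_rate_mg_per_h(4.0, 100.0)  # 400 mg/h
```

In practice, covariates (weight, age, pathology) adjust V and CL for the individual, which is exactly where the interpatient variability discussed above enters the calculation.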
Indications for TIVA in children are essentially the same as in adults, with the additional benefit of reducing emergence delirium and possibly cognitive dysfunction.[1,2] Fears that children may develop propofol infusion syndrome during routine anaesthesia have not eventuated.
Obesity is a chronic disease characterised by the presence of excessive body fat that increases the risk of health problems. Traditionally, the administration of TIVA and TCI in the obese has been done using dose schemes extrapolated from non-obese patients. The use of such schemes has proven inadequate in the obese and they are commonly associated with overdose.[1,2] Dosing strategies for IV anaesthetics in obese patients require approaches that differ from those used in lean patients due to physiological and pharmacological changes associated with obesity. This chapter is intended to be a practical guide for anaesthetists who wish to undertake TIVA and TCI in obese patients.
Recent years have seen an exponential increase in the variety of healthcare data captured across numerous sources. However, mechanisms to leverage these data sources to support scientific investigation have remained limited. In 2013 the Pediatric Heart Network (PHN), funded by the National Heart, Lung, and Blood Institute, developed the Integrated CARdiac Data and Outcomes (iCARD) Collaborative with the goals of leveraging available data sources to aid in efficiently planning and conducting PHN studies; supporting integration of PHN data with other sources to foster novel research otherwise not possible; and mentoring young investigators in these areas. This review describes lessons learned through the development of iCARD, initial efforts and scientific output, challenges, and future directions. This information can aid in the use and optimisation of data integration methodologies across other research networks and organisations.
Objective: Concussion in children and adolescents is a prevalent problem with implications for subsequent physical, cognitive, behavioral, and psychological functioning, as well as quality of life. While these consequences warrant attention, most concussed children recover well. This study aimed to determine what pre-injury, demographic, and injury-related factors are associated with optimal outcome (“wellness”) after pediatric concussion. Method: A total of 311 children 6–18 years of age with concussion participated in a longitudinal, prospective cohort study. Pre-morbid conditions and acute injury variables, including post-concussive symptoms (PCS) and cognitive screening (Standardized Assessment of Concussion, SAC), were collected in the emergency department, and a neuropsychological assessment was performed at 4 and 12 weeks post-injury. Wellness, defined by the absence of PCS and cognitive inefficiency and the presence of good quality of life, was the main outcome. Stepwise logistic regression was performed using 19 predictor variables. Results: 41.5% and 52.2% of participants were classified as being well at 4 and 12 weeks post-injury, respectively. The final model indicated that children who were younger, who sustained sports/recreational injuries (vs. other types), who did not have a history of developmental problems, and who had better acute working memory (SAC concentration score) were significantly more likely to be well. Conclusions: Determining the variables associated with wellness after pediatric concussion has the potential to clarify which children are likely to show optimal recovery. Future work focusing on wellness and concussion should include appropriate control groups and document more extensively pre-injury and injury-related factors that could additionally contribute to wellness. (JINS, 2019, 25, 375–389)
We evaluated whether a diagnostic stewardship initiative consisting of ASP preauthorization paired with education could reduce false-positive hospital-onset (HO) Clostridioides difficile infection (CDI).
Single center, quasi-experimental study.
Tertiary academic medical center in Chicago, Illinois.
Adult inpatients were included in the intervention if they were admitted between October 1, 2016, and April 30, 2018, and were eligible for C. difficile preauthorization review. Patients admitted to the stem cell transplant (SCT) unit were not included in the intervention and were therefore considered a contemporaneous noninterventional control group.
The intervention consisted of requiring prescriber attestation that diarrhea had met CDI clinical criteria, ASP preauthorization, and verbal clinician feedback. Data were compared 33 months before and 19 months after implementation. Facility-wide HO-CDI incidence rates (IR) per 10,000 patient days (PD) and standardized infection ratios (SIR) were extracted from hospital infection prevention reports.
During the entire 52-month period, the mean facility-wide HO-CDI-IR was 7.8 per 10,000 PD and the SIR was 0.9 overall. The mean ± SD HO-CDI-IR (8.5 ± 2.0 vs 6.5 ± 2.3; P < .001) and SIR (0.97 ± 0.23 vs 0.78 ± 0.26; P = .015) decreased from baseline during the intervention. Segmented regression models identified significant decreases in HO-CDI-IR (Pstep = .06; Ptrend = .008) and SIR (Pstep = .1; Ptrend = .017) trends concurrent with decreases in oral vancomycin (Pstep < .001; Ptrend < .001). HO-CDI-IR within a noninterventional control unit did not change (Pstep = .125; Ptrend = .115).
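The two surveillance measures reported above are defined by simple ratios: the incidence rate normalizes case counts to 10,000 patient days, and the SIR divides observed infections by a risk-adjusted predicted count (values below 1 indicate fewer infections than predicted). A minimal sketch, with case and patient-day inputs chosen purely for illustration:

```python
def incidence_rate_per_10000_pd(cases, patient_days):
    """Hospital-onset CDI incidence rate per 10,000 patient days."""
    return cases * 10_000 / patient_days

def standardized_infection_ratio(observed, predicted):
    """SIR: observed infections over the risk-adjusted predicted count
    (< 1 means fewer infections than the baseline model predicts)."""
    return observed / predicted
```

For example, 39 HO-CDI cases over 50,000 patient days give an IR of 7.8 per 10,000 PD, matching the facility-wide mean reported above (the underlying counts here are hypothetical).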
A multidisciplinary, multifaceted intervention leveraging clinician education and feedback reduced the HO-CDI-IR and the SIR in select populations. Institutions may consider interventions like ours to reduce false-positive C. difficile NAAT tests.
As referrals to specialist palliative care (PC) grow in volume and diversity, an evidence-based triage method is needed to enable services to manage waiting lists in a transparent, efficient, and equitable manner. Discrete choice experiments (DCEs) have not to date been used among PC clinicians, but may serve as a rigorous and efficient method to explore and inform the complex decision-making involved in PC triage. This article presents the protocol for a novel application of an international DCE as part of a mixed-method research program, ultimately aiming to develop a clinical decision-making tool for PC triage.
Five stages of protocol development were undertaken: (1) identification of attributes of interest; (2) creation and (3) execution of a pilot DCE; and (4) refinement and (5) planned execution of the final DCE.
Six attributes of interest to PC triage were identified and included in a DCE that was piloted with 10 palliative care practitioners. The pilot was found to be feasible, with an acceptable cognitive burden, but refinements were made, including the creation of an additional attribute to allow independent analysis of concepts involved. Strategies for recruitment, data collection, analysis, and modeling were confirmed for the final planned DCE.
Significance of results
This DCE protocol serves as an example of how the sophisticated DCE methodology can be applied to health services research in PC. Discussion of key elements that improved the utility, integrity, and feasibility of the DCE provides valuable insights.
Mercury is the only terrestrial planet other than Earth that possesses a global magnetic field, and the unique solar wind environment of the inner heliosphere has profound consequences for both the structure and dynamics of its magnetosphere. The first in situ observations of Mercury and its space environment made four decades ago by the Mariner 10 spacecraft revealed a magnetic field that is sufficiently strong to stand off the solar wind and form a magnetosphere. Many new insights into Mercury’s magnetosphere were enabled by data returned by the MESSENGER spacecraft. The extensive magnetic field and particle observations allowed detailed characterization of the magnetospheric structure and configuration. MESSENGER magnetic field observations definitively determined the orientation, moment, and location of the internal planetary magnetic dipole field. Furthermore, these observations established the configuration of the magnetopause, bow shock, and magnetospheric current systems. Plasma observations revealed the distribution and composition of plasma in the magnetosphere. We review the geometry and dominant physical processes of Mercury’s unique magnetosphere inferred from MESSENGER data, including the solar wind environment, the shape and location of magnetospheric boundaries, and the fundamental regions and configuration of the magnetosphere and transport and heating of plasma therein.
The MESSENGER mission provided a wealth of discoveries regarding Mercury’s present and past magnetic field and completed the first-order characterization of the magnetic fields of the solar system’s inner planets. MESSENGER demonstrated that Mercury is the only inner planet other than Earth to possess a global magnetic field generated by fluid motions in its liquid iron core. The field possesses some similarities to that of Earth, particularly its dipolar nature, but it is more than a factor of 100 weaker at the surface and unlike Earth’s field is highly asymmetric about the geographic equator. This structure constrains the dynamo process that generates the field and in turn the compositional and thermal structure of Mercury’s interior. Measurements made by MESSENGER less than 100 km above the planetary surface revealed signatures of crustal magnetization, at least some of which were acquired in a very ancient global magnetic field. Electric currents flow in the planet’s interior as a result of the dynamic interactions of the global magnetic field with the solar wind. These currents provide information on the radius of Mercury’s electrically conductive core, as well as the conductivity structure of the crust and mantle, which in turn reflects interior composition and temperature.
Missions to Mercury are challenging because of the planet’s proximity to the Sun, as close as one-third the mean Earth–Sun distance. This location imparts a stressing thermal environment because of intense solar illumination, as well as major propulsion requirements because of the energy gained by a spacecraft descending from Earth into the Sun’s gravity well. Although Mercury has been a primary exploration target since the 1960s, it was not until the discovery of gravity-assist trajectories to Mercury that robotic exploration became feasible. The Mariner 10 flybys in the 1970s revealed many of Mercury's characteristics and whetted the appetite of the science community for an orbiter mission. Enabled by multiple planetary gravity assists and innovations in spacecraft and instrumentation, MESSENGER successfully orbited Mercury from 2011 into 2015 and revolutionized our understanding of the planet. New questions raised by the MESSENGER results motivate the much larger, dual-spacecraft BepiColombo mission, scheduled to arrive at Mercury in late 2025. Even after BepiColombo, many key questions central to understanding Mercury’s formation will likely require a Mercury lander mission, potentially enabled by sufficiently large launch vehicles. The return of samples from Mercury to Earth may long remain an aspiration for future generations of scientists and engineers.