When clinicians communicate effectively, patients retain more information, have higher trust and a better quality of life. Such a patient-centred approach is the future of clinical care, and this book is an essential how-to guide on improving these skills. Grounded in innovative and evidence-based methodology, perfected through over twenty years of teaching in the VitalTalk training program, content includes foundational communication skills, how to help patients plan for the future, what to do when you are really stuck, and strategies to work through conflicts with colleagues. In this updated edition, emphasis is placed on the roles privilege, race, and power play in the medical encounter, and new tools are provided to help clinicians navigate this landscape with greater self-awareness and sensitivity. This practical guide is filled with skills and roadmaps, demonstrating how to be clearer when sharing information, more competent at understanding patient concerns, and more effective when making recommendations.
While adolescent-onset schizophrenia (ADO-SCZ) and adolescent-onset bipolar disorder with psychosis (psychotic ADO-BPD) present a more severe clinical course than their adult forms, their pathophysiology is poorly understood. Here, we study potentially state- and trait-related white matter diffusion-weighted magnetic resonance imaging (dMRI) abnormalities along the adolescent-onset psychosis continuum to address this need.
Forty-eight individuals with ADO-SCZ (20 female/28 male), 15 individuals with psychotic ADO-BPD (7 female/8 male), and 35 healthy controls (HCs, 18 female/17 male) underwent dMRI and clinical assessments. Maps of extracellular free-water (FW) and fractional anisotropy of cellular tissue (FAT) were compared between individuals with psychosis and HCs using tract-based spatial statistics and FSL's Randomise. FAT and FW values were extracted, averaged across all voxels that demonstrated group differences, and then utilized to test for the influence of age, medication, age of onset, duration of illness, symptom severity, and intelligence.
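The extraction step described above, averaging each diffusion metric over the voxels that showed group differences, can be sketched as follows. This is a minimal illustration with synthetic arrays; the names `fat_map`, `fw_map`, and `sig_mask` are hypothetical and not taken from the study's actual pipeline:

```python
import numpy as np

# Synthetic stand-ins for one subject's voxelwise maps and the binary
# mask of voxels that demonstrated significant group differences
rng = np.random.default_rng(0)
fat_map = rng.uniform(0.2, 0.6, size=(4, 4, 4))   # fractional anisotropy of tissue (FAT)
fw_map = rng.uniform(0.0, 0.3, size=(4, 4, 4))    # extracellular free-water (FW)
sig_mask = rng.random((4, 4, 4)) > 0.5            # voxels with group differences

# Average each metric across the significant voxels only; these per-subject
# means would then enter the correlational analyses (age, medication, etc.)
mean_fat = fat_map[sig_mask].mean()
mean_fw = fw_map[sig_mask].mean()
```

In the study itself this masking would be applied to real TBSS skeleton maps rather than random arrays; the sketch only shows the mask-then-average logic.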
Individuals with adolescent-onset psychosis exhibited pronounced FW and FAT abnormalities compared to HCs. FAT reductions were spatially more widespread in ADO-SCZ. FW increases, however, were only present in psychotic ADO-BPD. In HCs, but not in individuals with adolescent-onset psychosis, FAT was positively related to age.
We observe evidence for cellular (FAT) and extracellular (FW) white matter abnormalities in adolescent-onset psychosis. Although cellular white matter abnormalities were more prominent in ADO-SCZ, such alterations may reflect a shared trait, i.e. neurodevelopmental pathology, present across the psychosis spectrum. Extracellular abnormalities were evident in psychotic ADO-BPD, potentially indicating a more dynamic, state-dependent brain reaction to psychosis.
Pharmacogenomic testing has emerged to aid medication selection for patients with major depressive disorder (MDD) by identifying potential gene-drug interactions (GDI). Many pharmacogenomic tests are available with varying levels of supporting evidence, including direct-to-consumer and physician-ordered tests. We retrospectively evaluated the safety of using a physician-ordered combinatorial pharmacogenomic test (GeneSight) to guide medication selection for patients with MDD in a large, randomized, controlled trial (GUIDED).
Materials and Methods
Patients diagnosed with MDD who had an inadequate response to ≥1 psychotropic medication were randomized to treatment as usual (TAU) or combinatorial pharmacogenomic test-guided care (guided-care). All received combinatorial pharmacogenomic testing and medications were categorized by predicted GDI (no, moderate, or significant GDI). Patients and raters were blinded to study arm, and physicians were blinded to test results for patients in TAU, through week 8. Measures included adverse events (AEs, present/absent), worsening suicidal ideation (increase of ≥1 on the corresponding HAM-D17 question), or symptom worsening (HAM-D17 increase of ≥1). These measures were evaluated based on medication changes [add only, drop only, switch (add and drop), any, and none] and study arm, as well as baseline medication GDI.
Most patients had a medication change between baseline and week 8 (938/1,166; 80.5%), including 269 (23.1%) who added only, 80 (6.9%) who dropped only, and 589 (50.5%) who switched medications. In the full cohort, changing medications resulted in an increased relative risk (RR) of experiencing AEs at both week 4 and 8 [RR 2.00 (95% CI 1.41–2.83) and RR 2.25 (95% CI 1.39–3.65), respectively]. This was true regardless of arm, with no significant difference observed between guided-care and TAU, though the RRs for guided-care were lower than for TAU. Medication change was not associated with increased suicidal ideation or symptom worsening, regardless of study arm or type of medication change. Special attention was focused on patients who entered the study taking medications identified by pharmacogenomic testing as likely having significant GDI; those who were only taking medications subject to no or moderate GDI at week 8 were significantly less likely to experience AEs than those who were still taking at least one medication subject to significant GDI (RR 0.39, 95% CI 0.15–0.99, p=0.048). No other significant differences in risk were observed at week 8.
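As a worked illustration of the relative-risk figures reported above, an RR point estimate and its 95% confidence interval can be computed from 2×2 event counts using the standard log-normal approximation. The counts below are made up for illustration and are not taken from the trial:

```python
import math

def relative_risk(events_a, n_a, events_b, n_b):
    """Relative risk of group A vs. group B with a 95% CI
    (log-normal approximation)."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR) from the 2x2 counts
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts: 60/300 patients with AEs among medication changers,
# 20/200 among non-changers
rr, lo, hi = relative_risk(60, 300, 20, 200)  # rr = 2.0
```

An RR of 2.0 with a CI excluding 1.0, as in this hypothetical example, mirrors the form of the week-4 and week-8 results quoted above.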
These data indicate that patient safety in the combinatorial pharmacogenomic test-guided care arm was no worse than TAU in the GUIDED trial. Moreover, combinatorial pharmacogenomic-guided medication selection may reduce some safety concerns. Collectively, these data demonstrate that combinatorial pharmacogenomic testing can be adopted safely into clinical practice without increasing the risk of symptom worsening among patients.
The Genomics Used to Improve DEpression Decisions (GUIDED) trial assessed outcomes associated with combinatorial pharmacogenomic (PGx) testing in patients with major depressive disorder (MDD). Analyses used the 17-item Hamilton Depression (HAM-D17) rating scale; however, studies demonstrate that the abbreviated, core depression symptom-focused HAM-D6 rating scale may have greater sensitivity for detecting differences between treatment and placebo. The sensitivity of the HAM-D6 has not, however, been tested for two active treatment arms. Here, we evaluated the sensitivity of the HAM-D6 scale, relative to the HAM-D17 scale, when assessing outcomes for actively treated patients in the GUIDED trial.
Outpatients (N=1,298) diagnosed with MDD and an inadequate treatment response to ≥1 psychotropic medication were randomized into treatment as usual (TAU) or combinatorial PGx-guided (guided-care) arms. Combinatorial PGx testing was performed on all patients, though test reports were only available to the guided-care arm. All patients and raters were blinded to study arm until after week 8. Medications on the combinatorial PGx test report were categorized based on the level of predicted gene-drug interactions: ‘use as directed’, ‘moderate gene-drug interactions’, or ‘significant gene-drug interactions’. Patient outcomes were assessed by arm at week 8 using the HAM-D6 and HAM-D17 rating scales, including symptom improvement (percent change in scale), response (≥50% decrease in scale), and remission (HAM-D6 ≤4 and HAM-D17 ≤7).
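The outcome definitions above (percent change, response as a ≥50% decrease, remission as a score at or below a scale-specific cutoff) translate directly into code. A minimal sketch, with illustrative function and variable names not taken from the trial:

```python
def outcomes(baseline, week8, remission_cutoff):
    """Classify trial outcomes from baseline and week-8 scale scores."""
    improvement = 100.0 * (baseline - week8) / baseline  # percent change from baseline
    response = (baseline - week8) >= 0.5 * baseline      # >= 50% decrease
    remission = week8 <= remission_cutoff                # e.g. 4 for HAM-D6, 7 for HAM-D17
    return improvement, response, remission

# HAM-D17 example: baseline 24, week-8 score 10 -> responder, not remitter
imp, resp, rem = outcomes(24, 10, remission_cutoff=7)
```

Note that remission is stricter than response: a patient can halve a high baseline score and still sit above the remission cutoff, as in the hypothetical example here.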
At week 8, the guided-care arm demonstrated statistically significant symptom improvement over TAU using the HAM-D6 scale (Δ=4.4%, p=0.023), but not using the HAM-D17 scale (Δ=3.2%, p=0.069). The response rate increased significantly for guided-care compared with TAU using both HAM-D6 (Δ=7.0%, p=0.004) and HAM-D17 (Δ=6.3%, p=0.007). Remission rates were also significantly greater for guided-care versus TAU using both scales (HAM-D6 Δ=4.6%, p=0.031; HAM-D17 Δ=5.5%, p=0.005). Patients taking medication(s) predicted to have gene-drug interactions at baseline showed further increased benefit over TAU at week 8 using HAM-D6 for symptom improvement (Δ=7.3%, p=0.004), response (Δ=10.0%, p=0.001), and remission (Δ=7.9%, p=0.005). Comparatively, the magnitude of the differences in outcomes between arms at week 8 was lower using HAM-D17 (symptom improvement Δ=5.0%, p=0.029; response Δ=8.0%, p=0.008; remission Δ=7.5%, p=0.003).
Combinatorial PGx-guided care achieved significantly better patient outcomes compared with TAU when assessed using the HAM-D6 scale. These findings suggest that the HAM-D6 scale is better suited than the HAM-D17 for evaluating change in randomized, controlled trials comparing active treatment arms.
OBJECTIVES/SPECIFIC AIMS: The objective of this research was to assess the clinical impact of simulation-based team leadership training on team leadership effectiveness and patient care during actual trauma resuscitations. This translational work addresses an important gap in simulation research and medical education research. METHODS/STUDY POPULATION: Eligible trauma team leaders were randomized to the intervention (4-hour simulation-based leadership training) or control (standard training) condition. Subject-led actual trauma patient resuscitations were video recorded and coded for leadership behaviors (primary outcome) and patient care (secondary outcome) using novel leadership and trauma patient care metrics. Patient outcomes for trauma resuscitations were obtained through the Harborview Medical Center Trauma Registry and analyzed descriptively. A one-way ANCOVA was conducted to test the effectiveness of our training intervention versus a control group for each outcome (leadership effectiveness and patient care) while accounting for pre-training performance, injury severity score, postgraduate training year, and days since training occurred. The association between leadership effectiveness and patient care was evaluated using random coefficient modeling. RESULTS/ANTICIPATED RESULTS: Sixty team leaders, 30 in each condition, completed the study. There was a significant difference in post-training leadership effectiveness [F(1,54)=30.19, p<.001, η²=.36] between the experimental and control conditions. There was no direct impact of training on patient care [F(1,54)=1.0, p=0.33, η²=.02]; however, leadership effectiveness mediated an indirect effect of training on patient care. Across all trauma resuscitations, team leader effectiveness correlated with patient care (p<0.05), as predicted by team leadership conceptual models. DISCUSSION/SIGNIFICANCE OF IMPACT: This work represents a critical step in advancing translational simulation-based research (TSR).
While there are several examples of high-quality translational research programs, they primarily focus on procedural tasks and do not evaluate highly complex skills such as leadership. Complex skills present significant measurement challenges because individuals and processes are interrelated, with multiple components and an emergent nature of tasks and related behaviors. We provide evidence that simulation-based training of a complex skill (team leadership behavior) transfers to a complex clinical setting (emergency department) with highly variable clinical tasks (trauma resuscitations). Our novel team leadership training significantly improved overall leadership performance, and leadership effectiveness partially mediated the training's positive effect on patient care. This represents the first rigorous, randomized, controlled trial of a leadership- or teamwork-focused training that systematically evaluates the impact on process (leadership) and performance (patient care).
Major depressive disorder (MDD) is a leading cause of disease burden worldwide, with lifetime prevalence in the United States of 17%. Here we present the results of the first prospective, large-scale, patient- and rater-blind, randomized controlled trial evaluating the clinical importance of achieving congruence between combinatorial pharmacogenomic (PGx) testing and medication selection for MDD.
1,167 outpatients diagnosed with MDD and an inadequate response to ≥1 psychotropic medications were enrolled and randomized 1:1 to a Treatment as Usual (TAU) arm or PGx-guided care arm. Combinatorial PGx testing categorized medications in three groups based on the level of gene-drug interactions: use as directed, use with caution, or use with increased caution and more frequent monitoring. Patient assessments were performed at weeks 0 (baseline), 4, 8, 12 and 24. Patients, site raters, and central raters were blinded in both arms until after week 8. In the guided-care arm, physicians had access to the combinatorial PGx test result to guide medication selection. Primary outcomes utilized the Hamilton Depression Rating Scale (HAM-D17) and included symptom improvement (percent change in HAM-D17 from baseline), response (≥50% decrease in HAM-D17 from baseline), and remission (HAM-D17 ≤7) at the fully blinded week 8 time point. The durability of patient outcomes was assessed at week 24. Medications were considered congruent with PGx test results if they were in the ‘use as directed’ or ‘use with caution’ report categories, while medications in the ‘use with increased caution and more frequent monitoring’ category were considered incongruent. Patients who started on incongruent medications were analyzed separately according to whether they changed to congruent medications by week 8.
At week 8, symptom improvement for individuals in the guided-care arm was not significantly different than TAU (27.2% versus 24.4%, p=0.11). However, individuals in the guided-care arm were more likely than those in TAU to achieve remission (15% versus 10%; p<0.01) and response (26% versus 20%; p=0.01). Remission rates, response rates, and symptom reductions continued to improve in the guided-treatment arm until the 24-week time point. Congruent prescribing increased to 91% in the guided-care arm by week 8. Among patients who were taking one or more incongruent medications at baseline, those who changed to congruent medications by week 8 demonstrated significantly greater symptom improvement (p<0.01), response (p=0.04), and remission rates (p<0.01) compared to those who persisted on incongruent medications.
Combinatorial PGx testing improves short- and long-term response and remission rates for MDD compared to standard of care. In addition, prescribing congruency with PGx-guided medication recommendations is important for achieving symptom improvement, response, and remission for MDD patients.
Funding Acknowledgements: This study was supported by Assurex Health, Inc.
Electron and proton microprobes, along with electron backscatter diffraction (EBSD) analysis, were used to study the microstructure of the contemporary Al–Cu–Li alloy AA2099-T8. In electron probe microanalysis, wavelength and energy dispersive X-ray spectrometry were used in parallel with soft X-ray emission spectroscopy (SXES) to characterize the microstructure of AA2099-T8. The electron microprobe was able to identify five unique compositions for constituent intermetallic (IM) particles containing combinations of Al, Cu, Fe, Mn, and Zn. A sixth IM type was found to be rich in Ti and B (suggesting TiB2), and a seventh IM type contained Si. EBSD patterns for the five constituent IM particles containing Al, Cu, Fe, Mn, and Zn indicated that they were isomorphous with four phases in the 2xxx series aluminium alloys: Al6(Fe, Mn), Al13(Fe, Mn)4 (two slightly different compositions), Al37Cu2Fe12, and Al7Cu2Fe. SXES revealed that Li was present in some constituent IM particles. Al SXES mapping revealed an Al-enriched (i.e., Cu, Li-depleted) zone in the grain boundary network. From the EBSD analysis, the kernel average misorientation map showed higher levels of localized misorientation in this region, suggesting greater deformation or stored energy. Proton-induced X-ray emission revealed banding of the TiB2 IM particles and Cu inter-band enrichment.
TAG-depleted remnants of postprandial chylomicrons are a risk factor for atherosclerosis. Recent studies have demonstrated that in the fasted state, the majority of chylomicrons are small enough for transcytosis to the arterial subendothelial space, where they accelerate atherogenesis. However, the size distribution of chylomicrons in the absorptive state is unclear. This study explored, in normolipidaemic subjects, the postprandial distribution of the chylomicron marker apoB-48 in a TAG-rich lipoprotein plasma fraction (Svedberg flotation rate (Sf) > 400), in partially hydrolysed remnants (Sf 20–400) and in a TAG-depleted fraction (Sf < 20), following ingestion of isoenergetic meals with either palm oil (PO), rice bran oil or coconut oil. Results from this study show that the majority of fasting chylomicrons are within the potentially pro-atherogenic Sf < 20 fraction (70–75 %). Following the ingestion of test meals, chylomicronaemia was also principally distributed within the Sf < 20 fraction. However, approximately 40 % of subjects demonstrated exaggerated postprandial lipaemia specifically in response to the SFA-rich PO meal, with a transient shift to more buoyant chylomicron fractions. The latter demonstrates that heterogeneity in the magnitude and duration of hyper-remnantaemia depends on both the nature of the meal fatty acids ingested and possible metabolic determinants that influence chylomicron metabolism. The study findings reiterate that fasting plasma TAG is a poor indicator of atherogenic chylomicron remnant homoeostasis and emphasise the merits of specifically considering chylomicron remnant abundance and kinetics in the context of atherogenic risk. Few studies address the latter, despite the majority of life being spent in the postprandial, absorptive state.
Studies were conducted in 1997 and 1998 to evaluate the efficacy and economics of glyphosate-resistant and nontransgenic soybean systems. The three highest-yielding glyphosate-resistant and nontransgenic soybean cultivars were chosen each year for three Mississippi locations based on Mississippi Soybean Variety Trials. Treatments within each cultivar/herbicide system included nontreated, low input (one-half of the labeled rate), medium input (labeled rate), and high input level (labeled rate plus an additional postemergence application). In 1997, all systems controlled hemp sesbania by more than 80%, but nontransgenic systems controlled hemp sesbania more than the glyphosate-resistant systems in most instances in 1998. High input levels usually controlled pitted morningglory more than low or medium inputs in 1997. In 1998, both systems controlled pitted morningglory by 90% or more at Shelby; however, at other locations control was less than 85%. Soybean yield in 1997 at Shelby was higher with the glyphosate-resistant system than with the nontransgenic systems at medium and high input levels, primarily because of early-season injury to a metribuzin-sensitive cultivar in the nontransgenic system. In 1998, soybean yield at Shelby was higher with the nontransgenic system than the glyphosate-resistant system, regardless of input level, due to poor late-season hemp sesbania control with glyphosate. Net returns were often higher with the glyphosate-resistant system at Shelby in 1997. Within the glyphosate-resistant system, there were no differences in net return between input levels. Within the nontransgenic system, low input level net returns were higher compared to medium and high input levels due to higher soybean yield and lower herbicide cost. At Brooksville, using high input levels, the glyphosate-resistant system's net returns were $55.00/ha more than the nontransgenic system's. Net returns were higher with the nontransgenic system compared to the glyphosate-resistant system at Shelby in 1998, regardless of input level.
The main question that Firestone & Scholl (F&S) pose is whether “what and how we see is functionally independent from what and how we think, know, desire, act, and so forth” (sect. 2, para. 1). We synthesize a collection of concerns from an interdisciplinary set of coauthors regarding F&S's assumptions and appeals to intuition, resulting in their treatment of visual perception as context-free.
Exposures to antioxidants (AO) are associated with levels of C-reactive protein (CRP), but the pattern of evidence is mixed, due in part to studying each potential AO, one at a time, when multiple AO exposures might affect CRP levels. By studying multiple AO via a composite indicator approach, we estimate the degree to which serum CRP level is associated with serum AO level. Standardised field survey protocols for the US National Health and Nutrition Examination Survey (NHANES) 2003–2006 yielded nationally representative cross-sectional samples of adults aged 20 years and older (n 8841). NHANES latex-enhanced nephelometry quantified serum CRP levels. Liquid chromatography quantified serum concentrations of vitamins A, E and C and carotenoids. Using structural equations, we regressed CRP level on AO levels, and derived a summary estimate for a composite of these potential antioxidants (CPA), with covariates held constant. The association linking CPA with CRP was inverse, stronger for slightly elevated CRP (1·8≤CRP<10 mg/l; slope= −1·08; 95 % CI −1·39, −0·77) and weaker for highly elevated CRP (≥10 mg/l; slope= −0·52; 95 % CI −0·68, −0·35), with little change when covariates were added. Vitamins A and C, as well as lutein+zeaxanthin, were prominent contributors to the composite. In these cross-sectional data studied via a composite indicator approach, the CPA level and the CRP level were inversely related. The stage is set for more confirmatory longitudinal or intervention research on multiple vitamins. The composite indicator approach might be most useful in epidemiology when several exposure constructs are too weakly inter-correlated to be studied via formal measurement models for underlying latent dimensions.
OBJECTIVE. To determine whether use of contact precautions on hospital ward patients is associated with patient adverse events.
DESIGN. Individually matched prospective cohort study.
SETTING. The University of Maryland Medical Center, a tertiary care hospital in Baltimore, Maryland.
PATIENTS AND METHODS. A total of 296 medical or surgical inpatients admitted to non–intensive care unit hospital wards were enrolled at admission from January to November 2010. Patients on contact precautions were individually matched by hospital unit after an initial 3-day length of stay to patients not on contact precautions. Adverse events were detected by physician chart review and categorized as noninfectious, preventable and severe noninfectious, and infectious adverse events during the patient’s stay using the standardized Institute for Healthcare Improvement’s Global Trigger Tool.
RESULTS. The cohort of 148 patients on contact precautions at admission was matched with a cohort of 148 patients not on contact precautions. Of the total 296 subjects, 104 (35.1%) experienced at least 1 adverse event during their hospital stay. Contact precautions were associated with fewer noninfectious adverse events (rate ratio [RtR], 0.70; 95% confidence interval [CI], 0.51–0.95; P=.02) and, although not statistically significant, with fewer severe adverse events (RtR, 0.69; 95% CI, 0.46–1.03; P=.07). Preventable adverse events did not significantly differ between patients on contact precautions and patients not on contact precautions (RtR, 0.85; 95% CI, 0.59–1.24; P=.41).
CONCLUSIONS. Hospital ward patients on contact precautions were less likely to experience noninfectious adverse events during their hospital stay than patients not on contact precautions.
Infect. Control Hosp. Epidemiol. 2015;36(11):1268–1274
Chemoradiotherapy followed by monthly temozolomide (TMZ) is the standard of care for patients with glioblastoma multiforme (GBM). Case reports have identified GBM patients who experienced transient radiological deterioration after concurrent chemoradiotherapy which stabilized or resolved after additional cycles of adjuvant TMZ, a phenomenon known as radiographic pseudoprogression. Little is known about the natural history of radiographic pseudoprogression.
We retrospectively evaluated the incidence of radiographic pseudoprogression in a population-based cohort of GBM patients and determined its relationship with outcome and MGMT promoter methylation status.
Out of 43 evaluable patients, 25 (58%) exhibited radiographic progression on the first MRI after concurrent treatment. Twenty of these went on to receive adjuvant TMZ, and subsequent investigation demonstrated radiographic pseudoprogression in 10 cases (50%). Median survival (MS) was better in patients with pseudoprogression (MS 14.5 months) compared to those with true radiologic progression (MS 9.1 months, p=0.025). The MS of patients with pseudoprogression was similar to those who stabilized/responded during concurrent treatment (p=0.31). Neither the extent of the initial resection nor dexamethasone dosing was associated with pseudoprogression.
These data suggest that physicians should continue adjuvant TMZ in GBM patients when early MRI scans show evidence of progression following concurrent chemoradiotherapy, as up to 50% of these patients will experience radiologic stability or improvement in subsequent treatment cycles.
A number of copy number variants (CNVs) have been suggested as susceptibility factors for schizophrenia. For some of these the data remain equivocal, and the frequency in individuals with schizophrenia is uncertain. We aimed to determine the contribution of CNVs at 15 schizophrenia-associated loci (a) using a large new data-set of patients with schizophrenia (n = 6882) and controls (n = 6316), and (b) combining our results with those from previous studies. We used Illumina microarrays to analyse our data. Analyses were restricted to 520 766 probes common to all arrays used in the different data-sets. We found higher rates in participants with schizophrenia than in controls for 13 of the 15 previously implicated CNVs. Six were nominally significantly associated (P<0.05) in this new data-set: deletions at 1q21.1, NRXN1, 15q11.2 and 22q11.2, and duplications at 16p11.2 and the Angelman/Prader–Willi syndrome (AS/PWS) region. All eight AS/PWS duplications in patients were of maternal origin. When combined with published data, 11 of the 15 loci showed highly significant evidence for association with schizophrenia. We strengthen the support for the majority of the previously implicated CNVs in schizophrenia. About 2.5% of patients with schizophrenia and 0.9% of controls carry a large, detectable CNV at one of these loci. Routine CNV screening may be clinically appropriate given the high rate of known deleterious mutations in the disorder and the comorbidity associated with these heritable mutations.