The Hierarchical Taxonomy of Psychopathology (HiTOP) has emerged out of the quantitative approach to psychiatric nosology. This approach identifies psychopathology constructs based on patterns of co-variation among signs and symptoms. The initial HiTOP model, which was published in 2017, is based on a large literature that spans decades of research. HiTOP is a living model that undergoes revision as new data become available. Here we discuss advantages and practical considerations of using this system in psychiatric practice and research. We especially highlight limitations of HiTOP and ongoing efforts to address them. We describe differences and similarities between HiTOP and existing diagnostic systems. Next, we review the types of evidence that informed development of HiTOP, including populations in which it has been studied and data on its validity. The paper also describes how HiTOP can facilitate research on genetic and environmental causes of psychopathology as well as the search for neurobiologic mechanisms and novel treatments. Furthermore, we consider implications for public health programs and prevention of mental disorders. We also review data on clinical utility and illustrate clinical application of HiTOP. Importantly, the model is based on measures and practices that are already used widely in clinical settings. HiTOP offers a way to organize and formalize these techniques. This model already can contribute to progress in psychiatry and complement traditional nosologies. Moreover, HiTOP seeks to facilitate research on linkages between phenotypes and biological processes, which may enable construction of a system that encompasses both biomarkers and precise clinical description.
The Mountain West Clinical Translational Research – Infrastructure Network (MW CTR-IN), established in 2013, is a research network of 13 university partners located across seven Institutional Development Award (IDeA) states that targets health disparities. This is an enormous undertaking because of the size of the infrastructure network, which encompasses a third of the US landmass and spans four time zones in predominantly rural and underserved areas with populations facing major health disparities. In this paper, we map barriers, strategies, and metrics onto an educational conceptual model adapted from Fink (2013). Applying this model, we used four tailored approaches across this regional infrastructure network to: (1) assess individual faculty members' specific needs, (2) reach out and engage with faculty, (3) provide customized services to meet the situational needs of faculty, and (4) maintain a “closed communication feedback loop” between the Professional Development (PD) core and MW CTR-IN faculty within the context of their home institutional environment. Summary statements from participating faculty indicate that these approaches were received positively. Because these approaches are grounded in best educational practices, they provide a sound foundation to refine and build upon, with implications for future use in other CTR-IN networks and institutions in the IDeA states.
Pharmacogenomic testing has emerged to aid medication selection for patients with major depressive disorder (MDD) by identifying potential gene-drug interactions (GDI). Many pharmacogenomic tests are available with varying levels of supporting evidence, including direct-to-consumer and physician-ordered tests. We retrospectively evaluated the safety of using a physician-ordered combinatorial pharmacogenomic test (GeneSight) to guide medication selection for patients with MDD in a large, randomized, controlled trial (GUIDED).
Materials and Methods
Patients diagnosed with MDD who had an inadequate response to ≥1 psychotropic medication were randomized to treatment as usual (TAU) or combinatorial pharmacogenomic test-guided care (guided-care). All patients received combinatorial pharmacogenomic testing, and medications were categorized by predicted GDI (no, moderate, or significant GDI). Patients and raters were blinded to study arm, and physicians were blinded to test results for patients in TAU, through week 8. Measures included adverse events (AEs, present/absent), worsening suicidal ideation (an increase of ≥1 on the corresponding HAM-D17 item), and symptom worsening (an increase of ≥1 in HAM-D17 score). These measures were evaluated based on medication changes [add only, drop only, switch (add and drop), any, and none] and study arm, as well as baseline medication GDI.
Results
Most patients had a medication change between baseline and week 8 (938/1,166; 80.5%), including 269 (23.1%) who added only, 80 (6.9%) who dropped only, and 589 (50.5%) who switched medications. In the full cohort, changing medications resulted in an increased relative risk (RR) of experiencing AEs at both week 4 and 8 [RR 2.00 (95% CI 1.41–2.83) and RR 2.25 (95% CI 1.39–3.65), respectively]. This was true regardless of arm, with no significant difference observed between guided-care and TAU, though the RRs for guided-care were lower than for TAU. Medication change was not associated with increased suicidal ideation or symptom worsening, regardless of study arm or type of medication change. Special attention was focused on patients who entered the study taking medications identified by pharmacogenomic testing as likely having significant GDI; those who were only taking medications subject to no or moderate GDI at week 8 were significantly less likely to experience AEs than those who were still taking at least one medication subject to significant GDI (RR 0.39, 95% CI 0.15–0.99, p=0.048). No other significant differences in risk were observed at week 8.
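For readers who want to reproduce the style of analysis, the sketch below shows how a relative risk and its Wald-type 95% confidence interval are conventionally computed from a 2×2 table; the event counts are purely hypothetical and are not taken from the GUIDED data.

```python
import math

def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed, z=1.96):
    """Relative risk with a Wald-type 95% CI on the log scale."""
    risk_exposed = events_exposed / n_exposed
    risk_unexposed = events_unexposed / n_unexposed
    rr = risk_exposed / risk_unexposed
    # Standard error of log(RR) from the 2x2 counts
    se_log_rr = math.sqrt(
        1 / events_exposed - 1 / n_exposed + 1 / events_unexposed - 1 / n_unexposed
    )
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, (lower, upper)

# Hypothetical counts for illustration only (not the GUIDED results):
# 120 of 938 patients with a medication change had an AE vs 23 of 228 without a change.
rr, ci = relative_risk(120, 938, 23, 228)
print(f"RR = {rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```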
Conclusion
These data indicate that patient safety in the combinatorial pharmacogenomic test-guided care arm was no worse than in TAU in the GUIDED trial. Moreover, combinatorial pharmacogenomic-guided medication selection may reduce some safety concerns. Collectively, these data demonstrate that combinatorial pharmacogenomic testing can be adopted safely into clinical practice without increasing the risk of symptom worsening.
During a disease outbreak, healthcare workers (HCWs) are essential to treat infected individuals. However, these HCWs are themselves susceptible to contracting the disease. As more HCWs become infected, fewer are available to provide care for others, and the overall quality of care available to infected individuals declines. This depletion of HCWs may contribute to the epidemic's severity. To examine this issue, we explicitly model declining quality of care in four differential equation-based susceptible-infected-recovered (SIR)-type models with vaccination. We assume that vaccination, recovery, and survival rates are affected by the quality of care delivered. We show that explicitly modelling HCWs and accounting for declining quality of care significantly alters model-predicted disease outcomes, specifically case counts and mortality. Models neglecting the decline in quality of care that results from infection of HCWs may significantly underestimate cases and mortality. These models may be useful for informing health policy, which may differ for HCWs and the general population. Models accounting for declining quality of care may therefore improve the management interventions considered to mitigate the effects of a future outbreak.
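As an illustration of the general approach (not the authors' exact equations), the sketch below shows one way an SIR-type model with vaccination can include a healthcare-worker compartment whose depletion degrades quality of care; the functional forms and parameter values are assumptions for demonstration only.

```python
import numpy as np
from scipy.integrate import odeint

def hcw_sir(y, t, beta, gamma_max, nu, H0):
    """Illustrative SIR-type model with a healthcare-worker (HCW) compartment.

    Quality of care is proxied by the fraction of HCWs still uninfected,
    which scales the recovery and vaccination rates. This is a sketch,
    not the exact system used in the paper.
    """
    S, I, R, H = y                      # susceptibles, infected, recovered, healthy HCWs
    q = H / H0                          # quality of care in [0, 1]
    gamma = gamma_max * q               # recovery rate declines as HCWs are depleted
    v = nu * q                          # vaccination rate also depends on care capacity
    dS = -beta * S * I - v * S
    dI = beta * S * I + beta * H * I - gamma * I
    dR = gamma * I + v * S
    dH = -beta * H * I                  # HCWs become infected at the same transmission rate
    return [dS, dI, dR, dH]

y0 = [0.94, 0.01, 0.0, 0.05]            # initial proportions; 5% of the population are HCWs
t = np.linspace(0, 200, 2001)
sol = odeint(hcw_sir, y0, t, args=(0.3, 0.1, 0.005, 0.05))
```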
We evaluated the safety and feasibility of high-intensity interval training via a novel telemedicine ergometer (MedBIKE™) in children with Fontan physiology.
Methods:
The MedBIKE™ is a custom telemedicine ergometer, incorporating a video game platform and a live feed of patient video/audio, electrocardiography, pulse oximetry, and power output, for remote medical supervision and modulation of work. There were three study phases: (I) an exercise workload comparison between the MedBIKE™ and a standard cardiopulmonary exercise ergometer in 10 healthy adults; (II) an in-hospital safety, feasibility, and user experience (via questionnaire) assessment of a MedBIKE™ high-intensity interval training protocol in children with Fontan physiology; and (III) an eight-week home-based high-intensity interval training programme in two participants with Fontan physiology.
Results:
There was good agreement in oxygen consumption during graded exercise at matched work rates between the cardiopulmonary exercise ergometer and MedBIKE™ (1.1 ± 0.5 L/minute versus 1.1 ± 0.5 L/minute, p = 0.44). Ten youth with Fontan physiology (11.5 ± 1.8 years old) completed a MedBIKE™ high-intensity interval training session with no adverse events. The participants found the MedBIKE™ to be enjoyable and easy to navigate. In two participants, the 8-week home-based protocol was tolerated well with completion of 23/24 (96%) and 24/24 (100%) of sessions, respectively, and no adverse events across the 47 sessions in total.
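The abstract does not state which agreement analysis produced the p-value, so the sketch below assumes a paired comparison of VO2 at matched work rates, supplemented by Bland-Altman bias and limits of agreement; the paired values are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical paired VO2 measurements (L/min) at matched work rates; illustration only.
vo2_cpet = np.array([0.6, 0.8, 1.0, 1.2, 1.4, 1.1, 0.9, 1.3, 1.5, 1.2])
vo2_medbike = np.array([0.62, 0.79, 1.05, 1.18, 1.36, 1.12, 0.88, 1.31, 1.48, 1.22])

# Paired t-test for a systematic difference between devices
t_stat, p_value = stats.ttest_rel(vo2_cpet, vo2_medbike)

# Bland-Altman bias and 95% limits of agreement
diff = vo2_medbike - vo2_cpet
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"p = {p_value:.2f}, bias = {bias:.3f} L/min, LoA = {loa[0]:.3f} to {loa[1]:.3f}")
```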
Conclusion:
The MedBIKE™ resulted in similar physiological responses as compared to a cardiopulmonary exercise test ergometer and the high-intensity interval training protocol was safe, feasible, and enjoyable in youth with Fontan physiology. A randomised-controlled trial of a home-based high-intensity interval training exercise intervention using the MedBIKE™ will next be undertaken.
Hospitalized patients placed in isolation due to a carrier state or infection with resistant or highly communicable organisms report higher rates of anxiety and loneliness and have fewer physician encounters, room entries, and vital sign records. We hypothesized that isolation status might adversely impact patient experience as reported through Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) surveys, particularly regarding communication.
Design
Retrospective analysis of HCAHPS survey results over 5 years.
Setting
A 1,165-bed, tertiary-care, academic medical center.
Patients
Patients on any type of isolation for at least 50% of their stay were the exposure group. Those never in isolation served as controls.
Methods
Multivariable logistic regression, adjusting for age, race, gender, payer, severity of illness, length of stay, and clinical service, was used to examine associations between isolation status and “top-box” experience scores. A dose-response relationship with increasing percentage of days spent in isolation was also analyzed.
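A minimal sketch of the kind of adjusted analysis described above is given below, assuming a statsmodels logistic regression on a hypothetical respondent-level dataset; the file name and column names are illustrative, not those of the actual HCAHPS extract.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataframe of HCAHPS responses; column names are illustrative.
# top_box: 1 if the respondent gave the highest ("top-box") rating, else 0
# isolated: 1 if on isolation for >=50% of the stay, else 0
df = pd.read_csv("hcahps_responses.csv")  # assumed file, for illustration

model = smf.logit(
    "top_box ~ isolated + age + C(race) + C(gender) + C(payer)"
    " + severity + length_of_stay + C(clinical_service)",
    data=df,
).fit()

# Adjusted odds ratio for isolation status with 95% CI
beta = model.params["isolated"]
ci = model.conf_int().loc["isolated"]
print(f"aOR = {np.exp(beta):.2f}, 95% CI {np.exp(ci[0]):.2f}-{np.exp(ci[1]):.2f}")
```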
Results
Patients in isolation reported worse experience, primarily with staff responsiveness (help toileting 63% vs 51%; adjusted odds ratio [aOR], 0.77; P = .0009) and overall care (rate hospital 80% vs 73%; aOR, 0.78; P < .0001), but they reported similar experience in other domains. No dose-response effect was observed.
Conclusion
Isolated patients do not report worse experiences for most aspects of provider communication, which is regarded as among the most important elements for safety and quality of care. However, patients in isolation had worse experiences with staff responsiveness for time-sensitive needs. The absence of a dose-response effect suggests that isolation status may be a marker for other factors, such as illness severity. Regardless, hospitals should emphasize timely staff response for this population.
Effects of soil tillage systems and nitrogen (N) fertilizer management on spring wheat yield components, grain yield and N-use efficiency (NUE) were evaluated in contrasting weather of 2013 and 2014 on a clay soil at the Royal Agricultural University's Harnhill Manor Farm, Cirencester, UK. Three tillage systems – conventional plough tillage (CT), high intensity non-inversion tillage (HINiT) and low intensity non-inversion tillage (LINiT) for seedbed preparation – were compared at four rates of N fertilizer (0, 70, 140 and 210 kg N/ha). Responses to the effects of the management practices were strongly influenced by weather conditions and varied across seasons. Grain yields were similar between LINiT and CT in 2013, while CT produced higher yields in 2014. Nitrogen fertilization effects also varied across the years with no significant effects observed on grain yield in 2013, while in 2014 applications up to 140 kg N/ha increased yield. Grain protein ranged from 10·1 to 14·5% and increased with N rate in both years. Nitrogen-use efficiency ranged from 12·6 to 49·1 kg grain per kg N fertilizer and decreased as N fertilization rate increased in both years. There was no tillage effect on NUE in 2013, while in 2014 NUE under CT was similar to LINiT and higher than HINiT. The effect of tillage and N fertilization on soil moisture and soil mineral N (SMN) fluctuated across years. In 2013, LINiT showed significantly higher soil moisture than CT, while soil moisture did not differ between tillage systems in 2014. Conventional tillage had significantly higher SMN at harvest time in 2014, while no significant differences on SMN were observed between tillage systems in 2013. These results indicate that LINiT can be used to produce similar spring wheat yield to CT on this particular soil type, if a dry cropping season is expected. Crop response to N fertilization is limited when soil residual N is higher, while in conditions of lower residual SMN, a higher N supply is needed to increase yield and improve grain protein content.
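Definitions of NUE vary; the sketch below assumes the agronomic-efficiency form (yield gain over the unfertilized control per kg of N applied), which yields the reported units of kg grain per kg N fertilizer. The yields used are hypothetical, and the paper's exact calculation may differ.

```python
def agronomic_nue(yield_fertilized_kg_ha, yield_control_kg_ha, n_rate_kg_ha):
    """Agronomic N-use efficiency: extra grain per kg of fertilizer N applied.

    One common definition with units of kg grain per kg N; treat this as an
    illustrative assumption rather than the paper's exact formulation.
    """
    if n_rate_kg_ha == 0:
        raise ValueError("NUE is undefined for the unfertilized control")
    return (yield_fertilized_kg_ha - yield_control_kg_ha) / n_rate_kg_ha

# Hypothetical plot yields (kg/ha) at 0 and 140 kg N/ha, for illustration only
print(agronomic_nue(6500, 4200, 140))   # ~16.4 kg grain per kg N
```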
This study describes psychometric properties of the NIH Toolbox Cognition Battery (NIHTB-CB) Composite Scores in an adult sample. The NIHTB-CB was designed for use in epidemiologic studies and clinical trials for ages 3 to 85. A total of 268 self-described healthy adults were recruited at four university-based sites, using stratified sampling guidelines to target demographic variability for age (20–85 years), gender, education, and ethnicity. The NIHTB-CB contains seven computer-based instruments assessing five cognitive sub-domains: Language, Executive Function, Episodic Memory, Processing Speed, and Working Memory. Participants completed the NIHTB-CB, corresponding gold standard validation measures selected to tap the same cognitive abilities, and sociodemographic questionnaires. Three Composite Scores were derived for both the NIHTB-CB and gold standard batteries: “Crystallized Cognition Composite,” “Fluid Cognition Composite,” and “Total Cognition Composite” scores. NIHTB Composite Scores showed acceptable internal consistency (Cronbach’s alphas=0.84 Crystallized, 0.83 Fluid, 0.77 Total), excellent test–retest reliability (r: 0.86–0.92), strong convergent (r: 0.78–0.90) and discriminant (r: 0.19–0.39) validities versus gold standard composites, and expected age effects (r=0.18 crystallized, r=−0.68 fluid, r=−0.26 total). Significant relationships with self-reported prior school difficulties and current health status, employment, and presence of a disability provided evidence of external validity. The NIH Toolbox Cognition Battery Composite Scores have excellent reliability and validity, suggesting they can be used effectively in epidemiologic and clinical studies. (JINS, 2014, 20, 1–11)
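As a reference for the internal-consistency figures quoted above, the sketch below implements the standard Cronbach's alpha formula on a hypothetical score matrix; it is illustrative only and does not reproduce the NIHTB-CB scoring pipeline.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                      # number of items/tests
    item_vars = item_scores.var(axis=0, ddof=1)   # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical standardized scores for 5 respondents on 3 tests (illustration only)
scores = [[52, 48, 50], [61, 59, 63], [45, 47, 44], [55, 53, 57], [50, 52, 49]]
print(round(cronbach_alpha(scores), 2))
```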
This study introduces a special series on validity studies of the Cognition Battery (CB) from the U.S. National Institutes of Health Toolbox for the Assessment of Neurological and Behavioral Function (NIHTB) (Gershon, Wagster et al., 2013) in an adult sample. This first study in the series briefly describes the sample, each of the seven instruments in the NIHTB-CB, and the general approach to data analysis. Data are provided on test–retest reliability and practice effects, and raw scores (mean, standard deviation, range) are presented for each instrument and for the gold standard instruments used to measure construct validity. Accompanying papers provide details on each instrument, including information about instrument development, psychometric properties, age and education effects on performance, and convergent and discriminant construct validity. One study in the series is devoted to a factor analysis of the NIHTB-CB in adults, and another describes the psychometric properties of three composite scores derived from the individual measures representing fluid and crystallized abilities and their combination. The NIHTB-CB is designed to provide a brief, comprehensive, common set of measures to allow comparisons among disparate studies and to improve scientific communication. (JINS, 2014, 20, 1–12)
Patterns in radar-detected internal layers in glaciers and ice streams can be tracked hundreds of kilometers downstream. We use distinctive patterns to delineate flowbands of Thwaites Glacier in the Amundsen Sea sector of West Antarctica. Flowbands contain information for the past century to millennium, the approximate time for ice to flow through the study region. GPS-detected flow directions (acquired in 2007/08) agree within uncertainty (~4°) with the radar-detected flowlines, indicating that the flow direction has not changed significantly in recent centuries. In contrast, InSAR-detected directions (from 1996) differ from the radar- and GPS-detected flowlines in all but the middle tributary, indicating caution is needed when using InSAR velocities to define flow directions. There is agreement between all three datasets in the middle tributary. We use two radar-detected flowlines to define a 95 km long flowband and perform a flux balance analysis using InSAR-derived velocities, radar-detected ice thickness, and estimates of the accumulation rate. Inferred thinning of 0.49 ± 0.34 m a–1 is consistent with satellite altimetry measurements, but has higher uncertainty due mainly to the velocity uncertainty. The uncertainty is underestimated because InSAR velocities often differ from GPS velocities by more than the stated uncertainties.
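A simplified flux-gate view of the flowband analysis is sketched below: the average thickness change is the accumulation rate minus the net flux divergence over the flowband area. All numbers are hypothetical, and the sketch ignores details (e.g., firn density, varying flowband width) handled in the actual study.

```python
def flowband_thinning_rate(flux_in_m3_a, flux_out_m3_a, accumulation_m_a, area_m2):
    """Average thickness change (m/a, ice equivalent) from a flux-gate balance.

    dH/dt = accumulation - (flux_out - flux_in) / area
    A simplified form of mass continuity; illustration only.
    """
    return accumulation_m_a - (flux_out_m3_a - flux_in_m3_a) / area_m2

# Hypothetical values for a ~95 km long, ~10 km wide flowband (illustration only)
area = 95e3 * 10e3                    # flowband area, m^2
dhdt = flowband_thinning_rate(
    flux_in_m3_a=5.0e9,               # ice flux entering the upstream gate (m^3/a)
    flux_out_m3_a=6.0e9,              # ice flux leaving the downstream gate (m^3/a)
    accumulation_m_a=0.6,             # surface accumulation (m ice eq. per year)
    area_m2=area,
)
print(f"dH/dt ≈ {dhdt:.2f} m/a")      # negative values indicate thinning
```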
Insurance accounting has for many years proved a challenging topic for standard setters, preparers and users, often described as a “black box”. Will recent developments, in particular the July 2010 Insurance Contracts Exposure Draft, herald a new era?
This paper reviews these developments, setting out key issues and implications. It concentrates on issues relevant to life insurers, although much of the content is also relevant to non-life insurers.
The paper compares certain IFRS and Solvency II developments, recognising that UK insurers face challenges in implementing new financial and regulatory reporting requirements in similar timeframes. The paper considers resulting external disclosure requirements and a possible future role for supplementary information.