Cognitive impairment is a core feature of psychotic disorders, but the profile of impairment across adulthood, particularly in African-American populations, remains unclear.
Using cross-sectional data from a case–control study of African-American adults with affective (n = 59) and nonaffective (n = 68) psychotic disorders, we examined cognitive functioning between early and middle adulthood (ages 20–60) on measures of general cognitive ability, language, abstract reasoning, processing speed, executive function, verbal memory, and working memory.
Both affective and nonaffective psychosis patients showed substantial and widespread cognitive impairments. However, comparison of cognitive functioning between controls and psychosis groups throughout early (ages 20–40) and middle (ages 40–60) adulthood also revealed age-associated group differences. During early adulthood, the nonaffective psychosis group showed increasing impairments with age on measures of general cognitive ability and executive function, while the affective psychosis group showed increasing impairment on a measure of language ability. Impairments on other cognitive measures remained mostly stable, although decreasing impairments on measures of processing speed, memory and working memory were also observed.
These findings suggest similarities, but also differences in the profile of cognitive dysfunction in adults with affective and nonaffective psychotic disorders. Both affective and nonaffective patients showed substantial and relatively stable impairments across adulthood. The nonaffective group also showed increasing impairments with age in general and executive functions, and the affective group showed an increasing impairment in verbal functions, possibly suggesting different underlying etiopathogenic mechanisms.
As referrals to specialist palliative care (PC) grow in volume and diversity, an evidence-based triage method is needed to enable services to manage waiting lists in a transparent, efficient, and equitable manner. Discrete choice experiments (DCEs) have not to date been used among PC clinicians, but may serve as a rigorous and efficient method to explore and inform the complex decision-making involved in PC triage. This article presents the protocol for a novel application of an international DCE as part of a mixed-method research program, ultimately aiming to develop a clinical decision-making tool for PC triage.
Five stages of protocol development were undertaken: (1) identification of attributes of interest; (2) creation and (3) execution of a pilot DCE; and (4) refinement and (5) planned execution of the final DCE.
Six attributes of interest to PC triage were identified and included in a DCE that was piloted with 10 palliative care practitioners. The pilot was found to be feasible, with an acceptable cognitive burden, but refinements were made, including the creation of an additional attribute to allow independent analysis of the concepts involved. Strategies for recruitment, data collection, analysis, and modeling were confirmed for the final planned DCE.
Significance of results
This DCE protocol serves as an example of how the sophisticated DCE methodology can be applied to health services research in PC. Discussion of the key elements that improved the utility, integrity, and feasibility of the DCE provides valuable insights.
Different diagnostic interviews are used as reference standards for major depression classification in research. Semi-structured interviews involve clinical judgement, whereas fully structured interviews are completely scripted. The Mini International Neuropsychiatric Interview (MINI), a brief fully structured interview, is also sometimes used. It is not known whether interview method is associated with probability of major depression classification.
To evaluate the association between interview method and odds of major depression classification, controlling for depressive symptom scores and participant characteristics.
Data collected for an individual participant data meta-analysis of Patient Health Questionnaire-9 (PHQ-9) diagnostic accuracy were analysed and binomial generalised linear mixed models were fit.
A total of 17 158 participants (2287 with major depression) from 57 primary studies were analysed. Among fully structured interviews, odds of major depression were higher for the MINI compared with the Composite International Diagnostic Interview (CIDI) (odds ratio (OR) = 2.10; 95% CI = 1.15–3.87). Compared with semi-structured interviews, fully structured interviews (MINI excluded) were non-significantly more likely to classify participants with low-level depressive symptoms (PHQ-9 scores ≤6) as having major depression (OR = 3.13; 95% CI = 0.98–10.00), similarly likely for moderate-level symptoms (PHQ-9 scores 7–15) (OR = 0.96; 95% CI = 0.56–1.66) and significantly less likely for high-level symptoms (PHQ-9 scores ≥16) (OR = 0.50; 95% CI = 0.26–0.97).
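Because odds ratios scale odds rather than probabilities, the practical effect of the reported MINI-versus-CIDI estimate depends on the baseline classification probability. A minimal Python sketch of the conversion (the 10% baseline is purely illustrative; only the OR = 2.10 comes from the results above):

```python
# Converting an odds ratio into a change in classification probability.
# The baseline probability below is hypothetical; OR = 2.10 is the
# MINI-vs-CIDI estimate reported above.

def apply_odds_ratio(p_baseline, odds_ratio):
    """Return the probability after multiplying the baseline odds by OR."""
    odds = p_baseline / (1 - p_baseline)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

# If the CIDI classified 10% of a subgroup as depressed, an OR of 2.10
# would imply the MINI classifies about 18.9% of the same subgroup:
print(round(apply_odds_ratio(0.10, 2.10), 3))  # → 0.189
```

The asymmetry is worth noting: the same OR produces a larger absolute change in probability near a 50% baseline than near the extremes.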
The MINI may identify more people as depressed than the CIDI, and semi-structured and fully structured interviews may not be interchangeable methods, but these results should be replicated.
Declaration of interest
Drs Jetté and Patten declare that they received a grant, outside the submitted work, from the Hotchkiss Brain Institute, which was jointly funded by the Institute and Pfizer. Pfizer was the original sponsor of the development of the PHQ-9, which is now in the public domain. Dr Chan is a steering committee member or consultant of Astra Zeneca, Bayer, Lilly, MSD and Pfizer. She has received sponsorships and honorarium for giving lectures and providing consultancy and her affiliated institution has received research grants from these companies. Dr Hegerl declares that within the past 3 years, he was an advisory board member for Lundbeck, Servier and Otsuka Pharma; a consultant for Bayer Pharma; and a speaker for Medice Arzneimittel, Novartis, and Roche Pharma, all outside the submitted work. Dr Inagaki declares that he has received grants from Novartis Pharma, lecture fees from Pfizer, Mochida, Shionogi, Sumitomo Dainippon Pharma, Daiichi-Sankyo, Meiji Seika and Takeda, and royalties from Nippon Hyoron Sha, Nanzando, Seiwa Shoten, Igaku-shoin and Technomics, all outside of the submitted work. Dr Yamada reports personal fees from Meiji Seika Pharma Co., Ltd., MSD K.K., Asahi Kasei Pharma Corporation, Seishin Shobo, Seiwa Shoten Co., Ltd., Igaku-shoin Ltd., Chugai Igakusha and Sentan Igakusha, all outside the submitted work. All other authors declare no competing interests. No funder had any role in the design and conduct of the study; collection, management, analysis and interpretation of the data; preparation, review or approval of the manuscript; and decision to submit the manuscript for publication.
Stress-related pathophysiology drives comorbid trajectories that elude precise prediction. Allostatic load algorithms that quantify biological “wear and tear” represent a comprehensive approach to detect multisystemic disease processes of the mind and body. However, the multiple morbidities directly or indirectly related to stress physiology remain enigmatic. Our aim in this article is to propose that biological comorbidities represent discrete pathophysiological processes captured by measuring allostatic load. This has applications in research and clinical settings to predict physical and psychiatric comorbidities alike. The reader will be introduced to the concepts of allostasis, allostatic states, allostatic load, and allostatic overload as they relate to stress-related diseases and the proposed prediction of biological comorbidities, which extends to the understanding of psychopathologies. In our transdisciplinary discussion, we will integrate perspectives related to (a) mitochondrial biology as a key player in the allostatic load time course toward diseases that “get under the skin and skull”; (b) epigenetics related to child maltreatment and biological embedding that shapes stress perception throughout lifespan development; and (c) evolutionary drivers of distinct personality profiles and biobehavioral patterns that are linked to dimensions of psychopathology.
We have undertaken an adaptive optics imaging survey of extrasolar planetary systems and stars showing interesting radial velocity trends from high-precision radial velocity searches. Adaptive optics increases the resolution and dynamic range of an image, substantially improving the detectability of faint close companions. This survey is sensitive to objects less luminous than the bottom of the main sequence at separations as close as 1″. We have detected stellar companions to the planet-bearing stars HD 114762 and Tau Boo. We have also detected a companion to the non-planet-bearing star 16 Cyg A.
Catheter-associated urinary tract infection (CAUTI) is considered a reasonably preventable event in the hospital setting, and it has been included in the US Department of Health and Human Services National Action Plan to Prevent Healthcare-Associated Infections. While multiple definitions for measuring CAUTI exist, each has important limitations, and understanding these limitations is important to both clinical practice and policy decisions. The National Healthcare Safety Network (NHSN) surveillance definition, the most frequently used outcome measure for CAUTI prevention efforts, has limited clinical correlation and does not necessarily reflect noninfectious harms related to the catheter. We advocate use of the device utilization ratio (DUR) as an additional performance measure for potential urinary catheter harm. The DUR is patient-centered and objective and is currently captured as part of NHSN reporting. Furthermore, these data are readily obtainable from electronic medical records. The DUR also provides a more direct reflection of improvement efforts focused on reducing inappropriate urinary catheter use.
Infect. Control Hosp. Epidemiol. 2016;37(3):327–333
The addition of a CdMgTe (CMT) layer at the back of a CdTe solar cell should improve its performance by reflecting both photoelectrons and forward-current electrons away from the rear surface. Higher collection of photoelectrons will increase the cell’s current, and reduction of forward current will increase its voltage. To achieve electron reflection, conformal CMT layers were deposited at the back of CdTe cells, and a variety of measurements including performance curves, transmission electron microscopy, x-ray photoelectron spectroscopy, and energy-dispersive x-ray spectroscopy were performed. Oxidation of magnesium in the CMT layer was addressed by adding a CdTe capping layer. MgCl2 passivation was substituted for CdCl2 in some cases, but little difference was seen.
The number of pediatric antimicrobial stewardship programs (ASPs) is increasing and program evaluation is a key component to improve efficiency and enhance stewardship strategies.
To determine the antimicrobials and diagnoses most strongly associated with a recommendation provided by a well-established pediatric ASP.
DESIGN AND SETTING
Retrospective cohort study from March 3, 2008, to March 2, 2013, of all ASP reviews performed at a free-standing pediatric hospital.
ASP recommendations were classified as follows: stop therapy, modify therapy, optimize therapy, or consult infectious diseases. A multinomial distribution model to determine the probability of each ASP recommendation category was performed on the basis of the specific antimicrobial agent or disease category. A logistic model was used to determine the odds of recommendation disagreement by the prescribing clinician.
The ASP made 2,317 recommendations: stop therapy (45%), modify therapy (26%), optimize therapy (19%), or consult infectious diseases (10%). Third-generation cephalosporins (0.20) were the antimicrobials with the highest predictive probability of an ASP recommendation whereas linezolid (0.05) had the lowest probability. Community-acquired pneumonia (0.26) was the diagnosis with the highest predictive probability of an ASP recommendation whereas fever/neutropenia (0.04) had the lowest probability. Disagreement with ASP recommendations by the prescribing clinician occurred 22% of the time, most commonly involving community-acquired pneumonia and ear/nose/throat infections.
Evaluation of our pediatric ASP identified specific clinical diagnoses and antimicrobials associated with an increased likelihood of an ASP recommendation. Focused interventions targeting these high-yield areas may result in increased program efficiency and efficacy.
To better understand tuberculosis (TB) infection control (IC) in healthcare facilities (HCFs) in Georgia.
A cross-sectional evaluation of healthcare worker (HCW) knowledge, beliefs and behaviors toward TB IC measures including latent TB infection (LTBI) screening and treatment of HCWs.
Georgia, a high-burden multidrug-resistant TB (MDR-TB) country.
HCWs from the National TB Program and affiliated HCFs.
An anonymous self-administered 55-question survey developed based on the Health Belief Model (HBM) conceptual framework.
In total, 240 HCWs (48% physicians; 39% nurses) completed the survey. The overall average TB knowledge score was 61%. Only 60% of HCWs reported frequent use of respirators when in contact with TB patients. Only 52% of HCWs were willing to undergo annual LTBI screening; 48% were willing to undergo LTBI treatment. In multivariate analysis, HCWs who worried about acquiring MDR-TB infection (adjusted odds ratio [aOR], 1.7; 95% confidence interval [CI], 1.28–2.25), who thought screening contacts of TB cases is important (aOR, 3.4; 95% CI, 1.35–8.65), and who were physicians (aOR, 1.7; 95% CI, 1.08–2.60) were more likely to accept annual LTBI screening. With regard to LTBI treatment, HCWs who worked in an outpatient TB facility (aOR, 0.3; 95% CI, 0.11–0.58) or perceived a high personal risk of TB reinfection (aOR, 0.5; 95% CI, 0.37–0.64) were less likely to accept LTBI treatment.
The concern about TB reinfection is a major barrier to HCW acceptance of LTBI treatment. TB IC measures must be strengthened in parallel with or prior to the introduction of LTBI screening and treatment of HCWs.
We present the results of an approximately 6,100 deg² 104–196 MHz radio sky survey performed with the Murchison Widefield Array during instrument commissioning between 2012 September and 2012 December: the MWACS. The data were taken as meridian drift scans with two different 32-antenna sub-arrays that were available during the commissioning period. The survey covers approximately 20.5 h < RA < 8.5 h and −58° < Dec < −14° over three frequency bands centred on 119, 150, and 180 MHz, with image resolutions of 6–3 arcmin. The catalogue has 3 arcmin angular resolution and a typical noise level of 40 mJy beam⁻¹, with reduced sensitivity near the field boundaries and bright sources. We describe the data reduction strategy, based upon mosaicked snapshots, the flux density calibration, and the source-finding method. We present a catalogue of flux density and spectral index measurements for 14,110 sources extracted from the mosaic, 1,247 of which are sub-components of complexes of sources.
To examine regional variation in the use and appropriateness of indwelling urinary catheters and catheter-associated urinary tract infection (CAUTI).
Design and Setting.
US acute care hospitals.
Hospitals were divided into 4 regions according to the US Census Bureau. Baseline data on urinary catheter use, catheter appropriateness, and CAUTI were collected from participating units. The catheter utilization ratio was calculated by dividing the number of catheter-days by the number of patient-days. We used the National Healthcare Safety Network (NHSN) definition (number of CAUTIs per 1,000 catheter-days) and a population-based definition (number of CAUTIs per 10,000 patient-days) to calculate CAUTI rates. Logistic and Poisson regression models were used to assess regional differences.
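The measures defined above are simple ratios. As a sketch, they can be computed directly from the aggregate counts this study reports (434,207 catheter-days, 1,400,770 patient-days, 1,099 CAUTIs):

```python
# Device utilization ratio (DUR) and the two CAUTI rate definitions
# described in the text, computed from the study's aggregate counts.

def utilization_ratio(catheter_days, patient_days):
    """Device utilization ratio: catheter-days per patient-day."""
    return catheter_days / patient_days

def nhsn_cauti_rate(cautis, catheter_days):
    """NHSN definition: CAUTIs per 1,000 catheter-days."""
    return cautis / catheter_days * 1000

def population_cauti_rate(cautis, patient_days):
    """Population-based definition: CAUTIs per 10,000 patient-days."""
    return cautis / patient_days * 10000

print(round(utilization_ratio(434207, 1400770), 2))   # → 0.31
print(round(nhsn_cauti_rate(1099, 434207), 1))        # → 2.5
print(round(population_cauti_rate(1099, 1400770), 1)) # → 7.8
```

Because the two rate definitions share a numerator but differ in denominator, a region with high catheter utilization can look average by the NHSN rate yet high by the population-based rate, which is the pattern reported for the West below.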
Data on 434,207 catheter-days over 1,400,770 patient-days were collected from 1,101 units within 726 hospitals across 34 states. Overall catheter utilization was 31%. Catheter utilization was significantly higher in non-intensive care units (non-ICUs) in the West compared with non-ICUs in all other regions. Approximately 30%–40% of catheters in non-ICUs were placed without an appropriate indication. Catheter appropriateness was the lowest in the West. A total of 1,099 CAUTIs were observed (an NHSN rate of 2.5 per 1,000 catheter-days and a population-based rate of 7.8 per 10,000 patient-days). The population-based CAUTI rate was highest in the West (8.9 CAUTIs per 10,000 patient-days) and was significantly higher compared with the Midwest, even after adjusting for hospital characteristics (P = .02).
Regional differences in catheter use, appropriateness, and CAUTI rates were detected across US hospitals.
The diagnosis of parasitic worm (helminth) infections requires specialized laboratory settings, but most affected individuals reside in locations without access to such facilities. We tested two portable microscopic devices for the diagnosis of helminth infections in a cross-sectional survey in rural Côte d'Ivoire. We examined 164 stool samples under a light microscope and then re-examined with a commercial portable light microscope and an experimental mobile phone microscope for the diagnosis of Schistosoma mansoni and soil-transmitted helminths. Additionally, 180 filtered urine samples were examined by standard microscopy and compared with the portable light microscope for detection of Schistosoma haematobium eggs. Conventional microscopy was considered the diagnostic reference standard. For S. mansoni, S. haematobium and Trichuris trichiura, the portable light microscope showed sensitivities of 84·8%, 78·6% and 81·5%, respectively, and specificities of 85·7%, 91·0% and 93·0%, respectively. For S. mansoni and T. trichiura, we found sensitivities for the mobile phone microscope of 68·2% and 30·8%, respectively, and specificities of 64·3% and 71·0%, respectively. We conclude that the portable light microscope has sufficient diagnostic yield for Schistosoma and T. trichiura infections, while the mobile phone microscope has only modest sensitivity in its current experimental set-up. Development of portable diagnostic technologies that can be used at point-of-sample collection will enhance diagnostic coverage in clinical and epidemiological settings.
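Sensitivity and specificity against the reference standard reduce to proportions of a 2×2 table. A short Python sketch of the calculation, using a hypothetical split of the 164 stool samples chosen only to be consistent with the reported S. mansoni figures:

```python
# Sensitivity and specificity against a diagnostic reference standard,
# as used to evaluate the portable microscopes above. The 2x2 counts
# below are hypothetical, for illustration only.

def sensitivity(tp, fn):
    """True-positive rate: fraction of reference-positive samples detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: fraction of reference-negative samples ruled out."""
    return tn / (tn + fp)

# Hypothetical table: 56 of 66 reference-positives detected,
# 84 of 98 reference-negatives correctly called negative.
print(round(sensitivity(56, 10), 3))  # → 0.848
print(round(specificity(84, 14), 3))  # → 0.857
```

Note that both measures are conditioned on the reference result, so they are unaffected by prevalence; predictive values, by contrast, would shift with the infection rate in the surveyed population.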
The traditional CdCl2 passivation of CdTe is expanded by adding other chlorides, such as MgCl2, NaCl, and MnCl2, through a two-step passivation procedure that combines a closed-space sublimation step with a vapor process. This opens the possibility of forming a highly doped region at the back of the device that could act as an electron reflector, boosting device performance by directing electrons back into the absorber layer and increasing the voltage while limiting recombination at the back of the device. The effects of the two-step passivation process on device performance are characterized by current–voltage measurements, and by electroluminescence and laser-beam-induced-current images that show the degree of device uniformity. Additionally, capacitance–voltage measurements are used to study doping density, depletion width, and the possible formation of a field at the back of the device.
Central line-associated bloodstream infection (CLABSI) is a national target for mandatory reporting and a Centers for Medicare and Medicaid Services target for value-based purchasing. Differences in chart review versus claims-based metrics used by national agencies and groups raise concerns about the validity of these measures.
Evaluate consistency and reasons for discordance among chart review and claims-based CLABSI events.
We conducted 2 multicenter retrospective cohort studies within 6 academic institutions. A total of 150 consecutive patients were identified with CLABSI on the basis of National Healthcare Safety Network (NHSN) criteria (NHSN cohort), and an additional 150 consecutive patients were identified with CLABSI on the basis of claims codes (claims cohort). All events had full-text medical record reviews and were identified as concordant or discordant with the other metric.
In the NHSN cohort, there were 152 CLABSIs among 150 patients, and 73.0% of these cases were discordant with claims data. Common reasons for the lack of associated claims codes included coding omission and lack of physician documentation of the cause of bacteremia. In the claims cohort, there were 150 CLABSIs among 150 patients, and 65.3% of these cases were discordant with NHSN criteria. A common reason for the lack of NHSN reporting was identification of non-CLABSI events in which the bacteremia met Centers for Disease Control and Prevention (CDC) criteria for an alternative infection source.
Substantial discordance between NHSN and claims-based CLABSI indicators persists. Compared with standardized CDC chart review criteria, claims data often had both coding omissions and misclassification of non-CLABSI infections as CLABSI. Additionally, claims did not identify any additional CLABSIs for CDC reporting. NHSN criteria are a more consistent interhospital standard for CLABSI reporting.
Brunilde Gril, National Cancer Institute, United States,
Russell Szmulewitz, The University of Chicago, Committee on Cancer Biology and Pritzker School of Medicine, United States,
Joshua Collins, National Cancer Institute, United States,
Jennifer Taylor, The University of Chicago, Committee on Cancer Biology and Pritzker School of Medicine, United States,
Carrie Rinker-Schaeffer, The University of Chicago, Committee on Cancer Biology and Pritzker School of Medicine, United States,
Patricia Steeg, National Cancer Institute, United States,
Jean-Claude Marshall, National Cancer Institute, United States
In the 1970s and 1980s, clever scientific insight and innovation rapidly advanced our understanding of the molecular mechanisms of cancer biology. The discoveries of oncogenes and tumor suppressors, and the elucidation of their functions, greatly aided in studies aimed at a molecular understanding of the etiology of primary tumors. Despite this, cancer biologists had little understanding of the molecular aspects of metastasis. Considering the devastating consequences, scientists were anxious for a breakthrough. The first clue would come from the study of tumor suppressors.
Tumor suppressor genes were identified when it was discovered that their loss of function was critical to tumorigenesis. Prior to their discovery, researchers were of the mindset that the oncogenic phenotype was always dominant. In other words, a mutation need happen on only a single allele for a normal cell to be transformed into a tumor cell. However, not all disease incidence data seemed to fit neatly into this hypothesis. By studying retinoblastoma case histories, a “two-hit” hypothesis emerged, predicting that for at least some cancers, two mutations must occur (one on each allele) to successfully transform a cell. Indeed, the retinoblastoma gene, or Rb, would become known as the first described tumor suppressor. We now know that the “two hits” need not come in the form of distinct somatic mutations but may be the result of any combination of germinal and/or somatic mutations, mitotic recombinations, gene conversions, and functional inactivation of genes owing to promoter hypermethylation.