This research communication reports the results of questionnaires used to identify the impact of recent research into the disinfection of cattle foot-trimming equipment to prevent bovine digital dermatitis (BDD) transmission on (a) the biosecurity knowledge and (b) the hygiene practices of foot health professionals. An initial questionnaire found that more than half of participating farmers, veterinary surgeons and commercial foot-trimmers were not considering hand or hoof-knife hygiene in their working practices. The following year, after the release of a foot-trimming hygiene protocol and a comprehensive knowledge exchange programme by the University of Liverpool, a second survey showed that 35/80 (43.8%) of the farmers, veterinary surgeons and commercial foot-trimmers sampled considered they were now more aware of the risk of spreading BDD during foot-trimming. Furthermore, 36/80 (45.0%) had enhanced their hygiene practice in the last year, impacting an estimated 1383 farms and 5130 cows trimmed each week. Participants who reported having seen both the foot-trimming hygiene protocol we developed with AHDB Dairy and other articles about foot-trimming hygiene in the farming and veterinary press were significantly more likely to have changed their working practices. Difficulties accessing water and cleaning facilities on farms were identified as the greatest barrier to improving biosecurity practices. Participants' preferred priority for future research was continued collection of evidence for the importance and efficacy of good foot-trimming hygiene practices.
Coronary artery aneurysms in children were observed as a rare complication associated with coronavirus disease 2019 (COVID-19). This case report describes the severe end of the spectrum of the new multisystem inflammatory syndrome in a 12-year-old child with coronary aneurysms, myocardial dysfunction, and shock, managed successfully with extracorporeal membrane oxygenation support and immunomodulation therapy. This report also highlights the additional benefits of cardiac CT in the diagnosis and follow-up of coronary aneurysms.
The extent to which Clinical and Translational Science Award (CTSA) programs offer publicly accessible online resources for training in community-engaged research (CEnR) core competencies is unknown. This study cataloged publicly accessible online CEnR resources from CTSAs and mapped resources to CEnR core competency domains.
Following a search and review of the current literature regarding CEnR competencies, CEnR core competency domains were identified and defined. A systematic review of publicly accessible online CEnR resources from all 64 current CTSAs was conducted between July 2018 and May 2019. Resource content was independently reviewed by two reviewers and scored for the inclusion of each CEnR core competency domain. Domain scores across all resources were assessed using descriptive statistics.
Eight CEnR core competency domains were identified. Overall, 214 CEnR resources publicly accessible online from 35 CTSAs were eligible for review. Scoring discrepancies for at least one domain within a resource initially occurred in 51% of resources. “CEnR methods” (50.5%) and “Knowledge and relationships with communities” (40.2%) were the most frequently addressed domains, while “CEnR program evaluation” (12.1%) and “Dissemination and advocacy” (11.2%) were the least frequently addressed domains. Additionally, challenges were noted in navigating CTSA websites to access CEnR resources, and CEnR competency nomenclature was not standardized.
Our findings guide CEnR stakeholders to identify publicly accessible online resources and gaps to address in CEnR resource development. Standardized nomenclature for CEnR competency is needed for effective CEnR resource classification. Uniform organization of CTSA websites may maximize navigability.
OBJECTIVES/GOALS: The extent that Clinical and Translational Science Award (CTSA) programs offer resources accessible online for training in community-engaged research (CEnR) core competencies is unknown. This study cataloged CEnR resources accessible online from CTSAs and mapped resources to CEnR core competencies. METHODS/STUDY POPULATION: Eight domains of CEnR core competencies were identified: knowledge/perceptions of CEnR; personal traits necessary for CEnR; knowledge of/relationships with communities; training for performing CEnR; CEnR methods; program evaluation; resource sharing and communication; and dissemination and advocacy. A systematic review of CEnR resources accessible online from CTSAs was conducted between July 2018 and May 2019. Resource content was independently reviewed by two reviewers and scored for inclusion of each domain of CEnR core competencies. Domain scores across all resources and inter-rater reliability in scoring domains were assessed using descriptive statistics and Cohen’s kappa coefficients. RESULTS/ANTICIPATED RESULTS: Overall, 214 resources available from 24 CTSAs were eligible for full review. Scoring discrepancies for at least one domain within a resource initially occurred in 51% of resources. “CEnR methods” (50.5%; 108 of 214) and “Knowledge of/relationships with the community” (40.2%; 86 of 214) were most frequently addressed and “Program evaluation” (12.1%; 26 of 214) and “Dissemination and advocacy” (11.2%; 24 of 214) were least frequently addressed domains. Additionally, challenges were noted in navigating CTSA websites to access CEnR resources, and CEnR competency nomenclature was not standardized. DISCUSSION/SIGNIFICANCE OF IMPACT: Our findings guide CEnR stakeholders to identify CEnR resources accessible online and gaps to address in CEnR resource development. Standardized nomenclature for CEnR competencies is needed for effective CEnR resource classification. 
Uniform organization of CTSA websites may maximize navigability. CONFLICT OF INTEREST DESCRIPTION: In addition to the funding information listed previously (see above), within the last three years, R.J. Piasecki has been employed as: Project Coordinator, CEnR Online Learning Project, Johns Hopkins University School of Nursing (Current); Temporary Employee (Doctoral Student Intern), Michigan State University Institute for Health Policy (Current); Clinical RN, Intrastaff at the Johns Hopkins Health System (Past); and Research Data Analysis Assistant, Maryland Institute for Emergency Medical Services (Past - contracted).
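The two-reviewer domain scoring and Cohen's kappa agreement described above can be sketched as follows; the scores and the helper are illustrative, not the study's actual data or code:

```python
# Illustrative sketch: two reviewers score each resource 1/0 for whether a
# competency domain is addressed, and agreement is summarized with Cohen's kappa.

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two binary ratings of the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement under independence, from each rater's marginals.
    p_a1 = sum(rater_a) / n
    p_b1 = sum(rater_b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Hypothetical domain scores for 10 resources (1 = domain addressed).
reviewer_1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
reviewer_2 = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # prints 0.6
```

Kappa discounts the agreement the two reviewers would reach by chance alone, which is why it is preferred over raw percent agreement for scoring discrepancies like the 51% figure above.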
Introduction: Paramedics commonly administer intravenous dextrose to severely hypoglycemic patients. Typically, the treatment provided is a 25 g ampule of 50% dextrose (D50). This dose of D50 is meant to ensure a return to consciousness. However, it may be unnecessary and lead to harm or difficulties regulating blood glucose post-treatment. We hypothesize that a lower dose such as 10% dextrose (D10), or titrating D50 to the desired level of consciousness, may be optimal and avoid adverse events. Methods: We systematically searched Medline, Embase, CINAHL and Cochrane Central on June 5, 2019. PRISMA guidelines were followed. GRADE methods and risk-of-bias assessments were applied to determine the certainty of the evidence. We included primary literature investigating the use of intravenous dextrose in hypoglycemic diabetic patients presenting to paramedics or the emergency department. Outcomes of interest were related to the safe and effective reversal of symptoms and blood glucose levels (BGL). Results: 660 abstracts were screened and 40 full-text articles reviewed, with eight studies included. Data from three randomized controlled trials and five observational studies were analyzed. A single RCT comparing D10 to D50 was identified. The primary significant finding of that study was a post-treatment glycemic profile higher by 3.2 mmol/L in the D50 group; no other outcomes differed significantly between groups. When comparing pooled data from all included studies, symptom resolution was higher in the D10 group than the D50 group (99.8% vs 94.9%). However, the mean time to resolution was approximately 4 minutes longer in the D10 group (4.1 minutes for D50 vs 8 minutes for D10). Subsequent doses were needed more often in the D10 group (23.0% vs 16.5% for D50). The post-treatment glycemic profile was lower in the D10 group, at 5.9 mmol/L versus 8.5 mmol/L for D50.
Both treatments achieved nearly complete resolution of hypoglycemia: 98.7% (D50) and 99.2% (D10). No adverse events were observed in the D10 group (0/871), compared with 12/133 in the D50 group. Conclusion: D10 may be as effective as D50 at resolving symptoms and correcting hypoglycemia. Although the desired effect can take several minutes longer, there appear to be fewer adverse events. The lower post-treatment glycemic profile may make ongoing glucose management less challenging for patients.
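As a back-of-envelope check on the pooled adverse-event comparison above (0/871 for D10 vs 12/133 for D50), the rates and their difference work out as follows; the counts are taken from the abstract, while the helper itself is just illustrative:

```python
# Adverse-event rates from the pooled counts quoted in the abstract.

def event_rate(events, n):
    """Proportion of patients with an event."""
    return events / n

d10_rate = event_rate(0, 871)    # D10 group: 0 adverse events in 871
d50_rate = event_rate(12, 133)   # D50 group: 12 adverse events in 133
risk_difference = d50_rate - d10_rate
print(f"D10: {d10_rate:.1%}, D50: {d50_rate:.1%}, difference: {risk_difference:.1%}")
```

This makes the magnitude of the safety signal explicit: roughly a 9% absolute adverse-event rate with D50 versus none observed with D10.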
Introduction: The Prehospital Evidence-based Practice (PEP) program is an online, freely accessible, continuously updated repository of appraised EMS research evidence. This report is an analysis of published evidence for EMS interventions used to assess and treat patients suffering from hypoglycemia. Methods: PubMed was systematically searched in June 2019. One author screened titles, abstracts and full texts for relevance. Trained appraisers reviewed full-text articles, scored each on a three-point Level of Evidence (LOE) scale (based on study design and quality) and a three-point Direction of Evidence (DOE) scale (supportive, neutral, or opposing findings for each intervention's primary outcome), abstracted the primary outcome and setting, and assigned an outcome category (patient or process). Second-party appraisal was conducted for all included studies. The level and direction of each intervention were plotted in an evidence matrix, based on the appraisals. Results: Twenty-nine studies were included and appraised for seven interventions: five drugs (dextrose 50% (D50), dextrose 10% (D10), glucagon, oral glucose and thiamine), one assessment tool (point-of-care (POC) glucose testing) and one call disposition (treat-and-release). The most frequently reported primary outcomes were related to clinical improvement (n = 15, 51.7%), feasibility/safety (n = 8, 27.6%), and diagnostics (n = 6, 20.7%). The majority of outcomes were patient-focused (n = 18, 62.0%). Conclusion: EMS interventions for treating hypoglycemia are informed by high-quality supportive evidence. Both D50 and D10 are supported by high-quality evidence, suggesting D10 may be an effective alternative to the standard D50. “Treat-and-release” practices for hypoglycemia are supported by moderate-quality evidence for the patient-related outcomes of relapse, patient preference and complications.
This body of evidence is high-quality, patient-focused and conducted in the prehospital setting, and is thus generalizable to paramedic practice.
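The evidence matrix described above (Direction of Evidence plotted against Level of Evidence) can be sketched as a simple grouping; the placements below are hypothetical examples, not the PEP program's actual appraisals:

```python
# Illustrative sketch of a PEP-style evidence matrix: each intervention gets a
# Level of Evidence (1 = highest of three tiers) and a Direction of Evidence,
# and is placed in a 3 x 3 grid. These placements are made up for demonstration.
from collections import defaultdict

# (intervention, LOE 1-3, DOE) tuples; DOE in {"supportive", "neutral", "opposing"}.
appraisals = [
    ("D50", 1, "supportive"),
    ("D10", 1, "supportive"),
    ("glucagon", 2, "supportive"),
    ("oral glucose", 2, "neutral"),
    ("treat-and-release", 2, "supportive"),
]

matrix = defaultdict(list)
for name, loe, doe in appraisals:
    matrix[(loe, doe)].append(name)

for (loe, doe), names in sorted(matrix.items()):
    print(f"LOE {loe} / {doe}: {', '.join(names)}")
```

Grouping interventions by (LOE, DOE) cell is what lets readers see at a glance which practices are backed by high-quality supportive evidence and which remain neutral or opposed.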
Background: Atrial fibrillation (AF) is a risk factor for stroke. The Canadian Cardiovascular Society advises that patients who are CHADS-65 positive should be started on oral anticoagulation (OAC). Our local emergency department (ED) review showed that only 16% of CHADS-65-positive patients were started on OAC and that 2% of our patients were diagnosed with stroke within 90 days. We implemented a new pathway for initiation of OAC in the ED (the SAFE pathway). Aim Statement: We report the effectiveness and safety of the SAFE pathway for initiation of OAC in patients treated for AF in the ED. Measures & Design: A multidisciplinary group of physicians and a pharmacist developed the SAFE pathway for patients who are discharged home from the ED with a diagnosis of AF. Step 1: contraindications to OAC; Step 2: CHADS-65 score; Step 3: OAC dosing if indicated. The pathway triggers a referral to the AF clinic, a letter to the family physician and a follow-up call from the ED pharmacist. Patients are followed for 90 days by a structured medical record review and a structured telephone interview. We record persistence with OAC, stroke, TIA, systemic arterial embolism and major bleeding (ISTH criteria). Patient outcomes are fed back to the treating ED physician. Evaluation/Results: The SAFE pathway was introduced in two EDs in June 2018. In total, 177 patients have had the pathway applied. The median age was 70 (interquartile range (IQR) 61-78), 48% were male, and the median CHADS2 score was 2 (IQR 0-2). 19/177 patients (11%) had a contraindication to initiating OAC. 122 patients (69%) had no contraindication to OAC and were CHADS-65 positive. Of these 122 patients, 109 were given a prescription for OAC (96 the correct dose, 9 too high a dose and 4 too low a dose). 6 patients declined OAC and the physician did not want to start OAC for 7 patients. 73/122 were contacted by phone at 90 days, 15 could not be reached and 34 had not completed 90 days of follow-up since their ED visit.
Of the 73 who were reached by phone after 90 days, 65 were still taking an anticoagulant. To date, 1 patient who declined OAC (CHADS2 score of 2) has had a stroke within 90 days and one patient prescribed OAC had a gastrointestinal bleed. Discussion/Impact: The SAFE pathway appears safe and effective, although we continue to evaluate and improve the process.
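The three-step pathway above can be sketched in code. The CHADS-65 rule is paraphrased from Canadian Cardiovascular Society guidance (OAC indicated if age ≥ 65 or any CHADS2 risk factor is present); the field names and return strings are illustrative, not the pathway's actual documentation:

```python
# Hedged sketch of the SAFE pathway's three-step decision logic.

def safe_pathway(age, chf, hypertension, diabetes, prior_stroke_tia,
                 contraindicated):
    # Step 1: contraindications to OAC rule the pathway out.
    if contraindicated:
        return "no OAC: contraindicated"
    # Step 2: CHADS-65 -- positive if age >= 65 or any CHADS2 risk factor
    # (CHF, hypertension, diabetes, prior stroke/TIA).
    chads65_positive = age >= 65 or any(
        [chf, hypertension, diabetes, prior_stroke_tia])
    # Step 3: start OAC if indicated (dosing itself is drug- and patient-specific).
    return "start OAC" if chads65_positive else "no OAC: CHADS-65 negative"

print(safe_pathway(age=70, chf=False, hypertension=True, diabetes=False,
                   prior_stroke_tia=False, contraindicated=False))
```

Encoding the steps in this order mirrors the pathway's design: safety checks first, eligibility second, dosing last.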
We describe an ultra-wide-bandwidth, low-frequency receiver recently installed on the Parkes radio telescope. The receiver system provides continuous frequency coverage from 704 to 4032 MHz. For much of the band, the system temperature is approximately 22 K and the receiver system remains in a linear regime even in the presence of strong mobile phone transmissions. We discuss the scientific and technical aspects of the new receiver, including its astronomical objectives, as well as the feed, receiver, digitiser, and signal processor design. We describe the pipeline routines that form the archive-ready data products and how those data files can be accessed from the archives. The system performance is quantified, including the system noise and linearity, beam shape, antenna efficiency, polarisation calibration, and timing stability.
The available literature suggests that treatments and health services for psychosis are considered to be poorly organized and highly variable. Little is known, however, about how inpatient care is provided to individuals experiencing early psychosis. To facilitate quality improvement activities, we characterized the care this patient group receives in an inner city hospital.
We performed chart reviews of individuals admitted to psychiatric inpatient units at St. Paul's Hospital, Vancouver, British Columbia between 01/04/2014 and 31/03/2016. Those who were 17–25 years of age and hospitalized for psychotic symptoms at the time of admission were included. Demographic characteristics and health service use were summarized using descriptive statistics.
We identified 73 inpatients (mean age = 22; males = 78%; Caucasian = 41%) who met study inclusion criteria, with a combined total of 102 care episodes and an average length of stay of 30.7 days (median = 18; min = 3; max = 268). Half of the care episodes were repeat admissions, with up to 30% of patients readmitted within 28 days of discharge. Physical and mental status examinations (MSE) were performed in virtually all care episodes, although their frequency was low (31.4% had daily physical examinations and 18.6% had an MSE every nursing shift). Patients were given oral antipsychotics in 49% of care episodes and were discharged on depot medications in 50%. Even when indicated, not all care episodes had follow-up appointments (60%) or referrals to income assistance (35%), community mental health teams (61%), and housing support (38%).
Specific programs are needed to address current gaps in inpatient care for patients with early psychosis.
Disclosure of interest
The authors have not supplied their declaration of competing interest.
Implementation scientists increasingly recognize that the process of implementation is dynamic, leading to ad hoc modifications that may challenge fidelity in protocol-driven interventions. However, limited attention to ad hoc modifications impairs investigators’ ability to develop evidence-based hypotheses about how such modifications may impact intervention effectiveness and cost. We propose a multi-method process map methodology to facilitate the systematic data collection necessary to characterize ad hoc modifications that may impact primary intervention outcomes.
We employ process maps (drawn from systems science), as well as focus groups and semi-structured interviews (drawn from social sciences) to investigate ad hoc modifications. Focus groups are conducted with the protocol’s developers and/or planners (the implementation team) to characterize the protocol “as envisioned,” while interviews conducted with frontline administrators characterize the process “as realized in practice.” Process maps with both samples are used to identify when modifications occurred across a protocol-driven intervention. A case study investigating a multistage screening protocol for autism spectrum disorders (ASD) is presented to illustrate application and utility of the multi-method process maps.
In this case study, frontline administrators reported ad hoc modifications that potentially influenced the primary study outcome (e.g., time to ASD diagnosis). Ad hoc modifications occurred to accommodate (1) whether providers and/or parents were concerned about ASD, (2) perceptions of parental readiness to discuss ASD, and (3) perceptions of family service delivery needs and priorities.
Investigation of ad hoc modifications on primary outcomes offers new opportunities to develop empirically based adaptive interventions. Routine reporting standards are critical to provide full transparency when studying ad hoc modifications.
Identifying risk factors for individuals in a clinical high-risk (CHR) state for psychosis is vital to prevention and early intervention efforts. Among prodromal abnormalities, cognitive functioning has shown intermediate levels of impairment in CHR individuals relative to patients with first-episode psychosis and healthy controls, highlighting a potential role as a risk factor for transition to psychosis and other negative clinical outcomes. The current study used the AX-CPT, a brief 15-min computerized task, to determine whether cognitive control impairments in CHR individuals at baseline could predict clinical status at 12-month follow-up.
Baseline AX-CPT data were obtained from 117 CHR individuals participating in two studies, the Early Detection, Intervention, and Prevention of Psychosis Program (EDIPPP) and the Understanding Early Psychosis Programs (EP), and used to predict clinical status at 12-month follow-up. At 12 months, 19 individuals had converted to a first episode of psychosis (CHR-C), 52 had remitted (CHR-R), and 46 had persistent sub-threshold symptoms (CHR-P). Binary logistic regression and multinomial logistic regression were used to test prediction models.
Baseline AX-CPT performance (d-prime context) was less impaired in the CHR-R group than in the CHR-P and CHR-C patient groups. AX-CPT predictive validity was robust (0.723) for discriminating converters v. non-converters, and even greater (0.771) when predicting the three CHR subgroups.
These longitudinal outcome data indicate that cognitive control deficits, as measured by AX-CPT d-prime context, are a strong predictor of clinical outcome in CHR individuals. The AX-CPT is a brief, easily implemented and cost-effective measure that may be valuable for large-scale prediction efforts.
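The d-prime context measure above comes from signal detection theory: d′ = z(hit rate) − z(false-alarm rate), with hits on target (AX) trials and false alarms on context-violating (BX) trials. A minimal illustration, with made-up rates:

```python
# Illustrative d-prime computation; the hit and false-alarm rates are
# hypothetical, not values from the study.
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# e.g. 95% hits on AX trials, 10% false alarms on BX trials.
print(round(d_prime(0.95, 0.10), 2))  # prints 2.93
```

Higher d′ means better use of the contextual cue; impaired cognitive control shows up as a lower d-prime context score.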
Item 9 of the Patient Health Questionnaire-9 (PHQ-9) queries about thoughts of death and self-harm, but not suicidality. Although it is sometimes used to assess suicide risk, most positive responses are not associated with suicidality. The PHQ-8, which omits Item 9, is thus increasingly used in research. We assessed equivalency of total score correlations and the diagnostic accuracy to detect major depression of the PHQ-8 and PHQ-9.
We conducted an individual patient data meta-analysis. We fit bivariate random-effects models to assess diagnostic accuracy.
16 742 participants (2097 major depression cases) from 54 studies were included. The correlation between PHQ-8 and PHQ-9 scores was 0.996 (95% confidence interval 0.996 to 0.996). The standard cutoff score of 10 for the PHQ-9 maximized sensitivity + specificity for the PHQ-8 among studies that used a semi-structured diagnostic interview reference standard (N = 27). At cutoff 10, the PHQ-8 was less sensitive by 0.02 (−0.06 to 0.00) and more specific by 0.01 (0.00 to 0.01) among those studies (N = 27), with similar results for studies that used other types of interviews (N = 27). For all 54 primary studies combined, across all cutoffs, the PHQ-8 was less sensitive than the PHQ-9 by 0.00 to 0.05 (0.03 at cutoff 10), and specificity was within 0.01 for all cutoffs (0.00 to 0.01).
PHQ-8 and PHQ-9 total scores were similar. Sensitivity may be minimally reduced with the PHQ-8, but specificity is similar.
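The cutoff-based sensitivity and specificity reported above can be illustrated with a small sketch; the (score, diagnosis) pairs below are hypothetical toy data, not participant data from the meta-analysis:

```python
# Illustrative sensitivity/specificity of a questionnaire cutoff (the
# abstract's PHQ cutoff of 10) against a reference diagnosis.

def sens_spec(scores_and_truth, cutoff):
    """Return (sensitivity, specificity) of `score >= cutoff` as a test."""
    tp = sum(s >= cutoff and d for s, d in scores_and_truth)
    fn = sum(s < cutoff and d for s, d in scores_and_truth)
    tn = sum(s < cutoff and not d for s, d in scores_and_truth)
    fp = sum(s >= cutoff and not d for s, d in scores_and_truth)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical (total score, major depression present) pairs.
data = [(14, True), (9, True), (11, True), (4, False),
        (10, False), (3, False), (7, False), (16, True)]
sensitivity, specificity = sens_spec(data, cutoff=10)
print(round(sensitivity, 2), round(specificity, 2))  # prints 0.75 0.75
```

Sweeping the cutoff over all possible values and recomputing these two quantities is how cutoff-by-cutoff comparisons like the PHQ-8 vs PHQ-9 analysis are built.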
Introduction: Early and accurate diagnosis of critical conditions is essential in emergency medical services (EMS). Serum lactate testing may be used to identify patients with a worse prognosis, including sepsis. Recently, the use of a point-of-care lactate (POCL) test has been evaluated for guiding treatment in patients with sepsis. Operating as part of the Prehospital Evidence Based Practice (PEP) Program, the authors sought to identify and describe the body of evidence for POCL use in EMS and the emergency department (ED) for patients with sepsis. Methods: Following PEP methodology, PubMed was searched in a systematic manner in May 2018. Title and abstract screening was conducted by the program coordinator. These studies were collected, appraised and added to the existing body of literature contained within the PEP database. Evidence appraisal was conducted by two reviewers, who assigned both a level of evidence (LOE) on a novel three-tier scale and a direction of evidence (supportive, neutral or opposing; based on the primary outcome). Data on setting and study design were also extracted. Results: Eight studies were included in our analysis. Three of these studies were conducted in the ED setting, each investigating the POCL test's ability to predict severe sepsis, ICU admission or death. All three found supportive results for POCL. A systematic review on the use of POCL in the ED determined that this test can also improve time to treatment. Five of the eight studies were conducted prehospitally. Two of these were supportive of POCL use in the prehospital setting, in terms of feasibility and the ability to predict sepsis. Both of these study sites used this early information as part of initiating a “sepsis alert” pathway. The other three prehospital studies provided neutral support for POCL. One study demonstrated moderate ability of POCL to predict severe illness. Two studies found poor agreement between prehospital POCL and serum lactate values.
Conclusion: Limited low- and moderate-quality evidence suggests that POCL may be feasible and helpful in predicting sepsis in the prehospital setting. However, support for specific important outcomes, including accuracy, is sparse and inconsistent.
Introduction: The Prehospital Evidence-Based Practice (PEP) program is an online, freely accessible, continuously updated Emergency Medical Services (EMS) evidence repository. This summary describes the research evidence for the identification and management of adult patients suffering from sepsis syndrome or septic shock. Methods: PubMed was searched in a systematic manner. One author reviewed titles and abstracts for relevance and two authors appraised each study selected for inclusion. Primary outcomes were extracted. Studies were scored by trained appraisers on a three-point Level of Evidence (LOE) scale (based on study design and quality) and a three-point Direction of Evidence (DOE) scale (supportive, neutral, or opposing findings based on each study's primary outcome for each intervention). The LOE and DOE of each intervention were plotted on an evidence matrix (DOE x LOE). Results: Eighty-eight studies were included for 15 interventions listed in PEP. The interventions with the most evidence were related to identification tools (ID) (n = 26, 30%) and early goal-directed therapy (EGDT) (n = 21, 24%). ID tools included the Systemic Inflammatory Response Syndrome (SIRS) criteria, the quick Sequential Organ Failure Assessment (qSOFA) and other unique measures. The most common primary outcomes were related to diagnosis (n = 30, 34%), mortality (n = 40, 45%) and treatment goals (e.g. time to antibiotic) (n = 14, 16%). The evidence ranks for the supported interventions were: supportive-high quality (n = 1, 7%) for crystalloid infusion; supportive-moderate quality (n = 7, 47%) for identification tools, prenotification, point-of-care lactate, titrated oxygen and temperature monitoring; and supportive-low quality (n = 1, 7%) for vasopressors. The benefit of prehospital antibiotics and EGDT remains inconclusive, with a neutral DOE. There is moderate-level evidence opposing the use of high-flow oxygen.
Conclusion: EMS sepsis interventions are informed primarily by moderate-quality supportive evidence. Several standard treatments are well supported by moderate- to high-quality evidence, as are identification tools. However, some standard in-hospital therapies, such as antibiotics and EGDT, are not supported by evidence in the prehospital setting. Based on primary outcomes, no identification tool appears superior. This evidence analysis can guide selection of appropriate prehospital therapies.
Traditional ambulatory rhythm monitoring in children can have limitations, including cumbersome leads and limited monitoring duration. The Zio™ patch ambulatory monitor is a small, adhesive, single-channel rhythm monitor that can be worn for up to 2 weeks. In this study, we present a retrospective cross-sectional analysis of the Zio™ monitor's impact in clinical practice. Patients aged 0–18 years were included in the study. A total of 373 studies were reviewed in 332 patients. In all, 28.4% had structural heart disease, and 16.9% had a prior surgical, catheterisation, or electrophysiology procedure. The most common indication for monitoring was tachypalpitations (41%); 93.5% of these patients had their symptoms captured during the study window. The median duration of monitoring was 5 days. Overall, 5.1% of Zio™ studies identified arrhythmias requiring new intervention or increased medical management; 4.0% identified arrhythmias requiring increased clinical surveillance. The remainder had either normal-variant rhythm or minor rhythm findings requiring no change in management. For patients with tachypalpitations and no structural heart disease, 13.2% had pathological arrhythmias, but 72.9% had normal-variant rhythm during symptoms, allowing discharge from cardiology care. Notably, for patients with findings requiring intervention or increased surveillance, 56% had findings first identified beyond 24 hours, and only 62% were patient-triggered findings. Seven studies (1.9%) were associated with complications or patient intolerance. The Zio™ is a well-tolerated device that may improve arrhythmia detection relative to traditional Holter and event monitoring in paediatric cardiology patients. This study shows a positive clinical impact on the management of patients within a paediatric cardiology practice.
Different diagnostic interviews are used as reference standards for major depression classification in research. Semi-structured interviews involve clinical judgement, whereas fully structured interviews are completely scripted. The Mini International Neuropsychiatric Interview (MINI), a brief fully structured interview, is also sometimes used. It is not known whether interview method is associated with probability of major depression classification.
To evaluate the association between interview method and odds of major depression classification, controlling for depressive symptom scores and participant characteristics.
Data collected for an individual participant data meta-analysis of Patient Health Questionnaire-9 (PHQ-9) diagnostic accuracy were analysed and binomial generalised linear mixed models were fit.
A total of 17 158 participants (2287 with major depression) from 57 primary studies were analysed. Among fully structured interviews, odds of major depression were higher for the MINI compared with the Composite International Diagnostic Interview (CIDI) (odds ratio (OR) = 2.10; 95% CI = 1.15–3.87). Compared with semi-structured interviews, fully structured interviews (MINI excluded) were non-significantly more likely to classify participants with low-level depressive symptoms (PHQ-9 scores ≤6) as having major depression (OR = 3.13; 95% CI = 0.98–10.00), similarly likely for moderate-level symptoms (PHQ-9 scores 7–15) (OR = 0.96; 95% CI = 0.56–1.66) and significantly less likely for high-level symptoms (PHQ-9 scores ≥16) (OR = 0.50; 95% CI = 0.26–0.97).
The MINI may identify more people as depressed than the CIDI, and semi-structured and fully structured interviews may not be interchangeable methods, but these results should be replicated.
Declaration of interest
Drs Jetté and Patten declare that they received a grant, outside the submitted work, from the Hotchkiss Brain Institute, which was jointly funded by the Institute and Pfizer. Pfizer was the original sponsor of the development of the PHQ-9, which is now in the public domain. Dr Chan is a steering committee member or consultant of Astra Zeneca, Bayer, Lilly, MSD and Pfizer. She has received sponsorships and honorarium for giving lectures and providing consultancy and her affiliated institution has received research grants from these companies. Dr Hegerl declares that within the past 3 years, he was an advisory board member for Lundbeck, Servier and Otsuka Pharma; a consultant for Bayer Pharma; and a speaker for Medice Arzneimittel, Novartis, and Roche Pharma, all outside the submitted work. Dr Inagaki declares that he has received grants from Novartis Pharma, lecture fees from Pfizer, Mochida, Shionogi, Sumitomo Dainippon Pharma, Daiichi-Sankyo, Meiji Seika and Takeda, and royalties from Nippon Hyoron Sha, Nanzando, Seiwa Shoten, Igaku-shoin and Technomics, all outside of the submitted work. Dr Yamada reports personal fees from Meiji Seika Pharma Co., Ltd., MSD K.K., Asahi Kasei Pharma Corporation, Seishin Shobo, Seiwa Shoten Co., Ltd., Igaku-shoin Ltd., Chugai Igakusha and Sentan Igakusha, all outside the submitted work. All other authors declare no competing interests. No funder had any role in the design and conduct of the study; collection, management, analysis and interpretation of the data; preparation, review or approval of the manuscript; and decision to submit the manuscript for publication.
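The odds ratios reported above come from mixed models, but the basic quantity can be illustrated from a 2 × 2 table; the counts below are hypothetical, not data from the meta-analysis:

```python
# Illustrative odds ratio from a 2 x 2 table: odds of a major-depression
# classification under one interview type versus another.

def odds_ratio(a, b, c, d):
    """a/b = classified/not under interview 1; c/d = same under interview 2."""
    return (a / b) / (c / d)

# Hypothetical counts: 42 of 100 classified by a MINI-style interview,
# 25 of 100 by a CIDI-style interview.
print(round(odds_ratio(42, 58, 25, 75), 2))  # prints 2.17
```

An OR above 1 means the first interview classifies more participants as depressed at the same symptom level, which is the pattern the abstract reports for the MINI versus the CIDI.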
Coinfection with human immunodeficiency virus (HIV) and viral hepatitis is associated with high morbidity and mortality in the absence of clinical management, making identification of these cases crucial. We examined characteristics of HIV and viral hepatitis coinfections by using surveillance data from 15 US states and two cities. Each jurisdiction used an automated deterministic matching method to link surveillance data for persons with reported acute and chronic hepatitis B virus (HBV) or hepatitis C virus (HCV) infections, to persons reported with HIV infection. Of the 504 398 persons living with diagnosed HIV infection at the end of 2014, 2.0% were coinfected with HBV and 6.7% were coinfected with HCV. Of the 269 884 persons ever reported with HBV, 5.2% were reported with HIV. Of the 1 093 050 persons ever reported with HCV, 4.3% were reported with HIV. A greater proportion of persons coinfected with HIV and HBV were males and blacks/African Americans, compared with those with HIV monoinfection. Persons who inject drugs represented a greater proportion of those coinfected with HIV and HCV, compared with those with HIV monoinfection. Matching HIV and viral hepatitis surveillance data highlights epidemiological characteristics of persons coinfected and can be used to routinely monitor health status and guide state and national public health interventions.
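The automated deterministic matching described above can be sketched as an exact-key join: records from two registries match when a normalized key agrees exactly. The key fields and normalization below are illustrative assumptions, not the jurisdictions' actual method:

```python
# Minimal sketch of deterministic record linkage between two surveillance
# registries. The key (name + date of birth) and the toy records are
# hypothetical; real systems use jurisdiction-specific identifiers.

def link_key(record):
    """Normalize fields so trivially different entries still match exactly."""
    return (record["last_name"].strip().lower(),
            record["first_name"].strip().lower(),
            record["dob"])

def deterministic_match(hiv_registry, hepatitis_registry):
    """Return HIV-registry records whose key also appears in the hepatitis registry."""
    hep_keys = {link_key(r) for r in hepatitis_registry}
    return [r for r in hiv_registry if link_key(r) in hep_keys]

hiv = [{"last_name": "Doe", "first_name": "Jan", "dob": "1980-02-01"},
       {"last_name": "Roe", "first_name": "Sam", "dob": "1975-07-15"}]
hep = [{"last_name": "doe ", "first_name": "JAN", "dob": "1980-02-01"}]
print(len(deterministic_match(hiv, hep)))  # one coinfection candidate
```

Deterministic matching is fast and reproducible but, unlike probabilistic linkage, it misses records whose identifying fields differ beyond what normalization repairs.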
To achieve their conservation goals, individuals, communities and organizations need to acquire a diversity of skills, knowledge and information (i.e. capacity). Despite current efforts to build and maintain appropriate levels of conservation capacity, these activities will need to be scaled up significantly in sub-Saharan Africa because of the rapid increase in the number and extent of environmental problems in the region. We present a range of socio-economic contexts relevant to four key areas of African conservation capacity building: protected area management, community engagement, effective leadership, and professional e-learning. Under these core themes, 39 specific recommendations are presented, derived from multi-stakeholder workshop discussions at an international conference held in Nairobi, Kenya, in 2015. At the meeting, 185 delegates (practitioners, scientists, community groups and government agencies) represented 105 organizations from 24 African and eight non-African nations. The 39 recommendations constituted six broad types of suggested action: (1) the development of new methods, (2) the provision of capacity building resources (e.g. information or data), (3) the communication of ideas or examples of successful initiatives, (4) the implementation of new research or gap analyses, (5) the establishment of new structures within and between organizations, and (6) the development of new partnerships. A number of cross-cutting issues also emerged from the discussions: the need for a greater sense of urgency in developing capacity building activities; the need to develop novel capacity building methodologies; and the need to move away from one-size-fits-all approaches.
Cognitive behaviour therapy (CBT) and interpersonal psychotherapy (IPT) are the most studied psychotherapies for the treatment of depression, but they have rarely been directly compared, particularly over the longer term. This study compares the outcomes of patients treated with CBT and IPT over 10 months and tests whether there are differential or general predictors of outcome.
A single-centre randomised controlled trial (RCT) of depressed outpatients treated with weekly CBT or IPT sessions for 16 weeks, followed by 24 weeks of maintenance CBT or IPT. The principal outcome was depression severity, measured using the Montgomery–Åsberg Depression Rating Scale (MADRS). Pre-specified predictors of response fell into four domains: demographics, depression characteristics, comorbidity and personality. Data were analysed over 16 weeks and 40 weeks using general linear mixed-effects regression models.
CBT was significantly more effective than IPT in reducing depressive symptoms over the 10-month study, largely because it appeared to work more quickly. There were no differential predictors of response to CBT v. IPT at 16 weeks or 40 weeks. Personality variables were most strongly associated with overall outcome at both 16 and 40 weeks. A higher number of personality disorder symptoms and lower self-directedness and reward dependence scores were associated with poorer outcome for both CBT and IPT at 40 weeks.
CBT and IPT are effective treatments for major depression over the longer term. CBT may work more quickly. Personality variables are the most relevant predictors of outcome.
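The mixed-effects analysis behind these results assumes long-format repeated-measures data, one row per assessment. The toy sketch below shows that layout and a naive per-arm summary only; the scores and structure are invented, and a real analysis would fit random patient effects with dedicated software (e.g. statsmodels' MixedLM or R's lme4), not the simple averaging shown here.

```python
# Toy illustration of the repeated-measures data layout consumed by a
# linear mixed-effects regression of depression severity. All values
# are invented for the example.

# (patient_id, treatment, week, madrs_score) -- long format,
# one row per assessment occasion.
assessments = [
    (1, "CBT", 0, 32), (1, "CBT", 16, 14),
    (2, "CBT", 0, 30), (2, "CBT", 16, 12),
    (3, "IPT", 0, 31), (3, "IPT", 16, 20),
    (4, "IPT", 0, 33), (4, "IPT", 16, 18),
]

def mean_change(rows, arm):
    """Average baseline-to-week-16 MADRS change for one treatment arm."""
    baseline, followup = {}, {}
    for pid, treatment, week, score in rows:
        if treatment != arm:
            continue
        (baseline if week == 0 else followup)[pid] = score
    changes = [followup[p] - baseline[p] for p in baseline]
    return sum(changes) / len(changes)

print(mean_change(assessments, "CBT"))  # -18.0
print(mean_change(assessments, "IPT"))  # -13.0
```

A mixed-effects model improves on this naive summary by modelling each patient's repeated scores jointly, with a random intercept per patient, so that within-patient correlation and missed assessments are handled rather than averaged away.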