Haematopoietic stem cell transplantation is an important and effective treatment strategy for many malignancies, marrow failure syndromes, and immunodeficiencies in children, adolescents, and young adults. Despite advances in supportive care, patients undergoing transplant are at increased risk of developing cardiovascular co-morbidities.
Methods:
We performed a feasibility study of a rapid cardiac MRI protocol as a substitute for echocardiography in the assessment of left ventricular size and function, pericardial effusion, and right ventricular hypertension.
Results:
A total of 13 patients were enrolled for the study (age 17.5 ± 7.7 years, 77% male, 77% white). Mean study time was 13.2 ± 5.6 minutes for MRI and 18.8 ± 5.7 minutes for echocardiogram (p = 0.064). Correlation between left ventricular ejection fraction by MRI and echocardiogram was good (ICC 0.76; 95% CI 0.47, 0.92). None of the patients had documented right ventricular hypertension. Patients were given a survey regarding their experiences, with the majority both perceiving that the echocardiogram took longer (7/13) and indicating they would prefer the MRI if given a choice (10/13).
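As an illustration of the agreement analysis reported above, the sketch below computes a two-way random-effects, absolute-agreement ICC for paired LVEF readings. The abstract does not specify which ICC variant was used, so the model choice here is an assumption, and the data array contains invented values rather than the study measurements.

```python
import numpy as np

def icc2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    `scores` is an (n_subjects, k_raters) array, e.g. columns = [MRI, echo] LVEF."""
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-modality means

    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_total = ((scores - grand_mean) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical paired LVEF readings (%); columns: [MRI, echocardiogram]
lvef = np.array([[58, 60], [55, 57], [62, 59], [49, 52], [60, 63]])
print(round(icc2_1(lvef), 2))
```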
Conclusion:
A rapid cardiac MRI protocol was shown to be a feasible substitute for echocardiography in the assessment of key factors prior to, or in follow-up after, haematopoietic stem cell transplantation.
Subglacial sediments have the potential to reveal information about the controls on glacier flow and changes in ice-sheet history, and to characterise life in those environments. Retrieving sediments from beneath the ice, through hot-water-drilled access holes at remote field locations, presents many challenges. Motivated by the need to minimise weight and corer diameter and to simplify assembly and operation, British Antarctic Survey, in collaboration with UWITEC, developed a simple mechanical percussion corer. At depths over 1000 m, however, manual operation of the percussion hammer is compromised by the lack of clear operator feedback at the surface. To address this, we present a new auto-release-recovery percussion hammer mechanism that makes coring operations depth independent and improves hammer efficiency. Using a single rope tether for both the corer and hammer operation, this modified percussion corer is relatively simple to operate, easy to maintain, and has successfully operated at a depth of >2130 m.
Susceptibility to infections such as SARS-CoV-2 may be influenced by host genotype. TwinsUK volunteers (n = 3261) who completed the C-19 COVID-19 symptom tracker app enabled classical twin studies of COVID-19 symptoms, including predicted COVID-19, a symptom-based algorithm for predicting true infection derived from app users tested for SARS-CoV-2. We found heritability of 49% (32−64%) for delirium; 34% (20−47%) for diarrhea; 31% (8−52%) for fatigue; 19% (0−38%) for anosmia; 46% (31−60%) for skipped meals and 31% (11−48%) for predicted COVID-19. Heritability estimates were not affected by cohabiting or by social deprivation. The results suggest the importance of host genetics in the risk of clinical manifestations of COVID-19 and provide grounds for planning genome-wide association studies to establish specific genes involved in viral infectivity and the host immune response.
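For readers unfamiliar with how twin-pair correlations translate into heritability, the snippet below applies Falconer's approximation. The correlations are invented placeholders, not TwinsUK estimates, and the published figures will have come from formal twin modelling (for example ACE structural equation models) rather than this back-of-envelope rule.

```python
# Falconer's approximation from twin-pair correlations (illustrative values only)
r_mz, r_dz = 0.45, 0.25          # hypothetical monozygotic / dizygotic correlations

h2 = 2 * (r_mz - r_dz)           # A: additive genetic variance ("heritability")
c2 = 2 * r_dz - r_mz             # C: shared (common) environment
e2 = 1 - r_mz                    # E: unique environment + measurement error

print(f"h2={h2:.2f}, c2={c2:.2f}, e2={e2:.2f}")
```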
To examine children’s sugar-sweetened beverage (SSB) and water intakes in relation to implemented intervention activities across the social ecological model (SEM) during a multilevel community trial.
Design:
Children’s Healthy Living was a multilevel, multicomponent community trial that reduced young child obesity (2013–2015). Baseline and 24-month cross-sectional data were analysed from nine intervention arm communities. Implemented intervention activities targeting reduced SSB and increased water consumption were coded by SEM level (child, caregiver, organisation, community and policy). Child SSB and water intakes were assessed by caregiver-completed 2-day dietary records. Multilevel linear regression models examined associations of changes in beverage intakes with activity frequencies at each SEM level.
Setting:
US-Affiliated Pacific region.
Participants:
Children aged 2–8 years (baseline: n 1343; 24 months: n 1158).
Results:
On average (± sd), communities implemented 74 ± 39 SSB and 72 ± 40 water activities. More than 90 % of activities targeted both beverages together. Community-level activities (e.g. social marketing campaign) were most common (61 % of total activities), and child-level activities (e.g. sugar counting game) were least common (4 %). SSB activities across SEM levels were not associated with SSB intake changes. Additional community-level water activities were associated with increased water intake (0·62 ml/d/activity; 95 % CI: 0·09, 1·15) and water-for-SSB substitution (operationalised as SSB minus water: –0·88 ml/d/activity; 95 % CI: –1·72, –0·03). Activities implemented at the organisation level (e.g. strengthening preschool wellness guidelines) and policy level (e.g. SSB tax advocacy) also suggested greater water-for-SSB substitution (P < 0·10).
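A minimal sketch of the kind of multilevel model described in the design, using statsmodels with a random intercept per community. The column names (community, community_acts, water_change) and the simulated values are assumptions for illustration only, not the trial's variables or its full covariate set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Hypothetical child-level records from 9 communities; names and values are illustrative
df = pd.DataFrame({
    "community": np.repeat(np.arange(9), 30),
    "community_acts": np.repeat(rng.integers(20, 60, 9), 30),  # water activities per community
})
df["water_change"] = 0.6 * df["community_acts"] + rng.normal(0, 40, len(df))  # ml/d

# Random intercept per community accounts for clustering of children within sites
fit = smf.mixedlm("water_change ~ community_acts", df, groups=df["community"]).fit()
print(fit.params["community_acts"])   # ml/d of water per additional activity
```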
Conclusions:
Community-level intervention activities were associated with increased water intake, alone and relative to SSB intake, among young children in the Pacific region.
Potential effectiveness of harvest weed seed control (HWSC) systems depends upon seed shatter of the target weed species at crop maturity, enabling its collection and processing at crop harvest. However, seed retention likely is influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed-shatter phenology in 13 economically important broadleaf weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after physiological maturity at multiple sites spread across 14 states in the southern, northern, and mid-Atlantic United States. Greater proportions of seeds were retained by weeds in southern latitudes and shatter rate increased at northern latitudes. Amaranthus spp. seed shatter was low (0% to 2%), whereas shatter varied widely in common ragweed (Ambrosia artemisiifolia L.) (2% to 90%) over the weeks following soybean physiological maturity. Overall, the broadleaf species studied shattered less than 10% of their seeds by soybean harvest. Our results suggest that some of the broadleaf species with greater seed retention rates in the weeks following soybean physiological maturity may be good candidates for HWSC.
Seed shatter is an important weediness trait on which the efficacy of harvest weed seed control (HWSC) depends. The level of seed shatter in a species is likely influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed shatter of eight economically important grass weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after maturity at multiple sites spread across 11 states in the southern, northern, and mid-Atlantic United States. From soybean maturity to 4 wk after maturity, cumulative percent seed shatter was lowest in the southern U.S. regions and increased moving north through the states. At soybean maturity, the percent of seed shatter ranged from 1% to 70%. That range had shifted to 5% to 100% (mean: 42%) by 25 d after soybean maturity. There were considerable differences in seed-shatter onset and rate of progression between sites and years in some species that could impact their susceptibility to HWSC. Our results suggest that many summer annual grass species are likely not ideal candidates for HWSC, although HWSC could substantially reduce their seed output during certain years.
Background: With the emergence of antibiotic-resistant threats and the need for appropriate antibiotic use, laboratory microbiology information is important to guide clinical decision making in nursing homes, where access to such data can be limited. Susceptibility data are necessary to inform antibiotic selection and to monitor changes in resistance patterns over time. To contribute to existing data that describe antibiotic resistance among nursing home residents, we summarized antibiotic susceptibility data from organisms commonly isolated from urine cultures collected as part of the CDC multistate Emerging Infections Program (EIP) nursing home prevalence survey. Methods: In 2017, urine culture and antibiotic susceptibility data for selected organisms were retrospectively collected from nursing home residents’ medical records by trained EIP staff. Urine culture results reported as negative (no growth) or contaminated were excluded. Susceptibility results were recorded as susceptible, non-susceptible (resistant or intermediate), or not tested. The pooled mean percentage tested and percentage non-susceptible were calculated for selected antibiotic agents and classes using available data. Susceptibility data were analyzed for organisms with ≥20 isolates. The definition for multidrug resistance (MDR) was based on the CDC and European Centre for Disease Prevention and Control’s interim standard definitions. Data were analyzed using SAS v 9.4 software. Results: Among 161 participating nursing homes and 15,276 residents, 300 residents (2.0%) had documentation of a urine culture at the time of the survey, and 229 (76.3%) were positive. Escherichia coli, Proteus mirabilis, Klebsiella spp, and Enterococcus spp represented 73.0% of all urine isolates (N = 278). There were 215 (77.3%) isolates with reported susceptibility data (Fig. 1). Of these, data were analyzed for 187 (87.0%) (Fig. 2). All isolates tested for carbapenems were susceptible. Fluoroquinolone non-susceptibility was most prevalent among E. coli (42.9%) and P. mirabilis (55.9%). Among Klebsiella spp, the highest percentages of non-susceptibility were observed for extended-spectrum cephalosporins and folate pathway inhibitors (25.0% each). Glycopeptide non-susceptibility was 10.0% for Enterococcus spp. The percentage of isolates classified as MDR ranged from 10.1% for E. coli to 14.7% for P. mirabilis. Conclusions: Substantial levels of non-susceptibility were observed for nursing home residents’ urine isolates, with 10% to 56% reported as non-susceptible to the antibiotics assessed. Non-susceptibility was highest for fluoroquinolones, an antibiotic class commonly used in nursing homes, and ≥10% of selected isolates were MDR. Our findings reinforce the importance of nursing homes using susceptibility data from laboratory service providers to guide antibiotic prescribing and to monitor levels of resistance.
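The pooled percentages described in the methods amount to simple proportions over isolate-level results; a sketch follows. The rows and column names below are invented for illustration and do not reproduce the survey data or its exact pooling rules.

```python
import pandas as pd

# Hypothetical isolate-level susceptibility results (one row per isolate-antibiotic pair)
df = pd.DataFrame({
    "organism":   ["E. coli", "E. coli", "E. coli", "P. mirabilis", "P. mirabilis"],
    "antibiotic": ["fluoroquinolone"] * 5,
    "result":     ["non-susceptible", "susceptible", "not tested",
                   "non-susceptible", "susceptible"],
})

tested = df[df["result"] != "not tested"]
pct_tested = 100 * len(tested) / len(df)                       # % of isolates tested
pct_non_susceptible = (tested.groupby("organism")["result"]
                             .apply(lambda s: 100 * (s == "non-susceptible").mean()))
print(pct_tested)
print(pct_non_susceptible)
```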
Background: Antibiotics are among the most commonly prescribed drugs in nursing homes; urinary tract infections (UTIs) are a frequent indication. Although there is no gold standard for the diagnosis of UTIs, various criteria have been developed to inform and standardize nursing home prescribing decisions, with the goal of reducing unnecessary antibiotic prescribing. Using different published criteria designed to guide decisions on initiating treatment of UTIs (ie, symptomatic, catheter-associated, and uncomplicated cystitis), our objective was to assess the appropriateness of antibiotic prescribing among nursing home residents. Methods: In 2017, the CDC Emerging Infections Program (EIP) performed a prevalence survey of healthcare-associated infections and antibiotic use in 161 nursing homes from 10 states: California, Colorado, Connecticut, Georgia, Maryland, Minnesota, New Mexico, New York, Oregon, and Tennessee. EIP staff reviewed resident medical records to collect demographic and clinical information, infection signs, symptoms, and diagnostic testing documented on the day an antibiotic was initiated and 6 days prior. We applied 4 criteria to determine whether initiation of treatment for UTI was supported: (1) the Loeb minimum clinical criteria (Loeb); (2) the Suspected UTI Situation, Background, Assessment, and Recommendation tool (UTI SBAR tool); (3) adaptation of Infectious Diseases Society of America UTI treatment guidelines for nursing home residents (Crnich & Drinka); and (4) diagnostic criteria for uncomplicated cystitis (cystitis consensus) (Fig. 1). We calculated the percentage of residents for whom initiating UTI treatment was appropriate by these criteria. Results: Of 248 residents for whom UTI treatment was initiated in the nursing home, the median age was 79 years [IQR, 19], 63% were female, and 35% were admitted for postacute care. There was substantial variability in the percentage of residents with antibiotic initiation classified as appropriate by each of the criteria, ranging from 8% for the cystitis consensus, to 27% for Loeb, to 33% for the UTI SBAR tool, to 51% for Crnich and Drinka (Fig. 2). Conclusions: Appropriate initiation of UTI treatment among nursing home residents remained low regardless of the criteria used. At best, only half of antibiotic treatments met published prescribing criteria. Although insufficient documentation of infection signs, symptoms and testing may have contributed to the low percentages observed, adequate documentation in the medical record to support prescribing should be standard practice, as outlined in the CDC Core Elements of Antibiotic Stewardship for nursing homes. Standardized UTI prescribing criteria should be incorporated into nursing home stewardship activities to improve the assessment and documentation of symptomatic UTI and to reduce inappropriate antibiotic use.
To disrupt cycles of health inequity, traceable to dietary inequities in the earliest stages of life, public health interventions should target improving nutritional wellbeing in preconception/pregnancy environments. This requires a deep engagement with pregnant/postpartum people (PPP) and their communities (including their health and social care providers, HSCP). We sought to understand the factors that influence diet during pregnancy from the perspectives of PPP and HSCP, and to outline intervention priorities.
Design:
We carried out thematic network analyses of transcripts from ten focus group discussions (FGD) and one stakeholder engagement meeting with PPP and HSCP in a Canadian city. Identified themes were developed into conceptual maps, highlighting local priorities for pregnancy nutrition and intervention development.
Setting:
FGD and the stakeholder meeting were run in predominantly lower socioeconomic position (SEP) neighbourhoods in the sociodemographically diverse city of Hamilton, Canada.
Participants:
All participants were local, comprising twenty-two lower-SEP PPP and forty-three HSCP.
Results:
Salient themes were resilience, resources, relationships and the embodied experience of pregnancy. Both PPP and HSCP underscored that socioeconomic-political forces operating at multiple levels largely determined the availability of individual and relational resources constraining diet during pregnancy. Intervention proposals focused on cultivating individual and community resilience to improve early-life nutritional environments. Participants called for better-integrated services, greater income supports and strengthened support programmes.
Conclusions:
Hamilton stakeholders foregrounded social determinants of inequity as main factors influencing pregnancy diet. They further indicated a need to develop interventions that build resilience and redistribute resources at multiple levels, from the household to the state.
The aim of this study was to describe the sensitivity of various C-reactive protein (CRP) cut-off values to identify patients requiring magnetic resonance imaging evaluation for pyogenic spinal infection among emergency department (ED) adults presenting with neck or back pain.
Methods
We prospectively enrolled a convenience series of adults presenting to a community ED with neck or back pain in whom ED providers had concern for pyogenic spinal infection in a derivation cohort from 2004 to 2010 and a validation cohort from 2010 to 2018. The validation cohort included only patients with pyogenic spinal infection. We analysed diagnostic test characteristics of various CRP cut-off values.
Results
We enrolled 232 patients and analysed 201 patients. The median age was 55 years, 43.8% were male, 4.0% had history of intravenous drug use, and 20.9% had recent spinal surgery. In the derivation cohort, 38 (23.9%) of 159 patients had pyogenic spinal infection. Derivation sensitivity and specificity of CRP cut-off values were > 3.5 mg/L (100%, 24.8%), > 10 mg/L (100%, 41.3%), > 30 mg/L (100%, 61.2%), and > 50 mg/L (89.5%, 69.4%). Validation sensitivities of CRP cut-off values were > 3.5 mg/L (97.6%), > 10 mg/L (97.6%), > 30 mg/L (90.4%), and > 50 mg/L (85.7%).
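The cut-off analysis reported above amounts to recomputing sensitivity and specificity at each CRP threshold. The sketch below shows that computation on invented CRP values and infection labels; it is not the study data and ignores the derivation/validation split.

```python
import numpy as np

# Hypothetical CRP values (mg/L) and pyogenic-spinal-infection labels, for illustration only
crp      = np.array([2.1, 12.0, 55.0, 4.0, 80.0, 31.0, 9.0, 150.0, 6.5, 40.0])
infected = np.array([0,   0,    1,    0,   1,    0,    0,   1,     0,   1], dtype=bool)

for cutoff in (3.5, 10, 30, 50):
    test_pos = crp > cutoff
    sens = (test_pos & infected).sum() / infected.sum()        # true-positive rate
    spec = (~test_pos & ~infected).sum() / (~infected).sum()   # true-negative rate
    print(f"> {cutoff} mg/L: sensitivity {sens:.0%}, specificity {spec:.0%}")
```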
Conclusions
CRP cut-offs beyond the upper limit of normal had high sensitivity for pyogenic spinal infection in this adult ED population. Elevated CRP cut-off values of 10 mg/L and 30 mg/L require validation in other settings.
Introduction: Delegation of controlled medical acts by physicians to paramedics is an important component of the prehospital care framework. Where directives indicate that physician input is needed before proceeding with certain interventions, online medical control (a “patch”) exists to facilitate communication between a paramedic and a Base Hospital Physician (BHP) to request an order to proceed with that intervention. The quality and clarity of audio transmission are paramount for effective and efficient communication. The aim of this study was to examine the impact of audio transmission quality on the results of paramedic patch calls. Methods: Prehospital paramedic calls that included a mandatory patch point (excluding requests exclusively for termination of resuscitation and those records which were unavailable) were identified through review of all patch records from January 1, 2014 to December 31, 2017 for Paramedic Services in our region. Written Ambulance Call Reports (ACRs) and audio recordings of paramedic patches were obtained and reviewed. Pre-specified patch audio quality metrics, markers of transmission quality and comprehension, as well as the resulting orders from the BHP, were extracted. Differences between groups were compared using chi-square analyses. Results: 214 records were identified and screened initially. 91 ACRs and audio records were included in the analysis. At least one explicit reference to poor or inadequate call audio quality was made in 55/91 (60.4%) of calls and, on average, 1.4 times per call. Of the 91 audited call records, 48 (52.7%) patches experienced an interruption of the call. Each time a call was interrupted, re-initiation of the call was required, introducing a mean [IQR] delay of 81 [33-68] seconds to re-establish verbal communication. Order requests made by paramedics in calls with no interruptions were approved in 30 of 43 patches (70%), while those requests made in calls with one or more interruptions were approved in only 21 of 48 cases (44%) (Δ26.0%; 95% CI 5.6-43.5%, p = 0.01). Conclusion: This retrospective review suggests that audio quality and interruptions of patch calls may impact a physician's ability to approve orders for interventions in the prehospital setting. Focus on infrastructure and technology underlying this important mode of communication may be a fruitful avenue for future improvements in systems where this may be an issue.
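The approval-rate comparison reported above can be reproduced structurally with a 2x2 chi-square test. The counts come from the abstract, but the exact test variant the authors used (for example whether a continuity correction was applied) is not stated, so the p-value here may not match exactly.

```python
from scipy.stats import chi2_contingency

# Rows: calls without / with interruption; columns: request approved / not approved
table = [[30, 13],   # no interruption: 30 of 43 approved
         [21, 27]]   # >=1 interruption: 21 of 48 approved

chi2, p, dof, expected = chi2_contingency(table)   # Yates' correction applied by default for 2x2
print(f"chi2={chi2:.2f}, p={p:.3f}")
```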
Introduction: CAEP recently developed the acute atrial fibrillation (AF) and flutter (AFL) [AAFF] Best Practices Checklist to promote optimal care and guidance on cardioversion and rapid discharge of patients with AAFF. We sought to assess the impact of implementing the Checklist into large Canadian EDs. Methods: We conducted a pragmatic stepped-wedge cluster randomized trial in 11 large Canadian ED sites in five provinces, over 14 months. All hospitals started in the control period (usual care), and then crossed over to the intervention period in random sequence, one hospital per month. We enrolled consecutive, stable patients presenting with AAFF, where symptoms required ED management. Our intervention was informed by qualitative stakeholder interviews to identify perceived barriers and enablers for rapid discharge of AAFF patients. The many interventions included local champions, presentation of the Checklist to physicians in group sessions, an online training module, a smartphone app, and targeted audit and feedback. The primary outcome was length of stay in ED in minutes from time of arrival to time of disposition, and this was analyzed at the individual patient-level using linear mixed effects regression accounting for the stepped-wedge design. We estimated a sample size of 800 patients. Results: We enrolled 844 patients with none lost to follow-up. Those in the control (N = 316) and intervention periods (N = 528) were similar for all characteristics including mean age (61.2 vs 64.2 yrs), duration of AAFF (8.1 vs 7.7 hrs), AF (88.6% vs 82.9%), AFL (11.4% vs 17.1%), and mean initial heart rate (119.6 vs 119.9 bpm). Median lengths of stay for the control and intervention periods respectively were 413.0 vs. 354.0 minutes (P < 0.001). Comparing control to intervention, there was an increase in: use of antiarrhythmic drugs (37.4% vs 47.4%; P < 0.01), electrical cardioversion (45.1% vs 56.8%; P < 0.01), and discharge in sinus rhythm (75.3% vs. 86.7%; P < 0.001). There was a decrease in ED consultations to cardiology and medicine (49.7% vs 41.1%; P < 0.01), but a small but insignificant increase in anticoagulant prescriptions (39.6% vs 46.5%; P = 0.21). Conclusion: This multicenter implementation of the CAEP Best Practices Checklist led to a significant decrease in ED length of stay along with more ED cardioversions, fewer ED consultations, and more discharges in sinus rhythm. Widespread and rigorous adoption of the CAEP Checklist should lead to improved care of AAFF patients in all Canadian EDs.
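The patient-level analysis described above can be sketched as a linear mixed model with a random intercept for site and a calendar-time term, which is the usual adjustment for secular trends in stepped-wedge designs. Everything below (number of sites, column names, simulated values) is an assumption for illustration, not the trial data or the authors' exact model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for site in range(6):                       # hypothetical sites
    crossover = 1 + site                    # month this site switches to intervention
    for month in range(8):
        period = int(month >= crossover)    # 0 = control, 1 = intervention
        for _ in range(6):                  # patients per site-month
            los = 420 - 60 * period + 10 * site + rng.normal(0, 40)
            rows.append({"site": site, "month": month,
                         "period": period, "ed_los_min": los})
df = pd.DataFrame(rows)

# Random intercept per site handles clustering; C(month) absorbs secular time trends
fit = smf.mixedlm("ed_los_min ~ period + C(month)", df, groups=df["site"]).fit()
print(fit.params["period"])                 # estimated intervention effect on LOS (minutes)
```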
Introduction: Delegation of controlled medical acts by physicians to paramedics is an important component of the prehospital care framework. Where directives indicate that physician input is needed before proceeding with certain interventions, online medical control (a “patch”) exists to facilitate communication between a paramedic and a Base Hospital Physician (BHP) to request an order to proceed with that intervention. Many factors contribute to the success or failure of effective interpersonal communication during a patch call. The aim of this study was to examine areas of potential improvement in communication between paramedics and physicians during the patch call. Methods: Prehospital paramedic calls that included a mandatory patch point (excluding requests for termination of resuscitation and those records which were unavailable) were identified through review of all patch records from January 1, 2014 to December 31, 2017 for Paramedic Services in our region. Written Ambulance Call Reports (ACRs) and audio recordings of paramedic patches were obtained and reviewed. Pre-specified time intervals, clinical factors, specific patch requests and resulting orders from the BHP to the paramedics were extracted. Differences between groups were compared using t-tests. Results: 214 records were initially identified and screened. 91 ACRs and audio patch records were included in the analysis. 51/91 (56%) of patch order requests for interventions were granted by the BHP. Clarification of information provided by the paramedic or reframing of the paramedic's request was required less often in calls ultimately resulting in granted requests than in those that were not granted, although the difference was not statistically significant (mean 1.4 versus 1.7, Δ-0.28; 95% CI -0.75-0.18, p = 0.64). The mean time from first contact with the BHP to statement of the request was similar in patches where the request was granted and those where it was not (44.9 versus 46.3, Δ-1.4; 95% CI -12.9-10.2, p = 0.49). Conclusion: The communication between BHPs and paramedics is an important and under-investigated component of prehospital emergency care. This retrospective review presents some novel targets for further research and potential education in patch communication to improve the efficiency and quality of prehospital care for patients.
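The group comparisons above are two-sample t-tests on call-level measures; a minimal sketch using Welch's t-test on invented values follows. The abstract does not state whether equal variances were assumed, so the choice of the Welch variant, like the numbers themselves, is an assumption.

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical seconds from first BHP contact to statement of the request, by outcome
granted     = np.array([44, 51, 38, 46, 49, 41, 47])
not_granted = np.array([46, 52, 40, 50, 43, 48])

t, p = ttest_ind(granted, not_granted, equal_var=False)   # Welch's t-test
print(f"t={t:.2f}, p={p:.2f}")
```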
Introduction: Delegation of controlled medical acts by physicians to paramedics is an important component of the prehospital care framework. Where directives indicate that physician input is needed before proceeding with certain interventions, online medical control (a “patch”) exists to facilitate communication between a paramedic and a Base Hospital Physician (BHP) to request an order to proceed with that intervention. The clinical and logistical setting will contribute to the decision to proceed with or withhold an intervention in the prehospital setting. The aim of this study was to examine the impact of various clinical and situational factors on the likelihood of a patch request being granted. Methods: Prehospital paramedic calls that included a mandatory patch point (excluding requests exclusively for termination of resuscitation and those records which were unavailable) were identified through review of all patch records from January 1, 2014 to December 31, 2017 for Paramedic Services in our region. Written Ambulance Call Reports (ACRs) and audio recordings of paramedic patches were obtained and reviewed. Results: 214 patch records were identified and screened for inclusion. 91 ACRs and audio patch records were included in the analysis. 51 of 91 (56%) patch requests were granted by the BHP. Of the 40 paramedic requests that were not granted, the most commonly cited reason was close proximity to hospital (22/40; 55%) followed by low likelihood of the intervention making a clinical impact in the prehospital setting (11/40; 27.5%). Requests for certain interventions were more likely to be granted than other requests. All requests to perform needle thoracostomy for possible tension pneumothorax, administer atropine for symptomatic bradycardia and treat hemodynamically unstable hyperkalemia were granted (2/2, 3/3 and 7/7, respectively), while requests for synchronized cardioversion (7/19; 37%) and transcutaneous pacing (2/6; 33%) were approved less than half of the time. Conclusion: This retrospective review suggests that requests to perform certain critical and potentially time sensitive interventions are more likely to be granted which calls into question the requirement for a mandatory patch point for these procedures. Furthermore, the interplay between proximity to hospital and the decision to proceed with an intervention potentially informs future modifications to directives to facilitate timely, safe and efficient care.
Acute change in mental status (ACMS), defined by the Confusion Assessment Method, is used to identify infections in nursing home residents. A medical record review revealed that none of 15,276 residents had an ACMS documented. Using the revised McGeer criteria with a possible ACMS definition, we identified 296 residents and 21 additional infections. The use of a possible ACMS definition should be considered for retrospective nursing home infection surveillance.
The poster will address the important issue of how we can use opportunities in teaching our medical students to take a wider view of psychiatry and learn to ‘think outside the box’, thus broadening their vision and enabling them to challenge presently held concepts, while at the same time learning the basic tenets of our profession.
Clearly, this is done by involving our students in clinical research-based and audit-based activities. However, not all schools or teachers are comfortable with doing this; the medical curriculum is broad, and there is a risk that students ‘only study for exams’.
Research-based activities, including simple things such as using basic IT skills to do a literature search for a review article or carrying out a useful clinical audit using a unit-held database, are things which students can easily do, and these can lead to publishable case reports, posters, or even articles in peer-reviewed journals.
The poster will illustrate how we developed research activities with students at Cambridge University Clinical School. It will discuss the advantages, difficulties, and indeed enjoyment of carrying out such activities.
Antidepressants that combine serotonergic (SSRI) and noradrenergic (NaRI) actions may have greater efficacy in treating depression than SSRI monotherapy. This theory has not been tested in any trials examining augmentation of SSRIs with a NaRI.
Objectives
Does augmenting SSRIs with reboxetine, a NaRI, in depressed patients unresponsive to first-line treatment result in improved antidepressant efficacy?
Methods
In a naturalistic observational study, 30 patients with moderate to severe depression (ICD-10) who had failed to respond to at least 20 mg of an SSRI were augmented with reboxetine (4 mg, increased to 8 mg if tolerated). BDI-II was measured before and 6 weeks after the introduction of reboxetine. Changes in BDI-II scores were analysed using a paired t-test.
Results
20 out of 30 patients were able to tolerate the combination of SSRI and reboxetine treatment for at least 6 weeks. There was a significant reduction in mean BDI-II scores from 36.6 at baseline to 27.2 at six weeks follow-up (t = 4.13, df = 29, p < 0.001). 13 out of 30 previously unresponsive patients showed a response (reduction in BDI score of at least 10 points) to combination treatment, with 5 patients achieving remission (BDI < 12) over the six weeks.
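The before/after comparison reported above is a paired t-test on BDI-II scores. The sketch below uses invented scores (not the study data) to show the computation, together with the response rule of a drop of at least 10 points and the remission threshold of BDI-II < 12.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical BDI-II scores at baseline and six weeks (not the study data)
baseline = np.array([38, 35, 40, 33, 37, 36, 42, 39])
week6    = np.array([29, 30, 26, 31, 11, 28, 35, 27])

t, p = ttest_rel(baseline, week6)                 # paired t-test
responders = np.sum((baseline - week6) >= 10)     # response: drop of >= 10 points
remitters  = np.sum(week6 < 12)                   # remission: BDI-II < 12
print(f"t={t:.2f}, p={p:.4f}, responders={responders}, remitters={remitters}")
```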
Conclusions
Reboxetine augmentation of SSRIs can be tolerated by a majority of patients and results in a significant increase in response rates. It is a treatment strategy that should be considered in patients with moderate to severe depression who fail to respond to first line treatment with an SSRI.
Dysfunctions of prefrontal neuronal circuits contribute to the pathophysiology of depression. Previous studies showed increased functional MRI and EEG connectivity in patients with depression. In this study we investigated a large sample of patients with major depression (n=228) and healthy subjects (n=215).
Methods
Spectrotemporal dynamics during eyes-closed resting state were analyzed in sensor and source space to examine functional EEG connectivity (fcEEG) alterations between groups. Quantitative measures of delta, theta, alpha, beta and gamma power, hemispheric asymmetry, coherence, phase and current source density (CSD; eLORETA) were calculated from artifact-free EEG recordings.
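A minimal sketch of the band-power and coherence computations named above, using Welch's method on two synthetic channels. The sampling rate, band edges and signals are assumptions for illustration, and the study additionally used phase and eLORETA source analyses that are not shown here.

```python
import numpy as np
from scipy.signal import welch, coherence

fs = 256                                        # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)                    # 60 s of synthetic "resting-state" data
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)   # channel with 6 Hz theta
y = 0.8 * np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)

f, pxx = welch(x, fs=fs, nperseg=2 * fs)                # power spectral density (Welch)
theta = (f >= 4) & (f < 8)
theta_power = np.sum(pxx[theta]) * (f[1] - f[0])        # integrate PSD over 4-8 Hz

f_c, cxy = coherence(x, y, fs=fs, nperseg=2 * fs)       # magnitude-squared coherence
theta_coherence = cxy[(f_c >= 4) & (f_c < 8)].mean()

print(f"theta power={theta_power:.3f}, theta coherence={theta_coherence:.2f}")
```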
Results
EEG theta power was increased across all brain regions in the patient group, with a frontal focus, together with increased frontal alpha power. Marked coherence differences were detected in the delta, theta and alpha bands in frontal, fronto-temporal and fronto-parietal regions. Phase differences in the delta, theta and alpha bands also distinguished patients from healthy subjects. Differences in CSD were found for the delta, theta and alpha bands in the (rostral and subgenual) anterior cingulate cortex (ACC), with increased CSD in the patients.
Conclusion
The main finding of the present study was an increase in cortical slow-wave activity in sensor and source space in patients with depression, revealing marked differences in prefrontal cortical networks. Functional delta, theta and alpha connectivity (coherence and phase) was altered, with a predominance in the left hemisphere. Dysfunctions of the ACC, together with alterations in fcEEG, may contribute to the pathophysiology of major depression.
Centralized ratings by telephone have proven feasible for assessment of psychiatric diagnosis, symptom severity, and suicidality, and may be used for safety assessments in non-psychiatric trials with sites that do not employ staff experienced in psychiatric assessment.
Objective
To assess whether centralizing assessments with mental health experts enables immediate clinical follow-up and actionable diagnostic support for investigators.
Aims
To examine the feasibility of centralized ratings in a Phase III dermatology clinical trial for safety assessments.
Methods
1127 subjects enrolled in a trial of medication for their dermatologic condition were assessed via telephone by central raters who administered the SCID-CT, C-SSRS and PHQ-8 at screening. At monthly visits, central raters performed the C-SSRS, PHQ-8, GAD-7 and items designed to detect emergent psychotic symptoms.
Results:
Screening
34 subjects were excluded on the basis of SCID-CT diagnosis. Based on diagnosis or severity, subjects were classified as needing no mental health services, or as having mild (referral to a local mental health service provider; n=33), moderate (immediate referral for psychiatric evaluation; n=17), or severe (immediate escort to the emergency room; n=0) psychiatric symptoms.
One subject reported suicidal ideation on the C-SSRS, 10 reported self-injurious behavior, and 5 reported suicidal behavior in the last year.
Follow-Up
No subjects reported suicidal ideation or behavior at any of the 6861 follow-up assessments. One subject reported self-injurious behavior and two reported emergent psychotic symptoms.
Conclusions
This study established the feasibility and acceptability of routine screening and monitoring of psychopathology and suicidality by central raters in a non-psychiatric population.