American society is rapidly secularizing, a radical departure from its historically high level of religiosity, and politics is a big part of the reason. Just as, forty years ago, the Religious Right arose as a new political movement, today secularism is gaining traction as a distinct and politically energized identity. This book examines the political causes and political consequences of this secular surge, drawing on a wealth of original data. The authors show that secular identity is in part a reaction to the Religious Right. However, while the political impact of secularism is profound, there may not yet be a Secular Left to counterbalance the Religious Right. Secularism has introduced new tensions within the Democratic Party while adding oxygen to political polarization between Democrats and Republicans. Still, there may be opportunities to reach common ground if politicians seek to forge coalitions that encompass both secular and religious Americans.
Cognitive behavior therapy (CBT) is effective for most patients with social anxiety disorder (SAD), but a substantial proportion fails to remit. Experimental and clinical research suggests that enhancing CBT with imagery-based techniques could improve outcomes. It was hypothesized that imagery-enhanced CBT (IE-CBT) would be superior to verbally based CBT (VB-CBT) on pre-registered outcomes.
A randomized controlled trial of IE-CBT v. VB-CBT for social anxiety was completed in a community mental health clinic setting. Participants were randomized to IE (n = 53) or VB (n = 54) CBT, with 1-month (primary end point) and 6-month follow-up assessments. Participants completed 12 weekly 2-hour sessions of IE-CBT or VB-CBT, plus a 1-month follow-up session.
Intention-to-treat analyses showed very large within-treatment effect sizes on social interaction anxiety at all time points (ds = 2.09–2.62), with no between-treatment differences on this outcome or on clinician-rated severity [1-month OR = 1.45 (0.45, 4.62), p = 0.53; 6-month OR = 1.31 (0.42, 4.08), p = 0.65], SAD remission (1-month: IE = 61.04%, VB = 55.09%, p = 0.59; 6-month: IE = 58.73%, VB = 61.89%, p = 0.77), or secondary outcomes. Three adverse events were noted (substance abuse, n = 1 in IE-CBT; temporary increase in suicide risk, n = 1 in each condition, with one participant withdrawn at 1-month follow-up).
Group IE-CBT and VB-CBT were safe and there were no significant differences in outcomes. Both treatments were associated with very large within-group effect sizes and the majority of patients remitted following treatment.
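The "very large within-group effect sizes" above are standardized mean changes (Cohen's d). A minimal sketch of one common convention, dividing mean pre-to-post change by the baseline SD (the trial's exact formula is not specified in the abstract, and the function name is ours):

```python
import statistics

def cohens_d_within(pre, post):
    """Within-group effect size: mean pre-to-post improvement divided by
    the SD of the pre-treatment scores (one common convention; other
    variants divide by the SD of the change scores instead)."""
    changes = [a - b for a, b in zip(pre, post)]  # positive = symptom drop
    return statistics.mean(changes) / statistics.stdev(pre)
```

A mean symptom drop more than twice the baseline SD yields d > 2, the magnitude reported above.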
In response to advancing clinical practice guidelines regarding concussion management, service members, like athletes, complete a baseline assessment prior to participating in high-risk activities. While several studies have established test stability in athletes, no investigation to date has examined the stability of baseline assessment scores in military cadets. The objective of this study was to assess the test–retest reliability of a baseline concussion test battery in cadets at U.S. Service Academies.
All cadets participating in the Concussion Assessment, Research, and Education (CARE) Consortium investigation completed a standard baseline battery that included memory, balance, symptom, and neurocognitive assessments. Annual baseline testing was completed during the first 3 years of the study. A two-way mixed-model analysis of variance (intraclass correlation coefficient, ICC(3,1)) and Kappa statistics were used to assess the stability of the metrics at 1-year and 2-year time intervals.
ICC values for the 1-year test interval ranged from 0.28 to 0.67 and from 0.15 to 0.57 for the 2-year interval. Kappa values ranged from 0.16 to 0.21 for the 1-year interval and from 0.29 to 0.31 for the 2-year test interval. Across all measures, the observed effects were small, ranging from 0.01 to 0.44.
This investigation noted less-than-optimal reliability for the most common concussion baseline assessments. While none of the assessments met or exceeded the accepted clinical threshold, the effect sizes were relatively small, suggesting overlapping performance from year to year. As such, baseline assessments beyond the initial evaluation in cadets are not essential but could aid concussion diagnosis.
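ICC(3,1), the statistic used above, can be computed from the mean squares of a two-way ANOVA on a subjects × sessions score matrix. A minimal numpy sketch (illustrative only, not the CARE Consortium's analysis code):

```python
import numpy as np

def icc_3_1(scores):
    """ICC(3,1): two-way mixed-effects ANOVA, single measures, consistency.
    scores: (n_subjects, k_sessions) array of test scores."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    # ANOVA sums of squares: between-subjects, between-sessions, residual
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

Perfectly consistent retests (each subject shifted by the same amount) give ICC = 1; the 0.15–0.67 range reported above indicates substantial session-to-session noise.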
Unit cohesion may protect service member mental health by mitigating effects of combat exposure; however, questions remain about the origins of potential stress-buffering effects. We examined buffering effects associated with two forms of unit cohesion (peer-oriented horizontal cohesion and subordinate-leader vertical cohesion) defined as either individual-level or aggregated unit-level variables.
Longitudinal survey data from US Army soldiers who deployed to Afghanistan in 2012 were analyzed using mixed-effects regression. Models evaluated individual- and unit-level interaction effects of combat exposure and cohesion during deployment on symptoms of post-traumatic stress disorder (PTSD), depression, and suicidal ideation reported at 3 months post-deployment (model n's = 6684 to 6826). Given the small effective sample size (k = 89), the significance of unit-level interactions was evaluated at a 90% confidence level.
At the individual-level, buffering effects of horizontal cohesion were found for PTSD symptoms [B = −0.11, 95% CI (−0.18 to −0.04), p < 0.01] and depressive symptoms [B = −0.06, 95% CI (−0.10 to −0.01), p < 0.05]; while a buffering effect of vertical cohesion was observed for PTSD symptoms only [B = −0.03, 95% CI (−0.06 to −0.0001), p < 0.05]. At the unit-level, buffering effects of horizontal (but not vertical) cohesion were observed for PTSD symptoms [B = −0.91, 90% CI (−1.70 to −0.11), p = 0.06], depressive symptoms [B = −0.83, 90% CI (−1.24 to −0.41), p < 0.01], and suicidal ideation [B = −0.32, 90% CI (−0.62 to −0.01), p = 0.08].
Policies and interventions that enhance horizontal cohesion may protect combat-exposed units against post-deployment mental health problems. Efforts to support individual soldiers who report low levels of horizontal or vertical cohesion may also yield mental health benefits.
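The buffering effects above correspond to negative combat × cohesion interaction terms in the regression models. A toy individual-level simulation (simulated data with the reported PTSD interaction coefficient of −0.11 built in; plain least squares stands in for the paper's mixed-effects regression):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
combat = rng.normal(size=n)    # standardized combat exposure
cohesion = rng.normal(size=n)  # standardized horizontal cohesion
# Buffering: the combat slope on PTSD symptoms shrinks as cohesion rises
ptsd = 0.5 * combat - 0.2 * cohesion - 0.11 * combat * cohesion \
       + rng.normal(scale=0.5, size=n)
X = np.column_stack([np.ones(n), combat, cohesion, combat * cohesion])
beta, *_ = np.linalg.lstsq(X, ptsd, rcond=None)
# beta[3] recovers the negative interaction (buffering) coefficient
```

A negative interaction means the slope of symptoms on combat exposure flattens as cohesion increases, which is what "buffering" denotes above.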
This article explores the literary relationship between the Matthean tradition and the Ascension of Isaiah, a second-century pseudepigraphon detailing Isaiah's visions of the ‘Beloved’ and his polemical (and fatal) engagement with the ‘false prophet’ Belkira. While the lexical affiliation between these texts has been a point of interest, the discussion has oscillated between types of sources utilised, whether gospel material mutually shared with Matthew or Matthew itself. Though this paper details lexical contact, it pushes beyond philological similarity and posits narrative imitations as well as shared polemical strategies. The result is that Isaiah is more readily seen as a figure fashioned after the Matthean Jesus, and the ‘martyred prophet’ motif that ripples throughout the Gospel of Matthew as appropriated and narrativised by the Ascension of Isaiah for a second-century conflict over prophetic practices.
People aging with long-term physical disabilities (PAwLTPD), meaning individuals with onset of disability from birth through midlife, often require long-term support services (LTSS) to remain independent. The LTSS system is fragmented into aging and disability organizations with little communication between them. In addition, there are currently no evidence-based LTSS-type programs listed on the Administration for Community Living website that have been demonstrated to be effective for PAwLTPD. Because of these gaps, we have developed a community-based research network (CBRN), drawing on the practice-based research network (PBRN) model, to bring together aging and disability organizations to address the lack of evidence-based programs for PAwLTPD.
Materials and Methods:
Community-based organizations serving PAwLTPD across the state of Missouri were recruited to join the CBRN. A formative process evaluation of the network was conducted after a year to evaluate the effectiveness of the network.
Nine community-based organizations across the state of Missouri joined the CBRN. CBRN members include three centers for independent living (CILs), three area agencies on aging (AAAs), one CIL/AAA hybrid, one non-CIL disability organization, and one non-AAA aging organization. To date, we have held seven meetings, provided educational opportunities for CBRN members, and launched an inaugural research study within the CBRN. Formative evaluation data indicate that CBRN members feel that participation in the CBRN is beneficial.
The PBRN model appears to be a feasible framework for use with community-based organizations to facilitate communication between agencies and to support research aimed at addressing the needs of PAwLTPD.
Introduction: Blood transfusions continue to be a critical intervention in patients presenting to emergency departments (EDs). Improved understanding of the adverse events associated with transfusions has led to new research to inform and delineate transfusion guidelines. The Nova Scotia Guideline for Blood Component Utilization in Adults and Pediatrics was implemented in June 2017 to reflect current best practice in transfusion medicine. The guideline includes a lowering of the hemoglobin threshold for transfusion initiation from 80 g/L to 70 g/L, to be used in conjunction with the patient's hemodynamic assessment before and after transfusions. Our study aims to augment understanding of transfusion guideline adherence and ED physician transfusing practices at the Halifax Infirmary Emergency Department in Nova Scotia. Methods: A retrospective chart review was conducted on one third of all ED visits involving red-cell transfusions for one year prior to and one year following the guideline implementation. A total of 350 charts were reviewed. The primary data abstracted from each reviewed chart, for the initial transfusion and any subsequent transfusion, included clinical and laboratory data reflective of the transfusion guideline. Based on these data, each transfusion event was classified in one of three ways: indicated based on hemoglobin level, indicated based on the patient's symptomatic presentation, or unable to determine whether transfusion was indicated based on charting. Results: In the year before guideline implementation, 31 of 146 total transfusions were initiated at a hemoglobin between 71 and 80 g/L. This number dropped by 23.6% to 22 of 136 in the year following guideline implementation. The number of single-unit transfusions increased by 28.0%, from 47 of 146 in the year prior to 56 of 136 in the year after guideline implementation.
Cases where the indication for the initial transfusion could not be determined from the charting provided increased by 120%, and cases where the indication for subsequent transfusions could not be determined increased by 1500% (P < 0.05). Conclusion: These data suggest that implementing transfusion guidelines effectively reduced the number of transfusions given in the ED setting and increased the number of single-unit transfusions administered. However, the data also suggest the need for better education around transfusion indications and for proper documentation clearly outlining the rationale behind the decision to transfuse.
Background: In Canada, injuries represent 21% of Emergency Department (ED) visits. Faced with occupational injuries, physicians may feel pressured to provide urgent imaging to facilitate expedited return to work. There is no body of literature to support this practice. Twenty percent of adult ED injuries involve workers' compensation. Aim Statement: Tacit pressures were felt to impact imaging rates for patients with workplace injuries, and our aim was to determine if this hypothesis was accurate. We conducted a quality review to assess imaging rates among injuries suffered at work and outside work. A secondary aim was to reduce the harm resulting from non-value-added testing. Measures & Design: Information was collected from the Emergency Department Information System on patients over the age of 16 years with acute injuries, including upper limb, lower limb, neck, back and head injuries. Data included both workplace and non-work-related presentations, Canadian Triage and Acuity Scale (CTAS) levels and age at presentation. Imaging included any of X-ray, CT, MRI, or ultrasound ordered in EDs across the central zone of Nova Scotia from July 1, 2009 to June 30, 2019. A total of 282,860 patient encounters were included for analysis. Comparison was made between patients presenting under the Workers' Compensation Board of Nova Scotia (WCB) and those covered by the Department of Health and Wellness (DOHW). Imaging rates for all injuries were also trended over this ten-year period. Evaluation/Results: In patients between 16 and 65 years, the WCB group underwent more imaging (55.3% of visits) than did the DOHW group (43.1% of visits). In the same cohort, there was an overall decrease of over 10% in mean imaging rates for both WCB and DOHW between the first five-year period (2009-2013) and the second five-year period (2013-2018). Imaging rates for WCB and DOHW converged with each decade of age beyond 35 years.
No comparison was possible beyond 85 years, due to the absence of WCB presentations. Discussion/Impact: Patients presenting to the ED with workplace injuries are imaged at a higher rate than those covered by the DOHW. Campaigns promoting value-added care may have impacted imaging rates during the ten-year study period, explaining the decline in ED imaging for all injuries. While this 10% decrease in overall imaging is encouraging, these preliminary data indicate the need for further education on resource stewardship, especially for patients presenting to the ED with workplace injuries.
Background: Trauma care represents a complex patient journey, requiring multi-disciplinary coordinated care. Team members are human, and as such, how they feel about their colleagues and their work affects performance. The challenge for health service leaders is enabling a culture that supports high levels of collaboration, cooperation and coordination across diverse groups. Aim Statement: We aimed to define and set the agenda for improvement of the relational aspects of trauma care at a large tertiary care hospital. Measures & Design: We conducted a mixed-methods collaborative ethnography using the Relational Coordination survey – an established tool to analyze the relational dimensions of multidisciplinary teamwork – participant observation, interviews, and narrative surveys. Findings were presented to clinicians in working groups for further interpretation and to facilitate co-creation of targeted interventions designed to improve team relationships and performance. Evaluation/Results: We engaged a complex multidisciplinary network of ~500 care providers dispersed across seven core interdependent clinical disciplines. Initial findings highlighted the importance of relationships in trauma care and opportunities to improve. Narrative survey and ethnographic findings further highlighted the centrality of a translational simulation program in contributing positively to team culture and relational ties. A range of 16 interventions – focusing on structural, process and relational dimensions – were co-created with participants and are now being implemented and evaluated by various trauma care providers. Discussion/Impact: Through engagement of clinicians spanning organizational boundaries, relational aspects of care can be measured and directly targeted in a collaborative quality improvement process.
We encourage health care leaders to consider relationship-based quality improvement strategies, including translational simulation and relational coordination processes, in their efforts to improve care for patients with complex, interdependent journeys.
Introduction: Choosing Wisely Nova Scotia (CWNS), an affiliate of Choosing Wisely Canada™ (CWC), aims to address unnecessary care and testing through literature-informed lists developed by various disciplines. CWC has identified unnecessary head CTs among the top five interventions to question in the Emergency Department (ED). Zyluk (2015) identified the Canadian CT Head Rule (CCHR) as the most effective clinical decision rule for adults with minor head injuries. To better understand the current status of CCHR use in Nova Scotia, we conducted a retrospective audit of patient charts at the Charles V. Keating Emergency and Trauma Center in Halifax, Nova Scotia. Methods: Our mixed-methods design included a literature review, a retrospective chart audit, and a qualitative audit-feedback component with participating physicians. The chart audit applied the guidelines for adherence to the CCHR and reported on the level of compliance within the ED. Analysis of the qualitative data is included here in parallel, to contextualize findings from the audit. Results: 302 charts of patients having presented to the surveyed site were retrospectively reviewed. Of the 37 cases where a head CT was indicated as per the CCHR, a CT was ordered 32 (86.5%) times. Of the 176 cases where a head CT was not indicated, a CT was not ordered 155 (88.1%) times. Therefore, the CCHR was followed in 187 (87.8%) of the total 213 cases where the CCHR should be applied. Conclusion: Our study reveals adherence to the CCHR in 87.8% of cases at this ED. Identifying contextual factors that facilitate or hinder the application of the CCHR in practice is critical for reducing unnecessary CTs. This work has been presented to the physician group to gain physician engagement and to elucidate enablers of and barriers to guideline adherence. In light of the frequency of head CTs ordered in EDs, even a small reduction would be impactful.
Introduction: The Cunningham reduction method for anterior shoulder dislocation offers an atraumatic alternative to traditional reduction techniques without the inconvenience and risk of procedural sedation and analgesia (PSA). Unfortunately, success rates as low as 27% have limited widespread use of this method. Inhaled methoxyflurane (I-MEOF) offers a rapidly administered, minimally invasive option for short-term analgesia. We conducted a pilot study to evaluate the feasibility of studying whether I-MEOF increased success rates for atraumatic reduction of anterior shoulder dislocation. Methods: A convenience sample of 20 patients with uncomplicated anterior shoulder dislocations were offered the Cunningham reduction method supported by methoxyflurane analgesia under the guidance of an advanced care paramedic. Operators were instructed to limit their attempt to the Cunningham method. Outcomes included success rate without the requirement for PSA, time to discharge, and operator and patient satisfaction with the procedure. Results: 20 patients received I-MEOF and an attempt at Cunningham reduction. 80% of patients were male, median age was 38.6 (range 18-71), and 55% were first dislocations of that joint. 35% (8/20 patients) had reduction successfully achieved by the Cunningham method under I-MEOF analgesia. The remainder proceeded to closed reduction under PSA. All patients had eventual successful reduction in the ED. 60% of operators reported good to excellent satisfaction with the process, with inadequate muscle relaxation being identified as the primary cause of failed initial attempts. 80% of patients reported good to excellent satisfaction. Conclusion: Success with the Cunningham technique was marginally increased with the use of I-MEOF, although 65% of patients still required PSA to facilitate reduction. The process was generally met with satisfaction by both providers and patients, suggesting that early administration of analgesia is appreciated. 
Moreover, one-third of patients had reduction achieved atraumatically without need for further intervention. A larger, randomized study may identify patient characteristics which make this reduction method more likely to be successful.
The current study aimed to understand the mediating and/or moderating role of prenatal hypothalamic–pituitary–adrenal (HPA) axis function in the association between maternal adverse childhood experiences (ACEs) and child internalizing and externalizing behavior problems at age 4. The influence of timing and child sex were also explored. Participants were 248 mother–child dyads enrolled in a prospective longitudinal cohort study (the Alberta Pregnancy Outcomes and Nutrition Study). Maternal ACEs were retrospectively assessed, while maternal self-reported depression and diurnal salivary cortisol were assessed prospectively at 6–26 weeks gestation (T1) and 27–37 weeks gestation (T2). Maternal report of child internalizing and externalizing problems was assessed at 4 years (T3). Results revealed that there was a negative indirect association between maternal ACEs and child internalizing behavior via a higher maternal cortisol awakening response (CAR). Maternal diurnal cortisol slope moderated the association between maternal ACEs and child behavior problems. Some of these effects were dependent on child sex, such that higher ACEs and a flatter diurnal slope at T1 were associated with more internalizing behavior in female children and more externalizing behavior in male children. There were timing effects such that the mediating and moderating effects were strongest at T1.
We developed a tilt sensor for studying ice deformation and installed our tilt sensor systems in two boreholes drilled close to the shear margin of Jarvis Glacier, Alaska, to obtain kinematic measurements of streaming ice. We used the collected tilt data to calculate borehole deformation by tracking the orientation of the sensors over time. The sensors' tilts generally trended down-glacier, with an element of cross-glacier flow in the borehole closer to the shear margin. We also evaluated our results against flow dynamic parameters derived from Glen's flow law and explored the parameter space of the stress exponent n and enhancement factor E. Comparison with values from ice deformation experiments shows that the ice on Jarvis is characterized by higher n values than expected in regions of low stress, particularly at the shear margin (~3.4). The higher n values could be attributed to the observed high total strains coupled with potential dynamic recrystallization, driving the development of anisotropy and consequently speeding up ice flow. Jarvis' n values place the creep regime of the ice between basal slip and dislocation creep. Tuning E towards a theoretical upper limit of 10 for anisotropic ice with a single-maximum fabric reduces the n values by 0.2.
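Glen's flow law relates strain rate to stress as a power law with the stress exponent n and enhancement factor E explored above. A minimal sketch (the rate factor A here is an illustrative textbook value for temperate ice, not a Jarvis Glacier measurement):

```python
def glen_strain_rate(tau, n=3.0, A=2.4e-24, E=1.0):
    """Shear strain rate (1/s) from Glen's flow law: rate = E * A * tau**n.
    tau: shear stress in Pa; A: rate factor (Pa^-n s^-1, illustrative
    value for temperate ice); E: dimensionless enhancement factor."""
    return E * A * tau ** n
```

At a fixed stress, raising n above 3 or pushing E towards 10 both increase the modeled rate, which is why the two parameters trade off when fitting the borehole deformation data.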
Frascati international research criteria for HIV-associated neurocognitive disorders (HAND) are controversial; some investigators have argued that Frascati criteria are too liberal, resulting in a high false positive rate. Meyer et al. recommended more conservative revisions to HAND criteria, including exploring other commonly used methodologies for neurocognitive impairment (NCI) in HIV including the global deficit score (GDS). This study compares NCI classifications by Frascati, Meyer, and GDS methods, in relation to neuroimaging markers of brain integrity in HIV.
Two hundred forty-one people living with HIV (PLWH) without current substance use disorder or severe (confounding) comorbid conditions underwent comprehensive neurocognitive testing, brain structural magnetic resonance imaging, and magnetic resonance spectroscopy. Participants were classified using Frascati criteria versus Meyer criteria: concordant unimpaired [Frascati(Un)/Meyer(Un)], concordant impaired [Frascati(Imp)/Meyer(Imp)], or discordant [Frascati(Imp)/Meyer(Un)], i.e., impaired via Frascati criteria but unimpaired via Meyer criteria. To investigate the GDS versus Meyer criteria, the same groupings were formed using GDS criteria instead of Frascati criteria.
When examining Frascati versus Meyer criteria, discordant Frascati(Imp)/Meyer(Un) individuals had less cortical gray matter, greater sulcal cerebrospinal fluid volume, and greater evidence of neuroinflammation (i.e., choline) than concordant Frascati(Un)/Meyer(Un) individuals. GDS versus Meyer comparisons indicated that discordant GDS(Imp)/Meyer(Un) individuals had less cortical gray matter and lower levels of energy metabolism (i.e., creatine) than concordant GDS(Un)/Meyer(Un) individuals. In both sets of analyses, the discordant group did not differ from the concordant impaired group on any neuroimaging measure.
The Meyer criteria failed to capture a substantial portion of PLWH with brain abnormalities. These findings support continued use of Frascati or GDS criteria to detect HIV-associated CNS dysfunction.
Background: Spinal muscular atrophy (SMA) is characterized by reduced levels of survival of motor neuron (SMN) protein resulting from deletions and/or mutations of the SMN1 gene. While SMN1 produces full-length SMN protein, a second gene, SMN2, produces low levels of functional SMN protein. Risdiplam (RG7916/RO7034067) is an investigational, orally administered, centrally and peripherally distributed small molecule that modulates pre-mRNA splicing of SMN2 to increase SMN protein levels. Methods: SUNFISH (NCT02908685) is an ongoing multicenter, double-blind, placebo-controlled, operationally seamless study (randomized 2:1, risdiplam:placebo) in patients aged 2–25 years with Type 2/3 SMA. Part 1 (n=51) assesses safety, tolerability, pharmacokinetics and pharmacodynamics of different risdiplam dose levels. Pivotal Part 2 (n=180) assesses safety and efficacy of the risdiplam dose level selected based on Part 1 results. Results: Part 1 results showed a sustained, >2-fold increase in median SMN protein versus baseline following 1 year of treatment. Adverse events were mostly mild, resolved despite ongoing treatment and reflected underlying disease. No drug-related safety findings have led to withdrawal (data-cut 06/17/18). SUNFISH Part 1 exploratory endpoint results and the Part 2 study design will also be presented. Conclusions: To date, no drug-related safety findings have led to withdrawal. Risdiplam led to sustained increases in SMN protein levels.
Background: To attain the most comprehensive view of the quality of life (QoL) of a child with Duchenne Muscular Dystrophy (DMD), completion of a pediatric QoL measure by both the child and his/her parent, and assessment of QoL and health-related quality of life (HRQoL) as separate constructs, is crucial. Previous QoL research has not assessed HRQoL as a separate construct. Using the Quality of My Life (QoML) questionnaire, our objective was to describe QoL and HRQoL in boys with DMD based on child- and parent-reports. Methods: Parent and child dyads identified via the Canadian Neuromuscular Disease Registry received QoML questionnaires (2013-2016). Each child and parent proxy completed the QoL and HRQoL Visual Analog Scales. Responses were marked on a 10-cm line, with higher scores (max = 10) reflecting higher QoL and HRQoL. Descriptive statistics were computed for child- and parent-reports of QoL and HRQoL at three time points. Results: Mean (SD) QoL and HRQoL scores for child- vs. parent-reports were: 1) baseline (n = 20 dyads), 8.32 (1.72) vs. 6.73 (2.23) and 7.63 (2.51) vs. 6.73 (2.19); 2) 18 months (n = 10 dyads, n = 9 dyads), 7.83 (2.05) vs. 7.66 (1.66) and 7.62 (2.41) vs. 7.41 (2.16); 3) 36 months (n = 15 dyads), 7.38 (2.00) vs. 6.99 (1.77) and 7.19 (2.70) vs. 6.76 (2.26). Conclusions: Boys with DMD report higher QoL and HRQoL than their parents report for them.
Oats can be processed in a variety of ways ranging from minimally processed such as steel-cut oats (SCO), to mildly processed such as large-flake oats (old fashioned oats, OFO), moderately processed such as instant oats (IO) or highly processed in ready-to-eat oat cereals such as Honey Nut Cheerios (HNC). Although processing is believed to increase glycaemic and insulinaemic responses, the effect of oat processing in these respects is unclear. Thus, we compared the glycaemic and insulinaemic responses elicited by 628 kJ portions of SCO, OFO, IO and HNC and a portion of Cream of Rice cereal (CR) containing the same amount of available-carbohydrate (23 g) as the oatmeals. Healthy males (n 18) and females (n 12) completed this randomised, cross-over trial. Blood was taken fasting and at intervals for 3 h following test-meal consumption. Glucose and insulin peak-rises and incremental AUC (iAUC) were subjected to repeated-measures ANOVA using Tukey’s test (two-sided P<0·05) to compare individual means. Glucose peak-rise (primary endpoint, mean (sem) mmol/l) after OFO, 2·19 (sem 0·11), was significantly less than after CR, 2·61 (sem 0·13); and glucose peak-rise after SCO, 1·93 (sem 0·13), was significantly less than after CR, HNC, 2·49 (sem 0·13) and IO 2·47 (sem 0·13). Glucose iAUC was significantly lower after SCO than CR and HNC. Insulin peak rise was similar among the test meals, but insulin iAUC was significantly less after SCO than IO. Thus, the results show that oat processing affects glycaemic and insulinaemic responses with lower responses associated with less processing.
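The incremental AUC (iAUC) used as an endpoint above is typically computed by the trapezoidal rule over the area above the fasting baseline. A minimal sketch of that idea (a simplified version; the study's exact computation, e.g. handling of segments that cross the baseline, may differ):

```python
def incremental_auc(times, values):
    """Incremental AUC above the fasting baseline by the trapezoidal rule,
    counting only area above baseline (a common simplification of the
    full geometric method)."""
    base = values[0]  # fasting value is the baseline
    inc = [max(v - base, 0.0) for v in values]
    return sum((inc[i] + inc[i - 1]) / 2.0 * (times[i] - times[i - 1])
               for i in range(1, len(times)))
```

A flatter post-meal glucose curve, as reported for the less-processed oats, directly lowers both the peak rise and the iAUC computed this way.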
Introduction: Patients with advanced or end-stage illness frequently present to emergency departments (EDs), and many are in need of palliative care (PC). Emergency physicians have struggled to provide high-quality care to these patients, and there is a need to identify cost-effective PC interventions delivered in the ED to improve patient outcomes. The objective of this systematic review was to examine the effectiveness of ED-based PC interventions. Methods: A comprehensive search of nine electronic databases and grey literature sources was conducted to identify any comparative studies assessing the effectiveness of ED-based PC interventions to improve health outcomes of patients with advanced or end-stage illness. Two independent reviewers completed study selection, quality assessment, and data extraction. Differences were resolved via third-party adjudication. Relative risks (RRs) with 95% confidence intervals (CIs) were calculated using a random-effects model, and heterogeneity (I2) was reported. Results: From 5882 potentially eligible citations, 12 studies were included. Two are ongoing clinical trials, so 10 studies were included in this analysis. The studies consisted of before-after studies (n = 5), RCTs (n = 4), and an observational cohort (n = 1). Interventions assessed among the included studies consisted primarily of ED-directed PC consultations (n = 6), while other studies assessed screening of patients with advanced or end-stage illness and PC needs (n = 2), education on PC for ED staff (n = 1), and an ED-based critical care unit (n = 1). Infrequent reporting of important outcomes (e.g., mortality, ED relapse) limited the ability of this review to conduct meaningful meta-analysis. There was no difference in patient mortality between the two studies assessing ED-directed PC consultations (RR = 0.89; 95% CI: 0.71, 1.13; I2 = 0%).
One before-after study (RR = 0.73; 95% CI: 0.47, 1.13) and two RCTs (RR = 2.19; 95% CI: 0.40, 11.92; I2 = 96%) did not identify significant differences in PC consultations between intervention (implementation of ED-directed PC consultations) and control (usual care) patients. Conclusion: This review found limited evidence to support the recommendation of any particular ED-based intervention for patients presenting to the ED with advanced or end-stage illness. High-quality studies and standardized outcome reporting are needed to better understand the impact of PC interventions in the ED setting.
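The relative risks with 95% CIs reported above are built from per-study event counts. A minimal sketch of a single-study RR with the standard log-scale Wald interval (illustrative only, not the review's random-effects meta-analysis code):

```python
import math

def relative_risk(events_t, n_t, events_c, n_c):
    """RR and 95% CI for events_t/n_t (intervention) vs events_c/n_c
    (control), using the log-scale Wald interval."""
    rr = (events_t / n_t) / (events_c / n_c)
    # standard error of log(RR)
    se = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi
```

A CI spanning 1.0, as in every pooled estimate above, indicates no statistically significant difference between intervention and control.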