Recent evidence suggests that quitline text messaging is an effective treatment for smoking cessation, but little is known about the relative effectiveness of different message content.
We conducted a pilot study of the effects of gain-framed (GF; focused on the benefits of quitting) versus loss-framed (LF; focused on the costs of continued smoking) text messages among smokers contacting a quitline.
Participants were randomized to receive LF (N = 300) or GF (N = 300) text messages for 30 weeks. Self-reported 7-day point prevalence abstinence and number of 24 h quit attempts were assessed at week 30. Intent-to-treat (ITT) and responder analyses for smoking cessation were conducted using logistic regression.
The ITT analysis showed 17% of the GF group quit smoking compared to 15% in the LF group (P = 0.508). The responder analysis showed 44% of the GF group quit smoking compared to 35% in the LF group (P = 0.154). More participants in the GF group reported making a 24 h quit attempt compared to the LF group (98% vs. 93%, P = 0.046).
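As a quick illustration of the ITT comparison above, the two quit proportions can be checked with a simple two-proportion test. The counts below are back-calculated from the reported percentages (51/300 and 45/300 are assumptions), and the original analysis used logistic regression rather than this z-test, so this is a sketch, not a reproduction.

```python
# Illustrative two-proportion comparison of ITT quit rates.
# Counts reconstructed from the reported 17% and 15% (assumed, not study data).
from statsmodels.stats.proportion import proportions_ztest

quit_gf, n_gf = 51, 300   # ~17% of 300 in the gain-framed group
quit_lf, n_lf = 45, 300   # ~15% of 300 in the loss-framed group
z, p = proportions_ztest([quit_gf, quit_lf], [n_gf, n_lf])
print(f"z = {z:.2f}, p = {p:.3f}")  # p ~ 0.50, in line with the reported P = 0.508
```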
Although there were no differences in abstinence rates between groups at the week 30 follow-up, a greater proportion of participants in the GF group made a 24 h quit attempt than in the LF group.
Introduction: Paramedics commonly administer intravenous dextrose to severely hypoglycemic patients. Typically, the treatment provided is a 25 g ampule of 50% dextrose (D50). This dose of D50 is meant to ensure a return to consciousness. However, it may be unnecessary and may lead to harm or to difficulties regulating blood glucose post-treatment. We hypothesize that a lower dose, such as dextrose 10% (D10), or titrating D50 to the desired level of consciousness, may be optimal and avoid adverse events. Methods: We systematically searched Medline, Embase, CINAHL and Cochrane Central on June 5, 2019. PRISMA guidelines were followed. GRADE methods and risk-of-bias assessments were applied to determine the certainty of the evidence. We included primary literature investigating the use of intravenous dextrose in hypoglycemic diabetic patients presenting to paramedics or the emergency department. Outcomes of interest related to the safe and effective reversal of symptoms and blood glucose levels (BGL). Results: 660 abstracts were screened and 40 full-text articles reviewed, with eight studies included. Data from three randomized controlled trials and five observational studies were analyzed. A single RCT comparing D10 to D50 was identified; its primary significant finding was a post-treatment glycemic profile higher by 3.2 mmol/L in the D50 group, with no other significant between-group differences. When comparing pooled data from all included studies, we found higher symptom resolution in the D10 group than in the D50 group (99.8% vs. 94.9%). However, the mean time to resolution was approximately 4 minutes longer with D10 (4.1 minutes (D50) vs. 8 minutes (D10)). Subsequent doses were needed more often in the D10 group (23.0% vs. 16.5% for D50). The post-treatment glycemic profile was lower in the D10 group (5.9 mmol/L vs. 8.5 mmol/L for D50). Both treatments achieved nearly complete resolution of hypoglycemia: 98.7% (D50) and 99.2% (D10). No adverse events were observed in the D10 group (0/871), compared with 12/133 in the D50 group. Conclusion: D10 may be as effective as D50 at resolving symptoms and correcting hypoglycemia. Although the desired effect can take several minutes longer, there appear to be fewer adverse events. The lower post-treatment glycemic profile may also make ongoing glucose management less challenging for patients.
Introduction: For rhythm control of acute atrial flutter (AAFL) in the emergency department (ED), choices include initial drug therapy or initial electrical cardioversion (ECV). We compared the strategies of pharmacological cardioversion followed by ECV if necessary (Drug-Shock), and ECV alone (Shock Only). Methods: We conducted a randomized, blinded, placebo-controlled trial (1:1 allocation) comparing two rhythm control strategies at 11 academic EDs. We included stable adult patients with AAFL with onset of symptoms <48 hours. Patients underwent central web-based randomization stratified by site. The Drug-Shock group received an infusion of procainamide (15 mg/kg over 30 minutes) followed 30 minutes later, if necessary, by ECV at 200 joules x 3 shocks. The Shock Only group received an infusion of saline followed, if necessary, by ECV x 3 shocks. The primary outcome was conversion to sinus rhythm for ≥30 minutes at any time following onset of infusion. Patients were followed for 14 days. The primary outcome was evaluated on an intention-to-treat basis. Statistical significance was assessed using chi-squared tests and multivariable logistic regression. Results: We randomized 76 patients, and none was lost to follow-up. The Drug-Shock (N = 33) and Shock Only (N = 43) groups were similar for all characteristics including mean age (66.3 vs 63.4 yrs), duration of AAFL (30.1 vs 24.5 hrs), previous AAFL (72.7% vs 69.8%), median CHADS2 score (1 vs 1), and mean initial heart rate (128.9 vs 126.0 bpm). The Drug-Shock and Shock Only groups were similar for the primary outcome of conversion (100% vs 93%; absolute difference 7.0%, 95% CI −0.6 to 14.6; P = 0.25). The multivariable analyses confirmed the similarity of the two strategies (P = 0.19). In the Drug-Shock group, 21.2% of patients converted with the infusion. There were no statistically significant differences in time to conversion (84.2 vs 97.6 minutes), total ED length of stay (9.4 vs 7.5 hours), disposition home (100% vs 95.3%), or stroke within 14 days (0 vs 0). Premature discontinuation of infusion (usually for transient hypotension) was more common in the Drug-Shock group (9.1% vs 0.0%), but there were no serious adverse events. Conclusion: Both the Drug-Shock and Shock Only strategies were highly effective and safe in allowing AAFL patients to go home in sinus rhythm. IV procainamide alone was effective in only one fifth of patients, much less than for acute AF.
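For readers who want to check the headline comparison, a minimal sketch follows. The counts (33/33 vs 40/43) are inferred from the reported percentages, and the Wald method for the difference CI is an assumption, though it recovers the reported interval closely.

```python
# Sketch: absolute difference in conversion rates with a 95% CI.
# Counts inferred from the abstract (100% of 33 vs ~93% of 43); CI method assumed.
from statsmodels.stats.proportion import confint_proportions_2indep

converted_ds, n_ds = 33, 33      # Drug-Shock
converted_so, n_so = 40, 43      # Shock Only
diff = converted_ds / n_ds - converted_so / n_so
low, upp = confint_proportions_2indep(converted_ds, n_ds, converted_so, n_so,
                                      method="wald", compare="diff")
print(f"diff = {100*diff:.1f}%, 95% CI ({100*low:.1f}%, {100*upp:.1f}%)")
# ~7.0% (-0.6, 14.6), matching the reported values
```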
Introduction: CAEP recently developed the acute atrial fibrillation (AF) and flutter (AFL) [AAFF] Best Practices Checklist to promote optimal care and guidance on cardioversion and rapid discharge of patients with AAFF. We sought to assess the impact of implementing the Checklist in large Canadian EDs. Methods: We conducted a pragmatic stepped-wedge cluster randomized trial in 11 large Canadian ED sites in five provinces, over 14 months. All hospitals started in the control period (usual care) and then crossed over to the intervention period in random sequence, one hospital per month. We enrolled consecutive, stable patients presenting with AAFF whose symptoms required ED management. Our intervention was informed by qualitative stakeholder interviews that identified perceived barriers and enablers for rapid discharge of AAFF patients. The multifaceted intervention included local champions, presentation of the Checklist to physicians in group sessions, an online training module, a smartphone app, and targeted audit and feedback. The primary outcome was ED length of stay in minutes from time of arrival to time of disposition, analyzed at the individual patient level using linear mixed effects regression accounting for the stepped-wedge design. We estimated a sample size of 800 patients. Results: We enrolled 844 patients with none lost to follow-up. Those in the control (N = 316) and intervention periods (N = 528) were similar for all characteristics including mean age (61.2 vs 64.2 yrs), duration of AAFF (8.1 vs 7.7 hrs), AF (88.6% vs 82.9%), AFL (11.4% vs 17.1%), and mean initial heart rate (119.6 vs 119.9 bpm). Median lengths of stay for the control and intervention periods were 413.0 and 354.0 minutes, respectively (P < 0.001). Comparing control to intervention, there was an increase in the use of antiarrhythmic drugs (37.4% vs 47.4%; P < 0.01), electrical cardioversion (45.1% vs 56.8%; P < 0.01), and discharge in sinus rhythm (75.3% vs 86.7%; P < 0.001). There was a decrease in ED consultations to cardiology and medicine (49.7% vs 41.1%; P < 0.01), and a small, nonsignificant increase in anticoagulant prescriptions (39.6% vs 46.5%; P = 0.21). Conclusion: This multicenter implementation of the CAEP Best Practices Checklist led to a significant decrease in ED length of stay along with more ED cardioversions, fewer ED consultations, and more discharges in sinus rhythm. Widespread and rigorous adoption of the CAEP Checklist should lead to improved care of AAFF patients in all Canadian EDs.
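The primary analysis described above can be sketched as an individual-level linear mixed model with fixed effects for intervention and calendar period and a random intercept for site. The code below is a minimal illustration on synthetic data; the variable names, crossover schedule, and effect sizes are all assumptions, not the trial's dataset.

```python
# Minimal stepped-wedge sketch: mixed model with site random intercepts.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 844
df = pd.DataFrame({"site": rng.integers(0, 11, n),     # 11 EDs (hypothetical labels)
                   "period": rng.integers(0, 14, n)})  # 14 monthly steps
# One site crosses over to the intervention each month (illustrative schedule).
df["intervention"] = (df["period"] > df["site"]).astype(int)
df["los_minutes"] = 413 - 50 * df["intervention"] + rng.normal(0, 120, n)

fit = smf.mixedlm("los_minutes ~ intervention + C(period)", df,
                  groups=df["site"]).fit()
print(fit.summary())
```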
Introduction: The Prehospital Evidence-based Practice (PEP) program is an online, freely accessible, continuously updated repository of appraised EMS research evidence. This report is an analysis of published evidence for EMS interventions used to assess and treat patients suffering from hypoglycemia. Methods: PubMed was systematically searched in June 2019. One author screened titles, abstracts and full texts for relevance. Trained appraisers reviewed full-text articles, scored each on a three-point Level of Evidence (LOE) scale (based on study design and quality) and a three-point Direction of Evidence (DOE) scale (supportive, neutral, or opposing findings for each intervention's primary outcome), abstracted the primary outcome and setting, and assigned an outcome category (patient or process). Second-party appraisal was conducted for all included studies. The level and direction of evidence for each intervention was plotted in an evidence matrix, based on the appraisals. Results: Twenty-nine studies were included and appraised for seven interventions: five drugs (Dextrose 50% (D50), Dextrose 10% (D10), glucagon, oral glucose and thiamine), one assessment tool (point-of-care (POC) glucose testing) and one call disposition (treat-and-release). The most frequently reported primary outcomes related to clinical improvement (n = 15, 51.7%), feasibility/safety (n = 8, 27.6%), and diagnostics (n = 6, 20.7%). The majority of outcomes were patient-focused (n = 18, 62.0%). Conclusion: EMS interventions for treating hypoglycemia are informed by high-quality supportive evidence. Both D50 and D10 are supported by high-quality evidence, suggesting D10 may be an effective alternative to the standard D50. “Treat-and-release” practices for hypoglycemia are supported by moderate-quality evidence for the patient-related outcomes of relapse, patient preference and complications. This body of evidence is high-quality, patient-focused and conducted in the prehospital setting, and thus generalizable to paramedic practice.
Introduction: Acute heart failure (AHF) is a common emergency department (ED) presentation and may be associated with poor outcomes. Conversely, many patients rapidly improve with ED treatment and may not need hospital admission. Because there is little evidence to guide disposition decisions by ED and admitting physicians, we sought to create a risk score for predicting short-term serious outcomes (SSO) in patients with AHF. Methods: We conducted prospective cohort studies at 9 tertiary care hospital EDs from 2007 to 2019, and enrolled adult patients who required treatment for AHF. Each patient was assessed for standardized real-time clinical and laboratory variables, as well as for SSO (defined as death within 30 days, or intubation, non-invasive ventilation (NIV), myocardial infarction, coronary bypass surgery, or new hemodialysis after admission). The fully pre-specified logistic regression model with 13 predictors (age, pCO2, and SaO2 were modeled using spline functions with 3 knots, and heart rate and creatinine with 5 knots) was fitted to the 10 multiple imputation datasets. Harrell's fast stepdown procedure reduced the number of variables. We calculated the potential impact on sensitivity (95% CI) for SSO and hospital admissions, and estimated a sample size of 170 SSOs. Results: The 2,246 patients had mean age 77.4 years, male sex 54.5%, EMS arrival 41.1%, IV NTG 3.1%, ED NIV 5.2%, and admission on initial visit 48.6%. Overall, there were 174 (7.8%) SSOs, including 70 deaths (3.1%). The final risk scale comprises five variables (points) and had a c-statistic of 0.76 (95% CI: 0.73-0.80): (1) valvular heart disease (1); (2) ED non-invasive ventilation (2); (3) creatinine 150-300 (1) or ≥300 (2); (4) troponin 2x-4x URL (1) or ≥5x URL (2); (5) walk test failed (2). The probability of SSO ranged from 2.0% for a total score of 0 to 90.2% for a score of 10, showing good calibration. The model was stable over 1,000 bootstrap samples. Choosing a risk model total point admission threshold of >2 would yield a sensitivity of 80.5% (95% CI 73.9-86.1) for SSO with no change in admissions from current practice (48.6% vs 48.7%). Conclusion: Using a large prospectively collected dataset, we created a concise and sensitive risk scale to assist with admission decisions for patients with AHF in the ED. Implementation of this risk scale should lead to safer and more efficient disposition decisions, with more high-risk patients being admitted and more low-risk patients being discharged.
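A hypothetical helper that tallies the published five-item score is sketched below; the point values and thresholds follow the abstract, while the function name, argument names, and the creatinine/troponin encodings are illustrative assumptions.

```python
# Hypothetical tally of the five-item AHF risk score (point values as reported).
def ahf_risk_score(valvular_disease: bool, ed_niv: bool,
                   creatinine: float, troponin_x_url: float,
                   walk_test_failed: bool) -> int:
    score = 1 if valvular_disease else 0
    score += 2 if ed_niv else 0
    if creatinine >= 300:            # units assumed to be umol/L
        score += 2
    elif creatinine >= 150:
        score += 1
    if troponin_x_url >= 5:          # multiples of the upper reference limit (URL)
        score += 2
    elif troponin_x_url >= 2:
        score += 1
    score += 2 if walk_test_failed else 0
    return score  # reported SSO risk spans ~2.0% at 0 points to ~90.2% at 10

# Example: NIV + creatinine 180 + troponin 3x URL + failed walk test -> 6 points
print(ahf_risk_score(False, True, 180, 3, True))
```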
Introduction: An important challenge physicians face when treating acute heart failure (AHF) patients in the emergency department (ED) is deciding whether to admit or discharge, with or without early follow-up. The overall goal of our project was to improve care for AHF patients seen in the ED while avoiding unnecessary hospital admissions. The specific goal was to introduce hospital rapid referral clinics to ensure AHF patients were seen within 7 days of ED discharge. Methods: This prospective before-after study was conducted at two campuses of a large tertiary care hospital, including the EDs and specialty outpatient clinics. We enrolled AHF patients ≥50 years who presented to the ED with shortness of breath (<7 days). The 12-month before (control) period was separated from the 12-month after (intervention) period by a 3-month implementation period. Implementation included creation of rapid-access AHF clinics staffed by cardiology and internal medicine, and development of referral procedures. There was extensive in-servicing of all ED staff. The primary outcome measure was hospital admission at the index visit or within 30 days. Secondary outcomes included mortality and actual access to rapid follow-up. We used segmented autoregression analysis of the monthly proportions to determine whether there was a change in admissions coinciding with the introduction of the intervention, and estimated a sample size of 700 patients. Results: The patients in the before period (N = 355) and the after period (N = 374) were similar for age (77.8 vs. 78.1 years), arrival by ambulance (48.7% vs. 51.1%), comorbidities, current medications, and need for non-invasive ventilation (10.4% vs. 6.7%). Comparing the before to the after period, we observed a decrease in hospital admissions at the index visit (from 57.7% to 42.0%; P < 0.01), as well as in all admissions within 30 days (from 65.1% to 53.5%; P < 0.01). The autoregression analysis, however, demonstrated a pre-existing trend to fewer admissions and could not attribute the decrease to the intervention (P = 0.91). Attendance at a specialty clinic amongst those discharged increased from 17.8% to 42.1% (P < 0.01), and the median days to clinic decreased from 13 to 6 (P < 0.01). 30-day mortality did not change (4.5% vs. 4.0%; P = 0.76). Conclusion: Implementation of rapid-access dedicated AHF clinics led to considerably increased access to specialist care, much reduced follow-up times, and a possible reduction in hospital admissions. Widespread use of this approach can improve AHF care in Canada.
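The segmented (interrupted time series) analysis described above can be sketched on monthly admission proportions. The sketch below uses synthetic data and approximates the study's autoregressive error handling with autocorrelation-robust (HAC) standard errors, which is an assumption about method, not the study's code.

```python
# Sketch of a segmented regression: level and slope change at the intervention.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
months = np.arange(24)                          # 12 before + 12 after (toy series)
post = (months >= 12).astype(int)
p_admit = 0.58 - 0.006 * months - 0.02 * post + rng.normal(0, 0.02, 24)

df = pd.DataFrame({"month": months, "post": post,
                   "months_since": np.clip(months - 12, 0, None),
                   "p_admit": p_admit})
fit = smf.ols("p_admit ~ month + post + months_since", df).fit(
    cov_type="HAC", cov_kwds={"maxlags": 1})    # robust to serial correlation
print(fit.params)  # 'post' = level change; 'months_since' = slope change
```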
Japanese stiltgrass is regarded as one of the most troublesome invasive species in the USA. It is commonly found invading forested areas; however, more recently it has been noted invading golf course roughs and out-of-play areas. The purpose of this study was to evaluate POST herbicide control of Japanese stiltgrass on golf courses and highly maintained turfgrass facilities. None of the treatments provided >80% Japanese stiltgrass control 2 weeks after treatment (WAT). At 4 WAT, >80% Japanese stiltgrass control was observed with MSMA, MSMA + metribuzin, amicarbazone, and sethoxydim, while metsulfuron, pinoxaden, and imazapic provided minimal control. By 8 WAT, MSMA, MSMA + metribuzin, amicarbazone, and sethoxydim provided >98% control, while quinclorac, metsulfuron, pinoxaden, and imazapic provided no visible control. Thiencarbazone-methyl + foramsulfuron + halosulfuron-methyl and sulfentrazone provided limited (≤60%) control. This study indicates POST control of Japanese stiltgrass can be achieved with MSMA, MSMA + metribuzin, amicarbazone, and sethoxydim. Future research should include long-term control over multiple growing seasons, repeat herbicide applications, and evaluation of herbicide combinations for increased and longer-term Japanese stiltgrass control.
Commonly used measures of instrumental activities of daily living (IADL) do not capture activities for a technologically advancing society. This study aimed to adapt the proxy/informant-based Amsterdam IADL Questionnaire (A-IADL-Q) for use in the UK and develop a self-report version.
An iterative mixed-methods cross-cultural adaptation of the A-IADL-Q and development of a self-report version, involving a three-step design: (1) interviews and focus groups with lay and professional stakeholders to assess face and content validity; (2) a questionnaire measuring item relevance to older adults in the UK; (3) a pilot of the adapted questionnaire in people with cognitive impairment.
Community settings in the UK.
One hundred and forty-eight participants took part across the three steps: (1) 14 dementia professionals; 8 people with subjective cognitive decline (SCD), mild cognitive impairment (MCI), or dementia due to Alzheimer’s disease; and 6 relatives of people with MCI or dementia; (2) 92 older adults without cognitive impairment; and (3) 28 people with SCD or MCI.
The cultural relevance and applicability of the A-IADL-Q scale items were assessed using a 6-point Likert scale. Cognitive and functional performance was measured using a battery of cognitive and functional measures.
Iterative modifications to the scale resulted in a 55-item adapted version appropriate for UK use (A-IADL-Q-UK). Pilot data revealed that the new and revised items performed well. Four new items correlated with the weighted average score (Kendall’s Tau −.388, −.445, −.497, −.569). An exploratory analysis of convergent validity found correlations in the expected direction with cognitive and functional measures.
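As a pointer for readers replicating the item-total analysis, Kendall's tau can be computed directly; the data below are invented purely for illustration.

```python
# Toy Kendall's tau between a new item and a total score (illustrative data only).
from scipy.stats import kendalltau

new_item = [0, 1, 1, 2, 3, 3, 4]                 # hypothetical item responses
weighted_total = [5, 9, 8, 14, 18, 17, 22]       # hypothetical weighted scores
tau, p = kendalltau(new_item, weighted_total)
print(f"tau = {tau:.3f}, p = {p:.3f}")
```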
The A-IADL-Q-UK provides a measurement of functional decline for use in the UK that captures culturally relevant activities. A new self-report version has been developed and is ready for testing. Further evaluation of the A-IADL-Q-UK for construct validity is now needed.
Lymphopenia is common in adults who have had a Fontan operation although its aetiology and clinical implications remain unknown. Previous work suggests an association between lymphopenia and both liver disease and splenomegaly. The objective of this study was to assess the prevalence of lymphopenia in adults with a Fontan circulation and evaluate its associations with risk factors and clinical outcomes. Using a retrospective cohort study design, we studied 73 adult Fontan patients (age 25.0 ± 8.4 years) who had a complete blood count and abdominal imaging performed. Patients with protein-losing enteropathy were excluded. Clinical data were extracted from hospital records. The mean white blood cell count was 6580 ± 220/ml with a mean lymphocyte count of 1223 ± 508/ml. Lymphopenia, defined as lymphocyte count <1000/ml, was present in 23 (32%) patients. Patients with lymphopenia had a lower total white blood cell count (5556 ± 2517 versus 7136 ± 1924/ml, p = 0.009) and a lower platelet count (162 ± 69 versus 208 ± 69 k/ml, p = 0.008). Lymphopenia was also associated with findings of portal hypertension, including splenomegaly (36 versus 14%, p = 0.04), varices (22 versus 6%, p = 0.04), and ascites (39 versus 14%, p = 0.02). Lymphopenia did not correlate with any cardiac imaging, haemodynamic or exercise testing variables. In conclusion, lymphopenia is common in adult Fontan patients and is associated with markers of portal hypertension. Larger studies are needed to better define the relationship between lymphopenia and clinical outcomes.
Participation in European surveillance for bloodstream infection (BSI) commenced in Ireland in 1999, with all laboratories (n = 39) participating by 2014. Observational hand hygiene auditing (OHHA) was implemented in 2011. The aim of this study was to evaluate the impact of OHHA on hand hygiene compliance, alcohol hand rub (AHR) procurement and the incidence of sensitive and resistant Staphylococcus aureus, Enterococcus faecium and E. faecalis BSI. A prospective segmented regression analysis was performed to determine the temporal association between OHHA and outcomes. Observed hand hygiene compliance improved from 74.7% (73.7–75.6) in 2011 to 90.8% (90.1–91.3) in 2016. AHR procurement increased from 20.1 l/1000 bed days used (BDU) in 2009 to 33.2 l/1000 BDU in 2016. A pre-intervention reduction of 2% per quarter in the ratio of methicillin-sensitive Staphylococcus aureus BSI/BDU stabilized in the period after the intervention (P < 0.01). The ratio of methicillin-resistant Staphylococcus aureus (MRSA) BSI/BDU was decreasing by 5% per quarter pre-intervention; this slowed to 2% per quarter post-intervention (P < 0.01). There was no significant change in the ratio of vancomycin-sensitive (P = 0.49) or vancomycin-resistant (P = 0.90) Enterococcus sp. BSI/BDU post-intervention. This study shows that national OHHA increased observed hand hygiene compliance and AHR procurement; however, there was no associated reduction in BSI.
The Genomics Used to Improve DEpression Decisions (GUIDED) trial assessed outcomes associated with combinatorial pharmacogenomic (PGx) testing in patients with major depressive disorder (MDD). Analyses used the 17-item Hamilton Depression (HAM-D17) rating scale; however, studies demonstrate that the abbreviated, core depression symptom-focused HAM-D6 rating scale may have greater sensitivity in detecting differences between treatment and placebo. The sensitivity of the HAM-D6 has not, however, been tested in a comparison of two active treatment arms. Here, we evaluated the sensitivity of the HAM-D6 scale, relative to the HAM-D17 scale, when assessing outcomes for actively treated patients in the GUIDED trial.
Outpatients (N=1,298) diagnosed with MDD and an inadequate treatment response to ≥1 psychotropic medication were randomized into treatment as usual (TAU) or combinatorial PGx-guided (guided-care) arms. Combinatorial PGx testing was performed on all patients, though test reports were only available to the guided-care arm. All patients and raters were blinded to study arm until after week 8. Medications on the combinatorial PGx test report were categorized based on the level of predicted gene-drug interactions: ‘use as directed’, ‘moderate gene-drug interactions’, or ‘significant gene-drug interactions’. Patient outcomes were assessed by arm at week 8 using the HAM-D6 and HAM-D17 rating scales, including symptom improvement (percent change in scale), response (≥50% decrease in scale), and remission (HAM-D6 ≤4 and HAM-D17 ≤7).
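The outcome definitions above are straightforward to encode; the helper below is a hypothetical illustration (the function and argument names are ours, not the trial's), using the stated cut-offs of a ≥50% score decrease for response and HAM-D6 ≤4 / HAM-D17 ≤7 for remission.

```python
# Hypothetical encoding of the GUIDED outcome definitions.
def hamd_outcomes(baseline: int, week8: int, scale: str = "HAM-D6") -> dict:
    improvement = 100 * (baseline - week8) / baseline   # % symptom improvement
    return {"improvement_pct": improvement,
            "response": week8 <= baseline / 2,                      # >=50% decrease
            "remission": week8 <= (4 if scale == "HAM-D6" else 7)}  # scale-specific cut-off

print(hamd_outcomes(baseline=12, week8=4))  # ~66.7% improvement; response and remission
```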
At week 8, the guided-care arm demonstrated statistically significant symptom improvement over TAU using the HAM-D6 scale (Δ=4.4%, p=0.023), but not using the HAM-D17 scale (Δ=3.2%, p=0.069). The response rate increased significantly for guided-care compared with TAU using both the HAM-D6 (Δ=7.0%, p=0.004) and HAM-D17 (Δ=6.3%, p=0.007). Remission rates were also significantly greater for guided-care versus TAU using both scales (HAM-D6 Δ=4.6%, p=0.031; HAM-D17 Δ=5.5%, p=0.005). Patients taking medication(s) predicted to have gene-drug interactions at baseline showed further increased benefit over TAU at week 8 using the HAM-D6 for symptom improvement (Δ=7.3%, p=0.004), response (Δ=10.0%, p=0.001) and remission (Δ=7.9%, p=0.005). Comparatively, the magnitude of the differences in outcomes between arms at week 8 was lower using the HAM-D17 (symptom improvement Δ=5.0%, p=0.029; response Δ=8.0%, p=0.008; remission Δ=7.5%, p=0.003).
Combinatorial PGx-guided care achieved significantly better patient outcomes compared with TAU when assessed using the HAM-D6 scale. These findings suggest that the HAM-D6 scale is better suited than the HAM-D17 for evaluating change in randomized controlled trials comparing active treatment arms.
Environmental information from place-names has largely been overlooked by geoarchaeologists and fluvial geomorphologists in analyses of the depositional histories of rivers and floodplains. Here, new flood chronologies for the rivers Teme, Severn, and Wye are presented, modelled from stable river sections excavated at Broadwas, Buildwas, and Rotherwas. These places are connected by the Old English term *wæsse, interpreted as ‘land by a meandering river which floods and drains quickly’. The results reveal that, in all three places, flooding during the early medieval period occurred more frequently between AD 350–700 than between AD 700–1100, but that over time each river's flooding regime became more complex, including high-magnitude single events. In the sampled locations, the fluvial dynamics of localized flood events had much in common, and almost certainly differed in nature from those of other sections of these rivers, refining our understanding of the precise nature of the flooding which their names sought to communicate. This study shows how the toponymic record can be helpful in the long-term reconstruction of historic river activity and for our understanding of past human perceptions of riverine environments.
A primary barrier to translation of clinical research discoveries into care delivery and population health is the lack of sustainable infrastructure bringing researchers, policymakers, practitioners, and communities together to reduce silos in knowledge and action. As the National Institutes of Health's (NIH) mechanism to advance translational research, Clinical and Translational Science Award (CTSA) awardees are uniquely positioned to bridge this gap. Delivering on this promise requires sustained collaboration and alignment between research institutions and public health and healthcare programs and services. We describe the collaboration of seven CTSA hubs with city, county, and state healthcare and public health organizations striving to realize this vision together. Partnership representatives convened monthly to identify key components, common and unique themes, and barriers in academic–public collaborations. All partnerships aligned the activities of the CTSA programs with the needs of the city/county/state partners by sharing resources, responding to real-time policy questions and training needs, promoting best practices, and advancing community-engaged research and dissemination and implementation science to narrow the knowledge-to-practice gap. Barriers included competing priorities, differing timelines, bureaucratic hurdles, and unstable funding. Academic–public health/health system partnerships represent a unique and underutilized model with the potential to enhance community and population health.
We present a detailed overview of the cosmological surveys that we aim to carry out with Phase 1 of the Square Kilometre Array (SKA1) and the science that they will enable. We highlight three main surveys: a medium-deep continuum weak lensing and low-redshift spectroscopic HI galaxy survey over 5 000 deg²; a wide and deep continuum galaxy and HI intensity mapping (IM) survey over 20 000 deg² from $z = 0.35$ to 3; and a deep, high-redshift HI IM survey over 100 deg² from $z = 3$ to 6. Taken together, these surveys will achieve an array of important scientific goals: measuring the equation of state of dark energy out to $z \sim 3$ with percent-level precision measurements of the cosmic expansion rate; constraining possible deviations from General Relativity on cosmological scales by measuring the growth rate of structure through multiple independent methods; mapping the structure of the Universe on the largest accessible scales, thus constraining fundamental properties such as isotropy, homogeneity, and non-Gaussianity; and measuring the HI density and bias out to $z = 6$. These surveys will also provide highly complementary clustering and weak lensing measurements, with systematic uncertainties independent of those of optical and near-infrared (NIR) surveys like Euclid, LSST, and WFIRST, leading to a multitude of synergies that can improve constraints significantly beyond what optical or radio surveys can achieve on their own. This document, the 2018 Red Book, provides reference technical specifications, cosmological parameter forecasts, and an overview of relevant systematic effects for the three key surveys, and will be regularly updated by the Cosmology Science Working Group in the run-up to the start of operations and the Key Science Programme of SKA1.
Nutritional factors and infectious agents may contribute to paediatric growth deficits in low- and middle-income countries; however, the contribution of enteric pathogens is only beginning to be understood. We analysed the stool from children <5 years old from an open cohort, cluster-randomised controlled trial of a point-of-collection water chlorinator in urban Bangladesh. We compared the presence/absence of 15 enteric pathogens detected via multiplex, molecular methods in the stool with concurrent Z-scores/Z-score cut-offs (−2 standard deviations (s.d.)) for height-for-age (HAZ/stunting), weight-for-age (WAZ/underweight) and weight-for-height (WHZ/wasting), adjusted for sociodemographic and trial-related factors, and measured caregiver-reported diarrhoea. Enteric pathogen prevalence in the stool was high (88% had ≥1 enteric pathogen, most commonly Giardia spp. (40%), Salmonella enterica (33%), enterotoxigenic E. coli (28%) and Shigella spp. (27%)) while reported 7-day diarrhoea prevalence was 6%, suggesting high subclinical infection rates. Many children were stunted (26%) or underweight (24%). Adjusted models suggested Giardia spp. detection was associated with lower HAZ (−0.22 s.d., 95% CI −0.44 to 0.00; prevalence ratio for stunting: 1.39, 95% CI 0.94–2.06) and potentially lower WAZ. No pathogens were associated with reported diarrhoea in adjusted models. Giardia spp. carriage may be associated with growth faltering, but not diarrhoea, in this and similar low-income settings. Stool-based enteric pathogen detection provides a direct indication of previous exposure that may be useful as a broader endpoint of trials of environmental interventions.
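For the prevalence-ratio estimates mentioned above, a common approach is modified Poisson regression with robust standard errors; the abstract does not specify the exact model, so the sketch below (with entirely synthetic data and a toy covariate) is an assumption about method, not a reproduction of the study's analysis.

```python
# Sketch: prevalence ratio for stunting given Giardia detection,
# via Poisson regression with robust (HC0) standard errors. Toy data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({"giardia": rng.integers(0, 2, n),
                   "age_months": rng.uniform(0, 59, n)})
risk = 1 / (1 + np.exp(1.2 - 0.33 * df["giardia"]))     # toy risk model
df["stunted"] = (rng.random(n) < risk).astype(int)

fit = smf.glm("stunted ~ giardia + age_months", df,
              family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(fit.params["giardia"]))  # prevalence ratio estimate
```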
Patients diagnosed with glioblastoma (GBM) are treated with surgery followed by fractionated radiotherapy with concurrent and adjuvant temozolomide. Patients are monitored with serial magnetic resonance imaging (MRI). However, treatment-related changes frequently mimic disease progression. We reviewed a series of patients undergoing surgery for presumed first-recurrence GBM, where pathology reports were available for tissue diagnosis, in order to better understand factors associated with a diagnosis of treatment-related changes on final pathology.
Patient records at a single institution between 2005 and 2015 were retrospectively reviewed. Pathology reports were reviewed to determine diagnosis of recurrent GBM or treatment effect. Survival analysis was performed interrogating overall survival (OS) and progression-free survival (PFS). Correlation with radiation treatment plans was also examined.
One hundred and twenty-three patients were identified. One hundred and sixteen (94%) underwent resection and seven underwent biopsy. Treatment-related changes were reported in 20 cases (16%). These patients had longer median OS and PFS from the time of recurrence than patients with true disease progression. However, there was no significant difference in OS from the time of initial diagnosis. Treatment effect was associated with surgery within 90 days of completing radiation. In patients who received radiation at our institution (n = 53), a larger radiation target volume and a higher maximum dose were associated with treatment effect.
Treatment effect was associated with surgery nearer to completion of radiation, a larger radiation target volume, and a higher maximum point dose. Treatment effect was associated with longer PFS and OS from the time of recurrence, but not from the time of initial diagnosis.
There is demand for new, effective and scalable treatments for depression, and the development of new forms of cognitive bias modification (CBM) targeting negative emotional processing biases has been suggested as a possible intervention to meet this need.
We report two double-blind RCTs in which volunteers with high levels of depressive symptoms (Beck Depression Inventory-II (BDI-II) > 14) completed a brief course of emotion recognition training (a novel form of CBM using faces) or sham training. In Study 1 (N = 36), participants completed a post-training emotion recognition task whilst undergoing functional magnetic resonance imaging to investigate the neural correlates of CBM. In Study 2 (N = 190), measures of mood were assessed post-training, and at 2-week and 6-week follow-up.
In both studies, CBM resulted in an initial change in emotion recognition bias, which (in Study 2) persisted for 6 weeks after the end of training. In Study 1, CBM resulted in increased neural activation to happy faces, an effect driven by increased neural activity in the medial prefrontal cortex and bilateral amygdala. In Study 2, CBM did not lead to a reduction in depressive symptoms on the BDI-II, or on related measures of mood, motivation and persistence, or depressive interpretation bias, at either the 2- or 6-week follow-up.
CBM of emotion recognition has effects on neural activity that are similar in some respects to those induced by selective serotonin reuptake inhibitor (SSRI) administration (Study 1), but we found no evidence that this had any later effect on self-reported mood in an analogue sample of non-clinical volunteers with low mood (Study 2).
To further understand the contribution of feedstuff ingredients to gut health in swine, gut histology and intestinal bacterial profiles associated with the use of two high-quality protein sources, microbially enhanced soybean meal (MSBM) and Menhaden fishmeal (FM), were assessed. Weaned pigs were fed one of three experimental diets: (1) a basic diet containing corn and soybean meal (Negative Control (NEG)), (2) the basic diet + fishmeal (FM; Positive Control (POS)) and (3) the basic diet + MSBM (MSBM). Phase I POS and MSBM diets (d 0 to d 7 post-wean) included FM or MSBM at 7.5%, while Phase II POS and MSBM diets (d 8 to d 21) included FM or MSBM at 5.0%. Gastrointestinal tissue and ileal digesta were collected from euthanised pigs at d 21 (eight pigs/diet) to assess gut histology and intestinal bacterial profiles, respectively. Data were analysed using Proc Mixed in SAS, with pig as the experimental unit and pig (treatment) as the random effect. Histological and immunohistochemical analyses of stomach and small intestinal tissue using haematoxylin–eosin, Periodic Acid Schiff/Alcian blue and inflammatory cell staining did not reveal detectable differences in host response to dietary treatment. Ileal bacterial composition profiles were obtained from next-generation sequencing of PCR-generated amplicons targeting the V1 to V3 regions of the 16S rRNA gene. Lactobacillus-affiliated sequences were the most highly represented across treatments, with an average relative abundance of 64.0%, 59.9% and 41.8% in samples from pigs fed the NEG, POS and MSBM diets, respectively. Accordingly, the three most abundant Operational Taxonomic Units (OTUs) were affiliated to Lactobacillus, showing distinct abundance patterns relative to dietary treatment. One OTU (SD_Ssd_00001), most closely related to Lactobacillus amylovorus, was more abundant in NEG and POS samples compared to MSBM (23.5% and 35.0% v. 9.2%). Another OTU (SD_Ssd_00002), closely related to Lactobacillus johnsonii, was more highly represented in POS and MSBM samples compared to NEG (14.0% and 15.8% v. 0.1%). Finally, one OTU (SD_Ssd_00011), with highest sequence identity to Lactobacillus delbrueckii, was found in highest abundance in ileal samples from MSBM-fed pigs (1.9% and 3.3% v. 11.3% in POS, NEG and MSBM, respectively). There was no effect of protein source on bacterial taxa at the genus level or on diversity based on principal component analysis. Dietary protein source may provide an opportunity to enhance the presence of specific members of the Lactobacillus genus that are associated with immune-modulating properties, without altering overall intestinal bacterial diversity.
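Relative abundances like those reported above are derived from OTU count tables; a minimal pandas sketch follows. The OTU identifiers are taken from the abstract, but the counts are invented for illustration.

```python
# Toy relative-abundance calculation from an OTU count table (counts invented).
import pandas as pd

counts = pd.DataFrame(
    {"NEG": [2350, 10, 330], "POS": [3500, 1400, 190], "MSBM": [920, 1580, 1130]},
    index=["SD_Ssd_00001", "SD_Ssd_00002", "SD_Ssd_00011"])
rel_abund = 100 * counts / counts.sum()   # percent of reads per diet group
print(rel_abund.round(1))
```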
This replication study was invited by the Editor-in-Chief of Management and Organization Review, Arie Y. Lewin. The original study by Judge, Fainshmidt, and Brown (2014) spanned the global financial crisis (2005–2010), and as such, this anomalous time period may not have been representative of most economies, or even of the overall global economy. In this replication study, we refine and extend Judge et al. (2014), which explored the provocative question: which form of capitalism works best in terms of ‘equitable wealth creation’? Similar to the earlier study, we find that there are multiple paths to macro-economic success. Notably, effective institutional configurations tend to combine high-quality regulatory institutions, effective skill development systems, and social cultures largely unaffected by corruption, so there is some commonality amongst effective configurations. In contrast, ineffective institutional configurations tend to be relatively weak in one or several of these three critical sets of institutions. Importantly, we find some novel patterns emerging from the most recent data, including potentially new forms of capitalism associated with equitable wealth creation. In addition, we find that effective credit market institutions are more important, and collective bargaining institutions less important, than the original study suggested. We discuss implications for the comparative capitalism literature, policy makers, and the future of capitalism in the global economy.