Spot urinary polyphenols have potential as a biomarker of polyphenol-rich food intakes. The aim of this study was to explore the relationship between spot urinary polyphenols and polyphenol intakes from polyphenol-rich food sources. Young adults (18–24 years old) were recruited into a sub-study of an online intervention aimed at improving diet quality. Participants’ intake of polyphenols and polyphenol-rich foods was assessed at baseline and 3 months using repeated 24-h recalls. A spot urine sample was collected at each session, with samples analysed for polyphenol metabolites using LC-MS. To assess the strength of the relationship between urinary polyphenols and dietary polyphenols, Spearman correlations were used. Linear mixed models further evaluated the relationship between polyphenol intakes and urinary excretion. Total urinary polyphenols and hippuric acid (HA) demonstrated moderate correlation with total polyphenol intakes (rs = 0·29–0·47). HA and caffeic acid were moderately correlated with polyphenols from tea/coffee (rs = 0·26–0·46). Using linear mixed models, increases in intakes of total polyphenols or polyphenols from tea/coffee or oil resulted in greater excretion of HA, whereas a negative relationship was observed between soya polyphenols and HA, suggesting that participants with higher intakes of soya polyphenols had lower excretion of HA. Findings suggest that total urinary polyphenols may be a promising biomarker of total polyphenol intakes from foods and drinks and that HA may be a biomarker of total polyphenol intakes and of polyphenols from tea/coffee. Caffeic acid warrants further investigation as a potential biomarker of polyphenols from tea/coffee.
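The correlation step described in this abstract can be sketched as follows. This is a minimal illustration only: all data values are invented, and the study's actual analysis used the full participant dataset and LC-MS metabolite measurements.

```python
# Spearman's rank correlation between hypothetical polyphenol intakes
# and urinary hippuric acid excretion. Pure-Python sketch; real analyses
# would typically use scipy.stats.spearmanr, which also handles ties.

def ranks(values):
    """Rank each value from 1..n (assumes no ties, as in this toy data)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for position, idx in enumerate(order, start=1):
        r[idx] = position
    return r

def spearman_rho(x, y):
    """Spearman's rho via the no-ties formula: 1 - 6*sum(d^2)/(n*(n^2-1))."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n * n - 1))

# Invented total polyphenol intake (mg/day) and urinary hippuric acid
# excretion for eight hypothetical participants.
intake = [520, 810, 330, 1250, 640, 980, 410, 1100]
urinary_ha = [1.1, 1.9, 0.8, 2.6, 1.4, 1.8, 1.0, 2.3]

rho = spearman_rho(intake, urinary_ha)
print(f"Spearman rho = {rho:.2f}")
```

Because Spearman's rho works on ranks, it captures the monotone (not necessarily linear) intake-excretion relationship the abstract reports.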
This study aimed to investigate general factors associated with prognosis, regardless of the type of treatment received, for adults with depression in primary care.
We searched Medline, Embase, PsycINFO and Cochrane Central (inception to 12/01/2020) for RCTs of adults with depression in primary care that included the Revised Clinical Interview Schedule (CIS-R), the most commonly used comprehensive measure of depressive and anxiety disorder symptoms and diagnoses in such trials. Two-stage random-effects meta-analyses were conducted.
Twelve (n = 6024) of thirteen eligible studies (n = 6175) provided individual patient data. There was a 31% (95% CI: 25 to 37) difference in depressive symptoms at 3–4 months per standard deviation increase in baseline depressive symptoms. Four additional factors (duration of anxiety, duration of depression, comorbid panic disorder, and a history of antidepressant treatment) were also independently associated with poorer prognosis. There was evidence that the difference in prognosis when these factors were combined could be of clinical importance. Adding these variables improved the amount of variance explained in 3–4 month depressive symptoms from 16% using depressive symptom severity alone to 27%. Risk of bias (assessed with QUIPS) was low in all studies and quality (assessed with GRADE) was high. Sensitivity analyses did not alter our conclusions.
When adults seek treatment for depression, clinicians should routinely assess the duration of anxiety, duration of depression, comorbid panic disorder, and history of antidepressant treatment alongside depressive symptom severity. This could provide clinicians and patients with useful and desired information to elucidate prognosis and aid the clinical management of depression.
Structural models of psychopathology consistently identify internalizing (INT) and externalizing (EXT) specific factors as well as a superordinate factor that captures their shared variance, the p factor. Questions remain, however, about the meaning of these data-driven dimensions and the interpretability and distinguishability of the larger nomological networks in which they are embedded.
The sample consisted of 10 645 youth aged 9–10 years participating in the multisite Adolescent Brain and Cognitive Development (ABCD) Study. p, INT, and EXT were modeled using the parent-rated Child Behavior Checklist (CBCL). Patterns of associations were examined with variables drawn from diverse domains including demographics, psychopathology, temperament, family history of substance use and psychopathology, school and family environment, and cognitive ability, using instruments based on youth-, parent-, and teacher-report, and behavioral task performance.
p exhibited a broad pattern of statistically significant associations with risk variables across all domains assessed, including temperament, neurocognition, and social adversity. The specific factors exhibited more domain-specific patterns of associations, with INT exhibiting greater fear/distress and EXT exhibiting greater impulsivity.
In this largest study of hierarchical models of psychopathology to date, we found that p, INT, and EXT exhibit well-differentiated nomological networks that are interpretable in terms of neurocognition, impulsivity, fear/distress, and social adversity. These networks were, in contrast, obscured when relying on the a priori Internalizing and Externalizing dimensions of the CBCL scales. Our findings add to the evidence for the validity of p, INT, and EXT as theoretically and empirically meaningful broad psychopathology liabilities.
This chapter comprises the following sections: names, taxonomy, subspecies and distribution, descriptive notes, habitat, movements and home range, activity patterns, feeding ecology, reproduction and growth, behavior, parasites and diseases, status in the wild, and status in captivity.
Gravitational waves from coalescing neutron stars encode information about nuclear matter at extreme densities, inaccessible to laboratory experiments. The late inspiral is influenced by the presence of tides, which depend on the neutron star equation of state. Neutron star mergers are expected to often produce rapidly rotating remnant neutron stars that emit gravitational waves, providing clues to the extremely hot post-merger environment. This signature of nuclear matter in gravitational waves carries most of its information in the 2–4 kHz frequency band, which lies outside the most sensitive band of current detectors. We present the design concept and science case for a Neutron Star Extreme Matter Observatory (NEMO): a gravitational-wave interferometer optimised to study nuclear physics with merging neutron stars. The concept uses high circulating laser power, quantum squeezing, and a detector topology specifically designed to achieve the high-frequency sensitivity necessary to probe nuclear matter using gravitational waves. Above 1 kHz, the proposed strain sensitivity is comparable to full third-generation detectors at a fraction of the cost. Such sensitivity would increase the expected detection rate of post-merger remnants from approximately one per few decades with two A+ detectors to a few per year, and could allow the first gravitational-wave observations of supernovae, isolated neutron stars, and other exotica.
Introduction: Hyperkalemia is a common electrolyte disturbance associated with morbidity and mortality. Commonly used therapies for hyperkalemia include IV calcium, sodium bicarbonate, insulin, beta-adrenergic agents, ion-exchange resins, diuretics and hemodialysis. This study aims to evaluate which treatments are more commonly used to treat hyperkalemia and to examine factors which influence those clinical decisions. Methods: This is a retrospective chart review of all cases of hyperkalemia encountered in 2017 at a Canadian adult ED. Potassium values were classified as mild (5.5 - 6.5 mEq/L), moderate (>6.5 - 7.5 mEq/L) and severe (>7.5 mEq/L). Treatment choices were then recorded and matched to hemodynamic stability, degree of hyperkalemia and ECG findings. More statistical methods to test correlation between treatment and specific variables will be performed over the next 2 months, including logistic regression to highlight potential determinants of treatment and Chi-square tests to verify randomness and to construct 95% confidence intervals. Results: 1867 ED visits were identified, of which 479 met the inclusion criteria. 89.1% of hyperkalemia cases were mild, 8.2% were moderate, and 2.7% were severe. IV insulin was used in 22.1% of cases, followed by Kayexalate in 20.5%, sodium bicarbonate in 12.3%, IV calcium in 9.4%, frusemide in 7.3%, salbutamol in 2.7%, and dialysis in 1.9%. Moderate and severe hyperkalemia were associated with higher use of insulin (79.5% and 64.3% respectively), IV calcium (41% and 64.3% respectively), sodium bicarbonate (56.4% and 85.7% respectively). Bradycardia was associated with higher insulin and IV calcium use (46.7% and 33.3% respectively). Hypotension was associated with a similar increase in use of insulin and IV calcium (34.2% and 23.7% respectively). There were only 15 cases of cardiac arrest in which sodium bicarbonate and IV calcium were more frequently used (80% and 60% respectively). 
Conclusion: This study demonstrates variability in the ED management of hyperkalemia. We found that insulin and Kayexalate were the two most common interventions, with degree of hyperkalemia, bradycardia and hypotension influencing rates of treatment. Overuse of Kayexalate for emergent treatment of hyperkalemia is evident despite weak supporting evidence. Paradoxically, beta-adrenergic agents were underutilized despite their rapid effect and safer profile. The development of a widely accepted guideline may help narrow the differences in practice and potentially improve outcomes.
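The severity cut-offs used in this chart review can be captured in a small helper. The function name is ours; the boundaries follow the abstract's definitions (mild 5.5–6.5, moderate >6.5–7.5, severe >7.5 mEq/L).

```python
def classify_hyperkalemia(k_meq_per_l):
    """Classify a serum potassium value using the study's cut-offs.

    Returns None for values below the hyperkalemia threshold (5.5 mEq/L).
    """
    if k_meq_per_l < 5.5:
        return None  # not hyperkalemic
    if k_meq_per_l <= 6.5:
        return "mild"
    if k_meq_per_l <= 7.5:
        return "moderate"
    return "severe"

for k in (5.0, 5.9, 6.8, 7.9):
    print(k, classify_hyperkalemia(k))
```

Encoding the thresholds in one place avoids the boundary ambiguity (e.g., whether 6.5 is mild or moderate) that chart-review classifications are prone to.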
Introduction: CAEP recently developed the acute atrial fibrillation (AF) and flutter (AFL) [AAFF] Best Practices Checklist to promote optimal care and guidance on cardioversion and rapid discharge of patients with AAFF. We sought to assess the impact of implementing the Checklist into large Canadian EDs. Methods: We conducted a pragmatic stepped-wedge cluster randomized trial in 11 large Canadian ED sites in five provinces, over 14 months. All hospitals started in the control period (usual care), and then crossed over to the intervention period in random sequence, one hospital per month. We enrolled consecutive, stable patients presenting with AAFF, where symptoms required ED management. Our intervention was informed by qualitative stakeholder interviews to identify perceived barriers and enablers for rapid discharge of AAFF patients. The many interventions included local champions, presentation of the Checklist to physicians in group sessions, an online training module, a smartphone app, and targeted audit and feedback. The primary outcome was length of stay in ED in minutes from time of arrival to time of disposition, and this was analyzed at the individual patient-level using linear mixed effects regression accounting for the stepped-wedge design. We estimated a sample size of 800 patients. Results: We enrolled 844 patients with none lost to follow-up. Those in the control (N = 316) and intervention periods (N = 528) were similar for all characteristics including mean age (61.2 vs 64.2 yrs), duration of AAFF (8.1 vs 7.7 hrs), AF (88.6% vs 82.9%), AFL (11.4% vs 17.1%), and mean initial heart rate (119.6 vs 119.9 bpm). Median lengths of stay for the control and intervention periods respectively were 413.0 vs. 354.0 minutes (P < 0.001). Comparing control to intervention, there was an increase in: use of antiarrhythmic drugs (37.4% vs 47.4%; P < 0.01), electrical cardioversion (45.1% vs 56.8%; P < 0.01), and discharge in sinus rhythm (75.3% vs. 86.7%; P < 0.001). 
There was a decrease in ED consultations to cardiology and medicine (49.7% vs 41.1%; P < 0.01), and a small, nonsignificant increase in anticoagulant prescriptions (39.6% vs 46.5%; P = 0.21). Conclusion: This multicenter implementation of the CAEP Best Practices Checklist led to a significant decrease in ED length of stay along with more ED cardioversions, fewer ED consultations, and more discharges in sinus rhythm. Widespread and rigorous adoption of the CAEP Checklist should lead to improved care of AAFF patients in all Canadian EDs.
Although genetic and environmental factors operating before or around the time of birth have been demonstrated to be relevant to the aetiology of the major psychoses, a seasonal variation in the rates of admission of such patients has long been recognised. Few studies have compared first admissions and readmissions. This study examined seasonal variation in admissions for the major psychoses and compared diagnostic categories by admission status. Patients admitted to Irish psychiatric inpatient facilities between 1989 and 1994 with an ICD-9/10 diagnosis of schizophrenia or affective disorder were identified from the National Psychiatric Inpatient Reporting System (NPIRS). The data were analysed using a hierarchical log linear model, the chi-square test, a Kolmogorov-Smirnov (KS) type statistic, and the method of Walter and Elwood. The hierarchical log linear model demonstrated significant interactions between the month of admission and admission order (change in scaled deviance 28.77, df = 11, P < 0.003). Both first admissions with mania and readmissions with bipolar affective disorder exhibited significant seasonality. In contrast, only first admissions with schizophrenia showed significant seasonal effects. Although first admissions with mania and readmissions with bipolar disorder both show seasonality, seasonal influences appear to be more relevant to the onset of schizophrenia than to subsequent relapse.
Starting university is an important time with respect to dietary changes. This study reports a novel approach to assessing student diet by utilising student-level food transaction data to explore dietary patterns. First-year students living in catered accommodation at the University of Leeds (UK) received pre-credited food cards for use in university catering facilities. Food card transaction data were obtained for semester 1, 2016 and linked with student age and sex. k-Means cluster analysis was applied to the transaction data to identify clusters of food purchasing behaviours. Differences in demographic and behavioural characteristics across clusters were examined using χ2 tests. The semester was divided into three time periods to explore longitudinal changes in purchasing patterns. Seven dietary clusters were identified: ‘Vegetarian’, ‘Omnivores’, ‘Dieters’, ‘Dish of the Day’, ‘Grab-and-Go’, ‘Carb Lovers’ and ‘Snackers’. There were statistically significant differences in sex (P < 0·001), with women dominating the Vegetarian and Dieters clusters; in age (P = 0·003), with over-20s representing a high proportion of the Omnivores; and in time of day of transactions (P < 0·001), with Dieters and Snackers purchasing least at breakfast. Many students (n 474, 60·4 %) changed dietary cluster across the semester. This study demonstrates that transactional data present a feasible method for dietary assessment, collecting detailed dietary information over time and at scale, while eliminating participant burden and possible bias from self-selection, observation and attrition. It revealed that student diets are complex and that simplistic measures of diet, focusing on narrow food groups in isolation, are unlikely to adequately capture dietary behaviours.
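The clustering step can be roughly illustrated with a plain k-means on two hypothetical per-student features (here, share of breakfast transactions and share of snack purchases). The features and data are invented for this sketch; the study's own analysis used richer transaction-derived variables.

```python
import random

def kmeans(points, k, iters=100, seed=1):
    """Lloyd's algorithm for k-means on 2-D points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean).
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda c: (p[0] - centroids[c][0]) ** 2
                + (p[1] - centroids[c][1]) ** 2,
            )
            clusters[nearest].append(p)
        # Recompute centroids as cluster means; keep old centroid if empty.
        updated = [
            (sum(m[0] for m in ms) / len(ms), sum(m[1] for m in ms) / len(ms))
            if ms
            else centroids[c]
            for c, ms in enumerate(clusters)
        ]
        if updated == centroids:
            break  # converged
        centroids = updated
    return centroids, clusters

# Two invented behaviour groups: "breakfast-heavy" vs "snack-heavy" students.
students = [(0.70, 0.10), (0.65, 0.15), (0.72, 0.08),
            (0.10, 0.60), (0.15, 0.55), (0.08, 0.65)]
centroids, clusters = kmeans(students, k=2)
```

In practice the number of clusters (seven in the study) would be chosen by comparing solutions across several values of k.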
Analysis of human remains and a copper band found in the center of a Late Archaic (ca. 5000–3000 cal BP) shell ring demonstrate an exchange network between the Great Lakes and the coastal southeast United States. Similarities in mortuary practices suggest that the movement of objects between these two regions was more direct and unmediated than archaeologists previously assumed based on “down-the-line” models of exchange. These findings challenge prevalent notions that view preagricultural Native American communities as relatively isolated from one another and suggest instead that wide social networks spanned much of North America thousands of years before the advent of domestication.
Observation of the background generated by the ion source has been an area of focus during our routine analytical work. It is noted that the results of very-low-ratio samples depend upon the particular measurement procedures used with present-day Cs+ sputter ion sources. When measured without excessive Cs+ fluxes and without interleaving with other higher-ratio samples and references, the accelerator mass spectrometry (AMS) sensitivity can be somewhat improved. In some cases, it appears possible to assess old radiocarbon (14C) samples to beyond the long-standing 60 kyr limit. A number of observational studies are made for the sole purpose of minimizing the contamination of the rare isotopes that is generated within the ion source.
Sample preparation techniques are presented for radiocarbon analysis of dissolved inorganic carbon (DIC) and dissolved organic carbon (DOC) in freshwater, as well as CO2 and CH4 in gas mixtures. Efforts have focused on developing a robust, low-background wet oxidation extraction method for DOC in freshwater, following routine methods developed for stable carbon isotope analysis and adapted for radiocarbon (14C) analysis. DIC (by acidification) and DOC (by wet oxidation) are converted to CO2 in pre-baked septum-fitted borosilicate bottles, and the resulting CO2 is extracted from the dissolved and headspace portions on a low-flow He-carrier flow-through system interfaced to a vacuum extraction line. A peripheral CH4 extraction line interfaces to the flow line to separate CH4 from environmental samples following the methods of Pack et al. (2015). High sample throughput and low blanks are achievable with this method. DIC and DOC blanks are consistently <0.7 pMC, while CO2 and CH4 blanks are typically <0.1 pMC.
Laser–solid interactions are highly suited as a potential source of high-energy X-rays for nondestructive imaging. A bright, energetic X-ray pulse can be driven from a small source, making it ideal for high-resolution X-ray radiography. By limiting the lateral dimensions of the target, we are able to confine the region over which X-rays are produced, enabling imaging with enhanced resolution and contrast. Using constrained targets, we experimentally demonstrate an X-ray source that improves image quality compared with unconstrained foil targets. Modelling demonstrates that a larger sheath field envelope around the perimeter of the constrained targets increases the proportion of electron current that recirculates through the target, driving a brighter source of X-rays.
Culture-based studies, which focus on individual organisms, have implicated stethoscopes as potential vectors of nosocomial bacterial transmission. However, the full bacterial communities that contaminate in-use stethoscopes have not been investigated.
We used bacterial 16S rRNA gene deep-sequencing, analysis, and quantification to profile entire bacterial populations on stethoscopes in use in an intensive care unit (ICU), including practitioner stethoscopes, individual-use patient-room stethoscopes, and clean unused individual-use stethoscopes. Two additional sets of practitioner stethoscopes were sampled before and after cleaning using standardized or practitioner-preferred methods.
Bacterial contamination levels were highest on practitioner stethoscopes, followed by patient-room stethoscopes, whereas clean stethoscopes were indistinguishable from background controls. Bacterial communities on stethoscopes were complex, and community analysis by weighted UniFrac showed that physician and patient-room stethoscopes were indistinguishable and significantly different from clean stethoscopes and background controls. Genera relevant to healthcare-associated infections (HAIs) were common on practitioner stethoscopes, among which Staphylococcus was ubiquitous and had the highest relative abundance (6.8%–14% of contaminating bacterial sequences). Other HAI-related genera were also widespread although lower in abundance. Cleaning of practitioner stethoscopes resulted in a significant reduction in bacterial contamination levels, but these levels reached those of clean stethoscopes in only a few cases with either standardized or practitioner-preferred methods, and bacterial community composition did not significantly change.
Stethoscopes used in an ICU carry bacterial DNA reflecting complex microbial communities that include nosocomially important taxa. Commonly used cleaning practices reduce contamination but are only partially successful at modifying or eliminating these communities.
Objectives: Prior research has identified numerous genetic (including sex), education, health, and lifestyle factors that predict cognitive decline. Traditional model selection approaches (e.g., backward or stepwise selection) attempt to find the one model that best fits the observed data, risking the interpretation that only the selected predictors are important. In reality, several predictor combinations may fit similarly well but lead to different conclusions (e.g., about the size and significance of parameter estimates). In this study, we describe an alternative method, Information-Theoretic (IT) model averaging, and apply it to characterize a set of complex interactions in a longitudinal study on cognitive decline. Methods: Here, we used longitudinal cognitive data from 1256 late-middle-aged adults from the Wisconsin Registry for Alzheimer’s Prevention study to examine the effects of sex, apolipoprotein E (APOE) ɛ4 allele (non-modifiable factors), and literacy achievement (modifiable) on cognitive decline. For each outcome, we applied IT model averaging to a set of models with different combinations of interactions among sex, APOE, literacy, and age. Results: For a list-learning test, model-averaged results showed better performance for women versus men, with faster decline among men; increased literacy was associated with better performance, particularly among men. APOE had less of an association with cognitive performance in this age range (∼40–70 years). Conclusions: These results illustrate the utility of the IT approach and point to literacy as a potential modifier of cognitive decline. Whether the protective effect of literacy is due to educational attainment or intrinsic verbal intellectual ability is the topic of ongoing work. (JINS, 2019, 25, 119–133)
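The core of IT model averaging can be sketched with Akaike weights: each candidate model's AIC is converted to a weight, and parameter estimates are averaged across models using those weights. The AIC values, coefficient estimates, and the interaction they describe are invented for illustration, not taken from the study.

```python
import math

def akaike_weights(aic_values):
    """Convert candidate-model AICs into Akaike weights (sum to 1).

    Weight_i = exp(-0.5 * (AIC_i - AIC_min)) / sum over all models.
    """
    best = min(aic_values)
    rel_likelihoods = [math.exp(-0.5 * (a - best)) for a in aic_values]
    total = sum(rel_likelihoods)
    return [r / total for r in rel_likelihoods]

# Hypothetical AICs for four candidate models of cognitive decline, and
# each model's estimate of a (hypothetical) literacy-by-age interaction.
aics = [1012.4, 1010.1, 1015.8, 1011.0]
betas = [0.042, 0.051, 0.038, 0.047]

weights = akaike_weights(aics)
averaged_beta = sum(w * b for w, b in zip(weights, betas))
print(f"weights = {[round(w, 3) for w in weights]}")
print(f"model-averaged beta = {averaged_beta:.3f}")
```

Because no single model receives all the weight, the averaged estimate reflects model-selection uncertainty rather than committing to one "best" model, which is the contrast with stepwise selection the abstract draws.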
Objectives: A major challenge in cognitive aging is differentiating preclinical disease-related cognitive decline from changes associated with normal aging. Neuropsychological test authors typically publish single time-point norms, referred to here as unconditional reference values. However, detecting significant change requires longitudinal, or conditional reference values, created by modeling cognition as a function of prior performance. Our objectives were to create, depict, and examine preliminary validity of unconditional and conditional reference values for ages 40–75 years on neuropsychological tests. Method: We used quantile regression to create growth-curve–like models of performance on tests of memory and executive function using participants from the Wisconsin Registry for Alzheimer’s Prevention. Unconditional and conditional models accounted for age, sex, education, and verbal ability/literacy; conditional models also included past performance on and number of prior exposures to the test. Models were then used to estimate individuals’ unconditional and conditional percentile ranks for each test. We examined how low performance on each test (operationalized as <7th percentile) related to consensus-conference–determined cognitive statuses and subjective impairment. Results: Participants with low performance were more likely to receive an abnormal cognitive diagnosis at the current visit (but not later visits). Low performance was also linked to subjective and informant reports of worsening memory function. Conclusions: The percentile-based methods and single-test results described here show potential for detecting troublesome within-person cognitive change. Development of reference values for additional cognitive measures, investigation of alternative thresholds for abnormality (including multi-test criteria), and validation in samples with more clinical endpoints are needed. (JINS, 2019, 25, 1–14)
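The idea behind the quantile-regression models can be illustrated with the check ("pinball") loss: minimising it over a constant recovers the empirical quantile, and the study's conditional models generalise this by letting the quantile depend on covariates such as age and prior performance. The scores below are invented, and the 7th-percentile threshold follows the abstract's low-performance cut-off.

```python
def pinball_loss(c, values, tau):
    """Check (pinball) loss for a candidate tau-quantile c.

    Each residual u = v - c contributes tau*u if u >= 0, else (tau-1)*u,
    so under-predictions and over-predictions are weighted asymmetrically.
    """
    total = 0.0
    for v in values:
        u = v - c
        total += tau * u if u >= 0 else (tau - 1) * u
    return total

# Invented memory-test scores for 15 people; search over the observed
# values for the one minimising the pinball loss at tau = 0.07
# (the <7th-percentile low-performance cut-off).
scores = [41, 44, 46, 48, 50, 51, 52, 53, 54, 55, 56, 57, 58, 60, 63]
tau = 0.07
best = min(scores, key=lambda c: pinball_loss(c, scores, tau))
print(f"estimated {tau:.0%} quantile = {best}")
```

Full quantile regression replaces the constant c with a linear predictor and minimises the same loss, which is what packages such as statsmodels' QuantReg implement.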
Background: Central neuropathic pain syndromes are a result of central nervous system injury, most commonly related to stroke, traumatic spinal cord injury, or multiple sclerosis. These syndromes are distinctly less common than peripheral neuropathic pain, and less is known regarding the underlying pathophysiology, appropriate pharmacotherapy, and long-term outcomes. The objective of this study was to determine the long-term clinical effectiveness of the management of central neuropathic pain relative to peripheral neuropathic pain at tertiary pain centers. Methods: Patients diagnosed with central (n=79) and peripheral (n=710) neuropathic pain were identified for analysis from a prospective observational cohort study of patients with chronic neuropathic pain recruited from seven Canadian tertiary pain centers. Data regarding patient characteristics, analgesic use, and patient-reported outcomes were collected at baseline and 12-month follow-up. The primary outcome measure was the composite of a reduction in average pain intensity and pain interference. Secondary outcome measures included assessments of function, mood, quality of life, catastrophizing, and patient satisfaction. Results: At 12-month follow-up, 13.5% (95% confidence interval [CI], 5.6-25.8) of patients with central neuropathic pain and complete data sets (n=52) achieved a ≥30% reduction in pain, whereas 38.5% (95% CI, 25.3-53.0) achieved a reduction of at least 1 point on the Pain Interference Scale. The proportion of patients with central neuropathic pain achieving both these measures, and thus the primary outcome, was 9.6% (95% CI, 3.2-21.0). Patients with peripheral neuropathic pain and complete data sets (n=463) were more likely to achieve this primary outcome at 12 months (25.3% of patients; 95% CI, 21.4-29.5) (p=0.012). 
Conclusion: Patients with central neuropathic pain syndromes managed in tertiary care centers were less likely to achieve a meaningful improvement in pain and function compared with patients with peripheral neuropathic pain at 12-month follow-up.
The supplementing of sow diets with lipids during pregnancy and lactation has been shown to reduce sow condition loss and improve piglet performance. The aim of this study was to determine the effects of supplemental palm oil (PO) on sow performance, plasma metabolites and hormones, milk profiles and pre-weaning piglet development. A commercial sow ration (C) or an experimental diet supplemented with 10% extra energy in the form of PO was provided from day 90 of gestation until weaning (24 to 28 days postpartum) in two groups of eight multiparous sows. Gestation length of PO sows increased by 1 day (P<0.05). Maternal BW changes were similar throughout the trial, but loss of backfat during lactation was reduced in PO animals (C: −3.6±0.8 mm; PO: −0.1±0.8 mm; P<0.01). Milk fat was increased by PO supplementation (C day 3: 8.0±0.3% fat; PO day 3: 9.1±0.3% fat; C day 7: 7.8±0.5% fat; PO day 7: 9.9±0.5% fat; P<0.05) and hence milk energy yield of PO sows was also elevated (P<0.05). The proportion of saturated fatty acids was greater in colostrum from PO sows (C: 29.19±0.31 g/100 g of fat; PO: 30.77±0.36 g/100 g of fat; P<0.01). Blood samples taken on day 105 of gestation, within 24 h of farrowing, on day 7 of lactation and at weaning (28±3 days post-farrowing) showed there were no differences in plasma concentrations of triacylglycerol, non-esterified fatty acids, insulin or IGF-1 throughout the trial. However, circulating plasma concentrations of both glucose and leptin were elevated during lactation in PO sows (P<0.05 and P<0.005, respectively) and thyroxine was greater at weaning in PO sows (P<0.05). Piglet weight and body composition were similar at birth, as were piglet growth rates throughout the pre-weaning period.
At 7 days after birth, C piglets contained more body fat, as indicated by their lower fat-free mass per kg (C: 66.4±0.8 arbitrary units/kg; PO: 69.7±0.8 arbitrary units/kg; P<0.01), but by day 14 of life this situation was reversed (C: 65.8±0.6 arbitrary units/kg; PO: 63.6±0.6 arbitrary units/kg; P<0.05). Following weaning, PO sows exhibited an increased ratio of male to female offspring at their subsequent farrowing (C: 1.0±0.3; PO: 2.2±0.2; P<0.05). We conclude that supplementation of sow diets with PO during late gestation and lactation appears to increase sow milk fat content and hence energy supply to piglets. Furthermore, elevated glucose concentrations in the sow during lactation may be suggestive of impaired glucose homoeostasis.