Using data from a nationally generalisable birth cohort, we aimed to: (i) describe the cohort’s adherence to national evidence-based dietary guidelines using an Infant Feeding Index (IFI) and (ii) assess the IFI’s convergent construct validity, by exploring associations with antenatal maternal socio-demographic and health behaviours and with child overweight/obesity and central adiposity at age 54 months. Data were from the Growing Up in New Zealand cohort (n 6343). The IFI scores ranged from zero to twelve points, with twelve representing full adherence to the guidelines. Overweight/obesity was defined by BMI-for-age (based on the WHO Growth Standards). Central adiposity was defined as waist-to-height ratio > 90th percentile. Associations were tested using multiple linear regression and Poisson regression with robust variance (risk ratios, 95 % CI). Mean IFI score was 8·2 (sd 2·1). Maternal characteristics explained 29·1 % of variation in the IFI score. Maternal age, education and smoking had the strongest independent relationships with IFI scores. Compared with children in the highest IFI tertile, girls in the lowest and middle tertiles were more likely to be overweight/obese (1·46, 1·03, 2·06 and 1·56, 1·09, 2·23, respectively) and boys in the lowest tertile were more likely to have central adiposity (1·53, 1·02, 2·30) at age 54 months. Most infants fell short of meeting national Infant Feeding Guidelines. The associations between IFI score and maternal characteristics, and children’s overweight/obesity/central adiposity, were in the expected directions and confirm the IFI’s convergent construct validity.
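The adjusted risk ratios above come from Poisson regression with robust variance. As a minimal illustration of the statistic itself (not the study's adjusted model), here is the crude risk ratio with a log-scale 95 % CI, using hypothetical counts chosen to mimic an RR near 1.46:

```python
import math

def risk_ratio(events_exposed, n_exposed, events_ref, n_ref, z=1.96):
    """Unadjusted risk ratio with a 95% CI on the log scale.

    Illustrative only: the study reports adjusted risk ratios from
    Poisson regression with robust variance; this is the crude analogue.
    """
    r1 = events_exposed / n_exposed   # risk in lowest IFI tertile (hypothetical)
    r0 = events_ref / n_ref           # risk in reference (highest tertile)
    rr = r1 / r0
    # Standard error of log(RR) for two independent proportions
    se = math.sqrt(1 / events_exposed - 1 / n_exposed
                   + 1 / events_ref - 1 / n_ref)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical 2x2 counts, not data from the cohort
rr, lo, hi = risk_ratio(73, 500, 50, 500)
```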
Many appellate courts and regulatory commissions simultaneously produce case dispositions and rules rationalizing the dispositions. We explore the properties of the American practice for doing this. We show that the median judge is pivotal over case dispositions, although she and others may not vote sincerely. Strategic dispositional voting is more likely when the case location is extreme, resulting in majority coalitions that make the court appear less polarized than it actually is. The equilibrium policy created in the majority opinion generically does not coincide with the ideal policy of the median judge in either the dispositional majority or the bench as a whole. Rather, opinions approach a weighted center of the dispositional majority but often reflect the preferences of the opinion author. We discuss some empirical implications of the American practice for jointly producing case dispositions and rules.
As insignia of power and prestige, Inka unku (tapestry tunics) communicated the strength and extent of Inka sociopolitical hegemony in the Andes. Of the 36 known full-size examples in museum collections, only one, found in Argentina, comes from outside Peru. This article investigates another recently excavated unku found out of context on Chile's northernmost coast. To confirm its authenticity, we compiled a database showing the technical and stylistic attributes of previous finds for comparison. We conclude that this artifact is indeed a new type of unku and that the discovery affects our understanding of the complex relationship between the people of this province and the Inka state.
Debate about the nature of climate and the magnitude of ecological change across Australia during the last glacial maximum (LGM; 26.5–19 ka) persists despite considerable research into the late Pleistocene. This is partly due to a lack of detailed paleoenvironmental records and reliable chronological frameworks. Geochemical and geochronological analyses of a 60 ka sedimentary record from Brown Lake, subtropical Queensland, are presented and considered in the context of climate-controlled environmental change. Optically stimulated luminescence dating of dune crests adjacent to prominent wetlands across North Stradbroke Island (Minjerribah) returned a mean age of 119.9 ± 10.6 ka; indicating relative dune stability soon after formation in Marine Isotope Stage 5. Synthesis of wetland sediment geochemistry across the island was used to identify dust accumulation and applied as an aridification proxy over the last glacial-interglacial cycle. A positive trend of dust deposition from ca. 50 ka was found with highest influx occurring leading into the LGM. Complexities of comparing sedimentary records and the need for robust age models are highlighted with local variation influencing the accumulation of exogenic material. An inter-site comparison suggests enhanced moisture stress regionally during the last glaciation and throughout the LGM, returning to a more positive moisture balance ca. 8 ka.
COVID-19 altered research in Clinical and Translational Science Award (CTSA) hubs in an unprecedented manner, leading to adjustments for COVID-19 research.
CTSA members volunteered to conduct a review of the CTSA network's impact on the COVID-19 pandemic response, with assistance from an NIH survey team, in October 2020. The survey questions included the involvement of CTSAs in decision-making concerning the prioritization of COVID-19 studies. Descriptive and statistical analyses were conducted on the survey data.
Sixty of the 64 CTSAs completed the survey. Most CTSAs lacked preparedness but responded promptly to the pandemic. Early disruption of research triggered enhanced CTSA engagement, the creation of dedicated research areas, and triage for prioritization of COVID-19 studies. CTSAs involved in decision-making were 16.75 times more likely to create dedicated diagnostic laboratories (95% confidence interval [CI] = 2.17–129.39; P < 0.01). Likewise, institutions with internal funding were 3.88 times more likely to establish dedicated COVID-19 research areas (95% CI = 1.12–13.40; P < 0.05). CTSAs were instrumental in securing funds and facilitating the establishment of laboratory/clinical spaces for COVID-19 research. Workflows were modified to support contracting and IRB review at most institutions with CTSAs. To mitigate the chaos generated by competing clinical trials, central feasibility committees were often formed for orderly review and prioritization.
The lessons learned from the COVID-19 pandemic emphasize the pivotal role of CTSAs in prioritizing studies and establishing the necessary research infrastructure, and the importance of prompt and flexible research leadership with decision-making capacity to manage future pandemics.
We conducted an early economic evaluation to inform the translation into clinical practice of a spectroscopic liquid biopsy for the detection of brain cancer. The two specific aims were (1) to update an existing economic model with results from a prospective study of diagnostic accuracy and (2) to explore the potential of brain tumor-type predictions to affect patient outcomes and healthcare costs.
A cost-effectiveness analysis from a UK NHS perspective of the use of spectroscopic liquid biopsy in primary and secondary care settings, as well as a cost–consequence analysis of the addition of tumor-type predictions was conducted. Decision tree models were constructed to represent simplified diagnostic pathways. Test diagnostic accuracy parameters were based on a prospective validation study. Four price points (GBP 50-200, EUR 57-228) for the test were considered.
In both settings, the use of liquid biopsy produced QALY gains. In primary care, at test costs below GBP 100 (EUR 114), testing was cost saving. At GBP 100 (EUR 114) per test, the ICER was GBP 13,279 (EUR 15,145), whereas at GBP 200 (EUR 228), the ICER was GBP 78,300 (EUR 89,301). In secondary care, the ICER ranged from GBP 11,360 (EUR 12,956) to GBP 43,870 (EUR 50,034) across the range of test costs.
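The ICERs reported above follow the standard definition: incremental cost divided by incremental QALYs between the test strategy and its comparator. A minimal sketch, with purely hypothetical cost and QALY figures (the underlying model inputs are not given in the abstract):

```python
def icer(cost_new, cost_comparator, qaly_new, qaly_comparator):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained.

    A negative value alongside a QALY gain means the new strategy is
    cost saving (dominant), as reported for test costs below GBP 100.
    """
    return (cost_new - cost_comparator) / (qaly_new - qaly_comparator)

# Hypothetical figures: testing adds ~GBP 132.79 of cost and 0.01 QALYs,
# giving an ICER of roughly GBP 13,279 per QALY gained
example = icer(1132.79, 1000.0, 1.51, 1.50)
```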
The results demonstrate the potential for the technology to be cost-effective in both primary and secondary care settings. Additional studies of test use in routine primary care practice are needed to resolve the remaining issues of uncertainty—prevalence in this patient population and referral behavior.
On January 29, 2020, a total of 195 US citizens were evacuated from the coronavirus disease 2019 (COVID-19) epidemic in Wuhan, China, to March Air Reserve Base in Riverside, California, and entered the first federally mandated quarantine in over 50 years. With less than 1-d notice, a multi-disciplinary team from Riverside County and Riverside University Health System in conjunction with local and federal agencies established on-site 24-h medical care and behavioral health support. This report details the coordinated efforts by multiple teams that took place to provide care for the passengers and to support the surrounding community.
Children and young people with intellectual disability and/or Autism Spectrum Disorder (autism) experience higher rates of mental health problems, including depression, than their typically developing peers. Although international guidelines suggest psychological therapies as first-line intervention for children and young people, there is limited evidence for psychological therapy for depression in children and young people with intellectual disability and/or autism.
To evaluate the current evidence base for psychological interventions for depression in children and young people with intellectual disability and/or autism, and examine the experiences of children and young people with intellectual disability and/or autism, their families and therapists, in receiving and delivering psychological treatment for depression.
Databases were searched up to 30 April 2020 using pre-defined search terms and criteria. Articles were independently screened and assessed for risk of bias. Data were synthesised and reported in a narrative review format.
A total of 10 studies met the inclusion criteria. Four identified studies were clinical case reports and six were quasi-experimental or experimental studies. All studies were assessed as being of moderate or high risk of bias. Participants with intellectual disability were included in four studies. There were limited data on the experiences of young people, their families or therapists in receiving or delivering psychological treatment for depression.
Well-designed, randomised controlled trials are critical to develop an evidence base for psychological treatment for young people with intellectual disability and/or autism with depression. Future research should evaluate the treatment experiences of young people, their families and therapists.
Gravitational waves from coalescing neutron stars encode information about nuclear matter at extreme densities, inaccessible by laboratory experiments. The late inspiral is influenced by the presence of tides, which depend on the neutron star equation of state. Neutron star mergers are expected to often produce rapidly rotating remnant neutron stars that emit gravitational waves. These will provide clues to the extremely hot post-merger environment. This signature of nuclear matter in gravitational waves contains most information in the 2–4 kHz frequency band, which is outside of the most sensitive band of current detectors. We present the design concept and science case for a Neutron Star Extreme Matter Observatory (NEMO): a gravitational-wave interferometer optimised to study nuclear physics with merging neutron stars. The concept uses high-circulating laser power, quantum squeezing, and a detector topology specifically designed to achieve the high-frequency sensitivity necessary to probe nuclear matter using gravitational waves. Above 1 kHz, the proposed strain sensitivity is comparable to full third-generation detectors at a fraction of the cost. Such sensitivity changes expected event rates for detection of post-merger remnants from approximately one per few decades with two A+ detectors to a few per year and potentially allow for the first gravitational-wave observations of supernovae, isolated neutron stars, and other exotica.
Maintaining nutritional adequacy contributes to successful ageing. B vitamins involved in one-carbon metabolism regulation (folate, riboflavin, vitamins B6 and B12) are critical nutrients contributing to homocysteine and epigenetic regulation. Although cross-sectional B vitamin intake in ageing populations is characterised, longitudinal changes are infrequently reported. This systematic review explores age-related changes in dietary adequacy of folate, riboflavin, vitamins B6 and B12 in community-dwelling older adults (≥65 years at follow-up). Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, databases (MEDLINE, Embase, BIOSIS, CINAHL) were systematically screened, yielding 1579 records; eight studies were included (n 3119 participants, 2–25 years of follow-up). Quality assessment (modified Newcastle–Ottawa quality scale) rated all of moderate–high quality. The estimated average requirement cut-point method estimated the baseline and follow-up population prevalence of dietary inadequacy. Riboflavin (seven studies, n 1953) inadequacy progressively increased with age; the prevalence of inadequacy increased from baseline by up to 22·6 and 9·3 % in males and females, respectively. Dietary folate adequacy (three studies, n 2321) improved in two studies (by up to 22·4 %), but the third showed increasing (8·1 %) inadequacy. Evidence was similarly limited (two studies, respectively) and inconsistent for vitamins B6 (n 559; −9·9 to 47·9 %) and B12 (n 1410; −4·6 to 7·2 %). This review emphasises the scarcity of evidence regarding micronutrient intake changes with age, highlighting the demand for improved reporting of longitudinal changes in nutrient intake that can better direct micronutrient recommendations for older adults. This review was registered with PROSPERO (CRD42018104364).
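The estimated average requirement (EAR) cut-point method mentioned above estimates the population prevalence of inadequacy as the proportion of individuals whose usual intake falls below the EAR. A minimal sketch with hypothetical intake values:

```python
def prevalence_inadequate(usual_intakes, ear):
    """EAR cut-point method: percentage of individuals whose usual
    intake falls below the estimated average requirement.

    Simplified illustration; in practice usual intake must first be
    estimated from repeated measures to remove day-to-day variation.
    """
    below = sum(1 for intake in usual_intakes if intake < ear)
    return 100.0 * below / len(usual_intakes)

# Hypothetical riboflavin intakes (mg/d) against a hypothetical EAR of 1.1
prevalence = prevalence_inadequate([0.8, 1.0, 1.2, 1.5], ear=1.1)
```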
The Eating Assessment in Toddlers FFQ (EAT FFQ) has been shown to have good reliability and comparative validity for ranking nutrient intakes in young children. With the addition of food items (n 4), we aimed to re-assess the validity of the EAT FFQ and estimate calibration factors in a sub-sample of children (n 97) participating in the Growing Up Milk – Lite (GUMLi) randomised control trial (2015–2017). Participants completed the ninety-nine-item GUMLi EAT FFQ and record-assisted 24-h recalls (24HR) on two occasions. Energy and nutrient intakes were assessed at months 9 and 12 post-randomisation and calibration factors calculated to determine predicted estimates from the GUMLi EAT FFQ. Validity was assessed using Pearson correlation coefficients, weighted kappa (κ) and exact quartile categorisation. Calibration was calculated using linear regression models on 24HR, adjusted for sex and treatment group. Nutrient intakes were significantly correlated between the GUMLi EAT FFQ and 24HR at both time points. Energy-adjusted, de-attenuated Pearson correlations ranged from 0·3 (fibre) to 0·8 (Fe) at 9 months and from 0·3 (Ca) to 0·7 (Fe) at 12 months. Weighted κ for the quartiles ranged from 0·2 (Zn) to 0·6 (Fe) at 9 months and from 0·1 (total fat) to 0·5 (Fe) at 12 months. Exact agreement ranged from 30 to 74 %. Calibration factors predicted up to 56 % of the variation in the 24HR at 9 months and 44 % at 12 months. The GUMLi EAT FFQ remained a useful tool for ranking nutrient intakes with similar estimated validity compared with other FFQ used in children under 2 years.
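Calibration of an FFQ against a reference method, as described above, typically regresses the 24-h recall value on the FFQ value so the fitted equation can convert raw FFQ estimates into calibrated intakes. A simplified sketch (the study's models also adjusted for sex and treatment group, omitted here):

```python
def calibrate(ffq, recall24h):
    """Fit recall24h = a + b * ffq by ordinary least squares.

    Simplified illustration of FFQ calibration; returns (a, b).
    """
    n = len(ffq)
    mx = sum(ffq) / n
    my = sum(recall24h) / n
    sxx = sum((x - mx) ** 2 for x in ffq)
    sxy = sum((x - mx) * (y - my) for x, y in zip(ffq, recall24h))
    b = sxy / sxx          # calibration slope
    a = my - b * mx        # calibration intercept
    return a, b

def predict(ffq_value, a, b):
    """Calibrated intake estimate from a raw FFQ value."""
    return a + b * ffq_value
```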
Schizophrenia is a disorder characterized by pervasive deficits in cognitive functioning. However, few well-powered studies have examined the degree to which cognitive performance is impaired even among individuals with schizophrenia not currently on antipsychotic medications using a wide range of cognitive and reinforcement learning measures derived from cognitive neuroscience. Such research is particularly needed in the domain of reinforcement learning, given the central role of dopamine in reinforcement learning, and the potential impact of antipsychotic medications on dopamine function.
The present study sought to fill this gap by examining healthy controls (N = 75), unmedicated (N = 48) and medicated (N = 148) individuals with schizophrenia. Participants were recruited across five sites as part of the CNTRaCS Consortium to complete tasks assessing processing speed, cognitive control, working memory, verbal learning, relational encoding and retrieval, visual integration and reinforcement learning.
Individuals with schizophrenia who were not taking antipsychotic medications, as well as those taking antipsychotic medications, showed pervasive deficits across cognitive domains including reinforcement learning, processing speed, cognitive control, working memory, verbal learning and relational encoding and retrieval. Further, we found that chlorpromazine equivalency rates were significantly related to processing speed and working memory, while there were no significant relationships between anticholinergic load and performance on other tasks.
These findings add to a body of literature suggesting that cognitive deficits are an enduring aspect of schizophrenia, present in those off antipsychotic medications as well as those taking antipsychotic medications.
This chapter provides an overview of economic and behavioral economic approaches to behavior change. The chapter begins with a description of the traditional or neoclassical economic view of decision-making using expected utility theory as its basis. Attempts by an external party (e.g., a government or agency) to change behavior are viewed as justifiable in a limited number of circumstances, such as when there are externalities or coordination failures. When behavior change is warranted, neoclassical economics has focused on four options: provide information; increase incentives; reduce prices or increase subsidies; or impose regulations. To be successful, the approach must change the net benefits of the promoted behavior. The chapter then describes the rationale behind behavioral economic approaches to behavior change, emphasizing the role that “nudges” play in behavior change. Examples are provided of common heuristics and associated decision errors that can result, and how nudges are designed to overcome these decision errors. The underlying rationale and steps for developing nudges are summarized. Current evidence suggests that some nudges can be effective in changing behavior, but more research is needed to demonstrate the effectiveness of many nudge strategies. The chapter concludes with a discussion of the likely long-term impact of nudges in the field of behavior change.
Diet has a major influence on the composition and metabolic output of the gut microbiome. Higher-protein diets are often recommended for older consumers; however, the effect of high-protein diets on the gut microbiota and faecal volatile organic compounds (VOC) of elderly participants is unknown. The purpose of the study was to establish if the faecal microbiota composition and VOC in older men are different after a diet containing the recommended dietary intake (RDA) of protein compared with a diet containing twice the RDA (2RDA). Healthy males (74⋅2 (sd 3⋅6) years; n 28) were randomised to consume the RDA of protein (0⋅8 g protein/kg body weight per d) or 2RDA, for 10 weeks. Dietary protein was provided via whole foods rather than supplementation or fortification. The diets were matched for dietary fibre from fruit and vegetables. Faecal samples were collected pre- and post-intervention for microbiota profiling by 16S ribosomal RNA amplicon sequencing and VOC analysis by head space/solid-phase microextraction/GC-MS. After correcting for multiple comparisons, no significant differences in the abundance of faecal microbiota or VOC associated with protein fermentation were evident between the RDA and 2RDA diets. Therefore, in the present study, a twofold difference in dietary protein intake did not alter gut microbiota or VOC indicative of altered protein fermentation.
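The abstract notes that group differences were assessed "after correcting for multiple comparisons" but does not name the correction. One common choice in microbiome studies is the Benjamini–Hochberg false-discovery-rate step-up procedure, sketched below as an illustration (not necessarily the method this study used):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR step-up: return indices of hypotheses
    rejected at FDR level alpha.

    Illustrative sketch; the study does not specify which correction
    was applied.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    # Find the largest rank whose p-value passes its BH threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            k = rank
    return sorted(order[:k])

rejected = benjamini_hochberg([0.001, 0.04, 0.03, 0.8])
```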
In response to advancing clinical practice guidelines regarding concussion management, service members, like athletes, complete a baseline assessment prior to participating in high-risk activities. While several studies have established test stability in athletes, no investigation to date has examined the stability of baseline assessment scores in military cadets. The objective of this study was to assess the test–retest reliability of a baseline concussion test battery in cadets at U.S. Service Academies.
All cadets participating in the Concussion Assessment, Research, and Education (CARE) Consortium investigation completed a standard baseline battery that included memory, balance, symptom, and neurocognitive assessments. Annual baseline testing was completed during the first 3 years of the study. A two-way mixed-model analysis of variance (intraclass correlation coefficient (ICC)3,1) and Kappa statistics were used to assess the stability of the metrics at 1-year and 2-year time intervals.
ICC values for the 1-year test interval ranged from 0.28 to 0.67 and from 0.15 to 0.57 for the 2-year interval. Kappa values ranged from 0.16 to 0.21 for the 1-year interval and from 0.29 to 0.31 for the 2-year test interval. Across all measures, the observed effects were small, ranging from 0.01 to 0.44.
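The ICC(3,1) values above come from a two-way mixed-model ANOVA (single measure, consistency). A minimal pure-Python sketch of that computation, for an n-subjects-by-k-sessions score matrix (hypothetical data, not CARE Consortium values):

```python
def icc_3_1(scores):
    """ICC(3,1): two-way mixed model, single measure, consistency.

    scores: list of n subjects, each a list of k repeated session scores.
    ICC = (MS_rows - MS_error) / (MS_rows + (k - 1) * MS_error)
    """
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((scores[i][j] - grand) ** 2
                   for i in range(n) for j in range(k))
    ss_err = ss_total - ss_rows - ss_cols      # residual sum of squares
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```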
This investigation noted less than optimal reliability for the most common concussion baseline assessments. While none of the assessments met or exceeded the accepted clinical threshold, the effect sizes were relatively small, suggesting an overlap in performance from year to year. As such, baseline assessments beyond the initial evaluation in cadets are not essential but could aid concussion diagnosis.
We describe 14 yr of public data from the Parkes Pulsar Timing Array (PPTA), an ongoing project that is producing precise measurements of pulse times of arrival from 26 millisecond pulsars using the 64-m Parkes radio telescope with a cadence of approximately 3 weeks in three observing bands. A comprehensive description of the pulsar observing systems employed at the telescope since 2004 is provided, including the calibration methodology and an analysis of the stability of system components. We attempt to provide full accounting of the reduction from the raw measured Stokes parameters to pulse times of arrival to aid third parties in reproducing our results. This conversion is encapsulated in a processing pipeline designed to track provenance. Our data products include pulse times of arrival for each of the pulsars along with an initial set of pulsar parameters and noise models. The calibrated pulse profiles and timing template profiles are also available. These data represent almost 21 000 h of recorded data spanning over 14 yr. After accounting for processes that induce time-correlated noise, 22 of the pulsars have weighted root-mean-square timing residuals of
in at least one radio band. The data should allow end users to quickly undertake their own gravitational wave analyses, for example, without having to understand the intricacies of pulsar polarisation calibration or attain a mastery of radio frequency interference mitigation as is required when analysing raw data files.
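The weighted root-mean-square timing residual quoted above weights each residual by the inverse square of its measurement uncertainty. A minimal sketch of that statistic (hypothetical residuals, not PPTA values):

```python
import math

def weighted_rms(residuals, uncertainties):
    """Weighted rms of timing residuals, with weights w_i = 1 / sigma_i^2.

    residuals and uncertainties in the same units (e.g. microseconds).
    """
    weights = [1.0 / sigma ** 2 for sigma in uncertainties]
    mean_sq = (sum(w * r * r for w, r in zip(weights, residuals))
               / sum(weights))
    return math.sqrt(mean_sq)

# With equal uncertainties this reduces to the ordinary rms
rms = weighted_rms([3.0, 4.0], [1.0, 1.0])
```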
Surveillance for acute flaccid paralysis (AFP) cases is essential for polio eradication. However, as most poliovirus infections are asymptomatic and some regions of the world are inaccessible, additional surveillance tools require development. Within England and Wales, we demonstrate how inclusion of environmental sampling (ENV) improves the sensitivity of detecting both wild and vaccine-derived polioviruses (VDPVs) when compared to current surveillance. Statistical modelling was used to estimate the spatial risk of wild and VDPV importation and circulation in England and Wales. We estimate the sensitivity of each surveillance mode to detect poliovirus and the probability of being free from poliovirus, defined as being below a pre-specified prevalence of infection. Poliovirus risk was higher within local authorities in Manchester, Birmingham, Bradford and London. The sensitivity of detecting wild poliovirus within a given month using AFP and enterovirus surveillance was estimated to be 0.096 (95% CI 0.055–0.134). Inclusion of ENV in the three highest risk local authorities and a site in London increased surveillance sensitivity to 0.192 (95% CI 0.191–0.193). The sensitivity of ENV strategies can be compared using the framework by varying sites and the frequency of sampling. The probability of being free from poliovirus slowly increased from the date of the last case in 1993. ENV within areas thought to have the highest risk improves detection of poliovirus, and has the potential to improve confidence in the polio-free status of England and Wales and detect VDPVs.
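Combining surveillance modes and updating the probability of freedom from infection are commonly handled with standard scenario-tree surveillance formulas: the system detects if any component detects, and each negative surveillance period raises confidence in freedom. The study's model is more detailed; this is a minimal sketch of those two standard formulas:

```python
def combined_sensitivity(component_sensitivities):
    """Sensitivity of a surveillance system whose components detect
    independently: 1 - product(1 - Se_i)."""
    p_all_miss = 1.0
    for se in component_sensitivities:
        p_all_miss *= (1.0 - se)
    return 1.0 - p_all_miss

def update_freedom(prior_free, system_sensitivity):
    """Posterior probability of freedom from infection after one
    surveillance period with no detection (Bayes' rule)."""
    p = prior_free
    return p / (p + (1.0 - p) * (1.0 - system_sensitivity))
```

For example, starting from an assumed (hypothetical) prior of 0.5, repeated negative months with the AFP/enterovirus sensitivity of 0.096 gradually increase the probability of freedom, consistent with the slow increase described above.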
Background: In patients with acute hip fracture, a fascia iliaca compartment block (FICB) has been shown to provide effective non-opioid analgesia, reduce the incidence of pneumonia, and potentially decrease the rate of delirium. However, this procedure was infrequently used in the St. Michael's Hospital (SMH) emergency department (ED). Aim Statement: Our aim was to increase the proportion of patients with hip fracture receiving FICB in the ED to 50% in six months. Measures & Design: We completed two Plan-Do-Study-Act (PDSA) cycles, measuring rates of FICB before and after each cycle. The first was a departmental rounds presentation with information about the process and benefits of FICB, addressing barriers identified by surveying the group. The second cycle included a bundle of interventions comprising an “instruction card” with the steps required to do the procedure, access to a video tutorial, and a list of experienced physicians willing to help less experienced providers perform FICB. Evaluation/Results: In the three months prior to the project, the rate of FICB in the ED was 12.5% (3/24). For the three months after the first PDSA cycle, the rate increased to 22.2% (8/36). Then, the second cycle was performed. In the following two months the rate further increased to 36.8% (7/19). Discussion/Impact: Despite the clear increase in FICB rate, these changes were not statistically significant (p = 0.063). Our methodology was shown to be safe and effective, and our model can be applied to other ED groups looking to increase their rates of FICB.
Introduction: Trauma care is highly complex and prone to medical errors. Accordingly, several studies have identified adverse events and conditions leading to potentially preventable or preventable deaths. Depending on the availability of specialized trauma care and the trauma system organization, between 10 and 30% of trauma-related deaths worldwide could be preventable if optimal care was promptly delivered. This narrative review aims to identify the main determinants and areas for improvement associated with potentially preventable trauma mortality. Methods: A literature review was performed using Medline, Embase and the Cochrane Central Register of Controlled Trials from 1990 to a maximum of 6 months before submission for publication. Experimental or observational studies that assessed determinants and areas for improvement associated with trauma death preventability were considered for inclusion. Two researchers independently selected eligible studies and extracted the relevant data. The main areas for improvement were classified using the Joint Commission on Accreditation of Healthcare Organizations patient event taxonomy. No statistical analyses were performed given the data heterogeneity. Results: From the 3647 individual titles obtained by the search strategy, a total of 37 studies were included. Each study included between 72 and 35,311 trauma patients who had sustained mostly blunt trauma, frequently following a fall or a motor vehicle accident. Preventability assessment was performed for 17 to 2081 patients using either a single expert assessment (n = 2, 5.4%) or an expert panel review (n = 35, 94.6%). The definition of preventability and the taxonomy used varied greatly between the studies. The rate of potentially preventable or preventable death ranged from 2.4% to 76.5%.
The most frequently reported areas for improvement were treatment delays, diagnostic accuracy (avoiding missed or incorrect diagnoses) and adverse events associated with the initial procedures performed. The risk of bias of the included studies was high for 32 studies because of the retrospective design and the panel-review preventability assessment. Conclusion: Deaths occurring after trauma often remain preventable. The included studies used unstandardized definitions of a preventable death and various methodologies to perform the preventability assessment. The proportion of preventable or potentially preventable deaths reported in each study ranged from 2.4% to 76.5%. Delayed treatment, missed or incorrect initial diagnoses and adverse events following a procedure were commonly associated with preventable trauma deaths and could be targeted to develop quality improvement and monitoring projects.