We obtained 24 air samples in 8 general wards temporarily converted into negative-pressure wards admitting coronavirus disease 2019 (COVID-19) patients infected with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) omicron variant BA.2.2 in Hong Kong. SARS-CoV-2 RNA was detected in 19 (79.2%) of 24 samples despite enhanced indoor air dilution. It is difficult to prevent airborne transmission of SARS-CoV-2 in hospitals.
Reaction time variability (RTV) has been estimated using Gaussian, ex-Gaussian, and diffusion model (DM) indices. Rarely have studies examined interrelationships among these performance indices in childhood, and the use of reaction time (RT) computational models has been slow to take hold in the developmental psychopathology literature. Here, we extend prior work in adults by examining the interrelationships among different model parameters in the ABCD sample and demonstrate how computational models of RT can clarify mechanisms of time-on-task effects and sex differences in RTs.
This study utilized trial-level data from the stop signal task from 8916 children (9–10 years old) to examine Gaussian, ex-Gaussian, and DM indicators of RTV. In addition to describing RTV patterns, we examined interrelations among these indicators, temporal patterns, and sex differences.
There was no one-to-one correspondence between DM and ex-Gaussian parameters. Nonetheless, drift rate was most strongly associated with the standard deviation of RT and tau, while nondecisional processes were most strongly associated with mean RT, mu, and sigma. Performance worsened over time, with changes driven primarily by decreasing drift rate. Boys were faster and less variable than girls, likely attributable to girls’ wider boundary separation.
Intercorrelations among model parameters in children are similar to those observed in adults. Computational approaches play a crucial role in understanding performance changes over time and can also clarify mechanisms of group differences. For example, standard RT models may incorrectly suggest slowed processing speed in girls that is actually attributable to other factors.
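The ex-Gaussian decomposition discussed above can be illustrated with a short simulation. The parameter values below are invented for illustration only, and scipy's `exponnorm` (parameterised by K = tau/sigma) stands in for whichever fitting routine the study actually used:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate reaction times (seconds) as Gaussian + exponential components:
# mu/sigma describe the Gaussian bulk, tau the slow right tail that
# inflates RT variability. These true values are hypothetical.
mu_true, sigma_true, tau_true = 0.45, 0.05, 0.15
rts = rng.normal(mu_true, sigma_true, 5000) + rng.exponential(tau_true, 5000)

# scipy's exponnorm uses K = tau/sigma, loc = mu, scale = sigma,
# so the ex-Gaussian parameters are recovered from the MLE fit as:
K, loc, scale = stats.exponnorm.fit(rts)
mu_hat, sigma_hat, tau_hat = loc, scale, K * scale
```

Fitting ex-Gaussian parameters separates slowing of the distribution's bulk (mu) from an increase in occasional very slow responses (tau), which is exactly the distinction a single mean-RT summary cannot make.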
A terrestrial (lacustrine and fluvial) palaeoclimate record from Hoxne (Suffolk, UK) shows two temperate phases separated by a cold episode, correlated with MIS 11 subdivisions corresponding to isotopic events 11.3 (Hoxnian interglacial period), 11.24 (Stratum C cold interval), and 11.23 (warm interval with evidence of human presence). A robust, reproducible multiproxy consensus approach validates and combines quantitative palaeotemperature reconstructions from three invertebrate groups (beetles, chironomids, and ostracods) and plant indicator taxa with qualitative implications of molluscs and small vertebrates. Compared with the present, interglacial mean monthly air temperatures were similar or up to 4.0°C higher in summer, but similar or as much as 3.0°C lower in winter; the Stratum C cold interval, following prolonged nondeposition or erosion of the lake bed, experienced summers 2.5°C cooler and winters between 5°C and 10°C cooler than at present. Possible reworking of fossils into Stratum C from underlying interglacial assemblages is taken into account. Oxygen and carbon isotopes from ostracod shells indicate evaporatively enriched lake water during Stratum C deposition. Comparative evaluation shows that proxy-based palaeoclimate reconstruction methods are best tested against each other and, if validated, can be used to generate more refined and robust results through multiproxy consensus.
Air dispersal of respiratory viruses other than SARS-CoV-2 has not been systematically reported. The incidence and factors associated with air dispersal of respiratory viruses are largely unknown.
We performed air sampling by collecting 72,000 L of air over 6 hours for pediatric and adolescent patients infected with parainfluenza virus 3 (PIF3), respiratory syncytial virus (RSV), rhinovirus, or adenovirus. Patients were isolated singly or in 2-patient cohorts in airborne infection isolation rooms (AIIRs) from December 3, 2021, to January 26, 2022. Viral loads in nasopharyngeal aspirates (NPA) and air samples were measured, and factors associated with air dispersal were investigated.
Of 20 singly isolated patients with a median age of 30 months (range, 3 months–15 years), 7 (35%) had air dispersal of viruses compatible with their NPA results. These included 4 (40%) of 10 PIF3-infected patients, 2 (66%) of 3 RSV-infected patients, and 1 (50%) of 2 adenovirus-infected patients. The mean viral load in their room air samples was 1.58×10³ copies/mL. Compared with the 13 patients (65%) without air dispersal, these 7 patients had a significantly higher mean viral load in their NPA specimens (6.15×10⁷ copies/mL vs 1.61×10⁵ copies/mL; P < .001). Another 14 patients were cohorted as 7 pairs infected with the same virus (PIF3, 2 pairs; RSV, 3 pairs; rhinovirus, 1 pair; and adenovirus, 1 pair) in double-bed AIIRs, all of which had air dispersal. The mean room air viral load in 2-patient cohorts was significantly higher than in rooms of singly isolated patients (1.02×10⁴ copies/mL vs 1.58×10³ copies/mL; P = .020).
Air dispersal of common respiratory viruses may have infection prevention and public health implications.
The concept of the ‘benefit cheat’ plays a critical role in political rhetoric and public policy, and it has been deployed to justify changes to the benefit system that have had a very negative impact on well-being and justice. The authors argue that the concept is dangerous, adding to the existing burdens of poverty and exclusion, and that it must be eradicated through a reorganisation of the welfare system. Dignity and a spirit of equality must be the starting point for any system of welfare that aims to promote universal well-being.
Studying phenotypic and genetic characteristics of age at onset (AAO) and polarity at onset (PAO) in bipolar disorder can provide new insights into disease pathology and facilitate the development of screening tools.
To examine the genetic architecture of AAO and PAO and their association with bipolar disorder disease characteristics.
Genome-wide association studies (GWASs) and polygenic score (PGS) analyses of AAO (n = 12 977) and PAO (n = 6773) were conducted in patients with bipolar disorder from 34 cohorts and a replication sample (n = 2237). The association of onset with disease characteristics was investigated in two of these cohorts.
Earlier AAO was associated with a higher probability of psychotic symptoms, suicidality, lower educational attainment, not living together and fewer episodes. Depressive onset correlated with suicidality and manic onset correlated with delusions and manic episodes. Systematic differences in AAO between cohorts and continents of origin were observed. This was also reflected in single-nucleotide variant-based heritability estimates, with higher heritabilities for stricter onset definitions. Increased PGS for autism spectrum disorder (β = −0.34 years, s.e. = 0.08), major depression (β = −0.34 years, s.e. = 0.08), schizophrenia (β = −0.39 years, s.e. = 0.08), and educational attainment (β = −0.31 years, s.e. = 0.08) were associated with an earlier AAO. The AAO GWAS identified one significant locus, but this finding did not replicate. Neither GWAS nor PGS analyses yielded significant associations with PAO.
AAO and PAO are associated with indicators of bipolar disorder severity. Individuals with an earlier onset show an increased polygenic liability for a broad spectrum of psychiatric traits. Systematic differences in AAO across cohorts, continents and phenotype definitions introduce significant heterogeneity, affecting analyses.
Psychosis is a major mental illness with first onset in young adults. The prognosis is poor in around half of the people affected, and difficult to predict. The few tools available to predict prognosis have major weaknesses which limit their use in clinical practice. We aimed to develop and validate a risk prediction model of symptom non-remission in first-episode psychosis.
Our development cohort consisted of 1027 patients with first-episode psychosis recruited between 2005 and 2010 from 14 early intervention services across the National Health Service in England. Our validation cohort consisted of 399 patients with first-episode psychosis recruited between 2006 and 2009 from a further 11 English early intervention services. The one-year non-remission rate was 52% in the development cohort and 54% in the validation cohort. Multivariable logistic regression was used to develop a risk prediction model for non-remission, which was then externally validated.
The prediction model showed good discrimination, with a C-statistic of 0.74 (0.72, 0.76), and adequate calibration, with an intercept alpha of 0.13 (0.03, 0.23) and a slope beta of 0.99 (0.87, 1.12). Our model improved the net benefit by 16% at a risk threshold of 50%, equivalent to detecting 16 more non-remitting first-episode psychosis individuals per 100 without incorrectly classifying remitted cases.
Once prospectively validated, our first-episode psychosis prediction model could help identify patients at increased risk of non-remission at initial clinical contact.
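The net-benefit figure quoted above comes from the standard decision-curve formula NB = TP/n − FP/n × pt/(1 − pt). The sketch below applies it to a hypothetical 100-patient cohort; the model and risk values are invented for illustration, not the study's data:

```python
import numpy as np

def net_benefit(y_true, y_prob, pt):
    """Decision-curve net benefit at risk threshold pt:
    NB = TP/n - FP/n * pt/(1 - pt).
    At pt = 0.5 the odds weight is 1, so each false positive
    offsets exactly one true positive."""
    y_true = np.asarray(y_true)
    flagged = np.asarray(y_prob) >= pt
    n = y_true.size
    tp = np.sum(flagged & (y_true == 1))
    fp = np.sum(flagged & (y_true == 0))
    return tp / n - fp / n * pt / (1 - pt)

# Hypothetical cohort: 54 of 100 patients do not remit (mirroring the
# validation cohort's 54% rate); the predicted risks are illustrative.
y = np.array([1] * 54 + [0] * 46)
risk = np.array([0.8] * 54 + [0.2] * 46)       # a well-separated model
model_nb = net_benefit(y, risk, 0.5)           # flags all and only non-remitters
treat_all = net_benefit(y, np.ones(100), 0.5)  # baseline: treat everyone
```

The model's advantage over the treat-all strategy, scaled by 100, is the number of additional true non-remitters detected per 100 patients without extra false positives — the interpretation the abstract gives for its 16% figure.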
Telephone consultations have been in clinical use since the early 1960s and are increasing in frequency and importance in many areas of medicine. With the advent of the COVID-19 pandemic in 2020, the use of telemedicine consultations increased dramatically alongside utilization of other digital technologies. Despite their promise and potential advantages for clinicians (including remote working, improved time management and safety), there are known drawbacks to telephone consultations for psychiatrists. These include limitations to assessments of mental state and risk, with the loss of non-verbal communication often cited as a point in favour of more sophisticated technologies such as video calling. By adopting telephone consultations to a greater extent during the initial months of the COVID-19 pandemic in the Coventry Crisis Resolution and Home Treatment Team (CRHTT), we aimed to assess the patient experience of telehealth through a patient survey.
After an initial assessment or follow-up consultation with a medical practitioner from the crisis team, patients were invited to take part in a short questionnaire with a member of the administration staff. This consisted of eight questions on a Likert scale and three open questions for comments. Results were collated and analyzed via Microsoft Excel.
Most patients found the telephone consultations satisfactory, with more than 90% returning positive scores in understanding, convenience and overall satisfaction. All patients felt listened to and that their confidentiality was maintained, and all but one respondent were willing to engage in further telephone consultations. Negative scores were typically returned for practical telephonic problems, including poor signal, interference and background noise. In their comments, patients expressed largely positive views about their experience with their clinician; analysis revealed key insights into the patient experience, demonstrating the convenience, comfort and flexibility possible with ‘telepsychiatry’.
Patient experience of telemedicine in a UK psychiatric crisis team is mostly positive, with clear advantages for both patients and clinicians. Our results show telephone consultations can be expanded to new patient assessments alongside follow-ups, enabling the team to reach a greater number of service users. This includes service users who are housebound due to infirmity, required to shield, or significantly anxious about the pandemic.
Two prominent risk factors for major depressive disorder (MDD) are childhood maltreatment (CM) and familial risk for MDD. Despite having these risk factors, some individuals maintain mental health, i.e. are resilient, whereas others develop MDD. It is unclear which brain morphological alterations are associated with this kind of resilience. To identify neural correlates of resilience, interaction analyses of risk and diagnosis status are needed that can account for complex adaptation processes.
We analyzed brain structural data (3T magnetic resonance imaging) by means of voxel-based morphometry (CAT12 toolbox), using a 2 × 2 design, comparing four groups (N = 804) that differed in diagnosis (healthy v. MDD) and risk profiles (low-risk, i.e. absence of CM and familial risk v. high-risk, i.e. presence of both CM and familial risk). Using regions of interest (ROIs) from the literature, we conducted an interaction analysis of risk and diagnosis status.
Volume in the left middle frontal gyrus (MFG), part of the dorsolateral prefrontal cortex (DLPFC), was significantly higher in healthy high-risk individuals. There were no significant results for the bilateral superior frontal gyri, frontal poles, pars orbitalis of the inferior frontal gyri, and the right MFG.
The healthy high-risk group had significantly higher volumes in the left DLPFC compared to all other groups. The DLPFC is implicated in cognitive and emotional processes, and higher volume in this area might aid high-risk individuals in adaptive coping in order to maintain mental health. This increased volume might therefore constitute a neural correlate of resilience to MDD in high-risk individuals.
Public-Private Innovation Partnerships (PPIPs) are increasingly used as a tool for addressing ‘wicked’ public sector challenges. ‘Innovation’ is, however, frequently treated as a ‘magic’ concept: used unreflexively, taken to be axiomatically ‘good’, and left undefined within policy programmes. Using McConnell’s framework of policy success and failure and a case study of a multi-level PPIP in the English health service (NHS Test Beds), this paper critically explores the implications of the mobilisation of innovation in PPIP policy and practice. We highlight how the interplay between levels (macro/micro and policy maker/recipient) can shape both emerging policies and their prospects for success or failure. The paper contributes to an understanding of PPIP success and failure by extending McConnell’s framework to explore inter-level effects between policy and innovation project, and demonstrating how the success of PPIP policy cannot be understood without recognising the particular political effects of ‘innovation’ on formulation and implementation.
Eighty percent of all patients suffering from major depressive disorder (MDD) relapse at least once in their lifetime. Thus, understanding the neurobiological underpinnings of the course of MDD is of utmost importance. A detrimental course of illness in MDD was most consistently associated with superior longitudinal fasciculus (SLF) fiber integrity. As similar associations were, however, found between SLF fiber integrity and acute symptomatology, this study attempts to disentangle associations attributed to current depression from long-term course of illness.
A total of 531 patients with acute (N = 250) or remitted (N = 281) MDD from the FOR2107 cohort were analyzed in this cross-sectional study using tract-based spatial statistics for diffusion tensor imaging. First, the effects of disease state (acute v. remitted), current symptom severity (BDI score), and course of illness (number of hospitalizations) on fractional anisotropy (FA), mean diffusivity (MD), radial diffusivity (RD), and axial diffusivity were analyzed separately. Second, disease state and BDI scores were analyzed in conjunction with the number of hospitalizations to disentangle their effects.
Disease state (pFWE < 0.042) and number of hospitalizations (pFWE < 0.032) were associated with decreased FA and increased MD and RD in the bilateral SLF. A trend was found for the BDI score (pFWE > 0.067). When analyzed simultaneously, only the effect of course of illness remained significant (pFWE < 0.040), mapping to the right SLF.
Decreased FA and increased MD and RD values in the SLF are associated with more hospitalizations when controlling for current psychopathology. SLF fiber integrity could reflect cumulative illness burden at a neurobiological level and should be targeted in future longitudinal analyses.
Universal masking for healthcare workers and patients in hospitals was adopted to combat coronavirus disease 2019 (COVID-19), with compliance rates of 100% and 75.9%, respectively. Zero rates of nosocomial influenza A, influenza B, and respiratory syncytial virus infection were achieved from February to April 2020, which was significantly lower than the corresponding months in 2017–2019.
Radiocarbon (¹⁴C) ages cannot directly provide absolutely dated chronologies for archaeological or paleoenvironmental studies but must be converted to calendar age equivalents using a calibration curve that compensates for fluctuations in atmospheric ¹⁴C concentration. Although calibration curves are constructed from independently dated archives, they invariably require revision as new data become available and our understanding of the Earth system improves. In this volume the international ¹⁴C calibration curves for both the Northern and Southern Hemispheres, as well as for the ocean surface layer, have been updated to include a wealth of new data and extended to 55,000 cal BP. Based on tree rings, IntCal20 now extends as a fully atmospheric record to ca. 13,900 cal BP. For the older part of the timescale, IntCal20 comprises statistically integrated evidence from floating tree-ring chronologies, lacustrine and marine sediments, speleothems, and corals. We utilized improved evaluation of the timescales and location-variable ¹⁴C offsets from the atmosphere (reservoir age, dead carbon fraction) for each dataset. New statistical methods have refined the structure of the calibration curves while maintaining a robust treatment of uncertainties in the ¹⁴C ages, the calendar ages, and other corrections. The inclusion of modeled marine reservoir ages derived from a three-dimensional ocean circulation model has allowed us to apply more appropriate reservoir corrections to the marine ¹⁴C data rather than the constant regional offsets from the atmosphere used previously. Here we provide an overview of the new and revised datasets and the associated methods used for the construction of the IntCal20 curve, and we explore potential regional offsets for tree-ring data.
We discuss the main differences with respect to the previous calibration curve, IntCal13, and some of the implications for archaeology and geosciences ranging from the recent past to the time of the extinction of the Neanderthals.
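The conversion from a ¹⁴C age to a calendar age that calibration curves enable amounts to inverting the curve. The fragment below uses invented values purely for illustration; real calibration against curves such as IntCal20 (e.g. in OxCal or the rcarbon package) also propagates measurement and curve uncertainties and must handle plateaus that map one ¹⁴C age to several calendar ages:

```python
import numpy as np

# Toy fragment of a calibration curve: calendar age (cal BP) against the
# mean atmospheric 14C age (BP) at that calendar age. Values are invented.
cal_bp = np.array([11000.0, 11100.0, 11200.0, 11300.0, 11400.0])
c14_bp = np.array([9600.0, 9680.0, 9750.0, 9790.0, 9880.0])

def point_calibrate(c14_age):
    """Crude point calibration: invert the curve by linear interpolation.
    Assumes the fragment is monotonic, which real curves are not in
    general -- this is a sketch of the idea, not a calibration method."""
    return float(np.interp(c14_age, c14_bp, cal_bp))
```

For example, a measured age of 9715 ¹⁴C BP falls halfway between the tabulated points at 9680 and 9750, so this sketch places it halfway between 11,100 and 11,200 cal BP.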
We have previously shown that higher intake of cruciferous vegetables is inversely associated with carotid artery intima-media thickness. To further test the hypothesis that increased consumption of cruciferous vegetables is associated with reduced indicators of structural vascular disease elsewhere in the vascular tree, we investigated the cross-sectional association between cruciferous vegetable intake and extensive calcification in the abdominal aorta. Dietary intake was assessed, using a FFQ, in 684 older women from the Calcium Intake Fracture Outcome Study. Cruciferous vegetables included cabbage, Brussels sprouts, cauliflower and broccoli. Abdominal aortic calcification (AAC) was scored using the Kauppila AAC24 scale on dual-energy X-ray absorptiometry lateral spine images and was categorised as ‘not extensive’ (0–5) or ‘extensive’ (≥6). Mean age was 74.9 (SD 2.6) years, median cruciferous vegetable intake was 28.2 (interquartile range 15.0–44.7) g/d, and 128 of 684 (18.7%) women had extensive AAC scores. Those with higher intakes of cruciferous vegetables (>44.6 g/d) had 46% lower odds of extensive AAC than those with lower intakes (<15.0 g/d) after adjustment for lifestyle, dietary and CVD risk factors (OR Q4 v. Q1 0.54, 95% CI 0.30, 0.97, P = 0.036). Total vegetable intake and each of the other vegetable types were not related to extensive AAC (P > 0.05 for all). This study strengthens the hypothesis that higher intake of cruciferous vegetables may protect against vascular calcification.
Prospectively acquired Canadian cerebrospinal fluid samples were used to assess the performance characteristics of three ante-mortem tests commonly used to support diagnoses of Creutzfeldt–Jakob disease. The utility of the end-point quaking-induced conversion assay as a test for Creutzfeldt–Jakob disease diagnoses was compared to that of immunoassays designed to detect increased amounts of the surrogate markers 14-3-3γ and hTau. The positive predictive values of the end-point quaking-induced conversion, 14-3-3γ, and hTau tests conducted at the Prion Diseases Section of the Public Health Agency of Canada were 96%, 68%, and 66%, respectively.
The role of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)–laden aerosols in the transmission of coronavirus disease 2019 (COVID-19) remains uncertain. Discordant findings on SARS-CoV-2 RNA in air samples were noted in early reports.
Sampling of air close to 6 asymptomatic and symptomatic COVID-19 patients with and without surgical masks was performed with sampling devices using sterile gelatin filters. Frequently touched environmental surfaces near 21 patients were swabbed before daily environmental disinfection. The correlation between the viral loads of patients’ clinical samples and environmental samples was analyzed.
All air samples were negative for SARS-CoV-2 RNA for the 6 patients singly isolated inside airborne infection isolation rooms (AIIRs) with 12 air changes per hour. Of 377 environmental samples near 21 patients, 19 (5.0%) were positive by reverse-transcription polymerase chain reaction (RT-PCR) assay, with a median viral load of 9.2 × 10² copies/mL (range, 1.1 × 10² to 9.4 × 10⁴ copies/mL). The contamination rate was highest on patients’ mobile phones (6 of 77, 7.8%), followed by bed rails (4 of 74, 5.4%) and toilet door handles (4 of 76, 5.3%). We detected a significant correlation between viral load ranges in clinical samples and the positivity rate of environmental samples (P < .001).
SARS-CoV-2 RNA was not detectable by air samplers, which suggests that the airborne route is not the predominant mode of transmission of SARS-CoV-2. Wearing a surgical mask, appropriate hand hygiene, and thorough environmental disinfection are sufficient infection control measures for COVID-19 patients isolated singly in AIIRs. However, this conclusion may not apply during aerosol-generating procedures or in cohort wards with large numbers of COVID-19 patients.
Implementation of genome-scale sequencing in clinical care faces significant challenges: the technology is highly dimensional with many kinds of potential results, results interpretation and delivery require expertise and coordination across multiple medical specialties, clinical utility may be uncertain, and there may be broader familial or societal implications beyond the individual participant. Transdisciplinary consortia and collaborative team science are well poised to address these challenges. However, the complex web of organizational, institutional, physical, environmental, technological, and other political and societal factors that influence the effectiveness of consortia remains understudied. We describe our experience working in the Clinical Sequencing Evidence-Generating Research (CSER) consortium, a multi-institutional translational genomics consortium.
A key aspect of the CSER consortium was the juxtaposition of site-specific measures with the need to identify consensus measures related to clinical utility and to create a core set of harmonized measures. During this harmonization process, we sought to minimize participant burden, accommodate project-specific choices, and use validated measures that allow data sharing.
Identifying platforms to ensure swift communication between teams and to manage materials and data was essential to our harmonization efforts. Funding agencies can help consortia by clarifying key study design elements across projects during the proposal preparation phase and by providing a framework for sharing data across participating projects.
In summary, time and resources must be devoted to developing and implementing collaborative practices as preparatory work at the beginning of project timelines to improve the effectiveness of research consortia.