Subanesthetic ketamine infusion therapy can produce fast-acting antidepressant effects in patients with major depression. How single and repeated ketamine treatment modulates the whole-brain functional connectome to affect clinical outcomes remains uncharacterized.
Data-driven whole-brain functional connectivity (FC) analysis was used to identify the functional connections modified by ketamine treatment in patients with major depressive disorder (MDD). MDD patients (N = 61, mean age = 38, 19 women) completed baseline resting-state (RS) functional magnetic resonance imaging and depression symptom scales. Of these patients, n = 48 and n = 51 completed the same assessments 24 h after receiving one and four 0.5 mg/kg intravenous ketamine infusions, respectively. Healthy controls (HC) (n = 40, 24 women) completed baseline assessments with no intervention. Analysis of RS FC addressed effects of diagnosis, time, and remitter status.
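As a minimal sketch of the kind of edge-wise group comparison this describes, assuming ROI time series have already been extracted (all array names, shapes, and inputs here are hypothetical, not the authors' pipeline):

```python
import numpy as np
from scipy.stats import ttest_ind

def fc_matrix(ts):
    """Fisher z-transformed functional connectivity from a (timepoints x ROIs) array."""
    r = np.corrcoef(ts.T)          # ROI-by-ROI Pearson correlations
    np.fill_diagonal(r, 0.0)       # avoid arctanh(1) on the diagonal
    return np.arctanh(r)           # Fisher z for approximate normality

def edgewise_group_test(group_a, group_b):
    """Two-sample t-test at every connection between two lists of subjects."""
    za = np.stack([fc_matrix(ts) for ts in group_a])
    zb = np.stack([fc_matrix(ts) for ts in group_b])
    t, p = ttest_ind(za, zb, axis=0)
    return t, p                    # uncorrected p-values per connection
```

The reported p < 0.05 corrected results would require an additional multiple-comparison step (e.g. FDR or network-based correction) on top of such edge-wise tests.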
Significant differences (p < 0.05, corrected) in RS FC were observed between HC and MDD at baseline in the somatomotor network and between association and default mode networks. These disruptions in FC in MDD patients trended toward control patterns with ketamine treatment. Furthermore, following serial ketamine infusions, significant decreases in FC were observed between the cerebellum and salience network (SN) (p < 0.05, corrected). Patients who remitted showed increased FC between the cerebellum and the striatum prior to treatment, which decreased following treatment, whereas non-remitters showed the opposite pattern.
Results support that ketamine treatment leads to neurofunctional plasticity between distinct neural networks that are disrupted in MDD patients. Cortico-striatal-cerebellar loops that encompass the SN could be a potential biomarker for ketamine treatment.
As the pathophysiology of COVID-19 emerges, this paper describes dysphagia as a sequela of the disease at a tertiary UK hospital, including its diagnosis and management, hypothesised causes, symptomatology in relation to viral progression, and concurrent variables such as intubation, tracheostomy and delirium.
During the first wave of the COVID-19 pandemic, 208 out of 736 patients (28.9 per cent) admitted to our institution with SARS-CoV-2 were referred for swallow assessment. Of the 208 patients, 102 were admitted to the intensive treatment unit for mechanical ventilation support, of whom 82 were tracheostomised. The majority of patients regained near-normal swallow function prior to discharge, regardless of intubation duration or tracheostomy status.
Dysphagia is prevalent in patients admitted either to the intensive treatment unit or the ward with COVID-19 related respiratory issues. This paper describes the crucial role of intensive swallow rehabilitation to manage dysphagia associated with this disease, including therapeutic respiratory weaning for those with a tracheostomy.
To explore the phenomenology of auditory verbal hallucinations (AVHs) in a clinical sample of young people who have a ‘non-psychotic’ diagnosis.
Ten participants aged 17–31 years presenting with emotionally unstable personality disorder or post-traumatic stress disorder and frequent AVHs took part in a qualitative study exploring their subjective experience of hearing voices. Photo-elicitation and ethnographic diaries were used to stimulate discussion in an otherwise unstructured walking interview.
‘Non-psychotic’ voices had auditory qualities such as volume and clarity. Participants commonly personified their voices, viewing them as distinct characters with which they could interact and form relationships. There appeared to be an intimate and unstable relationship between participant and voice, whereby voices changed according to the participants’ mood, insecurities, distress and circumstance. Equally, participants reacted to provocation by the voice, leading to changes in mood and circumstance through emotional and physical disturbances. In contrast to our previous qualitative work in psychosis, voice hearing was not experienced with a sense of imposition or control.
This phenomenological research yielded in-depth and novel accounts of ‘non-psychotic’ voices that were intimately linked to emotional experience. In contrast to standard reports of voices in disorders such as schizophrenia, participants described a complex and bi-directional relationship with their voices. Many other features were shared with voice hearing in psychosis. Knowledge of the phenomenology of hallucinations in non-psychotic disorders has the potential to inform more successful future management strategies. This report provides preliminary evidence to guide future research.
Neurobiological models of auditory verbal hallucination (AVH) have been advanced by symptom capture functional magnetic resonance imaging (fMRI), in which participants self-report hallucinations during scanning. To date, the regions implicated are those involved in language, memory and emotion. However, previous studies focus on chronic schizophrenia and are thus limited by confounds such as medication use and illness duration. Studies also lack detailed phenomenological descriptions of AVHs. This study investigated the neural correlates of AVHs in patients with first episode psychosis (FEP) using symptom capture fMRI together with a rich description of AVHs. We hypothesised that intrusive AVHs would be associated with dysfunctional salience network activity.
Sixteen FEP patients with frequent AVHs completed four psychometrically validated tools to provide an objective measure of the nature of their AVHs. They then underwent fMRI symptom capture, using general linear model analysis to compare activity during AVHs with that of the resting brain.
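A toy illustration of the event-related GLM logic described here, assuming hallucination periods were self-reported as scan indices; the HRF parameters and all inputs are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0                                        # assumed repetition time (s)

def hrf(tr=TR, length=32.0):
    """Simplified canonical double-gamma haemodynamic response function."""
    t = np.arange(0.0, length, tr)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def avh_regressor(onsets, n_scans, tr=TR):
    """Boxcar at self-reported AVH scans, convolved with the HRF."""
    box = np.zeros(n_scans)
    box[onsets] = 1.0
    return np.convolve(box, hrf(tr))[:n_scans]

def glm_tstat(y, reg):
    """OLS fit of one voxel's time series; t-statistic for the AVH regressor."""
    X = np.column_stack([reg, np.ones_like(reg)])
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = rss[0] / (len(y) - X.shape[1])
    var_b = sigma2 * np.linalg.inv(X.T @ X)[0, 0]
    return beta[0] / np.sqrt(var_b)
```

In practice this fit is run at every voxel and the resulting statistic map is thresholded with correction for multiple comparisons, as in the FDR-corrected clusters reported below.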
Symptom capture of AVH was achieved in nine patients, who reported intrusive, malevolent and uncontrollable AVHs. Significant activity in the right insula and superior temporal gyrus (cluster size 141 mm³) and in the left parahippocampal and lingual gyri (cluster size 121 mm³), P < 0.05 FDR corrected, was recorded during the experience of AVHs.
These results suggest salience network dysfunction (in the right insula) together with memory and language processing area activation in intrusive, malevolent AVHs in FEP. This finding concurs with others from chronic schizophrenia, suggesting these processes are intrinsic to psychosis itself and not related to length of illness or prolonged exposure to antipsychotic medication.
Environmental information from place-names has largely been overlooked by geoarchaeologists and fluvial geomorphologists in analyses of the depositional histories of rivers and floodplains. Here, new flood chronologies for the rivers Teme, Severn, and Wye are presented, modelled from stable river sections excavated at Broadwas, Buildwas, and Rotherwas. These are connected by the Old English term *wæsse, interpreted as ‘land by a meandering river which floods and drains quickly’. The results reveal that, in all three places, flooding during the early medieval period occurred more frequently between AD 350–700 than between AD 700–1100, but that over time each river's flooding regime became more complex, including high-magnitude single events. In the sampled locations, the fluvial dynamics of localized flood events had much in common, and almost certainly differed in nature from other sections of their rivers, refining our understanding of the precise nature of flooding which their names sought to communicate. This study shows how the toponymic record can aid the long-term reconstruction of historic river activity and our understanding of past human perceptions of riverine environments.
Introduction: There is increasing evidence supporting ultrasonography for the determination of optimal chest compression location during cardiac arrest. Radiological studies have demonstrated that in up to one-third of patients the aortic root or outflow tract is being compressed during standard CPR. Patients suffering out-of-hospital cardiac arrest (OHCA) could benefit from cardiac localization, undertaken with scaled-down ultrasound equipment by which the largest fluid-filled structure in the chest (the heart) is identified to guide optimal compression location. We intend to evaluate (1) where the left ventricle lies in supine patients, (2) the accuracy and precision, and (3) the feasibility and reliability of cardiac localization with a scaled-down ultrasound device (a bladder scanner). Methods: We are recruiting men and women over the age of 40. The scanning protocol involves using a bladder scanner on a 15-point grid over the subject's left chest, covering the parasternal, midclavicular, and anterior axillary intercostal spaces 3–7. Detected volumes will be recorded, with the presumption that the intercostal space with the largest measured volume is centered over the heart. Echocardiography will then be used to confirm the bladder scanner's accuracy and to better describe the patient's internal chest anatomy. Having assessed procedural feasibility on 3 pilot subjects, we are now recruiting 100 participants, with a planned interim analysis at 50 participants for sample size reassessment. Maximal volume location frequencies from the echocardiograms will be described and assessed for variation using a goodness-of-fit test. The proportion of agreement between the two modalities regarding the maximal volume location will also be examined. Results: Among the 3 volunteers (pilot study), the scanner identified fluid in 4–8 of 15 intercostal spaces. In each of the three pilot study patients, the maximal volume identified by the bladder scanner was found at the parasternal location of the 6th intercostal space. This was also the location of the mid left ventricular diameter on echocardiography. Conclusion: Our literature review and pilot study data support the premise that lay persons and emergency medical personnel may improve compressions (and thus outcomes) during OHCA by using a scaled-down ultrasound device to identify the optimal compression location. We are currently enrolling patients in our study.
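To picture the planned goodness-of-fit analysis: under the null hypothesis, the maximal-volume location is uniformly distributed over the 15 grid points. A minimal sketch with hypothetical counts (not study data):

```python
import numpy as np
from scipy.stats import chisquare

# hypothetical counts of which of the 15 grid locations held the maximal
# volume across 50 participants (the interim analysis size); not real data
obs = np.array([0, 1, 0, 2, 3, 1, 0, 4, 18, 9, 2, 5, 3, 1, 1])

stat, p = chisquare(obs)   # null: uniform across the 15 locations
print(f"chi2 = {stat:.1f}, p = {p:.3g}")
```

A small p-value would indicate that the maximal-volume location clusters at particular grid points rather than varying at random.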
Community-acquired pneumonia (CAP) results in substantial numbers of hospitalisations and deaths in older adults. There are known lifestyle and medical risk factors for pneumococcal disease but the magnitude of the additional risk is not well quantified in Australia. We used a large population-based prospective cohort study of older adults in the state of New South Wales (45 and Up Study) linked to cause-specific hospitalisations, disease notifications and death registrations from 2006 to 2015. We estimated the age-specific incidence of CAP hospitalisation (ICD-10 J12-18), invasive pneumococcal disease (IPD) notification and presumptive non-invasive pneumococcal CAP hospitalisation (J13 + J18.1, excluding IPD), comparing those with at least one risk factor to those with no risk factors. The hospitalised case-fatality rate (CFR) included deaths in a 30-day window after hospitalisation. Among 266 951 participants followed for 1 850 000 person-years there were 8747 first hospitalisations for CAP, 157 IPD notifications and 305 non-invasive pneumococcal CAP hospitalisations. In persons 65–84 years, 54.7% had at least one identified risk factor, increasing to 57.0% in those ⩾85 years. The incidence of CAP hospitalisation in those ⩾65 years with at least one risk factor was twofold higher than in those without risk factors, 1091/100 000 (95% confidence interval (CI) 1060–1122) compared with 522/100 000 (95% CI 501–545), and IPD in equivalent groups was almost threefold higher (18.40/100 000 (95% CI 14.61–22.87) vs. 6.82/100 000 (95% CI 4.56–9.79)). The CFR increased with age but there were limited differences by risk status, except in those aged 45–64 years. Adults ⩾65 years with at least one risk factor have much higher rates of CAP and IPD, suggesting that additional risk factor-based vaccination strategies may be cost-effective.
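Rates per 100 000 person-years with confidence intervals of this kind can be computed from event counts and person-time. A minimal sketch using the standard exact (Garwood) Poisson interval; the example counts are hypothetical, since the subgroup denominators are not given in the abstract:

```python
from scipy.stats import chi2

def poisson_rate_ci(events, person_years, per=1e5, alpha=0.05):
    """Exact (Garwood) CI for a Poisson incidence rate, per `per` person-years."""
    lo = 0.0 if events == 0 else chi2.ppf(alpha / 2, 2 * events) / 2
    hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    scale = per / person_years
    return events * scale, lo * scale, hi * scale

# hypothetical subgroup: 500 hospitalisations over 45 000 person-years
rate, ci_lo, ci_hi = poisson_rate_ci(500, 45_000)
print(f"{rate:.0f}/100 000 (95% CI {ci_lo:.0f}-{ci_hi:.0f})")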
During the summer of 2016, the Hawaii Department of Health responded to the second-largest domestic foodborne hepatitis A virus (HAV) outbreak in the post-vaccine era. The epidemiological investigation included case finding and investigation, sequencing of RNA-positive clinical specimens, product trace-back, and virologic testing and sequencing of HAV RNA from the product. Additionally, an online survey open to all Hawaii residents was conducted to estimate baseline commercial food consumption. We identified 292 confirmed HAV cases, of whom 11 (4%) were possible secondary cases. Seventy-four (25%) were hospitalised and there were two deaths. Among all cases, 94% reported eating at Oahu or Kauai Island branches of Restaurant Chain A, with 86% of those cases reporting raw scallop consumption. In contrast, a food consumption survey conducted during the outbreak indicated 25% of Oahu residents patronised Restaurant Chain A in the 7 weeks before the survey. Product trace-back revealed a single distributor that supplied scallops imported from the Philippines to Restaurant Chain A. Recovery, amplification and sequence comparison of HAV recovered from scallops revealed viral sequences matching those from case-patients. Removal of product from implicated restaurants and vaccination of those potentially exposed led to the cessation of the outbreak. This outbreak further highlights the need for improved imported food safety.
Objectives: Although subjective cognitive complaints (SCC) are an integral component of the diagnostic criteria for mild cognitive impairment (MCI), previous findings indicate they may not accurately reflect cognitive ability. Within the Alzheimer’s Disease Neuroimaging Initiative, we investigated longitudinal change in the discrepancy between self- and informant-reported SCC across empirically derived subtypes of MCI and normal control (NC) participants. Methods: Data were obtained for 353 MCI participants and 122 “robust” NC participants. Participants were classified into three subtypes at baseline via cluster analysis: amnestic MCI, mixed MCI, and cluster-derived normal (CDN), a presumptive false-positive group who performed within normal limits on neuropsychological testing. SCC at baseline and two annual follow-up visits were assessed via the Everyday Cognition Questionnaire (ECog), and discrepancy scores between self- and informant-report were calculated. Analysis of change was conducted using analysis of covariance. Results: The amnestic and mixed MCI subtypes demonstrated increasing ECog discrepancy scores over time. This was driven by an increase in informant-reported SCC, which corresponded to participants’ objective cognitive decline, despite stable self-reported SCC. Increasing unawareness was associated with cerebrospinal fluid Alzheimer’s disease biomarker positivity and progression to Alzheimer’s disease. In contrast, CDN and NC groups over-reported cognitive difficulty and demonstrated normal cognition at all time points. Conclusions: MCI participants’ discrepancy scores indicate progressive underappreciation of their evolving cognitive deficits. Consistent over-reporting in the CDN and NC groups despite normal objective cognition suggests that self-reported SCC do not predict impending cognitive decline. Results demonstrate that self-reported SCC become increasingly misleading as objective cognitive impairment becomes more pronounced. (JINS, 2018, 24, 842–853)
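A minimal sketch of the discrepancy-score construction described in the Methods, assuming long-format data with hypothetical column names (the actual ADNI variable names differ):

```python
import pandas as pd

# hypothetical long-format table: one row per participant per annual visit
df = pd.DataFrame({
    "subject": ["s01", "s01", "s02", "s02"],
    "visit":   [0, 1, 0, 1],
    "subtype": ["amnestic_mci", "amnestic_mci", "cdn", "cdn"],
    "ecog_self":      [1.8, 1.8, 2.1, 2.0],
    "ecog_informant": [2.0, 2.6, 1.4, 1.5],
})

# positive discrepancy: informant reports more problems than the participant,
# i.e. the participant under-appreciates their difficulties
df["discrepancy"] = df["ecog_informant"] - df["ecog_self"]
trend = df.groupby(["subtype", "visit"])["discrepancy"].mean()
print(trend)
```

The group-by-time trajectories of such scores are what the reported analysis of covariance would then compare across the MCI subtypes and controls.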
Extinctions have altered island ecosystems throughout the late Quaternary. Here, we review the main historic drivers of extinctions on islands, patterns in extinction chronologies between islands, and the potential for restoring ecosystems through reintroducing extirpated species. While some extinctions have been caused by climatic and environmental change, most have been caused by anthropogenic impacts. We propose a general model to describe patterns in these anthropogenic island extinctions. Hunting, habitat loss and the introduction of invasive predators accompanied prehistoric settlement and caused declines of endemic island species. Later settlement by European colonists brought further land development, a different suite of predators and new drivers, leading to more extinctions. Extinctions alter ecological networks, causing ripple effects for islands through the loss of ecosystem processes, functions and interactions between species. Reintroduction of extirpated species can help restore ecosystem function and processes, and can be guided by palaeoecology. However, reintroduction projects must also consider the cultural, social and economic needs of humans now inhabiting the islands and ensure resilience against future environmental and climate change.
A legionellosis outbreak at an industrial site was investigated to identify and control the source. Cases were identified from disease notifications, workplace illness records, and from clinicians. Cases were interviewed for symptoms and risk factors and tested for legionellosis. Implicated environmental sources were sampled and tested for legionella. We identified six cases with Legionnaires’ disease and seven with Pontiac fever; all had been exposed to aerosols from the cooling towers on the site. Nine cases had evidence of infection with either Legionella pneumophila serogroup (sg) 1 or Legionella longbeachae sg1; these organisms were also isolated from the cooling towers. There was 100% DNA sequence homology between cooling tower and clinical isolates of L. pneumophila sg1 using sequence-based typing analysis; no clinical L. longbeachae isolates were available to compare with environmental isolates. Routine monitoring of the towers prior to the outbreak failed to detect any legionella. Data from this outbreak indicate that L. pneumophila sg1 transmission occurred from the cooling towers; in addition, L. longbeachae transmission was suggested but remains unproven. L. longbeachae detection in cooling towers has not been previously reported in association with legionellosis outbreaks. Waterborne transmission should not be discounted in investigations for the source of L. longbeachae infection.
Cannabis use shows a robust dose-dependent relationship with psychosis risk among the general population. Despite this, it has been difficult to link cannabis use with risk for transitioning to a psychotic disorder among individuals at ultra-high risk (UHR) for psychosis. The present study examined UHR transition risk as a function of cannabis use characteristics that vary substantially between individuals, including age of first use, cannabis abuse severity and a history of cannabis-induced attenuated psychotic symptoms (APS).
Participants were 190 UHR individuals (76 males) recruited at entry to treatment between 2000 and 2006. They completed a comprehensive baseline assessment including a survey of cannabis use characteristics during the period of heaviest use. Outcome was transition to a psychotic disorder, with mean time to follow-up of 5.0 years (range 2.4–8.7 years).
A history of cannabis abuse was reported in 58% of the sample. Of these, 26% reported a history of cannabis-induced APS. These individuals were 4.90 (95% confidence interval 1.93–12.44) times more likely to transition to a psychotic disorder (p = 0.001). Greater severity of cannabis abuse also predicted transition to psychosis (p = 0.036). However, this effect was mediated by higher abuse severity among individuals with a history of cannabis-induced APS.
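For intuition about an effect size of this form, a sketch of a crude relative risk with its log-normal 95% CI; the counts below are hypothetical, and the study itself presumably used time-to-event methods rather than this crude ratio:

```python
import numpy as np
from scipy.stats import norm

def risk_ratio_ci(a, n1, c, n0, alpha=0.05):
    """Crude relative risk with log-normal CI.
    a of n1 exposed transitioned; c of n0 unexposed transitioned."""
    rr = (a / n1) / (c / n0)
    se = np.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)   # SE of log(RR)
    z = norm.ppf(1 - alpha / 2)
    return rr, rr * np.exp(-z * se), rr * np.exp(z * se)

# hypothetical counts, not the study's data
print(risk_ratio_ci(a=12, n1=29, c=14, n0=161))
```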
Findings suggest that cannabis use poses risk in a subpopulation of UHR individuals who manifest cannabis-induced APS. Whether this reflects underlying genetic vulnerability requires further study. Nevertheless, findings reveal an important early marker of risk with potentially significant prognostic utility for UHR individuals.
In recent years there have been numerous investigations of the helium shell-burning evolution of low-mass stars, and it was in such studies that Schwarzschild and Härm, and Weigert, independently discovered the thermal instability phenomenon. In the case of stars with hydrogen-rich envelopes, its reality has been amply confirmed. On the other hand, studies have also been made of the shell-burning in pure helium stars (many for comparison with the nuclei of planetary nebulae), and here the situation is far less clear. Some investigators have found the instability, while others have not. Paczyński has drawn attention to the fact that in all cases where thermal pulses have been reported for pure helium stars, the helium shell-source was treated as an abundance discontinuity, while in all cases where a detailed abundance profile was used, there was no evidence of pulses. He suggests therefore that the shells in pure helium stars are stable. We wish to report a calculation for a 0.8 M⊙ pure helium star, with a detailed shell abundance profile, in which a single thermal pulse was encountered at the end of the shell-burning evolution.
In this review, I will be concentrating on problems related to the evolution of stars on the asymptotic giant branch (AGB). AGB stars are defined as stars which have completed core helium burning and have subsequently developed degenerate carbon/oxygen cores surrounded by hydrogen and helium burning shells; such stars have main sequence masses M ≤ 9 M⊙ (Paczynski 1971; Becker and Iben 1980). In the HR diagram most AGB stars sit on the red giant branch. An exception to this rule occurs in Population II systems, where the AGB stars evolve asymptotically to the red giant branch from the blue side as the luminosity increases after completion of core helium burning on the horizontal branch.
Many of the important events in the life of a star occur, or are thought to occur, during the red giant or supergiant phase of evolution. For example, in massive and intermediate-mass stars supernova explosions terminate normal evolutionary processes, while in lower-mass stars the stellar envelope is entirely removed, giving rise to planetary nebulae and, subsequently, white dwarfs. Theoretical calculations suggest that before the onset of these rather drastic events, a significant amount of nucleosynthesis occurs, giving rise to enhanced surface abundances of He, C, N and s-process elements (e.g., Iben and Truran 1978; Renzini and Voli 1981); loss of the envelope material by stellar winds, planetary nebula ejection and supernova explosions produces overall galactic enrichment in these elements.
We have discovered a radiation levitation mechanism which, under certain circumstances, can remove the last vestiges of hydrogen-rich material from an incipient planetary nebula nucleus if its mass exceeds a critical value of about 0.85 M⊙. This process may be responsible for the production of helium-rich planetary nuclei, and their progeny the DB white dwarfs.
The spectrum of a symbiotic star consists of an M-type absorption spectrum, a B-type shell spectrum and nebular emission lines, with the relative contributions of these three components varying with time. The light curves of the symbiotic stars vary with a semi-regular period of typically 200–800 days, while larger eruptions occur on a timescale of ~3.5 years. Some suggestions which have been advanced to explain the combination spectrum, variability and eruptive behaviour of the symbiotic stars are:
(a) the symbiotic stars are binaries consisting of a hot and a cool component;
(b) the symbiotic stars consist of a single hot star surrounded by a large optically thick envelope, giving the appearance of a hot continuum with the absorption spectrum of a cool star superimposed on it;
(c) the symbiotic stars are single stars surrounded by a shock-wave-heated chromosphere.
Although some of the symbiotic stars are undoubtedly binaries (for example, T Coronae Borealis), observational evidence suggests that others may be explained by hypothesis (c) above. The calculations described below provide an explanation of the symbiotic stars in conjunction with hypothesis (c).
Seeing measured in the open air with a differential image motion monitor (DIMM) is compared with seeing measured simultaneously at the Cassegrain focus of the Anglo-Australian Telescope (AAT). It is shown that when the mirror is hotter than the dome air, the AAT’s seeing is degraded by ~1 arcsec per Celsius degree of excess mirror temperature. The consequence of this is that mirror seeing currently contributes significantly to the seeing at the AAT on many nights. A mirror colder than the dome air does not seem to degrade seeing, and neither does an internal-to-external air temperature difference of up to at least 3°C when the venting fans are on.
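To see what the ~1 arcsec per °C figure implies in practice, a small worked sketch; it assumes the usual Kolmogorov rule that independent seeing contributions add via their 5/3 powers, which is not stated in the abstract:

```python
def combined_seeing(*fwhm_arcsec):
    """Combine independent seeing terms (Kolmogorov 5/3-power addition)."""
    return sum(f ** (5 / 3) for f in fwhm_arcsec) ** (3 / 5)

external = 1.0                   # assumed DIMM (free-air) seeing, arcsec
mirror_excess_temp = 1.5         # deg C the mirror is hotter than the dome air
mirror_seeing = 1.0 * mirror_excess_temp   # ~1 arcsec per deg C, per the abstract

print(f"{combined_seeing(external, mirror_seeing):.2f} arcsec at the Cassegrain focus")
```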
A photometric survey of a central region of the LMC has been undertaken to obtain a magnitude and colour limited sample of bright asymptotic giant branch (AGB) stars; the stars were selected from V and Ic plates taken by the UK Schmidt Telescope Unit (UKSTU) at Coonabarabran. Infrared JHK photometry has been obtained for all the stars in the sample in order to determine bolometric magnitudes, and spectra have been obtained for most of the stars to obtain spectral types. Stars in the sample have bolometric magnitudes up to the AGB limit of Mbol ≈ −7.1, and many of the stars show evidence for dredge-up of carbon and s-process elements during helium shell flashes. A bolometric luminosity function has been constructed and its behaviour is discussed in terms of possible mass-loss scenarios.
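A sketch of the bolometric-magnitude step, assuming K-band magnitudes with a colour-dependent bolometric correction and an LMC distance modulus of about 18.5 (all numerical values below are illustrative, not the survey's data):

```python
import numpy as np

DM_LMC = 18.5   # assumed distance modulus to the LMC

def absolute_mbol(k_mag, bc_k, dm=DM_LMC):
    """Absolute bolometric magnitude from apparent K and a bolometric correction."""
    return k_mag + bc_k - dm

# hypothetical sample: build a luminosity function by histogramming M_bol
k = np.array([10.2, 9.8, 11.0, 10.5])     # apparent K magnitudes
bc = np.array([3.0, 3.1, 2.8, 2.9])       # illustrative BC_K values
mbol = absolute_mbol(k, bc)
counts, edges = np.histogram(mbol, bins=np.arange(-8.0, -3.5, 0.5))
```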