People living in precarious housing or experiencing homelessness have higher than expected rates of psychotic disorders, persistent psychotic symptoms, and premature mortality. Psychotic symptoms can be modeled as a complex dynamic system, allowing assessment of the roles of risk factors in symptom development, persistence, and contribution to premature mortality.
The severity of delusions, conceptual disorganization, hallucinations, suspiciousness, and unusual thought content was rated monthly over 5 years in a community sample of precariously housed/homeless adults (n = 375) in Vancouver, Canada. Multilevel vector auto-regression analysis was used to construct temporal, contemporaneous, and between-person symptom networks. Network measures were compared between participants with (n = 219) or without (n = 156) history of psychotic disorder using bootstrap and permutation analyses. Relationships between network connectivity and risk factors including homelessness, trauma, and substance dependence were estimated by multiple linear regression. The contribution of network measures to premature mortality was estimated by Cox proportional hazard models.
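As a simplified illustration of the temporal-network idea, each symptom at month t can be regressed on all symptoms at month t−1; the resulting coefficient matrix is the temporal network, with diagonal entries capturing self-reinforcement. This sketch uses simulated data for a single hypothetical participant, whereas the study itself fitted a multilevel vector auto-regression across all 375 participants:

```python
import numpy as np

rng = np.random.default_rng(1)
symptoms = ["delusions", "conceptual disorganization", "hallucinations",
            "suspiciousness", "unusual thought content"]
k, n_months = len(symptoms), 60  # five years of monthly ratings

# Simulated ratings for one hypothetical participant (illustrative only);
# the study pooled participants with a multilevel (mixed-effects) model.
true_A = 0.3 * np.eye(k)  # positive self-reinforcement on the diagonal
x = np.zeros((n_months, k))
for t in range(1, n_months):
    x[t] = x[t - 1] @ true_A.T + rng.normal(0, 1, k)

# Temporal network: regress each symptom at month t on all symptoms at t-1.
lagged, current = x[:-1], x[1:]
coef, *_ = np.linalg.lstsq(lagged, current, rcond=None)
A_hat = coef.T  # A_hat[i, j] = effect of symptom j at t-1 on symptom i at t

for i, name in enumerate(symptoms):
    print(f"{name}: autoregressive coefficient {A_hat[i, i]:+.2f}")
```

With positive diagonal coefficients built into the simulation, the recovered autoregressive entries come out positive, mirroring the self-reinforcing symptom dynamics the abstract reports.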
Delusions and unusual thought content were central symptoms in the multilevel network. Each psychotic symptom was positively reinforcing over time, an effect most pronounced in participants with a history of psychotic disorder. Global connectivity was similar between those with and without such a history. Greater connectivity between symptoms was associated with methamphetamine dependence and past trauma exposure. Auto-regressive connectivity was associated with premature mortality in participants under age 55.
Past and current experiences contribute to the severity of, and dynamic relationships between, psychotic symptoms. Interrupting the self-perpetuating severity of psychotic symptoms in this vulnerable group could help reduce premature mortality.
Understanding the relationship between depression symptoms and cardiometabolic and related health conditions could help identify risk factors and treatment targets. The objective of this study was to assess whether depression symptoms in midlife are associated with the subsequent onset of cardiometabolic health problems.
The study sample comprised 787 male twin veterans with polygenic risk score data who participated in the Harvard Twin Study of Substance Abuse (‘baseline’) and the longitudinal Vietnam Era Twin Study of Aging (‘follow-up’). Depression symptoms were assessed at baseline [mean age 41.42 years (s.d. = 2.34)] using the Diagnostic Interview Schedule, Version III, Revised. The onset of eight cardiometabolic conditions (atrial fibrillation, diabetes, erectile dysfunction, hypercholesterolemia, hypertension, myocardial infarction, sleep apnea, and stroke) was assessed via self-reported doctor diagnosis at follow-up [mean age 67.59 years (s.d. = 2.41)].
Total depression symptoms were longitudinally associated with incident diabetes (OR 1.29, 95% CI 1.07–1.57), erectile dysfunction (OR 1.32, 95% CI 1.10–1.59), hypercholesterolemia (OR 1.26, 95% CI 1.04–1.53), and sleep apnea (OR 1.40, 95% CI 1.13–1.74) over 27 years after controlling for age, alcohol consumption, smoking, body mass index, C-reactive protein, and polygenic risk for specific health conditions. In sensitivity analyses that excluded somatic depression symptoms, only the association with sleep apnea remained significant (OR 1.32, 95% CI 1.09–1.60).
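For readers unpacking the reported statistics: each odds ratio and its 95% CI correspond to an exponentiated logistic-regression coefficient, so the implied standard error and Wald z statistic can be recovered directly from the published interval. A small sketch using the figures quoted above:

```python
import math

# Reported odds ratios and 95% CIs from the abstract; each comes from
# exponentiating a logistic-regression coefficient and its interval.
results = {
    "diabetes":             (1.29, 1.07, 1.57),
    "erectile dysfunction": (1.32, 1.10, 1.59),
    "hypercholesterolemia": (1.26, 1.04, 1.53),
    "sleep apnea":          (1.40, 1.13, 1.74),
}

for condition, (odds_ratio, lo, hi) in results.items():
    beta = math.log(odds_ratio)                       # log-odds coefficient
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # implied standard error
    z = beta / se                                     # Wald z statistic
    print(f"{condition}: beta={beta:.3f}, SE={se:.3f}, z={z:.2f}")
```

Because every interval's lower bound exceeds 1, each implied z exceeds 1.96, consistent with the associations being reported as significant.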
A history of depression symptoms by early midlife is associated with an elevated risk for subsequent development of several self-reported health conditions. When isolated, non-somatic depression symptoms are associated with incident self-reported sleep apnea. Depression symptom history may be a predictor or marker of cardiometabolic risk over decades.
Current COVID-19 guidelines recommend symptom-based screening and regular nasopharyngeal (NP) testing for healthcare personnel in high-risk settings. We sought to estimate case detection percentages with various routine NP and saliva testing frequencies.
Simulation modelling study.
We constructed a sensitivity function based on the average infectiousness profile of symptomatic COVID-19 cases to determine the probability of being identified at the time of testing. This function was fitted to reported data on the percent positivity of symptomatic COVID-19 patients using NP testing. We then simulated a routine testing program with different NP and saliva testing frequencies to determine case detection percentages during the infectious period, as well as the pre-symptomatic stage.
Routine bi-weekly NP testing (once every two weeks) identified an average of 90.7% (SD: 0.18) of cases during the infectious period and 19.7% (SD: 0.98) during the pre-symptomatic stage. With weekly NP testing, the corresponding case detection percentages were 95.9% (SD: 0.18) and 32.9% (SD: 1.23), respectively. A 5-day saliva testing schedule had a case detection percentage similar to that of weekly NP testing during the infectious period, but identified about 10% more cases (mean: 42.5%; SD: 1.10) during the pre-symptomatic stage.
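The testing-frequency comparison can be sketched with a toy simulation. The sensitivity profile below is an illustrative assumption (the study fitted its curve to reported percent-positivity data), so the exact percentages will differ from those reported, but the ordering across testing frequencies holds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (illustrative) test-sensitivity profile by day since infection:
# rises through a ~3-day pre-symptomatic stage, peaks near symptom onset,
# then declines over the remaining infectious period.
days = np.arange(0, 14)
sensitivity = np.clip(np.interp(days, [0, 3, 5, 13], [0.0, 0.75, 0.9, 0.1]), 0, 1)

def detection_fraction(test_interval, n_sim=10_000):
    """Fraction of simulated cases detected by at least one routine test."""
    detected = 0
    for _ in range(n_sim):
        offset = rng.integers(0, test_interval)        # random phase of the schedule
        test_days = np.arange(offset, len(days), test_interval)
        p_miss = np.prod(1.0 - sensitivity[test_days])  # independent test results
        detected += rng.random() > p_miss
    return detected / n_sim

for interval, label in [(14, "bi-weekly"), (7, "weekly"), (5, "every 5 days")]:
    print(f"{label}: {detection_fraction(interval):.1%} of cases detected")
```

More frequent testing places at least one test closer to the high-sensitivity window, which is why the 5-day schedule detects the most cases.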
Our findings highlight the utility of routine non-invasive saliva testing for frontline healthcare workers to protect vulnerable patient populations. A 5-day saliva testing schedule should be considered to help identify silent infections and prevent outbreaks in nursing homes and healthcare facilities.
The vacuum-exhausted isolation locker (VEIL) provides a safety barrier during the care of COVID-19 patients. The VEIL is a 175-L enclosure with exhaust ports to continuously extract air through viral particle filters connected to hospital suction. Our experiments show that the VEIL contains and exhausts exhaled aerosols and droplets.
In recent years, a variety of efforts have been made in political science to enable, encourage, or require scholars to be more open and explicit about the bases of their empirical claims and, in turn, make those claims more readily evaluable by others. While qualitative scholars have long taken an interest in making their research open, reflexive, and systematic, the recent push for overarching transparency norms and requirements has provoked serious concern within qualitative research communities and raised fundamental questions about the meaning, value, costs, and intellectual relevance of transparency for qualitative inquiry. In this Perspectives Reflection, we crystallize the central findings of a three-year deliberative process—the Qualitative Transparency Deliberations (QTD)—involving hundreds of political scientists in a broad discussion of these issues. Following an overview of the process and the key insights that emerged, we present summaries of the QTD Working Groups’ final reports. Drawing on a series of public, online conversations that unfolded at www.qualtd.net, the reports unpack transparency’s promise, practicalities, risks, and limitations in relation to different qualitative methodologies, forms of evidence, and research contexts. Taken as a whole, these reports—the full versions of which can be found in the Supplementary Materials—offer practical guidance to scholars designing and implementing qualitative research, and to editors, reviewers, and funders seeking to develop criteria of evaluation that are appropriate—as understood by relevant research communities—to the forms of inquiry being assessed. We dedicate this Reflection to the memory of our coauthor and QTD working group leader Kendra Koivu.
Recently, artificial intelligence-powered devices have been put forward as potentially powerful tools for the improvement of mental healthcare. An important question is how these devices impact the physician–patient interaction.
Aifred is an artificial intelligence-powered clinical decision support system (CDSS) for the treatment of major depression. Here, we explore the use of a simulation centre environment in evaluating the usability of Aifred, particularly its impact on the physician–patient interaction.
Twenty psychiatry and family medicine attending staff and residents were recruited to complete a 2.5-h study at a clinical interaction simulation centre with standardised patients. Each physician had the option of using the CDSS to inform their treatment choice in three 10-min clinical scenarios with standardised patients portraying mild, moderate and severe episodes of major depression. Feasibility and acceptability data were collected through self-report questionnaires, scenario observations, interviews and standardised patient feedback.
All 20 participants completed the study. Initial results indicate that the tool was acceptable to clinicians and feasible for use during clinical encounters. Clinicians indicated a willingness to use the tool in real clinical practice, a significant degree of trust in the system's predictions to assist with treatment selection, and reported that the tool helped increase patient understanding of and trust in treatment. The simulation environment allowed for the evaluation of the tool's impact on the physician–patient interaction.
The simulation centre allowed for direct observations of clinician use and impact of the tool on the clinician–patient interaction before clinical studies. It may therefore offer a useful and important environment in the early testing of new technological tools. The present results will inform further tool development and clinician training materials.
We conjecture a Verlinde type formula for the moduli space of Higgs sheaves on a surface with a holomorphic 2-form. The conjecture specializes to a Verlinde formula for the moduli space of sheaves. Our formula interpolates between K-theoretic Donaldson invariants studied by Göttsche and Nakajima-Yoshioka and K-theoretic Vafa-Witten invariants introduced by Thomas and also studied by Göttsche and Kool. We verify our conjectures in many examples (for example, on K3 surfaces).
Gaza City, with a population of over half a million residents, is the largest urban center and de-facto capital of the Gaza Strip, which itself has a total population of over 1.8 million. As of 2018, it is estimated that at least 1.3 million of the residents of the Gaza Strip are Palestinian refugees from other areas in historic Palestine, having fled to Gaza after the creation of the State of Israel in 1948. These refugee communities originally come from the many historic rural areas that surrounded the Gaza Strip, smaller towns and cities along the Mediterranean coast, as well as areas further afield such as Jaffa, 69 km north of Gaza on the coast, and Bir il-Sab‘, the largest city in the Nagab desert region (see Figure 1). Linguistically, many of these refugee communities are of Palestinian Arabic dialect backgrounds, although varieties of Arabic spoken in the Nagab are classified as originating in the Hijaz area of what is today Saudi Arabia (see Shahin 2007 on Palestinian Arabic and Henkin 2010 on Nagab Arabic).
Bovine respiratory disease (BRD) is the leading natural cause of death in US beef and dairy cattle, causing the annual loss of more than 1 million animals and financial losses in excess of $700 million. The multiple etiologies of BRD and its complex web of risk factors necessitate a herd-specific intervention plan for its prevention and control on dairies. Hence, a risk assessment is an important tool that producers and veterinarians can use for a comprehensive assessment of the management and host factors that predispose calves to BRD. The current study describes the steps taken to develop the first BRD risk assessment tool and its components, namely the BRD risk factor questionnaire, the BRD scoring system, and a herd-specific BRD control and prevention plan. The risk factor questionnaire was designed to ask about aspects of calf-rearing, including management practices that affect calf health generally and BRD specifically. The risk scores associated with each risk factor investigated in the questionnaire were estimated based on data from two observational studies. Producers can also estimate the prevalence of BRD in their calf herds using a smartphone or tablet application that facilitates selection of a true random sample of calves for scoring with the California BRD scoring system. Based on the risk factors identified, producers and herd veterinarians can then decide which management changes are needed to mitigate the calf herd's risk for BRD. A follow-up risk assessment is recommended to monitor BRD prevalence, conducted after enough time has passed for a new cohort of calves to be exposed to the management changes introduced in response to the initial assessment.
To describe epidemiologic and genomic characteristics of a severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) outbreak in a large skilled nursing facility (SNF), and the strategies that controlled transmission.
Design, Setting, and Participants:
Cohort study during March 22–May 4, 2020 of all staff and residents at a 780-bed SNF in San Francisco, California.
Contact tracing and symptom screening guided targeted testing of staff and residents; respiratory specimens were also collected through serial point prevalence surveys (PPS) in units with confirmed cases. Cases were confirmed by real-time reverse transcription–polymerase chain reaction testing for SARS-CoV-2; whole genome sequencing (WGS) characterized viral isolate lineages and relatedness. Infection prevention and control (IPC) interventions included restricting from work any staff who had close contact with a confirmed case; restricting movements between units; implementing surgical face masking facility-wide; and recommending PPE (isolation gown, gloves, N95 respirator, and eye protection) for clinical interactions in units with confirmed cases.
Of 725 staff and residents tested through targeted testing and serial PPS, 21 (3%) were SARS-CoV-2 positive: 16 (76%) staff and 5 (24%) residents. Fifteen (71%) were linked to a single unit. Targeted testing identified 17 (81%) cases; PPS identified 4 (19%). Most cases (71%) were identified prior to IPC intervention. WGS was performed on SARS-CoV-2 isolates from four staff and four residents; five were of Santa Clara County lineage and the other three were of distinct lineages.
Early implementation of targeted testing, serial PPS, and multimodal IPC interventions limited SARS-CoV-2 transmission within the SNF.
This Position Paper from the Academy of Nutrition Sciences is the first in a series describing the nature of the scientific evidence and frameworks that underpin nutrition recommendations for health. This first paper focuses on evidence underpinning dietary recommendations for prevention of non-communicable diseases. It considers methodological advances made in nutritional epidemiology and frameworks used by expert groups to support objective, rigorous and transparent translation of the evidence into dietary recommendations. The flexibility of these processes allows updating of recommendations as new evidence becomes available. For cardiovascular disease (CVD) and some cancers, the paper highlights the long-term consistency of a number of recommendations. The innate challenges in this complex area of science include those relating to dietary assessment, misreporting and the confounding of dietary associations due to changes in exposures over time. A large body of experimental data is available that has the potential to support epidemiological findings, but many of the studies have not been designed to allow their extrapolation to dietary recommendations for humans. Systematic criteria that would allow objective selection of these data based on rigour and relevance to human nutrition would significantly add to the translational value of this area of nutrition science. The Academy makes three recommendations: (i) the development of methodologies and criteria for selection of relevant experimental data, (ii) further development of innovative approaches for measuring human dietary intake and reducing confounding in long-term cohort studies and (iii) retention of national nutrition surveillance programmes needed for extrapolating global research findings to UK populations.
The impacts of the COVID-19 pandemic extend to global biodiversity and its conservation. Although short-term beneficial or adverse impacts on biodiversity have been widely discussed, there is less attention to the likely political and economic responses to the crisis and their implications for conservation. Here we describe four possible alternative future policy responses: (1) restoration of the previous economy, (2) removal of obstacles to economic growth, (3) green recovery and (4) transformative economic reconstruction. Each alternative offers opportunities and risks for conservation. They differ in the agents they emphasize to mobilize change (e.g. markets or states) and in the extent to which they prioritize or downplay the protection of nature. We analyse the advantages and disadvantages of these four options from a conservation perspective. We argue that the choice of post-COVID-19 recovery strategy has huge significance for the future of biodiversity, and that conservationists of all persuasions must not shrink from engagement in the debates to come.
Despite advances in endovascular interventions, including the introduction of drug-eluting stents (DES), high target lesion revascularization (TLR) rates still burden the treatment of symptomatic lower-limb peripheral arterial disease (PAD). Eluvia™, a novel, sustained-release, paclitaxel-eluting DES, was shown to further reduce TLR rates when compared with the paclitaxel-coated Zilver® PTX® stent in the IMPERIAL randomized controlled trial. This evaluation estimated the cost-effectiveness of Eluvia compared with Zilver PTX in Australia, based on 12-month clinical outcomes from the IMPERIAL trial.
A state-transition, decision-analytic model with a 12-month time horizon was developed from an Australian public healthcare system perspective. Cost parameters were obtained from the Australian National Hospital Cost Data Collection Cost Report (2016–17). All costs were captured in Australian dollars (AUD), where AUD 1 = USD 0.69 (June 2020). Complete sets of clinical parameters (primary patency loss, TLR, amputation, and death) and cost parameters from their respective distributions were bootstrapped in samples of 1,000 patients, for each intervention arm of the model. One-way and probabilistic sensitivity analyses were performed.
At 12 months, modeled TLR rates were 4.5 percent for Eluvia and 8.9 percent for Zilver PTX, and mean total direct costs were AUD 6,537 [USD 4,511] and AUD 6,908 [USD 4,767], respectively (average per-patient savings with Eluvia: AUD 371 [USD 256] in the overall cohort; AUD 625 [USD 431] in the diabetic cohort). In probabilistic sensitivity analyses, Eluvia was cost-effective relative to Zilver PTX in 92.0 percent of all simulations at a threshold of AUD 10,000 per TLR avoided. Eluvia was more effective and less costly (dominant) than Zilver PTX in 76.0 percent of simulations.
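The probabilistic sensitivity analysis logic can be sketched as follows. The parameter distributions here are hypothetical draws loosely centred on the abstract's point estimates, not the study's actual bootstrap samples, so the resulting percentages are only approximately comparable to those reported:

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 1_000

# Hypothetical bootstrap draws (costs in AUD, TLR rates as proportions);
# the real model drew full clinical and cost parameter sets from their
# fitted distributions for each arm.
cost_eluvia = rng.normal(6537, 400, n_draws)
cost_zilver = rng.normal(6908, 400, n_draws)
tlr_eluvia = rng.normal(0.045, 0.010, n_draws)
tlr_zilver = rng.normal(0.089, 0.015, n_draws)

delta_cost = cost_eluvia - cost_zilver    # incremental cost (negative = savings)
delta_effect = tlr_zilver - tlr_eluvia    # TLRs avoided per patient

threshold = 10_000  # willingness to pay per TLR avoided, in AUD
# Net monetary benefit: cost-effective when the value of TLRs avoided
# exceeds any extra cost of the new stent.
cost_effective = (threshold * delta_effect - delta_cost) > 0
dominant = (delta_cost < 0) & (delta_effect > 0)  # cheaper AND more effective

print(f"Cost-effective at AUD {threshold:,}/TLR avoided: {cost_effective.mean():.0%}")
print(f"Dominant (less costly, more effective): {dominant.mean():.0%}")
```

Dominance is the stricter criterion (a dominant draw is always cost-effective at any positive threshold), which is why the reported dominance percentage (76.0) is lower than the cost-effectiveness percentage (92.0).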
In the first year after the intervention, Eluvia was more effective and less costly than Zilver PTX, making Eluvia the dominant treatment strategy for treatment of symptomatic lower-limb PAD, from an Australian public healthcare system perspective. These findings should be considered when formulating policy and practice guidelines in the context of priority setting and making evidence-based resource allocation decisions for treatment of PAD in Australia.
This chapter comprises the following sections: names, taxonomy, subspecies and distribution, descriptive notes, habitat, movements and home range, activity patterns, feeding ecology, reproduction and growth, behavior, parasites and diseases, status in the wild, and status in captivity.
Microelectrode recordings (MERs) are used during deep brain stimulation (DBS) surgery to optimize patient outcomes and provide a unique method of collecting data on neurological conditions. However, MERs can be affected by anesthetics such as dexmedetomidine (DEX). Little is known about the effects of DEX on the globus pallidus interna (GPi), a common target for DBS. The primary aim of this study was to investigate the hypothesis that DEX is associated with alterations in GPi MERs.
We conducted a retrospective analysis comparing MERs from patients with Parkinson’s disease (PD) and dystonia who underwent DBS electrode insertion in the GPi under DEX sedation with those from patients who underwent the same procedure without DEX (No DEX).
Firing rates for GPi neurons in the DEX group were lower (57.44 ± 2.04; mean ± SEM, n = 163 cells) than in the No DEX group (69.53 ± 2.06, n = 112 cells, P < 0.0001). Overall, DEX was associated with a greater proportion of GPi cells classified as firing in a bursty pattern compared with the No DEX group (29.41%, n = 153 vs 14.81%, n = 108, P = 0.008). This effect was present in both PD and dystonia patients. High doses of DEX were associated with lower firing rates than low doses.
Our results suggest that DEX is associated with a decrease in GPi firing rates and an increase in burstiness. These effects are similar between dystonia and PD patients. Lastly, the effects of DEX may differ between high and low doses.
The potential effectiveness of harvest weed seed control (HWSC) systems depends upon seed retention by the target weed species at crop maturity, which enables collection and processing of weed seed at crop harvest. However, seed retention likely is influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed shatter phenology in 13 economically important broadleaf weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 weeks after physiological maturity at multiple sites across 14 states in the southern, northern, and mid-Atlantic United States. Greater proportions of seeds were retained by weeds at southern latitudes, and shatter rates increased at northern latitudes. Amaranthus species seed shatter was low (0 to 2%), whereas shatter varied widely in common ragweed (Ambrosia artemisiifolia L.) (2 to 90%) over the weeks following soybean physiological maturity. Overall, the broadleaf species studied shattered less than 10% of their seeds by soybean harvest. Our results suggest that some of the broadleaf species with greater seed retention in the weeks following soybean physiological maturity may be good candidates for HWSC.
Telephone consultations have rapidly increased in the out-patient setting because of the coronavirus pandemic. A quality improvement project was implemented to improve patient satisfaction of telephone consultations in our unit.
This was a prospective complete-cycle project. Patient satisfaction questionnaires were sent to patients following telephone consultations in ENT clinics. Based on a literature review and initial results, clinicians were encouraged to follow a structured consultation format. A second questionnaire survey was conducted following its implementation.
One hundred patient questionnaires were collected across the two survey periods (April and June 2020). There was significant improvement between the two surveys in satisfaction scores (p = 0.026), along with a significantly increased preference for telephone consultations over face-to-face consultations (p = 0.021).
This study showed significant improvement in patient satisfaction and an increased telephone consultation preference through the use of a structured consultation model. The potential benefits in terms of infection control and impact on out-patient workload may see telephone consultations persist in the post-coronavirus era.
Existing peer-reviewed literature describing emergency medical technician (EMT) acquisition and transmission of 12-lead electrocardiograms (12L-ECGs), in the absence of a paramedic, is largely limited to feasibility studies.
The objective of this retrospective observational study was to describe the impact of EMT-acquired 12L-ECGs in Suffolk County, New York (USA), both in terms of the diagnostic quality of the transmitted 12L-ECGs and the number of prehospital percutaneous coronary intervention (PCI)-center notifications made as a result of transmitted 12L-ECGs demonstrating a ST-elevation myocardial infarction (STEMI).
A pre-existing database was queried for Emergency Medical Services (EMS) calls on which an EMT acquired a 12L-ECG from program initiation (January 2017) through December 31, 2019. Scanned copies of the 12L-ECGs were requested in order to be reviewed by a blinded emergency physician.
Of the 665 calls, 99 had no 12L-ECG available within the database. For 543 (96%) of the available 12L-ECGs, the quality was sufficient to diagnose the presence or absence of a STEMI. Eighteen notifications were made to PCI-centers for a concern of STEMI. The median times spent on scene and transporting to the hospital were 18 and 11 minutes, respectively. The median time from PCI-center notification to EMS arrival at the emergency department (ED) was seven minutes (IQR 5-14).
After a limited educational intervention, and when a cardiac monitor is available, EMTs are capable of acquiring a diagnostically useful 12L-ECG and transmitting it to a remote medical control physician for interpretation. This allows for prehospital PCI-center activation for a 12L-ECG concerning for STEMI when a paramedic is not available to care for the patient.