An engaging, comprehensive, richly illustrated textbook about the atmospheric general circulation, written by leading researchers in the field. The book elucidates the pervasive role of atmospheric dynamics in the Earth System, interprets the structure and evolution of atmospheric motions across a range of space and time scales in terms of fundamental theoretical principles, and includes relevant historical background and tutorials on research methodology. The book includes over 300 exercises and is accompanied by extensive online resources, including solutions manuals, an animations library, and an introduction to online visualization and analysis tools. It is suitable for advanced undergraduate and graduate courses in atmospheric sciences and geosciences, and as a reference for researchers.
This study uses luminescence and 14C accelerator mass spectrometry procedures to date relevant glaciofluvial and glacial deposits from the south-central and southeastern Pyrenees (Andorra–France–Spain). We distinguish two types of end-moraine complexes: (1) those in which at least a far-flung moraine exists beyond a frequently nested end-moraine complex (the most common) and (2) those in which a close-nested end moraine encompasses at least two glacial cycles. Both types formed within six distinctive glacial intervals: (1) A penultimate glacial cycle during Marine Oxygen Isotope Stage (MIS) 6 and older glaciofluvial terraces occurred beyond the range of the luminescence dating method. (2) An early glacial advance in MIS 5d (~97 −15/+19 ka) was followed by glacial retreat during MIS 5c (< 91 ± 9 ka). (3) The last maximum ice extent (LMIE) was in early MIS 4 (~74 ± 4.5 ka). (4) Unexpectedly, glaciers thinned during the second half of MIS 3 (~39 −6/+11 ka). (5) During the MIS 3–2 transition, glaciers subsequently fluctuated behind the LMIE limits. (6) The global last glacial maximum (LGM) started as early as ~26.6 ± 0.365 ka b2k, and the corresponding end moraines were built behind the LMIE limits or merged with it, forming close-nested moraines.
Wheat was first cultivated in the Fertile Crescent (South Western Asia) with a farming expansion that lasted from around 9000 BC to 4000 BC. Whilst humans have been exposed to wheat for around the last 10 000 years, humans have existed for more than 2·5 million years. Therefore, wheat (and thereby gluten) is a relatively new introduction to our diet! By the end of the 20th century, global wheat output had expanded fivefold, with a corresponding increase in the prevalence of gluten-related disorders. Coeliac disease (CD) is a state of heightened immunological responsiveness to ingested gluten in genetically susceptible individuals. CD now affects 1 % or more of all adults, for whom the management is a strict lifelong gluten-free diet (GFD). However, there is a growing body of evidence to show that a far greater proportion of individuals without CD are self-initiating a GFD. This includes individuals initiating a GFD as a lifestyle choice, people with irritable bowel-type symptoms and those diagnosed with non-coeliac gluten (or wheat) sensitivity. Despite a greater recognition of gluten-related disorders, gaps remain in our understanding of both their aetiology and management. This article explores the role of the gluten-free diet in gluten-related disorders, along with current uncertainties.
Patient- and proxy-reported outcomes (PROs) are an important indicator of healthcare quality and can be used to inform treatment. Despite the widespread use of PROs in adult cardiology, they are underutilised in paediatric cardiac care. This study describes a six-centre feasibility and pilot experience implementing PROs in the paediatric and young adult ventricular assist device population.
The Advanced Cardiac Therapies Improving Outcomes Network (ACTION) is a collaborative learning network comprised of 55 centres focused on improving clinical outcomes and the patient/family experience for children with heart failure and those supported by ventricular assist devices. The development of ACTION’s PRO programme via engagement with patient and parent stakeholders is described. Pilot feasibility, patient/parent and clinician feedback, and initial PRO findings of patients and families receiving paediatric ventricular assist support across six centres are detailed.
Thirty of the thirty-five eligible patients (85.7%) were enrolled in the PRO programme during the pilot study period. Clinicians and participating patients/parents reported positive experiences with the PRO pilot programme. The most common symptoms reported by patients/parents in the first month post-implant period included limitations in activities, dressing change distress, and post-operative pain. Poor sleep, dressing change distress, sadness, and fatigue were the most common symptoms endorsed >30 days post-implant. Parental sadness and worry were notable throughout the entirety of the post-implant experience.
This multi-centre, ACTION learning network-based PRO programme demonstrated initial success in this six-centre pilot study and yielded important next steps for larger-scale PRO collection, research, and clinical intervention.
Administration of antidepressant drugs – principally selective serotonin reuptake inhibitors (SSRIs) – may induce clinically significant ‘apathy’, which can adversely affect treatment outcomes. We aimed to review all relevant previous reports.
We performed a PubMed search of English-language studies, combining terms concerning psychopathology (e.g. apathy) and classes of antidepressants (e.g. SSRI).
Based on predefined inclusion (e.g. use of DSM/ICD diagnostic criteria) and exclusion (e.g. presence of a clinical condition that may induce apathy) criteria, 50 articles were eligible for review. Together, they suggest that administration of antidepressants – usually SSRIs – can induce an apathy syndrome or emotional blunting, i.e. a decrease in emotional responsiveness to circumstances that would have triggered intense mood reactions prior to pharmacotherapy. The reported prevalence of antidepressant-induced apathy ranges from 5.8% to 50%, and for SSRIs from 20% to 92%. Antidepressant-induced apathy emerges independently of diagnosis, age, and treatment outcome, and appears dose-dependent and reversible. The main treatment strategy is dose reduction, though some data suggest the usefulness of olanzapine, bupropion, agomelatine or amisulpride, or the methylphenidate-modafinil-olanzapine combination.
Antidepressant-induced apathy needs careful clinical attention. Further systematic research is needed to investigate the prevalence, course, etiology, and treatment of this important clinical condition.
Management of total anomalous pulmonary venous connection has been extensively studied to further improve outcomes. Our institution previously reported factors associated with mortality, recurrent obstruction, and reintervention. The purpose of this study was to revisit this cohort of patients and evaluate factors associated with reintervention and mortality in early and late follow-up.
A retrospective review at our institution identified 81 patients undergoing total anomalous pulmonary venous connection repair from January 2002 to January 2018. Demographic and operative variables were evaluated. Anastomotic reintervention (interventional or surgical) and/or mortality were primary endpoints.
Eighty-one patients met the study criteria. Follow-up ranged from 0 to 6,291 days (17.2 years), with a mean of 1,263 days (3.5 years). Surgical mortality was 16.1%, and the reintervention rate was 19.8%. Of the reinterventions performed, 80% occurred within 1.2 years, while 94% of deaths occurred within 4.1 months. Increasing cardiopulmonary bypass time (p = 0.0001) and the presence of obstruction at the time of surgery (p = 0.025) were predictors of mortality, while intracardiac total anomalous pulmonary venous connection type (p = 0.033) was protective. Risk of reintervention was higher with increasing cardiopulmonary bypass time (p = 0.015), single-ventricle anatomy (p = 0.02), and a post-repair gradient >2 mmHg on transesophageal echocardiogram (p = 0.009).
Evaluation of a larger cohort with longer follow-up demonstrated the relationship of anatomic complexity and symptoms at presentation to increased mortality risk after total anomalous pulmonary venous connection repair. The presence of a single ventricle or a post-operative confluence gradient >2 mmHg were risk factors for reintervention. These findings support those found in our initial study.
Legitimacy is a bulwark for courts; even when judges engage in controversial or disagreeable behavior, the public tends to acquiesce. Recent studies identify several threats to the legitimacy of courts, including polarization and attacks by political elites. This article contributes to the scholarly discourse by exploring a previously unconsidered threat: scandal, or allegations of personal misbehavior. We argue that scandals can undermine confidence in judges as virtuous arbiters and erode broad public support for the courts. Using survey experiments, we draw on real-world judicial controversies to evaluate the impact of scandal on specific support for judicial actors and their rulings and diffuse support for the judiciary. We demonstrate that scandals erode individual support but find no evidence that institutional support is diminished. These findings may ease normative concerns that isolated indiscretions by controversial jurists may deplete the vast “reservoir of goodwill” that is foundational to the courts.
Deciding whether or not eradication of an invasive species has been successful is one of the main dilemmas facing managers of eradication programmes. When the species is no longer being detected, a decision must be made about when to stop the eradication programme and declare success. In practice, this decision is usually based on ad hoc rules, which may be inefficient. Since surveillance undertaken to confirm species absence is imperfect, any declaration of eradication success must consider the risk and the consequences of being wrong. If surveillance is insufficient, then eradication may be falsely declared (a Type I error), whereas continuation of surveillance when eradication has already occurred wastes resources (a Type II error). We review the various methods that have been developed for quantifying these errors and incorporating them into the decision-making process. We conclude with an overview of future developments likely to improve the practice of determining invasive species eradication success.
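The Type I/Type II trade-off described above can be illustrated with a simple Bayesian stopping rule: each survey that fails to detect the species raises the posterior probability of absence, and surveillance stops once that probability clears a chosen threshold. This is only a minimal sketch of the general idea, not a method from the review; the prior, the per-survey detection probability, and the confidence threshold are all assumed values.

```python
def prob_eradicated(prior_present, p_detect, n_clear):
    """Posterior probability the species is absent after n_clear
    consecutive surveys with no detections (Bayes' rule).
    A survey detects the species with probability p_detect if present;
    false detections are assumed impossible."""
    miss = (1 - p_detect) ** n_clear          # P(no detections | still present)
    prior_absent = 1 - prior_present
    return prior_absent / (prior_absent + prior_present * miss)

def surveys_to_declare(prior_present, p_detect, threshold):
    """Smallest number of clear surveys before the posterior probability
    of absence exceeds the chosen confidence threshold."""
    n = 0
    while prob_eradicated(prior_present, p_detect, n) < threshold:
        n += 1
    return n

# Illustrative numbers (all assumed): 50% prior that the species persists,
# 30% chance each survey detects it if present, declare at 95% confidence.
print(surveys_to_declare(0.5, 0.3, 0.95))  # 9 clear surveys
```

Raising the threshold trades Type I risk (declaring too early) for Type II cost (surveying too long), which is exactly the tension the quantitative methods reviewed here are designed to balance.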
Transmission of bacteria between animals and humans in domestic households is increasingly recognized. We evaluated the presence of antimicrobial-resistant fecal bacteria in 8 dog-owner–dog pairs before and after the dog received amoxicillin-clavulanate. The study identified shared flora in the humans and dogs that were affected by antimicrobial administration.
Some charities are much more cost-effective than other charities, which means that they can save many more lives with the same amount of money. Yet most donations do not go to the most effective charities. Why is that? We hypothesized that part of the reason is that people underestimate how much more effective the most effective charities are compared with the average charity. Thus, they do not know how much more good they could do if they donated to the most effective charities. We studied this hypothesis using samples of the general population, students, experts, and effective altruists in five studies. We found that lay people estimated that among charities helping the global poor, the most effective charities are 1.5 times more effective than the average charity (Studies 1 and 2). Effective altruists, in contrast, estimated the difference to be a factor of 50 (Study 3), and experts estimated a factor of 100 (Study 4). We found that participants donated more to the most effective charity, and less to an average charity, when informed about the large difference in cost-effectiveness (Study 5). In conclusion, misconceptions about the differences in effectiveness between charities are likely one reason, among many, why people donate ineffectively.
Testing of asymptomatic patients for severe acute respiratory coronavirus virus 2 (SARS-CoV-2) (ie, “asymptomatic screening”) to attempt to reduce the risk of nosocomial transmission has been extensive and resource intensive, and such testing is of unclear benefit when added to other layers of infection prevention mitigation controls. In addition, the logistic challenges and costs related to screening program implementation, data noting the lack of substantial aerosol generation with elective controlled intubation, extubation, and other procedures, and the adverse patient and facility consequences of asymptomatic screening call into question the utility of this infection prevention intervention. Consequently, the Society for Healthcare Epidemiology of America (SHEA) recommends against routine universal use of asymptomatic screening for SARS-CoV-2 in healthcare facilities. Specifically, preprocedure asymptomatic screening is unlikely to provide incremental benefit in preventing SARS-CoV-2 transmission in the procedural and perioperative environment when other infection prevention strategies are in place, and it should not be considered a requirement for all patients. Admission screening may be beneficial during times of increased virus transmission in some settings where other layers of controls are limited (eg, behavioral health, congregate care, or shared patient rooms), but widespread routine use of admission asymptomatic screening is not recommended over strengthening other infection prevention controls. In this commentary, we outline the challenges surrounding the use of asymptomatic screening, including logistics and costs of implementing a screening program, and adverse patient and facility consequences. We review data pertaining to the lack of substantial aerosol generation during elective controlled intubation, extubation, and other procedures, and we provide guidance for when asymptomatic screening for SARS-CoV-2 may be considered in a limited scope.
Water is the primary carrier for herbicide applications. Spray water qualities such as pH, hardness, temperature, or turbidity can influence herbicide performance and may need to be amended for optimum weed control. Water quality factors can affect herbicide activity by reducing solubility, enhancing degradation in the spray tank, or forming herbicide-salt complexes with mineral cations, thereby reducing absorption, translocation, and subsequent weed control. The available literature suggests that the effect of water quality varies with herbicide chemistry and weed species. The efficacy of weak-acid herbicides such as glyphosate, glufosinate, clethodim, sethoxydim, bentazon, and 2,4-D is improved with acidic water pH; however, the efficacy of sulfonylurea herbicides is negatively impacted. Hard-water antagonism is more prevalent with weak-acid herbicides, and trivalent cations are the most problematic. Spray solution temperature between 18 and 44 C is optimum for some weak-acid herbicides; however, their efficacy can be reduced at relatively low (5 C) or high (56 C) water temperature. The effect of water turbidity is severe on cationic herbicides such as paraquat and diquat, and on herbicides with low soil mobility such as glyphosate. Although adjuvants are recommended to overcome the negative effect of spray water hardness or pH, the response has been inconsistent across herbicide chemistries and weed species. Moreover, information on the effect of spray water quality on various herbicide chemistries, weed species, and adjuvants is limited; therefore, it is difficult to develop guidelines for improving weed control efficacy. Further research is needed to determine the effects of spray water factors and to develop specific recommendations for improving herbicide efficacy on problematic weed species.
Objective: To determine antibiotic appropriateness based on the Loeb minimum criteria (LMC) in patients with and without altered mental status (AMS).
Design: Retrospective, quasi-experimental study assessing pooled data from 3 periods pertaining to the implementation of a urinary tract infection (UTI) management guideline.
Setting: Academic medical center in Lexington, Kentucky.
Patients: Adult patients aged ≥18 years with a collected urinalysis receiving antimicrobial therapy for a UTI indication.
Methods: Appropriateness of UTI management was assessed in patients prior to an institutional UTI guideline, after guideline introduction and education, and after implementation of a prospective audit-and-feedback stewardship intervention, during September–November of 2017, 2018, and 2019, respectively. Patient data were pooled and compared between patients noted to have AMS and those with classic UTI symptoms. The Loeb minimum criteria were used to determine whether UTI diagnosis and treatment were warranted.
Results: In total, 600 patients were included in the study. AMS was one of the most common indications for testing across the 3 periods (19%–30.5%). Among patients with AMS, 25 (16.7%) met the LMC, significantly fewer than the 151 patients (33.6%) without AMS (P < .001).
Conclusions: Patients with AMS are prescribed antibiotic therapy without symptoms indicative of UTI at a higher rate than those without AMS, according to the LMC. Further antimicrobial stewardship efforts should focus on prescriber education and development of clearly defined criteria for patients with and without AMS.
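The reported comparison of LMC-concordance rates (P < .001) is the kind of result a 2×2 chi-square test produces. The sketch below reproduces that arithmetic; the cell counts are inferred from the reported percentages (25/16.7% ≈ 150 AMS patients, 151/33.6% ≈ 450 non-AMS patients) and are an assumption, not figures taken directly from the study.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction)
    for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Rows: AMS vs no AMS; columns: met LMC vs did not.
# Counts reconstructed from the reported percentages (assumed).
chi2 = chi_square_2x2(25, 125, 151, 299)
print(round(chi2, 2))  # 15.48, above the 10.83 critical value for p < .001
```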
To determine the reliability of teleneuropsychological (TNP) compared to in-person assessments (IPA) in people with HIV (PWH) and without HIV (HIV−).
Participants included 80 PWH (Mage = 58.7, SDage = 11.0) and 23 HIV− (Mage = 61.9, SDage = 16.7). Participants completed two comprehensive neuropsychological IPAs before one TNP assessment conducted during the COVID-19 pandemic (March–December 2020). The neuropsychological tests included: Hopkins Verbal Learning Test-Revised (HVLT-R Total and Delayed Recall), Controlled Oral Word Association Test (COWAT; FAS-English or PMR-Spanish), Animal Fluency, Action (Verb) Fluency, Wechsler Adult Intelligence Scale 3rd Edition (WAIS-III) Symbol Search and Letter Number Sequencing, Stroop Color and Word Test, Paced Auditory Serial Addition Test (Channel 1), and Boston Naming Test. Total raw scores and sub-scores were used in analyses. In the total sample and by HIV status, test-retest reliability and performance-level differences were evaluated between the two consecutive IPAs (i.e., IPA1 and IPA2), and between mean in-person scores (IPA-M) and TNP.
There were statistically significant test-retest correlations between IPA1 and IPA2 (r or ρ = .603–.883, ps < .001), and between IPA-M and TNP (r or ρ = .622–.958, ps < .001). In the total sample, significantly lower test-retest scores were found between IPA-M and TNP on the COWAT (PMR), Stroop Color and Word Test, WAIS-III Letter Number Sequencing, and HVLT-R Total Recall (ps < .05). Results were similar in PWH only.
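The test-retest reliabilities reported above are ordinary correlations between two score series. A minimal hand-rolled Pearson r is sketched below; the score vectors are invented stand-ins for two administrations of one test, not data from this study.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical raw scores for five participants at IPA1 and IPA2.
ipa1 = [24, 28, 19, 31, 26]
ipa2 = [25, 27, 20, 30, 28]
print(round(pearson_r(ipa1, ipa2), 3))  # 0.962
```

Spearman's ρ (the other coefficient reported) applies the same formula to the rank-transformed scores and is preferred when scores are ordinal or non-normally distributed.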
This study demonstrates the reliability of TNP in PWH and HIV− individuals. TNP assessments are a promising way to improve access to traditional neuropsychological services and maintain ongoing clinical research studies during the COVID-19 pandemic.
The transition from residency to paediatric cardiology fellowship is challenging due to the new knowledge and technical skills required. Online learning can be an effective didactic modality that can be widely accessed by trainees. We sought to evaluate the effectiveness of a paediatric cardiology Fellowship Online Preparatory Course prior to the start of fellowship.
The Online Preparatory Course contained 18 online learning modules covering basic concepts in anatomy, auscultation, echocardiography, catheterisation, cardiovascular intensive care, electrophysiology, pulmonary hypertension, heart failure, and cardiac surgery. Each online learning module included an instructional video with pre- and post-video tests. Participants completed pre- and post-Online Preparatory Course knowledge-based exams and surveys. Pre- and post-Online Preparatory Course survey and knowledge-based examination results were compared via Wilcoxon signed-rank and paired t-tests.
151 incoming paediatric cardiology fellows from programmes across the USA participated in the 3 months prior to starting fellowship training between 2017 and 2019. There was significant improvement between pre- and post-video test scores for all 18 online learning modules. There was also significant improvement between pre- and post-Online Preparatory Course exam scores (PRE 43.6 ± 11% versus POST 60.3 ± 10%, p < 0.001). Comparing pre- and post-Online Preparatory Course surveys, there was a statistically significant improvement in the participants’ comfort level in 35 of 36 (97%) assessment areas. Nearly all participants (98%) agreed or strongly agreed that the Online Preparatory Course was a valuable learning experience and helped alleviate some anxieties (77% agreed or strongly agreed) related to starting fellowship.
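The pre/post exam comparison above rests on a paired t-test: each fellow serves as their own control, so the statistic is the mean of the per-person score differences divided by its standard error. A self-contained sketch, using invented pre/post scores rather than the study's data:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """t statistic for paired samples: mean within-pair difference
    divided by its standard error."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical exam scores (%) for five fellows before and after the course.
pre = [40, 45, 38, 50, 42]
post = [58, 62, 55, 66, 59]
print(round(paired_t(pre, post), 2))
```

Pairing removes between-person variability from the error term, which is why even a modest mean improvement can reach significance when individual gains are consistent.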
An Online Preparatory Course prior to starting fellowship can provide a foundation of knowledge, decrease anxiety, and serve as an effective educational springboard for paediatric cardiology fellows.