A major obstacle in understanding and treating posttraumatic stress disorder (PTSD) is its clinical and neurobiological heterogeneity. To address this barrier, the field has become increasingly interested in identifying subtypes of PTSD based on dysfunction in neural networks alongside cognitive impairments that may underlie the development and maintenance of symptoms. The current study aimed to determine if subtypes of PTSD, based on normative-based cognitive dysfunction across multiple domains, have unique neural network signatures.
In a sample of 271 veterans (90% male) who completed both neuropsychological testing and resting-state fMRI, two complementary, whole-brain functional connectivity analyses explored the link between brain functioning, PTSD symptoms, and cognition.
At the network level, PTSD symptom severity was associated with reduced negative coupling between the limbic network (LN) and frontoparietal control network (FPCN), driven specifically by the dorsolateral prefrontal cortex and amygdala hubs of dysfunction. Further, this relationship was uniquely moderated by executive function (EF). Specifically, those with PTSD and impaired EF had the strongest marker of LN-FPCN dysregulation, while those with above-average EF did not exhibit PTSD-related dysregulation of these networks.
These results suggest that poor executive functioning, alongside LN-FPCN dysregulation, may represent a neurocognitive subtype of PTSD.
Although mania is the hallmark symptom of bipolar I disorder (BD-I), most patients initially present for treatment with depressive symptoms. Misdiagnosis of BD-I as major depressive disorder (MDD) is common, potentially resulting in poor outcomes and inappropriate antidepressant monotherapy. Screening patients with depressive symptoms is a practical strategy to help healthcare providers (HCPs) identify when additional assessment for BD-I is warranted. The new 6-item Rapid Mood Screener (RMS) is a pragmatic patient-reported BD-I screening tool that relies on easily understood terminology to screen for manic symptoms and other BD-I features in <2 minutes. The RMS was validated in an observational study in patients with clinically confirmed BD-I (n=67) or MDD (n=72). When 4 or more items were endorsed (“yes”), the sensitivity of the RMS for identifying patients with BD-I was 0.88 and specificity was 0.80; positive and negative predictive values were 0.80 and 0.88, respectively. To more thoroughly understand screening tool use among HCPs, a 10-minute survey was conducted.
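As a consistency check on the operating characteristics reported above, positive and negative predictive values follow from sensitivity, specificity, and sample prevalence via Bayes' rule. A minimal sketch (the function name is ours, not part of the RMS study; the prevalence is the validation sample's 67 BD-I patients out of 139 total):

```python
def predictive_values(sens, spec, prevalence):
    """Compute PPV and NPV from sensitivity, specificity, and prevalence (Bayes' rule)."""
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

# Reported RMS sensitivity/specificity and the validation-sample prevalence (67/139)
ppv, npv = predictive_values(0.88, 0.80, 67 / 139)
print(round(ppv, 2), round(npv, 2))  # → 0.8 0.88
```

The computed values match the reported PPV of 0.80 and NPV of 0.88, as expected for a sample with roughly balanced groups; in lower-prevalence clinical populations the PPV would be correspondingly lower.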
A nationwide sample of HCPs (N=200) was selected using multiple HCP panels; HCPs were asked to describe their opinions/current use of screening tools, assess the RMS, and evaluate the RMS versus the widely recognized Mood Disorder Questionnaire (MDQ). Results were reported by grouped specialties (primary care physicians, general nurse practitioners [NPs]/physician assistants [PAs], psychiatrists, and psychiatric NPs/PAs). Included HCPs were in practice <30 years, spent at least 75% of their time in clinical practice, saw at least 10 patients with depression per month, and diagnosed MDD or BD in at least 1 patient per month. Findings were reported using descriptive statistics; statistical significance was assessed at the 95% confidence level.
Among HCPs, 82% used a tool to screen for MDD, while 32% used a tool for BD. Screening tool attributes considered to be of the greatest value included sensitivity (68%), easy-to-answer questions (66%), specificity (65%), confidence in results (64%), and practicality (62%). Of HCPs familiar with screening tools, 70% thought the RMS was at least somewhat better than other screening tools. Most HCPs were aware of the MDQ (85%), but only 29% reported current use. Most HCPs (81%) preferred the RMS to the MDQ, and the RMS significantly outperformed the MDQ across valued attributes; 76% reported that they were likely to use the RMS to screen new patients with depressive symptoms. A total of 84% said the RMS would have a positive impact on their practice, with 46% saying they would screen more patients for bipolar disorder.
The RMS was viewed positively by HCPs who participated in a brief survey. A large percentage of respondents preferred the RMS over the MDQ and indicated that they would use it in their practice. Collectively, responses indicated that the RMS is likely to have a positive impact on screening behavior.
The rise of the radical Right over the last decade has created a situation that demands engagement with the intellectual origins, achievements, and changing worldviews of radical conservative forces. Yet, conservative thought seems to have no distinct place in the theoretical field that has structured debates within the discipline of IR since 1945. This article seeks to explain some of the reasons for this absence. In the first part, we argue that there was in fact a clear strand of radical conservative thought in the early years of the field's development and recover some of these forgotten positions. In the second part, we argue that the near disappearance of those ideas can be traced in part to a process of ‘conceptual innovation’ through which postwar realist thinkers sought to craft a ‘conservative liberalism’ that defined the emerging field's theoretical alternatives in ways that excluded radical right-wing positions. Recovering this history challenges some of IR's most enduring narratives about its development, identity, and commitments – particularly the continuing tendency to find its origins in a defining battle between realism and liberalism. It also draws attention to overlooked resources to reflect upon the challenge of the radical Right in contemporary world politics.
Social cognition has not previously been assessed in treatment-naive patients with chronic schizophrenia, in patients over 60 years of age, or in patients with less than 5 years of schooling.
We revised a commonly used measure of social cognition, the Reading the Mind in the Eyes Test (RMET), by expanding the instructions, using both self-completion and interviewer-completion versions (for illiterate respondents), and classifying each test administration as ‘successfully completed’ or ‘incomplete’. The revised instrument (RMET-CV-R) was administered to 233 treatment-naive patients with chronic schizophrenia (UT), 154 treated controls with chronic schizophrenia (TC), and 259 healthy controls (HC) from rural communities in China.
In bivariate and multivariate analyses, successful completion rates and RMET-CV-R scores (percent correct judgments about emotion exhibited in 70 presented slides) were highest in HC, intermediate in TC, and lowest in UT (adjusted completion rates, 97.0%, 72.4%, and 49.9%, respectively; adjusted RMET-CV-R scores, 45.4%, 38.5%, and 34.6%, respectively; all p < 0.02). Stratified analyses by method of administration (self-completed v. interviewer-completed) and by education and age (‘educated-younger’ v. ‘undereducated-older’) show the same relationship between groups (i.e. HC > TC > UT), though not all differences remain statistically significant.
We find poorer social cognition in treatment-naive than in treated patients with chronic schizophrenia. The discriminant validity of the RMET-CV-R in undereducated, older patients demonstrates that, by changing the method of test administration and carefully assessing respondents' ability to complete the task successfully, revised versions of the RMET can feasibly be administered to patients who might otherwise be considered ineligible due to education or age.
The objective of this chapter is to introduce the University of Kentucky IR4TD Lean Systems Program (LSP) and the concept of “True Lean,” as well as to discuss what we have observed to be critical challenges (derailers) to the successful implementation of Toyota Production System (TPS)-based principles within non-Toyota organizations. This learning stems from experience teaching, coaching, and facilitating lean implementation activities in a wide range of industries over the past twenty-five years. Participants in the LSP Lean Certification program have been sent by over 175 companies representing industries including healthcare, steel, glass, ceramics, textiles, automotive, railroads, aerospace, commercial aviation, fast food, and food processing, as well as government, education, and NGOs. This chapter shares data collected from our staff and clients in an effort to help understand the current condition of lean in industry today and the major challenges confronting successful lean implementations.
Ecosystem modeling, a pillar of the systems ecology paradigm (SEP), addresses questions such as: How much carbon and nitrogen are cycled within ecological sites, landscapes, or indeed the earth system? How are human activities modifying these flows? Modeling, when coupled with field and laboratory studies, represents the essence of the SEP in that models embody accumulated knowledge and generate hypotheses to test understanding of ecosystem processes and behavior. Initially, ecosystem models were used primarily to improve our understanding of how the biophysical aspects of ecosystems operate. However, current ecosystem models are widely used to make accurate predictions about how large-scale phenomena such as climate change and management practices impact ecosystem dynamics, and to assess the potential effects of these changes on economic activity and policy making. In sum, ecosystem models embedded in the SEP remain our best mechanism for integrating diverse types of knowledge about how the earth system functions and for making quantitative predictions that can be confronted with observations of reality. Modeling efforts discussed include the Century ecosystem model, the DayCent ecosystem model, the Grassland Ecosystem Model (ELM), food web models, the Savanna model, agent-based and coupled systems modeling, and Bayesian modeling.
The systems ecology paradigm (SEP) emerged in the late 1960s, at a time when societies throughout the world were beginning to recognize that our environment and natural resources were being threatened by their activities. Management practices in rangelands, forests, agricultural lands, wetlands, and waterways were inadequate to meet the challenges of deteriorating environments, many of which were caused by the practices themselves. Scientists recognized an immediate need to develop a knowledge base about how ecosystems function. That effort took nearly two decades (through the 1980s) and concluded with the acceptance that humans were components of ecosystems, not just controllers and manipulators of lands and waters. While ecosystem science was being developed, management options based on ecosystem science were shifting dramatically toward practices supporting sustainability, resilience, ecosystem services, biodiversity, and local to global interconnections of ecosystems. Emerging from the new knowledge about how ecosystems function and the application of the systems ecology approach was the collaboration of scientists, managers, decision-makers, and stakeholders locally and globally. Today’s concepts of ecosystem management and related ideas, such as sustainable agriculture, ecosystem health and restoration, consequences of and adaptation to climate change, and many other important local to global challenges, are a direct result of the SEP.
Fundamental knowledge about the processes that control the biophysical workings of ecosystems has expanded exponentially since the late 1960s. At that time, scientists had only primitive knowledge about C, N, P, S, and H2O cycles; plant, animal, and soil microbial interactions and dynamics; and land, atmosphere, and water interactions. With the advent of the systems ecology paradigm (SEP) and the explosion of technologies supporting field and laboratory research, scientists throughout the world were able to assemble the knowledge base known today as ecosystem science. This chapter describes, through the eyes of scientists associated with the Natural Resource Ecology Laboratory (NREL) at Colorado State University (CSU), the evolution of the SEP in discovering how biophysical systems at small scales (ecological sites, landscapes) function as systems. The NREL and CSU are epicenters of the development of ecosystem science. Later, that knowledge, including humans as components of ecosystems, was applied to small regions, regions, and the globe. Many research results that have formed the foundation for ecosystem science and the management of natural resources, terrestrial environments, and their waters are described in this chapter. Throughout are direct and implicit references to the vital collaborations with the global network of ecosystem scientists.
Emerging from the warehouse of knowledge about terrestrial ecosystem functioning and the application of the systems ecology paradigm, exemplified by the power of simulation modeling, tremendous strides have been made linking the interactions of land, atmosphere, and water locally to globally. Through integration of ecosystem, atmospheric, soil, and more recently social science interactions, plausible scenarios and even reasonable predictions are now possible about the outcomes of human activities. The applications of that knowledge to the effects of changing climates, human-caused nitrogen enrichment of ecosystems, and altered UV-B radiation represent challenges addressed in this chapter. The primary linkages addressed are through the C, N, S, and H2O cycles and UV-B radiation. Carbon dioxide exchanges between land and the atmosphere, N additions and losses to and from lands and waters, early studies of SO2 in grassland ecosystems, and the effects of UV-B radiation on ecosystems have been mainstays of the research described in this chapter. This research knowledge has been used in international and national climate assessments, for example, the IPCC, the US National Climate Assessment, and the Paris Climate Accord. Likewise, the knowledge has been used to develop concepts and technologies related to sustainable agriculture, C sequestration, and food security.
An increasing number of unexpectedly diverse benthic communities are being reported from microbially precipitated carbonate facies in shallow-marine platform settings after the end-Permian mass extinction. Ostracoda, which was one of the most diverse and abundant metazoan groups during this interval, recorded its greatest diversity and abundance associated with these facies. Previous studies, however, focused mainly on taxonomic diversity and, therefore, left room for discussion of paleoecological significance. Here, we apply a morphometric method (semilandmarks) to investigate morphological variance through time to better understand the ecological consequences of the end-Permian mass extinction and to examine the hypothesis that microbial mats played a key role in ostracod survival. Our results show that taxonomic diversity and morphological disparity were decoupled during the end-Permian extinction and that morphological disparity declined rapidly at the onset of the end-Permian extinction, even though the high diversity of ostracods initially survived in some places. The decoupled changes in taxonomic diversity and morphological disparity suggest that the latter is a more robust proxy for understanding the ecological impact of the extinction event, and the low morphological disparity of ostracod faunas is a consequence of sustained environmental stress or a delayed post-Permian radiation. Furthermore, the similar morphological disparity of ostracods between microbialite and non-microbialite facies indicates that microbial mats most likely represent a taphonomic window rather than a biological refuge during the end-Permian extinction interval.
During the Randomized Assessment of Rapid Endovascular Treatment (EVT) of Ischemic Stroke (ESCAPE) trial, patient-level micro-costing data were collected. We report a cost-effectiveness analysis of EVT, using ESCAPE trial data and Markov simulation, from a universal, single-payer system using a societal perspective over a patient’s lifetime.
Primary data collection alongside the ESCAPE trial provided a 3-month, trial-specific, non-model-based cost per quality-adjusted life year (QALY). A Markov model, using ongoing lifetime costs and life expectancy from the literature, was built to simulate the cost per QALY over a lifetime horizon. Health states were defined using modified Rankin Scale (mRS) scores. Uncertainty was explored using scenario analysis and probabilistic sensitivity analysis.
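The Markov cohort approach described above can be sketched in a few lines. Every input below (the mRS groupings, transition probabilities, costs, and utility weights) is a hypothetical placeholder chosen for illustration, not a parameter of the actual ESCAPE model:

```python
# Minimal Markov cohort sketch of a cost-utility model: three health states
# (two mRS bands plus death), yearly cycles, 3% discounting.
# All numbers are hypothetical illustrations, not ESCAPE inputs.
P = [  # hypothetical annual transition probabilities (rows sum to 1)
    [0.96, 0.02, 0.02],   # mRS 0-2: mostly stable
    [0.00, 0.90, 0.10],   # mRS 3-5: higher mortality
    [0.00, 0.00, 1.00],   # dead: absorbing state
]
COST = [2_000.0, 30_000.0, 0.0]   # hypothetical annual cost per state
UTILITY = [0.85, 0.35, 0.0]       # hypothetical utility weight per state

def simulate(start, years=20, disc=0.03):
    """Accumulate discounted lifetime cost and QALYs for a cohort."""
    dist, total_cost, qalys = list(start), 0.0, 0.0
    for t in range(years):
        d = (1 + disc) ** -t
        total_cost += d * sum(p * c for p, c in zip(dist, COST))
        qalys += d * sum(p * u for p, u in zip(dist, UTILITY))
        dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
    return total_cost, qalys

# A treatment that shifts more of the cohort into mRS 0-2 at baseline can
# dominate: it costs less and yields more QALYs, so incremental cost is negative.
c_evt, q_evt = simulate([0.53, 0.47, 0.0])
c_soc, q_soc = simulate([0.29, 0.71, 0.0])
icer = (c_evt - c_soc) / (q_evt - q_soc)   # negative numerator → dominance
```

This illustrates the mechanism behind the "EVT dominated the standard of care" finding reported in the results: when the treated cohort starts in better health states, the lifetime cost stream is lower while accumulated QALYs are higher.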
The 3-month, trial-based analysis resulted in a cost per QALY of $201,243 for EVT compared to the best standard of care. In the model-based analysis, using a societal perspective and a lifetime horizon, EVT dominated the standard of care: it was both more effective and less costly (incremental cost: −$91). When the time horizon was shortened to 1 year, EVT remained cost saving compared to the standard of care (∼$15,376 per QALY gained with EVT). However, if the estimate of clinical effectiveness were 4% lower than that demonstrated in ESCAPE, EVT would no longer be cost saving compared to the standard of care.
Results support the adoption of EVT as a treatment option for acute ischemic stroke, as the increase in costs associated with caring for EVT patients was recouped within the first year of stroke, and continued to provide cost savings over a patient’s lifetime.
Clarifying the relationship between depression symptoms and cardiometabolic and related health could clarify risk factors and treatment targets. The objective of this study was to assess whether depression symptoms in midlife are associated with the subsequent onset of cardiometabolic health problems.
The study sample comprised 787 male twin veterans with polygenic risk score data who participated in the Harvard Twin Study of Substance Abuse (‘baseline’) and the longitudinal Vietnam Era Twin Study of Aging (‘follow-up’). Depression symptoms were assessed at baseline [mean age 41.42 years (s.d. = 2.34)] using the Diagnostic Interview Schedule, Version III, Revised. The onset of eight cardiometabolic conditions (atrial fibrillation, diabetes, erectile dysfunction, hypercholesterolemia, hypertension, myocardial infarction, sleep apnea, and stroke) was assessed via self-reported doctor diagnosis at follow-up [mean age 67.59 years (s.d. = 2.41)].
Total depression symptoms were longitudinally associated with incident diabetes (OR 1.29, 95% CI 1.07–1.57), erectile dysfunction (OR 1.32, 95% CI 1.10–1.59), hypercholesterolemia (OR 1.26, 95% CI 1.04–1.53), and sleep apnea (OR 1.40, 95% CI 1.13–1.74) over 27 years after controlling for age, alcohol consumption, smoking, body mass index, C-reactive protein, and polygenic risk for specific health conditions. In sensitivity analyses that excluded somatic depression symptoms, only the association with sleep apnea remained significant (OR 1.32, 95% CI 1.09–1.60).
A history of depression symptoms by early midlife is associated with an elevated risk for subsequent development of several self-reported health conditions. When isolated, non-somatic depression symptoms are associated with incident self-reported sleep apnea. Depression symptom history may be a predictor or marker of cardiometabolic risk over decades.
The first demonstration of laser action in ruby was made in 1960 by T. H. Maiman of Hughes Research Laboratories, USA. Many laboratories worldwide began the search for lasers using different materials, operating at different wavelengths. In the UK, academia, industry and the central laboratories took up the challenge from the earliest days to develop these systems for a broad range of applications. This historical review looks at the contribution the UK has made to the advancement of the technology, the development of systems and components and their exploitation over the last 60 years.
Methiozolin is a new herbicide with an unknown mechanism of action (MOA) used to control annual bluegrass (Poa annua L.) in several warm- and cool-season turfgrasses. In the literature, methiozolin has been proposed to be a pigment inhibitor acting via inhibition of tyrosine aminotransferases (TATs) or a cellulose biosynthesis inhibitor (CBI). Here, exploratory research was conducted to characterize the herbicide symptomology and MOA of methiozolin. Arabidopsis (Arabidopsis thaliana L.) and P. annua exhibited similar levels of susceptibility to methiozolin, and arrest of meristematic growth was the most characteristic symptomology. For example, methiozolin inhibited A. thaliana root growth (GR50 8 nM) and shoot emergence (GR80 ˜50 nM), and apical meristem growth was completely arrested at rates greater than 500 nM. We concluded that methiozolin was neither a TAT inhibitor nor a CBI. Methiozolin had only a minor effect on chlorophyll and alpha-tocopherol content in treated seedlings (<500 nM), and supplements in the proposed TAT pathway could not lessen phytotoxicity. Examination of microscopic images of roots revealed that methiozolin-treated (100 nM) and untreated seedlings had similar root cell lengths; thus, methiozolin inhibits cell proliferation, not elongation, from meristematic tissue. We subsequently suspected methiozolin was an inhibitor of the mevalonic acid (MVA) pathway, because its herbicidal symptomologies were nearly indistinguishable from those caused by lovastatin. However, methiozolin did not inhibit phytosterol production, and MVA pathway metabolites did not rescue treated seedlings. Further experiments showed that methiozolin produced a physiological profile very similar, across a number of assays, to that of cinmethylin, a known inhibitor of fatty-acid synthesis through inhibition of thioesterases (FATs). Experiments with lesser duckweed (Lemna aequinoctialis Welw.; syn. Lemna paucicostata Hegelm.) showed that methiozolin also reduced fatty-acid content in Lemna, with a profile similar, but not identical, to that of cinmethylin. However, there was no difference in fatty-acid content between treated (1 µM) and untreated A. thaliana seedlings. Methiozolin also bound to both A. thaliana and L. aequinoctialis FATs in vitro. Modeling suggested that methiozolin and cinmethylin have comparable and overlapping FAT binding sites. While there was a discrepancy in the effect of methiozolin on fatty-acid content between L. aequinoctialis and A. thaliana, the overall evidence indicates that methiozolin is a FAT inhibitor acting in a similar manner to cinmethylin.
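To make the GR50/GR80 notation used above concrete: these are the doses producing 50% and 80% growth reduction, conventionally read off a fitted log-logistic dose-response curve. A small sketch, using the reported root-growth GR50 of 8 nM and a hypothetical slope parameter (the abstract reports GR values only, not fitted slopes):

```python
def log_logistic(dose, gr50, b):
    """Growth as % of untreated control under a two-parameter log-logistic model."""
    return 100.0 / (1.0 + (dose / gr50) ** b)

def grx(x, gr50, b):
    """Dose giving x% growth reduction, by inverting the model above."""
    return gr50 * (x / (100.0 - x)) ** (1.0 / b)

b = 1.5  # hypothetical slope; not reported in the abstract
print(round(log_logistic(8, 8, b)))  # → 50, by definition of GR50
print(round(grx(80, 8, b), 1))       # → 20.2, the dose for 80% reduction under this slope
```

Note that the GR80 implied by a given slope applies only within one endpoint's curve; the abstract's GR80 of ˜50 nM refers to shoot emergence, a different endpoint from the 8 nM root-growth GR50, so the two values should not be combined into a single fit.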
Potential effectiveness of harvest weed seed control (HWSC) systems depends upon seed shatter of the target weed species at crop maturity, enabling its collection and processing at crop harvest. However, seed retention likely is influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed-shatter phenology in 13 economically important broadleaf weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after physiological maturity at multiple sites spread across 14 states in the southern, northern, and mid-Atlantic United States. Greater proportions of seeds were retained by weeds in southern latitudes and shatter rate increased at northern latitudes. Amaranthus spp. seed shatter was low (0% to 2%), whereas shatter varied widely in common ragweed (Ambrosia artemisiifolia L.) (2% to 90%) over the weeks following soybean physiological maturity. Overall, the broadleaf species studied shattered less than 10% of their seeds by soybean harvest. Our results suggest that some of the broadleaf species with greater seed retention rates in the weeks following soybean physiological maturity may be good candidates for HWSC.
Recent studies suggest that close-range blast exposure (CBE), regardless of acute concussive symptoms, may have negative long-term effects on brain health and cognition; however, these effects are highly variable across individuals. One potential genetic risk factor that may impact recovery and explain the heterogeneity of blast injury’s long-term cognitive outcomes is the inheritance of an apolipoprotein (APOE) ε4 allele, a well-known genetic risk factor for Alzheimer’s disease. We hypothesized that APOE ε4 carrier status would moderate the impact of CBE on long-term cognitive outcomes.
To test this hypothesis, we examined 488 post-9/11 veterans who completed assessments of neuropsychological functioning, psychiatric diagnoses, history of blast exposure, military and non-military mild traumatic brain injuries (mTBIs), and available APOE genotypes. We separately examined the effects of CBE on attention, memory, and executive functioning in individuals with and without the APOE ε4 allele.
As predicted, we observed a differential impact of CBE status on cognition as a function of APOE ε4 status: ε4 carriers with CBE displayed significantly worse neuropsychological performance, specifically in the domain of memory. These results persisted after adjusting for clinical, demographic, and genetic factors and were not observed when examining other neurotrauma variables (i.e., lifetime or military mTBI, distant blast exposure), though these variables displayed similar trends.
These results suggest APOE ε4 carriers are more vulnerable to the impact of CBE on cognition and highlight the importance of considering genetic risk when studying cognitive effects of neurotrauma.
Prehospital use of lung ultrasound (LUS) by paramedics to guide diagnosis and treatment has expanded over the past several years. However, almost all of this education has occurred in a classroom or hospital setting, and no published reports describe prehospital use of LUS simulation software within an ambulance.
The objective of this study was to determine if various ambulance driving conditions (stationary, constant acceleration, serpentine, and start-stop) would impact paramedics’ abilities to perform LUS on a standardized patient (SP) using breath-holding to simulate lung pathology, or to perform LUS using ultrasound (US) simulation software. Primary endpoints included the participating paramedics’: (1) time to acquiring a satisfactory simulated LUS image; and (2) accuracy of image recognition and interpretation. Secondary endpoints for the breath-holding portion included: (1) the agreement between image interpretation by paramedics versus blinded expert reviewers; and (2) the quality of captured LUS images as determined by two blinded expert reviewers. Finally, a paramedic LUS training session was evaluated by comparing pre-test to post-test scores on a 25-item assessment requiring recognition and clinical interpretation of prerecorded LUS images.
Seventeen paramedics received a 45-minute LUS lecture. They then performed 25 LUS exams on both SPs and using simulation software, in each case looking for lung sliding, A and B lines, and seashore or barcode signs. Pre- and post-training, they completed a 25-question test consisting of still images and videos requiring pathology recognition and formulation of a clinical diagnosis. Sixteen paramedics performed the same exams in an ambulance during different driving conditions (stationary, constant acceleration, serpentines, and abrupt start-stops). Lung pathology was block randomized based on driving condition.
Paramedics demonstrated improved post-test scores compared to pre-test scores (P <.001). No significant difference existed across driving conditions for: time needed to obtain a simulated image; clinical interpretation of simulated LUS images; quality of saved images; or agreement of image interpretation between paramedics and blinded emergency physicians (EPs). Image acquisition time while parked was significantly greater than while the ambulance was driving in serpentines (Z = -2.898; P = .008). Technical challenges for both simulation techniques were noted.
Paramedics can correctly acquire and interpret simulated LUS images during different ambulance driving conditions. However, simulation techniques better adapted to this unique work environment are needed.
Seed shatter is an important weediness trait on which the efficacy of harvest weed seed control (HWSC) depends. The level of seed shatter in a species is likely influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed shatter of eight economically important grass weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after maturity at multiple sites spread across 11 states in the southern, northern, and mid-Atlantic United States. From soybean maturity to 4 wk after maturity, cumulative percent seed shatter was lowest in the southern U.S. regions and increased moving north through the states. At soybean maturity, the percent of seed shatter ranged from 1% to 70%. That range had shifted to 5% to 100% (mean: 42%) by 25 d after soybean maturity. There were considerable differences in seed-shatter onset and rate of progression between sites and years in some species that could impact their susceptibility to HWSC. Our results suggest that many summer annual grass species are likely not ideal candidates for HWSC, although HWSC could substantially reduce their seed output during certain years.