This study used repeated-measures data to identify developmental profiles of elevated risk for ADHD (i.e., six or more inattentive and/or hyperactive-impulsive symptoms), with an interest in the age at which ADHD risk first emerged. Risk factors that were measured across the first 3 years of life were used to predict profile membership. Participants included 1,173 children who were drawn from the Family Life Project, an ongoing longitudinal study of children's development in low-income, nonmetropolitan communities. Four heuristic profiles of ADHD risk were identified. Approximately two-thirds of children never exhibited elevated risk for ADHD. The remaining children were characterized by early childhood onset and persistent risk (5%), early childhood limited risk (10%), and middle childhood onset risk (19%). Pregnancy and delivery complications and harsh-intrusive caregiving behaviors operated as general risk factors for all ADHD profiles. Parental history of ADHD was uniquely predictive of early onset and persistent ADHD risk, and low primary caregiver education was uniquely predictive of early childhood limited ADHD risk. Results are discussed with respect to how changes to the age-of-onset criterion for ADHD in DSM-5 may affect etiological research and the need for developmental models of ADHD that inform ADHD symptom persistence and desistance.
Residual herbicides applied to summer cash crops have the potential to injure subsequent winter annual cover crops, yet little information is available to guide growers’ choices. Field studies were conducted in 2016 and 2017 in Blacksburg and Suffolk, Virginia, to determine carryover of 30 herbicides commonly used in corn, soybean, or cotton on wheat, barley, cereal rye, oats, annual ryegrass, forage radish, Austrian winter pea, crimson clover, hairy vetch, and rapeseed cover crops. Herbicides were applied to bare ground either 14 wk before cover crop planting for a PRE timing or 10 wk before planting for a POST timing. Visible injury was recorded 3 and 6 wk after planting (WAP), and cover crop biomass was collected 6 WAP. There were no differences observed in cover crop biomass among herbicide treatments, despite visible injury suggesting that some residual herbicides have the potential to affect cover crop establishment. Visible injury on grass cover crop species did not exceed 20% from any herbicide. Fomesafen resulted in the greatest injury recorded on forage radish, with greater than 50% injury in 1 site-year. Trifloxysulfuron and atrazine resulted in greater than 20% visible injury on forage radish. Trifloxysulfuron resulted in the greatest injury (30%) observed on crimson clover in 1 site-year. Prosulfuron and isoxaflutole significantly injured rapeseed (17% to 21%). Results indicate that commonly used residual herbicides applied in the previous cash crop growing season result in little injury on grass cover crop species, and that only a few residual herbicides could potentially affect the establishment of a forage radish, crimson clover, or rapeseed cover crop.
Three states and one county now allow Emergency Medical Services (EMS) providers to transport injured law enforcement K9s (LEK9s) as long as no human needs the ambulance at the time. Several other states either have pending legislation or are in discussions about this topic. As additional states ponder these laws, it is likely that the EMS transport of LEK9s will become legal in many states. This legislation, however, has exposed a significant gap: currently, there are no published protocols for the safe transport of LEK9s by EMS providers. Additionally, the transport destination for these LEK9s is unlikely to be programmed into vehicle Global Positioning Systems. The authors of this report convened a Joint Task Force on Working Dog Care, consisting of veterinarians, EMS directors, EMS physicians, and LEK9 handlers, who met to develop a protocol for LEK9s being transported to a veterinary facility. The protocol covers the logistics of getting the LEK9 into the ambulance (eg, when the handler is or is not available), appropriate restraint, and the importance of prior arrangements with a veterinary emergency facility. An LEK9 hand-off form and a Transport Policy Form are provided, downloadable, and customizable for each EMS provider. This protocol provides essential information on safety and transport logistics for injured LEK9s. The hope is that this protocol will help EMS providers streamline the transport of an injured LEK9 to an appropriate veterinary facility.
This document is a resource for Emergency Medical Services (EMS) treating an injured law enforcement K9 (LEK9) in the field and/or during transport by ambulance to a veterinary hospital. A Joint Task Force on Working Dog Care was created, which included veterinarians, EMS directors, EMS physicians, and canine handlers, who met to develop a treatment protocol for injured LEK9s. The protocol covers many major life-threatening injuries that LEK9s may sustain in the line of duty, and also discusses personnel safety and necessary equipment. This protocol may help train EMS providers to save the life of an injured LEK9.
We offer a friendly criticism of May's fantastic book on moral reasoning: It is overly charitable to the argument that moral disagreement undermines moral knowledge. To highlight the role that reasoning quality plays in moral judgments, we review literature that he did not mention showing that individual differences in intelligence and cognitive reflection explain much of moral disagreement. The burden is on skeptics of moral knowledge to show that moral disagreement arises from non-rational origins.
To identify potential participants for clinical trials, electronic health records (EHRs) are searched at potential sites. As an alternative, we investigated using medical devices that support real-time diagnostic decisions to identify patients for trial enrollment.
To project cohorts for a trial in acute coronary syndromes (ACS), we used electrocardiograph-based algorithms that identify ACS or ST-elevation myocardial infarction (STEMI) and prompt clinicians to offer patients trial enrollment. We searched six hospitals’ electrocardiograph systems for electrocardiograms (ECGs) meeting the planned trial’s enrollment criterion: ECGs with STEMI or > 75% probability of ACS by the acute cardiac ischemia time-insensitive predictive instrument (ACI-TIPI). We revised the ACI-TIPI regression to require only data available directly from the electrocardiograph (the e-ACI-TIPI), using the same data sets as the original ACI-TIPI (development set n = 3,453; test set n = 2,315). We also tested both instruments on data from emergency department electrocardiographs from across the US (n = 8,556). We then used the ACI-TIPI and e-ACI-TIPI to identify potential cohorts for the ACS trial and compared their performance to cohorts identified from EHR data at the hospitals.
Receiver-operating characteristic (ROC) curve areas on the test set were excellent, 0.89 for the ACI-TIPI and 0.84 for the e-ACI-TIPI, as was calibration. On the national electrocardiographic database, ROC areas were 0.78 and 0.69, respectively, with very good calibration. When tested for detection of patients with > 75% ACS probability, both electrocardiograph-based methods identified eligible patients well, and better than EHR-based searches did.
Using data from medical devices such as electrocardiographs may provide accurate projections of available cohorts for clinical trials.
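To make the evaluation described above concrete, the sketch below scores a synthetic test set against the trial's >75% ACS-probability enrollment criterion and computes the two metrics the abstract reports, ROC area and calibration. It assumes Python with scikit-learn; the predicted probabilities are placeholders, not output of the actual ACI-TIPI or e-ACI-TIPI regressions.

```python
# Illustrative evaluation of an electrocardiograph-based probability score against the
# >75% ACS-probability enrollment criterion. The probabilities are synthetic stand-ins.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)

# Synthetic test set: true ACS labels and model-predicted probabilities.
y_true = rng.integers(0, 2, size=2315)
y_prob = np.clip(0.5 * y_true + 0.25 + rng.normal(0, 0.15, size=2315), 0, 1)

# Discrimination: area under the ROC curve.
auc = roc_auc_score(y_true, y_prob)

# Calibration: observed event rate vs. mean predicted probability in 10 bins.
obs_rate, mean_pred = calibration_curve(y_true, y_prob, n_bins=10)

# Cohort projection: patients meeting the >75% probability enrollment criterion.
eligible = y_prob > 0.75
print(f"ROC AUC: {auc:.2f}; projected eligible patients: {eligible.sum()}")
for p, o in zip(mean_pred, obs_rate):
    print(f"mean predicted {p:.2f} -> observed {o:.2f}")
```

In practice the probabilities would come from the instrument running on the electrocardiograph itself, and the count of eligible ECGs would drive the cohort projection for each site.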
Seven half-day regional listening sessions were held between December 2016 and April 2017 with groups of diverse stakeholders on the issues and potential solutions for herbicide-resistance management. The objective of the listening sessions was to connect with stakeholders and hear their challenges and recommendations for addressing herbicide resistance. The coordinating team hired Strategic Conservation Solutions, LLC, to facilitate all the sessions. The facilitators and the coordinating team used in-person meetings, teleconferences, and email to communicate and coordinate the activities leading up to each regional listening session. The agenda was the same across all sessions and included small-group discussions followed by reporting to the full group for discussion. The planning process was the same across all the sessions, although the selection of venue, time of day, and stakeholder participants differed to accommodate the differences among regions. The listening-session format required a great deal of work and flexibility on the part of the coordinating team and regional coordinators. Overall, the participant evaluations from the sessions were positive, with participants expressing appreciation that they were asked for their thoughts on the subject of herbicide resistance. This paper details the methods and processes used to conduct these regional listening sessions and provides an assessment of the strengths and limitations of those processes.
Herbicide resistance is ‘wicked’ in nature; therefore, results of the many educational efforts to encourage diversification of weed control practices in the United States have been mixed. It is clear that we do not sufficiently understand the totality of the grassroots obstacles, concerns, challenges, and specific solutions needed for varied crop production systems. Weed management issues and solutions vary with such variables as management styles, regions, cropping systems, and available or affordable technologies. Therefore, to help the weed science community better understand the needs and ideas of those directly dealing with herbicide resistance, seven half-day regional listening sessions were held across the United States between December 2016 and April 2017 with groups of diverse stakeholders on the issues and potential solutions for herbicide resistance management. The major goals of the sessions were to gain an understanding of stakeholders and their goals and concerns related to herbicide resistance management, to become familiar with regional differences, and to identify decision maker needs to address herbicide resistance. The messages shared by listening-session participants could be summarized by six themes: we need new herbicides; there is no need for more regulation; there is a need for more education, especially for others who were not present; diversity is hard; the agricultural economy makes it difficult to make changes; and we are aware of herbicide resistance but are managing it. The authors concluded that more work is needed to bring a community-wide, interdisciplinary approach to understanding the complexity of managing weeds within the context of the whole farm operation and for communicating the need to address herbicide resistance.
A majority of transplanted organs come from donors after brain death (BD). Renal grafts from these donors have higher delayed graft function and lower long-term survival rates compared to living donors. We designed a novel porcine BD model to better delineate the incompletely understood inflammatory response to BD, hypothesizing that adhesion molecule pathways would be upregulated in BD.
Animals were anesthetized and instrumented with monitors and a balloon catheter, then randomized to control and BD groups. BD was induced by inflating the balloon catheter, and animals were maintained for 6 hours. RNA was extracted from the kidneys, and gene expression patterns were determined.
In total, 902 gene pairs were differentially expressed between groups. Eleven selected pathways were upregulated after BD, including cell adhesion molecules.
These results should be confirmed in human organ donors. Treatment strategies should target involved pathways and lessen the negative effects of BD on transplantable organs.
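As a rough illustration of the kind of group comparison summarized above, the sketch below runs a per-gene test between BD and control kidney samples and applies a false-discovery-rate correction. The expression matrix, group sizes, and thresholds are synthetic assumptions; the abstract does not state the platform or the statistical pipeline actually used in the study.

```python
# Minimal sketch of per-gene differential expression between brain-death (BD) and
# control kidneys, with Benjamini-Hochberg FDR control. All data are synthetic.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_genes = 5000

# Synthetic log-expression: rows are genes, columns are animals (4 BD, 4 control).
bd = rng.normal(0.0, 1.0, size=(n_genes, 4))
ctrl = rng.normal(0.0, 1.0, size=(n_genes, 4))
bd[:200] += 1.5  # simulate a block of genes upregulated after BD

# Welch's t-test per gene, then FDR adjustment across all genes.
_, pvals = stats.ttest_ind(bd, ctrl, axis=1, equal_var=False)
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

print(f"genes called differentially expressed at FDR 0.05: {reject.sum()}")
```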
Cardiomyopathy develops in >90% of Duchenne muscular dystrophy (DMD) patients by the second decade of life. We assessed the associations between DMD gene mutations, as well as latent transforming growth factor beta binding protein 4 (LTBP4) haplotypes, and age at onset of myocardial dysfunction in DMD. DMD patients with baseline normal left ventricular systolic function and genotyping between 2004 and 2013 were included. Patients were grouped in multiple ways: specific DMD mutation domains, true loss-of-function mutations (group A) versus possible residual gene expression (group B), and LTBP4 haplotype. Age at onset of myocardial dysfunction was defined as the age at the first echocardiogram with an ejection fraction <55% and/or shortening fraction <28%. Of 101 DMD patients, 40 developed cardiomyopathy. There was no difference in age at onset of myocardial dysfunction among DMD genotype mutation domains (13.7±4.8 versus 14.3±1.0 versus 14.3±2.9 versus 13.8±2.5 years, p=0.97), between groups A and B (14.4±2.8 versus 12.1±4.4 years, p=0.09), or among LTBP4 haplotypes (14.5±3.2 versus 13.1±3.2 versus 11.0±2.8 years, p=0.18). DMD gene mutations involving the hinge 3 region, actin-binding domain, and exons 45–49, as well as the LTBP4 IAAM haplotype, were not associated with age at onset of left ventricular dysfunction in DMD.
The Neotoma Paleoecology Database is a community-curated data resource that supports interdisciplinary global change research by enabling broad-scale studies of taxon and community diversity, distributions, and dynamics during the large environmental changes of the past. By consolidating many kinds of data into a common repository, Neotoma lowers costs of paleodata management, makes paleoecological data openly available, and offers a high-quality, curated resource. Neotoma’s distributed scientific governance model is flexible and scalable, with many open pathways for participation by new members, data contributors, stewards, and research communities. The Neotoma data model supports, or can be extended to support, any kind of paleoecological or paleoenvironmental data from sedimentary archives. Data additions to Neotoma are growing and now include >3.8 million observations, >17,000 datasets, and >9,200 sites. Dataset types currently include fossil pollen, vertebrates, diatoms, ostracodes, macroinvertebrates, plant macrofossils, insects, testate amoebae, geochronological data, and the recently added organic biomarkers, stable isotopes, and specimen-level data. Multiple avenues exist to obtain Neotoma data, including the Explorer map-based interface, an application programming interface, the neotoma R package, and digital object identifiers. As the volume and variety of scientific data grow, community-curated data resources such as Neotoma have become foundational infrastructure for big data science.
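For readers accessing Neotoma programmatically, a minimal Python sketch of an API query is shown below. The endpoint path, query parameter, and response structure are assumptions based on the public v2.0 web API and should be verified against the current Neotoma API documentation; the neotoma R package offers an equivalent, supported route.

```python
# Illustrative site search against the Neotoma web API. Endpoint, parameter name,
# and response keys are assumptions; consult the Neotoma API docs before use.
import requests

BASE = "https://api.neotomadb.org/v2.0"  # assumed API root

resp = requests.get(
    f"{BASE}/data/sites",
    params={"sitename": "%Devils Lake%"},  # parameter name assumed
    timeout=30,
)
resp.raise_for_status()
payload = resp.json()

for site in payload.get("data", []):  # "data" key assumed from typical responses
    print(site.get("siteid"), site.get("sitename"))
```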
Objectives: The aim of this study was to evaluate the reliability and validity of three computerized neurocognitive assessment tools (CNTs; i.e., ANAM, DANA, and ImPACT) for assessing mild traumatic brain injury (mTBI) in patients recruited through a level I trauma center emergency department (ED). Methods: mTBI (n=94) and matched trauma control (n=80) subjects recruited from a level I trauma center emergency department completed symptom and neurocognitive assessments within 72 hr of injury and at 15 and 45 days post-injury. Concussion symptoms were also assessed via phone at 8 days post-injury. Results: CNTs did not differentiate between groups at any time point (e.g., mean 72-hr Cohen’s d=−.16, .02, and .00 for ANAM, DANA, and ImPACT, respectively; negative values reflect greater impairment in the mTBI group). Roughly a quarter of stability coefficients were over .70 across measures and test–retest intervals in controls. In contrast, concussion symptom scores differentiated the mTBI and control groups acutely, with this effect size diminishing over time (72-hr and day 8, 15, and 45 Cohen’s d=−.78, −.60, −.49, and −.35, respectively). Conclusions: The CNTs evaluated, which were developed and are widely used to assess sport-related concussion, did not yield significant differences between patients with mTBI versus other injuries. Symptom scores better differentiated groups than CNTs, with effect sizes weaker than those reported in sport-related concussion studies. Nonspecific injury factors, and other characteristics common in ED settings, likely affect CNT performance across trauma patients as a whole and thereby diminish the validity of CNTs for assessing mTBI in this patient population. (JINS, 2017, 23, 293–303)
Limited data exist comparing the performance of computerized neurocognitive tests (CNTs) for assessing sport-related concussion. We evaluated the reliability and validity of three CNTs—ANAM, Axon Sports/Cogstate Sport, and ImPACT—in a common sample. High school and collegiate athletes completed two CNTs each at baseline. Concussed (n=165) and matched non-injured control (n=166) subjects repeated testing within 24 hr and at 8, 15, and 45 days post-injury. Roughly a quarter of each CNT’s indices had stability coefficients (mean test–retest interval=198 days) over .70. Group differences in performance were mostly moderate to large at 24 hr and small by day 8. The sensitivity of reliable change indices (RCIs) was best at 24 hr (67.8%, 60.3%, and 47.6% with one or more significant RCIs for ImPACT, Axon, and ANAM, respectively) but diminished to near the false positive rates thereafter. Across time, the CNTs’ sensitivities were highest in those athletes who became asymptomatic within 1 day before neurocognitive testing but were similar to the tests’ false positive rates when including athletes who became asymptomatic several days earlier. Test–retest reliability was similar among these three CNTs and below optimal standards for clinical use on many subtests. Analyses of group effect sizes, discrimination, and sensitivity and specificity suggested that the CNTs may add incrementally (beyond symptom scores) to the identification of clinical impairment within 24 hr of injury or within a short time period after symptom resolution but do not add significant value over symptom assessment later. The rapid clinical recovery course from concussion and modest stability probably jointly contribute to limited signal detection capabilities of neurocognitive tests outside a brief post-injury window. (JINS, 2016, 22, 24–37)
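Both concussion studies above rely on reliable change indices to flag post-injury decline against test–retest variability. The sketch below shows a classical Jacobson-Truax-style RCI under assumed reliability and score values; the proprietary CNTs evaluated in these studies may use different (e.g., regression-based) RCI formulas.

```python
# Minimal sketch of a classical reliable change index (RCI). All values are hypothetical.
import math

def reliable_change_index(pre, post, sd_baseline, retest_r):
    """RCI = (post - pre) / Sdiff, with Sdiff derived from baseline SD and reliability."""
    se_measurement = sd_baseline * math.sqrt(1.0 - retest_r)
    s_diff = math.sqrt(2.0) * se_measurement
    return (post - pre) / s_diff

# Example: a composite score dropping from 100 at baseline to 88 at 24 hr post-injury,
# with a normative SD of 10 and a test-retest reliability of 0.70.
rci = reliable_change_index(pre=100, post=88, sd_baseline=10, retest_r=0.70)
print(f"RCI = {rci:.2f}; reliable decline: {rci < -1.96}")
```

Lower test–retest reliability widens the difference band, which is one reason the modest stability coefficients reported above limit how sensitive RCI-based classification can be.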
Previous studies have reported similar recovery and improvement rates regardless of treatment duration among patients receiving National Health Service (NHS) primary care mental health psychological therapy.
To investigate whether this pattern would replicate and extend to other service sectors, including secondary care, university counselling, voluntary sector and workplace counselling.
We compared treatment duration with degree of improvement measured by the Clinical Outcomes in Routine Evaluation – Outcome Measure (CORE-OM) for 26 430 adult patients who scored above the clinical cut-off point at the start of treatment, attended 40 or fewer sessions and had planned endings.
Mean CORE-OM scores improved substantially (pre–post effect size 1.89); 60% of patients achieved reliable and clinically significant improvement (RCSI). Rates of RCSI and reliable improvement, and mean pre- to post-treatment change, were similar at all treatment durations examined. Patients seen in different service sectors showed modest variations around this pattern.
Results were consistent with the responsive regulation model, which suggests that in routine care participants tend to end therapy when gains reach a good-enough level.
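A brief sketch of the two outcome metrics reported above, the pre–post effect size and reliable and clinically significant improvement (RCSI), is given below. The CORE-OM cut-off, reliability, and score distributions used here are placeholder assumptions, not values from the study.

```python
# Sketch of a pre-post effect size and RCSI classification. All parameters are placeholders.
import numpy as np

def prepost_effect_size(pre, post):
    """Mean pre-to-post change scaled by the pre-treatment standard deviation."""
    return (pre.mean() - post.mean()) / pre.std(ddof=1)

def rcsi(pre, post, cutoff, sd_pre, retest_r):
    """Reliable improvement (Jacobson-Truax) plus ending below the clinical cut-off."""
    s_diff = np.sqrt(2.0) * sd_pre * np.sqrt(1.0 - retest_r)
    reliable_improvement = (pre - post) > 1.96 * s_diff
    crossed_cutoff = (pre >= cutoff) & (post < cutoff)
    return reliable_improvement & crossed_cutoff

rng = np.random.default_rng(2)
pre = rng.normal(1.8, 0.6, size=500)                            # hypothetical pre-treatment scores
post = np.clip(pre - rng.normal(0.9, 0.5, size=500), 0, None)   # hypothetical post-treatment scores

es = prepost_effect_size(pre, post)
rcsi_rate = rcsi(pre, post, cutoff=1.0, sd_pre=pre.std(ddof=1), retest_r=0.9).mean()
print(f"pre-post effect size = {es:.2f}; RCSI rate = {rcsi_rate:.0%}")
```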
Policymakers may wish to align healthcare payment and quality of care while minimizing unintended consequences, particularly for safety net hospitals.
To determine whether the 2008 Centers for Medicare and Medicaid Services Hospital-Acquired Conditions policy had a differential impact on targeted healthcare-associated infection rates in safety net compared with non–safety net hospitals.
Interrupted time-series design.
Nonfederal acute care hospitals that reported central line–associated bloodstream infection and ventilator-associated pneumonia rates to the Centers for Disease Control and Prevention’s National Healthcare Safety Network from July 1, 2007, through December 31, 2013.
We did not observe changes in the slope of targeted infection rates in the postpolicy period compared with the prepolicy period for either safety net (postpolicy vs prepolicy ratio, 0.96 [95% CI, 0.84–1.09]) or non–safety net (0.99 [0.90–1.10]) hospitals. Controlling for prepolicy secular trends, we did not detect a difference between safety net and non–safety net hospitals in the immediate change at the time of the policy (P for 2-way interaction, .87).
The Centers for Medicare and Medicaid Services Hospital-Acquired Conditions policy did not have an impact, either positive or negative, on already declining rates of central line–associated bloodstream infection in safety net or non–safety net hospitals. Continued evaluations of the broad impact of payment policies on safety net hospitals will remain important as the use of financial incentives and penalties continues to expand in the United States.
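As an illustration of the interrupted time-series comparison described above, the sketch below fits a segmented Poisson model to synthetic quarterly infection counts, with terms for the pre-policy trend, an immediate level change, a post-policy slope change, and interactions with safety-net status. It is a simplified stand-in: it omits the hospital-level clustering and covariate adjustment a full analysis would require, and all data, dates, and rates are invented.

```python
# Hedged sketch of a segmented (interrupted time-series) Poisson regression for
# central line-associated bloodstream infection (CLABSI) rates. All data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
quarters = np.arange(26)      # mid-2007 through 2013, quarterly
policy_start = 6              # assumed index of the quarter the policy took effect

rows = []
for safety_net in (0, 1):
    post = (quarters >= policy_start).astype(int)
    time_after = np.clip(quarters - policy_start, 0, None)
    log_rate = -6.0 - 0.02 * quarters - 0.01 * time_after   # declining baseline trend
    line_days = rng.integers(8000, 12000, size=quarters.size)
    clabsi = rng.poisson(np.exp(log_rate) * line_days)
    rows.append(pd.DataFrame({
        "quarter": quarters, "post": post, "time_after": time_after,
        "safety_net": safety_net, "clabsi": clabsi, "line_days": line_days,
    }))
data = pd.concat(rows, ignore_index=True)

# Segmented Poisson model with an offset for central-line days at risk.
model = smf.glm(
    "clabsi ~ quarter + post + time_after + safety_net"
    " + safety_net:quarter + safety_net:post + safety_net:time_after",
    data=data,
    family=sm.families.Poisson(),
    offset=np.log(data["line_days"]),
).fit()

# exp(coefficient on time_after) approximates the post- vs. pre-policy slope ratio;
# the safety_net interaction terms test whether changes differed for safety-net hospitals.
print(np.exp(model.params))
```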