The National Neuropsychology Network (NNN) is a multicenter clinical research initiative funded by the National Institute of Mental Health (NIMH; R01 MH118514) to facilitate neuropsychology’s transition to contemporary psychometric assessment methods with resultant improvement in test validation and assessment efficiency.
The NNN includes four clinical research sites (Emory University; Medical College of Wisconsin; University of California, Los Angeles (UCLA); University of Florida) and Pearson Clinical Assessment. Pearson Q-interactive (Q-i) is used for data capture for Pearson-published tests; web-based data capture tools programmed by UCLA, which serves as the Coordinating Center, are employed for the remaining measures.
NNN is acquiring item-level data from 500–10,000 patients across 47 widely used neuropsychology (NP) tests and sharing these data via the NIMH Data Archive. Modern psychometric methods (e.g., item response theory) will specify the constructs measured by different tests and determine their positive/negative predictive power regarding diagnostic outcomes and relationships to other clinical, historical, and demographic factors. The Structured History Protocol for NP (SHiP-NP) helps standardize the acquisition of relevant history and self-report data.
NNN is a proof-of-principle collaboration: by addressing logistical challenges, NNN aims to engage other clinics to create a national and ultimately an international network. The mature NNN will provide mechanisms for data aggregation enabling shared analysis and collaborative research. NNN promises ultimately to enable robust diagnostic inferences about neuropsychological test patterns and to promote the validation of novel adaptive assessment strategies that will be more efficient, more precise, and more sensitive to clinical contexts and individual/cultural differences.
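To make the item response theory mentioned above concrete, the following is a minimal sketch of the two-parameter logistic (2PL) model, the simplest IRT variant that captures both item difficulty and discrimination; the item parameters and ability levels are hypothetical illustrations, not NNN data.

```python
import numpy as np

def irt_2pl(theta, a, b):
    """2PL item response model: probability that an examinee with
    ability theta answers correctly an item with discrimination a
    and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical three-item bank: discrimination (a) and difficulty (b).
a = np.array([1.2, 0.8, 1.5])
b = np.array([-0.5, 0.0, 1.0])

# Probability of a correct response at three ability levels.
for theta in (-1.0, 0.0, 1.0):
    print(theta, np.round(irt_2pl(theta, a, b), 3))
```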
Coronavirus disease 2019 (COVID-19) has migrated to regions that were initially spared, and it is likely that different populations are currently at risk for illness. Herein, we present our observations of the change in characteristics and resource use of COVID-19 patients over time in a national system of community hospitals to help inform those managing surge planning, operational management, and future policy decisions.
Objective: To determine risk factors for mortality among COVID-19 patients admitted to a system of community hospitals in the United States.
Design: Retrospective analysis of patient data collected from the routine care of COVID-19 patients.
Setting: System of >180 acute-care facilities in the United States.
Patients: All admitted patients with positive identification of COVID-19 and a documented discharge as of May 12, 2020.
Methods: Demographic characteristics, vital signs at admission, patient comorbidities, and recorded discharge disposition were determined for this population and used to construct a logistic regression estimating the odds of mortality, particularly for those patients characterized as not critically ill at admission.
Results: In total, 6,180 COVID-19+ patients were identified as of May 12, 2020. Most COVID-19+ patients (4,808, 77.8%) were admitted directly to a medical-surgical unit with no documented critical care or mechanical ventilation within 8 hours of admission. After adjusting for demographic characteristics, comorbidities, and vital signs at admission in this subgroup, the largest driver of the odds of mortality was patient age (OR, 1.07; 95% CI, 1.06–1.08; P < .001). Decreased oxygen saturation at admission was associated with increased odds of mortality (OR, 1.09; 95% CI, 1.06–1.12; P < .001), as was diabetes (OR, 1.57; 95% CI, 1.21–2.03; P < .001).
Conclusions: The identification of factors observable at admission that are associated with mortality in COVID-19 patients who are initially admitted to non-critical care units may help care providers, hospital epidemiologists, and hospital safety experts better plan for the care of these patients.
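As a hedged illustration of the modeling approach named in the Methods above, the sketch below fits a logistic regression of mortality on admission variables with statsmodels; the column names and randomly generated data are hypothetical stand-ins for the study's records, so the resulting odds ratios are meaningful only as a demonstration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical admission data standing in for the study's variables.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(20, 95, 500),
    "spo2": rng.normal(94, 4, 500),   # oxygen saturation at admission
    "diabetes": rng.integers(0, 2, 500),
    "died": rng.integers(0, 2, 500),  # discharge disposition: death
})

# Logistic regression of in-hospital mortality on admission factors.
X = sm.add_constant(df[["age", "spo2", "diabetes"]])
fit = sm.Logit(df["died"], X).fit(disp=False)

# Exponentiated coefficients are odds ratios, the quantity reported above.
print(np.exp(fit.params))
print(np.exp(fit.conf_int()))  # 95% CIs on the odds-ratio scale
```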
There has been limited evaluation of handover from emergency medical services (EMS) to the trauma team. We sought to characterize these handover practices to identify areas of improvement and determine if handover standardization might be beneficial for trauma team performance.
Methods: Data were prospectively collected over a nine-week period by a trained observer at a Canadian level one trauma centre. A randomized schedule was used to capture a representative breadth of handovers. Data collected included outcome measures, such as duration of handover, structure of the handover, and information shared; process measures, such as questions and interruptions from the trauma team; and perceptions of the handover from nurses, trauma team leaders, and EMS according to a bidirectional Likert scale.
Results: In total, 79 formal verbal handovers were observed. Information was often missing regarding airway (present in 22% of handovers), breathing (54%), medications (59%), and allergies (54%). Handover structure lacked consistency beyond the order of identification and mechanism of injury. Of all questions asked, 35% sought information that had already been given. The majority of handovers (61%) involved parallel conversations between team members while EMS was speaking. There was a statistically significant disparity between EMS providers' self-evaluation of their handovers and the quality perceived by nurses and trauma team leaders.
Conclusions: We have identified the need to standardize handover due to poor information content, a lack of structure and active listening, information repetition, and discordant expectations between team members. These data will guide the development of a co-constructed framework integrating the perspectives of all team members.
Even though sub-Saharan African women spend millions of person-hours per day fetching water and pounding grain, to date few studies have rigorously assessed the energy expenditure costs of such domestic activities. As a result, most analyses that consider head-hauling water or hand-pounding of grain with a mortar and pestle (pilão use) employ energy expenditure values derived from limited research. The current paper compares estimated energy expenditure values from heart rate monitors v. indirect calorimetry in order to understand some of the limitations of using such monitors to measure domestic activities.
Design: This confirmation study estimates the metabolic equivalent of task (MET) value for head-hauling water and hand-pounding grain using both indirect calorimetry and heart rate monitors under laboratory conditions.
Setting: The study was conducted in Nampula, Mozambique.
Participants: Forty university students in Nampula city who recurrently engaged in water-fetching activities.
Results: Including all participants, the mean MET value was 4·3 (sd 0·9) for head-hauling 20 litres of water (20·5 kg including container; 2·7 km/h, 0 % slope) and 3·7 (sd 1·2) for pilão use. Estimated energy expenditure predictions from a mixed model correlated with observed energy expenditure (r2 0·68, r 0·82). Re-estimating the model with the pilão use data excluded improved the fit substantially (r2 0·83, r 0·91).
Conclusions: The current study finds that heart rate monitors are suitable instruments for providing accurate quantification of energy expenditure for some domestic activities, such as head-hauling water, but are not appropriate for quantifying expenditures of other activities, such as hand-pounding grain.
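For readers unfamiliar with MET values, indirect calorimetry yields oxygen uptake, and the MET conversion is a single division by the conventional resting value of 3.5 mL O2/kg/min; the numbers below are hypothetical, chosen only to land near the head-hauling mean reported above.

```python
def met_from_vo2(vo2_ml_per_kg_min: float) -> float:
    """Convert oxygen uptake (from indirect calorimetry) to METs.
    By convention, 1 MET = 3.5 mL O2 per kg per min."""
    return vo2_ml_per_kg_min / 3.5

# Hypothetical example: a carrier using 15 mL O2/kg/min while
# head-hauling water is working at roughly 4.3 METs.
print(round(met_from_vo2(15.0), 1))  # 4.3
```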
All sperm accrue varying amounts of DNA damage during maturation and storage, a process that appears to be mediated through oxidative stress. The clinical significance of genetic damage in the male germ line depends upon severity and how that damage is distributed among the sperm population. In human reproduction, the embryo is capable of significant DNA repair, which occurs prior to the first cleavage event. However, when the magnitude of genomic damage reaches pathologic levels, reproductive outcomes begin to be affected. Evidence now exists linking excessive sperm DNA fragmentation with time to pregnancy for natural conception, pregnancy outcomes of intrauterine insemination and in vitro fertilization, and miscarriage rates when intracytoplasmic sperm injection is employed. This review will discuss the pathophysiology of sperm DNA damage, the studies linking it to impaired reproductive outcomes, and how clinicians may render treatment to optimize the chance of paternity for their patients.
Obstructive azoospermia (OA) is a common presenting condition of male infertility, resulting from either congenital or acquired blockage of the reproductive tract. Men facing a diagnosis of OA now have an array of treatment options, including definitive reconstruction and various forms of sperm retrieval. The optimum treatment decision for OA will depend on the goals, values, and expectations of the patient and his partner. In this review we will discuss the therapeutic approach to OA, stressing the requirement of a clear and thoughtful plan for staged intervention. Any proposed treatment strategy should optimize the chances of paternity while minimizing damage to the male genitourinary system. Special attention will be paid to the role of microdissection testicular sperm extraction (microTESE), as it is a useful and often underutilized rescue procedure for OA. Specifically, the advantages and disadvantages of microTESE will be evaluated, with particular focus on success rates and safety.
The study was triggered by the first author's own experience on an undergraduate elective at the National Mental Wellness Centre in St Lucia. This was an eye-opening experience of psychiatry in a less economically developed environment, and it highlighted disparities between practice in the developed and the developing world. Notably, significant differences were apparent in facilities, in the epidemiology of presenting complaints, in the interaction of cultural beliefs, and in methods of assessment and management.
Aims: To review the literature on the educational impact of electives in psychiatry.
Method: A literature search using Ovid MEDLINE was conducted with the keywords 'medical student' AND 'elective' AND 'psychiatry'. A total of 229 results were returned; these were then analysed for their relevance.
Results: Only one paper was found emphasising the importance of electives in psychiatry, and it reported on one individual's personal experience. There were also reports highlighting the importance of undergraduate elective experience and the need to increase exposure to psychiatry to improve the uptake of postgraduate training programmes. No papers objectively assessed the educational quality or impact of a psychiatric elective experience.
Conclusions: An overseas elective experience was subjectively beneficial for the author, but there is a lack of objective research demonstrating the educational benefit of psychiatry electives on a wider scale. Further research regarding the educational benefits of electives in psychiatry is needed.
Schizophrenia that presents in adolescence (13–18 years) is more likely to have a poor prognosis, and young people are also more prone to adverse effects of medication. Clearer guidance is needed in order to plan treatment for early-onset cases more effectively.
Aims: To evaluate the effects of atypical antipsychotic medications for psychosis in adolescents.
Method: We searched the Cochrane Schizophrenia Group's Register, and references of all identified studies were inspected for further trials. All relevant RCTs that compared atypical antipsychotic medication with pharmacological or non-pharmacological interventions in adolescents with psychosis were included. We reliably selected trials, assessed their quality, and extracted data.
Results: There were 13 RCTs with a total of 1112 participants. Adolescents improved more on a standard dose of risperidone (1.5–6.0 mg) than on a low dose (0.15–0.6 mg) (1 RCT, n = 255, RR 0.54, CI 0.38 to 0.75). Participants on clozapine were three times more likely to experience drowsiness than those on haloperidol (1 RCT, n = 21, RR 3.30, CI 1.23 to 8.85, NNH 2, CI 2 to 17). Fewer adolescents on atypical antipsychotics than on typical antipsychotics left the studies due to adverse effects (3 RCTs, n = 187, RR 0.65, CI 0.36 to 1.15).
Conclusions: There is no convincing evidence that atypical antipsychotic medications are superior to typical antipsychotic medications. There is some evidence that adolescents respond better to standard doses than to lower doses of medication. Larger, more robust trials are required.
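As an illustration of how the effect measures reported above are derived, here is a minimal sketch computing a risk ratio and the number needed to harm from a 2x2 table; the counts are hypothetical, chosen only to be of similar magnitude to the clozapine drowsiness comparison.

```python
def risk_ratio_and_nnh(events_a, n_a, events_b, n_b):
    """Risk ratio of group A vs. group B, and number needed to harm
    (the reciprocal of the absolute risk difference)."""
    risk_a, risk_b = events_a / n_a, events_b / n_b
    return risk_a / risk_b, 1.0 / (risk_a - risk_b)

# Hypothetical counts, not the trial data summarized above:
# 7/10 drowsy on drug A vs. 2/11 drowsy on drug B.
rr, nnh = risk_ratio_and_nnh(events_a=7, n_a=10, events_b=2, n_b=11)
print(f"RR = {rr:.2f}, NNH = {nnh:.1f}")  # RR = 3.85, NNH = 1.9
```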
The aim of the current study was to explore the changing interrelationships among clinical variables through the stages of schizophrenia in order to assemble a comprehensive and meaningful disease model.
Methods: Twenty-nine centers from 25 countries participated, contributing 2358 patients with schizophrenia aged 37.21 ± 11.87 years. Multiple linear regression analysis and visual inspection of plots were performed.
Results: The results suggest that correlations among Positive and Negative Syndrome Scale factors change as the illness progresses through its stages, and that each factor correlates with all the others in the particular stage in which that factor is dominant. This internal structure further supports the validity of an already proposed four-stage model, with positive symptoms dominating the first stage, excitement/hostility the second, depression the third, and neurocognitive decline the last.
Conclusions: The current study investigated mental organization and functioning in patients with schizophrenia in relation to different stages of illness progression. It revealed two distinct “cores” of schizophrenia, the “Positive” and the “Negative,” while neurocognitive decline escalates during the later stages. Future research should focus on the therapeutic implications of such a model. Stopping the progress of the illness could require stopping the succession of stages. This could be achieved by halting not only the triggering effect of positive and negative symptoms but also the sensitization effect on the neural pathways responsible for the development of hostility, excitement, anxiety, and depression, as well as the deleterious effect on neural networks responsible for neurocognition.
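A minimal sketch of the kind of stage-by-stage correlation analysis described above, assuming a flat table of PANSS factor scores with an assigned illness stage; the column names and randomly generated values are illustrative, not the study's dataset.

```python
import numpy as np
import pandas as pd

# Hypothetical PANSS factor scores with an assigned stage (1-4).
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "stage": rng.integers(1, 5, n),
    "positive": rng.normal(size=n),
    "negative": rng.normal(size=n),
    "excitement": rng.normal(size=n),
    "depression": rng.normal(size=n),
    "neurocognition": rng.normal(size=n),
})

# Correlations among factors computed separately within each stage,
# mirroring the changing-interrelationship analysis described above.
for stage, grp in df.groupby("stage"):
    print(f"Stage {stage}")
    print(grp.drop(columns="stage").corr().round(2))
```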
Giant miscanthus has the potential to move beyond cultivated fields and invade noncrop areas, but this risk can be overshadowed by its aesthetic appeal and monetary value as a biofuel crop. Most research on giant miscanthus has focused on herbicide tolerance for establishment and production rather than on terminating an existing stand. This study was conducted to evaluate herbicide options for controlling or terminating a stand of giant miscanthus. In 2013 and 2014, field experiments were conducted on established stands of the giant miscanthus cultivars ‘Nagara’ and ‘Freedom.’ Herbicides evaluated in both years included glyphosate, hexazinone, imazapic, imazapyr, clethodim, fluazifop, and glyphosate plus fluazifop. All treatments were applied in summer (June or July) and in September. In both years, biomass reduction ranged from 85% to 100% when glyphosate was applied in June or July at 4.5 or 7.3 kg ae ha−1. No other treatment applied at this timing provided more than 50% giant miscanthus biomass reduction 1 yr after application. September applications of glyphosate were not consistent: treatments in 2013 reduced biomass by 40% or less, whereas in 2014 all rates provided at least 78% biomass reduction. Glyphosate applied in June or July was the only treatment that provided effective and consistent control of giant miscanthus 1 yr after treatment.
Diversified farms are operations that raise a variety of crops and/or multiple species of livestock, with the goal of utilising the products of one for the growth of the other, thus fostering a sustainable cycle. This type of farming reflects consumers' increasing demand for sustainably produced, naturally raised or pasture-raised animal products, which are commonly produced on diversified farms. The specific objectives of this study were to characterise diversified small-scale farms (DSSF) in California, estimate the prevalence of Salmonella enterica and Campylobacter spp. in livestock and poultry, and evaluate the association between farm- and sample-level risk factors and the prevalence of Campylobacter spp. on DSSF in California using a multilevel logistic model. Most participating farms were organic and raised more than one animal species. Overall Salmonella prevalence was 1.19% (95% confidence interval (CI95) = 0.6–2), and overall Campylobacter spp. prevalence was 10.8% (CI95 = 9–12.9). Significant risk factors associated with Campylobacter spp. were farm size (odds ratio (OR) for 10–50 acres vs. less than 10 acres = 6, CI95 = 2.11–29.8), ownership of swine (OR = 9.3, CI95 = 3.4–38.8), and season (OR for spring vs. coastal summer = 3.5, CI95 = 1.1–10.9; OR for winter vs. coastal summer = 3.23, CI95 = 1.4–7.4). As the number of DSSF continues to grow, evaluating risk factors and management practices that are unique to these operations will help identify risk mitigation strategies and develop outreach materials to improve the food safety of animal and vegetable products produced on DSSF.
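As a small, hedged illustration of how a sample-level prevalence and its 95% confidence interval can be computed, the sketch below uses statsmodels' Wilson interval; the counts are invented to be of similar magnitude to the Campylobacter figure above, and the study's multilevel logistic model is not reproduced here.

```python
from statsmodels.stats.proportion import proportion_confint

# Hypothetical counts: positive samples out of all samples tested.
positives, n = 65, 600  # roughly 10.8%, similar to the abstract
low, high = proportion_confint(positives, n, alpha=0.05, method="wilson")
print(f"prevalence = {positives / n:.1%}, CI95 = {low:.1%} to {high:.1%}")
```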
Although death by neurologic criteria (brain death) is legally recognized throughout the United States, state laws and clinical practice vary concerning three key issues: (1) the medical standards used to determine death by neurologic criteria, (2) management of family objections before determination of death by neurologic criteria, and (3) management of religious objections to declaration of death by neurologic criteria. The American Academy of Neurology and other medical stakeholder organizations involved in the determination of death by neurologic criteria have undertaken concerted action to address variation in clinical practice in order to ensure the integrity of brain death determination. To complement this effort, state policymakers must revise legislation on the use of neurologic criteria to declare death. We review the legal history and current laws regarding neurologic criteria to declare death and offer proposed revisions to the Uniform Determination of Death Act (UDDA) and the rationale for these recommendations.
Objectives: To describe multivariate base rates (MBRs) of low scores and reliable change (decline) scores on Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) in college athletes at baseline, as well as to assess MBR differences among demographic and medical history subpopulations. Methods: Data were reported on 15,909 participants (46.5% female) from the NCAA/DoD CARE Consortium. MBRs of ImPACT composite scores were derived using published CARE normative data and reliability metrics. MBRs of sex-corrected low scores were reported at <25th percentile (Low Average), <10th percentile (Borderline), and ≤2nd percentile (Impaired). MBRs of reliable decline scores were reported at the 75%, 90%, 95%, and 99% confidence intervals. We analyzed subgroups by sex, race, attention-deficit/hyperactivity disorder and/or learning disability (ADHD/LD), anxiety/depression, and concussion history using chi-square analyses. Results: Base rates of low scores and reliable decline scores on individual composites approximated the normative distribution. Athletes obtained ≥1 low score with frequencies of 63.4% (Low Average), 32.0% (Borderline), and 9.1% (Impaired). Athletes obtained ≥1 reliable decline score with frequencies of 66.8%, 32.2%, 18%, and 3.8% at the 75%, 90%, 95%, and 99% CIs, respectively. Comparatively few athletes had low scores or reliable decline on ≥2 composite scores. Black/African American athletes and athletes with ADHD/LD had higher rates of low scores, while greater concussion history was associated with lower MBRs (p < .01). MBRs of reliable decline were not associated with demographic or medical factors. Conclusions: Clinical interpretation of low scores and reliable decline on ImPACT depends on the strictness of the low score cutoff, the reliable change criterion, and the number of scores exceeding these cutoffs. Race and ADHD influence the frequency of low scores at all cutoffs cross-sectionally.
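For context on the reliable change (decline) scores discussed above, here is a minimal sketch of a Jacobson-Truax style reliable change index; the SD, test-retest reliability, and scores are hypothetical, not the published CARE reliability metrics the study used.

```python
import math

def reliable_change_index(baseline, retest, sd, reliability):
    """Reliable change index: the retest-baseline difference divided
    by the standard error of a difference score."""
    sem = sd * math.sqrt(1.0 - reliability)  # standard error of measurement
    se_diff = math.sqrt(2.0) * sem           # SE of the difference score
    return (retest - baseline) / se_diff

# Hypothetical composite: SD 10, reliability 0.70, score drops 100 -> 90.
rci = reliable_change_index(baseline=100, retest=90, sd=10, reliability=0.70)
# |RCI| > 1.645 would mark reliable decline at the 90% confidence interval.
print(round(rci, 2), abs(rci) > 1.645)  # -1.29 False
```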
Jaswal & Akhtar provide several quotes ostensibly from people with autism but obtained via the discredited techniques of Facilitated Communication and the Rapid Prompting Method, and they do not acknowledge the use of these techniques. As a result, their argument is substantially less convincing than they assert, and the article lacks transparency.
Due to concerns over increasing fluoroquinolone (FQ) resistance among gram-negative organisms, our stewardship program implemented a preauthorization use policy. The goal of this study was to assess the relationship between hospital FQ use and antibiotic resistance.
Setting: Large academic medical center.
Methods: We performed a retrospective analysis of FQ susceptibility of hospital isolates for 5 common gram-negative bacteria: Acinetobacter spp., Enterobacter cloacae, Escherichia coli, Klebsiella pneumoniae, and Pseudomonas aeruginosa. The primary endpoint was the change in FQ susceptibility. A Poisson regression model was used to calculate the rate of change between the preintervention period (1998–2005) and the postimplementation period (2006–2016).
Results: Steep declines in FQ susceptibility began in 1998, particularly among P. aeruginosa, Acinetobacter spp., and E. cloacae. Our FQ restriction policy reduced FQ use from 173 days of therapy (DOT) per 1,000 patient days to <60 DOT per 1,000 patient days. Fluoroquinolone susceptibility increased for Acinetobacter spp. (rate ratio [RR], 1.038; 95% confidence interval [CI], 1.005–1.072), E. cloacae (RR, 1.028; 95% CI, 1.013–1.044), and P. aeruginosa (RR, 1.013; 95% CI, 1.006–1.020). No significant change in susceptibility was detected for K. pneumoniae (RR, 1.002; 95% CI, 0.996–1.008), and susceptibility for E. coli continued to decline, although less steeply (RR, 0.981; 95% CI, 0.975–0.987).
Conclusions: A stewardship-driven FQ restriction program stopped overall declining FQ susceptibility rates for all species except E. coli. For 3 species (ie, Acinetobacter spp, E. cloacae, and P. aeruginosa), susceptibility rates improved after implementation, and this improvement has been sustained over a 10-year period.
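As a hedged illustration of the Poisson regression named in the Methods above, the sketch below models yearly counts of FQ-susceptible isolates with a log offset for isolates tested and a level shift at policy implementation; the counts are invented, and the study's actual model compared rates of change between the two periods rather than a simple level shift.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical yearly susceptibility counts, not the study's data.
df = pd.DataFrame({
    "year": np.arange(1998, 2017),
    "susceptible": [820, 800, 775, 750, 730, 700, 690, 670,   # 1998-2005
                    680, 690, 700, 705, 715, 720, 730, 735,   # 2006-2013
                    745, 750, 760],                           # 2014-2016
    "tested": [1000] * 19,
})
df["time"] = df["year"] - df["year"].min()
df["post"] = (df["year"] >= 2006).astype(int)  # FQ restriction era

# Poisson regression of susceptible counts with log(tested) as offset;
# exponentiated coefficients are rate ratios like those reported above.
X = sm.add_constant(df[["time", "post"]])
fit = sm.GLM(df["susceptible"], X, family=sm.families.Poisson(),
             offset=np.log(df["tested"])).fit()
print(np.exp(fit.params))
```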