Volume I offers an introductory survey of the phenomenon of genocide. The first five chapters examine its major recurring themes, while the remaining nineteen are specific case studies. The combination of thematic and empirical approaches illuminates the origins and long history of genocide, its causes, consistent characteristics, and the connections linking various cases from earliest times to the early modern era. The themes examined include the roles of racism, the state, religion, gender prejudice, famine, and climate crises, as well as the role of human decision-making in the causation of genocide. The case studies cover events on four continents, ranging from prehistoric Europe and the Andes to ancient Israel, Mesopotamia, the early Greek world, Rome, Carthage, and the Mediterranean. They continue with the Norman Conquest of England's North, the Crusades, the Mongol Conquests, medieval India and Viet Nam, and a panoramic study of pre-modern China, as well as the Spanish conquests of the Canary Islands, the Caribbean, and Mexico.
The Lava Jato (or “Car Wash”) corruption investigation offers an important case study of the evolution of legal accountability in consolidating democracies. This chapter evaluates the origins of the investigation and the early successes of prosecutors, analyzes why Lava Jato initially succeeded where numerous previous cases had foundered, discusses the causes and likely consequences of the investigation’s declining effectiveness and ultimate neutering, and reflects on what this experience suggests about legal accountability in Brazil and other democracies facing long-festering patterns of elite collusion, corruption, and impunity.
Little is known about selenium intakes and status in very young New Zealand children. However, selenium intakes below recommendations and lower selenium status compared to international studies have been reported in New Zealand (particularly South Island) adults. The Baby-Led Introduction to SolidS (BLISS) randomised controlled trial compared a modified version of baby-led weaning (infants feed themselves rather than being spoon-fed) with traditional spoon-feeding (Control). Weighed three-day diet records were collected and plasma selenium concentration measured using ICP-MS. In total, 101 (BLISS n=50, Control n=51) 12-month-old toddlers provided complete data. The odds of selenium intakes below the estimated average requirement were no different between BLISS and Control (OR: 0.89; 95% CI: 0.39, 2.03), and there was no difference in mean plasma selenium concentration between groups (0.04 μmol/L; 95% CI: -0.03, 0.11). In an adjusted model, consuming breast milk was associated with lower plasma selenium concentrations (-0.12 μmol/L; 95% CI: -0.19, -0.04). Of the food groups other than infant milk (breast milk or infant formula), “breads and cereals” contributed the most to selenium intakes (12% of intake). In conclusion, selenium intakes and plasma selenium concentrations of 12-month-old New Zealand toddlers were no different between those who had followed a baby-led approach to complementary feeding and those who had followed traditional spoon-feeding. However, more than half of toddlers had selenium intakes below the estimated average requirement.
To identify cognitive phenotypes in late-life depression (LLD) and describe relationships with sociodemographic and clinical characteristics.
Observational cohort study.
Baseline data from participants recruited via clinical referrals and community advertisements who enrolled in two separate studies.
Non-demented adults with LLD (n = 120; mean age = 66.73 ± 5.35 years) and non-depressed elders (n = 56; mean age = 67.95 ± 6.34 years).
All completed a neuropsychological battery, and individual cognitive test scores were standardized across the entire sample without correcting for demographics. Five empirically derived cognitive domain composites were created, and cluster analytic approaches (hierarchical, k-means) were independently conducted to classify cognitive patterns in the depressed cohort only. Baseline sociodemographic and clinical characteristics were then compared across groups.
A three-cluster solution best reflected the data, including “High Normal” (n = 47), “Reduced Normal” (n = 35), and “Low Executive Function” (n = 37) groups. The “High Normal” group was younger, more educated, predominantly Caucasian, and had fewer vascular risk factors and higher Mini-Mental State Examination scores compared to the “Low Executive Function” group. No differences were observed on other sociodemographic or clinical characteristics. Exploration of the “High Normal” group found two subgroups that differed only in attention/working memory performance and length of the current depressive episode.
Three cognitive phenotypes in LLD were identified that differed slightly in sociodemographic and disease-specific variables, but not in the quality of specific symptoms reported. Future work on these cognitive phenotypes will examine relationships to treatment response, vulnerability to cognitive decline, and neuroimaging markers to help disentangle the heterogeneity seen in this patient population.
The tolerance of cereal rye to eight herbicides registered for use in wheat, at two rates, was evaluated for potential labeling in cereal rye to expand limited chemical weed control options. Across five site-years, halauxifen-methyl plus florasulam, pyroxsulam, and thifensulfuron-methyl plus tribenuron-methyl applied at a 2X rate to cereal rye at Zadoks (Z) 13 caused less than 15% injury and had no impact on cereal rye density. These herbicides at the 2X rate reduced cereal rye heights by 11% at 10 d after treatment (DAT), with rye recovering by 31 DAT; cereal rye heights were not reduced by these herbicides at their 1X rate. In contrast, significant injury was observed with the 1X rate of mesosulfuron-methyl (45%), pinoxaden (27%), and pinoxaden plus fenoxaprop-p-ethyl (30%) applied postemergence (POST); early-season height was reduced 19 to 26%. The residual herbicides pyroxasulfone, applied as a delayed preemergence (PRE) at Z 10, and flumioxazin plus pyroxasulfone, applied at Z 11, caused 27 to 28% and 16 to 47% injury, respectively, when the 1X rate was activated by rainfall within 2 d of application. These residual herbicides reduced cereal rye height and density up to 35 and 40%, respectively. Cereal rye grain yield was not influenced by herbicide or rate applied.
To understand barriers and facilitators to evidence-based prescribing of antibiotics in the outpatient dental setting.
Outpatient dental setting.
Dentists from 40 Veterans Health Administration (VA) facilities across the United States.
Dentists were identified based on their prescribing patterns and were recruited to participate in a semistructured interview on perceptions toward prescribing. All interviews were recorded, transcribed, and double-coded for analysis, with high reliability between coders. We identified general trends using the theoretical domains framework and mapped overarching themes onto the behavior change wheel to identify prospective interventions that improve evidence-based prescribing.
In total, 90 dentists participated in our study. The following barriers and facilitators to evidence-based prescribing emerged as impacts on a dentist’s decision making on prescribing an antibiotic: access to resources, social influence of peers and other care providers, clinical judgment, beliefs about consequences, local features of the clinic setting, and beliefs about capabilities.
Findings from this work reveal the need to increase awareness of up-to-date antibiotic prescribing behaviors in dentistry and may inform the best antimicrobial stewardship interventions to support dentists’ ongoing professional development and improve evidence-based prescribing.
The enzyme 5,10-methylenetetrahydrofolate reductase (MTHFR) links the folate cycle that produces one-carbon units with the methionine cycle that converts these into S-adenosylmethionine (SAM), the universal methyl donor for almost all methyltransferases. Previously, MTHFR has been shown to be regulated by phosphorylation, which suppresses its activity. SAM levels have been shown to increase substantially soon after initiation of meiotic maturation of the mouse germinal vesicle (GV) stage oocyte and then decrease back to their original low level in mature second meiotic metaphase (MII) eggs. As MTHFR controls the entry of one-carbon units into the methionine cycle, it is a candidate regulator of the SAM levels in oocytes and eggs. Mthfr transcripts are expressed in mouse oocytes and preimplantation embryos and MTHFR protein is present at each stage. In mature MII eggs, the apparent molecular weight of MTHFR was increased compared with GV oocytes, which we hypothesized was due to increased phosphorylation. The increase in apparent molecular weight was reversed by treatment with lambda protein phosphatase (LPP), indicating that MTHFR is phosphorylated in MII eggs. In contrast, LPP had no effect on MTHFR from GV oocytes, 2-cell embryos, or blastocysts. MTHFR was progressively phosphorylated after initiation of meiotic maturation, reaching maximal levels in MII eggs before decreasing again after egg activation. As phosphorylation suppresses MTHFR activity, it is predicted that MTHFR becomes inactive during meiotic maturation and is minimally active in MII eggs, which is consistent with the reported changes in SAM levels during mouse oocyte maturation.
Georgia vegetable growers produce more than 27% of the nation’s fresh-market cucumbers. To maximize yields and profit, fields must be weed-free at planting. Limitations with current burndown herbicide options motivated academic, industry, and U.S. Department of Agriculture partners to search for new tools to assist growers. One possibility, glufosinate, controls many common and troublesome weeds, but its influence on cucumber development through residual activity when applied before or at planting is not understood. Thus, four different studies were each conducted two to four times from 2017 to 2020 to determine 1) transplant cucumber response to preplant glufosinate applications as influenced by rate, overhead irrigation, and interval between application and planting; and 2) seeded cucumber response to preemergence (PRE) glufosinate applications as influenced by rate, overhead irrigation, and planting depth. Glufosinate applied at 330, 660, 980, and 1,640 g ai ha−1 the day before transplanting caused 11% to 53% injury on sandy, low organic matter soils. Cucumber vine lengths and plant biomass were reduced up to 28% and 46%, respectively, with the three highest rates. Early-season yields (harvests 1 to 4) were reduced 31% to 60% with glufosinate at 660 to 1,640 g ha−1, with similar trends observed for total yield (11 to 13 harvests). Irrigation (0.75 cm) after application and before transplanting reduced injury to less than 21%, eliminated vine length and biomass suppression except at the highest rate, and eliminated yield loss. Extending the interval between glufosinate application and transplanting from 1 to 4 d was not beneficial, and further extending the interval to 7 d significantly reduced injury in only half of the trials. When applied PRE to seeded cucumber, combining the data across locations, glufosinate caused less than 7% injury even at 1,640 g ha−1.
Seeded plant vine lengths, biomass, and marketable yield were not influenced by the PRE application, and neither irrigation nor planting depth influenced seeded crop response to glufosinate.
Background: Visual impairment can impact 70% of individuals who have experienced a stroke. Identification and remediation of visual impairments can improve overall function and perceived quality of life. Our project aimed to improve visual assessment and timely intervention for patients with post-stroke visual impairment (PSVI). Methods: We conducted a quality improvement initiative to create a standardized screening and referral process for patients with PSVI to access an orthoptist. Post-stroke visual impairment was identified using the Visual Screen Assessment (VISA) tool. Patients completed the 25-item Visual Function Questionnaire (VFQ-25) before and after orthoptic assessment, and differences between scores were evaluated. Results: Eighteen patients completed the VFQ-25 both before and after orthoptic assessment. Of the vision-related constructs, there was a significant improvement in reported outcomes for general vision (M=56.9, SD=30.7; M=48.6, SD=16.0), p=0.002, peripheral vision (M=88.3, SD=16; M=75, SD=23.1), p=0.027, ocular pain (M=97.2, SD=6.9; M=87.5, SD=21.4), p=0.022, near activities (M=82.4, SD=24.1; M=67.8, SD=25.6), p<0.001, social functioning (M=90.2, SD=19; M=78.5, SD=29.3), p=0.019, mental health (M=84.0, SD=25.9; M=70.5, SD=31.2), p=0.017, and role difficulties (M=84.7, SD=26.3; M=67.4, SD=37.9), p=0.005. Conclusions: Orthoptic assessment for those with PSVI significantly improved perceived quality of life in numerous vision-related constructs, suggesting it is a valuable part of a patient’s post-stroke recovery.
Delayed cerebral ischemia (DCI) is a complication of aneurysmal subarachnoid hemorrhage (aSAH) and is associated with significant morbidity and mortality. There is little high-quality evidence available to guide the management of DCI. The Canadian Neurosurgery Research Collaborative (CNRC) is comprised of resident physicians who are positioned to capture national, multi-site data. The objective of this study was to evaluate practice patterns of Canadian physicians regarding the management of aSAH and DCI.
We performed a cross-sectional survey of Canadian neurosurgeons, intensivists, and neurologists who manage aSAH. A 19-question electronic survey (Survey Monkey) was developed and validated by the CNRC following a DCI-related literature review (PubMed, Embase). The survey was distributed to members of the Canadian Neurosurgical Society and to Canadian members of the Neurocritical Care Society. Responses were analyzed using quantitative and qualitative methods.
The response rate was 129/340 (38%). Agreement among respondents was limited to the need for intensive care unit admission, the use of clinical and radiographic monitoring, and prophylaxis for the prevention of DCI. Several inconsistencies were identified. Indications for starting hyperdynamic therapy varied. There was discrepancy in the proportion of patients who were felt to require intravenous (IV) milrinone, intra-arterial (IA) vasodilators, or physical angioplasty for the treatment of DCI. Most respondents reported that their facility does not utilize a standardized definition for DCI.
DCI is an important clinical entity whose management lacks homogeneity and standardization among Canadian practitioners. The CNRC calls for the development of national standards in the definition, identification, and treatment of DCI.
Depictions of mythical beings appear in many different forms of art world-wide, including rock art of various ages. In this paper we explore a particular type of imagery, back-to-back figures, consisting of two human-like figures or animals of the same species next to each other and facing in opposite directions. Some human-like doubles are joined at the back rather than placed side by side, but they too face in opposite directions. We report on new research on rock art, bark paintings and recent paintings on paper, and chart a 9000-year history of making aesthetically, symbolically and spiritually powerful back-to-back figures in Arnhem Land, Northern Territory, Australia.
To evaluate opportunities for assessing penicillin allergies among patients presenting to dental clinics.
Retrospective cross-sectional study.
VA dental clinics.
Adult patients with a documented penicillin allergy who received an antibiotic from a dentist between January 1, 2015, and December 31, 2018, were included.
Chart reviews were completed on random samples of 100 patients who received a noncephalosporin antibiotic and 200 patients who received a cephalosporin. Each allergy was categorized by severity. These categories were used to determine patient eligibility for 3 testing groups based on peer-reviewed algorithms: (1) no testing, (2) skin testing, and (3) oral test-dose challenge. Descriptive and bivariate statistics were used to compare facility and patient demographics first between true penicillin allergy, pseudo penicillin allergy, and missing allergy documentation, and between those who received a cephalosporin and those who did not at the dental visit.
Overall, 19% of patients lacked documentation of the nature of the allergic reaction, 53% were eligible for skin testing, 27% were eligible for an oral test-dose challenge, and 1% had contraindications to testing. Male patients and African American patients were less likely to receive a cephalosporin.
Most penicillin-allergic patients in the VA receiving an antibiotic from a dentist are eligible for penicillin skin testing or an oral penicillin challenge. Further research is needed to understand the role of dentists and dental clinics in assessing penicillin allergies.
The Centers for Medicare and Medicaid mandated that nursing homes implement antibiotic stewardship programs (ASPs) by November 2017. We conducted surveys of Wisconsin nursing-home stewardship practices before and after this mandate. Our comparison of these surveys shows an overall increase in ASP implementation efforts, but it also highlights areas for further improvement.
This chapter considers the legacy of respect for individual autonomy and ‘informed consent’ in health research. The primacy of informed consent as a safeguard has led to a systemic regulatory tendency to conceive of and protect privacy as an individual rather than a collective concern. This has limited any regulatory ability to grasp broader social concerns with the use and disclosure of data gathered and generated by health research. Any systemic failure to recognise collective interests in data, and the public interest in (non-personal) data protection, has profound implications for an information age. The chapter reflects on the value of re-negotiating the interests and expectations protected by health research regulation. It recognises the significance of Graeme Laurie’s preferred conception of privacy to enabling such negotiation, and the value of stewardship in establishing normative expectations free of historical encumbrance. Laurie’s conceptualisation of ‘privacy as separateness’, when placed alongside the idea of stewardship, may allow us to rebalance respect to encompass collective interests as fundamental to self-determination and mutual respect.
While declarative learning is dependent on the hippocampus, procedural learning and repetition priming can operate independently from the hippocampus, making them potential targets for behavioral interventions that utilize non-declarative memory systems to compensate for the declarative learning deficits associated with hippocampal insult. Few studies have assessed procedural learning and repetition priming in individuals with amnestic mild cognitive impairment (aMCI).
This study offers an overview across declarative learning, conceptual repetition priming, and procedural learning tasks by providing between-group effect sizes and Bayes Factors (BFs) comparing individuals with aMCI and controls. Seventy-six individuals with aMCI and 83 cognitively unimpaired controls were assessed. We hypothesized that the largest differences between individuals with aMCI and controls would appear on declarative learning, followed by conceptual repetition priming, with the smallest differences on procedural learning.
Consistent with our hypotheses, we found large differences between groups with supporting BFs on declarative learning. For conceptual repetition priming, we found a small-to-moderate between-group effect size and a non-conclusive BF somewhat in favor of a difference between groups. We found more variable but overall trivial differences on procedural learning tasks, with inconclusive BFs, in line with expectations.
The current results suggest that conceptual repetition priming does not remain intact in individuals with aMCI, while procedural learning may remain intact. While additional studies are needed, our results contribute to the evidence base suggesting that procedural learning may remain spared in aMCI and help inform behavioral interventions that aim to utilize procedural learning in this population.
Chronic muscle diseases (MD) are progressive, cause muscle wasting and weakness, and are associated with reduced quality of life (QoL). The ACTMuS trial examined whether Acceptance and Commitment Therapy (ACT) as an adjunct to usual care improved QoL for such patients compared to usual care alone.
This two-arm, randomised, multicentre, parallel-design trial recruited 155 patients with MD (Hospital Anxiety and Depression Scale ⩾ 8 for depression or ⩾ 8 for anxiety, and Montreal Cognitive Assessment ⩾ 21/30). Participants were randomised, using random block sizes, to one of two groups: standard medical care (SMC) (n = 78) or ACT in addition to SMC (n = 77), and were followed up to 9 weeks. The primary outcome was QoL at 9 weeks, assessed by the Individualised Neuromuscular Quality of Life Questionnaire (INQoL), the average of five subscales. Trial registration: NCT02810028.
In total, 138 people (89.0%) were followed up at 9 weeks. At all three time points, the adjusted group difference favoured the intervention group and was significant, with moderate to large effect sizes. Secondary outcomes (mood, functional impairment, aspects of psychological flexibility) also showed significant differences between groups at week 9.
ACT in addition to usual care was effective in improving QoL and other psychological and social outcomes in patients with MD. A 6 month follow up will determine the extent to which gains are maintained.
Having attention-deficit/hyperactivity disorder (ADHD) is a risk factor for concussion that impacts concussion diagnosis and recovery. The relationship between ADHD and repetitive subconcussive head impacts on neurocognitive and behavioral outcomes is less well known. This study evaluated the role of ADHD as a moderator of the association between repetitive head impacts on neurocognitive test performance and behavioral concussion symptoms over the course of an athletic season.
Study participants included 284 male athletes aged 13–18 years who participated in high school football. Parents completed the Strengths and Weaknesses of ADHD Symptoms and Normal Behavior (SWAN) ratings about their teen athlete before the season began. Head impacts were measured using an accelerometer worn during all practices and games. Athletes and parents completed behavioral ratings of concussion symptoms and the Attention Network Task (ANT), Digital Trail Making Task (dTMT), and Cued Task Switching Task at pre- and post-season.
Mixed model analyses indicated that neither head impacts nor ADHD symptoms were associated with post-season athlete- or parent-reported concussion symptom ratings or neurocognitive task performance. Moreover, no relationships between head impact exposure and neurocognitive or behavioral outcomes emerged when severity of pre-season ADHD symptoms was included as a moderator.
Athletes’ pre-season ADHD symptoms do not appear to influence behavioral or neurocognitive outcomes following a single season of competitive football. Results are interpreted in light of several study limitations (e.g., single season, assessment of constructs) that may have impacted this study’s pattern of largely null results.
At the climax of Egyptian author Out el Kouloub's novel, Zanouba, the reader is witness to a crime. We find ourselves in Matariyya, a village north of Cairo, in a somber bedchamber with a blind shaykh. It is the room where only a week before Zanouba, the novel's titular character, suffered a forced miscarriage in the final month of her pregnancy and lost her long-coveted male child. The women of the household are lined up in front of the shaykh—all except for Zanouba, who is still bedridden, and her co-wife, Mashallah, who is exempt from participating because she is menstruating. They prepare to swear on the Qurʾan their innocence in the matter of the miscarriage, as Zanouba's husband, Abdel Meguid, and her mother-in-law suspect foul play.