Antarctica's ice shelves modulate the grounded ice flow, and weakening of ice shelves due to climate forcing will decrease their ‘buttressing’ effect, causing a response in the grounded ice. The processes governing ice-shelf weakening are complex, and uncertainties in the response of the grounded ice sheet are also difficult to assess. The Antarctic BUttressing Model Intercomparison Project (ABUMIP) compares ice-sheet model responses to a decrease in buttressing by investigating the ‘end-member’ scenario of total and sustained loss of ice shelves. Although unrealistic, this scenario enables gauging the sensitivity of an ensemble of 15 ice-sheet models to a total loss of buttressing, hence exhibiting the full potential of marine ice-sheet instability. All models predict that this scenario leads to multi-metre (1–12 m) sea-level rise over 500 years from present day. West Antarctic ice sheet collapse alone leads to a 1.91–5.08 m sea-level rise due to the marine ice-sheet instability. Mass loss rates are a strong function of the sliding/friction law, with plastic laws causing a further destabilization of the Aurora and Wilkes Subglacial Basins, East Antarctica. Improvements to marine ice-sheet models have greatly reduced variability between modelled ice-sheet responses to extreme ice-shelf loss, e.g. compared to the SeaRISE assessments.
Little is known about how Royal College emergency medicine (RCEM) residency programs select their residents. This creates uncertainty regarding alignment between current selection processes and known best practices. We seek to describe the current selection processes of Canadian RCEM programs.
An online survey was distributed to all RCEM program directors and assistant directors. The survey instrument included 22 questions and sought both qualitative and quantitative data from the following six domains: application file, letters of reference, elective selection, interview, rank order, and selection process evaluation.
We received responses from 13 of 14 programs for an aggregate response rate of 92.9%. A candidate's letters of reference were identified as the most important element of the paper application (38.5%). Having a high level of familiarity with the applicant was the most important characteristic of a reference letter author (46.2%). In determining rank order, 53.8% of programs weighed the interview more heavily than the paper application. Once final candidate scores are established following the interview stage, all program respondents indicated that further adjustment is made to the final rank order list. Only 1 of 13 program respondents reported ever having completed a formal evaluation of their selection process.
We have identified elements of the selection process that will inform recommendations for programs, students, and referees. We encourage programs to conduct regular reviews of their selection processes to ensure ongoing alignment with best practices.
Brain imaging studies have shown altered amygdala activity during emotion processing in children and adolescents with oppositional defiant disorder (ODD) and conduct disorder (CD) compared to typically developing children and adolescents (TD). Here we aimed to assess whether aggression-related subtypes (reactive and proactive aggression) and callous-unemotional (CU) traits predicted variation in amygdala activity and skin conductance (SC) response during emotion processing.
We included 177 participants (n = 108 cases with disruptive behaviour and/or ODD/CD and n = 69 TD), aged 8–18 years, across nine sites in Europe, as part of the EU Aggressotype and MATRICS projects. All participants performed an emotional face-matching functional magnetic resonance imaging task.
Differences between cases and TD in affective processing, as well as the specificity of activation patterns for aggression subtypes and CU traits, were assessed. Simultaneous SC recordings were acquired in a subsample (n = 63). Cases showed higher amygdala activity than TD in response to negative faces (fearful and angry) v. shapes. Subtyping cases according to aggression-related subtypes did not significantly influence amygdala activity, while stratification based on CU traits was more sensitive and revealed decreased amygdala activity in the high-CU group. SC responses were significantly lower in cases and negatively correlated with CU traits and with reactive and proactive aggression.
Our results showed differences in amygdala activity and SC responses to emotional faces between cases with ODD/CD and TD, while CU traits moderated both central (amygdala) and peripheral (SC) responses. These insights regarding subtypes and trait-specific aggression could be used for improved diagnostics and personalized treatment.
This chapter analyzes the fragmentation of architectures of earth system governance. We start with a conceptualization of governance fragmentation and its relation to concepts such as polycentricity and institutional complexity. We then review the origins of governance fragmentation and its problematization, methodological approaches to studying fragmentation and the impacts and consequences of fragmentation. We conclude by identifying future research directions in this domain. Our research shows that fragmentation is ubiquitous, that it varies among policy areas and governance areas and that it is a variable that can be assessed in comparative research across policy areas and over time. The review is based on a comprehensive study of the literature on governance fragmentation over the last decade. We draw on a Scopus search on all articles published in the subject area of social sciences from 2009 to 2018, supplemented by additional studies, such as books, book chapters and a few policy briefs and working papers.
Hierarchization is a deliberate process to create a vertically nested governance architecture where actors and institutions in a lower rank are bound or otherwise compelled to obey, respond to or contribute to higher-order norms and objectives. Drawing on this definition, we review recent research on hierarchization in earth system governance and the political and legal processes that establish, maintain and legitimize it. Here we present three mutually non-exclusive forms of hierarchization – systematization, centralization and prioritization. Each involves different actors and rationales, mechanisms and strategies, while achieving different purposes with varying governance outcomes. We illustrate our argument with empirical examples including the proposed Global Pact for the Environment, the proposal to establish a world environment organization and the Sustainable Development Goals. We conclude with an assessment of the benefits and drawbacks of hierarchization as an approach to some of the challenges inherent in earth system governance, and offer suggestions for future research.
Governance through goals, a relatively new global governance mechanism, has recently gained prominence, particularly since the adoption of the Sustainable Development Goals. Through this mechanism, internationally agreed policy goals orchestrate the activities of governmental and non-governmental actors. This chapter argues that governance through goals has important effects on governance architectures and their degree and type of fragmentation. To analyze these effects, we review literature around four characteristics of governance through goals: their non-legally binding nature, weak global institutional arrangements, inclusive goal-setting processes and national leeway. We argue that alternative forms of bindingness, such as reporting and accountability mechanisms, can steer actors toward a shared vision. This may result in synergistic fragmentation if broad support is obtained through inclusive processes. However, tensions and cherry-picking may arise when goals are prioritized and implemented. Further research on the effects of governance through goals is crucial given that it is likely to maintain – and gain – importance in earth system governance.
Increased fruit and vegetable (FV) intake is associated with reduced blood pressure (BP). However, it is not clear whether the effect of FV on BP depends on the type of FV consumed. Furthermore, there is limited research regarding the comparative effect of juices or whole FV on BP. Baseline data from a prospective cohort study of 10 660 men aged 50–59 years in France and Northern Ireland were used to examine the cross-sectional associations of both total FV intake and specific types of FV with BP. BP was measured, and dietary intake was assessed using FFQ. After adjusting for confounders, both systolic BP (SBP) and diastolic BP (DBP) were significantly inversely associated with total fruit, vegetable and fruit juice intake; however, when examined according to fruit or vegetable sub-type (citrus fruit, other fruit, fruit juices, cooked vegetables and raw vegetables), only the other fruit and raw vegetable categories were consistently associated with reduced SBP and DBP. In relation to the risk of hypertension based on SBP >140 mmHg, the OR for total fruit, vegetable and fruit juice intake (per fourth) was 0·95 (95 % CI 0·91, 1·00), with the same estimates being 0·98 (95 % CI 0·94, 1·02) for citrus fruit (per fourth), 1·02 (95 % CI 0·98, 1·06) for fruit juice (per fourth), 0·93 (95 % CI 0·89, 0·98) for other fruit (per fourth), 1·05 (95 % CI 0·99, 1·10) for cooked vegetable (per fourth) and 0·86 (95 % CI 0·80, 0·91) for raw vegetable intakes (per fourth). Similar results were obtained for DBP. In conclusion, a high overall intake of fruit, vegetables and fruit juice was inversely associated with SBP, DBP and risk of hypertension, but this differed by FV sub-type, suggesting that the strength of the association between FV sub-types and BP might be related to the type consumed, or to processing or cooking-related factors.
Introduction: Workplace-based assessments (WBAs) are integral to emergency medicine residency training. However, many biases undermine their validity, such as an assessor's personal inclination to rate learners leniently or stringently. Outlier assessors produce assessment data that may not reflect the learner's performance. Our emergency department introduced a new Daily Encounter Card (DEC) using entrustability scales in June 2018. Entrustability scales reflect the degree of supervision required for a given task and have been shown to improve assessment reliability and discrimination. It is unclear what effect they have on assessor stringency/leniency; we hypothesized that they would reduce the number of outlier assessors. We propose a novel, simple method to identify outlying assessors in the setting of WBAs, and we examine the effect of transitioning from a norm-based assessment to an entrustability scale on the population of outlier assessors. Methods: This was a prospective pre-/post-implementation study, including all DECs completed between July 2017 and June 2019 at The Ottawa Hospital Emergency Department. For each phase, we identified outlier assessors as follows: 1. An assessor is a potential outlier if the mean of the scores they awarded was more than two standard deviations from the mean score of all completed assessments. 2. For each assessor identified in step 1, their learners’ assessment scores were compared to the overall mean of all learners, ensuring that the assessor was not simply awarding outlying scores because they worked with outlier learners. Results: 3927 and 3860 assessments were completed by 99 and 116 assessors in the pre- and post-implementation phases, respectively. We identified 9 vs 5 outlier assessors (p = 0.16) in the pre- and post-implementation phases. Of these, 6 vs 0 (p = 0.01) were stringent, while 3 vs 5 (p = 0.67) were lenient. One assessor was identified as an outlier (lenient) in both phases. Conclusion: Our proposed method successfully identified outlier assessors and could be used to identify assessors who might benefit from targeted coaching and feedback on their assessments. The transition to an entrustability scale resulted in a non-significant trend towards fewer outlier assessors. Further work is needed to identify ways to mitigate the effects of rater cognitive biases.
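The two-step screen described in the Methods maps directly onto a short script. The sketch below is a minimal illustration only, assuming a flat table of assessment records with assessor, learner, and score columns; the column names, the pandas representation, and the exact peer-comparison logic in step 2 are our assumptions, not the authors' implementation.

```python
import pandas as pd

def flag_outlier_assessors(df: pd.DataFrame, k: float = 2.0) -> list:
    """Two-step screen for outlier assessors.

    Expects columns 'assessor', 'learner', 'score' (names are illustrative).
    """
    overall_mean = df['score'].mean()
    overall_sd = df['score'].std()

    # Step 1: flag assessors whose mean awarded score lies more than
    # k standard deviations from the mean of all completed assessments.
    assessor_means = df.groupby('assessor')['score'].mean()
    candidates = assessor_means[(assessor_means - overall_mean).abs() > k * overall_sd].index

    confirmed = []
    for assessor in candidates:
        # Step 2: compare the flagged assessor's learners against the overall
        # mean using scores those learners received from *other* assessors,
        # so an assessor is not flagged merely for working with outlier learners.
        learners = df.loc[df['assessor'] == assessor, 'learner'].unique()
        peers = df[df['learner'].isin(learners) & (df['assessor'] != assessor)]
        if not peers.empty and abs(peers['score'].mean() - overall_mean) <= k * overall_sd:
            confirmed.append(assessor)
    return confirmed
```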
Introduction: A critical component for successful implementation of any innovation is an organization's readiness for change. Competence by Design (CBD) is the Royal College's major change initiative to reform the training of medical specialists in Canada. The purpose of this study was to measure readiness to implement CBD among the 2019 launch disciplines. Methods: An online survey was distributed to program directors of the 2019 CBD launch disciplines one month prior to implementation. Questions were developed based on the R = MC2 framework for organizational readiness. They addressed program motivation to implement CBD, general capacity for change, and innovation-specific capacity. Questions related to motivation and general capacity were scored using a 5-point scale of agreement. Innovation-specific capacity was measured by asking participants whether they had completed 33 key pre-implementation tasks (yes/no) in preparation for CBD. Bivariate correlations were conducted to examine the relationships between motivation, general capacity and innovation-specific capacity. Results: Survey response rate was 42% (n = 79). A positive correlation was found between all three domains of readiness (motivation and general capacity, r = 0.73, p < 0.01; motivation and innovation-specific capacity, r = 0.52, p < 0.01; general capacity and innovation-specific capacity, r = 0.47, p < 0.01). Most respondents agreed that successful launch of CBD was a priority (74%). Fewer felt that CBD was a move in the right direction (58%) and that implementation was a manageable change (53%). While most programs indicated that their leadership (94%) and faculty and residents (87%) were supportive of change, 42% did not have experience implementing large-scale innovation and 43% indicated concerns about adequate support staff. Programs had completed an average of 72% of pre-implementation tasks. No difference was found between disciplines (p = 0.11). Activities related to curriculum mapping, competence committees and programmatic assessment had been completed by >90% of programs, while <50% of programs had engaged off-service rotations. Conclusion: Measuring readiness for change aids in the identification of factors that promote or inhibit successful implementation. These results highlight several areas where programs struggle in preparation for CBD launch. Emergency medicine training programs can use these data to target additional implementation support and ongoing faculty development initiatives.
Introduction: Little is known about how Royal College emergency medicine (RCEM) residency programs are selecting their residents. This creates uncertainty regarding alignment between our current selection processes and known best practices and results in a process that is difficult to navigate for prospective candidates. We seek to describe the current selection processes of Canadian RCEM programs. Methods: An online survey was distributed to all RCEM program directors and assistant directors. The survey instrument included 22 questions consisting of both open-ended (free text) and closed-ended (Likert scale) elements. Questions sought qualitative and quantitative data from the following six domains: paper application, letters of reference, elective selection, interview, rank order, and selection process evaluation. Descriptive statistics were used. Results: We received responses from 13/14 programs for an aggregate response rate of 92.9%. A candidate's letter of reference was identified as the single most important item from the paper application (38.5%). Having a high level of familiarity with the applicant was considered to be the most important characteristic of a reference letter author (46.2%). Respondents found that providing a percentile rank of the applicant was useful when reviewing candidate reference letters. Once the interview stage is reached, 76.9% of programs stated that the interview was weighted at least as heavily as the paper application; 53.8% weighted the interview more heavily. Once final candidate scores are established for both the paper application and the interview, 100% of programs indicated that further adjustment is made to the rank order list. Only 1/13 programs reported ever having completed a formal evaluation of their selection process. Conclusion: The information gained from this study helps to characterize the landscape of the RCEM residency selection process. We identified significant heterogeneity between programs with respect to which application elements were most valued. Canadian emergency medicine residency programs should re-evaluate their selection processes to achieve improved consistency and better alignment with selection best practices.
Introduction: The Ottawa Emergency Department Shift Observation Tool (O-EDShOT) was recently developed to assess a resident's ability to safely run an ED shift and is supported by multiple sources of validity evidence. The O-EDShOT uses entrustability scales, which reflect the degree of supervision required for a given task. It was found to discriminate between learners of different levels and to differentiate between residents who were rated as able to safely run the shift and those who were not. In June 2018 we replaced norm-based daily encounter cards (DECs) with the O-EDShOT. With the ideal assessment tool, most of the score variability would be explained by variability in learners’ performances. In reality, however, much of the observed variability is explained by other factors. The purpose of this study was to determine what proportion of total score variability is accounted for by learner variability when using norm-based DECs vs the O-EDShOT. Methods: This was a prospective pre-/post-implementation study, including all daily assessments completed between July 2017 and June 2019 at The Ottawa Hospital ED. A generalizability analysis (G study) was performed to determine what proportion of total score variability is accounted for by the various factors in this study (learner, rater, form, PGY level) for both the pre- and post-implementation phases. We collected 12 months of data for each phase, because we estimated that 6-12 months would be required to observe a measurable increase in entrustment scale scores within a learner. Results: A total of 3908 and 3679 assessments were completed by 99 and 116 assessors in the pre- and post-implementation phases, respectively. Our G study revealed that 21% of total score variance was explained by a combination of post-graduate year (PGY) level and the individual learner in the pre-implementation phase, compared to 59% in the post-implementation phase. On average, 51 vs 27 forms per learner are required to achieve a reliability of 0.80 in the pre- and post-implementation phases, respectively. Conclusion: A significantly greater proportion of total score variability is explained by variability in learners’ performances with the O-EDShOT compared to norm-based DECs. The O-EDShOT also requires fewer assessments to generate a reliable estimate of the learner's ability. This study suggests that the O-EDShOT is a more useful assessment tool than norm-based DECs and could be adopted in other emergency medicine training programs.
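The link between the learner-attributable share of variance and the number of forms needed for a reliable score follows from the standard D-study projection used alongside G studies. The sketch below illustrates that relationship under the simplifying assumption that all non-learner variance is undifferentiated error; the study's own 51 vs 27 figures come from a full multi-facet analysis, so the numbers here are illustrative only.

```python
import math

def forms_needed(var_learner: float, var_error: float, target_g: float = 0.80) -> int:
    """Forms per learner needed to reach a target generalizability (G) coefficient.

    D-study projection: G(n) = var_learner / (var_learner + var_error / n),
    which rearranges to n = (target_g / (1 - target_g)) * (var_error / var_learner).
    """
    n = (target_g / (1.0 - target_g)) * (var_error / var_learner)
    return math.ceil(n)

# The larger the share of total variance attributable to the learner,
# the fewer forms are needed to reach G = 0.80 (illustrative inputs).
print(forms_needed(var_learner=0.2, var_error=0.8))  # 16 forms
print(forms_needed(var_learner=0.6, var_error=0.4))  # 3 forms
```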
Acute change in mental status (ACMS), defined by the Confusion Assessment Method, is used to identify infections in nursing home residents. A medical record review revealed that none of 15,276 residents had an ACMS documented. Using the revised McGeer criteria with a possible ACMS definition, we identified 296 residents and 21 additional infections. The use of a possible ACMS definition should be considered for retrospective nursing home infection surveillance.
Implementation of genome-scale sequencing in clinical care has significant challenges: the technology is highly dimensional with many kinds of potential results, results interpretation and delivery require expertise and coordination across multiple medical specialties, clinical utility may be uncertain, and there may be broader familial or societal implications beyond the individual participant. Transdisciplinary consortia and collaborative team science are well poised to address these challenges. However, the complex web of organizational, institutional, physical, environmental, technologic, and other political and societal factors that influence the effectiveness of consortia remains understudied. We describe our experience working in the Clinical Sequencing Evidence-Generating Research (CSER) consortium, a multi-institutional translational genomics consortium.
A key aspect of the CSER consortium was the juxtaposition of site-specific measures with the need to identify consensus measures related to clinical utility and to create a core set of harmonized measures. During this harmonization process, we sought to minimize participant burden, accommodate project-specific choices, and use validated measures that allow data sharing.
Identifying platforms to ensure swift communication between teams and to manage materials and data was essential to our harmonization efforts. Funding agencies can help consortia by clarifying key study design elements across projects during the proposal preparation phase and by providing a framework for sharing data across participating projects.
In summary, time and resources must be devoted to developing and implementing collaborative practices as preparatory work at the beginning of project timelines to improve the effectiveness of research consortia.
Cranberries are high in polyphenols, and epidemiological studies have shown that a high-polyphenol diet may reduce risk factors for diabetes and CVD. The present study aimed to determine if short-term cranberry beverage consumption would improve insulin sensitivity and other cardiovascular risk factors. Thirty-five individuals with obesity and with elevated fasting glucose or impaired glucose tolerance participated in a randomised, double-blind, placebo-controlled, parallel-designed pilot trial. Participants consumed 450 ml of low-energy cranberry beverage or placebo daily for 8 weeks. Changes in insulin sensitivity and cardiovascular risk factors including vascular reactivity, blood pressure, RMR, glucose tolerance, lipid profiles and oxidative stress biomarkers were evaluated. Change in insulin sensitivity, measured via the hyperinsulinaemic–euglycaemic clamp, did not differ between the two groups. Levels of 8-isoprostane (a biomarker of lipid peroxidation) decreased in the cranberry group but increased in the placebo group (–2·18 v. +20·81 pg/ml; P = 0·02). When stratified by baseline C-reactive protein (CRP) levels, participants with high CRP levels (>4 mg/l) benefited more from cranberry consumption. In this group, significant differences in the mean change from baseline between the cranberry (n 10) and the placebo groups (n 7) were observed in levels of TAG (–13·75 v. +10·32 %; P = 0·04), nitrate (+3·26 v. −6·28 µmol/l; P = 0·02) and 8-isoprostane (+0·32 v. +30·8 pg/ml; P = 0·05). These findings indicate that 8 weeks of daily cranberry beverage consumption may not impact insulin sensitivity but may be helpful in lowering TAG and changing certain oxidative stress biomarkers in individuals with obesity and a proinflammatory state.
Studies in children suggest that neurocognitive performance is a possible endophenotype for ADHD. We wished to establish a first connection between key genetic polymorphisms and neurocognitive performance in adults with ADHD.
We genotyped 45 adults with ADHD at four key candidate polymorphisms for the disorder (DRD4 48 bp repeat, DRD4 120 bp duplicated repeat, SLC6A3 40 bp VNTR, and COMT Val158Met). We then sub-grouped the sample for each polymorphism by genotype or by the presence of the (putative) ADHD risk allele and compared the performance of the subgroups on a large battery of neurocognitive tests.
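As a rough illustration of the subgrouping analysis described above, the sketch below splits a sample by presence of a putative risk allele at one polymorphism and compares a neurocognitive score between the subgroups. The column names and the choice of Welch's t-test are our assumptions; the abstract does not specify the statistical tests used.

```python
import pandas as pd
from scipy import stats

def compare_by_risk_allele(df: pd.DataFrame, test_col: str):
    """Compare one neurocognitive measure between risk-allele carriers and non-carriers.

    Expects an illustrative boolean column 'carrier' (presence of the putative
    risk allele for a given polymorphism) plus one column per test score.
    """
    carriers = df.loc[df['carrier'], test_col].dropna()
    noncarriers = df.loc[~df['carrier'], test_col].dropna()
    # Welch's t-test: does not assume equal variances across subgroups
    t_stat, p_value = stats.ttest_ind(carriers, noncarriers, equal_var=False)
    return t_stat, p_value
```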
The COMT Val158Met polymorphism was related to differences in IQ and reaction time, both of the DRD4 polymorphisms (48 bp repeat and 120 bp duplication) showed an association with verbal memory skills, and the SLC6A3 40 bp VNTR polymorphism could be linked to differences in inhibition.
Our findings contribute to the complicated search for possible endophenotypes for (adult) ADHD.
Treatment-resistant schizophrenia (TRS) is one of the most disabling of psychiatric disorders, affecting about one-third of patients with schizophrenia. First-line treatments include both atypical and typical antipsychotics. The original atypical, clozapine, is a final option; although it is the only treatment shown to be effective for TRS, many patients do not respond well to it. Clozapine use is associated with adverse events, most notably agranulocytosis, a potentially fatal blood disorder that affects about 1% of those prescribed clozapine and requires regular blood monitoring. This acts as a barrier to prescription, and TRS patients face a long delay in access, of five or more years from first antipsychotic prescription. Better tools to predict treatment resistance and to identify risk of adverse events would allow faster and safer access to clozapine for patients who are likely to benefit from it. The CRESTAR project (www.crestar-project.eu) is a European Framework 7 collaborative project that aims to develop tools to predict i) treatment response, particularly for patients who are less likely to respond to usual antipsychotics, indicating treatment with clozapine as early as possible; ii) patients who are at high or low risk of adverse events and side effects; and iii) extreme TRS patients, so that they can be stratified in clinical trials for novel treatments. CRESTAR has addressed these questions by examining genome-wide association data, genome sequence, epigenetic biomarkers and epidemiological data in European patient cohorts characterized for treatment response and adverse drug reactions, using data from clozapine therapeutic drug monitoring and linked national population medical and pharmacy databases, to identify predictive factors. In parallel, CRESTAR will perform health economic research on potential benefits, and ethics and patient-centred research with stakeholders.