The primary objective was to analyze the impact of the national cyberattack in May 2021 on patient flow and data quality in the Paediatric Emergency Department (ED), amid the SARS-CoV-2 (COVID-19) pandemic.
A single-site retrospective time series analysis was conducted of three 6-week periods: before, during, and after the cyberattack outage. Initial emergent workflows are described. Analysis includes diagnoses, demographic context, key performance indicators, and the effect of the gradual return of information technology capability on ED performance. Data quality was compared using 10 data quality dimensions.
Patient visits totaled 13 390. During the system outage, patient experience times decreased significantly, from a median of 188 minutes (pre-cyberattack) down to 166 minutes, most notable for the period from registration to triage, and from clinician review to discharge (excluding admitted patients). Following system restoration, most timings increased. Data quality was significantly impacted, with data imperfections noted in 19.7% of data recorded during the system outage compared to 4.7% before and 5.1% after.
There was a reduction in patient experience time, but data quality suffered greatly. A hospital’s major emergency plan should include provisions for digital disasters that address essential data requirements and quality and maintain patient flow.
The Hierarchical Taxonomy of Psychopathology (HiTOP) has emerged out of the quantitative approach to psychiatric nosology. This approach identifies psychopathology constructs based on patterns of co-variation among signs and symptoms. The initial HiTOP model, which was published in 2017, is based on a large literature that spans decades of research. HiTOP is a living model that undergoes revision as new data become available. Here we discuss advantages and practical considerations of using this system in psychiatric practice and research. We especially highlight limitations of HiTOP and ongoing efforts to address them. We describe differences and similarities between HiTOP and existing diagnostic systems. Next, we review the types of evidence that informed development of HiTOP, including populations in which it has been studied and data on its validity. The paper also describes how HiTOP can facilitate research on genetic and environmental causes of psychopathology as well as the search for neurobiologic mechanisms and novel treatments. Furthermore, we consider implications for public health programs and prevention of mental disorders. We also review data on clinical utility and illustrate clinical application of HiTOP. Importantly, the model is based on measures and practices that are already used widely in clinical settings. HiTOP offers a way to organize and formalize these techniques. This model already can contribute to progress in psychiatry and complement traditional nosologies. Moreover, HiTOP seeks to facilitate research on linkages between phenotypes and biological processes, which may enable construction of a system that encompasses both biomarkers and precise clinical description.
Approximately one-third of individuals in a major depressive episode will not achieve sustained remission despite multiple, well-delivered treatments. These patients experience prolonged suffering and disproportionately utilize mental and general health care resources. The recently proposed clinical heuristic of ‘difficult-to-treat depression’ (DTD) aims to broaden our understanding and focus attention on the identification, clinical management, treatment selection, and outcomes of such individuals. Clinical trial methodologies developed to detect short-term therapeutic effects in treatment-responsive populations may not be appropriate in DTD. This report reviews three essential challenges for clinical intervention research in DTD: (1) how to define and subtype this heterogeneous group of patients; (2) how, when, and by what methods to select, acquire, compile, and interpret clinically meaningful outcome metrics; and (3) how to choose among alternative clinical trial design options to promote causal inference and generalizability. The boundaries of DTD are uncertain, and an evidence-based taxonomy and reliable assessment tools are preconditions for clinical research and subtyping. Traditional outcome metrics in treatment-responsive depression may not apply to DTD, as they largely reflect only short-term symptomatic change and do not incorporate durability of benefit, side effect burden, or sustained impact on quality of life or daily function. Trial methodology will also require modification, as trials will likely be of longer duration to examine sustained impact, raising complex issues regarding control group selection, blinding and its integrity, and concomitant treatments.
Pharmacogenomic testing has emerged to aid medication selection for patients with major depressive disorder (MDD) by identifying potential gene-drug interactions (GDI). Many pharmacogenomic tests are available with varying levels of supporting evidence, including direct-to-consumer and physician-ordered tests. We retrospectively evaluated the safety of using a physician-ordered combinatorial pharmacogenomic test (GeneSight) to guide medication selection for patients with MDD in a large, randomized, controlled trial (GUIDED).
Materials and Methods
Patients diagnosed with MDD who had an inadequate response to ≥1 psychotropic medication were randomized to treatment as usual (TAU) or combinatorial pharmacogenomic test-guided care (guided-care). All received combinatorial pharmacogenomic testing and medications were categorized by predicted GDI (no, moderate, or significant GDI). Patients and raters were blinded to study arm, and physicians were blinded to test results for patients in TAU, through week 8. Measures included adverse events (AEs, present/absent), worsening suicidal ideation (increase of ≥1 on the corresponding HAM-D17 question), or symptom worsening (HAM-D17 increase of ≥1). These measures were evaluated based on medication changes [add only, drop only, switch (add and drop), any, and none] and study arm, as well as baseline medication GDI.
Most patients had a medication change between baseline and week 8 (938/1,166; 80.5%), including 269 (23.1%) who added only, 80 (6.9%) who dropped only, and 589 (50.5%) who switched medications. In the full cohort, changing medications resulted in an increased relative risk (RR) of experiencing AEs at both week 4 and 8 [RR 2.00 (95% CI 1.41–2.83) and RR 2.25 (95% CI 1.39–3.65), respectively]. This was true regardless of arm, with no significant difference observed between guided-care and TAU, though the RRs for guided-care were lower than for TAU. Medication change was not associated with increased suicidal ideation or symptom worsening, regardless of study arm or type of medication change. Special attention was focused on patients who entered the study taking medications identified by pharmacogenomic testing as likely having significant GDI; those who were only taking medications subject to no or moderate GDI at week 8 were significantly less likely to experience AEs than those who were still taking at least one medication subject to significant GDI (RR 0.39, 95% CI 0.15–0.99, p=0.048). No other significant differences in risk were observed at week 8.
These data indicate that patient safety in the combinatorial pharmacogenomic test-guided care arm was no worse than TAU in the GUIDED trial. Moreover, combinatorial pharmacogenomic-guided medication selection may reduce some safety concerns. Collectively, these data demonstrate that combinatorial pharmacogenomic testing can be adopted safely into clinical practice without risking symptom degradation among patients.
Heuristics and cognitive biases constantly influence clinical decision-making and often facilitate judgements under uncertainty. They can frequently, however, lead to diagnostic errors and adverse outcomes, particularly when considering rare disease processes that have common, masquerading presentations. Herein, we present two such cases of newborn infants with hypertensive renal disorders that were initially thought to have cardiomyopathy.
Shortages of personal protective equipment during the coronavirus disease 2019 (COVID-19) pandemic have led to the extended use or reuse of single-use respirators and surgical masks by frontline healthcare workers. The evidence base underpinning such practices warrants examination.
To synthesize current guidance and systematic review evidence on extended use, reuse, or reprocessing of single-use surgical masks or filtering face-piece respirators.
We used the World Health Organization, the European Centre for Disease Prevention and Control, the US Centers for Disease Control and Prevention, and Public Health England websites to identify guidance. We used Medline, PubMed, Epistemonikos, Cochrane Database, and preprint servers for systematic reviews.
Two reviewers conducted screening and data extraction. The quality of included systematic reviews was appraised using AMSTAR-2. Findings were narratively synthesized.
In total, 6 guidance documents were identified. Levels of detail and consistency across documents varied. Also included were 4 high-quality systematic reviews: 3 focused on reprocessing (decontamination) of N95 respirators and 1 focused on reprocessing of surgical masks. Vaporized hydrogen peroxide and ultraviolet germicidal irradiation were highlighted as the most promising reprocessing methods, but evidence on the relative efficacy and safety of different methods was limited. We found no well-established methods for reprocessing respirators at scale.
Evidence on the impact of extended use and reuse of surgical masks and respirators is limited, and gaps and inconsistencies exist in current guidance. Where extended use or reuse is being practiced, healthcare organizations should ensure that policies and systems are in place to ensure these practices are carried out safely and in line with available guidance.
We evaluated the safety and feasibility of high-intensity interval training via a novel telemedicine ergometer (MedBIKE™) in children with Fontan physiology.
The MedBIKE™ is a custom telemedicine ergometer, incorporating a video game platform and live feed of patient video/audio, electrocardiography, pulse oximetry, and power output, for remote medical supervision and modulation of work. There were three study phases: (I) exercise workload comparison between the MedBIKE™ and a standard cardiopulmonary exercise ergometer in 10 healthy adults. (II) In-hospital safety, feasibility, and user experience (via questionnaire) assessment of a MedBIKE™ high-intensity interval training protocol in children with Fontan physiology. (III) Eight-week home-based high-intensity interval trial programme in two participants with Fontan physiology.
There was good agreement in oxygen consumption during graded exercise at matched work rates between the cardiopulmonary exercise ergometer and MedBIKE™ (1.1 ± 0.5 L/minute versus 1.1 ± 0.5 L/minute, p = 0.44). Ten youth with Fontan physiology (11.5 ± 1.8 years old) completed a MedBIKE™ high-intensity interval training session with no adverse events. The participants found the MedBIKE™ to be enjoyable and easy to navigate. In two participants, the 8-week home-based protocol was tolerated well with completion of 23/24 (96%) and 24/24 (100%) of sessions, respectively, and no adverse events across the 47 sessions in total.
The MedBIKE™ resulted in similar physiological responses as compared to a cardiopulmonary exercise test ergometer and the high-intensity interval training protocol was safe, feasible, and enjoyable in youth with Fontan physiology. A randomised-controlled trial of a home-based high-intensity interval training exercise intervention using the MedBIKE™ will next be undertaken.
The Genomics Used to Improve DEpression Decisions (GUIDED) trial assessed outcomes associated with combinatorial pharmacogenomic (PGx) testing in patients with major depressive disorder (MDD). Analyses used the 17-item Hamilton Depression (HAM-D17) rating scale; however, studies demonstrate that the abbreviated, core depression symptom-focused, HAM-D6 rating scale may have greater sensitivity toward detecting differences between treatment and placebo. However, the sensitivity of HAM-D6 has not been tested for two active treatment arms. Here, we evaluated the sensitivity of the HAM-D6 scale, relative to the HAM-D17 scale, when assessing outcomes for actively treated patients in the GUIDED trial.
Outpatients (N=1,298) diagnosed with MDD and an inadequate treatment response to ≥1 psychotropic medication were randomized into treatment as usual (TAU) or combinatorial PGx-guided (guided-care) arms. Combinatorial PGx testing was performed on all patients, though test reports were only available to the guided-care arm. All patients and raters were blinded to study arm until after week 8. Medications on the combinatorial PGx test report were categorized based on the level of predicted gene-drug interactions: ‘use as directed’, ‘moderate gene-drug interactions’, or ‘significant gene-drug interactions’. Patient outcomes were assessed by arm at week 8 using the HAM-D6 and HAM-D17 rating scales, including symptom improvement (percent change in scale), response (≥50% decrease in scale), and remission (HAM-D6 ≤4 and HAM-D17 ≤7).
At week 8, the guided-care arm demonstrated statistically significant symptom improvement over TAU using the HAM-D6 scale (Δ=4.4%, p=0.023), but not using the HAM-D17 scale (Δ=3.2%, p=0.069). The response rate increased significantly for guided-care compared with TAU using both HAM-D6 (Δ=7.0%, p=0.004) and HAM-D17 (Δ=6.3%, p=0.007). Remission rates were also significantly greater for guided-care versus TAU using both scales (HAM-D6 Δ=4.6%, p=0.031; HAM-D17 Δ=5.5%, p=0.005). Patients taking medication(s) predicted to have gene-drug interactions at baseline showed further increased benefit over TAU at week 8 using HAM-D6 for symptom improvement (Δ=7.3%, p=0.004), response (Δ=10.0%, p=0.001), and remission (Δ=7.9%, p=0.005). Comparatively, the magnitude of the differences in outcomes between arms at week 8 was lower using HAM-D17 (symptom improvement Δ=5.0%, p=0.029; response Δ=8.0%, p=0.008; remission Δ=7.5%, p=0.003).
Combinatorial PGx-guided care achieved significantly better patient outcomes compared with TAU when assessed using the HAM-D6 scale. These findings suggest that the HAM-D6 scale is better suited than the HAM-D17 for evaluating change in randomized, controlled trials comparing active treatment arms.
Major depressive disorder (MDD) is a leading cause of disease burden worldwide, with lifetime prevalence in the United States of 17%. Here we present the results of the first prospective, large-scale, patient- and rater-blind, randomized controlled trial evaluating the clinical importance of achieving congruence between combinatorial pharmacogenomic (PGx) testing and medication selection for MDD.
1,167 outpatients diagnosed with MDD and an inadequate response to ≥1 psychotropic medications were enrolled and randomized 1:1 to a Treatment as Usual (TAU) arm or PGx-guided care arm. Combinatorial PGx testing categorized medications in three groups based on the level of gene-drug interactions: use as directed, use with caution, or use with increased caution and more frequent monitoring. Patient assessments were performed at weeks 0 (baseline), 4, 8, 12 and 24. Patients, site raters, and central raters were blinded in both arms until after week 8. In the guided-care arm, physicians had access to the combinatorial PGx test result to guide medication selection. Primary outcomes utilized the Hamilton Depression Rating Scale (HAM-D17) and included symptom improvement (percent change in HAM-D17 from baseline), response (≥50% decrease in HAM-D17 from baseline), and remission (HAM-D17 ≤7) at the fully blinded week 8 time point. The durability of patient outcomes was assessed at week 24. Medications were considered congruent with PGx test results if they were in the ‘use as directed’ or ‘use with caution’ report categories, while medications in the ‘use with increased caution and more frequent monitoring’ category were considered incongruent. Patients who started on incongruent medications were analyzed separately according to whether they changed to congruent medications by week 8.
At week 8, symptom improvement for individuals in the guided-care arm was not significantly different than TAU (27.2% versus 24.4%, p=0.11). However, individuals in the guided-care arm were more likely than those in TAU to achieve remission (15% versus 10%; p<0.01) and response (26% versus 20%; p=0.01). Remission rates, response rates, and symptom reductions continued to improve in the guided-treatment arm until the 24-week time point. Congruent prescribing increased to 91% in the guided-care arm by week 8. Among patients who were taking one or more incongruent medications at baseline, those who changed to congruent medications by week 8 demonstrated significantly greater symptom improvement (p<0.01), response (p=0.04), and remission rates (p<0.01) compared to those who persisted on incongruent medications.
Combinatorial PGx testing improves short- and long-term response and remission rates for MDD compared to standard of care. In addition, prescribing congruency with PGx-guided medication recommendations is important for achieving symptom improvement, response, and remission for MDD patients.
Funding Acknowledgements: This study was supported by Assurex Health, Inc.
A mass casualty event can result in an overwhelming number of critically injured pediatric victims that exceeds the available capacity of pediatric critical care (PCC) units, both locally and regionally. To address these gaps, the New York City (NYC) Pediatric Disaster Coalition (PDC) was established. The PDC includes experts in emergency preparedness, critical care, surgery, and emergency medicine from 18 of 25 major NYC PCC-capable hospitals. A PCC surge committee created recommendations for making additional PCC beds available with an emphasis on space, staff, stuff (equipment), and systems. The PDC assisted 15 hospitals in creating PCC surge plans by utilizing template plans and site visits. These plans created an additional 153 potential PCC surge beds. Seven hospitals tested their plans through drills. The purpose of this article was to demonstrate the need for planning for disasters involving children and to provide a stepwise, replicable model for establishing a PDC, with one of its primary goals focused on facilitating PCC surge planning. The process we describe for developing a PDC can be replicated in communities of any size, setting, or location. We offer our model as an example for other cities. (Disaster Med Public Health Preparedness. 2017;11:473–478)
Empirical social science often relies on data that are not observed in the field, but are transformed into quantitative variables by expert researchers who analyze and interpret qualitative raw sources. While generally considered the most valid way to produce data, this expert-driven process is inherently difficult to replicate or to assess on grounds of reliability. Using crowd-sourcing to distribute text for reading and interpretation by massive numbers of nonexperts, we generate results comparable to those using experts to read and interpret the same texts, but do so far more quickly and flexibly. Crucially, the data we collect can be reproduced and extended transparently, making crowd-sourced datasets intrinsically reproducible. This focuses researchers’ attention on the fundamental scientific objective of specifying reliable and replicable methods for collecting the data needed, rather than on the content of any particular dataset. We also show that our approach works straightforwardly with different types of political text, written in different languages. While findings reported here concern text analysis, they have far-reaching implications for expert-generated data in the social sciences.
To outline the evolution of school food standards and their implementation and evaluation in each of the four countries of the UK since 2000.
Review of relevant policies, surveys and evaluations, including country-specific surveys and regional evaluations.
UK: England, Wales, Scotland and Northern Ireland.
Primary and secondary schools and schoolchildren.
By September 2013 standards will have been introduced in all primary and secondary schools in the UK. Evaluations have varied in their scope and timing, relating to government forward planning, appropriate baselines and funding. Where standards have been implemented, the quality and nutritional value of food provided have improved. Emerging evidence shows improved overall diet and nutrient intake by school-aged children as a result.
The re-introduction of school food standards in the UK has not been centrally coordinated, but by September 2013 will be compulsory across all four countries in the UK, except in England where academies are now exempt. Provision of improved school food has had a demonstrable impact on diet and nutrition beyond the school dining room and the school gate, benefiting children from all socio-economic groups. Improved school food and dining environments are associated with higher levels of school lunch take up. Implementation of school food standards requires investment. It is critical to policy development that the value of this investment is measured and protected using planned, appropriate, robust and timely evaluations. Where appropriate, evaluations should be carried out across government departments and between countries.
We used measurements of radar-detected stratigraphy, surface ice-flow velocities and accumulation rates to investigate relationships between local valley-glacier and regional ice-sheet dynamics in and around the Schmidt Hills, Pensacola Mountains, Antarctica. Ground-penetrating radar profiles were collected perpendicular to the long axis of the Schmidt Hills and the margin of Foundation Ice Stream (FIS). Within the valley confines, the glacier consists of blue ice, and profiles show internal stratigraphy dipping steeply toward the nunataks and truncated at the present-day ablation surface. Below the valley confines, the blue ice is overlain by firn. Data show that upward-progressing overlap of actively accumulating firn onto valley-glacier ice is slightly less than ice flow out of the valleys over the past ∼1200 years. The apparent slightly negative mass balance (−0.25 cm a⁻¹) suggests that ice-margin elevations in the Schmidt Hills may have lowered over this time period, even without a change in the surface elevation of FIS. Results suggest that (1) mass-balance gradients between local valley glaciers and regional ice sheets should be considered when using local information to estimate regional ice surface elevation changes; and (2) interpretation of shallow ice structures imaged with radar can provide information about local ice elevation changes and stability.
Background: People with dementia have a range of needs that are met by informal caregivers. A DVD-based training program was developed using research-based strategies for memory and communication in dementia. The effectiveness of the training on the caregiver experience and the well-being of the person with dementia was evaluated.
Methods: A pre-test/post-test controlled trial was undertaken with caregiver–care-recipient dyads living in the community. Measures of the carers’ knowledge of memory and communication strategies, burden, positive perceptions of caregiving, and perceptions of problem behaviors were taken pre- and three months post-intervention. The depression and well-being of the person with dementia were also evaluated. Satisfaction with the training and feedback were measured.
Results: Twenty-nine dyads (13 training group, 16 control group) participated. Bonferroni's correction was made to adjust for multiple comparisons, setting α at 0.00385. A significant improvement was found in caregivers’ knowledge for the training group compared to the control group (p = 0.0011). The training group caregivers reported a reduction in the frequency of care recipient disruptive behaviors (p = 0.028) and increased perceptions of positive aspects of caregiving (p = 0.039), both at a level approaching significance. The training group care recipients had increased frequency of verbally communicated depressive behaviors at a level approaching significance (p = 0.0126). The frequency of observed depressive behaviors was not significantly different between groups.
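The adjusted α reported above is consistent with a standard Bonferroni correction, dividing a family-wise α of 0.05 by 13 comparisons; the comparison count is our inference from the reported threshold, not stated explicitly in the abstract. A minimal check under that assumption:

```python
# Bonferroni correction: divide the family-wise alpha by the number
# of comparisons. n_comparisons = 13 is an assumption inferred from
# the reported per-test threshold of 0.00385.
family_wise_alpha = 0.05
n_comparisons = 13

adjusted_alpha = family_wise_alpha / n_comparisons
print(round(adjusted_alpha, 5))  # 0.00385
```

On this reading, only the knowledge result (p = 0.0011) falls below the per-test threshold, which matches the abstract's description of the other results as "approaching significance."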
Conclusions: This approach to training for caregivers of people with dementia appears promising for its impact on knowledge and the caregiving experience. Further research could monitor the impact of the training on broader measures of depression and well-being, with a larger sample.
The Birth of Tragedy was written in (and, in a sense, against) a number of contexts: the military context of the Franco-Prussian War; the political context of the proclamation of the German Reich in Versailles on 18 January 1871, of the declaration of the Paris Commune on 18 March of the same year, and of the growing revolutionary movement in Europe; and the academic-political context of Basel, especially the philological circles in which Nietzsche had to operate. As early as on 20 November 1868, after his first meeting with Wagner in the Brockhaus household, Nietzsche wrote a letter to Rohde in which his dissatisfaction with scholars and with his academic colleagues was expressed with some force. Here he speaks of “the teeming broods of philologists of today, […] the entire molelike activity, with their full cheek-pouches and their blind eyes” (das wimmelnde Philologengezücht unserer Tage […], das ganze Maulwurfstreiben, die vollen Backentaschen und die blinden Augen), and what upset him was not just “their joy at the captured worm and their indifference to the real, indeed the insistent problems of life” (die Freude ob des erbeuteten Wurms und die Gleichgültigkeit gegen die wahren, ja aufdringlichen Probleme des Lebens; KSB 2, 344). This letter strikes one of the first notes in what will become a constant theme in Nietzsche's writings: the relationship between scholarly, academic activities and the tasks of the “real world”; ultimately, the relation of knowledge to life.
Although Nietzsche had sought, and found, solitude in Sils Maria, he had not given up on his project for a secular monastery. Before leaving Nice, he had written to Franz Overbeck about his hope, when he returned next winter, to establish “a society” (eine Gesellschaft) in which he would not be completely in hiding: possible members included the poet Paul Lanzky (1852–?), whom Nietzsche had gotten to know in Nice, and Heinrich Köselitz (known as Peter Gast), his trusted friend, perhaps even (as unlikely as it sounds) Paul Rée and Lou von Salomé (KSB 6, 494–95). And in a postcard to his mother and his sister in November 1884, he had envisaged Nice as the site of his “future ‘colony’” (zukünftige “Colonie”), which would consist of “pleasant people to whom I can teach my philosophy” (sympathische Menschen, vor denen ich meine Philosophie doziren kann; KSB 6, 563) — as we shall see, a very different colony from the one his sister had in mind …
But Nietzsche was aware that he needed to create a community of readers for his ideas. For, now that it was complete, Zarathustra was intended to act as “an entrance-way” (eine Vorhalle; KSB 6, 496 and 499) — or, to use a Goethean term, a propylaeum — to his philosophy as a whole, and in 1883, while he was completing part 3 of Zarathustra, he was working on “a larger philosophical project” (eine größere philosophische Arbeit; KSB 6, 414; cf. KSB 6, 427 and 429), provisionally entitled “The Innocence of Becoming: A Guide to Redemption from Morality” (“Die Unschuld des Werdens: Ein Wegweiser zur Erlösung von der Moral”; KSA 10, 8, 343) or “Morality for Moralists” (“Moral für Moralisten”; KSA 10, 7, 305–6 and 24, 660–61).
Of Nietzsche's publications to date (i.e., to 1886), Beyond Good and Evil (Jenseits von Gut und Böse) had enjoyed arguably the best critical reception. In a review for the Swiss journal Der Bund, Josef Victor Widmann described it as Nietzsche's “dangerous book” (gefährliches Buch), pointing out that the dynamite used in the construction of the Gotthardbahn, the railway line that traverses the Swiss Alps, always bore a black warning-flag to alert people to its danger — and that Nietzsche's book deserved a similar warning. Nietzsche was delighted by the review, both for commercial reasons (KSB 7, 249 and 256) and because, in essence, it said his book was dynamite (KSB 7, 251–52 and 258); in Ecce Homo, he would allude to Widmann's review and playfully apply his description of Beyond Good and Evil to himself: “I am not a man, I am dynamite” (Ich bin kein Mensch, ich bin Dynamit; EH “Why I Am a Destiny” §1; KSA 6, 365). Other reviews were positive, too, but Nietzsche's old friend, Erwin Rohde, was not impressed. The two friends were reunited when Nietzsche visited Leipzig again in 1886, but the encounters left both disappointed. Rohde told Franz Overbeck that there was “something totally uncanny” (etwas mir damals völlig unheimliches) about Nietzsche, “as if he came from a country where no one else lived” (als käme er aus einem Lande, wo sonst Niemand wohnt); for his part, Nietzsche wrote to Overbeck that Rohde's case simply proved that “the best go to seed in the atmosphere of the university.”