The efficient and effective movement of research into practice is crucial to improving population health and assuring return on investment in healthcare research. The National Center for Advancing Translational Science, which sponsors the Clinical and Translational Science Awards (CTSA), recognizes that dissemination and implementation (D&I) sciences have matured over the last 15 years and are central to its goal of shifting academic health institutions to better align with this reality. In 2016, the CTSA Collaboration and Engagement Domain Task Force chartered a D&I Science Workgroup to explore the role of D&I sciences across the translational research spectrum. This special communication discusses the conceptual distinctions and purposes of dissemination, implementation, and translational sciences. We propose an integrated framework, with real-world examples, for articulating the role of D&I sciences within and across the translational research spectrum. The framework’s central proposition is that D&I sciences are targeted “sub-sciences” of translational science that CTSAs and others can use to identify and investigate coherent strategies for more routinely and proactively accelerating research translation. The framework also highlights the importance of D&I thought leaders in extending D&I principles to all research stages.
Inadequate protein quality may be a risk factor for poor growth. To examine the effect of a macronutrient–micronutrient supplement, KOKO Plus (KP), provided to infants from 6 to 18 months of age, on linear growth, a single-blind cluster-randomised study was implemented in Ghana. A total of thirty-eight communities were randomly allocated to receive KP (fourteen communities, n 322), a micronutrient powder (MN, thirteen communities, n 329) or nutrition education (NE, eleven communities, n 319). A comparison group was followed cross-sectionally (n 303). Supplement delivery and morbidity were measured weekly and anthropometry monthly. NE was provided monthly. Baseline, midline and endline measurements at 6, 12 and 18 months included venous blood draws, diet, anthropometry, morbidity, food security and socio-economics. Length-for-age Z-score (LAZ) was the primary outcome. Analyses were intent-to-treat, using mixed-effects regressions adjusted for clustering, sex, age and baseline. No differences existed in mean LAZ scores at endline (−1·219 (sd 0·06) KP, −1·211 (sd 0·03) MN, −1·266 (sd 0·03) NE). Acute infection prevalence was lower in the KP than in the NE group (P = 0·043). Mean serum Hb was higher in KP infants free from acute infection (114·02 (sd 1·87) g/l) than in MN (107·8 (sd 2·5) g/l; P = 0·047) and NE infants (108·8 (sd 0·99) g/l; P = 0·051). Compliance was 84·9 % (KP) and 87·2 % (MN), but delivery was only 60 %. Adjusting for delivery and compliance, LAZ score at endline was significantly higher in the KP v. MN group (+0·2 LAZ; P = 0·026). The macronutrient–micronutrient supplement KP reduced acute infection, improved Hb and demonstrated a dose–response effect on LAZ when consumption was adjusted for delivery.
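A length-for-age Z-score expresses a child’s measured length in SD units relative to a growth-standard reference distribution. The sketch below illustrates the basic calculation only; the reference median and SD are hypothetical placeholders, not actual WHO growth-standard values (the WHO standards use a slightly more general LMS transformation):

```python
# Sketch of a length-for-age Z-score (LAZ) calculation.
# The reference values used below are illustrative placeholders,
# not actual WHO growth-standard values.
def laz(length_cm, ref_median_cm, ref_sd_cm):
    """Child's length expressed in SD units relative to the reference."""
    return (length_cm - ref_median_cm) / ref_sd_cm

# A child measuring 78 cm against a hypothetical reference median of
# 81 cm (SD 3 cm) has LAZ = -1.0, in the range reported above.
print(laz(78.0, 81.0, 3.0))  # -1.0
```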
We read with interest the recent editorial, “The Hennepin Ketamine Study,” by Dr. Samuel Stratton, commenting on the research ethics, methodology, and current public controversy surrounding this study.1 As researchers and investigators of this study, we strongly agree that prospective clinical research in the prehospital environment is necessary to advance the science of Emergency Medical Services (EMS) and emergency medicine. We also agree that accomplishing this is challenging, as prehospital research often involves patient populations who cannot provide meaningful informed consent due to their emergent conditions. To ensure that fellow emergency medicine researchers understand the facts of our work so they may plan future studies, and to address some of the questions and concerns raised in Dr. Stratton’s editorial, the lay press, and social media,2 we would like to call attention to some inaccuracies in Dr. Stratton’s editorial and in the lay media stories on which it appears to be based.
Ho JD, Cole JB, Klein LR, Olives TD, Driver BE, Moore JC, Nystrom PC, Arens AM, Simpson NS, Hick JL, Chavez RA, Lynch WL, Miner JR. The Hennepin Ketamine Study investigators’ reply. Prehosp Disaster Med. 2019;34(2):111–113
To identify potential participants for clinical trials, electronic health records (EHRs) are typically searched at candidate sites. As an alternative, we investigated whether medical devices used for real-time diagnostic decisions could support trial enrollment.
To project cohorts for a trial in acute coronary syndromes (ACS), we used electrocardiograph-based algorithms that identify ACS or ST-elevation myocardial infarction (STEMI) and prompt clinicians to offer patients trial enrollment. We searched six hospitals’ electrocardiograph systems for electrocardiograms (ECGs) meeting the planned trial’s enrollment criterion: ECGs with STEMI or > 75% probability of ACS by the acute cardiac ischemia time-insensitive predictive instrument (ACI-TIPI). We revised the ACI-TIPI regression to require only data available directly from the electrocardiograph, creating the e-ACI-TIPI, using the same data sets used for the original ACI-TIPI (development set, n = 3,453; test set, n = 2,315). We also tested both instruments on data from emergency department electrocardiographs from across the US (n = 8,556). We then used the ACI-TIPI and e-ACI-TIPI to identify potential cohorts for the ACS trial and compared their performance to cohorts identified from EHR data at the hospitals.
Receiver-operating characteristic (ROC) curve areas on the test set were excellent (0.89 for the ACI-TIPI and 0.84 for the e-ACI-TIPI), as was calibration. On the national electrocardiographic database, ROC areas were 0.78 and 0.69, respectively, with very good calibration. When tested for detection of patients with > 75% ACS probability, both electrocardiograph-based methods identified eligible patients well, and better than EHRs did.
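The ROC curve areas quoted above can be understood through the rank (Mann–Whitney) formulation of AUC: the probability that a randomly chosen case receives a higher score than a randomly chosen non-case. A minimal sketch with made-up scores, not the study’s data or code:

```python
# Sketch: ROC curve area via the rank (Mann-Whitney) formulation.
# AUC = P(score of random positive > score of random negative),
# counting ties as 1/2. Scores and labels here are illustrative.
def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Perfectly separated toy data gives AUC = 1.0; areas of 0.89 or 0.84,
# as reported above, indicate strong but imperfect ranking.
print(roc_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0
```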
Using data from medical devices such as electrocardiographs may provide accurate projections of available cohorts for clinical trials.
Inefficiencies in the national clinical research infrastructure have been apparent for decades. The Clinical and Translational Science Award (CTSA) program, sponsored by the National Center for Advancing Translational Science, is well positioned to address such inefficiencies. The Trial Innovation Network (TIN) is a collaborative initiative between the CTSA program and other National Institutes of Health (NIH) Institutes and Centers that addresses critical roadblocks to accelerate the translation of novel interventions to clinical practice. The TIN’s mission is to execute high-quality trials quickly and cost-efficiently. The TIN awardees comprise 3 Trial Innovation Centers, the Recruitment Innovation Center, and the individual CTSA institutions that have identified TIN Liaison units. The TIN has launched a national-scale single (central) Institutional Review Board system, master contracting agreements, quality-by-design approaches, and novel recruitment support methods, and it applies evidence-based strategies to recruitment and patient engagement. The TIN has received 113 submissions from 39 different CTSA institutions and 8 non-CTSA institutions, with projects associated with 12 different NIH Institutes and Centers across a wide range of clinical/disease areas. Already, more than 150 unique health systems/organizations are involved as sites in TIN-related multisite studies. The TIN will begin to capture data and metrics that quantify increased efficiency and quality improvement during operations.
Dignity therapy (DT) is designed to address psychological and existential challenges that terminally ill individuals face. DT guides patients in developing a written legacy project in which they record and share important memories and messages with those they will leave behind. DT has been demonstrated to ease existential concerns for adults with advanced-stage cancer; however, lack of institutional resources limits wide implementation of DT in clinical practice. This study explores qualitative outcomes of an abbreviated, less resource-intensive version of DT among participants with advanced-stage cancer and their legacy project recipients.
Qualitative methods were used to analyze postintervention interviews with 11 participants and their legacy recipients as well as the created legacy projects. Direct content analysis was used to assess feedback from the interviews about benefits, barriers, and recommendations regarding abbreviated DT. The legacy projects were coded for expression of core values.
Findings suggest that abbreviated DT effectively promotes (1) self-expression, (2) connection with loved ones, (3) sense of purpose, and (4) continuity of self. Participants observed that leading the development of their legacy projects promoted independent reflection, autonomy, and opportunities for family interaction when reviewing and discussing the projects. Consistent with traditional DT, participants expressed “family” as the most common core value in their legacy projects. Expression of “autonomy” was also a notable finding.
Significance of results
Abbreviated DT reduces resource barriers to conducting traditional DT while promoting similar benefits for participants and recipients, making it a promising adaptation warranting further research. The importance that patients place on family and autonomy should be honored as much as possible by those caring for adults with advanced-stage cancer.
OBJECTIVES/SPECIFIC AIMS: Clostridium difficile infection (CDI) is the most common cause of antibiotic-associated diarrhea and an increasingly common infection in children in both hospital and community settings. Between 20% and 30% of pediatric patients will have a recurrence of symptoms in the days to weeks following an initial infection. Multiple recurrences have been successfully treated with fecal microbiota transplantation (FMT), though the body of evidence in pediatric patients is limited primarily to case reports and case series. The goal of our study was to better understand the practices, success, and safety of FMT in children, as well as to identify risk factors associated with a failed FMT in our pediatric patients. METHODS/STUDY POPULATION: This multicenter retrospective analysis included 373 patients who underwent FMT for CDI between January 1, 2006 and January 1, 2017 at 18 pediatric centers. Demographics, baseline characteristics, FMT practices, C. difficile outcomes, and post-FMT complications were collected through chart abstraction. Successful FMT was defined as no recurrence of CDI within 60 days after FMT. Of the 373 patients in the cohort, 342 had known outcome data at two months post-FMT and were included in the primary analysis evaluating risk factors for recurrence post-FMT. An additional six patients who underwent FMT for refractory CDI were excluded from the primary analysis. Unadjusted analysis was performed using the Wilcoxon rank-sum test, Pearson χ2 test, or Fisher exact test, as appropriate. Stepwise logistic regression was used to determine independent predictors of success. RESULTS/ANTICIPATED RESULTS: The median age of included patients was 10 years (IQR: 3.0–15.0) and 50% of patients were female. The majority of the cohort was White (89.0%). Comorbidities included 120 patients with inflammatory bowel disease (IBD) and 14 patients who had undergone solid organ or stem cell transplantation.
Of the 336 patients with known outcomes at two months, 272 (81%) had a successful outcome. Of the 64 (19%) patients who did have a recurrence, 35 underwent repeat FMT, which was successful in 20 of the 35 (57%). The overall success rate of FMT in preventing further episodes of CDI in the cohort with known outcome data was 87%. Unadjusted predictors of a primary FMT response are summarized. Based on stepwise logistic regression modeling, the use of fresh stool, FMT delivery via colonoscopy, the lack of a feeding tube, and a lower number of CDI episodes before undergoing FMT were independently associated with a successful outcome. There were 20 adverse events in the cohort assessed as related to FMT, 6 of which were considered severe. There were no deaths assessed as related to FMT in the cohort. DISCUSSION/SIGNIFICANCE OF IMPACT: The overall success of FMT in pediatric patients with recurrent or severe CDI was 81% after a single FMT. Children without a feeding tube, and those who received an earlier FMT, FMT with fresh stool, or FMT via colonoscopy, were less likely to have a recurrence of CDI in the 2 months following FMT. This is the first large study of FMT for CDI in a pediatric cohort. These findings, if confirmed by additional prospective studies, will support alterations in the practice of FMT in children.
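The headline rates follow directly from the counts reported above; a quick arithmetic check:

```python
# Reproducing the cohort's headline success rates from the counts
# reported in the abstract.
success_first = 272      # successful after a single FMT
known_outcomes = 336     # patients with outcome data at two months
repeat_success = 20      # recurrences rescued by repeat FMT

single_rate = success_first / known_outcomes
overall_rate = (success_first + repeat_success) / known_outcomes

print(round(single_rate * 100))   # 81  (single-FMT success, %)
print(round(overall_rate * 100))  # 87  (overall success, %)
```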
BACKGROUND: Intracranial growing teratoma syndrome (IGTS) is a rare phenomenon of paradoxical germ cell tumor (GCT) growth during or following treatment despite normalization of tumor markers. We sought to evaluate the frequency, clinical characteristics and outcome of IGTS in patients at 21 North American and Australian institutions. METHODS: Patients with IGTS diagnosed from 2000-2017 were retrospectively evaluated. RESULTS: Out of 739 GCT diagnoses, IGTS was identified in 33 patients (4.5%). IGTS occurred in 9/191 (4.7%) mixed-malignant GCTs, 4/22 (18.2%) immature teratomas (ITs), 3/472 (0.6%) germinomas/germinomas with mature teratoma, and in 17 secreting non-biopsied tumours. Median age at GCT diagnosis was 10.9 years (range 1.8-19.4). Male gender (84%) and pineal location (88%) predominated. Of 27 patients with elevated markers, median serum AFP and beta-HCG were 70 ng/mL (range 9.2-932) and 44 IU/L (range 4.2-493), respectively. IGTS occurred at a median time of 2 months (range 0.5-32) from diagnosis, during chemotherapy in 85%, during radiation in 3%, and after treatment completion in 12%. Surgical resection was attempted in all, leading to gross total resection in 76%. Most patients (79%) resumed GCT chemotherapy/radiation after surgery. At a median follow-up of 5.3 years (range 0.3-12), all but 2 patients are alive (1 succumbed to progressive disease, 1 to malignant transformation of GCT). CONCLUSION: IGTS occurred in less than 5% of patients with GCT and most commonly after initiation of chemotherapy. IGTS was more common in patients with IT-only on biopsy than with mixed-malignant GCT. Surgical resection is a principal treatment modality. Survival outcomes for patients who developed IGTS are favourable.
We review the materials paradigm for metal amorphous nanocomposite (MANC) soft magnetic materials, showcasing their use in solid-state transformers (SSTs). We report 2D finite element analysis (FEA) of 3-phase SSTs operating at 50 Hz–10 kHz frequencies. We benchmark materials in designs to control high-frequency losses and achieve higher power densities. FEA models are solved in the time domain for line frequencies of 50 Hz–10 kHz and 100 kW output power for the first 4 cycles. Transformer topologies are coupled to a power analysis using a Steinmetz parameterization of magnetic losses, capturing induction and field scaling, for transformer-grade Si steel as compared to Metglas, ferrite, FINEMET, and Co- and FeNi-based MANCs. Recently discovered FeNi-based MANCs allow smaller transformers at equivalent power as compared to Si steel, Metglas, and Co-based MANCs. Fe-rich, non-Co-containing MANCs also offer economies based on lower raw materials costs compared with Co-based MANCs.
Different diagnostic interviews are used as reference standards for major depression classification in research. Semi-structured interviews involve clinical judgement, whereas fully structured interviews are completely scripted. The Mini International Neuropsychiatric Interview (MINI), a brief fully structured interview, is also sometimes used. It is not known whether interview method is associated with probability of major depression classification.
To evaluate the association between interview method and odds of major depression classification, controlling for depressive symptom scores and participant characteristics.
Data collected for an individual participant data meta-analysis of Patient Health Questionnaire-9 (PHQ-9) diagnostic accuracy were analysed, and binomial generalised linear mixed models were fitted.
A total of 17 158 participants (2287 with major depression) from 57 primary studies were analysed. Among fully structured interviews, odds of major depression were higher for the MINI compared with the Composite International Diagnostic Interview (CIDI) (odds ratio (OR) = 2.10; 95% CI = 1.15–3.87). Compared with semi-structured interviews, fully structured interviews (MINI excluded) were non-significantly more likely to classify participants with low-level depressive symptoms (PHQ-9 scores ≤6) as having major depression (OR = 3.13; 95% CI = 0.98–10.00), similarly likely for moderate-level symptoms (PHQ-9 scores 7–15) (OR = 0.96; 95% CI = 0.56–1.66) and significantly less likely for high-level symptoms (PHQ-9 scores ≥16) (OR = 0.50; 95% CI = 0.26–0.97).
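Odds ratios and Wald 95% confidence intervals of the kind reported here are obtained by exponentiating a logistic-model coefficient and its interval endpoints. A sketch with illustrative values chosen to land near the MINI-versus-CIDI result (they are not the study’s actual estimates):

```python
import math

# Sketch: odds ratio and Wald 95% CI from a logistic-model coefficient.
# beta and se are illustrative values, not the study's estimates; they
# are chosen to land near the reported OR = 2.10 (95% CI 1.15-3.87).
def or_with_ci(beta, se, z=1.96):
    """Exponentiate the coefficient and its Wald interval endpoints."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

or_, lo, hi = or_with_ci(beta=0.742, se=0.31)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```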
The MINI may identify more people as depressed than the CIDI, and semi-structured and fully structured interviews may not be interchangeable methods, but these results should be replicated.
Declaration of interest
Drs Jetté and Patten declare that they received a grant, outside the submitted work, from the Hotchkiss Brain Institute, which was jointly funded by the Institute and Pfizer. Pfizer was the original sponsor of the development of the PHQ-9, which is now in the public domain. Dr Chan is a steering committee member or consultant of Astra Zeneca, Bayer, Lilly, MSD and Pfizer. She has received sponsorships and honorarium for giving lectures and providing consultancy and her affiliated institution has received research grants from these companies. Dr Hegerl declares that within the past 3 years, he was an advisory board member for Lundbeck, Servier and Otsuka Pharma; a consultant for Bayer Pharma; and a speaker for Medice Arzneimittel, Novartis, and Roche Pharma, all outside the submitted work. Dr Inagaki declares that he has received grants from Novartis Pharma, lecture fees from Pfizer, Mochida, Shionogi, Sumitomo Dainippon Pharma, Daiichi-Sankyo, Meiji Seika and Takeda, and royalties from Nippon Hyoron Sha, Nanzando, Seiwa Shoten, Igaku-shoin and Technomics, all outside of the submitted work. Dr Yamada reports personal fees from Meiji Seika Pharma Co., Ltd., MSD K.K., Asahi Kasei Pharma Corporation, Seishin Shobo, Seiwa Shoten Co., Ltd., Igaku-shoin Ltd., Chugai Igakusha and Sentan Igakusha, all outside the submitted work. All other authors declare no competing interests. No funder had any role in the design and conduct of the study; collection, management, analysis and interpretation of the data; preparation, review or approval of the manuscript; and decision to submit the manuscript for publication.
To assess general medical residents’ familiarity with antibiograms using a self-administered survey
Cross-sectional, single-center survey
Residents in internal medicine, family medicine, and pediatrics at an academic medical center
Participants were administered an anonymous survey at our institution during regularly scheduled educational conferences between January and May 2012. Questions collected data regarding demographics and professional training; open-ended questions then assessed knowledge and use of antibiograms (possible pathogens, antibiotic regimens, and prescribing resources) for 2 clinical vignettes; a series of directed, closed-ended questions followed. Bivariate analyses were performed to compare responses between residency programs.
Of 122 surveys distributed, 106 residents (87%) responded; internal medicine residents accounted for 69% of responses. More than 20% of residents could not accurately identify pathogens to target with empiric therapy or select therapy with an appropriate spectrum of activity in response to the clinical vignettes; correct identification of potential pathogens was not associated with selecting appropriate therapy. Only 12% of respondents identified antibiograms as a resource when prescribing empiric antibiotic therapy for scenarios in the vignettes, with most selecting the UpToDate online clinical decision support resource or The Sanford Guide. When directly questioned, 89% reported awareness of institutional antibiograms, but only 70% felt comfortable using them and only 44% knew how to access them.
When selecting empiric antibiotics, many residents are not comfortable using antibiograms to inform treatment decisions. Efforts to improve antibiotic use may benefit from additional resident education on both infectious diseases pharmacotherapy and antibiogram utilization.
Use of ketamine in the prehospital setting may be advantageous due to its potent analgesic and sedative properties and favorable risk profile. Use in the military setting has demonstrated both efficacy and safety for pain relief. The purpose of this study was to assess ketamine training, use, and perceptions in the civilian setting among nationally certified paramedics (NRPs) in the United States.
A cross-sectional survey of NRPs was performed. The electronic questionnaire assessed paramedic training, authorization, use, and perceptions of ketamine. Included in the analysis were completed surveys of paramedics who held one or more state paramedic credentials, indicated “patient care provider” as their primary role, and worked in non-military settings. Descriptive statistics were calculated.
A total of 14,739 responses were obtained (response rate = 23%), of which 10,737 (73%) met inclusion criteria and constituted the study cohort. Over one-half (53%) of paramedics reported learning about ketamine during their initial paramedic training, while 42% reported seeking ketamine-related education on their own. Of all respondents, only 33% (3,421/10,737) were authorized by protocol to use ketamine. The most commonly authorized uses were rapid sequence intubation (RSI; 72%), chemical restraint/sedation (72%), and pain management (55%). One-third of authorized providers (1,107/3,350) had never administered ketamine, and another 32% (1,070/3,350) had administered ketamine fewer than five times in their career. Ketamine was perceived to be safe and effective: the vast majority reported being comfortable with the use of ketamine (94%) and said they would use it again in similar situations (95%).
This was the first large, national survey to assess ketamine training, use, and perceptions among paramedics in the civilian prehospital setting. While training related to ketamine use was commonly reported among paramedics, few were authorized to administer the drug by their agency’s protocols. Of those authorized to use ketamine, most paramedics had limited experience administering the drug. Future research is needed to determine why the prevalence of ketamine use is low and to assess the safety and efficacy of ketamine use in the prehospital setting.
Buckland DM, Crowe RP, Cash RE, Gondek S, Maluso P, Sirajuddin S, Smith ER, Dangerfield P, Shapiro G, Wanka C, Panchal AR, Sarani B. Ketamine in the Prehospital Environment: A National Survey of Paramedics in the United States. Prehosp Disaster Med. 2018;33(1):23–28.
To identify clinical variables that influence blood culture volume recovery
Retrospective chart review and linear model analysis
A 621-bed academic medical center with a clinical laboratory that processes more than 20,000 blood cultures annually and dedicated phlebotomy staff for venipuncture
Consecutive patients requiring blood culture
Over a 6-day period, blood volume was determined in 568 culture bottles from 128 unique adult patients, and clinical data from the time of phlebotomy were extracted from hospital electronic medical records. Conditional hierarchical linear models with random effects for patient and phlebotomy occasion were utilized to analyze correlations between values collected from the same patient and during the same phlebotomy occasion.
Blood samples obtained from a central venous catheter yielded, on average, 2.53 mL more blood (95% CI, 1.63–3.44 mL; P<.001) than those from peripheral venipuncture, and aerobic bottles contained 0.38 mL more blood (95% CI, 0.10–0.67 mL; P=.009) than anaerobic bottles. The remaining clinical variables (eg, hospital department, patient age, body mass index, gender, mean arterial pressure, concomitant systemic antibiotic use, and Charlson comorbidity index score) did not reach statistical significance at the P<.05 level in relation to volume.
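When, as here, an estimate is reported with a symmetric Wald-type 95% CI, the standard error implied by the interval can be recovered from its width; a small sketch using the central-line figures above, under the assumption of a normal-approximation interval:

```python
# Sketch: recovering the standard error implied by a Wald-type 95% CI
# (estimate +/- 1.96*SE), assuming a symmetric normal approximation.
def se_from_ci(lower, upper, z=1.96):
    return (upper - lower) / (2 * z)

# Central-line difference: 2.53 mL (95% CI, 1.63-3.44 mL)
print(round(se_from_ci(1.63, 3.44), 2))  # 0.46 (mL)
```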
Blood cultures obtained from central venous catheters contain significantly greater volume than those obtained via peripheral venipuncture. These data highlight the clinically significant issue of low culture volume recovery, indicate that diagnostic and prognostic tools that rely on volume-dependent phenomena (ie, time to positivity) may require further validation under usual clinical practice circumstances, and suggest goals for future institutional performance improvement.
Whether monozygotic (MZ) and dizygotic (DZ) twins differ from each other in a variety of phenotypes is important for genetic twin modeling and for inferences made from twin studies in general. We analyzed whether there were differences in individual, maternal and paternal education between MZ and DZ twins in a large pooled dataset. Information was gathered on individual education for 218,362 adult twins from 27 twin cohorts (53% females; 39% MZ twins), and on maternal and paternal education for 147,315 and 143,056 twins, respectively, from 28 twin cohorts (52% females; 38% MZ twins). Together, we had information on individual or parental education from 42 twin cohorts representing 19 countries. The original education classifications were transformed into education years and analyzed using linear regression models. Overall, MZ males had 0.26 (95% CI [0.21, 0.31]) years and MZ females 0.17 (95% CI [0.12, 0.21]) years longer education than DZ twins. The zygosity difference became smaller in more recent birth cohorts for both males and females. Parental education was somewhat longer for fathers of DZ twins in cohorts born in 1990–1999 (0.16 years, 95% CI [0.08, 0.25]) and 2000 or later (0.11 years, 95% CI [0.00, 0.22]), compared with fathers of MZ twins. The results show that the years of both individual and parental education are largely similar in MZ and DZ twins. We suggest that the socio-economic differences between MZ and DZ twins are so small that inferences based upon genetic modeling of twin data are not affected.
We report the discovery in the Greenland ice sheet of a discrete layer of free nanodiamonds (NDs) in very high abundances, implying most likely either an unprecedented influx of extraterrestrial (ET) material or a cosmic impact event that occurred after the last glacial episode. From that layer, we extracted n-diamonds and hexagonal diamonds (lonsdaleite), an accepted ET impact indicator, at abundances of up to about 5×10⁶ times background levels in adjacent younger and older ice. The NDs in the concentrated layer are rounded, suggesting they most likely formed during a cosmic impact through some process similar to carbon-vapor deposition or high-explosive detonation. This morphology has not been reported previously in cosmic material, but has been observed in terrestrial impact material. This is the first highly enriched, discrete layer of NDs observed in glacial ice anywhere, and its presence indicates that ice caps are important archives of ET events of varying magnitudes. Using a preliminary ice chronology based on oxygen isotopes and dust stratigraphy, the ND-rich layer appears to be coeval with ND abundance peaks reported at numerous North American sites in a sedimentary layer, the Younger Dryas boundary layer (YDB), dating to 12.9 ± 0.1 ka. However, more investigation is needed to confirm this association.
If Jacobus Cornelius Kapteyn (1851–1922) were here today, he would undoubtedly be among those asking the new questions on the cutting edge of contemporary astronomy. It is likely that he would even find a survey of his own contributions to the study of the Milky Way irrelevant. Nevertheless, Kapteyn's life-long interest in the Milky Way shaped the work of many astronomers, including, of course, his students, Willem de Sitter, H.A. Weersman, Pieter van Rhijn, and through van Rhijn: Jan Oort, Bart Bok and many others. But Kapteyn's influence extended far beyond his native Holland. After Kapteyn became a close colleague of George Ellery Hale and a Research Associate at the Mount Wilson facilities in 1908, Hale began to employ a number of Dutch astronomers, including van Rhijn, Adriaan van Maanen, and Kapteyn's Danish future son-in-law, Ejnar Hertzsprung. Moreover, Kapteyn's astronomical colleagues world-wide found his enthusiasm and penetrating insights infectious.
Deriving glacier outlines from satellite data has become increasingly popular in the past decade. In particular, when glacier outlines are used as a base for change assessment, it is important to know how accurate they are. Calculating the accuracy correctly is challenging, as appropriate reference data (e.g. from higher-resolution sensors) are seldom available. Moreover, after the required manual correction of the raw outlines (e.g. for debris cover), such a comparison would only reveal the accuracy of the analyst rather than of the algorithm applied. Here we compare outlines for clean and debris-covered glaciers, as derived from single and multiple digitizing by different or the same analysts on very high- (1 m) and medium-resolution (30 m) remote-sensing data, against each other and against glacier outlines derived from automated classification of Landsat Thematic Mapper data. Results show a high variability in the interpretation of debris-covered glacier parts, largely independent of the spatial resolution (area differences were up to 30%), and an overall good agreement for clean ice with sufficient contrast to the surrounding terrain (differences ∼5%). The differences of the automatically derived outlines from a reference value are as small as the standard deviation of the manual digitizations from several analysts. Based on these results, we conclude that automated mapping of clean ice is preferable to manual digitization and recommend using the latter method only for required corrections of incorrectly mapped glacier parts (e.g. debris cover, shadow).