The COllaborative project of Development of Anthropometrical measures in Twins (CODATwins) is a large international collaborative effort to analyze individual-level phenotype data from twins in multiple cohorts from different environments. The main objective is to study factors that modify genetic and environmental variation of height, body mass index (BMI; kg/m²) and size at birth, and additionally to address other research questions such as the long-term consequences of birth size. The project started in 2013 and is open to all twin projects in the world with height and weight measures on twins and information on zygosity. Thus far, 54 twin projects from 24 countries have provided individual-level data. The CODATwins database includes 489,981 twin individuals (228,635 complete twin pairs). Since many twin cohorts have collected longitudinal data, there is a total of 1,049,785 height and weight observations. For many cohorts, we also have information on birth weight and length, own smoking behavior, and own or parental education. We found that the heritability estimates of height and BMI changed systematically from infancy to old age. Remarkably, only minor differences in the heritability estimates of height and BMI were found across cultural–geographic regions, measurement times and birth cohorts. In addition to genetic epidemiological studies, we examined associations of height and BMI with education, birth weight and smoking status. Within-family analyses examined differences within same-sex and opposite-sex dizygotic twins in birth size and later development. The CODATwins project demonstrates the feasibility and value of international collaboration in addressing gene-by-exposure interactions, which require large sample sizes, and the effects of different exposures across time, geographical regions and socioeconomic status.
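To make the twin-design logic behind these heritability estimates concrete, the sketch below applies Falconer's classical formula, which decomposes phenotypic variance from MZ and DZ intrapair correlations. This is a minimal illustration with hypothetical correlations; the CODATwins analyses themselves fit full variance-component (ACE) models rather than this shortcut.

```python
# Minimal sketch of the classical twin decomposition (Falconer's formula).
# Correlations below are hypothetical, not CODATwins estimates.

def falconer_ace(r_mz: float, r_dz: float) -> dict:
    """Split phenotypic variance into A (additive genetic), C (shared
    environment), and E (unique environment) from twin correlations."""
    h2 = 2 * (r_mz - r_dz)   # MZ twins share all, DZ half, of their genes
    c2 = r_mz - h2           # shared environment
    e2 = 1 - r_mz            # unique environment + measurement error
    return {"h2": h2, "c2": c2, "e2": e2}

# Hypothetical intrapair correlations for adult height:
print(falconer_ace(r_mz=0.86, r_dz=0.47))  # h2 ~ 0.78, c2 ~ 0.08, e2 ~ 0.14
```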
We used a survey to characterize contemporary infection prevention and antibiotic stewardship program practices across 64 healthcare facilities, and we compared these findings to those of a similar 2013 survey. Notable findings include decreased frequency of active surveillance for methicillin-resistant Staphylococcus aureus, frequent active surveillance for carbapenem-resistant Enterobacteriaceae, and increased support for antibiotic stewardship programs.
Applied psychologists commonly use personality tests in employee selection systems because of their advantages regarding incremental criterion-related validity and less adverse impact relative to cognitive ability tests. Although personality tests have seen limited legal challenges in the past, we posit that the use of personality tests might see increased challenges under the Americans with Disabilities Act (ADA) and the ADA Amendments Act (ADAAA) due to emerging evidence that normative personality and personality disorders belong to common continua. This article aims to begin a discussion and offer initial insight regarding the possible implications of this research for personality testing under the ADA. We review past case law, scholarship in employment law, Equal Employment Opportunity Commission (EEOC) guidance regarding “medical examinations,” and recent literature from various psychology disciplines—including clinical, neuropsychology, and applied personality psychology—regarding the relationship between normative personality and personality disorders. More importantly, we review suggestions proposing the five-factor model (FFM) be used to diagnose personality disorders (PDs) and recent changes in the Diagnostic and Statistical Manual of Mental Disorders (DSM). Our review suggests that as scientific understanding of personality progresses, practitioners will need to exercise ever more caution when choosing personality measures for use in selection systems. We conclude with six recommendations for applied psychologists when developing or choosing personality measures.
In cluster-randomized trials (CRT), groups rather than individuals are randomized to interventions. The aim of this study was to present critical design, implementation, and analysis issues to consider when planning a CRT in the healthcare setting and to synthesize characteristics of published CRT in the field of healthcare epidemiology.
A systematic review was conducted to identify CRT with infection control outcomes.
We identified the following 7 epidemiological principles: (1) identify design type and justify the use of CRT; (2) account for clustering when estimating sample size and report intraclass correlation coefficient (ICC)/coefficient of variation (CV); (3) obtain consent; (4) define level of inference; (5) consider matching and/or stratification; (6) minimize bias and/or contamination; and (7) account for clustering in the analysis. Among 44 included studies, the most common design was CRT with crossover (n = 15, 34%), followed by parallel CRT (n = 11, 25%) and stratified CRT (n = 7, 16%). Moreover, 22 studies (50%) offered justification for their use of CRT, and 20 studies (45%) demonstrated that they accounted for clustering at the design phase. Only 15 studies (34%) reported the ICC, CV, or design effect. Also, 15 studies (34%) obtained waivers of consent, and 7 (16%) sought consent at the cluster level. Only 17 studies (39%) matched or stratified at randomization, and 10 studies (23%) did not report efforts to mitigate bias and/or contamination. Finally, 29 studies (88%) accounted for clustering in their analyses.
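Principle (2), accounting for clustering at the design stage, reduces to a simple calculation: inflate the individually randomized sample size by the design effect 1 + (m − 1) × ICC, where m is the average cluster size. A minimal sketch with hypothetical numbers:

```python
# Sketch of principle (2): inflating an individually randomized sample
# size for clustering. The example values below are hypothetical.
import math

def design_effect(m: float, icc: float) -> float:
    """Variance inflation for clusters of average size m with a given ICC."""
    return 1 + (m - 1) * icc

def clustered_n(n_individual: int, m: float, icc: float) -> int:
    """Total N a CRT needs, given the N from an individual-level design."""
    return math.ceil(n_individual * design_effect(m, icc))

# e.g., 400 patients under individual randomization, wards of 20 patients,
# ICC = 0.05 -> design effect 1.95, so roughly 780 patients are needed.
print(clustered_n(400, m=20, icc=0.05))
```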
We must continue to improve the design and reporting of CRT to better evaluate the effectiveness of infection control interventions in the healthcare setting.
To ascertain opinions regarding etiology and preventability of hospital-onset bacteremia and fungemia (HOB) and perspectives on HOB as a potential outcome measure reflecting quality of infection prevention and hospital care.
Hospital epidemiologists and infection preventionists who are members of the Society for Healthcare Epidemiology of America (SHEA) Research Network.
A web-based, multiple-choice survey was administered via the SHEA Research Network to 133 hospitals.
A total of 89 surveys were completed (67% response rate). Overall, 60% of respondents defined HOB as a positive blood culture on or after hospital day 3. Central-line–associated bloodstream infections (CLABSIs) and intra-abdominal infections were perceived as the most frequent etiologies. Moreover, 61% thought that most HOB events are preventable, and 54% viewed HOB as a measure reflecting a hospital’s quality of care. Also, 29% of respondents’ hospitals already collect HOB data for internal purposes. Given a choice to publicly report CLABSIs and/or HOB, 57% favored reporting HOB either alone (22%) or in addition to CLABSI (35%), and 34% favored CLABSI alone.
Most SHEA Research Network respondents perceived HOB as preventable, reflective of quality of care, and potentially acceptable as a publicly reported quality metric. Further studies of HOB are needed, including validation as a quality measure, assessment of risk adjustment, and development of evidence-based bundles and toolkits to facilitate measurement and improvement of HOB rates.
Englerophytum and Synsepalum are two closely related genera of trees and shrubs from the African tropics. Previous molecular studies have shown that these genera collectively form a clade within the subfamily Chrysophylloideae (Sapotaceae). However, little is known about the inter-relationships of the taxa within the Englerophytum–Synsepalum clade. In this study, nuclear ribosomal DNA and plastid trnH–psbA sequences were used to estimate the phylogeny within the clade. Results indicate that the clade consists of six major lineages, two composed solely of taxa from the genus Englerophytum and four composed of taxa from the genus Synsepalum. Each lineage can be distinguished by suites of vegetative and floral characters. Leaf venation patterns, calyx fusion, style length and staminodal structure were among the most useful characters for distinguishing clades. Some of the subclades within the Englerophytum–Synsepalum clade were also found to closely fit descriptions of former genera, most of which were described by Aubréville, that have since been placed in synonymy with Englerophytum and Synsepalum. The clade with the type species of Englerophytum also contains the type species of the genera Wildemaniodoxa and Zeyherella, which are confirmed as synonyms.
Prevalence of multidrug-resistant microorganisms (MDROs) continues to increase, while infection control gaps in healthcare settings facilitate their transmission between patients. In this setting, 5 distinct yet interlinked pathways are responsible for transmission. The complete transmission process is still not well understood. Designing and conducting a single research study capable of investigating all 5 complex and multifaceted pathways of hospital transmission would be costly and logistically burdensome. Therefore, this scoping review aims to synthesize the highest-quality published literature describing each of the 5 individual potential transmission pathways of MDROs in the healthcare setting and their overall contribution to patient-to-patient transmission.
In 3 databases, we performed 2 separate systematic searches for original research published during the last decade. The first search focused on MDRO transmission via the healthcare worker (HCW) or the environment and identified publications studying 5 specific transmission pathways: (1) patient to HCW, (2) patient to environment, (3) HCW to patient, (4) environment to patient, and (5) environment to HCW. The second search focused on overall patient-to-patient transmission regardless of the transmission pathway. Both searches were limited to transmission of methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant Enterococcus, multidrug-resistant Acinetobacter baumannii, and carbapenem-resistant Enterobacteriaceae. After abstract screening of 5,026 manuscripts, researchers independently reviewed and rated the remaining papers using objective predefined criteria to identify the highest-quality and most influential manuscripts.
High-quality manuscripts were identified for all 5 routes of transmission. Findings from these studies were consistent for all pathways; however, results describing the routes from the environment/HCW to a noncolonized patient were more limited and variable. Additionally, most research focused on MRSA, instead of other MDROs. The second search yielded 10 manuscripts (8 cohort studies) that demonstrated the overall contribution of patient-to-patient transmission in hospitals regardless of the transmission route. For MRSA, the reported cross-transmission was as high as 40%.
This scoping review brings together evidence supporting all 5 possible transmission pathways and illustrates the complex nature of patient-to-patient transmission of MDROs in hospitals. Our findings also confirm that transmission of MDROs in hospitals occurs frequently, suggesting that ongoing efforts are necessary to strengthen infection prevention and control to prevent the spread of MDROs.
Timely identification of multidrug-resistant gram-negative infections remains an epidemiological challenge. Statistical models for predicting drug resistance can offer utility where rapid diagnostics are unavailable or impractical given resource constraints. Logistic regression–derived risk scores are common in the healthcare epidemiology literature. Machine learning–derived decision trees are an alternative approach for developing decision support tools. Our group previously reported a decision tree for predicting extended-spectrum β-lactamase (ESBL) bloodstream infections. Our objective in the current study was to develop a risk score from the same ESBL dataset, to compare these 2 methods, and to offer general guiding principles for using each approach.
Using a dataset of 1,288 patients with Escherichia coli or Klebsiella spp. bacteremia, we generated a risk score to predict the likelihood that a bacteremic patient was infected with an ESBL producer. We evaluated discrimination (original and cross-validated models) using receiver operating characteristic curves and C statistics. We compared risk score and decision tree performance, and we reviewed their practical and methodological attributes.
In total, 194 patients (15%) had bacteremia caused by an ESBL-producing organism. The clinical risk score included 14 variables, compared with the 5 variables in the decision tree. The positive and negative predictive values of the risk score and decision tree were similar (>90%), but the C statistic of the risk score (0.87) was approximately 10% higher than that of the decision tree.
A decision tree and risk score performed similarly for predicting ESBL infection. The decision tree was more user-friendly, with fewer variables for the end user, whereas the risk score offered higher discrimination and greater flexibility for adjusting sensitivity and specificity.
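The comparison described here can be reproduced in outline with standard tools: fit a logistic regression (the basis of most risk scores) and a shallow decision tree on the same predictors, then compare cross-validated C statistics. The sketch below uses synthetic data, not the authors' ESBL cohort, and scikit-learn defaults rather than the original modeling choices.

```python
# Hedged sketch: logistic regression (risk-score style) vs. a shallow
# decision tree, compared by cross-validated C statistic (ROC AUC).
# Data are synthetic stand-ins for the ESBL cohort described above.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1288, n_features=14, weights=[0.85],
                           random_state=0)          # ~15% "ESBL" prevalence

models = {
    "risk score (logistic regression)": LogisticRegression(max_iter=1000),
    "decision tree (depth 3)": DecisionTreeClassifier(max_depth=3,
                                                      random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated C statistic = {auc:.2f}")
```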
Hospital-onset bacteremia and fungemia (HOB), a potential measure of healthcare-associated infections, was evaluated in a pilot study among 60 patients across 3 hospitals. Two-thirds of all HOB events and half of nonskin commensal HOB events were judged as potentially preventable. Follow-up studies are needed to further develop this measure.
To enhance enrollment into randomized clinical trials (RCTs), we proposed electronic health record-based clinical decision support for patient–clinician shared decision-making about care and RCT enrollment, based on “mathematical equipoise.”
As an example, we created the Knee Osteoarthritis Mathematical Equipoise Tool (KOMET) to determine the presence of patient-specific equipoise between treatments for the choice between total knee replacement (TKR) and nonsurgical treatment of advanced knee osteoarthritis.
With input from patients and clinicians about important pain and physical-function treatment outcomes, we created a database of knee osteoarthritis outcomes from non-RCT sources. We then developed multivariable linear regression models that predict 1-year individual-patient knee pain and physical function outcomes for TKR and for nonsurgical treatment. These predictions allowed detection of mathematical equipoise between the two options for patients eligible for TKR. Decision support software was developed to graphically illustrate, for a given patient, the degree of overlap in pain and functional outcomes between the treatments, and it was pilot tested for usability and responsiveness and as support for shared decision-making.
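One simple way to operationalize "mathematical equipoise" is as overlap between the two treatments' prediction intervals for a given patient. The sketch below illustrates that idea only; the predicted scores and interval widths are hypothetical placeholders, not KOMET model output.

```python
# Sketch: treat two options as in equipoise for a patient when their
# prediction intervals overlap. All numbers below are hypothetical.

def intervals_overlap(lo1: float, hi1: float, lo2: float, hi2: float) -> bool:
    return max(lo1, lo2) <= min(hi1, hi2)

def in_equipoise(pred_tkr: float, pred_nonsurg: float,
                 half_width: float) -> bool:
    """Compare prediction intervals around the two model predictions."""
    return intervals_overlap(pred_tkr - half_width, pred_tkr + half_width,
                             pred_nonsurg - half_width,
                             pred_nonsurg + half_width)

# Predicted 1-year pain scores of 32 (TKR) vs 41 (nonsurgical) with
# +/- 4-point intervals do not overlap -> no equipoise for this patient.
print(in_equipoise(32, 41, half_width=4))   # False
```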
The KOMET predictive regression model for knee pain included four patient-specific variables and had an r² of 0.32; the model for physical functioning included six patient-specific variables and had an r² of 0.34. These models were incorporated into prototype KOMET decision support software, pilot tested in clinics, and generally well received.
Use of predictive models and mathematical equipoise may help discern patient-specific equipoise to support shared decision-making for selecting between alternative treatments and considering enrollment into an RCT.
Introduction: Despite revolutionary changes in the medical education landscape, journal club (JC) continues to be a ubiquitous pedagogical tool and is a primary way that residency programs review new evidence and teach evidence-based medicine. JC is a community of practice among physicians, which may help translate research findings into practice. Program representatives state that JC should have the goal of translating novel research into changes in clinical care, but there has been minimal evaluation of JC's success in achieving this goal. In particular, emergency medicine residents' perspectives on the utility of JC remain unknown. Methods: We designed a multi-centre qualitative study across three distinct academic environments at the University of British Columbia (Vancouver, Victoria and Kelowna). Pilot testing was performed to generate preliminary themes and to finalize the interview script. An exploratory, semi-structured focus group was conducted, followed by multiple one-on-one interviews using snowball sampling. Iterative thematic analysis directed data collection until thematic sufficiency was achieved. Analysis was conducted using a constructivist Grounded Theory method with communities of practice as a theoretical lens. Themes were compared with the existing literature to corroborate or challenge existing educational theory. Results: Pilot testing revealed the following primary themes: (1) only select residents are able to increase their participation in JC over the course of residency and navigate the transition from peripheral participant to core member; (2) these residents use their increased clinical experience to perceive relevance in JC topics; and (3) residents who remain peripheral participants identify a lack of time to prepare for journal club and a lack of staff physician attendance as barriers to resident engagement. We will develop these themes further during the focus group and interview phases of our study. Conclusion: JC is a potentially valuable educational resource for residents. JC works as a community of practice only for a select group of residents, and many remain peripheral participants for the duration of their residency. Incorporation of Free Open-Access Medical Education resources may also decrease preparation time for residents and staff physicians and increase buy-in. To augment its clinical impact, the JC community of practice may need to expand beyond emergency medicine to include other specialties.
Introduction: Emergency Department Overcrowding (EDOC) is a multifactorial issue that leads to Access Block for patients needing emergency care. EDOC has been identified as a national problem: patients presenting to a Canadian Emergency Department (ED) at a time of overcrowding have higher rates of admission to hospital and increased seven-day mortality. Using the well-accepted input-throughput-output model to study EDOC, current research has focused on throughput as a measure of patient flow, reported as ED length of stay (LOS). Indeed, ED LOS and ED beds occupied by inpatients were the two “extremely important” indicators of EDOC identified by a 2005 survey of Canadian ED directors. One proposed solution to improve ED throughput is to utilize a physician at triage (PAT) to rapidly assess newly arriving patients. In 2017, a pilot PAT program was trialed at Kelowna General Hospital (KGH), a tertiary care hospital, as part of a Plan-Do-Study-Act (PDSA) cycle. The aim was to mitigate EDOC by improving ED throughput by the end of 2018, meeting the national targets for ED LOS suggested in the 2013 CAEP position statement. Methods: During fiscal periods 1-6 (April 1 to September 7, 2017), a PAT shift ran daily from 1000 to 2200 over four long weekends. ED LOS, time to inpatient bed, time to physician initial assessment (PIA), number of British Columbia Ambulance Service (BCAS) offload delays, and number of patients who left without being seen (LWBS) were extracted from an administrative database. Results were retrospectively analyzed and compared with data from 1000 to 2200 on non-PAT days during the trial period. Results: Median ED LOS decreased from 3.8 to 3.4 hours for high-acuity patients (CTAS 1-3), from 2.1 to 1.8 hours for low-acuity patients (CTAS 4-5), and from 9.3 to 8.0 hours for all admitted patients. During PAT trial weekends, average time to PIA decreased by 65% (from 73 to 26 minutes for CTAS 2-5), the average number of daily BCAS offload delays decreased by 39% (from 2.3 to 1.4 delays per day), and the proportion of patients who LWBS decreased from 2.4% to 1.7%. Conclusion: The implementation of PAT was associated with improvements in all five measures of ED throughput, providing a potential solution for EDOC at KGH. ED LOS was reduced compared with non-PAT control days, meeting the suggested national targets. PAT could improve efficiency, allowing more patients to be seen in the ED, and increase the quality and safety of ED practice. Next, we hope to evaluate PAT prospectively, continuing to analyze these process measures, perform a cost-benefit analysis, and formally assess ED staff and patient perceptions of the program.
Introduction: The administration of “to-go” medications in the Kelowna General Hospital Emergency Department was identified as an issue: individual patients frequently received multiple “to-go” medication pre-packs, and the choice of “to-go” medication varied substantially between providers. Recognizing the patient issues (addiction, dependency and diversion) and system issues (costs, risk), a team-based quality improvement initiative was instituted using a variety of quality improvement techniques. The aim was to reduce the number of “to-go” medications by half within a year. Methods: The project began in January 2015 and is ongoing. Multiple stakeholders were engaged within the emergency department, including leaders of the physician, nursing and pharmacy teams and an executive sponsor. Using change theory and traditional Plan-Do-Study-Act (PDSA) cycles, an iterative methodology was adopted. The outcome measure was the number of “to-go” medications administered; secondary measures included the numbers of opioid and benzodiazepine “to-go” prescriptions. The balancing measure was the number of narcotic prescriptions written. Physician prescribing practice and nursing practice were reviewed at meetings and huddles. Individualized reports were provided to physicians for self-review. Data were collated at baseline and then reviewed quarterly at meetings and huddles, using run charts alongside raw data and individualized reports. Results: At baseline (January 2015), the number of “to-go” medications was 708. Over the next year, this value fell to 459, a 35% reduction. Two years later (June 2017), it had fallen to 142, an overall reduction of 80%. Secondary measures are currently under analysis. No increase in narcotic prescribing was seen during this period. Conclusion: The administration of “to-go” medications from the emergency department has significant individual and societal impact. These medications are frequently diverted, that is, sold for profit on the black market, and opioid prescribing is under increased scrutiny as the link between opioid prescriptions and addiction/dependency becomes more evident. This quality improvement initiative was successful for a number of reasons. First, we had strong engagement from the full emergency department clinical team: the issue was identified collaboratively, and teamwork and participation were strong from the outset. Second, we used individual and aggregate data to provide feedback on a regular basis. Third, we had strong support from our executive sponsors, who championed the effort and presented the results locally and, now, throughout the Health Region.
Oxidative stress occurs when antioxidant defence mechanisms are overwhelmed by free radicals and may lead to DNA damage, which has been implicated in processes such as ageing and cancer. The Comet assay allows detection of oxidative DNA damage in individual cells. As horses with recurrent airway obstruction (RAO) have been shown to demonstrate low antioxidant status and oxidative stress, we hypothesised that peripheral blood mononuclear cells (PBMC) of horses with RAO would show increased DNA damage following natural allergen challenge.
Six horses (mean age 15 years, range 8-23 years) diagnosed with RAO (in remission) and 6 healthy breed-matched controls (mean age 9 years, range 5-15 years) were studied. Blood samples were collected 7 days prior to challenge and immediately and 3 days after stabling on mouldy hay and straw for 24 h. All animals were kept at grass prior to and after the challenge period. Bronchoalveolar lavage (BAL) was performed and neutrophil counts determined.
The pressure transducer technique of Theodorou et al. (1984) is becoming increasingly important in food evaluation. Its main advantage over end-point procedures is the collection of kinetic data on a food. Previously, such data could only be obtained by sequential sacrifice or in situ techniques. Earlier work (Harris, 1996) showed that kinetics in the early stages of the incubation may not accurately simulate the processes occurring in vivo, which led to the use of a priming technique in which the microbial inoculum is acclimatized for 24 h to a priming food similar to the basal diet of the donor animal. This work investigates the effect of the length of priming on the fermentation characteristics of two foods.
Samples of barley grain and straw were ground through a 1-mm screen, and a priming food of grass silage and concentrates was prepared according to the method of Harris (1996). Gas production from the barley grain and straw was determined using bovine rumen liquor after exposing the microbial population to the priming food for 0 (unprimed), 6, 12 or 24 h. Gas volumes were recorded manually and the blank-corrected volumes fitted to the equation of France et al. (1993).
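The France et al. (1993) model is commonly written as G(t) = A{1 − exp[−b(t − T) − c(√t − √T)]} for t ≥ T, with asymptotic gas production A, rate parameters b and c, and lag T. A minimal curve-fitting sketch under that assumption, with hypothetical observations:

```python
# Sketch of fitting blank-corrected gas volumes to the France et al. (1993)
# model. Time points and volumes below are hypothetical, not study data.
import numpy as np
from scipy.optimize import curve_fit

def france_1993(t, A, b, c, T):
    t = np.asarray(t, dtype=float)
    g = A * (1 - np.exp(-b * (t - T) - c * (np.sqrt(t) - np.sqrt(T))))
    return np.where(t >= T, g, 0.0)      # no gas produced before the lag T

t_obs = np.array([2, 4, 8, 12, 24, 48, 72, 96], dtype=float)   # h
g_obs = np.array([3, 9, 22, 35, 62, 85, 93, 96], dtype=float)  # ml

params, _ = curve_fit(france_1993, t_obs, g_obs,
                      p0=[100, 0.05, 0.05, 1.0], bounds=(0, np.inf))
A, b, c, T = params
print(f"A={A:.1f} ml, b={b:.3f}/h, c={c:.3f}/sqrt(h), lag T={T:.2f} h")
```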
A systematic review of quasi-experimental studies in the field of infectious diseases was published in 2005. The aim of this study was to assess improvements in the design and reporting of quasi-experiments 10 years after the initial review. We also aimed to report the statistical methods used to analyze quasi-experimental data.
Systematic review of articles published from January 1, 2013, to December 31, 2014, in 4 major infectious disease journals.
Quasi-experimental studies focused on infection control and antibiotic resistance were identified and classified based on 4 criteria: (1) type of quasi-experimental design used, (2) justification of the use of the design, (3) use of correct nomenclature to describe the design, and (4) statistical methods used.
Of 2,600 articles, 173 (7%) featured a quasi-experimental design, compared to 73 of 2,320 articles (3%) in the previous review (P<.01). Moreover, 21 articles (12%) utilized a study design with a control group; 6 (3.5%) justified the use of a quasi-experimental design; and 68 (39%) identified their design using the correct nomenclature. In addition, 2-group statistical tests were used in 75 studies (43%); 58 studies (34%) used standard regression analysis; 18 (10%) used segmented regression analysis; 7 (4%) used standard time-series analysis; 5 (3%) used segmented time-series analysis; and 10 (6%) did not utilize statistical methods for comparisons.
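Of the methods tallied above, segmented regression is the workhorse for quasi-experimental before-and-after data: it fits a baseline trend plus level-change and slope-change terms at the intervention. A minimal sketch on simulated monthly data:

```python
# Sketch of segmented (interrupted time-series) regression: baseline trend,
# level change at the intervention, slope change afterward. Simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = np.arange(24, dtype=float)          # 12 months pre, 12 post
post = (months >= 12).astype(float)          # level-change indicator
months_post = np.where(post == 1, months - 11, 0.0)  # slope-change term

true_rate = 10 - 0.1 * months - 2.0 * post - 0.2 * months_post
y = true_rate + rng.normal(0, 0.5, size=months.size)  # observed rate

X = sm.add_constant(np.column_stack([months, post, months_post]))
fit = sm.OLS(y, X).fit()
print(fit.params)  # [intercept, baseline slope, level change, slope change]
```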
While some progress occurred over the decade, it is crucial to continue improving the design and reporting of quasi-experimental studies in the fields of infection control and antibiotic resistance to better evaluate the effectiveness of important interventions.
We assessed various locations and frequencies of environmental sampling to maximize information and maintain efficiency when sampling for Acinetobacter baumannii. Although sampling sites in closer proximity to the patient were more likely to be positive, we found value in sampling all sites and across multiple days to fully capture environmental contamination.
Middle East respiratory syndrome (MERS) is caused by a novel coronavirus (MERS-CoV) discovered in 2012. As of 1 June 2016, 1,806 cases, including 564 deaths, had been reported by the Kingdom of Saudi Arabia (KSA) and other affected countries. Previous literature attributed increases in MERS-CoV transmission to the camel breeding season, as camels are the likely reservoir for the virus. However, this literature review and subsequent analysis indicate a lack of seasonality. A retrospective, epidemiological cluster analysis was conducted to investigate increases in MERS-CoV transmission and reports of household and nosocomial clusters. Cases were verified, and associations between cases were substantiated through an extensive literature review and the Armed Forces Health Surveillance Branch's Tiered Source Classification System. A total of 51 clusters were identified, primarily nosocomial (80.4%), and most occurred in KSA (45.1%). Clusters corresponded temporally with most periods of greatest incidence, suggesting a strong correlation between nosocomial transmission and notable increases in cases.
Whether monozygotic (MZ) and dizygotic (DZ) twins differ from each other in a variety of phenotypes is important for genetic twin modeling and for inferences made from twin studies in general. We analyzed whether there were differences in individual, maternal and paternal education between MZ and DZ twins in a large pooled dataset. Information was gathered on individual education for 218,362 adult twins from 27 twin cohorts (53% females; 39% MZ twins), and on maternal and paternal education for 147,315 and 143,056 twins, respectively, from 28 twin cohorts (52% females; 38% MZ twins). Together, we had information on individual or parental education from 42 twin cohorts representing 19 countries. The original education classifications were transformed to education years and analyzed using linear regression models. Overall, MZ males had 0.26 (95% CI [0.21, 0.31]) years and MZ females 0.17 (95% CI [0.12, 0.21]) years longer education than DZ twins. The zygosity difference became smaller in more recent birth cohorts for both males and females. Parental education was somewhat longer for fathers of DZ twins in cohorts born in 1990–1999 (0.16 years, 95% CI [0.08, 0.25]) and 2000 or later (0.11 years, 95% CI [0.00, 0.22]), compared with fathers of MZ twins. The results show that the years of both individual and parental education are largely similar in MZ and DZ twins. We suggest that the socio-economic differences between MZ and DZ twins are so small that inferences based upon genetic modeling of twin data are not affected.
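The core analysis reported here is a linear regression of education years on zygosity. A minimal sketch of that comparison on a hypothetical dataset (ignoring, for brevity, the clustering of twins within pairs and cohorts that the pooled analysis must handle):

```python
# Sketch of the zygosity comparison: regress education years on an MZ
# indicator, separately by sex. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Expected columns: edu_years (float), zygosity ("MZ"/"DZ"), sex ("M"/"F")
twins = pd.read_csv("twin_education.csv")     # hypothetical input file

for sex, grp in twins.groupby("sex"):
    fit = smf.ols("edu_years ~ C(zygosity, Treatment('DZ'))", data=grp).fit()
    diff = fit.params.iloc[-1]                # mean MZ-DZ difference, years
    lo, hi = fit.conf_int().iloc[-1]
    print(f"{sex}: MZ-DZ difference = {diff:.2f} years "
          f"(95% CI {lo:.2f}, {hi:.2f})")
```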
The interannual variations in atmospheric transport patterns to Summit, Greenland, are studied using twice-daily, three-dimensional, 10-day backward trajectory data corresponding to the summers (1 June–31 August) of 1989–98. While previous trajectory climatology studies have been prepared for Summit, the present work considers both the horizontal and vertical components of transport. A three-dimensional residence-time methodology is employed: the transport is quantified by passing all trajectories through a three-dimensional grid and tracking the time spent (i.e. the residence time) in each gridcell. This method also allows inspection of trajectory altitude distributions corresponding to transport from upwind regions of interest. The three-dimensional residence-time methodology is shown to be a valuable tool for diagnosing the details of long-range atmospheric transport to remote locations. For Summit, we find that the frequent transport from North America tends to occur at low altitudes, whereas transport from Europe is highly variable. Mean summertime flow patterns are described, as are anomalous patterns during 1990, 1996 and 1998.
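A minimal sketch of the residence-time bookkeeping described above: pass trajectory points through a three-dimensional (longitude, latitude, altitude) grid and accumulate the time each trajectory spends in every gridcell. The grid edges, time step, and sample trajectory points below are hypothetical.

```python
# Sketch: accumulate residence time of trajectory points on a 3-D grid.
import numpy as np

def residence_time(lon, lat, alt, dt_hours, edges):
    """Hours spent per (lon, lat, alt) gridcell over all trajectory points.

    lon/lat/alt: 1-D position arrays; dt_hours: time represented by each
    point; edges: (lon_edges, lat_edges, alt_edges) bin boundaries.
    """
    pts = np.column_stack([lon, lat, alt])
    counts, _ = np.histogramdd(pts, bins=edges)
    return counts * dt_hours

# Hypothetical grid and random stand-in trajectory points at 12 h steps:
lon_edges = np.arange(-180.0, 181.0, 5.0)
lat_edges = np.arange(0.0, 91.0, 5.0)
alt_edges = np.array([0.0, 1000.0, 3000.0, 5000.0, 8000.0, 12000.0])  # m
rng = np.random.default_rng(0)
lon = rng.uniform(-60, 0, 200)
lat = rng.uniform(50, 80, 200)
alt = rng.uniform(0, 8000, 200)
rt = residence_time(lon, lat, alt, dt_hours=12.0,
                    edges=(lon_edges, lat_edges, alt_edges))
print(rt.shape, rt.sum())   # grid shape and total trajectory hours
```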