Healthcare personnel (HCP) were recruited to provide serum samples, which were tested for antibodies against Ebola or Lassa virus to evaluate for asymptomatic seroconversion.
From 2014 to 2016, 4 patients with Ebola virus disease (EVD) and 1 patient with Lassa fever (LF) were treated in the Serious Communicable Diseases Unit (SCDU) at Emory University Hospital. Strict infection control and clinical biosafety practices were implemented to prevent nosocomial transmission of EVD or LF to HCP.
All personnel who entered the SCDU and were required to measure their temperatures and complete a symptom questionnaire twice daily were eligible.
No employee developed symptomatic EVD or LF. EVD and LF antibody studies were performed on serum samples from 42 HCP. The 6 participants who had received investigational vaccination with a chimpanzee adenovirus type 3–vectored Ebola glycoprotein vaccine had high antibody titers to Ebola glycoprotein, but none had a response to Ebola nucleoprotein or VP40, or to LF antigens.
Patients infected with filoviruses and arenaviruses can be managed successfully without causing occupation-related symptomatic or asymptomatic infections. Meticulous attention to infection control and clinical biosafety practices by highly motivated, trained staff is critical to the safe care of patients with an infection from a special pathogen.
To assess the impact of a newly developed Central-Line Insertion Site Assessment (CLISA) score on the incidence of local inflammation or infection for CLABSI prevention.
A pre- and postintervention, quasi-experimental quality improvement study.
Setting and participants:
Adult inpatients with central venous catheters (CVCs) hospitalized in an intensive care unit or oncology ward at a large academic medical center.
We evaluated the impact of the CLISA score on the incidence of insertion-site inflammation and infection (CLISA score of 2 or 3) in the baseline period (June 2014–January 2015) and the intervention period (April 2015–October 2017) using interrupted time-series and generalized linear mixed-effects multivariable analyses. Separate analyses were run for days to line removal after identification of a CLISA score of 2 or 3. CLISA score interrater reliability and photo quiz results were also evaluated.
Among 6,957 CVCs assessed 40,846 times, the percentage of lines with a CLISA score of 2 or 3 decreased by 78.2% between the baseline and intervention periods (from 22.0% to 4.7%), with a significant immediate decrease in the time-series analysis (P < .001). In the multivariable regression, the intervention was associated with a lower percentage of lines with a CLISA score of 2 or 3 after adjusting for age, gender, CVC body location, and hospital unit (odds ratio, 0.15; 95% confidence interval, 0.06–0.34; P < .001). In the multivariable regression, removal of lines with a CLISA score of 2 or 3 was 3.19 days faster after the intervention (P < .001). Line dwell time also decreased 37.1%, from a mean of 14 days (standard deviation [SD], 10.6) to 8.8 days (SD, 9.0) (P < .001), and device utilization ratios decreased 9%, from 0.64 (SD, 0.08) to 0.58 (SD, 0.06) (P = .039).
The CLISA score creates a common language for assessing line infection risk and successfully promotes high compliance with best practices in timely line removal.
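The odds ratio reported in the abstract came from a multivariable mixed-effects model; as a minimal sketch of the underlying idea, the snippet below computes an unadjusted odds ratio with a Wald 95% confidence interval from a simple 2×2 table. The counts are placeholders, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table.

    a = intervention-period lines with CLISA 2-3, b = without;
    c = baseline-period lines with CLISA 2-3,    d = without.
    Note: the study's OR of 0.15 was adjusted via a mixed-effects
    model; this raw calculation is illustrative only.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

An OR below 1 with a confidence interval excluding 1 indicates the intervention period had lower odds of an inflamed or infected insertion site.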
Item 9 of the Patient Health Questionnaire-9 (PHQ-9) queries about thoughts of death and self-harm, but not suicidality. Although it is sometimes used to assess suicide risk, most positive responses are not associated with suicidality. The PHQ-8, which omits Item 9, is thus increasingly used in research. We assessed equivalency of total score correlations and the diagnostic accuracy to detect major depression of the PHQ-8 and PHQ-9.
We conducted an individual patient data meta-analysis. We fit bivariate random-effects models to assess diagnostic accuracy.
16 742 participants (2097 major depression cases) from 54 studies were included. The correlation between PHQ-8 and PHQ-9 scores was 0.996 (95% confidence interval 0.996 to 0.996). The standard cutoff score of 10 for the PHQ-9 maximized sensitivity + specificity for the PHQ-8 among studies that used a semi-structured diagnostic interview reference standard (N = 27). At cutoff 10, the PHQ-8 was less sensitive by 0.02 (−0.06 to 0.00) and more specific by 0.01 (0.00 to 0.01) among those studies (N = 27), with similar results for studies that used other types of interviews (N = 27). For all 54 primary studies combined, across all cutoffs, the PHQ-8 was less sensitive than the PHQ-9 by 0.00 to 0.05 (0.03 at cutoff 10), and specificity was within 0.01 for all cutoffs (0.00 to 0.01).
PHQ-8 and PHQ-9 total scores were similar. Sensitivity may be minimally reduced with the PHQ-8, but specificity is similar.
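The cutoff analysis above rests on computing sensitivity and specificity at each candidate score threshold and choosing the one that maximizes their sum (Youden's approach). A minimal sketch with made-up toy data, not the meta-analysis dataset:

```python
def sens_spec(scores, has_depression, cutoff):
    """Sensitivity and specificity of 'score >= cutoff' against a reference diagnosis."""
    tp = sum(1 for s, d in zip(scores, has_depression) if d and s >= cutoff)
    fn = sum(1 for s, d in zip(scores, has_depression) if d and s < cutoff)
    tn = sum(1 for s, d in zip(scores, has_depression) if not d and s < cutoff)
    fp = sum(1 for s, d in zip(scores, has_depression) if not d and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

def best_cutoff(scores, has_depression, cutoffs):
    """Cutoff maximizing sensitivity + specificity."""
    return max(cutoffs, key=lambda c: sum(sens_spec(scores, has_depression, c)))
```

The meta-analysis itself pooled these quantities across studies with bivariate random-effects models rather than computing them on one combined sample.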
In 2013, the national surveillance case definition for West Nile virus (WNV) disease was revised to remove fever as a criterion for neuroinvasive disease and require at most subjective fever for non-neuroinvasive disease. The aims of this project were to determine how often afebrile WNV disease occurs and assess differences among patients with and without fever. We included cases with laboratory evidence of WNV disease reported from four states in 2014. We compared demographics, clinical symptoms and laboratory evidence for patients with and without fever and stratified the analysis by neuroinvasive and non-neuroinvasive presentations. Among 956 included patients, 39 (4%) had no fever; this proportion was similar among patients with and without neuroinvasive disease symptoms. For neuroinvasive and non-neuroinvasive patients, there were no differences in age, sex, or laboratory evidence between febrile and afebrile patients, but hospitalisations were more common among patients with fever (P < 0.01). The only significant difference in symptoms was for ataxia, which was more common in neuroinvasive patients without fever (P = 0.04). Only 5% of non-neuroinvasive patients did not meet the WNV case definition due to lack of fever. The evidence presented here supports the changes made to the national case definition in 2013.
A national need is to prepare for and respond to accidental or intentional disasters categorized as chemical, biological, radiological, nuclear, or explosive (CBRNE). These incidents require specific subject-matter expertise, yet have commonalities. We identify 7 core elements comprising CBRNE science that require integration for effective preparedness planning and public health and medical response and recovery. These core elements are (1) basic and clinical sciences, (2) modeling and systems management, (3) planning, (4) response and incident management, (5) recovery and resilience, (6) lessons learned, and (7) continuous improvement. A key feature is the ability of relevant subject matter experts to integrate information into response operations. We propose the CBRNE medical operations science support expert as a professional who (1) understands that CBRNE incidents require an integrated systems approach, (2) understands the key functions and contributions of CBRNE science practitioners, (3) helps direct strategic and tactical CBRNE planning and responses through first-hand experience, and (4) provides advice to senior decision-makers managing response activities. Recognition of both CBRNE science as a distinct competency and the establishment of the CBRNE medical operations science support expert informs the public of the enormous progress made, broadcasts opportunities for new talent, and enhances the sophistication and analytic expertise of senior managers planning for and responding to CBRNE incidents.
Introduction: Acute aortic syndrome (AAS) is a time-sensitive aortic catastrophe that is often misdiagnosed. There are currently no Canadian guidelines to aid in diagnosis. Our goal was to adapt the existing American Heart Association (AHA) and European Society of Cardiology (ESC) diagnostic algorithms for AAS into a Canadian evidence-based best-practices algorithm targeted at emergency medicine physicians. Methods: We adapted existing high-quality clinical practice guidelines (CPG) previously developed by the AHA/ESC using the GRADE ADOLOPMENT approach. We created a National Advisory Committee of 21 members from across Canada, including academic, community and remote/rural emergency physicians/nurses, cardiothoracic and cardiovascular surgeons, cardiac anesthesiologists, critical care physicians, cardiologists, radiologists and patient representatives. The Advisory Committee communicated through multiple teleconference meetings, emails and a one-day in-person meeting. The panel prioritized questions and outcomes, using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to assess evidence and make recommendations. The algorithm was prepared and revised through feedback and discussion in an iterative process until consensus was achieved. Results: The diagnostic algorithm comprises an updated pretest probability assessment tool with further-testing recommendations based on risk level. The updated tool incorporates the likelihood of an alternative diagnosis and point-of-care ultrasound. The final best-practice diagnostic algorithm defined risk levels as Low (≤0.5%: no further testing), Moderate (0.6%–5%: further testing required) and High (>5%: computed tomography, magnetic resonance imaging or transesophageal echocardiography). During the consensus and feedback processes, we addressed a number of issues and concerns.
D-dimer can be used to reduce the probability of AAS in the moderate-risk group, but should not be used in the low- or high-risk groups. Ultrasound was incorporated as a bedside clinical examination option in pretest probability assessment for aortic insufficiency and abdominal/thoracic aortic aneurysms. Conclusion: We have created the first Canadian best-practice diagnostic algorithm for AAS. We hope this diagnostic algorithm will standardize and improve the diagnosis of AAS in emergency departments across Canada.
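The risk tiers described above amount to a simple mapping from pretest probability to a testing recommendation. The sketch below illustrates that mapping; the function name and the exact boundary handling are assumptions for illustration, not part of the published algorithm.

```python
def aas_testing_recommendation(pretest_prob_pct):
    """Map a pretest probability of AAS (in percent) to a risk tier.

    Thresholds follow the tiers described in the abstract (Low <= 0.5%,
    Moderate 0.6%-5%, High > 5%); boundary handling is an illustrative
    assumption.
    """
    if pretest_prob_pct <= 0.5:
        return "Low: no further testing"
    if pretest_prob_pct <= 5:
        return "Moderate: further testing (e.g. D-dimer) required"
    return "High: advanced imaging (CT, MRI or transesophageal echo)"
```

In the algorithm itself the probability comes from a structured assessment tool incorporating alternative diagnoses and point-of-care ultrasound, not a single numeric input.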
Introduction: In addition to its clinical utility, the Canadian Triage and Acuity Scale (CTAS) has become an administrative metric used by governments to estimate patient care requirements, emergency department (ED) funding and workload models. The electronic Canadian Triage and Acuity Scale (eCTAS) initiative aims to improve patient safety and quality of care by establishing an electronic triage decision support tool that standardizes the application of national triage guidelines across Ontario. The objective of this study was to evaluate triage times and score agreement in ED settings where eCTAS has been implemented. Methods: This was a prospective, observational study conducted in 7 hospital EDs, selected to represent a mix of triage processes (electronic vs. manual), documentation practices (electronic vs. paper), hospital types (rural, community and teaching) and patient volumes (annual ED census ranged from 38,000 to 136,000). An expert CTAS auditor observed on-duty triage nurses in the ED and assigned independent CTAS in real time. Research assistants not involved in the triage process independently recorded triage time. Interrater agreement was estimated using unweighted and quadratic-weighted kappa statistics with 95% confidence intervals (CIs). Results: 1491 (752 pre-eCTAS, 739 post-implementation) individual patient CTAS assessments were audited over 42 (21 pre-eCTAS, 21 post-implementation) seven-hour triage shifts. Exact modal agreement was achieved for 567 (75.4%) patients pre-eCTAS, compared to 685 (92.7%) patients triaged with eCTAS. Using the auditor's CTAS score as the reference standard, eCTAS significantly reduced the number of patients over-triaged (12.0% vs. 5.1%; Δ 6.9, 95% CI: 4.0, 9.7) and under-triaged (12.6% vs. 2.2%; Δ 10.4, 95% CI: 7.9, 13.2). Interrater agreement was higher with eCTAS (unweighted kappa 0.89 vs 0.63; quadratic-weighted kappa 0.91 vs. 0.71).
Research assistants captured triage time for 3808 patients pre-eCTAS and 3489 post implementation of eCTAS. Median triage time was 312 seconds pre-eCTAS and 347 seconds with eCTAS (Δ 35 seconds, 95% CI: 29, 40 seconds). Conclusion: A standardized, electronic approach to performing CTAS assessments improves both clinical decision making and administrative data accuracy without substantially increasing triage time.
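The interrater-agreement statistics quoted above (unweighted and quadratic-weighted kappa) can be computed directly from paired ratings. A self-contained sketch, written from the standard definition of Cohen's kappa rather than from anything in the study:

```python
def cohen_kappa(rater_a, rater_b, categories, weights=None):
    """Cohen's kappa for two raters over ordered categories.

    weights=None gives unweighted kappa; weights="quadratic" applies
    quadratic disagreement weights, as is usual for ordinal scales
    like the 5-level CTAS.
    """
    n = len(rater_a)
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}

    # Observed joint proportions over the k x k agreement table.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        obs[idx[a]][idx[b]] += 1.0 / n

    # Marginal proportions; expected cell = product under independence.
    pa = [sum(row) for row in obs]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    # Disagreement weight: 0/1 for unweighted, scaled squared distance
    # for quadratic weighting.
    def w(i, j):
        if weights == "quadratic":
            return ((i - j) ** 2) / ((k - 1) ** 2)
        return 0.0 if i == j else 1.0

    d_obs = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w(i, j) * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1.0 - d_obs / d_exp
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance; quadratic weighting penalizes a two-level triage disagreement more than a one-level one.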
This research addresses dementia and driving cessation, a major life event for affected individuals and an immense challenge in primary care. In Australia, as in many other countries, it is primarily general practitioners (GPs) who identify changes in cognitive functioning and monitor driving issues with their patients with dementia. Qualitative evidence from studies with family members and other health professionals shows it is a complicated area of practice. However, we still know little from GPs about how they manage these challenges with their patients and the strategies they use to facilitate driving cessation.
Data were collected through five focus groups with 29 GPs at their primary care practices in metropolitan and regional Queensland, Australia. A semi-structured topic guide was used to direct questions addressing decision factors and management strategies. Discussions were audio recorded, transcribed verbatim and thematically analyzed.
Regarding the challenges of raising driving cessation, four key themes emerged. These included: (i) Considering the individual; (ii) GP-patient relationships may hinder or help; (iii) Resources to support raising driver retirement; and (iv) Ethical dilemmas and ethical considerations. The impact of discussing driving cessation on GPs is discussed.
The findings of this study contribute to further understanding the experiences and needs of primary care physicians in managing driving retirement with their patients with dementia. The results support a need for programs on the identification and assessment of fitness to drive that upskill health professionals, particularly GPs, to manage the complex issues around dementia and driving cessation, and for exploring cost-effective and timely delivery of such support to patients.
Solvency II came into force on 1 January 2016 and included a transitional measure on technical provisions (“TMTP”) designed to help smooth in the capital impact of Solvency II over a 16-year period. The working party’s view is that the main intention of the TMTP is to mitigate the impact of the introduction of the risk margin, which significantly increases the technical provisions of firms, relative to their Solvency I Pillar 2 liabilities.
The majority of firms that hold a TMTP have now had at least one recalculation approved by the Prudential Regulation Authority (PRA), or are in the process of applying for a recalculation. Despite this large number of approved recalculations, significant uncertainty remains in the industry around the approach to, and triggers for, recalculation.
This paper considers aspects of TMTP recalculation for regulated UK life firms, for example practicalities of the calculation, asset and liability considerations, and communications/announcements.
In this paper, we outline the need for pragmatism when considering the approach to recalculation of a measure originally intended to serve as the bridge between two regimes. We call for latitude to do what is sensible in a principles-based regime, balancing what might be more theoretically correct with what is practical and possible, to support effective management of the business.
A multichannel calorimeter system is designed and constructed which is capable of delivering single-shot and broad-band spectral measurement of terahertz (THz) radiation generated in intense laser–plasma interactions. The generation mechanism of backward THz radiation (BTR) is studied by using the multichannel calorimeter system in an intense picosecond laser–solid interaction experiment. The dependence of the BTR energy and spectrum on laser energy, target thickness and pre-plasma scale length is obtained. These results indicate that coherent transition radiation is responsible for the low-frequency component (<1 THz) of BTR. It is also observed that a large-scale pre-plasma primarily enhances the high-frequency component (>3 THz) of BTR.
Reconstruction of lake-level fluctuations from landform and outcrop evidence typically involves characterizing periods with relative high stands. We developed a new approach to provide water-level estimates in the absence of shoreline evidence for Owens Lake in eastern California by integrating landform, outcrop, and existing lake-core data with wind-wave and sediment entrainment modeling of lake-core sedimentology. We also refined the late Holocene lake-level history of Owens Lake by dating four previously undated shoreline features above the water level (1096.4 m) in AD 1872. The new ages coincide with wetter and cooler climate during the Neopluvial (~3.6 ka), Medieval Pluvial (~0.8 ka), and Little Ice Age (~0.35 ka). Dates from stumps below 1096 m also indicate two periods of low stands at ~0.89 and 0.67 ka during the Medieval Climatic Anomaly. The timing of modeled water levels associated with 22 mud and sand units in lake cores agree well with shoreline records of Owens Lake and nearby Mono Lake, as well as with proxy evidence for relatively wet and dry periods from tree-ring and glacial records within the watershed. Our integrated analysis provides a continuous 4000-yr lake-level record showing the timing, duration, and magnitude of hydroclimate variability along the south-central Sierra Nevada.
Different diagnostic interviews are used as reference standards for major depression classification in research. Semi-structured interviews involve clinical judgement, whereas fully structured interviews are completely scripted. The Mini International Neuropsychiatric Interview (MINI), a brief fully structured interview, is also sometimes used. It is not known whether interview method is associated with probability of major depression classification.
To evaluate the association between interview method and odds of major depression classification, controlling for depressive symptom scores and participant characteristics.
Data collected for an individual participant data meta-analysis of Patient Health Questionnaire-9 (PHQ-9) diagnostic accuracy were analysed and binomial generalised linear mixed models were fit.
A total of 17 158 participants (2287 with major depression) from 57 primary studies were analysed. Among fully structured interviews, odds of major depression were higher for the MINI compared with the Composite International Diagnostic Interview (CIDI) (odds ratio (OR) = 2.10; 95% CI = 1.15–3.87). Compared with semi-structured interviews, fully structured interviews (MINI excluded) were non-significantly more likely to classify participants with low-level depressive symptoms (PHQ-9 scores ≤6) as having major depression (OR = 3.13; 95% CI = 0.98–10.00), similarly likely for moderate-level symptoms (PHQ-9 scores 7–15) (OR = 0.96; 95% CI = 0.56–1.66) and significantly less likely for high-level symptoms (PHQ-9 scores ≥16) (OR = 0.50; 95% CI = 0.26–0.97).
The MINI may identify more people as depressed than the CIDI, and semi-structured and fully structured interviews may not be interchangeable methods, but these results should be replicated.
Declaration of interest
Drs Jetté and Patten declare that they received a grant, outside the submitted work, from the Hotchkiss Brain Institute, which was jointly funded by the Institute and Pfizer. Pfizer was the original sponsor of the development of the PHQ-9, which is now in the public domain. Dr Chan is a steering committee member or consultant of Astra Zeneca, Bayer, Lilly, MSD and Pfizer. She has received sponsorships and honorarium for giving lectures and providing consultancy and her affiliated institution has received research grants from these companies. Dr Hegerl declares that within the past 3 years, he was an advisory board member for Lundbeck, Servier and Otsuka Pharma; a consultant for Bayer Pharma; and a speaker for Medice Arzneimittel, Novartis, and Roche Pharma, all outside the submitted work. Dr Inagaki declares that he has received grants from Novartis Pharma, lecture fees from Pfizer, Mochida, Shionogi, Sumitomo Dainippon Pharma, Daiichi-Sankyo, Meiji Seika and Takeda, and royalties from Nippon Hyoron Sha, Nanzando, Seiwa Shoten, Igaku-shoin and Technomics, all outside of the submitted work. Dr Yamada reports personal fees from Meiji Seika Pharma Co., Ltd., MSD K.K., Asahi Kasei Pharma Corporation, Seishin Shobo, Seiwa Shoten Co., Ltd., Igaku-shoin Ltd., Chugai Igakusha and Sentan Igakusha, all outside the submitted work. All other authors declare no competing interests. No funder had any role in the design and conduct of the study; collection, management, analysis and interpretation of the data; preparation, review or approval of the manuscript; and decision to submit the manuscript for publication.
Introduction: Emergency Department Overcrowding (EDOC) is a multifactorial issue that leads to Access Block for patients needing emergency care. EDOC has been identified as a national problem: patients presenting to a Canadian Emergency Department (ED) at a time of overcrowding have higher rates of admission to hospital and increased seven-day mortality. Using the well-accepted input-throughput-output model to study EDOC, current research has focused on throughput as a measure of patient flow, reported as ED length of stay (LOS). In fact, ED LOS and ED beds occupied by inpatients are two “extremely important” indicators of EDOC identified by a 2005 survey of Canadian ED directors. One proposed solution to improve ED throughput is to utilize a physician at triage (PAT) to rapidly assess newly arriving patients. In 2017, a pilot PAT program was trialed at Kelowna General Hospital (KGH), a tertiary care hospital, as part of a PDSA cycle. The aim was to mitigate EDOC by improving ED throughput by the end of 2018, to meet the national targets for ED LOS suggested in the 2013 CAEP position statement. Methods: During fiscal periods 1-6 (April 1 to September 7, 2017), a PAT shift occurred daily from 1000-2200 over four long weekends. ED LOS, time to inpatient bed, time to physician initial assessment (PIA), number of British Columbia Ambulance Service (BCAS) offload delays, and number of patients who left without being seen (LWBS) were extracted from an administrative database. Results were retrospectively analyzed and compared to data from 1000-2200 on non-PAT days during the trial periods. Results: Median ED LOS decreased from 3.8 to 3.4 hours for high-acuity patients (CTAS 1-3), from 2.1 to 1.8 hours for low-acuity patients (CTAS 4-5), and from 9.3 to 8.0 hours for all admitted patients.
During PAT trial weekends, there was a decrease in the average time to PIA by 65% (from 73 to 26 minutes for CTAS 2-5), average number of daily BCAS offload delays by 39% (from 2.3 to 1.4 delays per day), and number of patients who LWBS from 2.4% to 1.7%. Conclusion: The implementation of PAT was associated with improvements in all five measures of ED throughput, providing a potential solution for EDOC at KGH. ED LOS was reduced compared to non-PAT control days, successfully meeting the suggested national targets. PAT could improve efficiency, resulting in the ability to see more patients in the ED, and increase the quality and safety of ED practice. Next, we hope to prospectively evaluate PAT, continuing to analyze these process measures, perform a cost-benefit analysis, and formally assess ED staff and patient perceptions of the program.
Introduction: In addition to its clinical utility, the Canadian Triage and Acuity Scale (CTAS) has become an administrative metric used by governments to estimate patient care requirements, ED funding and workload models. The Electronic Canadian Triage and Acuity Scale (eCTAS) initiative aims to improve patient safety and quality of care by establishing an electronic triage decision support tool that standardizes the application of national triage guidelines (CTAS) across Ontario. The objective of this study was to evaluate the implementation of eCTAS in a variety of ED settings. Methods: This was a prospective, observational study conducted in 7 hospital EDs, selected to represent a mix of triage processes (electronic vs. manual), documentation practices (electronic vs. paper), hospital types (rural, community and teaching) and patient volumes (annual ED census ranged from 38,000 to 136,000). An expert CTAS auditor observed on-duty triage nurses in the ED and assigned independent CTAS in real time. Research assistants not involved in the triage process independently recorded the triage time. Interrater agreement was estimated using unweighted and quadratic-weighted kappa statistics with 95% confidence intervals (CIs). Results: 1200 (738 pre-eCTAS, 462 post-implementation) individual patient CTAS assessments were audited over 33 (21 pre-eCTAS, 11 post-implementation) seven-hour triage shifts. Exact modal agreement was achieved for 554 (75.0%) patients pre-eCTAS, compared to 429 (93.0%) patients triaged with eCTAS. Using the auditor's CTAS score as the reference standard, eCTAS significantly reduced the number of patients over-triaged (12.1% vs. 3.2%; Δ 8.9, 95% CI: 5.7, 11.7) and under-triaged (12.9% vs. 3.9%; Δ 9.0, 95% CI: 5.9, 12.0). Interrater agreement was higher with eCTAS (unweighted kappa 0.90 vs 0.63; quadratic-weighted kappa 0.94 vs. 0.79). Research assistants captured triage time for 4403 patients pre-eCTAS and 1849 post implementation of eCTAS.
Median triage time was 304 seconds pre-eCTAS and 329 seconds with eCTAS (Δ 25 seconds, 95% CI: 18, 32 seconds). Conclusion: A standardized, electronic approach to performing CTAS assessments improves both clinical decision making and administrative data accuracy without substantially increasing triage time.
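Both eCTAS abstracts report over- and under-triage as differences in proportions with 95% confidence intervals. A minimal sketch of that calculation using a Wald interval; the counts below are illustrative values chosen to roughly match the reported pre/post percentages, not the study data.

```python
import math

def prop_diff_ci(x1, n1, x2, n2, z=1.96):
    """Difference in proportions (p1 - p2) with a Wald 95% CI.

    x1/n1 = events/total in group 1 (e.g. over-triaged pre-eCTAS),
    x2/n2 = events/total in group 2 (e.g. over-triaged with eCTAS).
    """
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se
```

A confidence interval that excludes zero indicates a statistically significant change in the triage error rate. The studies may have used a different interval method (e.g. Newcombe); the Wald form is shown here for simplicity.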
While our fascination with understanding the past is sufficient to warrant an increased focus on synthesis, solutions to important problems facing modern society require understandings based on data that only archaeology can provide. Yet, even as we use public monies to collect ever-greater amounts of data, modes of research that can stimulate emergent understandings of human behavior have lagged behind. Consequently, a substantial amount of archaeological inference remains at the level of the individual project. We can more effectively leverage these data and advance our understandings of the past in ways that contribute to solutions to contemporary problems if we adapt the model pioneered by the National Center for Ecological Analysis and Synthesis to foster synthetic collaborative research in archaeology. We propose the creation of the Coalition for Archaeological Synthesis coordinated through a U.S.-based National Center for Archaeological Synthesis. The coalition will be composed of established public and private organizations that provide essential scholarly, cultural heritage, computational, educational, and public engagement infrastructure. The center would seek and administer funding to support collaborative analysis and synthesis projects executed through coalition partners. This innovative structure will enable the discipline to address key challenges facing society through evidentially based, collaborative synthetic research.
Giant electromagnetic pulses (EMP) generated during the interaction of high-power lasers with solid targets can seriously degrade electrical measurements and equipment. EMP emission is caused by the acceleration of hot electrons inside the target, which produce radiation across a wide band from DC to terahertz frequencies. Improved understanding and control of EMP is vital as we enter a new era of high repetition rate, high intensity lasers (e.g. the Extreme Light Infrastructure). We present recent data from the VULCAN laser facility that demonstrates how EMP can be readily and effectively reduced. Characterization of the EMP was achieved using B-dot and D-dot probes that took measurements for a range of different target and laser parameters. We demonstrate that target stalk geometry, material composition, geodesic path length and foil surface area can all play a significant role in the reduction of EMP. A combination of electromagnetic wave and 3D particle-in-cell simulations is used to inform our conclusions about the effects of stalk geometry on EMP, providing an opportunity for comparison with existing charge separation models.
The discovery of the first electromagnetic counterpart to a gravitational wave signal has generated follow-up observations by over 50 facilities world-wide, ushering in the new era of multi-messenger astronomy. In this paper, we present follow-up observations of the gravitational wave event GW170817 and its electromagnetic counterpart SSS17a/DLT17ck (IAU label AT2017gfo) by 14 Australian telescopes and partner observatories as part of Australian-based and Australian-led research programs. We report early- to late-time multi-wavelength observations, including optical imaging and spectroscopy, mid-infrared imaging, radio imaging, and searches for fast radio bursts. Our optical spectra reveal that the transient source emission cooled from approximately 6 400 K to 2 100 K over a 7-d period and produced no significant optical emission lines. The spectral profiles, cooling rate, and photometric light curves are consistent with the expected outburst and subsequent processes of a binary neutron star merger. Star formation in the host galaxy probably ceased at least a Gyr ago, although there is evidence for a galaxy merger. Binary pulsars with short (<100 Myr) decay times are therefore unlikely progenitors, but pulsars like PSR B1534+12 with its 2.7 Gyr coalescence time could produce such a merger. The displacement (~2.2 kpc) of the binary star system from the centre of the main galaxy is not unusual for stars in the host galaxy or stars originating in the merging galaxy, and therefore any constraints on the kick velocity imparted to the progenitor are poor.