As COVID-19 was declared a health emergency in March 2020, there was immense demand for information about the novel pathogen. This paper examines the clinician-reported impact of Project ECHO COVID-19 Clinical Rounds on clinician learning. Primary sources of study data were Continuing Medical Education (CME) surveys for each session from March 24, 2020 to July 30, 2020 and impact surveys conducted in November 2020, which sought to understand participants’ overall assessment of sessions. Quantitative analyses included descriptive statistics and Mann-Whitney testing. Qualitative data were analyzed through inductive thematic analysis. Clinicians rated their knowledge after each session as significantly higher than before that session; 75.8% of clinicians reported they would ‘definitely’ or ‘probably’ use content gleaned from each attended session, and clinicians reported specific clinical and operational changes made as a direct result of sessions. 94.6% of respondents reported that COVID-19 Clinical Rounds helped them provide better care to patients, and 89% of respondents indicated they ‘strongly agree’ that they would join ECHO calls again. COVID-19 Clinical Rounds offers a promising model for the establishment of dynamic peer-to-peer tele-mentoring communities for low- or no-notice response where scientifically tested or clinically verified practice evidence is limited.
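The pre/post knowledge comparison described above can be sketched with a hand-rolled Mann-Whitney U test. The ratings below are fabricated 5-point self-ratings for illustration only (the study's actual survey data are not reproduced here), and the normal approximation shown ignores the tie correction:

```python
# Hypothetical sketch of a Mann-Whitney U comparison of pre- vs
# post-session knowledge ratings (made-up data, not the study's).
import math

pre  = [2, 3, 2, 3, 2, 3, 2, 2, 3, 2]   # fabricated "before session" ratings
post = [4, 4, 5, 4, 3, 5, 4, 4, 5, 4]   # fabricated "after session" ratings

# U counts how often a post rating exceeds a pre rating (ties count 0.5)
u = sum((a > b) + 0.5 * (a == b) for a in post for b in pre)

n1, n2 = len(post), len(pre)
mu = n1 * n2 / 2                                   # mean of U under H0
sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)    # SD of U (no tie correction)
z = (u - mu) / sigma
p = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))     # one-sided p-value
print(f"U = {u}, z = {z:.2f}, p = {p:.2g}")        # small p: post > pre
```

In practice one would use a library routine (e.g. `scipy.stats.mannwhitneyu`) rather than this normal approximation, especially with many tied ranks.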
OBJECTIVES/GOALS: Using the covariate-rich Veterans Health Administration data, estimate the association between Proton Pump Inhibitor (PPI) use and severe COVID-19, rigorously adjusting for confounding using propensity score (PS)-weighting. METHODS/STUDY POPULATION: We assembled a national retrospective cohort of United States veterans who tested positive for SARS-CoV-2, with information on 33 covariates including comorbidity diagnoses, lab values, and medications. Current outpatient PPI use was compared to non-use (two or more fills and pills on hand at admission vs no PPI prescription fill in the prior year). The primary composite outcome was mechanical ventilation use or death within 60 days; the secondary composite outcome also included ICU admission. PS-weighting mimicked a 1:1 matched cohort, allowing inclusion of all patients while achieving good covariate balance. The weighted cohort was analyzed using logistic regression. RESULTS/ANTICIPATED RESULTS: Our analytic cohort included 97,674 veterans with SARS-CoV-2 testing, of whom 14,958 (15.3%) tested positive (6,262 [41.9%] current PPI users, 8,696 [58.1%] non-users). After weighting, all covariates were well balanced, with standardized mean differences below a threshold of 0.1. Prior to PS-weighting (no covariate adjustment), we observed higher odds of the primary (9.3% vs 7.5%; OR 1.27, 95% CI 1.13-1.43) and secondary (25.8% vs 21.4%; OR 1.27, 95% CI 1.18-1.37) outcomes among PPI users vs non-users. After PS-weighting, PPI use vs non-use was not associated with the primary (8.2% vs 8.0%; OR 1.03, 95% CI 0.91-1.16) or secondary (23.4% vs 22.9%; OR 1.03, 95% CI 0.95-1.12) outcomes. DISCUSSION/SIGNIFICANCE: The associations between PPI use and severe COVID-19 outcomes that have been previously reported may be due to limitations in the covariates available for adjustment. With respect to COVID-19, our robust PS-weighted analysis provides patients and providers with further evidence for PPI safety.
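The PS-weighting scheme described above (weights that mimic a 1:1 matched cohort while retaining all patients) can be illustrated with a minimal synthetic example. Everything here is invented for illustration: a single binary confounder stands in for the study's 33 covariates, and the propensity score is estimated by simple stratum proportions rather than a fitted model:

```python
# Minimal sketch of propensity-score (ATT) weighting with a covariate-balance
# check via standardized mean differences (SMD). Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
comorbidity = rng.binomial(1, 0.4, n)          # one binary confounder
# Treatment (PPI use) is more likely among patients with the comorbidity,
# which confounds any crude treated-vs-untreated comparison.
ppi = rng.binomial(1, np.where(comorbidity == 1, 0.6, 0.3))

# Estimate the propensity score as the treated fraction in each stratum
ps_hat = np.empty(n)
for level in (0, 1):
    mask = comorbidity == level
    ps_hat[mask] = ppi[mask].mean()

# ATT weights: treated patients get weight 1; controls get the odds of
# treatment, ps/(1-ps), which reweights them to resemble the treated group.
w = np.where(ppi == 1, 1.0, ps_hat / (1 - ps_hat))

def smd(x, treated, weights):
    """Weighted standardized mean difference of covariate x between groups."""
    m1 = np.average(x[treated == 1], weights=weights[treated == 1])
    m0 = np.average(x[treated == 0], weights=weights[treated == 0])
    v1 = np.average((x[treated == 1] - m1) ** 2, weights=weights[treated == 1])
    v0 = np.average((x[treated == 0] - m0) ** 2, weights=weights[treated == 0])
    return abs(m1 - m0) / np.sqrt((v1 + v0) / 2)

before = smd(comorbidity, ppi, np.ones(n))   # large: groups are imbalanced
after = smd(comorbidity, ppi, w)             # near zero: balance achieved
print(f"SMD before weighting: {before:.3f}, after: {after:.3f}")
```

After such weighting passes the SMD < 0.1 check on every covariate, the outcome model (logistic regression in the study) is fit on the weighted cohort.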
King George Island (South Shetland Islands, Antarctic Peninsula) is renowned for its terrestrial palaeoenvironmental record, which includes evidence for potentially up to four Cenozoic glacial periods. An advantage of the glacigenic outcrops on the island is that they are associated with volcanic formations that can be isotopically dated. As a result of a new mapping and chronological study, it can now be shown that the published stratigraphy and ages of many geological units on eastern King George Island require major revision. The Polonez Glaciation is dated as c. 26.64 ± 1.43 Ma (Late Oligocene (Chattian Stage)) and includes the outcrops previously considered as evidence for an Eocene glacial ('Krakow Glaciation'). It was succeeded by two important volcanic episodes (Boy Point and Cinder Spur formations) formed during a relatively brief interval (< 2 Ma), which also erupted within the Oligocene Chattian Stage. The Melville Glaciation is dated as c. 21–22 Ma (probably 21.8 Ma; Early Miocene (Aquitanian Stage)), and the Legru Glaciation is probably ≤ c. 10 Ma (Late Miocene or younger). As a result of this study, the Polonez and Melville glaciations can now be correlated with increased confidence with the Oi2b and Mi1a isotope zones, respectively, and thus represent major glacial episodes.
New mapping and dating of volcanic outcrops on the east coast of Admiralty Bay, King George Island, has demonstrated that Eocene volcanic sequences are dominant and also crop out extensively elsewhere, particularly on the eastern part of the island. The sequences can be divided into at least three formations (Hennequin, Cape Vauréal and Carruthers Cliff) together with Eocene strata at Warkocz and near Lions Rump that are currently unassigned stratigraphically. New and recently published 40Ar/39Ar ages indicate that all of the formations are Early Eocene in age, mainly Ypresian, extending to Lutetian and possibly even Priabonian time in more easterly outcrops. Compositional contrasts exist between the groups (calc-alkaline vs tholeiitic). The formations are mainly composed of lavas, and many show evidence for contemporary inundation by water. They are interbedded with sedimentary rocks deposited mainly during flooding events as debris flows, debris avalanches and hyperconcentrated flows, from traction currents, and in lakes. The common presence of juvenile volcanic detritus suggests that the sediments were probably linked to explosive hydrovolcanic eruptions, some of which were possibly rooted in summit ice caps. Other evidence is also permissive, but the presence of Eocene ice on King George Island is not well established at present.
The first demonstration of laser action in ruby was made in 1960 by T. H. Maiman of Hughes Research Laboratories, USA. Many laboratories worldwide began the search for lasers using different materials, operating at different wavelengths. In the UK, academia, industry and the central laboratories took up the challenge from the earliest days to develop these systems for a broad range of applications. This historical review looks at the contribution the UK has made to the advancement of the technology, the development of systems and components and their exploitation over the last 60 years.
Given the demographic challenges of an ageing population, rising patient expectations, and the growing emphasis placed on cost containment by healthcare providers, cost-effective regenerative medicine approaches for the regeneration of damaged and diseased organs and tissues are a major clinical and socio-economic need. The scope of this chapter is to use skeletal regeneration as the exemplar to discuss classical and high-throughput screening approaches to biomaterials development for regenerative medicine, including the choice and design of materials based on clinical need, biological assessment and regulatory issues.
Basic principles: development of materials for regenerative medicine
The ageing of populations in developed countries is accompanied by a growing need for replacement and repair of damaged organs and tissues. Transplantation of the patient’s own tissue is still considered the gold standard in many applications, but limited availability and the complications associated with harvesting the so-called autograft are important drawbacks. Tissues and organs from human or animal donors present issues of disease transmission and functional failure. Alternative strategies based on biological growth factors, cell therapy and tissue-engineered constructs are being explored, but their use is hampered by biological instability and high costs. These issues demonstrate the need for strategies based on biomaterials, which are often synthetic and thus less prone to instability problems. In addition, the fact that (synthetic) biomaterials can often be produced in large quantities, and thus be available off the shelf, is an important advantage when coping with an increasing need for regenerative approaches.
People wounded during bombings or other events resulting in mass casualties or in conjunction with the resulting emergency response may be exposed to blood, body fluids, or tissue from other injured people and thus be at risk for bloodborne infections such as hepatitis B virus, hepatitis C virus, human immunodeficiency virus, or tetanus. This report adapts existing general recommendations on the use of immunization and postexposure prophylaxis for tetanus and for occupational and nonoccupational exposures to bloodborne pathogens to the specific situation of a mass casualty event. Decisions regarding the implementation of prophylaxis are complex, and drawing parallels from existing guidelines is difficult. For any prophylactic intervention to be implemented effectively, guidance must be simple, straightforward, and logistically undemanding. Critical review during development of this guidance was provided by representatives of the National Association of County and City Health Officials, the Council of State and Territorial Epidemiologists, and representatives of the acute injury care, trauma, and emergency response medical communities participating in the Centers for Disease Control and Prevention’s Terrorism Injuries: Information, Dissemination and Exchange project. The recommendations contained in this report represent the consensus of US federal public health officials and reflect the experience and input of public health officials at all levels of government and the acute injury response community. (Disaster Med Public Health Preparedness. 2008;2:150–165)
Mass casualty triage is the process of prioritizing multiple victims when resources are not sufficient to treat everyone immediately. No national guideline for mass casualty triage exists in the United States. The lack of a national guideline has resulted in variability in triage processes, tags, and nomenclature. This variability has the potential to inject confusion and miscommunication into the disaster incident, particularly when multiple jurisdictions are involved. The Model Uniform Core Criteria for Mass Casualty Triage were developed to be a national guideline for mass casualty triage to ensure interoperability and standardization when responding to a mass casualty incident. The Core Criteria consist of 4 categories: general considerations, global sorting, lifesaving interventions, and individual assessment of triage category. The criteria within each of these categories were developed by a workgroup of experts representing national stakeholder organizations who used the best available science and, when necessary, consensus opinion. This article describes how the Model Uniform Core Criteria for Mass Casualty Triage were developed.
(Disaster Med Public Health Preparedness. 2011;5:129-137)
Mass casualty triage is a critical skill. Although many systems exist to guide providers in making triage decisions, there is little scientific evidence available to demonstrate that any of the available systems have been validated. Furthermore, in the United States there is little consistency from one jurisdiction to the next in the application of mass casualty triage methodology. There are no nationally agreed upon categories or color designations. This review reports on a consensus committee process used to evaluate and compare commonly used triage systems, and to develop a proposed national mass casualty triage guideline. The proposed guideline, entitled SALT (sort, assess, life-saving interventions, treatment and/or transport) triage, was developed based on the best available science and consensus opinion. It incorporates aspects from all of the existing triage systems to create a single overarching guide for unifying the mass casualty triage process across the United States. (Disaster Med Public Health Preparedness. 2008;2(Suppl 1):S25–S34)