The White River Badlands (WRB) of South Dakota record eolian activity spanning the late Pleistocene through the latest Holocene (21 ka to modern), reflecting the effects of the last glacial period and Holocene climate fluctuations (Holocene Thermal Maximum, Medieval Climate Anomaly, and Little Ice Age). The WRB dune fields are important paleoclimate indicators in an area of the Great Plains with few climate proxies. The goal of this study is to use 1 m/pixel-resolution digital elevation models from drone imagery to distinguish Early to Middle Holocene parabolic dunes from Late Holocene parabolic dunes. Results indicate that relative ages of dunes are distinguished by slope and roughness (terrain ruggedness index). Morphological differences are attributed to postdepositional wind erosion, soil formation, and mass wasting. Early to Middle Holocene and Late Holocene paleowind directions, 324° ± 13.1° (N = 7) and 323° ± 3.0° (N = 19), respectively, are similar to the modern wind regime. Results suggest significant landscape resilience to wind erosion, which resulted in preservation of a mosaic of Early and Late Holocene parabolic dunes. Quantification of dune characteristics will help refine the chronology of eolian activity in the WRB, provide insight into drought-driven landscape evolution, and integrate WRB eolian activity in a regional paleoenvironmental context.
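The terrain ruggedness index used to separate dune generations can be computed directly from a gridded DEM. A minimal sketch in Python (not the study's actual workflow; the arrays and values are illustrative), following the common 3 × 3 neighborhood definition of TRI:

```python
import numpy as np

def terrain_ruggedness_index(dem):
    """TRI (Riley et al., 1999): per cell, the square root of the sum of
    squared elevation differences to the 8 neighbors (edge cells excluded)."""
    dem = np.asarray(dem, dtype=float)
    tri = np.zeros((dem.shape[0] - 2, dem.shape[1] - 2))
    center = dem[1:-1, 1:-1]
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            neighbor = dem[1 + di : dem.shape[0] - 1 + di,
                           1 + dj : dem.shape[1] - 1 + dj]
            tri += (neighbor - center) ** 2
    return np.sqrt(tri)

# A flat surface has TRI 0; rougher terrain scores higher.
flat = np.ones((5, 5))
bumpy = np.arange(25, dtype=float).reshape(5, 5)
```

At 1 m/pixel resolution, each TRI cell summarizes elevation variability over a 3 × 3 m window, so dune surfaces smoothed by erosion and soil formation would typically score differently from fresher ones.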
Shallow firn cores, in addition to a near-basal ice core, were recovered in 2018 from the Quelccaya ice cap (5470 m a.s.l.) in the Cordillera Vilcanota, Peru, and in 2017 from the Nevado Illimani glacier (6350 m a.s.l.) in the Cordillera Real, Bolivia. The two sites are ~450 km apart. Despite meltwater percolation resulting from warming, particle-based trace element records (e.g. Fe, Mg, K) in the Quelccaya and Illimani shallow cores retain well-preserved signals. The firn core chronologies, established independently by annual layer counting, show convincing overlap, indicating that the two records contain comparable signals and therefore capture similar regional-scale climatology. Trace element records at ~1–4 cm resolution provide past records of anthropogenic emissions, dust sources, volcanic emissions, evaporite salts and marine-sourced air masses. Using novel ultra-high-resolution (120 μm) laser technology, we identify annual layer thicknesses ranging from 0.3 to 0.8 cm in a section of 2000-year-old radiocarbon-dated near-basal ice, which, compared with previous annual layer estimates, suggests that Quelccaya ice cores drilled to bedrock may be older than previously suggested by depth-age models. With the information collected from this study in combination with past studies, we emphasize the importance of collecting new surface-to-bedrock ice cores, particularly from the Quelccaya ice cap, given its projected disappearance as soon as the 2050s.
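The depth–age reasoning behind the basal-ice claim is simple arithmetic: thinner annual layers pack more years into each unit of depth. A hedged sketch (the section thickness is illustrative, not a measured profile from the core):

```python
def years_in_section(section_thickness_cm, annual_layer_cm):
    """Number of annual layers (years) represented by an ice section,
    assuming a constant layer thickness over that section."""
    return section_thickness_cm / annual_layer_cm

# Illustrative: a 10 cm near-basal section at the observed 0.3-0.8 cm
# annual layer thicknesses would span roughly 12 to 33 years.
min_years = years_in_section(10.0, 0.8)
max_years = years_in_section(10.0, 0.3)
```

Layers this thin at depth are what push the inferred basal age beyond earlier depth–age model estimates.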
The coronavirus disease 2019 (COVID-19) pandemic rocked the world, spurring the collapse of national commerce, international trade, education, air travel, and tourism. The global economy has been brought to its knees by the rapid spread of infection, resulting in widespread illness and many deaths. The rise in nationalism and isolationism, ethnic strife, disingenuous governmental reporting, lockdowns, travel restrictions, and vaccination misinformation have caused further problems. This has brought into stark relief the need for improved disease surveillance and health protection measures. National and international agencies that should have provided earlier warning in fact failed to do so. A robust global health network that includes enhanced cooperation with Military Intelligence, Surveillance, and Reconnaissance (ISR) assets, in conjunction with the existing international, governmental, and nongovernmental medical intelligence networks and allies and partners, would provide exceptional forward-looking, early-warning capability and is a proactive step toward making our future safe. This will be achieved by surveilling populations for new biothreats, fusing and disseminating data, and targeting assistance to reduce disease spread in unprotected populations.
Pilot projects (“pilots”) are important for testing hypotheses in advance of investing more funds for full research studies. For some programs, such as Clinical and Translational Science Awards (CTSAs) supported by the National Center for Advancing Translational Sciences, pilots also make up a significant proportion of the research projects conducted with direct CTSA support. Unfortunately, administrative data on pilots are not typically captured in accessible databases. Though data on pilots are included in Research Performance Progress Reports, they are often difficult to extract, especially for large programs like the CTSAs, where more than 600 pilots may be reported across all awardees annually. Data extraction challenges preclude analyses that could provide valuable information about pilots to researchers and administrators.
Methods:
To address those challenges, we describe a script that partially automates extraction of pilot data from CTSA research progress reports. After extraction of the pilot data, we use an established machine learning (ML) model to determine the scientific content of pilots for subsequent analysis. Analysis of ML-assigned scientific categories reveals the scientific diversity of the CTSA pilot portfolio and relationships among individual pilots and institutions.
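One simple way to surface related pilots, in the spirit of the content analysis described above, is bag-of-words cosine similarity between project descriptions. A minimal standard-library sketch (the pilot titles are hypothetical, and this toy method stands in for, rather than reproduces, the established ML model the study used):

```python
from collections import Counter
import math
import re

def tokenize(text):
    """Lowercase alphabetic tokens from a free-text project description."""
    return re.findall(r"[a-z]+", text.lower())

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words term-count vectors."""
    va, vb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Hypothetical pilot titles: the two sepsis projects should score
# as more similar to each other than to the outreach project.
pilots = [
    "Machine learning for sepsis prediction in the ICU",
    "Deep learning models for early sepsis detection",
    "Community engagement strategies for rural health outreach",
]
```

In practice a trained topic or category model handles synonymy and scale far better, but pairwise similarity of this kind is the core operation behind finding overlapping scientific interests among hubs.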
Results:
The CTSA pilots are widely distributed across a number of scientific areas. Content analysis identifies similar projects and the degree of overlap for scientific interests among hubs.
Conclusion:
Our results demonstrate that pilot data remain challenging to extract but can provide useful information for communicating with stakeholders, administering pilot portfolios, and facilitating collaboration among researchers and hubs.
This article is a clinical guide which discusses the “state-of-the-art” usage of the classic monoamine oxidase inhibitor (MAOI) antidepressants (phenelzine, tranylcypromine, and isocarboxazid) in modern psychiatric practice. The guide is for all clinicians, including those who may not be experienced MAOI prescribers. It discusses indications, drug-drug interactions, side-effect management, and the safety of various augmentation strategies. There is a clear and broad consensus (more than 70 international expert endorsers), based on 6 decades of experience, for the recommendations presented herein. They are based on empirical evidence and expert opinion—this guide is presented as a new specialist-consensus standard. The guide provides practical clinical advice, and is the basis for the rational use of these drugs, particularly because it improves and updates knowledge, and corrects the various misconceptions that have hitherto been prominent in the literature, partly due to insufficient knowledge of pharmacology. The guide suggests that MAOIs should always be considered in cases of treatment-resistant depression (including those melancholic in nature), and prior to electroconvulsive therapy—while taking into account patient preference. In selected cases, they may be considered earlier in the treatment algorithm than has previously been customary, and should not be regarded as drugs of last resort; they may prove decisively effective when many other treatments have failed. The guide clarifies key points on the concomitant use of incorrectly proscribed drugs such as methylphenidate and some tricyclic antidepressants. It also illustrates the straightforward “bridging” methods that may be used to transition simply and safely from other antidepressants to MAOIs.
In this 2019 cross-sectional study, we analyzed hospital records for Medicaid beneficiaries who acquired nonventilator hospital-acquired pneumonia (NVHAP). The results suggest that preventive dental treatment in the 12 months prior or periodontal therapy in the 6 months prior to a hospitalization is associated with a reduced risk of NVHAP.
To assess coronavirus disease 2019 (COVID-19) infection policies at leading US medical centers in the context of the initial wave of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) omicron variant.
Design:
Electronic survey study eliciting hospital policies on masking, personal protective equipment, cohorting, airborne-infection isolation rooms (AIIRs), portable HEPA filters, and patient and employee testing.
Setting and participants:
Hospital epidemiologists from 30 leading US hospitals: the U.S. News top 20 hospitals plus 10 hospitals in the CDC Prevention Epicenters program.
Methods:
Survey results were reported using descriptive statistics.
Results:
Of 30 hospital epidemiologists surveyed, 23 (77%) completed the survey between February 15 and March 3, 2022. Among the responding hospitals, 18 (78%) used medical masks for universal masking and 5 (22%) used N95 respirators. Sixteen hospitals (70%) required universal eye protection. Twenty-two hospitals (96%) used N95s for routine COVID-19 care and 1 (4%) reserved N95s for aerosol-generating procedures. Two responding hospitals (9%) utilized dedicated COVID-19 wards; 8 (35%) used mixed COVID-19 and non–COVID-19 units; and 13 (57%) used both dedicated and mixed units. Four hospitals (17%) used AIIRs for all COVID-19 patients, 10 (43%) prioritized AIIRs for aerosol-generating procedures, 3 (13%) used alternate risk-stratification criteria (not based on aerosol-generating procedures), and 6 (26%) did not routinely use AIIRs. Nine hospitals (39%) did not use portable HEPA filters, but 14 (61%) used them for various indications, most commonly as substitutes for AIIRs when unavailable or for specific high-risk areas or situations. Twenty-one hospitals (91%) tested asymptomatic patients on admission, but postadmission testing strategies and preferred specimen sites varied substantially. Five hospitals (22%) required regular testing of unvaccinated employees, and 1 hospital (4%) reported mandatory weekly testing even for vaccinated employees during the SARS-CoV-2 omicron surge.
Conclusions:
COVID-19 infection control practices in leading hospitals vary substantially. Clearer public health guidance and transparency around hospital policies may facilitate more consistent national standards.
This chapter reviews collaborative argumentation, where a community of learners works together to advance the collective state of knowledge through debate, engagement, and dialogue. Engagement in collaborative argumentation can help students learn to think critically and independently about important issues and contested values. Students must externalize their ideas and metacognitively reflect on their developing understandings. This chapter summarizes the history of argumentation theory; how arguing can contribute to learning through making knowledge explicit, conceptual change, collaboration, and reasoning skills; how argumentation skill develops in childhood; and how argumentation varies in different cultural and social contexts. The chapter concludes by describing a variety of tools that scaffold effective argumentation, including through computer-mediated communication forums and argumentation maps.
Given the relatively small industry scale of cow-calf operations in New York relative to other regions of the country, little is known about differences in determinant values for feeder cattle. Using auction prices and quality characteristics over 7 years, we find differences in market, lot, and quality parameters that suggest opportunities for improved marketing performance. A delta profit model is constructed to inform timing of marketing decisions for producers. The results indicate a relatively high potential for producers to increase farm returns by delaying sales of lighter-weight feeder cattle from the fall to spring auction months, given sufficient rates of gain and reasonable overwintering costs.
We interviewed 1,208 healthcare workers with positive SARS-CoV-2 tests between October 2020 and June 2021 to determine likely exposure sources. Overall, 689 (57.0%) had community exposures (479 from household members), 76 (6.3%) had hospital exposures (64 from other employees including 49 despite masking), 11 (0.9%) had community and hospital exposures, and 432 (35.8%) had no identifiable source of exposure.
The coronavirus disease 2019 (COVID-19) pandemic has significantly increased depression rates, particularly in emerging adults. The aim of this study was to examine longitudinal changes in depression risk before and during COVID-19 in a cohort of emerging adults in the U.S. and to determine whether prior drinking or sleep habits could predict the severity of depressive symptoms during the pandemic.
Methods
Participants were 525 emerging adults from the National Consortium on Alcohol and NeuroDevelopment in Adolescence (NCANDA), a five-site community sample including moderate-to-heavy drinkers. Poisson mixed-effect models evaluated changes in the Center for Epidemiological Studies Depression Scale (CES-D-10) from before to during COVID-19, also testing for sex and age interactions. Additional analyses examined whether alcohol use frequency or sleep duration measured in the last pre-COVID assessment predicted pandemic-related increase in depressive symptoms.
Results
The prevalence of risk for clinical depression tripled due to a substantial and sustained increase in depressive symptoms during COVID-19 relative to pre-COVID years. Effects were strongest for younger women. Frequent alcohol use and short sleep duration during the closest pre-COVID visit predicted a greater increase in COVID-19 depressive symptoms.
Conclusions
The sharp increase in depression risk among emerging adults heralds a public health crisis with alarming implications for their social and emotional functioning as this generation matures. In addition to the heightened risk for younger women, the role of alcohol use and sleep behavior should be tracked through preventive care aiming to mitigate this looming mental health crisis.
This chapter presents an overview of the nature, assessment, and treatment of obsessive-compulsive and related disorders (OCRD), including obsessive-compulsive disorder (OCD), body dysmorphic disorder (BDD), hoarding disorder (HD), hair-pulling disorder (HPD), and skin-picking disorder (SPD). Specifically, we review the DSM-5 diagnostic criteria, epidemiology and impact, clinical features and course, and etiological insights for each of these disorders in turn. Next, we discuss key points to consider when making a differential diagnosis with disorders outside the OCRD category. From there, we turn to a discussion of the assessment and treatment of these disorders using pharmacological, cognitive-behavioral, and neuromodulation interventions. Future directions in the research on OCRDs then follow.
New Zealand has a strategy of eliminating SARS-CoV-2 that has resulted in a low incidence of reported coronavirus disease 2019 (COVID-19). The aim of this study was to describe the spread of SARS-CoV-2 in New Zealand via a nationwide serosurvey of blood donors. Samples (n = 9806) were collected over a month-long period (3 December 2020–6 January 2021) from donors aged 16–88 years. The sample population was geographically spread, covering 16 of 20 district health board regions. A series of Spike-based immunoassays were utilised, and the serological testing algorithm was optimised for specificity given New Zealand is a low-prevalence setting. Eighteen samples were seropositive for SARS-CoV-2 antibodies, six of which were retrospectively matched to previously confirmed COVID-19 cases. A further four were from donors who travelled to settings with a high risk of SARS-CoV-2 exposure, suggesting likely infection outside New Zealand. The remaining eight seropositive samples were from seven different district health regions, for a true seroprevalence estimate, adjusted for test sensitivity and specificity, of 0.103% (95% confidence interval, 0.09–0.12%). The very low seroprevalence is consistent with limited undetected community transmission and provides robust, serological evidence to support New Zealand's successful elimination strategy for COVID-19.
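Adjusting an apparent seroprevalence for imperfect test sensitivity and specificity, as described above, is commonly done with the Rogan–Gladen estimator. A sketch in Python (the sensitivity and specificity values are illustrative assumptions, not the study's actual assay parameters):

```python
def rogan_gladen(apparent_prev, sensitivity, specificity):
    """Adjust an apparent (test-positive) prevalence for imperfect test
    accuracy (Rogan & Gladen, 1978); the result is clipped to [0, 1]."""
    adjusted = (apparent_prev + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(adjusted, 0.0), 1.0)

# Illustrative values only: 8 unexplained seropositives out of 9806 donors,
# with assumed assay sensitivity 0.90 and specificity 0.999.
apparent = 8 / 9806
estimate = rogan_gladen(apparent, sensitivity=0.90, specificity=0.999)
```

In a low-prevalence setting even a small specificity shortfall can swamp the true signal, which is why the study's testing algorithm was optimised for specificity.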
Group Name: CDC Prevention Epicenters Program Background: Reverse-transcriptase polymerase chain reaction (RT-PCR) tests are the reference standard for diagnosing SARS-CoV-2 infection, but false positives can occur and viral RNA may persist for weeks to months after recovery. Isolating such patients increases pressure on limited hospital resources and may impede care. Therefore, we quantified the percentage of patients who tested positive by RT-PCR yet were unlikely to be infectious and could be released from isolation. Methods: We prospectively identified all adults hospitalized at Brigham and Women’s Hospital (Boston, MA) who tested positive for SARS-CoV-2 by RT-PCR (primarily Hologic Panther Fusion or Cepheid Xpert platforms) between December 24, 2020, and January 24, 2021. Each case was assessed by infection control staff for possible discontinuation of isolation using an algorithm that incorporated the patient’s prior history of COVID-19, current symptoms, RT-PCR cycle threshold (Ct) values, repeat RT-PCR testing at least 24 hours later, and SARS-CoV-2 serologies (Figure 1). Results: Overall, 246 hospitalized patients (median age, 66 years [interquartile range, 50–74]; 131 [53.3%] male) tested positive for SARS-CoV-2 by RT-PCR during the study period. Of these, 201 (81.7%) were deemed new diagnoses of active disease on the basis of low Ct values and/or progressive symptoms. Moreover, 44 patients (17.9%) were deemed noninfectious: 35 (14.2%) had prior known resolved infections (n = 21) or unknown prior infection but positive serology (n = 14), high Ct values on initial testing, and negative or stably high Ct values on repeat testing. Also, 5 (2.0%) had recent infection but >10 days had passed since symptom onset and they were clinically improving. In addition, 4 (1.6%) results were deemed false positives based on lack of symptoms and at least 1 negative repeat RT-PCR test (Figure 2).
One patient was asymptomatic with Ct value <35 but was discharged before further testing could be obtained. Among the 44 noninfectious patients, isolation was discontinued a median of 3 days (IQR, 2–4) after the first positive test. We did not identify any healthcare worker infections attributable to early discontinuation of isolation in these patients. Conclusions: During the winter COVID-19 second surge in Massachusetts, nearly 1 in 5 hospitalized patients who tested positive for SARS-CoV-2 by RT-PCR were deemed noninfectious and eligible for discontinuation of precautions. Most of these cases were consistent with residual RNA from prior known or undiagnosed infections. Active assessments of SARS-CoV-2 RT-PCR tests by infection control practitioners using clinical data, Ct values, repeat tests, and serologies can safely validate the release of many patients from isolation and thereby conserve resources and facilitate patient care.
In 2020 a group of U.S. healthcare leaders formed the National Organization to Prevent Hospital-Acquired Pneumonia (NOHAP) to issue a call to action to address non–ventilator-associated hospital-acquired pneumonia (NVHAP). NVHAP is one of the most common and morbid healthcare-associated infections, but it is not tracked, reported, or actively prevented by most hospitals. This national call to action includes (1) launching a national healthcare conversation about NVHAP prevention; (2) adding NVHAP prevention measures to education for patients, healthcare professionals, and students; (3) challenging healthcare systems and insurers to implement and support NVHAP prevention; and (4) encouraging researchers to develop new strategies for NVHAP surveillance and prevention. The purpose of this document is to outline research needs to support the NVHAP call to action. Primary needs include the development of better models to estimate the economic cost of NVHAP, to elucidate the pathophysiology of NVHAP and identify the most promising pathways for prevention, to develop objective and efficient surveillance methods to track NVHAP, to rigorously test the impact of prevention strategies proposed to prevent NVHAP, and to identify the policy levers that will best engage hospitals in NVHAP surveillance and prevention. A joint task force developed this document including stakeholders from the Veterans’ Health Administration (VHA), the U.S. Centers for Disease Control and Prevention (CDC), The Joint Commission, the American Dental Association, the Patient Safety Movement Foundation, Oral Health Nursing Education and Practice (OHNEP), Teaching Oral-Systemic Health (TOSH), industry partners and academia.
ABSTRACT IMPACT: Analyzing the types of technical assistance (basic, targeted, or intensive) provided by the Opioid Response Network (ORN) to unique and hard-to-reach populations (UHRP) informs addiction health services and translational research by identifying technical assistance needs in these populations which may require a higher level of intensity. OBJECTIVES/GOALS: To improve ORN dissemination and implementation efforts, the project classifies TA requests into one of three categories: basic, targeted, and intensive. This TA Framework assists the ORN project team in understanding the level of TA required in the delivery of evidence-based practices to address opioids with communities with respect to UHRP. METHODS/STUDY POPULATION: TA requests from April 1, 2019, to April 1, 2020, were selected. The ORN classifies TA requests in one of three categories: basic (dissemination & brief consultation), targeted (services to enhance readiness and capacity), and intensive (full incorporation of innovation considering context, culture, and linguistics) (Fixsen et al., 2009; Becker et al., 2020). Unique and hard-to-reach populations (UHRP) are defined based on physical location (e.g., remote or isolated), social position, or other vulnerabilities (e.g., membership in an ethnic or racial minority group) (Thurman & Harrison, 2020). ORN classifies 26 types of UHRP; these types are not mutually exclusive. A frequency analysis of the UHRP types was conducted. Bivariate correlations between UHRP types that had a minimum of 30 cases were performed. RESULTS/ANTICIPATED RESULTS: Among 746 TA requests selected, 226 had missing information about UHRP types and 29 had missing information about TA levels. These requests were excluded from the frequency analysis. The three most common UHRP types were people living in rural or remote areas (n=262, 50%), people who are uninsured or underinsured (n=162, 31%), and people who inject drugs (n=158, 30%).
Most TA requests were targeted (69%), 23% were intensive, and 9% were basic. Bivariate correlations were performed between 21 UHRP types. Moderate (Pearson’s r = 0.4–0.6) or strong (r > 0.6) correlations were found in 11 occurrences for the UHRP type ‘LGBT’, 8 for ‘Mental Illness’, and 7 for ‘Veterans’. Strong correlations were found between ‘Justice Involved’ and ‘Incarcerated’ (r = 0.645), and between ‘Disabilities’ and ‘Chronic Pain’ (r = 0.603). DISCUSSION/SIGNIFICANCE OF FINDINGS: There were more TA requests at the targeted and intensive levels than at the basic level, suggesting the need for services to enhance readiness and build capacity. The moderate/strong correlations indicate that UHRP types were likely to coexist with other types. Future research can explore combining UHRP types that have moderate/strong correlations.
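The bivariate correlations between UHRP types reduce to the plain Pearson formula applied to 0/1 indicator columns (one value per TA request). A minimal sketch (the indicator vectors below are hypothetical, not ORN data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length
    numeric sequences; returns 0.0 if either has zero variance."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

# Hypothetical 0/1 flags per TA request for two co-occurring UHRP types:
justice_involved = [1, 1, 0, 1, 0, 0, 1, 0]
incarcerated     = [1, 1, 0, 1, 0, 0, 0, 0]
r = pearson_r(justice_involved, incarcerated)  # strong: the flags mostly co-occur
```

For binary indicators this is the phi coefficient, which is why frequently co-assigned UHRP types such as ‘Justice Involved’ and ‘Incarcerated’ score highly.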
An early economic evaluation to inform the translation into clinical practice of a spectroscopic liquid biopsy for the detection of brain cancer. Two specific aims are (1) to update an existing economic model with results from a prospective study of diagnostic accuracy and (2) to explore the potential of brain tumor-type predictions to affect patient outcomes and healthcare costs.
Methods
A cost-effectiveness analysis from a UK NHS perspective of the use of spectroscopic liquid biopsy in primary and secondary care settings, as well as a cost–consequence analysis of the addition of tumor-type predictions, was conducted. Decision tree models were constructed to represent simplified diagnostic pathways. Test diagnostic accuracy parameters were based on a prospective validation study. Four price points (GBP 50–200, EUR 57–228) for the test were considered.
Results
In both settings, the use of liquid biopsy produced QALY gains. In primary care, at test costs below GBP 100 (EUR 114), testing was cost saving. At GBP 100 (EUR 114) per test, the ICER was GBP 13,279 (EUR 15,145), whereas at GBP 200 (EUR 228), the ICER was GBP 78,300 (EUR 89,301). In secondary care, the ICER ranged from GBP 11,360 (EUR 12,956) to GBP 43,870 (EUR 50,034) across the range of test costs.
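The ICERs reported above follow the standard definition: incremental cost divided by incremental QALYs, with a strategy that is both cheaper and more effective reported as cost saving (dominant) rather than as a ratio. A sketch with illustrative numbers (not the model's actual outputs):

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained.
    Returns None when the new strategy dominates (cheaper and more
    effective), which is conventionally reported as 'cost saving'."""
    d_cost = cost_new - cost_old
    d_qaly = qaly_new - qaly_old
    if d_qaly <= 0:
        raise ValueError("new strategy yields no QALY gain")
    return None if d_cost < 0 else d_cost / d_qaly

# Illustrative: GBP 100 extra cost buying 1 extra QALY -> ICER of 100.
per_qaly = icer(1100.0, 1000.0, 11.0, 10.0)
# Cheaper and more effective -> dominant, reported as cost saving.
saving = icer(900.0, 1000.0, 11.0, 10.0)
```

This is why testing below GBP 100 shows as cost saving in primary care while higher test prices produce finite, rising ICERs.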
Conclusions
The results demonstrate the potential for the technology to be cost-effective in both primary and secondary care settings. Additional studies of test use in routine primary care practice are needed to resolve the remaining issues of uncertainty—prevalence in this patient population and referral behavior.
Observations of teleseismic earthquakes using broadband seismometers on the Ross Ice Shelf (RIS) must contend with environmental and structural processes that do not exist for land-sited seismometers. Important considerations are: (1) a broadband, multi-mode ambient wavefield excited by ocean gravity wave interactions with the ice shelf; (2) body wave reverberations produced by seismic impedance contrasts at the ice/water and water/seafloor interfaces; and (3) decoupling of the solid Earth horizontal wavefield by the sub-shelf water column. We analyze seasonal and geographic variations in signal-to-noise ratios for teleseismic P-wave (0.5–2.0 s), S-wave (10–15 s) and surface wave (13–25 s) arrivals relative to the RIS noise field. We use ice and water layer reverberations generated by teleseismic P-waves to accurately estimate the sub-station thicknesses of these layers. We present observations consistent with the theoretically predicted transition of the water column from compressible to incompressible mechanics, relevant for vertically incident solid Earth waves with periods longer than 3 s. Finally, we observe symmetric-mode Lamb waves generated by teleseismic S-waves incident on the grounding zones. Despite their complexity, we conclude that teleseismic coda can be utilized for passive imaging of sub-shelf Earth structure, although longer deployments relative to conventional land-sited seismometers will be necessary to acquire adequate data.
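Estimating sub-station ice and water thicknesses from P-wave reverberations reduces, for vertically traveling energy, to the two-way travel-time relation h = v·t/2. A sketch (the velocities and delay times below are illustrative assumptions, not the deployment's measured values):

```python
def layer_thickness(two_way_time_s, velocity_m_s):
    """Layer thickness from the two-way travel time of a vertically
    reverberating wave within the layer: h = v * t / 2."""
    return velocity_m_s * two_way_time_s / 2.0

# Illustrative: ~3850 m/s P-wave speed in ice, ~1450 m/s in seawater.
# Reverberation delays of 0.17 s (ice) and 0.60 s (water column) imply:
ice_h = layer_thickness(0.17, 3850.0)    # roughly 327 m of ice
water_h = layer_thickness(0.60, 1450.0)  # roughly 435 m of water
```

In practice the reverberation delays are picked from the teleseismic P coda at each station, so the same arrivals that complicate the records also constrain the local ice and water structure.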