Many clinical trials leverage real-world data. Typically, these data are manually abstracted from electronic health records (EHRs) and entered into electronic case report forms (CRFs), a time- and labor-intensive process that is also error-prone and may miss information. Automated transfer of data from EHRs to CRFs has the potential to reduce the data abstraction and entry burden as well as improve data quality and safety.
Methods:
We conducted a test of automated EHR-to-CRF data transfer for 40 participants in a clinical trial of hospitalized COVID-19 patients. We determined which coordinator-entered data could be populated automatically from the EHR (coverage) and how often the values from the automated EHR feed exactly matched the values entered by study personnel for the actual study (concordance).
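As a concrete illustration of the two metrics, the following Python sketch computes coverage and concordance for a single participant's CRF. The field names, value types, and the exact-match rule are illustrative assumptions, not the study's implementation.

```python
# Hypothetical sketch of the two metrics described above: "coverage" is the
# fraction of coordinator-completed CRF values the automated EHR feed could
# populate, and "concordance" is the fraction of doubly populated fields whose
# values match exactly. Field names and values are invented for illustration.
from typing import Optional


def coverage_and_concordance(
    coordinator: dict[str, Optional[str]],
    automated: dict[str, Optional[str]],
) -> tuple[float, float]:
    """Return (coverage, concordance) for one participant's CRF."""
    completed = [f for f, v in coordinator.items() if v is not None]
    populated = [f for f in completed if automated.get(f) is not None]
    coverage = len(populated) / len(completed)
    matches = sum(1 for f in populated if automated[f] == coordinator[f])
    concordance = matches / len(populated)
    return coverage, concordance


# Toy example (not study data):
crf = {"temp_c": "37.2", "wbc": "9.1", "o2_device": "nasal cannula"}
ehr = {"temp_c": "37.2", "wbc": "9.0", "o2_device": None}
print(coverage_and_concordance(crf, ehr))  # (0.67, 0.5)
```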
Results:
The automated EHR feed populated 10,081/11,952 (84%) of coordinator-completed values. For fields where both the automation and study personnel provided data, the values matched exactly 89% of the time. Concordance was highest for daily laboratory results (94%), which also required the most personnel time (30 minutes per participant). In a detailed analysis of 196 instances in which the personnel-entered and automation-supplied values differed, both a study coordinator and a data analyst agreed that 152 (78%) of the discrepancies resulted from data entry error.
Conclusions:
An automated EHR feed has the potential to significantly decrease study personnel effort while improving the accuracy of CRF data.
In another article in this issue, Black et al. discuss their preferred approach to estimating Supreme Court justices’ Big Five personality traits from written text and provide several critiques of the approach of Hall et al. In this rejoinder, we show that Black et al.’s critiques are substantially without merit, their preferred approach suffers from many of the same drawbacks that they project onto our approach, their specific method of implementing their preferred approach runs afoul of many contemporary social scientific norms, our use of concurrences to estimate personality traits is far more justifiable than they suggest (especially in contrast to their use of lower court opinions), and their substantive critiques reflect a potential misunderstanding of the nature of conscientiousness. Nonetheless, we also acknowledge their broader point regarding the state-of-the-art textual analysis methodology vis-à-vis the estimation of personality traits, and we provide some constructive suggestions for the path forward.
Models of behavior on the US Supreme Court almost universally assume that justices’ behavior depends, at least in part, on the characteristics of individual justices. However, few prior studies have attempted to assess these characteristics beyond ideological preferences. In contrast, we apply recent advances in machine learning to develop and validate measures of the Big Five personality traits for Supreme Court justices serving during the 1946 through 2015 terms based on the language in their written opinions. We then conduct an empirical application to demonstrate the importance of these Supreme Court Individual Personality Estimates and discuss their proper use.
Use of intensive longitudinal methods (e.g. ecological momentary assessment, passive sensing) and machine learning (ML) models to predict risk for depression and suicide has increased in recent years. However, these studies often vary considerably in length, ML methods used, and sources of data. The present study examined predictive accuracy for depression and suicidal ideation (SI) as a function of time, comparing different combinations of ML methods and data sources.
Methods
Participants were 2459 first-year training physicians (55.1% female; 52.5% White) who were provided with Fitbit wearable devices and assessed daily for mood. Linear [elastic net regression (ENR)] and non-linear (random forest) ML algorithms were used to predict depression and SI at the first-quarter follow-up assessment, using two sets of variables (daily mood features only, daily mood features + passive-sensing features). To assess accuracy over time, models were estimated iteratively for each of the first 92 days of internship, using data available up to that point in time.
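A minimal sketch of this expanding-window design is shown below, assuming a binary depression outcome, scikit-learn learners (an elastic-net-penalised logistic regression standing in for ENR, since the outcome here is binary), and daily-feature column names of our own invention; the study's actual feature engineering, tuning, and validation scheme are not reproduced.

```python
# Expanding-window evaluation: for each day d of internship, fit models on
# features observed up to day d and score them against the quarter-end outcome.
# Column names, CV scheme, and hyperparameters are assumptions, not study code.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict


def auc_by_day(daily: pd.DataFrame, outcome: pd.Series, max_day: int = 92) -> pd.DataFrame:
    """daily: one row per participant, columns like 'mood_mean_d1' ... 'mood_mean_d92'."""
    rows = []
    for d in range(7, max_day + 1):  # start once at least a week of data exists
        cols = [c for c in daily.columns if int(c.rsplit("_d", 1)[1]) <= d]
        X, y = daily[cols].to_numpy(), outcome.to_numpy()
        models = {
            "ENR": LogisticRegression(penalty="elasticnet", solver="saga",
                                      l1_ratio=0.5, C=1.0, max_iter=5000),
            "RF": RandomForestClassifier(n_estimators=300, random_state=0),
        }
        for name, model in models.items():
            pred = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
            rows.append({"day": d, "model": name, "auc": roc_auc_score(y, pred)})
    return pd.DataFrame(rows)
```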
Results
ENRs using only the daily mood features generally had the best accuracy for predicting mental health outcomes, and predictive accuracy within 1 standard error of the full 92-day models was attained by weeks 7–8. Depression at 92 days could be predicted accurately (area under the curve >0.70) after only 14 days of data collection.
Conclusions
Simpler ML methods may outperform more complex methods until passive-sensing features become better specified. For intensive longitudinal studies, there may be limited predictive value in collecting data for more than 2 months.
Sports participation, physical activity, and friendship quality are theorized to have protective effects on the developmental emergence of substance use and self-harm behavior in adolescence, but existing research has been mixed. This ambiguity could reflect, in part, the potential for confounding of observed associations by genetic and environmental factors, which previous research has been unable to rigorously rule out. We used data from the prospective, population-based Child and Adolescent Twin Study in Sweden (n = 18,234 born 1994–2001) and applied a co-twin control design to account for potential genetic and environmental confounding of sports participation, physical activity, and friendship quality (assessed at age 15) as presumed protective factors for adolescent substance use and self-harm behavior (assessed at age 18). While confidence intervals widened to include the null in numerous co-twin control analyses adjusting for childhood psychopathology, parent-reported sports participation and twin-reported positive friendship quality were associated with increased odds of alcohol problems and nicotine use. However, parent-reported sports participation, twin-reported physical activity, and twin-reported friendship quality were associated with decreased odds of self-harm behavior. The findings provide a more nuanced understanding of the risks and benefits of putative protective factors for risky behaviors that emerge during adolescence.
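For readers unfamiliar with the co-twin control design, the sketch below illustrates the core idea on simulated data: a conditional logistic regression stratified on twin-pair ID, so each pair serves as its own control for shared genetic and environmental background. Variable names and effect sizes are invented and do not reflect the CATSS analyses.

```python
# Hedged sketch of a co-twin control analysis: conditional logistic regression
# of an age-18 outcome on an age-15 exposure, stratified by twin pair.
# Data are simulated, not CATSS data.
import numpy as np
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(0)
n_pairs = 500
pair_id = np.repeat(np.arange(n_pairs), 2)             # two twins per pair
exposure = rng.integers(0, 2, size=2 * n_pairs)        # e.g. sports participation at 15
pair_effect = np.repeat(rng.normal(size=n_pairs), 2)   # shared familial confounding
logit = -1.0 - 0.5 * exposure + pair_effect            # true within-pair effect
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # e.g. self-harm at 18

# Only pairs discordant on the outcome contribute to the conditional likelihood.
model = ConditionalLogit(outcome, exposure.reshape(-1, 1), groups=pair_id)
result = model.fit()
print(np.exp(result.params))  # within-pair odds ratio estimate
```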
Many graduate programs are sincerely invested in fostering diversity and increasing the number of students from underrepresented backgrounds who will contribute to our discipline. But increasing representation is only one step needed to address inequities, disparities, and injustices. Helping all students thrive and have an equal opportunity to achieve their educational goals requires the creation of “safe spaces” in which demographic differences are understood, appreciated, and considered in larger educational systems. This chapter discusses a frequently overlooked identity characteristic that can significantly impact the graduate school experience: being a first-generation college student.
Monoclonal antibody therapeutics to treat coronavirus disease (COVID-19) have been authorized by the US Food and Drug Administration under Emergency Use Authorization (EUA). Many barriers exist when deploying a novel therapeutic during an ongoing pandemic, and it is critical to assess what is needed to incorporate monoclonal antibody infusions into pandemic response activities. We examined the monoclonal antibody infusion site process during the COVID-19 pandemic and conducted a descriptive analysis using data from 3 sites at medical centers in the United States supported by the National Disaster Medical System. Monoclonal antibody implementation success factors included engagement with local medical providers, therapy batch preparation, placing the infusion center in proximity to emergency services, and creating procedures resilient to EUA changes. Infusion process challenges included confirming patient severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) positivity, strained staff, scheduling, and pharmacy coordination. Infusion sites are effective when integrated into pre-existing pandemic response ecosystems and can be implemented with limited staff and physical resources.
Seed retention, and ultimately seed shatter, are extremely important for the efficacy of harvest weed seed control (HWSC) and are likely influenced by various agroecological and environmental factors. Field studies investigated seed-shattering phenology of 22 weed species across three soybean [Glycine max (L.) Merr.]-producing regions in the United States. We further evaluated the potential drivers of seed shatter in terms of weather conditions, growing degree days, and plant biomass. Based on the results, weather conditions had no consistent impact on weed seed shatter. However, there was a positive correlation between individual weed plant biomass and delayed weed seed–shattering rates during harvest. This work demonstrates that HWSC can potentially reduce weed seedbank inputs of plants that have escaped early-season management practices and retained seed through harvest. However, smaller individuals of plants within the same population that shatter seed before harvest pose a risk of escaping early-season management and HWSC.
The Variables and Slow Transients Survey (VAST) on the Australian Square Kilometre Array Pathfinder (ASKAP) is designed to detect highly variable and transient radio sources on timescales from 5 s to $\sim\!5$ yr. In this paper, we present the survey description, observation strategy and initial results from the VAST Phase I Pilot Survey. This pilot survey consists of $\sim\!162$ h of observations conducted at a central frequency of 888 MHz between 2019 August and 2020 August, with a typical rms sensitivity of $0.24\ \mathrm{mJy\ beam}^{-1}$ and angular resolution of $12-20$ arcseconds. There are 113 fields, each of which was observed for 12 min integration time, with between 5 and 13 repeats, with cadences between 1 day and 8 months. The total area of the pilot survey footprint is 5 131 square degrees, covering six distinct regions of the sky. An initial search of two of these regions, totalling 1 646 square degrees, revealed 28 highly variable and/or transient sources. Seven of these are known pulsars, including the millisecond pulsar J2039–5617. Another seven are stars, four of which have no previously reported radio detection (SCR J0533–4257, LEHPM 2-783, UCAC3 89–412162 and 2MASS J22414436–6119311). Of the remaining 14 sources, two are active galactic nuclei, six are associated with galaxies and the other six have no multi-wavelength counterparts and are yet to be identified.
To assess the relationship between food insecurity, sleep quality, and days with mental and physical health issues among college students.
Design:
An online survey was administered. Food insecurity was assessed using the ten-item Adult Food Security Survey Module. Sleep was measured using the nineteen-item Pittsburgh Sleep Quality Index (PSQI). Mental health and physical health were measured using three items from the Healthy Days Core Module. Multivariate logistic regression was conducted to assess the relationship between food insecurity, sleep quality, and days with poor mental and physical health.
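The sketch below illustrates, under assumed column names and an assumed covariate set, how a multivariable logistic model of this kind yields adjusted odds ratios with 95% confidence intervals (OR = exp(β), CI = exp(β ± 1.96 × SE)); it is not the study's actual model specification.

```python
# Illustrative logistic model of a binary outcome (e.g. poor sleep quality) on
# food-insecurity status with covariate adjustment, reporting adjusted ORs and
# 95% CIs. Column names and covariates are assumptions, not the study's model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf


def adjusted_or(df: pd.DataFrame, outcome: str) -> pd.DataFrame:
    """Fit outcome ~ food_insecure + covariates; return adjusted ORs with 95% CIs."""
    formula = f"{outcome} ~ food_insecure + age + C(gender) + C(campus)"
    fit = smf.logit(formula, data=df).fit(disp=False)
    out = pd.DataFrame({"AOR": np.exp(fit.params)})
    out[["ci_low", "ci_high"]] = np.exp(fit.conf_int()).to_numpy()
    return out


# Hypothetical usage: adjusted_or(survey_df, "poor_sleep")
```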
Setting:
Twenty-two higher education institutions.
Participants:
College students (n 17 686) enrolled at one of twenty-two participating universities.
Results:
Compared with food-secure students, those classified as food insecure (43·4 %) had higher PSQI scores indicating poorer sleep quality (P < 0·0001) and reported more days with poor mental (P < 0·0001) and physical (P < 0·0001) health as well as days when mental and physical health prevented them from completing daily activities (P < 0·0001). Food-insecure students had higher adjusted odds of having poor sleep quality (adjusted OR (AOR): 1·13; 95 % CI 1·12, 1·14), days with poor physical health (AOR: 1·01; 95 % CI 1·01, 1·02), days with poor mental health (AOR: 1·03; 95 % CI 1·02, 1·03) and days when poor mental or physical health prevented them from completing daily activities (AOR: 1·03; 95 % CI 1·02, 1·04).
Conclusions:
A high proportion of college students report food insecurity, which is associated with poorer mental and physical health and poorer sleep quality. Multi-level policy changes and campus wellness programmes are needed to prevent food insecurity and improve student health-related outcomes.
We applaud the goals and execution of the target article, but note that individual differences do not receive much attention. This is a shortcoming because individual differences can play a vital role in theory testing. In our commentary, we describe programs of research of this type and also apply similar thinking to the mechanisms proposed in the target article.
This SHEA white paper identifies knowledge gaps and challenges in healthcare epidemiology research related to coronavirus disease 2019 (COVID-19) with a focus on core principles of healthcare epidemiology. These gaps, revealed during the worst phases of the COVID-19 pandemic, are described in 10 sections: epidemiology, outbreak investigation, surveillance, isolation precaution practices, personal protective equipment (PPE), environmental contamination and disinfection, drug and supply shortages, antimicrobial stewardship, healthcare personnel (HCP) occupational safety, and return-to-work policies. Each section highlights three critical healthcare epidemiology research questions, with detailed descriptions provided in the supplementary materials. This research agenda calls for translational studies ranging from laboratory-based basic science research to well-designed, large-scale studies and health outcomes research. Research gaps and challenges related to nursing homes and social disparities are included. Collaboration across disciplines, areas of expertise, and geographic locations will be critical.
The Rapid ASKAP Continuum Survey (RACS) is the first large-area survey to be conducted with the full 36-antenna Australian Square Kilometre Array Pathfinder (ASKAP) telescope. RACS will provide a shallow model of the ASKAP sky that will aid the calibration of future deep ASKAP surveys. RACS will cover the whole sky visible from the ASKAP site in Western Australia and will cover the full ASKAP band of 700–1800 MHz. The RACS images are generally deeper than the existing NRAO VLA Sky Survey and Sydney University Molonglo Sky Survey radio surveys and have better spatial resolution. All RACS survey products will be public, including radio images (with $\sim$15 arcsec resolution) and catalogues of about three million source components with spectral index and polarisation information. In this paper, we present a description of the RACS survey and the first data release of 903 images covering the sky south of declination $+41^\circ$, made over a 288-MHz band centred at 887.5 MHz.
To develop a pediatric research agenda focused on pediatric healthcare-associated infections and antimicrobial stewardship topics that will yield the highest impact on child health.
Participants:
The study included 26 geographically diverse adult and pediatric infectious diseases clinicians with expertise in healthcare-associated infection prevention and/or antimicrobial stewardship (topic identification and ranking of priorities), as well as members of the Division of Healthcare Quality and Promotion at the Centers for Disease Control and Prevention (topic identification).
Methods:
Using a modified Delphi approach, expert recommendations were generated through an iterative process for identifying pediatric research priorities in healthcare-associated infection prevention and antimicrobial stewardship. The multistep, 7-month process included a literature review, interactive teleconferences, web-based surveys, and 2 in-person meetings.
Results:
A final list of 12 high-priority research topics was generated across the 2 domains. High-priority healthcare-associated infection topics included judicious testing for Clostridioides difficile infection, chlorhexidine (CHG) bathing, measuring and preventing hospital-onset bloodstream infection rates, surgical site infection prevention, and surveillance and prevention of multidrug-resistant gram-negative rod infections. Antimicrobial stewardship topics included β-lactam allergy de-labeling, judicious use of perioperative antibiotics, intravenous-to-oral conversion of antimicrobial therapy, developing a patient-level “harm index” for antibiotic exposure, and benchmarking and/or peer comparison of antibiotic use for common inpatient conditions.
Conclusions:
We identified 6 healthcare-associated infection topics and 6 antimicrobial stewardship topics as potentially high-impact targets for pediatric research.
Potential effectiveness of harvest weed seed control (HWSC) systems depends upon seed shatter of the target weed species at crop maturity, enabling its collection and processing at crop harvest. However, seed retention likely is influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed-shatter phenology in 13 economically important broadleaf weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after physiological maturity at multiple sites spread across 14 states in the southern, northern, and mid-Atlantic United States. Greater proportions of seeds were retained by weeds in southern latitudes and shatter rate increased at northern latitudes. Amaranthus spp. seed shatter was low (0% to 2%), whereas shatter varied widely in common ragweed (Ambrosia artemisiifolia L.) (2% to 90%) over the weeks following soybean physiological maturity. Overall, the broadleaf species studied shattered less than 10% of their seeds by soybean harvest. Our results suggest that some of the broadleaf species with greater seed retention rates in the weeks following soybean physiological maturity may be good candidates for HWSC.
Seed shatter is an important weediness trait on which the efficacy of harvest weed seed control (HWSC) depends. The level of seed shatter in a species is likely influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed shatter of eight economically important grass weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after maturity at multiple sites spread across 11 states in the southern, northern, and mid-Atlantic United States. From soybean maturity to 4 wk after maturity, cumulative percent seed shatter was lowest in the southern U.S. regions and increased moving north through the states. At soybean maturity, the percent of seed shatter ranged from 1% to 70%. That range had shifted to 5% to 100% (mean: 42%) by 25 d after soybean maturity. There were considerable differences in seed-shatter onset and rate of progression between sites and years in some species that could impact their susceptibility to HWSC. Our results suggest that many summer annual grass species are likely not ideal candidates for HWSC, although HWSC could substantially reduce their seed output during certain years.
We describe system verification tests and early science results from the pulsar processor (PTUSE) developed for the newly commissioned 64-dish SARAO MeerKAT radio telescope in South Africa. MeerKAT is a high-gain (${\sim}2.8\,\mbox{K Jy}^{-1}$), low-system-temperature (${\sim}18\,\mbox{K at }20\,\mbox{cm}$) radio array that currently operates at 580–1 670 MHz and can produce tied-array beams suitable for pulsar observations. This paper presents results from the MeerTime Large Survey Project and commissioning tests with PTUSE. Highlights include observations of the double pulsar $\mbox{J}0737{-}3039\mbox{A}$, pulse profiles from 34 millisecond pulsars (MSPs) from a single 2.5-h observation of the globular cluster Terzan 5, the rotation measure of Ter5O, a 420-sigma giant pulse from the Large Magellanic Cloud pulsar PSR $\mbox{J}0540{-}6919$, and nulling identified in the slow pulsar PSR J0633–2015. One of the key design specifications for MeerKAT was an absolute timing error of less than 5 ns using its novel precise time system. Our timing of two bright MSPs confirms that MeerKAT delivers exceptional timing: PSR $\mbox{J}2241{-}5236$ exhibits a jitter limit of $<4\,\mbox{ns h}^{-1}$, whilst timing of PSR $\mbox{J}1909{-}3744$ over almost 11 months yields an rms residual of 66 ns with only 4-min integrations. Our results confirm that MeerKAT is an exceptional pulsar telescope. The array can be split into four separate sub-arrays to time over 1 000 pulsars per day, and the future deployment of S-band (1 750–3 500 MHz) receivers will further enhance its capabilities.
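As a back-of-the-envelope aid to the timing figures quoted above, the sketch below computes a weighted rms of timing residuals and applies the conventional assumption that uncorrelated pulse jitter averages down as the inverse square root of integration time; the numbers in the example are illustrative and are not taken from the MeerTime data.

```python
# Illustrative helpers: weighted rms of timing residuals, and 1/sqrt(T) scaling
# of jitter noise between integration lengths. Example numbers are invented.
import numpy as np


def weighted_rms(residuals_ns: np.ndarray, errors_ns: np.ndarray) -> float:
    """Weighted rms of timing residuals (ns), with weights 1/sigma^2."""
    w = 1.0 / errors_ns**2
    mean = np.average(residuals_ns, weights=w)
    return float(np.sqrt(np.average((residuals_ns - mean) ** 2, weights=w)))


def scale_jitter(sigma_ns: float, t_from_s: float, t_to_s: float) -> float:
    """Scale a jitter rms measured over t_from_s seconds to t_to_s seconds."""
    return sigma_ns * np.sqrt(t_from_s / t_to_s)


# e.g. a jitter rms measured in a 4-min integration, extrapolated to 1 h:
print(scale_jitter(sigma_ns=15.0, t_from_s=240.0, t_to_s=3600.0))  # ~3.9 ns
```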
The coronavirus disease 2019 pandemic requires urgent modification of existing head and neck cancer diagnosis and management practices. A protocol was established that utilises risk stratification, early investigation prior to clinical review and a reduction in aerosol-generating procedures to lessen the risk of coronavirus disease 2019 spread.
Methods
Two-week wait referrals were stratified into low, intermediate and high risk. Low-risk patients were referred back to primary care with advice; intermediate- and high-risk patients underwent investigation. Clinical encounters and aerosol-generating procedures were minimised. A combined diagnostic and therapeutic surgical approach was undertaken where possible.
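Purely to make the routing explicit, the following sketch encodes the triage logic described above; the criteria that assign a referral to a risk tier are deliberately omitted, since they belong to the protocol itself.

```python
# Minimal sketch of the triage routing paraphrased from the text above.
# The rules that determine a referral's risk tier are not modelled here.
from enum import Enum


class Risk(Enum):
    LOW = "low"
    INTERMEDIATE = "intermediate"
    HIGH = "high"


def disposition(risk: Risk) -> str:
    """Route a two-week-wait referral according to its risk tier."""
    if risk is Risk.LOW:
        return "refer back to primary care with advice"
    # Intermediate- and high-risk patients proceed to early investigation
    # prior to clinical review, minimising face-to-face encounters.
    return "arrange investigation prior to clinical review"


print(disposition(Risk.HIGH))
```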
Results
Feasibility was assessed in 41 patients: 31 per cent were low risk, 35 per cent were intermediate risk and 33 per cent were high risk. Thirty-three per cent were discharged with no imaging.
Conclusion
Implementing this protocol reduces the future burden on tertiary services by empowering primary care physicians to re-refer low-risk patients. The protocol is applicable across the UK and avoids diagnostic delay.
The current COVID-19 pandemic is not just a medical and social tragedy, but within the threat of the outbreak looms the potential for a significant and persistent negative mental health impact, based on previous experience with other pandemics such as Severe Acute Respiratory Syndrome (SARS) in 2003 and the earlier H1N1 outbreak of 1918. This piece will highlight the links between depression and viral illnesses and explore important overlaps with myalgic encephalomyelitis/chronic fatigue syndrome, potentially implicating inflammatory mechanisms in those exposed to a range of viral agents. While containment of psychological distress currently focuses on social anxiety and quarantine measures, a second wave of psychological morbidity due to viral illness may be imminent.
To review the management of temporal bone fractures at a major trauma centre and introduce an evidence-based protocol.
Methods
A review of reports of head computed tomography performed for trauma from January 2012 to July 2018 was conducted. Recorded data fields included: mode of trauma, patient age, associated intracranial injury, mortality, temporal bone fracture pattern, symptoms and intervention.
Results
Of 815 temporal bone fracture cases, records for 165 patients met the inclusion criteria; detailed analysis was performed on the records of these patients.
Conclusion
Temporal bone fractures represent high-energy trauma. Initial management focuses on stabilisation of the patient and treatment of associated intracranial injury. Acute ENT intervention is directed towards the management of facial palsy and cerebrospinal fluid leak, and often requires multidisciplinary team input. The role of nerve conduction assessment for immediate facial palsy is variable across the UK. The administration of high-dose steroids in patients with temporal bone fracture and intracranial injury is not advised. A robust evidence-based approach is introduced for the management of significant ENT complications associated with temporal bone fractures.