We aimed to identify factors independently associated with the need for inotropic support for low cardiac output or haemodynamic instability after pulmonary artery banding surgery for CHD.
We performed a retrospective chart review of all neonates and infants who underwent pulmonary artery banding between January 2016 and June 2019 at our institution. Bivariate and multivariable analyses were performed to identify factors independently associated with the use of post-operative inotropic support, defined as the initiation of inotropic infusion(s) for depressed myocardial function, hypotension, or compromised perfusion within 24 hours of pulmonary artery banding.
We reviewed 61 patients. Median age at surgery was 10 days (25%, 75%: 7, 30). Cardiac anatomy was biventricular in 38 patients (62%), hypoplastic right ventricle in 14 patients (23%), and hypoplastic left ventricle in 9 patients (15%). Inotropic support was implemented in 30 patients (49%). Baseline characteristics of patients who received inotropic support, including ventricular anatomy and pre-operative ventricular function, were not statistically different from those of the rest of the cohort. Patients who received inotropic support, however, were exposed to larger cumulative doses of ketamine intraoperatively: median 4.0 mg/kg (25%, 75%: 2.8, 5.9) versus 1.8 mg/kg (25%, 75%: 0.9, 4.5), p < 0.001. In a multivariable model, a cumulative ketamine dose greater than 2.5 mg/kg was associated with post-operative inotropic support (odds ratio 5.5; 95% confidence interval: 1.7, 17.8), independent of total surgery time.
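The odds ratio reported above comes from a multivariable model, which cannot be reproduced from the abstract alone. As a hedged sketch of the underlying arithmetic only, the crude (unadjusted) odds ratio and Wald 95% confidence interval for a 2×2 exposure-outcome table can be computed as follows; the counts used here are hypothetical and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts for illustration only (not the study's data):
or_, (lo, hi) = odds_ratio_ci(20, 10, 10, 21)
```

A multivariable estimate like the study's would additionally adjust for covariates such as total surgery time, which this crude calculation does not do.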
Inotropic support was administered in approximately half of patients who underwent pulmonary artery banding and more commonly occurred in patients who received higher cumulative doses of ketamine intraoperatively, independent of the duration of surgery.
Collar-worn deterrents reduce predation by cats, while collar-mounted ID enhances the return of lost animals. A perception that collars are hazardous limits their use. We defined cases as ‘collar incidents’ (cat snagged its collar or caught a paw in it), ‘collar injuries’ (veterinary treatment needed for a collar incident), and ‘collar deaths’ (cat died), before integrating data from veterinarians, owners from the general public, and owners from a welfare society. Despite the biases associated with each of these groups, their results taken together indicated that collar injuries and deaths are rare. Interviews with 107 veterinarians indicated an average rate of one collar injury observed per 2.3 years of veterinary practice. At one practice, over three years, only 0.33% of 4,460 cat cases were collar injuries, while 180 cat cases at four clinics during August and November 2011 included none. The 63 owners from the general public reported only one collar injury and no deaths in a lifetime of ownership, although 27% had experienced collar incidents. In contrast, 22% reported cats needing treatment following road accidents, 53% reported cats needing treatment for fighting injuries, and 62% had owned cats killed on the road. Most (62%) of the 55 respondents from a cat welfare society had experienced a collar incident, but only two cats needed treatment; one died. In contrast, 31% and 58% reported cats needing treatment for road accidents and fighting, respectively, and 41% had owned cats killed on the road. Fighting and road accidents are greater hazards to roaming cats than collars, which offer the compensatory benefits of mounting predation deterrents and ID tags.
Answering one question often begets another. We present a decision-theoretic model that describes how this affects the sequencing of decisions over time. Because answering an easy question may raise a more difficult one, a rational judge may delay resolution even if he has perfect information about the correct decision. Furthermore, because otherwise unrelated questions may raise similar follow-ups, he may optimally clump decisions together. Our theory thus generates an endogenous economy of scale in dispute resolution and contributes to the literature on punctuated equilibrium theory. We illustrate the results of our model with a case study from legal history in the United States.
Despite the adversity it presented, the COVID-19 pandemic also pushed organizations to experiment with innovative strategies for community engagement. The Community Research Advisory Council (C-RAC) at Johns Hopkins University (JHU) is an initiative to promote community engagement in research. COVID-19 rendered it impossible for the C-RAC to conduct its meetings, all of which had historically been in person. We describe the experience of advancing the work of the C-RAC during COVID-19 using digital and virtual strategies. Since March 2020, the C-RAC has transitioned from in-person to virtual meetings. A needs assessment was conducted among C-RAC members, and individualized solutions were provided for successful virtual engagement. The usual working schedule was altered to respond to COVID-19 and promote community-engaged research. Attendance at C-RAC meetings by members from the community increased from 69% before the transition to virtual operation to 76% afterward. In addition, the C-RAC launched new initiatives and, in the eighteen months since January 2020, conducted 50 highly rated research reviews for 20 research teams. The experience of the C-RAC demonstrates that when community needs are assessed and addressed, and technical support is provided, digital strategies can lead to greater community collaborations.
To determine how engagement of the hospital and/or vendor with performance improvement strategies combined with an automated hand hygiene monitoring system (AHHMS) influence hand hygiene (HH) performance rates.
The study was conducted in 58 adult and pediatric inpatient units located in 10 hospitals.
HH performance rates were estimated using an AHHMS. Rates were expressed as the number of soap and alcohol-based hand rub portions dispensed divided by the number of room entries and exits. Each hospital self-assigned to one of the following intervention groups: AHHMS alone (control group), AHHMS plus clinician-based vendor support (vendor-only group), AHHMS plus hospital-led unit-based initiatives (hospital-only group), or AHHMS plus clinician-based vendor support and hospital-led unit-based initiatives (vendor-plus-hospital group). Each hospital unit produced 1–2 months of baseline HH performance data immediately after AHHMS installation before implementing initiatives.
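The rate definition above (dispenses divided by room entries plus exits) is straightforward to compute. A minimal sketch, using hypothetical counts for a single unit rather than study data:

```python
def hh_performance_rate(dispenses, room_entries, room_exits):
    """HH performance rate as defined above: soap and alcohol-based
    hand rub dispenses divided by total room entries and exits."""
    opportunities = room_entries + room_exits
    if opportunities == 0:
        return 0.0  # no traffic recorded, avoid division by zero
    return dispenses / opportunities

# Hypothetical counts for one unit-day (not study data):
rate = hh_performance_rate(dispenses=450, room_entries=600, room_exits=600)
```

Note that this denominator counts room entries/exits as proxy opportunities, which is a feature of the AHHMS approach rather than direct observation of WHO "moments" for hand hygiene.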
Hospital units in the vendor-plus-hospital group had a statistically significant increase of at least 46% in HH performance compared with units in the other 3 groups (P ≤ .006). Units in the hospital-only group achieved a 1.3% increase in HH performance compared with units that had the AHHMS alone (P = .950). Units with the AHHMS plus other initiatives each had a larger change in HH performance rates over their baseline than those in the AHHMS-alone group (P < .001).
AHHMS combined with clinician-based vendor support and hospital-led unit-based initiatives resulted in the greatest improvements in HH performance. These results illustrate the value of a collaborative partnership between the hospital and the AHHMS vendor.
Early in the COVID-19 pandemic, the World Health Organization stressed the importance of daily clinical assessments of infected patients, yet current approaches frequently consider cross-sectional timepoints, cumulative summary measures, or time-to-event analyses. Statistical methods are available that make use of the rich information content of longitudinal assessments. We demonstrate the use of a multistate transition model to assess the dynamic nature of COVID-19-associated critical illness using daily evaluations of COVID-19 patients from 9 academic hospitals. We describe the accessibility and utility of methods that consider the clinical trajectory of critically ill COVID-19 patients.
Partial equilibrium models have been used extensively by policy makers to prospectively determine the consequences of government programs that affect consumer incomes or the prices consumers pay. However, these models have not previously been used to analyze government programs that inform consumers. In this paper, we develop a model that policy makers can use to quantitatively predict how consumers will respond to risk communications that contain new health information. The model combines Bayesian learning with utility-maximizing consumer choice. We discuss how this model can be used to evaluate information policies; we then test the model by simulating the impacts of the North Dakota Folic Acid Educational Campaign as a validation exercise.
Response to lithium in patients with bipolar disorder is associated with clinical and transdiagnostic genetic factors. The predictive combination of these variables might help clinicians better predict which patients will respond to lithium treatment.
To use a combination of transdiagnostic genetic and clinical factors to predict lithium response in patients with bipolar disorder.
This study utilised genetic and clinical data (n = 1034) collected as part of the International Consortium on Lithium Genetics (ConLi+Gen) project. Polygenic risk scores (PRS) were computed for schizophrenia and major depressive disorder, and then combined with clinical variables using a cross-validated machine-learning regression approach. Unimodal, multimodal and genetically stratified models were trained and validated using ridge, elastic net and random forest regression on 692 patients with bipolar disorder from ten study sites using leave-site-out cross-validation. All models were then tested on an independent test set of 342 patients. The best performing models were then tested in a classification framework.
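The leave-site-out scheme described above holds out each study site in turn, training on the remaining sites. A minimal sketch of the splitting logic with toy site labels (illustrative only; a real pipeline would fit ridge, elastic net, or random forest regression on each training fold):

```python
from collections import defaultdict

def leave_site_out_splits(site_labels):
    """Yield (site, train_idx, test_idx) for each fold, holding out
    one site's samples at a time (leave-site-out cross-validation)."""
    by_site = defaultdict(list)
    for i, site in enumerate(site_labels):
        by_site[site].append(i)
    for site, test_idx in by_site.items():
        held_out = set(test_idx)
        train_idx = [i for i in range(len(site_labels)) if i not in held_out]
        yield site, train_idx, test_idx

# Toy site labels for six samples (not study data):
sites = ["A", "A", "B", "C", "C", "C"]
splits = list(leave_site_out_splits(sites))
```

Splitting by site rather than by individual guards against a model learning site-specific artefacts (e.g. ascertainment or genotyping-batch effects) instead of generalizable signal.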
The best performing linear model explained 5.1% (P = 0.0001) of variance in lithium response and was composed of clinical variables, PRS variables and interaction terms between them. The best performing non-linear model used only clinical variables and explained 8.1% (P = 0.0001) of variance in lithium response. A priori genomic stratification improved non-linear model performance to 13.7% (P = 0.0001) and improved the binary classification of lithium response. This model stratified patients based on their meta-polygenic loadings for major depressive disorder and schizophrenia and was then trained using clinical data.
Using PRS to first stratify patients genetically and then train machine-learning models with clinical predictors led to large improvements in lithium response prediction. When used with other PRS and biological markers in the future this approach may help inform which patients are most likely to respond to lithium treatment.
Patients presenting to hospital with suspected coronavirus disease 2019 (COVID-19), based on clinical symptoms, are routinely placed in a cohort together until polymerase chain reaction (PCR) test results are available. This procedure leads to delays in transfers to definitive areas and high nosocomial transmission rates. FebriDx is a finger-prick point-of-care test (PoCT) that detects an antiviral host response and has a high negative predictive value for COVID-19. We sought to determine the clinical impact of using FebriDx for COVID-19 triage in the emergency department (ED).
We undertook a retrospective observational study evaluating the real-world clinical impact of FebriDx as part of an ED COVID-19 triage algorithm.
Emergency department of a university teaching hospital.
Patients presenting with symptoms suggestive of COVID-19, placed in a cohort in a ‘high-risk’ area, were tested using FebriDx. Patients without a detectable antiviral host response were then moved to a lower-risk area.
Between September 22, 2020, and January 7, 2021, 1,321 patients were tested using FebriDx, and 1,104 (84%) did not have a detectable antiviral host response. Among 1,104 patients, 865 (78%) were moved to a lower-risk area within the ED. The median times spent in a high-risk area were 52 minutes (interquartile range [IQR], 34–92) for FebriDx-negative patients and 203 minutes (IQR, 142–255) for FebriDx-positive patients (difference of −134 minutes; 95% CI, −144 to −122; P < .0001). The negative predictive value of FebriDx for the identification of COVID-19 was 96% (661 of 690; 95% CI, 94%–97%).
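The reported negative predictive value follows directly from the counts given above (661 of 690 FebriDx-negative patients were PCR-negative). A minimal sketch of that calculation:

```python
def negative_predictive_value(true_negatives, false_negatives):
    """NPV = TN / (TN + FN): the proportion of negative test
    results that are truly negative."""
    return true_negatives / (true_negatives + false_negatives)

# Counts reported above: 661 of 690 FebriDx-negative patients
# were SARS-CoV-2 PCR-negative.
npv = negative_predictive_value(661, 690 - 661)  # ~0.958, i.e. 96%
```

As with any predictive value, the NPV depends on disease prevalence in the tested population, so this figure reflects the prevalence at this ED during the study window.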
FebriDx improved the triage of patients with suspected COVID-19 and reduced the time that severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) PCR-negative patients spent in a high-risk area alongside SARS-CoV-2–positive patients.
Studying phenotypic and genetic characteristics of age at onset (AAO) and polarity at onset (PAO) in bipolar disorder can provide new insights into disease pathology and facilitate the development of screening tools.
To examine the genetic architecture of AAO and PAO and their association with bipolar disorder disease characteristics.
Genome-wide association studies (GWASs) and polygenic score (PGS) analyses of AAO (n = 12 977) and PAO (n = 6773) were conducted in patients with bipolar disorder from 34 cohorts and a replication sample (n = 2237). The association of onset with disease characteristics was investigated in two of these cohorts.
Earlier AAO was associated with a higher probability of psychotic symptoms, suicidality, lower educational attainment, not living together and fewer episodes. Depressive onset correlated with suicidality and manic onset correlated with delusions and manic episodes. Systematic differences in AAO between cohorts and continents of origin were observed. This was also reflected in single-nucleotide variant-based heritability estimates, with higher heritabilities for stricter onset definitions. Increased PGS for autism spectrum disorder (β = −0.34 years, s.e. = 0.08), major depression (β = −0.34 years, s.e. = 0.08), schizophrenia (β = −0.39 years, s.e. = 0.08), and educational attainment (β = −0.31 years, s.e. = 0.08) were associated with an earlier AAO. The AAO GWAS identified one significant locus, but this finding did not replicate. Neither GWAS nor PGS analyses yielded significant associations with PAO.
AAO and PAO are associated with indicators of bipolar disorder severity. Individuals with an earlier onset show an increased polygenic liability for a broad spectrum of psychiatric traits. Systematic differences in AAO across cohorts, continents and phenotype definitions introduce significant heterogeneity, affecting analyses.
Pandemics have historically shaped the world of work in various ways. With COVID-19 presenting as a global pandemic, there is much speculation about the implications of this crisis for the future of work and for people working in organizations. In this article, we discuss 10 of the most relevant research and practice topics in the field of industrial and organizational psychology that will likely be strongly influenced by COVID-19. For each of these topics, the pandemic crisis is creating new work-related challenges, but it is also presenting various opportunities. The topics discussed herein include occupational health and safety, work–family issues, telecommuting, virtual teamwork, job insecurity, precarious work, leadership, human resources policy, the aging workforce, and careers. This article sets the stage for further discussion of various ways in which I-O psychology research and practice can address the issues that COVID-19 creates for work and organizational processes that are affecting workers now and will shape the future of work and organizations in both the short and long term. This article concludes by inviting I-O psychology researchers and practitioners to address the challenges and opportunities of COVID-19 head-on by proactively adapting the work that we do in support of workers, organizations, and society as a whole.
Sleep and circadian timing shifts later during adolescence, conflicting with early school start times, and resulting in circadian misalignment. Although circadian misalignment has been linked to depression, substance use, and altered reward function, a paucity of experimental studies precludes the determination of causality. Here we tested, for the first time, whether experimentally-imposed circadian misalignment alters the neural response to monetary reward and/or response inhibition.
Healthy adolescents (n = 25, ages 13–17) completed two in-lab sleep schedules in counterbalanced order: An ‘aligned’ condition based on typical summer sleep-wake times (0000–0930) and a ‘misaligned’ condition mimicking earlier school year sleep-wake times (2000–0530). Participants completed morning and afternoon functional magnetic resonance imaging scans during each condition, including monetary reward (morning only) and response inhibition (morning and afternoon) tasks. Total sleep time and circadian phase were assessed via actigraphy and salivary melatonin, respectively.
Bilateral ventral striatal (VS) activation during reward outcome was lower during the Misaligned condition after accounting for the prior night's total sleep time. Bilateral VS activation during reward anticipation was lower during the Misaligned condition, including after accounting for covariates, but did not survive correction for multiple comparisons. Right inferior frontal gyrus activation during response inhibition was lower during the Misaligned condition, before and after accounting for total sleep time and vigilant attention, but only during the morning scan.
Our findings provide novel experimental evidence that circadian misalignment analogous to that resulting from school schedules may have measurable impacts on healthy adolescents' reward processing and inhibition of prepotent responses.
The Promontory caves (Utah) and Franktown Cave (Colorado) contain high-fidelity records of short-term occupations by groups with material culture connections to the Subarctic/Northern Plains. This research uses Promontory and Franktown bison dung, hair, hide, and bone collagen to establish local baseline carbon isotopic variability and identify leather from a distant source. The ankle wrap of one Promontory Cave 1 moccasin had a δ13C value that indicates a substantial C4 component to the animal's diet, unlike the C3 diets inferred from 171 other Promontory and northern Utah bison samples. We draw on a unique combination of multitissue isotopic analysis, carbon isoscapes, ancient DNA (species and sex identification), tissue turnover rates, archaeological contexts, and bison ecology to show that the high δ13C value was not likely a result of local plant consumption, bison mobility, or trade. Instead, the bison hide was likely acquired via long-distance travel to/from an area of abundant C4 grasses far to the south or east. Expansive landscape knowledge gained through long-distance associations would have allowed Promontory caves inhabitants to make well-informed decisions about directions and routes of movement for a territorial shift, which seems to have occurred in the late thirteenth century.
To examine rural–urban differences in temporal trends and risk of inappropriate antibiotic use by agent and duration among women with uncomplicated urinary tract infection (UTI).
Observational cohort study.
Using the IBM MarketScan Commercial Database (2010–2015), we identified US commercially insured women aged 18–44 years coded for uncomplicated UTI and prescribed an oral antibiotic agent. We classified antibiotic agents and durations as appropriate versus inappropriate based on clinical guidelines. Rural–urban status was defined by residence in a metropolitan statistical area. We used modified Poisson regression to determine the association between rural–urban status and inappropriate antibiotic receipt, accounting for patient- and provider-level characteristics. We used multivariable logistic regression to estimate trends in antibiotic use by rural–urban status.
Of 670,450 women with uncomplicated UTI, a large proportion received antibiotic prescriptions for inappropriate agents (46.7%) or durations (76.1%). Compared to urban women, rural women were more likely to receive prescriptions with inappropriately long durations (adjusted risk ratio 1.10; 95% CI, 1.10–1.10), which was consistent across subgroups. From 2011 to 2015, there was a slight decline in the quarterly proportion of patients who received inappropriate agents (48.5% to 43.7%) and durations (78.3% to 73.4%). Rural–urban differences varied over time by agent (duration outcome only), geographic region, and provider specialty.
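The adjusted risk ratio above comes from a modified Poisson regression, which cannot be reproduced from the abstract. As a hedged illustration of the crude (unadjusted) risk-ratio calculation only, with hypothetical counts rather than study data:

```python
import math

def risk_ratio_ci(a, n1, c, n2, z=1.96):
    """Crude risk ratio with Wald 95% CI: a of n1 exposed and
    c of n2 unexposed experience the outcome. The study's adjusted
    estimate used modified Poisson regression; this sketch shows
    only the unadjusted arithmetic."""
    r1, r2 = a / n1, c / n2
    rr = r1 / r2
    se = math.sqrt((1 - r1) / a + (1 - r2) / c)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, (lo, hi)

# Hypothetical counts for illustration only (not the study's data):
rr, (lo, hi) = risk_ratio_ci(880, 1000, 800, 1000)
```

Modified Poisson regression (Poisson model with robust standard errors) is commonly preferred over logistic regression for common binary outcomes because it estimates risk ratios directly; the crude version above omits the covariate adjustment the study applied.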
Inappropriate antibiotic prescribing is quite common for the treatment of uncomplicated UTI. Rural women are more likely to receive inappropriately long antibiotic durations. Antimicrobial stewardship interventions are needed to improve outpatient UTI antibiotic prescribing and to reduce unnecessary exposure to antibiotics, particularly in rural settings.
Structural models of psychopathology consistently identify internalizing (INT) and externalizing (EXT) specific factors as well as a superordinate factor that captures their shared variance, the p factor. Questions remain, however, about the meaning of these data-driven dimensions and the interpretability and distinguishability of the larger nomological networks in which they are embedded.
The sample consisted of 10 645 youth aged 9–10 years participating in the multisite Adolescent Brain and Cognitive Development (ABCD) Study. p, INT, and EXT were modeled using the parent-rated Child Behavior Checklist (CBCL). Patterns of associations were examined with variables drawn from diverse domains including demographics, psychopathology, temperament, family history of substance use and psychopathology, school and family environment, and cognitive ability, using instruments based on youth-, parent-, and teacher-report, and behavioral task performance.
p exhibited a broad pattern of statistically significant associations with risk variables across all domains assessed, including temperament, neurocognition, and social adversity. The specific factors exhibited more domain-specific patterns of associations, with INT exhibiting greater fear/distress and EXT exhibiting greater impulsivity.
In this largest study of hierarchical models of psychopathology to date, we found that p, INT, and EXT exhibit well-differentiated nomological networks that are interpretable in terms of neurocognition, impulsivity, fear/distress, and social adversity. These networks were, in contrast, obscured when relying on the a priori Internalizing and Externalizing dimensions of the CBCL scales. Our findings add to the evidence for the validity of p, INT, and EXT as theoretically and empirically meaningful broad psychopathology liabilities.
Optical tracking systems typically trade off between astrometric precision and field of view. In this work, we showcase a networked approach to optical tracking using very wide field-of-view imagers that have relatively low astrometric precision, applied to the scheduled OSIRIS-REx slingshot manoeuvre around Earth on 22 Sep 2017. As part of a trajectory designed to take OSIRIS-REx to NEO 101955 Bennu, this flyby event was viewed from 13 remote sensors spread across Australia and New Zealand to enable triangulated observations. Each observatory in this portable network was constructed to be as lightweight and portable as possible, with hardware based on the successful design of the Desert Fireball Network. Over a 4-h collection window, we gathered 15 439 images of the night sky in the predicted direction of the OSIRIS-REx spacecraft. Using a specially developed streak detection and orbit determination data pipeline, we detected 2 090 line-of-sight observations. Our fitted orbit was determined to be within about 10 km of orbital telemetry along the observed 109 262 km length of the OSIRIS-REx trajectory, demonstrating the impressive capability of a networked approach to Space Surveillance and Tracking.
Gravitational waves from coalescing neutron stars encode information about nuclear matter at extreme densities, inaccessible by laboratory experiments. The late inspiral is influenced by the presence of tides, which depend on the neutron star equation of state. Neutron star mergers are expected to often produce rapidly rotating remnant neutron stars that emit gravitational waves. These will provide clues to the extremely hot post-merger environment. This signature of nuclear matter in gravitational waves contains most information in the 2–4 kHz frequency band, which is outside of the most sensitive band of current detectors. We present the design concept and science case for a Neutron Star Extreme Matter Observatory (NEMO): a gravitational-wave interferometer optimised to study nuclear physics with merging neutron stars. The concept uses high circulating laser power, quantum squeezing, and a detector topology specifically designed to achieve the high-frequency sensitivity necessary to probe nuclear matter using gravitational waves. Above 1 kHz, the proposed strain sensitivity is comparable to full third-generation detectors at a fraction of the cost. Such sensitivity changes expected event rates for detection of post-merger remnants from approximately one per few decades with two A+ detectors to a few per year, and potentially allows for the first gravitational-wave observations of supernovae, isolated neutron stars, and other exotica.