Latinos, especially those who recently immigrated, face many obstacles in navigating the political and judicial environment in the United States. While prior scholarship suggests that racial minorities are more likely to be stopped by law enforcement for traffic violations and face harsher penalties for major crimes, little research has explored whether a defendant’s characteristics are influential in routine traffic court cases. Using an original database, this paper examines disparate treatment in speeding ticket reductions. The results indicate that Latino defendants are less likely to receive meaningful reductions to their charges. However, attorney representation greatly lessens the likelihood of disparate treatment for Latino drivers. As traffic court proceedings often represent the only interaction most people have with the judicial system, these findings have significant implications for racial equality, the administration of justice, attorney representation, and public opinion of the judiciary.
Objective:
To examine the impact of SARS-CoV-2 infection on the CLABSI rate and to characterize the patients who developed a CLABSI. We also examined the impact of a CLABSI-reduction quality-improvement project in patients with and without COVID-19.
Design:
Retrospective cohort analysis.
Setting:
Academic 889-bed tertiary-care teaching hospital in urban Los Angeles.
Patients or participants:
Inpatients 18 years and older with CLABSI as defined by the National Healthcare Safety Network (NHSN).
Methods:
CLABSI rate and patient characteristics were analyzed for 2 cohorts during the pandemic era (March 2020–August 2021): COVID-19 CLABSI patients and non–COVID-19 CLABSI patients, based on diagnosis of COVID-19 during admission. Secondary analyses were the non–COVID-19 CLABSI rate versus a historical control period (2019), the ICU CLABSI rate in COVID-19 versus non–COVID-19 patients, and CLABSI rates before and after a quality-improvement initiative.
Results:
The rate of COVID-19 CLABSI was significantly higher than the non–COVID-19 CLABSI rate. We did not detect a difference between the non–COVID-19 CLABSI rate and the historical control. COVID-19 CLABSIs occurred predominantly in the ICU, and the ICU COVID-19 CLABSI rate was significantly higher than the ICU non–COVID-19 CLABSI rate. A hospital-wide quality-improvement initiative reduced the rate of non–COVID-19 CLABSI but not COVID-19 CLABSI.
Conclusions:
Patients hospitalized for COVID-19 have a significantly higher CLABSI rate, particularly in the ICU setting. Reasons for this increase are likely multifactorial, including both patient-specific and process-related issues. Focused quality-improvement efforts were effective in reducing CLABSI rates in non–COVID-19 patients but less effective in COVID-19 patients.
Background: In the treatment of bloodstream infections, the identification of the causal pathogen and the evaluation of its susceptibility to antibiotics often serve as the rate-limiting steps of the patient’s hospital stay. The GenMark Dx ePlex blood culture identification gram-positive (BCID-GP) panel aims to alleviate this bottleneck, thereby reducing the risk of severe complications and the spread of resistance, using electrowetting technology to detect the most common causes of GP bacteremia (20 targets) and 4 antimicrobial resistance (AMR) genes. We hypothesized that implementation of the ePlex BCID-GP panel would improve antimicrobial choice and de-escalation where appropriate. Methods: A mixed blinded and unblinded study was conducted to assess the effect of the BCID-GP panel on the outcomes and antibiotic stewardship of GP bacteremic patients before ePlex results were made clinically available (before implementation, N = 73) and once they accompanied the standard-of-care work-up (after implementation, N = 82). Differences in time to different benchmarks between the 2 modalities and the effect on patient outcomes were analyzed using null-hypothesis significance testing. Results: During the study, the BCID-GP panel identified 63 (42%) Staphylococcus epidermidis isolates, 31 (21%) Staphylococcus spp, 24 (16%) Staphylococcus aureus isolates, 12 (8%) Streptococcus spp, and 7 (5%) Enterococcus spp, and results were similar in the pre- and postimplementation groups (P = .13). The panel saved an average of 32.0 ± 24.2 hours in pathogen identification over standard-of-care methods, with no statistical difference made by the clinical availability of the data (Table 1). In terms of susceptibility testing, the panel saved an average of 70.1 ± 58.2 hours, though with less consistency between the 2 cohorts (P = .005). Of the 66 cases with follow-up, identification via ePlex indicated an escalation of therapy in 20 (30%) and a narrowing of coverage in 31 (47%).
In patients identified to have Staphylococcus aureus, BCID-GP could have changed antimicrobial therapy in 79%; the need for escalation of antibiotics was identified in 58% of cases. In patients with Staphylococcus epidermidis bacteremia, implementation of the BCID-GP panel could have resulted in de-escalation of antimicrobial therapy in 67% of patients. The implementation of the BCID-GP panel was not associated with a significant change in in-hospital mortality (P = .72) but was associated with a significantly decreased death-censored total length of stay (LOS) (P < .001) and LOS after culture (P = .001). Conclusions: Our study demonstrated that nonculture-based identification of bacteria and their susceptibility can result in major improvements in antimicrobial therapy, particularly in patients with identified contaminants.
Early in the COVID-19 pandemic, the World Health Organization stressed the importance of daily clinical assessments of infected patients, yet current approaches frequently consider cross-sectional timepoints, cumulative summary measures, or time-to-event analyses. Statistical methods are available that make use of the rich information content of longitudinal assessments. We demonstrate the use of a multistate transition model to assess the dynamic nature of COVID-19-associated critical illness using daily evaluations of COVID-19 patients from 9 academic hospitals. We describe the accessibility and utility of methods that consider the clinical trajectory of critically ill COVID-19 patients.
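To make the multistate approach concrete, here is a minimal discrete-time sketch: tabulate day-to-day transitions between ordinal clinical states across patients, then row-normalize to estimate a daily transition probability matrix. The states and trajectories below are hypothetical toy data, not the study's, and the actual analysis used richer model-based methods.

```python
import numpy as np

# Hypothetical ordinal daily states: 0 = ward, 1 = ICU, 2 = discharged, 3 = dead.
# Each row is one patient's sequence of daily assessments (toy data).
trajectories = [
    [0, 0, 1, 1, 1, 0, 2],
    [0, 1, 1, 3],
    [0, 0, 0, 2],
    [1, 1, 0, 0, 2],
]

n_states = 4
counts = np.zeros((n_states, n_states))
for traj in trajectories:
    for today, tomorrow in zip(traj, traj[1:]):
        counts[today, tomorrow] += 1

# Row-normalize transition counts into estimated daily transition probabilities.
# Absorbing states (discharge, death) have no outgoing transitions in this sample.
row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
print(np.round(P, 2))
```

From such a matrix one can read off, for example, the estimated daily probability of moving from the ward to the ICU, or iterate the matrix to project a cohort's state distribution over time.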
Given the small industry scale of cow-calf operations in New York relative to other regions of the country, little is known about differences in determinant values for feeder cattle. Using auction prices and quality characteristics over 7 years, we find that differences in market, lot, and quality parameters suggest opportunities for improved marketing performance. A delta profit model is constructed to inform the timing of marketing decisions for producers. The results indicate a relatively high potential for producers to increase farm returns by delaying sales of lighter-weight feeder cattle from the fall to spring auction months, given sufficient rates of gain and reasonable overwintering costs.
The purpose of this study was to compare statistical knowledge of health science faculty across accredited schools of dentistry, medicine, nursing, pharmacy, and public health.
A probability sample of schools was selected, and all faculty at each selected school were invited to participate in an online statistical knowledge assessment that covered fundamental topics including randomization, study design, statistical power, confidence intervals, multiple testing, standard error, regression outcome, and odds ratio.
A total of 708 faculty from 102 schools participated. The overall response rate was 6.5%. Most (94.2%) faculty reported reading the peer-reviewed health-related literature. Respondents answered 66.2% of questions correctly across all disciplines. Public health faculty had the highest performance (80.7%) and dentistry faculty the lowest (53.3%).
Knowledge of statistics is essential for critically evaluating evidence and understanding the health literature. These results identify a knowledge gap among educators tasked with training the next generation of health science professionals. Recommendations for addressing this gap are provided.
Archaeologists have long subjected Clovis megafauna kill/scavenge sites to the highest level of scrutiny. In 1987, a Columbian mammoth (Mammuthus columbi) was found in spatial association with a small artifact assemblage in Converse County, Wyoming. However, due to the small tool assemblage, limited nature of the excavations, and questions about the security of the association between the artifacts and mammoth remains, the site was never included in summaries of human-killed/scavenged megafauna in North America. Here we present the results of four field seasons of new excavations at the La Prele Mammoth site that confirm the presence of an associated cultural occupation based on geologic context, artifact attributes, spatial distributions, protein residue analysis, and lithic microwear analysis. This new work identified a more extensive cultural occupation including the presence of multiple discrete artifact clusters in close proximity to the mammoth bone bed. This study confirms the presence of a second Clovis mammoth kill/scavenge site in Wyoming and shows the value in revisiting proposed terminal Pleistocene kill/scavenge sites.
Enterococcus causes clinically significant bloodstream infections (BSIs). In centers with a higher prevalence of vancomycin-resistant Enterococcus (VRE) colonization, a common clinical question is whether empiric treatment directed against VRE should be initiated in the setting of a suspected enterococcal BSI. Unfortunately, VRE treatment options are limited and relatively expensive, and they subject patients to the risk of adverse reactions. We hypothesized that the results of VRE colonization screening could predict vancomycin resistance in enterococcal BSI.
We reviewed 370 consecutive cases of enterococcal BSI over a 7-year period at 2 tertiary-care hospitals to determine whether vancomycin-resistant BSIs could be predicted based on known colonization status (ie, patients with swabs performed within 30 days, more remotely, or never tested). We calculated sensitivity and specificity, and we plotted negative predictive values (NPVs) and positive predictive values (PPVs) as a function of prevalence.
A negative screening swab within 30 days of infection yielded NPVs of 90% and 95% in settings where <27.0% and <15.0% of enterococcal BSIs, respectively, are resistant to vancomycin. In patients with known VRE colonization, the PPV for VRE in enterococcal BSI was >50% at any prevalence exceeding 25%.
A negative VRE screening test performed within 30 days can help eliminate unnecessary empiric therapy in patients with suspected enterococcal BSI. Conversely, patients with positive VRE screening swabs require careful consideration of empiric VRE-directed therapy when enterococcal BSI appears likely.
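The dependence of predictive values on prevalence follows directly from Bayes' rule. A minimal sketch is below; the sensitivity and specificity values are illustrative placeholders, since the swab's actual operating characteristics are reported only in the full study.

```python
def npv(sens, spec, prev):
    """Negative predictive value given sensitivity, specificity, and prevalence."""
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

def ppv(sens, spec, prev):
    """Positive predictive value given sensitivity, specificity, and prevalence."""
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

# Hypothetical operating characteristics for a screening swab within 30 days
sens, spec = 0.80, 0.75
for prev in (0.15, 0.27):  # share of enterococcal BSIs resistant to vancomycin
    print(f"prevalence {prev:.0%}: NPV {npv(sens, spec, prev):.2f}, "
          f"PPV {ppv(sens, spec, prev):.2f}")
```

As prevalence falls, the NPV rises toward 1 while the PPV falls, which is why a negative swab is most reassuring in low-VRE settings.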
Background: Adults are at risk of being exposed to influenza from many sources. Healthcare personnel (HCP) have the additional risk of being exposed to ill patients.
Objective: To determine whether HCP were at higher risk of influenza than adults working in nonhealthcare roles (non-HCP).
Design: Prospective cohort study.
Setting: Acute-care hospitals and other businesses in Toronto, Ontario, Canada.
Methods: Adults aged 18–69 years were enrolled for 1 or more of the 2010/2011, 2011/2012, and 2012/2013 influenza seasons. Swabs collected during acute respiratory illnesses were tested for influenza, and pre- and postseason blood samples were tested for influenza-specific immune responses.
Results: The adjusted odds of influenza were similar for HCP and non-HCP (odds ratio [OR], 1.29; 95% confidence interval [CI], 0.63–2.63). Older adults and those vaccinated against influenza had lower odds of influenza; those who shared a workspace and those who used corrective eyewear had higher odds.
Conclusions: HCP and other working adults are at similar risk of influenza infection.
Diverse theoretical perspectives suggest that place plays an important role in human behavior. One recent perspective proposes that habitual and recursive use of places among humans may be an emergent property of obligate tool use by our species. In this view, the costs of tool use are reduced by preferential occupation of previously occupied places where cultural materials have been discarded. Here we use the model to generate five predictions for ethnographic mobility patterns. We then test the predictions against observations made during one month of coresidence with a residentially mobile Dukha family in the Mongolian Taiga. We show that (1) there is a strong tendency to occupy previously used camps, (2) previously deposited materials are habitually recycled, (3) reoccupation of places transcends kinship, (4) occupational hiatuses can span decades or longer, and (5) the distribution of occupation intensity among camps is highly skewed such that most camps are not intensively reoccupied whereas a few camps experience extremely high reoccupation intensity. These findings complement previous archaeological findings and support the conclusion that the constructed dimensions of human habitats exert a strong influence on mobility patterns in mobile societies.
OBJECTIVES/SPECIFIC AIMS: Traditional clinical trials typically enroll a homogeneous population to test the efficacy of an intervention. Pragmatic trials deliberately enroll a more diverse population to enhance generalizability, but doing so may increase heterogeneity of treatment effect among subpopulations. For example, the effect of a treatment on an outcome may vary based on patients’ sex, comorbidities, or baseline risk of experiencing the outcome. We hypothesized that heterogeneity of treatment effect by baseline risk for the outcome could be demonstrated in a large pragmatic clinical trial. METHODS/STUDY POPULATION: We performed a prespecified secondary analysis of a recent pragmatic trial comparing balanced crystalloids versus 0.9% saline among critically ill adults. The primary endpoint of the trial was major adverse kidney events within 30 days of ICU admission, censored at hospital discharge (MAKE30). MAKE30 is a composite outcome of all-cause mortality, new renal replacement therapy, or persistent renal dysfunction. Using a previously published model with high predictive accuracy for MAKE30 (area under the curve=0.903), we calculated the baseline risk of MAKE30 for all trial participants. We then developed a logistic regression model for MAKE30 with independent covariates of fluid group assignment, baseline risk of MAKE30 as a nonlinear continuous variable, and the interaction between group assignment and MAKE30 baseline risk. RESULTS/ANTICIPATED RESULTS: Among 15,802 patients from 5 intensive care units enrolled in the original trial, 126 had missing variables for predicted risk of MAKE30. Mean predicted risk of MAKE30 among all patients was 15.4%; median was 4.4% (interquartile range 2.2%–17.1%). Predicted risk of MAKE30 did not significantly differ between groups (p=0.61 by Mann-Whitney U-test). The incidence of MAKE30 in the trial was 14.9%, and the prediction model showed good discrimination overall (AUC=0.891).
In a logistic regression model examining the interaction between group assignment and predicted risk of MAKE30, group assignment significantly affected MAKE30 (odds ratio saline:balanced 1.13, 95% CI: 1.02–1.27, p=0.02), but we observed no interaction between the effect of group assignment on MAKE30 and patients’ predicted risk of MAKE30 at baseline (p=0.66 for interaction term). DISCUSSION/SIGNIFICANCE OF IMPACT: In a large pragmatic trial demonstrating a significant difference in the primary outcome of MAKE30 between balanced crystalloids and saline, a previously published model accurately predicted MAKE30 using baseline factors. However, contrary to our hypothesis, the baseline risk of MAKE30 did not modify the effect of fluid group on the observed incidence of MAKE30. Our analysis could not account for unmeasured confounders and may be underpowered to detect a significant interaction. Our findings suggest that the impact of balanced crystalloids versus normal saline on renal outcomes in critically ill patients is consistent across all levels of risk.
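The interaction analysis described above can be sketched on simulated data. The sketch below uses plain NumPy with a hand-rolled Newton-Raphson (IRLS) fit and simulates an outcome with a modest group effect and no group-by-risk interaction; the group coding, effect size, and risk distribution are all hypothetical stand-ins, not trial data.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Fit logistic regression by Newton-Raphson (IRLS); returns coefficients."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (y - p)
        hess = X.T @ (X * (p * (1 - p))[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n).astype(float)  # 0 = balanced, 1 = saline (hypothetical coding)
risk = rng.beta(1.2, 6.0, n)                 # right-skewed baseline risk, median below mean
risk_logit = np.log(risk / (1 - risk))

# Simulate outcomes with a group effect (log-odds 0.12) and NO interaction
true_logit = risk_logit + 0.12 * group
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Covariates: intercept, group, baseline risk (logit scale), group x risk interaction
X = np.column_stack([np.ones(n), group, risk_logit, group * risk_logit])
beta = fit_logistic(X, y)
print(beta)  # the interaction coefficient beta[3] should be near zero
```

A Wald or likelihood-ratio test on the interaction coefficient then answers whether baseline risk modifies the treatment effect; in this simulation, by construction, it should not.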
We have been intensely monitoring photometric variability in proto-planetary nebulae (PPNe) over the past 25 years and radial velocity variability over the past ten years. Pulsational variability has been obvious, in both the light and velocity, although the resulting curves are complex, with multiple periods and varying amplitudes. Observed periods range from 25 to 160 days, and the periods and amplitudes reveal evolutionary trends. We will present our observational results to date for approximately 30 PPNe, and discuss these results, including the search for period changes that might help constrain post-AGB evolutionary timescales.
Hill (Twin Research and Human Genetics, Vol. 21, 2018, 84–88) presented a critique of our recently published paper in Cell Reports entitled ‘Large-Scale Cognitive GWAS Meta-Analysis Reveals Tissue-Specific Neural Expression and Potential Nootropic Drug Targets’ (Lam et al., Cell Reports, Vol. 21, 2017, 2597–2613). Specifically, Hill offered several interrelated comments suggesting potential problems with our use of a new analytic method called Multi-Trait Analysis of GWAS (MTAG) (Turley et al., Nature Genetics, Vol. 50, 2018, 229–237). In this brief article, we respond to each of these concerns. Using empirical data, we conclude that our MTAG results do not suffer from ‘inflation in the FDR [false discovery rate]’, as suggested by Hill (Twin Research and Human Genetics, Vol. 21, 2018, 84–88), and are not ‘more relevant to the genetic contributions to education than they are to the genetic contributions to intelligence’.
Risk prediction scores have been devised to identify patients at increased risk for venous thromboembolism (VTE) in different patient populations and settings. Guideline recommendations for VTE risk assessment vary greatly. We performed a systematic review to synthesize evidence on clinical risk prediction scores for VTE in hospitalized medical and surgical patients.
We systematically searched Medline, EMBASE, Cochrane, National Institute for Health and Care Excellence (NICE), National Guidelines Clearinghouse (NGC), and Guidelines International Network (GIN) databases up to March 2016. We included studies validating risk prediction scores for adult hospitalized patients. We excluded studies for any of the following reasons: non-English publication, conducted in non-OECD (Organisation for Economic Co-operation and Development) countries, validation cohorts focused solely on critical care patients, or scores developed for specific surgical or medical sub-specialty populations. We plotted receiver operating characteristic (ROC) curves of included studies and performed summary ROC meta-analyses for scores in which >1 external validation studies were combinable. Risk of bias was assessed qualitatively. We assessed the strength of the evidence base using Grading of Recommendations Assessment, Development and Evaluation (GRADE).
We screened 110 primary studies and included 18 for analysis. There were seven studies of the Caprini score, three studies of the Padua score, two studies of the IMPROVE score, and one study each of the Arcelus, Geneva, Khorana, RAP, and Kucher scores. Strength of evidence was downgraded for study risk of bias because most studies disproportionately included patients at high risk of VTE. Our summary estimates of the performance of the three combinable scores at clinically relevant thresholds are: Caprini score at a threshold of 3 in surgical patients – 96 percent sensitivity, 44 percent specificity; IMPROVE score at a threshold of 1 in medical patients – 96 percent sensitivity, 20 percent specificity; and Padua score at a threshold of 4 in medical patients – 87 percent sensitivity, 58 percent specificity.
There is moderate-strength evidence for use of the Caprini score to predict VTE in surgical patients and for the Padua and IMPROVE scores in medical patients. Lower thresholds may be warranted to achieve sufficient sensitivity to identify low-risk populations who may not require routine VTE prophylaxis. Studies making direct comparisons of risk prediction scores in similar patient populations are lacking and are necessary to ascertain which score is most effective.
What is the role of nuclear weapons in world politics? The political effects of nuclear weapons were central to the study of international relations during the Cold War. However, after the collapse of the Soviet Union, many people assumed that nuclear weapons were no longer relevant. Careful thinking about nuclear deterrence ground to a halt.
Events soon reminded us that nuclear weapons did not disappear with the end of the Cold War. India and Pakistan tested nuclear weapons in 1998. The United States invaded Iraq in 2003, in part due to concerns about Saddam Hussein's nuclear ambitions. North Korea withdrew from the Non-Proliferation Treaty and joined the nuclear club, carrying out its first successful nuclear explosion in 2006. Iran recently appeared to be on the cusp of building nuclear weapons. Clearly, nuclear weapons continue to influence the contemporary international landscape in many ways. Yet the field of international relations has been unable to answer critical questions about this new nuclear era.
One of those unanswered questions is how nuclear weapons shape the dynamics of coercion in international politics. An emerging wisdom – which we call the “nuclear coercionist” school – holds that nuclear weapons provide states with tremendous political leverage. According to this view, nuclear powers have special advantages in international diplomacy. Not only are they better able to deter attacks against themselves and their allies, but they can also win crises with greater ease and extract political concessions more effectively than nonnuclear countries. Nuclear weapons, according to coercionist logic, are useful for much more than self-defense – they also help states engage in military coercion.
This book has challenged this notion. It has offered an alternative theoretical approach, nuclear skepticism theory, that better explains the role of nuclear weapons in international affairs. This perspective argues that nuclear weapons do not help states throw their weight around in world politics. The reason is that it is exceedingly difficult to make coercive nuclear threats believable. Nuclear blackmail does not work because threats to launch nuclear attacks for offensive political purposes fundamentally lack credibility.
Do nuclear weapons provide countries with advantages in international bargaining? If so, under what conditions? Scholars and policymakers have debated these questions for decades. Remarkably, nearly seventy years into the nuclear age, we still lack consensus about the coercive value of nuclear weapons. Our goal in this chapter is to add greater clarity to the nuclear blackmail debate.
We start by describing the basic complexion of coercion in international politics. Next, we develop a generalized framework of coercion that yields several conclusions about the conditions that favor coercive success, and then ask whether nuclear weapons help bring about – or bolster – these conditions. In the end, we conclude that they do not. Nuclear weapons may be useful for deterrence and self-defense, but they are not useful for coercion.
Coercion: An Introduction
In its broadest sense, coercion involves using threats – either explicit or implied – to motivate someone to act. At its core, then, coercion is about behavior modification. A coercer aims to persuade a victim to alter its behavior by taking actions that serve the coercer's interests. The coercer's objective is to change the target's behavior without actually having to execute the threat. Executing threats can be costly not only for the target, but also for the challenger. Coercers therefore would prefer that their words be sufficient. As Clausewitz wrote: “The aggressor is always peace-loving… he would prefer to take over our country unopposed.” Coercion, then, is at its most effective when no punishment is ever imposed.
In our lives, as in international politics, coercion is inextricably woven into the daily rhythms of human interaction. A parent threatening to withhold a toy from a misbehaving child, a boss warning an insubordinate employee, or a homeowner threatening to sue a builder for breach of contract are all engaging in coercion. In each case, the coercer holds out the possibility of some unpleasant consequence unless the target behaves to the coercer's liking. Consider the following (hypothetical) scenarios:
The dog next door has been terrorizing the neighborhood for months. One day, the dog attacks a child who was playing in her own front yard. The child's mother promptly marches to the door of the dog's owner, looks him in the eye, and issues a stern warning: “We've had enough. If you don't get rid of that dog today, then I will.”