American Indian and Alaska Native peoples (AI/AN) have a disproportionately high rate of obesity, but little is known about the social determinants of obesity among older AI/AN. Thus, our study assessed social determinants of obesity in AI/AN aged ≥ 50 years.
Design:
We conducted a cross-sectional analysis using multivariate generalised linear mixed models to identify social determinants associated with the risk of being classified as obese (BMI ≥ 30·0 kg/m²). Analyses were conducted for the total study population and stratified by median county poverty level.
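For illustration, the sketch below fits a binomial mixed model with a county-level random intercept in Python (statsmodels). It is a minimal stand-in, not the authors' analysis: the input file and column names are hypothetical, and statsmodels' variational Bayes estimator substitutes for the frequentist GLMM, whose exact specification is not given in the abstract.

```python
# Illustrative sketch only; the input file and column names are hypothetical.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("ihs_cohort.csv")                 # hypothetical extract
df["obese"] = (df["bmi"] >= 30.0).astype(int)      # BMI >= 30.0 kg/m^2

# Binomial mixed model with a county-level random intercept; statsmodels
# estimates this with a variational Bayes approximation, used here as a
# stand-in for the frequentist GLMM described in the Design section.
model = BinomialBayesMixedGLM.from_formula(
    "obese ~ age_group + sex + medicaid + private_ins + drive_time",
    vc_formulas={"county": "0 + C(county)"},
    data=df,
)
result = model.fit_vb()
print(result.summary())
```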
Setting:
Indian Health Service (IHS) data for AI/AN who used IHS services in FY2013.
Participants:
In total, 27 696 AI/AN aged ≥ 50 years without diabetes.
Results:
Mean BMI was 29·8 ± 6·6 kg/m², with 43 % classified as obese. Women were more likely to be obese than men, and younger ages were associated with higher obesity risk. While having Medicaid coverage was associated with lower odds of obesity, private health insurance was associated with higher odds. Living in areas with lower rates of educational attainment and longer drive times to primary care services was associated with higher odds of obesity. Those who lived in a county where a larger percentage of people had low access to a grocery store were significantly less likely to be obese.
Conclusions:
Our findings contribute to the understanding of social determinants of obesity among older AI/AN and highlight the need for further investigation, including longitudinal studies with a life course perspective, of the social determinants of obesity in this population.
Recurrent laryngeal nerve injury leading to vocal cord paralysis is a known complication of cardiothoracic surgery. Its occurrence during interventional catheterisation procedures has been documented in case reports, but no studies have determined its incidence.
Objective:
To establish the incidence of left recurrent laryngeal nerve injury leading to vocal cord paralysis after left pulmonary artery stenting, patent ductus arteriosus device closure and the combination of the procedures either consecutively or simultaneously.
Methods:
Members of the Congenital Cardiovascular Interventional Study Consortium were asked to perform a retrospective analysis to identify cases of recurrent laryngeal nerve injury after the aforementioned procedures. Twelve institutions participated in the analysis. They also contributed the total number of each procedure performed at their respective institutions for statistical purposes.
Results:
Of the 1337 patients who underwent left pulmonary artery stent placement, six (0.45%) had confirmed vocal cord paralysis. Of the 4001 patients who underwent patent ductus arteriosus device closure, two (0.05%) developed left vocal cord paralysis. Patients who underwent both left pulmonary artery stent placement and patent ductus arteriosus device closure had the highest incidence of vocal cord paralysis, which occurred in 4 of 26 patients (15.4%). Overall, 92% of affected patients in our study population had resolution of symptoms.
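For readers checking the arithmetic, the quoted percentages follow directly from the event counts above; the snippet below simply reproduces them (a worked example, not study code).

```python
# Incidence of vocal cord paralysis by procedure group (counts from the text).
groups = {
    "LPA stent only": (6, 1337),
    "PDA device closure only": (2, 4001),
    "LPA stent + PDA closure": (4, 26),
}
for name, (events, n) in groups.items():
    print(f"{name}: {events}/{n} = {100 * events / n:.2f}%")
```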
Conclusion:
Recurrent laryngeal nerve injury is a rare complication of left pulmonary artery stent placement or patent ductus arteriosus device closure. However, the incidence is highest in patients undergoing both procedures either consecutively or simultaneously. Additional research is necessary to identify contributing factors so that the risk of recurrent laryngeal nerve injury can be reduced.
The inaugural data from the first systematic program of sea-ice observations in Kotzebue Sound, Alaska, in 2018 coincided with the first winter in living memory when the Sound was not choked with ice. The following winter of 2018–19 was even warmer and characterized by even less ice. Here we discuss the mass balance of landfast ice near Kotzebue (Qikiqtaġruk) during these two anomalously warm winters. We use in situ observations and a 1-D thermodynamic model to address three research questions developed in partnership with an Indigenous Advisory Council. In doing so, we improve our understanding of connections between landfast ice mass balance, marine mammals and subsistence hunting. Specifically, we show: (i) ice growth stopped unusually early due to strong vertical ocean heat flux, which also likely contributed to an early start to bearded seal hunting; (ii) unusually thin ice contributed to widespread surface flooding. The associated snow ice formation partly offset the reduced ice growth, but the flooding likely had a negative impact on ringed seal habitat; (iii) sea ice near Kotzebue during the winters of 2017–18 and 2018–19 was likely the thinnest since at least 1945, driven by a combination of warm air temperatures and a persistent ocean heat flux.
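The abstract does not specify the 1-D thermodynamic model, so the following is only a hedged sketch of the underlying physics: a zero-layer, Stefan-type growth calculation in which basal growth is set by conduction through the snow/ice slab minus the vertical ocean heat flux. All parameter values, and the function itself, are illustrative rather than the authors' configuration; it shows how a strong ocean heat flux can halt or reverse ice growth.

```python
# Zero-layer, Stefan-type ice growth sketch (not the model used in the study).
RHO_I = 917.0    # ice density, kg m^-3
L_F   = 3.34e5   # latent heat of fusion, J kg^-1
K_I   = 2.2      # thermal conductivity of ice, W m^-1 K^-1
K_S   = 0.3      # thermal conductivity of snow, W m^-1 K^-1
T_F   = -1.8     # freezing point of seawater, deg C

def grow(h0, days, t_surf, h_snow, f_ocean, dt=3600.0):
    """Integrate ice thickness h (m) with hourly forward-Euler steps."""
    h = h0
    for _ in range(int(days * 86400 / dt)):
        # conductive heat flux through snow and ice in series
        f_cond = (T_F - t_surf) / (h / K_I + h_snow / K_S)
        dhdt = (f_cond - f_ocean) / (RHO_I * L_F)   # basal growth rate, m s^-1
        h = max(h + dhdt * dt, 0.0)
    return h

# 30 days at -20 deg C surface temperature with 10 cm of snow:
print(grow(0.5, 30, t_surf=-20.0, h_snow=0.10, f_ocean=0.0))   # quiescent ocean
print(grow(0.5, 30, t_surf=-20.0, h_snow=0.10, f_ocean=40.0))  # strong ocean heat flux
```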
To determine the impact of electronic health record (EHR)–based interventions and test restriction on Clostridioides difficile tests (CDTs) and hospital-onset C. difficile infection (HO-CDI).
Design:
Quasi-experimental study in 3 hospitals.
Setting:
A 957-bed academic hospital (hospital A) and 2 academic-affiliated community hospitals with 354 beds (hospital B) and 175 beds (hospital C).
Interventions:
Three EHR-based interventions were sequentially implemented: (1) an alert when ordering a CDT if laxatives had been administered within the prior 24 hours (January 2018); (2) cancellation of CDT orders after 24 hours (October 2018); (3) contextual rule-driven order questions requiring justification when a laxative had been administered or when EHR documentation of diarrhea was lacking (July 2019). In February 2019, hospital C implemented a gatekeeper intervention requiring approval for all CDTs after hospital day 3. The impact of the interventions on C. difficile testing and HO-CDI rates was estimated using an interrupted time-series analysis.
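The abstract does not give the model specification; as a hedged sketch, a segmented Poisson regression with a patient-day offset is one standard way to estimate the level (IRR) and trend changes reported in the Results. The input file, column names, and intervention month below are hypothetical placeholders for a single intervention.

```python
# Hedged sketch of a segmented (interrupted time-series) Poisson regression,
# not the authors' exact model.  Monthly test (or HO-CDI) counts with
# patient-days as the exposure offset; all names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("monthly_cdt.csv")            # hypothetical input
df["month"] = np.arange(len(df))               # time trend (months)
df["post"] = (df["month"] >= 24).astype(int)   # 1 after the intervention month
df["months_post"] = df["post"] * (df["month"] - 24)

model = smf.glm(
    "cdt_count ~ month + post + months_post",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["patient_days"]),
).fit(cov_type="HC1")                          # robust SEs against overdispersion

# exp(coefficients): 'post' -> level change (IRR), 'months_post' -> trend change
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```

With monthly data, the annual trend changes reported below correspond to exp(12 × trend coefficient).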
Results:
C. difficile testing was already declining in the preintervention period (annual change in incidence rate [IR], 0.79; 95% CI, 0.72–0.87) and did not decrease further with the EHR interventions. The laxative alert was temporally associated with a trend reduction in HO-CDI (annual change in IR from baseline, 0.85; 95% CI, 0.75–0.96) at hospitals A and B. The gatekeeper intervention at hospital C was associated with level (IRR, 0.50; 95% CI, 0.42–0.60) and trend reductions in C. difficile testing (annual change in IR, 0.91; 95% CI, 0.85–0.98) and level (IRR, 0.42; 95% CI, 0.22–0.81) and trend reductions in HO-CDI (annual change in IR, 0.68; 95% CI, 0.50–0.92) relative to the baseline period.
Conclusions:
Test restriction was more effective than EHR-based clinical decision support at reducing C. difficile testing in our 3-hospital system.
ABSTRACT IMPACT: This study aims to provide insight into naturally acquired immunity against severe malaria, thereby laying the foundation for the design of novel vaccine candidates to prevent severe disease as well as monoclonal antibody therapies to treat severe malaria. OBJECTIVES/GOALS: Severe malaria is caused by parasite surface antigens that contain high sequence diversity. Nevertheless, P. falciparum-exposed individuals develop antibody responses against these antigens. Our goal is to isolate antibodies with broad reactivity to understand how disease protection is acquired. METHODS/STUDY POPULATION: Our study cohort consists of Ugandan adults living in a malaria-endemic region with high transmission intensity, who are protected against severe malaria. Using fluorescently labeled probes of parasite surface antigens, we have isolated antigen-specific B cells from these donors. We then expressed the corresponding monoclonal antibodies in vitro. These antibodies were screened against a library of variant surface antigens to determine antibody breadth and potential to inhibit interaction of the parasite surface antigen with host receptors, a critical step in pathogenesis. Additionally, using a panel of variant surface antigen mutants, we have predicted the epitopes targeted by the broadest monoclonal antibodies. RESULTS/ANTICIPATED RESULTS: We have identified three monoclonal antibodies with exceptionally broad reactivity and inhibitory activity against our panel of severe disease-inducing variant surface antigens. We have identified two major sites targeted by these broadly reactive antibodies. The first site was associated with the largest breadth, but limited inhibitory potential, while the second site showed high-affinity antibody binding and inhibition of receptor binding. Interestingly, two of these three antibodies were very similar in structure, even though they were isolated from different donors. Isolation of antigen-specific B cells from additional donors will enable us to identify how common such broadly reactive antibodies are and allow the identification of additional epitopes. DISCUSSION/SIGNIFICANCE OF FINDINGS: This study is the first to isolate broadly reactive antibodies that are likely to protect against severe malaria in naturally immune individuals. Further characterization of antibody-antigen interactions will inform the development of this surface antigen as a vaccine candidate for malaria.
To identify risk factors associated with progression to severe disease and mortality among hospitalized SARS-CoV-2 patients in the Southeast region of the United States.
Design, setting, and participants:
Multicenter, retrospective cohort including 502 adults hospitalized with laboratory-confirmed COVID-19 between March 1, 2020, and May 8, 2020, at 1 of 15 participating hospitals in 5 health systems across 5 states in the Southeast United States.
Methods:
The study objectives were to identify risk factors that could increase progression to hospital mortality and severe disease (defined as a composite of intensive care unit admission or requirement of mechanical ventilation) in hospitalized SARS-CoV-2 patients in the Southeast United States.
Results:
In total, 502 patients were included, and 476 of 502 (95%) had clinically evaluable outcomes. The hospital mortality rate was 16% (76 of 476); 35% (177 of 502) required ICU admission and 18% (91 of 502) required mechanical ventilation. By both univariate and adjusted multivariate analyses, hospital mortality was independently associated with age (adjusted odds ratio [aOR], 2.03 for each decade increase; 95% confidence interval [CI], 1.56–2.69), male sex (aOR, 2.44; 95% CI, 1.34–4.59), and cardiovascular disease (aOR, 2.16; 95% CI, 1.15–4.09). As with mortality, risk of severe disease was independently associated with age (aOR, 1.17 for each decade increase; 95% CI, 1.00–1.37), male sex (aOR, 2.34; 95% CI, 1.54–3.60), and cardiovascular disease (aOR, 1.77; 95% CI, 1.09–2.85).
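As a hedged sketch of how adjusted odds ratios of this kind are typically obtained (not the authors' code, data dictionary, or full covariate set), a multivariable logistic regression with age scaled to decades can be fit in statsmodels; every name below is a hypothetical placeholder.

```python
# Illustrative multivariable logistic regression; file and columns are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("covid_cohort.csv")           # hypothetical input
df["age_decade"] = df["age_years"] / 10        # so the OR is per 10-year increase

fit = smf.logit(
    "died ~ age_decade + male + cardiovascular_disease",
    data=df,
).fit()

# Adjusted odds ratios with 95% CIs (exponentiated coefficients)
odds_ratios = pd.DataFrame({
    "aOR": np.exp(fit.params),
    "2.5%": np.exp(fit.conf_int()[0]),
    "97.5%": np.exp(fit.conf_int()[1]),
})
print(odds_ratios)
```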
Conclusions:
In an adjusted multivariate analysis, advanced age, male sex, and cardiovascular disease increased risk of severe disease and mortality in patients with COVID-19 in the Southeast United States. In-hospital mortality risk doubled with each subsequent decade of life.
Background: The standardized infection ratio (SIR) is the nationally adopted metric used to track and compare catheter-associated urinary tract infections (CAUTIs) and central-line–associated bloodstream infections (CLABSIs). Despite its widespread use, the SIR may not be suitable for all settings and may not capture all catheter harm. Our objective was to evaluate the correlation between SIR and device use for CAUTIs and CLABSIs across community hospitals in a regional network. Methods: We compared SIR and SUR (standardized utilization ratio) for CAUTIs and CLABSIs across 43 hospitals in the Duke Infection Control Outreach Network (DICON) using a scatter plot and calculated an R² value. Hospitals were stratified into large (>70,000 patient days), medium (30,000–70,000 patient days), and small hospitals (<30,000 patient days) based on DICON’s benchmarking for community hospitals. Results: We reviewed 24 small, 11 medium, and 8 large hospitals within DICON. Scatter plots for comparison of SIRs and SURs for CLABSIs and CAUTIs across our network hospitals are shown in Figs. 1 and 2. We detected a weak positive overall correlation between SIR and SUR for CLABSIs (0.33; R² = 0.11), but no correlation between SIR and SUR for CAUTIs (−0.07; R² = 0.00). Of 15 hospitals with SUR >1, 7 reported SIR <1 for CLABSIs, whereas 10 of 13 hospitals with SUR >1 reported SIR <1 for CAUTIs. Smaller hospitals showed a better correlation for CLABSI SIR and SUR (0.37) compared to medium and large hospitals (0.19 and 0.22, respectively). Conversely, smaller hospitals showed no correlation between CAUTI SIR and SUR, whereas medium and larger hospitals showed a negative correlation (−0.31 and −0.39, respectively). Conclusions: Our data reveal a weak positive correlation between SIR and SUR for CLABSIs, suggesting that central line use impacts CLABSI SIR to some extent. However, we detected no correlation between SIR and SUR for CAUTIs in smaller hospitals and a negative correlation for medium and large hospitals. Some hospitals with low CAUTI SIRs might actually have higher device use, and vice versa. Therefore, the SIR alone does not adequately reflect preventable harm related to urinary catheters. Public reporting of SIR may incentivize hospitals to focus more on urine culture stewardship rather than reducing device utilization.
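For context on the statistics quoted above, the R² values are simply the squared Pearson correlations (e.g. 0.33² ≈ 0.11). A minimal sketch with synthetic hospital-level values (not DICON data) shows the calculation:

```python
# Synthetic illustration only; SIR/SUR values are random, not DICON data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sur = rng.normal(1.0, 0.3, size=43)                  # one SUR per hospital
sir = 0.3 * sur + rng.normal(0.7, 0.3, size=43)      # weakly correlated SIRs

r, p = stats.pearsonr(sur, sir)
print(f"r = {r:.2f}, R^2 = {r ** 2:.2f}, p = {p:.3f}")
```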
Background: UV-C light reduces contamination of high-touch clinical surfaces. Few studies have tested the relative efficacy of UV-C devices in real-world clinical environments. Methods: We assessed the efficacy of the Tru-D (SmartUVC) and Moonbeam-3 UV-C (Diversey) devices at eradicating important clinical pathogens in 2 hyperbaric chambers at a tertiary-care hospital. Formica sheets were inoculated with 10⁶–10⁷ CFU of MRSA (USA300) or 10⁴–10⁵ CFU of C. difficile (NAP1). Sheets were placed in 6 predetermined locations throughout the chambers. Two Moonbeam-3 UV-C devices were positioned in the center of each chamber and were run for 3-minute (per manufacturer’s instructions) and 5-minute cycles. One Tru-D was positioned in the center of the chamber and was run on the vegetative cycle for MRSA and the spore cycle for C. difficile. UV-C dosage was measured for both machines. Quantitative cultures were collected using Rodac plates with DE neutralizing agar and were incubated at 37°C for 48 hours. C. difficile was likewise plated onto sheep’s blood agar. Results: We ran each combination of chamber, microbe, and UV-C device in triplicate, for a total of 108 samples per species.
For MRSA, the Tru-D vegetative cycle, the 5-minute Moonbeam cycle, and the 3-minute Moonbeam cycle resulted in average CFU log10 reductions of 7.02 (95% CI, 7.02–7.02), 6.99 (95% CI, 6.95–7.02), and 6.58 (95% CI, 6.37–6.79), respectively (Fig. 1). The Tru-D vegetative and 5-minute Moonbeam cycles were similarly effective (P > .99), and both were more effective than the 3-minute Moonbeam cycle (P < .001 and P < .001, respectively). MRSA samples receiving direct UV-C exposure had significantly greater log10 reductions (6.95; 95% CI, 6.89–7.01) than those receiving indirect exposure (6.67; 95% CI, 6.46–6.87; P < .05) (Fig. 2). For C. difficile, the Tru-D sporicidal cycle and the 5- and 3-minute Moonbeam cycles resulted in average CFU log10 reductions of 1.78 (95% CI, 1.43–2.12), 0.57 (95% CI, 0.33–0.81), and 0.64 (95% CI, 0.42–0.86), respectively (Fig. 1). Tru-D was significantly more effective than either the 3- or 5-minute Moonbeam cycle (P < .00). C. difficile samples receiving direct UV-C exposure had a higher dosage and significantly greater log10 reductions (1.34; 95% CI, 1.10–1.58) than those receiving indirect exposure (0.58; 95% CI, 0.31–0.86; P < .01) (Fig. 2). Conclusions: Use of the Tru-D vegetative cycle and the 5-minute Moonbeam cycle resulted in similar reductions in MRSA; both resulted in significantly greater reductions than the manufacturer’s recommended 3-minute Moonbeam cycle. Therefore, healthcare facilities should carefully evaluate manufacturer-recommended run times in their specific clinical setting. For C. difficile, the Tru-D sporicidal cycle was significantly more effective than either of the Moonbeam cycles, likely due to higher irradiation levels. Consistent with this, direct UV-C exposure resulted in greater average reductions than indirect exposure.
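As a hedged illustration of the log10-reduction metric used above (the counts below are made up, not trial data), the calculation is simply the difference between the inoculum and the recovered count on a log10 scale, averaged across replicate coupons.

```python
# Worked example of a CFU log10 reduction; all counts are illustrative.
import numpy as np

inoculum_cfu  = np.array([5.0e6, 5.0e6, 5.0e6])  # e.g. MRSA coupons, ~10^6-10^7 CFU
recovered_cfu = np.array([1, 1, 3])              # colonies recovered on Rodac plates

log_reduction = np.log10(inoculum_cfu) - np.log10(np.maximum(recovered_cfu, 1))
mean = log_reduction.mean()
half_width = 1.96 * log_reduction.std(ddof=1) / np.sqrt(len(log_reduction))
print(f"mean log10 reduction = {mean:.2f} "
      f"(95% CI, {mean - half_width:.2f} to {mean + half_width:.2f})")
```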
Cognitive deficits at the first episode of schizophrenia are predictive of functional outcome. Interventions that improve cognitive functioning early in schizophrenia are critical if we hope to prevent or limit long-term disability in this disorder.
Methods
We completed a 12-month randomized controlled trial of cognitive remediation and of long-acting injectable (LAI) risperidone with 60 patients with a recent first episode of schizophrenia. Cognitive remediation involved programs focused on basic cognitive processes as well as more complex, life-like situations. Healthy behavior training of equal treatment time was the comparison group for cognitive remediation, while oral risperidone was the comparator for LAI risperidone in a 2 × 2 design. All patients were provided supported employment/education to encourage return to work or school.
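As a hedged sketch of how a 2 × 2 factorial comparison like this can be analysed at a single endpoint (the trial's actual longitudinal models are not described here, and every name below is a hypothetical placeholder), a two-way ANOVA on an end-of-study cognition score would look like:

```python
# Illustrative only: 'cognition_12m', 'lai' (LAI vs oral risperidone), and 'cr'
# (cognitive remediation vs healthy behavior training) are hypothetical columns.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("first_episode_trial.csv")      # hypothetical data file
model = smf.ols("cognition_12m ~ C(lai) * C(cr)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))           # main effects and interaction
```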
Results
Both antipsychotic medication adherence and cognitive remediation contributed to cognitive improvement. Cognitive remediation was superior to healthy behavior training in the LAI medication condition but not the oral medication condition. Cognitive remediation was also superior when medication adherence and protocol completion were covaried. Both LAI antipsychotic medication and cognitive remediation led to significantly greater improvement in work/school functioning. Effect sizes were larger than in most prior studies of first-episode patients. In addition, cognitive improvement was significantly correlated with work/school functional improvement.
Conclusions
These results indicate that consistent antipsychotic medication adherence and cognitive remediation can significantly improve core cognitive deficits in the initial period of schizophrenia. When combined with supported employment/education, cognitive remediation and LAI antipsychotic medication each show a separate, significant impact on improving work/school functioning.
Chapter 4 examines the bubble in railway shares which occurred in the UK in the mid-1840s. Railway share prices more than doubled between 1843 and the autumn of 1845. In addition, there was a promotion boom with hundreds of new railways being authorised by Parliament. By the autumn of 1845, 562 new railway schemes had been submitted to Parliament. Following several major newspaper editorials regarding this folly, the bubble came to an end. The chapter then moves on to discuss the causes of the bubble. The incorporation of hundreds of railway companies by Parliament resulted in an increase in marketability. In terms of money and credit, interest rates were at an historical low and part-paid shares leveraged the buying of shares. The railway bubble witnessed the democratisation of speculation, with many middle-class individuals buying shares for the first time. The spark which set the bubble fire alight was the Railway Act. This Act signalled that railways had the potential to be very remunerative investments. It also created the Railway Board, which was a means of coordinating applications to build railways so that a national rail network was constructed. The chapter concludes by examining the consequences of the bubble, arguing that the bubble was a deeply inefficient way to create a national rail network, and much too wasteful to be considered useful.
Chapter 11 examines the stock market bubbles which occurred in China in 2007 and 2015. Between the end of 2005 and October 2007, the stock market soared by over 400 per cent. One year later, the market had fallen by 70 per cent. Similarly, in the year before June 2015, the stock market had increased by more than 150 per cent. It then collapsed by more than 50 per cent in under three months. The chapter discusses how, in the space of 20 years, China went from having almost no marketability to having heavily controlled marketability, and then near-free marketability. China also went from having virtually no middle class to having the world’s largest middle class, which then became the new speculating class. Thanks to margin lending, they were able to borrow heavily to finance their investments. Both bubbles are very clear examples of how and why governments engineer bubbles in the first instance. In 2007 the Chinese authorities needed to stimulate privatisation and in 2015 they needed to unwind the largest economic stimulus in history.
Chapter 8 examines the land and stock market bubbles that occurred in Japan in the 1980s. In the seven years before its peak, the Japanese stock market appreciated 386 per cent. Similarly, land prices rose by 207 per cent. By August 1992, the Japanese stock market had fallen 62 per cent from its peak, and by 1995, land was 50 per cent below its peak. Both land prices and the stock market continued to fall into the next decade. The chapter then uses the bubble triangle to explain the Japanese land and stock bubbles. These bubbles were purely political creations. Not only did the Japanese government provide the spark, but it systematically cultivated all three sides of the bubble triangle with the explicit goal of generating a boom. This process was clearest in the realm of money and credit, where an expansion was both a central part of Japan’s economic policy and, after the Plaza Accord, an international commitment. The chapter concludes by looking at how the collapse of the Japanese bubbles weakened the country’s banking system, which eventually had to be rescued by the government, and resulted in a stagnant economy for over two decades.
Why do stock and housing markets sometimes experience amazing booms followed by massive busts and why is this happening more and more frequently? In order to answer these questions, William Quinn and John D. Turner take us on a riveting ride through the history of financial bubbles, visiting, among other places, Paris and London in 1720, Latin America in the 1820s, Melbourne in the 1880s, New York in the 1920s, Tokyo in the 1980s, Silicon Valley in the 1990s and Shanghai in the 2000s. As they do so, they help us understand why bubbles happen, and why some have catastrophic economic, social and political consequences whilst others have actually benefited society. They reveal that bubbles start when investors and speculators react to new technology or political initiatives, showing that our ability to predict future bubbles will ultimately come down to being able to predict these sparks.
Chapter 10 examines the housing bubble which occurred in Ireland, Spain, the UK and the United States in the 2000s. House prices in many parts of these countries more than doubled in the years leading up to 2007. They then crashed with terrible consequences for the global financial system, which imploded in September 2008 when Lehman Brothers entered bankruptcy. The chapter then discusses how the bubble triangle explains this episode. Financial alchemy meant that mortgage finance could be provided to a wider range of people, thus making the family home much more marketable and an object of speculation. The spark which ignited the subprime bubble was a policy decision taken in the late 1990s that attempted to use loose mortgage lending standards as a substitute for government-provided social housing. The chapter then examines the economic, social and political consequences of the bubble. The housing bubble of the 2000s is a perfect example of an economically and socially destructive bubble, despite extraordinary measures taken by governments and central bankers to save the system. The chapter concludes by drawing a line from the housing bubble and its collapse to the rise of populism.
Chapter 12 is the conclusion of the book. The chapter starts by arguing that the bubble triangle can explain why the cryptocurrency bubble occurred in 2017. It then asks whether the bubble triangle is a good predictive tool. The answer to that question is yes, but bubbles are still difficult to predict because the sparks are difficult to discern. The bubble triangle is also able to predict which bubbles will be destructive (politically sparked bubbles with high bank lending) and which will be useful (technology sparked bubbles with low leverage). The chapter then moves on to look at what governments could do to prevent bubbles. However, since political bubbles are often created because they are in the government’s interest, governments cannot be relied upon to take these measures. The question then arises as to whether the news media can alert investors to the presence of bubbles. The answer to this question very much depends on whether they have the incentive to do so, and this incentive appears to be diminishing over time. The chapter concludes by arguing that investors need to build broad mental models, which include history, if they are to have any chance of predicting bubbles.
Chapter 5 examines the bubble that occurred in Australia in the late 1880s. During 1887 and 1888, there was a major bubble in the price of suburban land, particularly in Melbourne. In addition, companies involved in the financing and development of urban land were created at this time, and during the first half of 1888 their share prices doubled. After the peak in October 1888, the share prices of these companies and urban land prices fell sharply. We then explain why it took several years for the liquidation of the land boom to affect the wider economy. The chapter then moves on to discuss how the bubble triangle explains this episode. In particular, this was the first major bubble where investors were speculating with other people’s money, provided ultimately by the country’s banks. The spark which ignited the land boom was the liberalisation in 1887 of the restriction on banks’ lending on the security of real estate. This was the final act in a 25-year liberalisation process. The chapter concludes by examining the dire consequences of the bubble. In 1893, the Australian banking system collapsed and, as a result, Australia experienced a very long and deep economic recession.