Resources devoted to osteoarthritis surgical care in Australia are increasing annually, with significant expenditure attributable to hip and knee arthroplasties. Safe, efficient, and sustainable models of care are required. This study aimed to determine the impact on healthcare costs of implementing an enhanced short-stay model of care (ESS-MOC) for arthroplasty at a national level.
Methods
A budget impact analysis was conducted for hospitals providing arthroplasty surgery over the years 2023 to 2030. Population-based projections of individuals receiving hip or knee arthroplasty for osteoarthritis, derived from clinical registry and administrative datasets, were applied. The ESS-MOC assigned 30 percent of eligible patients to a shortened acute-ward-stay pathway and outpatient rehabilitation; the remaining 70 percent received a current-practice pathway. The primary outcome was total healthcare cost savings post-implementation of the ESS-MOC; the return on investment (ROI) ratio and hospital bed-days utilized were also estimated. Costs are presented in Australian dollars (AUD) and United States dollars (USD), at 2023 prices.
Results
Estimated hospital cost savings for the years 2023 to 2030 from implementing the ESS-MOC were AUD641 million (USD427 million) (95% CI: AUD99 million [USD66 million] to AUD1,250 million [USD834 million]). This corresponds to an ROI ratio of 8.88 (1.3 to 17.9) dollars returned for each dollar invested in implementing the care model. For the period 2023 to 2030, an estimated 337,000 (261,000 to 412,000) acute surgical ward bed-days and 721,000 (471,000 to 1,028,000) rehabilitation bed-days could be saved. Total implementation costs for the ESS-MOC were estimated at AUD72 million (USD46 million) over eight years.
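As a rough arithmetic check, the reported ROI ratio is consistent with dividing the gross savings by the implementation cost; the exact formula is an assumption here, since the abstract does not state it. A minimal sketch:

```python
# Rough check of the reported ROI ratio, assuming (not stated in the
# abstract) that ROI = gross hospital cost savings / implementation cost.
savings_aud = 641e6         # estimated hospital cost savings, 2023-2030 (AUD)
implementation_aud = 72e6   # estimated ESS-MOC implementation cost (AUD)

roi = savings_aud / implementation_aud
print(f"{roi:.2f} dollars returned per dollar invested")
# ~8.90, in line with the reported 8.88 (the small gap is rounding of inputs)
```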
Conclusions
Implementation of an ESS-MOC for eligible arthroplasty patients in Australia would generate significant cost and healthcare resource savings. This budget impact analysis demonstrates a best practice approach to comprehensively assessing value, at a national level, of implementing sustainable models of care in high-burden healthcare contexts. Findings are relevant to other settings where hospital stay following joint arthroplasty remains excessively long.
Words with complex semantic types such as book are characterised by a multiplicity of interpretations that are not mutually exclusive (e.g., as a physical object and/or informational content). Their status with respect to lexical ambiguity is notoriously unclear, and it is debatable whether complex types are a particular form of polysemy (closely related to metonymy) or whether they belong to monosemy. In this study, we investigate the nature of complex types by conducting two experiments on ambiguous nouns in French. The first experiment collects speakers’ judgements about the sameness of meaning between different uses of complex-type, metonymic and monosemous words. The second experiment uses a priming paradigm and a sensicality task to investigate the online processing of complex-type words, as opposed to metonymic and monosemous words. Overall results indicate that, on a continuum of lexical ambiguity, complex types are closer to monosemy than to metonymy. The different interpretations of complex-type words are highly connected and fall under the same meaning, arguably in relation to a unique reference. These results suggest that complex types are associated with single underspecified entries in the mental lexicon. Moreover, they highlight the need for a model of lexical representations of ambiguous words that can account for the difference between complex types and metonymy.
This article presents a short summary of the conclusions we report in a longer manuscript (available in our Supplementary Material) subjecting Lagodny et al.’s new measure of state policy mood to the same set of face validity and construct validity tests we applied earlier to Enns and Koch’s measure. We encourage readers to read this longer manuscript, which contains not only the conclusions herein but also the evidence justifying them, before accepting or rejecting any claims we make. Our results show that the characteristics of Enns and Koch’s measure that led us to doubt its validity are also present in Lagodny et al.’s new measure – leaving us just as doubtful that Lagodny et al.’s measure is valid. Moreover, the low correlation between Lagodny et al.’s measure and Enns and Koch’s measure, combined with evidence from replications of seven published studies that the two measures frequently yield quite different inferences about the impact of policy mood on public policy, indicates that Lagodny et al.’s claim that both their measure and Enns and Koch’s measure are valid is wrong; either neither measure is valid, or one is valid and the other is not. Moreover, extending the replications to include not only Lagodny et al.’s and Enns and Koch’s measures, but also Berry et al.’s measure and Caughey and Warshaw’s measure of mass economic liberalism, shows that each of the four measures yields a substantive conclusion about the effect of policy mood that is dramatically different from that of each of the other three. This suggests that the goal of developing a measure of state policy mood that would be widely accepted as valid remains elusive.
Background: Oncology patients are at high risk for bloodstream infection (BSI) due to immunosuppression and frequent use of central venous catheters. Surveillance in this population is largely relegated to inpatient settings, and limited data are available describing community burden. We evaluated rates of BSI, clinic or emergency department (ED) visits, and hospitalizations in a large cohort of oncology outpatients with peripherally inserted central catheters (PICCs). Methods: In this prospective, observational study, we followed a convenience sample of adults (age >18) with PICCs at a large academic outpatient oncology clinic for 35 months between July 2015 and November 2018. We assessed demographics, malignancy type, PICC insertion and removal dates, history of prior PICC, and line duration. Outcomes included BSI events (defined as ≥1 positive blood culture, or ≥2 positive blood cultures if coagulase-negative Staphylococcus), ED visits (without hospitalization), and unplanned hospitalizations (excluding scheduled chemotherapy hospitalizations). We used χ2 analyses to compare the frequency of categorical outcomes, and we used unpaired t tests to assess differences in means of continuous variables in hematologic versus solid-tumor malignancy patients. We used generalized linear mixed-effects models to assess differences in BSI (clustered by patient) separately for gram-positive and gram-negative BSI outcomes. Results: Among 478 patients with 658 unique PICC lines and 64,190 line days, 271 patients (413 lines) had hematologic malignancy and 207 patients (232 lines) had solid-tumor malignancy. Cohort characteristics and outcomes stratified by malignancy type are shown in Table 1. Compared to those with hematologic malignancy, solid-tumor patients were older, had 47% fewer clinic visits, and had a 32% lower frequency of prior PICC lines. Overall, there were 75 BSI events (12%; 1.2 per 1,000 catheter-days). We detected no significant difference in overall BSI rates when comparing solid-tumor versus hematologic malignancies (P = 0.20); gram-positive BSIs were 69% higher in patients with solid tumors, and gram-negative BSIs were 41% higher in patients with hematologic malignancy. Solid-tumor malignancy was associated with 4.5-fold higher odds of developing BSI with a gram-positive pathogen (OR, 4.48; 95% CI, 1.60–12.60; P = .005) compared to hematologic malignancy, after adjusting for age, sex, history of prior PICC, and line duration. Differences in gram-negative BSI were not significant on multivariate analysis. Conclusions: The burden of all-cause BSI in cancer clinic adults with PICC lines was 12%, or 1.2 per 1,000 catheter-days, as high as nationally reported inpatient BSI rates. The higher risk of gram-positive BSIs in solid-tumor patients suggests the need for targeted infection prevention activities in this population, such as improvements in central-line monitoring, outpatient care and maintenance of lines and/or dressings, and chlorhexidine bathing to reduce skin bioburden.
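For readers who want to reproduce the headline incidence rate, it follows directly from the counts given above; a minimal arithmetic sketch:

```python
# Reproducing the reported BSI incidence rate from the abstract's counts.
bsi_events = 75
catheter_days = 64_190   # total line days across the cohort

rate_per_1000 = bsi_events / catheter_days * 1_000
print(f"{rate_per_1000:.1f} BSIs per 1,000 catheter-days")  # -> 1.2, as reported
```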
Paleoethnobotanical perspectives are essential for understanding past lifeways yet continue to be underrepresented in Paleoindian research. We present new archaeobotanical and radiocarbon data from combustion features within stratified cultural components at Connley Caves, Oregon, that reaffirm the inclusion of plants in the diet of Paleoindian groups. Botanical remains from three features in Connley Cave 5 show that people foraged for diverse dryland taxa and a narrow range of wetland plants during the summer and fall months. These data add new taxa to the known Pleistocene food economy and support the idea that groups equipped with Western Stemmed Tradition toolkits had broad, flexible diets. When viewed continentally, this work contributes to a growing body of research indicating that regionally adapted subsistence strategies were in place by at least the Younger Dryas and that some foragers in the Far West may have incorporated a wider range of plants including small seeds, leafy greens, fruits, cacti, and geophytes into their diet earlier than did Paleoindian groups elsewhere in North America. The increasing appearance of diverse and seemingly low-ranked resources in the emerging Paleoindian plant-food economy suggests the need to explore a variety of nutritional variables to explain certain aspects of early foraging behavior.
Enns and Koch question the validity of the Berry, Ringquist, Fording, and Hanson measure of state policy mood and defend the validity of the Enns and Koch measure on two grounds. First, they claim policy mood has become more conservative in the South over time; we present empirical evidence to the contrary: policy mood became more liberal in the South between 1980 and 2010. Second, Enns and Koch argue that an indicator’s lack of face validity in cross-sectional comparisons is irrelevant when judging the measure’s suitability in the most common form of pooled cross-sectional time-series analysis. We show their argument is logically flawed, except under highly improbable circumstances. We also demonstrate, by replicating several published studies, that statistical results about the effect of state policy mood can vary dramatically depending on which of the two mood measures is used, making clear that a researcher’s measurement choice can be highly consequential.
Ethnohistoric accounts indicate that the people of Australia's Channel Country engaged in activities rarely recorded elsewhere on the continent, including food storage, aquaculture and possible cultivation, yet there has been little archaeological fieldwork to verify these accounts. Here, the authors report on a collaborative research project initiated by the Mithaka people addressing this lack of archaeological investigation. The results show that Mithaka Country has a substantial and diverse archaeological record, including numerous large stone quarries, multiple ritual structures and substantial dwellings. Our archaeological research revealed previously unknown aspects, such as the scale of Mithaka quarrying, which could stimulate re-evaluation of Aboriginal socio-economic systems in parts of ancient Australia.
The first demonstration of laser action in ruby was made in 1960 by T. H. Maiman of Hughes Research Laboratories, USA. Many laboratories worldwide began the search for lasers using different materials, operating at different wavelengths. In the UK, academia, industry and the central laboratories took up the challenge from the earliest days to develop these systems for a broad range of applications. This historical review looks at the contribution the UK has made to the advancement of the technology, the development of systems and components and their exploitation over the last 60 years.
Political districts may be drawn to favor one group or political party over another, a practice known as gerrymandering. A number of measurements have been suggested as ways to detect and prevent such behavior. These measures give concrete axes along which districts and districting plans can be compared. However, measurement values are affected by both noise and the compounding effects of seemingly innocuous implementation decisions. Such issues will arise for any measure. As a case study demonstrating these effects, we show that commonly used measures of geometric compactness for district boundaries are affected by several factors irrelevant to fairness or compliance with civil rights law. We further show that an adversary could manipulate measurements to affect the assessment of a given plan. This instability complicates using these measurements as legislative or judicial standards to counteract unfair redistricting practices. This paper accompanies the release of packages in C++, Python, and R that correctly, efficiently, and reproducibly calculate a variety of compactness scores.
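To make "geometric compactness" concrete: one widely used score is Polsby-Popper, 4πA/P², which compares a district's area A to its perimeter P (1.0 for a circle, near 0 for contorted shapes). The sketch below is a minimal standalone illustration, not code from the released packages; note that the score depends on the perimeter, which is exactly where digitization resolution and other implementation choices can shift the result.

```python
import math

def area_and_perimeter(points):
    """Shoelace area and perimeter of a simple polygon given as (x, y) vertices."""
    twice_area, perimeter = 0.0, 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        twice_area += x1 * y2 - x2 * y1
        perimeter += math.hypot(x2 - x1, y2 - y1)
    return abs(twice_area) / 2.0, perimeter

def polsby_popper(points):
    """Polsby-Popper compactness: 4*pi*A / P**2."""
    area, perimeter = area_and_perimeter(points)
    return 4.0 * math.pi * area / perimeter ** 2

# A square is fairly compact; a long thin sliver scores near zero.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sliver = [(0, 0), (10, 0), (10, 0.1), (0, 0.1)]
print(f"square: {polsby_popper(square):.3f}")  # ~0.785
print(f"sliver: {polsby_popper(sliver):.3f}")  # ~0.031
```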
Background: In June 2019, 3 people were diagnosed with Ebola virus disease (EVD) in Kasese district, Uganda, all of whom had come from the Democratic Republic of the Congo (DRC). Although no secondary transmission of Ebola occurred, an assessment of infection prevention and control (IPC) using the WHO basic IPC facility assessment checklist revealed significant gaps. Robust IPC systems are critical for the prevention of healthcare-associated infections like EVD. A rapid intervention was developed and implemented in Kasese to strengthen IPC capacity in high-risk facilities. Methods: Of 117 healthcare facilities, 50 were considered at high risk of receiving suspected EVD cases from the DRC based on population movement assessments. In August 2019, IPC mentors were selected from 25 high-risk facilities and assigned to support their own facility and a second high-risk facility. Mentors ensured formation of IPC committees and implemented the national mentorship strategy for IPC preparedness in non-EVD treatment facilities. This effort focused on screening, isolation, and notification of suspect cases; 4 mentorship visits were conducted (1 per week for 1 month). Middle and terminal assessments were conducted using the WHO IPC checklist 2 and 4 weeks after the intervention commenced. Results were evaluated against baseline data. Results: Overall, 39 facilities had data from baseline, middle, and end assessments. Median scores for facility IPC standard precautions increased from 50% (IQR, 39%–62%) at baseline to 73% (IQR, 67%–76%) at the terminal assessment. Scores increased for all measured parameters except water source (access to running water). The greatest improvements were seen in formation of IPC committees (41% to 75%), hand hygiene compliance (47% to 86%), waste management (51% to 83%), and availability of dedicated isolation areas for suspect cases (16% to 42%). Limited improvement was noted for training on management of suspect isolated cases and availability of personal protective equipment (PPE) (Fig. 1). No differences were noted in scores for facilities with nonresident mentors versus those with resident mentors at baseline (48% vs 50%) or at the end assessment (72% vs 74%). Conclusions: This intervention improved IPC capacity in health facilities while avoiding the cost and service disruption associated with large-scale classroom-based training of health workers. The greatest improvements were seen in activities relying on behavior change, such as hand hygiene, IPC committee formation, and waste management. Smaller changes were seen in areas requiring significant investment, such as isolation areas, a steady water source, and availability of PPE. Mentorship is ongoing in moderate- and lower-risk facilities in Kasese district.
Funding: None
Disclosures: Mohammed Lamorde reports contract research for Janssen Pharmaceutica, ViiV, Mylan.
The principal aim of this study was to optimize the diagnosis of canine neuroangiostrongyliasis (NA). In total, 92 cases were seen between 2010 and 2020. Dogs were aged from 7 weeks to 14 years (median 5 months), with 73/90 (81%) younger than 6 months and 1.7 times as many males as females. The disease became more common over the study period. Most cases (86%) were seen between March and July. Cerebrospinal fluid (CSF) was obtained from the cisterna magna in 77 dogs, the lumbar cistern in 5, and both sites in 3. Nucleated cell counts for 84 specimens ranged from 1 to 146,150 cells μL⁻¹ (median 4,500). Percentage eosinophils varied from 0 to 98% (median 83%). When both cisternal and lumbar CSF were collected, inflammation was more severe caudally. Seventy-three CSF specimens were subjected to enzyme-linked immunosorbent assay (ELISA) testing for antibodies against Angiostrongylus cantonensis; 61 (84%) tested positive, with titres ranging from <100 to ≥12,800 (median 1,600). Sixty-one CSF specimens were subjected to real-time quantitative polymerase chain reaction (qPCR) testing using a new protocol targeting a bioinformatically informed repetitive genetic target; 53/61 samples (87%) tested positive, with Ct values ranging from 23.4 to 39.5 (median 30.0). For 57 dogs, it was possible to compare CSF ELISA serology and qPCR: both were positive in 40 dogs, in 5 dogs the ELISA was positive while the qPCR was negative, in 9 dogs the qPCR was positive but the ELISA was negative, and in 3 dogs both were negative. NA is an emerging infectious disease of dogs in Sydney, Australia.
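The ELISA/qPCR comparison above forms a 2×2 agreement table. The abstract reports only the raw counts, so the agreement statistics below are an illustrative computation from those counts, not results from the paper:

```python
# Agreement between CSF ELISA and qPCR from the abstract's 2x2 counts
# (illustrative; the paper reports the counts, not these statistics).
both_pos, elisa_only, qpcr_only, both_neg = 40, 5, 9, 3
n = both_pos + elisa_only + qpcr_only + both_neg  # 57 dogs

observed = (both_pos + both_neg) / n
p_elisa = (both_pos + elisa_only) / n   # marginal ELISA-positive rate
p_qpcr = (both_pos + qpcr_only) / n     # marginal qPCR-positive rate
expected = p_elisa * p_qpcr + (1 - p_elisa) * (1 - p_qpcr)
kappa = (observed - expected) / (1 - expected)

print(f"observed agreement {observed:.2f}, Cohen's kappa {kappa:.2f}")
# ~0.75 observed agreement but kappa ~0.16: beyond-chance agreement is modest,
# largely because most dogs test positive on both assays.
```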
Our aim was to develop a brief cognitive behavioural therapy (CBT) protocol to augment treatment for social anxiety disorder (SAD). This protocol focused specifically upon fear of positive evaluation (FPE). To our knowledge, this is the first protocol that has been designed to systematically target FPE.
Aims:
To test the feasibility of a brief (two-session) CBT protocol for FPE and report proof-of-principle data in the form of effect sizes.
Method:
Seven patients with a principal diagnosis of SAD were recruited to participate. Following a pre-treatment assessment, patients were randomized to either (a) an immediate CBT condition (n = 3), or (b) a comparable wait-list (WL) period (2 weeks; n = 4). Two WL patients also completed the CBT protocol following the WL period (delayed CBT condition). Patients completed follow-up assessments 1 week after completing the protocol.
Results:
A total of five patients completed the brief FPE-specific CBT protocol (two of the seven patients were wait-listed only and did not complete delayed CBT); all five provided 1-week follow-up data. CBT patients demonstrated large reductions in FPE-related concerns as well as overall social anxiety symptoms, whereas WL patients demonstrated an increase in FPE-related concerns.
Conclusions:
Our brief FPE-specific CBT protocol is feasible to use and was associated with large FPE-specific and social anxiety symptom reductions. To our knowledge, this is the first treatment report that has focused on systematic treatment of FPE in patients with SAD. Our protocol warrants further controlled evaluation.
The extent to which citizens comply with newly enacted public health measures such as social distancing or lockdowns strongly affects the propagation of the virus and the number of deaths from COVID-19. It is, however, very difficult to identify non-compliance through survey research because claiming to follow the rules is socially desirable. Using three survey experiments, we examine the efficacy of different ‘face-saving’ questions that aim to reduce social desirability bias in the measurement of compliance with public health measures. Our treatments soften the social norm of compliance by way of a short preamble combined with a guilt-free answer choice, making it easier for respondents to admit non-compliance. We find that self-reported non-compliance increases by up to 11 percentage points when a face-saving question is used. Given the current context and the importance of measuring non-compliance, we argue that researchers around the world should adopt our most effective face-saving question.
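Methodologically, the treatment effect here is a difference in proportions between respondents randomized to a direct question and those given a face-saving version. A minimal sketch of that comparison, with hypothetical cell counts (the abstract reports effects of up to 11 points but not the underlying Ns):

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p2 - p1, z, p_value

# Hypothetical counts: 12% admit non-compliance under the direct question,
# 23% under the face-saving question (an ~11-point effect, as in the abstract).
diff, z, p = two_prop_ztest(x1=60, n1=500, x2=115, n2=500)
print(f"effect {diff:+.1%}, z = {z:.2f}, p = {p:.4f}")
```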
The aim of this study was to determine what clinically important events occur in ST-elevation myocardial infarction (STEMI) patients transported for primary percutaneous coronary intervention (PCI) via a primary care paramedic (PCP) crew, and what proportion of such events could only be treated by advanced care paramedic (ACP) protocols.
Methods
We conducted a health record review of STEMI transports by PCP-only crews and those transferred from PCP to ACP crews (ACP-intercept) from 2011 to 2015. A piloted data collection form was used to extract clinically important events, interventions during transport, and mortality.
Results
We identified 214 STEMI bypass cases (118 PCP-only and 96 ACP-intercept). Characteristics were: mean age 61.4 years; 44.4% inferior infarcts; mean response time 6 minutes, 19 seconds; total paramedic contact time 29 minutes, 40 seconds; and, in cases of ACP-intercept, 7 minutes, 46 seconds of PCP-only contact time. A clinically important event occurred in 127 cases (59.3%): SBP <90 mm Hg (26.2%), HR <60 (30.4%), HR >100 (20.6%), arrhythmias (7.5%), altered mental status (6.5%), and airway intervention (2.3%). Two patients (0.9%) arrested; both survived. Of the events identified, 42.5% could be addressed differently by ACP protocols, the majority relating to fluid boluses for hypotension (34.6%). In the ACP-intercept group, ACPs acted on 51.6% of events. There were six (2.8%) in-hospital deaths.
Conclusions
Although clinically important events are common in STEMI bypass patients, fewer than half of these events would be addressed differently under ACP protocols than under PCP protocols. The majority of clinically important events were transient and of limited clinical significance. PCP-only crews can safely transport STEMI patients directly to primary PCI.