Neurocognitive testing may advance the goal of predicting near-term suicide risk. The current study examined whether performance on a Go/No-go (GNG) task, and computational modeling to extract latent cognitive variables, could enhance prediction of suicide attempts within the next 90 days among individuals at high risk for suicide.
A total of 136 Veterans at high risk for suicide previously completed a computer-based GNG task requiring rapid responding (Go) to target stimuli while withholding responses (No-go) to infrequent foil stimuli; behavioral variables included false alarms to foils (failure to inhibit) and missed responses to targets. We conducted a secondary analysis of these data, with outcomes defined as actual suicide attempt (ASA), other suicide-related event (OtherSE) such as interrupted/aborted attempt or preparatory behavior, or neither (noSE) within 90 days after GNG testing, to examine whether GNG variables could improve ASA prediction over standard clinical variables. A computational model (the linear ballistic accumulator, LBA) was also applied to elucidate the cognitive mechanisms underlying group differences.
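For readers unfamiliar with the LBA, the following minimal Python sketch illustrates the decision process the model assumes; the drift means, start-point range, threshold, and non-decision time below are illustrative values, not the parameters fitted in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lba_trial(drift_means, drift_sd=0.25, start_max=0.5,
                       threshold=1.0, t0=0.2):
    """Simulate one linear ballistic accumulator (LBA) trial.

    One accumulator per response option; evidence rises linearly from a
    uniform start point at a normally distributed drift rate, and the
    first accumulator to reach threshold determines the response.
    """
    starts = rng.uniform(0.0, start_max, size=len(drift_means))
    drifts = np.maximum(rng.normal(drift_means, drift_sd), 1e-6)
    times = (threshold - starts) / drifts      # time for each to finish
    winner = int(np.argmin(times))
    return winner, t0 + times[winner]          # (choice index, RT in s)

# Illustrative drift means: strong "go" evidence vs. weak "no-go" evidence.
# Lower decisional efficiency to targets would correspond to a lower "go"
# drift mean, yielding slower responses and more missed targets.
choices, rts = zip(*(simulate_lba_trial([1.2, 0.4]) for _ in range(1000)))
print(f"go rate: {np.mean(np.array(choices) == 0):.2f}, "
      f"mean RT: {np.mean(rts):.3f} s")
```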
On GNG, increased miss rate selectively predicted ASA, while increased false alarm rate predicted OtherSE (without ASA) within the 90-day follow-up window. In LBA modeling, ASA (but not OtherSE) was associated with decreases in decisional efficiency to targets, suggesting that differences in the evidence accumulation process were specifically associated with upcoming ASA.
These findings suggest that GNG may improve prediction of near-term suicide risk, with distinct behavioral patterns in those who will attempt suicide within the next 90 days. Computational modeling suggests qualitative differences in cognition in individuals at near-term risk of suicide attempt.
The Passive Surveillance Stroke Severity (PaSSV) Indicator was derived to estimate stroke severity from variables in administrative datasets but has not been externally validated.
We used linked administrative datasets to identify patients with a first hospitalization for acute stroke between 2007 and 2018 in Alberta, Canada, and estimated stroke severity with the PaSSV indicator. Using Cox proportional hazards models, we evaluated the change in hazard ratios and model discrimination for 30-day and 1-year case fatality with and without PaSSV. Similar comparisons were made for 90-day home-time thresholds using logistic regression. We also linked with a clinical registry to obtain the National Institutes of Health Stroke Scale (NIHSS) score and compared estimates from models without stroke severity, with PaSSV, and with the NIHSS.
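As an illustration of the model-comparison step, here is a sketch using the lifelines library in Python; the file name, covariate set, and column names are assumptions for illustration, not the study's actual variables.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis dataset: one row per patient, with follow-up
# capped at 30 days for the 30-day case-fatality model.
df = pd.read_csv("stroke_cohort.csv")

base_cols = ["age", "sex", "comorbidity_index", "comprehensive_centre"]

cph_base = CoxPHFitter().fit(
    df[base_cols + ["time_30d", "died_30d"]],
    duration_col="time_30d", event_col="died_30d")

cph_passv = CoxPHFitter().fit(
    df[base_cols + ["passv_score", "time_30d", "died_30d"]],
    duration_col="time_30d", event_col="died_30d")

# Compare discrimination (Harrell's C) with and without PaSSV, and see
# how the comprehensive-centre hazard ratio shifts after adjustment.
print(cph_base.concordance_index_, cph_passv.concordance_index_)
print(cph_passv.hazard_ratios_["comprehensive_centre"])
```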
There were 28,672 patients with acute stroke in the full sample. Compared with models without stroke severity, adding PaSSV to the 30-day case-fatality models improved model discrimination (C-statistic 0.72 [95% CI 0.71–0.73] to 0.80 [0.79–0.80]). After adjustment for PaSSV, admission to a comprehensive stroke center was associated with lower 30-day case fatality (adjusted hazard ratio changed from 1.03 [0.96–1.10] to 0.72 [0.67–0.77]). In the registry sample (N = 1,328), model discrimination for 30-day case fatality improved with the inclusion of stroke severity. Results were similar for 1-year case fatality and home-time outcomes.
Addition of PaSSV improved model discrimination for case-fatality and home-time outcomes. The validity of PaSSV in two Canadian provinces suggests that it is a useful tool for baseline risk adjustment in acute stroke.
This paper provides a large-scale, per-game analysis of foul ball (FB) injury data at Major League Baseball (MLB) games and provides estimates of injury frequency and severity.
This study’s goal was to quantify and describe the rate and type of FB injuries at MLB games.
This was a retrospective review of medical care reports for patients evaluated by on-site health care providers (HCPs) over a non-contiguous 11-year period (2005-2016). Data were obtained using Freedom of Information Act (FOIA) requests.
Data were received from three US-based MLB stadiums.
The review identified 0.42-0.55 FB injuries per game that were serious enough to warrant presentation at a first aid center. This translated to a rate of 0.13-0.23 patients per 10,000 fans (PPTT). The transport-to-hospital rate (TTHR) was 0.02-0.39. FB injuries frequently required analgesics but were overwhelmingly minor and occurred less often than non-FB traumatic injuries (5.2% versus 42%-49%). However, FB-injured fans were more likely to need higher levels of care and transport to hospital (TH) than fans suffering other traumatic injuries at the ballpark. Contusions and head injuries were common, and FB-injured fans were most often struck in the abdomen, upper extremity, face, or head. FB injuries appeared to increase over time, an increase that aligns with the sudden rise in smartphone popularity in the United States.
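The per-game and per-10,000-fans rates follow from simple ratios; the sketch below shows the arithmetic with made-up counts, not the study's data.

```python
# Illustrative counts only; PPTT = patients per 10,000 fans, TTHR =
# transports to hospital per 10,000 fans.
fb_patients = 45          # hypothetical foul-ball patients in a season
fb_transports = 4         # hypothetical transports to hospital
attendance = 2_500_000    # hypothetical season attendance
games = 81                # home games in an MLB season

print(f"FB injuries per game: {fb_patients / games:.2f}")
print(f"PPTT: {fb_patients / (attendance / 10_000):.2f}")
print(f"TTHR: {fb_transports / (attendance / 10_000):.3f}")
```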
Conclusions and Relevance:
These data suggest that in roughly every two or three MLB games, a foul ball causes a serious enough injury that a fan seeks medical attention. This rate is high enough to warrant attention, but is comparable in frequency to other diagnostic categories. Assessing the risk to fans from FBs remains difficult, but with access to uniform data, researchers could answer persistent questions that would lead to actionable changes and help guide public policy towards safer stadiums.
Patients presenting to hospital with suspected coronavirus disease 2019 (COVID-19), based on clinical symptoms, are routinely placed in a cohort together until polymerase chain reaction (PCR) test results are available. This procedure leads to delays in transfers to definitive areas and high nosocomial transmission rates. FebriDx is a finger-prick point-of-care test (PoCT) that detects an antiviral host response and has a high negative predictive value for COVID-19. We sought to determine the clinical impact of using FebriDx for COVID-19 triage in the emergency department (ED).
We undertook a retrospective observational study evaluating the real-world clinical impact of FebriDx as part of an ED COVID-19 triage algorithm.
Emergency department of a university teaching hospital.
Patients presenting with symptoms suggestive of COVID-19, placed in a cohort in a ‘high-risk’ area, were tested using FebriDx. Patients without a detectable antiviral host response were then moved to a lower-risk area.
Between September 22, 2020, and January 7, 2021, 1,321 patients were tested using FebriDx, and 1,104 (84%) did not have a detectable antiviral host response. Among 1,104 patients, 865 (78%) were moved to a lower-risk area within the ED. The median times spent in a high-risk area were 52 minutes (interquartile range [IQR], 34–92) for FebriDx-negative patients and 203 minutes (IQR, 142–255) for FebriDx-positive patients (difference of −134 minutes; 95% CI, −144 to −122; P < .0001). The negative predictive value of FebriDx for the identification of COVID-19 was 96% (661 of 690; 95% CI, 94%–97%).
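The reported negative predictive value follows directly from the counts given above (661 of 690 FebriDx-negative patients were PCR-negative); a small Python check, with the Wilson method for the confidence interval as an assumption, since the CI method is not stated here:

```python
from statsmodels.stats.proportion import proportion_confint

# 690 FebriDx-negative patients had a PCR result; 661 were PCR-negative.
tn, n_neg = 661, 690
npv = tn / n_neg
# Wilson interval is an assumption about the CI method used.
lo, hi = proportion_confint(tn, n_neg, alpha=0.05, method="wilson")
print(f"NPV = {npv:.1%} (95% CI {lo:.0%}-{hi:.0%})")  # ~96% (94%-97%)
```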
FebriDx improved the triage of patients with suspected COVID-19 and reduced the time that severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) PCR-negative patients spent in a high-risk area alongside SARS-CoV-2–positive patients.
Paramedics received training in point-of-care ultrasound (POCUS) to assess cardiac contractility during management of medical out-of-hospital cardiac arrest (OHCA). The primary outcome was the percentage of adequate POCUS video acquisition and accurate video interpretation during OHCA resuscitations. Secondary outcomes included POCUS impact on patient management and resuscitation protocol adherence.
A prospective, observational cohort study of paramedics was performed following a four-hour training session, which included a didactic lecture and hands-on POCUS instruction. The Prehospital Echocardiogram in Cardiac Arrest (PECA) protocol was developed and integrated into the resuscitation algorithm for medical non-shockable OHCA. The ultrasound (US) images were reviewed by a single POCUS expert investigator to determine the adequacy of the POCUS video acquisition and accuracy of the video interpretation. Change in patient management and resuscitation protocol adherence data, including end-tidal carbon dioxide (EtCO2) monitoring following advanced airway placement, adrenaline administration, and compression pauses under ten seconds, were queried from the prehospital electronic health record (EHR).
Captured images were deemed adequate in 42/49 (85.7%) scans, and paramedic interpretation of sonography was accurate in 43/49 (87.8%) scans. The POCUS results altered patient management in 14/49 (28.6%) cases. Paramedics adhered to EtCO2 monitoring in 36/36 (100.0%) patients with an advanced airway, adrenaline administration for 38/38 (100.0%) patients, and compression pauses under ten seconds for 36/38 (94.7%) patients.
Paramedics were able to accurately obtain and interpret cardiac POCUS videos during medical OHCA while adhering to a resuscitation protocol. These findings suggest that POCUS can be effectively integrated into paramedic protocols for medical OHCA.
We examined the accuracy of International Classification of Diseases, 10th Revision (ICD-10) diagnosis codes within Canadian administrative data in identifying cerebral venous thrombosis (CVT). Of 289 confirmed cases of CVT admitted to our comprehensive stroke center between 2008 and 2018, 239/289 were new diagnoses and 204/239 were acute events, with only 75/204 representing symptomatic CVTs not provoked by trauma or structural processes. Using ICD-10 codes in any position, sensitivity was 39.1% and positive predictive value was 94.2% for patients with current or prior CVT, and 84.0% and 52.5%, respectively, for acute and symptomatic CVTs not provoked by trauma or structural processes.
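The measures follow the standard 2×2 definitions (sensitivity = TP/(TP+FN), PPV = TP/(TP+FP)); the sketch below back-calculates cell counts consistent with the 289 confirmed cases, so treat them as illustrative rather than the study's actual table.

```python
# Hedged sketch of the validation arithmetic; counts are back-calculated
# approximations, not the study's reported 2x2 table.
def sensitivity(tp: int, fn: int) -> float:
    """True positives / all confirmed cases."""
    return tp / (tp + fn)

def ppv(tp: int, fp: int) -> float:
    """True positives / all code-identified cases."""
    return tp / (tp + fp)

# Counts consistent with 289 confirmed current-or-prior CVT cases:
print(f"sensitivity = {sensitivity(tp=113, fn=176):.1%}")  # ~39.1%
print(f"ppv = {ppv(tp=113, fp=7):.1%}")                    # ~94.2%
```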
Collateral status is an indicator of a favorable outcome in stroke. Leptomeningeal collaterals provide alternative routes for brain perfusion following an arterial occlusion or flow-limiting stenosis. Using a large cohort of ischemic stroke patients, we examined the relative contribution of various demographic, laboratory, and clinical variables in explaining variability in collateral status.
Patients with acute ischemic stroke in the anterior circulation were enrolled in a multi-center hospital-based observational study. Intracranial occlusions and collateral status were identified and graded using multiphase computed tomography angiography. Based on the percentage of affected territory filled by collateral supply, collaterals were graded as either poor (0–49%), good (50–99%), or optimal (100%). Between-group differences in demographic, laboratory, and clinical factors were explored using ordinal regression models. Further, we explored the contribution of measured variables in explaining variance in collateral status.
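A hedged sketch of the ordinal-regression step, using statsmodels' OrderedModel in Python, is shown below; the file and column names are assumptions, and the study's own variance-explained measure may differ from anything computed here.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical frame; collateral grade coded ordinally:
# 0 = poor (0-49%), 1 = good (50-99%), 2 = optimal (100%).
df = pd.read_csv("collaterals.csv")
exog_cols = ["male", "wbc_count", "age", "onset_to_ct_min"]

model = OrderedModel(df["collateral_grade"], df[exog_cols], distr="logit")
res = model.fit(method="bfgs", disp=False)

# The first len(exog_cols) parameters are covariate coefficients (the
# remainder parameterize the grade thresholds); exponentiate for ORs.
print(np.exp(res.params[: len(exog_cols)]))
```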
A total of 386 patients with collateral status classified as poor (n = 64), good (n = 125), or optimal (n = 197) were included. Median time from symptom onset to CT was 120 (IQR: 78–246) minutes. In the final multivariable model, male sex (OR 1.9, 95% CI [1.2, 2.9], p = 0.005) and leukocytosis (OR 1.1, 95% CI [1.1, 1.2], p = 0.001) were associated with poor collaterals. Measured variables explained only 44.8–53.0% of the observed between-patient variance in collateral status.
Male sex and leukocytosis are associated with poorer collaterals. Nearly half of the variance in collateral flow remains unexplained and could be in part due to genetic differences.
Endovascular thrombectomy (EVT) has significantly improved outcomes for patients with acute ischemic stroke due to large vessel occlusion. However, despite these advances, more than half of patients remain functionally dependent 3 months after their initial stroke. Anesthetic strategy may influence both the technical success of the procedure and overall outcomes. Conventionally, general anesthesia (GA) has been widely used for neuroendovascular procedures, particularly in the distal intracranial circulation, because the complete absence of movement has been considered imperative for procedural success and for minimizing complications. In contrast, in patients with acute stroke undergoing EVT, the optimal anesthetic strategy is controversial. Nonrandomized studies suggest that GA negatively affects outcomes, whereas more recent anesthesia-specific randomized controlled trials report improved or unchanged outcomes in patients managed with versus without GA, although these findings cannot be generalized to other EVT-capable centers because of a number of limitations. This review addresses potential explanations for these contrasting results, including the effects of different anesthetic strategies on cerebral and systemic hemodynamics, revascularization times, and periprocedural complications.
This chapter takes up a small part of the writings of a group of Muslim intellectuals from China who studied at Al-Azhar University in Cairo in the 1930s and 1940s and worked to think through the connections between China, Islam, the Arab world, and literatures in Chinese and Arabic. Through a close reading of Recollections of Childhood (Tongniande huiyi), Ma Junwu’s translation of the first volume of Taha Husayn’s The Days (al-Ayyām), we see how the Sino-Muslim Azharites provide a valuable historical example and theoretical resource for our own scholarly practice at a time when attempts to go beyond the boundaries of national literatures and languages default all too quickly to monolingual approaches.
We examined the return on investment (ROI) from the Endovascular Reperfusion Alberta (ERA) project, a provincially funded population-wide strategy to improve access to endovascular therapy (EVT), to inform policy regarding sustainability.
We calculated net benefit (NB) as benefit minus cost and ROI as benefit divided by cost. Patients treated with EVT and their controls were identified from the ESCAPE trial. Using provincial administrative databases, we compared their health services utilization (HSU), including inpatient, outpatient, physician, and long-term care services and prescription drugs. This benefit was then extrapolated to the additional number of patients receiving EVT in 2018 and 2019 as a result of ERA implementation. We used three time horizons: short-term (90 days), medium-term (1 year), and long-term (5 years).
EVT was associated with reduced gross HSU costs over all three time horizons. Given that the total costs of ERA were $2.04 million in 2018 ($11,860/patient) and $3.73 million in 2019 ($17,070/patient), NB per patient in 2018 (2019) was estimated at −$7,313 (−$12,524), $54,592 ($49,381), and $47,070 ($41,859) for the short, medium, and long-term time horizons, respectively. Total NB for the province in 2018 (2019) was −$1.26 (−$2.74), $9.40 ($10.78), and $8.11 ($9.14) million; ROI ratios were 0.4 (0.3), 5.6 (3.9), and 5.0 (3.5). The probabilities of ERA being cost saving were 39% (31%), 97% (96%), and 94% (91%) for the short, medium, and long-term time horizons, respectively.
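As a check on the arithmetic, the reported 2018 medium-term figures are mutually consistent under the NB and ROI definitions above; the benefit below is back-calculated from the reported NB, so treat it as illustrative.

```python
# Reported 2018 per-patient inputs (medium-term, 1-year horizon).
cost_per_patient = 11_860      # ERA cost per patient, 2018
nb_medium = 54_592             # reported 1-year net benefit per patient

benefit = cost_per_patient + nb_medium   # since NB = benefit - cost
roi = benefit / cost_per_patient         # ROI = benefit / cost
print(f"implied benefit = ${benefit:,}; ROI = {roi:.1f}")  # ROI ~ 5.6
```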
The ERA program was cost saving in the medium and long-term time horizons. Results emphasized the importance of considering a broad range of HSU and long-term impact to capture the full ROI.
Increased risk donors (IRDs) in paediatric heart transplantation (PHT) have characteristics that may increase the risk of infectious disease transmission despite negative serologic testing. However, the risk of disease transmission is low, and refusing an IRD offer may increase waitlist mortality. We sought to determine the risks of declining an initial IRD organ offer.
Methods and results:
We performed a retrospective analysis of candidates waitlisted for isolated PHT using 2007–2017 United Network for Organ Sharing datasets. Match runs identified candidates receiving IRD offers. Competing-risks analysis was used to determine mortality risk for candidates who declined an initial IRD offer, with stratified Cox regression to estimate the survival benefit associated with accepting initial IRD offers. Overall, 238/1,067 (22.3%) initial IRD offers were accepted. Candidates accepting an IRD offer were younger (7.2 versus 9.8 years, p < 0.001), more often female (50% versus 41%, p = 0.021), more often listed status 1A (75.6% versus 61.9%, p < 0.001), and less likely to require a mechanical bridge to PHT (16% versus 23%, p = 0.036). At 1- and 5-year follow-up, cumulative mortality was significantly lower for candidates who accepted than for those who declined (6% versus 13% 1-year mortality and 15% versus 25% 5-year mortality, p = 0.0033). Declining an IRD offer was associated with an adjusted hazard ratio for mortality of 1.87 (95% CI 1.24–2.81, p < 0.003).
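A hedged sketch of the competing-risks step, using lifelines' Aalen-Johansen estimator in Python; the file name, column names, and event codes are assumptions for illustration, not fields from the UNOS datasets.

```python
import pandas as pd
from lifelines import AalenJohansenFitter

# Hypothetical waitlist frame; transplant is treated as a competing
# event when estimating cumulative incidence of waitlist mortality.
df = pd.read_csv("pht_waitlist.csv")
decliners = df[df["declined_initial_ird"] == 1]

# event codes: 0 = censored, 1 = waitlist death, 2 = transplant
ajf = AalenJohansenFitter()
ajf.fit(decliners["days_on_list"], decliners["event_code"],
        event_of_interest=1)
print(ajf.cumulative_density_.loc[:365].tail(1))  # ~1-year cumulative mortality
```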
IRD organ acceptance is associated with a substantial survival benefit. Increasing acceptance of IRD organs may provide a targetable opportunity to decrease waitlist mortality in PHT.