Studies have demonstrated the efficacy of mechanical devices at delivering high-quality cardiopulmonary resuscitation (HQ-CPR) in various transport settings. This study investigates the efficacy of manual and mechanical HQ-CPR delivery on a fire rescue boat.
Methods:
A total of 15 active firefighter-paramedics were recruited for a prospective manikin-based trial. Each paramedic performed two minutes of manual compression-only CPR while navigating on a river-based fire rescue boat. The boat was piloted in either a stable linear manner or a dynamic S-turn manner to simulate obstacle avoidance. For each session of manual HQ-CPR, a session of mechanical HQ-CPR was also performed with a LUCAS 3 (Stryker; Kalamazoo, Michigan USA). A total of 60 sessions were completed. Parameters recorded included compression fraction (CF) and the percentage of compressions with correct depth >5cm (D%), correct rate of 100-120 per minute (R%), full release (FR%), and correct hand position (HP%). A composite HQ-CPR score was calculated as follows: ((D% + R% + FR% + HP%)/4) * CF. Differences in the magnitude of change seen in stable versus dynamic navigation within study conditions were evaluated with a Z-score calculation. Difficulty of HQ-CPR delivery was assessed using the Borg Rating of Perceived Exertion Scale.
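As a concrete illustration of the composite scoring formula above, the calculation can be sketched as follows (the session values are hypothetical, not study data):

```python
# Composite HQ-CPR score: average the four quality percentages,
# then weight by the compression fraction (CF, as a percentage).
def composite_hq_cpr(d_pct, r_pct, fr_pct, hp_pct, cf_pct):
    return ((d_pct + r_pct + fr_pct + hp_pct) / 4) * (cf_pct / 100)

# Hypothetical session: 90% correct depth, 85% correct rate,
# 95% full release, 100% correct hand position, 80% compression fraction.
print(composite_hq_cpr(90, 85, 95, 100, 80))  # → 74.0
```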
Results:
Participants were mostly male, with a median of 20 years of experience. Manual HQ-CPR delivered during stable navigation out-performed manual HQ-CPR delivered during dynamic navigation for composite score and trended towards superiority for FR% and R%. There was no difference for any measured variable when comparing mechanical HQ-CPR delivered during stable versus dynamic navigation. Mechanical HQ-CPR out-performed manual HQ-CPR during both stable and dynamic navigation in terms of composite score, FR%, and R%. The Z-score calculation demonstrated that manual HQ-CPR delivery was significantly more affected by drive style than mechanical HQ-CPR delivery in terms of composite HQ-CPR score and trended towards significance for FR% and R%. The Borg Rating of Perceived Exertion was higher for manual CPR delivered during dynamic sessions than during stable sessions.
Conclusion:
Mechanical HQ-CPR delivery is superior to manual HQ-CPR delivery during both stable and dynamic riverine navigation. Whereas manual HQ-CPR delivery was worse under dynamic transport conditions than under stable conditions, mechanical HQ-CPR delivery was unaffected by drive style. These findings support routine use of mechanical HQ-CPR devices in the riverine patient transport setting.
Policies that promote conversion of antibiotics from the intravenous to the oral route of administration are considered “low hanging fruit” for hospital antimicrobial stewardship programs. We developed a simple metric based on digestive days of therapy divided by total days of therapy for targeted agents, along with a method for hospital comparisons. External comparisons may help identify opportunities for improving prospective implementation.
Surgeons may offer different treatments for similar conditions on the basis of their compensation mechanism. This study examined differences in surgical practices between salaried and fee-for-service (FFS) surgeons for two common degenerative spine conditions.
Methods:
The study assessed the practices of 63 spine surgeons across eight Canadian provinces (39 FFS and 24 salaried) who performed surgery for two lumbar conditions: stable spinal stenosis and degenerative spondylolisthesis. The study included a multicenter, ambispective review of consecutive spine surgery patients enrolled in the Canadian Spine Outcomes and Research Network registry between October 2012 and July 2018. The primary outcome was the difference in the types of procedures performed between the two groups. Secondary study variables included surgical characteristics, baseline patient factors, and patient-reported outcomes.
Results:
For stable spinal stenosis (n = 2234), salaried surgeons performed significantly fewer uninstrumented fusions (p < 0.05) than FFS surgeons. For degenerative spondylolisthesis (n = 1292), salaried surgeons performed significantly more instrumentation plus interbody fusions (p < 0.05). There were no statistically significant differences in patient-reported outcomes between the two groups.
Conclusions:
Surgeon compensation was associated with different approaches to stable lumbar spinal stenosis and degenerative lumbar spondylolisthesis. Salaried surgeons chose a more conservative approach to spinal stenosis and a more aggressive approach to degenerative spondylolisthesis, which suggests that remuneration is likely a minor determinant of the differences in spinal surgery practice in Canada. Further research is needed to elucidate which variables, other than patient demographics and financial incentives, influence surgical decision-making.
In a prospective cohort of healthcare personnel (HCP), we measured severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) nucleocapsid IgG antibodies after SARS-CoV-2 infection. Among 79 HCP, 68 (86%) were seropositive 14–28 days after their positive PCR test, and 54 (77%) of 70 were seropositive at the 70–180-day follow-up. Most seropositive HCP (95%) experienced an antibody decline by the second visit.
This chapter presents the multi-scale co-creation methodology used in SURE-Farm to involve stakeholders with the aim of assessing the resilience of European farming systems. This methodology yielded a wide range of valuable insights and allowed us to identify convergent and divergent stakeholder perceptions with possible policy implications.
To present an updated version of the ‘Post-acute Level Of Consciousness scale’ (PALOC-s), in accordance with the latest scientific insights.
Methods:
The PALOC-s was developed 20 years ago, within the context of a research project, to follow the development of the level of consciousness of young unconscious patients participating in a rehabilitation program. Since then, understanding of the behavior related to different levels of consciousness has advanced and terminology has changed, creating the need to revise the PALOC-s. While preserving the original description of the eight hierarchical levels of the PALOC-s, we adapted the terminology and grouping of these levels.
Results and conclusion:
This manuscript presents the revised version, the PALOC-sr, which is suitable for use in clinical practice. Validation of this scale is recommended for its optimal use in future (international) research projects.
To determine the impact of various aerosol mitigation interventions and to establish duration of aerosol persistence in a variety of dental clinic configurations.
Methods:
We performed aerosol measurement studies in endodontic, orthodontic, periodontic, pediatric, and general dentistry clinics. We used an optical aerosol spectrometer and wearable particulate matter sensors to measure real-time aerosol concentration from the vantage point of the dentist during routine care in a variety of clinic configurations (eg, open bay, single room, partitioned operatories). We compared the impact of aerosol mitigation strategies (eg, ventilation and high-volume evacuation [HVE]) on the prevalence of particulate matter in the dental clinic environment before, during, and after high-speed drilling, slow-speed drilling, and ultrasonic scaling procedures.
Results:
Conical and ISOVAC HVE were superior to standard-tip evacuation for aerosol-generating procedures. When aerosols were detected in the environment, they were rapidly dispersed within minutes of completing the aerosol-generating procedure. Few aerosols were detected in dental clinics, regardless of configuration, when conical and ISOVAC HVE were used.
Conclusions:
Dentists should consider using conical or ISOVAC HVE rather than standard-tip evacuators to reduce aerosols generated during routine clinical practice. Furthermore, when such effective aerosol mitigation strategies are employed, dentists need not leave dental chairs fallow between patients because aerosols are rapidly dispersed.
Analytical studies of nanoparticles (NPs) are frequently based on large datasets derived from hyperspectral images acquired using scanning transmission electron microscopy. Such datasets require machine learning computational tools to reduce dimensionality and extract relevant information. Principal component analysis (PCA) is a commonly used procedure to reconstruct information and generate a denoised dataset; however, several open questions remain regarding the accuracy and precision of reconstructions. Here, we use experiments and simulations to test the effect of PCA processing on data obtained from AuAg alloy NPs a few nanometers wide with different compositions. This study aims to address the reliability of chemical quantification after PCA processing. Our results show that the PCA treatment mitigates the contribution of Poisson noise and leads to better quantification, indicating that denoised results may be reliable in terms of both uncertainty and accuracy for properly planned experiments. However, the initial data need to be of sufficient quality: these results hold only if the signal-to-noise ratio of the input data exceeds a minimal value, below which random noise biases the PCA reconstructions.
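The PCA denoising step described above can be sketched in a few lines. This is a minimal illustration on simulated spectra with Poisson noise, not the study's actual pipeline; the dataset sizes, component count, and use of scikit-learn's PCA are all assumptions made for the example:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Simulated hyperspectral dataset: 500 pixel spectra, 128 channels,
# generated from 3 underlying component spectra plus Poisson noise.
n_pixels, n_channels, n_true = 500, 128, 3
components = rng.random((n_true, n_channels)) * 50
weights = rng.random((n_pixels, n_true))
clean = weights @ components
noisy = rng.poisson(clean).astype(float)

# Keep only the leading principal components and invert the transform
# to obtain a denoised reconstruction of the dataset.
pca = PCA(n_components=n_true)
denoised = pca.inverse_transform(pca.fit_transform(noisy))

# The reconstruction should lie closer to the noise-free signal
# than the raw counts do.
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
print(mse_denoised < mse_noisy)  # → True
```

Because the simulated signal is low-rank, truncating to the leading components discards most of the Poisson noise while retaining the chemical information, which is the mechanism the abstract relies on.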
The behavioral variant of frontotemporal dementia presents clinical specificities that make early diagnosis difficult in the initial stages, owing to the overlap of symptoms with other psychiatric pathologies. Diagnostic delay places the subject in a state of vulnerability, because treatment will not be adequate and the alteration in psycho-functional capacity can expose them to risk.
Objective
The objective of this research was to describe the importance, at the forensic and health levels, of the neuropsychological evaluation of social cognition in people with behavioral variant frontotemporal dementia, and to correlate the results with the clinical manifestations of the patients.
Materials and Methods
Forty-five patients with behavioral variant frontotemporal dementia were studied with social cognition tests (the Reading the Mind in the Eyes and Faux Pas Tests) and staged with standardized scales (the CDR [Clinical Dementia Rating], GDS [Global Deterioration Scale], and FTD-FRS [Frontotemporal Dementia Rating Scale]). The results were analyzed with descriptive and inferential statistical tests, and current ethical-legal requirements were met: informed consent was obtained, the identity of participants was kept confidential, and the study complied with Good Clinical Practice (GCP), ANMAT provision 6677/10, and the ethical principles derived from the Declaration of Helsinki.
Results
We found a significant prevalence of alterations on social cognition tests, mainly the Faux Pas Test, from the initial stages of the disease, and these alterations correlated with the clinical stage of the patient.
Conclusions
The behavioral variant of frontotemporal dementia is a condition of significant diagnostic complexity in its initial stages, affecting decision-making and the choice of treatment, with consequences for the subject and their environment. Early detection through in-depth assessment of social cognition will provide clinical tools for pharmacological treatment, as well as help establish the subject's capacity, safeguard their rights, and implement the necessary support measures. Alterations on the social cognition tests were confirmed to correlate with clinical stage on the FTD-FRS scale, with the strongest effects on the Faux Pas Test and, secondarily, on the Reading the Mind in the Eyes Test.
We performed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) antinucleocapsid IgG testing on 5,557 healthcare providers and found a seroprevalence of 3.9%. African Americans were more likely to test positive than Whites, and healthcare workers with household exposure and those working on COVID-19 cohorting units were more likely to test positive than their peers.
Dicamba is a synthetic auxin herbicide that may be applied over the top of transgenic dicamba-tolerant crops. The increasing prevalence of herbicide-resistant weeds has resulted in increased reliance on dicamba-based herbicides in soybean production systems. Because of its high volatility, dicamba is prone to off-target movement, and concern therefore exists regarding its drift onto nearby specialty crops. The present study evaluates 12 mid-Atlantic vegetable crop species for sensitivity to sublethal rates of dicamba. Soybean, snap bean, lima bean, tomato, eggplant, bell pepper, cucumber, summer squash, watermelon, pumpkin, sweet basil, lettuce, and kale were grown in a greenhouse and exposed to dicamba at 0, 0.056, 0.11, 0.28, 0.56, 1.12, and 2.24 g ae ha−1, which is, respectively, 0, 1/10,000, 1/5,000, 1/2,000, 1/1,000, 1/500, and 1/250 of the maximum recommended label rate for soybean application (560 g ae ha−1). Vegetable crop injury was evaluated 4 wk after treatment using visual rating methods and leaf deformation index measurements. Overall, snap bean was the most sensitive crop, with dicamba rates as low as 0.11 g ae ha−1 resulting in significantly higher leaf deformation than the nontreated control. Other Fabaceae and Solanaceae species also demonstrated high sensitivity to sublethal rates of dicamba, with rates ranging from 0.28 to 0.56 g ae ha−1 causing higher leaf deformation than the nontreated control. While cucumber, pumpkin, and summer squash showed no or only moderate sensitivity to dicamba, watermelon showed greater sensitivity, with unique symptoms at rates as low as 0.056 g ae ha−1 based on visual evaluation. Within the range of tested rates, sweet basil, lettuce, and kale demonstrated tolerance to dicamba, with no injury observed at the maximum rate of 2.24 g ae ha−1.
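The sublethal rates above follow directly from the 560 g ae ha−1 label rate; a quick arithmetic check (this only restates values from the text):

```python
# Sublethal dicamba rates as fractions of the maximum recommended
# soybean label rate of 560 g ae ha^-1.
label_rate = 560.0  # g ae ha^-1
divisors = [10_000, 5_000, 2_000, 1_000, 500, 250]
rates = [round(label_rate / d, 3) for d in divisors]
print(rates)  # → [0.056, 0.112, 0.28, 0.56, 1.12, 2.24]
```

Note that 1/5,000 of the label rate is 0.112 g ae ha−1, which the abstract reports rounded to 0.11.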
Research on parental level of education, instruction time, and the amount of language practice children receive has enhanced our understanding of how bilingual and multilingual children learn to comprehend text. Guided by the simple view of reading and the interdependence hypothesis, this longitudinal study conducted in Canadian French immersion programs examined the (a) within- and cross-language association between oral language skills and reading comprehension of bilingual English–French and multilingual children and (b) patterns of growth, while controlling for possible influences of parental level of education and methods of instruction on reading achievement. The sample included 150 children tested once at the beginning of Grade 4 (T1) and again at the end of Grade 4 (T2) and in Grade 6 (T3). Individual growth modeling revealed that bilingual and multilingual children showed similar development in oral language and reading skills across the timeframe. Moreover, growth in English and French reading comprehension was associated with within-language variables. English reading comprehension in Grade 4 was also associated with cross-language variables, including French listening comprehension and vocabulary knowledge. Reading development in the second and third language is enhanced in contexts where classroom instruction, as well as social, economic, and educational opportunities to learn, is equivalent for all students.
To determine the incidence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection among healthcare personnel (HCP) and to assess occupational risks for SARS-CoV-2 infection.
Design:
Prospective cohort of healthcare personnel (HCP) followed for 6 months from May through December 2020.
Setting:
Large academic healthcare system including 4 hospitals and affiliated clinics in Atlanta, Georgia.
Participants:
HCP, including those with and without direct patient-care activities, working during the coronavirus disease 2019 (COVID-19) pandemic.
Methods:
Incident SARS-CoV-2 infections were determined through serologic testing for SARS-CoV-2 IgG at enrollment, at 3 months, and at 6 months. HCP completed monthly surveys regarding occupational activities. Multivariable logistic regression was used to identify occupational factors that increased the risk of SARS-CoV-2 infection.
Results:
Of the 304 evaluable HCP who were seronegative at enrollment, 26 (9%) seroconverted for SARS-CoV-2 IgG by 6 months. Overall, 219 participants (73%) self-identified as White race, 119 (40%) were nurses, and 121 (40%) worked on inpatient medical-surgical floors. In a multivariable analysis, HCP who identified as Black race were more likely to seroconvert than HCP who identified as White (odds ratio, 4.5; 95% confidence interval, 1.3–14.2). Increased risk for SARS-CoV-2 infection was not identified for any occupational activity, including spending >50% of a typical shift at a patient’s bedside, working in a COVID-19 unit, or performing or being present for aerosol-generating procedures (AGPs).
Conclusions:
In our study cohort of HCP working in an academic healthcare system, <10% had evidence of SARS-CoV-2 infection over 6 months. No specific occupational activities were identified as increasing risk for SARS-CoV-2 infection.
While comorbidity of clinical high-risk for psychosis (CHR-P) status and social anxiety is well-established, it remains unclear how social anxiety and positive symptoms covary over time in this population. The present study aimed to determine whether there is more than one covariant trajectory of social anxiety and positive symptoms in the North American Prodrome Longitudinal Study (NAPLS 2) cohort and, if so, to test whether the trajectory subgroups differ in terms of genetic and environmental risk factors for psychotic disorders and general functional outcome.
Methods
In total, 764 CHR individuals were evaluated at baseline for social anxiety and psychosis risk symptom severity and followed up every 6 months for 2 years. Application of group-based multi-trajectory modeling discerned three subgroups based on the covariant trajectories of social anxiety and positive symptoms over 2 years.
Results
One of the subgroups showed sustained social anxiety over time despite moderate recovery in positive symptoms, while the other two showed recovery of social anxiety below clinically significant thresholds, along with modest to moderate recovery in positive symptom severity. The trajectory group with sustained social anxiety had poorer long-term global functional outcomes than the other trajectory groups. In addition, compared with the other two trajectory groups, membership in the group with sustained social anxiety was predicted by higher levels of polygenic risk for schizophrenia and environmental stress exposures.
Conclusions
Together, these analyses indicate differential relevance of sustained v. remitting social anxiety symptoms in the CHR-P population, which in turn may carry implications for differential intervention strategies.
South-east Asia is home to exceptional biodiversity, but threats to vertebrate species are disproportionately high in this region. The IUCN Species Survival Commission Asian Species Action Partnership aims to avert species extinctions. Strengthening individual and organizational capacity is key to achieving long-term, sustainable conservation impact, and is a core strategic intervention for the Partnership. To look at the needs and opportunities for developing capacity for species conservation in South-east Asia, we undertook a needs assessment with organizations implementing species conservation within this region. We conducted a review of available training opportunities, mapping them against a list of identified competences needed for species conservation to determine gaps in current training. Our assessments revealed an imbalance in the focus of training opportunities vs the actual competences needed for effective species conservation, and that training opportunities within South-east Asia are limited in number and highly competitive. These findings corroborate other similar reviews, particularly on capacity gaps in the Global South. We discuss the implications of our review and use the findings to generate recommendations.