Next-generation high-power laser facilities are expected to generate hundreds-of-MeV proton beams and operate at multi-Hz repetition rates, presenting opportunities for medical, industrial and scientific applications requiring bright pulses of energetic ions. Characterizing the spectro-spatial profile of these ions at high repetition rates in the harsh radiation environments created by laser–plasma interactions remains challenging but is paramount for further source development. To address this, we present a compact scintillating fiber imaging spectrometer based on the tomographic reconstruction of proton energy deposition in a layered fiber array. Modeling indicates that spatial resolution of approximately 1 mm and energy resolution of less than 10% at proton energies of more than 20 MeV are readily achievable with existing 100 μm diameter fibers. Measurements with a prototype beam-profile monitor using 500 μm diameter fibers demonstrate an active readout that is unaffected by electromagnetic pulses, with sensitivity to doses below 100 Gy. The performance of the full instrument concept is explored with Monte Carlo simulations, which accurately reconstruct a proton beam with a multiple-component spectro-spatial profile.
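The reconstruction step can be illustrated with a toy unfolding calculation: protons in each energy bin deposit a characteristic dose profile across the fiber layers, and a non-negative least-squares solve recovers the incident spectrum from the layered signal. A minimal sketch, assuming an invented Gaussian response matrix rather than the instrument's calibrated response:

```python
# Toy spectrum unfolding: each energy bin deposits most of its dose near its
# range (a crude stand-in for a Bragg curve); the incident spectrum is then
# recovered from the layered signal by non-negative least squares.
import numpy as np
from scipy.optimize import nnls

n_layers, n_bins = 40, 20
depth = np.arange(n_layers)                   # layer index as a proxy for depth
ranges = np.linspace(5, 35, n_bins)           # assumed stopping depth per bin

# Invented Gaussian response matrix (layers x energy bins).
response = np.exp(-0.5 * ((depth[:, None] - ranges[None, :]) / 2.0) ** 2)

true_spectrum = np.zeros(n_bins)
true_spectrum[[5, 14]] = [1.0, 0.6]           # two-component beam
signal = response @ true_spectrum
signal += np.random.default_rng(0).normal(0, 0.01, n_layers)   # readout noise

reconstructed, _ = nnls(response, signal)     # enforce a non-negative spectrum
print(np.round(reconstructed, 2))             # peaks recovered at bins 5 and 14
```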
The expensive-tissue hypothesis (ETH) posited a brain–gut trade-off to explain how humans evolved large, costly brains. Versions of the ETH interrogating gut or other body tissues have been tested in non-human animals, but not in humans. We collected brain and body composition data in 70 South Asian women and used structural equation modelling with instrumental variables, an approach that handles threats to causal inference including measurement error, unmeasured confounding and reverse causality. We tested for a negative, causal effect of the latent construct ‘nutritional investment in brain tissues’ (MRI-derived brain volumes) on the construct ‘nutritional investment in lean body tissues’ (organ volume and skeletal muscle). We also predicted a negative causal effect of the brain latent construct on fat mass. We found negative causal estimates for both brain and lean tissue (−0.41; 95% CI −1.13, 0.23) and brain and fat (−0.56; 95% CI −2.46, 2.28). These results, although inconclusive, are consistent with theory and prior evidence of the brain trading off with lean and fat tissues, and they are an important step in assessing the empirical evidence for the ETH in humans. Analyses using larger datasets, genetic data and causal modelling are required to build on these findings and expand the evidence base.
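As a rough sketch of the instrumental-variable logic underpinning the modelling (not the authors' full structural equation model), two-stage least squares on simulated data recovers a causal effect despite an unmeasured confounder; all names and effect sizes below are invented:

```python
# Two-stage least squares (2SLS) on simulated data: the instrument z affects
# the exposure but not the outcome except through the exposure, which removes
# bias from the unmeasured confounder u.
import numpy as np

rng = np.random.default_rng(1)
n = 70
z = rng.normal(size=n)                            # instrument
u = rng.normal(size=n)                            # unmeasured confounder
brain = 0.8 * z + u + rng.normal(size=n)          # exposure (brain investment)
lean = -0.4 * brain + u + rng.normal(size=n)      # outcome (lean tissue)

# Stage 1: predict the exposure from the instrument.
X1 = np.column_stack([np.ones(n), z])
brain_hat = X1 @ np.linalg.lstsq(X1, brain, rcond=None)[0]

# Stage 2: regress the outcome on the predicted exposure.
X2 = np.column_stack([np.ones(n), brain_hat])
beta = np.linalg.lstsq(X2, lean, rcond=None)[0]
print(f"IV estimate of brain -> lean effect: {beta[1]:.2f}")  # near -0.4
```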
Soybean [Glycine max (L.) Merr.] cultivars that lack resistance to auxin herbicides (i.e., not genetically modified for resistance) have well-documented responses to these herbicides, with yield loss being probable. When a soybean field is injured by auxin herbicides, regulatory authorities often collect a plant sample from that field. This research simulated soybean exposures arising from accidental mixing of incorrect herbicides, tank contamination, or particle drift, and examined whether analytical testing of residues of aminocyclopyrachlor (ACP), aminopyralid, 2,4-D, or dicamba on soybean would be related to the visual injury and yield responses these herbicides produce. ACP and aminopyralid were applied to R1 soybean at 0.1, 1, and 10 g ae ha−1; 2,4-D and dicamba were applied at 1, 10, and 100 g ae ha−1. Visual evaluations and plant sample collections were undertaken at 1, 3, 7, 14, and 21 d after treatment (DAT), and yield was measured. The conservative limits of detection were 5, 10, 5, and 5 ng g−1 fresh weight of soybean for ACP, aminopyralid, 2,4-D, and dicamba, respectively. Many of the plant samples were non-detects, especially at the lower application rates. Herbicide concentrations declined rapidly after application, and many fell below detection limits by 14 DAT. All herbicide treatments caused soybean injury, although the response to 2,4-D was markedly lower than the responses to the other three herbicides. There was no apparent correlation between herbicide concentrations (which declined over time) and observed soybean injury (which increased over time or stayed the same). This research indicates that plant samples should be collected as soon as possible after soybean exposure to auxin herbicides.
Adsorption of Cu2+ and Co2+ by synthetic imogolite, synthetic allophanes with a range of SiO2/Al2O3 ratios, and allophanic clay fractions from volcanic ash soils was measured in an ionic medium of 0.05 M Ca(NO3)2. The effect of pH (and metal concentration) on adsorption was qualitatively similar for the synthetic and natural allophanes, with relatively minor changes in behavior caused by variable SiO2/Al2O3 ratios. Cu and Co were chemisorbed by allophane at pH 5.0–5.5 and 6.9–7.2 (pH values for the 50% adsorption level), respectively, with concomitant release of 1.6–1.9 protons/metal ion adsorbed. Quantitatively, adsorption by imogolite was less than that by the allophanes, presumably because of fewer sites available for chemisorption on the tubular structure of imogolite. Electron spin resonance studies of the imogolite and allophanes revealed that Cu2+ was adsorbed as a monomer on two types of surface sites. The preferred sites were likely adjacent AlOH groups binding Cu2+ by a binuclear mechanism; weaker bonding occurred at isolated AlOH or SiOH groups. These chemisorbed forms of Cu2+ were readily extracted by EDTA, CH3COOH, and metals capable of specific adsorption, but were not exchangeable. In addition, the H2O and/or OH− ligands of chemisorbed Cu2+ were readily displaced by NH3, with the formation of ternary Cu–ammonia–surface complexes.
The negative surface charge of synthetic allophanes with a range of Si/Al ratios decreased and positive charge increased with increasing alumina content at a given pH. The phosphate adsorption capacity also increased with increasing Al content. That this relationship between composition and chemical reactivity was not found for the soil allophanes is attributed to the presence of specifically adsorbed organic or inorganic anions on the natural material. Both synthetic and natural imogolites had a much lower capacity to adsorb phosphate than the allophanes and adsorbed anomalously high amounts of Cl− and ClO4− at high pH. It is proposed that intercalation of salt occurs in imogolite, although electron spin resonance studies using spin probes failed to reveal the trapping of small organic molecules in imogolite tubes. These spin probes in the carboxylated form did, however, suggest an electrostatic retention of carboxylate by imogolite and a more specific adsorption by allophane involving ligand exchange of surface hydroxyl. The results illustrate the inherent differences in charge and surface properties of allophane and imogolite despite the common structural unit which the two minerals incorporate.
High-quality evidence is lacking for the impact on healthcare utilisation of short-stay alternatives to psychiatric inpatient services for people experiencing acute and/or complex mental health crises (known in England as psychiatric decision units [PDUs]). We assessed the extent to which changes in psychiatric hospital and emergency department (ED) activity were explained by implementation of PDUs in England using a quasi-experimental approach.
Methods
We conducted an interrupted time series (ITS) analysis of weekly aggregated data pre- and post-PDU implementation in one rural and two urban sites using segmented regression, adjusting for temporal and seasonal trends. Primary outcomes were changes in the number of voluntary inpatient admissions to (acute) adult psychiatric wards and number of ED adult mental health-related attendances in the 24 months post-PDU implementation compared to that in the 24 months pre-PDU implementation.
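A minimal sketch of this segmented regression on simulated weekly counts (seasonal terms omitted for brevity; all variable names and numbers are illustrative): the `post` coefficient captures the immediate level change at PDU implementation and the `since` coefficient the change in trend.

```python
# Interrupted time series via segmented regression on simulated weekly data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
weeks = np.arange(208)                         # 104 weeks pre, 104 weeks post
post = (weeks >= 104).astype(int)              # indicator for post-PDU period
since = np.where(post == 1, weeks - 104, 0)    # weeks elapsed since PDU launch

# Simulated admissions: baseline trend, a level drop, and a post-PDU slope change.
y = 50 + 0.05 * weeks - 8 * post - 0.15 * since + rng.normal(0, 3, weeks.size)

df = pd.DataFrame({"y": y, "week": weeks, "post": post, "since": since})
fit = smf.ols("y ~ week + post + since", data=df).fit()
print(fit.params)   # 'post' = immediate level change; 'since' = trend change
```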
Results
The two PDUs (one urban and one rural) with longer (average) stays and high staff-to-patient ratios observed post-PDU decreases in the pattern of weekly voluntary psychiatric admissions relative to the pre-PDU trend (Rural: −0.45%/week, 95% confidence interval [CI] = −0.78%, −0.12%; Urban: −0.49%/week, 95% CI = −0.73%, −0.25%); PDU implementation in each was associated with an estimated 35–38% reduction in total voluntary admissions in the post-PDU period. The (urban) PDU with the highest throughput, lowest staff-to-patient ratio and shortest average stay observed a 20% level reduction in mental health-related ED attendances post-PDU (−20.4%, 95% CI = −29.7%, −10.0%), although there was little impact on the long-term trend. Pooled analyses across sites indicated a significant reduction in the number of voluntary admissions following PDU implementation (−16.6%, 95% CI = −23.9%, −8.5%) but no significant (long-term) trend change (−0.20%/week, 95% CI = −0.74%, 0.34%) and no short- (−2.8%, 95% CI = −19.3%, 17.0%) or long-term (0.08%/week, 95% CI = −0.13%, 0.28%) effects on mental health-related ED attendances. Findings were largely unchanged in secondary (ITS) analyses that considered the introduction of other service initiatives in the study period.
Conclusions
The introduction of PDUs was associated with an immediate reduction of voluntary psychiatric inpatient admissions. The extent to which PDUs change long-term trends of voluntary psychiatric admissions or impact on psychiatric presentations at ED may be linked to their configuration. PDUs with a large capacity, short length of stay and low staff-to-patient ratio can positively impact ED mental health presentations, while PDUs with longer length of stay and higher staff-to-patient ratios have potential to reduce voluntary psychiatric admissions over an extended period. Taken as a whole, our analyses suggest that when establishing a PDU, consideration of the primary crisis-care need that underlies the creation of the unit is key.
Therapeutics targeting frontotemporal dementia (FTD) are entering clinical trials. There are challenges to conducting these studies, including the relative rarity of the disease. Remote assessment tools could increase access to clinical research and pave the way for decentralized clinical trials. We developed the ALLFTD Mobile App, a smartphone application that includes assessments of cognition, speech/language, and motor functioning. The objectives were to determine the feasibility and acceptability of collecting remote smartphone data in a multicenter FTD research study and evaluate the reliability and validity of the smartphone cognitive and motor measures.
Participants and Methods:
A diagnostically mixed sample of 207 participants with FTD or from familial FTD kindreds (CDR®+NACC-FTLD=0 [n=91]; CDR®+NACC-FTLD=0.5 [n=39]; CDR®+NACC-FTLD≥1 [n=39]; unknown [n=38]) was asked to remotely complete a battery of tests on their smartphones three times over two weeks. Measures included five executive functioning (EF) tests, an adaptive memory test, and participant experience surveys. A subset completed smartphone tests of balance at home (n=31) and a finger tapping test (FTT) in the clinic (n=11). We analyzed adherence (percentage of available measures that were completed) and user experience. We evaluated Spearman-Brown split-half reliability (100 iterations) using the first available assessment for each participant. We assessed test-retest reliability across all available assessments by estimating intraclass correlation coefficients (ICC). To investigate construct validity, we fit regression models testing the association of the smartphone measures with gold-standard neuropsychological outcomes (UDS3-EF composite [Staffaroni et al., 2021], CVLT3-Brief Form [CVLT3-BF] Immediate Recall, mechanical FTT), measures of disease severity (CDR®+NACC-FTLD Box Score & Progressive Supranuclear Palsy Rating Scale [PSPRS]), and regional gray matter volumes (cognitive tests only).
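As a sketch of the split-half procedure on simulated item scores (not ALLFTD data), each random split correlates two half-test sums and applies the Spearman-Brown step-up formula, averaging over 100 iterations:

```python
# Spearman-Brown split-half reliability over 100 random splits, simulated data.
import numpy as np

rng = np.random.default_rng(3)
n_participants, n_items = 207, 30
ability = rng.normal(size=(n_participants, 1))            # latent trait
items = ability + rng.normal(scale=0.8, size=(n_participants, n_items))

estimates = []
for _ in range(100):                                      # 100 split iterations
    perm = rng.permutation(n_items)
    half_a = items[:, perm[: n_items // 2]].sum(axis=1)
    half_b = items[:, perm[n_items // 2 :]].sum(axis=1)
    r = np.corrcoef(half_a, half_b)[0, 1]
    estimates.append(2 * r / (1 + r))                     # Spearman-Brown step-up
print(f"split-half reliability: {np.mean(estimates):.2f}")
```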
Results:
Participants completed 70% of tasks. Most reported that the instructions were understandable (93%), considered the time commitment acceptable (97%), and were willing to complete additional assessments (98%). Split-half reliability was excellent for the executive functioning tests (rs=0.93-0.99) and good for the memory test (r=0.78). Test-retest reliabilities ranged from acceptable to excellent for the cognitive tasks (ICC: 0.70-0.96) and were excellent for the balance test (ICC=0.97) and good for the FTT (ICC=0.89). Smartphone EF measures were strongly associated with the UDS3-EF composite (β's=0.6-0.8, all p<.001), and the memory test was strongly correlated with total immediate recall on the CVLT3-BF (β=0.7, p<.001). Smartphone FTT was associated with mechanical FTT (β=0.9, p=.02), and greater acceleration on the balance test was associated with more motor features (β=0.6, p=.02). Worse performance on all cognitive tests was associated with greater disease severity (β's=0.5-0.7, all p<.001). Poorer performance on the smartphone EF tasks was associated with smaller frontoparietal/subcortical volume (β's=0.4-0.6, all p<.015), and worse memory scores were associated with smaller hippocampal volume (β=0.5, p<.001).
Conclusions:
These results suggest remote digital data collection of cognitive and motor functioning in FTD research is feasible and acceptable. These findings also support the reliability and validity of unsupervised ALLFTD Mobile App cognitive tests and provide preliminary support for the motor measures, although further study in larger samples is required.
Alterations in cerebral blood flow (CBF) are associated with risk of cognitive decline and Alzheimer’s disease (AD). Although apolipoprotein E (APOE) ε4 and greater vascular risk burden have both been linked to reduced CBF in older adults, less is known about how APOE ε4 status and vascular risk may interact to influence CBF. We aimed to determine whether the effect of vascular risk on CBF varies by gene dose of APOE ε4 alleles (i.e., number of ε4 alleles) in older adults without dementia.
Participants and Methods:
144 older adults without dementia from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) underwent arterial spin labeling (ASL) and T1-weighted MRI, APOE genotyping, fluorodeoxyglucose positron emission tomography (FDG-PET), lumbar puncture, and blood pressure assessment. Vascular risk was assessed using pulse pressure (systolic minus diastolic blood pressure), which is thought to be a proxy for arterial stiffening. Participants were classified by number of APOE ε4 alleles (0 alleles: n=87; 1 allele: n=46; 2 alleles: n=11). CBF was examined in six FreeSurfer-derived a priori regions of interest (ROIs) vulnerable to AD: entorhinal cortex, hippocampus, inferior temporal cortex, inferior parietal cortex, rostral middle frontal gyrus, and medial orbitofrontal cortex. Linear regression models tested the interaction between categorical APOE ε4 dose (0, 1, or 2 alleles) and continuous pulse pressure on CBF in each ROI, adjusting for age, sex, cognitive diagnosis (cognitively unimpaired vs. mild cognitive impairment), antihypertensive medication use, cerebral metabolism (FDG-PET composite), reference CBF region (precentral gyrus), and AD biomarker positivity defined using the ADNI-optimized phosphorylated tau/β-amyloid ratio cut-off of > 0.0251 pg/ml.
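The interaction model can be sketched with statsmodels on simulated data; the column names are placeholders rather than ADNI variables, and most covariates are omitted for brevity:

```python
# APOE e4 dose x pulse pressure interaction on CBF, simulated illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 144
df = pd.DataFrame({
    "pulse_pressure": rng.normal(55, 12, n),
    "apoe_dose": rng.choice([0, 1, 2], size=n, p=[0.60, 0.32, 0.08]),
    "age": rng.normal(73, 6, n),
})
# In this simulation, CBF declines with pulse pressure only at two e4 alleles.
df["cbf"] = (60 - 0.4 * df["pulse_pressure"] * (df["apoe_dose"] == 2)
             + rng.normal(0, 5, n))

fit = smf.ols("cbf ~ pulse_pressure * C(apoe_dose) + age", data=df).fit()
print(fit.summary().tables[1])  # interaction terms test the dose-dependent slope
```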
Results:
A significant pulse pressure × APOE ε4 dose interaction was found on CBF in the entorhinal cortex, hippocampus, and inferior parietal cortex (ps < .005). Among participants with two ε4 alleles, higher pulse pressure was significantly associated with lower CBF (ps < .001). However, among participants with zero or one ε4 allele, there was no significant association between pulse pressure and CBF (ps > .234). No significant pulse pressure × APOE ε4 dose interaction was found in the inferior temporal cortex, rostral middle frontal gyrus, or medial orbitofrontal cortex (ps > .109). Results remained unchanged when additionally controlling for general vascular risk assessed via the modified Hachinski Ischemic Scale.
Conclusions:
These findings demonstrate that the cross-sectional association between pulse pressure and region-specific CBF differs by APOE ε4 dose. In particular, a detrimental effect of elevated pulse pressure on CBF in AD-vulnerable regions was found only among participants with the ε4/ε4 genotype. Our findings suggest that pulse pressure may play a mechanistic role in neurovascular unit dysregulation for those genetically at greater risk for AD. Given that pulse pressure is just one of many potentially modifiable vascular risk factors for AD, future studies should seek to examine how these other factors (e.g., diabetes, high cholesterol) may interact with APOE genotype to affect cerebrovascular dysfunction.
Frequent and remote cognitive assessment may improve sensitivity to subtle cognitive decline associated with preclinical Alzheimer’s disease (AD). The objective of this study was to evaluate the feasibility and acceptability of repeated remote memory assessment in late middle-aged and older adults.
Participants and Methods:
We recruited participants from a longitudinal aging cohort to complete three medial temporal lobe-based memory paradigms (Object-In-Room Recall [ORR], Mnemonic Discrimination for Objects and Scenes [MDT-OS], Complex Scene Recognition [CSR]) using the neotiv application at repeated intervals over one year. Initial telephone calls covered screening, consent, and app download instructions. Participants were assigned 24 remote sessions on a smartphone or tablet and were alerted via push notification when an assignment was ready to complete. Participants were randomly assigned to either (1) complete memory tests every other week or (2) complete memory tests on multiple days within one week every other month. Each remote session lasted approximately 10 minutes and included one memory paradigm and brief usability/acceptability questionnaires, followed by a delayed retrieval session 90 minutes later. Feasibility metrics examined included participation, retention, compliance, and usability/acceptability.
Results:
Of 150 participants recruited, 113 consented and were enrolled into the study (participation rate = 75%). Current retention rate is 75%, with 85/113 currently active (n=73) or completed (n=12). Of the 85 active or completed participants, the mean age is 68.7 (range = 48–82), 64% are women, 70% used a smartphone (30% tablet), 84 are cognitively unimpaired and 1 has mild cognitive impairment. The primary threat to retention was participants consenting into the study but never registering in the app or completing their first scheduled assignment. After enrollment, 130 telephone calls were made by study staff to facilitate registration into the app or to remind participants to complete tasks. 74–80% of participants completed delayed retrieval tasks within 30 minutes of the push notification, but average retrieval time was 125–137 minutes post-learning trials. Regarding acceptability/usability, 94% agreed the application was easy to use, 56% enjoyed completing the mobile memory tests (36% felt neutral), 40% preferred remote mobile memory tests to standard in-person paper-and-pencil tests, and 50% understood the test instructions. 87% felt the frequency of tests assigned was “just right” (13% “too often”) and 90% felt the test length was “just right” (7% “too short,” 3% “too long”). Participants who completed all 24 sessions to date (n=12) all endorsed being “satisfied” or “very satisfied” with the platform and visit schedule, and recommended continued use of this type of cognitive testing.
Conclusions:
Remote memory assessment using smartphones and tablets is feasible and acceptable for cognitively unimpaired late middle-aged and older adults. Follow-up by study staff was needed to ensure adequate retention. Comprehension of instructions and compliance with completing delayed retrieval tasks within the expected timeframe were lower than expected. This feedback will be incorporated into an updated version of the app to improve compliance and retention. Longitudinal data collection is ongoing, and results will be updated with a larger sample and compared across frequency schedule groups.
Prior work on associations between self-reported cognition and objective cognitive performance in Veterans has yielded mixed findings, with some evidence indicating that mild traumatic brain injury (TBI) may not impact the associations between subjective and objective cognition. However, few studies have examined these relationships in both mild and moderate-to-severe TBI, in older Veterans, and within specific cognitive domains. Therefore, we assessed the moderating effect of TBI severity on subjective and objective cognition across multiple cognitive domains.
Participants and Methods:
This study included 246 predominantly male Vietnam-Era Veterans (age M=69.61, SD=4.18, range = 60.87–85.16) who completed neuropsychological testing and symptom questionnaires as part of the Department of Defense-Alzheimer’s Disease Neuroimaging Initiative (DoD-ADNI). Participants were classified as having a history of no TBI (n=81), mild TBI (n=80), or moderate-to-severe TBI (n=85). Neuropsychological composite scores in the domains of memory, attention/executive functioning, and language were included as the outcome variables. The Everyday Cognition (ECog) measure was used to capture subjective cognition; specifically, the ECog domain scores of memory, divided attention, and language were chosen as independent variables to mirror the objective cognitive domains. General linear models, adjusting for age, education, apolipoprotein E ε4 carrier status, pulse pressure, depressive symptom severity, and PTSD symptom severity, tested whether TBI severity moderated the associations of domain-specific subjective and objective cognition.
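A hedged sketch of this moderation test on simulated data: the interaction is assessed by comparing nested models with and without the TBI × subjective-cognition term (most covariates omitted; all names and effects invented):

```python
# Moderation by TBI severity tested with a nested-model F-test, simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(5)
n = 246
df = pd.DataFrame({
    "subj_attention": rng.normal(size=n),                  # ECog-like score
    "tbi": rng.choice(["none", "mild", "mod_severe"], size=n),
    "age": rng.normal(70, 4, n),
})
# Simulated slopes steepen with TBI severity, mimicking the reported pattern.
slope = df["tbi"].map({"none": 0.0, "mild": -0.3, "mod_severe": -0.5})
df["obj_attention"] = slope * df["subj_attention"] + rng.normal(0, 1, n)

reduced = smf.ols("obj_attention ~ subj_attention + C(tbi) + age", df).fit()
full = smf.ols("obj_attention ~ subj_attention * C(tbi) + age", df).fit()
print(anova_lm(reduced, full))   # F-test for the TBI x subjective interaction
```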
Results:
Across the sample, subjective memory was associated with objective memory (β=-.205, 95% CI [-.332, -.078], p=.002) and subjective language was associated with objective language (β=-.267, 95% CI [-.399, -.134], p<.001). However, subjective divided attention was not associated with objective attention/executive functioning (p=.124). The main effect of TBI severity was not associated with any of the objective cognitive domain scores after adjusting for the other variables in the model. The TBI severity × subjective cognition interaction was significant for attention/executive functioning [F(2,234)=5.18, p=.006]. Specifically, relative to Veterans without a TBI, participants with mild TBI (β=-.311, 95% CI [-.620, -.002], p=.048) and moderate-to-severe TBI (β=-.499, 95% CI [-.806, -.193], p=.002) showed stronger negative associations between subjective divided attention and objective attention/executive functioning. TBI severity did not moderate the associations between subjective and objective cognition in the memory or language domains. The pattern of results did not change when the total number of TBIs was included in the models.
Conclusions:
In this DoD-ADNI sample, stronger associations between subjective and objective attention were evident among individuals with mild and moderate-to-severe TBI compared to Veterans without a TBI history. Attention/executive functioning measures (Trails A and B) may be particularly sensitive to detecting subtle cognitive difficulties related to TBI and/or comorbid psychiatric symptoms, which may contribute to these attention-specific findings. The strongest associations were among those with moderate-to-severe TBI, potentially because the extent to which their attention difficulties affect their daily lives is more apparent despite no significant differences in objective attention performance by TBI group. This study highlights the importance of assessing both subjective and objective cognition in older Veterans and the particular relevance of the attention domain within the context of TBI.
Veterans with a history of mild traumatic brain injury (mTBI) often endorse enduring postconcussive symptoms (PCS) including cognitive and neuropsychiatric complaints. However, although several studies have shown associations between these complaints and brain structure and cerebrovascular function, few studies have examined relationships between structural and functional brain alterations and PCS in the context of remote mTBI. We therefore examined whether PCS were associated with cortical thickness and cerebral blood flow (CBF) in a well-characterized sample of Veterans with a history of mTBI.
Participants and Methods:
116 Veterans underwent structural neuroimaging and a clinical interview to obtain detailed TBI history and injury-related information. Participants also completed the following self-report measures: the Neurobehavioral Symptom Inventory (NSI) for ratings of cognitive, emotional, somatic-sensory, and vestibular symptoms, and the Posttraumatic Stress Disorder (PTSD) Checklist for PTSD symptom severity. Regional brain thickness was indexed using FreeSurfer-derived cortical parcellations of frontal and temporal regions of interest (ROIs) including the superior frontal gyrus (SFG), middle frontal gyrus (MFG), inferior frontal gyrus (IFG), orbitofrontal cortex (OFC), anterior cingulate cortex (ACC), medial temporal lobe (MTL), and lateral temporal lobe (LTL). A subset of Veterans (n=50) also underwent multi-phase pseudo-continuous arterial spin labeling (MPPCASL) to obtain resting CBF. T1-weighted structural and MPPCASL scans were co-registered, and CBF estimates were extracted from the seven bilateral ROI parcellations. To assess the relationships between NSI total and subscale scores and ROI thickness and CBF, multiple regression analyses were conducted adjusting for age, sex, and PTSD symptom severity. False Discovery Rate correction was used to adjust for multiple comparisons.
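The FDR step corresponds to the standard Benjamini-Hochberg procedure (assumed here; the abstract does not name the specific method), sketched on illustrative p-values rather than the study's results:

```python
# Benjamini-Hochberg FDR correction across a family of ROI tests.
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.004, 0.012, 0.030, 0.041, 0.180, 0.520]  # illustrative
reject, q_values, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
for p, q, r in zip(p_values, q_values, reject):
    print(f"p={p:.3f} -> q={q:.3f} {'significant' if r else 'n.s.'}")
```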
Results:
NSI total and subscale scores were not associated with cortical thickness in any ROI. However, higher NSI total scores were associated with increased CBF of the SFG (q=.014) and MFG (q=.014). With respect to symptom subscales, higher affective subscale scores were associated with increased CBF of the SFG (q=.001), MFG (q=.001), IFG (q=.039), ACC (q=.026), and LTL (q=.026); higher cognitive subscale scores were associated with increased SFG (q=.014) and MFG CBF (q=.032); and higher vestibular subscale scores were associated with increased ACC CBF (q=.021). NSI somatic-sensory subscale scores were not associated with ROI CBF.
Conclusions:
Results demonstrate that in TBI-susceptible anterior ROIs, alterations in CBF, but not cortical thickness, are associated with postconcussive symptomatology in Veterans with a history of mTBI. Specifically, postconcussive total symptoms as well as affective, cognitive, and vestibular subscale symptoms were linked primarily to CBF of frontal regions. Notably, these results indicate that enduring symptoms in generally younger samples of Veterans with head injury histories may be more closely tied to cerebrovascular function than to changes in brain structure. These findings may provide a neurological basis for the negative clinical outcomes (e.g., enduring PCS and poor quality of life) that are frequently reported by individuals following mTBI. Future work is needed to examine the unique effects of blast exposure as well as associations of repeated injury with brain–behavior relationships.
Medical surge events require effective coordination between multiple partners. Unfortunately, the information technology (IT) systems currently used for information-sharing by emergency responders and managers in the United States are insufficient to coordinate with health care providers, particularly during large-scale regional incidents. The numerous innovations adopted for the COVID-19 response and continuing advances in IT systems for emergency management and health care information-sharing suggest a more promising future. This article describes: (1) several IT systems and data platforms currently used for information-sharing, operational coordination, patient tracking, and resource-sharing between emergency management and health care providers at the regional level in the US; and (2) barriers and opportunities for using these systems and platforms to improve regional health care information-sharing and coordination during a large-scale medical surge event. The article concludes with a statement about the need for a comprehensive landscape analysis of the component systems in this IT ecosystem.
Marine radiocarbon (14C) ages are an important geochronology tool for the understanding of past earthquakes and tsunamis that have impacted the coastline of New Zealand. To advance this field of research, we need an improved understanding of the radiocarbon marine reservoir correction for coastal waters of New Zealand. Here we report 170 new ΔR20 (1900–1950) measurements from around New Zealand made on pre-1950 marine shells and mollusks killed by the 1931 Napier earthquake. The influence of feeding method, living depth and environmental preference on ΔR is evaluated; we find no effect of these factors except for samples living at or around the high tide mark on rocky open coastlines, which tend to have anomalously low ΔR values. We examine how ΔR varies spatially around the New Zealand coastline and identify continuous stretches of coastline with statistically similar ΔR values. We recommend subdividing the New Zealand coast into four regions with different marine reservoir corrections: A: south and western South Island, ΔR20 = −113 ± 33 yr; B: Cook Strait and western North Island, ΔR20 = −171 ± 29 yr; C: northeastern North Island, ΔR20 = −143 ± 18 yr; D: eastern North Island and eastern South Island, ΔR20 = −70 ± 39 yr.
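Pooling ΔR values within a region is conventionally done with an inverse-variance weighted mean; a minimal sketch with invented values (not the paper's 170 measurements), where the reduced chi-square indicates whether the pooled values are statistically similar:

```python
# Inverse-variance weighted mean of regional Delta-R values, invented data.
import numpy as np

delta_r = np.array([-150.0, -178.0, -165.0, -181.0])   # yr, illustrative
sigma = np.array([25.0, 30.0, 28.0, 35.0])             # 1-sigma uncertainties

w = 1.0 / sigma**2
mean = np.sum(w * delta_r) / np.sum(w)                 # pooled Delta-R
err = np.sqrt(1.0 / np.sum(w))                         # pooled uncertainty
chi2_red = np.sum(w * (delta_r - mean) ** 2) / (delta_r.size - 1)
print(f"pooled DR = {mean:.0f} +/- {err:.0f} yr (reduced chi2 = {chi2_red:.2f})")
```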
Institutions matter for postdisaster recovery. Conversely, natural disasters can also alter a society's institutions. Using the synthetic control method, this study examines the effects that Hurricane Katrina (2005) had on formal and informal institutions in Louisiana. As measures of formal institutions, we employ two economic freedom scores corresponding to government employment (GE) (as a share of total employment at the state level) and property tax (PT). These measures serve as proxies for the level of governmental interference in the economy and the protection of private property rights, respectively. To assess the impact on informal institutions, we use state-level social capital data. We find that Hurricane Katrina had lasting impacts on Louisiana's formal institutions. In the post-Katrina period, actual Louisiana had persistently higher economic freedom scores for both GE and PT than the synthetic Louisiana that did not experience the hurricane. These findings imply that the hurricane led to reductions in both property taxes and government employment, indicating a decrease in the relative size of the public sector as a share of the state's economy. On the other hand, we find no impact on our chosen measure of informal institutions.
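The synthetic control idea can be sketched as a constrained least-squares problem: choose non-negative donor-state weights summing to one that best reproduce the treated state's pre-treatment outcome path. The series below are simulated, not the study's data:

```python
# Synthetic control weights via constrained least squares, simulated series.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
t_pre, n_donors = 20, 10
donors_pre = rng.normal(size=(t_pre, n_donors)).cumsum(axis=0)  # donor paths
true_w = np.array([0.5, 0.3, 0.2] + [0.0] * 7)                  # hidden truth
treated_pre = donors_pre @ true_w                               # treated path

def loss(w):
    return np.sum((treated_pre - donors_pre @ w) ** 2)          # pre-period fit

constraints = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
res = minimize(loss, np.full(n_donors, 1.0 / n_donors),
               bounds=[(0.0, 1.0)] * n_donors,
               constraints=constraints, method="SLSQP")
print(np.round(res.x, 2))   # recovered donor weights; post-period gap = effect
```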
We assessed the implementation of telehealth-supported stewardship activities in acute-care units and long-term care (LTC) units in Veterans’ Administration medical centers (VAMCs).
Design:
Before-and-after, quasi-experimental implementation effectiveness study with a baseline period (2019–2020) and an intervention period (2021).
Setting:
The study was conducted in 3 VAMCs without onsite infectious disease (ID) support.
Participants:
The study included inpatient providers at participating sites who prescribe antibiotics.
Intervention:
During 2021, an ID physician met virtually 3 times per week with the stewardship pharmacist at each participating VAMC to review patients on antibiotics in acute-care units and LTC units. Real-time feedback on prescribing antibiotics was given to providers. Additional implementation strategies included stakeholder engagement, education, and quality monitoring.
Methods:
The reach–effectiveness–adoption–implementation–maintenance (RE-AIM) framework was used for program evaluation. The primary outcome of effectiveness was antibiotic days of therapy (DOT) per 1,000 days present aggregated across all 3 sites. An interrupted time-series analysis was performed to compare this rate during the intervention and baseline periods. Electronic surveys, periodic reflections, and semistructured interviews were used to assess other RE-AIM outcomes.
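For concreteness, the primary outcome works out as a simple rate: summed antibiotic days of therapy divided by summed days present, scaled to 1,000. The numbers below are illustrative, not study data:

```python
# Antibiotic days of therapy (DOT) per 1,000 days present, illustrative data.
import pandas as pd

weekly = pd.DataFrame({
    "week": ["2021-W01", "2021-W02", "2021-W03"],
    "dot": [210, 185, 190],               # summed antibiotic days of therapy
    "days_present": [1400, 1350, 1420],   # summed patient-days present
})
weekly["dot_per_1000"] = weekly["dot"] / weekly["days_present"] * 1000
print(weekly)   # e.g., week 1: 210 / 1400 * 1000 = 150 DOT per 1,000 days
```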
Results:
The telehealth program reviewed 502 unique patients and made 681 recommendations to 24 providers; 77% of recommendations were accepted. After program initiation, antibiotic DOT immediately decreased in the LTC units (−30%; P < .01) without a significant immediate change in the acute-care units (+16%; P = .22); thereafter DOT remained stable in both settings. Providers generally appreciated feedback and collaborative discussions.
Conclusions:
The implementation of our telehealth program was associated with reductions in antibiotic use in the LTC units but not in the smaller acute-care units. Overall, providers perceived the intervention as acceptable. Wider implementation of telehealth-supported stewardship activities may achieve reductions in antibiotic use.
We use a mathematical model to investigate the effect of basal topography and ice surface slope on transport and deposition of sediment within a water-filled subglacial channel. In our model, three zones of different behaviour occur. In the zone furthest upstream, variations in basal topography lead to sediment deposition under a wide range of conditions. In this first zone, even very small and gradually varying basal undulations (~5 m amplitude) can lead to the deposition of sediment within a modelled channel. Deposition is concentrated on the downstream gradient of subglacial ridges, and on the upstream gradient of subglacial troughs. The thickness and steepness of the ice sheet has a substantial impact on deposition rates, with shallow ice profiles strongly promoting both the magnitude and extent of sediment deposition. In a second zone, all sediment is transported downstream. Finally, a third zone close to the ice margin is characterised by high rates of sediment deposition. The existence of these zones has implications for esker formation and the dynamics of the subglacial environment.