Much remains unknown about how the 2008 Great Recession, coupled with the ageing of the baby-boomer cohort, has shaped retirement expectations and realised retirement timing across diverse groups of older Americans. Using the Health and Retirement Study (1992–2016), we compared expectations about full-time work at age 62 (reported at ages 51–61) with realised labour force status at age 62. Of the 12,049 respondents, 34 per cent reported no chance of working full time at 62 (a zero probability) and 21 per cent reported it was very likely (a 90–100 per cent probability). Among those reporting no chance of working, there was a 0.111 probability of unmet expectations; among those with high expectations of working, there was a 0.430 probability of unmet expectations. Black and Hispanic Americans were more likely than white Americans to have unmet expectations of both types. Educational attainment was associated with a higher probability of unexpectedly working and a lower probability of unexpectedly not working. Baby-boomers experienced fewer unmet expectations than prior cohorts but more uncertainty about work status at 62. Our findings highlight the unpredictability of retirement timing for significant segments of the US population and the role of the Great Recession in contributing to uncertainty. Given the individual and societal benefits of long working lives, special attention should be paid to the high rates of unexpectedly not working at age 62.
During the COVID-19 pandemic, the antimicrobial stewardship module in our electronic medical record was reconfigured for the management of COVID-19 patients. This change allowed our subspecialist providers to review charts quickly to optimize potential therapy and management during the patient surge.
Ego-boundary disturbance (EBD) in schizophrenia is a unique psychopathological cluster characterized by passivity experiences (involving thoughts, actions, emotions and sensations) attributed by patients to some external agency. Aberrant mirror neuron activation may explain impaired self-monitoring and agency attribution underlying these ‘first rank’ symptoms.
We aim to study mirror neuron activity (MNA) in schizophrenia patients with and without EBD using transcranial magnetic stimulation (TMS).
50 right-handed schizophrenia patients (DSM-IV) were evaluated using the Mini-International Neuropsychiatric Interview and the Positive and Negative Syndrome Scale. They completed a TMS experiment to assess putative premotor MNA. Motor evoked potentials (MEPs) were recorded from the right first dorsal interosseous muscle (FDI) at (a) 120% of resting motor threshold (RMT) and (b) a stimulus intensity set to evoke an MEP of 1-millivolt amplitude (MT1). These were done in three states: actual observation of an action using the FDI, virtual observation (video) of this action, and the resting state. The difference in MEP between the resting and action-observation states formed the measure of MNA.
MNA measured using the MT1 and 120% RMT paradigms for real observation was significantly lower in the 18 patients with EBD (thought broadcast/withdrawal/insertion, made act/impulse/affect and somatic passivity) than in the 32 patients without EBD (t = 2.75, p = 0.009; t = 2.41, p = 0.02, respectively, for the two paradigms). The two groups did not differ in age, gender, education or total symptom scores.
Schizophrenia patients with EBD have lower premotor MNA. This highlights the role of MNA dysfunction in the pathophysiology of this unique and intriguing symptom cluster in schizophrenia.
Cortical inhibition (CI) is a neurophysiological outcome of the interaction between GABA inhibitory interneurons and other excitatory neurons. Transcranial magnetic stimulation (TMS) measures of CI deficits have been documented in both symptomatic and remitted bipolar disorder (BD) suggesting it could be a trait marker. The effects of medications and duration of illness may contribute to these findings.
To study CI in BD by comparing it across early-course medication-naive BD-mania, remitted first-episode mania (FEM) and healthy subjects (HS).
Symptomatic BD subjects with < 3 episodes, currently in mania and medication-naive (n = 27), remitted FEM subjects (n = 27; YMRS < 12 and HDRS < 8) and HS (n = 45), matched for age and gender, were investigated. Resting motor threshold (RMT) and 1-millivolt motor threshold (MT1) were estimated from the right first dorsal interosseous muscle. Paired-pulse TMS measures of short-interval (SICI; 3 ms) and long-interval intracortical inhibition (LICI; 100 ms) were acquired. Group differences in measures of CI were examined using ANOVA.
Symptomatic mania patients had the highest motor thresholds and the maximum LICI, indicating a state of excessive GABA-B neurotransmitter tone. Remitted mania patients had deficits in SICI, indicating reduced GABA-A neurotransmitter tone. Putative changes in GABA-A neurotransmitter system activity with treatment may be investigated in future studies. CI has received less attention in BD than in schizophrenia and is a potential avenue for future research in this area.
Predicting recurrent Clostridium difficile infection (rCDI) remains difficult.
Methods
We employed a retrospective cohort design. Granular electronic medical record (EMR) data had been collected from patients hospitalized at 21 Kaiser Permanente Northern California hospitals. The derivation dataset (2007–2013) included data from 9,386 patients who experienced incident CDI (iCDI) and 1,311 who experienced their first CDI recurrence (rCDI). The validation dataset (2014) included data from 1,865 patients who experienced iCDI and 144 who experienced rCDI. Using multiple techniques, including machine learning, we evaluated more than 150 potential predictors. Our final analyses evaluated three models with varying degrees of complexity and one previously published model.
Despite having a large multicenter cohort and access to granular EMR data (eg, vital signs and laboratory test results), none of the models discriminated well (c statistics, 0.591–0.605), had good calibration, or had good explanatory power.
Our ability to predict rCDI remains limited. Given currently available EMR technology, improvements in prediction will require incorporating new variables because currently available data elements lack adequate explanatory power.
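The discrimination metric quoted above, the c statistic, is the area under the ROC curve: the probability that a randomly chosen patient who recurred is scored higher than one who did not. A minimal rank-based sketch, using illustrative numbers rather than study data:

```python
# Sketch of the c statistic (ROC AUC) via pairwise comparisons.
# Scores and labels below are illustrative, not from the study.
def c_statistic(scores, labels):
    """Probability a random positive case outranks a random negative case."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A model scoring near 0.6, like those evaluated, barely beats chance (0.5).
scores = [0.9, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2, 0.1]
labels = [1,   0,   1,   0,    1,   0,   0,   1]
print(c_statistic(scores, labels))
```

A perfect predictor yields 1.0 and a coin flip 0.5, which is why values of 0.591–0.605 indicate very limited clinical usefulness.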
To investigate the feasibility of a national audit of epistaxis management led and delivered by a multi-region trainee collaborative using a web-based interface to capture patient data.
Six trainee collaboratives across England nominated one site each and worked together to carry out this pilot. An encrypted data capture tool was adapted and installed within the infrastructure of a university secure server. Site-lead feedback was assessed through questionnaires.
Sixty-three patients with epistaxis were admitted over a two-week period. Site leads reported an average of 5 minutes to complete questionnaires and described the tool as easy to use. Data quality was high, with little missing data. Site-lead feedback showed high satisfaction ratings for the project (mean, 4.83 out of 5).
This pilot showed that trainee collaboratives can work together to deliver an audit using an encrypted data capture tool cost-effectively, whilst maintaining the highest levels of data quality.
Most research on interventions to counter stigma and discrimination has focused on short-term outcomes and has been conducted in high-income countries.
To synthesise what is known globally about effective interventions to reduce mental illness-based stigma and discrimination, in relation first to effectiveness in the medium and long term (minimum 4 weeks), and second to interventions in low- and middle-income countries (LMICs).
We searched six databases from 1980 to 2013 and conducted a multi-language Google search for quantitative studies addressing the research questions. Effect sizes were calculated from eligible studies where possible, and narrative syntheses conducted. Subgroup analysis compared interventions with and without social contact.
Eighty studies (n = 422 653) were included in the review. For studies with medium- or long-term follow-up (72, of which 21 had calculable effect sizes), median standardised mean differences were 0.54 for knowledge and −0.26 for stigmatising attitudes. Interventions containing social contact (direct or indirect) were not more effective than those without. The 11 LMIC studies were all from middle-income countries. Effect sizes were rarely calculable for behavioural outcomes or in LMIC studies.
There is modest evidence for the effectiveness of anti-stigma interventions beyond 4 weeks of follow-up in terms of increasing knowledge and reducing stigmatising attitudes. Evidence does not support the view that social contact is the more effective type of intervention for improving attitudes in the medium to long term. Methodologically strong research is needed on which to base decisions on investment in anti-stigma interventions.
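The effect sizes summarised in this review are standardised mean differences (Cohen's d with a pooled standard deviation). A minimal sketch with invented example scores, not data from any included study:

```python
# Sketch of a standardised mean difference (Cohen's d, pooled SD).
# The score lists are hypothetical, for illustration only.
import statistics

def smd(group1, group2):
    """(mean1 - mean2) divided by the pooled sample standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical stigma-knowledge scores, intervention arm vs. control arm.
intervention = [14, 15, 13, 16, 15, 14]
control      = [12, 13, 12, 14, 13, 12]
print(smd(intervention, control))
```

On this scale, the review's median of 0.54 for knowledge is a moderate positive effect, and −0.26 for stigmatising attitudes is a small reduction.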
This white paper identifies knowledge gaps and new challenges in healthcare epidemiology research, assesses the progress made toward addressing research priorities, provides the Society for Healthcare Epidemiology of America (SHEA) Research Committee's recommendations for high-priority research topics, and proposes a road map for making progress toward these goals. It updates the 2010 SHEA Research Committee document, “Charting the Course for the Future of Science in Healthcare Epidemiology: Results of a Survey of the Membership of SHEA,” which called for a national approach to healthcare-associated infections (HAIs) and a prioritized research agenda. This paper highlights recent studies that have advanced our understanding of HAIs, the establishment of the SHEA Research Network as a collaborative infrastructure to address research questions, prevention initiatives at state and national levels, changes in reporting and payment requirements, and new patterns in antimicrobial resistance.
To investigate whether inadequate dose to Point-A necessitates treatment plan changes in a time of computed tomography (CT)-image-guided brachytherapy treatment planning for cervix cancer.
Materials and methods
A total of 125 tandem and ovoid insertions from 25 cervix cancer patients were reviewed. CT-image-based treatment planning was carried out for each insertion. Point-A was identified and the dose documented; however, dose optimisation in each plan was based on covering the target while limiting critical organ doses (PlanTarget). No attempt was made to equate the prescription and Point-A doses. For each insertion, a second hypothetical treatment plan was generated by prescribing dose to Point-A (PlanPoint-A). Plans were inter-compared using dose–volume histogram analyses.
A total of 250 treatment plans were analysed. For the study population, the median cumulative dose at Point-A was 80 Gy (range 70–95 Gy) for PlanTarget compared with 84·25 Gy for PlanPoint-A. Bladder and rectal doses were higher for PlanPoint-A than for PlanTarget (p < 0·0001). Target D90 did not correlate with Point-A dose (p = 0·60).
Depending on applicator geometry, tumour size and patient anatomy, Point-A dose may vary in magnitude compared with prescription dose. Treatment plan modifications purely based on inadequate Point-A dose are unnecessary, as these may result in higher organ-at-risk doses and not necessarily improve target coverage.
The success of central line-associated bloodstream infection (CLABSI) prevention programs in intensive care units (ICUs) has led to the expansion of surveillance at many hospitals. We sought to compare non-ICU CLABSI (nCLABSI) rates with national reports and describe methods of surveillance at several participating US institutions.
Design and Setting.
An electronic survey of several medical centers regarding infection surveillance practices and rate data for non-ICU patients.
Ten tertiary care hospitals.
In March 2011, a survey was sent to 10 medical centers. The survey consisted of 12 questions regarding demographics and CLABSI surveillance methodology for non-ICU patients at each center. Participants were also asked to provide available rate and device utilization data.
Hospitals ranged in size from 238 to 1,400 total beds (median, 815). All hospitals reported using Centers for Disease Control and Prevention (CDC) definitions. Denominators were collected by different means: counting patients with central lines every day (n = 5), indirectly estimating on the basis of electronic orders (n = 4), or another automated method (n = 1). Rates of nCLABSI ranged from 0.2 to 4.2 infections per 1,000 catheter-days (median, 2.5). The national rate reported by the CDC using 2009 data from the National Healthcare Safety Network was 1.14 infections per 1,000 catheter-days.
Only 2 hospitals were below the pooled CLABSI rate for inpatient wards; all others exceeded this rate. Possible explanations include differences in average central line utilization or hospital size, in the impact of certain clinical risk factors notably absent from the definition, and in interpretation and reporting practices. Further investigation is necessary to determine whether the national benchmarks are low or whether the hospitals surveyed here represent a selection of outliers.
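The nCLABSI rates compared above are simple device-day rates. As a sketch of the arithmetic, with illustrative counts rather than figures from the surveyed hospitals:

```python
# Sketch: an nCLABSI rate expressed per 1,000 central line-days.
# The infection and device-day counts are illustrative only.
def clabsi_rate(infections, catheter_days):
    """Infections per 1,000 catheter-days."""
    return infections / catheter_days * 1000

# e.g. 12 infections over 4,800 catheter-days gives 2.5 per 1,000,
# matching the median rate reported by the surveyed hospitals.
print(clabsi_rate(12, 4800))
```

Because the denominator is device-days rather than patient-days, two hospitals with identical infection counts can report very different rates if their central line utilization differs, one of the explanations the authors raise.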
In 1978, 22 staff members of the National Institute of Virology, Pune, India, were given two doses of human diploid cell antirabies vaccine (HDCV) for primary pre-exposure prophylactic immunization; the interval between the two doses being approximately 4 weeks. Eighteen of these 22 vaccinees were given a booster dose 1 year later. All 18 vaccinees developed protective levels of antibody; most of them had antibody levels exceeding 10 IU/ml.
In 1984, 5 years after the booster dose, 11 (79·0%) of 14 vaccinees tested still possessed neutralizing antibody levels ranging from 0·5 IU/ml to 10 IU/ml. Fourteen days after the administration of a booster dose, the antibody levels ranged from 10 to ≥ 100 IU/ml for all except one vaccinee (5·2 IU/ml). These findings demonstrate that the majority of vaccinees retained detectable neutralizing antibody after pre-exposure prophylaxis for as long as 5 years and that a single booster dose thereafter evoked a good antibody response.
Accurate neuropsychological assessment of older individuals from heterogeneous backgrounds is a major challenge. Education, ethnicity, language, and age are associated with scale level differences in test scores, but item level bias might contribute to these differences. We evaluated several strategies for dealing with item and scale level demographic influences on a measure of executive abilities defined by working memory and fluency tasks. We determined the impact of differential item functioning (DIF). We compared composite scoring strategies on the basis of their relationships with volumetric magnetic resonance imaging (MRI) measures of brain structure. Participants were 791 Hispanic, white, and African American older adults. DIF had a salient impact on test scores for 9% of the sample. MRI data were available on a subset of 153 participants. Validity in comparison with structural MRI was higher after scale level adjustment for education, ethnicity/language, and gender, but item level adjustment did not have a major impact on validity. Age adjustment at the scale level had a negative impact on relationships with MRI, most likely because age adjustment removes variance related to age-associated diseases. (JINS, 2008, 14, 746–759.)
High-κ dielectrics based on the oxide of Al were prepared by atomic layer deposition (ALD) on 200-mm p-type Si wafers. Films were deposited directly on clean Si or on 0.5-nm underlayers of rapid thermal oxide or oxynitrides grown in O2 and/or NO ambients. The purpose of the underlayer films is to provide a barrier against atomic diffusion from the crystalline Si to the high-κ dielectric film. Deposited Al-oxide films varied in thickness from 2 to 6 nm. Post-deposition anneals were used to stabilize the ALD oxides. Equivalent SiO2-oxide thickness varied from 1.0 to 3.5 nm. In situ P-doped amorphous-Si films, 160 nm thick, were deposited over the oxides to prepare heavily doped n-type gate electrodes in MOS structures. Samples were rapid thermal annealed in N2 ambient at 800°C for 30 s, or spike annealed at 950, 1000, and 1050°C (nominally zero time at peak temperature). Flat-band voltages (VFB) were determined from C-V measurements on dot patterns. The 800°C anneals were used as a baseline, at which the poly-Si electrodes are crystallized and acquire electrical activation while subjecting the high-κ dielectrics to a low thermal budget. Positive shifts in VFB were observed, relative to a pure SiO2 control, ranging from 0.2 to 0.8 V. Spike annealing reduces the VFB shift for ALD films deposited over underlayer films. The VFB shift and the changes with annealing temperature show systematic dependence on the nitridation of the underlayer.
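The equivalent SiO2-oxide thickness quoted above follows the standard scaling EOT = t_film x (k_SiO2 / k_film). A sketch under nominal permittivity assumptions (k ~ 3.9 for SiO2 and ~ 9 for ALD Al2O3; the paper does not state the k values used):

```python
# Sketch: equivalent oxide thickness (EOT) of a high-k film.
# k values are nominal assumptions, not taken from the paper.
K_SIO2 = 3.9   # relative permittivity of SiO2
K_AL2O3 = 9.0  # typical value for ALD Al2O3

def eot_nm(t_film_nm, k_film, k_sio2=K_SIO2):
    """Thickness of SiO2 with the same capacitance per unit area."""
    return t_film_nm * k_sio2 / k_film

# A ~2.3-nm Al2O3 film would read as roughly 1 nm of SiO2,
# consistent with the low end of the reported 1.0-3.5 nm EOT range.
print(eot_nm(2.3, K_AL2O3))
```

Any interfacial SiO2 or oxynitride underlayer adds its own (near-unity-scaled) thickness in series, which is why even thin 0.5-nm underlayers matter for the total EOT.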
The most commonly performed procedure for treating coronary artery stenosis is percutaneous transluminal coronary angioplasty (PTCA) and, where the vessel lumen is severely narrowed, coronary artery bypass grafting (CABG). In PTCA, regions of atherosclerotic plaque are disrupted and the vessel lumen is increased by inflating a balloon catheter. In CABG, an autologous saphenous vein-to-coronary artery interposition graft is performed in order to bypass occluded regions of epicardial coronary arteries. Both interventions cause varying degrees of vascular damage, and the long-term efficacy of these procedures is limited by a high incidence of neointimal formation and subsequent vascular restenosis (Bach et al. 1994; Bryan & Angelini, 1994).
The endothelium-derived constrictor peptide endothelin-1 (ET-1) (Yanagisawa et al. 1988) also possesses mitogenic activity on vascular smooth muscle cells (Hirata et al. 1989) and has been suggested to play a role in atherosclerosis (Dashwood et al. 1993; Zeiher et al. 1994) and intimal hyperplasia (Dashwood et al. 1993; Douglas et al. 1994).