Objectives: Studies of neurocognitively elite older adults, termed SuperAgers, have identified clinical predictors and neurobiological indicators of resilience against age-related neurocognitive decline. Despite rising rates of older persons living with HIV (PLWH), SuperAging (SA) in PLWH remains undefined. We aimed to establish neuropsychological criteria for SA in PLWH and to examine clinically relevant correlates of SA. Methods: 734 PLWH and 123 HIV-uninfected participants between 50 and 64 years of age underwent neuropsychological and neuromedical evaluations. SA was defined as demographically corrected (i.e., sex, race/ethnicity, education) global neurocognitive performance within normal range for 25-year-olds. Remaining participants were labeled cognitively normal (CN) or impaired (CI) based on actual age. Chi-square and analysis of variance tests examined HIV group differences in neurocognitive status and demographics. Within PLWH, neurocognitive status groups were compared on HIV disease characteristics, medical comorbidities, and everyday functioning. Multinomial logistic regression explored independent predictors of neurocognitive status. Results: Neurocognitive status rates and demographic characteristics differed between PLWH (SA=17%; CN=38%; CI=45%) and HIV-uninfected participants (SA=35%; CN=55%; CI=11%). In PLWH, neurocognitive groups were comparable on demographic and HIV disease characteristics. Younger age, higher verbal IQ, absence of diabetes, fewer depressive symptoms, and lifetime cannabis use disorder increased the likelihood of SA. SA participants reported greater independence in everyday functioning, higher rates of employment, and better health-related quality of life than non-SA participants. Conclusions: Despite the combined neurological risk of aging and HIV, youthful neurocognitive performance is possible for older PLWH.
SA relates to improved real-world functioning and may be better explained by cognitive reserve and maintenance of cardiometabolic and mental health than HIV disease severity. Future research investigating biomarker and lifestyle (e.g., physical activity) correlates of SA may help identify modifiable neuroprotective factors against HIV-related neurobiological aging. (JINS, 2019, 25, 507–519)
To better understand barriers and facilitators that contribute to antibiotic overuse in long-term care, and to use this information to develop an evidence- and theory-informed program.
Information on barriers and facilitators associated with the assessment and management of urinary tract infections was identified from a mixed-methods survey and from focus groups with stakeholders working in long-term care. Each barrier or facilitator was mapped to corresponding determinants of behavior change, as described by the theoretical domains framework (TDF). The Rx for Change database was used to identify strategies to address the key determinants of behavior change.
In total, 19 distinct barriers and facilitators were mapped to 8 domains from the TDF: knowledge, skills, environmental context and resources, professional role or identity, beliefs about consequences, social influences, emotions, and reinforcements. The assessment of barriers and facilitators informed the need for a multifaceted approach with the inclusion of strategies (1) to establish buy-in for the changes; (2) to align organizational policies and procedures; (3) to provide education and ongoing coaching support to staff; (4) to provide information and education to residents and families; (5) to establish process surveillance with feedback to staff; and (6) to deliver reminders.
The use of a stepped approach was valuable to ensure that locally relevant barriers and facilitators to practice change were addressed in the development of a regional program to help long-term care facilities minimize antibiotic prescribing for asymptomatic bacteriuria. This stepped approach provides considerable opportunity to advance the design and impact of antimicrobial stewardship programs.
The Florida Department of Health in Miami-Dade County (DOH-Miami-Dade) investigated 106 reported carbon monoxide (CO) exposures over a 9-day timeframe after Hurricane Irma. This report evaluates risk factors for CO poisoning and highlights the importance of heightened surveillance following natural disasters.
Data on CO poisoning cases from September 9 to 18, 2017, were extracted from Merlin, the Florida Department of Health Surveillance System. Medical records were obtained and follow-up interviews were conducted to collect data on the confirmed CO poisoning cases. Data were analyzed using SAS v9.4.
Ninety-one of the 106 people exposed to CO met the case definition for CO poisoning: 64 confirmed, 7 probable, and 20 suspect cases. Eighty-eight percent of the affected individuals were evaluated in emergency departments and 11.7% received hyperbaric oxygen treatment. The most frequently reported symptoms included headache (53.3%), dizziness (50.7%), and nausea (46.7%). Three patients died from their exposure to CO.
After Hurricane Irma, DOH-Miami-Dade investigated numerous cases of CO exposure. By understanding who is most likely to be affected by CO and how generator placement affects health, education efforts can be tailored to the populations most at risk, and further CO exposures and related mortality following natural disasters can be reduced. (Disaster Med Public Health Preparedness. 2019;13:94–96)
Patients who experience a Transient Ischaemic Attack (TIA) should be assessed and treated in a specialist clinic to reduce the risk of further TIA or stroke, but referrals are often delayed. We aimed to identify published studies describing pathways for emergency assessment and referral of patients with suspected TIA at first medical contact: primary care; ambulance services; and emergency department.
We conducted a scoping literature review. We searched four databases (PubMed, CINAHL, Web of Science, Scopus). We screened studies for eligibility. We extracted and analysed data to describe setting, assessment and referral processes reported in primary research on referral of suspected TIA patients directly to specialist outpatient services.
We identified eight studies in nine papers from five countries: 1/9 was a randomized trial, 6/9 were before-and-after designs, and 2/9 were descriptive accounts. Five pathways were used by family doctors and three by Emergency Department (ED) physicians. None were used by paramedics. Clinicians identified TIA patients using a checklist incorporating the ABCD2 tool to describe risk of further stroke, an online decision support tool, or clinical judgement. They referred to a specialist clinic, either directly or via a telephone helpline. Anti-platelet medication was often given, usually aspirin unless contraindicated. Some patients underwent neurological and blood tests before referral and discharge. Five studies reported reduced incidence of stroke at 90 days, from a predicted rate of 6–10 percent to an actual rate of 1.2–2.1 percent. Between 44 percent and 83 percent of suspected TIA cases in these studies were directly referred to stroke clinics through the pathways.
Research literature has focused on assessment and referral by family doctors and ED physicians to reduce hospitalization of TIA patients. No pathways for paramedic use were reported. Since many suspected TIA patients present to ambulance services, effective pre-hospital assessment and referral pathways are needed. We will use review results to develop a paramedic referral pathway to test in a feasibility trial.
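The ABCD2 tool mentioned above stratifies early stroke risk after TIA by scoring age, blood pressure, clinical features, symptom duration, and diabetes. A minimal sketch of the published scoring rules follows; it is illustrative only, not a clinical tool, and the function name and parameters are our own:

```python
def abcd2_score(age, systolic_bp, diastolic_bp, unilateral_weakness,
                speech_disturbance, duration_minutes, diabetes):
    """ABCD2 score (0-7) estimating early stroke risk after TIA."""
    score = 0
    if age >= 60:
        score += 1                       # A: age >= 60 years
    if systolic_bp >= 140 or diastolic_bp >= 90:
        score += 1                       # B: blood pressure
    if unilateral_weakness:
        score += 2                       # C: weakness outranks speech
    elif speech_disturbance:
        score += 1
    if duration_minutes >= 60:
        score += 2                       # D: symptom duration
    elif duration_minutes >= 10:
        score += 1
    if diabetes:
        score += 1                       # D: diabetes
    return score

# Example: 72-year-old, BP 150/85, speech disturbance without weakness,
# symptoms for 45 minutes, no diabetes:
print(abcd2_score(72, 150, 85, False, True, 45, False))  # → 4
```

Higher scores indicate higher early stroke risk, which is why the pathways above used the tool to triage referrals.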
Transient Ischaemic Attack (TIA) is a neurologic event with symptom resolution within 24 hours. Early specialist assessment of TIA reduces risk of stroke and death. National United Kingdom (UK) guidelines recommend patients with TIA are seen in specialist clinics within 24 hours (high risk) or seven days (low risk).
We aimed to develop a complex intervention for patients with low risk TIA presenting to the emergency ambulance service. The intervention is being tested in the TIER feasibility trial, in line with Medical Research Council (MRC) guidance on staged development and evaluation of complex interventions.
We conducted three interrelated activities to produce the TIER intervention:
• Survey of UK Ambulance Services (n = 13) to gather information about TIA pathways already in use
• Scoping review of literature describing prehospital care of patients with TIA
• Synthesis of data and definition of the intervention by a specialist panel of paramedics; Emergency Department (ED) and stroke consultants; service users; and ambulance service managers.
The panel used results to define the TIER intervention, to include:
1. Protocol for paramedics to assess patients presenting with TIA and to identify and refer low-risk patients for prompt (<7 days) specialist review at a TIA clinic
2. Patient Group Directive and information pack to allow paramedic administration of aspirin to patients left at home with referral to a TIA clinic
3. Referral process via the ambulance control room
4. Training package for paramedics
5. Agreement with the TIA clinic service provider, including rapid review of referred patients
We followed MRC guidance to develop a clinical intervention for the assessment and referral of low-risk TIA patients attended by emergency ambulance paramedics. We are testing the feasibility of implementing and evaluating this intervention in the TIER feasibility trial, which may lead to a fully powered multicentre randomized controlled trial (RCT) if predefined progression criteria are met.
Objectives: Human immunodeficiency virus (HIV) disproportionately affects Hispanics/Latinos in the United States, yet little is known about neurocognitive impairment (NCI) in this group. We compared the rates of NCI in large well-characterized samples of HIV-infected (HIV+) Latinos and (non-Latino) Whites, and examined HIV-associated NCI among subgroups of Latinos. Methods: Participants included English-speaking HIV+ adults assessed at six U.S. medical centers (194 Latinos, 600 Whites). For the overall group, age: M=42.65 years, SD=8.93; 86% male; education: M=13.17, SD=2.73; 54% had acquired immunodeficiency syndrome. NCI was assessed with a comprehensive test battery with normative corrections for age, education, and gender. Covariates examined included HIV-disease characteristics, comorbidities, and genetic ancestry. Results: Compared with Whites, Latinos had higher rates of global NCI (54% vs. 42%), and domain NCI in executive function, learning, recall, working memory, and processing speed. Latinos also fared worse than Whites on current and historical HIV-disease characteristics, and nadir CD4 partially mediated ethnic differences in NCI. Yet, Latinos continued to have more global NCI [odds ratio (OR)=1.59; 95% confidence interval (CI)=1.13–2.23; p<.01] after adjusting for significant covariates. Higher rates of global NCI were observed with Puerto Rican (n=60; 71%) versus Mexican (n=79; 44%) origin/descent; this disparity persisted in models adjusting for significant covariates (OR=2.40; CI=1.11–5.29; p=.03). Conclusions: HIV+ Latinos, especially of Puerto Rican (vs. Mexican) origin/descent, had increased rates of NCI compared with Whites. Differences in rates of NCI were not completely explained by worse HIV-disease characteristics, neurocognitive comorbidities, or genetic ancestry.
Future studies should explore culturally relevant psychosocial, biomedical, and genetic factors that might explain these disparities and inform the development of targeted interventions. (JINS, 2018, 24, 163–175)
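The adjusted associations above are reported as odds ratios with 95% confidence intervals (e.g., OR=1.59; CI=1.13–2.23). As a sketch of how such an interval is typically derived from a 2×2 table via the log-odds-ratio standard error, using hypothetical counts rather than the study's actual data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (not the study's data):
or_, lo, hi = odds_ratio_ci(105, 89, 239, 322)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # → OR = 1.59 (95% CI 1.14-2.21)
```

The abstract's adjusted ORs come from multivariable logistic regression, where the same log-scale interval is built from the model's coefficient standard errors rather than raw cell counts.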
There are few reported efforts to define universal disaster response performance measures. Careful examination of responses to past disasters can inform the development of such measures. As a first step toward this goal, we conducted a literature review to identify key factors in responses to 3 recent events with significant loss of human life and economic impact: the 2003 Bam, Iran, earthquake; the 2004 Indian Ocean tsunami; and the 2010 Haiti earthquake. Using the PubMed (National Library of Medicine, Bethesda, MD) database, we identified 710 articles and retained 124 after applying inclusion and exclusion criteria. Seventy-two articles pertained to the Haiti earthquake, 38 to the Indian Ocean tsunami, and 14 to the Bam earthquake. On the basis of this review, we developed an organizational framework for disaster response performance measurement with 5 key disaster response categories: (1) personnel, (2) supplies and equipment, (3) transportation, (4) timeliness and efficiency, and (5) interagency cooperation. Under each of these, and again informed by the literature, we identified subcategories and specific items that could be developed into standardized performance measures. The validity and comprehensiveness of these measures can be tested by applying them to other recent and future disaster responses, after which standardized performance measures can be developed through a consensus process. (Disaster Med Public Health Preparedness. 2017;11:505–509)
Transparent evidence-based decision making has been promoted worldwide to engender trust in science and policy making. Yet, little attention has been given to transparency implementation. The degree of transparency (focused on how uncertain evidence was handled) during the development of folate and vitamin D Dietary Reference Values was explored in three a priori defined areas: (i) value request; (ii) evidence evaluation; and (iii) final values.
Qualitative case studies (semi-structured interviews and desk research). A common protocol was used for data collection, interview thematic analysis and reporting. Results were coordinated via cross-case synthesis.
Australia and New Zealand, the Netherlands, the Nordic countries, Poland, Spain, and the UK.
Twenty-one interviews were conducted in six case studies.
Transparency of process was not universally observed across countries or areas of the recommendation setting process. Transparency practices were most commonly seen surrounding the request to develop reference values (e.g. access to risk manager/assessor problem formulation discussions) and evidence evaluation (e.g. disclosure of risk assessor data sourcing/evaluation protocols). Fewer transparency practices were observed to assist with handling uncertainty in the evidence base during the development of quantitative reference values.
Implementation of transparency policies may be limited by a lack of dedicated resources and best practice procedures, particularly to assist with the latter stages of reference value development. Challenges remain regarding the best practice for transparently communicating the influence of uncertain evidence on the final reference values. Resolving this issue may assist the evolution of nutrition risk assessment and better inform the recommendation setting process.
Children in foster care have often encountered a range of adverse experiences, including neglectful and/or abusive care and multiple caregiver transitions. Prior research findings suggest that such experiences negatively affect inhibitory control and the underlying neural circuitry. In the current study, event-related functional magnetic resonance imaging was employed during a go/no go task that assesses inhibitory control to compare the behavioral performance and brain activation of foster children and nonmaltreated children. The sample included two groups of 9- to 12-year-old children: 11 maltreated foster children and 11 nonmaltreated children living with their biological parents. There were no significant group differences on behavioral performance on the task. In contrast, patterns of brain activation differed by group. The nonmaltreated children demonstrated stronger activation than did the foster children across several regions, including the right anterior cingulate cortex, the middle frontal gyrus, and the right lingual gyrus, during correct no go trials, whereas the foster children displayed stronger activation than the nonmaltreated children in the left inferior parietal lobule and the right superior occipital cortex, including the lingual gyrus and cuneus, during incorrect no go trials. These results provide preliminary evidence that the early adversity experienced by foster children impacts the neural substrates of inhibitory control.
Background: Substantial epidemiological research has shown that psychotic experiences are more common in densely populated areas. Many patients with persecutory delusions find it difficult to enter busy social urban settings. The stress and anxiety caused by being outside lead many patients to remain indoors. We therefore developed a brief CBT intervention, based upon a formulation of the way urban environments cause stress and anxiety, to help patients with paranoid thoughts to feel less distressed when outside in busy streets. Aims: The aim was to pilot the new intervention for feasibility and acceptability and gather preliminary outcome data. Method: Fifteen patients with persecutory delusions in the context of a schizophrenia diagnosis took part. All patients first went outside to test their reactions, received the intervention, and then went outside again. Results: The intervention was considered useful by the patients. There was evidence that going outside after the intervention led to less paranoid responses than the initial exposure, but this was only statistically significant for levels of distress. Conclusions: Initial evidence was obtained that a brief CBT module specifically focused on helping patients with paranoia go outside is feasible, acceptable, and may have clinical benefits. However, it could not be determined from this small feasibility study that any observed improvements were due to the CBT intervention. Challenges in this area and future work required are outlined.
Acute hepatitis B virus (HBV) infections have been reported in long-term care facilities (LTCFs), primarily associated with infection control breaks during assisted blood glucose monitoring. We investigated HBV outbreaks that occurred in separate skilled nursing facilities (SNFs) to determine factors associated with transmission.
Outbreak investigation with case-control studies.
Two SNFs (facilities A and B) in Durham, North Carolina, during 2009–2010.
Residents with acute HBV infection and controls randomly selected from HBV-susceptible residents during the outbreak period.
After initial cases were identified, screening was offered to all residents, with repeat testing 3 months later for HBV-susceptible residents. Molecular testing was performed to assess viral relatedness. Infection control practices were observed. Case-control studies were conducted to evaluate associations between exposures and acute HBV infection in each facility.
Six acute HBV cases were identified in each SNF. Viral phylogenetic analysis revealed a high degree of HBV relatedness within, but not between, facilities. No evaluated exposures were significantly associated with acute HBV infection in facility A; those associated with infection in facility B (all odds ratios >20) included injections, hospital or emergency room visits, and daily blood glucose monitoring. Observations revealed absence of trained infection control staff at facility A and suboptimal hand hygiene practices during blood glucose monitoring and insulin injections at facility B.
These outbreaks underscore the vulnerability of LTCF residents to acute HBV infection, the importance of surveillance and prompt investigation of incident cases, and the need for improved infection control education to prevent transmission.
The focus of this paper is on the assessment of the two main processes that children must acquire at the single word reading level: word recognition (lexical) and decoding (nonlexical) skills. Guided by the framework of the dual route model, this study aimed to (1) investigate the impact of item characteristics on test performance, and (2) determine to what extent widely used reading measures vary in their detection of lexical and nonlexical reading difficulties. Thirty children with reading difficulties were administered selected reading subtests from the Woodcock-Johnson III, the Wechsler Individual Achievement Test – Second Edition, the Castles and Coltheart Reading Test 2 (CC2), as well as a measure of nonverbal IQ. Both within-subjects analyses and descriptive data are presented. Results suggest that in comparison to a pure measure of irregular word reading, children with reading difficulties perform better on word identification subtests containing both regular and irregular word items. Furthermore, certain characteristics (e.g., length, similarity to real words) appear to influence the level of difficulty of nonword items and tests. The CC2 subscales identified the largest proportions of children with reading difficulties. Differences between all test scores were of statistical and clinical significance. Clinical and theoretical implications are discussed.
The common lawyer's understanding of land still hovers between a purely material conception of the physical stuff of land and a more cerebral image of land as comprising a co-ordinated set of abstract entitlements. This underlying tension between the physical and the conceptual has imparted a multi-dimensional complexity to the term […]. Kevin Gray and Susan Gray
What is a building? To begin to extrapolate our reasons for posing this question, consider the context of ‘Further Reading’, and the supplementary documents this supposes. To those whose profession is dedicated to the design of buildings, and whose professional life is, all too often, beleaguered by impediments threatening the actualisation, the construction, of their design, supplementary documents are not simply footnotes to design but evidence of struggles over design control. Within this trajectory, law is, most frequently, figured as yet another problematic. But consider a rather different trope – one which uses aspects of legal thinking, or rather thinking through law, to re-engage with the idea of ‘a building’ and of ‘design’. Supplementary documents, in this sense, become a ‘supplement’ which we could characterise as an ‘excess’. Documents which before might have been marginalised now become reconstituted as artefacts, evidencing other accounts of the processes of architecture. Legal artefacts related to design and building, which evidence a series of techniques and strategies for diagramming rights and responsibilities, may be deployed to open new perspectives on the question of ‘What is a building?’
Mandatory reporting of healthcare-associated infections is common, but underreporting by hospitals limits meaningful interpretation.
To validate mandatory intensive care unit (ICU) central line–associated bloodstream infection (CLABSI) reporting by Oregon hospitals.
Blinded comparison of ICU CLABSI determination by hospitals and health department–based external reviewers with group adjudication.
Forty-four Oregon hospitals required by state law to report ICU CLABSIs.
Seventy-six patients with ICU CLABSIs and a systematic sample of 741 other patients with ICU-related bacteremia episodes.
External reviewers examined medical records and determined CLABSI status. All cases with CLABSI determinations discordant from hospital reporting were adjudicated through formal discussion with hospital staff, a process novel to validation of CLABSI reporting.
Hospital representatives and external reviewers agreed on CLABSI status in 782 (96%) of 817 bacteremia episodes (κ = 0.77 [95% confidence interval (CI), 0.70–0.84]). Among the 27 episodes identified as CLABSIs by external reviewers but not reported by hospitals, the final status was CLABSI in 16 (59%). The measured sensitivities of hospital ICU CLABSI reporting were 72% (95% CI, 62%–81%) with adjudicated CLABSI determination as the reference standard and 60% (95% CI, 51%–69%) with external review alone as the reference standard (P = .07). Validation increased the statewide ICU CLABSI rate from 1.21 (95% CI, 0.95–1.51) to 1.54 (95% CI, 1.25–1.88) CLABSIs/1,000 central line–days; ICU CLABSI rates increased by more than 1.00 CLABSI/1,000 central line–days in 6 (14%) hospitals.
Validating hospital CLABSI reporting improves accuracy of hospital-based CLABSI surveillance. Discussing discordant findings improves the quality of validation.
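Two of the statistics reported above, Cohen's kappa for agreement between hospitals and external reviewers, and the CLABSI rate per 1,000 central line-days, can be sketched as follows. The counts are hypothetical, not the investigation's raw data, and the function names are our own:

```python
def cohens_kappa(both_yes, rev_only, hosp_only, both_no):
    """Cohen's kappa for two raters making a yes/no call on each case."""
    n = both_yes + rev_only + hosp_only + both_no
    po = (both_yes + both_no) / n                  # observed agreement
    hosp_yes = both_yes + hosp_only                # hospital 'yes' marginal
    rev_yes = both_yes + rev_only                  # reviewer 'yes' marginal
    pe = (hosp_yes * rev_yes + (n - hosp_yes) * (n - rev_yes)) / n**2
    return (po - pe) / (1 - pe)                    # agreement beyond chance

def clabsi_rate(cases, central_line_days):
    """CLABSIs per 1,000 central line-days."""
    return 1000 * cases / central_line_days

# Hypothetical counts (not the investigation's data):
print(round(cohens_kappa(65, 27, 8, 717), 2))   # → 0.76
print(round(clabsi_rate(92, 59_700), 2))        # → 1.54
```

Kappa discounts the agreement expected by chance from the marginal "yes" rates, which is why it is preferred over raw percent agreement when most episodes are non-CLABSIs.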