During a disease outbreak, healthcare workers (HCWs) are essential to treat infected individuals. However, these HCWs are themselves susceptible to contracting the disease. As more HCWs become infected, fewer are available to provide care, and the overall quality of care available to infected individuals declines. This depletion of HCWs may contribute to the epidemic's severity. To examine this issue, we explicitly model declining quality of care in four differential equation-based susceptible–infected–recovered (SIR)-type models with vaccination. We assume that vaccination, recovery and survival rates are affected by the quality of care delivered. We show that explicitly modelling HCWs and accounting for declining quality of care significantly alters model-predicted disease outcomes, specifically case counts and mortality. Models that neglect the decline in quality of care caused by infection of HCWs may significantly underestimate cases and mortality. These models may be useful for informing health policies that differ between HCWs and the general population. Models accounting for declining quality of care may therefore improve the management interventions considered to mitigate the effects of a future outbreak.
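The mechanism described above can be illustrated with a toy compartment model. This is a minimal sketch, not the authors' four models: the parameter values and the functional forms linking quality of care to recovery, mortality and vaccination rates are all hypothetical.

```python
# Toy SIR-type model with vaccination and HCW depletion (illustrative only;
# parameters and quality-of-care functions are assumptions, not the paper's).
# Forward-Euler integration over `days` days.
def simulate(days=200, dt=0.1, beta_h=0.05):
    S, I, R, D = 0.99, 0.01, 0.0, 0.0    # susceptible, infected, recovered, dead
    H = H0 = 1.0                          # available / baseline healthcare workers
    beta, nu = 0.35, 0.01                 # transmission and vaccination rates
    gamma0, mu0 = 0.10, 0.02              # baseline recovery and death rates
    for _ in range(int(days / dt)):
        q = H / H0                        # quality of care: fraction of HCWs left
        gamma = gamma0 * (0.5 + 0.5 * q)  # recovery slows as care declines
        mu = mu0 * (2.0 - q)              # mortality rises as care declines
        dS = -beta * S * I - nu * q * S   # vaccination throughput also needs staff
        dI = beta * S * I - (gamma + mu) * I
        dR = gamma * I + nu * q * S
        dD = mu * I
        dH = -beta_h * I * H              # HCWs lost as infection spreads
        S, I, R, D, H = S + dt*dS, I + dt*dI, R + dt*dR, D + dt*dD, H + dt*dH
    return D                              # cumulative mortality

# Switching off HCW depletion (beta_h = 0) yields lower predicted mortality,
# mirroring the paper's claim that such models underestimate deaths.
d_with = simulate(beta_h=0.05)
d_without = simulate(beta_h=0.0)
```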
We evaluated the safety and feasibility of high-intensity interval training via a novel telemedicine ergometer (MedBIKE™) in children with Fontan physiology.
The MedBIKE™ is a custom telemedicine ergometer incorporating a video game platform and a live feed of patient video/audio, electrocardiography, pulse oximetry, and power output, permitting remote medical supervision and modulation of work. There were three study phases: (I) an exercise workload comparison between the MedBIKE™ and a standard cardiopulmonary exercise ergometer in 10 healthy adults; (II) an in-hospital safety, feasibility, and user experience (via questionnaire) assessment of a MedBIKE™ high-intensity interval training protocol in children with Fontan physiology; and (III) an eight-week home-based high-intensity interval training programme in two participants with Fontan physiology.
There was good agreement in oxygen consumption during graded exercise at matched work rates between the cardiopulmonary exercise ergometer and MedBIKE™ (1.1 ± 0.5 L/minute versus 1.1 ± 0.5 L/minute, p = 0.44). Ten youth with Fontan physiology (11.5 ± 1.8 years old) completed a MedBIKE™ high-intensity interval training session with no adverse events. The participants found the MedBIKE™ to be enjoyable and easy to navigate. In two participants, the 8-week home-based protocol was tolerated well with completion of 23/24 (96%) and 24/24 (100%) of sessions, respectively, and no adverse events across the 47 sessions in total.
The MedBIKE™ resulted in similar physiological responses as compared to a cardiopulmonary exercise test ergometer and the high-intensity interval training protocol was safe, feasible, and enjoyable in youth with Fontan physiology. A randomised-controlled trial of a home-based high-intensity interval training exercise intervention using the MedBIKE™ will next be undertaken.
Hospitalized patients placed in isolation due to a carrier state or infection with resistant or highly communicable organisms report higher rates of anxiety and loneliness and have fewer physician encounters, room entries, and vital sign records. We hypothesized that isolation status might adversely impact patient experience as reported through Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) surveys, particularly regarding communication.
Retrospective analysis of HCAHPS survey results over 5 years.
A 1,165-bed, tertiary-care, academic medical center.
Patients on any type of isolation for at least 50% of their stay were the exposure group. Those never in isolation served as controls.
Multivariable logistic regression, adjusting for age, race, gender, payer, severity of illness, length of stay, and clinical service, was used to examine associations between isolation status and “top-box” experience scores. Dose response to increasing percentage of days in isolation was also analyzed.
Patients in isolation reported worse experience, primarily with staff responsiveness (help toileting 63% vs 51%; adjusted odds ratio [aOR], 0.77; P = .0009) and overall care (rate hospital 80% vs 73%; aOR, 0.78; P < .0001), but they reported similar experience in other domains. No dose-response effect was observed.
Isolated patients do not report adverse experience for most aspects of provider communication regarded to be among the most important elements for safety and quality of care. However, patients in isolation had worse experiences with staff responsiveness for time-sensitive needs. The absence of a dose-response effect suggests that isolation status may be a marker for other factors, such as illness severity. Regardless, hospitals should emphasize timely staff response for this population.
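For intuition on the adjusted odds ratios quoted above, the unadjusted odds ratio for a binary "top-box" outcome can be computed directly from the two group proportions. This is only a sketch: the study's aORs come from multivariable logistic regression with the covariates listed in the methods, which this does not reproduce, and the abstract's "63% vs 51%" is read here as control vs isolated.

```python
# Unadjusted odds ratio for a binary "top-box" outcome from two proportions.
# Illustrative only; the study's adjusted ORs come from multivariable
# logistic regression, which this sketch does not reproduce.
def odds_ratio(p_exposed, p_control):
    return (p_exposed / (1 - p_exposed)) / (p_control / (1 - p_control))

# "Help toileting" top-box: 51% among isolated patients vs 63% among controls
# (reading the abstract's "63% vs 51%" as control vs isolated).
or_toileting = odds_ratio(0.51, 0.63)   # < 1: isolation associated with worse scores
```

The adjusted value (0.77) differs from this unadjusted figure because covariates such as illness severity and length of stay are correlated with both isolation status and experience scores.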
Effects of soil tillage systems and nitrogen (N) fertilizer management on spring wheat yield components, grain yield and N-use efficiency (NUE) were evaluated in contrasting weather of 2013 and 2014 on a clay soil at the Royal Agricultural University's Harnhill Manor Farm, Cirencester, UK. Three tillage systems – conventional plough tillage (CT), high intensity non-inversion tillage (HINiT) and low intensity non-inversion tillage (LINiT) for seedbed preparation – were compared at four rates of N fertilizer (0, 70, 140 and 210 kg N/ha). Responses to the effects of the management practices were strongly influenced by weather conditions and varied across seasons. Grain yields were similar between LINiT and CT in 2013, while CT produced higher yields in 2014. Nitrogen fertilization effects also varied across the years with no significant effects observed on grain yield in 2013, while in 2014 applications up to 140 kg N/ha increased yield. Grain protein ranged from 10·1 to 14·5% and increased with N rate in both years. Nitrogen-use efficiency ranged from 12·6 to 49·1 kg grain per kg N fertilizer and decreased as N fertilization rate increased in both years. There was no tillage effect on NUE in 2013, while in 2014 NUE under CT was similar to LINiT and higher than HINiT. The effect of tillage and N fertilization on soil moisture and soil mineral N (SMN) fluctuated across years. In 2013, LINiT showed significantly higher soil moisture than CT, while soil moisture did not differ between tillage systems in 2014. Conventional tillage had significantly higher SMN at harvest time in 2014, while no significant differences on SMN were observed between tillage systems in 2013. These results indicate that LINiT can be used to produce similar spring wheat yield to CT on this particular soil type, if a dry cropping season is expected. 
Crop response to N fertilization is limited when residual soil N is high, whereas under conditions of lower residual SMN, a higher N supply is needed to increase yield and improve grain protein content.
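Nitrogen-use efficiency in the units reported above (kg grain per kg N fertilizer) is commonly computed as agronomic N-use efficiency. A minimal sketch follows; the abstract gives only the units, not the formula, so this definition and the example yields are assumptions.

```python
# Agronomic N-use efficiency: extra grain produced per kg of N applied.
# This definition is an assumption; the abstract reports only the units
# (kg grain per kg N fertilizer), not the exact formula used.
def nue(yield_fertilized, yield_unfertilized, n_rate):
    """Yields in kg/ha, n_rate in kg N/ha; returns kg grain per kg N."""
    return (yield_fertilized - yield_unfertilized) / n_rate

# Hypothetical example: 6000 vs 4000 kg/ha at 70 kg N/ha.
example = nue(6000.0, 4000.0, 70.0)
```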
This study describes psychometric properties of the NIH Toolbox Cognition Battery (NIHTB-CB) Composite Scores in an adult sample. The NIHTB-CB was designed for use in epidemiologic studies and clinical trials for ages 3 to 85. A total of 268 self-described healthy adults were recruited at four university-based sites, using stratified sampling guidelines to target demographic variability for age (20–85 years), gender, education, and ethnicity. The NIHTB-CB contains seven computer-based instruments assessing five cognitive sub-domains: Language, Executive Function, Episodic Memory, Processing Speed, and Working Memory. Participants completed the NIHTB-CB, corresponding gold standard validation measures selected to tap the same cognitive abilities, and sociodemographic questionnaires. Three Composite Scores were derived for both the NIHTB-CB and gold standard batteries: “Crystallized Cognition Composite,” “Fluid Cognition Composite,” and “Total Cognition Composite” scores. NIHTB Composite Scores showed acceptable internal consistency (Cronbach’s alphas=0.84 Crystallized, 0.83 Fluid, 0.77 Total), excellent test–retest reliability (r: 0.86–0.92), strong convergent (r: 0.78–0.90) and discriminant (r: 0.19–0.39) validities versus gold standard composites, and expected age effects (r=0.18 crystallized, r=−0.68 fluid, r=−0.26 total). Significant relationships with self-reported prior school difficulties and current health status, employment, and presence of a disability provided evidence of external validity. The NIH Toolbox Cognition Battery Composite Scores have excellent reliability and validity, suggesting they can be used effectively in epidemiologic and clinical studies. (JINS, 2014, 20, 1–11)
This study introduces a special series on validity studies of the Cognition Battery (CB) from the U.S. National Institutes of Health Toolbox for the Assessment of Neurological and Behavioral Function (NIHTB) (Gershon, Wagster et al., 2013) in an adult sample. This first study in the series describes the sample, each of the seven instruments in the NIHTB-CB briefly, and the general approach to data analysis. Data are provided on test–retest reliability and practice effects, and raw scores (mean, standard deviation, range) are presented for each instrument and the gold standard instruments used to measure construct validity. Accompanying papers provide details on each instrument, including information about instrument development, psychometric properties, age and education effects on performance, and convergent and discriminant construct validity. One study in the series is devoted to a factor analysis of the NIHTB-CB in adults and another describes the psychometric properties of three composite scores derived from the individual measures representing fluid and crystallized abilities and their combination. The NIHTB-CB is designed to provide a brief, comprehensive, common set of measures to allow comparisons among disparate studies and to improve scientific communication. (JINS, 2014, 20, 1–12)
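The internal-consistency figures quoted for the composite scores (Cronbach's alpha) follow the standard formula relating item variances to the variance of the total score. A minimal sketch, assuming simple sum composites; the item scores below are fabricated placeholders, not NIHTB-CB data.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance
# of the total score). Item scores here are fabricated placeholders.
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)   # sample variance

def cronbach_alpha(items):
    """items: list of k per-item score lists, one score per respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))

# Two perfectly correlated items give a high alpha.
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 4, 6, 8]])
```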
Patterns in radar-detected internal layers in glaciers and ice streams can be tracked hundreds of kilometers downstream. We use distinctive patterns to delineate flowbands of Thwaites Glacier in the Amundsen Sea sector of West Antarctica. Flowbands contain information for the past century to millennium, the approximate time for ice to flow through the study region. GPS-detected flow directions (acquired in 2007/08) agree within uncertainty (~4°) with the radar-detected flowlines, indicating that the flow direction has not changed significantly in recent centuries. In contrast, InSAR-detected directions (from 1996) differ from the radar- and GPS-detected flowlines in all but the middle tributary, indicating caution is needed when using InSAR velocities to define flow directions. There is agreement between all three datasets in the middle tributary. We use two radar-detected flowlines to define a 95 km long flowband and perform a flux balance analysis using InSAR-derived velocities, radar-detected ice thickness, and estimates of the accumulation rate. Inferred thinning of 0.49 ± 0.34 m a⁻¹ is consistent with satellite altimetry measurements, but has higher uncertainty due mainly to the velocity uncertainty. The uncertainty is underestimated because InSAR velocities often differ from GPS velocities by more than the stated uncertainties.
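The flux-balance thinning estimate follows from mass conservation for the flowband: thickness change equals accumulation minus the flux divergence. A sketch with hypothetical numbers; the quadrature combination of uncertainties is an assumed, standard propagation, not necessarily the authors' exact method.

```python
import math

# Flowband flux balance: dH/dt = accumulation - (flux_out - flux_in) / area.
# Numbers below are hypothetical, not the Thwaites values.
def thinning_rate(q_in, q_out, area, accum):
    """q_* in m^3 ice/a, area in m^2, accum in m ice/a; returns m/a."""
    return accum - (q_out - q_in) / area

# Assumed quadrature propagation of independent errors (a standard choice).
def thinning_uncertainty(sq_in, sq_out, area, s_accum):
    return math.sqrt(s_accum**2 + (sq_in / area)**2 + (sq_out / area)**2)

# Hypothetical flowband: 0.5 km^3/a more ice leaves than enters over 10^9 m^2,
# with 0.3 m/a accumulation -> net thinning.
dhdt = thinning_rate(q_in=1.0e9, q_out=1.5e9, area=1.0e9, accum=0.3)
```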
Insurance accounting has for many years proved a challenging topic for standard setters, preparers and users, often described as a “black box”. Will recent developments, in particular the July 2010 Insurance Contracts Exposure Draft, herald a new era?
This paper reviews these developments, setting out key issues and implications. It concentrates on issues relevant to life insurers, although much of the content is also relevant to non-life insurers.
The paper compares certain IFRS and Solvency II developments, recognising that UK insurers face challenges in implementing new financial and regulatory reporting requirements in similar timeframes. The paper considers resulting external disclosure requirements and a possible future role for supplementary information.
Children frequently are the victims of disasters due to natural hazards or terrorist attacks. However, there is a lack of specific pediatric emergency preparedness planning worldwide. To address these gaps, the federal grant-funded New York City Pediatric Disaster Coalition (PDC) established guidelines for creating pediatric critical care (PCC) surge plans and assisted hospitals in creating their plans. To date, five hospitals have completed plans, thereby adding 92 beds to surge capacity. On 01 May 2010 at 18:00 h, there was an attempt to detonate a car bomb in Times Square, a large urban attraction in the heart of New York City. The perpetrator was later convicted of the attempted use of a weapon of mass destruction. Had the bomb exploded, given the location and time of day, it is possible that many critically injured victims would have been children.
The unit director or a senior attending physician at each of nine major hospitals in the NYC area (five in close proximity and four at secondary sites) was surveyed for the number of vacant pediatric critical care beds at the time of the event, before activation of surge plans.
At the time the car bomb was discovered, the nine hospitals, which have a total of 141 PCC beds, had only 29 vacant approved pediatric critical care beds.
Had the event resulted in many pediatric casualties, the existing PCC vacant beds at these hospitals may not have satisfied the need. Activating surge plans at five of these hospitals would have added 92 to the 29 available PCC beds for a total of 121. In order to provide PCC to a large number of victims, it is crucial that hospitals prepare PCC surge plans.
The New York City (NYC) Department of Health and Mental Hygiene (DOHMH) has supported a federal grant establishing a Pediatric Disaster Coalition (PDC) comprised of pediatric critical care (PCC) and emergency preparedness consultants from major city hospitals and health agencies. One of the PDC's goals was to develop recommendations for hospital-based PCC surge plans.
Members of the PDC convened biweekly and, among other projects, developed guidelines for creating PCC surge capacity plans. The PDC members, acting as consultants, conducted scheduled visits to hospitals in NYC and actively assisted in drafting PCC surge plans as annexes to existing hospital disaster plans. The support ranged from facilitating meetings to providing draft language and content, based on each institution's request.
New York City has 25 hospitals with PCC services with a total of 244 beds. Five major hospitals have completed plans, thereby adding 92 PCC beds to surge capacity. Thirteen additional hospitals are in the process of developing a plan. The PDC consultants participated in meetings at 11 of the planning hospitals, and drafted language for 10 institutions. The PDC continues to reach out to all hospitals with the goal of initiating plans at all 25 PCC hospitals.
Providing surge guidelines and utilizing on-site PDC consultants was a successful model for the development and implementation of citywide PCC surge capacity planning. Visiting hospitals and actively assisting them in creating their plans was an effective, efficient, and well-received method of increasing PCC surge capacity. By first planning with major hospitals, a significant increase in surge beds (92, or 38%) was achieved from a minimal number of hospitals. Once all hospitals complete their plans, it is anticipated that at least 200 PCC surge beds will be added that can be incorporated into a regional, city-wide response to a pediatric mass-casualty incident.
There remains a lack of comprehensive pediatric emergency preparedness planning worldwide. A disaster or mass-casualty incident (MCI) involving pediatric patients could overwhelm existing pediatric resources within the New York City (NYC) metropolitan region. The NYC Department of Health and Mental Hygiene (DOHMH), recognizing the importance of planning for an MCI with a large number of pediatric victims, implemented a project (the Pediatric Disaster Coalition; PDC) to address gaps in the healthcare system and provide effective and timely pediatric care during an MCI.
The PDC includes experts in emergency preparedness, critical care, surgery, and emergency medicine from the NYC pediatric/children's hospitals, DOHMH, Office of Emergency Management, and Fire Department (FDNY). Two committees addressed pediatric prehospital triage, transport, and pediatric critical care (PCC) surge capacities. They developed guidelines and recommendations for pediatric field triage and transport, matching patients' needs to resources, and increasing PCC Surge Capacities.
Surge recommendations were formulated. The algorithm developed provides specific pediatric triage criteria that identify severity of illness using the traditional Red, Yellow, and Green categories plus an Orange designation for continual reassessment; it has been adopted by the FDNY, which has trained >3,000 EMS personnel in its use. Triaged patients can be transported to appropriate resources based on a tiered system that defines pediatric hospital capabilities. The Surge Committee has created PCC Surge Capacity Guidelines that hospitals can use to create their individual PCC surge plans. Fifteen of 25 NYC hospitals with PCC capabilities are participating in PDC planning; five have completed surge plans, three are near completion, and seven are in development. The completed plans add 92 surge beds to the 244 regularly available PICU beds. The goal is to increase PCC surge bed capacity by 200+ beds.
The project is an effective, multidisciplinary group approach to planning for a regional, large-scale pediatric MCI. Regional lead agencies must emphasize pediatric emergency preparedness in their disaster plans.
Adult readers with developmental phonological dyslexia exhibit significant difficulty comparing pseudowords and pure tones in auditory working memory (AWM). This suggests deficient AWM skills for adults diagnosed with dyslexia. Despite behavioral differences, it is unknown whether neural substrates of AWM differ between adults diagnosed with dyslexia and normal readers. Prior neuroimaging of adults diagnosed with dyslexia and normal readers, and post-mortem findings of neural structural anomalies in adults diagnosed with dyslexia, support the hypothesis of atypical neural activity in temporoparietal and inferior frontal regions during AWM tasks in adults diagnosed with dyslexia. We used fMRI during two binaural AWM tasks (pseudoword or pure-tone comparisons) in adults diagnosed with dyslexia (n = 11) and normal readers (n = 11). For both AWM tasks, adults diagnosed with dyslexia exhibited greater activity in left posterior superior temporal (BA 22) and inferior parietal regions (BA 40) than normal readers. Comparing neural activity between groups and between stimulus contrasts (pseudowords vs. tones), adults diagnosed with dyslexia showed greater primary auditory cortex activity (BA 42; tones > pseudowords) than normal readers. Thus, greater activity in primary auditory, posterior superior temporal, and inferior parietal cortices during linguistic and non-linguistic AWM tasks for adults diagnosed with dyslexia compared to normal readers indicates differences in the neural substrates of AWM comparison tasks. (JINS, 2008, 14, 629–639.)
The ‘gateway’ pattern of drug initiation describes a normative sequence, beginning with alcohol and tobacco use, followed by cannabis, then other illicit drugs. Previous work has suggested that ‘violations’ of this sequence may be predictors of later problems but other determinants were not considered. We have examined the role of pre-existing mental disorders and sociodemographics in explaining the predictive effects of violations using data from the US National Comorbidity Survey Replication (NCS-R).
The NCS-R is a nationally representative face-to-face household survey of 9282 English-speaking respondents aged 18 years and older that used the World Health Organization (WHO) Composite International Diagnostic Interview (CIDI) to assess DSM-IV mental and substance disorders. Drug initiation was estimated using retrospective age-of-onset reports, and ‘violations’ were defined as initiation orders inconsistent with the normative sequence. Predictors of violations were examined using multivariable logistic regressions. Discrete-time survival analysis was used to assess whether violations predicted progression to dependence.
Gateway violations were largely unrelated to later dependence risk, with the exception of small increases in risk of alcohol and other illicit drug dependence for those who initiated use of other illicit drugs before cannabis. Early-onset internalizing disorders were predictors of gateway violations, and both internalizing and externalizing disorders increased the risks of dependence among users of all drugs.
Drug use initiation follows a strong normative pattern, deviations from which are not strongly predictive of later problems. By contrast, adolescents who have already developed mental health problems are at risk for deviations from the normative sequence of drug initiation and for the development of dependence.
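Operationally, a gateway 'violation' can be flagged from retrospective age-of-onset reports by checking pairwise ordering against the normative sequence. A minimal sketch; the stage grouping and tie handling here are simplifying assumptions, not the NCS-R's exact coding rules.

```python
# Flag a gateway "violation": any later-stage drug initiated strictly before
# an earlier-stage one. The stage grouping and tie handling are simplifying
# assumptions, not the NCS-R's coding.
NORMATIVE_ORDER = ["alcohol_or_tobacco", "cannabis", "other_illicit"]

def is_violation(onset_ages):
    """onset_ages: dict mapping stage -> age at first use (absent if never used)."""
    used = [(NORMATIVE_ORDER.index(s), a) for s, a in onset_ages.items()
            if s in NORMATIVE_ORDER]
    for rank_i, age_i in used:
        for rank_j, age_j in used:
            if rank_i < rank_j and age_j < age_i:   # later stage started earlier
                return True
    return False

conforming = is_violation({"alcohol_or_tobacco": 14, "cannabis": 16, "other_illicit": 18})
violating = is_violation({"alcohol_or_tobacco": 15, "cannabis": 18, "other_illicit": 16})
```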