To determine the false-positive rate of pulse oximetry screening at moderate altitude, presumed to be elevated compared with sea-level values, and to assess change in the false-positive rate over time.
We retrospectively analysed 3548 infants in the newborn nursery in Albuquerque, New Mexico (elevation 5400 ft), from July 2012 to October 2013. Universal pulse oximetry screening guidelines were applied after 24 hours of life but before discharge. Newborns between 36 and 36 6/7 weeks of gestation weighing >2 kg, and babies >37 weeks weighing >1.7 kg, were included in the study. Log-binomial regression was used to assess change in the probability of a false positive over time.
Of the 3548 patients analysed, there was one true positive, with a posteriorly malaligned ventricular septal defect and an interrupted aortic arch. Among the 93 false positives, mean pre- and post-ductal saturations were lower, at 92% and 90%, respectively. The false-positive rate was 3.5% before April 2013 and decreased to 1.5% thereafter. There was a significant decrease in the false-positive rate (p = 0.003, slope coefficient = −0.082, standard error of coefficient = 0.023), with the relative risk of a false positive decreasing by a factor of 0.92 (95% CI 0.88–0.97) per month.
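The reported relative risk follows directly from the log-binomial slope; a minimal sketch of the model, assuming time was coded in months:

$$\log \Pr(\text{false positive}) = \beta_0 + \beta_1 \cdot \text{month}, \qquad \text{RR per month} = e^{\beta_1} = e^{-0.082} \approx 0.92.$$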
This is the first study in Albuquerque, New Mexico, reporting a high false-positive rate, 1.5% at moderate altitude at the end of the study, in comparison with the false-positive rate of 0.035% at sea level. Implementation of the nationally recommended universal pulse oximetry screening was associated with a high false-positive rate in the initial period, thought to reflect a combination of a learning curve and altitude. After the initial decline, the rate remained steadily elevated above sea-level values, indicating the dominant effect of moderate altitude.
The Expanded Program for Immunization Consortium – Human Immunology Project Consortium study aims to employ systems biology to identify and characterize vaccine-induced biomarkers that predict immunogenicity in newborns. Key to this effort is the establishment of the Data Management Core (DMC) to provide reliable data and bioinformatic infrastructure for centralized curation, storage, and analysis of multiple de-identified “omic” datasets. The DMC established a cloud-based architecture using Amazon Web Services to track, store, and share data according to National Institutes of Health standards. The DMC tracks biological samples during collection, shipping, and processing while capturing sample metadata and associated clinical data. Multi-omic datasets are stored in access-controlled Amazon Simple Storage Service (S3) for data security and file version control. All data undergo quality control processes at the generating site followed by DMC validation for quality assurance. The DMC maintains a controlled computing environment for data analysis and integration. Upon publication, the DMC deposits finalized datasets to public repositories. The DMC architecture provides resources and scientific expertise to accelerate translational discovery. Robust operations allow rapid sharing of results across the project team. Maintenance of data quality standards and public data deposition will further benefit the scientific community.
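The abstract does not give implementation details; the following is a minimal sketch of the kind of versioned, access-controlled S3 storage it describes, assuming the standard boto3 API and hypothetical bucket and file names:

```python
import boto3

# Hypothetical bucket name for illustration; the DMC's actual resources are not public.
BUCKET = "epic-hipc-dmc-omics"

s3 = boto3.client("s3")

# Enable file version control on the bucket, as the abstract describes.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Block all public access so the bucket stays access-controlled.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Upload a de-identified dataset (hypothetical file) with server-side encryption.
s3.upload_file(
    Filename="transcriptomics_batch1.csv",
    Bucket=BUCKET,
    Key="omics/transcriptomics/batch1.csv",
    ExtraArgs={"ServerSideEncryption": "AES256"},
)
```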
Economic progress in India over the past three decades has not been accompanied by a commensurate improvement in the nutritional status of children, and a disproportionate burden of undernutrition still falls on socioeconomically disadvantaged populations in the poorest regions. This study examined the nutritional status of children under 3 years of age using data from the fourth round of the Indian National Family Health Survey, conducted in 2015–2016. Child undernutrition was assessed in a sample of 126,431 under-3 children using the anthropometric indices of stunting, underweight and wasting (‘anthropometric failure’) across 640 districts, 5489 primary sampling units and 35 states/UTs of India. Descriptive statistics were used to examine the regional pattern of childhood undernutrition. Multilevel logistic regression models were fitted to examine the adjusted effect of social group (tribal vs non-tribal) and economic, demographic and contextual factors on the risks of stunting, underweight and wasting, accounting for the hierarchical nature of the data. Interaction effects were estimated to model the joint effects of socioeconomic position (household wealth, maternal education, urban/rural residence and geographical region) and social group (tribal vs non-tribal) on the likelihood of anthropometric failure among children. The burden of childhood undernutrition was found to vary starkly across social, economic, demographic and contextual factors. Interaction effects demonstrated that tribal children from economically poorer households, with less-educated mothers, residing in rural areas and living in the Central region of India had higher odds of anthropometric failure than other tribal children. A one-size-fits-all approach to tackling undernutrition in tribal children may not be efficient and could be counterproductive.
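A minimal sketch of the kind of two-level logistic specification described, with a random intercept for the primary sampling unit and an illustrative tribal-by-wealth interaction (the study's exact covariate set is not reproduced here):

$$\operatorname{logit}\Pr(y_{ij}=1) = \beta_0 + \beta_1\,\mathrm{tribal}_{ij} + \beta_2\,\mathrm{wealth}_{ij} + \beta_3\,(\mathrm{tribal}_{ij}\times\mathrm{wealth}_{ij}) + \boldsymbol{\gamma}^{\top}\mathbf{x}_{ij} + u_j, \qquad u_j \sim N(0,\sigma_u^2),$$

where $y_{ij}$ indicates anthropometric failure for child $i$ in sampling unit $j$ and $\mathbf{x}_{ij}$ collects the remaining covariates.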
Case fatality rate (CFR) and doubling time are important characteristics of any epidemic. For coronavirus disease 2019 (COVID-19), wide variations in the CFR and doubling time have been noted among various countries. Early in an epidemic, CFR calculations that use all patients as the denominator do not account for hospitalised patients who are still ill and will die in the future. Hence, we calculated the cumulative CFR (cCFR) using only patients whose final clinical outcomes were known at a certain time point. We also estimated the daily average doubling time. Calculating the CFR by this method leads to temporal stability in the fatality rates; the cCFR stabilises at different values for different countries. The possible reasons for this are an improved outcome rate by the end of the epidemic and a wider testing strategy. The United States, France, Turkey and China had a high cCFR at the start due to a low outcome rate. By 22 April, Germany, China and South Korea had a low cCFR. China and South Korea controlled the epidemic and achieved high doubling times. The doubling time in Russia did not cross 10 days during the study period.
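A minimal formulation consistent with the method described (the exact averaging window for the doubling time is an assumption):

$$\text{cCFR}(t) = \frac{D(t)}{D(t) + R(t)}, \qquad T_d(t) = \frac{\ln 2}{\ln C(t) - \ln C(t-1)},$$

where $D(t)$ and $R(t)$ are cumulative deaths and recoveries by day $t$, so only patients with known outcomes enter the denominator, and $C(t)$ is the cumulative case count used for the daily doubling time.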
This paper offers a framework for measuring global growth and inflation, built on standard index number theory, national accounts principles, and the concepts and methods for international macro-economic comparisons. Our approach provides a sound basis for purchasing power parity (PPP)- and exchange rate (XR)-based global growth and inflation measures. The Sato–Vartia index number system advocated here offers very similar results to a Fisher system but has the added advantage of allowing a complete decomposition of PPP or XR effects. For illustrative purposes, we present estimates of global growth and inflation for 141 countries over the years 2005 to 2011. The contributions of movements in XRs and PPPs to global inflation are presented. The aggregation properties of the method are also discussed.
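For reference, the standard Sato–Vartia price index takes a log-change form with logarithmic-mean weights (the paper's multi-country aggregation details are not reproduced here):

$$\ln P_{SV} = \sum_i w_i \ln\frac{p_i^1}{p_i^0}, \qquad w_i = \frac{L(s_i^1, s_i^0)}{\sum_j L(s_j^1, s_j^0)}, \qquad L(a,b) = \frac{a-b}{\ln a - \ln b},$$

where $s_i^t$ is item $i$'s value share in period $t$ and $L(a,a)=a$ by continuity.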
In the treatment of major depressive disorder (MDD), insufficient treatment response and the delayed onset of action remain major problems.
Measuring plasma concentrations, i.e. therapeutic drug monitoring (TDM), is a possible option to improve therapeutic outcomes.
The aim of this prospective and naturalistic study was to evaluate the economic and clinical benefit of TDM for depressed inpatients treated with citalopram.
Inpatients with MDD according to ICD-10 were included and treated with citalopram. Psychopathology was assessed with the 17-item Hamilton Depression Rating Scale (HAMD-17) at weekly intervals for five weeks. In parallel, serum concentrations of citalopram were measured.
Fifty-five patients were included (27 female). Of the patients with citalopram plasma concentrations below 50 ng/ml (n = 36), 84% were non-responders at week five. Among patients who achieved plasma concentrations ≥50 ng/ml (n = 19) on day 7, 47% became responders by week five (p = 0.025). Patients with plasma levels ≥50 ng/ml had a significantly shorter duration of hospitalization (49 ± 20 days) than patients below 50 ng/ml (72 ± 37 days; p = 0.033).
Our results show that citalopram plasma levels above 50 ng/ml are predictive of later treatment outcome and that TDM is cost-effective owing to the reduced duration of hospitalization.
Depression is known to be associated with low serum brain-derived neurotrophic factor (BDNF) and elevated levels of cortisol. Yoga has been shown to be associated with a significant antidepressant effect, as well as an increase in serum BDNF levels and a reduction in serum cortisol levels, in these patients.
Aims and Objectives
We examined the association between serum cortisol and BDNF levels in patients with depression who were on treatment with antidepressants, yoga therapy, and both in combination.
Fifty-one consenting drug-naive outpatients (29 males) aged 18–55 years, diagnosed with major depression, received antidepressant medication alone (n = 15) or yoga therapy with (n = 18) or without (n = 18) concurrent antidepressants. Subjects in the yoga groups practiced a specific yoga module for three months. Depression was assessed using the Hamilton Depression Rating Scale (HDRS). Serum BDNF and cortisol levels were obtained before and after three months using a sandwich ELISA method. Group differences were analyzed using one-way ANOVA, and correlations between serum BDNF and cortisol levels were analyzed using Pearson's correlation.
Significant negative correlations were observed between baseline BDNF and cortisol levels in the yoga-plus-medication group (r = −0.569; P = 0.01), and between the change in BDNF and cortisol levels in the yoga-alone group (r = −0.582; P = 0.01). No other significant correlations were found.
There is a significant association between serum cortisol and BDNF levels in patients with depression who underwent yoga with or without antidepressants. This suggests that yoga may have stress-reduction and neuroplastic effects, alone or in combination with medications, in depressed patients.
The main aim of this study was to investigate the capacity of a number of variables from four domains (clinical, psychosocial, cognitive and genetic) to predict antidepressant treatment outcome, and to combine the predictors in one integrated regression model in order to investigate which predictor contributed most.
In a semi-naturalistic prospective cohort study of 241 fully assessed MDD patients, the decrease in HAM-D scores from baseline to after 6 weeks of treatment was used to measure antidepressant treatment outcome.
The clinical and psychosocial model (R2 = 0.451) showed that the baseline HAM-D score and the MMPI-2 paranoia scale were the best clinical and psychosocial predictors of treatment outcome, respectively. The cognitive model (R2 = 0.502) revealed that a combination of better performance on the TMT-B test and worse performance on the TOH and WAIS-R Digit Backward tests predicted the decline in HAM-D scores. The genetic analysis found only that the median percent improvement in HAM-D scores in carriers of the G allele of the GR gene BclI polymorphism (72.2%) was significantly lower than that in non-G-allele carriers (80.1%). The integrated model showed that three predictors in combination, the baseline HAM-D score, the MMPI-2 paranoia scale and the TMT-B test, explained 57.1% of the variance.
Three markers, the baseline HAM-D score, the MMPI-2 paranoia scale and the TMT-B test, might serve as predictors of antidepressant outcome in daily psychiatric practice.
There are a number of good standard practices available for prescribing long-acting antipsychotics. Adherence to these guidelines will minimise harm to service users.
To compare depot antipsychotic prescribing practice with the good-practice standards of the BNF, Trust and Maudsley guidelines.
To compare practice with standards in the areas of:
– licensed indication;
– dose/frequency range;
– avoiding poly-pharmacy;
– regular review of clinical and side effects.
Case notes of a randomly selected sample of 30 patients from the depot clinic of the City East Adult Community Mental Health Team, Leicester, UK, were retrospectively investigated. The collected data were analysed, compliance with the best-practice guidelines was calculated, and recommendations were made based on the findings.
One hundred percent compliance was observed for licensed indications and for dose/frequency within the BNF range. However, 14% of patients received poly-pharmacotherapy; 86% had regular outpatient reviews, but only 46% had reviews of side effects.
Recommendations included better-quality documentation by clinicians, improved technology to generate automatic review reminders, introduction of a clinic checklist covering all clinically important information for review, wider dissemination of the findings of this investigation, and re-auditing practice to explore the impact of this investigation.
The apolipoprotein E (APOE) ε4 allele increases the risk for mild cognitive impairment (MCI) and dementia, but not all carriers develop MCI/dementia. The purpose of this exploratory study was to determine whether early and subtle preclinical signs of cognitive dysfunction and medial temporal lobe atrophy are observed in cognitively intact ε4 carriers who subsequently develop MCI.
Twenty-nine healthy, cognitively intact ε4 carriers (ε3/ε4 heterozygotes; ages 65–85) underwent neuropsychological testing and MRI-based measurements of medial temporal volumes over a 5-year follow-up interval; data were converted to z-scores based on a non-carrier group consisting of 17 ε3/ε3 homozygotes.
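The z-score conversion mentioned here is the standard one, referencing each measure to the non-carrier group's distribution:

$$z = \frac{x - \mu_{\varepsilon3/\varepsilon3}}{\sigma_{\varepsilon3/\varepsilon3}},$$

where $\mu$ and $\sigma$ are the mean and standard deviation of the 17 ε3/ε3 homozygotes on the same measure.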
At follow-up, 11 ε4 carriers (38%) converted to a diagnosis of MCI. At study entry, the MCI converters had significantly lower scores on the Mini-Mental State Examination, Rey Auditory Verbal Learning Test (RAVLT) Trials 1–5, and RAVLT Immediate Recall compared to non-converters. MCI converters also had smaller MRI volumes in the left subiculum than non-converters. Follow-up logistic regressions revealed that left subiculum volumes and RAVLT Trials 1–5 scores were significant predictors of MCI conversion.
Results from this exploratory study suggest that ε4 carriers who convert to MCI exhibit subtle cognitive and volumetric differences years prior to diagnosis.
The aim of this work was to study the acceptability of plans prepared for prostate patients treated by volumetric modulated arc therapy (VMAT), with a view to evaluating plan quality and testing pre-treatment quality assurance (QA).
VMAT plans of 35 patients, planned on the Eclipse Treatment Planning System (Aria 15), were included in the study. Plan acceptability was checked using statistical analysis, which included the homogeneity index (HI), the radical and median dose homogeneity indices (rDHI and mDHI), coverage and the uniformity index (UI). Dose–volume histograms (DVHs) of the plans were also studied to check the prescribed dose (PD), Dmax, Dmin, D5 and D95. Portal dosimetry was performed by gamma analysis using the 3%/3 mm criterion. SD and mean SD error were also calculated and analysed.
Statistical analysis showed a mean HI of 1·054, coverage of 0·959, UI of 1·055, mDHI of 0·962 and rDHI of 0·866. The SDs of HI, coverage, UI, mDHI and rDHI were 0·019, 0·019, 0·014, 0·013 and 0·030, respectively. From the DVHs, the means of D5, D95, Dmax and Dmin were calculated at 6,252·9, 5,757·4, 6,413·3 and 5,657·7 cGy, respectively, against a prescribed dose of 6,000 cGy. According to gamma analysis, the area with gamma < 1 was 99·12% (tolerance limit 95%), the maximum gamma was 1·466 (tolerance limit 3·5), the average gamma was 0·388 (tolerance limit 0·5), the area with gamma > 1·2 was 0·242% (tolerance limit 0·5%), the maximum dose difference was 0·6 (tolerance limit 1·0) and the average dose difference was 0·029 (tolerance limit 0·2).
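The abstract does not restate the index definitions; commonly used forms (assumed here, since the exact definitions are not given) are

$$\text{coverage} = \frac{D_{95}}{\text{PD}}, \qquad \text{mDHI} = \frac{D_{95}}{D_{5}}, \qquad \text{rDHI} = \frac{D_{\min}}{D_{\max}},$$

and a point passes the 3%/3 mm gamma criterion when

$$\gamma = \min_{r'} \sqrt{\left(\frac{\lVert r - r' \rVert}{3\ \text{mm}}\right)^2 + \left(\frac{\Delta D(r, r')}{3\%}\right)^2} \le 1.$$

As a rough check, the reported mean D95 of 5,757·4 cGy against the 6,000 cGy prescription gives a coverage of about 0·96, consistent with the reported 0·959.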
All three computations showed the results to be within acceptable limits. VMAT has the distinctive ability to deliver the whole treatment in only two rotations of the gantry, giving improved delivery efficiency for equivalent dosimetric quality.
Integration of depression treatment into primary care could improve patient outcomes in low-resource settings. Losses along the depression care cascade limit integrated service effectiveness. This study identified patient-level factors that predicted detection of depressive symptoms by nurses, referral for depression treatment, and uptake of counseling, as part of integrated care in KwaZulu-Natal, South Africa.
This was an analysis of baseline data from a prospective cohort. Participants were adult patients with at least moderate depressive symptoms at primary care facilities in Amajuba, KwaZulu-Natal, South Africa. Participants were screened for depressive symptoms prior to routine assessment by a nurse. Generalized linear mixed-effects models were used to estimate associations between patient characteristics and service delivery outcomes.
Data from 412 participants were analyzed. Nurses successfully detected depressive symptoms in 208 [50.5%, 95% confidence interval (CI) 38.9–62.0] participants; of these, they referred 76 (36.5%, 95% CI 20.3–56.5) for depression treatment; of these, 18 (23.7%, 95% CI 10.7–44.6) attended at least one session of depression counseling. Depressive symptom severity, alcohol use severity, and perceived stress were associated with detection. Similar factors did not drive referral or counseling uptake.
Nurses detected patients with depressive symptoms at rates comparable to primary care providers in high-resource settings, though gaps in referral and uptake persist. Nurses were more likely to detect symptoms among patients in more severe mental distress. Implementation strategies for integrated mental health care in low-resource settings should target improved rates of detection, referral, and uptake.
Thirty-one accessions of Oryza glaberrima were evaluated to study genetic variability using correlation, path, principal component (PCA) and D2 analyses. Box plots depicted high estimates of variability for days to 50% flowering and grain yield per plant in Kharif 2016, and for plant height, productive tillers, panicle length and 1000-seed weight in Kharif 2017. Correlation studies revealed that days to 50% flowering, plant height, panicle length, number of productive tillers and spikelets per panicle had a high direct positive association with grain yield, while path analysis identified the number of productive tillers as having the maximum direct positive effect on grain yield. Days to 50% flowering via spikelets per panicle, and productive tillers and plant height via spikelets per panicle, exhibited high positive indirect effects on grain yield per plant. PCA showed that yield per plant, days to 50% flowering, spikelets per panicle and panicle length accounted for a cumulative variance of 54.752%, contributing most of the variation in traits, while D2 analysis identified days to 50% flowering and grain yield per plant as contributing the most to genetic diversity. Therefore, selection of accessions with a higher number of productive tillers and early maturity would be most suitable for a yield improvement programme. The study revealed the utility of African rice germplasm and its potential for use in the genetic improvement of indica rice varieties.
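A minimal sketch of how the cumulative explained variance reported above is typically computed, assuming standardised trait data in a headerless CSV (hypothetical file and variable names):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows = 31 accessions, columns = measured traits
# (hypothetical file; the study's raw data are not reproduced here).
traits = np.loadtxt("oryza_traits.csv", delimiter=",")

# Standardise so traits measured on different scales contribute equally.
X = StandardScaler().fit_transform(traits)

pca = PCA()
pca.fit(X)

# Cumulative proportion of variance explained by the leading components.
cumvar = np.cumsum(pca.explained_variance_ratio_)
print(cumvar[:4])  # cumulative share of the first few components
```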
In this study, we estimate the burden of foodborne illness (FBI) caused by five major pathogens among nondeployed US Army service members. The US Army is a unique population that is globally distributed and has its own food procurement system and a food protection system dedicated to the prevention of both unintentional and intentional contamination of food. To our knowledge, the burden of FBI caused by specific pathogens among the US Army population has not been determined. We used data from a 2015 US Army population survey, a 2015 US Army laboratory survey and data from FoodNet to create inputs for two model structures. Model type 1 scaled up case counts of Campylobacter jejuni, Shigella spp., non-typhoidal Salmonella enterica and non-O157 STEC ascertained from the Disease Reporting System internet database from 2010 to 2015. Model type 2 scaled down cases of self-reported acute gastrointestinal illness (AGI) to estimate the annual burden of norovirus illness. We estimate that these five pathogens caused 45 600 (5%–95% range, 30 300–64 000) annual illnesses among nondeployed active duty US Army service members. Of these pathogens, norovirus, Campylobacter jejuni and non-typhoidal Salmonella enterica were responsible for the most illness. There is a substantial burden of AGI and FBI caused by five major pathogens among US Army Soldiers, which can have a considerable impact on the readiness of the force. The US Army has a robust food protection program in place, but without a specific active FBI surveillance system across the Department of Defence, we will never be able to measure the effectiveness of modern, targeted interventions aimed at reducing specific foodborne pathogens.
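A minimal sketch of the scale-up logic described for model type 1, in the Scallan-style multiplier form; all values below are illustrative assumptions, not the study's inputs:

```python
# Scale up reported case counts to estimated total illnesses by adjusting
# for under-diagnosis and under-reporting (Scallan-style multipliers).
# Every number below is a hypothetical placeholder.

reported_cases = 120          # annual reported Campylobacter cases (illustrative)
care_seeking = 0.35           # fraction of ill persons who seek care
stool_submission = 0.30       # fraction of care-seekers who submit a stool sample
test_sensitivity = 0.70       # fraction of true cases the laboratory test detects

# Each reported case represents 1 / (product of ascertainment steps) illnesses.
underascertainment = 1 / (care_seeking * stool_submission * test_sensitivity)
estimated_illnesses = reported_cases * underascertainment
print(f"Estimated annual illnesses: {estimated_illnesses:,.0f}")
```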
Throughout history, acute gastrointestinal illness (AGI) has been a significant cause of morbidity and mortality among US service members. We estimated the magnitude, distribution, risk factors and care-seeking behaviour of AGI among active duty US Army service members using a web-based survey. The survey asked about sociodemographic characteristics, dining and food procurement history and any experience of diarrhoea in the past 30 days. If respondents reported diarrhoea, additional questions about concurrent symptoms, duration of illness, medical care seeking and stool sample submission were asked. Univariable and multivariable logistic regression were used to identify the factors associated with AGI and the factors associated with seeking care and submitting a stool sample. The 30-day prevalence of AGI was 18.5% (95% CI 16.66–20.25), and the incidence rate was 2.24 AGI episodes per person-year (95% CI 2.04–2.49). Risk factors included region of residence, eating at the dining facility and eating at other on-post establishments. Individuals with AGI missed 2.7–3.7 days of work, costing approximately $847 451 629 in paid wages. Results indicate there are more than 1 million cases of AGI per year among US Army Soldiers, which can have a major impact on readiness. We found that care-seeking behaviours for AGI differ between US Army service members and the general population. Army service members with AGI report seeking care and having a stool sample submitted less often, especially for severe (bloody) diarrhoea. Factors associated with seeking care included rank, experiencing respiratory symptoms (sore throat, cough), experiencing vomiting and missing work for the illness. Factors associated with submitting a stool sample included experiencing more than five loose stools in 24 h and not experiencing respiratory symptoms. US Army laboratory-based surveillance under-estimates the number of service members with both bloody and non-bloody diarrhoea. To our knowledge, this is the first study to estimate the magnitude, distribution, risk factors and care-seeking behaviour of AGI among Army members. We determined that Army service members' care-seeking behaviours, AGI risk factors and stool sample submission rates differ from those of the general population, so when estimating the burden of AGI caused by specific foodborne pathogens using methods like those of Scallan et al. (2011), unique multipliers must be used for this subset of the population. The study not only confirms the importance of AGI in the active duty Army population but also highlights opportunities for public health leaders to adopt simple strategies to better capture the impact of AGI, so that more modern intervention strategies can be implemented to reduce the burden and indirectly improve operational readiness across the Enterprise.
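The reported incidence follows from the 30-day prevalence by annualising the observation window (a consistency check, assuming episodes are counted per 30-day period):

$$\text{incidence} \approx 0.185 \times \frac{365.25}{30} \approx 2.25\ \text{episodes per person-year},$$

in line with the reported 2.24 episodes per person-year.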
Surgery for CHD has been slow to develop in parts of the former Soviet Union. The impact of an 8-year surgical assistance programme between an emerging centre and a multi-disciplinary international team that comprised healthcare professionals from developed cardiac programmes is analysed and presented.
Material and methods
The international paediatric assistance programme included five main components – intermittent clinical visits to the site annually, medical education, biomedical engineering support, nurse empowerment, and team-based practice development. Data were analysed from visiting teams and local databases before and since commencement of assistance in 2007 (era A: 2000–2007; era B: 2008–2015). The following variables were compared between periods: annual case volume, operative mortality, case complexity based on Risk Adjustment for Congenital Heart Surgery (RACHS-1), and RACHS-adjusted standardised mortality ratio.
A total of 154 RACHS-classifiable operations were performed during era A, with a mean annual case volume by local surgeons of 19.3 (95% confidence interval 14.3–24.2), an operative mortality of 4.6% and a standardised mortality ratio of 2.1. In era B, surgical volume increased to a mean of 103.1 annual cases (95% confidence interval 69.1–137.2, p<0.0001). There was a non-significant (p=0.84) increase in operative mortality (5.7%), but a decrease in the standardised mortality ratio (1.2) owing to an increase in case complexity. In era B, the proportion of surgeries led by local surgeons during visits from the international team increased from 0% (0/27) in 2008 to 98% (58/59) in the final year of analysis.
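The RACHS-adjusted standardised mortality ratio used here is the usual observed-to-expected ratio, and the era A case volume is consistent with the eight-year span:

$$\text{SMR} = \frac{\text{observed deaths}}{\text{RACHS-1-expected deaths}}, \qquad \bar{n}_{\text{era A}} = \frac{154\ \text{operations}}{8\ \text{years}} \approx 19.3\ \text{cases per year}.$$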
The model of assistance described in this report led to improved risk-adjusted mortality and to increased case volume, case complexity and independent operating skill.