To analyse the nutritional and packaging characteristics of toddler-specific foods and milks in the Australian retail food environment, and to identify how such products fit within the Australian Dietary Guidelines (ADG) and the NOVA classification.
Cross-sectional retail audit of toddler foods and milks. On-pack product attributes were recorded. Products were categorised as (i) food or milk, (ii) snack food or meal, and (iii) snacks sub-categorised according to main ingredients. Products were classified as discretionary or core foods per the ADG and by level of processing according to the NOVA classification.
Supermarkets and pharmacies in Australia.
A total of 154 foods and 32 milks were identified. Of the foods, 80% were snacks and 60% were classified as core foods, while 85% were ultra-processed (UP). Per 100 g, discretionary foods provided significantly more energy, protein, total and saturated fat, carbohydrate, total sugar and sodium (p < 0.001) than core foods. Total sugars were significantly higher (p < 0.001) and sodium significantly lower (p < 0.001) in minimally processed foods than in ultra-processed foods. All toddler milks (n = 32) had higher energy, carbohydrate and total sugar levels than full-fat cow’s milk per 100 mL. Claims and messages were present on 99% of foods and on all milks.
The majority of toddler foods available in Australia are UP snack foods, and do not align with the ADG. Toddler milks, despite being UP, do align with the ADG. A strengthened regulatory approach may address this issue.
Treatment of hypoplastic left heart syndrome varies across institutions. This study examined the impact of introducing a standardised programme.
This retrospective cohort study evaluated the effects of a comprehensive strategy on 1-year transplant-free survival with preserved ventricular and atrioventricular valve (AVV) function following a Norwood operation. This strategy included standardised operative and perioperative management and dedicated interstage monitoring. The post-implementation cohort (C2) was compared to historic controls (C1). Outcomes were assessed using logistic regression and Kaplan–Meier analysis.
The study included 105 patients, 76 in C1 and 29 in C2. Groups had similar baseline characteristics, including percentage with preserved ventricular (96% C1 versus 100% C2, p = 0.28) and AVV function (97% C1 versus 93% C2, p = 0.31). Perioperatively, C2 had higher indexed oxygen delivery (348 ± 67 ml/minute/m2 C1 versus 402 ± 102 ml/minute/m2 C2, p = 0.015) and lower renal injury (47% C1 versus 3% C2, p = 0.004). The primary outcome was similar in both groups (49% C1 and 52% C2, p = 0.78), with comparable rates of death and transplantation (36% C1 versus 38% C2, p = 0.89) and of ventricular (2% C1 versus 0% C2, p = 0.53) and AVV dysfunction (11% C1 versus 11% C2, p = 0.96) at 1 year. When accounting for cohort and 100-day freedom from hospitalisation, female gender (OR 3.7, p = 0.01) increased, and ventricular dysfunction (OR 0.21, p = 0.02), CPR (OR 0.11, p = 0.002) and ECMO use (OR 0.15, p = 0.001) decreased, the likelihood of 1-year transplant-free survival.
Standardised perioperative management was not associated with improved 1-year transplant-free survival. Post-operative ventricular or AVV dysfunction was the strongest predictor of 1-year mortality.
Self-harm is a major international public health concern and is especially prevalent among prisoners. In this editorial, we explore recent trends in prisoner self-harm during the coronavirus lockdown, and consider strategies for improving the prevention and management of self-harm in prisons as we emerge from the pandemic.
Even though sub-Saharan African women spend millions of person-hours per day fetching water and pounding grain, to date, few studies have rigorously assessed the energy expenditure costs of such domestic activities. As a result, most analyses that consider head-hauling water or hand pounding of grain with a mortar and pestle (pilão use) employ energy expenditure values derived from limited research. The current paper compares estimated energy expenditure values from heart rate monitors v. indirect calorimetry in order to understand some of the limitations with using such monitors to measure domestic activities.
This confirmation study estimates the metabolic equivalent of task (MET) value for head-hauling water and hand-pounding grain using both indirect calorimetry and heart rate monitors under laboratory conditions.
The study was conducted in Nampula, Mozambique.
Forty university students in Nampula city who recurrently engaged in water-fetching activities.
Including all participants, the mean MET value for head hauling 20 litres (20·5 kg, including container) of water (2·7 km/h, 0 % slope) was 4·3 (sd 0·9) and 3·7 (sd 1·2) for pilão use. Estimated energy expenditure predictions from a mixed model were found to correlate with observed energy expenditure (r2 0·68, r 0·82). Re-estimating the model with pilão use data excluded improved the fit substantially (r2 0·83, r 0·91).
The current study finds that heart rate monitors are suitable instruments for providing accurate quantification of energy expenditure for some domestic activities, such as head-hauling water, but are not appropriate for quantifying expenditures of other activities, such as hand-pounding grain.
The Cognitive Abilities Screening Instrument (CASI) is a screening test of global cognitive function used in research and clinical settings. However, the CASI was developed using face validity and has not been investigated via empirical tests such as factor analyses. Thus, we aimed to develop and test a parsimonious conceptualization of the CASI rooted in cognitive aging literature reflective of crystallized and fluid abilities.
Secondary data analysis using confirmatory factor analysis: we tested the proposed two-factor solution against an alternative one-factor solution and conducted a χ2 difference test to determine which model had a significantly better fit.
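A χ2 difference test for nested factor models can be sketched in a few lines. The fit statistics below are hypothetical placeholders, not values from this study; for a two- versus one-factor comparison the models typically differ by one degree of freedom (the factor correlation fixed to 1.0 in the one-factor model):

```python
import math

def chi2_diff_test_df1(chi2_constrained, df_constrained, chi2_free, df_free):
    """Chi-square difference test for nested models differing by 1 df.

    The constrained model (e.g. one-factor) has the larger chi-square and
    more degrees of freedom than the freer (e.g. two-factor) model.
    """
    d_chi2 = chi2_constrained - chi2_free
    d_df = df_constrained - df_free
    assert d_df == 1, "this helper handles a 1-df difference only"
    # Survival function of a chi-square variable with 1 df: erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(d_chi2 / 2.0))
    return d_chi2, p

# Hypothetical fit statistics (illustrative only)
d, p = chi2_diff_test_df1(chi2_constrained=180.0, df_constrained=35,
                          chi2_free=150.0, df_free=34)
# d == 30.0; p is far below 0.05, favouring the two-factor model
```

A significant Δχ2 indicates that freeing the extra parameter (here, allowing two correlated factors instead of one) improves fit beyond chance.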
Data came from 3,491 men from the Kuakini Honolulu-Asia Aging Study.
The Cognitive Abilities Screening Instrument.
Findings demonstrated that both models fit the data; however, the two-factor model had a significantly better fit than the one-factor model. Criterion validity tests indicated that participant age was negatively associated with both factors and that education was positively associated with both factors. Further tests demonstrated that fluid abilities were significantly and negatively associated with a later-life dementia diagnosis.
We encourage investigators to use the two-factor model of the CASI as it could shed light on underlying cognitive processes, which may be more informative than using a global measure of cognition.
Cognitive impairment is a core feature of psychotic disorders, but the profile of impairment across adulthood, particularly in African-American populations, remains unclear.
Using cross-sectional data from a case–control study of African-American adults with affective (n = 59) and nonaffective (n = 68) psychotic disorders, we examined cognitive functioning between early and middle adulthood (ages 20–60) on measures of general cognitive ability, language, abstract reasoning, processing speed, executive function, verbal memory, and working memory.
Both affective and nonaffective psychosis patients showed substantial and widespread cognitive impairments. However, comparison of cognitive functioning between controls and psychosis groups throughout early (ages 20–40) and middle (ages 40–60) adulthood also revealed age-associated group differences. During early adulthood, the nonaffective psychosis group showed increasing impairments with age on measures of general cognitive ability and executive function, while the affective psychosis group showed increasing impairment on a measure of language ability. Impairments on other cognitive measures remained mostly stable, although decreasing impairments on measures of processing speed, memory and working memory were also observed.
These findings suggest similarities, but also differences in the profile of cognitive dysfunction in adults with affective and nonaffective psychotic disorders. Both affective and nonaffective patients showed substantial and relatively stable impairments across adulthood. The nonaffective group also showed increasing impairments with age in general and executive functions, and the affective group showed an increasing impairment in verbal functions, possibly suggesting different underlying etiopathogenic mechanisms.
As referrals to specialist palliative care (PC) grow in volume and diversity, an evidence-based triage method is needed to enable services to manage waiting lists in a transparent, efficient, and equitable manner. Discrete choice experiments (DCEs) have not to date been used among PC clinicians, but may serve as a rigorous and efficient method to explore and inform the complex decision-making involved in PC triage. This article presents the protocol for a novel application of an international DCE as part of a mixed-method research program, ultimately aiming to develop a clinical decision-making tool for PC triage.
Five stages of protocol development were undertaken: (1) identification of attributes of interest; (2) creation and (3) execution of a pilot DCE; and (4) refinement and (5) planned execution of the final DCE.
Six attributes of interest to PC triage were identified and included in a DCE that was piloted with 10 palliative care practitioners. The pilot was found to be feasible, with an acceptable cognitive burden, but refinements were made, including the creation of an additional attribute to allow independent analysis of concepts involved. Strategies for recruitment, data collection, analysis, and modeling were confirmed for the final planned DCE.
Significance of results
This DCE protocol serves as an example of how sophisticated DCE methodology can be applied to health services research in PC. Discussion of the key elements that improved the utility, integrity, and feasibility of the DCE provides valuable insights.
Different diagnostic interviews are used as reference standards for major depression classification in research. Semi-structured interviews involve clinical judgement, whereas fully structured interviews are completely scripted. The Mini International Neuropsychiatric Interview (MINI), a brief fully structured interview, is also sometimes used. It is not known whether interview method is associated with probability of major depression classification.
To evaluate the association between interview method and odds of major depression classification, controlling for depressive symptom scores and participant characteristics.
Data collected for an individual participant data meta-analysis of Patient Health Questionnaire-9 (PHQ-9) diagnostic accuracy were analysed and binomial generalised linear mixed models were fit.
A total of 17 158 participants (2287 with major depression) from 57 primary studies were analysed. Among fully structured interviews, odds of major depression were higher for the MINI compared with the Composite International Diagnostic Interview (CIDI) (odds ratio (OR) = 2.10; 95% CI = 1.15–3.87). Compared with semi-structured interviews, fully structured interviews (MINI excluded) were non-significantly more likely to classify participants with low-level depressive symptoms (PHQ-9 scores ≤6) as having major depression (OR = 3.13; 95% CI = 0.98–10.00), similarly likely for moderate-level symptoms (PHQ-9 scores 7–15) (OR = 0.96; 95% CI = 0.56–1.66) and significantly less likely for high-level symptoms (PHQ-9 scores ≥16) (OR = 0.50; 95% CI = 0.26–0.97).
The MINI may identify more people as depressed than the CIDI, and semi-structured and fully structured interviews may not be interchangeable methods, but these results should be replicated.
Declaration of interest
Drs Jetté and Patten declare that they received a grant, outside the submitted work, from the Hotchkiss Brain Institute, which was jointly funded by the Institute and Pfizer. Pfizer was the original sponsor of the development of the PHQ-9, which is now in the public domain. Dr Chan is a steering committee member or consultant of Astra Zeneca, Bayer, Lilly, MSD and Pfizer. She has received sponsorships and honorarium for giving lectures and providing consultancy and her affiliated institution has received research grants from these companies. Dr Hegerl declares that within the past 3 years, he was an advisory board member for Lundbeck, Servier and Otsuka Pharma; a consultant for Bayer Pharma; and a speaker for Medice Arzneimittel, Novartis, and Roche Pharma, all outside the submitted work. Dr Inagaki declares that he has received grants from Novartis Pharma, lecture fees from Pfizer, Mochida, Shionogi, Sumitomo Dainippon Pharma, Daiichi-Sankyo, Meiji Seika and Takeda, and royalties from Nippon Hyoron Sha, Nanzando, Seiwa Shoten, Igaku-shoin and Technomics, all outside of the submitted work. Dr Yamada reports personal fees from Meiji Seika Pharma Co., Ltd., MSD K.K., Asahi Kasei Pharma Corporation, Seishin Shobo, Seiwa Shoten Co., Ltd., Igaku-shoin Ltd., Chugai Igakusha and Sentan Igakusha, all outside the submitted work. All other authors declare no competing interests. No funder had any role in the design and conduct of the study; collection, management, analysis and interpretation of the data; preparation, review or approval of the manuscript; and decision to submit the manuscript for publication.
Stress-related pathophysiology drives comorbid trajectories that elude precise prediction. Allostatic load algorithms that quantify biological “wear and tear” represent a comprehensive approach to detecting multisystemic disease processes of the mind and body. However, the multiple morbidities directly or indirectly related to stress physiology remain enigmatic. Our aim in this article is to propose that biological comorbidities represent discrete pathophysiological processes captured by measuring allostatic load. This has applications in research and clinical settings to predict physical and psychiatric comorbidities alike. The reader will be introduced to the concepts of allostasis, allostatic states, allostatic load, and allostatic overload as they relate to stress-related diseases, and to the proposed prediction of biological comorbidities that extends to the understanding of psychopathologies. In our transdisciplinary discussion, we will integrate perspectives related to (a) mitochondrial biology as a key player in the allostatic load time course toward diseases that “get under the skin and skull”; (b) epigenetics related to child maltreatment and biological embedding that shapes stress perception throughout lifespan development; and (c) evolutionary drivers of distinct personality profiles and biobehavioral patterns that are linked to dimensions of psychopathology.
Catheter-associated urinary tract infection (CAUTI) is considered a reasonably preventable event in the hospital setting, and it has been included in the US Department of Health and Human Services National Action Plan to Prevent Healthcare-Associated Infections. While multiple definitions for measuring CAUTI exist, each has important limitations, and understanding these limitations is important to both clinical practice and policy decisions. The National Healthcare Safety Network (NHSN) surveillance definition, the most frequently used outcome measure for CAUTI prevention efforts, has limited clinical correlation and does not necessarily reflect noninfectious harms related to the catheter. We advocate use of the device utilization ratio (DUR) as an additional performance measure for potential urinary catheter harm. The DUR is patient-centered and objective and is currently captured as part of NHSN reporting. Furthermore, these data are readily obtainable from electronic medical records. The DUR also provides a more direct reflection of improvement efforts focused on reducing inappropriate urinary catheter use.
Infect. Control Hosp. Epidemiol. 2016;37(3):327–333
The number of pediatric antimicrobial stewardship programs (ASPs) is increasing and program evaluation is a key component to improve efficiency and enhance stewardship strategies.
To determine the antimicrobials and diagnoses most strongly associated with a recommendation provided by a well-established pediatric ASP.
DESIGN AND SETTING
Retrospective cohort study from March 3, 2008, to March 2, 2013, of all ASP reviews performed at a free-standing pediatric hospital.
ASP recommendations were classified as follows: stop therapy, modify therapy, optimize therapy, or consult infectious diseases. A multinomial distribution model to determine the probability of each ASP recommendation category was performed on the basis of the specific antimicrobial agent or disease category. A logistic model was used to determine the odds of recommendation disagreement by the prescribing clinician.
The ASP made 2,317 recommendations: stop therapy (45%), modify therapy (26%), optimize therapy (19%), or consult infectious diseases (10%). Third-generation cephalosporins (0.20) were the antimicrobials with the highest predictive probability of an ASP recommendation whereas linezolid (0.05) had the lowest probability. Community-acquired pneumonia (0.26) was the diagnosis with the highest predictive probability of an ASP recommendation whereas fever/neutropenia (0.04) had the lowest probability. Disagreement with ASP recommendations by the prescribing clinician occurred 22% of the time, most commonly involving community-acquired pneumonia and ear/nose/throat infections.
Evaluation of our pediatric ASP identified specific clinical diagnoses and antimicrobials associated with an increased likelihood of an ASP recommendation. Focused interventions targeting these high-yield areas may result in increased program efficiency and efficacy.
To better understand tuberculosis (TB) infection control (IC) in healthcare facilities (HCFs) in Georgia.
A cross-sectional evaluation of healthcare worker (HCW) knowledge, beliefs and behaviors toward TB IC measures including latent TB infection (LTBI) screening and treatment of HCWs.
Georgia, a high-burden multidrug-resistant TB (MDR-TB) country.
HCWs from the National TB Program and affiliated HCFs.
An anonymous self-administered 55-question survey developed based on the Health Belief Model (HBM) conceptual framework.
In total, 240 HCWs (48% physicians; 39% nurses) completed the survey. The overall average TB knowledge score was 61%. Only 60% of HCWs reported frequent use of respirators when in contact with TB patients. Only 52% of HCWs were willing to undergo annual LTBI screening; 48% were willing to undergo LTBI treatment. In multivariate analysis, HCWs who worried about acquiring MDR-TB infection (adjusted odds ratio [aOR], 1.7; 95% confidence interval [CI], 1.28–2.25), who thought screening contacts of TB cases is important (aOR, 3.4; 95% CI, 1.35–8.65), and who were physicians (aOR, 1.7; 95% CI, 1.08–2.60) were more likely to accept annual LTBI screening. With regard to LTBI treatment, HCWs who worked in an outpatient TB facility (aOR, 0.3; 95% CI, 0.11–0.58) or perceived a high personal risk of TB reinfection (aOR, 0.5; 95% CI, 0.37–0.64) were less likely to accept LTBI treatment.
The concern about TB reinfection is a major barrier to HCW acceptance of LTBI treatment. TB IC measures must be strengthened in parallel with or prior to the introduction of LTBI screening and treatment of HCWs.
The addition of a CdMgTe (CMT) layer at the back of a CdTe solar cell should improve its performance by reflecting both photoelectrons and forward-current electrons away from the rear surface. Higher collection of photoelectrons will increase the cell’s current, and reduction of forward current will increase its voltage. To achieve electron reflection, conformal CMT layers were deposited at the back of CdTe cells, and a variety of measurements including performance curves, transmission electron microscopy, x-ray photoelectron spectroscopy, and energy-dispersive x-ray spectroscopy were performed. Oxidation of magnesium in the CMT layer was addressed by adding a CdTe capping layer. MgCl2 passivation was substituted for CdCl2 in some cases, but little difference was seen.
We present the results of an approximately 6 100 deg2 104–196 MHz radio sky survey performed with the Murchison Widefield Array during instrument commissioning between 2012 September and 2012 December: the MWACS. The data were taken as meridian drift scans with two different 32-antenna sub-arrays that were available during the commissioning period. The survey covers approximately 20.5 h < RA < 8.5 h, −58° < Dec < −14° over three frequency bands centred on 119, 150 and 180 MHz, with image resolutions of 6–3 arcmin. The catalogue has 3 arcmin angular resolution and a typical noise level of 40 mJy beam−1, with reduced sensitivity near the field boundaries and bright sources. We describe the data reduction strategy, based upon mosaicked snapshots, flux density calibration, and source-finding method. We present a catalogue of flux density and spectral index measurements for 14 110 sources, extracted from the mosaic, 1 247 of which are sub-components of complexes of sources.
To examine regional variation in the use and appropriateness of indwelling urinary catheters and catheter-associated urinary tract infection (CAUTI).
Design and Setting.
US acute care hospitals.
Hospitals were divided into 4 regions according to the US Census Bureau. Baseline data on urinary catheter use, catheter appropriateness, and CAUTI were collected from participating units. The catheter utilization ratio was calculated by dividing the number of catheter-days by the number of patient-days. We used the National Healthcare Safety Network (NHSN) definition (number of CAUTIs per 1,000 catheter-days) and a population-based definition (number of CAUTIs per 10,000 patient-days) to calculate CAUTI rates. Logistic and Poisson regression models were used to assess regional differences.
Data on 434,207 catheter-days over 1,400,770 patient-days were collected from 1,101 units within 726 hospitals across 34 states. Overall catheter utilization was 31%. Catheter utilization was significantly higher in non-intensive care units (ICUs) in the West compared with non-ICUs in all other regions. Approximately 30%–40% of catheters in non-ICUs were placed without an appropriate indication. Catheter appropriateness was the lowest in the West. A total of 1,099 CAUTIs were observed (NHSN rate of 2.5 per 1,000 catheter-days and a population-based rate of 7.8 per 10,000 patient-days). The population-based CAUTI rate was highest in the West (8.9 CAUTIs per 10,000 patient-days) and was significantly higher compared with the Midwest, even after adjusting for hospital characteristics (P = .02).
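The rate definitions in the Methods reduce to simple arithmetic. A minimal sketch using the counts reported above (function names are illustrative, not from the study):

```python
def catheter_utilization_ratio(catheter_days, patient_days):
    # Proportion of patient-days with an indwelling urinary catheter in place
    return catheter_days / patient_days

def nhsn_cauti_rate(cautis, catheter_days):
    # NHSN definition: CAUTIs per 1,000 catheter-days
    return cautis / catheter_days * 1_000

def population_cauti_rate(cautis, patient_days):
    # Population-based definition: CAUTIs per 10,000 patient-days
    return cautis / patient_days * 10_000

# Counts reported in this abstract
catheter_days, patient_days, cautis = 434_207, 1_400_770, 1_099

print(round(catheter_utilization_ratio(catheter_days, patient_days), 2))  # 0.31
print(round(nhsn_cauti_rate(cautis, catheter_days), 1))                   # 2.5
print(round(population_cauti_rate(cautis, patient_days), 1))              # 7.8
```

Note how the two CAUTI definitions use different denominators: the NHSN rate is conditional on catheter exposure, while the population-based rate also rewards reducing catheter use itself.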
Regional differences in catheter use, appropriateness, and CAUTI rates were detected across US hospitals.