Diagnostic stewardship of urine cultures from patients with indwelling urinary catheters may improve the diagnostic specificity and clinical relevance of the test, but the risk of patient harm is uncertain.
Methods:
We retrospectively evaluated the impact of a computerized clinical decision support tool to promote institutional appropriateness criteria (neutropenia, kidney transplant, recent urologic surgery, or radiologic evidence of urinary tract obstruction) for urine cultures from patients with an indwelling urinary catheter. The primary outcome was the change in catheter-associated urinary tract infection (CAUTI) rate from the baseline period (34 mo) to the intervention period (30 mo, including a 2-mo wash-in). We analyzed patient-level outcomes and adverse events.
Results:
The adjusted CAUTI rate decreased from 1.203 to 0.75 per 1,000 catheter-days (P = 0.52). Of 598 patients who triggered decision support, 284 (47.5%) had urine cultures collected in agreement with institutional criteria and 314 (52.5%) had urine cultures averted. Of the 314 patients whose urine cultures were averted, 2 had a subsequent urine culture within 7 days that resulted in a change in antimicrobial therapy and 2 had a diagnosis of bacteremia with a suspected urinary source, but there were no delays in effective treatment.
Conclusion:
A diagnostic stewardship intervention was associated with an approximately 50% decrease in urine culture testing for inpatients with a urinary catheter. However, the overall CAUTI rate did not decrease significantly. Adverse outcomes were rare and minor among patients who had a urine culture averted. Diagnostic stewardship may be safe and effective as part of a multimodal program to reduce unnecessary urine cultures among patients with indwelling urinary catheters.
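As a rough illustration of the rate metric used above, the sketch below computes a device-associated infection rate per 1,000 catheter-days; the counts are hypothetical, not the study's data.

```python
# Illustrative only: hypothetical counts, not the study's data.
def cauti_rate_per_1000(cauti_count: int, catheter_days: int) -> float:
    """Device-associated infection rate per 1,000 catheter-days."""
    return 1000 * cauti_count / catheter_days

# Example: 18 CAUTIs over 24,000 catheter-days -> 0.75 per 1,000 catheter-days
print(cauti_rate_per_1000(18, 24_000))  # 0.75
```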
Background: Indiscriminate urine culturing of patients with indwelling urinary catheters may lead to overdiagnosis of urinary tract infections, resulting in unnecessary antibiotic treatment and inaccurate reporting of catheter-associated urinary tract infections (CAUTIs) as a hospital quality metric. We evaluated the impact of a computerized diagnostic stewardship intervention to improve urine culture testing among patients with indwelling urinary catheters. Methods: We performed a single-center retrospective observational study at Rush University Medical Center from April 2018 to July 2023. In February 2021, we implemented a computerized clinical decision support tool to promote adherence to our internal urine culture guidelines for patients with indwelling urinary catheters. Providers were required to select one guideline criterion: 1) neutropenia, 2) kidney transplant, 3) recent urologic procedure, or 4) urinary tract obstruction; if none of the criteria were met, an infectious diseases consultation was required for approval. We compared the facility-wide CAUTI rate per 10,000 catheter-days and standardized infection ratio (SIR) during the baseline and intervention periods using ecologic models, controlling for time and for monthly COVID-19 hospitalizations. In the intervention period, we evaluated how providers responded to the intervention. Potential harm was defined as collection of a urine culture within 7 days of the intervention that resulted in a change in clinical management. Results: In unadjusted models, the CAUTI rate decreased from 12.5 to 7.6 per 10,000 catheter-days (p=0.04) and the SIR decreased from 0.77 to 0.49 (p=0.09) during the baseline vs intervention periods. In adjusted models, the CAUTI rate decreased from 6.9 to 5.5 per 10,000 catheter-days (p=0.60) (Figure 1) and the SIR decreased from 0.41 to 0.35 (p=0.65) during the baseline vs intervention periods. The urinary catheter standardized utilization ratio (SUR) did not change (p=0.36). There were 598 patient encounters with ≥1 intervention. Selecting the first intervention for each encounter, 284 (47.5%) urine cultures met our guidelines for testing and 314 (52.5%) were averted (Figure 2). Of these, only 3 (<1%) had a urine culture collected in the subsequent 7 days that resulted in a change in clinical management. Conclusion: We observed a trend of decreasing CAUTIs over time, but the effect of our diagnostic stewardship intervention was difficult to assess due to healthcare disruption caused by COVID-19. Adverse outcomes were rare among patients who had a urine culture averted. A computerized clinical decision support tool may be safe and effective as part of a multimodal program to reduce unnecessary urine cultures in patients with indwelling urinary catheters.
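For readers unfamiliar with the NHSN-style summary measures cited above (SIR and SUR), the following minimal sketch shows how they are formed as observed-to-predicted ratios; all numbers are hypothetical.

```python
# Minimal sketch of NHSN-style summary measures; numbers are hypothetical.
def standardized_infection_ratio(observed: int, predicted: float) -> float:
    """SIR = observed infections / predicted infections (risk-adjusted baseline)."""
    return observed / predicted

def standardized_utilization_ratio(observed_days: int, predicted_days: float) -> float:
    """SUR = observed device-days / predicted device-days."""
    return observed_days / predicted_days

print(standardized_infection_ratio(10, 20.4))  # ~0.49; an SIR below 1 means fewer infections than predicted
```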
The coronavirus disease 2019 (COVID-19) pandemic has resulted in shortages of personal protective equipment (PPE), underscoring the urgent need for simple, efficient, and inexpensive methods to decontaminate masks and respirators exposed to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). We hypothesized that methylene blue (MB) photochemical treatment, which has various clinical applications, could decontaminate PPE contaminated with coronavirus.
Design:
The 2 arms of the study included (1) PPE inoculation with coronaviruses followed by MB with light (MBL) decontamination treatment and (2) PPE treatment with MBL for 5 cycles of decontamination to determine maintenance of PPE performance.
Methods:
MBL treatment was used to inactivate coronaviruses on 3 N95 filtering facepiece respirator (FFR) models and 2 medical mask models. We inoculated FFR and medical mask materials with 3 coronaviruses, including SARS-CoV-2, treated them with 10 µM MB, and exposed them to 50,000 lux of white light or 12,500 lux of red light for 30 minutes. In parallel, integrity was assessed after 5 cycles of decontamination using multiple US and international test methods, and the process was compared with the FDA-authorized vaporized hydrogen peroxide plus ozone (VHP+O3) decontamination method.
Results:
Overall, MBL robustly and consistently inactivated all 3 coronaviruses with 99.8% to >99.9% virus inactivation across all FFRs and medical masks tested. FFR and medical mask integrity was maintained after 5 cycles of MBL treatment, whereas 1 FFR model failed after 5 cycles of VHP+O3.
Conclusions:
MBL treatment decontaminated respirators and masks by inactivating 3 tested coronaviruses without compromising integrity through 5 cycles of decontamination. MBL decontamination is effective, is low cost, and does not require specialized equipment, making it applicable in low- to high-resource settings.
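Percent inactivation figures such as "99.8% to >99.9%" map directly onto log10 reduction values, the unit commonly used in decontamination work. A minimal sketch of that conversion:

```python
import math

# Hedged illustration: converting percent inactivation to log10 reduction.
def log10_reduction(percent_inactivation: float) -> float:
    """E.g., 99.9% inactivation -> 3.0 log10 reduction."""
    surviving_fraction = 1 - percent_inactivation / 100
    return -math.log10(surviving_fraction)

print(log10_reduction(99.8))  # ~2.7
print(log10_reduction(99.9))  # 3.0
```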
Background: In an effort to reduce inappropriate testing for hospital-onset Clostridioides difficile infection (HO-CDI), we sequentially implemented 2 strategies: an electronic health record-based clinical decision support tool that alerted ordering physicians about potentially inappropriate testing without a hard stop (intervention period 1), replaced by mandatory infectious diseases (ID) attending physician approval for any HO-CDI test order (intervention period 2). We analyzed appropriate HO-CDI testing rates in both intervention periods. Methods: We performed a retrospective study of patients 18 years or older who had an HO-CDI test (performed after hospital day 3) during 3 periods: baseline (no intervention, September 2014–February 2015), intervention 1 (clinical decision support tool only, April 2015–September 2015), and intervention 2 (ID approval only, December 2017–September 2018). From each of the 3 periods, we randomly selected 150 patients who received HO-CDI testing (450 patients total). We restricted the study to the general medicine, bone marrow transplant, medical intensive care, and neurosurgical intensive care units. We assessed each HO-CDI test for appropriateness (see Table 1 for criteria), and we compared rates of appropriateness using the χ2 test or Kruskal–Wallis test, where appropriate. Results: In our cohort of 450 patients, the median age was 61 years, and the median hospital length of stay was 20 days. The median hospital day on which HO-CDI testing was performed differed among the 3 groups: 12 days at baseline, 10 days during intervention 1, and 8.5 days during intervention 2 (P < .001). Appropriateness of HO-CDI testing increased from baseline with both interventions, but mandatory ID approval was associated with the highest rate of testing appropriateness (Fig. 1). Reasons for inappropriate ordering did not differ among the periods, with <3 documented stools being the most common criterion for inappropriateness. During intervention 2, 8 (24%) of the 33 inappropriate tests occurred without recorded approval from an ID attending physician. HO-CDI test positivity rates during the 3 periods were 12%, 11%, and 21%, respectively (P = .03). Conclusions: Both the clinical decision support tool and mandatory ID attending physician approval improved the appropriateness of HO-CDI testing, with mandatory ID attending physician approval leading to the highest appropriateness rate. Even with mandatory ID attending physician approval, some tests continued to be ordered inappropriately per retrospective chart review; we suspect this is partly explained by underdocumentation of criteria such as stool frequency. In healthcare settings where the appropriateness of HO-CDI testing is not optimal, mandatory ID attending physician approval may provide an option beyond clinical decision support tools.
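The positivity comparison described above (12%, 11%, and 21% across 150 tests per period) is the kind of analysis a χ2 test of a contingency table handles. A hedged sketch with illustrative counts consistent with those percentages:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x3 table consistent with the reported positivity rates
# (roughly 12%, 11%, 21% of 150 tests per period); counts are illustrative only.
positive = [18, 17, 31]
negative = [150 - p for p in positive]
chi2, p_value, dof, expected = chi2_contingency([positive, negative])
print(round(chi2, 2), round(p_value, 3))
```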
Item 9 of the Patient Health Questionnaire-9 (PHQ-9) asks about thoughts of death and self-harm, but not suicidality. Although it is sometimes used to assess suicide risk, most positive responses are not associated with suicidality. The PHQ-8, which omits Item 9, is thus increasingly used in research. We assessed the equivalency of PHQ-8 and PHQ-9 total scores and their diagnostic accuracy for detecting major depression.
Methods
We conducted an individual patient data meta-analysis. We fit bivariate random-effects models to assess diagnostic accuracy.
Results
A total of 16 742 participants (2097 major depression cases) from 54 studies were included. The correlation between PHQ-8 and PHQ-9 scores was 0.996 (95% confidence interval 0.996 to 0.996). The standard cutoff score of 10 for the PHQ-9 maximized sensitivity + specificity for the PHQ-8 among studies that used a semi-structured diagnostic interview reference standard (N = 27). At cutoff 10, the PHQ-8 was less sensitive by 0.02 (−0.06 to 0.00) and more specific by 0.01 (0.00 to 0.01) among those studies (N = 27), with similar results for studies that used other types of interviews (N = 27). For all 54 primary studies combined, across all cutoffs, the PHQ-8 was less sensitive than the PHQ-9 by 0.00 to 0.05 (0.03 at cutoff 10), and specificity was within 0.01 for all cutoffs (0.00 to 0.01).
Conclusions
PHQ-8 and PHQ-9 total scores were similar. Sensitivity may be minimally reduced with the PHQ-8, but specificity is similar.
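As an illustration of the cutoff-based accuracy analysis described above, the sketch below computes sensitivity and specificity of a score cutoff against a reference standard; the scores and diagnoses are synthetic, not the meta-analysis IPD.

```python
import numpy as np

# Hedged sketch: sensitivity and specificity of a questionnaire cutoff.
def sens_spec(scores: np.ndarray, has_mdd: np.ndarray, cutoff: int = 10):
    screen_positive = scores >= cutoff
    sensitivity = (screen_positive & has_mdd).sum() / has_mdd.sum()
    specificity = (~screen_positive & ~has_mdd).sum() / (~has_mdd).sum()
    return sensitivity, specificity

rng = np.random.default_rng(0)
scores = rng.integers(0, 25, size=1000)
has_mdd = scores + rng.normal(0, 4, 1000) > 14   # synthetic "reference standard"
print(sens_spec(scores, has_mdd, cutoff=10))
```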
OBJECTIVES/SPECIFIC AIMS: (1) Assess whether the total duration of EEG suppression during a protocolized exposure to general anesthesia predicts cognitive performance in multiple cognitive domains immediately following emergence from anesthesia. (2) Assess whether the total duration of EEG suppression in the same individuals predicts the rate of cognitive recovery in a three-hour period following emergence from anesthesia. METHODS/STUDY POPULATION: This was a non-prespecified substudy of NCT01911195, a multicenter investigation at the University of Michigan, University of Pennsylvania, and Washington University in St. Louis. Thirty healthy volunteers aged 20–40 years were recruited to receive general anesthesia. Participants in the anesthesia arm were anesthetized for three hours at isoflurane levels compatible with surgery (1.3 MAC). Multichannel sensor nets were used for EEG acquisition during the anesthetic exposure. EEG suppression was detected through automated voltage-thresholded classification of 2-second signal epochs, with concordance assessed across sensors. Following return of responsiveness to verbal commands, participants completed up to three hours of serial cognitive tests assessing executive function, reaction time, cognitive throughput, and working memory. Non-linear mixed-effects models will be used to estimate the initial cognitive deficit and the rate of cognitive recovery following anesthetic exposure; these measures of cognitive function will be assessed in relation to the total duration of suppression during anesthesia. RESULTS/ANTICIPATED RESULTS: Participants displayed wide variability in the total amount of suppression during anesthesia, with a median of 31.2 minutes and a range from 0 to 115.2 minutes. Initial analyses suggest that greater duration of burst suppression had a weak relationship with participants’ initial cognitive deficits upon return of responsiveness from anesthesia. Model generation for the rate of recovery following anesthetic exposure is pending, but we anticipate this will also have a weak relationship with burst suppression. DISCUSSION/SIGNIFICANCE OF IMPACT: In healthy adults receiving a standardized exposure to anesthesia without surgery, burst suppression appears to be a poor predictor of post-anesthesia cognitive task performance. This suggests that burst suppression may have limited utility as a predictive marker of post-operative cognitive functioning, particularly in young adults without significant illness.
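The suppression measure described in the methods (voltage-thresholded classification of 2-second epochs with concordance across sensors) can be sketched as follows; the threshold value and majority-channel rule are assumptions, not the study's published parameters.

```python
import numpy as np

# Hedged sketch of voltage-thresholded suppression detection on 2-s epochs.
def suppression_minutes(eeg: np.ndarray, fs: int, thresh_uv: float = 5.0) -> float:
    """eeg: (n_channels, n_samples). An epoch counts as 'suppressed' when the
    peak-to-peak amplitude stays below thresh_uv on a majority of channels."""
    epoch = 2 * fs                                       # 2-second epochs
    n_epochs = eeg.shape[1] // epoch
    suppressed = 0
    for i in range(n_epochs):
        seg = eeg[:, i * epoch:(i + 1) * epoch]
        ptp = seg.max(axis=1) - seg.min(axis=1)          # per-channel peak-to-peak
        if (ptp < thresh_uv).mean() > 0.5:               # concordance across sensors
            suppressed += 1
    return suppressed * 2 / 60                           # epochs -> minutes
```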
The aim of the present study was to evaluate the association of depression and SSRI monotherapy with frailty, both at baseline and prospectively, in older adults.
Design:
Prospective cohort study, 12-month follow-up.
Setting:
Geriatric outpatient clinic in São Paulo, Brazil.
Participants:
A total of 811 adults aged 60 years or older.
Measurements:
Depression was diagnosed as follows: (1) a diagnosis of major depressive disorder (MDD) according to DSM-5; or (2) an incomplete diagnosis of MDD, referred to as minor or subsyndromal depression, plus a 15-item Geriatric Depression Scale score ≥ 6 points and social or functional impairment secondary to depressive symptoms, as observed by relatives. Frailty was evaluated with the FRAIL questionnaire, a self-rated scale. Trained investigators blinded to the baseline assessment conducted telephone calls to evaluate frailty at 12-month follow-up. The association of depression and SSRI use with frailty was estimated through generalized estimating equations adjusted for age, gender, total number of drugs, and number of comorbidities.
Results:
Depression with SSRI use was associated with frailty at baseline (OR 2.82, 95% CI = 1.69–4.69) and after 12 months (OR 2.75, 95% CI = 1.84–4.11). Additionally, depression with SSRI monotherapy was also associated with FRAIL subdomains Physical Performance (OR 1.99, 95% CI = 1.29–3.07) and Health Status (OR 4.64, 95% CI = 2.11–10.21). SSRI use, without significant depressive symptoms, was associated with subdomain Health Status (OR 1.52, 95% CI = 1.04–2.23).
Conclusion:
Depression with SSRI use appears to be associated with frailty, and this association cannot be explained by antidepressant use alone.
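As a sketch of the modeling approach described above, the following fits a binary-outcome generalized estimating equation with an exchangeable correlation structure for repeated measures per participant, using statsmodels; the data and variable names are synthetic stand-ins, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per participant-visit (baseline + 12 months).
rng = np.random.default_rng(1)
n = 811
base = pd.DataFrame({
    "participant_id": np.arange(n),
    "depression_ssri": rng.integers(0, 2, n),
    "age": rng.integers(60, 95, n),
    "female": rng.integers(0, 2, n),
    "n_drugs": rng.integers(0, 12, n),
    "n_comorbidities": rng.integers(0, 8, n),
})
df = pd.concat([base, base], ignore_index=True)          # two visits per person
logit = -3 + 1.0 * df.depression_ssri + 0.03 * (df.age - 60)
df["frail"] = (rng.random(len(df)) < 1 / (1 + np.exp(-logit))).astype(int)

# GEE: binary frailty outcome, repeated measures clustered on participant.
model = smf.gee(
    "frail ~ depression_ssri + age + female + n_drugs + n_comorbidities",
    groups="participant_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```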
Different diagnostic interviews are used as reference standards for major depression classification in research. Semi-structured interviews involve clinical judgement, whereas fully structured interviews are completely scripted. The Mini International Neuropsychiatric Interview (MINI), a brief fully structured interview, is also sometimes used. It is not known whether interview method is associated with probability of major depression classification.
Aims
To evaluate the association between interview method and odds of major depression classification, controlling for depressive symptom scores and participant characteristics.
Method
Data collected for an individual participant data meta-analysis of Patient Health Questionnaire-9 (PHQ-9) diagnostic accuracy were analysed and binomial generalised linear mixed models were fit.
Results
A total of 17 158 participants (2287 with major depression) from 57 primary studies were analysed. Among fully structured interviews, odds of major depression were higher for the MINI compared with the Composite International Diagnostic Interview (CIDI) (odds ratio (OR) = 2.10; 95% CI = 1.15–3.87). Compared with semi-structured interviews, fully structured interviews (MINI excluded) were non-significantly more likely to classify participants with low-level depressive symptoms (PHQ-9 scores ≤6) as having major depression (OR = 3.13; 95% CI = 0.98–10.00), similarly likely for moderate-level symptoms (PHQ-9 scores 7–15) (OR = 0.96; 95% CI = 0.56–1.66) and significantly less likely for high-level symptoms (PHQ-9 scores ≥16) (OR = 0.50; 95% CI = 0.26–0.97).
Conclusions
The MINI may identify more people as depressed than the CIDI, and semi-structured and fully structured interviews may not be interchangeable methods, but these results should be replicated.
Declaration of interest
Drs Jetté and Patten declare that they received a grant, outside the submitted work, from the Hotchkiss Brain Institute, which was jointly funded by the Institute and Pfizer. Pfizer was the original sponsor of the development of the PHQ-9, which is now in the public domain. Dr Chan is a steering committee member or consultant of Astra Zeneca, Bayer, Lilly, MSD and Pfizer. She has received sponsorships and honorarium for giving lectures and providing consultancy and her affiliated institution has received research grants from these companies. Dr Hegerl declares that within the past 3 years, he was an advisory board member for Lundbeck, Servier and Otsuka Pharma; a consultant for Bayer Pharma; and a speaker for Medice Arzneimittel, Novartis, and Roche Pharma, all outside the submitted work. Dr Inagaki declares that he has received grants from Novartis Pharma, lecture fees from Pfizer, Mochida, Shionogi, Sumitomo Dainippon Pharma, Daiichi-Sankyo, Meiji Seika and Takeda, and royalties from Nippon Hyoron Sha, Nanzando, Seiwa Shoten, Igaku-shoin and Technomics, all outside of the submitted work. Dr Yamada reports personal fees from Meiji Seika Pharma Co., Ltd., MSD K.K., Asahi Kasei Pharma Corporation, Seishin Shobo, Seiwa Shoten Co., Ltd., Igaku-shoin Ltd., Chugai Igakusha and Sentan Igakusha, all outside the submitted work. All other authors declare no competing interests. No funder had any role in the design and conduct of the study; collection, management, analysis and interpretation of the data; preparation, review or approval of the manuscript; and decision to submit the manuscript for publication.
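For readers interested in the analytic approach of the study above, the sketch below fits a binomial generalized linear mixed model with a random intercept per primary study, using statsmodels' variational Bayes mixed GLM; the data and variable names are synthetic stand-ins for the individual participant data, and the fitting routine is one plausible choice, not necessarily the authors' software.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Synthetic stand-in data: participants nested in primary studies.
rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "study": rng.integers(0, 57, n).astype(str),
    "fully_structured": rng.integers(0, 2, n),   # interview method indicator
    "phq9_score": rng.integers(0, 27, n),
})
logit = -4 + 0.3 * df.fully_structured + 0.2 * df.phq9_score
df["mdd"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = BinomialBayesMixedGLM.from_formula(
    "mdd ~ fully_structured + phq9_score",   # fixed effects
    {"study": "0 + C(study)"},               # random intercept by study
    df,
)
result = model.fit_vb()                      # variational Bayes fit
print(result.summary())
```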
Hypertrophic cardiomyopathy has a range of clinical severity in children. Treatment options are limited, mainly on account of small patient size. Disopyramide is a sodium channel blocker with negative inotropic properties that effectively reduces left ventricular outflow tract gradients in adults with hypertrophic cardiomyopathy, but its efficacy in children is uncertain. A retrospective chart review of patients ≤21 years of age with hypertrophic cardiomyopathy treated with disopyramide at our institution was performed. Left ventricular outflow tract Doppler gradients before and after disopyramide initiation were compared as the primary outcome measure. Nine patients received disopyramide, with a median age of 5.6 years (range 6 days–12.9 years). The median left ventricular outflow tract Doppler gradient before initiation of disopyramide was 81 mmHg (range 30–132 mmHg); eight patients had post-initiation echocardiograms, in which the median lowest recorded Doppler gradient was 43 mmHg (range 15–100 mmHg), for a median reduction of 58.2% (p=0.002). With a median follow-up of 2.5 years, eight of nine patients were still alive, although disopyramide had been discontinued in six of the nine patients. Reasons for discontinuation included septal myectomy (four patients), heart transplantation (one patient), and side effects (one patient). Disopyramide was effective for the relief of left ventricular outflow tract obstruction in children with hypertrophic cardiomyopathy, although longer-term data suggest that its efficacy is not sustained. In general, it was well tolerated. Further study in larger patient populations is warranted.
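A note on the arithmetic above: the median per-patient reduction (58.2%) differs from the reduction implied by comparing the two medians (81 to 43 mmHg, about 47%), because the median of ratios is not the ratio of medians. A small illustration with hypothetical gradients:

```python
import statistics

# Hypothetical before/after gradients (mmHg), not the study's data.
before = [30, 60, 81, 95, 110, 120, 132, 100]
after  = [15, 20, 43, 30, 55, 100, 40, 60]

per_patient = [100 * (b - a) / b for b, a in zip(before, after)]
print(statistics.median(per_patient))                       # median of per-patient reductions
print(100 * (1 - statistics.median(after) / statistics.median(before)))  # reduction of medians
```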
Laser-ion beam generation via the “break-out afterburner” (BOA) mechanism has been kinetically modeled for several deuteron-rich solid-density target foils. Modeling the transport of these beams in a beryllium converter shows as much as a fourfold increase in neutron yield over the present state of the art through the use of alternative target materials. Additionally, species-separation dynamics during the BOA can be exploited to control the hardness of the neutron spectra, which is of interest, for example, for enhancing penetrability of shielded material in active neutron interrogation settings.
Policy-makers and practitioners need to assess community resilience in disasters. Prior efforts conflated resilience with community functioning, combined resistance and recovery (the two components of resilience), and relied on a static model for what is inherently a dynamic process. We sought to develop linked conceptual and computational models of community functioning and resilience after a disaster.
Methods
We developed a system dynamics computational model that predicts community functioning after a disaster. The model output the time course of community functioning before, during, and after a disaster, which was used to calculate resistance, recovery, and resilience for all US counties.
Results
The conceptual model explicitly separated resilience from community functioning and identified all key components for each, which were translated into a system dynamics computational model with connections and feedbacks. The components were represented by publicly available measures at the county level. Baseline community functioning, resistance, recovery, and resilience evidenced a range of values and geographic clustering, consistent with hypotheses based on the disaster literature.
Conclusions
The work is transparent, motivates ongoing refinements, and identifies areas for improved measurements. After validation, such a model can be used to identify effective investments to enhance community resilience. (Disaster Med Public Health Preparedness. 2018;12:127–137)
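A minimal sketch of how resistance and recovery can be computed from a time course of community functioning, in the spirit of the conceptual separation described above; the definitions and the toy curve are illustrative, not the paper's model.

```python
import numpy as np

# Hedged sketch: resistance (how little functioning fell) and recovery time
# (how long until functioning returns near baseline) from a functioning curve.
def resilience_metrics(functioning: np.ndarray, baseline: float):
    drop = baseline - functioning.min()
    resistance = 1 - drop / baseline                  # 1.0 = no loss of functioning
    t_min = functioning.argmin()
    recovered = np.nonzero(functioning[t_min:] >= 0.95 * baseline)[0]
    recovery_time = recovered[0] if recovered.size else len(functioning) - t_min
    return resistance, recovery_time

f = np.array([1.0, 1.0, 0.55, 0.6, 0.7, 0.8, 0.9, 0.97, 1.0])  # toy disaster curve
print(resilience_metrics(f, baseline=1.0))
```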
Alnico alloys have long been used as strong permanent magnets because of their ferromagnetism and high coercivity. Understanding their structural details allows for better prediction of the resulting magnetic properties. However, quantitative three-dimensional characterization of the phase separation in these alloys is still challenged by the spatial quantification of nanoscale phases. Herein, we apply a dual tomography approach, where correlative scanning transmission electron microscopy (STEM) energy-dispersive X-ray spectroscopic (EDS) tomography and atom probe tomography (APT) are used to investigate the initial phase separation process of an alnico 8 alloy upon non-magnetic annealing. STEM-EDS tomography provides information on the morphology and volume fractions of Fe–Co-rich and Ni–Al-rich phases after spinodal decomposition in addition to quantitative information of the composition of a nanoscale volume. Subsequent analysis of a portion of the same specimen by APT offers quantitative chemical information of each phase at the sub-nanometer scale. Furthermore, APT reveals small, 2–4 nm Fe-rich α1 phases that are nucleated in the Ni-rich α2 matrix. From this information, we show that phase separation of the alnico 8 alloy consists of both spinodal decomposition and nucleation and growth processes. The complementary benefits and challenges associated with correlative STEM-EDS and APT are discussed.
Objectives: Blast explosions are the most frequent mechanism of traumatic brain injury (TBI) in recent wars, but little is known about their long-term effects. Methods: Functional connectivity (FC) was measured in 17 veterans an average of 5.46 years after their most serious blast-related TBI, and in 15 demographically similar veterans without TBI or blast exposure. Subcortical FC was measured in bilateral caudate, putamen, and globus pallidus. The default mode and fronto-parietal networks were also investigated. Results: In subcortical regions, between-groups t tests revealed altered FC from the right putamen and right globus pallidus. However, following analysis of covariance (ANCOVA) with age, depression (Center for Epidemiologic Studies Depression Scale), and posttraumatic stress disorder symptom (PTSD Checklist – Civilian version) measures, significant findings remained only for the right globus pallidus, with anticorrelation in bilateral temporal occipital fusiform cortex, occipital fusiform gyrus, lingual gyrus, and cerebellum, as well as the right occipital pole. No group differences were found for the default mode network. Although reduced FC was found in the fronto-parietal network in the TBI group, between-group differences were nonsignificant after the ANCOVA. Conclusions: FC of the globus pallidus is altered years after exposure to blast-related TBI. Future studies are necessary to explore the trajectory of changes in FC in subcortical regions after blast TBI, the effects of isolated versus repetitive blast-related TBI, and the relation to long-term outcomes in veterans. (JINS, 2016, 22, 631–642)
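As background on the FC measure used above, a seed-based functional connectivity sketch: correlate a seed region's time series with other regions and Fisher z-transform the result for group statistics. The data here are synthetic, and the seed label is only an example.

```python
import numpy as np

# Hedged sketch of seed-based functional connectivity on synthetic data.
rng = np.random.default_rng(3)
n_timepoints, n_regions = 200, 10
bold = rng.normal(size=(n_timepoints, n_regions))   # toy BOLD time series
seed = bold[:, 0]                                   # e.g., a right globus pallidus seed

r = np.array([np.corrcoef(seed, bold[:, j])[0, 1] for j in range(1, n_regions)])
fc_z = np.arctanh(r)                                # Fisher z for group statistics
print(np.round(fc_z, 3))
```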
We present the results of two 2.3 μm near-infrared (NIR) radial velocity (RV) surveys to detect exoplanets around 36 nearby and young M dwarfs. We use the CSHELL spectrograph (R ~ 46,000) at the NASA InfraRed Telescope Facility (IRTF), combined with an isotopic methane absorption gas cell for common optical path relative wavelength calibration. We have developed a sophisticated RV forward modeling code that accounts for fringing and other instrumental artifacts present in the spectra. With a spectral grasp of only 5 nm, we are able to reach long-term radial velocity dispersions of ~20–30 m s−1 on our survey targets.
Studies have shown the clock-drawing test (CDT) to be a useful screening test that differentiates between normal elderly populations and those diagnosed with dementia. However, the results of studies that have examined the utility of the CDT in differentiating Alzheimer's disease (AD) from other dementias have been conflicting. The purpose of this study was to explore the utility of the CDT in discriminating between patients with AD and other types of dementia.
Methods:
A review was conducted using MEDLINE, PsycINFO, and Embase. Search terms included clock drawing or CLOX and dementia or Parkinson's disease or AD or dementia with Lewy bodies (DLB) or vascular dementia (VaD).
Results:
Twenty studies were included. In most of the studies, no significant differences were found in quantitative CDT scores between AD and VaD, DLB, and Parkinson's disease dementia (PDD) patients. However, frontotemporal dementia (FTD) patients consistently scored higher on the CDT than AD patients. Qualitative analyses of errors differentiated AD from other types of dementia.
Conclusions:
Overall, the CDT score may be useful in distinguishing between AD and FTD patients, but shows limited value in differentiating between AD and VaD, DLB, and PDD. Qualitative analysis of the type of CDT errors may be a useful adjunct in the differential diagnosis of the types of dementias.
To assess the relative validity and reproducibility of the quantitative FFQ used in the Tzu Chi Health Study (TCHS).
Design
The reproducibility was evaluated by comparing the baseline FFQ with the 2-year follow-up FFQ. The validity was evaluated by comparing the baseline FFQ with 3 d dietary records and biomarkers (serum folate and vitamin B12). Median comparison, cross-classification and Spearman correlation with and without energy adjustment and deattenuation for day-to-day variation were assessed.
Setting
TCHS is a prospective cohort containing a high proportion of true vegetarians and part-time vegetarians (regularly consuming a vegetarian diet without completely avoiding meat).
Subjects
Subsets of 103, 78 and 1528 TCHS participants were included in the reproducibility, dietary record-validity and biomarker-validity studies, respectively.
Results
Correlations assessing the reproducibility for repeat administrations of the FFQ were in the range of 0·46–0·65 for macronutrients and 0·35–0·67 for micronutrients; the average same quartile agreement was 40%. The correlation between FFQ and biomarkers was 0·41 for both vitamin B12 and folate. Moderate to good correlations between the baseline FFQ and dietary records were found for energy, protein, carbohydrate, saturated and monounsaturated fat, fibre, vitamin C, vitamin A, K, Ca, Mg, P, Fe and Zn (average crude correlation: 0·47 (range: 0·37–0·66); average energy-adjusted correlation: 0·43 (range: 0·38–0·55); average energy-adjusted deattenuated correlation: 0·50 (range: 0·44–0·66)) with same quartile agreement rate of 39% (range: 35–45%), while misclassification to the extreme quartile was rare (average: 4% (range: 0–6%)).
Conclusions
The FFQ is a reliable and valid tool to rank relative intake of major nutrients for TCHS participants.
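As an illustration of the deattenuation step mentioned in the design, the sketch below applies a standard Rosner–Willett-style correction for day-to-day (within-person) variation in short-term dietary records; the variance ratio used is hypothetical.

```python
import math

# Hedged sketch: correct an observed FFQ-record correlation for day-to-day
# variation in the records. Numbers are illustrative, not the study's data.
def deattenuate(r_observed: float, lambda_ratio: float, n_days: int) -> float:
    """lambda_ratio: within-person / between-person variance of the record
    nutrient; n_days: number of record days averaged (here 3)."""
    return r_observed * math.sqrt(1 + lambda_ratio / n_days)

print(round(deattenuate(0.43, 1.5, 3), 2))  # e.g., 0.43 -> ~0.53
```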