Serial position scores on verbal memory tests are sensitive to early Alzheimer’s disease (AD)-related neuropathological changes that occur in the entorhinal cortex and hippocampus. The current study examines longitudinal change in serial position scores as markers of subtle cognitive decline in older adults who may be in preclinical or at-risk states for AD.
Methods:
This study uses longitudinal data from the Religious Orders Study and the Rush Memory and Aging Project. Participants (n = 141) were included if they did not have dementia at enrollment, completed follow-up assessments, and died and were classified as Braak stage I or II. Memory tests were used to calculate serial position (primacy, recency), total recall, and episodic memory composite scores. A neuropathological evaluation quantified AD, vascular, and Lewy body pathologies. Mixed effects models were used to examine change in memory scores. Neuropathologies and covariates (age, sex, education, APOE e4) were examined as moderators.
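To make the longitudinal modeling concrete, here is a minimal sketch of how a mixed effects model of change in memory scores with neuropathology moderators might be specified in Python with statsmodels; the column names (primacy, time, tangle_density, atherosclerosis, subject_id, and the covariates) and the file name are hypothetical, and the study's exact model specification is not given in the abstract.

```python
# Sketch: linear mixed effects model of change in primacy scores over time,
# with neuropathology measures and covariates as moderators of the time slope.
# Assumes a long-format DataFrame with one row per person-visit.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("memory_long.csv")  # hypothetical file name

model = smf.mixedlm(
    "primacy ~ time * tangle_density + time * atherosclerosis"
    " + age + sex + education + apoe_e4",
    data=df,
    groups=df["subject_id"],   # random effects grouped by participant
    re_formula="~time",        # random intercept and random slope for time
)
result = model.fit()
print(result.summary())        # the time coefficient estimates annual change
```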
Results:
Primacy scores declined (β = −.032, p < .001), whereas recency scores increased (β = .021, p = .012). No change was observed in standard memory measures. Greater neurofibrillary tangle density and atherosclerosis explained 10.4% of the variance in primacy decline. Neuropathologies were not associated with recency change.
Conclusions:
In older adults with hippocampal neuropathologies, primacy score decline may be a sensitive marker of early AD-related changes. Tangle density and atherosclerosis had additive effects on decline. Recency improvement may reflect a compensatory mechanism. Monitoring for changes in serial position scores may be a useful in vivo method of tracking incipient AD.
To test the hypothesis that a higher level of purpose in life is associated with a lower likelihood of dementia and mild cognitive impairment (MCI) in older Brazilians.
Methods:
As part of the Pathology, Alzheimer’s and Related Dementias Study (PARDoS), informants of 1,514 older deceased Brazilians underwent a uniform structured interview. The informant interview included demographic data, the Clinical Dementia Rating scale to diagnose dementia and MCI, the National Institute of Mental Health Diagnostic Interview Schedule for depression, and a 6-item measure of purpose in life, a component of well-being.
Results:
Purpose scores ranged from 1.5 to 5.0 with higher values indicating higher levels of purpose. On the Clinical Dementia Rating Scale, 940 persons (62.1%) had no cognitive impairment, 121 (8.0%) had MCI, and 453 (29.9%) had dementia. In logistic regression models adjusted for age at death, sex, education, and race, higher purpose was associated with lower likelihood of MCI (odds ratio = .58; 95% confidence interval [CI]: .43, .79) and dementia (odds ratio = .49, 95% CI: .41, .59). Results were comparable after adjusting for depression (identified in 161 [10.6%]). Neither race nor education modified the association of purpose with cognitive diagnoses.
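For illustration, the sketch below shows a minimal adjusted logistic regression of this kind in Python with statsmodels, reporting odds ratios and 95% confidence intervals; the variable names, coding, and file name are hypothetical rather than taken from PARDoS.

```python
# Sketch: logistic regression of MCI (vs. no cognitive impairment) on purpose
# in life, adjusted for age at death, sex, education, and race, with results
# reported as odds ratios. Column names are hypothetical; `mci` is 0/1.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pardos_informant_interviews.csv")  # hypothetical file name

fit = smf.logit("mci ~ purpose + age_at_death + sex + education + race", data=df).fit()

summary = pd.DataFrame({
    "OR": np.exp(fit.params),                 # OR per 1-unit increase in each predictor
    "95% CI lower": np.exp(fit.conf_int()[0]),
    "95% CI upper": np.exp(fit.conf_int()[1]),
})
print(summary)
```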
Conclusions:
Higher purpose in life is associated with lower likelihood of MCI and dementia in older black and white Brazilians.
As part of the Pathology, Alzheimer’s and Related Dementias Study, we conducted uniform structured interviews with knowledgeable informants (72% children) of 1,493 older (age > 65) Brazilian decedents.
Measurements:
The interview included measures of social isolation (number of family and friends in at least monthly contact with decedent), emotional isolation (short form of UCLA Loneliness Scale), and major depression plus the informant portion of the Clinical Dementia Rating Scale to diagnose dementia and its precursor, mild cognitive impairment (MCI).
Results:
Decedents had a median social network size of 8.0 (interquartile range = 9.0) and a median loneliness score of 0.0 (interquartile range = 1.0). On the Clinical Dementia Rating Scale, 947 persons had no cognitive impairment, 122 had MCI, and 424 had dementia. In a logistic regression model adjusted for age, education, sex, and race, both smaller network size (odds ratio [OR] = 0.975; 95% confidence interval [CI]: 0.962, 0.989) and higher loneliness (OR = 1.145; 95% CI: 1.060, 1.237) were associated with higher likelihood of dementia. These associations persisted after controlling for depression (present in 10.4%) and did not vary by race. After controlling for depression, neither network size nor loneliness was related to MCI.
Conclusion:
Social and emotional isolation are associated with higher likelihood of dementia in older black and white Brazilians.
Given the evidence that multiple risk factors, including sleep, inflammation, cardiometabolism, and mood disorders, shape cognitive outcomes in aging, multidimensional investigations of their impact on cognition are warranted. We sought to determine the extent to which self-reported sleep disturbances, metabolic syndrome (MetS) factors, cellular inflammation, depressive symptomatology, and diminished physical mobility were associated with cognitive impairment and poorer cognitive performance.
Design:
This is a cross-sectional study.
Setting:
Participants with elevated but well-controlled blood pressure were recruited from the local community for a Tai Chi and healthy-aging intervention study.
Participants:
One hundred forty-five older adults (72.7 ± 7.9 years old; 66% female), 54 (37%) with evidence of cognitive impairment (CI) based on Montreal Cognitive Assessment (MoCA) score ≤24, underwent medical, psychological, and mood assessments.
Measurements:
CI and cognitive domain performance were assessed using the MoCA. Univariate correlations were computed to determine relationships between risk factors and cognitive outcomes. Bootstrapped logistic regression was used to determine significant predictors of CI risk, and linear regression was used to explore the cognitive domains affected by risk factors.
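A minimal sketch of what a bootstrapped logistic regression of CI risk could look like in Python with statsmodels, using percentile bootstrap intervals, is shown below; the formula, column names, file name, and number of resamples are assumptions, not the study's actual analysis code.

```python
# Sketch: bootstrapped logistic regression for predictors of cognitive impairment
# (MoCA <= 24). Resamples participants with replacement and collects percentile
# confidence intervals for each coefficient on the odds ratio scale.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("taichi_baseline.csv")      # hypothetical file name
formula = "ci ~ sleep_disturbance + mets_count + crp + depression + mobility"

rng = np.random.default_rng(0)
boot_coefs = []
for _ in range(2000):                        # number of bootstrap resamples (assumed)
    sample = df.sample(n=len(df), replace=True,
                       random_state=int(rng.integers(2**31 - 1)))
    try:
        boot_coefs.append(smf.logit(formula, data=sample).fit(disp=0).params)
    except Exception:
        continue                             # skip resamples that fail to converge

boot = pd.DataFrame(boot_coefs)
point = smf.logit(formula, data=df).fit(disp=0).params
summary = pd.DataFrame({
    "OR": np.exp(point),
    "95% CI low": np.exp(boot.quantile(0.025)),
    "95% CI high": np.exp(boot.quantile(0.975)),
    "99% CI low": np.exp(boot.quantile(0.005)),
    "99% CI high": np.exp(boot.quantile(0.995)),
})
print(summary)
```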
Results:
The CI group was slower on the mobility task, met more MetS criteria, and reported poorer sleep than normocognitive individuals (all p < 0.05). Multivariate logistic regression indicated that sleep disturbances, but no other risk factors, predicted increased odds of CI (OR = 2.00, 95% CI: 1.26–4.87, 99% CI: 1.08–7.48). Further examination of MoCA cognitive subdomains revealed that sleep disturbances predicted poorer executive function (β = –0.26, 95% CI: –0.51 to –0.06, 99% CI: –0.61 to –0.02), with lesser effects on visuospatial performance (β = –0.20, 95% CI: –0.35 to –0.02, 99% CI: –0.39 to 0.03) and memory (β = –0.29, 95% CI: –0.66 to –0.01, 99% CI: –0.76 to 0.08).
Conclusions:
Our results indicate that the deleterious impact of self-reported sleep disturbances on cognitive performance was more prominent than that of other risk factors, and they illustrate the importance of clinician evaluation of sleep in patients with, or at risk of, diminished cognitive performance. Future longitudinal studies implementing a comprehensive neuropsychological battery and objective sleep measurement are warranted to further explore these associations.
Though theory suggests that individual differences in neuroticism (a tendency to experience negative emotions) would be associated with altered functioning of the amygdala (which has been linked with emotionality and emotion dysregulation in childhood, adolescence, and adulthood), results of functional neuroimaging studies have been contradictory and inconclusive. We aimed to clarify the relationship between neuroticism and three hypothesized neural markers derived from functional magnetic resonance imaging during negative emotion face processing: amygdala activation, amygdala habituation, and amygdala-prefrontal connectivity, each of which plays an important role in the experience and regulation of emotions. We used general linear models to examine the relationship between trait neuroticism and the hypothesized neural markers in a large sample of over 500 young adults. Although neuroticism was not significantly associated with magnitude of amygdala activation or amygdala habituation, it was associated with amygdala–ventromedial prefrontal cortex connectivity, which has been implicated in emotion regulation. Results suggest that trait neuroticism may represent a failure in top-down control and regulation of emotional reactions, rather than overactive emotion generation processes, per se. These findings suggest that neuroticism, which has been associated with increased rates of transdiagnostic psychopathology, may represent a failure in the inhibitory neurocircuitry associated with emotion regulation.
OBJECTIVES/SPECIFIC AIMS: Delirium, a form of acute brain dysfunction characterized by changes in attention and alertness, is a known independent predictor of mortality in the Intensive Care Unit (ICU). We sought to understand whether catatonia, a more recently recognized form of acute brain dysfunction, is associated with increased 30-day mortality in critically ill older adults. METHODS/STUDY POPULATION: We prospectively enrolled critically ill patients at a single institution who were on a ventilator or in shock and evaluated them daily for delirium using the Confusion Assessment Method for the ICU and for catatonia using the Bush-Francis Catatonia Rating Scale. Coma was defined as a Richmond Agitation-Sedation Scale score of −4 or −5. We used a Cox proportional hazards model to predict 30-day mortality, adjusting for delirium, coma, and catatonia status. RESULTS/ANTICIPATED RESULTS: We enrolled 335 medical, surgical, or trauma critically ill patients with 1103 matched delirium and catatonia assessments. Median age was 58 years (IQR: 48–67). Main indications for admission to the ICU included airway disease or protection (32%; N=100) and sepsis and/or shock (25%; N=79). In the unadjusted analysis, regardless of the presence of catatonia, non-delirious individuals had the highest median survival times, while delirious patients had the lowest. Comparing the absence and presence of catatonia, the presence of catatonia worsened survival (Figure 1). In a time-dependent Cox model, holding catatonia status constant, delirious individuals had 1.72 times the hazard of death (95% CI: 1.321, 2.231) and comatose individuals had 5.48 times the hazard of death (95% CI: 4.298, 6.984) compared with non-delirious individuals. For DSM-5 catatonia scores, a 1-unit increase in the score was associated with 1.18 times the hazard of in-hospital mortality. Comparing two individuals with the same delirium status, an individual with one catatonia item present would have 1.178 times the hazard of death (95% CI: 1.086, 1.278) of an individual with no catatonia, while an individual with three catatonia items present would have 1.63 times the hazard of death. DISCUSSION/SIGNIFICANCE OF IMPACT: Holding catatonia status constant, non-delirious individuals had the highest median survival times, while comatose individuals had the lowest median survival times after critical illness. Comparing the absence and presence of catatonia, the presence of catatonia appears to worsen survival. Individuals who were both comatose and catatonic had the lowest median survival time.
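As an illustration of the time-dependent Cox modeling described above, the sketch below uses the lifelines package on a hypothetical long-format dataset (one row per patient-day interval); the column names are assumptions. Note also that under this parameterization a per-sign hazard ratio of 1.178 compounds multiplicatively, so three catatonia signs correspond to 1.178³ ≈ 1.63 times the hazard, consistent with the figures reported above.

```python
# Sketch: time-dependent Cox model for 30-day mortality with daily delirium,
# coma, and catatonia status as time-varying covariates.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

long_df = pd.read_csv("daily_brain_dysfunction.csv")  # hypothetical file name
# expected columns (assumed): patient_id, start_day, stop_day, died (0/1 at
# interval end), delirium (0/1), coma (0/1), catatonia_items (DSM-5 sign count)

ctv = CoxTimeVaryingFitter()
ctv.fit(
    long_df,
    id_col="patient_id",
    start_col="start_day",
    stop_col="stop_day",
    event_col="died",
)
print(np.exp(ctv.params_))   # hazard ratios, e.g. per catatonia sign present
```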
Many patients with advanced serious illness or at the end of life experience delirium, a potentially reversible form of acute brain dysfunction, which may impair ability to participate in medical decision-making and to engage with their loved ones. Screening for delirium provides an opportunity to address modifiable causes. Unfortunately, delirium remains underrecognized. The main objective of this pilot was to validate the brief Confusion Assessment Method (bCAM), a two-minute delirium-screening tool, in a veteran palliative care sample.
Method
This was a pilot prospective, observational study that included hospitalized patients evaluated by the palliative care service at a single Veterans’ Administration Medical Center. The bCAM was compared against the reference standard, the Diagnostic and Statistical Manual of Mental Disorders, fifth edition. Both assessments were blinded and conducted within 30 minutes of each other.
Result
We enrolled 36 patients with a median age of 67 years (interquartile range 63–73). The primary reasons for admission to the hospital were sepsis or severe infection (33%), severe cardiac disease (including heart failure, cardiogenic shock, and myocardial infarction; 17%), or gastrointestinal/liver disease (17%). The bCAM performed well against the Diagnostic and Statistical Manual of Mental Disorders, fifth edition, for detecting delirium, with a sensitivity (95% confidence interval) of 0.80 (0.4, 0.96) and specificity of 0.87 (0.67, 0.96).
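For reference, sensitivity and specificity with Wilson score confidence intervals can be computed directly from the 2 × 2 classification table, as in the sketch below; the cell counts are placeholders chosen only to be roughly consistent with the reported estimates, not the study's data.

```python
# Sketch: sensitivity and specificity of the bCAM against the DSM-5 reference
# standard, with Wilson score confidence intervals. Cell counts are placeholders.
from statsmodels.stats.proportion import proportion_confint

tp, fn = 8, 2      # reference-positive patients: bCAM positive / negative (hypothetical)
tn, fp = 23, 3     # reference-negative patients: bCAM negative / positive (hypothetical)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
sens_ci = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")

print(f"sensitivity = {sensitivity:.2f} (95% CI {sens_ci[0]:.2f}-{sens_ci[1]:.2f})")
print(f"specificity = {specificity:.2f} (95% CI {spec_ci[0]:.2f}-{spec_ci[1]:.2f})")
```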
Significance of Results
Delirium was present in 27% of patients enrolled and was never recognized by the palliative care service in routine clinical care. The bCAM provided good sensitivity and specificity in this pilot of palliative care patients, providing a method for nonpsychiatrically trained personnel to detect delirium.
OBJECTIVES/SPECIFIC AIMS: Delirium is a well-described form of acute brain dysfunction characterized by decreased or increased movement, changes in attention and concentration, and perceptual disturbances (i.e., hallucinations) and delusions. Catatonia, a neuropsychiatric syndrome traditionally described in patients with severe psychiatric illness, can present as phenotypically similar to delirium and is characterized by increased, decreased, and/or abnormal movements, staring, rigidity, and mutism. Delirium and catatonia can co-occur in the setting of medical illness, but no studies have explored this relationship by age. Our objective was to assess whether advancing age and the presence of catatonia are associated with delirium. METHODS/STUDY POPULATION: We prospectively enrolled critically ill patients at a single institution who were on a ventilator or in shock and evaluated them daily for delirium using the Confusion Assessment Method for the ICU and for catatonia using the Bush-Francis Catatonia Rating Scale. Measures of association (odds ratios) were estimated with a simple logistic regression model with catatonia as the independent variable and delirium as the dependent variable. Effect measure modification by age was assessed using a likelihood ratio test. RESULTS/ANTICIPATED RESULTS: We enrolled 136 medical and surgical critically ill patients with 452 matched (concomitant) delirium and catatonia assessments. Median age was 59 years (IQR: 52–68). Of the 136 patients, 58 (43%) had delirium only, 4 (3%) had catatonia only, 42 (31%) had both delirium and catatonia, and 32 (24%) had neither. Age was significantly associated with prevalent delirium (i.e., increasing age was associated with decreased risk for delirium; p=0.04) after adjusting for catatonia severity. Catatonia was significantly associated with prevalent delirium (p<0.0001) after adjusting for age. Peak delirium risk was observed in patients aged 55 years with 3 or more catatonic signs, who had 53.4 times the odds of delirium (95% CI: 16.06, 176.75) compared with those with no catatonic signs. Patients 70 years and older with 3 or more catatonia features had half this risk. DISCUSSION/SIGNIFICANCE OF IMPACT: Catatonia is significantly associated with prevalent delirium even after controlling for age. These data support an inverted U-shaped relationship between age and delirium risk after adjusting for catatonia. This relationship and its clinical ramifications need to be examined in a larger sample, including patients with dementia. Additionally, we need to assess which acute brain syndrome (delirium or catatonia) develops first.
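A minimal sketch of how effect measure modification by age could be tested with a likelihood ratio test in Python with statsmodels follows; the column names and file name are hypothetical, the quadratic age terms are included only to allow the inverted U-shape described above, and the sketch ignores the clustering of repeated assessments within patients.

```python
# Sketch: likelihood ratio test for whether age modifies the catatonia-delirium
# association, comparing logistic models with and without age x catatonia terms.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("delirium_catatonia_assessments.csv")  # hypothetical file name

reduced = smf.logit("delirium ~ catatonia_items + age + I(age ** 2)",
                    data=df).fit(disp=0)
full = smf.logit("delirium ~ catatonia_items * (age + I(age ** 2))",
                 data=df).fit(disp=0)

lr_stat = 2 * (full.llf - reduced.llf)          # LR test of the interaction terms
df_diff = full.df_model - reduced.df_model
p_value = stats.chi2.sf(lr_stat, df_diff)
print(f"LR statistic = {lr_stat:.2f}, df = {df_diff:.0f}, p = {p_value:.4f}")
```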
Direct ink writing of silicone elastomers enables printing with precise control of porosity and mechanical properties of ordered cellular solids, suitable for shock absorption and stress mitigation applications. Because both structure and feedstock stiffness can be manipulated, the design space becomes challenging to parse for a solution that produces a desired mechanical response. Here, we derive an analytical design approach for a specific architecture. Results from finite element simulations and quasi-static mechanical tests of two different parallel strand architectures were analyzed to understand the structure-property relationships under uniaxial compression. Combining effective stiffness-density scaling with least squares optimization of the stress responses yielded general response curves parameterized by resin modulus and strand spacing. An analytical expression of these curves serves as a reduced order model, which, when optimized, provides a rapid design capability for filament-based 3D printed structures. As a demonstration, the optimal design of a face-centered tetragonal architecture is computed that satisfies prescribed minimum and maximum load constraints.
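As a small illustration of the stiffness-density scaling step, the sketch below fits a Gibson-Ashby-style power law to effective stiffness versus relative density by least squares with SciPy; the data points are placeholders, and the paper's actual reduced-order model is parameterized by resin modulus and strand spacing rather than by this two-parameter fit alone.

```python
# Sketch: fitting an effective stiffness-density scaling law of the form
# E_eff / E_s = C * (rho / rho_s)**n to compression data by least squares.
import numpy as np
from scipy.optimize import curve_fit

relative_density = np.array([0.25, 0.32, 0.40, 0.48, 0.55])          # hypothetical
relative_stiffness = np.array([0.021, 0.041, 0.072, 0.115, 0.160])   # hypothetical

def scaling_law(rho_rel, C, n):
    """Relative effective stiffness as a power law of relative density."""
    return C * rho_rel ** n

(C, n), cov = curve_fit(scaling_law, relative_density, relative_stiffness, p0=(1.0, 2.0))
print(f"C = {C:.3f}, scaling exponent n = {n:.2f}")
```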
Timing of weed emergence and seed persistence in the soil influence the ability to implement timely and effective control practices. Emergence patterns and seed persistence of kochia populations were monitored in 2010 and 2011 at sites in Kansas, Colorado, Wyoming, Nebraska, and South Dakota. Weekly observations of emergence were initiated in March and continued until no new emergence occurred. Seed was harvested from each site, placed into 100-seed mesh packets, and buried at depths of 0, 2.5, and 10 cm in fall of 2010 and 2011. Packets were exhumed at 6-mo intervals over 2 yr. Viability of exhumed seeds was evaluated. Nonlinear mixed-effects Weibull models were fit to cumulative emergence (%) across growing degree days (GDD) and to viable seed (%) across burial time to describe their fixed and random effects across site-years. Final emergence densities varied among site-years and ranged from as few as 4 to almost 380,000 seedlings m−2. Across 11 site-years in Kansas, cumulative GDD needed for 10% emergence were 168, while across 6 site-years in Wyoming and Nebraska, only 90 GDD were needed; on the calendar, this date shifted from early to late March. The majority (>95%) of kochia seed did not persist for more than 2 yr. Remaining seed viability was generally >80% when seeds were exhumed within 6 mo after burial in March, and declined to <5% by October of the first year after burial. Burial did not appear to increase or decrease seed viability over time but placed seed in a position from which seedling emergence would not be possible. High seedling emergence that occurs very early in the spring emphasizes the need for fall or early spring PRE weed control such as tillage, herbicides, and cover crops, while continued emergence into midsummer emphasizes the need for extended periods of kochia management.
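For illustration, a single-site, fixed-effects simplification of the Weibull emergence model can be fit with SciPy as sketched below; the data points are placeholders, and the study itself used nonlinear mixed-effects models with random effects across site-years.

```python
# Sketch: fitting a Weibull cumulative-emergence curve to growing degree days
# and recovering the GDD required for 10% emergence.
import numpy as np
from scipy.optimize import curve_fit

gdd = np.array([50, 100, 150, 200, 300, 450, 600, 800])     # hypothetical
emergence = np.array([2, 12, 30, 48, 72, 88, 95, 99])       # hypothetical, %

def weibull_emergence(x, b, c):
    """Cumulative emergence (%) as a Weibull function of cumulative GDD."""
    return 100.0 * (1.0 - np.exp(-((x / b) ** c)))

(b, c), _ = curve_fit(weibull_emergence, gdd, emergence, p0=(200.0, 1.5))

# GDD required to reach 10% cumulative emergence (inverse of the Weibull CDF)
gdd_10 = b * (-np.log(1 - 0.10)) ** (1 / c)
print(f"scale b = {b:.0f} GDD, shape c = {c:.2f}, 10% emergence at ~{gdd_10:.0f} GDD")
```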
An experiment was conducted at five locations in Nebraska to determine the extent of demise of weed seed in soil when seed production was eliminated from 1975 through 1979 in corn (Zea mays L.). Weed yields, weed seed production, and corn yields were determined under four weed management levels in 1980. Annual broadleaf weed seed were more prevalent than grass seed in cultivated soil throughout the study. The population of viable weed seed in soil declined 95% during the 5-yr period that weed seed production was eliminated. Weed seed buildup recovered to within 90% of the 1975 level during 1980 at Concord and Clay Center but remained low at Lincoln, North Platte, and Scottsbluff. Thus, seed longevity in soil was sometimes sufficient to withstand modern weed control methods and still reinfest a field after 5 yr of eliminating weed seed production. Corn yields were maintained 1 yr with minimum weed management effort following 5 yr of no weed seed production.
The Chemical Movement through Layered Soils (CMLS) model was modified and combined with the USDA-SCS State Soil Geographic Data Base (STATSGO) and Montana Agricultural Potentials System (MAPS) digital databases to assess the likelihood of groundwater contamination from selected herbicides in Teton County, MT. The STATSGO and MAPS databases were overlaid to produce polygons with unique soil and climate characteristics and attribute tables containing only those data needed by the CMLS model. The Weather Generator (WGEN) computer simulation model was modified and used to generate daily precipitation and evapotranspiration values. A new algorithm was developed to estimate soil carbon as a function of soil depth. The depth of movement of the applied chemicals at the end of the growing season was estimated with CMLS for each of the soil series in the STATSGO soil mapping units and the results were entered into ARC/INFO to produce the final hazard maps showing best, weighted average, and worst case results for every unique combination (polygon) of soil mapping unit and climate. County weed infestation maps for leafy spurge and spotted knapweed were digitized and overlaid in ARC/INFO with the CMLS model results for picloram to illustrate how the results might be used to evaluate the threat to groundwater posed by current herbicide applications.
Research conducted since 1979 in the north central United States and southern Canada demonstrated that after repeated annual applications of the same thiocarbamate herbicide to the same field, control of some difficult-to-control weed species was reduced. Laboratory studies of herbicide degradation in soils from these fields indicated that these performance failures were due to more rapid or “enhanced” biodegradation of the thiocarbamate herbicides after repeated use, with a shorter period during which effective herbicide levels remained in the soils. Weeds such as wild proso millet [Panicum miliaceum L. ssp. ruderale (Kitagawa) Tzevelev. # PANMI] and shattercane [Sorghum bicolor (L.) Moench. # SORVU], which germinate over long time periods, were most likely to escape these herbicides after repeated use. Adding dietholate (O,O-diethyl O-phenyl phosphorothioate) to EPTC (S-ethyl dipropyl carbamothioate) reduced problems caused by enhanced EPTC biodegradation in soils treated previously with EPTC alone but not in soils previously treated with EPTC plus dietholate. While previous use of other thiocarbamate herbicides frequently enhanced biodegradation of EPTC or butylate [S-ethyl bis(2-methylpropyl)carbamothioate], previous use of other classes of herbicides or the insecticide carbofuran (2,3-dihydro-2,2-dimethyl-7-benzofuranyl methylcarbamate) did not. Enhanced biodegradation of herbicides other than the thiocarbamates was not observed.
The influence of manure and herbicide applications on weed-free fieldbeans (Phaseolus vulgaris L.) and root rot of fieldbeans was studied during 1978 and 1979 at Scottsbluff, Nebraska. Rates of cattle feedlot manure used were 0, 30,000, 56,000, 112,000, or 168,000 kg/ha. Preplant herbicide treatments used were alachlor [2-chloro-2′,6′-diethyl-N-(methoxymethyl)acetanilide] at 3.4 kg/ha, EPTC (S-ethyl dipropylthiocarbamate) at 3.4 kg/ha, EPTC + trifluralin (α,α,α-trifluoro-2,6-dinitro-N,N-dipropyl-p-toluidine) at 2.2 + 0.6 kg/ha, or dinoseb (2-sec-butyl-4,6-dinitrophenol) at 6.7 kg/ha. Manure rates of 56,000 kg/ha or higher reduced fieldbean yields both years, but did not increase electrical conductivity or soil exchangeable sodium enough to explain these yield reductions. In 1978, but not in 1979, height and yield of weed-free fieldbeans were reduced by EPTC, EPTC + trifluralin, and alachlor treatments. A significant interaction between manure and herbicide treatments was not detected and none of the treatments increased the severity of root rot in either year.
Combinations of herbicides applied preplant incorporated and preemergence plus at layby were evaluated in corn (Zea mays L.) at three Nebraska locations in 1977 through 1980 for their effectiveness in providing season-long weed control. Pendimethalin [N-(1-ethylpropyl)-3,4-dimethyl-2,6-dinitrobenzenamine], pendimethalin + atrazine [2-chloro-4-(ethylamino)-6-(isopropylamino)-s-triazine], metolachlor [2-chloro-N-(2-ethyl-6-methylphenyl)-N-(2-methoxy-1-methylethyl)acetamide], alachlor [2-chloro-2′,6′-diethyl-N-(methoxymethyl)acetanilide], cyanazine {2-[[4-chloro-6-(ethylamino)-s-triazin-2-yl]amino]-2-methylpropionitrile}, and cyanazine + atrazine were shown to be selective to corn and effective in controlling late-germinating weeds. Irrigated-corn yield was not increased by layby herbicide applications on the weed species and weed densities present in these experiments.
In North America, terrestrial records of biodiversity and climate change that span Marine Oxygen Isotope Stage (MIS) 5 are rare. Where found, they provide insight into how the coupling of the ocean–atmosphere system is manifested in biotic and environmental records and how the biosphere responds to climate change. In 2010–2011, construction at Ziegler Reservoir near Snowmass Village, Colorado (USA) revealed a nearly continuous, lacustrine/wetland sedimentary sequence that preserved evidence of past plant communities between ~140 and 55 ka, including all of MIS 5. At an elevation of 2705 m, the Ziegler Reservoir fossil site also contained thousands of well-preserved bones of late Pleistocene megafauna, including mastodons, mammoths, ground sloths, horses, camels, deer, bison, black bear, coyotes, and bighorn sheep. In addition, the site contained more than 26,000 bones from at least 30 species of small animals including salamanders, otters, muskrats, minks, rabbits, beavers, frogs, lizards, snakes, fish, and birds. The combination of macro- and micro-vertebrates, invertebrates, terrestrial and aquatic plant macrofossils, a detailed pollen record, and a robust, directly dated stratigraphic framework shows that high-elevation ecosystems in the Rocky Mountains of Colorado are climatically sensitive and varied dramatically throughout MIS 5.
Older African Americans tend to perform more poorly on cognitive function tests than older Whites. One possible explanation for their poorer performance is that the tests used to assess cognition may not reflect the same construct in African Americans and Whites. Therefore, we tested measurement invariance, by race and over time, of a structured 18-test cognitive battery used in three epidemiologic cohort studies of diverse older adults. Multi-group confirmatory factor analyses were carried out with full-information maximum likelihood estimation in all models to capture as much information as was present in the observed data. Four aspects of model fit were evaluated: the comparative fit index (CFI), standardized root mean square residual (SRMR), root mean square error of approximation (RMSEA), and model χ2. We found that the most constrained model fit the data well (CFI = 0.950; SRMR = 0.051; RMSEA = 0.057, 90% confidence interval: 0.056, 0.059; model χ2 = 4600.68 on 862 df), supporting the characterization of this model of cognitive test scores as invariant over time and racial group. These results support the conclusion that the cognitive test battery used in the three studies is invariant across race and time and can be used to assess cognition among African Americans and Whites in longitudinal studies. Furthermore, the lower performance of African Americans on these tests is not due to bias in the tests themselves but more likely reflects differences in social and environmental experiences over the life course. (JINS, 2016, 22, 66–75)
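For readers unfamiliar with these fit indices, the RMSEA can be recovered from the model chi-square, its degrees of freedom, and the sample size with the standard formula sketched below; the chi-square and degrees of freedom are the values reported above, while the sample size shown is a placeholder (a combined sample on the order of 1,300 reproduces an RMSEA near the reported 0.057).

```python
# Sketch: root mean square error of approximation (RMSEA) from a model
# chi-square, its degrees of freedom, and the sample size.
import math

def rmsea(chi2, df, n):
    """Standard RMSEA formula: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

chi2_model, df_model = 4600.68, 862   # values reported for the most constrained model
n_total = 1335                        # hypothetical; not reported in the abstract
print(f"RMSEA = {rmsea(chi2_model, df_model, n_total):.3f}")
```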
The aim of this study was to compare patterns of cognitive decline in older Latinos and non-Latinos. At annual intervals for a mean of 5.7 years, older Latino (n=104) and non-Latino (n=104) persons of equivalent age, education, and race completed a battery of 17 cognitive tests from which previously established composite measures of episodic memory, semantic memory, working memory, perceptual speed, and visuospatial ability were derived. In analyses adjusted for age, sex, and education, performance declined over time in each cognitive domain, but there were no ethnic group differences in initial level of function or annual rate of decline. There was evidence of retest learning following the baseline evaluation, but neither the magnitude nor duration of the effect was related to Latino ethnicity, and eliminating the first two evaluations, during which much of the retest learning occurred, did not affect ethnic group comparisons. Compared to the non-Latino group, the Latino group had more diabetes (38.5% vs. 25.0%; χ2[1]=4.4; p=.037), fewer histories of smoking (24.0% vs. 39.4%; χ2[1]=5.7; p=.017), and lower childhood household socioeconomic level (−0.410 vs. −0.045; t[185.0]=3.1; p=.002), but controlling for these factors did not affect results. Trajectories of cognitive aging in different abilities are similar in Latino and non-Latino individuals of equivalent age, education, and race. (JINS, 2016, 22, 58–65)
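As a check on the reported diabetes comparison, the chi-square statistic can be reproduced from cell counts reconstructed from the stated percentages and group sizes, as sketched below with SciPy; the reconstruction assumes an uncorrected chi-square test, which may differ from the test actually used.

```python
# Sketch: reproducing the reported chi-square comparison of diabetes prevalence
# (38.5% of 104 Latinos vs. 25.0% of 104 non-Latinos). Cell counts below are
# reconstructed from the reported percentages and group sizes.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([
    [40, 64],   # Latino: diabetes, no diabetes      (40/104 = 38.5%)
    [26, 78],   # non-Latino: diabetes, no diabetes  (26/104 = 25.0%)
])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.3f}")   # about 4.4 and .037, matching the text
```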
This paper describes select results of a longitudinal study of 62 mild to moderate Alzheimer's disease (AD) patients, in comparison to 60 age-matched healthy controls. Initial neurologic, radiologic, psychiatric, laboratory, and cognitive examinations required two full days and were followed by one-day examinations at annual intervals. Of the total original sample, 31 AD patients and 39 controls could actually be followed for three annual examinations. Cognitive examination data confirmed cross-sectional (group discriminative) validity of memory and language measures, and showed the expected longitudinal deterioration in the AD sample, with controls maintaining consistent performance over the three years. However, those measures showing the largest group differences at the initial examination were not the best for tracking patient deterioration over time. Implications of these results for the selection of cognitive assessment measures are discussed.
Intestinal health is important for maximising the health, welfare, and performance of poultry. In addition, intestinal health issues in poultry can have devastating financial impacts for producers and food safety concerns for consumers. Until recently, intestinal health issues were seen as a handful of known infectious agents leading to a set of severe and identifiable named diseases. There is, however, an emerging view of intestinal health as a more complex and multifaceted system than previously appreciated. Recent progress in technology suitable for microbial community analysis has advanced our understanding of the chicken intestinal microbiome. It is now understood that shifts in the composition of microbial communities can occur, and that these shifts carry implications for disease, welfare, the environment, and food safety. Minor shifts in intestinal microbial balance can result in a wide continuum of disease presentations ranging from severe to mild clinical, subclinical, or asymptomatic. Differential diagnosis of poultry intestinal health issues may be challenging and is important for applying appropriate treatment options. This review discusses new and emerging topics in broiler chicken intestinal health, with a focus on microbial composition, newly discovered microbial shifts in classical poultry diseases, the range in severity of enteric diseases, newly identified organisms in normal intestinal flora, implications of shifts in intestinal microbial communities, and diagnosis of emerging intestinal health issues in poultry.