When analyzing an unknown by electron-excited energy dispersive X-ray spectrometry, with the entire periodic table possibly in play, how does the analyst discover minor and trace constituents when their peaks are overwhelmed by the intensity of interfering peaks from a major constituent? In this paper, we advocate for and demonstrate an iterative analytical approach, alternating qualitative analysis (peak identification) and standards-based quantitative analysis with peak fitting. This method employs two “tools”: (1) monitoring of the “raw analytical total,” which is the sum of all measured constituents as well as any, such as oxygen, calculated by the method of assumed stoichiometry, and (2) careful inspection of the “peak fitting residual spectrum” that is constructed as part of the quantitative analysis procedure in the software engine DTSA-II (a pseudo-acronym) from the National Institute of Standards and Technology. Elements newly recognized after each round are incorporated into the next round of quantitative analysis until the limits of detection are reached, as defined by the total spectrum counts.
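The alternation can be pictured as a simple loop. The following is a minimal Python sketch, assuming caller-supplied quantify and identify_peaks routines as stand-ins for the fitting and peak-identification steps; it is not the DTSA-II API, and the names, result keys, and tolerance on the raw analytical total are illustrative only.

```python
# Minimal sketch of the iterative qualitative/quantitative loop described
# above. `quantify` and `identify_peaks` are assumed, caller-supplied
# stand-ins for standards-based fitting and peak identification; the result
# keys and the 0.02 tolerance on the raw analytical total are illustrative.

def iterative_analysis(spectrum, seed_elements, quantify, identify_peaks,
                       min_residual_counts, total_tolerance=0.02):
    """Alternate peak identification and standards-based quantification
    until no new elements emerge above the detection limit."""
    elements = set(seed_elements)
    while True:
        result = quantify(spectrum, elements)  # standards-based peak fit
        # Tool 1: a raw analytical total (measured constituents plus any,
        # such as oxygen, from assumed stoichiometry) far from 1.0 hints
        # at missing constituents.
        total_ok = abs(result["raw_total"] - 1.0) <= total_tolerance
        # Tool 2: peaks remaining in the fitting residual reveal
        # constituents hidden under major-element interferences.
        new = set(identify_peaks(result["residual"], min_residual_counts))
        new -= elements
        if total_ok and not new:
            return result  # converged: limits of detection reached
        elements |= new
```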
The authors of this paper met at Harvard University in July 1955 to discuss the theoretical problems which arise in the study of culture contact situations in archaeology. The subject of contacts between cultures is one in which ethnologists have long been interested, and there is a substantial body of literature, both descriptive and theoretical, on contemporary and recent historic situations of this kind. Archaeological interest in the subject is somewhat more recent, but a few excellent reports on specific examples have appeared which we could use as a basis for our discussions. We believe that this paper is only the second attempt to contribute something to this field by generalization from archaeological data (Willey 1953).
Increased out-of-pocket health-care expenditures may exert budget pressure on low-income households that leads to food insecurity. The objective of the present study was to examine whether older adults with higher chronic disease burden are at increased risk of food insecurity.
Secondary analysis of the 2013 Health and Retirement Study (HRS) Health Care and Nutrition Study (HCNS) linked to the 2012 nationally representative HRS.
Respondents of the 2013 HRS HCNS with household incomes <300 % of the federal poverty line (n 3552). Chronic disease burden was categorized by number of concurrent chronic conditions (0–1, 2–4, ≥5 conditions), with multiple chronic conditions (MCC) defined as ≥2 conditions.
The prevalence of food insecurity was 27·8 %. Compared with those having 0–1 conditions, respondents with MCC were significantly more likely to report food insecurity, with the adjusted odds ratio for those with 2–4 conditions being 2·12 (95 % CI 1·45, 3·09) and for those with ≥5 conditions being 3·64 (95 % CI 2·47, 5·37).
A heavy chronic disease burden likely exerts substantial pressure on the household budgets of older adults, creating an increased risk for food insecurity. Given the high prevalence of food insecurity among older adults, screening those with MCC for food insecurity in the clinical setting may be warranted in order to facilitate referral to community food resources.
The term mild cognitive impairment (MCI) has been associated with varying degrees of clinical utility and controversy. The concept was introduced to define a pre-dementia period associated with underlying neurodegenerative pathology and a higher likelihood of the person developing a dementia syndrome. As scientific understanding improves, the definition of MCI rightly adapts, meaning that the concept is subject to frequent evolution. We consider that the concept remains a long way from having evolved to a point where it can be embedded with confidence in clinical practice as a diagnosis; it should remain primarily a term for use in research.
Polygenic risk scores (PRS) for depression correlate with depression status and chronicity, and provide causal anchors to identify depressive mechanisms. Neuroticism is phenotypically and genetically positively associated with depression, whereas psychological resilience demonstrates negative phenotypic associations. Whether increased neuroticism and reduced resilience are downstream mediators of genetic risk for depression, and whether they contribute independently to risk, remains unknown.
Moderating and mediating relationships between depression PRS, neuroticism, resilience and both clinical and self-reported depression were examined in a large, population-based cohort, Generation Scotland: Scottish Family Health Study (N = 4166), using linear regression and structural equation modelling. Neuroticism and resilience were measured by the Eysenck Personality Scale Short Form Revised and the Brief Resilience Scale, respectively.
PRS for depression was associated with increased likelihood of self-reported and clinical depression. No interaction was found between PRS and neuroticism, or between PRS and resilience. Neuroticism was associated with increased likelihood of self-reported and clinical depression, whereas resilience was associated with reduced risk. Structural equation modelling suggested the association between PRS and self-reported and clinical depression was mediated by neuroticism (43–57%), while resilience mediated the association in the opposite direction (37–40%). For both self-reported and clinical diagnoses, the genetic risk for depression was independently mediated by neuroticism and resilience.
Findings suggest polygenic risk for depression increases vulnerability for self-reported and clinical depression through independent effects on increased neuroticism and reduced psychological resilience. In addition, two partially independent mechanisms – neuroticism and resilience – may form part of the pathway of vulnerability to depression.
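The "proportion mediated" figures reported above (e.g., 43–57% via neuroticism) follow the standard product-of-coefficients decomposition of a total effect into indirect and direct paths. Below is a generic sketch of that arithmetic, not the structural equation model fitted in the study; the path coefficients are hypothetical and chosen only to illustrate the calculation.

```python
# Generic proportion-mediated calculation: the indirect effect (a*b) as a
# share of the total effect (a*b + c'). A textbook decomposition, not the
# study's structural equation model.

def proportion_mediated(a, b, c_prime):
    """a: exposure -> mediator path; b: mediator -> outcome path;
    c_prime: direct exposure -> outcome path."""
    indirect = a * b
    return indirect / (indirect + c_prime)

# Hypothetical standardized paths, for illustration only:
print(proportion_mediated(a=0.30, b=0.50, c_prime=0.20))  # ~0.43
```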
Given its positive relationship with valued organizational outcomes, work-related engagement has become a prominent issue for practitioners and for scholars. While recent research has begun to validate various engagement antecedents and outcomes, little is known about the effects that work orientation and supportive leadership have on engagement, particularly among millennial workers, the soon-to-be dominant generational work group globally. To explore these gaps, we studied a particular form of work orientation – workers who indicate having a ‘calling’ – along with perceptions of how supportive leadership is of their current work. Specifically, we posited positive worker engagement relationships for both worker calling and perceptions of leadership support, as well as for their interaction. Drawing upon a United States-based sample of 297 millennial workers, we found support for each hypothesized positive relationship. This study contributes to the expanding literature on the value of understanding how work orientation and leadership perceptions affect important organizational outcomes.
Secondary fluorescence, the final term in the familiar matrix correction triumvirate Z·A·F, is the most challenging for Monte Carlo models to simulate. In fact, only two implementations of Monte Carlo models commonly used to simulate electron probe X-ray spectra can calculate secondary fluorescence: PENEPMA and NIST DTSA-II (discussed herein). These two models share many physical models, but there are some important differences in the way each implements X-ray emission, including secondary fluorescence. PENEPMA is based on PENELOPE, a general-purpose software package for simulation of both relativistic and subrelativistic electron/positron interactions with matter. On the other hand, NIST DTSA-II was designed exclusively for simulation of X-ray spectra generated by subrelativistic electrons. NIST DTSA-II uses variance reduction techniques unsuited to general-purpose code. These optimizations help NIST DTSA-II to be orders of magnitude more computationally efficient while retaining detector position sensitivity. Simulations execute in minutes rather than hours and can model differences that result from detector position. Both PENEPMA and NIST DTSA-II are capable of handling complex sample geometries, and we demonstrate that both are of similar accuracy when modeling experimental secondary fluorescence data from the literature.
Electron-excited X-ray microanalysis performed with scanning electron microscopy and energy-dispersive spectrometry (EDS) has been used to measure trace elemental constituents of complex multielement materials, where “trace” refers to constituents present at concentrations below 0.01 (mass fraction). High count spectra measured with silicon drift detector EDS were quantified using the standards/matrix correction protocol embedded in the NIST DTSA-II software engine. Robust quantitative analytical results for trace constituents were obtained from concentrations as low as 0.000500 (mass fraction), even in the presence of significant peak interferences from minor (concentration 0.01 ≤ C ≤ 0.1) and major (C > 0.1) constituents. Limits of detection as low as 0.000200 were achieved in the absence of peak interference.
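The concentration bands can be made explicit in a few lines. The helper below simply encodes the thresholds stated above (mass fractions); the function name is my own choosing.

```python
# Concentration bands as stated in the text (mass fraction C):
# trace: C < 0.01, minor: 0.01 <= C <= 0.1, major: C > 0.1.

def constituent_class(c):
    """Classify a constituent by mass fraction."""
    if c > 0.1:
        return "major"
    if c >= 0.01:
        return "minor"
    return "trace"

assert constituent_class(0.000500) == "trace"   # lowest robust trace result
assert constituent_class(0.05) == "minor"
assert constituent_class(0.25) == "major"
```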
A scanning electron microscope with a silicon drift detector energy-dispersive X-ray spectrometer (SEM/SDD-EDS) was used to analyze materials containing the low atomic number elements B, C, N, O, and F, achieving a high degree of accuracy. Nearly all results fell well within an uncertainty envelope of ±5% relative, where relative uncertainty (%) = [(measured − ideal)/ideal] × 100%. Quantification was performed with the standards-based “k-ratio” method, with matrix corrections calculated from the Pouchou and Pichoir expression for the ionization depth distribution function, as implemented in the NIST DTSA-II EDS software platform. The analytical strategy involved collection of high count (>2.5 million counts from 100 eV to the incident beam energy) spectra measured with a conservative input count rate that restricted the deadtime to ~10% to minimize coincidence effects. Standards employed included pure elements and simple compounds. A 10 keV beam was employed to excite the K- and L-shell X-rays of intermediate and high atomic number elements with excitation energies above 3 keV, e.g., the Fe K-family, while a 5 keV beam was used for analyses of elements with excitation energies below 3 keV, e.g., the Mo L-family.
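The accuracy criterion is a direct arithmetic check. The sketch below implements the relative uncertainty formula exactly as stated; the example values are hypothetical.

```python
# Relative uncertainty as defined in the text:
# relative uncertainty (%) = (measured - ideal) / ideal * 100%.

def relative_uncertainty_pct(measured, ideal):
    return (measured - ideal) / ideal * 100.0

def within_envelope(measured, ideal, envelope_pct=5.0):
    """True if a result falls inside the +/-5% relative envelope."""
    return abs(relative_uncertainty_pct(measured, ideal)) <= envelope_pct

# Hypothetical example: measured mass fraction 0.195 vs. ideal 0.200.
print(relative_uncertainty_pct(0.195, 0.200))  # -2.5 (% relative)
print(within_envelope(0.195, 0.200))           # True
```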
Diluvian Clustering is an unsupervised grid-based clustering algorithm well suited to interpreting large sets of noisy compositional data. The algorithm is notable for its ability to identify clusters that are either compact or diffuse and clusters that have either a large or a small number of members. Diluvian Clustering is fundamentally different from most algorithms previously applied to cluster compositional data in that its implementation does not depend upon a metric. The algorithm reduces in two dimensions to a case for which there is an intuitive, real-world parallel. Furthermore, the algorithm has few tunable parameters, and these parameters have intuitive interpretations. By eliminating the dependence on an explicit metric, it is possible to derive reasonable clusters with disparate variances like those in real-world compositional data sets. The algorithm is computationally efficient. While the worst case scales as O(N²), most cases are closer to O(N), where N is the number of discrete data points. On a mid-range 2014 vintage computer, a typical 20,000 particle, 30 element data set can be clustered in a fraction of a second.
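To make the grid-based, metric-free idea concrete, here is a generic two-dimensional sketch; it is not the Diluvian algorithm itself, only an illustration of how binning points into cells and flood-filling connected occupied cells avoids any inter-point distance metric and runs in roughly O(N).

```python
# Generic grid-based clustering sketch (NOT the Diluvian algorithm): bin
# points into cells, keep cells meeting a minimum occupancy, and flood-fill
# 8-connected occupied cells into clusters. No inter-point distance metric
# is used; binning and flood fill are both roughly O(N).

from collections import defaultdict, deque

def grid_cluster(points, cell_size=1.0, min_occupancy=1):
    cells = defaultdict(list)
    for i, (x, y) in enumerate(points):            # O(N) binning
        cells[(int(x // cell_size), int(y // cell_size))].append(i)
    occupied = {c for c, m in cells.items() if len(m) >= min_occupancy}
    clusters, seen = [], set()
    for start in occupied:
        if start in seen:
            continue
        queue, members = deque([start]), []
        seen.add(start)
        while queue:                               # flood fill one cluster
            cx, cy = queue.popleft()
            members.extend(cells[(cx, cy)])
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in occupied and nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        clusters.append(members)                   # point indices per cluster
    return clusters

# Two well-separated blobs yield two clusters:
pts = [(0.1, 0.2), (0.4, 0.1), (5.3, 5.1), (5.6, 5.4)]
print(grid_cluster(pts))  # e.g. [[0, 1], [2, 3]]
```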