Acquire complete knowledge of the basics of air-breathing turbomachinery with this hands-on practical text. This updated new edition for students in mechanical and aerospace engineering discusses the role of entropy in assessing machine performance, provides a review of flow structures, and includes an applied review of boundary layer principles. New coverage describes approaches used to smooth initial design geometry into a continuous flow path, the development of design methods associated with the flow over blade shape (cascades loss theory) and annular type flows, as well as a discussion of the mechanisms for the setting of shaft speed. This essential text is also fully supported by over 200 figures, numerous examples, and homework problems, many of which have been revised for this edition.
‘Dietary variety’ has been identified as a factor associated with food intake. Whilst this relationship may have longer-term benefits for body weight management when eating low-energy, nutrient-dense foods, it may increase the risk of overconsumption (and body adiposity) when foods are high in energy density. This study sought to further explore pathways underpinning the relationship between dietary variety and body weight, by considering energy density as a moderating factor and portion size as a mediating factor in this relationship. Using prospective data from the UK Biobank, dietary variety scores (DVS), cumulative portion size and energy density were derived from 24-h dietary recall questionnaires at baseline and follow-up. BMI, whole-body fat percentage and fat-free mass were included as outcomes. Contrary to predictions, linear multiple regression models found some evidence of a negative, direct association between DVS and body weight outcomes at baseline (b = –0·13). Though dietary variety was significantly associated with larger portions across time points (b = 41·86–82·64), a moderated mediation effect was not supported at baseline or follow-up (Index ≤ 0·035). Taken together, these findings provide population-level evidence to support a positive association between variety and food intake, which in turn has potential implications for body weight management, both in terms of moderating food intake and benefitting diet quality.
The canon of British First World War poetry seems well established and beyond dispute, with a set of key ‘representative’ poets referenced continuously. Yet these poets have been selected and promoted over the decades for various reasons. Moreover, how representative were they of the British experience of and reaction to the events of 1914–1918? Using a quantitative study of the poets and poems appearing in anthologies during and after the war, this essay reconsiders the true canon of the British poetical response to the war, charting the rise (and fall) of certain poets and why this might be so. It also considers the hidden canon of poetry that focuses on other theatres of war, at sea and in the air.
Childhood adversities (CAs) predict heightened risks of posttraumatic stress disorder (PTSD) and major depressive episode (MDE) among people exposed to adult traumatic events. Identifying which CAs put individuals at greatest risk for these adverse posttraumatic neuropsychiatric sequelae (APNS) is important for targeting prevention interventions.
Methods
Data came from n = 999 patients ages 18–75 presenting to 29 U.S. emergency departments after a motor vehicle collision (MVC) and followed for 3 months, the amount of time traditionally used to define chronic PTSD, in the Advancing Understanding of Recovery After Trauma (AURORA) study. Six CA types were self-reported at baseline: physical abuse, sexual abuse, emotional abuse, physical neglect, emotional neglect and bullying. Both dichotomous measures of ever experiencing each CA type and numeric measures of exposure frequency were included in the analysis. Risk ratios (RRs) of these CA measures as well as complex interactions among these measures were examined as predictors of APNS 3 months post-MVC. APNS was defined as meeting self-reported criteria for either PTSD based on the PTSD Checklist for DSM-5 and/or MDE based on the PROMIS Depression Short-Form 8b. We controlled for pre-MVC lifetime histories of PTSD and MDE. We also examined mediating effects through peritraumatic symptoms assessed in the emergency department and PTSD and MDE assessed in 2-week and 8-week follow-up surveys. Analyses were carried out with robust Poisson regression models.
Results
Most participants (90.9%) reported at least rarely having experienced some CA. Ever experiencing each CA other than emotional neglect was univariably associated with 3-month APNS (RRs = 1.31–1.60). Each CA frequency was also univariably associated with 3-month APNS (RRs = 1.65–2.45). In multivariable models, joint associations of CAs with 3-month APNS were additive, with frequency of emotional abuse (RR = 2.03; 95% CI = 1.43–2.87) and bullying (RR = 1.44; 95% CI = 0.99–2.10) being the strongest predictors. Control variable analyses found that these associations were largely explained by pre-MVC histories of PTSD and MDE.
Conclusions
Although individuals who experience frequent emotional abuse and bullying in childhood have a heightened risk of experiencing APNS after an adult MVC, these associations are largely mediated by prior histories of PTSD and MDE.
Birnbaum and Quispe-Torreblanca (2018) evaluated a set of six models developed under true-and-error theory against data in which people made choices in repeated gambles. They concluded the three models based on expected utility theory were inadequate accounts of the behavioral data, and argued in favor of the simplest of the remaining three more general models. To reach these conclusions, they used non-Bayesian statistical methods: frequentist point estimation of parameters, bootstrapped confidence intervals of parameters, and null hypothesis significance testing of models. We address the same research goals, based on the same models and the same data, using Bayesian methods. We implement the models as graphical models in JAGS to allow for computational Bayesian analysis. Our results are based on posterior distributions of parameters, posterior predictive checks of descriptive adequacy, and Bayes factors for model comparison. We compare the Bayesian results with those of Birnbaum and Quispe-Torreblanca (2018). We conclude that, while the very general conclusions of the two approaches agree, the Bayesian approach offers better detailed answers, especially for the key question of the evidence the data provide for and against the competing models. Finally, we discuss the conceptual and practical advantages of using Bayesian methods in judgment and decision making research highlighted by this case study.
The less-is-more effect predicts that people can be more accurate making paired-comparison decisions when they have less knowledge, in the sense that they do not recognize all of the items in the decision domain. The traditional theoretical explanation is that decisions based on recognizing one alternative but not the other can be more accurate than decisions based on partial knowledge of both alternatives. I present new data that directly test for the less-is-more effect, coming from a task in which participants judge which of two cities is larger and indicate whether they recognize each city. A group-level analysis of these data provides evidence in favor of the less-is-more effect: there is strong evidence people make decisions consistent with recognition, and that these decisions are more accurate than those based on knowledge. An individual-level analysis of the same data, however, provides evidence inconsistent with a simple interpretation of the less-is-more effect: there is no evidence for an inverse-U-shaped relationship between accuracy and recognition, and especially no evidence that individuals who recognize a moderate number of cities outperform individuals who recognize many cities. I suggest a reconciliation of these contrasting findings, based on the systematic change of the accuracy of recognition-based decisions with the underlying recognition rate. In particular, the data show that people who recognize almost none or almost all cities make more accurate decisions by applying the recognition heuristic, when compared to the accuracy achieved by people with intermediate recognition rates. The implications of these findings for precisely defining and understanding the less-is-more effect are discussed, as are the constraints our data potentially place on models of the learning and decision-making processes involved. Keywords: recognition heuristic, less-is-more effect.
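The inverse-U prediction discussed above follows from Goldstein and Gigerenzer's expected-accuracy analysis: when recognition-based decisions (validity alpha) beat knowledge-based decisions (validity beta), intermediate recognition can outperform full recognition. A sketch, with illustrative parameter values:

```python
def expected_accuracy(n, N, alpha, beta):
    """Expected paired-comparison accuracy for a person who recognises
    n of N items: pairs where exactly one item is recognised use the
    recognition heuristic (rate alpha), pairs where both are recognised
    use knowledge (rate beta), and the rest are guesses (rate 0.5)."""
    pairs = N * (N - 1) / 2
    one_recognised = n * (N - n)                  # recognition heuristic applies
    both_recognised = n * (n - 1) / 2             # knowledge used
    neither = (N - n) * (N - n - 1) / 2           # guessing
    return (one_recognised * alpha
            + both_recognised * beta
            + neither * 0.5) / pairs

# With alpha = 0.8 > beta = 0.6, partial recognition beats full knowledge:
print(expected_accuracy(100, 100, 0.8, 0.6))  # full recognition -> 0.6
print(expected_accuracy(75, 100, 0.8, 0.6))   # partial recognition -> higher
```

The individual-level data reported above are inconsistent with a naive reading of this curve, which is what motivates letting alpha itself vary with the recognition rate.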
We consider the recently-developed “surprisingly popular” method for aggregating decisions across a group of people (Prelec, Seung and McCoy, 2017). The method has shown impressive performance in a range of decision-making situations, but typically for situations in which the correct answer is already established. We consider the ability of the surprisingly popular method to make predictions in a situation where the correct answer does not exist at the time people are asked to make decisions. Specifically, we tested its ability to predict the winners of the 256 US National Football League (NFL) games in the 2017–2018 season. Each of these predictions used participants who self-rated as “extremely knowledgeable” about the NFL, drawn from a set of 100 participants recruited through Amazon Mechanical Turk (AMT). We compare the accuracy and calibration of the surprisingly popular method to a variety of alternatives: the mode and confidence-weighted predictions of the expert AMT participants, the individual and aggregated predictions of media experts, and a statistical Elo method based on the performance histories of the NFL teams. Our results are exploratory, and need replication, but we find that the surprisingly popular method outperforms all of these alternatives, and has reasonable calibration properties relating the confidence of its predictions to the accuracy of those predictions.
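The surprisingly popular method itself is simple to state: alongside their own answer, respondents predict how popular each answer will be, and the method selects the answer whose actual popularity most exceeds its predicted popularity. A minimal sketch with made-up vote data:

```python
def surprisingly_popular(votes, predicted_shares):
    """Select the answer whose actual vote share most exceeds the
    crowd's average prediction of its popularity.

    votes: list of chosen options, e.g. ["A", "B", "A"].
    predicted_shares: dict mapping each option to the mean predicted
        fraction of people expected to choose it.
    """
    n = len(votes)
    actual = {opt: votes.count(opt) / n for opt in predicted_shares}
    # Maximise the actual-minus-predicted popularity gap.
    return max(predicted_shares,
               key=lambda opt: actual.get(opt, 0.0) - predicted_shares[opt])

# 60% vote A, but the crowd expected A to get 80% of the vote:
# B is surprisingly popular (40% actual vs. 20% predicted).
votes = ["A"] * 6 + ["B"] * 4
print(surprisingly_popular(votes, {"A": 0.8, "B": 0.2}))  # -> B
```

The intuition is that a minority answer attracting more support than the crowd expected is a signal of hidden expert knowledge.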
We demonstrate the usefulness of cognitive models for combining human estimates of probabilities in two experiments. The first experiment involves people’s estimates of probabilities for general knowledge questions such as “What percentage of the world’s population speaks English as a first language?” The second experiment involves people’s estimates of probabilities in football (soccer) games, such as “What is the probability a team leading 1–0 at half time will win the game?”, with ground truths based on analysis of large corpus of games played in the past decade. In both experiments, we collect people’s probability estimates, and develop a cognitive model of the estimation process, including assumptions about the calibration of probabilities and individual differences. We show that the cognitive model approach outperforms standard statistical aggregation methods like the mean and the median for both experiments and, unlike most previous related work, is able to make good predictions in a fully unsupervised setting. We also show that the parameters inferred as part of the cognitive modeling, involving calibration and expertise, provide useful measures of the cognitive characteristics of individuals. We argue that the cognitive approach has the advantage of aggregating over latent human knowledge rather than observed estimates, and emphasize that it can be applied in predictive settings where answers are not yet available.
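One way to see why model-based aggregation can beat the mean or median is that averaging on the log-odds scale, with a calibration parameter, respects the bounded nature of probabilities. A minimal sketch of this idea — the `delta` parameter and the values below are illustrative, not the paper's fitted model:

```python
import math

def aggregate_log_odds(estimates, delta=1.0):
    """Aggregate probability estimates by averaging on the log-odds
    scale. delta is a simple calibration exponent: values below 1
    compress individual estimates toward 0.5, values above 1 expand
    them toward the extremes."""
    logits = [delta * math.log(p / (1.0 - p)) for p in estimates]
    mean_logit = sum(logits) / len(logits)
    return 1.0 / (1.0 + math.exp(-mean_logit))

# Three estimates of the same event's probability:
print(round(aggregate_log_odds([0.6, 0.7, 0.8]), 3))
```

A full cognitive model goes further, inferring per-person calibration and expertise parameters rather than applying one fixed transformation to everyone.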
Heuristic decision-making models, like Take-the-best, rely on environmental regularities. They conduct a limited search, and ignore available information, by assuming there is structure in the decision-making environment. Take-the-best relies on at least two regularities: diminishing returns, which says that information found earlier in search is more important than information found later; and correlated information, which says that information found early in search is predictive of information found later. We develop new approaches to determining search orders, and to measuring cue discriminability, that make the reliance of Take-the-best on these regularities clear, and open to manipulation. We then demonstrate, in the well-studied German cities environment, and three new city environments, when and how these regularities support Take-the-best. To do this, we focus not on the accuracy of Take-the-best, as most previous studies have, but on a measure of its coherence as a decision-making process. In particular, we consider whether Take-the-best decisions, based on a single piece of information, can be justified because an exhaustive search for information is unlikely to yield a different decision. Using this measure, we show that when the two environmental regularities are present, the decisions made by limited search are unlikely to have changed after exhaustive search, but that both regularities are often necessary.
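Take-the-best itself is simple to state: search cues in descending validity order and decide on the first cue that discriminates between the options, ignoring everything after it. A sketch with invented cue names and values, purely for illustration:

```python
def take_the_best(cues_a, cues_b, cue_order):
    """Choose between options A and B by checking binary cues (1/0)
    in descending validity order; stop and decide on the first cue
    where the two options differ."""
    for cue in cue_order:
        a, b = cues_a[cue], cues_b[cue]
        if a != b:                       # cue discriminates: stop searching
            return "A" if a > b else "B"
    return "guess"                       # no cue discriminates

# Hypothetical city comparison: neither is a capital, only A has an airport.
order = ["capital", "airport"]
city_a = {"capital": 0, "airport": 1}
city_b = {"capital": 0, "airport": 0}
print(take_the_best(city_a, city_b, order))  # -> A
```

The coherence question posed above amounts to asking how often continuing past that first discriminating cue would have reversed the decision.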
We study whether experts and novices differ in the way they make predictions about National Football League games. In particular, we measure to what extent their predictions are consistent with five environmental regularities that could support decision making based on heuristics. These regularities involve the home team winning more often, the team with the better win-loss record winning more often, the team favored by the majority of media experts winning more often, and two others related to surprise wins and losses in the teams’ previous game. Using signal detection theory and hierarchical Bayesian analysis, we show that expert predictions for the 2017 National Football League (NFL) season generally follow these regularities in a near optimal way, but novice predictions do not. These results support the idea that using heuristics adapted to the decision environment can support accurate predictions and be an indicator of expertise.
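Signal detection theory summarises prediction behaviour of this kind with a discriminability measure d′, computed from hit and false-alarm rates. A minimal sketch (the rates below are illustrative, not values from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Discriminability d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# A forecaster who predicts 84% of actual wins correctly but also
# predicts 16% of losses as wins:
print(round(d_prime(0.84, 0.16), 2))
```

Higher d′ means better separation of wins from losses; the hierarchical Bayesian analysis above additionally pools such estimates across individuals.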
We consider the wisdom of the crowd situation in which individuals make binary decisions, and the majority answer is used as the group decision. Using data sets from nine different domains, we examine the relationship between the size of the majority and the accuracy of the crowd decisions. We find empirically that these calibration curves take many different forms for different domains, and the distribution of majority sizes over decisions in a domain also varies widely. We develop a growth model for inferring and interpreting the calibration curve in a domain, and apply it to the same nine data sets using Bayesian methods. The modeling approach is able to infer important qualitative properties of a domain, such as whether it involves decisions that have ground truths or are inherently uncertain. It is also able to make inferences about important quantitative properties of a domain, such as how quickly the crowd accuracy increases as the size of the majority increases. We discuss potential applications of the measurement model, and the need to develop a psychological account of the variety of calibration curves that evidently exist.
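An empirical calibration curve of the kind described above can be computed directly: group decisions by the size of the majority and measure how often the majority answer is correct. A minimal sketch with made-up data:

```python
from collections import defaultdict

def majority_calibration(items):
    """items: list of (votes, truth) pairs, where votes is a list of
    0/1 individual answers and truth is the correct 0/1 answer.
    Returns {majority size: accuracy of the majority decision}."""
    bins = defaultdict(list)
    for votes, truth in items:
        share = sum(votes) / len(votes)
        majority = 1 if share >= 0.5 else 0
        size = round(max(share, 1 - share), 2)   # majority size in [0.5, 1]
        bins[size].append(int(majority == truth))
    return {s: sum(v) / len(v) for s, v in sorted(bins.items())}

# Two decisions: a 4-1 majority that is right, a 3-2 majority that is wrong.
print(majority_calibration([([1, 1, 1, 1, 0], 1), ([1, 1, 1, 0, 0], 0)]))
```

The growth model in the paper goes beyond this raw curve, inferring a smooth parametric relationship and its domain-level properties from the same counts.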
Hierarchical Bayesian methods offer a principled and comprehensive way to relate psychological models to data. Here we use them to model the patterns of information search, stopping and deciding in a simulated binary comparison judgment task. The simulation involves 20 subjects making 100 forced choice comparisons about the relative magnitudes of two objects (which of two German cities has more inhabitants). Two worked-examples show how hierarchical models can be developed to account for and explain the diversity of both search and stopping rules seen across the simulated individuals. We discuss how the results provide insight into current debates in the literature on heuristic decision making and argue that they demonstrate the power and flexibility of hierarchical Bayesian methods in modeling human decision-making.
Drafting is a competitive task in which a set of decision makers choose from a set of resources sequentially, with each resource becoming unavailable once selected. How people make these choices raises basic questions about human decision making, including people’s sensitivity to the statistical regularities of the resource environment, their ability to reason about the behavior of their competitors, and their ability to execute and adapt sophisticated strategies in dynamic situations involving uncertainty. Sports provides one real-world example of drafting behavior, in which a set of teams draft players from an available pool in a well-regulated way. Fantasy sport competitions provide potentially large data sets of drafting behavior. We study fantasy football drafting behavior from the 2017 National Football League (NFL) season based on 1350 leagues hosted by the http://sleeper.app platform. We find people are sensitive to some important environmental regularities in the order in which they draft players, but also present evidence that they use a narrower range of strategies than is likely optimal in terms of team composition. We find little to no evidence for the use of the complicated but well-documented strategy known as handcuffing, and no evidence of irrational influence from individual-level biases for different NFL teams. We do, however, identify a set of circumstances for which there is clear evidence that people’s choices are strongly influenced by the immediately preceding choice made by a competitor.
There are many ways to measure how people manage risk when they make decisions. A standard approach is to measure risk propensity using self-report questionnaires. An alternative approach is to use decision-making tasks that involve risk and uncertainty, and apply cognitive models of task behavior to infer parameters that measure people’s risk propensity. We report the results of a within-participants experiment that used three questionnaires and four decision-making tasks. The questionnaires are the Risk Propensity Scale, the Risk Taking Index, and the Domain Specific Risk Taking Scale. The decision-making tasks are the Balloon Analogue Risk Task, the preferential choice gambling task, the optimal stopping problem, and the bandit problem. We analyze the relationships between the risk measures and cognitive parameters using Bayesian inferences about the patterns of correlation, and using a novel cognitive latent variable modeling approach. The results show that people’s risk propensity is generally consistent within different conditions for each of the decision-making tasks. There is, however, little evidence that the way people manage risk generalizes across the tasks, or that it corresponds to the questionnaire measures.
With advances in care, an increasing number of individuals with single-ventricle CHD are surviving into adulthood. Partners of individuals with chronic illness have unique experiences and challenges. The goal of this pilot qualitative research study was to explore the lived experiences of partners of individuals with single-ventricle CHD.
Methods:
Partners of patients ≥18 years with single-ventricle CHD were recruited and participated in Experience Group sessions and 1:1 interviews. Experience Group sessions are lightly moderated groups that bring together individuals with similar circumstances to discuss their lived experiences, centring them as the experts. Formal inductive qualitative coding was performed to identify salient themes.
Results:
Six partners of patients participated. Of these, four were males and four were married; all were partners of someone of the opposite sex. Themes identified included uncertainty about their partners’ future health and mortality, becoming a lay CHD specialist, balancing multiple roles, and providing positivity and optimism. Over time, they took on a role as advocates for their partners and as repositories of medical history to help navigate the health system. Despite the uncertainties, participants described championing positivity and optimism for the future.
Conclusions:
In this first-of-its-kind pilot study, partners of individuals with single-ventricle CHD expressed unique challenges and experiences in their lives. There is a tacit need to design strategies to help partners cope with those challenges. Further larger-scale research is required to better understand the experiences of this unique population.
On continuous recognition tasks, changing the context in which objects are embedded impairs memory. Compared to younger adults, older adults perform worse on pattern separation tasks that require identifying similar objects. However, how contexts impact pattern separation in aging is unclear. The apolipoprotein E (APOE) ϵ4 allele may exacerbate possible age-related changes due to early, elevated neuropathology. The goal of this study is to determine how context and APOE status affect pattern separation among younger and older adults.
Method:
Older and younger ϵ4 carriers and noncarriers were given a continuous object recognition task. Participants indicated if objects on a Repeated White background, Repeated Scene, or a Novel Scene were old, similar, or new. The proportions of correct responses and the types of errors made were calculated.
Results:
Novel scenes lowered recognition scores compared to all other contexts for everyone. Younger adults outperformed older adults on identifying similar objects. Older adults misidentified similar objects as old more than new, and the repeated scene exacerbated this error. APOE status interacted with scene and age such that, in repeated scenes, younger carriers produced fewer false alarms, whereas older carriers produced more false alarms.
Conclusions:
Context impacted recognition memory in the same way for both age groups. Older adults underutilized details and over-relied on holistic information during pattern separation compared to younger adults. The triple interaction in false alarms may indicate an even greater reliance on holistic information among older adults with increased risk for Alzheimer’s disease.
One common and informative way that people express their beliefs, preferences, and opinions is by providing rankings. We use Thurstonian cognitive models to explore individual differences in naturally occurring ranking data for a variety of political, lifestyle, and sporting topics. After demonstrating that the standard Thurstonian model does not capture individual differences, we develop two extended models. The first allows for subgroups of people with different beliefs and opinions about all of the stimuli. The second allows for just a subset of polarized stimuli for which some people have different beliefs or opinions. We apply these two models, using Bayesian methods of inference, and demonstrate how they provide intuitive and useful accounts of the individual differences. We discuss the benefits of incorporating theory about individual differences into the processing assumptions of cognitive models, rather than through the statistical extensions that are currently often used in cognitive modeling.
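The core Thurstonian assumption — observed rankings arise from ordering noisy latent utilities — can be sketched in a few lines. The mean utilities below are hypothetical, and this is a forward simulation only, not the Bayesian inference used in the paper:

```python
import random

def thurstonian_ranking(means, sigma=1.0, seed=None):
    """Simulate one ranking under a basic Thurstonian model: draw a
    latent utility for each item from Normal(mean_i, sigma), then
    rank the items from highest to lowest sampled utility."""
    rng = random.Random(seed)
    utilities = [rng.gauss(m, sigma) for m in means]
    return sorted(range(len(means)), key=lambda i: -utilities[i])

# With near-zero noise, the ranking simply follows the mean utilities:
print(thurstonian_ranking([3.0, 1.0, 2.0], sigma=1e-9, seed=1))  # -> [0, 2, 1]
```

The extended models described above add structure on top of this: subgroup-specific means in one case, and item-specific polarization in the other.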
We present the Widefield ASKAP L-band Legacy All-sky Blind surveY (WALLABY) Pilot Phase I Hi kinematic models. This first data release consists of Hi observations of three fields in the direction of the Hydra and Norma clusters, and the NGC 4636 galaxy group. In this paper, we describe how we generate and publicly release flat-disk tilted-ring kinematic models for 109/592 unique Hi detections in these fields. The modelling method adopted here—which we call the WALLABY Kinematic Analysis Proto-Pipeline (WKAPP) and for which the corresponding scripts are also publicly available—consists of combining results from the homogeneous application of the FAT and 3DBarolo algorithms to the subset of 209 detections with sufficient resolution and $S/N$ in order to generate optimised model parameters and uncertainties. The 109 models presented here tend to be gas-rich detections resolved by at least 3–4 synthesised beams across their major axes, but there is no obvious environmental bias in the modelling. The data release described here is the first step towards the derivation of similar products for thousands of spatially resolved WALLABY detections via a dedicated kinematic pipeline. Such a large, publicly available, and homogeneously analysed dataset will be a powerful legacy product that will enable a wide range of scientific studies.