Humanity’s situation with climate change is sometimes compared to that of a frog in a slowly boiling pot of water. Most of our climate science takes the form of prediction: telling the frog that in five minutes’ time he will be a little bit warmer. We need more risk assessment: telling the frog that the worst that could happen is he could boil to death, and that this is becoming increasingly likely over time. This approach can give a much clearer picture of the risks of climate change to human health, food security, and coastal cities.
We developed a mechanistic model that simulates novel coronavirus (COVID-19) transmission dynamics under the combined effects of human adaptive behaviours and vaccination, with the aim of predicting the end time of COVID-19 infection on a global scale. Based on surveillance information (reported cases and vaccination data) between 22 January 2020 and 18 July 2022, we validated the model using a Markov chain Monte Carlo (MCMC) fitting method. We found that (1) without adaptive behaviours, the epidemic could sweep the world in 2022 and 2023, causing 3.098 billion human infections, 5.39 times the current number; (2) vaccination could prevent infection in 645 million people; and (3) under the current scenario of protective behaviours and vaccination, infections would increase slowly, level off around 2023, and end completely in June 2025, causing 1.024 billion infections and 12.5 million deaths. Our findings suggest that vaccination and collective protective behaviour remain the key determinants of the global course of COVID-19 transmission.
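The abstract does not reproduce the model equations or the MCMC calibration. The sketch below is only a minimal compartmental illustration of the two ingredients it names, a vaccination flow and a behaviour-dependent transmission rate; every compartment name and parameter value here is an invented placeholder, not the authors' model.

```python
# Illustrative only: a minimal SIR-type sketch with vaccination and an
# adaptive-behaviour term. The paper's actual mechanism model, parameters,
# and MCMC fitting are not given in the abstract; all values are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

def sirv(t, y, beta0, k, gamma, nu):
    """S, I, R, V compartments; transmission is damped as infections grow,
    standing in for adaptive protective behaviour."""
    S, I, R, V = y
    N = S + I + R + V
    beta = beta0 / (1.0 + k * I / N)   # behaviour-adjusted transmission rate
    new_inf = beta * S * I / N
    dS = -new_inf - nu * S             # nu: per-capita vaccination rate
    dI = new_inf - gamma * I
    dR = gamma * I
    dV = nu * S
    return [dS, dI, dR, dV]

y0 = [7.9e9 - 1e4, 1e4, 0.0, 0.0]      # hypothetical initial conditions
sol = solve_ivp(sirv, (0, 1500), y0, args=(0.3, 50.0, 1 / 7, 1e-3),
                dense_output=True)
print("peak infections:", sol.y[1].max())
```

In the real study, parameters such as the transmission and behaviour-response rates would be estimated by MCMC from the reported case and vaccination data rather than fixed by hand as above.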
Determining accurate standard time using direct measurement techniques is especially challenging in companies that do not have a proper environment for time measurement studies or that manufacture items requiring complex production schedules. New and specific time measurement techniques are required for such companies. This research developed a novel time estimation approach based on several machine learning methods. The inputs collected in the manufacturing environment included the number of products, the number of welding operations, the product’s surface area factor, difficulty/working environment factors, and the number of metal forming processes. The data were collected from one of the largest bus manufacturing companies in Turkey. Experimental results demonstrate that, when model accuracy was measured using performance measures, k-nearest neighbors outperformed the other machine learning techniques in terms of prediction accuracy. “The number of welding operations” and “the number of pieces” were found to be the most effective parameters. The findings show that machine learning algorithms can estimate standard time, and other companies that manufacture similar products can use them to lower production costs, increase productivity, and run their operating processes more efficiently.
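As an illustration of the kind of k-nearest-neighbors standard-time estimator described above, the sketch below uses the feature set named in the abstract but with synthetic data; the neighbour count, preprocessing, and target relationship are assumptions for the example, not the study's pipeline.

```python
# Hedged sketch: k-NN regression of standard time from the abstract's
# feature set, on synthetic stand-in data.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.integers(1, 50, n),        # number of pieces / products
    rng.integers(0, 300, n),       # number of welding operations
    rng.uniform(0.5, 5.0, n),      # surface area factor
    rng.uniform(1.0, 2.0, n),      # difficulty / working environment factor
    rng.integers(0, 40, n),        # number of metal forming processes
])
# Synthetic "standard time" dominated by pieces and welding operations,
# loosely mirroring the abstract's finding about the most effective parameters.
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 5, n)

model = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5))
print("cross-validated R^2:", cross_val_score(model, X, y, cv=5).mean())
```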
In six studies, we find evidence for an upward mobility bias, or a tendency to predict that a rise in ranking is more likely than a decline, even in domains where motivation or intention to rise plays no role. Although people cannot willfully change their height (Study 1), and geographical entities cannot willfully alter their temperature (Study 2), number of natural disasters (Study 3), levels of precipitation (Studies 4A and 4B), or chemical concentration (Study 5), subjects believed that each is more likely to rise than drop in ranking. This bias is due to an association between a ranking’s order and the direction of absolute change, and to the tendency to give considerable weight to a focal agent over non-focal agents. Because people generally expect change to be represented in terms of higher ranks, and because they tend to focus on specific, focal targets, they believe that any given target will experience a larger relative increase than other targets. We discuss implications for social policy.
The Welfare Quality® (WQ) protocols are increasingly used for assessing welfare of farm animals. These protocols are time consuming (about one day per farm) and, therefore, costly. Our aim was to assess the scope for reduction of on-farm assessment time of the WQ protocol for dairy cattle. Seven trained observers quantified animal-based indicators of the WQ protocol in 181 loose-housed and 13 tied Dutch dairy herds (herd size from 10 to 211 cows). Four assessment methods were used: avoidance distance at the feeding rack (ADF, 44 min); qualitative behaviour assessment (QBA, 25 min); behavioural observations (BO, 150 min); and clinical observations (CO, 132 min). To simulate reduction of on-farm assessment time, a set of WQ indicators belonging to one assessment method was omitted from the protocol. Observed values of omitted indicators were replaced by predictions based on WQ indicators of the remaining three assessment methods, resources checklist, and interview, thus mimicking the performance of the full WQ protocol. Agreement between predicted and observed values of WQ indicators, however, was low for ADF, moderate for QBA, slight to moderate for BO, and poor to moderate for CO. It was concluded that replacing animal-based WQ indicators by predictions based on remaining WQ indicators shows little scope for reduction of on-farm assessment time of the Welfare Quality® protocol for dairy cattle. Other ways to reduce on-farm assessment time of the WQ protocol for dairy cattle, such as the use of additional data or automated monitoring systems, should be investigated.
Fast-and-frugal trees (FFTs) are simple algorithms that facilitate efficient and accurate decisions based on limited information. But despite their successful use in many applied domains, there is no widely available toolbox that allows anyone to easily create, visualize, and evaluate FFTs. We fill this gap by introducing the R package FFTrees. In this paper, we explain how FFTs work, introduce a new class of algorithms called fan for constructing FFTs, and provide a tutorial for using the FFTrees package. We then conduct a simulation across ten real-world datasets to test how well FFTs created by FFTrees can predict data. Simulation results show that FFTs created by FFTrees can predict data as well as popular classification algorithms such as regression and random forests, while remaining simple enough for anyone to understand and use.
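FFTrees itself is an R package, so the snippet below is not its API; it is only a hand-rolled Python sketch of what a fast-and-frugal tree does: cues are checked in a fixed order, and each cue can trigger an immediate decision ("exit"). The cue names and thresholds are invented for illustration.

```python
# Minimal fast-and-frugal tree: an ordered list of cues, each with a
# threshold and the decision returned if the cue fires; the last cue decides
# either way. Not the FFTrees package; an illustrative re-implementation.
def fft_classify(case, cues):
    """cues: list of (name, threshold, decision_if_above), checked in order."""
    for i, (name, threshold, decision_if_above) in enumerate(cues):
        fired = case[name] > threshold
        if fired:
            return decision_if_above
        if i == len(cues) - 1:        # final cue: exit in both directions
            return not decision_if_above
    raise ValueError("empty cue list")

# Hypothetical heart-disease-style cues (thresholds invented).
cues = [("thalach", 168, False),   # high max heart rate  -> low risk
        ("cp", 3, True),           # severe chest pain    -> high risk
        ("ca", 0, True)]           # any blocked vessels  -> high risk
print(fft_classify({"thalach": 150, "cp": 4, "ca": 0}, cues))  # True (high risk)
```

The appeal the abstract points to is visible even in this toy: the whole decision rule fits in three lines a person can read and apply by hand.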
Psychologists typically measure beliefs and preferences using self-reports, whereas economists are much more likely to infer them from behavior. Prediction markets appear to be a victory for the economic approach, having yielded more accurate probability estimates than opinion polls or experts for a wide variety of events, all without ever asking for self-reported beliefs. We conduct the most direct comparison to date of prediction markets to simple self-reports using a within-subject design. Our participants traded on the likelihood of geopolitical events. Each time they placed a trade, they first had to report their belief that the event would occur on a 0–100 scale. When previously validated aggregation algorithms were applied to self-reported beliefs, they were at least as accurate as prediction-market prices in predicting a wide range of geopolitical events. Furthermore, the combination of approaches was significantly more accurate than prediction-market prices alone, indicating that self-reports contained information that the market did not efficiently aggregate. Combining measurement techniques across behavioral and social sciences may have greater benefits than previously thought.
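The abstract does not name the validated aggregation algorithms that were applied to the self-reported beliefs. One simple, commonly used form in this literature is to average forecasts on the log-odds scale and then "extremize" the pooled value; the sketch below shows only that generic idea, with an arbitrary extremizing constant, and should not be read as the authors' method.

```python
# Hedged sketch of a generic probability-aggregation rule: mean log-odds,
# pushed away from 0.5 by an extremizing exponent (value chosen arbitrarily).
import numpy as np

def aggregate(probs, extremize=2.0, eps=1e-6):
    p = np.clip(np.asarray(probs, float), eps, 1 - eps)
    log_odds = np.log(p / (1 - p))
    pooled = log_odds.mean() * extremize   # extremized pooled forecast
    return 1 / (1 + np.exp(-pooled))

# Three hypothetical self-reported beliefs about the same geopolitical event.
print(aggregate([0.6, 0.7, 0.55]))         # pooled, extremized probability
```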
Predictions of magnitudes (costs, durations, environmental events) are often given as uncertainty intervals (ranges). When are such forecasts judged to be correct? We report results of four experiments showing that forecasted ranges of expected natural events (floods and volcanic eruptions) are perceived as accurate when an observed magnitude falls inside or at the boundary of the range, with little regard to its position relative to the “most likely” (central) estimate. All outcomes that fell inside a wide interval were perceived as equally well captured by the forecast, whereas identical outcomes falling outside a narrow range were deemed to be incorrectly predicted, in proportion to the magnitude of deviation. In these studies, ranges function as categories, with boundaries distinguishing between right and wrong predictions, even for outcome distributions that are acknowledged as continuous, and for boundaries that are arbitrarily defined (for instance, when the narrow prediction interval is defined as capturing 50 percent and the wide 90 percent of all potential outcomes). However, the boundary effect depends on how the boundary is labelled. When the upper limit of a range is described as a value that “can” occur (Experiment 5), outcomes both below and beyond this value were regarded as consistent with the forecast.
People frequently underestimate the time needed to complete tasks, and we examined a strategy – known as backward planning – that may counteract this optimistic bias. Backward planning involves starting a plan at the end goal and then working through the required steps in reverse-chronological order, and is commonly advocated by practitioners as a tool for developing realistic plans and projections. We conducted four experiments to test its effects on completion time predictions and related cognitive processes. Participants planned for a task in one of three directions (backward, forward, or unspecified) and predicted when it would be finished. As hypothesized, predicted completion times were longer (Studies 1–4) and thus less biased (Study 4) in the backward condition than in the forward and unspecified conditions. Process measures suggested that backward planning may increase attention to situational factors that delay progress (e.g., obstacles, interruptions, competing demands), elicit novel planning insights, and alter the conceptualization of time.
Recent investigations of adolescents’ beliefs about risk have led to surprisingly optimistic conclusions: Teens’ self estimates of their likelihood of experiencing various life events not only correlate sensibly with relevant risk factors (Fischhoff et al., 2000), but they also significantly predict later experiencing the events (Bruine de Bruin et al., 2007). Using the same dataset examined in previous investigations, the present study extended these analyses by comparing the predictive value of self estimates of risk to that of traditional risk factors for each outcome. The analyses focused on the prediction of pregnancy, criminal arrest, and school enrollment. Three findings emerged. First, traditional risk factor information tended to out-predict self assessments of risk, even when the risk factors included crude, potentially unreliable measures (e.g., a simple tally of self-reported criminal history) and when the risk factors were aggregated in a nonoptimal way (i.e., unit weighting). Second, despite the previously reported correlations between self estimates and outcomes, perceived invulnerability was a problem among the youth: Over half of the teens who became pregnant, half of those who were not enrolled in school, and nearly a third of those who were arrested had, one year earlier, indicated a 0% chance of experiencing these outcomes. Finally, adding self estimates of risk to the other risk factor information produced only small gains in predictive accuracy. These analyses point to the need for greater education about the situations and behaviors that lead to negative outcomes.
Errors in estimating and forecasting often result from the failure to collect and consider enough relevant information. We examine whether attributes associated with persistence in information acquisition can predict performance in an estimation task. We focus on actively open-minded thinking (AOT), need for cognition, grit, and the tendency to maximize or satisfice when making decisions. In three studies, participants made estimates and predictions of uncertain quantities, with varying levels of control over the amount of information they could collect before estimating. Only AOT predicted performance. This relationship was mediated by information acquisition: AOT predicted the tendency to collect information, and information acquisition predicted performance. To the extent that available information is predictive of future outcomes, actively open-minded thinkers are more likely than others to make accurate forecasts.
Climate computer models are irreplaceable scientific tools for studying the climate system and for projecting future climate change. They play a major role in IPCC reports, underpinning paleoclimate reconstructions, attribution studies, scenarios of future climate change, and concepts such as climate sensitivity and carbon budgets. While models have greatly contributed to the construction of climate change as a global problem, they are also influenced by political expectations. Models have their limits: they never escape uncertainty, and they attract criticism, in particular for their hegemonic role in climate science. And yet climate models and their simulations of past, present and future climates, coordinated via an efficient model intercomparison project, have greatly contributed to the IPCC’s epistemic credibility and authority.
Comprehenders predict what a speaker is likely to say when listening to non-native (L2) and native (L1) utterances. But what are the characteristics of L2 prediction, and how does it relate to L1 prediction? We addressed this question in a visual-world eye-tracking experiment, which tested when L2 English comprehenders integrated perspective into their predictions. Male and female participants listened to male and female speakers producing sentences (e.g., I would like to wear the nice…) about stereotypically masculine (target: tie; distractor: drill) and feminine (target: dress; distractor: hairdryer) objects. Participants predicted associatively, fixating objects semantically associated with critical verbs (here, the tie and the dress). They also predicted stereotypically consistent objects (e.g., the tie rather than the dress, given the male speaker). Consistent predictions were made later than associative predictions, and were delayed for L2 speakers relative to L1 speakers. These findings suggest prediction involves both automatic and non-automatic stages.
The winter stratospheric polar vortex (SPV) exhibits considerable variability in magnitude and structure, which can result in extreme SPV events. These extremes can subsequently influence weather in the troposphere for weeks to months and are thus important sources of surface predictability. However, the predictability of extreme SPV events is limited to 1–2 weeks in state-of-the-art prediction systems, and longer SPV predictability timescales would strongly benefit long-range surface prediction. One potential option for extending predictability timescales is the use of machine learning (ML). However, it is often unclear which predictors and patterns are important for ML models to make a successful prediction. Here we use explainable multiple linear regressions (MLRs) and an explainable artificial neural network (ANN) framework to model SPV variations and to identify one type of extreme SPV event, sudden stratospheric warmings. We employ an NN attribution method to propagate the ANN’s decision-making process backward and uncover feature importance in the predictors. The feature importance of the input is consistent with the known precursors of extreme SPV events. This consistency provides confidence that ANNs can extract reliable and physically meaningful indicators for the prediction of the SPV. In addition, our study shows that a simple MLR model can predict daily SPV variations using sequential feature selection, which provides hints about the connections between the input features and the SPV variations. Our results indicate the potential of explainable ML techniques for predicting stratospheric variability and extreme events, and for identifying potential precursors of these events on extended-range timescales.
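To make the "MLR with sequential feature selection" idea concrete, the sketch below fits a multiple linear regression after forward sequential feature selection; the predictors and target are synthetic stand-ins, not the stratospheric fields used in the study, and the number of selected features is an arbitrary choice for the example.

```python
# Hedged sketch: forward sequential feature selection feeding a multiple
# linear regression, on synthetic data standing in for SPV predictors.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 12))                                # 12 hypothetical lagged predictors
y = 1.5 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(0, 0.5, 500)   # synthetic SPV index

sfs = SequentialFeatureSelector(LinearRegression(),
                                n_features_to_select=4,
                                direction="forward", cv=5)
sfs.fit(X, y)
print("selected predictor columns:", np.flatnonzero(sfs.get_support()))

mlr = LinearRegression().fit(sfs.transform(X), y)
print("MLR coefficients:", mlr.coef_)
```

The explainability the abstract emphasizes comes from exactly these two outputs: which predictors survive selection and the signs and sizes of their regression coefficients.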
Central and peripheral biomarkers can be used to diagnose, treat, and potentially prevent major psychiatric disorders. But there is uncertainty about the role of these biological signatures in neural pathophysiology, and their clinical significance has yet to be firmly established. Psychomotor, cognitive, affective, and volitional impairment in these disorders results from the interaction between neural, immune, endocrine, and enteric systems, which in turn are influenced by a person’s interaction with the environment. Biomarkers may be a critical component of this process. The identification and interpretation of biomarkers also raise ethical and social questions. This article analyzes and discusses these aspects of biomarkers and how advances in biomarker research could contribute to personalized psychiatry that could prevent or mitigate the effects of these disorders.
Corruption has pervasive effects on economic development and the well-being of the population. Despite being crucial and necessary, fighting corruption is not an easy task, because corruption is a difficult phenomenon to measure and detect. However, recent advances in the field of artificial intelligence may help in this quest. In this article, we propose the use of machine-learning models to predict municipality-level corruption in a developing country. Using data from disciplinary prosecutions conducted by an anti-corruption agency in Colombia, we trained four canonical models (Random Forests, Gradient Boosting Machine, Lasso, and Neural Networks), and ensembled their predictions, to predict whether or not a mayor will commit acts of corruption. Our models achieve acceptable levels of performance, based on metrics such as precision and the area under the receiver-operating characteristic curve, demonstrating that these tools are useful in predicting where misbehavior is most likely to occur. Moreover, our feature-importance analysis shows which groups of variables are most important in predicting corruption.
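The sketch below illustrates the ensembling step described above by averaging predicted probabilities from the four model families named in the abstract and scoring with AUC; it runs on synthetic data and uses generic scikit-learn estimators, so it is an assumption-laden stand-in rather than the authors' code or data.

```python
# Hedged sketch: average predicted probabilities from Random Forest, GBM,
# an L1-penalized logistic regression (Lasso-style), and a neural network,
# then evaluate with ROC AUC on a synthetic binary outcome.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [
    RandomForestClassifier(random_state=0),
    GradientBoostingClassifier(random_state=0),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),  # Lasso-style
    MLPClassifier(max_iter=1000, random_state=0),
]
probs = np.mean([m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1] for m in models],
                axis=0)
print("ensemble AUC:", roc_auc_score(y_te, probs))
```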
How do violations of predictability and plausibility affect online language processing? And how are longer-term memory and learning affected when predictions are disconfirmed by plausible or implausible words? We investigated these questions using a self-paced sentence reading and noun recognition task. Critical sentences violated predictability or plausibility or both, for example, “Since Anne is afraid of spiders, she doesn’t like going down into the … basement (predictable, plausible), garden (unpredictable, somewhat plausible), moon (unpredictable, deeply implausible).” Results from sentence reading showed earlier-emerging effects of predictability violations on the critical noun, but later-emerging effects of plausibility violations after the noun. Recognition memory was enhanced exclusively for deeply implausible nouns. The earlier-emerging predictability effect indicates that having word-form predictions disconfirmed is registered very early in the processing stream, irrespective of semantics. The later-emerging plausibility effect supports models that argue for a staged architecture of reading comprehension, in which plausibility only affects a post-lexical integration stage. Our memory results suggest that, in order to facilitate memory and learning, a certain magnitude of prediction error is required.
This review provides an update on what we know about differences in prediction in a first and second language after several years of extensive research. It shows when L1/L2 differences are most likely to occur and provides an explanation as to why they occur. For example, L2 speakers may capitalize more on semantic information for prediction than L1 speakers, or possibly they do not make predictions due to differences in the weighting of cues. A different weighting of cues can be the result of prior experience from the L1 and/or the prior experience in an experiment which affects L1 and L2 processing to a different extent. Overall, prediction in L2 processing often emerges later and/or is weaker than in L1 processing. Because L2 processing is generally slower, L1/L2 differences are likely to occur at certain levels of prediction, most notably at the form level, in line with a prediction-by-production mechanism.