
Advancing theorizing about fast-and-slow thinking

Published online by Cambridge University Press:  02 September 2022

Wim De Neys*
LaPsyDE, CNRS, Université Paris Cité, Paris, France. wim.de-neys@u-paris.fr

Abstract

Human reasoning is often conceived as an interplay between a more intuitive and a more deliberate thought process. In the last 50 years, influential fast-and-slow dual-process models that capitalize on this distinction have been used to account for numerous phenomena – from logical reasoning biases and prosocial behavior to moral decision making. The present paper clarifies that despite their popularity, critical assumptions are poorly conceived. My critique focuses on two interconnected foundational issues: the exclusivity and switch features. The exclusivity feature refers to the tendency to conceive intuition and deliberation as generating unique responses such that one type of response is assumed to be beyond the capability of the fast-intuitive processing mode. I review the empirical evidence in key fields and show that there is no solid ground for such exclusivity. The switch feature concerns the mechanism by which a reasoner decides to shift between more intuitive and deliberate processing. I present an overview of leading switch accounts and show that they are conceptually problematic – precisely because they presuppose exclusivity. I build on these insights to sketch the groundwork for a more viable dual-process architecture and illustrate how it can set a new research agenda to advance the field in the coming years.

Target Article
Copyright © The Author(s), 2022. Published by Cambridge University Press

Sometimes thinking can be hard. As majestically portrayed in Rodin's “The Thinker” sculpture, in these cases arriving at a problem solution takes laborious inferencing. At other times, however, thinking can be surprisingly easy. If you ask an educated adult how much half of $100 is, in what city the Statue of Liberty is located, or whether a toddler should be allowed to drink beer, they can answer in a split second. At least since antiquity, such duality in our mental experiences has led to the idea that there are two types of thinking, one that is fast and effortless, and one that is slower and requires more effort (Frankish & Evans, 2009; Pennycook, 2017). This distinction between what is often referred to as a more intuitive and a more deliberate mode of cognitive processing – or the nowadays more popular “system 1” and “system 2” labels – lies at the heart of the influential “fast-and-slow” dual-process view that has been prominent in research on human thinking since the 1960s (Evans, 2008; Kahneman, 2011).

It is presumably hard to overestimate the popularity of dual-process models in current-day psychology, economics, philosophy, and related disciplines (Chater & Schwarzlose, 2016; Melnikoff & Bargh, 2018). As De Neys (2021) clarified, they have been applied in a very wide range of fields including research on thinking biases (Evans, 2002; Kahneman, 2011), morality (Greene & Haidt, 2002), human cooperation (Rand, Greene, & Nowak, 2012), religiosity (Gervais & Norenzayan, 2012), social cognition (Chaiken & Trope, 1999), management science (Achtziger & Alós-Ferrer, 2014), medical diagnosis (Djulbegovic, Hozo, Beckstead, Tsalatsanis, & Pauker, 2012), time perception (Hoerl & McCormack, 2019), health behavior (Hofmann, Friese, & Wiers, 2008), theory of mind (Wiesmann, Friederici, Singer, & Steinbeis, 2020), intelligence (Kaufman, 2011), creativity (Barr, Pennycook, Stolz, & Fugelsang, 2015), fake news susceptibility (Bago, Rand, & Pennycook, 2020), and even machine thinking (Bonnefon & Rahwan, 2020). In addition, the dual-process framework is regularly featured in the popular media (Lemir, 2021; Shefrin, 2013; Tett, 2021) and has inspired policy recommendations on topics ranging from economic development (World Bank Group, 2015) and carbon emissions (Beattie, 2012) to the coronavirus pandemic (Sunstein, 2020).

The present paper aims to clarify that despite this popularity, much of the current-day use of dual-process models is poorly conceived. Foundational assumptions are empirically questionable and/or conceptually problematic. I argue that a core underlying problem is the exclusivity feature: the tendency to conceive intuition and deliberation as generating unique responses such that one type of response is exclusively tied to deliberation and is assumed to be beyond the reach of the intuitive system. For example, influential dual-process accounts of biases in logical reasoning rely on exclusivity when attributing flawed thinking to a failure to correct an intuitively generated response with a deliberate response (Evans & Stanovich, 2013; Kahneman, 2011). Likewise, dual-process accounts of moral and prosocial reasoning rely on it to explain how intuitive emotional responses prevent us from taking the consequences of our actions into account (e.g., Greene, 2013; Greene & Haidt, 2002) or to clarify why people behave selfishly rather than cooperate (e.g., Rand et al., 2012). In section 1, I review the empirical evidence in key fields and show that although the exclusivity assumption might be appealing, there is no solid ground for it.

In section 2, I focus on a conceptual consequence of the exclusivity feature. Any dual-process model needs a switch mechanism that allows us to shift between intuitive and deliberate processing. Given that we can use two types of reasoning, there might be cases in which either one will be more or less beneficial. But how do we know that we can rely on an intuitively cued problem solution or need to engage in costly further deliberation? And when do we switch back to the intuitive processing mode once we start deliberating? I review popular traditional dual-process accounts for the switch issue and show that they are conceptually problematic – precisely because they presuppose exclusivity. In section 3, I build on this insight and recent theoretical advances to sketch a more viable general dual-process architecture that can serve as theoretical groundwork to build future dual-process models in various fields. Finally, in the closing section, I use the model to identify new and outstanding questions that should advance the field in the coming years.

Before moving to the main sections, it might be a good idea to clarify my use of the nomenclature. I adopt the fast-and-slow dual-process label as a general header to refer to models that posit an interaction between intuitive and deliberate reasoning processes. Dual-process theories are sometimes opposed to single-process theories. Both single- and dual-process theories focus on the interaction between intuition and deliberation. But they differ on whether the difference between the two types of processing should be conceived as merely quantitative or qualitative in nature (see De Neys, 2021, for a recent review). My argument here is completely orthogonal to this issue (see sect. 4.8). My criticism and recommendations apply equally to single- and dual-process models. I stick to the dual-process label simply because it is more widely adopted.

A wide range of labels is also used to refer to the two types of reasoning posited by dual-process models (e.g., type 1/2, system 1/2, heuristic/analytic thinking, associative/rule-based thinking, automatic/reflective, intuitive/deliberate, etc.). I will stick here to the traditional labels “intuitive” and “deliberate” processing as well as the nowadays more popular “system 1” and “system 2” processing. The system term can sometimes refer to a specific subtype of dual-process models (Gawronski & Creighton, 2013). Here it is used in a generic, general sense. As in Kahneman (2011), system 1 and 2 processing can be interpreted as synonyms for the type of effortless intuiting and effortful deliberating that are traditionally contrasted in dual-process theories.

1. Exclusivity in dual-process models

As briefly introduced, the exclusivity feature refers to the tendency to associate intuitive and deliberate processing with the computation of unique responses. System 1 is believed to be responsible for generating a response X and system 2 is responsible for generating an alternative response Y. Critically, this is where the exclusivity lies: Generation of the alleged deliberate response is, by definition, believed to be beyond the capacity of the intuitive system 1.

This simple exclusive dichotomization is appealing. System 1 quickly provides us with one type of response. If we want to generate the alternative response, we will necessarily need to switch to effortful deliberation. By combining this with the human tendency to minimize cognitive effort (“cognitive miserliness,” e.g., Stanovich & West, 2000) one has a seemingly simple account of a wide range of mental processes. To illustrate, below I sketch in more detail how popular dual-process models in various fields rely on the exclusivity assumption. I focus on the dual-process models of logical, moral, and prosocial reasoning because these have been among the most influential applications and allow me to demonstrate the generality of the findings. I present a brief introduction of the paradigmatic model in each field and then move to a discussion of the empirical evidence.

1.1 Logical, moral, and prosocial dual-process exclusivity

1.1.1 Logical reasoning bias

One of the first fields in the cognitive sciences in which dual-process models were popularized is research on “biases” in logical reasoning (e.g., Evans, 2016; Kahneman, 2000, 2011; Wason, 1960; Wason & Evans, 1975). Since the 1960s, numerous studies have shown that people readily violate the most elementary logical, mathematical, or probabilistic rules when a task cues an intuitive response that conflicts with these principles (Footnote 1).

For example, imagine that we have two trays with red and white marbles. There's a small tray with 10 marbles of which one is red. There is also a large tray holding 100 marbles of which nine are red. You can draw one marble from either one of the trays. If the marble is red, you win a nice prize. Which tray should you draw from to maximize your chances of winning? From a logical point of view, it is clear that the small tray gives you a 10% chance of drawing a red marble (1/10) whereas the large tray gives you only a 9% (9/100) chance. However, people often prefer to draw from the large tray because they intuitively tend to use the absolute number of red marbles as a shortcut or “heuristic” to guide their inferences (Epstein, 1994). Obviously, there are indeed more red marbles in the large tray than in the small tray (i.e., nine vs. one). If both trays held the same number of white marbles, this simple focus on absolute numbers would lead to a correct judgment. However, in the problem in question, there are also far more white marbles in the large tray. If you take the ratios into account, it is crystal clear that you need to draw from the small tray. Unfortunately, the available evidence suggests that in situations in which an intuitive association cues a response that conflicts with more logical considerations (e.g., the role of denominators or ratios), people seem to neglect the logical principle and opt for the intuitively cued conclusion (Kahneman, 2011; Footnote 2). Hence, our intuitions often seem to lead us astray and bias our judgment.
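To make the arithmetic concrete, here is a minimal illustrative sketch (my own, not from the cited studies; the function name and the nine-white-marble variant are assumptions introduced for illustration) contrasting the absolute-number heuristic with the ratio rule:

# The absolute-number heuristic agrees with the ratio rule only when
# both trays hold a comparable number of white marbles.

def win_probability(red: int, total: int) -> float:
    """Probability of drawing a red marble from a tray."""
    return red / total

# Conflict version: the heuristic (9 > 1 red marbles) favors the large tray,
# but the ratios favor the small tray.
print(win_probability(1, 10))   # 0.10, small tray
print(win_probability(9, 100))  # 0.09, large tray

# No-conflict variant: nine white marbles in each tray. Here the
# absolute-number heuristic happens to yield the correct choice.
print(win_probability(1, 10))   # 0.10, small tray (1 red, 9 white)
print(win_probability(9, 18))   # 0.50, large tray (9 red, 9 white)

In the conflict version the heuristic and the ratio rule point to different trays, which is exactly the situation the classic bias tasks are designed to create.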

The dual-process framework presents a simple and elegant explanation for the bias phenomenon (Evans, 2008; Kahneman, 2011). In general, dual-process theorists have traditionally highlighted that taking logical principles into account typically requires demanding system 2 deliberation (e.g., Evans, 2002, 2008; Evans & Over, 1996; Kahneman, 2011; Stanovich & West, 2000). Because human reasoners have a strong tendency to minimize demanding computations, they will often refrain from engaging or completing the slow deliberate processing when mere intuitive processing has already cued a response (Evans & Stanovich, 2013; Kahneman, 2011). Consequently, most reasoners will simply stick to the intuitive response that quickly came to mind and fail to consider the logical implications. Only the few reasoners who have sufficient resources and motivation to complete the deliberate computations and override the initially generated intuitive response will manage to reason correctly and give the logical response (Stanovich & West, 2000).

This illustrates how the bias account critically relies on the exclusivity assumption. Taking logical principles in classic reasoning tasks into account is uniquely linked to deliberation. Because this is out of reach of the intuitive system, sound reasoning will require us to switch from system 1 to demanding system 2 processing – something that few will manage to accomplish. To avoid confusion, it is important to stress here that the exclusivity assumption does not entail that system 1 is always biased and system 2 always leads to correct answers. Dual-process theorists have long argued against such a simplification (Evans, 2011; Evans & Stanovich, 2013). Clearly, nobody will disagree that educated adults can intuitively solve a problem such as “Is 9 more than 1?” or “How much is 2 + 2?” The hypothesis concerns situations in which the two systems are assumed to be generating conflicting responses. More generally, like any scientific theory, dual-process models make their assertions within a specific application context. For the dual-process model of logical reasoning, the application context concerns situations in which an intuitively cued problem solution conflicts with a logico-mathematical norm. The classic “heuristics and biases” tasks in the field (such as the earlier ratio bias problem with the two trays) all capitalize on such conflict and are designed such that they cue a salient conflicting intuitive heuristic response that is pitted against logical considerations. It is in such conflict cases that avoiding biased thinking is expected to require switching to system 2 deliberation.

1.1.2 Dual-process model of moral reasoning

The influential dual-process model of moral cognition focuses on situations in which utilitarian and deontological considerations lead to conflicting moral judgments (e.g., is it acceptable to sacrifice one human life to save five others?). From a utilitarian point of view, one focuses on the consequences of an action. Harming an individual can be judged acceptable if it prevents comparable harm to a greater number of people. One performs a cost–benefit analysis and chooses the greater good. Hence, from a utilitarian perspective, it can be morally acceptable to sacrifice someone's life to save others. Alternatively, the moral perspective of deontology focuses on the intrinsic nature of an action. Here harming someone is considered wrong regardless of its potential benefits. From a deontological point of view, sacrificing one life to save others is never acceptable. In a nutshell, the dual-process model of moral reasoning (Greene, 2013; Greene & Haidt, 2002) has associated utilitarian judgments with deliberate system 2 processing and deontological judgments with intuitive system 1 processing. The core idea is that giving a utilitarian response to moral dilemmas requires that one engages in system 2 thinking and allocates cognitive resources to override an intuitively cued system 1 response that primes us not to harm others (Greene, 2007; Paxton, Ungar, & Greene, 2012). Hence, here too the exclusivity assumption is key: Utilitarian reasoning is assumed to be out of reach of the intuitive system and to require a switch to costly effortful processing.

1.1.3 Dual-process model of prosocial reasoning

Finally, the dual-process model of prosocial reasoning or human cooperation focuses on situations in which self-interest can conflict with the group interest (e.g., get more money yourself or share more with others). Some authors have claimed that making prosocial choices requires deliberate system 2 control of our intuitive selfish impulses (e.g., DeWall, Baumeister, Gailliot, & Maner, 2008; Knoch, Pascual-Leone, Meyer, Treyer, & Fehr, 2006; Martinsson, Myrseth, & Wollbrant, 2014). Alternatively, others have argued that system 1 cues prosocial choices and it is only after deliberation that we will seek to maximize our self-interest (e.g., Rand, 2019; Rand et al., 2012; Sanfey, Rilling, Aronson, Nystrom, & Cohen, 2003). However, despite the differences concerning which behavior is assumed to be favored by deliberation and intuition, both views are built on the same underlying exclusive dual-process logic: Intuition will favor one type of behavior whereas making the competing choice will require slow, deliberate system 2 processes to control and correct the initial intuitive impulse (Hackel, Wills, & Van Bavel, 2020; Isler, Yilmaz, & Maule, 2021).

To be clear, just like the dual-process model of logical reasoning, dual-process models of prosocial (and moral) reasoning also have a specific application context. As with logical reasoning, this context concerns prototypical cases in which the two systems are assumed to be generating conflicting responses. For example, dual-process models of prosocial choice focus on anonymous decision settings (i.e., the identity of the decision maker and recipient are never revealed and they interact only a single time; e.g., Rand et al., 2012). Clearly, even models that posit that prosocial (vs. selfish) decisions require system 2 processing would not dispute that the prosocial decision to share with one's offspring, for example, can be made completely intuitively. Similarly, the dual-process model of moral reasoning focuses on moral dilemmas that cue a strong moral transgression (e.g., killing). In some cases, the deontological option might be so trivial (e.g., is it acceptable to tell a white lie to save five people?) that it will not give rise to a proper conflict. In these no-conflict cases it would not be expected that a utilitarian judgment necessarily requires system 2 processing (Greene, Sommerville, Nystrom, Darley, & Cohen, 2001).

Note that the empirical evidence I will review in the following section always concerns the prototypical test and application context that the dual-process models traditionally envisaged. The fundamental problem I will raise is that – in contrast to widely publicized initial reports – even in these cherished prototypical contexts there is no solid empirical ground for the exclusivity assumption. For completeness, I start by discussing the traditional evidence that has been cited in support of the exclusivity assumption and then move on to a discussion of more recent counter-evidence.

1.2 Empirical evidence

Why have dual-process models assumed the exclusivity feature in the first place? What empirical evidence was there to support it? The undisputed starting point here is that deliberation is defined as being more time- and effort-demanding than system 1 processing. Hence, if the exclusivity assumption holds, one would expect that the alleged system 2 response will take longer than the intuitive system 1 response. Likewise, generation of the alleged system 2 response should be more likely among those higher in cognitive capacity (and motivation to use this capacity). This would be consistent with the idea that the alleged system 2 response indeed requires slow, effortful deliberation. The introduction of traditional dual-process models in various fields has typically been accompanied by correlational studies that supported these predictions (Trémolière, De Neys, & Bonnefon, 2019). For example, across logical, moral, and prosocial reasoning, various studies showed that people who give the alleged deliberate response indeed tend to take more time to answer and score higher on standard cognitive ability/disposition tests than people who give the alleged intuitive response (e.g., De Neys, 2006a, 2006b; Greene et al., 2001; Moore, Clark, & Kane, 2008; Paxton et al., 2012; Rand et al., 2012; Stanovich & West, 1998, 2000).

In addition to correlational studies, dual-process proponents have also pointed to experimental evidence from cognitive constraint paradigms in which people are forced to respond under time pressure or secondary cognitive load (e.g., concurrent memorization). The rationale here is again that deliberation requires more time and cognitive resources than system 1 processing. Consequently, depriving people of these resources by forcing them to respond quickly or while they are performing a capacity-demanding secondary task should make it less likely that the exclusive system 2 response can be generated. Across logical, moral, and prosocial reasoning studies, dual-process proponents have indeed shown that these constraints often hinder the production of the alleged deliberate responses (e.g., Conway & Gawronski, 2013; De Neys, 2006b; Evans & Curtis-Holmes, 2005; Rand et al., 2012, 2014; Trémolière, De Neys, & Bonnefon, 2012). In sum, the point of this short overview is that dual-process theorists have not made their claims in an empirical vacuum. There are past findings that are consistent with the exclusivity assumption.

However, a first problem is that over the years these initial positive findings have not always been confirmed. Recent studies and large-scale replication efforts have pointed to negative findings and null effects (e.g., Baron, 2017; Baron & Gürçay, 2017; Białek & De Neys, 2017; Bouwmeester et al., 2017; Grossman & Van der Weele, 2017; Gürçay & Baron, 2017; Robison & Unsworth, 2017; Tinghög et al., 2016). Available meta-analyses suggest that if there is an effect, it is very small. For example, Rand (2019) found that experimental manipulations that limited deliberation (and/or favored intuition) led on average to an increase of 3.1% in prosocial choices (see also Kvarven et al., 2020). Likewise, in one of the largest studies to date on reasoning bias, Lawson, Larrick, and Soll (2020) found that experimental constraints on a wide range of classic bias tasks led on average to a performance decrease of 9.4 percentage points (from 62 to 52% accuracy). As Lawson et al. put it, this suggests that the alleged deliberate response can often be generated intuitively. Even when deliberation is prevented, the alleged deliberate response is still frequently observed. Hence, although there is indeed some evidence that deliberation pushes responses in the expected dual-process direction (e.g., more alleged system 2 responses), it is becoming clear – contra the exclusivity assumption – that generation of the alleged unique system 2 response often does not require deliberation and is not uniquely tied to system 2.

Critically, studies adopting new experimental paradigms have presented further direct evidence against the exclusivity assumption (De Neys & Pennycook, 2019). Perhaps most illustrative are studies with the two-response paradigm (Thompson, Turner, & Pennycook, 2011). In this paradigm, participants are asked to give two consecutive answers to a problem. First, they have to answer as quickly as possible with the first response that comes to mind. Immediately afterward, they are shown the problem again and can take all the time they want to reflect on it and give a final answer. To make maximally sure that the initial answer is generated intuitively, it typically has to be generated under time pressure and/or cognitive load (Bago & De Neys, 2017; Newman, Gibb, & Thompson, 2017). As with the cognitive constraint paradigms above, the rationale is that this will deprive participants of the very resources they need to engage in proper deliberation. Consequently, the paradigm gives us a good indication of which response can be generated intuitively and which deliberately (Bago & De Neys, 2017, 2020; Raoelison, Thompson, & De Neys, 2020; Thompson et al., 2011).

Under the exclusivity assumption, it is expected that people who generate the alleged system 2 response as their final response will initially have generated the system 1 response in the first, intuitive response stage. That is, in the prototypical dual-process test situation in which both systems are expected to cue a conflicting response, it is assumed that slow deliberation will need to correct and override the intuitively generated fast system 1 response. For example, in a classic bias task, it is hypothesized that people will initially generate the biased system 1 response but that sound reasoners will subsequently be able to correct this once they are allowed to take the time to deliberate. To illustrate, take the infamous cognitive reflection test (e.g., “A bat and ball cost $1.10 together. The bat costs $1 more than the ball. How much does the ball cost?”; Frederick, 2005). Here it is expected that sound reasoners will reason correctly precisely because they will take the time to reflect on their first hunch (“10 cents”), which allows them to realize that it is incorrect. It is this demanding deliberation or “reflection” that is assumed to be crucial for generation of the correct answer (“5 cents”). However, two-response studies with these and other classic bias tasks have shown that this is typically not the case. Those reasoners who give the correct response as their final response after deliberation often already generate this same correct response at the initial, intuitive response stage (e.g., Bago & De Neys, 2017, 2019a; Burič & Konrádová, 2021; Burič & Šrol, 2020; Dujmović, Valerjev, & Bajšanski, 2021; Raoelison et al., 2020; Thompson & Johnson, 2014). Hence, sound reasoners do not need to deliberate to correct an initial response; their initial response is already correct.
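For readers who want the underlying algebra spelled out (the symbol $b$ is introduced here purely for illustration): letting $b$ be the price of the ball, the problem states

\[
b + (b + 1.00) = 1.10 \;\Rightarrow\; 2b = 0.10 \;\Rightarrow\; b = 0.05.
\]

The intuitive answer fails the corresponding check: a 10-cent ball implies a $1.10 bat and hence a $1.20 total.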

This same pattern has been observed during moral (Bago & De Neys, 2019b; Vega, Mata, Ferreira, & Vaz, 2021) and prosocial (Bago, Bonnefon, & De Neys, 2021; Kessler, Kivimaki, & Niederle, 2017) reasoning. People who generate the alleged system 2 response (e.g., a utilitarian moral decision or selfish prosocial choice) typically already generate this same decision as their intuitive response in the initial response stage. Hence, pace the exclusivity assumption, the alleged system 2 response is often already generated intuitively.

Related evidence comes from studies with the conflict detection paradigm (e.g., De Neys & Pennycook, 2019). This paradigm focuses specifically on those participants who give the alleged system 1 response. The studies contrast people's processing of classic prototypical problems (i.e., “conflict problems”) in which systems 1 and 2 are expected to cue different responses with control “no-conflict” problems in which both systems are expected to cue the same response. For example, in a logical reasoning task such as the introductory ratio bias problem, a control problem could be one in which participants have to choose between a small tray with one red marble and a large tray with 11 (instead of nine) red marbles. In this case both the absolute number of red marbles (11 vs. one) and the ratios (11/100 vs. 1/10) favor the large tray. In a moral reasoning study, a no-conflict control problem could ask whether it is acceptable to kill five people to save the life of one person (instead of killing one to save five). Both utilitarian and deontological considerations converge here in that the action is not permissible.

By and large, conflict detection studies have found that on various processing measures, reasoners who give the alleged system 1 response typically show sensitivity to the presence of conflict with the alleged system 2 response. For example, they take longer and are less confident when solving classic “conflict” versus control “no-conflict” problems (e.g., Białek & De Neys, 2016; Frey, Johnson, & De Neys, 2018; Gangemi, Bourgeois-Gironde, & Mancini, 2015; Mata, 2020; Šrol & De Neys, 2021; Vartanian et al., 2018; see De Neys, 2017, for a review; but see Travers, Rolison, & Feeney, 2016, or Mata, Ferreira, Voss, & Kollei, 2017, for negative findings). Hence, even people who give the alleged system 1 response seem to be processing the alleged system 2 response. Critically, this conflict sensitivity is also observed when potential system 2 processing is knocked out with experimental constraint manipulations (e.g., Bago & De Neys, 2017, 2019b; Białek & De Neys, 2017; Burič & Konrádová, 2021; Burič & Šrol, 2020; Johnson, Tubau, & De Neys, 2016; Pennycook, Trippas, Handley, & Thompson, 2014; Thompson & Johnson, 2014). In line with the two-response findings, this indicates that the alleged unique system 2 response is also being processed intuitively.

In sum, although the idea that intuitive and deliberate processing are cueing unique responses is appealing in its simplicity, taken together, the empirical evidence reviewed here indicates that there is no strong empirical ground for it. In the most influential dual-process applications, the alleged system 2 response does not seem to be out of reach of the intuitive system 1. Rather than positing unique responses in systems 1 and 2, it appears that system 1 can often handle both responses.

To avoid confusion, it is important to stress here that the above conclusion does not argue against the idea that deliberation can lead to generation of the alleged system 2 response. For example, the meta-analyses I referred to often suggest that there is evidence for a small effect in the expected dual-process direction (i.e., more alleged system 2 responses after deliberation). Also, the two-response data consistently indicate that there are cases in which an initial, intuitively generated response is replaced with the alleged system 2 response after deliberation. The point is that this is rare. More often than not, the alleged deliberate response tends to be generated intuitively. Exclusive deliberate generation of the alleged system 2 response seems to be the exception rather than the rule. This implies that any model in which generation of this response is exclusively or predominantly tied to the operation of the deliberate system will have poor empirical fit.

A possible general argument against the reviewed empirical evidence contra the exclusivity assumption is that we can never be sure that the study designs prevented all possible deliberation. For example, it might be that the two-response studies still allowed some minimal deliberation during the initial response generation. It might be this minimal deliberation that drives the generation of the “alleged” system 2 response during the initial response stage. Here it should be noted that the two-response studies adopted the same constraint methodology and logic as the initial studies that were used to argue in favor of the exclusivity assumption. Moreover, whereas traditional studies used either time-pressure or load manipulations, the two-response studies have combined both to further restrict potential deliberate intrusion (e.g., Bago & De Neys, 2017). In addition, control studies indicate that making the constraints even more challenging by increasing the load and shortening the deadlines typically does not alter the results (e.g., Bago & De Neys, 2017, 2019b; Bago et al., 2021), suggesting that deliberation was successfully minimized in the design. Nevertheless, the point still stands that no matter how challenging the test conditions might be, we can never be completely sure that participants did not deliberate. The problem here is that the dual-process framework does not give us an unequivocal threshold (i.e., longer than x seconds or less than x amount of load implies deliberation) that allows us to universally demarcate intuition and deliberation (Bago & De Neys, 2019a; De Neys, 2021). Ultimately, this implies that exclusivity cannot be empirically falsified. As long as one keeps observing alleged system 2 responses under constraints, one can always argue that the constraints were not challenging enough. The general point is that the cognitive constraint evidence needs to be interpreted within practical, relative boundaries (Bago & De Neys, 2019a). In sum, although empirical evidence can question exclusivity and can point to a lack of strong supporting evidence, it can never rule it out completely. Therefore, in the next section, I will focus on a conceptual critique that underscores that positing exclusivity is fundamentally problematic for a dual-process model.

2. Switch issue

Although deliberation might not be necessary to generate the alleged “system 2 response” per se, we sometimes clearly do engage in deliberation. Given that we can use two types of reasoning, there might be cases in which either one will be more or less beneficial. For example, in situations in which intuitive and deliberate processing are expected to cue the same response (e.g., the “no-conflict” problems I referred to earlier), there is no need to waste precious resources by engaging in costly deliberation. But how do we know whether we can rely on an intuitively cued problem solution or need to revert to deliberation? And when we do decide to engage in deliberation, at what point do we decide it is safe to switch back to the mere intuitive processing mode?

Of course, there are some situations in which this is straightforward. One concerns cases in which we are faced with an entirely new problem we haven't seen before and our intuitions do not cue a response. Here, all we can do to arrive at an answer is engage in deliberation. Likewise, there will be cases in which the decision is made for us. That is, in some situations we get external feedback indicating that an intuitively cued response is problematic. Generally speaking, these are cases of expectancy violations. For example, imagine your superior told you that you are getting a new colleague named Sue. Given the name, you'd readily expect that Sue is female. If your officemate subsequently tells you that the new colleague is a man, you'll presumably be surprised. Your system 1 has built up an expectation that is not met in the face of feedback. This expectancy violation will cue deliberation (Did you mishear the name? Was your colleague mistaken? Are Sue's parents Johnny Cash fans (Footnote 3)? etc.). Unfortunately, the expectancy violation mechanism only works in case you're actually getting feedback. In many situations feedback will not be available, or we want reasoners to operate (and avoid mistakes) without external supervision. Hence, reasoners need an internal mechanism that signals a need to switch between mere intuitive and deliberate processing.

My point is that traditional dual-process models have failed to present a viable internal switch mechanism. Popular accounts are conceptually problematic and this can be directly tied to the exclusivity assumption. I'll clarify that as long as we posit exclusivity, it will always be hard for a dual-process model to explain how reasoners can ever reliably determine whether there is a need to switch between intuitive system 1 and deliberate system 2 processing. I start by giving an overview of the dominant traditional switch views to clearly illustrate the problem.

2.1 Traditional switch accounts

2.1.1 Conflict monitoring system 2

Dual-process models are typically “default-interventionist” in nature (Evans & Stanovich, 2013; Kahneman, 2011). This implies that they posit a serial processing architecture. The idea is that we rely on system 1 by default and only turn on the costly deliberate system to intervene when it is needed. It is this feature that brings about the switch question, of course. The traditional solution is to assume that system 2 monitors the output of system 1 and will be activated in case of conflict between the two systems (Kahneman, 2011; Stanovich & West, 2000). Hence, system 2 will intervene on system 1 whenever the system 1 output conflicts with more deliberate system 2 considerations. This idea is appealing in its simplicity. However, on second thought it is clear that it readily leads to a paradox (De Neys, 2012; Evans, 2019). To detect that our system 1 intuition conflicts with unique deliberate system 2 considerations, we would already need to engage system 2 first to compute the system 2 response. Unless we want to posit an all-knowing homunculus, system 2 cannot activate itself. Hence, the decision to activate system 2 cannot rely on the activation of system 2. The prototypical conflict monitoring system 2 account simply begs the question here (De Neys, 2012).

2.1.2 Low-effort deliberation

A popular variant of the simple conflict monitoring system 2 position – or a workaround – is to posit that the monitoring relies on low-effort deliberation and not on full-fledged, demanding system 2 processing (De Neys & Glumicic, 2008; Kahneman, 2011). Whenever system 1 cues a response, it will be passed on to system 2, which is by default in this non-demanding, low-effort mode. If the low-effort deliberation detects a conflict between system 1 and 2 processing, it will trigger deeper, high-effort deliberation (De Neys & Glumicic, 2008; Kahneman, 2011). Unfortunately, this simply pushes the explanatory burden one step forward. Clearly, if the low-effort mode suffices to generate a response against which the intuitive response can be contrasted, there is no need to postulate a unique high-effort deliberation (and to assume that the alleged system 2 response can only be computed by those highest in cognitive capacity, for example). In this case, everyone – even those lowest in cognitive capacity – should be able to generate the non-demanding deliberate response, and it should not be considered unique to system 2. However, in case we assume that generating the deliberate response does require proper demanding system 2 processing, we are back at square one and we cannot explain how the low-effort system 2 processing detects conflict with the high-effort deliberate response in the first place. Hence, although it might sound appealing, the low-effort deliberation position does not present a viable processing mechanism.

2.1.3 System 3

One of the core problems of the conflict monitoring system 2 account is that system 2 is assumed to both generate a unique deliberate response and monitor for conflict between systems 1 and 2 to make the switch decision. It serves multiple functions: response generation and monitoring/switching. One suggested solution is to attribute the monitoring and switch decision to a third type of system or process (i.e., system 3 or type 3 processing; e.g., Evans, 2009; Houdé, 2019). Hence, system 2 computes a deliberate response and system 3 compares the output of systems 1 and 2. System 3 itself operates automatically and does not require the limited cognitive resources that system 2 needs. In case system 3 detects an output conflict, it will intervene, call for more deliberation, and block the system 1 response. However, this solution still begs the question and leads to an infinite regress. To decide whether the system 1 output conflicts with the system 2 output, system 2 needs to be activated to compute a response, of course. Even an automatically operating system 3 cannot know whether there is a conflict between systems 1 and 2 without engaging system 2 first.

2.1.4 Parallel solution

A radically different solution to explain how we know whether our intuition can be trusted or we need to engage in deliberation is to simply assume that systems 1 and 2 operate in parallel (Epstein, 1994; Sloman, 1996). In contrast to the dominant serial view, parallel dual-process models assume that intuitive and deliberate thought processes are always activated simultaneously when we are faced with a reasoning problem. Hence, just like intuitive processing, system 2 is always on. We always activate both reasoning systems from the start. Consequently, we do not need a mechanism to decide whether or not to engage in deliberation and switch system 2 on.

The key problem is that the parallel account throws out the cognitive advantage of a dual-process model (De Neys, 2012). That is, nobody contests that system 1 will often converge with system 2 and can cue sound decisions. Hence, in these cases there is no need to burden our precious cognitive resources with demanding system 2 activation. Consequently, a parallel model will often be wasting scarce resources in situations where this is not needed. From a cognitive economy point of view, this is highly implausible. Furthermore, in case the parallel system 1 and 2 computations do lead to conflicting responses, the fast system 1 will need to wait until the slow system 2 has computed its response to register the conflict and decide which response to favor. But if the fast system 1 always waits for system 2, we lose the capacity to reason and act fast. Conversely, if the fast system 1 does not wait for system 2, how are we to know that the system 1 response is valid and does not conflict with system 2? Hence, just like its serial competitors, the parallel account leads to conceptual inconsistencies and fails to present a working processing account.

To avoid confusion, note that the problem for the parallel account is not the parallel activation of systems 1 and 2 per se but the postulated continuous parallel activation of both systems. That is, the serial default-interventionist account also assumes that once system 2 is activated, system 1 remains activated and that the two systems will be running in parallel at this point. The key difference is that the serial model posits that there needs to be an initial phase in which people do not deliberate yet – and it is this feature that brings about the switch problem. One might be tempted to argue that a parallel model does not necessarily need to assume that system 2 is always on. When there is no longer a need for deliberation, system 2 could be switched off to avoid wasting resources and it may be turned on again whenever it is needed. But at this point, one will have re-introduced the switch issue and will need to explain how this decision is made. That is, such a “parallel” model throws out its conceptual advantage over the serial model (i.e., no need for a switch mechanism) and faces the same difficulties as its rivals.

Relatedly, one may argue that even if system 2 is always on, it doesn't always have to run to completion. Maybe it only provides some quick partial computations that suffice to generate a response and check whether it conflicts with the cued system 1 answer. Note that under this reading, the parallel model boils down to the low-effort-deliberation account (see sect. 2.1.2) and will face the same problems: If low-effort or partial system 2 processing already allows generating an accurate proxy of the complete system 2 response, there is no need to assume that computation of the alleged unique system 2 response is demanding and necessarily requires time and effort. But if more extensive system 2 processing is necessary, it is not clear how the partial deliberations may ever reliably signal conflict.

2.1.5 Stuck-in-system 1 or no switch account

Finally, a last alternative possibility is to assume that people do not detect a need to engage system 2 and always stay in system 1 mode. In this “no switch” model, reasoners simply never internally switch from system 1 to system 2 themselves. People can use system 2, but only in case system 1 does not cue a response or they are externally told to do so. Whenever system 1 cues a response, they are bound to rely blindly on the intuitively cued problem solution. Hence, the account solves the switch question by positing that reasoners never switch. Such a model can explain why people often give the alleged system 1 response (e.g., why they are biased in the case of logical reasoning): They simply fail to detect that there is a need to activate system 2 (e.g., Evans & Stanovich, 2013; Kahneman, 2011; Morewedge & Kahneman, 2010; Stanovich & West, 2000). Note that although the account might be questioned on empirical grounds (e.g., see the conflict detection findings in sect. 1), in contrast to the other accounts I reviewed, it is at least conceptually coherent. It does not beg the question or introduce a homunculus. The problem, however, is that it only models half the story.

The “no switch” model allows us to account for the behavior of people who give the alleged system 1 response, but it turns a blind eye to those who do give the alleged system 2 response. Indeed, although it might be rarer, there are always reasoners who arrive at the alleged system 2 response themselves. In general, the fact that there are two types of responses is a key motivation for positing an (exclusive) dual-process model in the first place. Hence, one still needs to explain how these “system 2” responders managed to detect that there was a need to engage system 2. Consequently, even in the stuck-in-system 1 account, the switch issue inevitably rears its head again.

2.2 Toward a working switch solution

The overview pointed to the fundamental conceptual problems that plague popular switch accounts in traditional dual-process models. How can we avoid this conceptual muddle and arrive at a viable switch account? Any solution will have two necessary core components. First, we need to postulate that the internal switch decision is itself intuitive in nature. The switch decision needs to rely on mere system 1 processing. System 1 decides whether system 2 is activated or not. This avoids the paradox of assuming that to decide whether to engage in costly system 2 deliberation you already need to engage system 2 (De Neys, 2012; Evans, 2019; Stanovich, 2018). Second, and more controversially, we will need to discard the exclusivity feature. If we agree that system 1 takes the switch decision, the billion-dollar question then becomes how exactly it does this. What informs the decision within system 1? My point is that solving this puzzle forces us to get rid of exclusivity. Instead of allocating unique responses to each system, we need to assume that the alleged system 2 response can also be cued by system 1. Hence, system 1 will be generating different types of responses or intuitions. One of these will be the traditional alleged system 1 response (e.g., a biasing heuristic, deontological, or prosocial intuition); the other will be the traditional alleged system 2 response (e.g., a logical, utilitarian, or selfish intuition). In case both intuitions cue the same response, the response can be given without further system 2 deliberation. In case the two intuitions cue conflicting responses, system 2 will be called upon to intervene.

With these building blocks in hand, it is possible to present a conceptually coherent switch account. It will be conflict between competing intuitions within system 1 that will function as the trigger to switch on system 2. But clearly, by definition, the account can only work if the alleged system 2 response is not exclusively calculated by system 2. If exclusivity is maintained, there is no way for system 1 to be reliably informed about potential conflict with the exclusive system 2 response. An exclusive model is bound to fall prey to the same conceptual pitfalls that plague the traditional switch accounts.

To avoid confusion, the point is not that exclusivity is impossible per se. Non-exclusivity is not a necessary prerequisite for a dual-process model. The point concerns the necessary conceptual coupling between the exclusivity and switch features. A dual-process model may posit exclusivity, but it will pay the price at the switch front. To remain coherent, a dual-process model that posits exclusivity will also need to postulate that reasoners have no internal mechanism that allows them to switch from system 1 to system 2 themselves (i.e., the stuck-in-system 1 position). One cannot have their exclusive cake and eat it here.

The good news is that the empirical evidence reviewed in section 1 indicates that the elementary conditions for the above switch mechanism may often be met. In key dual-process applications there is evidence that the alleged system 2 response can indeed be processed more intuitively. Hence, the required building blocks for a coherent switch mechanism seem to be in place. However, although positing non-exclusivity might provide the building blocks, it clearly does not suffice to arrive at a workable model. For example, one may wonder why reasoners often still opt for the alleged system 1 response if the alternative response is also intuitively available. Relatedly, what exactly determines system 2 engagement? Does the mere generation of two conflicting intuitions suffice per se? Does the amount of conflict matter? Furthermore, we not only need to explain when reasoners will engage system 2 but also when they will stop doing so. That is, once we have activated system 2, it doesn't stay activated forever. At what point does a reasoner decide it is safe to revert back to system 1 processing? In the following section, I sketch a general architecture that allows us to address these issues.

3. Working model

The model I develop here builds on emerging ideas from various authors working in a range of dual-process application fields (e.g., Bago & De Neys, 2019b, 2020; Bago et al., 2021; Baron & Gürçay, 2017; De Neys & Pennycook, 2019; Evans, 2019; Pennycook, Fugelsang, & Koehler, 2015; Reyna, Rahimi-Golkhandan, Garavito, & Helm, 2017; Stanovich, 2018; Thompson & Newman, 2017; Trippas & Handley, 2017; Footnote 4). Because these ideas often entail some revision of traditional dual-process models, they are sometimes collectively referred to as dual-process theory 2.0 (De Neys, 2017). The current model presents a personal integration and specification of what I see as key features. I focus on a general, field-independent specification that can serve as a basic architecture for future models across various fields.

The model has four core components which I will introduce in more detail below. Figure 1 presents a schematic illustration.

Figure 1. Schematic illustration of the working model's core components. I1, intuition 1; I2, intuition 2; d, deliberation threshold. The dashed arrow indicates the optional nature of the deliberation stage.

3.1 Intuitive activation

The first component (illustrated in Fig. 1.1) reflects the starting point that system 1 can be conceived as a collection of intuitively cued responses. For convenience, I focus on the critical case in which two competing intuitions are being cued. These are labeled as intuition 1 (I1) and intuition 2 (I2). These can be the alleged system 1 and alleged system 2 responses but in general, they can be any two intuitions that cue a different response. Each intuition is simply identified by the response it cues.

At each point in time, an intuition is characterized by its activation level or strength. This strength can change over time. Once an intuition is generated, it can grow, peak, and decay. The y-axis in Figure 1.1 represents the intuition strength, the x-axis represents time. The peak activation strength of an intuition reflects how automatized or instantiated the underlying knowledge structures are (i.e., how strongly the response is tied to its eliciting stimulus; e.g., Stanovich, 2018). The stronger an intuitive response is tied to its eliciting stimulus, the higher the resulting activation strength. This implies that not all intuitions will be created equal. Some might be stronger than others.
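The article specifies these activation dynamics only qualitatively. Purely as an illustrative assumption (the functional form and the symbols $A$ and $\tau$ are introduced here and are not part of the model), a grow-peak-decay profile could be captured by a curve such as

\[
s(t) = A \,\frac{t}{\tau}\, e^{\,1 - t/\tau},
\]

which rises from zero, peaks at strength $A$ at time $t = \tau$, and decays afterwards; differences in how well a response is instantiated would then translate into differences in $A$.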

But where do these intuitions and strength differences come from? Although it is not excluded that some intuitive associations might be innate, the working model postulates that intuitive responses primarily emerge through an automatization or learning process. Throughout development, any response might initially require exclusive deliberation, but through repeated exposure and practice this response will become compiled and automatized (e.g., Shiffrin & Schneider, 1977). Note that although such a claim is uncontroversial for the alleged system 1 response in traditional dual-process models (e.g., Evans & Stanovich, 2013; Rand et al., 2012), it is assumed here that it also applies to the alleged system 2 response. The rationale is that in most dual-process fields, adult reasoners have typically already been exposed to the system 2 response through education and daily life experience. For example, the ratio principle in the introductory ratio bias task is explicitly taught during elementary and secondary education (e.g., fractions). Likewise, children will have had many occasions to experience that selfish behavior often has negative consequences (e.g., if you don't share with your little brother, your mom and dad will be mad, your brother will be less likely to share with you in the future, etc.). Hence, through repeated exposure and practice an original system 2 response may gradually become automatized and will be generated intuitively (De Neys, 2012). But because not every response will have been equally well automatized or instantiated, strength differences may arise, and not every eliciting stimulus will cue the associated response equally well in system 1.

Note that the eliciting stimulus can be any specific problem feature. For example, when solving the ratio bias problem with the marbles and trays, the absolute number information (e.g., "1 red marble in small tray, 9 red in large tray") might give rise to one intuition (e.g., "pick large") and the ratio information (e.g., "1 out of 10 red vs. 9 out of 100 red") might give rise to a conflicting one (e.g., "pick small"). In a moral reasoning problem, the information that an action will result in harm (e.g., a person will die) can cue a deontological intuition (e.g., "action not acceptable") and the subsequent information that it may prevent more harm (e.g., "if nothing is done, 5 people will die") an utilitarian one (e.g., "action acceptable"). Hence, the intuition 1 (I1) and intuition 2 (I2) labels in the illustration simply refer to the temporal order in which the intuitions happened to be cued. They bear no further implications concerning the nature of the intuition per se.

3.2 Uncertainty monitoring

The second component of the model is what we can refer to as an uncertainty monitoring process. The idea is simply that system 1 will continuously calculate the strength difference between activated intuitions. This results in an uncertainty parameter U. The more similar in strength the competing intuitions are, the higher the resulting experienced uncertainty. Once the uncertainty reaches a critical threshold (represented by d in Fig. 1.2), system 2 will be activated. However, in case one intuition clearly dominates the other in strength, the resulting uncertainty will be low and the deliberation threshold will not be reached. In that case, the reasoner will remain in system 1 mode and the dominant intuition can lead to an overt response without any further deliberation.

This explains why postulating non-exclusivity and assuming that the traditionally alleged system 2 response can also be generated intuitively does not imply that reasoners will always opt for the alleged system 2 response. For different individuals and situations, the strength of the competing intuitions can differ. Sometimes the alleged system 1 intuition will dominate. Consequently, although the presence of a competing intuition that cues the alleged system 2 response will result in some uncertainty, this may not be sufficient to engage system 2. In the case of logical reasoning bias, for example, this explains why some reasoners may detect that their dominant intuitive answer is questionable but nevertheless will fail to engage in further deliberation to double-check and correct it.

A possible mathematical representation of the uncertainty parameter is: U = 1 − |I1 − I2|. U stands for uncertainty and can range from 0 to 1. I1 and I2 represent the strengths of the respective intuitions, which can also range between 0 and 1. The vertical bars (|) denote that we take the absolute difference. Hence, the more similar the activation strengths, the smaller the absolute difference and the higher the uncertainty will be.
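To make the monitoring rule concrete, here is a minimal Python sketch. The strength values and the threshold value d are illustrative assumptions on my part, not empirically estimated quantities.

```python
# Minimal sketch of the uncertainty computation U = 1 - |I1 - I2|.
# Strength values and the deliberation threshold d are illustrative
# assumptions, not empirically estimated quantities.

D = 0.78  # hypothetical value of the deliberation threshold d

def uncertainty(i1: float, i2: float) -> float:
    """Uncertainty from two intuition strengths in [0, 1]."""
    return 1 - abs(i1 - i2)

# One clearly dominant intuition: low uncertainty, no deliberation.
print(uncertainty(0.9, 0.2))   # 0.3 -> below d, stay in system 1 mode
# Closely matched intuitions: high uncertainty, system 2 is engaged.
print(uncertainty(0.6, 0.55))  # ~0.95 -> above d, deliberate
```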

A simple analogy might clarify the basic idea. Imagine that as part of a lunch combo, a local cafeteria offers its customers a choice between two desserts: ice cream or a cupcake. John is fond of ice cream but really dislikes cupcakes. Hence, John will readily choose the ice cream without giving it any further reflection. Steve, however, likes both equally well. When presented with the two options, Steve's decision will be harder and require deeper deliberation. For example, he might try to remember what he had last time he ate at the cafeteria and decide to give the other option a try. Or he might look for arguments to help him make a decision (e.g., "The cupcake has blueberries in it this week. Blueberries are healthy. Better take the cupcake."). Just as the strength of our food preferences determines whether we reflect on a dessert choice, the activation strength of our intuitions determines whether or not we will deliberate about our response.

Note that although I focus on two competing intuitions, the monitoring also applies when only one or no intuition is cued. For example, if a reasoner is faced with an entirely new problem for which system 1 does not cue a response, the difference factor will equal 0 (i.e., the intuition strength equals 0), the resulting uncertainty will be maximal (i.e., U = 1 − 0), and system 2 will be called upon to compute an answer. If a problem cues only one single intuition (or both intuitions cue the same response), the difference factor will equal its strength (e.g., 0.8). Consequently, if the strength is high, the uncertainty will be low (e.g., U = 1 − 0.8) and the cued response can be selected without further deliberation. Conversely, a weaker intuition will result in a higher uncertainty, which increases the likelihood that the deliberation threshold is crossed, system 2 is activated, and the reasoner engages in additional deliberation about the problem. Finally, one may also envisage cases in which more than two intuitions are simultaneously activated. If one intuition clearly dominates, the strength difference will be high and no further deliberation will be engaged. If the differences are more diffuse, deliberation will likewise be triggered.
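One possible way to capture all of these cases in a single rule is sketched below; the padding-with-zeros scheme (so that a missing intuition counts as strength 0 and U is driven by the gap between the two strongest responses) is my own illustrative generalization, not part of the verbal model.

```python
# Sketch of one possible generalization of the monitoring rule to any
# number of cued intuitions. Missing intuitions count as strength 0;
# U is driven by the gap between the two strongest responses.

def uncertainty_n(strengths: list[float]) -> float:
    padded = sorted(strengths, reverse=True) + [0.0, 0.0]
    top, runner_up = padded[0], padded[1]
    return 1 - (top - runner_up)

print(uncertainty_n([]))             # no intuition cued: U = 1 (maximal)
print(uncertainty_n([0.8]))          # one strong intuition: U ~ 0.2 (low)
print(uncertainty_n([0.3]))          # one weak intuition: U ~ 0.7 (high)
print(uncertainty_n([0.9, 0.85]))    # two matched intuitions: U ~ 0.95
print(uncertainty_n([0.9, 0.1, 0.1]))  # one dominant among three: U ~ 0.2
```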

It is important to recap that uncertainty monitoring is a core system 1 process. It operates effortlessly without any system 2 supervision. For illustrative purposes, it is represented as a separate box in Figure 1. It can be functionally isolated, but at an implementation level there is no need to postulate a different type of system or processing. It should also be clear that deliberation is always optional; it will only be engaged when the monitored uncertainty reaches the deliberation threshold. This is represented in Figure 1 by the dashed arrow between the uncertainty monitoring and deliberation components.

3.3 Deliberation

The third component is system 2 activation. It is at this stage (and this stage only) that the reasoner will engage in slow, demanding deliberation. Deliberation can take many forms. For example, one classic function is its role as response inhibitor (e.g., De Neys & Bonnefon, 2013; Evans & Stanovich, 2013). Here attentional control resources will be allocated to the active suppression of one of the competing intuitions. In addition, some authors have pointed to the algorithmic nature of deliberation and its role in the generation of new responses (e.g., Houdé, 2019). In this case system 2 allows us to retrieve and execute a stepwise sequence of rules. For example, when we have to multiply multiples of 10 (e.g., "How much is 220 × 30?"), we can use a multiplication algorithm (e.g., multiply the non-zero parts of the numbers, i.e., 22 × 3 = 66; count the zeros in each factor, i.e., 2; add the same number of zeros to the product, i.e., 6,600) to calculate an answer. While executing each step we need to keep the results of the previous steps in mind, which burdens our attentional resources. When system 1 does not readily cue an intuitive response, such algorithmic system 2 deliberation allows us to generate an answer.
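As a toy illustration of such stepwise algorithmic processing, the three steps of the multiplication routine described above can be written out literally (a sketch; the function name and decomposition are mine):

```python
# Literal rendering of the stepwise "multiples of 10" algorithm described
# above, to make its sequential, memory-burdening character concrete.

def multiply_multiples_of_ten(a: int, b: int) -> int:
    a_str, b_str = str(a), str(b)
    # Step 2 of the verbal recipe: count the trailing zeros in each factor.
    zeros = (len(a_str) - len(a_str.rstrip("0"))) + (len(b_str) - len(b_str.rstrip("0")))
    # Step 1: multiply the non-zero parts (e.g., 22 x 3 = 66).
    core = int(a_str.rstrip("0") or "0") * int(b_str.rstrip("0") or "0")
    # Step 3: re-append the counted zeros to the product.
    return int(str(core) + "0" * zeros)

print(multiply_multiples_of_ten(220, 30))  # 6600
```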

Likewise, some authors have also pointed to the role of deliberation in a justification or rationalization process (Bago & De Neys, 2020; Evans, 2019; Evans & Wason, 1976; Pennycook et al., 2015; see also Cushman, 2020; Mercier & Sperber, 2011). In this case we will deliberate to look for an explicit argument to support an intuition. This explains why engagement of system 2 does not imply that the alleged system 2 response will be generated. Reasoners can also use their cognitive resources to look for a justification for the alleged system 1 intuition (e.g., the incorrect "heuristic" intuition in logical reasoning tasks). More generally, this underscores the argument that system 2 engagement does not "magically" imply that the resulting response will be "correct," "rational," or "normative" (De Neys, 2020; Evans, 2009, 2019). It simply implies that a reasoner will have taken the time and resources to explicitly deliberate about their answer.

Clearly, none of these roles need to be mutually exclusive. Deliberation might entail a combination of response suppression, generation, justification, or additional processes. Whatever the precise nature of deliberation may be, what is critical for the current purpose is the outcome or result. The key point is that deliberation will always operate on system 1 in that it will modulate the strength of the different activated intuitions in system 1 (or generate a new intuitive response altogether). Consequently, although it is possible to have system 1 activation without system 2 activation, the reverse is not true. During deliberation, the effortless system 1 remains activated and deliberation will operate on its strength representations. As I will explain in more detail below, it is this feature that provides us with a mechanism to stop system 2.

3.4 Feedback

A last component of the model is what we can refer to as a feedback loop. A reasoning process does not stop at the point that one starts to deliberate. Traditionally, dual-process models have mainly focused on the question as to how we can know when to engage system 2. The question as to how we know we can stop system 2 engagement has received far less attention. Clearly, a viable switch account requires us to address both questions. When we activate the effortful system 2, at some point we will need to revert back to system 1. Hence a working dual-process model needs to specify when system 2 will be switched on and off. Put bluntly, we not only need to know what makes us think (Pennycook et al., 2015) but also what makes us stop thinking.

The simple idea I put forward here is that of a feedback loop. System 2 operates on the strength representations in system 1 such that the outcome of system 2 processing is fed back into system 1. Hence, because deliberation will act on the strength representations, it will also affect the uncertainty parameter. For example, if we deliberately suppress one of two competing intuitions, this will decrease its activation level. Because of this decrease, the activation difference with the non-suppressed intuition will increase. As a result, the uncertainty parameter will decrease. At the point that the uncertainty falls below the deliberation threshold, system 2 deliberation will be switched off and the reasoner will return to mere system 1 processing.

In essence, the critical determinant of system 2 engagement is the uncertainty parameter. As soon as it surpasses the deliberation threshold, the reasoner will start deliberating. System 2 deliberation will extend for as long as the uncertainty remains above the threshold. As soon as the uncertainty drops below the threshold, deliberation stops, and the reasoner will revert to mere system 1 processing. Hence, it is the uncertainty parameter that determines the extent of deliberation. Figure 2 illustrates this core idea. The figure sketches a situation in which initially only system 1 is activated and two intuitions are generated, a first intuition (I1) and slightly later a second intuition (I2). The activation strength of the two intuitions gradually increases. Initially, there is a large activation difference between I1 and I2 and consequently, the U parameter will be low. However, at a certain point I1 plateaus whereas I2 is still increasing. Consequently, their activation strength becomes more similar, U will increase, and the deliberation threshold will be crossed. At this point (t1), system 2 will be activated. This activation will modulate the strength through deliberate suppression, rationalization, and so on. This may decrease or increase the activation strengths and uncertainty parameter. As long as the uncertainty parameter remains above the threshold, system 2 activation will be extended (represented by the gray bar in Fig. 2). At a certain point (t2 in the figure), the activation difference will be sufficiently large again such that the uncertainty falls below the threshold and the reasoner switches back to pure system 1 processing.

Figure 2. Illustration of the idea that the strength interplay of conflicting intuitions determines uncertainty and the extent of deliberation. I1, intuition 1; I2, intuition 2; d, deliberation threshold; t1 and t2, time points at which the deliberation threshold is crossed. The gray area represents the time during which system 2 deliberation will be engaged.
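To make these switch dynamics concrete, the following toy simulation reproduces the qualitative pattern of Figure 2. All parameters (growth rate, suppression rate, threshold value) are invented for illustration and carry no empirical weight.

```python
# Toy simulation of the Figure 2 dynamics. All parameter values are
# illustrative assumptions. I1 has already plateaued; I2 keeps rising;
# once system 2 engages, deliberate suppression gradually lowers I2 until
# the uncertainty drops back below the threshold.

D = 0.78                 # hypothetical deliberation threshold d
I1 = 0.7                 # plateaued strength of intuition 1
suppression = 0.0        # cumulative effect of deliberate suppression on I2
deliberating = False

for t in range(20):
    base = min(0.1 * (t + 1), 0.9)    # I2's intrinsic rise toward its peak
    if deliberating:
        suppression += 0.04           # system 2 chips away at I2's strength
    i2 = max(base - suppression, 0.0)
    u = 1 - abs(I1 - i2)              # uncertainty monitoring (system 1)
    if (u >= D) != deliberating:      # threshold crossing: switch system 2
        deliberating = u >= D
        state = "on" if deliberating else "off"
        print(f"t={t}: U={u:.2f}, system 2 switched {state}")
```

In this run, system 2 switches on at t = 4 and off again at t = 15. Note that the uncertainty first rises further during deliberation (as I2 crosses I1) before it falls, consistent with the point, discussed next, that deliberation need not reduce uncertainty immediately.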

To avoid confusion, it is important to stress that deliberation need not lead to decreased uncertainty (or "conflict resolution") per se. Deliberation can also increase uncertainty and lead to more deliberation. For example, one can think of a situation in which initially a single weak intuitive response is cued. This leads to high uncertainty and system 2 engagement. Subsequently, algorithmic processing leads to the generation of a new, competing response. This response will also be represented in system 1 and have a specific strength. Depending on the specific activation levels, the net result might very well be more rather than less uncertainty, which will lead to further deliberation. Alternatively, imagine that during logical reasoning on a classic bias task, a reasoner generates both a logically correct and incorrect ("heuristic") intuition. The heuristic intuition is only slightly stronger than the logically correct one and the resulting uncertainty triggers system 2 deliberation. During deliberation the reasoner looks for a justification for the heuristic intuition but does not find one. As a result, its strength will decrease, making it even more similar to the logical intuition. Consequently, the uncertainty will increase and deliberation will be boosted rather than stopped. These are illustrative examples but they underscore the core point that there is no necessary coupling between deliberation and uncertainty reduction or resolution per se. The point is that the feedback mechanism guarantees that deliberation can reduce uncertainty and thereby stop system 2 engagement.

In the full model sketch in Figure 1, the feedback component – just like the uncertainty monitoring component – is represented in a separate box. Just as with the uncertainty monitoring component, it can be functionally isolated, but there is no need to postulate a different type of system or type of processing. Feedback results from system 2 processing, but the critical updating of the system 1 representations itself occurs automatically and does not require additional cognitive resources. In this sense it is a system 1 process. At the same time, the feedback component also underscores that in practice, thinking always involves a continuous interaction between system 1 and system 2 activation. At any specific point in time we will be either in system 2 mode or not, but this split-up is always somewhat artificial. In practice, reasoning involves a dynamic interaction between the two systems: system 1 can call for system 2 activation, which will operate on system 1, which can lead to more or less system 2 operation, which will further affect system 1 operations. This dynamic interaction is represented by the flow arrows in Figure 1.

3.5 Working guidelines

The combined intuitive activation, uncertainty monitoring, deliberation, and feedback components sketch the basic architecture of a dual-process model that can explain how people switch between system 1 and system 2 thinking. The model sketch also allows us to delineate some more general principles that a working dual-process model needs to respect. First, the model needs to be default-interventionist in nature. The idea of a parallel model in which systems 1 and 2 are always activated simultaneously is both empirically and conceptually problematic. A dual-process model should not assume that system 2 is always on. There will always need to be a processing stage in which the reasoner remains in mere system 1 mode. Second, because system 2 cannot always be on, the model needs to specify a switch mechanism that allows us to decide when system 2 will be turned on (and off). Third, while it is critical that there is a state in which system 1 is activated without parallel system 2 activation, the reverse does not hold. During system 2 activation, system 1 always remains activated. System 2 necessarily operates on the system 1 representations. This modulation ultimately allows us to stop deliberating. Fourth, a viable internal switch account implies that the model will be non-exclusive. As soon as we posit exclusive responses that are out of reach of the intuitive system, it becomes impossible for the reasoner to accurately determine, while in the intuitive processing mode, whether there is a need to generate the exclusive deliberate response. If exclusivity is nevertheless maintained, the model necessarily implies that there is no reliable internal switch mechanism.

If these features or principles are not met, the model will not “work” and cannot qualify as a proper dual-process model that allows us to explain how intuition and deliberation interact. As such, the model sketch may help to separate the wheat from the chaff when evaluating future dual-process accounts.

4. Prospects

I referred to the architecture I presented as a working model. This label serves two goals. On the one hand, it stresses that the model "works" in that it presents a viable account that avoids the conceptual pitfalls that plague traditional dual-process models. On the other hand, "working" also refers to its preliminary status – the model is a work-in-progress. The current specification is intended as a first, high-level verbal description of the core processes and operating principles. Clearly, the model will need to be further fleshed out, fine-tuned, and developed at a more fine-grained processing level. In this section I point to critical outstanding questions that will need to be addressed. These questions have remained largely neglected in the dual-process field. As such, the section also illustrates the model's potential to identify and generate new research questions and set the research agenda in the coming years.

4.1 Uncertainty parameter specification

The working model specifies the uncertainty parameter U as the absolute strength difference between competing intuitions (i.e., U = 1 − |I1 − I2|). This is most likely an oversimplification. For example, the current model does not take the absolute activation level into account. That is, two weak intuitions that have the same strength level (e.g., both have activation level 0.1 out of 1) are assumed to result in the same level of uncertainty as two strong intuitions that have the same strength level (e.g., both have activation level 0.9 out of 1). If two intuitions have trivially small activation levels, one may wonder whether the potential conflict requires or warrants deliberation. It is not unreasonable to assume that we would primarily allocate our precious cognitive resources to the most highly activated or most intense conflicts. One way to account for this would be to incorporate the absolute strength level into the U parameter, for example by multiplying the absolute difference term with the individual strength levels such that U = (1 − |I1 − I2|) × I1 × I2. Under this specification, conflict between stronger intuitions will be weighted more heavily and result in more uncertainty.

Likewise, one may wonder whether the variability of the strength levels is taken into account. Imagine two situations in which upon generation of competing intuitions the uncertainty parameter reaches the deliberation threshold after 1 second. In the first case, the intuition strength levels gradually change such that the U parameter gradually increases until the deliberation threshold is reached. With every unit of time, the uncertainty smoothly increases. Contrast this with a case whereby the strength levels are highly variable and constantly shoot up and down. For example, imagine that initially the uncertainty steeply rises but after a couple of milliseconds it steeply drops, then rises again, drops, and then rises again before it ultimately crosses the threshold. In theory, this variability may be informative. Strength instability might signal an increased need for deliberation. Such a feature could be integrated into the model by factoring strength variability into the U parameter such that, for example, U = (1 − |I1 − I2|) × V(I1) × V(I2). The V factor then simply reflects the variability of the strength level over an elapsed period of time (e.g., the standard deviation of the signal). Consequently, more variability will result in more uncertainty and faster deliberation engagement.
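Both candidate refinements are easy to express computationally. The sketch below contrasts them; the functional forms simply follow the formulas suggested above, and it bears repeating that these encode untested hypotheses rather than settled parts of the model.

```python
# Candidate refinements of the uncertainty parameter discussed above.
# These encode untested hypotheses; the functional forms follow the
# formulas suggested in the text.

import statistics

def u_basic(i1: float, i2: float) -> float:
    return 1 - abs(i1 - i2)

def u_strength_weighted(i1: float, i2: float) -> float:
    # Conflict between stronger intuitions yields more uncertainty.
    return (1 - abs(i1 - i2)) * i1 * i2

def u_variability_weighted(hist1: list[float], hist2: list[float]) -> float:
    # Strength variability over an elapsed time window (here: the sample
    # standard deviation of each strength trace) amplifies uncertainty.
    i1, i2 = hist1[-1], hist2[-1]
    return (1 - abs(i1 - i2)) * statistics.stdev(hist1) * statistics.stdev(hist2)

# Two matched weak intuitions vs. two matched strong ones:
print(u_basic(0.1, 0.1), u_basic(0.9, 0.9))                          # 1.0 vs. 1.0
print(u_strength_weighted(0.1, 0.1), u_strength_weighted(0.9, 0.9))  # ~0.01 vs. ~0.81
```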

In the same sense, in theory, the uncertainty may be impacted by the intuition rise time or strength slope. That is, imagine two intuitive responses that have the exact same peak strength level at a certain point in time. However, it took the first response twice as long to reach that level as the second response. In other words, the slope of the strength function of the first intuition will be much lower than that of the second intuition (i.e., the second one is steeper). Is this factored into the uncertainty equation? Or is the slope simply invariant (i.e., does intuitive strength always rise at a fixed rate)? These are open questions, but they illustrate how the working model generates new research questions that have hitherto remained unexplored in the dual-process field.

Currently, these suggestions or hypotheses remain purely speculative. The absolute strength level, strength variability, slope, and other factors might or might not affect the uncertainty parameter. This remains to be tested and empirically verified. The point is that, in theory, the model can be updated to account for these refinements, and pinpointing the precise signal or strength characteristics that affect the experienced uncertainty should be a promising avenue for further research.

4.2 Nature of non-exclusive system 1 and 2 responses

In a non-exclusive model there is no unique, exclusive response in system 2 that can only be generated through deliberation. Any response that can be computed by system 2 can also be computed by system 1. However, it is important to note that this equivalence is situated at the response or outcome level. Generating a logically correct response in bias tasks, making a utilitarian decision during moral reasoning, or deciding between a selfish and a prosocial option in a cooperation task can all be done intuitively. But this does not imply that the intuitive and deliberate responses are computed through the same mechanism or have the same features. Indeed, given that one is generated through a fast automatic process and one through a slow deliberate process, by definition, the processing mechanisms will differ. To illustrate, consider being asked how much "3 × 10" is. For any educated adult, the answer "30" will immediately pop up through mere intuitive processing. An 8-year-old who is starting to learn multiplication will initially use a more deliberate addition strategy (e.g., 3 times 10 equals 10 + 10 + 10; 10 + 10 equals 20, plus 10 is 30). Both strategies will result in the same answer, but they are generated differently and do not have the same features. For example, the intuitive strategy might allow the adult to respond instantly, but when asked for a justification even adults might need to switch to a more deliberate addition strategy ("well, it's 30 because 10 + 10 + 10 is thirty"). Hence, non-exclusivity does not entail that there is no difference between intuition and deliberation. The point is that intuition and deliberation can cue the same response.

However, it will be important to pinpoint how exactly the non-exclusive system 1 and system 2 responses differ. For example, one of the features that is often associated with deliberation is its cognitive transparency (Bonnefon, 2018; Reber & Allen, 2022). Deliberate decisions can typically be justified; we can explain why we opt for a certain response after we have reflected on it. Intuitive processes often lack this explanatory property: People tend to have little insight into their intuitive processes and do not always manage to justify their "gut feelings" (Marewski & Hoffrage, 2015; Mega & Volz, 2014). Hence, one suggestion is that non-exclusive system 1 and 2 responses might differ in their level of transparency (e.g., De Neys, 2022). For example, in one of their two-response studies on logical reasoning bias, Bago and De Neys (2019a) also asked participants to justify their answers after the initial and final response stages. Results showed that reasoners who gave the correct logical response in the final response stage typically managed to justify it explicitly. However, although reasoners frequently generated the same correct response in the initial response phase, they often struggled to justify it. Bago and De Neys (2019b) observed a similar trend during moral reasoning; although the alleged utilitarian system 2 response was typically already generated in the intuitive response stage, sound justifications of this response were more likely after deliberation in the final response stage. Hence, a more systematic exploration of the role of deliberation in response explicitation or justification seems worthwhile.

Likewise, one may wonder what the exact problem features are that system 1 exploits to generate the alleged system 2 response. For example, it has been suggested that the computation of correct intuitive responses during deductive reasoning may rely on surface features that closely co-vary with the logical status of a conclusion rather than on logical validity per se (Ghasemi, Handley, Howarth, Newman, & Thompson, 2022; Hayes et al., 2022; Meyer-Grant et al., 2022). In this sense, intuitive logical reasoning would compute a proxy of logical validity rather than actual logical validity. These questions concerning the precise nature of non-exclusive system 1 intuitions should help to fine-tune the model in the coming years.

4.3 System 2 automatization

The working model posits that the critical emergence of a non-exclusive "alleged system 2" intuition within system 1 typically results from a developmental learning or automatization process. Through repeated exposure and practice, the system 2 response will gradually become automatized and will be elicited intuitively (De Neys, 2012; Stanovich, 2018). The basic idea that an originally deliberate response may be automatized through practice is theoretically sound (e.g., Shiffrin & Schneider, 1977) and well-integrated in traditional dual-process models (e.g., Evans & Stanovich, 2013; Rand et al., 2012).

However, although the automatization idea might not be unreasonable, there is currently little direct evidence to support it (De Neys & Pennycook, 2019). This points to a need for developmental research to test the emergence of these new intuitions (e.g., Raoelison, Boissin, Borst, & De Neys, 2021). Likewise, individual differences in the strength of intuitions might be linked to differences in response automatization. People might differ in the extent to which they have automatized the system 2 operations. To test this idea more directly, one may envisage training studies in which the activation level or automatization is further boosted through practice. Although there have been some recent promising findings in this respect (Boissin, Caparos, Raoelison, & De Neys, 2021; Purcell, Wastell, & Sweller, 2020), a more systematic exploration is key. Such work may have critical applied importance. Rather than training people to deliberate better to suppress faulty or unwanted intuitions, we might actually help them to boost the desired intuition directly within system 1 (e.g., Milkman, Chugh, & Bazerman, 2009).

Emerging evidence in the logical reasoning field also suggests that spontaneous differences in the strength of sound "logical" intuitions might be associated with individual differences in cognitive capacity (Raoelison et al., 2020; Schubert, Ferreira, Mata, & Riemenschneider, 2021; Thompson, 2021; Thompson, Pennycook, Trippas, & Evans, 2018). That is, people higher in cognitive capacity might have automatized the logical operations better and developed more accurate intuitions (Thompson et al., 2018). Consequently, rather than predicting how good one is at deliberately correcting faulty intuitions, cognitive capacity would predict how likely it is that a correct intuition will dominate from the outset in the absence of deliberation (Raoelison et al., 2020). Although promising, this finding will require further testing (e.g., Thompson & Markovits, 2021) and generalization to different fields.

4.4 Deliberation issues

The deliberation component of the working model will also need further development. I noted that deliberation can take many forms. It will be important to specify these and their potential interaction in more detail. For example, one may wonder about the link between suppression and justification. Do we ever suppress an intuitive response without a justification? That is, do we need an explicit argument or reason to discard an intuitive response, or is such justification independent of the suppression process and does it follow (rather than precede) suppression (Evans, 2019)? More critically perhaps, how are deliberative processes instantiated? For example, does the suppression process imply an active suppression of a target intuition per se or rather a boosting of the activation level of the competing intuition? Alternatively, it has been argued that deliberate suppression can be conceived as a mere response delay (Martiny-Huenger, Bieleke, Doerflinger, Stephensen, & Gollwitzer, 2021). Under this interpretation, the activation level of a dominant intuition automatically decays if it is not acted upon (i.e., does not result in an overt response). Hence, as long as the reasoner refrains from responding, the mere passive passing of time will guarantee that the activation level of an initially dominant intuition will fall below that of its competitor. Consequently, it would be the act of refraining from responding rather than the suppression of a dominant intuition itself that would be demanding. This illustrates how more work is needed to specify the precise instantiation of deliberation.
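The response-delay reading lends itself to a simple sketch: if an unexpressed dominant intuition passively decays while its competitor holds steady or keeps consolidating, merely withholding the overt response eventually reverses the dominance. The decay and growth rates below are invented illustrative values, not estimates.

```python
# Sketch of the "suppression as response delay" reading: the dominant
# intuition passively decays as long as no overt response is given.
# Decay and growth rates are invented illustrative values.

i_dominant, i_rival = 0.8, 0.5
t = 0
while i_dominant >= i_rival:             # reasoner keeps withholding a response
    i_dominant *= 0.9                    # unexpressed intuition decays
    i_rival = min(i_rival * 1.02, 0.6)   # competitor slowly consolidates
    t += 1
print(f"dominance reverses after {t} time steps of withholding")
```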

Another question concerns the gradual or discrete nature of deliberation engagement (Dewey, 2021, 2022). In the current model specification, I focused on the extent of deliberation: the longer the uncertainty parameter remains above the threshold, the longer we will keep deliberating. But in addition to the question of how long we will keep deliberating, one may also wonder how hard we will deliberate. How much of our cognitive resources do we allocate to the task at hand? Do we always go all-in, in an all-or-nothing manner, or do we set the amount of allocated resources more gradually? In theory, the amount of deliberation might be determined by the uncertainty parameter. For example, the higher the uncertainty parameter (above the threshold), the more resources will be allocated. This issue will need to be determined empirically (e.g., see Dewey, 2022) but again illustrates how the current working model leads to new questions and can guide future research.
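One simple way to formalize the graded alternative is to let the allocated resources scale with how far U exceeds the threshold. The parameterization below is a speculative sketch, not part of the specified model.

```python
# Speculative sketch: deliberation intensity scales with how far the
# uncertainty exceeds the threshold, rather than being all-or-nothing.

def resources_allocated(u: float, d: float = 0.78, max_resources: float = 1.0) -> float:
    if u < d:
        return 0.0                              # below threshold: no deliberation
    return max_resources * (u - d) / (1 - d)    # linear ramp above threshold

for u in (0.5, 0.8, 0.9, 1.0):
    print(f"U={u:.2f} -> resources {resources_allocated(u):.2f}")
```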

Finally, one can also question whether the cost of deliberation is factored into our decision to revert to system 1 processing. Imagine that even when we are engaging all our available resources, we still do not manage to resolve a conflict between competing intuitions. What do we do when we do not readily find a solution to a problem? We cannot deliberate forever, so at a certain point we need to stop deliberating even when the uncertainty has not been resolved. Here we presumably need to take the opportunity cost of deliberation into account (e.g., Boureau, Sokol-Hessner, & Daw, 2015; Sirota, Juanchich, & Holford, 2022). Although in a typical experimental study participants only need to focus on the specific reasoning task at hand, in a more ecologically valid environment we always face multiple tasks or challenges. Resources spent on one task cannot be spent on another. If another task is more pressing or more rewarding, we may deliberately decide to stop allocating cognitive resources to the current target task. In theory, this opportunity factor may affect the uncertainty parameter. That is, one consequence of not being able to solve a problem is that we may lose interest in it and shift to a different challenge. This may be instantiated by an overall lowering of the activation strength of the intuitions or by the inclusion of an opportunity cost factor in the U parameter calculation, both of which may decrease the experienced uncertainty. Hence, bluntly put, the longer a deliberation process takes, the less we may bother about it. These suggestions are speculative but they illustrate how research on the opportunity cost of deliberation can be integrated into the model.

4.5 Multiple, one, or no intuitions

The current model focuses on the paradigmatic case in which a reasoner is faced with two competing intuitions. As I noted, in theory, the model can be extended to situations in which no, one, or more than two intuitions are cued. In the latter case, the uncertainty parameter might focus on the absolute difference or strength variability of the different intuitions. The more similar in strength they are, the higher the uncertainty. In case no intuitive response is cued, its strength will obviously be zero. Consequently, the uncertainty will be maximal and the reasoner will be obliged to look for a deliberate response. However, note that in practice, these cases have received little or no empirical testing in dual-process studies. For example, rather than variability per se, uncertainty might be determined by the distance between the strongest intuition and its competitors. Imagine that in a first case three competing intuitions have strength levels 0.9, 0.1, and 0.1, and in a second case 0.9, 0.9, and 0.1. In both cases the average strength deviation (e.g., standard deviation) will be the same, but uncertainty and the need for deliberate adjudication might be higher in the second case. Likewise, although it is generally assumed in the dual-process literature that the absence of an intuitive cue will necessarily imply activation of system 2 (e.g., Evans & Stanovich, 2013; Kahneman, 2011; Stanovich, 2011), this activation might also depend on the perceived opportunity cost of deliberation (Shenhav, Prater Fahey, & Grahek, 2021). Future dual-process research will need to pay more empirical attention to these atypical cases.
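The two hypothetical three-intuition cases are easy to check numerically. In the sketch below, the gap-based measure (the distance between the strongest intuition and its nearest competitor) is one illustrative alternative to a dispersion-based measure; it separates the two cases where the standard deviation does not.

```python
# The two three-intuition cases from the text: identical spread, but
# arguably different adjudication demands. A gap-based uncertainty measure
# distinguishes them; the standard deviation does not.

import statistics

case_a = [0.9, 0.1, 0.1]   # one clear winner
case_b = [0.9, 0.9, 0.1]   # two tied front-runners

for case in (case_a, case_b):
    top, runner_up = sorted(case, reverse=True)[:2]
    print(f"sd={statistics.pstdev(case):.3f}  gap-based U={1 - (top - runner_up):.2f}")
# Output: sd=0.377 in both cases, but U=0.20 for case_a vs. U=1.00 for case_b.
```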

Finally, the working model's uncertainty monitoring account also applies when only one intuition is cued. In this case the difference factor will equal the intuition's strength. If the strength is high, the uncertainty will be low and the cued response can be selected without further deliberation. A weaker intuition will result in a higher uncertainty, which increases the likelihood that the deliberation threshold is crossed and system 2 is called upon. Here the working model fits well with recent accounts that examine the role of metacognition in reasoning (i.e., so-called metareasoning; e.g., Ackerman & Thompson, 2017; see also Baron, 1985, for a related older suggestion). The basic idea is that an intuitive response is always accompanied by an intuitive confidence judgment (i.e., the so-called feeling of rightness; Ackerman & Thompson, 2017). This confidence level would then determine deliberation engagement (i.e., the lower the confidence, the higher the deliberation probability). In essence, this process serves the same role as the uncertainty monitoring in the current working model and it might be worthwhile to integrate the accounts further.

4.6 Links with other fields

Some of the challenges that the working model tries to address show interesting similarities and connections with ongoing developments in other fields, such as work on the automatic triggering of cognitive control (e.g., Algom & Chajut, 2019), mental effort allocation (e.g., Kool & Botvinick, 2018; Shenhav et al., 2021), and computational modeling of changes-of-mind in perceptual decision making (e.g., Stone, Mattingley, & Rangelov, 2022; Turner, Feuerriegel, Andrejević, Hester, & Bode, 2021). Although these fields have typically focused on lower-level tasks than dual-process models of reasoning – and have remained somewhat isolated from this literature – the working model might allow us to integrate the two, which can offer some guidance for the further development of dual-process models of higher-order cognition.⁵

For example, research on the engagement of cognitive control in tasks such as the Stroop task (e.g., name the ink color in which a color word is written) has indicated that various processes that had long been considered the hallmark of deliberate controlled processing can also operate automatically (e.g., Desender, Van Lierde, & Van den Bussche, 2013; Jiang, Correa, Geerts, & van Gaal, 2018; Linzarini, Houdé, & Borst, 2017). These findings have resulted in broader theoretical advances that indicate how core control mechanisms can also be achieved through low-level associative mechanisms (Abrahamse, Braem, Notebaert, & Verguts, 2016; Algom & Chajut, 2019; Braem & Egner, 2018). Hence, as in the dual-process literature, there seems to be a tendency to move from an exclusive to a non-exclusive view on elementary control processes (e.g., see also Hassin, 2013, for a related point on conscious and unconscious processing).

Likewise, the field of mental effort allocation has long studied the motivational aspects of deliberate control (e.g., Kool & Botvinick, 2018; Shenhav et al., 2017, 2021). Here, the decision to engage effortful controlled processing in a cognitive task is modeled as a function of the likelihood that allocating control will result in the desired outcome and a weighing of the costs and benefits of allocating control to the task. Such a framework might be highly relevant for the integration of an opportunity cost factor in dual-process models of reasoning (Sirota et al., 2022).

In the same vein, research on so-called changes-of-mind (Evans, Dutilh, Wagenmakers, & van der Maas, 2020; Resulaj, Kiani, Wolpert, & Shadlen, 2009; Turner et al., 2021; Van Den Berg et al., 2016) can be inspirational. Scholars in this field try to explain when and how participants will revise perceptual decisions (e.g., whether or not a stimulus was perceived). For example, you might initially infer that an "X" was briefly flashed on screen but milliseconds later revise this answer and decide it was a "Y." Various computational models have been developed and contrasted that make different assumptions about, for example, whether an increase in the activation level of one decision automatically implies an activation decrease in its competitor, or whether such activation necessarily decays over time (e.g., Pleskac & Busemeyer, 2010; Usher & McClelland, 2001). Integration of this modeling work might be useful for the further fine-grained specification of the intuitive activation component of the working model.
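For a flavor of the modeling style in that literature, here is a minimal leaky competing accumulator in the spirit of Usher and McClelland (2001); all parameter values are illustrative, and this is a sketch of the model class rather than any published parameterization.

```python
# Minimal leaky competing accumulator in the spirit of Usher and
# McClelland (2001). Two response accumulators receive input, leak, and
# mutually inhibit each other; the first to reach threshold wins.
# All parameter values are illustrative.

import random

def lca_trial(input1=0.55, input2=0.45, leak=0.1, inhibition=0.2,
              noise=0.05, threshold=1.0, dt=0.1, max_steps=1000):
    x1 = x2 = 0.0
    for step in range(max_steps):
        dx1 = input1 - leak * x1 - inhibition * x2 + random.gauss(0, noise)
        dx2 = input2 - leak * x2 - inhibition * x1 + random.gauss(0, noise)
        x1 = max(x1 + dt * dx1, 0.0)   # activations stay non-negative
        x2 = max(x2 + dt * dx2, 0.0)
        if x1 >= threshold or x2 >= threshold:
            return ("response 1" if x1 >= x2 else "response 2", step)
    return ("no decision", max_steps)

print(lca_trial())  # e.g., ('response 1', 23) -- outcome varies with noise
```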

4.7 Computation issues

The present working model is intended to serve as a first, verbal model of core processes and operating principles. It does not present a computational model that specifies how the operations are calculated and what processes ultimately underlie system 1 or the generation of intuitions. However, such a specification or integration is not impossible. For example, Oaksford and Hall (2016) showed how a probabilistic Bayesian approach might in theory be used to model conflict between competing intuitions and the generation of "logical" (or alleged system 2) intuitions in classic reasoning tasks. Oaksford and Hall gave the example of a base-rate neglect task in which base-rate information (e.g., a sample with 995 men and 5 women) can conflict with information provided by a stereotypical description (e.g., a randomly drawn individual from the sample is described as someone who likes shopping). Traditionally it is assumed that the description will cue an incorrect intuitive response (i.e., the randomly drawn individual is most likely female) and that taking the base-rate information into account will require system 2 deliberation. Oaksford and Hall demonstrated how both might be done intuitively in system 1 by an unconscious sampling of probability distributions. In a nutshell, probabilities are represented as probability density functions in the model (e.g., Clark, 2013). Different cues in the problem information (e.g., base-rates and the description) will give rise to a probability distribution of possible values. The first cue that is encountered (e.g., base-rates) will give rise to a prior distribution. The second cue (e.g., description) will modify this to a posterior probability distribution. A decision is then made by sampling values from these distributions. In essence, this unconscious process of probability distribution sampling would ultimately underlie system 1 processing. Although such an account would need to be generalized to other tasks and domains, it indicates that a more fine-grained computational account is not a mere promissory note. In theory, the underlying computational model can be specified and tested. This remains an important challenge for the current working model. At the same time, it also underscores the value of a verbal working model. If our theories maintain that a response is out of reach of the intuitive system, there is no point in trying to model how such a response can be intuitively instantiated either.
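As a very rough sketch of the flavor of such an account (not Oaksford and Hall's actual model), the base-rate example can be written as a prior set by the base rates, updated by the description, with the response strength read out by sampling. The likelihood ratio assigned to the description is an invented illustrative number.

```python
# Rough sketch in the spirit of a probabilistic system 1 account of the
# base-rate task (not Oaksford & Hall's actual model). The likelihood
# ratio assigned to the description is an invented illustrative value.

import random

prior_odds = 5 / 995        # base rates: 5 women vs. 995 men in the sample
likelihood_ratio = 20       # assume "likes shopping" is 20x likelier for a woman
posterior_odds = prior_odds * likelihood_ratio
p_woman = posterior_odds / (1 + posterior_odds)   # ~0.09

# Intuitive read-out by sampling: the proportion of "woman" samples can be
# taken as the strength of the corresponding intuition in system 1.
samples = sum(random.random() < p_woman for _ in range(1000))
print(f"P(woman) = {p_woman:.2f}; sampled intuition strength = {samples / 1000:.2f}")
```

Note that in this toy read-out the "man" response remains dominant despite the stereotypical description, because the base rates have been folded in without any deliberate step.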

4.8 Dual schmosses?

This paper pointed out that there is little empirical and conceptual support for foundational dual-process assumptions and presented a revised working model to address these challenges. However, given the empirical and conceptual dual-process issues, one might be tempted to draw a radically different conclusion. That is, rather than trying to build a more credible version of the framework, shouldn't we simply abandon the dual-process enterprise of splitting cognition into a fast and slow system altogether? This critique can be read and targeted at multiple levels. First, various scholars have long questioned dual-process models (e.g., Gigerenzer & Regier, 1996; Keren & Schul, 2009; Melnikoff & Bargh, 2018; Osman, 2004). Often this is accompanied by a call to switch to so-called single-process models (e.g., Kruglanski & Gigerenzer, 2011; Osman, 2004). As I noted in the Introduction, both single- and dual-process models focus on the interaction between intuition and deliberation. But whereas single-process proponents believe there is only a quantitative difference between intuition and deliberation (i.e., the difference is one of degree, not kind), dual-process theorists have traditionally argued for a qualitative view on this difference (e.g., see Keren & Schul, 2009, and De Neys, 2021, for reviews). Bluntly put, whereas the qualitative view sees intuition and deliberation as running on different engines, the quantitative view entails that they run on one and the same engine that simply operates at different intensities. My main argument was orthogonal to this specific issue and I therefore used the fast-and-slow dual-process label as a general header that covers both the qualitative and quantitative interpretation. The simple reason is that single-process models also differentiate between intuitive and deliberate processing and posit that some responses require more deliberation than others (e.g., Kruglanski & Gigerenzer, 2011). At one point we may be at the intuitive end of the processing scale and will need to decide whether we need to move to the more deliberate end and invest more time and resources (e.g., whether or not we hit the gas pedal and let the engine run at full throttle). Hence, quantitative single-process models face the same switch issue as their qualitative rivals. Any solution will require them to drop exclusivity and postulate that responses that can be computed when we're at the deliberate extreme of the processing scale can also be computed when we're at the intuitive end. In short, the issues outlined here are not solved by simply moving from a qualitative to a quantitative single-process view on intuition and deliberation.

Another possible general critique of the dual-process approach has to do with the specific reading of the "system" label (e.g., Oaksford & Chater, 2012). Dual-process models are also referred to as dual system models. These labels are often used interchangeably (as in the present paper) but sometimes they are used to refer to a specific subclass of models. For example, some dual-process models are more specific in their scope, others more general (Gawronski & Creighton, 2013). The more specific models are developed to account for specific phenomena or tasks, the more general ones are intended to be more integrative and apply to various phenomena. Some authors use the system label to specifically refer to the latter, more general models (e.g., Smith & DeCoster, 2000; Strack & Deutsch, 2004). One critique of dual-process models has to do with this general "system" interpretation. One may argue that although intuitive and deliberate processing in various domains might bear some phenomenological family resemblance, they ultimately share no common core. For example, "system 1" processing in moral reasoning might have nothing to do with "system 1" processing during prosocial decision making or logical reasoning. Hence, rather than positing a general intuitive and deliberate processing type, we may have subsets of more intuitively and deliberately operating processes that are at play in different tasks. This is a valid point but it is ultimately independent of the issue addressed here. That is, even if there are domain- or task-specific intuitive and deliberate processes at play, we still need to explain how we switch from one to the other in the specific task at hand. Hence, the classic "system" view is not the problem here. This does help to underscore that the processing details (e.g., the precise value of the deliberation threshold) of the working model may vary across domains (or even tasks). The point is that its core principles (e.g., non-exclusivity, monitoring, feedback component) will need to apply if we want to account for the switch process in any of these individual domains (or tasks).

Finally, one may also wonder whether the central dual-process switch issue is simply an instantiation of the more general challenge of deciding when to stop a calculation. That is, imagine that all human cognition is deliberative in nature. Even in this case, where there is never an intuition/deliberation switch decision to make, we would still need to decide whether to keep on calculating or to stop and take a stab at the answer in the light of the deliberate calculations we already made. As I noted (sect. 4.6), this "stop" question is specifically examined in work on mental effort allocation and might be especially useful to integrate an opportunity cost factor into the working model (e.g., Sirota et al., 2022). However, is this all we need? I believe it is important to highlight that dual-process models typically focus on a slightly different situation. That is, rather than deciding whether or not to spend (more) resources to get to an answer per se, they deal with cases in which a plausible, salient answer is intuitively cued from the outset, before we spend any effort at all. The question is whether there is a need to go beyond this first hunch. Do we need to start deliberating if we are instantly repulsed by a moral option, feel that it's better to share with others than to keep more for ourselves, or have a positive first impression of a job candidate? Whether such a switch decision can be accounted for by the same mechanism as the general calculation or deliberation "stopping" machinery is ultimately an empirical question. At the very least it will require us to examine and account for the switching in the specific situations that dual-process models envisage. Developing a revised account that provides a viable specification of the postulated intuition/deliberation switch mechanism should always be useful here. Clearly, if a dual-process model doesn't specify a switch account yet, there is no point in contrasting it with other "switch" approaches. Hence, even if one questions the idea that we can distinguish more intuitive and deliberate processing in human cognition and favors an alternative account, it is paramount to request dual-process theorists to develop the best possible specification of the core "fast-and-slow" switch mechanism. The point is that this will allow for a more informative contrast with possible rival accounts. Put simply, if we want to know whether NFL players have a better physique than basketball players, we should test them against NBA players rather than players from the local recreational team. To avoid any confusion, my point is not that the current working model provides the best possible dual-process specification (or that it's the LeBron James of dual-process theory), but that it is sensible to strive for the best possible version of the framework.

5. Conclusion

In the last 50 years dual-process models of thinking have moved to the center stage in research on human reasoning. These models have been instrumental for the initial exploration of human thinking in the cognitive sciences and related fields (Chater, 2018; De Neys, 2021). However, it is time to rethink foundational assumptions. Traditional dual-process models have typically conceived intuition and deliberation as generating unique responses such that one type of response is exclusively tied to deliberation and is assumed to be beyond the reach of the intuitive system. I reviewed empirical evidence from key dual-process applications that argued against this exclusivity feature. I also showed how exclusivity leads to conceptual complications when trying to explain how a reasoner switches between intuitive and deliberate reasoning. To avoid these complications, I sketched an elementary non-exclusive working model in which it is the activation strength of competing intuitions within system 1 that determines system 2 engagement.

It will be clear that the working model is a starting point that will need to be further developed and specified. However, by avoiding the conceptual paradoxes that plague the traditional model, it presents a more viable basic architecture that can serve as theoretical groundwork to build future dual-process models in various fields. In addition, it should at the very least force dual-process theorists to specify more explicitly how they address the switch issue. In the absence of such specification, dual-process models might continue to provide an appealing narrative but will do little to advance our understanding of the interaction between intuitive and deliberate – fast-and-slow – thinking. It is in this sense that I hope that the present paper can help to sketch the building blocks of a more judicious dual-process future.

Financial support

Writing of this paper was supported by a grant from Agence Nationale de la Recherche (DIAGNOR ANR-16-CE28-0010-01).

Competing interest

None.

Footnotes

1. I will use “logical” as a general header to refer to logical, probabilistic, and mathematical principles and reasoning.

2. For a recent illustration consider the widespread mistaken belief that Covid-19 vaccines are unsafe because more vaccinated than unvaccinated people are hospitalized (neglecting that the group of vaccinated people is far larger in most Western countries; e.g., Devis, 2021).

3. See the legendary Johnny Cash song "A boy named Sue" (Cash, 1969).

4. This does not imply that these authors agree with or can be held accountable for the claims made here. I simply want to acknowledge that my theorizing does not come out of the blue and was inspired by the thinking of multiple scholars.

5. Vice versa, this could, for example, also help to scale up models focusing on more elementary low-level cognition tasks to higher-level reasoning about morality, cooperation, and logic.

References

Abrahamse, E., Braem, S., Notebaert, W., & Verguts, T. (2016). Grounding cognitive control in associative learning. Psychological Bulletin, 142, 693.
Achtziger, A., & Alós-Ferrer, C. (2014). Fast or rational? A response-times study of Bayesian updating. Management Science, 60, 923–938. https://doi.org/10.1287/mnsc.2013.1793
Ackerman, R., & Thompson, V. A. (2017). Meta-reasoning: Monitoring and control of thinking and reasoning. Trends in Cognitive Sciences, 21, 607–617.
Algom, D., & Chajut, E. (2019). Reclaiming the Stroop effect back from control to input-driven attention and perception. Frontiers in Psychology, 10, 1683. https://doi.org/10.3389/fpsyg.2019.01683
Bago, B., Bonnefon, J. F., & De Neys, W. (2021). Intuition rather than deliberation determines selfish and prosocial choices. Journal of Experimental Psychology: General, 150, 1081–1094. https://doi.org/10.1037/xge0000968
Bago, B., & De Neys, W. (2017). Fast logic?: Examining the time course assumption of dual process theory. Cognition, 158, 90–109.
Bago, B., & De Neys, W. (2019a). The smart system 1: Evidence for the intuitive nature of correct responding on the bat-and-ball problem. Thinking & Reasoning, 25(3), 257–299. https://doi.org/10.1080/13546783.2018.1507949
Bago, B., & De Neys, W. (2019b). The intuitive greater good: Testing the corrective dual process model of moral cognition. Journal of Experimental Psychology: General, 148, 1782–1801. https://doi.org/10.1037/xge0000533
Bago, B., & De Neys, W. (2020). Advancing the specification of dual process models of higher cognition: A critical test of the hybrid model view. Thinking & Reasoning, 26, 1–30. https://doi.org/10.1080/13546783.2018.1552194
Bago, B., Rand, D. G., & Pennycook, G. (2020). Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines. Journal of Experimental Psychology: General, 149, 1608–1613.
Baron, J. (1985). Rationality and intelligence. Cambridge University Press. https://doi.org/10.1017/CBO9780511571275
Baron, J. (2017). Utilitarian vs. deontological reasoning: Method, results, and theory. In Bonnefon, J.-F. & Trémolière, B. (Eds.), Moral inferences (pp. 137–151). Psychology Press.
Baron, J., & Gürçay, B. (2017). A meta-analysis of response-time tests of the sequential two-systems model of moral judgment. Memory & Cognition, 45, 566–575. https://doi.org/10.3758/s13421-016-0686-8
Barr, N., Pennycook, G., Stolz, J. A., & Fugelsang, J. A. (2015). Reasoned connections: A dual-process perspective on creative thought. Thinking & Reasoning, 21, 61–75. https://doi.org/10.1080/13546783.2014.895915
Beattie, G. (2012). Psychological effectiveness of carbon labelling. Nature Climate Change, 2, 214–217. https://doi.org/10.1038/nclimate1468
Białek, M., & De Neys, W. (2016). Conflict detection during moral decision-making: Evidence for deontic reasoners’ utilitarian sensitivity. Journal of Cognitive Psychology, 28, 631–639.
Białek, M., & De Neys, W. (2017). Dual processes and moral conflict: Evidence for deontological reasoners’ intuitive utilitarian sensitivity. Judgment and Decision Making, 12, 148–167.
Boissin, E., Caparos, S., Raoelison, M., & De Neys, W. (2021). From bias to sound intuiting: Boosting correct intuitive reasoning. Cognition, 211, 104645. https://doi.org/10.1016/j.cognition.2021.104645
Bonnefon, J. F. (2018). The pros and cons of identifying critical thinking with system 2 processing. Topoi, 37, 113–119. https://doi.org/10.1007/s11245-016-9375-2
Bonnefon, J. F., & Rahwan, I. (2020). Machine thinking, fast and slow. Trends in Cognitive Sciences, 24, 1019–1027. https://doi.org/10.1016/j.tics.2020.09.007
Boureau, Y. L., Sokol-Hessner, P., & Daw, N. D. (2015). Deciding how to decide: Self-control and meta-decision making. Trends in Cognitive Sciences, 19, 700–710. https://doi.org/10.1016/j.tics.2015.08.013
Bouwmeester, S., Verkoeijen, P. P., Aczel, B., Barbosa, F., Bègue, L., Brañas-Garza, P., … Wollbrant, C. E. (2017). Registered replication report: Rand, Greene, and Nowak (2012). Perspectives on Psychological Science, 12, 527–542. https://doi.org/10.1177/1745691617693624
Braem, S., & Egner, T. (2018). Getting a grip on cognitive flexibility. Current Directions in Psychological Science, 27, 470–476.
Burič, R., & Konrádová, Ľ. (2021). Mindware instantiation as a predictor of logical intuitions in the cognitive reflection test. Studia Psychologica, 63, 114–128. https://doi.org/10.31577/sp.2021.02.822
Burič, R., & Šrol, J. (2020). Individual differences in logical intuitions on reasoning problems presented under two-response paradigm. Journal of Cognitive Psychology, 32, 460–477.
Cash, J. (1969). A boy named Sue. Columbia Records.
Chaiken, S., & Trope, Y. (Eds.) (1999). Dual-process theories in social psychology. Guilford Press.
Chater, N. (2018). Is the type 1/type 2 distinction important for behavioral policy? Trends in Cognitive Sciences, 22, 369–371. https://doi.org/10.1016/j.tics.2018.02.007
Chater, N., & Schwarzlose, R. F. (2016). Thinking about thinking: 28 years on. Trends in Cognitive Sciences, 20, 787.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36, 181–204. https://doi.org/10.1017/S0140525X12000477
Conway, P., & Gawronski, B. (2013). Deontological and utilitarian inclinations in moral decision making: A process dissociation approach. Journal of Personality and Social Psychology, 104, 216. https://doi.org/10.1037/a0031021
Cushman, F. (2020). Rationalization is rational. Behavioral and Brain Sciences, 43, e28. https://doi.org/10.1017/S0140525X19001730
De Neys, W. (2006a). Dual processing in reasoning: Two systems but one reasoner. Psychological Science, 17, 428–433. https://doi.org/10.1111/j.1467-9280.2006.01723.x
De Neys, W. (2006b). Automatic-heuristic and executive-analytic processing in reasoning: Chronometric and dual task considerations. Quarterly Journal of Experimental Psychology, 59, 1070–1100. https://doi.org/10.1080/02724980543000123
De Neys, W. (2012). Bias and conflict: A case for logical intuitions. Perspectives on Psychological Science, 7, 28–38. https://doi.org/10.1177/1745691611429354
De Neys, W. (Ed.) (2017). Dual process theory 2.0. Routledge. https://doi.org/10.4324/9781315204550
De Neys, W. (2020). Morality, normativity, and the good system 2 fallacy. Diametros, 17, 1–6. https://doi.org/10.33392/diam.1447
De Neys, W. (2021). On dual and single process models of thinking. Perspectives on Psychological Science, 16, 1412–1427. https://doi.org/10.1177/1745691620964172
De Neys, W. (2022). The cognitive unconscious and dual process theories of reasoning. In Reber, A. S. & Allen, R. (Eds.), The cognitive unconscious: The first fifty years. Oxford University Press.
De Neys, W., & Bonnefon, J. F. (2013). The whys and whens of individual differences in thinking biases. Trends in Cognitive Sciences, 17, 172–178. https://doi.org/10.1016/j.tics.2013.02.001
De Neys, W., & Glumicic, T. (2008). Conflict monitoring in dual process theories of thinking. Cognition, 106, 1248–1299. https://doi.org/10.1016/j.cognition.2007.06.002
De Neys, W., & Pennycook, G. (2019). Logic, fast and slow: Advances in dual-process theorizing. Current Directions in Psychological Science, 28, 503–509. https://doi.org/10.1177/0963721419855658
Desender, K., Van Lierde, E., & Van den Bussche, E. (2013). Comparing conscious and unconscious conflict adaptation. PLoS ONE, 8(2), e55976.
Devis, D. (2021). Why are there so many vaccinated people in hospital? Retrieved from https://cosmosmagazine.com/health/covid/why-are-there-so-many-vaccinated-people-in-hospital/
DeWall, C. N., Baumeister, R. F., Gailliot, M. T., & Maner, J. K. (2008). Depletion makes the heart grow less helpful: Helping as a function of self-regulatory energy and genetic relatedness. Personality and Social Psychology Bulletin, 34, 1653–1662. https://doi.org/10.1177/0146167208323981
Dewey, A. R. (2021). Reframing single- and dual-process theories as cognitive models: Commentary on De Neys (2021). Perspectives on Psychological Science, 16, 1428–1431.
Dewey, A. R. (2022). Metacognitive control in single- vs. dual-process theory. Manuscript submitted for publication.
Djulbegovic, B., Hozo, I., Beckstead, J., Tsalatsanis, A., & Pauker, S. G. (2012). Dual processing model of medical decision-making. BMC Medical Informatics and Decision Making, 12, 94. https://doi.org/10.1186/1472-6947-12-94
Dujmović, M., Valerjev, P., & Bajšanski, I. (2021). The role of representativeness in reasoning and metacognitive processes: An in-depth analysis of the Linda problem. Thinking & Reasoning, 27, 161–186. https://doi.org/10.1080/13546783.2020.1746692
Epstein, S. (1994). Integration of the cognitive and the psychodynamic unconscious. American Psychologist, 49, 709–724.
Evans, J. S. B. (2002). Logic and human reasoning: An assessment of the deduction paradigm. Psychological Bulletin, 128(6), 978–996. https://doi.org/10.1037/0033-2909.128.6.978
Evans, J. S. B. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology, 59, 255–278.
Evans, J. S. B. (2016). Reasoning, biases and dual processes: The lasting impact of Wason (1960). The Quarterly Journal of Experimental Psychology, 69, 2076–2092. https://doi.org/10.1080/17470218.2014.914547
Evans, J. S. B. (2019). Reflections on reflection: The nature and function of type 2 processes in dual-process theories of reasoning. Thinking & Reasoning, 25, 383–415.
Evans, J. S. B., & Curtis-Holmes, J. (2005). Rapid responding increases belief bias: Evidence for the dual-process theory of reasoning. Thinking & Reasoning, 11, 382–389. https://doi.org/10.1080/13546780542000005
Evans, J. S. B., & Over, D. E. (1996). Rationality and reasoning. Psychology Press.
Evans, J. S. B., & Stanovich, K. E. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8, 223–241.
Evans, J. S. B., & Wason, P. C. (1976). Rationalization in a reasoning task. British Journal of Psychology, 67, 479–486. https://doi.org/10.1111/j.2044-8295.1976.tb01536.x
Evans, J. S. B. T. (2009). How many dual process theories do we need? One, two or many? In Evans, J. St. B. T. & Frankish, K. (Eds.), In two minds: Dual processes and beyond (pp. 1–32). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199230167.001.0001
Evans, J. S. B. T. (2011). Dual-process theories of reasoning: Contemporary issues and developmental applications. Developmental Review, 31, 86–102. https://doi.org/10.1016/j.dr.2011.07.007
Evans, N. J., Dutilh, G., Wagenmakers, E. J., & van der Maas, H. L. (2020). Double responding: A new constraint for models of speeded decision making. Cognitive Psychology, 121, 101292. https://doi.org/10.1016/j.cogpsych.2020.101292
Frankish, K., & Evans, J. St. B. T. (2009). The duality of mind: An historical perspective. In Evans, J. St. B. T. & Frankish, K. (Eds.), In two minds: Dual processes and beyond (pp. 1–29). Oxford University Press.
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19, 25–42. https://doi.org/10.1257/089533005775196732
Frey, D., Johnson, E. D., & De Neys, W. (2018). Individual differences in conflict detection during reasoning. Quarterly Journal of Experimental Psychology, 71, 1188–1208. https://doi.org/10.1080/17470218.2017.1313283
Gangemi, A., Bourgeois-Gironde, S., & Mancini, F. (2015). Feelings of error in reasoning – In search of a phenomenon. Thinking & Reasoning, 21(4), 383–396. https://doi.org/10.1080/13546783.2014.980755
Gawronski, B., & Creighton, L. A. (2013). Dual-process theories. In Carlston, D. E. (Ed.), The Oxford handbook of social cognition (pp. 282–312). Oxford University Press.
Gervais, W. M., & Norenzayan, A. (2012). Analytic thinking promotes religious disbelief. Science, 336, 493–496. https://doi.org/10.1126/science.1215647
Ghasemi, O., Handley, S., Howarth, S., Newman, I. R., & Thompson, V. A. (2022). Logical intuition is not really about logic. Journal of Experimental Psychology: General, 151(9), 2009–2028. https://doi.org/10.1037/xge0001179
Gigerenzer, G., & Regier, T. (1996). How do we tell an association from a rule? Comment on Sloman (1996). Psychological Bulletin, 119, 23–26. https://doi.org/10.1037/0033-2909.119.1.23
Greene, J. (2013). Moral tribes: Emotion, reason and the gap between us and them. Penguin Press.
Greene, J., & Haidt, J. (2002). How (and where) does moral judgment work? Trends in Cognitive Sciences, 6, 517–523. https://doi.org/10.1016/S1364-6613(02)02011-9
Greene, J. D. (2007). Why are VMPFC patients more utilitarian? A dual-process theory of moral judgment explains. Trends in Cognitive Sciences, 11, 322–323.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 2105–2108. https://doi.org/10.1126/science.1062872
Grossman, Z., & Van der Weele, J. J. (2017). Dual-process reasoning in charitable giving: Learning from non-results. Games, 8, 36. https://doi.org/10.3390/g8030036
Gürçay, B., & Baron, J. (2017). Challenges for the sequential two-system model of moral judgement. Thinking & Reasoning, 23, 49–80.
Hackel, L. M., Wills, J. A., & Van Bavel, J. J. (2020). Shifting prosocial intuitions: Neurocognitive evidence for a value-based account of group-based cooperation. Social Cognitive and Affective Neuroscience, 15(4), 371–381. https://doi.org/10.1093/scan/nsaa055
Hassin, R. R. (2013). Yes it can: On the functional abilities of the human unconscious. Perspectives on Psychological Science, 8, 195–207. https://doi.org/10.1177/1745691612460684
Hayes, B. K., Stephens, R. G., Lee, M. D., Dunn, J. C., Kaluve, A., Choi-Christou, J., & Cruz, N. (2022). Always look on the bright side of logic? Testing explanations of intuitive sensitivity to logic in perceptual tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 48(11), 1598–1617. https://doi.org/10.1037/xlm0001105
Hoerl, C., & McCormack, T. (2019). Thinking in and about time: A dual systems perspective on temporal cognition. Behavioral and Brain Sciences, 42, e244.
Hofmann, W., Friese, M., & Wiers, R. W. (2008). Impulsive versus reflective influences on health behavior: A theoretical framework and empirical review. Health Psychology Review, 2, 111–137. https://doi.org/10.1080/17437190802617668
Houdé, O. (2019). 3-System theory of the cognitive brain: A post-Piagetian approach to cognitive development. Routledge.
Isler, O., Yilmaz, O., & Maule, J. A. (2021). Religion, parochialism and intuitive cooperation. Nature Human Behaviour, 5, 512–521. https://doi.org/10.1038/s41562-020-01014-3
Jiang, J., Correa, C. M., Geerts, J., & van Gaal, S. (2018). The relationship between conflict awareness and behavioral and oscillatory signatures of immediate and delayed cognitive control. NeuroImage, 177, 11–19. https://doi.org/10.1016/j.neuroimage.2018.05.007
Johnson, E. D., Tubau, E., & De Neys, W. (2016). The doubting system 1: Evidence for automatic substitution sensitivity. Acta Psychologica, 164, 56–64. https://doi.org/10.1016/j.actpsy.2015.12.008
Kahneman, D. (2000). A psychological point of view: Violations of rational rules as a diagnostic of mental processes. Behavioral and Brain Sciences, 23, 681–683. https://doi.org/10.1017/S0140525X00403432
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Kaufman, S. B. (2011). Intelligence and the cognitive unconscious. In Sternberg, R. J. & Kaufman, S. B. (Eds.), The Cambridge handbook of intelligence (pp. 442–467). Cambridge University Press.
Keren, G., & Schul, Y. (2009). Two is not always better than one: A critical evaluation of two-system theories. Perspectives on Psychological Science, 4, 533–550.
Kessler, J., Kivimaki, H., & Niederle, M. (2017). Thinking fast and slow: Generosity over time. Retrieved from http://assets.wharton.upenn.edu/~juddk/papers/KesslerKivimakiNiederle_GenerosityOverTime.pdf
Knoch, D., Pascual-Leone, A., Meyer, K., Treyer, V., & Fehr, E. (2006). Diminishing reciprocal fairness by disrupting the right prefrontal cortex. Science, 314, 829–832. https://doi.org/10.1126/science.1129156
Kool, W., & Botvinick, M. (2018). Mental labour. Nature Human Behaviour, 2, 899–908. https://doi.org/10.1038/s41562-018-0401-9
Kruglanski, A. W., & Gigerenzer, G. (2011). Intuitive and deliberate judgments are based on common principles. Psychological Review, 118, 97–109.
Kvarven, A., Strømland, E., Wollbrant, C., Andersson, D., Johannesson, M., Tinghög, G., … Myrseth, K. O. R. (2020). The intuitive cooperation hypothesis revisited: A meta-analytic examination of effect size and between-study heterogeneity. Journal of the Economic Science Association, 6, 26–42. https://doi.org/10.1007/s40881-020-00084-3
Lawson, M. A., Larrick, R. P., & Soll, J. B. (2020). Comparing fast thinking and slow thinking: The relative benefits of interventions, individual differences, and inferential rules. Judgment and Decision Making, 15, 660–684.
Lemir, J. (2021). This book is not about baseball but baseball teams swear by it. Retrieved from https://www.nytimes.com/2021/02/24/sports/baseball/thinking-fast-and-slow-book.html
Linzarini, A., Houdé, O., & Borst, G. (2017). Cognitive control outside of conscious awareness. Consciousness and Cognition, 53, 185–193. https://doi.org/10.1016/j.concog.2017.06.014
Marewski, J. N., & Hoffrage, U. (2015). Modeling and aiding intuition in organizational decision making. Journal of Applied Research in Memory and Cognition, 4, 145–311.
Martinsson, P., Myrseth, K. O. R., & Wollbrant, C. (2014). Social dilemmas: When self-control benefits cooperation. Journal of Economic Psychology, 45, 213–236. https://doi.org/10.1016/j.joep.2014.09.004
Martiny-Huenger, T., Bieleke, M., Doerflinger, J., Stephensen, M. B., & Gollwitzer, P. M. (2021). Deliberation decreases the likelihood of expressing dominant responses. Psychonomic Bulletin & Review, 28, 139–157. https://doi.org/10.3758/s13423-020-01795-8
Mata, A. (2020). Conflict detection and social perception: Bringing meta-reasoning and social cognition together. Thinking & Reasoning, 26(1), 140–149.
Mata, A., Ferreira, M. B., Voss, A., & Kollei, T. (2017). Seeing the conflict: An attentional account of reasoning errors. Psychonomic Bulletin & Review, 24(6), 1980–1986.
Mega, L. F., & Volz, K. G. (2014). Thinking about thinking: Implications of the introspective error for default-interventionist type models of dual processes. Frontiers in Psychology, 5, 864.
Melnikoff, D. E., & Bargh, J. A. (2018). The mythical number two. Trends in Cognitive Sciences, 22, 280–293.
Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34, 57–74. https://doi.org/10.1017/S0140525X10000968
Meyer-Grant, C. G., Cruz, N., Singmann, H., Winiger, S., Goswami, S., Hayes, B. K., & Klauer, K. C. (2022). Are logical intuitions only make-believe? Reexamining the logic-liking effect. Journal of Experimental Psychology: Learning, Memory, and Cognition. https://doi.org/10.1037/xlm0001152
Milkman, K. L., Chugh, D., & Bazerman, M. H. (2009). How can decision making be improved? Perspectives on Psychological Science, 4, 379–383.
Moore, A. B., Clark, B. A., & Kane, M. J. (2008). Who shalt not kill? Individual differences in working memory capacity, executive control, and moral judgment. Psychological Science, 19, 549–557.
Morewedge, C. K., & Kahneman, D. (2010). Associative processes in intuitive judgment. Trends in Cognitive Sciences, 14(10), 435–440.
Newman, I. R., Gibb, M., & Thompson, V. A. (2017). Rule-based reasoning is fast and belief-based reasoning can be slow: Challenging current explanations of belief-bias and base-rate neglect. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43, 1154.
Oaksford, M., & Chater, N. (2012). Dual processes, probabilities, and cognitive architecture. Mind & Society, 11, 15–26.
Oaksford, M., & Hall, S. (2016). On the source of human irrationality. Trends in Cognitive Sciences, 20, 336–344. https://doi.org/10.1016/j.tics.2016.03.002
Osman, M. (2004). An evaluation of dual-process theories of reasoning. Psychonomic Bulletin & Review, 11, 988–1010. https://doi.org/10.3758/BF03196730
Paxton, J. M., Ungar, L., & Greene, J. D. (2012). Reflection and reasoning in moral judgment. Cognitive Science, 36, 163–177.
Pennycook, G. (2017). A perspective on the theoretical foundation of dual-process models. In De Neys, W. (Ed.), Dual process theory 2.0 (pp. 5–27). Routledge. https://doi.org/10.4324/9781315204550-2
Pennycook, G., Fugelsang, J. A., & Koehler, D. J. (2015). What makes us think? A three-stage dual-process model of analytic engagement. Cognitive Psychology, 80, 34–72.
Pennycook, G., Trippas, D., Handley, S. J., & Thompson, V. A. (2014). Base rates: Both neglected and intuitive. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40, 544–554.
Pleskac, T. J., & Busemeyer, J. R. (2010). Two-stage dynamic signal detection: A theory of choice, decision time, and confidence. Psychological Review, 117, 864–901. https://doi.org/10.1037/a0019737
Purcell, Z. A., Wastell, C. A., & Sweller, N. (2020). Domain-specific experience and dual-process thinking. Thinking & Reasoning, 27, 239–267. https://doi.org/10.1080/13546783.2020.1793813
Rand, D. G. (2019). Intuition, deliberation, and cooperation: Further meta-analytic evidence from 91 experiments on pure cooperation. Available at SSRN 3390018. Retrieved from http://dx.doi.org/10.2139/ssrn.3390018
Rand, D. G., Greene, J. D., & Nowak, M. A. (2012). Spontaneous giving and calculated greed. Nature, 489, 427–430.
Rand, D. G., Peysakhovich, A., Kraft-Todd, G. T., Newman, G. E., Wurzbacher, O., Nowak, M. A., & Greene, J. D. (2014). Social heuristics shape intuitive cooperation. Nature Communications, 5, 1–12. https://doi.org/10.1038/ncomms4677
Raoelison, M., Boissin, E., Borst, G., & De Neys, W. (2021). From slow to fast logic: The development of logical intuitions. Thinking & Reasoning, 27, 599–622. https://doi.org/10.1080/13546783.2021.1885488
Raoelison, M., Thompson, V., & De Neys, W. (2020). The smart intuitor: Cognitive capacity predicts intuitive rather than deliberate thinking. Cognition, 204, 104381.
Reber, A., & Allen, R. (2022). The cognitive unconscious: The first fifty years. Oxford University Press. https://doi.org/10.1093/oso/9780197501573.001.0001
Resulaj, A., Kiani, R., Wolpert, D. M., & Shadlen, M. N. (2009). Changing your mind: A computational mechanism of vacillation. Nature, 461(7261), 263. https://doi.org/10.1038/nature08275
Reyna, V. F., Rahimi-Golkhandan, S., Garavito, D. M. N., & Helm, R. K. (2017). The fuzzy-trace dual-process model. In De Neys, W. (Ed.), Dual process theory 2.0 (pp. 90–107). Routledge.
Robison, M. K., & Unsworth, N. (2017). Individual differences in working memory capacity and resistance to belief bias in syllogistic reasoning. Quarterly Journal of Experimental Psychology, 70, 1471–1484. https://doi.org/10.1080/17470218.2016.1188406
Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The neural basis of economic decision-making in the ultimatum game. Science, 300, 1755–1758. https://doi.org/10.1126/science.1082976
Schubert, A. L., Ferreira, M. B., Mata, A., & Riemenschneider, B. (2021). A diffusion model analysis of belief bias: Different cognitive mechanisms explain how cognitive abilities and thinking styles contribute to conflict resolution in reasoning. Cognition, 211, 104629. https://doi.org/10.1016/j.cognition.2021.104629
Shefrin, H. (2013). Advice for CFOs: Beware of fast thinking. Retrieved from https://www.wsj.com/articles/SB10001424127887324299104578531561852942612
Shenhav, A., Musslick, S., Lieder, F., Kool, W., Griffiths, T. L., Cohen, J. D., & Botvinick, M. M. (2017). Toward a rational and mechanistic account of mental effort. Annual Review of Neuroscience, 40, 99–124.
Shenhav, A., Prater Fahey, M., & Grahek, I. (2021). Decomposing the motivation to exert mental effort. Current Directions in Psychological Science, 30, 307–314. https://doi.org/10.1177/09637214211009510
Shiffrin, R. M., & Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending and a general theory. Psychological Review, 84, 127–190. https://doi.org/10.1037/0033-295X.84.2.127
Sirota, M., Juanchich, M., & Holford, D. L. (2022). Rationally irrational: When people do not correct their reasoning errors even if they could. Manuscript submitted for publication.
Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119(1), 3–22.
Smith, E. R., & DeCoster, J. (2000). Dual-process models in social and cognitive psychology: Conceptual integration and links to underlying memory systems. Personality and Social Psychology Review, 4, 108–131.
Šrol, J., & De Neys, W. (2021). Predicting individual differences in conflict detection and bias susceptibility during reasoning. Thinking & Reasoning, 27, 38–68. https://doi.org/10.1080/13546783.2019.1708793
Stanovich, K. (2011). Rationality and the reflective mind. Oxford University Press.
Stanovich, K. E. (2018). Miserliness in human cognition: The interaction of detection, override and mindware. Thinking & Reasoning, 24, 423–444.
Stanovich, K. E., & West, R. F. (1998). Individual differences in rational thought. Journal of Experimental Psychology: General, 127, 161.
Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the rationality debate. Behavioral and Brain Sciences, 23, 645–665. https://doi.org/10.1017/S0140525X00003435
Stone, C., Mattingley, J. B., & Rangelov, D. (2022). On second thoughts: Changes of mind in decision-making. Trends in Cognitive Sciences, 26, 419–431. https://doi.org/10.1016/j.tics.2022.02.004
Strack, F., & Deutsch, R. (2004). Reflective and impulsive determinants of social behavior. Personality and Social Psychology Review, 8, 220–247.
Sunstein, C. R. (2020). The cognitive bias that makes us panic about coronavirus. Retrieved from https://www.bloomberg.com/opinion/articles/2020-02-28/coronavirus-panic-caused-by-probability-neglect
Tett, G. (2021). Mood and emotion are driving market swings. Retrieved from https://www.ft.com/content/89c95e78-ec7f-4d06-9ea8-a9b19a5ed6da
Thompson, V., & Newman, I. (2017). Logical intuitions and other conundra for dual process theories. In De Neys, W. (Ed.), Dual process theory 2.0 (pp. 121–136). Routledge. https://doi.org/10.4324/9781315204550-8
Thompson, V. A. (2021). Eye-tracking IQ: Cognitive capacity and strategy use on a ratio-bias task. Cognition, 208, 104523.
Thompson, V. A., & Johnson, S. C. (2014). Conflict, metacognition, and analytic thinking. Thinking & Reasoning, 20, 215–244. https://doi.org/10.1080/13546783.2013.869763
Thompson, V. A., & Markovits, H. (2021). Reasoning strategy vs cognitive capacity as predictors of individual differences in reasoning performance. Cognition, 217, 104866.
Thompson, V. A., Pennycook, G., Trippas, D., & Evans, J. S. B. (2018). Do smart people have better intuitions? Journal of Experimental Psychology: General, 147, 945–961. https://doi.org/10.1037/xge0000457
Thompson, V. A., Turner, J. A. P., & Pennycook, G. (2011). Intuition, reason, and metacognition. Cognitive Psychology, 63, 107–140. https://doi.org/10.1016/j.cogpsych.2011.06.001
Tinghög, G., Andersson, D., Bonn, C., Johannesson, M., Kirchler, M., Koppel, L., & Västfjäll, D. (2016). Intuition and moral decision-making – The effect of time pressure and cognitive load on moral judgment and altruistic behavior. PLoS ONE, 11, e0164012.
Travers, E., Rolison, J. J., & Feeney, A. (2016). The time course of conflict on the cognitive reflection test. Cognition, 150, 109–118. https://doi.org/10.1016/j.cognition.2016.01.015
Trémolière, B., De Neys, W., & Bonnefon, J. F. (2012). Mortality salience and morality: Thinking about death makes people less utilitarian. Cognition, 124, 379–384. https://doi.org/10.1016/j.cognition.2012.05.011
Trémolière, B., De Neys, W., & Bonnefon, J. F. (2019). Reasoning and moral judgment: A common experimental toolbox. In Ball, L. J. & Thompson, V. A. (Eds.), International handbook of thinking and reasoning (pp. 575–590). Psychology Press.
Trippas, D., & Handley, S. (2017). The parallel processing model of belief bias: Review and extensions. In De Neys, W. (Ed.), Dual process theory 2.0. Routledge.
Turner, W., Feuerriegel, D., Andrejević, M., Hester, R., & Bode, S. (2021). Perceptual change-of-mind decisions are sensitive to absolute evidence magnitude. Cognitive Psychology, 124, 101358. https://doi.org/10.1016/j.cogpsych.2020.101358
Usher, M., & McClelland, J. L. (2001). The time course of perceptual choice: The leaky, competing accumulator model. Psychological Review, 108, 550–592. https://doi.org/10.1037/0033-295X.108.3.550
Van Den Berg, R., Anandalingam, K., Zylberberg, A., Kiani, R., Shadlen, M. N., & Wolpert, D. M. (2016). A common mechanism underlies changes of mind about decisions and confidence. eLife, 5, e12192.
Vartanian, O., Beatty, E. L., Smith, I., Blackler, K., Lam, Q., Forbes, S., & De Neys, W. (2018). The reflective mind: Examining individual differences in susceptibility to base rate neglect with fMRI. Journal of Cognitive Neuroscience, 30, 1011–1022. https://doi.org/10.1162/jocn_a_01264
Vega, S., Mata, A., Ferreira, M. B., & Vaz, A. R. (2021). Metacognition in moral decisions: Judgment extremity and feeling of rightness in moral intuitions. Thinking & Reasoning, 27, 124–141. https://doi.org/10.1080/13546783.2020.1741448
Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12, 129–140. https://doi.org/10.1080/17470216008416717
Wason, P. C., & Evans, J. S. B. (1975). Dual processes in reasoning? Cognition, 3, 141–154. https://doi.org/10.1016/0010-0277(74)90017-1
Wiesmann, C. G., Friederici, A. D., Singer, T., & Steinbeis, N. (2020). Two systems for thinking about others’ thoughts in the developing brain. Proceedings of the National Academy of Sciences, 117, 6928–6935. https://doi.org/10.1073/pnas.1916725117
World Bank Group (2015). World development report: Mind, society, and behavior. World Bank.
Figure 1. Schematic illustration of the working model's core components. I1, intuition 1; I2, intuition 2; d, deliberation threshold. The dashed arrow indicates the optional nature of the deliberation stage.

Figure 2. Illustration of the idea that the strength interplay of conflicting intuitions determines uncertainty and the extent of deliberation. I1, intuition 1; I2, intuition 2; d, deliberation threshold; t1 and t2, time points at which the deliberation threshold is crossed. The gray area represents the time during which system 2 deliberation will be engaged.
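To make the dynamics in Figure 2 concrete, the following toy simulation assumes (purely hypothetically) that the two activation strengths change linearly over time; all slopes, intercepts, and the threshold value are illustrative choices, not parameters from the article.

    # Toy simulation of the Figure 2 dynamics. System 2 is engaged while the
    # strength gap between the two intuitions is below the deliberation
    # threshold d, i.e., during the gray interval between t1 and t2.

    def strength_i1(t: float) -> float:
        return 0.9 - 0.04 * t  # initially dominant intuition, losing strength

    def strength_i2(t: float) -> float:
        return 0.2 + 0.05 * t  # competing intuition, gaining strength

    d = 0.3  # deliberation threshold on the strength difference

    engaged = [t for t in range(21) if abs(strength_i1(t) - strength_i2(t)) < d]
    t1, t2 = engaged[0], engaged[-1]
    print(f"system 2 engaged from t = {t1} to t = {t2}")  # here: t = 5 to t = 11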