
Misleading Higher-Order Evidence and Rationality: We Can't Always Rationally Believe What We Have Evidence to Believe

Published online by Cambridge University Press: 15 September 2023

Wade Munroe
University of Michigan, Ann Arbor, MI, USA

Abstract

Evidentialism as an account of theoretical rationality is a popular and well-defended position. Recently, however, it's been argued that misleading higher-order evidence (HOE) – that is, evidence about one's evidence or about one's cognitive functioning – poses a problem for evidentialism. Roughly, the problem is that, in certain cases of misleading HOE, evidentialism appears to entail that it is rational to adopt a belief in an akratic conjunction – a proposition of the form "p, but my evidence doesn't support p" – even though believing an akratic conjunction appears clearly irrational. In this paper, I defuse the problem for evidentialism using the distinction between propositional and doxastic rationality. I argue that, although it can be propositionally rational to believe an akratic conjunction (according to evidentialism), one cannot inferentially base an akratic belief in one's evidence, and, thus, one cannot doxastically rationally possess an akratic belief. In addition, I address the worry that my solution commits evidentialists to the possibility of epistemic circumstances in which a proposition, p, is propositionally rational to believe (namely, an akratic conjunction), yet one cannot, in principle, (doxastically) rationally believe p. As I demonstrate, cases of misleading HOE are not the only types of cases that force evidentialists to accept that propositional rationality does not entail the possibility of doxastic rationality. Misleading HOE raises no new problems that weren't already present in cases involving purely first-order evidence.

Type: Article
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright © The Author(s), 2023. Published by Cambridge University Press

Evidentialism as an account of theoretical rationality is the position that,

(Evidentialism) a doxastic attitude, D, toward a proposition, p, is rational for an agent, S, at a time, t, iff having D(p) fits S's evidence at t

where the fittingness of D(p) on S's evidence is typically analyzed in terms of evidential support for the propositional contents of the attitude (i.e., p).[1] For instance, belief in a proposition best fits one's evidence, and is thus the rational attitude to take according to evidentialism, "[w]hen the evidence better supports [the] proposition than its negation" (Feldman and Conee 2005: 97).[2] Evidentialism is a popular and well-defended position; however, it's recently been argued that misleading higher-order evidence (HOE) – roughly, evidence about one's evidence or about one's cognitive functioning – poses a problem for evidentialism.[3] Take the following case of misleading HOE, which I will call "Flight":

Imagine you are flying a small, propeller-driven aircraft. Midway through your journey, you calculate that you have enough fuel to make it to your destination on the basis of your true beliefs regarding the current fuel level of your aircraft, the distance to your destination, the miles per gallon your aircraft can travel at its current speed, etc. To make this case as strong as possible, let's stipulate that your evidence entails your conclusion. After performing the calculation, your co-pilot – who has the same evidence as you – (incorrectly) asserts that your evidence doesn't support your conclusion; you made a miscalculation, which caused you to adopt a belief the propositional contents of which are unsupported by your evidence. From a long history of working with your co-pilot, you know her to be (dispositionally) a significantly stronger reasoner than yourself. Whenever you've disagreed about which propositions are evidentially supported by a body of evidence, your co-pilot has been right, and you've been in error. However, unbeknownst to you, your co-pilot is sleep deprived and isn't her regular, hyperrational self. This is the first time that a disagreement over evidential support is explained by a reasoning error on your co-pilot's part rather than on yours.
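To see how the stipulated entailment might go, suppose (purely for illustration; these figures are my own assumptions and not part of the original case) that the fuel gauge reads 40 gallons, the aircraft travels 10 miles per gallon at its current speed, and the destination is 300 miles away. Then:

    \[
    \text{range} = 40\ \text{gal} \times 10\ \tfrac{\text{mi}}{\text{gal}} = 400\ \text{mi} > 300\ \text{mi} = \text{distance remaining},
    \]

so the premises jointly entail that you have enough fuel to make it to your destination.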

The misleading testimony from your co-pilot (the HOE) doesn't change the fact that your total evidence still entails – and, thereby, provides very strong evidential support for – the proposition that you have enough fuel to make it to your destination. Entailment is monotonic: the fact that your evidence entails a particular proposition cannot be altered by gaining further evidence. Therefore, according to evidentialism, it is (propositionally) rational to believe that you have enough fuel to make it to your destination. However, your co-pilot's testimony that your evidence doesn't support this proposition, along with your knowledge that she is (dispositionally) a significantly stronger reasoner than you, appears to give you very strong evidential support for <it's not the case that the proposition that you have enough fuel to make it to your destination is supported by your evidence>. In Flight, then, your total evidence appears to support an akratic conjunction, that is, a proposition of the following form:

(Akratic Conjunction) p, but my evidence doesn't support p.

Thus, according to evidentialism, it appears that it is (propositionally) rational to adopt an akratic belief (a belief in an akratic conjunction). However, akratic beliefs appear to be clearly irrational, despite the fact that their propositional contents can (seemingly) be strongly supported by one's evidence in cases of misleading HOE, like Flight.

Although I've framed the discussion thus far in terms of evidentialism, the issue is an instance of a more general problem, which I join Ru Ye (2014) in calling "Fumerton's Puzzle." Fumerton's puzzle affects any theory of rationality that takes some condition(s), c, to be necessary and sufficient for it being the case that a proposition, p, is rational to believe such that the following are true:[4]

(Rational Belief) Belief in p is rational iff p meets c. (Assuming evidentialism, Rational Belief amounts to the claim that believing p is rational iff p is adequately supported by one's evidence.)

(Licensed Failure) It is possible that p and the proposition that p doesn't meet c both meet c. (Assuming evidentialism, Licensed Failure amounts to the claim that cases, like Flight, are possible in which one's evidence supports both p and <p isn't supported by one's evidence>.)

(Anti-akrasia) It's not the case that belief in the proposition <p, yet p does not meet c> is ever rational. (Assuming evidentialism, Anti-akrasia amounts to the claim that akratic beliefs are never rational.)

Rational Belief and Licensed Failure entail that it's possible that it is rational to believe an akratic conjunction, that is, a proposition of the form "p, yet p does not meet c," while Anti-akrasia appears to be the denial of this possibility. Given the structure of the problem, there are two straightforward ways to save our favored account of rationality, whatever that account may be: (i) we can reject Licensed Failure and argue that it's not possible that we occupy an epistemic circumstance in which a proposition, p, and <p doesn't meet c> both meet c. In the context of evidentialism, denying Licensed Failure amounts to arguing that, for example, we can never have sufficient misleading HOE so that our total evidence supports both p and <our evidence doesn't support p>.[5] Alternatively, (ii) we can deny Anti-akrasia, which, in the context of evidentialism, amounts to accepting that, in certain circumstances, akratic beliefs can be rational.[6], [7]
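The structure of the puzzle can be displayed semi-formally (a sketch of my own; "Rat(p)" abbreviates "belief in p is rational," "c(p)" abbreviates "p meets c," and the derivation assumes that rational believability agglomerates over conjunction, an assumption the puzzle's framing takes for granted):

    \[
    \begin{aligned}
    &\text{Rational Belief:} && \mathrm{Rat}(p) \leftrightarrow c(p)\\
    &\text{Licensed Failure:} && \Diamond\big(c(p) \wedge c(\neg c(p))\big)\\
    &\text{Anti-akrasia:} && \neg\Diamond\,\mathrm{Rat}\big(p \wedge \neg c(p)\big)
    \end{aligned}
    \]

From Licensed Failure and Rational Belief, it's possible that Rat(p) and Rat(¬c(p)) hold together; with agglomeration, it's possible that Rat(p ∧ ¬c(p)), which is precisely what Anti-akrasia denies.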

In this paper, I argue for a third solution to Fumerton's puzzle. I defuse the puzzle by suggesting that we read Rational Belief as a claim about propositional rationality and Anti-akrasia as a claim about doxastic rationality (I discuss the propositional/doxastic distinction in the following section). There is no conflict between Rational Belief, Licensed Failure, and Anti-akrasia if Rational Belief and Anti-akrasia invoke two different senses of "rational."[8] The solution is a general one. Insofar as your favored account of rationality allows you to draw a distinction between propositional and doxastic rationality, you will be able to use the solution. Of course, it's beyond the scope of this paper to detail how my solution functions for every plausible account of rationality. I offer a thorough discussion of my solution in the context of evidentialism, as evidentialism is assumed in much of the literature on Fumerton's puzzle. In addition, for ease of discussion, I assume evidence consists of propositions (Dougherty 2011). However, my general solution doesn't hinge on accepting evidentialism or propositionalism about evidence. If you aren't partial to evidentialism or propositionalism about evidence, my discussion may still provide a roadmap for defusing Fumerton's puzzle. The details of the lessons drawn for evidentialism are applicable, mutatis mutandis, to other accounts of rationality as well.

The paper is structured as follows. In Section 1, I discuss the distinction between propositional and doxastic rationality in terms of reasoning and epistemic basing. In addition, I discuss my choice to read Rational Belief as a claim about propositional rationality and Anti-akrasia as a claim about doxastic rationality. In Section 2, I argue that one cannot inferentially base an akratic belief in one's evidence, and, thus, one cannot (doxastically) rationally possess an akratic belief. In Section 3, I distinguish my view from other positions in the extant literature that invoke the propositional/doxastic distinction in the context of Fumerton's puzzle or in similar contexts involving misleading HOE. In Section 4, I address the worry that my solution to Fumerton's puzzle commits the evidentialist to the possibility of epistemic circumstances in which a proposition, p, is propositionally rational to believe (namely, an akratic conjunction), yet one cannot, in principle, (doxastically) rationally believe p.

1. Propositional and doxastic rationality

It's commonly accepted that evidentialist accounts of justification are accounts of propositional, as opposed to doxastic, justification. Similarly, we should accept that an evidentialist account of rationality is a theory of propositional rationality. Although many elide talk of rationality and justification as if the two were the same notion, I do not assume the two to be identical.[9] Nonetheless, I take it that the propositional/doxastic distinction can apply to rationality as well. In the remainder of the paper, I talk of justification and rationality interchangeably for ease of discussion. Treating the two as interchangeable is harmless in the context of my argument.

Roughly, on an evidentialist framework, a proposition is propositionally rational to believe when there is sufficient evidence to warrant believing the proposition, and a belief is doxastically rational when one holds the belief on the basis of that evidence.[10] Propositional rationality is a feature of propositions, whereas doxastic rationality is a feature of beliefs. Traditionally, propositional rationality is taken to be (conceptually/theoretically/metaphysically) primary – one's belief in a proposition, p, is doxastically rational only if (i) p is propositionally rational to believe, and (ii) one epistemically bases one's belief on adequate evidence (Korcz 1997, 2000).[11]

It's also commonly accepted that there are, roughly, two cognitive means of basing a belief, B(p), in one's evidence, depending on the type of evidence one has for p: either (i) one can base B(p) inferentially by inferring B(p) from an antecedent set of attitudes, where the propositional contents of those attitudes constitute one's relevant evidence for p, or (ii) one can base B(p) non-inferentially as a direct response to an experience (or other relevant non-doxastic representational state) that p (Boghossian 2018; Moretti and Piazza 2019). I take it that if one is to properly base an akratic belief in one's evidence, one must do so inferentially. We don't have experiences with propositional contents of the conjunctive form "p, but my evidence doesn't support p" upon which we can directly base an akratic belief. Instead, akratic beliefs need to be inferred (e.g., from beliefs in the conjuncts). Thus, I assume that to properly base an akratic belief in one's evidence, one must do so inferentially.[12]

In the following section I argue that one cannot rationally base an akratic belief in one's evidence in the following sense:

(Thesis) Basing an akratic belief in one's evidence necessitates committing oneself to a contradiction.

In order to establish Thesis, I draw extensively from recent philosophical work on inference and cognitive psychological work on the metacognitive monitoring and control procedures involved in inference. Metacognition is, roughly, "cognition about one's own cognition" (Dokic 2014), and metacognitive monitoring and control are important executive functions that afford us flexibility in regulating our thoughts. I argue that it is not possible to infer an akratic belief without committing oneself to a contradiction. Thus, insofar as akratic beliefs can only be inferentially based, Thesis follows.

As I demonstrate, the mere fact that an agent, S, possesses evidence that strongly supports a proposition, p – like an akratic conjunction – doesn't entail that S can, in principle, do what is constitutive of properly basing a belief in p in S's evidence. Cases of misleading HOE, like Flight, are such that (i) a certain proposition (an akratic conjunction) is propositionally rational to believe, yet (ii) one cannot adopt a (doxastically) rational belief in the proposition. In Section 4, I argue that there are cases outside of those involving misleading HOE where (i) and (ii) hold. There is no theoretical cost to the evidentialist in arguing that cases of misleading HOE are cases in which both (i) and (ii) hold, as the evidentialist is already committed to the joint possibility of (i) and (ii) by other types of cases.

So, given that evidentialism is a theory of propositional rationality, we can consistently accept Rational Belief, Licensed Failure, and Anti-akrasia, if we accept that Anti-akrasia is a claim about doxastic rationality. But why ought we be inclined to read Anti-akrasia as a claim about doxastic rationality? Although there is little extended discussion in the extant literature of why akratic beliefs are (or at least appear to be) irrational, the discussions that do occur often focus on what it would be like for an agent to possess an akratic belief.[13] Sophie Horowitz (2014) and Jessica Brown (2018: Ch. 6), for example, motivate the claim that akratic beliefs are irrational on the basis of the poor reasoning dispositions and irrational actions that possessing akratic beliefs would engender. In his (2015), Clayton Littlejohn asks the reader to imagine a conversation with our epistemic conscience regarding our possession of an akratic belief. As Littlejohn writes, the discovery that we possess an akratic belief "should be the beginning of epistemic self-assessment and revision, not the conclusion of it…The mindset of [a person who knowingly possesses an akratic belief] is opaque" (ibid.: 265). Alexander Worsnip notes that possessing an akratic belief:

amounts to saying "I have nothing that gives any adequate indication to me that p is the case; nevertheless, p is the case"…First-personally, these states do not seem capable of withstanding serious reflection. And third-personally, while we can imagine such agents, in describing and explaining them we reach for some story involving self-deception or a failure to recognize their own mental states. (Worsnip 2018: 17)

Instead of focusing on the reasons one might possess that support an akratic conjunction (propositional rationality), Horowitz, Brown, Littlejohn, and Worsnip draw our attention to the utter peculiarity of a mind that possesses an akratic belief (doxastic rationality). The intuitive pull of Anti-akrasia – the position that akratic beliefs are irrational – is grounded in the aberrant psychology of one who possesses an akratic belief, as opposed to the strength (or lack thereof) of the evidential support that one possesses for the propositional contents of the akratic belief.

In addition, the reason theorists use the term "akratic" to talk about akratic beliefs is the structural similarity between akratic belief and practical akrasia (Greco 2014). Practical akrasia (in one of its forms) is a matter of intending to perform an action (or in fact performing an action) that one believes one ought not perform (Wedgwood 2013). The irrationality of practical akrasia (insofar as we accept that practical akrasia is possible) is not a function of the epistemic and practical reasons one might possess for adopting both (i) a belief about what one ought to do and (ii) an intention to act in a contrary manner. The irrationality of practical akrasia is a function of the conflict between (i) and (ii) as possessed by an agent – that is, the conflict of intending to act in a way that one believes one ought not.

Interpreting Anti-akrasia as a claim about doxastic rationality is not a mere ad hoc assumption used to get my solution to Fumerton's puzzle off the ground. In arguing that one can't properly base an akratic belief in one's evidence – and, thus, that akratic beliefs are doxastically irrational – I offer an account of the aberrant psychology of one who possesses an akratic belief that reflects why we intuitively find akratic beliefs to be irrational and that respects the structural similarity between akratic belief and practical akrasia.

That being said, Declan Smithies (2019: Ch. 9) offers a novel argument for why an akratic conjunction cannot be propositionally rational to believe. Roughly, Smithies argues that belief "aims at knowledge" in the following sense:

Necessarily, you have justification to believe that p only if you have justification to believe that you're in a position to know that p. (ibid.: 306)

Akratic conjunctions, however, are “knowably unknowable,” to use Smithies' turn of phrase. It's easily demonstrated that one cannot know a proposition of the form “p, but my evidence doesn't support p” – if one knows one of the conjuncts, one can't know the other. For instance, if one knows p, then it must be the case that one is justified in believing p (given knowledge requires justification). Assuming evidentialism, one's total evidence must, thereby, support p. So, it's not the case that one's evidence doesn't support p. Because one cannot know a false proposition, one doesn't know that one's evidence doesn't support p. Given that akratic conjunctions are knowably unknowable, and belief aims at knowledge in the above sense, we can't have justification to believe akratic conjunctions.
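The unknowability half of Smithies' argument can be reconstructed as a short derivation (my reconstruction; "K," "J," and "S" abbreviate knowledge, justification, and evidential support, and the steps assume that knowledge distributes over conjunction, requires justification, and is factive, along with the evidentialist link between justification and support):

    \[
    \begin{aligned}
    1.&\ K(p \wedge \neg S(p)) && \text{assumption, for reductio}\\
    2.&\ K(p),\ K(\neg S(p)) && \text{from 1; distribution over conjunction}\\
    3.&\ J(p),\ \text{so}\ S(p) && \text{from 2; knowledge requires justification; evidentialism}\\
    4.&\ \neg S(p) && \text{from 2; factivity of knowledge}\\
    5.&\ S(p) \wedge \neg S(p) && \text{from 3 and 4; contradiction}
    \end{aligned}
    \]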

Although I disagree with Smithies' claim that belief aims at knowledge (at least in the sense that he explicates), engaging with Smithies' arguments would take us too far afield. Instead of further defending the claim that we ought to read Anti-akrasia as a claim about doxastic rationality, I position my argument as an exploration of a possible solution to Fumerton's puzzle – a solution that has the theoretical benefit of retaining many of our intuitions about cases of misleading HOE, like Flight. For the sake of this paper, I assume that akratic conjunctions can be propositionally rational to believe. In other words, I assume that Licensed Failure is true for evidentialism. In cases like Flight, intuitively, your evidence seems to (on balance) support an akratic conjunction. Thus, given evidentialism, an akratic conjunction is propositionally rational to believe. Of course, a defense of this position would require responding to Smithies and, more broadly, advocates of the fixed-point thesis who deny Licensed Failure and argue that one cannot be rationally mistaken about the demands of (propositional) rationality. However, seeing that others have already responded to the fixed-point thesis in the literature (e.g., Field 2019; Skipper 2019a) and many philosophers accept that akratic conjunctions can be propositionally rational to believe (e.g., Coates 2012; Lasonen-Aarnio 2014, 2020; Weatherson 2019), I will not devote space to responding to Smithies or the fixed-point thesis here.

Nonetheless, my view is also able to maintain that there is something clearly irrational about akratic beliefs. On my account, the irrationality of akratic beliefs has nothing to do with evidential support for akratic conjunctions; instead, as I argue, the irrationality has to do with attempts to base an akratic belief in one's evidence. Like Horowitz, Brown, Littlejohn, Worsnip, and others, I explain the irrationality of akratic beliefs in terms of the aberrant psychology of one who possesses an akratic belief. For the sake of space, the bulk of the paper will be devoted to providing a novel defense of the claim that akratic beliefs are doxastically irrational, despite it being possible that akratic conjunctions can be propositionally rational to believe. Thus, on my view:

(1) we don't have to accept the fixed-point thesis, a view that, as Claire Field (2019) notes, even some advocates admit is counterintuitive, yet

(2) we can also accept Anti-akrasia by reading it as a claim about doxastic rationality.

In addition, as I discuss in Section 4, my view comes at little theoretical cost to the evidentialist.

2. Reasoning and rationality

Recently, a cottage industry has formed with the goal of analyzing person-level reasoning and inference.[14] The dominant position in the literature is that reasoning consists of rule-governed operations defined over propositional attitudes (or their contents) (Boghossian 2014, 2019; Broome 2013). In reasoning, one transitions from propositional attitudes to propositional attitudes in virtue of following (as opposed to merely conforming to) a rule, where the rules one follows are (or at least can be modeled as) functions from sets of propositions to further propositions. The structure of the rules reflects the common evidentialist sentiment that rationality is a function of apportioning one's doxastic attitudes to one's evidence. For instance, as Anna-Sara Malmgren claims:

for a proposition, q, to be (good) reason or evidence to believe another proposition, p, q must stand in an appropriate logical – or, more broadly, implication or confirmation – relation to p… A "good (inference) rule," in turn, is just a rule that encodes some such relation…. (Malmgren 2018: 224, emphasis mine)

What matters for our purposes is not the details of a developed account of reasoning but how philosophers have attempted to distinguish reasoning from other state transitions between propositional attitudes. For instance, a psychoanalyst may ask her patient to engage in free association, which can certainly involve transitions between propositional attitudes, and which may provide the grounds for some rather profound insights into the patient's psyche. However, associative transitions are not inferential.

It's my contention that what (at least in part) separates inference from associations and other non-inferential types of transitions between propositional attitudes is the following:

(Commitment) Inference is a commitment-constituting process. More specifically, what distinguishes inference from other state transitions between propositional attitudes is that inferring a belief, B(p), from a set of doxastic attitudes, Γ, constitutively involves the reasoner committing herself to the truth of the claim that the propositional contents of Γ support p.

In Section 2.1, I defend Commitment by arguing for a narrower claim, namely, Paul Boghossian's Taking Condition (which I define in Section 2.1), on which Commitment comes out true. I also argue that Commitment, along with our assumptions about the nature of misleading HOE, entails Thesis. Finally, I argue that Commitment allows propositional and doxastic rationality to come apart in cases of misleading HOE, such that an akratic conjunction can be propositionally rational to believe even though one cannot properly base an akratic belief.

As I discuss in Section 2.2, there are several theorists who reject the Taking Condition but, nonetheless, accept Commitment. Ultimately, what matters for my solution to Fumerton's puzzle is that (i) Commitment is true, and (ii) Commitment, along with our assumptions about the nature of misleading HOE, entails Thesis. Although I have particular views about the nature of inference and what makes Commitment true – views that I defend in Section 2.1 – as long as one accepts Commitment, one can avail oneself of my solution to Fumerton's puzzle.

2.1. Commitment, the Taking Condition, and Thesis

Commitment is reminiscent of a popular, much discussed means of distinguishing inference from other types of attitudinal transitions, namely, Boghossian's Taking Condition:

(Taking Condition) Inferring necessarily involves (i) the thinker taking her premises to support her conclusion and (ii) drawing her conclusion because of (i). (Boghossian 2014)[15]

As I've framed the Taking Condition, it is composed of two claims, namely, that inference necessarily involves

(1) a thinker taking her premises to support her conclusion, where this taking is typically assumed to be a representational state, more specifically, either a belief or an intuition, and

(2) the taking (in part) explaining the fact that the reasoner draws her conclusion.

Although the Taking Condition is not ubiquitously accepted (e.g., McHugh and Way 2016; Wright 2014), it has ample intuitive appeal and successfully demarcates inference. For instance, the impetus for an associative transition in thought is not an agent's recognition of an epistemic support relation but the existence of some context-relevant commonality between the contents of the agent's thoughts.

Theorists explicate the taking relation – that is, the relation one takes there to be between one's premises and conclusion – in different ways. As stated, on Boghossian's account the taking relation involves one's premises "supporting" one's conclusion.[16] According to Markos Valaris (2014, 2016), the taking relation holds when one's conclusion "follows" from one's premises. On Ram Neta's (2013) account, the taking relation holds when one's premises give one justification to believe one's conclusion. Finally, according to Anders Nes (2016), the taking relation requires that one's premises naturally mean one's conclusion in Grice's (1957) sense of "natural meaning." Although I will use Boghossian's terminology of "support," what is important for our purposes is that on all accounts of the taking relation, it can't be the case that the relation holds between one's premises and conclusion and yet one's premises do not evidentially support one's conclusion.

In the following, I defend an interpretation of the Taking Condition on which the taking state constitutes an intuition. Additionally, as I demonstrate, Commitment comes out true on my interpretation, and Commitment, along with our assumptions about the nature of misleading HOE, entails Thesis. However, it should be noted in passing that on a doxastic account of taking, Commitment also comes out true, and it is clearly impossible for one to infer the conjuncts of an akratic belief (or the akratic conjunction itself) without committing oneself to a contradiction. On a doxastic account (e.g., Deutscher 1969; Neta 2013; Valaris 2014, 2017, 2020), taking consists in believing that one's premises support one's conclusion and drawing an inference in virtue of this belief. Recall, in cases of misleading HOE, like Flight, an agent possesses a total body of evidence on which a first-order proposition, p, and a higher-order proposition, <p isn't supported by the agent's evidence>, are both evidentially supported such that both propositions (and, thus, the akratic conjunction of the two) are propositionally rational to believe. If the agent infers p from her evidence, then, according to the doxastic account of taking, the agent must believe that her evidence supports p (reasoning constitutively requires that one adopt this higher-order belief on the doxastic account). However, if the agent also believes the higher-order proposition that p isn't supported by her evidence, then the agent will believe both that p is supported by her evidence and that it's not the case that p is supported by her evidence. Thus, if an agent reasons to an akratic belief in a case of misleading HOE, like Flight, she will end up believing a contradiction. Insofar as beliefs clearly constitute commitments, Commitment is true on the doxastic account of taking, and Thesis straightforwardly follows.

Although many theorists find the doxastic account of taking compelling, there are good reasons to be dubious of the account. If taking is understood as full-fledged belief, it appears that the Taking Condition (i) engenders a familiar Carrollian (Carroll 1895) regress (mustn't the taking belief be reasoned to and, therefore, itself require a meta-level taking belief?) and (ii) over-intellectualizes reasoning (children and at least some non-human animals can reason despite lacking the relevant conceptual competences to formulate beliefs regarding epistemic support). There are good responses to (i) and (ii) in the literature (e.g., Müller 2019; Valaris 2014), but it is not my intent to defend the doxastic account of taking. Instead, I proceed to defend my favored, intuitional account.

Other theorists argue that taking consists of an intuition or intellectual seeming that one's premise attitudes support one's conclusion (e.g., Broome 2013; Chudnoff forthcoming; Dogramaci 2013). Minimally, an intuitional account of taking avoids the Carrollian regress: intuitions aren't the result of inference and, therefore, don't require a meta-level taking intuition. However, intuitions needn't constitute commitments to their representational contents. Thus, it's less clear whether, on an intuitional account, Commitment comes out true. For instance, it's not irrational for it to intuitively seem to one that p (e.g., that one's premises support a particular conclusion) while one adopts the belief that not-p, provided one has sufficient reason to reject the intuition.

As I argue, the inferences we make are guided by our intuitions regarding which propositions (or proposition types) support which. These intuitions constitute commitments to the proposition that one's premise attitudes support one's conclusion in virtue of the guiding role the intuitions play in inference. In order to unpack my claim that intuitions can constitute commitments in virtue of the guiding role they play in inference (and in other cognitive processes, more broadly), I draw from recent work in cognitive psychology on metacognitive monitoring and control, and on meta-reasoning in particular (Ackerman and Thompson 2015, 2017a, 2017b). It's my contention that recent work on metacognition empirically vindicates an intuitional version of the Taking Condition. However, before I proceed to discuss meta-reasoning, I first discuss metacognition in the case of memory search, in which an agent initiates and guides a search of long-term memory. Much of the recent literature on metacognition focuses on mnemonic processing. By first discussing metacognition in the context of memory search, I am more easily able to introduce central concepts in the metacognition literature and explain how intuitions can constitute commitments.

In searching long-term memory for an episodic memory of an event or the semantic memory of a set of facts, a series of metacognitive representations allows us to intelligently guide the search process in terms of initiating, persisting in, and terminating the search in light of the likelihood of successfully retrieving relevant information. These metacognitive representations are instances of what cognitive psychologists call epistemic or noetic feelings (Arango-Muñoz 2014; de Sousa 2009; Dokic 2014). As Arango-Muñoz and Michaelian write, "[f]eelings, in general, are spontaneously-emerging occurrent phenomenal experiences" (Arango-Muñoz and Michaelian 2014). Epistemic feelings, in particular, are feelings with particular types of evaluative content directed at cognitive processes. Although the correct account of epistemic feelings is contentious, we can summarize the dominant account in the following four claims:

(1) Epistemic feelings are intentional states with representational content directed at cognitive processes that constitute evaluations of those processes. For instance, tip-of-the-tongue (TOT) states are commonly experienced epistemic feelings directed at a memory retrieval process (Brown 1991). TOT states represent that (/constitute a seeming that) one knows something while not, presently, being able to access (fully) that knowledge, such that further mnemonic search may well succeed in recalling the information that remains unaccessed.

(2) Epistemic feelings play a crucial role in guiding intellectual activity and are closely linked to agency in thought (de Sousa 2009). For example, TOT states assist in an agent's flexible decision regarding whether to continue to expend cognitive resources on a memory search.

(3) Epistemic feelings are the result of type-1 processes.[17] In other words, epistemic feelings are not the product of controlled deliberation but are generated by automatic processes operating non-consciously. For example, TOT states aren't generated by a conscious, deliberative estimation of the chance of successful recall on the basis of available evidence. Instead, they are generated by automatic processes operating outside of consciousness.

(4) Finally, epistemic feelings have a phenomenology. For instance, there is something it is like to be in a TOT state – for it to seem as if one knows something while not, presently, being able to access (fully) that knowledge.

Although I will talk of epistemic feelings – thus using the terminology of cognitive psychology – theorists who accept an intuitional account of taking, like Sinan Dogramaci and Elijah Chudnoff, would categorize epistemic feelings as intuitions. Chudnoff (2020) even mentions a particular epistemic feeling, the feeling of rightness, by name in a recent discussion of intuition.[18] Epistemic feelings are a particular subtype of intuition, where intuitions are, roughly, sui generis seemings, distinct from perception and occurrent belief (Chudnoff 2013).[19] It should also be noted that epistemic feelings are not some recherché theoretical posit exclusively discussed in cognitive psychology. In fact, several philosophers have recently employed epistemic feelings for a litany of theoretical ends. For instance, Matthew Frise (2018) uses epistemic feelings in a defense of evidentialism. Anna Drożdżowicz (2023) appeals to epistemic feelings in offering an account of the experience of understanding an utterance in a language in which one is fluent. And, finally, Jacques-Henri Vollet (2022) appeals to epistemic feelings in his analysis of epistemic excuses.

So, what types of epistemic feelings play a role in guiding a search of long-term memory? In attempting to recall some event, set of facts, etc., an initial feeling of knowing will occur before any information is consciously accessed from long-term memory.[20] The gradable strength of the feeling of knowing constitutes, for an agent, an assessment of the relative likelihood that a memory search will be successful (Reder 1988). This initial feeling of knowing thus guides an agent's choice to search long-term memory. For instance, in determining the product of two integers, agents will use a feeling of knowing to determine whether they need to explicitly calculate the product using an algorithm like long multiplication, or whether they can just recall the product from a rote-memorized multiplication table, thus forgoing calculation (Paynter et al. 2009). As a search unfolds, feelings of processing fluency, that is, experiences of the demandingness of the cognitive task, are taken by the agent to represent whether further search will (continue to) produce results or whether the search should be terminated. As representations are accessed from long-term memory, they may be accompanied by what Johnson et al. (1993) call a feeling of pastness, which indicates to the agent that the representations are of remembered events or facts as opposed to, for example, merely imagined or unrelated events or facts. For instance, when attempting to recall a previously seen list of words – a commonly used task in cognitive psychological research on memory – the activation of a representation of one word may activate representations of semantically associated words, even if those semantically associated words were not on the originally observed list. Agents may use the accompanying feeling of pastness to determine the source of the activated word representation, for example, whether the represented word was previously observed on the list or whether the word is merely semantically associated with a word on the list. As the search continues and places greater attentional demands on working memory, eventually the gradable feeling of processing fluency will be taken by the agent to indicate that continued search will no longer be successful and ought to be terminated.
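As a rough illustration of this control structure, consider the following toy sketch (all function names and thresholds here are my own illustrative assumptions; nothing in the cited psychological literature specifies such an algorithm):

    # Toy model of a metacognitively controlled memory search.
    # Thresholds and names are illustrative assumptions, not drawn from the cited literature.
    def metacognitive_memory_search(cue, feeling_of_knowing, search_step, max_effort=10):
        # The initial feeling of knowing gates whether the search is worth initiating.
        if feeling_of_knowing(cue) < 0.3:
            return []  # retrieval judged too unlikely to warrant the cognitive effort
        retrieved = []
        for effort in range(max_effort):
            candidate, fluency, feeling_of_pastness = search_step(cue, effort)
            # Declining processing fluency signals that continued search won't pay off.
            if fluency < 0.2:
                break  # terminate the search
            # The feeling of pastness is used to attribute a candidate to memory
            # rather than to mere semantic association or imagination.
            if candidate is not None and feeling_of_pastness > 0.5:
                retrieved.append(candidate)
        return retrieved

The point of the sketch is only that each epistemic feeling enters as a control signal: it initiates, sustains, filters, or terminates the process, which is what grounds the claim that the agent's search is guided by the feelings.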

So, what makes these epistemic feelings commitments? Broadly speaking, the fact that a mental representational state constitutes a commitment to the truth of its content – or a taking to be true – is grounded in how that state (or states of that type) functions in cognition and in guiding behavior. For instance, believing that p constitutes a commitment to the truth of p, whereas imagining that p doesn't. What distinguishes believing that p from imagining that p has nothing to do with the propositional contents (or format of representation) of the representational states. Instead, they differ in the functional role states of the respective types play in cognition and in guiding behavior. We needn't settle on an exact analysis of the functional profile of belief or imagination to recognize that believing p constitutes a commitment to the truth of p, whereas imagining p doesn't. In turn, it's the fact that believing p constitutes a commitment to the truth of p that makes belief the proper subject of theoretical rational evaluation, unlike imagination, which involves no such commitment given its functional profile.

Given how the feeling of knowing, feeling of processing fluency, feeling of pastness, and other metacognitive representations guide memory search, the representations constitute evaluative commitments on the part of the agent. For instance, insofar as an agent uses a feeling of knowing to determine whether to initiate and allocate cognitive effort to a memory search, the agent is committed to it being the case (/takes it to be the case) that the search is worth the cognitive effort, given the likelihood of success. The agent cannot rationally believe that the memory search isn't worth the cognitive effort while simultaneously using a feeling of knowing to determine whether to initiate the search, as the agent would, thereby, commit herself to the contradiction that the memory search is worth the cognitive effort, and it's not the case that the memory search is worth the cognitive effort.

Certain mental process types, like the controlled search of long-term memory, constitutively involve an agent adopting commitments. In other words, what, in part, delineates these process types from other, similar processes are the commitments that constitutively guide the processes. The metacognitive states agents use to guide memory search are what differentiates, say, a memory search in which a set of words is recalled in a controlled manner from mere verbal mind wandering in which the same set of words is tokened in working memory without control being exerted by the agent. In turn, it's these metacognitive states that make memory search a process attributable to an agent as opposed to a mental process that is merely happening to her.

It's my contention that (Commitment) inference is, similarly, a commitment-constituting process. What differentiates genuine inference from association or other types of state transitions between propositional attitudes are the commitments undertaken by the reasoner, where these commitments manifest as metacognitive monitoring states used to flexibly control the reasoning process. In turn, it's these commitments that make reasoning something attributable to an agent, as opposed to a ballistic cognitive process that merely happens to the agent. Although, as previously noted, much of the work on metacognition focuses on mnemonic processes, more recently, Ackerman and Thompson have generated a model of meta-reasoning, that is, of the metacognitive monitoring and control procedures involved in reasoning (Ackerman and Thompson 2015, 2017a, 2017b). On their model, meta-reasoning monitoring processes give rise to feelings of certainty and uncertainty throughout deliberation that constitute assessments of the epistemic quality of the attitudinal transitions the reasoner makes. As Jérôme Dokic puts it, feelings of (un)certainty constitute evaluations of "the non-perceptual method [i.e., inference] we have used to reach [our] conclusion" (Dokic 2014: 136). These feelings of (un)certainty are used to control, for example, the allocation of cognitive effort to various processes, the choice of decision procedure to use when problem solving, and whether the agent takes her conclusion to be correct or decides that further reasoning or solution search is necessary. Feelings of (un)certainty are intuitions about the rational status of our inferential transitions. In turn, given the guiding role that feelings of (un)certainty play in reasoning, they constitute commitments (/takings) on the part of the reasoner.

Epistemic feelings of (un)certainty function to guide inferential transitions just as taking beliefs are supposed to on the doxastic account of taking. In using epistemic feelings of (un)certainty to guide inference, we thus commit ourselves to their content. The irrationality of reasoning to an akratic belief on the intuitional account of taking is grounded in our use of certain epistemic feelings to guide inference. Just as on the doxastic account of taking, if an agent infers an akratic belief, she will commit herself to a contradiction of the form "p is supported by my evidence, and it's not the case that p is supported by my evidence." This commitment may not manifest as an explicit belief of the agent, but it is no less a commitment (in virtue of the functional role epistemic feelings play in thought) and no less irrational.

In sum (a minimal sketch of the resulting commitment bookkeeping follows the argument):

(1) Inferring a belief, B(p), requires committing oneself to the claim that the evidence on the basis of which one infers B(p) supports p. (Commitment, which I've defended in this section.)

(2) In a case of misleading HOE, like Flight, an akratic conjunction – a proposition of the form "p, yet p isn't supported by my evidence" – is propositionally rational to believe. (Assumption defended in Section 1.)

(3) Properly basing an akratic belief requires inferring the belief from one's evidence. (Assumption defended in Section 1.)

(4) If one infers p, one commits oneself to the claim that p is supported by one's evidence. (From (1).)

(5) If one believes what one's evidence supports in a case of misleading HOE, like Flight, one will believe – and thus commit oneself to – the proposition that p isn't supported by one's evidence. (From (2).)

(6) Thus, (Thesis) basing an akratic belief in one's evidence necessitates committing oneself to a contradiction of the form "p is supported by one's evidence, and it's not the case that p is supported by one's evidence." (From (3)–(5).)[21]
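The commitment bookkeeping in (1)–(6) can be made vivid with a minimal sketch (mine, and merely illustrative; representing commitments as a set of tagged propositions is a deliberate simplification of what, in the text, is a matter of functional role):

    # Minimal sketch: inferring a belief adds a support-commitment, while believing
    # the higher-order conjunct adds its negation, yielding a contradictory set.
    def infer(commitments, p):
        # Per Commitment, inferring B(p) commits one to "my evidence supports p."
        commitments.add(("supports", p))

    def believe_hoe_conjunct(commitments, p):
        # Believing the second conjunct commits one to "my evidence doesn't support p."
        commitments.add(("not-supports", p))

    def contradictory(commitments):
        return any(("supports", p) in commitments and ("not-supports", p) in commitments
                   for (_, p) in commitments)

    c = set()
    infer(c, "enough fuel")                 # inferentially basing the first conjunct
    believe_hoe_conjunct(c, "enough fuel")  # believing the second conjunct
    assert contradictory(c)                 # the akratic basing incurs a contradiction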

It's important to note that one commits oneself to the claim that p is supported by one's evidence by inferring B(p). So, in an epistemic circumstance involving misleading HOE, like Flight, in which one's evidence supports an akratic conjunction, one only becomes committed to a contradiction of the form "p is supported by my evidence, and it's not the case that p is supported by my evidence" if one infers an akratic belief. The mere possession of evidence that strongly supports the akratic conjunction doesn't, by itself, commit one to a contradiction – it's the act of inferring the akratic belief that generates the commitment. Therefore, although the akratic conjunction is propositionally rational to believe in virtue of the evidential support for the conjunction, an agent can't properly base an akratic belief without incurring a pair of contradictory commitments. Thus, Rational Belief, Licensed Failure, and Anti-akrasia can all be true, insofar as we accept that Rational Belief is a claim about propositional rationality and Anti-akrasia is a claim about doxastic rationality.

2.2. Rejecting the taking condition

In the previous section, I provided empirical support for the Taking Condition using work on metacognition (and meta-reasoning in particular) to argue that inference involves:

(1) mentally representing that one's premise attitudes support one's conclusion,

(2) where these representations guide the propositional attitude transitions involved in reasoning.

More specifically, I argued that these representations are epistemic feelings that constitute epistemic evaluations of the propositional attitude transitions we make. However, it's important to note that one needn't accept (1) and (2) – or, more broadly, the Taking Condition – in order to accept Commitment and, thus, be eligible for my solution to Fumerton's puzzle.

For instance, Christopher Blake-Turner (2022), Christian Kietzmann (2018), and Eric Marcus (2020) have all recently argued for accounts of inference on which inference constitutively involves representing that one's premise attitudes support one's conclusion in a manner that constitutes a commitment on the part of the reasoner (thus accepting Commitment). However, Blake-Turner, Kietzmann, and Marcus reject (2). In other words, they reject the claim that one's commitment to the proposition that one's premise attitudes support one's conclusion guides the inferential process. Nonetheless, Blake-Turner, Kietzmann, and Marcus could still accept my solution to Fumerton's puzzle. As I demonstrated at the end of the previous section, Thesis follows from Commitment and our assumptions about the nature of misleading HOE. Insofar as Blake-Turner, Kietzmann, and Marcus accept Commitment and our assumptions about the nature of misleading HOE, they also ought to accept Thesis.

Departing even further from the position I advanced in the previous section, McHugh and Way (2015, 2016, 2018a, 2018b) offer a functional account of reasoning on which reasoning is constitutively aim-directed. Although McHugh and Way would reject (1) and (2) – as they reject the claim that inference must involve any mental representation that one's premise attitudes support one's conclusion – they still accept Commitment. For instance, McHugh and Way write:

Theoretical reasoning is guided by the aim of acquiring fitting beliefs. If p does not support q, then reasoning from p to q is not a good way to pursue this aim. So, reasoning from p to q while judging that p does not support q amounts to taking what you acknowledge to be an unreliable means to your end. That looks plainly irrational…this seems enough to give a sense in which reasoning from p to q commits you to thinking that p supports q…. (McHugh and Way 2018b: 191, emphasis mine)

Insofar as McHugh and Way accept Commitment, advocates of McHugh and Way's account of inference can, thus, avail themselves of my solution to Fumerton's puzzle.

It's clearly beyond the scope of this paper to discuss all extant accounts of inference in the philosophical literature. However, as I've demonstrated in this section, there are several accounts that accept Commitment while rejecting the particular view I've offered regarding the nature of inference and what makes Commitment true. Although I've argued for a representational reading of Commitment on which inference constitutively involves epistemic feelings that guide the attitudinal transitions we make, ultimately, what matters for my solution to Fumerton's puzzle is that (i) Commitment is true, and (ii) Commitment, along with our assumptions about misleading HOE, entails Thesis.

3. Comparing my view to others

Paul Silva (2017), Declan Smithies (2022), and Han van Wietmarschen (2013) all discuss the propositional/doxastic distinction in the context of Fumerton's puzzle or in similar contexts involving misleading HOE. In the following, I briefly discuss, in turn, the differences between my position and those offered by Silva, Smithies, and van Wietmarschen. It's beyond the scope of this paper to provide an exhaustive discussion of each view; however, as I make clear, the position I defend is significantly dissimilar to those on offer in the extant literature.

Silva argues for a thesis similar to my own, namely, that the propositional/doxastic distinction is key to resolving Fumerton's puzzle and that, although it can be propositionally rational to believe an akratic conjunction, one cannot doxastically rationally possess an akratic belief. However, Silva assumes (without argument) that a person can properly base an akratic belief in her evidence. Thus, Silva is forced to advocate for the position (recently defended by Turri 2010) that epistemic basing is not what distinguishes doxastic and propositional justification. Silva argues for the following necessary condition on doxastic justification:

S's doxastic attitude, D(p), is doxastically justified only if S lacks undefeated propositional justification to believe that S's total evidence does not support taking D(p)

to secure the claim that S cannot be doxastically justified in holding an akratic belief. However, as I've demonstrated, (pace Silva) one cannot rationally base an akratic belief, in virtue of the fact that akratic beliefs need to be inferentially based and inference is a commitment-constituting process. Inferring an akratic belief would commit one to a contradiction. The onus is on Silva to argue that an akratic belief can be rationally based. There is no need to appeal to an additional necessary condition on doxastic justification, like Silva's, to secure the result that akratic beliefs cannot be doxastically justified.

Similarly, Smithies argues that cases of misleading HOE, like Flight, are such that there is a proposition that you are propositionally justified in believing, yet you cannot hold a doxastically justified belief in the proposition. However, as previously mentioned, Smithies is an advocate of (a version of) the fixed-point thesis; thus, he doesn't allow that it is ever rational to be mistaken about the demands of (propositional) rationality. According to Smithies, in Flight your evidence would support the proposition <you have enough fuel to make it to your destination, and your total evidence supports the proposition that you have enough fuel to make it to your destination>, but you can't doxastically rationally believe the proposition or either of its conjuncts. In order to secure this result, Smithies argues for a condition on doxastic justification according to which a belief is properly based "only if it manifests a more general disposition to believe what the evidence supports" (ibid.: 110). Thus, a necessary condition on properly basing a belief on one's evidence is that one's belief manifests a general sensitivity to the evidence. In other words, in nearby worlds where the evidence is relevantly different, one's belief would be relevantly different.

According to Smithies, the issue with, say, maintaining your first-order belief that you have enough fuel to make it to your destination in Flight in the face of the testimony from your co-pilot is that – for non-ideal agents like us – maintaining the first-order belief would constitute manifesting the disposition to dogmatically maintain beliefs despite HOE that those beliefs are the result of poor reasoning. That disposition would (given Smithies' characterization of the disposition as "dogmatic") result in you maintaining beliefs unsupported by your evidence in certain nearby worlds. Which nearby worlds? Bad case worlds, that is, worlds in which the HOE isn't misleading and, thus, you haven't respected your first-order evidence. For instance, given the reasoning acumen of your co-pilot, it could easily happen, in a nearby possible world, that you make a mathematical error, and your co-pilot correctly points out the error (this would be a bad case world). So, if in Flight (the good case) you are disposed to remain steadfast in the face of the evidence from your co-pilot, then (according to Smithies) you would be equally disposed to ignore your co-pilot and stick to your guns in a nearby bad case world in which you've made a routine mathematical error and your co-pilot is correct in her assessment of which propositions your evidence supports. According to Smithies, in both the good case and bad case worlds you manifest the same disposition to dogmatically retain your beliefs despite HOE of your reasoning failure. Thus, for non-ideal agents like us, cases like Flight are such that a certain proposition is propositionally rational to believe (e.g., that we have enough fuel to make it to our destination, and that our evidence supports this) yet we cannot doxastically rationally believe the proposition, because doing so would be to manifest a dogmatic disposition such that we wouldn't be properly sensitive to shifts in our evidence in nearby worlds.

The qualification “for non-ideal agents” is important for Smithies. It's not in principle impossible to doxastically rationally believe <you have enough fuel to make it to your destination, and your total evidence supports the proposition that you have enough fuel to make it to your destination> in Flight; it's just impossible for non-ideal agents. In fact, according to Smithies, “[b]ecause ideally rational agents are perfectly sensitive to what their evidence supports, they can remain steadfast in good cases without thereby manifesting any disposition to remain steadfast in bad cases where their reasoning dispositions are held constant” (ibid.: 112, emphasis mine). However, this is an odd remark by Smithies. If we hold ideally rational agents' reasoning dispositions fixed – where these dispositions are characterized as “perfectly sensitive to what their evidence supports” – then our modal assessment of how sensitive ideally rational agents' beliefs are to shifts in evidence won't include any bad case worlds. Bad cases are, by stipulation, worlds in which one isn't perfectly sensitive to what one's evidence supports. Trivially, ideally rational agents will never manifest a disposition to remain dogmatically steadfast in nearby possible worlds in which we hold fixed their perfect evidential sensitivity.

It should be clear that Smithies' result crucially depends on how we characterize the dispositions of ideal and non-ideal agents and, thus, what we hold fixed in examining the evidential sensitivity of agents' beliefs in nearby worlds. Different characterizations of the relevant dispositions for both ideal and non-ideal agents would yield different results. Regardless, it should be obvious that Smithies' position is distinct from my own. I make no appeal to dispositions, sensitivity to shifts in evidence, etc. Again, my argument solely depends on what constitutively distinguishes inference from other types of transitions between propositional attitudes.

Finally, in his (2013), van Wietmarschen discusses the distinction between propositional and doxastic rationality in the context of assessing conciliatory views of peer disagreement on an evidentialist framework. Although van Wietmarschen is specifically focused on peer disagreement, his remarks could be generalized to other types of HOE. Van Wietmarschen concludes that conciliatory views are false when understood to be claims about propositional rationality but true when understood to be claims about doxastic rationality. In order to establish this result, van Wietmarschen invokes the following claim about doxastically rational belief:

for S's belief that p to be [inferentially] well-grounded in S's evidence E [i.e., doxastically rational]: the argument on the basis of which S in fact believes p is or resembles a good argument from E to p. (ibid.: 415)

where a good argument for p given E is an argument that S would find convincing on ideal reflection.

In arguing for his position, van Wietmarschen discusses a case adapted from David Christensen (2007) in which you are out to lunch with a friend whom you rationally believe to be just as mathematically competent as yourself (and, thus, your peer when it comes to mathematical matters). You and your friend agree to split the $46.00 lunch bill evenly and tip 20 percent. You both calculate your respective shares in your heads. You rightly conclude that your shares are $27.60 each while your friend claims that the shares are $27.10. According to van Wietmarschen, your disagreement with your friend presents you with a potential defeater. Responding to this potential defeater would require demonstrating that the best explanation for your disagreement is that your friend made a mistake while you reasoned correctly from the first-order evidence. However, given the disagreement, a good argument for your conclusion “can no longer simply be a calculation from E to the conclusion that your shares are $27.60; a good argument must also respond to [your friend's] disagreement as a potential defeater” (ibid.). In addition, van Wietmarschen invokes the following independence principle, also adapted from Christensen:

when we determine what a subject is justified in believing about the explanation of his or her disagreement with S about p, we should bracket the subject's original reasoning about p. (ibid.: 416)

Therefore, given the disagreement with your friend and the above independence principle, you are no longer doxastically rational in believing that your lunch shares are each $27.60. You lack a good argument for the claim that your lunch shares are each $27.60 and, thus, a belief in this claim wouldn't be well-grounded. So, although you are propositionally rational in believing that your lunch shares are each $27.60 (this proposition is entailed by your evidence, properly construed), you aren't doxastically rational in believing the proposition.
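For concreteness, the calculation your shared evidence, E, entails runs:

\[
\text{each share} \;=\; \frac{\$46.00 \times 1.20}{2} \;=\; \frac{\$55.20}{2} \;=\; \$27.60
\]

so your figure, not your friend's $27.10, is the one E supports. Van Wietmarschen's point is that, once the disagreement arises, this calculation by itself no longer constitutes a good argument for the conclusion.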

Again, I don't have the space to engage with van Wietmarschen's arguments, but it should be clear how they differ from my own. I don't invoke an independence principle or any claims about when our original reasoning ought to be bracketed in the face of HOE. More broadly, the positions of Silva, Smithies, and van Wietmarschen all invoke additional claims about what is required for proper basing: there must be no undefeated HOE (in Silva's case), one's belief must manifest a dispositional sensitivity to shifts in evidence (in Smithies' case), or one's reasoning must satisfy a Christensen-style independence principle (in van Wietmarschen's case). My strategy is different. Instead of discussing what's required for proper basing in general, I shift our attention to the nature of inference. By invoking (what I take to be) a very plausible claim about what constitutively separates inference from other types of transitions between propositional attitudes, I am able to defuse Fumerton's puzzle.

4. Propositional rationality does not entail doxastic rationality

Accepting my solution to Fumerton's puzzle commits the evidentialist to the possibility of epistemic circumstances (e.g., circumstances, like Flight, in which one gains misleading HOE) in which a proposition, p, is propositionally rational to believe, but it's not possible that one (doxastically) rationally believe p without committing oneself to a contradiction. It might be objected that if a proposition is propositionally rational to believe, it must be possible for one to rationally believe the proposition. In other words, propositional rationality ought to entail the possibility of doxastic rationality.Footnote 22

As I demonstrate in Section 4.1, cases of misleading HOE are not the only epistemic circumstances in which (on an evidentialist framework) a proposition is propositionally rational to believe, yet one won't be able to rationally (doxastically) believe the proposition without enmeshing oneself in some further form of irrationality. Epistemic circumstances involving finkish evidence – to borrow an expression from Smithies (2016, 2019) – are cases involving purely first-order evidence in which propositional rationality does not entail the possibility of doxastic rationality. Regardless of how we handle cases of misleading HOE, the evidentialist is committed to accepting that propositional rationality does not entail the possibility of doxastic rationality. Thus, the evidentialist does not incur an additional theoretical cost by accepting my solution to Fumerton's puzzle.

4.1. Anti-expertise, finkish evidence, and finkish epistemic circumstances

One's evidence is finkish if “it is destroyed or undermined in the process of attempting to form a doxastically rational belief that is properly based on the evidence” (Smithies 2016: 205). There are several cases of finkish evidence discussed in the literature, but cases of anti-expertise are a particularly stark example. In a case of anti-expertise, one gains compelling evidence that one is an anti-expert with respect to some proposition (or class of propositions), p, where an anti-expert, S, with respect to p is one for whom the following holds:

p iff it's not the case that S believes (or judges that) p.

Take the following oft-cited case of anti-expertise from Earl Conee (1982) (I've altered the case in several non-essential ways for ease of discussion):

After repeated and flawless trials using the best in brain-scanning technology with a massive and diverse sample of people, a thirtieth-century brain physiologist, Dave, discovers that a person's N-fibers fire iff it's not the case that the person believes they are all firing. Dave begins to wonder about the following proposition: (q) All of Dave's N-fibers are firing.

Given Dave knows that a person's N-fibers fire iff it's not the case that the person believes they are all firing, Dave knows the following:

(1) If Dave believes q is false, q is true.

(2) If Dave believes q is true, q is false.

(3) If Dave refrains from judgment or holds no doxastic attitude with respect to q, q is true.

Assuming Dave has access to his propositional attitudes about N-fibers, there will be a proposition that is propositionally rational for Dave to believe, given his evidence, but that Dave cannot (doxastically) rationally believe. For example, if Dave has access to the fact that he believes that all of his N-fibers are firing, then Dave's evidence strongly supports the proposition that it's not the case that all of Dave's N-fibers are firing. Thus, the proposition that it's not the case that all of Dave's N-fibers are firing is propositionally rational to believe. But Dave cannot rationally believe the proposition in virtue of the fact that his evidence is finkish. Once Dave believes that it's not the case that all of his N-fibers are firing, his evidence will support the proposition that all of his N-fibers are firing.Footnote 23
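The instability can be displayed compactly (a sketch in my notation; $B_D(\cdot)$ abbreviates “Dave believes that…,” and the coarse-grained framework of note 2 is assumed, so believing $\neg q$ excludes believing $q$):

\[
\begin{array}{ll}
\text{Known:} & q \leftrightarrow \neg B_D(q)\\[2pt]
\text{If } B_D(q)\text{:} & \neg q\text{; given introspective access to } B_D(q)\text{, Dave's evidence supports } \neg q.\\[2pt]
\text{If Dave then forms } B_D(\neg q)\text{:} & \neg B_D(q)\text{, so } q\text{; his evidence now supports } q.
\end{array}
\]

Whichever belief about q Dave adopts, the adoption itself generates evidence against its content.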

As cases of anti-expertise (like Dave's) demonstrate, propositional rationality does not entail the possibility of doxastic rationality, at least on an evidentialist framework. Cases of misleading HOE are thus not unique: cases involving finkish evidence also require the evidentialist to accept that propositional rationality does not entail the possibility of doxastic rationality. Although cases of misleading HOE are not cases of finkish evidence, they are, more broadly, what I will call finkish epistemic circumstances. Let an epistemic circumstance be the total evidence and set of commitments an agent possesses at a time. An epistemic circumstance, c, is finkish in my sense insofar as

(Finkish Epistemic Circumstance) at least one proposition, p, is such that p is propositionally rational to believe in c, but attempting to form a doxastically rational belief in p would shift c – either by shifting one's evidence or commitments – in a manner that would make a belief in p irrational.

Dave's case counts as a finkish epistemic circumstance in virtue of the fact that attempting to form a doxastically rational belief about his N-fibers would relevantly shift his epistemic circumstance by shifting his evidence. Cases of misleading HOE also count as finkish, in my sense, in virtue of the fact that attempting to form a doxastically rational belief in an akratic conjunction would relevantly shift one's epistemic circumstances by shifting one's commitments. Attempting to form a doxastically rational belief in a proposition of the form “p, but my evidence doesn't support p” would involve undertaking a commitment to the truth of the proposition that one's evidence does support p. The undertaking of this commitment would shift one's epistemic circumstances such that one could not (doxastically) rationally believe the akratic conjunction without being committed to a contradiction.
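Stated compactly, with $\mathrm{PR}(p, c)$ abbreviating “p is propositionally rational to believe in circumstance c” (my shorthand for the definition above):

\[
c \text{ is finkish} \;\text{iff}\; \exists p\, \Big[\, \mathrm{PR}(p,c) \;\wedge\; \text{attempting to base } B(p) \text{ shifts } c \text{ to some } c' \text{ with } \neg \mathrm{PR}(p,c') \,\Big]
\]

Dave's case instantiates the schema through a shift in evidence; Flight instantiates it through a shift in commitments.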

What allows for the possibility of finkish epistemic circumstances on an evidentialist framework is the fact that the conditions on possessing a total body of evidence that strongly supports some proposition, p, (thus making p propositionally rational to believe) don't guarantee that one can engage in the cognitive activity constitutive of rationally reasoning to or properly basing a belief in p (thus making one's belief in p doxastically rational). In other words, it's not built into the conditions on possessing strong evidence for p that one be able to engage in the constitutive cognitive activity required to rationally reason to or base a belief in p. Dave meets the conditions for possessing very strong evidence for the proposition that it's not the case that all of Dave's N-fibers are firing, but the fact that Dave meets these conditions clearly doesn't entail that he can do what is constitutively required to adopt a doxastically rational belief in the proposition. Similarly, in Flight you meet the conditions for possessing very strong evidence for an akratic conjunction, but the fact that you meet these conditions doesn't entail that you can do what is constitutively required to adopt a doxastically rational akratic belief.

If we want our theory of rationality to make it the case that the (propositional) rationality of believing a proposition, p, entails the possibility of (doxastically) rationally believing p, then we need a theory on which the conditions for propositional rationality entail that the conditions for rationally believing p can be met. Evidentialism just isn't such a theory (Munroe 2023). The objection that propositional rationality ought to entail the possibility of doxastic rationality is an objection to the overarching evidentialist framework that we've assumed for discussion, as opposed to a pointed objection to my solution to Fumerton's puzzle.

5. Conclusion

To take stock: I've argued that, given we accept that,

(Rational Belief) it is rational to adopt a belief in a proposition, p, iff p meets some condition(s) c

we can also accept the following:

(Licensed Failure) It is possible that p and the proposition <p doesn't meet c> both meet c.

(Anti-akrasia) It's not the case that belief in the proposition <p, yet p does not meet c> is ever rational

as long as we interpret Rational Belief as a claim about propositional rationality and Anti-akrasia as a claim about doxastic rationality. Fumerton's puzzle is defused with the appropriate understanding of Rational Belief and Anti-akrasia.
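Displayed schematically, with $\mathrm{PR}$ and $\mathrm{DR}$ for propositional and doxastic rationality and $c(p)$ for “p meets condition(s) c” (a compact gloss of the three claims above):

\[
\begin{array}{ll}
\text{(Rational Belief)} & \mathrm{PR}\big(B(p)\big) \leftrightarrow c(p)\\[2pt]
\text{(Licensed Failure)} & \Diamond\big(c(p) \wedge c(\neg c(p))\big)\\[2pt]
\text{(Anti-akrasia)} & \neg\Diamond\,\mathrm{DR}\big(B(p \wedge \neg c(p))\big)
\end{array}
\]

Since Rational Belief is read in terms of $\mathrm{PR}$ and Anti-akrasia in terms of $\mathrm{DR}$, no single notion of rationality is asked to deliver both verdicts, and the three claims are jointly satisfiable.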

One might worry that my solution still involves a conflict of rational injunctions, as it allows that there are epistemic circumstances in which one won't be able to adopt the doxastic attitudes required from the standpoint of propositional rationality while also doing what is required for doxastic rationality. Of course, this worry assumes a deontological conception of (propositional and doxastic) rationality on which rationality is not merely an epistemic evaluative notion but also involves a set of epistemic norms for governing one's attitudes. If we take rationality to be a purely evaluative notion, there will be no conflict of injunctions.Footnote 24 But even assuming a deontological conception of rationality, doxastic and propositional rationality remain two different epistemic notions. For example, under a deontological reading of evidentialism, propositional rationality deals with the doxastic attitudes one ought to have given one's evidence, whereas doxastic rationality governs how one ought to hold these attitudes (e.g., one ought to base one's attitudes in one's evidence). There is nothing untoward about conflicts between different types of injunctions. Analogously, in the moral domain it's not uncommon for philosophers to argue that there are objective and subjective senses of the moral “ought” (Dorsey 2012; Olsen 2017). The objective-ought deals with what one morally ought to do given the normative and non-normative facts, whereas the subjective-ought deals with what one morally ought to do given one's evidence. For example, assuming a simple act utilitarianism is true, there may be circumstances in which one's evidence strongly – but misleadingly – suggests that performing some action, a, will maximize utility. However, as a matter of fact, a-ing won't maximize utility whereas performing some other action, b, will. In this scenario, one objectively ought to b, as b-ing will in fact maximize utility, but one subjectively ought to a, as a-ing is the action that one's evidence suggests will maximize utility. There is no intra-level conflict of injunctions when what one morally objectively ought to do conflicts with what one morally subjectively ought to do. Similarly, there is no worrisome intra-level conflict of requirements in the case of conflicts between propositional and doxastic rationality.
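A toy version of the moral analogue, with utility values that are entirely hypothetical and chosen only for exposition: suppose that in fact $U(a) = 1$ and $U(b) = 10$, while one's (misleading) evidence indicates that $U(a) = 10$ and $U(b) = 1$. Then:

\[
\underbrace{U(b) > U(a)}_{\text{so one objectively ought to } b} \qquad\qquad \underbrace{\text{evidence indicates } U(a) > U(b)}_{\text{so one subjectively ought to } a}
\]

The two verdicts answer different questions, so their divergence involves no single requirement issuing inconsistent demands.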

One might want to know what one all-things-considered rationally ought to believe in cases of misleading HOE, which would require an account of how we resolve conflicts between propositional and doxastic rationality. However, discussions of all-things-considered oughts are beyond the scope of this paper. It should be sufficient to note that conflicts between different types of requirements aren't unique to the epistemic domain. Not only are there other domains, for example, the moral domain, in which philosophers posit conflicting types of oughts, but there are also conflicts of requirements across domains. What one is legally required to do may very well conflict with prudential, moral, or aesthetic norms. There is nothing unique to the epistemic domain that would require there to be no conflicts between propositional and doxastic rationality.

I've demonstrated in detail how my solution functions under an evidentialist account of rationality in which it is rational to adopt a belief in a proposition p iff p is adequately supported by one's evidence. Misleading HOE raises no problems for an evidentialist account of propositional rationality. The mere fact that one possesses sufficient evidence to believe a proposition doesn't entail that one can do what is necessary to reason to or properly base a belief in the proposition in one's evidence, without engendering some further form of irrationality. As argued in Section 4, cases of misleading HOE are not the only types of cases that force evidentialists to accept that propositional rationality does not entail the possibility of doxastic rationality. There are no new problems raised by misleading HOE that weren't already present in cases involving purely first-order evidence.

Footnotes

1 This definition is adapted from Feldman and Conee (1985). See the Afterword in Conee and Feldman (2004) for further discussion.

2 For ease of discussion, I assume a coarse-grained framework of doxastic attitudes on which there are three possible attitudes one might take toward a given proposition: belief, disbelief, or the suspension of judgment.

3 See Christensen (2010) and Feldman (2005) for informative discussions regarding the nature of HOE. However, there appear to be several distinct uses of the term “higher-order evidence” in the literature. As is customary, I use an example to help introduce HOE and the puzzle it presents.

4 As Ye indicates, the puzzle was first discussed in Fumerton (1990) and later given the name “Fumerton's puzzle” in Foley's (1990) eponymous article. There are several formulations of the puzzle in the extant literature (see Lasonen-Aarnio 2014; Skipper and Steglich-Petersen 2019; Worsnip 2018; Ye 2014), but an extended discussion of the structure of the puzzle is orthogonal to my concerns. In addition, although I discuss Fumerton's puzzle in terms of belief, the puzzle can be generalized further to cover other types of doxastic attitudes.

5 There are several ways to argue for (i). For example, according to the fixed-point thesis (e.g., Smithies 2019; Titelbaum 2015, 2018), we always have sufficient evidence to determine the demands of (propositional) rationality. Therefore, we will never have sufficient evidence to believe that a proposition isn't supported by our evidence when in fact it is. (However, it should be noted that, according to Smithies, it's still possible that we gain misleading HOE that our beliefs are not properly based and, thus, are not doxastically rational. I discuss Smithies' view further in Section 3.) Alternatively, one could argue that misleading HOE defeats our relevant first-order beliefs on which the HOE bears so that they are no longer rational. In the case of Flight, this would amount to claiming that the testimony from your co-pilot defeats your first-order belief that you have sufficient fuel to make it to your destination. See Field (2019), Skipper (2019a, 2019b), and Whiting (2020) for further discussion.

6 Those who deny Anti-akrasia are colloquially known as “level-splitters,” as they deny that what one believes at a higher-order level about what one's evidence supports, what it is rational to believe, etc., affects the rational status of one's first-order beliefs, and vice versa. Level-splitters include Allen Coates (2012), Maria Lasonen-Aarnio (2014), Brian Weatherson (2019, m.s.), and, arguably, Foley (1990).

7 There are additional responses to Fumerton's puzzle in the literature that involve embracing Rational Belief, Licensed Failure, and Anti-akrasia. David Christensen (2010, 2013), for example, argues that believing in accordance with one's evidence and avoiding certain incoherent combinations of attitudes (like akratic beliefs) are rational ideals as opposed to rational injunctions that one is obligated to meet in one's doxastic practices. Cases of misleading HOE are situations in which one cannot typify both ideals of evidential responsiveness and internal coherence. Alexander Worsnip (2018) also draws a distinction between evidential and coherence requirements, which Worsnip argues cannot be jointly met in certain cases of misleading HOE. It is beyond the scope of this paper to discuss how my solution to Fumerton's puzzle differs from those offered by Christensen and Worsnip. Similarly, it's beyond the scope of this paper to discuss solutions that utilize a graded framework of partial belief or credence (e.g., Henderson 2022).

8 But isn't there still a conflict between rational injunctions in terms of what one propositionally-rationally ought to believe and what one doxastically-rationally ought to believe? Yes – at least under a deontological understanding of rationality in which rationality is in the business of issuing rules or requirements – but this conflict isn't troublesome. I discuss the issue further in the conclusion.

9 See Sylvan (2014) for a means of driving a wedge between rationality and justification.

10 For the sake of brevity, I will drop the qualifier “under an evidentialist framework.” However, it should be kept in mind that I am merely assuming evidentialism as a means of demonstrating how my more general solution to Fumerton's puzzle works for a particular account of rationality.

11 Some theorists have recently challenged this traditional characterization by arguing that doxastic rationality ought to be taken to be (conceptually/theoretically/metaphysically) primary (see Silva 2015; Turri 2010; Vahid 2016). Although I don't find these arguments convincing, it's beyond the scope of this paper to engage with these challenges. In addition, not all accounts of doxastic rationality give pride of place to epistemic basing or reasoning (at least as dominantly conceived) (cf. Kornblith 2015). It's possible to excise talk of reasons from epistemology (Silva and Oliveira forthcoming) and analyze doxastic justification in terms of the reliability of an agent's attitude formation and revision procedures. However, because I've assumed an evidentialist framework, we focus on epistemic basing.

12 The distinction between inferential and non-inferential means of basing mirrors the distinction between inferential and non-inferential justification (Pryor 2003). A full discussion of different means of basing is beyond the scope of this paper. However, it should be noted that there may be certain cognitive basing processes that don't cleanly fit into the inferential/non-inferential dichotomy as I've characterized it. For instance, on simple monitoring accounts of introspection it may be the case that the higher-order belief that we possess some first-order belief, B(p), is non-inferentially based on B(p) but not in virtue of any type of (quasi)perceptual experience of B(p). There may simply be a monitoring mechanism that takes B(p) as input and (non-inferentially) outputs B(B(p)) into the “belief box” of an agent (Nichols and Stich 2003). A more nuanced discussion of basing isn't relevant for our concerns. Basing an akratic belief will require inferring the attitude and, thus, inferentially basing the attitude in one's evidence. Nonetheless, in note 21, I discuss the possibility of a non-inferential, non-perceptual means of basing an akratic belief.

13 Paul Silva (2017) makes a similar observation.

14 There is a philosophical tradition of distinguishing between reasoning and inference (e.g., Brown 1955; Ryle 1949; Welsh 1957), but contemporary work rarely distinguishes between the two (although Quilty-Dunn and Mandelbaum (2018) do make the distinction). I follow suit and use “inference” and “reasoning” interchangeably.

15 There is ample historical precedent for the taking condition. As Boghossian (2014) notes, Frege claims, “[t]o make a judgment because we are cognizant of other truths as providing a justification for it is known as inferring” (1979: 3). In addition, Max Deutscher (1969) claims that to infer some proposition, p, from a proposition, q, one must believe that q makes a belief in p reasonable. Although Deutscher talks of belief as opposed to the less committal “taking,” he is clearly committed to the taking condition. Similarly, Judith Jarvis Thomson argues, “your conclusion has only been reasoned to from your premises together with your supposition (true or false) that your stated premise is reason for your conclusion” (1965: 298).

16 Of course, one can still count as inferring a proposition, p, from a proposition, q, even if q doesn't actually support p. In order for one to reason from q to p, one merely needs to take q to support p.

17 See Evans (2018) and Evans and Stanovich (2013) for a discussion of the type-1/type-2 distinction.

18 The feeling of rightness is an epistemic feeling that guides an agent's choice to accept an initial judgment or to exert additional cognitive effort in solution search (Thompson and Morsanyi 2012).

19 A full discussion of the nature of intuition is beyond the scope of this paper (see DePaul and Ramsey 1998). However, it should be clear that those who advocate for an intuitional account of taking – as contrasted with a doxastic account – accept that intuitions are sui generis seemings, distinct from occurrent belief. If we accepted a doxastic account of intuition on which intuitions are doxastic attitudes (or dispositions to accept certain doxastic attitudes, e.g., Van Inwagen 1997) then the distinction between the intuitional and doxastic accounts of taking would collapse. Thus, insofar as Dogramaci and Chudnoff accept an intuitional account of taking, they clearly accept the sui generis seeming account of intuitions.

20 Paynter et al. (2009), for example, estimate that the feeling of knowing occurs in a time window of 300–500 milliseconds, whereas the retrieval of an item from long-term memory takes longer.

21 As I've noted, my argument depends on (3) – that is, on it being the case that akratic beliefs must be based inferentially. One might object that there may be some non-inferential, non-perceptual cognitive means of basing an akratic belief that isn't a commitment-constituting process. Thus, there may be some cognitive means of basing an akratic belief without committing oneself to a contradiction. However, the onus would be on the objector to provide an account of what this cognitive process is. In addition, even if one could generate an account of this non-inferential, non-perceptual means of basing an akratic belief, I would argue that an analogue of commitment is true for basing. In other words, basing constitutively involves an agent committing herself to the truth of the claim that the contents of the background attitudes on which she bases her belief, B(p), support p. It's beyond the scope of this paper to engage in a full discussion of basing, but it should be noted that there are a host of accounts of basing on which this analogue of commitment clearly comes out true. For instance, a collection of theorists advocate for a doxastic account of basing on which it is either a necessary condition (Audi 1986; Longino 1978; Marcus 2012; Ye 2019) or a jointly necessary and sufficient condition (Tolliver 1982) for an agent, S, to base a belief, B(p), on a set of beliefs, Γ, that S believe that the propositional contents of Γ support p.

22 On at least some glosses of the propositional/doxastic distinction, it appears to be the case that propositional rationality entails the possibility of doxastic rationality. For example, Turri (2010: 312) characterizes the propositional/doxastic distinction as the distinction between being in a position to justifiedly believe and justifiedly believing.

23 Roy Sorensen (1987) and Andy Egan and Adam Elga (2005) argue that we will never be in an epistemic situation in which we have sufficient evidence that we are an anti-expert. We can see Sorensen's and Egan and Elga's position as an analogue of the fixed-point thesis in the context of anti-expertise; one's evidence will never support that one is an anti-expert (/an akratic conjunction); thus, it is never propositionally rational to believe the problematic proposition. Although I lack the space to argue the point here, I agree with Reed Richter (1990) that the style of argument offered by Sorensen and by Egan and Elga fails.

24 Evidentialism is formulated both as (i) an account of the doxastic attitudes one ought to adopt and (ii) an analysis of epistemic justification/rationality. Advocates of evidentialism frequently elide the deontological and conceptual-analysis formulations, although it is more common to talk of evidentialism as an analysis. As Luis Oliveira writes, “[m]uch of the literature in defense of evidentialism states it as an account of epistemic justification. In such cases, it is often unclear which kind of normative claim evidentialism is intended to be and…whether and how it is related to [a deontological formulation of evidentialism]” (2017: 486–87).

References

Ackerman, R. and Thompson, V. A. (2015). ‘Meta-Reasoning.’ In Feeney, A. and Thompson, V. A. (eds), Reasoning as Memory, pp. 164–82. New York: Psychology Press.
Ackerman, R. and Thompson, V. A. (2017a). ‘Meta-Reasoning: Monitoring and Control of Thinking and Reasoning.’ Trends in Cognitive Sciences 21(8), 607–17.
Ackerman, R. and Thompson, V. A. (2017b). ‘Meta-Reasoning: Shedding Meta-Cognitive Light on Reasoning Research.’ In Ball, L. J. and Thompson, V. A. (eds), The Routledge International Handbook of Thinking and Reasoning, pp. 1–15. New York: Psychology Press.
Arango-Muñoz, S. (2014). ‘The Nature of Epistemic Feelings.’ Philosophical Psychology 27(2), 193–211.
Arango-Muñoz, S. and Michaelian, K. (2014). ‘Epistemic Feelings, Epistemic Emotions: Review and Introduction to the Focus Section.’ Philosophical Inquiries 2(1), 97–122.
Audi, R. (1986). ‘Belief, Reason, and Inference.’ Philosophical Topics 14(1), 27–65.
Blake-Turner, C. (2022). ‘The Hereby-Commit Account of Inference.’ Australasian Journal of Philosophy 100(1), 86–101.
Boghossian, P. (2014). ‘What is Inference?’ Philosophical Studies 169(1), 1–18.
Boghossian, P. (2018). ‘Delimiting the Boundaries of Inference.’ Philosophical Issues 28(1), 55–69.
Boghossian, P. (2019). ‘Inference, Agency and Responsibility.’ In Jackson, M. B. and Jackson, B. B. (eds), Reasoning: New Essays on Theoretical and Practical Thinking, pp. 101–24. Oxford: Oxford University Press.
Broome, J. (2013). Rationality through Reasoning. Malden, MA: Wiley-Blackwell.
Brown, D. (1955). ‘The Nature of Inference.’ The Philosophical Review 64(3), 351–69.
Brown, A. S. (1991). ‘A Review of the Tip-of-the-Tongue Experience.’ Psychological Bulletin 109(2), 204.
Brown, J. A. (2018). Fallibilism: Evidence and Knowledge. Oxford: Oxford University Press.
Carroll, L. (1895). ‘What the Tortoise Said to Achilles.’ Mind 4(14), 278–80.
Christensen, D. (2007). ‘Epistemology of Disagreement: The Good News.’ Philosophical Review 116(2), 187–217.
Christensen, D. (2010). ‘Higher-Order Evidence.’ Philosophy and Phenomenological Research 81(1), 185–215.
Christensen, D. (2013). ‘Epistemic Modesty Defended.’ In Christensen, D. and Lackey, J. (eds), The Epistemology of Disagreement: New Essays, pp. 77–98. Oxford: Oxford University Press.
Chudnoff, E. (2013). Intuition. Oxford: Oxford University Press.
Chudnoff, E. (2020). ‘In Search of Intuition.’ Australasian Journal of Philosophy 98(3), 465–80.
Chudnoff, E. (forthcoming). ‘Inferential Seemings.’ In Kriegel, U. (ed.), Oxford Studies in Philosophy of Mind, Vol. 4. Oxford: Oxford University Press.
Coates, A. (2012). ‘Rational Epistemic Akrasia.’ American Philosophical Quarterly 49(2), 113–24.
Conee, E. (1982). ‘Utilitarianism and Rationality.’ Analysis 42(1), 55–59.
Conee, E. and Feldman, R. (2004). Evidentialism: Essays in Epistemology. Oxford: Oxford University Press.
de Sousa, R. (2009). ‘Epistemic Feelings.’ Mind and Matter 7(2), 139–61.
DePaul, M. and Ramsey, W. (eds) (1998). Rethinking Intuition: The Psychology of Intuition and Its Role in Philosophical Inquiry. Lanham, MD: Rowman & Littlefield.
Deutscher, M. (1969). ‘A Causal Account of Inferring.’ In Brown, R. and Rollins, C. D. (eds), Contemporary Philosophy in Australia, pp. 97–118. New York: Routledge.
Dogramaci, S. (2013). ‘Intuitions for Inferences.’ Philosophical Studies 165(2), 371–99.
Dokic, J. (2014). ‘Feelings of (Un)Certainty and Margins for Error.’ Philosophical Inquiries 2(1), 123–44.
Dorsey, D. (2012). ‘Objective Morality, Subjective Morality, and the Explanatory Question.’ Journal of Ethics and Social Philosophy 6(3), 1–25.
Dougherty, T. (2011). ‘In Defense of Propositionalism about Evidence.’ In Dougherty, T. (ed.), Evidentialism and Its Discontents, pp. 226–33. Oxford: Oxford University Press.
Drożdżowicz, A. (2023). ‘Experiences of Linguistic Understanding as Epistemic Feelings.’ Mind & Language 38(1), 274–95.
Egan, A. and Elga, A. (2005). ‘I Can't Believe I'm Stupid.’ Philosophical Perspectives 19(1), 77–93.
Evans, J. S. B. (2018). ‘Dual Process Theory: Perspectives and Problems.’ In Neys, W. D. (ed.), Dual Process Theory 2.0, pp. 137–55. Abingdon, Oxfordshire: Routledge.
Evans, J. S. B. and Stanovich, K. E. (2013). ‘Dual-Process Theories of Higher Cognition: Advancing the Debate.’ Perspectives on Psychological Science 8(3), 223–41.
Feldman, R. (2005). ‘Respecting the Evidence.’ Philosophical Perspectives 19(1), 95–119.
Feldman, R. and Conee, E. (1985). ‘Evidentialism.’ Philosophical Studies 48(1), 15–34.
Feldman, R. and Conee, E. (2005). ‘Some Virtues of Evidentialism.’ Veritas (Porto Alegre) 50(4), 95–108.
Field, C. (2019). ‘It's OK to Make Mistakes: Against the Fixed Point Thesis.’ Episteme 16(2), 175–85.
Foley, R. (1990). ‘Fumerton's Puzzle.’ Journal of Philosophical Research 15, 109–13.
Frege, G. (1979). Posthumous Writings (P. Long & R. White, Trans.). Oxford: Basil Blackwell.
Frise, M. (2018). ‘Metacognition as Evidence for Evidentialism.’ In McCain, K. (ed.), Believing in Accordance with the Evidence: New Essays on Evidentialism, pp. 91–107. Cham, Switzerland: Synthese Library.
Fumerton, R. A. (1990). Reason and Morality: A Defense of the Egocentric Perspective. Ithaca, NY: Cornell University Press.
Greco, D. (2014). ‘A Puzzle about Epistemic Akrasia.’ Philosophical Studies 167, 201–19.
Grice, H. P. (1957). ‘Meaning.’ The Philosophical Review 66(3), 377–88.
Henderson, L. (2022). ‘Higher-Order Evidence and Losing One's Conviction.’ Noûs 56(3), 513–29.
Horowitz, S. (2014). ‘Epistemic Akrasia.’ Noûs 48(4), 718–44.
Johnson, M. K., Hashtroudi, S. and Lindsay, D. S. (1993). ‘Source Monitoring.’ Psychological Bulletin 114(1), 3.
Kietzmann, C. (2018). ‘Inference and the Taking Condition.’ Ratio 31(3), 294–302.
Korcz, K. A. (1997). ‘Recent Work on the Basing Relation.’ American Philosophical Quarterly 34(2), 171–91.
Korcz, K. A. (2000). ‘The Causal-Doxastic Theory of the Basing Relation.’ Canadian Journal of Philosophy 30(4), 525–50.
Kornblith, H. (2015). ‘The Role of Reasons in Epistemology.’ Episteme 12(2), 225–39.
Lasonen-Aarnio, M. (2014). ‘Higher-Order Evidence and the Limits of Defeat.’ Philosophy and Phenomenological Research 88(2), 314–45.
Lasonen-Aarnio, M. (2020). ‘Enkrasia or Evidentialism? Learning to Love Mismatch.’ Philosophical Studies 177(3), 597–632.
Littlejohn, C. (2015). ‘Stop Making Sense? On a Puzzle about Rationality.’ Philosophy and Phenomenological Research 96(2), 257–72.
Longino, H. E. (1978). ‘Inferring.’ Philosophy Research Archives 4, 17–26.
Malmgren, A. S. (2018). ‘Varieties of Inference?’ Philosophical Issues 28, 221–54.
Marcus, E. (2012). Rational Causation. Cambridge, MA: Harvard University Press.
Marcus, E. (2020). ‘Inference as Consciousness of Necessity.’ Analytic Philosophy 61(4), 304–22. https://doi.org/10.1111/phib.12153
McHugh, C. and Way, J. (2015). ‘Broome on Reasoning.’ Teorema: International Journal of Philosophy 34(2), 131–40.
McHugh, C. and Way, J. (2016). ‘Against the Taking Condition.’ Philosophical Issues 26(1), 314–31.
McHugh, C. and Way, J. (2018a). ‘What is Good Reasoning?’ Philosophy and Phenomenological Research 96(1), 153–74.
McHugh, C. and Way, J. (2018b). ‘What is Reasoning?’ Mind 127(505), 167–96.
Moretti, L. and Piazza, T. (2019). ‘The Many Ways of the Basing Relation.’ In Carter, J. A. and Bondy, P. (eds), Well-Founded Belief: New Essays on the Epistemic Basing Relation, pp. 74–91. New York: Routledge.
Müller, A. (2019). ‘Reasoning and Normative Beliefs: Not too Sophisticated.’ Philosophical Explorations 22(1), 2–15.
Munroe, W. (2023). ‘Evidentialism and Occurrent Belief: You Aren't Justified in Believing Everything Your Evidence Clearly Supports.’ Erkenntnis 88(7), 3059–78.
Nes, A. (2016). ‘The Sense of Natural Meaning in Conscious Inference.’ In Breyer, T. and Gutland, C. (eds), Phenomenology of Thinking, pp. 97–115. New York: Routledge.
Neta, R. (2013). ‘What is an Inference?’ Philosophical Issues 23(1), 388–407.
Nichols, S. and Stich, S. P. (2003). Mindreading: An Integrated Account of Pretence, Self-Awareness, and Understanding Other Minds. Oxford: Oxford University Press.
Oliveira, L. (2017). ‘Deontological Evidentialism, Wide-Scope, and Privileged Values.’ Philosophical Studies 174(2), 485–506.
Olsen, K. (2017). ‘A Defense of the Objective/Subjective Moral Ought Distinction.’ The Journal of Ethics 21(4), 351–73.
Paynter, C. A., Reder, L. M. and Kieffaber, P. D. (2009). ‘Knowing We Know before We Know: ERP Correlates of Initial Feeling-of-Knowing.’ Neuropsychologia 47(3), 796–803.
Pryor, J. (2003). ‘Is there Non-Inferential Justification?’ Manuscript, 1–33.
Quilty-Dunn, J. and Mandelbaum, E. (2018). ‘Inferential Transitions.’ Australasian Journal of Philosophy 96(3), 532–47.
Reder, L. M. (1988). ‘Strategic Control of Retrieval Strategies.’ In Bower, G. H. (ed.), Psychology of Learning and Motivation, Vol. 22, pp. 227–59. San Diego: Academic Press.
Richter, R. (1990). ‘Ideal Rationality and Hand Waving.’ Australasian Journal of Philosophy 68(2), 147–56.
Ryle, G. (1949). The Concept of Mind. London: Hutchinson's University Library.
Silva, P. (2015). ‘On Doxastic Justification and Properly Basing One's Beliefs.’ Erkenntnis 80(5), 945–55.
Silva, P. (2017). ‘How Doxastic Justification Helps Us Solve the Puzzle of Misleading Higher-Order Evidence.’ Pacific Philosophical Quarterly 98, 308–28.
Silva, P. and Oliveira, L. (forthcoming). ‘Propositional Justification and Doxastic Justification.’ In Lasonen-Aarnio, M. and Littlejohn, C. (eds), The Routledge Handbook of the Philosophy of Evidence. New York: Routledge.
Skipper, M. (2019a). ‘Higher-Order Defeat and the Impossibility of Self-Misleading Evidence.’ In Skipper, M. and Steglich-Petersen, A. (eds), Higher-Order Evidence: New Essays, pp. 189–208. Oxford: Oxford University Press.
Skipper, M. (2019b). ‘Reconciling Enkrasia and Higher-Order Defeat.’ Erkenntnis 84, 1369–86.
Skipper, M. and Steglich-Petersen, A. (eds) (2019). Higher-Order Evidence: New Essays. Oxford: Oxford University Press.
Smithies, D. (2016). ‘Belief and Self-Knowledge.’ Philosophical Issues 26, 393–421.
Smithies, D. (2019). The Epistemic Role of Consciousness. Oxford: Oxford University Press.
Smithies, D. (2022). ‘The Epistemic Function of Higher-Order Evidence.’ In Silva, P. Jr. and Oliveira, L. R. G. (eds), Propositional and Doxastic Justification: New Perspectives in Epistemology, pp. 97–120. New York: Routledge.
Sorensen, R. A. (1987). ‘Anti-Expertise, Instability, and Rational Choice.’ Australasian Journal of Philosophy 65(3), 301–15.
Sylvan, K. (2014). ‘On Divorcing the Rational and the Justified in Epistemology.’ Manuscript, 1–31.
Thompson, V. A. and Morsanyi, K. (2012). ‘Analytic Thinking: Do You Feel Like It?’ Mind & Society 11(1), 93–105.
Thomson, J. J. (1965). ‘Reasons and Reasoning.’ In Black, M. (ed.), Philosophy in America, pp. 281–303. London: Routledge.
Titelbaum, M. (2015). ‘Rationality's Fixed Point (Or: In Defense of Right Reason).’ In Gendler, T. S. and Hawthorne, J. (eds), Oxford Studies in Epistemology, Vol. 5, pp. 253–94. Oxford: Oxford University Press.
Titelbaum, M. (2018). ‘Return to Reason.’ In Skipper, M. and Steglich-Petersen, A. (eds), Higher-Order Evidence: New Essays, pp. 226–46. Oxford: Oxford University Press.
Tolliver, J. (1982). ‘Basing Beliefs on Reasons.’ Grazer Philosophische Studien 15, 149–61.
Turri, J. (2010). ‘On the Relationship between Propositional and Doxastic Justification.’ Philosophy and Phenomenological Research 80(2), 312–26.
Vahid, H. (2016). ‘A Dispositional Analysis of Propositional and Doxastic Justification.’ Philosophical Studies 173(11), 3133–52.
Valaris, M. (2014). ‘Reasoning and Regress.’ Mind 123(489), 101–27.
Valaris, M. (2016). ‘Supposition and Blindness.’ Mind 125(499), 895–901.
Valaris, M. (2017). ‘What Reasoning Might Be.’ Synthese 194(6), 2007–24.
Valaris, M. (2020). ‘Reasoning, Defeasibility, and the Taking Condition.’ Philosophers' Imprint 20(28), 1–16.
Van Inwagen, P. (1997). ‘Materialism and the Psychological-Continuity Account of Personal Identity.’ Philosophical Perspectives 11, 305–19.
Van Wietmarschen, H. (2013). ‘Peer Disagreement, Evidence, and Well-Groundedness.’ Philosophical Review 122(3), 395–425.
Vollet, J.-H. (2022). ‘Epistemic Excuses and the Feeling of Certainty.’ Analysis 82(4), 663–72.
Weatherson, B. (m.s.). ‘Do Judgments Screen Evidence?’ Manuscript, 1–23.
Weatherson, B. (2019). Normative Externalism. Oxford: Oxford University Press.
Wedgwood, R. (2013). ‘Akrasia and Uncertainty.’ Organon F: Medzinárodný Časopis Pre Analytickú Filozofiu 20(4), 483–505.
Welsh, P. (1957). ‘On the Nature of Inference.’ Philosophical Review 66(4), 509–24.
Whiting, D. (2020). ‘Higher-Order Evidence.’ Analysis 80(4), 789–807.
Worsnip, A. (2018). ‘The Conflict of Evidence and Coherence.’ Philosophy and Phenomenological Research 91(1), 3–44.
Wright, C. (2014). ‘Comment on Paul Boghossian, “What is Inference”.’ Philosophical Studies 1(1), 1–11.
Ye, R. (2014). ‘Fumerton's Puzzle for Theories of Rationality.’ Australasian Journal of Philosophy 93(1), 93–108.
Ye, R. (2019). ‘A Doxastic-Causal Theory of Epistemic Basing.’ In Carter, J. A. and Bondy, P. (eds), Well-Founded Belief: New Essays on the Epistemic Basing Relation, pp. 15–33. New York: Routledge.