
THE FALLIBILITY PARADOX

Published online by Cambridge University Press:  03 September 2019

Chandra Sripada*
Affiliation:
Philosophy, Psychiatry, University of Michigan

Abstract:

Reasons-responsiveness theories of moral responsibility are currently among the most popular. Here, I present the fallibility paradox, a novel challenge to these views. The paradox involves an agent who is performing a somewhat demanding psychological task across an extended sequence of trials and who is deeply committed to doing her very best at this task. Her action-issuing psychological processes are outstandingly reliable, so she meets the criterion of being reasons-responsive on every single trial. But she is human after all, so it is inevitable that she will make rare errors. The reasons-responsiveness view, it is claimed, is forced to reach a highly counterintuitive conclusion: she is morally responsible for these rare errors, even though making rare errors is something she is powerless to prevent. I review various replies that a reasons-responsiveness theorist might offer, arguing that none of these replies adequately addresses the challenge.

Type: Research Article

Copyright: © Social Philosophy and Policy Foundation 2019


Footnotes

* Thanks to the contributors to this volume for extensive feedback on an earlier draft of this essay. Special thanks to Michael McKenna, Samuel Murray, Manuel Vargas, and an anonymous reviewer for this journal for detailed comments that greatly improved the manuscript.

References

1 Reasons-responsiveness is typically offered as a necessary, but not sufficient, condition for moral responsibility. Other common criteria include a knowledge condition and a historical condition, among others. I am assuming throughout this essay, unless noted otherwise, that these other conditions for moral responsibility are met.

2 Another common formulation is agent-based rather than mechanism-based: an agent is morally responsible for an action only if the agent is reasons-responsive. See Brink, David O. and Nelkin, Dana K., “Fairness and the Architecture of Responsibility,” in Oxford Studies in Agency and Responsibility, Volume 1, ed. Shoemaker, David (Oxford: Oxford University Press, 2013); Vargas, Manuel, Building Better Beings: A Theory of Moral Responsibility (Oxford: Oxford University Press, 2013); and McKenna, Michael, “Reasons-Responsiveness, Agents, and Mechanisms,” in Oxford Studies in Agency and Responsibility, Volume 1, ed. Shoemaker, David (Oxford: Oxford University Press, 2013), 151–83, http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199694853.001.0001/acprof-9780199694853-chapter-7, for more on the distinction. For ease of exposition, I formulate the fallibility paradox for mechanism-based reasons-responsiveness views first. Later, in Section V.C, I argue that switching to an agent-based formulation makes little difference.

3 See MacLeod, C. M., “Half a Century of Research on the Stroop Effect: An Integrative Review,” Psychological Bulletin 109, no. 2 (1991): 163–203, for a review of the history of this task and a summary of key findings.

4 In a series of papers, Santiago Amaya has drawn attention to slips, which are quite similar to what I am here calling “errors.” On his account, slips are to be understood as intentional actions that fail to correspond to what the agent preferred to do at the time. He also distinguishes slips from other kinds of agential failings, such as irresoluteness and “Freudian” conduct. See Amaya, Santiago, “Slips,” Noûs 47, no. 3 (2013): 559–76, https://doi.org/10.1111/j.1468-0068.2011.00838.x; Amaya, “The Argument from Slips,” in Agency, Freedom, and Moral Responsibility, ed. Buckareff, Andrei, Moya, Carlos, and Rosell, Sergi (London: Palgrave Macmillan, 2015), 13–29; as well as Amaya, Santiago and Doris, John M., “No Excuses: Performance Mistakes in Morality,” in Handbook of Neuroethics, ed. Clausen, Jens and Levy, Neil (Dordrecht, Netherlands: Springer, 2015), 253–72.

5 The standard finding is that subjects make an error on roughly 5–10 percent of the “incongruent” trials where the named color and ink color disagree (MacLeod, “Half a Century of Research on the Stroop Effect”), and error rates go down if incentives are given for accurate responding (Liljeholm, Mimi and O’Doherty, John P., “Anything You Can Do, You Can Do Better: Neural Substrates of Incentive-Based Performance Enhancement,” PLoS Biology 10, no. 2 (2012): e1001272, https://doi.org/10.1371/journal.pbio.1001272). However, across the course of a prolonged experiment, virtually no one, no matter what incentives are given, achieves perfect accuracy (indeed, as incentives get sufficiently high, performance often suffers due to “choking under pressure” effects; Ariely, Dan, Gneezy, Uri, Loewenstein, George, and Mazar, Nina, “Large Stakes and Big Mistakes,” The Review of Economic Studies 76, no. 2 (2009): 451–69, https://doi.org/10.1111/j.1467-937X.2009.00534.x; Chib, Vikram S., Shimojo, Shinsuke, and O’Doherty, John P., “The Effects of Incentive Framing on Performance Decrements for Large Monetary Outcomes: Behavioral and Neural Mechanisms,” Journal of Neuroscience 34, no. 45 (2014): 14833–44, https://doi.org/10.1523/JNEUROSCI.1491-14.2014).
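Why long error-free runs are practically unattainable can be seen with a back-of-the-envelope calculation. The sketch below is illustrative only: it idealizes trials as independent with a fixed per-trial accuracy, and the 99.6 percent figure mirrors an agent who errs on 4 of 1,000 trials (far better than typical incongruent-trial error rates).

```python
# Sketch: probability of a flawless run, under the idealizing assumption
# of independent trials with a fixed per-trial accuracy.
per_trial_accuracy = 0.996   # optimistic; typical incongruent-trial
                             # error rates are roughly 5-10 percent
n_trials = 1000

# P(zero errors) = accuracy ^ number_of_trials under independence
p_flawless = per_trial_accuracy ** n_trials
print(f"P(zero errors in {n_trials} trials) = {p_flawless:.3f}")  # ~0.018
```

Even at this exceptional level of per-trial reliability, the chance of a perfect 1,000-trial session is under 2 percent, which is the sense in which rare errors are inevitable.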

6 Fischer says the mechanism that issues in action must be “moderately” reasons-responsive: across worlds in which there is sufficient reason to do otherwise, it “regularly” recognizes these reasons and reacts to these reasons in at least one world (Fischer, John Martin and Ravizza, Mark, Responsibility and Control: A Theory of Moral Responsibility [New York: Cambridge University Press, 1998]). Others set a higher threshold for reasons-reactivity (e.g., Brink and Nelkin, “Fairness and the Architecture of Responsibility”). See McKenna, “Reasons-Responsiveness, Agents, and Mechanisms,” for more general discussion of the role of thresholds in reasons-responsiveness accounts.

7 Thus far, I am assuming that, since all 1,000 trials are highly similar and she tries equally hard on every trial, it is the same mechanism that issues in action across all the trials. This strikes me as a highly intuitive picture of what goes on in this task. In Section IV, I consider the possibility that different mechanisms are at work on correct versus incorrect trials.

8 Elsewhere, I discuss the connection between inevitable rare errors of the kind Fei exhibits and loss of control in non-laboratory, “real-world” contexts; see Sripada, Chandra, “Addiction and Fallibility,” Journal of Philosophy 115, no. 2 (2018): 569–87; Sripada, “Self-Expression: A Deep Self Theory of Moral Responsibility,” Philosophical Studies 173, no. 5 (2016): 1203–32, https://doi.org/10.1007/s11098-015-0527-9.

9 Slips and errors have figured into other challenges to reasons-responsiveness views. But interestingly, the challenge posed by the fallibility paradox comes from a diametrically opposed direction. The fallibility paradox presents a challenge for reasons-responsiveness views because it says these views are overinclusive: they count an agent as morally responsible for certain kinds of slips and errors when they should not. Reasons-responsiveness views have also faced the opposite charge: it is claimed they are underinclusive and fail to count an agent as morally responsible for certain kinds of slips and errors when they should. Reasons-responsiveness theorists have offered responses. For example, McKenna and Warmke (McKenna, Michael and Warmke, Brandon, “Does Situationism Threaten Free Will and Moral Responsibility?” Journal of Moral Philosophy 1, no. 36 [2017]) discuss the findings from the literature on situationism. They consider the claim that these findings show we are typically not reasons-responsive enough, opening the door to skepticism about moral responsibility, and they offer detailed replies. Murray (Murray, Samuel, “Responsibility and Vigilance,” Philosophical Studies 174, no. 2 [2017]: 507–27, https://doi.org/10.1007/s11098-016-0694-3) considers cases of forgetting and other failures of vigilance. He puts forward an argument for why a reasons-responsiveness view can in fact account for why people are morally responsible for these failings. However, even if McKenna and Warmke and Murray are right and the charge of underinclusiveness is successfully rebutted, this does not address the fallibility paradox, which attacks from precisely the opposite direction.

10 Kyburg, Henry, Probability and the Logic of Rational Belief (Middletown, CT: Wesleyan University Press, 1961).

11 Ibid.

12 Fischer says very little about how to individuate mechanisms for the purposes of evaluating their reasons-responsiveness, and in nearly all his examples, the relevant mechanism is not specified in any detail. McKenna (“Reasons-Responsiveness, Agents, and Mechanisms”) takes up this problem for Fischer’s view in some detail.

13 There are a number of computational models of Stroop task performance, all of which are compatible with the key conclusions I want to draw (e.g., Phaf, R. Hans, Van der Heijden, A. H. C., and Hudson, Patrick T. W., “SLAM: A Connectionist Model for Attention in Visual Selection Tasks,” Cognitive Psychology 22, no. 3 [1990]: 273–341, https://doi.org/10.1016/0010-0285(90)90006-P; Roelofs, Ardi, “Goal-Referenced Selection of Verbal Action: Modeling Attentional Control in the Stroop Task,” Psychological Review 110, no. 1 [2003]: 88–125). I am here focusing on the approach of Jonathan Cohen and his colleagues as laid out in a number of articles (see, for example, Cohen, J. D., Dunbar, K., and McClelland, J. L., “On the Control of Automatic Processes: A Parallel Distributed Processing Account of the Stroop Effect,” Psychological Review 97, no. 3 [1990]: 332–61).

14 See Dolan, Ray J. and Dayan, Peter, “Goals and Habits in the Brain,” Neuron 80, no. 2 (2013): 312–25, https://doi.org/10.1016/j.neuron.2013.09.007, for a review of brain-based computational algorithms that underlie habit learning.

15 See, for example, Egner, Tobias and Hirsch, Joy, “Cognitive Control Mechanisms Resolve Conflict through Cortical Amplification of Task-Relevant Information,” Nature Neuroscience 8, no. 12 (2005), https://doi.org/10.1038/nn1594.

16 For discussions of stochasticity in neural computation, see Shadlen, Michael N. and Roskies, Adina L., “The Neurobiology of Decision-Making and Responsibility: Reconciling Mechanism and Mindedness,” Frontiers in Neuroscience 6 (2012), https://doi.org/10.3389/fnins.2012.00056; Shadlen, Michael N., “Comments on Adina Roskies, ‘Can Neuroscience Resolve Issues about Free Will?’” in Moral Psychology, Volume 4: Free Will and Moral Responsibility, ed. Sinnott-Armstrong, Walter (Cambridge, MA: MIT Press, 2014), 39–50. For discussions of how stochasticity manifests in performance in Stroop-like tasks, see Jensen, Arthur R., “The Importance of Intraindividual Variation in Reaction Time,” Personality and Individual Differences 13, no. 8 (1992): 869–81, https://doi.org/10.1016/0191-8869(92)90004-9; Nesselroade, John R. and Ram, Nilam, “Studying Intraindividual Variability: What We Have Learned That Will Help Us Understand Lives in Context,” Research in Human Development 1, nos. 1–2 (2004): 9–29, https://doi.org/10.1080/15427609.2004.9683328; Castellanos, F. Xavier, Sonuga-Barke, Edmund J. S., Scheres, Anouk, Di Martino, Adriana, Hyde, Christopher, and Walters, Judith R., “Varieties of Attention-Deficit/Hyperactivity Disorder-Related Intra-Individual Variability,” Biological Psychiatry 57, no. 11 (2005): 1416–23, https://doi.org/10.1016/j.biopsych.2004.12.005.

17 The main features of the preceding mechanistic description of Stroop task performance are nicely captured in the classic drift diffusion model (Ratcliff, Roger and McKoon, Gail, “The Diffusion Decision Model: Theory and Data for Two-Choice Decision Tasks,” Neural Computation 20, no. 4 (2008): 873–922, https://doi.org/10.1162/neco.2008.12-06-420; Voss, Andreas, Nagler, Markus, and Lerche, Veronika, “Diffusion Models in Experimental Psychology,” Experimental Psychology 60, no. 6 (2013): 385–402, https://doi.org/10.1027/1618-3169/a000218). The model treats one’s responses on a broad array of tasks as arising from a continuous random diffusion process (called Wiener-type diffusion) that evolves over time, eventually hitting a decision boundary that determines the response (READ the word or say the INK color). Top-down attention serves to strongly bias the evolution of the diffusion process in favor of the correct response (INK). But in each time instant, noise processes can potentially push the evolving diffusion path in either direction. The result is that, so long as the level of top-down attention is sufficient, on most trials the person produces the correct response, but, inevitably, rare incorrect decisions and subsequent responses will also occur. Certain modifications of the classic drift diffusion model are required for conflict tasks like the Stroop task (Ulrich, Rolf, Schröter, Hannes, Leuthold, Hartmut, and Birngruber, Teresa, “Automatic and Controlled Stimulus Processing in Conflict Tasks: Superimposed Diffusion Processes and Delta Functions,” Cognitive Psychology 78 [May 2015]: 148–74, https://doi.org/10.1016/j.cogpsych.2015.02.005; White, Corey N., Servant, Mathieu, and Logan, Gordon D., “Testing the Validity of Conflict Drift-Diffusion Models for Use in Estimating Cognitive Processes: A Parameter-Recovery Study,” Psychonomic Bulletin and Review 25, no. 1 [2018]: 286–301, https://doi.org/10.3758/s13423-017-1271-2), but they don’t change the preceding basic picture.
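The qualitative picture just described can be conveyed with a minimal simulation sketch. This is not the fitted Stroop model of any of the cited papers; the parameter values (drift rate, noise level, boundary, step size) and the function name `simulate_trial` are illustrative assumptions chosen so that correct responses dominate while wrong-side boundary crossings remain possible.

```python
import numpy as np

def simulate_trial(drift=2.8, noise=1.0, boundary=1.0, dt=0.001, rng=None):
    """Accumulate noisy evidence until it crosses +boundary (the correct
    INK response) or -boundary (the erroneous READ response)."""
    rng = rng if rng is not None else np.random.default_rng()
    evidence = 0.0
    while abs(evidence) < boundary:
        # Top-down attention biases the drift toward the correct response,
        # but Gaussian noise can push the path either way at each instant.
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
    return evidence >= boundary  # True = correct response

rng = np.random.default_rng(seed=0)
n_trials = 1000
accuracy = sum(simulate_trial(rng=rng) for _ in range(n_trials)) / n_trials
# With a strong drift, the vast majority of trials end on the correct
# boundary, yet occasional wrong-side crossings (errors) still occur.
```

The design point the sketch makes concrete is that errors here are not produced by a different mechanism: the same biased-diffusion process generates both the many correct responses and the rare errors.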

18 McKenna, “Reasons-Responsiveness, Agents, and Mechanisms.”

19 Brink and Nelkin, Fairness and the Architecture of Responsibility.

20 I borrow this helpful terminology from Murray, Samuel, Murray, Elise D., Stewart, Gregory, Sinnott-Armstrong, Walter, and De Brigard, Felipe, “Responsibility for Forgetting,” Philosophical Studies (2018): 1–25, https://doi.org/10.1007/s11098-018-1053-3, who cite Doris, John M., Talking to Our Selves: Reflection, Ignorance, and Agency, reprint edition (New York: Oxford University Press, 2015) and Sripada, Chandra, “Self-Expression: A Deep Self Theory of Moral Responsibility,” Philosophical Studies 173, no. 5 (2016): 1203–32, https://doi.org/10.1007/s11098-015-0527-9, as recent examples of valuationist views. Murray and colleagues’ article focuses on moral responsibility for slips and errors in cases where the agent should be morally responsible. The fallibility paradox, as I noted earlier, presents the opposite kind of challenge: it concerns slips where the agent should not be morally responsible. A complete defense of a valuationist approach to responsibility for slips and errors should address Murray and colleagues’ arguments, though I will not attempt such a defense here.

21 Sripada, “Self-Expression: A Deep Self Theory of Moral Responsibility,” argues that the specific kind of causal contribution that is relevant for moral responsibility is motivational contribution: an action expresses an element of one’s evaluative point of view if that element motivationally supports performing the action. Older valuationist views understood the idea of expressing or flowing from one’s evaluative point of view in explicit, conscious, and often highly rationalistic terms. For example, an action expresses an agent’s evaluative point of view only if the agent consciously, reflectively endorses the action. See Frankfurt, Harry, “Freedom of the Will and the Concept of a Person,” The Journal of Philosophy 68, no. 1 (1971): 5–20, https://doi.org/10.2307/2024717.

22 Earlier I discussed the “different mechanism” strategy that might be taken up by reasons-responsiveness theorists. Some readers of that section might have thought about the following strategy for mechanism individuation: In the 996 trials in which Fei performs the word reading response, the mechanism that issues in action produces an action that is appropriately caused by and expresses Fei’s goals (to read the word rather than say the ink color) and values (caring for animals). On the four trials in which she makes an error, the mechanism that issues in action produces an action that is not caused by, and in fact conflicts with, these goals and values. Could this difference be a principled basis, one that has strong roots in intuition, for saying that there are different mechanisms at work on the 996 success trials versus the four error trials? The discussion in the present section serves to show why this strategy is misguided. This approach to mechanism individuation is sufficiently different from the standard reasons-responsiveness approach, and sufficiently similar to the valuationist approach, that a reasons-responsiveness theorist who takes this tack is essentially collapsing his view into a form of valuationism (see McKenna, “Reasons-Responsiveness, Agents, and Mechanisms” for further discussion of related points).

23 Here, as I have been doing throughout this essay (unless explicitly noted otherwise), I am assuming all other conditions for moral responsibility are met.