
Rational Choice for Machines: A Research Program for Normative Philosophy*

  • Christopher W. Morris (a1)


Why be moral? The question is very old. It takes many forms and is subject to many interpretations. On one interpretation, the question does not make sense (failed presupposition); to ask it is evidence of misunderstanding. This view is not as popular as it once was. The more fashionable answer today is that we have reasons to be moral. These reasons may themselves be moral, or they may be non-moral. In the first case, we may not have the answer we wanted to our question. If we think of reasons as considerations favouring a course of action, decisively in the absence of other reasons, then the fact that there are moral reasons to act morally merely tells us that there are moral considerations in favour of acts required or recommended by morality. It leaves open the question of whether we act against or contrary to reason simpliciter when we fail to act as we morally should. By contrast, if there are non-moral reasons to act morally, then to refrain from doing what is morally required of us is a failure of reason; it is to act contrary to the considerations relevant to the choices one faces.





1 Similarly, and equally unhelpfully, there are legal reasons to obey the law.

2 The pursuit of a fundamental justification does not, Danielson argues, commit him to a foundationalist epistemology (pp. 27–28). I think he is right and have argued the point elsewhere.

3 Rational choice theory focuses on the connections between ends and means. While most rational choice theorists are instrumentalists about reason and do not believe that there are rational ends, the theory itself must be neutral on this matter. It says nothing about the manner in which the ends we pursue acquire their value. In principle, then, it should be available to most non-instrumentalists.

4 Oxford: Clarendon Press, 1986.

5 See Axelrod, Robert, The Evolution of Cooperation (New York: Basic Books, 1984).

6 “Our most developed social science, economics, is overwhelmingly cynical about moral motivation and pursues a program of finding institutional replacements for morality. The received theory of rational choice, by defining rationality as unconstrained choice, makes morality irrational by definition” (p. 3).

7 I omit discussion of so-called “revealed-preference” theorists, as the simple-minded behaviourism implicit in their view is not taken seriously by philosophers, whatever lingering influence it may have in economics. I also shall not discuss the view of some that “preferences” in the technical sense reflect desire-based values. Some fall into a position such as this one by confusing technical and ordinary senses of the term “preference.” More serious defenders of desire-based accounts of value need not be committed to the view that preferences (in the technical sense) exhaust all reasons for action. Gauthier is someone who accepts a desire-based theory of value but rejects this sort of view of preference and the associated account of rational choice.

8 Noting some of the limits of Prolog, Danielson remarks that it “goes a remarkable way towards that philosophical dream, the automatic argument testing machine” (p. 71).

9 He continues, “Indeed, if one-shot games did not exist, moral theorists would need to invent them.” Some theorists deny that there are any, or many, genuine single-play games in the world. I often think that the motivation for these sorts of conjectures (usually unsubstantiated by evidence) is the belief that rational agents could not cooperate in single-play games, at least in those involving any significant element of conflict (like the PD). The situation is different for zero-sum games; especially if one introduces probabilities (and “mixed strategies”), the defining condition of complete opposition of interest is too demanding to be realized in practice. As we know, love and war are not, in fact, genuine zero-sum games.
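The worry about single-play games can be made concrete. In a one-shot PD, defection is a dominant strategy for a straightforward maximizer, which is why the received theory deems cooperation there irrational. A minimal sketch, using hypothetical payoff numbers satisfying the standard PD ordering (temptation > reward > punishment > sucker):

```python
# Hypothetical one-shot Prisoner's Dilemma payoffs (T > R > P > S).
# Keys: (my move, other's move); values: (my payoff, other's payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),  # R: mutual cooperation
    ("C", "D"): (0, 5),  # S, T: I am exploited
    ("D", "C"): (5, 0),  # T, S: I exploit
    ("D", "D"): (1, 1),  # P: mutual defection
}

def best_reply(others_move):
    """Return the payoff-maximizing move against a fixed move by the other player."""
    return max(("C", "D"), key=lambda mine: PAYOFFS[(mine, others_move)][0])

# Defection is the best reply whatever the other does -- a dominant strategy --
# so received rational choice theory counts one-shot cooperation as irrational.
print(best_reply("C"), best_reply("D"))
```

The point generalizes to any payoffs with the PD ordering; the particular numbers above are assumptions for illustration only.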

10 The compliance problem is best explained by using extended or sequential PDs, rather than the simultaneous ones represented by the familiar matrices.
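The sequential framing makes the compliance problem vivid: once the first player has cooperated, a straightforwardly maximizing second player has no incentive to reciprocate, and a first player who anticipates this will not cooperate either. A backward-induction sketch, again with assumed illustrative payoffs:

```python
# Backward induction in a sequential (extended) PD, with hypothetical
# payoffs satisfying the usual ordering T > R > P > S.
# Keys: (first mover's move, second mover's move);
# values: (first mover's payoff, second mover's payoff).
PAYOFFS = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def second_move(first):
    """A straightforwardly maximizing second mover's reply to the observed first move."""
    return max(("C", "D"), key=lambda m: PAYOFFS[(first, m)][1])

def first_move():
    """The first mover anticipates the second mover's reply (backward induction)."""
    return max(("C", "D"), key=lambda m: PAYOFFS[(m, second_move(m))][0])

# The compliance problem: the second mover defects whatever the first does,
# so the first mover defects too, and the cooperative surplus (3, 3) is lost.
print(first_move(), second_move(first_move()))
```

Danielson's constrained agents are designed precisely to escape this result by making compliance conditional on the disposition of the other player, rather than on act-by-act maximization.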

11 Gauthier, David, “Reasons and Maximization,” Canadian Journal of Philosophy, 4 (1975): 411–33.

12 A conclusion supported by Frank, Robert, Passions Within Reason: The Strategic Role of the Emotions (New York and London: W. W. Norton, 1988).

13 Focusing on more complicated PDs would also help, e.g., the “divisible PD,” one which has more than a single mutually advantageous cooperative outcome and where some means to select one of these cooperative outcomes is a condition for cooperation. See Coleman, Jules, Risks and Wrongs (Cambridge: Cambridge University Press, 1992), pp. 106–108.

14 Another is that of McClennen, Edward F., Rationality and Dynamic Choice (Cambridge: Cambridge University Press, 1990).

15 This line is so common, both in the literature and in discussions, that references do not seem needed.

16 I simplify in many ways, one of which is the complexity of Danielson's position. Toward the end of the book, he wishes to argue “that what best advances a player's substantive interests is not to be found at the level of particular acts, nor even at the level of his principles, but rather in the manner in which he chooses these principles” (p. 189).

* Peter Danielson, Artificial Morality: Virtuous Robots for Virtual Games (London and New York: Routledge, 1992), xiii + 240 pp., $21.50. Parenthetical page references are to this work. I am grateful to Eric Barnes for some helpful comments on an earlier draft.


