
Cooperation, psychological game theory, and limitations of rationality in social interaction

Published online by Cambridge University Press: 02 October 2003

Andrew M. Colman*
Affiliation:
School of Psychology, University of Leicester, Leicester LE1 7RH, United Kingdom. www.le.ac.uk/home/amc

Abstract:

Rational choice theory enjoys unprecedented popularity and influence in the behavioral and social sciences, but it generates intractable problems when applied to socially interactive decisions. In individual decisions, instrumental rationality is defined in terms of expected utility maximization. This becomes problematic in interactive decisions, when individuals have only partial control over the outcomes, because expected utility maximization is undefined in the absence of assumptions about how the other participants will behave. Game theory therefore incorporates not only rationality but also common knowledge assumptions, enabling players to anticipate their co-players' strategies. Under these assumptions, disparate anomalies emerge. Instrumental rationality, conventionally interpreted, fails to explain intuitively obvious features of human interaction, yields predictions starkly at variance with experimental findings, and breaks down completely in certain cases. In particular, focal point selection in pure coordination games is inexplicable, though it is easily achieved in practice; the intuitively compelling payoff-dominance principle lacks rational justification; rationality in social dilemmas is self-defeating; a key solution concept for cooperative coalition games is frequently inapplicable; and rational choice in certain sequential games generates contradictions. In experiments, human players behave more cooperatively and receive higher payoffs than strict rationality would permit. Orthodox conceptions of rationality are evidently internally deficient and inadequate for explaining human interaction. Psychological game theory, based on nonstandard assumptions, is required to solve these problems, and some suggestions along these lines have already been put forward.

Type: Research Article
Copyright: © Cambridge University Press 2003


Notes

1. According to a debatable behavioral interpretation of RCT (Herrnstein 1990), its central assumption is that organisms maximize reinforcement, and this “comes close to serving as the fundamental principle of the behavioral sciences” (p. 356). But, as Herrnstein pointed out, experimental evidence suggests that RCT, thus interpreted, predicts human and animal behavior only imperfectly.

2. Green and Shapiro (1994) did not touch on this most crucial problem in their wide-ranging critical review of the rational choice literature. Neither did any of the participants in the ensuing “rational choice controversy” in the journal Critical Review, later republished as a book (Friedman 1996).

3. This implies that rational beliefs have to respect the (known) evidence. For example, a person who has irrefutable evidence that it is raining and therefore knows (believes that the probability is 1) that it is raining, but also believes that it is fine, fails to respect the evidence and necessarily holds internally inconsistent beliefs.

4. I am grateful to Ken Binmore for pointing this out to me.

5. Kreps (1988) has provided an excellent summary of Savage's theory, although Savage's (1954) own account is brief and lucid.

6. To a psychologist, revealed preference theory explains too little, because there are other sources of information about preferences apart from choices, and too much, because there are other factors apart from preferences that determine choices – see the devastating “rational fools” article by Sen (1978).

7. The possibility of systematic irrationality, or of demonstrating it empirically, has been questioned, notably by Broome (1991), Cohen (1981), and Stein (1996).

8. See, for example, Bicchieri (1993, Chs. 2, 3); Colman (1997; 1998); Cubitt and Sugden (1994; 1995); Hollis and Sugden (1993); McClennen (1992); Sugden (1991b; 1992). The common knowledge assumptions are sometimes relaxed in recent research (e.g., Aumann & Brandenburger 1995).

9. In other circumstances, experimental evidence suggests that human reasoners do not even come close to full common knowledge (Stahl & Wilson 1995).

10. The theory is determinate for every strictly competitive (finite, two-person, zero-sum) game, because if such a game has multiple equilibria, then they are necessarily equivalent and interchangeable, but this does not hold for other games.
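The determinacy property described in this note can be illustrated with a small sketch (not from the article): in a zero-sum game, the pure-strategy equilibria are exactly the saddle points of the row player's payoff matrix, and when several exist they are equivalent (same value) and interchangeable. The example matrix below is hypothetical.

```python
def saddle_points(matrix):
    """Pure-strategy equilibria (saddle points) of a two-person zero-sum
    game, given the row player's payoff matrix: entries that are
    simultaneously a minimum of their row and a maximum of their column."""
    pts = []
    for r, row in enumerate(matrix):
        for c, v in enumerate(row):
            if v == min(row) and v == max(m[c] for m in matrix):
                pts.append((r, c))
    return pts

# Hypothetical matrix with four saddle points, all with the same value,
# illustrating equivalence and interchangeability of multiple equilibria.
matrix = [[4, 4, 10],
          [2, 3, 1],
          [4, 4, 8]]
pts = saddle_points(matrix)          # [(0, 0), (0, 1), (2, 0), (2, 1)]
values = {matrix[r][c] for r, c in pts}   # {4}: all equilibria equivalent
```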

11. See Colman (1995a, pp. 169–75) for a simple example of an empty core in Harold Pinter's play, The Caretaker.

12. Even Hume nods. Port comes from Portugal, of course.

13. Janssen's (2001b) principle of individual team member rationality is slightly weaker (it does not require equilibrium): “If there is a unique strategy combination that is Pareto-optimal, then individual players should do their part of the strategy combination” (p. 120). Gauthier's (1975) principle of coordination is slightly stronger (it requires both equilibrium and optimality): “In a situation with one and only one outcome which is both optimal and a best equilibrium … it is rational for each person to perform that action which has the best equilibrium as one of its possible outcomes” (p. 201).
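The antecedent of Janssen's principle can be checked mechanically; as an illustrative sketch (not part of the article), the function below finds the Pareto-optimal strategy combinations of a two-player game and is applied to a hypothetical Hi-Lo matching game, where the antecedent holds because exactly one combination is Pareto-optimal.

```python
def pareto_optimal_profiles(payoffs):
    """Return the strategy profiles not Pareto-dominated by any other.
    payoffs maps a profile (row, col) to a payoff pair (u1, u2).
    A profile p is dominated if some q gives every player at least as
    much as p and some player strictly more."""
    profiles = list(payoffs)
    def dominated(p):
        return any(
            all(payoffs[q][i] >= payoffs[p][i] for i in (0, 1)) and
            any(payoffs[q][i] > payoffs[p][i] for i in (0, 1))
            for q in profiles if q != p)
    return [p for p in profiles if not dominated(p)]

# Hypothetical Hi-Lo game: both players choose High (0) or Low (1).
hi_lo = {(0, 0): (2, 2), (0, 1): (0, 0),
         (1, 0): (0, 0), (1, 1): (1, 1)}
pareto_optimal_profiles(hi_lo)  # unique Pareto-optimal profile: (0, 0)
```

Since (High, High) is the unique Pareto-optimal combination, Janssen's principle directs each player to play High; Gauthier's stronger principle agrees here, because (High, High) is also an equilibrium.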

14. If e and f are any two equilibrium points in a game, then e risk-dominates f if and only if the minimum possible payoff resulting from the choice of the strategy corresponding to e is strictly greater for every player than the minimum possible payoff resulting from the choice of the strategy corresponding to f. According to Harsanyi and Selten's (1988) risk-dominance principle, if one equilibrium point risk-dominates all others, then players should choose its component strategies. It is used when subgame perfection and payoff dominance fail to yield a determinate solution.
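The definition in this note can be sketched directly in code (an illustration, not the article's own formalism), using the familiar Stag Hunt with hypothetical payoffs: (Stag, Stag) payoff-dominates, but (Hare, Hare) risk-dominates under the minimum-payoff criterion stated above.

```python
def risk_dominates(payoffs, e, f):
    """Check whether equilibrium e risk-dominates f in a two-player game,
    per the note's criterion: for every player, the minimum possible payoff
    from the strategy corresponding to e strictly exceeds the minimum
    possible payoff from the strategy corresponding to f.

    payoffs[i] is player i's payoff matrix (rows = player 0's strategies,
    columns = player 1's); e and f are (row, col) strategy profiles."""
    for player in (0, 1):
        def min_payoff(profile, player=player):
            s = profile[player]
            if player == 0:   # row player: worst case over columns
                return min(payoffs[0][s])
            # column player: worst case over rows
            return min(payoffs[1][r][s] for r in range(len(payoffs[1])))
        if not min_payoff(e) > min_payoff(f):
            return False
    return True

# Hypothetical Stag Hunt: strategy 0 = Stag, 1 = Hare.
payoffs = [
    [[9, 0], [8, 7]],   # row player's payoffs
    [[9, 8], [0, 7]],   # column player's payoffs
]
risk_dominates(payoffs, (1, 1), (0, 0))  # True: (Hare, Hare) risk-dominates
risk_dominates(payoffs, (0, 0), (1, 1))  # False
```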

15. According to the sure-thing principle, if an alternative ai is judged to be as good as another aj in all possible contingencies that might arise, and better than aj in at least one, then a rational decision maker will prefer ai to aj. Savage's (1954) illustration refers to a person deciding whether or not to buy a certain property shortly before a presidential election, the outcome of which could radically affect the property market. “Seeing that he would buy in either event, he decides that he should buy, even though he does not know which event will obtain” (p. 21).
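The sure-thing principle amounts to a weak-dominance test over contingencies; the sketch below (an illustration, with hypothetical utility numbers for Savage's property example) makes that explicit.

```python
def sure_thing_preferred(ai, aj):
    """Savage's sure-thing principle as a weak-dominance check.
    ai and aj are lists of utilities, one entry per contingency.
    ai is preferred iff it is at least as good as aj in every
    contingency and strictly better in at least one."""
    at_least_as_good = all(x >= y for x, y in zip(ai, aj))
    strictly_better = any(x > y for x, y in zip(ai, aj))
    return at_least_as_good and strictly_better

# Savage's property example, with hypothetical utilities:
# contingencies are [candidate A wins, candidate B wins].
buy      = [10, 3]
dont_buy = [8, 3]
sure_thing_preferred(buy, dont_buy)  # True: buy in either event, so buy
```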

16. I am grateful to Werner Güth for this insight.