
Boundary effects in the Marschak-Machina triangle

Published online by Cambridge University Press:  01 January 2023

Krzysztof Kontek*
Affiliation:
Warsaw School of Economics

Abstract

This paper presents the results of a study that sheds new light on the shape of indifference curves in the Marschak-Machina triangle. The most important observation, obtained non-parametrically, concerns jumps in indifference curves at the triangle legs towards the triangle origin. These jumps, however, do not appear at the hypotenuse. The pattern observed suggests discontinuity in lottery valuation when the range of lottery outcomes changes and is best explained by decision-making models based on the psychological phenomenon of range dependence (Parducci, 1965; Cohen, 1992; Kontek & Lewandowski, 2018). Models founded on other psychological phenomena, e.g., discontinuity in decision weights (Kahneman & Tversky, 1979), cumulative probability weighting (Tversky & Kahneman, 1992), attention shifting (Birnbaum, 2008), overweighting of salient payoffs (Bordalo, Gennaioli & Shleifer, 2012), and treating stated probabilities as imperfect information (Viscusi, 1989), predict indifference curve shapes that differ from the one obtained in this study.

Type
Research Article
Creative Commons
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors [2018] This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

The Marschak-Machina triangle (Marschak, 1950; Machina, 1982) is a graphical tool for both theoretical and experimental considerations concerning the modeling of decision-making under risk. The triangle represents the set of all lotteries involving three fixed outcomes x1 < x2 < x3 with respective probabilities of p1, p2, and p3. Probability p1 is represented on the horizontal axis; probability p3 is represented on the vertical axis; and probability p2 is their sum subtracted from 1. Every point in this triangle represents a particular lottery: a point inside the triangle represents a three-outcome lottery where p1, p2, and p3 are strictly positive; a point on the boundary of the triangle (but not at one of the corners) represents a two-outcome lottery, since one pi is zero; while the corners represent certainties.

A common and useful way to visualize the predictions of the various decision-making models is to inspect their indifference curves, as they connect points representing lotteries of equal utility. If the decision-maker behaves in accordance with Expected Utility Theory (von Neumann & Morgenstern, 1944), then his or her preferences can be represented in the Marschak-Machina triangle by a set of parallel straight-line indifference curves. A number of theories of decision-making under risk have been developed to explain Expected Utility violations (more specifically, the Allais paradox). These models predict indifference curves of various shapes: straight lines that “fan-out” (i.e., that intersect at a point to the south-west of the triangle origin; Chew & MacCrimmon, 1979; Loomes & Sugden, 1982), “fan-in” (i.e., that intersect at a point to the north-east of the hypotenuse; Blavatskyy, 2006), are a mixture of both (Gul, 1991; Neilson, 1992; Jia et al., 2001; Bordalo, Gennaioli & Shleifer, 2012), or do not converge to any specific point (Dekel, 1986). The indifference curves may be concave (Kahneman & Tversky, 1979), concave or convex (Becker & Sarin, 1987), or concave and convex (Tversky & Kahneman, 1992; Birnbaum, 2008). They may also be discontinuous at all boundaries (Kahneman & Tversky, 1979; Viscusi, 1989; Birnbaum, 2008; Bordalo, Gennaioli & Shleifer, 2012), or only at the triangle legs (Cohen, 1992; Kontek & Lewandowski, 2018). The Marschak-Machina triangle, with the indifference curves inside it, is therefore a powerful tool for distinguishing the predictions of different decision-making models. Example shapes of the indifference curves predicted by the models discussed in this paper are presented in Figure 1.

Figure 1: Indifference curve shapes predicted by: Expected Utility Theory (EUT, von Neumann & Morgenstern, 1944), Cumulative Prospect Theory (CPT, Tversky & Kahneman, 1992), the TAX model (TAX, Birnbaum, 2008), Salience Theory (ST, Bordalo, Gennaioli & Shleifer, 2012), Prospective Reference Theory (PRT, Viscusi, 1989), and the Decision Utility model (DUT, Kontek & Lewandowski, 2018).

Many investigations have tested hypotheses about the shape of the indifference map using real data (e.g., Hey & Strazzera, 1989; Camerer, 1989; Loomes, 1991; Harless, 1992; Blavatskyy, 2006; Bardsley et al., 2010). Harless and Camerer (1994), after analyzing a large number of experimental data sets, conclude that the EU model should be used when all the lotteries are located in the interior of the triangle (from which it follows that the indifference curves are parallel straight lines inside the triangle), but that a different model has to be used when some of the lotteries are located on the boundaries or in the corners of the triangle. The literature contains evidence of boundary effects (e.g., Conlisk, 1989), although the shape of the indifference curves in the vicinity of the triangle boundaries is not clearly stated. For instance, Abdellaoui and Munier (1998) state only that the hypothesis concerning parallelism of the indifference curves at the triangle legs is strongly rejected.

This paper presents the results of a study that sheds new light on the shape of indifference curves in the Marschak-Machina triangle. The study was performed using a novel method of plotting indifference curves non-parametrically from certainty equivalents, based on the common cartographic practice of plotting contour maps (Section 2). The approach allows the indifference curves to be visualized (Section 3), in contrast to most previous studies, which tested only hypotheses about the shapes of the indifference curves in the entire triangle or in its regions (Section 6). Importantly, many of the lotteries considered in the present study were located close to the Marschak-Machina triangle boundaries (a discussion of the optimal lottery grid is presented in Section 7). This facilitated the observation of the boundary effects, most importantly the jumps in the indifference curves at the triangle legs towards the triangle origin (Section 3). This effect is characterized by a sudden change in the slopes of the indifference curves (Section 4). Such jumps, however, do not appear at the hypotenuse. The indifference curves in the triangle interior are parallel straight lines (with a tendency to fan-in along, but not around, the two legs).

To confirm the main observations obtained non-parametrically, an estimation of six decision-making models founded on various psychological phenomena was made (Section 5). This included Expected Utility Theory (von Neumann & Morgenstern, 1944), Cumulative Prospect Theory (CPT, Tversky & Kahneman, 1992), Prospective Reference Theory (PRT, Viscusi, 1989), the TAX model (TAX, Birnbaum, 2008), Salience Theory (ST, Bordalo, Gennaioli & Shleifer, 2012), and the Decision Utility model (DUT, Kontek & Lewandowski, 2018). As shown, the best fit is obtained by the Decision Utility and Prospective Reference models, i.e., those that predict parallel straight indifference curves in the triangle interior and discontinuous jumps at the triangle legs towards the triangle origin. The Cumulative Prospect Theory model, which predicts nonlinear but smooth indifference curves, was ranked only fourth. The model ranking naturally leads to a discussion on which of the psychological phenomena underlying the models might correctly explain the shape of the indifference curves obtained non-parametrically in this study (Section 8). This pattern suggests discontinuity in the lottery valuation when the range of lottery outcomes changes and is best explained by models based on the psychological phenomenon of range dependence (Parducci, 1965; Cohen, 1992; Kontek & Lewandowski, 2018). Models founded on other psychological phenomena, e.g., discontinuity in decision weights (Kahneman & Tversky, 1979), cumulative probability weighting (Tversky & Kahneman, 1992), attention shifting (Birnbaum, 2008), overweighting of salient payoffs (Bordalo, Gennaioli & Shleifer, 2012), and treating stated probabilities as imperfect information (Viscusi, 1989), predict indifference curve shapes that differ from the one obtained non-parametrically in this study.

2 Method

The idea of the non-parametric method of plotting indifference curves comes from contour mapping: a contour line (often simply called a “contour”) joins points of equal elevation (height) above a given level, e.g., mean sea level. The procedure is as follows. First, the lotteries to be examined are chosen; these are the points in the Marschak-Machina triangle. Second, lottery certainty equivalents (CE) are determined; these are the “heights” of the respective points. Finally, these CE values are used to plot a contour map; the contours are the required indifference curve(s) joining points having the same interpolated CE value.

2.1 Lotteries involved

The experiment involved 67 lotteries for each of two payoff schedules: x1 = 0 zł, x2 = 150 zł, x3 = 300 zł (Triangle 1); and x1 = 0 zł, x2 = 450 zł, and x3 = 900 zł (Triangle 2). Złoty (zł) is the Polish currency; $1 ≈ 4 zł, although in terms of purchasing power for basic goods the two currencies are closer to parity. Of the 67 lotteries, 3 were located in the corners of the triangle, 24 on the boundaries, and the remaining 40 in the interior. To verify the boundary effects, the distribution of lotteries was chosen to be more dense near the triangle boundaries and corners (Figure 2).

Figure 2: The Marschak-Machina triangle with the lotteries examined in the experiment.

The lotteries were constructed from the following list of p1 and p3 probabilities: {0, 0.01, 0.05, 0.2, 0.4, 0.6, 0.8, 0.95, 0.99, 1}. All combinations {p1, 1 − p1 − p3, p3} such that 1 − p1 − p3 ≥ 0 resulted in the lotteries: {0, 1, 0}, {0, 0.99, 0.01}, {0, 0.95, 0.05}, etc. The following lotteries were added to verify the boundary effects close to the hypotenuse: {0.04, 0.01, 0.95}, {0.19, 0.01, 0.8}, {0.39, 0.01, 0.6}, {0.6, 0.01, 0.39}, {0.8, 0.01, 0.19}, and {0.95, 0.01, 0.04}, all having p2 = 0.01; and {0.1, 0.05, 0.85}, {0.25, 0.05, 0.7}, {0.4, 0.05, 0.55}, {0.55, 0.05, 0.4}, {0.7, 0.05, 0.25}, and {0.85, 0.05, 0.1}, all having p2 = 0.05.
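For concreteness, the base grid can be regenerated with a few lines of Wolfram Language code. This is a minimal sketch rather than the script used in the study; exact (rational) probabilities are used only to keep the p1 + p3 ≤ 1 test free of floating-point issues, and the 12 near-hypotenuse lotteries listed above would be appended by hand.

```mathematica
(* Regenerate the 55-lottery base grid from the probability list (sketch, not the authors' code) *)
probs = {0, 1/100, 1/20, 1/5, 2/5, 3/5, 4/5, 19/20, 99/100, 1};
pairs = Select[Tuples[probs, 2], Total[#] <= 1 &];        (* admissible {p1, p3} pairs *)
baseGrid = {#[[1]], 1 - Total[#], #[[2]]} & /@ pairs;      (* lotteries written as {p1, p2, p3} *)
Length[baseGrid]  (* 55; adding the 12 extra near-hypotenuse lotteries gives all 67 *)
```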

2.2 CE determination

The term “certainty equivalent” was not used in the instructions (see Appendix 2), as it is unknown to, or difficult to understand for, most people. The lotteries were presented in the form of urns containing black, gray, and white balls (for lotteries located in the corners or on the edges of the triangle, the balls were of only one or two colors). To the right of the urn containing the balls of three colors was another urn that contained only balls marked with crosses.

An example problem is demonstrated in Figure 3. In this sample problem, the value of the black ball was 300 zł, the gray ball 150 zł, and the white ball 0 zł. The subjects had to state the value that a ball with a cross would need to have to make them indifferent between drawing a ball from the left or right urn. The subjects thereby determined the CEs of the lotteries presented on the left side of the panel.

Figure 3: An example problem from the experiment.

The experiment was conducted on the Internet. The problems were presented to the subjects in random order. Six HTML forms with 134 randomly ordered problems were prepared. A given form was randomly assigned to each subject. The black ball offered the maximum payoff in three of the forms, and the minimum payoff in the others. The gray ball always offered an intermediate payoff.

2.3 Subjects

There were 237 subjects, all undergraduate economics students at the Warsaw School of Economics. Their ages ranged from 18 to 25 years with a mean of 20.2 years and 47% were women. The students received information about the experiment from their supervisors, who worked with the author of this paper and agreed to promote the experiment. Participation was voluntary. The subjects were given a 12-zł voucher that they could redeem at the campus cafeteria. As is known from the literature, the incentive method may have an impact on the level of risk aversion demonstrated by subjects (e.g., Reference Holt and LauryHolt & Laury, 2002). The subjects were therefore incentivized by performance as well. They were informed beforehand that some of them would be taking part in a real lottery. Four subjects were randomly selected after the data had been collected. The two who offered the lowest CE for a randomly selected lottery received the amounts they quoted (70 zł and 90 zł). The other two (who quoted CEs of 100 zł and 130 zł) had to play this lottery. They did not, however, win anything.

As the experiment was conducted on the Internet, the subjects could respond at their convenience. They first registered and familiarized themselves with the instructions online. They were then required to solve two sample problems. The time to answer all questions was planned at 40–50 minutes, although they were asked to work at their own pace. The average time was about 41 minutes. The value of the voucher (12 zł) therefore exceeded the minimum hourly wage in Poland, which is about 10 zł.

3 Results

3.1 Aggregating the data

Responses in experiments involving lottery CEs are usually noisy, skewed, and contain a lot of outliers. The level of noise encountered in this experiment is probably magnified by the fact that many of the lotteries under consideration involved three outcomes, rather than two, as in typical lottery experiments. Moreover, people tend to round their CE valuations to the nearest ten, fifty, or even hundred (e.g., 10, 50, 700 rather than 9, 54, 670). These rounded CE values may then appear several times in the responses of different subjects. The term “tied values” is used in the literature on robust statistics to describe repetitive responses (see e.g., Reference WilcoxWilcox, 2011, 2012). Example histograms of CE responses obtained for particular lotteries are presented in Figure 4.

Figure 4: Example histograms of CE responses for particular lotteries presented with expected (ev), median (med), mean (mn) and 20% trimmed mean (trm) values.

For the reasons presented above, it is of great importance to choose a proper measure of CE location. The mean value is known to be very sensitive to outliers (e.g., the upper-left graph in Figure 4, which has a mean value of 60.6, but a median value of only 10.0). The median value is less sensitive to outliers. However, it is sensitive to tied values (e.g., the graphs in the middle row and the lower left, with medians of 200, 700, and 500). Many robust location estimators have been proposed in the literature. The trimmed mean estimator is simple to compute, yet, according to Wilcox (2012), often performs better than more complex estimators when sampling from heavy-tailed distributions. More specifically, it usually has a narrower confidence interval than the median, the mean, and other measures of central tendency. The trimmed mean is the mean of the elements in a list after dropping a fraction f of the smallest and largest elements. Wilcox (2011) suggests f = 20% for data in the social and behavioral sciences. The 20% trimmed mean is a compromise between the mean (f = 0%; no points dropped) and the median (f = 50%; all but one point dropped). Therefore, as seen in Figure 4, the 20% trimmed mean (in green) assumes a value between the mean (in red) and the median (in orange).
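As a quick illustration of how the three location estimators behave on this kind of data, the following Wolfram Language snippet compares them on an invented response vector (placeholder values, not responses from the experiment):

```mathematica
(* Hypothetical CE responses for one lottery: rounded, tied values and one large outlier *)
ces = {0, 5, 10, 10, 10, 20, 50, 100, 150, 300};
{Mean[ces], Median[ces], TrimmedMean[ces, 0.2]} // N
(* TrimmedMean[list, 0.2] drops 20% of the smallest and 20% of the largest values
   before averaging, so the outlier 300 has no influence on it *)
```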

Applying the 20% trimmed mean estimator results in 134 aggregated CE values (presented in Appendix 1), which are used in the further analyses. Using median or mean CE values instead, however, does not change the general observations regarding the shape of the indifference curves.

3.2 Plotting the CE surfaces

The aggregated CE values are first visualized using a 3D plot (see Figure 5). Note that the three triangle corners are fixed at the values of 0, 150, and 300 (left) and 0, 450, and 900 (right), as they represent certainties. Observe the CE surface shape at its edges: the surface rises and drops sharply at the p1 and p3 axes, whereas the slope is maintained at the hypotenuse.

Figure 5: 3D plots of the aggregated CE values: Triangle 1 (left); and Triangle 2 (right).

Note that moving upwards from the p1 axis into the area of positive p3 values (i.e., introducing a new high outcome x3) sharply increases the lottery CE value. Per contra, moving right from the p3 axis into the area of positive p1 values (i.e., introducing a new low outcome x1) sharply decreases the lottery CE value. This suggests that changing the range of lottery outcomes might explain this pattern. No such sharp change in the CE value is observed when moving from the diagonal of the plot (the hypotenuse) into the triangle interior (i.e., introducing a new middle outcome x2). In this case, however, the range of lottery outcomes remains unchanged.

3.3 Plotting the indifference curves

The indifference curves are plotted using the Wolfram Mathematica® ListContourPlot function, which generates a contour plot from height values defined at specified points. Similar functionalities are offered by e.g., “filled.contour” in R and “contour” in Matlab. According to the Mathematica® tutorial, the program plots the required contour lines by linearly interpolating heights along the lines connecting adjacent points on the plane. For instance, to plot a contour of 200, the program first searches for points on the plane having an interpolated height of 200. Connecting these points then results in the required indifference curve of 200. Note that the indifference curves are plotted using a single command: no dedicated software is needed. The indifference curves obtained using aggregated CE values for Triangle 1 and Triangle 2 are shown in Figure 6.
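A minimal Wolfram Language sketch of this plotting step is given below. The CE triples are invented placeholders; in the actual analysis the input is the list of 67 aggregated CE values of a given triangle.

```mathematica
(* Each data point is {p1, p3, CE}; ListContourPlot interpolates between the points
   and draws the requested contour (indifference-curve) levels. Placeholder data only. *)
ceData = {{0, 0, 150}, {1, 0, 0}, {0, 1, 300}, {0.2, 0.2, 160},
          {0.5, 0.3, 170}, {0.3, 0.5, 210}, {0.05, 0.8, 255}, {0.8, 0.05, 40}};
ListContourPlot[ceData, Contours -> Range[0, 300, 20],
  FrameLabel -> {"p1", "p3"}, PlotLegends -> Automatic]
```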

Figure 6: Indifference curves in two Marschak-Machina triangles: Triangle 1 (left) with outcomes x 1 = 0 zł, x 2 = 150 zł, and x 3 = 300 zł; and Triangle 2 (right) with outcomes x 1 = 0 zł, x 2 = 450 zł, and x 3 = 900 zł. The Mathematica® program draws colored contour plots, so that areas of low CE contour values are marked using “cold” colors, and areas of high contour values are marked using “warm” colors.

Note that the indifference curves are expressed in terms of monetary CE values, rather than hypothetical “utils” (see plot legends). The contour values of 0, 20, 40, …, 300 are presented in the left diagram, and those of 0, 60, 120, …, 900 in the right (which results in 14 curves on each plot). However, plots for any arbitrarily chosen number of indifference curves and indifference curve values can be generated without re-running the experiment.

Several observations need to be made. First, the indifference curves seem to be straight parallel lines in the middle of the triangle. This is the area where behavior conforms to Expected Utility Theory.

Second, the further north of the origin towards the northwest corner of the triangle, the flatter the slopes of the indifference curves; and the further east of the origin towards the southeast corner of the triangle, the steeper the slopes of the indifference curves. This results in a pattern of “fanning-in” around the two legs of the triangle (i.e., a tendency to intersect at a point to the north-east of the hypotenuse) and “fanning-out” in the area around the southwest corner of the triangle (i.e., a tendency to intersect at a point to the south-west of the origin).

Finally, and most importantly for the present study, the indifference curves appear to have jumps in the direction of the origin near the legs of the triangle. Significantly, these jumps are not observed near the hypotenuse.

3.4 Limitation and robustness of the method

The method of non-parametrically plotting indifference curves is sensitive to noise and often results in plots of poor quality when applied to individual data. At the same time, it is very robust when the individual data are aggregated using the 20% trimmed mean. The limitation and the robustness of the method are illustrated by the following simulation. Let us assume that the pattern of the indifference curves presented in Figure 6 reflects the real preferences of the “average” subject, but that zero-mean Gaussian noise with standard deviation σ is added to every aggregated certainty equivalent value.

The simulated indifference curves are presented in Figure 7. As can be seen, even small noise (σ = 0.05, graph on the left) seriously distorts the curves, and larger noise (σ = 0.20, graph on the right) results in curves that would suggest a lack of any pattern in the triangle interior and at its boundaries. In fact, the plot on the right resembles the plots of many of the subjects who took part in the experiment.

Figure 7: Simulated indifference curves after adding Gaussian noise to the aggregated certainty equivalents. On the left: σ = 0.05; on the right: σ = 0.20.

However, adding Gaussian noise to the individual certainty equivalents (which are already very noisy), and then aggregating them using 20% trimmed means, results in an almost perfect recovery of the observed pattern (see Figure 8). In fact, there is hardly any discernible difference between the curves presented in Figure 6 and Figure 8: the median absolute difference between the certainty equivalent values used to plot them is 0.4% for σ = 0.05 (graphs on the left), and 2% for σ = 0.20 (graphs on the right).
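The simulation can be sketched in a few lines of Wolfram Language. The exact noise specification used in the paper is not reproduced here, so multiplicative Gaussian noise and a pool of 237 simulated subjects are assumptions made purely for illustration:

```mathematica
(* Multiplicative Gaussian noise (an assumption of this sketch) applied to one CE value,
   then 20% trimmed-mean aggregation across simulated subjects *)
addNoise[ce_, sigma_] := ce (1 + RandomVariate[NormalDistribution[0, sigma]]);
aggregateNoisy[ce_, sigma_, n_: 237] :=
  TrimmedMean[Table[addNoise[ce, sigma], {n}], 0.2];
aggregateNoisy[200, 0.20]  (* stays close to 200 even for sigma = 0.20 *)
```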

Figure 8: Simulated indifference curves after adding Gaussian noise to the individual certainty equivalents and then aggregating them using the 20% trimmed mean. On the left: σ = 0.05; on the right: σ = 0.20.

The following conclusions can be drawn. First, the method might not be suitable for plotting individual indifference curves, unless some other means of reducing the noise are implemented (see also the discussion in Section 8). Second, the lack of “nice” plots on the individual level does not necessarily result from a lack of any interesting effects at the boundaries and in the interior of the triangle; the method simply fails to recover them when the noise level is high. Third, the method is very robust when individual data are aggregated using the 20% trimmed means. Fourth, the pattern obtained on the group level very likely demonstrates the real preferences of the “average” decision maker and is not the result of misidentification.

4 Indifference curve slopes

The main observations concerning the shape of the indifference curves are confirmed by estimating their slopes in the triangle sub-areas. If decision-makers behave as predicted by Expected Utility Theory, then their preferences can be represented in the Marschak-Machina triangle by a set of parallel straight-line indifference curves. For equally spaced outcomes (x2 − x1 = x3 − x2), as in the present experiment, the slope of the indifference curves is greater than 1 for risk-averse people, less than 1 for risk-seeking people, and equal to 1 for risk-neutral people. Lottery CEs may serve to determine indifference curve slopes by fitting a linear model of CE as a function of p1 and p3, in which the required slope is given by the parameter b.
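One concrete way to implement such a linear model is sketched below in Wolfram Language. This is an assumption about the parametrization rather than the paper's exact specification: CE is regressed on p1 and p3, and b is recovered as minus the ratio of the two fitted coefficients, i.e., the slope dp3/dp1 along a line of constant CE. The data points are invented placeholders.

```mathematica
(* Fit CE = c0 + c1 p1 + c3 p3 and read off the indifference-curve slope b = -c1/c3.
   Each row of ceData is {p1, p3, CE}; placeholder values only. *)
ceData = {{0.2, 0.2, 160}, {0.4, 0.2, 130}, {0.2, 0.4, 200},
          {0.4, 0.4, 170}, {0.6, 0.2, 100}};
lm = LinearModelFit[ceData, {p1, p3}, {p1, p3}];
{c0, c1, c3} = lm["BestFitParameters"];
b = -c1/c3  (* 0.75 for these placeholder values *)
```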

4.1 The entire triangle

The least squares procedure applied to the aggregated CE values in the entire triangle leads to b = 1.02 (0.03) for Triangle 1, and b = 1.06 (0.03) for Triangle 2 (standard errors are given in parentheses). This suggests that, on average, the subjects demonstrated slight risk aversion.

4.2 Triangle sub-areas

The same least squares procedure may be applied in triangle sub-areas to determine local slopes of the indifference curves. The triangle was split into smaller regions as presented in Figure 9. The numbers on the graph show the number of lotteries in each area. It is assumed that lotteries located on the boundaries between two regions (marked as dotted lines) belong to both regions.

Figure 9: Sub-areas of the Marschak-Machina triangle used to determine local slopes of indifference curves. The numbers on the graph show the number of lotteries in each area.

A linear regression procedure was performed in each of these regions to obtain a number of linear models approximating the indifference curves locally (the number of degrees of freedom in each model is 3 less than the number of lotteries). The b value estimations for aggregated CE values are presented graphically in Figure 10. The statistical significance of estimated slope values is marked with ** for p-value ≤ 0.01, and * for 0.01 < p-value < 0.05. Definitions of areas A, B, C, D, and E, which are most important for deriving conclusions, are given in Figure 10.

Figure 10: Indifference curves estimated non-parametrically from the experiment presented together with local estimations of their slopes in Triangle 1 (left) and Triangle 2 (right). The statistical significance of the estimated slope values is denoted as: ** for p-value ≤ 0.01, and * for 0.01 < p-value ≤ 0.05. Area A: 0 ≤ p1 ≤ 0.01 and 0.2 ≤ p3 ≤ 0.8; Area B: 0.01 ≤ p1 ≤ 0.2 and 0.2 ≤ p3 ≤ 0.8; Area C: 0.2 ≤ p1 ≤ 0.8 and 0.2 ≤ p3 ≤ 0.8; Area D: 0.2 ≤ p1 ≤ 0.8 and 0.01 ≤ p3 ≤ 0.2; Area E: 0.2 ≤ p1 ≤ 0.8 and 0 ≤ p3 ≤ 0.01.

4.3 Areas of straight parallel lines and fanning-in patterns

It can be seen that the indifference curves in Area C of Triangle 1 are parallel straight lines with a slope of 1.14 (0.04). This is the area of conformity to Expected Utility.

At the same time, the slope of the indifference curves is 0.87 (0.08) in Area B, and 1.45 (0.22) in Area D. The same pattern (albeit with some distortions) can be seen in Triangle 2, where the slope values are 1.00 (0.08), 0.78 (0.12), and 1.60 (0.30) respectively. This indicates a fanning-in pattern in areas B and D: indifference curves tend to converge to a point somewhere to the north-east of the Marschak-Machina triangle.

4.4 Jumps at the legs

These results reveal a sudden change in the slopes of the indifference curves close to the triangle legs. In Triangle 1, the slope value changes from 1.45 (0.22) in Area D to 0.07 (0.01) in Area E, i.e., by a factor of 20. In Triangle 2, there is a similar change from 1.60 (0.30) to 0.10 (0.02), i.e., by a factor of 16. The same pattern is observed at the vertical leg. In Triangle 1, the slope value changes from 6.79 (1.90) in Area A to 0.87 (0.08) in Area B, i.e., by a factor of 8, and in Triangle 2 from 8.79 (2.99) to 0.78 (0.12), i.e., by a factor of 11. Less abrupt changes can also be observed in the areas around the three triangle corners, but the results in this case are not statistically significant.

Unfortunately, the regression procedure on the individual level leads to statistically insignificant results: individual b values estimated in the most interesting triangle sub-areas A, B, D, and E, involving 8 or 13 points, were statistically insignificant (p-value > 0.05) for the vast majority of subjects (93% for Triangle 1, and 95% for Triangle 2). The slope values estimated for the remaining subjects assumed a value of 1 in about half the cases, meaning that these subjects gave the lottery expected value as their responses.

4.5 Continuous or discontinuous indifference curves?

These data raise the question as to whether these jumps at the triangle legs are continuous or discontinuous. It could be argued that 0.01, the minimum non-zero probability used in the experiment, is still far greater than 0 (at least on a logarithmic scale) and that the indifference curves might become smooth for probabilities of less than 0.01. The hypothesis regarding continuity or discontinuity of the indifference curves at the triangle legs is, however, not testable. Even if the jumps observed in the experiment had occurred at a probability of say 0.001, it could still be argued that this was far greater than 0.

The path of the indifference curves near the two legs of the triangle suggests, however, that the jumps in the curves are discontinuous. It is highly unlikely that the indifference curves, which are parallel as they depart from the hypotenuse and remain so in the middle of the triangle, would first turn away from the origin, and then (somewhere between a probability of 0 and 0.01) smoothly turn back towards it. The indifference curve discontinuity hypothesis would not require such dramatic changes in the slope values: the indifference curves would remain steep near the horizontal leg for any non-zero probability p3, and remain flat near the vertical leg for any non-zero probability p1.

5 Model estimation

The observations presented so far were confirmed using another analysis: the data collected in the experiment were used to estimate and compare six decision-making models under risk. Four of them predict jumps at the triangle boundaries. The idea of this analysis is to check which models, predicting smooth or discontinuous indifference curves, better describe the data collected. The models are estimated using CE values obtained from the experiment, and the sum of squared errors (SSE) values are compared to choose the most accurate model. Certainty equivalents were used in the past to estimate the CPT parameters (e.g., Tversky & Kahneman, 1992; Gonzalez & Wu, 1999) and to compare decision-making models (e.g., Blavatskyy, 2007). This approach differs from other studies (e.g., Hey & Orme, 1994), in which models were compared on the basis of preference questions and correct predictions of choices between two lotteries.

5.1 The models

The typical shapes of the indifference curves predicted by the models under consideration are presented in Figure 1. How these models evaluate CE is detailed below. As before, it is assumed that x1 < x2 < x3. In what follows (especially in the case of binary lotteries), xL = min(xi) and xH = max(xi) will occasionally be used to denote the lottery minimum and maximum outcomes, having respective probabilities of pL and pH (when p1 = 0, the minimum outcome is x2 rather than x1; when p3 = 0, the maximum outcome is x2 rather than x3). A power utility function v(x) = x^α is assumed for the first four models, and the predicted CE value is the utility inverse of the functional.

Expected Utility Theory (EUT, von Neumann & Morgenstern, 1944) is the standard model of decision-making under risk. It evaluates prospects as follows:

CE_EUT = v^(−1)[ p1 v(x1) + p2 v(x2) + p3 v(x3) ]

As the model is linear in probabilities, the indifference curves in the Marschak-Machina triangle are parallel straight lines with no jumps at the triangle boundaries.

Prospective Reference Theory (PRT; Viscusi, 1989) is a variant of the Expected Utility model in which the individual treats stated probabilities as imperfect information and uses them to update his/her prior probabilities to posterior ones in the standard Bayesian fashion. For convenience, the theory assumes a prior probability of 1/n for each outcome, where n is the number of outcomes with pi > 0. It follows that:

CE_PRT = v^(−1)[ β Σi pi v(xi) + (1 − β) Σi (1/n) v(xi) ]   (sums taken over outcomes with pi > 0)

where the parameter β weights: (a) the expected utility functional using stated probabilities (the term on the left), and (b) the expected utility functional using equal probabilities of 1/n (the term on the right). Changing the number of outcomes n results in a discontinuous change of the predicted CE_PRT value. This leads to discontinuous jumps of the indifference curves at all triangle boundaries. As the model is linear in probabilities, the indifference curves inside the triangle are parallel straight lines, as they are in the EU model.

Cumulative Prospect Theory (CPT; Tversky & Kahneman, 1992) evaluates prospects using a probability weighting function w(p) applied to cumulative probabilities; the probability weights are then de-cumulated. Three-outcome lotteries are evaluated using the functional:

CE_CPT = v^(−1)[ w(p3) v(x3) + (w(p2 + p3) − w(p3)) v(x2) + (1 − w(p2 + p3)) v(x1) ]

As the CPT model is nonlinear in probabilities, the indifference curves in the Marschak-Machina triangle are also nonlinear: they are concave in the upper-left part of the triangle and convex in the lower-right part for a typical inverse S-shaped probability weighting function. The probability weighting function is described in this study using the two-parameter Prelec (1998) function:

w(p) = exp[ −γ (−ln p)^δ ]

where parameter δ is responsible for the curvature (the lower the parameter value, the more curved the function), and parameter γ is responsible for the elevation (the lower the parameter value, the greater the elevation).

There is no discontinuous change of the predicted CE_CPT value when one pi becomes 0: the functional then simplifies to CE_CPT = v^(−1)[ w(pH) v(xH) + (1 − w(pH)) v(xL) ]. Therefore, the indifference curves predicted by the CPT model are smooth everywhere.

The TAX model (TAX; Birnbaum, 2008) assumes that prospect branches are assigned decision weights that depend on the “attention” that the decision maker allocates to a particular branch. A risk-averse individual shifts attention from high-outcome branches to low-outcome ones. The lottery utility is then a weighted average of the outcome utilities with weights that depend on probabilities and outcome rankings. Three-outcome lotteries are evaluated as a weighted average of v(x1), v(x2), and v(x3) with weights A, B, and C, where t(p) is the weight of a branch’s probability (not of the decumulative probability, as in CPT), and parameter δ defines the attention (weight) transfers from branch to branch (the higher the parameter value, the greater the attention transferred to lower branches); two-outcome lotteries are evaluated analogously. The weights A, B, and C change discontinuously when the number of outcomes with a positive probability varies. Therefore, the lottery valuation changes discontinuously in this case and the indifference curves might be discontinuous at all triangle boundaries. The probability weighting function t is described using the power function t(p) = p^γ, and, as the model is nonlinear in probabilities, the indifference curves are nonlinear: concave in the upper-left part of the triangle and convex in the lower-right part, as they are in the CPT model.

The Decision Utility model (DUT; Kontek & Lewandowski, 2018) applies a normalized utility function D (decision utility) to each lottery range under consideration. In this way the lottery valuation depends on its range [xL, xH]. Lotteries are compared with respect to their CE values:

CE_DUT = xL + (xH − xL) D^(−1)[ Σi pi D(ri) ]

where ri = (xi − xL)/(xH − xL) denotes the relative position of xi within the lottery range [xL, xH]. For a binary lottery, the functional simplifies to CE_DUT = xL + (xH − xL) D^(−1)(pH), which is the same as for CPT, assuming v(x) = x and D^(−1)(p) = w(p). Contrary to CPT, however, when one pi becomes 0 and the lottery range [xL, xH] changes, the predicted CE_DUT value changes discontinuously. Therefore, the indifference curves are discontinuous at the triangle legs, but not at the hypotenuse. As the model is linear in probabilities, the indifference curves inside the triangle are parallel straight lines, as they are in the EU model. The D function is described in this study using the CDF of the Two-Sided Power Distribution (Kotz & van Dorp, 2004):

D(r) = r0 (r/r0)^δ for 0 ≤ r ≤ r0, and D(r) = 1 − (1 − r0) ((1 − r)/(1 − r0))^δ for r0 < r ≤ 1

where δ is the parameter responsible for the curvature (the greater the parameter value, the greater the curvature), and r0 defines the value of r at which the curve crosses the diagonal.

Salience Theory (ST; Bordalo, Gennaioli & Shleifer, 2012) provides a context-dependent representation of lotteries in which true probabilities are replaced by decision weights distorted in favor of salient payoffs. The functional for the lottery CE is not given in the original paper, and its derivation indicates flaws in the model (for details, see Kontek, 2016). For instance, the CE value is undefined for some probability intervals. More seriously, any assumption regarding CE in those intervals violates monotonicity. Only the binary-lottery case is used here, with p = pH, where parameter θ affects the salience function (the greater the parameter value, the lower the salience of payoffs in a given state), parameter δ measures the extent to which salience distorts decision weights (the lower the parameter value, the more strongly the less salient states are discounted), and a constant CE value is assumed over the middle probability interval to make the model operational. The model predicts discontinuous jumps at all boundaries, as introducing or removing an outcome results in a discontinuous change in the predicted CE value. The indifference curves are non-parallel straight lines (there are areas of fanning-in, fanning-out, and constant CE). Despite its peculiar features, the ST model is used in this study because it has recently gained a lot of attention among researchers.

5.2 Estimation results on the group level

The fit of the 134 aggregated CE values was performed using the Mathematica “NonlinearModelFit” function, which constructs a nonlinear least-squares model and assumes that errors are independent and normally distributed. Possible settings for the search method include “ConjugateGradient”, “Gradient”, “LevenbergMarquardt”, “Newton”, “NMinimize”, and “QuasiNewton”, with the default being “Automatic” (in which case the method is chosen automatically by the function; this option was used in the estimations). The “NonlinearModelFit” function enables the parameter space to be constrained, but this was not required for the aggregate data (except for the ST model). The estimation results are presented in Table 1. As can be seen, the two-parameter DUT model offers the best fit, and the PRT model, which also has two parameters, the next best. The three-parameter TAX model is third, and the CPT model, which also has three parameters, only comes fourth. The ST model, with much poorer results, is fifth (the p-value of the θ parameter is very high, indicating problems with the estimation of the salience function σ(xi, xj), which is essential for the ST model). The one-parameter EUT model is only slightly better than Expected Value. This model ranking is confirmed by the AIC and BIC measures.
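As an illustration of this estimation step, the sketch below fits the simplest (EUT) specification for Triangle 1 with NonlinearModelFit. The data points are invented placeholders and the set-up is not the authors' exact code; the other models would be fitted in the same way with their respective CE functionals.

```mathematica
(* CE predicted by EUT with power utility v(x) = x^a, fitted by nonlinear least squares.
   Rows of ceData are {p1, p3, observed CE}; placeholder values only. *)
ceData = {{0, 0, 150}, {0, 1, 300}, {0.2, 0.2, 160},
          {0.4, 0.4, 175}, {0.6, 0.2, 95}, {0.8, 0.1, 70}};
ceEUT[p1_, p3_, a_] := ((1 - p1 - p3) 150^a + p3 300^a)^(1/a);  (* x1 = 0 contributes nothing *)
fit = NonlinearModelFit[ceData, ceEUT[p1, p3, a], {{a, 1}}, {p1, p3}];
{fit["BestFitParameters"], fit["AIC"], fit["BIC"]}
```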

Table 1: Estimation results of several decision-making models under risk.

As can be seen, the DUT, PRT, and TAX models, which predict jumps at the triangle boundaries, are more accurate than CPT, which predicts smooth indifference curves. This happens even though the DUT and PRT models use one parameter fewer than CPT. The poor performance of the ST model is not surprising given its peculiar features as described above.

5.3 Predicted vs. observed indifference curves

To make the estimation results more readily comprehensible, the indifference curves predicted by the best-fit EUT, ST, CPT, TAX, PRT, and DUT models are presented in Figure 11, together with the indifference curves obtained non-parametrically. The plots illustrate the manner and extent to which the predicted curves match those obtained non-parametrically. As can be seen, the best-fit PRT and DUT models predict discontinuous jumps on both legs towards the triangle origin, while the best-fit TAX model predicts discontinuous jumps on the vertical leg only (jumps at other boundaries are almost invisible). The CPT model does not predict any jumps. This explains the ranking of the models obtained in the comparison.

Figure 11: Indifference curves obtained non-parametrically (dashed) and predicted by the best-fit models.

These ranking results suggest that the boundary effects at the triangle legs capture most of the variation in the data. The nonlinearity of the indifference curves in the triangle interior (if any) is only a second-order effect. Both the DUT and PRT models perform well because they conform to Expected Utility inside the triangle, and, at the same time, capture the specific effects at the triangle legs. Note the difference between the DUT and PRT models. The size of predicted jumps along both legs is always the same for PRT but differs for DUT. This explains why the DUT model performs better than PRT.

It should be noted that the best-fit PRT and TAX models predict virtually no jumps at the hypotenuse, although the models generally allow such jumps (strictly speaking, they do predict jumps at the hypotenuse, but these are too small to be seen on the plot). This additionally confirms that the jumps in the indifference curves are present only at the triangle legs.

5.4 Estimation results on the individual level

The CPT, TAX, PRT, and DUT models were next estimated using individual data. Individual data are much noisier than aggregate data and, in some cases, may lead to problems with obtaining results. Therefore, the fits were performed with a constrained parameter space (although the constraints were not too restrictive; for instance, it was assumed for the CPT model that 0 < α < 4, 0 < γ < 20, and 0 < δ < 20). Four starting points were chosen for each individual and each model, and the estimation with the lowest SSE value was chosen as the best fit. Calculations were performed with a working precision of 16 digits and were conducted in parallel using 8 Mathematica kernels.

The most accurate model for each subject (i.e., the one having the lowest SSE value) was then determined. Table 2 shows the number of subjects for whom the respective model was the most accurate.

Table 2: The number of subjects for whom the respective model has the lowest SSE value.

As can be seen, the CPT model has the lowest SSE value for 46.4% of subjects. Other models, which allow jumps in the indifference curves at the triangle boundaries, are the best in the case of the remaining 53.6%. As the CPT and TAX models have three parameters, whereas PRT and DUT have only two, estimations were also made for two-parameter versions of the CPT and TAX models to make the comparison fair. In the case of CPT, the version with γ = 1 means that a nonlinear value function was used together with a one-parameter probability weighting function, and the version with α = 1 means that a linear value function was used together with a two-parameter weighting function. In both cases, the two-parameter CPT model performs worse than before, and the DUT model becomes the prevailing one for 33.8% of subjects on average (see Table 3).

Table 3: The number of subjects for whom the respective two-parameter model has the lowest SSE value.

Importantly, the CPT model, which predicts smooth indifference curves, is, on average, the best for only 26.3% of subjects (20–33%, depending on which two parameters are used). Other models (including DUT) that allow jumps in the indifference curves at the triangle boundaries are the best for the remaining 73.7%. The PRT model is generally slightly less accurate than CPT on the individual level, and the TAX model is the worst in this comparison.

Finally, a direct comparison was made between the two-parameter versions of the CPT and DUT models. In this case, DUT offers more accurate fits for 51.5%–59.5% of subjects and performs slightly better than CPT. The results are presented in Table 4.

Table 4: The number of subjects for whom the two-parameter CPT or DUT model has a lower SSE value, and the mean and median absolute differences between the models expressed in %.

It may be argued that the model comparisons presented in Tables 2, 3, and 4 merely identify a discrete “winning model”: an SSE difference of 0.01 could lead one to claim that one model is superior, with no way to distinguish this situation from one where a model excels by a difference of 100. Therefore, Table 4 also presents the mean and median absolute differences between the models in terms of SSE (calculated as an average over all individuals). As can be seen, the best model fits differ on average by 5.3%–5.8% (ranging from 0.1% to 363%) in terms of SSE, with either CPT more accurate than DUT or DUT more accurate than CPT. This shows that one model or the other offers a clear advantage for a given individual.

The analysis of individual data presented in this section confirms the existence of the boundary effects captured by models that predict jumps at the triangle legs. The comparison also shows that the CPT model performs comparatively better on the individual than on the aggregate level. The flexibility added to this model by its third parameter (important when individual data diverge from the average pattern) may explain this result.

6 Related literature

Many studies have tested the various hypotheses about the shape of the indifference map (e.g., Hey & Strazzera, 1989; Camerer, 1989; Loomes, 1991; Harless, 1992; Harless & Camerer, 1994; Abdellaoui & Munier, 1998; Blavatskyy, 2006; Bardsley et al., 2010). There are generally two ways to identify indifference curves in experiments: asking indifference questions and asking preference questions. The former involves asking subjects to indicate those lotteries to which they are indifferent vis-à-vis a given one. This procedure allows an indifference curve to be plotted by connecting the points representing indifferent lotteries inside the triangle (Hey & Strazzera, 1989). The method proposed in this paper is a version of this general approach: subjects indicate certainty equivalents to which they are indifferent vis-à-vis a given lottery, and these certainty equivalents serve to plot the indifference curves. The other approach involves presenting subjects with a set of pairwise choices and asking them to indicate their preferences (e.g., Harless & Camerer, 1994). This approach allows only hypotheses regarding the shapes of indifference curves in the triangle or regions of it to be tested. Combinations of both approaches have been used. Loomes (1991) and Cubitt et al. (2015) ask indifference questions to test hypotheses about the shapes of indifference curves and underlying axioms. Yet another approach is used by Hey and Orme (1994), who ask preference questions to estimate preference functionals (i.e., decision-making models). The indifference curves predicted by the best model indicate the underlying pattern.

The observation that the EU model works well for lotteries inside the Marschak-Machina triangle has often been reported in the literature. Hey and Orme (1994) state that the EU model appears to fit no worse than any of the other models for 39% of subjects. Similarly, Carbone and Hey (1994) find that approximately half their subjects appear to conform to the EU model. Hey and Strazzera (1989) additionally find that, for the majority of their subjects, the indifference curves were in accordance with EU theory. Abdellaoui and Munier (1998) find that the shape of the indifference curve is compatible with the EU hypothesis along the middle part of the hypotenuse and in the “immediate” interior of that middle part. This result is very close to the one obtained in the present study.

The existence of fanning-in has been reported in the literature by, e.g., Hey and Di Cagno (1990), who observed that the fanning-in point was to the northeast of one of the three triangles for 14 subjects. Moreover, the indifference curves fan in for 22 of the 56 subject/triangle pairs. Blavatskyy (2006) presents a more detailed study concerning “fanning-out” and “fanning-in”, and suggests that an individual’s indifference curves tend to “fan-in” when probability mass is associated with the best and the worst outcomes and tend to “fan-out” when probability mass is associated with intermediate outcomes. The results presented in this paper are clearly close to Blavatskyy’s summary.

There is evidence of boundary effects in the literature, although how and to what extent these effects affect the shape of the indifference curves is not clearly stated. Conlisk (1989) moved the Allais lotteries to the interior of the triangle and concluded that EU theory violations are less frequent and cease to be systematic when boundary effects are removed. Camerer (1989), Harless (1992), and Sopher and Gigliotti (1993) obtained similar results. Harless and Camerer (1994), after analyzing a large number of experimental data sets, conclude that the EU model should be used when all the lotteries have the same number of probable outcomes (i.e., the lotteries are located in the interior of the triangle), but that a different model has to be used when the lotteries have different numbers of probable outcomes (i.e., some of the lotteries are located on the boundaries or in the corners of the triangle). Boundary effects were studied in detail by Abdellaoui and Munier (1998), who concluded that indifference curves are distorted near the triangle boundaries. They draw a distinction, however, between behavior near different edges of the triangle. One test, restricted to segments linking the hypotenuse to the triangle interior, leads to an acceptance of the hypothesis of parallelism. By contrast, the same hypothesis concerning the segments linking the left and lower edges to the interior is strongly rejected. The present experiment not only captures this difference but additionally shows that the distortion near the triangle legs is due to jumps in the indifference curves towards the triangle origin.

7 Optimal lottery grid

It may be argued that the variability of the indifference curves at the triangle legs observed in the present experiment (coupled with a lack of such variability in the triangle interior) is simply the result of repeated measurements in this region of the triangle. The jumps, so this reasoning goes, are observed at the legs because that is where the points were repeatedly sampled; had several points been measured close to any point in the triangle interior, similar jumps would have been observed. This objection can be easily countered by stating that, if the variability at the legs is the result of repeated sampling, then the variability observed at the hypotenuse, where the measurement points were likewise densely located, should be similar. This, however, is not the case.

More broadly, this objection raises an important question concerning the optimal lottery grid in Marschak-Machina triangle experiments. It is well known from experiments involving binary lotteries that the greatest variability in lottery certainty equivalents occurs for probabilities close to 0 and 1: the change in the certainty equivalent value is large when the probability changes from 0 to 0.01, or from 1 to 0.99, but not that large when the probability changes from 0.50 to 0.51. This phenomenon is best expressed by an inverse S-shaped probability weighting function (Tversky & Kahneman, 1992; Gonzalez & Wu, 1999), which is nonlinear at the probability endpoints and almost linear in the middle. Tversky and Kahneman (1992) offered a psychological hypothesis to explain this shape, which they called diminishing sensitivity. According to this hypothesis, people become less sensitive to changes in probability as they move away from 0 or 1, just as they are less sensitive to changes in outcome values as they move away from the reference point.

It follows that the optimal set of lotteries to test diminishing sensitivity experimentally should consist of more lotteries having probabilities close to 0 and 1. Therefore, Tversky and Kahneman used lotteries having probabilities of 0.01, 0.05, 0.10, 0.25, 0.50, 0.75, 0.90, 0.95, and 0.99, rather than an equal spread. The diminishing sensitivity observed experimentally is thus not the result of applying more lotteries close to the probability endpoints, but more lotteries at the probability endpoints are certainly required to test the anticipated diminishing sensitivity (for a contrary opinion, see Stewart, Reimers & Harris, 2015). For the very same reason, and in order to verify the boundary effects, the distribution of lotteries in the Marschak-Machina triangle in the present study was chosen to be more dense near the triangle boundaries and corners. Using probabilities with an equal spread could make any boundary effects impossible to detect. Note that some previous investigations were conducted using an equal spread of probabilities (e.g., Hey & Orme, 1994, with a spread of 1/8). It is therefore not surprising that the EU model appeared to fit no worse than any of the other models for a substantial percentage of subjects, as the boundary effects had probably not been taken into account in the experimental set-up. This also applies to the present data: removing lotteries close to the triangle boundaries (i.e., those having probabilities of either 0.01 or 0.05) from the data set results in plots of the indifference curves showing no jumps at the legs.

Verifying whether there is any local variability in the triangle interior would require more lotteries in this region. This would resemble detailed testing of the shape of the probability weighting function around its middle part, which, according to many studies, is more or less linear. Thus, although such an examination is possible, it would probably result in conclusions already stated in this paper and in a number of earlier studies, i.e., that the indifference curves in the middle part of the Marschak-Machina triangle are linear, with any local variability possibly caused by noise alone.

The question of how to select an optimal lottery grid inside the Marschak-Machina triangle involves not only determining the shape of the indifference curves, but also estimating the model parameters and comparing decision-making models. The model estimation results presented in this paper come with the caveat that the fits and the model ranking apply only to the specific lottery grid examined in the experiment. The grid used in the experiment involves many lotteries in the vicinity of the triangle edges (to explore the boundary effects), so the model ranking could be quite different for a different grid. In fact, removing lotteries close to the triangle boundaries from the data set results in CPT having the highest ranking, followed by DUT, TAX, and PRT. On the other hand, using a grid with an equal probability spread would possibly lead to a ranking that does not take any boundary effects into account, even ones as well accepted in the literature as the overweighting of small probabilities and the underweighting of large ones. This raises the general question of how to select an optimal lottery grid when discriminating between decision-making models (Cavagnaro et al., 2013). Designing an optimal lottery grid based on axioms in order to plot the indifference curves and to discriminate between models of decision-making under risk is certainly an interesting direction for future research. The number of lotteries used in the experiment is one of the parameters that needs to be optimized: the natural inclination to increase this number would lead to more detailed plots, but would also leave the subjects less time to focus on each problem and would probably increase the noise. Thus limiting, rather than increasing, the number of lotteries should also be considered.

8 Discussion

This paper presents experimental results concerning the shape of indifference curves in the Marschak-Machina triangle. The plots obtained non-parametrically indicate that: (i) indifference curves are straight parallel lines in the middle part of the triangle; this is the area of conformity to the Expected Utility model; (ii) the indifference curves “fan-in” along the triangle legs (but not in their vicinity); and (iii) the indifference curves jump towards the origin along the two legs of the triangle; these boundary effects are, however, not present at the hypotenuse. This observation is confirmed by estimating the slopes of the indifference curves in the triangle sub-areas and by estimating six models of decision-making under risk.

The model ranking naturally leads to a discussion of which of the psychological phenomena underlying the models best explains the shape of the indifference curves obtained in this study. The path of the indifference curves near the two legs of the triangle suggests that the jumps are genuine discontinuities rather than merely steep but continuous changes. Abrupt changes in the slopes of the indifference curves were statistically confirmed in the vicinity of both triangle legs. The discontinuity of indifference curves is not particularly welcomed by mathematicians and could even be regarded as a weakness in the model. One possible explanation of this phenomenon is discontinuity in the weighting of probabilities. This feature has a solid psychological foundation and is well documented in the psychological literature. The tendency to overweight certain outcomes relative to merely probable ones was labeled the “certainty effect” by Kahneman and Tversky (1979). Certain outcomes, however, are located in the corners of the Marschak-Machina triangle, and only the jumps starting from there can be explained by this effect. Discontinuity in decision weights for probabilities close to 0 and 1 is a slightly more general concept, which can also be applied to lotteries located on the triangle boundaries. It holds, however, for all probabilities, including the probability of the middle outcome. In the original Prospect Theory, the indifference curves are therefore discontinuous at all three triangle boundaries.

Cumulative Prospect Theory, per contra, predicts smooth indifference curves. For this reason, it does not match the indifference curves obtained in this study particularly well. Moreover, the lottery certainty equivalents (especially their aggregated values) are not fitted as accurately by CPT as by other models. This raises the question of whether the cumulative probability weighting postulated by CPT is the right phenomenon to explain the anomalies observed in decision-making under risk. The original idea of probability weighting applied to individual probabilities (Kahneman & Tversky, 1979) was simple and convincing: small probabilities are overweighted, whereas medium and large ones are underweighted. Unfortunately, this kind of probability weighting violates first-order stochastic dominance, which limits its applicability to prospects involving no more than two non-zero outcomes. Weighting of cumulative probabilities was introduced to fix this issue (Quiggin, 1982; Yaari, 1987; CPT, Tversky & Kahneman, 1992). This solution is mathematically elegant and allows prospects involving any number of outcomes to be considered. Psychologically, however, it is less plausible, as it would mean that people assess probabilities cumulatively rather than individually. Birnbaum (2004) has presented evidence to refute this. The results presented in this study provide further evidence that the cumulative probability weighting postulated by CPT might not be psychologically justified.
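A minimal numerical sketch may help here. Assuming, purely for illustration, a weighting function w(p) = √p and linear utility, weighting each stated probability separately values a dominated lottery above the lottery that dominates it, whereas rank-dependent (cumulative) weighting does not:

```python
# Illustration: why weighting individual probabilities can violate first-order
# stochastic dominance, while cumulative (rank-dependent) weighting does not.
# The weighting function w(p) = sqrt(p) and linear utility are assumptions
# made purely for illustration.

def w(p):
    return p ** 0.5  # overweights small probabilities

def separate_weighting(lottery):
    """Weight each stated probability separately (simplified original-PT style)."""
    return sum(w(p) * x for x, p in lottery)

def cumulative_weighting(lottery):
    """Weight the cumulative probability of obtaining an outcome or anything better."""
    ranked = sorted(lottery, key=lambda xp: xp[0], reverse=True)
    value, cum_prev = 0.0, 0.0
    for x, p in ranked:
        cum = cum_prev + p
        value += (w(cum) - w(cum_prev)) * x  # decision weight = increment of w
        cum_prev = cum
    return value

A = [(100, 0.2), (0, 0.8)]             # A first-order stochastically dominates B:
B = [(100, 0.1), (99, 0.1), (0, 0.8)]  # B splits A's winning branch and lowers part of it

print(separate_weighting(A), separate_weighting(B))      # ~44.7 vs ~62.9: dominated B ranked higher
print(cumulative_weighting(A), cumulative_weighting(B))  # ~44.7 vs ~44.6: dominance respected
```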

The phenomenon of range dependence (Parducci, 1965; Cohen, 1992; Kontek & Lewandowski, 2018) is psychologically sound and adequately explains the pattern observed. Introducing a new high outcome x3 extends the payoff range upwards and produces a sharp increase in the lottery CE value. Introducing a new low outcome x1 extends the payoff range downwards and produces a sharp decrease in the lottery CE value. Introducing a new middle outcome x2 does not change the payoff range, so no sharp change in the lottery CE value is observed. Essentially the same explanation was presented earlier by Abdellaoui and Munier (1998): lotteries located on the legs of the triangle do not have the same support as those located elsewhere in the triangle; as a result, either p1 = 0 or p3 = 0 implies a more dramatic change in individual behavior than p2 = 0. The three-criteria (Cohen, 1992; see footnote 1) and decision utility (Kontek & Lewandowski, 2018) models, both based on range dependence, predict parallel straight-line indifference curves in the triangle interior with jumps towards the triangle origin at the legs. This is precisely the pattern observed non-parametrically in the present study. The DUT model was, moreover, the most accurate in fitting aggregated certainty equivalents, and in fitting individual ones when two-parameter models were compared.
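The discontinuity produced by range dependence can also be illustrated numerically. The sketch below is not the specification estimated in this paper; it merely assumes a decision utility D (here the Beta(2,2) CDF) applied to outcomes normalized to the lottery's own payoff range:

```python
# Illustrative sketch of range-dependent valuation. The S-shaped decision
# utility D (Beta(2,2) CDF) and the normalization to the lottery's payoff
# range are assumptions made for illustration only.

from scipy.optimize import brentq

def D(r):
    return 3 * r**2 - 2 * r**3  # assumed S-shaped decision utility on [0, 1]

def ce(lottery):
    """Certainty equivalent of a lottery given as (outcome, probability) pairs."""
    xs = [x for x, _ in lottery]
    lo, hi = min(xs), max(xs)
    mean_d = sum(p * D((x - lo) / (hi - lo)) for x, p in lottery)
    r = brentq(lambda t: D(t) - mean_d, 0.0, 1.0)  # invert D numerically
    return lo + r * (hi - lo)

eps = 0.001
print(ce([(0, 0.5), (150, 0.5)]))                    # ~75:  payoff range 0-150
print(ce([(0, 0.5), (150, 0.5 - eps), (300, eps)]))  # ~98:  range jumps to 0-300
print(ce([(0, 0.5), (300, 0.5)]))                    # 150:  payoff range 0-300
print(ce([(0, 0.5), (150, eps), (300, 0.5 - eps)]))  # ~150: middle outcome, no jump
```

In this sketch a 0.001 chance of the new high outcome is enough to shift the certainty equivalent from about 75 to about 98, whereas introducing the middle outcome leaves it essentially unchanged, mirroring the jumps at the legs and their absence at the hypotenuse.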

Prospective Reference Theory (Viscusi, 1989) predicts the same pattern of indifference curves as the models based on range dependence, but additionally allows jumps at the hypotenuse. These jumps happened to be almost invisible for the aggregated data, and the fit was only slightly less accurate than that of DUT. At the individual level, the PRT model was slightly less accurate than CPT. It is interesting to note that the PRT model was also among the winners in Hey and Orme's study (1994). Its good performance in the present study is therefore not accidental. However, the assumption underlying the theory, namely that individuals treat stated experimental probabilities as imperfect information and use them to update their prior probabilities to posterior ones in the standard Bayesian fashion, seems psychologically less plausible.
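A rough sketch of a commonly cited reduced form of PRT illustrates this; the assumption made here is that the perceived probability of each outcome is a convex combination of its stated probability and a uniform prior 1/n over the n outcomes in the lottery's support (linear utility and γ = 0.7 are used only for brevity):

```python
# Rough sketch of a reduced form of Prospective Reference Theory (an assumed
# simplification): perceived probability = gamma * stated p + (1 - gamma) / n,
# where n is the number of outcomes with positive stated probability.

def prt_value(lottery, gamma=0.7):
    support = [(x, p) for x, p in lottery if p > 0]
    n = len(support)
    return sum((gamma * p + (1 - gamma) / n) * x for x, p in support)

eps = 0.001
# Bottom leg (p3 = 0) versus a point just inside the triangle: a clear jump.
print(prt_value([(0, 0.3), (150, 0.7)]))                    # 96
print(prt_value([(0, 0.3), (150, 0.7 - eps), (300, eps)]))  # ~119
# Hypotenuse (p2 = 0) versus a point just inside: a jump is allowed but tiny here.
print(prt_value([(0, 0.3), (300, 0.7)]))                    # 192
print(prt_value([(0, 0.3), (150, eps), (300, 0.7 - eps)]))  # ~192
```

Because n drops from three to two whenever any stated probability reaches zero, this sketch allows jumps at all three boundaries; with x2 lying halfway between x1 and x3, however, the implied jump at the hypotenuse is nearly zero, which is consistent with it being almost invisible in the aggregated data.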

Models founded on other psychological phenomena only partially explain the shape of the indifference curves obtained in this study. The TAX model, which postulates a shift in attention from high-outcome branches to low-outcome ones, generally predicts various shapes of indifference curves, with jumps allowed at all boundaries. However, the shape predicted by the best-fit model only partially corresponds with the one obtained here (jumps at the vertical axis only). As a result, the model takes an intermediate position in the rankings: it performs better than CPT at the group level, but not as well at the individual level. The overweighting of salient outcomes postulated by Salience Theory has recently attracted a great deal of attention among researchers as a phenomenon explaining anomalies in various areas of economics and finance. The shape of the indifference curves predicted by this model, however, strongly disagrees with the one obtained in this study. As a result, the Salience Theory model fits the lottery certainty equivalents collected in the experiment poorly and performs only slightly better than EU.

To sum up, the best fits to the lottery certainty equivalents were offered by models that predict parallel straight-line indifference curves in the triangle interior with jumps at the legs towards the origin. The shape of the indifference curves obtained non-parametrically explains this result. As found in previous studies, Expected Utility Theory generally holds in the triangle interior. This study shows that the psychological phenomenon of range dependence best explains the boundary effects. To conclude, Expected Utility holds for lotteries having the same range of outcomes; it is violated when the range of outcomes changes (note that the lotteries in the famous Allais paradox have different ranges).

The above analysis was made possible by a novel method of plotting indifference curves inside the Marschak-Machina triangle using lottery certainty equivalents. One major advantage of this method is that it allows the indifference curves themselves to be visualized, rather than merely testing hypotheses concerning their shapes. Moreover, the plots can be obtained using a standard command available in most statistical packages, without any dedicated software having to be written.
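A minimal sketch of this plotting step is given below, assuming aggregated CE values are available for a set of (p1, p3) points; the data are made-up placeholders, and a single contour command does the essential work, since the contour lines of equal CE are the indifference curves:

```python
# Sketch of the plotting method: contour lines of equal CE drawn over the
# (p1, p3) lottery grid are the indifference curves. The data below are
# made-up placeholders, not values from the experiment.

import numpy as np
import matplotlib.pyplot as plt

p1 = np.array([0.0, 0.0, 0.2, 0.2, 0.5, 0.8, 0.05, 0.95])
p3 = np.array([0.2, 0.8, 0.0, 0.5, 0.2, 0.0, 0.90, 0.01])
ce = np.array([180, 260, 120, 200, 130,  40,  270,  20])  # hypothetical aggregated CEs

fig, ax = plt.subplots()
contours = ax.tricontour(p1, p3, ce, levels=10)  # contour lines on scattered data
ax.clabel(contours, fmt="%.0f")
ax.plot([0, 1], [1, 0], "k-")                    # the hypotenuse p1 + p3 = 1
ax.set_xlabel("$p_1$")
ax.set_ylabel("$p_3$")
plt.show()
```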

One problem encountered in this study is the high level of noise in the certainty equivalent values. The level of noise is probably magnified by the fact that many of the lotteries under consideration involved three outcomes, rather than two as in typical experiments. Despite this, the proposed method generates clear indifference curve patterns for individual data aggregated using 20% trimmed means, and it is very robust even when the noise level in individual responses is artificially increased, as shown by a separate simulation. The method does, however, produce poor-quality plots when the level of noise is too high, and it therefore often fails to reveal a clear pattern in individual indifference curves, which exposes the method's limitations. Possible approaches to solving this problem include conducting similar experiments in a laboratory rather than over the Internet, using stronger incentive schemes, and devising other ways of determining certainty equivalents. It is not certain, however, that these measures would result in cleaner data. A better approach might be to collect responses from the same subject repeatedly; this would allow his or her responses to be averaged and the noise at given measurement points to be reduced. Another option would be to apply a smoothing procedure on the plane in order to obtain the average value of the points in the vicinity of any given one. Initial tests of such a smoothing procedure demonstrated its advantage and, in many cases, a great improvement in plot quality. Care needs to be exercised when smoothing values on the plane, however, as doing so may also smooth out the jumps, which are of the greatest interest in this study.
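For reference, the aggregation step can be sketched as follows; the responses are hypothetical and serve only to show how the 20% trimmed mean discards the most extreme answers before averaging:

```python
# Sketch of the aggregation used for the group plots: the 20% trimmed mean
# removes the lowest and highest 20% of responses before averaging.
# The responses below are hypothetical.

import numpy as np
from scipy.stats import trim_mean

responses = np.array([40, 55, 60, 62, 65, 70, 75, 80, 150, 300])  # CEs for one lottery
print(np.mean(responses))         # ordinary mean, pulled up by outliers (~95.7)
print(trim_mean(responses, 0.2))  # 20% trimmed mean (~68.7)
```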

Appendix 1: CE values aggregated for the group for Triangle 1 (CE300 column) and Triangle 2 (CE900 column).

Appendix 2: Instructions (a translation from Polish)

The left and right urns each have 100 balls. You are required to select a ball at random from one of the two urns.

The left urn has black, white and gray balls. The value of each ball is given under the illustration. Selecting a ball from this urn carries a risk. The payoff depends on the color of the ball and its value.

All the balls in the right urn are marked with a cross. They are all identical. Therefore, if you select a ball from this urn, you are guaranteed a given sum of money with absolutely no risk.

Task:

Nominate the value of the balls in the right urn that would make you indifferent between a random selection from either urn, i.e., so that it would not matter to you which urn you chose when making a random selection.

Example 1:

In this example, the left urn has 50 black balls with a value of 0 zł, 40 gray balls with a value of 150 zł, and 10 white balls with a value of 300 zł.

a) Think of the value a cross-marked ball would need to have to make you indifferent as to which urn to choose when selecting a ball at random. Write this value under the right illustration.

b) If you feel that you would prefer to select a ball from the right urn, then the value you have nominated for the cross-marked balls is too high.

c) If you feel that you would prefer to select a ball from the left urn, then the value you have selected for the cross-marked balls is too low.

d) Repeat steps a), b) and c) until you are indifferent as to whether you randomly select a ball from the left or right urn.

Further comments:

Carefully consider the amounts given in the problems, and remember that you stand to gain real money. In fact, some of you will be selected and will take part in a real lottery after the experiment is finished.

Try to nominate the value of the cross-marked balls as precisely as possible – at least to within 5-20 zł. Avoid giving rounded amounts. The more precise your answers, the greater their academic worth.

Do not try to be “mathematically correct”. Obviously, you are not prohibited from counting. It might even be advisable that you do so. Keep in mind, however, that this is a psychological, and not a mathematical, test.

Verifying the responses

If, when completing the next problem, you find that a filled-in field changes color to red, then that field has been filled in incorrectly.

A figure greater than the maximum value of the balls in the left urn (300 zł in the above example) or less than their minimum value (0 zł in the above example), or perhaps a non-numerical character, might have been entered. Your form will not be accepted unless the error is corrected.

Green, on the other hand, means that the field has been filled in correctly.

Before you proceed to the experiment, try one more example.

Example 2:

If you understand the instructions, start the test by clicking “Next”.

If you are not sure about anything, read the instructions again.

If you do not wish to complete the test, press “Return”.

Footnotes

This research was financed by the Narodowe Centrum Nauki (National Science Center, Poland). Grant No.: 2016/23/D/HS4/02899. The author would like to thank Michael Birnbaum, Daniel Cavagnaro, and the other participants at the 53rd Edwards Bayesian Research Conference in Fullerton, CA (March 2015), John Hey and the other participants at the seminar at the University of York (November 2015), Graham Loomes and the other participants of the FUR Conference in Warwick (June 2016), as well as Mark Machina and Michał Lewandowski, for their valuable comments. Many thanks also to Honorata Sosnowska for involving her students in the experiment and for providing other assistance. Special thanks go to Joseph Johnson and Jonathan Baron for their comments and suggestions which greatly improved the paper during the reviewing process.

1 Cohen’s criteria are: security level, potential level, and Expected Utility. The concepts of security and potential levels were introduced earlier by Lopes (1987) in her SP/A model. Their models, however, differ greatly. Cohen uses both levels to define the range in which Expected Utility holds. Lopes, per contra, integrates security, potential, and aspiration levels using Rank-Dependent Utility. For the differences between the three-criteria and decision utility models, see Kontek and Lewandowski (2018).

References

Abdellaoui, M., & Munier, B. (1998). The risk-structure dependence effect: Experimenting with an eye to decision-aiding. Annals of Operations Research, 80, 237–252.
Bardsley, N., Cubitt, R., Loomes, G., Moffatt, P., Starmer, C., & Sugden, R. (2010). Experimental economics: Rethinking the rules. Princeton University Press.
Becker, J. L., & Sarin, R. K. (1987). Lottery dependent utility. Management Science, 33(11), 1367–1382.
Birnbaum, M. H. (2004). Tests of rank-dependent utility and cumulative prospect theory in gambles represented by natural frequencies: Effects of format, event framing, and branch splitting. Organizational Behavior and Human Decision Processes, 95, 40–65.
Birnbaum, M. H. (2008). New paradoxes of risky decision making. Psychological Review, 115, 463–501.
Blavatskyy, P. (2006). Axiomatization of a preference for most probable winner. Theory and Decision, 60, 17–33.
Blavatskyy, P. R. (2007). Stochastic expected utility theory. Journal of Risk and Uncertainty, 34(3), 259–286.
Bordalo, P., Gennaioli, N., & Shleifer, A. (2012). Salience theory of choice under risk. The Quarterly Journal of Economics, 127(3), 1243–1285.
Camerer, C. (1989). An experimental test of several generalized utility theories. Journal of Risk and Uncertainty, 2, 61–104.
Carbone, E., & Hey, J. D. (1994). Estimation of expected utility and non-expected utility preference functionals using complete ranking data. In Munier, B., & Machina, M. J. (Eds.), Models and experiments in risk and rationality, pp. 119–139. Kluwer Academic Publishers.
Cavagnaro, D., Gonzalez, R., Myung, J. I., & Pitt, M. A. (2013). Optimal decision stimuli for risky choice experiments: An adaptive approach. Management Science, 59(2), 358–375.
Chew, S. H., & MacCrimmon, K. (1979). Alpha-nu choice theory: A generalization of expected utility theory. Working paper 669, University of British Columbia.
Cohen, M. (1992). Security level, potential level, expected utility: A three-criteria decision model under risk. Theory and Decision, 33(2), 101–134.
Conlisk, J. (1989). Three variants on the Allais example. American Economic Review, 79(3), 392–407.
Cubitt, R. P., Navarro-Martinez, D., & Starmer, C. (2015). On preference imprecision. Journal of Risk and Uncertainty, 50(1), 1–34.
Dekel, E. (1986). An axiomatic characterization of preferences under uncertainty: Weakening the independence axiom. Journal of Economic Theory, 40, 304–318.
Gonzalez, R., & Wu, G. (1999). On the shape of the probability weighting function. Cognitive Psychology, 38, 129–166.
Gul, F. (1991). A theory of disappointment aversion. Econometrica, 59, 667–686.
Harless, D. (1992). Predictions about indifference curves inside the unit triangle. Journal of Economic Behavior and Organization, 18, 391–414.
Harless, D. W., & Camerer, C. (1994). The predictive utility of generalized expected utility theories. Econometrica, 62(6), 1251–1289.
Hey, J. D., & Strazzera, E. (1989). Estimation of indifference curves in the Marschak-Machina triangle. Journal of Behavioral Decision Making, 2, 239–260.
Hey, J. D., & Di Cagno, D. (1990). Circles and triangles: An experimental estimation of indifference lines in the Marschak-Machina triangle. Journal of Behavioral Decision Making, 3, 279–306.
Hey, J. D., & Orme, C. (1994). Investigating generalizations of expected utility theory using experimental data. Econometrica, 62(6), 1291–1326.
Holt, C. A., & Laury, S. K. (2002). Risk aversion and incentive effects. American Economic Review, 92(5), 1644–1655.
Jia, J., Dyer, J. S., & Butler, J. C. (2001). Generalized disappointment models. Journal of Risk and Uncertainty, 22(1), 59–78.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.
Kontek, K. (2016). A critical note on Salience Theory of choice under risk. Economics Letters, 149, 168–171.
Kontek, K., & Lewandowski, M. (2018). Range-dependent utility. Management Science, 64(6), 2812–2832.
Kotz, S., & van Dorp, J. (2004). Beyond beta: Other continuous families of distributions with bounded support and applications. World Scientific Publishing.
Lopes, L. L. (1987). Between hope and fear: The psychology of risk. Advances in Experimental Social Psychology, 20, 255–295.
Loomes, G., & Sugden, R. (1982). Regret theory: An alternative theory of rational choice under uncertainty. Economic Journal, 92, 805–824.
Loomes, G. (1991). Testing decision theories by using ‘value equivalences’. Oxford Economic Papers, 43, 644–666.
Machina, M. (1982). Expected utility analysis without the independence axiom. Econometrica, 50, 277–323.
Marschak, J. (1950). Rational behavior, uncertain prospects, and measurable utility. Econometrica, 18(2), 111–141.
Neilson, W. S. (1992). Some mixed results on boundary effects. Economics Letters, 39, 275–278.
Parducci, A. (1965). Category judgment: A range-frequency model. Psychological Review, 72(6), 407–418.
Prelec, D. (1998). The probability weighting function. Econometrica, 66, 497–527.
Quiggin, J. (1982). A theory of anticipated utility. Journal of Economic Behavior and Organization, 3(4), 323–343.
Sopher, B., & Gigliotti, G. (1993). A test of generalized expected utility theory. Theory and Decision, 35, 75–106.
Stewart, N., Reimers, S., & Harris, A. J. L. (2015). On the origin of utility, weighting, and discounting functions: How they get their shapes and how to change their shapes. Management Science, 61(3), 687–705.
Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5, 297–323.
Viscusi, W. K. (1989). Prospective reference theory: Toward an explanation of the paradoxes. Journal of Risk and Uncertainty, 2, 235–264.
Von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton University Press, Princeton, NJ.
Wilcox, R. (2011). Modern statistics for the social and behavioral sciences: A practical introduction. CRC Press, Taylor & Francis Group, Boca Raton, FL.
Wilcox, R. (2012). Introduction to robust estimation and hypothesis testing. Elsevier.
Yaari, M. E. (1987). The dual theory of choice under risk. Econometrica, 55(1), 95–115.

Figure 1: Indifference curve shapes predicted by: Expected Utility Theory (EUT, von Neumann and Morgenstern, 1944), Cumulative Prospect Theory (CPT, Tversky & Kahneman, 1992), the TAX model (TAX, Birnbaum, 2008), Salience Theory (ST, Bordalo, Gennaioli, and Shleifer, 2012), Prospective Reference Theory (PRT, Viscusi, 1989), and the Decision Utility model (DUT, Kontek and Lewandowski, 2018).


Figure 2: The Marschak-Machina triangle with the lotteries examined in the experiment.


Figure 3: An example problem from the experiment.


Figure 4: Example histograms of CE responses for particular lotteries, presented with the expected value (ev), median (med), mean (mn), and 20% trimmed mean (trm).


Figure 5: 3D plots of the aggregated CE values: Triangle 1 (left); and Triangle 2 (right).


Figure 6: Indifference curves in two Marschak-Machina triangles: Triangle 1 (left) with outcomes x1 = 0 zł, x2 = 150 zł, and x3 = 300 zł; and Triangle 2 (right) with outcomes x1 = 0 zł, x2 = 450 zł, and x3 = 900 zł. The Mathematica® program draws colored contour plots, so that areas of low CE contour values are marked using “cold” colors, and areas of high contour values are marked using “warm” colors.


Figure 7: Simulated indifference curves after adding Gaussian noise to the aggregated certainty equivalents. On the left: σ = 0.05; on the right: σ = 0.20.


Figure 8: Simulated indifference curves after adding Gaussian noise to the individual certainty equivalents, which were then aggregated using the 20% trimmed mean. On the left: σ = 0.05; on the right: σ = 0.20.


Figure 9: Sub-areas of the Marschak-Machina triangle used to determine local slopes of indifference curves. The numbers on the graph show the number of lotteries in each area.


Figure 10: Indifference curves estimated non-parametrically from the experiment presented together with local estimations of their slopes in Triangle 1 (left) and Triangle 2 (right). The statistical significance of the estimated slope values is denoted as: ** for p-value ≤ 0.01, and * for 0.01<p-value ≤ 0.05. Area A: 0 ≤ p1 ≤ 0.01 and 0.2 ≤ p3 ≤ 0.8; Area B: 0.01 ≤ p1 ≤ 0.2 and 0.2 ≤ p3 ≤ 0.8; Area C: 0.2 ≤ p1 ≤ 0.8 and 0.2 ≤ p3 ≤ 0.8; Area D: 0.2 ≤ p1 ≤ 0.8 and 0.01 ≤ p3 ≤ 0.2; Area E: 0.2 ≤ p1 ≤ 0.8 and 0 ≤ p3 ≤ 0.01.


Table 1: Estimation results of several decision-making models under risk.


Figure 11: Indifference curves obtained non-parametrically (dashed) and predicted by the best-fit models.


Table 2: The number of subjects for whom the respective model has the lowest SSE value.


Table 3: The number of subjects for whom the respective two-parameter model has the lowest SSE value.


Table 4: The number of subjects for whom the two-parameter CPT or DUT model has a lower SSE value, and the mean and median absolute differences between the models expressed in %.


Supplementary material: Kontek supplementary material 1 (File, 62 KB) and Kontek supplementary material 2 (File, 64.1 KB).