Experimental Economics and Experimental Game Theory

Daniel Houser, Kevin McCabe, in Neuroeconomics (Second Edition), 2014

Ultimatum Games

The Ultimatum Game, introduced by Werner Güth and colleagues (1982), is a simple, take-it-or-leave-it bargaining environment. In ultimatum experiments two people are randomly and anonymously matched, one as proposer and one as responder, and told they will play a game exactly one time. The proposer is endowed with an amount of money and suggests a division of that amount between herself and the responder. The responder observes the suggestion and then decides whether to accept or reject it. If the division is accepted, each earns the amount implied by the proposer’s suggestion. If it is rejected, both the proposer and the responder earn nothing for the experiment.
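
The payoff rule can be summarized in a few lines of code. The following is a minimal illustrative sketch (the function name, variable names, and example amounts are ours, not part of the original experiments):

```python
def ultimatum_payoffs(endowment, offer, accepted):
    """Return (proposer_payoff, responder_payoff) for one ultimatum round.

    If the offer is accepted, the proposer keeps `endowment - offer` and the
    responder receives `offer`; if it is rejected, both earn nothing.
    """
    if accepted:
        return endowment - offer, offer
    return 0.0, 0.0

# Example: a $10 endowment with a $4 offer.
print(ultimatum_payoffs(10.0, 4.0, accepted=True))   # (6.0, 4.0)
print(ultimatum_payoffs(10.0, 4.0, accepted=False))  # (0.0, 0.0)
```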

The key result of ultimatum experiments is that most proposers offer between 40% and 50% of the endowed amount, and that this split is almost always accepted by responders. When the proposal falls to 20% of the endowment it is rejected about half of the time, and rejection rates increase further as the proposal falls to 10% and lower. As discussed by Camerer (2003, Chapter 2), ultimatum game results are highly robust to a variety of natural design manipulations (e.g., repetition, stake size, degree of anonymity, and demographic variables).

An important exception to the robustness results is reported by Hoffman and Spitzer (1985), who show that offers become significantly smaller, and rejections significantly less frequent, when participants compete for and earn the right to propose. An explanation is that this procedure changes the perception of “fair”, and draws attention to the importance of context in personal (as compared to market) exchange environments. Effects might also stem from varying the degree of anonymity among the subjects, or between the subjects and the experimenter (Hoffman et al., 1996).

A key focus of recent ultimatum game research has been to understand why responders reject low offers. Economic theory based on self-interested preferences suggests that responders should accept any positive offer and, consequently, that proposers should offer the smallest possible positive amount. We review some well-known research on the topic of responder rejections in the Neuroeconomics Experiments section below.
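
This self-interested benchmark follows from backward induction, as the short sketch below illustrates on a discretized endowment (the cent-level grid and function name are our assumptions):

```python
def self_interested_prediction(endowment_cents=1000, step_cents=1):
    """Subgame-perfect prediction with purely self-interested players.

    A self-interested responder accepts any positive offer, so the proposer's
    payoff-maximizing offer is the smallest positive increment on the grid.
    """
    offers = range(step_cents, endowment_cents + 1, step_cents)
    # The proposer keeps endowment - offer whenever the offer is accepted.
    return max(offers, key=lambda offer: endowment_cents - offer)

print(self_interested_prediction())  # 1, i.e., one cent of a $10 endowment
```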

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780124160088000024

Social Cognition

Daniel C. Krawczyk, in Reasoning, 2018

The Ultimatum Game

The Ultimatum game is a behavioral economics exchange game that is often played over numerous trials. The situation places the monetary interests of two people into close association (Güth, Schmittberger, & Schwarze, 1982). In a standard Ultimatum game, there is an amount of money that can be split between two players, a proposer and a responder; often a sum of 10 dollars is used. The proposer is placed in control of the money and must make an offer to the responder. If that offer is accepted, the proposer and the responder each receive their agreed-upon amounts. If they do not agree and the responder rejects the proposer’s offer, then nobody receives any money. For example, let’s imagine that a round of the Ultimatum game is going to be played by Joshua and Dominic. Joshua is the proposer and has a chance to split 10 dollars with Dominic. Dominic knows that there are 10 dollars available, so he is fully aware of how much Joshua would get to keep and how much he is being offered. Let’s imagine Joshua is extremely fair and proposes to split the 10 dollars evenly, offering Dominic 5 dollars. Almost everyone will accept this offer, and Dominic goes ahead and says, “Yes, Joshua, that is very fair, I accept the 5 dollars.” A fair, even split emerges as a favored strategy for many people, as it is a point at which both people receive as much as possible without anyone being short-changed (Haselhuhn & Mellers, 2005). From a purely rational economic perspective, Dominic should theoretically accept any offer made by Joshua, even if it is one cent, because the alternative of rejecting the offer results in no money.

Let’s imagine these two play the Ultimatum game again. From a purely monetary standpoint it is in Joshua’s best interest to see whether he can get away with a more selfish, lopsided offer, as long as he thinks Dominic will still accept it. On this round Joshua offers $3.50. Dominic knows that Joshua will get to keep $6.50 for himself. He feels this is a bit shady, but $3.50 is not bad and certainly better than nothing. He accepts, saying, “Joshua, I’ll take the offer, but I’m not thrilled that you’re keeping more for yourself.” Indeed, over 60% of the time people will still accept an offer between 3 and 4 dollars. Now a third round occurs and Joshua really decides to push his luck. Dominic has accepted his past two offers and he’s on a roll. He now offers Dominic a sum of 75 cents. Dominic has to think about this one a bit. That sum would buy him very little, and he is just plain irritated with Joshua at this point. He decides to reject the offer, saying, “Joshua, you’ve gone too far. You’ve got to be kidding me if you think I’m going to accept only 75 cents when I know you’re getting to keep 9.25 dollars.” In this case, Dominic is violating expected utility theory as understood in a purely rational economic model: seventy-five cents should always outweigh nothing. Most people will reject this kind of unfair offer that they know is stacked so heavily toward benefitting the proposer; over 90% of unfair offers below one dollar are rejected (Haselhuhn & Mellers, 2005). Exchange behavior in the Ultimatum game thus illustrates a willingness to punish unfairness by rejecting an offer that is too heavily weighted in the proposer’s favor.
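
The acceptance pattern described above can be stylized in a few lines of code. The breakpoints below simply restate the figures quoted in this passage (near-even splits almost always accepted, offers of 3–4 dollars accepted over 60% of the time, offers below one dollar rejected over 90% of the time); they are illustrative, not fitted to data:

```python
def approximate_acceptance_rate(offer_dollars, endowment=10.0):
    """Rough, illustrative acceptance rates for a $10 Ultimatum game.

    The breakpoints restate figures quoted in the text; this is a
    stylization, not an empirical model.
    """
    if offer_dollars >= 0.4 * endowment:   # near-even splits
        return 0.95                        # almost always accepted
    if offer_dollars >= 3.0:               # moderately lopsided offers
        return 0.60                        # accepted over 60% of the time
    if offer_dollars >= 1.0:
        return 0.40                        # rejections become common
    return 0.10                            # >90% of sub-$1 offers rejected

for offer in (5.00, 3.50, 2.00, 0.75):
    print(offer, approximate_acceptance_rate(offer))
```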

Some versions of the Ultimatum game vary this basic premise. In some instances many rounds of the game are played. In these cases it is probably a sensible strategy to occasionally reject offers that are too unfairly weighted in favor of the proposer, because rejections are likely to drive up the next offer or future offers overall. If the responder appears to be a pushover who will accept any offer, then the only incentive for the proposer to make even-split offers is to avoid the guilty feeling of constantly taking advantage of the responder. Indeed, future offers are likely to be guided by the history the two individuals build up in the game. In other variations, people play against a computer. In this case, people typically behave more rationally and accept much lower offers. Even when the proposer is known to be computerized, however, people still choose to reject the most extreme unfair offers, in the range of one dollar or less. The impulse to correct unfair offers is strong, even when we know our opponent is not a real person (Blount, 1995)!
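
A small simulation can illustrate why occasional rejections may pay off over repeated rounds. The adaptive-proposer rule below (raise the offer after a rejection, trim it after an acceptance) and all parameter values are our own illustrative assumptions, not a model taken from this chapter:

```python
def simulate_repeated_ultimatum(rounds=20, endowment=10.0,
                                responder_threshold=3.0,
                                raise_step=1.0, cut_step=0.25):
    """Toy repeated Ultimatum game with an adaptive proposer.

    The proposer starts low, raises the offer after each rejection and trims
    it after each acceptance; the responder rejects anything below a fixed
    threshold. Returns the responder's total earnings.
    """
    offer, responder_total = 1.0, 0.0
    for _ in range(rounds):
        accepted = offer >= responder_threshold
        if accepted:
            responder_total += offer
            offer = max(0.0, offer - cut_step)
        else:
            offer = min(endowment, offer + raise_step)
    return responder_total

# A responder who rejects offers below $3 forgoes some money early on
# but pushes subsequent offers upward.
print(simulate_repeated_ultimatum())
```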

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780128092859000120

Social Preferences and the Brain

Ernst Fehr, Ian Krajbich, in Neuroeconomics (Second Edition), 2014

The Ultimatum game (see Figure Box 11.2) is identical to the Dictator game except that the recipient can reject the proposed allocation (Güth et al., 1982). If she rejects it, both players receive nothing. Rejections are evidence of negative reciprocity (Rabin, 1993), the motive to punish players who have treated you unfairly, or of inequity aversion (Fehr and Schmidt, 1999), a distaste for unfair outcomes (see Box 11.2). The amount a recipient loses by rejecting a proposed allocation serves as a measure of the strength of these motives. Offers of less than 20% are rejected about half the time; proposers seem to anticipate these rejections and consequently offer approximately 40% on average. Cross-cultural studies, however, show that across small-scale societies, ultimatum offers are more generous where cooperative activity and market trade are more common (Henrich et al., 2001).
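
To illustrate how inequity aversion can rationalize rejections, here is a hedged sketch of the two-player Fehr-Schmidt (1999) utility and the resulting rejection rule; the parameter values are arbitrary examples, not estimates from the literature:

```python
def fehr_schmidt_utility(own, other, alpha, beta):
    """Two-player Fehr-Schmidt inequity-averse utility.

    alpha weights disadvantageous inequity (the other player is ahead);
    beta weights advantageous inequity (the decision maker is ahead).
    """
    return own - alpha * max(other - own, 0.0) - beta * max(own - other, 0.0)

def recipient_rejects(offer, endowment, alpha, beta=0.0):
    """Reject if accepting yields lower utility than the (0, 0) rejection outcome."""
    return fehr_schmidt_utility(offer, endowment - offer, alpha, beta) < 0.0

# With alpha = 2 (an illustrative value), a 20% offer of a 10-unit pie is
# rejected, while a 40% offer is not.
print(recipient_rejects(2.0, 10.0, alpha=2.0))  # True
print(recipient_rejects(4.0, 10.0, alpha=2.0))  # False
```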

Figure Box 11.2. Example of an Ultimatum game.

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780124160088000115

An Embarrassment of Riches

Cristina Bicchieri, Jiji Zhang, in Philosophy of Economics, 2012

2.2 A conditional preference for following norms: the Bicchieri model

The norm-based utility function introduced by Bicchieri [2006] tries to capture the idea that, when a social norm exists, individuals will show different ‘sensitivities’ to it, and this should be reflected in their utility functions. Consider a typical n-person (normal-form) game. For the sake of formal treatment, we represent a norm as a (partial) function that maps what the player expects other players to do into what the player “ought” to do. In other words, a norm regulates behavior conditional on other people's (expected) behaviors. Denote the strategy set of player $i$ by $S_i$, and let $S_{-i} = \prod_{j \neq i} S_j$ be the set of strategy profiles of players other than $i$. Then a norm for player $i$ is formally represented by a function $N_i: L_{-i} \to S_i$, where $L_{-i} \subseteq S_{-i}$. Two points are worth noting. First, given the other players' strategies, there may or may not be a norm that prescribes how player $i$ ought to behave. So $L_{-i}$ need not be, and usually is not, equal to $S_{-i}$. In particular, $L_{-i}$ could be empty in the situation where there is no norm whatsoever to regulate player $i$'s behavior. Second, there could be norms that regulate joint behaviors. A norm, for example, that regulates the joint behaviors of players $i$ and $j$ may be represented by $N_{i,j}: L_{-i,-j} \to S_i \times S_j$. Since we are concerned with a two-person game here, we will not further complicate the model in that direction.

A strategy profile $s = (s_1, \ldots, s_n)$ instantiates a norm for $j$ if $s_{-j} \in L_{-j}$, that is, if $N_j$ is defined at $s_{-j}$. It violates a norm if, for some $j$, it instantiates a norm for $j$ but $s_j \neq N_j(s_{-j})$. Let $\pi_i$ be the payoff function for player $i$. The norm-based utility function of player $i$ depends on the strategy profile $s$, and is given by

$$U_i(s) = \pi_i(s) - k_i \max_{s_{-j} \in L_{-j}} \max_{m \neq j} \left\{ \pi_m\big(s_{-j}, N_j(s_{-j})\big) - \pi_m(s),\ 0 \right\}$$

Here $k_i \geq 0$ is a constant representing $i$'s sensitivity to the relevant norm. Such sensitivity may vary with different norms; for example, a person may be very sensitive to equality and much less so to equity considerations. The first maximum operator takes care of the possibility that the norm instantiation (and violation) might be ambiguous, in the sense that a strategy profile instantiates a norm for several players simultaneously. We conjecture, however, that this situation is rare, and under most situations the first maximum operator degenerates. The second maximum operator ranges over all the players other than the norm violator. In plain words, the discounting term (multiplied by $k_i$) is the maximum payoff deduction resulting from all norm violations.
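
A compact implementation for the two-player case may help fix ideas. The following is a sketch under our own representational choices (norms encoded as Python dictionaries mapping the other player's strategy to the prescribed strategy, with a loop over potential violators playing the role of the first maximum operator); it is not code from the chapter:

```python
def norm_based_utility(i, profile, payoff, norms, k):
    """Norm-based utility of player i in a two-player game (illustrative sketch).

    profile : tuple (s0, s1) of chosen strategies
    payoff  : function (player, profile) -> material payoff pi
    norms   : dict player -> partial dict {other player's strategy: prescribed strategy}
    k       : dict player -> norm sensitivity k_j >= 0
    """
    deduction = 0.0
    for j in (0, 1):                      # each player is a potential norm violator
        other = 1 - j
        prescribed = norms.get(j, {}).get(profile[other])
        if prescribed is None:            # no norm defined at s_{-j}
            continue
        # Compliant profile: j plays what the norm prescribes, the other is unchanged.
        compliant = tuple(prescribed if p == j else profile[p] for p in (0, 1))
        # Payoff deduction suffered by the non-violator relative to compliance.
        loss = max(payoff(other, compliant) - payoff(other, profile), 0.0)
        deduction = max(deduction, loss)
    return payoff(i, profile) - k[i] * deduction
```

In the Ultimatum game discussed below, norms[0] would map every responder strategy to the fair offer N, while norms[1] would be empty.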

In the Ultimatum game, the norm we shall consider is the norm that prescribes a fair amount the proposer ought to offer. To represent it we take the norm functions to be the following: the norm function for the proposer, $N_1$, is a constant function equal to the fair offer $N$, and the norm function for the responder, $N_2$, is nowhere defined. If the responder (player 2) rejects, the utilities of both players are zero.

$$U_1^{\mathrm{reject}}(x) = U_2^{\mathrm{reject}}(x) = 0$$

Given that the proposer (player 1) offers $x$ and the responder accepts, the utilities are the following:

$$U_1^{\mathrm{accept}}(x) = M - x - k_1 \max(N - x,\ 0)$$
$$U_2^{\mathrm{accept}}(x) = x - k_2 \max(N - x,\ 0)$$

where $M$ is the total amount to be divided, $N$ denotes the fair offer prescribed by the norm, and each $k_i$ is non-negative. Note, again, that $k_1$ measures how much the proposer dislikes deviating from what he takes to be the norm. To obey a norm, ‘sensitivity’ to the norm need not be high. Fear of retaliation may make a proposer with a low $k_1$ behave according to what fairness dictates but, absent such risk, her disregard for the norm will lead her to be unfair.

Again, the responder should accept the offer if $U_2^{\mathrm{accept}}(x) > U_2^{\mathrm{reject}}(x) = 0$, which implies the following threshold for acceptance: $x > k_2 N/(1+k_2)$. Obviously the threshold is less than $N$: an offer larger than what the norm prescribes is not necessary to secure acceptance.
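
Spelling out the algebra behind this threshold (our own restatement of the step the text leaves implicit), for offers $x \le N$:

\[
U_2^{\mathrm{accept}}(x) > 0
\;\Longleftrightarrow\; x - k_2 (N - x) > 0
\;\Longleftrightarrow\; x (1 + k_2) > k_2 N
\;\Longleftrightarrow\; x > \frac{k_2 N}{1 + k_2}.
\]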

For the proposer, the utility function is decreasing in $x$ when $x \geq N$, hence a rational proposer will not offer more than $N$. Suppose $x \leq N$. If $k_1 > 1$, the utility function is increasing in $x$, which means that the best choice for the proposer is to offer $N$. If $k_1 < 1$, the utility function is decreasing in $x$, which implies that the best strategy for the proposer is to offer the least amount that would result in acceptance, i.e. (a little bit more than) the threshold $k_2 N/(1+k_2)$. If $k_1 = 1$, it does not matter how much the proposer offers, provided the offer is between $k_2 N/(1+k_2)$ and $N$.

It should be clear at this point that $k_1$ plays a very similar role to that of $\beta_1$ in the Fehr-Schmidt model. In fact, if we take $N$ to be $M/2$ and $k_1$ to be $2\beta_1$, the two models agree on what the proposer's utility is. Similarly, $k_2$ in this model is analogous to $\alpha_2$ in the Fehr-Schmidt model. There is, however, an important difference between these formally analogous parameters. The $\alpha$'s and $\beta$'s in the Fehr-Schmidt model measure people's degree of aversion towards inequality, which is a very different disposition from the one measured by the $k$'s, i.e., people's sensitivity to various norms. The latter will usually be a stable disposition, and behavioral changes may thus be caused by changes in focus or in expectations. A theory of norms can explain such changes, whereas a theory of inequity aversion does not. We will come back to this point later.

It is also the case that the proposer's belief about the responder's type figures in her decision when $k_1 < 1$. The belief may be represented by a probability distribution over $k_2$. The proposer should choose an offer that maximizes the expected utility

$$EU(x) = P\!\left(\frac{k_2 N}{1+k_2} < x\right) \times \big(M - x - k_1 (N - x)\big).$$
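
A numerical sketch of this maximization, assuming, purely for illustration, that the proposer's belief over $k_2$ is a small discrete distribution and that offers lie on a fine grid (all parameter values below are our own):

```python
import numpy as np

def optimal_offer(M=10.0, N=5.0, k1=0.5,
                  k2_values=(0.5, 1.0, 2.0), k2_probs=(0.4, 0.4, 0.2),
                  grid_step=0.01):
    """Maximize EU(x) = P(k2*N/(1+k2) < x) * (M - x - k1*(N - x)) over x in [0, N].

    The belief over k2 and all parameter values are illustrative assumptions.
    """
    offers = np.arange(0.0, N + grid_step, grid_step)
    thresholds = np.array([k * N / (1.0 + k) for k in k2_values])
    probs = np.array(k2_probs)
    # Acceptance probability for each candidate offer.
    p_accept = np.array([probs[thresholds < x].sum() for x in offers])
    eu = p_accept * (M - offers - k1 * (N - offers))
    best = offers[np.argmax(eu)]
    return best, eu.max()

print(optimal_offer())  # with k1 < 1, the best offer sits just above the highest likely threshold
```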

As will become clear, an advantage this model has over the Fehr-Schmidt model is that it can explain some variants of BUG more naturally. It shares a problem with the Fehr-Schmidt model, however, in that both entail that if the proposer offers a close-to-fair but not exactly fair amount, the only thing preventing her from being meaner is the fear of rejection. This prediction could easily be refuted by a parallel dictator game, where rejection is not an option.

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780444516763500208

Behavioral Economics

N.S. Grewal, ... E. Moses, in Encyclopedia of Mental Health (Second Edition), 2016

Ultimatum game

In the ultimatum game (Güth et al., 1982), the proposer’s role is to offer a split of the initial endowment between herself and the responder. The responder can then either accept the split, in which case both players walk away with their designated portions, or refuse it, in which case both players get US$0. According to neoclassical theory, a rational responder should accept whatever split is offered, even if it is just US$1, because it is a gain (US$1 > US$0). However, in reality, if the responder views the split from the proposer as being ‘unfair’ (typically, an ‘unfair’ split is 35% and under), they will punish the proposer by refusing the deal. In this case, both players walk away with nothing (Camerer, 2003). The act of punishing the proposer for lack of fairness is in itself more rewarding for the responder than an objective US$1 gain.

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780123970459002019

The Study of Emotion in Neuroeconomics

Elizabeth A. Phelps, in Neuroeconomics, 2009

Subjective Experience

One anomaly in economic decision making is revealed in the ultimatum game, in which participants will reject a small monetary offer when the alternative is to receive nothing. A general interpretation of this effect is that when an offer is deemed unfair, the receiver will pay a cost to punish the offerer, who receives nothing if the offer is rejected. It has been suggested that this type of altruistic punishment may play a role in maintaining social norms (Fehr and Rockenbach, 2004).

In an effort to determine whether emotion plays a role in the decision to reject unfair offers in the ultimatum game, Pillutla and Murnighan (1996) asked participants to rate the offers received both in terms of fairness and their subjective experience of anger at receiving the offer. They found that different factors (such as type of knowledge about the game) differentially influenced ratings of fairness and anger, although they were correlated. In addition, they found that anger was a better explanation of rejections than the perception that the offers were unfair. Although a sense of fairness may be associated with an affective evaluation, by directly assessing an emotional response (subjective feelings), an additional factor that may mediate decisions in this economic game was identified. This result provides an example of the value of assessing emotion in studies of economic decision making.

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780123741769000166

Responses to Inequity in Non-human Primates

Sarah F. Brosnan, in Neuroeconomics, 2009

UG Variation: The Impunity Game

The impunity game (IG) is another variation on the UG in which the responder has only limited recourse. After the proposer makes a division, the responder can reject it, but her rejection affects only her own payoff, not the proposer's. Thus, the rational response is for proposers to offer the least amount possible and for responders to accept it, as in the DG (Bolton and Zwick, 1995).
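
A short sketch contrasting the payoff consequences of rejection in the standard UG and in the impunity game; the function names and example split are ours:

```python
def ultimatum_outcome(endowment, offer, accepted):
    """Standard UG: rejection wipes out both players' payoffs."""
    return (endowment - offer, offer) if accepted else (0.0, 0.0)

def impunity_outcome(endowment, offer, accepted):
    """Impunity game: rejection forfeits only the responder's share;
    the proposer keeps endowment - offer either way."""
    return (endowment - offer, offer if accepted else 0.0)

# With a lopsided 9/1 split of 10 units, rejection is costly to both in the UG
# but only to the responder in the IG.
print(ultimatum_outcome(10.0, 1.0, accepted=False))  # (0.0, 0.0)
print(impunity_outcome(10.0, 1.0, accepted=False))   # (9.0, 0.0)
```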

However, responders often refuse their winnings, both when the proposer will know the responder's actions and when the proposer is ignorant of them (i.e., the proposer believes they are playing a DG; Yamagishi, 2007). While this is unexpected from the perspective of rationality, as the response leads to increased (rather than decreased) inequity, it may indicate that people's responses serve not only to equalize outcomes but also to send a signal to both their partners and themselves. Such signals could constitute commitment devices (Frank, 1988) that inform others of the player's refusal to participate in interactions that do not yield fair outcomes, thereby increasing the player's long-term gains in cooperative interactions. Similar responses are seen in non-human primates (Brosnan and de Waal, 2003, Brosnan et al., 2005).

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780123741769000191

Social cognition in addiction

Boris B. Quednow, in Cognition and Addiction, 2020

Social decision-making

Alcohol-dependent (AD) individuals have been consistently reported to reject unfair offers in the ultimatum game more often than healthy controls (Tsukue et al., 2015; Brevers et al., 2013, 2015). The proportion of rejected unfair offers has been shown to correlate with elevated physiological arousal, as assessed by the skin conductance response (Brevers et al., 2015), and with reward impulsivity measured using a delay discounting task (Tsukue et al., 2015). These findings indicate that AD individuals have a higher sensitivity to unfairness, or that they have more difficulty controlling their emotions in unfair situations, resulting in more aggressive or retributive responses (Tsukue et al., 2015; Brevers et al., 2013, 2015).

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780128152980000058

Volume 3

Yang Hu, ... Xiaolin Zhou, in Encyclopedia of Behavioral Neuroscience, 2nd edition, 2022

Negative Reciprocity

A widely used behavioral task in research on negative reciprocity is the ultimatum game (UG). In a typical UG, participants act as the responder and decide whether to accept a fair or unfair division of money suggested by a proposer (Sanfey et al., 2003). If the division is accepted, the money is split as proposed; if it is rejected, neither player receives anything. Participants commonly accept offers when the divisions comply with the fairness norm (fair offers). Although participants could obtain a certain amount of money by accepting unfair offers, they reject more offers (i.e., receive nothing) as the extent of the proposer's norm violation increases (i.e., as the offers become less fair), indicating negative reciprocity and costly norm enforcement. In one line of research, neuroimaging studies using this task have consistently demonstrated the involvement of brain areas related to the initial evaluation of norm compliance/violation (Aoki et al., 2015; Feng et al., 2015; Gabay et al., 2014). Specifically, responders gave higher happiness ratings to more equal offers (Tabibnia et al., 2008); this observation was consistent with greater responses in the vmPFC to fair (vs. unfair) offers, suggesting that the vmPFC contributes to processing the social reward of fairness norm compliance (Baumgartner et al., 2011; Dawes et al., 2012; Tabibnia et al., 2008; Xiang et al., 2013). In contrast, compared with fair offers, unfair offers activate the anterior insula, an area implicated in detecting norm violation (Cheng et al., 2017; Civai, 2013; Civai et al., 2012; Guo et al., 2013; Strobel et al., 2011; Xiang et al., 2013) or in signaling emotional processing via representations of aversive internal states (Chang and Sanfey, 2011; Corradi-Dell'Acqua et al., 2012; Guo et al., 2013; Sanfey et al., 2003), as well as the amygdala, which has been linked to signaling negative emotional responses to norm violation (Gospic et al., 2013; Haruno and Frith, 2010; Yu et al., 2014).

Another line of research revealed greater activation, in the unfair condition compared with the fair condition, of brain regions related to the integration of social norms and economic self-interest in the service of flexible decision-making (Aoki et al., 2014; Feng et al., 2015; Gabay et al., 2014). Specifically, the unfairness-evoked aversive response and the self-interest that would be served by acceptance contradict each other, resulting in a motivational conflict that has been suggested to be monitored by the dACC (Fehr and Camerer, 2007; Sanfey et al., 2003). Neural evidence suggests two ways to resolve this conflict: first, the unfairness-evoked aversive response may be suppressed, probably implemented by brain regions associated with emotion regulation such as the vlPFC and dmPFC, resulting in an increase in acceptance rates (Civai et al., 2012; Grecucci et al., 2012; Tabibnia et al., 2008). Second, the conflict may be resolved by inhibiting selfish motives to promote norm compliance; this would rely on the cognitive control functions of the right dlPFC (Knoch et al., 2006; Ruff et al., 2013; Zhu et al., 2014). In addition, it has been shown that, compared with the gain frame used in the traditional UG, participants were more likely to reject unfair offers in a loss frame, where proposers made unfair offers about how to share a loss (Zhou and Wu, 2011). Neuroimaging data indicated that loss reduced the responsiveness of the dopamine system (ventral striatum) to fairness while enhancing the motivation to reject the offer. This process was complemented by increased responses of the dlPFC to insultingly unfair offers (Guo et al., 2013; Wu et al., 2014).

Notably, reciprocal behaviors in the UG are based not only on a preference for fair outcomes (i.e., egalitarianism) but also on reciprocal considerations regarding others' intentions (i.e., intention-based reciprocity) (Charness and Rabin, 2002; Dufwenberg and Kirchsteiger, 2004; Falk et al., 2003; Rabin, 1993; Zheng et al., 2014). For example, the same unfair offers are more likely to be accepted if the proposer demonstrates good intentions by choosing the inequitable division over an even more unfair division (Falk et al., 2003). This increase in acceptance rates is associated with activity in the anterior medial prefrontal cortex and the TPJ, implying that higher demands on moral mentalizing arise in social decision-making when the decision to reject cannot be readily justified (Güroğlu et al., 2010). Moreover, a gradual shift in other-regarding preferences, from simple rule-based egalitarianism to complex intention-based reciprocity, was observed from early childhood to young adulthood (Sul et al., 2017). This preference shift was associated with cortical thinning of the dmPFC and posterior temporal cortex, regions involved in social inference as indicated by meta-analytic reverse-inference analysis.

Moreover, Yu et al. (2015) investigated the neural substrates underlying the processing of both the intention behind and the consequence of another's harm using an interactive game. In the task, participants interacted with anonymous co-players, who decided whether to deliver pain stimulation to themselves or to the paired participant in order to earn a monetary reward. In some cases, the decision was reversed by the computer. Unbeknownst to the co-player, the participant was then allowed to punish the co-player by reducing his/her monetary reward after seeing the co-player's intention. Behaviorally, punishment was lower in the accidental condition (unintended harm, relative to intended harm) but higher in the failed-attempt condition (harm that was intended but not delivered, relative to genuinely unintended no-harm). Neurally, the left amygdala was activated in conditions involving blameworthy intention (i.e., intentional harm and failed attempt). Accidental (relative to intentional) harm activated the right TPJ and IFG, while the failed attempt (relative to genuine no-harm) activated the anterior insula and posterior IFG. Effective connectivity analysis revealed that in the conditions where intention and outcome were incongruent (i.e., accidental harm and failed attempt), the IFG received input from the TPJ and AI and sent regulatory signals to the amygdala. These findings demonstrate that the processing of intention may gate emotional responses to transgression and regulate subsequent reactive punishment.

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780128196410001511

Sex Differences in Neurology and Psychiatry

Jonas Hornung, ... Birgit Derntl, in Handbook of Clinical Neurology, 2020

Decision making

Mehta and Beer (2010) investigated the processing of unfair offers in women and men playing an ultimatum game. The authors identified a region in the left medial OFC that was less activated with rising endogenous testosterone levels when contrasting unfair versus fair offers. The same study also showed that higher levels of testosterone led to more frequent rejection of unfair offers.

Neuroimaging studies that systematically investigated the impact of testosterone on reward-related behaviors found testosterone to modulate both striatal areas and frontal brain regions. This is in line with a recent model (Welker et al., 2015), which proposes that both elevated endogenous levels and dynamic increases of testosterone facilitate reward processing (i.e., increased striatal activity), which in turn facilitates dysregulatory behaviors (i.e., decreased frontocortical activity).

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B978044464123600014X