"Inside every small problem is a large problem struggling to get out."


03/03/2001 02:16 AM by Birol Baskan; To Mr. Levine: as far as future payoffs are concerned, not necessarily so
I am aware that it was a long time ago that you wrote this message; since I have only recently discovered this forum, I apologize. Your points offer important insights for me, since I use the repeated prisoner's dilemma game in my thesis.

03/29/2000 10:40 AM by David K. Levine; Is tit-for-tat a good idea?

Is tit-for-tat a good strategy in a repeated prisoner's dilemma game? Here is a standard prisoner's dilemma game matrix, where the prisoners have two options: not confess, or confess. As usual, they are both better off if neither confesses, but regardless of whether or not the other prisoner is confessing, a player always increases his payoff by one if he confesses.

              not confess   confess
not confess       5,5          0,6
confess           6,0          1,1

Naturally when each prisoner follows his own self-interest they both confess, each receiving a payoff of only one, while if neither had confessed both would have received five. Now suppose the game is repeated (without a definite end), and that one player plays "tit-for-tat." This means do not confess in the first period, and thereafter do whatever your opponent did last period. If the other player is playing tit-for-tat, I can get 5 every period by not confessing. On the other hand, if I confess, I gain 1 today, but lose 5 tomorrow. Since the loss is much greater than the gain, unless I am terribly impatient, I should not confess. In fact, I might as well play tit-for-tat myself, since that means neither of us will confess in the first period, and we will both continue not to confess as long as the other has not. So we both get 5, which is the most possible for each of us, given that our opponent is playing tit-for-tat. In short, tit-for-tat is a Nash equilibrium.
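The arithmetic above can be checked directly. Here is a minimal sketch (the function names, discount factor, and horizon are my own choices, not from the post) that plays a repeated game against tit-for-tat and compares the discounted payoff of playing tit-for-tat oneself with that of a one-period deviation to "confess":

```python
# Stage-game payoffs from the matrix above: (my move, opponent's move) -> my payoff.
# "N" = not confess, "C" = confess.
PAYOFF = {("N", "N"): 5, ("N", "C"): 0,
          ("C", "N"): 6, ("C", "C"): 1}

def tit_for_tat(my_history, opp_history):
    """Do not confess in the first period; thereafter copy the opponent's last move."""
    return "N" if not opp_history else opp_history[-1]

def deviator(my_history, opp_history):
    """Confess in the first period, then play tit-for-tat."""
    return "C" if not my_history else tit_for_tat(my_history, opp_history)

def discounted_payoff(strategy, delta, periods=200):
    """Discounted payoff of `strategy` against an opponent playing tit-for-tat."""
    mine, theirs, total = [], [], 0.0
    for t in range(periods):
        m = strategy(mine, theirs)
        o = tit_for_tat(theirs, mine)   # opponent sees the histories reversed
        total += delta**t * PAYOFF[(m, o)]
        mine.append(m)
        theirs.append(o)
    return total

# A patient player (delta = 0.9): deviating triggers the 6, 0, 6, 0, ... echo
# and is strictly worse than cooperating for 5 every period.
print(discounted_payoff(tit_for_tat, 0.9))  # ~50.0
print(discounted_payoff(deviator, 0.9))     # ~31.6
```

With an impatient player (say delta = 0.1), the comparison reverses and the one-period gain of 1 outweighs the discounted future losses, matching the "unless I am terribly impatient" caveat above.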

But is tit-for-tat a good idea? Axelrod (1984) argues that it is. We both get 5, which is the most we can both get at the same time. It is a Nash equilibrium, so if the other player is playing tit-for-tat it is in our individual self-interest to do the same. It is "forgiving," so that if the other player makes a mistake, he is condemned for only one period, after which he can return to our good graces by not confessing. And finally, it is simple.

There is, however, an important weakness in tit-for-tat. If my opponent is playing tit-for-tat, I get 5 by playing tit-for-tat, but I can also get 5 by the even simpler "altruistic" strategy of simply never confessing. In an evolutionary setting, where players in a population adopt strategies based on how well they are performing, this tie between tit-for-tat and altruism creates a tendency to drift back and forth randomly between the two. The unfortunate consequence is that if enough players drift into altruism, the "selfish" strategy becomes the best one. Indeed, Foster and Young (1990) show that for this reason, the selfish strategy emerges as the unique long-run survivor of evolutionary competition. An analogy in the field is the type of social utopianism that was widely espoused during the 1960s, arguing, for example, that it is uncivilized to punish criminals. The predictable consequence of these utopian reforms was more crime.
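The drift argument can be illustrated with long-run average payoffs. Here is a hypothetical sketch (the strategy labels, population shares, and the undiscounted limit-of-means payoffs are my own illustration, not Foster and Young's construction): against tit-for-tat (TFT), the always-confess strategy (ALLD) is punished after one period and averages only 1, but against the altruist (ALLC) it averages 6, so once drift has made altruists common, selfishness outearns both cooperative strategies.

```python
import numpy as np

# Long-run average per-period payoffs (row strategy vs column strategy),
# derived from the stage game above. Order: TFT, ALLC ("never confess"),
# ALLD ("always confess"). E.g. TFT vs ALLD plays 0, 1, 1, ... -> averages 1,
# while ALLD vs ALLC plays 6, 6, 6, ... -> averages 6.
A = np.array([[5, 5, 1],
              [5, 5, 0],
              [1, 6, 1]], dtype=float)

def fitness(shares):
    """Expected average payoff of each strategy in a population with these shares."""
    return A @ shares

# Mostly tit-for-tat: ALLD earns 1.5 against 5 for the cooperators -> cannot invade.
print(fitness(np.array([0.9, 0.1, 0.0])))

# After neutral drift toward altruism: ALLD earns 5.5 against 5 -> it invades.
print(fitness(np.array([0.1, 0.9, 0.0])))
```

Note that TFT and ALLC earn identical payoffs in any population with no ALLD present, which is exactly the tie that lets the population drift between them.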

The tie between tit-for-tat and altruism is rather artificial, however; it may exist in the artificial computer world of Axelrod (1984), but it does not exist either in the laboratory or in the field. Outside the artificial computer world, there is noise. Noise has many forms, but the most relevant fact is that intentions are neither always clear, nor are they always executed perfectly. A simple model is the Selten (1975) "trembling-hand" model: each player has a small chance of mistakenly confessing when he intended not to, and vice versa. This makes punishments real rather than hypothetical, because regardless of intentions, players sometimes confess.
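A small simulation makes the point concrete. This is a sketch under my own assumptions (the tremble probability and horizon are arbitrary choices, not from the post): two tit-for-tat players whose intended moves are flipped with a small probability. A single tremble triggers a long echo of alternating punishments, so realized payoffs fall well below the noiseless 5 per period:

```python
import random

# Stage-game payoffs from the matrix above: (my move, opponent's move) -> my payoff.
PAYOFF = {("N", "N"): 5, ("N", "C"): 0,
          ("C", "N"): 6, ("C", "C"): 1}

def noisy_tft_run(eps, periods, seed=0):
    """Two tit-for-tat players; each intended move flips with probability eps.
    Returns the per-period average payoff of each player."""
    rng = random.Random(seed)
    last = ["N", "N"]           # last *realized* moves (start as if both cooperated)
    totals = [0.0, 0.0]
    for _ in range(periods):
        intended = [last[1], last[0]]   # tit-for-tat: copy the other's last move
        realized = [("C" if m == "N" else "N") if rng.random() < eps else m
                    for m in intended]
        totals[0] += PAYOFF[(realized[0], realized[1])]
        totals[1] += PAYOFF[(realized[1], realized[0])]
        last = realized
    return [x / periods for x in totals]

print(noisy_tft_run(0.0, 1000))    # [5.0, 5.0] -- no noise, permanent cooperation
print(noisy_tft_run(0.05, 1000))   # both averages noticeably below 5
```

With noise, tit-for-tat spends long stretches in mutual or alternating punishment, whereas the altruistic strategy would absorb a tremble without retaliating; the two strategies no longer earn the same payoff, which is why the Axelrod-style tie disappears outside the noiseless computer tournament.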