Agent Strategies and the Shadow of the Future
Tit-for-Tat vs. All-Defection
Note: to run the simulation referred to on this slide, use the Java Applet version. You may be directed to download the latest version of the Java plug-in.
In the graphic to the left, we set up two agents engaged in a series of Iterated Prisoner's Dilemma games. Press "Go" several times to advance the Society by several moves. Several things happen during each move, as described below.
One of the agents employs the All-Defection (All-D) strategy function, indicated by her red color; the other uses the Tit-for-Tat strategy function, shown by her blue color. Throughout the rest of this tutorial, we will continue to associate each strategy with a particular color.
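The two strategy functions can be sketched as follows. This is a minimal illustration, not the applet's code: each strategy maps the opponent's move history within the current Iterated PD instance to "C" (cooperate) or "D" (defect), and the names `all_d` and `tit_for_tat` are our own.

```python
def all_d(opponent_history):
    """All-Defection: defect on every move, regardless of history."""
    return "D"

def tit_for_tat(opponent_history):
    """Tit-for-Tat: cooperate on the first move of an instance,
    then copy whatever the opponent did on the previous move."""
    if not opponent_history:
        return "C"
    return opponent_history[-1]
```

Note that both functions are deterministic: given the same history, they always return the same move.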
While the two agents are connected by a cyan line, they are interacting within a single instance of an Iterated Prisoner's Dilemma. When the cyan line breaks off, the current instance of Iterated PD is over, and the next move will start a new instance. Here we have set the "shadow of the future" parameter δ to 1/2. Recall from the previous slides that δ is the probability that any given instance of Iterated PD continues after each move. Indeed, if you counted how often the cyan line breaks off after the first move of an Iterated PD, you would find that the fraction approaches 1 − δ = 50%.
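The continuation rule can be checked with a short simulation. This is a sketch under the stated rule (after each move, the instance continues with probability δ); the helper name `instance_length` is our own, not the applet's.

```python
import random

def instance_length(delta, rng):
    """Number of moves in one Iterated PD instance: after each move,
    the instance continues with probability delta."""
    n = 1
    while rng.random() < delta:
        n += 1
    return n

rng = random.Random(0)
delta = 0.5
lengths = [instance_length(delta, rng) for _ in range(100_000)]

# Fraction of instances that end right after the first move
# should approach 1 - delta = 0.5.
frac_one_move = sum(1 for n in lengths if n == 1) / len(lengths)
```

The instance length is geometrically distributed, so the expected number of moves per instance is 1/(1 − δ), i.e. 2 moves when δ = 1/2.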
Both strategies are deterministic (non-probabilistic), so the sequence of moves and associated payoffs is the same within each instance of Iterated PD:
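As a sketch of that fixed sequence, assuming the standard PD payoffs T = 5, R = 3, P = 1, S = 0 (the simulation's actual payoff matrix may differ), an instance between Tit-for-Tat and All-D unfolds like this:

```python
# Assumed standard payoff table: (player 1's move, player 2's move) -> payoffs.
# T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker's payoff).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play_instance(n_moves):
    """Play n_moves of Tit-for-Tat (player 1) vs All-D (player 2);
    return the list of (move1, move2, (payoff1, payoff2)) per move."""
    alld_history = []  # All-D's past moves, as seen by Tit-for-Tat
    log = []
    for _ in range(n_moves):
        m1 = "C" if not alld_history else alld_history[-1]  # Tit-for-Tat
        m2 = "D"                                            # All-Defection
        log.append((m1, m2, PAYOFFS[(m1, m2)]))
        alld_history.append(m2)
    return log
```

On the first move Tit-for-Tat cooperates while All-D defects, giving payoffs (0, 5); on every subsequent move both defect, giving (1, 1), until the instance ends.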
In a sense, every couple of moves Tit-for-Tat forgets just how badly All-D has treated her in the past: a new instance of Iterated PD starts, and All-D gets another chance to take advantage of Tit-for-Tat's propensity to cooperate when she has no prior information. Tit-for-Tat could clearly benefit from a longer memory (i.e., a higher δ) in this case.
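The benefit of a higher δ can be made concrete with a small computation. This sketch again assumes the standard PD payoffs (first move of each instance: Tit-for-Tat gets S = 0, All-D gets T = 5; every continuation move: both get P = 1); the function name is our own.

```python
import random

def avg_payoffs_per_move(delta, n_instances=50_000, seed=1):
    """Average per-move payoff of Tit-for-Tat and All-D over many
    Iterated PD instances whose length is governed by delta.
    Assumes standard payoffs: first move gives (0, 5), later moves (1, 1)."""
    rng = random.Random(seed)
    tft_total = alld_total = moves = 0
    for _ in range(n_instances):
        n = 1
        while rng.random() < delta:  # continue with probability delta
            n += 1
        tft_total += 0 + 1 * (n - 1)   # S on move 1, then P per move
        alld_total += 5 + 1 * (n - 1)  # T on move 1, then P per move
        moves += n
    return tft_total / moves, alld_total / moves
```

As δ grows, instances last longer, the one-sided first move matters less, and All-D's per-move advantage over Tit-for-Tat shrinks (at δ = 1/2 the expected per-move payoffs are 0.5 vs. 3; at δ = 0.9 they are 0.9 vs. 1.4).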