Sunday, August 10, 2014

To cooperate or defect (besides coding): Prisoner's dilemma, a game theory example in R


Hello Computer Science and/or R enthusiasts.

This week I had the opportunity to try something that had been on my to-do list for a while. The idea came almost instantly after reading Dr. Richard Dawkins' book, The Selfish Gene (which was a birthday gift, thanks Andy).

I felt compelled to program my own implementation of the prisoner's dilemma and run my own version of the contest: trying different parameters of greediness and vindictiveness, just coding in the name of science.

You can download the code here

*****************
INTRODUCTION
*****************

Here's a basic definition of the prisoner's dilemma from this Wikipedia article:

The structure of the traditional Prisoners’ Dilemma can be generalized from its original prisoner setting. Suppose that the two players are represented by the colors, red and blue, and that each player chooses to either "Cooperate" or "Defect".

If both players cooperate, they both receive the reward, R, for cooperating. If Blue defects while Red cooperates, then Blue receives the temptation payoff T while Red receives the "sucker's" payoff S.

Similarly, if Blue cooperates while Red defects, then Blue receives the sucker's payoff S while Red receives the temptation payoff T. If both players defect, they both receive the punishment payoff P.


And to be a prisoner's dilemma game in the strong sense, the following condition must hold for the payoffs:

T > R > P > S

*****************
MATERIALS
*****************

I decided to program three basic strategy functions:

  1. tit.for.tat.bot: this simple strategy repeats the opponent's last choice
    1. This bot/strategy is parameter-free
  2. greedy.bot: this strategy is controlled by the greedy.level parameter; the closer this parameter is to 1.0, the higher the probability that the bot plays "defect". If this parameter is set to 0.5, the bot chooses randomly.
    1. This bot does not care about the previous action of the other player; it decides to "defect" or "cooperate" based only on its own greediness.
  3. vindictive.bot: this strategy is controlled by the vindictive.level parameter, which is used only when the previous action of the other player was "defect" (it works in the same sense that greedy.level affects the greedy.bot); otherwise the bot will "cooperate".
    1. This means that if the other player plays "cooperate", this bot will "cooperate" too, but whenever the other player plays "defect", this bot uses its vindictive.level to decide whether to play "defect" or "cooperate".
  4. All bots take three parameters as arguments (see the sketch after this list):
    1. action: last action of the other player
    2. greedy.level*
    3. vindictive.level
  5. *The greedy.level parameter in the tit.for.tat.bot and the vindictive.bot is used only when these bots play as player.1 and must decide the first action/move against player.2. After that event, this parameter is no longer used by these bots.
  6. Each point in the plots represents 1000 (iter=1000) plays of player.1 vs player.2
  7. Parameters for the payoff matrix: T=20, R=5, P=1 and S=0
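
To make this concrete, here is a minimal sketch of how the three bots and the payoff function might look. These are hypothetical implementations written for this summary; they follow the description above, but the downloadable code may differ in its details:

# Hypothetical sketches of the three strategies; each returns "defect" or
# "cooperate". 'action' is the opponent's last move (NA on the first move).
tit.for.tat.bot <- function(action, greedy.level = 0.5, vindictive.level = 0) {
  if (is.na(action))                       # first move when playing as player.1
    return(ifelse(runif(1) < greedy.level, "defect", "cooperate"))
  action                                   # otherwise repeat the opponent's move
}

greedy.bot <- function(action, greedy.level = 0.5, vindictive.level = 0) {
  # Ignores the opponent entirely; defects with probability greedy.level
  ifelse(runif(1) < greedy.level, "defect", "cooperate")
}

vindictive.bot <- function(action, greedy.level = 0.5, vindictive.level = 0.5) {
  if (is.na(action))                       # first move when playing as player.1
    return(ifelse(runif(1) < greedy.level, "defect", "cooperate"))
  if (action == "defect")                  # retaliate with prob. vindictive.level
    return(ifelse(runif(1) < vindictive.level, "defect", "cooperate"))
  "cooperate"                              # never defects unless provoked
}

# Payoff for player 1 given both actions (T=20, R=5, P=1, S=0)
payoff <- function(a1, a2) {
  if (a1 == "cooperate" && a2 == "cooperate") return(5)   # R: mutual cooperation
  if (a1 == "defect"    && a2 == "cooperate") return(20)  # T: temptation
  if (a1 == "cooperate" && a2 == "defect")    return(0)   # S: sucker's payoff
  1                                                       # P: mutual defection
}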

*****************
METHODS
*****************

  1. Try different pairwise comparisons of strategies, varying the greediness and the vindictiveness of the bots (a sketch of the match driver follows below)
  2. Make our clandestine fight club with the bots (1st RULE: You do not code about FIGHT CLUB)
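
Each pairwise comparison boils down to iterating the game and accumulating payoffs. A minimal driver might look like the following sketch (play.match and its arguments are names I made up for this illustration; it assumes simultaneous moves):

# Hypothetical driver: play bot1 (player.1) against bot2 (player.2) for
# 'iter' rounds and return each player's accumulated score.
play.match <- function(bot1, bot2, iter = 1000,
                       g1 = 0.5, v1 = 0.5, g2 = 0.5, v2 = 0.5) {
  last1 <- NA; last2 <- NA                 # no previous actions on the first move
  score <- c(player.1 = 0, player.2 = 0)
  for (i in seq_len(iter)) {
    a1 <- bot1(last2, g1, v1)              # each bot sees the opponent's last move
    a2 <- bot2(last1, g2, v2)
    score["player.1"] <- score["player.1"] + payoff(a1, a2)
    score["player.2"] <- score["player.2"] + payoff(a2, a1)
    last1 <- a1; last2 <- a2
  }
  score
}

# Example: tit.for.tat.bot vs a fairly greedy bot, as in section 1.1
play.match(tit.for.tat.bot, greedy.bot, g2 = 0.8)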

*****************
RESULTS
*****************

1.1) GREEDY BOT VS TIT FOR TAT BOT


Using different thresholds of greediness in a greedy.bot vs tit.for.tat.bot match, we can see that the tit.for.tat.bot copes very well as the greedy.bot increases its greediness: it cooperates when the adversary cooperates, but reacts when the other player decides to "defect".

1.2) VINDICTIVE BOT VS TIT FOR TAT BOT


Using different thresholds of vindictive.level in a vindictive.bot vs tit.for.tat.bot match. A) We can see that the vindictive.bot does not "defect" unless provoked. Therefore, as these bots "cooperate" throughout all the simulations, they reach a score of 5000, which is the maximum score for mutual cooperation over 1000 iterations. B) For this experiment, we increase the greedy.level and the vindictive.level in steps of 0.05. The greedy.level in the vindictive.bot only determines the probability to "defect" in the first move, and only when the vindictive.bot plays as player.1. We can see that the tit.for.tat.bot copes very well as the other bot increases its greediness and vindictiveness: it cooperates when the adversary cooperates, but reacts when the other player decides to "defect".


1.3) GREEDY BOT VS VINDICTIVE BOT


Using different thresholds of greedy.level in a greedy.bot against the effect of vindictive.level in a vindictive.bot. A) greedy.level and vindictive.level increased in equal proportion for the two bots. We can see that as the greedy.bot becomes greedier, it outperforms the vindictive.bot, because the vindictive.bot only triggers its self-defense strategy when provoked, and the probability of taking revenge is given by its vindictive.level. B) greedy.level and vindictive.level increased in inverse proportion between the two bots. This means that the greediest bot faces the most forgiving bot.


2.1) FIGHT CLUB: ROUND 1 (0.7 to 1.0 in greedy and vindictive.level)



Tournament of all vs all using parameters ranging from 0.7 to 1.0 for greedy.level and vindictive.level. A) Scatter-plot of each strategy: the x-axis shows the mean score of the strategy against all other strategies when it plays as player.1, and the y-axis when it plays as player.2. We can see that greedy.bot.1.0 (greedy.level=1.0) has the lowest score, followed by the vindictive.bots and the tit.for.tat.bot. The bots that get the highest scores are the ones with a greedy personality, but not "so greedy" after all; for example, the most successful bot only has a 0.7 probability of playing "defect". This experiment is somewhat biased, because the greedy.bots take advantage by exploiting the "nice bots" and get higher scores for that reason. B) This barplot shows the effect of greediness on the greedy.bot: as it becomes greedier, the scores tend to be lower. The opposite behavior is shown for the vindictive.bot.


2.2) FIGHT CLUB: ROUND 2 (0.8 to 1.0 in greedy and vindictive.level)

To avoid that bias, we now repeat the experiment with fiercer adversaries.


Tournament of all vs all using parameters ranging from 0.8 to 1.0 for greedy.level and vindictive.level. A) Scatter-plot of each strategy: the x-axis shows the mean score of the strategy against all other strategies when it plays as player.1, and the y-axis when it plays as player.2. We can see that greedy.bot.1.0 (greedy.level=1.0) has the lowest score, followed by the vindictive.bots and the not-so-greedy bots. Now the bot that wins the contest is the simplest strategy, tit.for.tat. B) This barplot shows the effect of greediness on the greedy.bot: as it becomes greedier, the scores tend to be lower. The opposite behavior is shown for the vindictive.bot. In general, we can conclude that the vindictive.bots perform better than the greedy.bots because they tend to cooperate unless provoked; in other words, they are nicer bots.
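
For completeness, here is a rough sketch of how such a round-robin could be driven, reusing the hypothetical play.match helper from the METHODS section (the roster construction and bot names are my own illustration, not the original code):

# Hypothetical round-robin: every parameterized bot plays every other bot,
# and we average each strategy's score by seat (player.1 vs player.2).
levels <- seq(0.8, 1.0, by = 0.05)
roster <- c(
  list(list(name = "tit.for.tat", bot = tit.for.tat.bot, g = 0.5, v = 0)),
  lapply(levels, function(x)
    list(name = paste0("greedy.bot.", x), bot = greedy.bot, g = x, v = 0)),
  lapply(levels, function(x)
    list(name = paste0("vindictive.bot.", x), bot = vindictive.bot, g = 0.5, v = x))
)
results <- data.frame()
for (p1 in roster) for (p2 in roster) {
  s <- play.match(p1$bot, p2$bot, g1 = p1$g, v1 = p1$v, g2 = p2$g, v2 = p2$v)
  results <- rbind(results, data.frame(p1 = p1$name, p2 = p2$name,
                                       s1 = s[["player.1"]], s2 = s[["player.2"]]))
}
aggregate(s1 ~ p1, data = results, FUN = mean)   # mean score as player.1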

*****************
DISCUSSION AND CONCLUSIONS
*****************

NICE "BOTS" FINISH FIRST, as Dr. Dawkins concluded in his book.

Dr. Dawkins illustrates the four conditions of tit for tat:

  1. Unless provoked, the agent will always cooperate.
  2. If provoked, the agent will retaliate.
  3. The agent is quick to forgive.
  4. The agent must have a good chance of competing against the opponent more than once.

In conclusion, it is better to be a nice human than a greedy one. 

Benjamin





Friday, August 1, 2014

UPDATE: 100 Prisoners, 100 lines of code, a game theory example in R

I was looking for game-theory posts on R-bloggers and found this very cool post. Naturally, I decided to read it, look at the code, and implement a version of it.

I liked the idea of the post so much that I thought it would be nice to get more "juice" out of it.

All the credit for this post goes to its original author.

The code is available here.

As Matt Asher describes in his original post:

# ***************
# INTRODUCTION
# ***************

In this game, the warden places 100 numbers in 100 boxes, at random with equal probability that any number will be in any box. Each convict is assigned a number. One by one they enter the room with the boxes, and try to find their corresponding number. They can open up to 50 different boxes. Once they either find their number or fail, they move on to a different room and all of the boxes are returned to exactly how they were before the prisoner entered the room.

The prisoners can communicate with each other before the game begins, but as soon as it starts they have no way to signal to each other. The warden is requiring that all 100 prisoners find their numbers, otherwise she will force them to listen to hundreds of hours of non-stop, loud rock musician interviews. Can they avoid this fate?

The first thing you might notice is that if every prisoner opens 50 boxes at random, each of them individually has a 0.5 probability of finding his number (so the chance that all 100 succeed is a vanishingly small (1/2)^100).

There's actually a strategy that gives the prisoners a much better chance that all of them find their numbers (about 31%, as we will see below).

For this post, we will call that strategy model; the no-strategy baseline we will call random.model.

The model strategy works as follows: each prisoner first opens the box at the position given by his own number. For example, prisoner 42 opens the box at position 42; say it contains the number 12. Since 42 is not equal to 12, he then opens the box at position 12, and so on, until he has opened 50 boxes or found his number.

The random.model strategy is no strategy at all: the prisoner simply opens boxes at random until he has opened 50 boxes or found his number.

Every time a prisoner finds his number, we will call this event a success.

Our interest is in finding the strategy that maximizes the success rate. We will run each simulation 1000 times.
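
To make the setup concrete, here is a minimal sketch of one simulated round under either strategy (my own compact rewrite for this post; Matt Asher's original code is organized differently):

# One round of the game: returns how many of the n prisoners found their number.
run.round <- function(strategy = c("model", "random"), n = 100, tries = 50) {
  strategy <- match.arg(strategy)
  boxes <- sample(n)                   # boxes[i] is the number hidden in box i
  successes <- 0
  for (prisoner in 1:n) {
    if (strategy == "model") {
      box <- prisoner                  # start at the box matching his own number
      for (k in 1:tries) {
        if (boxes[box] == prisoner) { successes <- successes + 1; break }
        box <- boxes[box]              # follow the number found to the next box
      }
    } else {
      opened <- sample(n, tries)       # open 50 distinct boxes at random
      if (prisoner %in% boxes[opened]) successes <- successes + 1
    }
  }
  successes
}

# Success counts over 1000 simulated rounds, as used in the plots below
model.runs  <- replicate(1000, run.round("model"))
random.runs <- replicate(1000, run.round("random"))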

# ***************
# RESULTS
# ***************

Let's take a look at the performance of each model individually with this histogram:


We can see that the random.model behaves as expected, given that by chance each prisoner has a 0.5 probability of finding his number by opening 50 random boxes. On the other hand, the model strategy has a very different distribution: there is a gap between 51 and 99 successes with a frequency of zero! Any suggestion to explain how this is possible?

Another way to plot the distribution of the success rates obtained from each model is by looking at a density plot:
 

This histogram tells the same story as the previous plots, but in a single figure:


And as usual, I saved the best plot for the grand finale.


What can we say about this plot? Before going there, I will continue with Matt's explanation:

The theoretical success rate of the model strategy is about 31%

One way to look at the distribution of numbers in boxes is to see it as a permutation of the numbers from 1 to 100. Each permutation can be partitioned into what are called cycles. A cycle works like this: pick any number in your permutation. Let's say it's 23. Then you look at the number in the 23rd place (i.e., the number in the 23rd box, counting from the left). If that number is 16, you look at the number in the 16th place. If that number is 87, go open box number 87 and follow that number. Eventually, the box you open up will have the number that brings you back to where you started, completing the cycle. Different permutations have different cycles.

The key for the prisoner is that by starting with the box that is the same place from the left as his number, and by following the numbers in the boxes, the prisoner guarantees that if he is in a cycle of length less than 50, he will eventually open the box with his number in it, which would complete the cycle he began. One way to envision cycles of different lengths is to think about the extreme cases. If a particular permutation shifted every single number over one to the left (and wrapped number 1 onto the end), you would have a single cycle of length 100. Box 1 would contain number 2, box 2 number 3 and so on. On the other hand, if a permutation flipped every pair of consecutive numbers, you would have 50 cycles, each of length 2: box 1 would have number 2, box 2 would have number 1. Of course if your permutation doesn't change anything you have 100 cycles of length 1.

As you can see from the histogram of the model strategy, you can never have between 50 and 100 winning prisoners. Anytime you have a single cycle of length greater than 50, for example 55, then all 55 prisoners who start on that cycle will fail to find their number. If no cycles are longer than 50, everyone wins.
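
This also explains the 31% figure: for k > 50, a random permutation of 100 numbers contains a cycle of length exactly k with probability 1/k, so the probability of failure is 1/51 + 1/52 + ... + 1/100 ≈ ln 2 ≈ 0.69, leaving about a 31% chance that everyone wins. We can check the cycle claim numerically with a small helper (my own addition, not from Matt's code):

# Cycle lengths of a permutation: the model strategy wins for everyone
# exactly when no cycle is longer than 50.
cycle.lengths <- function(perm) {
  seen <- rep(FALSE, length(perm))
  lens <- integer(0)
  for (start in seq_along(perm)) {
    if (seen[start]) next
    len <- 0; i <- start
    while (!seen[i]) { seen[i] <- TRUE; i <- perm[i]; len <- len + 1 }
    lens <- c(lens, len)
  }
  lens
}

mean(replicate(1000, max(cycle.lengths(sample(100))) <= 50))  # ≈ 0.31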

Now, looking at the boxplot, we would say: OK, using the model strategy we have about a 30% chance that every single one of the prisoners gets his number; on the other hand, the random strategy gives each prisoner, individually, a 50% chance of getting his number.

So, the model strategy is an all-or-nothing strategy: you can win the lottery or die trying. This is visible in the large standard deviation in the boxplot; the random.model strategy, on the other hand, is very consistent, with low dispersion values. This means that if the prisoners decide to use the model strategy, they face three possible outcomes: everybody wins, nobody does, or only a few do. With the random.model strategy, they can expect about 50 prisoners to win. Thinking of a prisoner as a logical individual, I think this is the best strategy he can use.

Share your opinions.

Thanks to Matt Asher for the idea

Benjamin