Coin Puzzle: Predict the Other's Coin (pratikpoddarcse.blogspot.in)
33 points by pratikpoddar on Dec 30, 2012 | 45 comments



If they both agree to use their own coin's result as the guess of the OTHER person's coin, they should get it right 50% of the time instead of 25%.

Possible tosses:

TT win

HT lose

TH lose

HH win

Counter-intuitive, isn't it?

Why does using your own result improve the odds of winning the game!?
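
A quick sanity check of the table above (my own sketch in Python, nothing more):

  from itertools import product

  # Enumerate the four equally likely (A, B) toss combinations.
  wins = 0
  for a, b in product("HT", repeat=2):
      guess_a = a  # A guesses that B's coin matches A's own result
      guess_b = b  # B guesses that A's coin matches B's own result
      if guess_a == b and guess_b == a:
          wins += 1
  print(wins / 4)  # 0.5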


>Why does using your own result improve the odds of winning the game!?

I find the result very confusing as well. I suspect the source of this counterintuitiveness is that I oversimplified the puzzle when I first read it.

I simplified the puzzle to "A attempts to guess B's coin, and B attempts to guess A's coin", whereas in fact the true puzzle is "A and B together try to guess the total set of results".

When A and B guess randomly, they attempt to guess the total set without using the information they have available to them. When A and B both guess the results of their own coin, they use the information they have regarding the set of results (i.e. the results of their own coin) to reduce the problem space and increase their chances of success. Clearly, if you have heads, you know the chances of the set of results being heads-heads are much higher than choosing a random set of results.

In other words, the proposition for A is not "What are the odds of B having heads given that you have heads?" but rather "What are the odds of the set of results being heads-heads given that you have heads?"


I'm not sure the conditional probability argument explains it. Specifically, given A=H, the probability of (H,H) is the same as the probability of (H,T), so that alone shouldn't inform your guess.

I believe the core mechanism at play here is that coordination eliminates the possibility of one player being right and the other being wrong. One can partition the coin outcomes into "we have the same result" and "we have different results"; by agreeing in advance to always guess "we have the same result", you eliminate the "I'm right, you're wrong" and "you're right, I'm wrong" failure modes (50% of the probability space under random guessing).

One could accomplish the same by always guessing the opposite of what you obtain (i.e. always bet on "we got different results" which again has a 50% probability).

This is related to the "guess your own hat's color" riddle:

"You and a friend are on a game show. The host sets you facing each other at a table, blindfolded. A hat, known to be either white or black, is placed on each of your heads. The host removes the blindfolds, and asks each of you to write down the color of your own hat without communicating with each other. If either of you guesses correctly, you both win a prize. How do you guarantee success by pre-communicating a strategy?"

The answer here uses the same partitioning into "either we're the same, or we're different". One friend agrees to always guess his own hat to be the same as the friend's, while the other always guesses that his hat is different.
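
A short check of that strategy (my sketch, using white/black as in the riddle):

  from itertools import product

  # Player 1 guesses their own hat matches player 2's; player 2 guesses it differs.
  for hat1, hat2 in product(["white", "black"], repeat=2):
      guess1 = hat2                                     # "we're the same"
      guess2 = "black" if hat1 == "white" else "white"  # "we're different"
      assert guess1 == hat1 or guess2 == hat2           # at least one is always right
  print("at least one player is always correct")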


I think that you are correct and I am incorrect.


Nicely broken down!


I was struggling with why this helps but the answer is that this strategy correlates their correct guesses. They are either both wrong or both right.


It's surprising, and it also means that A and B can choose a strategy where they lose 100% of the time:

- A always uses A's result as a guess for B's result.

- B always uses opposite to B's result as a guess for A's result.

They would always lose then.


Conversely, they can each guess the negation of their own result. In that case they win on TH and HT and lose on TT and HH.


It doesn't, you're simply wrong. See my other comments.


Think of it like this:

What is the probability that two coins come up with the same face value? 0.5

If we use the guessing scheme where each player offers his own result as the guess for the other player's coin, then players A and B win if their coins are the same. Their coins are the same with probability 0.5.

Thus, Expectation[game] = Expectation[Game|A & B Win] * P(A & B Win) + Expectation[Game|A & B Lose] * P(A & B Lose) = -2 * 0.5 + 1 * 0.5 = -0.5, so C should not play.

If the players guessed randomly then the expectation of the game would be as you say it is.
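
Spelling out the arithmetic as a sketch (assuming C nets +$1 on a team miss and -$2 on a team hit):

  # Expected value per round for C under the "guess your own flip" strategy.
  p_team_wins = 0.5
  ev_for_c = (-2) * p_team_wins + (+1) * (1 - p_team_wins)
  print(ev_for_c)  # -0.5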


If you want to say someone is wrong, it would be better to provide an explanation of exactly why they're wrong rather than point everyone elsewhere and make them figure out the hole in the logic.

Point out the hole directly.


Humm let me see if I get this right

- Assuming everybody is honest

The chance of A getting the guess right is 0.5, same for B, so the joint probability should be 0.25 (or 1/4)

You're out $2 (the $3 minus the $1 you get) for something with a chance of 1/4 (in the other 3/4 you're being paid $1), SO

Looks like it's worth playing.

Expected losses: 2 * 1/4 < Expected wins: 1 * 3/4

Edit: see comment below

Edit 2: see the other comment below where the probability of a correct prediction can be 1/2. In that case: expected losses: 2 * 1/2 > expected wins: 1 * 1/2, so you should not play.


> You're out of 3 dollars for something with a chance of 1/4 (in the other 3/4 you're being payed $1)

You're out $2 (since you get the $1 from the other players at the beginning, no matter what)


Humm makes sense =) Knew I was forgetting something!


Here is a bit harder version of the puzzle, with 100 players instead of 2.

100 prisoners are given either a blue or a red hat, at random. Each prisoner is told the colors of every hat except their own, and has to guess their own hat's color with absolutely no communication. They win only if they all guess correctly. They can agree on a strategy beforehand. What is the optimal probability of success?


Each should guess whatever would make an even number of red hats. That makes it a 50% chance they're all wrong, and a 50% chance they're all correct -- thus consolidating all their "rightness" into a single block of probability.
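
A quick simulation of that strategy (my sketch; hats are assigned independently at 50/50, per the clarification below):

  import random

  def play_round(n=100):
      hats = [random.choice(["red", "blue"]) for _ in range(n)]
      total_red = sum(h == "red" for h in hats)
      guesses = []
      for own in hats:
          reds_seen = total_red - (own == "red")  # each prisoner sees every hat but their own
          # Guess whichever color would make the total number of red hats even.
          guesses.append("red" if reds_seen % 2 == 1 else "blue")
      return guesses == hats  # the team wins only if every guess is right

  random.seed()
  trials = 10_000
  print(sum(play_round() for _ in range(trials)) / trials)  # ~0.5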


That assumes a uniform distribution. If there were more red hats than blue hats in general, it would be more advantageous for everyone to always guess whatever makes an even number of the majority color, as they would win more than 50% of the time.


I'm not following. With 100 hats, an even number of one color means there's an even number of the other color, too.


Yeah, sorry, I'm mistaken. If one color is more probable than the other, you can win more by guessing odd rather than even, but you'd have to know the distribution. If you got 51 red hats every time, for example, you'd never win by guessing to make the red count even.

For this to work, however, you'd have to know the exact expected probability of each hat (and if it was even or odd), which is improbable.


Indeed, this is the solution. (One can also note that for the original puzzle of two prisoners, guessing parity reduces to checking whether their hats are the same or different.)


Following up on my other comment about the power of coordination to eliminate certain failure modes:

If everyone guesses randomly they stand a (1/2)^100 chance.

If they all guess the majority, with, say, blue as the tie-breaker just to have a deterministic algorithm, then they win twice as often because they capture the all-blues and all-reds cases, each with probability (1/2)^100.

I don't know if that's optimal but I always find these "improve random outcomes even with really stupid coordination mechanisms" situations amusing. If there's something better to be done I'd be curious to know :)

[Edit: raldi's clever answer that partitions by parity eliminates all but the we're-all-right and we're-all-wrong outcomes, and is clearly superior :)]
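
For a smaller n you can exhaustively compare the two strategies (my own sketch; n=10 keeps the enumeration small):

  from itertools import product

  def parity_wins(hats):
      # Everyone guesses the color that makes the total red count even,
      # so the team is right exactly when the red count really is even.
      return sum(h == "R" for h in hats) % 2 == 0

  def majority_wins(hats):
      # Everyone guesses the majority color among the hats they can see
      # (blue on a tie); the team wins only if every guess is right.
      for i, own in enumerate(hats):
          reds_seen = sum(h == "R" for j, h in enumerate(hats) if j != i)
          guess = "R" if reds_seen > (len(hats) - 1) - reds_seen else "B"
          if guess != own:
              return False
      return True

  n = 10
  assignments = list(product("RB", repeat=n))
  print("parity:  ", sum(map(parity_wins, assignments)) / len(assignments))    # 0.5
  print("majority:", sum(map(majority_wins, assignments)) / len(assignments))  # 2 / 2**n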


raldi's suggestion below looks reasonable, and brings it up to 50%.


At random from a pool of 100 hats, or at random from some undetermined distribution?


Each hat is independently red-blue 50/50.


The team of A and B can increase the chance from ¼ to ½ by consistently taking their output from step 2 and using it as the guess for step 3.

Same thing if they both agree to guess the inverse of step 2.


[deleted]


The way this works is that it essentially reduces two independent guesses (A=X && B=Y) into one guess (A=B).


This does not work because the text says "Players A and B build a team, they have one fair coin each" NOT one fair coin total.


_but_ they can communicate a strategy ahead of time in step 1.


To answer the actual question: whether C should play this game depends on C's estimated probability of A and B coming up with a winning strategy. In the real world, there would be many people with whom I would play this game.


Spoiler: the below contains an exact response.

-------------------------- The fact that there is no communication ensures that whether A is right and whether B is right are completely independent. Conceptually, predicting a coin flip is like making a statement "1" or "0" which will then be XOR'd with a 1 or 0 from a securely, randomly, uniformly generated OTP of which no copies exist. In other words, the plaintext is immediately lost forever and you just have the ciphertext.

As a result of this, we must truly consider that A being right is a 50/50 proposition. It is also independent of B being right.

Thus we have the following four cases:

A right, B right - Result value * Percent chance = EV

0, 0 = +1 * 0.25 = +0.25

0, 1 = +1 * 0.25 = +0.25

1, 0 = +1 * 0.25 = +0.25

1, 1 = -2 * 0.25 = -0.5

------------------------------- sum of above: +0.25. Therefore, as long as each team member is independently predicting the other's coin (which means that their prediction will be XOR'd with a random bit), C should play this game long-term.

(The value of +1 is because each round starts with the team giving C +1. If at the end of the round C must return 3, then this is +1 - 3 = -2 for the round.)

Now here is another interesting question. What if, under the same conditions, A and B both try to predict C's coin toss, of which there is only one? Should C now play? Here the answer is "No", because A can predict heads, B can predict heads, and then it looks like this:

A right, B right - Result value * Percent chance = EV

0, 0 = +1 * 50% = 0.5

0, 1 = +1 * 0% = 0 } not possible

1, 0 = +1 * 0% = 0 }

1, 1 = -2 * 50% = -1

-------------------

-0.5

In this case, C should not play. This is because here the events are not truly independent: there is a way to break the 25% / 25% / 25% / 25% split into 50% and 50%, namely by picking the same prediction together.

--------------------------


The problem with this analysis is that with the "guess the same as your flip" strategy, the case where A is right and B is wrong does not occur; likewise, the case where A is wrong and B is right does not occur. The results of each coin flip are independent, but the strategy produces a non-independent result.

  A, B, GuessA, GuessB, Winner, AmountC, Chance, EV
  H, H, H,      H,      AB,     -3,      0.25,   -0.75
  H, T, H,      T,      C,      +1,      0.25,   +0.25
  T, H, T,      H,      C ,     +1,      0.25,   +0.25
  T, T, T,      T,      AB,     -3,      0.25,   -0.75
Sum(EV) = -1 per round, so C should not play.


C loses 2, not 3, when he loses (he gets one from AB at the start of each round). C's EV should therefore be -0.5


No, you are wrong and hmexx is right.

By using the algorithm that hmexx describes (use the result of your own coin flip as the guess for the other's coin flip), what this really does is reduce the problem to "What is the probability that two coin flips are the same?"

The probability that two coin flips are the same is 50% (HH or TT vs. HT or TH). The algorithm could also be "Use your own coin flip and then guess the opposite of your own result" and the probabilities would be the same, i.e. 50%.

Given that C has a 50% chance of losing $2 and a 50% chance of winning $1, this is not a game that C should play, since the expected value is -$0.50.


Your analysis is incorrect because A and B are allowed to decide on a strategy ahead of time - and there are two simple and similar strategies that allow A and B to guess correctly, together, half the time.

The first is described by: 'If I get heads, I will guess you got heads; otherwise I will guess you got tails.' The second: 'If I get heads, I will guess you got tails; otherwise I will guess you got heads.'

These are functionally identical, in that in half of the cases they get the same result (the first strategy wins these) and in the other half they get opposite results (the second strategy wins these). They net $2 when they win and give up $1 when they lose; since they win half the time under either strategy, C should not play this game.


Play with different strategies:

  import random

  # Modify strategy here
  def turn():
      toss = random.randint(0, 1)
      guess = random.randint(0, 1)
      # Uncomment to make A/B win
      # guess = toss
      return toss, guess

  random.seed()

  score_ab, score_c = 0, 0

  for i in range(1, 1001):
      # Each round the team pays C $1 up front.
      score_ab -= 1
      score_c += 1
      toss_a, guess_a = turn()
      toss_b, guess_b = turn()

      # C pays out $3 only when both guesses are right.
      if toss_a == guess_b and toss_b == guess_a:
          score_ab += 3
          score_c -= 3

      print(i, 'ab:', score_ab, 'c:', score_c)

  if score_ab > score_c:
      print("Players A, B win")
  else:
      print("Player C wins")


But they _do_ get to communicate beforehand and could collude to always vote the value of the coin they tossed as their guess for the other coin toss?

Isn't that step 1 where they can agree on a strategy (of which randomly, independently guessing is just one possible strategy)?

Are you the author of the original problem? Perhaps the English prose could be tightened so that it matches the solution you have in mind.


Both A and B guess the same as the result of their own toss. Their answers are correct if the tosses are HH or TT, a 50% chance.

So no, C should not play this game.


Dysfunctional teams can never agree. By that I mean: A decides to use the strategy "guess what I flip" while B decides on the strategy "guess the opposite of what I flip", or vice versa. These teams can never win! So against dysfunctional teams, C can play the game no matter the payout.


Actually I've done the math on this approach and it still results in a losing game for C.

Instead of HH and TT being the winners, HT and TH become the winners with this approach. Any combination of opposite or agreement will result in the same net odds.

The way for C to win is for A or B to guess randomly.

A very interesting experiment would be to test how long it would take a random human subject A with no contact with a consistent B (but a memory for past results) to figure out how to win (in cases where they were told winning was possible, impossible, or neither).


Yes, but my point was that one team member uses HH & TT, while the other team member uses HT & TH. This is always a loss for the team regardless of how the flips turn out.
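
A quick enumeration of the mismatched strategies bears this out (my own sketch):

  from itertools import product

  # A guesses "same as my flip"; B guesses "opposite of my flip".
  wins = 0
  for a, b in product("HT", repeat=2):
      guess_a = a                         # A's guess for B's coin
      guess_b = "H" if b == "T" else "T"  # B's guess for A's coin
      if guess_a == b and guess_b == a:
          wins += 1
  print(wins)  # 0: the mismatched strategies can never both be right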


Maybe you could do even better than the obvious "guess the same as your coinflip" strategy, by using quantum tricks.


The question should really be rephrased to "Will C make money if he/she plays this game?" Maybe C doesn't like money, has too much of it but really only likes giving it out after coordinated acts of random chance. Then C would love this game regardless of A and B's strategy. Maybe C spends all of his/her free time at RPS tournaments, throwing singles at the winners like a rap video.


Anyone find a solution? I have an idea but want to check.


Yes. Break it down into the possible cases, assign probability for the case, and value for the case. Multiply each probability by its value and sum these up.
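
For example, under the "guess your own flip" strategy the breakdown looks like this (a sketch; the payoffs assume C keeps $1 on a miss and nets -$2 on a hit):

  # (A toss, B toss) -> value for C; each case has probability 1/4.
  cases = {
      ("H", "H"): -2,  # both guesses right, C pays out
      ("H", "T"): +1,
      ("T", "H"): +1,
      ("T", "T"): -2,  # both guesses right, C pays out
  }
  ev_for_c = sum(0.25 * value for value in cases.values())
  print(ev_for_c)  # -0.5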


Hmm. If A and B are unwitting investors, and C is a quant with inside information, OF COURSE C should take this "bet".

It's not like C would ever everrrr....Lie....

And when G(overnment) comes around to fix this evil fraud, they levy a fine of 10% of the profits. Everybody but A and B makes out like a bandit. Sounds like '09, doesn't it?

(tongue in cheek, only because there were multiple valid analyses of this question elsewhere in the comments.)



