Seven Puzzles You Think You Must Not Have Heard Correctly (2006) [pdf] (dartmouth.edu)
255 points by jmount 10 months ago | 139 comments



Love in Kleptopia needs to be explained better. The problem can only be solved if you can affix two padlocks onto the box, and I was presuming the lock box had a single, normally shaped padlock eye, which would make such a thing impossible.

I find this happens a lot with "thought" problems: I can't solve it (and can often prove that) because the rules of the problem are inadequately explained.


Also, the problem assumes that padlocked boxes themselves are not stolen. Presumably we're expected to consider this a reasonable assumption, because a thief wouldn't bother stealing a box that they couldn't open. But just as with public-key cryptography, a thief who can both steal packages and forge return addresses could intercept the box, place their own padlock on it, and then execute the rest of the protocol to perform a classic man-in-the-middle attack.


Or just attach a lock that needs a combination (e.g. 3-7-15) instead of a key to open it, then tell her the combination once she has the box; the risk of theft is zero.

The problem specifically says they are communicating using the internet, so why not?

"Hey, I sent you a box. It's locked with a combo lock. Call me when you get it: I want to be talking to you when you see the surprise! I'll tell you the combo on the phone!"


The situation is actually a bit tricky. If they met over the internet, there's no way to guarantee the absence of a Man-in-the-Middle who has been allowing the romance to proceed without interference, but who tampers with the messages which give the address to which the ring must be sent (and who fakes the message which says the box has been received.)

The only way to avoid this possibility (of a man in the middle attack) is to be "introduced" by a third party you already trust. See e.g. https://en.wikipedia.org/wiki/Needham–Schroeder_protocol

In the absence of a trusted third party, you could be talking to anybody, through any number of lurking intermediaries. If you've got a trusted third party then the problem, as posed, is moot.


All of the other solutions are vulnerable to MITM as well, though.


Can't believe I didn't think of that before I got to the double padlock solution.


Reminds me of this: 1, 2, 3, 4, 5, 6, 7, 8, 9,... What comes next?

10 if it's the sequence of natural numbers

13 if it's the sequence of N such that N=2^n for natural numbers n, where N does not contain a 0

153 if it's the sequence of N such that the sum of the digits of N, each raised to the power of the number of digits in N, equals N.


I fully agree with the premise that any sequence of numbers has an essentially infinite choice of 'next number', depending on how you define your sequence. The simplest way to prove that: given a sequence of _n_ numbers, fit a polynomial through those points plus whatever you want the _(n+1)_-th term to be; since the sequence is incompletely specified, you have an infinite number of solutions. Or just drop in a Heaviside step function (https://en.wikipedia.org/wiki/Heaviside_step_function).

However, I'm not sure that I understand your second example.

If N=2^n (assuming 2 to the power of n, not 2 xor n) then one would expect the sequence to be 1, 2, 4, 8, 16, etc.

If N=2^n using the C xor notation, then we would expect a sequence 3, 0, 1, 6, 7, 4, etc

Your third example is pure evil, and I'm glad that I never had you as a maths teacher ;-)


It's the sequence of n such that 2^n doesn't contain the digit 0. This excludes 10 (1024), 11 (2048), and 12 (4096). See http://oeis.org/A007377
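For anyone who wants to poke at it, here's a minimal sketch (Node.js; `zerolessPowers` is a made-up helper name) that enumerates those exponents, using BigInt so the powers stay exact past 2^53:

```javascript
// Enumerate exponents n for which 2^n, written in decimal, contains no
// digit 0 (OEIS A007377). BigInt keeps the powers exact beyond 2^53.
function zerolessPowers(limit) {
  const ns = [];
  let pow = 2n;                                  // 2^1
  for (let n = 1; n <= limit; n++) {
    if (!pow.toString().includes('0')) ns.push(n);
    pow *= 2n;
  }
  return ns;
}

console.log(zerolessPowers(20).join(', '));
// 1, 2, 3, 4, 5, 6, 7, 8, 9, 13, 14, 15, 16, 18, 19
```

Note that 10, 11 and 12 drop out because 1024, 2048 and 4096 all contain a zero.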


Those problems always bothered me. I think that for any sequence of numbers there is an infinite number of next-in-sequence solutions regardless of the sequence length or numbers contained. One may be more obvious but you can put any number next and find a pattern that matches. Example - what if those are a sequence of digits in pi.


I think "What comes next?" is an incomplete question, without any context. "What comes next in the sequence of (blah)?" is a complete question with full context. But that question wouldn't make anybody feel superior.

"Identify as many sequences as you can which fit this set of numbers, and tell me the next number in each sequence" is the non-trick actually being asked.


I think the only sane approach is to apply a Kolmogorov-shaded Occam's Razor, and select, from the set of possible sequences which fit the pattern, the one which has the simplest generating function.


That can be tricky in itself, though. 1, 2, 3, 4, .... might be a sequence of positive integers, or it might be a subset of pseudo-random digits emitted by a maximal-length shift register sequence.

Someone recently pointed out to me that if you were designing hardware to (e.g.) refresh all memory locations in a system, or otherwise visit a set of 2^n sequential locations in no particular order, then using a maximal-length shift register of the appropriate width is actually simpler than a 'simple' adder or subtractor. That could be seen as a literal interpretation of Kolmogorov complexity.
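A minimal sketch of that idea (`lfsrStates` is a made-up helper; the 4-bit tap polynomial x^4 + x^3 + 1 used here is one known maximal-length choice):

```javascript
// A 4-bit Fibonacci LFSR with taps x^4 + x^3 + 1 (a maximal-length
// choice): starting from any non-zero seed it steps through all 15
// non-zero 4-bit states before repeating, using only shifts and XOR --
// no adder needed.
function lfsrStates(seed) {
  const states = [];
  let s = seed;
  do {
    states.push(s);
    const bit = (s ^ (s >> 1)) & 1;  // XOR of the two tap bits
    s = (s >> 1) | (bit << 3);       // shift right, feed the new bit in on top
  } while (s !== seed);
  return states;
}

const visited = lfsrStates(1);
console.log(visited.length);         // 15
console.log(new Set(visited).size);  // 15 distinct states; zero never appears
```

The all-zero state is the one fixed point, which is why it visits 2^n - 1 locations rather than 2^n.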


In philosophy of science, this is called 'underdetermination'.


In regards to the second example, I understand the sequence but it seems totally arbitrary. It could also be the sequence

    11 if it's the sequence of N such that N=2^n for natural numbers n, where N does not contain a 24
But then again that's your point, cheers ;-)


A if you’re counting in hexadecimal.


To get super pedantic, even if that were the case you could use a lockout/tagout hasp to attach two locks anyway.

https://www.media-partners.com/upload/i20121017160441/img1.j...


Unfortunate that they don't show multiple locks in that image.


But they clearly show multiple holes.


Another alternative would be to thread a small steel cable with looped ends large enough for two locks through the eye and then lock the loops together.


The problem is that if you introduce new things, then you can solve it any number of ways:

1. Bolt cutters. 2. Break the box. 3. Send the key design via the internet, and a link to https://www.youtube.com/watch?v=8M-HYptOMJI


Also, given that

1. these are regular padlocks;

2. Alice and Bob already have a secure communications channel along which information, but not objects, can flow (the Internet);

...then the puzzle has a "cheese" solution: just communicate the key's measurements along your secure channel, and have your counterparty reproduce the key from them! Padlock keys are essentially number sequences :)


Yeah, underdetermined/badly explained problems are just irritating, especially since you don't know what level the person is targeting, or whether they'll come back with a smug answer about some detail that was never specified.

I don't waste much time with them to be honest.


I initially had considered two padlocks being on the box, but ruled it out because "come on, that would just bring more attention to it such that they'd spend enough to break our measures".


I haven't looked at the solution but I don't see why it requires a box that can have two padlocks attached.

A sends their open padlock and a box to B. B puts their own open padlock in the box, locks it with A's padlock, and sends it back. A removes their padlock, puts the ring inside, locks it with B's padlock and sends it back.

Maybe it was just too hard for you? :P


> A sends their open padlock and a box to B

And they are both promptly stolen, because neither is inside a padlocked box. "anything sent through the mail will be stolen unless it is enclosed in a padlocked box". Anything includes padlocks and boxes. In fact, even the padlocked box would be stolen because it's not inside a padlocked box. And if it were, the outer box would just be stolen. The puzzle as written is just turtles all the way down.

The proper form of the riddle would be that anything except for a padlocked box will be stolen from the mail.
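Under that corrected rule, the intended double-padlock protocol can be sketched as a toy model (all names here are invented for illustration; `mail` steals any shipment that has no padlock on it):

```javascript
// Toy model of the double-padlock protocol under the rule above: the
// mail steals any shipment that has no padlock on it.
function mail(shipment) {
  if (shipment.locks.size === 0) throw new Error('stolen in transit');
  return shipment;
}

let box = { contents: 'ring', locks: new Set() };
box.locks.add('A');         // 1. Alice locks the ring in with her padlock...
box = mail(box);            //    ...and mails the box to Bob
box.locks.add('B');         // 2. Bob adds his own padlock...
box = mail(box);            //    ...and mails it back
box.locks.delete('A');      // 3. Alice removes her lock (Bob's still secures it)...
box = mail(box);            //    ...and mails it to Bob one last time
box.locks.delete('B');      // 4. Bob removes his own lock and opens the box
console.log(box.contents);  // ring
```

Every shipment carries at least one lock, so nothing is ever stealable in transit.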


“anything sent through the mail will be stolen unless it is enclosed in a padlocked box”

That’s why — A’s open padlock will be stolen.


In this solution the open padlock would be stolen, because it is not in a locked box.


In the Dot-town suicides, say the stranger says "Alice has a blue dot." Alice then kills herself. Why would the other residents die?

I think this satisfies the requirements: there's certainly some number of blue dots for which the statement would be false, namely zero.


That's not what the author intended by "anything about the number of blue dots". The author means "anything about the number of blue dots, but not who has them", because the proof of the solution relies on the fact that each person ("Alice") doesn't know whether their own dot is in the set the stranger speaks about but everyone else does know whether "Alice" is in the set.

"Alice has a blue dot" adds extra information (It's equivalent to "There is at least one blue dot", and Alice has a blue dot"), and that extra information is critical in that, even though it adds some information, it removes useful information by reducing the information entropy of the situation, which stymies efforts to make inferences.

The problem wording is (arguably) ambiguous on that point.


I think that's right. So the author's description of "anything non-trivial" is weakened to "anything non-trivial maintaining information symmetry", which feels constricting, certainly no longer "frighteningly general."

This also reveals a strategy for townsfolk who wish to survive: simply inform a blue-dot of their color, breaking the logic chain. That winning move ought to factor into the strategy of these perfect logicians!


I suppose then they will all simultaneously do it and all die. Or perhaps being perfect logicians they will hold a lottery and inform the winner of their state causing them to die alone for the good of the group.


Yeah, I was thinking about if the stranger said something like, “not all the dots are blue.” This is non-trivial by the definition given, and no one would have to kill themselves at all.


"not all the dots are blue" - in the case where there are n people with n-1 blue dots and 1 red dot, the person with the red dot kills themself, and then everyone else kills themselves because they know that the red dot person killed themself because they did not see any red dots.


Right, but if there were two red dots the town is fine.

In a one red dot town, you could say “there is at least one blue dot”


Would they be fine?

So let's say there are two people with red dots, let's name them Ruth and Rudy. Ruth now knows that "not all dots are blue" (which is equivalent to "there is at least one red dot"). She sees Rudy with the red dot: Fine, here's the person with the red dot.

But wait a minute, why is Rudy not killing himself? If Rudy is the only person with a red dot, he should have seen only blue dots... however he knows there has to be at least one red dot, so if he only saw blue dots, the red dot must have been him, so he would have killed himself.

But he didn't, so that means he has seen at least one red dot, which isn't him, and isn't anybody else Ruth has seen because she only saw blue dots otherwise.

Oh no, it must be Ruth with the second red dot!


Given 2 red dots, they would already know that not all dots are blue even before the stranger comes. At their first town meeting, everyone would know "not all dots are blue" and they would know that everyone else knew that information too.

Would this mean the colony would self-implode?


If you have a red dot, you only know that there is at least 1 red dot, and you don't know if that one person knows "not all dots are blue". Hence, it's new information when that person doesn't commit suicide.


Ok now it is making sense. The same would be true if you went to three dots.... the third person would think, “wait, why aren’t the two blue dotted people killing themselves? There must be a third blue dot... wait, the third blue dot must be me”

That makes sense. What about the other commenter, who said something like “Alice has a blue dot.”

Wouldn’t that not lead to everyone’s death?


In your three dots logic, this would have a red dot person kill themselves if there are two blue dots: The two blue dotted people aren't killing themselves because they are still thinking "wait, why isn't he killing himself" about the other.

I feel like some timing procedures need to be defined.


I think the 'they meet every night' is supposed to be the 'timing procedure'


I think that "Alice has a blue dot" is technically not a statement about the number of blue dots.


But how would they know how long to wait before they kill themselves? What if Rudy hasn't killed himself yet because he's slightly slower at solving logic problems?


Since they meet every night the implicit idea is that whoever is to kill themselves will do it by the end of the night or next morning -- so after that, in the next-next night's meeting they can take whether someone committed suicide or not into account.

Whether someone is "slower at solving logic problems" goes into IQ etc, and is not relevant to the logic problem (the same way "what if someone refuses to kill themselves over something so superficial" is -- one should assume they can all do the math it takes equally well, else we would have been told of that.


By induction and perspective-taking: nothing visible happens for the first few nightly meetings, then everyone with the announced color dies at once, and the rest follow the next night.
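One way to check the induction is a lock-step toy simulation (assuming the stranger announced "there is at least one blue dot", nightly meetings, and that deducing your own color, either color, is fatal; `dotTown` and its two reasoning rules are a made-up encoding of the argument):

```javascript
// Lock-step simulation of the induction. Returns the night on which
// each resident dies, given the announcement "at least one blue dot".
function dotTown(dots) {                    // dots: array of 'blue' | 'red'
  const n = dots.length;
  const deathNight = new Array(n).fill(null);
  for (let night = 1; night <= n + 1; night++) {
    const dying = [];
    for (let i = 0; i < n; i++) {
      if (deathNight[i] !== null) continue;
      const blues = dots
        .map((d, j) => ({ d, j }))
        .filter(o => o.j !== i && o.d === 'blue');
      const deaths = blues.map(o => deathNight[o.j]);
      if (deaths.every(t => t === null) && blues.length === night - 1) {
        dying.push(i);  // I see night-1 live blues: by induction I'm blue
      } else if (blues.length > 0 && deaths.every(t => t === blues.length)) {
        dying.push(i);  // the blues I see died a night "early": I'm red
      }
    }
    for (const i of dying) deathNight[i] = night;  // deaths are simultaneous
  }
  return deathNight;
}

console.log(dotTown(['blue', 'red', 'red']));    // [1, 2, 2]
console.log(dotTown(['blue', 'blue', 'red']));   // [2, 2, 3]
```

With k blue dots, all k blues die together on night k, and the reds follow on night k+1 once the blue deaths reveal their own color.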


I think that statement is still subject to the proof given. "Not all the dots are blue" just initializes K to {n}, and the induction proceeds as written.

The flaw in the proof is that it assumes that townsfolk only learn their dot color based on counting arguments. If the stranger's "anything non-trivial" statement includes information beyond just a raw count, then a townsfolk may learn their dot-color prematurely (e.g. by being told in the statement) and the proof fails.


If that were said to a population of 2 people, one with a blue dot and one with a red dot, the person with the red dot would kill themself immediately


Right, but isn’t the challenge supposed to be to prove that in every case, everyone dies? Not that ‘in at least one case, everyone dies’

If it was the latter, a one person town would always die.


Another very counterintuitive (for me) problem:

how do you do better than break even in the following game: 'A' chooses two distinct integers, writes them on slips of paper and holds one out in each hand in a fist. You choose a hand and reveal a number. You must then guess whether the other number is higher or lower than the revealed one, winning $1 if you guess right and losing $1 otherwise.


This problem frustrates me, I'm not quite convinced that it's well defined as written.

My first instinct is to say "Based on your knowledge of Alice, assign a probability distribution over the pairs of integers she might pick. Then when one is revealed you should condition on that fact. Then just pick whichever of higher or lower is most likely."

The problem setter will object that we have no way of assigning a probability distribution to Alice's actions. But if that's true, what does it mean to "do better than break even"? Apparently it doesn't mean "win with probability >50%", because there is no probability of winning.


I agree that it's not entirely strictly specified; in particular, there's no uniform distribution over the integers. I guess you could get around this by specifying it the way you did, or "parametrized" over some parameter, or by saying "for any distribution" (and mandating a single turn of the game)...

Anyways, this is also highly unintuitive for me, so I’m far from certain that the solution below is correct; having said that, I’m unable to find a flaw in it. If you can, I’d be interested in hearing it.

SPOILER / SOLUTION:

Pick a number - say 0 for the sake of it. After Alice reveals the first number, assume that the other number is on the opposite side of 0 (i.e. if Alice reveals a negative number, assume the other number is positive and say “higher”, and vice versa). If your number is within the interval of Alice’s numbers, you win; otherwise, you still have a 50% chance of winning.


I think the "orthodox" answer is that your method wouldn't work, because if Alice picks 10 and 20 and opens 10 first then you have a 0% chance of winning. Instead (they say) you should pick your number at random, so that it has at least some probability of being any given integer. For example you could pick the number n with probability 2^|n|/3. That way no matter which two numbers Alice picks there's always some probability your number will be between them.

Now personally I think this is silly. You know nothing about what Alice is going to do, so your own choice of number can't possibly matter. You might as well always pick zero rather than randomizing. The reason the orthodoxy disagree is because the problem they're answering is something like "Find a strategy such that if you knew which numbers Alice had picked, but hadn't yet chosen your own random numbers, then you would always anticipate a >50% chance of winning."

This resembles the definition of a 95% confidence interval: "A procedure for generating intervals from data, such that if you knew the true value of the parameter, but not the data, you would assign a 95% probability that the true value lies in the interval". Of course Bayesians turn it around and define a credence interval where you know the data but not the true value (which I think is better, because this actually is your state of knowledge).


It's important to remember that Alice is a perfect random number generator. She certainly isn't going to limit her numbers to what can be comfortably written on the paper, for instance.

Which is fun, because while there are hundreds of puzzles that require people to be able to memorize perfect hash functions on the fly, or remember the state of an FSM 1000 states ago, or be capable of implementing the necessary function required by the axiom of choice for a given situation...

this is the first puzzle I can think of that requires someone to be able to select, with perfect randomness, two integers from (-∞,∞)


I don't think it's necessary; I think the solution I posted works for any distribution (but obviously only gives you an edge for some distributions).


> if Alice picks 10 and 20 and opens 10 first then you have a 0% chance of winning.

No, because you pick the hand (i.e. it's essentially random), so you still have a 50% chance of first revealing 20 and getting it right.


Good point. So our methods are similar, but yours puts the randomness in the choice of hand rather than the choice of integer. But in the (10,20) case your method gets exactly 50% and not more than 50%.


This is why your choice of threshold matters. Your strategy should be to choose a threshold randomly from any distribution that is non-zero everywhere (such as a normal distribution), then choose a hand uniformly at random.

Choosing zero as a threshold deterministically does not work for all choices that Alice can make because, as you say, for some choices such as (10,20) it gives you exactly 50% chance to win. But as long as there is always a non-zero chance that you chose a threshold between Alice's numbers, then there is a > 50% chance that you win the game.
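That claim can be checked exactly rather than by sampling (a sketch under the stated strategy: half-integer thresholds to dodge ties, a uniformly random hand, guess "higher" iff the revealed number is below the threshold; `winProb` and the particular pmf below are illustrative stand-ins):

```javascript
// Exact win probability for the randomised-threshold strategy. If the
// threshold C lands strictly between Alice's numbers you always win;
// otherwise it's a fair coin flip on which hand you revealed first.
function winProb(a, b, pmf) {               // pmf: Map of threshold -> probability
  const [lo, hi] = a < b ? [a, b] : [b, a];
  let pBetween = 0;
  for (const [c, p] of pmf) if (c > lo && c < hi) pBetween += p;
  return 0.5 + 0.5 * pBetween;
}

// A truncated two-sided geometric pmf on half-integers -9.5 .. 10.5; in
// the ideal version the support is all half-integers, so pBetween is
// never zero for any pair Alice can pick.
const pmf = new Map();
let total = 0;
for (let k = -10; k <= 10; k++) total += Math.pow(0.5, Math.abs(k));
for (let k = -10; k <= 10; k++) {
  pmf.set(k + 0.5, Math.pow(0.5, Math.abs(k)) / total);
}

console.log(winProb(3, 7, pmf) > 0.5);      // true: some mass lies between 3 and 7
console.log(winProb(100, 101, pmf));        // 0.5: truncation leaves no mass there
```

The second line shows exactly why truncated (or deterministic) thresholds fail: any pair outside the support collapses back to a coin flip, which is what the infinite-support distribution avoids.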


Does not work according to [1], see followup puzzle 2 and its solution. At least if my interpretation of your proposed solution is correct, i.e. that you fix your threshold number in advance. Not sure if a delayed random choice would help.

[1] https://johncarlosbaez.wordpress.com/2015/07/20/the-game-of-...


Hmmm... If I interpret their proof (i.e. "Puzzle 2") correctly, they prove that there's no deterministic solution that yields probability > 50% for any two numbers... the idea of the solution I posted (which is deterministic) is that you get >= 50% for all numbers, and > 50% for some numbers.


This is actually funny. I would argue the solution is invalid, except that due to mortality it actually works.

The problem lies in the assumption about how Alice thinks.

The set of integers is countably infinite, so your probability of guessing a specific number N is zero if the distribution is uniform (i.e. there isn't a higher likelihood of picking "smaller / closer to zero" numbers). [1]

But because it would take a lot of mortal time to describe arbitrarily large numbers, you MUST assume that because Alice is mortal, she'll pick a number she could describe in her lifetime.

Therefore, the question is not valid. Alice does not truly have a domain over all the integers to choose from. She only has the ability to pick numbers she could write down on a slip of paper within her lifetime.

Since the paper is limited in how much information it can contain anyway, it therefore cannot represent any arbitrary integer from the infinite set.

So the question is re-written: Alice picks two integers from a finite pool.

[1] https://math.stackexchange.com/questions/30619/probability-o...


Hmm I'm not exactly good with maths (or puzzles for that matter) but here is a blind stab:

By break even, I think you mean that the game is played multiple times, as long as necessary. If you are not following a martingale-like strategy (edit: you can't anyway, the bets are fixed) and respond randomly (or with full faith that this is your lucky day; it doesn't really matter), you are expected to break even, winning each guess with 50% probability.

How can we improve and make better than 50%?

'A' chooses two distinct integers each time, from an infinite set. For the first guess I can't do better than 50%, but if I keep a history of the numbers that were picked, it feels like I can do better than chance, assuming A chooses their integers uniformly at random. I don't have many ideas about how to judge uniformness over an unbounded set of integers, but maybe something like: if the first hand is less than the average of all previous numbers, I'd say "the other number is higher", otherwise "the other number is lower". I feel like this would improve my chances beyond 50% in the very long term (at infinite plays, perhaps?) but I am not able to prove it.

If A is not choosing their numbers randomly with a true RNG then I'd play many games at 50% chance then run their choices through a neural network to extract patterns then I'd do better than random for sure.


You can do better than that. You can guarantee there is > 50% chance you win on the first guess.


Wow, that is very counterintuitive..

I suppose you could set any threshold x, and play so that if the first number is bigger than x you stick, otherwise you switch. This would be strictly better than random (break even) if x is within the range of numbers that A is picking from and the same as random if not.

It depends on when you measure your chances: before A has picked a generator this gives you >50% chance but if A has already picked a range to draw from that doesn't contain x then you're back to 50/50. Still, I think this would count as guaranteeing >50%?


> You can guarantee there is > 50% chance you win on the first guess.

You mean "greater than" and not "greater than or equal to"?

The latter is easy; lots of naive strategies accomplish that. But the former implies that you can declare a strategy and the puzzle offerer cannot then pick a distribution that defeats it. It's hard to imagine how you could get >50% for example if the puzzler's strategy is to pick N and N+1 for very large N.


Ok, my solution is needlessly complicated because unbounded numbers between -Infinity and Infinity has a mean of 0, so if we just pick 0 and say "if hand1 < 0 then next is bigger, otherwise smaller" strategy already gives 75% accuracy long term.

pyk's solution above works 66.6% even for the first try...


Oh that's interesting. Thank you for the clarification.


After writing this one out, this reminds me of the Monty Hall problem. In this case my guess is that you use a prior -- assume the two unknown numbers are A & B, and then assume a random integer yourself C.

From there, if A (the first revealed number) is less than C, then that narrows the remaining cases giving you a 2/3 chance. If A is greater than C, that also narrows the remaining cases and gives you a 2/3 chance as well.

On a number line, the cases are below.

If A > C then the six originally equally possible cases are narrowed to three cases:

A-----B-----C (impossible)

A-----C-----B (impossible)

B-----A-----C (impossible)

B-----C-----A B < A

C-----A-----B B > A

C-----B-----A B < A

So you would guess B < A -- the first hand's number is higher with probability 2/3.

If A < C then the six originally equally possible cases are also narrowed to three cases:

A-----B-----C B > A

A-----C-----B B > A

B-----A-----C B < A

B-----C-----A (impossible)

C-----A-----B (impossible)

C-----B-----A (impossible)

So you would guess B > A -- the first hand's number is lower with probability 2/3.


> and then assume a random integer yourself C.

This is impossible; there's no uniform measure on the integers (as σ-additivity makes it impossible to bound such a measure).


This works, but you don't get a probability of 2/3, because the cases aren't equally likely. There isn't a probability distribution for C such that for any A and B the probability of A<C<B is 1/3. We would have to have P(0<C<10)=1/3 and P(10<C<20)=1/3 and P(0<C<20)=1/3, which is impossible.


Why does this work with probability 2/3 then?

https://jsfiddle.net/q4qbewp1/


Because of the particular distributions that you happen to have chosen. Alice doesn't have to pick uniformly at random. Since 0^p=0 and 1^p=1 we can experiment with different distributions by using "Math.pow(Math.random(),p)" in place of "Math.random()". For example:

  var prior = Math.pow(Math.random(),1);
  var hand1 = Math.pow(Math.random(),10);
  var hand2 = Math.pow(Math.random(),10);
  > Win probability: 0.5757182
and

  var prior = Math.pow(Math.random(),1);
  var hand1 = Math.pow(Math.random(),0.1);
  var hand2 = Math.pow(Math.random(),10);
  > Win probability: 0.9026432
But the win probability will always be >0.5 so long as the "prior" probability distribution has a nonzero probability of being between Alice's two numbers.


> But the win probability will always be >0.5 so long as the "prior" probability distribution has a nonzero probability of being between Alice's two numbers.

I'm not sure if that always holds true. Let's say Alice flips a coin and picks 3 or 4 for num1, then Alice flips another coin and picks either num1 - 2 or num1 + 2 for num2.

You come along and pick a number between 1 and 6. You have a non-zero chance of having picked a number between Alice's two numbers, but your chances of winning are not > 0.5.

If you picked a number between 2 and 5 though, your chances of winning are now > 0.5. Like jstanley said, you'd have to know how Alice picks numbers in order to win in the long run.


> the win probability will always be >0.5 so long as the "prior" probability distribution has a nonzero probability of being between Alice's two numbers.

Wow. This is the key piece of information that makes the problem interesting, IMO. That's quite unintuitive.

If you know anything at all about how your opponent chooses numbers, you win in the long term.


It also tells us what Alice's optimal strategy is: pick the first number at random, and then select an adjacent integer as the second number. Thus, there's no space between them that your prior can assign any probability to.


Well, your strategy has to have some way of breaking ties, for when Alice's number is the same as yours. Let's say that you always say "higher" in that case. Then you win whenever your number is between Alice's two, or equal to the smaller of them. Equivalently, you could just pick a random half-integer.


Folks who are interested in reading more about this problem should know the search term "two-envelopes problem".

https://en.m.wikipedia.org/wiki/Two_envelopes_problem


This is the same as discussed by John Baez and others at https://johncarlosbaez.wordpress.com/2015/07/20/the-game-of-... right? (spoilers!)


Assume that the range of integers is limited by the largest positive or negative number that can be written on the limited space of the paper. Whenever the largest positive or negative number comes up your answer will be certain. Turn a profit over a few billion years.



Is the range of integers limited? Does she choose them randomly?

If the range isn't limited, how would we even define a normalized probability distribution?


I think there's an easier-to-visualize solution to the box problem.

Let A have dimensions (a,b,c) and let B have dimensions (x,y,z). Assume A fits inside B.

We have (a+b+c)^2 = a^2 + b^2 + c^2 + 2ab + 2ac + 2bc. This is the square of A's space diagonal plus its surface area. The same holds for B.

Note that A's diagonal is at most that of B: the diagonal of A is a line segment contained in B, and the longest segment that fits inside B is B's diagonal. Further, note that the surface area of A is less than that of B. To see this, consider the nesting of A inside B and realize that both boxes' interiors are convex sets. Imagine inflating A inside of B by taking the sets A_t consisting of all points within B that are within distance t of a point in A. It is not hard to see that this inflating operation can only increase the surface area, and since the maximum surface area we can reach is that of B, A has smaller surface area than B. Thus,

(a+b+c)^2 = (A diagonal)^2 + (A surface area) <= (B diagonal)^2 + (B surface area) = (x+y+z)^2. The claim follows.


I loved this. Some comments. Not really spoilers I hope...

For "Unwanted Expansion" the answer is technically correct but I am displeased that it doesn't prove there wont be any infinite loops. Whereas analyzing invariant in the tree should prove that.

For Boxes in Boxes it says "But, if we take ε to be huge", but how big is huge, and what if it isn't huge? Seems like something in the proof is being hand waved over.

The natives and the suicides I enjoyed; they really made me go "aah!". The irony of the suicides is that if the people were too dumb to apply the logic, they'd survive.


"If we take ε to be huge" just means "let's analyze the growth rate as ε goes to infinity". More formally, what's going on is that you have a function which is polynomial in ε and which is always positive; therefore the leading coefficient must be positive, as if it were negative, the polynomial would be eventually negative.

The question "what if it isn't huge" makes no sense; we can pick ε, it's not some external given. (Actually, as mentioned, we're not picking it to be one specific value but rather letting it tend to infinity, but that's another matter.)


Thanks. Nice explanation. It's been a while (18 years) since I've read maths proofs.


Turn the "suicides" into escaping prisoners, so being smart enough makes them survive.


Yes, the solution to "Unwanted Expansion" is simply "because we assume that math works". It's begging the question. If you don't know what he's assuming, you don't know what proofs are "interesting".

You could just as well say, "Because it's a finite expression."

"Boxes in Boxes" doesn't really fit the theme. It's a complicated elementary math problem that's hard to solve, but not "tricky" in way that makes it seem impossible.


The unstated observation made in Unwanted Expansion is that if every variable is equal to 2, then every operation in the expression must increase the total by at least 2 (note that only addition and multiplication are allowed, not subtraction or division). In other words, an expression with n terms or factors will always evaluate to at least 2n. This puts an upper bound on how far any finite-valued expression can be expanded.
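That bound is easy to sanity-check by brute force (a sketch; `minValue` is a made-up helper that minimises over every binary expression tree with L leaves, all variables set to 2, trying both allowed operations at each node):

```javascript
// Brute-force check of the bound: with every variable equal to 2 and
// only + and * allowed, an expression with L leaves evaluates to at
// least 2*L. Key fact: a*b >= a+b whenever a, b >= 2.
function minValue(L) {
  if (L === 1) return 2;                    // a bare variable
  let best = Infinity;
  for (let k = 1; k < L; k++) {             // split the leaves between subtrees
    const a = minValue(k), b = minValue(L - k);
    best = Math.min(best, a + b, a * b);    // try both operations at the root
  }
  return best;
}

for (let L = 1; L <= 8; L++) {
  console.log(L, minValue(L));              // minValue(L) comes out as exactly 2*L
}
```

Since both + and * are monotone in each argument over positive values, minimising each subtree separately is enough, and the minimum works out to exactly 2L.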


I was similarly displeased with "Unwanted Expansion". I'd prefer some sort of fixpoint algorithm, which could then be applied to the case with "-", which their version can't handle.


Problem 7 has an even more fiendish counterpart: https://en.wikipedia.org/wiki/The_Hardest_Logic_Puzzle_Ever.


To make it more fiendish, what if the gods / natives don't know about each other's TRUE/FALSE/RANDOM status (but they are excellent logicians, so they can intuit any information from answers to your previous questions).


Very interesting, but incredibly complex. I'm tempted to make a python implementation to play around with for better understanding.


The "boxes in boxes" problem actually turns out to be true in much greater generality, using a different solution: https://math.stackexchange.com/questions/1909085/suppose-a-b...


I always liked the prisoner box question, but I prefer the phrasing where the prisoners are assigned a number and the boxes are numbered. I feel like the "prisoners have to come up with a name to number mapping" step just gets in the way of the interesting part.


I don't understand one assertion that the author makes in the solution, though:

    If it happens that the permutation has no cycles of 
    length greater than 50, this process will work every 
    time and the prisoners will be spared.
Obviously that's not true as written, because the first prisoner has odds of 50% no matter what function or algorithm they use to choose the boxes they open. If they fail in their initial guesses, the game stops immediately and everyone dies. What am I missing?

Furthermore, if you simplify the case to two prisoners and two boxes, where each is allowed to open one box, the odds of "success" are clearly only 25%. What happens as the number of prisoners and boxes grows that improves the odds? This isn't a classic Monty Hall variation where the participants have additional options as the game progresses -- it's completely predetermined.


If it has cycles of 50 or less only, then they're guaranteed to find their own name (they start the cycle on the box corresponding to themselves, so that cycle must contain them). Incidentally, the "first prisoner" might as well be every prisoner, because they aren't allowed to observe each other, communicate, or modify the room.


If it has cycles of 50 or less only, then they're guaranteed to find their own name (they start the cycle on the box corresponding to themselves, so that cycle must contain them)

But there are 100 boxes, assigned at random. The prisoners can come up with a mapping function that predetermines which boxes they will open based on their names, but that function will have no relationship to the one (if any) that was used by the warden.

There is no way to guarantee that the first prisoner finds his name, and that seems to be true for all of the others. It must genuinely be a case where I haven't understood the problem correctly.


If no cycle is longer than 50 boxes (~30% chance of that being true), then by starting with the box that matches your number, you have a 100% chance of navigating to the box containing your number before your 50-box limit is reached. It’s impossible to start in the wrong cycle, because that cycle contains neither the pointer nor the value.

You have to find your pointer in a circular linked list. But it’s only singly-linked, so you start just in front of it and work around the links the long way. 30% of the time, all the lists are 50 elements or shorter, meaning everyone is guaranteed to succeed.


But what if your name is in box #51, which you aren't allowed to open?


Then there's a cycle of length > 50 and everyone loses.


Like sibling says, this only happens when there’s a cycle longer than 50. In that case, you’re all doomed.


> Furthermore, if you simplify the case to two prisoners and two boxes, where each is allowed to open one box, the odds of "success" are clearly only 25%.

No it's still 50%, because first person opens box 1 and second person opens box 2. They both either live or die together.

Three people, 2 chances. Each just guessing independently would give a 2/3 chance, so the chance for all to win is (2/3)^3, or 30%. But if the first person opens box 1, then they will find their name or not (1/3), and if so, the second and third opening boxes 2 and 3 are guaranteed to find their names, so a 33% total chance.

The basic idea is that you choose boxes in a dependent pattern so that the group either all wins or all loses as much as possible. The more people that can win or lose at the same time, the better the overall chance for the group.


No it's still 50%, because first person opens box 1 and second person opens box 2. They both either live or die together.

As the problem is stated, the boxes remain where they are and must be reclosed after being opened. There are no other choices to be made, there's no way to retain or communicate any information about a particular prisoner's actions, and there are no order-dependent aspects to the problem. Everybody sees the same 100 closed boxes and gets to open 50 of them.


> there's no way to retain or communicate any information about a particular prisoner's actions

The communication happens before the people go in to open the boxes. In the 2-person 2-box 1-choice example, person 1 says to person 2 "I'll open box 1 and you open box 2".

With person 1 always opening box 1 and person 2 always opening box 2, what do you feel their chances are? List out the permutations and see.


I'll grant that with two people, they have a 50% chance of survival, because there's no way that only one of them can be right. But with more than two, the odds seem to get worse in a hurry.

Presumably the same exclusion principle that improves their odds from the "obvious" 25% to 50% will apply to any larger number of participants, and converge near 30%... but it's certainly unintuitive.


Yeah the solution doesn't even try to explain it intuitively.

Worse, they actively confuse the issue by making the warden an adversary and adding another layer of randomness that's just a red herring. Instead imagine they just put their names in alphabetical order so box 1 is Andy, box 2 is Bob, and so on.

First consider if they were randomly ordered buttons and everyone had to press their button. The people can plan out any order of pressing buttons, it doesn't matter because each person has to win their own separate 50 in 100 bet. So this should be intuitive. (1/2)^100 is impossible odds.

The key part is this: "He then looks into the box belonging to the name he just found".

They are picking an order that's based on the arrangement of names in boxes, so they are no longer acting independently from the random box-name arrangement. If everybody was assigned some arbitrary starting box, they are not making 100 separate choices anymore; they are making 1 collective choice. But it's the same chance! Instead of making 100 separate fairly likely choices, they are making 1 very unlikely choice (that the starting box order they picked is right).

The second key is starting with the box assigned to their own name.

So imagine this problem as a directed graph. Each person's box is a node and the name within is a directed edge to another node. Every node has exactly one edge to it and one edge from it, and the edges are all random. It's incredibly unlikely to be one full circuit, though; instead there are little isolated circuits. For example, box 1 could have "Bob" and box 2 could have "Andy", and if anybody else picked box 1 as their starting box they would never find their name, instead finding 25 Bobs and 25 Andys.

By picking their own box, they're guaranteed to be on a circuit that has their name on it and they are excluding all other random arrangements, so Conan getting stuck on an Alex-Bob circuit of 2 is impossible. The only question is if it will take more than 50 steps.

Now imagine you are randomly constructing this directed graph one step at a time. You open box 1 (Andy's box) and randomly put in Conan, open box 3 (Conan's box) and put in Xavier - can't be Conan because you already used that name! Every next step in a chain has an increasing chance of coming back to the beginning. So on the second step there's 99 names left and one of them is Andy, on the third step there's 98 left and one is Andy. The longer the chain, the more likely it is to end and become a circuit. The warden won't be following this process to assign names, but since every step is random it's the same result as any other random way to assign names to boxes.

So the longer the chain the more likely to become a circuit, and the shorter every other circuit has to be, is finally the reason why the win chance is higher than seems possible.

Why is it 30.5% instead of 23% or something else? Well, imagine the first chain being constructed. The chance of it getting longer than 50 is (99/100)(98/99)...(50/51), which telescopes to 50%. So the first person (Andy) opening his box has a 50% chance that his chain got longer than 50 before it became a circuit - exactly what you'd expect for the first box. But the chance of building the second chain longer than 50 is much smaller, because the first one took a lot of names out of the hat. Say the first circuit is 25 long; the second chain is (74/75)(73/74)... so you'll have to ask a math genius how to estimate that, but intuitively it's much less likely.
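For what it's worth, a quick Monte Carlo sketch (my own; names and boxes simplified to a permutation of 0..99) reproduces the ~31% figure:

```python
import random

# Boxes 0..99 hold a random permutation of 0..99. Prisoner i opens box i,
# then the box named by whatever number he finds, for at most 50 openings.
# The group succeeds exactly when the permutation has no cycle longer than 50.

def everyone_survives(perm, limit=50):
    for prisoner in range(len(perm)):
        box = prisoner
        for _ in range(limit):
            if perm[box] == prisoner:
                break  # found own number within the limit
            box = perm[box]
        else:
            return False  # stuck in a cycle longer than `limit`
    return True

random.seed(42)
trials = 20000
perm = list(range(100))
wins = 0
for _ in range(trials):
    random.shuffle(perm)
    wins += everyone_survives(perm)
print(f"group survival rate: {wins / trials:.3f}")  # hovers near 0.31 (limit is 1 - ln 2)
```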


If the evil dictator does the naming/numbering then they can guarantee failure. The randomisation of the numbering of the prisoners is important to avoid that.


Previous discussion: https://news.ycombinator.com/item?id=12380879

Some additional fun ones in the comments there.


Where can I find more of these?

Do you guys recommend this author's books such as https://www.amazon.com/Mathematical-Puzzles-Connoisseurs-Pet... ?

(It's hard to find the right search term for this that doesn't return a lot of non-mathematical brainteasers or stuff aimed at kids)


Winkler's book looks pretty good, yes.

You'd probably also enjoy the works of Martin Gardner, who wrote a recreational mathematics column in Scientific American for 25 years, which has been collected across many volumes.


I quite like William Poundstone's books. In this case these two are relevant: https://www.amazon.co.uk/How-Would-Move-Mount-Fuji/dp/031677... https://www.amazon.co.uk/Prisoners-Dilemma-Neumann-Theory-Pu...

Edit: These are not puzzle books specifically, but books about puzzles.


The logician Raymond Smullyan wrote many books full of logic puzzles, this one being the most well known:

https://en.wikipedia.org/wiki/To_Mock_a_Mockingbird


Yes, if you liked these you'll probably enjoy Winkler's books. If you're going to get exactly one of them, I suggest the first rather than the second, but both are good.


Book of Enigmas by Fabrice Mazza has puzzles similar to some of these in the PDF.


Awesome post! I kind of want to find more of these types of things (solutions to seemingly complex questions). If anyone has recommendations, I'd love to hear them.


I read quite a few books of that kind when I was young. I didn't read in English, but "One hundred thousand whys" - a maths one in Chinese - was great, and so is this [1] in Vietnamese. They are all old books, and some have solutions we can now figure out by using a computer, but in general they are really cool.

My personal favorite that has an English translation available, although it's not about maths, is "Physics for Entertainment" by Perelman -- you can get it on Amazon right now. It was published 100+ years ago and the problems are still relevant now. It discusses questions such as "When you jump out of a train, should you jump forward or backward?" and "Who is wetter, a person standing in the rain or a person running in the rain?"

I guess basic maths and science hasn't changed that much, or maybe I was reading them when I was young, so I might not have known any better.

1: http://khoia0.com/PDF-Files/80-BAI-TOAN-THONG-MINH_handuc_v2...


I was linked to this blog from a post here the other day. It has a lot of these kinds of questions and the solutions. Here is the prisoners problem:

http://datagenetics.com/blog/december12014/index.html

And here is the dots problem:

http://datagenetics.com/blog/october22012/index.html


The worst question of the bunch is undoubtedly the tennis question. A very clever alien could answer the other six, but having never been exposed to the rules of tennis would be hopeless against the Wimbledon question.

(In fact my knowledge of tennis is poor enough that I don't understand why the question would be considered difficult for somebody who played tennis.)


Spent quite a while on the Wimbledon problem before giving up and reading the answer. But the answer is wrong, because they don’t play tiebreakers at Wimbledon (or any of the grand slams besides the US Open). Pretty frustrating TBH. Kind of felt the same about the padlocked boxes one too, although at least that one is more just a little ambiguous vs outright wrong.


They don't play tiebreakers in the fifth set. All other sets work normally. The answer is correct because it's only talking about the first three sets.


Yeah, but it's so easy! Why is that considered a hard problem? Just because most people are unfamiliar with tiebreakers in tennis?


Honestly I already lost interest after reading the solution to the first problem, because there's no indication that the prisoners are allowed to alter the boxes in advance. This is such a dumb "gotcha".

Edit: I re-read the solution and it doesn't require actual labeling (It's hard to imagine, because I'm not a native English speaker).


What are you talking about? They don't alter the boxes in any way, they just say "hey, I'm number 1, you're number 2, etc...".


you're right, thanks for pointing it out.


The Dot Town Suicides appears to be incorrect as stated.

Suppose there are 100 residents and 2 blue dots. The stranger tells everyone "there are not 98 blue dots"

This meets the question's definition of "non-trivial", but the residents can relax: nobody has learned anything new. The proposed induction is broken, in my opinion.


I thought this initially as well: if there are many of each colour, and the only information is that there is at least one blue dot, how could this be new information to trigger the suicides?

But this isn't correct. Let's say n = 6, a-c are blue, d-f are red, and the information is that there is at least one blue dot. a knows there are at least two, but thinks b may believe there is only one blue dot. Moreover, a believes b may believe that c may believe that there are no blue dots.

So if a was red, the information would mean that b would commit suicide if c did not initially commit suicide, but since neither has happened, a must have a blue dot and therefore must commit suicide.

A convoluted process to be sure but does it make sense how the information transmits by ruling out hypotheticals?
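If it helps, here's a toy simulation (mine, not from the paper) of that induction. Note it hard-codes the known deduction rule - a blue-dotted resident who sees night − 1 blue dots concludes their own dot is blue - rather than deriving it from epistemic first principles:

```python
# Toy model: n_blue residents have blue dots, n_red have red, and a stranger
# announces "at least one dot is blue". Hard-coded rule: seeing no departures
# through night d-1 rules out every world with fewer than d blue dots, so a
# blue resident who sees d-1 blue dots leaves on night d.

def night_blues_depart(n_blue, n_red):
    residents = ["blue"] * n_blue + ["red"] * n_red
    departed = [False] * len(residents)
    night = 0
    while not all(departed[:n_blue]):
        night += 1
        leaving = []
        for i, colour in enumerate(residents):
            if departed[i] or colour != "blue":
                continue
            seen_blue = sum(1 for j, c in enumerate(residents)
                            if j != i and c == "blue" and not departed[j])
            if seen_blue == night - 1:  # the deduction fires on this night
                leaving.append(i)
        for i in leaving:
            departed[i] = True
    return night

# With b blue dots, all the blues work it out together on night b.
for blue, red in [(1, 5), (2, 4), (3, 3), (6, 94)]:
    print(f"{blue} blue dots -> departure on night {night_blues_depart(blue, red)}")
```

So in the n = 6, three-blues example above, a, b, and c all leave together on night 3, once the nested hypotheticals have been ruled out one night at a time.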


The question is stated correctly, I believe the mix up is in the part about triviality.

> “Non-trivial” means here that there is some number of blue dots for which the statement would not have been true.

I think this is supposed to say "Trivial means..."

If there were 98 blue dots, your statement would not have been true, so it was trivial information.

Essentially, if the people of dot town can figure it out themselves, then you've given them a trivial piece of information. They know there aren't 98 blue dots, because they can already see that for themselves.


Unless "There are not 98 blue dots" also means "There are not 2 red dots"? Then the process starts and months later all red dots leave.

This requires that everyone know only blue or red eyes are allowed.


Likely posted because problem #1 was recently on the front page:

https://news.ycombinator.com/item?id=16984815


The nice thing about this is that we can use Dot-town as a halting oracle, which can then be used to solve any specific undecidable problem we'd want to solve. Make the machine output a set of dots after running some unknown computation, such as a search for twin primes. The residents will kill themselves depending on the results.


How so?


For a proposition P, make your statement of the form "there are N blue dots iff P".


I would love to watch/hear someone go about solving these problems, just to see their approach and thought patterns.


Communicating badly and then acting smug when you're misunderstood is not cleverness.

https://www.xkcd.com/169/

I’ve stopped trying to do puzzles. I’ve been misled by bad communication (intentional or otherwise) too many times.


As to the dots, I enjoyed this variation from xkcd, where I first encountered it: https://xkcd.com/blue_eyes.html

It massively improved our play at Hanabi: https://boardgamegeek.com/boardgame/98778/hanabi

Probably would be a useful puzzle for improving your Spades or Bridge game, or any game you play with a partner with limited information.

Also, it gets much worse than The Random Native. George Boolos has a variation called "the hardest logic puzzle ever." It goes like this:

“Three gods A, B, and C are called, in some order, True, False, and Random. True always speaks truly, False always speaks falsely, but whether Random speaks truly or falsely is a completely random matter. Your task is to determine the identities of A, B, and C by asking three yes-no questions; each question must be put to exactly one god. The gods understand English, but will answer all questions in their own language in which the words for ‘yes’ and ‘no’ are ‘da’ and ‘ja’, in some order. You do not know which word means which.”

There's an ongoing effort out there to make this variation even harder: https://arxiv.org/abs/1206.1926


I found the "trick" solution to Unwanted Expansion to be unsatisfying: unless I am mistaken, it assumes that all of the values must be positive integers, which was not stated as being the case.


No, it's about the structure of the expression expanding indefinitely. If that happened, and all the values were positive, then the value of the expression would also tend to infinity.


Ah, thank you for clearing up my misconception! That makes a lot more sense.



