I find this happens a lot with "thought" problems: I can't solve it (and can often prove that) because the rules of the problem are inadequately explained.
The problem specifically says they are communicating using the internet, so why not?
"Hey, I sent you a box. It's locked with a combo lock. Call me when you get it: I want to be talking to you when you see the surprise! I'll tell you the combo on the phone!"
The only way to avoid this possibility (of a man in the middle attack) is to be "introduced" by a third party you already trust. See e.g. https://en.wikipedia.org/wiki/Needham–Schroeder_protocol
In the absence of a trusted third party, you could be talking to anybody, through any number of lurking intermediaries. If you've got a trusted third party then the problem, as posed, is moot.
10 if it's the sequence of natural numbers
13 if it's the sequence of N such that N=2^n for natural numbers n, where N does not contain a 0
153 if it's the sequence of N such that the sum of the digits of N, each raised to the power of the number of digits in N, equals N.
However, I'm not sure that I understand your second example.
If N=2^n (assuming 2 to the power of n, not 2 xor n) then one would expect the sequence to be 1, 2, 4, 8, 16, etc.
If N=2^n using the C xor notation, then we would expect a sequence 3, 0, 1, 6, 7, 4, etc
Your third example is pure evil, and I'm glad that I never had you as a maths teacher ;-)
"Identify as many sequences as you can which fit this set of numbers, and tell me the next number in each sequence" is the non-trick actually being asked.
Someone recently pointed out to me that if you were designing hardware to (e.g.) refresh all memory locations in a system, or otherwise visit a set of 2^n sequential locations in no particular order, then using a maximal-length shift register of the appropriate width is actually simpler than a 'simple' adder or subtractor. That could be seen as a literal interpretation of Kolmogorov complexity.
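A minimal sketch of the idea, assuming a 4-bit Fibonacci LFSR; the bit width and the taps (from the primitive polynomial x^4 + x^3 + 1) are illustrative choices of mine, not from the comment:

```javascript
// A 4-bit maximal-length Fibonacci LFSR with taps at bits 3 and 2
// (polynomial x^4 + x^3 + 1). Starting from any nonzero seed it visits
// all 15 nonzero 4-bit values before repeating -- no adder required,
// just a shift and an XOR.
function lfsrCycle(seed) {
  const visited = [];
  let state = seed;
  do {
    visited.push(state);
    const bit = ((state >> 3) ^ (state >> 2)) & 1; // XOR the tap bits
    state = ((state << 1) | bit) & 0xF;            // shift left, feed back
  } while (state !== seed);
  return visited;
}

const order = lfsrCycle(1);
console.log(order.length); // 15 = 2^4 - 1 states (zero is excluded)
```

Mapping each state to a memory address gives you a visit order over 2^n - 1 locations "in no particular order"; real designs handle the all-zero address separately.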
11 if it's the sequence of N such that N=2^n for natural numbers n, where N does not contain a 24
1. Bolt Cutters
2. Break the box
3. Send the key design via the internet and a link to https://www.youtube.com/watch?v=8M-HYptOMJI
1. these are regular padlocks;
2. Alice and Bob already have a secure communications channel along which information, but not objects, can flow (the Internet);
...then the puzzle has a "cheese" solution: just communicate the metrics of the key along your secure channel, and have your counterparty reproduce the key using them! Padlock keys are essentially number sequences :)
I don't waste much time with them to be honest.
A sends their open padlock and a box to B. B puts their own open padlock in the box, locks it with A's padlock, and sends it back. A unlocks their padlock, puts the ring inside, locks the box with B's padlock, and sends it back.
Maybe it was just too hard for you? :P
And they are both promptly stolen, because neither is inside a padlocked box. "anything sent through the mail will be stolen unless it is enclosed in a padlocked box". Anything includes padlocks and boxes. In fact, even the padlocked box would be stolen because it's not inside a padlocked box. And if it were, the outer box would just be stolen. The puzzle as written is just turtles all the way down.
The proper form of the riddle would be that anything except for a padlocked box will be stolen from the mail.
That's the catch: A's open padlock will be stolen.
I think this satisfies the requirements: there's certainly some number of blue dots for which the statement would be false, namely zero.
"Alice has a blue dot" adds extra information (it's equivalent to "there is at least one blue dot, and Alice has it"), and that extra information is critical: even though it adds something, it removes useful information by reducing the information entropy of the situation, which stymies efforts to make inferences.
The problem wording is (arguably) ambiguous on that point.
This also reveals a strategy for townsfolk who wish to survive: simply inform a blue-dot of their color, breaking the logic chain. That winning move ought to factor into the strategy of these perfect logicians!
In a one red dot town, you could say “there is at least one blue dot”
So let's say there are two people with red dots, let's name them Ruth and Rudy. Ruth now knows that "not all dots are blue" (which is equivalent to "there is at least one red dot"). She sees Rudy with the red dot: Fine, here's the person with the red dot.
But wait a minute, why is Rudy not killing himself? If Rudy is the only person with a red dot, he should have seen only blue dots... however he knows there has to be at least one red dot, so if he only saw blue dots, the red dot must have been him, so he would have killed himself.
But he didn't, so that means he has seen at least one red dot, which isn't him, and isn't anybody else Ruth has seen because she only saw blue dots otherwise.
Oh no, it must be Ruth with the second red dot!
Would this mean the colony would self-implode?
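The day-counting above can be checked mechanically. This is a hypothetical little model of my own, encoding the standard induction as the rule "a person who sees d-1 red dots, with no deaths so far, deduces their own color on day d"; the function name and representation are not from the thread:

```javascript
// Simulate dot town after the announcement "there is at least one red
// dot". Each day, anyone who can now deduce their own dot color acts.
function dotTown(dots) {            // dots: array of "red" / "blue"
  for (let day = 1; day <= dots.length; day++) {
    const deaths = [];
    dots.forEach((color, i) => {
      // How many red dots does person i see on everyone else?
      const redsSeen = dots.filter((c, j) => j !== i && c === "red").length;
      // Seeing day-1 reds and no deaths so far forces "I am the last red".
      if (redsSeen === day - 1) deaths.push(i);
    });
    if (deaths.length > 0) return { day, deaths };
  }
  return null;
}

console.log(dotTown(["red", "red", "blue", "blue"]));
// both red-dotted people (indices 0 and 1) die on day 2, as in the
// Ruth-and-Rudy reasoning above
```

With a single red dot, that person dies on day 1; with k red dots, all of them die together on day k.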
That makes sense. What about the other commenter, who said something like “Alice has a blue dot.”
Wouldn’t that not lead to everyone’s death?
I feel like some timing procedures need to be defined.
Whether someone is "slower at solving logic problems" goes into IQ etc., and is not relevant to the logic problem (the same way "what if someone refuses to kill themselves over something so superficial?" isn't). One should assume they can all do the required math equally well, or else we would have been told otherwise.
The flaw in the proof is that it assumes that townsfolk only learn their dot color through counting arguments. If the stranger's "anything non-trivial" statement includes information beyond a raw count, then a townsperson may learn their dot color prematurely (e.g. by being told it directly in the statement) and the proof fails.
If it was the latter, a one person town would always die.
How do you do better than break even in the following game: 'A' chooses two distinct integers, writes them on slips of paper, and holds one out in each hand in a fist. You choose a hand and reveal a number. You must then guess whether the other number is higher or lower than the revealed one, winning $1 if you guess right and losing $1 otherwise.
My first instinct is to say "Based on your knowledge of Alice, assign a probability distribution over the pairs of integers she might pick. Then when one is revealed you should condition on that fact. Then just pick whichever of higher or lower is most likely."
The problem setter will object that we have no way of assigning a probability distribution to Alice's actions. But if that's true, what does it mean to "do better than break even"? Apparently it doesn't mean "win with probability >50%", because there is no probability of winning.
Anyways, this is also highly unintuitive for me, so I’m far from certain that the solution below is correct; having said that, I’m unable to find a flaw in it. If you can, I’d be interested in hearing it.
SPOILER / SOLUTION:
Pick a number - say 0 for the sake of it. After Alice reveals the first number, assume that the other number is on the opposite side of 0 (i.e. if Alice reveals a negative number, assume the other number is positive and say “higher”, and vice versa). If your number is within the interval of Alice’s numbers, you win; otherwise, you still have a 50% chance of winning.
Now personally I think this is silly. You know nothing about what Alice is going to do, so your own choice of number can't possibly matter. You might as well always pick zero rather than randomizing. The reason the orthodoxy disagree is because the problem they're answering is something like "Find a strategy such that if you knew which numbers Alice had picked, but hadn't yet chosen your own random numbers, then you would always anticipate a >50% chance of winning."
This resembles the definition of a 95% confidence interval: "A procedure for generating intervals from data, such that if you knew the true value of the parameter, but not the data, you would assign a 95% probability that the true value lies in the interval". Of course Bayesians turn it around and define a credence interval where you know the data but not the true value (which I think is better, because this actually is your state of knowledge).
Which is fun because while there are hundreds of puzzles that require people to be able to memorize perfect hash functions on the fly, or remember the state of an FSM 1000 states ago, or be capable of implementing the function required by the axiom of choice for a given situation...
this is the first puzzle I can think of that requires someone to be able to select two integers from (-∞, ∞) with perfect randomness
No, because you pick the hand (i.e. it's essentially random), so you still have a 50% chance of first revealing 20 and getting it right.
Choosing zero as a threshold deterministically does not work for all choices that Alice can make because, as you say, for some choices such as (10,20) it gives you exactly 50% chance to win. But as long as there is always a non-zero chance that you chose a threshold between Alice's numbers, then there is a > 50% chance that you win the game.
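A quick Monte Carlo sketch of that argument; `pickPair` and the two example distributions are stand-ins I made up for illustration, not anything specified in the puzzle:

```javascript
// Threshold strategy: reveal a random hand, guess "higher" if the
// revealed number is below threshold t, "lower" otherwise.
function winRate(pickPair, t, trials) {
  let wins = 0;
  for (let i = 0; i < trials; i++) {
    const [a, b] = pickPair();
    const firstIsA = Math.random() < 0.5;   // we choose a hand at random
    const shown = firstIsA ? a : b;
    const hidden = firstIsA ? b : a;
    const guessHigher = shown < t;
    if (guessHigher === (hidden > shown)) wins++;
  }
  return wins / trials;
}

// Alice's numbers straddle the threshold 0: we win every time.
const straddling = () => [-1 - Math.floor(Math.random() * 10),
                           1 + Math.floor(Math.random() * 10)];
console.log(winRate(straddling, 0, 100000)); // = 1.0 (0 is always between)

// Both numbers above the threshold, e.g. (10, 20): back to a coin flip.
const above = () => [10, 20];
console.log(winRate(above, 0, 100000));      // ≈ 0.5
```

So the overall win rate is 50% plus half the probability that the threshold lands strictly between Alice's two numbers, which is why a randomized threshold that has nonzero mass between every pair gives strictly more than 50%.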
The problem lies in the assumption about how Alice thinks.
The set of integers is countably infinite, so your probability to guess a specific number N is zero if the distribution is uniform (e.g. there isn't a higher likelihood of picking "smaller / closer to zero" numbers.) 
But because describing arbitrarily large numbers takes a lot of mortal time, you MUST assume that Alice, being mortal, will pick a number she could describe in her lifetime.
Therefore, the question is not valid. Alice does not truly have a domain over all the integers to choose from. She only has the ability to pick numbers she could write down on a slip of paper within her lifetime.
Since the paper is limited in how much information it can contain anyway, it therefore cannot represent any arbitrary integer from the infinite set.
So the question is re-written: Alice picks two integers from a finite pool.
By break even, I think you mean that the game is played multiple times, as long as necessary. If you are not following a martingale-like strategy (edit: you can't anyway, the bets are fixed) and respond randomly (or with full faith that this is your lucky day; it doesn't really matter), you are expected to break even, each guess winning with 50% chance.
How can we improve and make better than 50%?
'A' chooses two distinct integers each time, from an infinite pool of integers. For the first guess I can't do better than a 50% chance, but if I keep a history of the numbers that were picked, it feels like I can do better than chance, assuming A chooses their distinct integers uniformly at random. I don't have much idea how to judge uniformness in an unbounded set of integers, though, but maybe something like this: if the first hand is less than the average of all previous numbers, I'd say "the other number is higher", otherwise I'd say "the other number is lower". I feel like this would improve my chances to more than 50% in the very long term (at infinite plays, perhaps?) but I'm not able to prove it.
If A is not choosing their numbers randomly with a true RNG then I'd play many games at 50% chance then run their choices through a neural network to extract patterns then I'd do better than random for sure.
I suppose you could set any threshold x, and play so that if the first number is bigger than x you stick, otherwise you switch. This would be strictly better than random (break even) if x is within the range of numbers that A is picking from and the same as random if not.
It depends on when you measure your chances: before A has picked a generator this gives you >50% chance but if A has already picked a range to draw from that doesn't contain x then you're back to 50/50. Still, I think this would count as guaranteeing >50%?
You mean "greater than" and not "greater than or equal to"?
The latter is easy; lots of naive strategies accomplish that. But the former implies that you can declare a strategy and the puzzle offerer cannot then pick a distribution that defeats it. It's hard to imagine how you could get >50% for example if the puzzler's strategy is to pick N and N+1 for very large N.
pyk's solution above works 66.6% of the time, even on the first try...
From there, if A (the first revealed number) is less than C, then that narrows the remaining cases giving you a 2/3 chance. If A is greater than C, that also narrows the remaining cases and gives you a 2/3 chance as well.
On a number line, the cases are below.
If A > C then the six originally equally possible cases are narrowed to three cases:
B-----C-----A B < A
C-----A-----B B > A
C-----B-----A B < A
So you would guess B < A -- the first hand's number is higher with probability 2/3.
If A < C then the six originally equally possible cases are also narrowed to three cases:
A-----B-----C B > A
A-----C-----B B > A
B-----A-----C B < A
So you would guess B > A -- the first hand's number is lower with probability 2/3.
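The case count can be sanity-checked by simulation, under the assumption that A, B, and the pivot C are drawn independently from the same continuous distribution, so that all six orderings are equally likely (uniform on [0,1) here, purely for illustration):

```javascript
// Pivot strategy: if the revealed number A exceeds our pivot C, guess
// the hidden number B is lower; otherwise guess it is higher.
function pivotWinRate(trials) {
  let wins = 0;
  for (let i = 0; i < trials; i++) {
    const a = Math.random(), b = Math.random(), c = Math.random();
    const guessBLower = a > c;          // "B < A" iff A landed above the pivot
    if (guessBLower === (b < a)) wins++;
  }
  return wins / trials;
}

console.log(pivotWinRate(200000)); // ≈ 0.667, i.e. 4 of the 6 orderings
```

Of the six orderings, the strategy wins exactly the four listed above (B<C<A, C<B<A, A<B<C, A<C<B), hence 2/3. The catch, as noted elsewhere in the thread, is that this assumes Alice's numbers come from the same distribution as your pivot.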
This is impossible; there's no uniform measure on the integers (as σ-additivity makes it impossible to bound such a measure).
var prior = Math.pow(Math.random(),1);
var hand1 = Math.pow(Math.random(),10);
var hand2 = Math.pow(Math.random(),10);
> Win probability: 0.5757182
var prior = Math.pow(Math.random(),1);
var hand1 = Math.pow(Math.random(),0.1);
var hand2 = Math.pow(Math.random(),10);
> Win probability: 0.9026432
I'm not sure if that always holds true. Let's say Alice flips a coin and picks 3 or 4 for num1, then Alice flips another coin and picks either num1 - 2 or num1 + 2 for num2.
You come along and pick a number between 1 and 6. You have a non-zero chance of having picked a number between Alice's two numbers, but your chances of winning are not > 0.5.
If you picked a number between 2 and 5 though, your chances of winning are now > 0.5. Like jstanley said, you'd have to know how Alice picks numbers in order to win in the long run.
Wow. This is the key piece of information that makes the problem interesting, IMO. That's quite unintuitive.
If you know anything at all about how your opponent chooses numbers, you win in the long term.
If the range isn't limited, how would we even define a normalized probability distribution?
Let A have dimensions (a,b,c) and let B have dimensions (x,y,z). Assume A fits inside B.
We have (a+b+c)^2 = a^2 + b^2 + c^2 + 2ab + 2ac + 2bc. This is the sum of A's hypotenuse (space diagonal) squared and its surface area. The same holds for B.
Note that A's hypotenuse is at most that of B: the hypotenuse is the longest segment contained in A, and it needs to fit inside B somehow. Further, note that the surface area of A is less than that of B. To see this, consider the nesting of A inside B and note that both boxes' interiors are convex sets. Imagine inflating A inside of B by taking the sets A_t consisting of all points within B that are within distance t of a point in A. It is not hard to see that this inflating operation can only increase the surface area of A, and since the maximum surface area we can get is that of B, A has smaller surface area than B. Thus,
(a+b+c)^2 = (A hypotenuse)^2 + (A surface area) <= (B hypotenuse)^2 + (B surface area) = (x+y+z)^2. The claim follows.
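A LaTeX restatement of the derivation above, with d for a box's hypotenuse (space diagonal) and S for its surface area:

```latex
\[
(a+b+c)^2
  \;=\; \underbrace{a^2+b^2+c^2}_{d_A^2}
  \;+\; \underbrace{2(ab+ac+bc)}_{S_A}
  \;\le\; d_B^2 + S_B
  \;=\; (x+y+z)^2,
\]
\[
\text{and since both sides are nonnegative, } a+b+c \le x+y+z.
\]
```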
For "Unwanted Expansion" the answer is technically correct, but I am displeased that it doesn't prove there won't be any infinite loops, whereas analyzing an invariant in the tree would prove that.
For "Boxes in Boxes" it says "But, if we take ε to be huge", but how big is huge, and what if it isn't huge? It seems like something in the proof is being hand-waved over.
The natives and the suicides I enjoyed; they really made me go "aah!". The irony about the suicides is that if the people were too dumb to apply the logic, they'd survive.
The question "what if it isn't huge" makes no sense; we can pick ε, it's not some external given. (Actually, as mentioned, we're not picking it to be one specific value but rather letting it tend to infinity, but that's another matter.)
You could just as well say, "Because it's a finite expression."
"Boxes in Boxes" doesn't really fit the theme. It's a complicated elementary math problem that's hard to solve, but not "tricky" in way that makes it seem impossible.
If it happens that the permutation has no cycles of length greater than 50, this process will work every time and the prisoners will be spared.
Furthermore, if you simplify the case to two prisoners and two boxes, where each is allowed to open one box, the odds of "success" are clearly only 25%. What happens as the number of prisoners and boxes grows that improves the odds? This isn't a classic Monty Hall variation where the participants have additional options as the game progresses -- it's completely predetermined.
But there are 100 boxes, assigned at random. The prisoners can come up with a mapping function that predetermines which boxes they will open based on their names, but that function will have no relationship to the one (if any) that was used by the warden.
There is no way to guarantee that the first prisoner finds his name, and that seems to be true for all of the others. It must genuinely be a case where I haven't understood the problem correctly.
You have to find your pointer in a circular linked list. But it’s only singly-linked, so you start just in front of it and work around the links the long way. 30% of the time, all the lists are 50 elements or shorter, meaning everyone is guaranteed to succeed.
No it's still 50%, because first person opens box 1 and second person opens box 2. They both either live or die together.
Three people, 2 chances. Each just guessing independently would be 2/3 chance so the chance for all to win is (2/3)^3 or 30%. But if the first person opens box 1 then they will find their name or not (1/3) and if so the second and third opening 2 and 3 are guaranteed to find their name, so 33% total chance.
The basic idea is that you choose boxes in a dependent pattern so that group either all wins or all loses as much as possible. The more people that can win or lose at the same time the better the overall chance for the group.
As the problem is stated, the boxes remain where they are and must be reclosed after being opened. There are no other choices to be made, there's no way to retain or communicate any information about a particular prisoner's actions, and there are no order-dependent aspects to the problem. Everybody sees the same 100 closed boxes and gets to open 50 of them.
The communication happens before the people go in to open the boxes. In the 2-person 2-box 1-choice example, person 1 says to person 2 "I'll open box 1 and you open box 2".
With person 1 always opening box 1 and person 2 always opening box 2, what do you feel their chances are? List out the permutations and see.
Presumably the same exclusion principle that improves their odds from the "obvious" 25% to 50% will apply to any larger number of participants, and converge near 30%... but it's certainly unintuitive.
Worse, they actively confuse the issue by making the warden an adversary and adding another layer of randomness that's just a red herring. Instead imagine they just put their names in alphabetical order so box 1 is Andy, box 2 is Bob, and so on.
First consider if they were randomly ordered buttons and everyone had to press their button. The people can plan out any order of pressing buttons, it doesn't matter because each person has to win their own separate 50 in 100 bet. So this should be intuitive. (1/2)^100 is impossible odds.
The key part is this: "He then looks into the box belonging to the name he just found".
They are picking an order that's based on the arrangement of names in boxes, so they are no longer acting independently of the random box-name arrangement. If everybody was assigned some arbitrary starting box, they would not be making 100 separate choices anymore; they would be making 1 collective choice. But it's the same chance! Instead of making 100 separate, fairly likely choices, they are making 1 very unlikely choice (that the starting box order they picked is right).
The second key is starting with the box assigned to their own name.
So imagine this problem as a directed graph. Each person's box is a node and the name within is a directed edge to another node. Every node has exactly one edge into it and one edge out of it, and the edges are all random. It's incredibly unlikely to be one full circuit, though; instead there are little isolated circuits. For example, box 1 could have "Bob" and box 2 could have "Andy", and if anybody else picked box 1 as their starting box they would never find their name, instead finding 25 Bobs and 25 Andys.
By picking their own box, they're guaranteed to be on a circuit that has their name on it and they are excluding all other random arrangements, so Conan getting stuck on an Alex-Bob circuit of 2 is impossible. The only question is if it will take more than 50 steps.
Now imagine you are randomly constructing this directed graph one step at a time. You open box 1 (Andy's box) and randomly put in Conan, open box 3 (Conan's box) and put in Xavier - can't be Conan because you already used that name! Every next step in a chain has an increasing chance of coming back to the beginning. So on the second step there's 99 names left and one of them is Andy, on the third step there's 98 left and one is Andy. The longer the chain, the more likely it is to end and become a circuit. The warden won't be following this process to assign names, but since every step is random it's the same result as any other random way to assign names to boxes.
So the longer a chain gets, the more likely it is to close into a circuit, and the shorter every other circuit has to be. That, finally, is the reason why the win chance is higher than seems possible.
Why is it 30.5% instead of 23% or something else? Well, imagine the first chain being constructed. The chance of it getting longer than 50 is (99/100)(98/99)...(50/51), which telescopes to 50%. So the first person (Andy) opening his box has a 50% chance that his chain got longer than 50 before it became a circuit: exactly what you'd expect for the first box. But the chance of building the second chain longer than 50 is much smaller, because the first one took a lot of names out of the hat. Say the first circuit is 25 long; the second chain is (74/75)(73/74)..., so you'll have to ask a math genius how to estimate that, but intuitively it's much less likely.
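The cycle-following strategy described above is easy to check by simulation (a sketch; the function names and parameters are mine):

```javascript
// Each prisoner starts at their own box and follows the chain of names.
// The group wins iff the random permutation has no cycle longer than
// `limit`: a prisoner in a cycle of length L finds their name after
// opening exactly L boxes.
function prisonersWinRate(n, limit, trials) {
  let wins = 0;
  for (let t = 0; t < trials; t++) {
    // boxes[i] = name found in box i, a uniform random permutation
    // (Fisher-Yates shuffle of the identity).
    const boxes = Array.from({ length: n }, (_, i) => i);
    for (let i = n - 1; i > 0; i--) {
      const j = Math.floor(Math.random() * (i + 1));
      [boxes[i], boxes[j]] = [boxes[j], boxes[i]];
    }
    let allFound = true;
    for (let p = 0; p < n && allFound; p++) {
      let box = p, found = false;
      for (let step = 0; step < limit; step++) {
        if (boxes[box] === p) { found = true; break; }
        box = boxes[box];           // follow the name to its own box
      }
      if (!found) allFound = false;
    }
    if (allFound) wins++;
  }
  return wins / trials;
}

console.log(prisonersWinRate(100, 50, 10000));
// ≈ 0.31; the exact value is 1 - (1/51 + 1/52 + ... + 1/100) ≈ 0.3118
```

The same function with n = 2, limit = 1 reproduces the 50% from the two-prisoner example upthread: the permutation is either the identity (both win) or the swap (both lose).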
Some additional fun ones in the comments there.
Do you guys recommend this author's books such as https://www.amazon.com/Mathematical-Puzzles-Connoisseurs-Pet...
(It's hard to find the right search term for this that doesn't return a lot of non-mathematical brainteasers or stuff aimed at kids)
You'd probably also enjoy the works of Martin Gardner, who wrote a recreational mathematics column in Scientific American for 25 years, which has been collected across many volumes.
Edit: These are not puzzle books specifically, but books about puzzles.
My personal favorite that has an English translation available, although it's not about maths, is "Physics for Entertainment" by Perelman; you can get it on Amazon right now. It was published 100+ years ago and the problems are still relevant now. It discusses questions such as "When you jump out of a train, should you jump forward or backward?" and "Who gets wetter, a person standing in the rain or a person running in the rain?"
I guess basic maths and science hasn't changed that much, or maybe I was reading them when I was young, so I might not have known any better.
And here is the dots problem:
(In fact my knowledge of tennis is poor enough that I don't understand why the question would be considered difficult for somebody who played tennis.)
Edit: I re-read the solution and it doesn't require actual labeling (It's hard to imagine, because I'm not a native English speaker).
Suppose there are 100 residents and 2 blue dots.
The stranger tells everyone "there are not 98 blue dots"
This meets the condition of the question's "non-trivial" but the residents can relax: nobody has learned anything new. The proposed induction is broken, in my opinion.
But this isn't correct. Let's say n = 6, a-c are blue, d-f are red, and the information is that there is at least one blue dot. a knows there are at least two, but thinks b may believe there is only one blue dot. Moreover, a believes b may believe that c may believe that there are no blue dots.
So if a was red, the information would mean that b would commit suicide if c did not initially commit suicide, but since neither has happened, a must have a blue dot and therefore must commit suicide.
A convoluted process to be sure but does it make sense how the information transmits by ruling out hypotheticals?
> “Non-trivial” means here that there is some number of blue dots for which the statement would not have been true.
I think this is supposed to say "Trivial means..."
If there were 98 blue dots, your statement would not have been true, so it was trivial information.
Essentially, if the people of dot town can figure it out themselves, then you've given them a trivial piece of information. They know there aren't 98 blue dots, because they can already see that for themselves.
This requires that everyone know only blue or red eyes are allowed.
I’ve stopped trying to do puzzles. I’ve been misled by bad communication (intentional or otherwise) too many times.
It massively improved our play at Hanabi:
Probably would be a useful puzzle for improving your Spades or Bridge game, or any game you play with a partner with limited information.
Also, it gets much worse than The Random Native. George Boolos has a variation called "the hardest logic puzzle ever." It goes like this:
“Three gods A, B, and C are called, in some order, True, False, and Random. True always speaks truly, False always speaks falsely, but whether Random speaks truly or falsely is a completely random matter. Your task is to determine the identities of A, B, and C by asking three yes-no questions; each question must be put to exactly one god. The gods understand English, but will answer all questions in their own language in which the words for ‘yes’ and ‘no’ are ‘da’ and ‘ja’, in some order. You do not know which word means which.”
There's an ongoing effort out there to make this variation even harder: