How random can you be? (expunctis.com)
335 points by mdp on Mar 8, 2019 | hide | past | favorite | 150 comments

Here's a fun trick I learned from Flajolet and Sedgewick's Analytic Combinatorics (available online, page 52). I imagine it will seem incredible when tried with an actual classroom:

> Révész in [Strong theorems on coin tossing] tells the following amusing story attributed to T. Varga: “A class of high school children is divided into two sections. In one of the sections, each child is given a coin which he throws two hundred times, recording the resulting head-and-tail sequence on a piece of paper. In the other section, the children do not receive coins, but are told instead that they should try to write down a ‘random’ head-and-tail sequence of length two hundred. Collecting these slips of paper, [a statistician] then tries to subdivide them into their original groups. Most of the time, he succeeds quite well.”

> The statistician’s secret is [...] in a randomly produced sequence of length 200, there are usually runs of length 6 or more: the probability of the event turns out to be close to 97%. On the other hand most children (and adults) are usually afraid of writing down runs longer than 4 or 5 as this is felt as strongly “non-random”. The statistician simply selects the slips that contain runs of length 6 or more as the true random ones. Voilà!
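The 97% figure is easy to check numerically. Here's a quick sketch (mine, not from the book) that computes the exact probability of a run of length 6 or more in 200 fair flips, by tracking the distribution of the current run length:

```python
from collections import defaultdict

def prob_long_run(n=200, r=6):
    # Track the probability distribution over the current run length
    # (1..r-1); mass that reaches a run of length r is absorbed as "success".
    probs = {1: 1.0}  # after the first flip the run has length 1
    absorbed = 0.0
    for _ in range(n - 1):
        nxt = defaultdict(float)
        for run, p in probs.items():
            # same face again: the run grows (absorb if it reaches r)
            if run + 1 >= r:
                absorbed += p / 2
            else:
                nxt[run + 1] += p / 2
            # opposite face: the run resets to 1
            nxt[1] += p / 2
        probs = nxt
    return absorbed
```

For the default n=200, r=6 this comes out to roughly 0.965, matching the quoted "close to 97%".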


Obviously, the way to beat this site (or the above classroom trick) would be to use "true" random numbers. But if one doesn't have access to coins or computers, it raises the question: what is a good way to generate a long sequence of reasonably random coin flips in one's head? For example, if you've memorized many digits of pi or e or some such "believed to be normal" constant, you could use whether each digit is odd or even (or maybe even something like throw away 8 and 9, and read each remaining digit in octal to get 3 random bits). But that only gets you so far...

> But if one doesn't have access to coins or computers, it raises the question: what is a good way to generate a long sequence of reasonably random coin flips in one's head?

Doing it in your head is hard, but if you have a great memory or can write things down on paper, then you could:

1. Generate a bunch more "random" coin flips than you need and write them down. Do your best to be unbiased, but don't sweat it.

2. Start a new separate list of results.

3. Go through the original results a pair at a time. If the pair is "head, tail", write "head" in the new list. If it's "tail, head", write "tail". If it's "tail, tail" or "head, head", discard it.

This is basically von Neumann's "fair coin from a biased coin" trick:
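A minimal sketch of the pairing step (step 3), with flips encoded as 0/1:

```python
def von_neumann_extract(flips):
    # Walk the raw flips a pair at a time: for an unequal pair, output
    # the first element ("head, tail" -> head, "tail, head" -> tail);
    # equal pairs are discarded.
    out = []
    for a, b in zip(flips[::2], flips[1::2]):
        if a != b:
            out.append(a)
    return out
```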


In theory, since you know the trick, it might influence the way you generate your initial random sequence. In practice, most of us probably aren't that smart. If you are, repeating the whole process using the results of the previous run a few times should guarantee enough complex mixing that your brain isn't clever enough to back-propagate the trick into how you generate the initial random coin flips.

This works for getting rid of bias in individual flips towards heads or tails, but not for getting rid of dependencies between separate flips (the very problem noted in the OP that humans are bad at).

It would be interesting to modify the site to do this automatically -- have it measure your progress using the bigram method you describe instead of using the input directly, and see if that generally makes humans do better.

Interesting. I would have suggested XOR-ing a few sequences together, which would improve the result a bit, but this method works better.

Or if you’ve already memorized some digits of pi, just use odd and even digits as left and right (or 0 and 1).

I did that for e [1] and the program was no better than 50% correct.

[1] I have a program that calculates e to some arbitrary number of digits.

Genius. I will actually remember this, in case I ever need to be ... really random.

> what is a good way to generate a long sequence of reasonably random coin flips in one's head?

I tried taking the length of each word in the text on the site, modulo 2.

This was effective enough; at the end of three paragraphs the guesser had 50% correct and I had a positive balance. It's also pretty broadly applicable: there are a lot of texts in the world one could choose to use (anyone with a nearby bookshelf or internet connection is set, and anyone who can memorize song lyrics, bible passages, poetry, etc. doesn't even need that).
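A sketch of the mapping (the helper name is mine, not anything from the site):

```python
def words_to_flips(text):
    # Even word length -> left, odd word length -> right (length mod 2).
    return ["left" if len(word) % 2 == 0 else "right"
            for word in text.split()]
```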

There's a reasonable question of whether this distribution is itself biased. If you take one list of the 500 most common English words (https://docs.google.com/spreadsheets/d/1qh3MvdKLfTadXWZMIsaq... ), it turns out that there are 278 with an even length and 222 odd, so that's an indication there may be an exploitable bias. There's also the question of whether relevant N-gram patterns confound that beyond recognition ... or add another layer of predictability.

But then again, there's such a wide variability in potential text choices and the function is so lossy (while being simple enough that most anyone could use it) that it seems like it's a pretty good choice for generating difficult to predict coin flip sequences.

This bias is pretty easy to overcome. Let's denote even-length words with E and odd-length ones with O. If two words come up EE or OO, discard; if EO, use 0; and if OE, use 1. Same probability for both. This trades the bias for needing more words.

that's pretty good.

In a pinch I was just choosing left/right depending on vowel or consonant, and flipping the rule every word or so. But the distribution is still a bit too tight.

One way to generate a sequence of heads and tails is to start with either heads or tails, then randomly choose how many of those to use before switching to the other. You're sampling the length of runs. e.g. Heads, then choosing 12113 would translate to HTTHTHHH.

It feels easier to ditch some of my biases generating a sequence this way.
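A sketch of that expansion (the example above, Heads then 12113, comes out as HTTHTHHH):

```python
def runs_to_flips(start, run_lengths):
    # Alternate faces, emitting one run of the given length each time,
    # beginning with `start` ('H' or 'T').
    face = start
    out = []
    for n in run_lengths:
        out.append(face * n)
        face = "T" if face == "H" else "H"
    return "".join(out)
```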

A lagged Fibonacci pseudo-random number generator would not be too hard to keep in mind (two integers modulo a fixed number) and might give you OK results if you choose your parameters well and avoid using the weakest bits as the generated boolean: https://en.wikipedia.org/wiki/Lagged_Fibonacci_generator
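A sketch of the additive form, s[i] = (s[i-j] + s[i-k]) mod m. The lags and modulus here (j=2, k=5, m=10) are small illustrative values chosen for mental arithmetic, not the well-tested parameters a real implementation would use:

```python
def lagged_fib(seed, j=2, k=5, m=10, count=10):
    # Additive lagged Fibonacci: each new value is the sum of the values
    # j and k steps back, mod m. The seed must hold at least k values.
    s = list(seed)
    for _ in range(count):
        s.append((s[-j] + s[-k]) % m)
    return s[len(seed):]
```

Taking the parity (or a low/high split) of each output would then give the coin flips.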

Or perhaps take the lyrics from your favorite song and convert the letters to their respective number rank in the alphabet. Then heads for even and tails for odd numbers.

Letters are not uniformly distributed.

When poker players need a random source (for instance, deciding when to bluff) they can look at the current or previous tabled cards (or their own cards).

This is definitely not random

Not random, but not deterministic for the machine - which blocks its ability to guess your next choice.

Technically, there is nothing in the universe that is random. We just need something sufficiently complex that no one can predict it.

I don't think this is proven. Can you elaborate?

It is definitely not proven. If it was we'd have a completely different view of quantum mechanics. This was even quite a big debate among quantum physicists in the 20th century and talked about in the beginning of every quantum book (at least that I've seen).

Well, if you have enough data you can predict anything. A dice roll is "random", unless you know the starting position of the die, mass, size, air currents, force of throw, on and on. With perfect data, you could predict the outcome. Thus, it isn't 'really' random.

Except that you're wrong.

> unless you know the starting position of the die, mass, size, air currents, force of throw, on and on. With perfect data, you could predict the outcome.

This is what Einstein believed, but he was wrong (hence his famous quote that God doesn't play dice). Physics uses the Copenhagen interpretation, which means that we have probability waves until we measure, and then the wave collapses. You may have also heard of the Heisenberg uncertainty principle (read: there is a limit to how finely we can measure).

The fact is that you can't know all your conditions. That's why we say things are random: there is a probability distribution over all these constraints, even if we had a device that could measure with "infinite" precision. The problem is that the universe itself hides information from us.

There are great sources for true random numbers: white noise (in atmospheric data), particle decay, particle motion (yes, even particle motion), etc.

Tldr: There's a lot of random stuff.

Given the implementation, an interesting way to cheat is to use de Bruijn sequences [1]. B(2,5) is 00000100011001010011101011011111, so using this sequence means that you will cycle through all possible 5-grams, and thus make lots of money.

Ah, should have read more carefully -- it's using 5-grams as the base, not as the model, so really it's 6-grams, so we need B(2,6). We can try 0000001000011000101000111001001011001101001111010101110110111111

[1] https://en.wikipedia.org/wiki/De_Bruijn_sequence
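The claimed property is easy to verify with a few lines: a B(2,6) sequence of length 64 must contain every 6-bit pattern exactly once when read cyclically (`seq` below is the string from the comment above):

```python
def is_de_bruijn(s, k=2, n=6):
    # Read the string cyclically and collect all n-grams; a de Bruijn
    # sequence B(k, n) of length k**n yields every pattern exactly once.
    if len(s) != k ** n:
        return False
    cyc = s + s[: n - 1]
    grams = {cyc[i : i + n] for i in range(len(s))}
    return len(grams) == k ** n

seq = "0000001000011000101000111001001011001101001111010101110110111111"
```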

I'm virtually rich

    var seq = '0000001000011000101000111001001011001101001111010101110110111111';
    var index = 0;
    setInterval(function () {
        captureKeyFunc({
            code: seq[index % seq.length] === '1' ? 'ArrowLeft' : 'ArrowRight',
            preventDefault: function () {}
        });
        index++;
    }, 10);

Adapted to use Math.random

    setInterval(function () {
        captureKeyFunc({
            code: Math.round(Math.random()) === 1 ? 'ArrowLeft' : 'ArrowRight',
            preventDefault: function () {}
        });
    }, 10);

With Math.random you get ~50% win, with the sequence you get 85% because it's an exploit

Why use seq rather than Math.random()?

Because it's an exploit (read top comment)

Given the implementation, an interesting way to cheat is to use the following "never loses" sequence:

You could easily avoid the repetition by saying that on ties you just choose a prediction at random (and likewise for the first 4).

What's at stake here is the same thing that threatens to eventually cause another civil war in the US, but that is a question for another time. :) It is a broad class of theorems which state that if you have a function f: X -> X and you repeat it over and over it naturally finds an input x such that f(x) = x. This repetition has naturally found a set of moves which resets the Markov matrix back to what it was. If you randomized the prediction on ties then I am not sure what I would do exactly... I would have to steer the Markov matrix into a cycle but assuming that it stores the absolute counts for every single prediction, any oscillations I'm using would want to naturally decay to zero... I might be able to exploit it if you were to only look at, say, my last 1000 moves' history to make your predictions, and maybe I can still induce an oscillation that does better than random chance, but I am not sure that I can.

That would make a crazy snare rudiment

I modified the "randomize!" button to inject the de Bruijn sequence DB(2,6), and I get a rate of 50%.

Is there an error in my patch?

  diff --git a/not-so-random.html b/not-so-random.html
  index 48c04da..c650b5b 100644
  --- a/not-so-random.html
  +++ b/not-so-random.html
  @@ -169,8 +169,12 @@
         randomHelpFunc = function(evt) {
            //document.onkeydown = null;
  -         for (let i = 0; i<10; i++) {
  -            lastKey = Math.round(Math.random());
  +         // db(2, 6)
  +         var inputString = "0000001000011000101000111001001011001101001111010101110110111111";
  +         for (let i = 0; i< inputString.length; i++) {
  +            lastKey = inputString[i] == "0" ? 0 : 1;

I didn't review the patch, but that is the expected result. That sequence will cause it to think that, whatever previous 5-press pattern it saw, your next press is equally likely to be 1 or 0. So it will guess randomly, and half the time it is right.

However with $1 vs $1.05 returns, you'll steadily make money.

Thank you for posting this! I've been trying to figure out what it was called for years!

There was an attack I read about for garage door openers or numeric entry doors where they didn't have an enter button. So they entered the De Bruijn sequence and had the answer in a small fraction of the time compared to trying each combo once and hitting "Enter".

This might be what you're talking about:


The guesser gets 14% right on that sequence, FWIW. And you can just keep repeating it indefinitely.

By using "1" for left (lastKey = 0) and "0" for right (lastKey = 1), I get 12%.

A de Bruijn sequence DB(2, k) is spelled by a Hamiltonian path in the corresponding de Bruijn graph, whose vertices are {0, 1}^k and where any arc (x, y) has the property x[2..k] == y[1..k-1] (equivalently, by an Eulerian path in the graph of order k-1).

Obviously, there are as many Eulerian paths as there are de Bruijn sequences.

Each such de Bruijn sequence has a different capability at getting a good score at the game "https://www.expunctis.com/2019/03/07/Not-so-random.html".

  diff --git a/not-so-random.html b/not-so-random.html
  index 48c04da..168a287 100644
  --- a/not-so-random.html
  +++ b/not-so-random.html
  @@ -169,8 +169,12 @@
         randomHelpFunc = function(evt) {
            //document.onkeydown = null;
  -         for (let i = 0; i<10; i++) {
  -            lastKey = Math.round(Math.random());
  +         // db(2, 6)
  +         var inputString = "0000001000011000101000111001001011001101001111010101110110111111";
  +         for (let i = 0; i< inputString.length; i++) {
  +            lastKey = inputString[i] == "0" ? 1 : 0;

How to find which sequence is the best?

Only if you map 0=right and 1=left.

Mapped the other way, 0=left and 1=right, the guesser gets it right ~43%.

Could the algorithm "detect" this and change the grams then? And then you would fight at a higher level, over whether or not to increase it?

Please notice that the payoffs in this program are set up such that you will, in fact, feel cheated.

The basis for this is that you will likely spend maybe 30-60s playing this game, so you will register something between 100-200 keypresses or so. If you just click the "Randomize" button you can see the problem: a truly random input source will fluctuate over 100-200 keypresses much more than it will be biased upwards, so after 100-200 presses you will probably see some run of "bad luck" by which the truly-random algorithm crashes from $1005 to $995 or so. Now if this were you, you would have stopped there with the last 5-10% being predicted perfectly and said "okay, okay, the algorithm has learned how I behave." But it hasn't.

Part of this is the fallacy that over a large number of trials the standard deviation of a sum of random variables drops to zero -- that is true of an average but not of a sum. You have a discrete random variable which takes the value +1.05 with probability 0.5 or -1 with probability 0.5, so its variance is basically 1.0 (1.050625, to be exact) and so its standard deviation is basically 1.0. If you sum N of these you have a standard deviation of 1.0·√N while the mean is 0.025·N; these are only equal when √N = 1.0/0.025 = 40, and thus N = 1600.

So at 100-200 trials, you cannot generally expect the systematic bias from winning extra money when you are random to visibly outweigh the random noise from just randomly being wrong; you would have to play far longer than that to really prove your mettle. But most people just won't play this thing for that long.
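Spelling out the arithmetic for the site's $1.05 win / $1.00 loss payoffs (note the expected gain per press is 0.5·1.05 − 0.5·1.00 = 0.025):

```python
# Per press, a truly random player wins +1.05 or loses 1.00, each with
# probability 1/2.
mean_step = 0.5 * 1.05 - 0.5 * 1.00           # 0.025 expected gain
var_step = 0.5 * 1.05**2 + 0.5 * 1.00**2 - mean_step**2
sd_step = var_step ** 0.5                      # exactly 1.025

# The expected winnings of N presses (mean_step * N) only catch up with
# the noise (sd_step * sqrt(N)) when sqrt(N) = sd_step / mean_step.
crossover = (sd_step / mean_step) ** 2         # 1681 presses
```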

This is interesting. I first thought I could use all 4 arrow keys, so I tried to make random movements with 4 keys.

The program only guessed 49% of my inputs right over 100 iterations.

When I understood it's only two keys, the program guessed correctly 60% of the time.

The (napkin-sketch) model of that behaviour is interesting. I'd guess that when you are pressing 4 buttons, your "internal pseudo-RNG" is incorporating a broader scope for entropy that isn't visible to the algorithm which only "sees" the two horizontal arrows.

Incidentally, I also got about the same rate (59%) of success from the algorithm when I used only the two recognized keys.

I just tried this out, using all 4 arrow keys lands me around 52%, just the two is 60% like you guys.

This is kinda cool, our brain's RNG does better with more "mental outputs"...

If you imagine you're pressing one of 4 keys and then collapse the key choice as a projection on to two keys then do you get closer to the original 4-key randomness?

Same result. 50% with four keys, 60% with just the two.

This is very interesting...

One advantage of having a couple dozen digits of pi memorized is having a pseudo-random number generator at your disposal.

Useful for generating random rocks paper scissors moves, or in this case, random directions (odd = right, even = left)

Perhaps getting a bit off topic, but when playing rock-paper-scissors for multiple consecutive rounds against the same person, I've noticed that people tend to throw what would've beaten the last thing you played, so you just throw what it takes to beat that. I used to tell people I'd be able to beat them pretty consistently at rock, paper, scissors. Of course, they think it's random and that I can't consistently win. If they take me up on it, we play 10 or 15 consecutive rounds, I beat them over and over, and then explain what I've observed and how it works.

> I've noticed that people tend to throw what would've beaten the last thing you played

Yes, that's a well-established effect. Also, most people tend to choose "rock" for the very first throw of a session.

I'll remember this next time we're playing rock, paper, scissors.

Technically you should convert Pi to base 2 before doing that, because using each decimal digit's parity collapses a lot of information in a lossy way. If your function is:

    f: π(k) --> δ(k)
where π(k) is the kth digit of π and δ(k) = 0 if π(k) is even and 1 if π(k) is odd, then your function is not injective: every even digit has the same image under f, and likewise every odd digit. A huge amount of entropy is destroyed that way.

So even if Pi could be used as a pseudorandom generator (and it actually can't be), you'd lose that property by defining a non-injective map from your domain of inputs to the codomain of outputs.

Why can't pi be used as a PRNG, and why does converting each digit to its parity lose a possible PRNG property? Sure it removes a lot of entropy, but it does so by destroying information, not from creating/overwriting memory.

Pi can be, as it's (thought to be) normal, meaning all digits appear with uniform frequency, but like anything, it's how you use it. Imagine you selected a digit from pi and used it to decide rock, paper, or scissors (3 options, so let's take a digit and mod it by 3):

    0: 0 Rock
    1: 1 Paper
    2: 2 Scissors
    3: 0 Rock
    4: 1 Paper
    5: 2 Scissors
    6: 0 Rock
    7: 1 Paper
    8: 2 Scissors
    9: 0 Rock
Now, let's check the frequency of each option:

    0/Rock: 4
    1/Paper: 3
    2/Scissors: 3
Your RNG is biased towards 0 here. The same thing happens, and is very common, when people just take the system random number generator and mod it by the number of values they want. They always end up biasing the bottom section of their distribution.

The common way of dealing with this is to "ignore" any number that would make the set biased. Here you would ignore 9 and you have an even distribution. So, you're playing 7 rounds of RPS and you go

    9! SKIP! 2/S
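The rejection step above, as a sketch:

```python
def digits_to_rps(digits):
    # Digits 0-8 map uniformly onto the three moves (three digits each);
    # 9 is rejected entirely so no move is favoured.
    moves = []
    for d in digits:
        if d == 9:
            continue  # SKIP, as above
        moves.append(["Rock", "Paper", "Scissors"][d % 3])
    return moves
```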

The simple reason is that a pseudorandom number generator is a complexity-theoretic notion, not an information-theoretic one. In order for it to be robust, it must be indistinguishable from true randomness for any polynomial-time algorithm. Since we have algorithms which can calculate Pi in polynomial time, as long as the computer recognized the sequence (or the deterministic seed of the sequence) as Pi, it could predict the next digit of the sequence with certainty.

As for why converting digits in this way matters - a lot of randomness is expressed by the entropy. It's harder for you to correctly guess the sequence {1,7,9,3,6,8,2,4} than it is to guess the sequence {1,1,1,1,0,0,0,0}.

If I ask you to guess a decimal digit I've chosen "randomly", you have a 1/10 chance of being correct. If I ask you to do the same for binary digits, you have a 1/2 chance of being correct.

Basically you want to think of these as subsequences, not individual numbers. If Pi is normal (which is a big if), then Pi is normal in every single base, including decimal or binary. But it's not generally true that a normal number generates another normal number by mapping each digit to the digit's parity.

> it could with certainty predict the next digit of the sequence //

If pi is [absolutely] normal, though, all sequences exist in it at equal frequency. Meaning that for any given sequence there are infinitely many positions in pi where it can be found, and all possible following digit sequences are equally likely.

So the computer could never know the next digit.

Aside: guessing a D16 roll seems way more likely than guessing a nibble of binary, and perhaps a little less likely than guessing 4 coin flips!??

> A huge amount of entropy is destroyed

And why is that a problem for generating random arrow presses?

Because the sets of decimal and binary digits are {0, 1, ..., 9} and {0, 1}, respectively. Any subsequence of digits of Pi, such as {1,1,3,9,3,5}, is equal to the subsequence {7,9,3,5,3,3} under the proposed function. They have the same image. That completely mucks with your probability because you've eliminated so much uncertainty.

You're selecting from a space of 10 digits for inputs, but the computer only has to guess from a space of 2 digits for outputs.


[There are some technical details because pi is not a random number, but for the sake of simplicity, let's assume that pi is a random number.]

It would be much easier if we lived in a world that used base 8 instead of base 10.

Let's suppose that we have the sequence of digits of pi in base 8. The algorithm of the GP is to replace {0,2,4,6}->0 and {1,3,5,7}->1 to obtain a binary "random" sequence.

Your alternative is to write pi in binary, and use it as a "random" sequence. But if this is a good "random" sequence then you can pick every third digit and get another good "random" sequence. [Here good means something like iid with uniform distribution]

But if you start at the correct position, picking every third digit of the binary representation is equivalent to classifying the digits of the base 8 representation as even or odd.

If you choose other starting points to pick every third digit, you get alternative maps:

* low and high: {0,1,2,3}->0 and {4,5,6,7}->1 (like in the roulette[1])

* crazy: {0,1,4,5}->0 and {2,3,6,7}->1

These other two selections produce also good "random" sequences.

The important part is that the projection maps the same number of inputs to each output. In this case all three methods map 4 elements to 1. This ensures that they map an iid sequence with a uniform distribution to an iid sequence with a uniform distribution.

Moreover, you can pick any arbitrary 4 numbers and map them to 0 and map the other 4 to 1 and it will work as well as the other three maps I used. (This is like the red/black option in the roulette[1].)
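The three 4-to-1 maps can be written out and checked for balance (a quick sketch):

```python
# Three balanced maps from octal digits {0..7} to bits, as described above.
parity = lambda d: d % 2                         # {0,2,4,6}->0, {1,3,5,7}->1
low_high = lambda d: 0 if d < 4 else 1           # {0,1,2,3}->0, {4,5,6,7}->1
crazy = lambda d: 0 if d in (0, 1, 4, 5) else 1  # {0,1,4,5}->0, {2,3,6,7}->1

# Each map sends exactly four of the eight digits to each bit, which is
# what preserves the uniform distribution.
balanced = all(sum(f(d) for d in range(8)) == 4
               for f in (parity, low_high, crazy))
```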


Back to base 10. Any map that sends 5 numbers to each bit will map an iid sequence with a uniform distribution to an iid sequence with a uniform distribution. In particular, the even/odd map that the GP is using is fine.

With this map you lose a lot of entropy, but since there is infinite entropy you can drop a lot of it and still keep infinite entropy. It's not as efficient as using base 2, but it is correct.

[1] An illegal fair roulette, with 36 numbers, without the green 0.

Yes you're correct. I neglected to account for the bijection between even naturals and odd naturals, and was only considering finite strings. Since we're talking about infinite (or potentially infinite) strings here, the reduction in entropy is alright as you've stated since even and odds occur with equal probability if every digit of Pi occurs with equal probability.

You could use rejection sampling to simply drop the digits that are 8 or 9, then you'd have a base 8 number to work with.

I'm no math expert, but to me the only question that should matter is: how random is the parity distribution of digits in pi? If the answer is "perfectly random", then that means each digit in pi should have a perfectly random parity. If the answer is not "perfectly random" then that means you should be able to predict the parity of the next digit based on the parity of previous digits with probability >50% in the long run, which I don't think holds true for pi (I could be wrong).

> how random is the parity distribution of digits in pi? If the answer is "perfectly random"

Nobody knows! Everyone in math is sure that it is, but nobody has found a proof yet. (It may be false...) An extension of this question is whether pi is a "normal number". More details: https://en.wikipedia.org/wiki/Normal_number http://mathworld.wolfram.com/NormalNumber.html

That's an interesting point I hadn't really thought about. The set of all even naturals is in bijection with the set of all odd naturals, so technically speaking you'd be right. If every digit appears with equal likelihood in a sequence, you'd have evens and odds appearing with equal likelihood.

π is normal in all bases, so in the end it doesn't matter, you're just using up more data.

No, it's not known that Pi is normal. This is a very, very open question. We don't even have a way of tackling that question at the moment.

You're correct that if Pi is normal, it's normal in all bases. But that's precisely the point I'm getting at - if Pi is normal, you need to use base 2 for this to work because your codomain is just {0,1}.

Mapping a number to another number such that each digit becomes its own parity is materially different from converting that number to base 2. They aren't the same thing whatsoever, and you can't generally take a normal number and create another normal number this way.

So even if we accept the reasonable conjecture that Pi is normal, you still need to map it to the same base as your codomain in order for its entropy to be preserved. The proposed function is not injective and destroys entropy.

> But that's precisely the point I'm getting at - if Pi is normal, you need to use base 2 for this to work because your codomain is just {0,1}.

You can use any base that is a multiple of 2 (like 10) and then apply a parity function to the digits. If pi is normal, then the digit parity sequence is also normal.

Sure you lose some entropy, but in some sense the digits of pi have 0 entropy anyway since they can be calculated. And in the other sense of treating the digits of pi as an unknown random sequence, there is infinite entropy, so throwing away 70% of it doesn't matter.

Yeah someone else pointed out a similar point about the bijection of evens and odds, good point. My initial claim is incorrect, so I'll concede that. I was operating from the idea that there is no bijection between the naturals and a parity check function, but you're right that if one sequence is normal the second set should also be normal regardless of invertibility.

I wrote a quick and dirty Python script that outputs a string of guesses that will beat the guesser. It keeps track of the 5-gram counters and always opts for the less-used option. If there is no less-used option it picks a random direction.
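Something along these lines (a reconstruction of the idea described above, not the original script):

```python
import random
from collections import defaultdict

def beat_guesser(n_moves=200, seed=1):
    # Maintain the same 5-gram counters the site uses, and always emit
    # the move the predictor currently considers *less* likely; on a
    # tie (and for the first 5 moves) pick at random.
    rng = random.Random(seed)
    counts = defaultdict(lambda: [0, 0])
    hist = []
    for _ in range(n_moves):
        if len(hist) >= 5:
            c = counts[tuple(hist[-5:])]
            if c[0] == c[1]:
                move = rng.randint(0, 1)  # tie: no less-used option
            else:
                move = 0 if c[0] < c[1] else 1
            c[move] += 1
        else:
            move = rng.randint(0, 1)  # not enough history yet
        hist.append(move)
    return hist
```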



Your input length is 101.

If you "play" for a long time, eventually, all 64 6-grams will have occurred.

Fun experiment but very easy to game by learning their predictions:

https://i.imgur.com/O3EggFY.png (0% guessed right after 15 key presses)

Here's the sequence I used. I'd be interested if this works for other people too:

    1:  right
    2:  right
    3:  right
    4:  right
    5:  right
    6:  right
    7:  left
    8:  right
    9:  right
    10: right
    11: left
    12: right
    13: left
    14: right
    15: left

Keep going though. You’ll settle into a rhythm, guaranteed. Unless you keep track in your head or a piece of paper, this strategy starts to fail after about 100 entries. Long term average will be 50%, equivalent of a coin toss.

Don't you only need to remember the last 5 or 6?

There is no need to "learn" their predictions (beyond a few at the beginning; edit: and the undetermined cases, I don't know if they choose randomly or deterministically), they are very transparent on how the system works:

"The program keeps a database of each possible combination of 5 presses, and two counters are stored under each entry — one is for every zero that follows the combination, and the other one is for all the ones that follow this combination. So every time you press a key, an entry in the database gets updated. To make a prediction, the program needs only to look up the entry corresponding to the last 5 presses and decide by looking at the counters which key press is more likely to follow."
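That description translates almost line-for-line into code (a sketch, not the site's actual source):

```python
from collections import defaultdict

class Predictor:
    def __init__(self, n=5):
        self.n = n
        self.counts = defaultdict(lambda: [0, 0])  # context -> [zeros, ones]
        self.history = []

    def predict(self):
        # Look up the last n presses and return the more frequent
        # follower (ties default to 0 here; the site may break them
        # differently).
        c = self.counts[tuple(self.history[-self.n:])]
        return 0 if c[0] >= c[1] else 1

    def update(self, key):
        # Record which key followed the current n-press context.
        if len(self.history) >= self.n:
            self.counts[tuple(self.history[-self.n:])][key] += 1
        self.history.append(key)
```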

That was my experience at first as well, but I think the point (that the page doesn't really get across) is that you should start hammering on the keys rather than methodically choosing your next input each time. I did that and saw my score take a nosedive.

Yeah, I do get that and I do think that site demonstrates the point it's making really effectively too. My post was just about the engineer / hacker in me having a bit of fun.


So what I'm learning here is the key to beating this is not to be random at all!

(this comment was meant in jest rather than as a criticism. I really enjoyed this site)

Reminds me of when I tried to give someone a fake social security number. I changed out the last four digits to something "random" but later noticed I had chosen the last four digits of my credit card. Being random is hard.

The program collects statistics based on your previous 5 key-presses. That's one way to do it, but there's a much better, extremely elegant algorithm for quickly making predictions by averaging over a doubly exponential number of models of this form. The algorithm is called Context Tree Weighting, or CTW for short. Look it up.

I used the first 100 digits of pi (left for even, right for odd) and it guessed correctly 49% of the time.

By the way, within those first 100 digits, there are multiple occurrences of even or odd sequences that go past the 5-gram level.

Within the first 1000 digits of pi, there is one 11-digit sequence of odd numbers, which you will lose money on.

If you convert Pi to base 2 you won't be penalized for those long sequences of odd numbers. Mapping the decimal digits of Pi to a parity function collapses a lot of the entropy.

> If you convert Pi to base 2 you won't be penalized for those long sequences of odd numbers

There's no way to convert pi to base 2 in your head though...

Clearly the solution is to remember pi in hex.

I tried...

    $ python3
    >>> # not using os.urandom to save a import...
    >>> rng = open("/dev/urandom", "rb")
    >>> "{0:08b}".format(rng.read(1)[0])
And acted accordingly; soon the correct rate approached 50%, as expected.

Paying for each import was the worst design decision in Python!

What do you mean by this? I'm not a Python expert by any means.

I think it was a jocular jab aimed at the idea of "saving an import" and documenting that choice with a comment that's longer than the module import would have been.

It's a joke.

Surprised nobody has mentioned the Aaronson Oracle [0] yet.

[0] http://people.ischool.berkeley.edu/~nick/aaronson-oracle/

It’s already mentioned on the webpage.

One of my favorites.

I had a number generator randomly generate lists of 0s and 1s; 0 was left and 1 was right. I did this 500 times. It was able to guess the number entered 49% of the time, and the balance never went below $1000. The interesting thing is that the graph went through a cycle. It started out at $1000, climbed to $1030, dropped back to about $1000, then climbed back to about $1030. It was on its way back down again when I stopped. I'm tired of playing the game, but I wonder if the cycle would've continued and if there is an explanation for it.

> I added three custom metrics/events in google analytics to collect statistics on correct guesses after 100, 200 and 500 inputs. If you’d rather not be counted, please load the html file from my github repository instead.

I'm happy to participate in such statistics so I don't need that HTML file, but I'm not okay with sending it to some third party that then tracks a whole lot of other things as well. So Google Analytics is blocked as always; I'm sorry that I couldn't submit my 52% score...

This is fun and interesting.

I tried using an actual random number generator to generate inputs, just to verify with my own eyes that, as expected, the computer's guess rate hovered around 50% and usually didn't stray more than 5% off. (I realize it _could_.)

I tried picking a 'random' number 1-5 (knowing I'd surely still have unconscious patterns), doing that many lefts, then picking another number and doing that many rights.

That got me a really good percentage (the computer was only around 30% right) up to ~40 or so iterations. But it didn't last; eventually it got to around 50% as usual, and then I started to lose.

It would be interesting whether, knowing the algorithm the guesser is using, you could devise an algorithm to defeat it and "win" lots of money from the "bank". I mean, the obvious thing to do, if you had the guesser's algorithm, would be to run it on your own past input and then always pick the opposite of whatever it would pick. It seems like that would have to work, and also that it couldn't possibly work, heh. This becomes an interesting illustration of... something or other, computer-science-wise; it reminds me of Gödel, Escher, Bach or something.

What this whole thing makes me think of is surveillance. Say, in an old detective thing, trying to "lose a tail" by making "random" turns. In our society of increasingly universal surveillance, _plus_ computers analyzing the data (and you don't know the exact algorithms being used)... there's probably no way to "lose the tail".

One can make the algorithm harder to defeat by making it probabilistic. E.g., it will have some expected distribution over your next move (which probably won't be "definitely left" or "definitely right"), and, so long as it has some source of randomness (which you don't), it can sample its guess from that distribution. That way its guesses can't be worked out in advance, it probably won't be wrong much more often, and the problem of beating it becomes harder too. E.g. suppose the algorithm works as follows:

1. Let l and r be the numbers of times the history contains a five-move sequence whose prefix is the four most recently made moves and whose final move is left or right, respectively.

2. Pick u uniformly at random between 0 and 1.

3. If u > l/(l+r) (or if l = r = 0), guess that the next move is right; otherwise, left.

You know this is the strategy, so if you can calculate l and r you can play whichever move is less likely to be guessed (randomly, even) and win with probability max(l,r)/(l+r). However, by making that move you push l/(l+r) towards 1/2, so your probability of winning goes to 1/2 too. I suspect the best strategy in the long run is to play randomly, or to repeat some longish cycle that includes each length-five sequence an equal number of times.
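A sketch of the guesser described above (my own naming; I've called the uniform draw u to keep it distinct from the count r). It keys 5-gram counts on the last four moves and samples its guess from the empirical distribution rather than always taking the most frequent continuation:

```python
import random
from collections import defaultdict

class ProbabilisticGuesser:
    """Guess the next move by sampling from observed 5-gram frequencies."""

    def __init__(self):
        self.counts = defaultdict(lambda: [0, 0])  # context -> [l, r]
        self.history = []

    def guess(self):
        ctx = tuple(self.history[-4:])
        l, r = self.counts[ctx]
        if l + r == 0:
            return random.choice("LR")  # unseen context: pure coin flip
        u = random.random()
        # sample from the empirical distribution instead of taking the arg-max
        return "R" if u > l / (l + r) else "L"

    def observe(self, move):
        ctx = tuple(self.history[-4:])
        self.counts[ctx][0 if move == "L" else 1] += 1
        self.history.append(move)
```

Against a highly patterned player it converges quickly; against a uniform random player it hovers at 50%, as any guesser must.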

Holds up spork.

beat me to it ;)

I tried:

  let left = () => captureBtnLeftFunc($.Event())
  let right = () => captureBtnRightFunc($.Event())
  () => captureBtnRightFunc($.Event())
  setInterval(() => {
    if (Math.random() < 0.5) {
      left()
    } else {
      right()
    }
  }, 1000)

The third line is a mispaste btw. You can delete it. ;)

Running this myself, the guesser is correct 46% of the time after 100 iterations.

What was the result? 50%?

Yes, when running over many iterations

And this is why you should never, ever, choose your own password. You just aren’t as random as you think you are.

One of the less interesting things I've done in my life is memorize a large number of digits of pi. An interesting thing I've discovered is that I can recite those digits faster than you can make up random digits without an obvious pattern. The brain is optimized for patterns, not randomness.

A more visually illustrative implementation of the prediction mechanism can be found here: https://roadtolarissa.com/oracle/

Both cite that same "Aaronson Oracle" as the source inspiration.

Just to test my opponent's predictions, I wrote a quick script using a real CSPRNG to give me left/right instructions. As expected, after a hundred moves I was way ahead.

The interesting part was this: I had the script just run a loop with a short delay between outputs. I started with a little under a second, and my brain/fingers kept trying to anticipate the next element. Often incorrectly, making my input predictable by the site. My fingers were just twitching to press a key.

In order to "win", I had to set the delay to 1.2 seconds, physically lift my fingers off the keyboard, and spend a little bit of effort reading each choice to make sure that I was actually following the real random instructions.

My brain just struggled to cope with the lack of pattern.
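The script in question isn't shown, but a minimal version (with the 1.2-second delay that worked above) might look something like this, using the `secrets` CSPRNG:

```python
import secrets
import time

def call_moves(n, delay=1.2):
    """Emit n left/right instructions from a CSPRNG, pausing between each."""
    moves = []
    for _ in range(n):
        move = "left" if secrets.randbits(1) == 0 else "right"
        moves.append(move)
        print(move)
        time.sleep(delay)
    return moves
```

The delay parameter is the interesting knob here: too short and your fingers start anticipating instead of following.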

I found it interesting to go at it randomly myself for about 100 iterations or so; the accuracy hovered around 55%. Then I started flipping a coin to make my choice, and the accuracy started dropping pretty quickly; I got it down to 43% before I got bored.

My method was to close my eyes, then open them somewhere "random" on my screen, and check whether the first letter my eyes focused on was a low (left) or high (right) letter. I'm aware that the distribution isn't this simple, but that's what I used. This worked okay for a while and the machine was only predicting me at 43%; then I tried using just my brain and it started winning.

A person good at mental arithmetic could memorize a small-state PRNG and get good results but I couldn't personally do this. I wonder if this has any strategic advantages in any areas of life. Maybe poker?
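For a concrete (hypothetical) example of such a mental PRNG: doubling modulo 61 is easy arithmetic ("double; if you pass 61, subtract 61"), and since 2 is a primitive root mod 61 the state cycles through all 60 nonzero residues before repeating. Taking the parity of each state gives a bit stream that is exactly balanced over each period, though it's trivially breakable by anyone who knows the rule:

```python
def mental_prng_bits(n, seed=1, a=2, m=61):
    """Tiny Lehmer-style generator x <- a*x mod m; emit the parity of each state.
    With a=2, m=61 the update is small enough to run in one's head."""
    x = seed
    bits = []
    for _ in range(n):
        x = (a * x) % m
        bits.append(x & 1)
    return bits
```

In poker the same idea is usually handled by "randomizing" off a watch's second hand or card suits; a mental PRNG is the dedicated-hobbyist version.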

Did anyone else find this depressing? I realize the payoff is set up to make you feel cheated (see "crdrost" below), but it had a more existential impact on me until I tried using /dev/urandom parity and after 500 presses it was still within the margins described. I feel like there's a psychology experiment under the hood here... (And also wonder what kind of exciting behavior data unrelated to the "game" that the webdev captured! e.g., average play time, average win during playtime, # of times the B(2,6) showed up, # times Pi parity showed up, etc...)

This (or some variation) could be an interesting competitive sport. Especially if the "guesser" algorithm was smarter, or if the guessing is done by your opponent.

Apparently I can be reasonably random -- I decided to map the length of the words mod 2 for the first three paragraphs he wrote on the page to the right (0) and left (1) arrow keys. After three paragraphs, the algorithm had 50% of my motions correct (as good as a coin flip!), and I had a positive balance. :)

Flaw in my plan: I don't know whether the word-length distribution of most English texts (much less a given writer's) is biased.
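That scheme is easy to state as code; a sketch (using the mapping described, even length to right and odd to left):

```python
def word_length_moves(text):
    """Map each word's length mod 2 to a move: even -> 'right', odd -> 'left'."""
    return ["right" if len(word) % 2 == 0 else "left" for word in text.split()]
```

Whether the parity of English word lengths is close to unbiased is an empirical question; running this over a large corpus and counting the two outputs would settle it for a given writer.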

I remember playing this game on the Commodore 64 (or perhaps VC-20) in the eighties. Can't find it now but it was probably a BASIC program.

I did much better once I closed my eyes, at 44% correct after 130 iterations. Watching the graph completely screwed me up.

These bots are somewhat easy to beat with complex repeating sequences that are not biased over the long term.

e.g. L,R,LL,RR,LLL,RRR, and if it starts guessing right enough, reset or reverse.

Then it ends up being a closed-loop two-armed bandit...
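The base sequence described can be sketched as (my guess at the intended pattern):

```python
def escalating_runs(max_run):
    """Generate L,R,LL,RR,LLL,RRR,...: alternating runs of increasing length."""
    moves = []
    for run_length in range(1, max_run + 1):
        moves += ["L"] * run_length + ["R"] * run_length
    return moves
```

Each prefix up to a full pair of runs is exactly balanced between L and R, which keeps the long-term bias at zero while still confusing short-context predictors.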

Did not seem that hard; I did 106 iterations and it guessed correctly 35% of the time. Maybe over more iterations I would have gotten less random (mainly due to boredom).

I was winning for the first 300 iterations, but by the 1000th iteration it was guessing correctly 57% of the time. I noticed the faster I go, the easier it is to predict me.

How far are you supposed to go to get the 57% average? At first it had some success, but after 100+ presses its guesses were correct only 49% of the time.

I noticed that if I slow down and press left or right deliberately, after thinking about it, I win. If I do it really rapidly, the algorithm gets to 60-70%.

I was around 49% at 100 too. It only took another 100 to be at 57%.

1000 iterations for me

"Walk without rhythm and it won't attract the worm"

It's harder than I expected...

I can't be random. I can only be arbitrary.

By hand, 37% correct after 55 inputs.

I feel like it wasn’t very random. I could predict your answers to some extent after the first few.

Life is anything but random. It is nothing but a collection of actions and outcomes, both of which are tightly coupled and definitely connected.

Let's take a simple example: you had an important meeting at 9 am but since you were stuck in traffic, you were late by 10 mins.

The final outcome can be clearly explained by actions that directly or indirectly led to it. Maybe there was an accident causing the traffic jam. Maybe you woke up 10 minutes later than usual, thereby getting caught in rush-hour traffic. Maybe the cab you took to work arrived late because the driver woke up late.

All of the above are totally controllable actions, albeit not in your direct control. Hence the feeling that life is random. Because none of us have any control over the outcomes of the actions of others, who subtly influence our lives on a daily basis.

Think of it this way: our life is intertwined with the lives of others in one way or the other. From the cab driver who drives us to work to our loved ones who shatter our hearts, their actions influence us, some more than the others, but influence us nonetheless.

It's a complex equation, with a whole lot of variables, most of which are beyond our control and unknown and the solution to this equation is the outcome of an event.

I believe that if one knew the values of all the variables at any given point in time, then one would be able to solve the equation with precision.

You are correct that randomness is relative.

Randomness implies unpredictability, which implies a lack of knowledge; if you can accurately predict what is going to happen next in some sequence, temporal or otherwise, by observing what has come before, then it isn't a random sequence.

But to someone who possesses all possible knowledge, nothing is random, because everything is predictable. A sequence might still retain certain properties that we associate with randomness (e.g. an unbiased distribution), but no sequence is random if you can always accurately predict the next item.

Obviously, none of us mere mortals is omniscient, but it is possible, for instance, to develop new predictive techniques and to turn once-thought-random sequences into non-random sequences.

>But to someone who possesses all possible knowledge, nothing is random, because everything is predictable

Not according to quantum mechanics.

My background is not in quantum mechanics, so I could be totally misunderstanding the idea of quantum indeterminacy, but I thought it had to do with what we can know based on measurement. But a truly omniscient being would not have to rely on measurement.

So ...

Are you saying an omniscient being is impossible, because something about quantum mechanics implies that it is impossible to possess all possible knowledge?

Or are you saying that even if an omniscient being possesses all possible knowledge, they would nevertheless be unable to accurately predict all sequences because of some knowledge we have about quantum mechanics?

Turning the math into words is subject to interpretation (because it's _so weird_, it's not clear what it means; and I'm no physicist or mathematician either, but my understanding is...), but quantum mechanics seems to say there are physical laws actually limiting how precisely something can be known. Which is part of what makes it not just "what we can know based on measurement."

That is, a "truly omniscient being that would not have to rely on measurement" would apparently violate the laws of physics, is one thing quantum mechanics maybe seems to say.


"Thus, the uncertainty principle actually states a fundamental property of quantum systems and is not a statement about the observational success of current technology"

> Are you saying an omniscient being is impossible, because something about quantum mechanics implies that it is impossible to possess all possible knowledge?

> Or are you saying that even if an omniscient being possesses all possible knowledge, they would nevertheless be unable to accurately predict all sequences because of some knowledge we have about quantum mechanics?

That's an extremely astute question. I'm impressed that you managed to realize that that's an important distinction and that you don't know which it is.

It's actually the latter, and incredibly we can prove it. And the proof of this is surprisingly accessible:


I'm still not sold on this. I'm no quantum mechanics expert, but I do have a philosophy background, and it seems as if true omniscience -- possessing all possible knowledge -- seems like it should transcend this limitation.

Maybe I'm thinking about a form of omniscience that exists outside of time, in which case, of course an omniscient being would know what happens next, because they would know the future just as well as the past. (Example: Any omniscient being will know which photons will pass through a polarizing lens not based on prediction but based on already knowing the outcome.)

There are always hidden assumptions. The proof only applies to a half-omniscient being that knows everything about the present and past of the universe, but not the future.

As a less powerful entity, if you know all the results in the lab until now, you can consider an experiment made an hour ago and "predict" the outcome without breaking the laws of quantum mechanics.

Such an entity cannot exist.

Such a statement can't be proven.

Sure, just append "unless it turns out we were wrong" to every scientific and mathematical claim if it helps you feel better.

I only append that to philosophical claims and string theory.

"philosophical claims and string theory" sounds redundant to me

Interesting. I can point you to some very influential philosophers who argue that not only can such an entity exist, it necessarily exists.

Can you tell me more about why you think it can't?

Apparently many other influential philosophers disagree, so take that proof with a grain of salt. https://en.wikipedia.org/wiki/Trademark_argument#Criticisms_...

We experience the world as a series of observations. Predicting those seems to be fundamentally impossible.

Of course that still leaves room for an omniscient being that never collapses the wave function and just deals in probabilities. But then your predictions sound like "there's a 99.978% probability you will be hit by a car on Tuesday at 10:04 am", with the remaining 0.022% accounting for quantum fluctuations adding up to produce a different result. Whether you call that "all-knowing", and whether that proves nothing is random, sounds like an interesting philosophical question.

Quantum mechanics could theoretically be just another deterministic layer, or hidden variables, affecting our deterministic universe so that it appears random while in reality being deterministic.

Not just quantum. Many micro effects are amplified in random ways, so the measurement problem appears in the macro world.

However, your life is also influenced by quantum effects, and the possibility that uncertainty in quantum physics is based on (local) hidden variables seems to be thoroughly ruled out. To our best understanding, the entire world is just a truly random probability distribution.

You can have a deterministic universe affecting our deterministic universe, where we would perceive the effects as random even though they aren't. It could also be impossible to measure the effects as deterministic, our lifespans being too short to gather and analyze the data.

Yikes, how strange it must be to be so completely convinced about something so indiscernible.

I find people that think they have real choice over their actions as strange. Free will is an illusion when you understand determinism.



Episode 5, "What It Means When a Man Falls from the Sky" by Lesley Nneka Arimah

Perhaps there are just too many variables.

Totes random.

Like, totally random.
