When I see the word "random" I parse it as "unpredictable", since that is usually what people mean (even if they don't realise it), and doing so can clear up many arguments. The concept of 'random' is interesting in some cases, but the vast majority of the time it's far stronger than what is required.
In this case, the question is:
> If a number generator is uniformly distributed, why might that not be 'random enough'?
If we rephrase this as:
> If a number generator is uniformly distributed, why might that be 'too predictable'?
This makes the flaw obvious: there are many ways to choose numbers uniformly which are completely predictable. For example, choose the smallest value first, then choose a value which is furthest from any previously-chosen value (in, say, lexicographic Gray-code order).
The point here is that 'uniformity' is not the property we care about; it is a consequence of the property we care about (unpredictability). If a distribution were non-uniform, we would be better able to predict it (by biasing our predictions to match the distribution); hence the least-predictable distribution is necessarily a uniform one.
Other times this comes up:
> Are quantum measurements 'truly random'?
That's untestable, but we can test whether they're predictable or not.
> This encryption algorithm requires a source of random bits.
It would work just as well with a source of bits that are merely unpredictable.
> This strategy can only be defeated by a random opponent.
It can also be defeated by an opponent which you can't predict.
That is a good way to think of it. Indeed, that is how cryptographic randomness is typically defined.
Simply put: given the first k bits of a random stream, can you predict the k+1th bit (more than 50% of the time)?
A generator that passes this test will pass any statistical randomness test, but the converse is not necessarily true. For example, the Mersenne Twister is a good generator from a statistical perspective, but it's actually quite easy to recover its internal state by observing a small amount of output. (Around 20k bits, if I recall correctly.)
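To make the Mersenne Twister point concrete, here's a rough Python sketch (Python's random module uses MT19937; the untempering helpers are the standard trick, written from memory, so treat it as illustrative rather than authoritative): collect 624 consecutive 32-bit outputs, invert the output tempering, and you have a clone that predicts every future value.

```python
import random

# MT19937 tempers each 32-bit state word before output; to recover the state
# we invert that tempering for 624 consecutive outputs.

def undo_xor_rshift(y, shift):
    # invert y ^= y >> shift (top bits are already correct, propagate downwards)
    result = y
    for _ in range(32 // shift + 1):
        result = y ^ (result >> shift)
    return result

def undo_xor_lshift_and(y, shift, mask):
    # invert y ^= (y << shift) & mask (low bits are already correct, propagate upwards)
    result = y
    for _ in range(32 // shift + 1):
        result = y ^ ((result << shift) & mask)
    return result

def untemper(y):
    y = undo_xor_rshift(y, 18)
    y = undo_xor_lshift_and(y, 15, 0xEFC60000)
    y = undo_xor_lshift_and(y, 7, 0x9D2C5680)
    y = undo_xor_rshift(y, 11)
    return y

target = random.Random()                                  # generator we can only observe
observed = [target.getrandbits(32) for _ in range(624)]   # ~20k bits of output

clone = random.Random()
clone.setstate((3, tuple(untemper(v) for v in observed) + (624,), None))

# The clone now tracks the "random" stream perfectly.
assert [clone.getrandbits(32) for _ in range(100)] == [target.getrandbits(32) for _ in range(100)]
```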
> Simply put: given the first k bits of a random stream, can you predict the k+1th bit (more than 50% of the time)?
This definition isn't sufficient. Suppose you have a random stream, and one predictor which asserts the next bit is "1" and another predictor which says that next bit is "0".
As k increases, there's a nearly 100% chance that one of the two predictors will be correct more than 50% of the time.
Even if you pick a single predictor, say, the all-1s predictor, there's an almost 50% chance that for a given random stream and k it will have better than 50% predictive ability.
Just because Guildenstern's coin is heads 92 times in a row doesn't mean that it's not random. Only that it's very unlikely to be random.
"[T]he vast majority of the time it's far stronger than what is required."
Oftentimes it's also just not what is actually required. Often you want uniform spacing with some jitter, not points drawn from a uniform distribution.
> put on hold as primarily opinion-based by Xavierjazz, Olli, Excellll, HackToHell, Tog 3 hours ago
>
> Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise.
>
> If this question can be reworded to fit the rules in the help center, please edit the question.
Stack Exchange is broken.
The question mixes up uniform distribution and randomness, but the answers are factual. There's little to no opinion involved here.
As a general rule across the web, I'd agree with you. Although, I feel like on StackExchange (or at least SO), those with the highest scores typically gave good answers in their domain, as judged by their community, and would not commonly give answers outside their domain that could be voted up simply on user rep alone.
I don't really have any hard data on this; it's an opinion based on regularly seeing extremely good answers from high-rep people, and not being able to recall seeing poor answers from high-rep users voted highly when they aren't actually useful.
I've been running a virtual dice roller site for pen & paper roleplayers for about ten years. I get these kinds of emails all the time. "Your RNG is faulty, I just rolled three 20s in a row!" and stuff like that.
It's probably not fixable psychologically, we're just seeing patterns everywhere. This also happens with physical dice by the way. If a player rolled two or more abysmal results in a row it's common to say "hey, you should really swap out these dice".
I've toyed with the idea of not using a fair RNG for games for this reason. People expect die rolls to converge to the average distribution much, much faster than they actually do. So perhaps you can make an RNG program that does exactly that and does what the user expects, even if it's mathematically ridiculous.
For example, if a 20 is rolled then the probability of rolling another 20 is halved (so it has a 1/40 chance) while the probability of its inverse is doubled (so the probability of a 1 is 1/10). This dramatically reduces the chance of the three-20s-in-a-row scenario and would very quickly regress to the mean, just like people expect. Added flair: subtly adjust the probabilities of all possible values (so that after rolling a 20, a 2 becomes more likely and a 19 less likely).
I'm sure I can't be the only person to think of this.
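For what it's worth, a rough Python sketch of that scheme (the class name and the recovery parameter are my own inventions; the weights are relative rather than renormalised, so the 1/40 and 1/10 figures above are only approximated):

```python
import random

class StreakySmoothedD20:
    """Deliberately unfair d20: each roll makes the rolled face less likely
    and its 'inverse' (21 - face) more likely, then everything slowly
    drifts back toward uniform. Hypothetical sketch, not a real product."""

    def __init__(self, recovery=0.9):
        self.weights = [1.0] * 20     # index 0 -> face 1, ..., index 19 -> face 20
        self.recovery = recovery      # how quickly weights relax back toward 1.0

    def roll(self):
        face = random.choices(range(1, 21), weights=self.weights)[0]
        # relax all weights toward uniform, then punish the rolled face
        self.weights = [1.0 + (w - 1.0) * self.recovery for w in self.weights]
        self.weights[face - 1] /= 2   # rolled face: roughly half as likely next time
        self.weights[20 - face] *= 2  # its inverse (21 - face): roughly twice as likely
        return face

d20 = StreakySmoothedD20()
print([d20.roll() for _ in range(20)])
```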
Say the player loses a 75% roll, then attempts another identical 75% roll. The game will actually give them better odds for the second roll. I think it's silly -- the game basically lies to you about your odds on the second attempt.
It may be deceitful, but it makes for a better game. Seeing your five hit attack miss five times at 5% miss chance per attack will induce true frustration. Battle for Wesnoth doesn't care. It will be honest, and you will scream in anger.
> Seeing your five hit attack miss five times at 5% miss chance per attack will induce true frustration.
I was curious, so I worked out the chances of this happening. For a given five-hit attack the chance is 0.05^5 = 0.00003125%, or about 1 in 3.2 million. So yes, I imagine it is very frustrating, but it happens incredibly rarely.
Shuffling is different from selecting at random. I had an MP3 player that would use random selection instead of shuffling all tracks and it would annoy the hell out of me to hear the same songs twice in a row. I don't know if this is what iTunes used to do but if so then those people were right to complain.
It is. iTunes would play a song, and randomly select the next one (it would do random play, rather than shuffle).
Nowadays it generates a shuffled playlist and goes along, and people whine because it doesn't reshuffle automatically (you have to disable and enable shuffle to reshuffle).
Another option is to use random sampling, and to re-sample after you're done with the current set (possibly excluding any song of the previous sample).
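The difference is easy to see in code. A small Python sketch (the function names are mine) contrasting "random play" with "shuffle play", including the trick of avoiding a back-to-back repeat when the list is reshuffled:

```python
import random

songs = ["song_%02d" % i for i in range(10)]   # stand-in for a music library

def random_play(n):
    # Old-style "random play": pick each next track independently.
    # Back-to-back repeats and early re-plays are common.
    return [random.choice(songs) for _ in range(n)]

def shuffle_play(n):
    # "Shuffle": permute the whole library, walk through it, then reshuffle,
    # swapping to avoid an immediate repeat across the boundary.
    out = []
    while len(out) < n:
        batch = songs[:]
        random.shuffle(batch)
        if out and batch[0] == out[-1]:
            batch[0], batch[-1] = batch[-1], batch[0]
        out.extend(batch)
    return out[:n]

print(random_play(15))
print(shuffle_play(15))
```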
iTunes used a shuffle that always resulted in the same order as long as there was the same number of songs on the device. (Or perhaps just shuffling with a tiny PRNG that used the number of songs as seed.)
You're right that would probably lead to better user satisfaction, but I'm not going to implement something like this. I refer you to the answer I've given herokusaki below. Basically, the site is not about straight-up die rolling. It supports a myriad of different rule systems. On top of that, it doesn't keep track of users, so it's not possible to tinker with results based on user state. I have no real intention of changing that either ;)
I've thought about the same for Settlers of Catan, which I think is too random if you use dice, disrupting people's ability to do more than basic strategy.
I know there exist "dice" packs of cards, which contain the correct distribution of rolls. If you shuffle a few of those together, you'll probably get a nicer game.
That sounds like a nice idea, actually. I hate those rounds where 11 or 2 occur more often than 6 or 8. I know it can happen, but it can make the game rather annoying at times.
I actually think that makes the games more fun. Especially so when you have a larger game so that there are people on those tiles.
For me, it makes the game feel less "solvable" by basic analysis. Sure, statistically certain observations will hold. For the game you are in, though, you have to be ready to adjust your strategy based on what has happened. (Clearly not in anticipation of future rolls, but more based on the resources you have managed to get, not the ones you wanted.)
Admittedly, we already tend to take the number planning a bit out of the game by initially turning the number tokens face down, and you can only reveal the numbers on hexes you have built at (for the initial round of placing settlements/roads they are revealed after everyone has placed both). Makes for a nice variant where exploration holds surprises, or you can expand to hexes that are already known.
One of the nice features of the Settlers of Catan assistant for the iPhone is that it will roll the dice for you, and it keeps a chart of the results. Thus one has data to back up the complaint that "Your 3 is out-producing my 6!"
I've had to do this when training research monkeys in a former life, particularly when there are only one or two parameters being varied. They would learn the 'patterns' far too quickly. After seven heads in a row, they'd stop paying attention and always choose heads for some time afterwards.
The solution we settled on was block randomization. But that's probably not what you'd want for a human game as they could potentially figure out the end of each block (and what value remains). But it worked wonderfully for us.
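A minimal Python sketch of block randomization as described above (the function name is mine): every block contains each trial type exactly once, so counts stay balanced over time, but the order within a block is unpredictable.

```python
import random

def block_randomized(trial_types, n_blocks):
    # Each block is a fresh shuffle of every trial type; concatenate the blocks.
    sequence = []
    for _ in range(n_blocks):
        block = list(trial_types)
        random.shuffle(block)
        sequence.extend(block)
    return sequence

# e.g. 5 blocks of heads/tails: always 5 of each per 10 trials,
# but the within-block order varies.
print(block_randomized(["H", "T"], 5))
```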
People are born with bad intuitions about how randomness works -- it's not learned. For whatever reason, our brains are wired to make uniformity feel random and randomness feel ordered.
It's why we see stars in the sky, which are more or less distributed randomly in the star field, yet they don't appear random: they appear to form constellations.
It's basically impossible to correct this bad intuition. People educated in the statistics and mathematics of random numbers have to consciously learn what real randomness looks like and override their intuition.
But try as you might, the night sky will still form constellations. And dice rolls will still, frustratingly and confusingly, fall into streaks more often than you'd expect.
There are a lot of systems in the brain. It's theoretically possible that one of them will be untrainably spitting out the same flawed predictions, but we can certainly learn how to respond appropriately to those sensations.
IMO the problem can best be explained with a classroom experiment, almost like a magic trick. It is a built-in human intuition which is at odds with the reality of randomness.
Split your class into two halves. Give half A real dice and a piece of paper. Give half B just a piece of paper. Half B plays with imaginary dice, half A with real dice. Everyone records 20 games on a piece of paper. You hand in the papers and the demonstrator (or maybe a new one that hasn't been in the class, for extra theatrics) looks at the papers. He will accurately place them into separate piles.
The trick is that fake randomness and real randomness are different enough that they are distinguishable with a high level of accuracy. Real dice will roll the same number in a row far more often than fake dice. The more games you play, the more accurately they can be distinguished.
You can look at the formulas to figure out the odds of rolling the same number 4 times in a row within 100 rolls, but it doesn't drive home the message that your intuition is broken until you see intuition being beaten consistently and reliably by a better tool. It's fun too.
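If you want to try the sorting numerically, here's a crude Python sketch of the single statistic that does most of the work (the helper name is mine): real sequences contain long runs of repeats far more often than invented ones.

```python
import random

def longest_run(seq):
    # length of the longest stretch of identical consecutive values
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# Real d6 rolls contain a run of 3+ identical values in a 20-roll sequence
# surprisingly often; hand-invented sequences almost never do. Sorting the
# papers by this one number already separates the piles quite well.
rolls = [random.randint(1, 6) for _ in range(20)]
print(rolls, "-> longest run:", longest_run(rolls))
```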
You are referring to the common mistake of confusing randomness and uniformity.
Your players are thinking: "Dice rolls are equiprobable, so I should roll evenly-distributed results." That only starts being true after a great number of rolls.
Random distributions will be uneven and present clumps, strands and voids. For the same reason, during WWII, British intelligence thought that the Germans were bombing specific targets in London, while they were just dropping bombs at random.
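A quick Python simulation of the "only after a great number of rolls" point: per-face counts on a d6 are visibly lopsided after 60 rolls and only settle near one sixth each after tens of thousands.

```python
import random
from collections import Counter

for n in (60, 600, 60000):
    counts = Counter(random.randint(1, 6) for _ in range(n))
    print(n, sorted(counts.items()))
# Small samples clump; uniformity is a long-run property, not a per-game one.
```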
If you knew how many times I launched into that randomness vs uniformity lecture myself ;)
My point was that it's not possible to convince certain people with logic. They see random numbers and something happens to their brains, maybe it's religion. They'll always see oracles or at the very least shady manipulators behind random data.
Re: physical dice. I've gamed a lot over the years, and I don't believe I played with anyone for whom swapping out dice had anything to do with actually affecting randomness/probability; rather, it was just a fun way to wish for better luck. And even then, that move "toward" luck or "away" from bad luck was always tongue-in-cheek.
I wonder if visualizing the dice roll could help. If you don't have ethical objections to it you could also A/B test biased dice (say, biased away from 1 and 20) for complaints (give a different contact email address to players who get biased dice) or instead show the players the distribution of the last 100 rolls.
First, the roller is absolutely anonymous. There are basically three major functions the site performs: just rolling some dice according to user input, an API taking HTTP requests answering in JSON, and a chat room feature that has a dice roller bot. It's not possible to identify individual players except maybe in the chat rooms (though they're not really authenticated there either).
I do sometimes get requests to look into chat logs of a particular room over allegations that one player somehow tampered with the results. This happens about once a month. Invariably, the accused user isn't even that lucky, they almost always have bog standard average results. It's just that the other players think this person is unreasonably lucky and they keep this belief up even after I tell them about the data (which they could have calculated themselves by looking at their own chat logs).
Second, the core value proposition of the site is its ability to render roleplaying dice codes directly. That means you don't select your dice from a drop down - you enter the code. A simple example would be "3d6+5" for damage or something, but there are lots of different codes for different rules - some of them quite convoluted. It's not feasible to tinker with them in any meaningful way because players use the site in so many different ways.
PRNGs (pseudo-random number generators) take in a small amount of randomness (a seed) and produce a long stream of numbers from that.
Anyone using the same PRNG can look at the output of yours and try to put their PRNG in the same state. If they succeed, the output of theirs will match yours - now and in the future - unless you re-seed.
Two problems can occur here:
1) You seed with something they can predict, e.g. seconds since 1970 (or microseconds since 1970). If they have a reasonable sample of numbers from your system, they can try lots of different seeds and see if they can find the one which gives the same output as you (see the sketch after this list).
2) PRNGs have "internal state", which is a bunch of numbers they mix together. Some PRNGs have the property that if you can you can observe enough numbers in a row from the PRNG, you can turn them back into the internal state locally and then you can do the same thing as if you knew the seed (predict future numbers).
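Here's a rough Python sketch of problem 1 (the one-hour search window and the ten observed rolls are arbitrary choices of mine): if the seed is just the clock, an observer can brute-force it from a handful of outputs and then predict everything that follows. (Problem 2 is what the Mersenne Twister cloning sketch further up illustrates.)

```python
import random, time

# The "victim" seeds with whole seconds since 1970 (problem 1 above).
victim = random.Random(int(time.time()))
observed = [victim.randint(1, 6) for _ in range(10)]   # rolls an attacker can see

# The attacker tries every plausible seed from the last hour.
now = int(time.time())
for candidate in range(now - 3600, now + 1):
    guess = random.Random(candidate)
    if [guess.randint(1, 6) for _ in range(10)] == observed:
        print("seed recovered:", candidate)
        print("victim's next rolls will be:", [guess.randint(1, 6) for _ in range(5)])
        break
```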
One of my favorite examples of this ever: once in Las Vegas, a keno machine mistakenly used a fixed seed, meaning that once someone figured this out, he could show up when the game started for the day and predict what the machine would do with 100% accuracy.
They found a flaw in a REAL poker site because it was using a pseudo-random number generator (and a stupid algorithm), and they were able to know the order of the cards being used in the game!
> The RST exploit itself requires five cards from the deck to be known. Based on the five known cards, our program searches through the few hundred thousand possible shuffles and deduces which one is a perfect match. In the case of Texas Hold'em poker, this means our program takes as input the two cards that the cheating player is dealt, plus the first three community cards that are dealt face up (the flop). These five cards are known after the first of four rounds of betting and are enough for us to determine (in real time, during play) the exact shuffle. Figure 5 shows the GUI we slapped on our exploit. The "Site Parameters" box in the upper left is used to synchronize the clocks. The "Game Parameters" box in the upper right is used to enter the five cards and initiate the search. Figure 5 is a screen shot taken after all cards have been determined by our program. We know who holds what cards, what the rest of the flop looks like, and who is going to win in advance.
> Just say you code in any language at all to roll some dice (just using dice as an example), after 600,000 rolls, I bet each number would have been rolled around 100,000 times, which to me, seems exactly what you expect.
Bad example. Because the author specified "some dice", we can assume more than one die, in which case some numbers have a greater chance to appear than others, in a series of fair "random" throws.
It's a bad sign that the author of a piece about randomness isn't aware of the systematic behavior of his chosen example, a behavior biased in favor of certain outcomes.
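The bias is easy to check with a quick simulation of two fair dice (Python, just for illustration):

```python
import random
from collections import Counter

totals = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(600000))
print(sorted(totals.items()))
# Each die is uniform, but the sum is not: 7 turns up roughly six times
# as often as 2 or 12, even though every individual throw is fair.
```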
The idea of something being truly random bothers me because it goes against cause and effect. An effect happened without a cause. It violates laws of conservation, entropy, determinism, and a lot of other things.
> The idea of something being truly random bothers me because it goes against cause and effect.
Not at all. The fact that one cannot predict the next number in a random sequence is irrelevant to the fact that, in the long term, that number has a predictable relationship with other numbers in the pool of possibilities.
For a random generator of the digits 0-9, can I predict the next number in the sequence? No. Can I say what the proportion of, say, 7s will be, within a large list of outcomes? Yes, and the larger the list, the more reliable the prediction.
If your position had merit, quantum theory, probabilistic on a small scale, would be seen as violating cause/effect relationships, a rather important part of physical theory. But, because of the mathematics of quantum probability, individually unpredictable atoms become very predictable macroscopic objects.
If I toss a coin into the air and then it lands on tails, there is a visible cause and effect. The velocity of my arm, the position of the coin in my hand, the moment of release, the air currents in the room all affect the outcome. If I could measure these to a high degree of accuracy and plug the measurements into a detailed enough simulation, I could predict the outcome. However, I can't. So I am unable to predict the outcome (hence the outcome is random).
Eric Lippert's answer is great. It isn't the distribution of answers, it's the predictability. In most cases it's ok if the distribution looks random, but in some it turns out not to be ok.
Ivy Bridge and newer Intel CPUs have this new "random" instruction (I still have a Sandy Bridge CPU though). Is that one truly random enough for crypto purposes? Thanks!
No, and it does not matter who you trust. Analog circuitry simply has too much inbound and outbound information leakage to be trustworthy. You must round it out with a mixing algorithm.
The issue is that if you run the same program twice without selecting a suitably random seed you will make the same 60 million rolls.
If you can somehow control, predict or influence the seed you can predict or influence rolls.
For example if the game uses JUST the system time as a seed, I can shift the system time back to a specific position and run the application again to get the same random number generation.
If you know the algorithm (for example, you're playing an open-source card game) and you can determine the time (offset) on the server or the other player's computer, you could cheat by calculating their random variables.
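A tiny Python demonstration of that point: seed the generator with the same value (here a fixed number standing in for a guessed timestamp) and the "random" rolls come out identical.

```python
import random

seed = 1700000000                        # e.g. a timestamp an attacker has guessed
first_run = random.Random(seed)
second_run = random.Random(seed)

print([first_run.randint(1, 20) for _ in range(5)])
print([second_run.randint(1, 20) for _ in range(5)])   # identical sequence
```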
Now it really isn't an issue in games, but it's incredibly important in security. Randomness makes up a huge component of encryption.
That's why you have applications that require you to move the mouse around a bit (hopefully randomly), and that take in a whole bunch of other entropy fed in by the system.
In Unix-like OSes you have /dev/random and /dev/urandom. /dev/random requires a certain amount of entropy and environmental noise, and it blocks on reads until it's satisfied with the output.
/dev/urandom does not block, but it gives pseudo random output. For the purposes of security, /dev/urandom should NEVER be used. However neither are truly 'random'.
No. Cryptographic applications on Linux should use urandom, not random. Your belief that urandom should "NEVER" be used is due to an urban legend perpetuated by a badly written Linux man page.
Before you listen to anyone "explaining" to you that /dev/random is good for cryptographic secrets and /dev/urandom is good for everything else, consider that Adam Langley owns Golang's crypto code and Golang's crypto/random interface just pulls from /dev/urandom; Daniel Bernstein, Tanja Lange, and Peter Schwabe --- all cryptographers, with a pretty serious Unix pedigree --- wrote Nacl, an extraordinarily well-regarded crypto library, and Nacl's CSPRNG... wait for it... just pulls from /dev/urandom.
But for any other purpose than generating private keys one should NEVER use /dev/random. If you're not sure use /dev/urandom.
I've had to painstakingly explain to certain people why, as an example, erasing a HDD from /dev/urandom is all right. And why their program that simulates some random input should use /dev/urandom.
But no, they babble about true randomness and then complain when they get only a paltry few hundred bytes per second.
Even if you have a server running casino games, just use /dev/urandom. It requires a total compromise of the server to get the internal state out of that, and in that case it's easier to just change urandom into /dev/zero.
You should be careful when using a word like "quality" in this context. Some might think that by quality you mean some kind of statistical property, e.g. that dice rolls generated by either of them are somehow statistically different, which they are not.
What does this mean in practice? If I give you ten 10MB files, five of them created with urandom and five of them created using a hardware RNG, there is practically no chance that you could differentiate which came from which, barring knowledge of the urandom entropy pool.
But it is true that a HW RNG could be useful just to keep CPU load down. By now we know that those are backdoored by the NSA, so you should still use /dev/random as a source for private keys. In other words, a HW RNG is useful only to keep the load down, not to avoid any attacks.
This is so important that I'll have to repeat it again: the only difference between urandom and random is that urandom is theoretically susceptible to an attack which could allow prediction of the output values if the attacker knows the internal entropy pool state. The statistical properties of both are the same.
Agreed that some (e.g. RDRAND) are potentially untrustworthy, but others aren't - for example, Linux has daemons available that can source entropy from audio or video noise.
A growing number of chipsets have them built in and accessible.
Many new Intel chips do (though off the top of my head I can't tell you how easy it is to access).
The SoC used in Raspberry Pi units has one that can be read at over half a Mbit per second via /dev/hwrng (once the relevant module is loaded), so you can either use it directly or (better for portability) keep reading from /dev/urandom and use rngtools to feed the entropy into the kernel's pool as needed.
In both the above cases there is a trust issue, as exactly how the RNGs work is not publicly documented, but unless you are extremely paranoid (by necessity or "issues"!) I would consider them decent sources of entropy.
You should always use /dev/urandom on Linux for cryptographic use, unless you know exactly why you need /dev/random. Hint: you don't.
(On FreeBSD it doesn't matter because it's all the same)
I've been meaning to write up a coherent summary of that issue with all kinds of sources forever now, and I should really get around to just doing it, because this comes up every few weeks on HN.
The short version is as follows: /dev/random is basically the entropy pool. There isn't much data in there and whatever lands in it must be carefully chosen to prevent entropy going down (Debian accidentally broke that part a few years ago). Since there isn't much data you probably shouldn't take too much of it when you can help it because if it runs out, it blocks until there is enough entropy back in the pool.
/dev/urandom on the other hand is a CSPRNG that is regularly re-seeded from /dev/random. The purpose is to stretch the available entropy into more (pseudo-)random numbers. Since it's a CSPRNG the sequence is deterministic but impractical to predict; re-seeding helps prevent others from observing enough output to reverse-engineer the seed.
So the advice is generally to only use /dev/random when you're implementing something like /dev/urandom, and for SSL keys and the like you just use /dev/urandom.
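In application code you usually don't even open the device yourself. A short Python example of the usual way to get kernel-CSPRNG-backed bytes (on Linux both of these ultimately draw from the urandom pool):

```python
import os
import secrets

key = os.urandom(32)               # 32 bytes from the OS CSPRNG
token = secrets.token_urlsafe(16)  # convenience wrapper over the same source
print(key.hex(), token)
```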
/dev/random and /dev/urandom are two interfaces to essentially the same CSPRNG.
On Linux, unlike FreeBSD (where there is no difference between the two), there are two differences between random and urandom:
(1) random will block when a kernel entropy estimator indicates that too many bytes have been drawn from the CSPRNG and not enough entropy bytes have been added to it
(2) urandom and random pull from separate output pools (both of which receive the same entropy input, and both of which are managed the same way).
It is not true that /dev/urandom is a PRNG and /dev/random is not; it is not true that /dev/random is a CSPRNG and /dev/urandom is just a PRNG. They are the same thing, with an extremely silly policy difference separating them.