Maria Konnikova mentions, in an interview with Sean Carroll on the Mindscape podcast, that poker players, even when they have an on-average winning strategy, randomize their play using the second hand on their watches. For instance, they might execute a play 3 times out of every 4, depending on what their watches tell them.
For me the deeper implication is that in an environment with stochasticity (the real world is a mix of determinism and randomness), executing the same deterministic strategy all the time is likely not going to guard you against harmful random events. Randomization provides a sort of protection by diversifying your strategies.
A good example of this is rock-paper-scissors. Imagine playing against someone with a non-random strategy: once you learn their pattern you can beat them every round, whereas a uniformly random player gives you nothing to exploit.
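To make that concrete, here's a small Python sketch (the payoff bookkeeping is my own illustration, not from the thread) showing why uniform random play in rock-paper-scissors is unexploitable: against any opponent strategy its expected payoff is exactly zero, while a deterministic player can be beaten every round.

```python
# Payoff for (our move, their move): +1 win, -1 loss, 0 draw.
# Moves: 0 = rock, 1 = paper, 2 = scissors.
PAYOFF = [
    [0, -1, 1],    # rock vs rock / paper / scissors
    [1, 0, -1],    # paper
    [-1, 1, 0],    # scissors
]

def expected_payoff(ours, theirs):
    """Expected payoff of mixed strategy `ours` against mixed strategy `theirs`."""
    return sum(ours[i] * theirs[j] * PAYOFF[i][j]
               for i in range(3) for j in range(3))

uniform = [1 / 3, 1 / 3, 1 / 3]

# Uniform play nets zero against anything, even a pure strategy:
print(expected_payoff(uniform, [1.0, 0.0, 0.0]))  # 0.0 (vs always-rock)

# A deterministic player (always rock) loses every round to always-paper:
print(expected_payoff([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # -1.0
```

Each column of the payoff matrix sums to zero, which is why the uniform mix is a guaranteed break-even no matter what the opponent does.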
There sure is a lot of mathematical content there; it's a graduate-level class in CS theory. The math is all there for a good reason if you want to understand precisely what's true in this area vs. what's too good to be true, or to be able to prove that it's true. But I think the key insights can be understood without much math.
The author discusses this all much better than I can in a comment, so I recommend reading the introduction and conclusion chapters. Or this other exposition by a different author, which I discovered through a reference in the intro:
But here's an attempt at describing just one of the key insights: the right definition of "pseudorandom" is computational. Specifically, something that generates a bunch of data is pseudorandom if you can't tell it apart from genuinely random data, within feasible resources like polynomial time.
And one reason that's the right definition is: if you have such data, now you can plug it in literally anyplace you would use random data. Because if something would go wrong by doing so... well, that'd be a way to tell the difference! :-) There is no such way, so it must behave just like using the real thing.
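As a toy illustration of that definition (a statistical sketch, not a real cryptographic distinguisher): a generator whose bits are even slightly biased can be told apart from true randomness by the most naive test imaginable, just counting ones.

```python
import random

def distinguisher(bits, threshold=0.55):
    """Output 1 ('looks non-random') if the fraction of ones is suspicious."""
    return 1 if sum(bits) / len(bits) > threshold else 0

rng = random.Random(42)  # fixed seed so the demo is reproducible

fair = [rng.randint(0, 1) for _ in range(100_000)]                  # ~50% ones
biased = [1 if rng.random() < 0.6 else 0 for _ in range(100_000)]   # ~60% ones

print(distinguisher(fair))    # 0: passes this particular test
print(distinguisher(biased))  # 1: the bias gives it away
```

A pseudorandom generator, by the definition above, is one for which no feasible test of any kind has a noticeable advantage like this.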
I actually use a pseudorandom number generator as my interview question, and it's quite simple. I have a slightly simplified ASCII-art version of it that uses essentially a variation of Wolfram's Rule 30.
Coding it is easy, requiring no algorithms or data structures, just some iteration, if-statements, and substring lookups. At first some candidates are annoyed at what looks like a useless, made-up toy project, but then I tell them this was actually part of Mathematica in the early versions, and we start discussing how you'd turn that output into a useful pseudorandom number generating function with a seed, acceptable performance, etc. Quite a few end up happy with the problem at the end.
Let's implement Rule 30, which is quite beautiful, and he talks about it a lot: https://mathworld.wolfram.com/Rule30.html
We start with a simple seed and apply a set of simple rules over and over again. If you scroll past the first few lines of the output, you start to get very complicated patterns. If you try other rules, you usually don't see any patterns or anything interesting; Rule 30 is kind of rare.
Wolfram's done some analysis, and he says that if you go down the central column (assuming the width keeps on growing, which is not the case in my fiddle) and take each "bit" (which I represent as "-" and "X") and plot it, you'll find they're fairly uniformly distributed. Once we have this stream of uniformly distributed bits, you can convert it into a string of integers or whatever you need.
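Here's a minimal Python sketch of that idea (the commenter's version is an ASCII-art fiddle; this one grows the row each step, so the central column is never contaminated by edge effects):

```python
def rule30_center_bits(steps):
    """Run Rule 30 from a single 'on' cell and collect the central column.

    Rule 30's update is: new_cell = left XOR (center OR right).
    """
    row = [1]      # seed: a single on cell
    bits = []
    for _ in range(steps):
        bits.append(row[len(row) // 2])   # record the centre cell
        row = [0, 0] + row + [0, 0]       # widen the row before updating
        row = [row[i - 1] ^ (row[i] | row[i + 1])
               for i in range(1, len(row) - 1)]
    return bits

bits = rule30_center_bits(16)
print("".join("X" if b else "-" for b in bits))  # starts "XX-XX..."
print(int("".join(map(str, bits)), 2))           # pack the bits into an integer
```

The last line shows one simple way to turn the bit stream into integers; a real generator would also need seeding and a much faster representation than Python lists.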
For the math behind testing whether something is random or not, you can look up the "chi-squared test", but I think the intuition is way simpler: random things are uniformly distributed. Random things look like white noise on old TVs. If you plot the frequency of occurrence, each outcome should be just as likely as every other. In our example, if you take the output, flatten it into a string, and segment it into chunks of 8 (each representing a byte), then count the occurrences of each byte ("--------", "-------X", "------X-", ...), they should all appear roughly the same number of times.
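That counting test can be sketched in a few lines of Python (using "0"/"1" instead of "-"/"X", and a plain `random` bit string as a stand-in for the generator under test):

```python
from collections import Counter
import random

def chi_squared_uniformity(bitstring, chunk=8):
    """Chi-squared statistic for the byte-frequency test described above."""
    chunks = [bitstring[i:i + chunk]
              for i in range(0, len(bitstring) - chunk + 1, chunk)]
    counts = Counter(chunks)
    expected = len(chunks) / (2 ** chunk)   # each byte equally likely
    return sum((counts.get(format(v, f"0{chunk}b"), 0) - expected) ** 2 / expected
               for v in range(2 ** chunk))

rng = random.Random(0)
bits = "".join(rng.choice("01") for _ in range(80_000))  # 10,000 "bytes"

# For 2**8 - 1 = 255 degrees of freedom, a genuinely uniform source gives a
# statistic around 255; wildly larger values mean the byte counts are skewed.
print(chi_squared_uniformity(bits))
```

A degenerate input like a constant string sends the statistic through the roof, which is exactly the "not white noise" failure the intuition describes.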
It ends up getting pretty mathematical too, but it seems like Goldreich takes more time up front to really get into the concepts at a philosophical level before proceeding to the math - so you might enjoy reading the introduction, if nothing else.
Can be used to shuffle a billion points per second on the GPU with sufficient randomness for some use cases.
I had to look it up, and Wikipedia has this statement a bit differently. This one makes von Neumann sound funny, because his middle-square method is a terrible PRNG.
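For the curious, the middle-square method is easy to demo, and its terribleness shows up fast. A quick sketch of the 4-digit variant:

```python
def midsquare(seed, n):
    """Von Neumann's middle-square method: square, keep the middle 4 digits."""
    out, x = [], seed
    for _ in range(n):
        x = int(f"{x * x:08d}"[2:6])  # middle 4 digits of the zero-padded square
        out.append(x)
    return out

print(midsquare(1234, 5))  # [5227, 3215, 3362, 3030, 1809]

# Many seeds collapse quickly: once you reach 0 (or a short cycle), you're stuck.
print(midsquare(0, 3))     # [0, 0, 0]
```

The short periods and absorbing states like 0 are why the method was abandoned almost immediately.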
All pseudorandom numbers are deterministic; that’s what the ‘pseudo-’ prefix is meant to indicate. There are other kinds of random number generators that observe physical entropy sources instead. Perhaps the most iconic is Cloudflare’s wall of lava lamps.
We're not the first ones to do this. Our LavaRand system was inspired by a similar system first proposed and built by Silicon Graphics and patented in 1996 (the patent has since expired).
I'm hardly accusing anyone of any wrongdoing. It is a turn of phrase that I'm sure most English speakers have encountered.
Compare saying "wow, I'm really stupid" when you make a mistake vs. saying "wow, you're really stupid" when they do.
Self-deprecation applied to others is just deprecation.
Why the accusation that Cloudflare stole this? The patent expired in 2016 and Cloudflare acknowledges SGI on their page.