Pseudo-Random Number Generation Using Generative Adversarial Networks (2018) (arxiv.org)
30 points by cpeterso on May 1, 2022 | 13 comments



Seems crazy!.. Why?

An NN couldn't be used as a (valid) source of randomness, right? I thought a GAN actually needs some randomness to start with...

> The most significant is partially concealing the output of the GAN’s generator, and training the adversary to discover a mapping from the overt part to the concealed part. The generator therefore learns to produce values the adversary cannot predict, rather than to approximate an explicit reference distribution. We demonstrate that a GAN can effectively train even a small feed-forward fully connected neural network to produce pseudo-random number sequences with good statistical properties. At best, subjected to the NIST test suite, the trained generator passed around 99% of test instances and 98% of overall tests, outperforming a number of standard non-cryptographic PRNGs.
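
A minimal sketch (my own, not the paper's code) of the training scheme that abstract describes: a small generator emits a block of values, the last few are concealed, a predictor tries to recover them from the overt ones, and the generator is trained to defeat that prediction. Layer sizes, the seed/counter inputs, and the optimizers are assumptions for illustration.

    # Hypothetical PyTorch sketch of the "conceal part of the output" GAN scheme.
    import torch
    import torch.nn as nn

    BLOCK, OVERT = 16, 12                  # 16 outputs per block, first 12 overt
    CONCEALED = BLOCK - OVERT

    generator = nn.Sequential(             # small fully connected generator
        nn.Linear(2, 32), nn.ReLU(),
        nn.Linear(32, BLOCK), nn.Sigmoid())

    predictor = nn.Sequential(             # adversary: overt part -> concealed part
        nn.Linear(OVERT, 32), nn.ReLU(),
        nn.Linear(32, CONCEALED), nn.Sigmoid())

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    p_opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
    mse = nn.MSELoss()

    for step in range(1000):
        seed = torch.rand(64, 1)           # "seed" input (assumed encoding)
        counter = torch.rand(64, 1)        # "counter"/state input (assumed)
        block = generator(torch.cat([seed, counter], dim=1))
        overt, concealed = block[:, :OVERT], block[:, OVERT:]

        # 1) train the predictor to guess the concealed values from the overt ones
        p_loss = mse(predictor(overt.detach()), concealed.detach())
        p_opt.zero_grad(); p_loss.backward(); p_opt.step()

        # 2) train the generator so that the prediction fails
        g_loss = -mse(predictor(overt), concealed)
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()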


Not to degrade the quality of the conversation with sci-fi level speculation, but it's funny how ideas like these can inspire and unblock other ones. This comment started as an adolescent cog.sci joke in the '90s about "the blinkenlight that was smarter than you" because it was powered by a PRNG, and if we couldn't reproduce or predict it, it was therefore ethically a generally intelligent AI and the burden of proof shifted to humans to show it was not.

However, I'd take the joke a step further: if a neural network model can act as a strong CRNG, it is essentially a unique instance of itself, one with intent and a function of will that is not within our ability to precompute. It's sci-fi magical thinking, but also simple enough to scale.

Where it becomes kind of fun to think about is that the instance of an ML model expressing a CRNG is now unique in the universe, in an information-theoretic sense. It's not that RNGs are alive, but a uniqueness constraint for phenomena in a given universe would probably rely on something like one, where the "cryptographic" aspect removes mutual information between its outputs and distributes them so they don't collide. If you are an AI living in a universe where uniqueness and "distance" are defined by a function that removes information between instances of local phenomena, there may be aspects of it that imply correlations and limits, even collisions and shortcuts.

Loopy, sure, but speculating about how an AI might navigate and operate on its substrate could reflect conceptual limits of what we can know about our own.


Thank you.

Yeah sure, I’m not dismissing the idea in any way, it’s really interesting. That’s actually something I would have wanted to try, like... training an RNN to compute a hash, or even reverse a hash, or...


I suspect learning to hash and learning to reverse hashes are equally hard, given the right training data.


It's an important question, but not too crazy for me. Think about Wolfram's automata: 1-dimensional, recurrent, 3-wide inputs (i.e. super simple), and yet one of the 256 possible rules generates randomness that is used as the PRNG in Mathematica.
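
To make that concrete, a tiny Rule 30 sketch (illustrative only, not Mathematica's implementation): each cell's next state is left XOR (center OR right), and the center column of the evolution is read off as the bit stream. The fixed width and cyclic boundary here are simplifications.

    # Illustrative Rule 30 bit generator; the center column is the output.
    def rule30_bits(n_bits, width=257):
        row = [0] * width
        row[width // 2] = 1                  # single seed cell in the middle
        bits = []
        for _ in range(n_bits):
            bits.append(row[width // 2])     # center column = output bit
            row = [
                row[(i - 1) % width] ^ (row[i] | row[(i + 1) % width])
                for i in range(width)
            ]                                # Rule 30: left XOR (center OR right)
        return bits

    print(rule30_bits(32))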

Depends on the details of the ANN, but if it's a recurrent net then it seems like there's at least one very shallow net that solves it.


Yes, you’re right about that, thank you. I read it as true RNG while the paper is talking about a PRNG. And an RNN would fit the bill, indeed.


Pure rule 30 won't pass BigCrush.


Yeah, if you mean including all output bits, agreed, but note it's not used that way and there's a well-known area of reliable randomness to the right end of the output bits that is used. To the original point, I think a "right pass filter" is simple for an ANN to figure out.

Some interesting discussion on Rule30 and BigCrush, updated as of 2017 by Daniel Lichtblau at Wolfram Research:

"In house testing has shown ExtendedCA passing all Diehard and BigCrush tests... The Rule 30 generator is almost as good as this in terms of the Big Crush suite... The upshot is that the Rule 30 and ExtendedCA RNGs are, by the standards of current testing, quite sound."

https://mathematica.stackexchange.com/questions/3208/quality...

ExtendedCA also seems like a good demonstration of a very simple, high-quality PRNG that an RNN could discover:

https://reference.wolfram.com/language/tutorial/RandomNumber...

It's 1D, with a 5-wide input whose cells are spaced 4 apart from each other.


You don't /need/ a source of randomness for GAN inference; it just helps with some applications. The basic idea is that Network G produces some output, and Network D tries to classify it...


It doesn't seem like a very good idea to conclude that, because the discriminator cannot predict the output of the generator, prediction is impossible. I have been chipping away at improving ML prediction of non-CSPRNGs for the last few years, and prediction is very sensitive to hyperparameter choice (thus far anyway). It seems easy to imagine a PRNG produced in this way that seems usable... until a more sophisticated network learns to predict it.


It would be interesting to try to turn the randomness tests themselves into additional loss functions, supplementing the GAN loss.

Production-ready GANs, in my experience, are trained with a lot of auxiliary losses. For example, HiFi-GAN (for audio generation) uses four different discriminators to detect problems in the spectrogram and the fine structure of the waveform, and the losses are more about matching the discriminator features than the actual classification result. Works great, but is hilariously 'impure.'
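
As a hedged sketch of the first suggestion (turning a randomness test into an auxiliary loss): the NIST-style monobit/frequency test could be approximated as a differentiable penalty on the mean of the generator's soft output bits. The soft-bit relaxation and the weighting are assumptions, not something from the paper.

    # Illustrative auxiliary loss inspired by the monobit (frequency) test:
    # a stream of "soft bits" in [0, 1] should average to about 0.5.
    # This is a simplified differentiable proxy, not the NIST test itself.
    import torch

    def monobit_loss(soft_bits: torch.Tensor) -> torch.Tensor:
        # Penalize deviation of the empirical bit frequency from 1/2.
        return (soft_bits.mean() - 0.5) ** 2

    # Usage (weight is an assumed value):
    # g_loss = adversarial_loss + 10.0 * monobit_loss(generator_output)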


Is your work published?


Hm. It's a way to make a good hash, not a good random number generator. A good hash is one where you put in nonrandom key values, maybe from a small set, and what comes out passes tests for randomness. For example, flip one input bit and about half the output bits should change.
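
That last property (the avalanche effect) is easy to check empirically; a quick sketch, using SHA-256 as a stand-in for the "good hasher" (the choice of hash and of input is arbitrary here):

    # Flip one input bit and count how many SHA-256 output bits change.
    # For a good hash, expect roughly half of the 256 bits to flip.
    import hashlib

    def bit_diff(a: bytes, b: bytes) -> int:
        ha = int.from_bytes(hashlib.sha256(a).digest(), "big")
        hb = int.from_bytes(hashlib.sha256(b).digest(), "big")
        return bin(ha ^ hb).count("1")

    msg = b"example input"
    flipped = bytes([msg[0] ^ 0x01]) + msg[1:]   # flip the lowest bit of byte 0
    print(bit_diff(msg, flipped), "of 256 output bits changed")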

A good hasher coupled to a bad input looks like a random number generator, but it's not. If you feed sequential numbers, say in the range 0..2^32-1, to a good hasher, you may get something that passes standard tests for random numbers. But you only have 4 bytes of key, and such a cryptosystem can be easily broken by brute force.

Now, if you make a hasher which takes in input, folds it down to a short value, and then generates an output value with a good hasher, you have something that looks like a crypto key generator, but isn't. This neural net approach offers a new way to automatically develop plausible, obfuscated, but easily breakable random key generators.

So it's useful as a way to generate backdoors.



