An NN couldn’t be used as a (valid) source of randomness, right? I thought a GAN actually needs some randomness to start with...
> The most significant is partially concealing the output of the GAN’s generator, and training the adversary to discover a mapping from the overt part to the concealed part. The generator therefore learns to produce values the adversary cannot predict, rather than to approximate an explicit reference distribution. We demonstrate that a GAN can effectively train even a small feed-forward fully connected neural network to produce pseudo-random number sequences with good statistical properties. At best, subjected to the NIST test suite, the trained generator passed around 99% of test instances and 98% of overall tests, outperforming a number of standard non-cryptographic PRNGs.
However, I'd take the joke a step further: if a neural network model can act as a strong CRNG, it is essentially a unique instance of itself, with intent and a kind of will that is not within our ability to precompute. It's sci-fi magical thinking, but also simple enough to scale.
Where it becomes kind of fun to think about is that the instance of an ML model expressing a CRNG is now unique in the universe, in an information-theoretic sense. It's not that RNGs are alive, but a uniqueness constraint for phenomena in a given universe would probably rely on something like one, where the "cryptographic" aspect removes mutual information between its outputs and distributes them so they don't collide. If you are an AI living in a universe where uniqueness and "distance" are defined by a function that removes information between instances of local phenomena, there may be aspects of it that imply correlations and limits, even collisions and shortcuts.
Loopy, sure, but speculating about how an AI might navigate and operate on its substrate could reflect conceptual limits of what we can know about our own.
Yeah sure, I’m not dismissing the idea in any way; it’s really interesting. That’s actually something I’ve wanted to try, just like... training an RNN to compute a hash, or even reverse a hash, or...
Depends on the details of the ANN, but if it's a recurrent net then it seems like there's at least one very shallow net that solves it.
Some interesting discussion on Rule30 and BigCrush, updated as of 2017 by Daniel Lichtblau at Wolfram Research:
"In house testing has shown ExtendedCA passing all Diehard and BigCrush tests... The Rule 30 generator is almost as good as this in terms of the Big Crush suite... The upshot is that the Rule 30 and ExtendedCA RNGs are, by the standards of current testing, quite sound."
ExtendedCA also seems like a good demonstration of a very simple, high-quality PRNG that an RNN could discover: it's a 1-D rule with a 5-cell neighborhood, the input cells spaced 4 apart from each other.
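For a concrete sense of how simple these CA generators are, here's a minimal sketch of the Rule 30 construction the thread refers to (bits read off the centre column of the automaton). The width and single-cell seed are illustrative choices, not Wolfram's ExtendedCA parameters:

```python
def rule30_bits(n_bits, width=257):
    """Yield n_bits pseudo-random bits from Rule 30's centre column."""
    cells = [0] * width
    cells[width // 2] = 1          # single-cell seed in the middle
    bits = []
    for _ in range(n_bits):
        bits.append(cells[width // 2])
        # Rule 30: new = left XOR (centre OR right), wrapping at the edges
        cells = [cells[i - 1] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]
    return bits

print(rule30_bits(16))  # → [1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1]
```

The update rule is a single XOR/OR per cell, which is part of why it's plausible that a small recurrent net could stumble onto something in this family.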
Production-ready GANs in my experience are trained with a lot of auxiliary losses. For example: HiFi-GAN (for audio generation) uses four different discriminators to detect problems in the spectrogram and fine structure of the waveform, and the losses are more about matching the discriminator features than the actual classification result. Works great, but is hilariously 'impure.'
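To illustrate the feature-matching idea mentioned above: instead of scoring the generator only on the discriminator's final real/fake verdict, you penalise the distance between the discriminator's *intermediate* activations on real vs. generated samples. This is a hedged, framework-free sketch; the names and list-of-lists feature layout are illustrative, not HiFi-GAN's actual API:

```python
def feature_matching_loss(features_real, features_fake):
    """Mean absolute difference between matching discriminator feature maps.

    features_real / features_fake: one flat list of activations per
    discriminator layer, in the same order.
    """
    loss = 0.0
    for f_r, f_f in zip(features_real, features_fake):
        loss += sum(abs(a - b) for a, b in zip(f_r, f_f)) / len(f_r)
    return loss

# Identical features → zero loss; a unit offset per activation → loss of 1.0
print(feature_matching_loss([[1.0, 2.0]], [[1.0, 2.0]]))  # → 0.0
print(feature_matching_loss([[0.0, 0.0]], [[1.0, 1.0]]))  # → 1.0
```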
A good hasher coupled to a bad input looks like a random number generator, but it's not. If you feed sequential numbers, say in the range 0..2^32-1, to a good hasher, you may get something that passes standard tests for random numbers. But you only have 4 bytes of key, and such a cryptosystem can be easily broken by brute force.
Now, if you make a hasher which takes an input, folds it down to a short value, and then generates an output value with a good hasher, you have something that looks like a crypto key generator, but isn't. This neural net approach offers a new way to automatically develop plausible, obfuscated, but easily breakable random key generators.
So it's useful as a way to generate backdoors.
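A minimal sketch of the attack described above: a strong hash over a tiny input space still yields only a tiny keyspace. SHA-256 here stands in for "a good hasher," and the counter is shrunk to 16 bits so the exhaustive search finishes instantly; the 32-bit case from the comment is the same attack, just 2^16 times slower:

```python
import hashlib

def keystream_block(counter: int) -> bytes:
    """Hash a small counter; output 'looks' random but has only 16 bits of entropy."""
    return hashlib.sha256(counter.to_bytes(2, "big")).digest()

def brute_force(observed: bytes) -> int:
    """Recover the hidden counter from one observed block by exhaustive search."""
    for c in range(2 ** 16):
        if keystream_block(c) == observed:
            return c
    raise ValueError("not found")

secret = 31337
print(brute_force(keystream_block(secret)))  # → 31337
```

Each block would pass superficial randomness checks, yet one observed output is enough to recover the entire internal state, which is exactly what makes the construction backdoor-shaped.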