Got it, thanks! For me the better question is often simply "Why is it X?" Knowing why the speed is X is often of much greater utility than simply knowing that X is the speed. I like to focus on determining the current single limiting factor before considering the harder question of what would be necessary to achieve 10x.
Summarizing, the paper suggests that you can generate high quality pseudorandom numbers by hashing a counter with a cryptographic hash (such as AES). Since AES has hardware support in modern processors, this is fast: less than 2 cycles per byte of randomness. If you do fewer "rounds" of encryption than used for cryptography, you can pass all existing tests for randomness with run time of less than 1 cycle per byte. Using a counter as the state makes it very easy to skip ahead in the sequence, and to distribute "streams" across multiple processors.
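The counter-mode idea is easy to sketch. This is not the paper's construction (it uses AES, Threefish, and Philox); it's a minimal stdlib-only illustration using SHA-256 as the hash, showing why skip-ahead and parallel streams come for free:

```python
import hashlib

def counter_prng(key: bytes, stream: int, start: int = 0):
    """Yield pseudorandom 32-byte blocks by hashing a counter.

    A sketch of counter-mode generation using SHA-256 (the paper uses
    AES and other primitives; SHA-256 is chosen only because it is in
    the standard library). The state is just (stream, counter), so
    skipping ahead means starting the counter higher, and independent
    streams are just different stream ids.
    """
    counter = start
    while True:
        block = key + stream.to_bytes(8, "big") + counter.to_bytes(8, "big")
        yield hashlib.sha256(block).digest()
        counter += 1

# Skip-ahead demo: block 1000 is reachable without computing blocks 0..999.
g = counter_prng(b"seed", stream=0)
for _ in range(1000):
    next(g)                       # consume blocks 0..999
skipped = next(counter_prng(b"seed", stream=0, start=1000))
assert next(g) == skipped         # same block, no sequential work needed
```

Distributing work across processors is the same trick: give each worker a distinct `stream` id and they generate non-overlapping sequences with no coordination.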
I'm sort of surprised this could be a paper published in 2011. Isn't this pretty much the obvious thing to do? Counter hashing is a well known way to generate good random numbers, and it seems almost as obvious how you'd scale to multiple processors.
Forgive my ignorance in cryptography. If a series of pseudorandom numbers can be trivially replicated by figuring out the counter and hash algorithm (seems simple with open source software), how can the pseudorandom numbers be said to be of high quality?
The key(s) used for the cipher would need to be randomly seeded in some way that an attacker could not easily guess (e.g. with a different CSPRNG).
Edit: And for hash functions, you can generate some random bytes that you append the counter to before putting it through the hash algorithm. The same as above, if those bytes are hard to guess then the construction should be safe.
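A minimal sketch of that keyed construction, assuming Python's stdlib: the key comes from the OS CSPRNG via `secrets`, and HMAC is used rather than bare concatenation (plain prepend-the-secret works as described above, but HMAC also sidesteps length-extension pitfalls with Merkle-Damgård hashes):

```python
import hashlib
import hmac
import secrets

# Seed with bytes an attacker cannot easily guess (OS CSPRNG).
key = secrets.token_bytes(32)

def keyed_block(key: bytes, counter: int) -> bytes:
    # Mix the secret key with the counter through a keyed hash.
    # Without knowing the key, reproducing the stream requires
    # guessing it, even though the algorithm itself is public.
    return hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()

# Same key and counter reproduce the block; a different key does not.
assert keyed_block(key, 0) == keyed_block(key, 0)
assert keyed_block(key, 0) != keyed_block(secrets.token_bytes(32), 0)
```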
Although this thing is able to fool the 200 or so tests from TestU01, it would likely not take an expert long to devise a test that distinguishes it from random, and even recovers the key (read: seed). On the other hand, AES-CTR (conjecturally) fools _all_ statistical tests that take less than 2^128 effort, and on top of that it is still very, very fast (0.63 cycles per byte on Haswell). So what really was gained? Cryptographic primitives have gotten so fast that the additional speed from these ad hoc generators seems like an unworthy risk. I concede that it can be very fun to try to make them, though.
Additionally, a generator that passes those 200 tests is not guaranteed to be high-quality for a particular usage, which might as well be considered nothing but another statistical test. There is the famous case of the r250 generator---an additive Fibonacci generator that passed all statistical tests of the time---which turned out to have characteristics that rendered costly real-world simulations wrong.
That is C++, yes, but I don't think there is much of anything modern about it. Trying to save on space probably contributed to its unreadability a bit.
I know this paper, and like this approach. Reduced-round primitives let you take advantage of existing analysis, and keep some guarantees while improving speed. In the case of AES, we know that the probability of any differential for 4 rounds is at most 2^-113; similarly for linear probabilities. I'm curious why ARS-4 fails Crush; I wonder if the custom key schedule is the culprit.
Not quite that, but here's someone's approximation of this approach for an extremely fast hash:
So this was a fun hash that I came up with when I was looking for a 128-bit hash for falkervisor's code coverage database. This hash passes all of smhasher's tests, faster than any other hash that does the same.

I made this up by randomly testing things until it passed all of smhasher's tests. It is likely that this hash is heavily flawed and smhasher just doesn't stress its flaws.
These are interesting links, but they have to do with treating individual ornamental ash trees that are infested with a particular insect. Fruit can only be grafted onto the same species (or a very closely related one) so this has no particular relevance. Information about standard treatments for ornamental plums or crabapples would be relevant, though.
Maybe I'm wrong, but I don't think you are understanding the issue. These adult eels spend their lives in the rivers of Atlantic Canada. They come down the river once in their lives to breed thousands of miles away in the North Atlantic. The young eels somehow then make their way back to the rivers.
The high prices are for the "glass eels", the juvenile stage. They are caught with nets in the river as they return. This article is about the path taken by the adult eels from the river to the ocean. Knowing the rough path they take in the ocean is unlikely to help anyone catch more eels.
If you want to catch the eels, you do it in the small river, rather than in the open ocean. Knowing more about their reproduction helps us understand and protect eels. Do you have reason to believe this is not the case?