Note that the thing that looks like a paywall isn't one - while Medium does have paywalled articles, they're rare (and as far as I can tell this one isn't paywalled), and they're at the discretion of the author, not Medium. Medium aggressively wants you to sign up once you've read more than n articles in a month, but you can just click the X and read article n+1.
These discussions about randomness tend to get too philosophical. In practice the technical details matter more: for cryptographic or statistical applications you need consistent quality.
256 bits of random state, slowly updated with a few random bits from a physical source (thermal or quantum noise), is enough when it's fed into a PRNG process that whitens it.
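A minimal sketch of that idea, with SHA-256 as the whitener and SHAKE256 for output expansion (both my choices for illustration); read_physical_noise() is hypothetical and stands in for whatever slow hardware source you have:

```python
import hashlib

def read_physical_noise(n: int) -> bytes:
    """Hypothetical: n raw, possibly biased bytes from thermal/quantum noise."""
    raise NotImplementedError

def stir(pool: bytes, sample: bytes) -> bytes:
    # Fold a raw sample into the 256-bit pool; SHA-256 does the whitening.
    return hashlib.sha256(pool + sample).digest()

def random_bytes(pool: bytes, n: int) -> tuple[bytes, bytes]:
    # Expand the pool with SHAKE256, then ratchet the pool forward so the
    # new state doesn't reveal past outputs.
    out = hashlib.shake_256(pool + b"output").digest(n)
    new_pool = hashlib.sha256(pool + b"ratchet").digest()
    return out, new_pool

# pool = bytes(32)                             # 256-bit state
# pool = stir(pool, read_physical_noise(64))   # fed slowly from hardware
# data, pool = random_bytes(pool, 32)
```

Even a slow, low-quality source keeps the state unpredictable as long as stir() is called periodically with fresh samples.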
You can take 'bad' sources of noise and make them better, but doing so generally requires a fair bit of math and can be costly to implement. At that point, why not just use a PRNG a la urandom?
Why not just use that?
In essence, the difference between Bitcoin and an aggregate scheme is that in Bitcoin, your control over the entropy is probabilistic and directly proportional to the number of bribed parties, whereas in an aggregate scheme, you have no control whatsoever until you manage to bribe every party, at which point you gain full control.
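A toy commit-reveal sketch of the aggregate idea (not any particular protocol): the output is the XOR of everyone's contribution, so it is uniformly random as long as at least one contributor is honest, and an attacker gains nothing until they control all of them:

```python
import hashlib, secrets
from functools import reduce

def commit(contribution: bytes) -> bytes:
    return hashlib.sha256(contribution).digest()

def combine(contributions: list[bytes]) -> bytes:
    # XOR all 32-byte contributions; a single honest (uniform) input makes
    # the result uniform no matter what the others chose.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), contributions)

# Each party publishes commit(c_i) first, then reveals c_i; reveals are
# checked against the commitments before combining.
contributions = [secrets.token_bytes(32) for _ in range(5)]
commitments = [commit(c) for c in contributions]
assert all(commit(c) == d for c, d in zip(contributions, commitments))
print(combine(contributions).hex())
```

The usual caveat is that the last party to reveal can abort after seeing the others, which is the withholding problem the threshold-signature scheme discussed further down is meant to address.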
All considered, that means Antpool has a 2.5-3% chance of flipping an unwanted result into a wanted one. At $125,000 per block, that's about $4 million of cost per bit manipulated favorably, getting exponentially worse as you need to manipulate more bits.
And remember that the timestamp of the block is only an approximation. You can use any time you want. I'm not sure whether the network rejects a block when its time is too far off, but you could use a timestamp a few minutes ahead and wait to broadcast the block (perhaps secretly mining the next block in the meantime), or use a timestamp a few minutes in the past and blame the delayed broadcast on a bad connection. (It's more difficult to do this secretly in a pool.)
The newspapers here make a big fuss about the first baby of the year, and I've always suspected the exact time of birth recorded in that moment isn't reliable.
You can easily put secure bounds on this and use it for most applications.
Since hashing is a serial operation, and each hash is a random mapping of input to output, with enough iterations (hundreds of billions) you make it completely infeasible for the miner to even know what the result was by the time they have to make the block public.
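A sketch of that iterated-hash delay, assuming SHA-256 and an illustrative (made-up) iteration count; because each step depends on the previous output, the work cannot be parallelized:

```python
import hashlib

def delay_hash(seed: bytes, iterations: int) -> bytes:
    # Strictly serial: each round's input is the previous round's output,
    # so throwing more hardware at it doesn't let the miner finish sooner.
    h = seed
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h

# result = delay_hash(block_hash, 300_000_000_000)  # iteration count illustrative
```

Anyone can verify the result later by redoing the same chain of hashes from the published block.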
Zcash actually did this for their second trusted setup; IIRC the delay was set to be about a week's worth of computation. It's a much better scheme for many use cases than anything else I've seen in this conversation. The main downside is that exactly when each participant actually finds out what the final result is isn't well defined. But for cases where you can commit to the result in advance, that's fine.
You can use simple techniques like this to make most use cases secure.
This doesn't seem to be much more than a fancy way of mixing randomness together (which produces valid randomness if even just one source was truly random) and then using BLS signatures on top of that. BLS signatures let you verify which nodes were involved, and it seems like with the way they threshold, they're trying to prevent attacks based on not revealing (i.e. the randomness is the aggregated threshold signature of the random data they mixed together).
They're signing their contributions.
For all practical purposes, a properly implemented software random number generator provided by the OS is fine. The few corner cases where this isn't true (mostly early-boot entropy problems) appear in situations where you don't yet have Internet connectivity.
If the article deserves criticism, it is in the suggestion that true randomness is hard to come by. We have billions of connected devices with high-density CCD image sensors. Each pixel generates random noise that may be sampled at many frames per second. The true randomness of any single pixel is limited -- bias is easy to demonstrate -- but mixed with millions of others their randomness becomes exemplary. The top-quality random bits available from these legions of phone and surveillance cameras far exceed any practical need.
Those without a camera often have access to a microphone. Microphones, likewise, provide a ready source of random noise in their least-significant bit. Where a stereo source is available, XORing low bits is better.
Those with a radio receiver can find easy random bits by tuning to an unused channel -- or even a used one -- and stirring least-significant bits into a pool.
Such a pool may be left over from previous runs, so a device with any persistent storage need never start with no random bits.
Systems with access to a CCD, microphone, or receiver have a ready solution to the problem of early-boot key generation.
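A sketch of stirring sensor least-significant bits into a pool; capture_frame() and capture_audio() are hypothetical stand-ins for whatever raw sensor access the device has, and SHA-256 is used as the mixer:

```python
import hashlib

def capture_frame() -> bytes:
    """Hypothetical: raw pixel bytes from a CCD/CMOS sensor."""
    raise NotImplementedError

def capture_audio(n_samples: int) -> bytes:
    """Hypothetical: raw PCM sample bytes from a microphone."""
    raise NotImplementedError

def stir_sensor_noise(pool: bytes) -> bytes:
    # Keep only the least-significant bit of each byte; individually the bits
    # are biased, but hashing a large batch of them together removes the bias.
    lsbs = bytes(b & 1 for b in capture_frame()) + \
           bytes(b & 1 for b in capture_audio(4096))
    return hashlib.sha256(pool + lsbs).digest()
```

The resulting pool can be written to persistent storage so the next boot never starts with no random bits, as noted above.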
A truly random system can also support more sophisticated simulations, and the downstream effects of those simulations.
Think of trying to programmatically solve unbounded equations or the Millennium Problems. Anything in testing that has to deal with chaos would not be accurately represented until true randomness is available.
Now, what true randomness is, and whether it exists at all, is a whole other topic of conversation.
Unfortunately, at least from a philosophical perspective for now, things like Superdeterminism have to be considered. One might come to the conclusion that there are both deterministic models and randomness-based models of our universe, and thus that the physical laws don't commit one way or the other.
One of the problems with current PKI is weakness in the face of quantum computers, leading to a new crop of algorithms being submitted to NIST, etc.
I wanted to ask whether the following simple scheme, based just on cryptographic hashes, can be used CONFIDENTLY, SECURELY and RELIABLY in many situations where asymmetric-key cryptography is used today, and in many others too, such as providing provably random polling, etc. It is very similar to a One Time Pad but uses a cryptographic hash function to generate the OTP codes.
Here is the scheme:
Everyone generates a random private key K[p] and stores it just like any asymmetric private key (encrypted with some non-stored derived key).
They use any cryptographic hash function that hasn't had a serious preimage attack (perhaps even MD5?), hash the key n (e.g. 10,000,000) times to get h[p][n], and publicly commit to that number. This is like a public key.
The hashes are long enough that it’s infeasible to reverse them. Key strengthening can be achieved by jumping a few onion layers between transactions.
If you start running out of layers, you post a new public key, signed with one of your remaining onion-layer codes.
Any verifier stores the original public key per participant, and can then replace it with the new public key if it was properly signed by the old one, etc.
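Before the use cases, here's a minimal sketch of the basic machinery described above, assuming SHA-256 (function names are mine): the private key is the start of the chain, the public commitment is the end, and revealing layer n-k is checked by hashing it k more times:

```python
import hashlib, secrets

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def keygen(n: int) -> tuple[bytes, bytes]:
    k = secrets.token_bytes(32)       # private key K[p]
    h = k
    for _ in range(n):
        h = H(h)
    return k, h                       # (private key, public commitment h[p][n])

def layer(k: bytes, i: int) -> bytes:
    # i-th onion layer h[p][i], counted up from the private key.
    h = k
    for _ in range(i):
        h = H(h)
    return h

def verify(revealed: bytes, layers_peeled: int, public: bytes) -> bool:
    # Hash the revealed value 'layers_peeled' more times; it must land
    # exactly on the stored public commitment.
    h = revealed
    for _ in range(layers_peeled):
        h = H(h)
    return h == public

# k, pub = keygen(10_000_000)
# verify(layer(k, 9_999_999), 1, pub)  # True
```

After a successful check the verifier should replace its stored commitment with the revealed value, so the same layer (or an older one) can't be replayed.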
Use case: generating provably random numbers by mutually distrusting parties
Participants gradually reveal their hashes, one onion layer per transaction. Each provably random seed is a function of the alphabetically smallest/largest three of those hashes at the next onion layer. If not all of them reveal their hashes in time, they gossip, verify, and agree on which ones are the smallest/largest three before some cutoff point, like "most reported that most reported". That leaves tons of bits of entropy coming from everyone!
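A sketch of the seed derivation as described, assuming each participant's reveal has already been verified against their commitment (derive_seed is my name, SHA-256 my choice):

```python
import hashlib

def derive_seed(revealed: list[bytes]) -> bytes:
    # 'revealed' holds each participant's verified next-onion-layer value.
    # Hashing the lexicographically smallest three gives a seed no single
    # participant could have steered on their own.
    chosen = sorted(revealed)[:3]
    return hashlib.sha256(b"".join(chosen)).digest()
```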
Use case: Authenticator Apps
The hash h[p][n+1] would be a hash of some substring of h[p][n] with enough bits that finding all chosen preimages (by an eavesdropper who saw the previous code) would be infeasible in advance. Perhaps 10 alphanumeric characters is enough. Also, when displaying the code to enter, the authenticator app can tell the user a number from 1-100 indicating to the verifier how many onion layers to peel, making it harder to precompute the preimages. Or the user could enter the entire hash via the network-connected computer by scanning a QR code, NFC, or something. From a security standpoint, this method seems superior to the HOTP and TOTP schemes used in authenticator apps today, since there is no need to trust the verifier with any secret keys (https://www.ietf.org/rfc/rfc4226.txt). There is also no need to synchronize clocks, since the client simply tells the server how many times to run the hash, and increments that number every time.
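A sketch of the verifier side, assuming the full hash is submitted and the client states how many layers to peel (the 1-100 number above); the Verifier class and its names are mine:

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

class Verifier:
    def __init__(self, committed_layer: bytes):
        # Last accepted onion layer; initially the public commitment h[p][n].
        self.current = committed_layer

    def check(self, code: bytes, layers_to_peel: int) -> bool:
        # Re-hash the submitted code the stated number of times; success
        # means the client knew a deeper preimage, so accept and move the
        # stored layer down the chain.
        h = code
        for _ in range(layers_to_peel):
            h = H(h)
        if h == self.current:
            self.current = code   # never accept this layer (or older) again
            return True
        return False
```

Note that the server stores no secret at all, only the current public layer, which is the claimed advantage over HOTP/TOTP.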
Use case: Signing Payloads
Participants reveal a payload and commit to an HMAC signature made with the cryptographic key at the next onion level, which at that point is known only to them. All these signatures are collected into a blockchain block / Merkle-tree timestamp / similar thing, and it is sent to the participant before they reveal the onion key they used to sign it.
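A sketch of that flow (essentially delayed-key-disclosure MAC authentication), assuming HMAC-SHA-256; function names are mine:

```python
import hashlib, hmac

def sign_payload(payload: bytes, onion_key: bytes) -> bytes:
    # 'onion_key' is the next onion layer, known only to the signer right now.
    return hmac.new(onion_key, payload, hashlib.sha256).digest()

def verify_payload(payload: bytes, mac: bytes, revealed_key: bytes,
                   layers_peeled: int, public: bytes) -> bool:
    # 1) The revealed key must hash forward to the signer's public commitment.
    h = revealed_key
    for _ in range(layers_peeled):
        h = hashlib.sha256(h).digest()
    if h != public:
        return False
    # 2) The MAC must verify under the now-revealed key. This only proves
    #    authorship if the MAC was demonstrably recorded before the reveal.
    expected = hmac.new(revealed_key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)
```

The timestamping step is what makes the reveal safe: once the key is public, anyone could compute the same MAC, so the proof rests on the MAC having been collected first.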
Use case: Off the Record Messaging
The blockchain or Merkle tree is private between a few parties only, so once the next onion level is revealed, no participant can prove the payload was generated by a given participant: since all the onion hashes are known, any of them could generate a new valid tree with any payload history. They can only prove it to each other; or, given enough "witnesses" attesting to that tree, people might trust them on the basis of consensus among (presumably) mutually distrusting parties, but that's not the same thing as cryptographic proof. But that is true of any OTR conversation.
Use case: Restoring Access
This could be used instead of Shamir secret sharing. The server would have to store keys for every participant, and M of N participants would just sign that they approve authorization of some new session, some new key, or whatever. These signatures could be easily checked by anyone who has the public keys of the M participants who signed.
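A sketch of the M-of-N check, reusing the delayed-reveal MAC idea from the signing use case; all names here are hypothetical:

```python
import hashlib, hmac

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def approval_ok(request: bytes, mac: bytes, key: bytes,
                peeled: int, pub: bytes) -> bool:
    # The revealed onion key must hash forward to the participant's public
    # commitment, and the MAC over the request must verify under it.
    h = key
    for _ in range(peeled):
        h = H(h)
    expected = hmac.new(key, request, hashlib.sha256).digest()
    return h == pub and hmac.compare_digest(mac, expected)

def enough_approvals(request: bytes, approvals: dict, public_keys: dict,
                     m: int) -> bool:
    # approvals: participant -> (mac, revealed_onion_key, layers_peeled)
    valid = {who for who, (mac, key, peeled) in approvals.items()
             if who in public_keys
             and approval_ok(request, mac, key, peeled, public_keys[who])}
    return len(valid) >= m
```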
Use case: Decrypting payloads
Not sure how one would do this one, to be honest. With PKI, someone can encrypt a payload so that it can only be decrypted by the private key holder. I see how to do signatures and HMACs with this scheme, but not encryption.