We're one of the sponsors of this competition. Please let us know if you have any questions!
Specifically, if I compute y = x^(2^N) mod M using N modular squarings, how do you verify the result?
The scheme is surprisingly simple and fits on just one slide. See https://www.youtube.com/watch?v=zqL_cMlPjOI&feature=youtu.be...
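For anyone who wants to poke at it, here is a minimal sketch (Python, toy parameters only) of the evaluation side, i.e. the thing being verified; the verification scheme itself is what the linked slide covers:

    # N sequential modular squarings: y = x^(2^N) mod M.
    # Each step depends on the previous one, so the loop cannot be parallelized.
    def vdf_eval(x, N, M):
        y = x % M
        for _ in range(N):
            y = (y * y) % M
        return y

    # Toy values -- a real deployment would use a large RSA modulus and a huge N.
    print(vdf_eval(x=5, N=1000, M=2**61 - 1))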
 https://www.hypres.com/ seems to sell them
However, proof of stake (PoS) is a lot less monopolistic than PoW for other reasons:
1. Massively reduced economies of scale. Pretty much all PoW mining happens in mining farms for a reason: it's a lot cheaper, per unit of revenue, to move all your mining into one big warehouse. This makes it pretty much impossible for a normal person to contribute to PoW in any meaningful way -- good luck running a competitive mining farm out of your house. In PoS, however, 10x more coins get you exactly 10x more rewards, and per-coin costs don't fall at all, leading to much weaker economies of scale and therefore less pressure toward monopoly.
2. Ability for the protocol to apply punishments as well as rewards. In PoW, because miners aren't tied to any address, there is no real mechanism for punishing them. In PoS, because the "mining power" (i.e. coins) lives inside the protocol, bad behavior can be punished. This has two benefits from the perspective of monopolistic market structure. First, it makes 51% attacks costly in-protocol: if evidence is shown that you 51% attacked, your coins can be "slashed," or taken away. Notably, in PoW, a successful 51% attack is free to the attacker. Second, collective punishments can be applied to miners (validators, in PoS language). This means validators can be punished if they censor other validators -- again reducing centralization/monopoly pressure.
Please do disagree with (or ask me to clarify) anything that seems stupid/unclear/incorrect in the above. I'd love to engage more!
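To make the slashing idea in (2) a bit more concrete, here is a toy sketch (my own illustration, not any production client's logic) of punishing a validator who equivocates, i.e. signs two conflicting blocks at the same height:

    # Toy slashing check: a validator who signs two different blocks at the
    # same height loses (some fraction of) their stake. Hypothetical data
    # layout, not taken from any real client.
    stakes = {"validator_a": 1000, "validator_b": 1000}

    def slash_if_equivocated(vote1, vote2, stakes, penalty_fraction=1.0):
        same_validator = vote1["validator"] == vote2["validator"]
        same_height = vote1["height"] == vote2["height"]
        conflicting = vote1["block_hash"] != vote2["block_hash"]
        if same_validator and same_height and conflicting:
            v = vote1["validator"]
            stakes[v] -= int(stakes[v] * penalty_fraction)  # burn the stake

    slash_if_equivocated(
        {"validator": "validator_a", "height": 42, "block_hash": "0xaaa"},
        {"validator": "validator_a", "height": 42, "block_hash": "0xbbb"},
        stakes,
    )
    print(stakes)  # validator_a's stake is gone; PoW has no analogous lever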
Where do the coins come from? Have there been any other mechanisms for minting, other than minting the whole supply on day 1 or burning computational capital/energy in the form of PoW mining?
In most PoS protocols, coins come from either PoW mining before a transition to PoS, or from an initial coin offering. Neither are great solutions IMO, but PoW doesn't really distribute coins very well either.
To respond to (1): I think I don't understand the mechanism of proof-of-stake well enough. I think (correct me if I'm wrong) that the reason pooling is incentivized in proof-of-work is that having more mining power doesn't give you more coins so much as it increases the likelihood of you receiving a coin. In other words, it's the statistical nature of proof-of-work that makes pooling necessary. For proof-of-stake not to incentivize pooling, I think it would need to not distribute coins statistically, but more like how stock dividends are issued today.
That being said I'm still not sure that the incentive for miners to pool is the same as a pressure on the network to consolidate holdings. I'll need to reflect on that.
Regarding (2), I'm not sure I understand why or how you would punish participants, i.e. how would the network differentiate a 51% attack from a "legitimate" use of 51% to steer the network, and what could it do as a result?
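To check my own intuition on (1), here's a quick toy simulation (my own oversimplified model, not how either protocol literally pays out): a small PoW miner has the same expected reward solo or in a pool, but very different variance, and a PoS-style pro-rata payout looks like the pooled case by construction:

    import random

    random.seed(0)
    BLOCKS = 100_000     # blocks simulated
    MY_SHARE = 0.001     # we control 0.1% of total hashrate / stake
    REWARD = 1.0         # reward per block

    # Solo PoW: win the whole block reward with probability MY_SHARE per block.
    solo = [REWARD if random.random() < MY_SHARE else 0.0 for _ in range(BLOCKS)]

    # Pooled PoW (or a PoS-style pro-rata payout): MY_SHARE of every block reward.
    pro_rata = [REWARD * MY_SHARE for _ in range(BLOCKS)]

    def mean_and_variance(xs):
        mean = sum(xs) / len(xs)
        return mean, sum((x - mean) ** 2 for x in xs) / len(xs)

    print("solo    ", mean_and_variance(solo))      # same expected reward...
    print("pro rata", mean_and_variance(pro_rata))  # ...but zero variance

The expectation is identical either way; pooling only smooths the variance, which is (I think) why pools are practically mandatory in PoW for small miners.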
PoS - you can just buy your way in, and stay in. Great for whales.
Whales should be kept at the currency value layer, not in the network layer, because whales are greedy and dangerous.
In terms of implementing PoS, most BFT algorithms have traditionally scaled poorly (HotStuff, which has communication cost linear in the number of nodes/validators, is very recent). In addition, validators going offline, even accidentally, hurts the network, and slashing PoS schemes often have uptime requirements. Validators also need the infrastructure to stay hidden and not get DDoS'd.
tl;dr, Proof of stake is still decently centralized, but a lot of work is being done to scale it.
Also, w.r.t. linear communication, CBC Casper has the ability to come to consensus on a blockchain with, in the limit (and in the optimal case), a single message per finalized block. It uses essentially the same form of pipelining that HotStuff does, but taken to the extreme (at the cost of a much longer time to finalize blocks). Full disclosure: I work on CBC Casper, so take what I'm saying with a grain of salt.
The 1/3 bound is without any synchrony assumptions; with strong synchrony assumptions you can get to 1/2. This seems strictly better than PoW (no liveness at all in an asynchronous network, and 1/2 with synchrony assumptions).
A bit more information here https://www.youtube.com/watch?v=9-77V1IareQ&feature=youtu.be...
> The primary infrastructure for the contest will be Amazon EC2 F1 instances
So no, you need to design it for the F1 FPGA instances.
And if it's not embarrassingly parallelizable, first, tell me why, and second, what's all this business about optimizing the algorithm in the first place?
It's a cryptographic assumption that repeated squaring in an RSA group is "inherently sequential". This assumption was first made in 1996 by Rivest, Shamir and Wagner for timelock puzzles. See https://people.csail.mit.edu/rivest/pubs/RSW96.pdf
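To illustrate with toy numbers (small primes for readability only, nothing like real parameters): anyone who doesn't know the factorization of M has to grind through the T squarings one at a time, while the puzzle setter from RSW96 can shortcut the exponent via phi(M):

    # Toy RSW96-style timelock puzzle (illustrative numbers, not real parameters).
    p, q = 1000003, 1000033        # secret primes known only to the puzzle setter
    M = p * q
    phi = (p - 1) * (q - 1)
    x, T = 7, 100_000              # base and number of sequential squarings

    # Public evaluation: T sequential squarings, no known way to parallelize.
    y = x % M
    for _ in range(T):
        y = (y * y) % M

    # Setter's shortcut: reduce the exponent 2^T modulo phi(M) first.
    assert y == pow(x, pow(2, T, phi), M)
    print(y)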
> what's all this business about optimizing the algorithm
To link sequential computation with physical time we need the general public to evaluate the VDF at roughly the same speed as an attacker. By "roughly the same speed" we mean that the attacker (even with a huge budget) cannot build hardware that runs, say, 10x or 100x faster than a publicly available ASIC.
OK, I found this fairly informative. It's designed to be an entropy source feeding into a proof-of-stake algorithm.
If you can't run it "that much" faster than everyone else, it's more likely your result is not manipulated to steer an unfair portion of the block reward your way.
I haven't read much about any of this. The point that seems to stick out most is the idea that "everyone" submits some amount of data, versus just the first to reach the difficulty target submitting.
Once you have that concept, then you need time gates to control how quickly the rounds happen, and VDF can be used to dictate the clock rate.
A conceptual piece I must still be missing is how all the submissions are aggregated, and why many/most can't just be ignored. Ah, fucking blockchain catnip.
 - https://ethresear.ch/t/verifiable-delay-functions-and-attack...
For a video explanation of the randomness mechanism see https://www.youtube.com/watch?v=zqL_cMlPjOI&feature=youtu.be
I think the rest of the puzzle is fairly hard to guess.
It is a partial subsidy to the winner. So they aren't really selling shovels; they're paying people to develop better shovels.
Moreover, it seems very unclear that wasting human capital is less economically bad than wasting the same "value" in raw energy, particularly if a lot of that energy comes from renewable sources anyway.
There's no direct reward for being a bit faster than everyone else. If you are much faster (say, 100x faster than the general public) then you may be able to bias the randomness, and therefore manipulate things like lotteries built on Ethereum.
> The article claims that proof of work is costing Ethereum hundreds of millions of dollars per year, but this VDF competition is also wasting millions of dollars in human capital
It's about orders of magnitude. Ethereum burns about $0.5B per year on PoW. The VDF project is about $15m, and the ASIC should last about 10 years, so it works out to roughly $1.5m per year against ~$500m per year of burn.
Given the competition is only a ~$15m investment in a group of hobbyists, and given we’ve already seen order of magnitude improvements over and over again with proof of work in Bitcoin once the incentives are there, why is it assumed that someone won’t mount a long range attack to break the system by engineering a 100x faster VDF? Moreover, how does the system recover if that were to happen?
The Ethereum 2.0 consensus layer gracefully falls back to a slightly biasable form of randomness called RANDAO. This is merely a "weakening" of the consensus, not a breakage. Applications built on Ethereum 2.0 (such as lotteries) may indeed break under RANDAO biasability.
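For intuition, here is a heavily simplified sketch of the commit-reveal idea behind RANDAO-style randomness (my own toy version, not the actual Ethereum 2.0 implementation): everyone commits to a secret, everyone reveals, and the reveals are mixed together; the "slightly biasable" part is that the last revealer can withhold if the result doesn't suit them.

    import hashlib, secrets

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    # Commit phase: each participant publishes a hash of a secret value.
    secret_by_participant = {name: secrets.token_bytes(32) for name in ("a", "b", "c")}
    commitments = {name: h(s) for name, s in secret_by_participant.items()}

    # Reveal phase: secrets are checked against the commitments and XORed together.
    mix = bytes(32)
    for name, s in secret_by_participant.items():
        assert h(s) == commitments[name], "reveal does not match commitment"
        mix = bytes(a ^ b for a, b in zip(mix, h(s)))

    print(mix.hex())
    # The last participant to reveal already knows everyone else's contribution,
    # so they can bias the outcome by choosing not to reveal at all.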
> why is it assumed that someone won’t mount a long range attack to break the system by engineering a 100x faster VDF?
One of the things we are doing is trying to prove lower bounds on the circuit depth of modular squaring. (Ryan Williams is working on such lower bounds right now.) On the hardware engineering front, getting a 100x speedup is extremely difficult. Improvements to sequential speeds in traditional ASICs have tapered off at around 5% per year (see for example https://github.com/preshing/analyze-spec-benchmarks); at that rate a decade of improvement compounds to roughly 1.05^10 ≈ 1.6x, nowhere near 100x. And if it were possible, speeding up multiplication by 100x would be an amazing development for humanity (multiplication is arguably the key basic operation in computing).
> Given the competition is only a ~$15m investment in a group of hobbyists
We can afford a few experts with $15m :)
> we’ve already seen order of magnitude improvements over and over again with proof of work in Bitcoin
Keep in mind that Bitcoin is an energy consumption game, not a latency game. Latency improvements do not follow Moore's law or related scaling laws.
Nearly anyone can do the former, but only a few have the luxury to attempt the latter.
This is conceptually a completely different approach, so VDFs aren't susceptible to the same problems as PoW (though they have other, unrelated problems, which this contest is trying to mitigate).