Time, Randomness, and a $100k Prize to Forever Change Blockchain (amazon.com)
115 points by Supranational 73 days ago | 61 comments



I recently typed a transcript of a talk about verifiable delay functions: http://diyhpl.us/wiki/transcripts/verifiable-delay-functions...


Hey All!

We're one of the sponsors of this competition. Please let us know if you have any questions!


What is the algorithm for verifying the VDF output?

Specifically, I compute y = x^(2^N) mod M using N modular squarings, how do you verify the result?


One efficient verification scheme is by Benjamin Wesolowski (see academic paper [here](https://eprint.iacr.org/2018/623.pdf)).

The scheme is surprisingly simple and fits on just one slide. See https://www.youtube.com/watch?v=zqL_cMlPjOI&feature=youtu.be...
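For intuition, here's a toy Python sketch of Wesolowski-style verification, under loudly-stated assumptions: a tiny prime modulus stands in for a real 2048-bit RSA modulus of unknown factorization, and the challenge prime l is fixed rather than derived from a Fiat-Shamir hash of the input/output. Function names are mine.

```python
def eval_vdf(x, T, N):
    """Evaluate y = x^(2^T) mod N with T sequential modular squarings."""
    y = x % N
    for _ in range(T):
        y = y * y % N
    return y

def prove(x, T, N, l):
    """Prover's proof: pi = x^floor(2^T / l) mod N.
    (Toy version: real provers compute the quotient exponent on the fly
    instead of materializing the astronomically large 2^T.)"""
    return pow(x, pow(2, T) // l, N)

def verify(x, y, T, N, l, pi):
    """Verifier: O(log T) work instead of T sequential squarings.
    Checks pi^l * x^(2^T mod l) == y (mod N), which holds because
    2^T = l * floor(2^T / l) + (2^T mod l)."""
    r = pow(2, T, l)
    return (pow(pi, l, N) * pow(x, r, N)) % N == y % N
```

In the real scheme, l is a random prime chosen only after y is committed, so a cheating prover can't precompute a bogus pi that passes the check.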



Heads up -- your link is broken. I don't think markdown is supported :)


Have you considered the impact of superconducting circuitry? It's very niche and few people have heard of it, but it's not science fiction[1] and it has a significant speed advantage.

[1] https://www.hypres.com/ seems to sell them


chia.net is also running VDF competitions with $100,000 prize. They recently announced the results for round 2.

https://www.chia.net/2019/07/18/chia-vdf-competition-round-2...


Can someone explain why proof-of-stake is not inherently monopolistic? It seems like a system where the more of a currency you possess, the more you can obtain, which is obviously unstable, right?


In current Proof-of-work (PoW) blockchains, the more mining hardware you have, the more coins you get, and the more mining hardware you can buy. In this sense, the same exact monopolistic structure you mention exists in both PoW and Proof-of-stake (PoS) blockchains.

However, PoS is a lot less monopolistic than PoW for other reasons:

1. Massively reduced economies of scale. Pretty much all PoW mining happens in mining farms for a reason: it's a lot cheaper, per unit of revenue, to move all your mining to one big warehouse. This makes it pretty much impossible for a normal person to contribute to PoW in any meaningful way -- good luck running a competitive mining farm out of your house. In PoS, however, 10x more coins will get you exactly 10x more rewards, and costs don't fall at all, leading to weaker economies of scale and thus less pressure towards a monopoly.

2. Ability for the protocol to apply punishments as well as rewards. In PoW, because the miners aren't tied to any address, there is no real mechanism for punishing them. In PoS, because the "mining power" (aka coins) is in the protocol, bad behavior can be punished. This has two benefits from the perspective of monopolistic market structure. First, it makes it so 51% attacks are costly in protocol. If evidence is shown that you 51% attacked, your coins can be "slashed," or taken away. Notably, in PoW, a 51% attack is free to the attacker if successful. Second, collective punishments can be applied to miners (validators in PoS language). This makes it so miners can be punished if they censor other miners -- again leading to less centralization/monopoly pressure.

Please disagree with (or ask me to clarify) anything that seems stupid/unclear/incorrect in the above. I'd love to engage more!


So an Oligarchy that's less monopolistic?

Where do the coins come from? Has there been any other mechanisms for minting other than minting the supply from day 1, or burning computational capital/energy in the form of PoW mining?


PoW is currently an oligarchy, for what it's worth. I was using the term monopolistic in a loose sense (i.e. tending towards large players controlling the market [i.e. "centralizing"]).

In most PoS protocols, coins come from either PoW mining before a transition to PoS, or from an initial coin offering. Neither are great solutions IMO, but PoW doesn't really distribute coins very well either.


I see what you mean about how proof-of-work can be monopolistic as well, but I guess there are at least some extra steps involved. You need to take your earned coins, sell them for computers, and then turn those computers toward mining (as opposed to any other productive use). In proof-of-stake, simply by virtue of holding coins you obtain more coins. This also comes with an opportunity cost, of course, but not to the same extent I think.

To respond to (1) I think I don't understand the mechanism of proof-of-stake well enough. I think (correct me if I'm wrong) that the reason pooling is incentivized in proof-of-work is because having more mining power doesn't give you more coins as much as it increases the likelihood of you receiving a coin. In other words, it's the statistical nature of proof-of-work that makes pooling necessary. For proof-of-stake to not incentivize pooling I think it would need to not distribute coins statistically, but more like how stock dividends are issued today.

That being said I'm still not sure that the incentive for miners to pool is the same as a pressure on the network to consolidate holdings. I'll need to reflect on that.

Regarding (2) I'm not sure I understand why or how you would punish participants. ie. how would the network differentiate a 51% attack from a "legitimate" use of 51% to steer the network, and what could it do as a result?


PoS incentivizes hoarding. There is no such incentive with PoW.


PoW - you have to make actual effort (within the realm of depreciating mining hardware) to become a part of the network. Bad for whales.

PoS - you can just buy your way in, and stay in. Great for whales.

Whales should be kept at the currency value layer, not in the network layer, because whales are greedy and dangerous.


Actually you have to make an effort in Ethereum's PoS. For each 32 ETH of stake, you have to run a validator node.


While proof of stake does have a lot of advantages over PoW, it cannot handle more than 1/3 of the voting power acting incorrectly. This is a well known hard bound on BFT algorithms.

In terms of implementing PoS, most BFT algorithms have traditionally scaled poorly (HotStuff, which is linear in communication cost relative to the number of nodes/validators, is very recent). In addition, validators going offline, even accidentally, hurts the network, and slashing-based PoS schemes often have uptime requirements. Validators also need the infrastructure to stay hidden and avoid being DDoSed.

tl;dr, Proof of stake is still decently centralized, but a lot of work is being done to scale it.


The 1/3 bound is for general byzantine behavior. If you're willing to get more specific about the type of behavior you want to tolerate, the bounds are more flexible[1] (this link is from the HotStuff people, actually!). For example, a node could tolerate no more than "20% Byzantine faults and 30% alive-but-corrupt faults". In turn, this allows different clients to make different assumptions (one huge advantage PoW has over traditional BFT protocols).

Also, w.r.t. linear communication, CBC Casper[2] has the ability to come to consensus on a blockchain with, in the limit (and in the optimal case), a single message per block that is finalized. It uses essentially the same form of pipelining that HotStuff does, but to the extreme (at the cost of a much longer time for blocks to be finalized). Full disclosure: I work on CBC Casper, so take what I'm saying with a grain of salt.

[1] https://dahliamalkhi.wordpress.com/2019/04/24/flexible-byzan...

[2] https://github.com/cbc-casper/cbc-casper-paper/blob/master/c...


Good point with the flexible BFT paper. I didn't realize casper pipelines, I'll have to look into it. I'm working on IBFT (fixed to be more pbft like).


> While proof of stake does have a lot of advantages over PoW, it cannot handle more than 1/3 of the voiting power acting incorrectly.

The 1/3 bound is without any synchronicity assumptions; with strong synchronicity assumptions you can get to 1/2. This seems strictly better than PoW (no liveness at all in an asynchronous network, and 1/2 with synchronicity assumptions).


Proof of work is like labor. More work, more coins.


You could say the same about any interest or income producing asset. USD, real estate, etc.


I think there may be a difference of degree here. Combine an interest-based monetary system with a high rate of deflation and you will have the fastest, most boring game of Monopoly ever.


Wouldn't this simply be a matter of designing hardware that does these operations on chip as a software accessible operation?


The ultimate goal is indeed to build an ultra-fast ASIC. There's a $1M ASIC competition planned for 2020. This $100K FPGA competition is a "warm up" to the ASIC competition.

A bit more information here https://www.youtube.com/watch?v=9-77V1IareQ&feature=youtu.be...


From the linked article:

> The primary infrastructure for the contest will be Amazon EC2 F1 instances

So no, you need to design it on the F1 FPGA instance.


It's a matter of programming an FPGA to be faster than anyone else's FPGA when handed the task from software. So technically, yes, it's simply a matter of designing hardware. The part that's not so simple is designing hardware given the same FPGA as every other competitor and making it faster than all of them while returning only correct answers.


It sounds like that part is already done; now they're optimizing the hardware design.


This is rad. You can actually build a fairly secure VDF with a sequential hash function using Intel's SHA-256 instructions, and verify the output across all the available SIMD lanes.


Really pleased to finally see a possible alternative to the atrociously wasteful proof-of-work algorithms. Here's hoping bitcoin will switch, too.


The whole point of PoW is to be expensive. The security of the network is a derivative of the cost of running the nodes.


PoS is also expensive but the expense is in opportunity cost not in capex and energy. There's clean expensive and dirty expensive.


If it is embarrassingly parallelizable, then why would the total amount of energy (megawatts) invested in running the function not grow until it equals the economic value of the outputs, regardless of the particulars of the algorithm?

And if it's not embarrassingly parallelizable, first, tell me why, and second, what's all this business about optimizing the algorithm in the first place?


> if it's not embarrassingly parallelizable, first, tell me why

It's a cryptographic assumption that repeated squaring in an RSA group is "inherently sequential". This assumption was first made in 1996 by Rivest, Shamir and Wagner for timelock puzzles. See https://people.csail.mit.edu/rivest/pubs/RSW96.pdf

> what's all this business about optimizing the algorithm

To link sequential computation with physical time we need the general public to evaluate the VDF at roughly the same speed as an attacker. By "roughly the same speed" we mean that the attacker (even with a huge budget) cannot build hardware that runs, say, 10x or 100x faster than a publicly available ASIC.


The computation described (compute x^(2^t) mod N) is designed to not be parallelizable.


Interesting... so let's assume everyone can compute it in about the same amount of time, how does the "lottery" work?

OK, I found this fairly informative [1]. It's designed to be an entropy source feeding into a proof-of-stake algorithm.

If you can't run it "that much" faster than everyone else, it's more likely your result is not manipulated to steer an unfair portion of the block reward your way.

I haven't read much about any of this. The point that seems to stick out most is the idea that "everyone" submits some amount of data, versus just the first to reach the difficulty target submitting.

Once you have that concept, then you need time gates to control how quickly the rounds happen, and VDF can be used to dictate the clock rate.

A conceptual piece I must still be missing is how all the submissions are aggregated, and why many/most can't just be ignored. Ah, fucking blockchain catnip.

[1] - https://ethresear.ch/t/verifiable-delay-functions-and-attack...


ELI5 version: 100 people, one by one, (re)roll dice placed in a dark room. After the last person, the lights turn on, revealing a fair random number. The Verifiable Delay Function (VDF) ensures the lights aren't turned on early.

For a video explanation of the randomness mechanism see https://www.youtube.com/watch?v=zqL_cMlPjOI&feature=youtu.be
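The dice-in-a-dark-room idea can be sketched in a few lines of Python. This is a hypothetical toy, not the actual Ethereum design: contributions are hash-combined, then the reveal is delayed by sequential squarings; the modulus and delay below are made-up illustration values.

```python
import hashlib

def delayed_beacon(contributions, t, n):
    """Mix everyone's contribution, then delay the reveal with t
    sequential squarings -- the 'lights stay off' until the delay
    elapses, so the last contributor can't peek at the combined value
    and re-roll to bias the result."""
    seed = hashlib.sha256(b"".join(contributions)).digest()
    x = int.from_bytes(seed, "big") % n
    for _ in range(t):
        x = x * x % n
    return x
```

The output is deterministic given everyone's inputs, which is what lets anyone verify the beacon after the fact.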


But squaring numbers is parallelizable. You pick an exponent you want and take a modulus only later. Iterative squaring is also trivial if the exponent is a power of two. Calculating x^(2^t) mod n should take no time on any computer.

I think the rest of the puzzle is fairly hard to guess.


Iterative squaring is not parallelizable - to compute x^4 as (x^2)^2 you have to compute t = x^2 first, before starting the computation of t^2. Calculating x^(2^t) mod n takes t squarings.
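The dependency chain is easy to see in code (a minimal sketch; names are mine):

```python
def iterated_squaring(x, t, n):
    """Each squaring consumes the previous result, so the t steps form a
    strict sequential chain: more cores don't help, only a faster single
    squaring circuit does."""
    acc = x % n
    for _ in range(t):
        acc = acc * acc % n
    return acc
```

And "take the modulus only later" doesn't work either: without reducing mod n at every step, x^(2^t) has on the order of 2^t digits, which is astronomically large for the values of t used in practice.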


If the solution could save hundreds of millions of dollars and is of import to Amazon, why would anyone give it away for 100k?


Bingo. The absurdity of this: let's create something that, by their own admission, is worth hundreds of millions, for $100k and some street cred. Also, a VDF is probably worth much more than that if it can be done effectively.


VDFs could save hundreds of millions of dollars (this is exaggerated but let's roll with it) but they have already been invented. It's not clear how much a slightly faster implementation of their VDF is worth; maybe it's less than $100K. The Chia VDF contest had a few entries so this is not a crazy concept.


Doesn't this assume that Blockchains are useful?


It is sponsored by organizations that find blockchains useful and believe they can have utility for others.


...well, Amazon just likes selling shovels to miners...


I'm not sure that's really what's happening in this case; the contest seems to want people to create new hardware, so this wouldn't use Amazon infrastructure.

It is a partial subsidy to the winner. So they aren't really selling shovels they're paying people to develop better shovels.


Amazon wants people to create bitstreams that run on FPGAs rented from Amazon. They're paying people to develop more demand for their shovels.


Who at Amazon promoted this, and does management know that this person is promoting their investment in Ethereum, using the company brand and resources...


Amazon already invested in Ethereum https://aws.amazon.com/managed-blockchain/


If there’s a reward for computing a VDF faster than everyone else, why won’t the amount of wasted resources, in the form of e.g. human capital dedicated to computing faster VDFs, not be equivalent to the economic waste if the verification scheme used is proof of work? The article claims that proof of work is costing Ethereum hundreds of millions of dollars per year, but this VDF competition is also wasting millions of dollars in human capital to compute a fast VDF before there’s even a running blockchain using it (i.e. before there’s even a real non-synthetic incentive to break the system). Given that we need millions of dollars of waste just to launch the system safely, why wouldn’t we expect the wasted human capital dedicated to computing faster VDFs to increase to hundreds of millions once an actual attackable system is out in the wild, same as proof of work?

Moreover, it seems very unclear that wasting human capital is less economically bad than wasting the same “value” in raw energy, particularly if a lot of that energy is from renewable sources anyway.


> If there’s a reward for computing a VDF faster than everyone else

There's no direct reward for being a bit faster than everyone else. If you are much faster (say, 100x faster than the general public) then you may be able to bias the randomness, and therefore manipulate things like lotteries built on Ethereum.

> The article claims that proof of work is costing Ethereum hundreds of millions of dollars per year, but this VDF competition is also wasting millions of dollars in human capital

It's about orders of magnitude. Ethereum burns about $0.5B per year. The VDF project is about $15m, and the ASIC should last about 10 years.


From what you’re saying it seems like an “incentive cliff” at 100x the power of whatever the competition produces is being relied upon for the security of the currency, meaning there exists a strong incentive for someone to waste human capital computing a faster VDF if he thinks he can “hurdle” the incentive cliff of 100x the power of whatever the competition produces.

Given the competition is only a ~$15m investment in a group of hobbyists, and given we’ve already seen order of magnitude improvements over and over again with proof of work in Bitcoin once the incentives are there, why is it assumed that someone won’t mount a long range attack to break the system by engineering a 100x faster VDF? Moreover, how does the system recover if that were to happen?


> how does the system recover if that were to happen?

The Ethereum 2.0 consensus layer gracefully falls back to a slightly biasable form of randomness called RANDAO. This is merely a "weakening" of the consensus, not a breakage. Applications built on Ethereum 2.0 (such as lotteries) may indeed break with RANDAO biasability.

> why is it assumed that someone won’t mount a long range attack to break the system by engineering a 100x faster VDF?

One of the things we are doing is trying to prove lower bounds on the circuit depth of modular squaring. (Ryan Williams is working on such lower bounds right now.) On the hardware engineering front, getting a 100x speedup is extremely difficult. Improvements to sequential speeds in traditional ASICs have tapered off at around 5% per year (see for example https://github.com/preshing/analyze-spec-benchmarks). And if possible, being able to increase the speed of multiplication by 100x would be an amazing development for humanity (multiplication is arguably the key basic operation in computing).

> Given the competition is only a ~$15m investment in a group of hobbyists

We can afford a few experts with $15m :)

> we’ve already seen order of magnitude improvements over and over again with proof of work in Bitcoin

Keep in mind that Bitcoin is an energy consumption game, not a latency game. Latency improvements do not follow Moore's law and other related laws.


There is quite a difference between a steady stream of income, and some far off reward which you may or may not be able to achieve.

Nearly anyone can do the former, but only a few have the luxury to attempt the latter.


Modern ASIC farms require a lot of up-front investment in developing hardware. All it takes is one person who thinks he can hurdle the cliff, and he can raise a lot of money to do it if the reward is there, not unlike the first groups who raised funds to develop ASICs for Bitcoin (which were orders of magnitude faster than the hardware available at the time). We know Bitcoin can cope with a 100x more powerful ASIC coming online -- can these systems cope with a 100x faster VDF?


Well, it does not create CO2, for starters. Also, several million is less than several hundred million -- by 100x.


But it’s several million just to get it started. If the reward for pulling off an attack is the same as in proof of work systems, why wouldn’t the waste wind up equaling the reward (ie hundreds of millions in waste if it’s used to secure something like Ethereum or Bitcoin)?


VDFs don't replace proof of work. In essence, PoW is a lottery where everyone gets their own random lottery ticket and the lowest number wins. VDFs, instead, allow a large group of people to all share a SINGLE ticket, which simply spits out the ID of the winner.

This is conceptually a completely different approach, and VDFs therefore aren't susceptible to the same problems as PoW (though they have other, unrelated problems, which this contest is trying to mitigate).


Possibly. But the CO2 is the big deal. I don't have much issue with "stake"-based mining, i.e. if it's just burning abstract money, instead of burning physical commodities.


To me this is like arguing that something like high-frequency trading is OK because it doesn’t generate CO2. The reality is that hundreds of millions in wasted human capital, particularly the best mathematical minds on the planet, is extremely costly. That much human capital, if allocated correctly, could likely do a lot to offset climate change, for example, among many other positive things.


Sure, go ahead and start a new project. Money's cheap with zero interest rates, funding startups has been ridiculously easy for the past 10 years.



