What do you mean by "allowed to"? In a PoW system, the PoW is a distributed timekeeping device. That's the actual operation the PoW performs, and the distributed timekeeping is then what you can build a blockchain on. PoS doesn't do distributed timekeeping. If you sign a block now, then go back later and sign a different block with the same key, there's no distributed clock that can be used to prove which was actually signed first.
The obvious argument here is "the one that was signed first will then have other blocks built on top of it". But since there's no PoW, building a parallel blockchain is trivial to compute, the only restriction is being able to produce something that's actually convincing enough. That and having people say "well, I was there at the time and I saw a different block than this", but that's just relying on authority rather than something that can be proven within the system.
Basically, PoS requires something external to the system to prove that history hasn't been changed. PoW technically does too, but what it relies on is "physics" and "provable historical fact" (i.e. approximate computing power available in the past).
You certainly can build a system that depends on something external to itself to ensure its consistency, but this challenges its claim to being "decentralized" and limits the amount of trust you can place in the system (and consequently the power of what it can do).
The clock issue is an excellent point, but Ethereum's PoS has a nano-scale PoW mechanism for this exact problem: look at VDFs, "Verifiable Delay Functions" [1].
In short:
If you take the pbkdf2 key derivation function: its job is to slow down hashing a thousandfold or so, so that hashing an entire search space becomes impractical. You give it your secret as input, and it gives you a hash in, let's say, 1 second. You'll have to spend that time again to recompute the hash. With a faster machine, you can compute it in maybe 100ms, but still, there is a limit on how fast you can obtain the result.
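A minimal illustration of that slow-hash property with Python's stdlib (the iteration count and inputs are illustrative, not a recommendation):

```python
import hashlib
import time

secret = b"hunter2"
salt = b"fixed-salt"

start = time.perf_counter()
# 200_000 iterations of HMAC-SHA256 -- deliberately slow by design
digest = hashlib.pbkdf2_hmac("sha256", secret, salt, 200_000)
elapsed = time.perf_counter() - start

# Recomputing costs roughly the same time again; there is no shortcut,
# only faster hardware, and even that has limits.
print(len(digest), round(elapsed, 3))
```

Doubling the iteration count roughly doubles the time, which is the knob you tune against attacker hardware.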
Now change the cryptographic properties of pbkdf2 so that you can go back from the output to the input in constant time, i.e. you can find the secret from the hash in O(1). It then becomes useless for actual secrets, but you now have an instantly verifiable proof that a certain amount of time (or serial computation) had to pass to get from the input to the output. Plug the previous block hash in as the input, embed the result in the next block, and you have your clock, based on physics and provable historical facts.
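As a toy sketch of the "clock" part (hedged: a plain hash chain is inherently sequential like a VDF, but unlike a real VDF it has no fast verification, since checking it means redoing the work):

```python
import hashlib

def sequential_hash(seed: bytes, steps: int) -> bytes:
    """Iterate SHA-256 `steps` times; each step needs the previous
    output, so the work is serial and cannot be parallelized."""
    h = seed
    for _ in range(steps):
        h = hashlib.sha256(h).digest()
    return h

# Stand-in for the previous block hash
prev_block_hash = b"\x00" * 32

# Embedding this in the next block commits to ~100_000 serial hash
# evaluations having occurred between the two blocks.
out = sequential_hash(prev_block_hash, 100_000)
```

Real VDF constructions (e.g. Wesolowski's or Pietrzak's) attach a short proof so verification is near-instant instead of replaying the whole chain.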
The site isn’t particularly accessible for a quick discussion so I appreciate your explanation, thank you.
However, I’m not sure I understand how this is supposed to help. Proving that a few seconds passed just slows down block generation a little, but this cannot be a significant barrier to block generation or else you just have a full PoW system again. And if it’s not a significant barrier then it’s not clear to me what this is supposed to do, beyond preventing me from generating and signing a new block within milliseconds of some event happening.
But since the “nano scale” PoW doesn’t define the rate of block generation, it just establishes a lower bound, it feels like it’s just a speed bump for anyone trying to attack the system. If it only takes 10 seconds to rebuild the last 100 minutes worth of blocks, then it doesn’t establish a universal clock and therefore cannot prove which block came first.
With VDF you cannot rebuild the 100 minutes worth of blocks in less than 100 minutes, because with the VDF logging, you just proved that you would need at least 100 minutes to go from block n to n+(100 minutes). You can check in 10 seconds, but can only produce in 100 minutes, just like you can check in an instant that a bitcoin block starts with enough zeroes. So it defines the rate of generating blocks.
Of course, nothing can stop anyone from creating parallel 100-minute long branches if that was the only thing, as, unlike PoW, it does not cost anything (except time) to create branches.
So you still need a consensus mechanism, a way to, as an agent of the network, decide what is the right branch. On bitcoin, it is very simple: go to the longest chain, it's where the majority of mining power went, so that is clearly the consensus (with 1 joule = 1 vote).
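The longest-chain rule is simple enough to sketch in a few lines (chain length here stands in for total accumulated difficulty, which is what Bitcoin actually compares):

```python
def choose_chain(chains):
    # Bitcoin follows the chain with the most accumulated work; for
    # chains at constant difficulty, that is simply the longest one.
    return max(chains, key=len)

honest = ["genesis", "b1", "b2", "b3"]
attacker = ["genesis", "b1", "x2"]
assert choose_chain([honest, attacker]) == honest
```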
On ethereum, it's much more complex, involving promises with money at stake locked somewhere, so that anyone can detect cheaters, automatically unlock and take their money as punishment, and reward the whistle-blower with it. So, unless everyone is foolish enough to watch their money seized by the network, it does not happen.
The exact way the correct branch is decided is by random election of one staker, where the randomness is provably random. After all, using a VDF, you can now prove that its output won't be known until x seconds have passed, if you put the most recent block hash as input. So during that time, you can agree on a fair pseudo-random election algorithm that will take this VDF output as a seed when it becomes available.
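A hedged sketch of such an election (the names and the stake-weighted draw are illustrative, not Ethereum's actual spec; the point is that once the VDF output is public, every node reruns the same deterministic draw and agrees on the winner):

```python
import hashlib
import random

def elect_staker(vdf_output: bytes, stakes: dict) -> str:
    # Deterministic seed derived from the VDF output, which was
    # unpredictable until the delay elapsed
    seed = int.from_bytes(hashlib.sha256(vdf_output).digest(), "big")
    rng = random.Random(seed)
    names = sorted(stakes)
    weights = [stakes[n] for n in names]
    # Stake-weighted draw: same seed, same winner, on every node
    return rng.choices(names, weights=weights, k=1)[0]

stakes = {"alice": 32, "bob": 32, "carol": 64}
winner = elect_staker(b"hypothetical-vdf-output", stakes)
```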
Ok I just looked at https://medium.com/@djrtwo/vdfs-are-not-proof-of-work-91ba3b... for an explanation. VDF is proof of work. It's just proof of sequential work. It does seem plausible that a VDF would significantly reduce the amount of computing hardware being used in generating blocks, but it fundamentally is still a proof-of-work scheme, just one that requires faster processors rather than more nodes if you want to speed it up.
The thing though is that this doesn't prove that X seconds have passed. It proves that X seconds have passed on whatever baseline hardware has been used to calibrate it. I don't know who actually computes the VDF in the proposed proof-of-stake schemes, though I would assume it's "whoever is proposing a block" (is this the same as the staker? Does this mean every single staker is picking a block and computing their own VDF, meaning everyone is still burning CPU?). And this means the VDF can only establish a minimum CPU requirement. It can say "X seconds have passed on the minimum hardware we're requiring at the moment", but anyone with faster hardware can still compute it faster.
And also, because this PoW scheme cannot require more than X seconds for any participant to compute, an attacker that starts computing their alternative blockchain at the same time as the block they want to replace faces no difficulty. All this does is interfere with the ability to decide after the fact that you want to attempt to replace history. And even then, if you have hardware faster than the baseline, you can still reach back in time to recalculate a block, you just have to wait longer to do so. By that I mean: if you want to edit a block from 100 minutes ago, and you've got a CPU that's twice as fast as whatever the VDF is tuned for, then it only takes you 100 minutes to compute the replacement blockchain (50 minutes to recompute the past 100 minutes, and 50 minutes to compute the new blocks that have been added since you started the attack). So after 100 minutes you now have an alternative chain that everyone thinks took 200 minutes to compute.
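That timing argument reduces to simple arithmetic: the attacker closes the gap at (speedup - 1) chain-minutes per wall-clock minute:

```python
def catch_up_time(history_minutes: float, speedup: float) -> float:
    """Wall-clock time for an attacker with `speedup`-times-baseline
    hardware to rewrite `history_minutes` of chain while the honest
    chain keeps growing at 1 chain-minute per wall-clock minute.

    The attacker produces `speedup` chain-minutes per wall-clock
    minute, so the deficit shrinks at (speedup - 1) per minute.
    """
    if speedup <= 1:
        raise ValueError("no speed advantage: attacker never catches up")
    return history_minutes / (speedup - 1)

# 100 minutes of history, 2x hardware: 100 wall-clock minutes, yielding
# a chain whose VDF outputs claim 200 minutes of elapsed time.
assert catch_up_time(100, 2) == 100.0
```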
Which means now we're just back at the problem of "attacking consensus", where nobody can look at the two blockchains and see within the system which one was calculated first.
---
I suppose the VDF could be calculated by some volunteer with the fastest hardware, though this requires rewarding them for doing so (which means you basically have a monopolist sucking up all of the VDF rewards and no real incentive for nearly all participants to even try and compete). And this is still attackable by someone who can put together hardware that is even just slightly faster than the volunteer's. It just takes longer. If the security of the system relies on a volunteer being assumed to have the fastest hardware on the planet, then the system isn't secure. I also question what happens in this scenario when the volunteer goes offline and nobody else has hardware that's as fast. Now the next block isn't ready in X seconds. I assume there's some protocol for "oops, nobody has finished computing the VDF in time", but this does provide another avenue of attack for anyone in a position to disrupt the volunteer's connection to the network. Of course, anyone in a position to do that is likely to have access to unusually fast hardware already, but the point is that you cannot rely on the idea that "nobody can possibly calculate this block faster than the VDF is tuned for".
This attack is possible in Bitcoin too, except because Bitcoin is parallelizable, the defense there is that this attack requires spending more money than it is worth as the computing power used to calculate blocks is roughly a function of the value of the network. The danger there is generally in centralizing too much of the computing power among too few participants rather than an outsider breaking the scheme. This attack does work on smaller PoW coins of course, generally by folks who control a chunk of Bitcoin computing power and just redirect it temporarily (if the value in attacking the coin is greater than the expected bitcoin mining rewards for the time it takes to do the attack, then this makes sense).
Honestly, it really seems like we should only have one global PoW network, and everything else should use other systems. Perhaps they should satisfy security by doing things like VDFs for short-term security and storing their blockchain hashes into Bitcoin for long-term security. Bitcoin using up a ton of power is still a problem of course, but maybe there's some sort of approach that can be used to solve the problem of "PoW to establish a global distributed clock" once you remove the "and we want to use this as a currency" part that doesn't invoke a massive arms race. This may involve ditching the idea of "anyone can participate", which also then allows you to change the incentives for running the PoW scheme.
---
Edit: I suppose the VDF's input might not be "the block being computed" but instead "the previous block", and the output then used to elect participants who are then trusted to build the new block. This would allow the new block to indicate whether the VDF actually took longer than expected. But then we're back at a probabilistic function with PoS, where those with the highest stake are now most likely to be trusted and therefore are in a position to abuse that trust.
I suppose reading up more on PoS systems might answer this question. But I really don't want to do that. I've already spent far longer on this than I intended to.
I think you're basically asking the right questions. But when you progress in your understanding and are confronted with the next problem caused by PoS, you seem to assume that PoS is flawed because you can't immediately think of a solution (which is expected, it's quite a hard problem). The reality is that the issues you mention are well identified and have solutions.
About VDFs, there is a tolerance: you need to be in the same ballpark as the fastest, not _the_ absolute fastest. The more tolerance you need, the less snappy the PoS blockchain will be. They plan to make a low-power ASIC for that task, to get as close as possible to the theoretical max speed and thus have the lowest tolerance margin.
Also, there is a way to combine all the VDF results so that only one of the whole set of VDF-runners needs to be honest.
So it's not one volunteer but a pool of volunteers, using low-power ASICs so close to the theoretical max (i.e. the speed of transistor switching) that you couldn't outrun them enough to profit from the speed-up. I am not sure whether they are incentivized, since it doesn't cost a lot to run, but maybe.
> Edit: I suppose the VDF's input might not be "the block being computed" but instead "the previous block", and the output then used to elect participants who are then trusted to build the new block. This would allow the new block to indicate whether the VDF actually took longer than expected.
Indeed, that's what I thought I was saying, but maybe I was not clear enough.