
Checkpoints would be hardcoded into the software (not downloaded dynamically from some centralized source). As such, the checkpoint would simply be a peer reviewed change to an open source codebase, just like anything else.

How is it decentralized? Anyone can create an independent codebase that implements the protocol. Anyone can review the code and raise the alarm to the community if the checkpoint isn't correct. Ideally there would be a number of independent implementations of the software, each with many devs and numerous reviewers. In addition to that, the previous version of the software could be used to automatically validate that the new software contains a checkpoint that matches the longest chain as seen by that software - and again, it can raise an alarm to the user if it doesn't match, who can alert the community.
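That automatic cross-check could look something like this minimal sketch (all names hypothetical): a node that has already validated the chain checks the checkpoint baked into a new release against its own history.

```python
import hashlib

def checkpoint_matches_local_chain(release_checkpoint, local_block_hashes):
    """A running node verifies that the checkpoint hardcoded into a newly
    released version matches a block it has already validated itself.
    If not, it raises an alarm instead of silently accepting the release."""
    return release_checkpoint in local_block_hashes

# Hypothetical chain history the old software validated on its own:
local_chain = [hashlib.sha256(str(i).encode()).hexdigest() for i in range(100)]

assert checkpoint_matches_local_chain(local_chain[90], local_chain)      # honest release
assert not checkpoint_matches_local_chain("deadbeef" * 8, local_chain)   # would raise alarm
```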

Any change in bitcoin works like this. Changes are discussed, implemented, and reviewed by hundreds or thousands of people. Users need to find the correct software in some way. Generally they would use the internet to find the right software, hopefully cross checking multiple sources. They might ask their friends what software to run. Etc. But there is no math to find the right software - each person has to use their social network (and the internet) to figure out which software is "Bitcoin". Once they download the software, it can do the rest.

The same is true for a piece of software with a hardcoded checkpoint. There is no central source for the checkpoint. Everyone who is currently part of the network can validate that the checkpoint is correct. Many people will actually do it. It would be so easy to validate that it could be automatically reviewed by people's software (unlike most other codebase changes).

So in what way would a PoS checkpoint be different than bitcoin? As I've shown above, the checkpoint itself is just another piece of the code like any other code change. The difference is that the checkpoint would need to be updated at some regular frequency (say once a year). By contrast, you could imagine a future where the core Bitcoin software has been frozen and has not changed for years - decades maybe. And one could expect to go offline for 10 years and still be able to bring up their software without updating it.

A PoS system would be slightly different. Because of the issues around short-range and long-range revisions, people who have been offline long enough should at the very least download a new checkpoint (even if the rest of the software remains the same). The new checkpoint a user downloads should be recent enough that a short-range revision attack (eg a history attack) with sufficient accumulated minting power (or whatever your preferred alternative term is for "accumulated difficulty") is sufficiently unlikely. One new checkpoint per year would ensure that a user downloading the latest checkpoint is downloading a checkpoint no farther in the past than 1 year. This would require that a sufficient number of devs and reviewers get together to review and validate the released checkpoint to make the release sufficiently decentralized. This could realistically be structured so software checks this automatically and raises an alarm to the user if the new checkpoint doesn't match or if too much time goes by without a new checkpoint being released. Millions of people could realistically participate in that - anyone that runs a full node. Even anyone that runs a light node.
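The "too much time without a new checkpoint" alarm is simple enough to sketch (a minimal illustration, assuming the one-year release cadence described above):

```python
from datetime import datetime, timedelta

MAX_CHECKPOINT_AGE = timedelta(days=365)  # one checkpoint release per year

def checkpoint_alarm(checkpoint_released, now):
    """Raise an alarm if too much time has passed without a new
    checkpoint release - which could itself indicate something is wrong."""
    return now - checkpoint_released > MAX_CHECKPOINT_AGE

assert not checkpoint_alarm(datetime(2025, 1, 1), datetime(2025, 6, 1))  # fresh enough
assert checkpoint_alarm(datetime(2023, 1, 1), datetime(2025, 6, 1))      # alarm
```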

Furthermore, when a user does download a checkpoint, some users are going to be careless and download some malicious checkpoint. If they do, but they have honest software, the software can ask their connections if the new checkpoint matches up with them. If it doesn't, it can again raise an alarm to the user.

For a person with an old version of the software to get a malicious checkpoint without an automatic alarm being raised, they would have to either have a virus that changed the code of the software (at which point any software PoS or PoW is vulnerable), or they would have to be eclipsed by the attacker (connected to only attacker nodes) and the attacker must also have a way to sign the release (of the checkpoint) with the authors' signatures (which the software should also automatically check) and the attacker must have accumulated enough minting power (eg in old keys bought from people who have drained those addresses already) since the software last left the chain. The attacker can't simply create a brand new chain with a different genesis block - the software would raise an alarm about that.

For a new entrant to the system, there is a higher risk, but it is no higher than with Bitcoin. The new entrant must find and install the right software somehow, on a machine that isn't compromised. This wouldn't be different for PoS.

In summary:

A. Users who have been connected to the network all along can't be tricked by any kind of history attack.

B. Users who are newly connecting to the network simply need to download the correct software, as they need to do with bitcoin.

C. Users who have been connected to the network for a time, but left for a period of time, just need to (manually) download and (automatically) verify a checkpoint.

Item C is the situation that differs most from current Bitcoin. There is some additional possibility for an attack there, but it would still be extraordinarily difficult to pull off. In Bitcoin, there is no need to download any new data, and so there is no equivalent attack vector similar to tricking the user into accepting an invalid checkpoint. However, this attack would be very difficult (eclipse, key theft), cost a lot (buying old addresses with no coins currently in them), and has a pretty limited reward potential (only the possibility of attacking returning and new entrants they can eclipse). So yes, it is a trade off. I think it's a good trade off to buy higher security against a 51% attack and lower fees.

Does this make it clearer how checkpoints can be decentralized?



Since it seems you prefer questions to statements, I'll ask a question, Socratic-method style, but it requires some explanation:

Let's say I download two copies of the updated source code of your software, one from an honest mirror, and one from a malicious mirror.

The honest source code has a change in the author signature, because the original developer is no longer involved in the creation of the software. The malicious source code has a change in the author signature, for obvious reasons. (Real life example: Satoshi Nakamoto hasn't signed a Bitcoin release in years).

The honest source code contains a change in the initial nodes you connect to, because a DDOS a year ago caused the initial nodes to become a point of failure. The malicious source code contains a change to the initial nodes you connect to, which adds nodes that the attacker controls. (Real life: https://fintechs.fi/2021/07/06/bitcoin-org-hit-with-massive-...).

The honest chain has a large-scale validator drop caused by an outage of AWS US East 1. The malicious chain has a large-scale validator drop at the same time caused by the malicious validator failing to include re-staking transactions, resulting in the malicious validator controlling 51% of the coins on the malicious chain, after which it's easy for the malicious attacker to create transactions that control 100% of the staking on their fake chain in a way that looks like normal traffic on chain. (Real life: https://www.datacenterdynamics.com/en/news/aws-us-east-1-reg...).

At the point of the large scale validator drop, there are a lot of missed blocks on the honest chain, so traffic eventually falls back to a different validator to allow the blockchain to progress. At the same point, there are a lot of missed blocks on the malicious chain because the attacker didn't control the validator chosen by the provable random function, but traffic eventually falls back to a different validator which the attacker controls. These validators don't include transactions that add staking power to addresses the attacker doesn't control.

The honest chain has blocks validated every 20 seconds (this number pulled from Cardano), which were validated at that rate because honest nodes wouldn't accept a block earlier than the allotted time. The malicious chain has blocks that were all created in a span of 20 minutes and signed by staking addresses the attacker controls.

The attacker controls your internet connection to the point that about half the time, if you poll the network, you'll receive answers from the attacker (Real life: China).

Given this situation, how does your system tell which chain is the honest one, and which is the malicious one?

Keep in mind that Proof of Work handles this situation trivially: the malicious chain is shorter--a lot shorter if your node has been disconnected for some time.


That's a pretty clear description of an attack, thanks.

> The honest source code has a change in the author signature

I assume we're talking about the scenario where a user has already-installed honest software that has validated the chain, but has been offline for a while?

If we were talking about a new entrant, the fact that the attacker controls users' internet connections about 50% of the time would probably be enough to trick users into downloading malicious software. If 50% of connections are hijacked, most users would probably not check signatures, and so ~50% of them would get malicious software; the users that do check signatures would get honest software 25% of the time, malicious software 25% of the time, and a signature mismatch 50% of the time. Some bad things can happen there for any software where security is important. So let's stick to the scenario where the user already has honest software.
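The 25/25/50 split falls out of treating the software download and the key lookup as two independent hijackable fetches (a back-of-envelope sketch):

```python
P_HIJACK = 0.5  # chance any given connection is hijacked by the attacker

# For users who check signatures, the software download and the public-key
# lookup are two independent fetches, each hijackable with probability 0.5:
p_honest    = (1 - P_HIJACK) * (1 - P_HIJACK)  # honest software, honest key
p_malicious = P_HIJACK * P_HIJACK              # malicious software, matching malicious key
p_mismatch  = 1 - p_honest - p_malicious       # mixed fetch: the signature check fails

assert (p_honest, p_malicious, p_mismatch) == (0.25, 0.25, 0.5)
```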

First I want to comment on the scenario, and then I'll outline a procedure that allows the user to determine which is the honest chain.

> The honest chain has a large-scale validator drop caused by an outage of AWS US East 1

The outage you mentioned lasted less than 2 hours. But I think we can consider an outage that lasts, say, 1 week. Kind of an absurd amount of time for an outage that hits such a huge amount of people, but even a 1 month outage would not give an attacker an opportunity here. And how many validators would drop out in this scenario? 20%? 40%? Any significant percent seems highly unlikely, but let's say it is a 40% drop for 1 week.

> The malicious chain has a large-scale validator drop at the same time caused by the malicious validator failing to include re-staking transactions

VPoS doesn't do staking, but the equivalent here is that the malicious chain would simply have blocks submitted at longer intervals until the "difficulty" re-adjusts, which would likewise indicate fewer validators.

> resulting in the malicious validator controlling 51% of the coins on the malicious chain

The malicious validator can mint in secret and always control 100% of the coins actively minting on the chain, no? This still doesn't help though, because the honest chain can be seen to have more active validators than the malicious chain.

In a quorum-based system like Casper where the quorum chooses new randomness that determines the next quorum, it could be possible for an attacker to capture the quorum if they currently make up a large minority of the quorum and 40% of the quorum drops out. They'd have to make up at least 30% of the quorum, so that when they stop responding in the honest quorum, the honest quorum only has 30% left (matching their 30%). An attacker could 51% attack the honest chain in this scenario, no need for a separate malicious chain.

This is the same in bitcoin - if 40% of the hashpower went offline, an attacker with only 30% of the hashpower would turn into an attacker with 50% of the hashpower. VPoS isn't quorum-based, but it would have the same problem if 40% of minters lose access to their coins for a period of time. Am I misunderstanding your scenario here? Seems like the move would be to 51% attack the honest network rather than try to attack a smaller set of nodes that are probably lower value.

> there are a lot of missed blocks on the honest chain, so traffic eventually falls back to a different validator to allow the blockchain to progress

I'm not sure about other PoS protocols, but in VPoS, there is no "fall back". The block progression simply slows and "difficulty" readjusts over time. The set of validators and how they're chosen wouldn't change.

But let me suggest a different attack scenario: let's say the attacker finds, creates, or buys old keys that collectively contain as many coins as the total coins minting honestly (a history attack). No need for an amazon outage. The attacker simply creates a chain from the point where those addresses collectively had as much minting power as the honest chain. After a time, they would capture all the randomness, and could put even more coins to work minting (with the use of stake grinding), which could look like a heavier chain (with more validators) than the honest chain.

> The honest chain has blocks validated every 20 seconds (this number pulled from Cardano), which were validated at that rate because honest nodes wouldn't accept a block earlier than the allotted time. The malicious chain has blocks that were all created in a span of 20 minutes and signed by staking addresses the attacker controls.

It sounds like you mean that the honest chain has one block every 20 seconds, whereas the malicious chain has a potentially unlimited number of blocks per second. Is that what you're saying?

I think in every PoS protocol, there is some verifiable time limitation. Yes, an attacker could create an alternate chain starting from 5 years ago in arbitrarily little time. However, they could not create more blocks in their fake 5 years of time than the honest chain did in 5 real years. And nodes obviously won't accept blocks with timestamps significantly in the future. Protocols have adjustments made when blocks have timestamps that are too close together, just like bitcoin. Any attacker that creates a chain with timestamps too close together will reduce their ability to create blocks proportionately.
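The two verifiable time limits described here - no future timestamps, and no more blocks than the chain's own claimed timespan allows - can be sketched as one sanity check. The 20-second interval is the Cardano figure quoted earlier; the tolerance factor is an assumption for illustration:

```python
TARGET_BLOCK_INTERVAL = 20   # seconds; the Cardano figure quoted above
TOLERANCE = 1.5              # slack for interval variance (an assumption)

def chain_is_plausible(timestamps, now):
    """Two verifiable time limits:
    1. no block may be timestamped in the future;
    2. the chain can't contain more blocks than its own claimed
       timespan allows at the protocol's minimum block interval."""
    if timestamps[-1] > now:                      # future timestamps rejected
        return False
    elapsed = timestamps[-1] - timestamps[0]
    return len(timestamps) <= (elapsed / TARGET_BLOCK_INTERVAL) * TOLERANCE + 1

now = 1_000_000
honest = list(range(0, now, 20))             # one block every 20 s
assert chain_is_plausible(honest, now)

compressed = list(range(0, len(honest)))     # same block count, 1 s apart
assert not chain_is_plausible(compressed, now)

stretched = list(range(0, now * 2, 40))      # timestamps run into the future
assert not chain_is_plausible(stretched, now)
```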

But maybe you could clarify what you mean here.

In any case, to answer your primary question, this is a process that could happen:

1. User downloads honest software in 2021 and runs it continuously and/or regularly.

2. The user shuts down their software in 2022 and gets hit by a car and goes into a coma or something.

3. The user wakes up in 2026 and of course the first thing they do is start up their computer along with the currency software.

4. The software tells the user it's been disconnected from the chain for too long and needs a new checkpoint.

5. The user goes to the website they're used to and downloads a new checkpoint and a signature for it (or ideally a battery of signatures).

6. The user uploads the checkpoint and signature(s) into the software. The software checks the signatures against the checkpoint and against its list of trusted public keys. Let's say none match.

7. So the user scours the internet and finds many (honest) articles that talk about how the dev group had a big change up and all the signatures are expected to be created with different keys now.

8. So the user goes and finds some new public keys to validate against. Chances are they go to a search engine and search a few places for keys. There's a 50% chance that they land on a malicious page, and they'll probably keep using that same page for subsequent searches. So there's a 50% chance they get public keys from the attacker. If they're ultra careful, they could start a new web page (and connection) for each search and so only have a 50% chance of getting a malicious public key for each key - but let's just say they don't do that, and so 50% of the users just get malicious keys.

9. The user puts in these keys and 50% of the time they match. In the case they don't match, an alarm is raised and they're alerted to the fact that they're possibly being scammed/attacked. 25% of users get malicious keys that matched the checkpoint data.

10. The software then connects to the network. While normally the software might use 8 connections like bitcoin does (though double that is probably warranted), just for this case of validating the checkpoint, many more connections can be used. 100 wouldn't be very burdensome on the network, but would make it incredibly unlikely that a user would be eclipsed. Again, 50% of these connections would be redirected to malicious nodes. Let's also say the attacker has a 50% Sybil in the network, so that even connections that aren't redirected by the attacker may still end up connecting to an attacker. So this is a 75% chance of connecting to an attacker. This results in a 0.75^100 chance that all their connections are to an attacker. If every person in the world tried to reconnect an old node during the attack window, there would be a probability of less than 0.3% that even a single person gets eclipsed.

11. The software asks these connections what their checkpoint is. If any don't match, an alarm is raised and the user is told they may be under attack and to verify out of band what the checkpoint is.
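The eclipse arithmetic in step 10 can be checked directly (numbers from the steps above; world population assumed at roughly 8 billion):

```python
P_REDIRECT = 0.5     # attacker redirects half of the user's connections
P_SYBIL = 0.5        # attacker Sybils half of the remaining honest slots
N_CONNECTIONS = 100  # extra connections used just for checkpoint validation

# Chance any single connection lands on the attacker:
p_attacker = P_REDIRECT + (1 - P_REDIRECT) * P_SYBIL   # = 0.75

# Chance ALL 100 connections land on the attacker (a full eclipse):
p_eclipse = p_attacker ** N_CONNECTIONS

world_population = 8e9
expected_eclipsed = world_population * p_eclipse

assert p_attacker == 0.75
assert p_eclipse < 1e-12           # roughly 3 in 10 trillion
assert expected_eclipsed < 0.003   # under 0.3% even at world scale
```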

Continued...


...

So all told, any given user trying to reconnect after a long time during the attack window has about a 3 in 10 trillion chance of being successfully duped without an alarm being raised. And those aren't the only fractions at play: the number of users trying to reconnect after a long time during the attack window is probably pretty tiny as well.

In any case, the one item up there that opens up a further attack opportunity is item 11. An attacker could create malicious public nodes that act like honest nodes until a new connection asks what their checkpoint is. When asked, they give some nonsense checkpoint hash. A small number of public nodes could cause a bit of chaos. But that chaos could only affect applicable reconnecting nodes that would need to go through the above process. So what nodes are those exactly?

Well, the attack is only cheaper than a normal 51% attack when the attacker can obtain old addresses that used to contain coins more cheaply than they can obtain actual coins. In VPoS, the randomness that decides who can mint is hidden for a period of time and is afterward active for a period of time. If the period of time the randomness is hidden is longer than the interval at which new checkpoints are released, that cheaper variant of the attack is closed off. A 1 year timespan seems reasonable for both.

So because that possibility can be closed off, the other possible history attack is a longer-range history attack from before the checkpoint - which requires tricking users into accepting a malicious checkpoint. So the attacker might attempt to obtain addresses that contained coins a year ago (but no longer do). However, the only nodes that could even potentially fall for this trick are ones that have been offline for over a year. How many nodes go offline for that long? Probably almost none. But whatever that number is: that's the number that must go through the above process, and that's the number that could be griefed by a malicious actor that releases fake data in step 11 above.

These steps do hinge on "raising an alarm" being sufficient to prompt people to do some deeper digging as to what chain is the right one. This could be as easy as calling up some trusted friends and asking them to read out a hash to you from their software. It could be asking the merchants you deal with most often, or your employer. I'd argue that similar steps to the above would be incredibly valuable to add to the bitcoin software upon update, since similar issues can happen if you install malicious software (worse issues, really).

There are also other mitigations that wouldn't stop an immediate attack, but would help prevent the attack from scamming a user for a long period of time. If successful, the attacker could simply mirror all transactions from the normal chain on their malicious chain. So the victim could get paid and pay honest people, but the attacker could be paying the victim for things with just fake coins on the malicious chain. However, there is an idea that has been discussed before of putting a recent block hash in the transaction, so that transactions are pinned to a particular chain and the attacker can't build a malicious chain with honest transactions. If transactions required recent block hashes, the victim would be alerted to the malicious chain as soon as they tried to pay someone honest or get paid by someone honest.
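The chain-pinning idea can be sketched in a few lines (all names hypothetical; the 100-block window is an arbitrary choice for illustration):

```python
import hashlib

def make_tx(sender, recipient, amount, recent_block_hash):
    """A transaction that commits to a recent block hash, pinning it
    to one particular chain."""
    return {"sender": sender, "recipient": recipient,
            "amount": amount, "pinned_block": recent_block_hash}

def tx_valid_on_chain(tx, chain_block_hashes, max_age=100):
    """The transaction is only valid on a chain that actually contains
    the pinned block among its recent blocks."""
    return tx["pinned_block"] in chain_block_hashes[-max_age:]

honest_chain = [hashlib.sha256(str(i).encode()).hexdigest() for i in range(500)]
malicious_chain = [hashlib.sha256((str(i) + "x").encode()).hexdigest() for i in range(500)]

tx = make_tx("alice", "bob", 5, honest_chain[-10])
assert tx_valid_on_chain(tx, honest_chain)
# The attacker can't mirror this transaction onto their chain:
assert not tx_valid_on_chain(tx, malicious_chain)
```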

But I think there are still some things to clarify, since I may have not correctly understood a couple items in your attack scenario.


Okay I read your entire post.

> A. Users who have been connected to the network all along can't be tricked by any kind of history attack.

That was never part of the attack proposed by On Stake and Consensus. I'm not accusing you of not knowing this, and I'm not accusing you of ignoring this, I'm just stating it for completeness.

> B. Users who are newly connecting to the network simply need to download the correct software, as they need to do with bitcoin.

You've made a pretty important shift here from comparing PoS-to-PoW, to comparing PoS-to-Bitcoin. You're no longer saying, "my system is decentralized", you're now saying "my system is just as decentralized as Bitcoin". That doesn't work: just because Bitcoin relaxes its decentralization in some ways doesn't mean it's okay for other solutions to relax decentralization.

In fact, this is one aspect in which Bitcoin isn't decentralized: almost everyone goes to a centralized source, Bitcoin.org, and downloads the binary there. Technical users might verify the hash, but that's still a centralized solution. The only truly decentralized solution that Bitcoin offers is that you can download the source code and verify that it does what it says it does, but very few users have the ability to do that: it's a decentralized solution, but it's not a good decentralized solution.

However, Bitcoin's solution is still a better solution than the one you're offering. If I download the source code and have the technical ability to do so, I can verify that the source code does what it says it does. There's no centralized trust here: I'm merely agreeing to the terms of how the blockchain works. Choosing to accept the updates to the Bitcoin software doesn't imply any consensus about the state of the Bitcoin blockchain.

Checkpoints mean something entirely different: that means that I'm trusting the provider of the checkpoint about the state of the blockchain.

I'm going to reiterate the difference because it's extremely important:

1. With Bitcoin, there's no trust required if I verify the source code myself. If I review the source code and decide to compile and run it with the latest changes, I'm merely agreeing to the changes in the rules of the blockchain--and in fact I don't have to agree to them (which results in a hard fork: see Bitcoin Cash or Ethereum Classic). I'm not trusting anything about the state of the blockchain.

2. With your proposed "checkpoint" solution, I'm trusting that the source of the checkpoints isn't lying to me about the state of the blockchain. Contrary to your statement, "There is no central source for the checkpoint," there IS a central source for the checkpoint: the server you're downloading from.

Remember the problem proposed by Poelstra: you receive two different blockchains, and need to figure out which is the real one. All you've done is change the source of the attack slightly: you receive two different source codes containing two different checkpoints and links to two different blockchains, and need to figure out which is the real one. This isn't a fundamental change to the attack, it's the same attack. This is what I meant when I said that getting around your "solution" is trivial. As I said before, checkpoints do exactly nothing to address the problem. All you've done is move some of the block hashes into the source code.

Statements like "Everyone who is currently part of the network can validate that the checkpoint is correct" show a fundamental misunderstanding of the problem: with Bitcoin, I don't need to ask anyone which chain is correct. I don't need to ask the community with Bitcoin: your statements about how people can "alert the community" are irrelevant. I don't need to ask the authors with Bitcoin: your statements about things being signed by the authors are irrelevant. The longer chain is correct, period. If I have to ask the network if my checkpoints are valid, that opens up the possibility of the attack proposed by Poelstra. Just to reiterate:

> Furthermore, when a user does download a checkpoint, some users are going to be careless and download some malicious checkpoint. If they do, but they have honest software, the software can ask their connections if the new checkpoint matches up with them. If it doesn't, it can again raise an alarm to the user.

If you downloaded a malicious piece of software, that piece of software will likely connect you to connections that it controls. Even if you introduce your own connections and they provide you with the correct chain information, there's no way for you to verify which source of information is telling you the truth. Again, with PoW this is easy: the longer chain is the real chain. With PoS, the longer chain could be manufactured: you still have not responded to my statement, "Show me a single mechanism in existence that can allow me to look at two blockchains and reject one based on the fact that the blocks were mined too quickly. Hint: it exists, but you're not going to like what it is!"
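For contrast, the PoW fork-choice rule being described here really is this mechanical (a toy sketch; real clients sum per-block work derived from difficulty targets rather than a fixed value):

```python
def heaviest_chain(chains):
    """Proof-of-work fork choice: pick the chain with the most
    accumulated work. No identity or trusted party is consulted."""
    return max(chains, key=lambda blocks: sum(b["work"] for b in blocks))

honest    = [{"work": 100}] * 1000
malicious = [{"work": 100}] * 400   # an attacker with less hashpower falls behind
assert heaviest_chain([honest, malicious]) is honest
```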

> For a person with an old version of the software to get a malicious checkpoint without an automatic alarm being raised [...] they would have to be eclipsed by the attacker (connected to only attacker nodes)

No, because even if they connect to some valid nodes, their software with the malicious checkpoint would identify the valid chain as malicious.

> For a person with an old version of the software to get a malicious checkpoint without an automatic alarm being raised [...] the attacker must also have a way to sign the release (of the checkpoint) with the authors' signatures (which the software should also automatically check)

The authors are a centralized entity.

> For a person with an old version of the software to get a malicious checkpoint without an automatic alarm being raised [...] and the attacker must have accumulated enough minting power (eg in old keys bought from people who have drained those addresses already) since the software last left the chain.

That's true, but you've literally proposed one way that could happen. There are other ways this could happen, which are proposed by Poelstra.

Now, remember when I said you didn't understand the attack proposed by Poelstra, and you took that as an insult? Remember how you said that I didn't understand your solution? I've read your post, and it added nothing to my understanding--I did understand your solution before. Does that mean you were insulting me? I'm not going to take it as an insult because that's pointless: all I'm saying is let's keep this on the level of respectful disagreement and not take disagreement as insult.

> For a new entrant to the system, there is a higher risk, but it is no higher than with Bitcoin. The new entrant must find and install the right software somehow, on a machine that isn't compromised.

This is a true statement about Bitcoin, but it isn't a true statement about Proof of Work. Bitcoin Cash software from ten years ago can still detect a sybil attack on the Bitcoin Cash chain as long as you connect to one valid node. Again, the mistake you're making is that with Bitcoin, the software only encodes agreement to changes to the protocol, whereas in your proposed solution, your checkpoints encode trust in the state of the blockchain. This is not "higher risk, but [...] no higher than with Bitcoin". It's a significantly higher risk than with Bitcoin.

What you're alluding to here is a real problem, which is how to reach consensus on changes to the protocol. I don't know of a good solution to that problem: certainly Bitcoin Cash's "never change the protocol" solution isn't a good solution. Probably the best solution I know of is Polkadot's on-chain governance, but while Polkadot is PoS, there's no reason on-chain governance couldn't be implemented in a PoW system. And I'm not sure on-chain governance actually solves the problem: it encodes an agreement on how updates to the protocol are agreed upon, but there's still nothing preventing a motivated minority from changing their source code and creating a hard fork.

> C. Users who have been connected to the network for a time, but left for a period of time, just need to (manually) download and (automatically) verify a checkpoint.

> Item C is the situation that differs most from current Bitcoin. There is some additional possibility for an attack there, but it would still be extraordinarily difficult to pull off. In Bitcoin, there is no need to download any new data, and so there is no equivalent attack vector similar to tricking the user into accepting an invalid checkpoint. However, this attack would be very difficult (eclipse, key theft), cost a lot (buying old addresses with no coins currently in them), and has a pretty limited reward potential (only the possibility of attacking returning and new entrants they can eclipse). So yes, it is a trade off. I think it's a good trade off to buy higher security against a 51% attack and lower fees.

This is why I say I'm not a PoS detractor. I do recognize that there are tradeoffs here. I'm not convinced of your claim that this attack would be very difficult--while I don't know of a time that it has been implemented in practice, a lot of the pieces of the attack have already been implemented in practice. You may ultimately turn out to be correct that it's difficult to implement, but I don't think you know that. Certainly Vitalik and a great many other smart researchers are worried about how this could go wrong, and nothing you've said convinces me that your confidence is justified.


I think as usual the crux of this debate is the security properties of figuring out correct software, and the parallels to checkpoints. I think you misunderstood me on a couple things, but I'd recommend that we focus mostly on the question of how a user can download correct software/data. If we can come to an agreement on that, I think how to get on the same page about the rest will become much clearer.

> That doesn't work: just because Bitcoin relaxes its decentralization in some ways doesn't mean it's okay for other solutions to relax decentralization.

Ok. Well, that could be a valid point. However, discussing a design is only useful when comparing it to some realistic alternative. Bitcoin is the de facto standard of cryptocurrencies. I'm sure you'd agree it's at least fair to compare to bitcoin, even if there might be other designs out there that claim to be better. I'd suggest that we both compare against bitcoin, because it seems likely that both of us understand it. Were you to bring up some other coin that you claim does it better, I think it would just hinder us coming to a mutual understanding. Once we've come to such an understanding, I'd be happy to move on to compare against something you think is better than bitcoin.

You're absolutely right that most people go to bitcoin.org to find full node software. However, that has nothing to do with bitcoin's consensus protocol, nor PoW vs PoS. Seems kind of irrelevant, as far as I can see.

> If I download the source code and have the technical ability to do so, I can verify that the source code does what it says it does.

Sure. Ignoring what you said about most people not being able to do that (and I'd argue that the vast majority don't have the combination of time and expertise to review changes to the source code and ensure there aren't things like security holes or maliciously injected code), the fact of the matter is that you can't know just by reading the source code whether or not that source code implements the protocol that everyone else is using (and calling "bitcoin", or whatever coin you're trying to use). In order to know that the software is compatible, you need to ask other people. There is no way around that. This isn't centralized trust, but it is decentralized minimal-trust. Just looking at a codebase or set of diffs can't tell you what chain is bitcoin.

> Checkpoints mean something entirely different: that means that I'm trusting the provider of the checkpoint about the state of the blockchain.

Given all that I said about how checkpoints can be verified against numerous connections, I would have hoped you'd at least have instead said "that means I'm trusting (many but a finite number of) providerS of that checkpoint about the state of the blockchain".

> Contrary to your statement, "There is no central source for the checkpoint," there IS a central source for the checkpoint: the server you're downloading from.

I thought you read my entire message? Why must someone download the checkpoint from a single source? Why not download it from many sources and ensure they all match?
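To be concrete about what I mean by checking many sources, here's a rough sketch in Python. The `fetch_checkpoint` callable is hypothetical (standing in for however a node queries a peer); the point is only the cross-checking logic, not any particular network API:

```python
import collections

def verify_checkpoint(fetch_checkpoint, peers, quorum=0.9):
    """Fetch the checkpoint hash from many independent peers and accept
    it only if an overwhelming majority agree.

    fetch_checkpoint: callable(peer) -> checkpoint hash (hypothetical)
    peers: peer addresses, ideally gathered from diverse sources
    """
    votes = collections.Counter(fetch_checkpoint(p) for p in peers)
    best, count = votes.most_common(1)[0]
    if count / len(peers) >= quorum:
        return best  # broad agreement: checkpoint is very likely valid
    # disagreement: possible attack, fall back to alerting the user
    raise RuntimeError(f"checkpoint mismatch across peers: {dict(votes)}")
```

A single malicious source can't fool this; an attacker would have to control nearly all of your connections at once, which is the same sybil assumption every decentralized network already makes.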

> you receive two different source codes ... and need to figure out which is the real one

I agree with you, this is a problem. But tell me, how is this problem different for bitcoin or any other cryptocurrency? A prerequisite is that the user installs the correct software. How do they know which one is correct? How does a user know which software is the correct bitcoin software? Can they tell just by looking at the source code? Can they tell just by looking at the binary? What is the trustless way of installing the correct bitcoin software?

> The longer chain is correct, period.

This is not true in the case of a 51% attack, or more realistically, a dangerous majority consensus change. For example, what if bitcoin were the worldwide currency, and most people were tired of high onchain fees and decided to increase the blocksize by 100x with some kind of soft fork. That would very likely be detrimental in the long run. However, it seems reasonably possible that this could actually happen some day. Smart people would fork off a different chain that preserves the old rules. So it would depend on what you mean by "correct". If by "correct" you mean the chain with the most economic activity - that's probably the longest chain. If by "correct" you instead mean the chain with the rules you expect, that chain may not be the longest chain - it may no longer exist at all. The only way to know is by asking people and learning what changes have happened, how many people followed what rules, and whether you agree with them. It's not always as simple as "follow the longest chain". However, I certainly agree that 99.9% of the time the longest chain is what you want.

> If you downloaded a malicious piece of software

It seems you somehow misread what I wrote. The case was if a user downloads "malicious checkpoint" and retains (their original) "honest software".

> but you've literally proposed one way [an attacker could accumulate enough minting power]

Of course it's possible. That doesn't make it easy, cheap, profitable, or likely. A 51% attack is possible, but it's (hopefully) difficult enough that it will never happen.

> Bitcoin Cash software from ten years ago can still detect a sybil attack on the Bitcoin Cash chain as long as you connect to one valid node.

And a PoS currency that uses checkpoints can also detect a sybil attack if they can download a checkpoint from at least one honest node. It's literally the exact same mechanism.

> I'm not convinced of your claim that this attack would be very difficult--while I don't know of a time that it has been implemented in practice, a lot of the pieces of the attack have already been implemented in practice.

That's something I could analyse in more detail if you're interested.

> Vitalik and a great many other smart researchers are worried about how this could go wrong

There's certainly plenty that could go wrong. I'm not claiming that PoS is easy or a sure thing. What I am claiming is that most of the arguments against PoS that have been raised are solved problems. But that doesn't mean there aren't more subtle known problems that aren't raised as often, and it doesn't mean that there aren't unknown problems. There is certainly a possibility that PoS can't beat PoW. However I have yet to see convincing evidence that's the case.


> I think as usual the crux of this debate is the security properties of figuring out correct software, and the parallels to checkpoints. I think you misunderstood me on a couple things, but I'd recommend that we focus mostly on the question of how a user can download correct software/data. If we can come to an agreement on that, I think how to get on the same page about the rest will become much clearer.

Okay. I agree that this is one of the two key points we disagree on.

If you want to focus on your specific proposed implementation of PoS rather than all possible PoS implementations, we can narrow this further. I think you're saying that it's possible to verify the checkpoints in a decentralized way without PoW. Is that a fair statement of your opinion?

The other key point I think we disagree on is that you seem to think that it's possible to verify that time has elapsed between block validations in PoS. Is that a fair statement of your opinion?


> If you want to focus on your specific proposed implementation of PoS rather than all possible PoS implementations, we can narrow this further.

Sounds good to me.

> I think you're saying that it's possible to verify the checkpoints in a decentralized way without PoW. Is that a fair statement of your opinion?

Yes. To elaborate, for users who don't have any software already installed, this would be a social process of asking many other people/sources what the correct software is. For users who already have the correct software installed, either that software has been running recently enough that it can generate its own checkpoint, or it was offline for long enough that it needs a new checkpoint - but even then it can substantially automate the process of validating a checkpoint, and if it cannot validate one, the user should fall back to a similar process of social discovery to determine what the correct checkpoint/update is.

Do you view the idea of asking many untrusted and/or trusted people/entities what the correct checkpoint is, as not decentralized? Or not sufficiently decentralized?

> The other key point I think we disagree on is that you seem to think that it's possible to verify that time has elapsed between block validations in PoS. Is that a fair statement of your opinion?

The question isn't clear enough to me. I think what I can say about that is: given that you have two time anchors (e.g. one in the past, the checkpoint, and one in the present, your own computer's clock), it's possible to ensure that only a certain number of blocks can validly be added between those two anchors, to a high degree of statistical probability.

But maybe you could clarify the question?
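To sketch what I mean by bounding the block count between two time anchors (the numbers and 10% variance allowance here are illustrative, not part of any real protocol):

```python
def max_blocks_between(checkpoint_time, now, target_interval, slack=1.10):
    """Upper bound on how many blocks can validly appear between two time
    anchors: the trusted checkpoint and the local clock.

    If a difficulty-style adjustment holds block production near
    target_interval seconds per block, a chain claiming many more blocks
    than elapsed_time / target_interval is statistically implausible.
    slack allows for normal variance (assumed 10% here).
    """
    elapsed = now - checkpoint_time
    return int(slack * elapsed / target_interval)

def plausible(blocks_since_checkpoint, checkpoint_time, now,
              target_interval=60):
    """Reject chains that grew faster than the protocol allows."""
    return blocks_since_checkpoint <= max_blocks_between(
        checkpoint_time, now, target_interval)
```

So for example, a chain claiming 1,200 blocks since a checkpoint, when only 1,000 block-intervals of wall-clock time have elapsed, would be rejected.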


> Yes. To elaborate, for users who don't have any software already installed, this would be a social process of asking many other people/sources what the correct software is. For users who already have the correct software installed, either that software has been running recently enough that it can generate its own checkpoint, or it was offline for long enough that it needs a new checkpoint - but even then it can substantially automate the process of validating a checkpoint, and if it cannot validate one, the user should fall back to a similar process of social discovery to determine what the correct checkpoint/update is.

> Do you view the idea of asking many untrusted and/or trusted people/entities what the correct checkpoint is, as not decentralized? Or not sufficiently decentralized?

I'll allow that this falls on a spectrum of decentralization and is definitely more decentralized than "not decentralized at all".

Whether it's "sufficiently decentralized" is a difficult question for two reasons:

1. Maybe you have some formal algorithm for social discovery that you haven't presented here, but without that, it's quite difficult to speculate how it would play out.

2. From the way you're describing this, you're not relying on a formal algorithm, but on a reliable, diverse network. Maybe you're aware that computer networks are inherently unreliable, so you're getting around this by not using a computer network: for example, making a phone call to get a checkpoint hash from someone you trust. There's a lot of human elements here, and humans are unpredictable.

My gut feel is that no, it's not sufficiently decentralized, at least not in a way that presents any real advantages over simply trusting the network, but something like a PGP web of trust[1] could make this more reliable--it's hard to say without fleshing this plan out more.

The two claims I'm willing to confidently make here are:

1. As far as I can tell, you're not proposing an automated way to bootstrap trust OR consensus here, and without this, it's going to be both slow, and prone to the introduction of human error.

2. This system is inherently less decentralized AND less secure than PoW. Verifying a blockchain via PoW doesn't require trust (or put another way, you trust the math, not the nodes you're connected to), so there isn't a need to bootstrap trust. I don't know how PoS will play out, but I do know how PoW will play out in the situations described by Poelstra. PoS may work out in practice: I genuinely hope it does, because there would be significant upsides! But I'm not confident that PoS will work out, and I am confident that PoW will.

> > The other key point I think we disagree on is that you seem to think that it's possible to verify that time has elapsed between block validations in PoS. Is that a fair statement of your opinion?

> The question isn't clear enough to me. I think what I can say about that is: given that you have two time anchors (e.g. one in the past, the checkpoint, and one in the present, your own computer's clock), it's possible to ensure that only a certain number of blocks can validly be added between those two anchors, to a high degree of statistical probability.

> But maybe you could clarify the question?

Okay, given this explanation, I think this basically falls back to your system of bootstrapping trust. Yes, if you can trust the checkpoints, you can trust the times between them. So this disagreement basically collapses back to the first disagreement: I would say that the reliability of the elapsed time between checkpoints is only as valid as the reliability of the checkpoints, and I am not confident in the reliability of the checkpoints.

You did claim earlier that all PoS implementations have elapsed time between blocks, and I'm still mystified as to how you're claiming this is enforced.

[1] https://www.linux.com/training-tutorials/pgp-web-trust-core-...


> Maybe you have some formal algorithm for social discovery that you haven't presented here, but without that, it's quite difficult to speculate how it would play out.

However, this process is already important when choosing software in the first place, or when updating software. Even when using PoW, every new entrant to the network needs to do some kind of social discovery to figure out which software to download in the first place. And even with PoS, people that have been regularly connected to the network do not need any social discovery. With PoS, there is an additional kind of user that an attacker can force to need to do social discovery: users who were at one point part of the network but have been offline for a long period of time. My conjecture is that this set of users is quite small in comparison to either the set of new entrants or the set of nodes who have been online frequently enough to not need any social discovery.

Would you agree that if that set of users is small enough, the difference might be insignificant? Eg would increasing the number of users that have to do some kind of social discovery by 1% be acceptable?

> My gut feel is that no, it's not sufficiently decentralized

I would tend to agree that the process of finding the right software is generally not decentralized enough - too easy for people to find bad software or virus-ridden downloads. The only thing that saves us is that the vast majority of humanity isn't malicious.

> something like a PGP web of trust[1] could make this more reliable

I think there are a lot of things we could do like that. We have a long way to go towards actually making good computer security accessible to a significant fraction of people. First step is operating systems - or maybe even hardware.

> you're not proposing an automated way to bootstrap trust OR consensus here

Correct. What I'm proposing is a way to verify with confidence if the data (checkpoint) you received is very likely valid in cases where you're not being attacked, and a way to alert the user when an attack might be happening. The trust / social discovery part of things is basically out of scope - but already exists in its own haphazard individualized way.

> You did claim earlier that all PoS implementations have elapsed time between blocks, and I'm still mystified as to how you're claiming this is enforced.

Well I meant that the timestamps that blocks have are enforced. The actual time they're created can't be enforced. But to elaborate, in VPoS, every UTXO gets one chance per second to mint a block. Nodes will know well in advance whether they can mint a block (although other nodes can't know which peers will get that chance until that peer actually broadcasts a block). However, a node will reject blocks with a timestamp greater than its clock. Furthermore, a "difficulty" adjustment happens similar to bitcoin. X blocks per minute are targeted on average, and if 2X blocks/minute are minted in a given time range, the "difficulty" will increase until only X blocks/minute are minted. Basically, if a number C of coins has 2 chances in 10,000 of minting a block, then if the difficulty doubles, that number of coins has 1 chance in 10,000. This is how it's ensured that blocks are, on average, minted some target time apart.
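As a rough Python sketch of that lottery and retargeting (the hash here stands in for whatever randomness source the protocol actually uses, and all names are illustrative):

```python
import hashlib

def can_mint(utxo_id, coins, timestamp, difficulty, denom=10_000):
    """One minting chance per second per UTXO. A UTXO holding `coins`
    coins wins its per-second lottery with probability proportional to
    coins / difficulty, so doubling the difficulty halves the chances.
    The SHA-256 draw is a stand-in for the protocol's real randomness."""
    draw = int.from_bytes(
        hashlib.sha256(f"{utxo_id}:{timestamp}".encode()).digest()[:8],
        "big") % denom
    return draw < coins / difficulty

def adjust_difficulty(difficulty, observed_blocks_per_min, target_blocks_per_min):
    """Retarget like bitcoin: if blocks arrive at 2X the target rate,
    double the difficulty so the minting rate falls back toward X."""
    return difficulty * (observed_blocks_per_min / target_blocks_per_min)
```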

With a quorum based system like Casper, each quorum is allowed to mint a particular number of blocks, probably with timestamp constraints as well. And the quorums themselves must change after a specific number of blocks. I assume some similar difficulty-like adjustment is done to keep these timed properly, so that both quorums and blocks are maintained at a cadence.

Is this what you mean, or are you talking about something else?


> Ok. Well that could be a valid point. However, discussing a design is only useful when comparing it to some realistic alternative. Bitcoin is the de facto standard of cryptocurrencies. I'm sure you'd agree it's at least fair to compare to bitcoin, even if there might be other designs out there that claim to be better. I'd suggest that we both compare against bitcoin, because it seems likely that both of us understand it. Were you to bring up some other coin that you claim does it better, I think it would just hinder us coming to a mutual understanding. Once we've come to such an understanding, I'd be happy to move on to compare against something you think is better than bitcoin.

That's reasonable, but the flipside is that if we're trying to improve on PoW by moving to PoS, it doesn't make sense to compare a "state of the art" PoS to the oldest PoW that has far too much momentum to implement most of the last decade's worth of improvements. If we must choose a real world implementation, I would choose Bitcoin Cash, not because it's the best but because it's the simplest.

But I think you glossed over the more important point I made, which I even took the time to reiterate because it's very important: agreeing to an implementation is not the same as trusting an entity about the state of the blockchain. These are two very different propositions.

> I agree with you, this is a problem. But tell me, how is this problem different for bitcoin or any other cryptocurrency? A prerequisite is that the user installs the correct software. How do they know which one is correct? How does a user know which software is the correct bitcoin software? Can they tell just by looking at the source code? Can they tell just by looking at the binary? What is the trustless way of installing the correct bitcoin software?

> This is not true in the case of a 51% attack, or more realistically, a dangerous majority consensus change. For example, what if bitcoin were the worldwide currency, and most people were tired of high onchain fees and decided to increase the blocksize by 100x with some kind of soft fork. That would very likely be detrimental in the long run. However, it seems reasonably possible that this could actually happen some day. Smart people would fork off a different chain that preserves the old rules. So it would depend on what you mean by "correct". If by "correct" you mean the chain with the most economic activity - that's probably the longest chain. If by "correct" you instead mean the chain with the rules you expect, that chain may not be the longest chain - it may no longer exist at all. The only way to know is by asking people and learning what changes have happened, how many people followed what rules, and whether you agree with them. It's not always as simple as "follow the longest chain". However, I certainly agree that 99.9% of the time the longest chain is what you want.

Well, this is what I'm saying when I say that you're agreeing to a protocol, not to the state of the blockchain. Whether the changes to the protocol are valid is a philosophical question, not a mathematical one.

This is a difference we can look at with real-world examples. Let's say you have a Bitcoin client and an Ethereum client from 10 years ago, and you download the sources for a new Bitcoin client and a new Ethereum client. In reading the changes to the source code, you discover that SegWit[1] was added to Bitcoin, and a hard fork was performed against Ethereum by its own developers to reverse the DAO hack[2].

Now at this point, you can ask, "Which chain is Bitcoin?" and "Which chain is Ethereum?" If you decide to answer that question from a philosophical perspective, you might say, "SegWit greatly decreases decentralization and is a bastardization of Satoshi Nakamoto's vision," and you can reject the Bitcoin protocol changes, connect with your old client, and you'll see the Bitcoin Cash chain. And you might say, "Code is law, the DAO hack was in accordance with the law and should not be reversed," and reject the Ethereum changes, and connect with your old client, and see the Ethereum Classic chain. You can argue that Bitcoin Cash isn't Bitcoin, and you can argue that Ethereum Classic isn't Ethereum, but ultimately that's a philosophical argument, not a mathematical one. I don't think you can reasonably argue that these chains are malicious chains.

Alternatively, you could say, "I want all my money on all the chains", and simply connect with all four clients (new and old for Bitcoin and Ethereum) and see that all connect to long chains with thousands of new and unique transactions on each chain. But critically, at this point, you'd see that the SegWit changes to Bitcoin have billions more hashes behind them than Bitcoin Cash, and the Ethereum hard fork to reverse the DAO hack also has billions more hashes behind it than Ethereum Classic. Just based on that, you can tell the majority opinion about which is the valid protocol. You might disagree with the majority, but ultimately, I don't think it's actually true that the chains don't tell you which chain is Bitcoin or which chain is Ethereum, in a "social adoption" sense. On the contrary, the chains give you a very good idea of how much adoption each chain has received.

And critically, when the hard fork is philosophical rather than mathematical, you can't be tricked into accepting a double spend. A spend on Bitcoin Cash and a spend on Bitcoin-with-SegWit aren't double spends just because the coins were acquired before the hard fork--they're both valid spends on valid chains, with value in both.

In a PoW system, a truly "malicious protocol" would be one that is presented as if it has mass adoption, but doesn't have mass adoption. Whether that's detectable is dependent on what the changes are, but it is possible to construct a protocol change which would accept blocks without large scale miner adoption. An example of this is merge mining[3], a "feature" of DogeCoin where they use work on the LiteCoin chain to validate blocks on the DogeCoin chain, which was added to address the lack of DogeCoin miners in 2014. This is a philosophical change with mathematical implications, and you'd be able to see those mathematical implications from the source code. This is one of the (oh so many) reasons DogeCoin is a terrible protocol.

> > Checkpoints mean something entirely different: that means that I'm trusting the provider of the checkpoint about the state of the blockchain.

> Given all that I said about how checkpoints can be verified against numerous connections, I would have hoped you'd at least have instead said "that means I'm trusting (many but a finite number of) providerS of that checkpoint about the state of the blockchain".

I'm going to have to disagree with you here: the entire vulnerability here is based on the unreliability of the network. You don't know how many providers you're connecting to. This is why it's a problem that your proposal requires diverse connections to the network.

PoW solutions only require that you be connected to one valid node--the valid chain you receive from that node will be longer than the malicious chains you receive even if you're connected to thousands of malicious nodes.
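That one-honest-node property falls out of cumulative-work chain selection. A toy sketch (field names are illustrative, not any real client's data model):

```python
def select_chain(chains):
    """PoW chain selection: follow the chain with the most cumulative
    work. As long as one connected node serves the honest chain, it wins
    this comparison no matter how many sybil nodes serve forged chains,
    because forging more total work than the honest network is the one
    thing a sybil can't fake.

    Each chain is a list of blocks; block["work"] is the work proven by
    that block's hash (hypothetical field name).
    """
    return max(chains, key=lambda chain: sum(b["work"] for b in chain))
```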

As a random aside: if you're willing to rely on network consensus, then the hardest problem isn't proving consensus, it's achieving consensus in the first place. Checkpoints are a very slow way to do this, and indeed relaxing the requirement for on-chain provability allows us to achieve consensus a lot faster. I don't know if your solution has formalized a method by which nodes should collect these checkpoints, but the state of the art in network consensus might be Avalanche Consensus[4], which allows <5 second finality with millions of nodes, and is tunable to allow consensus even when >50% of nodes are malicious (arbitrary security levels can be tuned, but consensus is slower if you tune it to prevent attacks where, say, 90% of nodes are malicious). I'll admit that my knowledge of these kinds of protocols is a bit weak so there may be better options, but that seems pretty good to me. But ultimately this still requires diverse connections to the network, which is a pretty large weakness compared to PoW protocols.

> And a PoS currency that uses checkpoints can also detect a sybil attack if they can download a checkpoint from at least one honest node. It's literally the exact same mechanism.

It can't be the same mechanism (longest chain), because you haven't proposed a way to prove elapsed time between blocks besides network consensus.

[1] https://www.investopedia.com/terms/s/segwit-segregated-witne...

[2] https://www.gemini.com/cryptopedia/the-dao-hack-makerdao#sec...

[3] https://www.coindesk.com/dogecoin-allow-litecoin-merge-minin...

[4] https://www.avalabs.org/whitepapers


> If we must choose a real world implementation, I would choose Bitcoin Cash, not because it's the best but because it's the simplest.

Unfortunately I don't know enough about the differences between bitcoin cash and bitcoin to be helpful there. I didn't think there was any substantial difference with respect to the consensus mechanism. Is there a big difference?

> agreeing to an implementation is not the same as trusting an entity about the state of the blockchain

I believe I did address that. But we're also addressing this in a separate comment as well. The way you state that I certainly agree - agreeing amongst many peers (to anything) is not the same as trusting a single entity (about anything). However, I think I made it clear that I'm not suggesting trusting a single centralized entity. Must I repeat the stuff about verifying the checkpoint with many peers and having a codebase reviewed by many reviewers?

> you're agreeing to a protocol, not to the state of the blockchain

Agreeing to a protocol is the same as agreeing to the state of the blockchain. A protocol will choose one chain to follow. It does this with various types of rules. If you change the rules, you might change the chain. A checkpoint is just yet another rule that becomes part of the protocol. This is why if you download a malicious piece of cryptocurrency software, it can make you follow whatever chain it wants you to follow. A checkpoint is far more constrained than a software update. It can't change most of the rules, just one: which chain to follow if only some chains contain the indicated block.
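To illustrate how small that one rule is, here's a sketch of chain selection with a hardcoded checkpoint bolted on (same illustrative block fields as before; `"work"` could just as well be a PoS weight):

```python
def select_chain_with_checkpoint(chains, checkpoint_hash):
    """A checkpoint adds exactly one rule on top of ordinary selection:
    discard any candidate chain that doesn't contain the hardcoded
    checkpoint block, then pick the heaviest of what remains."""
    eligible = [chain for chain in chains
                if any(b["hash"] == checkpoint_hash for b in chain)]
    if not eligible:
        # No chain contains the checkpoint: possible attack, alert the user.
        raise RuntimeError("no chain contains the checkpoint")
    return max(eligible, key=lambda chain: sum(b["work"] for b in chain))
```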

> connect with your old client, and you'll see the Bitcoin Cash chain

Given that SegWit was a soft fork, you'd still see the Bitcoin chain, not Bitcoin Cash. Not sure about Ethereum Classic.

> I don't think you can reasonably argue that these chains are malicious chains.

I agree. But the user still needs a way to answer that question. Most users will choose based on who they want to interact with. Only users that have really deep knowledge will choose based on the code itself. This seems to be what you're saying here:

> I don't think it's actually true that the chains don't tell you which chain is Bitcoin or which chain is Ethereum, in a "social adoption" sense. On the contrary, the chains give you a very good idea of how much adoption each chain has received.

I believe you're right. It sounds like your point is that the user will usually want to follow the heaviest chain, and so my point about dangerous soft forks is moot. I'll concede that's a reasonable point.

> the entire vulnerability here is based on the unreliable-ness of the network. You don't know how many providers you're connecting to. This is why it's a problem that your proposal requires diverse connections to the network.

But this is always true in any decentralized network. That's the whole issue around sybil attacks - you can't prove that two identities are actually distinct. All decentralized networks require diverse connections to the network.

> PoW solutions only require that you be connected to one valid node

Yes, I agree. But this comes back to the original software download. If 7 sources have malicious software, and 1 source has the honest software, how do you know which one is honest? It's the same problem. The only difference is that in proof of work, as long as there are no required software updates, you can come back online after an arbitrary period of time and find the honest chain, whereas in PoS there is a horizon after which you need to download new data (the checkpoint). Do you agree that's the salient difference - how long you can go away before you'll have to download and verify updated data to identify the correct chain?

> the hardest problem isn't proving consensus, it's achieving consensus in the first place. Checkpoints are a very slow way to do this, and indeed relaxing the requirement for on-chain provability allows us to achieve consensus a lot faster.

I don't understand what you're referring to about "relaxing the requirement for on-chain provability". Could you elaborate?

> <5 second finality with millions of nodes

Avalanche sounds a bit like Nano.

> allow consensus even when >50% of nodes are malicious

This sounds dubious. I remember Charlie Lee once said "If a blockchain can't be 51% attacked, it's centralized and permissioned".



