
> The granddaddy of all consensus mechanisms—behind Bitcoin, Litecoin, Monero, and (for the time being at least) Ethereum—is called proof of work. Essentially, PoW makes adding transactions to the blockchain computationally—and therefore financially—very expensive, so as to discourage fraudulent activity. ...

This passage reflects a deep-seated, widespread misunderstanding of how/why Bitcoin works. I suspect the author knows, but the brevity of the article prevents actually explaining the issue. This is a recurring problem in this space and has opened up a large fraud opportunity as every bottle of snake oil looks the same to those who don't do the deep technical dive.

Proof-of-work has nothing to do with ensuring the validity of transactions. Transactions contain all of the information needed to detect invalid transactions. Hashes, signatures, scripts, and amounts can all be checked and will be rejected if invalid. The validity of every transaction can be tested without proof-of-work. Systems like this had been developed prior to Bitcoin.

So it's possible to eliminate every kind of fraud through cryptography alone, except one.

The problem proof-of-work solves is double spending, a term you won't find in the article. A malicious actor can try to spend the same coin twice (or more). Both versions of the double-spend transaction are valid, but a node can only accept one. Which one does it accept? It accepts the one in the chain with the most cumulative proof-of-work (the "strongest chain"). In the event of a tie, the node accepts the first version it saw. The strongest proof-of-work chain can therefore change from one moment to the next as blocks are received.
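That selection rule can be sketched in a few lines. This is a toy model, not any client's actual implementation; the Chain type and its field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Chain:
    tip: str          # hypothetical identifier for the chain tip
    total_work: int   # cumulative proof-of-work across all blocks
    first_seen: int   # arrival order: lower means seen earlier

def select_chain(chains):
    """Pick the chain a node accepts: most cumulative work,
    ties broken by whichever version was seen first."""
    return max(chains, key=lambda c: (c.total_work, -c.first_seen))

# Two conflicting chains carrying the two halves of a double spend:
a = Chain(tip="a", total_work=100, first_seen=0)
b = Chain(tip="b", total_work=100, first_seen=1)
print(select_chain([a, b]).tip)  # "a": tie on work, "a" arrived first
```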

Proof-of-work imposes a non-recoverable cost on publishing a block of transactions. That cost and its finality discourage double spending by making the publication of double-spend transactions permanently expensive. An attacker needs to out-spend the entire rest of the network to succeed, and loses everything spent if the attack fails. The cost of a failed attack can never be recovered.
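To put rough numbers on that, the Bitcoin whitepaper (section 11) gives the probability that an attacker with hashpower share q ever overtakes an honest chain that is z blocks ahead as (q/p)^z, with p = 1 - q. A quick sketch:

```python
def catch_up_probability(q: float, z: int) -> float:
    """Probability that an attacker with hashpower share q ever overtakes
    an honest chain z blocks ahead (Bitcoin whitepaper, section 11)."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker always catches up eventually
    return (q / p) ** z

# A 30%-hashpower attacker trying to reverse 6 confirmations:
print(round(catch_up_probability(0.30, 6), 4))  # about 0.0062, i.e. under 1%
```

Each extra confirmation multiplies the attacker's odds down by q/p, which is why merchants wait for confirmations before treating a payment as final.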

So any scheme to replace proof-of-work needs to maintain that element of irrecoverable loss of value for a failed double spending attack. It hasn't been an easy nut to crack. A lot of projects claimed to have done it only to discover some flaw in the game mechanics or technology.




To include the whole passage you are talking about:

> The granddaddy of all consensus mechanisms—behind Bitcoin, Litecoin, Monero, and (for the time being at least) Ethereum—is called proof of work. Essentially, PoW makes adding transactions to the blockchain computationally—and therefore financially—very expensive, so as to discourage fraudulent activity. At the same time, users who go to the trouble of creating valid blocks, known as mining, are rewarded with cryptocurrency.

I would say this passage gets the facts totally correct. The author doesn't say that Proof-of-work has to do with the validity of transactions, they say it has to do with the validity of blocks.

Indeed, it can't be easy to make valid blocks, or double spending would become a problem, as you say. Not sure why you think this reflects a misunderstanding of the inherent costs of consensus systems.


Yes and no. Yes, proof of work prevents double spending, or more generally establishes consensus on which transactions become accepted. But no, this could be achieved with many different consensus protocols and does not in general require any irrecoverable loss. Just issue a key pair to each human and have them send signed votes to establish consensus by simple majority.

The true reason something like proof of work is required is that the system is also anonymous and distributed, with anonymity being the more critical aspect. Because of that, you cannot know who is out there and able to cast a vote. Are these votes from a million legitimate users, or did one user just send a million votes? The solution to prevent users from casting arbitrarily many votes is to make each vote hard to cast, for example by making them waste a lot of clock cycles per vote.

But, as I said, this is not an inherent requirement; it arises because you want to establish consensus among members of an anonymous group of unknown size, and wasting a certain number of clock cycles per vote is a rough proxy for "everyone gets one vote," assuming everyone has access to very roughly the same amount of computing power.
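The "waste clock cycles per vote" idea is just a hash puzzle. A toy sketch (SHA-256 with a demo-scale difficulty, not any real protocol's parameters):

```python
import hashlib

def vote_with_work(message: bytes, difficulty_bits: int) -> int:
    """Attach a proof-of-work 'stamp' to a vote: grind nonces until the
    SHA-256 digest falls below a target with `difficulty_bits` leading
    zero bits. Each vote now costs clock cycles, so the number of votes
    a participant can cast roughly tracks their computing power."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def check_vote(message: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verifying a vote takes one hash, no matter how hard it was to cast."""
    digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = vote_with_work(b"vote: chain A", 12)    # cheap demo difficulty
print(check_vote(b"vote: chain A", nonce, 12))  # True
```

The asymmetry is the point: casting costs many hashes on average, checking costs one, and there is no shortcut that lets one user cast a million votes cheaply.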


For context, this is usually known as a 'Sybil attack'.

[1] https://en.wikipedia.org/wiki/Sybil_attack


This is a common mistake w/r/t Proof of Work. It doesn't prevent Sybil attacks. The Bitcoin whitepaper performs a sleight of hand in saying "instead of identities, we'll use CPUs" - but CPUs are exactly the kind of cheaply multiplied resource that Sybil attacks exploit in the first place. One person can own lots of CPUs... (Thus mining operations.)

What it does do is make Sybil attackers larger and larger shareholders in the network, so that the more you Sybil the network, the less you'd want to damage it.


If you have a central entity that can generate key pairs, then they may as well be running a database - it doesn't achieve decentralisation at all. It also has nothing to do with anonymity: anything based on identity is broken, because there is an entity that can fake those identities. You can't verify ID with code.


I don't get it.

Suppose a bad actor tries to double spend. They have to cryptographically sign both transactions, so the whole network will eventually receive knowledge of both transactions.

Why can't the network simply wait for some duration of time before accepting a transaction, to verify that no double-spends propagate from elsewhere?

If double spending is ever detected from a party, that party is dis-trusted by the network. The network will de-prioritize processing transactions from dis-trusted parties. Problem solved?


There are two solutions you're proposing, each of which has problems:

1. You're introducing a requirement that all parties who wish to participate in the network must have diverse connections to the network. In practice many nodes do not meet this requirement. If a new node connects to only a few nodes, those nodes may return conflicting chains. With PoW, a longer chain has significantly more computational effort behind it even if it's only ahead by one or two blocks, so it's easy to prove that the longer chain is the valid one. On PoS, there's no such limitation, and a time limitation doesn't really address the issue (see On Stake and Consensus[1] for a more detailed explanation). Vitalik Buterin calls this "weak subjectivity"[2].

2. Introducing a concept of "dis-trusting". The implementation of this in practice is "slashing", but it doesn't work: I can participate as a legitimate actor in the network, transfer my coins to another address, and then use my original, "trusted" address to validate a chain of blocks which include spends not included in the consensus chain. A new entrant to the network, having no knowledge of the consensus chain, might receive the consensus chain and my malicious chain, and have no way of ascertaining which chain is valid: the consensus chain shows me unstaking and spending my coins, but my chain shows me continuing to stake and holding on to my coins.

[1] https://download.wpsoftware.net/bitcoin/pos.pdf

[2] https://vitalik.ca/general/2020/11/06/pos2020.html
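The PoW half of point 1 is worth spelling out: cumulative work is objectively checkable from the headers alone, with no trust in whoever handed you the chain. A sketch under simplifying assumptions (single SHA-256 over an opaque header and hypothetical helper names; real clients hash a structured header with double SHA-256):

```python
import hashlib

def block_work(header: bytes, target: int) -> int:
    """Work proven by one block header: its hash must meet the target, and
    the target itself tells you the expected number of hashes tried.
    Returns 0 if the header doesn't actually meet its claimed target."""
    h = int.from_bytes(hashlib.sha256(header).digest(), "big")
    if h > target:
        return 0                       # claimed work not actually proven
    return (1 << 256) // (target + 1)  # expected hashes per valid header

def chain_work(headers_and_targets) -> int:
    """Cumulative work of a chain, summed from (header, target) pairs.
    A new node can compute this for two conflicting chains and prefer
    the heavier one without trusting either peer."""
    return sum(block_work(h, t) for h, t in headers_and_targets)
```

A proof-of-stake chain has no analogous quantity a fresh node can recompute from the data itself, which is what the "weak subjectivity" discussion is about.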


> On PoS, there's no such limitation

There is no significant computational effort needed, but it is incorrect to say that PoS protocols do not limit the number of blocks that can be created. Most PoS systems do in fact have such a limitation.


Not one that can be verified by looking at the chain.

I encourage you to read the first paper I linked, instead of continuing to respond to parts of my posts out of context.


> transfer my coins to another address, and then use my original, "trusted" address to validate a chain of blocks

Any new entrant would be able to clearly see the transaction on one of the chains and from that can determine that the chain not containing that transaction but instead using that output to mint blocks is the cheating block chain. Your attack wouldn't work.


An attacker only needs to fool the network for a short amount of time -- e.g. to double-spend someone. It doesn't matter to the attacker that they are eventually discovered; what matters is that they can get away with it before getting caught.


Again, that's simply not true. Let's go through the hypothetical attack:

1. The attacker accumulates some coin, then spends it / sells it.

2. The attacker starts building a chain using their old keys.

3. The attacker presents this chain to a new entrant that they've eclipsed.

4. The attacker buys something from the eclipsed victim.

Any self-respecting PoS protocol would have long cooldowns that would force step 1 and step 2 to be pretty far apart (on the order of months). In step 3, a fake chain would not fool new entrants, because the new chain would be growing at a much slower rate than expected (because only the attacker is building on top of it). So there are two reasons the attack wouldn't work here.


1. The fact that 1 and 2 are far apart is irrelevant to nodes that were absent for the two steps. It's true that this requires a long absence, but given many people disconnect cold storage wallets for years, we can't rely on this for security. Indeed, the time when you most need to be able to distinguish between real and fake chains (because your cold storage wallets are likely to be your most valuable) is when your proposed solution doesn't work.

2. That's simply incorrect, and I'm quite unsure why you think that. Proof of Work is the thing that slows down the creation of blockchains: a PoS attacker who creates a blockchain can emit blocks at a very high rate. The fact that they are the only validator doesn't slow them in any way.


> many people disconnect cold storage wallets for years

Yes, and when they come back, they should update their software - which should have a hardcoded checkpoint in it from much more recently. That's the solution that you're repeatedly ignoring.

> a PoS attacker who creates a blockchain can emit blocks at a very high rate

You're just wrong. Show me a single active PoS protocol that works this way. You won't be able to because that's not how it works. No one else would accept most of these blocks. Why are you so sure about yourself when you clearly have enormous holes in your understanding of how PoS protocols work in general?


> Yes, and when they come back, they should update their software - which should have a hardcoded checkpoint in it from much more recently. That's the solution that you're repeatedly ignoring.

Setting aside for a moment the fact that users cannot be relied upon to update software, do you understand how this isn't a decentralized, trustless solution? If you're willing to download a checkpoint from a trusted, centralized entity, why even bother with decentralization? Why not just have the US Government sign blocks?

> You're just wrong. Show me a single active PoS protocol that works this way. You won't be able to because that's not how it works. No one else would accept most of these blocks.

Well, let's start with Cardano, Polkadot, or Tezos. Yes, all of these chains enforce a delay between blocks: the way they enforce that is that no one accepts blocks before they're supposed to, i.e. consensus.

...which is completely useless when you're entering the network and receive two conflicting blockchains, because you have no way of verifying when any of the blocks were validated. Consensus doesn't help you, because you don't know what the consensus is.

Show me a single mechanism in existence that can allow me to look at two blockchains and reject one based on the fact that the blocks were mined too quickly. Hint: it exists, but you're not going to like what it is!


> users cannot be relied upon to update software

Their software can be relied on to tell them when it's too out of date to be used.

> this isn't a decentralized, trustless solution

It is in fact decentralized. Nothing is trustless, but it is in fact just as trust-minimized as bitcoin is.

> If you're willing to download a checkpoint from a trusted, centralized entity, why even bother with decentralization? Why not just have the US Government sign blocks?

So many false assumptions in there. You know what they say about making assumptions don't you?

What really bothers me is that you just assert that what I'm talking about is broken without even attempting to understand what I'm talking about. You don't seem to care about actually having a conversation, but instead just want to win. I don't appreciate that. If you want to keep talking about this, I'd like you to ask me some questions that clarify your understanding of what I'm talking about and show me you're actually trying to understand me instead of just trying to take the most attackable misunderstanding of my words and attack that. I'm getting very frustrated at you because of your willful misunderstanding, and unless you change your attitude, I'm just going to start ignoring you.

So go ahead, you want to talk about this? Ask me how I'm proposing these checkpoints work. Ask me how they're decentralized. Ask me how they're trust minimized. Instead of asserting things that aren't true based on false assumptions. I'll give you one more chance.


Okay, how do the checkpoints work, and how are they decentralized?


Checkpoints would be hardcoded into the software (not downloaded dynamically from some centralized source). As such, the checkpoint would simply be a peer reviewed change to an open source codebase, just like anything else.

How is it decentralized? Anyone can create an independent codebase that implements the protocol. Anyone can review the code and raise the alarm to the community if the checkpoint isn't correct. Ideally there would be a number of independent implementations of the software, each with many devs and numerous reviewers. In addition to that, the previous version of the software could be used to automatically validate that the new software contains a checkpoint that matches the longest chain as seen by that software - and again, it can raise an alarm to the user if it doesn't match, who can alert the community.

Any change in bitcoin works like this. Changes are discussed, implemented, and reviewed by hundreds or thousands of people. Users need to find the correct software in some way. Generally they would use the internet to find the right software, hopefully cross checking multiple sources. They might ask their friends what software to run. Etc. But there is no math to find the right software - each person has to use their social network (and the internet) to figure out which software is "Bitcoin". Once they download the software, it can do the rest.

The same is true for a piece of software with a hardcoded checkpoint. There is no central source for the checkpoint. Everyone who is currently part of the network can validate that the checkpoint is correct. Many people will actually do it. It would be so easy to validate that it could be automatically reviewed by people's software (unlike most other codebase changes).
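A minimal sketch of that automatic review (the field names and checkpoint values below are placeholders, not any real client's format):

```python
# Hypothetical shape of the hardcoded-checkpoint check described above:
# the node compares a new release's checkpoint against the block it has
# already validated at that height, and raises an alarm on any mismatch.
HARDCODED_CHECKPOINT = {"height": 840_000, "block_hash": "abc123"}  # placeholders

def validate_checkpoint(local_chain: dict, checkpoint: dict) -> bool:
    """local_chain maps height -> block hash as already validated by the
    running node; the release's checkpoint must match what the node saw."""
    seen = local_chain.get(checkpoint["height"])
    return seen is not None and seen == checkpoint["block_hash"]

def review_new_release(local_chain: dict, checkpoint: dict) -> str:
    if validate_checkpoint(local_chain, checkpoint):
        return "checkpoint matches locally validated chain"
    return "ALARM: release checkpoint does not match the chain this node validated"
```

The check is a single dictionary lookup against data the node already holds, which is why it can run automatically on every update rather than relying on manual review.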

So in what way would a PoS checkpoint be different than bitcoin? As I've shown above, the checkpoint itself is just another piece of the code, like any other code change. The difference is that the checkpoint would need to be refreshed at some regular frequency (say once a year). By contrast, you could imagine a future where the core Bitcoin software has been frozen and has not changed for years - decades maybe. And one could expect to go offline for 10 years and still be able to bring up their software without updating it.

A PoS system would be slightly different. Because of the issues around short-range and long-range revisions, people who have been offline long enough should at very least download a new checkpoint (even if the rest of the software remains the same). The new checkpoint a user downloads should be recent enough that the chances of a short-range revision attack (eg a history attack) with sufficient accumulated minting power (or whatever your preferred alternative term is for "accumulated difficulty") is sufficiently unlikely. One new checkpoint per year would ensure that a user downloading the latest checkpoint is downloading a checkpoint no farther in the past than 1 year. This would require that a sufficient number of devs and reviewers get together to review and validate the released checkpoint to make the release sufficiently decentralized. This could realistically be structured so software checks this automatically and raises an alarm to the user if the new checkpoint doesn't match or if too much time goes by without a new checkpoint being released. Millions of people could realistically participate in that - anyone that runs a full node. Even anyone that runs a light node.

Furthermore, when users do download a checkpoint, some of them are going to be careless and download a malicious one. If they do, but they have honest software, the software can ask their connections whether the new checkpoint matches up with theirs. If it doesn't, it can again raise an alarm to the user.

For a person with an old version of the software to get a malicious checkpoint without an automatic alarm being raised, one of two things must happen. Either they have a virus that changed the code of the software (at which point any software, PoS or PoW, is vulnerable), or they are eclipsed by the attacker (connected to only attacker nodes), and the attacker also has a way to sign the checkpoint release with the authors' signatures (which the software should also automatically check), and the attacker has accumulated enough minting power (eg in old keys bought from people who have already drained those addresses) since the software last left the chain. The attacker can't simply create a brand new chain with a different genesis block - the software would raise an alarm about that.

For a new entrant to the system, there is a higher risk, but it is no higher than with Bitcoin. The new entrant must find and install the right software somehow, on a machine that isn't compromised. This wouldn't be different for PoS.

In summary:

A. Users who have been connected to the network all along can't be tricked by any kind of history attack.

B. Users who are newly connecting to the network simply need to download the correct software, as they need to do with bitcoin.

C. Users who have been connected to the network for a time, but left for a period of time, just need to (manually) download and (automatically) verify a checkpoint.

Item C is the situation that differs most from current Bitcoin. There is some additional possibility for an attack there, but it would still be extraordinarily difficult to pull off. In Bitcoin, there is no need to download any new data, and so there is no attack vector equivalent to tricking the user into accepting an invalid checkpoint. However, this attack would be very difficult (eclipse, key theft), would cost a lot (buying old addresses with no coins currently in them), and would have a pretty limited reward potential (only the possibility of attacking returning and new entrants they can eclipse). So yes, it is a trade-off. I think it's a good trade-off to buy higher security against a 51% attack and lower fees.

Does this make it clearer how checkpoints can be decentralized?


Since it seems you prefer questions to statements, I'll ask a question, Socratic-method style, but it requires some explanation:

Let's say I download two copies of the updated source code of your software, one from an honest mirror, and one from a malicious mirror.

The honest source code has a change in the author signature, because the original developer is no longer involved in the creation of the software. The malicious source code has a change in the author signature, for obvious reasons. (Real life example: Satoshi Nakamoto hasn't signed a Bitcoin release in years).

The honest source code contains a change in the initial nodes you connect to, because a DDOS a year ago caused the initial nodes to become a point of failure. The malicious source code contains a change to the initial nodes you connect to, which adds nodes that the attacker controls. (Real life: https://fintechs.fi/2021/07/06/bitcoin-org-hit-with-massive-...).

The honest chain has a large-scale validator drop caused by an outage of AWS US East 1. The malicious chain has a large-scale validator drop at the same time caused by the malicious validator failing to include re-staking transactions, resulting in the malicious validator controlling 51% of the coins on the malicious chain, after which it's easy for the malicious attacker to create transactions that control 100% of the staking on their fake chain in a way that looks like normal traffic on chain. (Real life: https://www.datacenterdynamics.com/en/news/aws-us-east-1-reg...).

At the point of the large scale validator drop, there are a lot of missed blocks on the honest chain, so traffic eventually falls back to a different validator to allow the blockchain to progress. At the same point, there are a lot of missed blocks on the malicious chain because the attacker didn't control the validator chosen by the provable random function, but traffic eventually falls back to a different validator which the attacker controls. These validators don't include transactions that add staking power to addresses the attacker doesn't control.

The honest chain has blocks validated every 20 seconds (this number pulled from Cardano), which were validated at that rate because honest nodes wouldn't accept a block earlier than the allotted time. The malicious chain has blocks that were all created in a span of 20 minutes and signed by staking addresses the attacker controls.

The attacker controls your internet connection to the point that about half the time, if you poll the network, you'll receive answers from the attacker (Real life: China).

Given this situation, how does your system tell which chain is the honest one, and which is the malicious one?

Keep in mind that Proof of Work handles this situation trivially: the malicious chain is shorter--a lot shorter if your node has been disconnected for some time.


That's a pretty clear description of an attack, thanks.

> The honest source code has a change in the author signature

I assume we're talking about the scenario where a user has already-installed honest software that has validated the chain, but has been offline for a while?

If we were talking about a new entrant, just the fact that users' internet is often controlled by an attacker 50% of the time would probably be enough to trick users into downloading malicious software. If 50% of connections are hijacked, most users would probably not check signatures, and so ~50% of them would get malicious software; the users that do check signatures would get honest software 25% of the time, malicious software 25% of the time, and a sig mismatch 50% of the time. There are bad things that can happen there for any software where security is important. So let's stick to the scenario where the user already has honest software.

First, I want to comment on the scenario, and then I'll outline a procedure that allows the user to determine which is the honest chain.

> The honest chain has a large-scale validator drop caused by an outage of AWS US East 1

The outage you mentioned lasted less than 2 hours. But I think we can consider an outage that lasts, say, 1 week. Kind of an absurd amount of time for an outage that hits such a huge number of people, but even a 1-month outage would not give an attacker an opportunity here. And how many validators would drop out in this scenario? 20%? 40%? Any significant percentage seems highly unlikely, but let's say it is a 40% drop for 1 week.

> The malicious chain has a large-scale validator drop at the same time caused by the malicious validator failing to include re-staking transactions

VPoS doesn't do staking, but the equivalent here is that the malicious chain would simply have blocks submitted at longer intervals until the "difficulty" re-adjusts, which would equivalently indicate fewer validators.

> resulting in the malicious validator controlling 51% of the coins on the malicious chain

The malicious validator can mint in secret and always control 100% of the coins actively minting on the chain, no? This still doesn't help tho, because the honest chain can be seen to have more active validators than the malicious chain.

In a quorum-based system like Casper, where the quorum chooses new randomness that determines the next quorum, it could be possible for an attacker to capture the quorum if they currently make up a large minority of the quorum and 40% of the rest of the quorum drops out. They'd have to make up at least 30% of the quorum, so that when the dropped validators stop responding, the honest quorum only has 30% left (matching the attacker's 30%). An attacker could 51% attack the honest chain in this scenario - no need for a separate malicious chain.

This is the same in bitcoin - if 40% of the hashpower went offline, an attacker with only 30% of the hashpower would turn into an attacker with 50% of the hashpower. VPoS isn't quorum-based, but it would have the same problem if 40% of minters lost access to their coins for a period of time. Am I misunderstanding your scenario here? Seems like the move would be to 51% attack the honest network rather than try to attack a smaller set of nodes that are probably lower value.

> there are a lot of missed blocks on the honest chain, so traffic eventually falls back to a different validator to allow the blockchain to progress

I'm not sure about other PoS protocols, but in VPoS, there is no "fall back". The block progression simply slows and "difficulty" readjusts over time. The set of validators and how they're chosen wouldn't change.

But let me suggest a different attack scenario: let's say the attacker finds, creates, or buys old keys that collectively contained as many coins as are currently minting honestly (a history attack). No need for an Amazon outage. The attacker simply creates a chain from the point where those addresses collectively had as much minting power as the honest chain. After a time, they would capture all the randomness, and could put even more coins to work minting (with the use of stake grinding), which could look like a heavier chain (with more validators) than the honest chain.

> The honest chain has blocks validated every 20 seconds (this number pulled from Cardano), which were validated at that rate because honest nodes wouldn't accept a block earlier than the allotted time. The malicious chain has blocks that were all created in a span 20 minutes and signed by staking addresses controlled.

It sounds like you mean that the honest chain has one block every 20 seconds, whereas the malicious chain has a potentially unlimited number of blocks per second. Is that what you're saying?

I think in every PoS protocol there is some verifiable time limitation. Yes, an attacker could create an alternate chain starting from 5 years ago in arbitrarily little time. However, they could not create more blocks in their fake 5 years of time than the honest chain did in 5 real years. And nodes obviously won't accept blocks with timestamps significantly in the future. Protocols make adjustments when blocks have timestamps that are too close together, just like bitcoin. Any attacker that creates a chain with timestamps too close together will reduce their ability to create blocks proportionately.

But maybe you could clarify what you mean here.
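For concreteness, the timestamp sanity rules alluded to above might look roughly like this (Bitcoin-style numbers: median of the last 11 block times, 2-hour future drift; a sketch, not any specific protocol's actual rule):

```python
import statistics

MAX_FUTURE_DRIFT = 2 * 60 * 60  # e.g. reject blocks more than 2h in the future

def timestamp_ok(block_time: int, prev_times: list, now: int) -> bool:
    """Sketch of timestamp sanity rules: a block's time must exceed the
    median of recent block times (Bitcoin uses the last 11 blocks) and must
    not sit significantly in the future relative to the node's own clock."""
    median_past = statistics.median(prev_times[-11:])
    return median_past < block_time <= now + MAX_FUTURE_DRIFT
```

Under a rule like this, an attacker fabricating a chain quickly must either use honest-looking timestamps (and eat the difficulty adjustment) or stamp blocks in the future (and have them rejected outright).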

In any case, to answer your primary question, this is a process that could happen:

1. User downloads honest software in 2021 and runs it continuously and/or regularly.

2. The user shuts down their software in 2022 and gets hit by a car and goes into a coma or something.

3. The user wakes up in 2026 and of course the first thing they do is start up their computer along with the currency software.

4. The software tells the user it's been disconnected from the chain for too long and needs a new checkpoint.

5. The user goes to the website they're used to and downloads a new checkpoint and a signature for it (or ideally a battery of signatures).

6. The user uploads the checkpoint and signature(s) into the software. The software checks the signatures against the checkpoint and against its list of trusted public keys. Let's say none match.

7. So the user scours the internet and finds many (honest) articles that talk about how the dev group had a big change up and all the signatures are expected to be created with different keys now.

8. So the user goes and finds some new public keys to validate against. Chances are they go to a search engine and search a few places for keys. There's a 50% chance that they land on a malicious page, and they'll probably keep using that same page for subsequent searches. So there's a 50% chance they get public keys from the attacker. If they're ultra careful, they could start a new web page (and connection) for each search and so only have a 50% chance of getting a malicious public key for each key - but let's just say they don't do that, and so 50% of the users just get malicious keys.

9. The user puts in these keys and 50% of the time they match. In the case they don't match, an alarm is raised and they're alerted to the fact that they're possibly being scammed/attacked. 25% of users get malicious keys that matched the checkpoint data.

10. The software then connects to the network. While normally the software might use 8 connections like bitcoin does (tho double that is probably warranted), just for this case of validating the checkpoint, many more connections can be used. 100 wouldn't be very burdensome on the network, but would make it incredibly unlikely that a user would be eclipsed. Again 50% of these connections would be redirected to malicious nodes. Let's also say the attacker has a 50% Sybil in the network, so that even connections that aren't redirected by the attacker may still end up connecting to an attacker. So this is a 75% chance of connecting to an attacker. This results in a .75^100 chance that all their connections are to an attacker. If every person in the world tried to reconnect an old node during the attack window, there would be a probability of less than 0.3% that even a single person gets eclipsed.

11. The software asks these connections what their checkpoint is. If any don't match, an alarm is raised and the user is told they may be being attacked and told to verify out of band what the checkpoint is.
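The arithmetic in step 10 can be checked directly (assuming independent connection choices and world population rounded to 8 billion):

```python
def eclipse_probability(p_malicious: float, n_connections: int) -> float:
    """Chance that every one of n independent connections lands on an
    attacker-controlled node, each with probability p_malicious."""
    return p_malicious ** n_connections

# Step 10's numbers: 75% chance per connection, 100 connections.
p_one_node = eclipse_probability(0.75, 100)

# If every person in the world (~8 billion) tried to reconnect an old
# node during the attack window, the chance that even one is eclipsed:
world = 8_000_000_000
p_anyone = 1 - (1 - p_one_node) ** world

print(p_one_node)  # ~3.2e-13, i.e. roughly 3 in 10 trillion
print(p_anyone)    # ~0.0026, i.e. under 0.3%
```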

Continued...


...

So, all told, any given user trying to reconnect after a long time during the attack window has a 3-in-10-trillion chance of being successfully duped without an alarm being raised. And those aren't the only fractions at play. The number of users trying to reconnect after a long time during the attack window is probably pretty tiny as well.

In any case, the one item up there that opens up a further attack opportunity is item 11. An attacker could create malicious public nodes that act like honest nodes until they're asked what their snapshot is by a new connection. When asked, they give some nonsense snapshot hash. A small number of public nodes could cause a bit of chaos. But that chaos could only affect applicable reconnecting nodes that would need to go through the above process. So what nodes are those exactly?

Well, the attack is only cheaper than a normal 51% attack when the attacker can obtain old addresses that used to contain coins more cheaply than they can obtain actual coins. In VPoS, the randomness that decides who can mint is hidden for a period of time and is afterward active for a period of time. If the period of time that the randomness is hidden for is longer than the interval at which new checkpoints are released, this avenue is closed off. A 1-year timespan seems reasonable for both.

So because that possibility can be closed off, the only remaining history attack is a longer-range history attack from before the checkpoint - which requires tricking users into accepting a malicious checkpoint. So the attacker might attempt to obtain addresses that contained coins a year ago (but no longer do). However, the only nodes that could even potentially fall for this trick are ones that have been offline for over a year. How many nodes go offline for that long? Probably almost none. But whatever that number is: that's the number that must go through the above process, and that's the number that could be griefed by a malicious actor that releases fake data in step 11 above.

These steps do hinge on "raising an alarm" being sufficient to prompt people to do some deeper digging as to what chain is the right one. This could be as easy as calling up some trusted friends and asking them to read out a hash to you from their software. It could be asking the merchants you deal with most often, or your employer. I'd argue that similar steps to the above would be incredibly valuable to add to the bitcoin software upon update, since similar issues can happen if you install malicious software (worse issues really).

There are also other mitigations that wouldn't stop an immediate attack, but would help prevent the attack from scamming a user for a long period of time. If successful, the attacker could simply mirror all transactions from the normal chain on their malicious chain. So the victim could get paid and pay honest people, but the attacker could be paying the victim for things with just fake coins on the malicious chain. However, there is an idea that has been discussed before of putting a recent block hash in the transaction, so that transactions are pinned to a particular chain and the attacker can't build a malicious chain with honest transactions. If transactions required recent block hashes, the victim would be alerted to the malicious chain as soon as they tried to pay someone honest or get paid by someone honest.
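The pinning idea can be sketched roughly like this (illustrative data structures, not any real wire format; the `max_age` window is an assumed parameter):

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    recent_block_hash: str = ""   # pins this tx to one specific chain

def tx_valid_on_chain(tx, chain_block_hashes, max_age=100):
    # A pinned transaction is only valid on a chain that actually contains
    # the block hash it references, and only while that block is still
    # "recent" (within the last max_age blocks). A mirrored copy of the tx
    # on an attacker's chain fails this check, because the attacker's chain
    # doesn't contain the honest block hash.
    recent = chain_block_hashes[-max_age:]
    return tx.recent_block_hash in recent
```

So an attacker mirroring honest transactions onto a fake chain would have to strip or alter the pinned hash, which invalidates the signatures, which is exactly the point of the mitigation.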

But I think there are still some things to clarify, since I may have not correctly understood a couple items in your attack scenario.


Okay I read your entire post.

> A. Users who have been connected to the network all along can't be tricked by any kind of history attack.

That was never part of the attack proposed in On Stake and Consensus. I'm not accusing you of not knowing this, and I'm not accusing you of ignoring this, I'm just stating it for completeness.

> B. Users who are newly connecting to the network simply need to download the correct software, as they need to do with bitcoin.

You've made a pretty important shift here from comparing PoS-to-PoW, to comparing PoS-to-Bitcoin. You're no longer saying, "my system is decentralized", you're now saying "my system is just as decentralized as Bitcoin". That doesn't work: just because Bitcoin relaxes its decentralization in some ways doesn't mean it's okay for other solutions to relax decentralization.

In fact, this is one aspect in which Bitcoin isn't decentralized: almost everyone goes to a centralized source, Bitcoin.org, and downloads the binary there. Technical users might verify the hash, but that's still a centralized solution. The only truly decentralized solution that Bitcoin offers is that you can download the source code and verify that it does what it says it does, but very few users have the ability to do that: it's a decentralized solution, but it's not a good decentralized solution.

However, Bitcoin's solution is still a better solution than the one you're offering. If I download the source code and have the technical ability to do so, I can verify that the source code does what it says it does. There's no centralized trust here: I'm merely agreeing to the terms of how the blockchain works. Choosing to accept the updates to the Bitcoin software doesn't imply any consensus about the state of the Bitcoin blockchain.

Checkpoints mean something entirely different: that means that I'm trusting the provider of the checkpoint about the state of the blockchain.

I'm going to reiterate the difference because it's extremely important:

1. With Bitcoin, there's no trust required if I verify the source code myself. If I review the source code and decide to compile and run it with the latest changes, I'm merely agreeing to the changes in the rules of the blockchain--and in fact I don't have to agree to them (which results in a hard fork: see Bitcoin Cash or Ethereum Classic). I'm not trusting anything about the state of the blockchain.

2. With your proposed "checkpoint" solution, I'm trusting that the source of the checkpoints isn't lying to me about the state of the blockchain. Contrary to your statement, "There is no central source for the checkpoint," there IS a central source for the checkpoint: the server you're downloading from.

Remember the problem proposed by Poelstra: you receive two different blockchains, and need to figure out which is the real one. All you've done is change the source of the attack slightly: you receive two different source codes containing two different checkpoints and links to two different blockchains, and need to figure out which is the real one. This isn't a fundamental change to the attack, it's the same attack. This is what I meant when I said that getting around your "solution" is trivial. As I said before, checkpoints do exactly nothing to address the problem. All you've done is move some of the block hashes into the source code.

Statements like "Everyone who is currently part of the network can validate that the checkpoint is correct" show a fundamental misunderstanding of the problem: with Bitcoin, I don't need to ask anyone which chain is correct. I don't need to ask the community with Bitcoin: your statements about how people can "alert the community" are irrelevant. I don't need to ask the authors with Bitcoin: your statements about things being signed by the authors are irrelevant. The longer chain is correct, period. If I have to ask the network if my checkpoints are valid, that opens up the possibility of the attack proposed by Poelstra. Just to reiterate:

> Furthermore, when a user does download a checkpoint, some users are going to be careless and download some malicious checkpoint. If they do, but they have honest software, the software can ask their connections if the new checkpoint matches up with them. If it doesn't, it can again raise an alarm to the user.

If you downloaded a malicious piece of software, that piece of software will likely connect you to connections that it controls. Even if you introduce your own connections and they provide you with the correct chain information, there's no way for you to verify which source of information is telling you the truth. Again, with PoW this is easy: the longer chain is the real chain. With PoS, the longer chain could be manufactured: you still have not responded to my statement, "Show me a single mechanism in existence that can allow me to look at two blockchains and reject one based on the fact that the blocks were mined too quickly. Hint: it exists, but you're not going to like what it is!"
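The PoW fork-choice rule being appealed to here is mechanical and trust-free; a minimal sketch (treating each block's "work" as a plain number, which in real clients is derived from the block's difficulty target):

```python
def best_chain(candidate_chains):
    # PoW fork choice needs no trusted source: among valid candidate chains,
    # pick the one whose blocks embody the most cumulative work. The "work"
    # field is a stand-in for the value real clients derive from each
    # block's difficulty target.
    return max(candidate_chains,
               key=lambda chain: sum(block["work"] for block in chain))
```

An attacker can hand you any number of alternative histories, but unless they actually out-spent the honest network on hashing, none of them wins this comparison, which is why no out-of-band trust is needed.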

> For a person with an old version of the software to get a malicious checkpoint without an automatic alarm being raised [...] they would have to be eclipsed by the attacker (connected to only attacker nodes)

No, because even if they connect to some valid nodes, their software with the malicious checkpoint would identify the valid chain as malicious.

> For a person with an old version of the software to get a malicious checkpoint without an automatic alarm being raised [...] the attacker must also have a way to sign the release (of the checkpoint) with the authors' signatures (which the software should also automatically check)

The authors are a centralized entity.

> For a person with an old version of the software to get a malicious checkpoint without an automatic alarm being raised [...] and the attacker must have accumulated enough minting power (eg in old keys bought from people who have drained those addresses already) since the software last left the chain.

That's true, but you've literally proposed one way that could happen. There are other ways this could happen, which are proposed by Poelstra.

Now, remember when I said you didn't understand the attack proposed by Poelstra, and you took that as an insult? Remember how you said that I didn't understand your solution? I've read your post, and it added nothing to my understanding--I did understand your solution before. Does that mean you were insulting me? I'm not going to take it that way, because that would be pointless. All I'm saying is: let's keep this at the level of respectful disagreement, and not treat disagreement as insult.

> For a new entrant to the system, there is a higher risk, but it is no higher than with Bitcoin. The new entrant must find and install the right software somehow, on a machine that isn't compromised.

This is a true statement about Bitcoin, but it isn't a true statement about Proof of Work. Bitcoin Cash software from ten years ago can still detect a sybil attack on the Bitcoin Cash chain as long as you connect to one valid node. Again, the mistake you're making is that with Bitcoin, the software only encodes agreement to changes to the protocol, whereas in your proposed solution, the checkpoints encode trust in the state of the blockchain. This is not "higher risk, but [...] no higher than with Bitcoin". It's a significantly higher risk than with Bitcoin.

What you're alluding to here is a real problem, which is how to reach consensus on changes to the protocol. I don't know of a good solution to that problem: certainly Bitcoin Cash's "never change the protocol" solution isn't a good solution. Probably the best solution I know of is Polkadot's on-chain governance, but while Polkadot is PoS, there's no reason on-chain governance couldn't be implemented in a PoW system. And I'm not sure on-chain governance actually solves the problem: it encodes an agreement on how updates to the protocol are agreed upon, but there's still nothing preventing a motivated minority from changing their source code and creating a hard fork.

> C. Users who have been connected to the network for a time, but left for a period of time, just need to (manually) download and (automatically) verify a checkpoint.

> Item C is the situation that differs most from current Bitcoin. There is some additional possibility for an attack there, but it would still be extraordinarily difficult to pull off. In Bitcoin, there is no need to download any new data, and so there is no equivalent attack vector similar to tricking the user into accepting an invalid checkpoint. However, this attack would be very difficult (eclipse, key theft), cost a lot (buying old addresses with no coins currently in them), and has a pretty limited reward potential (only the possibility of attacking returning and new entrants they can eclipse). So yes, it is a trade-off. I think it's a good trade-off to buy higher security against a 51% attack and lower fees.

This is why I say I'm not a PoS detractor. I do recognize that there are tradeoffs here. I'm not convinced of your claim that this attack would be very difficult--while I don't know of a time that it has been implemented in practice, a lot of the pieces of the attack have already been implemented in practice. You may ultimately turn out to be correct that it's difficult to implement, but I don't think you know that. Certainly Vitalik and a great many other smart researchers are worried about how this could go wrong, and nothing you've said convinces me that your confidence is justified.


I think as usual the crux of this debate is the security properties of figuring out correct software, and the parallels to checkpoints. I think you misunderstood me on a couple things, but I'd recommend that we focus mostly on the question of how a user can download correct software/data. If we can come to an agreement on that, I think how to get on the same page about the rest will become much clearer.

> That doesn't work: just because Bitcoin relaxes its decentralization in some ways doesn't mean it's okay for other solutions to relax decentralization.

Ok. Well, that could be a valid point. However, discussing a design is only useful when comparing it to some realistic alternative. Bitcoin is the de facto standard of cryptocurrencies. I'm sure you'd agree it's at least fair to compare to bitcoin, even if there might be other designs out there that claim to be better. I'd suggest that we both compare against bitcoin, because it seems likely that both of us understand it. Were you to bring up some other coin that you claim does it better, I think it would just hinder us coming to a mutual understanding. Once we've come to such an understanding, I'd be happy to move on to compare against something you think is better than bitcoin.

You're absolutely right that most people go to bitcoin.org to find full node software. However, that has nothing to do with bitcoin's consensus protocol, nor PoW vs PoS. Seems kind of irrelevant, as far as I can see.

> If I download the source code and have the technical ability to do so, I can verify that the source code does what it says it does.

Sure. Ignoring what you said about most people not being able to do that (and I'd argue that the vast majority don't have the combination of time and expertise to review changes to the source code and ensure there aren't things like security holes or maliciously injected code), the fact of the matter is that you can't know just by reading the source code whether that source code implements the protocol that everyone else is using (and calling "bitcoin", or whatever coin you're trying to use). In order to know that the software is compatible, you need to ask other people. There is no way around that. This isn't centralized trust, but it is decentralized minimal-trust. Just looking at a codebase or set of diffs can't tell you which chain is bitcoin.

> Checkpoints mean something entirely different: that means that I'm trusting the provider of the checkpoint about the state of the blockchain.

Given all that I said about how checkpoints can be verified against numerous connections, I would have hoped you'd at least have instead said "that means I'm trusting (many but a finite number of) providerS of that checkpoint about the state of the blockchain".

> Contrary to your statement, "There is no central source for the checkpoint," there IS a central source for the checkpoint: the server you're downloading from.

I thought you read my entire message? Why must someone download the checkpoint from a single source? Why not download it from many sources and ensure they all match?

> you receive two different source codes ... and need to figure out which is the real one

I agree with you, this is a problem. But tell me, how is this problem different for bitcoin or any other cryptocurrency? A prerequisite is that the user installs the correct software. How do they know which one is correct? How does a user know which software is the correct bitcoin software? Can they tell just by looking at the source code? Can they tell just by looking at the binary? What is the trustless way of installing the correct bitcoin software?

> The longer chain is correct, period.

This is not true in the case of a 51% attack, or more realistically, a dangerous majority consensus change. For example, what if bitcoin were the worldwide currency, and most people were tired of high onchain fees and decided to increase the blocksize by 100x with some kind of soft fork. That would very likely be detrimental in the long run. However, it seems reasonably possible that this could actually happen some day. Smart people would fork off a different chain that preserves the old rules. So it would depend on what you mean by "correct". If by "correct" you mean the chain with the most economic activity - that's probably the longest chain. If by "correct" you instead mean the chain with the rules you expect, that chain may no longer exist, and it may not be the longest chain. The only way to know is by asking people and learning what changes have happened, how many people followed what rules, and whether you agree with them. It's not always as simple as "follow the longest chain". However, I certainly agree that 99.9% of the time the longest chain is what you want.

> If you downloaded a malicious piece of software

It seems you somehow misread what I wrote. The case was a user who downloads a "malicious checkpoint" but retains their original "honest software".

> but you've literally proposed one way [an attacker could accumulate enough minting power]

Of course it's possible. That doesn't make it easy, cheap, profitable, or likely. A 51% attack is possible, but it's (hopefully) difficult enough that it will never happen.

> Bitcoin Cash software from ten years ago can still detect a sybil attack on the Bitcoin Cash chain as long as you connect to one valid node.

And a PoS currency that uses checkpoints can also detect a sybil attack if they can download a checkpoint from at least one honest node. It's literally the exact same mechanism.

> I'm not convinced of your claim that this attack would be very difficult--while I don't know of a time that it has been implemented in practice, a lot of the pieces of the attack have already been implemented in practice.

That's something I could analyse in more detail if you're interested.

> Vitalik and a great many other smart researchers are worried about how this could go wrong

There's certainly plenty that could go wrong. I'm not claiming that PoS is easy or a sure thing. What I am claiming is that most of the arguments against PoS that have been raised are solved problems. But that doesn't mean there aren't more subtle known problems that aren't raised as often, and it doesn't mean that there aren't unknown problems. There is certainly a possibility that PoS can't beat PoW. However I have yet to see convincing evidence that's the case.


> I think as usual the crux of this debate is the security properties of figuring out correct software, and the parallels to checkpoints. I think you misunderstood me on a couple things, but I'd recommend that we focus mostly on the question of how a user can download correct software/data. If we can come to an agreement on that, I think how to get on the same page about the rest will become much clearer.

Okay. I agree that this is one of the two key points we disagree on.

If you want to focus on your specific proposed implementation of PoS rather than all possible PoS implementations, we can narrow this further. I think you're saying that it's possible to verify the checkpoints in a decentralized way without PoW. Is that a fair statement of your opinion?

The other key point I think we disagree on is that you seem to think that it's possible to verify that time has elapsed between block validations in PoS. Is that a fair statement of your opinion?


> If you want to focus on your specific proposed implementation of PoS rather than all possible PoS implementations, we can narrow this further.

Sounds good to me.

> I think you're saying that it's possible to verify the checkpoints in a decentralized way without PoW. Is that a fair statement of your opinion?

Yes. To elaborate: for users who don't have any software already installed, this would be a social process of asking many other people/sources what the correct software is. For users who already have correct software installed, either that software has been running recently enough that it can generate its own checkpoint, or it was offline for long enough that it needs a checkpoint - but it can help substantially automate the process of validating a checkpoint, and if it cannot validate it, the user should fall back to a similar process of social discovery to determine what the correct checkpoint/update is.

Do you view the idea of asking many untrusted and/or trusted people/entities what the correct checkpoint is, as not decentralized? Or not sufficiently decentralized?

> The other key point I think we disagree on is that you seem to think that it's possible to verify that time has elapsed between block validations in PoS. Is that a fair statement of your opinion?

The question isn't clear enough to me. I think what I can say is: given that you have two time anchors (e.g. one in the past, the checkpoint, and one in the present, your own computer's clock), it's possible to ensure that only a certain number of blocks can validly be added between those two time anchors, to a high degree of statistical probability.
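That two-anchor bound can be sketched in a few lines (the target block interval and the `slack` factor are illustrative parameters, not values from any real system):

```python
def plausible_block_count(checkpoint_time, now, num_blocks,
                          target_seconds_per_block=60, slack=1.5):
    # Between two time anchors (the checkpoint's timestamp and the local
    # clock), at most roughly elapsed/target blocks should exist. `slack`
    # absorbs normal statistical variance in block production; a chain
    # claiming far more blocks than the elapsed time allows was minted
    # too quickly and should be rejected.
    elapsed_seconds = now - checkpoint_time
    max_blocks = (elapsed_seconds / target_seconds_per_block) * slack
    return num_blocks <= max_blocks
```

The check only constrains block count between the anchors; it says nothing about which anchor to trust, which is why the checkpoint itself still has to be validated separately.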

But maybe you could clarify the question?


> Yes. To elaborate: for users who don't have any software already installed, this would be a social process of asking many other people/sources what the correct software is. For users who already have correct software installed, either that software has been running recently enough that it can generate its own checkpoint, or it was offline for long enough that it needs a checkpoint - but it can help substantially automate the process of validating a checkpoint, and if it cannot validate it, the user should fall back to a similar process of social discovery to determine what the correct checkpoint/update is.

> Do you view the idea of asking many untrusted and/or trusted people/entities what the correct checkpoint is, as not decentralized? Or not sufficiently decentralized?

I'll allow that this falls on a spectrum of decentralization and is definitely more decentralized than "not decentralized at all".

Whether it's "sufficiently decentralized" is a difficult question for two reasons:

1. Maybe you have some formal algorithm for social discovery that you haven't presented here, but without that, it's quite difficult to speculate how it would play out.

2. From the way you're describing this, you're not relying on a formal algorithm, but on a reliable, diverse network. Maybe you're aware that computer networks are inherently unreliable, so you're getting around this by not using a computer network: for example, making a phone call to get a checkpoint hash from someone you trust. There's a lot of human elements here, and humans are unpredictable.

My gut feel is that no, it's not sufficiently decentralized, at least not in a way that presents any real advantages over simply trusting the network, but something like a PGP web of trust[1] could make this more reliable--it's hard to say without fleshing this plan out more.

The two claims I'm willing to confidently make here are:

1. As far as I can tell, you're not proposing an automated way to bootstrap trust OR consensus here, and without this, it's going to be both slow, and prone to the introduction of human error.

2. This system is inherently less decentralized AND less secure than PoW. Verifying a blockchain via PoW doesn't require trust (or put another way, you trust the math, not the nodes you're connected to), so there isn't a need to bootstrap trust. I don't know how PoS will play out, but I do know how PoW will play out in the situations described by Poelstra. PoS may work out in practice: I genuinely hope it does, because there would be significant upsides! But I'm not confident that PoS will work out, and I am confident that PoW will.

> > The other key point I think we disagree on is that you seem to think that it's possible to verify that time has elapsed between block validations in PoS. Is that a fair statement of your opinion?

> The question isn't clear enough to me. I think what I can say is: given that you have two time anchors (e.g. one in the past, the checkpoint, and one in the present, your own computer's clock), it's possible to ensure that only a certain number of blocks can validly be added between those two time anchors, to a high degree of statistical probability.

> But maybe you could clarify the question?

Okay, given this explanation, I think this basically falls back to your system of bootstrapping trust. Yes, if you can trust the checkpoints, you can trust the times between them. So this disagreement basically collapses back to the first disagreement: I would say that the reliability of the elapsed time between checkpoints is only as valid as the reliability of the checkpoints, and I am not confident in the reliability of the checkpoints.

You did claim earlier that all PoS implementations have elapsed time between blocks, and I'm still mystified as to how you're claiming this is enforced.

[1] https://www.linux.com/training-tutorials/pgp-web-trust-core-...


> Maybe you have some formal algorithm for social discovery that you haven't presented here, but without that, it's quite difficult to speculate how it would play out.

However, this process is already important when choosing software in the first place, or when updating software. Even when using PoW, every new entrant to the network needs to do some kind of social discovery to figure out which software to download in the first place. And even with PoS, people that have been regularly connected to the network do not need any social discovery. With PoS, there is an additional kind of user that an attacker can force to need to do social discovery: users who were at one point part of the network but have been offline for a long period of time. My conjecture is that this set of users is quite small in comparison to either the set of new entrants or the set of nodes who have been online frequently enough to not need any social discovery.

Would you agree that if that set of users is small enough, the difference might be insignificant? E.g., would increasing the number of users that have to do some kind of social discovery by 1% be acceptable?

> My gut feel is that no, it's not sufficiently decentralized

I would tend to agree that the process of finding the right software is generally not decentralized enough - too easy for people to find bad software or virus-ridden downloads. The only thing that saves us is that the vast majority of humanity isn't malicious.

> something like a PGP web of trust[1] could make this more reliable

I think there are a lot of things we could do like that. We have a long way to go towards actually making good computer security accessible to a significant fraction of people. First step is operating systems - or maybe even hardware.

> you're not proposing an automated way to bootstrap trust OR consensus here

Correct. What I'm proposing is a way to verify with confidence if the data (checkpoint) you received is very likely valid in cases where you're not being attacked, and a way to alert the user when an attack might be happening. The trust / social discovery part of things is basically out of scope - but already exists in its own haphazard individualized way.

> You did claim earlier that all PoS implementations have elapsed time between blocks, and I'm still mystified as to how you're claiming this is enforced.

Well, I meant that the timestamps that blocks have are enforced. The actual time they're created can't be enforced. But to elaborate: in VPoS, every UTXO gets one chance per second to mint a block. Nodes will know well in advance whether or not they can mint a block (although other nodes can't know which peers will get that chance until that peer actually broadcasts a block). However, a node will reject blocks with a timestamp greater than its clock. Furthermore, a "difficulty" adjustment happens, similar to bitcoin. X blocks per minute are targeted on average, and if 2X blocks/minute are minted in a given time range, the "difficulty" will increase until only X blocks/minute are minted. Basically, if a number C of coins has 2 chances in 10,000 of minting a block, then if the difficulty doubles, that number of coins has 1 chance in 10,000. This is how it's ensured that blocks are, on average, minted some target time apart.
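The minting-chance and retargeting logic described here might look roughly like this (a sketch under the stated assumptions; function names and the linear retarget rule are illustrative, not VPoS's actual code):

```python
def mint_chance_per_second(coins, base_chance_per_coin, difficulty):
    # Each coin gets one chance per second; doubling the difficulty halves
    # every holder's chance, exactly as in the 2-in-10,000 example above.
    return coins * base_chance_per_coin / difficulty

def retarget(difficulty, observed_blocks_per_minute, target_blocks_per_minute):
    # Bitcoin-style retargeting: if blocks arrive twice as fast as targeted,
    # double the difficulty so the average rate falls back to the target.
    return difficulty * (observed_blocks_per_minute / target_blocks_per_minute)
```

Real implementations would clamp the per-period adjustment and average over a window of blocks rather than retargeting off a single rate measurement, but the feedback loop is the same.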

With a quorum based system like Casper, each quorum is allowed to mint a particular number of blocks, probably with timestamp constraints as well. And the quorums themselves must change after a specific number of blocks. I assume some similar difficulty-like adjustment is done to keep these timed properly, so that both quorums and blocks are maintained at a cadence.

Is this what you mean, or are you talking about something else?


> Ok. Well, that could be a valid point. However, discussing a design is only useful when comparing it to some realistic alternative. Bitcoin is the de facto standard of cryptocurrencies. I'm sure you'd agree it's at least fair to compare to bitcoin, even if there might be other designs out there that claim to be better. I'd suggest that we both compare against bitcoin, because it seems likely that both of us understand it. Were you to bring up some other coin that you claim does it better, I think it would just hinder us coming to a mutual understanding. Once we've come to such an understanding, I'd be happy to move on to compare against something you think is better than bitcoin.

That's reasonable, but the flipside is that if we're trying to improve on PoW by moving to PoS, it doesn't make sense to compare a "state of the art" PoS to the oldest PoW that has far too much momentum to implement most of the last decade's worth of improvements. If we must choose a real world implementation, I would choose Bitcoin Cash, not because it's the best but because it's the simplest.

But I think you glossed over the more important point I made, which I even took the time to reiterate because it's very important: agreeing to an implementation is not the same as trusting an entity about the state of the blockchain. These are two very different propositions.

> I agree with you, this is a problem. But tell me, how is this problem different for bitcoin or any other cryptocurrency? A prerequisite is that the user installs the correct software. How do they know which one is correct? How does a user know which software is the correct bitcoin software? Can they tell just by looking at the source code? Can they tell just by looking at the binary? What is the trustless way of installing the correct bitcoin software?

> This is not true in the case of a 51% attack, or more realistically, a dangerous majority consensus change. For example, what if bitcoin were the worldwide currency, and most people were tired of high onchain fees and decided to increase the blocksize by 100x with some kind of soft fork. That would very likely be detrimental in the long run. However, it seems reasonably possible that this could actually happen some day. Smart people would fork off a different chain that preserves the old rules. So it would depend on what you mean by "correct". If by "correct" you mean the chain with the most economic activity - that's probably the longest chain. If by "correct" you instead mean the chain with the rules you expect, that chain may no longer exist, and it may not be the longest chain. The only way to know is by asking people and learning what changes have happened, how many people followed what rules, and whether you agree with them. It's not always as simple as "follow the longest chain". However, I certainly agree that 99.9% of the time the longest chain is what you want.

Well, this is what I'm saying when I say that you're agreeing to a protocol, not to the state of the blockchain. Whether the changes to the protocol are valid is a philosophical question, not a mathematical one.

This is a difference we can look at with real-world examples. Let's say you have a Bitcoin client and an Ethereum client from 10 years ago, and you download the sources for a new Bitcoin client and a new Ethereum client. In reading the changes to the source code, you discover that SegWit[1] was added to Bitcoin, and a hard fork was performed against Ethereum by its own developers to reverse the DAO hack[2].

Now at this point, you can ask, "Which chain is Bitcoin?" and "Which chain is Ethereum?" If you decide to answer that question from a philosophical perspective, you might say, "SegWit greatly decreases decentralization and is a bastardization of Satoshi Nakamoto's vision," and you can reject the Bitcoin protocol changes, connect with your old client, and you'll see the Bitcoin Cash chain. And you might say, "Code is law, the DAO hack was in accordance with the law and should not be reversed," and reject the Ethereum changes, and connect with your old client, and see the Ethereum Classic chain. You can argue that Bitcoin Cash isn't Bitcoin, and you can argue that Ethereum Classic isn't Ethereum, but ultimately that's a philosophical argument, not a mathematical one. I don't think you can reasonably argue that these chains are malicious chains.

Alternatively, you could say, "I want all my money on all the chains", and simply connect with all four clients (new and old for Bitcoin and Ethereum) and see that all connect to long chains with thousands of new and unique transactions on each chain. But critically, at this point, you'd see that the SegWit changes to Bitcoin have billions more hashes behind them than Bitcoin Cash, and the Ethereum hard fork to reverse the DAO hack also has billions more hashes behind it than Ethereum Classic. Just based on that, you can tell the majority opinion about which is the valid protocol. You might disagree with the majority, but ultimately, I don't think it's actually true that the chains don't tell you which chain is Bitcoin or which chain is Ethereum, in a "social adoption" sense. On the contrary, the chains give you a very good idea of how much adoption each chain has received.
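The "billions more hashes" comparison is mechanical: a node sums the work claimed in each fork's headers and prefers the larger total. A minimal sketch in Python, with made-up block structures and work values (real clients derive each block's work from its difficulty target):

```python
# Hypothetical sketch: choosing between two forks by cumulative proof-of-work.
# The block dicts and work values here are illustrative, not real chain data.

def cumulative_work(chain):
    """Sum the work claimed by each block in the fork."""
    return sum(block["work"] for block in chain)

def heavier_chain(chain_a, chain_b):
    """Return the fork with more total hashes behind it."""
    if cumulative_work(chain_a) >= cumulative_work(chain_b):
        return chain_a
    return chain_b

# Two forks of equal length but very different hashrate behind them:
segwit_fork = [{"height": h, "work": 100} for h in range(10)]
cash_fork   = [{"height": h, "work": 30} for h in range(10)]

winner = heavier_chain(segwit_fork, cash_fork)
print(cumulative_work(winner))  # 1000
```

Note that this tells you which fork the majority of hashpower follows, not which fork matches the rules you personally consider "Bitcoin" - that remains the philosophical question above.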

And critically, when the hard fork is philosophical rather than mathematical, you can't be tricked into accepting a double spend. A spend on Bitcoin Cash and a spend on Bitcoin-with-SegWit aren't double spends just because the coins were acquired before the hard fork--they're both valid spends on valid chains, with value in both.

In a PoW system, a truly "malicious protocol" would be one that is presented as if it has mass adoption, but doesn't have mass adoption. Whether that's detectable is dependent on what the changes are, but it is possible to construct a protocol change which would accept blocks without large scale miner adoption. An example of this is merge mining[3], a "feature" of DogeCoin where they use work on the LiteCoin chain to validate blocks on the DogeCoin chain, which was added to address the lack of DogeCoin miners in 2014. This is a philosophical change with mathematical implications, and you'd be able to see those mathematical implications from the source code. This is one of the (oh so many) reasons DogeCoin is a terrible protocol.

> > Checkpoints mean something entirely different: that means that I'm trusting the provider of the checkpoint about the state of the blockchain.

> Given all that I said about how checkpoints can be verified against numerous connections, I would have hoped you'd at least have instead said "that means I'm trusting (many but a finite number of) providerS of that checkpoint about the state of the blockchain".

I'm going to have to disagree with you here: the entire vulnerability is based on the unreliability of the network. You don't know how many providers you're connecting to. This is why it's a problem that your proposal requires diverse connections to the network.

PoW solutions only require that you be connected to one valid node--the valid chain you receive from that node will be longer than the malicious chains you receive even if you're connected to thousands of malicious nodes.

As a random aside: if you're willing to rely on network consensus, then the hardest problem isn't proving consensus, it's achieving consensus in the first place. Checkpoints are a very slow way to do this, and indeed relaxing the requirement for on-chain provability allows us to achieve consensus a lot faster. I don't know if your solution has formalized a method by which nodes should collect these checkpoints, but the state of the art in network consensus might be Avalanche Consensus[4], which allows <5 second finality with millions of nodes, and is tunable to allow consensus even when >50% of nodes are malicious (arbitrary security levels can be tuned, but consensus is slower if you tune it to prevent attacks where, say, 90% of nodes are malicious). I'll admit that my knowledge of these kinds of protocols is a bit weak so there may be better options, but that seems pretty good to me. But ultimately this still requires diverse connections to the network, which is a pretty large weakness compared to PoW protocols.

> And a PoS currency that uses checkpoints can also detect a sybil attack if they can download a checkpoint from at least one honest node. Its literally the exact same mechanism.

It can't be the same mechanism (longest chain), because you haven't proposed a way to prove elapsed time between blocks besides network consensus.

[1] https://www.investopedia.com/terms/s/segwit-segregated-witne...

[2] https://www.gemini.com/cryptopedia/the-dao-hack-makerdao#sec...

[3] https://www.coindesk.com/dogecoin-allow-litecoin-merge-minin...

[4] https://www.avalabs.org/whitepapers


> If we must choose a real world implementation, I would choose Bitcoin Cash, not because it's the best but because it's the simplest.

Unfortunately I don't know enough about the differences between bitcoin cash and bitcoin to be helpful there. I didn't think there was any substantial difference with respect to the consensus mechanism. Is there a big difference?

> agreeing to an implementation is not the same as trusting an entity about the state of the blockchain

I believe I did address that. But we're also addressing this in a separate comment as well. The way you state that I certainly agree - agreeing amongst many peers (to anything) is not the same as trusting a single entity (about anything). However, I think I made it clear that I'm not suggesting trusting a single centralized entity. Must I repeat the stuff about verifying the checkpoint with many peers and having a codebase reviewed by many reviewers?

> you're agreeing to a protocol, not to the state of the blockchain

Agreeing to a protocol is the same as agreeing to the state of the blockchain. A protocol will choose one chain to follow. It does this with various types of rules. If you change the rules, you might change the chain. A checkpoint is just yet another rule that becomes part of the protocol. This is why if you download a malicious piece of cryptocurrency software, it can make you follow whatever chain it wants you to follow. A checkpoint is far more constrained than a software update. It can't change most of the rules, just one: which chain to follow if only some chains contain the indicated block.
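That "just one more rule" framing can be sketched: the checkpoint filters candidate chains before the usual heaviest-chain rule applies, and changes nothing else. The data structures and values here are invented for illustration (real clients store checkpoints as height/hash pairs compiled into the software):

```python
# Hedged sketch of a checkpoint as one extra chain-selection rule.
CHECKPOINT = {"height": 5, "hash": "abc123"}  # assumed hard-coded value

def contains_checkpoint(chain, checkpoint):
    return any(b["height"] == checkpoint["height"] and
               b["hash"] == checkpoint["hash"] for b in chain)

def select_chain(candidates, checkpoint):
    """Prefer chains containing the checkpoint, then pick the heaviest."""
    eligible = [c for c in candidates if contains_checkpoint(c, checkpoint)]
    pool = eligible if eligible else candidates
    return max(pool, key=lambda c: sum(b["work"] for b in c))

# A heavier chain without the checkpoint loses to a lighter chain with it:
heavy = [{"height": h, "hash": f"x{h}", "work": 100} for h in range(10)]
light = [{"height": h, "hash": "abc123" if h == 5 else f"y{h}", "work": 10}
         for h in range(10)]
print(select_chain([heavy, light], CHECKPOINT) is light)  # True
```

The sketch also shows the constraint described above: the checkpoint only matters when some candidate chains contain the indicated block and others don't.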

> connect with your old client, and you'll see the Bitcoin Cash chain

Given that Segwit was a soft fork, you'd still see the Bitcoin chain, not Bitcoin Cash. Not sure about Ethereum Classic.

> I don't think you can reasonably argue that these chains are malicious chains.

I agree. But the user still needs a way to answer that question. Most users will choose based on who they want to interact with. Only users that have really deep knowledge will choose based on the code itself. This seems to be what you're saying here:

> I don't think it's actually true that the chains don't tell you which chain is Bitcoin or which chain is Ethereum, in a "social adoption" sense. On the contrary, the chains give you a very good idea of how much adoption each chain has received.

I believe you're right. It sounds like your point is that the user will usually want to follow the heaviest chain, and so my point about dangerous soft forks is moot. I'll concede that's a reasonable point.

> the entire vulnerability here is based on the unreliable-ness of the network. You don't know how many providers you're connecting to. This is why it's a problem that your proposal requires diverse connections to the network.

But this is always true in any decentralized network. That's the whole issue around sybil attacks - you can't prove that two identities are actually distinct. All decentralized networks require diverse connections to the network.

> PoW solutions only require that you be connected to one valid node

Yes, I agree. But this comes back to the original software download. If 7 sources have malicious software, and one source has the honest software, how do you know which one is honest? It's the same problem. The only difference is that in proof of work, as long as there are no required software updates, you can come back online after an arbitrary period of time and find the honest chain, whereas in PoS there is a horizon after which you need to download new data (the checkpoint). Do you agree that's the salient difference - how long you can go away before you'll have to download and verify updated data to identify the correct chain?

> the hardest problem isn't proving consensus, it's achieving consensus in the first place. Checkpoints are a very slow way to do this, and indeed relaxing the requirement for on-chain provability allows us to achieve consensus a lot faster.

I don't understand what you're referring to about "relaxing the requirement for on-chain provability". Could you elaborate?

> <5 second finality with millions of nodes

Avalanche sounds a bit like Nano.

> allow consensus even when >50% of nodes are malicious

This sounds dubious. I remember Charlie Lee once said, "If a blockchain can't be 51% attacked, it's centralized and permissioned".


The unstaking transaction can be on both chains and it's irrelevant to the attack. What matters is that a history exists starting from a block validated by the cheating validator. The fork starts from before the unstaking transaction.

That's not a difficult thing to reason out, but you wouldn't have to reason it out if you had read the paper I linked.


> The fork starts from before the unstaking transaction.

So what? The fork wouldn't be built on by anyone but the attacker. Any new entrant would have software that expects a blockchain being built substantially faster (or with substantially higher difficulty).

> you wouldn't have to reason it out if you had read the paper I linked.

I have read Poelstra's "pos.pdf" -- the paper PoS detractors incessantly reference -- multiple times. The paper's conclusion is not correct. All the problems he presents are solvable at once (eg in this protocol: https://github.com/fresheneesz/validatedProofOfStake). He doesn't consider that hard coded checkpoints handily solve the problem of long-range revisions.


> Any new entrant would have software that expects a blockchain being built substantially faster (or with substantially higher difficulty).

"Difficulty" isn't a thing in PoS. Provable random functions that choose validators always have fallbacks, because if they didn't, a single absent validator would bring the entire chain to a halt.

Repeated fallbacks would look suspicious, so a smart attacker would simply neglect to include transactions from the real chain that transfer money or power away from addresses he controls, which would quickly lead to >50% ownership of the fake chain. It would be impossible to distinguish this on-chain from a real validator drop event (such as an AWS US1 East outage).

> He doesn't consider that hard coded checkpoints handily solve the problem of long-range revisions.

Hard-coded checkpoints do absolutely nothing to address this issue. If you're requiring that the checkpoints come from a trusted entity, you've just given up decentralization. If you're not requiring that they come from a trusted entity, then they can be spoofed just as easily as a block.

Instead of reading a paper and immediately believing what it says, you need to try to think like an attacker and see how you would try to get around the "solution". There's likely not a paper written on this, because it's so trivial to get around that it really doesn't require a paper to refute. The fact that you presented this idea non-ironically seriously undermines your credibility.

FWIW, I'm not a PoS detractor. I'm quite excited for Ethereum's implementation of PoS, because if anyone can solve the issues with PoS, it's the Ethereum team. My current thinking is very much in the "wait and see" camp.

Weak subjectivity with PoS is a real thing (if you don't believe me, perhaps you should listen to Vitalik Buterin, who cannot be accused of being a "PoS detractor"). What remains to be seen is how much of a problem it is in practice. The release of Ethereum 2.0 will be the largest full-scale test of a mature implementation, but depending on how quickly full roll-outs of PoS happen for Cardano or Polkadot, one of them may get there first.


> "Difficulty" isn't a thing in PoS

Yes it is. The more people with coins actively minting in the system, the higher the difficulty of getting chosen. Please do more research.

If you really want to keep discussing this with me, read through the WHOLE of https://github.com/fresheneesz/validatedProofOfStake and show me a full attack on it.

> FWIW, I'm not a PoS detractor.

You could have fooled me. But it sounds like you're saying you believe in Vitalik, and anyone else on the internet (ie me) must be idiots who can't understand solutions to these problems. Yet, you're not putting in the effort to understand the solutions out there. PoS protocols are more complex than PoW - there are more edge cases to shore up. So telling me you can "trivially" get around any particular solution ignores the fact that solutions work together. They aren't isolated. So slow down and actually try to understand a protocol holistically before attacking it. If you're not a detractor, then how about just trying to learn instead of insulting the intelligence of people who have actually worked on solutions to these problems?


> Yes it is. The more people with coins actively minting in the system, the higher the difficulty of getting chosen.

I'm well aware. But eventually a large validator will get chosen--it's not difficult to get chosen, it's rare. The fact that PoS folks call this "difficulty" does not make it so.

> If you really want to keep discussing this with me, read through the WHOLE of https://github.com/fresheneesz/validatedProofOfStake and show me a full attack on it.

Given your proposed solution is checkpoints from a centralized trusted entity, you're giving up decentralization. Sure, your solution works. If you don't trust the source of the checkpoint, you're back to square one--it no longer works. And if you're willing to give up decentralization, may I suggest you just store all transactions in Postgres? You don't need a blockchain if you're just going to trust a centralized entity.

I'm not going to give your design a free audit. If there's some way this isn't centralized, feel free to explain, otherwise, this isn't relevant--I think we can assume that PoS is being discussed in the context of decentralized, trustless models.

> If you're not a detractor, then how about just trying to learn instead of insulting the intelligence of people who have actually worked on solutions to these problems?

I haven't insulted your intelligence. You're probably a smart guy. Which is probably why you're overconfident in your solution: smart people aren't used to being wrong, so you aren't taking the time to actually grok the attack I've already proposed, or why the only working solution you've actually proposed depends on centralization.

I did question your credibility to make claims here, because it's pretty clear you either don't understand the attack proposed by On Stake and Consensus, or you don't understand decentralization, which are both fundamental prerequisites to this discussion. The time constraints defense is literally in the paper, and the checkpoints solution you've proposed is just proof of authority by a different name. If you choose to feel insulted by someone saying you're wrong, that's your prerogative.


Thank you for all your incredible input. I felt I grokked POW but I’ve been looking for concise introductions to PoS wrt double spending mechanisms and equivalents to 51% attacks. Thanks again for the depth.


I'm not going to continue this thread anymore. One is enough, and I'll try one more time to continue on your other comment. You are insulting me btw. I'm this close to following in your footsteps.


But how would someone know later which transaction is valid? Like, a new entrant to the network gets two transactions spending the same input; without a blockchain to determine the ordering of transactions, this new entrant wouldn't know which was actually spent. Both transactions could have subsequent spends, so that is also not a way to determine which is the one to follow. You can't decide to follow neither, because it's possible (even likely) that the person who owns those coins now was not the person who tried double spending them.


Well let's say we do things this way, and I double spend a transaction, sending one to Alice and one to Bob. Suppose Alice waits t seconds after receiving the transaction, and if she doesn't see any double spends in that time, she accepts that the coin is hers and she then sends me something in return.

One second before Alice "confirms" my transaction, I can send out my transaction to Bob. Since my message takes time to propagate through the network, it won't reach Alice soon enough and she will think I've paid her, and she'll send me whatever I was paying for. But other people will see both these transactions within t seconds of each other. They won't know which they should believe is correct, and they might not accept payment from Alice in the future.


The same thing happens in PoW networks. You have to wait several blocks to gain confidence that a transaction won't be reverted.

In Ethereum's PoS, by waiting a similar amount of time you actually get finality; at that point your transaction cannot be reverted without a third of total stake being destroyed.


Unless Vitalik proposes a rollback.


And gets support for that rollback from a large majority of the community.

Given that level of community support, the same could happen in any PoW network.


Yeah, good point. If the attacker knows how long each person will wait, then they can exploit that.


That's similar to what Ethereum's proof-of-stake does. Any staker can submit the two inconsistent transactions as proof of malfeasance by another staker. When that happens, the misbehaving staker is automatically "slashed," losing a portion of their stake.
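The slashing condition described here can be sketched in a few lines: two signed messages from the same staker, for the same slot, with different block hashes, together constitute the proof of malfeasance. Signature verification is stubbed out in this toy version, and the penalty fraction is invented; a real chain verifies signatures before slashing:

```python
# Hedged sketch of slashing on equivocation (illustrative field names;
# a real implementation checks cryptographic signatures first).

def is_equivocation(msg_a, msg_b):
    """Two different blocks signed by the same staker for the same slot."""
    return (msg_a["staker"] == msg_b["staker"]
            and msg_a["slot"] == msg_b["slot"]
            and msg_a["block_hash"] != msg_b["block_hash"])

def slash(stakes, msg_a, msg_b, penalty=0.5):
    """Destroy a fraction of the misbehaving staker's stake."""
    if is_equivocation(msg_a, msg_b):
        staker = msg_a["staker"]
        stakes[staker] -= stakes[staker] * penalty
    return stakes

stakes = {"validator1": 32.0}
a = {"staker": "validator1", "slot": 7, "block_hash": "h1"}
b = {"staker": "validator1", "slot": 7, "block_hash": "h2"}
print(slash(stakes, a, b))  # {'validator1': 16.0}
```

Note that this only punishes a staker whose stake still exists on the chain doing the slashing - which is exactly the gap the stake-then-unstake attack discussed elsewhere in this thread tries to exploit.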


The problem with waiting AFAIK is that it's vulnerable to attacks on the underlying peer-to-peer network.

Let's say you wait 1 hour to see whether there was a double spend. If an attacker wanted to, they could spend twice and somehow mess with the network for 1 hour.

For example, an attacker at the internet service provider could intercept packets from the Bitcoin network and just block the other spend transaction from getting to you. You wait 1 hour and consider the payment finalized, and you swap goods for Bitcoin.


Obvious fix, peg the time limit to measured network throughput.


There is no such thing as dis-trusted.

Bitcoin is "trustless". There is no trust of any form.


Public keys identify wallets. Key X tries to double spend, it is detected, proof propagates across network, nodes vote to insert into the consensus ledger a public record that key X has been "dis-trusted". The network thenceforth rejects transactions from wallet identified by key X.

Now key X has been dis-trusted.


They are not going to reuse a wallet for a second double spend attack.


PoW is not the only way to solve double spending (and make sure that the blocks are ordered in a particular timeline). In PoS it is solved by randomly choosing which validator will create the next block, according to an algorithm that all the nodes follow. So there's no race to find the next block; the block producer is known in advance.

What's more, in PoS you have guaranteed finality, meaning that after a certain number of blocks, you can be sure that the order will not change. This is superior to the probabilistic finality offered by PoW.
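The deterministic selection described here can be sketched as a stake-weighted lottery that every node evaluates identically from a shared seed, so everyone agrees on the producer in advance. This is a toy illustration, not any specific chain's algorithm; the seed scheme and names are invented:

```python
# Hedged sketch: deterministic, stake-weighted validator selection.
# All nodes run the same function on the same seed, so there is no
# mining race - the next block producer is known ahead of time.
import hashlib

def pick_validator(stakes, seed):
    total = sum(stakes.values())
    # Derive a deterministic point in [0, total) from the shared seed.
    digest = hashlib.sha256(seed.encode()).digest()
    point = int.from_bytes(digest, "big") % total
    # Walk the stake distribution; larger stakes are chosen more often.
    for validator, stake in sorted(stakes.items()):
        if point < stake:
            return validator
        point -= stake

stakes = {"alice": 60, "bob": 30, "carol": 10}
# Every honest node computes the same producer for this slot:
producer = pick_validator(stakes, "block-41-hash")
```

Real designs replace the plain hash with a verifiable random function and, as discussed elsewhere in this thread, need fallback rules for when the chosen validator is offline.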

> loses everything spent if the attack fails

They don't lose everything, they still get to keep the mining hardware and can retry the attack again. They can also use a "selfish mining" strategy and earn block rewards that subsidize their attack (by earning more than their fair share).

In PoS on the other hand, the attacker loses everything after an attack and cannot try again. This is because attackers get slashed (their staked coins are destroyed). If they want to try again, they need to acquire more coins.


There are a bunch of ways in which you're not exactly correct here.

First, you're describing a very particular implementation of PoS. Not all PoS implementations include slashing. This isn't a mere technicality: the most popular implementation of PoS (as measured by market cap) is Cardano's and it doesn't include slashing. And not all PoS implementations involve randomly choosing a validator.

The bigger problem here is that proof of stake is actually a misnomer, because it doesn't actually provide a proof, period--it's a consensus algorithm, not a proof algorithm. That's fine if you're actively connected to a bunch of nodes, because you can resolve double spends by polling the network to find out the consensus. But if you're only connected to two nodes and those two nodes are showing you two different spends of the same balance, you have no way to resolve that conflict. In PoW (a real proof algorithm) you simply look at the longer chain. In PoS, there's no way to actually enforce the longer chain, because there's no cost to producing a longer chain. This is a problem which occurs in practice when nodes newly connect to a network or reconnect after a long absence.

Slashing does nothing to address this issue. If I'm running a PoS validator node, I can stake coins, unstake them, and then spend the coins. Now I've got a free license to mine blocks without risk of slashing. I can mine blocks by simply validating blocks that say that I never unstaked. And I'm at no risk of slashing because I've already spent the coins: there's no balance to slash.

You alluded to an attempted solution with "it is solved by randomly choosing which validator will create the next block according to an algorithm that all the nodes follow" but that doesn't work: a provable random function can choose a validator which creates artificial scarcity, but there has to be a fallback in case validators drop offline, otherwise a downed node would grind the entire chain to a halt as soon as it was chosen as a validator and doesn't validate a block when it's supposed to. The malicious validator can use this because there's no temporality stored on-chain: you just find a situation where your staking address is the fallback node and claim that the validators who were chosen by the random function validated late, and nobody can prove you wrong. This may sound suspicious, but in fact these fallbacks happen in practice. To create a believable chain after this, you have to create a situation where a lot of validators drop off and are replaced by nodes you control. This also looks suspicious, but again this also happens in practice: for example if AWS S3 US East VA goes down, there will be a huge number of validators that drop offline.

Probably the most thorough explanation of this type of attack and the theory behind it can be found in On Stake and Consensus[1].

There's some debate on the importance of this attack--Vitalik Buterin calls this "weak subjectivity" and has written about it in length[2]. But to say he's not concerned about it would be inaccurate; he says, "actually implementing a proof of stake algorithm that is effective is proving to be surprisingly complex"[3]. That sentence was written in 2014 and Ethereum still does not rely on proof of stake in 2021--be assured that if in seven years Vitalik Buterin has not satisfactorily solved the problem, it's not trivial. I would caution anyone extolling the virtues of Proof of Stake from overstating their case--PoS is not a panacea and presents many challenges of its own.

[1] https://download.wpsoftware.net/bitcoin/pos.pdf

[2] https://blog.ethereum.org/2014/11/25/proof-stake-learned-lov...

[3] https://blog.ethereum.org/2014/10/03/slasher-ghost-developme...


> I can stake coins, unstake them, and then spend the coins. Now I've got a free license to mine blocks without risk of slashing

Except you won't because you'll have to wait so long to be able to use the old coins that you'll miss the window of opportunity. Also, it doesn't matter because you need > 50% of the actively minting coins in order to succeed anyway. And if you had that, you don't need to worry about being slashed at all.


> Except you won't because you'll have to wait so long to be able to use the old coins that you'll miss the window of opportunity.

What "window of opportunity"? Ostensibly these coins will remain valuable far into the future, and therefore the ability to spend them twice will also remain valuable.

> Also, it doesn't matter because you need > 50% of the actively minting coins in order to succeed anyway.

If you have the ability to mint blocks, you can include signed transactions from the real chain to unstake and build a majority on nodes you control.

I would discourage you from responding on this topic until you've read the first paper I linked, as you have not said anything that wasn't addressed in that paper.


> What "window of opportunity"?

The window of opportunity to use your old keys (with old coins) to create a malicious chain.

> If you have the ability to mint blocks, you can include signed transactions from the real chain to unstake and build a majority on nodes you control.

If the attacker is building their own chain, no one else will be building on top of it. Anyone will see that the chain has very few coins actively minting, and it will be immediately suspect, even if they're eclipsed. If they're not eclipsed, they'll go with the honest chain, which would be clearly longer unless this was a 51% attack situation.

>I would discourage you from responding on this topic

If you're going to be rude, I'm just going to ignore you. It's pretty insulting to tell me that. I've written two different consensus protocols and analysed the security properties of many more. I very much doubt you've put anywhere near as much work into understanding the attacks and solutions around consensus protocols. So please don't be pompous.


How do you know you have the longest chain without having all the blocks? You need to reach consensus on which blocks exist, and you can't get that without talking to enough other nodes to be pretty sure you have enough blocks that a longer chain is unlikely.

When there is a network split, nodes on one side of the split might think they have the longest chain and be wrong about it. Unless you know you have most of the computing power on your side of the network split, you can't really know which side has the longest chain until the network split is over.

So no, Bitcoin doesn't avoid the need to achieve consensus.


Glad we've come to the conclusion that things I didn't say aren't true!

What I actually said is, "[I]f you're only connected to two nodes and those two nodes are showing you two different spends of the same balance, you have no way to resolve that conflict. In PoW (a real proof algorithm) you simply look at the longer chain."

You're correct that this isn't a proof of consensus.


Fair enough. But looking at the longest chain you have (from the two nodes you can connect to) and assuming it's valid is not sufficient. You also need some reason to believe there isn't a longer chain out there. (In practice this is by waiting for a certain number of "confirmations.")

Maybe that's assumed in "simply look at the longer chain" but I thought I'd point it out.


> you're describing a very particular implementation of PoS.

It's silly to say PoS doesn't work when what you really mean is that some badly designed PoS systems don't work. Slashing is a solution to a problem. It's honestly irrelevant if some PoS systems don't do slashing.


It's silly to respond to one particular claim in isolation, when I wrote a long post that addressed the limitations of slashing. Please don't respond to what I say out of context.



> The bigger problem here is that proof of stake is actually a misnomer because doesn't actually provide proof, period--it's a consensus algorithm, not a proof algorithm.

From peercoin (first pos coin) whitepaper: "Roughly speaking, proof-of-stake means a form of proof of ownership of the currency."

> But if you're only connected to two nodes and those two nodes are showing you two different spends of the same balance, you have no way to resolve that conflict. In PoW (a real proof algorithm) you simply look at the longer chain.

From the same whitepaper: "The protocol for determining which competing block chain wins as main chain has been switched over to use consumed coin age. Here every transaction in a block contributes its consumed coin age to the score of the block. The block chain with highest total consumed coin age is chosen as main chain."

I mean, looking at it it seems that your main complaints were actually addressed some 9 years ago in the first PoS whitepaper.
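The quoted scoring rule is easy to sketch, given a simplified transaction shape (coins held, plus receive and spend times); field names here are invented for illustration:

```python
# Hedged sketch of peercoin-style "consumed coin age" chain scoring,
# as described in the quoted whitepaper passage. Transaction shape is
# simplified for illustration.

def coin_age(tx):
    """Coin age consumed by a transaction: coins * time held."""
    return tx["coins"] * (tx["spent_at"] - tx["received_at"])

def chain_score(chain):
    """Every transaction in every block contributes its consumed coin age."""
    return sum(coin_age(tx) for block in chain for tx in block)

chain_a = [[{"coins": 10, "received_at": 0, "spent_at": 30}]]  # age 300
chain_b = [[{"coins": 5,  "received_at": 0, "spent_at": 20}]]  # age 100
main_chain = max([chain_a, chain_b], key=chain_score)
print(chain_score(main_chain))  # 300
```

The sketch makes the scoring mechanical, but says nothing about whether fabricating a high-scoring chain offline is costly - which is the crux of the dispute in this thread.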


> From peercoin (first pos coin) whitepaper: "Roughly speaking, proof-of-stake means a form of proof of ownership of the currency."

Just because a whitepaper says something doesn't mean it has held up to peer review.

> From the same whitepaper: "The protocol for determining which competing block chain wins as main chain has been switched over to use consumed coin age. Here every transaction in a block contributes its consumed coin age to the score of the block. The block chain with highest total consumed coin age is chosen as main chain."

Consumed coin age is not provable by looking at the chain--it's only provable by polling the network for consensus.

> I mean, looking at it it seems that your main complaints were actually addressed some 9 years ago in the first PoS whitepaper.

I mean, looking at it it seems you didn't read the first paper I linked, or indeed much of the literature on proof of stake which has been written in the last 9 years.


> I can stake coins, unstake them, and then spend the coins. Now I've got a free license to mine blocks without risk of slashing.

Lol, you're a genius! Bwahahaha!


What if an attacker’s goal is to see another validator get slashed?


That's a different type of attack. The attacker would need to compromise the validator's signing keys to do that. In crypto, if you have lost control of the signing keys, then it's game over anyway.


> The problem proof-of-work solves is double spending,

That doesn't seem to be my understanding. In my mind the problem proof of work solves is how quickly new blocks can be generated, thus limiting the supply of new blocks and giving an incentive to move the head pointer to an agreed upon next value (by granting coins to the miner who was able to compute the value).

Double spend is solved due to the single branch tree nature of the chain, since that block can validate that the coin hasn't been spent within that one block, nor in previous blocks (and side blocks can be ignored).

Proof of work has nothing to do with that, any system that can encourage everyone to agree on the next block does the same prevention of double spend.
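The claim that an agreed single ordering suffices can be made concrete: given one ordered chain, a node replays the blocks against a set of unspent outputs and rejects any second spend of the same output. A toy sketch, with transactions reduced to (spends, creates) pairs (real nodes track outputs by txid and index):

```python
# Hedged sketch: once block order is agreed, double spends are a simple
# set-membership check while replaying the chain.

def validate_chain(blocks, genesis_utxos):
    utxos = set(genesis_utxos)
    for block in blocks:
        for spends, creates in block:
            for outpoint in spends:
                if outpoint not in utxos:
                    return False  # double spend, or spend of an unknown coin
                utxos.remove(outpoint)
            utxos.update(creates)
    return True

blocks = [
    [(["coin1"], ["coin2"])],  # spend coin1, create coin2: fine
    [(["coin1"], ["coin3"])],  # coin1 already spent: rejected
]
print(validate_chain(blocks, ["coin1"]))  # False
```

This supports the point being made: the hard part is not detecting the second spend, it's getting everyone to agree on which ordering of blocks to replay in the first place.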


>Proof of work has nothing to do with that, any system that can encourage everyone to agree on the next block does the same prevention of double spend.

What foundational frame of reference are you using to write that sentence above?

Because for Bitcoin's specific implementation, the purpose of POW is to prevent double spend without centralized trust. Consider the following "if()" statement in bitcoin source code:

Excerpt from validation.cpp [1]:

  bool CBlockIndexWorkComparator::operator()(const CBlockIndex *pa, const CBlockIndex *pb) const {
      // First sort by most total work, ...
      if (pa->nChainWork > pb->nChainWork) return false;
      if (pa->nChainWork < pb->nChainWork) return true;

In older versions of the source code, excerpt from main.cpp [2]:

  if (pindexNew->bnChainWork > bnBestChainWork)
    { ...

The motivation for the C++ code above to compare the chain work is in the original 2009 Bitcoin whitepaper. Excerpt:

>12. Conclusion : We have proposed a system for electronic transactions without relying on trust. We started with the usual framework of coins made from digital signatures, which provides strong control of ownership, but is incomplete without a way to prevent double-spending. To solve this, we proposed a peer-to-peer network using proof-of-work to record a public history of transactions that quickly becomes computationally impractical for an attacker to change if honest nodes control a majority of CPU power.

[1] https://github.com/bitcoin/bitcoin/blob/271155984574a5bba961...

[2] https://github.com/bitcoin/bitcoin/blob/40cd0369419323f8d738...


> In my mind the problem proof of work solves is how quickly new blocks can be generated [...]

There is actually no real point in rate limiting the creation of new blocks; why not generate them as fast as possible and thereby execute transactions as quickly as possible? This certainly needs some more thought: you do not want miners spamming empty blocks or blocks full of their own fake transactions, you cannot provide a fixed block reward, and so on. But in general, the mining rate limit is a limitation, not a feature.

> Proof of work has nothing to do with that, any system that can encourage everyone to agree on the next block does the same prevention of double spend.

Sure, proof of work establishes consensus, and any consensus algorithm could be used. But Bitcoin is anonymous, so you do not know how many parties there are that need to agree, and therefore a simple vote-based consensus protocol does not work. You have to prevent a malicious user from casting a million votes and changing the consensus branch as they please. How do you do that? You make casting a vote expensive: you make voters, for example, waste a lot of clock cycles to do so.
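The "make casting a vote expensive" idea can be sketched in a few lines. This is a toy illustration only, not Bitcoin's real SHA-256 proof of work: `std::hash` stands in for a cryptographic hash, and `mine`/`checkVote` are hypothetical names.

```cpp
#include <cstdint>
#include <functional>
#include <string>

// Casting a "vote" for a block means grinding nonces until the (toy)
// hash falls below a target. Each failed attempt is a wasted clock
// cycle, which is exactly what makes vote-stuffing expensive.
uint64_t mine(const std::string& block, uint64_t target) {
    std::hash<std::string> h;
    uint64_t nonce = 0;
    while (h(block + std::to_string(nonce)) >= target)
        ++nonce;
    return nonce;
}

// Verifying a vote is cheap: one hash evaluation.
bool checkVote(const std::string& block, uint64_t nonce, uint64_t target) {
    return std::hash<std::string>{}(block + std::to_string(nonce)) < target;
}
```

The asymmetry is the point: a million votes cost a million grinding sessions, but anyone can verify each one with a single hash.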


> There is actually no real point of rate limiting the creation of new blocks…

To avoid fracturing the network and wasting significant amounts of work, the time between blocks needs to be much longer than the time it takes to propagate newly mined blocks to most of the nodes.

There are also concerns about the minimum amount of storage and bandwidth required to participate as a full node, and how that may drive increasing centralization. Storage can be mitigated somewhat with better pruning of old, spent transactions, but bandwidth is a harder problem facing any attempt to increase the core transaction rate.
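A back-of-envelope model (my assumption, not something stated in the thread) makes the propagation concern concrete: if blocks take t seconds to propagate and the interval between blocks is T seconds, a competing block is found during propagation, wasting the work, with probability roughly 1 - exp(-t/T).

```cpp
#include <cmath>

// Approximate probability that a newly mined block is staled by a
// competitor found before it finishes propagating, for Poisson block
// arrivals with mean interval intervalSec.
double staleRate(double propagationSec, double intervalSec) {
    return 1.0 - std::exp(-propagationSec / intervalSec);
}
```

With 10-second propagation, Bitcoin's 600-second interval gives a stale rate under 2%, while a hypothetical 15-second interval would waste nearly half of all blocks.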


> any scheme to replace proof-of-work needs to maintain that element of irrecoverable loss of value for a failed double spending attack.

As others have said, this is not true. All that is required is that it is expensive to successfully double spend.


More expensive than the value of the 2nd spend (in terms of expected value)?


Well, generally a 51% attack doesn't require expending cost so much as it requires accumulating capital. With PoW, that capital is mining hardware. With PoS, that capital is the currency itself. In both PoW and PoS, a double spend could be done in a way that doesn't "cost" the attacker anything. But it does require accumulating so much capital that it's hopefully extremely unlikely that any one person or group (even a government) could accumulate that much. An attack would generally substantially decrease the value of the capital accumulated (again, mining hardware or coins), so there is a cost there. But it isn't the kind of thing where you can say "1 attack costs X dollars and 2 attacks cost 2X dollars", so the value of a particular double spend isn't really that relevant.


I would consider a double spend transaction to be invalid.


Which one was the first spend and which one was the double-spend?

You are talking about a distributed network where there is no universal time by which this can be measured.

The blockchain is the solution to this problem. It provides an ordering which is to be universally agreed upon, because the only way to disagree with the ordering is to use >51% of the amount of electricity securing the network to state your disagreement.

A spend is invalid if the consensus in the network is that the TXO being used as an input for a transaction has already been spent in a transaction which has been mined into a valid block. Until that happens, any valid transaction which spends a TXO may be the correct one.
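That rule can be sketched as a set of spent outpoints maintained by each node. The types and names below are hypothetical, not Bitcoin's actual UTXO code; the point is that both halves of a double spend pass this check individually, and only whichever one the consensus chain mines first gets into the set and invalidates the other.

```cpp
#include <set>
#include <string>
#include <utility>
#include <vector>

// An outpoint identifies a specific TXO: (txid, output index).
using Outpoint = std::pair<std::string, int>;

struct UtxoView {
    std::set<Outpoint> spent;  // outpoints consumed by the accepted chain

    // Reject a transaction if any input is already spent in this view;
    // otherwise mark its inputs spent and accept it.
    bool accept(const std::vector<Outpoint>& inputs) {
        for (const auto& in : inputs)
            if (spent.count(in)) return false;  // double spend
        for (const auto& in : inputs)
            spent.insert(in);
        return true;
    }
};
```

A reorg to a stronger chain means rebuilding this view from the new chain, which is how "the correct one" can change until enough confirmations accumulate.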


> Which one was the first spend and which one was the double-spend?

The answer to that question is actually that it doesn't matter. It seems weird at first, but if you think about it, no one who uses the system cares which one is "the first"; the only thing people care about is that once one is picked, it will not change later on, a.k.a. finality. And for that, a majority must agree on one. Which one, again, doesn't matter at all.

FBA (Federated Byzantine Agreement) "blockchains" make use of that. Each node signals which Tx it saw first, but if no super-majority can be reached, it simply switches to the Tx that has more votes. The sole purpose is to agree on one, which makes the second Tx unfunded and thus fail.

Bitcoin and Bitcoin-like systems can't do this. They always have some kind of lottery that decides who writes the Tx. There is no agreement and thus no finality, and that's the whole reason why double spends can happen.
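The tie-break described above can be sketched as a simple tally. This is hypothetical illustration code, not any real FBA implementation: each node reports the Tx it saw first, and whether a super-majority emerges or only a plurality, exactly one Tx wins and the other becomes unfunded.

```cpp
#include <map>
#include <string>
#include <vector>

// Pick the winning transaction from each node's "first seen" report.
// A super-majority and a mere plurality yield the same outcome here:
// one Tx is agreed upon, the competing spend fails.
std::string agreeOnTx(const std::vector<std::string>& firstSeen) {
    std::map<std::string, int> votes;
    for (const auto& tx : firstSeen)
        ++votes[tx];  // one vote per node

    std::string winner;
    int best = 0;
    for (const auto& [tx, count] : votes)
        if (count > best) { winner = tx; best = count; }
    return winner;
}
```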


I agree. It seems to me that PoW is specifically designed to make adding new transactions to the blockchain take a certain amount of time. If anyone could add to the blockchain instantaneously, imagine the same account being used to spend money on both sides of the earth at the same "time". AFAIK relativity says it would be virtually impossible to tell which one happened "first" in a decentralized way. So by slowing everything down, we force there to be a window for transactions to propagate.


He did mention 51% attack which implies double spend.


Another way to describe this is that PoW is a Sybil resistance measure.


Although just saying that doesn't really tell anyone much if they don't already know what it means.

A Sybil attack is the creation of a bunch of identities to subvert a system that is attempting to distribute power amongst a bunch of participants.

The Sybil resistance of these schemes comes from allocating power in proportion to some form of commitment of a scarce resource, which could be endogenous to the system (e.g. proof of stake, proof of burn) or exogenous (e.g. proof of work, proof of storage).

I think it's noteworthy that all of these systems are plutocratic. They will all effectively redistribute wealth upward, because anything that doesn't would encourage the creation of more identities.


Exactly. The current techniques for Sybil resistance lead directly to plutocracy because of economies of scale.

I think the solution is fees based on transaction expediency. If I want a transaction to go through for the minimum possible fee, I could schedule it a year in advance; in one year's time, the money would be available. If I wanted it sooner, say instantly, that transaction would cost, say, 1x, so if it were a $100 transaction, the fee would be $100. Maybe 1 day is 0.1x, 1 week is 0.01x, and 1 month is 0.001x.

If you want the best odds of double spending and choose to pick instantly, the fee is equivalent to the transaction, so you don't actually end up gaining anything.
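The proposed schedule can be written out directly. The multipliers are only the ones floated above, and `feeMultiplier`/`fee` are names I made up for the sketch:

```cpp
// Expediency fee: the sooner a transaction settles, the larger the fee
// as a fraction of the amount. Thresholds and rates follow the schedule
// proposed above.
double feeMultiplier(int delayDays) {
    if (delayDays >= 365) return 0.0;    // a year out: minimum possible fee
    if (delayDays >= 30)  return 0.001;  // ~1 month
    if (delayDays >= 7)   return 0.01;   // ~1 week
    if (delayDays >= 1)   return 0.1;    // ~1 day
    return 1.0;                          // instant: fee equals the amount
}

double fee(double amount, int delayDays) {
    return amount * feeMultiplier(delayDays);
}
```

Under this schedule, an instant $100 transaction pays a $100 fee, which is what removes the profit from an instant double spend.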

Smart merchants will essentially start doing futures: I am going to need corn in one year, so I set up the deal today, and when it goes onto the chain, I transact my corn.

Better still, we can use these fees to add to the supply. The only reason a transaction with a fee would go through is if the profit of the transaction for both sides was above zero, so we can directly track a portion of the surplus of our currency.

We can calculate how much security we need based on the currency available at a given time, and the miners earn only enough of the fees to ensure that we won't be harmed by a malicious attacker who does not care that they aren't getting anything. So the difficulty is based not on mining power but on exploitability.

So what do we do with the rest of the fees, including the new coins generated through those fees? We give them to people who have actively locked up coins, as we want to encourage people to lock up their coins: the more they have locked up, the less risk there is of a double spend. So if you know that you will not transact your coins for one year, you transact with yourself and lock in, essentially getting yourself a CD.

This is all super preliminary, and I am looking for problems with this setup, but it seems to work.



What I still fail to understand, even with your explanation, is why not use a central trusted ledger to synchronize all transactions. It could surely operate at Bitcoin's current speed and relieve the entire planet of Bitcoin's energy cost, while providing the same guarantee that no double spend could happen.

All you'd need is a small private army, a few lawyers, an historically accepted jurisdiction, and you'd be ready to go with your central ledger.

I don't think you're right to say double spending is the only thing being solved with proof of work. I think people are trying very, very hard not to centralize (for no good reason, in my opinion), despite the prohibitive cost. And why? Because of that insane fear that the ledgers would be changed. Hence you'd have to admit the author was a bit right that this whole story is ultimately about protecting paranoid people against a very absurd risk, at the cost of asking governments to subsidize reopening coal mines...


This is called the state backed banking system.

Eventually they will make all your money digital. They will track everything you buy.

Then at the very least they will sell your information to advertisers, and at worst retain totalitarian control over all your purchasing power.

It's already happening in China: the state has the power to prevent people from travelling, etc., based on their "social credit".

The US is making a CBDC (central bank digital currency). No matter how pure the intentions behind creating a government-backed digital currency are, it will degenerate into a means of control.


> Eventually they will make all your money digital. They will track everything you buy.

Which is why Bitcoin starts out doing exactly that by default?


If the central trusted ledger starts censoring transactions, blocking users or producing invalid blocks with additional rewards to themselves, then what? Just keep going with it and accept the new reality? Then we would be back at what we have now - the current central bank monetary system.

Trustless decentralized architecture makes the Bitcoin network incredibly robust against attacks on its principles. It can't be changed (without users/capital switching over to new version), stopped or censored by anyone.


You mean like... paying using a card number (VISA, Mastercard, Amex) on the Internet?



