Awesome! Requiring total order for everyday transactions is like running the year-end accounting close checklist of a Fortune 500 company every time I buy a cup of coffee: it's wasteful and pointless.
IMO, instead of imposing total order from the top down, we should grow total order organically by iteratively building up and combining sets of locally ordered transactions.
Edit: OK, so I read through the paper and it looks OK, perhaps even "obvious" (though I dislike how that term can minimize good contributions), except for the assumption about validators: "we assume up to f = (n−1)/3 of the validators are adversarial and behave in an arbitrary way". How do you control the number of validators? They are cheap to run (they just locally verify and sign transactions, after all), so what stops an adversary from flooding the network with 10n hostile validators and thus invalidating the assumption?
Maybe there could be a permissionless way to get all clients to deterministically agree on who the validators are...maybe using something called proof of work...
I've considered a completely different system called proof-of-fee that has no miners or stakers:
- Each transaction pays a fee (that is either redistributed or burned)
- The best blockchain is the one with the most fees paid
- Each transaction signs the previous block
- The blockchain length is limited to 1 block per 15 seconds
- Nodes put their own transactions into the current block, or start a new block with them (depending on whether the chain has reached its length limit), and broadcast it. No miners or stakers; everyone's equal.
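The fork-choice rule in the list above ("the best blockchain is the one with the most fees paid") can be sketched in a few lines. This is a minimal sketch with made-up types; real chain validation (signatures, the 15-second limit, the previous-block links) is omitted:

```python
# Sketch of the "most fees paid" fork-choice rule. The Block type and
# tx representation are hypothetical; a tx is just (amount, fee) here.
from dataclasses import dataclass, field

@dataclass
class Block:
    txs: list = field(default_factory=list)  # each tx is (amount, fee)

def chain_fees(chain):
    """Total fees paid across all blocks in a chain."""
    return sum(fee for block in chain for (_amount, fee) in block.txs)

def best_chain(chains):
    """The best blockchain is the one with the most fees paid."""
    return max(chains, key=chain_fees)

# Two competing forks: the first has more cumulative fees, so it wins.
honest = [Block([(100, 1)] * 3), Block([(50, 1)] * 2)]  # $5 in fees
attack = [Block([(100, 1)] * 4)]                        # $4 in fees
assert best_chain([honest, attack]) is honest
```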
Therefore, in order to replace a buried transaction (to double-spend), you'd need to pay more fees than all the transactions in blocks that were burying your old one. This would be unprofitable for deeply buried transactions.
There would be a lot of reorganization going on around the block-change time at the 15-second mark, but that's an engineering problem.
I say this not to criticize, but to continue the discussion. Your idea is interesting.
This also means that transactions are buried faster the more heavily the blockchain is used. For example, you'd only have to wait one block if there were 1000 transactions in it.
The only problem is if the double-spender is trying to double-spend 100 transactions at the same time...
Still, my intuition is that it wouldn't work very well for large payments. I mean, if the average payment is $1 or $5 worth of value, but sometimes someone wants to buy a car or a house, how does that work out?
I suppose it would be possible to write a simulation to try to figure it out, although there may be a more elegant and mathematical way to think about it.
Have you thought about creating a whitepaper or some sort of website, or actually starting this kind of project? Or is it just a fun idea?
There will be a distribution of transaction values with a median less than the mean, as seen in credit cards, SWIFT, and all other payment systems. There's a long tail of large-value transactions that would have to wait longer. So if you buy a car, you'd have to wait for the equivalent of 1000 more car-valued transactions to occur, but this is helped by the ton of smaller-valued transactions backing you up.
If the maximum someone is willing to wait for a transaction to be confirmed is 1 hour, then the ceiling of acceptable transaction value is 0.1% of the sum of total value transacted per hour, but this gradually rises as the system gets more popular. If you think about it, a similar phenomenon happens in any payment scheme, more or less. Even Bitcoin would make you wait a long time for a trillion dollar transaction because the cost of mining a double-spend would take a long time to reach a trillion dollars. In the early days you wouldn't trust it with even a $10,000 transaction because anyone could farm out a huge mining operation.
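The 0.1% ceiling follows from the burial rule once you fix a fee rate. A quick back-of-envelope version in Python, where the 0.1% flat fee rate and the hourly volume are my assumed numbers (the fee rate matches the "wait for 1000 equal-valued transactions" figure above), not anything from the thread:

```python
# Assumption: flat 0.1% fee per tx, so a payment is "safe" once
# subsequent volume * fee_rate >= payment value.
fee_rate = 0.001            # 0.1% fee per transaction (assumed)
hourly_volume = 10_000_000  # $/hour transacted (hypothetical)

# If buyers will wait at most 1 hour, the largest confirmable payment
# is the fees accumulated in that hour.
max_wait_hours = 1
ceiling = fee_rate * hourly_volume * max_wait_hours
print(ceiling)  # 10000.0 -> a $10,000 payment is the 1-hour ceiling
```

As the text says, this ceiling rises automatically as total volume grows.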
I had some funding, worked on it for a year solid, built working prototypes, learned a lot, prepared for a whitepaper, but then I realized that an attacker could still profitably double-spend if they made a ton of small payments at the same block to different people. This would show up as a suspiciously-large transaction volume though...
I think it means that (worst case) to accept an avg payment you'd have to wait for 1000 avg blocks instead of 1000 avg transactions...? If the fee was 1%, instead, though, you'd only have to wait 100 blocks, so it becomes reasonable? I'm kind of stuck and have nobody to talk to about it...
Edit: You've inspired me to again work on this! Thanks!
Let's consider a "worst-case" scenario in which 100% of txs are made by the same attacker with intent to double-spend. Before generalizing, let's fix the numbers: 1000 txs/block, a payment of $100/tx, and a $1 fee/tx.
Block 1: The attacker sends 1000 txs (the "payments") worth $100 each to 1000 different merchants. He pays $1000 in fees. He stands to have revenues of 1000 x $100 = $100,000 if he can double-spend the payments. His payments are currently buried by 1000 x $1 = $1000 in their own fee costs, for a net profit of $99,000 if the merchants accept his payments right now. The merchants are cynical and rationally choose to wait until this block is buried by 1000 x 100 x $1 fee/tx = $100,000 in fees before accepting the payments. (It doesn't even matter if they know he's trying to double-spend or not).
Block 2: The attacker sends 1000 loopback txs to himself to bury his payments in another 1000 x $1 = $1000 of fees. $98,000 to go...
Block 3, 4, 5, ... 100: The attacker sends 1000 loopback txs to himself per block for 98 blocks to bury his payments in another 1000 x $1 x 98 = $98,000 of fees for a total of $100,000 in costs. The merchants are now satisfied with the payments and accept them as honest transactions. The merchants send the attacker $100,000 in goods.
At this point, the attacker replaces block 1 with another block that is just full of loopback txs to himself rather than payments to the merchant. He then replaces blocks 2-100 with a new set filled with loopbacks. Then he makes block 101 with one loopback tx to himself so as to outbury the old blockchain with an extra $1.
The attacker's current revenues are $100,000 in goods, but he had to pay $100,001 in fees to do it.
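The arithmetic in the scenario above can be checked with a few lines of Python (constants taken straight from the worked example):

```python
# Replaying the worked example: 1000 txs/block, $100/payment, $1 fee/tx.
TXS_PER_BLOCK = 1000
PAYMENT = 100
FEE = 1

revenue = TXS_PER_BLOCK * PAYMENT  # $100,000 in goods if the spend sticks
burial_target = revenue            # merchants wait for this much in fees

# Blocks 1..100 each contribute 1000 * $1 in fees until the payments
# are buried by their own value.
blocks_needed = burial_target // (TXS_PER_BLOCK * FEE)   # 100 blocks

# The replacement chain pays the same fees for blocks 1..100, plus one
# extra $1 loopback tx in block 101 to out-bury the old chain.
attack_fees = blocks_needed * TXS_PER_BLOCK * FEE + FEE  # $100,001

profit = revenue - attack_fees
print(profit)  # -1: the attacker ends up a dollar short
```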
We can generalize this to mean that to trust a transaction, worst-case, you have to wait until the total payment value of the block itself is buried by its own value in transaction fees. Therefore, you'd have to enforce minimum percentage fees, because if you allow low transaction fees, an attacker could send a huge loopback transaction that makes everyone else in the block wait a long, long time until enough fees bury it. Maybe you could solve this with fixed block sizes that force transactions to bid for a spot. I guess you have to do that anyway, though...
"Accept provides efficient horizontal scaling without any limitation. To that end, Accept satisfies a relaxed form of consensus and does not establish an ordering of unrelated transactions. Furthermore, Accept achieves instant finality and does not depend on a source of randomness."
Double-spending. Suppose some execution of the protocol produced two confirmed transactions t1 and t2 that spend the same output. Each confirmed transaction is signed by a validator quorum. Since the adversary controls at most f validators, at least f + 1 correct validators signed each of t1 and t2. Since there are only 2f + 1 correct validators, some correct validator v signed both t1 and t2. However, when signing a transaction, correct validators check that they have not previously signed any of its inputs – a contradiction.
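The pigeonhole step in that argument (two quorums of size 2f + 1 out of n = 3f + 1 validators must overlap in at least f + 1 members, so at least one common signer is correct) can be sanity-checked numerically:

```python
# Quorum-intersection check for n = 3f + 1, quorum size 2f + 1.
# Two quorums can miss each other by at most n - quorum validators each,
# so their minimum overlap is 2 * quorum - n.
for f in range(1, 100):
    n = 3 * f + 1
    quorum = 2 * f + 1
    min_overlap = 2 * quorum - n
    # The overlap exceeds the f possibly-faulty validators, so at
    # least one validator in both quorums is correct.
    assert min_overlap == f + 1
```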
- In addition to transaction data, each validator stores three sets: CURRENT, the current validator set; IN_PENDING, the set of clients who are to join the validator set; OUT_PENDING, the set of validators who are to leave the validator set.
- Validators support four additional requests: v_nominate, v_initialize, v_remove, v_eject.
- When a client wants to join the validator set, it sends the v_nominate request to all validators. Validators who agree add the client to IN_PENDING, sign the tuple (CURRENT, IN_PENDING) and reply.
- If the candidate client receives 2 * f + 1 signatures where f = (max |CURRENT| - 1) / 3 (maximization is over all responses), it sends the v_initialize request to all validators (along with the signatures). Validators receiving this request remove this candidate from IN_PENDING and add it to CURRENT.
- When a validator wants to remove a validator (can ask to remove itself) from the set, it sends the v_remove request to all validators. Validators who agree add the outgoing validator to OUT_PENDING, sign the tuple (CURRENT, OUT_PENDING) and reply.
- If the validator who originates the removal request receives 2 * f + 1 signatures where f = (max |CURRENT| - 1) / 3 (maximization is over all responses), it sends the v_eject request to all validators (along with the signatures). Validators receiving this request remove the outgoing validator from OUT_PENDING and CURRENT.
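A minimal sketch of the join half of this flow (v_nominate / v_initialize), with made-up data structures; "signatures" are faked as tuples, and a real version would verify actual cryptographic signatures and handle disagreement:

```python
# Hypothetical sketch of the proposed v_nominate / v_initialize flow.
class Validator:
    def __init__(self, name, current):
        self.name = name
        self.CURRENT = set(current)     # current validator set
        self.IN_PENDING = set()         # clients waiting to join

    def v_nominate(self, client):
        """Agree to nominate: record the client and 'sign' the state."""
        self.IN_PENDING.add(client)
        return (self.name, frozenset(self.CURRENT), frozenset(self.IN_PENDING))

    def v_initialize(self, client, sigs):
        """Promote the client once it presents 2f + 1 signatures,
        where f = (max |CURRENT| - 1) / 3 over all responses."""
        f = (max(len(cur) for (_name, cur, _pending) in sigs) - 1) // 3
        if len(sigs) >= 2 * f + 1:
            self.IN_PENDING.discard(client)
            self.CURRENT.add(client)

validators = [Validator(i, {0, 1, 2, 3}) for i in range(4)]  # n=4, f=1
sigs = [v.v_nominate("newcomer") for v in validators]        # 4 >= 2f+1 = 3
for v in validators:
    v.v_initialize("newcomer", sigs)
assert all("newcomer" in v.CURRENT for v in validators)
```

The v_remove / v_eject half is symmetric, with OUT_PENDING in place of IN_PENDING.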
Wouldn't arguments similar to the ones in the article also work for showing consensus on these sets?
(edited to fix formatting and typos)
- Adding or removing validators is not covered
- Validators must remember every spent output for every transaction they've ever signed. I get the impression that their benchmark implementation ignores persistence. Arguably that's ok when you're looking at throughput and not latency.
- Issuing conflicting transactions can easily make an output unspendable - a potential implementation hazard for clients if they e.g. automatically retry a transaction.
- I had some doubts about whether their sharding strategy not allowing inputs from multiple keys was realistic, but thinking about it, you can easily issue multiple transactions to gather funds into an intermediate key if necessary. The verifier doesn't, strictly speaking, need to store the outputs anywhere; it just has to remember that the input was spent, which is what makes it shard so nicely.
- You'd probably want to publish a transparency log to actually detect byzantine attacks
Edit: - the receiver must also verify that it hasn't received the same transaction before
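A tiny sketch of why the spent-input-only design shards nicely: a shard only needs the set of spent input ids that hash into its range, never the outputs themselves. Shard count and naming here are mine, not from the paper:

```python
# Hypothetical spent-set sharding: route each input id to a shard by
# hashing it, and have that shard sign only never-before-seen inputs.
import hashlib

NUM_SHARDS = 4

def shard_of(input_id: str) -> int:
    """Assign an input to a shard by hashing its id."""
    return int(hashlib.sha256(input_id.encode()).hexdigest(), 16) % NUM_SHARDS

spent = [set() for _ in range(NUM_SHARDS)]  # one spent-set per shard

def try_spend(input_id: str) -> bool:
    """Sign only if this input has never been spent on its shard."""
    shard = spent[shard_of(input_id)]
    if input_id in shard:
        return False  # conflicting spend: refuse to sign
    shard.add(input_id)
    return True

assert try_spend("alice:utxo-7") is True
assert try_spend("alice:utxo-7") is False  # double-spend rejected
```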
Overall A+ not a blockchain.
Crucially it includes benchmarks on real hardware, although the simulated configuration doesn't correspond to the real world in important ways, such as with latency and failures.
I also don't understand how sharding interacts with the quorum thresholds. Is there a separate q-of-n quorum for each shard? Or is sharding completely separate, i.e. must each validator run one separate process per shard?
As for the relationship to cryptocurrency systems, I think this paper is overly naive. The idea that the client should be responsible for proving the existence of a spendable input, and that commuting transactions don't need to be sequenced by the protocol, is not novel and exists in many cryptocurrency systems. The most well-known is probably the planned ETH 2.0 design, which uses receipts to send information between shards.
Also known as kiting.