A Limitlessly Scalable Transaction System (arxiv.org)
30 points by belter 43 days ago | 25 comments



> Accept satisfies a relaxed form of consensus and does not establish an ordering of unrelated transactions.

Awesome! Requiring total order for everyday transactions is like running a Fortune 500 company's accounting year-end close checklist every time I buy a cup of coffee: it's wasteful and pointless.

IMO, instead of imposing total order from the top down, we should grow total order organically by iteratively building up and combining sets of locally ordered transactions.

Edit: OK, so I read through the paper and it looks OK, perhaps even "obvious" (though I dislike how that term can minimize good contributions), except for the assumption about validators: "we assume up to f = (n−1)/3 of the validators are adversarial and behave in an arbitrary way". How do you control the number of validators? They are cheap to run (they just locally verify and sign transactions, after all), so what stops the adversary from flooding the network with n*10 hostile validators and thus invalidating the assumption?


It's a permissioned system. A bank, basically.


So, validators are centrally controlled? I guess that'll be nice for existing real-world institutions or Ripple or something.


Yeah. The developers hardcode (e.g.) 100 validators into the client. Transactions need to be signed by 67 of them. Why not require only 51?

Maybe there could be a permissionless way to get all clients to deterministically agree on who the validators are...maybe using something called proof of work...

I've considered a completely different system called proof-of-fee that has no miners or stakers:

- Each transaction pays a fee (that is either redistributed or burned)

- The best blockchain is the one with the most fees paid

- Each transaction signs the previous block

- The blockchain length is limited to 1 block per 15 seconds

- Nodes put their own transactions into the current block, or start a new block with them (depending on whether the chain length is maxed out), and broadcast it. No miners or stakers; everyone's equal.
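A minimal sketch of that fork-choice rule in Python (Block, chain_fees, and best_chain are made-up names; it just assumes fees are summed along each candidate chain):

  # Hypothetical sketch of the "most fees paid" fork-choice rule described above.
  # Block and best_chain are illustrative names, not from any real implementation.
  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class Block:
      txs: list                        # each tx is a dict with at least a "fee" field
      parent: Optional["Block"] = None

  def chain_fees(tip: Block) -> float:
      # Total fees paid along the chain ending at `tip`.
      total = 0.0
      while tip is not None:
          total += sum(tx["fee"] for tx in tip.txs)
          tip = tip.parent
      return total

  def best_chain(tips: list) -> Block:
      # "The best blockchain is the one with the most fees paid."
      return max(tips, key=chain_fees)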

Therefore, to replace a buried transaction (i.e., to double-spend), you'd need to pay more in fees than all the transactions in the blocks burying your old one paid combined. This would be unprofitable for deeply buried transactions.

There would be a lot of reorganization going on around the block boundary at the 15-second mark, but that's an engineering problem.


Wouldn't this make double spend attacks very practical? I mean, if I spend $X, someone is incentivized to steal if the fee I pay is less than $X. I realize this gets better as more blocks are added to the end of the chain. But it seems like this system would never work for relatively large payments. It may work fine for small value payments. It's illustrative to think about the amount of bitcoin transferred in a block vs. the amount of fees paid. The former enormously dominates the latter.

I say this not to criticize, but to continue the discussion. Your idea is interesting.


It gets better as more transactions are added to the end of the chain (in any later blocks). If fees are fixed at 0.1%, the average transaction only has to be buried by 1000 more average transactions. Most transactions are smaller than average, so they don't have to wait that long.

This also means that transactions are buried faster the more heavily the blockchain is used. For example, you'd only have to wait one block if there were 1000 transactions in it.
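A back-of-the-envelope version of that math (burial_txs_needed is just a made-up helper):

  # How many later transactions must bury a payment before the fees behind it
  # exceed the payment's own value? (Assumes a flat percentage fee.)
  def burial_txs_needed(payment_value, fee_rate, avg_tx_value):
      return payment_value / (fee_rate * avg_tx_value)

  # An average-sized payment at a 0.1% fee needs about 1000 average transactions:
  print(burial_txs_needed(payment_value=100, fee_rate=0.001, avg_tx_value=100))  # 1000.0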

The only problem is if the double-spender is trying to double-spend 100 transactions at the same time...


I appreciate the mathematical precision there.

Still, my intuition is that it wouldn't work very well for large payments. I mean, if the average payment is $1 or $5 worth of value, but sometimes someone wants to buy a car or a house, how does that work out?

I suppose it would be possible to write a simulation to try to figure it out, although there may be a more elegant and mathematical way to think about it.

Have you thought about creating a whitepaper or some sort of website, or actually starting this kind of project? Or is it just a fun idea?


Nothing is free. The energy input of this system is the time-value of the coins in payments waiting to be buried by succeeding blocks. It's kind of like proof-of-stake, but the stakers are the transactors themselves.


Basically, how big of a payment you can do depends on how much volume there is. It's a positive feedback loop, though.

There will be a distribution of transaction values with a median less than the mean, as seen in credit cards, SWIFT, and all other payment systems. There's a long tail of large-value transactions that would have to wait longer. So if you buy a car you'd have to wait for the equivalent of 1000 more car-valued transactions to occur, but this is helped by the ton of smaller-valued transactions backing you up.

If the maximum someone is willing to wait for a transaction to be confirmed is 1 hour, then the ceiling of acceptable transaction value is 0.1% of the total value transacted per hour, but this gradually rises as the system gets more popular. If you think about it, a similar phenomenon happens in any payment scheme, more or less. Even Bitcoin would make you wait a long time for a trillion-dollar transaction, because the cost of mining a double-spend would take a long time to reach a trillion dollars. In the early days you wouldn't have trusted it with even a $10,000 transaction, because anyone could farm out a huge mining operation.
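To put a rough formula on that ceiling (my own sketch, illustrative numbers only):

  # Largest payment you could reasonably accept if you're willing to wait
  # `wait_hours`, given the fee rate and the system's hourly volume.
  def max_safe_payment(fee_rate, hourly_volume, wait_hours):
      return fee_rate * hourly_volume * wait_hours

  # e.g. 0.1% fees, $10M/hour of volume, 1 hour of patience -> $10,000
  print(max_safe_payment(0.001, 10_000_000, 1))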

I had some funding, worked on it for a year solid, built working prototypes, learned a lot, and prepared a whitepaper, but then I realized that an attacker could still profitably double-spend if they made a ton of small payments in the same block to different people. This would show up as a suspiciously large transaction volume, though...

I think it means that (worst case) to accept an avg payment you'd have to wait for 1000 avg blocks instead of 1000 avg transactions...? If the fee was 1%, instead, though, you'd only have to wait 100 blocks, so it becomes reasonable? I'm kind of stuck and have nobody to talk to about it...

Edit: You've inspired me to again work on this! Thanks!

Let's consider this "worst-case" scenario, where 100% of txs are made by the same attacker with an intent to double-spend. Before generalizing, let's fix the numbers at 1000 txs/block, a payment of $100/tx, and a $1 fee/tx.

Block 1: The attacker sends 1000 txs (the "payments") worth $100 each to 1000 different merchants. He pays $1000 in fees. He stands to gain revenues of 1000 x $100 = $100,000 if he can double-spend the payments. His payments are currently buried by 1000 x $1 = $1000 in their own fee costs, for a net profit of $99,000 if the merchants accept his payments right now. The merchants are cynical and rationally choose to wait until this block is buried by 1000 txs/block x 100 blocks x $1 fee/tx = $100,000 in fees before accepting the payments. (It doesn't even matter whether they know he's trying to double-spend.)

Block 2: The attacker sends 1000 loopback txs to himself to bury his payments in another 1000 x $1 = $1000 of fees. $98,000 to go...

Block 3, 4, 5, ... 100: The attacker sends 1000 loopback txs to himself per block for 98 blocks to bury his payments in another 1000 x $1 x 98 = $98,000 of fees for a total of $100,000 in costs. The merchants are now satisfied with the payments and accept them as honest transactions. The merchants send the attacker $100,000 in goods.

At this point, the attacker replaces block 1 with another block that is just full of loopback txs to himself rather than payments to the merchant. He then replaces blocks 2-100 with a new set filled with loopbacks. Then he makes block 101 with one loopback tx to himself so as to outbury the old blockchain with an extra $1.

The attacker's current revenues are $100,000 in goods, but he had to pay $100,001 in fees to do it.

We can generalize this to mean that, to trust a transaction in the worst case, you have to wait until the total payment value of its block is buried by its own value in transaction fees. Therefore, you'd have to enforce minimum percentage fees, because if you allow low transaction fees an attacker could send a huge loopback transaction that makes everyone else in the block have to wait a long, long time until enough fees bury that block. Maybe you could solve this using fixed block sizes that force transactions to bid for a spot. I guess you have to do that anyway, though...
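For what it's worth, the arithmetic in that scenario can be reproduced in a few lines (numbers taken straight from the example above):

  # Worst case from the scenario above: the attacker fills every block, and the
  # merchants wait until the payment block is buried by its own value in fees.
  TXS_PER_BLOCK = 1000
  PAYMENT = 100      # $ per payment tx
  FEE = 1            # $ fee per tx (1% in this example)

  revenue = TXS_PER_BLOCK * PAYMENT               # $100,000 in goods if the double-spend works
  fees_per_block = TXS_PER_BLOCK * FEE            # $1,000 of loopback fees per block
  blocks_to_bury = revenue // fees_per_block      # 100 blocks until merchants accept

  # To rewrite history the attacker must out-fee the honest chain by at least $1.
  attack_cost = blocks_to_bury * fees_per_block + 1
  print(revenue, attack_cost)                     # 100000 100001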


Yep, in blockchainland we call this a "proof of authority" blockchain.


"We present Accept, a simple, asynchronous transaction system that achieves perfect horizontal scaling. Usual blockchain-based transaction systems come with a fundamental throughput limitation as they require that all (potentially unrelated) transactions must be totally ordered. Such solutions thus require serious compromises or are outright unsuitable for large-scale applications, such as global retail payments.

Accept provides efficient horizontal scaling without any limitation. To that end, Accept satisfies a relaxed form of consensus and does not establish an ordering of unrelated transactions. Furthermore, Accept achieves instant finality and does not depend on a source of randomness"

https://arxiv.org/pdf/2108.05236.pdf


Haven't read the paper yet, but there are a number of consensusless protocols out there already: AT2 and Fastpay. Since there's no total ordering you can't do smart contracts on them, but payments are fast and scale. I have a feeling the payment chain of Avalanche (the X-chain) is something like that as well.


Why do smart contracts require a total ordering of unrelated transactions?


Because these transactions mutate the state of the contract. It’s exactly the problem you have when doing updates on distributed databases.
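A toy illustration (invented contract, not from the paper): two calls that look unrelated still race on shared state, so replicas must agree on their order.

  # Two contract calls that do not commute: the final state depends on
  # which one a replica applies first.
  state = {"price": 100}

  def double_price(s):   # submitted by user A
      s["price"] *= 2

  def add_fee(s):        # submitted by user B
      s["price"] += 10

  # A then B -> 210, B then A -> 220. Without a total order,
  # different replicas could end up with different contract state.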


I think the title/abstract are seriously overstating the result -- the validators still have to hold onto what is essentially the set of spent outputs, which grows linearly with the number of transactions processed by the system.


Wouldn't they only need to store the outputs that they themselves signed, in order to prevent themselves from signing the same one twice?


Yes but that's two-thirds of the total number of transactions, since no transaction is final until two-thirds of validators have signed it.


The "correctness" section is very cool:

  Double-spending. Suppose some execution of the protocol produced two confirmed transactions t1 and t2 that spend the same output. Each confirmed transaction is signed by a validator quorum. Since the adversary controls at most f validators, at least f +1 correct validators signed t1 and t2. Since there are 2f +1 correct validators, some correct validator v signed both t1 and t2. However, when signing a transaction, correct validators check whether they have not signed any of the inputs previously – a contradiction.
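The intersection step is just counting (my own restatement, assuming n = 3f + 1 validators and quorums of size 2f + 1):

  |Q1 ∩ Q2| >= |Q1| + |Q2| - n = (2f + 1) + (2f + 1) - (3f + 1) = f + 1

Any two quorums therefore share at least f + 1 validators, and since at most f of those can be adversarial, at least one correct validator signed both transactions.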


Does the set of validators really need to be static (or fixed-size)? I may be missing something obvious, but it seems like we can also support a dynamic set. Consider the following scheme:

- In addition to transaction data, each validator stores three sets: CURRENT, the current validator set; IN_PENDING, the set of clients who are to join the validator set; OUT_PENDING, the set of validators who are to leave the validator set.

- Validators support four additional requests: v_nominate, v_initialize, v_remove, v_eject.

- When a client wants to join the validator set, it sends the v_nominate request to all validators. Validators who agree add the client to IN_PENDING, sign the tuple (CURRENT, IN_PENDING) and reply.

- If the candidate client receives 2 * f + 1 signatures where f = (max |CURRENT| - 1) / 3 (maximization is over all responses), it sends the v_initialize request to all validators (along with the signatures). Validators receiving this request remove this candidate from IN_PENDING and add it to CURRENT.

- When a validator wants to remove a validator (can ask to remove itself) from the set, it sends the v_remove request to all validators. Validators who agree add the outgoing validator to OUT_PENDING, sign the tuple (CURRENT, OUT_PENDING) and reply.

- If the validator who originates the removal request receives 2 * f + 1 signatures where f = (max |CURRENT| - 1) / 3 (maximization is over all responses), it sends the v_eject request to all validators (along with the signatures). Validators receiving this request remove the outgoing validator from OUT_PENDING and CURRENT.

Wouldn't arguments similar to the ones in the article also work for showing consensus on these sets?
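To make the bookkeeping concrete, here's a rough validator-side sketch of the join half of that scheme (all names hypothetical; signature checking, networking, and the symmetric v_remove/v_eject path are elided):

  # Hypothetical sketch of the join path (v_nominate / v_initialize) only.
  def sign(message):
      # Placeholder for the validator's signature over `message`.
      return ("sig", message)

  class Validator:
      def __init__(self, initial_validators):
          self.current = set(initial_validators)   # CURRENT
          self.in_pending = set()                  # IN_PENDING

      def quorum(self):
          # 2f + 1 with f = (|CURRENT| - 1) // 3 (the max-over-responses
          # refinement described above is omitted for brevity).
          f = (len(self.current) - 1) // 3
          return 2 * f + 1

      def v_nominate(self, candidate):
          # A validator that agrees records the candidate and signs the pair.
          self.in_pending.add(candidate)
          return sign((frozenset(self.current), frozenset(self.in_pending)))

      def v_initialize(self, candidate, signatures):
          # The candidate presents 2f + 1 signatures and joins CURRENT.
          assert len(signatures) >= self.quorum()
          self.in_pending.discard(candidate)
          self.current.add(candidate)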

(edited to fix formatting and typos)


That's surprisingly simple for a Byzantine fault-tolerant protocol. Some random thoughts:

- Adding or removing validators is not covered

- Validators must remember every spent output for every transaction they've ever signed. I get the impression that their benchmark implementation ignores persistence. Arguably that's ok when you're looking at throughput and not latency.

- Issuing conflicting transactions can easily make an output unspendable - a potential implementation hazard for clients if they e.g. automatically retry a transaction.

- I had some doubts about whether their sharding strategy, which doesn't allow inputs from multiple keys, was realistic, but thinking about it, you can easily issue multiple transactions to gather funds into an intermediate key if necessary. The verifier doesn't, strictly speaking, need to store the outputs anywhere; it just has to remember that the input was spent, which is what makes it shard so nicely.

- You'd probably want to publish a transparency log to actually detect Byzantine attacks

e: - the receiver also must verify that it hasn't received the same transaction before

Overall A+ not a blockchain.


Cool, do it on a federated Bitcoin sidechain using OP_CTV or something, integrate MimbleWimble or some anonymizing tech, use a Spacechain, burn the bitcoins going in to create a perpetual one-way peg (protecting the peg from perverse incentives), and then you've got yourself a decentralized global stablecoin?


I read the paper. It's well written, short, accessible, and answers most questions a reader would have about the claims. It introduces a complete system and evaluates the performance tradeoffs involved in using different cryptographic primitives. I think this paper shows that digital currency systems do not need to be complicated to be fast.

Crucially, it includes benchmarks on real hardware, although the simulated configuration doesn't correspond to the real world in important ways, such as latency and failures.

I also don't understand how sharding interacts with the quorum thresholds. Is there a separate q-of-n quorum for each shard? Or is sharding completely separate, i.e., must each validator run one separate process per shard?

As for the relationship to cryptocurrency systems, I think this paper is overly naive. The idea that the client should be responsible for proving the existence of a spendable input, and that commuting transactions don't need to be sequenced by the protocol, is not novel and exists in many cryptocurrency systems. The best known is probably the planned ETH 2.0 system, which uses receipts to send information between shards.


The quorum is over the n validators. The sharding is an implementation detail of a validator: the protocol is designed to allow a sharded implementation, but does not require it; you could just store the spent outputs in one big table.
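As a sketch (my own toy code, not the paper's), the spent-input table inside one validator could be sharded by hashing the input id:

  # A validator's spent-input store, sharded by hashing the input id.
  # Each shard only has to answer "have I signed this input before?".
  import hashlib

  NUM_SHARDS = 16
  shards = [set() for _ in range(NUM_SHARDS)]

  def shard_for(input_id: bytes) -> set:
      h = int.from_bytes(hashlib.sha256(input_id).digest()[:8], "big")
      return shards[h % NUM_SHARDS]

  def try_spend(input_id: bytes) -> bool:
      # True (and mark spent) only the first time an input is seen.
      shard = shard_for(input_id)
      if input_id in shard:
          return False   # would be a double spend; refuse to sign
      shard.add(input_id)
      return True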


Can someone explain to me why ordering matters at all in financial systems? Transactions are idempotent by their nature, and in this domain I just don't see the value of imposing any kind of order. Am I missing something?


So you'll let me deposit $5 with you and you'll cover $5000000 in withdrawals over the next 6 months? There's $6000000 coming your way, I swear.

Also known as kiting.




