
Actually there is a great way to scale on-chain: UTXO set commitments allow nodes to simply discard unnecessary blockchain data (in a way not too dissimilar to classic "pruning"). The data set maintained by nodes would then grow roughly linearly with the UTXO set instead of with the total number of transactions. E.g. right now it would cut 150 GB down to about 3 GB.

Sadly it seems under-researched.
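
A rough sketch of the idea in Python (hypothetical helper names, just to illustrate; real proposals such as utreexo or rolling UTXO set hashes differ in the details): the node keeps only the unspent outputs plus a hash commitment to that set, and verifies a downloaded snapshot against the commitment instead of replaying all history.

    import hashlib

    def utxo_leaf(outpoint: bytes, amount_and_script: bytes) -> bytes:
        # One leaf per unspent output (txid:vout -> amount/scriptPubKey).
        return hashlib.sha256(outpoint + amount_and_script).digest()

    def utxo_commitment(leaves: list[bytes]) -> bytes:
        # Plain binary Merkle root over the UTXO set; real designs vary.
        if not leaves:
            return hashlib.sha256(b"").digest()
        level = list(leaves)
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])  # duplicate last node on odd levels
            level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
        return level[0]

    # A new node that trusts a commitment tied to a recent block only needs
    # the ~3 GB UTXO set itself: it recomputes this root over the snapshot it
    # downloaded instead of validating 150 GB of history.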



The UTXO set is one issue with a large number of transactions. The far bigger one is the block size you would need for Visa-level transaction volumes (1 GB / 10 min was mentioned).

And it's not only an issue for end users (you couldn't run a Bitcoin client on your phone; you'd burn through your data plan within a couple of days), but also for miners: they'd be far more likely to mine on top of an old block, leading to stale blocks.
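
For scale, here's the bandwidth arithmetic behind that figure, assuming you download every 1 GB block in full:

    # Back-of-the-envelope data usage for 1 GB blocks every 10 minutes.
    block_size_gb  = 1
    blocks_per_day = 24 * 60 // 10            # 144 blocks per day
    daily_gb       = block_size_gb * blocks_per_day
    print(daily_gb, daily_gb * 30)            # 144 GB/day, ~4320 GB/month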

There was a talk a while ago about how the Bitcoin developers improved the propagation of new blocks within the network, and it basically eliminated stale blocks (at least for a while).


«issue for end-users»

It's not really an issue. End users always have the option of running in SPV mode, where they don't download full blocks.
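
For reference, an SPV client only downloads the ~80-byte block headers plus a Merkle branch for its own transactions. A minimal sketch of the verification step, with an assumed branch format (not real wallet code):

    import hashlib

    def dsha256(data: bytes) -> bytes:
        # Bitcoin hashes Merkle tree nodes with double SHA-256.
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def spv_verify(txid: bytes, branch, merkle_root: bytes) -> bool:
        # branch: list of (sibling_hash, sibling_is_right) pairs, leaf to root.
        node = txid
        for sibling, sibling_is_right in branch:
            node = dsha256(node + sibling) if sibling_is_right else dsha256(sibling + node)
        return node == merkle_root

So the client's bandwidth grows with the header chain and its own transactions, not with the block size.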

«They'd be far more likely to mine on top of an old block»

Not really. First of all, a 1 GB block today would typically be transmitted as only ~12-15 MB thanks to Compact Blocks (~6 bytes per transaction ID). Graphene would cut this down further to 3-5 MB (http://forensics.cs.umass.edu/graphene/). Secondly, with current software and current hardware we can still bump the block size ~100× while keeping block verification time low enough to be workable. Then, with software optimizations and another few years of hardware and network-bandwidth improvements, we can get another 10×.
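
The ~12-15 MB figure checks out against some assumed average transaction sizes (the 400-500 byte averages are my assumption, not from the compact-block spec):

    # Sanity check of the compact-block announcement size for a 1 GB block.
    block_bytes    = 1_000_000_000       # 1 GB block
    short_id_bytes = 6                   # short transaction IDs, as mentioned above
    for avg_tx_bytes in (400, 500):
        tx_count = block_bytes // avg_tx_bytes
        print(avg_tx_bytes, tx_count, tx_count * short_id_bytes / 1e6, "MB")
    # ~400-byte txs -> 2.5M txs -> 15 MB; ~500-byte txs -> 2.0M txs -> 12 MB

Nodes still download each transaction once as it's relayed, but the block announcement itself stays small.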


ISPs have to move all that bandwidth as well. Netflix has servers at the ISP level, which I'd imagine they pay for; I don't think there is a way to do that with Bitcoin's data.



