
Unsolved Problems in Blockchain Sharding - fagnerbrack
https://medium.com/nearprotocol/unsolved-problems-in-blockchain-sharding-2327d6517f43
======
_nalply
I am (perhaps naively) wondering whether a blockchain can be sharded
hierarchically instead.

Each shard works like a channel in the Lightning Network, and a channel can
have sub-channels. The channels all have their own blockchain, i.e. they are
blockchain shards. Verifying a transaction from A to B is like a tree walk
from tree node A to tree node B: first verify the channel A is in, then go up
through the parent channels until the common parent channel is reached, then
descend again to partner B's channel, verifying at each step while ascending
and descending.
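
Roughly, that walk amounts to finding the lowest common ancestor of A's and
B's shards and verifying every shard on the path. A minimal sketch of the
idea (the Shard class and the example tree here are made up for
illustration, not from any real design):

    # Hypothetical sketch: each shard is a node in a tree, and verifying a
    # transaction from A to B means checking every shard on the path
    # A -> lowest common ancestor -> B.

    class Shard:
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent

        def ancestors(self):
            """Return [self, parent, ..., root]."""
            chain, node = [], self
            while node is not None:
                chain.append(node)
                node = node.parent
            return chain

    def verification_path(a, b):
        """Shards to verify for a transaction between shards a and b."""
        a_chain, b_chain = a.ancestors(), b.ancestors()
        a_ids = {id(s) for s in a_chain}
        # Walk up from b until we hit the lowest common ancestor.
        lca_index = next(i for i, s in enumerate(b_chain) if id(s) in a_ids)
        lca = b_chain[lca_index]
        up = a_chain[:a_chain.index(lca) + 1]        # A up to the LCA
        down = list(reversed(b_chain[:lca_index]))   # below the LCA, down to B
        return up + down

    # Example tree: root -> {x, y}, x -> a, y -> b
    root = Shard("root")
    x, y = Shard("x", root), Shard("y", root)
    a, b = Shard("a", x), Shard("b", y)
    print([s.name for s in verification_path(a, b)])  # ['a', 'x', 'root', 'y', 'b']

For a balanced tree of n shards the path, and hence the number of shards
touched per transaction, is O(log n).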

~~~
SkidanovAlex
That is somewhat similar to what Vlad Zamfir's sharding is doing
(https://medium.com/nearprotocol/so-what-exactly-is-vlads-sharding-poc-doing-37e538177ed9).

For a given transaction you'd only need to verify log(n) chains, but the
problems from the top-level post remain. The question is how you verify the
transaction in a given chain. If you verify the entire chain, then you can
expect to end up verifying all the chains (since ultimately cross-shard
transactions will be coming from all shards), removing any benefit of
sharding. If you somehow trust that the shards were doing their job properly,
then you need to somehow deal with shards being corrupted.
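
To make the transitive-verification point concrete, here is a toy simulation
(the shard count, block count, and traffic pattern are invented): each shard
accepts one random cross-shard transaction per block, and we count how many
shards you would have to fully validate in order to trust just one of them.

    # Toy simulation: why verifying one chain in full drags in every other
    # chain. incoming[s] records which shards ever sent a cross-shard
    # transaction into shard s.

    import random

    random.seed(0)
    NUM_SHARDS, BLOCKS = 100, 50

    incoming = {s: set() for s in range(NUM_SHARDS)}
    for _ in range(BLOCKS):
        for s in range(NUM_SHARDS):
            # One cross-shard transaction per shard per block.
            incoming[s].add(random.randrange(NUM_SHARDS))

    def full_verification_set(start):
        """All shards you must validate to fully validate 'start'."""
        seen, stack = {start}, [start]
        while stack:
            for src in incoming[stack.pop()]:
                if src not in seen:
                    seen.add(src)
                    stack.append(src)
        return seen

    print(len(full_verification_set(0)))  # close to 100: effectively every shard

The transitive closure covers essentially the whole network after a handful
of blocks, which is the "removing any benefit of sharding" outcome.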

~~~
_nalply
I am probably naive again, but it seems to me that you don't need to verify
other transactions, only the one between A and B. You could ignore the
shards/channels not affected by the transaction.

Another optimisation would be for an exchange, for example, to pull all of
its local transactions into the same channel. Or people could join the
retailer's channel before paying. The idea is to avoid ascending as much as
possible. In other words, a variant of the Lightning Network!

------
SkidanovAlex
Alex from Near is here, the author of the post. Would be happy to answer any
questions.

------
jillesvangurp
It currently takes several weeks to spin up a Stellar validator. Last time I
checked, you needed several days to a week to do the same for Ethereum, and
that has probably not improved. Both are growing rapidly, and it's becoming a
problem for adoption.

Both have on-disk data well over a TB. And it's early days. This stuff is
getting progressively less feasible for normal users to run as more people
start using it and adding their own transactions. Long term, sharding is not
going to be optional.

~~~
ourmandave
Blockchain just suffers from the classic dilemma of being ahead of its time.

Once we have infinite power from practical fusion we can mine coins for cheap.

And when quantum computers are common you can update the chain or do sharding
in no time at all.

~~~
imtringued
Mining a Bitcoin block takes 10 minutes on average, and each block can only
handle around 5000 transactions, because that's what Bitcoin was designed to
do. It doesn't matter whether you're powering all the miners on Earth with a
Dyson sphere or a single hamster wheel: the difficulty adjusts, so it will
still take 10 minutes.
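
That behaviour comes from difficulty retargeting: every 2016 blocks, Bitcoin
rescales the proof-of-work target so that the average spacing stays at ten
minutes regardless of total hashpower. A simplified sketch of the rule (the
real implementation works on compact-encoded targets, but the arithmetic is
the same):

    # Simplified sketch of Bitcoin's retargeting rule.

    TARGET_SPACING = 10 * 60                 # seconds per block
    RETARGET_INTERVAL = 2016                 # blocks per adjustment window
    EXPECTED_TIMESPAN = TARGET_SPACING * RETARGET_INTERVAL   # two weeks

    def retarget(old_target, actual_timespan):
        """Return the new proof-of-work target (bigger target = easier)."""
        # Bitcoin clamps each adjustment to a factor of 4.
        actual_timespan = max(EXPECTED_TIMESPAN // 4,
                              min(actual_timespan, EXPECTED_TIMESPAN * 4))
        return old_target * actual_timespan // EXPECTED_TIMESPAN

    # If hashpower doubles, the window takes ~1 week instead of 2, so the
    # target halves (difficulty doubles) and spacing returns to 10 minutes.
    old = 1 << 220
    print(retarget(old, EXPECTED_TIMESPAN // 2) / old)  # 0.5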

------
Donzo
Thanks for expressing these problems clearly and succinctly.

Does the Fisherman approach involve validators randomly checking other
validators for phony transactions?

Will validators be incentivized to do this?

Is that why they are called fishermen? Because honest validators are "fishing"
around for opportunities to catch invalid transactions and increase their
stakes?

How long is the challenge period currently being discussed?

The other attack vector that the fisherman approach opens is "griefing"
attacks, whereby malicious nodes launch a series of false reports, knowingly
sacrificing their stakes, presumably to overwhelm the system and push through
a double-spend or a phony token mint or something.

Am I interpreting how this attack works correctly?

Has any thought been given to making the challenge period as long as it needs
to be to process all challenges, and incentivizing goodwill from those
affected by the slowdowns with a fractional return of the slashed stakes...
Maybe like a reverse gas cost? Is that insane?

~~~
drcode
The main issue with fishermen is that they can't help much with data
availability issues, because an attacker can just make the data available
later during the arbitration process, which leads to a vector for DOS attacks
against the arbitration system.
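
A toy model of that failure mode (the names and API here are invented; the
real challenge game is more involved): the arbitrator can only observe
whether the data is available now, not whether it was withheld in the past,
so an attacker who reveals withheld data mid-arbitration makes an honest
challenge look false.

    # Toy model of the data-availability arbitration problem.

    from dataclasses import dataclass

    @dataclass
    class Block:
        data_published: bool   # was the block body available when produced?

    def arbitrate(block, attacker_reveals_late):
        """Resolve a fisherman's 'data unavailable' challenge."""
        available_now = block.data_published or attacker_reveals_late
        # If the data is available at arbitration time, the challenge fails.
        return "challenge rejected" if available_now else "block slashed"

    # An honest fisherman challenges a genuinely withheld block...
    b = Block(data_published=False)
    # ...but the attacker publishes the data once the challenge lands.
    print(arbitrate(b, attacker_reveals_late=True))   # challenge rejected

If rejected challenges cost the challenger their stake, honest fishermen get
griefed; if they cost nothing, anyone can flood the arbitration system with
bogus challenges. Either way it is a DOS surface.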

~~~
Donzo
So the data availability problem renders the fisherman solution useless in a
pragmatic sense?

~~~
Donzo
Has sharding just recreated the double-spend problem by shattering the
consensus mechanism that solved it into 100 pieces?

Are we now free-floating, hoping to find a new solution to the double-spend
problem on a fragmented system?

------
api
If complexity is increasing you are going in the wrong direction _unless_ that
complexity maps directly to a real problem or constraint domain.

Cryptocurrencies are just simple accounting systems and they run on general
purpose computers, so no and no.

I don't think this whole direction works unless you can find an approach that
doesn't add much complexity or even simplifies things in some way. I feel the
same way about byzantine proof of stake schemes.

This is definitely an engineering intuition that I have, but I think I can
ground it to some extent in thermodynamics and learning theory. If the
universe isn't mandating complexity, it's superfluous.

------
wpietri
This is interesting, but where the article falls down for me is in tying it
back to actual real-world use cases. I find it very difficult to think about
technical trade-offs without being able to tie them back to the actual
interests of the people the system serves. It becomes too easy to solve
technical problems in ways that do not solve the human problems the system is
meant to address.

------
Ar-Curunir
Small correction: SNARKs _can_ be used for general computations. Efficiency
takes a hit, though.

