
Catching Cheating Servers in Decentralized Storage Networks - siavosh
http://hackingdistributed.com/2018/08/06/PIEs/
======
Drdrdrq
Incentives are not aligned here. As a user, I'm not paying for a file to be
stored - I'm paying so it can be _retrieved_ anytime. As a space owner, I want
to maximize profit per unit of space, with the cheapest ( _not_ the most
reliable) disks possible.

This system would work much better if payment was tied to retrieval, not
storage.

~~~
hamandcheese
This system has its own set of issues though:

\- if I am storing data, how do I know I’ll get paid? What if the data is
never retrieved?

\- if I want data stored for later retrieval, how do I know it will continue
to be stored?

\- if my data is stored with only one remaining party, how do I make sure they
don’t learn this and hold my data hostage?

\- how are retrievals publicly verified?

~~~
niyikiza
My team -- and I'm sure a number of other teams -- are looking into different
approaches to implementing a "Proof of Retrievability". What we need is a
cost-effective way for a data owner to verify that their data would be
available intact should they need it. That's what we want to tie our
incentive policy for storage node owners to. And yes, a storage owner does not
need to know whose data it is, or which other nodes are storing replicas of
the same data.
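One textbook flavor of this (not necessarily what any of these teams is building; all names below are made up for illustration) is a keyed spot-check audit: before uploading, the owner computes HMAC tags over a handful of randomly chosen blocks and keeps only the (index, tag) pairs. Each later challenge spends one unused pair, so the server can't precompute answers:

```python
import hashlib
import hmac
import os
import random

BLOCK = 4096  # toy block size; real systems would tune this

def precompute_tags(data, key, n_challenges):
    """Owner side: pick random block indices and store an HMAC of each
    chosen block. Only the (index, tag) pairs are kept locally."""
    n_blocks = (len(data) + BLOCK - 1) // BLOCK
    indices = random.sample(range(n_blocks), min(n_challenges, n_blocks))
    return [(i, hmac.new(key, data[i * BLOCK:(i + 1) * BLOCK],
                         hashlib.sha256).digest())
            for i in indices]

def audit(tags, key, fetch):
    """Owner side: spend one unused (index, tag) pair on a challenge.
    `fetch(i)` asks the server for block i."""
    index, tag = tags.pop()
    block = fetch(index)
    return hmac.compare_digest(
        tag, hmac.new(key, block, hashlib.sha256).digest())

# Toy run: an honest server passes, a server that dropped the data fails.
key = os.urandom(32)
data = os.urandom(64 * BLOCK)
tags = precompute_tags(data, key, 10)
honest = lambda i: data[i * BLOCK:(i + 1) * BLOCK]
print(audit(tags, key, honest))           # True
print(audit(tags, key, lambda i: b"\0" * BLOCK))  # False
```

The obvious limitation, and the reason the article's PIE construction is more involved, is that the number of audits is bounded by the tags computed up front, and nothing here stops collaborating servers from sharing one copy.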

------
ghthor
I've been trying to figure this out as well. A few of my ideas.

Payouts could be based on having the largest unique set of hashes. This
should make a server eager to see data it hasn't seen before, and would
discourage it from dropping unique data, since that data is worth the most.
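A naive version of that payout rule (purely a sketch; the reporting format and reward pool here are made up) could split a pool by how many chunk hashes each server is the sole holder of:

```python
from collections import Counter

def unique_payouts(reports, pool):
    """Split a reward pool by how many chunk hashes each server is the
    *sole* holder of, so rare data is worth the most.
    `reports` maps server id -> set of chunk hashes it claims to store."""
    holders = Counter(h for hashes in reports.values() for h in hashes)
    unique_counts = {s: sum(1 for h in hashes if holders[h] == 1)
                     for s, hashes in reports.items()}
    total = sum(unique_counts.values())
    if total == 0:
        return {s: 0.0 for s in reports}
    return {s: pool * c / total for s, c in unique_counts.items()}

reports = {
    "alice": {"h1", "h2", "h3"},  # only alice holds h1
    "bob":   {"h2", "h3", "h4"},  # only bob holds h4
}
print(unique_payouts(reports, 100.0))  # alice and bob each get 50.0
```

This only sketches the incentive arithmetic: on its own it's gameable, since a server can claim hashes it doesn't actually store, so it would still need to be combined with a storage proof like the ones discussed in the article.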

------
runeks
> To do it properly, you need to store the file in multiple places since you
> don't trust any individual stranger's computer. But how do you differentiate
> between three honest servers with one copy each and three cheating servers
> with one copy total? Anything you ask one server about the file it can get
> from its collaborator.

How about just storing three copies of the same data, where each copy is
encrypted with a unique secret key (before sending it off to storage)? This
way the servers essentially can't tell that they're storing the same data
three times.
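A toy illustration of that idea: encrypting the same plaintext under three independent keys yields three ciphertexts that look unrelated to the servers. The keystream below is SHA-256 in counter mode purely to keep the sketch dependency-free; a real deployment would use a vetted cipher like AES-CTR or ChaCha20 from a proper library.

```python
import hashlib
import os

def keystream_xor(key, data):
    """Toy stream cipher: XOR the data with a SHA-256-in-counter-mode
    keystream. XOR is its own inverse, so the same call decrypts.
    For illustration only -- not a vetted cipher."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

plaintext = b"the same public file contents " * 200
keys = [os.urandom(32) for _ in range(3)]
copies = [keystream_xor(k, plaintext) for k in keys]
print(len(set(copies)))  # 3 -- the replicas are pairwise distinct
```

Each copy decrypts back with its own key, but as a1369209993 notes below, this only helps when the owner can keep the keys private; it doesn't work for public data everyone must be able to decode.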

~~~
a1369209993
As TFA mentions, this works fine for private storage, but we're trying to
incentivise people to back up publicly accessible data like the blockchain or
chunks of the internet archive. We want a way for possibly-malicious servers
to prove that they're storing distinct bottom-layer encodings of data they
_do_ know is the same.

~~~
irq-1
So encrypt large files, not blocks. Then you ask for random segments: either
the file is actually stored, or the cheater has to re-encrypt a large file
over and over, which would be much more expensive than storage. (And if it's
not more expensive, make the files larger, the requests more frequent, etc.)
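The owner's side of that scheme can be sketched like this (segment size and helper names are made up): hash each segment of the ciphertext once up front, then each audit asks for one random segment and checks it against its digest. A server that kept only the plaintext would have to redo the whole encryption to answer, which is the cost asymmetry being relied on here.

```python
import hashlib
import os
import random

SEGMENT = 1 << 16  # 64 KiB segments; tune in practice

def segment_digests(ciphertext):
    """Owner: hash every segment of the encrypted file once, up front."""
    return [hashlib.sha256(ciphertext[i:i + SEGMENT]).digest()
            for i in range(0, len(ciphertext), SEGMENT)]

def challenge(digests, fetch_segment):
    """Owner: request one random segment and compare it to its digest.
    Unlike the HMAC scheme, digests can be reused indefinitely, but a
    server could also precompute them -- hence the reliance on the cost
    of re-encryption rather than on secrecy."""
    i = random.randrange(len(digests))
    return hashlib.sha256(fetch_segment(i)).digest() == digests[i]

ciphertext = os.urandom(32 * SEGMENT)  # stand-in for the encrypted file
digests = segment_digests(ciphertext)
honest = lambda i: ciphertext[i * SEGMENT:(i + 1) * SEGMENT]
print(challenge(digests, honest))  # True
```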

------
wyldfire
> What happens if it's a public good like blockchain state?

Can't these nodes use pruning so that this state is only needed for "full
nodes"? (Yes, this does mean less decentralization.) Aren't full nodes already
somewhat incentivized to store and back up the 75GB by the coin's stakers?

> And worse, we don't trust anyone to encrypt it properly. We need something
> totally different that anyone can check and anyone can decode.

We don't trust anyone but we might trust "everyone". If you had a backup that
was signed by several devs or community leaders, it would probably be "good
enough."

I think the solution is interesting but the problem less so.

