
Maidsafe is working on a decentralized network - markmassie
http://techcrunch.com/2014/07/23/maidsafe/
======
sbierwagen
[http://en.wikipedia.org/wiki/Freenet](http://en.wikipedia.org/wiki/Freenet)

    
    
      Freenet is a peer-to-peer platform for censorship-resistant 
      communication. It uses a decentralized distributed data store to 
      keep and deliver information, and has a suite of free software 
      for publishing and communicating on the Web without fear of 
      censorship.[4][5]:151 Both Freenet and some of its associated 
      tools were originally designed by Ian Clarke, who defined 
      Freenet's goal as providing freedom of speech on the Internet 
      with strong anonymity protection.[6][7]
      
      Freenet has been under continuous development since 2000.
    

I ran a Freenet node for quite a while. I eventually stopped for two reasons:

1.) Freenet is really astonishingly slow. Think ten kilobits per second of
transfer, and tens of seconds of latency. It doesn't fit the www user
interface very well at all. It would probably need to maintain a dozen copies
of every file in order to attain a reasonable amount of throughput.
BitTorrent has it beat cold for mildly illegal files (copyrighted music,
movies, etc.), which means that Freenet's users mostly use it for very illegal
files, thus:

2.) Man, it is absolutely full of child porn. If you donate 10 GiB of disk
space to Freenet, then you can be sure that at least 5 GiB of that is going to
be dedicated to child porn.

~~~
devindotcom
Yep, I love the idea of a decentralized, free network (mesh or otherwise), but
it has to be very specialized or it gets used for CP all day long. I think the
answer is way less bandwidth so it can only be used basically for text-based
data - chat, scientific stuff, streams from temperature and wind gauges, etc.

~~~
api
I had the idea a long time ago for a decentralized network with really small
object size limits and bandwidth caps. It dodges a lot of the scalability
problems. The idea would be for people to use it for communication, not big
file transfer. If someone wanted to send a file they could post a magnet: link
or a regular http: link, etc.
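
A rough sketch of the admission check such a network might run on incoming
objects - the size cap, rate numbers, and helper names here are all made up
for illustration, not from any real design:

    import time

    MAX_OBJECT_BYTES = 4096          # hypothetical per-object size cap
    RATE_BYTES_PER_SEC = 16 * 1024   # hypothetical per-peer bandwidth cap

    class TokenBucket:
        """Per-peer token-bucket rate limiter."""
        def __init__(self, rate=RATE_BYTES_PER_SEC, burst=4 * RATE_BYTES_PER_SEC):
            self.rate, self.burst = rate, burst
            self.tokens, self.last = float(burst), time.monotonic()

        def allow(self, nbytes):
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if nbytes <= self.tokens:
                self.tokens -= nbytes
                return True
            return False

    def accept_object(bucket, payload):
        # Hard-reject anything over the object cap; big transfers happen
        # out-of-band via the magnet: or http: links posted as small objects.
        if len(payload) > MAX_OBJECT_BYTES:
            return False
        # Otherwise charge the sending peer's bandwidth budget.
        return bucket.allow(len(payload))

The point is that the network itself only ever moves chat-sized payloads, so
the scaling math stays sane.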

------
wyager
I would like to point out that maidsafe is objectively a scam.

Operation of the maidsafe system as advertised relies on a number of provably
impossible technologies, like purely algorithmic proof-of-identity.

They gave a presentation at a recent Bitcoin conference in DC. I asked a few
basic questions about how they planned to do certain things critical to
maidsafe's operation (that no one knows how to do, and many think are
impossible), and their answers were so obscenely stupid that anyone in the
room with relevant technical knowledge was laughing.

Example: "How do you plan to prevent bots from gaming the data transfer
payment system?" The answer was something like "Oh, it's way too hard to make
a bot. There are too many steps."

~~~
sedachv
> purely algorithmic proof-of-identity

Is that what's presented in this paper?
[https://github.com/maidsafe/MaidSafe/wiki/unpublished_papers...](https://github.com/maidsafe/MaidSafe/wiki/unpublished_papers/SelfAuthentication.pdf?raw=true)

~~~
wyager
No. Proof-of-identity means proving you are a unique human, i.e. you're not a
bot and one human can't claim to be multiple humans.

It is not possible to have proof-of-identity on a distributed system without
having a trusted centralized identification service (which maidsafe claims to
have solved, yet offers no evidence).

~~~
nwh
Yes. As far as anybody serious believes, this is completely impossible; their
"solution" will likely not be one at all. There are several comments floating
around that MaidSafe saved all of the people who bought MasterCoin and didn't
have anywhere to dump it due to the markets being too shallow. I'm inclined to
believe that's more the purpose than anything.

~~~
pdx
They started coding 6 years ago.

~~~
nwh
They had the "IPO" in April.

Just because the code is the same doesn't mean the target is the same (see
Ripple being bought out and turned into a pump and dump).

------
josh2600
Repeating my comments from another thread:

I thought like this once.

Futurists have a tendency to imagine a world of changed human behavior and
it's compelling to do so. The reality is that the future rarely arrives as
sweeping change, but rather as metaphor and specialization.

You can imagine others adopting new patterns of behavior because you
understand the underlying reasons why that behavior is reasonable, but the
metaphor through which you explain the change is not readily understood. Why,
as a user, do I want this? If the answer is control and privacy, you might be
barking up the wrong tree (time and again we've shown that those are not
things consumers want or are willing to pay for).

If you want to drive dynamic change in the world, you have to change the
underlying structure of complicated systems while steadfastly avoiding changes
in user behavior. It turns out this is quite hard.

I applaud your efforts but encourage you to avoid the rabbit hole of endless
specialization and to improve the marketing metaphor/rhetoric.

~~~
me1010
I think you are wrong on the control and privacy issue. It seems to me to be
more a lack of understanding of what control and privacy mean for the average
user. Most Internet users are like most car owners: they have no idea about
the inner workings, nor do they care. They don't generally stop to reason
about whether having a seat belt is safer than not having one; they simply
trust that the manufacturer has their [the user's] individual best interests
in mind when developing, building, and selling the car. Likewise, most
internet users firmly believe that if something were "good" for them it would
already be "built in" to the system. This inherent trust in whatever is
presented is the problem, rather than consumers not wanting control and
privacy.

~~~
eevilspock
The problem is far bigger than just loss of control and privacy. We are paying
so much more[1]:

1. The advertisers who pay for it all still get their money from us, but
baked into prices of the things we buy from them. There is no free lunch.

2. The overhead cost of advertising is huge and we pay for that too. Ad
systems and data collection systems, ad engineers and people like the author.
Ad agencies. Creative agencies. Ad tracking. Marketing departments.

3. As the article points out, we pay the opportunity cost of a product that
cannot put users first because they live or die by giving advertisers what
they want. Costs include our lost privacy, content and services designed to
optimize for advertising revenue instead of for their users, and our time and
attention stolen by surreptitious ads. As has been said, we are more Google's
products than we are their customers.

4. We pay the social costs. Democracy and the free market assume people make
voting and purchasing decisions based on facts and reason. Advertising is
predominantly about manipulation and deceit. To me this is the most expensive
cost of all.

Added together, we are paying a lot more for "free" web content and services
than if we could just pay web sites straight up for ad-free versions. But as
in the prisoner's dilemma, we individually make decisions that
hurt us all collectively. Whatever you think of MaidSafe, the article is so
right when it says,

 _" Do we have the Internet we deserve? There’s an argument to say that yes,
we absolutely do. Given web users’ general reluctance to pay for content. We
are of course, paying."_

So, as you point out, the question is, "How do we get users to understand
this?" Got any ideas?

-

[1] This is a condensed version of a more detailed case with reference links
that I made here:
[https://news.ycombinator.com/item?id=7485773](https://news.ycombinator.com/item?id=7485773)

------
pfraze
Maidsafe makes some big claims about their cryptography and verifiable
behavior that were panned in this /r/crypto thread [1]. Can anybody add some
thoughts to this?

[1]
[http://www.reddit.com/r/crypto/comments/24zext/what_does_rcr...](http://www.reddit.com/r/crypto/comments/24zext/what_does_rcrypto_think_of_maidsafes_self/)

------
icehawk

      “Our network knows within 20 milliseconds if the status of a 
      piece of data or a node has changed. It has to happen that 
      fast because if you turn your computer off the network has to 
      recreate that chunk on another node on the network to maintain 
      four copies at all times.”
    

What do they mean by that 20ms figure? That can't be the entire network, since
trans-pacific latency is something on the order of 100ms.
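
For what it's worth, the maintenance behavior they describe is just a
replica-repair loop, which is the easy half; a toy sketch (the Node class is
an invented stand-in):

    REPLICATION_FACTOR = 4  # the "four copies at all times" from the quote

    class Node:
        """Toy stand-in; real distributed failure detection is the hard part."""
        def __init__(self, node_id):
            self.id, self.up, self.stored = node_id, True, set()

        def alive(self):
            return self.up

        def store_chunk(self, chunk_id):
            self.stored.add(chunk_id)

    def repair(chunks, nodes):
        """Re-replicate any chunk whose live copy count fell below target.

        `chunks` maps chunk_id -> set of holder Nodes.
        """
        live = {n for n in nodes if n.alive()}
        for chunk_id, holders in chunks.items():
            holders = holders & live           # drop replicas on dead nodes
            spares = list(live - holders)
            while len(holders) < REPLICATION_FACTOR and spares:
                target = spares.pop()
                target.store_chunk(chunk_id)   # copy from a surviving holder
                holders.add(target)
            chunks[chunk_id] = holders

The 20ms claim lives in `alive()`: getting the whole network to agree, that
fast, that a node is gone is exactly the part I find hard to believe.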

~~~
nwh
Sounds pretty made up. It's over 200ms for me to see a reply from most servers
just due to the latency coming in and out of the deep sea cables. I wager most
nodes in a p2p system would have peers far above that. You can't get around
that limitation no matter how fancy your code is.

~~~
api
It's possible with central tracking servers if you de-hype-ify that number to
say <500ms. With a decentralized network it's not possible. Even trying to
achieve this would result in completely insane exponential bandwidth overhead
and attendant issues around vulnerability to amplification DoS attacks.

This is why we have no huge-scale meshnet. As the network's size increases the
bandwidth required to maintain the network's routing tables increases
exponentially, not linearly, until no bandwidth remains for actual traffic.

If this is solvable it will either require a creative redefinition of the
problem or a fundamental innovation in mathematics, probably in the realm of
graph theory.
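
A back-of-the-envelope for the naive case (assuming every node floods a
link-state update to every other node; strictly that's quadratic in total
rather than literally exponential, but the squeeze is the same):

    # Toy estimate: N nodes, each flooding an UPDATE_BYTES routing update to
    # all N-1 peers every PERIOD seconds. Total traffic is O(N^2), so each
    # node's share grows with N and eventually eats any fixed uplink.
    UPDATE_BYTES = 512
    PERIOD = 10.0  # seconds between updates

    for n in (1_000, 100_000, 10_000_000):
        per_node_bps = (n - 1) * UPDATE_BYTES * 8 / PERIOD
        print(f"N={n:>10,}: {per_node_bps / 1e6:10.2f} Mbit/s per node, upkeep only")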

~~~
nwh
I think the lack of wider scale meshnets is just that the people willing to
set one up or join one are either geographically isolated or aren't aware of
others in a similar position. You need a high density of willing participants
for something like it to work. Yes, there are issues with authentication and
flood control, but I'm not sure meshnets have ever become a big or important
enough thing for that to be an issue (I'd love to be corrected on that).

------
api
I wish these guys luck, but I'm becoming increasingly pessimistic about the
idea of a 100% edge-only decentralized network that is really robust and
useful.

I don't think it's just a matter of bringing enough engineering effort to
bear. I think there are fundamental mathematical barriers here. Try this paper
for starters:

[https://www.zerotier.com/misc/2011__A_Little_Centralization_...](https://www.zerotier.com/misc/2011__A_Little_Centralization__Tsitsiklis_Xu.pdf)

The CAP theorem is also very relevant.

------
netcraft
I was thinking about something similar to this the other day as a replacement
for (or evolution of) Wikipedia. If you wanted to store all of human knowledge
and history in some sort of archive, it would be enormous - but build it on a
p2p basis, with everyone holding a slice of it and each slice replicated on
other machines. Access would mean that you have to agree to hold on to and
serve part of it.
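
The slice assignment could be done with consistent hashing, so there'd be no
central index of who holds what; a rough sketch (peer names and the replica
count are made up):

    import bisect, hashlib

    def h(key):
        return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

    class Ring:
        """Consistent-hash ring: each article lands on the next few peers."""
        def __init__(self, peers, replicas=3):
            self.replicas = replicas
            self.ring = sorted((h(p), p) for p in peers)

        def holders(self, article):
            keys = [k for k, _ in self.ring]
            i = bisect.bisect(keys, h(article))
            return [self.ring[(i + j) % len(self.ring)][1]
                    for j in range(self.replicas)]

    ring = Ring(["alice", "bob", "carol", "dave", "erin"])
    print(ring.holders("Hypertext"))  # e.g. ['dave', 'erin', 'alice']

When a peer joins or leaves, only the slices adjacent to it on the ring move,
which is what would make this tolerable at Wikipedia scale.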

But I don't get how that could work for applications, especially in security
sensitive applications.

~~~
mjquinn
You might be interested in Smallest Federated Wiki[0][1], a software project
by Ward Cunningham (creator of the first wiki). It focuses more on
collaborative, distributed editing than distributing content though.

[0] [https://github.com/WardCunningham/Smallest-Federated-Wiki](https://github.com/WardCunningham/Smallest-Federated-Wiki)
[1] [https://github.com/fedwiki](https://github.com/fedwiki)

------
Udo
Servers have a lot of nice features that are hard to replicate in a totally
decentralized environment - I say this as someone who experimented a lot with
different peering structures for a hobby project.

And indeed, this too seems to rely on persistent nodes, though they don't say
in what capacity (whether they work like torrent trackers or if they actually
relay content).

In an increasingly mobile-flavored network where one person has many devices
it makes sense to have servers. However, that doesn't mean we all need to get
behind the mega silos of Facebook and Google. The early internet actually got
this right, both technologically and topologically.

The problem is partly cultural, since we have gotten used to the all-or-
nothing, anti-federation approach of Web 2.0+, but it's also due to the
inability of federated systems to deliver and change features in a timely
manner. But if, say,
Facebook UI innovations were to stagnate (and some say that they already
have), it would become more feasible to implement a slower-moving federated
service.

------
bagosm
This sounds good, but there are some problems that I think can't be solved.

1st: What if a user wants to flood the network with meaningless data?

2nd: Latency-critical applications, especially on geographically distributed
peers.

~~~
wmf
Doesn't MaidSafe have some kind of cryptocurrency? You should have to pay for
any resources you use.

------
sedachv
Does anyone have an opinion about the MaidSafe self-authentication paper?
[https://github.com/maidsafe/MaidSafe/wiki/unpublished_papers...](https://github.com/maidsafe/MaidSafe/wiki/unpublished_papers/SelfAuthentication.pdf?raw=true)

I came across it a while ago somewhere but still haven't read it.

------
clarry
It sounds like they are reinventing Freenet. Badly. Not realizing how hard
some things are. Instead, they make big claims. Where's the code?

Not that I think Freenet is terribly good.

~~~
pdx
> Where's the code?

Enjoy!
[https://github.com/search?q=maidsafe&ref=cmdform](https://github.com/search?q=maidsafe&ref=cmdform)

------
spindritf
Like FileCoin, this is very interesting, but are any of these networks ready
to be used _today_? Even in some limited capacity? Right now, it sounds like
pure hype.

~~~
wmf
Space Monkey works but it's a proprietary system.

------
YuriNiyazov
I find it very hard to believe that datacenters, not programmers, are the most
expensive part of running the Internet.

------
wisemnaofhyrule
Something like this has already been done with I2P minus the cryptocurrency
aspect.

------
sp332
I thought RUDP was just a draft spec from 1999.
[https://tools.ietf.org/html/draft-ietf-sigtran-reliable-udp-...](https://tools.ietf.org/html/draft-ietf-sigtran-reliable-udp-00)
Is there a more recent standard?

------
drivingmenuts
It's all fun and games until the government decides you're hosting child porn
on your computer, even if you didn't put it there.

So, they'll have to come up with some means of centralized censorship, which
is going to hack off the devout civil libertarians who would initially support
it. And the vast majority of internet users aren't going to care until someone
tries to explain to them how the internet is using their resources for
storage.

Then the fun starts.

~~~
sbierwagen
In the Freenet model, the local cache is encrypted, and you don't have the
decryption key.

[http://en.wikipedia.org/wiki/Freenet#Technical_design](http://en.wikipedia.org/wiki/Freenet#Technical_design)
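
Concretely, Freenet's CHK design derives the encryption key from the content
itself and indexes storage by a hash of the ciphertext, so a node serves data
it can't read. A toy sketch of the shape of it (the SHA-256 counter-mode
keystream stands in for Freenet's actual cipher):

    import hashlib

    def keystream(key, n):
        """Toy stream cipher: SHA-256 in counter mode. Illustration only."""
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return out[:n]

    def chk_insert(plaintext):
        crypt_key = hashlib.sha256(plaintext).digest()    # derived from content
        ciphertext = bytes(p ^ k for p, k in
                           zip(plaintext, keystream(crypt_key, len(plaintext))))
        routing_key = hashlib.sha256(ciphertext).digest() # what nodes index by
        # Nodes store {routing_key: ciphertext}; the freenet: URI carries both
        # keys, so only someone holding the link can decrypt the cache entry.
        return routing_key, crypt_key, ciphertext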

------
doctorshady
So I'm a little confused here - let me just ask something:

How is this going to work?

The answer seems obvious, but look at the battle raging over net neutrality
right now. With a decentralized infrastructure, it's going to be a lot harder
to get around the prospect of paid peering than it would be with Uncle Google
and/or Amazon paying their way into your home.

------
VikingCoder
An alternative approach is something like sandstorm.io

------
dang
Can anyone suggest a better (i.e. more neutral and accurate) title?

~~~
chimeracoder
Perhaps something like "P2P/distributed hosting with Safe", "MaidSafe:
Decentralized servers/hosting", or some combination like that?

~~~
dang
We picked language from the article that seems to describe it.

------
AznHisoka
as long as it's not overpriced like AWS, I dun care. Otherwise, no, give me
back my cheap server.

------
theg2
So all of your users' data is going to be stored in random places? I don't see
how anyone thinks this could feasibly work unless everyone is suddenly cool
with loss of trade secrets, the wholesale giveaway of intellectual work, and
complete loss of control over data/content.

Unless I'm missing something?

~~~
dheera
It's hypothetically possible to come up with versions of algorithms that
distribute the processing of your data such that each node does not get enough
information to recover secrets, in a cryptographically secure fashion.
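
The simplest version of that is additive (XOR) secret sharing: each storage
node holds a share that is, by itself, statistically independent of the data.
A minimal sketch:

    import secrets

    def split(secret, n):
        """n shares; all n XOR back to the secret, any fewer reveal nothing."""
        shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
        last = secret
        for s in shares:
            last = bytes(a ^ b for a, b in zip(last, s))
        return shares + [last]

    def combine(shares):
        out = shares[0]
        for s in shares[1:]:
            out = bytes(a ^ b for a, b in zip(out, s))
        return out

    shares = split(b"trade secret", 4)   # one share per storage node
    assert combine(shares) == b"trade secret"

Threshold schemes like Shamir's (any k of n shares reconstruct) are closer to
what a storage network would actually need, since nodes come and go.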

As a developer, though, in 99% of cases I'd much rather just pay for a few
servers in multiple datacenters with load balancing than waste time
researching the above, let alone proving its cryptographic security for my
use cases.

It could, however, be useful when you're trying to get around a foreign
government's censorship of, or opposition to, whatever service you're trying
to provide. Imagine, for example, a free-speech discussion service that was
decentralized and essentially un-blockable. Bonus points if its cryptographic
system can be designed such that an arbitrary processing node can never be
proven, from its data alone, to have been helping run that service as opposed
to any other.

~~~
oakwhiz
It's already possible to execute some algorithms on untrusted computers using
homomorphic cryptosystems. I'm not sure to what extent this will proceed - it
might be possible to securely execute arbitrary algorithms on untrusted
platforms. This is sort of a holy grail of cryptography: processing secret
information efficiently without the processor gaining knowledge of it.
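
The classic toy example is that unpadded RSA is multiplicatively homomorphic:
a node can multiply two ciphertexts it can't read and get a valid encryption
of the product (textbook parameters, never safe for real use):

    # Textbook RSA with toy primes: Enc(a) * Enc(b) mod n == Enc(a * b).
    p, q, e = 61, 53, 17
    n = p * q                          # 3233
    d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

    def enc(m):
        return pow(m, e, n)

    def dec(c):
        return pow(c, d, n)

    a, b = 7, 6
    c = (enc(a) * enc(b)) % n          # untrusted node combines blindly
    assert dec(c) == a * b             # owner decrypts the product

Fully homomorphic schemes (Gentry's 2009 construction onward) support both
addition and multiplication, which is what arbitrary computation needs, but
at an enormous performance cost today.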

------
thanatropism
This is the internet all over again. Remember "online services"? BBSes?

Some form of this was bound to sprout as the internet became feasible to
regulate and demarcate. It started with BitTorrent because people wanted to
pirate music. (Maybe before, but torrents are impressive in that they reside
literally nowhere).

This is going to happen. Maybe in 2014/15, maybe in 2020, when people are
having their prostates probed by the NSA, Europe's right to be forgotten,
Brazil's media law (lei de mídia), yadda yadda yadda.

