
Why the Internet Needs IPFS
http://techcrunch.com/2015/10/04/why-the-internet-needs-ipfs-before-its-too-late/
======
nicklaf
I view the favorable performance characteristics of IPFS at scale (over the
current centralized client-server architecture of the present web) more as a
general symptom of the pathology resulting from the incomplete, ill-planned
architecture of the web in general.

The greatest advantage of an architecture like IPFS instead lies in its
friendliness to a more democratic, semantic web, in which users and programs
may make use of URIs at a fine-grained and peer-to-peer level. If we can
decouple resources from a central server, and then build programs around these
static resources, the web will not only become more permanent, but also more
robust against walled-gardens robbing us of programmatic control of those
resources.

To paraphrase the Plan 9 authors[1], it might make sense to say that a p2p,
content addressable web should try to build a web out of a lot of little
systems, not a system out of a lot of little webs. (Well, that doesn't
completely make sense, and I'm starting to mix my metaphors, but my point is
that what we have now is significantly hindered by a lack of fine-grained
semantics allowing interop. Hypermedia has the potential to be so much more
than simply manipulating the DOM with some javascript!)

[1] [http://www.plan9.bell-labs.com/sys/doc/9.html](http://www.plan9.bell-labs.com/sys/doc/9.html)

~~~
_prometheus
(IPFS Dev) Yes yes yes yes! :)

This is very much what we're going for.

Glad you mention the semantic web -- it tends to be a very tricky term in most
hacker cultures, because everyone loves to hate on failed attempts :( :( --
but in fact SW (Linked Data!) is a super interesting model that has made
Google and Facebook tons and tons of money (Knowledge Graph and Open Graph!).
One interesting fallout of IPFS is that we can make Linked Data waaaaay more
robust and faster, because you no longer need the _insane_ tons of little HTTP
hits to query, retrieve content, retrieve other content + definitions, etc.,
on each request. With IPFS we can turn everything into a (merkle)dag and
distribute it as a single bundle of objects. Think Browserify/Webpack but for
Linked Data. All with valid content addressing and signatures :)
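
To illustrate (a toy sketch in Python, not the actual IPFS object format --
real objects use multihashes and protobuf encoding):

    # Toy content-addressed store: objects are named by the hash of their
    # bytes, and links between objects are just those hashes.
    import hashlib, json

    store = {}  # hash -> bytes; stands in for the distributed blockstore

    def put(obj):
        data = json.dumps(obj, sort_keys=True).encode()
        h = hashlib.sha256(data).hexdigest()
        store[h] = data
        return h

    # Bundle Linked Data as one dag: the schema, the objects, and the links
    # between them travel together -- no per-resource HTTP round-trips.
    schema = put({"type": "schema", "Person": ["name", "knows"]})
    alice = put({"@schema": schema, "name": "Alice"})
    bob = put({"@schema": schema, "name": "Bob", "knows": alice})
    print("root:", bob)  # fetching the root lets a peer verify the bundle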

And +1 to plan9 -- plan9 (in particular Fossil/Venti and 9p) have always been
great inspirations for me, and their philosophy surfaces a lot in IPFS. From
the fossil/venti (git) approach to content, to making everything mountable, to
just using the path namespace for everything, etc. :)

(ps: woah, a message positive on BOTH Linked Data AND plan9! unexpected find!)

~~~
nicklaf
Thanks for the fantastic work. Your project represents something that I'd love
to use, and would love to help succeed. I have some coursework to attend to,
but I'd love to get involved at some point.

~~~
_prometheus
Thanks! and please do-- find us on github.com/ipfs/ and #ipfs on
irc.freenode.org :)

~~~
knight17
Hi! I have a dumb question: is there a way to get cleaner URLs?

~~~
david_ar
Yes, by using DNS TXT records. For example, the main ipfs.io website is
actually just proxying
[https://ipfs.io/ipns/ipfs.io/](https://ipfs.io/ipns/ipfs.io/)
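
For illustration, such a record looks roughly like this (hypothetical domain;
the hash is just an example reused from elsewhere in this thread):

    ; DNS zone entry binding a domain name to an IPFS/IPNS path
    example.com.  IN  TXT  "dnslink=/ipns/QmbBHw1Xx9pUpAbrVZUKTPL5Rsph5Q9GQhRvcWVBPFgGtC"

A gateway can then serve the site at https://ipfs.io/ipns/example.com/.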

------
friendzis
I see two core problems with a content-addressable web:

  * Dynamic content
  * Private/sensitive information

Dynamic content on a distributed system poses rather hard data-integrity
challenges - how would it be possible to ensure that the different pieces of
data you have are actually from the same [temporal] set? Take HN itself - how
could you see at least all the children of the parent you want to reply to, in
a distributed manner? Anything I can come up with involves some sort of
central dependency repository. Two child "commits" create branches in the
content tree. If these children are duplicates, we pick the better one and
cast the other into the abyss; the bigger challenge is how we merge them
together. This is a wide topic, so please excuse my brevity.

Private information is yet another rake to step on. I would really love my
bank account information to remain accessible to me and the bank only. I don't
even trust myself not to lose decryption keys, I don't trust long-term
security of encryption algorithms (flaws could be found, brute forcing might
become viable), therefore the information had better stay in as few places as
possible. Another end of this stick is authentication/authorization.
Encryption does not work, because access rights change. What I'm authorized to
view/do today might not be the same tomorrow. The only solution is not to
serve the content in the first place. As for authentication I don't see a
solution at all.

Still, a content-addressable web is an awesome solution for [more or less]
static content - Wikipedia, blogs, etc.

~~~
einrealist
I see more core problems:

1) This only scales if enough people are willing to give up a portion of their
storage devices. I would think twice about that for my SSDs.

2) Private customer internet connections often have lower upload bandwidth,
and that is part of the overall traffic calculation of ISPs. So the networks
have to change a lot to face changing traffic requirements in this regard.
That is something that will not scale with the demand of a successful protocol
(see BitTorrent). Also, this IPFS traffic may be harmful to other traffic,
like gaming sessions.

3) Illegal content will be the killer. Don't expect laws to change for IPFS.
No one will use VPNs / anonymizers permanently for IPFS,
because some law firms will specialise on gathering endpoint information like
they already do with BitTorrent.

~~~
david_ar
1) The plan is to incentivise people with schemes like
[http://filecoin.io/](http://filecoin.io/)

2) Yeah, I'll admit the asymmetric nature of home internet is a bit of a
hassle. I suspect you'll still require machines on the backbone seeding
content, with domestic machines focussed more on providing content to local
peers and/or seeding rarely accessed content. In the long-term, IPFS can help
ISPs in reducing their costs (content can be grabbed from within their own
network, instead of having to peer with another network for almost
everything), which might help to convince them to change.

3) IPFS doesn't force you to host any content you don't want to. If you choose
to host illegal content, that's your problem.

~~~
einrealist
> 1) The plan is to incentivise people with schemes like
> [http://filecoin.io/](http://filecoin.io/)

Adding a market to this problem is interesting and may work, indeed. But you
must not forget that most internet connectivity plans for private customers
prohibit commercial use, where the network is the essential part of that
business. If people start to make money on IPFS, the ISPs will not just watch.
Another aspect of this is taxes that people must pay if they make a profit,
and all the requirements to operate a business. With this in mind, I am
skeptical that this will scale to a size where you can say: And now it is
really a distributed thing.

> 2) Yeah, I'll admit the asymmetric nature of home internet is a bit of a
> hassle. I suspect you'll still require machines on the backbone seeding
> content, with domestic machines focussed more on providing content to local
> peers and/or seeding rarely accessed content. In the long-term, IPFS can
> help ISPs in reducing their costs (content can be grabbed from within their
> own network, instead of having to peer with another network for almost
> everything), which might help to convince them to change.

Many ISPs already do that. CDNs use ISP data centers to bring content to
customers. In fact, IPFS is not much different from their solutions. But IPFS
can be a standard for content distribution. That would be a good thing.

> 3) IPFS doesn't force you to host any content you don't want to. If you
> choose to host illegal content, that's your problem.

It is not always obvious what content is legal. Copyright laws will test IPFS
hard.

~~~
david_ar
1) Whilst it can be used to make a profit, it can also be used as a system to
allow you to trade storage space on your own computer in return for
replicating your own content on other people's machines. The more storage you
provide, the more redundantly your own content is hosted on the network, and
the longer you can still serve your own content whilst your own machine is
offline. IANAL though, so I can't comment on the legal implications.

2) Sure, you can view it as a CDN anyone can participate in (including
machines within your LAN).

3) There are plans for handling DMCA takedowns
[https://github.com/ipfs/gateway-dmca-denylist](https://github.com/ipfs/gateway-dmca-denylist)

~~~
einrealist
Well, I am thrilled to see it succeed. To me the most interesting part is IPFS
acting as a CDN standard. That could make an impact, certainly.

~~~
synctext
Their incentive idea is realistic; it will work.

My research team has built very similar operational systems over the past 8
years:

  * "Decentralized credit mining in P2P systems", IFIP Networking 2015
  * "Towards a Peer-to-Peer Bandwidth Marketplace", 2014
  * "Bandwidth as a computer currency",
news.harvard.edu/gazette/story/2007/08/creating-a-computer-currency/
(pre-Bitcoin), 2007

Their work is still early; the essential problem with this type of approach
is spam. You can often trick people into caching stuff -- so not your typical
DDoS anymore, but another resource eater. Plus there is the counterintuitive
oversupply problem: <more citations to our own work> "Investment Strategies
for Credit-Based P2P Communities"

------
kwijibob
I think the IPFS vision is fantastic.

The article misses the main advantage when it tries to say that IPFS would
help corporations like Google/Youtube/Netflix.

Big players will always be able to expertly run powerful distributed CDNs,
but newer smaller websites will always start with one server under the current
model.

IPFS would help to level the playing field for distributed data services.

~~~
_prometheus
(IPFS Dev) Thanks! :) Yeah, we want to help sites scale without needing
massive infrastructure funding.

------
AstroJetson
"As I explain in my upcoming book...." now the article makes sense, TC is
helping her pimp her book.

What the internet needs is a new financial model, since the one we have now
isn't working in the long term.

~~~
dfc
> Now this article makes sense

This is all TC does; you can call it pimping or fluffing. Before this article
did you think TC was an investigative journalism powerhouse?

[http://www.paulgraham.com/submarine.html](http://www.paulgraham.com/submarine.html)

------
smegger001
IPFS is essentially the web (here meaning HTML, CSS & JavaScript) over
torrent plus a distributed hash table. Correct me if I am wrong, but isn't
this just Freenet without the anonymity, or kind of like Tribler but for
websites instead of pirated files?

~~~
mburns
IPFS shares a lot of similarities with the Freenet datastore, yes. Both support
static and dynamic keys that map to content-addressed URIs and can be accessed
via a DHT. IPFS lets you selectively (opt-in) save data locally and host it,
unlike Freenet which can cache and serve that data to other clients without
user action.

The lack of Freenet's anonymity feature means IPFS can perform much faster
and can even be implemented in client-side JavaScript (work in progress),
which will drastically increase user adoption.

~~~
_prometheus
(IPFS Dev) Yeah, we have many similarities with Freenet, but we also have
significant differences. You mention the lack of anonymity, and yep, that's
definitely one -- to speak more about this: it stems from our content model.
Among our explicit constraints, we have:

- __Content must be able to move as fast as the underlying network
permits__. This rules out designs like Freenet's and other oblivious storage
platforms as the base case. Like you said, they're just way too slow for most
IPFS use cases. But these can be implemented trivially with the use of
privacy-focused transports (like Tor and I2P -- there's actually work towards
this and people are getting close), content encryption, and so on.

- __IPFS nodes should be able to only store and/or distribute content they
_explicitly_ want to store and/or distribute__. This means that computers
that run IPFS nodes do not have to host "other people's stuff", which is very
important when you consider that lots of content on the internet is -- in
some form or other -- illegal under certain jurisdictions. Legitimate
companies have way too much on their plate to additionally worry about
potentially storing a ton of illegal stuff. For serious companies like Google
to use IPFS, we need a mode of operation that allows implementations to ONLY
move the content THEY want to move (see the sketch after this list).

- __Websites/Webapps must be able to operate entirely disconnected__ -- this
means it should be possible to build applications which create data locally,
__signed by the user__, which can be distributed encrypted end-to-end to
other users, without ever needing to touch specific backbone servers. This
means users can move the data end-to-end via the closest route possible and
in disconnected networks (think users on mobile phones on a plane using a
messaging webapp or web game and moving the bits over bluetooth or an ad-hoc
wifi network). And this also means users on the broader internet do not _need
to_ rely on backbone servers -- the model we're going for is that dedicated
computers in the backbone CAN of course and SHOULD help, but you shouldn't
HAVE TO rely on them.
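
A rough sketch of what that opt-in model looks like with the ipfs CLI (the
hash is a placeholder; exact commands may differ between versions):

    # add content you explicitly choose to provide to the network
    ipfs add mysite/index.html     # prints: added <hash> index.html

    # fetch someone else's content by hash (held locally only while used)
    ipfs cat <hash> > index.html

    # explicitly pin it so your node keeps hosting it -- nothing gets
    # pinned on your behalf; you store and serve only what you opt into
    ipfs pin add <hash>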

Can write more about this :)

~~~
mirimir
If you could compare/contrast with Tahoe-LAFS, that would be cool.

------
Renaud
Criticism of the article aside, having a decentralised, heavily redundant,
safer and harder to disrupt web is an excellent idea worth pursuing.

I can see how that could work for static resources, but I don't get how you
can decentralise the dynamic portion of a website without single points of
failure in the backend.

~~~
dogma1138
It's not really harder to disrupt by any means; in the end, the internet is a
network -- or, as Congress thinks of it, a series of pipes.

Most internet censorship eventually works at L3; having a distributed
alternative to the world wide web won't make the internet more resilient.

~~~
_prometheus
(IPFS Dev here) Maybe. I want to live in a web where:

(1) if a region's network uplink is disconnected (like it was in Egypt),
websites/webapps don't cease to work. People should be able to communicate +
compute in these local networks without backbone access. Yes, this is
possible.

(2) if i manage to make contact with you over a totally unorthodox data
channel, like ham radio, or satellite, i should be able to _easily_ pipe my
traffic and updates to you, and do so in the git-style of replication: offline
first + bursts of condensed traffic.

this is not rocket science. (this is rocket science:
[https://www.youtube.com/watch?v=Pl3x71-kJGM](https://www.youtube.com/watch?v=Pl3x71-kJGM)
:D) our problems are much easier to solve, and we __must__ solve them.

~~~
sethjgore
It's definitely not rocket science and yes i agree that it has to be solved.

I think it's important to make it useful to the non-techie person, so they
know that whatever they put out there is guaranteed to last, rather than
having to pay for hosting and such. This is just enthralling.

Creating an open content-creation platform that will let users create and
upload media and text without any fuss will make IPFS a very viable choice.

I think it's interesting to consider the architecture of websites and how a
non-decentralized content service has forced a centralized, systemized design
of websites. Websites nowadays put more focus on design and fluff than on
linkability of content. Maybe the very idea of a website exists mainly
because it's served from a single server. You couldn't have it coming from a
million places, so you had to ensure that everything looked consistent and
similar within this site.

But now with IPFS, we can reconsider the web stack and typical web design. We
can reduce pages to the content itself and allow it to be linked with other
content, so everything's closer to being true hypermedia.

We really don't need any fancy web templates (think Wordpress, Squarespace,
etc.) if we are to achieve true decentralization. Heavy design keeps websites
from being "decentralized", in the sense that they become too
systematized/tightly coupled. Do I make sense here?

I'm actually working on this at the moment. I'm working on making it easy to
create decentralized web content. I want to say I can, with some help from you
and others, integrate IPFS as the backbone of this. Keeping IPFS in the
picture brings back the decoupled nature of content, with the guarantee that
the links won't go out of business. Nobody has to pay for hosting, nobody has
to worry about content anymore. The world just became our one huge hard drive!

------
wpietri
Sigh. There's talk about how the Internet's "own internal contradictions
[will] unravel it from within", but I'm not seeing it. The first time I can
recall an "the Internet is doomed" prediction is 1995:

[https://en.wikipedia.org/wiki/Robert_Metcalfe#Incorrect_pred...](https://en.wikipedia.org/wiki/Robert_Metcalfe#Incorrect_predictions)

At this point, there are circa 3 billion Internet users, nearly half the
planet, so I think we're well past the point where "ZOMG GROWTH N STUFF" is a
reasonable justification for this sort of hype. The Internet's growth rate
peaked in the 90s:

[http://www.internetlivestats.com/internet-users/](http://www.internetlivestats.com/internet-users/)

Now it's under 10% user growth, which seems entirely manageable.

~~~
david_ar
Internet != Web. The Internet isn't broken, HTTP is.

~~~
wpietri
Well, the article says Internet, but if you're proposing that instead it's the
Web that will unravel from its own internal contradictions, could you explain
how that will happen?

~~~
david_ar
The major issue is that the Web is vulnerable to single-points-of-failure
(unlike the Internet). Personally I wouldn't use the phrase "unravel from its
own internal contradictions", but I think most people have been bitten by a
SPOF on the Web, even though the Internet itself is robust to failures. IPFS
is about making the Web as reliable as the Internet itself.

~~~
bwohlergo
Most websites need a shared database for all users. How do you decentralize
them?

~~~
david_ar
See
[https://github.com/ipfs/notes/issues/40](https://github.com/ipfs/notes/issues/40)

------
karissa
This kinda makes me sick to my stomach -- essentially, she's saying that a
peer-to-peer internet is great because companies will be able to have better
uptime.

She doesn't even cover the clearly obvious economic aspects of this -- why
would I run an IPFS node if it just benefits the company and not me?

~~~
_prometheus
(IPFS dev here) I think she does, but perhaps the discrepancy may be that
she's got a different target audience: the established website owners? Also to
be fair, there's not much space in these sorts of articles: __LOTS__ gets cut
by editors and -- from having done something like this -- you have to hit this
absuuuurdly high level and can't sink into details much at all. I'm actually
pretty surprised at the level of technical detail in this article -- i
would've expected much more to not make it past the "remove jargon - write for
the average user" media filter. I've done some interviews / articles that
ended up annoyingly waaaaay more high level and completely missed my expected
mark. You might want to write to the author, too. :)

~~~
esfandia
The OP asks an interesting question though: why would I run an IPFS node if it
just benefits the company and not me?

~~~
david_ar
There will be incentives in the future, both in terms of promoting a good
seeding/leeching ratio like bittorrent ("bitswap"), as well as
[http://filecoin.io/](http://filecoin.io/)

~~~
hexscrews
And how would you deal with materials that could get you either dead or
imprisoned? Can you elect not to host content? For example, in China, you can
be materially affected by political content:
[https://www.privateinternetaccess.com/blog/2015/10/in-china-...](https://www.privateinternetaccess.com/blog/2015/10/in-china-your-credit-score-is-now-affected-by-your-political-opinions-and-your-friends-political-opinions/)
Or what if you happen to live in a country hostile to LGBT content, like
Russia or Zimbabwe, where it may be dangerous to host such content?

~~~
clacke2
That is exactly what ipfs does, in contrast to e.g. freenet. Mirroring is
opt-in.

------
euske
This might mitigate some bandwidth problems, for sure, but how can this improve
our privacy and the right to be forgotten? I think I'm missing something.
Isn't this just a modern reimplementation of distributed hash tables that were
researched circa 2005?

~~~
_prometheus
It's more than that, take a look at the other comments in this thread, like
[https://news.ycombinator.com/item?id=10329236](https://news.ycombinator.com/item?id=10329236)
and
[https://news.ycombinator.com/item?id=10329221](https://news.ycombinator.com/item?id=10329221)

One interesting thing is you can have totally end-to-end encrypted
applications: encrypt the application code and all the generated data.

Also, remember that the HTTP Web itself was a reimplementation of decades-old
hypertext systems. Hypertext hails all the way back to Xanadu (60-80s!).

------
jerguismi
"We use content-addressing so content can be decoupled from origin servers,
and instead, can be stored permanently."

Why should I believe that claim? What are the incentives for storing my data
in the network? As far as I understand, no incentivization is done for running
the network. That's why I wouldn't trust it to store anything but a trivial
amount of data.

~~~
david_ar
Not yet, but this is planned: [http://filecoin.io/](http://filecoin.io/)

~~~
repomies691
OK, why would I believe that IPFS is able to handle this issue better than the
numerous other projects trying to implement incentivization? IPFS's focus has
quite clearly been elsewhere.

~~~
david_ar
You're right, this isn't the only possible incentive system, and ideally IPFS
will be compatible with multiple ones. Could you link to the other projects
you're talking about?

~~~
repomies691
Storj and MaidSafe, at least. There has been some talk about incentivizing
Tahoe-LAFS, AFAIK. I think there are also more projects from the altcoin camp.

~~~
david_ar
There's been some talk of collaborating with those projects, so it's
definitely a possibility. IPFS is more about tying together all the existing
components in a nice way, rather than trying to reinvent everything itself.
Please get involved if you have ideas about the direction that should be taken
:)

------
nottednelson
cf. [http://named-data.net/](http://named-data.net/) and
[https://www.parc.com/work/focus-area/content-centric-network...](https://www.parc.com/work/focus-area/content-centric-networking/)

~~~
_prometheus
(IPFS dev here) +1 o/ NDN is awesome. If any NDN devs see this, please reach
out. We'd love to work with NDN on things-- we think we can generate lots of
demand for NDN

------
zobzu
Such articles should start by: IPFS = Internet Protocol File system

Just because it makes stuff clearer. Heck the project page could use some of
that too ;)

~~~
fiatjaf
It's Interplanetary.

~~~
zobzu
My point!

~~~
fiatjaf
Gah!

------
firebones
One milestone for IPFS: get past the point of the primary informational site
about it (ipfs.io) being categorically blocked by corporate content filters as
"blocked: peer-to-peer". It's hard to convey the vision within your
corporation when you can't even share the primary informational site.

~~~
_prometheus
(IPFS dev here) wow, that sucks. is this blocked in your corp firewall? Any
chance of talking to the sysadmins? Blocking IPFS is like blocking HTTP :/ :/
:/

We may have to move our gateway back off our main domain, and have a
"recommended blocker", so that people don't jump on this.

Anyway, one piece of good news: despite everyone's belief about UDP transports
never going through corp firewalls, QUIC now handles MOST (>60%, close to 80%
i believe) of "Google Chrome <--> Google sites" traffic across the world.

So, temp setbacks may be annoying, but in the end, once users install IPFS --
and once we have browser implementations -- we can make all the traffic look
like HTTP TLS flows, so blocking it will be very hard without also blocking
regular HTTP TLS traffic.

~~~
dingaling
> so blocking it will be very hard without also blocking regular HTTP TLS
> traffic.

Many large or data-sensitive companies do SSL MiTM at their firewall so
unfortunately that's not hard at all.

~~~
_prometheus
true, but then you're not in a {censorship, attacker} free environment at all
-- MITMing TLS _is_ violating the security expectations of today's websites. I
wouldn't even check my non-work email.

i.e. all bets are off. there will always be pockets. but over time we could
win even there, as the perf improvement will matter.

------
skybrian
For publishers, it's probably best thought of as a combination of a free CDN
and the Internet Archive.

But I don't think it's going to take off until there's a way to take down
files you don't want published anymore. Even with the Internet Archive, if you
add files to robots.txt, they will take them down. [1]

Removing things from the Internet is always going to be imperfect since there
will always be people who archive old copies of files (and that's a good
thing). But the official software should honor retractions or mainstream
publishers won't be interested.

[1]
[https://archive.org/about/exclude.php](https://archive.org/about/exclude.php)

~~~
toomuchtodo
If BitTorrent has taught us anything, it's that you don't need a publisher's
permission for a technology to take off.

This'll most likely start as CDN endpoints (that no longer need an origin),
and move on from there.

~~~
skybrian
I didn't say it wouldn't take off with the BitTorrent crowd. Website owners
and file sharers are different users with different concerns. Publishers
already have CDNs, so if they can't undo publication, this would be a
downgrade.

------
MCRed
I recently learned about MaidSafe --
[http://maidsafe.net](http://maidsafe.net) -- but I haven't researched it
much. I would be interested if someone could compare and contrast the two.

~~~
david_ar
[https://github.com/ipfs/faq/issues/2](https://github.com/ipfs/faq/issues/2)

------
tptacek
I'm a believer in this idea generally --- that we should replace applications
built directly on IP/TCP with applications built on a content-addressed
overlay network running on top of IP/TCP --- but I think the logic used here
is faulty.

For instance: I'm not clear on how IPFS protects applications from DDOS.
Systems like IPFS spread the load of delivering _content_, but applications
themselves are intrinsically centralized.

~~~
_prometheus
(IPFS Dev here) hey again tptacek! o/

> I'm not clear on how IPFS protects applications from DDOS. Systems like IPFS
> spread the load of delivering content, but applications themselves are
> intrinsically centralized.

Think about an application whose content is moving around entirely
distributed by IPFS as well -- think of apps that run mostly clientside, with
signed (+ maybe encrypted) data generated in the users' browsers, with maybe a
few "non-browser" nodes contributing to building indices or providing trusted
oracles.

What we're talking about is a model for webapps in which not just the content,
but the logic + processing is decentralized too. At one extreme are
bitcoin/ethereum style applications, where everyone runs the same computation
to verify it; at the other extreme, everyone just computes on their own data +
the data they care about, and signs all their updates.

How to do this well is not easy-- distributing the content is one part,
another is making a really good capabilities library (Tahoe-LAFS has done an
excellent job with this, for example, and e-rights has tons more great ideas).
Another part still is thinking about the sync models with ephemeral nodes
which create tons of small pieces of data, blast them out to content bouncers,
and go offline. Building scalable real-time indices on this sort of stuff is
going to be tricky :)

Another interesting area is thinking about how databases look once you do
this-- think both NoSQL AND SQL models on top of IPFS. yep, may sound crazy,
but we have some preliminary work towards this (NoSQL is easy, SQL is less
easy, but very doable! -- after all a database is just a good datastructure
and good algorithms for operating on it).

Happy to write more about this, it's a super interesting model we're
exploring.

~~~
on_
Not tptacek, but could you give a bit more on the contrast between Ethereum
and IPFS? Also, what is your "version" of DNS? How can I map a human-readable
name to a file/connection and verify authenticity? Project looks awesome,
checked it out briefly last time an article was posted. Excited! Thanks.

~~~
_prometheus
(IPFS Dev) Sure!

We divide naming in two parts:

(1) Providing a long-term reliable mutable pointer (no consensus needed).

(2) Providing a long-term reliable, short, human-readable identifier
(consensus needed).

Where "long-term reliable" means i can rely on it for decades for important
businesses. I.e. nobody will just take it from me by a fluke of the protocol.

IPNS, the naming system of IPFS, separates these into two steps:

(1) First, it makes a cryptographic name-system (this is based on SFS -- by
David Mazieres -- look it up, fantastic system and a prelude to the core
design of IPFS, Gnunet, Freenet, Tahoe-LAFS and many other systems). This
cryptographic name-system means a "name" is the hash of a public key ("eeew
that's ugly" -- yes, hang on). That hash name can be updated only by the
holder of a private key (how? via the DHT and other record distribution
systems, more on that later). The important part is that it (a) does not
require consensus at all, anybody can make names (it's just a key pair!), and
(b) it can be updated really fast over DHT, Pub/sub (multicast) and other
network distribution systems.

(2) Second, it delegates the human-readable naming to _other, existing_ name
authorities (note that _stable global solutions_ to this problem require
consensus). We don't want to have to make _our own_ naming authority, lots
exist already: DNS, all the DNS alternate universes, and more recently in the
cryptocurrency world: Namecoin, Onename, and even Ethereum is making one. So,
_instead of adding one_, we just work with all of them, and integrate. You can
bind an IPNS name (a public key path, like
`/ipns/QmbBHw1Xx9pUpAbrVZUKTPL5Rsph5Q9GQhRvcWVBPFgGtC`) to a name in those
authorities _once_, and never have to do it again. For example, with DNS you
do this:

    1. setup a DNS TXT record like: dnslink=/ipns/QmbBHw1Xx9pUpAbrVZUKTPL5Rsph5Q9GQhRvcWVBPFgGtC
    2. continue using QmbBHw1Xx9pUpAbrVZUKTPL5Rsph5Q9GQhRvcWVBPFgGtC as usual.

You'd have something similar with Ethereum, Onename, Namecoin and so on: you
just link to the IPNS name once. Now you can use your private key to update
that name whenever you like, without paying the cost of going to the consensus
network. So, resolving an IPFS URL works like:

    /ipns/ipfs.io
    -> /ipns/QmbBHw1Xx9pUpAbrVZUKTPL5Rsph5Q9GQhRvcWVBPFgGtC
    -> /ipfs/QmcQBvKTP8R7p8DgLEtKuoeuz1BBbotGpmofEFBEYBfc97

(Note: at the time of this writing, /ipns/ipfs.io links directly from
`/dns/ipfs.io -> /ipfs/QmcQBvKTP8R7p8DgLEtKuoeuz1BBbotGpmofEFBEYBfc97`, not
through IPNS, as this is good enough to run a static website for now, and it
makes things more robust as we experiment with lots of IPNS things.)
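
To make step (1) above concrete, here is a minimal sketch of a self-certifying
name (using the Python "cryptography" package; the real IPNS record format and
multihash encoding are more involved than this):

    # A "name" is the hash of a public key; only the private-key holder
    # can publish a valid record under it. No consensus needed.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    priv = Ed25519PrivateKey.generate()
    pub = priv.public_key()
    name = hashlib.sha256(pub.public_bytes(Encoding.Raw, PublicFormat.Raw)).hexdigest()

    # Publishing: sign the path the name should currently resolve to.
    record = b"/ipfs/QmcQBvKTP8R7p8DgLEtKuoeuz1BBbotGpmofEFBEYBfc97"
    sig = priv.sign(record)

    # Any peer with the public key can check authenticity; verify()
    # raises InvalidSignature if the record was forged.
    pub.verify(sig, record)
    print(name, "->", record.decode())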

One more thing: resolving names via local-names and paths (i.e. a web of
trust, using either SDSI style naming, or SFS's much nicer path version) is
entirely possible and averts the requirement of consensus for meaningful human
names. This is really useful and cool, and we will experiment with it in the
future. But in general, this doesn't (IMO) give you the ability to do "global
long-term reliable" names, as "jbenet" might mean something different to
different segments of the network, so i couldn't _print_ the words "yeah, just
go to `/jbenet/cool-site`" on _paper_, because there would be no global
consensus for `/jbenet`, and i would like to make sure all my references are
viewable by anyone across space and time.

Hope this helps!

~~~
on_
Really helpful. Thanks for the detailed response. Going to dev, do a few
experiments with this. Good luck.

------
Animats
It's a lot like BitTorrent - everything is identified by its hash.

Many of the claimed advantages of IPFS can be achieved with subresource
integrity. If you use subresource integrity, files are validated by their
hash. We just need some convention by which the hash is encoded into the URL.
Then any caching server in the path can safely fulfill the request.
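
For example, an SRI-style digest is easy to compute (a sketch; "app.js" is a
hypothetical local file):

    # Subresource-integrity digest: base64 of a SHA-384 hash of the file.
    # Any cache can serve the file; the browser verifies the hash matches.
    import base64, hashlib

    data = open("app.js", "rb").read()
    print("sha384-" + base64.b64encode(hashlib.sha384(data).digest()).decode())

    # Usable in HTML as:
    # <script src="https://cdn.example/app.js"
    #         integrity="sha384-..." crossorigin="anonymous"></script>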

~~~
nicklaf
This isn't really a valid criticism of IPFS, unless you can point to a
competing initiative which implements 'subresource integrity'. Alan Kay said
that each object on a webpage should have its own URI way back in 1997, and
yet here we are.

~~~
_prometheus
(IPFS dev) Yes! One of my explicit goals is to help get to Alan Kay's "one
address for each object" goal.

------
crablar
Podcast about IPFS with Juan Benet:
[http://softwareengineeringdaily.com/2015/08/25/interplanetar...](http://softwareengineeringdaily.com/2015/08/25/interplanetary-file-system-ipfs-with-juan-benet/)

------
guessmyname
"[...] Before it's too late"... But when will it be too late?

EDIT: OP changed the title, it was originally written as "Why the Internet
Needs IPFS Before It's Too Late"

------
grayfox
Hey, _prometheus!

What's an email I can reach you at? I'd like to pick your brain about some of
this stuff and how it relates to databases.

Cheers!

------
sklivvz1971
Meh, it's a distributed static website thing.

Of course if one doesn't need transactions, everything is easier even today --
I mean, who doesn't use a CDN? It's a very similar concept.

The problem is much tougher when the content is personalized or when there
needs to be a transaction.

This makes distribution a huge, huge problem no one has solved yet (even a
blockchain, IIRC, can't really scale up in transactions...).

~~~
_prometheus
(IPFS dev here) Hey! Yeah, we're focusing right now on "distributed static
websites" because those are really easy to get right and make work all over
the network. However, what's _really_ interesting about IPFS is making
completely distributed applications which can do their processing unhinged
from a single origin -- websites/webapps without an origin server. Posted more
about
this elsewhere on this thread, take a look:
[https://news.ycombinator.com/item?id=10329221](https://news.ycombinator.com/item?id=10329221)

Also see the Neocities + IPFS blog post:
[https://ipfs.io/ipfs/QmTVcD87Ecjps6wv9jMaGhvMuzZ2BgP6NyXDcnM...](https://ipfs.io/ipfs/QmTVcD87Ecjps6wv9jMaGhvMuzZ2BgP6NyXDcnMU74RELx/)

Take a look also at this talk, which presents different use cases for this:
[https://www.youtube.com/watch?v=skMTdSEaCtA](https://www.youtube.com/watch?v=skMTdSEaCtA)

Can also see some of the random crazy things we're planning in
[https://github.com/ipfs/notes/issues/](https://github.com/ipfs/notes/issues/)
+ [https://github.com/ipfs/archives/issues/](https://github.com/ipfs/archives/issues/)
+ [https://github.com/ipfs/apps/issues/](https://github.com/ipfs/apps/issues/)

(Edit: fix links)

~~~
21echoes
Are you working with [https://unhosted.org/](https://unhosted.org/) at all on
the unhinged apps concept?

~~~
_prometheus
not yet, but we should! will email them, thanks for ptr!

------
ilaksh
You can tell this is a great idea by how much some people hate it.

------
mirimir
I wonder if IPFS can access resources via Tor onion services.

~~~
david_ar
[https://github.com/ipfs/notes/issues/37](https://github.com/ipfs/notes/issues/37)

~~~
mirimir
Thanks. I'll test.

------
jsprogrammer
>our rapidly dwindling connectivity

...

Sorry to be extremely obtuse, but since when has our connectivity been rapidly
dwindling? It seems to have only been growing since the network launched.

~~~
_prometheus
(IPFS Dev here) Sort of, actually. The gist is that in general yes, the
network is growing and getting better. But, there's actually a growing divide
between the hyper-developed cities and ... the rest of the world.

One example is this: if your bandwidth is too low, or your latency too high,
you basically cannot use the Web as it is today. Native mobile apps do
__muuuch__ better, because you download them once and can use them mostly
offline or with low network usage. The reason this is "getting worse" is more
of a perception problem and an artifact of this fact: storage is getting
cheaper faster than bandwidth is getting cheaper. Over time, our media usage
(and thus perceived requirements) grows and grows, tons of websites today are
_several megabytes!!_ and all the media we use keeps growing to fit our nicer
screens. Also, as more and more people come on line, and their usage
increases, the networks saturate. This causes the "perceived bandwidth" to
decrease, meaning that the pipes (which are getting absolutely better) can
handle a smaller percentage of our individual load and thus "feel worse".

Then, there are other stupid problems, like the fact that many major
websites/webapps will be totally useless if you go over certain latencies
(particularly with wireless meshes, meaning lots of packet loss). This is
because the servers' request timeouts are way too low. I was recently
traveling through _Europe_ and in many places (trains in the countryside or
small cities) the mobile (and sometimes even wired!) latency was so high that
i could not browse the web. TLS handshakes wouldn't complete. HTTP servers
would give up on me. It was terrible-- as a person accustomed to LTE + fiber
("Aaaaahhhhhh the data!!!") I couldn't believe just how stupid and bad of an
experience we're giving our users out there. This was Europe-- now transport
yourself to Bangladesh, or many places in rural India where people are
beginning to be plugged into the internet. Think of places where access to
Wikipedia, to Khan Academy, to the messenger services, could make huge life-
changing differences for people. Having a bad web _there_ is blocking people
from having the amazing powers of communication and computing that we all get
to enjoy.

We must fix these problems. And we're going to fix them by improving the data
model and the distribution protocols, not with optimistic policies.

~~~
wpietri
Do you have any data that indicates that network saturation is getting worse?
My experience is the opposite.

Also, how would your protocol help your example of you riding on a train? I'm
not seeing it.

------
vezzy-fnord
It seems like one of TC's pastimes is to write sensational filler articles
out of the more questionable bile emanating from HN. See also "Death to C":
[https://news.ycombinator.com/item?id=9477878](https://news.ycombinator.com/item?id=9477878)

~~~
_prometheus
(IPFS dev here) "questionable bile"? we'd love your technical feedback. We
know we have a ton of things to do better -- but what sort of things are you
thinking about? what can we improve on?

~~~
larzang
The core of the modern web isn't static: it's transactional, it's
personalized, and it's end-to-end encrypted for both security and privacy. How
does a distributed caching model accommodate such things? If it can't, how can
it be a replacement for http? If it's not a replacement for http, is it at
least better than current CDNs? If so, how?

It's hard to imagine you haven't already thought of these questions and
answered them somewhere, but this non-article certainly doesn't address it or
point to better resources (besides "buy my book!").

~~~
_prometheus
IPFS is not _just_ a static system. See the comments above:
[https://news.ycombinator.com/item?id=10329221](https://news.ycombinator.com/item?id=10329221)
-- note that git and bitcoin are "just static systems" as much as ipfs :). It
turns out all content is "static"; it's just "static content" that is
"dynamically generated" at times -- the point being that you can still do all
the dynamic stuff just fine, and use IPFS as a transport for the data.

------
chmike
This is a naive idea that is wrong in so many ways. It keeps popping up
recurrently, and I don't understand why people haven't yet seen that it won't
be the next Internet revolution. Oh well!

Let me try to explain it again. The idea of content-based keys strongly
narrows the application domain of the system. Modifying your information (i.e.
a typo correction) invalidates all the keys. More precisely, people holding
the old key won't be able to access the corrected information. Of course, this
model fails completely with dynamic data.

The other problem that is often overlooked with such an idea is the function
that translates your key into the location of the information. That is:
determining the IP address of the server(s) hosting the information from its
hash key. One needs something like a distributed index for that. Don't use an
algorithm, because you don't want to add the constraint that your information
can't be moved or replicated.

Another problem is staying in control of your information. An owner of
information wants to be able to fix or modify it, or delete it when he wants.

Finally, another naive idea is sharing distributed storage. This is a nice
idea, but it simply doesn't work. Some people will abuse it. To avoid this you
need a system to monitor and control usage. Accounting? Good luck. By the way,
this is the problem of the Internet today. It is a shared resource, but a few
are consuming a hell of a lot of it and earning a lot of money without sharing
back the profit. I'm looking at you, Google, with YouTube.

I've been thinking about and working on this problem for a long time. My
conclusions are:

    1. Decouple keys from content
    2. Make the distributed locating index your infrastructure
    3. Optimize keys for index traversal
    4. Owner of information must stay in control of information
    5. Owner of information must assume the cost of hosting

~~~
mirimir
> 4. Owner of information must stay in control of information

Once you put something on the Internet, you no longer own it.

~~~
chmike
That applies to public information. But I consider an extended application
domain with access control to information. In this case controlling the
information host is required. It's all about the application domain you
target. For sharing public static data, IPFS would do it. But that is a narrow
application domain already covered by the web.

~~~
mirimir
I'm arguing that anything not air-gapped is potentially public. One can
encrypt, control access, and so on. But once it's out there, it's far^N more
vulnerable. And of course, there's a continuum. Even air-gapped stuff can leak
through side channels. And some approaches to online storage are quite robust.

------
huslage
There are better things than this hack on top of IP. It won't work. It cannot
work because the basic underlying protocols mean that the overhead of routing
and other meta-traffic will rapidly overwhelm the scale required for it to
work. Can we please stop talking about IPFS as if it could actually work???

Check out Content Centric Networking (ccnx.org) from PARC for something that
actually has a chance at being a real solution.

~~~
_prometheus
(IPFS author here)

It turns out that:

- (a) IPFS already works.

- (b) IPFS layers over CCN extremely well AND generates demand FOR CCN. (i'm
a big fan of CCN actually, and want to see it in more and more systems. But
CCN has huge adoption problems. Look, we don't even have IPv6 fully deployed
yet! So if all IPFS does is generate enough demand for CCN to be fully
deployed, we've done a good job.)

- (c) all the routing problems are very real, but can be solved just fine. If
you haven't looked deeper at how IPFS __actually works__, instead of at short
summaries, you'll realize that the IPFS specs define the routing layer as
entirely pluggable for this reason: we need to evolve to better and better
schemes over time.

Instead of blindly saying "this cannot work", read more first. Understand the
system's _goals_, decisions, and roadmap. Ask questions if you're unsure how
something could possibly work. Lots of very smart people are working on IPFS
and we're doing it because we see that it can indeed (and does) work. Blind
negativity does not help CCN, and does not help anyone make better things.

~~~
huslage
I see the goals, and I've seen attempts to fix these exact problems in
similar ways many times. I think the goals and roadmap are great, but overly
idealized. I also think it's important to be skeptical of claims of this sort.

That being said, I hope you're right.

