
IPFS is the Distributed Web - jasikpark
http://ipfs.io
======
d--b
I have a truly naive question about the distributed web: what makes the
supporters of it think it will be any different from the original web? I mean,
isn't it likely that at some point, there will be a need for a centralized
search engine for it? Isn't it unavoidable that big companies like Facebook
will run their own non-distributed subnetworks, so that they can deliver standard
functionality to all their users? The original web IS distributed already, isn't
it? It's just that organically, the way people use it has become a lot more
centralized, no? Or am I missing the main argument for a distributed
architecture?

~~~
daveguy
The difference is IPFS (or similar) would distribute _content_. Currently
communication is distributed, but content is not. IPFS is like HTTPS, but
where every page is a torrent. There are challenges for security in
distributing services and monetization. However, the key is that content would
be distributed. YouTube probably would not switch to IPFS, but DNS (what was
targeted recently) would make sense. A DNS host list should be more
distributed. Even with DNS there is a dynamic content challenge. When a DNS
entry changes, it would still need to propagate.

This brings it back to a distributed communication problem. If you could
distribute content more consistently, and 100,000 computers could step in as
DNS providers _with authentic data_ rather than concentrating DNS at Dyn or
Google, it would help. That will still require protocol updates so that the
failover uses local content. Really, IPFS and distributed content could be a
component to help with distribution. It definitely isn't a complete solution
alone (yet -- there is a push to make it the solution).

~~~
glibgil
The internet may run on content (storage) and communication, but both those
things run on _computation_. Until computation can be safely distributed,
real changes can't happen.

~~~
wyager
What do you mean? It seems to me that the single largest destructive force on
the internet is forcible content takedowns. IPFS is immune to those.

~~~
clueless404
In which way is IPFS immune to content takedowns?

IPFS provides no anonymity, so it won't take long before DMCA notices start
arriving.

~~~
tom_mellior
> In which way is IPFS immune to content takedowns?

In the way that, if one node is forced to take down some information, that
information may still be accessible from other nodes _under the same address
as before_. It's not quite immunity: They can still harass individuals. Call
it "resilience", maybe.

> IPFS provides no anonymity, so it won't take long before DMCA notices to
> start arriving.

It should compose well with Tor, I'd hope.

~~~
clueless404
Which makes it about as resilient as regular content hosting, which is not
very.

Does IPFS work with Tor?

~~~
mburns
>Which makes it about as resilient as regular content hosting, which is not
very.

It is decidedly more resilient than regular hosting in that situation.

~~~
clueless404
How so?

IPFS is only as resilient as you are good at dodging DMCA notices.

~~~
mburns
Because mirroring content on IPFS is trivial and transparent to the consumer.
All it takes is a single user (or an arbitrary number) outside of US
jurisdiction and the content is more or less DMCA-immune.

The equivalent is not practical in traditional HTTP, where content at risk of
being taken down is scraped from the server and that snapshot-in-time is
hosted as a mirror, generally at a different domain.
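
To make the "trivial" part concrete, the mirror side is a one-liner - a sketch
with a placeholder hash, using the stock `ipfs pin add` command:

    $ ipfs pin add QmSomeContentHash...
    pinned QmSomeContentHash... recursively

The pinned copy is served under the content's original address, so consumers
don't notice the difference.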

~~~
clueless404
> Because mirroring content on IPFS is trivial and transparent to the
> consumer.

Which is also why a regular consumer should never use IPFS, and why prosumers
should immediately disable caching and seeding upon install.

> All it takes is a single user (or an arbitrary number) outside of US
> jurisdiction and the content is more or less DMCA-immune.

Perhaps so, but if your plan for resiliency is based on the kindness of
strangers in countries where neither the DMCA nor censorship applies, it's not
much of a plan.

Plus you'll still have to seed the original which opens you up to liability,
as IPFS does not provide any kind of anonymity.

So, while better than traditional HTTP, IPFS isn't really immune to takedowns
nor very resilient.

~~~
wyager
> Perhaps so, but if your plan for resiliency is based on the kindness of
> strangers in countries where the DMCA nor censorship does not apply, it's
> not much of a plan.

Have you ever used BitTorrent? It works great.

~~~
clueless404
It works great only for a limited number of use cases.

It's terrible for unpopular content, it is not immune to takedown notices nor
is it very resilient in the face of a determined adversary.

~~~
aaroninsf
Backing 'long tail' content with webseeds works well for Bittorrent.

...but that's the same as saying let's back the decentralized web with
centralized resources erp.

~~~
clueless404
Exactly. Which raises the question: why bother with the decentralization part
at all for long-tail / unpopular content?

------
idlewords
I worry that this is another example of throwing technology at a social and
political problem.

That the current web is centralized has little to do with its technical
design, and everything to do with economic and structural incentives that have
made it that way.

It's tempting to say "start afresh", but we'll just be trading our current
problems for a new set of problems IPFS introduces. It's a law of nature that
problems are always conserved.

I would rather we do the hard work of fixing the web we've got, in particular
the hard issue of how to re-decentralize it.

~~~
fizzbatter
> I worry that this is another example of throwing technology at a social and
> political problem.

I'm so lost at this assertion. How is this a social / political problem?

IPFS is not permanent hosting. It is not an attempt to take power away from
others and distribute it to the masses. It is purely a technical problem,
which I think is objectively true.

I have a feeling that you believe IPFS is partially designed to subvert
governments/etc. And to an extent, that may be _possible_, but it is being
built with DMCAs in mind. IPFS wants to allow governments / copyright holders
to take content down from the network. It is not trying to become a pirate
haven. Why? Well, mainly because piracy inhibits adoption. There may not be a
way to DMCA _currently_ (I haven't checked in months), but there are GitHub
issue(s) on it. The creators don't want to see IPFS blocked in countries
because it can permanently host illegal content. Which makes a ton of sense.

The problem IPFS is trying to solve, as I see it, is purely technical. As
advertised in pretty much every pitch I've seen for this technology, we are
simply scaling too fast in storage and not fast enough in bandwidth. It
appears impossible to handle the infrastructure load of files we are creating.
IPFS solves this by automatically pulling content as locally as possible,
reducing network overhead. Furthermore, it means that a DDOS/viral/etc of a
central resource will not kill it. It's safely and mostly permanently (barring
DMCA/etc requests) available.

So with that said, what political and social problem do you see IPFS trying to
solve? I don't see any.

~~~
inimino
From the article:

"The Internet has been one of the great equalizers in human history and a real
accelerator of innovation. But the increasing consolidation of control is a
threat to that.

"IPFS remains true to the original vision of the open and flat web, but
delivers the technology which makes that vision a reality."

This presents IPFS as the technological cure for the "consolidation of
control" that threatens the Internet.

These are not purely technical problems.

~~~
fizzbatter
That is unfortunate, because I have yet to see how IPFS is even remotely that.
Especially since it wants to comply with DMCAs etc.

The technology and the marketing seem at odds, to me.

------
TimJRobinson
So I've been thinking about creating a basic site running on IPFS and here's
my dilemma. The hash of each page is a SHA-256 of the contents, right? So let's
say you have 3 pages A, B and C: A links to B, B links to C, C links to A. How
do you create all 3 pages with correct links to each other?

When you create page A you have to have the SHA of page B, but then to create
page B you have to have the SHA of page C and finally to create it you need
the SHA of page A. You get into this cyclical loop where you can't generate
any page and link to others. What is the solution to this problem?

~~~
kirushik
Well, IPFS allows you to upload trees of files to a single IPFS node. So you
just create three webpages, put them into a single folder and use `ipfs add`
on this folder. Your webpages will be available with relative links, being
exposed at addresses like `/ipfs/<Sha>/page_a.html` and
`/ipfs/<Sha>/page_b.html`.

Under the hood those pages will be standalone ipfs DAG nodes with separate
hash addresses, and your root folder will actually be a set of pointers to
those hash addresses — but you don't need to keep that in mind when uploading
a site.
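
For example, a minimal sketch with placeholder hashes:

    $ ipfs add -r mysite
    added QmPageA... mysite/page_a.html
    added QmPageB... mysite/page_b.html
    added QmPageC... mysite/page_c.html
    added QmRoot... mysite

Because the pages reference each other by relative path (`href="page_b.html"`),
no page has to embed another page's hash, which is what breaks the cycle.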

~~~
kirushik
One can also study the ipfs project websites (ipfs.io, dist.ipfs.io and so on).
Those are just static bunches of files (HTML and resources) hosted in ipfs and
served with a standard ipfs gateway (located at gateway.ipfs.io). The gateway
just looks up _dnslink TXT records for the corresponding domains, and serves
files from the ipfs hash address specified there.
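
For reference, such a record looks roughly like this (placeholder hash):

    _dnslink.ipfs.io.  IN  TXT  "dnslink=/ipfs/QmSiteRootHash..."

The gateway resolves the domain to that hash and then serves the site out of
ipfs like any other /ipfs/ path.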

------
runeks

> Each network node stores only content it is interested in [...]

Isn't that the issue here? Storing data that will maybe be there later isn't
really storing data. People want to publish something that must always be
available, so why inject data into the IPFS network and hope it will be there
in a year, rather than set up a $10/yr VPS?

    
    
        > With video delivery, a P2P approach could save 60% in bandwidth costs.
    

In my opinion, this may be true, but total costs will be greater. P2P
solutions are awesome because they are resilient, not because they are
cheaper. Distributing pirated movies by dumping them on public FTP servers is
much cheaper than BitTorrent. BitTorrent appeared because the centralized
method was not resilient enough against adversaries, not because it was
cheaper (quite the contrary).

~~~
lukaslalinsky
If I get it right, the idea is that if you want to make sure it's there one
year later, you set up your own server and host the files on your own. IPFS
just allows others to distribute a mirror of the files.

~~~
runeks
Why would nodes in the IPFS network voluntarily act as a load balancer for my
VPS? Usually I have to pay for load balancing, so I'm a bit suspicious when
someone claims I can get it for free from IPFS.

~~~
lukaslalinsky
There is a lot of (mostly free) content that people are willing to distribute
freely.

Take Wikipedia for example: even though it's changing very fast, I imagine a
huge portion of it is static. If the latest version of each page was served
via IPFS, somebody could set up a local mirror and people nearby would
automatically use that.

~~~
runeks
I don't disagree with the fact that IPFS has value. I just don't like the way
it's presented as a generic transport protocol that's "superior" to HTTP.

~~~
bitJericho
Just take a look at old torrents of obscure stuff and you'll see just how
resilient bit torrent is. (it isn't)

~~~
qwertyuiop924
It's more resilient than HTTP: when you grab a torrent, you can download from
any seeder. With HTTP, the host is specified, so if that host goes down, good
luck.

In either case, if all the hosts go down and you didn't mirror it, you're
screwed. However, IPFS makes it easier to mirror things.

~~~
mSparks
While this sounds like it "should" be true, I think managing that will turn
into something of a nightmare. IPFS is making awesome inroads into achieving
that, but it's not clear what the benefit is over something like, say, Freenet
(other than that Freenet is very slow because it prefers privacy and resilience
over speed).

I can see it being "really" useful as a backbone for something like Open
Cobalt - [http://www.opencobalt.net/](http://www.opencobalt.net/) - and I'm
really looking forward to seeing that mature.

But Open Cobalt seems to have last been updated in 2010 - that's quite some
time ago.

I wonder if it's being held up by patent trolls, but really all of this feels
more like a solution looking for a problem. That can take a very, very long
time to mature. Which is a shame.

~~~
qwertyuiop924
Alpha 22 was actually released just last year, and it's still actively being
worked on AFAIK. It's just that the website hasn't been updated, save for the
downloads.

------
supergreg
If I try to host a JavaScript application that uses LocalStorage for saving
data, it would be visible to any other IPFS JavaScript application because
they all exist under the same domain, right? Have you thought about having the
URLs be something like ipfs://<hash>/index.html instead of
http://localhost/<hash>/index.html so browsers keep the LocalStorage for each
IPFS hash separated?

~~~
diggan
We're planning to use the Same-origin policy to prevent this. It was first
brought up here:
[https://github.com/ipfs/go-ipfs/issues/651](https://github.com/ipfs/go-ipfs/issues/651)

You can read more about the same-origin policy here:
[https://developer.mozilla.org/en-US/docs/Web/Security/Same-o...](https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy)

~~~
0xcde4c3db
Putting a hash in the hostname/domain would also be using same-origin policy.
The issue you linked refers to using suborigins, which seems to still be a
draft proposal with no implementer buy-in outside of Chromium [1].

[1]
[https://bugzilla.mozilla.org/show_bug.cgi?id=1231225](https://bugzilla.mozilla.org/show_bug.cgi?id=1231225)
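
To illustrate the difference with simplified, hypothetical URLs: path-based
gateway addresses all share one origin (scheme + host + port), while a hash in
the hostname gives every app its own origin:

    # shared origin, shared LocalStorage:
    http://localhost:8080/ipfs/QmAppA.../index.html
    http://localhost:8080/ipfs/QmAppB.../index.html

    # distinct origins, separate LocalStorage:
    http://QmAppA....ipfs.localhost:8080/
    http://QmAppB....ipfs.localhost:8080/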

------
msane
If you're worried about content on the web being tampered with, the fact that
content is fingerprinted in IPFS is a huge deal.

IPNS (the name service) then becomes the vulnerability, but that is also
distributed.

~~~
tscs37
Well, as I understand it, IPNS is a bit limited; it only has one resource and
there is only one authority that can change content, the node in this case.

It does protect against DDoS, since old content remains visible and online,
but if the private key is leaked or the node is malicious, it offers exactly 0
protection.

Even worse, if the private key is stolen, you can't do much about it unless
you have DNS set up... which brings you back to centralized.
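
For context, that single-authority model is visible in how IPNS is driven from
the CLI (placeholder hashes); only the node holding the private key can
publish, and the name is just the hash of that key:

    $ ipfs name publish QmNewSiteRoot...
    Published to QmYourPeerId...: /ipfs/QmNewSiteRoot...

    $ ipfs name resolve QmYourPeerId...
    /ipfs/QmNewSiteRoot...

So whoever holds (or steals) the key holds the name.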

~~~
Kubuxu
IPNS is a big Proof of Concept that mutable content is possible in IPFS; there
is a long-running goal of an InterPlanetary Record System that would include
much more complex validity schemes, including revocation.

------
fosh
How do hosting providers fit in here, if at all? E.g., if I want to host a
website on IPFS, do I publish it from my own machine and then wait a healthy
amount of time for the content to be absorbed by the ether, or is there some
way I can encourage other nodes to pick it up without requiring end-users to
actively seek out my fresh material?

~~~
danieldk
It seems that the role of a hosting provider could be: I pay you <amount> per
month to fetch <hash> or <name> on <N> peers with bandwidth <bandwidth>.

Of course, the economics are quite different. In the normal web, you need to
add more resources when a page becomes more popular. In a distributed web,
more resources are a side-effect of a page becoming more popular.

------
mcbits
Suppose I'm poking around IPFS and unintentionally download some unauthorized
copyrighted content. Is my computer going to automatically start sharing this
content, exposing me and my ISP to legal action?

Or if there is a way to prevent sharing particular content that I've accessed,
what's to stop me from leeching everything and never sharing anything?

(Edit: Ah, now I see "BitSwap" as possibly addressing my second question, but
I'm still concerned about the first.)

~~~
qwertyuiop924
For #1, yes. For #2, also yes, and there's nothing to stop you from leeching
save the goodness of your heart and your own personal laziness.

~~~
mcbits
Since it's impossible to know who owns, claims to own, or has licensed any
particular content, #1 is a fatal flaw unless users can officially receive
immunity (which seems doubtful). Otherwise we'll have a situation like
speeding on the highway, where "everyone does it" at the risk of a random
ticket, but in this case the fines are thousands to millions of dollars.

~~~
qwertyuiop924
I don't see how it's bad: the hosting user receives no credit, and local
mirroring keeps retrieval speeds high. Your users effectively become a giant
CDN for you. Who would sue?

~~~
mcbits
I wouldn't sue anyone. But I can watch music videos on YouTube without
contacting all of the relevant rights holders to ensure both the uploader
and YouTube have permission to show me the video. On a distributed file share
lacking anonymity and/or immunity, I have to ensure that _I_ have those rights
before viewing (thus distributing) any content.

Immunity is not likely to happen, as it would amount to granting
redistribution rights to anyone who downloads a piece of content. So the
political solution is out. Anonymity is a requirement.

That's not to say IPFS is useless. It's just not going to "replace HTTP" or
any significant part of the web as long as it's possible to see who's sharing
what. Or if it does, the sharks will feed.

~~~
qwertyuiop924
No, that doesn't add up: You only host data that's already available on a
public network, pretty much by definition. If the rights distributor didn't
want it to be globally accessible, they wouldn't have put it up on IPFS.

~~~
mcbits
I'm happy to play the naivete card when it comes to downloading. (If the
producers didn't want me watching this CAM footage of a new release on
YouTube, they wouldn't have uploaded it, wink wink.) I'm a bit more cautious
when it comes to uploading.

~~~
qwertyuiop924
But you're not uploading something that hasn't already been uploaded; you're
mirroring something that's already up.

~~~
clueless404
You won't have any luck with that argument in court.

If you doubt me, see all verdicts for infringement when bitorrenting.

~~~
qwertyuiop924
But this is very different: this is more like the original copyright holder
putting something on BitTorrent, inviting people to download it freely, and
then suing anybody who seeds it. By putting the data on a P2P protocol, you've
kind of implicitly stated that you're okay with people seeding, or whatever
the local terminology for the same is.

~~~
clueless404
Um, no.

You have no guarantees that the content put up via IPFS is by the original
copyright owner.

Thus immediately after you receive the infringing content over IPFS and start
seeding it, you are infringing on the copyright owner's rights. You have no
recourse and no legal defenses to protect you.

~~~
qwertyuiop924
Well, then, as soon as you receive notification, you can immediately stop
hosting the data: it's like a DMCA takedown, but you have to send it to way
more people.

~~~
clueless404
Too late. Once you've retrieved content via IPFS, you are already infringing.
You can still be sued, waiting for a DMCA notice won't save you.

------
empath75
As a devops guy, I sort of think ipfs seems more useful as a private, backend
sort of solution where you trust all the nodes. I'm sort of vaguely imagining
it running as a shared file system in AWS, running on docker containers.

~~~
viraptor
> where you trust all the nodes

Why do you think that's needed? Of course, if the majority of nodes cheated and
said "sure, I've got that file" and sent you random garbage instead so you
have to retry, that would be bad. But if the majority are running a proper
implementation, you don't really need to trust anyone.

~~~
Kubuxu
You need just one node to send you the right data. No need for the majority.

~~~
viraptor
You need a reasonable number of nodes. Likely a large majority. Otherwise
you'll just keep redownloading garbage data and discarding it, and likely
doing it slower than bad nodes can leave/rejoin the network with new IPs.

------
voltagex_
Beware, anyone on a metered connection: in 20 or so minutes, the ipfs daemon
has used 3 gigabytes of bandwidth.

~~~
mark_l_watson
Did this usage continue indefinitely? I would hope that there is a socket
throttling option.

~~~
voltagex_
Frustratingly, I couldn't reproduce this, and the bandwidth-using part of the
IPFS daemon appears to be completely opaque - i.e. I had no clue what was
using all the bandwidth (other than the fact there were 200+ connections).

No visibility into an app means it's unlikely to stay on my system.
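
For what it's worth, go-ipfs does expose aggregate counters via `ipfs stats bw`
(assuming a build that includes it; the numbers below are illustrative), but
that still doesn't attribute the traffic to anything in particular:

    $ ipfs stats bw
    Bandwidth
    TotalIn:  3.0 GB
    TotalOut: 250 MB
    RateIn:   2.5 MB/s
    RateOut:  100 kB/s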

------
kefka
I've been using IPFS to port and make serverless webapps.

[http://ipfs.io/ipfs/QmbLPfyehFnViKZpU237P6a6DpjCfWFSoDBMQFGU...](http://ipfs.io/ipfs/QmbLPfyehFnViKZpU237P6a6DpjCfWFSoDBMQFGUAgYW2t/)

~~~
etherealG
what is it?

------
clueless404
Does IPFS come with some kind of content filter or firewall to protect its
users?

When child porn inevitably shows up, how do you protect yourself from
accidentally downloading _and_ then seeding it?

~~~
anc84
By simply not downloading and seeding it. ;)

In IPFS you do not automatically relay things, you have to explicitly decide
to.

~~~
clueless404
> By simply not downloading and seeding it. ;)

You seem to have glossed over the qualifier I used: accidentally.

Given a hash for an IPFS resource, how do you know it does not contain child
porn before you retrieve it?

Once you have downloaded it you have committed a crime. There are no
take-backs with strict liability crimes, and IPFS provides no anonymity, so
there is a record of you downloading _and_ seeding child porn.

> In IPFS you do not automatically relay things, you have to explicitly decide
> to.

Really?

Correct me if I'm wrong, but I was under the impression that once you retrieve
a hash via IPFS, you immediately cache it and start seeding it.
~~~
pdkl95
> Given a hash for an IPFS resource, how do you know it does not contain child
> porn before you retrieve it?

Given a URL for an HTTP resource, how do you know it does not contain child
porn before you retrieve it?

> seeding

[https://freenetproject.org/help.html#childporn](https://freenetproject.org/help.html#childporn)

~~~
lkjhgfdsa57
> Given a URL for an HTTP resource, how do you know it does not contain child
> porn before you retrieve it?

As clueless404 stated, on the http web you don't automatically seed the
content that you accidentally retrieved. On IPFS you're distributing it.

As an example, another poster here provided an IPFS link to resources, some of
which are a copyright violation in some regions. In IPFS you can discover the
nodes that are seeding it:

    $ ipfs dht findprovs ...contenthash...
    QmfWQHVazH6so9p27z27rr8TJSdBFGpH7hunDcaZ1EAQ2c
    ...

These are the node IDs sharing the content. You can find all the IP addresses
published by the nodes, including private ones:

    $ ipfs id ...nodehash...
    {
      ...,
      "Addresses": [
        "/ip4/127.0.0.1/tcp/4001/ipfs/...hash...",
        "/ip4/192.168.1.5/tcp/4001/ipfs/...hash...",
        "/ip4/172.17.0.1/tcp/4001/ipfs/...hash...",
        "/ip4/1.2.3.4/tcp/4001/ipfs/...hash..."
      ],
      ...
    }

If your node accidentally retrieved a hash containing content that was illegal
or otherwise bad your physical IP is easily discoverable.

A database of bad hashes is easily calculated from existing content using:

    $ ipfs add -n foo.mp4
    added ...hash... foo.mp4

This generates a hash for a file without uploading it to IPFS. You can then
use findprovs to see if anyone is sharing it.

------
tijs14tijs
Interesting, I have two questions:

Can you create your own private IPFS network? (accessible by anyone, upload
only by me)

If you upload sensitive material to the global IPFS network, what do you think
will happen?

~~~
diggan
Private networks are something that we're working on (track it here:
[https://github.com/ipfs/notes/issues/146](https://github.com/ipfs/notes/issues/146))
but it would be private-private, only access+upload between your nodes.
"Accessible by anyone - upload only by you" is not gonna be supported since it
doesn't really make sense. If someone has a hash of a file, they can rehost it
because they can access it. In IPFS you're not locating files based on WHERE
they are, but rather by their names.

~~~
ashark
> Private networks are something that we're working on

Pretty sure you can already do it by setting up a node on _e.g._ a
DigitalOcean droplet, removing _all_ bootstrap nodes from its config file, and
having everyone on the network replace the bootstrap node addresses in their
config file with your server's address. Granted, that's not a very user-
friendly way to do it, but when I tried it, it seemed to work.
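
Roughly, with placeholder addresses:

    # on every node, drop the public bootstrap list:
    $ ipfs bootstrap rm --all

    # then point them all at your own server:
    $ ipfs bootstrap add /ip4/203.0.113.4/tcp/4001/ipfs/QmYourServersPeerId...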

~~~
diggan
Yeah, there are two things that are missing for it to be good. The first is
encryption of content, in case something happens to leave the network, and the
second is rejecting connections from anyone who is not supposed to connect
to you.

With your current setup, if/when a node gets connected to someone else (maybe
someone connects directly to you), your entire network will become connected
through that, since the connections will be shared with the peers.

------
clueless404
What problem does IPFS actually solve?

~~~
jpalomaki
Content distribution. Many people, however, don't even realise this is a
problem, because they have been spoiled by free-as-in-beer services provided by
Google, CloudFlare, GitHub and various other companies.

~~~
clueless404
Just exactly how does IPFS solve content distribution?

You still have to serve the content from a server, since you cannot depend on
the kindness of strangers to persist your content.

~~~
lukasb
The idea is that some fraction of users will be configured to reshare content,
which helps distribution for popular content scale. This seems to work in
practice for BitTorrent.

~~~
clueless404
Yes, but only for suitably popular content, as all the dead torrents will
testify. This doesn't make IPFS very appealing to Joe Blow, content maker
unextraordinaire.

~~~
anc84
Joe Blow would _just_ need to find some fans who think that keeping his
content available for others is worth their while.

I believe this could work amazingly well for netlabels and indie bands.

~~~
clueless404
Regardless of whether I think this is a bit of a far-fetched use case or not,
this really does nothing for Joe Blow and his content distribution needs.

But let's roll with it anyway. How does IPFS solve this problem, and more
specifically, how does IPFS solve this problem any better than just publishing
a torrent of Joe Blow's collected ramblings?

~~~
anc84
Sorry, I don't really care about your theoretical questions. IPFS is nice and
fun technology. If you don't want to participate, you don't have to.

~~~
clueless404
> theoretical

I don't think that word means what you think it means.

Asking what problem a technology solves is a very _practical_ question.
Countering with "IPFS is nice and fun technology" is the very opposite,
i.e. alluding to there being some _theoretical_ benefits to it, but that
you're mainly into it for gits and shiggles.

------
kylehotchkiss
There's an interesting emphasis on developing nations not engaging with the
Internet, but I think that might be partially cultural too. What tools have we
given the developing world to really engage with the internet? The easy-to-use
publishing platforms often require an email and usually a real name. Both of
these things may be unavailable in countries where being connected to thoughts
posted online could be dangerous.

Most content is not written in simple English, and there's just not much
incentive for somebody who may not know how to think critically/complexly (due
to lack of Western education) to engage with the internet.

I think the distributed web is an interesting idea, and that IPFS really lists
out some issues with the internet that we'd all win in solving, but I think
maybe some of these, like developing-nation web access, are solvable with
current tech and more culturally based solutions.

~~~
drivingmenuts
I think the better solutions come from within the society, rather than being
imposed from without (Cuba's flash-drive sneakernet). An outsider might
provide the gross implementation, but it should be up to the locals to work
out the messy details, even if it means they don't connect to the net, as a
whole, on a regular basis. It's their lives on the line, they're going to have
the best ideas on how to preserve them. An outsider is going to be, at best,
somewhat ill-informed and at worst, inimical to the solution.

------
ShakataGaNai
So I've got a (let's say) WordPress blog. Where's the "here's how to get your
existing content on IPFS in less than an hour" guide?

~~~
0xCMP
I think it'd have to be saved statically. Which actually makes this question
interesting: How do you store comments on an IPFS site? Constantly updating a
single file on IPFS?

~~~
dredmorbius
Interesting question. I know nothing about IPFS other than what I've read in
the past five minutes on Wikipedia, but:

1. HTML wants badly for a nested relational document format. Essentially tin
or mutt's "in reply to" and "references" headers.

2. A comments stream is a) a parent document to which b) multiple child
documents, themselves possibly having c) parent-child relationships, d) are
associated. Rather than thinking of "threaded discussion == single document",
think "threaded discussion == a set of related documents".

That gives the option of having a discussion "occurring" across multiple
sites, with some form of trust, whitelist, blacklist, or other mechanisms for
reflecting what you do or don't include in the discussion. Individual
comments, as their own docs, could also be freestanding instances.

Finding children from the parent becomes an interesting question.

There's also the matter of versioning a document. Tying a git-like capability
to this could be interesting.
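
To make (2) concrete, a freestanding comment document might look something
like this (the field names are invented for illustration):

    {
      "in-reply-to": "/ipfs/QmParentComment...",
      "references":  ["/ipfs/QmThreadRoot..."],
      "author":      "QmAuthorKey...",
      "body":        "the comment text"
    }

Child-to-parent links fall out naturally, since a parent's hash exists before
its children are written; finding children from the parent is the part that
needs extra machinery.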

------
teekert
Some content here to play with: [0]

Interestingly, some links to copyrighted material end in "Unavailable for
Legal Reasons"; however, running the daemon and issuing an "ipfs get <hash>",
the download does start.

[0]
[https://ipfs.io/ipfs/QmU5XsVwvJfTcCwqkK1SmTqDmXWSQWaTa7ZcVLY...](https://ipfs.io/ipfs/QmU5XsVwvJfTcCwqkK1SmTqDmXWSQWaTa7ZcVLY2PDxNxG/ipfs_links.html)

~~~
voltagex_
The "unavailable for legal reasons" message appears to be issued by the
ipfs.io gateway - IMO perfectly reasonable.

~~~
teekert
Sure, but it's not really unavailable, I mean.

~~~
bergie
Yep, but ipfs.io refuses to distribute it. That doesn't mean other IPFS hosts
necessarily do.

------
girzel
Hey I have a related question: so with IPFS we all host bits of the internet,
and with IPv6 our machines are all directly world-accessible, right? So how do
we prevent this from turning into a huge pwn-fest? If routers aren't doing NAT
and a bit of firewalling along with that, would each machine be completely
responsible for its own security?

~~~
fulafel
Home routers are doing IPv6 firewalling by default, no need for NAT. NAT is
strictly inferior to firewalling.

(Of course you shouldn't put things on your internet-connected network that
need the firewall, just look at it as a porous defense-in-depth element, just
like with IPv4)

~~~
tmzt
Does that mean they are assigning random IPv6 addresses to computers on the
network for each peer or connection and maintaining a mapping in memory?

~~~
viraptor
A normal static address is assigned. Whether it comes from the external range
or from the link-local range doesn't really matter. You could potentially
randomise the source at the exit to prevent identification of your hosts, but
you don't have to.

For most practical cases just imagine you're getting a big ipv4 range.
Whatever you can do with it - you can do the same with ipv6. NAT, no NAT,
filtering, static or dynamic assignment.

~~~
tmzt
That's what I'm asking. IPv6 feels like a regression over NATv4 because it can
leak which internal device made a request. Is there a standard way to
randomize addresses that works with off-the-shelf router firmware? Also, are
link-local IPv6 addresses leaking MAC addresses?

~~~
fulafel
Yes, there has been a standard for about 10 years now. It's not dependent on
your router firmware, in accordance with IP's end-to-end design philosophy
(keep the network dumb, and hosts smart). Here are some links to get you
started:

[https://slaptijack.com/networking/osx-disable-ipv6-address-p...](https://slaptijack.com/networking/osx-disable-ipv6-address-privacy.html)

[http://andatche.com/blog/2012/02/disabling-rfc4941-ipv6-priv...](http://andatche.com/blog/2012/02/disabling-rfc4941-ipv6-privacy-extensions-in-windows/)

As for link-local IPv6 addresses, those aren't even accepted by the socket
API[1] in place of normal routable addresses. They're only used for low level
things like neighbour discovery (IPv6 equivalent of ARP) and apps that go out
of their way to use them. They aren't routable outside your L2 segment.

[1] as in:

    $ telnet fe80::something:something:42
    Trying fe80::something:something:42...
    telnet: Unable to connect to remote host: Invalid argument
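
On Linux, assuming a distribution that doesn't already enable them by default,
the RFC 4941 privacy extensions are a sysctl away:

    # prefer temporary, randomized source addresses
    $ sysctl -w net.ipv6.conf.all.use_tempaddr=2
    $ sysctl -w net.ipv6.conf.default.use_tempaddr=2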

------
JulianMorrison
This sounds like distributed Geocities, where you can have any content you
like so long as it's static, or at least changes in iterations of static
files, like an HTML-generator blog.

If you do anything that needs a central server, suddenly its advantages
vanish. I could imagine Wikipedia using this; I couldn't imagine gmail doing
so.

~~~
inimino
Most of the pages on the Web are still static HTML or something close to it,
though.

~~~
JulianMorrison
Maybe but very few of them can do their job with _only_ static HTML.

~~~
inimino
I would say most of the articles linked to on HN could be static HTML pages.
How practical HN itself would be on IPFS seems less clear.

------
usgroup
What I'd personally like to see is built-in monetisation, such that hosting and
serving other people's pages becomes a socialised cost and benefit. Although
one would guess that such a feature would have to be deeply designed into the
system itself, and cannot be added as an afterthought?

------
j45
I really hope something like this takes off.

Connecting and indexing documents has been the challenge of a few internet
generations. Creating a document at a point of filing is a subtle but
potentially large shift.

Hopefully this lands on Homebrew soon to aid its growth.

------
mark_l_watson
I heard about IPFS at the Decentralized Web Conference in SF last spring. It
sounded promising, long term. Anyone here using it right now? What are the
costs for running it on a VPS, for example, bandwidth, storage, and CPU load?

------
delegate
ipfs is fantastic, but it is half the solution. We also need a distributed p2p
application framework with which nodes can securely communicate, allowing us
to build distributed apps, like search.

We can think differently with ipfs. The traditional web allows everyone to
publish content _somewhere_, hoping that search engines will index it.

With ipfs, the same file (with the same content) is only indexed/stored once
and then you reference the hash to get to the content.

This fact changes the problem of search.

Take all the world's movies. With ipfs + p2p network, you only need _one_ back
end in the form of a distributed search index, which can index all the movies
in the world.

Same with the world's music. You only need _one_ back end which can index all
the music.

The index can be as simple as {"movie title": [sha256]}, where the array
contains the hashes of different 'encodings' of the same content (e.g. 'DVD
rip', 'Blu-ray' or 'mp3').

Content can be indexed by all kinds of properties of course and it can grow
organically over time to include more and more details.

With ipfs plus the p2p network we'll build 'apps', not 'pages'. People can
have a list of 'apps' running on their machines - which are node instances in
various distributed applications, sharing the same p2p network and using ipfs
as storage.

Apps can have 'back end' and 'front end' parts - the back end is the part
which participates in the p2p network, while the 'front end' provides a human
interface to the back end, where users can search/browse/view the content.

Apps are distributed as git repositories stored in ipfs, while the 'core'
running on the user's machine compiles the sources (inside a build vm) and
loads the resulting binaries into containers running in virtual machines.

This would make it easy for devs to write and publish new distributed apps,
making the network totally decentralised and virtually unstoppable.

Ps. If you feel that this insanity could work, then I'd love to discuss it in
more depth - delegate78@gmx.com

------
tscs37
IPFS is a pretty nice project, but it's pretty slow at times.

~~~
diggan
I work with IPFS, so it would be very handy to know what exactly you feel is
slow. We're working our hardest to make it as fast as possible, but sometimes
we fall short, so any information that helps us fix it would be wonderful!

~~~
tscs37
Most "slowness" is related to content that is either relatively fresh on the
network or or rare content.

If my node is the only one seeding the file, accessing it via gateway.ipfs.io
is slow but it's pretty obvious why; I'm the only provider.

So while for very public content IPFS is rather fast, if you share a couple
files with a few friends, it's a bit slower.

Of course, this all depends on upload/download speeds; mine aren't stellar in
either direction.

Plus I'm rather certain that my ISP is throttling any kind of P2P traffic.

~~~
diggan
Ah, I understand. Yeah, there's not much we can do about that, but in the
future you can expect to be able to use some sort of service to help you with
the initial seeding of the content.

So instead of you sharing a file from your home connection and five friends
having to fetch it from you, you'll share your file with some hosting
provider, and once they have the file, you'll just send the same hash to your
friends and you'll have at least 2 nodes providing the file.

------
manigandham
Who exactly runs these nodes that store data?

~~~
gant
Everybody that pins it, or that retrieved the file from someone else (the
latter up to a configurable maximum cache size). Files and folders are
referred to by hash.

Usually you still need to run a server to seed the files, but if a file
becomes popular it will be served by anybody who also retrieved it, and will
likely persist in the network even if your server goes down.

Think of it as BitTorrent for the web. It has gateways that allow current
browsers to access the network, and pointers to update content with a new
version (that's /ipns/, not /ipfs/ - the old version persists as long as
someone hosts it).

What's also interesting is that due to the nature of the hashing algorithm
(it's a Merkle tree), you can "ipfs add" a directory, preserving the file
structure inside, so websites on IPFS can use relative paths.

------
z3t4
We keep things that are important, and throw away the garbage. But if we keep
everything, there will be mostly garbage.

~~~
inimino
It's impossible to keep everything. A system like this one keeps the things
people access, rather than the current system where content remains available
only at the will (and expense) of the publisher.

------
descript
IPFS is vaporware. They are going to launch a token and try to take your money

------
bfrog
How does this differ from MaidSafe?

------
vegabook
IPFS appears here every 6 months, every 6 months the same questions get asked,
the same problems get raised, the same collective sigh of
bewilderment/disappointment appears to emanate from the comments, and it goes
away again for another 6 months. Everybody wants something this clever and
community-spirited to work, but the basic problem is, I don't want my data to
be vulnerable to slow, unreliable endpoints, or people switching off their
IPFS servers. I can't really trust an unremunerated volunteer system with my
data, and I don't believe that my keeping your data is remuneration enough for
you to keep mine forever.

Peer-to-peer is excellent for ephemeral streaming stuff like chat, file
transfer, even gaming. But it is not good for permanence unless some monetary
remuneration gets involved, either via a centralizing entity asking for
payments (dropbox et al), or a distributed monetization system like bitcoin.
Somewhere, somehow, someone needs to get paid to keep the system running.

~~~
c22
I would say the impermanence is one of the most appealing aspects of the
system. Content that no one uses and that no one is willing to sustain gets
culled from the network; what more could you ask for?

~~~
vegabook
I assume you would also like ext4 to automatically delete your seldom-used
files, without asking?

~~~
c22
That's absurd, ext4 solves a different problem. This is about distribution.

~~~
vegabook
It is the InterPlanetary _File System_. The clue is in the name. The
analogy is entirely relevant. This thing is not just a replacement for HTTP.

------
knocte
504 Gateway timeout

A bit ironic :) (being distributed, it shouldn't be a single point of failure
;) )

~~~
viraptor
It is distributed. The website should have a copy at
/ipfs/QmRaS4AZriMzw9nekub7hojTnvQYsVTDqkYG7BggQsexNt (check TXT record). You
just need to access it over the right protocol.
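
For example, with a local daemon running, something like this should fetch it
(using the hash above):

    $ ipfs get QmRaS4AZriMzw9nekub7hojTnvQYsVTDqkYG7BggQsexNt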

~~~
knocte
But I thought ipfs-js would make it run today within the browser...?

~~~
viraptor
To use js-ipfs, you still need to host the script somewhere. Doesn't help when
the whole host doesn't respond.

~~~
knocte
Right, OK, but then is ipfs.io already using ipfs-js itself? Because some time
ago I checked, and the reference impl was Python, and the ipfs-js port was not
complete yet.

~~~
haadcode
(IPFS dev here)

js-ipfs, the JavaScript implementation of IPFS, has made a lot of progress in
the past 6 months. It's still early but totally usable.

We've been working on go-ipfs and js-ipfs interop so that browser nodes can
talk to "native" nodes. It's not fully ready yet but soon. This will open a
lot of doors for a more advanced network and applications using IPFS.

See [https://github.com/ipfs/js-ipfs](https://github.com/ipfs/js-ipfs).

As for your question re. ipfs.io using js-ipfs, the answer is no, it doesn't
use the js-ipfs implementation yet.

~~~
david-given
Is there a working ipfs demo page anywhere? I've had a look around and can't
find one (the closest appears to be Orbit, but the ipfs-js version is down).

I'm particularly interested in using ipfs pubsub capabilities. It looks like
an interesting way of allowing multiple users of a web app to talk to each
other without needing (much) server side support.

~~~
haadcode
Unfortunately there's no proper demo page :/ We're working on improving the
docs (we know this is a big issue atm).

There's an old version (from June) of Orbit at
[http://orbit.libp2p.io](http://orbit.libp2p.io) which you can try. Much has
happened since, and we're working on bringing the js-ipfs version of Orbit
back to a working state.

Re. pubsub, I'm personally also very excited about it! :) The specs and general
info are located here:
[https://github.com/libp2p/pubsub](https://github.com/libp2p/pubsub). go-ipfs
merged pubsub into master some time ago with this commit:
[https://github.com/ipfs/go-ipfs/commit/e1c40dfa347e38bdc9812...](https://github.com/ipfs/go-ipfs/commit/e1c40dfa347e38bdc98126d422c495dc157b3642),
and we're working to get it into js-ipfs here:
[https://github.com/ipfs/js-ipfs/issues/530](https://github.com/ipfs/js-ipfs/issues/530).

~~~
qwertyuiop924
Can I just take a moment to talk about how happy I am that the IPFS team is
actually taking protocol documentation seriously, allowing for multiple
implementations and the ability to understand how the protocol works?

IPFS and Matrix seem to be the only teams doing this: Dat and SSB have vague
protocol documentation at best, documenting only bits of the protocol.

