
HTTP is obsolete. It's time for the Distributed Web (2015) - Karrot_Kream
https://blog.neocities.org/blog/2015/09/08/its-time-for-the-distributed-web.html
======
macawfish
I need to comment because people are missing the point... there's _nothing_ in
this text that says the web won't need servers.

Imagine that you and some friends want to launch a small local business and
need to host a website. Instead of paying to host it "up in the cloud", why
not plug a few Raspberry Pis into the walls at each of your houses? Between
that and also seeding it from your laptops, the site should have decent
coverage. Maybe you could also offer some discounts for loyal customers who
choose to seed it. This site will be redundantly hosted in the location where
it's most likely to be accessed.

If you're doing something more permanent, then you'll want to make sure you
have more stable hosts, not just some people's cell phones. It's the same as
now: if you want to host something, make sure there is at least one computer
serving it on a decent internet connection.

The difference is that with distributed web technologies, there is a smooth
continuum for scaling. You don't even need to assume there is an ISP to seed!
All you need is LANs. But if you want to do big things, you can harness the
power of thousands of peers all streaming something that they're into.

I use syncthing and resilio sync all the time, and it works great with just 3
devices.

~~~
apatters
> Imagine that you and some friends want to launch a small local business and
> need to host a website. Instead of paying to host it "up in the cloud", why
> not plug a few raspberry pi's into the walls at each of your houses?

Setup, updates, maintenance, tech support, and uptime guarantees, just to name
a few reasons that "the cloud" is better. A service like Wordpress.com or Wix
beats the self-hosted Pi on all of these counts.

I interact with a lot of non-technical small business owners and am "that tech
guy" in their minds. A question I'm hearing more and more frequently is _why
even bother with a website when a Facebook page is much easier and they can
see people interacting with it._

Their reasons are not all that different from why many tech savvy HN readers
are using a Mac instead of Linux: convenience; less shit to worry about.

Hosting anything on a Pi plugged into the wall goes in the exact opposite
direction from what these people want. The centralized services are winning
because they pay attention to what the market wants, they build it, and they
make it easy to sign up.

~~~
macawfish
I don't literally mean a Raspberry Pi. The Raspberry Pi is the Apple II of
what I'm imagining. I'm talking about some next-generation stuff: picture a
Fire Stick running a much more refined iteration of Sandstorm, with
distributed apps that have hardly even been conceived of today in 2017.

If my roommate can plug a Roku into the TV, and knows how to use Ableton Live
and Squarespace, there's absolutely no reason he couldn't use something like
that.

And there's no reason that those non-technical people couldn't continue to pay
you for helping them use stuff like that.

And there's no reason that people can't continue to use stuff like Facebook.
But I have a feeling that people are going to be over that way of doing things
by the time the next two decades are over.

~~~
ivanhoe
But you still have all maintenance related problems? How do you upgrade a hard
disk or memory, go to their home? What happens if their home net is down or
slow, no one can visit the site? Back in the days I was running a few web
servers from my office directly, and it's a lot of extra work that is just not
worth it.

~~~
TeMPOraL
Who cares?

The Internet is breaking all the time anyway. Every day at least one of the
bigger / more important sites I visit has a temporary problem with something.
Three days a week HN keeps returning CloudFlare errors to me. Even Facebook
has some issues that break it every other week. The world isn't ending because
of this, and it isn't going to end because the site I co-host with my other
friend is down for the night.

As you grow to the point where close-to-perfect reliability matters, you'll be
able to afford to get someone to handle the hosting for you, just like you do
today.

~~~
skinnymuch
Not denying you get these issues or errors with sites, but how come I never
seem to have any issues with major sites? I do see Cloudflare errors, but
never for a top-1000 site.

------
jlrbuellv
It's kind of funny that the example link to a "permanent" object still returns
a 404 ("ipfs resolve -r /ipns/QmTodvhq9CUS9hH8rirt4YmihxJKZ5tYez8PtDmpWrVMKP:
Could not resolve name.") I totally want the web to magically be distributed
too, but clearly not even the author is bothering to host their IPFS content
anymore...

~~~
__s
> IPNS isn’t done yet, so if that link doesn’t work, don’t fret. Just know
> that I will be able to change what that pubkeyhash points to, but the
> pubkeyhash will always remain the same. When it’s done, it will solve the
> site updating problem.

~~~
IncRnd
That's how these things always work, by saying "it will work one day."

If it would work, and the cost benefit ratio were there, people would adopt it
quickly. That's what happens with just about everything else.

~~~
jwfxpr
> If it would work, and the cost benefit ratio were there, people would adopt
> it quickly. That's what happens with just about everything else.

Great point! Just like:

  * Betamax
  * HD DVD
  * Minidisc
  * Hoverboards
  * IPv6
  * DNSSEC
  * PGP & PKI
  * Linux desktops
  * Dvorak keyboards
  * The metric system
  * Decimal time
  * [flavour-of-the-month programming language]
  * [flavour-of-the-month database]
  * [flavour-of-the-month cypher]
  * ...

The factors that influence the proliferation of a technology are wildly
divergent from the criteria 'works well, cost/benefit'. I'm not even sure
those are weakly correlated proxy indicators of technology uptake.

~~~
katastic
Actually, it's pretty simple.

"Better" has to actually "be better ENOUGH" to warrant all of the retooling of
existing systems. I've got plenty of clients who would happily run Windows
2003 ("it's paid for") if it weren't for changing standards that aren't
compatible (newer TLS, Exchange, etc) and security breaches. They only upgrade
because they have to. "E-mail is e-mail" to them.

But if you sell them some magical new technology that promises new features,
like tons of data-analysis tools and easy graphs and charts in a new version
of the CRM, they'll happily upgrade.

~~~
lgierth
Another important factor for adoption/adoptability is how well the new system
integrates with existing deployments of older systems. Ideally it completely
interoperates with the older systems, while providing you with additional
value right from the start.

~~~
carolc
Agreed with lgierth, and I believe this is what sets IPFS apart from many
similar technologies: an integration path for existing technologies. As far
as I can tell, this has been an important design decision from early on for
IPFS.

------
Too
While this might solve distributed content serving, I'm still stuck with
transporting all my data through my single-point-of-failure, spying ISP, when
all of my neighbors live within wifi range.

For a truly distributed network we should look into ways to make internet work
more like a mesh.

Back in university, all the student dorms were connected to the same internal
campus network. Internet access was slow back then, but you could still share
files blazingly fast with all the other students on the network (using DC++
at the time). While this wasn't exactly a pure mesh either, it shows that
local data works and often beats the internet, even with just a few thousand
clients. Granted, DC++ was mostly focused on pirated content, but with a more
human-friendly solution, like IPNS, it's not unimaginable that average Joe
neighbor could create a local mirror of the whole Wikipedia for you with one
click.

~~~
lgierth
IPFS can discover other nodes in the same local network via mDNS -- you don't
need to have an internet connection at all to share data locally.

Combine this with the recent work on PubSub [1] and CRDTs [2] and you can make
many applications work locally that are otherwise annoyingly strongly coupled
to internet services (think Etherpad, Google Docs, Skype, GitHub, etc.).
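To make the CRDT idea concrete, here's a toy state-based grow-only counter in
Python. It's purely illustrative (the js-ipfs CRDT work is far more
elaborate): two replicas accept increments offline and merge later, and
because merge is commutative, associative, and idempotent, they converge
without any coordination with an internet service.

```python
# Toy state-based G-Counter CRDT: each replica tracks its own increment
# count, and merging takes the per-replica maximum. Replicas can sync in any
# order and still converge to the same value -- no central server required.

class GCounter:
    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> increments seen from that replica

    def increment(self):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + 1

    def merge(self, other: "GCounter"):
        # Element-wise max makes merge commutative, associative, idempotent
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

    def value(self) -> int:
        return sum(self.counts.values())

a, b = GCounter("laptop"), GCounter("phone")
a.increment(); a.increment()   # edits made offline on the laptop
b.increment()                  # edit made offline on the phone
a.merge(b); b.merge(a)         # replicas exchange state over the LAN
print(a.value(), b.value())    # 3 3
```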

> For a truly distributed network we should look into ways to make internet
> work more like a mesh.

Yes! We'll be putting more work into the network stack (libp2p) in the coming
months. IPFS itself has done a ton to rework how content is defined and moved
around, and libp2p will do the same for the network connections underneath.
Think overlay networks, cryptokey routing, packet switching.

[1] [https://ipfs.io/blog/29-js-ipfs-pubsub/](https://ipfs.io/blog/29-js-ipfs-pubsub/)

[2] [https://ipfs.io/blog/30-js-ipfs-crdts.md](https://ipfs.io/blog/30-js-ipfs-crdts.md)

------
Asdfbla
Why are distributed filesystems like IPFS so popular again these days?
Freenet has been around (and super niche) for close to 20 years now. Is it
because the Bitcoin hype has reinvigorated crypto-anarchists?

~~~
tmerr
Hmm, they are sort of different things. Freenet basically has hash-addressed
content plus some relationship between human-readable strings and the hashes,
so you/can/refer/to/stuff/like/this, making it easy to use HTTP on top of
Freenet for navigation. In contrast, IPFS puts the hashed content itself in
charge of navigation by using git-like objects. If you understand the way git
objects work ([https://git-scm.com/book/en/v2/Git-Internals-Git-Objects](https://git-scm.com/book/en/v2/Git-Internals-Git-Objects)),
then you understand that this allows you to navigate an immutable tree of
content and an immutable history. On Freenet, by contrast, particular files
are immutable, but that's as much as it guarantees.
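For intuition, a git-like content-addressed store can be sketched in a few
lines of Python. This is a toy, not IPFS's actual object format (real IPFS
uses multihashes and a Merkle DAG): blobs and trees are stored under the hash
of their bytes, and a tree links to children by hash, so one root hash pins
an entire immutable hierarchy.

```python
import hashlib
import json

# Toy content-addressed object store in the git/IPFS spirit: every object is
# stored under the SHA-256 of its serialized bytes, and "tree" objects refer
# to children by hash. Knowing a root hash lets you navigate the whole
# immutable hierarchy.
store = {}

def put(obj) -> str:
    data = json.dumps(obj, sort_keys=True).encode()
    h = hashlib.sha256(data).hexdigest()
    store[h] = data            # same content always lands at the same address
    return h

def get(h):
    return json.loads(store[h])

readme = put({"type": "blob", "data": "hello ipfs"})
root = put({"type": "tree", "entries": {"README.md": readme}})

# Navigate from the root hash down to file content, like a path lookup:
tree = get(root)
print(get(tree["entries"]["README.md"])["data"])  # hello ipfs
```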

They also differ in the way routing works. On Freenet you ask a (mostly)
random neighbor whether they have the file with the hash you want. If they
don't have it, they ask another (mostly) random neighbor. This can go on for
a while, until the request either finds the content or hits a maximum number
of hops, in which case it backtracks. The only point of these Rube Goldberg
shenanigans is anonymity. Since IPFS is more concerned with performance, it
flips this on its head: instead of blindly asking nodes for content, it
carefully keeps track of which peers advertise what they're looking for;
aside from being much more efficient, this also allows you to choose not to
do business with leechers, like in BitTorrent.

Maybe IPFS is a reinvention of a past technology, but certainly not Freenet.
(Does anyone know of something closer?)

Freenet paper:
[http://www.cs.cornell.edu/courses/cs414/2003sp/papers/freene...](http://www.cs.cornell.edu/courses/cs414/2003sp/papers/freenet.pdf)

IPFS paper: [https://github.com/ipfs/ipfs/blob/master/papers/ipfs-cap2pfs...](https://github.com/ipfs/ipfs/blob/master/papers/ipfs-cap2pfs/ipfs-p2p-file-system.pdf?raw=true)

~~~
clarry
> Freenet basically has hash-addressed content plus some relationship between
> human-readable strings and the hashes, so you/can/refer/to/stuff/like/this
> making it easy to use HTTP on top of Freenet for navigation. In contrast
> IPFS makes the hashed content itself in charge of navigation by using git-
> like objects [..] on Freenet particular files are immutable, but that's as
> much as they guarantee.

Actually there isn't that much of a difference here. Freenet manifests are
analogous to git trees; knowing the chk of a manifest file gets you to the
metadata that identifies all the files under the tree. It's all immutable.

There are some noteworthy differences though. One that stands out in
particular is that Freenet breaks large (>32kB) files into chunks. Everything
is encrypted too, so finding a file you want is not quite as simple as taking
the hash of the plain, unencrypted file.
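The chunking step is easy to picture in Python. This is a toy sketch that
skips the encryption Freenet applies to each chunk: split the file into 32 kB
pieces and address each piece by the hash of its bytes.

```python
import hashlib

CHUNK_SIZE = 32 * 1024  # 32 kB, as mentioned above

def chunk_and_hash(data: bytes):
    """Split a file into fixed-size chunks, each addressed by its own hash.
    (Toy sketch: real Freenet encrypts chunks before storing them, so chunk
    keys are not simply hashes of the plaintext like here.)"""
    chunks = []
    for offset in range(0, len(data), CHUNK_SIZE):
        piece = data[offset:offset + CHUNK_SIZE]
        chunks.append((hashlib.sha256(piece).hexdigest(), piece))
    return chunks

blob = b"x" * (100 * 1024)          # a 100 kB file
chunks = chunk_and_hash(blob)
print(len(chunks))                   # 4 chunks: 32 + 32 + 32 + 4 kB
print(len(chunks[-1][1]))            # 4096 bytes left over in the last chunk
```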

Either way, you can easily build a git-like hierarchy of immutable content
(and history) on Freenet, and this is more or less what happens under the hood
anyway with manifests and splitfiles.

As a slight deviation from the norm, Freenet can also address (signed)
content by its public key rather than by content hash. This is one way to
enable mutable data, not entirely unlike git heads. These keys can still link
to immutable content-hash keys.

> They also differ in the way routing works. On Freenet you ask a (mostly)
> random neighbor whether they have a file with the hash you want. If they
> don't have it, they ask another (mostly) random neighbor. This can go on for
> a while, until it either finds the content or hits a maximum number of hops,
> in which case it backtracks. The only point of these rube-goldberg
> shenanigans is anonymity.

It's worth pointing out that Freenet _does_ have a simple but powerful routing
system. Each network node has a virtual location in key space. Requests are
routed towards the nodes closest to the requested key. With careful selection
of peers, the network topology can make for very efficient routing. E.g. one
could have a small number of nodes "far apart" in key space, to facilitate
routing towards far-away keys. Then you have a larger number of relatively
close nodes, so that when a request comes in "your general direction" from a
far-away node, you're likely to have the right peer to route to.

It is true that some randomisation helps with anonymity.
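A crude Python sketch of that distance-based routing (the topology below is
invented purely for illustration): nodes live at virtual locations in a
circular [0, 1) key space, and each hop greedily forwards the request to
whichever peer sits closest to the requested key.

```python
# Crude sketch of Freenet-style key-space routing: each hop greedily forwards
# towards the peer closest to the requested key, stopping when the current
# node is closer than any peer it can see.

def distance(a: float, b: float) -> float:
    """Circular distance in a [0, 1) key space."""
    d = abs(a - b)
    return min(d, 1 - d)

# node location -> peer locations (a few "far" links plus close neighbors)
peers = {
    0.10: [0.12, 0.15, 0.50],
    0.12: [0.10],
    0.15: [0.10],
    0.50: [0.10, 0.55, 0.90],
    0.55: [0.50, 0.60],
    0.60: [0.55],
    0.90: [0.50],
}

def route(start: float, key: float, max_hops: int = 10):
    """Greedily hop towards the key; stop when no peer is closer than us."""
    path, node = [start], start
    for _ in range(max_hops):
        best = min(peers[node], key=lambda p: distance(p, key))
        if distance(best, key) >= distance(node, key):
            break  # we are the closest node we can see
        node = best
        path.append(node)
    return path

print(route(0.10, key=0.60))  # [0.1, 0.5, 0.55, 0.6]
```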

EDIT:

I'll add that the flipside of a network that essentially enables leeching is
that it's also good for retention of popular (or "popular") items. Requested
data is cached en route, so it's sort of automatic load balancing. Soon enough
popular resources are likely to be held by whoever is nearby. People won't
need to manually pin the content, and it's hard to directly DoS those who
share the content you're "after." Asking for it just makes it more available.

I think this is important to consider if we're discussing reliability of
distributed networks.

I'm not saying what the implications are -- for they can be good or bad.

~~~
tmerr
Thanks for the correction! I don't think manifests were mentioned in the
whitepaper but I found some info on the wiki:
[https://github.com/freenet/wiki/wiki/Simple-Manifest](https://github.com/freenet/wiki/wiki/Simple-Manifest).

It looks like I will also have to learn more about IPFS's routing. Clearly
Freenet's routing has some merits, and I would hope that IPFS is strictly
faster since it sacrifices privacy, but I don't understand it well enough to
make sense of how it scales.

------
CryoLogic
Obsolete? Maybe not yet. But I do think putting effort into improving
distributed protocols like IPFS could be very helpful in preventing internet
censorship like we have seen recently from Comcast, YouTube, etc.

~~~
projectant
Exactly. It's like, Woah, hold your horses there. "Obsolete"? I don't think
so. To what extent can this IPFS serve an API or a dynamic database at this
point? To what extent will it ever be able to do that?

I think HTTP/websockets are very good for these things. Static data is one
thing; dynamic is a whole other story. It seems IPFS is just a new distributed
way to archive data. So what? It doesn't help serve something like FB over a
distributed network, does it?

And to what extent could some sort of "protocol" vulnerability stop these
networks from being "uncensorable"? Are they truly resistant to censorship,
or could they be effectively shut down somehow? Wouldn't DDoS attacks cripple
them? I mean, that's a crucial flaw, right? You just have to look up all the
nodes for a piece of content and constantly flood them with DDoS traffic and
then, hey, you've censored the network, right?

~~~
carolc
Databases, and dynamic content in general, can be done with/on IPFS.

Take a look at OrbitDB ([https://github.com/orbitdb/orbit-db](https://github.com/orbitdb/orbit-db)) -
"Distributed peer-to-peer database for the decentralized web" - or their blog
post "Decentralized Real-Time Collaborative Documents - Conflict-free editing
in the browser using js-ipfs and CRDTs" ([https://blog.ipfs.io/30-js-ipfs-crdts.md](https://blog.ipfs.io/30-js-ipfs-crdts.md)).

And all that works in the browser without running a local IPFS in the
background. That's pretty amazing imo.

~~~
Luker88
In general? No. Just because you can doesn't mean you should use a
distributed DB. Please remember that distributed, open databases have very
narrow use cases.

Leaving aside use cases like credit card information, there is a lot of user
information that is illegal to share unless the user explicitly consents. In
the EU you can't even share your access logs by default.

And how do you handle authentication? Passwords? How do you avoid user
enumeration, or the collection of user emails and info?

Distributed filesystems and CDN in general are great, but let's use them for
things that do not actually need a single bit of security, please.

~~~
carolc
> "Distributed filesystems and CDN in general are great, but let's use them
> for things that do not actually need a single bit of security, please."

The notion that distributed filesystems are inherently insecure, or can't be
made secure, is way off. I would argue that with these technologies, such as
IPFS, they can be _more_ secure.

The use cases are not only "open databases" (by which I assume you mean open
to the public); private databases and data sets can be achieved just as well.
Just because it's "distributed" doesn't mean it can't be private or access
controlled.

Agreed on the comment re. "...illegal to share unless the user explicitly
consents", and I believe this will turn out better in the trustless,
distributed web, eventually. Our whole current approach is based on the
client-server paradigm, forcing us to put every user and their data into one
massive centralized database. But we can change the model here. Instead, how
about _you_ owning your data(base) and controlling who gets to access it?
"Allow Facebook to read your social graph?" "Oh, no? How about another social
network app?" As a user, I would want to have that choice.

That bridges to your next point on authentication, which can be done at the
protocol level with authenticated data structures. You can define who can
read/write to a database by using public-key signing/verification. It could
be just you, or it could be a set of keys. One good example of this is Secure
Scuttlebutt ([http://scuttlebot.io/](http://scuttlebot.io/)). I highly
recommend taking a look at and understanding the data structures underneath.
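A toy Python sketch of an authenticated append-only feed, loosely in the
spirit of Secure Scuttlebutt. Real SSB signs each entry with the feed owner's
ed25519 key; here the "signature" is faked with a keyed hash (HMAC) so the
example runs with only the standard library, and all field names are invented
for illustration.

```python
import hashlib
import hmac
import json

SECRET = b"feed-owner-private-key"  # stand-in for a real private signing key

def append(feed, content):
    """Append a message that commits to the previous message's hash."""
    prev = feed[-1]["id"] if feed else None
    body = json.dumps({"prev": prev, "content": content}, sort_keys=True)
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    msg = {"id": hashlib.sha256((body + sig).encode()).hexdigest(),
           "body": body, "sig": sig}
    feed.append(msg)
    return msg

def verify(feed):
    """Only the key holder can extend the feed; tampering breaks the chain."""
    prev = None
    for msg in feed:
        expected = hmac.new(SECRET, msg["body"].encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(msg["sig"], expected):
            return False
        if json.loads(msg["body"])["prev"] != prev:
            return False
        prev = msg["id"]
    return True

feed = []
append(feed, "hello")
append(feed, "world")
print(verify(feed))  # True
feed[0]["sig"] = "0" * 64  # a forger without the key can't produce a sig
print(verify(feed))  # False
```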

~~~
userpass
[http://scuttlebot.io/more/protocols/secure-scuttlebutt.html](http://scuttlebot.io/more/protocols/secure-scuttlebutt.html)

> "Unforgeable" means that only the owner of a feed can update that feed, as
> enforced by digital signing (see Security properties).

[https://github.com/ssbc/patchwork](https://github.com/ssbc/patchwork)

> You have to follow somebody to get messages from them, so you won't get
> spammed.

Doesn't that make it completely pointless, because updates are still
centralised? It merely shifts trusting a single provider to trusting each
user, which is not a scalable solution. The value add is so low you might as
well just use IPNS and have people subscribe to IPNS addresses.

~~~
arj
But it is scalable. On Scuttlebot you follow people just like you have
friends in real life. I also don't need to ask the government for permission
to talk to that person. That is DEcentralized for you right there.

------
yeukhon
So a couple questions even reading this...

1. Does availability depend on the number of peers, like BitTorrent? If so,
and if no seed is available, how does one access the content, especially in
the context of an intranet?

2. Any change to how we run infrastructure, except not serving HTTP?

~~~
tscs37
The usual solution proposed to fix number 1 is to pay someone to host your
content. So basically as before, but with the added chattiness and overhead
of a p2p protocol.

~~~
cookiecaper
IMO Filecoin is the most potentially revolutionary thing to come out of the
IPFS space, and it or something like it may have impacts that extend beyond
keeping content distributed. We _badly_ need a safe, self-organizing,
apparently-persistent storage mechanism.

What if a "universal basic income" meant "get paid for sharing your excess
disk space"? This could even be made transparent similar to OS page caches,
and then everyone with a computer + internet connection would be a
participant.

------
hyperpape
Does HTTP have to lose for IPFS to win? Except for the cost argument, none of
the points in this article prevent sites from serving things via HTTP with
IPFS metadata for the long term (though I don't know how that would play with
advertising).

~~~
beager
One question that came to mind while reading this was "Why do we have to
demean HTTP in order for the tenets of IPFS to succeed?"

HTTP is ubiquitous, and no purism or idealistic superiority of some other
protocol is going to sway everybody to the New Hot Thing. This is not a knock
on IPFS but rather a recognition of reality: you're going to have to work
within the current system to supplant it. And that means not only tolerating
the old thing while pushing for the advantages of the new thing, but accepting
that absorption of the dynamics of the new thing within the old thing
represents a victory for the new thing, even if it doesn't get the named
recognition.

Maybe we're ready to evolve beyond the limitations of HTTP (and HTTP/2, which
I see as a viable and feasible, if short-sighted, improvement to HTTP). How
are you going to get Google, Facebook, Amazon, and everyone else to go along
with you? If you offer the benefits as compatible add-ons to the existing
norms, you will succeed. If you demand that we fully jettison HTTP to achieve
something better, methinks you will have an insurmountably hard time.

------
LukeB42
Synchrony is an intended solution to centralisation, written back in 2015
after being in design since around 2011:
[http://github.com/psybernetics/Synchrony](http://github.com/psybernetics/Synchrony).
This implementation also ships a peer-to-peer hyperdocument editor baked into
the core of the UI, so the Python PoC is a bit of a mutant.

Currently aiming to release a Go implementation at some point, after canning
a C implementation earlier this year, in which p2p is instead accessed via a
CONNECT proxy.

Is there any interest in this project?

------
shade23
How does IPFS deal with dynamic content? And how would you make sure that
everyone uses an updated version of the website?

~~~
brownbat
I was worried about that too.

Scheduled republication is my best answer so far.

If you promised to sign and republish the same file every day with a new
timestamp, then people would know when they had the latest version and when
they didn't... they would just have to wonder if you fell off the earth,
which we sort of do already with all those abandoned free-software projects
online.

Republication may be cost-prohibitive for large files, so instead you could
republish a metadata file that points to the latest hash as of the metadata
file's publication time.
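A back-of-the-napkin Python sketch of that metadata-pointer idea. The field
names and the HMAC stand-in for a real signature are invented for
illustration; nothing here is actual IPFS/IPNS machinery.

```python
import hashlib
import hmac
import json

SECRET = b"publisher-signing-key"  # stand-in for a real private key

def publish_pointer(content: bytes, timestamp: float) -> dict:
    """Republish a small signed record pointing at the latest content hash."""
    record = json.dumps({
        "latest": hashlib.sha256(content).hexdigest(),  # hash of the big file
        "published": timestamp,                          # freshness signal
    }, sort_keys=True)
    return {"record": record,
            "sig": hmac.new(SECRET, record.encode(),
                            hashlib.sha256).hexdigest()}

def is_fresh(pointer: dict, now: float, max_age: float = 86400) -> bool:
    """Readers trust the pointer only if signed and republished recently."""
    expected = hmac.new(SECRET, pointer["record"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(pointer["sig"], expected):
        return False
    return now - json.loads(pointer["record"])["published"] <= max_age

p = publish_pointer(b"big site snapshot", timestamp=1000.0)
print(is_fresh(p, now=1000.0 + 3600))    # True: republished within a day
print(is_fresh(p, now=1000.0 + 200000))  # False: publisher may have vanished
```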

For the "hit by a bus" problem (or for a server doing this automatically, the
"hit by a comet" problem?), it'd be nice to include a dead man's switch from a
third party, where they can publish a "FINAL -- EXPECT NO MORE UPDATES"...

But at that point you're trusting a third party. If you're willing to trust a
third party, this is far easier. So that might be what we'd end up with...
something like DNS providers, but suddenly they're managing indexes and
metadata for hosted files? I don't know...

(Also, this has probably been worked out already by smarter people than me, I
haven't looked at IPFS much, this was just a back of the napkin guess.)

------
styfle
Does anyone know if Dat can be used over HTTP like IPFS can?

~~~
frabrunelle
Yes, Dat sites can be rehosted over HTTPS. See for example
[https://github.com/beakerbrowser/dathttpd](https://github.com/beakerbrowser/dathttpd)

Also: [https://hashbase.io/](https://hashbase.io/)

------
fingerguns
IPFS is using Filecoin, which in itself is a questionable ICO.

[https://tokeneconomy.co/the-analysis-filecoin-doesnt-want-yo...](https://tokeneconomy.co/the-analysis-filecoin-doesnt-want-you-to-read-e60d5243f17c)

~~~
lgierth
You got it the wrong way around - Filecoin will be built on top of IPFS, but
IPFS itself works fine without Filecoin.

------
TheRealPomax
So, we're not commenting on "That hash is guaranteed by cryptography to always
only represent the contents of that file"? Because we should be. The
fundamental rule of hashing is that you are guaranteed to have collisions. The
challenge is to find a good hashing function for the kind of file you're
hashing, but on the internet you get EVERY TYPE OF FILE, so at the scale we're
talking about for IPFS, we're dealing with a large enough number of files of
every conceivable type that collisions are guaranteed.

~~~
nearbuy
You're thinking of non-cryptographic hashes. A cryptographic hash that could
generate collisions in any practical situation would be considered broken.
You'd need more files than there are atoms in the galaxy to have even a 1 in a
billion chance of collision with a secure 512 bit hash.
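A rough birthday-bound check of that claim. For an ideal k-bit hash, the
collision probability among n files is approximately n^2 / 2^(k+1), so the
number of files needed for a given probability p is about sqrt(p * 2^(k+1)):

```python
import math

# For a 1-in-a-billion collision chance with an ideal 512-bit hash, solve
# p ≈ n^2 / 2^(k+1) for n.
k = 512
target_p = 1e-9
n = math.sqrt(target_p * 2 ** (k + 1))
print(f"{n:.1e}")  # 5.2e+72 files

# For comparison, the Milky Way holds very roughly 1e69 atoms, so you would
# indeed need thousands of galaxies' worth of atoms, one file per atom.
print(n / 1e69)    # a few thousand
```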

------
carlhjerpe
What about dynamic content?

------
teddyh
Indeed it has been said that decentralization is the worst form of networking
except all those other forms that have been tried from time to time.

(With apologies to Winston Churchill.)

------
cbhl
I can't tell whether the link "promising leads" was broken on purpose when the
page was written in 2015 (to make a point), or if it itself is an example of
centrally-served pages that slowly bit-rot over time.

------
j_s
An off-grid social network |
[https://news.ycombinator.com/item?id=14050049](https://news.ycombinator.com/item?id=14050049)
(Apr 2017)

------
z3t4
You can add redundancy to HTTP by having more servers: just add more IP
addresses to the domain's A records. And files will be cached on the client
(forever, if you let them).
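For example, round-robin redundancy is just multiple A records for the same
name. A sketch of a zone fragment (the hostname is a placeholder and the
addresses come from the reserved documentation range):

```
; example.com zone fragment: three A records for the same name, so resolvers
; hand out the addresses in rotation and clients spread load across three
; servers (round-robin DNS)
www.example.com.  300  IN  A  192.0.2.10
www.example.com.  300  IN  A  192.0.2.11
www.example.com.  300  IN  A  192.0.2.12
```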

~~~
feanaro
Yes, you can, but do you think that's a comparably easy thing to do?
Especially when you take into account that IPFS also facilitates coordination
between unrelated parties in making the caching happen.

------
Animats
So why aren't these guys as big as Wix, which is in the same space with crappy
technology?

------
acd
I would like distributed web methods like IPFS to become a W3C (World Wide
Web Consortium) standard.

Combine IPFS and cryptocurrencies for payments and we have a new distribution
standard. Then one could have distributed movie sites like Netflix and
YouTube where you pay royalties to legally file-share the content.

------
prithvizi
How would I use IPFS in a serverless model like AWS Lambda?

------
throw036281
How would you deal with child pornography hosted on IPFS? What about IP
infringement?

~~~
quintushoratius
More importantly, how do you deal with a seeder that is (unwittingly) seeding
such items?

~~~
stri8ed
Opt-in community based blacklist.

------
gafferongames
_wipes tear from eye_

------
alexasmyths
Wonderful articulation.

But I suggest that the internet is 'centralized' not due to any technological
function, but due to the nature of information, particularly the economics
and business of information.

The 'protocol' used to pass information from A to B, were it changed to
something better, would not yield what the author is suggesting. Google
'would still have all our stuff', I believe.

When 'individuals' can 'run services' from a variety of physical locales, with
robustness - we will see more decentralization, I think.

I believe this will happen probably by accident, gradually, as 'more tech'
creeps into our homes. One day, most people will have enough 'gear' to run
some kind of service from home/small office - and other places.

Finally - despite the obvious failings of HTTP, incumbency is what it is.
I'll bet we're stuck with it for a very, very long time.

If a powerful enough entity decided to change that - like Google AND Amazon
together, or the US government, or the Chinese government - that could
change. Funnily enough, I think it's China that's best positioned to do it.
They have the wherewithal and the tech momentum; they could do it by 'fiat'
and in 10 years implement some kind of 'new, better network' that we'd all
eventually move to.

Hey - America is still using 'swipe cards' and doesn't even use 'smart cards'
though they've been in use for 40 years around the world :)

~~~
caymanbruce
No. The Chinese government would fancy a 'centralized' internet over which
the government has total control. It will try as hard as it can to make sure
the internet is 'centralized' and to dismantle anything that is distributed
or P2P. Some ISPs in China don't even allow you to use a public IP. So I
think this IPFS thing is most likely a huge threat to the GFW (Great
Firewall) system of China. I hope this thing can eventually tear down the
entire effort of internet censorship in China.

~~~
alexasmyths
I didn't mean to say that they would do something 'decentralized' - of course
they wouldn't. But possibly something 'better' than HTTP.

One of the things about the net, is that it's transactionally open.

There's no inherent identity or security; it was grafted on with SSL, 'kind
of'.

I suggest it might have been better if identity were required to even make
connections, to avoid many kinds of attacks. Hopefully it could be done in an
unbureaucratic manner, wherein 'identities' can also remain de facto
anonymous.

My main point was that 'http' is kind of old, but we're stuck with it unless a
'major power' does something about it.

~~~
caymanbruce
I know what you mean. But this 'better than HTTP' thing would only be a dream
in China. Most websites, including some very big websites in China, have no
concern for security. It will take longer than you can ever imagine for them
to embrace SSL.

------
jlebrech
something like? [https://zeronet.io/](https://zeronet.io/)

------
gnaritas
HTTP is not obsolete. It might be outdated, but words have meaning, and HTTP
is one of the most used protocols there is; calling it obsolete is silly.

~~~
PrimHelios
"adj. Outmoded in design, style, or construction: an obsolete locomotive. "[0]

So yes, it's obsolete.

[0]
[https://www.wordnik.com/words/obsolete](https://www.wordnik.com/words/obsolete)

~~~
tyrw
You are of course choosing to ignore the first definition, which says "no
longer in use"

~~~
PrimHelios
No, I'm not. Words can have multiple definitions. One definition does not
apply, but a different one does. That's how language works. Hell, most
adjectives mean more than one thing, that's just English.

~~~
tyrw
> One definition does not apply, but a different one does.

This is you choosing to ignore the first definition. And it's NBD.

I do understand how language works. You can choose to focus on one definition
and ignore another, as is clear by this thread.

------
free2rhyme214
It's time for Blockstack.

------
Semiapies
_Ugh, servers!_

Except, you're gonna have to have servers unless you've got the entire web
backed up on everyone's computer. Otherwise, you don't and can't know how many
copies of a page or other file are out there. But who's going to pay for
servers to retain random people's and companies' web detritus? This whole
project exists because _that's not feasible_ in the long term...

It's not "crazy", it's pure, thoughtless hype.

~~~
alwillis
Because IPFS uses a distributed hash table like BitTorrent, you don't have to
know where stuff is - that's the problem with HTTP, which is location
addressed. IPFS is content addressed: the hash is the location.

You never host anything unless you want to; you don't host random stuff.

It's 2017: if you're using Google Docs or an instant-messenger program and
lose access to the backbone, you can't communicate with somebody who's in the
same room with you. That's kinda silly. IPFS solves that issue.

IPFS is censorship resistant because it's a distributed protocol that can use
a variety of transports. If I run example.com, people can DDoS it; it's much
harder to do that when hundreds or thousands of nodes have the same content
and you can connect to any of them. It sure worked in Turkey:
[http://observer.com/2017/05/turkey-wikipedia-ipfs/](http://observer.com/2017/05/turkey-wikipedia-ipfs/).

Filecoin is a cryptocurrency that will be mined by providing storage via IPFS.

Folks may want to read the white paper before they make assumptions about
what's possible and what's hype:
[https://filecoin.io/filecoin.pdf](https://filecoin.io/filecoin.pdf)

~~~
Asdfbla
Someone has to have replicated the file, though, or else it can be lost.
Yeah, you still have the hash, but if no one is left who stored the file,
what are you gonna do - brute-force search for a preimage of the hash to get
your content back?

As long as IPFS requires replication to be voluntary on the side of the nodes,
the argument of the parent holds.

~~~
macawfish
That's the same thing we've got now, except that going through the process of
"replicating the file" usually means either paying a hosting company and
learning how to maintain a server, or signing your control and rights over to
a company like Facebook.

There's nothing preventing anyone from going through those same measures with
IPFS or Dat. It's just that you don't have to in order to get started hosting
something.

