
IPFS, Again - unicornporn
https://macwright.org/2019/06/08/ipfs-again.html
======
janandonly
Who is going to defend the free internet against Azure, AWS and Google Cloud?
They are the very opposite of a free and open internet where everyone can "run
a website" on her own machine.

It pains me to see a great idea like the Interplanetary File System _still_
not working. I had similar experiences with IPFS and yes, we do need a project
like this, only without the broken incentive structure attached to it. Why a
"Filecoin"? We already have a tested and working native internet currency:
bitcoin.

I've long believed that a browser should evolve to not only view visited
websites but also _host_ them for some number of minutes/hours/days, a bit
like webTorrent aims to work. This way a website that gets visitors gains
decentralised hosts as well :-)

~~~
dTal
Filecoin isn't a currency. It's a token that proves "I replicated x amount of
data, y reliably, for z amount of time", and (protocol-wise) can be exchanged
for file storage services only. The market decides how much file storage is
_actually worth_, by arbitrage.

The same principle goes for Namecoin, incidentally. The token embodies the
value of the resource. I actually think this kind of thing is a much more
stable base for a currency than Bitcoin's "it's valuable because we say so"
system.

~~~
marknadal
We don't need tokens.

We need P2P stuff that works.

Like
[https://github.com/webtorrent/webtorrent](https://github.com/webtorrent/webtorrent)
and [https://github.com/amark/gun](https://github.com/amark/gun)

They're both run in production, at scale (millions of users), and do NOT
require any tokens.

~~~
jasode
> _, and do NOT require any tokens._

You're looking at it from a pure _technical_ perspective of pushing bytes
around in a decentralized way.

What the folks pushing "tokens" are trying to solve is the _game theory of
financial incentives_ to store & serve those decentralized bytes. In contrast,
things like Bittorrent/Beaker/webtorrent/etc depend on others' "altruism" to
host and serve files.

Altruism doesn't _scale_; that's why nobody wants to seed my 100
gigabytes of personal vacation photos. Sure, they'll be happy to seed &
disseminate the latest cracked copy of Adobe Photoshop or a bluray rip of the
latest Marvel Avengers movie. But my personal files are uninteresting to the
current decentralized web.

(But I'm not claiming Filecoin actually solves the incentive puzzle. I'm
merely pointing out that the "problem" Filecoin tries to solve is at a higher
abstraction level (the economics) than webtorrent (the protocol).)

~~~
marknadal
I agree, but the problem is that Bitcoin and Filecoin (per the author's IPFS
scaling issues) do NOT scale.

You must solve the technical scaling problem first; then sure, heck, add
tokens if you fancy.

WebTorrent/GUN/etc. do scale. Add economics to that.

Preferably, add something that is time-scarce so people do not have to lose
money (they don't pay FB or Google! If they have to pay Filecoin, they'll
still choose free FB), something like BAT or Pirate Booty (
[https://hackernoon.com/hollywood-crypto-behavioral-
economics...](https://hackernoon.com/hollywood-crypto-behavioral-economics-of-
piracy-netflix-and-the-future-73f093e56130) ).

~~~
dTal
You can't always just "layer economics" on top. Sometimes the problem _is_
economical.

Consider what it would take to replace DNS with a distributed system. In a
global namespace, names have value. You can't just operate on first-come
first-served - there needs to be a system that ensures that names go to
whoever wants them most. In other words, the challenge to be solved there isn't
the technical one of having a DHT - that's a mostly solved problem - it's how
the hell to make the names cost money, and who the money goes to. (And if you
don't think the names should cost money, how do you propose allocation should
work?)

------
emmanueloga_
Just today I was looking into IPFS vs DAT, does anybody have any insights
about the similarities/differences other than the ones listed here [1]?

From far away, DAT looks smaller and better documented (perhaps less
ambitious, too?). Apparently the best IPFS overview is the 2015 paper [2], which
looks pretty daunting and does not seem to cover any practical considerations.

1: [https://docs.datproject.org/docs/faq#how-is-dat-different-
th...](https://docs.datproject.org/docs/faq#how-is-dat-different-than-ipfs)

2: [https://github.com/ipfs/papers/blob/master/ipfs-
cap2pfs/ipfs...](https://github.com/ipfs/papers/blob/master/ipfs-
cap2pfs/ipfs-p2p-file-system.pdf)

~~~
zaarn
I consider dat:// to be the better protocol, in part because of what you
mentioned. Other advantages are the lack of duplicating data on disk (IPFS
makes a copy of all data it shares) as well as having a versioned history of
all changes. That way app owners can't publish malicious versions while
preventing people from using the non-malicious ones.

Essentially, dat:// behaves like BitTorrent but the torrent data can change.

The only downside for both protocols I can think of is that the integration
story outside the browser and CLI tools is very poor (there is no FFI/C lib I
can bind my Rust app to).

~~~
asark
> (there is no FFI/C lib I can bind my Rust app to)

Language choices in both have set off my "this is doomed to obscurity" alarm
bells for precisely that reason. You don't write the reference implementation
for a new Internet protocol—especially the core library for it, and especially
a very complex one—in a language that can't easily be included in most other
languages. So, probably C.

Dat in particular seems great but ain't no way I'm relying on a large JS
project for anything I don't absolutely have to, on my own time, especially if
it deals with my files.

~~~
eudoxus
With regard to IPFS (I don't have much experience with Dat), I have a hard
time understanding how choosing Go and JavaScript to implement the protocol
makes it doomed to obscurity. A lot of distributed/decentralized applications
and platforms are being written in Go. This is the first time I've heard this
argument.

Not to mention you can call into Go funcs over shared libs from C.

Rust is definitely a great language, but not using it is a completely sane
design choice. C has been the source of a number of memory-management
security issues that type-safe languages solve.

Not to mention, there are various parts of the IPFS/LibP2P stack that are
being written in Rust by other teams.

------
cryptica
I work for a cryptocurrency company as a software engineer, and I definitely
agree that a lot of projects backed by crypto funds tend to over-promise and
under-deliver.

It's clear that those who control the funds aren't always the best at judging
tech talent. Big tech corporations have had years to settle down and build a
reputation to attract top talent. Crypto companies tend to attract greed over
talent, and it shows in practically all projects.

It's improving for some projects but not others.

~~~
drawnwren
I've seen this argument before and tend to disagree. There is certainly greed,
but I think the larger problem is that crypto doesn't have users.

I don't think it's particularly novel in 2019 to say that a successful app is
built off of the feedback of its users. Any app, regardless of intentions or
engineering prowess, is going to struggle if it doesn't have a sizeable user
base providing feedback (and devs who listen to that feedback).

~~~
kebman
I think you're touching on an important point. There are already other vessels
for the niche that cryptocurrency fills—at least as far as websites and web
services go—that are way easier to both implement and use. With that said, I'm
still rooting for cryptocurrency as a general competitor to state-backed
currency, that in turn gives people all over the world more freedom to conduct
their business without state or corporate intervention. That is most of all
about freedom, and the only bogeyman they've yet to conjure against it is
"it's used for criminal stuff". Like the dollar isn't also used for crime...

~~~
plywoodtrees
The distinction between "gives people more freedom to conduct their business
without state intervention" and "enables crime" seems very elusive.

I can see an argument that enabling crime under oppressive regimes is moral.
Or even enabling particular crimes that you feel shouldn't be crimes.

I have trouble seeing the legitimate use case for cryptocurrency in first
world countries. If the main use case is crime, money laundering and
speculation, it will either be squashed or remain niche.

~~~
drawnwren
I tend to agree with your assessment, but you are dancing around an important
point.

Most people in the world don't live in a first world democracy. Today, many of
these countries are stable and provide a functioning currency for their citizens. But ask
a Venezuelan if they can see a reason why we might want an alternative to a
state backed currency and I feel like the answer is self-evident.

Vitalik saw this use case and talked about how disheartened he was by the ICO
boom because it was use cases like this that he believed in, so it isn't at
all an afterthought.

------
api
IPFS is a victim of the great plague of modern software development:
over-engineering. They've spent far too much time coding and far too little
time thinking of ways to avoid coding by simplifying their algorithms and
protocols and, most importantly, by carefully defining and scoping their
problem.

The massive amount of cryptocurrency funny money they raised makes this
problem worse, not better. Lots of money leads to too many cooks in the
kitchen. Large organizations require immense discipline to avoid scope creep
and runaway complexity growth. They need someone to say "no!" 95% of the time
and relentlessly exterminate features and not allow things to be released
unless they are ready and actually solve a problem.

Most large organizations don't have this, which is why most "enterprise"
software is a mass of twine and chewing gum. In the case of enterprise
over-engineered bloatware you usually have some corporate requirement forcing
its use. No corporate requirement forces IPFS or any of Protocol Labs' other
sprawling mega-projects to be used, so nobody is going to use them.

------
codingslave
This might sound a bit weird, but when I look at the employee base of protocol
labs (the makers of IPFS), the most impressive employees are the business
people. Multiple Harvard Business School graduates, tons of Stanford degrees.
For the technical people, some are impressive, but nowhere near the stature
of the business people. No distinguished ex-FAANG engineers, no principal
engineers from notable companies, a few top STEM PhDs (not CS), but lots of
obscure developers. This isn't to dig on them, but with the kind of money that
company has, and the level of technical complexity they are trying to
solve...why don't they have better engineers?

EDIT: Not looking to argue about whether leetcode filters for good
programmers.

EDIT 2: Self-taught developers at random companies can be amazing, but for a
company "evolving the web" with $300 million in the bank, they have hired
almost no nationally recognized experts, and their broader developer base is
not made up of people with 20 years of experience, but rather developers who
have been coding for 2-4 years.

~~~
onion2k
Something 20+ years of development experience has taught me is that developers
vary _a lot_. People who went to Ivy League schools can be amazing and they
can be awful, and the same is true for every other dev who went to a 'lesser'
college or didn't get a degree at all. Certificates don't tell you much.
Output is what matters. You should judge Protocol Labs on what they build
rather than where their employees went to school.

~~~
StavrosK
I do, and I expected much more, unfortunately. The daemon still has problems
with rampant memory usage, chews through two CPUs during normal operation, the
pinning API is abysmal, DHT resolution is so problematic that you routinely
need to connect two nodes directly together so they can discover each other's
files, etc.

I don't want to say they're bad at what they're doing, as I don't know how
hard it is, but at least a better pinning implementation seems easy to do and
relatively useful. Currently, to pin something, the command will block for
hours/days, or even forever if the content is large and rare. If my torrent
client worked that way, it just wouldn't get used.

~~~
asdkhadsj
I agree. Personally I've had similar thoughts, but instead of viewing it as a
signal of their lack of talent, I viewed it, sadly, as a signal of their
lack of attention to IPFS.

From a largely outsider's perspective, it has felt like FileCoin was the thing
they wanted to do. Either that, or they just don't have enough bandwidth to
take on all of these projects.

Regardless, it doesn't feel to me as if IPFS has had the attention it deserves
_from ProtocolLabs_. I love the idea of FileCoin, but I'd at least like
ProtocolLabs to get one thing finished before they chase the next shiny
project. Hard to imagine the "next internet" being created with such floaty
attention spans.

~~~
StavrosK
This is exactly the impression I've formed as well. I'm sure they have super
smart people (and I've talked to many of them), it just doesn't seem they're
interested enough in improving IPFS.

The "canary in the coal mine" for me is that pinning API ticket, which has
been open for years and amounts to "please put an async UI over pinning". If
something that simple and useful doesn't get fixed quickly, I'm not optimistic
about the future of the project.

It's too bad, because I think it's an extremely useful idea.

~~~
whyrusleeping
Doing this _right_ is not simple. It requires an entire task management system
internally, which (as you know) I've sketched up here:
[https://github.com/ipfs/go-ipfs/issues/3114](https://github.com/ipfs/go-
ipfs/issues/3114)

Sure, we could hack this together quickly, but we're trying hard to avoid
adding technical debt at this point. Adding every feature requested would put
us in a bad spot.

Prioritizing this over the countless other things people continually ask for
is hard, but we hear you. We welcome pull requests, and since you needed this
for the service you were building, helping us out here would be fantastic.

~~~
StavrosK
I hear you, it's just that the current way the daemon does pinning is
unacceptable. For example, I can't tell my customers how much of their file
I've pinned, or, in some cases, if I've pinned it at all.

Maybe I'm in a small minority of people who are interested in this feature,
but how do people pin things without it? Do they just `screen ipfs pin <hash>`
and leave that terminal there forever? Wouldn't it be in the project's best
interest to make pinning work well?

Unfortunately, I don't know Go at all, or I would at least take a stab at
this...

~~~
whyrusleeping
So, at a high level you want more than just backgrounding the pin task;
you also need to be able to tell what the progress is, periodically. This
probably means we need to do something with the API endpoint output (I was
initially thinking 'just throw it away and mark completion').

> in some cases, if I've pinned it at all.

What cases would those be? If the file is pinned it should always show up in
an `ipfs pin ls $CID`.

In the short term, I assume your service is a server-side app that is making
requests to the ipfs node, right? You could set up a background task in that
application that just waits on the `ipfs pin add --progress` call and keeps
track of the progress for that pin; that way, when a customer queries it and
it's in progress, you can return that information. I agree this should be built
into ipfs at this point, but that seems like a reasonable workaround for now.
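The suggested workaround could be sketched roughly like this: a minimal Python sketch that shells out to the real `ipfs pin add --progress` command in a background thread and keeps per-CID progress in memory, so a web handler can report it. The progress-line regex is an assumption on my part; the actual output format of `--progress` may differ.

```python
import re
import subprocess
import threading

# Assumed progress-line format; the real `ipfs pin add --progress`
# output may differ, so adjust the regex accordingly.
PROGRESS_RE = re.compile(r"(\d+)\s+nodes")

class PinTracker:
    """Tracks per-CID pin progress so a request handler can report it."""

    def __init__(self):
        self._progress = {}
        self._lock = threading.Lock()

    def update(self, cid, line):
        # Record the latest node count seen for this CID, if the line has one.
        m = PROGRESS_RE.search(line)
        if m:
            with self._lock:
                self._progress[cid] = int(m.group(1))

    def progress(self, cid):
        # Returns the last reported count, or None if no progress seen yet.
        with self._lock:
            return self._progress.get(cid)

def pin_in_background(tracker, cid):
    """Spawn `ipfs pin add --progress` and feed its output to the tracker."""
    def run():
        proc = subprocess.Popen(
            ["ipfs", "pin", "add", "--progress", cid],
            stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
        for line in proc.stdout:
            tracker.update(cid, line)
    threading.Thread(target=run, daemon=True).start()
```

This only keeps state in memory, so it shares the restart problem described later in the thread; persisting the tracker's state would be the obvious next step.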

~~~
StavrosK
> you also need to be able to tell what the progress is

That would be nice, but not really required.

> I assume your service is a server side app that is making requests to the
> ipfs node, right?

Yes, exactly.

> You could set some background task in that application that just waits on
> the `ipfs pin add --progress` call

I did that; the problem was that you can't have too many pinning calls
waiting, due to resource constraints. I can't just have 100k requests open and
waiting: they time out, I need to restart the server, things happen. At that
point, every single task needs to fire up again and make an API call to the
server.

I did the same thing with batching, but then long-running pins (files not on
the network) would block, and new files that _were_ on the network wouldn't
work. This happens with IPFS Cluster too; I just don't have to write my own
code to worry about it.

Believe me, I've spent lots of time on working around this, and the workaround
can only be so good with the current system.

------
neiman
We've been playing a lot recently with the concept of IPFS websites.

We even created a plugin ([https://github.com/almonit/almonit-
plugin](https://github.com/almonit/almonit-plugin) - not officially released
yet!) for websites that combine IPFS with ENS (a decentralized DNS). Our
repository contains a list of decentralized websites using this method. We
have found about 20 so far.

_Technically_, IPFS works well for us, but we made sure to have a server
seeding our website at all times (where we suffer problems similar to those
the author of the post describes).

Even then, we still had to make sure that the website is available in all main
gateways, since most people don't run their own IPFS daemon. The strange part
is that sometimes content is available in one gateway, but not in others. Or
sometimes it's available in one gateway, but we can't get it on our local IPFS
node.

I understand that IPFS is not a blockchain, so I can't expect all the nodes to
have the same content. But I do expect the main gateways to communicate more
directly with each other.

_Conceptually_, IPFS websites are a bit like sending your website to someone
via unreliable, slow mail; i.e., it's not that attractive. You can make it
somewhat more dynamic using what they call IPNS (it allows you to update your
content), but the result is so slow that even the most devoted monk would
lose his patience eventually.

A workaround is using a decentralized name system, like ENS.

This works very well, but the result is still static websites. No comments or
anything really interesting happening.

Those websites are censorship-resistant and very robust; you don't have to
worry about DDoS attacks. But then again, how many people worry about such
things?

That said, I still like the concept of IPFS. We are exploring a few options to
add dynamic behavior to that now, where the dream is to mimic existing
services in a decentralized way.

Surely, they won't work as well, but the upside would be that they will be
controlled by the users, and that they will be able to survive financially
with no ads.

~~~
momack2
Almonit looks cool. The IPFS in Web Browsers working group has also been
collaborating with Nick Johnson et al from ENS to create an “EthDNS” server to
resolve ENS names to IPFS content in a decentralized way (along the lines of
[https://github.com/mcdee/coredns](https://github.com/mcdee/coredns)). Should
be a working demo in time for IPFS Camp in ~2 weeks!

~~~
neiman
Sounds interesting. Is there more info about it, or will I have to wait for
IPFS Camp to satisfy my curiosity?

~~~
momack2
Lidel has written a lot about this in issues (nice index here:
[https://github.com/ipfs/in-web-
browsers/issues/147](https://github.com/ipfs/in-web-browsers/issues/147)), but
I think the live (recorded) demo will have to wait. =]

------
Karrot_Kream
Unfortunately, a lot of the problems in this blog post can only be solved by
squaring Zooko's Triangle[1]. Most refutations of Zooko's Triangle depend on
some form of blockchain.

[1]:
[https://en.wikipedia.org/wiki/Zooko%27s_triangle](https://en.wikipedia.org/wiki/Zooko%27s_triangle)

~~~
PureParadigm
Just to give an example of a blockchain approach to this: Ethereum Name
Service [1].

1. [https://ens.domains/](https://ens.domains/)

~~~
skyfaller
I can't support Ethereum as it uses proof-of-work, and while its
implementation differs from Bitcoin's, proof-of-work is inherently
energy-intensive / environmentally problematic.

If blockchain technology were to become popular and widely used by ordinary
people, its already worrying environmental impact would skyrocket, assuming
the technology is capable of scaling at all. We need to bring the energy
consumption of blockchain technology down to a manageable level, and that
means abandoning proof-of-work as far as I can tell.

~~~
ElKrist
Can someone give a proper counter-argument to this?

I keep telling my friends hyped by bitcoin that I do not believe in its future
as a currency for real daily exchanges because of this energy consumption
problem.

Is there any serious track to address this issue?

Unlike with most technologies, I do not see the traditional efficiency gains
as the technology matures. The cost seems inherent to proof-of-work and cannot
be improved within this paradigm.

Assuming a solution is found, is it possible to "update" Bitcoin?

~~~
ghoulfarmer
The Bitcoin wiki has a number of counter-arguments:
[https://en.bitcoin.it/wiki/Myths#Bitcoin_mining_is_a_waste_o...](https://en.bitcoin.it/wiki/Myths#Bitcoin_mining_is_a_waste_of_energy_and_harmful_for_ecology)

The future for ethereum is proof-of-stake. This project is called Casper and
will be in Ethereum 2.0. For Bitcoin most transactions will be off-loaded to
layer-2 networks but PoW will remain on the main chain.

------
beders
Anyone remember the P2P craze of the early 2000s? Or was it the late 90s?
What's left over from that? Freenet?

What I'm always wondering about: Someone needs to foot the server/bandwidth
bills. Who? If you run a Pi on your home network and serve requests with your
5MB/s upload: That's fine. More power to you. We need more of that. (and in
that case, you are paying your ISP)

But the transformation envisioned and the bandwidth and compute required
doesn't come for free. Someone's gotta pay. In things like dollars. How can
this work?

~~~
api
The main barrier to P2P isn't cost or bandwidth or algorithms. The main
barrier is NAT. As long as IPv4 with P2P-unfriendly symmetric NAT is the
dominant way of accessing the network, P2P will remain hard and niche.

One ugly hack to get around one problem (IPv4 address scarcity) has
single-handedly transformed the structure of the Internet from a mesh to a
top-down monopoly-driven medium. NAT is like literally Satan.

It wouldn't be quite as evil if it weren't so often symmetric, but for some
odd reason symmetric is what many vendors implement. I can't for the life of
me understand why symmetric NAT exists when the same scalability can be
achieved with port restricted cone NAT that falls back to symmetric-like
behavior if port preservation is not possible due to resource exhaustion. That
would yield working P2P >90% of the time instead of <5% of the time.

~~~
zzzcpan
Things aren't that bad. Hole punching can still work most of the time for
broadband, and you don't really need all that many nodes with routable IP
addresses. Scale makes this problem even less of an issue. Mobile networks are
somewhat problematic, but they cannot be heavy nodes either way and have to
be lightweight clients that piggyback on normal nodes.

~~~
whyrusleeping
We actually find NAT to be a pretty big problem still. Even using NAT-PMP,
UPnP, and hole punching, we still see a roughly 70% (sometimes much higher)
undialable rate, especially for people running ipfs in China (though high
failure rates are also observed in other countries).

We're pushing hard on getting libp2p relays up and running to help get through
this. The idea is that we can use a relay to make the initial connection, then
ask the remote peer to try dialing back (assuming that both peers involved
aren't undialable).

------
microcolonel
Indeed IPFS performs very poorly in almost every way; it is extremely resource
intensive, and generally unable to find objects which are not extremely well
known.

------
spir
IPFS is providing some value already for blockchain dapps to deploy UIs,
albeit currently with a dependency on centralized ipfs gateways.

Eg. [https://augur.net/ipfs-redirect.html](https://augur.net/ipfs-
redirect.html)

------
yason
I don't think this is ultimately an IPFS problem. The way I've always
understood IPFS, the protocol is a decentralized blob store. You would use it
in place of a CDN, to share files, or as a building block for something
higher-level. This is in contrast to browsing, where there are established
semantics for addressing absolute and relative URIs and for clustering related
content under hierarchical addresses in the URI path.

The author insisted on doing this on bare IPFS but I think this is (vaguely)
analogous to building a website based on IP addresses and port numbers, not
URIs. To the point: the semantics just aren't there.

I could imagine an IPFS-based website being built with a local URI resolution
map as part of the bundle of objects that is the web page. The subpages would
refer to symbolic URIs like before but the browser would also download a site
map that links URIs to actual hashes of the latest revision, and resolve
references like "foo/bar.html" or "/root/foo.html" based on the map. Or a
proxy could do this transparently, translating URI requests to hashes and
fetching data directly from IPFS, then serving it back to the browser as if it
was downloaded from "/root/foo.html" instead of
"ipfs://50ad443758222efea0286f3a94db2c25".

The top-level entry to the web page would basically be this URI resolution map
which, as content-addressable, would effectively refer to a single revision of
the web page. This could be implemented as a separate URI scheme, like
ipfs-uri://bb9f6cbcc28829b57dd25102f67b9d37/main/news.html, where
ipfs://bb9f6cbcc28829b57dd25102f67b9d37 would point to the URI map and the URI
handler would resolve the relative URIs such as /main/news.html based on the
offered mappings.
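The resolution map in this idea could be sketched as a plain dictionary. This is a sketch of the commenter's hypothetical design, not an existing IPFS feature; the hashes are the illustrative ones from the comment, not real CIDs.

```python
# Hypothetical site map shipped as the site's top-level object:
# symbolic URIs mapped to content addresses. Both the mapping and the
# "ipfs://" values below are illustrative, not real CIDs.
SITE_MAP = {
    "/root/foo.html":  "ipfs://50ad443758222efea0286f3a94db2c25",
    "/main/news.html": "ipfs://bb9f6cbcc28829b57dd25102f67b9d37",
}

def resolve(uri_path, site_map):
    """Translate a site-relative URI into the content address to fetch."""
    try:
        return site_map[uri_path]
    except KeyError:
        raise LookupError("no mapping for %r" % uri_path)
```

A browser extension or proxy would fetch the map first, then call something like `resolve()` for every link on the page, so one immutable hash pins down one revision of the whole site.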

But all this does require extra lifting from the browser or a proxy. I don't
think IPFS as designed can feasibly replace something like HTTP, which was
explicitly meant to work with URIs.

~~~
autonome
Yes, IPFS addresses some of the use-cases of HTTP while solving other problems
where HTTP falls short. It isn't a complete drop-in replacement, but does set
up a foundation for an internet that will _last longer_ and (at some point)
perform better across varying network conditions.

> but I think this is (vaguely) analogous to building a website based on IP
> addresses and port numbers, not URIs.

IPNS is a naming solution designed to address this:

[https://docs.ipfs.io/guides/concepts/ipns/](https://docs.ipfs.io/guides/concepts/ipns/)

It's not super fast right now, but there's some work happening now to make it
_much faster_.

ENS, the Ethereum name system, is also an emerging way of doing this.

> basically be this URI resolution map

IPLD is a data model that works with IPFS to address the use-case you're
describing, where you have a permanent reference to a mutable set of data:
[https://ipld.io/](https://ipld.io/)

~~~
yason
All right, so there is already work ongoing to solve these problems, and that
work is built on top of IPFS rather than extending the base protocol. The
author, who insisted on using plain IPFS, was thus suffering the expected
difficulties, as IPFS really isn't the direct answer to that particular use
case.

------
ohnoesjmr
Hey, but it's great if you want to piggyback a botnet on it, with no way to
kill the C&C: [https://www.anomali.com/blog/the-interplanetary-storm-new-
ma...](https://www.anomali.com/blog/the-interplanetary-storm-new-malware-in-
wild-using-interplanetary-file-systems-ipfs-p2p-network)

~~~
whyrusleeping
Yeah, we're watching this. They aren't so much piggybacking as they are just
using the open-source code for libp2p and our pubsub (which is fully
peer-to-peer, not reliant on any central server that could shut it down).
However, the way they are doing it makes it really easy to find infected
nodes. Running a scraper, we see around 3k peers involved.

------
kebman
Just out of curiosity, how would you delete something off IPFS? Is the only
solution to have the address hash point to a program, or app, if at all
possible? Or a space that does not update the hash, even if the "thing" is
updated? I'm new to this interplanetary world... :D

~~~
hobofan
From IPFS: You don't. As long as a connected node is hosting the file, it
will be available to other peers. For a file that only a single host is
"pinning" (= purposefully hosting), it will fade out organically once that
host stops and it leaves the caches of all the nodes that ever accessed it.

From IPNS: You point the IPNS entry to something nonsensical and then by
default it will be gone once the TTL of the old IPNS entry is reached (kind of
like DNS).
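In practice, then, "deleting" amounts to removing your own node's copy with `ipfs pin rm` followed by `ipfs repo gc` (both real CLI commands); other peers keep theirs. The sketch below only constructs the command lines, since actually running them requires a live ipfs daemon; the CID is made up for illustration.

```python
# "Deleting" from IPFS only removes your own node's copy; other peers keep
# theirs until the content leaves their caches. We only build the command
# lines here, since executing them needs a running ipfs daemon.

def removal_commands(cid):
    """Return the commands that drop a local pin and reclaim the space."""
    return [
        ["ipfs", "pin", "rm", cid],  # unpin: the blocks become collectable
        ["ipfs", "repo", "gc"],      # garbage-collect unpinned blocks locally
    ]

# Hypothetical CID, shown only to print the command lines.
for cmd in removal_commands("QmExampleCidForIllustration"):
    print(" ".join(cmd))
```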

------
christroutner
I built some open source tools to accomplish everything that this article
tries to achieve. It's _totally_ possible and borderline-easy to use IPFS to
create uncensorable websites:

Check it out and form your own opinion:

[http://troutsblog.com](http://troutsblog.com)

------
joepie91_
Here's your periodic reminder that IPFS is a _distribution mechanism_, and
_not_ a storage mechanism. It doesn't persist anything. It's like a more
granular BitTorrent, and not really a filesystem at all.

This can be very useful in certain situations and projects! But it means that
it is _not_ a replacement for a centrally-hosted website... and the IPFS site
does a very poor job of conveying this limitation.

(I can't take Filecoin seriously as a 'solution' here either - as far as I can
tell, it suffers from the exact same problem as every 'blockchain-backed
storage system' I've seen before... it's unable to reliably verify that some
peer is actually storing data, without a local copy to verify against.)

~~~
whyrusleeping
> it's unable to reliably verify that some peer is actually storing data,
> without a local copy to verify against.

Filecoin can actually do this. I'm planning on doing a blog post about how
this works soon (in all that copious free time), but a good summary is here:
[https://github.com/filecoin-
project/specs/issues/155](https://github.com/filecoin-
project/specs/issues/155)

------
sprash
A good alternative to IPFS that works today is magnet links and torrents.

There are "youtube competitors" that offload the high bandwith requirements
via webtorrents running in javascript.

You can not host your full webpage with it but you can reduce bandwith costs
drastically.

A good alternative to DNS is namecoin. It already works flawlessly and I
honestly wonder why it has not been adopted more widely.

~~~
davidgerard
I think if you're proposing something as a "good alternative", then being able
to understand why it got near-zero adoption, and failed in the real world, is
probably important.

I would say that Namecoin appears to solve a problem that nobody actually has
in practice.

I know advocates are pushing Ethereum Name Service, which does the same thing
- but again, what are the lessons learned from nobody bothering with Namecoin?

(Namecoin is interesting, as it was literally the _first_ altcoin. It hadn't
occurred to anyone to fork the Bitcoin code and make their own coin until
then.)

------
woodandsteel
For years I've been reading the HN comment sections for IPFS links. In the
past IPFS developers would show up to defend it, but for this link, at least
so far, I haven't seen any.

Is this an admission that IPFS is unworkable, and they don't have any
prospects of making it usable in the next few years?

~~~
_prometheus
Hello! o/ We've responded to a number of points here.

You can check out our roadmap

> IPFS is unworkable... making it usable in the next few years

Absolutely not.

The OP brings up a lot of great, useful feedback for us, and we'll respond to
it.

But the OP is also simply wrong in saying it's "not usable". There are
millions of end users benefiting from IPFS, hundreds of thousands of libp2p
nodes, petabytes per month of traffic through the infrastructure we run, and
millions of daily requests to our gateway. Look to fully decentralized
applications and systems like OpenBazaar, Textile, DTube, and others.

Beyond that, we're well aware of the many shortcomings, and working on them.
We're unfortunately spread thin across a lot of projects (IPFS, libp2p,
filecoin, ipfs-cluster, etc), but each is seeing significant growth and
improvement.

You can see the long-term IPFS roadmap here
[https://github.com/ipfs/roadmap](https://github.com/ipfs/roadmap)

------
jancsika
For all the time you spent on this you could have just posted an onion site in
about five minutes and spent the rest building a little web-of-trust indexing
site to fill the demand for the less than 1,000 people who want to post
personal sites on a decentralized web.

------
chrismatheson
Unicornporn, thanks for the writeup; it saved me a lot of time (I thought IPFS
was ready to go for this sort of use case).

P.S. I think there is a typo at ‘ipfs pin add’: it's used twice instead of
‘ipfs pin add’ then ‘ipfs add’, I think?
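For what it's worth, the intended two-step flow (assuming the usual go-ipfs CLI; the hash below is a placeholder) is presumably:

```shell
# On the machine with the built site: add the directory recursively.
# The last line of output is the root hash for the whole site.
ipfs add -r ./public

# On a second machine acting as the pinning node: fetch and pin that
# root hash (QmExampleHash is a placeholder for the real output above).
ipfs pin add QmExampleHash
```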

~~~
unicornporn
I didn't write the blog post. Tom MacWright did. :)

------
jhabdas
IPFS looks good on paper. In practice my experience wasn't very good. ZeroNet,
on the other hand, feels like the future for DWeb and that's the direction I'm
personally headed. What about you?

~~~
truth_seeker
What do you use ZeroNet for? Does it give you reasonable latency performance?

~~~
jhabdas
I use it for zero-cost, uncensorable hosting. It's fairly trivial to get set
up if you're running Linux. Here are instructions for Manjaro if you fancy
Arch-based distros: [https://habd.as/post/surfing-uncensorable-web/](https://habd.as/post/surfing-uncensorable-web/)

------
chriswarbo
I agree that IPNS is pretty unusable (do names still expire after 1 day?).

Last I saw, the IPFS devs seemed to be pretty excited by a pub/sub mechanism
they were building into the system; potentially to replace the workings of
IPNS.

Is that a stable or useful alternative for indexing changing content like a
blog? How decentralised is the pub/sub (i.e. do new subscribers need to
contact the publisher, or are messages persisted for a time in the network)?

~~~
aschmahmann
The way IPNS uses the DHT faces some deep challenges if you really need the
"latest" IPNS update (if you don't, there are various ways to make IPNS
perform significantly better; see [https://discuss.ipfs.io/t/ipns-resolution-takes-a-very-long-...](https://discuss.ipfs.io/t/ipns-resolution-takes-a-very-long-time-first-time-around/5620) for guidance).

There is also some work to make IPNS work over PubSub independently from the
DHT. That work is being tracked at [https://github.com/ipfs/go-ipfs/issues/6447](https://github.com/ipfs/go-ipfs/issues/6447),
and should significantly improve IPNS performance as well as add features like
allowing users other than the author to keep IPNS records alive (so no one-day
expiration issues).

As for the decentralized nature of pubsub, it is essentially an opt-in system.
Random people on the network are not holding or forwarding your messages as
they would in the DHT. However, anyone who has subscribed to a topic will
propagate messages for you to other subscribers. This means subscribers do not
need to contact the original IPNS record publisher directly to get a record;
they can get it from anyone who has it and is advertising that fact.
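(For anyone wanting to try the IPNS-over-PubSub path today: it's opt-in. Assuming a recent go-ipfs, the experimental daemon flag is roughly:)

```shell
# Experimental in go-ipfs at the time of writing; the flag name may change
ipfs daemon --enable-namesys-pubsub
```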

------
AgentME
>Ready to publish it to the web? Not so fast. Clicking a link brings us back
to my issue in 2017 [link]: the way that the IPFS gateway works will break
your links.

(following the link to 2017 post)

>And even if I used the base tag to re-route that link so that it points
[http://localhost:8080/ipfs/QmR96iiRBEroYAe955my8cdHkK4kAu6SA...](http://localhost:8080/ipfs/QmR96iiRBEroYAe955my8cdHkK4kAu6SApGC2eS5kSzHwt/2017/08/01/recently.html)
... It wouldn’t work: the correct URL for that Recently post is, instead,
[http://localhost:8080/ipfs/QmTbJ6RSLZDmVYy8dgdoeQLCtKya7UrNT...](http://localhost:8080/ipfs/QmTbJ6RSLZDmVYy8dgdoeQLCtKya7UrNTCqaw93yZCpH5T/).
So IPFS content links are fully content-addressed. I suppose to make my site
fully IPFS, I’d have to build each individual page and then construct a home
page that linked to the generated hashes for those pages. That leaves an open
question: how could two pages link to each other? Adding a link from one page
to the other would change its hash, so wouldn’t it be impossible for pages to
reference each other? This might be a lack-of-coffee problem on my part.

They've misunderstood that. Both of these links would be valid. You would
have all the pages be under
[http://localhost:8080/ipfs/QmR96iiRBEroYAe955my8cdHkK4kAu6SA...](http://localhost:8080/ipfs/QmR96iiRBEroYAe955my8cdHkK4kAu6SApGC2eS5kSzHwt/)
and refer to each other by relative links.

The author's point of relative links or a base tag being necessary for the
site to work on the ipfs gateway URLs is valid though.
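To make the chicken-and-egg point concrete, here's a toy sketch in Python (plain SHA-256 standing in for a real IPFS CID, so everything here is illustrative, not IPFS's actual hashing): each page's own hash changes whenever its bytes change, but pages living under one directory root only need relative paths, never each other's hashes.

```python
import hashlib

def toy_cid(data: bytes) -> str:
    # Toy stand-in for an IPFS CID: content-addressed, so any change
    # to the bytes changes the identifier.
    return hashlib.sha256(data).hexdigest()[:16]

page_a = b'<a href="./b.html">to page B</a>'
page_b = b'<a href="./a.html">to page A</a>'

# Embedding page_b's hash inside page_a would change page_a's bytes,
# which changes page_a's hash, and so on: the chicken-and-egg problem.
# Relative links avoid it entirely, because neither page mentions a hash.
root = toy_cid(page_a + page_b)  # toy "directory" identifier
assert toy_cid(page_a) != toy_cid(page_b)
```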

(back to new article)

>So, links don’t work. I posted an issue detailing this issue, and while I got
an encouraging response that there’s a real solution planned, there’s no real
solution. People use specific plugins just for IPFS, like this one for
GatsbyJS, to get it to work. I ended up writing make-relative, a script that
rewrites my built site to use relative links. This is where the story about
IPFS being useful here and now for web developers breaks down a little. I’ve
done enough HTML-mangling and path-resolution in my decade in industry that
writing this script was straightforward. But the knowledge required to do it
is not all that common, and I think this is where a majority of web developers
would call it quits, because IPFS’s ‘website hosting’ story would look broken.

You can either use relative links, a base tag, or keep absolute links and only
support your domain with DNSLink (and eventually, when the web gateways
support the hash in the subdomain field, or extensions support the ipfs://
protocol, your site will also work through that). I'm hoping for the
hash-as-subdomain support, which will make things simple.
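The "rewrite to relative links" approach the article's make-relative script takes can be sketched like this (a toy version I'm improvising here, not the author's actual script): compute each page's depth below the site root and replace root-absolute href/src attributes with the right number of `../` prefixes.

```python
import os
import re

def make_relative(html: str, page_path: str) -> str:
    """Rewrite root-absolute links (href="/..." or src="/...") so the page
    works under any URL prefix, e.g. /ipfs/<hash>/. Toy sketch only."""
    parent = os.path.dirname(page_path)
    depth = len(parent.split("/")) if parent else 0
    prefix = "../" * depth or "./"
    # Skip protocol-relative URLs like href="//example.com/"
    return re.sub(r'(href|src)="/(?!/)',
                  lambda m: m.group(1) + '="' + prefix, html)

# A page two levels deep gets "../../" prefixes:
print(make_relative('<a href="/about.html">About</a>', "2017/08/post.html"))
# <a href="../../about.html">About</a>
```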

>IPNS

Yeah, it's awkward. This is for multiple reasons (Zooko's triangle and the
hosting of the names) that can only be addressed with a centralized service
(see DNSLink) or a blockchain. Thankfully there's the DNSLink option too. I
wish the IPFS docs wouldn't push IPNS as much, because it's not ready and it's
not what most people want.
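(For reference, DNSLink is just a DNS TXT record on a `_dnslink` subdomain pointing at an IPFS path; the domain and hash below are placeholders:)

```
; hypothetical zone-file entry; example.com and the hash are placeholders
_dnslink.example.com.  300  IN  TXT  "dnslink=/ipfs/QmYourSiteRootHash"
```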

>That was an incorrect assumption. IPFS-based websites do update their DNS
records every time that they update their website, so that they can avoid
using IPNS, because IPNS is just too slow. This was a tough discovery, because
it works against everything I know about DNS – a system that isn’t
particularly designed to be fast or scriptable.

IPFS isn't for frequently changing websites. You're not going to host a forum
or other dynamic site on IPFS. (Or if you do, it's going to work by being
static content hosted on IPFS that contains some javascript that talks to some
regular external HTTP(S) servers or maybe even a WebRTC swarm for all of the
dynamic content.)

>Unfortunately, once I started getting this set up as a ‘pinning device’
[server], the fun stopped. I tried running ipfs pin add with the hash
generated earlier from ipfs add -r, but it just ‘hung’ - outputting nothing at
all. After a while I realized that, like ipfs pin add, IPFS doesn’t
communicate very well when it’s having a problem. So I figured out how to turn
logging information all the way up, and then… I was never able to get past a
‘got error on dial’ failure, despite trying all potential configurations of
the IPFS daemon, enabling logging, upgrading to the newest version, and so on.
There are about 63 similar issues in the tracker, 21 of which are marked as
bugs.

I think the "got error on dial" errors are just it complaining that it can't
connect to some random peers in the p2p swarm (people who may have recently
turned off their computers, and so on). You can't expect to be able to connect
to everyone in the swarm. IPFS is just trying to connect to lots of peers
until it finds ones with the blocks you're trying to pin, and it's reporting
an error each time it can't reach one of them. The real problem is that it
should be giving clearer logs: that it's trying to find some content, that it
has successfully connected to a bunch of peers, but that none of the peers
it's talked to so far have it. And of course it should eventually connect back
to your computer and successfully fetch those blocks. I'm not sure why that
isn't happening, and it's disappointing that it's not working. I really hope
IPFS improves on things like this, because the concept of IPFS seems great.

------
stefek99
Recently I was playing with IPFS: [https://genesis.re/kleros-metaevidence-metahash/](https://genesis.re/kleros-metaevidence-metahash/)

Using another part of the crypto ecosystem (Infura).

It's still not straightforward.

BTW... A few weeks ago I saw on HN an automated tool to publish a static
website on IPFS. It was pretty sleek...

~~~
DenseComet
Can you link the tool?

~~~
autonome
Maybe this one:

[https://github.com/agentofuser/ipfs-deploy](https://github.com/agentofuser/ipfs-deploy)

Makes it super easy to push static sites to IPFS.

------
velcro
[https://www.shiftnrg.org/](https://www.shiftnrg.org/) might be interesting
here too - IPFS-based project targeting web hosting specifically (includes a
custom DNS + working on serving dynamic content, CMS, etc).

BTW as a proof of concept their website is already hosted on their own IPFS
cluster.

~~~
acdha
Their website is hosted on CloudFlare:

[https://redbot.org/?descend=True&uri=https://www.shiftnrg.or...](https://redbot.org/?descend=True&uri=https://www.shiftnrg.org/&req_hdr=User-Agent%3ARED/1.2%20\(https://redbot.org/\)&req_hdr=Referer%3Ahttps://www.shiftnrg.org/&check_name=default)

They’re using DNSLink so basically what we’re seeing is that using DNS and a
major CDN makes your site reliable, which does not seem like a particularly
novel advance.

------
WanderPanda
Non-native speaker here, is the usage of "than" in "Your browser than uses
DNS, a decentralized naming system, ..." actually correct? I happen to see
this a lot and I am not sure if this is a spelling mistake when people want to
write "then" or if it is actually intended.

~~~
yoz-y
It is a very common spelling mistake.

~~~
yason
I wonder why? It seems to be common with native speakers.

I'm not a native English speaker but, for me, it would seem nearly impossible
to mix up these two words even in a moment of carelessness. They have greatly
differing meanings and thus practically live in different slots in my brain.
Even a blind typo won't explain it, because 'a' and 'e' are not adjacent on a
QWERTY keyboard.

~~~
rkangel
As a native English speaker, words in the language appear to be stored against
sound rather than anything else. When trying to write clear English to be
read, I am reading it to myself as I write to ensure that it reads clearly,
and so it's more an audio process, allowing for confusion of words that sound
similar.

I would guess that those who have learned the language have a more logical,
grammar based structure and write more deliberately.

~~~
fifnir
> I would guess that those who have learned the language have a more logical,
> grammar based structure and write more deliberately.

We have our own native languages with their own idiosyncrasies and we don't
make mistakes like "could of" or "you're/your" in those languages. There's
just no excuse for those simple mistakes...

Greek has five very common and one or two rare spellings for the sound 'i' (as
in 'kit'): ι, υ, η, οι, ει / υι, ηι

Imagine if people who can't tell you're/your apart and write 'could of' had to
face this reality... They are lazy, ignorant people, and the other native
speakers need to stop enabling them and making excuses.

<edit>

I'm not actually sure if 'ηι' exists; Google is showering me with irrelevant
results. In any case, there are also:

two ways to spell 'e' (as in bed): ε, αι

two ways to spell o: ο,ω

two ways to spell the av/ev sound: αβ/αυ , εβ/ευ

two ways to spell the af/ef sound: αφ/αυ , εφ/ευ

and more. Maybe the difference is that all this crap makes you actually pay
some attention to the language if you don't want to embarrass yourself.

~~~
phit_
As a native German speaker who is basically at the same level in English now,
I also started making these weird mistakes native English speakers make. I
never had this issue in German; I'm not sure what it is about English. I think
it's just the mismatch between spelling and pronunciation in English: your
brain just can't keep up matching your vocal thoughts to the right spelling
when typing quickly.

------
_tlinton
IPFS is a decentralized file store. It isn't a decentralized website host,
although it can be used as one. That is just a nice consequence, not its
reason for being. IPFS is a building block. If you want a better experience,
build a tool that takes advantage of IPFS.

~~~
chriswarbo
I think this comparison is pretty good: the author's problem with absolute
URIs is essentially the same as using `file://` URIs and accessing the site
from different directories (although `file://` doesn't have the chicken-and-
egg problem of pages trying to link to their own hash; IMHO that would be
worse than relative links anyway, even if it were possible).

Still, I think that the author's criticism is valid, since the IPFS project
does seem to encourage the view that it's suitable for Web hosting, rather
than just "a nice consequence". The author's struggle with IPNS, and
recommendation that it be de-emphasised in the docs, is in line with my own
experience.

~~~
_tlinton
The IPFS documentation only provides a "basic example" of hosting a website
with IPFS. In general, the IPFS website focuses on its uses as a file system.

------
billconan
I tried to port libp2p to C++. Each time, I got discouraged by the many
sub-projects linked together. I really don't like the way they organize the
code.

~~~
protomike
There is a C++ implementation of libp2p being developed at
[https://github.com/soramitsu/kagome](https://github.com/soramitsu/kagome)
with help from the go-libp2p original implementers (and funding from
web3.foundation). They've implemented a lot of the major functionality
already. I'm sure they would appreciate contributors if you want to get
involved. (Full disclosure: I work on the libp2p project)

------
programmarchy
Anyone tried urbit?

------
unicornporn
Hi, I made this submission. If you want a take on the competition (Dat:// and
Beaker Browser) I made another submission here:

[https://news.ycombinator.com/item?id=20162199](https://news.ycombinator.com/item?id=20162199)

Direct link:

[https://www.kickscondor.com/on-dat/](https://www.kickscondor.com/on-dat/)

------
ghoulfarmer
A smile comes to my face whenever I see IPFS on HN...

The author could have gone a step further and used ENS. IPFS+ENS is becoming
a common duo. Web browsers can't do ipfs://, but they can do
[https://insertipfsgateway/ipfs/Qm..](https://insertipfsgateway/ipfs/Qm..).
With ENS, .eth domains can resolve to ipfs://, and MetaMask picks those up and
turns them into gateway links.

There's a list of .eth domains & other ENS+IPFS info here:

[https://gateway.ipfs.io/ipfs/QmUJKsA1FQRo4rGKJ9WAXNnv6o6HXy4...](https://gateway.ipfs.io/ipfs/QmUJKsA1FQRo4rGKJ9WAXNnv6o6HXy4NoJiWH97gPZUddV/ens+ipfs/list-of_ENSIPFS-websites.html)

