The greatest advantage of an architecture like IPFS instead lies in its friendliness to a more democratic, semantic web, in which users and programs may make use of URIs at a fine-grained, peer-to-peer level. If we can decouple resources from a central server, and then build programs around these static resources, the web will not only become more permanent, but also more robust against walled gardens robbing us of programmatic control of those resources.
This is very much what we're going for.
Glad you mention the semantic web -- it tends to be a tricky term in most hacker circles, because everyone loves to hate on failed attempts :( :( -- but in fact SW (Linked Data!) is a super interesting model that has made Google and Facebook tons and tons of money (Knowledge Graph and Open Graph!). One interesting fallout of IPFS is that we can make Linked Data waaaaay more robust and faster, because you no longer need the _insane_ number of little HTTP hits to query, retrieve content, retrieve other content + definitions, etc., on each request. With IPFS we can turn everything into a (merkle)dag and distribute it as a single bundle of objects. Think Browserify/Webpack, but for Linked Data. All with valid content addressing and signatures :)
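To make the "bundle of objects" idea a bit more concrete, here's a toy sketch in Python -- the object layout and hashing are purely illustrative assumptions, not the actual IPFS object format:

```python
import hashlib
import json

def put(store, obj):
    """Content-address a JSON object: its key is the hash of its canonical bytes."""
    data = json.dumps(obj, sort_keys=True).encode()
    key = hashlib.sha256(data).hexdigest()
    store[key] = data
    return key

# A tiny Linked Data graph: each object references others by hash, not by URL.
store = {}
author = put(store, {"type": "Person", "name": "Ada Lovelace"})
post = put(store, {"type": "Article", "title": "Notes", "author": {"/": author}})
root = put(store, {"type": "Feed", "items": [{"/": post}]})

# The whole dag ships as one bundle, and every link is verifiable offline:
for key, data in store.items():
    assert hashlib.sha256(data).hexdigest() == key
```

Because every link is the hash of the linked object's bytes, the whole graph can be fetched from anyone and checked locally, with no extra round trips to authoritative servers.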
And +1 to plan9 -- plan9 (in particular Fossil/Venti and 9P) has always been a great inspiration for me, and its philosophy surfaces a lot in IPFS. From the Fossil/Venti (git-like) approach to content, to making everything mountable, to just using the path namespace for everything, etc. :)
(ps: woah, a message positive on BOTH Linked Data AND plan9! unexpected find!)
* Dynamic content
* Private/sensitive information
Private information is yet another rake to step on. I would really love my bank account information to remain accessible to me and the bank only. I don't even trust myself not to lose decryption keys, and I don't trust the long-term security of encryption algorithms (flaws could be found, brute forcing might become viable), so the information had better stay in as few places as possible. The other end of this stick is authentication/authorization. Encryption does not work, because access rights change. What I'm authorized to view/do today might not be the same tomorrow. The only solution is not to serve the content in the first place. As for authentication, I don't see a solution at all.
Although a content-addressable web is an awesome solution for [more or less] static content - Wikipedia, blogs, etc.
1) This only scales if enough people are willing to give up a portion of their storage devices. I would think twice about that for my SSDs.
2) Private customer internet connections often have lower upload bandwidth, and that is part of the overall traffic calculation of ISPs. So the networks have to change a lot to face changing traffic requirements in this regard. That is something that will not scale with the demand of a successful protocol (see BitTorrent). Also, this IPFS traffic may be harmful to other traffic like gaming sessions.
3) Illegal content will be the killer. Don't expect laws to change for IPFS. No one will permanently use VPNs / anonymizers for using IPFS, because some law firms will specialise in gathering endpoint information like they already do with BitTorrent.
Actually BitTorrent effectively enables arbitrage between fast download bursts and giving back long slow uploads. You trade one for the other. Your fast download burst is a momentary aggregate of many slow uploads from other users.
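With some made-up but plausible numbers, that aggregation looks like this:

```python
# Hypothetical numbers: 40 peers each trickling data to you at 0.25 MB/s
peers = 40
upload_per_peer = 0.25                  # MB/s, well under typical home upload caps
burst_download = peers * upload_per_peer
print(burst_download)                   # 10.0 MB/s burst, repaid later as your own slow upload
```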
Looks like IPFS is currently handling DMCA by having a centralised (voluntary) DMCA blacklist. This should probably be solved in a more nuanced way than that if IPFS really takes off, but it demonstrates it's not a killer problem.
2) Yeah, I'll admit the asymmetric nature of home internet is a bit of a hassle. I suspect you'll still require machines on the backbone seeding content, with domestic machines focussed more on providing content to local peers and/or seeding rarely accessed content. In the long-term, IPFS can help ISPs in reducing their costs (content can be grabbed from within their own network, instead of having to peer with another network for almost everything), which might help to convince them to change.
3) IPFS doesn't force you to host any content you don't want to. If you choose to host illegal content, that's your problem.
Adding a market to this problem is interesting and may work, indeed. But you must not forget that most internet connectivity plans for private customers prohibit commercial use where the network is the essential part of that business. If people start to make money on IPFS, the ISPs will not just watch. Another aspect of this is the taxes that people must pay if they make a profit, and all the requirements to operate a business. With this in mind, I am skeptical that this will scale to a size where you can say: and now it is really a distributed thing.
> 2) Yeah, I'll admit the asymmetric nature of home internet is a bit of a hassle. I suspect you'll still require machines on the backbone seeding content, with domestic machines focussed more on providing content to local peers and/or seeding rarely accessed content. In the long-term, IPFS can help ISPs in reducing their costs (content can be grabbed from within their own network, instead of having to peer with another network for almost everything), which might help to convince them to change.
Many ISPs already do that. CDNs use ISP data centers to bring content to customers. In fact, IPFS is not much different from their solutions. But IPFS can be a standard for content distribution. That would be a good thing.
> 3) IPFS doesn't force you to host any content you don't want to. If you choose to host illegal content, that's your problem.
It is not always obvious what content is legal. Copyright laws will test IPFS hard.
2) Sure, you can view it as a CDN anyone can participate in (including machines within your LAN).
3) There are plans for handling DMCA takedowns https://github.com/ipfs/gateway-dmca-denylist
My research team has built very similar operational systems over the past 8 years.
"Decentralized credit mining in P2P systems", IFIP Networking 2015
"Towards a Peer-to-Peer Bandwidth Marketplace", 2014
"Bandwidth as a computer currency", news.harvard.edu/gazette/story/2007/08/creating-a-computer-currency/, (PRE-Bitcoin), 2007
Their work is still early. The essential thing about this type of approach is spam: you can often trick people into caching stuff. So it's not your typical DDoS anymore, but another resource eater. Plus there is the counterintuitive oversupply problem: <more citations to our own work> "Investment Strategies for Credit-Based P2P Communities"
Or the other way around...
In terms of revoking access rights, that's not really a problem unique to IPFS (cf. DRM)
What do you think of embedding IPFS links within HTML? Secure content can be loaded directly from the central server, while content that only needs to be checked for integrity or hidden from the local network can be loaded from the distnet.
It'd be like mushing torrents and the web together.
The article misses the main advantage when it tries to say that IPFS would help corporations like Google/YouTube/Netflix.
Big players will always be able to expertly run powerful distributed CDNs, but newer, smaller websites will always start with one server under the current model.
IPFS would help to level the playing field for distributed data services.
What the internet needs is a new financial model, since the one we have now isn't working in the long term.
This is all TC does, you can call it pimping or fluffing. Before this article did you think TC was an investigative journalism powerhouse?
- Content must be able to move as fast as the underlying network permits. This rules out designs like Freenet's and other oblivious storage platforms as the base case. Like you said, they're just way too slow for most of IPFS's use cases. But these can be implemented trivially with the use of privacy-focused transports (like Tor and I2P -- there's actually work towards this and people are getting close), content encryption, and so on.
- IPFS nodes should be able to only store and/or distribute content they _explicitly_ want to store and/or distribute. This means that computers that run IPFS nodes do not have to host "other people's stuff", which is a very important thing when you consider that lots of content on the internet is -- in some form or other -- illegal under certain jurisdictions. Legitimate companies have way too much on their plate to additionally worry about potentially storing a ton of illegal stuff. For serious companies like Google to use IPFS, we need to have a mode of operation that allows implementations to ONLY move the content THEY want to move.
- Websites/Webapps must be able to operate entirely disconnected -- this means it should be possible to build applications which create data locally, signed by the user, which can be distributed encrypted end-to-end to other users, without needing to ever touch specific backbone servers. This means users can move the data end-to-end via the closest route possible and in disconnected networks (think users on mobile phones on a plane using a messaging webapp or web game and moving the bits over bluetooth or an ad-hoc wifi network). And this also means users on the broader internet do not _need to_ rely on backbone servers -- the model we're going for is that dedicated computers in the backbone CAN of course and SHOULD help, but you shouldn't HAVE TO RELY on them.
Can write more about this :)
But see also:
- BitTorrent's Project Maelstrom
- MojoNation, the technology that inspired BitTorrent
...just to name a few.
The problem with these systems is they don't provide a whole lot of compelling benefits over the regular web, and generally provide a worse user experience.
Solving the UX problem is the real hurdle to adoption.
I can see how that could work for static resources, but I don't get how you can decentralise the dynamic portion of a website without single points of failure on the backend.
Since we now have these technologies and other technologies like security protocols, I think we are at a point in history where decentralizing the web is plausible and maybe even a straightforward thing.
Most internet censorship eventually works on L3, having a distributed alternative to the world wide web won't make the internet more resilient.
(1) if a region's network uplink is disconnected (like it was in Egypt), websites/webapps don't cease to work. People should be able to communicate + compute in these local networks without backbone access. Yes, this is possible.
(2) if i manage to make contact with you over a totally unorthodox data channel, like ham radio, or satellite, i should be able to _easily_ pipe my traffic and updates to you, and do so in the git-style of replication: offline first + bursts of condensed traffic.
this is not rocket science. (this is rocket science: https://www.youtube.com/watch?v=Pl3x71-kJGM :D) our problems are much easier to solve, and we must solve them.
I think it's important to make it useful to the non-techie person, so they know that whatever they put out there is guaranteed to last, rather than having to pay for hosting and such. This is just enthralling.
Creating an open content creation platform that will let users create and upload media and text without any fuss will make IPFS a very viable choice.
I think it's interesting to consider the architecture of websites and how a non-decentralized content service has forced a centralized, systemized design of websites. Websites nowadays put more focus on design and fluff than on linkability of content. Maybe the very idea of a website exists mainly because it's served from a single server. You couldn't have it coming from a million places, so you had to ensure that everything looked consistent and similar within the site.
But now with IPFS, we can reconsider the web stack and typical web design. We can reduce pieces of content to just themselves and allow them to be linked with other content, so everything's closer to being true hypermedia.
We really don't need any fancy web templates (think Wordpress, Squarespace, etc.) if we are to achieve true decentralization. Heavily designed websites keep themselves from being "decentralized" by being too systematized/tightly coupled. Do I make sense here?
I'm actually working on this at the moment. I'm working on making it easy to create decentralized web content. I want to say I can, with some help from you and others, integrate IPFS as the backbone of this. Keeping IPFS in the picture brings back the decoupled nature of content, with the guarantee that the links won't go out of business. Nobody has to pay for hosting, nobody has to worry about content anymore. The world just became our one huge hard drive!
Could you explain a little more about this?
The most common form of internet censorship is blocking IP addresses/ranges and ports; the next most common is using DNS redirection for additional filtering. URL and packet inspection (keywords) happen but are quite rare, and selective protocol filtering (deep packet inspection/IX) can happen but is very rare on national scales.
Complete internet blockage is usually done simply through BGP: the government simply publishes new routes for all of its ISPs, which usually just direct to a blackhole. Since BGP is insecure in general, an ISP can technically redirect traffic on its own if it wants to, and it has happened several times before (Hacking Team allegedly published rogue BGP routes to sniff traffic).
At this point, there are circa 3 billion Internet users, nearly half the planet, so I think we're well past the point where "ZOMG GROWTH N STUFF" is a reasonable justification for this sort of hype. The Internet's growth rate peaked in the 90s:
Now it's under 10% user growth, which seems entirely manageable.
That said, there's no doubt that TCP/IP has been a runaway success, and I'm sure the credit largely belongs to some early visionary (who I should probably know about, but no name comes to mind).
Edit: Vint Cerf and Bob Kahn.
For those working at scale, single points of failure get solved through CDNs, multiple servers with failover, and offering services from multiple data centers. Google's a good example of this; despite Google being an enormously complex service, I've seen effectively no downtime from them.
For me IPFS seems like a solution in search of a problem. It's definitely a cool solution, but cool isn't enough for broad adoption.
She doesn't even cover the clearly obvious economic aspects of this -- why would I run an IPFS node if it just benefits the company and not me?
IIRC local machines by default only cache stuff for a very short time, which makes sense for personal computers.
On company networks however I'd expect sysadmins to run nodes to reduce upstream bandwidth usage.
Edit: see also https://news.ycombinator.com/item?id=10329262 the specific example is the exact opposite but I see no technical reason why you cannot do this the other way around.
Perhaps this is a sign you've over-internalized things.
One interesting thing is you can have totally end-to-end encrypted applications: encrypt the application code and all the generated data.
Also, remember that the HTTP Web itself was a reimplementation of decades-old hypertext systems. Hypertext hails all the way back to Xanadu (60-80s!).
Why should I believe that claim? What are the incentives for storing my data in the network? As far as I understand, no incentivization is done for running the network. That's why I wouldn't trust it to store anything but a trivial amount of data.
Regarding incentivization, looking at the literature, you can split existing incentive schemes into two camps, local and global. Tit-for-tat would be an example of a local scheme, currency based systems would be global.
Neither of these types of scheme really works: with local schemes, you have no way to use accrued reputation with new nodes, while global schemes require too much coordination and overhead to be suitable for small 'transactions'.
As I understand it, IPFS deals with the problem by having a local scheme, but with a get out clause. If you want something you may have to do work on behalf of the node you want something from, but that work should be easier if you have a good reputation with other nodes in the network.
Under my proposed system, nodes keep track of how much they are in debt or in credit with the nodes that they are in contact with. The amount that you can go into debt with another node is limited to 1% of the amount of useful data that you have transferred to that node. (This way, if a node terminates a connection whilst it is in debt to another node, that loss, amortized over the useful data that you have received, is very small.) Nodes are incentivized to keep connections open, since short-lived connections have very tight reins on how much debt/credit can be accrued.
As this stands, this doesn't solve the problem. It's entirely local and there's no way to initiate connections since nodes, by default, don't trust each other.
The solution is two-fold: credit transfer and proof of work. I have designed a protocol that allows nodes to transfer credit to cancel out 'cycles of debt'. Nodes can also gain credit with other nodes by solving pointless problems. The innovation here is that when confronted with a puzzle, a node is able to delegate the work to its debtors. Proof of work is useful since, given the choice, a node would probably prefer to use its upstream bandwidth rather than burn up CPU time.
There are more details to this, like: the specifics of how to discover and execute a credit transfer along a long chain using only local interactions, how to incentivize routing as well as data transfer, and many other things.
If I had the time, I'd implement it.
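A rough sketch of the per-peer ledger half of the scheme as described above -- the class and method names are hypothetical and the bookkeeping is simplified:

```python
class PeerLedger:
    """Debt/credit with a single peer. Debt is capped at 1% of the useful
    data we've transferred to that peer -- a simplified reading, not a spec."""

    DEBT_RATIO = 0.01

    def __init__(self):
        self.sent = 0       # bytes of useful data we sent to the peer
        self.received = 0   # bytes of useful data the peer sent us

    def can_request(self, nbytes):
        # The peer serves us only if our resulting debt stays within the
        # allowance earned by what we've already contributed to them.
        debt_after = (self.received + nbytes) - self.sent
        return debt_after <= self.DEBT_RATIO * self.sent

    def record_send(self, nbytes):
        self.sent += nbytes

    def record_receive(self, nbytes):
        self.received += nbytes


ledger = PeerLedger()
ledger.record_send(1_000_000)        # we seeded 1 MB to this peer
ledger.record_receive(1_000_000)     # and have pulled the same amount back
print(ledger.can_request(10_000))    # True: 10 KB of debt fits in the 1% allowance
print(ledger.can_request(20_000))    # False: exceeds 1% of what we've contributed
```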
> I have designed a protocol that allows nodes to transfer credit to cancel out 'cycles of debt'.
Awesome -- sounds like Filecoin, if it were based on Ripple instead of Bitcoin. Ideas are always welcome, so please share yours either via Github issues or IRC (freenode#ipfs) :)
Ripple relies on a common shared ledger, which is a distributed database storing information about all Ripple accounts. The network is "managed by a network of independent validating servers that constantly compare their transaction records."
I don't think any system that relies on a global ledger is workable for micropayments like this.
Edit: I spoke too soon; Ripple is more complicated than I thought. My scheme is somewhat like Ripple after all.
Of course if one doesn't need transactions, everything is easier even today -- I mean, who doesn't use a CDN? It's a very similar concept.
The problem is much tougher when the content is personalized or when there needs to be a transaction.
This makes distribution a huge huge problem no one has solved yet (even a blockchain, IIRC can't really scale up in transactions...)
Also see the Neocities + IPFS blog post: https://ipfs.io/ipfs/QmTVcD87Ecjps6wv9jMaGhvMuzZ2BgP6NyXDcnM...
Take a look also at this talk, which presents different use cases for this: https://www.youtube.com/watch?v=skMTdSEaCtA
Can also see some of the random crazy things we're planning in https://github.com/ipfs/notes/issues/ + https://github.com/ipfs/archives/issues/ + https://github.com/ipfs/apps/issues/
(Edit: fix links)
[Edit: this seems to be a problem with the route names on the Neocities blog, not IPFS itself]
Also, IPFS works today too, https://ipfs.io is served directly via IPFS :)
Sorry to be extremely obtuse, but since when has our connectivity been rapidly dwindling? It seems to have only been growing since the network launched.
One example is this: if your bandwidth is too low, or your latency too high, you basically cannot use the Web as it is today. Native mobile apps do muuuch better, because you download them once and can use them mostly offline or with low network usage. The reason this is "getting worse" is more of a perception problem and an artifact of this fact: storage is getting cheaper faster than bandwidth is getting cheaper. Over time, our media usage (and thus perceived requirements) grows and grows: tons of websites today are _several megabytes!!_, and all the media we use keeps growing to fit our nicer screens. Also, as more and more people come online, and their usage increases, the networks saturate. This causes the "perceived bandwidth" to decrease, meaning that the pipes (which are getting absolutely better) can handle a smaller percentage of our individual load and thus "feel worse".
Then there are other stupid problems, like the fact that many major websites/webapps will be totally useless if you go over certain latencies (particularly with wireless meshes, meaning lots of packet loss). This is because the servers' request timeouts are way too low. I was recently traveling through _Europe_ and in many places (trains in the countryside, or small cities) the mobile (and sometimes even wired!) latency was so high that i could not browse the web. TLS handshakes wouldn't complete. HTTP servers would give up on me. It was terrible -- as a person accustomed to LTE + fiber ("Aaaaahhhhhh the data!!!") I couldn't believe just how stupid and bad of an experience we're giving our users out there. This was Europe -- now transport yourself to Bangladesh, or many places in rural India where people are beginning to be plugged into the internet. Think of places where access to Wikipedia, to Khan Academy, to the messenger services, could make huge life-changing differences for people. Having a bad web _there_ is blocking people from having the amazing powers of communication and computing that we all get to enjoy.
We must fix these problems. And we're going to fix them by improving the data model and the distribution protocols, not with optimistic policies.
Also, how would your protocol help your example of you riding on a train? I'm not seeing it.
Just because it makes stuff clearer. Heck the project page could use some of that too ;)
We may have to move our gateway back off our main domain, and have a "recommended blocker", so that people don't jump on this.
Anyway, one piece of good news: despite everyone's belief about UDP transports never going through corp firewalls, QUIC now handles MOST (>60%, close to 80% i believe) of "Google Chrome<-->Google sites" traffic across the world.
So, temp setbacks may be annoying, but in the end, once users install IPFS -- and once we have browser implementations, we can make all the traffic look like HTTP TLS flows, so blocking it will be very hard without also blocking regular HTTP TLS traffic.
Many large or data-sensitive companies do SSL MiTM at their firewall so unfortunately that's not hard at all.
i.e. all bets are off. there will always be pockets. but over time we could win even there, as the perf improvement will matter.
(It's also illegal BTW in civilized countries! At least on networks with employees doing web browsing, as that's personal communication of the employee and part of fundamental rights)
You should take a survey of F500 companies to see how many can actually hit the site. Or check your logs.
But I don't think it's going to take off until there's a way to take down files you don't want published anymore. Even with the Internet Archive, if you add them to robots.txt, they will take it down. 
Removing things from the Internet is always going to be imperfect since there will always be people who archive old copies of files (and that's a good thing). But the official software should honor retractions or mainstream publishers won't be interested.
This'll most likely start as CDN endpoints (that no longer need an origin), and move on from there.
Also, it is impossible to know who owns a file. Since it is identified only by a hash, everybody possessing the same file at any point could claim that file is theirs and prove it by producing the hash. In this way, downloading a file is owning it.
Also, the Internet Archive policy seems too naïve. So anyone who just bought a domain can exclude everything saved at that domain in the IA up to that day?
The key would be to convince publishers that removing the hash from their directory tree counts as deletion. Maybe the hashes shouldn't be that prominent?
I don't see how the Internet Archive policy is naive. Yes, they're assuming whoever owns the domain owns the files and maybe sometimes that means someone deletes some files when they didn't have the right. But taking a stand against domain owners who want the files gone would probably land them in court.
For instance: I'm not clear on how IPFS protects applications from DDOS. Systems like IPFS spread the load of delivering content, but applications themselves are intrinsically centralized.
> I'm not clear on how IPFS protects applications from DDOS. Systems like IPFS spread the load of delivering content, but applications themselves are intrinsically centralized.
Think about an application whose content is moving around entirely distributed by IPFS as well -- think of apps that run mostly clientside, with signed (+ maybe encrypted) data generated in the users' browsers, with maybe a few "non-browser" nodes contributing to building indices or providing trusted oracles.
What we're talking about is a model for webapps in which not just the content, but the logic + processing is decentralized too. At one extreme are bitcoin/ethereum-style applications, where everyone runs the same computation to verify it; at the other extreme, everyone just computes on their own data + the data they care about, and signs all their updates.
How to do this well is not easy-- distributing the content is one part, another is making a really good capabilities library (Tahoe-LAFS has done an excellent job with this, for example, and e-rights has tons more great ideas). Another part still is thinking about the sync models with ephemeral nodes which create tons of small pieces of data, blast them out to content bouncers, and go offline. Building scalable real-time indices on this sort of stuff is going to be tricky :)
Another interesting area is thinking about how databases look once you do this-- think both NoSQL AND SQL models on top of IPFS. yep, may sound crazy, but we have some preliminary work towards this (NoSQL is easy, SQL is less easy, but very doable! -- after all a database is just a good datastructure and good algorithms for operating on it).
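For a taste of what the "NoSQL is easy" part could look like, here's a toy sketch in plain Python -- just an illustration of the idea, not how IPFS actually implements it:

```python
import hashlib
import json

blocks = {}  # content-addressed block store: hash -> bytes

def put_block(obj):
    data = json.dumps(obj, sort_keys=True).encode()
    h = hashlib.sha256(data).hexdigest()
    blocks[h] = data
    return h

def get_block(h):
    return json.loads(blocks[h])

# A "NoSQL" store is just an immutable map whose values are block hashes.
# Every write yields a new root hash; old roots keep working, like git commits.
def kv_set(root, key, value):
    table = get_block(root) if root else {}
    table = dict(table, **{key: put_block(value)})
    return put_block(table)

def kv_get(root, key):
    return get_block(get_block(root)[key])

root = kv_set(None, "user:1", {"name": "alice"})
root = kv_set(root, "user:2", {"name": "bob"})
print(kv_get(root, "user:1"))   # {'name': 'alice'}
# Publishing the latest root under a mutable (IPNS-style) name is the only
# moving part; everything beneath it is immutable and verifiable by hash.
```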
Happy to write more about this, it's a super interesting model we're exploring.
We divide naming in two parts:
(1) Providing a long-term reliable mutable pointer. (no consensus needed)
(2) Providing a long-term reliable short and human-readable identifier. (consensus needed)
Where "long-term reliable" means i can rely on it for decades for important businesses. I.e. nobody will just take it from me by a fluke of the protocol.
IPNS, the naming system of IPFS, separates these into two steps:
(1) First, it makes a cryptographic name-system (this is based on SFS -- by David Mazieres -- look it up, fantastic system and a prelude to the core design of IPFS, Gnunet, Freenet, Tahoe-LAFS and many other systems). This cryptographic name-system means a "name" is the hash of a public key ("eeew that's ugly"-- yes, hang on). That hash name can be updated only by the holder of a private key (how? via the DHT and other record distribution systems, more on that later). The important part is that it (a) does not require consensus at all, anybody can make names (it's just a key pair!), and (b) it can be updated really fast over DHT, Pub/sub (multicast) and other network distribution systems.
(2) Second, it delegates the human-readable naming to _other, existing_ name authorities (note that _stable global solutions_ to this problem require consensus). We don't want to have to make _our own_ naming authority, lots exist already: DNS, all the DNS alternate universes, and more recently in the cryptocurrency world: Namecoin, Onename, and even Ethereum is making one. So, _instead of adding one_, we just work with all of them, and integrate. You can bind an IPNS name (a public key path, like `/ipns/QmbBHw1Xx9pUpAbrVZUKTPL5Rsph5Q9GQhRvcWVBPFgGtC`) to a name in those authorities _once_, and never have to do it again. For example, with DNS you do this:
1. setup a DNS TXT record like: dnslink=/ipns/QmbBHw1Xx9pUpAbrVZUKTPL5Rsph5Q9GQhRvcWVBPFgGtC
2. continue using QmbBHw1Xx9pUpAbrVZUKTPL5Rsph5Q9GQhRvcWVBPFgGtC as usual.
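To illustrate step (1), here's a toy version of a key-hash name record in Python -- it uses the `cryptography` package, and the record layout and example value are made up for illustration, not the real IPNS wire format:

```python
import hashlib
import json
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

# The "name" is just the hash of a public key; only the private key holder
# can publish updates under it.
priv = Ed25519PrivateKey.generate()
pub = priv.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
name = hashlib.sha256(pub).hexdigest()   # what /ipns/<name> would point at

def publish(value, seq):
    record = json.dumps({"value": value, "seq": seq}).encode()
    return {"record": record, "sig": priv.sign(record), "pub": pub}

def resolve(name, entry):
    # Anyone can verify: the key must hash to the name, and the signature must check out.
    assert hashlib.sha256(entry["pub"]).hexdigest() == name
    Ed25519PublicKey.from_public_bytes(entry["pub"]).verify(entry["sig"], entry["record"])
    return json.loads(entry["record"])["value"]

entry = publish("/ipfs/<some content hash>", seq=1)
print(resolve(name, entry))
```

No consensus is needed anywhere in this step: anyone can generate a key pair, and anyone can verify an update against the name itself.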
One more thing: resolving names via local-names and paths (i.e. a web of trust, using either SDSI-style naming, or SFS's much nicer path version) is entirely possible and averts the requirement of consensus for meaningful human names. This is really useful and cool, and we will experiment with it in the future. But in general, this doesn't (IMO) give you the ability to do "global long-term reliable" names, as "jbenet" might mean something different to different segments of the network, so i couldn't _print_ the words "yeah, just go to `/jbenet/cool-site`" on _paper_, because there would be no global consensus for `/jbenet` and i would like to make sure all my references are viewable by anyone across space and time.
Hope this helps!
Many of the claimed advantages of IPFS can be achieved with subresource integrity. If you use subresource integrity, files are validated by their hash. We just need some convention by which the hash is encoded into the URL. Then any caching server in the path can safely fulfill the request.
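As a sketch of the kind of convention this could be -- the URL format here is made up purely for illustration:

```python
import hashlib

# Made-up convention: the expected digest rides in the URL fragment, e.g.
#   https://cdn.example.com/app.js#sha256=<hex digest>
def verify(url, body):
    _, _, expected = url.partition("#sha256=")
    return hashlib.sha256(body).hexdigest() == expected

body = b"console.log('hello');"
url = "https://cdn.example.com/app.js#sha256=" + hashlib.sha256(body).hexdigest()

assert verify(url, body)            # any cache along the path could serve this safely
assert not verify(url, b"evil")     # tampered content fails the check
```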
However, this is very far from "many of the claimed advantages". I think it covers a few, but take a look at all the other stuff we can do, like making offline/disconnected webapps work! Think of IPFS like git or bitcoin, not just bittorrent.
EDIT: OP changed the title, it was originally written as "Why the Internet Needs IPFS Before It's Too Late"
What's an email I can reach you at? I'd like to pick your brain about some of this stuff and how it relates to databases.
It's hard to imagine you haven't already thought of these questions and answered them somewhere, but this non-article certainly doesn't address it or point to better resources (besides "buy my book!").
Let me try to explain it again. The idea of content-based keys strongly narrows the application domain of the system. Modifying your information (i.e. a typo correction) invalidates all the keys. More precisely, people holding the old key won't be able to access the corrected information. Of course this model fails completely with dynamic data.
The other problem that is often overlooked with such an idea is the function that translates your key into the location of the information. That is: determining the IP address of the server(s) hosting the information from its hash key. One needs something like a distributed index for that. Don't use an algorithm, because you don't want to add the constraint that your information can't be moved or replicated.
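The shape of that index (a DHT, in IPFS's case) is roughly this -- shown here as a toy in-memory stand-in:

```python
import hashlib

# Toy stand-in for the distributed index: content hash -> peer addresses.
# In a real system this mapping is itself spread across nodes (a DHT), so content
# can move or be replicated without its key ever changing.
providers = {}

def provide(peer_addr, content):
    key = hashlib.sha256(content).hexdigest()
    providers.setdefault(key, set()).add(peer_addr)
    return key

def find_providers(key):
    return providers.get(key, set())

key = provide("10.0.0.5:4001", b"hello world")
provide("192.168.1.7:4001", b"hello world")   # replication just adds another provider
print(find_providers(key))
```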
Another problem is staying in control of your information. An owner of information wants to be able to fix or modify it, or delete it when he wants.
Finally, another naive idea is sharing distributed storage. This is a nice idea, but it simply doesn't work. Some people will abuse it. To avoid this you need a system to monitor and control usage. Accounting? Good luck. By the way, this is the problem of the Internet today. It is a shared resource, but a few are consuming a hell of a lot of it and earning a lot of money without sharing back the profit. I'm looking at you, Google, with YouTube.
I've been thinking about and working on this problem for a long time. My conclusions are:
1. Decouple keys from content
2. Make the distributed locating index your infrastructure
3. Optimize keys for index traversal
4. Owner of information must stay in control of the information
5. Owner of information must assume the cost of hosting
Once you put something on the Internet, you no longer own it.
Check out Content Centric Networking (ccnx.org) from Parc for something that actually has a chance at being a real solution.
It turns out that:
- (a) IPFS already works
- (b) IPFS layers over CCN extremely well AND generates demand FOR CCN. (i'm a big fan of CCN actually, and want to see it in more and more systems. But CCN has huge adoption problems. Look, we don't even have IPv6 fully deployed yet! So if all IPFS does is generate enough demand for CCN to be fully deployed, we've done a good job.)
- (c) all the routing problems are very real, but can be solved just fine. If you haven't looked deeper at how IPFS actually works instead of short summaries, you'll realize that the IPFS specs define the routing layer as entirely pluggable for this reason: we need to evolve to better and better schemes over time.
Instead of blindly saying "this cannot work", read more first. Understand the systems _goals_, decisions, and roadmap. Ask questions if you're unsure how something could possibly work. Lots of very smart people are working on IPFS and we're doing it because we see that it can indeed (and does) work. Blind negativity does not help CCN, and does not help anyone make better things.
That being said, I hope you're right.