Filecoin Foundation Successfully Deploys IPFS in Space (fil.org)
165 points by diggan on Jan 16, 2024 | hide | past | favorite | 209 comments



I'm still a little lost. Could someone more knowledgeable than me explain how operations in space would benefit from a public content-addressable filesystem?

This seems like the kind of application that works best when bandwidth and storage are cheap, and I would have assumed that, in space, both are about as far from cheap as you can get.


The internet traditionally uses location-addressing: a DNS name points to an IP address at a specific location that your computer tries to reach in order to fetch the content. This means that particular node needs to respond to you, and your bandwidth depends on the bandwidth available along the path to that specific node.

Instead, IPFS uses content-addressing, where the content is hashed and given an ID based on the content itself. So "foo" could have the ID A, while "bar" could have the ID B. No matter who added the content to the network, if the content is the same, the ID will be the same.

The benefit of this is that you can fetch the data from anywhere, and you'll get the data you wanted guaranteed (assuming the content is available somewhere on the network).

In terms of space, imagine being on the moon and requesting some website. With location-addressing, you have to be able to reach that particular node, and the data has to travel from there to you. With content-addressing, that content can be fetched from anywhere: maybe another node on the moon already has it, so it can be fetched more quickly from there, or from some node in between the moon and earth.

More reading about content-addressing: https://en.wikipedia.org/wiki/Content-addressable_storage

Magnet links and git are two fairly common technologies that use content-addressing, with lots of benefits.

(edit: of course, this is a simplification of how IP, networks, IPFS, DNS, and more work, but I hope it provides a fair overview at least)
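To make the parent's simplification concrete, here is a minimal Python sketch, with a bare SHA-256 hex digest standing in for a real IPFS CID (which additionally wraps the hash in multihash/multibase metadata):

```python
import hashlib

def content_id(data: bytes) -> str:
    # The ID is derived purely from the bytes themselves, not from who
    # stores them or where. (A real IPFS CID wraps the hash in a
    # multihash/multibase envelope; a bare hex digest stands in here.)
    return hashlib.sha256(data).hexdigest()

# Two independent nodes adding the same content produce the same ID...
assert content_id(b"foo") == content_id(b"foo")
# ...while different content gets a different ID.
assert content_id(b"foo") != content_id(b"bar")
```

Because the ID commits to the bytes, any node that serves a matching blob can be trusted to have served the right content.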


OK. But, going back to my comment about cheap. This claim:

> The benefit of this is that you can fetch the data from anywhere

Only works if the data I want is being replicated in many locations. But storage in spacecraft is presumably a tightly constrained resource, due to the need to use technology that's space-hardened and has an extremely good reliability record, and is therefore probably far from being both cutting edge and high density. JWST, for example, only has 68GB of storage despite being an instrument for taking high resolution photographs.

I would guess that that creates a situation that is very far from the one content-addressable storage is trying to solve. The goal isn't on-demand access to arbitrary files that are just sitting around in long term storage, because I'm guessing that, in space, there is effectively no long-term storage of data files in the first place. Instead, what you're looking to do is to get the data off of the spacecraft and down to earth as quickly and efficiently as possible, and then delete it from the spacecraft's storage so that you can make room to do more cool space stuff.

And I also don't want to be using my spacecraft's SSD to cache or replicate data for others if I can possibly avoid it. That will unnecessarily shorten the lifetime of the SSD, and, by extension, the piece of hardware that I just spent millions or perhaps even billions of dollars to build and launch into space.

And I just can't follow you as far as

> imagine being on the moon and requesting some website

because, right here right now, that is such a hypothetical situation that I have absolutely no idea why it needs a real-world demonstration of proof of concept using currently-available technology. Let's wait to see if browsing the Web from the moon leaves the domain of science fiction and becomes science reality first, so that then we can benefit from whatever technology exists in that future when we're solving the problem.


>> imagine being on the moon and requesting some website

> because, right here right now, that is such a hypothetical situation that I have absolutely no idea why it needs a real-world demonstration of proof of concept using currently-available technology.

So I just want to point out that IPFS was fairly deliberately designed to have numerous forward-compatible features that could be swapped out in the future, like https://multiformats.io/ and in particular https://multiformats.io/multiaddr/ .
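For a taste of multiaddr: it's a self-describing, path-like string of protocol/value pairs, so a toy parser is only a few lines. (This sketch assumes every protocol carries exactly one value, which is not true of all real multiaddr protocols; some, like /quic, take none.)

```python
def parse_multiaddr(addr: str) -> list[tuple[str, str]]:
    # A multiaddr alternates protocol names and values,
    # e.g. /ip4/127.0.0.1/tcp/4001 — which is what makes it
    # forward-compatible: new transports just add new protocol names.
    parts = addr.strip("/").split("/")
    return list(zip(parts[::2], parts[1::2]))

assert parse_multiaddr("/ip4/127.0.0.1/tcp/4001") == [
    ("ip4", "127.0.0.1"),
    ("tcp", "4001"),
]
```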

In the IPFS community, there's always been a fairly heated discussion about which bit of the entire system should keep the term IPFS. Like, if you took away the libp2p protocol and just served CIDs over HTTP, would it be IPFS? What if you took away CAR files (the Merkle-tree file format used to define multi-item content)? What if you ran a private IPFS network with no shared nodes with the public network (like https://github.com/TryQuiet/quiet )? What if you didn't use bitswap, the file transfer protocol (Filecoin doesn't use bitswap, and mostly doesn't interconnect with the main public IPFS network)? What if you didn't use a DHT to find providers of a CID? What if you're not using any of the "IPFS" software stack, but your implementation still uses bits and pieces of content-addressability as defined in the standard?

Interestingly, right now, there are a bunch of experiments going in all of these directions: I think it's fair to say that if you wanted to test out content-addressable networks across the solar system, they probably wouldn't be IPFS as it is now, but their nature could probably be described using the primitives the IPFS stack uses, and learning about what needs to change would give a useful direction to some part of the extended IPFS ecosystem.


You're asking why a superglue company would superglue a construction worker to a steel beam. It's promotional material.


I think the key is understanding why promotional material matters so much to crypto. Crypto operates on a hype -> higher price -> FOMO -> higher price cycle. The trade is that, for large holders, money invested in hype returns significantly higher gains. This enabled questionable coins / ICOs to get celebrity endorsements for easy money in the past. In the current regime the model can still work, but with smaller returns and significantly higher execution barriers.


> Only works if the data I want is being replicated in many locations

Yes, the actual transfer won't be fetching from multiple sources unless it's replicated.

The difference between location-addressing and content-addressing is that the latter enables the content to be fetched from anywhere, which has uses outside of space too, as many storage solutions discovered decades ago.

> SSD to cache or replicate data for others if I can possibly avoid it. That will unnecessarily shorten the lifetime of the SSD

Correct me if I'm wrong, but SSD lifetime is based on writes, not on reads. So you should be OK with replicating already saved data, in that case?

> that is such a hypothetical situation that I have absolutely no idea why it needs a real-world demonstration

The whole "IPFS in space" is a "utopia goal" of sorts, but aiming for it gives us benefits today.

Personally, I'd love to see a move from location-addressing to content-addressing, as I'm often in locations with a spotty global-internet connection and really poor latency. Sometimes, websites refuse to even try to serve me data, as it takes a really long time for packets to travel between where I am and the host I'm trying to reach, so they cut the connection and tell me "sorry, timed out..."

If the web was already built with content-addressing in mind, my machine could automatically fetch the content from another machine on my network, my neighbor, a dynamic cache at the ISP or wherever, anyone could serve me that data instead of that particular node with that particular IP on the other side of the world. Sharing a YouTube video between two computers on the same network would just transfer bytes on the local network, instead of reaching out to the internet. The benefits of this should be fairly obvious to anyone who is familiar with networking today.

Realistically, I don't think IPFS will be the technology that makes this possible in 10+ years, but that doesn't mean content-addressing itself is the reason why it isn't more widespread. Content-addressing has been around for a long time already, and I'm sure more and more people will realize its value over time.


> If the web was already built with content-addressing in mind, my machine could automatically fetch the content from another machine on my network, my neighbor, a dynamic cache at the ISP or wherever, anyone could serve me that data instead of that particular node with that particular IP on the other side of the world.

I don't think the lack of content-addressing is the main thing holding this "utopia" back. The bigger problem is designing P2P communication protocols that work efficiently, utilize bandwidth fairly, are not easily attackable, do not accidentally cost peers excessive amounts of money, and so on. If your neighbor is on a metered connection, I'm sure they are happy the web is not using their machine to serve you content. Even if they are not, if connections are spotty and bandwidth limited, they may not be happy if their YouTube video starts stuttering because their system is now serving you Netflix content. And when they're watching porn, they're probably happy that someone else can't easily find out which porn movie they are watching by polling nearby peers for a content hash.

Ultimately the client-server model has major advantages over peer-to-peer that are highly ingrained in our society. It is the model by which delivery has almost always worked, way before the internet.


You don’t need IPFS or even content-based addressing to get a system like you are describing.

You can add distributed caching to HTTPS with message signatures (or a similar scheme): https://httpwg.org/http-extensions/draft-ietf-httpbis-messag...


I don't think price is the point at this time. We need to start thinking about how to manage data transfer where latency is significant

Imagine shipping an IPFS node to Mars and being able to have content consistency


The unstated major premise of a statement like "we need to start thinking about..." is that we aren't already thinking about it. But I don't think that premise holds in this case.

See, for example, DTN, which is already being used to do real work on the ISS. There are also plans to use it elsewhere, including in some future lunar missions. https://www.nasa.gov/communicating-with-missions/delay-disru...


You still have to talk to specific machines to get the data, so having it content addressable will not automatically make it either more or less rapidly available.

Also, even for a peer-to-peer network, you still need some way to discover the peers that actually have the data, so you need some kind of centralized infrastructure that can help coordinate. Ultimately whether you connect to a CDN or to the centralized P2P facilitator is not such a massive difference.

Also, given the way ISPs typically operate, you will probably have much better bandwidth downloading from a centralized server than from a generic other end-user.

This is why BitTorrent only really has two successfully deployed use cases:

1. Windows updates and similar, where machines can rely on broadcasts on a small LAN to coordinate without needing more complex infrastructure (doesn't scale beyond LAN)

2. "Piracy", where the major advantage is that it diffuses the blame for copyright infringement, instead of presenting a single large target that holds all of it.


Yeah, I don’t think your explanation offers anything over and above what actual CDNs already do for us right here on earth.

CAS is a separate issue than a caching hierarchy.


> Yeah, I don’t think your explanation offers anything over and above what actual CDNs already do for us right here on earth

CDNs usually offer location-addressing, implemented in various ways, but most of the time it boils down to "I have to reach that specific node to fetch this specific content". I'm not sure what other words I could use to explain how content-addressing is different from that...

(not even talking about IPFS at this point (or in my grand-parent comment), it's just about URIs vs URLs in the end)

(edit: besides, the CDNs you're talking about are with 99% certainty using content-addressing all over the place internally)


Torrents with extra steps, eh?


As chance would have it, an excellent deep dive into the differences between IPFS and Bittorrent was published by Daniel Norman today.

https://norman.life/posts/ipfs-bittorrent


It's basically the same idea as magnet links, so you could say it's torrents with the same steps.


The main advantage IPFS has over torrents with magnet links is global deduplication / a global swarm. If you seed a torrent that just contains book A, and I seed a torrent that contains books A and B, I won't provide any bandwidth to people downloading book A through your torrent. If we were using IPFS, someone downloading book A will be able to pull from both you and me, even if we don't personally know about each other and started seeding independently.
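A toy sketch of that difference, assuming simple fixed-size chunking (real IPFS chunkers default to much larger blocks and build a Merkle DAG on top; the clean overlap below also relies on book A's length being a multiple of the block size):

```python
import hashlib

BLOCK_SIZE = 4  # tiny for illustration; real chunkers use ~256 KiB

def blocks(data: bytes) -> set[str]:
    # Split content into fixed-size chunks and address each chunk by
    # its hash — roughly how IPFS assembles content-addressed blocks.
    return {hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)}

book_a = b"AAAABBBB"
book_b = b"CCCCDDDD"
seeder_1 = blocks(book_a)            # seeds only book A
seeder_2 = blocks(book_a + book_b)   # seeds books A and B together

# Because blocks are addressed by content, seeder_2 automatically
# provides every block of book A that seeder_1 provides, even though
# the two started seeding independently:
assert seeder_1 <= seeder_2
```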


Actually, BitTorrent v2 (as specified in BEP 52) has per-file hash trees, so it supports sharing a swarm if the same file is shared between different torrents. Not a whole lot of torrents use it though.
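A simplified sketch of a per-file hash tree (a hypothetical helper, not BEP 52 itself: the spec uses 16 KiB leaves and pads layers to powers of two, while this version just duplicates the last node of an odd layer):

```python
import hashlib

def merkle_root(chunks: list[bytes]) -> bytes:
    # Hash each leaf, then hash pairs upward until one root remains.
    layer = [hashlib.sha256(c).digest() for c in chunks]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # duplicate last node of odd layer
        layer = [hashlib.sha256(layer[i] + layer[i + 1]).digest()
                 for i in range(0, len(layer), 2)]
    return layer[0]

# The same file yields the same root no matter which torrent it
# appears in, so clients can recognize it as the same swarm:
file_chunks = [b"chunk-1", b"chunk-2", b"chunk-3"]
assert merkle_root(file_chunks) == merkle_root(list(file_chunks))
```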


This sounds like a feature but it's actually a source of serious performance issues in IPFS.


More specifically, I think it’s basically the same idea as trackerless magnet links where you find peers for your file via a DHT. In fact, the Mainline DHT used by most BitTorrent clients is apparently very similar to the DHT used by IPFS. They are both Kademlia implementations.
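A minimal illustration of the Kademlia XOR metric both DHTs are built on (node IDs shrunk to a few bits for readability; real IDs are 160+ bits):

```python
def xor_distance(a: int, b: int) -> int:
    # Kademlia's "distance" between two node/key IDs is simply their
    # bitwise XOR, interpreted as an integer.
    return a ^ b

def closest(target: int, node_ids: list[int], k: int = 2) -> list[int]:
    # A lookup repeatedly walks toward the k nodes whose IDs are
    # XOR-closest to the key being looked up.
    return sorted(node_ids, key=lambda n: xor_distance(target, n))[:k]

nodes = [0b0001, 0b0100, 0b0111, 0b1100]
# For key 0b0101, node 0b0100 (distance 1) and node 0b0111
# (distance 2) are the closest:
assert closest(0b0101, nodes) == [0b0100, 0b0111]
```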


Torrents also rely on content-addressing, they go hand-in-hand :)


> The internet traditionally uses location-addressing. A DNS name pointing to a IP which has a specific geographical location

This is complete nonsense. First of all, DNS can give you multiple IP addresses[1] (that's how CDNs work, eh!), and IP addresses are only very loosely coupled to geographical locations…

[1]: https://www.cloudflare.com/learning/dns/what-is-anycast-dns/


Obviously, it's a simplification, as I'm sure you've realized by now. Generally, though, I think it can serve as an initial starting point to understanding more.

Edit: I removed the "geographical" from the "pointing to a IP which has a specific geographical location" part. Hopefully it'll make things clear enough while not over-simplifying it.


Caches solve this placement problem just fine today and content addressing does not alleviate the placement problem at all.

Reality is the space thing is just part of the usual crypto grift narrative building.


> Caches solve this placement problem just fine today and content addressing does not alleviate the placement problem at all.

Being able to arbitrarily place caches anywhere and let anyone serve the data, while trusting you're getting the right data, is a huge benefit over location-addressing. Ignoring IPFS, content-addressing makes sense for so many things, which is what lots of storage/caching/CDN solutions discovered decades ago.

Content-addressing is nothing new (nor did Protocol Labs/IPFS "invent" it) and if you're using a CDN to serve content today, the CDN is with 99% certainty using content-addressing at least internally.


> Being able to arbitrarily place caches anywhere and let anyone serve the data

This is the grift.

The "libertarians" want to be able to host their CSAM, sorry, their "freedom materials" on everyone's servers in such a way that nobody can ever "silence their freedom" because it would mean silencing the cat gifs, too. Want to be "on the net"? Well, you absolutely have to agree to host anyone else's content without question at all times. But, it's okay, we'll pay you with... coins!


Like many others, you seem to not really grasp how IPFS works. Nothing gets distributed automatically.

Besides, how on earth is your argument even relevant when we're talking about content-addressing as a whole, without involving IPFS at all?


I get how IPFS works. I think the problem you have is that you can't distinguish between IPFS and CAS.

Lots of things use CAS, not just IPFS.

The specific case you outline can be served by existing technology, without having to introduce magic coins, beans, or other distractions.

I was pretty interested in IPFS prior to it being overtaken by web3 grifters.


Simple caches do not allow third parties to deploy them while being sufficiently secure.


How do you mark something for deletion?

Or once it's in the network, that's it?

Seems easy to DDoS by changing one character in a file many times, which generates a new ID each time


> How do you mark something for deletion?

Essentially the same as you'd delete anything in any protocol: you delete it locally. If you've shared it on the internet, you ask people nicely to delete it; if they refuse, you either pursue them legally or give up. Basically, just like how you delete a picture from the internet today.

> Seems easy to DDoS by changing one character in a file many times, which generates a new ID each time

All you're doing in that case is adding more files locally and filling your hard-drive with random files. No content gets automatically distributed unless people request it.


It's quite the opposite. We've sketched out a school platform using IPFS that allows very remote villages in third-world countries with little to no internet to still have a viable "Google Drive"-like experience for school homework. The idea is that a single USB drive containing all the updated files is sufficient for the entire network (which exists in the village, and is decent, but otherwise has very low to no bandwidth to the outside world) to have access to them. Or, a single node downloads the homework/movie/article/whatever, and the remainder of the network has access to it through IPFS, which only asks for the content's hash and gets it wherever it can.

The same thing could be applied to space.


Have you actually used it? A USB drive full of assorted information is a very versatile distribution method; I am not sure IPFS is much better than that.


I think the idea behind content-addressable databases like this is that they are supposed to be permanent. Putting it out into space may be a way of saying "it's never going down".


Most likely an obfuscation technique: they (the IPFS people) are afraid to admit that there is no guarantee about file preservation there at all (unless you pay to people hosting your files specifically) and that cache eviction may happen to your file at any time, so they try to shout all across the internet about how their tech is so reliable and great, to drown out any dissent. Scary and important-looking headlines like "Something In Space" help to instill superficial respect in the tech.


Maybe I missed something obvious, but does the IPFS project itself say anywhere that it does "file preservation"?

The only thing I can find from official resources is documentation saying IPFS does not do that at all. Example from https://docs.ipfs.tech/concepts/what-is-ipfs/#what-ipfs-isn-...

> What IPFS isn't

> IPFS is not ... A storage provider: While there are storage providers built with IPFS support (typically known as pinning services), IPFS itself is a protocol, not a provider


The only times I've seen IPFS brought up in internet discussions were by people who claimed that their token startup would guarantee preservation of some data via IPFS. This was also a common argument in comparisons to commercial cloud storage.

Reading your link, I don't quite understand how they can claim IPFS is both a protocol AND a network of nodes AND not storage. For example, TCP is a protocol, but TCP is not a network of nodes, even if there is a network of nodes using TCP to communicate.

I think this is cheap wordplay to play-pretend at distributed file storage while also absolving themselves of any responsibility in case something goes wrong: "IPFS is only a protocol, you were probably holding it wrong".


> The only times I've seen IPFS brought up in internet discussions were by people who claimed that their token startup

Sounds like those people either don't understand what they're saying, or they're mindlessly shilling for Filecoin. Both are obviously shitty, but I don't think it's fair to paint the team behind IPFS in a bad light because of what others say about it.

> I don't quite understand how they can claim IPFS is both a protocol AND a network of nodes AND not storage

IPFS is a protocol, IPFS is also the name of the network. IPFS doesn't automatically distribute data, hence it's not "storage", but a protocol for doing storage.

In the analogy to TCP: TCP/HTTP are the most common protocols, and the most common network is what we call "the Internet". Initially, there were many competing networks, until eventually the Internet overtook them all and essentially "won".

(edit: again, a gross over-simplification of the history of the internet and more; hopefully it won't be too misleading. A better overview can be found here: https://en.wikipedia.org/wiki/History_of_the_Internet)


libgen uses ipfs. books on ipfs are a great idea.


http is not a website. but you can use it to serve websites.

ipfs is not a fileserver. but you can use it to host files.


Here's a secret: IPFS is not a data storage protocol. It's a data routing protocol.

Start thinking that way and it will take you to a whole new universe.


So it is the name of the protocol and not the name of the network using this protocol, right?


I uploaded files to IPFS about 3 years ago, and somehow they're still there.


Unfortunately I have to repeat what I just wrote word for word:

"there is no GUARANTEE about file preservation there at all (unless you pay to people hosting your files specifically)"

Sure, your files are accidentally still present in the cache. Good for you. The problem is deceiving other people who may think preservation is somehow space-magically, interplanetarily guaranteed.


No reason to repeat yourself. I was surprised that they were still up. I knew very well they were likely to disappear; I assumed they would be gone by now.


Worth noting that content-addressing is not meant to help make the data itself permanent, but it does make the identifier of the content permanent.

It doesn't guarantee that the content the ID resolves to is available, but once you have a content-addressable ID, you can be sure you'll be able to get the data as long as the data is available somewhere on the network.
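A minimal Python sketch of that guarantee: the client re-hashes whatever any peer returns, so it can accept bytes from completely untrusted sources (the `peers` list of callables below is a stand-in for real network fetches):

```python
import hashlib

def fetch_verified(content_id: str, peers) -> bytes:
    # Any peer may serve the bytes: we re-hash whatever we receive and
    # reject anything that doesn't match the requested ID, so no peer
    # needs to be trusted.
    for fetch in peers:
        data = fetch()
        if hashlib.sha256(data).hexdigest() == content_id:
            return data
    raise LookupError("content not available on any reachable peer")

wanted = hashlib.sha256(b"hello").hexdigest()
# The first peer serves tampered data; the verified fetch skips it.
assert fetch_verified(wanted, [lambda: b"tampered", lambda: b"hello"]) == b"hello"
```

The `LookupError` branch is the "not available anywhere" case: the ID stays valid forever, but nothing forces the data to stay online.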


It's only "never going down" if the content is actually present on such nodes. It's not.


It's just a router node?


Then what good is it?


I wasn't being sarcastic, I was genuinely asking if it's a storage node or a routing node.


You're asking why run this service in space, but this service is a reliable, scalable data store. Ask instead: what kinds of data would you want in space, and why?

The throughput probably won't be great, but mainly if we assume 1:1 rather than broadcast connections! Access to a given satellite will be roving, not continual - a fact of life until there are thousands of other satellites also serving IPFS. Costs will be high.

But still, there are wins. So far, satellite reliability is fairly high; there haven't been natural or human-made disasters afflicting many satellites. In some conditions the roving nature of the satellite could be a boon; you can get information out to a lot of people.

Wikipedia, news, important events... those would all benefit from a guaranteed-recurring-availability model, perhaps. This would be an interesting tamper-resistant way to store something like votes, if it could be vetted. If there's satellite-to-satellite communication, the ability to have an expanding archive of crucial orbital information is useful; the swarm can grow and update vital behavior with IPFS in interesting ways.

Rather than assessing this just on whether it will make a good IPFS node as IPFS is typically used (high-bandwidth, high-storage nodes), I think it's worth considering it on the merits of what this technology offers that is distinct. Pulling out a single ruler to measure everything isn't always the best way to judge; indeed, I think we miss a lot when we apply only our current set of expectations to new ideas and encounters. I'd encourage a more liberal consideration. Although I agree I'm not sure what the killer use is either, I want to see thinking that pioneers what could work and explores what values are on offer, even if they're not the same old values as the incumbent (fiber optics on the ground).


The kings of marketing and buzz-words at it again trying to re-spin CDNs as their invention. IPFS doesn't solve persistence of data; doesn't solve churn in p2p systems; doesn't actually 'store' anything. Sounds cool though! Space and shit, some-thing, something 'decentralized' --hand waving-- 'space', umm, can I have money now? I'm doing it.


It's all about the Filecoin. The pumpers are lying to themselves about how it will change humanity and spreading their preachings on social media. Some actually know it's BS, but they know plenty of others will buy it up.


> CDNs

If you think IPFS is trying to "re-spin CDNs as their invention", I'm pretty sure you misunderstand what IPFS is. The homepage is a great starting point if you're curious rather than antagonistic: https://ipfs.tech/

> IPFS doesn't solve persistence of data

I don't think it claims to solve this either? What it does claim to solve is the persistence of identifiers of data.

> doesn't solve churn in p2p systems

What P2P system has ever done so or even claimed to have done so?


Cheers, read the paper 10+ years ago. Nothing new or interesting in that time. Very impressive restatements of old ideas introduced by other people in typical pseudo-academic Protocol Labs style. Protocol Labs is famous for reinventing wheels poorly. It's like they look at dated papers and say: 'what can we do to sell this basic shit as our own.' Then they end up with horrible versions of STUN, TURN, hole-punching, DHTs, torrenting, commit-reveal schemes, hash-locks, and other technologies (((that are already well-known and ah... exist?)))

You might be thinking that I'm making this up to troll. But the funny thing is so few people have any idea about this area of technology a company like Protocol Labs can get away with being completely insular, mediocre, unimaginative, impractical, and actually full of shit, and no one will notice. In fact, I'm routinely reminded what a 'visionary' the founder is. Despite nothing of value ever coming out of the company. But what I've learned is if enough people believe the lie it might as well be true.

Now give me money! Space, space, blockchain, space!


You sound like you're having a grand time already with some personal beef with Protocol Labs/the founder, so I won't get in your way :) Enjoy


To be fair, they're "innovating" by combining a CDN with cryptocurrency.

Typical "web3" though, solving problems nobody actually has.


I don't think IPFS is a CDN in the sense I know it as, e.g. Akamai. Isn't it meant to be run by end-users instead of a central company?


Huge achievement. A big, big stepping stone toward interplanetary communication.

But while I love the idea of IPFS, it comes with a bunch of tradeoffs that I think make it very unlikely that it'll ever become mainstream.

What I do think will happen in space is similar to what already happens with PoPs around the world, no IPFS required. As an example: I believe that there will be a YouTube on the moon and a YouTube on Mars, with YouTube servers caching most of the content on both, and it'll all work the same. Cross-planet communication will be high latency, but it won't matter.


> What I do think will happen in space, though, is similar to what already happens with PoPs around the world. As an example: I believe that there will be a YouTube on the moon and YouTube on Mars, and YouTube servers caching most of the content on both, and it'll all work the same. Cross-planet communication will be high latency, but it won't matter.

Importantly, this is enabled by content addressable storage in IPFS, where the address of the data is derived deterministically with cryptographic hashing. Agree it likely won't ever become super popular, but it is a core component of distributed storage systems without central orchestration.

https://en.wikipedia.org/wiki/Content-addressable_storage


> Importantly, this is enabled by content addressable storage in IPFS

Can't it be enabled in all sorts of ways? E.g. edge caching, the way YouTube works today?


Coordination, namespacing, and addressing are the challenges with traditional addressing and caching. It's why BitTorrent is so good at distribution: it is uncoordinated edge distribution, and IPFS is just a bit more formalized. Subresource integrity demonstrates why the cryptography is needed.
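For instance, the digest a page pins via Subresource Integrity can be computed with stdlib hashing alone (the `cdn.example` URL in the comment below is made up for illustration):

```python
import base64
import hashlib

def sri_hash(data: bytes) -> str:
    # Subresource Integrity lets a page pin exact bytes: the integrity
    # attribute carries a base64-encoded digest that the browser checks
    # against whatever the (possibly third-party) CDN actually served.
    digest = hashlib.sha384(data).digest()
    return "sha384-" + base64.b64encode(digest).decode()

script = b"console.log('hi');"
integrity = sri_hash(script)
# Usage in HTML (cdn.example is hypothetical):
# <script src="https://cdn.example/app.js" integrity="sha384-..."></script>
# The browser recomputes the hash of the fetched bytes and refuses to
# run the script if it differs.
assert sri_hash(script) == integrity
```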


I thought BitTorrent's main strength (although that is one) is that it allows dynamic routing through lots of nodes, so you aren't pulling from a central cache but from all sorts of clients. That was particularly useful in a world of asymmetric upload/download speeds; many people could upload to you at once, enabling you to saturate your downlink.

That wouldn't be the case on the moon I would have thought, as you won't have a huge network of intermediate nodes to draw on. You probably just have one or two links, which you can push content through en masse.

Tl;dr I don't see why what you're saying applies in this scenario.


And you're gonna have 3 people watching YouTube in space after spending millions to send them there?


For your example, sending one copy to space is better than sending three copies.


Not seeing why Starlink can't just put up a hard drive and serve stuff.


You'll need a way to find which Starlink satellite has the drive with the data you need to access.

Content-based indexing (as in IPFS) handles that. Plus, the data gets cached on the Starlink node you locally accessed, in your example.

It might help to think of IPFS as a public CDN operating on an open protocol specification with an open-source implementation.


Is this a joke? If anything they would use a centralized service and not have to install a wallet to communicate.


I think you might've misunderstood what I was saying. I absolutely agree with you. I don't think it will use IPFS.

I'm saying in a hypothetical future in which we have colonised other planets, tech companies will just plop caching servers on Mars and the moon, completely obviating the need for IPFS or anything special.

(edited original comment a little to clarify)


One of the most unusual home pages I've seen in a long time. Not sure if I like it or if I'm just conditioned to expect a certain style; kudos to them for being different, though.


I think the typography can be improved a little, but overall I like it quite a bit. It's refreshing to come across something that isn't just the zillionth iteration of your bog standard home/landing page.

*One other thing I noticed is that the "Resources" section doesn't have the gradient border edges on it; I don't know whether that's intentional.


I really appreciated the interactive video-style homepage on https://filecoin.io/. I thought it was a lot cooler than the standard self-congratulatory graphs / bullet points that I usually skim over.

EDIT: However, it is strange that the foundation has a totally different site.


Sure space is cool (or maybe not, but don't want to upset the HN consensus), but this seems like a total gimmick and waste of money, time, and PR.


And I thought I was reading Hacker news, not "only viable businesses" news. Sigh...


Based on a skim of the article: they uploaded, ran, and tested their code on a cubesat running "Lockheed Martin’s SmartSat technology", so it wasn't that expensive. My initial thought was that cubesats are cheap anyway, but they don't seem to have even launched one themselves.

Gimmick, sure, but it's an important milestone and costs less than a series of TV ads.


Except it's true.

I don't see a use case for this other than speculation.

I would even go as far as to say that this 'side project' was done to try and pump the price of filecoin so that people can take profit off of it.


> I don't see a use case for this other than speculation.

The other use cases are hinted to in the name of the project...


> waste of money, time, and PR

That's exactly what cryptomoney has been about for years now, so it fits right in.


Filecoin is probably one of the few cryptocurrencies with intrinsic value, even if the amount is debatable. IPFS has stood the test of time and seems like a good protocol; and a cryptocurrency that can be used to pay for storage is not valueless.


Has it? I have never seen anybody using IPFS in the wild; even projects for which it should be well suited (archive.org, Linux package distribution, Git, Lemmy, Imgur, CivitAI) don't use it. Worse yet, IPFS still provides no real way to deal with private or local data, which drastically limits its use.

I love the idea about data being addressable by hash, but I don't feel IPFS has actually delivered anything meaningful in that area yet.

And with ipfs-search.com shutdown, there is not even any way left to explore what is actually on the network now.

And technical issues aside, there is also the legal problem that IPFS conflicts with copyright. Redistributing anything is illegal by default unless somebody gives you permission, and IPFS provides no means to track that permission. You can't attach a GPL or an author to a hash.


> Has it? I have never seen anybody using IPFS in the wild

Every time someone downloads a book from a shadow library like Library Genesis, it's through IPFS most of the time, often via a gateway such as Cloudflare's, so you don't even notice it's using IPFS. These shadow libraries have millions of users per day, especially academics.


I’ve recently observed that both IPFS and the Cloudflare gateways are extremely unreliable on Libgen.


That's beside the point. The point being made was that IPFS is being used in production.

You bring up a valid point though. From my experience, IPFS suffers from a tragedy-of-the-commons issue. Its performance is generally terrible because the free providers are overloaded.


Not really:

> Every time someone downloads a book from a shadow library like Library Genesis, it's through IPFS most of the time

I would like to see evidence this is actually true. My theory is that IPFS's "success" in hosting Libgen is roughly turning ICO proceeds into free hosting. Paying for dedicated servers might have been more cost efficient and, given the performance issues, more robust!


100%, and same for Standard Template Construct. There are basically no reliable IPFS gateways with book or paper content. They all get struck from the more reliable gateways immediately.


Unfortunate behavior in an apocalypse-proof distributed filesystem.


So what we already knew, IPFS is a slightly better torrent.


It's a significantly worse BitTorrent. BitTorrent is fantastic.


Wouldn't all those examples also work just as easily with torrents/magnet-links? I think in all those cases a central server distribution model has unfortunately been "good enough" for the majority of users (even though their data is mined and ads are injected)

When it comes to FOSS, I personally don't understand why something like a package manager isn't P2P by default. It feels very aligned with the hacker culture - a la "A Declaration of the Independence of Cyberspace". Virtually nobody uses it, so the solutions are half-baked, clunky, and not integrated with everyday workflows (e.g., browsers don't support anything P2P). Something like libcurl can't pull a torrent from the web.


> Wouldn't all those examples also work just as easily with torrents/magnet-links?

My (possibly wrong) feeling is that IPFS is meant for smaller files, and torrents for bigger collections. So with the example of a package manager, you could download each individual package via IPFS, whereas a torrent would make sense if you wanted to download an archive of them all.

Not to say torrents couldn't work, but it just kinda feels like not the intended use case.

(The above feeling comes from reading about IPFS a few times throughout the years and toying with it for a while.)


There have been attempts at doing torrent-based package distribution; see the notes of Debian testing it here: https://wiki.debian.org/DebTorrent. There are basically two problems:

Packages are a lot of small files, and not everyone is interested in all of them. Torrents are good for large content that everyone wants. The two are in opposition and there needs to be a balance: do you make One Unique Torrent with everything, where peers then have to individually ask each and every other peer whether they have the content they're looking for, making it unscalable? Or do you make one torrent per package, meaning peers have to track thousands of torrents? (That's the direction IPFS took, which is why it's extremely inefficient and resource-heavy.) DebTorrent made the choice to go down the middle.

The real problem, to me, is that content addressing is the anti-privacy protocol: since you ask everyone "do you have this content?", you're basically telling everyone what you want. In the case of installing packages, this means telling everyone what versions your packages are at, which is potentially a security issue.

But the real "problem" is that it fails to be much better than the existing solution. Bandwidth is cheap, storage is cheap, so hosting everything on a few independent but centralized servers is good enough.


P2P works better for ISO-based distros with full offline backups, or maybe LTS-based distros with small repos like Hyperbola. Having a full repo of Trisquel with continuous updates would not work well.


Oh right, that's a very good point. I've only used LTS for years. Forgot about rolling releases haha


I was working in the same building as the team from Protocol Labs. I talked with a couple of devs and it seems like they never even considered that most people who would be interested in running a node would like to have control over who gets access to the pinned files. I think I opened a ticket asking for an ACL system, but it got closed.

I really had high hopes for it, but I realized that all I really want is an object storage with content addressable urls.


We've been pioneering privacy and access control on top of IPFS since 2015 in Peergos [0].

We have block level access control for example: https://peergos.org/posts/bats

As well as E2EE, sharing, trustless servers and more [1].

[0] https://github.com/peergos/peergos

[1] https://book.peergos.org


Ok, that is really cool. I worry though that this is too tied into the whole peergos architecture? Can you give some pointers on what I would need to do to have a node with the extended protocol and nothing else?


The block auth is very generic: just an extra top-level key in CBOR maps or a small prefix in raw blocks. The protocol is S3 V4 signatures.

There is a Java implementation of the modified bitswap for this in Nabu [0] and a Go one in ipfs-nucleus [1]. With both of these you can use whatever auth verification protocol you like.

[0] https://github.com/peergos/nabu

[1] https://github.com/peergos/ipfs-nucleus/


Hi, PL-funded founder here.

This was one of the things that made IPFS a non-starter for us. We ended up grafting Hashicorp Vault into kubo (the go-ipfs implementation) so that we could use IPFS and have things like deletes and access revocation that actually work.


ACLs are fundamentally incompatible with blockchains. Your only solution is to encrypt those files, and have off-chain key management if you ever want to revoke access.


ACLs are compatible with some blockchains. Token gating, merkle proofs, and RBAC are fairly common in smart contracts. See: https://docs.openzeppelin.com/contracts/2.x/access-control#r...


In terms of interacting with smart contracts, yes. In terms of accessing data, no. All data on a blockchain is public and immutable, and that's the point.


IPFS and distributed filesystems have nothing to do with blockchains.


IPFS is entwined with (and pretty much 100% backed by) Filecoin, which is a blockchain. Introducing a point of incompatibility would be a nonstarter.


Nothing is "entwined". I can download kubo and run it without even knowing what Filecoin is. In fact, every user of Brave browser does it. Same thing for projects like OrbitDB. Same thing for customers of Pinata.


I can download F2FS and run it on my RAM. That doesn't mean that F2FS isn't entwined with NVMe SSDs.

In fact, while F2FS is a ludicrous example, this happens regularly with filesystems. The core design of filesystems is still built assuming that a block device will be behind it. IPFS is the same way, just s/block device/blockchain/.


I'm sorry, I really have no idea where you are trying to go with this.

- You can have files on IPFS and no blockchain whatsoever "behind it".

- IPFS itself makes no use of blockchains as an abstraction for data storage or file organization

- If all the blockchains networks and nodes disappeared overnight, no IPFS nodes would be affected.

I don't know if you actually used any of that or are just going by hearsay.


Filecoin, the thing funding IPFS, certainly does :)


Sure, but IPFS, the technology, doesn’t.


But an nginx HTTP bridge with a copyright banlist is OK…


IPFS does not in any way involve a blockchain.

There's a separate "layer on top of IPFS" called Filecoin that does involve a blockchain. (From a technical point of view, it's not really a layer on top, more of a re-implementation of IPFS with a blockchain.)

I've worked on IPFS and know many of the key people at PL. They really are not blockchain people or cryptobros. They are much different from anyone else in crypto I have encountered. But ultimately they have to fund their activities somehow, and this is the reality of where the business potential is in P2P circa mid-2020s.


i used to work at PL, surprised you got that response (don't know who you talked to). Meanwhile, folks have been building things like that:

https://guide.fission.codes/developers/webnative/file-system...


IPFS also has issues with large datasets; anything larger than a few TBs causes nodes to explode, and indexing takes forever.


Also the lookup time for files is too slow to use it on anything user facing.


> Linux package distribution

FWIW it seems like NixOS at least tried or is trying: https://blog.ipfs.tech/2020-09-08-nix-ipfs-milestone-1/



> I have never seen anybody using IPFS in the wild, even projects for which it should be well suited (archive.org, Linux package distribution, Git, Lemmy, Imgur, CivitAI), don't use it.

Netflix uses IPFS. [0]

[0] https://blog.ipfs.tech/2020-02-14-improved-bitswap-for-conta...


> there is also the legal problem that IPFS conflicts with copyright. Redistributing anything is illegal by default

I think you might misunderstand how IPFS works, or possibly confuse it with Freenet. When you add content to IPFS, nothing is automatically distributed unless someone happens to know the hash of the content you've added locally. It's not until someone starts to explicitly request data from you that you start sharing anything.


Everything in your local cache is redistributed, meaning every single website you view through IPFS ends up getting potentially redistributed. Can you guarantee that each and every item you view on the Web allows redistribution?


Perhaps it should use the fact that any file A can be written as the XOR of two files B and C, where B and C look like complete noise. If you only host B and another person only hosts C, then there can be no copyright infringements, while you can still reconstruct A by simply XORing the other files together. Of course at the expense of having to store and transfer twice the amount of data.


This is engineer logic. This kind of thing would never fly in court. If you want an example, go back and look at what happened to Aereo back in 2014. They came up with a clever hack to work around broadcast TV copyright. It went to the U.S. Supreme Court and got shot down.


I don't think so.

Assume A is the copyrighted work. Now construct a file of random bits and call it B. Then, assuming A=B^C, you can solve C=A^B (where ^ is the XOR operator). Both B and C appear to be completely random.

Assume you are sued because C is found on your harddrive and if you XOR it with B (found on someone else's harddrive) you get copyrighted work A.

To construct your defense, take a work in the public domain, and call it P. Compute D=C^P. Now tell the judge that there is a file D on a friend's harddrive, that allows you to reconstruct P.

Alternatively, tell the judge that you are researching random numbers and C is just a file you work with. It is not illegal to have random numbers (bits) on your harddrive.
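For what it's worth, the XOR construction itself is trivially one-time-pad arithmetic, so either share on its own is information-theoretically independent of A (whatever a court would make of that). A minimal sketch, with illustrative names:

```python
import os

def split(a: bytes) -> tuple[bytes, bytes]:
    """Split A into two noise-looking shares B and C with A = B ^ C."""
    b = os.urandom(len(a))                   # B is pure random noise
    c = bytes(x ^ y for x, y in zip(a, b))   # C = A ^ B also looks random
    return b, c

def combine(b: bytes, c: bytes) -> bytes:
    """Reconstruct A = B ^ C."""
    return bytes(x ^ y for x, y in zip(b, c))

a = b"some copyrighted work"
b_part, c_part = split(a)
assert combine(b_part, c_part) == a

# The "defense" from the comment above: for any public-domain work P of
# the same length there exists a D = C ^ P, so C "encodes" P just as
# well as it encodes A.
p = b"a public domain work."   # same length as `a` for simplicity
d = bytes(x ^ y for x, y in zip(c_part, p))
assert combine(c_part, d) == p
```

The last assertion is the technical core of the argument: given only C, every equal-length plaintext is an equally valid "decryption" under some key, which is exactly the one-time-pad property.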


I see it pretty differently.

I mean, the XOR thing is a bad idea, because obviously at least one of those files is violating copyright. Trying to feign innocence is not engineer logic, it's failed logic.

But Aereo I'm still mad about. They saw that distribution is what cable companies have to pay lots of money for, so they removed distribution. It should have been fine.

The effect of the decision is that it's legal for a person to set up a remote antenna for personal use, but it's a copyright infringement to pay a company to set it up for you. Or, perhaps, that it's possible to pay a company to do so but only if it's your idea as a one-off. Either way, that doesn't make any sense as a copyright issue.

And after the supreme court decision, they tried to pivot to legally being a cable service... only to get denied recognition as one!


I'm sort of mad about Aereo too. My view is the same as yours. It's unfortunate that it got shut down.

My point in the GP comment was more that if the entire justification for your technical architecture is to make an end run around some law, you have to consider the possibility that a judge is going to see what you're trying to do and may have a different interpretation. There's a big human element in the law that I think engineers, especially "code as law" blockchain people, often overlook or misunderstand.

Maybe in the future we'll have AI judges and AI lawyers and your case will be decided in 100 milliseconds. However, that's not what we have today.


> There's a big human element in the law that I think engineers, especially "code as law" blockchain people, often overlook or misunderstand.

The point with the XORing is that you can't blame one person for the infringement.

Imagine twin brothers. One of them steals a painting and is caught on camera. The judge knows that one of the twins is guilty. However, he can't convict either of them exactly because of the human element in the law.


I don't know how it works legally, but if you publish your own content to a distributed protocol like IPFS you should expect it to be distributed. If content that shouldn't be redistributed is put on the public IPFS, it should be the fault of the person who added it, imo.


The person who added it is not tracked. A file can be added by anybody, at any time, anywhere, and it will get the same content address. It's up to the person doing the redistribution to ensure it's legal, but IPFS gives them no tools to do so.


libgen uses it


In practice libgen uses torrents. The available seed nodes for IPFS are... few. This is largely due to the software/protocol being pretty bad compared to battle tested torrents.


Perhaps if you're downloading full dumps, but distribution of individual books is often done via IPFS.


Running IPFS at scale is horrible. Try to download a few dozen TBs of small files. Its garbage collection is rubbish (ended up nuking the ZFS dataset every couple of days instead), it is very CPU and IOPS hungry, and it has bad network throttling support.

I would claim it has failed the test of time as it has very little adoption.


Why haven't torrents moved to IPFS already? That looks more like DOA than standing the test of time.

From my experience it's "heavy tech"/a resource hog that appeals neither to developers nor to end users (it has no killer app).


I would like to add that IPFS pretty much doesn't run on spinning rust or slow CPUs. A Pi or other low-end box can easily run torrents with an external hard drive; IPFS can't download large files at a reasonable speed on slow hardware.


Isn't IPFS pretty much the same tech as torrents with magnet links? I'm not sure there's any benefit.

I think the challenge with torrents is maintaining communities of seeders without getting taken down, but I don't think IPFS really helps with that.


Are they interoperable with torrents yet?


Because torrents work, so they don't need to switch to IPFS.


So is Storj and Sia.

It raises the question, though: I can pay Storj S3 with STORJ tokens, and I can also pay for renterd space with Sia tokens, and I can also pay for IPFS with Filecoin, and all of them still work with good old credit cards. What "intrinsic" value is there for any of these tokens, if they can only be used on their own internal economies?

And before the "but permissionless, so I can pay with crypto!", why not just use DAI?


I’ve earned Storj for years by renting excess hard drive space to store other people’s files. Having a native currency lets you innovate in some unique ways.

Microtransactions are not practical with credit cards and other traditional settlement methods.

The reasons not to use something like DAI are fundraising (unfortunately - crypto VCs really like seeing a token), network bloat/decentralization, and leadership risk. Your own currency is less likely to collapse because of other people's bad decisions.


I also run my own Storj node. It took at least one year to pay back the hard drive. Now that they are cutting down the payout for hosting, I am receiving barely enough to justify the power bill.

No one cares about microtransactions, and I've yet to see a customer who likes a product or service but refuses to make a pre-payment of $10.

None of the reasons you mentioned for a token are beneficial to the end user.


How much is the power bill, and how many watts are you using?

Barely paying for power does sound like too little, but enough money to pay back hardware cost in a year is definitely overkill.


I went to look, and I was wrong on both extremes. I've allocated 2TB. It actually took me two years to make ~$125, when I bought disks at ~$60/TB.

For the past 6 months, I've received an average of 5.8 STORJ per month, which at today's rate amounts to $4/month, but if I look at the price at the time I received them it would be more like $2.2/month.

I don't know the exact wattage, but given that it's an oldish Celeron and that I'm based in Germany, I'm guessing that running that server + disks takes ~25W, which means ~18kWh per month. Given how energy prices went up last year (close to €0.30 per kWh), I am actually losing money by keeping this server on.


Okay, so you're in an especially bad situation for terabytes per watt.

If I was buying hard drives right now, I'd expect 2x14TB for about $500 including tax. If that pulled in anything close to $2/TB/month then $6 of electricity would not be a big deal.

Hmm, I just built a NAS, maybe I should rent out some of that space.


> I’ve earned Storj for years

That makes sense, but then what can you do with Storj?


You can turn it back into storage, and you can sell it to people that want storage. I feel like that's good enough to establish value.


But is there any other value than storage? If I have excess storage to rent to people, I get coins I can use to rent storage?


You can sell it for cash money.

The important thing is not that coins can be exchanged for a product or service that you want, it's that they can be exchanged for a product or service that many people want. And you want that trade to be the basis of the price, rather than speculation.


I'm all for Storj (or Sia, or Filecoin) managing to create an alternative to AWS/GCP/Azure, but even if they succeed, they are never going to be able to operate at a lower cost than the existing commodity offerings.

I recently set up a Minio cluster with 80TB (usable, 120TB total) for ~150€/month. Storj is cheap (~3.5€/TB/month), but there is no way that Storj can ever be that cheap. And we are not even talking about egress fees.

It's difficult to see them truly disrupting the market when their systems have so much overhead by design and so many middlemen wanting a cut. The moment this model starts to become a threat to the big cloud providers, they will slash their prices and then any advantage will be gone.

The best I see for these decentralized storage systems is to work as a tit-for-tat backup, maybe?


Most people don't want to make their own clusters, or don't have enough data for it to be a good price. There's plenty of room for a service that's twice as expensive as commodity prices.

> The moment that this model starts to become a threat to the big cloud providers, they will slash their prices

I doubt it's ever going to be a big threat, since this kind of storage is never going to have great latency. I don't think that's an existential risk. Though even if it is, and happens, there's billions of dollars to make before we reach that point. It's not a reason to avoid the market.


> Most people don't want to make their own clusters, or don't have enough data for it to be a good price.

Yeah, these people will pay for iCloud, Google Drive, Dropbox... Or maybe if they are a bit more technical they will look into Backblaze B2.

> there's billions of dollars to make before we reach that point. It's not a reason to avoid the market.

I want to agree with you, but honestly this feels more like a situation where all the players will be investing a bunch of time and money in the hopes of becoming a winner and they will all end up finding an ever shrinking spread.

To illustrate the point: all these projects have already raised billions of dollars, but there is a very low chance that any of them will ever recoup that money. It's not a problem for them because they are all playing with someone else's money, but globally this has been nothing but a waste of time and resources.


Afaik there was some work to make it possible to pay for Sia storage in the new renterd node with any crypto asset you could make a payment channel with (so, most of them including Dai), but I don't see that in the readme anymore: https://github.com/SiaFoundation/renterd


I guess if you are inside the crypto ecosystem, it's easier to pay nodes via tokens.


It's not. They pay out via zkSync (a layer-2 system) to avoid fees, but they only accept payment on the main chain.

So if you want to cash out or use the tokens you receive, you need to eat the fees. It's ridiculous and a clear mechanism for them to restrict the circulating supply.


It's an evolving ecosystem. Rollups are fairly new, so I imagine fully migrating to L2 is a work in progress.


What I mean is that they could do things on their business side, off-chain, completely independent of the "protocol" or smart contracts. If they already use a "traditional" payment processor to accept credit cards, why couldn't they add zkSync as a payment option on Tardigrade?

It's really not hard - I've done a PoC when I was working on https://hub20.io - but it doesn't help with their "tokenomics", so they just ignore it.


On paper, yes. In practice, a lot of these crypto coins are dominated by get-rich-quick types that treat the bit that actually generates value as an afterthought. I've not seen much evidence that Filecoin is any different.

The benchmark for success here would be participants being able to earn meaningful amounts of Filecoin simply by hosting IPFS data. Is that even remotely profitable at this point?

IPFS as a technology is fine. It seems to work well enough, though there are some resilience and scaling challenges. Your data basically disappears unless you ensure it doesn't, which is where Filecoin and other incentives come in. Basically, it requires you to pay for someone to host your content, because others won't unless they happen to have downloaded your content, in which case it may linger on their drive for a while.

My guess is that the whole Filecoin thing is scaring away a lot of enterprise users though.

What it boils down to is that things like S3 and other storage solutions are pretty robust, and affordable as well if you are going to pay for hosting the files anyway. And probably a lot easier to deal with. So most companies might look at it and then use something like S3. The whole business of having to buy some funny coins on a dodgy website is a bit of a nonstarter in most companies.


IIRC they subsidize hosting by ~8x (i.e., when someone pays $1 in filecoin to pin content, filecoin pays out $8 to the people actually running nodes; otherwise the economic incentive to run nodes isn't there - just another VC scheme of selling dollars for a dime to brag about growth).

source was personal communication, sorry


Is there actually a way to use filecoin to pay for IPFS pinning yet? Last time I checked a couple years ago Filecoin was worth more than Disney and the actual use case was vaporware.


You're almost there, soon you'll realize digital assets in general are important for aligning incentives.


Filecoin, Akash, DVPN, Jackal, Helium are all cryptocurrency projects with real products/use cases outside of the circular defi economy/number go up technology.


Helium?

The company that lied about partnerships, structured itself as a way for insiders to cash out on tokens early, then failed to attract any business so pivoted to a new tech and a new token to do it all again?

Helium is a failure and a joke.

https://www.forbes.com/sites/sarahemerson/2022/09/23/helium-...


I don't know anything about their business model, but I do know that they have a real product


They sent out real boxes. The full package being sold to box-buyers, the reason they bought the boxes in the first place, did not exist.


There is also Althea for network infrastructure/connectivity.


It has the same intrinsic value as the other cryptocurrencies: illegal activity (in this case, piracy).


I don't agree that Filecoin does anything to enable piracy. If your goal is to pirate software and movies, your needs were already (and still are) adequately met with BitTorrent and there's no need for a blockchain token. Tokens would just be an expensive distraction to someone like that.


How was your experience using filecoin? Were you able to provision storage?


The value of crypto is in the network (a.k.a. the market).


Sounds like bullshit to me.


Ignoring the question if it is bullshit or not, what do you personally gain from posting comments like this? Wouldn't it be more interesting for everyone involved (including yourself) if you actually share the arguments against the parent comment, if you have any?


To keep people from investing in cryptocurrency. It is a waste of resources and provides little to no return. It does not help to decentralize or stabilize the economy, although to be fair, not many fiat currencies do either.

Anyone with two or more brain cells realizes that this sort of stuff is a scam.

The news is publicity to stir up investors.


Just some perspective:

Price (market cap): ~3B

Annualized revenue based on the last 30 days: 3M.

Intrinsic value based on perpetual DCF assuming 5% interest rate: 60M.


1. You're trying to assign utility value to it based on earnings, which is flawed

2. You're valuing it like a company, which is also flawed


100% agree. But you are shooting the messenger; I just translated the OP's message. Applying "intrinsic value" to cryptocurrencies is flawed.


> Filecoin is probably one of the few cryptocurrencies with intrinsic value,

None of these coins have any intrinsic value.

There are thousands of people who sat on a bench and thought, "I want to create a ponzi coin." That doesn't mean they create value.

Crypto is a net negative phenomenon.


I'm pretty cynical of crypto, but you're confusing what something can be used for with what it can only be used for.

Any coin, including USD, has no intrinsic value. Its value is in what people can buy with it.


You cannot pay taxes or do anything legitimate with crypto.

Sounds pretty useless to me.


Have fun dying on that hill. 30 years ago you would have been saying the internet is worthless.


Crypto isn't internet. At best, crypto is Juicero


Within 20 years Ethereum will be used as a global coordination and settlement layer, and is well on its way to achieving that.

But whatever helps you sleep better at night.


> Within 20 years Ethereum will be used as a global coordination and settlement layer

Of course it won't

> and is well on its way to achieving that.

Of course it isn't

> But whatever helps you sleep better at night.

Well, we all know what lullabies you sing to yourself


The article says there are 3 advantages of IPFS -- speed, data verification, and data resilience. HTTPS/curl can provide the latter two. Does someone know if they published speed numbers of IPFS vs HTTPS/curl in the satellite environment?


Maybe initiatives like this will finally help the transition to IPv6. We're out of address space here on Earth, and now we're taking TCP/IP to space.

If we aren't wasteful with the IPv6 space, it'll suffice for all inter-planetary and inter-galactic communication. Considering 1 trillion stars per galaxy and 1 trillion galaxies in the universe, we're safe with more than 340 trillion addresses per star.

We would be able to address between 1/3 to 1/2 of all atoms in the universe with IPv6.

Now that's a safety margin!


> We would be able to address between 1/3 to 1/2 of all atoms in the universe with IPv6.

There are around 10^38 IPv6 addresses and 10^80 atoms in the observable universe. It's not even close.
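The arithmetic behind this correction is easy to check (using the usual order-of-magnitude estimates, not exact counts):

```python
ipv6_addresses = 2 ** 128   # total IPv6 address space
atoms_estimate = 10 ** 80   # common estimate for atoms in the observable universe

# 2^128 is about 3.4e38, i.e. ~10^38 addresses.
print(f"{ipv6_addresses:.2e}")

# The address space falls short of the atom count by a factor of
# roughly 3e41 -- so "1/3 of all atoms" is off by ~41 orders of magnitude.
shortfall = atoms_estimate / ipv6_addresses
assert shortfall > 1e41
```

So IPv6 comfortably covers stars (and far more), but atoms are a different regime entirely.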


Sounds like a perfect way to distribute uncensorable OS models and datasets


I'm sure it will be as successful as the many failed attempts to build uncensorable offshore micronations.


Torrents have been here since the turn of the century


This doesn't really mean anything.

What is the use case of these crypto tokens other than speculation?

Where is the market in space that is asking to use this?


The tokens are used for incentive alignment. Space is used for robustness.


> The tokens are used for incentive alignment.

Or pump and dump scams.


One doesn't exclude the other. Theranos and Enron don't mean all companies are scams.


Yet the majority of crypto companies are scams.


The majority of stocks are scams; they're called penny stocks


Stocks have fundamentals and pay out dividends to shareholders.

Crypto has no fundamentals, pays no dividends, and only resembles ponzi and pyramid schemes.


> Crypto has no fundamentals, do not pay out dividends and only resemble ponzi and pyramid schemes.

Incorrect. Ethereum has a lot of utility, has chain revenue (tx fees), and you can stake your ETH to generate yield.

Based on the end of your statement I think you're letting your bias get in your way.


Could it be to avoid content liability laws? Pirate bay used to run servers from balloons above a certain altitude for this reason.


And like those failed projects, this has the same fatal flaw: you need that offshore or high altitude server to connect to a base station on Earth in order to connect the server to the rest of the Internet. The authorities where those base stations are located don't look kindly on this one weird trick to evade sovereignty.


Do something weird, pump up the price.


It is actually down today. ¯\_(ツ)_/¯


Maybe it's obvious there will be no use for this for a long time; putting that stuff in space is a pure stunt


Agreed 100%


FileCoin strikes me as yet another crypto scam.

On April 1, 2021, “The FileCoin Foundation” announced that they were donating 50,000 FileCoin to The Internet Archive, valued at $10 million. That announcement got a lot of PR for FileCoin and drove its price to its highest value ever - $237 - that same day. It never saw anything near those prices again.

Just a guess, but I bet the creators cashed out big time right after pumping their coin by announcing their generosity by press release.

FileCoin’s current price is $5.85.

A while ago, I contacted The Internet Archive to ask if they were able to sell their donated FileCoin at the price it was trading at when they received it. I didn’t receive a reply.


Well, the caching aspect of IPFS is useful for space, but the transport protocols are way too chatty and latency-sensitive for real usage.


The beauty of IPFS is that the transport protocols are completely modular. They do a pretty good job supporting a lot of variety and separating concerns via https://github.com/libp2p/specs


The beauty of IPFS has been there for six years, and it's almost completely useless


Has anyone tried implementing IPFS on top of the Licklider Transmission Protocol (LTP) or another DTN protocol? Seems to me that's pretty much essential for any "interplanetary" setup.


The too-chatty protocol is Bitswap. Because the Merkle tree is not coupled to the protocol, they developed their own custom protocol on top of UDP for communication.
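The reason the Merkle tree can be decoupled from the transport is that content addressing depends only on the bytes, not on who serves them. A minimal sketch of the idea (real IPFS CIDs add multihash/multicodec prefixes and chunk large files into a Merkle DAG, so this is a simplification):

```python
import hashlib

def content_id(data: bytes) -> str:
    """Derive an ID from the content itself, as content addressing does."""
    return hashlib.sha256(data).hexdigest()

# The same bytes always yield the same ID, regardless of which peer
# or transport delivered them -- so any transport protocol will do,
# as long as the received bytes hash to the requested ID.
assert content_id(b"foo") == content_id(b"foo")
assert content_id(b"foo") != content_id(b"bar")
```

Verifying the hash on receipt is also what makes fetching from untrusted intermediate nodes safe.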


Isn't the QUIC transport the default since v0.6.0? Compared to TCP, it's much better suited for satellite communications.


QUIC is not suited for satellite communications. There are good custom protocols, though, that are FEC-heavy.


I do hear IPFS may be useful with powerful AI (AGI) agents, but I still don't understand how.



