I remember when this came out 20 years ago. It was part of the whole wave of P2P filesharing programs that came out between 1999-2002: Napster, Gnutella, Kazaa, Audiogalaxy, etc. Amazing it's still going on.
BTW, for folks new in tech - it's amazing how influential that wave of programs was, even though they largely failed in the marketplace. Napster founder Sean Parker later became the first investor and first president of Facebook. Also involved in the wave (in a tangential way) was Mark Zuckerberg, whose Synapse Media Player got a $M+ buyout offer from Microsoft while he was still in high school. Uber founder Travis Kalanick's first two projects were Scour (a P2P filesharing app) and Red Swoosh (a P2P CDN). I met the AudioGalaxy founder while working on Google Search - he later went on to become one of the early Waymo engineers. The Kazaa founders later went on to found Skype, and we know where that went. Chord (an academic research project in distributed hash tables) was led by Robert Tappan Morris (originally famous for creating the Internet Worm of 1988), who then went on to co-found Y Combinator, which owns the website you're reading this on. The gossip protocol invented & refined by Gnutella forms the basis for many cryptocurrency P2P protocols like Bitcoin & Ethereum.
There's probably several trillion dollars in market cap attributable to the intellectual descendants of a bunch of nerds who wanted to share stuff over the Internet and fuck over the RIAA, MPAA, and governments.
Kinda funny small-world stuff that Justin Frankel (Winamp + Gnutella) went on to AOL/TimeWarner and got into a feud with an exec, writing a passive-aggressive blog post about him without naming him. The exec was Chamath Palihapitiya.
Justin was and always will be a legend - he wrote an AOL Messenger ad blocker and mp3 search plugin _while working at AOL_.
And Gnutella, too. I still remember hurrying to download it when it came out because I knew that Justin worked at AOL/TimeWarner and it was going to get taken down on Monday as soon as the powers that be realized someone had written a piracy app. Luckily people had mirrored it, and eventually ended up reverse-engineering the protocol.
The interesting thing is that most of those people weren't actually pals when working on P2P stuff, but they were all working on the same problems. Zuckerberg and Parker didn't meet until Facebook was growing quickly. Scour/Red Swoosh, to my knowledge, had no major connection with the other projects.
I view it more like an instance of synchronicity, where a lot of bright young people looked at the world independently and decided that this was the space they wanted to be working on. And then when that space didn't pan out they went their separate ways, but the fact that they were bright & energetic meant that the successor projects became huge.
(I wonder if a similar effect explains the General Magic, Paypal, and Justin.TV mafias.)
Or more likely this is how synchronicity works:
- we have 7B people on earth
- at any point in time, every exciting possibility of the moment is being worked on by someone
- if something succeeds, we look back and marvel at the synchronicity: how is it possible that 10 people worked independently on the same thing? They must all have been guided by the superior intellect they had in common!
...
I'm sure there was great synchronicity among people working on squaring the circle in the 18th century, and among people working on the philosopher's stone earlier, except they were all misguided so we don't talk about them.
When the internet became a thing for the general public, it was a bit of a no-brainer to try and do "X but on the internet" for all X. "Filesystem but on the internet", "diary but on the internet", "cash but on the internet", "tv but on the internet" etc.
They may not have made money, but they certainly succeeded in getting market share. The main reason they don't now (other than Napster being sued out of existence) is that BitTorrent displaced them, and really that should be considered the same class of program.
BitTorrent was also careful to demonstrate that it had substantial non-infringing uses. That's perhaps a lesson to folks who want to challenge the system: seem as innocuous as possible for as long as possible, until you become the system.
Google did this to very good effect: even when I was there the first time (~2010, over a decade after founding) they still had a sterling reputation in the press, while Netscape got crushed by their arrogance (and Microsoft, relatedly) less than 5 years after founding. Microsoft too, for that matter: through the 80s they were seen as an innocuous software publisher, because the hardware was where the money was, and then in the 90s people realized hardware was a commodity and Microsoft was a monopoly.
The IPFS website at the bottom left contains a section "Protocol labs", the "About" link of that links to https://protocol.ai/ which says "We're decentralizing storage with Filecoin".
So their end goal might be making money off that cryptocurrency...
Thanks to the GP for causing me to figure out why the wheel is being re-invented here! :)
Why is adding an incentive mechanism to "pinning" files on IPFS inherently wrong? If you find the right incentives for a tech problem, that pushes the tech forward, right? Like Satoshi Nakamoto putting together the right type of incentive to be able to create an almost unhackable network in Bitcoin (along with Ethereum and others).
I'm not yet saying that Protocol Labs will ultimately find the success they're looking for BUT writing off their efforts as "just a way to make money off of cryptocurrency" I feel is pretty dismissive.
It's not dismissive. They already got their money. $257 million, to be exact. No one knows how Filecoin is going to make money except for Protocol Labs.
"The market price of the integral reward"? What is that supposed to mean? Filecoin is just a medium of exchange for people buying and selling storage services. Filecoin does not offer anything specific and its platform has to compete with plenty of mature, efficient alternatives.
> the value of the coin is driven by the interest in the tech it provides.
What "tech" does it provide? Not IPFS, for sure, that already was under development and could have come to fruition as a regular open source project. A decentralized market for storage, "proof-of-spacetime"? These might be interesting. But again, if we want to fund these things there are surely better/cheaper ways to do it. $250 million is a lot of money for the tech that was developed so far.
More importantly: there is nothing about the "tech" that requires $FIL to make it work. The market dynamics will be the same whether they used FIL, DAI, ETH, wrapped BTC or even a dollar-pegged token they decided to issue. People are not going to pay more or less for their storage because the price of the token went up or down.
> Currently at $22, not bad for a random coin.
These are the kind of statements that almost make me side with the buttcoiners. At this point, with all the speculation, market manipulation and naive FOMO "investing", there is no useful information to be had from the price of the token.
...yes, of course I'm familiar with Filecoin. If that is the explanation for how speculation prompts use of IPFS over Freenet (both of which I adore), then there is no explanation.
They can. Think of it like a torrent file and a webseed. You can freely use torrents to distribute your content, but if everyone who has ever downloaded it goes offline, any new downloaders will be stuck at 0% forever.
The only add-on here is that you can pay some service in Filecoin to keep seeding your torrent forever.
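The availability dynamic above can be sketched as a toy model (the peer names and the "paid pinning service" are purely illustrative, not a real torrent or Filecoin API):

```python
class Swarm:
    """Toy model: a file is downloadable only while someone is seeding it."""

    def __init__(self):
        self.seeders = set()

    def join(self, peer):
        self.seeders.add(peer)

    def leave(self, peer):
        self.seeders.discard(peer)

    def downloadable(self):
        # New downloaders stay at 0% unless at least one seeder is online.
        return len(self.seeders) > 0

swarm = Swarm()
swarm.join("casual-peer-1")
swarm.join("casual-peer-2")
assert swarm.downloadable()

# Everyone who ever downloaded the file eventually goes offline...
swarm.leave("casual-peer-1")
swarm.leave("casual-peer-2")
assert not swarm.downloadable()  # new downloaders stuck at 0% forever

# ...unless some service is paid (e.g. in Filecoin) to keep seeding.
swarm.join("paid-pinning-service")
assert swarm.downloadable()
```

The incentive layer doesn't change the protocol at all; it just makes it someone's job to stay in the seeder set.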
Of course you can. There is no substance to this line of reasoning except that PL publishes both.
It's like saying speculating on py-umbral (a standalone open source project) because we use it to build NuCypher (a blockchain-dependent open source project).
"Tahrir aims to be a distributed, decentralized, scalable, and anonymous "workalike" for services like Twitter, Google Plus, and Facebook. It is at an early stage of development, but is being actively worked on by a number of volunteers."
What are the benefits and disadvantages of Freenet vs IPFS? I am genuinely curious, they seem superficially similar and yet Freenet has been around for 20 years. Personally I haven't heard of it but does it have a lot of traffic/popularity compared to IPFS?
The short version: Freenet has better anonymity properties than Tor/I2P at the cost of scaling poorly and only supporting static documents instead of interactive (tcp) servers.
It's become satisfyingly fast for me over the past years.
> only supporting static documents instead of interactive (tcp) servers.
You can develop dynamic services on Freenet just fine.
It is just done in a different fashion technically:
Instead of hosting the software on a server, and running scripts on the clients' web browser, there is no server/client model. It's true peer-to-peer: Every client is its own server, and the code runs there.
I.e. users install "plugins" for Freenet, and those use the network primitives which Freenet provides to establish dynamic connections and render dynamic content.
So you can have dynamic HTML if you want to. It's just not served by the sites you visit. Instead, things are developed "once and for all" as plugins for all sites and users to use.
So it's kinda like "forced true decentralization". You can't just shove JavaScript down the throat of random visitors of your site. You actually have to go through the effort of making your site's service a real application which people voluntarily choose to run.
>It's become satisfyingly fast for me over the past years.
My understanding is that Freenet (at best) scales logarithmically where Tor scales mostly constantly. Scalability and (current) speed are not the same thing. However, saying "poorly" was probably unfair; I should have said it is less scalable than Tor.
> You can develop dynamic services on Freenet just fine.
> [...]
What you're describing is how I would define a static page (plus client-side scripting). Which works fine in many use cases. But you can't ssh into some server over Freenet, or plop your PHP site from the world wide web, unchanged, onto Freenet. You can do these things with Tor, along with (at least in principle) all the "true p2p" things you mentioned you can do with Freenet (obviously in the Tor case you need to have your server always running and with sufficient capacity). That's why I would describe Tor as more flexible in terms of the things you can "host" (for lack of a better word). But the flexibility comes at a cost, and in exchange for being less flexible, Freenet has a lot of interesting properties that Tor does not have.
As with any anonymous p2p network you need to know:
Your level of privacy depends on your threat model. If the government is after you, they may very well know exploits. If you just want privacy from corporations for perfectly legal reasons then you might be fine.
TL;DR: Freenet is a self-contained *storage* network. Tor is a *communication* network, I2P as well. The latter is also self-contained. However as they're both communication networks they do not provide censorship-resistance. The endpoints you're connecting to are central servers and can be taken down.
>However as they're both communication networks they do not provide censorship-resistance. The endpoints you're connecting to are central servers and can be taken down.
Can you elaborate on this? This reads as essentially false to my understanding. If by endpoints you mean 'relays', those are not 'central servers'. If you mean directory authorities, then yes, those are centralized.
With that said, I'm still not sure what you mean by 'they do not provide censorship-resistance.'
They absolutely have and do provide censorship resistance.
> Can you elaborate on this? This reads as essentially false to my understanding.
No, it's not false AFAIK :)
By 'endpoint' I mean what an address on those networks actually *addresses*.
Both a Tor-address (a .onion hostname, or a regular web address if you use Tor to access one), and an I2P-address, are essentially similar to an IP-address in terms of being the address of a *machine*.
They are anonymized, but the data is routed to the very specific machine which the address is of.
A Freenet address on the other hand is the address of a *file* - basically a hash.
So in Tor and I2P, even though the machine behind the address is anonymous, you can still DoS it because you have the address of the specific machine where the data is at.
In Freenet, you cannot DoS the machine because the address does *not* tell the network where the file is stored. It only tells what file you get. Any machine on the network can serve the file to you!
And in fact, if you try to DoS a file by requesting it very frequently on Freenet, it will instead be distributed MORE on the network: Machines along the path where the data is routed will cache the file.
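The machine-address vs. content-address distinction, and the cache-along-the-path behavior, can be sketched in a few lines (a simplified simulation of the idea, not Freenet's actual routing or key format):

```python
import hashlib

def address_of(content):
    """Content addressing: the address is derived from the bytes
    themselves, so it doesn't name any particular machine."""
    return hashlib.sha256(content).hexdigest()

class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}  # address -> content

    def put(self, content):
        self.store[address_of(content)] = content

def fetch(address, route):
    """Walk the request route until some node has the file, then cache
    it on every node the reply passes back through."""
    for i, node in enumerate(route):
        if address in node.store:
            content = node.store[address]
            for earlier in route[:i]:  # cache along the return path
                earlier.store[address] = content
            return content
    return None

doc = b"my censorship-resistant pamphlet"
a, b, c = Node("a"), Node("b"), Node("c")
c.put(doc)  # initially only one node holds the file

addr = address_of(doc)
assert fetch(addr, [a, b, c]) == doc
# After a single request, the intermediate nodes hold copies too:
assert addr in a.store and addr in b.store
```

Hammering `addr` with requests only spreads the file to more caches; there is no single machine address to aim a DoS at.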
The wording can be a little confusing, because I believe in Tor when people talk about censorship resistance they mean censorship of the whole network - e.g. blocking all the entry nodes, trying to find bridges and blocking them, which is a different type of censorship. I imagine Freenet is just as vulnerable to that, and I would imagine that is an easier thing for powerful entities to do than to push gigabytes of data into the Tor network for an extended period of time to hopefully knock out a single hidden service.
Freenet in fact is less vulnerable to that because they aim to prevent that specific attack you described with the following feature:
It offers a "darknet" mode where you manually add peers by exchanging cryptographic keys with them.
The connections to those peers are fully encrypted, and thus very difficult to block, because you can't detect Freenet with mere packet inspection anymore.
Besides, the ability to censor individual Tor sites is *bad enough* already IMHO.
The properties of Freenet you describe are extremely intriguing.
However, I'm asking about your description of Tor and I2P
>However as they're both communication networks they do not provide censorship-resistance. The endpoints you're connecting to are central servers and can be taken down.
This is technically incoherent, and I'm not sure what you mean by it.
As far as I'm aware, at no point does your TCP traffic from your tor client connect to a 'central server'. I'm not familiar enough with I2P to comment.
I think they meant servers as in the machine hosting the content you are accessing, such as the machine that hosts HN. Not that every connection goes through a central server, but your content is hosted on a central server, as opposed to Freenet where your content is distributed and so can't be taken down like a server can.
A p2p file sharing system is not a wheel. I found that Freenet was a little complex. IPFS is quite simple and modern. Doesn't Freenet use Java?
Anyway, software doesn't have well-respected ISO standards, and 20 years is an eternity in the age of the internet. So I don't see a problem with IPFS. Also, IPFS looks like it's more flexible and reusable.
Seems interesting. I wanted to give it a shot, but it looks like there are no official Docker containers and it isn't packaged in the Fedora repos. I noticed the I2P project does have a Docker image and seems to do a similar thing. Do you know what the differences between the projects are?
However it does munge content and break sites by serving captchas instead of images or css so I don't link to it by default. It is generally faster though.
That's the beauty of IPFS. You can use whichever gateway you want, even your own.
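Because a CID names the content rather than a server, the same path-style URL scheme works against any gateway you point it at (the CID below is a made-up placeholder, and the gateway hostnames are just examples - a self-hosted gateway on localhost works the same way):

```python
# Hypothetical CID used only for illustration.
CID = "QmExampleExampleExampleExampleExampleExample"

def gateway_url(gateway, cid):
    """Path-style gateway URL: https://<gateway>/ipfs/<cid>"""
    return f"https://{gateway}/ipfs/{cid}"

# Swap gateways freely; the address still identifies the same content.
for gw in ["ipfs.io", "dweb.link", "localhost:8080"]:
    print(gateway_url(gw, CID))
```

If one gateway censors or munges the content, you just point the same CID at another one, or run your own.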