Freenet is a peer-to-peer platform for censorship-resistant communication. It uses a decentralized distributed data store to keep and deliver information, and has a suite of free software for publishing and communicating on the Web without fear of censorship.[4][5]:151 Both Freenet and some of its associated tools were originally designed by Ian Clarke, who defined Freenet's goal as providing freedom of speech on the Internet with strong anonymity protection.[6][7] Freenet has been under continuous development since 2000.
I ran a Freenet node for quite a while. I eventually stopped for two reasons:
1.) Freenet is really astonishingly slow. Think ten kilobits per second of transfer and tens of seconds of latency. It doesn't fit the web user interface very well at all, and it would probably need to maintain a dozen copies of every file to attain a reasonable amount of throughput. BitTorrent has it beat cold for mildly illegal files (copyrighted music, movies, etc.), which means that Freenet's users mostly use it for very illegal files, thus:
2.) Man, it is absolutely full of child porn. If you donate 10 GiB of disk space to Freenet, then you can be sure that at least 5 GiB of that is going to be dedicated to child porn.
#2 is somewhat related to #1. It's so slow and inconvenient that the only user base is people who absolutely need the one feature it does deliver: very strong anonymity and censorship protection. Unfortunately, what that mostly ends up protecting is something gross and abusive to other human beings, not legitimate free speech.
I'm very socially liberal. CP is not like drugs. It's not a victimless crime, unless you want to argue that all those eight-year-olds agreed to have their pictures posted and fully understand all the implications of that. Then there's the material that actually shows sex, nearly all of which is obtained through deception or physical coercion.
This is the problem with naive cyber-anarchism. We really do need both police and intelligence agencies because there are bad people out there who want to do very bad things. There are people who want to rob, kill, rape, con, and generally abuse other people. The problem is when governments and their agencies abuse their power and themselves engage in abusive behavior. That's a political problem, not a technical one, and has mostly political solutions.
I'm not opposed to using crypto, privacy software, etc. Privacy is also very important, as is security. To some extent these things are needed to protect you from bad actors, not just from unwarranted surveillance. They're also needed as a check and balance to limit government intrusion. It should be hard to spy on people, as this discourages overreach by increasing its cost.
All these things come down to questions of balance. How much policing do we want? The answer is "enough but not too much," but where's that line? It's a hard question. There aren't many easy answers. People who like to think in absolute black and white terms don't like that, but that's the real world.
Yep I love the idea of a decentralized, free network (mesh or otherwise) but it has to be very specialized or it gets used for CP all day long. I think the answer is way less bandwidth so it can only be used basically for text-based data - chat, scientific stuff, streams from temperature and wind gauges, etc.
I had the idea a long time ago for a decentralized network with really small object size limits and bandwidth caps. It dodges a lot of the scalability problems. The idea would be for people to use it for communication, not big file transfer. If someone wanted to send a file they could post a magnet: link or a regular http: link, etc.
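The object-size-limit idea above can be sketched in a few lines. This is a hypothetical illustration, not any real protocol: the cap value, the class name, and the content-addressed keying are all assumptions made up for the example. The point is that tiny payloads (chat messages, sensor readings, magnet/http pointers) fit, while bulk files are rejected at the protocol level.

```python
# Toy sketch of a "small objects only" store: a node-local store that
# rejects anything over a hard size cap, so the network stays usable for
# chat and sensor data but useless for bulk file sharing.
# All names and limits here are hypothetical.

import hashlib

MAX_OBJECT_BYTES = 4 * 1024  # hypothetical hard cap: 4 KiB per object

class TinyObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, data: bytes) -> str:
        """Store a small object; refuse anything over the cap."""
        if len(data) > MAX_OBJECT_BYTES:
            raise ValueError(
                f"object is {len(data)} bytes; cap is {MAX_OBJECT_BYTES}"
            )
        key = hashlib.sha256(data).hexdigest()  # content-addressed key
        self._objects[key] = data
        return key

    def get(self, key: str) -> bytes:
        return self._objects[key]

store = TinyObjectStore()
# Pointers to big files are fine; the payloads themselves are not.
key = store.put(b"magnet:?xt=urn:btih:...")
assert store.get(key).startswith(b"magnet:")
```

Content addressing also gives free deduplication, which matters when every participant is donating a small slice of storage.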
I would like to point out that MaidSafe is objectively a scam.
Operation of the MaidSafe system as advertised relies on a number of provably impossible technologies, like purely algorithmic proof-of-identity.
They gave a presentation at a recent Bitcoin conference in DC. I asked a few basic questions about how they planned to do certain things critical to MaidSafe's operation (things no one knows how to do, and many think are impossible), and their answers were so obscenely stupid that everyone in the room with relevant technical knowledge was laughing.
Example: "How do you plan to prevent bots from gaming the data transfer payment system?" The answer was something like "Oh, it's way too hard to make a bot. There are too many steps."
No. Proof-of-identity means proving you are a unique human, i.e., that you're not a bot and that one human can't claim to be multiple humans.
It is not possible to have proof-of-identity on a distributed system without a trusted centralized identification service (a problem MaidSafe claims to have solved, yet offers no evidence for).
I don't know if this matches academia, but in my usage "identity" means that a user can prove they are the same user ("A user can establish that they are identical to another user.") What you're looking for is demonstrating uniqueness ("A user can establish that they are identical to no other user of the system"). I think that is a part of "identification", as you say, although my usage of "identification" might include mapping to other information (govt. records or similar). I don't know which of these MaidSafe is assuming - the first is not hard, UI notwithstanding. The others are.
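To make the distinction above concrete: proving you are the *same* user only requires proving possession of the same key, which is easy. A minimal stdlib-only sketch using a toy Lamport-style hash-based one-time signature is below. Note what it does not do: nothing here proves the key holder is a unique human, since anyone can mint as many keys as they like, which is exactly the Sybil problem the "uniqueness" requirement runs into.

```python
# Sketch of "same user" continuity via key possession, using a toy
# Lamport one-time signature built only from hashes. A valid signature
# proves the signer holds the private key behind a known public key.
# It proves continuity of identity, NOT human uniqueness.

import hashlib, os

def keygen():
    # 256 pairs of random preimages; the public key is their hashes.
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(256)]
    pk = [[hashlib.sha256(x).digest() for x in pair] for pair in sk]
    return sk, pk

def _bits(message: bytes):
    digest = hashlib.sha256(message).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(sk, message: bytes):
    # Reveal one preimage per bit of the message digest.
    return [sk[i][b] for i, b in enumerate(_bits(message))]

def verify(pk, message: bytes, sig) -> bool:
    return all(hashlib.sha256(sig[i]).digest() == pk[i][b]
               for i, b in enumerate(_bits(message)))

sk, pk = keygen()
msg = b"post #2 by the same author"
assert verify(pk, msg, sign(sk, msg))          # same-key holder: accepted
assert not verify(pk, b"forged post", sign(sk, msg))  # mismatch: rejected
```

This is a one-time scheme (reusing the key leaks preimages), chosen only because it needs no third-party crypto library; the point is the asymmetry between "same user" (cheap) and "unique user" (unsolved without a trusted registrar).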
Yes, as far as anybody serious believes, this is completely impossible, and their "solution" will likely be no solution at all. There are several comments floating around suggesting that MaidSafe has saved all of the people who bought MasterCoin and didn't have anywhere to dump it due to the markets being too shallow. I'm inclined to believe that's more the purpose than anything.
Futurists have a tendency to imagine a world of changed human behavior and it's compelling to do so. The reality is that the future rarely arrives as sweeping change, but rather as metaphor and specialization.
While you can imagine others adopting new patterns of behavior because you understand the underlying reasons why such behavior is reasonable, the metaphor through which you explain this change is not readily understood. Why, as a user, do I want this? If the answer is control and privacy, you might be barking up the wrong tree (time and again we've shown that those are not things consumers want or are willing to pay for).
If you want to drive dynamic change in the world, you have to change the underlying structure of complicated systems while steadfastly avoiding changes in user behavior. It turns out this is quite hard.
I applaud your efforts but encourage you to avoid the rabbit hole of endless specialization and to improve the marketing metaphor/rhetoric.
I think you are wrong on the control and privacy issue. It seems more like a lack of understanding of what control and privacy mean for the average user. Most Internet users are like most car owners: they have no idea about the inner workings, nor do they care. They don't generally question whether having a seat belt is safer than not having one, and they generally trust that the manufacturer has their [the user's] individual best interests in mind when developing, building, and selling the car. Likewise, most Internet users firmly believe that if something were "good" for them, it would already be "built in" to the system. This inherent trust in what is presented is the problem, rather than consumers not wanting control and privacy.
The problem is far bigger than just loss of control and privacy. We are paying so much more[1]:
1. The advertisers who pay for it all still get their money from us, but baked into prices of the things we buy from them. There is no free lunch.
2. The overhead cost of advertising is huge and we pay for that too. Ad systems and data collection systems, ad engineers and people like the author. Ad agencies. Creative agencies. Ad tracking. Marketing departments.
3. As the article points out, we pay the opportunity cost of a product that cannot put users first because they live or die by giving advertisers what they want. Costs include our lost privacy, content and services design that optimize for advertising revenue instead of its users, and our time and attention stolen by surreptitious ads. As has been said, we are more Google's products than we are their customers.
4. We pay the social costs. Democracy and the free market assume people make voting and purchasing decisions based on facts and reason. Advertising is predominantly about manipulation and deceit. To me this is the most expensive cost of all.
Added together, we are paying a lot more for "free" web content and services than if we could just straight up pay web sites for ad-free versions. But as in the prisoner's dilemma, we individually make decisions that hurt us all collectively. Whatever you think of MaidSafe, the article is right when it says,
"Do we have the Internet we deserve? There’s an argument to say that yes, we absolutely do. Given web users’ general reluctance to pay for content. We are of course, paying."
So, as you point out, the question is, "How do we get users to understand this?" Got any ideas?
I use much less open source UI than I did ten years ago. A huge portion of the infrastructural code we rely on is open, but almost all of the user interfaces I actually touch every day are closed. If I want to tweak something, I have to resort to hacks.
Additionally, if I write or modify an open source web service, there's the whole problem of hosting it. I have to essentially start a small business to host it. Or leave that to the users, which sort of defeats the purpose of a web service. If I want to change a single feature of Wikipedia, I have to re-host the entire thing AND have a plan for staying in sync with their centralized database.
As a result, the world of open source user-facing software is developing an order of magnitude slower than open source infrastructure software. That's sad.
If I could simply write a web service, push it into a decentralized hosting network, and have the cost of keeping it deployed spread out across the network of users, then it becomes much more tenable to be a developer of open source web apps and services. Doubly so if there is a mechanism for me to get micropayments for the code.
I think if MaidSafe or something similar succeeds, it will lead to a Cambrian explosion of open source software. I know I'd write and patch a lot more of it.
MaidSafe makes some big claims about their cryptography and verifiable behavior that were panned in this /r/crypto thread [1]. Can anybody add some thoughts on this?
“Our network knows within 20 milliseconds if the status of a piece of data or a node has changed. It has to happen that fast because if you turn your computer off the network has to recreate that chunk on another node on the network to maintain four copies at all time.”
What do they mean by that 20ms figure? That can't apply across the entire network, since trans-Pacific latency alone is on the order of 100ms.
Perhaps the copies of data are geographically placed to minimize distance to within 20ms. Just a thought - I have no actual data to support that assumption.
Sounds pretty made up. It's over 200ms for me to see a reply from most servers, just due to the latency coming in and out of the undersea cables. I'd wager most nodes in a p2p system would have peers far above that. You can't get around that limitation, no matter how fancy your code is.
It's possible with central tracking servers if you de-hype-ify that number to, say, <500ms. With a decentralized network it's not possible. Even trying to achieve it would result in completely insane exponential bandwidth overhead and concomitant vulnerability to amplification DoS attacks.
This is why we have no huge-scale meshnet. As the network's size increases, the bandwidth required to maintain the network's routing tables grows much faster than linearly, until no bandwidth remains for actual traffic.
If this is solvable it will either require a creative redefinition of the problem or a fundamental innovation in mathematics, probably in the realm of graph theory.
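As a rough illustration of the scaling claim, here is a back-of-envelope model. It assumes naive link-state flooding, where every topology change is re-broadcast to every node; under that assumption total update traffic grows roughly quadratically with network size (strictly superlinear rather than literally exponential, but already fatal at scale). The churn rate and numbers are illustrative assumptions, not measurements of any real network.

```python
# Back-of-envelope: routing overhead under naive link-state flooding.
# Assumption: every churn event (node joining/leaving/flapping) must be
# flooded to every other node. If a fixed fraction of nodes churn per
# interval, total message count grows with n**2.

def flood_messages(n_nodes: int, churn_events: int) -> int:
    """Network-wide messages if each churn event reaches all other nodes."""
    return churn_events * (n_nodes - 1)

for n in (100, 1_000, 10_000):
    churn = n // 10  # illustrative assumption: 10% of nodes flap per interval
    msgs = flood_messages(n, churn)
    print(f"{n:>6} nodes -> {msgs:>10,} update messages per interval")
```

Per-node overhead here grows linearly with network size while each node's access bandwidth stays fixed, which is why maintenance traffic eventually crowds out payload traffic; real protocols (hierarchies, DHTs, gossip) exist precisely to beat this naive bound.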
I think the lack of wider-scale meshnets is just that the people willing to set one up or join one are either geographically isolated or aren't aware of others in a similar position. You need a high density of willing participants for something like that to work. Yes, there are issues with authentication and flood control, but I'm not sure meshnets have ever become a big or important enough thing for that to matter (I'd love to be corrected on that).
I wish these guys luck, but I'm becoming increasingly pessimistic about the idea of a 100% edge-only decentralized network that is really robust and useful.
I don't think it's just a matter of putting the engineering effort to bear. I think there are fundamental mathematically-based barriers here. Try this paper for starters:
I was thinking about something similar to this the other day as a replacement for (or evolution of) Wikipedia. If you wanted to store all of human knowledge and history in some sort of archive, it would be enormous. But built on a p2p basis, everyone would hold a slice of it, with each slice replicated across many machines. Access would mean agreeing to hold on to and serve part of it.
But I don't get how that could work for applications, especially in security sensitive applications.
You might be interested in Smallest Federated Wiki[0][1], a software project by Ward Cunningham (creator of the first wiki). It focuses more on collaborative, distributed editing than distributing content though.
Servers have a lot of nice features that are hard to replicate in a totally decentralized environment - I say this as someone who experimented a lot with different peering structures for a hobby project.
And indeed, this too seems to rely on persistent nodes, though they don't say in what capacity (whether they work like torrent trackers or if they actually relay content).
In an increasingly mobile-flavored network where one person has many devices, it makes sense to have servers. However, that doesn't mean we all need to get behind the mega silos of Facebook and Google. The early Internet actually got this right, both technologically and topologically.
The problem is partly cultural since we have gotten used to the all-or-nothing, anti-federation approach of Web 2.0+, but it's also due to the inability to deliver and change features in a timely manner. But if, say, Facebook UI innovations were to stagnate (and some say that they already have), it would become more feasible to implement a slower-moving federated service.
Like FileCoin, this is very interesting but is any of those networks ready to be used today? Even in some limited capacity? Right now, it sounds like pure hype.
In the long run I don't see how various things like twitter aren't replaced with P2P open source systems. What about twitter really requires a single centralized provider?
It's all fun and games until the government decides you're hosting child porn on your computer, even if you didn't put it there.
So, they'll have to come up with some means of centralized censorship, which is going to hack off the devout civil libertarians who would initially support it. And the vast majority of internet users aren't going to care until someone tries to explain to them how the internet is using their resources for storage.
So I'm a little confused here - let me just ask something;
How is this going to work?
The answer seems obvious, but look at the battle raging over net neutrality right now. With a decentralized infrastructure, it's going to be a lot harder to get around the prospect of paid peering than it'd be with Uncle Google and/or Amazon paying their way into your home.
So all of your users' data is going to be stored in random places? I don't see how anyone thinks this could feasibly work unless everyone is suddenly cool with the loss of trade secrets, the wholesale giveaway of intellectual work, and the complete loss of control over data/content.
It's hypothetically possible to come up with versions of algorithms that distribute the processing of your data such that each node does not get enough information to recover secrets, in a cryptographically secure fashion.
As a developer though in 99% of cases I'd much rather just pay for a few servers in multiple datacenters and load balancing than waste time researching the above, let alone proving its cryptographic security for my use cases.
It could, however, be useful for cases where you're trying to get over foreign government censorship or oppression against whatever service you're trying to provide. Imagine, for example, a free speech discussion service that was decentralized and essentially un-blockable. Bonus points if its cryptographic system can be designed such that an arbitrary processing node can never be proven from its data alone that it was helping run the service (among other services).
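The "no single node gets enough information to recover secrets" idea from a few comments up can be sketched, in its simplest storage-only form, as a 2-of-2 XOR secret share. This is a textbook construction shown for illustration, not anything MaidSafe is known to implement, and it covers distributed storage only, not distributed computation on the data.

```python
# Minimal sketch of splitting data so no single node can read it:
# a 2-of-2 XOR secret share. Each node stores one share; only a party
# holding both shares can reconstruct the secret. Each share on its own
# is uniformly random noise (information-theoretically hiding).

import os

def split(secret: bytes) -> tuple[bytes, bytes]:
    share_a = os.urandom(len(secret))                        # random pad
    share_b = bytes(s ^ a for s, a in zip(secret, share_a))  # secret XOR pad
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))

secret = b"trade secret"
share_a, share_b = split(secret)
assert combine(share_a, share_b) == secret  # both shares recover the secret
```

Generalizing to k-of-n (Shamir's scheme) and to actually *computing* on shared data (secure multi-party computation) is where the real research burden, and the 99%-of-cases "just rent servers" argument, kicks in.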
It's already possible to execute some algorithms on untrusted computers using homomorphic cryptosystems. I'm not sure to what extent this will go; it might eventually be possible to securely execute arbitrary algorithms on untrusted platforms. This is sort of a holy grail of cryptography: to be able to process secret information efficiently without gaining knowledge of it.
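A concrete example of the partial case is the textbook Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the *sum* of the plaintexts, so an untrusted node can add numbers it cannot read. The sketch below uses deliberately tiny, insecure parameters just to show the mechanism (real deployments use a modulus of roughly 2048 bits or more).

```python
# Toy Paillier cryptosystem demonstrating additive homomorphism.
# INSECURE on purpose: tiny primes, illustration only.

import math, random

p, q = 17, 19                   # toy primes
n = p * q                       # public modulus
n2 = n * n
g = n + 1                       # standard generator choice g = n + 1
lam = math.lcm(p - 1, q - 1)    # private key lambda
mu = pow(lam, -1, n)            # private key mu = lambda^-1 mod n (valid for g = n+1)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:  # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n   # L(x) = (x - 1) / n
    return (L * mu) % n

# An untrusted node multiplies ciphertexts; the result decrypts to the sum.
c1, c2 = encrypt(20), encrypt(22)
assert decrypt((c1 * c2) % n2) == 42
```

This only gives addition (and scalar multiplication) on encrypted data; executing *arbitrary* algorithms is the fully homomorphic case, which exists since Gentry's 2009 construction but remains orders of magnitude too slow for general use.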
I agree it has its uses, in a Tor sort of way, but in terms of "replacing the internet," which I think was the original title of this HN post, I don't think it's feasible.
I'm not outright dismissing the technology or the thinking behind it but it certainly wouldn't fit a majority of use cases as we use the internet now.
This is the internet all over again. Remember "online services"? BBSes?
Some form of this was bound to sprout as the internet became feasible to regulate and demarcate. It started with BitTorrent because people wanted to pirate music. (Maybe before, but torrents are impressive in that they reside literally nowhere).
This is going to happen. Maybe in 2014/15, maybe in 2020 when people are having their prostates probed by the NSA/Europe's right to forget/Brazil's lei de mídia/yadda yadda yadda.