It's a bit dated at this point, but it still gives an OK overview of networks like Gnutella, eDonkey, and BitTorrent, and also goes into the more exotic ones.
As far as I can see, the currently 'usable' clients that use darknets and turtle hopping are OneSwarm (http://oneswarm.cs.washington.edu) and AllianceP2P (http://code.google.com/p/alliancep2pbeta/).
p.s. Sorry about the language; it was my first bigger paper written in English, so I'm a bit heavy on passive constructions and run-on sentences.
You can use BitTorrent on it, there is an eMule port, Gnutella, etc. It works quite well; I often get download speeds of ~50 kilobytes/s on BitTorrent. It is a self-contained network with some trackers.
While I can't say anything about it from a networking point of view, a big problem is that there is no easy "how do I use this" guide on the project page. I firmly believe that a well-packaged implementation is the basis for anything :)
It also seems to be an overlay network. I personally think that an encrypted friend-to-friend darknet approach is more useful as a first step than trying to do it at internet scale. "Social" darknets offer a good balance between transfer speed, security, and people being OK with their upload being used (since it's used by their direct friends).
I'm still waiting for usable software that integrates with existing social networks (Twitter, Facebook, ...) to gather peers.
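To make the friend-to-friend idea concrete, here's a minimal sketch, assuming a hypothetical fetch_friend_keys() that pulls friends' public keys from a social network API; connections are only accepted from peers on that whitelist:

```python
# Toy sketch of a friend-to-friend (darknet) gate: only peers whose
# public key came from our social graph are allowed to connect.
# fetch_friend_keys() is hypothetical; a real client would pull these
# from a social-network API (Twitter, Facebook, ...) or a manual exchange.

def fetch_friend_keys():
    # Placeholder: in practice, fetched from a social graph or key exchange.
    return {"alice-pubkey", "bob-pubkey"}

class FriendGate:
    def __init__(self):
        self.allowed = fetch_friend_keys()

    def accept(self, peer_pubkey: str) -> bool:
        # Reject anyone who is not a direct friend; uploads are then
        # only ever consumed by people the user actually knows.
        return peer_pubkey in self.allowed

gate = FriendGate()
assert gate.accept("alice-pubkey")
assert not gate.accept("stranger-pubkey")
```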
I was also mostly focusing on networks with active communities that are still in development.
- WASTE isn't developed anymore and hasn't been for quite some time.
- Share and Perfect Dark are both closed source and only active in Japan. Being closed source makes them pretty much useless to study, and I wouldn't call them "notable" if they're limited to a single country and some scattered anime fans. Users of these networks have also been arrested in Japan for copyright violation, so I'm not sure how good their implementations actually were.
As for Freenet: this is true, I probably should have spent the time. Sadly, this thesis was only 12 of 30 credits that semester, so most of the time I had to work on other things and was a bit limited in depth and breadth.
It didn't get any traction then, but it was completely decentralized (based on distributed hash tables).
It was not designed to act as a gossip-like overlay network for indexing content (similar to eDonkey/eMule); it was designed as an overlay network for looking up the metadata of already-known keys stored in the DHT.
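To make that distinction concrete: in a Kademlia-style DHT you can only look up metadata for a key you already know (e.g. a content hash); there is no browsable index. A toy sketch of the XOR-distance lookup, with made-up node IDs:

```python
# Minimal sketch of a Kademlia-style lookup: given a key you already
# know, find the nodes whose IDs are XOR-closest to it. Node IDs here
# are made up; a real DHT uses 160-bit IDs and iterative RPCs.

def xor_distance(a: int, b: int) -> int:
    return a ^ b

def closest_nodes(key: int, node_ids: list[int], k: int = 3) -> list[int]:
    # The k nodes closest to the key are the ones responsible for
    # storing (and serving) its metadata.
    return sorted(node_ids, key=lambda n: xor_distance(n, key))[:k]

nodes = [0b1010, 0b0110, 0b1111, 0b0001]
print(closest_nodes(key=0b1011, node_ids=nodes))  # -> [10, 15, 1]
```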
The BitTorrent checklist is as follows:
1. [X] peer-to-peer file transfer
2. [X] Kademlia/magnet links for decentralized metadata
3. [X] Kademlia/peer exchange for decentralized tracking
4. [X] Gossip protocol for indexing magnet links
5. [X] integrated data proxying for anonymity/plausible deniability
Tribler solved step 4; OneSwarm and some others are trying to solve step 5.
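Step 2 is what makes the tracker optional: a magnet link carries the infohash, which doubles as the DHT key for fetching the torrent metadata. A small sketch of extracting it (the link and infohash below are made-up examples):

```python
# Sketch: extract the infohash from a magnet link. The infohash is the
# key used for the Kademlia/DHT metadata lookup (checklist step 2).
from urllib.parse import urlparse, parse_qs

def infohash_from_magnet(magnet: str) -> str:
    params = parse_qs(urlparse(magnet).query)
    # xt = "exact topic", in the form "urn:btih:<hex or base32 infohash>"
    xt = params["xt"][0]
    assert xt.startswith("urn:btih:")
    return xt[len("urn:btih:"):]

# Made-up infohash, for illustration only.
link = "magnet:?xt=urn:btih:c12fe1c06bba254a9dc9f519b335aa7c1367a88a&dn=example"
print(infohash_from_magnet(link))
```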
We're indeed on the cusp of an impenetrable file sharing network.
Until they just block the protocol with deep packet inspection. (Encryption won't help; you can analyse packet sizes and timing to fingerprint protocols pretty accurately.)
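As a toy illustration of that point (all numbers invented): even with an opaque payload, packet-size distributions differ per protocol, so a naive histogram comparison can already guess what a flow is:

```python
# Toy sketch of traffic fingerprinting: even with encrypted payloads,
# packet-size distributions differ per protocol. All numbers invented.
from collections import Counter

def size_histogram(packet_sizes, bucket=200):
    counts = Counter(s // bucket for s in packet_sizes)
    total = sum(counts.values())
    return {b: c / total for b, c in counts.items()}

def similarity(h1, h2):
    # Overlap of two normalized histograms (1.0 = identical shape).
    return sum(min(h1.get(b, 0), h2.get(b, 0)) for b in set(h1) | set(h2))

profiles = {
    "bittorrent": size_histogram([1400, 1400, 1380, 60, 1400, 68]),
    "voip":       size_histogram([160, 172, 160, 168, 160, 160]),
}
observed = size_histogram([1400, 1390, 64, 1400, 1400, 72])
guess = max(profiles, key=lambda p: similarity(profiles[p], observed))
print(guess)  # -> "bittorrent", despite the payload being opaque
```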
It was further affirmed in MGM v. Grokster that a file sharing service is accountable for its users' infringement only if it advertises or otherwise promotes using the service to violate the law.
The broad trend has been toward easier, faster, more secure file sharing. Does anyone really believe that trend is going to reverse course?
Basically, when anyone can add any file they want (eMule and the like, for example, scanned directories and added whatever they could find), the noise is invariably much higher. Torrent trackers act as curators, ensuring a high quality of uploads (especially on private trackers), thus increasing the SNR.
I'd love it if someone could prove me wrong, though. Does anyone have any insight on this?
Skype's issue was getting the P2P network back up and running ASAP because 1) people pay them for a service that was down, and 2) (voice|video|text) chat generally has a higher priority than file sharing. People wouldn't be as ticked off if a P2P file sharing network went down for a few days (or a week) before rebuilding itself.
What happens when all superpeers go offline at once is that the 'normal' peers then start DDoSing any remaining superpeers (or the few that manage to get back up and running quickly). As people settle down from trying to reconnect, the network eventually rebuilds itself (possibly with completely different superpeers). Once you knock out all superpeers at once, the network becomes quite a bit less distributed.
That said, once all superpeers are down, it would be easy to keep manually DDoSing superpeers as they appeared on the network, as a targeted attack. Someone malicious could keep up the DDoS even as regular users let up on their inadvertent DDoS following the sudden change in network makeup.
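One standard mitigation for the inadvertent DDoS is for clients to reconnect with exponential backoff plus random jitter, so the herd spreads out instead of hammering the first superpeer to reappear. A minimal sketch (try_connect is a hypothetical function that returns True once connected):

```python
# Sketch: reconnect with exponential backoff + jitter so that thousands
# of peers don't all hammer a recovering superpeer at the same instant.
import random
import time

def reconnect(try_connect, base=1.0, cap=300.0):
    attempt = 0
    while True:
        if try_connect():  # hypothetical: returns True once connected
            return
        # Sleep somewhere in [0, min(cap, base * 2^attempt)) -- the
        # randomness is what desynchronizes the herd of retrying peers.
        delay = random.uniform(0, min(cap, base * 2 ** attempt))
        time.sleep(delay)
        attempt += 1
```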
Also, is there a way to force outgoing encryption (you can do this in uTorrent)?
“Together with software bugs and a code cleanup, that is now our last known weakness,” says Pouwelse.
I guess that to solve that problem, the distribution must be decentralized and the supernode addresses delivered with the client (as stated in the comment above).
Do you have the cite handy for the paper referred to in section 2? I'm curious about the problem of network degradation due to pollution by an adversary whose clones attempt to maximize their reputation so as to isolate non-clones before initiating the attack.
The BT core program would look after online users and search data, sharing these with other clients; it could be always on. The BT file manager would then handle seeding/leeching through the BT core program. This would let you participate in the group, acting as a super peer with up-to-date search data, without having to run a full-blown BT client.
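A rough sketch of what that split might look like (the names and interfaces here are invented, not from any existing client): an always-on core holding peer and search state, with the file manager driving transfers only through it.

```python
# Rough sketch of the proposed split -- names/interfaces are invented.
# The always-on core holds peer and search state (and can act as a
# super peer); the file manager drives transfers through it and can be
# started and stopped independently.

class BTCore:
    def __init__(self):
        self.peers = set()        # known online peers
        self.search_index = {}    # keyword -> set of infohashes

    def announce(self, peer):
        self.peers.add(peer)

    def search(self, keyword):
        return self.search_index.get(keyword, set())

class BTFileManager:
    def __init__(self, core: BTCore):
        self.core = core          # all network access goes via the core

    def download(self, infohash):
        # Ask the core (not the network directly) for peers to leech from.
        return list(self.core.peers)

core = BTCore()                   # runs continuously
core.announce("peer-a")
manager = BTFileManager(core)     # started only when transferring
print(manager.download("made-up-infohash"))
```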