The Darknet Project: netroots activists dream of global mesh network (arstechnica.com)
153 points by divy on Nov 7, 2011 | 51 comments



I remember hearing about China usurping 15% of western internet traffic for 18 minutes. This was accomplished by having nodes report themselves as the next-closest hop on the path to a packet's destination. In a decentralized darknet, I'd imagine such an issue being much more widespread. In fact, I'd imagine a darknet would actually play into the hands of the government. It would be perfectly plausible to infest the darknet with millions of your own nodes, each reporting itself as the next-best hop, thereby inserting themselves into the middle of all darknet traffic and able to analyze data as it flows through the system. Obviously a darknet would encrypt its traffic, but all bets are off when you potentially have a constant man in the middle and no centralized authority on trust.

What's worse, and more to the point of playing into the hands of the government, is that a darknet would give them a concentrated focus area. If I were to compare the percentage of "interesting" traffic on the regular internet against the percentage of "interesting" traffic on a darknet, I'd expect the darknet to have a much higher ratio of noteworthy traffic to junk. If I had a limited amount of resources to invest in analyzing and decoding secure traffic, I would obviously point my tools at the most richly dense data source.
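The hijack mechanism can be sketched in a few lines. This is a toy distance-vector model, not BGP, and all node names are made up; it just shows why "believe whoever advertises the shortest path" is exploitable:

```python
# Toy illustration (not real BGP): in a naive distance-vector scheme, each
# node forwards to whichever neighbor advertises the lowest cost to the
# destination. A liar advertising a tiny cost attracts everyone's traffic.

def best_next_hop(advertisements):
    """Pick the neighbor advertising the lowest cost to the destination."""
    return min(advertisements, key=advertisements.get)

# Honest advertisements: neighbor -> claimed hops to destination D
ads = {"honest_a": 3, "honest_b": 5}
assert best_next_hop(ads) == "honest_a"

# A malicious node claims it is 1 hop from every destination.
ads["mallory"] = 1
assert best_next_hop(ads) == "mallory"  # all traffic now flows through mallory
```

Without some external source of truth about who is actually adjacent to whom, nothing in the protocol itself stops the lie.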


Man-in-the-middle attacks may not work as well as you think. While it is possible to inject false information using this darknet mechanism, you can use something like PGP to ensure that you are getting data from the same node each time. Of course, you have to set it up right from the beginning; if someone messes with you before you know what a node's key is supposed to be, they can feed you false information.

You might be able to overcome this by using a many-to-many authentication mechanism. But realistically, what could the government hope to accomplish by feeding you false information? (That is about the only thing they are likely able to do.) Consider:

There is no way to back track traffic and discover who sent the request.

Once you realize a source is feeding false information, you know to never trust that source again.

It is easy to imagine a decentralized rating system for the quality of information provided by various keys on the network. Keep in mind you can't really fake who you are. You are your public key, no one else can publish under your public key but you.

EDIT: I highly recommend reading the Freenet paper: http://freenetproject.org/papers/ddisrs.pdf


> There is no way to back track traffic and discover who sent the request.

If you control a vast majority of the nodes, this is simply incorrect.


Sort of. The only way to tell whether a person originated a request is to control every node that person is connected to. If even one of their neighbors is outside your control, there is always the possibility that the request came from that neighbor rather than from the person themselves. This is because relaying a request on someone else's behalf looks exactly the same, as far as the next node is concerned, as making a request on your own behalf.

This is how Freenet works. Of course, in Freenet there is a time-to-live associated with each request, so it will die eventually if, for example, the searched-for item is not present on the network at that time. You could try to identify a particular node as the origin by observing the time-to-live on the requests it passes you, but small amounts of random variance in time-to-live values can effectively ensure both that requests don't live forever and that it is suitably difficult to determine the originating node.
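A minimal sketch of that idea, loosely modeled on Freenet's probabilistic hops-to-live decrement (the constants here are assumptions, not the project's exact values):

```python
import random

MAX_HTL = 18  # assumed maximum hops-to-live, for illustration

def forward_htl(htl, p_decrement_at_max=0.5):
    """Decrement hops-to-live before forwarding. At the maximum value the
    decrement only happens probabilistically, so a request arriving at
    htl == MAX_HTL may be fresh *or* already relayed several times -- an
    observing neighbor can't tell an originator from a forwarder."""
    if htl >= MAX_HTL:
        return htl - 1 if random.random() < p_decrement_at_max else htl
    return htl - 1

# Count how many hops a request can sit at MAX_HTL before decrementing.
random.seed(0)
htl, hops_at_max = MAX_HTL, 0
while htl == MAX_HTL:
    htl = forward_htl(htl)
    hops_at_max += 1
assert htl == MAX_HTL - 1 and hops_at_max >= 1
```

Because the counter sometimes survives a hop unchanged at its maximum, "htl is at the maximum" no longer proves "this neighbor originated the request", while the guaranteed decrement below the maximum still kills every request eventually.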

Now, with enough concerted effort it is certainly possible to establish, with some statistical confidence, that a user is looking at something, but you can rarely be absolutely sure.


I think 100% certainty that a node is the originator of a request is not necessary for most purposes. If you are thinking of a court case, then maybe, but only if there is no corroborative evidence.

And in other situations where people might want to use a darknet (e.g. a repressive regime) a few false positives aren't going to bother anyone concerned.


All the government needs to know is what information is being requested, and who is requesting it.


The good thing about the Darknet plan is that it levels the playing field. Sure, governments might suddenly be able to influence the infrastructure the same way that corporations can now - but I would argue that governments have quite a lot of influence already. The important difference is that ordinary citizens would have that very same power.

Unless you live in a predominantly authoritarian state (and I guess disagreement about this is what most of these discussions come down to, in the end), keeping government in check by empowering the people with insight and access to its processes is usually what enables a democracy.

Your point about darknet traffic being inherently more interesting is insightful, but I'm not sure where it gets us. The same can be said about TOR (and there have been attacks on this too, as well as TOR posing the same issue of possibly being 'infected' by too many nodes of a single attacker), and I guess it's really just a technical challenge, in the end.


I'm not sure I understand the benefit of it being easier for more people to run man-in-the-middle attacks on the network. I doubt the government would rely on such a network for sensitive information, so it'd probably end up being citizens spying on other citizens (or worse, e.g. phishing).


Ordinary users would not have the same power as governments because they would not have the money to run as many nodes as a gov or big corp.


Sep Kamvar has done a significant amount of research addressing precisely the problems you describe. The conclusion is that (in non-pathological cases, which would likely include the case you vaguely describe) one can design network architectures with reputation mechanisms such that malicious peers are found and routed around. See http://kamvar.org/publications, specifically "Numerical Algorithms for Personalized Search in Large-Scale Self-Organizing Information Networks" (the first book), esp. Chapter 9; I highly recommend it.
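The core of that line of work is the EigenTrust idea: a peer's global reputation is the stationary distribution of a walk over normalized local trust scores. A heavily simplified sketch, with all trust values invented for illustration:

```python
# Simplified EigenTrust-style reputation: global trust is computed by power
# iteration over row-normalized local trust scores. Values are illustrative.

def eigentrust(local_trust, iterations=50):
    """local_trust[i][j] = peer i's local trust in peer j (non-negative).
    Returns a global trust vector that sums to 1."""
    n = len(local_trust)
    c = []
    for row in local_trust:
        s = sum(row)
        # Normalize each peer's outgoing trust; peers with no opinions
        # fall back to a uniform distribution.
        c.append([x / s for x in row] if s else [1.0 / n] * n)
    t = [1.0 / n] * n  # start from a uniform prior
    for _ in range(iterations):
        t = [sum(t[i] * c[i][j] for i in range(n)) for j in range(n)]
    return t

# Peers 0 and 1 vouch for each other; nobody vouches for peer 2 (a known
# liar), and peer 2's own claims can't inflate its standing.
local = [
    [0, 10, 0],   # peer 0 trusts peer 1
    [10, 0, 0],   # peer 1 trusts peer 0
    [5, 5, 0],    # the liar's outgoing opinions don't help itself
]
scores = eigentrust(local)
assert scores[2] < scores[0] and scores[2] < scores[1]
```

The key property for the darknet argument: since your score comes from what *others* assert about your key, flooding the network with nodes doesn't buy reputation unless already-trusted peers vouch for them.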


I think the goal of a darknet is more to prevent censorship than to ensure privacy. As you said, any untrusted nodes could easily reveal a user's location and intent.

I would much prefer a network that piggybacks on the existing infrastructure and poses as innocuous traffic through steganography or encryption.


"The US State Department seems to view decentralized darknets as an important area of research for empowering free expression abroad."

(my emphasis). Depressing!


This was something that was discussed pretty heavily on the various IRC channels during the 2009 Iranian Election.

Cell phones, wifi, etc. had mostly been cut off in Tehran, where some really horrible things were happening that the rest of the world needed to know about. In a situation like that, a mesh network of wifi devices that could communicate with either HAM operators or satellite internet operators would have been really helpful.

A lot of people were actually surprised that the CIA didn't have something like this sitting on the shelf ready to be distributed. (And this is where I start sounding like one of those people, but the CIA has been involved in revolutions in places like South America. A communications kit seems like something that they would employ.)


To be fair, another interpretation is that darknets are not needed in the US because you can use the regular Internet for free expression.


Like donating to whistleblowing websites like WikiLeaks.


To be fair, that was more of a private business decision on the part of PayPal than it was a government mandate. But I get what you're saying, and I agree that internet is not as open, even in the United States, as it could and should be.


A darknet would make VISA do what you want?


Regardless of their intended audience, the fact that they're interested is the important thing. If it's good, it'll make its way around the world.


I can't help but think that projects to overlay a darknet on our existing Internet infrastructure are several orders of magnitude more likely to succeed.


Overlaying on existing infrastructure is a faster way to get up and running, but ultimately a darknet that depends on another network is vulnerable to that network's centralized 'off' button.


Perhaps the best design would allow both: a darknet which seamlessly routes over both independent mesh networks and the Internet. Ultimately, the Internet is going to have to play a part in such a network. How else is data going to travel between cities and across oceans?


Between cities? In the cars of people who bring their laptop with them home from work.

Across oceans? Never underestimate the speed of a ten-foot container filled with 2-terabyte hard drives :)


> Never underestimate the speed of a ten-foot container filled with 2-terabyte hard drives

No. Never underestimate the bandwidth of a truck filled with hard drives. Always underestimate the speed of said truck.

Networking is easy when latency doesn't matter. Unfortunately, it does.
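The bandwidth/latency distinction is easy to put numbers on. All figures below are assumptions for illustration (container capacity, trip distance, speed):

```python
# Back-of-the-envelope sneakernet numbers (all assumed): a ten-foot
# container holding 10,000 two-terabyte drives, driven 500 km at 80 km/h.

drives = 10_000
bytes_per_drive = 2 * 10**12           # 2 TB per drive
distance_km, speed_kmh = 500, 80

latency_s = distance_km / speed_kmh * 3600        # one-way trip time
throughput_bps = drives * bytes_per_drive * 8 / latency_s

assert throughput_bps > 10**12   # multiple terabits per second of bandwidth
assert latency_s > 6 * 3600      # but more than six hours of latency
```

Terabits per second of throughput, hours of round-trip time: great for bulk content distribution, useless for anything interactive.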


Latency matters sometimes. I'm an old-fashioned guy who still subscribes to a lot of blogs, and for that, a few days of latency is irrelevant to me.


This darknet won't be supporting IM then? ;)


See, that's what I didn't get about the article: DARPA intended the Internet to be robust against single points of failure.

Where is the Internet's 'off button'? Sure, there are exchanges and backbones that are more important than others (e.g. LINX in London), but it takes sustained and continuous efforts by a government to even come close to filtering/censoring the Internet effectively.


What people generally say is that the net was designed to be robust against things like a nuclear strike taking out some part of the network.

That is very different from saying that it can't be turned off, by a government with the power to legally compel infrastructure operators to do things.

I'd say that the US government, if it wanted to, could turn off the Internet, for most of its citizens, over the course of a few days.

What would stop this happening is the legal framework, and the business and societal infrastructure that depends on the Internet functioning; but not some technological property of the Internet.

The ability to knock domains out, at the DNS level, was demonstrated not too long ago - e.g. http://torrentfreak.com/feds-seize-pokerstars-full-tilt-poke...

There's this meme out there that the Internet can't be turned off because it's designed in a 'decentralised' manner, and it's just not true, in pragmatic terms.


Forget the power of the courts, the US government outright owns some significant pieces of the internet infrastructure.


Which ones? Aside from a few DNS servers and some nice-to-have services (time.nist.gov or whatever), I can't think of any offhand.


it takes sustained and continuous efforts by a government to even come close to filtering/censoring the Internet effectively

Yes, and they (governments) are exactly the sort of people who sometimes want to censor speech and the internet. Some countries have filtered the internet before; this is not an abstract problem.


This is true. I think it's the only practical approach given current technology though.

When somebody comes up with a £50 home wifi access point that has a range of a mile or more inside an urban environment, that is when we'll get a proper darknet.


There are WiMAX routers that already make this possible, but the vast majority of people want plain wifi. It does the job and, most importantly, it's what the ISP installs for them.

There is not a good evolutionarily stable strategy that leads to 1-mile-range wifi becoming standard. At least not now and not any time soon.


I'd love to have a WiMAX router, but aren't they expensive? And don't they require a license to operate?


Why would you need a mile in an urban environment? Trace out a circle with that radius in Google Earth; it's an absurd range.

The idea is that you have a pretty high density of users. 100 meters would be more than enough to cross any street and reach several buildings away in any direction. Even in an American suburb, it'd reach a few houses away. That's a realistic goal: 100 meters, not 1600.


Because none of my neighbours will have the hardware. And if they don't have it, I'm not getting it because it's a waste of my time.

If you expand the number of neighbours to all of those that are within a mile of me, you increase the likelihood of finding somebody by a large amount. I would buy kit just because I'd be interested in finding these people.

Chicken/Egg etc
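The chicken-and-egg disagreement above really comes down to node density: expected reachable peers scale with density times range squared. A rough back-of-the-envelope (every figure here is an assumption, not data):

```python
import math

def expected_neighbors(nodes_per_km2, range_m):
    """Expected number of nodes in radio range, assuming uniform density."""
    return nodes_per_km2 * math.pi * (range_m / 1000) ** 2

# Illustrative adoption densities (assumed): dense city vs. suburb.
urban  = expected_neighbors(nodes_per_km2=200, range_m=100)
suburb = expected_neighbors(nodes_per_km2=20,  range_m=100)
mile   = expected_neighbors(nodes_per_km2=20,  range_m=1609)

assert urban > 6     # 100 m works fine where adoption density is high
assert suburb < 1    # in a sparse suburb, 100 m likely finds nobody
assert mile > 100    # a mile of range compensates for low adoption
```

So both comments are right under their own assumptions: 100 m suffices once adoption is high, while a mile of range is what bootstraps the network when almost nobody has the hardware yet.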


I'm not 100% sure on this, but I think that part of what makes the longer range nice is that it requires fewer hops and thus lower latency.


After reading the article and skimming some posts on their subreddit, I think the idea generally concerns the capabilities of consumer electronics to 'replicate' the Internet in a completely decentralized fashion. By doing so, there's no central authority managing your packets, and if you want to visit a particular node (i.e., to visit a web site), the problem becomes analogous to the stochastic shortest path problem, which is NP-complete. So, wouldn't this system require P = NP for it to have any viability at all when factoring in the effects of latency and downtime?


Most NP-complete problems can be approximated quickly enough for practical purposes.

Decentralized routing is a hard problem, but there has been a lot of research with pretty convincing results. I'm not sure if it scales to the size of the internet, though.
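One concrete illustration of why shortest paths aren't required: Chord-style greedy routing, where each node keeps only logarithmically many "finger" links and forwarding decisions are purely local, yet any destination is reached in O(log n) hops. A toy sketch on an idealized ring (assumed topology, not any particular deployed system):

```python
# Greedy routing on a ring of n nodes where node i has links to
# (i + 2^k) mod n. Each hop uses only local knowledge of the target id,
# yet the path length is bounded by log2(n) -- no global search needed.

def greedy_route(src, dst, n):
    """Return the hop-by-hop path from src to dst on the ring."""
    path = [src]
    cur = src
    while cur != dst:
        gap = (dst - cur) % n
        # Take the largest power-of-two finger that doesn't overshoot.
        step = 1
        while step * 2 <= gap:
            step *= 2
        cur = (cur + step) % n
        path.append(cur)
    return path

path = greedy_route(3, 900, 1024)
assert path[-1] == 900
assert len(path) - 1 <= 10   # at most log2(1024) = 10 hops
```

The route isn't the shortest possible, but it's within a logarithmic bound, found with constant per-node state and no centralized coordination, which is the flavor of result the papers below formalize.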


That's the point, though: I don't know if you can find an 'approximate' solution to decentralized routing, since you need precision. Do you have any peer-reviewed articles evidencing these convincing results? I'd be extremely interested in learning some more about this.


Who is the current centralized authority that figures out the shortest routes today?


Difference in definitions: I read "approximate solution" as an "approximate route," as in the node choices are approximated (potentially leading to a wrong final node, or losing packets at a dead end). Instead, finding approximately the shortest route that doesn't lose packets and gets you to the correct node would presumably work.


There has been some work in this area:

http://www.cs.uiuc.edu/~caesar/papers/rofl.pdf http://www.dca.fee.unicamp.br/~pasquini/artigos/sbrc_2009.pd... http://www.cs.uiuc.edu/homes/pbg/papers/Scalable_Routing_on_...

None of these find shortest paths; they aim to find paths that are within a constant factor of shortest.


You really just need to find some reasonable route. Your current routes are very unlikely to be the shortest / cheapest / metric of choice; they are mostly decided politically.


Looks interesting, but it seems to be just a talking shop at the moment, without any actual goods to show yet.

it's hard to imagine that TDP will ever move beyond the conceptual stage. The group behind the effort is big on ideas but short on technical solutions for rolling out a practical implementation

I like the idea of using WiFi as hardware, since it's a technology that's almost everywhere now.


This is interesting. I've been toying with a darknet idea, but it's not going to mirror the internet. "My" version is limited to plain text and packets no larger than 1 KB, if even that. It'll show up on TechCrunch eventually, but I want to talk with some people first.


I'd like to note that "the Internet" is a vast, broadly-scoped amalgamation of routers and different network topologies. They don't use one kind of hardware or software to manage it all. Any successor or parallel alternative network should be at least as flexible to achieve its goals.

I'd also like to suggest that the network be powered purely by standard Internet client machines and off-the-shelf hardware. Custom software would be necessary, but it's better to rely on a random guy with a quick installer on a USB key than custom hardware mesh routers deployed by professional installers.


Lucky for us, the new IEEE 802.11s amendment was ratified in September and (finally) standardizes mesh networking. There is support in Linux and FreeBSD for several of the most popular wireless drivers. The open-source router firmware DD-WRT also supports 11s.


That is pretty damn cool, but I think for a serious "alternate Internet" to succeed, OSI layers 1 & 2 should not matter. An application installed by a regular user needs to be able to do most of the heavy lifting with most generic off-the-shelf hardware to get a really decentralized open alternative to take off like wildfire.


They should do some Kickstarter projects around this. I bet they could find a load of libertarians going nuts over the idea. I wouldn't mind throwing some money at it myself.


It would be pretty interesting if Republic Wireless allowed their phones to connect to the Darknet. There'd be no technical reason why they couldn't, though they'd perhaps need to beef up their cheap Android phones with mesh protocols to form their own nodes.

(Referenced HN thread that also happens to be on the front page: http://news.ycombinator.com/item?id=3208563)


Here is a comprehensive list of open mesh/protocol links http://openmesh.wordpress.com/2011/01/30/a-list-of-open-sour...



