The talk about de-anonymizing Tor at the BlackHat conference has been removed (tux.so)
203 points by lanbird on July 21, 2014 | 49 comments



Roger's response here is probably relevant:

https://lists.torproject.org/pipermail/tor-talk/2014-July/03...

  Hi folks,

  Journalists are asking us about the Black Hat talk on attacking Tor
  that got cancelled. We're still working with CERT to do a coordinated
  disclosure of the details (hopefully this week), but I figured I should
  share a few details with you earlier than that.

  1) We did not ask Black Hat or CERT to cancel the talk. We did (and still
  do) have questions for the presenter and for CERT about some aspects
  of the research, but we had no idea the talk would be pulled before the
  announcement was made.

  2) In response to our questions, we were informally shown some
  materials. We never received slides or any description of what would
  be presented in the talk itself beyond what was available on the Black
  Hat Webpage.

  3) We encourage research on the Tor network along with responsible
  disclosure of all new and interesting attacks. Researchers who have told
  us about bugs in the past have found us pretty helpful in fixing issues,
  and generally positive to work with.
(IMHO, 2) and 3) are a polite way of saying that this particular talk did not feature much in the way of responsible disclosure. But they are not related to 1).)


Coordinated disclosure is the proper term.


A Black Hat spokeswoman told Reuters that the talk had been canceled at the request of lawyers for Carnegie-Mellon University, where the speakers work as researchers. A CMU spokesman had no immediate comment.

Source: http://www.reuters.com/article/2014/07/21/cybercrime-confere...



Interesting, they gave a talk at Education City in Qatar and I had no idea about it? Very disappointed, and surprised they hosted these kinds of experts for a talk on this topic (censorship circumvention is not looked upon kindly there).


I have to imagine that this is for some sort of internal bureaucratic reason. I don't see who is in a position to even want to stop this talk - almost certainly not the Tor project itself.

The mundane (and thus most likely) answer is that the CMU lawyers wanted to pull it either because they want to sort out some sort of intellectual property first, or they're worried about some sort of liability.


I would imagine the researchers broke quite a few laws verifying this attack on the public Tor network, if they indeed did so. And since Tor is incredibly hard to simulate at that level, it's likely that they did. Even if they developed the attack on a simulated network they may have run the tool for verification against the live network. Maybe they did it to de-anonymize a drug marketplace or something else they thought they could get "ethical hacker points" for. Maybe they sent the information to the feds and thought they were doing the right thing.

This is something that has always been legally murky, enough so that I feel like some technical people could decide that they didn't care and just go with it. More people under them might have as well, pulled along by sheer groupthink if not genuine agreement.

This attack was unique not in that it made strong claims, but in that it made unusually specific strong claims that indicated some amount of empiricism. I feel like you could only reasonably make claims that specific if you actually tested the attack against a very strong network simulation (which doesn't exist for Tor) or the real network.

It's not like other researchers haven't done similar things to get results about Tor. There are a few workshop and academic conference papers that talk about results obtained by analyzing Tor traffic; this is technically wiretapping according to the Tor project, but previously it's always been mundane enough that nobody has gotten involved. This experiment might have compromised some people's very personal information, and it's incredibly public.

This is all really just an expansion of "they're worried about some sort of liability." In any case that's by far the likelier of the two; I can't imagine you could sell IP related to this.


>This is all really just an expansion of "they're worried about some sort of liability." In any case that's by far the likelier of the two; I can't imagine you could sell IP related to this.

I more or less agree that liability seems more likely, but I have no idea what the nature of the attack is, so it's always possible it's an offshoot of some other research they're doing that could be patentable. Alternatively, it could be that CMU procedure requires approval for all talks for brand- and IP-protection reasons and he just hadn't gone through the proper process, so in the meantime they pulled it (rather than pulling it in response to an actual analysis of the talk). This last one seems unlikely, though, since you'd imagine there would have been no rush to pull the abstract (which contained no details).


At a school like CMU it's hard for me to believe they'd cancel a researcher's talk because it wasn't properly disclosed. It'd create a headache for the IP team, but they wouldn't cancel the talk. That just makes them look awful.


> I don't see who is in a position to even want to stop this talk

A government agency that wants to stay a step ahead of the competition or of its targets?


It's possible, but I think that's the paranoid / Hollywood-spy version of this. I'm not saying this sort of thing doesn't happen - the spy agencies take themselves very seriously but aren't big on effective policies anyway. But unless there's a specific operation relying on this specific exploit, and someone in the government got advance details of its nature, it doesn't seem to have a particularly high prior probability. Anyone with a significant budget can probably pay for any number of zero-days, so they don't have a single weak point like "if anyone fixes this bug in the software, our operation / malware will stop working".

Generally when you see some outside force trying to suppress security research and the presentation thereof, it comes from the companies who will actually have to fix the problems and deal with support calls (or companies who feel that security through obscurity is sufficient and are hoping to somehow suppress the information from ever getting out). In this case, that would be maybe the Tor Project, but they generally are very receptive to this kind of thing.


Or a university that doesn't want to get sued / get bad publicity for screwing with a tool used by government agencies...


Legality aside, I'm surprised this wasn't pulled on ethical grounds. Does Black Hat not require "researchers" to follow responsible/coordinated disclosure?

What about the political dissidents who use Tor? They could be at risk of certain death if caught by the authoritarian regimes they live under. Without coordinated disclosure, the "researchers" might as well have been signing death warrants.


Black Hat is a venue for presenting research. They don't influence the procedures used by researchers at all. And the Black Hat review board is not stuffed full of people who buy into "responsible disclosure".

In fact: I'm not aware of a vulnerability research conference that does get nosy about this stuff. I even reviewed for Usenix WOOT one year, and we didn't vet research for "coordinated disclosure". Not even Usenix works the way you want BH to.


"at risk of certain death"

That's an odd construction...


I doubt this removal is anything sinister. Attacks on Tor have been a relatively common theme at many large security conferences. For example, there was a presentation at IEEE S&P 2013 on de-anonymizing Tor hidden services (http://www.ieee-security.org/TC/SP2013/papers/4977a080.pdf). The Tor people are typically pretty open to this stuff. It was most likely removed due to something mundane, like the presenters having issues getting through their organization's bureaucracy.


Mid-summer tends to be pullout season for Black Hat and DEFCON speakers. A handful happen every year; that's why they have alternates.

Sometimes the speakers screwed up and didn't get their material together and weren't important enough for that to be overlooked. Other times they're threatened by their employer or some external force.

Subway hacking, Paget's RFID (and GSM a few years later, IIRC), etc. There's quite a history of great presentations that never happened for one reason or another.


Wild conjecture: most of the guys at CERT have a security clearance. This talk may have been viewed as crossing streams he could not cross. He likely had to get the talk approved by whoever manages his clearance to ensure it wasn't leaking classified information. Someone further up the chain may have caught wind of it and pulled it.


Speakers drop out all the time.

Or maybe someone didn't want to compromise Tor in public until the Tor project had a chance to address the issues.


>>> Or maybe someone didn't want to compromise Tor in public until the Tor project had a chance to address the issues.

To some degree, isn't this what the Black Hat conference is all about?


Seems to me the public outing of projects only happens when they refuse to implement a fix, or deny that something is an issue.


Not at all... Black Hat is one of the more commercial, "industry" security conferences out there.


Every year, when some controversial BH talk exposes a company's unpatched security vulnerabilities (or even questions the company's integrity), either the talk is pulled or the talk materials are literally ripped out of the books or CD-ROMs given to attendees. As soon as a company gets wind that a talk might catch them with their pants down, they threaten to file suit and Black Hat pulls the talk.

The Black Hat conference is about promoting the security industry. DEFCON, on the other hand, is about promoting hacker culture. It's a lot more common to see 0-day talks at DEFCON because there's much less industry spotlight [and thus, fewer general business professionals that could get scared by some new attack being announced].


No, to some degree BH is about compromising X in public after X has been repeatedly contacted with the necessary details AND given ample time to address the issues.

What these "researchers" were doing was just reckless. When it comes to Tor, lives are on the line. This kind of irresponsible disclosure is abhorrent, at best.


I don't know what BH you've been attending for the last 10 years, but it's not the one I've been going to.


A lot of "I don't like your post so I'm downvoting it", Reddit-esque behaviour in this thread.


At this point it is not really a good idea to use Tor anyway, given that you are then automatically targeted by the NSA and, at the same time, potentially provide cover for the covert operations of several countries. What is really needed is political action to limit the ability of security agencies to indiscriminately monitor web traffic.


I disagree. The only way to prevent security agencies from indiscriminately monitoring web traffic is to make it technically impossible. No political action is going to stop every such entity in the world from monitoring web traffic, let alone prevent non-government entities from doing so. I am not saying Tor is the answer, but whatever the answer is, it will have to be technical.


> The only way to prevent security agencies from indiscriminately monitoring web traffic is to make it technically impossible.

The vast majority of people do not want that Internet. See, for example, the popularity of Facebook. (About 1.2bn users per month).

You need technical measures, and law, and effective oversight.


I would guess that the vast majority of users don't know enough to have an opinion about the security and privacy of their browsing experience, but would be in support of such improvements if it caused them no inconvenience.

Law and "oversight" are really not likely to be effective. They're only useful as part of a "defense in depth" strategy, where we make it technically impossible for any attacker to get this information, and if our protocols have flaws in them, the government shouldn't be allowed to look at them anyway, so we have a second (weaker) layer of defense behind our primary defense.


Privacy or "oversight," pick one. With strong croup and deniability privacy is absolute, unless you want torture to be a law enforcement tactic. If you can't handle that, you might as well communicate in the clear.


What?

Oversight is a legal measure applied to police and security agencies to ensure that they are obeying the law, not something you do to the general public.


Ideally, but in these times...


Well, Tor is obviously not the answer: it introduces too much latency, and at the moment a small number of nodes, mostly located in the US, carry the majority of all traffic. No technical solution will prevent governments from monitoring all important network hubs, and it seems impossible to prevent them from gathering at least metadata there. If enough routers in an onion-routing scheme are compromised, the same is true. If there were laws that guaranteed the physical integrity of data centers, it would definitely be much easier to devise safe routing protocols.


Yes, Tor is not the answer. I can think of a hypothetical technical solution to the problem, however. If everyone used an onion-routing protocol where everyone also acts as an exit node, you could create a situation where even meta-information would be unobtainable.
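
As a rough illustration of that layered-encryption idea (and emphatically not Tor's actual protocol, which negotiates per-hop keys over circuits, pads traffic, and much more), here is a minimal sketch assuming a hypothetical network where every participant holds a symmetric key and can act as a relay or exit. The function names and key handling are purely illustrative, using Fernet from Python's cryptography package:

  # Minimal sketch, NOT Tor's real protocol: layered "onion" encryption for a
  # hypothetical network in which any participant can serve as a relay or exit.
  # Assumes the "cryptography" package is installed; keys here are symmetric
  # Fernet keys purely for brevity.
  from cryptography.fernet import Fernet

  def build_onion(message, hop_keys):
      """Wrap the message in one encryption layer per hop (outermost = first hop)."""
      data = message
      for key in reversed(hop_keys):   # the last hop's layer goes on first (innermost)
          data = Fernet(key).encrypt(data)
      return data

  def peel_layer(onion, key):
      """Each hop strips exactly one layer; only the final hop sees the plaintext."""
      return Fernet(key).decrypt(onion)

  if __name__ == "__main__":
      hop_keys = [Fernet.generate_key() for _ in range(3)]   # three volunteer hops
      onion = build_onion(b"hello", hop_keys)
      for key in hop_keys:             # each hop, in path order, peels only its layer
          onion = peel_layer(onion, key)
      assert onion == b"hello"

The point of the sketch is only that each hop can strip exactly one layer, so no single relay sees both the sender and the plaintext; whether that actually hides metadata from an observer watching the major hubs is exactly the question raised above.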


As it stands, Tor deliberately routes the majority of the traffic through a minority of the available exit nodes; they explain that they do that for performance reasons. Given that they are financed almost exclusively by the US government, and that some of the developers have very friendly relations with law enforcement to this day, it is at least plausible that there are other reasons at work. In some of the leaked NSA memos they even state that while they have not been able to fully compromise the Tor network so far, at least the majority of their targets are using it. All of this is a clear indication to me that Tor should be abandoned sooner rather than later.


True, but if everyone were to use Tor all the time, everyone would be suspicious all the time, and therefore no one would be suspicious ever.

I'd like to see a pay-per-install Tor browser program materialize, one that would incentivize retailers and ISP techs to install Tor browser on customer devices. Every device should be connected to Tor from the moment it is powered on. Then we could at least go back to having free speech on the Internet.


"...suspiciuous all the time"? What nonsense. When Everyone uses Tor (or anything else), by definition that is "normal".

Or do you view envelopes with this same paranoia? https://www.philzimmermann.com/EN/essays/WhyIWrotePGP.html


To the NSA, a normal Internet citizen is a terrorist. Just searching the Web for anything Tor-related gets you put on a list. You are preaching to the choir.


He addressed that in literally the rest of the sentence:

>everyone would be suspicious all the time, and therefore no one would be suspicious ever.


I think the opposite is the right thing. We should try to get everyone on that list.


This is not realistic, though, and as I said, it would actually help the security establishment and the military if more people used Tor.


It's pretty realistic given the push toward Tor-enabled FOSS routers. Many people may begin using Tor without ever realizing it, if certain people get their way and the Tor network expands enough to realistically allow such usage.


No, it wouldn't. How is it possibly helpful to the security establishment if I use Tor for what is essentially an innocuous purpose?


It helps them because they are using the service for covert operations. If they were the only ones using it, it would be useless to them. They very cleverly positioned it as an instrument for dissidents and at the same time told the generals that this would actually be an advantage. On top of that, the NSA is known to successfully target Tor users. If you are really doing something that is against US security interests, you would be mad to use Tor - that was all I was trying to say.


The same thing can be said about the internet as a whole, even more so.


On the contrary, everyone should use it. I love using it for queries I feel embarrassed about, like googling illness symptoms or watching wildlife documentaries.


but then how can amazon.com bombard you with ads for Anal Wart Cream for the next six weeks?


As soon as I learned that companies are people, I suspected Sprint might have something like that.



