Journalists are asking us about the Black Hat talk on attacking Tor
that got cancelled. We're still working with CERT to do a coordinated
disclosure of the details (hopefully this week), but I figured I should
share a few details with you earlier than that.
1) We did not ask Black Hat or CERT to cancel the talk. We did (and still
do) have questions for the presenter and for CERT about some aspects
of the research, but we had no idea the talk would be pulled before the
announcement was made.
2) In response to our questions, we were informally shown some
materials. We never received slides or any description of what would
be presented in the talk itself beyond what was available on the Black
Hat webpage.
3) We encourage research on the Tor network along with responsible
disclosure of all new and interesting attacks. Researchers who have told
us about bugs in the past have found us pretty helpful in fixing issues,
and generally positive to work with.
The mundane (and thus most likely) answer is that the CMU lawyers wanted to pull it, either because they want to sort out intellectual property questions first or because they're worried about liability.
This research has always been legally murky, enough so that I can imagine some technical people deciding they didn't care and going ahead with it anyway. Others under them might have gone along too, pulled by sheer groupthink if not genuine agreement.
This attack was unique not in that it made strong claims, but in that its claims were unusually specific, which suggests some amount of empiricism. I feel like you could only reasonably make claims that specific if you had actually tested the attack against a very strong network simulation (which doesn't exist for Tor) or against the real network.
It's not like other researchers haven't done similar things to get results about Tor. There are a few workshop and academic conference papers that talk about results obtained by analyzing Tor traffic; this is technically wiretapping according to the Tor project, but previously it's always been mundane enough that nobody has gotten involved. This experiment might have compromised some people's very personal information, and it's incredibly public.
This is all really just an expansion of "they're worried about some sort of liability." In any case that's by far the likelier of the two; I can't imagine you could sell IP related to this.
I more or less agree that liability seems more likely, but I have no idea what the nature of the attack is, so it's always possible it's an offshoot of some other research they're doing which could be patentable. Alternatively, CMU procedure may require approval of all talks for brand and IP protection reasons, and he simply hadn't gone through the proper process, so they pulled the talk in the meantime (rather than pulling it in response to an actual analysis of its contents). That last scenario seems unlikely, though: if it were just procedural, you'd imagine there would have been no rush to pull the abstract, which contained no details.
A government agency that wants to stay a step ahead of the competition or of its targets?
Generally when you see some outside force trying to suppress security research and the presentation thereof, it comes from the companies who will actually have to fix the problems and deal with support calls (or companies who feel that security through obscurity is sufficient and are hoping to somehow suppress the information from ever getting out). In this case, that would be maybe the Tor Project, but they generally are very receptive to this kind of thing.
What about the political dissidents who use Tor? They could be at risk of certain death if caught by the authoritarian regimes they live under. Without coordinated disclosure, the "researchers" might as well have been signing death warrants.
In fact, I'm not aware of a vulnerability research conference that does get nosy about this stuff. I even reviewed for Usenix WOOT one year, and we didn't vet research for "coordinated disclosure". Not even Usenix works the way you want Black Hat to.
That's an odd construction...
Sometimes the speakers screwed up, didn't get their material together, and weren't important enough for that to be ignored. Other times they're threatened by their employer or some external force.
Subway hacking, Paget's RFID talk (and GSM a few years later, IIRC), etc. There's quite a history of great presentations that never happened for one reason or another.
Or maybe someone didn't want to compromise Tor in public until the Tor project had a chance to address the issues.
To some degree, isn't this what the Black Hat conference is all about?
The Black Hat conference is about promoting the security industry. DEFCON, on the other hand, is about promoting hacker culture. It's a lot more common to see 0-day talks at DEFCON because there's much less industry spotlight [and thus, fewer general business professionals that could get scared by some new attack being announced].
What these "researchers" were doing was just reckless. When it comes to Tor, lives are on the line. This kind of irresponsible disclosure is abhorrent, at best.
The vast majority of people do not want that Internet. See, for example, the popularity of Facebook. (About 1.2bn users per month).
You need technical measures, and law, and effective oversight.
Law and "oversight" are really not likely to be effective on their own. They're only useful as part of a "defense in depth" strategy: first we make it technically impossible for any attacker to get this information, and then, if our protocols turn out to have flaws, the law says the government isn't allowed to exploit them anyway, giving us a second (weaker) layer of defense behind the primary one.
Oversight is a legal measure applied to police and security agencies to ensure that they are obeying the law, not something you do to the general public.
I'd like to see a pay-per-install Tor Browser program materialize, one that would incentivize retailers and ISP techs to install Tor Browser on customer devices. Every device should be connected to Tor from the moment it is powered on. Then we could at least go back to having free speech on the Internet.
Or do you view envelopes with this same paranoia? https://www.philzimmermann.com/EN/essays/WhyIWrotePGP.html
>everyone would be suspicious all the time, and therefore no one would be suspicious ever.