First of all, before people go out to the streets they need to coordinate somehow, and Facebook seems to be a good way of doing that recently.
Second of all, rants made publicly have a chance of creating a huge PR shitstorm for CMU when some journalist gets wind of it. Sadly, in this age media pressure is the only thing that actually works (usually for the worse).
I would love to see how they managed to justify this to the IRB, and how the IRB failed at its only job, inasmuch as the only possible purpose for turning over the IPs to the FBI is to inflict harm on people, harm which happened. Given how IRBs worry about the most exotic potentials for harm...
I've complained a number of times about sketchy behavior from researchers in the space of "Bitcoin transaction deanonymization" that was likely to cause harm to people, and have reliably gotten a response from CS departments that sounds like "anyone could do this, so there are no ethical considerations".
First, if we look to physical law, there is no experiment that couldn't just be conducted by 'anyone' -- the fact that nothing prevents me from stabbing you in the chest just to see what happens doesn't make it ethical.
Much research in this space ends up actually breaking the law -- at least pedantically. For example, in the above citation they talk about the efforts they had to go through to avoid being blocked ("We spent some additional effort making our measurements as difficult to detect as possible", "Perhaps, bypassing the authentication mechanism and associated CAPTCHA by reusing an authentication cookie could be construed as a “hack.” However, we argue this is nothing more than using a convenient feature that the site operators have willingly offered their visitors."), which demonstrates that their access was beyond their authority -- a violation of the CFAA (at least by the standard Weev, who incremented a counter in a URL, was prosecuted under!). Even outside of criminal law, the foreseeable harm you cause to another by investigating them opens you up to a tort even when 'anyone' could have performed the investigation.
Researchers have access to institutional, governmental, and structural support (cheap students), which heightens the potential risk of their work (as well as their potential liability). But even ignoring that, research that causes harm to people is harmful regardless of who does it. Owing to the heightened risk and liability, public institutions have infrastructure for harm mitigation, which here appears to have been bypassed.
It sounds like over a million dollars of public funding went into these recent attacks, and the effort the Tor Project spent defending against those attacks was diverted from protecting against other ones.
But I don't think any of this is a problem limited to CMU or even University research.
I think computing professionals are simply falling down on their ordinary professional and ethical obligations to the users of their systems on a regular basis, in part because we're making the mistake of confusing the adversarial model we use for security analysis with an ethical maxim... CMU acting as a paid, outsourced law-enforcement due-process-violation mill is just a symptom of a greater dysfunction, along with things like the Facebook emotional manipulation experiment.
Edit: This is speculation on my part based on responses I've seen when asking researchers about their IRB approvals; but I am told that at least on other projects CMU CS does at times seek IRB approval, so my impression that they never do is probably sampling error.
I don't think that bitcoin deanonymization research is a bad thing. Perhaps it should be done in a way that tests the anonymity of transactions and accounts created for the purpose of the research, but to research the topic itself is easily classified as legitimate security research.
The issue starts when you publicly "out" organizations or people in your research findings, or take money from authorities with the understanding that you'll report back data on people and organizations.
> I don't think that bitcoin deanonymization research is a bad thing.
I never said it was; but care should be taken to avoid harming people. It's unlikely that someone will take that care to mitigate harm if they are of the view that they have no obligation to mitigate to begin with.
> The issue starts when you publicly "out" organizations or people in your research findings, or take money from authorities with the understanding that you'll report back data on people and organizations.
The article isn't really clear, but the alleged activities were attributed to SEI, which is not at all the same thing as CMU, although they are obviously related.
Lots of room for discussion about these activities but it probably should be done in the context of expectations for a Federally Funded Research and Development Center (and one funded by the Department of Defense) and not in the context of expectations for a private university.
> The article isn't really clear, but the alleged activities were attributed to SEI, which is not at all the same thing as CMU, although they are obviously related.
Looks to me like SEI reports directly to the CMU Provost.
Glad to see someone pointed this out. It's a pretty meaningful distinction. To provide some additional context for other readers, the SEI is in the same category as Lincoln Labs & Los Alamos: managed by a university, but operated largely independently and funded by the federal government.
when you are a subdomain on cmu.edu, I am going to hold cmu.edu responsible for your actions (regardless of the formal legal relationship between the two)
Why? Should bit.ly be held responsible for the actions of the Libyan government? Should customers of heroku be held responsible for Heroku's corporate actions (customer.herokuapp.com)?
The domain hierarchy doesn't actually tell you much about the relationships between entities and certainly doesn't tell you enough on its own to discern culpability between entities.
Anyone paying attention has been aware of this since last fall. It's common knowledge that CMU/SEI works for the US military. And the timing was a total giveaway. They pulled the Black Hat talk, and SR2 etc went down not long thereafter. But it's not surprising that CMU/SEI would do this. Legal niceties aside, it's arguably part of their mission.
The focus here, in my opinion, ought to be on the Tor Project's role in this. Although they did speak up last summer, and have blogged this update, I'm not impressed. They ignored the CMU attack for five months last spring. Maybe they did get blindsided by a new kind of attack. Maybe they just weren't paying enough attention. Or maybe, as they've noted, there aren't many devs who focus on onion services.
But so far there's been no mea culpa from the Tor Project. It's disingenuous to whinge about unethical researchers and investigators. It's stuff like this that fuels conspiracy theories about Tor and the US military. The Tor Project needs to take a stand.
Edit: The Tor Project didn't see the CMU/SEI attack while it was happening. Maybe it was too subtle. Maybe the network is too chaotic. But I have no basis for saying that they ignored it. Still, maybe the Tor network needs an IDS.
> We actually noticed these relays when they joined the network, since the DocTor scanner reported them. We considered the set of new relays at the time, and made a decision that it wasn't that large a fraction of the network. It's clear there's room for improvement in terms of how to let the Tor network grow while also ensuring we maintain social connections with the operators of all large groups of relays.
No, it sounds like they knew new nodes joined the network.
Not that they knew what those relays were doing - they seem to have suspected something, but made the decision that it was a small enough fraction of the network not to worry about; obviously, in hindsight, they were wrong about that.
Yes, they saw a bunch of new relays, starting in February 2014. And they weren't doing anything obviously alarming. But once Tor devs had seen the Black Hat abstract, they figured out what to look for. Once they were monitoring relevant parameters, the attack was obvious.
In retrospect, more monitoring seems an obvious approach. But that would probably hurt performance. And threaten security and anonymity. Maybe there's a middle ground.
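For what it's worth, the kind of monitoring I have in mind needn't be heavyweight or threaten anonymity; it only needs consensus metadata, not traffic. A minimal sketch, assuming you already have per-relay consensus data (the field names, the 30-day window, and the 5% threshold are all invented for illustration; this is not DocTor's actual logic):

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_suspicious_influx(relays, window_days=30, max_fraction=0.05):
    """relays: list of dicts with 'fingerprint', 'first_seen' (datetime),
    'consensus_weight', and 'netblock' (e.g. the /16 the relay sits in)."""
    total_weight = sum(r['consensus_weight'] for r in relays)
    cutoff = datetime.utcnow() - timedelta(days=window_days)

    # Group recently joined relays by a shared attribute (here: netblock).
    recent_by_block = defaultdict(list)
    for r in relays:
        if r['first_seen'] >= cutoff:
            recent_by_block[r['netblock']].append(r)

    # Alert when one group of newcomers controls too large a slice of the network.
    alerts = []
    for block, group in recent_by_block.items():
        weight = sum(r['consensus_weight'] for r in group)
        if weight / total_weight > max_fraction:
            alerts.append((block, len(group), weight / total_weight))
    return alerts
```

Something like this would at least surface "a coordinated group just grew past X% of the network" as a human-review event, without touching any user traffic.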
I don't have much to add, but I just want to say this is my favorite comment in this thread. It's not confused about basic details (i.e. CMU vs. SEI), lays out some relevant history, and touches on what is to my mind the single biggest concern to emerge from this: namely, Tor's unsettling brittleness.
This is no different from selling an exploit on the black market for $1m. It's sad that a university is going black hat. A university should be a safe place, with great respect for ethics in the fields it researches.
Can we please stop decrying black hats without nuance? Because that whole "CMU is blackhat now" thing? That's not what happened here.
Selling exploits exclusively to assist the government is very much like a large swath of white hats.
What do you think Infragard is? It's Feds rubbing elbows with crooked pro-government technologists. But they're the white hats, remember.
At the end of the day, you only have green hats (the color of money), and people who don't wear hats at all. The black/white distinction has become meaningless.
Surely selling to a government is at least grey hat? In my book, white hat never does offense. If you're creating an exploit that isn't proof of concept and that harms someone, you're not white hat anymore.
Do you know what your entire government is up to? Even if you sell it to the NSA, how do you know it won't end up in the hands of the DEA to spy on a large number of people just to find a couple of drug traffickers?
We don't know. We can't know.
There's nothing grey about that choice. They're the ones that call themselves white.
when the entity purchasing the exploits (the US government) is known to
* engage in human torture
* practice extraordinary rendition to third world countries to circumvent due process
* photograph prisoners naked
* practice extra-judicial, aerial execution
then it's reasonable to call the hat black. That we used to think their hat was white is a testament to their power of deceit, not an indication that the "hat system" has lost meaning.
I would argue that the hat color is from the perspective of "the public", and clearly the public is on the losing end of lost anonymity.
Categorizing people and power as separate entities is the traditional political battle that has been going on forever. The people are the power, and the technology they have enables them to exercise that power. From this perspective, the battle is for access to better technology. What Moxie calls "power" should be ignored; they won't matter when technology outlasts them.
I didn't say black hat. I said black market. I think it's morally as wrong as selling an exploit on the black market. That money has blood on it now. I'm not saying the researchers are black hat hackers. I just think that ethically they're walking a very dangerous path.
I believe the complaint here is not that they were analyzing the software, but that they were launching active attacks and using the results in an unethical manner, i.e. to conduct possibly-illegal criminal investigations. This is Not OK in the same way that it's Not OK to go around breaking into people's houses with lockpicks and searching them on the grounds that you are conducting lock research. Note that I understand that physical lock security is a relatively well-settled field. That is not relevant to the point at hand; even if it were an important thing to study, it would be Not OK for the government to be paying people to go around picking locks.
Edit: Furthermore, by the same logic, it is acceptable for the government to do nearly anything illegal in order to conduct "research". Microsoft, Google, and co claim to be secure, so therefore it should be OK for the government to be attacking them.
A group that performs an actual attack, hands the result to a nation state adversary, and doesn't publish the results either before or after (except in the abstract of a withdrawn talk) is hardly the "loyal opposition".
edit: Academics are not Obama's proverbial JV team. One should expect them to be as smart as those working for internal agencies, if somewhat less focused on attacking real-world systems. It seems that in this case, the only difference is that they're somewhat worse at keeping secrets, but given that the talk was withdrawn, it's not that they didn't try. So I don't see the grounds for the implicit suggestion in the quoted tweets that they aren't 'real' adversaries.
This misses the point entirely. Nobody is saying that security researchers shouldn't find vulnerabilities.
The problem is that rather than publishing or filing a bug, the researchers helped the government exploit it for $1M. Effectively the same as selling the exploit on the black market.
If, as the Tor blog post insinuates, the product of the CMU research was "targeted data dump to the Feds to investigate criminal activity," I could see the hand-wringing being warranted. That's paying a university to do law enforcement work.
If the product was (as it seems based on public information) "paper about how to attack Tor along with a couple of case studies," I really don't see the problem.
The kerfuffle around the talk and paper's release is certainly interesting in its own right, though.
> That's paying a university to do law enforcement work.
Not really. The organization involved is SEI, a Federally Funded Research and Development Center sponsored by the Department of Defense. Not at all the same thing as CMU.
Mentioning this for clarity, not as a statement about the actual activities.
A lot of people in this thread are misreading the blog post. The primary complaint is against the Feds and not CMU [1], specifically that the Feds used a loophole in the legal system to exploit Tor. They are then asking the university and researchers to be more careful about the funding/nature of their research and blaming them for bringing academic security research into disrepute.
I agree that some of the middle paragraphs could have been worded more carefully.
[1] "Civil liberties are under attack if law enforcement believes it can circumvent the rules of evidence by outsourcing police work to universities." from post
There's a difference between white hat and black hat. Working for the other side is not white hat, and arguably security software writers should only welcome white hat research.
It's stunning that you may have read this conversation and still think people have an issue with security research on Tor in general.
The testing, and perhaps even the specific methods of this test, are not the question. The question is the morality of CMU being paid by authorities to uncover PII about hidden services and report back to them.
Test the system (potentially with your own uni's red/blue teams instead of on other 'public' hidden services) and report back anonymized reports of the data. One can say "Doing X, I got result Y" without giving out the personal details of results, much less to the FBI.
Yes. It's the difference between publishing a paper "How to break TLS" and a paper "How to break TLS, with an appendix of 10,000 credit card numbers I logged using this method".
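To make the "report Y without the personal details" idea concrete, here's a toy sketch of what an anonymized summary could look like before anything leaves the lab (the record fields and hashing scheme are invented for illustration; this isn't anyone's actual methodology):

```python
import hashlib
import os

def summarize_findings(deanonymized_records):
    """deanonymized_records: list of dicts like {'ip': ..., 'onion': ...}.
    Returns aggregate counts only -- no identifiers."""
    salt = os.urandom(16)  # discarded after summarizing, so hashes aren't linkable later
    unique_clients = {
        hashlib.sha256(salt + r['ip'].encode()).hexdigest()
        for r in deanonymized_records
    }
    return {
        'total_circuits_confirmed': len(deanonymized_records),
        'unique_clients_observed': len(unique_clients),
        # No IPs, onion addresses, or timestamps are included in the report.
    }
```

The point being that "Doing X, I confirmed N circuits from M distinct clients" demonstrates the attack just as well as a list of IP addresses does.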
It's not stunning. It is a long-running controversy with Tor and its operators. Most of what's in the "Ethical Tor Research Guidelines" is common-sense, but some of it potentially forecloses important research.
The comment I replied to said developers should only welcome white hat attackers. We seem vaguely agreed that black hats won't stop attacking because they are unwelcome, so what change is effected by only welcoming white hats?
It's not OK to break into random houses under the guise of lock picking research. Even if it is research, it's still B&E.
However, the designer/manufacturer/seller/marketer of the defeated locks would look silly to come out and say, "No fair, you're not allowed to pick those locks. We clearly said you could only do that in our test lab!"
What if those locks were willingly sent from house to house with the intention that the sending would obscure their source/destination? If you signed your house up to be one of those locations and decided to try to pick some of the locks that were shipped through your house, would you consider that B&E?
If I understand correctly, he did a lot of the work with the idea of selling hotels an alternative to the keycard programmers that come with the locks, and the company that did that purchased stuff to work with.
I'm pretty sure everyone here would mock the hotel lock vendors for complaining... even if a government paid that guy to illegally break into a bunch of hotel rooms.
Your position seems to imply that "bank robbery" should not be a crime because banks are supposed to be secure, and it's their own fault that there was a flaw in the system. I think reasonable people can conclude that, even if a system should be secure, there are things that "shouldn't" be done, regardless of the security of the system. It's not to say that "outlaws" won't break the law, but that we view their actions as negative (even if in a perfect world their actions would be futile).
First of all, just as copying someone's source code isn't the same as stealing their laptop, there is a well-established principle that hacking someone's network isn't the same as breaking into their physical premises.
Second, they may have broken into the proverbial bank, but they didn't steal anything while they were there. At most, they photocopied the list of who is renting safe deposit boxes at that branch. Hardly grand theft.
You take the analogy too literally. The idea is that the actions of the attacker can be agreed to be negative, even if security is lax on the victim's side. You don't say "Welp, they were asking for it!" and call it a day.
I didn't imply that you should. My statement still stands. While "they shouldn't have done X" is not security, that doesn't mean that the statement is devoid of meaning, or that we should ignore the question of whether or not "they should have done X."
Edit: Your statements seem to imply that because "100% security" would mean that anyone "doing X" would have no result, then everyone should be allowed to "do X" without question.
I am saying that regardless of what we'd like to expect, we should prepare for the unexpected. Picking your attacker is doomed to fail; don't try. I don't think this is controversial.
It isn't. I also fail to see what it has to do with the discussion at hand. The issue IMHO is not that CMU did research on Tor, or that the US government paid for research into the security of Tor, but the question if the FBI basically outsourced investigations to researchers, and if yes, on what legal basis this happened.
Yes, that they succeeded to collect information is a failure of Tor and makes it less trustworthy, I don't think that's contested by anyone here.
Attacking real users in the process of finding security bugs and how to deal with captured data is the (separate) ethical issue here.
I would not be supportive of a government that pays people to attempt to rob my bank without telling me or my bank in the name of "security research", even if their intention was never to steal money. Regardless of what was posted on the door.
Yeah, but CMU allegedly got paid a pretty big chunk of money for it. If they were just doing research, whatever. But $1m? That's a whole lot of taxpayer money.
Not entirely. A lot of universities get taxpayer money to do research, and $1m is not out of the normal range for research funding in science and technology.
My point was "this is more money than CMU should have gotten for looking into the Tor network, especially from the FBI" rather than "taxpayers paid for this"
In the government contracting world, $1m is an absolutely tiny program - keep in mind that this covers not only the research, but also all the overhead / facilities / computers / keeping the lights on stuff.
$1m will cover something like 3-4 staff for about a year. Less, if they're senior people.
Your points are well-taken, but I would expect academics to be the only demographic more likely than governments to compromise a well-designed security system.
Although I suppose the academics I mean aren't undergrads..
CERT falls under the Software Engineering Institute, which is a Federally-Funded Research and Development Center sponsored by the Department of Defense. CMU administers the SEI in the same way MIT administers Lincoln Laboratory.
This kind of thing happens in private industry as well. For example some CMU professors also hold positions at Google (while still remaining active CMU professors doing public research, i.e., not on sabbatical) and do work that might not get released to the public.
Admittedly, the line between SEI and CMU is rather fuzzy, so the line between contract work for the government (or whoever else) and academic research is fuzzy.
Should there be a brighter line between contract work and academic work?
Russia had an official government contract awarded to a research institution to de-anonymize Tor users; the institution recently canceled the contract. English version:
I really don't like the idea of targeting all Tor users, but as far as I can tell, that's not what this is. For this to work, wouldn't the FBI have to have provided a list of hidden services they were interested in to CMU? Based on Tor's description of the traffic confirmation attack, an encoded version of a specific hidden service has to be injected: https://blog.torproject.org/blog/tor-security-advisory-relay...
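As I read the advisory, the "encoded version" is essentially the hidden service's name spelled out in the pattern of relay vs. relay-early cells, which a colluding relay elsewhere on the circuit can read back. A toy illustration of that idea (simplified cell names and framing, not the real Tor cell format):

```python
def encode_tag(onion_address: str) -> list[str]:
    """Turn an onion address into a sequence of cell types (1 -> RELAY_EARLY, 0 -> RELAY)."""
    bits = ''.join(format(byte, '08b') for byte in onion_address.encode())
    return ['RELAY_EARLY' if bit == '1' else 'RELAY' for bit in bits]

def decode_tag(cells: list[str]) -> str:
    """Recover the onion address from the observed cell-type sequence."""
    bits = ''.join('1' if c == 'RELAY_EARLY' else '0' for c in cells)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode(errors='replace')
```

So the injection only makes sense per hidden service being looked up, which is why it reads more like targeting specific services than dragnetting all Tor users.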
Isn't it plausible they could have received a warrant for each of those hidden services?
They probably didn't, but targeting specific services isn't that abhorrent to me, especially the ones engaged in highly unethical behavior (child porn, theft, scams). I don't think drug markets should be illegal, but that's a separate issue entirely.
They stated the reason at the time: CMU hadn't cleared it for release. Presumably they wanted to keep the vulnerability secret so they could keep exploiting it.
I'm sure that's likely, I just have a crazy wistful idealism that the hacker manifesto is still an ethical driver of the computer security industry and academia.
I know it's not the CMU students' fault, but part of me is glad the Plaid Parliament of Pwning lost DCCTF this year. It's really saddening that blind deference to authority is now the norm in the industry.
I'm not sure that CMU needs a warrant to collect the data passing over Tor. It certainly violates my ethical principles, but perhaps the threshold is lower for others?
CMU doesn't need a warrant to scrape or transmit Tor traffic, and neither do you.
Intent will matter very much in this instance: If the FBI paid someone to do something for them that the FBI would need a warrant to do, then there's a problem. The same would be true if a warrant-less sheriff paid a private investigator to eavesdrop on a suspect.
Otherwise, if, in the course of their research, no matter who funded them, CMU researchers found someone doing something illegal and reported it to the Man, that's generally regarded as good citizenship. If, in the course of my research, I happen to unmask a bike-theft ring, you can be rather certain that I'll call the cops.
Tor is a good invention that needs our support, but it isn't a license to do illegal things.
Tor is, in many ways, an encrypted Twitter. You know that everyone can read what you send and some people will actively attempt to crack it. You bet everything on the encryption and network structure to avoid interception and localization. I don't think there ought to be any inherent presumption of privacy attached to the protocol; to assume that there is one is to be exposed to any breach in the protocol's design.
They exploited a vulnerability in the tor protocol. There's an argument to be made that connecting to the tor network but not complying with the protocol is a violation of the CFAA, if done in a malicious way.
If you wiretap your entire neighborhood, find a bike-theft ring, and call the cops, you'll also get in trouble.
Iirc the attack involved setting up an exit and a relay, then sneaking a message through, so it could tell when a circuit connected to both, and thereby deanonymize users.
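If that recollection is right, the confirmation step is basically a join on whatever tag both malicious relays can observe: one end sees which hidden service the circuit is about, the other sees the client's IP. A minimal sketch under that assumption (field names invented for illustration):

```python
def confirm_circuits(hsdir_observations, guard_observations):
    """hsdir_observations: {circuit_tag: onion_address} seen at the HSDir-side relay.
    guard_observations: {circuit_tag: client_ip} seen at the guard-side relay.
    Returns deanonymized (client_ip, onion_address) pairs where both ends matched."""
    return [
        (guard_observations[tag], onion)
        for tag, onion in hsdir_observations.items()
        if tag in guard_observations
    ]
```

The hard part is getting your relays into both positions often enough, which is why the months-long relay influx mattered.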
Tor can anonymize traffic but only against passive attackers who cannot perform traffic flow confirmation on a global scale.
> A global passive adversary is the most commonly assumed threat when analyzing theoretical anonymity designs. But like all practical low-latency systems, Tor does not protect against such a strong adversary. Instead, we assume an adversary who can observe some fraction of network traffic; who can generate, modify, delete, or delay traffic; who can operate onion routers of his own; and who can compromise some fraction of the onion routers.
> The most well known global-traffic-analysis attack—"traffic confirmation"—was understood by Tor's designers but considered an unrealistically strong attack model and too costly to defend against.
They explain a number of attacks against Tor (both active and passive) and introduce Dissent, "a project at Yale University that expands the design space and explores starkly contrasting foundations for anonymous communication".
Isn't that the exact threat model that the CERT researchers followed? The Tor Project found only 115 relays (6.4% of the network) that were emitting the signal.
That's not the aforementioned global passive adversary. So by their own measurement the Tor Project failed. That said, they took steps to mitigate the exact vulnerability these researchers exploited.
I'm not unfamiliar with Tor, and I would not be surprised to hear that the NSA, with all their resources, was able to pierce its anonymity.
I am very surprised that a couple of guys at a university could do so. That's definitely within the realm of attacks that Tor was supposed to be secure against.
I assume the CMU folks didn't break into anyone else's nodes. It seems like one of the principles of Tor node operation is that you can run your nodes however you want. I don't think there is any binding agreement, or even a strong implication, that someone operating a Tor node agrees not to be hostile to users.
Perhaps it was unethical by the standards of university research, but I wouldn't be bothered by the government doing this. If the government or a university researcher idled in your irc channel and logged the communications I would think that was okay. I kinda view this the same way.
The government would probably need a warrant to do that. The university doesn't, and the government may have used it as a proxy for its attack, while calling it "research". If this sort of thing isn't stopped now, in a loud way, it may become the new normal in academia.
This wasn't done by 'academia'. It was apparently done by a Federally Funded Research and Development Center sponsored by the Department of Defense.
That doesn't mean there isn't room for criticism regarding the activities but that would be criticism regarding government activities and not academic activities.
There are broad protections for reporting crimes incidentally observed. It's why parallel construction exists in the first place - to avoid the absurdity of happening to observe a murder, but then having to let the killer walk because of how the crime was initially identified, even if you later find the body, weapon, and DNA evidence.
In the case of Tor, when you go probing hidden services it would be very easy to find a lot of crime you'd definitely want to report if you knew who was doing it.
The SEI is funded by the DoD and many among its staff and leadership are former military. It's not really 'academia' in the way you're imagining it. Rather, think of it as a government contractor that only does R&D.
They did something with the nodes that should have been impossible, basically exploiting a vulnerability in tor. That seems worse than just passively observing.
and yet some HN users were questioning whether Russia was the worst, with its hiring of an army of trolls, on the basis that "Kremlin propaganda wants you to believe: everybody is lying, we are no different. No it's not."
What's your point? The U.S. government does bad things so no one should care if researchers at a major university collude with them to do bad things?
P.S. The tone of your comment ("you really need to bone up...") is really insulting. There's nothing in your parent's comment that indicates they are ignorant of any of the links you provided. That's your assumption. Best to leave the insults until after you've assessed someone's ignorance, at the very least. And frankly, I don't think that kind of insult is very productive even if it is accurate.
You obviously missed my point. I did say I did not want to say I disagree with you, really. However, your writing style, with excessive links and references about a subject which was not discussed comes across as nutty, no matter whether one might agree with you or not. When I said to dial down, I was referring to your style, not your actual content.
1. Don’t go into details about, for example, the behavior of the US, when that was not the topic being discussed. It’s fine to merely bring it up as an aside, but don’t go into any deep details unless someone asks.
2. Don’t ascribe ignorance to people that could very well be honestly disagreeing with you. It is very possible to have read all your references and still not agree with you. People draw different conclusions from the same facts all the time. Reasonable people can disagree.
3. Don’t go overboard with references. More is not better. One or two references (which should ideally themselves be summaries with references of their own) is quite enough.
CMU Research Ethics Reporting: https://www.cmu.edu/research-compliance/report-problem/index...
CMU Research Misconduct hotline: https://www.cmu.edu/research-compliance/research-misconduct/ or call 877-700-7050 for 'anonymous' reporting
CMU IRB contacts: https://www.cmu.edu/research-compliance/human-subject-resear...
CMU Office of the President contact page: https://www.cmu.edu/leadership/president-suresh/contact/inde...
Also of note - Ethical Tor Research Guidelines: https://blog.torproject.org/blog/ethical-tor-research-guidel...