CMU Research Ethics Reporting: https://www.cmu.edu/research-compliance/report-problem/index...
CMU Research Misconduct hotline: https://www.cmu.edu/research-compliance/research-misconduct/ or call 877-700-7050 for 'anonymous' reporting
CMU IRB contacts: https://www.cmu.edu/research-compliance/human-subject-resear...
CMU Office of the President contact page: https://www.cmu.edu/leadership/president-suresh/contact/inde...
Also of note - Ethical Tor Research Guidelines: https://blog.torproject.org/blog/ethical-tor-research-guidel...
Let's just say there are quite a few rants happening in my network on Facebook.
First of all, before people go out to the streets they need to coordinate somehow, and Facebook seems to be a good way of doing that recently.
Second of all, the rants that are done publicly have a chance of creating a huge PR shitstorm for CMU when some journalist gets wind of it. Sadly, in this age media pressure is the only thing that actually works (usually for the worse).
Let's not be too cynical. :-)
I've complained a number of times about sketchy behavior from researchers in the space of "Bitcoin transaction deanonymization" that was likely to cause harm to people, and have reliably gotten a response from CS departments that sounds like "anyone could do this, so there are no ethical considerations".
E.g. for an example in print from CMU, see section 6.2 of http://arxiv.org/abs/1207.7139
I don't consider the argument persuasive:
First, if we look to physical law there is no experiment that couldn't just be conducted by 'anyone'-- That nothing prevents me from stabbing you in the chest just to see what happens doesn't make it ethical.
Much research in this space ends up actually breaking the law-- at least pedantically. For example, in the above citation they talk about the efforts they had to go through to avoid being blocked ("We spent some additional effort making our measurements as difficult to detect as possible", "Perhaps, bypassing the authentication mechanism and associated CAPTCHA by reusing an authentication cookie could be construed as a “hack.” However, we argue this is nothing more than using a convenient feature that the site operators have willingly offered their visitors.") which demonstrates that their access was beyond their authority, a violation of the CFAA (at least by the standard Weev, who incremented a counter in a URL, was prosecuted under!). Even outside of criminal law, the foreseeable harm you cause to another by investigating them opens you up to a tort even when 'anyone' could have performed the investigation.
Researchers have access to institutional, governmental, and structural support (cheap students) which heightens the potential risk of their work (as well as their potential liability). But even ignoring that, research that causes harm to people is harmful regardless of who does it. Owing to the heightened risk and liability, public institutions have infrastructure for harm mitigation, which appears to have been bypassed here.
It sounds like over a million dollars of public funding went into these recent attacks, and the effort the Tor Project spent defending against those attacks was diverted from protecting against other ones.
But I don't think any of this is a problem limited to CMU or even University research.
I think computing professionals are simply falling down on their ordinary professional and ethical obligations to the users of their systems on a regular basis, in part because we're making the mistake of confusing the adversarial model we use for security analysis with an ethical maxim... and CMU acting as a paid, outsourced law-enforcement due-process-violation mill is just a symptom of a greater dysfunction, along with things like the Facebook emotional manipulation experiment.
Edit: This is speculation on my part based on responses I've seen when asking researchers about their practices; but I am told that at least on other projects CMU CS does at times seek IRB approval, so my impression that they never do is probably sampling error.
The issue starts when you publicly "out" organizations or people in your research findings, or take money from authorities with the understanding that you'll report back data on people and organizations.
I never said it was; but care should be taken to avoid harming people. It's unlikely that someone will take that care to mitigate harm if they are of the view that they have no obligation to mitigate to begin with.
But that's probably in the contract.
About SEI: http://www.sei.cmu.edu/about/organization/workingwithanFFRDC...
Lots of room for discussion about these activities but it probably should be done in the context of expectations for a Federally Funded Research and Development Center (and one funded by the Department of Defense) and not in the context of expectations for a private university.
Looks to me like SEI reports directly to the CMU Provost.
CMU has a policy on ethical behavior and professionalism among its staff. SEI is staff. So this could have legs.
Similarly, while SEI does not seem to fall under CMU proper, its very close relationship may besmirch the institution.
The domain hierarchy doesn't actually tell you much about the relationships between entities and certainly doesn't tell you enough on its own to discern culpability between entities.
Point still stands, DNS hierarchy doesn't imply culpability or legal responsibility in either direction.
The focus here, in my opinion, ought to be on the Tor Project's role in this. Although they did speak up last summer, and have blogged this update, I'm not impressed. They ignored the CMU attack for five months last spring. Maybe they did get blindsided by a new kind of attack. Maybe they just weren't paying enough attention. Or maybe, as they've noted, there aren't many devs who focus on onion services.
But so far there's been no mea culpa from the Tor Project. It's disingenuous to whinge about unethical researchers and investigators. It's stuff like this that fuels conspiracy theories about Tor and the US military. The Tor Project needs to take a stand.
Sounds like they did know?
Not that they knew what they were doing - they seem to have suspected something but decided it was a small enough fraction of the network not to worry about; in hindsight, obviously, they were wrong about that.
In retrospect, more monitoring seems an obvious approach. But that would probably hurt performance. And threaten security and anonymity. Maybe there's a middle ground.
Selling exploits exclusively to assist the government is very much like a large swath of white hats.
What do you think Infragard is? It's Feds rubbing elbows with crooked pro-government technologists. But they're the white hats, remember.
At the end of the day, you only have green hats (the color of money), and people who don't wear hats at all. The black/white distinction has become meaningless.
Recommended reading: http://www.thoughtcrime.org/blog/saudi-surveillance/
Do you know what your entire government is up to? Even if you sell it to the NSA, how do you know it won't end up in the hands of the DEA to spy on a large number of people just to find a couple of drug traffickers?
We don't know. We can't know.
There's nothing grey about that choice. They're the ones that call themselves white.
* engage in human torture
* practice extraordinary rendition to third world countries to circumvent due process
* photograph prisoners naked
* practice extra-judicial, aerial execution
then it's reasonable to call the hat black. that we used to think their hat was white is a testament to their power of deceit, not an indication that the "hat system" has lost meaning.
I would argue that the hat color is from the perspective of "the public", and clearly the public is on the losing end of lost anonymity.
> If I absolutely have to frame my choices as an either-or, I’ll choose power vs. people.
I would not have expected him to say that.
I think Moxie is saying that the choice is whether you work in support of centralized authority ("power"), or decentralized "people".
> Tor is having a fit of institutional pique that researchers are compromising the network's privacy guarantees by, well, looking at it.
> If you write security software, and you're not praying that loyal opposition hits you with everything they've got, you're not doing security
> Tor is intended to be, and is marketed as, robust against nation state adversaries. It cannot possibly be so if it worries about academics.
Edit: Furthermore, by the same logic, it is acceptable for the government to do nearly anything illegal in order to conduct "research". Microsoft, Google, and co claim to be secure, so therefore it should be OK for the government to be attacking them.
edit: Academics are not Obama's proverbial JV team. One should expect them to be as smart as those working for internal agencies, if somewhat less focused on attacking real-world systems. It seems that in this case, the only difference is that they're somewhat worse at keeping secrets, but given that the talk was withdrawn, it's not that they didn't try. So I don't see the grounds for the implicit suggestion in the quoted tweets that they aren't 'real' adversaries.
It's this reason, along with DARPA and similar funding, that I call it the military-industrial-(higher-)education complex.
The problem is that rather than publishing or filing a bug, the researchers helped the government exploit it for $1M. Effectively the same as selling the exploit on the black market.
In exactly the same way that the military is not mercenaries.
If the product was (as it seems based on public information) "paper about how to attack Tor along with a couple of case studies," I really don't see the problem.
The kerfuffle around the talk and paper's release is certainly interesting in its own right, though.
Not really. The organization involved is SEI, a Federally Funded Research and Development Center sponsored by the Department of Defense. Not at all the same thing as CMU.
Mentioning this for clarity, not as a statement about the actual activities.
I agree that some of the middle paragraphs could have been worded more carefully.
From the post: "Civil liberties are under attack if law enforcement believes it can circumvent the rules of evidence by outsourcing police work to universities."
The testing, and perhaps even the specific methods of this test, are not the question. The question is the morality of CMU being paid by authorities to uncover PII about hidden services and report back to them.
Test the system (potentially with your own uni's red/blue teams instead of on other 'public' hidden services) and report back anonymized reports of the data. One can say "Doing X, I got result Y" without giving out the personal details of results, much less to the FBI.
Doing the legwork of law enforcement, is different from doing research.
It's not OK to break into random houses under the guise of lock picking research. Even if it is research, it's still B&E.
However, the designer/manufacturer/seller/marketer of the defeated locks would look silly to come out and say, "No fair, you're not allowed to pick those locks. We clearly said you could only do that in our test lab!"
This story has lots of discussion of it:
Especially this comment:
(Also, not all hotels, just lots and lots of them that used that particular implementation)
Second, they may have broken into the proverbial bank, but they didn't steal anything while they were there. At most, they photocopied the list of who is renting safe deposit boxes at that branch. Hardly grand theft.
Edit: Your statements seem to imply that because "100% security" would mean that anyone "doing X" would have no result, then everyone should be allowed to "do X" without question.
Yes, that they succeeded to collect information is a failure of Tor and makes it less trustworthy, I don't think that's contested by anyone here.
Attacking real users in the process of finding security bugs and how to deal with captured data is the (separate) ethical issue here.
$1m will cover something like 3-4 staff for about a year. Less, if they're senior people.
Although I suppose the academics I mean aren't undergrads..
This kind of thing happens in private industry as well. For example some CMU professors also hold positions at Google (while still remaining active CMU professors doing public research, i.e., not on sabbatical) and do work that might not get released to the public.
Admittedly, the line between SEI and CMU is rather fuzzy, so the line between contract work for the government (or whoever else) and academic research is fuzzy as well.
Should there be a brighter line between contract work and academic work?
Extorting SR2 would've made them quite a bit more than just 1 million. ;)
Isn't it plausible they could have received a warrant for each of those hidden services?
They probably didn't, but targeting specific services isn't that abhorrent to me, especially the ones engaged in highly unethical behavior (child porn, theft, scams). I don't think drug markets should be illegal, but that's a separate issue entirely.
I know it's not the CMU students' fault, but part of me is glad the Plaid Parliament of Pwning lost DCCTF this year. It's really saddening that blind deference to authority is now the norm in the industry.
Intent will matter very much in this instance: If the FBI paid someone to do something for them that the FBI would need a warrant to do, then there's a problem. The same would be true if a warrant-less sheriff paid a private investigator to eavesdrop on a suspect.
Otherwise, if, in the course of their research, no matter who funded them, CMU researchers found someone doing something illegal and reported it to the Man, that's generally regarded as good citizenship. If, in the course of my research, I happen to unmask a bike-theft ring, you can be rather certain that I'll call the cops.
Tor is a good invention that needs our support, but it isn't a license to do illegal things.
Tor is, in many ways, an encrypted Twitter. You know that everyone can read what you send and some people will actively attempt to crack it. You bet everything on the encryption and network structure to avoid interception and localization. I don't think there ought to be any inherent presumption of privacy attached to the protocol; to assume that there is one is to be exposed to any breach in the protocol's design.
If you wiretap your entire neighborhood, find a bike-theft ring, and call the cops, you'll also get in trouble.
Hold on, what about wiretap laws?
Part of the way this strength will be achieved is researchers in public interest settings behaving ethically.
Tor's purpose is to anonymize traffic. What is it good for, if it can't do that?
> A global passive adversary is the most commonly assumed threat when analyzing theoretical anonymity designs. But like all practical low-latency systems, Tor does not protect against such a strong adversary. Instead, we assume an adversary who can observe some fraction of network traffic; who can generate, modify, delete, or delay traffic; who can operate onion routers of his own; and who can compromise some fraction of the onion routers.
(https://svn.torproject.org/svn/projects/design-paper/tor-des..., Page 4)
I read an interesting article in the ACM last month, titled "Seeking Anonymity in an Internet Panopticon" (http://cacm.acm.org/magazines/2015/10/192387-seeking-anonymi...).
> The most well known global-traffic-analysis attack—"traffic confirmation"—was understood by Tor's designers but considered an unrealistically strong attack model and too costly to defend against.
They explain a number of attacks against Tor (both active and passive) and introduce Dissent, "a project at Yale University that expands the design space and explores starkly contrasting foundations for anonymous communication".
That's not the aforementioned Globally Passive Adversary. So by their own measurement the Tor project failed. That said, they took steps to mitigate the exact vulnerability these researchers exploited.
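To make the "traffic confirmation" attack from the quoted threat model concrete: an adversary who can see traffic at both the entry and the exit of the network doesn't need to break any crypto; it just statistically matches flows. Here's a toy sketch (all flow data and helper names are fabricated for illustration; this is not Tor's code or the researchers' actual method) where correlating windowed packet counts links the matching pair:

```python
# Toy traffic-confirmation illustration: an observer at both ends of an
# anonymity network matches flows by comparing packet counts per time window.
# All timestamps below are fabricated.

def window_counts(timestamps, window=1.0, duration=10.0):
    """Bucket packet timestamps into fixed-size time windows."""
    n = int(duration / window)
    counts = [0] * n
    for t in timestamps:
        i = int(t / window)
        if 0 <= i < n:
            counts[i] += 1
    return counts

def correlation(a, b):
    """Pearson correlation between two count vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    if va == 0 or vb == 0:
        return 0.0
    return cov / (va * vb)

# Entry-side flow: bursts around seconds 1-2 and 6-7.
entry = [1.1, 1.3, 1.4, 2.2, 6.1, 6.5, 7.0, 7.2]
# Exit-side candidate A: the same burst pattern, shifted by network latency.
exit_a = [t + 0.2 for t in entry]
# Exit-side candidate B: unrelated, roughly uniform traffic.
exit_b = [0.5, 2.9, 4.1, 5.5, 8.3, 9.0]

e, a, b = window_counts(entry), window_counts(exit_a), window_counts(exit_b)
# The matching flow correlates far more strongly than the unrelated one.
print(correlation(e, a) > correlation(e, b))
```

Defending against this requires padding or delaying traffic at every relay, which is exactly the cost the design paper says a low-latency system can't pay.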
I am very surprised that a couple of guys at a university could do so. That's definitely within the realm of attacks that Tor was supposed to be secure against.
Perhaps it was unethical by the standards of university research, but I wouldn't be bothered by the government doing this. If the government or a university researcher idled in your irc channel and logged the communications I would think that was okay. I kinda view this the same way.
That doesn't mean there isn't room for criticism regarding the activities but that would be criticism regarding government activities and not academic activities.
In the case of Tor, when you go probing hidden services it would be very easy to find a lot of crime you'd definitely want to report if you knew who was doing it.
We detached this subthread from https://news.ycombinator.com/item?id=10550229 and marked it off-topic.
P.S. The tone of your comment ("you really need to bone up...") is really insulting. There's nothing in your parent's comment that indicates they are ignorant of any of the links you provided. That's your assumption. Best to leave the insults until after you've assessed someone's ignorance, at the very least. And frankly I don't think that kind of insult is very productive even if it is accurate.
2. Don’t ascribe ignorance to people that could very well be honestly disagreeing with you. It is very possible to have read all your references and still not agree with you. People draw different conclusions from the same facts all the time. Reasonable people can disagree.
3. Don’t go overboard with references. More is not better. One or two references (which should ideally themselves be summaries with references of their own) is quite enough.