There's a difference between white hat and black hat. Working for the other side is not white hat, and arguably security software writers should only welcome white hat research.
It's stunning that you may have read this conversation and still think people have an issue with security research on Tor in general.
The testing, and perhaps even the specific methods of this test, are not the question. The question is the morality of CMU being paid by authorities to uncover PII about hidden services and report back to them.
Test the system (potentially with your own uni's red/blue teams instead of on other 'public' hidden services) and report back anonymized summaries of the data. One can say "Doing X, I got result Y" without giving out the personal details of the results, much less to the FBI.
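To make that concrete, here's a minimal sketch of what anonymized reporting could look like (purely illustrative Python; the names and the salted-hash scheme are my own assumptions, not anything from the actual study):

    import hashlib
    import os
    from collections import Counter

    # One random salt per study, kept secret and destroyed afterwards,
    # so the published labels can't be reversed or linked across studies.
    STUDY_SALT = os.urandom(32)

    def pseudonymize(onion_address: str) -> str:
        """Replace a real identifier with an irreversible study-local label."""
        digest = hashlib.sha256(STUDY_SALT + onion_address.encode()).hexdigest()
        return "hs-" + digest[:8]

    def summarize(observed: list[str]) -> Counter:
        """Aggregate 'Doing X, I got result Y' counts without the raw PII."""
        return Counter(pseudonymize(addr) for addr in observed)

The published artifact is the aggregate counts, never the raw addresses, and once the salt is destroyed nobody, including the FBI, can map the labels back to real hidden services.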
Yes. It's the difference between publishing a paper "How to break TLS" and a paper "How to break TLS, with an appendix of 10,000 credit card numbers I logged using this method".
It's not stunning. It is a long-running controversy with Tor and its operators. Most of what's in the "Ethical Tor Research Guidelines" is common-sense, but some of it potentially forecloses important research.
The comment I replied to said developers should welcome only white hat attackers. We seem to vaguely agree that black hats won't stop attacking just because they're unwelcome, so what change is effected by welcoming only white hats?
It's not OK to break into random houses under the guise of lock picking research. Even if it is research, it's still B&E.
However, the designer/manufacturer/seller/marketer of the defeated locks would look silly to come out and say, "No fair, you're not allowed to pick those locks. We clearly said you could only do that in our test lab!"
What if those locks were willingly sent from house to house with the intention that the sending would obscure their source/destination? If you signed your house up to be one of those locations and decided to try to pick some of the locks that were shipped through your house, would you consider that B&E?
If I understand correctly, he did a lot of the work with the idea of selling hotels an alternative to the keycard programmers that come with the locks, and the company pursuing that purchased equipment to work with.
I'm pretty sure everyone here would mock the hotel lock vendors for complaining... even if a government paid that guy to illegally break into a bunch of hotel rooms.
Your position seems to imply that "bank robbery" should not be a crime because banks are supposed to be secure, and it's their own fault that there was a flaw in the system. I think reasonable people can conclude that, while a system should be secure, there are things that "shouldn't" be done regardless of how secure the system is. That's not to say that "outlaws" won't break the law, but that we view their actions as negative (even if, in a perfect world, their actions would be futile).
First of all, just as copying someone's source code isn't the same as stealing their laptop, there is a well-established principle that hacking someone's network isn't the same as breaking into their physical premises.
Second, they may have broken into the proverbial bank, but they didn't steal anything while they were there. At most, they photocopied the list of who is renting safe deposit boxes at that branch. Hardly grand theft.
You take the analogy too literally. The idea is that the actions of the attacker can be agreed to be negative, even if security is lax on the victim's side. You don't say "Welp, they were asking for it!" and call it a day.
I didn't imply that you should. My statement still stands. While "they shouldn't have done X" is not security, that doesn't mean that the statement is devoid of meaning, or that we should ignore the question of whether or not "they should have done X."
Edit: Your statements seem to imply that because "100% security" would mean anyone "doing X" gets no result, everyone should be allowed to "do X" without question.
I am saying that regardless of what we'd like to expect, we should prepare for the unexpected. Picking your attacker is doomed to fail; don't try. I don't think this is controversial.
It isn't. I also fail to see what it has to do with the discussion at hand. The issue IMHO is not that CMU did research on Tor, or that the US government paid for research into the security of Tor, but the question of whether the FBI basically outsourced investigations to researchers, and if so, on what legal basis this happened.
Yes, that they succeeded in collecting information is a failure of Tor and makes it less trustworthy; I don't think that's contested by anyone here.
Attacking real users in the process of finding security bugs, and how to deal with the captured data, are the (separate) ethical issues here.
I would not be supportive of a government that pays people to attempt to rob my bank without telling me or my bank in the name of "security research", even if their intention was never to steal money. Regardless of what was posted on the door.