(Posts like this will go away once we turn off pagination. It's a workaround for performance, which we're working on fixing.)
Also, https://www.neowin.net/news/linux-bans-university-of-minneso... gives a bit of an overview. (It was posted at https://news.ycombinator.com/item?id=26889677, but we've merged that thread hither.)
Edit: related ongoing thread: UMN CS&E Statement on Linux Kernel Research - https://news.ycombinator.com/item?id=26895510 - April 2021 (205 comments and counting)
"We experimented on the linux kernel team to see what would happen. Our non-double-blind test of 1 FOSS maintenance group has produced the following result: We get banned and our entire university gets dragged through the muck 100% of the time".
That'll be a fun paper to write, no doubt.
* One of the committers of these faulty patches, Aditya Pakki, writes a reply taking offense at the 'slander' and indicating that the commit was in good faith.
Greg KH immediately calls bullshit on this and proceeds to ban the entire university from making commits.
The thread then gets down to business and starts coordinating revert patches for everything committed by University of Minnesota email addresses.
As was noted, this obviously has a bunch of collateral damage, but such drastic measures seem like a balanced response, considering that this university decided to _experiment_ on the kernel team and then lie about it when confronted (presumably, that lie is simply a continuation of their experiment: 'what would someone intentionally trying to add malicious code to the kernel do?').
* Abhi Shelat also chimes in with links to UMN's Institutional Review Board along with documentation on the UMN policies for ethical review. 
: Message has since been deleted, so I'm going by the content of it as quoted in Greg KH's followup, see footnote 2
I also now have submitted a patch series that reverts the majority of all of their contributions so that we can go and properly review them at a later point in time:
As an OSS maintainer (Node.js and a bunch of popular JS libs with millions of weekly downloads) - I feel how _tempting_ it is to trust people and assume good faith. Often since people took the time to contribute you want to be "on their side" and help them "make it".
Identifying and then standing up to bad-faith actors is extremely important and thankless work. Especially ones that apparently seem to think it's fine to experiment on humans without consent.
So thanks. Keep it up.
From a different thread: https://lore.kernel.org/linux-nfs/CADVatmNgU7t-Co84tSS6VW=3N...
> A lot of these have already reached the stable trees.
Apologies in advance if my questions are off the mark, but what does this mean in practice?
1. If UMN hadn't brought any attention to these, would they have been caught, or would they have eventually wound up in distros? Is 'stable' the "production" branch?
2. What are the implications of this? Is it possible that other malicious actors have done things like this without being caught?
3. Will there be a post-mortem for this attack/attempted attack?
Specifically, I think the three malicious patches described in the paper are (a generic sketch of the use-after-free pattern they rely on follows this list):
- UAF case 1, Fig. 11 => crypto: cavium/nitrox: add an error message to explain the failure of pci_request_mem_regions, https://lore.kernel.org/lkml/20200821031209.21279-1-acostag.... The day after this patch was merged into a driver tree, the author suggested calling dev_err() before pci_disable_device(), which presumably was their attempt at maintainer notification; however, the code as merged doesn't actually appear to constitute a vulnerability because pci_disable_device() doesn't appear to free the struct pci_dev.
- UAF case 2, Fig. 9 => tty/vt: fix a memory leak in con_insert_unipair, https://lore.kernel.org/lkml/20200809221453.10235-1-jameslou... This patch was not accepted.
- UAF case 3, Fig. 10 => rapidio: fix get device imbalance on error, https://lore.kernel.org/lkml/20200821034458.22472-1-acostag.... Same author as case 1. This patch was not accepted.
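For readers unfamiliar with the bug class: here is a minimal, self-contained sketch of the shape these "hypocrite commits" aim for. This is illustrative only, not code from the paper or from the kernel; the idea is a seemingly helpful error-path tweak that touches memory after it has been released.

    /* Illustrative sketch only, not from the paper or the kernel:
       a "helpful" error-handling tweak that reads memory after free. */
    #include <stdio.h>
    #include <stdlib.h>

    struct ctx {
            char name[32];
    };

    static struct ctx *setup(int fail)
    {
            struct ctx *c = malloc(sizeof(*c));

            if (!c)
                    return NULL;
            snprintf(c->name, sizeof(c->name), "dev%d", fail);

            if (fail) {
                    free(c);                 /* resource released here */
                    /* The log line the patch adds: it reads c->name
                       after the free above, i.e. a use-after-free. */
                    fprintf(stderr, "setup failed for %s\n", c->name);
                    return NULL;
            }
            return c;
    }

    int main(void)
    {
            free(setup(0));   /* normal path */
            setup(1);         /* error path triggers the UAF read */
            return 0;
    }

Compiled with -fsanitize=address the error path is flagged as a heap-use-after-free; in an unsanitized review, the extra dev_err()-style line looks like a harmless logging improvement, which is exactly the point of the technique.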
This is not to say that open-source security is not a concern, but IMO the paper is deliberately misleading in an attempt to overstate its contributions.
edit: wording tweak for clarity
Welcome to academia, where a large number of students are doing it just for the credentials.
Immigrant graduate students with uncertain future if they fail? Check.
Vulnerable students whose livelihood is at mercy of their advisor? Check.
Advisor whose career depends on a large number of publication bullet points in their CV? Check.
Students who cheat their way through to publish? Duh.
Edit: Oh now I get it you clever person you. Only took an hour ha.
It's good to call out bad incentive structures, but by feigning surprise you're implying that we shouldn't imagine a world where people behave morally when faced with an incentive/temptation.
I don't think it's fair to say "by feigning surprise you're implying..." That seems to be putting words in GP's mouth. Specifically, they didn't say that we shouldn't imagine a better world. They were only describing one unfortunate aspect of today's academic world.
Here is a personal example of feigned surprise. In November 2012 I spent a week at the Google DC office getting my election results map ready for the US general election. A few Google engineers wandered by to help fix last-minute bugs.
One young engineer was not pleased when he found out about this. He took a long slow look at my name badge, sighed, and looked me in the eye: "Michael... Geary... ... You... use... TABS?"
That's feigned surprise.
(Coda: I told him I was grateful for his assistance, and to feel free to indent his code changes any way he wanted. We got along fine after that, and he ended up making some nice contributions.)
We'd also have to agree on what "behave morally" means, and this is impossible even at the most basic level.
Question for legal experts,
Hypothetically, if these patches had been accepted and were exploited in the wild, and one could prove the exploitation resulted from the vulnerabilities these patches introduced, could the university or the professor be successfully sued for damages in a U.S. court? Or would they be shielded by some education/research/academia exemption, if one exists?
Also, 18 U.S. Code § 1030(a)(5)(A) does not care about software licenses. Any intentional vulnerability added to code counts. Federal cybercrime laws are not known for being terribly understanding…
To me, this seems to indicate that a nation-state-supported hacker organization (maybe posing as an individual) could place its own exploits in the kernel. Let's say they contribute 99.9% useful code, solve real problems, build trust over some years, and only rarely write an evil, hard-to-notice exploit bug. Then everyone thinks it was obviously just an ordinary bug.
Maybe they can pose as 10 different people, in case some of them get banned.
"As U.S. intelligence agencies accelerate efforts to acquire new technology and fund research on cybersecurity, they have invested in start-up companies, encouraged firms to put more military and intelligence veterans on company boards, and nurtured a broad network of personal relationships with top technology executives."
Foreign countries do the same thing. There are numerous public accounts of Chinese nationals or folks with vulnerable family in China engaging in espionage.
It's difficult to protect against trusted parties whom you assume, with good reason, to be good-faith actors.
A perfectly secure system is only realized by a perfectly inefficient development process.
We can get better at lessening the efficiency tax of a given security level (through tooling, tests, audits, etc), but for a given state of tooling, there's still a trade-off.
Different release trains seem the sanest solution to this problem.
If you want bleeding-edge, you're going to pull in less-tested (and also less-audited) code. If you want maximum security, you're going to have to deal with 4.4.
How to solve this "issue" without putting too much process around it? That's the challenge.
Sarcasm aside, pentesting/redteaming is only ethical if the target consents to it! Please don't try to prove your point the way these researchers have.
If the researcher had sent these patches under a different identity, that would be just like how malicious contributions appear. The maintainers wouldn't assume malice, would waste a bunch of time communicating with the bad actor, and might NOT revert their previous, potentially harmful contributions.
Yes, and if you do it without a heads-up as well that makes you a bad actor. This university is a disgrace and that's what the problem is and should remain.
I too thought like this till yesterday. Then someone made me realize that's not how getting consent works in these situations. You get consent from higher up the chain, not from the people doing the work. So Greg Kroah-Hartman could have been consulted, as he would not be personally reviewing this stuff. This would also give you a chance to understand how the release process works. You also have an advantage over the bad actors because they would be studying the process from outside.
But I would like to put in a disclaimer that before getting to that point they could have done so many other things. Review the publicly available review processes, see how security bugs get introduced by accident and see if that can be easily done by a bad actor, etc.
To take a more realistic example, we could quickly learn a lot more than today about language acquisition if we could separate a few children from any human contact to study how they learn from controlled stimuli. Still, we don't do this research and look for much more complicated and lossy, but more humane, methods to study the same.
And as for the solutions, their contribution is nil. No suggestions that haven't been suggested, tried and done or rejected a thousand times over.
I also consider Greg’s response just as much a test of UMN’s internal processes as the researcher’s attempt at testing kernel development processes. Hopefully there will be lessons learned on both sides and this benign incident makes the world better. Nobody was hurt here.
> Nobody was hurt here.
This is where you got me, because while it's clear to me that short-term damage has been done, in the long term I believe you are correct. I believe this event has made the world a safer place.
The purpose of the research was probably to show how easy it is to manipulate the Linux kernel in bad faith. And they did it. What are they gonna do about it besides banning the university?
If a corporation relies upon open-source code that has historically been written by unpaid developers, then, if I were that corporation, I would start paying people to vet that code.
And also, if I had to pick between a somewhat inclusive mode of work where some rando can get code included at the slightly increased risk of including malicious code, and a tightly knit cabal of developers mistrusting all outsiders per default: I would pick the more open community.
If you want more paranoia, go with OpenBSD. But even there some rando can get code submitted at times.
I mean, it is no surprise. It is even worse with proprietary software, because you are much less likely to be wary of your own colleague/employee.
Hell, seeing that the actual impact is overblown in the paper, I think it is a really great percentage caught to be honest, assuming good faith from the contributor.
What? Are you actually trying to argue that "researchers" proved that code reviews don't have a 100% success rate in picking up bugs and errors?
Especially when code is pushed in bad faith?
I mean, think about that for a minute. There are official competitive events to sneak malicious code that are already decades old and going strong. Sneaking vulnerabilities through code reviews is a competitive sport. Are we supposed to feign surprise now?
Bug bounties are more than a different beast: they are a strawman.
Sneaking vulnerabilities through a code review is even a competitive sport, and it has zero to do with bug bounties.
It's just f** brilliant! :)
* a black hat writes malware that proves to be capable of taking out a nation's electrical grid. We know that such malware is feasible.
* a group of teenagers is observed to drop heavy stones from a bridge onto a motorway.
* another teenager pointing a relatively powerful laser at the cockpit of a passenger jet which is about to land at night.
* an organic chemist is demonstrating that you can poison 100,000 people by throwing certain chemicals into a drinking water reservoir.
* a secret service subverting software of a big industrial automation company in order to destroy uranium enrichment plants in another country.
* somebody hacking a car's control software in order to kill its driver
What are the security implications of this? That more money should be spent on security? That we should stop driving on motorways? That we should spend more money on war gear? Are you aware how vulnerable all modern infrastructure is?
And would demonstrating that any of these can practically be done be worth an academic paper? Aren't several of these really a kind of military research?
The Linux kernel community does spend a lot of effort on security and correctness of the kernel. They have a policy of maximum transparency, which is good and known to enhance security. But their project is neither a lab for experimenting on humans nor a computer war game. I guess if companies want even more security, for running things like nuclear power plants or trains on Linux, they should pay for the (legally required) audits by experts.
As per the attack surface described in the paper (section IV). Because (III, the acceptance process) is a manpower issue.
Have they submitted patches to any projects other than the kernel?
And if this escalates to the mainstream media, it might also damage future employment prospects for UMN CS students.
Edit: Looks like they made a statement. https://cse.umn.edu/cs/statement-cse-linux-kernel-research-a...
- Signed by “Loren Terveen, Associate Department Head”, who was a co-author on numerous papers about experimenting on Wikipedia, as pointed out by: https://news.ycombinator.com/item?id=26895969
Edit: Parent comment originally referenced the paper that caused this mess.
As far as I'm concerned this university and all of its alumni are radioactive.
It's not about guilt, it's about trust. They were trained for years in an institution that violates trust as a matter of course. That makes them suspect and the judgement completely fair.
"As a matter of course" is a big leap here.
Do you think students believe in everything that profs do/say?
Your logic doesn't make ANY sense.
When someone graduates from the university, that is the same as the university saying "This person is up to our standards in terms of knowledge, ethics and experience."
If those standards for ethics are very low, then it naturally taints that reputation they sold.
It is unfair to judge a whole university for the behavior of a professor or a department. Although I'm far from having all the details, it looks to me like the university is taking the right measures to solve the problem, which they acknowledge. I would understand your position if they tried to hide this or denied it, but as far as I understood that's not the case at all. Did I miss something?
It doesn't block patch submissions from students or professors using their private email, since the assumption there is that they are contributing as individuals, and not as employees or students.
It's as close as practically possible to blocking an institution and not the individuals.
> As far as I'm concerned this university and all of its alumni are radioactive.
That is not a practical issue, but a too broad generalization (although, I repeat, I may have missed something).
It's the same as with employees. If I get a patch from an @ibm.com address I'll assume that it comes from IBM, and that the person is submitting a patch on behalf of IBM, while for a patch coming from a personal address I would not assume any IBM affiliation or bias, but assume the person is contributing as an individual.
That's not what the comment I was responding to said. It was very clear: "As far as I'm concerned this university and all of its alumni are radioactive". It does not say every kernel patch coming from this domain is radioactive, it clearly says "all of its alumni are radioactive".
You said before that alumni from the university could submit patches with their private emails, but according to what djbebs said, he would not. Do we agree that this would be wrong?
And, as unfortunate as it sounds, like all victims of such generalizations, the alumni will have to fight the prejudice associated with their choice of university.
And any piss I find, I will blame on Amazon.
That is the exact opposite of how rot in a literal bunch of apples behaves. Spoilage spreads throughout the whole lot very, very quickly.
The chief problem here is not that it bruises the egos of the Linux developers for being psyched, but that it was a dick move whereby people now have to spend time sorting this shit out.
Prof Lu miscalculated. The Linux developers are not some randos off the street where you can pay them a few bucks for a day in the lab, and then they go away and get on with whatever they were doing beforehand. It's a whole community. And he just pissed them off.
It is right that Linux developers impose a social sanction on the perpetrators.
It has quite possibly ruined the student's chances of ever getting a PhD, and earned Lu a rocket up the arse.
I disagree. I think it's easier to excuse bad judgment, in part because we all sometimes make mistakes in complicated situations.
But this is an example of experimenting on humans without their consent. Greg KH specifically noted that the developers did not appreciate being experimented on. That is a huge chasm of a line to cross. You are generally required to get consent before experimenting on humans, and that did not happen. That's not just bad judgment. The whole point of the IRB system is to prevent stuff like that.
What? That's exactly how it works. A bad apple gives off a lot of ethylene which ripens (spoils) the whole bunch.
Being a public university, I hope at some point they address this publicly as well as list the steps they are (hopefully) taking to ensure something like this doesn't happen again. I'm also not sure how they can continue to employ the prof in question and expect the open source community to ever trust them to act in good faith going forward.
This is quite literally the first point of the Nuremberg Code that research ethics are based on:
This isn't an individual failing. This is an institutional failing. This is the sort of thing which someone ought to raise with OMB.
He literally points to how Wikipedia needed to respond when he broke the rules:
Doesn't mean there aren't ethical issues related to editors being human subjects, but you may want to be more specific.
What did you see that offended you?
The way I've seen Harvard, Stanford, and a few other university researchers dodge IRB review is by doing research in "private" time in collaboration with a private entity.
There is no effective oversight over IRBs, so they really range quite a bit. Some are really stringent and some allow anything.
How does it?
What a joke - not sure how they can rationalize this as valuable behavior.
Personally, I think all contributors should be considered "bad actors" in open source software. NSA, some university mail address, etc. I consider myself a bad actor, whenever I write code with security in mind. This is why I use fuzzing and code analysis tools.
Banning them was probably the correct action, but not finding value requires intentionally ignoring the very real result of the exercise.
However, I'd also like to note that if you run a real-world penetration test on an unwitting, non-consenting company, you also get sent to jail.
Everybody wins! The team get valuable insight on the security of the current system and unethical researchers get punished!
You don't get to rob a bank and then when caught say "you should thank us for showing your security weaknesses".
In this case they merged actual bugs and now they have to revert that stuff which depending on how connected those commits are to other things could cost a lot of time.
If they were doing this in good faith, they could have stopped short of actually letting the PRs merge (even then it's rude to waste their time this way).
This just comes across to me as an unethical academic with no real valuable work to do.
Yeah, there’s a reason the US response to 9/11 wasn’t to name Osama bin Laden “Airline security researcher of the Millennium”, and it isn’t that “2001 was too early to make that judgement”.
We live in a society, to operate open communities there are trade-offs.
If you want to live in a miserable security state where no action is allowed, refunds are never accepted, and every actor is assumed hostile until proven otherwise, then you can - but it comes at a serious cost.
This doesn't mean people shouldn't consider the security implications of new PRs, but it's better not to act like assholes; with a high-trust society as the goal, this leads to a better non-zero-sum outcome for everyone. Banning these people was the right call; they don't deserve any thanks.
In some ways their bullshit was worse than a real bad actor actually pursuing some other goal, at least the bad actor has some reason outside of some dumb 'research' article.
The academics abused this good-will towards them.
What did they show here? That you can sneak bugs into an open source project? Is that a surprise? Bugs get in even when people are not intentionally trying to get them in.
I strive for a high trust society too. Totally agree. And acknowledging that people can exploit trust and use it to push poor code through review does not dismantle a high trust operation or perspective. Trust systems fail when people abuse trust so the reality is that there must be safeguards built in both technically and socially in order to achieve a suitable level of resilience to keep things sustainable.
Just look at TLS, data validation, cryptographic identity, etc. None of this would need to exist in a high trust society. We could just tell people who we are, trust others not to steal our network traffic, never worry about intentionally invalid input. Nobody would overdraft their accounts at the ATM, etc. I find it hard to argue for absolute removal of the verify step from a trust but verify mentality. This incident demonstrated a failure in the verify step for kernel code review. Cool.
You can have your verify-lite process, but you must write down that that was your decision, and if appropriate, revisit and reaffirm it over time. You must implement controls, measures and processes in such a way as to minimize the deleterious consequences to your endeavor. It's the entire reason Quality Assurance is a pain in the ass. When you're doing a stellar job, everyone wonders why you're there at all. Nobody counts the problems that didn't happen or that you've managed to corral through culture changes in your favor, but they will jump on whatever you do that drags the group down. Security is the same. You are an anchor by nature, the easiest way to make you go away is to ignore you.
You must help, first and foremost. No points for groups that just add more filth to wallow through.
Any patch coming from somebody having intentionally introduced an issue falls into this category.
So, banning their organization from contributing is exactly the lesson to be learned.
So you admit it was a malicious breach? Of course it isn't a perfect process. Everyone knows it isn't absolutely perfect. What kind of test is that?
In the case of research, universities are required to have an ethics board that reviews research proposals before actual research is conducted. Conducting research without an approval or misrepresenting the research project to the ethics board are pretty serious offenses.
Typically, research that involves people requires a consent form signed by the participants, alongside a reminder that they can withdraw that consent at any time without any penalties. It's pretty interesting that in this case there seemed to have been no real consent required, and it would be interesting to know whether there has been an oversight by the ethics board or a misrepresentation of the research by the researchers.
It will be interesting to see whether the university applies a penalty to the professor (removal of tenure, termination, suspension, etc.) or not. The latter would imply that they're okay with unethical or misrepresented research being associated with their university, which would be pretty surprising.
In any case, it's a good thing that the Linux kernel maintainers decided that experimenting on them is unacceptable and disrespectful of their contributions. Subjecting participants to experiments without their consent is a severe breach of ethical duty, and I hope that the university will apply the correct sanctions to the researchers and instigators.
Of course, in a few years this will all be forgotten. It begs the question... how effective is it to ban entire organizations due to the actions of a few people? Part of me thinks that it would very good to have something like this happen every five years (because it puts the maintainers on guard), but another part of me recognizes that these maintainers are working for free, and they didn't sign up to be gaslighted, they signed up to make the world a better place. It's not an easy problem.
I don't think it's unreasonable for maintainers of software to ignore or outright ban problematic users/contributors. It's up to them to manage their software project the way they want, and if banning organizations with malicious actors is the way to do it, the more power to them.
Also to assume _all_ commits made by UMN, beyond what's been disclosed in the paper, are malicious feels a bit like an overreaction.
I'm currently wondering how much of these patches could've been flagged in an automated manner, in the sense of fuzzing specific parts that have been modified (and a fuzzer that is memory/binary aware).
Would a project like this be unfeasible due to the sheer amount of commits/day?
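One way to make the "fuzz only what a commit touched" idea concrete is a tiny per-function harness run under a sanitizer. The sketch below is a hedged illustration: parse_record() is a made-up stand-in, not anything from the reverted patches, and real kernel patches would be exercised through syzkaller with KASAN rather than a userspace libFuzzer harness like this.

    /* Sketch of a per-function fuzz harness; parse_record() is hypothetical.
       Build: clang -g -fsanitize=fuzzer,address harness.c */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical patched function: copies a length-prefixed record. */
    static int parse_record(const uint8_t *buf, size_t len)
    {
            uint8_t *copy;

            if (len < 1)
                    return -1;
            copy = malloc(buf[0]);            /* trusts the declared length */
            if (!copy)
                    return -1;
            memcpy(copy, buf + 1, len - 1);   /* overflows when buf[0] < len - 1 */
            free(copy);
            return 0;
    }

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
            parse_record(data, size);         /* ASan reports the heap overflow */
            return 0;
    }

The scale question is real, though: writing and running a harness per modified function for every patch is why, in practice, the kernel relies on continuous fuzzing of whole subsystems rather than per-commit campaigns.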
Are you not concerned these malicious "researchers" will simply start using throwaway Gmail addresses?
Since this researcher is apparently not an established figure in the kernel community, my expectation is that the patches went through the most rigorous review process. If you think the risk that malicious patches from this person got in is high, it means that an unknown attacker deliberately concocting a complex kernel loophole would have an even higher chance of getting patches in.
While I think the researcher's actions are out of line for sure, this "I will ban you and revert all your stuff" retaliation seems like an emotional overreaction.
Fool me once. Why should they waste their time with extra scrutiny next time? Somebody deliberately misled them, so that's it, banned from the playground. It's just a no-nonsense attitude, without which you'd get nothing done.
If you had a party in your house and some guest you don't know and whom you invited in assuming good faith, turned out to deliberately poop on the rug in your spare guest room while nobody was looking .. next time you have a party, what do you do? Let them in but keep an eye on them? Ask your friends to never let this guest alone? Or just simply to deny entrance, so that you can focus on having fun with people you trust and newcomers who have not shown any malicious intent?
I know what I'd do. Life is too short for BS.
Because well funded malicious actors (government agencies, large corporations, etc) exist and aren't so polite as to use email addresses that conveniently link different individuals from the group together. Such actors don't publicize their results, aren't subject to IRB approval, and their exploits likely don't have such benign end goals.
As far as I'm concerned the University of Minnesota did a public service here by facilitating a mildly sophisticated and ultimately benign attack against the process surrounding an absolutely critical piece of software. We ought to have more such unannounced penetration tests.
> I sent patches on the hopes to get feedback. We are not experts in the Linux kernel and repeatedly making these statements is disgusting to hear.
This is after they were caught. Why continue lying instead of apologizing and explaining? Is the lying also part of the experiment?
On top of that, they played the victim card; you can see why people would be triggered by this level of dishonesty:
> I will not be sending any more patches due to the attitude that is not only unwelcome but also intimidating to newbies
Or perhaps it really is a second attempt by his advisor at an evil plot to sneak more buggy patches into the kernel for research purposes? Either way, the response by the maintainers seems rather disproportionate to me. And either way, I'm ultimately grateful for the (apparently unwanted?) attention being drawn to the (apparent lack of) security surrounding the Linux kernel patch review process.
Who then replies with a request for "cease and desist"? Not sure that's the right move for a humble newbie.
Yes, malicious actors have a head start, because they don't care about the rules. It doesn't mean that we should all kick the rules, and compete with malicious actors on this race to the bottom.
I also don't view unannounced penetration testing of an open source project as immoral, provided it doesn't consume an inordinate amount of resources or actually result in any breakage (ie it's absolutely essential that such attempts not result in defects making it into production).
When the Matrix servers were (repeatedly) breached and the details published, I viewed it as a Good Thing. Similarly, I view non-consensual and unannounced penetration testing of the Linux kernel as a Good Thing given how widely deployed it is. Frankly I don't care about the sensibilities of you or anyone else - at the end of the day I want my devices to be secure and at this point they are all running Linux.
That you care about something or not also seems to be irrelevant, unless you are part of either the research or the kernel maintainers. It’s not about your or my emotional inclination.
Acquiring consent before experimenting on human subjects is an ethical requirement for research, regardless of whether it is a hurdle for the researchers. There is a reason that IRBs exist.
Not to mention that they literally proved nothing, other than that vulnerable patches can be merged into the kernel. But did anybody claim that such a threat was impossible anyway? The kernel has vulnerabilities and it will continue to have them. We already knew that.
So what other things do you think appropriate to not engage in acquiring consent to do based on some perceived justification of ubiquity? It's a slippery slope all the way down, and there is a reason for all the ceremony and hoopla involved in this type of thing. If you cannot demonstrate mastery of doing research on human subjects and processes the right way, and show you've done your footwork to consider the impact of not doing it that way (i.e. IRB fully engaged, you've gone out of your way to make sure they understand, and at least reached out to one person in the group under test to give a surreptitious heads up (like Linus)), you have no business playing it fast and loose, and you absolutely deserve censure.
No points awarded for half-assing. Asking forgiveness may oft times be easier than asking permission, but in many areas, the impact to doing so goes far beyond mere inconvenience to the researcher in the costs it can extract.
>at the end of the day I want my devices to be secure and at this point they are all running Linux.
That is orthogonal to the outcome of the research that was being done, as by definition running Linux would include running with a new vulnerability injected. What you really want is to know your device is doing what you want it to, and none of what you don't. Screwing with kernel developers does precious little to accomplish that. Same logic applies with any other type of bug injection or intentioned software breakage.
In the same way, there is no law requiring Linux kernel maintainers to review patches sent by this university.
"it was not literally illegal" is not a good reasoning for why someone should not be banned.
This "attack" did not reveal anything interesting. It's not like any of this was unknown. Of course you can get backdoors in if you try hard enough. That does not surprise anybody.
Imagine somebody goes with an axe, breaks your garage door, poops on your Harley, leaves, and then calls you and tells you "Oh, btw, it was me. I did you a service by facilitating a mildly sophisticated and ultimately benign attack against the process surrounding an absolutely critical piece of your property. Thank me later." And then they expect you to get let in when you have a party.
It doesn't work that way. Of course the garage door can be broken with an axe. You don't need a "mildly sophisticated attack" to illustrate that while wasting everybody's time.
"It was my brother on my unsecured computer" is an excuse I've heard a few times by people trying to shirk responsibility for their ban-worthy actions.
Geographic proximity to bad actors is sometimes enough to get caught in the crossfire. While it might be unfair, it might also be seen as holding a community and its leadership responsible for failing to keep members of that community in check. And, fair or not, it might also be seen as a pragmatic option in the face of limited moderation tools and time. If you have a magic wand to ban only the bad-faith contributions by the students influenced by the professor in question, I imagine the kernel devs will be more than happy to put it to use.
Is it really just the one professor, though?
"... planning on recording the event to show it on YouTube for ad revenue and Internet fame."
In this case, the offender's friends are benefiting from the research. I think that point needs to be emphasized. The university benefits from this paper being published, or at least expects to. That should not be overlooked.
Basically, yes. The kernel review process does not catch 100% of intentionally introduced security flaws. It isn't perfect, and I don't think anyone is claiming that it is perfect. Whenever there's an indication that a group has been intentionally introducing security flaws, it is just common sense to go back and put a higher bar on reviewing it for security.
Whether or not this indicates flaws in the review process is a separate issue, but I don't know how you can justify not reverting all the commits. It'd be highly irresponsible to leave them in.
What I strongly disapprove of is that the researchers apparently took no steps to prevent real-world consequences of malicious patches getting into the kernel. I think the researchers should:
- Notify the kernel community promptly once a malicious patch gets past all review processes.
- Time these actions so that malicious patches do not get into a stable branch before they can be reverted.
Edit: reading the paper provided above, it seems that they did do both actions above. From the paper:
> Ensuring the safety of the experiment. In the experiment, we aim to demonstrate the practicality of stealthily introducing vulnerabilities through hypocrite commits. Our goal is not to introduce vulnerabilities to harm OSS. Therefore, we safely conduct the experiment to make sure that the introduced UAF bugs will not be merged into the actual Linux code. In addition to the minor patches that introduce UAF conditions, we also prepare the correct patches for fixing the minor issues. We send the minor patches to the Linux community through email to seek their feedback. Fortunately, there is a time window between the confirmation of a patch and the merging of the patch. Once a maintainer confirmed our patches, e.g., an email reply indicating “looks good”, we immediately notify the maintainers of the introduced UAF and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our correct patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches.
All the UAF-introducing patches stayed only in the email exchanges, without even becoming a Git commit in Linux branches. Therefore, we ensured that none of our introduced UAF bugs was ever merged into any branch of the Linux kernel, and none of the Linux users would be affected.
So, unless the kernel maintenance team has another side of the story, the questions of ethics only go as far as "wasting the kernel community's time" rather than creating real-world loopholes.
This time two reviewers noticed that the patch was useless, and then Greg stepped in (three weeks later) saying that this was a repetition of the same bad behavior from the first study. This got a response from the author of the patch, who said that this and other statements were “wild accusations that are bordering on slander”.
I'd hate to be the PhD student that wastes away half a dozen years of his/her life writing a document on how to sneak buggy code through a code review.
More than being pointless and boring, it's a total CV black hole. It's the worst of both worlds: zero professional experience to show for, and zero academic portfolio to show for.
Just because their actions didn’t cause damage doesn’t mean they weren’t negligent.
Sometimes, complex situations don't have simple analogies. I'm not even sure mine is 100% correct.
Just like bumping into somebody on the roof is normal, but you should always be aware that there’s a chance they might try to throw you off. A researcher highlighting this fact by doing it isn’t helping, even if they mitigate their damage.
A much better way to show what they are attempting would be to review historic commits and try to find places where malicious code slipped through, and how the community responded. Or to solicit participants to follow normal processes on a fake code base for a few weeks.
Strangers submitting patches might be completely normal.
Malicious strangers trying to sneak vulnerabilities by submitting malicious patches devised to exploit the code review process is not normal. At all.
There are far more news reports of deranged people throwing strangers under traffic, subways, and trains, than there are reports of malicious actors trying to sneak vulnerable patches.
How could you possibly know that? In fact, I would suggest that you are completely and obviously wrong. Government intelligence agencies exist (among other things) and presumably engage in such behavior constantly. The reward for succeeding is far too high to assume that no one is trying.
The limits of code review are quite well known, so it appears very questionable what scientific knowledge is actually gained here. (Indeed, especially because of the known limits, you could very likely show them without misleading people, because even people knowing to be suspicious are likely to miss problems, if you really wanted to run a formal study on some specific aspect. You could also study the history of in-the-wild bugs to learn about the review process)
That's factually incorrect. The arguments over what constitutes proper code reviews continues to this day with few comprehensive studies about syntax, much less code reviews - not "do you have them" or "how many people" but methodology.
> it appears very questionable what scientific knowledge is actually gained here
The knowledge isn't from the study existing, but the analysis of the data collected.
I'm not even sure why people are upset at this, since it's a very modern approach to investigating how many projects are structured to this day. This was a daring and practical effort.
Under that logic, it's ok for me to run a pen test against your computers, right? ...because I'm only wasting your time.... Or maybe to hack your bank account, but return the money before you notice.
Slippery slope, my friend.
> Under that logic, it's ok for me to run a pen test against your computers, right?
I think the standard for an individual user should be different than that for the organization who is, in the end, responsible for the security of millions of those individual users. One annoys one person, one prevents millions from being annoyed.
Donate to your open source projects!
They could discuss the idea and then perform the test months later? With the amount of patches that had to be reverted as precaution the test would have been well hidden in the usual workload even if the maintainers knew that someone at some point in the past mentioned the possibility of a pen test. How long can the average human stay vigilant if you tell them they will be robbed some day this year?
It would give the University some notoriety to be able to claim "We introduced vulnerabilities in Linux". It would put them on good terms with possible proprietary software sponsors, and with the military.
I don't think this necessarily follows. Rather it is fundamentally a resource allocation issue.
The kernel team obviously doesn't have sufficient resources to conclusively verify that every patch is bug-free, particularly if the bugs are intentionally obfuscated. Instead it's a more nebulous standard of "reasonable assurance", where "reasonable" is a variable function of what must be sacrificed to perform a more thorough review, how critical the patch appears at first impression, and information relating to provenance of the patch.
By assimilating new information about the provenance of the patch (that it's coming from a group of people known to add obfuscated bugs), that standard rises, as it should.
Alternatively stated, there is some desired probability that an approved patch is bug-free (or at least free of any bugs that would threaten security). Presumably, the review process applied to a patch from an anonymous source (meaning the process you are implying suffers from a lack of confidence) is sufficient such that the Bayesian prior for a hypothetical "average anonymous" reviewed patch reaches the desired probability. But the provenance raises the likelihood that the source is malicious, which drops the probability such that the typical review for an untrusted source is not sufficient, and so a "proper review" is warranted.
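To make that Bayesian update concrete, here is a toy calculation with made-up numbers; nothing below comes from the thread or the paper, it only illustrates the shape of the argument:

\[
P(\text{mal}\mid\text{passes review}) \;=\; \frac{P(\text{passes}\mid\text{mal})\,P(\text{mal})}{P(\text{passes}\mid\text{mal})\,P(\text{mal}) \;+\; P(\text{passes}\mid\text{benign})\,P(\text{benign})}
\]

With, say, a prior \(P(\text{mal}) = 0.001\), \(P(\text{passes}\mid\text{mal}) = 0.3\) and \(P(\text{passes}\mid\text{benign}) = 0.9\), a patch that survives review is malicious with probability roughly 0.0003. Learn that the submitter belongs to a group known to send bad-faith patches and the prior jumps (say to 0.5); the identical review now leaves a posterior of 0.25, which is why the same process no longer counts as "proper review" for that source.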
> it means that an unknown attacker deliberately concocting a complex kernel loophole would have an even higher chance of getting patches in.
That's hard to argue with, and ironically the point of the research at issue. It does imply that there's a need for some kind of "trust network" or interpersonal vetting to take the load off of code review.
Nobody can assure that.
But realistically, when you find out a submitter had malicious intent, I think it's 100% correct to revisit any and all associated submissions since it's quite a different thing to inspect code for correctness, style, etc. as you would in a typical code review process versus trying to find some intentionally obfuscated security hole.
And, frankly, who has time to pick the good from the bad in a case like this? I don't think it's an overreaction at all. IMO, it's a simplification to assume that all associated contributions may be tainted.
Linux is set up to benefit the Linux development community. If UMinn has basically no positive contributions, a bunch of neutral ones, and some negative ones, banning seems the right call.
It's not about fairness, it's about whether the harms outweigh the benefits.
I think the best way to make this expectation reality is putting in the work. The second best way is paying. Doing neither and holding the expectation is a way to exist certainly, but has no impact on the outcome.
The reviews were done by kernel developers who assumed good faith. That assumption has been proven false. It makes sense to review the patches again.
Given that some patches may have made it through with holes, you pull them and re-approach them with a different mindset.
> it's the linux kernel. Think about what it's powering and how much risk there is involved with these patches
Perhaps the mindset needs to change regarding security? Actual malicious actors seem unlikely to announce themselves for you.
THANK YOU! After reading the email chain, I have a much greater appreciation for the work you do for the community!
Just reverting those patches (which may well be correct) makes no sense, you and/or other maintainers need to properly review them after your previous abject failure at doing so, and properly determine whether they are correct or not, and if they aren't how they got merged anyway and how you will stop this happening again.
Or I suppose step down as maintainers, which may be appropriate after a fiasco of this magnitude.
In general, it is the wrong attitude to say, oh we had a security problem. What a fiasco! Everyone involved should be fired! With a culture like that, all you guarantee is that people cover up the security issues that inevitably occur.
Perhaps this incident actually does indicate that kernel code review procedures should be changed in some way. I don’t know, I’m not a kernel expert. But the right way to do that is with a calm postmortem after appropriate immediate actions are taken. Rolling back changes made by malicious actors is a very reasonable immediate action to take. After emotions have cooled, then it’s the right time to figure out if any processes should be changed in the future. And kernel devs putting in extra work to handle security incidents should be appreciated, not criticized for their imperfection.
I hope we hear from the IRB in about a year stating exactly what happened. Real investigations of bad conduct should take time to complete correctly and I want them to do their job correctly so I'll give them that time. (there is the possibility that these are good faith patches and someone in the linux community just hates this person - seems unlikely but until a proper independent investigation is done I'll leave that open.)
> We send the emails to the Linux community and seek their feedback. The experiment is not to blame any maintainers but to reveal issues in the process. The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter. The experiment will not collect any personal data, individual behaviors, or personal opinions. It is limited to studying the patching process OSS communities follow, instead of individuals.
I'm not sure how it affects things, but I think it's important to clarify that they did not obtain the IRB-exempt letter in advance of doing the research, but after the ethically questionable actions had already been taken:
The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained). Throughout the study, we honestly did not think this is human research, so we did not apply for an IRB approval in the beginning. ... We would like to thank the people who suggested us to talk to IRB after seeing the paper abstract.
That's not really what they did.
They sent the patches, and the patches were either merged or rejected.
And they never let anybody know that they had introduced security vulnerabilities into the kernel on purpose, until they got caught and people started reverting all the patches from their university and banned the whole university.
> (4). Once any maintainer of the community responds to the email, indicating “looks good”, we immediately point out the introduced bug and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our proper patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. This way, we ensure that the incorrect patches will not be adopted or committed into the Git tree of Linux.
Seems like there are some patches already in stable trees, so they're either lying, or they didn't care whether those "don't merge" messages made anybody react to them.
1 - https://lore.kernel.org/linux-nfs/CADVatmNgU7t-Co84tSS6VW=3N...
The main culprit seems to be only Qiushi Wu. He is also the one who wrote the paper.
>> The work taints the relationship between academia and industry
> We are very sorry to hear this concern. This is really not what we expected, and we strongly believe it is caused by misunderstandings
Yeah, misunderstandings by the university that anyone, ever, in any line of endeavor would be happy to be purposely fucked with as long as the perpetrator eventually claims it's for a good cause. In this case the cause isn't even good, they're proving the jaw-droppingly obvious.
..."Because if we're lucky tomorrow, we won't have to deal with questions like yours ever again." --Firesign Theater, "I Think We're All Bozos on the Bus"
Yet we do nothing about it? I wouldn't call that jaw-droppingly obvious; if anything, without this, I'm pretty sure anyone would have argued that it would be caught way before making its way into stable.
What they do almost universally lack is enough people making positive contributions (in time, money, or both).
This "research" falls squarely into the former category and burns resources that could have been spent on the latter.
Yes, that's the whole point! The real malicious actors aren't going to notify anyone that they're injecting vulnerabilities either. They may be plants at reputable companies, and they'll make it look like an "honest mistake".
Had this not been caught, it would've exposed a major flaw in the process.
> ...until they got caught and people started reverting all the patches from their university and banned the whole university.
Either these patches are valid fixes, in which case they should remain, or they are intentional vulnerabilities, in which case they should've already been reviewed and rejected.
Reverting and reviewing them "at a later date" just makes me question the process. If they haven't been reviewed properly yet, it's better to do it now instead of messing around with reverts.
While true, it's simply not acceptable to abuse trust in this way. It causes real emotional harm to real humans, and while it also may produce some benefits, those do not outweigh the harms. Just because malicious actors don't care about the harms shouldn't mean that ethical people shouldn't either.
Given the completely unavoidable limitations of the review and bug-testing process, a maintainer has to react very differently when they have determined that a patch is malicious: all previous patches from that same source (person or even organization) have to be either re-reviewed at a much higher standard or reverted indiscriminately, and any future patches have to be rejected outright.
This puts a heavy burden on a maintainer, so intentionally creating this type of burden is a malicious action regardless of intent. Especially given that the intent was useless in the first place - everyone knows that patches can introduce vulnerabilities, either maliciously or by accident.
The vast majority of drunk drivers never kill anyone.
> Sending a malicious patch (one that is known to introduce a vulnerability) is a malicious action.
I disagree that it's malicious in this context, but that's irrelevant really. If the patch gets through, then that proves one of the most critical pieces of software could relatively easily be infiltrated by a malicious actor, which means the review process is broken. That's what we're trying to figure out here, and there's no better way to do it than replicate the same conditions under which such patches would ordinarily be reviewed.
> Especially given that the intent was useless in the first place - everyone knows that patches can introduce vulnerabilities, either maliciously or by accident.
Yes, everyone knows that patches can introduce vulnerabilities if they are not found. We want to know whether they are found! If they are not found, we need to figure out how they slipped by and how to prevent that from happening in the future.
That is a complete misunderstanding of the Linux dev process. No one expects the first reviewer of a patch (the person that the researchers were experimenting on) to catch any bug. The dev process has many safeguards - several reviewers, testing, static analysis tools, security research, distribution testing, beta testers, early adopters - that are expected to catch bugs in the kernel at various stages.
Trying to deceive early reviewers into accepting malicious patches for research purposes is both useless research and hurtful to the developers.
But the Linux kernel is NOT a security product - it is a kernel. It can be used in entirely disconnected devices that couldn't care less about security, as well as in highly secure infrastructure that powers the world. The ultimate responsibility for delivering a secure product based on Linux lies with the people delivering that product. The kernel is essentially a library, not a product. If someone is assuming they can build a secure product by trusting Linux to be "secure", then they are simply wrong, and no amount of change in the Linux dev process will fix their assumption.
Of course, you want the kernel to be as secure as possible, but you also want many other things from the kernel as well (it should be featureful, it should be backwards compatible with userspace, it should run on as many architectures as needed, it should be fast and efficient, it should be easy to read and maintain etc).
Well, in real life, you can't go punch someone in the face to teach them a "point". If you do so, you'll get punished.
> Reverting and reviewing them "at a later date" just makes me question the process.
I don't think anybody realistically thought that the kernel review process was rock solid against malicious actors anyway. What exactly does the paper expose?
This just turns the researchers into black hats. They are just making it look like "a research paper."
How is this not human research? They experimented on the reactions of people in a non-controlled environment.
It doesn’t just “involve humans” it is first and foremost the behavior of specific humans.
> but the humans (reviewers) are not the research subject.
The study is exactly studying their behavior in a particular context. They are absolutely the subjects.
This study does not care about the reviewers, it cares about the process. For example, you can certainly improve the process without replacing any reviewers. It is just blatantly false to claim the process is all about humans.
Another example, the review process can even be totally conducted by AIs. See? The process is not all about humans, or human behavior.
To make this even more understandable, consider the process of building a LEGO set: you need humans to build it, but you can examine the process of building it without examining the humans who build it.
This was all about the reaction of humans. They sent in text with a deceptive description and tried to get a positive answer even though the text was not wholly what was described. It was a psych study in an uncontrolled environment with people who did not know they were participating in a study.
How they thought this was acceptable with their own institutions Participant's Bill of Rights https://research.umn.edu/units/hrpp/research-participants/pa... is a true mystery.
"Process" in this case is just another word for people because ultimately, the process being evaluated here is the human interaction with the malicious code being submitted.
Put another way, let's just take out the human reviewer, pretend the maintainers didn't exist. Does the patch get reviewed? No. Does the patch get merged into a stable branch? No. Does the patch get evaluated at all? No. The whole research paper breaks down and becomes worthless if you remove the human factor. The human reviewer is _necessary_ for this research, so this research should be deemed as having human participants.
How was this study conducted? For every patch that the researchers sent, what process did it go through?
The answer is, it was reviewed and accepted by a human. That's it. Full stop. There's your human subject right there in the middle of your research work. It's not possible to conduct this research without that human subject interacting with your research materials. You do not get to discount that human participation because "Oh well we COULD replace them with an AI in the future". Well your study didn't, which means it needs to go through the human subjects review process.
When you claim that this study was about a process, you're literally taking the researchers' side. That's what they've been insisting on as the reason why this study is ethical and why they did not need to inform or obtain consent from the kernel development team. That's the excuse they used to get out of the IRB's review process, so the study could be classed as "not human subjects research". That's the excuse they needed so they could proceed without having to get a signed consent form. They did all of this so they could conduct a penetration test without the organization they were attacking knowing about it.
You don't seem to be able to comprehend why or how the maintainers feel deceived here, or that their feelings are legitimate. If you did, you wouldn't keep banging on about "oh, this is just a process study, the people don't matter, it's all isolated from humans". Funnily enough, the people who DID interact with this research DID feel they mattered and DID feel deceived. The whole point of the IRB is to prevent exactly this: researchers conducting unethical research which only comes to light after the study concludes and the injured parties complain (and deceit IS a form of harm). For research which is supposedly isolated from humans, and thus saw no need to obtain a signed consent form, that's not really the outcome you expect if everything was on the up and up. Another form of harm from this study: the maintainers now have to go over everything UMN submitted again to ensure there's nothing else to be worried about. That's a lot of wasted man-hours and definitely constitutes harm as well. All of the University of Minnesota now has less access to the project after getting banned, even more collateral damage and harm caused to their own institution.
Let's be honest. If the researchers had been able to sneak their code into a stable or distribution version of the kernel, they'd be praising themselves to high heaven. Look at how significant our results were, we fucked up all of Linux! The only reason they didn't is that at least they could recognize that would be going a step too far. They're just looking for excuses to not get punished at this point. Same with the IRB. The IRB is now trying to wiggle out of the situation by insisting everything is OK. The IRB is also made up of professors who have a reputation to maintain! They know they let something through that should never have been approved in its current form. Most human subjects research NEVER gets this kind of blowback, and the fact that this one did means they screwed up and they know it.
No ethics review board considers a multi page, multi forum, lengthy discussion on the ethics of a study they approved as a good sign. Honestly, any study that gets this much attention would be considered a huge success in any other situation.
That's not the correct or relevant criterion. If you were correct, testing airport security and testing anti-money-laundering checks at a bank would amount to human experiments. In fact it's hard to think of any study of the real world that would not become a "human experiment".
"When you claim that this study was about a process, you're literally taking the researchers side."
That's some seriously screwed-up logic right here.
"Weinstein was a Nazi and a serial killer, if you disagree with me you are taking his side"
It's easy to think of studies that don't involve humans so that statement is just wilful obfuscation. Physics, chemistry, heck lots of biology, and of course computer science are primarily made of studies on objects rather than people. Of those that are done on people they are almost always done on people who know they are the subject of an experiment. Very few studies are like this one.
Studies of airport security are done all the time; that's how we know it's terribly ineffective. The airport staff are not told about them, and they are not human experiments.
Experiments on people have a specific definition that goes beyond "a human is present".
Perhaps a similar approach that allows randomness with some sort of agreement with the maintainers could have prevented this issue while preserving the integrity of the study.
It is self-evident that this study tangibly involved people in the scope, those people did not provide consent prior, and now openly state their grievances. It is nothing short of arguing in bad faith to claim otherwise.
Maybe the stated aim of the research was to study the process. But what they actually did was study how the people involved implemented it.
Being publicly manipulated into merging buggy patches, and wasting hours of people's time are two pretty obvious effects this study had that could cause some amount of distress and thus it cannot be dismissed as simply "studying the process".
It is likely the professor involved here will be fired if they are pre-tenure, or sanctioned if post-tenure.
Of course, there are other ethical and legal requirements that you're bound to, not just this one. I'm not sure which requirements IRBs in the US look into though, it's a pretty murky situation.
It seems to qualify per §46.102(e)(1)(i) ("Human subject means a living individual about whom an investigator [..] conducting research: (i) Obtains information [...] through [...] interaction with the individual, and uses, studies, or analyzes the information [...]")
I don't think it'd qualify for any of the exemptions in 46.104(d): 1 requires an educational setting, 2 requires standard tests, 3 requires pre-consent and interactions must be "benign", 4 is only about the use of PII with no interactions, 5 is only about public programs, 6 is only about food, 7 is about storing PII and not applicable and 8 requires "broad" pre-consent and documentation of a waiver.
It's not worth arguing about this; if you care, you can try to change the law. In the meantime, IRBs will do what IRBs do.
Since IRBs exist to minimize liability, it seems like that would be the fastest route toward change (assuming you have legal standing).
Frankly universities and academics need to be taken to court far more often. Our society routinely turns a blind eye to all sorts of fraudulent and unethical practices inside academia and it has to stop.
I had a look at section §46.104 https://www.hhs.gov/ohrp/regulations-and-policy/regulations/... since it mentioned exemptions, and at (d) (3) inside that. It still doesn't apply: there's no agreement to participate, it's not benign, it's not anonymous.
IRBs are like the TSA. Imposing annoyance and red tape on the honest vast-majority while failing to actually filter the 0.0001% of things they ostensibly exist to filter.
I'm guessing it passed for similar reasoning, along with the reviewers being unfamiliar with how "vulnerabilities are injected." To get the bad code in, the researcher needed to have the code reviewed by a human.
So if you rephrase "inject vulnerability" as "sneak my way past a human checkpoint", you might have a better idea of what they were actually doing, and might be better equipped to judge its ethical merit -- and if it qualifies as research on human subjects.
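To make that concrete, here is a purely hypothetical C sketch (my own illustration, not one of the actual UMN patches) of how a "helpful" error-handling hunk can smuggle in a use-after-free that reads fine in isolation:

    /* Hypothetical illustration only -- not code from the actual patches.
     * A patch adds error handling that frees a buffer, but the freed
     * pointer is still used afterwards (and is freed again by the caller),
     * i.e. a use-after-free hiding inside an apparent cleanup. */
    #include <stdio.h>
    #include <stdlib.h>

    struct ctx {
        int id;
        char *buf;      /* owned and freed by the caller on failure */
    };

    static int register_ctx(struct ctx *c)
    {
        return c->id < 0 ? -1 : 0;      /* stand-in for a call that can fail */
    }

    int probe(struct ctx *c)
    {
        if (register_ctx(c) != 0) {
            free(c->buf);                                   /* the "cleanup" the patch adds */
            fprintf(stderr, "probe failed: %s\n", c->buf);  /* use-after-free */
            return -1;                                      /* caller frees c->buf again: double free */
        }
        return 0;
    }

Each hunk looks reasonable on its own; the bug only exists because of who else owns buf, which is exactly the kind of context a busy reviewer may not have in front of them.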
To my thinking, it is quite clearly human experimentation, even if the subject is the process rather than a human individual. Ultimately, the process must be performed by a human, and it doesn't make sense to me that you would distinguish between the two.
And the maintainers themselves express feeling that they were the subject of the research, so there's that.
What makes people reviewing the Linux kernel more 'human' than any of the above?
Main point is that IRBs were created in response to some highly unethical and harmful "studies" being carried out by institutions thought of as top-tier. Now they are considered to be a mandatory part of carrying out ethical research. But if you think about it, isn't outsourcing all sense of ethics to an organization external to the actual researchers kind of the opposite of what we want to do?
All institutions tend to be corruptible. Many tend to respond to their actual incentives rather than high-minded statements about what they're supposed to be about. Seems to me that promoting the attitude of "well an IRB approved it, so it must be all right, let's go!" is the exact opposite of what we really want.
All things considered, it's probably better to have something there than nothing. But you still have to be responsible for your own decisions. "I bamboozled our lazy IRB into approving our study, so I'm not responsible for it being obviously a bad idea" just isn't good enough.
If you think about it, it's actually kind of meta to the code review process they were "studying". Just like IRBs, Code review is good, but no code review process will ever be good enough to stop every malicious actor every time. It will always be necessary to track the reputation of contributors and be able to mass-revert contributions from contributors later determined to be actively malicious.
Eventually, the IRB, unhappy at his behavior, said he couldn't do the experiment. He left for another institution (UC San Diego) immediately, having made a deal with the new dean to go through expedited review. It was a big loss for Boulder and TBH, the IRB's reasoning was not sound.
This research had the potential to cause harm to people despite not being human research and was therefore ethically questionable at best. Because they presented the research as not posing potential harm to real people that means they lied to the IRB, which is grounds for dismissal and potential discreditation of all participants (their post-graduate degrees could be revoked by their original school or simply treated as invalid by the educational community at large). Discreditation is unlikely, but loss of tenure for something like this is not out of the question, which would effectively end the professor's career anyway.
I don't buy it, and you fail to back that claim up at all.
As for the retroactive review now being undertaken, as far as I'm concerned that decision is squarely on the maintainers. (Honestly, it comes across as an emotional outburst to me.)
Nor is breaking my computer monitor.
Theft was meant as an example of a type of harm, not a complete list of all types of harm.
Something doesn’t have to be illegal to be harmful.
Sure, but I'm still going to be pretty annoyed with you. And if you've wasted my time by messing with a system or process under my control then I'm probably going to block you from that system or process.
As a really prosaic example, I've blocked dozens - if not hundreds - of recruiter email addresses on my work email account.
It seems very possible to me that an IRB wouldn't have accepted their proposed methodology if they hadn't received an exemption.
Is there anyone on hand who could explain how what looks very much like a social engineering attack is not "human research"?
First of all, this is completely irresponsible, what if the patches would've made their way into a real-life device? The paper does mention a process through which they tried to ensure that doesn't happen, but it's pretty finicky. It's one missed email or one bad timezone mismatch away from releasing the kraken.
Then playing the slander victim card is outright stupid, it hurts the credibility of actual victims.
The mandate of IRBs in the US is pretty weird but the debate about whether this was "human subject research" or not is silly, there are many other ethical and legal requirements to academic research besides Title 45.
Right. It's not just human subjects research. IRBs vet all kinds of research: polling, surveys, animal subjects research, genetics/embryo research (potentially even if not human/mammal), anything which could be remotely interpreted as ethically marginal.
It's a real shame because the university probably has good, experienced people who could contribute to various OSS projects. But how can you trust any of them when the next guy might also be running an IRB exempt security study.
I don't think code commits to the Linux kernel make it to live systems that fast?
I do agree with the sentiment, though. It's grossly irresponsible to do that without asking at least someone in the kernel developer's group. People don't dig being used as lab rats, and now the whole uni is blocked. Well, tough shit.
That'd be great, yup. And the linux kernel team should then strongly consider undoing the blanket ban, but not until this investigation occurs.
Interestingly, if all that happens, that _would_ be an intriguing data point in research on how FOSS teams deal with malicious intent, heh.
I think the real problem is rooted more fundamentally in academia than it seems. And I think it has mostly to do with a lack of ethics!
We presented students with an education protocol designed to make a blinded subset of them fail tests, then measured whether they failed in order to see if they independently learned the true meaning of the material.
Under any sane IRB you would need consent of the students. This is failure on so many levels.
Has anyone from the "research" team commented and confirmed this was even them or a part of their research? It seems like the only defense is from people who did google-fu for a potentially outdated paper. At this point we can't even be sure if this isn't a genuinely malicious actor using compromised credentials to introduce vulnerabilities.
Intentionally having bugs in kernel only you know about is very bad.
Conclusions: As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.
With the footnote: Contributors: GCSS had the original idea. JPP tried to talk him out of it. JPP did the first literature search but GCSS lost it. GCSS drafted the manuscript but JPP deleted all the best jokes. GCSS is the guarantor, and JPP says it serves him right
it was, iirc, a poster not a paper.
"Neural Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon: An Argument For Proper Multiple Comparisons Correction" (2010, in Journal of Serendipitous and Unexpected Results)
We had the good fortune to have discussion of the study with comments from the author a few years back:
People got swatted for less.
This does not at all mean the behavior in question should be condoned. This fails the sniff test worse than thioacetone.
As heard frequently on ASP, along with "Room Temperature Challenge."
"The University of Minnesota Department of Computer Science & Engineering takes this situation extremely seriously. We have immediately suspended this line of research."
If you've got a suggestion of a way to catch those bugs, please be more specific about it. Just telling people that they need "better protection" isn't really useful or actionable advice, or anything that they weren't already aware of.
You can't break into someone's house through their back window, tell the owners what you did, and not expect to get arrested.
People don't scream "how are we going to know that people can break into houses through broken windows without these heros!?"
Really losing my faith in the accuracy of HN if such a huge thread is full of misinformation.
Basically (as I understand it, feel free to correct me) this is what happened:
Researcher emailed maintainers with flawed code, a maintainer LGTMed it, researcher told the maintainer that the code was buggy and not to merge it. The researchers confirmed that the code was not merged or committed anywhere. Paper gets published. Nothing of note happens.
Now, one of the researcher's grad students has submitted stuff to Linux of his own volition; he does not appear to be associated with the previous research. These commits are "obviously bad" according to the Linux maintainers, who claim that the grad student is just continuing the "merge bad shit" research. The commits do not appear to be intentionally flawed but rather newbie mistakes (so claims the student), which is why he feels the Linux community is unwelcoming to newcomers.
Now how on earth did that warp to whatever everyone here is smoking?
The paper you’re referring to was from last year. Two of the three patches that they emailed in under fake author names were rejected; they wrote a paper about the experience. All that happened as a result was that everybody told them that it was a terrible idea, and they tweaked the wording of the paper a bit.
Now _this_ year, a different PhD student with the same advisor posted a really dubious patch which would introduce one or more use-after-free bugs. This patch was also rejected by the maintainers. Greg noticed that it looks like another attempt to do the same kind of experiment again. Nobody but them knows if that's true or not, but the student reacted by calling it "slander", which was not very advisable.
The methodology in the original paper had one redeeming feature; after any patch was accepted, they would immediately email back withdrawing the patch. That doesn’t appear to have happened in this case, but then this patch was rejected.
As a result of this, all future contributions from people affiliated with UMN are being rejected, and all past contributions (about 250) are being reviewed. Most of those are simply being backed out wholesale, unless someone speaks up for individual changes. A handful of those changes have already been vouched for.
That is pretty drastic, because there will certainly be acceptable patches that will need to be re–reviewed and possibly recommitted. On the other hand, if you discover a malicious actor, wouldn’t you want to investigate everything they’ve been involved with? On the gripping hand, there are such things as autoimmune diseases.
I guess we’ll have to see how it plays out.
There's no other option when someone on the same research team later sends them 4 diffs, 3 of which have security holes, than to assume they're still doing research in the same area.
This is what happens when you do a social experiment without at least informing someone in the organization beforehand. There's no way to verify whether it was well intentioned diffs or not. So you must assume it's not.
> These are two different projects. The one published at IEEE S&P 2021 has
> completely finished in November 2020. My student Aditya is working on a new
> project that is to find bugs introduced by bad patches. Please do not link
> these two projects together. I am sorry that his new patches are not
> correct either. He did not intentionally make the mistake.
The best analogy I could come up with so far: someone offers you a compelling job offer, and when you're ready to sign they say "yeah, that was a research project, sorry". Would you be OK with such behavior?
This is not OK, because you did not consent to wasting your time on someone else's research project.
What is the point of dubbing yourself the arbiter of the moral high ground and spreading mis-information in the very next breath?
I am less puzzled by you spreading misinformation than I am by the fact you have this outrage at the very thing you are doing and don't hesitate to attack the character of people you disagree with.
> A number of these patches they submitted to the kernel were indeed successfully merged to the Linux kernel tree.
It turns out the researchers DID allow the bad faith commits to be merged and that is a big problem that is still being unwound.
But you also forgot the part where Greg throws a hissy fit and decides to revert every commit from UMN emails, including 3+ year old commits that legitimately fix security vulns. Great job keeping mainline bug-free with your paranoid witch hunt!
If you know of any others that shouldn’t be reverted, you should email the list and point them out.
Yes, it does.
Now, how do you do that other than having fallible people review things?
"Earth is center of universe" took 1000 years to remove from books, I'm not sure what her point was :D
However, the prior activity of submitting bad-faith code is indeed pretty shameful.
It's a different university, but I wonder if these people will see the same result.
Black list the whole lot from everything, everywhere. Black hole that place and nuke it from orbit.
Should every city park with a "no alcohol" policy conduct red teams on whether it's possible to smuggle alcohol in? Should police departments conduct red teams to see if people can get away with speeding?
I don't think they're a professor are they? Says they're a PhD student?
What's preventing those bad actors from not using a UMN email address?
Technically none, but by banning UMN submissions, the kernel team have sent an unambiguous message that their original behaviour is not cool. UMN's name has also been dragged through the mud, as it should be.
Prof Lu exercised poor judgement by getting people to submit malicious patches. To use further subterfuge knowing that you've already been called out on it would be monumentally bad.
I don't know how far Greg has taken this issue up with the university, but I would expect that any reasonable university would give Lu a strong talking-to.
They gain some trust by coming from university email addresses.
edit: Reference to combat the downvote: https://www.nbcnews.com/news/china/american-universities-are...
If they just want to be jerks, yes. But they can't then use that type of "hiding" to get away with claiming it was done for a University research project as that's even more unethical than what they are doing now.
Sabotage is a very real risk but we're discussing ethics of demonstrating the risk instead of potential remediation, that's dangerous and foolish.
It really doesn't though. You can claim ownership of that email address in the published manuscript. For that matter, you could even publish the academic article under a pen name if you wanted to. But after seeing how the maintainers responded here, you'd better make sure that any "real" contributions you make aren't associated with the activity in any way.
You're criticizing the process, but the truth is that without a real name email and an actual human being's "social credit" to be burned, there's no proof these researchers would have achieved the same findings. The more interesting question to me is if they had used anonymous emails, would they have achieved the same results? If so, there might be some substance to your contrarian views that the process itself is flawed. But as it stands, I'm not sure that's the case.
Why? Well, look at what happened. The maintainers found out and blanket banned bad actors. Going to be a little hard to reproduce that research now, isn't it? Arbitraging societal trust for research doesn't just bring ethical challenges but /practical/ ones involving US law and standards for academic research.
How are kernel maintainers competent in detecting a real person vs. a fake real person? Why is there any inherent trust?
It's clear the system is fallible, but at least now people are humbled enough to not instantly dismiss the risk.
> The maintainers found out and blanket banned bad actors.
With collateral damage.
Anonymity is de facto, not de jure. It's also a privilege for many collaboration networks and not a right. If abused, it will simply be removed.
Given what the Linux kernel runs these days, that would probably be advisable. (I'm a strong proponent of anonymity, but I also have a preference that my devices not be actively sabotaged.)
> we're entering the territory of fraud and cybercrime
So what? The fact that it's illegal doesn't nullify the threat. For that matter, it's not even a crime if a state agency is the perpetrator. These researchers drew attention to a huge (IMO) security issue. They should be thanked and the attack vector carefully examined.
If you want to talk about a state-level actor, I hate to break it to you, but they have significantly more powerful and stealthier 0-day exploits that are a lot easier to use than a tactic like this. Guess what's the last thing you want to do when you commit cybercrime? Do it in public where there's an immutable record that can be traced back to you, and cause a giant public hubbub, maybe? So I can't imagine how someone could think there's anything noteworthy about this unless they were unaware of that.
That's somewhat the unintentional humor and irony of this situation -- all the researchers accomplished was proving that they were not just unethical but incompetent.
However, I don't agree that what happened was abuse or that it should be deterred. Responding in a hostile manner to an isolated demonstration of a vulnerability isn't constructive. People rightfully get angry when large companies try to bully security researchers.
You question if this vulnerability is worth worrying about due to the logistics of exploiting it in practice. Regardless of whether it's worth the effort to exploit I'd still rather it wasn't there (that goes for all vulnerabilities).
I think it would be much easier to exploit than you're letting on though. Modern attacks frequently chain many subtle bugs together. Being able to land a single, seemingly inconsequential bug in a key location could enable an otherwise impossible approach.
It seems unlikely to me that the immutable record you mention would ever lead back to a competent entity that didn't want to be found. There's no need for anything more than an ephemeral identity that successfully lands one or two patches and then disappears. The patches also seem unlikely to draw suspicion in the first place, even after the exploit itself becomes known.
In fact it occurs to me that a skilled and amoral developer could likely land multiple patches with strategic (and very subtle) bugs from different identities. These could then be "discovered" and sold on the black market. I see no convincing reason to discount the possibility of this already being a common occurrence.
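As a purely hypothetical illustration of what such a "seemingly inconsequential" bug might look like (again, not drawn from any real kernel patch), a single character in a bounds check is enough:

    /* Hypothetical example of a one-character, strategically subtle bug:
     * the check should reject len == BUF_LEN, but uses > instead of >=,
     * so the NUL terminator is written one byte past the end of buf. */
    #include <string.h>

    #define BUF_LEN 64

    int store_name(char *dst, const char *src, size_t len)
    {
        char buf[BUF_LEN];

        if (len > BUF_LEN)          /* off-by-one: should be >= BUF_LEN */
            return -1;

        memcpy(buf, src, len);
        buf[len] = '\0';            /* writes buf[64] when len == BUF_LEN */

        memcpy(dst, buf, len + 1);
        return 0;
    }

A diff that only touched that comparison would likely read as a routine bounds check, yet the resulting one-byte overflow is precisely the sort of primitive that exploit chains are built from.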
The only sensible response I can think of is a focus on static analysis coupled with CTF challenges to beat those analysis methods.
What would you like them to do instead or in addition to this?
Update the processes and tools to try and catch such malicious infiltrators. Lynching researchers isn't fixing the actual issue right now.
I think "lamenting" is very much the wrong attitude here. Given all the things that make use of Linux today that seems like the only sane approach to me.