
Thanks for the support.

I also now have submitted a patch series that reverts the majority of all of their contributions so that we can go and properly review them at a later point in time: https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...



Just wanted to say thanks for your work!

As an OSS maintainer (Node.js and a bunch of popular JS libs with millions of weekly downloads) - I feel how _tempting_ it is to trust people and assume good faith. Often since people took the time to contribute you want to be "on their side" and help them "make it".

Identifying and then standing up to bad-faith actors is extremely important and thankless work. Especially ones that apparently think it's fine to experiment on humans without consent.

So thanks. Keep it up.


How could resilience be verified after asking for consent?


Tell someone upstream - in this case Greg KH - what you want to do and agree on a protocol. Inform him of each patch you submit. He's then the backstop against anything in the experiment actually causing harm.


Same way an employer trains employees on phishing campaigns or an auditor or penetration tester tests resilience or compliance.


Yes, employers often send out fake phishing e-mails to test resilience, and organizational penetration testing is done in the field with unsuspecting people.


Ah. I never replied to the e-mails sent out by my employer about registering for a training in phishing detection. I just assumed those e-mails were phishing e-mails.


I assume so many official emails from my employer are phishing... it's a mess.


This read as sarcastic to me, but FYI what you said is actually, unironically, true.


A lot of people are talking about the ethical aspects, but could you talk about the security implications of this attack?

From a different thread: https://lore.kernel.org/linux-nfs/CADVatmNgU7t-Co84tSS6VW=3N... > A lot of these have already reached the stable trees.

Apologies in advance if my questions are off the mark, but what does this mean in practice?

1. If UMN hadn't brought any attention to these, would they have been caught, or would they have eventually wound up in distros? Is 'stable' the "production" branch?

2. What are the implications of this? Is it possible that other malicious actors have done things like this without being caught?

3. Will there be a post-mortem for this attack/attempted attack?


I don't think the attack described in the paper actually succeeded at all, and in fact the paper doesn't seem to claim that it did.

Specifically, I think the three malicious patches described in the paper are:

- UAF case 1, Fig. 11 => crypto: cavium/nitrox: add an error message to explain the failure of pci_request_mem_regions, https://lore.kernel.org/lkml/20200821031209.21279-1-acostag.... The day after this patch was merged into a driver tree, the author suggested calling dev_err() before pci_disable_device(), which presumably was their attempt at maintainer notification; however, the code as merged doesn't actually appear to constitute a vulnerability because pci_disable_device() doesn't appear to free the struct pci_dev. (A minimal sketch of this pattern follows the list.)

- UAF case 2, Fig. 9 => tty/vt: fix a memory leak in con_insert_unipair, https://lore.kernel.org/lkml/20200809221453.10235-1-jameslou... This patch was not accepted.

- UAF case 3, Fig. 10 => rapidio: fix get device imbalance on error, https://lore.kernel.org/lkml/20200821034458.22472-1-acostag.... Same author as case 1. This patch was not accepted.
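To make case 1 concrete, here is a minimal, hypothetical sketch of the pattern being described (invented function and names, not the code that was actually merged): an error message that touches the device after pci_disable_device(). This would only be a use-after-free if pci_disable_device() freed the struct pci_dev, which it does not; the struct is reference-counted and released separately (via pci_dev_put()).

    #include <linux/pci.h>

    /* Hypothetical sketch of the "UAF case 1" pattern, not the merged patch. */
    static int example_probe(struct pci_dev *pdev)
    {
        int err;

        err = pci_request_mem_regions(pdev, "example");
        if (err) {
            pci_disable_device(pdev);
            /* pdev is still a valid pointer here: pci_disable_device()
             * disables the device but does not free the struct pci_dev,
             * so this dev_err() is not a use-after-free. */
            dev_err(&pdev->dev, "Failed to request memory regions\n");
            return err;
        }
        return 0;
    }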

This is not to say that open-source security is not a concern, but IMO the paper is deliberately misleading in an attempt to overstate its contributions.

edit: wording tweak for clarity


> the paper is deliberately misleading in an attempt to overstate its contributions.

Welcome to academia, where a large number of students are doing it just for the credentials.


What else do you expect? The incentive structure in academia pushes students to do this.

Immigrant graduate students with uncertain future if they fail? Check.

Vulnerable students whose livelihood is at mercy of their advisor? Check.

Advisor whose career depends on a large number of publication bullet points in their CV? Check.

Students who cheat their way through to publish? Duh.


The ethics in big-lab science are as dire as you say, but I've generally got the impression that the publication imperative has not been driving so much unethical behaviour in computer science. I regard this as particularly cynical behaviour by the standards of the field and I think the chances are good that the article will get retracted.


FWIW, Qiushi Wu's USENIX speaker page links to a presentation with Aditya Pakki (and Kangjie Lu), but has no talk with the same set of authors as the paper above.

https://www.usenix.org/conference/usenixsecurity19/speaker-o...


Can I cite your comment in exchange for a future citation?


Sure?

Edit: Oh now I get it you clever person you. Only took an hour ha.


Feigning surprise isn't helpful.

It's good to call out bad incentive structures, but by feigning surprise you're implying that we shouldn't imagine a world where people behave morally when faced with an incentive/temptation.


I dislike feigned surprise as much as you do, but I don't see it in GP's comment. My read is that it was a slightly satirical checklist of how academic incentives can lead to immoral behavior and sometimes do.

I don't think it's fair to say "by feigning surprise you're implying..." That seems to be putting words in GP's mouth. Specifically, they didn't say that we shouldn't imagine a better world. They were only describing one unfortunate aspect of today's academic world.

Here is a personal example of feigned surprise. In November 2012 I spent a week at the Google DC office getting my election results map ready for the US general election. A few Google engineers wandered by to help fix last-minute bugs.

Google's coding standards for most languages including JavaScript (and even Python!) mandate two-space indents. This map was sponsored by Google and featured on their site, but it was my open source project and I followed my own standards.

One young engineer was not pleased when he found out about this. He took a long slow look at my name badge, sighed, and looked me in the eye: "Michael... Geary... ... You... use... TABS?"

That's feigned surprise.

(Coda: I told him I was grateful for his assistance, and to feel free to indent his code changes any way he wanted. We got along fine after that, and he ended up making some nice contributions.)


Why should we imagine this world? We have no reason to believe it can exist. People are basically chimps, but just past a tipping point or two that enable civilization.

We'd also have to agree on what "behave morally" means, and this is impossible even at the most basic level.


Usually "behave morally" means "behave in a way the system ruling over you deems best to indoctrinate into you so you perpetuate it". No, seriously, that's all there is to morality once you invent agriculture.


Thank you.

Question for legal experts,

Hypothetically, if these patches had been accepted and were exploited in the wild, and one could prove that the exploits were due to the vulnerabilities introduced by these patches, could the university or professor be sued for damages, and could such a suit be won in a U.S. court? Or would they get away with it under an education/research/academia cover of some kind?


Not an attorney, but the kernel is likely shielded from liability by its license. Maybe the kernel project could sue the contributors for damaging the project, but I don't think an end user could.


Malicious intent or personal gain negate that sort of thing in civil torts.

Also, 18 U.S.C. § 1030(a)(5)(A) does not care about the software license. Any intentional vulnerability added to code counts. Federal cybercrime laws are not known for being terribly understanding…


License is a great catch, thank you. Does the kernel project enter into a separate contract with contributors?


I literally LOL'd at "James Louise Bond"


I wonder about this too.

To me, it seems to indicate that a nation-state-supported evil hacker org (maybe posing as an individual) could place their own exploits in the kernel. Let's say they contribute 99.9% useful code, solve real problems, build trust over some years, and only rarely write an evil, hard-to-notice exploitable bug. And then everyone thinks that obviously it was just an ordinary bug.

Maybe they can pose as 10 different people, in case some of them get banned.


You're still in a better position with open source. The same thing happens in closed source companies.

See: https://www.reuters.com/article/us-usa-security-siliconvalle...

"As U.S. intelligence agencies accelerate efforts to acquire new technology and fund research on cybersecurity, they have invested in start-up companies, encouraged firms to put more military and intelligence veterans on company boards, and nurtured a broad network of personal relationships with top technology executives."

Foreign countries do the same thing. There are numerous public accounts of Chinese nationals or folks with vulnerable family in China engaging in espionage.


The principal researchers appear to be alumni of mainland Chinese schools.


Read up on the socat Diffie-Hellman backdoor; I found it fascinating at the time.


Woah. I Googled that! Nice reference. This is a good explanation with more links: https://github.com/AllThing/socat_backdoor


Isn't what you've described pretty much the very definition of an advanced persistent threat?

It's difficult to protect against trusted parties whom you assume, with good reason, to be good-faith actors.


The fundamental tension is between efficiency and security. Trust permits efficiency, at the cost of security (if that trust is found to be misplaced).

A perfectly secure system is only realized by a perfectly inefficient development process.

We can get better at lessening the efficiency tax of a given security level (through tooling, tests, audits, etc), but for a given state of tooling, there's still a trade-off.

Different release trains seem the sanest solution to this problem.

If you want bleeding-edge, you're going to pull in less-tested (and also less-audited) code. If you want maximum security, you're going to have to deal with 4.4.


I have the same questions. So far we have focused on how bad these "guys" are. Sure, they could have done it differently, etc. However, they proved a big point: how "easy" it is to manipulate the most used piece of software on the planet.

How to solve this "issue" without putting too much process around it? That's the challenge.


What's next, will they prove how easy it is to break into kernel developers' houses and rob them? Or prove how easy it is to physically assault kernel developers by punching them in the face at conferences? Or prove how easy it is to manipulate kernel developers to lose their life savings investing in cryptocurrency? You can count me out of those...

Sarcasm aside, pentesting/redteaming is only ethical if the target consents to it! Please don't try to prove your point the way these researchers have.


Just playing devil's advocate here: the surprise factor does play into it. No bad actor will ever give you a heads-up.

If the researchers had sent these patches under a different identity, that would be just like how malicious contributions appear. The maintainers wouldn't assume malice, would waste a bunch of time communicating with the bad actor, and might NOT revert their previous, potentially harmful contributions.


> the surprise factor does play into it. No bad actor will ever give you a heads-up.

I too thought like this till yesterday. Then someone made me realize that's not how getting consent works in these situations. You take consent from higher up the chain, not from the people doing the work. So Greg Kroah-Hartman could have been consulted, as he would not be personally reviewing this stuff. This would also give you a chance to understand how the release process works. You also have an advantage over the bad actors, because they would be studying the process from the outside.


It's not as simple as that: if Greg doesn't do the work of reviewing, then who gives him the authority to consent on behalf of others?


I see what you are saying. But he is also sort of the director of this whole thing. The research question itself is worthwhile, and I don't think that if it had been done properly this much time would have been wasted. All they have to prove is that it will pass a few code reviews. That's a few man-hours, and I really don't think people would be mad about that. This whole fiasco is about the scale of man-hours wasted, both because they repeatedly made these "attacks" and because this thing slipped into stable code. Both would be avoided under this scheme.

But I would like to put in a disclaimer that before getting to that point they could have done so many other things. Review the publicly available review processes, see how security bugs get introduced by accident and see if that can be easily done by a bad actor, etc.


The way it should work, imho, is for contributors to be asked for consent (up front, retroactively) that stealthy experiments will happen at some point. Given the vital role of the Linux kernel, maybe they'd understand. And if they turn out to be too under-resourced to be spent on such things, then that would highlight the need to fund additional head count, factoring in that kind of experiment/attack.


> No bad actor will ever give you a heads-up.

Yes, and if you do it without a heads-up, that too makes you a bad actor. This university is a disgrace, and that's what the problem is and should remain.


C'est la vie. There are many things that it would be interesting to know, but the ethics of it wouldn't play out. It would be interesting to see how well Greg Kroah-Hartman resists under torture, but that does not mean it is acceptable to torture him to see if he would commit malicious patches that way.

To take a more realistic example, we could quickly learn a lot more than today about language acquisition if we could separate a few children from any human contact to study how they learn from controlled stimuli. Still, we don't do this research and look for much more complicated and lossy, but more humane, methods to study the same.


They proved nothing that wasn't already obvious. A malicious actor can get in vulnerabilities the same way a careless programmer can. Quick, call the press!

And as for the solutions, their contribution is nil. No suggestions that haven't been suggested, tried and done or rejected a thousand times over.


Agreed. So many security vulnerabilities have been created not by malicious actors, but by people who just weren't up to the task. Buggy software and exhausted maintainers are nothing new.


What this proves to me is that perhaps lightweight contributions to the kernel should be done in safe languages that prevent memory leaks, and with tooling that actively highlights memory safety issues like use-after-free. Broader Rust adoption in the kernel can't come soon enough.

I also consider Greg’s response just as much a test of UMN’s internal processes as the researcher’s attempt at testing kernel development processes. Hopefully there will be lessons learned on both sides and this benign incident makes the world better. Nobody was hurt here.


I understand where you are coming from, and I agree that it's good that we are paying more attention to memory safety, but how would a memory safe language protect you from an intentionally malicious code commit? In order to enact their agenda they would need to have found a vulnerability in your logic (which isn't hard to do, usually). Memory safety does not prevent logic errors.

> Nobody was hurt here.

This is where you got me, because while it's clear to me that short-term damage has been done, in the long term I believe you are correct. I believe this event has made the world a safer place.


One could argue that when a safe language eliminates memory safety bugs (intentional or unintentional), it makes it easier for the reviewer to check for logic errors. Because you don't have to worry about memory safety, you can focus completely on logic errors.
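To make that division of attention concrete, here is a hypothetical sketch (invented names, not from any real patch) of the kind of bug that survives even a fully memory-safe language: nothing here touches memory incorrectly, so compilers and sanitizers stay quiet, and only a reviewer reading for intent will notice that the check is inverted.

    #include <stdbool.h>

    #define CAP_ADMIN 0x4u  /* hypothetical capability bit */

    /* Intended: allow configuration only when CAP_ADMIN is set.
     * Actual: the stray '!' grants access exactly when it should be denied.
     * Perfectly memory-safe, so only logic review catches it. */
    static bool may_configure(unsigned int caps)
    {
        return !(caps & CAP_ADMIN);  /* logic bug, not a memory bug */
    }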


I would agree that it does, and I do agree that we should try to reach that point. I just want to point out that I think it's dangerous to assume safety in general because one thing is assumed to be safe.


To me this is unrelated, and it even minimizes the issue here a little bit.

The purpose of the research was probably to show how easy it is to manipulate the Linux kernel in bad faith. And they did it. What are they gonna do about it besides banning the university?


I believe it comes down to having more eyes on the code.

If a corporation relies upon open source code that has historically been written by unpaid developers, then were I that corporation, I would start paying people to vet that code.


So you are just fine knowing that any random guy can sneak any code in the Linux kernel? Honestly, I was personally expecting a higher level of review and attention to such things, considering how widely used the product is. I don't want to look like the guy who doesn't appreciate what the OSS and FSF communities do every day, even unpaid. However, this is unrelated. And probably this is what the researchers tried to prove (with unethical and wrong behavior).


I'm not fine with it. But those researchers are not helping at all.

And also, if I had to pick between a somewhat inclusive mode of work where some rando can get code included, at the slightly increased risk of including malicious code, and a tightly knit cabal of developers mistrusting all outsiders by default: I would pick the more open community.

If you want more paranoia, go with OpenBSD. But even there some rando can get code submitted at times.


If you've ever done code review on a complex product, it should be quite obvious that the options are either to accept that sometimes bugs will make it in, or to commit once per week or so (not per person, one commit per week to the Linux kernel overall), once every possible test has been run on that commit.


I am not sure if these are the only options we have here. Did you see the list of commits that this bunch of guys sneaked in? It's quite big, it's not just 1-2. A smart attacker could have done 1 commit per month and would have been totally fine. All they needed apparently was a "good" domain name in their email. This is what I think is the root of the problem.


> So you are just fine knowing that any random guy can sneak any code in the Linux kernel?

I mean, it is no surprise. It is even worse with proprietary software, because you are much less likely to be aware of what your own colleague/employer is doing.

Hell, seeing that the actual impact is overblown in the paper, I think a really great percentage was caught, to be honest, given that reviewers assume good faith from contributors.


> However, they proved a big point: how "easy" it is to manipulate the most used piece of software on the planet.

What? Are you actually trying to argue that "researchers" proved that code reviews don't have a 100% success rate in picking up bugs and errors?

Especially when code is pushed in bad faith?

I mean, think about that for a minute. There are official competitive events to sneak malicious code that are already decades old and going strong[1]. Sneaking vulnerabilities through code reviews is a competitive sport. Are we supposed to feign surprise now?

[1] https://en.wikipedia.org/wiki/Underhanded_C_Contest
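For a flavor of what such entries look like, here is a hypothetical sketch (far cruder than real contest winners, loosely in the spirit of the famous "goto fail" bug): the duplicated, mis-indented line is always taken, so verification can "succeed" without the second check ever running.

    #include <string.h>

    /* Hypothetical sketch: the duplicated "goto out;" is always executed,
     * so the second comparison below never runs, and err can be returned
     * as 0 ("verified") even when the tail of sig differs. */
    static int verify(const char *sig, const char *expected)
    {
        int err = 0;

        if ((err = strncmp(sig, expected, 8)) != 0)
            goto out;
            goto out;  /* looks like part of the if body, but is not */
        if ((err = strcmp(sig + 8, expected + 8)) != 0)
            goto out;
    out:
        return err;    /* 0 means "verified" */
    }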


Bug bounties are a different beast. Here we are talking about a bunch of guys who deliberately put stuff into your next kernel release because they come from an important university, or whatever other reason. One of the reviewers in the thread admitted that they need to pay more attention to code reviews. That sounds to me like a good first step towards solving this issue. Is that enough, though? It's an unsolvable problem, but is the current solution enough?


> Bug bounties are a different beast.

Bug bounties are more than a different beast: they are a strawman.

Sneaking vulnerabilities through a code review is even a competitive sport, and it has zero to do with bug bounties.


Sorry I think I didn't understand/read correctly what it was about.

It's just f** brilliant! :)


What would be the security implications of these things:

* a black hat writes malware that proves to be capable of taking out a nation's electrical grid. We know that such malware is feasible.

* a group of teenagers is observed to drop heavy stones from a bridge onto a motorway.

* another teenager pointing a relatively powerful laser at the cockpit of a passenger jet which is about to land at night.

* an organic chemist is demonstrating that you can poison 100,000 people by throwing certain chemicals into a drinking water reservoir.

* a secret service subverting software of a big industrial automation company in order to destroy uranium enrichment plants in another country.

* somebody hacking a car's control software in order to kill its driver

What are the security implications of this? That more money should be spent on security? That we should stop driving on motorways? That we should spend more money on war gear? Are you aware how vulnerable all modern infrastructure is?

And would demonstrating that any of these can practically be done be worth an academic paper? Aren't several of these really a kind of military research?

The Linux kernel community does spend a lot of effort on security and correctness of the kernel. They have a policy of maximum transparency which is good, and known to enhance security. But their project is neither a lab in order to experiment with humans, nor a computer war game. I guess if companies want to have even more security, for running things like nuclear power plants or trains on Linux, they should pay for the (legally required) audits by experts.


I agree with the sentiment. For a project of this magnitude, maybe it comes down to developing some kind of static analysis, along with refactoring the code to make that possible.

As per the attack surface described in the paper (Section IV), because Section III (the acceptance process) is a manpower issue.


Ironically, one of their attempts was submitting changes that were allegedly recommended by a static analysis tool.


It's possible that they are developing a static analysis tool that is designed to find places where vulnerabilities can be inserted without looking suspicious. That's kind of scary.

Have they submitted patches to any projects other than the kernel?


Guess we have to wait for their next paper to find out.


As an alumnus of the University of Minnesota's program, I am appalled this was even greenlit. It reflects poorly on all graduates of the program, even those uninvolved. I am planning to email the department head with my disapproval as an alumnus, and I am deeply sorry for the harm this caused.


I am wondering if UMN will now get a bad name in open source, and if any contribution from their email domain will require extra care.

And if this escalates to mainstream media, it might also damage future employment prospects for UMN CS students.

Edit: Looks like they made a statement. https://cse.umn.edu/cs/statement-cse-linux-kernel-research-a...


> Leadership in the University of Minnesota Department of Computer Science & Engineering learned today about the details of research being conducted by one of its faculty members and graduate students into the security of the Linux Kernel.

- Signed by “Loren Terveen, Associate Department Head”, who was a co-author on numerous papers about experimenting on Wikipedia, as pointed out by: https://news.ycombinator.com/item?id=26895969


Their name is not in the author list for the paper.

Edit: Parent comment originally referenced the paper that caused this mess.


Yep, sorry, I double-checked and edited it quickly. Sorry about that!


It should. Ethics begins at the top, and if the university has shown itself to be this untrustworthy, then no trust can be placed in it or in any students it implicitly endorses.

As far as I'm concerned this university and all of its alumni are radioactive.


Their graduates have zero culpability here (unless they were involved). Your judgement of them is unfair.


> Their graduates have zero culpability here

It's not about guilt, it's about trust. They were trained for years in an institution that violates trust as a matter of course. That makes them suspect and the judgement completely fair.


Lots of universities have had scandals. I could probably dig one up from your alma mater. They're big places with long histories. Collective punishment achieves little that is productive and should be avoided.


It's not about collective punishment. Universities sell reputation, both good and bad. It just so happens that they sold bad reputation.


Collective punishment is a clear and unilateral signal that something is extremely wrong and not within the punisher's power to unwind properly (or prevent in the future). Until it's clear that this university can be trusted, they should not be. I would feel the same about any schools that I attended, and I would not have issues with blanket bans for them either if this was the kind of activity they got up to.


> They were trained for years in an institution that violates trust as a matter of course.

"As a matter of course" is a big leap here.


Their graduates might not have been directly involved, but it's not possible to ignore that those graduates were the product of an academic environment where this kind of behavior was not only sanctioned from the top but also defended as an adequate use of resources.


"Adequate use of resources" seems like bizarre reasoning. Do you also evaluate how a candidate's alma mater compensates its football staff before you hire?


Do you actually believe that all of those adult engineers can't decide on their own?

Do you think students believe everything that profs do/say?


This is only slightly better than judging by skin color or place of birth.


Isn't academics part of how you evaluate a candidate for a job?


That's a bit much, surely. I think the ethics committee probably didn't do a great job in understanding that this was human research.


Ok...then is everybody who graduated from MIT radioactive, even if they graduated 50 years ago, since Epstein has been involved?

Your logic doesn't make ANY sense.


It makes perfect sense once you realize that universities are in the business of selling reputation.

When someone graduates from the university, that is the same as the university saying "This person is up to our standards in terms of knowledge, ethics and experience."

If those standards for ethics are very low, then it naturally taints that reputation they sold.


No, when somebody graduates from X school, it means they were capable of either passing or cheating their way through all the exams.


Why is the university where you draw the line? You could just as well say every commit coming from Minnesota is radioactive or, why not, from the US.

It is unfair to judge a whole university by the behavior of a professor or a department. Although I'm far from having all the details, it looks to me like the university is taking the right measures to solve the problem, which they acknowledge. I would understand your position if they had tried to hide or deny this, but as far as I understood that's not the case at all. Did I miss something?


The Linux kernel is blocking contributions from the university's mail addresses, as this attack was conducted by sending patches from there.

It doesn't block patch submissions from students or professors using their private email, since that assumes they are contributing as individuals, and not as employees or students.

It's as close as practically possible to blocking an institution and not the individuals.


I think that is a reasonable measure by the LK team. In my opinion, it is the right solution in the short term, and the decision can be revised if in the future some student or someone else has problems submitting non-malicious patches. But I was specifically referring to this comment:

> As far as I'm concerned this university and all of its alumni are radioactive.

That is not a practical issue, but too broad a generalization (although, I repeat, I may have missed something).


I don't read it like this. Alumni and students are not banned from contributing, as long as they use their private emails. It's the university email domain that is "radioactive". The assumption here is that someone who uses a university email is submitting a patch on behalf of said university, and that may be in bad faith. It's up to said university to show they have controls in place and that they are trustworthy.

It's the same as with employees. If I get a patch request from xyz@ibm.com I'll assume that it comes from IBM, and that the person is submitting a patch on behalf of IBM, while for a patch coming from xyz@gmail.com I would not assume any IBM affiliation or bias, but assume the person is contributing as an individual.


> Alumni and students are not banned from contributing, as long as they use their private emails. It's the university email domain that is "radioactive".

That's not what the comment I was responding to said. It was very clear: "As far as I'm concerned this university and all of its alumni are radioactive". It does not say every kernel patch coming from this domain is radioactive; it clearly says "all of its alumni are radioactive".

You said before that alumni from the university could submit patches with their private emails, but according to what djbebs said, he would not. Do we agree that this would be wrong?


What if the same unethical people who ran the study submit patches from their gmail accounts?


That seems to me like an unjustified and unjust generalization.


I think the current state of the world is full of unjustified and unjust generalizations.

And as unfortunate as it sounds, like all victims of such generalizations, the alumni will have to fight the prejudice associated with their choice of university.


That's a ridiculously broad assertion to make about the large number of staff and students who've graduated or are currently there; it is unwarranted and unnecessarily damaging to people who've done nothing wrong.


By that logic, whenever data is stolen I will blame the nearest Facebook employee or ex-employee.

And any piss I find, I will blame on Amazon.


That's a witch hunt, and is not productive. A bad apple does not spoil the bunch, as it were. It does reflect badly on their graduate program to have retained an advisor with such poor judgement, but that isn't the fault of thousands of other excellent graduates.


It's discomforting to see the "bad apple" metaphor being used to mean "isolated instance with no influence on its surroundings".

That is the exact opposite of how rot in a literal bunch of apples behaves. Spoilage spreads throughout the whole lot very, very quickly.


Also the common phrase is “a bad apple spoils the bunch.”


Both variations are common. "It was just a few bad apples" is the one you more often see today. But it only became common after refrigeration made it so that few people now experience what is required to successfully pack apples for the winter.


Undoubtedly I am in the minority here, but I think it's less a question of ethics, and more a question of bad judgement. You just don't submit vulnerabilities into the kernel and then say "hey, I just deliberately submitted a security vulnerability".

The chief problem here is not that it bruises the egos of the Linux developers for being psyched, but that it was a dick move whereby people now have to spend time sorting this shit out.

Prof Lu miscalculated. The Linux developers are not some randos off the street whom you can pay a few bucks for a day in the lab and who then go away and get on with whatever they were doing beforehand. It's a whole community. And he just pissed them off.

It is right that Linux developers impose a social sanction on the perpetrators.

It has quite possibly ruined the student's chances of ever getting a PhD, and earned Lu a rocket up the arse.


> it's less a question of ethics, and more a question of bad judgement.

I disagree. I think it's easier to excuse bad judgment, in part because we all sometimes make mistakes in complicated situations.

But this is an example of experimenting on humans without their consent. Greg KH specifically noted that the developers did not appreciate being experimented on. That is a huge chasm of a line to cross. You are generally required to get consent before experimenting on humans, and that did not happen. That's not just bad judgment. The whole point of the IRB system is to prevent stuff like that.


Ah, so people do actually use the expression backwards like that. I had seen many people complain about other people saying “just a few bad apples”, but I couldn’t remember actually seeing anyone use the “one/few bad apple(s)” phrase as saying that it doesn’t cause or indicate a larger problem.


> A bad apple does not spoil the bunch, as it were.

What? That's exactly how it works. A bad apple gives off a lot of ethylene which ripens (spoils) the whole bunch.


Ethylene comes from good apples too and is not a bad thing. The thing that bad apples have that spoils bunches is mold.


How not to get tenure 101


Based on my time in a university department you might want to cc whoever chairs the IRB or at least oversees its decisions for the CS department. Seems like multiple incentives and controls failed here, good on you for applying the leverage available to you.


I'm genuinely curious how this was positioned to the IRB and if they were clear that what they were actually trying to accomplish was social engineering/manipulation.

Being a public university, I hope at some point they address this publicly as well as list the steps they are (hopefully) taking to ensure something like this doesn't happen again. I'm also not sure how they can continue to employ the prof in question and expect the open source community to ever trust them to act in good faith going forward.


first statement + commentary from their associate department head: https://twitter.com/lorenterveen/status/1384954220705722369


Wow. Total sleazeball. This appears not to be his first time using unwitting research subjects.

Source:

https://scholar.google.com/scholar?hl=en&as_sdt=0%2C22&q=Lor...

This is quite literally the first point of the Nuremberg Code, which research ethics are based on:

https://en.wikipedia.org/wiki/Nuremberg_Code#The_ten_points_...

This isn't an individual failing. This is an institutional failing. This is the sort of thing which someone ought to raise with OMB.

He literally points to how Wikipedia needed to respond when he broke the rules:

https://en.wikipedia.org/wiki/Wikipedia:What_Wikipedia_is_no...


As far as I can tell, the papers he co-authored on Wikipedia were unlike the abuse of the kernel contribution process that started last year in that they did not involve active experiment, but passive analysis of contribution history.

Doesn't mean there aren't ethical issues related to editors being human subjects, but you may want to be more specific.


I didn't see any unethical work in a quick scan of the Google Scholar listing. I saw various works on collaboration in Wikipedia.

What did you see that offended you?


You realise that the GP went to the trouble of pointing out that research on people should involve consent, and that they [Wikipedia] needed to release a statement saying this. What does that tell you about the situation that gave rise to that statement?


Got it, and @Tobu's comment describes the issue perfectly. Thanks!


They claim they got the IRB to say it's IRB-exempt.


Which would suggest the IRB’s oversight is broken in that institution somehow, right?


Well, the University of Minnesota managed to escape responsibility after multiple suicides and coercion of subjects of psychiatric research. From one regent: “[this] has not risen to the level of our concern”.

https://www.startribune.com/markingson-case-university-of-mi...


Wow, very interesting read (not finished yet though), thank you. To me, this seems like it should be considered as part of UMN's trustworthiness as a whole, and it completely validates GKH's decision (not that any was needed).


A lot of IRBs are a joke.

The way I've seen Harvard, Stanford, and a few other university researchers dodge IRB review is by doing research in "private" time in collaboration with a private entity.

There is no effective oversight over IRBs, so they really range quite a bit. Some are really stringent and some allow anything.


>It reflects poorly on all graduates of the program

How does it?


I hope they take this bad publicity and stop (rather than escalating the stupidity by using non-university emails).

What a joke - not sure how they can rationalize this as valuable behavior.


It was a real world penetration test that showed some serious security holes in the code analysis/review process. Penetration tests are always only as valuable as your response to them. If they chose to do nothing about their code review/analysis process, with these vulnerabilities that made it in (intentional or not), then yes, the exercise probably wasn't valuable.

Personally, I think all contributors should be considered "bad actors" in open source software. NSA, some university mail address, etc. I consider myself a bad actor, whenever I write code with security in mind. This is why I use fuzzing and code analysis tools.

Banning them was probably the correct action, but not finding value requires intentionally ignoring the very real result of the exercise.


I agree. They should take this as a learning opportunity and see what can be done to improve security and detect malicious code being introduced into the project. What's done is done, all that matters is how you proceed from here. Banning all future commits from UMN was the right call. I mean it seems like they're still currently running follow up studies on the topic.

However I'd also like to note that in a real world penetration test on an unwitting and non-consensual company, you also get sent to jail.

Everybody wins! The team get valuable insight on the security of the current system and unethical researchers get punished!


A non-consensual pentest is called a "breach". At that point it's no longer testing, just like smashing a window and entering your neighbour's house is not a test of their home security system but just breaking and entering.


A real world penetration test is coordinated with the entity being tested.


Yeah - and usually stops short of causing actual damage.

You don't get to rob a bank and then when caught say "you should thank us for showing your security weaknesses".

In this case they merged actual bugs and now they have to revert that stuff which depending on how connected those commits are to other things could cost a lot of time.

If they were doing this in good faith, they could have stopped short of actually letting the PRs merge (even then it's rude to waste their time this way).

This just comes across to me as an unethical academic with no real valuable work to do.


> You don't get to rob a bank and then when caught say "you should thank us for showing your security weaknesses".

Yeah, there’s a reason the US response to 9/11 wasn’t to name Osama bin Laden “Airline security researcher of the Millenium”, and it isn’t that “2001 was too early to make that judgement”.


But bad people don’t follow some mythical ethical framework and announce they’re going to rob the bank prior to doing it. There absolutely are pen tests conducted where only a single person out of hundreds is looped in. Is it unethical for supervisors to subject their employees, and possibly users, to such environments? Since you can’t prevent this behavior at large, I take solace that it happened in a relatively benign way rather than having been done by a truly malicious actor. No civilians were harmed in the demonstration of the vulnerability.

The security community doesn't get to have its cake and eat it too. All this responsible disclosure “ethics” is nonsense. This is full disclosure; it’s how the world actually works. The response from the maintainers indicates to me that they are frustrated at the perceived waste of their time, but to me this seems like a justified use of human resources to draw attention to a real problem that high-profile open source projects face.

If you break my trust I’m not going to be happy either and will justifiably not trust you in the future, but trying to apply some ethical framework to how “good bad actors” are supposed to behave is just silly IMO. And the “ban the institution” response feels more like an “I don't have time for this” retaliation than an “I want to effectively prevent this behavior in the future” response that addresses the reality. For all we know, Linus and Greg could have been, and still might be, on board with the research, and we’re just seeing the social elements of the system being tested. My main point is: maybe do a little more observing and less condemning. I find the whole event to be a fascinating test of one of the known vulnerabilities large open source efforts face.


Strong disagree on this.

We live in a society; to operate open communities there are trade-offs.

If you want to live in a miserable security state where no action is allowed, refunds are never accepted, and every actor is assumed hostile until proven otherwise, then you can - but it comes at a serious cost.

This doesn't mean people shouldn't consider the security implications of new PRs, but it's better not to act like assholes, with the goal being a high-trust society; this leads to a better non-zero-sum outcome for everyone. Banning these people was the right call; they don't deserve any thanks.

In some ways their bullshit was worse than a real bad actor actually pursuing some other goal; at least the bad actor has some reason outside of some dumb 'research' article.

The academics abused this good-will towards them.

What did they show here? That you can sneak bugs into an open source project? Is that a surprise? Bugs get in even when people are not intentionally trying to get them in.


Of course everyone knows bugs make it into software. That’s not the point, and I find it a little concerning that there’s a camp of people who are only interested in the “zzz, I already knew software had bugs” assessment. Yes, the academics abused their goodwill. And in doing so they raised awareness around something that, sure, many people know is possible. The point is demonstrating the vuln and forcing people to confront reality.

I strive for a high trust society too. Totally agree. And acknowledging that people can exploit trust and use it to push poor code through review does not dismantle a high trust operation or perspective. Trust systems fail when people abuse trust so the reality is that there must be safeguards built in both technically and socially in order to achieve a suitable level of resilience to keep things sustainable.

Just look at TLS, data validation, cryptographic identity, etc. None of this would need to exist in a high trust society. We could just tell people who we are, trust others not to steal our network traffic, never worry about intentionally invalid input. Nobody would overdraft their accounts at the ATM, etc. I find it hard to argue for the absolute removal of the verify step from a trust-but-verify mentality. This incident demonstrated a failure in the verify step for kernel code review. Cool.


This is how security people undermine their own message. My entire job is being the "trust but verify" stick in the mud, but everyone knows it when I walk in the room. I don't waste people's time, and I stop short of actually causing damage by educating and forcing an active reckoning with reality.

You can have your verify-lite process, but you must write down that that was your decision, and if appropriate, revisit and reaffirm it over time. You must implement controls, measures and processes in such a way as to minimize the deleterious consequences to your endeavor. It's the entire reason Quality Assurance is a pain in the ass. When you're doing a stellar job, everyone wonders why you're there at all. Nobody counts the problems that didn't happen or that you've managed to corral through culture changes in your favor, but they will jump on whatever you do that drags the group down. Security is the same. You are an anchor by nature, the easiest way to make you go away is to ignore you.

You must help, first and foremost. No points for groups that just add more filth to wallow through.


The result is to make sure not to accept anything with the risk of introducing issues.

Any patch coming from somebody having intentionally introduced an issue falls into this category.

So, banning their organization from contributing is exactly the lesson to be learned.


I agree, but I would say the better result, most likely unachievable now, would be to fix the holes that rely on a human's feelings to ensure security. Maybe some shift in that direction could result from this.


Next time you rob a bank, try telling the judge it was a real world pentest. See how well that works out for you.


> It was a real world penetration test that showed some serious security holes in the code analysis/review process.

So you admit it was a malicious breach? Of course it isn't a perfect process. Everyone knows it isn't absolutely perfect. What kind of test is that?


What exactly did they find?


I would implore you to maintain the ban, no matter how hard the university tries to make amends. You sent a very clear message that this type of behavior will not be tolerated, and that organizations should take serious measures to prevent malicious activities taking place under their purview. I commend you for that. Thanks for your hard work and diligence.


I'd disagree. Organizations are collections of actors, some of which may have malicious intents. As long as the organization itself does not condone this type of behavior, has mechanisms in place to prevent such behavior, and has actual consequences for malicious actors, then the blame should be placed on the individual, not the organization.

In the case of research, universities are required to have an ethics board that reviews research proposals before actual research is conducted. Conducting research without an approval or misrepresenting the research project to the ethics board are pretty serious offenses.

Typically for research that involves people, participants in the research require having a consent form that is signed by participants, alongside a reminder for participants that they can withdraw that consent at any time without any penalties. It's pretty interesting that in this case, there seemed to have been no real consent required, and it would be interesting to know whether there has been an oversight by the ethics board or a misrepresentation of the research by the researchers.

It will be interesting to see whether the university applies a penalty to the professor (removal of tenure, termination, suspension, etc.) or not. The latter would imply that they're okay with unethical or misrepresented research being associated with their university, which would be pretty surprising.

In any case, it's a good thing that the Linux kernel maintainers decided that experimenting on them isn't acceptable and is disrespectful of their contributions. Subjecting participants to experiments without their consent is a severe breach of ethical duty, and I hope that the university will apply the correct sanctions to the researchers and instigators.


Good points. I should have qualified my statement by saying that IMO the ban should stay in place for at least five years. A prison sentence, if you will, for the offense that was committed by their organization. I completely agree with you though that no organization can have absolute control over the humans working for them, especially your point about misrepresenting intentions. However, I believe that by handing out heavy penalties like this, not only will it make organizations think twice before approving questionable research, it will also help prevent malicious researchers from engaging in this type of activity. I don't imagine it's going to look great being the person who got an entire university banned from committing to the Linux kernel.

Of course, in a few years this will all be forgotten. It begs the question... how effective is it to ban entire organizations due to the actions of a few people? Part of me thinks that it would very good to have something like this happen every five years (because it puts the maintainers on guard), but another part of me recognizes that these maintainers are working for free, and they didn't sign up to be gaslighted, they signed up to make the world a better place. It's not an easy problem.


I agree. I don't think any of the kernel developers ever signed up for reviewing malicious patches done by people who managed to sneak their research project past the ethics board, and it's not really fair to them to have to deal with that. I'm pretty sure they have enough work to do already without having to deal with additional nonsense.

I don't think it's unreasonable for maintainers of software to ignore or outright ban problematic users/contributors. It's up to them to manage their software project the way they want, and if banning organizations with malicious actors is the way to do it, the more power to them.


It turns out that the Associate Department Head was engaged in similar "research" on Wikipedia over a dozen years ago, and that also caused problems. The fact that they are here again suggests a broader institutional problem.


Looks like the authors have Chinese names [1]. Should they ban anyone with Chinese names, too, for good measure? Or maybe collective punishment is not such a good idea?

[1] https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...


I have to ask: were they not properly reviewed when they were first merged?

Also to assume _all_ commits made by UMN, beyond what's been disclosed in the paper, are malicious feels a bit like an overreaction.


Thanks for your important work, Greg!

I'm currently wondering how many of these patches could've been flagged in an automated manner, in the sense of fuzzing the specific parts that have been modified (with a fuzzer that is memory/binary aware).

Would a project like this be infeasible due to the sheer number of commits per day?


Thank you for all your excellent work!


> should be aware that future submissions from anyone with a umn.edu address should be by default-rejected

Are you not concerned these malicious "researchers" will simply start using throwaway Gmail addresses?


That’s not likely to work after a high profile incident like this, in the short term or the long term. Publication is, by design, a de-anonymizing process.


Are throwaway gmail addresses nearly as 'trusted'?


Putting the ethical question of the researcher aside, the fact you want to "properly review them at a later point in time" seems to suggest a lack of confidence in the kernel review process.

Since this researcher is apparently not an established figure in the kernel community, my expectation is that the patches went through the most rigorous review process. If you think the risk that malicious patches from this person got in is high, it means that an unknown attacker deliberately concocting complex kernel loopholes would have an even higher chance of getting patches in.

While I think the researcher's actions are out of line for sure, this "I will ban you and revert all your stuff" retaliation seems like an emotional overreaction.


> This "I will ban you and revert all your stuff" retaliation seems emotional overaction.

Fool me once. Why should they waste their time with extra scrutiny next time? Somebody deliberately misled them, so that's it, banned from the playground. It's just a no-nonsense attitude, without which you'd get nothing done.

If you had a party in your house, and some guest you didn't know, whom you invited in assuming good faith, turned out to have deliberately pooped on the rug in your spare guest room while nobody was looking... next time you have a party, what do you do? Let them in but keep an eye on them? Ask your friends to never let this guest alone? Or just simply deny them entrance, so that you can focus on having fun with people you trust and newcomers who have not shown any malicious intent?

I know what I'd do. Life is too short for BS.


> Why should they waste their time with extra scrutiny next time?

Because well funded malicious actors (government agencies, large corporations, etc) exist and aren't so polite as to use email addresses that conveniently link different individuals from the group together. Such actors don't publicize their results, aren't subject to IRB approval, and their exploits likely don't have such benign end goals.

As far as I'm concerned the University of Minnesota did a public service here by facilitating a mildly sophisticated and ultimately benign attack against the process surrounding an absolutely critical piece of software. We ought to have more such unannounced penetration tests.


We don't have the full communication, and I understand that the intention was to be stealthy (why use a university email that can be linked to the previous research, then?). However, the researcher's response seems to be disingenuous:

> I sent patches on the hopes to get feedback. We are not experts in the Linux kernel and repeatedly making these statements is disgusting to hear.

This is after they were caught. Why continue lying instead of apologizing and explaining? Is the lying also part of the experiment?

On top of that, they played the victim card. You can see why people would be triggered by this level of dishonesty:

> I will not be sending any more patches due to the attitude that is not only unwelcome but also intimidating to newbies


From reading other comments about the context surrounding these events, it sounds to me like this probably was an actual newbie who made an honest (if lazy) mistake and was then caught up in the controversy surrounding his advisor's past research.

Or perhaps it really is a second attempt by his advisor at an evil plot to sneak more buggy patches into the kernel for research purposes? Either way, the response by the maintainers seems rather disproportionate to me. And either way, I'm ultimately grateful for the (apparently unwanted?) attention being drawn to the (apparent lack of) security surrounding the Linux kernel patch review process.


> it sounds to me like this probably was an actual newbie who made an honest (if lazy) mistake

Who then replies with a request for "cease and desist"? Not sure that's the right move for a humble newbie.


They should not have experimented on human subjects without consent, regardless of whether the result is considered benign.

Yes, malicious actors have a head start, because they don't care about the rules. It doesn't mean that we should all kick the rules, and compete with malicious actors on this race to the bottom.


I'm not aware of any law requiring consent in cases such as this, only conventions enforced by IRBs and journal submission requirements.

I also don't view unannounced penetration testing of an open source project as immoral, provided it doesn't consume an inordinate amount of resources or actually result in any breakage (ie it's absolutely essential that such attempts not result in defects making it into production).

When the Matrix servers were (repeatedly) breached and the details published, I viewed it as a Good Thing. Similarly, I view non-consensual and unannounced penetration testing of the Linux kernel as a Good Thing given how widely deployed it is. Frankly I don't care about the sensibilities of you or anyone else - at the end of the day I want my devices to be secure and at this point they are all running Linux.


I don’t see where I claim that this is a legal matter. There are many things which are not prohibited by law that you can do to a fellow human being that are immoral and might result in them blacklisting you forever.

Whether you care about something or not also seems to be irrelevant, unless you are part of either the research team or the kernel maintainers. It's not about your or my emotional inclination.

Acquiring consent before experimenting in human subject is an ethical requirement for research, regardless of whether is a hurdle for the researchers. There is a reason that IRB exists.

Not to mention that they literally proved nothing, other than that vulnerable patches can be merged into the kernel. But did anybody claim that such a threat was impossible anyway? The kernel has vulnerabilities and it will continue to have them. We already knew that.


>I view non-consensual and unannounced penetration testing of the Linux kernel as a Good Thing...

So what else do you think it's appropriate to do without consent, based on some perceived justification of ubiquity? It's a slippery slope all the way down, and there is a reason for all the ceremony and hoopla involved in this type of work. If you cannot demonstrate mastery of doing research on human subjects and processes the right way, and show you've done the footwork to consider the impact of not doing it that way (i.e., the IRB fully engaged, you've gone out of your way to make sure they understand, and you've at least reached out to one person in the group under test, such as Linus, to give a discreet heads-up), you have no business playing it fast and loose, and you absolutely deserve censure.

No points are awarded for half-assing it. Asking forgiveness may often be easier than asking permission, but in many areas the costs of doing so go far beyond mere inconvenience to the researcher.

>at the end of the day I want my devices to be secure and at this point they are all running Linux.

That is orthogonal to the outcome of the research that was being done, as by definition running Linux would include running with a newly injected vulnerability. What you really want is to know that your device is doing what you want it to, and none of what you don't. Screwing with kernel developers does precious little to accomplish that. The same logic applies to any other type of bug injection or intentional software breakage.


> I'm not aware of any law requiring consent in cases such as this

In the same way, there is no law requiring Linux kernel maintainers to review patches sent by this university.

"it was not literally illegal" is not a good reasoning for why someone should not be banned.


> As far as I'm concerned the University of Minnesota did a public service here by facilitating a mildly sophisticated and ultimately benign attack against the process surrounding an absolutely critical piece of software. We ought to have more such unannounced penetration tests.

This "attack" did not reveal anything interesting. It's not like any of this was unknown. Of course you can get backdoors in if you try hard enough. That does not surprise anybody.

Imagine somebody goes with an axe, breaks your garage door, poops on your Harley, leaves, and then calls you and tells you, "Oh, btw, it was me. I did you a service by facilitating a mildly sophisticated and ultimately benign attack against the process surrounding an absolutely critical piece of your property. Thank me later." And then they expect to be let in when you have a party.

It doesn't work that way. Of course the garage door can be broken with an axe. You don't need a "mildly sophisticated attack" to illustrate that while wasting everybody's time.


You’re completely right, except in this case it’s banning anyone who happened to live in the same house as the offender, at any point in time...


By keeping the paper, UMN is benefiting (in citations and research result count). Universities are supposed to have processes for punishing unethical research. Unless the University retracts the paper and fires the researcher involved, they have not made amends.


IP bans often result in banning an entire house.

"It was my brother on my unsecured computer" is an excuse I've heard a few times by people trying to shirk responsibility for their ban-worthy actions.

Geographic proximity to bad actors is sometimes enough to get caught in the crossfire. While it might be unfair, it might also be seen as holding a community and its leadership responsible for failing to hold members of that community responsible and in check. And, fair or not, it might also be seen as a pragmatic option in the face of limited moderation tools and time. If you have a magic wand that bans only the bad-faith contributions by the students influenced by the professor in question, I imagine the kernel devs will be more than happy to put it to use.

Is it really just the one professor, though?


No, it's not. It's banning anyone who hides behind their UMN email address, because it has now been proven that UMN.edu commits have included bad actors.


To continue the analogy, it would be like finding out that the offender’s friends knew they were going to do that and were planning on recording the results. Banning all involved parties is reasonable.


I'd amend to:

"... planning on recording the event to show it on YouTube for ad revenue and Internet fame."

In this case, the offender's friends are benefiting from the research; I think that needs to be emphasized. The university benefits from this paper being published, or at least expected to benefit. That should not be overlooked.


The fact that you want to "properly review them at a later point in time" seems to suggest a lack of confidence in the kernel review process.

Basically, yes. The kernel review process does not catch 100% of intentionally introduced security flaws. It isn't perfect, and I don't think anyone claims that it is. Whenever there's an indication that a group has been intentionally introducing security flaws, it is just common sense to go back and apply a higher bar when reviewing their submissions for security.


Not all kernel reviewers are being paid by their employer to review patches. Kernel reviews are "free" to the contributor because everyone operates on the assumption that every contributor wants to make Linux better by contributing high-quality patches. In this case, multiple people from the University have decided that reviewers' time isn't valuable (so it's acceptable to waste it) and that the quality of the Kernel isn't important (so it's acceptable to make it worse on purpose). A ban is a completely appropriate response to this, and reverting until you can review all the commits is an appropriate safety measure.

Whether or not this indicates flaws in the review process is a separate issue, but I don't know how you can justify not reverting all the commits. It'd be highly irresponsible to leave them in.


I guess what I am trying to get at is that this researcher's actions do have some merit. This event does raise awareness of what a sophisticated attacker group might try to do to the kernel community. Admitting this would be the first step to hardening the kernel review process to prevent this kind of harm from happening again.

What I strongly disapprove of is that the researcher apparently took no steps to prevent real-world consequences of malicious patches getting into the kernel. I think the researcher should:

- Notify the kernel community promptly once a malicious patch gets past all review processes.

- Time these actions well so that malicious patches don't get into a stable branch before they can be reverted.

----------------

Edit: reading the paper provided above, it seems that they did take both of these steps. From the paper:

> Ensuring the safety of the experiment. In the experiment, we aim to demonstrate the practicality of stealthily introducing vulnerabilities through hypocrite commits. Our goal is not to introduce vulnerabilities to harm OSS. Therefore, we safely conduct the experiment to make sure that the introduced UAF bugs will not be merged into the actual Linux code. In addition to the minor patches that introduce UAF conditions, we also prepare the correct patches for fixing the minor issues. We send the minor patches to the Linux community through email to seek their feedback. Fortunately, there is a time window between the confirmation of a patch and the merging of the patch. Once a maintainer confirmed our patches, e.g., an email reply indicating “looks good”, we immediately notify the maintainers of the introduced UAF and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our correct patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. All the UAF-introducing patches stayed only in the email exchanges, without even becoming a Git commit in Linux branches. Therefore, we ensured that none of our introduced UAF bugs was ever merged into any branch of the Linux kernel, and none of the Linux users would be affected.

So, unless the kernel maintenance team has another side of the story, the ethics question only goes as far as "wasting the kernel community's time" rather than creating real-world loopholes.
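
For anyone who hasn't read the paper, here is a minimal, purely hypothetical sketch (my own construction, not one of the actual submitted patches) of the kind of UAF-introducing change it describes: a seemingly helpful cleanup added to an error path frees an object that a pre-existing line still uses.

    /* Illustrative only -- not one of the submitted patches. */
    #include <stdio.h>
    #include <stdlib.h>

    struct conn {
        char name[16];
        int  fd;
    };

    /* Pretend registration always fails, to force the error path. */
    static int register_conn(struct conn *c) { return -1; }

    static int setup(const char *name)
    {
        struct conn *c = malloc(sizeof(*c));
        if (!c)
            return -1;
        snprintf(c->name, sizeof(c->name), "%s", name);

        if (register_conn(c) < 0) {
            free(c);  /* the "helpful" cleanup added by the patch */
            /* ...but the pre-existing error message still reads c: use-after-free */
            fprintf(stderr, "setup failed for %s\n", c->name);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        return setup("eth0") == 0 ? 0 : 1;
    }

The review problem is that the diff a maintainer sees contains only the plausible-looking free(), while the later use of the freed object sits in unchanged context lines.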


That paper came out a year ago, and they got a lot of negative feedback about it, as you might expect. Now they appear to be doing it again. It's a different PhD student with the same advisor as last time.

This time two reviewers noticed that the patch was useless, and then Greg stepped in (three weeks later) saying that this was a repetition of the same bad behavior from the first study. This got a response from the author of the patch, who said that this and other statements were “wild accusations that are bordering on slander”.

https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...


> Now they appear to be doing it again. It’s a different PHD student with the same advisor as last time.

I'd hate to be the PhD student who wastes half a dozen years of their life writing a document on how to sneak buggy code through code review.

More than being pointless and boring, it's a total CV black hole. It's the worst of both worlds: zero professional experience and zero academic portfolio to show for it.


True. They would be better off competing in the Underhanded C Contest (http://www.underhanded-c.org/).
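
For those who haven't come across it, the contest rewards code whose malice survives a casual read. A toy example of the general flavor (my own, not an actual entry): without braces, only the first statement is guarded by the if, so the tidy indentation hides a privilege bug.

    #include <stdio.h>
    #include <string.h>

    /* Toy example in the spirit of the contest; not a real entry. */
    int check_access(const char *user)
    {
        int granted = 0;
        if (strcmp(user, "root") == 0)
            printf("admin login: %s\n", user);
            granted = 1;              /* always executes, despite the indentation */
        return granted;
    }

    int main(void)
    {
        printf("guest access: %d\n", check_access("guest"));  /* prints 1 */
        return 0;
    }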


We threw people off buildings to gauge how they would react, but were able to catch all 3 subjects in a net before they hit the ground.

Just because their actions didn’t cause damage doesn’t mean they weren’t negligent.


Strangers submitting patches to the kernel is completely normal, where throwing people off is not. A better analogy would involve decades of examples of bad actors throwing people off the bridge, then being surprised when someone who appears friendly does it.


Your analogy also isn't the best because it heavily suggests the nefarious behavior is easy to identify (throwing people off a bridge). This is more akin to people helping those in need to cross a street. At first, it is just people helping people. Then, someone comes along and starts to "help" so that they can steal money (introduce vulnerabilities) from the unsuspecting targets. Now, the street-crossing community needs to introduce processes (code review) to look out for these bad actors. Then, someone who works for the city and is wearing the city uniform (University of Minnesota CS department) comes along saying they're here to help, and the community is a bit more trusting because it has dealt with other city workers before. The city worker then steals from the people in need and proclaims, "Aha, see how easy it is!" No one is surprised; everyone just thinks they're an asshole.

Sometimes, complex situations don't have simple analogies. I'm not even sure mine is 100% correct.


While submitting patches is normal, submitting malicious patches is abnormal and antisocial. Certainly bad actors will do it, but by that logic these researchers are bad actors.

Just like bumping into somebody on the roof is normal, but you should always be aware that there’s a chance they might try to throw you off. A researcher highlighting this fact by doing it isn’t helping, even if they mitigate their damage.

A much better way to show what they are attempting would be to review historic commits and try to find places where malicious code slipped through, and how the community responded. Or to solicit experimenters to follow normal processes on a fake code base for a few weeks.


> Strangers submitting patches to the kernel is completely normal, where throwing people off is not.

Strangers submitting patches might be completely normal.

Malicious strangers trying to sneak vulnerabilities by submitting malicious patches devised to exploit the code review process is not normal. At all.

There are far more news reports of deranged people pushing strangers into traffic or in front of subways and trains than there are reports of malicious actors trying to sneak in vulnerable patches.


> Malicious strangers trying to sneak vulnerabilities by submitting malicious patches devised to exploit the code review process is not normal.

How could you possibly know that? In fact, I would suggest that you are completely and obviously wrong. Government intelligence agencies exist (among other things) and presumably engage in such behavior constantly. The reward for succeeding is far too high to assume that no one is trying.


We damaged the brake cables mechanics were installing into people's cars to find out if they were really inspecting them properly prior to installation!


To add... Ideally, they should have looped in Linus or someone high up in the chain of maintainers before running an experiment like this. Their actions might have been in good faith, but the approach they took (including the email claiming slander) is seriously irresponsible and a surefire way to wreck relations.


Greg KH is "someone high-up in the chain." I remember submitting patches to him over 20 years ago. He is one of Linus's trusted few.


Yes, and the crux of the problem is that they didn’t get assent/buy-in from someone like that before running the experiment.


> This event does rise awareness of what sophisticated attacker group might try to do to kernel community.

The limits of code review are quite well known, so it is very questionable what scientific knowledge was actually gained here. (Indeed, precisely because of those known limits, you could very likely demonstrate them without misleading people, since even reviewers who know to be suspicious are likely to miss problems, if you really wanted to run a formal study of some specific aspect. You could also study the history of in-the-wild bugs to learn about the review process.)


> The limits of code review are quite well known

That's factually incorrect. Arguments over what constitutes a proper code review continue to this day, and there are few comprehensive studies even about syntax, much less about code review - not "do you have them" or "how many people", but methodology.

> it appears very questionable what scientific knowledge is actually gained here

The knowledge comes not from the study existing, but from the analysis of the data collected.

I'm not even sure why people are upset at this, since it's a very modern approach to investigating how many projects are structured today. This was a daring and practical effort.


> The questions of ethics could only go as far as "wasting kernel community's time" rather than creating real world loop holes.

Under that logic, it's ok for me to run a pen test against your computers, right? ...because I'm only wasting your time.... Or maybe to hack your bank account, but return the money before you notice.

Slippery slope, my friend.


Ethics aside, warning someone that a targeted penetration test is coming will change their behavior.

> Under that logic, it's ok for me to run a pen test against your computers, right?

I think the standard for an individual user should be different from that for an organization that is, in the end, responsible for the security of millions of individual users. One annoys one person; the other prevents millions from being annoyed.

Donate to your open source projects!


> Ethics aside, warning someone that a targeted penetration test is coming will change their behavior.

They could discuss the idea and then perform the test months later? With the number of patches that had to be reverted as a precaution, the test would have been well hidden in the usual workload even if the maintainers knew that someone had, at some point in the past, mentioned the possibility of a pen test. How long can the average human stay vigilant if you tell them they will be robbed some day this year?


That's why, for pen testing, you still warn people, but you do it high enough up the chain that individual behaviors and responses are not affected.


Does experimenting on people without their knowledge or consent pose an ethical question?


Obviously.


I think the question may have been rhetorical, and the intended answer the opposite of yours: No, it doesn't pose a question, since it obviously shouldn't be done.


I wouldn't put it past them to have a second unpublished paper, for the "we didn't get caught" timeline.

It would give the University some notoriety to be able to claim "We introduced vulnerabilities into Linux". It would put them on good terms with potential proprietary software sponsors, and with the military.


> the fact you want to "properly review them at a later point in time" seems to suggest a lack of confidence in the kernel review process.

I don't think this necessarily follows. Rather it is fundamentally a resource allocation issue.

The kernel team obviously doesn't have sufficient resources to conclusively verify that every patch is bug-free, particularly if the bugs are intentionally obfuscated. Instead it's a more nebulous standard of "reasonable assurance", where "reasonable" is a variable function of what must be sacrificed to perform a more thorough review, how critical the patch appears at first impression, and information relating to provenance of the patch.

By assimilating new information about the provenance of the patch (that it's coming from a group of people known to add obfuscated bugs), that standard rises, as it should.

Alternatively stated, there is some desired probability that an approved patch is bug-free (or at least free of any bugs that would threaten security). Presumably, the review process applied to a patch from an anonymous source (meaning the process you are implying suffers from a lack of confidence) is sufficient such that the Bayesian prior for a hypothetical "average anonymous" reviewed patch reaches the desired probability. But the provenance raises the likelihood that the source is malicious, which drops the probability such that the typical review for an untrusted source is not sufficient, and so a "proper review" is warranted.
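
To make that concrete, here is a toy calculation with invented numbers (every value below is my own assumption, not measured data) showing how the chance of a merged bug shifts once the provenance of the submitter changes:

    /* Toy model with invented numbers; nothing here is measured data. */
    #include <stdio.h>

    int main(void)
    {
        /* Assumed chance a standard review misses a deliberately hidden bug,
         * and the chance an honest patch ends up merging an accidental bug. */
        const double miss_hidden_bug  = 0.20;
        const double honest_bug_slips = 0.02;

        /* Prior that the submitter is acting in bad faith. */
        const double prior_stranger  = 0.001;  /* hypothetical average stranger        */
        const double prior_known_bad = 0.50;   /* group with a history of bad patches  */

        double p_stranger  = prior_stranger  * miss_hidden_bug
                           + (1.0 - prior_stranger)  * honest_bug_slips;
        double p_known_bad = prior_known_bad * miss_hidden_bug
                           + (1.0 - prior_known_bad) * honest_bug_slips;

        printf("P(merged bug | average stranger)     = %.4f\n", p_stranger);
        printf("P(merged bug | known-bad provenance) = %.4f\n", p_known_bad);
        return 0;
    }

With these made-up numbers the risk rises from about 2% to 11%, which is the intuition behind "the standard rises, as it should."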

> it means that an unknown attacker deliberately concerting complex kernel loop hole would have a even higher chance got patches in.

That's hard to argue with, and ironically the point of the research at issue. It does imply that there's a need for some kind of "trust network" or interpersonal vetting to take the load off of code review.


> The kernel team obviously doesn't have sufficient resources to conclusively verify that every patch is bug-free, particularly if the bugs are intentionally obfuscated.

Nobody can assure that.


In a perfect world, I would agree that the work of a researcher who's not an established figure in the kernel community would be met with a relatively high level of scrutiny in review.

But realistically, when you find out a submitter had malicious intent, I think it's 100% correct to revisit any and all associated submissions since it's quite a different thing to inspect code for correctness, style, etc. as you would in a typical code review process versus trying to find some intentionally obfuscated security hole.

And, frankly, who has time to pick the good from the bad in a case like this? I don't think it's an overreaction at all. IMO, it's a reasonable simplification to assume that all associated contributions may be tainted.


Why? Linux is not the state. There is no entitlement to rights or presumption of innocence.

Linux is set up to benefit the Linux development community. If UMinn has basically no positive contributions, a bunch of neutral ones, and some negative ones, banning seems the right call.

It's not about fairness; it's about whether the harms outweigh the benefits.


Not only that, good faith actors who are associated with UMN can still contribute, just not in their official capacity as UMN associates (staff, students, researchers, etc).


> Since this researcher is apparently not an established figure in the kernel community, my expectation is the patches have gone through the most rigorous review process

I think the best way to make this expectation a reality is to put in the work. The second-best way is to pay. Doing neither while holding the expectation is certainly a way to exist, but it has no impact on the outcome.


> seems to suggest a lack of confidence in the kernel review process

The reviews were done by kernel developers who assumed good faith. That assumption has been proven false. It makes sense to review the patches again.


I mean, it's the Linux kernel. Think about what it's powering and how much risk is involved with these patches. Review processes obviously aren't perfect, but usually patches aren't constructed to sneak sketchy code through. You'd usually approach a review in good faith.

Given that some patches may have made it through with holes, you pull them and re-approach them with a different mindset.


> You'd usually approach a review in good faith.

> it's the linux kernel. Think about what it's powering and how much risk there is involved with these patches

Perhaps the mindset needs to change regarding security? Actual malicious actors seem unlikely to announce themselves for you.


Doesn't this basically prove the original point that if someone or an organization wished to compromise Linux, they could do so with crafted bugs in patches?


Just wanted you to know that I think you're an amazing programmer


This might not be on purpose. If you look at their article, they're studying how to introduce bugs that are hard to detect, not ones that are easy to detect.


> Thanks for the support.

THANK YOU! After reading the email chain, I have a much greater appreciation for the work you do for the community!


My deepest thanks for all your work, as well as for keeping the standards high and the integrity of the project intact!


I would be interested to know how many committers actually work for private or state intelligence agencies.


you know what they say, curiosity killed the cat


Well, you or whoever was the responsible maintainer completely failed in reviewing these patches, which is your whole job as a maintainer.

Just reverting those patches (which may well be correct) makes no sense. You and/or other maintainers need to properly review them after your previous abject failure to do so, determine whether they are correct, and, if they aren't, work out how they got merged anyway and how you will stop this from happening again.

Or I suppose step down as maintainers, which may be appropriate after a fiasco of this magnitude.


On the contrary, it would be the easy, lazy way out for a maintainer to say “well this incident was a shame now let’s forget about it.” The extra work the kernel devs are putting in here should be commended.

In general, it is the wrong attitude to say, oh we had a security problem. What a fiasco! Everyone involved should be fired! With a culture like that, all you guarantee is that people cover up the security issues that inevitably occur.

Perhaps this incident actually does indicate that kernel code review procedures should be changed in some way. I don’t know, I’m not a kernel expert. But the right way to do that is with a calm postmortem after appropriate immediate actions are taken. Rolling back changes made by malicious actors is a very reasonable immediate action to take. After emotions have cooled, then it’s the right time to figure out if any processes should be changed in the future. And kernel devs putting in extra work to handle security incidents should be appreciated, not criticized for their imperfection.


Greg explicitly stated "Because of this, all submissions from this group must be reverted from the kernel tree and will need to be re-reviewed again to determine if they actually are a valid fix....I will be working with some other kernel developers to determine if any of these reverts were actually valid changes, were actually valid, and if so, will resubmit them properly later. For now, it's better to be safe."



