Specifically, I think the three malicious patches described in the paper are:
- UAF case 1, Fig. 11 => crypto: cavium/nitrox: add an error message to explain the failure of pci_request_mem_regions, https://lore.kernel.org/lkml/20200821031209.21279-1-acostag.... The day after this patch was merged into a driver tree, the author suggested calling dev_err() before pci_disable_device(), which was presumably their attempt at maintainer notification; however, the code as merged does not appear to constitute a vulnerability, because pci_disable_device() doesn't appear to free the struct pci_dev.
- UAF case 2, Fig. 9 => tty/vt: fix a memory leak in con_insert_unipair, https://lore.kernel.org/lkml/20200809221453.10235-1-jameslou... This patch was not accepted.
- UAF case 3, Fig. 10 => rapidio: fix get device imbalance on error, https://lore.kernel.org/lkml/20200821034458.22472-1-acostag.... Same author as case 1. This patch was not accepted.
This is not to say that open-source security is not a concern, but IMO the paper is deliberately misleading in an attempt to overstate its contributions.
edit: wording tweak for clarity
Welcome to academia, where a large number of students are doing it just for the credentials.
Immigrant graduate students with an uncertain future if they fail? Check.
Vulnerable students whose livelihood is at mercy of their advisor? Check.
Advisor whose career depends on a large number of publication bullet points in their CV? Check.
Students who cheat their way through to publish? Duh.
Edit: Oh, now I get it, you clever person you. Only took an hour, ha.
It's good to call out bad incentive structures, but by feigning surprise you're implying that we shouldn't imagine a world where people behave morally when faced with an incentive/temptation.
I don't think it's fair to say "by feigning surprise you're implying..." That seems to be putting words in GP's mouth. Specifically, they didn't say that we shouldn't imagine a better world. They were only describing one unfortunate aspect of today's academic world.
Here is a personal example of feigned surprise. In November 2012 I spent a week at the Google DC office getting my election results map ready for the US general election. A few Google engineers wandered by to help fix last-minute bugs.
One young engineer was not pleased when he found out about this. He took a long slow look at my name badge, sighed, and looked me in the eye: "Michael... Geary... ... You... use... TABS?"
That's feigned surprise.
(Coda: I told him I was grateful for his assistance, and to feel free to indent his code changes any way he wanted. We got along fine after that, and he ended up making some nice contributions.)
We'd also have to agree on what "behave morally" means, and this is impossible even at the most basic level.
Question for legal experts:
Hypothetically, if these patches had been accepted and were exploited in the wild, and one could prove the exploitation resulted from the vulnerabilities these patches introduced, could the university or professor be successfully sued for damages in a U.S. court? Or would they be shielded by some education/research/academia protection, if any exists?
Also, 18 U.S.C. § 1030(a)(5)(A) does not care about the software license. Any intentionally added vulnerability counts. Federal cybercrime laws are not known for being terribly understanding…