
I don't think the attack described in the paper actually succeeded at all, and in fact the paper doesn't seem to claim that it did.

Specifically, I think the three malicious patches described in the paper are:

- UAF case 1, Fig. 11 => crypto: cavium/nitrox: add an error message to explain the failure of pci_request_mem_regions, https://lore.kernel.org/lkml/20200821031209.21279-1-acostag.... The day after this patch was merged into a driver tree, the author suggested calling dev_err() before pci_disable_device(), which was presumably their attempt at maintainer notification. However, the code as merged doesn't actually appear to constitute a vulnerability, because pci_disable_device() doesn't appear to free the struct pci_dev (see the sketch below this list).

- UAF case 2, Fig. 9 => tty/vt: fix a memory leak in con_insert_unipair, https://lore.kernel.org/lkml/20200809221453.10235-1-jameslou... This patch was not accepted.

- UAF case 3, Fig. 10 => rapidio: fix get device imbalance on error, https://lore.kernel.org/lkml/20200821034458.22472-1-acostag.... Same author as case 1. This patch was not accepted.
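
For reference, here is roughly what the merged error path in case 1 looks like. This is a hedged sketch, not the literal merged code: the PCI helpers are the real kernel API, but the probe function itself is simplified and illustrative.

  #include <linux/pci.h>

  /* Hedged sketch of the nitrox probe error path; simplified,
   * not the literal merged patch. */
  static int nitrox_probe_sketch(struct pci_dev *pdev)
  {
      int err;

      err = pci_enable_device_mem(pdev);
      if (err)
          return err;

      err = pci_request_mem_regions(pdev, "nitrox");
      if (err) {
          pci_disable_device(pdev);
          /* pci_disable_device() disables the device but does not
           * free the struct pci_dev, so touching pdev here is not
           * a use-after-free. */
          dev_err(&pdev->dev, "Failed to request mem regions!\n");
          return err;
      }

      /* rest of probe elided */
      return 0;
  }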

This is not to say that open-source security is not a concern, but IMO the paper is deliberately misleading in an attempt to overstate its contributions.

edit: wording tweak for clarity




> the paper is deliberately misleading in an attempt to overstate its contributions.

Welcome to academia, where a large number of students are doing it just for the credentials.


What else do you expect? The incentive structure in academia pushes students to do this.

Immigrant graduate students with an uncertain future if they fail? Check.

Vulnerable students whose livelihood is at the mercy of their advisor? Check.

Advisor whose career depends on a large number of publication bullet points in their CV? Check.

Students who cheat their way through to publish? Duh.


The ethics in big-lab science may be as dire as you say, but my general impression is that the publication imperative hasn't been driving as much unethical behaviour in computer science. This strikes me as particularly cynical behaviour by the standards of the field, and I think the chances are good that the article will get retracted.


FWIW, Qiushi Wu's USENIX speaker page links to a presentation with Aditya Pakki (and Kangjie Lu), but has no talk with the same set of authors as the paper above.

https://www.usenix.org/conference/usenixsecurity19/speaker-o...


Can I cite your comment in exchange for a future citation?


Sure?

Edit: Oh now I get it you clever person you. Only took an hour ha.


Feigning surprise isn't helpful.

It's good to call out bad incentive structures, but by feigning surprise you're implying that we shouldn't imagine a world where people behave morally when faced with an incentive/temptation.


I dislike feigned surprise as much as you do, but I don't see it in GP's comment. My read is that it was a slightly satirical checklist of how academic incentives can lead to immoral behavior and sometimes do.

I don't think it's fair to say "by feigning surprise you're implying..." That seems to be putting words in GP's mouth. Specifically, they didn't say that we shouldn't imagine a better world. They were only describing one unfortunate aspect of today's academic world.

Here is a personal example of feigned surprise. In November 2012 I spent a week at the Google DC office getting my election results map ready for the US general election. A few Google engineers wandered by to help fix last-minute bugs.

Google's coding standards for most languages, including JavaScript (and even Python!), mandate two-space indents. This map was sponsored by Google and featured on their site, but it was my open source project and I followed my own standards.

One young engineer was not pleased when he found out about this. He took a long slow look at my name badge, sighed, and looked me in the eye: "Michael... Geary... ... You... use... TABS?"

That's feigned surprise.

(Coda: I told him I was grateful for his assistance, and to feel free to indent his code changes any way he wanted. We got along fine after that, and he ended up making some nice contributions.)


Why should we imagine this world? We have no reason to believe it can exist. People are basically chimps, but just past a tipping point or two that enable civilization.

We'd also have to agree on what "behave morally" means, and this is impossible even at the most basic level.


Usually "behave morally" means "behave in a way the system ruling over you deems best to indoctrinate into you so you perpetuate it". No, seriously, that's all there is to morality once you invent agriculture.


Thank you.

Question for legal experts,

Hypothetically, if these patches had been accepted and then exploited in the wild, and one could prove the exploits resulted from the vulnerabilities these patches introduced, could the university or professor be successfully sued for damages in a U.S. court? Or would they be shielded by some education/research/academia protection, if any such thing exists?


Not an attorney, but the kernel is likely shielded from liability by its license. Maybe the kernel project could sue the contributors for damaging the project, but I don't think end users could.


Malicious intent or personal gain can negate that sort of protection in civil torts.

Also, 18 U.S. Code § 1030(a)(5)(A) does not care about the software license. Any vulnerability intentionally added to code counts. Federal cybercrime laws are not known for being terribly understanding…


The license is a great catch, thank you. Does the kernel project enter into a separate contract with its contributors?


I literally LOL'd at "James Louise Bond"



