Feasibility of stealthily introducing vulnerabilities in open source software [pdf] (github.com/qiushiwu)
54 points by etxm 24 days ago | 31 comments





> We believe that an effective and immediate action would be to update the code of conduct of OSS, such as adding a term like, "by submitting the patch, I agree to not intend to introduce bugs."

Do the authors of this study honestly believe that the reason malicious actors intentionally introduce security vulnerabilities in software is that the "code of conduct of OSS" doesn't prohibit it? Do the malicious actors read the code of conduct and think, "Oops, I can't be malicious here, I'll try somewhere else"?


The only people a term like that could stop would be the people behind this research. Maybe. Or maybe it'll give this guy an excuse to pad his resume with another paper entitled "Feasibility of ignoring requests asking me, specifically, to piss off."


I have been in meetings about FOSS projects where someone had done something obnoxious, and when a response was proposed, someone in the room said, "But we have no statement disallowing what they did."


Yeah, it's all theater. A few weeks ago I opened a business banking account and the banker asked me what my company did, and I told her "I traffic underage girls, and sell methamphetamines". Woooo boy, she did not like that. It was an obvious joke and she knew it, but honestly, what criminal is going to tell the truth about their ill-gotten gains?


> Do the malicious actors read the code of conduct and think, "Oops, I can't be malicious here, I'll try somewhere else"?

Nope, but it will protect you from researchers with questionable ethics.


How does a code of conduct prevent that?


https://www.phoronix.com/scan.php?page=news_item&px=Universi...

I'm shocked that it had to come to this, but if the kernel developers deem it necessary to remove every commit from the university and ban them from committing, something has gone horribly wrong.

> Academic research should NOT waste the time of a community.

https://lore.kernel.org/linux-nfs/3B9A54F7-6A61-4A34-9EAC-95...

Agree 100%


I wonder if the people involved in approving and conducting this research are aware of the ACM's Code of Ethics. I can see pretty clear conflicts with at least two or three of the code's ethical principles. This seems to be a pretty serious breakdown, not only in the researchers' understanding of their ethical responsibilities, but also in the review and approval of research projects.


I wonder how the IRB (Institutional Review Board) approved this paper with regard to ethical concerns. This is obviously research on humans, and the subjects didn't give their consent for that.


"It's just software"

IRBs are typically more equipped for biological/psychological research and likely wouldn't have the technical chops to see past the software to the real world, especially if it was presented to them inadequately.


I’ve only used IRBs in medical research, but my gf routinely seeks approval for her sociological research.

Did they even seek IRB approval? Being from the CS department, they might not have even considered it.


Right, my point wasn't that what I listed was the universe, but rather that "software development" could be outside their wheelhouse and could easily be presented poorly to them. I'm sure they'd do a fine job identifying HCI as human subjects research, too (and likely regularly do so).


Also, aren't IRBs normally mostly focused on the ethical implications for immediate subjects? Like, obviously if you have human test subjects, the ethics for those participants are considered. But the ethical issue in this research is not about the project maintainers that are directly involved, but about the downstream users who are impacted. It's more difficult to have a process to review/approve impacts of this sort, because they're typically not observable.


Though I disagree with the research in general, if you did want to research "hypocrite commits" in an actual OSS setting, there isn't really any way to do it other than actually introducing bugs, per their proposal.

That being said, I think it would've made more sense for them to have created some dummy complex project for a class and have, say, 80% of the class introduce "good code", 10% of the class review all code, and 10% of the class introduce these "hypocrite" commits. That way you could do similar research without potentially breaking legit code that's in use.

I say this since the crux of what they're trying to discover is:

1. In OSS anyone can commit.

2. Though people are incentivized to reject bad code, complexities of modern projects make 100% rejection of bad code unlikely, if not impossible.

3. Malicious actors can take advantage of (1) and (2) to introduce code that does both good and bad things such that an objective of theirs is met (presumably putting in a back-door). A hypothetical sketch of what such a commit might look like is below.
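To make (3) concrete, here's a hypothetical, simplified sketch of a "hypocrite commit" (my own illustration, not one of the paper's actual patches): adding a missing allocation-failure check looks like a legitimate cleanup, but the new error path frees memory the caller still owns, so the caller's existing cleanup turns into a double free.

    /* Hypothetical illustration only -- not taken from the paper's patches. */
    #include <stdlib.h>
    #include <string.h>

    struct conn {
        char *buf;
    };

    static int conn_setup(struct conn *c)
    {
        c->buf = malloc(4096);
        if (!c->buf) {   /* the plausible-looking "fix"... */
            free(c);     /* ...quietly frees memory the caller still owns */
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        struct conn *c = malloc(sizeof(*c));
        if (!c)
            return 1;
        if (conn_setup(c) != 0) {
            free(c);     /* caller's existing cleanup: now a double free */
            return 1;
        }
        strcpy(c->buf, "ok");
        free(c->buf);
        free(c);
        return 0;
    }

A reviewer who only skims the diff sees an added error check; the ownership violation only shows up if they also trace every caller's cleanup path, which is exactly the review burden this kind of commit exploits.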


No, there's an ethical way to do it. You get the permission of the top people in the project you want to test (in this case, that would be Linus and Greg K-H), but the more junior maintainers aren't informed. You then see which bad changes are caught and which aren't, but you have a backstop to prevent malicious code from actually getting into a release. And even then, you leave it to the people in charge of the project to decide whether they think this would be a worthwhile test or a waste of time.

To do otherwise is completely unethical experimentation on unwilling human subjects, plus a risk that if you "succeed" (in sneaking something by) you have harmed the public.


'Red team' exercises happen ethically all the time. You get permission from people in charge (of either a project or a company), you agree on the ground rules, and then you do the exercise. They don't have to tell everyone in the group, but you have to tell certain key people and get their buy in.


There's no good reason this couldn't have been done here, though not all OSS projects are set up in a way to make this easy.


"In further research, we demonstrate it is possible to issue a denial-of-service against a community potluck event by eating all the food ourselves."


I wonder how hard their CS department's rankings are going to drop, and how much funding they're going to end up losing over this.

Getting banned from committing to the most important and critical open source project out there cannot be good for a university.


I doubt not being able to contribute to the Linux kernel will affect their CS department's rankings. Maybe it'll affect their reputation in the short term.

The teaching hasn't changed, the CS field is a very large space, and the Linux kernel is a prominent but small part of it. That's not to say the Linux kernel is insignificant, just that the CS field is very big.

Personally, I see this as a group of researchers going about it outside the well-understood ethical patterns for pentesting and getting punished for it. I think it's necessary for the university to be made an example of, but I don't think it necessarily reflects badly on the whole CS department. Hopefully the IRB does a postmortem of this and concludes that ethical review in general also needs field-specialist input.


And it has gotten the University banned from contributing to the kernel. Hope it was worth it.


How do they ban a university anyway? Haven't they read the research paper about changing your email address?


Their next paper: 'How alienating the open-source community led us to a career in academia'


This was a pretty brazen breach of responsibility by these researchers. The fact that they exposed end users to risk and appear not to have clued in the upper levels of kernel development were serious lapses. While codes of ethics and ethical review exist, it doesn't appear that there is much in the way of help with experimental design that could have helped the researchers do this in a smarter way from the start.

That said, I believe the punishment for the failing here should be measured. I don't think they should outright fire a professor for doing this, though a severe reprimand is in order. Also, banning an entire university could probably be toned down a bit.

The end result of this will hopefully be much more in-depth code review, better tests, better fuzzing, and more deployment of static analysis tools that can catch errors like this.


So the NSF funded some wholly unethical research that does nothing other than prove, as stated in the conclusion, that the openness of open source means it’s doomed to be forever insecure. What a horrible moment.


I would prefer to see research on the known incidents where things like this have happened in the wild. AFAICT the most common route for maliciously introducing vulnerabilities is through dependencies: old npm libraries getting taken over by people who introduce cryptocurrency miners, that sort of thing. When a pull request fixes a real bug and also bumps the version number of some dependency, how often does the reviewer really analyze the new version of the dependency to see if it contains anything malicious?


Reading the paper, it does attempt to address concerns about ethics (none of the patches got past the email stage; they never got into git) and time-wasting (although reviewing the emails will have taken community time, they point out that they did also fix some real code issues). See section "VI-A, Ethical considerations".


The basic answer to that appears to be disputed: while the paper claims nothing got past the email stage, the lkml appears to be talking about removing patches from stable today. https://lore.kernel.org/linux-nfs/YIAy1tH0miFxEJEk@unreal/


Prompted to download a PDF, assumed it was 5d chess malware, didn't read \s



