If there is a commonly used open source library without hackable bugs, you won't even hear about the author who devoted their own time to building reliable software.
If someone finds a bug, she will get some prize and will be invited to a conference. And the library author will be publicly bashed as an idiot.
Sometimes open source people don't even get a mention.
I was working on a patch for a huge open source project once. I spent hours on it. Two other people helped me, and they also spent significant time on it. We managed to implement it. Who was mentioned in the release changelog? The person who committed it. After that I stopped spending my precious time on things like that, where someone else gets the credit for my work. I love programming, so I work on my own projects instead.
And all that makes me sad.
I think you've misunderstood what's happening here. Zerodium, the company mentioned in this article, is an exploit broker. They buy vulnerabilities from researchers, then sell them on to government intelligence agencies. The entire purpose of their business is to undermine the security of the tools we use.
Bug bounties are a response to this trade in exploits. They incentivise researchers to publish vulnerabilities rather than selling them to spies. They're a necessary evil to keep zero-day vulnerabilities out of the hands of oppressive regimes. It's not nice, but that's just the world we live in.
Large companies that rely on open source software have started to understand the importance of financially supporting OSS development, largely as a result of the Heartbleed crisis. The Linux Foundation's Core Infrastructure Initiative has created a secure financial foundation for critical open source projects.
or provide an opportunity for the original developers to introduce an obscure backdoor and cash out
Was there any instance of this? Are there disincentives against it? (I guess the entity offering the bounty could say that only code released before a given date is eligible, though malicious contributors can very well guess that there will be future bug bounties too.)
I believe some time ago there was news surrounding backdoored crypto. Also, on the low-level side of things, there was a secret rootkit in Street Fighter that allowed for an EoP (elevation of privilege).
It's not only Zerodium; there are a lot of government contractors who buy/fund attack research, especially in things like Theoretical Cryptography, Machine Learning, Computer Vision, and Formal Verification.
> They incentivise researchers to publish vulnerabilities rather than selling them to spies. They're a necessary evil to keep zero-day vulnerabilities out of the hands of oppressive regimes. It's not nice, but that's just the world we live in.
I think it's quite interesting that we don't see bug bounties for things like Theoretical Cryptography (e.g. quantum-safe encryption), Formal Verification, and the like. But haven't there been cases where bug bounties have been subverted for evil or are just broken entirely?
> The Linux Foundation's Core Infrastructure Initiative has created a secure financial foundation for critical open source projects.
For critical open source projects, hasn't there been an increase in Formal Verification and more theoretical approaches to security?
The history of security improvements in UNIX-derived OSes is one of building band-aids to work around it.
That was handled very poorly by the open-source project. I think all projects need something like kentcdodds's all-contributors guidelines. It does require additional tooling, and there is definitely extra care needed in code review to merge pull requests, but it makes all contributors feel good when they look back at the effort of all the contributions. I experienced this first hand when I contributed to one of his open source modules: at first the repo tooling didn't let me submit the code, and I said to myself "well this is stupid, I just need the code to be merged ASAP", but after a few minutes I figured it out and the PR got accepted. Now I can actually go back and say "hey, look at my face in the Readme of the repo, that's me, yay!", which sounds stupid but I assure you I won't hesitate to contribute again.
This falls apart pretty quickly. You're assuming that people writing open source software WANT money or WANT fame. If they want those things, then they should ensure they go about it the proper way. As with nearly everything in life, nobody is just going to hand it to you.
As for people finding exploits being "bad". If I volunteer to build a playground, then forget lag bolts on the walkway, is the person who reports the missing bolts bad? Or are they GOOD because they informed someone who could fix it before someone got hurt?
Finding exploits, and paying for exploits, isn't a bad thing. There's a reason we have inspectors. What you can very much argue is a bad thing are companies like Zerodium who use those exploits to intentionally harm everyone else. It would be the equivalent of a lawyer hiring an inspector to review every public playground he could find so that he could file lawsuits.
I'm just saying that the situation is sad. So there were people who built a playground. And then there were people who found bugs in the design or the implementation. The sad situation is that people will make heroes only from those who found bugs. They even want to pay for that. And for the work of those who built that? Seems like they will be forgotten, or blamed for the bugs.
I'd rather see both groups treated the same way.
> Finding exploits, and paying for exploits isn't a bad thing.
I agree. What is bad is paying only for the exploits, totally forgetting about all the people who worked on building the code.
Just imagine that you are paying for a house to be built, but you will pay only for the problems that are found. I think that in a couple of months you will get buildings full of problems, and then the builders will find them and get paid. That would be quite terrible.
I've been looking at open source communities, especially in the vulnerability research space, and it seems there's been a lot of favoritism towards attack-oriented research from the community.
Come on, that's not true. There was a security audit of dovecot a while back, and it turned out to be a really well written piece of software; everybody complimented the authors and had kind words for them. Same for ssh and a lot of other OSS.
With open-source developers being badly paid, large numbers of relatively unknown contributors with little to lose (ie no reputation, no criminal charges, no repercussions whatsoever), and major corporations not caring enough about Uncle Sam to spend $$ to shield open source software they use, who will stop this kind of decay?
It used to be that the stakes were low, the developing community was small, and the amount of software was manageable.
If someone introduced a zero-day, sooner or later they would be caught and kicked out. Few people cared about breaking into this software, so some donated personal effort was adequate to shield against those intruders.
Now if you can remotely compromise Debian or Ubuntu you have millions of servers in your hands and potentially hundreds of millions worth of private data. I don't see how this can be stopped.
Years ago at Ruxcon in Melbourne, this came up in a panel discussion. One of the members, Ranty Ben, talked about how exploit sales were part of his career/income.
The talk was originally here, but it seems to be gone now: https://www.youtube.com/watch?v=xlJ1DQdjVHM
So how much is Zerodium getting for these zero-days? If all you care about is money, then aside from the fear that you might end up getting investigated by one of these agencies, why wouldn't you sell directly to them for more money than Zerodium is giving you?
Otherwise, if you actually care about the security of systems, disclose it to the developers, give them reasonable time to fix/patch, and submit it as a CVE.
The same attractions would apply as when they're dealing with other government contractors rather than individual actors.
The really sad thing here is that those exploits might be put to use against the people of the world.
So I started methodically learning cybersecurity. I ended up writing this comment about a year or two in:
> It's surprising how bad cyber security is, but so much of it is right there in the pages of this [U Waterloo textbook]. It's like finding out you can buy a Patriot missile for $250 and some spare time in the evenings.
Over the past two years I think the world has started to understand where we are headed, but there are no easy solutions. Many of the problems we have are the same ones we've had since the dawn of the NSA. Computers need operating systems. Operating systems are really fucking hard to make completely secure, let alone completely secure and borderline usable. If you think $500k for 0days is high, you ain't seen nothing yet. When autonomous systems run everything, or when greater numbers of Wall St traders start realizing what they can do with an 0day and some outlying put options, we're going to see 0days worth tens of millions as the arms race heats up.
The problem with cybersecurity tools (and AI / autonomous systems) is that it's all dual use. The tools you probe your own server with are the same ones you can probe others with. Worse: we can't even control the export of attack tools, because it's all essentially just data. You can't stop people from memorizing code snippets or facts.
Even so, we need to instil fear into people in the West. We need to limit who they're legally allowed to sell the vulns to. Allied states: Yes. Defence interested parties: Yes. Some cybergang: Fuck no. We need to deny travel visas to the direct family members of other individuals in unaligned states that sell 0days to the worst actors.
It won't stop bad actors from getting these bugs, but it will make it significantly more expensive for them to do so, and at the end of the day war and crime are economic concerns as much as they are political.
So would that go to researchers who work on more theoretical areas (with real-world implications), things such as Static Analysis or Formal Verification?
Often it is a situation where multiple processes are working together and there is a way to trick a privileged process into modifying memory in a way it shouldn't.
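To make that concrete, here's a minimal sketch (my own illustration, not taken from any of the bugs discussed above) of one classic shape this takes: a double fetch. A "privileged" process validates a length field sitting in shared memory, but reads it again when it actually copies the data, so an unprivileged process racing on the same memory can change the value between the check and the use and make the privileged process overwrite its own memory. All the names here (struct request, handle_request, SAFE_MAX) are made up for the example.

    /* Double-fetch race sketch: the check and the use read the shared
       length field separately, so an attacker can change it in between. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SAFE_MAX 64

    struct request {                 /* lives in attacker-writable shared memory */
        volatile size_t len;
        char data[256];
    };

    static void handle_request(struct request *req)
    {
        char buf[SAFE_MAX];

        if (req->len > SAFE_MAX)     /* first fetch: the check */
            return;
        usleep(1000);                /* window in which the attacker can race */
        memcpy(buf, req->data, req->len);   /* second fetch: the use */
        printf("copied %zu bytes\n", (size_t)req->len);
    }

    int main(void)
    {
        struct request *req = mmap(NULL, sizeof(*req), PROT_READ | PROT_WRITE,
                                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (req == MAP_FAILED)
            return 1;

        if (fork() == 0) {           /* "unprivileged" process: flip len forever */
            for (;;) {
                req->len = 16;                  /* passes the check */
                req->len = sizeof(req->data);   /* 256 > SAFE_MAX: overflows buf */
            }
        }

        /* "privileged" process: eventually loses the race and memcpy
           writes past buf, corrupting its own stack. */
        memset(req->data, 'A', sizeof(req->data));
        req->len = 16;
        for (;;)
            handle_request(req);
    }

The usual fix is to copy the untrusted fields out of shared memory exactly once (or copy the whole request into private memory first) and do all validation and use on that private copy.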