Rewards of Up to $500K Offered for FreeBSD, OpenBSD, NetBSD, Linux Zero-Days (bleepingcomputer.com)
75 points by ax00x 18 days ago | 40 comments



This makes me sad. People working on open source projects get nothing. Sometimes they get some money. Sometimes they get some fame. People who don't build anything, but find a hole, they are heroes, they get prizes, they are worshiped.

If there is a commonly used open source library without hackable bugs, you won't even hear about the author who committed his/her own time to build reliable software.

If someone finds a bug, then she will get some prize, and will be invited to a conference. And the library author will be publicly bashed as an idiot.

Sometimes open source people don't even get mentions.

I was working on a patch for a huge open source project once. I spent hours on that. Two other people helped me, they also spent some significant time on that. And we managed to implement this. Who was mentioned in the release changelog? The person who committed that. Then I stopped spending my precious time on such things like giving someone the credits for my work. I love programming, I work on my own projects instead.

And all that makes me sad.


>This makes me sad. People working on open source projects get nothing. Sometimes they get some money. Sometimes they get some fame. People who don't build anything, but find a hole, they are heroes, they get prizes, they are worshiped.

I think you've misunderstood what's happening here. Zerodium, the company mentioned in this article, is an exploit broker. They buy vulnerabilities from researchers, then sell them on to government intelligence agencies. The entire purpose of their business is to undermine the security of the tools we use.

Bug bounties are a response to this trade in exploits. They incentivise researchers to publish vulnerabilities rather than selling them to spies. They're a necessary evil to keep zero-day vulnerabilities out of the hands of oppressive regimes. It's not nice, but that's just the world we live in.

Large companies that rely on open source software have started to understand the importance of financially supporting OSS development, largely as a result of the Heartbleed crisis. The Linux Foundation's Core Infrastructure Initiative has created a secure financial foundation for critical open source projects.


> They buy vulnerabilities from researchers...

or provide an opportunity for the original developers to introduce an obscure backdoor and cash out


That's an interesting take on the situation.

Was there any instance of this? Are there disincentives against this? (I guess the entity offering the bounty could say, only software released before this day is available. Though malicious contributors can very certainly guess that there will be other future bug bounties too.)


> Was there any instance of this? Are there disincentives against this? (I guess the entity offering the bounty could say, only software released before this day is available. Though malicious contributors can very certainly guess that there will be other future bug bounties too.)

I believe some time ago there was news surrounding backdoored crypto. Also, on the low-level side of things, there was a secret rootkit driver shipped with Street Fighter that allowed for an EoP (elevation of privilege):

https://github.com/FuzzySecurity/Capcom-Rootkit

https://www.blackhat.com/docs/eu-17/materials/eu-17-Filiol-B...


Impossible. Our egos are too high to allow bugs in our code.


Higher than a pile of banknotes that together make half a million dollars? I doubt that.


> I think you've misunderstood what's happening here. Zerodium, the company mentioned in this article, is an exploit broker. They buy vulnerabilities from researchers, then sell them on to government intelligence agencies. The entire purpose of their business is to undermine the security of the tools we use.

It's not only Zerodium; there are a lot of government contractors who buy or fund attack research, especially in areas like theoretical cryptography, machine learning, computer vision, and formal verification.

> They incentivise researchers to publish vulnerabilities rather than selling them to spies. They're a necessary evil to keep zero-day vulnerabilities out of the hands of oppressive regimes. It's not nice, but that's just the world we live in.

I think it's quite interesting that we don't see bug bounties for things like theoretical cryptography (e.g. quantum-safe encryption), formal verification, and the like. But haven't there been cases where bug bounties have been subverted for evil, or are just broken entirely?

> The Linux Foundation's Core Infrastructure Initiative has created a secure financial foundation for critical open source projects.

For critical open source projects, hasn't there been an increase in formal verification and more theoretical approaches to security?


I did understand. That was just my thought on the whole situation, where you can earn money by finding bugs, but not by writing reliable software.


Companies like Zerodium profit from the languages many insist on using for writing such systems.

The history of security improvements in UNIX-derived OSes is one of building band-aids to work around that choice.


> Who was mentioned in the release changelog? The person who committed that. Then I stopped spending my precious time on such things like giving someone the credits for my work. I love programming, I work on my own projects instead.

That was handled very poorly by the open-source project. I think all projects need something like kentcdodds's all-contributors [0] guidelines. It does require additional tooling, and there is definitely extra care needed in code review to merge pull requests, but it makes all contributors feel good when they look back at the effort of all contributions. I experienced this first hand when I contributed to one of his open source modules: the repo tooling didn't let me submit the code, and I said to myself, "well, this is stupid, I just need the code to be merged ASAP", but after a few minutes I figured it out and the PR got accepted. Now I can actually go back and say "hey, look at my face in the Readme of the repo, that's me, yay!", which sounds silly, but I assure you I won't hesitate to contribute again.

[0] https://github.com/kentcdodds/all-contributors


>This makes me sad. People working on open source projects get nothing. Sometimes they get some money. Sometimes they get some fame.

This falls apart pretty quickly. You're assuming that people writing open source software WANT money or WANT fame. If they want those things, then they should ensure they go about it the proper way. As with nearly everything in life, nobody is just going to hand it to you.

As for people finding exploits being "bad": if I volunteer to build a playground, then forget the lag bolts on the walkway, is the person who reports the missing bolts bad? Or are they GOOD because they informed someone who could fix it before someone got hurt?

Finding exploits, and paying for exploits, isn't a bad thing. There's a reason we have inspectors. What you can very much argue is bad are companies like Zerodium, who use those exploits to intentionally harm everyone else. It would be the equivalent of a lawyer hiring an inspector to review every public playground he could find so that he could file lawsuits.


It seems like you read something else then.

I'm just saying that the situation is sad. So there were people who built a playground. And then there were people who found bugs in the design or the implementation. The sad situation is that people will make heroes only from those who found bugs. They even want to pay for that. And for the work of those who built that? Seems like they will be forgotten, or blamed for the bugs.

I'd rather see both groups treated the same way.

> Finding exploits, and paying for exploits isn't a bad thing.

I agree. What is bad is paying only for the exploits, totally forgetting about all the people who worked on building the code.

Just imagine that you are paying for a house to be built, but you only pay for the problems that are found. I think that within a couple of months you would get buildings full of problems, and then the builders would find them and get paid. That would be quite terrible.


Bash had the Shellshock bug for over 25 years before someone found it. No physical analogy for that comes to mind, but we can pretend, I suppose.
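For anyone curious, that bug is easy to demonstrate: the classic one-line probe for CVE-2014-6271 smuggles a command after an exported function definition. On a patched bash this prints only "test"; a vulnerable bash would also print "vulnerable". This is just the harmless diagnostic string, not an exploit:

```shell
# Shellshock probe: pre-2014 bash executed the trailing `echo vulnerable`
# while importing the function definition stored in the variable x.
env x='() { :;}; echo vulnerable' bash -c 'echo test'
```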


> This makes me sad. People working on open source projects get nothing. Sometimes they get some money. Sometimes they get some fame. People who don't build anything, but find a hole, they are heroes, they get prizes, they are worshiped.

I've been looking at open source communities, especially in the vulnerability research space, and it seems there's been a lot of favoritism from the community towards attack-oriented research.


> If there is a commonly used open source library without hackable bugs, you won't even hear about the author who committed his/her own time to build reliable software.

Come on, that's not true. There was a security test done on Dovecot a while back, and it turned out to be a really well-written piece of software; everybody complimented the authors and had kind words for them. Same for ssh and a lot of other OSS.


It seems like the people who work on the code bases would know vulnerabilities better than anyone. Couldn't this provide an opportunity?


That's kind of a perverse incentive to be less careful at first and cash in later


As predicted by Scott Adams some twenty-three years ago :-)

http://dilbert.com/strip/1995-11-13


Anyone who has ever placed in the Underhanded C Contest should be automatically disqualified from committing code. You know, "your hands are deadly weapons" type of exception. /s


Nah, you just review what they submit like everyone else. If you block them, they'll do the contest under an alias or use different names for contributions. Plus, they're really smart folks who might bring a lot of value to the project.


Sure, they put them intentionally there :)


for 500k they might as well. It's an ugly practice.


It would be nice to have the same level of rewards for people/projects with a proven track record regarding security. If you are willing to give bounties to bug hunters, you might just as well give them to developers for not introducing the bugs in the first place.


There is a zero-sum trade-off between exploit brokers and the public's interests. I don't mind for-profit businesses, especially large ones, paying for exploits in their own products. Open source projects such as the BSDs, however, can't afford it, and they donate their work to the public for public benefit. For those systems, selling and buying exploits rather than reporting them to the devs is unethical, IMHO, for both the exploit discoverer and the broker.


Can we fix this kind of behavior without spiraling down to a money-spending competition? (Which the open source community clearly cannot win?)

With open-source developers being badly paid, large numbers of relatively unknown contributors with little to lose (i.e. no reputation, no criminal charges, no repercussions whatsoever), and major corporations not caring enough about Uncle Sam to spend $$ shielding the open source software they use, who will stop this kind of decay?

It used to be that the stakes were low, the developing community was small, and the amount of software was manageable. If someone introduced a zero-day, they would sooner or later be caught and kicked out. Few people cared about breaking into this software, so some donated personal effort was adequate to shield against those intruders.

Now if you can remotely compromise Debian or Ubuntu you have millions of servers in your hands and potentially hundreds of millions worth of private data. I don't see how this can be stopped.


I find the tables showing the payouts for exploits of different platforms especially interesting. They give us insight into the supply of and demand for exploits:

https://www.bleepstatic.com/images/news/u/986406/attacks/Zer...

https://www.bleepstatic.com/images/news/u/986406/attacks/Zer...


This gets into the entire controversial business of selling exploits to "Security companies." Often these companies are just brokers, sometimes selling to states, but also to criminals.

Years ago at Ruxcon in Melbourne, this came up in a panel discussion. One of the members, Ranty Ben, talked about how exploit sales were part of his career/income.

The talk was originally here, but it seems to be gone now: https://www.youtube.com/watch?v=xlJ1DQdjVHM


>Zerodium is known for buying zero-days and selling them to government agencies and law enforcement.

So how much is Zerodium getting for these zero-days? If all you care about is money, then aside from the fear that you might end up being investigated by one of these agencies, why wouldn't you sell directly to them for more money than Zerodium is giving you?

Otherwise, if you actually care about the security of systems, then disclose it to the developers, give them reasonable time to fix/patch, and submit it as a CVE.


The concept of government contractors is quite common. If Zerodium can guarantee auditing and safe zero-day acquisition, then I don't see anything different in governments using a broker to obtain their exploits.

The same attractions would apply as when they're dealing with other government contractors rather than individual actors.

The really sad thing here is that those exploits might be put to use against the people of the world.


I fell into cybersecurity after doing some consulting to a major department of the government of Canada on machine learning. While there I was shocked at how bad things are. FVEY may be the pre-eminent alliance in the cyber domain of war, but defence is so much harder than offence if the situation isn't rushed. This didn't really dawn on our political leaders until recently, but they don't really know what to do.

So I started methodically learning cybersecurity. I ended up writing this comment about a year or two in:

https://news.ycombinator.com/item?id=12788910

> It's surprising how bad cyber security is, but so much of it is right there in the pages of this [U Waterloo textbook]. It's like finding out you can buy a Patriot missile for $250 and some spare time in the evenings.

Over the past two years I think the world has started to understand where we are headed, but there are no easy solutions. Many of the problems we have are the same ones we've had since the dawn of the NSA. Computers need operating systems. Operating systems are really fucking hard to make completely secure, let alone completely secure and borderline usable. If you think $500k for 0days is high, you ain't seen nothing yet. When autonomous systems run everything, or when greater numbers of Wall St traders start realizing what they can do with an 0day and some outlying put options, we're going to see 0days worth tens of millions as the arms race heats up.

The problem with cybersecurity tools (and AI / autonomous systems) is that it's all dual use. The same tools you probe your own server with are the same ones you can probe others with. Worse—we can't even control the export of attack tools because it's all essentially just data. You can't stop people from memorizing code snippets or facts.

Even so, we need to instil fear into people in the West. We need to limit who they're legally allowed to sell the vulns to. Allied states: Yes. Defence interested parties: Yes. Some cybergang: Fuck no. We need to deny travel visas to the direct family members of other individuals in unaligned states that sell 0days to the worst actors.

It won't stop bad actors from getting these bugs, but it will make it significantly more expensive for them to do so, and at the end of the day war and crime are economic concerns as much as they are political ones.


> Even so, we need to instil fear into people in the West. We need to limit who they're legally allowed to sell the vulns to. Allied states: Yes. Defence interested parties: Yes. Some cybergang: Fuck no. We need to deny travel visas to the direct family members of other individuals in unaligned states that sell 0days to the worst actors.

So would that apply to researchers who work on more theoretical areas (with real-world implications), such as static analysis or formal verification?


I saw an ad (math dept. at a Canadian university) from the government looking to hire cybersec people. I was pretty interested, but didn't match the requirements they were looking for (experience with cybersecurity). As someone with a background in scientific computing, I wonder what the transferable skills between sci-comp and cybersec are, apart from data analysis/machine learning?


IMHO you are probably smarter than they are, especially if you can remain scientific about things.


If the government has to crowd source its back doors then there may still be hope for digital freedom.


Can't you use some form of static analysis to highlight potential vulnerabilities?


The low-hanging fruit amenable to this type of discovery was plucked years ago.


Yes. It's how I would've tried to claim these bounties if I were a black hat. CompSci teams behind tools like Saturn often test them on FOSS code, both known-buggy and patched. The score is how many known bugs they catch versus false alarms. However, they often find new bugs with new tools, even in well-trodden code.
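To illustrate the flag-the-pattern end of this spectrum (real analyzers like Saturn do much deeper dataflow analysis), here is a toy checker that greps C source for calls to functions with no bounds checking. The banned list and the demo snippet are purely illustrative:

```python
import re

# Map of classically unsafe C functions to the reason they get flagged.
BANNED = {"gets": "no bounds check at all",
          "strcpy": "use strlcpy/strncpy with explicit size",
          "sprintf": "use snprintf"}

# Match a banned identifier followed by an opening parenthesis.
CALL = re.compile(r"\b(" + "|".join(BANNED) + r")\s*\(")

def scan(source):
    """Return (line number, function, reason) for each flagged call."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for m in CALL.finditer(line):
            findings.append((lineno, m.group(1), BANNED[m.group(1)]))
    return findings

demo = '''#include <string.h>
void greet(char *name) {
    char buf[16];
    strcpy(buf, name);   /* potential overflow */
}
'''
for lineno, fn, why in scan(demo):
    print(f"line {lineno}: call to {fn}() -- {why}")
```

Of course this is exactly the kind of shallow check whose low-hanging fruit is long gone from mature codebases; the interesting results come from path-sensitive analyses, not pattern matching.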


Static analysis will not stop most of the exploits that have happened on iOS/OSX in the recent years.

Often it is a situation where multiple processes are working together and there is a way to trick a privileged process into modifying memory in a way it shouldn't.
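That check-vs-use pattern can be sketched in a few lines as a generic TOCTOU (time-of-check to time-of-use) file race. This is not any specific iOS/OSX exploit; the names and the race-window hook are made up for illustration:

```python
import os
import tempfile

def privileged_read(path, race_window=lambda: None):
    """A 'privileged' routine that checks a path, then acts on it."""
    if os.path.islink(path):          # check: refuse symlinks
        raise PermissionError("symlink rejected")
    race_window()                     # window where another process can act
    with open(path) as f:             # use: follows whatever is there now
        return f.read()

tmp = tempfile.mkdtemp()
victim = os.path.join(tmp, "victim")
secret = os.path.join(tmp, "secret")
with open(victim, "w") as f:
    f.write("harmless")
with open(secret, "w") as f:
    f.write("top secret")

def attacker():
    # Swap the already-checked file for a symlink to the protected one.
    os.remove(victim)
    os.symlink(secret, victim)

print(privileged_read(victim, race_window=attacker))  # prints: top secret
```

Static analysis sees each process's code pass its own checks; the bug only exists in the interleaving between them.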


This is disturbing. A $500k payout for colluding to introduce a back door into OSS.



