Exploit broker Zerodium ups the ante with $500k to target Signal and WhatsApp (arstechnica.com)
110 points by acconrad on Aug 23, 2017 | 67 comments

Not just Signal & WhatsApp,

$500,000 - Messaging Apps RCE + LPE (SMS/MMS, iMessage, Telegram, WhatsApp, Signal, Facebook, Viber, WeChat).

Which I guess covers chat apps with > 250M MAU (except Telegram, which is probably included because of its high usage in Arab countries). Looking at their programme, researchers could instead approach the original vendor; but we all know who would pay hard cash for 0-days ;)

Original source: https://zerodium.com/program.html

Strange that they would pay the same money for an iMessage or Viber exploit. Those have very little market penetration compared to WhatsApp's 90%+ (!)

It's probably the countries those apps are big in that make them think this way.

For example, Viber is very popular in the Middle East, Russia, Africa, India, parts of Asia, Ukraine, Balkans, some of the 'Stans.

Say a company buys and sells insider trading information... Would it be legal?

So why is buying/selling 0 days OK?

Both involve selling information that will be misused in the wrong hands, the kind of information you don't want a "broker" to know about, the kind of information you don't want a broker finding clients for.

And if there's only one potential buyer (the target), they're basically blackmailing 0-day targets: "become a client or... who knows what will happen... maybe the NSA or hackers will buy it?"

You are confusing two things. Buying/selling "insider info" (i.e. material non public info) is not illegal. Acting on such info CAN be illegal under certain circumstances, but even that is super difficult to prove. Why would buying/selling 0days be any different? It's the usage that determines legality. Why should buying an exploit to permit jailbreaking be illegal? Or to permit unlocking a device (presuming local laws allow).

>Buying/selling "insider info" (i.e. material non public info) is not illegal.

I doubt that. If I had non-public information that could materially impact the price of a company's stock, I would not expect to get away with selling that information to anonymous buyers via some shady broker.

If you're not an insider yourself there's nothing illegal about it.

You should look at some of the recent failed insider trading prosecutions. If you sell the info and expect someone to trade on it, and thus you are benefiting from ill-gotten gains, then they MIGHT be able to prove conspiracy. My point is that if you have MNPI and I pay you for it, and that's the end of it, well that's not illegal. It may be a civil violation due to NDA or something, but that's different.

I can imagine that what you're saying is true if there was no danger to the public.

But if someone were to trade on the knowledge that a supplier had delivered faulty airbags to an auto manufacturer without telling that manufacturer, I doubt this would be looked at kindly by the courts.

But I agree that the metaphor is not ideal. Selling 0day exploits to that broker is much worse. Any seller would have to have a reasonable expectation of aiding organised crime or terrorism.

Do you believe a company buying and selling insider info would be allowed?

It's an amazing business idea!

- people with insider info can get rich off it without trading on it themselves

- targets of insider info can be coerced into buying the data to prevent selling to third parties

- and of course we can trust the broker to never sell the data to third parties

I'm being a bit sarcastic of course.

Most firms are unwilling to invest any sort of money into security for their customers (this includes preventing breaches and having a sufficiently enticing bug bounty program). Third-party private firms act as an arbiter that puts pressure on those same companies to invest more into their security or face the risk of serious PR problems.

>Say a company buys and sells insider trading information... Would it be legal? So why is buying/selling 0 days OK?

Insider trading is a Victorian-esque and arbitrary law. The same way sports ban PEDs under the guise of "fairness" (when in reality everyone uses steroids and PEDs, and nearly everyone does insider trading), these outdated laws do nothing but put the SEC on a moral high ground and bolster the tax bucket.

>Both are selling information that will be used wrongly in the wrong hands, the kind of information you don't want a "broker" to know about, the kind of information you don't want a broker to find clients for.

May be used wrongly. Zerodium is a grey-hat distributor. They can sell to the companies themselves, domestic and foreign governments, or terrorists. The first three can be used for good (and it could be argued so could the terrorists, if their ideologies converge with some oppressed opinions among a populace). It's not guaranteed, but it's likely that some "good" comes out of it.

>And if there's only one potential buyer (the target), they're basically black mailing 0 days targets: "become a client or... who knows what will happen... maybe the NSA or hackers will buy it?"

In a perfect world, you shouldn't have to have a coal tax and companies would care about the environment, but we don't live in that world. We live in a world where the only thing that matters is asset value, and good security is rarely seen as increasing asset value (only as stopping asset depreciation from scandals).

> and most everyone does insider trading

I know this is a popular belief, and there are certainly some outright violations in the industry, plus a whole host of activities that carefully toe the line, but by and large it's just not true. Most hedge funds and active managers underperform their benchmarks. I've spent my whole career in hedge funds and seen far more clueless money-losing portfolio managers than mischievous insider traders...

Most firms DO invest (lots of) money into security for their customers, bounty programs were not invented by Zerodium.

Brokers like Zerodium will inflate the price of 0-days, for sure. Maybe this will be an incentive for 0-day researchers to dig deeper. At least they'll be richer. But maybe it will cause more harm than good: it's a marketplace for exploits, and Zerodium will look like a 0-day Walmart to hackers.

"They can sell to the companies themselves, domestic and foreign governments, or terrorists": I consider the only ethical option is the first one.

If you're really concerned about security for consumers, can you tell me how you're trusting "domestic and foreign governments, or terrorists" to protect your security when they're buying exploits?

They're buying exploits, holes in security: they don't buy you more security, it's the exact opposite.

I'm sure companies are much more concerned about security for their customers than any other actor. Why would a third party want access to exploits if not to defeat the security put in place by the company?

0 days researching is essential to plug holes, not to punch holes. Only the target can plug holes, the other "customers" only want to punch holes.

From: http://money.cnn.com/2016/04/07/technology/zerodium-apple-ha...

"In a sense, Zerodium is a cyber arms dealer. It pays hackers to learn about their tactics, then packages that and sells it to elite subscribers.

For $500,000 or more a year, governments could buy a road map for hacking Android phones to spy on people. Companies could learn about a special hacking tactic before it's used on their own Windows computers -- or quietly use it themselves for corporate espionage."

So they are selling to governments (which make laws and make this legal).

I wonder if there's a way to use civil legal proceedings to punish these exploit-trading firms. Maybe class action lawsuit representing victims of these exploits.

The government is gonna shut down its own favorite means of acquiring 0-days?

Well, be that as it may, but the judiciary is generally independent. Inferring political meddling in courts is a rather speculative territory.

You'll have about as much luck as suing Glock for gun violence.

Well, guns are very generic. This is highly targeted. You can't pretend you just "bought this exploit for defense purposes".

Code is not considered a weapon by law in most countries. As soon as one country rules otherwise, it's easy enough to migrate to a region that doesn't care. This won't be stopped any time soon, since all countries would need to adopt these laws in one go for it to be effective... yay interwebz :)

On another note, I think it's silly people try to exploit apps on phones when it's well known how to access any device via the SIM card and/or LTE chips >.> You can just take screenshots of conversations, no need for decryption. >.> Guess they like flushing money down the toilet lol.

You can just as easily claim you bought an exploit for purposes that aren't directly offensive.

Such as ?

Red team training, developing countermeasures and signatures, researching attack heuristics, and, if "counter-hacking" is ever legalized (there is some push to do so), legally shooting back, in the same manner as owning a gun.

Fair enough.

Also, FYI, probably the first large-scale broker of exploits was TippingPoint/HP (I don't remember if the program started before or after HP acquired TippingPoint). They would buy exploits, develop signatures for their IPS products, then notify the vendors of the affected products and later disclose the vulnerability to the public. http://www.zerodayinitiative.com

Why? Why would you want to punish the capitalist levers that ensure these products stay on their toes?!

What prevents the developers from building their own exploits for an easy retirement?

Being sued to death and possibly being hit with criminal charges as well.

If you are in a position to successfully (as in get your code reviewed, accepted, promoted and deployed) backdoor a product that is big enough to qualify for a bug bounty of even $100,000, you are going to be paid well enough for the risk not to be worthwhile.

This is correct.

As bad as these things are, you wouldn't want them to be illegal.


You cannot effectively ban market activity, see war on drugs. Only this time it's even more difficult because no physical goods are involved.

Also, such prices are a good indicator of app security: you'd only want apps whose exploits cost north of "pocket money for a state actor", so IMO > $100M?!

> You cannot effectively ban market activity, see war on drugs.

I'm not saying you can ban market activity perfectly, but legalization certainly makes things easier, lowers prices, and increases activity.

> Also such prices are a good indicator of app security

Are they? Surely the price reflects the value of the exploit?

This would to a much greater extent reflect the user base of the app (high variation) than its relative security (lower variation; though the two would have an interactive effect). The larger the user base of the app the better, though wealthy and insecure users (e.g. rich retirees) would have more value to criminals, whereas politically engaged young people would have much more value to governments and spies. I would think the price would be a pretty noisy/poor indicator of the security of the app relative to its user base.

> but legalization certainly makes things easier, lowers prices, and increases activity.

I partially agree that the fear of being caught is a disincentive. However, higher prices make fewer people buy but more people sell. In a way, illegal drugs, with their high potential profits (due to higher risk), attract more dealers (and fewer consumers). This again puts pressure on the risk premium, so consumers benefit from more supply. So high prices are not really a solution IMO.

> price reflects the value of the exploit

I think it reflects both the reach (~ number of users weighted by their wealth and gullibility) and the "safety" (an open-source app audited by some well-known company vs. some prototype closed-source app). Since the reach can be reasonably estimated, the safety can IMO also be estimated (as price / estimated reach). I am pretty sure we should not try to agree on a clear metric, but I personally would find that info useful when choosing e.g. a messenger app or some piece of hardware (say a router).
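To make the price / estimated-reach idea concrete, here's a toy sketch of that metric. The app names and all figures are invented placeholders for illustration, not real Zerodium prices or user counts:

```python
# Toy "implied safety" metric: bounty price divided by estimated reach.
# All figures are invented placeholders, not real Zerodium prices or MAU data.
offers = {
    "BigChatApp":   {"bounty_usd": 500_000, "est_users": 1_200_000_000},
    "NicheChatApp": {"bounty_usd": 500_000, "est_users": 40_000_000},
}

def implied_safety(bounty_usd, est_users):
    # Dollars per reachable user: a higher value suggests the exploit is
    # expensive relative to how many targets it opens up.
    return bounty_usd / est_users

for app, o in offers.items():
    print(f"{app}: ${implied_safety(**o):.5f} per user")
```

At equal bounty prices, the niche app comes out "safer" per user reached, which matches the intuition that identical payouts for very different user bases say different things about each app's security.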

Maybe because it increases the security significantly. Say a large government pays top $ for an exploit. Chances are pretty good that the vast majority of the black hats on the planet will not have it.

Additionally publicity generates incentive to fix the problem. More apps/OSs/libraries will try harder to be secure. Apps could start wearing high exploit bounties as badges of honor.

Much like how ransomware has likely increased security and changed user behavior more than an infinite amount of suggested security training. Some users even gasp ask how to protect against ransomware, and as a side benefit actually protect against mistakes, dying disks, and other flavors of malware at the same time.

Seems better to trot this kind of stuff out in the open than to hide your head in the sand and try to hide security problems from the public.

But why incentivize the weakening of secure systems? I honestly don't think black-hat hackers would find much utility in cracking an app like Signal (except maybe for street cred). Relatively few of its users would be "soft targets" in terms of susceptibility to phishing, social engineering, weak passwords, lack of 2FA, etc.

Governments on the other hand would pay lots of money to increase their mass surveillance capabilities. Signal users are disproportionately young, sophisticated, and politically engaged.

Given that Signal's budget is raised from donations and grants, and is much more fixed than the open market working to undermine it, how would such a market incentivize them to increase funding on security? It's already their top priority.

> But why incentivize the weakening of secure systems?

Are you suggesting that the developers would put in a vulnerability on purpose, in order to sell it and collect the payoff?

Because, short of that, I can't see how exploit trading incentivises weakening of systems. It just incentivises people to find weaknesses.

That's what first-party bug bounty programs do now.

The extra thing that a free market does is incentivises people to find weaknesses and sell them so that they can be maliciously exploited. When vulnerabilities are exploited instead of patched, secure systems are by definition weaker.

Probably not.

Firefox/Tor RCE+SBX on Linux: $100k

Chrome RCE+SBX on Linux: $80k


Yeah... I recently uncovered a way to bypass the Tor Browser Bundle proxy on some Linux flavors [0]. Unfortunately, the bounty from the Tor Project isn't nearly as much [1], but at least I can sleep at night.

[0] https://blog.torproject.org/blog/tor-browser-703-released

[1] https://hackerone.com/torproject

Good job!

If it's not a secret, how much did they pay to you? Seems like this could be considered, at least, a medium severity, and the top bounty they gave so far is only $500. If you don't wanna disclose that, that's perfectly understandable. I'm just curious.

I haven't received a payment yet actually -- it is still pending.

> Unfortunately, the bounty from the Tor Project isn't nearly as much

It sounds like it's a privacy leak, but not an RCE/SBX? Especially as it's saying that the sandboxed version of Tor isn't affected.

Bit sad that you can't click the revisions on https://trac.torproject.org/projects/tor/ticket/23044#no2 to see the diff.

So I believe the max bounty without a bonus for any Tor vuln through their bug bounty program is 4k. The max for the Tor Browser Bundle is 3k.

Likely because Firefox is shipped as default on a lot of distros.

It used to be 30k for Firefox vs 80k for Chrome, though.

I'm thinking it's a combination of being the default, Firefox just having enabled the sandbox on Linux, and it being the base of the Tor Browser (which means Firefox exploits can sometimes be used against Tor too).

I expect two-factor authentication over cellular networks (SMS and voice callbacks) to be commonly exploited within a year. Orgs really need to enforce OATH (HOTP, TOTP) and FIDO (U2F, UAF) and begin transitioning away from cellular two-factor.

The problem that I haven't seen a good answer to is what happens when (not if) the customer loses their second-factor device. If someone loses their phone, they can still keep their number and receive an SMS verification on their new phone. With OATH/FIDO, that won't happen.
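For reference, the OATH TOTP scheme mentioned above (RFC 6238, HOTP from RFC 4226 underneath) is simple enough to sketch from the standard library; this is a minimal illustration, not a hardened implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6, now=None):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // period)
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter (RFC 4226)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226 sec. 5.3)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", t = 59s -> "287082"
print(totp(base64.b32encode(b"12345678901234567890").decode(), now=59))  # → 287082
```

Because the shared secret lives only on the enrolled device, losing the phone means losing the factor, which is exactly the recovery problem raised above; SMS sidesteps it by tying the factor to the number instead of the hardware.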

If I were WhatsApp, I would offer a larger bounty. If they got the 0-day, they could fix it before it gets into the wrong hands. I would think $500k is a drop in the bucket for them, and it would give them good press that they are actively keeping up with the privacy/security of their users.

If I was WhatsApp, I would offer to compensate those who have found the exploit for their work. If someone else buys the exploit I would sue the broker for extortion.

> If I was WhatsApp, I would offer to compensate those who have found the exploit for their work.

Sure, their rate is $1 million an hour :).

I could see the government jumping in and pressing charges, but the worst thing WhatsApp can do is piss them off.

Well, the thing is, offering to compensate the work won't be their go-to. The one who finds the exploit will most likely sell it to the highest legal bidder, which at the moment is Zerodium.

Interestingly, I don't see a payout linked to Zerodium directly. What happens if someone they do business with decides they can just grab the keys to the castle directly? I wonder if they'd bargain, or just set another bounty on the individual or group in question.

Wish someone would offer a bounty to target Zerodium.

What a world we live in. I wish humanity made this turn out differently, where digital privacy was a human right, not something worth $100k.

I'm not sure this isn't a "good thing", to some degree. Companies like Apple and Google offer rewards for people who find exploits in their software, but they have little incentive to raise the reward even as the exploits become more and more rare, and demand more time to find. This may give them the false impression that there are no more exploits, while in reality they just haven't incentivized people sufficiently. This company operates on the other side (of which I don't approve), but by pricing exploits more accurately (via supply and demand), it forces companies to raise their prices as well to compete.

In other words, this can be seen as part of the free market incentivizing people and companies to find and patch exploits, or for programmers to just write safer code in general.

Since the offer is made by a shadowy organisation, how would we know whether it indicates anything reasonable? Companies with money to spare could use it to easily discredit competitors. First, company A offers $X (where $X might be a full-time-equivalent salary or more) to a shadow broker for an exploit in company B's product. Likely outcomes include:

If an exploit is found, company A can play it for what it's worth, either accusing B of negligent behaviour, sloppy coding, defunct bug award program, etc., or exploiting the bug (via anonymously hired help) for a while, possibly in such a way that company B's customers notice the issue before company B does.

If an exploit is not found, company A can still accuse company B of negligent behaviour if company B hasn't matched $X in their bug awards.

If company B has done everything right company A can pull the offer, and try to convince a reporter that this indicates an exploit was found and is being used without anyone at company B noticing. Dumber stories certainly have surfaced, and the damage could be significant.

Couldn't the same argument be made about a thriving market in ways to break into your house at night and kill you?

If your home has 250 mill people using it a month, probably.

More like... 250m people use a given brand of lock.

I wouldn't trust a lock that doesn't have a $500K bounty offered for exploits.

So what do you use at your house? Watching the lock pick village videos from defcon made me realize that there are no perfect locks.

I use a cheap lock and don't expect it to hold up against any but the most lazy attacks. I don't trust it.

Human social order is built around a desire to feel superior to others. Capitalism "won" because it weaponised that desire -- not the other way around.

Intentional division of capability is what the present economic system is based on, but I agree

Digital privacy has never existed and never will... it's a myth created to sell 'secure' products (which never turn out to be actually secure, and even if they were, are running on insecure devices...).

You are right, it's a human problem, and the problem is lack of respect for each other. It won't be fixed by any software patches, for sure =]
