The reality is that they only pay that much for bugs in the kernel that do not require a user interaction.
Other bugs that rely on a common action in an app everybody uses, for example opening the stock mail application, may be enough to compromise almost all iPhones. Apple seems to pay "only" $100k for those problems.
I just scanned through the comments and clicked on some links so correct me if what I wrote is not accurate.
So, when either party violates the agreement, it reflects poorly on the person who made the introduction, making it harder for them to make those connections in the future. And these introductions matter: most sellers don't want to sell to just anyone; there needs to be some trust that whoever you're selling to will be selling it on to friendly governments or whatever. It's not like a Craigslist ad where you sell to whoever answers.
So that acts as a deterrent on the buyer side. It'll be harder to get new sellers if you have a poor, or no reputation.
On the seller side, you're not going to get too many people willing to vouch for you as you start burning bridges by selling non-working exploits.
And on that, the payment scheme acts as a deterrent, like the great-grandparent said:
> grey-market sales are valued on continuous access; you get paid over a period of time, and if the bug you sold dies, you stop getting paid.
That is, you might get XX thousand upfront, and then an agreed-upon XXX thousand based on the exploit surviving XX days.
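To make the incentive concrete, here's a toy sketch of such a staged payout. All the numbers are invented for illustration, not actual market rates:

```python
# Toy model of a staged gray-market payout (all figures hypothetical).
# The seller gets a small upfront fee, then periodic installments that
# stop as soon as the exploit is patched ("dies").

def total_payout(upfront, installment, num_installments, died_after):
    """Sum the upfront fee plus installments paid before the bug died.

    died_after: number of installment periods the exploit survived
    (None means it survived the whole contract).
    """
    if died_after is None:
        died_after = num_installments
    paid = min(died_after, num_installments)
    return upfront + installment * paid

# Exploit survives the full contract: seller collects everything.
print(total_payout(50_000, 100_000, 10, None))   # 1,050,000
# Exploit patched after 2 periods: seller keeps only a fraction.
print(total_payout(50_000, 100_000, 10, 2))      # 250,000
```

The point is visible in the numbers: a bug that dies early forfeits most of the contract value, so the seller's payoff is tied to the exploit staying alive and unreported.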
So trying to scam the buyer will net you a small fraction of the total at best. And often they'll hold payment until it's confirmed, with contracts written and signed over these sales; it's not under-the-table payments or anything, for the most part. Legitimate business transactions.
So, I guess to sum it up: reputation and a demonstrated, or at least vouched-for, track record. There is a lot of trust on both sides.
Not all brokers are alike, though. Exploit survival is a gamble, but sensible end-buyers usually don't want to burn the exploits either, so they'll use them carefully. Some brokers don't sell exclusively (despite their claims); those have a reputation for exploits getting burned early.
I have not been involved with any iOS exploits, not really my area of interest, but let's say I was. Would I consider selling to Apple? Yeah, it would be something to consider. I'd consider the market rates too, of course: $1MM vs $1.5MM, sure, Apple is enticing; $1MM vs $2MM, maybe not. Not sure where I would actually draw the line, but you are right that Apple doesn't need to compete directly with the market rate, just get close enough.
I'm sure there are those that would rather just go for the bigger profits regardless.
Hacker: I have a no user-interaction RCE
Apple: ok yeah
Hacker: gimme a phone number
Apple: here you go
iPhone: I am pwned
Apple: ok lets do the deal
1. The buyer, or someone the buyer trusts. In that case the buyer can log all the network traffic, pick out the incoming attack traffic, and work out the exploit from there.
2. The seller, or someone the seller trusts, who can backdoor the software to fake it.
3. Someone they both trust. That would require some mutual contacts, which, while possible, I wouldn't count on.
4. A random victim. More plausible, but neither party would want to risk prematurely burning the exploit.
And of course there are a ton of exploits that are not remote (all sorts of local privilege escalations), and there are partial exploits that are sold. Think of a multistage exploit: just the piece that escapes a sandbox might be sold on its own, or an exploit that requires a memory leak might be sold without the leak, or just the memory leak itself. Obviously a fully weaponized exploit sells for the most, but there are buyers for individual stages too.
I was thinking about phones, not servers.
> then the buyer can log all the network traffic and find the incoming attack traffic and work out the exploit from there.
Is it really that easy? I'm not a security researcher, but I imagine that most exploits aren't just a magic byte sequence you send to the victim -- so I assumed that just a single observation of a successful attack is not enough to understand it easily.
That doesn't change things too much; it does introduce some potential difficulties with intercepting certain types of traffic/input to the phone. The question just becomes who controls the hardware being compromised.
> but I imagine that most exploits aren't just a magic byte sequence you send to the victim
It's not, and it's not like you can just replay those very same bytes. But it's not magic either: it all has a meaning and a purpose. While it's not easy, you can work out plenty from logs. The entire exploit is necessarily there. Things will change between runs, but all the instructions that get injected to run the later stages necessarily need to be sent, or the instructions to generate/cause them do.
It's not an easy skill, but it's not unheard of.
(I'm simplifying a bit to avoid getting into various code execution techniques.)
Not reputation? Not the thrill of it? Not hatred of Apple? Not plain maliciousness?
Zerodium already pays double what Apple does. Where's the incentive?
It’s true that Apple pins its rewards to specific outcomes, but I think a lot of bug bounty programs do something like this. For instance, Google’s top bounty for Android ($200k) is only awarded if you can provide an exploit that compromises the trusted execution environment (see https://www.google.com/about/appsecurity/android-rewards/).
So to me, this isn't a PR stunt. It's a necessary "dumbing down" we see all too often. It's no different from journalists digesting and simplifying the content of an advancement in biology or physics.
I don't get this "it is either A or B" reasoning. Why can't it also be a PR stunt? Or also contain PR?
If a company wants to do something and PR gets involved and makes sure the messaging is good, that’s not a PR “stunt” anymore.
A stunt is a trick.
Why not both? Security and PR.
Now, by keeping these 0days off the market, Apple also gets to further burnish their reputation. It's a good play no matter how you look at it.
Of course, first Apple needed to be fairly certain that there aren't tens of thousands of vulns left to patch!
That's certainly not to say the HN audience is an elite (it's not), but it is composed of outliers, in the sense of people having unusual jobs and/or unusual interests compared to the average population in any city-sized slice taken pretty much anywhere in the real world.
I guess on average it will reduce the number of people holding out, but it will not eliminate them.
Are independent discoveries of bugs common?
Also, somewhat famously, both Spectre and Meltdown were discovered independently by multiple teams in the same timeframe, who all coordinated disclosure with the CPU vendors etc. https://meltdownattack.com
We should also remember that there are tons of people outside the US who are into this: Africa, Asia, Eastern Europe. They don't have to worry about the legality of selling an exploit.
isn't that the point? sure, it makes selling on the black market more valuable for the hackers willing to do that, but it also makes purchasing an iPhone exploit less accessible for anybody else. that's a good thing.
This move from Apple makes people like me, working with human rights defenders and journalists, happy.
Because it drives up the costs for the NSO Groups, Hacking Teams, and Gammas of this world. They either pass (taking a hit to their revenue/internal capacity) or their costs go up (making it harder for crappier regimes to afford them, and reducing the frequency with which high-end exploits get used).
Besides that, earning a 1 million dollar reward for cracking the iOS kernel is probably a nice ticket to a pretty well paying gig at some security firm.
“The Cupertino, California-based company said in a lengthy memo posted to its internal blog that it "caught 29 leakers," last year and noted that 12 of those were arrested. "These people not only lose their jobs, they can face extreme difficulty finding employment elsewhere," Apple added.”
Then Apple could buy and fix ASAP, so the researcher gets screwed. So yeah, it's cheaper, but if someone notices, just wait for the backlash!
A guaranteed one-time payment seems like the fair way to go.
1M is a lot of money to me, a regular person, but when you consider that top security engineering talent could be making north of 500k in total compensation, 1M suddenly doesn’t seem all that impressive.
It’s a good bet to make on their risk. Imagine paying a mere 1M to avoid a public fiasco where all of your users get owned.
This just seems like good business. They could make it 5M, and it would still be worth it to them in the medium to long term.
You can do so much damage/return with an exploit that affects > 30% of the population. Get 5 of those and sky is the limit.
I think this has a lot to do with government agencies buying any exploit they can get their hands on, and there being basically no market besides that. I don't know whether that is illegal in the US, but it seems that government is the only buyer.
Extremely unlikely. The risk/reward if found out is too lopsided. Conviction for insider trading has you pay a penalty and transform your fund into a family office -- Raj Rajaratnam going to prison for a decade is a unique exception not the rule.
Conviction for insider trading in combination with wire fraud, espionage, and all the other exploit-related charges will send everyone involved to prison for 10-20 years, pretty much guaranteed. What use is a bigger hedge fund if you have that sword of Damocles hanging over you?
Lots and lots of other charges but if no insider is giving you info then it wouldn’t be insider trading.
What would be 100% legal would be if you bought an exploit and then traded on the release of that exploit. Depending on the severity of the exploit it could move the stock price a bit. And, even though people wouldn’t like it, that’s kind of the point of the market. You get rewarded for helping with information and price discovery.
What does this mean?
Some companies put a lot of effort in security (Apple, Google, Facebook, etc). Usually they have engineering driven cultures.
The other majority of companies see security just as a cost center that needs to be covered in order to reduce legal liabilities.
Companies of the second kind do not have bug bounty programs because they know they have too many holes and prefer not to attract too much interest.
Even for companies with huge capitalization and profits, paying $1M per vulnerability is not practical when there are that many holes.
I expect that in general all companies (including those in the first group) detect compromised accounts and services from time to time, but unless they have to disclose because the law demands it, they prefer to avoid the bad PR and potential lawsuits.
But while I expect the first kind of company to do a root-cause analysis and improve their systems, companies in the second group usually just clean up the detected compromised systems and avoid looking too deep, either because they do not have the skills or because they are afraid of finding things they would have to disclose and be liable for.
If I were to discover a vulnerability is there a legal way I could cash in on it (aside from this case with Apple)?
Also I heard in person, so I cannot quote.
Not sure how legal this is, but there are even vulnerability brokers, who set you up with buyers.
Following this line of thinking leads to some pretty absurd conclusions, like 7% of Tesla's value being predicated on Elon Musk not smoking a joint.
> Shares of Tesla plunge after news of a pair of C-suite executive resignations and a bizarre video showing CEO Elon Musk smoking pot on a podcast.
$100k exploits are most likely ones that have been resold a few times over before they reached the "professional infosec" space.
I presume it is a legal minefield; selling 0days and extortion are first cousins, at least. If you could put one up for bids, no doubt an Arab country would pay $50 mil for it... but they buy them from companies that sell "software" and services.
It makes sense to invest way over the top if you can kill bug classes outright --- and Apple does this, too. For example, people that were doing DMA hardware attacks against macOS a couple years ago are now on Apple's payroll, designing hardware to defend against those attacks. That's a serious, meaningful investment in defense. Rewriting their kernel in a memory-safe language would be another example (one they haven't done yet).
Massively outbidding the current spot price for a bug doesn't accomplish anything like that. Think about who they're really bidding against. They can drive the price of bugs way up, and they are doing that gradually, but there will still be a price and people selling them.
The important thing they're doing on this is making unlocked devices available for researchers, and lowering the bar for research for people who would never sell to brokers.
If you found an unfound gaping crater of an exploit somewhere -- how much would that be worth to them? Likely a lot more than $1m. I'm sure you could negotiate that number up, a lot.
But bug bounties like this are competing with the shadier markets. I suspect they found out the shady companies are offering more than their existing bug bounty program did.
As Warren Buffett says, there is plenty of money to be made in the centre.
I'm absolutely sure that Saudi Arabia would pay much more than that for an exploit like that. I don't need to have experience doing it; it's common sense.
Maybe I'm naive, but I would think that any programmer skilled and privileged enough to be able to insert a vulnerability into a mainline product, get it past code review, and make it look like an innocent mistake during the inevitable root-cause analysis would not only know better, but would also value their career enough that whatever payout they got from the bounty wouldn't be worth jeopardizing all they've worked for.
And anyway, how's that conversation supposed to play out? "Sorry about that bug boss, yeah could've happened to anyone. Anyway, about my retirement in two weeks..."
Why would an Apple employee take the risk of losing his $300k+/year salary for a $1M bounty (which he'd have to share with the acquaintance)?
Y'all are a joke. Does that make sense?
Governments are more than willing to pay $100,000-$1,000,000 per unlock.
I wonder how they're going to manage this. I could easily see some less than ethical researchers applying for this program and selling all the 0 days they find to the usual suspects rather than informing Apple.
I don't work in the security field nor am I a business number cruncher, but that was the gist I had of what these programs achieved.
Edit: see Despegar's reply, I should have RTFA! However worth pointing out that there would be some incentive for researchers to go to Apple instead of a third party, which might tip the scales in their favour.
>Previously, a company called Zerodium was vocal about how much it will pay researchers, before handing them to its unknown government customers. In January, the secretive company announced it was offering $2 million for a remote hack of an iPhone.
So that's already more than what Apple offers. I tend to think they'll always be outbid.
How would that make any sense? It is ludicrous.
So $1M per exploit is priced significantly ahead of $2M per full hack, since a full hack typically chains several exploits.
Interestingly, this also means that an entire exploitable stack now becomes worth a lot more, while any given exploit is worth relatively less. And any stack of exploits becomes much more brittle, since a patch to a single one of the N exploits can knock out the use of the whole stack.
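One way to see the brittleness is a quick back-of-the-envelope model (probabilities invented purely for illustration): if a full chain needs N stages and each stage independently survives a patch cycle with probability p, the chain as a whole survives with probability p^N:

```python
# Illustrative sketch: a full exploit chain only works while every stage
# in it remains unpatched. With hypothetical independent survival
# probabilities, the chain's odds drop fast as stages are added.

def chain_survival(stage_probs):
    """Probability the whole chain still works: product of per-stage survivals."""
    result = 1.0
    for p in stage_probs:
        result *= p
    return result

# A single exploit with a 90% chance of surviving the next patch cycle...
print(chain_survival([0.9]))                    # 0.9
# ...versus a 4-stage chain of equally robust exploits.
print(chain_survival([0.9, 0.9, 0.9, 0.9]))    # ≈ 0.656
```

Four fairly robust stages already drop the chain's odds below two-thirds, which is why a patch to any single link is so damaging to the stack's value.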
I forget the influence until I see things like this.
Corporations had been unilaterally deciding what the payment for a reported bug would be. They were constantly undervaluing and wasting everyone's time. People would say the same rationale "low liability and clean money is more valuable than dirty money and needing to launder it".
Yeah, but not that much more valuable.
So now the bounties are reaching their market price.
Now, people who prosecute white hats that practice responsible disclosure are technically known as "assholes", but the end result is that there are so many laws that even innocent computer usage breaks, your liability for any damage to end users usually doesn't need to be considered; there's enough book to throw at people already.
Essentially, if you touch a computer or computing device, it's quite likely you've somehow violated the CFAA. It's _stuuupid_. https://en.wikipedia.org/wiki/Computer_Fraud_and_Abuse_Act
You don't need safe harbor because analyzing your own property is not a crime. Neither is telling people what you found. Also, please stop using the term responsible disclosure!
Do you own the OS you’re breaking into?
Unfortunately the intersection of "security researcher" and "right-to-repair advocate" is probably tiny, but that would be something I'd love to see: someone finds a crack that enables a lot of third-party-repair scenarios and sells it to that industry instead of to the black-hat criminals or even back to Apple itself.
The third-party repair/aftermarket industry is huge and would love to break through the proprietariness of the ecosystem. In that light, $1M seems rather small...
Based on personal experience, the intersection isn’t all that small.
When you consider it could be the likes of the three-digit shoe inspectors over there in the US, it could be a fair chunk of change.
0days are used against hardened targets. Think “Iranian nuclear facilities” rather than “grandma’s PC”
It is nothing like that. It is the same as freelance development work, except there are very few customers and the developer writes something that may or may not have any value.
I think one of the intentions here is to lessen the demand for black-market dev-fused phones, which is already a huge problem for Apple. This is similar to the idea of officially allowing Linux on the PlayStation: give hackers no legitimate reason to pwn your console.
>Some incorrectly speculate it was used as an attempt to help classify the PS2 as a computer to achieve tax exempt status from certain EU taxes that apply to game consoles and not computers (It was the Yabasic included with EU units that was intended to do that).
and Linux on the PS3 was used as a marketing point.
https://en.wikipedia.org/wiki/OtherOS - both these pages are really bad :/
I wish I remembered the specifics of the comment, but selling a 0day on the black market is not something a casual person can easily do, and even if someone figures out how, there's a lot that can go wrong, with many of those outcomes leading to jailtime.
It's vastly superior to participate in a bug bounty program legitimately, from a risk standpoint, especially if you're standing to make $1M. 0days are (and I'm not an expert on this) not generally going for enough more to justify all that extra risk.
Argue with 'tptacek, not me.
That's what has been happening so far. Here's a report from a couple of years ago about this:
The announcement today is actually raising the rewards 5x (from $200K to $1M) to make it more valuable to report this to Apple.
I’d say that the researchers have a pretty strong incentive not to screw around with Apple.
It doesn’t matter anyway, because Apple patches the bug, thus killing its black market value completely.
If they get some special "developer" devices on their hands, they might find some funny things...
But maybe it depends on how you define “hacker” and what you call “random”. I’m saying that folks in the jailbreaking scene are some of the primary targets for this. It wouldn’t be worth launching if the plan was to exclude them. Some are already part of Apple’s bounty program.
“Apple Calls In Rock Star iPhone And Mac Hackers For Secret Bug Bounty Bash”
How much do they make selling to China?
Aside from the Chinese part of the jb scene, it’s kind of disheartening to know that the rest of them are selling that capability to a hostile adversary (if that’s true). I’m surprised the five eyes aren’t offering enough to keep them out of China’s hands.
My guess is that they're not going to let just any rando h4xx0r into the program. They'll take on well-known security researchers and academics who have something to lose by leaking zero days.
Where are the trades happening? Is it the exploiter putting out something like "kernel exploit for iOS xx.x"? Or do exploiters bid on people offering money? How can the seeker of exploits be sure the exploit works? How do the parties keep their anonymity? And how is the money exchanged? Cryptocurrencies? And how did it work before crypto?
Are governments bidding, but also wanting to convict the exploiter so they get the exploit for free? I remember watching one or two movies where the hacker was caught and sentenced to jail, and the government stepped in and said: "You can either be in jail for n years or work with the good guys for n/2 years." Is that just science fiction?
Before zerodium existed I think it was a bit more personal. Here's an article about the Grugq acting as a broker.
There is no way you will ever hear authentic answers to your questions. The only time anyone tried to explain it, the resulting article backfired on the interviewee. (Disclaimer: it was me.)
Governments do not buy from developers. The paperwork would be insane. They buy from businesses like Raytheon. How Raytheon gets them is opaque. But they do employ hundreds of exploit developers. Read the r/netsec job postings and notice how many require having a TS clearance. Every interesting job that says “work on vulnerability discovery and exploit development” requires TS.
Governments generally speaking do not cheat on business deals where they want to continue having access to that market. It is like stiffing the company that sells you replacement parts for your government vehicles. You save money now, but in the future your planes can’t fly and no one will do business with you.
All of this I explained during the interview, but the objective of the article was not what I assumed it would be, which was to address the dynamics of how the market works. I was naive to think that, but in my defense I was genuinely shocked that people were unaware of the market (it has existed forever). Literally everyone who is an infosec rockstar has been involved with exploit sales. Many still are, because it allows them to work on what they enjoy, bugs and exploits, and remunerates them for their expertise. They get paid a living wage to do what they want, like any freelance developer. They are just smart enough to keep their mouths shut.
I haven’t been involved with the market for almost a decade now, but you’ll still hear people saying shit like "how does it feel to sell weapons to dictators??" (Even on here there are a number of such comments.) I can truthfully answer that I have no idea. I only ever sold software to western governments who had a hard-on for terrorists.
I’m still angry about it, but I have no one but myself to blame. You can’t unfuck the goat. C’est la vie. People want sensational stories about evil people, they don’t want stories about the dynamics of a grey market software industry. No one will ever speak about it again (lessons learned analysis! Protip, don’t be the lesson others learn from).
The market has changed massively over the years. It is nothing like the one I was involved in back then. However, as I said, no one will ever discuss it again. They saw what happened and they won’t speak in public about it.
What was, is, and will continue to be, the legitimate sale of vulnerabilities is now closed forever.
As a thought experiment, think of this. Let’s take it for granted that the IC counterterrorist units and the legal authorities hunting for child abusers are acting in good faith. That is, not every single person at NSA is desperate to see what you are doing on the Internet (literally, you are noise obscuring their signal). There are people who are going after child sex abusers: do you want them to have the capability to exploit a web browser, or do you want web browsers to be safe tools for child abusers? This is not hypothetical.
There cannot be a discussion about a market where there is so much hysteria about fringe cases of abuse. Rather than trying to find ways of mitigating against abuse, the reaction has been to advocate for prohibition. Prohibition does not work, it simply drives reputable operators out of the market.
The conversation about vulnerability sales has been as even handed and rational as the conversation about marijuana in the 50s. Instead of marijuana madness you get “the FBI can hack your computer!!” ...I guess the upside is that at least this time the topic is not a proxy for racism [edit: I retract that statement. Pretty much every rationalization about banning vulnerability sales talks about African or Arabian buyers.]
And again, I have said too much. Try to explain something, get called a baby killer. I’ll bet there will be accusations of enabling dictators to spy on civil rights activists. To preempt the “you don’t know what happens after you sell it!” I say simply this — the point of having a middleman to handle the transaction is to ensure that you sell to the right end users. Exploit developers don’t want to sell to dictators, they find someone who can get them access to a market where their work will be used ethically. That can’t be said for all, of course. The jailbreak community in particular is essentially a vendor to the Chinese government.
But there you go. The most you’ll hear about it from someone that actually knows what they’re talking about.
[edit: haha, see? It was brought up before I even posted a response! There is no accurate information. Literally every single paper on the topic cites newspaper articles rather than academic research. This is actually unique; it is the outlier case. Mara did a review of the literature and found that the majority of citations were to articles, far in excess of other topics.]
 https://www.econinfosec.org/archive/weis2007/papers/29.pdf [PDF] — a paper from Charlie Miller talking about how difficult it was for him to sell exploits without a trusted third party to act as an impartial party to the sale. That TTP is called an “exploit broker” because that sounds far scarier than “trusted third party.” Incidentally, this is the environment I was operating in, and it was clear that no one involved in security considered it abnormal.
https://www.wired.com/2014/01/tormail/ ... look at the framing of the article. It is not "FBI screws up their operation and mistakenly collects data that is irrelevant to their investigation." It is "if you used this secure email provider [hosted on the same infrastructure as a massive child sex abuse web site] the FBI has your inbox!!!!"
 https://news.ycombinator.com/item?id=20651348 .. feel free to read the article and think what you like. Andy Greenberg is a good journalist. I was an idiot. ¯\_(ツ)_/¯
Was that the forbes article linked above?
> You can’t unfuck the goat. C’est la vie.
That goat laid you golden eggs though. It takes me over 15 years to earn a $1m paycheck, and I wouldn't mind dealing with some people moaning at me for it. People always find something to complain about anyway, so I wouldn't be too concerned about it.
> The conversation about vulnerability sales has been as even handed and rational as the conversation about marijuana in the 50s.
Sure, you're right, but this is true for many new things, you're just smack in the middle of this discussion. It's good to have these discussions though, because it makes people aware that otherwise weren't. Comparable hysteria is currently happening with 'company X is listening to your conversations' and 'self driving cars might kill you to save a baby'. People in the field have been discussing the ethics of these things for a long time, but now it's becoming a public discussion. This happens when things grow.
Personally, I prefer the black/grey zero-day market over the back doors being proposed by some. Those would be permanent vulnerabilities, while zero days are (usually) temporary and probably only used when absolutely necessary, as opposed to just eavesdropping on anybody. I also see the need, because the internet gives bad people too many places to hide.
So, from me, thanks for your services, you probably helped keep us safe from bad actors.
So yeah, that $15k golden egg. ¯\_(ツ)_/¯
At the time I did not know about phrases like "off the record" or that you could have corrections made to articles that contained false information. Had I known, I would have gone off the record at the beginning, although I should never have spoken in the first place. And I should have made them correct the inaccuracies.
But, c’est la vie.
That's a pretty big difference indeed; that's more like a normal SV salary than a 'live forever in Thailand' amount of money.
I understand that he is writing a book now, and I doubt many insiders will speak to him. I don’t expect it to be very accurate.
I had a lot of problems because of that article. Not just the death threats and such, but ... do not ever become interesting to states. They have a lot of resources.
I can't think my way around your point about prohibition though - I think someone saying "selling exploits is bad" is also someone that would say "the government shouldn't be monitoring us, pedophile or not," and that's part of why they don't think exploits should be sold to governments. Could be wrong.
But, we all generally seem to feel that the government shouldn't be given back doors into our devices that only they get to use, yeah? So instead the alternative is an endless arms race as chrome or whoever tries to out engineer the FBI? Why not just give them the backdoor at that point? (I.e., why not just support them having the backdoor, I'm not implying those in your wheelhouse have the power to legislate or anything)
As for backdoored Chrome, what is to prevent China using a modified version of Firefox that removes the backdoor? It would blind NSA to collection on the Chinese target.
There is no way you can use backdoors against hard targets. Hard targets are why they need 0day. It is an arms race because it is a conflict between states.
Whatever fears people have about 0day being used against them are, as I’ve said before, like worrying about ninjas rather than cardiovascular disease. One is something you have no control over, but almost no exposure to as a risk. The other requires regular work to stay safe.
Years ago I wrote “free security advice” and the basic concept is still relevant, though I should update it now. Android 9 is a much harder target than 4.4 was. I would actually rate Android as safer than iOS, because all of these ridiculous articles about million-dollar payouts have driven most developers towards iOS, and iOS is a monoculture.
A hardened Android device (disclaimer, I’m making one for retail sale) is safer than a stock iOS.
Literally everything in the media is complete garbage. No one who knows how things work would ever discuss them again.
Governments have used zero days. Most famously, to unlock an iPhone belonging to a terrorist (whose house was ransacked by the news media). Less famously, to botch a legal case against a pedophile (amazingly, it would be possible to find and arrest nearly all pedophiles on Tor by burning half a million dollars in zero days). But the government didn’t want to release the zero day for Play Pen, and Mozilla got involved in the case.
But Freedom Hosting’s zero day was discovered while it was being used. I think the government still uses zero days, but parallel constructs the evidence from them. This is policy making by mismanagement.
On face value, the government is involved in abhorrently irrational decision making. The government cannot be considered responsible enough to have zero days, but that’s an argument that will lead nowhere.
I read your post, but still have no idea why it's inapplicable in the real world. Could you explain that again? I think it's a very interesting discussion, so I'd like to actually understand your point.
>Farook destroyed his personal phone. The FBI wants access to his work phone. UPDATE: FBI locked themselves out of the iCloud account after it was seized.
>FBI already has huge amounts of data from the telco and Apple. This is almost certainly enough to rule out clear connection with any other terrorists.
>FBI is playing politics, very cynically and very adroitly.
You are referencing cases where exploits were in the news. What does that have to do with vulnerability markets?
Are you saying that the IC used exploits only twice? And they were misused both times? What does that have to do with vulnerability markets?
I'm sorry, but I'm utterly lost. Can you please clarify?
It looks like you DID update it: https://gist.github.com/grugq/353b6fc9b094d5700c70
And from that gist:
> Use an iPod or an iPad without a SIM card
> Use an iPhone
How can you then say:
> A hardened Android device (disclaimer, I’m making one for retail sale) is safer than a stock iOS.
Is Android (and in general any open source system) safer than iOS (a closed and highly customized system)?
The idea I have heard over and over is that an open source system is more secure because the code can be scrutinized by anyone who wants to.
But with monthly security updates and how quickly a vulnerability can be exploited, that no longer seems to be the case.
The main reason is the time between a vulnerability being patched in the source code and the patch being deployed. When a commit that fixes a vulnerability lands in the Android codebase, anybody who knows what they are looking at can notice it, and likely build and distribute an exploit before the patch is actually pushed to all users. On a closed source system, an attacker can still reverse engineer the changes in an update, but fewer people have the skills to do it, and it is not straightforward to work out which changes are security patches.
Considering the timing, and what I see on the Android security bulletin, almost every month there are EoP and even RCE vulnerabilities being patched. A Google phone, on average, will spend two weeks every month vulnerable to a "known" vulnerability.
For all the others the situation is dramatically worse. Samsung is at best a month behind the security update schedule. A Samsung user will have a phone that is always behind the latest vulnerabilities patched and visible in the Android code base.
Some of these vulnerabilities can be exploited quickly and at scale, since everybody has an LTE internet connection and reads news in a browser.
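The patch-gap idea described above can be sketched in a few lines. This is a minimal, illustrative Python example (the keyword list, repo path, and git invocation are my own choices, not a real monitoring tool): scan recent commit subjects in a local checkout of an open-source project for ones that read like security fixes.

```python
import subprocess

# Heuristic keywords that often appear in security-fix commit subjects.
# Purely illustrative; real "patch gap" hunting diffs the code itself.
SECURITY_KEYWORDS = ("overflow", "use-after-free", "out-of-bounds",
                     "double free", "cve-")

def looks_like_security_fix(subject: str) -> bool:
    """Return True if a commit subject line matches a security keyword."""
    s = subject.lower()
    return any(k in s for k in SECURITY_KEYWORDS)

def recent_security_commits(repo_path: str, limit: int = 200) -> list:
    """List recent one-line commit subjects that look security-related."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", f"-{limit}", "--format=%h %s"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in log.splitlines() if looks_like_security_fix(line)]
```

An attacker running something like this against AOSP sees candidate fixes the moment they land upstream, weeks or months before most handsets receive the update.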
When I wrote it Android devices never got patched (hence the advice to switch to a FOSS rom that would be updated, rather than a frozen in time factory ROM.)
Security involves a lot more than just access to the source code. That is simply one factor in the ease of some vulnerability discovery techniques. Back then Android had poor process isolation, significant problems with its sandbox, lax SELinux configurations, and an insecure software architecture (e.g. not using “least privilege”).
For a regular user, a stock iOS device is safer than an Android device because there is very little iOS malware in the wild. For a user at risk, then they are safer using a secured device, which by default means modified Android.
Security is not a generic “thing”. It is a continuous process that provides countermeasures against threats by mitigating risks.
If you want a device that is safe by default, will always be patched, and is not vulnerable to indiscriminate exploitation or malware embedded in apps — use iOS.
You can achieve that with a Google Android device (starting with about v8 or so). Of course you still have to be vigilant against malware laden apps.
Any more information on this? I'm more than a little depressed by the current options in phones - I don't relish the idea of moving to ios - but at the same time I'm a bit worried about the direction Android is taking...
What kind of market/price range are you aiming for?
So in past, present, and future, the legitimate sale of vulnerabilities is now closed forever? When was it legitimate?
Are you saying that since it is not legit, exploits should never be sold? What are you advocating for ?
There is so much bullshit about the “highly lucrative black market” it is staggering. The market is not big. There is significant risk which gets factored into the payment structure, so the payments are lower than people imagine.
The market is not very liquid. If you have a Chrome capability for sale but your client already has a Chrome capability, they won’t buy it. If their capability dies, then they’ll want yours, but by then yours might be dead as well. Gross oversimplification, but that is generally how things work. The demand is very specific, the supply is very limited, and the product is very fragile (particularly time sensitive.) It is lucrative like making a startup is lucrative. You invest a lot of time and resources and sometimes, with luck, you win big, but the odds are not in your favor for a million dollar payday.
Most articles treat it like some sort of open market drugs bazaar. It is nothing like that at all. It is more like a handcrafted goods faire with a few wealthy customers looking for exactly the thing they need. Only they won’t tell you what they need, they simply want to see what is on display. Lots of window shoppers, as it were.
The product has an unknown shelf life.
The customer cannot tell you what they need, they will only look at what you have and possibly choose something.
For the developer they need to ensure that they provide sufficient information about the capability so the customer can make an informed decision. But they have to avoid revealing sufficient details that it can be reproduced from the ad copy.
Part of what a broker does is actually translating between two parties who don’t speak the same language. The customer needs a Tor Browser Bundle capability. The developer has written a UAF RCE for Firefox that relies on JIT spraying for reliability. Someone has to translate from exploit dev speak into IC language.
For the IC, that TBB capability is a replaceable part in a larger program that enables them to achieve their mission objectives. For the exploit dev, that bug is a labor of love that they spent months working on. They have completely different views on the value of the capability. One side sees it as a component they need for a machine they want to use. The other side sees it as weeks of frustration and pain invested into a unique masterpiece.
They have different expectations, don’t speak the same language, and don’t trust each other. Things have changed a lot from when I was involved. It’s all very fascinating but, as I said, no one who knows about it will discuss it.
I’m being stupid and talking about it, again. But hopefully this will clear up some of the stupid myths about the vulnerability market.
For example, all those “wow, a way to read someone’s private messages on Facebook? That’s got to be worth millions!!” No, it is not. If a legitimate client wants to read someone’s messages on Facebook, they get a warrant. There is no ROI for cyber criminals, and whatever it might be worth to North Korea, the risks associated with that sale are not worth it. That bug is worth whatever Facebook says it is worth. Dropping the 0day would make for some news, but mostly it would be negative. So the only rational way for a security researcher to make money from a Facebook bug is through the bug bounty system. (I’m not addressing cyber criminals discovering such a bug, because that is not relevant to the issue of vulnerability sales.)
"Literally every single paper on the topic cites newspaper articles rather than academic research."
You mention that everyone is doing it with no citations of academic sources. I'd be interested in reading any recent research you believe is high quality and represents the current market. That other paper was dated 2007. I figure there's been some changes.
"Let’s take it for granted that the IC counter terrorist units and the legal authorities hunting for child abusers are acting in good faith. "
We can't take that for granted. Ok, so the prior precedent I pushed Schneier et al to use in media was J Edgar Hoover. He used blackmail on initially a small number of politicians in control of his budget and power to massively increase his budget and power. The Feds committed all kinds of civil rights abuses. His reign lasted a long time with his power growing. He accomplished it all through surveillance using ancient methods that required actual people listening in on calls and such. Both Feds and I.C. stay doing power grabs even though some or all of that was stopped. We'll never know since FBI continued to have its budget and power.
I predicted that, post-9/11, they'd do a power grab as a USAP. If it's a USAP, then only a few in Congress can oversee the program and therefore only a few need to be controlled. Sure enough, Snowden leaks confirmed they did that for nation-wide surveillance, Congress kept doing nothing more than they usually do (didn't even read reports per GAO), gave them retroactive immunity for abuses, the warrants weren't for specific individuals ("targeting criteria"), they shared data with all kinds of non-terrorism-related agencies, at least one (DEA) regularly arrested folks after lying about sources, and they're steadily expanding that. Again, with criminal immunity for whatever secret things they're doing.
So, no we can't consider them acting in good faith. They've constantly lied to Americans and Congress about programs that are used to put people in jail for all sorts of stuff. There's no telling what they'll do if we give them too much power. That's why some of us advocated warrants for information or specific acts of surveillance. One can also hold people in contempt for not giving up keys. It needs to be targeted with evidence behind what they're doing.
And, yes, some horrible people will get away with crimes like they do with our other civil rights. You'd have to be non-stop spying on every person anywhere near a child 24/7 to achieve the goal of preventing that. Yet, we don't do that because we as a society made a trade-off. This is another one. This isn't hypothetical: the FBI is so corrupt they pay people to recruit/bust terrorists with Presidents and Congress usually taking bribes from companies to get elected. We should always treat them as a threat that acts in their own self-interest that might differ greatly from ours.
It is a fatal character flaw I have. When people want to know about something I try to help them.
> You mention that everyone is doing it with no citations of academic sources. I'd be interested in reading any recent research you believe is high quality
There was one by RAND which is good.
> represents the current market.
There is nothing that I am aware of that discusses the current market. The RAND paper is closest.
> acting in good faith.
I should not have used a blanket statement. My point is that there are people in IC who are legitimately going after terrorists and child abusers. They have a legitimate need for capabilities that enable them to do that.
I am not saying that the IC is a benign and wonderful government organ. I am saying that within the IC there are people who are actually hunting terrorists and pedophiles. I didn't want to explain all of that because it is obvious that it is true. Hence, "let's take it for granted". Rather than discussing the history of the IC, I wanted to explain that there are legitimate uses for 0day, and that is the issue being discussed.
The rest is not relevant to explaining how the vulnerability market operates. (Well, how it did in 2011.) When someone asks "how do shares work?" you don't start off by talking about boom and bust markets and macroeconomics. Same thing here. "How does the market work?" is not a question about the IC. It is about how the market works. If you're talking about the vulnerability market you talk about the vulnerability market. You have to assume that there are legitimate players who are acting in good faith.
This entire post is why I abridged it to "let's assume good faith."
It used to be done via payment services like PayPal, but I imagine Bitcoin would play a large part in the modern world.
I believe some companies offer cash in hand or bitcoin payments, but by no means is that “the way it is done”
If you look closely you can see all the paper towels the photog stuffed into the bag. (He also tried to steal one of the stacks, lol. It was his bag, and when he gave me the money back it was short. He accidentally “missed” one of the stacks.)
What a waste of time. The dude needs mental help; there isn't anything there. If there is, it's unconvincing after going through 20 of these.
However, the few clips of him getting evicted were real.
As several people have mentioned, hackers can sell to the highest bidder and having proof that you have an exploit is probably sufficient, but what if Apple was willing to pay as much as the highest bidder?
This may also convince people who have already sold bugs to reach out to Apple.
It probably costs them a fraction of the PR spend or risk of data breach/user exposure etc.
Edit: Since it's XNU, and it's open-source, and it's been around for a really long time, this seems unlikely. But if something was found in here, for instance, everything would be practically compromised: https://github.com/apple/darwin-xnu/blob/master/bsd/netinet/...
I remember the days of jailbreak-me.org when you could just visit that website, your iDevice would be rooted, and Cydia would be installed on your iDevice. You could install all sorts of tweaks, mods, and apps through Cydia. I remember installing a Pandora tweak that gave unlimited skips, gave it a black theme, and removed ads (because I was a poor student) and got freaked out because if tweaks could modify apps like that, then they could probably phish banking passwords. Anyone else remember those days?
Also search for "15-213 Bomb Lab" - it's from a CMU systems programming class and teaches use of the debugger and some common vulnerabilities.
I think Apple could do better and has the resources to do so. Why not incentivize enough so the people that profit from this kind of business don't even have to ponder the decision?
Would it even make sense for me to try? The probability that I find something, especially without user interaction, seems so far off that it would be hard for me to stay motivated.
Imagine one year where I dedicate two days a week to learning, understanding, and trying. Would I have any chance at all of finding something worth the $1M?
Yes...but probably not the way you're thinking.
Most issues these days are discovered through fuzzing first. So there is always a chance your fuzzer will find an issue worth $1M; it's much less likely that you'll realize its worth, or be able to demonstrate and weaponize the exploit to prove it.
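For a concrete sense of what "fuzzing" means here, a toy mutation fuzzer can be sketched in a few lines of Python. This is only the core idea (the target binary is a hypothetical placeholder; real fuzzers like AFL add coverage feedback, corpus management, and crash triage):

```python
import random
import subprocess

def mutate(data: bytes, n_flips: int = 8) -> bytes:
    """Flip a handful of random bits in the seed input."""
    buf = bytearray(data)
    for _ in range(n_flips):
        i = random.randrange(len(buf))
        buf[i] ^= 1 << random.randrange(8)
    return bytes(buf)

def fuzz(target: str, seed: bytes, iterations: int = 1000):
    """Feed mutated inputs to a target program and collect crashing cases.

    `target` is a placeholder path to some binary that reads stdin.
    """
    crashes = []
    for i in range(iterations):
        mutant = mutate(seed)
        proc = subprocess.run([target], input=mutant,
                              capture_output=True, timeout=5)
        if proc.returncode < 0:  # killed by a signal, e.g. SIGSEGV
            crashes.append((i, mutant))
    return crashes
```

Each crash this surfaces is only the starting point; turning one into a reliable, weaponized exploit is the months-long part the parent comment describes.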
Let's rephrase the question a little bit though:
Instead of "Would I have any chance at all of finding something worth the $1M?" let's ask "Would I have any chance of learning this level of exploit development?"
Two days a week, let's round to 50 weeks a year to give you a bit of a break, and call it 100 days of effort.
So, in 100 days, would you have any chance of reaching the level of being able to at least write an iOS exploit, ignoring the discovery aspect? Unfortunately, the answer is still no.
But, you would make some serious progress!
A modern iOS zero-click exploit isn't just one issue. In a bad-case scenario (okay, there are worse than this) you might need the following issues:
- Memory Leak + Entry Point service exploit
- Sandbox Escape to low priv user
- Privilege escalation to higher priv user
- Kernel memory leak + Kernel exploit to finally get root privs
Even for someone with experience, going from fuzz result to exploit can take months. So in 100 days of spread-out effort, you won't be doing that, but you might be able to begin approaching that first stage: a memory leak and an exploit in a user-land service.
I only say might because 100 days is a really short time when you think about how technical your knowledge of this stuff needs to be, but I'd like to think that with some real determination, the foundations of modern software exploits should be approachable in 100 days.
As for whether it would make sense for you to even try: the best time to start was 20 years ago, and it's been getting increasingly more difficult. The longer you wait, the higher the barrier to entry gets.
> The full $1 million will go to researchers who can find a
> hack of the kernel—the core of iOS—with zero clicks required
> by the iPhone owner.
> Another $500,000 will be given to those who can find a “network attack requiring no user interaction.”
which I believe many of her vulnerabilities are eligible for. I read that article from https://news.ycombinator.com/item?id=20639999 yesterday, and she had this paragraph as her second:
> Vulnerabilities are considered ‘remote’ when the attacker does not require any physical or network proximity to the target to be able to use the vulnerability. Remote vulnerabilities are described as ‘fully remote’, ‘interaction-less’ or ‘zero click’ when they do not require any physical interaction from the target to be exploited, and work in real time. I focused on the attack surfaces of the iPhone that can be reached remotely, do not require any user interaction and immediately process input. 
The full $1 million is for that level of fully remote attack, but against the kernel. I'd have to look up to see if any of the code she found vulnerabilities in are part of the iOS kernel.
Surely this is an additional $500,000 if she finds a kernel exploit (which would net her $1 million)?
'dang and/or 'scbt: This link and title is probably better: https://www.macrumors.com/2019/08/08/apple-bug-bounty-progra... | Apple Ups Bug Bounty Payouts, Expands Access to All Researchers and Launches macOS Program.
Edit: As the posters below said, those aren't kernel bugs. Thanks for the correction!