I would be genuinely shocked if Apple doesn't end up paying out much more for all the bugs found. Frankly, it would be concerning if they didn't acknowledge the severity of the bugs and the time invested by this particularly skilled team.
To Sam and the others involved: fantastic job and an amazing write-up. 10/10
EDIT: see this comment thread for more info on the economics by people who know what they're talking about: https://news.ycombinator.com/item?id=24719656
But why would an expert spend any of their valuable time outside of work looking for bugs if they didn't like the terms of the program? That's irrational behavior.
And why would someone who's willing to sell bugs to criminals bother with a site that's already been picked over by bug bounty researchers? The vast majority of companies in operation today have no such program and would likely be much more fruitful.
And lastly, how would paying more for bugs prevent someone from also selling them to criminals?
That's not something a company the size of Apple can count on.
> And why would someone who's willing to sell bugs to criminals bother with a site that's already been picked over by bug bounty researchers?
Because it's Apple, it's one of the biggest companies on earth. iPhone jailbreak vulnerabilities alone fetch millions on the black market.
If you know the bug bounty program doesn't pay much you can expect only the trivial things to have been found, and if you're very skilled you know you still have a good chance of finding things to sell.
> And lastly, how would paying more for bugs prevent someone from also selling them to criminals?
It would keep more honest people interested in your bug bounty program instead of doing something else.
Yes, and do you think you have a better understanding of the situation than the security and risk management folks who work there? There's absolutely nothing that has been said in this thread that they aren't keenly aware of. There are people in Cupertino who are going to wake up in a few hours, grab some coffee, and pore over the threat intel reports from last night. They know who is buying and for how much, and they have a long, detailed analysis of what happened with previous jailbreaks. There is another team of people dedicated to staffing the bounty program: rifling through stacks of reports with a signal-to-noise ratio that's approaching the Shannon limit, triaging findings, tracking down product and engineering teams to get a quick response so they can get back to the researcher in a timely fashion, and handling rejections for out-of-scope reports and dupes.
These people are up to their eyeballs in this every day. They live it, breathe it, love it, and they'll move the needle when moving the needle makes sense. Until then, anyone who participates in the bounty program and then cries foul when payouts are in line with the posted max and not with what could be had on the black market is going to get zero sympathy from me.
You could have said the same to this team, "do you think you understand cyber-security better than Apple's experts?"
That's barely the yearly cost of one generic software engineer at Apple.
I was expecting multiple millions in payment given the severity and quantity of vulnerabilities found. State actors could easily pay 10x that amount legally through government contractors.
China or North Korea could easily allocate a much larger team to something like this and disrupt Apple (not for bug bounties). Then again, China and North Korea dedicate their resources to financial fraud, where there is real money to be had.
Apple is a tightwad joke. If they laid out a scope of work for a professional pen-testing company that included pen testing their 17.0.0.0/8 range, that contract would easily have been in the hundreds of thousands.
I’m sure foreign adversaries will take notice now. Apple’s cybersecurity posture has always been very weak. It’s known they don’t dedicate any resources to it.
Well worth it for Apple and a decent payday for 3 months of spelunking.
Kudos to Apple for following through.
I hope this sets the standard for companies going forward.
Chances are they already did, and are just sitting on the vulnerabilities.
They aren't competing with FAANG salaries, except maybe for a few outside experts that they might bring in to kick off a program.
They have all the top cybersecurity specialists they could ever need.
The number of security researchers isn't a function of population size alone; it's population size * the fraction with the propensity to develop the requisite skill * the fraction who go to work in the profession.
Shockingly, adding millions more starving people who have never seen a computer doesn't get you many more cybersecurity specialists.
The upper caste from which all their "talent" is drawn is a small fraction of the total population, mostly descended from the lower-class peasants and workers who supported the rise of the current regime. What they supported was not the establishment of such a system but rather an inversion of the prior order. It's an impoverished field from which little of value grows. To be clear, there is nothing inferior about North Koreans by nature; it's that the regime wastes the talent it does get.
Einsteins are in theory as apt to be born into the less privileged classes, but there is only a 50/50 chance of getting enough food to be healthy, let alone a chance at intellectual development.
By your same logic, they would not have any Olympic competitors, let alone medalists.
Most nations educate millions to tens of millions in order to get their doctors, scientists, software developers, and leaders. A nation that educates merely thousands, the children of a newly rich class drawn primarily from the lowest echelons of society a century ago, is poorly positioned to be the best in any field.
Just like with Russia or China, by bribing cybersecurity specialists.
I know it’s hard for senior management to really commit to bug bounty programs like this because it feels embarrassing and vulnerable, but posts like this should be sent around the boardroom when the topic comes up: Apple rented an AMAZING security team here.
Sam, can you disclose what you got paid for all this?
Also, they state in the article: "However, it appears that Apple does payments in batches and will likely pay for more of the issues in the following months."
Zerodium - http://zerodium.com/
Azimuth Security - https://www.azimuthsecurity.com/
NSO Group - https://www.nsogroup.com/
ZDI - https://www.zerodayinitiative.com/
SSD - https://ssd-disclosure.com/
Makes me wonder, if these guys could do it, how many Chinese industrial espionage units have?
And Russia, and Iran, and so on... It seems safe to assume someone else out there found at least one of these and got in to the Apple internal network and has been quietly doing their job, whatever it may be.
This itself is massive. How many 0-days could emerge from something like that?!
If anyone asks why you should go to all the effort of securing the software on your internal network, that's why.
Even worse: every team thinks the thing they do is absolutely the most important thing in the world, so they hide it even more. They create empires around the information they control and explicitly force you out of it. So instead of just reading their freaking source code or documentation, you have to get permission to open a ticket in their system; then you open it; then one person triages your ticket, another forwards it, another creates an internal Jira about it, a PM prioritizes it, then a dev gathers the information and passes it to the Senior Information Proxy employee, who instructs the intern to finally reply in your ticket. And of course your original message was misunderstood, so the thing they gave you is useless. All you needed was access to the damn thing, but they built an empire around it and now you have to fight a war of unproductivity.
Especially when most of your colleagues never have to bother with security because they think they're safe behind the perimeter, how can you expect a secure perimeter? With so many applications, there is bound to be one with a hole.
The argument is more on the meta level. Most of the issues shown are implementation issues; hundreds of people have their hands in here.
But being able to gain more privileges because you have managed to compromise one service is a question of design. And there, only a few should have a say.
Expect failure, limit the impact.
The Google model is frequently derided on HN as "not invented here" but at least you can say that they aren't getting rooted via some kind of toxic waste like Jive forums.
In my experience all enterprises I have experience with do this. There’s just something about the whole process that makes everyone worry about security later.
We talk about the ethical responsibility (and common-sense practicality) of companies cooperating with white-hats who have found vulnerabilities in their systems.
But how does HN feel about this policy as it applies to third-party ISVs contacted for knowledge relevant to a vulnerability exploit?
Should there ideally be a framework in place where the company running the Bug Bounty program puts white-hats in contact with its upstream ISVs’ engineers; or perhaps even treats the white-hat as an employee in terms of eligibility to receive support from the ISV on the Bug Bounty hoster’s tab?
Or, to flip that around, maybe the ISVs themselves should be willing to help the white-hat for ethical/practical reasons as well (for the exploit might, in the end, be as much their problem as it is their customer’s.) But in that case, should there be a some sort of best-practice approach to authenticating that J. Random Hacker who emailed a question to you, is actually a white-hat—e.g. by validating that they’re registered with the bug-bounty program of your client? Or does it not even matter, and you should just answer even a black-hat hacker’s questions about your APIs, since “vulnerability research is vulnerability research and has a long-term result of hardening the ecosystem either way”, and then let the cards fall where they may?
I don’t think black-hats would feel ethical remorse from registering with a bug-bounty program in order to get access to information.
“Or does it not even matter, and you should just answer even a black-hat hacker’s questions about your APIs”
I can see the headline: “Foo, inc. helped hackers break into their systems”.
Let me cite a specific recent example: I was tasked with building an application that integrated document e-signatures. The spec called for Docusign specifically, so I looked at their documentation. What I could find of it was written unclearly and much of it was hidden behind a developer account login. Getting a developer account was "free", as long as my time was worth $0. (You had to fill out a form of some kind, I don't remember the specifics anymore.)
So then I looked at HelloSign, a competitor. Their documentation was public, freely available, and beautiful (https://app.hellosign.com/api/documentation). It included specific examples and walkthroughs. This says things to me like, "we care about the developer experience".
I practically begged the customer to use HelloSign instead. I expected, from experience, that Docusign integration was going to suck, and HelloSign integration would suck a lot less. The customer said, "the spec already says Docusign, so we can't switch".
And the Docusign integration did suck. It was terrible. Lots of it was incomplete. Their vendor library was a godawful mess, built from some automated tool that converts an API into a bad class library. Their support was basically useless even after a contract had been negotiated and signed. The client ended up spending an extra ten grand or so and at least a couple weeks worth of delays just on Docusign-related issues.
This is a pattern that reoccurs often enough that experienced developers use documentation as a proxy for the quality of the service.
The bigger the project the more moving pieces there are and the more likelihood of flaws.
My experience in bug bounty programs has taught me that if you do start a bug bounty program you need to be serious about it and when a report comes in that is actually serious that you need to act on it quickly. And it seems Apple is doing that.
What would be more concerning is if they weren't acting on fixing issues quickly. Some take longer, that's to be expected depending on the problem, but what has been reported so far has been fixed in a timely manner.
Also, keep in mind that security and privacy, while related, are not the same things.
You can have privacy (i.e. minimal data gathering) and poor security. You can also have poor privacy but amazing security.
Not sure why I'm feeding the troll here but whatever.
None of this is abnormal. And it seems based on this article that Apple responded quickly and fixed the reported issues.
Bug bounty programs aren’t a replacement for internal security, and they have the potential to be very expensive compared to paying someone a salary.
Is it an enormous compromise of trust? Dunno. With an average fix time of a single business day, I’m inclined towards “no”: that’s an awfully rapid response for incompetence to deliver.
Second, you'd be a little naive if you thought Google Mail has never had XSS vulnerabilities.
Five people working for 2 months is 10 person-months. Apple paid them just under $52,000, none of which was guaranteed. They had to pay whatever taxes are appropriate for their jurisdictions.
I'd say Apple got an amazing bargain.
The amount of effort put into finding multiple critical-to-high vulnerabilities in a $1TN+ company, with the result being $51k (before taxes) possibly shared between 5 hackers for the 4 qualifying bugs, sounds like Apple took them for a cheap ride through their campus.
Compared to 1 hacker, 1 month, JWT signature check failure = 100k from Apple :
I would expect Apple to pay $500k - $1M for this session in the end, and it would be in the best interest of all parties if this happened. Apple would encourage responsible disclosure (and attract more white-hat bug hunters) this way. The amount of vulnerabilities found is a proof by itself that team work does pay off, if the team is strong. Also, this is a drop in the bucket for Apple. It would probably cost them much more to have them on the payroll for the same amount of time.
I agree Apple got a great deal here (that's the point of bounties, and anyone who thinks they're a bad deal for strong researchers is... right). But I'm always going to point out that HN has weird misconceptions about the economics of this stuff.
It’s not about the dollars you could make. That’s probably pretty hard to get away with.
But the damage you can do? That’s a whole different thing.
By a team of four experienced security researchers working for multiple months?
(This was not several months of full time work, but rather several months of part time work; but I'm stipulating the former condition.)
I think we're on firmer ground saying that there are ways of delivering software that foreclose on "obvious bugs". But when we talk about fundamentally changing the way we deliver software --- in secure-by-default development environments, on secure-by-default deployment platforms, with security as a primary functional goal prioritized over time-to-market --- we're actually into real money now, not just another $250k on pentesters.
Calculation based on 3 months, 5 people, and a $600/person-day rate.
EDIT (as I can't reply to tptacek below): no, those $2,000/day rates do not exist on projects of this size (~300 person-days). In general they do not exist for big projects.
Yes, I agree, you have rates around $1,200 in high-cost countries, yet as I wrote earlier, you can get a similar or the same skill level at $600/person-day if you're willing to work with people outside high-cost countries.
As to the skills I'm talking this level: https://research.securitum.com/mutation-xss-via-mathml-mutat...
I do not believe you can find pen testers worth their salt who would cost _less_ than a non-distinctive developer. At least not one who will do more than run some automated report over all your endpoints.
If we're cherry picking a single one, the associated involvement and timeframe drops dramatically, to something much closer to one or two people, tops, over the course of just a few days, tops.
That's something a pentesting team can absolutely achieve for far less than $500,000 over the course of a few days, too.
Always found at least a medium severity issue though.
Big engagements were typically a week, max. Usually one day of kickoff / getting “in the zone” for a project, three or so days of intensive testing, then the final day is usually writing reports (ugh, reports) all day.
Yes, but that's a shared premise in this subthread already.
Apple is also huge, and no huge company avoids vulnerabilities; staff as ambitiously as you want, but any disjoint group of competent testers attacking a new target is going to find a disjoint set of bugs.
I can easily see the iCloud photo-worming one making its way into mainstream media and causing millions of dollars of reputational damage.
For what it's worth, "reputational damage" has always been a kind of rhetorical escape hatch from arguments that have become too mired in facts.
To my mind, this team deserves a higher salary than typical Silicon Valley programmers for this work.
The conclusion is the same: they are underpaid by a factor of approximately 4+
I’ve spent time in my career with a “big gorilla” employer whose business is very visible within its community. Companies will “pay” a lot to say “We solved FooCorp’s problems with <x>” or “FooCorp bought our <y>”
Lazy buyers assume that their peers have their shit together.
>Between the period of July 6th to October 6th
They may have corrected it. Whilst $52k is cheap for 15 person-months of labour, that's only as of October 4th, so it's not unreasonable for that number to go up significantly over time. It'll be interesting to see what their final total is.
I don't know what Apple would value 15 person-months of highly skilled security consultants at, but I can't imagine it'd be below $200k, so Apple is likely still getting a good deal even if they pay out a lot more.
Just because Apple got an amazing bargain doesn't mean the payout for them won't be great as well.
This is the professional equivalent of having interns do a bunch of real work and throwing them a pizza party.
But to be clear that doesn't mean I think they (or someone else) should not be allowed to make this choice. The possibility should definitely exist. I just don't think it's a good choice in terms of it being a norm.
For a role like this, where outsized skill of someone who is and needs to be elite should be rewarded with enormously outsized pay, I think this a good model.
But, I do find it wild that a group as decorated as this already can't even get compensation that is commensurate with their skill and experience without having to rely on intangible future benefits.
This type of social proof, when executed well, is a boon to one’s career opportunities and credibility for getting future consulting jobs.
If they’re not hired by Apple, they’re going to move to the top of the list for infosec recruiters everywhere. Being able to point to this blog post makes them an easy sell relative to some other person with a generic resume.
“This was originally meant to be a side project that we'd work on every once in a while, but with all of the extra free time with the pandemic we each ended up putting a few hundred hours into it.”
It is when the thread kicked off by measuring how much they were being paid per person-month...
3 months as I think they added the extra month as part of the responsible disclosure and remediation phase.
Annoying, but possibly more to come.
With their abilities, they could still go in that direction.
Developing exploits that are acknowledged by major targets--even if done freelance or as a hobby--is one of the few ways to gain lines on your resume that everyone in the security field will pay attention to.
Yes, it also is effectively a screen for people with the spare resources to invest in a career without getting paid for it.
But the established folks don't know in advance what exactly that will be... if they did, they'd already be paying someone to do it.
As a new person, there's no better way to demonstrate your ability to make an impact than to just do it.
I'm not an infosec person myself. But my experience is that upwards of 80% of the ones I interact with who aren't like the people I mentioned above are just hangers on because they like the group or being associated with "infosec" because it sounds cool or something. Maybe it's because you don't need to be an engineer to regurgitate OWASP vulnerabilities and tell people to use password managers, but perhaps that's enough to, after you look around the room of infosec people, feel like you're an "infosec person." To be clear, that stuff is important, but not anywhere close to sufficient. So a lot of applications for our roles come from these people, who just sit on twitter all day and retweet the Taylor Swift security person, but they're totally not technical and have done nothing of note other than write compliance plans.
My hypothesis is that it's all this noise that makes hiring good infosec people difficult. If I'm hiring a kernel programmer or SRE I seem to get much more signal in my applications, but hire someone for security or infosec and there's too much noise from people like above.
They both matter, though. Basic blocking and tackling at the IT level is important, especially to large old institutions. Apple is obviously an apex technology company, but they're also a 45 year old public corporation... I'm not surprised they've got some vulnerabilities lurking in their subdomains.
Patrolling DNS and 3rd party corporate applications is not usually what people think is sexy security work, though. Problems avoided are harder to sell than problems discovered or bad guys defeated.
Is this like an actual side business you run? Can you tell us more?
Wow. I would think it's just impossible to secure all that, and that's not even everything.
You can make sure your village has no spies; you cannot ensure the same for a city. I bet every large enough network is compromised to some degree.
The city comparison is a really good one.
There's also something to be said about migrating internal DNS to a subdomain of apple.com that is only visible internally.
Not solutions to security, but making things harder to scan makes it harder to find the vulnerabilities.
edit: fixed the number of addresses
The internet was just a research project to connect some universities, government sites, and a handful of companies. No one realized where it was going.
By the time it was clear the IPv4 address space would be exhausted it was also clear reclaiming those IP blocks (for which there is no legal basis) would merely temporarily delay the exhaustion - likely by a year or two at best.
Companies that have an entire /8 block include AT&T, Apple, Ford, Cogent, Prudential Financial, USPS, and Comcast.
For some reason the US Department of Defense has 13 /8 blocks.
All others belong to the regional internet registries (AFRINIC, ARIN, APNIC, LACNIC, RIPE NCC).
I really don't know why anyone other than the registries needs/deserves/got /8 blocks.
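For a sense of scale, the arithmetic is simple: a /8 fixes only the first octet, leaving 24 host bits, so each block covers roughly 16.7 million addresses. A quick check with Python's standard library (17.0.0.0/8 is Apple's well-known allocation):

```python
import ipaddress

# A /8 fixes the first 8 bits, leaving 2**24 = 16,777,216 host addresses.
block = ipaddress.ip_network("17.0.0.0/8")
print(block.num_addresses)  # 16777216

# The 13 DoD /8s mentioned above cover about 5% of the entire IPv4 space.
dod_share = 13 * block.num_addresses / 2**32
print(f"{dod_share:.1%}")  # 5.1%
```

That's why reclaiming a handful of legacy /8s was never going to fix exhaustion: even all of them together are a small slice of the 4.3 billion total addresses.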
My guess as to the answer of “why” is power and leverage. It’s the same as nations claiming physical land. “Maybe we’ll need it, maybe we won’t. But either way, now it’s ours to decide.” Writing that out, do they own those? Can someone take those back?
I did a bit of digging and it looks like they're looking to sell:
But yeah, that's a bit much for one company. I'll give hosting providers a pass on owning a million IPs, because they're for lending out to customers.
I suspect they are only parking those names after recovering them or buying them preemptively. Domain names are cheap, so why not. I don't think that's any argument for the possession of the /8 though.
I remember Google had ownership of duck.com until recently, so they probably participate in the wholesale acquisition of random domains as well.
Consulting firms bill between $1,500-$2,500/day for senior staff, so 2 hackers for 10 days could be the $50k they got paid. Instead, this crew used 5 hackers for, say, 45 days, or 225 person-days. Napkin arithmetic suggests that's somewhere between $340k and $560k.
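A quick sanity check on that napkin arithmetic (every number here is the parent comment's estimate, not actual billing data):

```python
# All inputs are rough estimates from the comment above, not known figures.
rate_low, rate_high = 1500, 2500   # USD/day for senior consulting staff
hackers, days = 5, 45              # guessed team size and calendar effort
person_days = hackers * days

print(person_days)                 # 225
print(person_days * rate_low)      # 337500 (~$340k)
print(person_days * rate_high)     # 562500 (~$560k)
```

Either way, the $51.5k paid so far is a fraction of what an equivalent consulting engagement would have billed.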
I could say it's consulting firms who are overpriced, as a group of amateurs will do better work for 10-20% of the cost, but over the years I've found that the difference in the security world is that you hire a small shop to discover the truth about risks, but you pay a big firm to lie about them. That's what costs extra, and given their transparency, maybe this work wasn't mispriced at all.
I had another family member who worked in a big 4 accounting firm. These companies regularly pay in excess of $800 an hour for the most ridiculous consulting. $1500 a day for two people is robbery in the world of consulting.
To me, this is the real rip off.
A security consulting firm would do more for you. They'd basically be telling you how to make your entire hull stronger. And one of the things they might tell you to do, is start a bug bounty program. And they would also likely put things in place for the real security problem in your org: social engineering. Among other things.
And more than that, spending x dollars on a security consulting firm demonstrates that you did some diligence in securing customer data. And that goes a long way in a courtroom.
1) Sam and the other hackers did not do this as a full time gig, they primarily do this as moonlighting from their full time jobs (you can verify this on LinkedIn)
2) Consultants are often given tight scopes, and these artificial client-driven constraints often prevent consultants from identifying similar findings as Sam and crew found.
3) Bug bounties provide no defined level of assurance. They found an SSRF, but it is a very real possibility that somebody in their crew (or an individual bug hunter) doesn't have experience in that particular topic and Apple would have never been the wiser. In a bug bounty you're at the whim of the crowd's varying skills and interests. You can game this by offering larger bounties, but you can't pre-define a scope or level of assurance.
4) They've gotten paid ~$50k thus far for four bugs, if you read the article they mention they'll very likely be getting paid more. I'd be surprised if their total payout isn't six figures when all is said and done.
5) Your stated rate for what consulting firms charge for a particular role is correct for the US market, but the level of "seniority" of a senior consultant varies wildly. Many large firms will undeservedly give somebody with two years of experience the title "senior", regardless of actual skillset.
6) You state "a group of amateurs will do better work"; the first point to note is that these five are not amateurs in any way! They're in the top 1% of global bug bounty hackers. Second, it seems like you're defining "better" as "finds more vulnerabilities from a black-box bug bounty perspective". I find that clients IRL don't define things in the same way you've done here.
7) "but over the years I've found that the difference in the security world is that you hire a small shop to discover the truth about risks, but you pay a big firm to lie about them." This I couldn't agree with you more on; it is MIND BOGGLING to me that firms with no ethics, actual standards, or transparency are the top firms in the security assessment/pentesting space. For an industry that purports to hate snake oil security, we sure are comfortable with a ton of snake-oil security assessments.
8) This industry needs standards, for-profit old boys clubs are not the way
And the grassroots/non-profit approach also failed due to lack of advocacy, adoption, and persistent leadership. http://www.pentest-standard.org/index.php/Main_Page
I'd love to see a world where bug bounties and full security assessments can live harmoniously and people don't flip out declaring one or the other service totally useless all the damn time.
Regarding amateurs: Olympic athletes are amateurs. It's a reference to people pursuing something out of interest instead of just as a 9-5 job, even if they happen to do it full time. Amateurs will almost always outperform professionals because the skill distribution among amateurs has a longer tail; to even get in the game without professional backing, you have to be above average. This was an amateur moonlighting effort that delivered better results than consultants who cost 10x the money.
Bug bounties find most vulns in scope that 80% of hackers would find, which I think is more valuable than an assurance level, because assurance levels are bunk. A security architecture is valuable, provided it's built with an understanding of the threat model of the actual business and gets implemented; otherwise, I think the security-assessment-document production business doesn't have a long future.
Later came Facebook; people created their accounts using their @hotmail.com addresses and started to leave MSN, since Facebook had a messenger. One day I received an email from Microsoft saying that they were disabling MSN (I'm telling this from memory, forgive me if I'm saying anything super wrong).
Fast forward to me being in college and studying a little bit of pentesting. As I recall, I was trying to see how much information I could gather about a person from their Facebook page (as a non-friend). If you try to log in using their ID (or username), you can see partial cellphone numbers and email addresses. So I tried this with the profile of a girl I'd had a crush on back in the day and discovered that she'd used MSN as her email.
Eventually I tried to log in to her email on MSN and found out it had been disabled for a while. So I tried to recreate the email account with me as the owner, and to my surprise it worked. I then went back to Facebook and recovered "my" password. With the email and password, Facebook didn't let me log in because of my location. But I knew where this girl lived, so I found a proxy server and bam, I was in.
Not going to lie, I did look at some of her messages and pictures, but felt very bad after and decided to tell Facebook and Microsoft about it. This was Facebook's response. After a day or two of getting no answers from either company (before I got the answer from Facebook), I told the story to 2 or 3 tech reporters. They told me they wrote to Microsoft asking for comment, but never got any answer. A week later I tried to recreate another "dead" account on Hotmail and I couldn't. I don't remember exactly what they did, but I just couldn't create the email, so I figured they had fixed it.
1 - http://free-proxy.cz/en/proxylist/country/BR/all/ping/all
2 - https://imgur.com/a/kFMlO6d
Edit - On a more serious note, I think they tied account creation to a cellphone number instead of an email.
$34,000 - Multiple eSign environments vulnerable to system memory leaks containing secrets and customer data due to public-facing actuator heapdump, env, and trace
And if you are a bug bounty hunter, some of the simplest things can lead to the best ROI. I'm actually surprised something this basic was not already found and reported, but credit goes to their recon efforts for determining where to look.
In Spring Boot 2.0 and newer, the actuator module only exposes the "info" and "health" web endpoints by default. The default configuration does expose more endpoints via JMX, though. Also, if your project includes the Spring Security module, actuator endpoints are secured by default.
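The recon that surfaces this class of bug isn't exotic either. A minimal sketch of probing for exposed actuator endpoints might look like this; the endpoint list and base URL are illustrative, and something like this should only ever be run against systems you're authorized to test:

```python
import urllib.request

# Actuator endpoints that frequently leak secrets when exposed publicly.
SENSITIVE = ["heapdump", "env", "trace", "mappings", "configprops"]

def candidate_urls(base):
    """Build probe URLs for both Spring Boot 1.x (/{name}) and 2.x (/actuator/{name}) layouts."""
    urls = []
    for name in SENSITIVE:
        urls.append(f"{base}/{name}")           # Boot 1.x default layout
        urls.append(f"{base}/actuator/{name}")  # Boot 2.x default layout
    return urls

def probe(base):
    """Return the candidate URLs that answer with HTTP 200, i.e. are trivially exposed."""
    exposed = []
    for url in candidate_urls(base):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    exposed.append(url)
        except Exception:
            pass  # 401/403/404/timeout all mean "not trivially exposed"
    return exposed

# probe("https://esign.example.com")  # hypothetical, authorized target only
```

A hit on `/actuator/heapdump` is exactly the kind of finding described above: a full JVM memory dump, often containing session tokens and credentials.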
It's that any network of enough complexity run by an organization of enough complexity is actually impossible to secure.
Data security is hard. I would rather trust someone who has shown good capability there, invests a lot in it, and has more to lose. That's why, for the foreseeable future, I would rather use Gmail and Google Drive over their alternatives. It's also why I prefer to use Amazon instead of individual storefronts that ask for contact and payment details.
However, I am stunned that they did not earn past six figures. I was a bit primed by the $100,000 bug earnings.
The upside is that nobody needs atomic bombs to shut down a whole country anymore ;-)
What a joke. That's an hourly rate of $20 (assuming 5 researchers working for 3 months). Just enough to buy a MacBook to do the research in the first place.
So the $20-30 per hour figure is closer, before taxes, with zero benefits like health, dental, or pension plans.
They themselves say: bounty hunting is not a job