Hacker News
We Hacked Apple for 3 Months (samcurry.net)
1454 points by samwcyo 14 days ago | 308 comments



I think what might not be immediately obvious to people outside of the bug bounty scene is that Sam Curry, Brett Buerhaus, Ben Sadeghipour, Samuel Erb, and Tanner Barnes represent some of the best bug bounty hunters out there, which is definitely one of the reasons they absolutely pwnd Apple here.

I would be genuinely shocked if Apple doesn't end up paying out much more for all the bugs found. Frankly, it would be genuinely concerning if they didn't acknowledge the severity of the bugs and the time invested by this particularly skilled team.

To Sam and the others involved: fantastic job and an amazing write-up. 10/10


"Within the article I'd mentioned that Apple had not yet paid for all of the vulnerabilities. Right after publishing it, they went ahead and paid for 28 more of the issues making the running total $288,500" https://twitter.com/samwcyo/status/1314310787243167744

Even if they paid a full $5.5 million, at $100k per issue, it would seem reasonable given the breadth of the findings and the potential losses prevented. The warehouse access alone could have caused far more damage, while some of the smaller vulnerabilities are clearly not worth that much.

EDIT: see this comment thread for more info on the economics by people who know what they're talking about: https://news.ycombinator.com/item?id=24719656


In this instance, yes, but bounty programs need to be sustainable, so parameters are set up front. Folks can choose to participate or not. If they don't like the offering, they can find something else.

Bounty programs are in place so that bad actors are not the only ones on the lookout for bugs. If experts get paid pennies for finding enormous security vulnerabilities, what's stopping them from selling them to actual bad actors for a potentially much greater cut? I can imagine that someone would be willing to pay far more than $5M to gain access to Apple warehouses.

Nothing but their ethics.

But why would an expert spend any of their valuable time outside of work looking for bugs if they didn't like the terms of the program? That's irrational behavior.

And why would someone who's willing to sell bugs to criminals bother with a site that's already been picked over by bug bounty researchers? The vast majority of companies in operation today have no such program and would likely be much more fruitful.

And lastly how would paying more for bugs prevent someone from also selling it to criminals?


> Nothing but their ethics.

That's not something a company the size of Apple can count on.

> And why would someone who's willing to sell bugs to criminals bother with a site that's already been picked over by bug bounty researchers?

Because it's Apple, it's one of the biggest companies on earth. iPhone jailbreak vulnerabilities alone fetch millions on the black market.

If you know the bug bounty program doesn't pay much you can expect only the trivial things to have been found, and if you're very skilled you know you still have a good chance of finding things to sell.

> And lastly how would paying more for bugs prevent someone from also selling it to criminals?

It would keep more honest people interested in your bug bounty program instead of doing something else.


>Because it's Apple, it's one of the biggest companies on earth.

Yes, and do you think you have a better understanding of the situation than the security and risk management folks that work there? There's absolutely nothing that has been said in this thread that they aren't keenly aware of. There are people in Cupertino that are going to wake up in a few hours, grab some coffee and pore over the threat intel reports from last night. They know who is buying and for how much and have a long detailed analysis of what happened with previous jailbreaks. There is another team of people dedicated to staffing the bounty program, rifling through stacks of reports with a signal to noise ratio that's approaching the Shannon limit, triaging findings, tracking down product and engineering teams to get a quick response so they can get back to the researcher in a timely fashion, handling rejections for out of scope and dupes.

These people are in it up to their eyeballs every day. They live it, breathe it, love it, and they'll move the needle when moving the needle makes sense. Until then, anyone who participates in the bounty program and then cries foul when payouts are in line with the posted max and not with what could be had on the black market is going to get zero sympathy from me.


> Yes, and do you think you have a better understanding of the situation than the security and risk management folks that work there?

You could have said the same to this team, "do you think you understand cyber-security better than Apple's experts?"


Hey, you should start a buck bounty program then. Provide companies with financial advice and get a percent of what they save!

An iPhone jailbreak would also be of interest to official parties.

Legal risk. You can make a quick safe buck from selling the fix to Apple or you can risk some trouble for selling it to criminals.

Maybe there's demand for agents/managers for less famous/media-savvy bug hunters, quintupling the payout would easily pay for the agent's fee in a case like this.

> They went ahead and paid for 28 more of the issues making the running total $288,500

That's barely the yearly cost of one generic software engineer at Apple.

I was expecting multiple millions in payment given the severity and quantity of vulnerabilities found. State actors could easily 10x that amount legally through gov contractors.


I’ve interacted with some of them directly when I worked on a bounty program. Definitely some of the best in the business (and actually pleasant to work with).

Apple only paid them $52k? Apple is a trillion dollar company. These hackers saved them easily millions of dollars in expenses.

China or North Korea could easily allocate a much larger team to something like this and disrupt Apple (not for bug bounties). Although, China and North Korea dedicate their resources to financial fraud where there is real money to be had.

Apple is a tightwad joke. If they laid out a scope of work for a professional pen testing company that included pen testing their 17.0.0.0/8 range then that contract would easily have been in the hundreds of thousands.

I’m sure foreign adversaries will take notice now. Apple’s cybersecurity posture has always been very weak. It’s known they don’t dedicate any resources to it.


Late reply: They just paid for 28 more issues, running total is now $288,500.

https://twitter.com/samwcyo/status/1314310787243167744


$288k and Apple has only paid them for roughly half of the vulnerabilities. They expect the payout to exceed $500k.

Well worth it for Apple and a decent payday for 3 months of spelunking.


Gross pay (not including employee benefits and before payroll tax deduction), split among a team of 5 people, unclear if they were working on this one project full time, and amortized over other months with less remuneration. It may not be better amortized pay than a regular software job.

Especially considering that the authors are some of the best bug bounty hunters in the world. $500 an hour is a fairly normal rate for a top security consultant, as far as I'm aware.

Absolutely wonderful news! Congrats to everyone involved.

Kudos to Apple for following through.

I hope this sets the standard for companies going forward.


Apple are already behind the standard and times on this. Apple aren’t leading here, they are reluctantly catching up and doing the minimum they need.

From the article: "However, it appears that Apple does payments in batches and will likely pay for more of the issues in the following months."

But that is the thing: their official bug bounty program scope didn't include most of these exploits, so any payments/awards would have to be made outside of the traditional system and thus probably need more senior approval and time. They knew they might not get paid for them but took the risk anyway. I have a feeling they will end up getting at least a hundred thousand dollars total.

If one bounty hunter got $100k for a single exploit, these guys should’ve gotten millions...

Not to discredit the great work all these people did but not all exploits are created equally. Generally speaking the bug bounty amount is directly correlated to the blast radius of the exploit.

Not just saved Apple, but Apple users too. Wasn’t the “fappening” rooted in hacked iCloud accounts with weak credentials? Imagine what juicy political targets are out there using iPhones syncing with iCloud.

iCloud wasn't cracked. They used social engineering to gain access to the accounts.

It wasn't that; they did do password cracking. If I recall correctly, iCloud itself had a limit on password attempts through the site, but there was a way to attempt logins through the API that didn't have that limit, which let them target those users for brute-force attacks.
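To illustrate the gap being described (all names and limits here are hypothetical, not Apple's actual implementation): a login path with a per-account throttle blocks brute force, while an API path missing the same check can be hammered indefinitely. A minimal sketch of such a throttle:

```python
import time
from collections import defaultdict, deque

class LoginThrottle:
    """Hypothetical sliding-window throttle: block an account after too many
    recent failed logins. Limits are invented for illustration."""

    def __init__(self, max_attempts=5, window_seconds=300):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.failures = defaultdict(deque)  # account -> timestamps of recent failures

    def allow(self, account, now=None):
        now = time.monotonic() if now is None else now
        q = self.failures[account]
        # Drop failures that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) < self.max_attempts

    def record_failure(self, account, now=None):
        self.failures[account].append(time.monotonic() if now is None else now)
```

An endpoint that consults `allow()` before checking the password locks a brute-forcer out after a handful of guesses; the point of the parent comment is that one entry point reportedly had this kind of check and another did not.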

I think this was speculation at the time, but it later came out that they were phishing attacks which got access to iCloud accounts, and once you have that you have the person's device backups.

You're right that iCloud itself wasn't "cracked", per se, but the XSS exploit is (was) incredibly dangerous and did not require any social engineering that I can see.

https://samcurry.net/hacking-apple/#vuln3
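For context on why stored XSS in webmail is so dangerous: the standard defense is to treat untrusted message content strictly as data and escape it before it is ever interpolated into markup. A minimal, hypothetical sketch (not Apple's actual code; the function name and markup are invented):

```python
import html

def render_subject_cell(subject: str) -> str:
    # Escape untrusted email content before embedding it in markup, so a
    # subject like <script>...</script> renders as inert text instead of
    # executing in the victim's mailbox.
    return '<td class="subject">%s</td>' % html.escape(subject, quote=True)
```

If any code path skips this step (or an attribute/style context needs different escaping that isn't applied), an attacker-controlled email can run script with the victim's session, which is what makes mailbox XSS an exfiltration primitive.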


Is this linked exploit the same as the Fappening exploit?

> China or North Korea could easily allocate a much larger team to something like this

Chances are they already did, and are just sitting on the vulnerabilities.


How is North Korea going to recruit top cybersecurity specialists?

It's safer not to underestimate them. Every country has smart, competent people in it, no matter how poor or how oppressive their government. A cybersecurity / black hat program is much cheaper than nuclear or ballistic missile programs; and mostly just a matter of human resources and education. And North Korea can set up very strong rewards and incentives for those who do well. Put their whole family up in Pyongyang luxury apartments, etc.

They aren't competing with FAANG salaries, except maybe for a few outside experts that they might bring in to kick off a program.


North Korea does have a well-established cyberwarfare group, https://en.wikipedia.org/wiki/Lazarus_Group. The WannaCry ransomware attack has been attributed to this group.

It's a country of over 50 million people, all of whom are beholden to their government.

They have all the top cybersecurity specialists they could ever need.


The population of North Korea is only 25M; 43% of them are malnourished, and only a small percentage have access to the internet.

https://globalnews.ca/news/5029484/north-korea-malnutrition-...

The number of security researchers isn't a function of population size; it's a function of population size * fraction with the propensity to develop the requisite skill * fraction who go to work in the profession.

Shockingly, adding millions more starving people who have never seen a computer doesn't get you many more cybersecurity specialists.


It's also one of nine nuclear powers and one of ten to have developed space launch capability. It also scores near the top of the international math olympiad regularly. What makes you certain North Korea hasn't similarly invested in developing security researchers?

Apple's net income is bigger than their GDP, and most of their people are impoverished, malnourished, and poorly educated.

The upper caste from which all their "talent" is drawn is a small fraction of the total population, mostly composed of descendants of the lower-class peasants and workers who supported the rise of the current regime. What they supported was not the establishment of such a system but rather an inversion of the prior order. It's an impoverished field from which little of value grows. To be clear, there is nothing inferior about North Koreans by nature; it's that the regime wastes the talent it does get.

Einsteins are, in theory, as apt to be born into the less privileged classes, but there is only a 50/50 chance of getting enough food to be healthy, let alone a chance at intellectual development.


And yet they are able to assemble a team of math olympiad contestants that consistently places in the top 10 in every year that it competes.

They just need to send a couple dozen of their brightest talent. You know their engineers & scientists train in Russia and China, right? Which from my last recollection, invests heavily in offensive cyber warfare.

By your same logic, they would not have any Olympic competitors, let alone medalists.


You get one bright, talented individual by offering a thousand people ample nutrition and educational opportunities; historically with computer tech, that means giving people the opportunity to learn and play with it from an early age.

Most nations educate millions to tens of millions in order to get their doctors, scientists, software developers, and leaders. A state that educates merely thousands, the children of a new elite drawn primarily from the lowest echelons of society a century ago, is poorly positioned to be the best in any field.


Ask Sony

> How is North Korea going to recruit top cybersecurity specialists?

Just like with Russia or China, by bribing cybersecurity specialists.


Apple has a trillion because they found a way to exploit gullibility and greed. I am surprised they paid at all.

I sorted exploits by date, it made a fun short headline summary of how productive they were. Short answer: very.

I know it’s hard for senior management to want to really commit to bug bounty programs like this because it feels embarrassing and vulnerable, but posts like this should be sent around the boardroom when the topic comes up: Apple rented an AMAZING security team here.

Sam, can you disclose what you got paid for all this?


End of the post it says 51k so far. I'd expect the price to go up a LOT more, because otherwise the sane (monetary) advice becomes "report some vulnerabilities to apple, and then keep finding them and sell them to third parties".


Yeah, that is only for 4 vulnerabilities out of 55. And 3 of them were only "High." They still have 10 (!!) more critical vulnerabilities they may receive payment on.

Also, they state in the article: "However, it appears that Apple does payments in batches and will likely pay for more of the issues in the following months."


Update: They just paid for 28 more issues, running total is now $288,500.

https://twitter.com/samwcyo/status/1314310787243167744


There are defense contractors that do exactly this. Governments pay more than Apple will ever pay, so if you are in it for the money (and don't care about the ethical repercussions), selling the discovered exploits to governments is the way to go.

That's only true if you have no way to be put in (financial) risk by the vulnerability you're not disclosing to Apple.

If you're a security researcher, you probably know how to cover your tracks.

Do you? The skills involved in VR and exploit dev don't necessarily mean you're good at opsec.

True, plus "opsec life sucks" so most of us don't do it.

Where do security researchers sell their investigation on the black market? Links?

The following companies buy 0-days. Each has a slightly different business model:

Zerodium - http://zerodium.com/
Azimuth Security - https://www.azimuthsecurity.com/
NSO Group - https://www.nsogroup.com/
ZDI - https://www.zerodayinitiative.com/
SSD - https://ssd-disclosure.com/


Zerodium is probably the closest thing to a “legitimate” acquirer of exploits, those which aren’t being disclosed to the vendor and then fixed.

any company that builds software for government tracking like NSO Group will happily pay, and have very deep pockets

Yes, but the risk if you’re caught is huge.

not if you're selling them to -your- government. Then you're a patriot but also a rich patriot.

Yeah, I imagine you can make a lot more money selling them to state actors than to Apple.

It seems like this cooperative approach was very effective. I assume rubber-duck debugging and having multiple minds attacking the problem from multiple directions greatly improves the efficiency.

Jesus, that prebaked password on the Jive platform was really bad. Especially as one could ultimately access nearly the entirety of Apple's internal network from that.

Makes me wonder, if these guys could do it, how many Chinese industrial espionage units have?


> Makes me wonder, if these guys could do it, how many Chinese industrial espionage units have?

And Russia, and Iran, and so on... It seems safe to assume someone else out there found at least one of these and got in to the Apple internal network and has been quietly doing their job, whatever it may be.


"Our proof of concept for this report was demonstrating we could read and access Apple’s internal maven repository which contained the source code for what appeared to be hundreds of different applications, iOS, and macOS."

This itself is massive. How many 0-days could emerge from something like that?!


Great, hard-shell/soft-centre.

If anyone asks, why you should go to all the effort to secure the software in your internal-network, that's why.


I work at a giant, famous, multi-billion-dollar company where all the internal stuff is gated behind permission requirements, training requirements, etc. It is absolutely HORRIBLE for productivity here. Every single kind of information you need to do your work is hidden behind someone's wall. I often lose entire weeks of productivity just trying to find out who owns a certain piece of information or which permission I need to request in order to read a link. There was a time I literally had to wait a whole month because the person was on vacation and their manager didn't know how to authorize me into the system. All I needed was a binary file they provided.

Even worse: every team thinks the thing they do is absolutely the most important thing in the world, so they hide it even more. They create empires around the information they control and explicitly force you out. So instead of just reading their freaking source code or documentation, you have to get permission to open a ticket in their system; then you open it, one person triages your ticket, another forwards it, another creates an internal Jira about it, a PM prioritizes it, then a dev gathers the information and passes it to the Senior Information Proxy employee, who instructs the intern to finally reply in your ticket. And of course your original message was misunderstood, so the thing they gave you is useless. All you needed was access to the damn thing, but they built an empire around it and now you have to fight a war of unproductivity.


To add insult to injury, your account of the state of things gives me no reason to think that their internal systems aren't rife with similar vulnerabilities, so rather like DRM only making life hard for paying customers, I suspect that these measures only make access difficult for honest employees.

It's funny, because the received wisdom in the industry is that Apple behaves exactly this way. I guess not in their production environment!

If you are unable to secure your perimeter, what would lead you to believe that you had better security of your interior?

The point is, at a certain scale you _are_ unable to secure your perimeter. Are you surprised that a handful of the likely thousands of external-facing applications can be hacked?

Especially if most of your colleagues never have to bother with security because they think they are safe behind the perimeter, how can you expect a secure perimeter? With so many applications, there is bound to be one with a hole.

The argument is more on the meta-level. Most of the vulnerabilities shown are implementation issues; hundreds of people have their hands in there. But being able to gain more privileges because you have managed to compromise a service, that is a design issue. And there, only a few should have a say.

Expect failure, limit the impact.


That's certainly one view of things. The other view is taken by the beyondcorp/zero-trust model. But the lesson I take from this article (and my own experience) is that if you allow commercial off-the-shelf and open-source software into your network the end result will always be an insecure mess. If you absolutely must adopt off-the-shelf software the only safe way to do it is to put a proxy in front of it that's completely integrated with your authn/authz systems such that the native protocol of the third-party system is completely hidden and inaccessible.

The Google model is frequently derided on HN as "not invented here" but at least you can say that they aren't getting rooted via some kind of toxic waste like Jive forums.


If I understand correctly, Google’s model is basically to roll their own authentication frontend for any service they run. Now, this is likely better than what some off-the-shelf open source library might be using (which might actually have been fine if you had configured it correctly), and I have nothing against running further authentication before giving access to your things, but calling this the “only safe way” to do something is not really true at all. There are a number of companies that run without this model and do fairly well, and Google endpoints are occasionally hit by researchers. So it’s good on Google that they have a policy up for this and it mostly seems to work for them, but it’s not the only solution like you’re suggesting.

I think the main lesson is just to not tolerate third-party protocols. Having a uniform RPC interface with integrated authentication, authorization, and delegation makes it much easier to get your security situation under control. If you're out there with your MongoDB password in a secrets vault, you're already in an unsustainable situation.
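One way to picture that uniform, auth-integrated front door (everything here is an invented stand-in, not any company's real system): a gate that consults your own identity layer before any request reaches the third-party service's native protocol.

```python
# Hypothetical sketch: every request passes the company's own authn check
# before being forwarded, so the off-the-shelf app behind the proxy never
# sees unauthenticated traffic. Token store and names are invented.
VALID_TOKENS = {"alice-token": "alice"}  # stand-in for a real identity provider

def gate(headers, forward):
    """Forward the request upstream only if our own auth layer approves it."""
    header = headers.get("Authorization", "")
    token = header[len("Bearer "):] if header.startswith("Bearer ") else ""
    user = VALID_TOKENS.get(token)
    if user is None:
        # Reject before the third-party protocol is ever reached.
        return (401, "unauthorized")
    return forward(user)
```

In a real deployment `forward` would be an HTTP/RPC call to the wrapped service and the token check would hit an SSO system, but the design point is the same: the native protocol of the third-party software is only reachable through this one authenticated path.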

Less surface area?

Any of those countries could just get someone hired at Apple for that


Why not both so as to protect your assets.


And sends a request to an external C&C server... once a year-ish, just in case the ingress route is closed.


And the US and the UK and the EU...

I had the same thought. Really basic front-end web vulnerabilities right in the HTML response. One can only speculate on the state of Apple's web hosts' IP filtering, rate limiting, DDoS protection, etc.


I wonder how many agents they pissed off by exposing the holes and blocking their access...

As an iPhone user (and Mac for work), of late I've often wondered whether Apple is really just all about pretty-looking things, overhyped launches, and marketing campaigns, giving specific (usually trademarked) names to features that have been commonplace on other platforms for years, with a large portion of their user base also being their staunch fans. Do they just glue things together under the hood this way and that?

> Do they just glue things together

In my experience all enterprises I have experience with do this. There’s just something about the whole process that makes everyone worry about security later.


Am I being hyperbolic, or is this an absolutely enormous compromise of trust in Apple? XSS in iCloud email allowing for data exfiltration of emails, pictures, videos??? That's absolutely insane. It just goes to show how vulnerable we all are to exploits like this, especially if you're a notable person of interest.


Software is made by people and people are not perfect.

The bigger the project the more moving pieces there are and the more likelihood of flaws.

My experience in bug bounty programs has taught me that if you do start one, you need to be serious about it, and when a report comes in that is actually serious, you need to act on it quickly. And it seems Apple is doing that.

What would be more concerning is if they weren't acting on fixing issues quickly. Some take longer, that's to be expected depending on the problem, but what has been reported so far has been fixed in a timely manner.


I agree. The bigger lesson here, imo, is that companies need to take the security of their internal tools seriously even if they're only used by a handful of people, and not just focus on their front-facing tools. A forum that everyone had access to but that likely didn't get much traffic was used to gain access to internal networks. That needs to be taken seriously.

This happens when a company chooses to amass wealth instead of investing in security. Their ads about privacy now sound laughable.

Oh please... EVERY SINGLE piece of software has some security issue. Apple is no exception. Assuming they should be perfect is just petty BS and short-sighted.

Also, keep in mind that security and privacy, while related, are not the same things.

You can have privacy (i.e. minimal data gathering) and poor security. You can also have poor privacy but amazing security.

Not sure why I'm feeding the troll here but whatever.


Do you mean issues like those found here? It's pretty embarrassing and negligent.

I feel like you’ve never worked at a large company before.

None of this is abnormal. And it seems based on this article that Apple responded quickly and fixed the reported issues.


Imagine how many thousands of exploits Apple has found and fixed internally, that weren’t found by outside researchers.

Bug bounty programs aren’t a replacement for internal security, and they have the potential to be very expensive compared to paying someone a salary.

Is it an enormous compromise of trust? Dunno. With an average fix time of a single business day, I’m inclined towards “no”: that’s an awfully rapid response for incompetence to deliver.


First, practically nobody uses iCloud email. I'm honestly surprised it still exists. You can confirm the conventional wisdom with a few Google searches: iCloud Mail isn't a serious contender among email platforms.

Second, you'd be a little naive if you thought Google Mail has never had XSS vulnerabilities.


The people who specifically choose to use iCloud email are far more likely to care about an XSS than the average Gmail user.

People who have mac.com and me.com email addresses (which are now part of iCloud email) are many and have the same variation in security posture as any other cloud email user.

We'd be talking about people who have @me.com email addresses and use the web application and not Mail.app on macOS or iOS. I doubt there's that many.

Please elaborate.

> I had even tried emailing the company who provided the software asking how you were supposed to form these API calls, but they wouldn't respond to my email because I didn't have a subscription to the service.

We talk about the ethical responsibility (and common-sense practicality) of companies cooperating with white-hats who have found vulnerabilities in their systems.

But how does HN feel about this policy as it applies to third-party ISVs contacted for knowledge relevant to a vulnerability exploit?

Should there ideally be a framework in place where the company running the Bug Bounty program puts white-hats in contact with its upstream ISVs’ engineers; or perhaps even treats the white-hat as an employee in terms of eligibility to receive support from the ISV on the Bug Bounty hoster’s tab?

Or, to flip that around, maybe the ISVs themselves should be willing to help the white-hat for ethical/practical reasons as well (for the exploit might, in the end, be as much their problem as it is their customer’s.) But in that case, should there be some sort of best-practice approach to authenticating that J. Random Hacker who emailed a question to you, is actually a white-hat—e.g. by validating that they’re registered with the bug-bounty program of your client? Or does it not even matter, and you should just answer even a black-hat hacker’s questions about your APIs, since “vulnerability research is vulnerability research and has a long-term result of hardening the ecosystem either way”, and then let the cards fall where they may?


“is actually a white-hat—e.g. by validating that they’re registered with the bug-bounty program of your client?”

I don’t think black-hats would feel ethical remorse from registering with a bug-bounty program in order to get access to information.

“Or does it not even matter, and you should just answer even a black-hat hacker’s questions about your APIs”

I can see the headline: “Foo, inc. helped hackers break into their systems”.


As an ISV, why would I have any reason to help anyone who isn't paying me?

Thanks, great point. At first I was thinking that it was the ISV playing "security by obscurity" but it's probably much more simple than that: labor costs!

If you don't have a way to share documentation for your API with anyone at a cost of about $0.00, then you're signaling that your development process is a bit broken.

It's probably just a case of them emailing support@ without a support contract and not getting very far. I don't think that's very indicative of much, especially for "enterprise software".

Sure, but there's no reason for something like API documentation to require emailing support@.

Let me cite a specific recent example: I was tasked with building an application that integrated document e-signatures. The spec called for Docusign specifically, so I looked at their documentation. What I could find of it was written unclearly and much of it was hidden behind a developer account login. Getting a developer account was "free", as long as my time was worth $0. (You had to fill out a form of some kind, I don't remember the specifics anymore.)

So then I looked at HelloSign, a competitor. Their documentation was public, freely available, and beautiful (https://app.hellosign.com/api/documentation). It included specific examples and walkthroughs. This says things to me like, "we care about the developer experience".

I practically begged the customer to use HelloSign instead. I expected, from experience, that Docusign integration was going to suck, and HelloSign integration would suck a lot less. The customer said, "the spec already says Docusign, so we can't switch".

And the Docusign integration did suck. It was terrible. Lots of it was incomplete. Their vendor library was a godawful mess, built from some automated tool that converts an API into a bad class library. Their support was basically useless even after a contract had been negotiated and signed. The client ended up spending an extra ten grand or so and at least a couple weeks worth of delays just on Docusign-related issues.

This is a pattern that reoccurs often enough that experienced developers use documentation as a proxy for the quality of the service.


Your example is comparing apples to oranges. We're talking about software that doesn't even have "contact us for pricing" on the website because nobody that needs it even asks what it costs.

Sorry, I don't talk to enterprise ISVs unless somebody's paying my billable rate.

Developers will not be shopping around, making the decision to buy a ‘global manufacturing suite’ like DELMIA Apriso, whatever that is. If they are, they will have the documentation made available for them.

July 6 - August 6 - September 6 -- that's 2 months elapsed, not three.

Five people working for 2 months is 10 person-months. Apple paid them just under $52,000, none of which was guaranteed. They had to pay whatever taxes are appropriate for their jurisdictions.

I'd say Apple got an amazing bargain.


Exactly.

The amount of effort put into finding multiple critical and high vulnerabilities in a $1TN+ company, with the result being $51k + taxes to possibly share between 5 hackers for the 4 qualifying bugs, sounds like Apple took them for a cheap ride through their campus.

Compared to 1 hacker, 1 month, JWT signature check failure = 100k from Apple [0]:

[0] https://bhavukjain.com/blog/2020/05/30/zeroday-signin-with-a...


The 4 exploits they got paid for don't seem like the biggest ones though.

I would expect Apple to pay $500k - $1M for this session in the end, and it would be in the best interest of all parties if this happened. Apple would encourage responsible disclosure (and attract more white-hat bug hunters) this way. The number of vulnerabilities found is proof by itself that teamwork pays off, if the team is strong. Also, this is a drop in the bucket for Apple. It would probably cost them much more to have had them on the payroll for the same amount of time.


Where did you come up with that number? $500k is much more than a sitewide external app pentest of comparable scope would cost Apple, by an integer multiple. The bugs here are good, but they're not "bug bounty black swan" good; they're what you'd expect from a sitewide pentest.

I agree Apple got a great deal here (that's the point of bounties, and anyone who thinks they're a bad deal for strong researchers is... right). But I'm always going to point out that HN has weird misconceptions about the economics of this stuff.


That second bug they describe would have allowed them to mess with inventory in a warehouse. They could easily have "disappeared" millions of dollars of products. Some of the other bugs would have required Apple to disclose a PII leak, which could do tens of millions of dollars of damage to the company's valuation.

You'll find, if you talk to people that do this work professionally, that bugs where you can tell yourself a story about the millions of dollars you could make are not uncommon, and that the rack rate for generating those bugs doesn't scale with their hypothetical value. I've done multiple projects for FIX gateways at exchanges. Those are fun stories to tell yourself! But those projects weren't even especially lucrative.

> where you can tell yourself a story about the millions of dollars you could make

It’s not about the dollars you could make. That’s probably pretty hard to get away with.

But the damage you can do? That’s a whole different thing.


A pen test that took 6 months with 10 people would cost at least $2M using an extremely low $200/hr rate. People who are the best in the industry will charge significantly more.

> $500k is much more than a sitewide external app pentest of comparable scope would cost Apple, by an integer multiple.

By a team of four experienced security researchers working for multiple months?


Yes. I'd say "word to the wise", but I think very few people reading this thread buy pentest time in such large blocks: past a month and you start getting into steep discounts.

(This was not several months of full time work, but rather several months of part time work; but I'm stipulating the former condition.)


Your comment got me thinking: Apple probably was already buying large blocks of pentest time, and the comments in the thread make it seem like these were obvious flaws. Is that right? If we assume Apple already had a contracted pentest firm, can you speculate as to why they didn't find these flaws?

I don't know what "obvious flaws" means. I know from like a dozen years of consulting experience, and from 10 years of vuln research prior to that, that putting a different set of eyes on a target tends to get you a different set of bugs. Finding vulnerabilities is as much an art as a science, which makes sense when you think about what hunting for software vulnerabilities actually entails. If you could do it deterministically, you'd be saying something big about computer science.

I think we're on firmer ground saying that there are ways of delivering software that foreclose on "obvious bugs". But when we talk about fundamentally changing the way we deliver software --- in secure-by-default development environments, on secure-by-default deployment platforms, with security as a primary functional goal prioritized over time-to-market --- we're actually into real money now, not just another $250k on pentesters.


someone is watching Schitt's Creek

Yes, because in pentesting services it is worth $180k USD, no more, no less. I mean, you can pay around $360k at London or SV rates and $180k at European rates for people with _similar_ skills.

Calc based on 3 months, 5 people, and a $600 USD/md rate.

EDIT, as I can't reply to tptacek below: no, those $2,000/day rates do not exist for projects the size of 300 MD like this one. In general they do not exist for big projects.

Yes, I agree, you have rates around $1,200 in high-cost countries, yet as I wrote earlier, you can get a similar/the same skill level at $600 USD/md if you're willing to work with people who aren't from HCCs.

As to the skills, I'm talking about this level: https://research.securitum.com/mutation-xss-via-mathml-mutat...
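Spelling out the calc above (assuming roughly 20 working days per month):

```python
# Reproducing the estimate above: 3 months x 5 people at $600/man-day,
# assuming ~20 working days per month.
people, months, working_days_per_month = 5, 3, 20
man_days = people * months * working_days_per_month   # 300 MD
print(man_days * 600)    # European-rate estimate:  180000
print(man_days * 1200)   # London/SV-rate estimate: 360000
```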


If "md" means "billable day", a $600 billable day is extremely low for this kind of work; that's closer to what people pay for network pentesting. $1500-$2000 is closer to the market (before discount, assuming senior but not principal level delivery).

When I worked as a 'consultant' (glorified contractor) .NET developer, the company charged > 90 Euro / 105 USD per hour for my time. So that would make my going rate > 800 USD / day. This is in a country where 50K / year is a decent developer salary.

I do not believe you can find pen testers worth their salt who would cost _less_ than an undistinguished developer. At least not ones who will do more than run some automated report over all your endpoints.


A classic false comparison: the four experienced security researchers working for multiple months covers 55 issues, not "that one issue".

If we're cherry-picking a single one, the associated involvement and timeframe drop dramatically, to something much closer to one or two people over the course of just a few days, tops.

That's something a pentesting team can absolutely achieve for far less than $500,000 over the course of a few days, too.


I’m unsure what your point is? I see dozens of different issues listed in the post, on different endpoints, all of which presumably took time to find. When they said they had a team of multiple people work for months on this, I am unsure why you think they haven’t spent their time as efficiently as “a pentesting team”. Actually, I’ll be stronger: looking through the list of things they discovered, it seems like they were absolutely churning out vulnerabilities for the entire period. A real team would have certainly cost much more than what they’ve currently been paid.

Issue count != time spent. I found about a dozen issues in a day once. And once, it took me three days to find one.

Always found at least a medium severity issue though.

Big engagements were typically a week, max. Usually one day of kickoff / getting “in the zone” for a project, three or so days of intensive testing, then the final day is usually writing reports (ugh, reports) all day.


Sounds about right. :-)

It's not. The median appsec engagement is ~4 person-weeks.

> A real team would have certainly cost much more than what they’ve currently been paid.

Yes, but that's a shared premise in this subthread already.


There are really two options here. One, Apple doesn't employ a pen-testing team currently, which would be nuts; or two, the pen-testing team couldn't find these bugs, or they'd already be found.

Apple has product security teams, an infra security team that covers a lot of this web attack surface, a large red team, researchers, and employs 3rd party firms to do sitewide tests.

Apple is also huge, and no huge company avoids vulnerabilities; staff as ambitiously as you want, but any disjoint group of competent testers attacking a new target is going to find a disjoint set of bugs.


Or option 3: apple is HUGE, in all respects: physical space, people with access, code base, etc. etc. and they already have plenty of teams in place, but a bug bounty program is a cheap supplemental. In which case paying out more for your bug bounty program than you pay your real teams would be really weird.

In that case, do you think that Apple is incompetent for not stumping up $250k or less for an external pentester to find these bugs? Plus maybe $100k more for an internal PM/point of contact for the pentester? Or do you think Apple handled it fine, the expected cost to the business of their security holes was less than $350k and they could just wait for them to come through the bug bounty program or for internal engineers to find them?

I think everything is complicated, and that it certainly isn't as simple as "Apple should have paid $250k to a pentesting firm to find these bugs", because you could keep paying $250k over and over again and keep finding different bugs of comparable severity.

And finding bugs of comparable severity isn't worth the $250k each time?

I can easily see the iCloud photo worming one making its way into mainstream media and causing millions of dollars of reputational damage.


It's not a question of whether any spot assessment is worth $250k (though: Apple can get a sitewide pentest from experts for substantially less than that). It's a question of whether paying that continuously is worth it, or whether that money can be spent more productively on something else.

For what it's worth, "reputational damage" has always been a kind of rhetorical escape hatch from arguments that have become too mired in facts.


10 person-months would be 10/12ths of a programmer's salary in Silicon Valley, which would probably be around $200k

> 10 person-months would be 10/12ths of a programmer's salary in Silicon Valley, which would probably be around $200k

To my mind, this team deserves a higher salary than typical Silicon Valley programmers for this work.


FWIW, typical SV programmers don't make anything like $200k/yr, so they are already above that range even if there aren't more payouts...

Agree; these people are incredibly skilled.

The conclusion is the same: they are underpaid by a factor of approximately 4+


Apple paid with public exposure. Anything Apple is a story of interest, which has a value especially in security circles where half the business is a pure PR exercise.

I’ve spent time in my career with a “big gorilla” employer whose business is very visible within its community. Companies will “pay” a lot to say “We solved FooCorp’s problems with <x>” or “FooCorp bought our <y>”

Lazy buyers assume that their peers have their shit together.


While it's a great marketing and reputation-building tool, it's still pretty poor to pay people in exposure; they could have taken each and every one of these exploits to the black market instead and probably earned a lot more money.

Alternatively, the Apriso exploit alone apparently would have allowed them to create fake manufacturing-level employees with fake payroll going to arbitrary bank-account targets; so an unethical attacker probably could have collected an unbounded amount of money just from that (since it likely wouldn’t have been caught until after the first event; and payroll would happen all at once, paying out to as many different accounts as the attacker wished.)

You're assuming the security professionals in question have a desire to commit a felony.

PR is good, but it won't keep the lights on. If you want them to return to work for you, pay them in actual currency. Apple's motive should be to encourage skilled hackers to come forward with exploits, i.e. make it worth their time, not drive them into the arms of a competitor.

It now says:

>Between the period of July 6th to October 6th

They may have corrected it. Whilst $52k is cheap for 15 person-months of labour, that's as of October 4th. So it's not unreasonable for that number to go up significantly over time. It'll be interesting to see what their final total is.

I don't know what Apple would value 15 person-months of highly skilled security consultants at, but I can't imagine it'd be below $200k, so Apple is likely still getting a good deal even if they pay out a lot more.


Something tells me the real money comes from future consulting contracts and that this PR will more than pay for itself. Just like how everyone on HN agrees writing a book isn't a great use of time besides what it allows you to put on your resume.

Just because Apple got an amazing bargain doesn't mean the payout for them won't be great as well.


One problem is that this puts downward pressure on others who demand fair compensation for their labor. Not everyone wants to play a long game of "maybe I'll get paid in the future from the 'experience'".

This is the professional equivalent of having interns do a bunch of real work and throwing them a pizza party.


Unfortunately, it doesn’t matter if other people don’t want to play the long game. This team does, they’re executing it well, and it will boost their careers as a result. Everything was done voluntarily by consenting professionals with the rules of the game outlined up front. Can’t really fault them for that.

People can consent to plenty of things that are allowable. That doesn't mean I can't fault the actions or dig deeper into whether or not they have other drawbacks (or even pros). Just because something is allowable doesn't mean it doesn't have other impacts.

But to be clear that doesn't mean I think they (or someone else) should not be allowed to make this choice. The possibility should definitely exist. I just don't think it's a good choice in terms of it being a norm.


This is very fair criticism for standard jobs like a regular software developer.

For a role like this, where the outsized skill of someone who is, and needs to be, elite should be rewarded with enormously outsized pay, I think this is a good model.


I think we're in agreement.

But I do find it wild that a group as decorated as this can't even get compensation commensurate with their skill and experience without having to rely on intangible future benefits.


Kind of a similar dilemma to strikebreaking

This is exactly why they’re writing a blog post about it.

This type of social proof, when executed well, is a boon to one’s career opportunities and credibility for getting future consulting jobs.

If they’re not hired by Apple, they’re going to move to the top of the list for infosec recruiters everywhere. Being able to point to this blog post makes them an easy sell relative to some other person with a generic resume.


They've only paid out on a small number of the reports so far though. There's time for more $ to roll in.


You are making the false assumption that these people are working full-time, which they are not. At least 3 of them have full-time jobs.

And a full-time job might not mean 40 hours per week during the pandemic. This is briefly mentioned in the article.

“ This was originally meant to be a side project that we'd work on every once in a while, but with all of the extra free time with the pandemic we each ended up putting a few hundred hours into it.”


It doesn't sound like they were working on this 8h a day of every day.

It is not about $/hour; it is about the knowledge they have gained, which will help Apple protect against bugs that would otherwise result in losses and other damage to consumers.

>It is not about $/hour

It is when the thread kicked off by measuring how much they are being paid / man month...


Everybody wins here. It's a bargain for Apple, because their ledgers deal with numbers that require the -illions suffixes, but it's ALSO $10k per person, which even after taxes is still a lot of money on top of their regular salary for anyone with bills to pay.

A good pentester typically costs $2,000 per day. Given the amount of work they did, the return feels like a slap in the face. It certainly won't encourage highly skilled people to hunt for security holes.

At least $10k per person since the total payouts are still coming in and haven't included any of the high or critical exploits so far.

As of October 6th, 2020, the vast majority of these findings have been fixed and credited.

3 months, as I think they added the extra month as part of the responsible disclosure and remediation phase.


> However, it appears that Apple does payments in batches and will likely pay for more of the issues in the following months.

Annoying, but possibly more to come.


Apple hasn't paid them for the largest exploits yet. That $52k will likely blow up to far more when Apple pays them, so their work will end up being much more lucrative. Apple also pays in batches, so they'll likely get a few more batches and some of those will be yuuuuuuuge!

These are not equivalent propositions. There is an incredible amount of value in working outside of a big corporation and its management hierarchy. It is a Dog and the Wolf situation. The food is always better under the collar.

If they really wanted money, they would have gone in a different direction.

With their abilities, they could still go in that direction.


If they actually did get paid so little, why did they do it? This seems like a terrible use of their time.


Qualifying people for highly paid info security positions is shockingly broken right now. No one who knows what they are doing cares about credentials you can get from a training program or school, but they also complain constantly about how hard it is to find and hire qualified people. The result is: there is a lot of salary out there for people who can figure out how to get it.

Developing exploits that are acknowledged by major targets--even if done freelance or as a hobby--is one of the few ways to gain lines on your resume that everyone in the security field will pay attention to.


It's the whole "you need to volunteer for a year before we'll hire you" hiring method typically seen in low paid positions in the arts, but this time for high paid infosec positions...


It's effectively a screen for skills that are very, very difficult to validate with credentials.

Yes, it also is effectively a screen for people with the spare resources to invest in a career without getting paid for it.


The art world might not be a bad comparison. In both security and art, established people with money are looking for new people who have the ability to make an impact.

But the established folks don't know in advance what exactly that will be... if they did, they'd already be paying someone to do it.

As a new person, there's no better way to demonstrate your ability to make an impact than to just do it.


I work at a company that has an infosec division and I don't know how we got so lucky with the people there. They're seriously legit low level kernel type programmers who seem to be able to reverse engineer anything given enough time and are able to seriously reason about what's going on in security. The types of people who speak at and headline at the largest security conferences, etc. Again, no idea how we got so lucky to have a great crew.

I'm not an infosec person myself. But my experience is that upwards of 80% of the ones I interact with who aren't like the people I mentioned above are just hangers on because they like the group or being associated with "infosec" because it sounds cool or something. Maybe it's because you don't need to be an engineer to regurgitate OWASP vulnerabilities and tell people to use password managers, but perhaps that's enough to, after you look around the room of infosec people, feel like you're an "infosec person." To be clear, that stuff is important, but not anywhere close to sufficient. So a lot of applications for our roles come from these people, who just sit on twitter all day and retweet the Taylor Swift security person, but they're totally not technical and have done nothing of note other than write compliance plans.

My hypothesis is that it's all this noise that makes hiring good infosec people difficult. If I'm hiring a kernel programmer or SRE I seem to get much more signal in my applications, but hire someone for security or infosec and there's too much noise from people like above.


Information security is just a super wide field. To pick a couple famous examples: what Google Project Zero does, and what the "Swift on Security" person does, have almost nothing to do with each other.

They both matter, though. Basic blocking and tackling at the IT level is important, especially to large old institutions. Apple is obviously an apex technology company, but they're also a 45 year old public corporation... I'm not surprised they've got some vulnerabilities lurking in their subdomains.

Patrolling DNS and 3rd party corporate applications is not usually what people think is sexy security work, though. Problems avoided are harder to sell than problems discovered or bad guys defeated.


One tip-off that you're not an infosec person is that you're comparing kernel REs to appsec people.

Oh totally, as I mentioned above I am not an infosec person and I hope I didn't imply otherwise (I did mention this specifically above). The above is just my impression from the outside but as someone who talks to and works with a lot of security/RE/infosec people.

That was just a really snarky way of saying that RE people and people who pay attention to OWASP are not comparables. Sorry, I should have just been direct about it.

Oh yeah, fair enough, point taken. :)

I'd wager they'll make substantially more money from the long tail of this blog post than from the bounties.


It is impossible to quantify what is a good use of their time without knowing them. Also, not everyone does things in pursuit of money.

I sell eggs and could easily ask $5 a dozen with the demand I have. Instead I only ask $4, and I have lots of clients I only charge $2, and some I just give eggs to when I have extra. These are people with no money or means. I don't expect to ever get anything from these people, but every once in a while, oh, my car breaks down, and guess who has the knowledge or tool I need: the guy I have been giving eggs.

I know the world will eat you up and take all you have, but I personally "invest" my time and effort into a few of the things I enjoy, even if the reward is low. These researchers now have an excellent start to a resume, which is always a good thing.

> I sell eggs

Is this like an actual side business you run? Can you tell us more?


Well, after covid started and the stores ran out of a lot of food, I decided to get some chickens again. I have had a maximum of 6 in the past but decided to increase the flock, since 6 birds is pretty much the same effort as 30 birds. I now have 33 in total, and at this point in their life each gets me one egg a day. They average something like 300+ eggs a year.

I have sold enough to buy an automatic egg washer and now mainly worry about selling enough to cover feed costs. I do it because chickens are very therapeutic and I find them relaxing to be around. I have young kids, so they are also learning the value of food and can eat all the eggs they want.

So I wouldn't really call it much of a business; it is more of a hobby from which I reap little reward other than my eggs and helping out a few others near me. I think if I ramped up to a few hundred birds I could make a bit of money, but at the small size it keeps me from getting overwhelmed with too much work and I can just share my harvest with those around me. I have learned that making money is nice, but I also get a great deal of reward from helping others in need.

He does use the word fun twice in the opening.


Bug bounties are not generally considered a good source of income. It's a way to hone your skills, gain experience, develop a bit of industry cachet and get paid a little in the process.

Many people undervalue their services.


If you wanted to get hired as a bank robber, how would you do it? Gotta rob a few banks first.


For one they did not only get the money but also the exposure that comes with anything Apple. A lot of people will probably want to hire these researchers.

Off-by-one error, the irony! Jokes aside, I agree that the payout seems shockingly low.

The thing is not all RCEs are the same. Apple paid the right amount here.

That blog post alone is worth more than $52K in the long run.

Getting an opportunity to write a case study could be worth a good discount!

I had expected that Apple might have paid a million to him.


"To be brief: Apple's infrastructure is massive. They own the entire 17.0.0.0/8 IP range, which includes 25,000 web servers with 10,000 of them under apple.com, another 7,000 unique domains, and to top it all off, their own TLD (dot apple)."

Wow. I would think it's just impossible to secure all that, and that's not even everything.


> I would think it's just impossible to secure all that

You can make sure your village has no spies; you cannot ensure the same for a city. I bet every large enough network is compromised to some degree.


This is the truth. I've worked in large organizations and it really is impossible organizationally to be fully secure. People come and go. Responsibilities change.

The comparison of the city is a really good one.


It's interesting that by owning and using that Class A block, Apple is making it easier to scan for its infrastructure. Moving to IPv6 and releasing the Class A would help them avoid the preliminary scanning that was performed.

There's also something to be said about migrating internal DNS to a subdomain of apple.com that is only visible internally.

Not solutions to security, but making things harder to scan makes it harder to find the vulnerabilities.


Why do they need 17.0.0.0/8 (16,777,216 addresses) if they only have 25000 webservers? #eattheIPrich

edit: fixed the number of addresses


Because back in the early days you could get one just by asking and they did?

The internet was just a research project to connect some universities, government sites, and a handful of companies. No one realized where it was going.

By the time it was clear the IPv4 address space would be exhausted it was also clear reclaiming those IP blocks (for which there is no legal basis) would merely temporarily delay the exhaustion - likely by a year or two at best.


This [0] is a really interesting page.

Companies that have an entire /8 block are AT&T, Apple, Ford, Cogent, Prudential Financial, USP and Comcast.

For some reason the US Department of Defense has 13 /8 blocks.

All others belong to regional internet registries (AFRINIC, ARIN, APNIC, LACNIC, RIPE NNC).

I really don't know why anyone other than the registries needs/deserves/got /8 blocks.

[0]: https://en.wikipedia.org/wiki/List_of_assigned_/8_IPv4_addre...


Wow Prudential and Ford (if USP is supposed to be UPS, that too) are the odd ducks. At least the others have the internet as a core competency.

My guess as to the answer of “why” is power and leverage. It’s the same as nations claiming physical land. “Maybe we’ll need it, maybe we won’t. But either way, now it’s ours to decide.” Writing that out, do they own those? Can someone take those back?


USP is supposed to be USPS i.e. the U.S. Postal Service. It's still an odd one for sure.

> For some reason the US Department of Defense has 13 /8 blocks.

I did a bit of digging and looks like they're looking to sell:

https://datacentrereview.com/content-library/opinion/1522-th...


They probably don't, but there are weirder cases. Ford was allocated 19.0.0.0/8, Prudential 48.0.0.0/8, USPS 56.0.0.0/8

You can use those IPs for something other than webservers.

But yeah, that's a bit much for one company. I'll give hosting providers a pass on owning a million IPs, because those are for lending out to customers.


a /8 is actually 16M addresses, not 1M.
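As a quick sanity check of the CIDR arithmetic, an IPv4 /N block has 2^(32-N) addresses:

```python
# Size of an IPv4 CIDR block: a /N prefix leaves 32 - N host bits.
def block_size(prefix_len: int) -> int:
    return 2 ** (32 - prefix_len)

print(f"{block_size(8):,}")   # 16,777,216 addresses in a /8
print(f"{block_size(12):,}")  # 1,048,576 addresses in a /12 (~1M)
```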

They don’t, they just got in early.

#letthemeatipv6

7,000 unique domains seems insane, what could they possibly need all of those for? Unless that includes subdomains, I guess.

7,000 does seem REALLY high, but I can imagine them needing a domain for every possible spelling of Apple. Maybe applesucks as well; appl3, 8ppl3 and so on. Anything close to apple. Same goes for icloud and anything else. I guess you get to 1k pretty quickly just covering typo squatters. They must have a team of people just to manage domain names!

It's probably semi-random domain names. Like "auth-8e3fe.icloud.apple.com" type stuff.

That's probably right. A quick internet search turns up domains like applecoronavirus.com and similar, as well as this court case [1] where they acquired a bunch of iPod-related names.

I suspect they are only parking those names after recovering them or buying them preemptively. Domain names are cheap, so why not. I don't think that's any argument for the possession of the /8 though.

I remember Google had ownership of duck.com until recently, so they probably participate in the wholesale acquisition of random domains as well [2].

[1]: https://techcrunch.com/2010/01/07/apple-domain-names/ [2]: https://www.theverge.com/2018/12/12/18137369/duckduckgo-duck...


All parked domains could point to the same IP. A single web server can distinguish which domain it's being contacted for, using the HTTP Host header for example, and serve different content (probably all 301 redirects, but to relevant other websites of Apple's).

In this case it looks like a lot of them don't have A records at all.
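The Host-header dispatch described above can be sketched in a few lines; the parked domain names and redirect targets here are entirely made up for illustration:

```python
# Minimal sketch of name-based virtual hosting for parked domains:
# one server answers for many names and 301-redirects based on the
# Host header. Domains and targets below are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

REDIRECTS = {
    "appl3.example": "https://www.apple.com/",
    "ipodnano.example": "https://www.apple.com/ipod/",
}
DEFAULT_TARGET = "https://www.apple.com/"

class ParkedDomainHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Strip any :port suffix from the Host header before lookup.
        host = (self.headers.get("Host") or "").split(":")[0].lower()
        self.send_response(301)  # permanent redirect
        self.send_header("Location", REDIRECTS.get(host, DEFAULT_TARGET))
        self.end_headers()

# To serve: HTTPServer(("", 8080), ParkedDomainHandler).serve_forever()
```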

Bug bounties have always been mispriced. Either risk analysts wildly over-estimate the damage a given bug could do, or the price paid to find them is based on some kind of stupidity-arbitrage play. I think it's the latter.

Consulting firms bill between $1,500-$2,500/day for senior staff. 2 hackers for 10 days could be the ~$50k they got paid. Instead, this crew used 5 hackers for, say, 45 days, or 225 person-days. Napkin arithmetic suggests that's somewhere between $340k and $560k.

I could say it's the consulting firms who are overpriced, as a group of amateurs will do better work for 10%-20% of the cost, but over the years I've found that the difference in the security world is that you hire a small shop to discover the truth about risks, but you pay a big firm to lie about them. That's what costs extra, and given their transparency, maybe this work wasn't mispriced at all.


$1,500 a day?!?! They are getting ripped off. I had a family member that worked for a large Fortune 500 tech company. He got to take a look at the invoice for consulting on a project: they pay consultants $1,500 an HOUR for anything from QA to software engineering.

I had another family member who worked at a Big 4 accounting firm. These companies regularly pay in excess of $800 an hour for the most ridiculous consulting. $1,500 a day for two people is robbery in the world of consulting.


>$1500 an HOUR

To me, this is the real rip off.


A bug bounty program is aimed at finding individual instances of a security hole in your technical architecture. Like finding a weak spot in a ship's hull, and punching a hole.

A security consulting firm would do more for you. They'd basically be telling you how to make your entire hull stronger. And one of the things they might tell you to do, is start a bug bounty program. And they would also likely put things in place for the real security problem in your org: social engineering. Among other things.

And more than that, spending x dollars on a security consulting firm demonstrates that you did some diligence in securing customer data. And that goes a long way in a courtroom.


Hi hi! Speaking as both a bug bounty vet, and a consulting vet (I run includesecurity.com), here's my .02 on some things you may not have considered given your comment.

1) Sam and the other hackers did not do this as a full time gig, they primarily do this as moonlighting from their full time jobs (you can verify this on LinkedIn)

2) Consultants are often given tight scopes, and these artificial client-driven constraints often prevent consultants from identifying similar findings as Sam and crew found.

3) Bug bounties provide no defined level of assurance. They found an SSRF, but it is a very real possibility that somebody in their crew (or an individual bug hunter) doesn't have experience in that particular topic and Apple would have never been the wiser. In a bug bounty you're at the whim of the crowd's varying skills and interests. You can game this by offering larger bounties, but you can't pre-define a scope or level of assurance.

4) They've gotten paid ~$50k thus far for four bugs, if you read the article they mention they'll very likely be getting paid more. I'd be surprised if their total payout isn't six figures when all is said and done.

5) Your stated rate for consulting firms charge for a particular role is correct for the US market, but the level of "seniority" in a senior consultant varies wildly. Many large firms will undeservedly give somebody with two years experience the title "senior", regardless of actual skillset.

6) You state "a group of amateurs will do better work". First, note that these five are not amateurs in any way! They're in the top 1% of global bug bounty hackers. Second, it seems like you're defining "better" as "finds more vulnerabilities from a blackbox bug bounty perspective". I find that clients IRL don't define things the same way you've done here.

7) "but over the years I've found that the difference in the security world is that you hire a small shop to discover the truth about risks, but you pay a big firm to lie about them." This I couldn't agree with you more on; it is MIND BOGGLING to me that firms with no ethics, actual standards, or transparency are the top firms in the security assessment/pentesting space. For an industry that purports to hate snake oil security, we sure are comfortable with a ton of snake oil security assessments.

8) This industry needs standards, for-profit old boys clubs are not the way https://www.theregister.com/2020/08/11/ncc_group_crest_cheat... And the grass roots/non-profit approach also failed due to lack of advocacy, adoption, and persistent leadership. http://www.pentest-standard.org/index.php/Main_Page

I'd love to see a world where bug bounties and full security assessments can live harmoniously and people don't flip out declaring one or the other service totally useless all the damn time.


Some fair statements, others less so. I've been in the game for a while, and the point I would emphasize is smart hackers don't get paid as well as people who do less difficult work with a lower bar to entry. Black/grey market bug bounties for iOS vulnerabilities in the $1m range reflect the risk profile and value much more accurately. The bundle in this report are worth at least the pro-consulting rate, and are more commensurate with that high watermark. Good on them for doing it, and the prestige payout is great, but advertising those disadvantaged numbers bears comment.

Regarding amateurs: olympic athletes are amateurs. It's a reference to people pursuing something out of interest instead of just as a 9-5 job, even if they happen to do it full time. Amateurs will almost always outperform professionals because the skill distribution among pros has a longer tail, where to even get in the game without pro backing you have to be above average. This was an amateur moonlighting effort that delivered better results than consultants who cost 10x the money.

Bug bounties find most vulns in scope that 80% of hackers would find, which I think is more valuable than an assurance level, because assurance levels are bunk. A security architecture is valuable, provided it's built with an understanding of the threat model of the actual business and gets implemented, but otherwise, I think the security assessment document production business doesn't have a long future.


I once came up with a silly way of hijacking facebook accounts that were registered with @hotmail.com. I told both facebook and microsoft about this and never got even a thank you. I know there are some people who make a living out of bug bounties, but I felt very discouraged back then (I was still in college) and never bothered to try again.

Write about it! I’m sure somebody would appreciate reading it.

well, there was a time here in Brazil when MSN (@hotmail.com) was very popular; all my friends used it as the default messenger.

Later came Facebook, and people created their accounts using @hotmail.com addresses and started to leave MSN, since Facebook had its own messenger. One day I received an email from Microsoft saying that they were disabling MSN (I'm telling this from memory, forgive me if I'm saying anything super wrong).

Fast forward to me being in college and studying a little bit of pentesting. As I recall, I was trying to see how much information I could gather about a person from their facebook page (as a non-friend). If you tried to log in using their ID (or username) you could see pieces of their cellphone number and emails. So I tried this with the profile of a girl I'd had a crush on back in the day, and discovered that she'd used MSN as her email.

Eventually I tried to log in to her email on MSN and found out it had been disabled for a while. So I tried to recreate the email account with me as the owner, and to my surprise it worked. I then went back to facebook and recovered "my" password. Even with the email and password, facebook didn't let me log in because of my location. But I knew where this girl lived, so I found a proxy server [1] and bam, I was in.

Not going to lie, I did look at some of her messages and pictures, but I felt very bad afterwards and decided to tell facebook and microsoft about it. This was facebook's response [2]. After a day or two of getting no answers from either company (before I got the answer from facebook), I told the story to 2 or 3 tech reporters. They told me they wrote to microsoft asking for a comment, but never got any answer. A week later I tried to recreate another "dead" account on hotmail and couldn't. I don't remember exactly what they did, but I just couldn't create the email, so I figured they had fixed it.

1 - http://free-proxy.cz/en/proxylist/country/BR/all/ping/all 2 - https://imgur.com/a/kFMlO6d


I am not sure what Facebook could have done in such a situation in those days.

"Hey man, thank you for bringing this to our attention. Want a free t-shirt? Anyways, be good!"

Edit - On a more serious note, I think they tried to enforce account creation to a cellphone number instead of email.


The most valuable vulnerability they found was some publicly exposed Spring Boot Actuator endpoints (https://docs.spring.io/spring-boot/docs/current/reference/ht...):

  $34,000 - Multiple eSign environments vulnerable to system memory leaks containing secrets and customer data due to public-facing actuator heapdump, env, and trace
I guess it goes to show that if you are a developer, you shouldn't overlook simple things like not exposing these endpoints in production (literally a line in a config file), or at least securing them.

And if you are a bug bounty hunter, some of the simplest things can lead to the best ROI. I'm actually surprised something this basic was not already found and reported, but credit goes to their recon efforts for determining where to look.
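To make the "literally a line in a config file" point concrete, here is a minimal `application.properties` sketch for Spring Boot 2.x. The property names come from the Spring Boot Actuator docs; which endpoints you actually want exposed is app-specific, so treat the list here as an assumption:

```properties
# Expose only the endpoints you need over HTTP
# (Spring Boot 2.x already defaults to just health and info)
management.endpoints.web.exposure.include=health,info

# Belt and braces: disable the heapdump endpoint entirely
# so it can never leak process memory contents
management.endpoint.heapdump.enabled=false
```

And if the app pulls in Spring Security, the actuator endpoints can additionally be put behind authentication rather than left anonymous.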


Does Spring Boot enable these by default?

You need to include a separate actuator module to enable them. IIRC in Spring Boot 1.5 and older, actuators were enabled and exposed as web endpoints by default. The heapdump endpoint mentioned in the article also required inclusion of the Spring MVC module – which I guess most web apps do include.

In Spring Boot 2.0 and newer, the actuator module only exposes the "info" and "health" web endpoints by default. The default configuration does expose more endpoints via JMX, though. Also, if your project includes the Spring Security module, actuator endpoints are secured by default.


This is a big win for Apple because in my experience most internal security teams are BU-specific and never get to throw a wider net. Most security engineers probably realize that there are many gaping holes around the company, but they never have the time or bandwidth to go broad and find issues in areas outside their BU. Ultimately, this kind of bug hunting, though lucrative for bug bounty people, does not realize true gains for the company, because you are playing bug whack-a-mole all the time instead of trying to fix problems systemically. The joke at a big company I used to work at was that it's easier to pay thousands in bounties than to fix systemic issues, because fixing those issues would be more costly. Not saying that this is the right mentality, but leaders try to do cost-benefit analysis, and a bad bug is mostly just a bad PR day without any loss of value to the shareholders.

The takeaway I have from this is really not related to Apple at all.

It's that any network of enough complexity run by an organization of enough complexity is actually impossible to secure.


I think it is very much related to Apple. There will be organisations because of their culture or approach to security that will fare better or worse than them.

It's possible, but it requires investment, and it's likely to slow down productivity a little. The default approach in big traditional corps (not implying Apple is traditional) is to leave it up to IT, and maybe hire a Security Officer to signal virtue and assign blame.

It is related, in that a lot of people rely on Apple to secure their data, and it's good for them to know they should add an extra layer of encryption for anything critical.

What if Apple was just one company they looked at over those couple of months? It sounds like they had the scale and capability to scan a lot more, and they did.

https://securityboulevard.com/2020/09/def-con-28-safe-mode-r...


The work they have done here is amazing. Imagine a company like Apple being vulnerable to this extent. That's why when people bring up a new privacy safe/better UX alternative for a sensitive data service I am very skeptical to try them out. Like for email, fastmail or protonmail or Hey.

Data security is hard, I would rather trust someone who has shown good capability there, invests a lot in that and has more to lose. That's why for the foreseeable future, I would rather use Gmail, Google Drive over their alternatives. Also, why I prefer to use Amazon instead of individual storefronts which ask for contact and payment details.


$6k for an internal-perimeter SSRF that led to source code access? What a joke.

Is that not the "XML External Entity processing to Blind SSRF on Java Management API" SSRF? As that would make sense to match that payment. I really struggle to believe that the $6k is for the maven access one, that's a billion dollar vulnerability.

That’s not a billion dollar vulnerability, you can buy recent copies of this source code for a million dollars.

A million dollars for iOS's source code?

Yeah. This stuff gets traded all the time.

Where can I read more about this?

Nice read, like Forensic Files for hackers.

However, I am stunned that they did not earn past six figures. I was a bit primed by the $100,000 bug earnings.


I thought the same thing. I guarantee foreign agents trying to hack into Apple systems are being paid much more than these "nice guys". The incentives here don't seem up to par.


We need better technology, that's one thing that's certain. The current technology we have has too many security holes.


Good point. Future technology also has too many security holes.

Computers made it into the furthest corners of our lives. They are controlling critical infrastructure or are a front-end for it. So IT security should really be a top priority in almost any software project or product.

The upside is that nobody needs atomic bombs to shutdown a whole country anymore ;-)


"As of now, October 4th, we have received four payments totaling $51,500"

What a joke. That's an hourly rate of $20 (assuming 5 researchers working for 3 months). Just enough to buy a MacBook to do the research in the first place.


The big ones haven't been paid out yet it seems

In the article he says they invested "a few hundred hours"... I take that to be around 350 hours - $147/hr... still not a lot for speculative research

you missed a word: "we each ended up putting a few hundred hours into it."

So the $20-30 per hour figure is closer, before taxes, with zero benefits like health, dental, or a pension plan.

They themselves say: bounty hunting is not a job


Good catch
