1. Discover an easily exploitable vulnerability that allows access to a chosen user's private data
2. Email their security address about it
3. Tweet at them about it
4. Fax them about it
5. A week later the vulnerability is still exploitable
You do not have to play fair. You're allowed to impersonate a suburban mom or a grandpa who's not good with technology. You're allowed to ramble or use vague and non-technical terms as long as a reasonably qualified person could determine what the vulnerability is. Specifically, it is not required that you include relevant product versions, steps to reproduce, a full reproduction video, or a one-sentence impact summary like "a caller can eavesdrop on the recipient of a Group Facetime call without their knowledge or consent".
There's no apologising this away. The vulnerability was already a monumental fuckup, but this detail propels it into the realm of cultural dysfunction. It should not be possible to fail this badly. If you put listening devices in people's pockets, you need to hold yourself to a higher standard than "I dunno, bug reporting is hard".
Even nonsensical reports to our security address will, with an SLA measured in hours, be read by a qualified human who will then reply to at least say "we're looking into it".
A credible report will be immediately escalated to someone with relevant domain expertise for investigation. The assigned security engineer will attempt to reproduce it.
A confirmed report will be escalated to an executive, who will determine urgency. For issues like this, where sensitive data is exposed, people will be woken up and several things will happen in parallel: the scope of the issue will be assessed, the root cause will be found, potential workarounds will be identified, a fix will be implemented, the potential existence of related issues will be investigated, and the reporter will be contacted to assess disclosure risk.
Even in the worst case, where a complicated vulnerability exists in multiple versions of multiple products, requiring multiple patches and backports and requiring coordinated disclosure with partners, I'd expect a fix to be in customers' hands within 14 days.
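The escalation ladder described in the last few paragraphs could be sketched roughly like this (a hypothetical, illustrative model; the stage names and action strings are mine, not any real tracker's):

```python
from enum import Enum, auto

class Severity(Enum):
    NONSENSICAL = auto()  # junk report, still gets a human reply
    CREDIBLE = auto()     # plausible, needs reproduction
    CONFIRMED = auto()    # reproduced, wake people up

def triage(severity: Severity) -> list[str]:
    """Map a report's severity to the actions a functioning
    security process should take, per the description above."""
    actions = ["acknowledge within hours"]
    if severity in (Severity.CREDIBLE, Severity.CONFIRMED):
        actions.append("escalate to domain expert for reproduction")
    if severity is Severity.CONFIRMED:
        actions += [
            "escalate to executive to set urgency",
            "assess scope",
            "find root cause",
            "identify workarounds",
            "implement fix",
            "hunt for related issues",
            "contact reporter about disclosure risk",
        ]
    return actions
```

The point of the sketch is that even the lowest rung involves a human reply, and the top rung fans out into parallel work rather than a single serial queue.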
I think the intrinsic failure here is that Apple is - more than any other FAANG-like tech company - fundamentally uninterested in vulnerabilities that don't represent root-capable jailbreak vectors. Or rather, they ostensibly care, but every process is designed to treat those vulnerabilities as the overriding priority, so other kinds of vulnerabilities end up as second-class citizens, so to speak. Apple does a lot of things right from a security perspective, but in my opinion this really isn't one of them.
This is very clear despite corporate messaging if you follow along with their bug bounty program. Consider that the bar for submitting to Apple's bug bounty program requires the vulnerability to be capable of compromising the device's sandbox or escalating to root privileges. This is explicit - a userland privacy bug is not sufficient. Furthermore, the program is strictly invite-only, and even some of the most accomplished and talented vulnerability researchers in the world are shut out of it: https://twitter.com/i41nbeer/status/1027339893335154688
More generally speaking, a reliable formula for putting a vulnerability in front of someone who is both qualified and paid to urgently care is the following:
1. Look up the security team at the company. Not security contact information, the team.
2. Identify individuals on that team by going through blog posts, conference talks, etc.
3. Find those people on Twitter. Tweet at several of them with the broad strokes: you have a vulnerability in X product, you need to securely report it, you believe it's N severity, how should you do it?
But of course, you shouldn't need to do this. You should be able to fire something off to security@ or, better yet, a bug bounty program.
How many times have we heard that security researchers or developers can't be bothered communicating with Apple's black hole anymore and decide to publish their findings on Twitter?
This isn't the first time, and if nothing has changed this surely won't be the last.
That -- being warned and taking time to close a vulnerability -- has happened time and again.
And I'm not sure how closing a vulnerability in a backend service that is your main (and, income-wise, only) product (as is the case with Google, FB, Twitter, etc.) compares to closing a vulnerability in a secondary product like FaceTime.
Can you point to any of these numerous examples?
"The vulnerability came to light in mid-September after the Trend Micro Zero-Day Initiative (ZDI) posted details about it on its site. ZDI said Microsoft had failed to patch the flaw in due time and they decided to make the issue public, so users and companies could take actions to protect themselves against any exploitation attempts."
Or how about this?
"Google Admin, one of Android’s system-level apps, may accept URLs from other apps and, as it turned out, any URLs would be fine, even those starting with ‘file://’. As a result, a simple networking stuff like downloading web pages starts to evolve into a whole file manager kind of thing. Aren’t all Android apps isolated from each other? Heck no, Google Admin enjoys higher privileges, and by luring it into reading some rogue URL, an app can escape sandbox and access private data. How was that patched? First, allow me to brief you on the way independent researchers disclosed the vulnerability. It was discovered as far back as March, with a corresponding report submitted to Google. Five months later, the researchers once again checked out what was going on only to find the bug remained unpatched. On the 13th of August the information on the bug was publicly disclosed, prompting Google to finally issue the patch."
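The confused-deputy pattern in that quote - a privileged component that loads any URL a less-privileged caller hands it - can be sketched in a few lines. This is a minimal illustrative sketch, not the actual Google Admin code; the function and constant names are hypothetical:

```python
from urllib.parse import urlparse

# Only remote schemes are acceptable for a "fetch a web page" feature.
ALLOWED_SCHEMES = {"http", "https"}

def fetch_untrusted_url(url: str) -> str:
    """A privileged component loading a caller-supplied URL.

    Without this scheme check, passing 'file:///data/...' would turn a
    network fetcher into an arbitrary-file reader running with the
    privileged app's permissions - exactly the escape described above.
    """
    scheme = urlparse(url).scheme.lower()
    if scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"refusing to load scheme: {scheme!r}")
    # ... perform the actual network fetch here ...
    return f"would fetch {url}"
```

The fix is boring on purpose: allowlist the schemes you actually need before the privileged code touches the URL, rather than trying to blocklist dangerous ones after the fact.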
Would they ask the CEO for a bit of extra time? A security manager? How many assets would they need to be running in an organization of that size to get their way?
Another (mildly) interesting tidbit to feed our theory - The Good Wife TV episode (season 7 episode 14) introduces an agency called "TAPS" or "Technology Allied Protection Service". In the episode, it’s supposed to be a multi-agency task force that works with Silicon Valley’s biggest companies to clamp down on technology theft.
The biases in this community are out of control sometimes...
Baseball cap, please. I found a potential security bug and stumbled pretty hard delivering the details to MSFT (we all have our first time). The PM was emailing me within minutes; in the end it wasn't deemed a risk.