What's especially pathetic is it doesn't matter what you're reporting - a grave security bug, a widespread hardware flaw, a longing for better functionality - Apple doesn't want to know. In fact they warned iOS developers against trying to get their attention.
If you run to the press and trash us, it never helps.
[EDIT: Apple did indeed literally write that in a previous version of the page. Wow.]
It simply isn't as easy as saying 'flag all reports with 'security vulnerability' in the submission for priority.' That could still be thousands of reports in the 'priority' queue, most of which some person would need to manually investigate one by one.
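A minimal sketch (hypothetical, in Python) of why naive keyword flagging doesn't solve the triage problem: every report that merely contains the magic phrase lands in the priority queue, regardless of merit, and a human still has to read them one by one.

```python
# Hypothetical illustration: naive keyword-based triage.
# Any report containing the keyword gets "priority", so the
# priority queue fills with noise a human must still read.

REPORTS = [
    "security vulnerability: my screen has a dead pixel",
    "security vulnerability!!! phone slow after update",
    "security vulnerability: iTunes won't sync",
    "security vulnerability: a caller can eavesdrop via Group FaceTime",
    "battery drains too fast",
]

def flag_priority(reports):
    """Return reports containing the keyword, in submission order."""
    return [r for r in reports if "security vulnerability" in r.lower()]

priority_queue = flag_priority(REPORTS)
print(len(priority_queue))  # 4 of 5 reports are "priority"; only 1 is real
```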
1. Discover an easily exploitable vulnerability that allows access to a chosen user's private data
2. Email their security address about it
3. Tweet at them about it
4. Fax them about it
5. A week later the vulnerability is still exploitable
You do not have to play fair. You're allowed to impersonate a suburban mom or a grandpa who's not good with technology. You're allowed to ramble or use vague and non-technical terms as long as a reasonably qualified person could determine what the vulnerability is. Specifically, it is not required that you include relevant product versions, steps to reproduce, a full reproduction video, or a one-sentence impact summary like "a caller can eavesdrop on the recipient of a Group Facetime call without their knowledge or consent".
There's no apologising this away. The vulnerability was already a monumental fuckup, but this detail propels it into the realm of cultural dysfunction. It should not be possible to fail this badly. If you put listening devices in people's pockets, you need to hold yourself to a higher standard than "I dunno, bug reporting is hard".
Even nonsensical reports to our security address will, with an SLA measured in hours, be read by a qualified human who will then reply to at least say "we're looking into it".
A credible report will be immediately escalated to someone with relevant domain expertise for investigation. The security engineer will attempt to reproduce.
A confirmed report will be escalated to an executive, who will determine urgency. For issues like this, where sensitive data is exposed, people will be woken up and several things will happen in parallel: the scope of the issue will be assessed, the root cause will be found, potential workarounds will be identified, a fix will be implemented, the potential existence of related issues will be investigated, and the reporter will be contacted to assess disclosure risk.
Even in the worst case, where a complicated vulnerability exists in multiple versions of multiple products, requiring multiple patches and backports and requiring coordinated disclosure with partners, I'd expect a fix to be in customers' hands within 14 days.
I think the intrinsic failure here is that Apple is - more than any other FAANG-like tech company - fundamentally uninterested in vulnerabilities that don't represent root-capable jailbreak vectors. Or rather, they ostensibly care, but every single process is systematically designed to prioritise those vulnerabilities above all else. Other types of vulnerabilities are treated as second-class citizens, so to speak. Apple does a lot of things right from a security perspective, but this really isn't one of them in my opinion.
This is very clear despite corporate messaging if you follow along with their bug bounty program. Consider that the bar for submitting a bug bounty to Apple requires the vulnerability to be something capable of compromising the device's sandbox or root privileges. This is explicit - a userland privacy bug is not sufficient. Furthermore, the bug bounty is strictly invite only, and even some of the most accomplished and talented vulnerability researchers in the world are closed out from it: https://twitter.com/i41nbeer/status/1027339893335154688
More generally speaking, a reliable formula for putting a vulnerability in front of someone who is both qualified and paid to urgently care is the following:
1. Look up the security team at the company. Not security contact information, the team.
2. Find out individuals on that team by going through blog posts, conference talks, etc.
3. Find those people on Twitter. Tweet at several of them with the broad strokes: you have a vulnerability in X product, you need to securely report it, you believe it's N severity, how should you do it?
But of course, you shouldn't need to do this. You should be able to fire something off to security@ or, better yet, a bug bounty program.
How many times have we heard that security researchers or developers can't be bothered with communicating with Apple's black hole anymore and decide to publish their findings on Twitter?
This isn't the first time, and if nothing has changed this surely won't be the last.
That -- being warned and taking time to close a vulnerability -- has happened time and again.
And I'm not sure how closing a vulnerability in a backend service, which is your main (and, income-wise, only) product (as is the case with Google, FB, Twitter, etc.), compares to closing one in a secondary product like FaceTime.
Can you point to any of these numerous examples?
"The vulnerability came to light in mid-September after the Trend Micro Zero-Day Initiative (ZDI) posted details about it on its site. ZDI said Microsoft had failed to patch the flaw in due time and they decided to make the issue public, so users and companies could take actions to protect themselves against any exploitation attempts."
Or how about this?
"Google Admin, one of Android’s system-level apps, may accept URLs from other apps and, as it turned out, any URL would be fine, even those starting with ‘file://’. As a result, simple networking stuff like downloading web pages starts to evolve into a whole file-manager kind of thing. Aren’t all Android apps isolated from each other? Heck no, Google Admin enjoys higher privileges, and by luring it into reading some rogue URL, an app can escape the sandbox and access private data. How was that patched? First, allow me to brief you on the way independent researchers disclosed the vulnerability. It was discovered as far back as March, with a corresponding report submitted to Google. Five months later, the researchers once again checked out what was going on only to find the bug remained unpatched. On the 13th of August the information on the bug was publicly disclosed, prompting Google to finally issue the patch."
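The bug class described in that quote - a privileged component loading any URL it's handed - can be sketched with a simple scheme allowlist. This is an illustrative Python sketch, not the actual Android fix: without the check, a "file://" URL turns a web view into a file reader running with the privileged app's permissions.

```python
# Hypothetical sketch of the bug class above: a privileged component
# should allowlist URL schemes rather than load anything it receives.
# A "file://" URL would otherwise read local files with the app's
# elevated privileges.

from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def is_safe_to_load(url: str) -> bool:
    """Reject URLs whose scheme isn't explicitly allowlisted."""
    return urlparse(url).scheme.lower() in ALLOWED_SCHEMES

print(is_safe_to_load("https://example.com/page"))   # True
print(is_safe_to_load("file:///data/local/secrets")) # False
```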
Would they ask the CEO for a bit of extra time? A security manager? How many assets would they need to be running in an organization of that size to get their way?
Another (mildly) interesting tidbit to feed our theory - The Good Wife TV episode (season 7 episode 14) introduces an agency called "TAPS" or "Technology Allied Protection Service". In the episode, it’s supposed to be a multi-agency task force that works with Silicon Valley’s biggest companies to clamp down on technology theft.
The biases in this community are out of control sometimes.
Baseball cap, please. I found a potential security bug and stumbled pretty hard delivering the details to MSFT (we all have our first time). The PM was emailing me within minutes, it wasn't deemed a risk.
The weird thing to me is that the NYT article says that she tried "faxing Apple’s security team." Having some familiarity with the team and the process of reporting security vulnerabilities to them, I do not recall them ever claiming to have a fax machine. The idea of Apple asking you to fax in a bug report is ludicrous.
Also, it’s amazing to see she did so much to try to notify them. Yes, not being technical, she couldn’t figure out that security team email, but I wonder why all of the Apple people who were getting signals from her didn’t forward it to that team...
Overnighting is similar in that the carrier should provide confirmation that a package has been successfully delivered.
Certified mail is probably a better "someone received it" option, if that is the desired requirement.
Superstition leads to lawyers doing again a thing that worked previously without knowing why it worked or having any understanding that it's actually necessary for any reason. Most of the weird forms of words in legal contracts are a result of this, the court probably doesn't care whether you demand someone "Cease and desist" or just "Stop fucking doing that" but who wants to risk that they do care? Let's just write "Cease and desist" the same as we did on the previous document...
So, she uses Fax because her predecessors used Fax, not because there's an actual "something legal-binding about faxes". Just superstition.
There are superstitious people in every profession, that's what "sync; sync; sync" is all about - it's just that lawyering seems especially prone.
Do you really think a consumer would think to do that? Also you have to consider the number of bugs that are incorrectly flagged as being security.
Even then, the modern day incentives around vulnerability disclosure are not helping. Because security bugs are awarded bounties based on their severity, every single reporter has a financial incentive to hype and inflate their findings. "URGENT" this, "CRITICAL" that, "ACCOUNT TAKEOVER" due to already compromised computer/device, you name it.
Teams without sufficient resources will spend a lot of time dealing with the maladjusted severities. And yes, I believe the "mal-" prefix is warranted. If your report does go through with inflated severity, you stand to make more money.
I am starting to think that a reasonably run bounty programme should state up front that inflated severities in bug reports will reduce their payouts.
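One way such a policy could be sketched, with purely hypothetical numbers and rules: pay on the assessed severity, and apply a discount for each level the claimed severity was inflated above it.

```python
# Purely hypothetical bounty policy: payouts are based on the
# assessed severity, with a 25% discount per level of inflation
# between what the reporter claimed and what triage confirmed.

BASE_PAYOUT = {"low": 500, "medium": 2_000, "high": 10_000, "critical": 50_000}
RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def payout(claimed: str, assessed: str) -> int:
    """Pay on assessed severity; knock off 25% per level of inflation."""
    base = BASE_PAYOUT[assessed]
    inflation = max(0, RANK[claimed] - RANK[assessed])
    return int(base * max(0.0, 1 - 0.25 * inflation))

print(payout("high", "high"))     # 10000 -- accurate report, full payout
print(payout("critical", "low"))  # 125   -- inflated three levels
```

Whether the discount schedule is fair is debatable, but making it explicit up front removes the financial incentive to shout "CRITICAL" at everything.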
First, Apple did not even acknowledge the report to the initial finder. This is flawed; Google, Microsoft, and other big players acknowledge receipt.
Secondly, the person tried to go beyond after not hearing anything, by calling, faxing and other means to no avail.
Third, Apple needs to staff their responders better, it seems. Most issues can be filtered out quickly to be left with the few interesting ones. The repro here doesn't need any technical knowledge!
Many companies even outsource the initial triage process to have it scale when needed.
So, one take away for Apple is to improve their response process and transparency. Transparency they have always lacked when it comes to security.
That is strange. I worked with some folks who interacted with researchers and their reports.
Half the battle with them was getting back to the researcher and getting their cooperation about keeping quiet + assuring them they're working on it, and working with them if additional data is needed. And at the same time fending off internal folks who have poor instincts and want to push back against, blame, or even punish the researcher (this was surprisingly common at companies who even should know better).
It's not the hardest part but developing that trust can be a big difference between a possible PR nightmare or not, and the initial contacts are a big deal.
Also, researchers you get along with sometimes come back with better data, additional bugs, etc.
It sounds like Apple might be doing this all via email, with no case management software. That would be pretty bad.
You can’t automate handling of bug submissions unless they’re semi-automated (crash reports, spintracer, etc. get coalesced, and high-volume ones get higher priority).
But people filing bugs themselves? Those go through triage to filter out unactionable bugs and get sent to the correct engineering teams.
Those then get screened, by a dedicated screener if the team is big enough, and routed to the right sub-component or engineer.
That all takes time, and it is unavoidable.
So it seems like a way more common problem than it should be.
He mostly invokes the Hitchhiker's Guide quote "It rather involved being on the other side of this airtight hatchway" to suggest that often the problem in these bugs is that they have a step where you've legitimately got privileges, and then they use those privileges to... do something you legitimately need privileges for. That's not a bug - if you try it without the privileges it doesn't work - but Raymond still has to walk through all the steps to see whether anything surprising is going on.
Now, whether engineers at a corp actually get given the space to do this stuff is an executive policy decision. Maybe at Apple not enough of them do, I can't say. But if it doesn't get done you are sooner or later going to miss cases where it _sounds_ like it's not a bug but actually there's a serious bug if you looked closely.
If Apple were doing anything remotely similar to "hoarding", they'd have more like $700 billion in cash right now.
But it's this, broken MBP keyboards, broken MBP hinges, bent iPads, the MBP core-i9 thermal issue and probably a few I'm forgetting.
Nintendo doesn't have these kinds of failures, they're pretty analogous to Apple. Hardware and software. What's the difference? Nintendo cares and Apple doesn't. (Not to say Nintendo is issue free by any means).
Maybe they should put some of that $250b in cash to work upgrading their processes.
Their focus is on building cheap but durable goods, e.g. plastic scratches more easily but is less likely to shatter catastrophically.
They have definitely had their own industrial design failures - the Wii would sometimes overheat in sleep mode, the 3DS would scratch its own screen when placed under pressure in a pocket, the Switch supposedly can have its screen scratched by a bent dock. The initial run of switch controllers had an issue where the left side controller could disconnect very easily. Of course when your product is $149 or $299 replacing it or living with a scratch doesn’t hurt as much...
Apple has an entirely different brand they need to maintain which is why they use metal, make their devices thin, have edge to edge screens, etc.
Personally I have no problems with “toy” style design but I think a lot of apple’s premium is based on their sense of style and the feeling people have that iPhones, iPads, Macs are a “premium” product due to the industrial design. As an old school Mac user from back when Macs were beige boxes I personally don’t care about that stuff.
Are you sure it's not because their products are literally toys?
Incidentally, there is no shortage of real honest to god tools which are made out of plastics. Glass reinforced plastic can be a very robust material, particularly for its weight, and the weight of a tool is often an important consideration. Fetishizing metal for the sake of metal is often done in the domain of luxury goods where feeling and appearance count for more than objective practical physical properties.
Right now my primary computer is an aluminum macbook. The aluminum certainly looks nice, although I've found that the hard edge hurts my wrists when I rest my hands on it. Would putting a slightly larger radius on that bevel really negatively impact the supposed physical strength of the metal design? Nah. But it would negatively impact the aesthetics of the laptop. Apple made it sharp because sharp looks thin and looking thin sells well.
Contrast this with the T60 I was using a decade ago. It's not thin and it's pretty damn heavy. It's also got a metal skeleton but plastic exterior. But for all the faults of that laptop, its plastic components do not make it fragile. I dropped and stepped on that thinkpad more times than I can even recall but not once did the plastic break. Plastic, the right plastic, is actually pretty damn durable.
The thinkpad of course is still a typical consumer good. But next time you're walking past a construction site, take a look at the tools being used by the workers. There are a lot of plastic parts among those tools, and the tools hold up just fine in an environment far more demanding than anything you put your Apple product through.
If any tech support or Apple Genius were to have seen this bug reproduced, it should have immediately been easy to flag to the right person.
Saying that it’s not hard for a person to get in contact with someone working at Apple seems like a pretty biased view.
I think at this point, we need Tim Cook to write an apology piece about how they screwed up, how this won't happen again, and who got fired.
Something is quite wrong ...
We thought you might like to know that we put https://news.ycombinator.com/item?id=19029573 in the second-chance pool, so it will get a random placement on the front page sometime in the next 24 hours.
This is part of an experiment in giving good HN submissions multiple chances at the front page. If you're curious, you can read about it at https://news.ycombinator.com/item?id=11662380 and other links there.
Your account lost submission privileges at some point. I'm not sure exactly why, but have taken the penalty off now.
My first comment on my own submission was for people to look at the other article as well/instead.
Karma sharing sounds nice, but in the end, I just want people to read the better article.
Thanks for caring about the quality of HN. That's what matters.
I am not surprised about what happened at all. There is an argument to be made that, since it took Apple so many years to finally implement group video calls, they could have taken a little more time to do it right. But other than that, I don't see how Apple could have prevented a bug that a person wasn't willing to disclose without having money first.
You don't see this kind of stuff every week, and surely Apple has the resources to at least confirm it.
A video as reproduction steps is as credible as it gets. I hope she and her son get a nice reward from Apple...
http://www.telegraph.co.uk/technology/apple/8912714/Apple-iT... is a story about how Apple didn't fix a remotely-exploitable bug in iTunes for years after being notified about it. During that time governments were said to have used this bug to infiltrate users' computers.
And the worst part of it all: The majority of Apple's software (certainly both FaceTime and iTunes) are proprietary programs -- user-subjugating software that does not respect a user's freedom to run, inspect, share, and modify the program. This means that even the most skilled and willing users are prohibited from fixing the problem and distributing a fixed version of the program to help their community. So, rich or poor, proprietors do their users a disservice by distributing proprietary software.
This stuff is hard.
 - https://resources.sei.cmu.edu/asset_files/SpecialReport/2017...
EDIT: Changed the link to the CERT guide for CVD.
The only area in that document she could have followed to reach out to Apple's security contact is under the heading of "Security and privacy researchers", and I doubt she thought of herself or her 14-year-old son as one.
Yet, that is precisely what she did...? Your argument falls on its face by her own action[s].
EDIT: Note in the screenshot that there's an appended "Follow-Up" with what looks to be an ID, which has been added to the Subject field of the email.
 - https://cdn.vox-cdn.com/thumbor/zrezAXK0-NdK3ugN3G2Uwd_vzuo=...
Anyway, it's absurd to say that tweets aren't enough warning if they're public. A public disclosure may be distasteful to Apple, but it doesn't change the fact that they have a security emergency.
Who said Tweets weren't enough warning? Did I say that? Where?
...and what does a warning have to do with CVD? The two precepts aren't mutually exclusive, yeah?
Why should they have followed that process? Coordinated Vulnerability Disclosure is just one way of many of disclosing security problems. It's not the single right way.
They started doing just that (didn't anyone actually read the article?) and then decided they weren't getting enough traction and tweeted at Apple. Then someone disclosed the full vulnerability later because... ...reasons?
CVD isn't the single right way, you're correct, but it allows the vendor to address the issue and remediate it before the exploit is published. In fact, CERT states that they'll publish exploits after 45 days (I think it is) of non-response from vendors.
To me, it sounds like someone started going the right direction and someone else took over on the PR value of "the loudest voice gets the most attention" - and there are arguments for and against that.
However, since they already started going down the "right" road, I don't see why there's this crusade to say "all reports should be accepted through any channel". It's an untenable precept.
Should start-ups or FOSS monitor social media for security reports? Don't they define reporting processes of their own?
I'm not saying, "Don't ever use social media," which is what I'm gathering some people misunderstand this as. I'm saying, if they started down the reporting path, via the appropriate channels, then why disclose the vulnerability publicly, if they were already going down the appropriate path?
As someone else mentioned, it should've been on support to see the tweet and pass it on to the appropriate team; especially, since they had already opened a bug for it. Yet, I don't see how this automatically equates to just dumping the exploit into the public domain. (I hope my explanation of it makes sense, at least?)
You might as well comment ‘maybe I’m cantankerous but they don’t seem to be baking a cake here’. You’re right they weren’t baking a cake. So what of it?
>I mean it’s not relevant because that’s not what they were doing.
If they first reported it via the email@example.com, what were they doing then?
Just reporting it by email! Why does that mean they thought they were following someone else's idea of how to do disclosure? It's not called firstname.lastname@example.org is it? Maybe they'd never heard of CVD. Maybe their idea of disclosure is to email and then Tweet it as well.
Do you see what I mean though? You snarkily ask 'maybe I'm wrong but this doesn't look like X' when nobody ever said or implied it was X. It doesn't make any sense as a criticism.
So, to explicitly say they weren't aware of 'x', when that doesn't match the timeline, is also - in and of itself - possibly disingenuous. Do you, at least, see where I'm coming from on that angle?
I see where you're coming from but I don't think it really does imply they were aware of CVD enough to be snarky and wave a standard in their face. They probably just thought they'd give Apple some time instead of thinking 'I'll follow CVD here'.
If they do have Twitter and other social media accounts for support then I think they should.
The story behind this particular report seems muddled quite a bit and the history of the report is quite weird. Maybe they wanted to have dibs on the report as Apple does not have a bounty program?
That's, pretty much, what I'm getting at. Everyone wants to jump on the "Apple's done a shit job with this" bandwagon, which - if you hate Apple - is your prerogative, but going from reporting it, to a tweet, to a full public drop of the exploit in less than a day from the actual tweet isn't going to end well for any company - no matter who it is.
>Maybe they wanted to have dibs on the report as Apple does not have a bounty program?
That's - ultimately - what I believe happened here.
I still fail to see what that has to do with CVD, though?
It also sounds like the person reporting the bug was ignored until she made a developer account and reported the bug through that - that shouldn't have been necessary.
It is not credible to imagine that an emergency disabling of Group FaceTime takes a week.
"There's a bug that allows our devices to be used as eavesdropping tools. But let's relax about it!"
I guess that "not everyone" includes some product people working on FaceTime.
The most plausible scenario is that somebody at Apple threw the original report in the figurative trash bin after failing to read it or failing to realize the significance of it (although given the clarity of the report, failing to read it and failing to understand it are virtually the same thing.)
Rolling out a system update is also not immediate (e.g. you need to be sure you don’t end up with user flows resulting in no audio), hence the focus on server-side fixes (which led to today’s “kill it server side” fix).
But also “a week to fix” depends - it’s a week from the report, at which point it needs to be screened. So let’s say a day to get to an actual engineer - it may take longer to get appropriate security keywords attached - then the engineer would need to actually read it, which depends on what their current workload is, where they are in their release schedule, etc.
It’s not trivial - all big companies get thousands of bug reports a day, and they just take time to get to the right place.
I agree with what others have said (telling a consumer to get an ADC account is clearly suboptimal). But in general the path from consumer report to security bug is challenging for everyone.
As before, I’m sure the world’s greatest linguist will also be here :)