Apple was warned about the FaceTime eavesdropping bug last week (theverge.com)
361 points by josu 78 days ago | 131 comments

It's somehow never the tech companies' fault for willfully designing inept feedback channels - or, in Google's case, null-routed ones - that impede customers from communicating with them. I think many companies, especially one with $200b in savings, could have handled this report better. Many companies without $200b manage to receive information from a customer without it passing through journalists first.

What's especially pathetic is it doesn't matter what you're reporting - a grave security bug, a widespread hardware flaw, a longing for better functionality - Apple doesn't want to know. In fact they warned iOS developers against trying to get their attention.

     If you run to the press and trash us, it never helps.

That Medium article presents the quotation as though it's literally something that Apple wrote. It doesn't actually appear on the page it supposedly came from, nor does anything like it, as far as I can see.


[EDIT: Apple did indeed literally write that in a previous version of the page. Wow.]

It's visible verbatim in an archived version of the page. See:


Clearly it does matter what you're reporting - that quote is specifically about the app store review process.

I can only imagine the number of bug reports, real and false, that a company of Apple's size must receive on a daily basis. Is there any company at that scale that can reliably filter through all of them to find actual, critical bugs quickly?

It simply isn't as easy as saying 'flag all reports with 'security vulnerability' in the submission for priority.' That could still be thousands of reports in the 'priority' queue, most of which some person would need to manually investigate one by one.
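To make the point concrete, here is a minimal sketch of the naive rule described above, with a hypothetical keyword list and made-up reports. Even a filter like this leaves every flagged report for a human to investigate:

```python
# Hypothetical sketch: flag any report mentioning certain phrases for priority.
# The keyword list and the sample reports are invented for illustration.

KEYWORDS = ("security vulnerability", "exploit", "account takeover")

def is_priority(report_text: str) -> bool:
    """Naive triage: true if the report mentions any priority keyword."""
    text = report_text.lower()
    return any(kw in text for kw in KEYWORDS)

reports = [
    "My iPhone screen flickers after the update",
    "URGENT security vulnerability: anyone can read my friend's messages",
    "Possible exploit in FaceTime lets a caller listen in",
    "Security vulnerability!!! my battery drains too fast",  # inflated claim
]

# Every flagged report still needs a human to investigate it one by one.
priority_queue = [r for r in reports if is_priority(r)]
print(len(priority_queue))  # 3 of the 4 reports land in the priority queue
```

At Apple's scale, "3 of 4" becomes thousands per day, which is exactly the problem.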

If you are able to perform the following steps for any of Amazon, Google, Facebook, Netflix, Microsoft or Twitter, I will literally eat a hat (you may choose what kind):

1. Discover an easily exploitable vulnerability that allows access to a chosen user's private data

2. Email their security address about it

3. Tweet at them about it

4. Fax them about it

5. A week later the vulnerability is still exploitable

You do not have to play fair. You're allowed to impersonate a suburban mom or a grandpa who's not good with technology. You're allowed to ramble or use vague and non-technical terms as long as a reasonably qualified person could determine what the vulnerability is. Specifically, it is not required that you include relevant product versions, steps to reproduce, a full reproduction video, or a one-sentence impact summary like "a caller can eavesdrop on the recipient of a Group FaceTime call without their knowledge or consent".

There's no apologising this away. The vulnerability was already a monumental fuckup, but this detail propels it into the realm of cultural dysfunction. It should not be possible to fail this badly. If you put listening devices in people's pockets, you need to hold yourself to a higher standard than "I dunno, bug reporting is hard".

I work for a large software company. I am not on our security team, but have worked with our security team to investigate and resolve reported issues. I agree that it should not be possible to fail this badly.

Even nonsensical reports to our security address will, with an SLA measured in hours, be read by a qualified human who will then reply to at least say "we're looking into it".

A credible report will be immediately escalated to someone with relevant domain expertise for investigation. The security engineer will attempt to reproduce.

A confirmed report will be escalated to an executive, who will determine urgency. For issues like this, where sensitive data is exposed, people will be woken up and several things will happen in parallel: the scope of the issue will be assessed, the root cause will be found, potential workarounds will be identified, a fix will be implemented, the potential existence of related issues will be investigated, and the reporter will be contacted to assess disclosure risk.

Even in the worst case, where a complicated vulnerability exists in multiple versions of multiple products, requiring multiple patches and backports and requiring coordinated disclosure with partners, I'd expect a fix to be in customers' hands within 14 days.

Yeah, this is particularly the case at Google and Facebook. If you submit a security report to Google or Facebook through their bug bounty programs and escalate it with a critical severity tag (whether or not it's justified), someone on the application security team will review it within an hour. I can say that from experience (on both sides). If it's a legitimate sev:critical vulnerability, a workaround will usually be in production within 24 hours.

I think the intrinsic failure here is that Apple is - more than any other FAANG-like tech company - fundamentally uninterested in vulnerabilities that don't represent root-capable jailbreak vectors. Or rather, they ostensibly care, but every single process is systematically designed to prioritize those vulnerabilities as a categorical imperative. Other types of vulnerabilities are treated as second-class citizens, so to speak. Apple does a lot of things right from a security perspective, but this really isn't one of them, in my opinion.

This is very clear despite corporate messaging if you follow along with their bug bounty program. Consider that the bar for submitting a bug bounty to Apple requires the vulnerability to be something capable of compromising the device's sandbox or root privileges. This is explicit - a userland privacy bug is not sufficient. Furthermore, the bug bounty is strictly invite only, and even some of the most accomplished and talented vulnerability researchers in the world are closed out from it: https://twitter.com/i41nbeer/status/1027339893335154688

More generally speaking, a reliable formula for putting a vulnerability in front of someone who is both qualified and paid to urgently care is the following:

1. Look up the security team at the company. Not security contact information, the team.

2. Find out individuals on that team by going through blog posts, conference talks, etc.

3. Find those people on Twitter. Tweet at several of them with the broad strokes: you have a vulnerability in X product, you need to securely report it, you believe it's N severity, how should you do it?

But of course, you shouldn't need to do this. You should be able to fire something off to security@ or, better yet, a bug bounty program.

The biggest problem isn't the time it took to fix the bug, whether that was a little harder, as on iOS, or easier, as on the backend. It is that Apple refused to listen to anything until the media broke the story, and then went into damage control.

How many times have we heard that security researchers or developers can't be bothered with communicating with Apple's black hole anymore and decide to publish their findings on Twitter?

This isn't the first time, and if nothing has changed this surely won't be the last.

Not sure what the "challenge" is.

That -- being warned and taking time to close a vulnerability -- has happened time and again.

Some examples:




And I'm not sure how closing a vulnerability in a backend service, which is your main (and income-wise, only) product (as is the case with Google, FB, Twitter, etc.), compares to a vulnerability in a secondary product like FaceTime.

Well, pointing out that tech company engineers aren't really all that great, indeed even below average, threatens, like, one of HN's biggest orthodoxies: that the people who read and write in this forum are brilliant.

There have been numerous examples in the past where companies like Microsoft have taken way longer than one week to fix serious vulnerabilities. What makes you so confident you won't be eating a lot of hats?

Are any of these examples in recent history, in software terms? Software culture, especially with regards to security, has come a long, long way since the bad old days. I agree with the parent comment. This is not acceptable in 2019.

Can you point to any of these numerous examples?

>Are any of these examples in recent history, in software terms?


Is there a reason why the rest of the parent comment was left unaddressed? The implication was to provide such examples. I'm also curious to hear.

Several such cases are made public every year.

Here's one:


"The vulnerability came to light in mid-September after the Trend Micro Zero-Day Initiative (ZDI) posted details about it on its site. ZDI said Microsoft had failed to patch the flaw in due time and they decided to make the issue public, so users and companies could take actions to protect themselves against any exploitation attempts."

Or how about this?

"Google Admin, one of Android's system-level apps, may accept URLs from other apps and, as it turned out, any URLs would be fine, even those starting with 'file://'. As a result, simple networking stuff like downloading web pages starts to evolve into a whole file manager kind of thing. Aren't all Android apps isolated from each other? Heck no, Google Admin enjoys higher privileges, and by luring it into reading some rogue URL, an app can escape the sandbox and access private data. How was that patched? First, allow me to brief you on the way independent researchers disclosed the vulnerability. It was discovered as far back as March, with a corresponding report submitted to Google. Five months later, the researchers once again checked out what was going on only to find the bug remained unpatched. On the 13th of August the information on the bug was publicly disclosed, prompting Google to finally issue the patch."


Makes you wonder whether these bugs have been intentionally introduced due to pressure from some government agencies with a specific target in mind. And the "delay" in issuing a fix is just to make sure that the mission objective is first achieved.

I love the idea, but really wonder who the spy agency would submit that request to.

Would they ask the CEO for a bit of extra time? A security manager? How many assets would they need to be running in an organization of that size to get their way?

We do know that US Security agencies already interface and work with American tech companies to "safeguard" their commercial secrets and infrastructure. There may already be some well established protocols and processes in place.

Another (mildly) interesting tidbit to feed our theory - The Good Wife TV episode (season 7 episode 14) introduces an agency called "TAPS" or "Technology Allied Protection Service". In the episode, it’s supposed to be a multi-agency task force that works with Silicon Valley’s biggest companies to clamp down on technology theft.

I remember an article about the '90s, when you had to be a paying member of MSDN to be able to file a bug report.

https://news.ycombinator.com/item?id=18174226 .. "It's a feature". Now the only thing saving you from eating the hat is the Fax part, I'll give you that.

This is hilarious. The other day people were patting Apple on the back, surmising it would take Apple mere hours to fix, only now it's revealed that Apple failed to act on the report when they got it. And yet people still think major tech companies will fix anything within a week, but here we have a months-old example where Facebook decided to bury their head in the sand, and it's gotten hardly any attention.

The biases in this community are out of control sometimes..

> Tweet at them about it

Baseball cap, please. I found a potential security bug and stumbled pretty hard delivering the details to MSFT (we all have our first time). The PM was emailing me within minutes; it wasn't deemed a risk.

Apple has a dedicated team that triages incoming security vulnerability reports. If you search for how to report a security issue to Apple, you would find their e-mail address.

The weird thing to me is that the NYT article says that she tried "faxing Apple’s security team." Having some familiarity with the team and the process of reporting security vulnerabilities to them, I do not recall them ever claiming to have a fax machine. The idea of Apple asking you to fax in a bug report is ludicrous.

That teen’s mom is a lawyer. There’s still something legal-binding about faxes that Email doesn’t have. At my former employer, we (a remote office for mother company) were sometimes told to send some info via a real fax and there were no ways around it (other than overnighting the original).

Also, it’s amazing to see she did so much to try to notify them. Yes, not being technical, she couldn’t figure out that security team email, but I wonder why all of the Apple people who were getting signals from her didn’t forward it to that team...

The sender can confirm that a fax has been received correctly on the other end. It will even print out a confirmation page with "message completely received". Can't do the same with emails, so there's no pretending you never received it because it was caught in some filter somewhere outside of your control.

Overnighting is similar in that the carrier should provide confirmation that a package has been successfully delivered.

Unrelated to anything, but... there are many fax vendors (sfax, for example) that provide "virtual fax machines" and email you the faxes they receive. Those emails could get lost. The fax service could lose the data. Just because a fax was received by a "fax machine" doesn't necessarily mean someone actually got it.

Certified mail is probably a better "someone received it" option, if that is the desired requirement.

I'd expect that you can then sue the fax vendor?

Probably something in the ToS about forced arbitration or "best effort delivery", with how things go these days. :/

Just because a fax is sent doesn't mean that the receiver read the content. Were there even details about the issue on the fax, or was it yet another "call me now so I can explain the issue, which I could do by email, but I want you to give me money first" type of thing? Can we blame someone for not responding to a fax that doesn't include any details about the issue or how to reproduce it?

The faxed document is linked from the article, no need to speculate what it did and did not include. Hint: as the article says, it includes full reproduction details for the bug.

Just because certified mail is sent and signed for doesn't mean the receiver read the content. The point is that someone got the message, what the recipient does with that message is totally on them.

Lawyers are notoriously superstitious. Worse even than medics, since we managed somewhat to get modern medics to embrace the idea that it's possible to find out whether the blue pill or green pill is best by _doing science_ rather than relying on your gut instinct.

Superstition leads to lawyers doing again a thing that worked previously without knowing why it worked or having any understanding that it's actually necessary for any reason. Most of the weird forms of words in legal contracts are a result of this, the court probably doesn't care whether you demand someone "Cease and desist" or just "Stop fucking doing that" but who wants to risk that they do care? Let's just write "Cease and desist" the same as we did on the previous document...

So, she uses Fax because her predecessors used Fax, not because there's an actual "something legal-binding about faxes". Just superstition.

There are superstitious people in every profession, that's what "sync; sync; sync" is all about - it's just that lawyering seems especially prone.

There is a big difference between medics and lawyers. Medics are playing a game where the rules are set by nature, biology, reality. Lawyers, on the other hand, play a game where the rules are set by other people in the legal industry (lawyers, judges, or legislators). Mortal men cannot issue proclamations about which pill will work and expect their demands to be heeded by nature, but all the rules in the legal system are made by mortal men. If people in the legal system decide that fax machines are blessed, then they are.

For it to reach that screening step, it has to be tagged as a security bug.

Do you really think a consumer would think to do that? Also, you have to consider the number of bugs that are incorrectly flagged as security issues.

I'm sure reports to product-security@apple.com get immediately flagged as "security".

Knowing how much outright spam a security email address gets... (luckily spam filtering is good enough to not surface them, but a human still has to periodically go through the spam folder just to ensure there were no false positives)

Even then, the modern day incentives around vulnerability disclosure are not helping. Because security bugs are awarded bounties based on their severity, every single reporter has a financial incentive to hype and inflate their findings. "URGENT" this, "CRITICAL" that, "ACCOUNT TAKEOVER" due to already compromised computer/device, you name it.

Teams without sufficient resources will spend a lot of time dealing with the maladjusted severities. And yes, I believe the "mal-" prefix is warranted. If your report does go through with inflated severity, you stand to make more money.

I am starting to think that a reasonably run bounty programme should state up front that inflated severities in bug reports will reduce their payouts.
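Such a policy could be as simple as scaling the payout by how far the claimed severity overshoots the triaged one. A hypothetical sketch, with invented payout figures, not any real programme's rules:

```python
# Hypothetical payout rule: the base reward follows the *triaged* severity,
# and each level of severity inflation reduces the reward by 25%.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
BASE_PAYOUT = {"low": 500, "medium": 2_000, "high": 10_000, "critical": 50_000}

def payout(claimed: str, triaged: str) -> int:
    """Compute a reward that penalizes overclaiming severity."""
    base = BASE_PAYOUT[triaged]
    overshoot = max(0, SEVERITY_RANK[claimed] - SEVERITY_RANK[triaged])
    return int(base * (1 - 0.25 * overshoot))

print(payout("high", "high"))        # honest report: full payout, 10000
print(payout("critical", "medium"))  # inflated by two levels: 1000
```

Stating the rule up front changes the incentive: an accurate severity estimate now pays better than "URGENT" boilerplate.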

I thought it was filed through the standard external bug report tool? (The PS queue is higher priority, but I assume also gets huge amounts of spam)

There are a couple of odd gaps that seem to have been in place at Apple.

First, Apple did not even acknowledge the report to the initial finder. This is flawed; Google, Microsoft, and other big players acknowledge receipt.

Secondly, the person tried to go further after not hearing anything, calling, faxing, and trying other means, to no avail.

Third, Apple needs to staff their responders better, it seems. Most issues can be filtered out quickly, leaving the few interesting ones. The repro here doesn't need any technical knowledge!

Many companies outsource the initial triage process even to have it scale when needed.

So, one takeaway for Apple is to improve their response process and transparency - transparency they have always lacked when it comes to security.

>First, Apple did not even acknowledge the report to the initial finder.

That is strange. I worked with some folks who interacted with researchers and their reports.

Half the battle with them was getting back to the researcher and getting their cooperation about keeping quiet + assuring them they're working on it, and working with them if additional data is needed. And at the same time fending off internal folks who have poor instincts and want to push back against, blame, or even punish the researcher (this was surprisingly common at companies who even should know better).

It's not the hardest part but developing that trust can be a big difference between a possible PR nightmare or not, and the initial contacts are a big deal.

Also, researchers who you get along with sometimes come back with better data, additional bugs, etc.

How long is reasonable? Consider the number of incoming bugs, etc.

With a standard case management system this can (and is at other companies) automated - so maybe an hour max for an acknowledgement that a case got created?

It sounds like Apple might be doing this all via email, with no case management software. That would be pretty bad.

No, everything is tracked. Apple uses Radar (an internal tracking system).

You can't automate handling of bug submissions unless they're semi-automated (crash reports, spintracer, etc. get coalesced, and high-volume ones get higher priority).

But people filing bugs themselves? That goes through triage to filter out unactionable bugs and send to correct engineering teams.

Those then get screened, by a dedicated screener if the team is big enough, and routed to the right subcomponent or engineer.

That all takes time, and is unavoidable.

Also, when a user reports a major flaw in your product and your response is that you will not lift a finger until the user fills out the correct form, you have reached the state of a useless bureaucracy, completely unconcerned about the quality of your product. I remember being given that advice by an employee on the vendor's own forum where I reported a problem. They are still waiting for their bug report, and I have found an alternative product.

I found a bug on the PayPal site (I can't associate a new email address). I tried the contact forms in their app and on the website, both resulted in error messages. I tried emailing them, but got an automatic message back saying that they don't monitor emails, instead directing me to the support form on their website, which you have to log in to access. And as I mentioned, that form just ends in an error.

So it seems like a way more common problem than it should be.

I sent Stripe a bug report once and they replied I had to sign in on their website to email them.

There's a guy on my team (of 12 people) in Apple that literally does nothing but screen bugs. I think he screened 100 bugs last week and there are still hundreds more unscreened. It's really hard to keep track of.

There's literally a team of people that send bugs to the appropriate group, where there are other people to do secondary screening…

Other people are saying "Apple must get thousands of reports of dubious quality". If that's true I'd expect O(100) reports screened per day? I guess automated screening roots out lots of them?

This is for one feature that's not even that big - this is not just randomly screening bugs, this is screening all the bugs that get assigned to us.

Raymond Chen (at Microsoft), for example, has written repeatedly about incidents he spent lots of time investigating where your first instinct is "That's not a bug" and the eventual outcome was "Yup, that's not a bug", but it was a security ticket and so Raymond doggedly chased down every aspect to make sure it isn't a problem.

He mostly invokes the Hitchhiker's Guide quote "It rather involved being on the other side of this airtight hatchway" to suggest that often the problem in these bugs is that they have a step where you've legitimately got privileges, and then they use those privileges to... do something you legitimately need privileges for. That's not a bug, if you try it without the privileges it doesn't work, but Raymond has to walk through all the steps seeing whether anything surprising is going on.

Now, whether engineers at a corp actually get given the space to do this stuff is an executive policy decision. Maybe at Apple not enough of them do, I can't say. But if it doesn't get done you are sooner or later going to miss cases where it _sounds_ like it's not a bug but actually there's a serious bug if you looked closely.

Yes, a company at that scale could do it, but they’d have to actually spend some of their $200 billion instead of hoarding it Scrooge McDuck style.

This is amusing to type, I guess, but not really true. Apple spent a huge amount last year: on stock buybacks (sigh), on building an enormous TV/film production studio from scratch, on a huge effort to develop self-driving cars, etc., etc.

If Apple were doing anything remotely similar to "hoarding", they'd have more like $700 billion in cash right now.

What would happen if you submitted the bug via certified mail? How would that get triaged?

One startup that’s working on this is unitQ: https://www.unitq.com

If this were their only QA/QC issue, then fine, it might be excusable.

But it's this, broken MBP keyboards, broken MBP hinges, bent iPads, the MBP core-i9 thermal issue and probably a few I'm forgetting.

Nintendo doesn't have these kinds of failures, and they're pretty analogous to Apple: hardware and software. What's the difference? Nintendo cares and Apple doesn't. (Not to say Nintendo is issue-free by any means.)

Maybe they should put some of that $250b in cash to work upgrading their processes.

Nintendo has different priorities. They use plastic screens instead of glass for instance.

Their focus is on building cheap but durable goods, e.g. plastic scratches more easily but is less likely to shatter catastrophically.

They have definitely had their own industrial design failures - the Wii would sometimes overheat in sleep mode, the 3DS would scratch its own screen when placed under pressure in a pocket, the Switch supposedly can have its screen scratched by a bent dock. The initial run of Switch controllers had an issue where the left-side controller could disconnect very easily. Of course, when your product is $149 or $299, replacing it or living with a scratch doesn't hurt as much...

Like I said, Nintendo is not perfect. But they at least try. Apple frankly doesn't appear to give a damn about shipping broken devices/software right now. And they have the resources to easily give a damn.

Nintendo gets criticized pretty heavily for their decisions to do stuff like use plastic. Their devices are frequently described as “toys” on gaming forums that I visit due to this.

Apple has an entirely different brand they need to maintain which is why they use metal, make their devices thin, have edge to edge screens, etc.

Personally I have no problems with “toy” style design but I think a lot of apple’s premium is based on their sense of style and the feeling people have that iPhones, iPads, Macs are a “premium” product due to the industrial design. As an old school Mac user from back when Macs were beige boxes I personally don’t care about that stuff.

> Nintendo gets criticized pretty heavily for their decisions to do stuff like use plastic. Their devices are frequently described as “toys” on gaming forums that I visit due to this.

Are you sure it's not because their products are literally toys?

Incidentally, there is no shortage of real honest to god tools which are made out of plastics. Glass reinforced plastic can be a very robust material, particularly for its weight, and the weight of a tool is often an important consideration. Fetishizing metal for the sake of metal is often done in the domain of luxury goods where feeling and appearance count for more than objective practical physical properties.

Right now my primary computer is an aluminum macbook. The aluminum certainly looks nice, although I've found that the hard edge hurts my wrists when I rest my hands on it. Would putting a slightly larger radius on that bevel really negatively impact the supposed physical strength of the metal design? Nah. But it would negatively impact the aesthetics of the laptop. Apple made it sharp because sharp looks thin and looking thin sells well.

Contrast this with the T60 I was using a decade ago. It's not thin and it's pretty damn heavy. It's also got a metal skeleton but plastic exterior. But for all the faults of that laptop, its plastic components do not make it fragile. I dropped and stepped on that thinkpad more times than I can even recall but not once did the plastic break. Plastic, the right plastic, is actually pretty damn durable.

The thinkpad of course is still a typical consumer good. But next time you're walking past a construction site, take a look at the tools being used by the workers. There are a lot of plastic parts among those tools, and the tools hold up just fine in an environment far more demanding than anything you put your Apple product through.

It's not hard to reach a living person who works at Apple. The next step should have been reproducing it for that person. It's not clear that the finders in this story did that - it seems they might have been trying to find out about bounty payments first.

If any tech support or Apple Genius were to have seen this bug reproduced, it should have immediately been easy to flag to the right person.

What if you don’t live near an Apple Store like a lot of people do?

Saying that it’s not hard for a person to get in contact with someone working at Apple seems like a pretty biased view.

"Looking for a bounty?" Where did you get that from?

Bug Bounty I would assume, for finding the error.

Of course. However: I didn't re-read the article, but I don't remember that it suggested anything of the sort, so GGP just projected that onto them.

Years ago my team and I discovered a pretty significant bug in Safari's/CFNetworking's TLS implementation. Once the browser had deemed a certificate valid once, it would subsequently accept it for all hostnames. We got absolutely nowhere with Apple's official security contacts. The issue only got resolved months later, after I was able to find an employee from their security team at WWDC and explain the issue face to face.

Care to tell how it went? Did he have an explanation for why the process was so crappy? Did he maybe even know about your bug report but was unable to do something about it because of some bureaucracy?

We did not have any visibility into the process. Overall I think they just didn’t see it as that big of a deal, definitely not big enough to change release schedules for. This got assigned a CVSS score of 6.8, so not Critical or even High severity. Still feels pretty severe to me, but I guess that’s how everyone who discovers an issue like this would feel…

When I saw the headline, I assumed it was a situation where someone had emailed the wrong address or only tried to contact them via Twitter. But upon reading the article I see this is a high-quality report. She was sounding alarms and emailing all the right people. It's insane that Apple missed this.

I think at this point, we need Tim Cook to write an apology piece about how they screwed up, how this won't happen again, and who got fired.

We also need some kind of hardware indicator, like a light that shows when the mic or camera is turned on. After a blunder like this, a privacy-focused company needs to rebuild trust that they take privacy seriously.

There already is one when an app is still listening to your microphone in the background: a giant red bar at the top of the screen. The difference with this bug is that the FaceTime call pre-initializes the microphone and video to reduce the initial connection delay.

Right, I don’t want something in the software which is what happens today. I want a hardware indicator, that should in theory be harder to break or hack (and should make this kind of bug more obvious to a QA team and the general public).

Not turning out to be a great week for Apple. Even if they do receive a large number of bug reports, I would like to think they have the resources (let's face it, they're not cash-strapped) to resolve something as critical and privacy-focused as this. Their failure to do so makes a mockery of their users who pay a significant premium for their products, often in the name of privacy.

What is happening with Apple? People used to justify the high cost of Apple devices claiming they paid for the "high quality". But now... first the "bug" that allowed root access on macOS, and now this "bug" that literally allowed anyone to spy on you through your iPhone? Not to speak of iPads/iPhones that bend, iOS throttling due to weak batteries, etc.

Something is quite wrong ...

I'm no Steve Jobs fan or Apple apologist but it seems obvious enough - Jobs was the force driving the company forward and now the momentum he created is finally starting to wear off. I really hope they can get their shit together sooner or later as they still seem the best of a bad bunch wrt privacy - at least for now.

I submitted it before those 2 articles, but it got lost on new. It got resubmitted by the admin, who sent me this email:

We thought you might like to know that we put https://news.ycombinator.com/item?id=19029573 in the second-chance pool, so it will get a random placement on the front page sometime in the next 24 hours.

This is part of an experiment in giving good HN submissions multiple chances at the front page. If you're curious, you can read about it at https://news.ycombinator.com/item?id=11662380 and other links there.

Yup, that was me. When we merge threads, we try to favor the earliest submission. Eventually we're hoping to have some kind of karma sharing so it isn't such a lottery.

While you're here: my comments get published, but my links never appear, even in the new section. Is anything wrong with my account?

The guidelines ask you to email hn@ycombinator.com with questions like this, so please do that in the future.

Your account lost submission privileges at some point. I'm not sure exactly why, but have taken the penalty off now.

I'm sorry, and thank you.

I submitted the WSJ story only 2 minutes before josu's original The Verge submission. Josu's submission was just above mine on the "new" page, so when I clicked on it and read the Verge article I realised it was a better article than the one I submitted (and not paywalled).

My first comment on my own submission was for people to look at the other article as well/instead.

Karma sharing sounds nice, but in the end, I just want people to read the better article.

You're right! Yours was earlier. Either I didn't notice that or I saw your comment or I decided the other article was better... can't remember.

Thanks for caring about the quality of HN. That's what matters.

Thanks, I didn't realize. And yeah, I don't feel like people here care that much about karma. Usually when I submit articles it's because I want to read the discussion they create on HN.

I see. Thank you.

dang, that was quick

People who think that what happened is unacceptable need to understand that Apple must receive a lot of these types of calls every week. What would you do if someone sent you multiple messages saying that they found a major issue _without even detailing anything_, while this person actually wants you to give them money for what they found (and still hasn't disclosed any information about it)? I'm sure the majority would ignore these calls unless some details were shared about the issue.

I am not surprised about what happened at all. There is an argument to be made that, since it took Apple so many years to finally implement group video calls, they could have taken a little more time to do it right, but other than that, I don't see how Apple could have prevented a bug that a person wasn't willing to disclose without being paid first.

The actual tip: https://pbs.twimg.com/media/DyGIwiHVYAAJaxH.jpg

You don't see this kind of stuff every week, and surely Apple has the resources to at least confirm it.

This was seriously as good as you can expect from a non technical, non QA person, and actually better than what lots of technical users would report.

A video as reproduction steps is as credible as it gets. I hope she and her son get a nice reward from Apple...

You're telling me some random customer wrote _THAT_ writeup? That's impressive.

Random lawyer - turns out the ability to describe things in clear, unambiguous language and in detail results in pretty high quality bug reports.

If they aren't hiring enough people to clear their bug report queue every day, that's still unacceptable. They're sitting on gigantic piles of cash. They can afford it.

You're correct and worse still is the reception on discussion sites like these: it's sad that people encourage us to take pity on billionaire organizations that apparently don't hire and train the people needed to do decent work and avoid what has become a pattern of ugly security handling.

http://www.telegraph.co.uk/technology/apple/8912714/Apple-iT... is a story about how Apple didn't fix a remotely-exploitable bug in iTunes for years after being notified about it. During that time governments were said to have used this bug to infiltrate users' computers.

And the worst part of it all: The majority of Apple's software (certainly both FaceTime and iTunes) are proprietary programs -- user-subjugating software that does not respect a user's freedom to run, inspect, share, and modify the program. This means that even the most skilled and willing users are prohibited from fixing the problem and distributing a fixed version of the program to help their community. So, rich or poor, proprietors do their users a disservice by distributing proprietary software.

Another product stream and company (and hype) I was never a fan of. Best thing they did was to rip off FreeBSD and the worst was break *nix compliant userspace + influence design UX and UI patterns for a new generation.

There has recently been some activity here on HN regarding formal model checking and protocol verification (TLA+, SPIN, Promela...). I guess those are relevant to this case.
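As a toy illustration of the idea, here's a minimal explicit-state exploration in Python rather than TLA+ or Promela (the call states and transitions are invented for illustration; this is a sketch of the technique, not a real verification of FaceTime):

```python
# Toy explicit-state model checker, SPIN-style: exhaustively explore a
# tiny (invented) call-protocol state machine and check a safety property.

# State tuple: (phase, accepted, mic_on)
INIT = ("ringing", False, False)

def next_states(state):
    phase, accepted, mic = state
    succ = []
    if phase == "ringing":
        succ.append(("connected", True, mic))   # callee explicitly accepts
        # Buggy transition: caller adds themselves to the group call and
        # the server treats it as an answer -- mic opens without acceptance.
        succ.append(("connected", accepted, True))
    return succ

def check(invariant):
    """Search all reachable states; return a counterexample or None."""
    seen, frontier = {INIT}, [INIT]
    while frontier:
        state = frontier.pop()
        if not invariant(state):
            return state                        # invariant violated
        for s in next_states(state):
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return None

# Safety property: the mic is never live unless the callee accepted.
bad = check(lambda s: not (s[2] and not s[1]))
print("counterexample:", bad)
```

A checker like this finds the bad transition mechanically; the hard part in practice is writing a model that faithfully reflects what the servers and clients actually do.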

This stuff is hard.

That letter from the lawyer to Apple is quite inflammatory.

How so?

Maybe I'm just too old and cantankerous and don't "get it", but warning Apple via Twitter[0] isn't really following a Coordinated Vulnerability Disclosure process, yeah?

[0] - https://resources.sei.cmu.edu/asset_files/SpecialReport/2017...

EDIT: Changed the link to the CERT guide for CVD.

Reading that document you provided, the mother was correct to hit up Twitter, as it suggests that "Customers" contact Apple Support. Their Twitter account is an official channel to that end.

The only section of that document under which she would have reached out to Apple's security contact is the one headed "Security and privacy researchers", and I doubt she thought of herself or her 14-year-old son as such.

>...of which I am doubting she thought herself or her 14 year old son as.

Yet, that[0] is precisely what she did...? Your argument falls on its face by her own action[s].

EDIT: Note in the screenshot that there's an appended "Follow-Up" with what looks to be an ID, which has been added to the Subject field of the email.

[0] - https://cdn.vox-cdn.com/thumbor/zrezAXK0-NdK3ugN3G2Uwd_vzuo=...

The tweet wasn't the warning. The tweet describes the 14 year old's mom sending a formal notice of some kind.

Anyway, it's absurd to say that tweets aren't enough warning if they're public. A public disclosure may be distasteful to Apple, but it doesn't change the fact that they have a security emergency.

>Anyway, it's absurd to say that tweets aren't enough warning if they're public.

Who said Tweets weren't enough warning? Did I say that? Where?

...and what does a warning have to do with CVD? The two precepts aren't mutually exclusive, yeah?

You yourself implied a relationship between CVD and the tweet when you said that a tweet isn’t a valid step in a CVD process.

> isn't really following a Coordinated Vulnerability Disclosure process, yeah

Why should they have followed that process? Coordinated Vulnerability Disclosure is just one way of many of disclosing security problems. It's not the single right way.

>Why should they have followed that process?

They started doing just that (did anyone actually read the article?) and then decided they weren't getting enough traction and tweeted at Apple. Then, someone discloses the full vulnerability later because... reasons?

CVD isn't the single right way, you're correct, but it allows the vendor to address the issue and remediate it before the exploit is published. In fact, CERT states that they'll publish exploits after 45 days (I think it is) of non-response from vendors.

To me, it sounds like someone started going the right direction and someone else took over with the loudest-voice-gets-the-most-attention approach to PR value - and there are arguments for and against that.

However, since they already started going down the "right" road, I don't see why there's this crusade to say "all reports should be accepted through any channel". It's an untenable precept.

Should start-ups or FOSS monitor social media for security reports? Don't they define reporting processes of their own?

I'm not saying, "Don't ever use social media," which is what I gather some people misunderstand this as. I'm saying: if they started down the reporting path via the appropriate channels, then why disclose the vulnerability publicly, if they were already going down the appropriate path?

As someone else mentioned, it should've been on support to see the tweet and pass it on to the appropriate team; especially, since they had already opened a bug for it. Yet, I don't see how this automatically equates to just dumping the exploit into the public domain. (I hope my explanation of it makes sense, at least?)

If we agree that they didn’t claim to be doing CVD, were under no obligation to be doing CVD, and CVD isn’t the only way, then why did you comment saying ‘this isnt CVD’ and post a definition as if they were confused about what CVD is? I mean it’s not relevant because that’s not what they were doing.

You might as well comment ‘maybe I’m cantankerous but they don’t seem to be baking a cake here’. You’re right they weren’t baking a cake. So what of it?

>If we agree that they didn’t claim to be doing CVD


>I mean it’s not relevant because that’s not what they were doing.

If they first reported it via product-security@apple.com, what were they doing then?

> If they first reported it via product-security@apple.com, what were they doing then?

Just reporting it by email! Why does that mean they thought they were following someone else's idea of how to do disclosure? It's not called cvd-only-security-reports@apple.com is it? Maybe they'd never heard of CVD. Maybe their idea of disclosure is to email and then Tweet it as well.

Do you see what I mean though? You snarkily ask 'maybe I'm wrong but this doesn't look like X' when nobody ever said or implied it was X. It doesn't make any sense as a criticism.

I see what you're getting at but the point you missed was the week (I believe it was) between when they opened the report and then the tweet happened. Then, not surprisingly, the exploit is fully published publicly (the next day, I think?).

So, to explicitly say they weren't aware of 'x', when it doesn't match the timeline, is also - in and of itself - possibly disingenuous. Do you, at least, see where I'm coming from on that angle?

> So, to explicitly say they weren't aware of 'x', when it doesn't match the timeline, is also - in and of itself - possibly disingenuous. Do you, at least, see where I'm coming from on that angle?

I see where you're coming from but I don't think it really does imply they were aware of CVD enough to be snarky and wave a standard in their face. They probably just thought they'd give Apple some time instead of thinking 'I'll follow CVD here'.

> Should start-ups or FOSS monitor social media for security reports? Don't they define reporting processes of their own?

If they do have Twitter and other social media accounts for support then I think they should.

The story behind this particular report seems muddled quite a bit and the history of the report is quite weird. Maybe they wanted to have dibs on the report as Apple does not have a bounty program?

>The story behind this particular report seems muddled quite a bit and the history of the report is quite weird.

That's pretty much what I'm getting at. Everyone wants to jump on the "Apple's done a shit job with this" bandwagon - which, if you hate Apple, is your prerogative - but to go from reporting it, to a tweet, to a full public drop of the exploit less than a day after the actual tweet isn't going to end well for any company, no matter who it is.

>Maybe they wanted to have dibs on the report as Apple does not have a bounty program?

That's - ultimately - what I believe happened here.

Apple staff should copy/paste the information reported to them on Twitter to the relevant team for triage or investigation. The user is already helping enough by telling them on Twitter.

This is true, but she had already emailed them and, from the looks of the screenshot, received a response[0]?

I still fail to see what that has to do with CVD, though?

[0] - https://cdn.vox-cdn.com/thumbor/zrezAXK0-NdK3ugN3G2Uwd_vzuo=...

It sounds like the problem was that she never got a response. She sent the original letter, never heard anything back from them, assumed it got lost or was ignored, and then tweeted.

from petaluma to kankakee! finding bugs in the internet of shitty things! the latest craze to sweep the nation!

How many other hundreds-of-billions-of-dollars companies could produce a production code fix faster?

(According to the article) they didn't spend over a week developing a fix for the bug; it took them over a week just to disable the feature (given the severity of the bug, this should have been done as soon as they knew about it).

It also sounds like the person reporting the bug was ignored until she made a developer account and reported the bug through that - that shouldn't have been necessary.

I don't agree that the article makes or implies this claim. A more plausible timeline is that Apple (incredibly) intended to roll out a client-side update fix on a relaxed schedule, until the repro went viral and forced their hand on a server-side shutdown.

It is not credible to imagine that an emergency disabling of Group FaceTime takes a week.

> roll out a client-side update fix on a relaxed schedule

"There's a bug that allows our devices to be used as eavesdropping tools. But let's relax about it!"

> "We at Apple believe that privacy is a fundamental human right but also recognize that not everyone sees it that way," Cook said[1]

I guess that "not everyone" includes some product people working on FaceTime.

1. http://time.com/5433499/tim-cook-apple-data-privacy/

> A more plausible timeline is that Apple (incredibly) intended to roll out a client-side update fix on a relaxed schedule

The most plausible scenario is that somebody at Apple threw the original report in the figurative trash bin after failing to read it or failing to realize the significance of it (although given the clarity of the report, failing to read it and failing to understand it are virtually the same thing.)

Right, sure. I mean the most plausible explanation given that Apple said that they were previously aware of the problem and had a scheduled fix in the pipeline to roll out within a week.

The fix itself depends on what the problem is - clearly they went for a “disable server side” route, but I’m sure they also tried to work out if they could filter on the server side. My guess is they can’t because the “call accepted” response is probably in the encrypted data stream.

Rolling out a system update is also not immediate (e.g. you need to be sure you don't end up with user flows that result in no audio), hence the focus on server-side fixes (which led to today's "kill it server side" fix).

But also “a week to fix” depends - it’s a week from the report, at which point it needs to be screened. So let’s say a day to get to an actual engineer - it may take longer to get appropriate security keywords attached - then the engineer would need to actually read it, which depends on what their current workload is, where they are in their release schedule, etc.

It’s not trivial - all big companies get thousands of bug reports a day, and they just take time to get to the right place.

I agree with what others have said (telling a consumer to get an ADC account is clearly suboptimal). But in general, the path from consumer to security team is challenging for everyone.

It should only take a few minutes if you’re the average HN commentator.

A HN commentator would never have made such a bug in the first place. Duh. :D

Follow-up: would it be "an HN" or "a HN"? E.g. aitch-en, or haitch-en, or hacker?

As before, I'm sure the world's greatest linguist will also be here :)

Probably "an HN", as saying the letter "H" sounds like "ayy-ch" (in American pronunciation, at least), which has an initial vowel sound.
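For what it's worth, that rule can be sketched mechanically (a toy Python function; the letter set reflects my own assumption about English letter-name pronunciations, and H is famously contested between "aitch" and "haitch"):

```python
# "a" vs "an" before an initialism is decided by the first *sound* of the
# spoken letter name, not the first written letter.
VOWEL_SOUND_LETTERS = set("AEFHILMNORSX")  # names like "eff", "aitch", "ess"

def article_for_initialism(word):
    """Pick 'a' or 'an' for an initialism pronounced letter by letter."""
    return "an" if word[0].upper() in VOWEL_SOUND_LETTERS else "a"

print(article_for_initialism("HN"))  # -> an ("an aitch-en")
print(article_for_initialism("YC"))  # -> a  ("a wye-see")
```

So "an HN commentator", but "a YC company", assuming the "aitch" pronunciation.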

Right, but when I tried saying it in my head it seemed weird, because my brain turns it into "an hacker news" :)

They could have built their own Group FaceTime in their mom's basement in a weekend. With blackjack and hookers!
