Hacker News

If you are able to perform the following steps for any of Amazon, Google, Facebook, Netflix, Microsoft or Twitter, I will literally eat a hat (you may choose what kind):

1. Discover an easily exploitable vulnerability that allows access to a chosen user's private data

2. Email their security address about it

3. Tweet at them about it

4. Fax them about it

5. A week later the vulnerability is still exploitable

You do not have to play fair. You're allowed to impersonate a suburban mom or a grandpa who's not good with technology. You're allowed to ramble or use vague and non-technical terms as long as a reasonably qualified person could determine what the vulnerability is. Specifically, it is not required that you include relevant product versions, steps to reproduce, a full reproduction video, or a one-sentence impact summary like "a caller can eavesdrop on the recipient of a Group FaceTime call without their knowledge or consent".

There's no apologising this away. The vulnerability was already a monumental fuckup, but this detail propels it into the realm of cultural dysfunction. It should not be possible to fail this badly. If you put listening devices in people's pockets, you need to hold yourself to a higher standard than "I dunno, bug reporting is hard".

I work for a large software company. I am not on our security team, but have worked with our security team to investigate and resolve reported issues. I agree that it should not be possible to fail this badly.

Even nonsensical reports to our security address will, with an SLA measured in hours, be read by a qualified human who will then reply to at least say "we're looking into it".

A credible report will be immediately escalated to someone with relevant domain expertise for investigation. The security engineer will attempt to reproduce.

A confirmed report will be escalated to an executive, who will determine urgency. For issues like this, where sensitive data is exposed, people will be woken up and several things will happen in parallel: the scope of the issue will be assessed, the root cause will be found, potential workarounds will be identified, a fix will be implemented, the potential existence of related issues will be investigated, and the reporter will be contacted to assess disclosure risk.

Even in the worst case, where a complicated vulnerability exists in multiple versions of multiple products, requiring multiple patches and backports and requiring coordinated disclosure with partners, I'd expect a fix to be in customers' hands within 14 days.

Yeah, this is particularly the case at Google and Facebook. If you submit a security report to Google or Facebook through their bug bounty programs and escalate it with a critical severity tag (whether or not it's justified), someone on the application security team will review it within an hour. I can say that from experience (on both sides). If it's a legitimate sev:critical vulnerability, a workaround will usually be in production within 24 hours.

I think the intrinsic failure here is that Apple is, more than any other FAANG-like tech company, fundamentally uninterested in vulnerabilities that don't represent root-capable jailbreak vectors. Or rather, they ostensibly care, but every single process is systematically designed to prioritize that class of vulnerability above all else. Other types of vulnerabilities are treated as second-class citizens, so to speak. Apple does a lot of things right from a security perspective, but this really isn't one of them, in my opinion.

This is very clear, despite corporate messaging, if you follow along with their bug bounty program. Consider that the bar for submitting to Apple's bug bounty requires the vulnerability to be capable of compromising the device's sandbox or root privileges. This is explicit: a userland privacy bug is not sufficient. Furthermore, the bug bounty is strictly invite-only, and even some of the most accomplished and talented vulnerability researchers in the world are shut out of it: https://twitter.com/i41nbeer/status/1027339893335154688

More generally speaking, a reliable formula for putting a vulnerability in front of someone who is both qualified and paid to urgently care is the following:

1. Look up the security team at the company. Not security contact information, the team.

2. Find out individuals on that team by going through blog posts, conference talks, etc.

3. Find those people on Twitter. Tweet at several of them with the broad strokes: you have a vulnerability in X product, you need to securely report it, you believe it's N severity, how should you do it?

But of course, you shouldn't need to do this. You should be able to fire something off to security@ or, better yet, a bug bounty program.

The biggest problem isn't the time it took to fix the bug, whether that's a little harder on iOS or easier on the backend. It's that Apple refused to listen to anything until the media broke the story, and then went into damage control.

How many times have we heard that security researchers or developers can't be bothered communicating with Apple's black hole anymore and decide to publish their findings on Twitter?

This isn't the first time, and if nothing has changed this surely won't be the last.

Not sure what the "challenge" is.

That -- being warned and taking time to close a vulnerability -- has happened time and again.

Some examples:




And I'm not sure how closing a vulnerability in a backend service, which is your main (and, income-wise, only) product (as is the case with Google, FB, Twitter, etc.), compares to closing one in a secondary product like FaceTime.

Well, pointing out that tech company engineers aren’t really all that great, indeed even below average, threatens, like, one of HN’s biggest orthodoxies: that the people who read and write in this forum are brilliant.

There have been numerous examples in the past where companies like Microsoft have taken way longer than one week to fix serious vulnerabilities. What makes you so confident you won't be eating a lot of hats?

Are any of these examples in recent history, in software terms? Software culture, especially with regards to security, has come a long, long way since the bad old days. I agree with the parent comment. This is not acceptable in 2019.

Can you point to any of these numerous examples?

>Are any of these examples in recent history, in software terms?


Is there a reason why the rest of the parent comment was left unaddressed? The implication was that such examples should be provided. I'm also curious to hear them.

Several such cases are made public every year.

Here's one:


"The vulnerability came to light in mid-September after the Trend Micro Zero-Day Initiative (ZDI) posted details about it on its site. ZDI said Microsoft had failed to patch the flaw in due time and they decided to make the issue public, so users and companies could take actions to protect themselves against any exploitation attempts."

Or how about this?

"Google Admin, one of Android’s system-level apps, may accept URLs from other apps and, as it turned out, any URL would be fine, even those starting with ‘file://’. As a result, simple networking stuff like downloading web pages starts to evolve into a whole file-manager kind of thing. Aren’t all Android apps isolated from each other? Heck no, Google Admin enjoys higher privileges, and by luring it into reading some rogue URL, an app can escape the sandbox and access private data. How was that patched? First, allow me to brief you on the way independent researchers disclosed the vulnerability. It was discovered as far back as March, with a corresponding report submitted to Google. Five months later, the researchers once again checked what was going on, only to find the bug remained unpatched. On the 13th of August the information on the bug was publicly disclosed, prompting Google to finally issue the patch."
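The root cause described in that quote is a classic missing scheme check: a privileged component accepted any URL from untrusted apps when it should have allow-listed the schemes it actually expected. A minimal Python sketch of such a check (names and function are illustrative, not Google's actual code):

```python
from urllib.parse import urlsplit

# Only network schemes should reach a privileged URL loader; everything
# else (file://, content://, javascript:, ...) is rejected up front.
ALLOWED_SCHEMES = {"http", "https"}

def is_allowed_url(url: str) -> bool:
    """Return True only if the URL's scheme is explicitly allow-listed."""
    try:
        scheme = urlsplit(url).scheme.lower()
    except ValueError:
        return False  # malformed URLs are rejected outright
    return scheme in ALLOWED_SCHEMES
```

The key design choice is an allow-list rather than a block-list: a block-list of "dangerous" schemes would have missed whatever scheme the attacker tried next.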


Makes you wonder whether these bugs have been intentionally introduced due to pressure from some government agencies with a specific target in mind. And the "delay" in issuing a fix is just to make sure that the mission objective is first achieved.

I love the idea, but really wonder who the spy agency would submit that request to.

Would they ask the CEO for a bit of extra time? A security manager? How many assets would they need to be running in an organization of that size to get their way?

We do know that US Security agencies already interface and work with American tech companies to "safeguard" their commercial secrets and infrastructure. There may already be some well established protocols and processes in place.

Another (mildly) interesting tidbit to feed our theory - The Good Wife TV episode (season 7 episode 14) introduces an agency called "TAPS" or "Technology Allied Protection Service". In the episode, it’s supposed to be a multi-agency task force that works with Silicon Valley’s biggest companies to clamp down on technology theft.

I remember an article about the '90s, when you had to be a paying member of MSDN to be able to file a bug report.

https://news.ycombinator.com/item?id=18174226 .. "It's a feature". Now the only thing saving you from eating the hat is the Fax part, I'll give you that.

This is hilarious. The other day people were patting Apple on the back, surmising it would take Apple mere hours to fix, only now it's revealed that Apple failed to act on the report when they got it. And yet people still think major tech companies will fix anything within a week, but here we have a months-old example where Facebook decided to bury their head in the sand and it's gotten hardly any attention.

The biases in this community are out of control sometimes.

> Tweet at them about it

Baseball cap, please. I found a potential security bug and stumbled pretty hard delivering the details to MSFT (we all have our first time). The PM was emailing me within minutes; it wasn't deemed a risk.
