Disclosing vulnerabilities to protect users (googleblog.com)
305 points by nnx on Oct 31, 2016 | 179 comments



All browsers have about the same number of bugs, but Chrome's design (or, as they say, "defense in depth") acknowledges that there will always be bugs, so it tries to make it as hard as possible for those bugs to allow bad things to happen.

Compare vulnerabilities

Chrome 6% code execution, 0.4% gain privileges

Firefox 48% code execution, 8.4% gain privileges

Safari 60% code execution, 9.4% gain privileges

IE 79% code execution, 7.2% gain privileges

Edge 55% code execution, 16.4% gain privileges

Before you complain that Firefox and IE are older, we can compare just the last 2 years of vulnerabilities:

https://docs.google.com/spreadsheets/d/1nqBd7zmg6grVBEws_UPX...

I expect Firefox's newer stats to improve on this front as they move to Electrolysis and Servo.

It's interesting to note that Safari has zero "gain privileges" bugs for the last 2 years even though it has 22x the code execution bugs of Chrome.

http://www.cvedetails.com/product/15031/Google-Chrome.html?v...
http://www.cvedetails.com/product/3264/Mozilla-Firefox.html?...
http://www.cvedetails.com/product/2935/Apple-Safari.html?ven...
http://www.cvedetails.com/product/9900/Microsoft-Internet-Ex...
http://www.cvedetails.com/product/32367/Microsoft-Edge.html?...


It looks like you've made several errors quoting from the cited spreadsheet.

Aren't these the correct numbers according to the spreadsheet?

  Browser vulnerabilities 2015/01 - 2016/10
  =========================================
  Browser  Code Execution  Gain Privileges
  -------  --------------  ---------------
  Chrome            2.89%            0.58%
  Firefox          43.59%            2.88%
  Safari           69.11%            0.00%
  IE               69.08%            6.07%
  Edge             55.22%            2.24%
Source: https://docs.google.com/spreadsheets/d/1nqBd7zmg6grVBEws_UPX...
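For what it's worth, the percentages in that table are just category-count ratios from the cvedetails pages. A quick Python sketch of the arithmetic (the counts below are made-up placeholders for illustration, not the real cvedetails numbers):

```python
# Sketch of how the table's percentages are derived: for each browser,
# (vulnerabilities in a category) / (total vulnerabilities) * 100.
# The counts used here are hypothetical, not actual cvedetails data.

def category_percentages(counts, total):
    """Return {category: percentage of total}, rounded to 2 decimal places."""
    return {cat: round(100.0 * n / total, 2) for cat, n in counts.items()}

# Hypothetical example: 1000 total CVEs, 29 code-execution, 6 gain-privileges.
print(category_percentages({"code_execution": 29, "gain_privileges": 6}, 1000))
```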


I didn't quote from the spreadsheet. I quoted from the totals on the webpages. In other words, the quoted values are from when each browser was first created to today. The spreadsheet is just 2015-2016


Oh, I see, thanks. Well, I've copied in the values for just the past two years for easy comparison, then!


The whole spreadsheet is a bit useless to be honest.

The percentages refer to vulnerabilities discovered between 2015 and 2016 and so may apply to older software.

And it's not Safari but rather WebKit: there are issues listed specifically for the Apple TV, which doesn't even run Safari.


It should be pointed out that most vulnerabilities are, in fact, being exploited before they are patched. Citation in this article: https://medium.com/@xParXnoiAx/irresponsible-disclosure-52d0...


So that's linking to a Forbes article that's paraphrasing this study from 2012: https://users.ece.cmu.edu/~tdumitra/public_documents/bilge12...

It doesn't support your assertion, but it's still interesting 2008-2010 data from an antivirus vendor. It's talking about how long some vulnerabilities were exploited by malware before getting disclosed, their use in targeted attacks, and so on.


The study says that most vulnerabilities were being exploited before being disclosed.


I don't think it says that about the set of all vulnerabilities (IOW - citation needed!).

It does say "In this paper, we consider only exploits that have been used in real-world attacks before the corresponding vulnerabilities were disclosed" so it's unsurprising that in their dataset this is the case :)


Yep, you're right, I misread. Here is a quote from the paper: "15% of these exploits were created before the disclosure of the corresponding vulnerability." So there's a lower bound.


Hm.

> Chrome's sandbox [...] prevents exploitation of this sandbox escape vulnerability.

It reads to me as though Google disclosed an 0day to promote Google Chrome.

This is fine.

•••

Edit: http://venturebeat.com/2016/10/31/google-discloses-actively-...

> A source close to the company also shared that the exploit Google describes requires the Adobe Flash vulnerability. Since Flash has been patched, the Windows vulnerability is mitigated.

If the immediate risk has been mitigated (barring other unknown attack vectors, this was the immediate one of which Google appears to have been aware), this makes the disclosure seem even more shamelessly promotional. I'd presume the Flash vuln would be to enable drive-by exploitation, so if that risk is mitigated, then they really seriously could've let this one wait.


Disclosing in-the-wild vulnerabilities like this 7 days after their discovery has been Google's policy for more than 3 years now. https://security.googleblog.com/2013/05/disclosure-timeline-...


http://www.theverge.com/2016/10/31/13481502/windows-vulnerab...

> This is the first major invocation of the policy in the three years since it was put in place

http://venturebeat.com/2016/10/31/google-discloses-actively-...

> This is not the first time that Google has disclosed Windows vulnerabilities before a patch was ready. In fact, the company did this for Windows 8.1 twice in January 2015. Microsoft understandably wasn’t pleased, but this time around is even more serious. Both of those earlier vulnerabilities weren’t being actively exploited. [emphasis added]

•••

A longstanding policy doesn't make it a good one.


Both of those earlier incidents were after Microsoft passed the 90 day (3 months!) deadline.


That makes this one even worse, if anything. The disclosure in question wasn't even two weeks in.


Worse? Did you miss the part where this vuln is being actively exploited? I'd much prefer if Google disclosed this sort of thing to trusted software vendors immediately and the public within 24-48 hours of finding out.


> Did you miss the part where this vuln is being actively exploited?

Not at all. That's actually the crux of my argument that this is promotional.

Google, Microsoft, and Facebook all met at an event alongside DEF CON this year called BountyCraft. There, they shared their practices for handling vulnerability disclosure and related matters. All three arguably have the best secure development lifecycles in tech right now. All three are intimately aware of each other's practices.

Microsoft is assuredly fixing this into overtime. Microsoft assuredly knew this was in use in the wild. They're already working beyond full speed, and Microsoft has a history of sharing mitigations with third party security vendors. Given the relationship I described above, Google certainly knows all of this and knows that Microsoft won't be able to act any faster to handle this.

No one is served well by this disclosure except for two parties:

· Unprivy malware authors (the vast majority) who are now actively researching this disclosure.

· Google, by convincing users to use their browser for protection.

This was a marketing play.


Or it's just P0 following their policy. Which is the only way P0 can maintain credibility and exert pressure on vendors to fix things. Especially after Google management just publicly overrode them regarding an Apple vulnerability a week or so ago.


Sure. My earlier point:

> A longstanding policy doesn't make it a good one.

If this were a disclosure by the book, there would've been broader, less overtly promotional mitigations discussed.

Project zero would be better off reformulating its disclosure policy here if credibility is what they're going for. Sometimes it's healthier to admit being wrong.


While it might assuage some kneejerk skepticism like this, I don't think they're obligated to omit that Chrome has mitigations in place when disclosing a vulnerability.


You're absolutely right. I'd rather have preferred a few different responses, such as including multiple mitigating strategies, even if they're user actions such as browsing only trusted sites until a patch is released.

That said, even if the wording was "Browsers such as Chrome which block win32k.sys system calls using the Win32k lockdown mitigation on Windows 10 prevent exploitation of this sandbox escape vulnerability," I would've given them a partial pass.


>Google management just publicly overrode them regarding an Apple vulnerability a week or so ago.

Don't remember this. Can someone point me to a link?


  No one is served well by this disclosure except
  for two parties...
You missed one: the general public benefits from these disclosures.

Even though this gives an information advantage to an attacker (e.g., aforesaid "unprivy" malware authors), as a member of the general public, I would rather know that I'm vulnerable to some attack. I might change my behavior to limit my exposure. I may take additional precautions that would otherwise be too costly were it not for the additional risk. I might choose to accept the risk and continue working normally. My point is that by not disclosing, the security researcher makes my risk management choices for me, and I am not convinced they have my best interests at heart.

Likewise, public disclosure makes it possible for the customers of a software vendor to pressure them into addressing the vulnerability in a more timely fashion. Otherwise, these companies will not take security issues seriously. I have seen that pattern repeat over and over and over. The only way to get vendors to change is to name and shame, hit 'em in their wallet and hit them hard. Microsoft does SDLC now because of full disclosure, not in spite of it.


"Microsoft is assuredly fixing this into overtime"

Citation needed.


I can't give you a direct one short of saying I've interacted with their security teams and know their protocols to some extent. The indirect color I added to this image was from the Bountycraft interlude. Up to you if you want to trust any or all of that; you have every right not to.


Microsoft is assuredly fixing this into overtime. Microsoft assuredly knew this was in use in the wild. They're already working beyond full speed, and Microsoft has a history of sharing mitigations with third party security vendors.

And they still can't fix it in 7 days? Well, then they have to shape up or be replaced by a competitor who can. It's harsh, but customers benefit.


It means the other two examples aren't relevant.


Respectfully, but: I don't think Neel Mehta gives a fuck whether you use Chrome or not. He's Neel Mehta. He probably doesn't even care if people use Google to Google things. How to sum Mehta up? Even before he discovered Heartbleed, he was one of the most renowned vuln researchers in the industry.

A mental model of Mehta that has served me pretty well: if he says something publicly like this, it's probably because it's important and correct.

The idea that he'd be wasting his time on marketing stunts for the Chrome browser is a bit naive.


You certainly know him better than most people on here, so I can appreciate your position and believe you when you say it.

If that's the case, though, then he could benefit from reading this HN submission in writing future disclosures.


> If that's the case, though, then he could benefit from reading this HN submission in writing future disclosures.

Benefit how exactly?

By becoming another mindless conformist whitehat who never exercises full disclosure when a vendor isn't taking security seriously (but oh man will they ever pay lip-service to the idea of taking security seriously)?

That's what a lot of the comments in this thread would have him do, anyway.

I think the best benefit he could get from reading the comments here is the realization that so much of the tech industry knee-jerk-defends software vendors without realizing the importance of diverse vulnerability disclosure practices. Then when someone gets particularly nasty with him, he can shrug it off as just a particularly-vocal entitled developer who has probably never found a security vulnerability before, and proceed with following the established P0 policies.

But he probably already knows/realizes this.

But ultimately, I think the comments here would be a waste of time that could better be spent on things that matter far more than the egos of most HN commenters. Myself especially.


If the team wants to continue with the practice of full disclosure, that's on them. My gripe was with the promotional impression I got from the Chrome plug.

Therefore:

> Benefit how exactly?

Benefit by observing more nuanced language practices in the future. If tptacek's point stands (and I have no reason to doubt him), then Neel benefits by receiving less controversy for a highly controversial practice simply through tweaking the write-up.

It has nothing to do with "egos of most HN commenters." I'd like to think most commenters on HN have practical roles at their organizations and likewise have to deal with the fallout from disclosures like this much the way I now have to.


I can't even follow your argument anymore. Clearly, Mehta isn't a hype-man for Chrome. So what, exactly, is it that you want Google's security people to do? Their disclosure policies are probably the healthiest in the entire industry. If you're going to criticize them, please be as specific as you can.


Sure. How's "Don't come across as hyping your own product when you disclose an 0day"?

Even if the wording was "Browsers such as Chrome which block win32k.sys system calls using the Win32k lockdown mitigation on Windows 10 prevent exploitation of this sandbox escape vulnerability," I would've given a partial pass because it looks less overtly self-promotional.


I find this baffling. They're disclosing a vulnerability that has no patch, and they're clarifying that their own browser has a workaround. That's not self-promotion. In fact: not clarifying that would have been irresponsible.

3 minutes later, after a second read:

I understand that you're uncomfortable with the idea that for some users, switching to Chrome is the most reasonable workaround for a very serious vulnerability. But it is a factual statement.


That might be the case! That's why my statement was "don't come across as hyping" rather than "don't disclose a solution."

Elsewhere, I also stated a preference for conveying multiple solutions, "even if they're [only] user actions such as browsing only trusted sites until a patch is released." This would've been great to disclose for the many corporate users on Windows 7 since the specific mitigating feature implemented by Chrome (and Edge, if Microsoft's implication is to be trusted) in Windows 10 isn't available to these users.


Huh? You can't tell people to "only visit trusted sites". An attacker who is combining browser RCE with OS privilege escalation zero day will have no problem whatsoever coercing victims into visiting whatever site they choose.

You're suggesting that Google give terrible security advice simply to dull the discomfort you think they should feel about the fact that their browser's security model defeats this particular vulnerability. I feel like either I'm missing something, or you can't really be thinking this through carefully.


> Huh? You can't tell people to "only visit trusted sites". An attacker who is combining browser RCE with OS privilege escalation zero day will have no problem whatsoever coercing victims into visiting whatever site they choose.

An experienced attacker, sure. Targeted campaigns, sure. I'd argue this doesn't apply all that readily to attackers who might just buy an exploit kit for quick mass-mailed wins, though.

It's not terrible advice at all (though perhaps "only sites you know" might be better). It's only marginally effective, but for most users it's absolutely better than nothing at all. When the only advice an 0day disclosure gives is to use the disclosing company's product, it compels people to question the motive for the disclosure especially when, as discussed elsewhere in the thread, literally nothing is achieved by the public disclosure itself.

There are multiple solutions here. They might not all be nearly as effective as "Use Chrome" (paraphrasing), but they're all more effective than merely informing users that there's an 0day for a superset of users and then giving only a subset of those users a mitigation path.

I'm rather surprised we're not in agreement here, actually.


I feel like I didn't explain myself carefully enough, so just one more time, just to clarify, I'm going to re-state:

The attackers this disclosure contemplates have both browser RCE and a zero-day Windows kernel privilege escalation vulnerability. There is no site you can direct them to that an attacker can't briefly replace with a malware installation vector. The advice you're suggesting Google provide simply doesn't apply.


> The attackers this disclosure contemplates have both browser RCE and a zero-day Windows kernel privilege escalation vulnerability. There is no site you can direct them to that an attacker can't briefly replace with a malware installation vector. The advice you're suggesting Google provide simply doesn't apply.

Elsewhere in the thread (and far prior to our discussion here), I linked to http://venturebeat.com/2016/10/31/google-discloses-actively-...

> A source close to the company also shared that the exploit Google describes requires the Adobe Flash vulnerability. Since Flash has been patched, the Windows vulnerability is mitigated. That said, Microsoft still needs to plug the security hole as it could be leveraged in other types of attacks.

So, it would appear there's no RCE and that there's just an 0day priv-esc.


Yes, as I've been saying, it can be leveraged by any browser RCE.


If you're talking about the set of unknown size we're calling undisclosed RCEs, then we have two potential decisions which hinge on attack economics whereby an attacker leverages an unknown RCE in tandem with this priv-esc in:

  • targeted attacks (much more likely)

  • mass campaigns (I'd argue much less likely)

Since Google's disclosure is to everyone, advice which applies to everyone will also cover the set of people who:

  • aren't targets of attackers with an unknown RCE, and

  • are likely to download attachments w/ the priv-esc alone.

I don't have the stats, but telling everyone to be careful also tells this likely-majority subset of people to be careful, optimistically mitigating the substantially higher risk that they'll be owned by the priv-esc alone. The remaining at-risk set is the population of people for whom attackers may decide that burning an unpublished RCE is a valuable trade.

•••

If you're also talking about known and patched RCEs, then users who are missing those updates have bigger problems.

•••

To quote one of my employees who's got an eye on this thread: "At this point, the argument between the two of you is entirely pedantic in nature." heh


This is a lot of words, but the point here is simple:

The modern browser security model depends in part on the idea that arbitrary RCE does not equate to system privileges. That's what sandboxes and multi-process models are for.

A kernel privesc vulnerability bypasses the modern browser security model. That's why it's a big deal.

The disclosure points out that Chrome's sandbox preemptively blocks this particular privilege escalation bug, by not exposing its vector in its sandbox. But other browsers don't.


> But other browsers don't.

Microsoft seems to disagree without explicitly stating it, and Google didn't explicitly state this either.

Edit with relevant PR-speak from Microsoft: "We recommend customers use Windows 10 and the Microsoft Edge browser for the best protection."

It would be interesting to see if Project Zero tested for this in Edge, but since they gave no indication whether other browsers are vulnerable or not, it's fair to assume they haven't. Heh.


You are in a hole. It is dark. You can hear noises above.

There is a shovel here.

>


I know him barely at all. I met him once, at CanSecWest, in 2004. I remember it clearly, though, because he is famous. You basically have to not be in the software security industry to not know who he is.

This isn't some elaborate scheme to hype up Chrome.


Heh, I know of him, but I don't think a person's public persona dictates how that person conducts him/herself when it comes to a paycheck. Given that your endorsement is based on his public persona, I'm obligated to revert back to my original assumption that this very much is to hype up Chrome. Oh well ¯\_(ツ)_/¯


The part you conveniently omitted was the most important part in that sentence.

Chrome on Windows 10 is protected. So if anything, it's an ad for Windows 10, not Chrome. Chrome is still vulnerable on Windows 7 and Windows 8/8.1 - which is what, 70-80% of the Windows users? And as others have said, it probably affects all the other browsers, too, so if Google was indeed the "good guy" in this story, it would still do this so the users of other browsers find out about the vulnerability, too.


> The part you conveniently omitted was the most important part in that sentence.

Speaking grammatically, that's the least important part of the sentence. Quoting my reply to the same point elsewhere,

> Add on top of that the trivialization of "Windows 10" as the noun in a second-order preposition of place as well as the reference to Chrome as the primary subject of the sentence, and even basic analysis of the sentence in question shows a bias.

A proper non-promotional security disclosure would've included multiple mitigating strategies, even if they're user actions such as browsing only trusted sites until a patch is released. Even if that sentence was worded differently ("Browsers which block win32k.sys system calls using the Win32k lockdown mitigation on Windows 10, such as Chrome, prevent exploitation of this sandbox escape vulnerability."), I would've given them a partial pass.


> Chrome's sandbox [...] on Windows 10 [...] prevents exploitation of this sandbox escape vulnerability.

The Windows version is an important part of that statement, since it's otherwise unclear which versions of Windows are affected by the vulnerability.


What they mean is that in Windows 10 a process can opt-in to some security features which mitigate this attack.

Firefox/others will still be affected if they don't opt-in.

Just in case anybody wonders, this mitigation is opt-in because what it does is it disables all win32k.sys calls (GDI, ...).
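For the curious, here's a rough ctypes sketch of what that opt-in looks like. Illustrative only: Chrome does this from C++ during sandbox setup, and the enum value for ProcessSystemCallDisablePolicy is my reading of winnt.h, so treat it as an assumption.

```python
# Sketch: a process opting in to the Win32k lockdown mitigation by calling
# SetProcessMitigationPolicy with ProcessSystemCallDisablePolicy.
# Assumption: enum value 4 for ProcessSystemCallDisablePolicy (winnt.h).
import ctypes
import sys

PROCESS_SYSTEM_CALL_DISABLE_POLICY = 4

class SystemCallDisablePolicy(ctypes.Structure):
    # Mirrors PROCESS_MITIGATION_SYSTEM_CALL_DISABLE_POLICY's Flags bitfield.
    _fields_ = [
        ("DisallowWin32kSystemCalls", ctypes.c_uint32, 1),
        ("ReservedFlags", ctypes.c_uint32, 31),
    ]

policy = SystemCallDisablePolicy(DisallowWin32kSystemCalls=1)

if sys.platform == "win32":
    # After this succeeds, every win32k.sys call (all of GDI/USER) fails,
    # which is why only sandboxed renderer processes can opt in: the main
    # browser process still needs win32k to draw its windows.
    ctypes.windll.kernel32.SetProcessMitigationPolicy(
        PROCESS_SYSTEM_CALL_DISABLE_POLICY,
        ctypes.byref(policy),
        ctypes.sizeof(policy),
    )
```

Note that SetProcessMitigationPolicy itself only exists on Windows 8 and later, which fits the point elsewhere in the thread that Windows 7 users don't get this mitigation at all.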


> since it's unclear from this what versions of Windows are affected by the vulnerability.

If anything, this just makes it worse, since it tells me Windows 7 users are SOL.


> If anything, this just makes it worse

Makes what worse? There's no mention of other browsers (so how could this be an ad for Chrome? Is Edge vulnerable? Is Firefox?) and the plain implication is that Chrome on Windows 7 is indeed SOL.


> There's no mention of other browsers (so how could this be an ad for Chrome? Is Edge vulnerable? Is Firefox?)

If this wasn't an ad and was merely a security disclosure, I'd have expected appropriate research into which configurations mitigate the risk, not just a single configuration which happens to favor the browser maker.

Add on top of that the trivialization of "Windows 10" as the noun in a second-order preposition of place as well as the reference to Chrome as the primary subject of the sentence, and even basic analysis of the sentence in question shows a bias.


The disclosure came from Google. Of course there's a bias if they spent time coming up with a mitigation within their software. They aren't responsible for other browsers, so why would they research fixes within those browsers? That assumption is just plain idiotic.

They are under NO responsibility to research each and every possible fix. Hell, they technically didn't even have to come up with a mitigation for Chrome.


> They are under NO responsibility to research each and every possible fix. Hell, they technically didn't even have to come up with a mitigation for Chrome.

Then they should've waited until an official fix was out. They know one's being actively worked on. They know the primary exploit path (via Flash) was patched. To my earlier point: they should've just waited.

> That assumption is just plain idiotic.

That's not necessary. You might not see my face, but I'm still a person deserving of some amount of respect.


Exactly what is the benefit of them waiting until an official fix is released if they have evidence of actual exploitation? How is anyone better off not knowing the details of a vulnerability that people are exploiting already? I'm having trouble making sense of your arguments on this thread, and I really am trying.


> How is anyone better off not knowing the details of a vulnerability that people are exploiting already?

This question is strange. On the one hand, the disclosure scares users into switching to Chrome even though Microsoft asserts Edge offers protection. On the other hand, the disclosure actually does not disclose the details of a vulnerability that people are exploiting already.

Would you rather have had the disclosure include the exact vulnerability?

I've addressed your other questions surrounding the benefits and drawbacks in other threads. No point in re-hashing them.


No, you can't move the goalposts like this. You just said Google should have waited to disclose until a complete fix was published. In asserting that, you must also assert you'd rather have Google disclose nothing than something.


I would rather Google disclose nothing to the public until there's a fix, or at least until a far-further-along date than 7 days from the initial disclosure to Microsoft.

Now that that's settled:

I didn't move the goalposts at all. I called into question the inconsistency in your implied assertion that people know the details thanks to this disclosure:

> How is anyone better off not knowing the details of a vulnerability that people are exploiting already?

People do not know the details. The disclosure gave enough to inform those not privy on where to look. It did not inform the world as to the details of the issue.

I'd rather Google have disclosed nothing, for the reasons I've stated in the vast majority of my other comments. My position hasn't changed: no one benefits from a public disclosure here other than

  • Google (getting people to use Chrome as a defensive measure even though arguably Edge has this one covered), and

  • any unprivy malware authors who now know where to look.

Microsoft isn't going to act any more quickly here. Security vendors are already rolling out mitigations. Google's only advice was "use our browser" even though there are multiple solutions, both in terms of software (Edge) and advising on user actions for the general untargeted population. They included nothing other than:

> Chrome's sandbox [...] prevents exploitation of this sandbox escape vulnerability.

You asserted that they disclosed the details, which Google did not.

So now that my position is set clearly in stone without you able to turn it around with an argument about moving goalposts, let's get back to my current question: Since you asserted that Google disclosed the details even though they did not, would you rather have had the disclosure include the exact vulnerability?

Edited for formatting.


Lots more words, but we're back to the exact same place we were before: there's active exploitation of the bug, and you think Google should hide that fact to give Microsoft breathing room. That is an... unorthodox position to take.


> you think Google should hide that fact to give Microsoft breathing room.

https://news.ycombinator.com/item?id=12842909

I have full confidence that one of the most mature software security groups in tech isn't looking for breathing room on an 0day. Since you've been scouting out my posts, you know better than to put words in my mouth given that one of my biggest assertions here is that Microsoft's already moving as fast as it can and that a disclosure speeds nothing up here.

> Lots more words

I appreciate the attempt at a dodge, so I'll get right back to it: since you asserted that Google disclosed the details even though they did not, would you rather have had the disclosure include the exact vulnerability?


I would rather have what we got than nothing. You would rather have nothing. I still don't understand your argument.


Well done! That should be the rule, anything else is hypocritical.


No, "hypocritical" would be disclosing critical vulnerabilities in a competitor's product after a week of notification, when you yourself stop shipping security updates for your mobile operating system after two years; instead suggesting "buy a new phone!" as the solution. But no one would be that sleazy.


Not only is that not hypocritical (I would try actually looking up the definition, for one), it would also be monumentally stupid for security researchers to wait until all products at their employer are fixed before disclosing outside vulnerabilities.


Agreed. Though Analemma_ has a point about Google's decision not to provide security updates to Android devices/OSs which are just 2 years old.


It's 3 years of security updates for Pixel and Nexus phones.


And that's a minimum guarantee. We have yet to see if they truly stop or keep going.


No? We've seen Nexus devices being dropped after only 16 months of support.


I don't believe that's true. The shortest update life span was the 2011 Galaxy Nexus for 20 months. Ever since then the models have been updated for 3 years or more.


No? They got 2.5 years of security updates after release (and not 2 years of security updates after sale, as EU law demands), and they certainly only get 18 months of feature updates after release.

The Nexus 5 was already dropped in October 2015, having been released in October 2013.

Same with the Nexus 6 (it was dropped yesterday, being 2 years old).

It’s always been 18-24 months of updates max, and only if you buy on release day. https://support.google.com/nexus/answer/4457705#nexus_device...


That page you linked to says otherwise

> Nexus devices get security patches for at least 3 years from when the device first became available, or at least 18 months from when the Google Store last sold the device, whichever is longer.

Google is still releasing security patches for Nexus 5

http://www.androidpolice.com/2016/08/01/august-security-patc...

The most recent one for Nexus 5 aka Hammerhead was this month: M4B30X, Oct 2016

https://developers.google.com/android/images

You're mistaking the major version releases (6.x.x vs 7.x.x) for security patches; they are not promising major updates here, only security patches for 6.x.x.


> or at least 18 months from when the Google Store last sold the device, whichever is longer.

Which is illegal; EU law demands 24 months from when it was last sold anyway.

> they are not promising major updates

That’s another issue, many security features are only added in major updates (sandboxing Mediaserver, etc).

So you only get hotfixes, and not long-term fixes.


Sure, but Google profits from "Google Experience" devices (i.e., anything that ships Gmail, Maps, the Play Store) and doesn't push licensees like Samsung / HTC / LG / Sony etc. to maintain these devices.

If IoT security wasn't even worse Android would have been the new botnet platform.


I can't tell if the Windows versions affected are still supported. It would only be hypocritical if Google had refused to patch a vulnerability in their own supported OS within 7 days of it being privately disclosed to them. Android's support term being too short is another problem.


Yes, it's far better to not inform the public that their computers are being infected by an active Windows zero day.


I see where he's coming from though. It is hypocritical that Google cares so much about Microsoft's customers and then craps on their own user-base so regularly. Case in point: Google released the app launcher for Linux, waited for adoption to take place (so that devs and users had apps made, tested, installed, and in-use) and then they canned the whole thing because it was taking market share away from Chromium. Then, they canned Chromium OS!

If I were Microsoft I'd tell them to mind their own damn business as well. It's obvious anyway that the only thing Google cares about in this instance is making themselves look good at the expense of Microsoft.


> Then, they canned Chromium OS!

Chromium OS is alive and well. Perhaps you're thinking of the Chrome-specific "application" APIs, which have been deprecated in favor of web technologies?


No, they did what any security researcher would do, Google or not. To be honest, their call is a perfect one.

On the one hand, you don't tell anyone other than Microsoft and wait for them to patch as many people as possible while the bug is ACTIVELY being exploited and possibly thousands are getting hacked by the hour.

On the other hand, you tell people, and many are now able to patch it themselves RIGHT NOW, force an update on their machine, not use affected software, and potentially avoid a hack that would've happened if they didn't know.

So yes, it's not black and white, but I think their call of waiting a week was an okay one.


Why shouldn't Google care about customers that use Windows? A large percentage of them also use Google services so it's in their interest to inform them.

>If I were Microsoft I'd tell them to mind their own damn business as well.

What would you tell their customers that were infected by their negligence? Let me guess? "Mind your own damn business".


I'm sure Microsoft has a bug bounty. Google looks for bugs in competitors' software and then blackmails the developers with disclosure, regardless of that bug bounty. That isn't white-hat at all.


Bug bounties are usually for unexploited vulnerabilities that you find yourself. It sounds to me like google didn't discover this vulnerability on their own, they discovered malware that was exploiting it. I'm not sure if a bug bounty would pay out for a vulnerability you didn't discover.


Some comments feel almost like trolling


Maybe after the ridiculous Scroogled campaign that Microsoft did against Google, and stealing Google's search results, and extorting money from Android OEMs for lame patents, Google has earned the right to bully Microsoft a little. I'm okay with this.


Probably a very stupid question, but I'm wondering if containers (which are extremely popular nowadays) can help us to improve web security?

What I mean is that all major browsers are trying to create their own sandbox environment to be more secure, right? But maybe it is better to do it the other way round: give them all permissions, but put them in a controlled and restricted sandbox environment (a container). The browser window would then be just a small app that talks to that container, sends requests, and gets the resulting (rendered) page back and just shows it. So it would literally have no access outside of the container.

As a bonus, it decouples UI from backend.

Asking, because the purpose of containers is to isolate and create sandbox environments, and the purpose of browser is to render and show web pages. So why do browsers care about security, if we already have containers?

Or that makes no sense and gives no advantage/security?


> major browsers are trying to create their own sandbox environment

This is partially true: there are, for example, sandboxes built around website origin and related restrictions, but at the process level browsers reuse system mechanisms instead of implementing their own. For example, on Linux, Chrome uses seccomp and a number of other mechanisms to separate the UI from the backend. Windows has its own solutions for this.

So I don't think it's right to say they create their own sandboxes. At least not in a NIH meaning.

Containers are just a fancy package for the protections offered directly by namespaces and other existing restrictions. User namespaces are already used in chrome sandboxes for example (https://chromium.googlesource.com/chromium/src/+/master/docs...)


?? This is a sandbox escape exploit. Calling a sandbox a container isn't going to stop the exploit.


Have a look at what Microsoft is doing with Edge and Application Guard:

https://blogs.windows.com/msedgedev/2016/09/27/application-g...


Does anyone know which version of Chrome this was fixed in? This may force $EMPLOYER to finally update from an unholy mix of 50 and 53.


If you rely on printing things, don't update to Chrome 54.

https://bugs.chromium.org/p/chromium/issues/detail?id=658606


Christ, Google. At least the workaround is relatively simple - print to PDF (using Chrome's built in option) and then print that from another reader.


Edit: I deleted my original comment because I totally confused the brands (Xerox, not Brother). Brother doesn't do rebranded Samsung printers.

Link to Xerox issue: http://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres_...


Why would your employer fix you to a specific version of Chrome? That's bizarre.


Never underestimate the degree to which enterprise IT departments view change as something to be feared. I once heard someone say that they wanted to delay Chrome updates until their security team could review them — said review process basically being weeks of waiting in a backlog until someone searched for CVEs posted against that version.


I worked with a Government group that firmly rejected the concept of Windows security updates.

The general premise was, if you apply an update and something breaks (note that "changed behaviour" means "broken"), someone's head will roll. If you do nothing and get compromised, it's those damned <country> hackers again.


> Never underestimate the degree to which enterprise IT departments view change as something to be feared.

This. These supposedly risk-averse organisations also have no concept of the risk of stasis, only the risk of change.


> Never underestimate the degree to which enterprise IT departments view change as something to be feared.

Never underestimate management's ability to fire an enterprise IT worker for applying an update that the developers swear changes no behavior in the company's enterprise software, but actually does.

// been on both sides, we suck at testing and risk assessment in this profession


Definitely — when you see people afraid to do something, that almost always indicates a deeper organizational problem.


Large companies tend to be bad at keeping up-to-date. I know of a bank that still uses OpenOffice. No, not Apache OpenOffice: they still use the real OpenOffice.org.

Workstations are often locked down, so employees can't install or upgrade anything.


A friend of mine working for the government told me that in one major government agency, the most popular version of IE used on their website is IE11.

The second most popular version? IE 6. Not even 8. IE 6.

Who still uses it? Employees of other government departments...


While this is plausible for the past, I think this info is outdated now. https://analytics.usa.gov has lots of data on this (and you can break it out by agency using the dropdown at the top). It also estimates that only 5% of data comes from federal networks.

(Disclaimer: I work on analytics.usa.gov.)


I didn't say US Government. Australia :)




We should have analytics.dto.gov.au.


This is because some project once created an internal application with a web interface that was tested with IE6. This application may be critical to keeping people happy (e.g. the HR system), or some unqualified bozo may want to 'cut costs' by skipping things like updating apps for new browsers, and so mandates sticking to a 'stable' version like IE6. And all future apps will have to target IE6 forevermore...

It gets worse... People write apps that use Flash (online training) or Java applets.

There are solutions to this, but none of them involve using the one true SOE, one SOE to rule them all!


I would hope that they are using IE11 set to IE6 compatibility mode. My employer has an application that requires IE5 compatibility mode.


Yeah… just a reminder: IE6 was released in August 2011.


A little earlier than that: August 2001.


Can I blame bitrot? Yes, thanks, of course I meant 2001.


For a sense of just how long ago that was, IE6 was released before 9/11. Truly a more innocent time.


No it wasn't... It was released in 2001


2001, not 2011. Time flies. :)


Even if the employee can update it themselves, it is often prohibited, and fixing a vulnerability is treated as unauthorized installation of software, which is clearly against the terms of employment when using company computers. Whether they treat it as a big deal or not depends on who is involved, of course.


Java applets! I know a government department that'll be stuck on Chrome 44 for a looong time.


Someone could make a mint selling a Java applet runner to government agencies.


I know (anecdotally) a country-sized governmental system that sticks to specific versions of Firefox. Not Chrome or Edge or IE or Safari or... Just Firefox. Specific versions. With a big lag when new versions of Firefox come out.

Companies and citizens live with that.


Firefox has the ESR release for this purpose.


Yes, thanks for this information.

But sadly no, they don't follow this kind of stable release.


Yes, yes it is.


None of the vulnerabilities are in Chrome?


Google says their sandbox prevents it, but https://helpx.adobe.com/security/products/flash-player/apsb1... says Chrome's embedded Flash is affected.


I think they update that separately from Chrome versions. My Chrome already updated to 54 so I can't verify that, but you can check chrome://plugins/ to see if you have the fixed 23.0.0.205 version.
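If you'd rather check programmatically than eyeball chrome://plugins/, dotted version strings compare correctly once parsed into integer tuples. A quick sketch — 23.0.0.205 is the fixed build mentioned above; the older version below is just an arbitrary example, not a specific real build:

```python
def parse_version(v):
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def is_patched(installed, fixed="23.0.0.205"):
    """True if the installed Flash build is at or past the fixed build."""
    return parse_version(installed) >= parse_version(fixed)

print(is_patched("22.0.0.1"))    # False -- older build
print(is_patched("23.0.0.205"))  # True  -- exactly the fixed build
```

Tuple comparison handles the fact that "23.0.0.205" > "23.0.0.99" numerically, which naive string comparison gets wrong.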


Advertising the details of an exploitable vulnerability before the vendor has patched is protecting users now? I don't buy this motive at all.


> I don't buy this motive at all.

OK I'll bite. What do you think the motive is?

According to the post's byline, it was written by Neel Mehta and Billy Leonard of the Threat Analysis Group at Google. Are you questioning their professional judgement and claiming they are individually biased?

If not, are you suggesting that there is some management directive to look for Windows exploits and publish them on an aggressive timeline in order to embarrass Microsoft publicly? Do you think professional security researchers would abide by such a directive?


Yes, I believe what you describe in your second paragraph is most likely to be the case, given that coordinated disclosure is the standard approach to protecting users.


Waiting for coordinated disclosure should not delay disclosure indefinitely, though. Google decided on 7 days as per https://security.googleblog.com/2013/05/disclosure-timeline-...
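The two windows in that policy are simple date arithmetic. A sketch with an illustrative report date (the 7-day constant is from the linked 2013 post; 90 days is Project Zero's usual window for bugs not under active exploitation):

```python
from datetime import date, timedelta

ACTIVELY_EXPLOITED_DAYS = 7  # per the 2013 policy for critical, in-the-wild bugs
STANDARD_DAYS = 90           # Project Zero's usual window otherwise

def disclosure_deadline(reported, actively_exploited):
    """Date by which Google will disclose, per the policy above."""
    days = ACTIVELY_EXPLOITED_DAYS if actively_exploited else STANDARD_DAYS
    return reported + timedelta(days=days)

# Illustrative: a bug reported Oct 21, 2016
print(disclosure_deadline(date(2016, 10, 21), actively_exploited=True))   # 2016-10-28
print(disclosure_deadline(date(2016, 10, 21), actively_exploited=False))  # 2017-01-19
```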


I can't imagine that Microsoft have refused to fix this vulnerability though, otherwise this would have been mentioned in the blog post.

There's no good reason for Google not to respect coordinated disclosure here. Making an arbitrarily tight deadline their policy isn't protecting users.


The cited motivation was "it's being exploited in the wild". It's better to enable the rest of the world to defend against it.


This is flawed reasoning, as the vast majority of people will defend against it by installing the patch.


Well, here we get to the eternal debate about whether this trumps other considerations. Back before vulnerability researchers started putting credible deadlines to these things, vendors would sit on patches for months while the vulnerabilities were being exploited widely and were open secrets. A rough consensus seems to be that the common good is best served by these disclosure deadlines.

Also there's the consideration that security-critical environments that pay attention to these disclosures have much more value at risk than the average Windows user. You want your safety-critical systems to be protected.


I don't think this is the case here though. It's been a few days since reporting the vulnerability, not months.

We don't yet know if this was being widely exploited (versus being a niche exploit used by an APT, for example), but it will be now either way.


What patch?


The future patch that Google should have coordinated their disclosure with.


So, there is no patch. And the only mitigating thing is what Google has said, patching Flash.

Now, tell me, how would users know about that without disclosure?

And, remember, there are already exploits in the wild.


The problem is that Google also said this:

> The Windows vulnerability is a local privilege escalation in the Windows kernel that can be used as a security sandbox escape. It can be triggered via the win32k.sys system call NtSetWindowLongPtr() for the index GWLP_ID on a window handle with GWL_STYLE set to WS_CHILD.

Which is enough information for someone to write an exploit from scratch.

If they'd just said there is a win32k.sys vulnerability and advised users to make sure Flash is up to date, this would have been fine.


I'm starting to believe the base problem is the difference between a software company that has to test the heck out of every patch because of its large customer base (Microsoft) and a company that doesn't have a complete consumer-facing experience[1] (Google). Seven days is not enough to patch anything that needs to be tested as extensively as required. An "update/disable Flash" advisory would have been good enough until a proper patch on the day IT has planned activities around the deployment of patches (Patch Tuesday).

I'm starting to think the ultimate PC OS for IT would be one with a transactional audit trail on all changes to the PC.

1) try calling Google about a problem if you disagree with this statement


As mentioned elsewhere in this thread, this has been Google's disclosure policy for more than three years.

https://security.googleblog.com/2013/05/disclosure-timeline-...

Stop spreading FUD without evidence.


To me it looks like a conveniently written policy for hitting their competitors with. What is certain is that Google picks and chooses when to apply these disclosure time limits; they're not set in stone.


I guess it can look however you want it to look, if you've already decided that you know the truth.


No, just sceptical of their claim to be protecting users by disclosing early.


"coordinated" disclosure is a wonderful thing in a vacuum, but when people are actively being exploited then what? You silently wait to announce until the vendor is ready? How many people get owned in that time frame? How many people could mitigate in the meantime if they're tipped off to what the bad guys already know and use?

There's no "right" answer because this is a holy war that's been going on for a very, very long time now.


Thank you for saying "coordinated disclosure" instead of the misnomer, "responsible disclosure".

n.b. https://adamcaudill.com/2015/11/19/responsible-disclosure-is...


He's probably being sarcastic.


That level of detail allows malware detection by static analysis, A/V signatures, HIPS rules, etc.

Source: I've done these things before based off notices like this, and caught malware with it
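For the curious, the core of the signature approach is just pattern matching over binaries. A toy sketch — the indicator names and byte patterns below are made up for illustration, not real signatures for this exploit:

```python
def scan(data, signatures):
    """Return the names of all byte patterns found in a binary blob.
    Real AV engines use far richer matching (wildcards, entry-point
    offsets, emulation); this is just the bare substring idea."""
    return [name for name, pattern in signatures.items() if pattern in data]

# Hypothetical indicators -- e.g. a suspicious import named in the advisory
sigs = {
    "win32k-lpe-import": b"NtSetWindowLongPtr",
    "known-exploit-stub": bytes.fromhex("deadbeef90909090"),
}

sample = b"\x00\x01NtSetWindowLongPtr\x00junk"
print(scan(sample, sigs))  # ['win32k-lpe-import']
```

This is exactly why the level of detail in the disclosure matters: naming the syscall gives defenders something concrete to match on.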


At the same time it allows exploit writers to quickly add this to their kits.

I'd expect AV vendors to already have signatures for this given that it's being actively exploited, which means there must be malware samples to know this.


Most importantly it tells MS "fix your shit when your users are being attacked". Seven days ago was a Monday. That means they had a full week and a weekend to fix this. I understand that big bureaucratic organizations move slowly, and that the fix isn't trivial, but this is really the kind of thing for which they should have "scramble the jets and fix it" processes in place if they care about their users' security.

When there are known exploits being used against users, giving trusted software vendors a short window to get quick patches out makes a lot of sense, but I don't see a good reason for giving more than 24-48 hours before public disclosure.


Ah yes, the endless holy war over what constitutes "responsible" disclosure and the concern over who benefits more, the attackers or the attackees?

In this case, the exploit was being actively used in the wild. That means bad actors already had access to this and it was the users who were in the dark. Now it may be added to the "toolkits" of scripters and people who buy exploit frameworks but the people who do real damage were already using it according to Google.


I'm not yet sure whether I agree with the action/timeframe here, but but broadly speaking the argument is that you need to set a deadline, and enforce that deadline in order to pressure companies to fix their issues.

It's not just that people affected by this vulnerability are being protected by its disclosure (though there are reasons why that might be the case) it's that in the future vendors will take deadlines from P0 far more seriously when they realise that their reputation is on the line if they fail to patch in time.

If you let vendors get away with "we know that this is being actively exploited, but we haven't been able to come up with a timely fix, so please don't tell our customers how screwed they are", then that becomes the standard line and you need to keep letting deadlines slip. Or you don't let them slip and you end up with this sort of situation.


What about the thousands of people with important data that could be attacked in the coming days using this exploit, but can now protect themselves from it knowing this?

It goes both ways. As they said, this is active in the wild and many are being hacked AS WE SPEAK, completely unaware of it. Imagine you had information worth millions of dollars on a computer that is vulnerable to this. Now that you know, the first thing you'll do is turn off your computer or find a way to protect yourself. If they hadn't disclosed it, you could've been hit in the coming week or month or however long it takes Microsoft.

As you can see, this isn't a black and white problem.


How does Google's disclosure allow people to protect themselves, though? They say to update Flash. Great, but that doesn't explain whether or not anything can be done to prevent the specific Microsoft vulnerability from being exploited if Flash isn't involved.

So if Google doesn't have any way to mitigate the vulnerability, all putting these details out does is give more actors the chance to use the vulnerability until Microsoft can release a patch, which is exactly the opposite of responsible.


They can use Chrome, which mitigates the attack.


Regardless of Google's motives, "full disclosure" of vulnerabilities has always been about protecting the users and not the companies. The idea behind it is that companies need to be pressured into fixing bugs quickly. You can argue about the benefits and how effective such a strategy is, but that's why some people have done and continue to do full disclosure of bugs.


What an interesting comment on the state of corporate relationships; Google actively delayed disclosure of an impactful vulnerability for Apple (https://news.ycombinator.com/item?id=12795332).

I wonder why they would not do the same for Microsoft?


Google did not claim they were aware of active exploitation of the Apple vulnerability whereas for this Windows one, they claim knowledge of it being exploited actively.


They have done the same for Microsoft, extending the 90-day deadline multiple times.

Why they didn't in this specific instance is probably because it was being actively exploited in the wild, but that's just a guess.


Was that Apple vulnerability actively exploited in the wild during the delay?


That was a design error, this is presumably a simple bug fix. It's also being actively exploited.


Incidentally this reminds me, I recall that Angular 1.x was banned from Firefox add-ons... Was the vulnerability ever made public?


Yeah, it does eval(); no vuln, just a specific design choice that breaks the security model of browser plugins.

Basically the plugin operates at a higher level than the regular page DOM, so by running Angular from a plugin you gain the ability both to execute arbitrary JavaScript from the page and to read/write to the disk. Very bad. Cannot be fixed. Arbitrary code execution by design.


I like the fact that they disclose the vulnerabilities but I'd be even more interested in where they are exploited.


Personally i'm still not convinced this is good or bad.

Protecting users AND educating script kiddies: this is a hard trade-off, and all lazy-updating users will be more vulnerable than before.

(More meaning more script kiddies, not more vulnerabilities.)

As long as it is done responsibly and vendors are given a chance to fix it first. Not disclosing is worse, I guess, in any case.

However, "We encourage users to verify that auto-updaters have already updated Flash"

While Adobe recommends not installing it, meh.. :)


Script kiddies used to mean someone who downloaded a DOS tool for example and ran it. As far as I know Google didn't release such a tool, nor is one available. Did the definition change?


Why a 'dos' tool? They download 'scripts/utils' and simply use them and call themselves hackers. They don't always do a denial of service attack. That never changed, but perhaps you never learned it correctly.

Telling the world about vulnerabilities is a knife that cuts on two sides. Inform users about security. Inform wankers how to abuse stuff.


Just an example:

"someone who downloaded a DOS tool for example"


That's my understanding of the popular usage; however, I have noticed people also using it to refer to people who take advantage of known security issues using "simplistic" programming languages and techniques.

I.e., a script kiddie is more likely to use Python or PHP to SQL-inject than to execute a timing attack against a crypto library.
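And the SQL-injection side of that really is script-kiddie-simple to demonstrate — a minimal sketch with sqlite3, where a string-built query accepts a crafted input that a parameterized query treats as plain data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

payload = "nobody' OR '1'='1"

# Vulnerable: the payload rewrites the WHERE clause, matching every row
rows = conn.execute(
    "SELECT secret FROM users WHERE name = '%s'" % payload).fetchall()
print(rows)  # [('hunter2',)] -- the secret leaks

# Safe: the driver binds the payload as a literal string value
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)).fetchall()
print(rows)  # [] -- no user is literally named that
```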



win32k.sys, the fractal of bugs.


Here is a question for the armchair (or real) lawyers - How does disclosing this not run afoul of the law?


In the U.S. people are guaranteed by law the right to express themselves as they see fit, without government interference except in the most constrained cases. A security vulnerability disclosure does not meet any of the constraints I can think of off-hand: not incitement, not libel (written) nor slander (spoken), not child pornography, not obscenity, not false advertising, etc.

https://en.wikipedia.org/wiki/United_States_free_speech_exce...


From VentureBeat:

>Microsoft harshly criticized the disclosure. “Today’s disclosure by Google puts customers at potential risk,” a Microsoft spokesperson said. “We recommend customers use Windows 10 and the Microsoft Edge browser for the best protection.”

Really? Because I thought "your customers" were already at risk for an in the wild zero day exploit that's currently being used to infect Windows computers. How dare Google warn the public to allow them the time to take precautionary steps. I guess they think it's better for their customers to take it up the ass until they figure out how to patch it.


Given Microsoft's well-known cadence for releases, it might be prudent for Google to update their policy to 2 weeks in some cases (e.g. not yet observed being actively exploited) so that something reported to Microsoft right before an update doesn't get disclosed right before they can reasonably release an update without moving heaven and earth.

But if it's being actively exploited? Yes, thank you Google for letting us know ASAP. Would indeed be irresponsible not to in this case.


Not moving heaven and earth, just fixing it in a week when you're notified that there's a critical vuln that's actively being exploited against your users. Yes, Google could lower the bar to two weeks, or three, or four, but IMO, even a week is way more than should be necessary: for actively exploited bugs of this magnitude, disclosure should really be 24-48 hours after notification, tops. Perhaps 24 hours to other vendors like Firefox/Opera, and 48 hours to the public.

This is not a "wait until the next release cadence" kind of issue, but more like a "scramble all jets and work through the weekend" kind. If I were a Windows user, I don't think there's anything else I'd want MS to work on over fixing an actively exploited remote priv escalation vuln.

Google's security team worked through their Christmas vacations when they had an attack awhile back. To give other companies 7 days to patch their software is really quite generous.


Except this issue is _local_ privilege escalation that is mostly only relevant for defense-in-depth scenarios that involve sandboxing.


The actively exploited window is 7 days, but the "normal" window is 90 days, which is far, far more than the Microsoft cadence. 7 days is plenty for a kernel level, actively exploited privilege escalation vulnerability, in my humble opinion.


There is precedent for Microsoft to release patches on days other than Tuesday (they call this “going out of band”) when an exploit is being used in the wild, especially if it is wormable. (This was the case for the hole that was later exploited by Conficker, for example.)


They didn't have to provide a patch within 7 days, they just had to admit that it exists and tell their customers they're working on it.


We sell your data and in return make browsing safe for you so we can sell more of your data. Clever Google. Really clever...




