I appreciate the work they do, and I sure as hell appreciate the talent, but Google is mostly treating this entire endeavor as a giant marketing and recruiting trick.
(It worked well; they stole my favorite pentester from one of my preferred boutique consulting firms.)
Also, in the bug you're referring to, Google expresses surprise that Microsoft let it through given the severity, and declines to comment on details that would only help exploitation. I also don't see anything on the bug re: disclosing _because_ it's sev:hi (your core argument AIUI); but I agree that it doesn't really impact what their policy should be.
You say Google’s words don’t mean anything, but it sounds like you’re advocating for their 90 day disclosure policy to not mean anything whenever a company doesn’t get its act together in time, which I’m sure isn’t your intention. P0 might have some of the best pentesters in the world (it does) but that doesn’t matter much if everyone else gets a ton more time to find and exploit bugs.
Microsoft uses a regular patch cadence so enterprise users can allocate the resources needed to review and update their computers. There are scores of enterprises running tens of thousands or hundreds of thousands of computers per deployment.
The manner in which Google releases this information puts many end customers at risk. It is true that in this case the vulnerability was Microsoft's doing, but releasing exploitable details still puts enterprises at risk. That is why Microsoft asked for additional time. This is a business decision, not a technical one.
Sure, Microsoft may have hit technical snags in fixing the issue, but the real issue is the risk once exploit information is released, and Google makes that call all by themselves.
The fault is with the creator of the software, not the discoverer of the flaw.
This broken logic has been around since at least the release of "Stalking the Wily Hacker". Those who make the error and release broken software and systems should be held accountable, instead of throwing teenagers in jail.
Don't release software, especially important software, unless it is finished.
Google is not a lawmaker, and if they continue to release 0-day exploits in this manner, even after being instructed otherwise by the vendors, at some point they will be made to shoulder some of the burdens of their having done so. Google knows this to be true, or they would not have held some recent vulnerabilities past their stated 90 day release window.
Thoughts about the relative technical merits of the companies or their source code don't come into play here. These are business decisions that affect real-world companies and people.
Your speculation about the reasoning for the extension could not be more off. All you have to do is read their blog post discussing their disclosure policy to understand why those extensions exist.
That's simply absurd. Stop trying to turn a discussion into an argument.
> Your speculation about the reasoning for the extension could not be more off.
What was my speculation about Google's reasoning? I made no speculation whatsoever about Google's reasoning. Their stated reasoning in a blog post doesn't matter. Their actions matter.
> If Microsoft decided that the correct patch cadence was quarterly or annually (because so much QA work goes into a release), does that change what a disclosure deadline should look like?
Absolutely and enthusiastically yes, and for absolutely the reason you wrapped in parens.
When so much software runs on your platform, availability matters (and is a critical component of security, which I feel Google doesn't quite understand for reasons not entirely related to p0). QA-test the hell out of a patch unless there's evidence of 0d or imminent exploitation. Plenty of examples exist where that kind of regression testing was provably necessary, such as this one case:
I'm happy as hell it wasn't p0 who found that one.
Your argument only works if a few things are true:
* P0 is unwilling to budge from the 90 day disclosure if a bug is legitimately hard to fix. But that isn't true: for example, they kept Spectre/Meltdown under wraps for a very long time. It's not just bugs that conveniently affect Google, either: plenty of Windows issues were given grace periods (usually to hit a patch Tuesday). They've even re-restricted bugs after MSRC _failed to request a grace period in time_ (e.g. P0-395).
* If a bug was being exploited, you'd know. (If this isn't true, delay just means attackers have more time to exploit the bug.) But that isn't (generally) true: plenty of bugs are hard to detect remotely, and we have no clue what hoard attackers are sitting on.
Never mind the fact that the onus is on Microsoft to show that a grace period is warranted (attackers aren't nice enough to leave them a detailed reproducer): can we even come up with a plausible reason for this bug being delayed that isn't "we didn't prioritize it"? Is there code that legitimately tries to load the wrong DLL? If your argument is just "QA should win by default" and mine is "disclosure should win by default", we're just going to have to agree to disagree. Vendors do not get to arbitrarily model their business to manipulate how disclosure works. Attackers don't care.
Right, hence my earlier point, emphasis added:
> When so much software runs on your platform, availability matters […]. QA-test the hell out of a patch unless there's evidence of 0d or imminent exploitation.
At the risk of sounding like a broken record: (from an arguably oversimplified angle) confidentiality, integrity, and availability all matter.
To be clear: Microsoft can do whatever they want with the bug they themselves found, too. (I imagine their security folks would want similar policies so they can hold internal teams accountable for fixing their bugs, but whatever, that's on them.)
You are again only interacting with a tiny part of my argument. We're taking it as read that somehow this bug requires significant QA. Can we agree that some bugs don't need 6 months of intense QA to fix? A UAF is a UAF.
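To make "a UAF is a UAF" concrete, here's a contrived C++ use-after-free (purely illustrative, not the bug under discussion) where the fix is a one-line reordering rather than months of QA:

```cpp
#include <iostream>

int main() {
    int* p = new int(42);
    // Buggy ordering would be: delete p; std::cout << *p;  // use-after-free
    std::cout << *p << "\n";  // fixed: use the object...
    delete p;                 // ...then free it
    return 0;
}
```

Obviously real UAFs hide behind more indirection, but the shape of the fix is often just as small.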
Google is operating with limited-to-zero information on what exactly breaks when the bug is fixed, and Windows (or even just the .net framework) is a behemoth. Google and Microsoft do both have threat intelligence groups, but that's a separate thread.
When you're producing software designed to run on many configurations with absolutely stable operation for at least a month at a time, it's extremely, extremely hard to say that "some bugs don't need 6 months of intense QA to fix." I'm not an engineer at Microsoft, but so long as Microsoft is giving routine updates on a private channel—which they did in this case—as to why it's taking so long, it at least signals that the team is in fact actively working towards a resolution.
In fact, with this specific defect, Google applied its own rules/practices to decide whether Redstone 4 (my assumed reading of "RS4") would count as a broadly available patch, whereas Microsoft treats minor Windows 10 releases as having identical hardware requirements and being entirely backwards-compatible.
Timeline (per the report):
-> 2018-01-19: Reported issue to firstname.lastname@example.org and received MSRC case number 43182
<- 2018-02-10: MSRC indicates that the issue has been reproduced and will determine if it's to be fixed.
<- 2018-02-12: MSRC indicates that due to an unforeseen code relationship this will not be fixed in the April PT.
<- 2018-04-02: MSRC requests the 14 day extension.
-> 2018-04-02: Informed MSRC that as the issue will not be fixed within 90+14 days, the grace extension does not apply.
<- 2018-04-05: MSRC again requests withholding of disclosure until 2018-05-08, giving more context on the deadline miss.
-> 2018-04-06: Informed MSRC that this isn't possible. Made it clear that the issue isn't particularly serious and other .NET based DG bypasses are still unfixed.
<- 2018-04-11: MSRC again requests grace extension based on the upcoming release of RS4 which will have the fix
-> 2018-04-12: Informed MSRC that as there's no firm date for RS4 this couldn't be applied, and RS4 wouldn't be considered a broadly available patch per the disclosure conditions.
-> 2018-04-19: Issue exceeds deadline.
As an aside:
> You are again only interacting with a tiny part of my argument.
I'm developing RSIs and would prefer to minimize my interaction to what's most relevant to the debate. I apologize if that makes it more difficult.
Edit: though for what it's worth, I'm enjoying interacting. I bear no ill will towards you for your perspective; I've seen and learned quite a bit about balancing security and managing business impact as I've continued to climb the career ladder, things which were shielded from me when I was a lowly developer or security engineer.
Bob: "My bug isn't particularly severe, and there are similar issues from Alice and Carol. If it isn't going to protect customers, what is the point in fixing it?"
Carol: "My bug isn't particularly severe, and there are similar issues from Alice and Bob. If it isn't going to protect customers, what is the point in fixing it?"
Second, you're neglecting the time aspect.
This is a valid argument: "There's a similar issue that microsoft hasn't bothered patching for months, so what's the point in keeping it secret?"
This is not a valid argument: "In a few months there will be a similar issue, so what's the point in keeping it secret?"
So there is no loop leading to mistakes.
Also, the other bug has been known, with a PoC, for more than half a year.
The whitelist is based on GUIDs. The lookup from GUID to the actual binary is done through the registry.
COM hosting implementations should check that the object they got is the one they asked for; .NET doesn't. So if you can write to the registry, you can escape the sandbox.
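A minimal sketch of the missing check, assuming a hypothetical hardened host (the CLSID policy and error handling here are illustrative, not .NET's actual Device Guard enforcement code). Because HKCU\Software\Classes is merged into HKCR and is writable without admin rights, a registry write can point a whitelisted GUID at any DLL; verifying the activated object's identity catches the swap:

```cpp
#include <windows.h>
#include <objbase.h>
#include <objidl.h>

// Activate a COM class, then verify the object is the class we asked
// for before handing it out. Illustrative sketch only.
HRESULT CreateInstanceChecked(REFCLSID requested, REFIID iid, void** out) {
    IUnknown* unk = nullptr;
    HRESULT hr = CoCreateInstance(requested, nullptr, CLSCTX_INPROC_SERVER,
                                  IID_IUnknown, reinterpret_cast<void**>(&unk));
    if (FAILED(hr)) return hr;

    // Ask the object which class it actually is. Not every object
    // implements IPersist, so a real host would need a policy for that.
    IPersist* persist = nullptr;
    if (SUCCEEDED(unk->QueryInterface(IID_IPersist,
                                      reinterpret_cast<void**>(&persist)))) {
        CLSID actual = {};
        bool mismatch = SUCCEEDED(persist->GetClassID(&actual)) &&
                        !IsEqualCLSID(actual, requested);
        persist->Release();
        if (mismatch) {
            // The registry resolved our whitelisted GUID to something
            // else entirely: refuse to use it.
            unk->Release();
            return E_ACCESSDENIED;
        }
    }
    hr = unk->QueryInterface(iid, out);
    unk->Release();
    return hr;
}
```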
I presume the editor just wanted to put an article out with Microsoft and Project Zero in the title, rather than analyze the actual flaw in the context of its severity.
Microsoft should make a mental note that when you receive an email from a member of Google's Project Zero team you don't wait 3 weeks to respond.
"Google, we believe in our user's right to privacy and are looking for ways to improve their experience on our platforms. Due to your non-compliance with the upcoming GDPR and past misdeeds we have classified all your services as spyware and will be protecting our users accordingly should you fail to address this matter in 90 days from now.
They had an opportunity to actually hurt Google long ago by blocking tracking scripts whenever their "Do Not Track" feature was enabled by default in their browser. And they wasted it by simply asking advertisers like Google nicely whether they'd like to stop tracking users (you'll never guess what happened next!).
Microsoft has already been found violating previous, less strict EU privacy laws recently. I think Google, Microsoft, Facebook, Amazon will all end up paying big fines in the EU within 18 months after the GDPR takes effect, because none of them takes it seriously enough and they still think they can use "angles" to trick the regulators, as well as users, into handing over that data without real consent. They can't, and they'll learn it the hard way.
Oh, and the Privacy Shield will likely fall by the end of the year, too. So brace yourselves, it's going to be a wild ride for these privacy violators.
Microsoft making it default felt less like something to help consumers and more just bandwagoning. Whether or not DNT was particularly effective as a means of __blocking__ tracking has always been irrelevant. The point was the __message__ sent by those who enabled it.
This is pretty blatant re-writing of history. DNT was a piss poor standard that relied on advertisers respecting it. As soon as Microsoft put it on by default they got shit on endlessly for it, and advertisers just bailed anyway.
Again, HN was upset at Microsoft for doing it.
Taxing the large companies is difficult because individual countries (like Ireland and the Netherlands) free-ride to attract investment, hurting all other EU members. "Tragedy of the commons", that sort of thing.
This applies to all companies, everywhere. The problem is that the EU doesn't close its loopholes.
The opposite is true, despite Google having lobbied during the drafting. The entire premise of how Google makes money conflicts with the GDPR.
And let me explain "drafted it": they lobbied. The proof is Facebook's and Google's attack on Canada to prevent it from legislating something similar to the GDPR.
Please (PLEASE, FOR GOD'S SAKE!), stop being protective of corporations, whether FB, Coca-Cola, Tesla, Google, or whatever else comes to mind. None of them gives any priority to making the world a better place; their only priorities are money and power, and they don't care about you any more than about a milking cow. If you disagree, you have a fundamental lack of understanding of how the world functions.
Here's a study about your behaviour. READ IT, you will thank me later. You have issues worth psychiatric care; help yourself and stop annoying the human race: https://insight.kellogg.northwestern.edu/article/leave_my_br...
I doubt it. The fines will be completely irrelevant, and they won't even need to notify more than 1% of their lawyers and lobbyists to ensure they can keep it up for the next decade.
Honest question, why is Google not in compliance with GDPR?
Imagine if DNS resolution on Windows was pay-to-play.
Now imagine you get GNU/Linux out of the box, with security intact.
I feel the two US companies have a friendly competition with each other, which can help secure their systems.
There are certainly companies doing much worse than setting 90 day deadlines. For example VUPEN, Hacking Team, and GrayKey selling undisclosed vulnerabilities to "good" governments, and other companies servicing the shadier governments.
Surely it means specific people employed by Google may "speak". Does the right extend to corporations?
Imagine that world. I point out a mistake to you, and by reading or hearing it, you are suddenly holding a gun! We would have to criminalize coredumps :)
Unfortunately it does, to some degree.
Religious freedom too. The US is fucked.
For a bug with completely unknown scope and very difficult fixes (such as the recent intel issues) the story might be different. But 90 days here? Why would Microsoft need more than that?
It sounds like some other piece of MS software is relying on .NET not performing the checks that it should have been performing.
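For what it's worth, per-user COM registration is a legitimate mechanism that software can and does depend on, which is exactly what makes a fix here delicate. This hypothetical sketch (made-up CLSID and DLL path) registers a per-user override the same way a legitimate per-user installer would, and the same way an attacker would:

```cpp
#include <windows.h>
#include <iostream>

int main() {
    // HKCU\Software\Classes is merged into HKCR, so this per-user
    // entry overrides the machine-wide CLSID binding for this user.
    const wchar_t* keyPath =
        L"Software\\Classes\\CLSID\\"
        L"{00000000-1111-2222-3333-444444444444}\\InprocServer32";
    HKEY key = nullptr;
    if (RegCreateKeyExW(HKEY_CURRENT_USER, keyPath, 0, nullptr, 0,
                        KEY_SET_VALUE, nullptr, &key, nullptr) == ERROR_SUCCESS) {
        // Default value = the DLL COM will load for this CLSID.
        const wchar_t dll[] = L"C:\\Example\\per_user_component.dll";
        RegSetValueExW(key, nullptr, 0, REG_SZ,
                       reinterpret_cast<const BYTE*>(dll), sizeof(dll));
        RegCloseKey(key);
        std::wcout << L"Per-user override registered\n";
    }
    return 0;
}
```

A fix that suddenly ignores or hard-fails on such per-user entries could break any software relying on them, which is the kind of regression QA has to shake out.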
They own an operating system. We get to hold them responsible for whatever choices they make that impact security.
Microsoft has set up a patch delivery infrastructure that's pretty effective and comparatively fast by industry standards, unless it's been deactivated by people who were put off by the forced Windows 10 upgrade and feature creep.
Add to that Microsoft's well-established unwillingness to provide any useful diagnostic information, and suddenly the only way to use the machine is to not update it.
If they mess up a patch (breaking systems, introducing further bugs, etc.), it's a big deal.
90 days to understand the problem, fix the bug, verify the fix, plan the release, get it out to customers. There is a lot of work involved in such a thing.
Once you get into life-or-death situations, regulatory requirements apply, backward compatibility matters, and everything takes forever. It is not just code, commit, test, and deploy: there's intake, risk analysis, project planning, approvals, alignment, and many more processes. We should not fool ourselves that other platforms are better at this once serious SLAs are involved. A Linux kernel or userland patch might land fast, but Red Hat's delivery will take longer.
Welcome to Enterprise development.
When enterprise just falls on its face, I don't have much sympathy. "So many more processes" sounds like taking a handful of steps, splitting them up, and making each one require multiple days of memos back and forth. Can you provide any justification for this? Am I misreading?
That assumes whoever you're replying to is unfamiliar with enterprises.
You're comparing an operating system security bug to advertisers' lives being made inconvenient.
Both involve money, but let's not pretend that the more money involved, the more important something is.
1) Never gave a required date for disclosure.
2) Upon requesting permission to disclose, he was told no, and he complied.
3) Was initially offered a bug bounty of $1300, which was upgraded to $5000. He apparently never haggled over that at all.
So your entire post was completely irrelevant.
The motivation of the project is supposedly to protect Google's users. Being firm on disclosure deadlines helps ensure that vendors take the issue seriously. Did they have any indication that Microsoft wasn't taking this seriously? If not, then it sounds like their true motivation is elsewhere.
That doesn't follow. The primary reason to be firm is to ensure that vendors take future issues seriously. Belief that the vendor is serious about a single issue removes only a tiny fraction of the motivation to be firm on deadlines.
1) The issue is so obscure that nobody else in the world will ever discover it, so not disclosing it to anyone but the vendor is the right choice.
2) The issue has been discovered by someone with malicious intent, and every second that you hide the details from the users, they're at risk.
You can't know which case applies, which is why policies about disclosure are useful. If a vendor is informed of a security hole, and they immediately fix it, great, users are saved. If a vendor is informed of a security hole, and they do nothing... eventually users will have to mitigate the risk in their own way (which is usually "stop using the flawed product"). A disclosure deadline strikes a balance; in many cases it's pretty likely that no evildoers have independently discovered the flaw, but would be able to exploit it if they knew the details. So giving the vendor a bit of time to fix the issue is the best solution. But given infinite time, all bugs will be discovered and exploited, so the longer you wait to fix or mitigate, the more risk you take on. Therefore, I think Google's policy strikes a very reasonable balance between protecting through patching and protecting by telling users to use something else.
With that in mind, I have no real qualms with people who disclose flaws immediately (letting users be aware of their risk), or with vendors that slowly fix an obscure bug that isn't being exploited. In the end, if users want to be free from all risk, they should be finding and mitigating these issues themselves; anything you get for free out of someone else's goodwill is a benefit.
The onus is on MS to show more time is warranted, and so far I’m only seeing evidence to the contrary.
Exposing competitors' security bugs is only a nice marketing side effect.
Any security researcher has a responsibility to disclose a vulnerability in such a way that it does not cause widespread damage. I don't know why you would think otherwise.
>Full disclosure is the policy of publishing information on vulnerabilities without restriction as early as possible, making the information accessible to the general public without restriction. In general, proponents of full disclosure believe that the benefits of freely available vulnerability research outweigh the risks, whereas opponents prefer to limit the distribution.
It's not like _this_ bug alone would get anybody RCE: they'd need to chain it with some other way in, and if they've done that, there are at least two known, unpatched bugs that would already get them to the same place as this one.
It's just as easy to argue that pushing for extensions when disclosure isn't going to decrease security is the dick move here...