I don't know if it should or it shouldn't, but it absolutely is not the norm for companies to announce those vulnerabilities publicly. Every year, most moderately sized and larger tech companies (really, a pretty big swath of the Fortune 500 outside tech as well) contract multiple penetration tests, and those tests turn up thousands upon thousands of sev:hi vulnerabilities, none of which are ever announced.
An obligation to announce findings would create a moral hazard as well, since the incentives would suddenly tilt sharply towards not looking for security vulnerabilities.
>An obligation to announce findings would create a moral hazard as well, since the incentives would suddenly tilt sharply towards not looking for security vulnerabilities.
A good point. There is also the fact that the average Internet user has no clue what a vulnerability, a bug, or even a log is, or what it means. Data mining, web scraping, or data harvesting--no clue.
I just saw a TV report this weekend that stated CA hacked FB. Well, on second thought, maybe that's better than trying to explain that even though "thisisyourdigitallife", you really need to spend some time and effort to understand what it all actually means.
While that is true, it's worth pointing out that Google's Project Zero has a "disclose by default" approach to vulnerabilities they find, even if there is no proof that they were exploited.
The default P0 timeline is 90 days... do we know when Google found this vulnerability in Google+? Does Google apply the P0 deadline to their own vulnerabilities? Is it fair to expect them to?
No, it's not reasonable to apply P0's public vulnerability research norms to internal security research.
P0 "competes" on an even playing field with everyone else doing public vulnerability research and, to a reasonable approximation, has access to the same information that everyone else does. Internal security assessment teams have privileged information not available to public researchers, and rely on that information to get assessment work done in a reasonable amount of time.
When P0 discovers a bug, it has (again, to an approximation) proven that any team of researchers could reasonably find that same bug --- everyone's using roughly the same sources and methods to find them (albeit with a much higher level of execution at P0 than most amateur teams manage). That's the premise under which P0 bugs are announced on a timeline: what P0 has done is spent Google engineering hours surfacing and refining public information.
If you want to go a little further into it: the 90 day release window has a long history in vulnerability research. It's the product of more than a decade of empirical results showing that if you don't create a forcing function, vulnerabilities don't get patched at all; vendors will back-burner them indefinitely. Google's internal teams don't have that problem: when Google bugs get found by internal teams (and, presumably, by external ones), they get fixed fast. There's no incentive problem to solve with an announcement window.
Another lens to look at this through is the P0 practice of announcing after the publication of patches, regardless of where the window is. That's because, again, P0 is doing public research. Typically, when a P0 bug is patched, the whole world now has access to a before/after snapshot that documents the bug in enough detail to reproduce it. At that point, not announcing does the operator community a disservice, because the bug has been disclosed publicly, just in a form that is only "available" to people motivated to exploit the bug.
And again: not at all the case with internal assessments.
> While that is true, it's worth pointing out that Google's Project Zero has a "disclose by default" approach to vulnerabilities they find
This wasn't a Project Zero bug, was it? Project Zero is a very special team with a distinct and notable charter. They aren't "Google's Security Folks". Certainly Project Zero has discovered and disclosed bugs in Google products in the past.
It's the norm in healthcare (HIPAA): disclosure is required for breaches that affect 500+ persons, and even <500-person breaches have to be reported to HHS annually and to the affected individuals at the time of discovery.
> It's the norm in healthcare (HIPAA): disclosure is required for breaches that affect 500+ persons, and even <500-person breaches have to be reported to HHS annually and to the affected individuals at the time of discovery.
Breaches, not vulnerabilities. The discussion is not whether or not breaches should be disclosed[0], but whether newly discovered and believed-to-be-unexploited vulnerabilities should be disclosed.
[0]: They should, of course, after a reasonable period in which to patch the vulnerability used.
If the implication is that Google deletes the logs to avoid having to disclose breaches, you've got it completely backwards. The default is deleting disaggregated log data as soon as possible after collection. There is a very high bar that has to be met for retaining log data at Google, and generally speaking it's easier to get approval to log things if you set it up so that they're deleted after a short time span. Not sure if that's what happened here, but that would be my guess.
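To make that concrete, here's a minimal sketch of what a short-TTL retention policy looks like in practice. This is purely illustrative: the function names, the entry shape, and the 14-day window are my assumptions (the window just mirrors the "two weeks" figure from the blog post), not Google's actual tooling.

    from datetime import datetime, timedelta, timezone

    # Hypothetical retention policy: disaggregated API log entries are
    # kept only for a short window and then deleted outright.
    RETENTION = timedelta(days=14)

    def purge_expired(log_entries, now=None):
        """Return only the entries still inside the retention window.

        `log_entries` is assumed to be an iterable of dicts with a
        timezone-aware `timestamp` field; anything older than
        RETENTION is dropped.
        """
        now = now or datetime.now(timezone.utc)
        cutoff = now - RETENTION
        return [e for e in log_entries if e["timestamp"] >= cutoff]

The trade-off described in the blog post falls straight out of a policy like this: once entries age past the window there's nothing left to audit, which is why the analysis can only cover the two weeks prior to the patch rather than enumerating every historical caller of the buggy API.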
That is the kind of argument that carries a lot of force on a message board, but is not at all lined up with how the world actually works. In reality, almost nobody operates under the norm of "any vulnerability found must have been exploited", and even fewer organizations disclose as if they were.
You can want the world to work differently, but to do so coherently I think you should explicitly engage with the unintended consequences of such a policy.
I think in this particular case, their policy statement in the sister article from the Google blog indicates they couldn't really say that.
> We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug. However, we ran a detailed analysis over the two weeks prior to patching the bug, and from that analysis, the Profiles of up to 500,000 Google+ accounts were potentially affected. Our analysis showed that up to 438 applications may have used this API.
^ the above statement, but couched with this:
> We found no evidence that any developer was aware of this bug, or abusing the API, and we found no evidence that any Profile data was misused.
I wasn't arguing one way or the other on the issue, just reframing it so everyone's on the same page.
Devil's advocate: Do you believe that proactive security assessments would still be performed if each vulnerability found was required to be disclosed as though it had been exploited?
> It's the norm in healthcare (HIPAA): disclosure is required for breaches that affect 500+ persons, and even <500-person breaches have to be reported to HHS annually and to the affected individuals at the time of discovery.
It is required by law to report breaches of data, though I can assure you that in practice, this does not happen nearly as often as you'd expect or hope.
There is, however, no requirement to disclose vulnerabilities for which there is no evidence of exploitation or data breach, or to disclose vulnerabilities that were provably never exploited.
I used to write those letters when I worked in insurance. They had to be reviewed by legal, needed to make it clear what level of threat was involved without divulging certain kinds of info and only occurred when an actual breach of some sort had happened.
In my case, it was usually not a computer issue. It was usually a case of "We sent a check or letter to the wrong address" and it was weirdly common for the reason to be "Because your dad, brother or cousin with a similar name and address also has a policy with us and you people are nigh impossible to tell apart."
And we couldn't say anything like that.
Point being that divulging the issue comes with risks of making the problem worse. So it's not as simple and straightforward as it seems.
If there is a reasonable belief that data was exposed, all of the exposed CA residents need to be notified, and if more than 500 residents are affected, the Attorney General of CA additionally needs to be notified.
Agreed! Apologies, as I think my use of the term "exposed" left that ambiguous. I should have used the original term "acquired". The first line from the link says the following:
> California law requires a business or state agency to notify any California resident whose unencrypted personal information, as defined, was acquired, or reasonably believed to have been acquired, by an unauthorized person.
With links to more specifics in the CA Civil Code.