
Audit Cleared Google Privacy Practices Despite Security Flaw - infodocket
https://thehill.com/policy/technology/410568-exclusive-privacy-audit-failed-to-mention-of-google-plus-security-flaw
======
Leace
Ernst & Young also gave a positive audit to StartSSL/WoSign even though they
had problems at the time.

For that reason Mozilla will "no longer accept audits carried out by Ernst &
Young Hong Kong" [0].

[0]: https://blog.mozilla.org/security/2016/10/24/distrusting-new-wosign-and-startcom-certificates/

~~~
bsder
To be fair, I have never seen an external security audit do anything useful.

Either they bend to the folks paying the bills, or they flag a zillion issues
that are already generally known to the engineers.

If you really want to make your security audit useless, hand the auditors C
code. They will stare at you as though you just handed them the head of a
small child.

~~~
mahemm
I have personally been a part of security audits that have found critical
vulnerabilities in some of the best-known tech companies on the globe.

Just because there exists a world of auditors who function as rubber stamps
does not mean we all do.

~~~
cptskippy
I've been party to numerous audits during my career, and there's always an
initial findings report with the opportunity to explain away findings such
that they never show up in the final report.

~~~
mahemm
If that's the case then you haven't been working with good auditors.

That said, the opportunity to do this (scrub reports) is a driving factor
behind many companies' choice to work with more permissive firms. If you want
a rubber stamp, you can certainly get one, but if you want a true partner who
will find and exploit issues, you can get that too.

~~~
Leace
> if you want a true partner who will find and exploit issues you can get that
> too.

Could you name such a company? This is a perfect opportunity to get to know a
good auditor; I actually can't imagine how one would look for a good auditor
otherwise.

------
esoterae
This is a witch-hunt by Murdoch, I'm guessing. 500,000 people's information
with the _potential_ to have leaked, and yet Facebook and Equifax fucking
_skate_ on DEMONSTRABLY having leaked well over half of America's personal
information to domestic and foreign interests alike.

This is sour grapes of the worst kind.

~~~
SpicyLemonZest
I mean, what is it at this point, 2 days of bad press? All Murdoch properties
I'm aware of covered Facebook and Equifax for a lot longer than that, and
certainly didn't pretend that what they did was okay.

------
tptacek
This story doesn't make any sense. An FTC-mandated E&Y "Privacy Audit" isn't a
software security assessment and would never have been expected to either (a)
uncover subtle application security flaws or (b) metabolize internal reports
of subtle, fixed flaws. Nor, for that matter, is the internal discovery of a
software security flaw a "breach": routine internal discovery of
vulnerabilities is what good security teams do.

The reporting on this G+ story has been startlingly bad. There's apparently
some juice to the narrative that Google is running headlong into a collision
with the Trump administration, and so reporters seem to be starting with a
public policy conclusion and working their way back. Somebody needs to start
explaining to these people how professional software security works.

In _any_ audit situation --- PCI, HITECH, SOC-2, you name it --- there is
neither a norm nor a duty to notify auditors of internally discovered security
problems. Nor would any such norm be productive: it would warp internal
incentives, both to discover and to properly handle vulnerabilities.

If the _auditors_ discover a security problem, that problem gets documented,
just like it (typically) does with external reporters (researchers, bug bounty
claimants). When that happens, you have an indication that your internal
software security controls failed. But when your team finds a bug in its own
code? That's a sign that the team is doing things right.

~~~
kevingadd
All of your characterization here is correct, but does it really make sense
to classify it as an internally discovered security problem rather than a
potential-or-real breach when it was present for _years_, trivially
exploitable by third parties, and impossible to vet? There's no way to know
whether or not it was used, because there was no logging in place for third-
party use of user data, and apparently no audits (manual or automated) were
ever performed on these APIs. Google more or less admitted that all they had
to go on here was an examination of the third parties who potentially had
access to the data, because they have no way to know what those parties
actually did with it and apparently can't contact them to find out.
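
For illustration, a minimal sketch of the kind of per-request access logging
that would have made this answerable; everything here is hypothetical (names,
fields) and makes no claim about what Google's actual API surface looks like:

    import json
    import logging
    import time

    # Hypothetical audit log for third-party API reads: one structured record
    # per request, recording which app read which fields for which user. Kept
    # with enough retention, this is what lets you answer "was it ever
    # actually exploited?" after the fact.
    audit_log = logging.getLogger("thirdparty_api_audit")
    audit_log.setLevel(logging.INFO)
    audit_log.addHandler(logging.StreamHandler())

    def log_api_access(client_id, user_id, fields_returned):
        """Record which third-party client read which profile fields."""
        audit_log.info(json.dumps({
            "ts": time.time(),
            "client_id": client_id,     # the third-party app making the call
            "user_id": user_id,         # whose profile data was returned
            "fields": fields_returned,  # e.g. ["name", "email", "occupation"]
        }))

    # Every profile-API handler would call this before returning data:
    log_api_access("app-1234", "user-5678", ["name", "email", "occupation"])

Without something like that (plus retention well beyond two weeks), "we found
no evidence of misuse" is a statement about the logs you happened to keep, not
about the three years the bug was live.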

The reporting is definitely exaggerating the significance of the data that was
exposed and making it sound like it's known that the data was collected by
third parties, but it's absolutely the case that the ball was dropped here and
it's inappropriate to pretend nothing happened. There's simply no way to know,
which is itself a breach of user trust.

It's kind of a no-win situation here given the way the media reports on these
matters, but attempting to keep it quiet was incredibly foolish: everything
remotely bad that happens at Google is going to get leaked by internal
troublemakers now, and legal/PR should know that and should have gotten ahead
of this.

~~~
tptacek
There is nothing remarkable about finding a vulnerability that had been latent
in a codebase for years. It is not clear to me what "ball" you believe has
been "dropped" here. If it's that the first N groups of security testers that
looked at this code missed the vulnerability that their N+1th found, sorry,
no: that happens all the time.

~~~
kevingadd
If the vulnerability is 'any one of our API users could have accessed <large
set of private data> over the past 3 years', that's a lot more than 'we found
an obscure latent security issue'. Especially if you can't prove it didn't
happen by examining logs (which they can't, because they didn't keep logs) or
by auditing your API users (which they also can't).

It doesn't feel like splitting hairs to separate this from things like 'we
found an obscure xss exploit internally which is now fixed'. It is explicitly
a Possible Breach, not just a Bug. If you're arguing that Possible Breaches
don't require disclosure, well, that's a choice to make, but it worked out
poorly in this case and it will probably keep working out poorly because
everything G does is under a microscope until the far right gets bored with
them.

~~~
spoondan
There’s a strong argument against disclosure of internally discovered bugs
(security or otherwise) that have no evidence of user impact. Internal efforts
to find and fix bugs improve actual quality/safety but, if indiscriminately
announced, harm perceived quality/safety. We have seen repeatedly how bad
security reporting of actual weaknesses can drive users to more risky
alternatives or behaviors, lowering actual security and privacy. It’s just too
easy to say that all major security defects should be announced or that no
major defects should ever exist.

There are about 430 developers who could have exploited this vulnerability.
There was no evidence in the available two weeks of logs of it being
exploited. And the number of users and the types of data available make
Google+ a low-value target. The decision not to announce was reasonable,
common, legal, and moral.

It sets a poor standard to suggest they should have acted differently because
they’re now under fire from people with far darker motives than reporting the
truth or advocating for consumer privacy. Those exploiting the story for their
own gain should be shamed rather than capitulated to.

