
US senator urges investigation into Google+ bug ‘coverup’ - egusa
https://sociable.co/business/us-senator-google-coverup/
======
tptacek
What an embarrassment. Blumenthal should leave the performative infosec
policymaking to his colleague Ron Wyden.

It is not and never has been a norm for SaaS vendors to disclose internal
vulnerabilities that have not been discovered independently by third parties.
Tens of thousands are found every year by internal teams and contractors at
companies around the country, many of them far more severe than the G+ bug
(which would probably win a sev:low on a real assessment, less impactful than
an XSS bug). You hear about none of them.

A coherent argument that this is as it should be:
[http://flaked.sockpuppet.org/2018/10/09/internal-disclosure-boring.html](http://flaked.sockpuppet.org/2018/10/09/internal-disclosure-boring.html)

You can argue that things should be different for shrink-wrap software and
hardware products, where vulnerabilities have a half-life and users need to be
notified to patch. I won't disagree, but I will note that the norm of _not_
disclosing internal discoveries holds there as well.

~~~
JumpCrisscross
> _Tens of thousands are found every year by internal teams and contractors at
> companies around the country_

It's fair to say "Google followed current best practice." It's not fair to say
"current practice is how it should always be."

What is acceptable for a company selling razors to Minnesotans may not be for
a behemoth with troves of personal data on every American. The question,
"should Google have heightened disclosure requirements around confirmed and
potential breaches," is not invalid.

~~~
tptacek
The link I posted does a better job of directly addressing that argument than
I could do by retyping it here.

~~~
fauigerzigerk
So the argument in the linked article is this:

_"A mandate to disclose internal vulnerabilities would change incentives.
Firms would have a reason not to want to find vulnerabilities."_

My problem with this thinking is that it could be extended to finding
breaches. If you have to publish every breach you're not going to want to find
them.

It's impossible to prove that a vulnerability has not resulted in a breach,
and so an argument could be made that every vulnerability has to be considered
a breach.

That would not be very pragmatic though, especially if we also take the view
that every bug is a vulnerability (as the linked article does).

I think if the ultimate goal is to protect users, we can't be dogmatic or
formalistic about which incidents to publish. It has to depend on the
likelihood and the severity of any damage.

If a vulnerability concerns highly sensitive data and we don't know whether or
not there was a breach then users should be told about the incident so they
can protect themselves or change their behaviour in the future.

I also think that in this particular event Google tried to downplay the
likelihood of a breach. Bad idea.

~~~
tptacek
The difference between internal work to find vulnerabilities and breach
response is that one is optional and the other effectively isn't. There are
incentives at play in breach response as well, but they are attenuated.

And, again: so far as anyone knows, there was no breach. _All software_ is in
a continuous state of "likely breach". But words mean things: a breach happens
when a vulnerability is exploited maliciously, not when it's discovered.

~~~
fauigerzigerk
I didn't dispute the meaning of the words, and I agree that making internally
discovered vulnerabilities public should be (legally) optional.

But that raises the question of which ones should and which ones shouldn't be
disclosed.

In my opinion, knowing for sure that there was a breach is not the right
threshold in all cases.

A high profile public API that had a glaring vulnerability for years seems far
more likely to have been breached than most other software.

Also, the more high profile the software the greater the reputational risk of
being wrong.

What if the bug gets leaked eventually? What if there was in fact a breach and
it only becomes known later when people have already suffered the
consequences?

If that happens, people will question the decision not to publish and the
damage to trust will be far greater. This has to be factored into the
incentive structure of any disclosure or non-disclosure.

I think the best course of action is to routinely disclose all vulnerabilities
but not necessarily alert all end-users to all vulnerabilities.

~~~
tptacek
I don't know where to start with this, so I'll just shotgun my answers:

* The vulnerability we're talking about, like all G+ vulnerabilities, has no half-life. It's fixed decisively the instant they deploy the fix to prod. There is no user response we're looking for to mitigate the vulnerability.

* The incentive problem isn't eliminated just because you only target Google with the new norm. When we demand that "high-profile" companies disclose vulnerabilities, we create the sentiment, across the industry and in low-profile companies as well, that vulnerability discovery is a bad event, to be avoided. _The exact opposite thing is true_.

* Internal vulnerabilities in high-profile applications are discovered so often that people will quickly tune them out (until someone sets out to take a scalp). It could even wind up benefiting companies with actual breaches, whose announcements will be lost in the noise, and more easily PR-spun.

The basic problem here is very simple: found-and-fixed vulnerabilities
_aren't breaches_. A vulnerability and its successful exploitation are not the
same thing. It does not matter if the vulnerability is "leaked eventually", so
long as it's fixed when it's found.

If Google had discovered this vulnerability and then just decided to ignore it
for 6 months, _that would be a story_. That's not what happened.

~~~
fauigerzigerk
_> The basic problem here is very simple: found-and-fixed vulnerabilities
aren't breaches._

What makes this less than simple is that we may not know whether or not a
particular vulnerability was exploited (i.e. whether a breach has occurred).

My opinion is that the likelihood of an undiscovered breach and the potential
damage of any such breach should be taken into account when deciding whether
or not to disclose a particular vulnerability.

 _> The incentive problem isn't eliminated just because you only target Google
with the new norm._

I have no interest in targeting Google specifically (I'm actually a
shareholder). They just happen to be big and have a lot of personal data. They
are in the crosshairs of regulators and politicians as well, which is another
reason to err on the side of transparency.

I'm also not trying to eliminate the incentive problem by limiting it to
Google (or other big companies). On the contrary, my opinion is that
disclosing vulnerabilities in a timely fashion should be seen as building
trust and become the new normal. Making it the new normal is what should fix
the incentive problem over time as people get used to it.

People tuning out is not a bad thing. The difference between an actual breach
and a fixed vulnerability will not be lost just because vulnerabilities are no
longer kept secret.

But as I said, I don't think end-users should be notified of every single
vulnerability.

~~~
tptacek
Any response I could write to this would just be repeating things I said
previously.

------
garyfirestorm
Sure. First hold Equifax accountable, then we'll talk about Google.

~~~
Waterluvian
Half the job of a politician is to get re-elected. This is part of that half.
There's outrage and standing up for the people against a big company. Someone
will be dragged before Congress and get embarrassed publicly. And then we
move on.

~~~
DecayingOrganic
This reminded me of a story:

In an op-ed promoting campaign finance reform, the Oracle of Omaha, Warren
Buffett, proposed raising the limit on individual contributions from $1,000 to
$5,000 and banning all other contributions. No corporate money, no union
money, no soft money. It sounds great, except that it would never pass.

Campaign finance reform is so hard to pass because the incumbent legislators
who have to approve it are the ones who have the most to lose. Their advantage
in fundraising is what gives them job security. How do you get people to do
something that is against their interest? Put them in what is known as the
prisoners’ dilemma. According to Buffett:

 _Well, just suppose some eccentric billionaire (not me, not me!) made the
following offer: If the bill was defeated, this person—the E.B.—would donate
$1 billion in an allowable manner (soft money makes all possible) to the
political party that had delivered the most votes to getting it passed. Given
this diabolical application of game theory, the bill would sail through
Congress and thus cost our E.B. nothing (establishing him as not so eccentric
after all)._

Consider your options as a Democratic legislator. If you think that the
Republicans will support the bill and you work to defeat it, then if you are
successful, you will have delivered $1 billion to the Republicans, thereby
handing them the resources to dominate for the next decade. Thus there is no
gain in opposing the bill if the Republicans are supporting it. Now, if the
Republicans are against it and you support it, then you have the chance of
making $1 billion.

Thus whatever the Republicans do, the Democrats should support the bill. Of
course, the same logic applies to the Republicans. They should support the
bill no matter what the Democrats do. In the end, both parties support the
bill, and our billionaire gets his proposal for free. As a bonus, Buffett
notes that the very effectiveness of his plan “would highlight the absurdity
of claims that money doesn’t influence Congressional votes.”
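The dominant-strategy logic above can be sketched as a small payoff model.
The numbers here are illustrative assumptions, not anything from Buffett's
op-ed: the $1 billion prize is normalized to 1, and passing the bill is
assumed to cost each incumbent party a small fundraising edge.

```python
PASS_COST = 0.1   # assumed cost to incumbents of losing the fundraising edge
PRIZE = 1.0       # the $1 billion, normalized

def payoff(me, other):
    """Payoff to one party given both parties' choices ('support'/'oppose')."""
    if me == "support" and other == "support":
        return -PASS_COST   # bill passes; the billionaire pays nothing
    if me == "support" and other == "oppose":
        return PRIZE        # bill fails; my party delivered the votes, I get $1B
    if me == "oppose" and other == "support":
        return -PRIZE       # bill fails; the rival party gets $1B to use against me
    return 0.0              # both oppose: status quo

# "support" strictly beats "oppose" no matter what the other party does:
for other in ("support", "oppose"):
    assert payoff("support", other) > payoff("oppose", other)
print("supporting the bill is a dominant strategy for both parties")
```

Since the game is symmetric, the same check holds for the other party, both
support, the bill passes, and the prize is never paid out.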

This situation is called a prisoners’ dilemma because both sides are led to
take an action that is against their mutual interest. In the classic version
of the prisoners’ dilemma, the police are separately interrogating two
suspects. Each is given an incentive to be the first to confess and a much
harsher sentence if he holds out while the other confesses. Thus each finds it
advantageous to confess, though they would both do better if each kept quiet.

From the book The Art of Strategy. Great book.

~~~
rasteau
The prisoner's dilemma hinges on lack of communication between the prisoners.
In the original scenario, communicating prisoners would both choose to stay
silent. In the above scenario, communicating politicians would choose to never
let the bill out of committee.

~~~
JumpCrisscross
> _communicating politicians would choose to never let the bill out of
> committee_

This presumes every politician is equally dependent on outside financing.
Self-financed and small-donation financed politicians would be politically
incentivized to bunch together and knock the legs out from under the
competition's money machine.

Politics is complicated. Condemning campaign finance reform is premature.
(Saying it's a tough fight would be accurate.)

------
knorker
So not only should companies have to publish hacks, they also have to publish
when they internally find a bug?

Do I also need to publish if I left my keys in the door for two hours, but
nobody broke in?

~~~
jstanley
How do you know nobody broke in?

And if you're responsible for storing personal data for millions of people in
your home, and you left the keys in the door for a few hours, and you can't
prove that nobody broke in, then maybe you _should_ let people know that
there's a chance their data has been compromised.

~~~
orev
Server logs are more or less equivalent to the video recordings of the front
in this analogy. So the answer is you can know that nobody broke in by looking
at the logs/recordings.
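A rough sketch of what "looking at the recordings" means in practice: scan
web access logs for requests that actually hit the vulnerable endpoint. The
endpoint path and log format here are invented for illustration, not Google's
actual API or logs.

```python
import re

# Minimal common-log-format matcher (hypothetical format)
LOG_LINE = re.compile(
    r'"(?P<method>GET|POST) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})'
)
VULNERABLE_PATH = "/plus/v1/people/me/connections"  # hypothetical endpoint

def suspicious_hits(log_lines):
    """Yield log lines whose successful request touched the vulnerable endpoint."""
    for line in log_lines:
        m = LOG_LINE.search(line)
        if m and m.group("path").startswith(VULNERABLE_PATH) \
             and m.group("status") == "200":
            yield line

sample = [
    '1.2.3.4 - - [09/Oct/2018] "GET /plus/v1/people/me/connections?fields=all HTTP/1.1" 200',
    '5.6.7.8 - - [09/Oct/2018] "GET /healthz HTTP/1.1" 200',
]
print(list(suspicious_hits(sample)))  # only the first line matches
```

Of course, this only works for as long as the logs are retained, which is
exactly the catch raised in the reply below.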

~~~
dgellow
Google explicitly said in their post that they don’t have more than two weeks
of logs (ironically, for privacy reasons).

------
Isinlor
It's a really shitty job by Google PR people. Who in their right mind would
shut down a product in response to a security vulnerability? It was bound to
blow out of proportion.

Headlines like "500 000 people at risk! The bug was so serious that Google
shuts down their social network!" just write themselves.

