Hacker News
No, Google, We Did Not Consent to This (bloomberg.com)
243 points by kushti 63 days ago | 153 comments



From the article:

> Google might agree to let a random online shopping company scan what I’m typing into Gmail, but I did not agree.

Google might, in the sense that they could start, but Google doesn't do (and never has done) what is described.

First of all, Google has never let companies scan what you type. It did let companies target ads based on the content of messages, but that involves advertisers sharing targeting information with Google, not Google sharing email content with advertisers.

Second of all, even that stopped last year. From the Google announcement:

> G Suite’s Gmail is already not used as input for ads personalization, and Google has decided to follow suit later this year in our free consumer Gmail service. Consumer Gmail content will not be used or scanned for any ads personalization after this change. This decision brings Gmail ads in line with how we personalize ads for other Google products.

( https://blog.google/products/gmail/g-suite-gains-traction-in... )


> Google might, in the sense that they could start, but Google doesn't do (and never has done) what is described.

Yep, this is the point where it shows that the author has no idea what this is about. The entire Gmail-reading outrage was about the fact that the user had explicitly allowed the 3rd-party app to read their emails.

Sure, you can argue that you clicked by accident, or that Google shouldn't even allow it in the first place, but saying that the user did not explicitly consent is bullshit.


How is this different from the mystery terms in the ToS or click-to-install license you clicked on without reading?


There was a big warning on the screen saying that you're giving a 3rd party access to read all your emails.

This wasn't something buried deep in a long ToS; this was someone explicitly asking you for the keys to your house.

Not sure what you expect when giving access to your emails...
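
To make the mechanics concrete, this is roughly what that consent screen is gating. A minimal sketch using the google-api-python-client and google-auth-oauthlib packages; client_secret.json is a placeholder name for the credentials the third-party app registers with Google, and the gmail.readonly scope is what triggers the "read all your email" warning:

    # Minimal sketch of a third-party app requesting read access to a user's Gmail.
    # Assumes google-api-python-client and google-auth-oauthlib are installed and that
    # client_secret.json (a placeholder name) came from the app's own Google project.
    from google_auth_oauthlib.flow import InstalledAppFlow
    from googleapiclient.discovery import build

    # This scope is what the consent screen warns about: read access to all mail.
    SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]

    flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
    creds = flow.run_local_server(port=0)  # opens Google's consent screen in the browser

    gmail = build("gmail", "v1", credentials=creds)

    # After the user clicks "Allow", the app can list and read their messages.
    resp = gmail.users().messages().list(userId="me", maxResults=5).execute()
    for ref in resp.get("messages", []):
        msg = gmail.users().messages().get(userId="me", id=ref["id"], format="metadata").execute()
        print(msg.get("snippet", ""))

Once the user clicks Allow on that screen, everything below the consent prompt is just ordinary API calls.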


What about Gmail add-on developers? Do they get access to email content?


On the first google.com hit for "gmail add-on" [1]:

"Gmail add-ons examine incoming messages, then act on these messages in various ways, such as by:

- Displaying additional information for the user in the Gmail UI.

- Connecting to non-Google services, to retrieve information or take other actions."

So yeah, it looks like there's no security at all offered by Gmail add-ons.

[1]: https://developers.google.com/gmail/add-ons/


How is this different from, say, Outlook add-ons? Or Thunderbird ones, for that matter. As the message says, Outlook add-ons react to message contents... so of course they need to access them.

You are not protected from the software you choose to run. I find it a bit weird that you'd want that at all, to be honest. But you can choose to run other software.

If you're going to complain along these lines, I'd say browser plugins are so much scarier. They don't just have access to your mails, but to your web banking as well, and can (and have been used to), for example, override which account you transfer to, and how much.


Just to be clear... You're installing a plugin for a given website, and don't think it should be able to access content for that website?


There is no such thing as a plugin that can operate on only one website. A browser plugin is like an app on Windows: once you install it and run it, it can access anything.

I think the main problem these days is a lack of understanding of how computers and the Internet work.


I think GP is responding to the gmail plugin comment, so yeah, it’s a plugin for a website.


Can we really continue to claim that we're unaware Google, Facebook, and other web companies are monitoring everything they can and sharing the information they collect, sometimes for profit, sometimes accidentally, and sometimes compelled by legal orders?

This isn't merely a legal technicality hidden in the terms of service. We know they're doing it, and by continuing to use the service we are consenting, however unhappily.


If I count the number of times recently I've read and heard people say "Well switch to Instagram or WhatsApp" in reaction to hearing about Facebook's privacy track record, then the answer to your question is No - the public is largely clueless.

(In case you weren't aware, Facebook owns Instagram and WhatsApp)


I feel like the group of people who know Facebook owns Instagram and WhatsApp doesn't fully overlap with the group that knows Facebook will collect literally everything it possibly can about you.

My impression of the issue is that most people simply don't care.


> My impression of the issue is that most people simply don't care.

This is my impression as well, and I find it a bit of elitist arrogance to tell them they should. Different people value their data at different levels and we shouldn't be so quick to make it harder for them to give it away if they choose. A simple question one should ask while picking up their pitchfork is whether their perceived righteousness is based on perceived stupidity by the masses (and whether the perception is based on some anecdotal evidence). And if you really believe stupidity is that rampant, you can set your law-making pitchfork down and advocate education.


> I find it a bit of elitist arrogance to tell them they should [care about privacy]. Different people value their data at different levels and we shouldn't be so quick to make it harder for them to give it away if they choose.

With regards to "shadow profiles", someone else's actions do diminish my privacy. I don't find it elitist or arrogant at all to protect my privacy in this way.

And, if someone came to me and said "stop doing this because it is negatively affecting me", I probably wouldn't think twice before stopping.


"stop putting up incessant consent forms (i.e. stop requiring opt-in to tracking) because it is negatively affecting me and my normal web usage"

"stop increasing compliance burdens on the internet because it is negatively affecting me and my business"


The question is whether using the internet is more valuable to you than the cost of having your data tracked by third parties. You can hardly find a single website on the internet where absolutely zero tracking is happening. You certainly can't use a smartphone without being tracked; hell, you can't even go into most stores without being tracked to some degree (it might be different if you pay in cash).


You most certainly can use the web and a smartphone without being tracked.

It's not difficult at all, actually. It becomes increasingly difficult when you want to use these services and apps in which you are the product by design, and the truth is this is well within the rights of those companies that are giving you their services for free.

This is all virtue signaling to the max, as well as endorsing a more censored, controlled internet by these media outlets, tech companies, or the government.

After all, mum's the word on an actual problem, the data breaches of credit reporting agencies, e.g. Equifax. These companies have essentially a government-endorsed monopoly on all of our data. And no one goes to jail when my actually important info gets hacked, even after being warned (DL/SSN, last 10 addresses, loan amounts and who I owe, etc.). On top of this, these companies can sell my info to other 3rd-party vendors, and I literally have no recourse.

All this Facebook nonsense is hogwash. If you're conspiratorial, there is a strong case to be made that this is about censorship, deference to the experts who know the web better than the masses who are "incapable" of using any discernment regarding bots or fake news. These are the same people who brought us SESTA, FOSTA, PIPA et al., the same ones who told us we'd go back to dial-up without net neutrality.

I personally think it's just virtue signaling to the max.


It reminds me of all the people who like Trump or dislike the EU, and all the people who believe that those people must not understand what's going on in the world, and that if only they learned the "truth", they would stop liking Trump or would realize that the EU is really good for us.

Ya know, sometimes people actually do understand and they just don't agree with you. I'm not a Trump supporter, but I believe people who say they like Trump for reasons that aren't "polite".

These anti-Google and anti-Facebook people should learn to disagree better.


Nobody is saying that they shouldn't be able to, rather that they should be informed as to what they're doing so they can actually make that decision. Without knowing what is happening, they cannot make an informed decision.


You aren't aware of every little detail of the actions companies take that affect the products you use. You don't know everything about water treatment, energy, housing construction, civil engineering, etc. And you don't know about those things because you trust people who do know to do the right thing. That's how society works. That's how it is for billions of Facebook and Google users.

Facebook and Google should get in major trouble when they mishandle their users' data (like the Cambridge Analytica scandal at Facebook), but you disliking the data they collect and/or their business model does not mean that everyone else will.

I know full well the type of data Google and Facebook collect about me and I just don't care. There's plenty of people in the tech industry that don't care either.


"but you disliking the data they collect and/or their business model does not mean that everyone else will."

I never said that, in fact I explicitly said the opposite.

"I know full well the type of data Google and Facebook collect about me and I just don't care."

Yes, and you were able to make an informed decision because you do know. Not everyone knows. That's the entire point I was making.


>> we shouldn't be so quick to make it harder

> they should be informed

These two items are at odds with one another in an opt-in situation.


>My impression of the issue is that most people simply don't care.

My impression is that people have absolutely no idea that Google has become so invasive as to purchase copies of most people's credit/debit card purchase histories, and would be horrified to learn that they have taken surveillance capitalism to such an extreme.

>Google has been able to track your location using Google Maps for a long time. Since 2014, it has used that information to provide advertisers with information on how often people visit their stores. But store visits aren’t purchases, so, as Google said in a blog post on its new service for marketers, it has partnered with “third parties” that give them access to 70 percent of all credit and debit card purchases.

https://www.technologyreview.com/s/607938/google-now-tracks-...


If they don't care enough to do a quick search and learn about it, then it doesn't matter that much.


This sort of thinking is backwards. A lot of people in tech fall back on this idea that lay-people should just do research and educate themselves about computer security and privacy. But even security experts disagree about the right choices to make, so what hope does a person with no background in computers have?

If your solution is "don't use the most popular service in the world", your solution is simply not realistic for most people.


Everything about every person (living or dead) is being tracked in near real-time.

I've commented here many times about Seisint (bought by LexisNexis). In the mid-2000s they had uniquely identified everyone in North America and the Caribbean, and were making inroads into Central & South America. Just from public records. One of their customers was the NSA, which also tied in money and phone data.

(My employer was prototyping using Seisint to help uniquely identify patients across heterogeneous orgs. It worked great, but cost too much for our use case.)

Seisint was (is) just one of many "independents" (third parties). Today, FAANG and all the little dragons are doing the same.

--

The ridiculous part of the controversy is pretending we don't know who everyone is. Our voter registration database could trivially be near perfect in real-time. Every immigrant. Every homeless person. Every missing person (not yet dead).

Etc.

"dropping off the grid" is not possible.


> Can we really continue to claim that we're unaware Google, Facebook, and other web companies are monitoring everything they can and sharing the information they collect, sometimes for profit, sometimes accidentally, and sometimes compelled by legal orders?

Yes, we can.

1. Some people (mostly technologists) are aware, but many people aren't.

2. Much of the awareness that does exist was not the result of those companies being transparent about their practices. It's the result of inferences based on scraps of information and speculation.


Surely the author, editor, and publisher of this article, and anyone likely to read it, are aware. "We" are aware.


> Surely the author, editor, and publisher of this article, and anyone likely to read it, are aware. We are aware.

That's a really weird statement. Articles are published so that readers can be informed of things that the author and editors have become aware of. Often many articles are published on the same subject because 1) many people missed previous articles, 2) there's a new update that people aren't aware of, 3) people forgot and it's a good time to remind them, 4) it's not a news/facts article but a persuasive piece, 5) etc.

>> 2. Much of the awareness that does exist was not the result of those companies being transparent about their practices. It's the result of inferences based on scraps of information and speculation.

This means that "we" aren't as aware as you make us out to be.


> Articles are published so that readers can be informed of things that the author and editors have become aware of.

This is an opinion piece. It's written to persuade, not to inform, but its argument is founded on shaky ground.


>> Articles are published so that readers can be informed of things that the author and editors have become aware of.

> This is an opinion piece.

Opinions and ideas are things that one can be informed of.


Ok, but is this article intended to inform us that "this opinion exists" or convince us that "this opinion is right"?


> Can we really continue to claim that we're unaware

Yes, I can. Why?

Because I only have a vague abstract notion that they collect data and do something with it before turning that into cash.

How am I meant to explain this to my friends and family, beyond "they are bad"?

I've never seen a fully worked through example of exactly what they are doing with all this data. Please point me at an article or video or something that everyone else has already seen a hundred times, because I still haven't found that one thing that will convince other people.


We can continue in the sense that we claim ignorance about the things that benefit us and that are made possible and provided for 'free'. But deep down I think people know that something's gotta give, even if the technical/legal/marketing copy says it is all free.

It is the same way people here consider those half-million salaries in the Valley normal without honestly thinking about how this is made possible in the first place. For them, "it's cutting-edge machine learning work", etc., is a good enough answer.


I'd argue that the risk of such vulnerabilities at smaller companies is way higher, but they are just not disclosed. Many people are also fine with the data-for-free-services trade.


I'd like to know the extent/methods of the tracking and whether I can avoid it by not using their services and cleaning out my browsers and not using gmail.


Probably makes sense to throw Microsoft in there.

And Amazon and Apple, too.


"Didn't say no" is a very, very bad standard for consent.


"We know they're doing it, and by continuing to use the service we are consenting" This is a very narrow notion of consent. Sounds very libertarian.


Legally, we're consenting when we agree to the terms of service.

If we didn't know they were doing this we might have a legitimate objection that the ToS are abusive, but that argument is far less compelling if we know they're doing this and agree to the ToS anyway.


Even libertarians would have issue with that definition when you try to say that you consent to the government's laws by continuing to live under the government.


Countless companies every year hire security auditors, and get back a 100-page report in 8 point font filled with vulnerabilities, many of them marked "severe" or "critical." Forcing companies to then publicize those reports will be burdensome and counterproductive.


Forcing them to publicize would also force them to spend the resources to patch them. I'm sure many companies go through those 100 pages and throw out 50 as "wontfix."

Though, if you force them to publicize, you'd probably have to force them to conduct the audits, so they can't avoid the publicity/patching spend by keeping themselves in the dark.


> I'm sure many companies go through those 100 pages and throw out 50 as "wontfix."

I agree in principle, but it's not always that simple. Outside of the obvious "don't make your S3 buckets public", it's usually a combination of priority, cost, and compatibility. Years ago I ran a network inside the firewall of <Insert large company here>. They did constant vulnerability testing, and I had monthly meetings explaining what I was doing to mitigate all of the vulnerabilities that came up. Mitigating some vulnerabilities meant rewriting decades-old programs to run on a new OS/platform, so we would harden around them the best we could while that work was done.


Why do I care if something like this is "burdensome" to businesses?


Just because something is burdensome to a business does not mean it will have the intended, positive effect.

At worst, you only make things worse for users and empower large incumbents without solving anything.

So hopefully someone out there cares.


But the way it was said, it implied that I should care about the burden on business. I don't. If you're saying I should be concerned about unintended consequences, then I can agree with that. But I'm not going to be upset that businesses are "burdened" by doing what they should be doing in the first place.


Because you, as a consumer, will ultimately pay for it.


I'm pretty sure you're wrong here. Publicly traded companies have to declare their risks. Cue share holder lawsuits if an identified risk was ignored and caused material damage.


Apparently unpopular opinion: an internally-discovered vulnerability with no evidence of abuse is not a breach and does not require public disclosure.


When some medical contractor misconfigures an AWS bucket and exposes 15,000 medical records we all lose our minds. It doesn't matter if the first bloke to find it was the researcher who disclosed it... We still go nuts. We make fun of the companies who come back and say "There was no evidence that the data was accessed by unauthorized parties." We know full well there's no evidence that the data WASN'T accessed by unauthorized parties.

Please stop pretending this isn't a big deal just because of a hard-on for Google. If you'll put a 10 man company out of business for their complacency and ignorance you should be lining up at Google HQ with pitchforks over this. They're supposed to be above this. They are hailed as a gold standard.


> Please stop pretending this isn't a big deal just because of a hard-on for Google.

Please don't project strawmen onto other users, it's intellectually weak and rude besides.


So what do you say about the idea that it creates a disincentive to find security issues, because you'll be hit for them one way or the other?

Also, I fundamentally disagree with your example. If they did an adequate investigation, using a 3rd-party service, found no evidence of my data being accessed by a 3rd party, and then fixed it, I'd say, "Good job checking up on yourselves" and move on.

Security is still incredibly hard to get right. I'm willing to bet your service has security holes in it right now, and that's not a hit against you. We haven't mastered these systems, and anyone who thinks they have is just waiting to get bit in the ass. Every security professional knows: it's never, ever a question of "if", but of "when".


> We haven't mastered these systems and anyone who thinks they have is just waiting to get bit in the ass.

We've absolutely mastered these systems.

We just prioritize rapid development time and ease of use over security.

People make fun of DoD / space price tags (and admittedly, there's still a lot of crap sold there), but it's a trade off.

If we wanted computers to cost $x0,000 and OSes to cost $x,000, and the pace of progress to be glacial, we could have completely secure systems today.

It's a choice, not an impossibility.


> We just prioritize rapid development time and ease of use over security.

> If we wanted computers to cost $x0,000 and OSes to cost $x,000, and the pace of progress to be glacial, we could have completely secure systems today.

> It's a choice, not an impossibility.

Progress in core OSes and conventional hardware has already slowed down to a glacial pace, compared with previous decades. Rapid development is often used to just create a churn of useless sidegrades and other poorly thought out things (deceptively sold as upgrades).

It might be time to make that different set of choices.


>If we wanted computers to cost $x0,000 and OSes to cost $x,000

That's an interesting theory that's not supported by historical evidence as far as I can tell. When computers were expensive, there were just fewer people with access. The systems were not any more secure.


I think ethbro is saying that focusing on absolutely secure systems would drive up costs, not that if we make them more expensive we'll somehow get more security just because they're more expensive.


I think this is overlooking the real criticism, which is that there is no evidence that simply wanting something to be secure would make it so.

For example, the parent article, the one this comment is ostensibly in response to, has nothing to do with operating system security.


They weren't suggesting we need more secure OSes specifically; that was just an example.

A better example: maybe instead of spending $xxx,000 to develop the system that was found insecure, Google should have spent $x,000,000 so that more resources were devoted to the security aspects.

Perhaps that cost is too high for this system to exist; well, maybe the system just shouldn't exist if it can't be secured properly within its budget.


It is an impossibility in all practical senses. This is the entire premise of Chromium: knowing that there will always be bugs, but designing the system so they are unlikely to be able to do bad things.

There are two points there:

acknowledging it's impossible to have zero bugs ... and ... acknowledging that there will be exploits.

Sure, you can do better than average, but "completely secure" is a myth.

As for DoD, there's this from 3 days ago:

https://news.ycombinator.com/item?id=18177617


It's only impossible when you look at rapidly developing, easy to use systems.

Which is my point and why the broad assertion is wrong.

Look at a VxWorks type OS. Or OpenBSD. Limited capability and compatibility, very strong security guarantees.

"Completely secure" isn't a mythical unicorn. It just costs time and money. A LOT of time and money.


> So what do you say about the idea that it creates a disincentive to find security issues...

If a company is disincentivized to look for security holes because it's highly likely it will find them, that company is on the fast track to failing.

> If they did an adequate investigation, using a 3rd party service, and found no evidence of my data being accessed by a 3rd party

They found no evidence of data being accessed, but they also don't share how this particular system tracks data that is accessed. This system was vulnerable for 2-3 years. What if you left your car at my house and found a dent in it, and I claimed I didn't have any security footage of someone damaging your car, without mentioning that I don't actually have security cameras?

> I'm willing to bet your service has security holes in it...

You're right, and if I find that my NTP service is exploitable and anyone can DDoS me, I'll fix it and move on. There's no need to disclose that I'm an idiot when it doesn't affect other people. But if I accidentally left my servers misconfigured and my API unprotected against unauthorized access, I would make a post about it. We're not talking about an arbitrary vulnerability that lets people echo hello on a Google server. They left data exposed. I would like to know if it involves me.


> If a company is disincentivized to look for security holes because it's highly likely it will find them; that company is on the fast-track to failing.

I don't see any evidence to back up this assertion. FWIW I don't believe this is true.


Agreed. When is the last time a company failed because of bad security?


Equifax seems to be doing alright.

Sony hasn’t failed yet.

I don’t think your opinion matches with reality.


>So what do you say about the idea that it creates a disincentive to find security issues, because you'll be hit for them one way or the other?

But is that really what would happen? Vulnerabilities are frequently found and publicly reported. The reaction depends on the type of vulnerability (how stupid it sounds) and on whether or not the vulnerability resulted in a breach. You don't get hit equally hard for all security issues.

I think both users and developers have to be mature enough to deal with this sort of thing without resorting to secrecy. Secrecy creates distrust as the reaction to this particular Google+ issue shows.


> So what do you say about the idea that it creates a disincentive to find security issues, because you'll be hit for them one way or the other?

With big enough repercussions it should incentivize processes that don't allow it to happen in the first place. This is how HIPAA and various other things work already, and it does a decent job (not perfect, though).

I'd also say that we need to decide whether mistakes are forgivable and whether or not that depends on impact.

Perhaps for a small data leak it's forgivable, and we won't hit them with a huge fine or whatever. If it's a big leak then maybe we need to jail people.

It's a bit of a strawman to say everything has security holes because it's assuming the problem is unfixable from the start.

It's a bit like launching a person into space. "We know the probability that we'll be sending people to their death is high, so let's not bother." We decided to do it anyway, but we set extremely high bars for safety.


> So what do you say about the idea that it creates a disincentive to find security issues, because you'll be hit for them one way or the other?

Were this actually the case, the CVE list (not even mentioning Google's own Project Zero) would have disincentivized companies and projects years ago.


Imagine you've been driving across the same bridge every day for 3 years, only to discover it had an inevitably catastrophic structural flaw, only revealed when someone bothered to look, but the engineers, contractors, and government didn't want to talk about it.

How'd you feel then?


That actually happens.

Also bridges are built with a much bigger lifetime in mind and require (proving GP's point) a much bigger investment.


I now think I replied to the wrong post. This thread is more confusing than most. We're all talking past each other. We (this group) likely agree more than we disagree.


Depends, did they fix it immediately?


>...you should be lining up at Google HQ with pitchforks over this. They're supposed to be above this. They are hailed as a gold standard.

Especially given the way Google, quite rightly, expose vulnerabilities through Project Zero.


The difference here is that Google found this problem itself, and evaluated with high confidence that it was likely unknown to anyone outside that audit.

That means that Google was proactively checking its work to make sure it was secure, unlike your example, where nobody was likely ever going to notice that bucket was misconfigured.

They’re the gold standard because they find and clean up their own shit, even when nobody is pressing them to do it.


Do we? "Some user data exposed on AWS due to crappy S3 permissions" happens like forty times a year. People publish papers annually just finding stupid crap like this in mobile apps. Misconfiguration bugs happen in thousands upon thousands of products. Where do we lose our minds?


There is a qualitative difference between an internally discovered vulnerability - presumably by somebody with lots of access, knowledge, and the ability to verify their hypotheses - and an externally discovered vulnerability.

Furthermore, if every potential vulnerability that might have been exploited needs to be reported as a breach... then each and every bug in any library and other dependency you use likely needs a breach report too; after all, you typically cannot verify retrospectively whether that exploit wasn't independently discovered and abused before the fix.

I'm not even positive such a reporting requirement would really be a bad idea, but it would definitely need some more mature reporting and public education to tell the two classes of issue apart, especially since it's pretty vague where you draw the line on what needs publication once you get that far.


When it makes headline news... typically.


Less than 1% of these make the news. Papers regularly find tens of thousands of examples of this vuln.


I think there is a difference between a vulnerability in an internal system, and blatantly exposing data to the public requiring little technical acumen to obtain.


It wasn't an internal system; the bug was discovered internally, within Google (that's how "internal" was used originally). The wording is accidentally confusing.


Comparing this to the Cambridge Analytica breach or a HIPAA violation is disingenuous.

This is the information that was possibly exposed: https://developers.google.com/+/web/api/rest/latest/people


If you're going to be harangued for all vulnerabilities, exploited or not, then why proactively try to prevent breaches at all?


You say that as if being harangued is the worst thing that could happen to a company.

Plus, were this the case, the CVE list would have stunted vulnerability exploration across the board. Have you seen the list of vulnerabilities discovered in curl?


I thought I was going to agree with you and then it all went sideways. You are right that some people "lose [their] minds" over the AWS scenario. I think the person that you are responding to is making the point that they shouldn't in that case either.

It isn't that it is Google that makes the difference, but the extent of the damage.


>We know full well there's no evidence that the data WASN'T accessed by unauthorized parties.

Really? If Google has access logging on its content (and, as in your example, an AWS bucket can be configured for access logging), then it would be fairly straightforward to identify whether the data was accessed.
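
For the S3 half of that comparison, server access logging is a single configuration call. A rough boto3 sketch (bucket names are placeholders, and the target bucket must grant S3's log delivery group write access):

    # Rough sketch: enable S3 server access logging so object-level requests
    # can be audited later. Bucket names are placeholders.
    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_logging(
        Bucket="example-data-bucket",
        BucketLoggingStatus={
            "LoggingEnabled": {
                "TargetBucket": "example-log-bucket",
                "TargetPrefix": "access-logs/example-data-bucket/",
            }
        },
    )

Whether Google's API frontends kept anything comparable for longer than two weeks is exactly what's in question here.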


Google does not have (or claims not to have) that kind of logging beyond the short term; not enough to cover even a small fraction of the vulnerability period.


You don't need to resort to crude sexual language or imply that the parent commenter's opinion isn't in good faith but rather comes out of some blind corporate allegiance.

You're supposed to be above this here on HN.


I'm not nit-picking, but your mention of "gold standard" got me thinking. What is Google the gold standard of? They have the top search product which is built on possibly gold standard software, server and networking technologies, but beyond that, are they considered the top standard in anything else? Maybe privacy and security should be important to their business models over the long term, but I'm not sure there is a unified coherent thought in the company's mind that better privacy and security help them make money with search, which is their chief priority. In other words, they can make heaps and piles of money every day with bad privacy and security, and I don't think they were ever considered a gold standard for either of those.


Google is near universally considered one of the gold standards of infrastructure & application security.

So besides applications, infrastructure, distributed systems, reliability, money & security what have the Romans ever given us?


It would be hard to make a case that they're not one of the 3 best defensive security teams in the world, and that very much includes every world government. You could go back and forth about whether they're #1 or #2 or in a 2- or 3-way tie.


According to the announcement, Google can only say that no one abused the vulnerability in the two weeks prior to discovery.

> We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug. However, we ran a detailed analysis over the two weeks prior to patching the bug, and from that analysis, the Profiles of up to 500,000 Google+ accounts were potentially affected. Our analysis showed that up to 438 applications may have used this API.


How about attempts to abuse the vulnerability in the months since?


But..."no evidence of abuse" can mean a lot of things.

The original WSJ article shared this version of "no evidence of abuse", and it's not very reassuring.

"Because the company kept a limited set of activity logs, it was unable to determine which users were affected and what types of data may potentially have been improperly collected, the two people briefed on the matter said. The bug existed since 2015, and it is unclear whether a larger number of users may have been affected over that time."

It's also not clear that the activity logs would even have the context to distinguish normal access from unauthorized access.


It would probably be useful to explore the distinction between how vulnerabilities are handled in Open Source software, where disclosure is commonplace, versus non-source-available proprietary software, where the opposite is true.

We need disclosure in Open Source because if attention is not drawn to the vuln downstream users won't upgrade, but highly motivated attackers will have what they need. With non-source-available software, there are fewer opportunities for attackers to learn of any vulns.


'No evidence of abuse' is not acceptable grounds for treating it as if there were no abuse. Personally, I consider all three points as being irrelevant.


OK. What do you find to be acceptable grounds for treating a situation as if there were no abuse?

Bear in mind that proving no abuse is impossible, as it's always possible that the hypothetical abuser got one step further than your investigation and covered their tracks.


Never not inform.

When such a vulnerability is discovered the right thing to do is to inform the users of its possible severity and the duration during which it was available.

Always.


If that ever becomes standard, every actually relevant notification would be lost in the noise. Multiple local privilege escalation vulnerabilities are discovered yearly in both Linux and Windows (along with many other OSes and userland applications). Do you really want every single company to release a notification, each time that happens, saying that roughly every one of their systems was vulnerable from the day it was created until date X? After all, there is no way to prove no one abused the vulnerability before their systems were patched (including the period between system creation and the vulnerability being publicly disclosed).


> Do you really want that every single company releases notifications that roughly every single of their systems were vulnerable since the day...

I'd love for that to happen. Maybe then everyone will start to pick up on the fact that all of our computer systems are insanely insecure.


It is much more likely in my opinion that people would become desensitized to data breaches and stop taking any of them seriously. Equifax or Cambridge Analytica would have been just another in a deluge of notifications.


Are you familiar with California's Proposition 65? It was based on the same concept: never not inform. Some might opine that the consequences failed to meet those hopes.


Nope, am Canadian.

Canada is going to require disclosure of breaches starting November 1. That may allow for companies to still fail to disclose the _possibility_ of breaches, however.

https://globalnews.ca/news/4122202/data-breach-canada-privac...


https://en.wikipedia.org/wiki/California_Proposition_65_(198...

Long story short, California requires labeling of things that may contain chemicals hazardous for a variety of reasons. The idea was simple - never not inform. The intentions were pure.

The result is that basically everything has a vague and uninformative label about how it may contain chemicals that could be hazardous. The labels are spectacularly uninformative and incredibly numerous. They are universally ignored by everyone, as they constitute a sea of noise.

The net result is that a wonderful, laudable, pure, kind, and compassionate idea - never not inform - has led to a deluge of useless informational notices in which actual useful notices are impossible to find.

It might be worth considering that there could be a lesson in there.


Labeling is slightly different than simply informing, in that it is a specific method of informing. It would be like requiring Google to display information regarding this security issue on all of its pages.

Warning labels have dubious efficacy, as evidenced by decades of grotesque warnings upon cigarette packaging and their limited success at reducing consumption; but that's not to say we ought not be able to know about the potential harms of a product or service, just that product labels sort of suck as a method of informing users.

OTOH, Canada requires nutritional information on all food, and that's reasonably useful and successful _because_ the content of that label is strictly defined and reasonably useful.


The core idea you have is amazing! Tell people, be sure they know their risks, communicate them clearly, and then they're empowered to make their own choices. It's a wonderful idea that both protects people and respects their autonomy.

Nutritional information is relatively easy to provide in thoroughly exhaustive detail. How does one go about informing about data events that may or may not have occurred, and may or may not occur in the future, in a way that's comprehensible to users who must be assumed to have absolutely no technical grounding whatsoever?

The most useful informative warning I think that could be reasonably provided runs like this: "Please be aware that any data you provide this service, and any data this service gathers, could previously, presently, or in the future be the subject of a data breach that may or may not be detected."

While the warning could of course be improved by being made more specific about what data is concerned, this does not seem like a good way to produce useful and informative warnings. Instead, it seems to me that attempting to pursue that most wonderful of ideals - never fail to inform - could easily produce a sea of overcautious attempts to ensure a user is never not informed, and instead over-informed into oblivion.

While such labels might potentially be successful in scaring users into using services less, this might not be the same as informing users.

I do hope I've communicated clearly here. Please let me know if anything could have been better-written. More importantly, I've clearly failed to understand some of your wise and well-made points. Can you help me with what I've missed?


As a non-technical user, if it were to be simplified like nutritional information, I would expect broad risk categories that receive either a check or no check for the possibility of exposure, i.e.:

    X Authentication
    X Financial
    X Communications
    X Property
Or such, so I can be aware at-a-glance that my user name, password, credit card information, private messages, or personal documents may have been compromised. I wouldn't be over-informed into oblivion, or at least not as likely to be, if the information available were suitably informative-yet-terse. Like a nutrition label.

However, that's just my pie-in-the-sky desire!

Pragmatically, as a technical person, I'd personally be happy with regular release notes for product revisions, including SaaS products, paired with a technically-oriented blog that details the notes with further clarity for the technically adept. Journalists can step in here, and appropriately severe information will spread much as it does now.

The upshot is that technical folks like us may learn a thing or two via such open communication. If everyone did it, there would be less fear in doing it, and there would be more information available about common risks and their mitigations.
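
Putting that at-a-glance checklist into a rough sketch (entirely hypothetical field names; nothing like this is standardized):

    # Hypothetical sketch of a terse, nutrition-label-style breach notice.
    from dataclasses import dataclass

    @dataclass
    class BreachNotice:
        authentication: bool   # usernames / passwords possibly exposed
        financial: bool        # payment card or bank details
        communications: bool   # private messages, email contents
        property: bool         # documents, photos, other stored files

        def summary(self) -> str:
            rows = [
                ("Authentication", self.authentication),
                ("Financial", self.financial),
                ("Communications", self.communications),
                ("Property", self.property),
            ]
            return "\n".join(f"{'X' if hit else '-'} {name}" for name, hit in rows)

    print(BreachNotice(authentication=True, financial=False,
                       communications=True, property=False).summary())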


So then all security audits just become you paying someone large amounts of money to get really bad press for yourself. Regardless of whether you've actually lost customer information or not.

At which point why not just stop being proactive?

Hardline stances sell great on forums; that doesn't make them right.


> At which point why not just stop being proactive?

The attitude that absence of evidence should be treated as if it didn't happen leads to exactly that conclusion - that it's better to not be able to tell what's happening.


Does your place of business inform for every closed XSS bug? SQLi? Every buffer overrun? Directory traversal? The bar for "bug that could potentially lead to access to user data" is so low that I'd say any business that isn't finding hundreds of these bugs a year isn't doing a serious job on security.


We try to, yes.


You are hereby informed that all technology systems may have vulnerabilities that may be exploited. Duh.


There's a scale from "there's no evidence of abuse and if there had been abuse we're certain we would have seen evidence" to "there's no evidence of abuse but there's no reason to suppose if there had been abuse we'd see any evidence".

Companies tend not to be terribly good at articulating where we are on that sort of scale.


You're absolutely right! There very much is such a scale.

Even the companies who are the very best at communications will tend to struggle to communicate clearly when filtered through a press that has strong incentives to construe everything as a data breach. Laypeople reading either of your statements will tend to stop at "there's no evidence" and go "What do you mean, 'no evidence'!? I want certainty!". Witness how readily and widely this whole G+ event has been misreported as a MASSIVE DATA BREACH.

While you're completely correct, perhaps there's room for subtlety here.


Read Google's response carefully. They say they only had 2 weeks of api logs. They never say if the logs even had enough context to detect abuse.

After reading it, I got the impression they have no idea what did, or did not happen.


Exactly. There must always be an assumption that abuse happened.


So if you were to write software or publish it, or deploy it, or even merely use it as part of your process to provide some other service to users, do you think it's reasonable to report each and every vulnerability?

Keep in mind that effectively means you'd need to report a potential breach every Patch Tuesday if you're running on Windows, and similarly frequently for most Linux distros. Oh, and you'd need to report all vulnerabilities in any dependencies of any software you build on, such as web frameworks, libraries, other apps... everything. Oh, and SVN had a SHA-1 collision vulnerability, so any software that ever used SVN, and anything that depended on it, might need to report a breach. With Meltdown and Spectre... OK, we've basically arrived at the point that if your business was somehow in nebulous proximity to a computing device, it may well have posed the risk of a breach. Where does it stop?

I mean - I'd love to live in a world where that were a realistic strategy, but in the I WANT MOAR DEPENDENCIES world we live in, which is just waking up to dealing with security issues, I'm not sure this is realistic, nor helpful.

So it's a shame that the best we have is the absence of evidence, but for all my frustration about the cavalier risks Google took here, I do actually believe that's pretty plausible evidence of actual absence. After all, if this had been discovered and meaningfully exploited, I really doubt the exploiter would have stopped.

Reporting this kind of stuff as a breach is basically FUD. Essentially: You're distracting from real security issues, of which there are legion. No need to get all hypothetical about it - yet. Google made a shameful bug, but this isn't breach, not by any useful definition of breach.


How is this different than any other internal/private bug found every day at orgs?


If it was exploitable from outside then it was not internal, by definition.


Good luck demanding that every xss bug ever closed from any business gets publicized.


There's "no evidence" of abuse because they apparently only kept a short timeframe of logs. So we don't really know, and neither do they.

Put another way, "no evidence of abuse" is not the same thing as "evidence of no abuse".


No, because it incentivizes the company "not to look very hard" for evidence of abuse.

A company can claim EVERY bug was never exploited, and nobody can disprove them.

There is an inherent conflict of interest there.


I wonder if “Strategic Lack of Log Retention” would make a good conference topic.

The problem with what Google is doing is that they are insinuating that a lack of evidence is evidence of lack. This is not unlike when companies like Equifax claim “we have no evidence...”

We should not be rewarding companies for strategically avoiding culpability.


I feel like a lack of logs can increase apparent culpability (as this scenario shows). It seems a little silly that they don't store anything after two weeks, but it was explained to me elsewhere that a lot of the reason is that those logs can contain user data, and they can't lose it or turn it over to the feds if it doesn't exist.


It can; however, it also makes it harder to really pin anything on them. I think that's a net win.

I understand the position that excessive log retention can itself create a high-risk metadata pool, but there are ways to mitigate that, by tokenizing the logs or removing PII, without having zero usage information.
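
As a toy example of the tokenizing idea (not a claim about how Google actually processes its logs): a keyed hash keeps per-user usage patterns auditable while the raw identifier, the PII part, never has to be retained, and destroying the key later unlinks the records entirely.

    # Toy sketch: pseudonymize an access-log entry before long-term retention.
    # The key and field names are made up for illustration.
    import hashlib
    import hmac

    PSEUDONYM_KEY = b"rotate-or-destroy-me-later"  # placeholder secret

    def pseudonymize(user_id: str) -> str:
        # Keyed hash so entries for the same user still correlate in the logs,
        # but the raw identifier itself is never stored.
        return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

    raw_entry = {"user": "alice@example.com", "api": "people.get", "ts": "2018-10-09T12:00:00Z"}
    retained_entry = {**raw_entry, "user": pseudonymize(raw_entry["user"])}
    print(retained_entry)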


The fact that we have not (at least, not yet) observed evidence of abuse is little comfort, from an ethical perspective.

https://en.wikipedia.org/wiki/Moral_luck#The_problem_of_mora...


It's not an unpopular opinion; it's the opposite of disclosure. It's not so much an opinion as a different ethic, in the vein of "no disclosure".

Who discovers it, or whether you can prove it was abused, is not relevant to the security issue.


Do you disclose every vulnerability or bug internally-discovered at your org?


Picking the nits here: They did not have any evidence of abuse, because they threw the logs away after 14 days...

I'd consider that at least a disingenuous reading of "we have no evidence that bad things happened".


Google deleted most of the logs that could have contained any evidence. Would you feel the same way if Google employees manually deleted the logs after discovering the breach?


Would you prefer google keeping all access logs to your account forever?


Yes, I don't want them to delete records of third-party access to my account unless I explicitly ask them to. Even if I didn't want that, there are a lot of options in the space between deleting after two weeks and storing them forever.


> no evidence of abuse is not a breach

400 third-party apps had access to this info. It's not that "Google knows that there wasn't any abuse" but that "Google doesn't know whether there was any abuse, because it didn't have the proper systems in place to check for that anyway."

It's kind of like you saw no crime happening because you didn't look. But that says nothing about whether or not the crime actually happened.


Let's just assume there wasn't any abuse. Say a bug compromised my bank account, but no money was stolen (someone may have looked at my balance and decided I was too poor to be robbed). Do I expect to be made aware? Yes, of course, and I feel data privacy deserves the same level of diligence, because this is still a breach of trust. So ethically, they could and should have at least made a statement and apologized.

We should not equate "no evidence of abuse" with "evidence of zero abuse"; that type of plausible deniability is not going to push improvement in protecting user privacy. Especially in this case, "no evidence" was really a lack of evidence (probably worse), because logs were only kept for a short period of time.

In the legal sense, or based on "industry practice", they might not be _required_ to disclose to the public. But can they, and should they? Because we have all witnessed Google go above and beyond and do amazing things over the years. I'm a Google fan, and I'm very disappointed by how this was handled.


An attacker accessing your balance would be considered abuse. The analogy doesn’t fit.


What's up with the contradictory first point the article is making?

1) This data privacy glitch is just like Facebook’s Cambridge Analytica scandal, except it isn’t.

Well, if it's not, then why even bring it up? That part smells like sensationalism to me.

Things don't get better when we realize there are no indications of any actual leaking of anyone's anything.

The bug this article refers to was pretty bad, and Google's handling of it was indeed poor. But this is just bad journalism.


> 1) This data privacy glitch is just like Facebook’s Cambridge Analytica scandal, except it isn’t.

> well, if its not, then why even bring it up? that part smells like sensationalism to me..

It's the same type of glitch, except there's no evidence that it was exploited (which is a different statement than it wasn't exploited; it may very well have been).


Unless I'm horribly misinformed, the Google breach is absolutely nothing like the Facebook-Cambridge Analytica deal. CA got huge amounts of information about users. The G+ breach just gave out contact information.

It's similar only in that it unintentionally gave out more information than it was supposed to. Beyond that, they're not similar at all.

I get that we shouldn't give Google a slap on the wrist because it's "not as bad", but we absolutely should not conflate the massive breach that was CA with this.


I think the comparison is a coherent one on the security side - these were both attacks enabled by allowing apps to piggyback on the visibility settings of the app user. Further, both represent threats which can't be entirely controlled (picture a user infected with a worm that simply opened Facebook and clicked through profiles), but can be constrained by auditing API data request options. If I had a social media site with an API for user-installed apps, I'd be thinking about these attacks in the same category.

But I do think the coverage here, equating the attacks on a user-impact level, is substantially unfair. The Facebook attack in some cases compromised Timeline posts and private messages from friends. What's more, Facebook initially claimed only profile data had been accessed, and took very little further flak when it was eventually revealed that private messages had been compromised.[1] Portraying the contents of the breaches as comparable feels like it not only overstates the current exposure, but gives Facebook a pass on the broader reach of its exposure.

[1] https://www.wired.com/story/cambridge-analytica-private-face...


"There’s no quick fix here."

I think people are thinking too small. Imagine if you could own your data profile and "invest" it into websites or services. Everyone would build their services to accept this same "profile" format, and the user could take it wherever they please.

This would mean small upstarts could compete with Google and Facebook (who right now have a huge head start on having all this data) by having a better UX.

Right now, everything is trapped in all these different walled gardens. I see it like your cellphone only being able to call cell phones of the exact make and model of your own.
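
As a purely hypothetical sketch of what such a portable profile might look like (made-up fields and format, since nothing like this exists as a standard):

    # Hypothetical sketch of a user-owned, portable data profile that any
    # service could accept. Field names and structure are invented.
    import json
    from dataclasses import dataclass, field, asdict

    @dataclass
    class PortableProfile:
        owner: str
        interests: list = field(default_factory=list)
        contacts: list = field(default_factory=list)
        history: list = field(default_factory=list)

        def export(self) -> str:
            # The user keeps this document and "invests" it in whichever service they like.
            return json.dumps(asdict(self), indent=2)

    profile = PortableProfile(owner="alice@example.com",
                              interests=["cycling", "jazz"],
                              contacts=["bob@example.com"])
    print(profile.export())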


Interesting idea.

How would ownership extend to metadata derived from your profile? For example, one of the claims made during the Cambridge Analytica hearings was that the data that they had in their possession could be used to derive political leanings, sexual orientation, purchasing habits etc.

I'm certain that this is where the value of the data is. No platform genuinely cares about cat pictures and birthday wishes - they care about how likely you are to purchase an advertiser's product. Or, cynically, they care about how many degrees of separation you are from a person under investigation.

This is not data that you've created directly or intentionally.


I've been thinking about this lately. What if the web were fundamentally a collection of data about what a person is doing, and then things hung off that? Right now it is a collection of information that you can 'find'. In that model, your photos, web history, and Fitbit would be generating your profile, which would be the fundamental part of the web, not the 'addressable places to get JavaScript apps' we have now.

Anyway that's as far as I got. Good luck and let me know how it goes.


Google is great about disclosing everyone else's zero day flaws tho!


I suspect that the government will feel compelled to get involved here, and I'm guessing the default ask of the public is that they do. But is a class action an option? Given that there's no evidence of a breach, does that mean there are no actual damages to claim?


Google is the quintessential evil tech corporation, and the federal government should prevent them from retaining the power they currently hold over the economy and society as a private autocratic monopoly.


Should we really be giving Bloomberg our traffic when everything points to them fabricating information in the Super Micro supply chain article?


This is the classic 'emperor has no clothes' moment, but some 'loyal subjects' are still pointing to imaginary clothes.


I wish Google would let me pay to just have 0 ads and maximum privacy. I would pay a lot for that and I would be a happier user since all my pet peeves seem to come from them dumbing down products so they can fit ads.

YouTube Red is a good start, hopefully this spreads.


G Suite is literally that for the core products.


Thanks for the lead, I didn't know about it!


This happens again and again...

a. oh no!!!

b. nobody will go to prison

c. a programmer will be fired

d. managers will get bonuses

e. nobody will change the way they write programs, process data, etc.

f. go to point a


Huge ad for WhatsApp just below the fold. Not only is it ironic given the subject matter, it's not blocked by Brave.

Also, the "this has got to stop" mentality is too soft. That time passed when Uber pulled the wool over everyone's eyes while the CEO stepped down.

We need more Captains, less crew.





