> Google might agree to let a random online shopping company scan what I’m typing into Gmail, but I did not agree.
Google might, in the sense that they could start, but Google doesn't do (and never has done) what is described.
First of all, Google has never let companies scan what you type. It did let companies target based on content of messages, but that involves advertisers sharing targeting information with Google, not Google sharing email content with advertisers.
Second of all, even that stopped last year. From the Google announcement:
> G Suite’s Gmail is already not used as input for ads personalization, and Google has decided to follow suit later this year in our free consumer Gmail service. Consumer Gmail content will not be used or scanned for any ads personalization after this change. This decision brings Gmail ads in line with how we personalize ads for other Google products.
( https://blog.google/products/gmail/g-suite-gains-traction-in... )
Yep, this is the point where it shows that the author has no idea what this is about. The entire Gmail-reading outrage was about the fact that the user had explicitly allowed the third-party app to read their emails.
Sure, you can argue that you clicked by accident, or that Google shouldn't even allow it in the first place, but saying that the user did not explicitly consent is bullshit.
This wasn't something buried deep in a long ToS; this was someone explicitly asking you for the keys to your house.
Not sure what you expect when giving access to your emails...
"Gmail add-ons examine incoming messages, then act on these messages in various ways, such as by:
- Displaying additional information for the user in the Gmail UI.
- Connecting to non-Google services, to retrieve information or take other actions."
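For context, an add-on has to declare the access it wants in its manifest, and the user is shown those scopes and must approve them at install time. A hypothetical `appsscript.json` fragment (scope names follow Google's published naming; the exact set varies per add-on) might look like:

```json
{
  "oauthScopes": [
    "https://www.googleapis.com/auth/gmail.addons.current.message.readonly",
    "https://www.googleapis.com/auth/script.external_request"
  ]
}
```

The first scope is what lets the add-on read the currently open message; the second is what lets it talk to non-Google services. Neither is granted silently.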
So yeah, it looks like there's no security at all offered by Gmail add-ons.
You are not protected from the software you choose to run. I find it a bit weird that you'd want that at all to be honest. But you can choose to run other software.
If you're going to complain along these lines, I'd say browser plugins are much scarier. They don't just have access to your mail, but to your web banking as well, and can (and have been known to), for example, override which account you transfer to, and how much.
I think the main problem these days is a lack of understanding of how computers and the Internet work.
This isn't merely a legal technicality hidden in the terms of service. We know they're doing it, and by continuing to use the service we are consenting, however unhappily.
(In case you weren't aware, Facebook owns Instagram and WhatsApp)
My impression of the issue is that most people simply don't care.
This is my impression as well, and I find it a bit of elitist arrogance to tell them they should. Different people value their data at different levels, and we shouldn't be so quick to make it harder for them to give it away if they choose. A simple question one should ask while picking up the pitchfork is whether one's perceived righteousness rests on the perceived stupidity of the masses (and whether that perception rests on anything beyond anecdotal evidence). And if you really believe stupidity is that rampant, you can set your law-making pitchfork down and advocate education.
Facebook and Google should get in major trouble when they mistreat their data (like the Cambridge Analytica scandal with Facebook), but you disliking the data they collect and/or their business model does not mean that everyone else will.
I know full well the type of data Google and Facebook collect about me and I just don't care. There's plenty of people in the tech industry that don't care either.
I never said that, in fact I explicitly said the opposite.
"I know full well the type of data Google and Facebook collect about me and I just don't care."
Yes, and you were able to make an informed decision because you do know. Not everyone knows. That's the entire point I was making.
> they should be informed
These two items are at odds with one another in an opt-in situation.
With regards to "shadow profiles", someone else's actions do diminish my privacy. I don't find it elitist or arrogant at all to protect my privacy in this way.
And, if someone came to me and said "stop doing this because it is negatively affecting me", I probably wouldn't think twice before stopping.
"stop increasing compliance burdens on the internet because it is negatively affecting me and my business"
It's not difficult at all, actually. It becomes increasingly difficult when you want to use these services and apps in which you are the product by design, and the truth is this is well within the rights of those companies that are giving you their services for free.
This is all virtue signaling to the max, as well as an endorsement of a more censored, controlled internet by these media outlets, tech companies, or governments.
After all, mum's the word on an actual problem: the data breaches of credit reporting agencies, e.g. Equifax. These companies have essentially a government-endorsed monopoly on all of our data. And no one goes to jail when my actually important info gets hacked, even after being warned (DL/SSN, last 10 addresses, loan amounts and who I owe, etc.). On top of this, these companies can sell my info to other third-party vendors, and I literally have no recourse.
All this Facebook nonsense is hogwash. If you're conspiratorial, there is a strong case to be made that this is about censorship: deference to the experts who know the web better than the masses, who are "incapable" of using any discernment regarding bots or fake news. These are the same people who brought us SESTA, FOSTA, PIPA et al., the same ones who told us we'd go back to dial-up without net neutrality.
I personally think it's just virtue signaling to the max..
Ya know, sometimes people actually do understand, and they just don't agree with you. I'm not a Trump supporter, but I believe people who say they like Trump for reasons that aren't "polite".
These anti-Google and anti-Facebook people should learn to disagree better.
My impression is that people have absolutely no idea that Google has become so invasive as to purchase copies of most people's credit/debit card purchase histories, and would be horrified to learn that they have taken surveillance capitalism to such an extreme.
>Google has been able to track your location using Google Maps for a long time. Since 2014, it has used that information to provide advertisers with information on how often people visit their stores. But store visits aren’t purchases, so, as Google said in a blog post on its new service for marketers, it has partnered with “third parties” that give them access to 70 percent of all credit and debit card purchases.
If your solution is "don't use the most popular service in the world", your solution is simply not realistic for most people.
I've commented here many times about Seisent (bought by Lexis/Nexus). In the mid-2000s they had uniquely identified everyone in North America and the Caribbean, and were making inroads into Central and South America, just from public records. One of their customers was the NSA, which also tied in money and phone records.
(My employer was prototyping using Seisent to help uniquely identify patients across heterogeneous orgs. It worked great, but cost too much for our use case.)
Seisent was (is) just one of many "independents" (third party). Today, FAANG and all the little dragons are doing the same.
The ridiculous part of the controversy is pretending we don't know who everyone is. Our voter registration database could trivially be near perfect in real-time. Every immigrant. Every homeless person. Every missing person (not yet dead).
"dropping off the grid" is not possible.
Yes, we can.
1. Some people (mostly technologists) are aware, but many people aren't.
2. Much of the awareness that does exist was not the result of those companies being transparent about their practices. It's the result of inferences based on scraps of information and speculation.
That's a really weird statement. Articles are published so that readers can be informed of things that the author and editors have become aware of. Often many articles are published on the same subject because 1) many people missed previous articles, 2) there's a new update that people aren't aware of, 3) people forgot and it's a good time to remind them, 4) it's not a news article but a persuasive piece, 5) etc.
>> 2. Much of the awareness that does exist was not the result of those companies being transparent about their practices. It's the result of inferences based on scraps of information and speculation.
This means that "we" aren't as aware as you make us out to be.
This is an opinion piece. It's written to persuade, not to inform, but its argument is founded on shaky ground.
> This is an opinion piece.
Opinions and ideas are things that one can be informed of.
Yes, I can. Why?
Because I only have a vague abstract notion that they collect data and do something with it before turning that into cash.
How am I meant to explain this to my friends and family, beyond "they are bad"?
I've never seen a fully worked through example of exactly what they are doing with all this data. Please point me at an article or video or something that everyone else has already seen a hundred times, because I still haven't found that one thing that will convince other people.
It's the same way people here consider those half-million salaries in the Valley normal, without honestly thinking about how this is made possible in the first place. For them, "it's cutting-edge machine learning work" etc. is a good enough answer.
Also Amazon and Apple, too.
If we didn't know they were doing this we might have a legitimate objection that the ToS are abusive, but that argument is far less compelling if we know they're doing this and agree to the ToS anyway.
Though, if you force them to publicize, you'd probably have to force them to conduct the audits, so they can't avoid the publicity/patching spend by keeping themselves in the dark.
I agree in principle, but it's not always that simple. Outside of the obvious "don't make your S3 buckets public", it's usually a combination of priority, cost, and compatibility. Years ago I ran a network inside the firewall of <Insert large company here>. They did constant vulnerability testing, and I had monthly meetings explaining what I was doing to mitigate all of the vulnerabilities that came up. Mitigation for some vulnerabilities meant rewriting decades old programs to run on a new OS/platform so we would harden around it the best we could while the work was done.
At worst, you only make things worse for users and empower large incumbents without solving anything.
So hopefully someone out there cares.
Please stop pretending this isn't a big deal just because of a hard-on for Google. If you'll put a 10 man company out of business for their complacency and ignorance you should be lining up at Google HQ with pitchforks over this. They're supposed to be above this. They are hailed as a gold standard.
Please don't project strawmen onto other users, it's intellectually weak and rude besides.
Also, I fundamentally disagree with your example. If they did an adequate investigation, using a 3rd party service, and found no evidence of my data being accessed by a 3rd party, and then fixed it- I'd say, "Good job checking up on yourselves" and move on.
Security is still incredibly hard to get right. I'm willing to bet your service has security holes in it, right now- and that's not a hit against you. We haven't mastered these systems and anyone who thinks they have is just waiting to get bit in the ass. Every security professional knows: It's never, ever, a question of "if", but of "when".
We've absolutely mastered these systems.
We just prioritize rapid development time and ease of use over security.
People make fun of DoD / space price tags (and admittedly, there's still a lot of crap sold there), but it's a trade off.
If we wanted computers to cost $x0,000 and OSes to cost $x,000, and the pace of progress to be glacial, we could have completely secure systems today.
It's a choice, not an impossibility.
> If we wanted computers to cost $x0,000 and OSes to cost $x,000, and the pace of progress to be glacial, we could have completely secure systems today.
> It's a choice, not an impossibility.
Progress in core OSes and conventional hardware has already slowed down to a glacial pace, compared with previous decades. Rapid development is often used to just create a churn of useless sidegrades and other poorly thought out things (deceptively sold as upgrades).
It might be time to make that different set of choices.
That's an interesting theory that's not supported by historical evidence as far as I can tell. When computers were expensive, there just were less people with access. The systems were not any more secure.
For example, the parent article, the one this comment is ostensibly in response to, has nothing to do with operating system security.
A better example is maybe instead of Google spending $xxx,000 to develop the system that was found insecure they should have spent $x,000,000 so they had more resources devoted to the security aspects.
Perhaps this is too high for this system to exist; well maybe that system just shouldn't exist if it can't be secured properly with the budget for it.
there's two points there
acknowledging it's impossible to have zero bugs ... and ... acknowledging that there will be exploits.
sure you can do better than average but "completely secure" is a myth
as for DoD there's this from 3 days ago
Which is my point and why the broad assertion is wrong.
Look at a VxWorks type OS. Or OpenBSD. Limited capability and compatibility, very strong security guarantees.
"Completely secure" isn't a mythical unicorn. It just costs time and money. A LOT of time and money.
If a company is disincentivized to look for security holes because it's highly likely it will find them; that company is on the fast-track to failing.
> If they did an adequate investigation, using a 3rd party service, and found no evidence of my data being accessed by a 3rd party
They found no evidence of data being accessed, but they also don't share how this particular system tracks the data that is accessed. This system was vulnerable for 2-3 years. It's as if you left your car at my house and found a dent in it, and I claimed I had no security footage of someone damaging your car, without mentioning that I don't actually have security cameras.
> I'm willing to bet your service has security holes in it...
You're right, and if I find that my NTP service is exploitable and anyone can DDoS me, I'll fix it and move on. There's no need to disclose that I'm an idiot when it doesn't affect other people. But if I accidentally left my servers misconfigured and my API unprotected against unauthorized access, I would make a post about it. We're not talking about an arbitrary vulnerability that lets people echo "hello" on a Google server. They left data exposed. I would like to know if it involves me.
I don't see any evidence to back up this assertion. FWIW I don't believe this is true.
Sony hasn’t failed yet.
I don’t think your opinion matches with reality.
But is that really what would happen? Vulnerabilities are frequently found and publicly reported. The reaction depends on the type of vulnerability (how stupid it sounds) and on whether or not the vulnerability resulted in a breach. You don't get hit equally hard for all security issues.
I think both users and developers have to be mature enough to deal with this sort of thing without resorting to secrecy. Secrecy creates distrust as the reaction to this particular Google+ issue shows.
With big enough repercussions, it should incentivize processes that don't allow it to happen in the first place. This is how HIPAA and various other things work already, and it does a decent job (not perfect, though).
I'd also say that we need to decide whether mistakes are forgivable, and whether that depends on their impact.
Perhaps for a small data leak it's forgivable, and we won't hit them with a huge fine or whatever. If it's a big leak then maybe we need to jail people.
It's a bit of a strawman to say everything has security holes, because that assumes the problem is unfixable from the start.
It's a bit like launching a person into space. "We know the probability we'll be sending people to their deaths is high; let's not bother then." We decided to do it anyway, but we set extremely high bars for safety.
Were this actually the case, the CVE list (not even mentioning Google's own Project Zero) would have disincentivized companies and projects years ago.
How'd you feel then?
Also bridges are built with a much bigger lifetime in mind and require (proving GP's point) a much bigger investment.
Especially given the way Google, quite rightly, expose vulnerabilities through Project Zero.
That means Google was proactively checking its own work to make sure it was secure, unlike your example, where nobody was likely ever going to notice that the bucket was misconfigured.
They’re the gold standard because they find and clean up their own shit, even when nobody is pressing them to do it.
Furthermore, if every potential vulnerability that might have been exploited needs to be reported as a breach... then each and every bug in any library and other dependency you use likely needs a breach report too - after all, you typically cannot verify retrospectively whether that exploit wasn't independently discovered and abused before the fix.
I'm not even positive such a reporting requirement would be a bad idea, but it would definitely need some more mature reporting, and some public education, to tell the two classes of issue apart, especially since it's pretty vague where you draw the line on what needs publication once you go that far.
This is the information that was possibly exposed: https://developers.google.com/+/web/api/rest/latest/people
Plus, were this the case, the CVE list would have stunted vulnerability exploration across the board. Have you seen the list of vulnerabilities discovered in curl?
It isn't that it is Google that makes the difference, but the extent of the damage.
Really? If Google has access logging on its content, and as in your example an AWS bucket can be configured for access logging then it would be fairly straightforward to identify if the data was accessed.
You're supposed to be above this here on HN.
So besides applications, infrastructure, distributed systems, reliability, money & security what have the Romans ever given us?
> We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug. However, we ran a detailed analysis over the two weeks prior to patching the bug, and from that analysis, the Profiles of up to 500,000 Google+ accounts were potentially affected. Our analysis showed that up to 438 applications may have used this API.
The original WSJ article shared this version of "no evidence of abuse", and it's not very reassuring.
"Because the company kept a limited set of activity logs, it was unable to determine which users were affected and what types of data may potentially have been improperly collected, the two people briefed on the matter said. The bug existed since 2015, and it is unclear whether a larger number of users may have been affected over that time."
It's also not clear that the activity logs would even have the context to distinguish normal access from unauthorized access.
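A toy sketch of why that context matters (all names and log fields here are invented for illustration): if the access log records only who called which endpoint, a call that exploited the bug is indistinguishable from a legitimate one.

```python
from dataclasses import dataclass

# Hypothetical, deliberately minimal access-log schema: it records which
# app called which endpoint for which user, but NOT which fields the API
# actually returned in the response.
@dataclass
class LogEntry:
    app_id: str
    endpoint: str
    user_id: str

log = [
    LogEntry("app-legit", "people.get", "user-1"),    # normal read
    LogEntry("app-sketchy", "people.get", "user-1"),  # over-broad read via the bug
]

def find_abusive_reads(entries):
    """Attempt to flag reads that exploited the bug.

    With only (app, endpoint, user) recorded, an exploiting call looks
    exactly like a legitimate one, so the best this can do is return
    every call to the endpoint: "no evidence of abuse" here just means
    the log cannot express abuse at all.
    """
    return [e for e in entries if e.endpoint == "people.get"]

flagged = find_abusive_reads(log)  # flags both calls; distinguishes neither
```

The point of the sketch is that an audit over such logs can honestly report "no evidence of unauthorized access" without that statement carrying much weight.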
We need disclosure in Open Source because if attention is not drawn to the vuln downstream users won't upgrade, but highly motivated attackers will have what they need. With non-source-available software, there are fewer opportunities for attackers to learn of any vulns.
Bear in mind that proving no abuse is impossible, as it's always possible that the hypothetical abuser got one step further than your investigation and covered their tracks.
When such a vulnerability is discovered the right thing to do is to inform the users of its possible severity and the duration during which it was available.
I'd love for that to happen. Maybe then everyone will start to pick up on the fact that all of our computer systems are insanely insecure.
Canada is going to require disclosure of breaches starting November 1. That may allow for companies to still fail to disclose the _possibility_ of breaches, however.
Long story short, California requires labeling of things that may contain chemicals hazardous for a variety of reasons. The idea was simple - never not inform. The intentions were pure.
The result is that basically everything has a vague and uninformative label about how it may contain chemicals that could be hazardous. The labels are spectacularly uninformative and incredibly numerous. They are universally ignored by everyone, as they constitute a sea of noise.
The net result is that a wonderful, laudable, pure, kind, and compassionate idea - never not inform - has led to a deluge of useless informational notices in which actual useful notices are impossible to find.
It might be worth considering that there could be a lesson in there.
Warning labels have dubious efficacy, as evidenced by decades of grotesque warnings upon cigarette packaging and their limited success at reducing consumption; but that's not to say we ought not be able to know about the potential harms of a product or service, just that product labels sort of suck as a method of informing users.
OTOH, Canada requires nutritional information on all food, and that's reasonably useful and successful _because_ the content of that label is strictly defined and reasonably useful.
Nutritional information is relatively easy to provide in thoroughly exhaustive detail. How does one go about informing people about data events that may or may not have occurred, and may or may not occur in the future, in a way that's comprehensible to users who must be assumed to have absolutely no technical grounding whatsoever?
The most useful informative warning I think that could be reasonably provided runs like this: "Please be aware that any data you provide this service, and any data this service gathers, could previously, presently, or in the future be the subject of a data breach that may or may not be detected."
While the warning could of course be improved by being made more specific about what data is concerned, this does not seem like a good way to produce useful and informative warnings. Instead, it seems to me that attempting to pursue that most wonderful of ideals - never fail to inform - could easily produce a sea of overcautious attempts to ensure a user is never not informed, and instead over-informed into oblivion.
While such labels might potentially be successful in scaring users into using services less, this might not be the same as informing users.
I do hope I've communicated clearly here. Please let me know if anything could have been better-written. More importantly, I've clearly failed to understand some of your wise and well-made points. Can you help me with what I've missed?
However, that's just my pie-in-the-sky desire!
Pragmatically, as a technical person, I'd personally be happy with regular release notes of product revisions, including SaaS products, and which are paired with a technical-oriented blog that details the notes in further clarity for the technically adept. The role of journalists can step in here, and appropriately severe information will spread much as it does now.
The upshot is that technical folks like us might learn a thing or two from such open communication. If everyone did it, there would be less fear of doing it, and more information available about common risks and their mitigations.
At which point why not just stop being proactive?
Hardline stances sell great on forums, that doesn't make them right.
The attitude that absence of evidence should be treated as if it didn't happen leads to exactly that conclusion - that it's better to not be able to tell what's happening.
Companies tend not to be terribly good at articulating where we are on that sort of scale.
Even the companies that are the very best at communications will tend to struggle to communicate clearly when filtered through a press that has strong incentives to construe everything as a data breach. Lay people reading either of your statements will tend to stop at "there's no evidence" and go "What do you mean, 'no evidence'!? I want certainty!" Witness how readily and widely this whole G+ event has been misreported as a MASSIVE DATA BREACH.
While you're completely correct, perhaps there's room for subtlety here.
After reading it, I got the impression they have no idea what did or did not happen.
Keep in mind that this effectively means you'd need to report a potential breach every Patch Tuesday if you're running on Windows, and similarly frequently for most Linux distros. Oh, and you'd need to report all vulnerabilities in any dependency of any software you build on: web frameworks, libraries, other apps... everything. Oh, and SVN had a SHA-1 collision vulnerability, so any software that ever used SVN, and anything that depended on it, might need to report a breach. With Meltdown and Spectre... OK, we've basically arrived at the point where, if your business was somehow in nebulous proximity to a computing device, it may well have posed the risk of a breach. Where does it stop?
I mean - I'd love to live in a world where that were a realistic strategy, but in the I WANT MOAR DEPENDENCIES world we live in, which is just waking up to dealing with security issues, I'm not sure this is realistic, nor helpful.
So it's a shame that the best we have is the absence of evidence - but for all my frustration about the cavalier risks google took here, I do actually believe that's pretty plausible evidence of actual absence. After all, if this had been discovered and meaningfully exploited, I really doubt the exploiter would have stopped.
Reporting this kind of stuff as a breach is basically FUD. Essentially, you're distracting from real security issues, of which there are legion. No need to get all hypothetical about it, yet. Google made a shameful bug, but this isn't a breach, not by any useful definition of the word.
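To make the reductio above concrete, here's a hypothetical sketch (all dependency names, dates, and windows invented) of how a "report every window of potential exposure" rule would play out: any overlap between a dependency's vulnerable window and your deployment window owes a report, regardless of whether anything was ever exploited.

```python
from datetime import date

# Invented dependency vulnerability windows: (name, vulnerable_from, fixed_on).
deps = [
    ("os-kernel",     date(2015, 1, 1), date(2018, 6, 1)),
    ("web-framework", date(2016, 3, 1), date(2016, 4, 1)),
    ("tls-library",   date(2014, 1, 1), date(2014, 4, 8)),
]

# Your (hypothetical) service's deployment window.
deployed_from, deployed_to = date(2015, 6, 1), date(2018, 10, 1)

def reports_owed(deps, start, end):
    """Under 'every potential exposure is a reportable breach', any overlap
    between a dependency's vulnerable window and the deployment window
    triggers a report, whether or not anything was ever exploited."""
    return [name for name, vuln_from, vuln_to in deps
            if vuln_from <= end and vuln_to >= start]

owed = reports_owed(deps, deployed_from, deployed_to)
# Even this tiny three-dependency example already owes multiple
# "potential breach" reports; a real dependency tree owes far more.
```

Scale that over a real dependency tree and every kernel or library update, and the reporting volume drowns out the reports that actually matter.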
Put another way, "no evidence of abuse" is not the same thing as "evidence of no abuse".
A company can claim EVERY bug was never exploited, and nobody can disprove them.
There is an inherent conflict of interest there.
The problem with what Google is doing is that they are insinuating that a lack of evidence is evidence of lack. This is not unlike when companies like Equifax claim “we have no evidence...”
We should not be rewarding companies for strategically avoiding culpability.
I understand the position that excessive log retention can itself create a high-risk pool of metadata, but there are ways to mitigate that, such as tokenizing the logs or removing PII, without having zero usage information.
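A minimal sketch of that kind of mitigation, assuming a keyed hash is acceptable for the use case (the key, names, and log shape are all invented): raw identifiers are replaced with stable tokens, so per-user access patterns survive while the raw PII does not.

```python
import hmac
import hashlib
from collections import Counter

SECRET = b"rotate-me-regularly"  # hypothetical per-deployment key

def tokenize(user_id: str) -> str:
    """Replace a raw user ID with a stable keyed hash (a 'token').

    The same user always maps to the same token, so usage statistics
    still work, but the retained log no longer contains the raw PII.
    Rotating SECRET severs the link between old and new tokens.
    """
    return hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()[:16]

raw_events = [
    ("alice@example.com", "people.get"),
    ("alice@example.com", "people.get"),
    ("bob@example.com", "people.get"),
]

# What actually gets written to the retained log:
tokenized_log = [(tokenize(uid), endpoint) for uid, endpoint in raw_events]

# Per-user access counts are still available without storing emails:
access_counts = Counter(token for token, _ in tokenized_log)
```

This keeps enough signal to answer "how many distinct users did app X read, and how often?" years later, without the log itself becoming the next leak.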
Who discovers it or if you can prove it was abused, is not relevant to the security issue.
I'd consider that at least a disingenuous reading of "we have no evidence that bad things happened".
400 third-party apps had access to this info. It's not that "Google knows that there wasn't any abuse" but that "Google doesn't know whether there was any abuse, because it didn't have the proper systems in place to check for that anyway."
It's kind of like you saw no crime happening because you didn't look. But that says nothing about whether or not the crime actually happened.
We should not equate "no evidence of abuse" with "evidence of zero abuse"; that type of plausible deniability is not going to push improvement in protecting user privacy. Especially in this case, "no evidence" really meant a lack of evidence (probably worse), because logs were only kept for a short period of time.
In the legal sense, or based on "industry practice", they might not be _required_ to disclose to the public. But can they, and should they? We have all witnessed Google go above and beyond and do amazing things over the years. I'm a Google fan, and I'm very disappointed by how this was handled.
1) This data privacy glitch is just like Facebook’s Cambridge Analytica scandal, except it isn’t.
Well, if it's not, then why even bring it up? That part smells like sensationalism to me.
Things don't get better when we realize there are no indications of any actual leaking of anyone's anything.
The bug this article refers to was pretty bad, and Google's handling of it was indeed poor, but this is just bad journalism.
> Well, if it's not, then why even bring it up? That part smells like sensationalism to me.
It's the same type of glitch, except there's no evidence that it was exploited (which is a different statement than it wasn't exploited; it may very well have been).
It's similar only in that it unintentionally gave out more information than it was supposed to. Beyond that, they're not similar at all.
I get that we shouldn't give Google a slap on the wrist because it's "not as bad", but we absolutely should not conflate the massive breach that was CA with this.
But I do think the coverage here, equating the attacks on a user-impact level, is substantially unfair. The Facebook attack in some cases compromised Timeline posts and private messages from friends. What's more, Facebook initially claimed only profile data had been accessed, and took very little further flak when it was eventually revealed that private messages had been compromised. Portraying the contents of the breach as comparable feels like it not only overstates the current exposure, but gives Facebook a pass on the broader reach of its exposure.
I think people are thinking too small. Imagine if you could own your data profile and "invest" it into websites or services. Everyone builds their services to accept this same profile format, and the user takes it wherever they please.
This would mean small upstarts could compete with Google and Facebook (who right now have a huge head start, having all this data) by having a better UX.
Right now, everything is trapped in all these different walled gardens. I see it like your cell phone only being able to call cell phones of the exact make and model of your own.
How would ownership extend to metadata derived from your profile? For example, one of the claims made during the Cambridge Analytica hearings was that the data that they had in their possession could be used to derive political leanings, sexual orientation, purchasing habits etc.
I'm certain that this is where the value of the data is. No platform genuinely cares about cat pictures and birthday wishes - they care about how likely you are to purchase an advertiser's product. Or, cynically, they care about how many degrees of separation you are from a person under investigation.
This is not data that you've created directly or intentionally.
Anyway that's as far as I got. Good luck and let me know how it goes.
YouTube Red is a good start, hopefully this spreads.
a. oh no!!!
b. nobody will go to prison
c. a programmer will be fired
d. managers will get bonuses
e. nobody will change the way they write programs, process data, etc.
f. go to point a
Also, the "this has got to stop" mentality is too soft. That time passed when Uber pulled the wool over everyone's eyes while the CEO stepped down.
We need more Captains, less crew.