Hacker News
Google Will Soon Let Users Automatically Scrub Location and Web History (buzzfeednews.com)
395 points by siberianbear on May 2, 2019 | hide | past | favorite | 243 comments

This is welcome, but they might do better to remove the requirement to have the Web/Location/App history enabled for apparently unrelated features.

E.g. I can't store my home location in Google Maps without turning on Web & App activity history. Why? I can't share my current location (not historic, just my latest location - they only need to store one) with my partner unless I turn on Web and Location history. Why?

I am sure a lot of people will now pipe up and say "Ah well they want your data! That is why!". Kthx bai for that - I get it.

It strikes me as being a "nudge" more than anything, or perhaps just a cut corner to get a feature shipped rather than support something custom. In the case of my current location vs. location history, it was probably easier for them to build something that just picks the most recent record off the top of your location history than to create something new for storing only the most recent ping independently of location history, along with all the privacy controls, legal reviews, documentation, and fiddly UI bits that go with supporting something new.

Here's your answer:

There's no such thing, at least at scale, as sharing your information without also storing it. Not if you want the system to be reliable. So in order to make Google share your location with others, you have to give them permission to store your location. This should be self-evident enough if you take a moment to think about it.

As for lumping web history together with location history, it turns out you can derive the latter if you know the former. There's been a lot of fuss and breathless reporting about that fact. So authorizing them to know the one without the other is a bit of a fiction, especially in those cases that get written up by tech journalists. Best not to pretend that it's possible to keep them separate.

This new thing is an acknowledgement of the fact that the whole point of using Google services is sharing information with them and asking them to hold on to it, at least for a bit, at least until you've gotten the desired use out of the service... but also of the fact that a certain kind of person, the kind you often find on HN, would sleep better if Google deleted that data after it had served its immediate purpose.

>So in order to make Google share your location with others, you have to give them permission to store your location.

Sure. I have no problem with that. What I have a problem with is, why is google insisting on storing all the locations that I am not sharing?

I only want to share this current location. So yes, that will have to be stored somewhere. But for the love of everything I don't see why sharing my current location requires saving all my future locations.
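Concretely, a "latest location only" share needs just one overwritten record per user rather than an append-only log; a toy sketch of the difference (all names hypothetical):

```python
import time

class LastLocationStore:
    """Stores only the most recent ping per user, overwriting the old one;
    no trail of past locations is ever retained."""
    def __init__(self):
        self._latest = {}  # user_id -> (lat, lon, timestamp)

    def record_ping(self, user_id, lat, lon):
        # Overwrite in place: history is never accumulated.
        self._latest[user_id] = (lat, lon, time.time())

    def shared_location(self, user_id):
        # What a partner would see: one point, not a trail.
        return self._latest.get(user_id)

store = LastLocationStore()
store.record_ping("alice", 51.5074, -0.1278)
store.record_ping("alice", 51.5080, -0.1290)  # replaces the first ping entirely
print(store.shared_location("alice")[:2])     # only the latest fix survives
```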

Because otherwise Waze doesn't work.

You can also imagine the other sub-rosa reasons for this.

> As for lumping web history together with location history, it turns out you can derive the latter if you know the former

So, the question remains, why do they need to record your web history if you want to save your location? You can't derive it that way round.

I have all this stuff turned off; there is no good reason for Google to store it. It's a privacy nightmare, especially given automated sharing with the state.

Likewise. I don't think it's reasonable to require these permissions in order to use a Google Home device or the Google Assistant, for instance.

I can understand needing to remember the last query, in order to provide context for the next one. Automatic scrubbing would be helpful...but it's missing an "every 10 minutes" option.

(Today, my solution is a completely separate Google account that only gets used for Google Home/Assistant.)

It would surprise me if Google didn't have those accounts linked anyway, thus undermining your policy.

A Google employee once called me on my personal cell phone when I was a few minutes late to a business meeting. I still don't see how that could've happened unless they'd linked my work (g-suite) account with my personal (gmail) account, and then allowed their employees to access personal data in a totally inappropriate fashion.

Maybe there's some other explanation, but I can't fathom what it would have been.

There's probably some other explanation. I wasn't under the impression Google employees can just go read your data like this. And even if you want to imagine they can, it'd be dumb of them to do it in this fashion. If you didn't leave your number on your emails/website/resumes/Facebook/etc. or give it out previously, then maybe the employee got it from someone else who had your number.

Thank you for sharing your completely uninformed opinion, and asserting it as fact with literally no reason to think it's more correct than mine.

> Thank you for sharing your completely uninformed opinion, and asserting it as fact with literally no reason to think it's more correct than mine.

I in fact do have evidence to believe it's more correct than yours but I have no inclination to discuss it further.

You claim that you have secret proof which you refuse to share, but that sounds more like a troll or a bullshitter than somebody with actual evidence. I wonder why.

> You claim that you have secret proof

Evidence. Not proof. And it's not secret. Just not something I'm inclined to share here.

> which you refuse to share, but that sounds more like a troll or a bullshitter than somebody with actual evidence. I wonder why.

Because of (1) my privacy and (2) the fact that you're attacking me like this. You're welcome to see me as a troll if you'd prefer that.

What's the _nature_ of this evidence that you claim to have?

I have phone calls and e-mails to a personal g-mail account that make literally no sense to receive _unless_ Google had linked my personal and business accounts in a CRM someplace; and then given access to personal contact info to some of their staff.

> the fact that you're attacking me like this

How is this an attack? I've done nothing to you that you didn't do to me. Except in my case I revealed the nature of my evidence whereas you revealed literally nothing. You just came up with a dumb and obviously false hypothesis and declared it true.

> I've done nothing to you that you didn't do to me.

You're saying I called you a troll?

(This discussion is over from my end.)

You wrote a bunch of fairly rude words in a post you then deleted...

Employees don't have any more access to your information than anyone else.

The point isn't to avoid any connection between the accounts. The point is to ensure that my primary account can leave those histories turned off.

I often get reminded by apps that "This app will not function when play services has no access to: Contacts, calendar, call history (and about 5 more things)". I once turned off everything I thought to be unnecessary from the settings. Indeed, nothing ever stopped working correctly, but many things started nagging me periodically while functioning perfectly.

I just stopped using Android after shit like this. Privacy is an afterthought at best, despite the great new permissions model.

And it's absolutely not Android's fault as much as it is app developers' fault. Want to use an app that tracks your dreams? Most devs will have you sign up for their terrible service so they can store the information in plaintext on a server.

At least with iOS most devs seem to opt for CloudKit with E2E encryption. Privacy seems more wired into the iOS ecosystem than Android.

> And it's absolutely not Android's fault as much as it is app developers' fault

I do see it as a shortcoming of Android that I cannot redirect unwanted capabilities into a harmless sandboxed version of them.

Privacy Guard on LineageOS does exactly that. It's not Android, it's Google.

Not sure why this would be down-voted, unless people are really just that tribalistic about phone/tablet OS choice.

Didn't downvote, but Android has had dynamic permissions (since v6?) whose repeated notifications you can suppress, so I'm confused if the comment is still valid.

Kind of misses the point - I do mention the robust permission system in Android.

The issue is that the developers themselves don't really seem to care as much about privacy etc as iOS developers. This effectively renders the privacy measures mostly meaningless. If I'm writing my dreams into your app and you can read them on your own server, I've already lost all semblance of privacy.

In the light of the recent story on how YouTube and Google Docs put up banners about not working with IE6, I would not put it past them to do something similar any time the user did something that Google did not like.

That IE story was specifically about the YouTube team during the transition to Google: a story about how they used a legacy YouTube permissions role to circumvent process and oversight.

It also specifically talked about Google Docs team doing the same thing which people at the time attributed to being first. This is what provided cover for the YT team.

Not sure if you're trying to provide defensive cover to Google with your comment or not, but it clearly goes to show they are not above these kinds of dark UI patterns.

A one-time banner is not a dark UI pattern, jeez.

This is my number one annoying dark pattern from Google at the moment.

I would very much like to store my home and work addresses in Google Maps on my phone, but I can't do that without them also storing everywhere I go.

There is no good reason for these two things to be tied together other than to make me share my location data. We are talking about storing a few hundred bytes of relatively static data and letting me select them as a shortcut; this is not a complex feature, it's a simple shortcut.

Somewhat related:

I turned off youtube watch history.

I can no longer see my google play music played counts, because somehow the same permission covers both, even though I would like music history but not youtube history.

I wouldn't be surprised if that's a side effect of Google's plan to "replace" Play Music with YouTube Music, even though they're two completely different services.

Maybe they're the same service behind the scenes. Would you store the same music twice, if you had to run both?

They don't seem to be. At least not in terms of content. YouTube music has songs that Google play doesn't, and vice versa.

The YouTube music experience is kind of odd. It seems that you can only listen to full albums if someone has uploaded them to YouTube as separate songs. Which makes sense, but it means that currently a lot of the music I listen to is nowhere to be found on YouTube music.

They're in the process of being merged: https://www.theverge.com/2018/5/23/17386752/youtube-music-up...

I'll chime in here (like I do in all these threads until it's fixed) but Google Home and Android Auto require all of these settings to be enabled last time I checked.

The Google Podcasts app (!) refuses to do anything related to subscriptions without Web and App activity tracking enabled. Seriously reaching here regarding relevance and consent.

Also can't subscribe to podcasts if web activity is disabled, which is bullshit because YouTube subscriptions continue to work just fine

Here is another nudge: how easily can you write a rule to delete from your Gmail inbox after, say, X days? (Same question for outlook.com, icloud.com, etc.)

There are ways for technical users, but it nudges users to leave all the juicy data to the company, which can then use it in any way it feels (including monetization)
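One such way for technical users is a client-side IMAP script; a rough sketch using Python's standard library (host and credentials are placeholders, and note that Gmail's IMAP settings control whether an expunge really deletes or merely archives):

```python
import imaplib
from datetime import date, timedelta

def cutoff_date(days, today=None):
    """RFC 3501 date string for `days` days before `today`, e.g. '01-May-2019'."""
    today = today or date.today()
    return (today - timedelta(days=days)).strftime("%d-%b-%Y")

def delete_older_than(host, user, password, days=30, mailbox="INBOX"):
    """Flag and expunge every message in `mailbox` dated before the cutoff."""
    conn = imaplib.IMAP4_SSL(host)
    try:
        conn.login(user, password)
        conn.select(mailbox)
        # IMAP SEARCH BEFORE matches messages dated strictly before the cutoff.
        _, data = conn.search(None, "BEFORE", cutoff_date(days))
        for num in data[0].split():
            conn.store(num, "+FLAGS", "\\Deleted")
        conn.expunge()
    finally:
        conn.logout()
```

Run from cron with an app password and it behaves like the retention rule Gmail's own UI doesn't offer.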

I dare say automatic deletion of emails from your inbox is not a feature many users are clamoring for, whether they have privacy concerns or not. Hence I don't think the lack of it is for monetization purposes.

If you are a business customer you can set an email retention period for your domain (under the 'Compliance' section) which deletes all emails and chat messages after N days.

Users should clamor for a feature that automatically deletes emails with subject=password after, say, a week.

In theory the links are only valid for a few hours. In practice I don't trust that.

Even then, it's not about clamoring, but just having a way to create rules. You just can't do that easily in gmail.

Why not just delete the mail right after clicking the link?

There's a new "confidential" toggle on the web Gmail (at least for gapps accounts) that lets newly composed email expire after a certain amount of time. Not available in Android Gmail yet. Just saw it yesterday.

The actual email to non-gmail addresses is just a link to fetch the mail body, which can be validated with SMS.

> Why?

You know why. It's profitable. This information makes them money.

Another example, text-to-speech on Gboard requires the "Google" app, unfortunately.



The funny thing is that Google will still let you store your home address and they'll display it on the map, but if you have Location History turned off they'll refuse to list it as an autocomplete option when you search for addresses or directions. They know what it is, but they won't let you actually use it unless you expose everything else. It's extremely user hostile and there's no good reason to do it.

Edit: To clearly demonstrate what I'm talking about: https://imgur.com/a/QW9OxAS

They pin your saved locations to the top of the UI. If I click "Home" it will take me to my exact address. However, if I attempt to search "Home" in the search bar I get

"Turn on your Web & App Activity setting to search for 'home' and other personal places"

No thanks, Google. You don't need to know my entire location history 24/7 to take me to a static address you already have saved.

Yep, Google Maps still shows my home and work locations on the Commute tab from before I disabled almost everything in my Google account, but won't let me change the work address since we moved locations. So I guess when I change houses (the only one I really care about, since I use it to send an ETA to my wife) I can just turn everything on, change the address, then turn it all off again. It's illogical dark patterns like this that made me start detaching from Google, and will probably drive me to buy a non-Android phone next time, although I detest Apple UI even if their quality is usually great.

> I can just turn everything on, change the address, then turn it all off again

You can't - or, rather, I can't. It may be a problem that only affects me, I haven't talked to other people about it, but that was exactly my thinking a few months ago. So I activated location services, changed my home address to my new address, checked that it was set right, and disabled it again. And whoops, it's my old address again.

They might've fixed that in the meantime, I haven't checked it recently, I've just given up on using maps.

Is this on desktop or mobile? It could match the theory in the grandparent post that they preferred sticking to one backend. That also allows handling conflicts in one place, with one protocol. E.g. what would the behaviour be if you edited the home address with a ZIP code on your phone, while offline? What if you try to make the same change from your laptop and e.g. you set a ZIP+4 code? And then what happens when your phone is online again?

It's on mobile. I haven't tested it much, but I'm really not interested in doing so. I only keep Google Maps because my wife and I like to share locations, which I am trying to do via some direct method between our phones eventually anyways.

This is hilarious. Every single Google conspiracy is basically because some Google Engineer tried to DRY up functionality in an insanely complicated system.

So, I think that's probably quite right in general, and it's no "conspiracy". But it can still be a barrier to privacy. Rather than a conspiracy, it's evidence of priorities. You don't eliminate a development affordance when DRYing something up for something you know is a priority -- or if you do, it'll get refactored to fix it soon enough, if it is a priority.

Which is perhaps generally applicable to "conspiracies" in fact. There are some instances of powerful people making plans; but there are even more instances of results arising from systemic rewards and punishments, from people acting independently with certain interests, from just how the system works. That doesn't make them always great. Systems can be changed.

How can you explain this: in Android 5.1, every time you enable GPS, a popup appears asking you to share location with Google. There is a checkbox titled "Don't show again", but if you tick it, the button "Decline" gets disabled [1].

I don't have the exact screenshot, but the popup looks approximately like this [2].

They intentionally wrote the code to make sure that the user doesn't make the wrong choice. And this popup is not really necessary for the user, mostly for Google.

Also, this reminds me of Google's "Amateur Hour" story with Firefox. Every time they make a mistake, it's in the favour of Google, what a coincidence.

[1] https://android.stackexchange.com/questions/115944/how-to-pr...

[2] https://i.stack.imgur.com/9VGfbm.png

It's funny, though, how every single "conspiracy" just randomly ends up falling in the "fine if you give them full access, fails if you turn off location services" quadrant. Like, I've not once seen a mistake in the direction of "eh just turn off that data collection point and it'll fix it."

Imagine you have ~billions of users and 99.9% of them use the default. Do you put your eng effort towards power-user flexibility for the tinfoil hat crowd or do you improve popular features?

In the case mentioned, searching "your locations" appears to be just all on or all off (I have no insider knowledge of Maps). That greatly reduces the surface area for heisenbugs in a high QPS system.

The reason for that is pretty obvious isn't it? Very few systems depend on an absence of data to work. However, many systems can be designed which depend on the presence of data. If a setting affects the presence of data in some Google-internal databases, turning that setting off will disable any features that depend on the presence of that data in that database.

Unless they demonstrate a conscious, consistent effort to be privacy aware and let you control your data, the way Apple does, for example, those conspiracies will keep coming.

The only way out for Google is to actually start paying attention to that issue and make a conscious effort. And it'll take time before they'll regain the trust.

I've found that with it disabled, on Android, I can't even click the "Home" or "Work" buttons in maps without a prompt to turn on "Web & App Activity".

Why do I need to turn on "Web & App Activity" to store my home or work address?

Because Google wants to inconvenience us into turning it back on, plain and simple.

I have just discovered that long-pressing the Google Maps app icon on Android lets you choose "Home" without asking for "Web & App Activity".

So does this actually delete it ... for Google?

Deleting it from say my browser, maps, or other history is one thing. Does Google no longer have any of it?

Also, the top and bottom bars that leave me with only about 1/3 usable space on Google's blog make it really frustrating to read: https://blog.google/technology/safety-security/automatically...

Yes, if you choose to delete it, it will be deleted. (It takes nontrivial amounts of time to delete it across distributed service architecture but the deletions do get propagated.)

What about old backups (e.g. tape archives, which I think I've read Google uses), how do those get deleted? Do you store and encrypt the data with a user-specific key and then delete that key upon receiving a deletion request? Perhaps user data is never stored as part of tape backups? Perhaps you have a system which (very slowly) is able to load the appropriate tapes and purge the user's data?

The standard way to delete things from cold storage is to encrypt each piece with keys that you can more easily delete.

You can also, of course, read in the data, rewrite it with the deleted portion removed, and write it out again.

(Disclosure: I work at Google, though I don't know anything about how Google handles this)
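For illustration only, the key-deletion idea (often called crypto-shredding) can be modelled in a few lines; the XOR keystream below is a deliberately toy stand-in for a real cipher, and every name here is made up:

```python
import hashlib
import secrets

class CryptoShredStore:
    """Toy crypto-shredding: each user's data is encrypted with a per-user
    key; deleting the key makes any cold-storage copy unrecoverable."""
    def __init__(self):
        self._keys = {}   # user_id -> key (the ONLY copy of the key)
        self._blobs = {}  # user_id -> ciphertext (imagine this on tape)

    @staticmethod
    def _xor(key, data):
        # Illustrative keystream only; use a real AEAD cipher in practice.
        stream = hashlib.sha256(key).digest()
        while len(stream) < len(data):
            stream += hashlib.sha256(stream).digest()
        return bytes(a ^ b for a, b in zip(data, stream))

    def put(self, user_id, plaintext):
        key = self._keys.setdefault(user_id, secrets.token_bytes(32))
        self._blobs[user_id] = self._xor(key, plaintext)

    def get(self, user_id):
        return self._xor(self._keys[user_id], self._blobs[user_id])

    def shred(self, user_id):
        # The tape copy stays put; without the key it is just noise.
        del self._keys[user_id]

store = CryptoShredStore()
store.put("bob", b"location trail")
assert store.get("bob") == b"location trail"
store.shred("bob")  # the blob survives on "tape" but can never be decrypted
```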

Cannot comment in too much detail, as I'm not sure how much information is public, but tape backups are also in scope for wipeout, and user data is removed from them.

Is any copy of data anonymized and still kept around in ml models etc.?

I'm not an expert in this area, but from what I know, personalized ML models are rebuilt often from fresh data that is not anonymized, so if you request your data to be removed, it's gone from these models as well.

That said, there's also anonymized data without PII that is not attached to any specific user; that, of course, is not subject to this data wipeout, as Google wouldn't know what to delete. Such data can be used for general ML models.

Not sure how it's done at Google but anonymization is tricky and it's easier to gather two forms of information from the start, one with PII and one without PII. I think most large companies are doing it like that.

EDIT: Just remembered one thing. At Google, even when building general models, data that is rare cannot be used, because it could be used to identify specific people. For example: if a search query is used by fewer than X people on a given day, then those queries cannot be used for model building.
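A threshold rule like that (the cutoff X and the shape of the log are assumptions on my part) is easy to express:

```python
from collections import Counter

def filter_rare_queries(query_log, k=10):
    """Keep only queries issued by at least k distinct users on a given day;
    rarer queries are dropped before any model ever sees them."""
    users_per_query = Counter()
    seen = set()
    for user_id, query in query_log:
        if (user_id, query) not in seen:  # count distinct users, not raw hits
            seen.add((user_id, query))
            users_per_query[query] += 1
    return {q for q, n in users_per_query.items() if n >= k}

log = [("u1", "weather"), ("u2", "weather"), ("u3", "weather"),
       ("u1", "my rare medical condition")]
print(filter_rare_queries(log, k=3))  # only "weather" survives
```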

Thanks for the detailed technical answer. It helps; much appreciated.

Thanks for replying to so many questions. I hope you aren't too discouraged by the negative responses, your work is appreciated!

I also don't believe this. How do you know it will be hard deleted?

Edit: Nm, I see you're a Google engineer in this domain.

I guess for the writer of the blog it isn't something to point out, but it would have been nice if they explicitly said that.

Maybe it is just the writer's POV, but I do worry that they didn't explicitly say it is deleted as far as Google's own storage of such data goes...

I simply do not believe this.

Unless they're audited by a source that can be trusted and have the findings made public, I will not believe it either.

You're welcome to believe whatever you want, but a decent chunk of my day job is making sure that it's true.

'justcodebruh' your comment is marked as dead, but to answer your question I work on Google's privacy engineering team.


Attacking another user like that will get you banned here. No matter how you feel about privacy or Google or anything, please follow the site guidelines, and please don't post here if you can't treat your fellow community members with respect.


LOL rather than applaud some modicum of privacy we may be getting back, you decide to denigrate. Should we all hold this cynical attitude and hope for no progress at all?

He didn't tell everyone to hold the same attitude. He said they're doing a shitty job, and he provided a great example.

Rude, but true, and I would argue helpful. Sorry you're not enjoying reading it.

We should all hold companies accountable and not celebrate every “modicum of privacy”.

Look at what companies do, not what they say they do.

Yay, you can delete all your data... says the company which puts hidden microphones in consumer products, leaks copious amounts of user data across unrelated services, releases a browser which automatically logs you in to google services, lifted its ban on personally identifiable info in its ad service etc. etc.

So are you claiming that you can't actually delete your data? Or are you saying that letting you delete your data isn't a thing?

Because "letting you delete your data" seems to be a thing the company is doing. Which you say we should look at.

No one can say for certain how, and when they delete the data. Even the person from the privacy team said as much.

Moreover, this is a very, very, very small "modicum of privacy". A very small number of users will use that. It makes the company look good, for sure [1], but it will in no way, shape, or form affect what they are already doing, and will continue doing.

Should we celebrate them for being so gracious as to let the users delete their own data? Well, it's literally the least they should do anyway (once again, [1]).

[1] I wonder how much of this is related to GDPR, which requires Google to provide users with means to delete their data.

> No one can say for certain how, and when they delete the data. Even the person from the privacy team said as much.

You and I read very different comments. Not being able to tell you, a random HN commenter, the exact procedure by which a specific piece of data's lifetime is managed is different than not being able to, in the abstract, describe, monitor, and alert on the intended lifecycle of a piece of data.

> I wonder how much of this is related to GDPR,

Probably not at all: Google was already GDPR compliant (it was already possible to manually delete your data); this goes beyond the GDPR mandate and essentially allows you to set a TTL on your data.

It's becoming more apparent to me that you, either intentionally or not, aren't clear on what the status quo was, or what this change changed. To be clear, deleting your location data has always been possible (or at least, has been possible for a long time). There's a UI for it. This is, in addition to being able to manually delete data, have Google automatically delete any data older than a certain time.
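In other words, the new feature amounts to a TTL sweep layered on top of the existing manual delete; roughly (field names invented for illustration):

```python
import time

def sweep_expired(records, ttl_days):
    """Drop every record older than ttl_days; run this periodically and the
    history self-trims without any manual deletion step."""
    cutoff = time.time() - ttl_days * 86400
    return [r for r in records if r["timestamp"] >= cutoff]

now = time.time()
history = [{"place": "old", "timestamp": now - 120 * 86400},
           {"place": "recent", "timestamp": now - 2 * 86400}]
print([r["place"] for r in sweep_expired(history, ttl_days=90)])  # ['recent']
```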

> It's becoming more apparent to me that you, either intentionally or not, aren't clear on what the status quo was, or what this change changed.

It's becoming more apparent that the whole conversation kinda derailed from "privacy engineering team" which engineers privacy (whatever that means at Google) does a shitty job to "we should celebrate them for doing just the minimum amount of work for something that has already been possible, if spread across many UIs".

Literally despite multiple complaints about how Google handles data with regards to location and activity tracking (original complaint: https://news.ycombinator.com/item?id=19809168, summary: https://news.ycombinator.com/item?id=19812861), the "privacy engineering team" answers "just disable it": https://news.ycombinator.com/item?id=19810083

This is what I'm talking about, and this is the status quo that was, is, and is going to be in the future.

> derailed from "privacy engineering team" which engineers privacy (whatever that means at Google) does a shitty job to

Actually no, it was you who derailed an otherwise unrelated conversation to that topic.

> we should celebrate them for doing just the minimum amount of work for something that has already been possible, if spread across many UIs

The "minimum amount of work possible" was doing nothing. Let's isolate this: is this change a good thing, or a bad thing, in your opinion? That is, all else equal, would you prefer Google provide this specific privacy control, or not?

If not, why not? What is bad about this feature?

If yes, why are you attacking the very people implementing features that you think are good? Isn't it better to support the people doing work you support?

I’ve already answered that here: https://news.ycombinator.com/item?id=19811270

People will be more likely to consider your criticism if you are specific. Can you recommend an incremental improvement that aiiane could push for?

Literally the very first sentence in the comment I linked:

"they might do better to remove the requirement to have the Web/Location/App history enabled for apparently unrelated features."

And then that comment, and the first comment below it, describe the ways this can be improved.

And then comment upon comment upon comment:

- I can't store my home location in Google Maps without turning on Web & App activity history. Why?

- I can't share my current location (not historic, just my latest location - they only need to store one) with my partner unless I turn on Web and Location history. Why?

- Google will still let you store your home address and they'll display it on the map, but if you have Location History turned off they'll refuse to list it as an autocomplete option when you search for addresses or directions. Why?

- Google Home and Android Auto require all of these settings to be enabled. Why?

- Can't subscribe to podcasts if web activity is disabled. Why?

All the above are severe breaches of privacy, requiring the user to enable data collection by Google for features that have nothing to do with this data.

But yay, hurray, they will graciously let us delete our data.

To be fair, companies do say things which they don't do, or take years to do in the way that was advertised. Asking for a third-party auditor isn't unexpected or unwelcome in most cases.

I had a similar concern where I was unsure if Google deleted my data and then anonymized it for helping their models anyways. In my opinion, that would be materially different than deleting all of it.

I've beat up on Google a fair amount. But companies can get in so much trouble for outright not telling the truth that I don't believe it would be worth it for them to do so. The only big tech company that has a history of outright lying and being provably shady is Facebook.

IBM. Microsoft. Tons of companies lie all the time.

I value your contribution to this thread, can I ask some follow-ups?

Does Google have multiple "deletion policies", such that deleting data from, e.g., your GCP bucket follows one policy and the "scrubbing" described in this article follows an entirely different policy? If so, do different deletion policies have different processes and different audit trails, such that the end "deleted" state is subjective and controlled by the engineering and managerial oversight of the team behind that given product?

From my (naive) perspective, it must be really, really hard to, for example, retrain every ML model that a now-deleted datapoint ever touched. It's hard, too, to believe that at some high level in Alphabet's org there is no motivation to have the positive PR of features like this, while still in essence not deleting the parts of the data trail that significantly drive Google's revenue. Do these datapoints significantly impact Google's revenue?

First let me say that I can't speak to whether anything "significantly impacts Google's revenue" - either I don't have any material information in that regard, or if I did the SEC probably wouldn't want me making statements about it. :)

So with that caveat in mind, let me see what I can help answer.

I'm not entirely sure what nuance you're implying when you say "different deletion policies" - while for instance Cloud might have a different timeline or set of triggers for when and what data is deleted, when it happens "deleted" still generally means "deleted". Some products like GSuite have the ability for administrators to say, disable accounts, which removes them from use but doesn't delete the account, but that's transparent to the domain administrator.

It's definitely nontrivial to track data propagation within large systems, but standardizing infrastructure, having central documentation of data handling plans, and having comprehensive privacy reviews for any new functionality that launches helps keep people on the same page.

Edit: oh, and regarding "retraining every model a data point touched" - the easiest way to do this is to just always be regenerating your models on a frequent basis. If you retrain your models once a day or once a week on a fresh snapshot of your data, they'll only ever be that stale.
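To sketch that retraining idea (everything here is illustrative, not Google's actual pipeline): deletions reach the model at the next scheduled rebuild, so staleness is bounded by the retrain interval.

```python
def train_model(snapshot):
    # Stand-in for a real training job: the "model" records the data it saw.
    return {"trained_on": frozenset(snapshot)}

live_data = {"ping_a", "ping_b", "ping_c"}
model = train_model(live_data)          # daily/weekly scheduled rebuild

live_data.discard("ping_b")             # user deletes a data point
assert "ping_b" in model["trained_on"]  # stale until the next rebuild...

model = train_model(live_data)          # ...which regenerates from scratch
assert "ping_b" not in model["trained_on"]
```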

Uh, I guess all of this is wonderful and I appreciate the write-up, but anecdotal evidence on an internet forum is not very reassuring to many of us. Until Google commits to some real, actual transparency, there is no reason to believe they are actually deleting any data from their own servers, and that the options presented to end users are anything more than placebo switches.

No offense to you, but I remember when Amazon was releasing their home devices, and many people rang alarm bells in these forums only to be answered by supposed Amazon employees or friends thereof explaining why these devices couldn't possibly be sending data. Well, lo and behold, they are sending all sorts of data to Amazon. Were those commenters just lying? Were they misinformed? Were they trying to spread disinformation for whatever reason? Perhaps all three.

Here in America, the jig is up. Everyone, even our grandmas, understands that security and privacy are always going to take a back seat to profit. Always.

So again, no offense to you personally, but everything you are saying must be taken with a ginormous grain of salt.

The only answers are either open source or objective third party auditing, or preferably some combination of both. Words from google employees mean nothing.

What data are the Amazon devices sending other than recordings of commands which need to be interpreted?

Again, appreciate the reply. Others and I post a lot of criticisms of Google on HN, and though I feel much of it is deserved, it's the commentary from actual Google employees that turns it into a much more substantial and informed conversation.

Google retrains models all the time. They gave a presentation about ML in production last year:


You can see there's a section on privacy and deleted data as well.

Each team has its own policies, because each product is different: at a bare minimum they might be using different storage systems, but it's very likely that their data pipelines are quite different, too. In any case, each team's targets are at least as strict as any published ones, of course.

And additionally, it's probably useful for external folks to know that each product team has a designated product counsel and [usually] designated/dedicated POC on several different security & privacy teams, all of whom provide strict guidance on what is or is not allowed, and why.

Also, it is likely related to the auto-deletion timeframes they are offering. Three months is good enough for ML; if it were only a day, the data would not be worth anything to G.

Yes. The former is governed by commercial terms, and the latter is governed by consumer terms.

That's different from your second question about model training/retraining -- there is an answer to that question elsewhere (but it is taken seriously, and training data is also deleted upon deletion of the source data where it contains PII). I don't know this next part, but I suspect models use such vast amounts of data that any arbitrarily small deletion wouldn't have much impact.

Your comment here is the first direct indication I've had (at any level of trust) that Google does in fact delete things if you ask it to. Maybe it's dumb but I feel a tiny bit more inclined to trust these options now than I did before. Thanks for your contribution (to this thread and to deleting stuff).

Edit: Stray somewhat-related question just in case you'd know, why is it that if I open YouTube in a private browsing window on a computer with freshly cleared cache, it asks me which of my two Gmail accounts I want to log in with? Is it just IP address based plus some browser fingerprinting?

Curious, as a Google employee would you believe a Facebook employee, on HN making the same claim?

No, because Facebook does not have a detailed written policy regarding user data deletion and retention, which Google does have.

The policy is up to 90 days to fully delete all user data, stated here https://www.facebook.com/help/359046244166395/

> detailed written policy

Oh we're all good then!

I'm not sure if you're being sarcastic, but breaking a written policy is (AFAIK) fraud and has some teeth to it.

Assuming that you, an individual, can prove not only that the policy was broken, but also broken at the time when the policy was in effect. T&Cs change often, and only major changes require notifying the user.

The entire burden of proof would be on you, vs. a behemoth corp whose bottom line depends on maximizing data collection and retention.

Go to court on your own dime, prove it, and yes, the penalty might have some teeth. Might.

Usually AGs do that. They're slowly coming around to this being a major thing.


At this point the big companies probably need to submit to some external audits to really get trust back.

I've worked on wipeout compliance at Google and IMHO there's no possibility that an outside auditor would ever be able to understand those systems to such an extent that they would discover a flaw in them that wasn't previously known to the many engineers who work full-time on wipeout. The systems are just too complex. One does not simply `find /google -type f` to figure out what data is available.

I disagree. Compliance enforcement of complex systems is not done by sending in a team of engineers to reverse engineer the whole thing. What you do is require the regulated corporation to maintain auditable evidence of its compliance in a specific format, then do routine auditing of the evidence. You only do a true deep dive into their systems if the routine audit shows non-compliance or potential fraud.

Part of the point of the audit would be to demonstrate that you take it as seriously as you say you do. Not so much to check the fine details.

I don't personally put any value in those kinds of audits but Google has those. For example they have outside auditors for SOC and ISO 27018 and whatnot. Some dipshit from Ernst and Young swears that Google has such internal controls and processes.

In other words, it would be the same type of theater that most companies engage in. I know plenty of companies with "strict security standards" administered by the infrastructure department so they can pass an audit or some other type of security compliance. But most security breaches happen because of badly written software that the "compliance department" never reviews and they wouldn't know what to look for if they did.

Developers could have:

  sql = "Select * from Foo Where FirstName = '" + firstname + "'";
All over their code and no one in "compliance" would be any the wiser.
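For contrast, here's what the parameterized version of that query looks like - a minimal sketch using Python's `sqlite3` as a stand-in for whatever stack a hypothetical team uses, just to show why concatenation is the thing an auditor would need to catch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Foo (FirstName TEXT)")
conn.execute("INSERT INTO Foo VALUES ('Alice'), ('Bob')")

firstname = "Alice' OR '1'='1"  # a classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query,
# so the WHERE clause becomes: FirstName = 'Alice' OR '1'='1'
vulnerable = conn.execute(
    "SELECT * FROM Foo WHERE FirstName = '" + firstname + "'"
).fetchall()

# Safe: a bound parameter is treated purely as data, never as SQL.
safe = conn.execute(
    "SELECT * FROM Foo WHERE FirstName = ?", (firstname,)
).fetchall()

assert len(vulnerable) == 2  # injection matched every row
assert len(safe) == 0        # no user is literally named "Alice' OR '1'='1"
```

A process-level audit that only checks which controls are "deployed" would never see the difference between these two calls.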

This isn't necessarily true. At my previous company we had a serious finding (DoD audit) as a result of poorly written software. The finding was discovered by running one of our standard internal reports and looking at who touched what, when, and where. Pretty straightforward, and although the fix was in code, the discovery didn't require any technical analysis.

Without going into confidential information, how could a report that didn't analyze the code find possible SQL injection attacks or over-posting vulnerabilities?

Saying that systems are too complex to be audited is a horrifying statement that reveals an environment ripe for abuse.

That’s not what I meant. Outside auditors are the dumbest people on the planet and all they are prepared to understand is that you deployed Norton Deletemaster Enterprise Edition or whatever. Trying to explain to them that you’ve got a mathematical proof of why the backups are unreadable is a big waste of everybody’s time. I’ve been in that meeting and it’s worse than death. Still, the audits exist and the reports are public.

yes - and these online discussions are never astroturfed by employees of these companies.

Literally nothing whatsoever in Google's history suggests what you are describing is true.

may i ask what your day job is?

Answered here while your comment was dead: https://news.ycombinator.com/item?id=19809515

That's not a very good argument.

I believe you, but I also believe the NSA and others siphon that data and archive it so it's kind of a minor point if google keeps it or not, privacy is a relative term these days. We can be fairly confident since the Snowden revelations that data dragnet operations are continuing and expanding in scope.

Looking at all the leaks coming from Google, including Dragonfly and Maven, do you think the NSA could access data directly without any engineer noticing it and going to the media? I doubt it.

yes.. they split fiber pipes; if it's encrypted, they use undisclosed zero-days to steal encryption keys or try to break the encryption with supercomputers. these tactics are all well known.

All Google internal traffic is encrypted, and I'd guess they use some sort of forward secrecy. I guess it's going to be basically impossible to use split fibers to sip at that data.

The change was made exactly in response to PRISM. In fact, it was something that was already rolling out at the time, but the disclosure of that program caused the effort to be dramatically accelerated. At this point, and for quite some time, they've been using encryption for almost all traffic on the internal network.


you think that the thousands of NSA employees whose mission it is to collect and analyze that data just sat around and declared defeat after they started encrypting it? lol

I don't think the NSA has thousands of people to dedicate to the slurping of Google's data specifically. More like hundreds, if even that.

But I mean if your position is that the NSA has cracked modern encryption technologies, then I guess you better get off the internet. Whether you use Google or not, you're screwed.

i'm sure they've cracked some of them and can brute-force others faster than anyone realizes. but they're also probably very good at targeting specific people/machines and getting the encryption keys they want, especially if a key is being used to encode a large amount of data. i keep a low profile and avoid doing illegal things so i'm not really concerned, but it still sucks that privacy is disappearing because surveillance is convenient and profitable

I'm an ex employee. There's actually a whole team whose sole job is making sure that all other teams have policies, measurements and alerting for deleting data. They'll chase you if any of the above doesn't hold. It's non-trivial work and slows down your development, if you believe in releasing early and fast. For everybody else, it makes total sense.

I bet it's not free to run, but it's cheaper and easier than elsewhere, because Google's infrastructure is built in-house and mostly integrated. I don't envy other companies that want to do the same.

I work for Google, opinions are my own.

I understand your skepticism but at least in this case, I think your incentives are aligned with Google's. If we don't delete it (within ~30 days I think?) then I'm pretty sure we'd be in violation of GDPR.

Is that to say it's definitely deleted? I can't say that for sure, since bugs are always possible, but at least I'm pretty confident the intention is to delete it.

You don't have to believe it, but it helps to think rationally about the incentives.

What is worse:

- losing some data of the 0.1% of users who actually care, or

- the regulatory and PR nightmare that would happen if they don't and it is brought to light?

Google's Sarbanes-Oxley compliance should verify that this works exactly as described. I don't expect those to be public.

You're welcome to all the tinfoil you can fit on your head but this is a written policy Google markets to their paying customers. What you are saying is they are just committing overt fraud.


Brother, it's 2019. I'm not sure if you noticed, but fraud is in big time. Markets are at all-time highs. Fraud penalties are just overhead.

Besides, for that document to work, the authors have to be trustworthy.. in this case the authors are not even close to trustworthy; it might as well have been written by the Hamburglar.

Fraud for which they get slapped on the fingers.

A subpoena for something someone deleted would be a good way to find out. My prediction is that for something deleted, say, six months ago, the subpoena would come back empty.

(Disclosure: I work for Google, though not on anything like this. Speaking only for myself, not the company.)

> I simply do not believe this.

If you're not willing to put even that much trust in your various cloud providers, you aren't the target market. Surely you aren't sharing this information with Google already, right? So the feature isn't for you.

I've never seen audit reports from a trusted source of personal data being confirmed deleted for Apple, Amazon, etc. So I'm not sure what makes Google unique.

So you delete the data from the training results of the services that ingested this metadata? Yeah, RIGHT!

Google may disassociate your data from your account, but your data lives on.

In Europe at least it would probably be a violation of GDPR to not actually delete it on request of the user if it is considered personal data.

Is that even possible? If you use X data to generate a model and then delete the data and keep the model, you could in theory use the model to re-identify that user that deleted the data with a high degree of confidence.

In general models trained on individual users' data get rebuilt when the underlying data changes so as to properly reflect the current state of the source data.

Do you have any specific policies around this? I suspect it would be hard to say definitively that there aren't remnants of data I've asked Google to forget still kicking around.

I think that's a good question.

They could delete it... and then repopulate that data the next minute pretty easily.

They cannot repopulate the data from the model but the models can be analyzed to identify people.

Good point.

Will they also untrain the machine learning model that they've fed all of this data into to predict exactly what kind of ads are most likely to get you to spend money? How about slurping up data for users who don't have an account but send a ping to google on every goddamn page on the internet?

From someone else's post up-thread: "In general models trained on individual users' data get rebuilt when the underlying data changes so as to properly reflect the current state of the source data." https://news.ycombinator.com/item?id=19809346

Google is not allowed to collect PII from users who visit Google services without registering. So every time you delete your cookies, you are a new user, and they also are not allowed to log the IP address of non-users indefinitely.

Do you have a source, or any evidence that they don't? Google doesn't deserve the benefit of the doubt. This is literally their business model.

I think this is basically the GDPR rules, but I would need to look into the details more.

You're right - Article 3 of GDPR defines the scope as applying to any user residing in the EU. If the users are not logged in then you don't know where they reside (IP address != residence), meaning any international company that operates in the EU needs to err on the side of caution for non-logged in users everywhere.

IP address geolocation is the normal way people determine if GDPR applies.
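Once the IP has been resolved to a country code (the geo database lookup is the hard part), the check itself is trivial. A minimal sketch - the country set below is an illustrative EU/EEA list as of 2019, not a legal determination:

```python
# EU/EEA member country codes as of 2019 (illustrative, not legal advice).
EU_EEA = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR",
    "DE", "GR", "HU", "IE", "IS", "IT", "LV", "LI", "LT", "LU",
    "MT", "NL", "NO", "PL", "PT", "RO", "SK", "SI", "ES", "SE", "GB",
}

def gdpr_applies(country_code):
    # country_code would come from an IP-geolocation lookup (e.g. a
    # MaxMind-style database). If the lookup fails, err on the side
    # of caution and treat GDPR as applicable.
    return country_code is None or country_code in EU_EEA

assert gdpr_applies("DE")
assert gdpr_applies(None)       # can't geolocate -> be cautious
assert not gdpr_applies("US")
```

The "err on the side of caution" branch is exactly the judgment call being debated in this thread: geolocation is a heuristic, not a residence test.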

IP address geolocation is not always accurate, can be spoofed, and is flat out wrong for EU residents who travel outside of the EU.

Maybe it's normal for a startup, but for a billion dollar company, they're not going to roll the dice on 4% revenue fines and IP address geolocation.

If you send some test requests to any big company including Google I believe you'll see they're using IPgeo.

(Disclosure: I work at Google, I'm not a lawyer, and this is not legal advice)

Do you think my interpretation of Google not having long-term user-profiles of non-logged in users due to GDPR is correct? I read that Google has rolled out their GDPR compliance worldwide.

Simply using an IPGeo service doesn't mean they're not handling the GDPR compliance differently.

GDPR is not recognized in all countries, most notably not in the country Google calls 'home'.

Google is an Irish company.

Translation: American users are f-ed.

GDPR is not enforceable in the US, so this is applicable only to the EU. In the US, AFAIK, Google can collect anything that you consent to.

uBlock with dynamic filtering takes care of that for me, but I recognise it isn't practical for >95% of internet users.

I use uMatrix, but the amount of sites where you need to allow Google domains and cookies for reCaptcha to work is depressing.

And then you will be "punished" for being off the grid, and the images will take 8 seconds to fade out and fade in...

Glad I'm not the only one hating this. Especially with lots of captchas in a row it's super annoying - "yeah, it's the same browser, same cookie, same IP, doing the tenth captcha in 10 minutes, all previous tests succeeded ... better make sure to look extra closely and slow it down".

I've experienced this too. Are you saying Google's algorithms know (or at least strongly suspect) I'm human but throttle me anyways because I've taken steps to protect my privacy? Out of spite?

Probably not; the issue is that headless browser bots take the same precautions to prevent Google from knowing anything about them as you and other privacy-conscious users do.

There's a leap in logic there. Some ads may be harmful, but if so, they shouldn't be used at all.

Will that remove the history from Google's Sensorvault?[0] Because if not, this is just theater.


One thing that I find very interesting in the comments is people's approach to Google services. It's no secret that Google makes money from ads and from models based on user activity on their free services.

A lot of people are complaining that if they disable data collection, try their hardest to not contribute to these data models and block the ads with ad blocker then some feature doesn't work for them. So you just want to use Google services completely for free without contributing anything.

Other people complain about permissions that Assistant or other products require. There's always a very good reason why these permissions are required that is not monetary. Each permission request goes through a lot of scrutiny: lawyer reviews, product reviews, approvals, etc. You can ask why disabling some permissions affects features that shouldn't need them. Sometimes it's the lawyers' fault; sometimes it's just engineers not properly supporting each of the 2^{number of permissions} permission combinations.

> So you just want to use Google services completely for free without contributing anything.

Not necessarily. I believe that Google's services are good enough to use, and I'd certainly pay a few bucks a month to do so and be their customer - and not be the product that they sell to their customers. I'm also pretty sure that they'd make more money off of me paying them for a service than they make by selling my data to advertisers (me blocking ads and all that). If the only way of "contributing", however, is to give up my privacy, then yes, I don't want to contribute.

Why not pay for a GSuite account? That seems to meet your criteria.

That would be more of a "paying for something", I think. I don't use Google Apps; I have a Gmail account that I don't regularly use. I'm a simple man - I just use search (and its variations) and maps, and the occasional YT video. I understand that GSuite doesn't touch those, so while it's a good idea from a moral view ("I can't pay you for this, so I'll pay you for that which I never use but can pay for"), it doesn't change the privacy-invading parts of the products I actually do use.

This is it for me as well.

A lot of online publications frustrate me with this as well. They're only supported through advertisements, and request that you disable your ad blocker.

LET ME GIVE YOU MONEY! I would pay for the ability to read your site. Right now I have the choice of ad blocking and preventing them from earning much of anything or giving up my privacy to support them.

It's especially ironic given that many of these sites are increasingly writing about the dangers of "surveillance capitalism" and big tech.

Yeah. I understand it may not be worth it to set up regular membership programs and collect fees for that, but at least have a tip jar.

I'll be happy when they let me use their products like Google assistant without collecting all this info in the first place. Even a degraded experience would be fine. Why does assistant need to know where I was last week or what I searched for to answer simplistic questions?

You're in luck! Those requirements were removed months ago. You can use Assistant with those settings disabled no problem.

False. Got a Home a few weeks ago. I had Web & App Activity disabled, and while the Home "worked", it was nagging me to enable it all the time for all kinds of unrelated reasons. Need Spotify? Oh, you need web activity sharing. ?!?!

I just tried. Still asks me to turn on web and app activity, voice and audio activity, and device information. I say "OK Google" and it asks me to turn these things on. It does understand that I invoked it but won't help me unless I turn on these three trackers.

TBH I don't mind Google knowing all this stuff about me. I like targeted ads. What I don't like is the idea that someone could steal that information from them and use it against me. Soak up as much info as you want from me, Google, idc, but if you're going to do that then you'd better damn well be able to keep it protected.

>you better damn well be able to keep it protected

There is no 100% security.

Some companies do a lot better job than others at protecting their systems, but it's only a matter of time before they are breached or information is leaked, especially a high value target like Google.

If a powerful nation state wants to breach a corporation, they will.

So, the real question is this: is an affinity for targeted advertising really worth creating the most detailed psychological profiles in history on billions of people if it is inevitable that this information will be compromised and used for more nefarious purposes?

I see this (and other recent Gmail changes) as tactical measures Google is taking to save its image in light of the steep growth of privacy-friendly startups (ProtonMail, DuckDuckGo, etc.). But my take is that they aren't going to get far with this until they really commit to preserving users' privacy. Unfortunately, the likelihood of this change in thinking among Google executives is very low, as it's an uphill battle against the revenue they earn selling users' data.

Doesn't Google keep track of web history through IP/browser fingerprinting even if you don't use an account? And when asked about it they give vague answers about what information is being kept and for how long. That's why I switched to using DuckDuckGo for most of my searches.

Probably. I also switched to DuckDuckGo, and it generally works fine. On Google, I would get banners suggesting that I set my privacy settings, but it would say I had to log into a Google account to use it.

This is huge, and exactly what I want from these kinds of services -- a set "age out" policy where I can specify how long my data is retained before being deleted.

If FB let me clean out all my posts/comments/activity from everything older than a year I might actually start using it again.

Google should know that they have very little credit left on my side. When I read this article, I immediately think about the ways they could still keep tracking me:

- Check the latest user status after 3 months, before deleting user history.

- Compare it with the previous 3 months and store the difference in a separate table that is not user-facing.

- Scrub location and web history from the user-facing database.

It saddens me that I no longer allow any Google app on my phone to access anything. I try to only grant access if I really need to. Unfortunately, this is what it has come to.

Why is the minimum lifetime of data set to 3 months? Is it because 3 months sounds like a reasonable period of time for users to look back at their usage or is it because Google needs 3 months to process the data, for marketing or other money making purposes?

The reality is whether they keep the data for 3 months or 18 months doesn't mean anything. It just gives you the illusion that they no longer use your data. I find it hard to believe there's good intentions behind this (other than deleting data AFTER it's used).

You can always set it to zero by turning off location history / web and app activity entirely, causing new data to not be stored at all.

And then a pile of features that shouldn't need that history will refuse to operate.

I'm happy to just leave both settings off entirely. I actually do have some historical location data still stored there, they know where I was a number of years ago.

I almost wonder if leaving it there has a positive effect, as my life has changed, the locations I go have changed, the products and services I use and the stores I patronize have changed, and Google is left with a version of me that is no longer accurate.

This is why most ads target within 30 days.

This is actually worse for your privacy because it gives you the impression that your data is not being used. However, by the time you remove it, Google is already done with your data. I expect this will spread to the other types of data they collect, too. I cannot convince myself that it's a good thing or that Google has good intentions with it.

Yes, maybe if the auto-delete was for 24h, at least as an option, but 3 months? Why bother? Just pause the activity completely:


What value has "buzzfeednews" added here?

> Scrubbing this data from your Google account...

What about those of us without Google accounts? I use pretty aggressive tracker blocking, but wouldn't be surprised if a PREF cookie, ETag, browser fingerprint, or some other tracking mechanism snuck through at some point. My parents don't use any Google services other than search (not logged in), but Google likely has most of their web browsing histories going back years, thanks to ads and analytics.

If I delete my Google cookies, will Google delete my history and not try to re-identify me? It seems like "privacy theater" otherwise.

What's the over/under on how long before this feature 'accidentally' breaks? I'd bet even money on it 'malfunctioning' in less than 1 year.

Just like so many of the privacy features put out by FANG companies, which mysteriously undo themselves upon update roll outs. Funny how I've never seen those roll outs cause the privacy features to automatically turn on, only off.

Still better than nothing, I appreciate the option, but I can't help but being VERY skeptical about this.

The article doesn’t mention Google collecting and storing location information even when it’s turned off, and having the users figure out more convoluted ways to get rid of that tracking. Scrubbing this information periodically is not a substitute for not collecting or storing it in the first place. Whenever that’s done, I’ll probably accept that Google is doing something for privacy.

This feature looks pretty cool and it’s basically what I wanted. Kudos.

The question is, are they deleting that data?

I think they are big enough that, at least for EU users, not deleting that data would land them in a lot of trouble, so I think that they do.

For people concerned with privacy I don’t think Google is trustworthy enough, but these features are good to have for the rest of the world, so I’m glad to see it.

One nice form of regulation would be a mechanism to let companies in this situation make a formal and binding privacy claim, that would then expose them to painful consequences if shown to be untrue.

Right now we just have to take Google's word for it, but if they were willing to pinky swear (figuratively speaking), it would let people trust them more, and possibly let them offer new kinds of services.

> One nice form of regulation would be a mechanism to let companies in this situation make a formal and binding privacy claim, that would then expose them to painful consequences if shown to be untrue.

Or you could just make making false representations to get anything of value from people a tort generally, and in egregious cases a crime as well, rather than making it a privacy-specific regulation, and without making any particular formality on the part of the vendor necessary for consumers to be protected.

Your scenario means you have to sue Google and win, as well as prove damages. That has historically just not worked. When Snapchat was shown to be collecting location data after explicitly promising not to, for example, they just got a slap on the wrist.

I'm talking about a system that would let companies voluntarily increase their legal exposure to specific claims as a proof of commitment. That would allow companies to pick a level of privacy protection they wanted to offer, and market to customers based on that commitment, in an enforceable and credible way.

> Your scenario means you have to sue Google and win, as well as prove damages. That has historically just not worked.

So does a proposal about a special ceremony which makes the claim binding; you still have to take action when they break it, and prove that they did.

> I'm taking about a system that would let companies voluntarily increase their legal exposure to specific claims as a proof of commitment.

Increased exposure in what specific way? Lowering the standard of proof below preponderance of the evidence? Keeping that standard but allowing statutory minimum damages? Adding a damages multiplier?

You have a regulatory body to provide outside auditing, high monetary penalties for violations through neglect, and if the violations are intentional, somebody goes to the pokey. Kind of a codified and structured version of what the FTC does now, except with real teeth and without the problems the FTC faces in enforcement.

If you don't like that approach, there are other ways to make the idea work (or, since this is HN, "well actually" it into a fine powder).

> Your scenario means you have to sue Google and win, as well as prove damages.

I understood the proposal to be making corporate lying illegal, such that proving damages would not be necessary.

> are they deleting that data?

Also, are other "global observers" deleting data extracted from Google?

There's a hint of hubris in this move, the way I see it. Google must have enough confidence in the inferential power of the data they've accumulated that they can now wash their hands of anti-privacy dark patterns and declare themselves pure. It's a huge PR win without having to commit to deep changes in their (primary) business model.

Assuming that were true, wouldn't it be a massive existential risk to Google? The finely tuned ML model that classified everything correctly before might catch fire and explode if the thing generating its data changes.

In practice I assume that data has highly marginal value for them so the small percent of people who bother to use this won't matter much.

Living outside US jurisdiction, I strongly believe my data will end up in the hands of interested parties, especially when/if I am targeted, no matter what setting I choose. I'd rather get some services running smoothly on my devices by keeping these settings (i.e. historical data) enabled. So I prefer keeping them enabled.

Is there an open source replacement for Google Maps timeline that doesn't destroy battery life?

I know how bad it is privacy wise but I get so much utility of being able to see all the places I have been on a map, it's so wonderful after travelling or for finding places in my hometown I haven't yet been to.

On a web forum for programmers, on a website known for funding startups, there are numerous discussions about Google being inconvenient about privacy, and some participants are thinking about buying Apple products - as opposed to rooting their smartphones and using microG.

Rooting and re-romming is decidedly nontrivial in many cases, even for otherwise technically proficient users. Many devices are poorly supported, information is at best unclear, tool distribution is vastly less than trust-inspiring, data loss risk is high, and supported platforms for accomplishing root are limited.

Source: been trying (off and on) to root a device since 2015. By all accounts, it's not ROMable.

This being a key reason I've all but entirely soured on smartphones and tablets of any description, though Purism are looking interesting.

People get older, you know.

It would have been fairer to let the users decide how many days precisely they want to keep history, and let everyone find their sweet spots.

And, sure, it's deleted from Google. Is it also deleted from ad networks after Google shared the data with them? I doubt it.

What do you mean? Google is the ad network. They aren't sharing that data with anyone else.

Too little, too late. The average person is beginning to hate you as much as the tech savvy people, Google, and there's no one to blame but your greedy elitist self!

By the way, I deleted my entire history two weeks ago, and since then I haven't noticed any downsides, such as a deterioration of my "search experience".

Offering this should be a legal requirement for any company collecting tracking data, IMO. I might even re-enable my YouTube and Search history when they roll this out.

But only if you have a google account. If you don't have a google account, there's no way to request your shadow profile get scrubbed. As usual.

If you don't have a Google account, then your data is stripped of all PII, making identification virtually impossible. How do you expect to instruct Google to delete your data if it cannot tell which data is yours?

>How do you expect to instruct Google to delete your data if it cannot tell which data is yours?

If it has enough information to append to a shadow profile, it has enough information for me to request that the entire shadow profile get deleted.

Not sure if this is sarcastic or not. Pseudonymization is a paragon of plausible deniability.

Deniability of what? Google cannot reliably identify logged-out users. There are different legal teams that make sure that's the case. I worked with the log data, and this is actually a big pain: having to go through data reviews and remove all identifiable data.

We will do something approaching the right thing... soon!

I don't want to send my location to google in the first place. Is there a way to do that on Android without disabling Location Services entirely?

Google builds a profile for each user.

Even when the data is deleted, I assume the profile stays (as far as I can tell from the meager writing so far), and can continue to be developed with further info, even if all the old info has been deleted.

At the end of the day, the profile is more important than the data that was used to build the profile, so it's great for Google, and it doesn't help me much that my data was deleted.

In addition, Google could save in that profile all sorts of useful metadata (how many emails, from what range of countries, etc.) that might someday be useful.

Other commenters have stated that Google's algorithm currently rebuilds the profile when any data has changed. Obviously this will have been fixed (...to include the fact that the user wants deletion in the profile) before this feature gets rolled out.

1. Only a tiny percentage of users will use this, so the benefits in a legal sense are big compared to the loss.

2. The only loss is that future development will have been able to squeeze more from that info, and the metadata is enough to offset most of that risk.

Why not just stop collecting it?

Only do that if you don't want to know what Google knows about you (because they will keep your history anyway and simply hide it from you).

can we export our location history?

Yes. Google Takeout has that as an option.
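Once you have the export, the location history is just JSON and is easy to post-process. A minimal sketch, assuming the older Takeout schema (a top-level "locations" list with E7-scaled coordinates and millisecond timestamps; field names may differ in newer exports):

```python
import json
from datetime import datetime, timezone

def parse_locations(takeout_json: str):
    """Convert a Takeout-style location history export into
    (utc_datetime, latitude, longitude) tuples.

    Assumes the older export format: coordinates are stored as
    integers scaled by 1e7, timestamps as strings of milliseconds.
    """
    data = json.loads(takeout_json)
    points = []
    for rec in data.get("locations", []):
        ts = datetime.fromtimestamp(int(rec["timestampMs"]) / 1000,
                                    tz=timezone.utc)
        points.append((ts,
                       rec["latitudeE7"] / 1e7,
                       rec["longitudeE7"] / 1e7))
    return points

# Hypothetical sample record mimicking the assumed export format
sample = json.dumps({"locations": [{
    "timestampMs": "1556784000000",
    "latitudeE7": 377749000,
    "longitudeE7": -1224194000,
}]})

for ts, lat, lon in parse_locations(sample):
    print(ts.isoformat(), lat, lon)
```

From there you could plot the points with any mapping library, which gets you a rough self-hosted timeline without Google keeping a copy.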

Thank you European Union

google will soon let you think you can scrub your personal info.

uh huh

Would you please stop dragging down the quality of this site by posting uncivil and/or unsubstantive comments? If you don't believe something, fine—the rest of us don't necessarily need to hear about that, but ok, one comment was fine. This one, though, is both rude and dumb. Posts like this will get you banned from HN, so please don't do it again.


I personally don't believe them. I have an android phone but no Google login. I don't even want to begin to try and work out how to get them to scrub my anything. I've asked them for a copy of all my data based on phone number under GDPR. I've also instructed them to delete what they have about me. Obviously very interested to see how it works out.

Whenever I want to send an email through Gmail on my phone, it doesn't suggest recipients unless I give it permission to access my phone contacts. Does anybody know a way around this?

Ya ok, call me a cynic, but I doubt it amounts to anything more than a new column in their tables. It's all gonna still be there - there's too much money on the line to permanently erase anything.

Honestly, people have to be fools to think this is anything more than a feel-good placebo. Google is only doing this because public sentiment is starting to grow restless regarding data privacy. But given the pitiful, embarrassing grasp (judging by the Facebook/Google hearings) Congress has on the issue, I guarantee there will be no kind of followup to ensure Google actually deletes the data as they say they will. And honestly, why would they actually implement the feature? Who's going to hold them accountable? Won't be the federal government, they're a wet sock. Won't be shareholders, it's actually against their interest to provide data deletion. Won't be the media, they'll move onto another news narrative soon enough.

I don't know that we can be so sure it is a placebo.

It would be good policy at Google to actually do the thing, or be ready to, so that if called on or required to, they can quickly say "done!".

A CEO sitting in front of congress would certainly like that.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact