Chrome Privacy (mikewest.org)
200 points by czottmann 2214 days ago | 99 comments

2 responses:

1. Chrome is the only major browser not to support the Do Not Track header. Google is also the browser vendor whose bottom line would be most impacted if users could easily opt out of tracking. Coincidence?

While I welcome folks from the Chrome team to weigh in on the reasons for this, my own understanding is that adoption of this feature is being blocked by Google's Mountain View based policy team, and is not a decision that is in the hands of engineers.

Compare this to Chrome's absolutely spectacular record in the area of security, where folks like Adam Langley and Chris Evans have been able to ship innovative features that haven't worked their way through the standards process. Examples include HTTPS certificate pinning (that recently led to the discovery of MiTM attacks against Iranian users using the DigiNotar certs).
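The mechanics of pinning are easy to sketch: the browser ships expected public-key hashes for a site and rejects any certificate chain that doesn't contain one of them. A toy model follows (hypothetical domain and key material, not Chrome's actual implementation, which pins hashes of the certificate's SubjectPublicKeyInfo):

```python
import hashlib

# Hypothetical pin list: domain -> set of SHA-256 digests of public keys
# the browser is willing to accept for that domain.
PINS = {
    "example.com": {hashlib.sha256(b"legitimate-public-key").hexdigest()},
}

def pin_check(host, presented_key):
    """Reject the connection if the host is pinned and the presented
    public key matches no pin (as a MiTM proxy's key wouldn't)."""
    pins = PINS.get(host)
    if pins is None:
        return True  # not pinned: fall back to ordinary validation
    return hashlib.sha256(presented_key).hexdigest() in pins
```

A forged-but-valid certificate, like the DigiNotar-issued ones, passes ordinary chain validation but fails the pin check.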

In the area of security, Chrome's engineers deploy whatever they think will help users. In the area of privacy, Google's lawyers and lobbyists are calling the shots.

(Also, there still isn't a working API to let others add DNT support to Chrome. An API exists, I think, but as far as I know it's quite buggy.)

2. Blocking 3rd party cookies by default. Apple defaults to blocking 3rd party cookies; Chrome does not. Both are derived from the same WebKit core (yes, I know there is different code now), but when Google decided to create Chrome, they went with a different default than the one that Apple had already used -- one that hasn't led to websites breaking for Apple users.

Again, which browser vendor's bottom line would suffer if Chrome users could not easily be tracked? Google's.

Let me be clear - I don't think that Chrome is engaging in any sneaky shenanigans to directly track users. No, that would be too obvious. Instead, Chrome just makes it easy for Google's other services to track users, as long as they stick with the defaults.

> 1. Chrome is the only major browser not to support the Do Not Track header. Google is also the browser vendor whose bottom line would be most impacted if users could easily opt out of tracking. Coincidence?

The Chrome team has already addressed this issue. The problem with Do Not Track is that it isn't clear what it blocks and what it does not. It is also pretty useless, as most websites that survive on ads will never honor the Do Not Track header, so it's a faux solution to the problem. Google already offers a browser extension (for Firefox, Chrome, and IE) to block Google Analytics. Last but not least, there is a dashboard on Google where you can see what Google's ads know about you, with the option to delete this information.

I think what's really bad is that DNT headers offer a false sense of privacy when in fact no websites respect the headers. Google's alternative solutions have the upside of being very clear about what they accomplish for your privacy.

Some ad networks already respect DNT. But you are right, many of the big ones don't. While Google can't force most of the ad networks to respect DNT, Google can choose to respect the header, which is currently sent by 5% of Firefox users, when it is received by the Doubleclick and Google Analytics servers.

Instead of supporting a header that millions of consumers are already sending, you instead offer a browser add-on for analytics, and "keep my opt outs" for ad network opt out cookies.

The signal sent by a consumer setting the DNT flag in their browser is just as clear as a consumer installing your analytics opt out plugin, or obtaining the doubleclick opt out cookie.

You could, right now, respect the DNT header as you currently respect your own opt out mechanisms, but you don't.
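Mechanically, honoring the header server-side is close to trivial; a minimal sketch (hypothetical function, not any real ad-server code):

```python
def tracking_allowed(headers):
    """DNT: 1 means the user opted out of tracking; a server that
    respects it would skip setting a tracking cookie or building a
    behavioral profile for this request."""
    return headers.get("DNT") != "1"
```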

I get that Google is not a charity. I get that Do Not Track (or any other mechanism that makes it easy for consumers to avoid tracking) threatens your bottom line. What I would prefer though, is honesty.

Please just admit that you want to make it difficult for consumers to opt out, and that a single, easy to use mechanism built into the browser is something you want to avoid at all costs, even if that means you are ignoring millions of consumers' intent.

1. I wrote Keep My Opt-Outs. It has exactly the practical effect you're looking for with regard to DoubleClick and about 60 other ad networks _right now_. Installing it opts you out of interest based tracking in exactly the way that clicking the DNT box in Mozilla doesn't (yet): https://chrome.google.com/webstore/detail/hhnjdplhmcnkiecamp...

2. Google's participating openly in http://www.w3.org/2011/tracking-protection/ to work out what DNT means in a detailed sense, so that when users send a header, either via a checkbox or via an extension, there's agreement about what the practical impact is. I'm not sure it's reasonable to expect much more than that right now.

1. Keep My Opt-Outs is not a serious privacy-enhancing technology. If it were, it would be built into the browser by default and enabled via a simple, easy-to-discover UI (or, better, enabled by default).

Keep My Opt Outs is largely political propaganda, or if you will, privacy theater. It gives your DC people something to talk about when they testify before Congress or the FTC. It allows them to say, "look, we do offer users the ability to opt out", while knowing that few users will seek it out and turn it on.

Compare, for example, the 5% of Firefox users who have enabled DNT, vs. the 62k users of KMOO. That number of users is pathetic, given how many people use Chrome.

2. Since when does something need to have gone through the standards process for Google to ship it in Chrome?

Consider, for example, what Adam Langley wrote when he added support for DNSSEC certificates to Chrome:

"I'm also going to see how it goes for a while. The most likely outcome is that nobody uses [this feature] and I pull the code out in another year's time." See: http://www.imperialviolet.org/2011/06/16/dnssecchrome.html

Google's approach to security (and in many other areas) is to iterate, quickly, see what works, and if it doesn't, kill it off.

Likewise, Chrome supports an early draft of WebSockets, even though the spec isn't finalized yet. Google added support for the draft spec and will update Chrome to the final spec once it is done. See: http://blog.chromium.org/2010/06/websocket-protocol-updated....

It seems to only be in the area of privacy where Google wants to wait until technologies have gone through the slow standardization process. In the mean time, while you wait for things to work their way through the W3C, Google's ad business continues to build detailed behavioral profiles on Internet users.

The longer the W3C takes, from Google's perspective, the better.

Look - I get that it must be frustrating to be a privacy engineer working on Chrome, when upper management won't let you deploy serious privacy enhancing features to users. I get that it must be embarrassing to work on the only browser that doesn't support Do Not Track (usually, IE is last to the party). What I don't get, is why you tout features like KMOO and Google's involvement in the W3C process as though you expect them to be taken seriously.

Google is not committed to enabling users to easily protect themselves from Google's widespread collection of their private data. To argue otherwise is foolish.

DNT is privacy theater too. It encourages a business model shift and research into data mining that has the same effect as tracking but without an implementation that would violate DNT. DNT does not solve the problem of ubiquitous online tracking. That problem is most likely unsolvable.

On my home network, Google, Facebook, and Twitter compete for the most web tracking next to my ISP's own capabilities. Traffic weighted, Facebook is now the leader in my household.

According to a study Ars Technica covered:

"So what about the rest? Two advertising companies took overt steps to respect the Do Not Track headers sent by browsers like Firefox, Internet Explorer, and Safari, which we just learned is actually a step beyond NAI's baseline requirement. Another 10 companies went even further by stopping the tracking and removing the cookies altogether (and just for interest's sake, it's worth noting that Google falls into this category)."


Ars botched the details in summing up the study. Read the original study here:


When a consumer visits Google's opt out page (where you obtain a doubleclick.net opt out cookie), or gets one via the NAI, the doubleclick.net tracking ID is deleted.

Google does not support the DNT header.

> 2. Blocking 3rd party cookies by default. Apple defaults to blocking 3rd party cookies; Chrome does not. Both are derived from the same WebKit core (yes, I know there is different code now), but when Google decided to create Chrome, they went with a different default than the one that Apple had already used -- one that hasn't led to websites breaking for Apple users.

Chrome's cookie code isn't derived from other code in WebCore; it's implemented in the platform layer. And it's intended to be as compatible as possible with the majority of the web. As for the privacy implications of third-party cookies, I think Michal covered the reality of the situation extremely well: http://lcamtuf.blogspot.com/2010/08/cookies-v-people.html

> Blocking 3rd party cookies by default

As a Firefox user who disables 3rd party cookies, this actually does break some sites. Signing in with your Google account to various blogs becomes impossible. Buying tickets online to the local puppet theater becomes impossible. That sort of thing.

You are a Firefox developer, right? Not a mere user.

Firefox takes a totally different approach to blocking cookies than IE/Chrome/Safari.

Firefox blocks the setting and transmission of cookies by/to 3rd parties.

The other browsers just block the setting of new cookies by 3rd parties.

An example of what this means:

A Safari or Chrome/IE user who has turned on 3rd party cookie blocking visits facebook.com in a 1st party manner (by visiting the facebook home page). He/she then visits CNN, where facebook is present as a 3rd party (via the like button). Even though that user has opted to block 3rd party cookies, their stored facebook cookies will be transmitted to facebook when it acts as a 3rd party, because the cookies were first set as a 1st party.

In comparison, when a Firefox user who has turned on 3rd party cookie blocking visits CNN, facebook has no idea who they are.
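The difference between the two policies can be modeled in a few lines. A toy simulation (not real browser code): both policies stop third parties from setting new cookies, but only the Firefox-style policy also stops existing cookies from being sent to third parties.

```python
class CookieJar:
    """Toy model of third-party cookie blocking policies."""

    def __init__(self, block_third_party_send):
        # True  -> Firefox-style: block setting AND sending to 3rd parties
        # False -> Safari/Chrome/IE-style: block setting only
        self.block_third_party_send = block_third_party_send
        self.cookies = {}  # domain -> cookie value

    def visit(self, page_domain, embedded_domains=()):
        """Load a page. The first party may set a cookie; embedded third
        parties may not (both policies block that). Returns the cookies
        actually transmitted to the embedded third parties."""
        self.cookies.setdefault(page_domain, "id-for-" + page_domain)
        if self.block_third_party_send:
            return {}
        return {d: self.cookies[d] for d in embedded_domains if d in self.cookies}
```

In this model, the user who visits facebook.com first-party and then a page embedding the Like button still transmits their facebook cookie under the Safari/Chrome/IE policy, and transmits nothing under the Firefox policy.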

No one is visiting doubleclick.net as a 1st party, which means if Google turned on the Chrome/Safari style 3rd party cookie blocking, it wouldn't be able to track users for behavioral advertising (interestingly, Facebook could still do so). Were this to happen, I guess Google could always move away from using doubleclick.net and put everything under the google.com domain, which would get around this.

Mozilla's method of 3rd party cookie blocking does indeed cause collateral damage, which AFAIK, is why Mozilla hasn't turned it on.

Google doesn't have the same excuse for allowing 3rd party cookies, since Apple users don't suffer broken sites when they browse. (If the Flash fiasco has shown us anything, it is that websites will bend to Apple's will, and change whatever breaks in order to allow Apple users to visit their sites).

If third party cookies were eliminated entirely, the companies currently using them could get the same effects by using technology that is more than 10 years old and proven to work. When Firefox, Safari, IE, and Chrome change their cookie implementations to block 3rd party cookies, they are wasting millions of development dollars and hours, with no effect on the end result -- ubiquitous tracking will continue.

Regulating the technology is, and always will be, a waste of time. If people don't want the tracking done, they need to outlaw it, regardless of technology implementation.

> You are a Firefox developer, right?

Well, sure, but that's not relevant for purposes of this discussion. ;) The fact that I use the browser and turn off third-party cookies is the relevant part.

> The other browsers just block the setting of new cookies by 3rd parties.

Ah, interesting. That significantly reduces the value of that setting, as you point out.

I suppose we could look into blocking the setting of such cookies by default and only the sending when the pref is flipped. I'll file a bug, thanks!

If you don't like third party cookies, then I encourage you to go to chrome://flags/ and enable the 'Block all third-party cookies' experimental feature.

Simple as that.

Defaults matter. An extensive body of social science research tells us that users stick with the default settings.

In the area of security, Google recognizes the importance of setting safe defaults. See: "99.99% of Chrome users would never change the default settings. (The percentage is not an exaggeration.)"


Having an option to block 3rd party cookies, but then hiding it in the chrome://flags/ menu is essentially worthless for most users.

Privacy enhancing technologies need to be easy to use, preferably, enabled by default.

The DNT header is to be implemented by website developers (and they have nothing to gain by doing it). I understand that Google is better off without implementing this header, but I don't really think it would change anything if Chrome had it.

I agree on the cookies issue, though. Things like "Allow local data to be set (recommended)" make me kind of sad.

As someone working on sites that use the recent browser storage APIs, I very much prefer that to be the recommended setting. It's much saner than cookies, and for some types of apps it makes for a much better user experience as well.

Notice that "allow local data to be set" and "block third party cookies" are not mutually exclusive settings in preferences...which is good, because they can be and tend to be used for very different purposes.

If you want to block all data from being stored, that's fine (and web APIs are great in that they are designed to be individually denied and let a page detect that they have been), but I definitely disagree that first party storage enabled by default (and even "recommended") is lamentable.

I tried to look at what Chrome sends to Google with Wireshark, and there are quite a lot of connections made to Google servers, but it's all encrypted (SSL). So I actually have no way to know what is sent to Google.

Has anyone done a detailed analysis of what info gets sent to Google on a default install of Chrome?

I noticed a while back that even if you disable Chrome's Safe Browsing feature, Google still sends data to their servers. I'm not sure if this is still the case today.


Yep, this still appears to be the case. It establishes a long-lived SSL connection to one of Google's servers a few seconds after startup. If the connection is forcefully killed (e.g. with Sysinternals' TCPView), it won't be re-established, but needing to kill it every time I use Chrome is really annoying.

It is also the reason why I have Chrome updates permanently disabled - I don't trust Chrome enough to let it run anything in the background.

Can you file a bug at http://new.crbug.com/ please? That doesn't sound like the behavior I'd expect.

Sorry, I can't. It wants me to log in with Google ID which I don't and won't have.

Ok. Would you mind dropping me an email (mkwst@chromium.org) with details? I'm happy to file the ticket for you.

Well... :) That would require setting up a dummy email account and whatnot, and that's a bit too much hassle. It'd be far more prudent if you'd remove the Google ID requirement in front of the bug repository. I frankly don't understand why it's needed in the first place, at least for browsing the repo.

You can browse just fine: http://code.google.com/p/chromium/issues/list

I'm pretty sure all the browser bugtrackers require an account to file a bug.

Why the hell am I at -1?

Do enlighten me: how can I file a bug report against Chrome if I don't have, and don't want to create, a Google ID?

> Can you file a bug at http://new.crbug.com/ please? That doesn't sound like the behavior I'd expect.

The behavior you expect out of Chrome is simply not its actual behavior.

Asa Dotzler from Mozilla corrects some of your misconceptions here: http://news.ycombinator.com/item?id=3034450.

EDIT: Oh, I see you're actually on the Chrome team. I learned the stuff that Asa describes in his comment from your privacy policy. It's pretty easy to understand.

That should definitely not be happening. If you're seeing this behavior, please file a bug with full details at the following URL: https://code.google.com/p/chromium/issues/entry?template=Pri...

In 2008 we published an article about this using our own hooking tool: http://blog.nektra.com/main/2008/10/15/the-truth-about-googl... I don't know what the state of this stuff is in 2011.

This hooks the application, so you don't have issues with sniffing SSL.

This is the tool I use to see SSL traffic:



It effectively helps you man-in-the-middle yourself.

According to Chrome help (http://www.google.com/support/chrome/bin/answer.py?hl=en&...), only passwords are encrypted, not bookmarks, autofill data, apps, extensions, history, preferences, and themes. Is this still true?

Edit: in Chrome 15 (beta) just found an option "Encrypt all synced data". Yay!

Confused at why no-encryption would even be an option for this data.

Hmm, this feature seems buggy. I just tried enabling the option and it never finished. Had to kill it.

Filed a bug report: http://crbug.com/97939

Thanks. It's now sitting in the Sync team's queue.

Wow, I was assuming that everything is stored encrypted on Google's servers. It would be cool if someone could clarify this, as the explanation in the Google help is a bit vague IMHO.

To my understanding, they started providing the option to encrypt only very recently. My take is obviously that it's bad for company policy to have this stuff encrypted (any regular person would have built in encryption when making such a sync service). But on the other hand, it would be pretty bad advertising when Firefox has always had full encryption. Providing the option (but not making it mandatory) gives them the proper advertising/evangelizing arguments.

Wow, that was an amazingly skillful non-denial denial. Notice how he never said Chrome does not collect user information. Because, as we all know, it does. I've often wondered how much it collects, by what mechanisms (including via third parties or analytics added by third parties) and to whom the information is available and under what circumstances. The statement from Mike West does not appear to shed any additional light on this; maybe it's answered elsewhere -- does anybody know? I've seen the Google Chrome TOS and wasn't able to get a clear picture from that.

BTW, anyone in Mike's position still may not be privy to everything that is done with the data, as some data collection and sharing may be subject to national security orders that most employees are not allowed to know about. So any statement his team makes about this should be couched with "to the best of our knowledge."

Chrome does not collect user information, unless you explicitly opt-in to sharing aggregated usage information and crash reports with Google. If you do opt-in to these metrics (and you have to opt-in, it's disabled by default), you can opt-out at any time via chrome://settings/advanced#metricsReportingEnabled

The data collected is available for you to peruse at chrome://histograms/

All the data that Google collects is subject to the privacy policies at http://www.google.com/intl/en/privacy/, and Google of course complies with SafeHarbor regulations (http://en.wikipedia.org/wiki/International_Safe_Harbor_Priva...)

"Chrome does not collect user information, unless you explicitly opt-in to sharing aggregated usage information and crash reports with Google"

OR if you don't opt in to anything and you simply type some things into your Chrome Omnibox, which get sent to Google for suggestions; or if you don't opt in to anything and you simply mistype a URL, and that sends data to Google for more helpful error pages; or if you don't opt in to anything and Google's phishing and malware service collects sites you're visiting. Oh, and where you don't opt in to RLZ, but you get it anyway.

As I said on Twitter, I'm not really bothered by these things, but it's disingenuous to pretend they don't exist or to respond to peoples' privacy concerns by insisting that aggregate usage stats is the only information Google collects.

I see (what I imagine to be) both perspectives. If you're a programmer, you think of course suggest is talking to google -- that's pretty much the only way it can work. That doesn't really fall under whatever mysterious third-party bodies collecting your browser history that the OP was trying to FUD up the place with.

On the other hand, most people don't stop to think that search suggest is sending each character to get new suggestions, which is "user information" that is collected (though admittedly the privacy policy does say the logs are anonymized within 24 hours). So it really should be in the list of information that Chrome collects, even if it feels self evident to some.

> As I said on Twitter, I'm not really bothered by these things

Well I hope not, considering if you replaced parts of your comment:

"if you don't opt in to anything and you simply type some things into your [Firefox search bar] which get sent to Google for suggestions, or if you don't opt in to anything and you simply [type a malformed] URL [into the AwesomeBar] and that sends data to Google for [a search page], or if you don't opt in to anything and [Firefox queries of] Google's phishing and malware service collects sites you're visiting"

It still applies. And Mozilla essentially sells that information by selecting a third-party default based on some non-transparent bid process :P

Personally, I'm just sick of the FUD, especially in comments on an article asking to stop the insinuations and actually list problems so they can be fixed. There are and probably always will be defaults some people disagree with (and feel are doing users a disservice since users rarely change defaults), but there is a world of difference between that and "I don't need to give evidence, I can just feel it. We all know Chrome is etc etc"

There is no bid process. Mozilla sets the default search engine to whatever search engine provides the best search results for its users, as far as that can be determined.

This is why the search engine is different in different locales; for example Yandex has way better Russian search results than Google, and hence is the default search engine in the Russian localization of Firefox.

Mozilla makes a 9 digit income in referral fees from setting search engine defaults, almost exclusively from Google.

At least they no longer solicit donations, but for them to pretend they're viable without that omg privacy violating teat to suckle at is just gauche.

The major search engine providers all have referral fee programs.

So while Mozilla does make money from those, it would do that no matter which of the major search engines it defaulted to, no?

And note that there is a reason the search field is not merged with the url bar in Firefox. Precisely because it would be a privacy violation.

Question for mikewest:

Can Google do anything to help make browsers appear to be less unique, and thus less trackable?

I'm talking about http://panopticlick.eff.org/

I'd much rather find a technical solution to that than a political non-solution.

Fingerprinting is a problem, and it's difficult to address. There's been some discussion around mechanisms for disabling features to make the browser signature less unique, but it's a very tough problem.

Take a look at https://trac.webkit.org/wiki/Fingerprinting for some discussion around what would be required. It's very, very nontrivial.

OTOH, you need a browser signature if you want to avoid CSRF attacks.

No, to reliably protect your application from cross site request forgery attacks you usually use auth tokens in the request.

So even if there might be a browser-signature based solution for CSRF protection, there is a very solid alternative, which I think is the best practice anyway.
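For reference, the token approach ties an unguessable value to the user's session, something a cross-site attacker can't read. A minimal sketch (hypothetical names, using an HMAC over the session ID):

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # server-side key; hypothetical setup

def issue_token(session_id):
    """Embed this token in forms served to the session's user."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def check_token(session_id, token):
    """A forged cross-site request can't supply the right token, because
    the attacker's page can't read the victim's form contents."""
    return hmac.compare_digest(issue_token(session_id), token)
```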

Enabling click-to-play for plugins in Chrome is already possible and makes you much less trackable. You will get many fewer bits of identifying information in Panopticlick, because your fonts and some other things can't be read out without Flash or Java.

IIRC click-to-play doesn't prevent detection of the plugin; it just prevents it from initializing. And you should also be able to get at fonts by using CSS, SVG, or canvas just to name a few.

As for the larger question, I really don't think there's any way of preventing sites from uniquely fingerprinting a given browser installation. There are just so many places where fingerprints leak through (and the behavior is relied on) that I'd expect it would take a massive overhaul of the web as we know it. Although, I'm a security guy not a privacy guy, so maybe I'm just too pessimistic.

You sure about that?

I already do click-to-play for plugins in Chrome, and it doesn't seem to help much. According to Panopticlick, there are 19.75 bits of data in my Browser Plugin Details, and for the "value" it describes all the plugins I have enabled.

Also with click-to-play enabled, Panopticlick can see my system fonts (20.75+ bits of data, one in 1,769,122 browsers has this value). Apparently Panopticlick is not using one of my plugins to get that data... I haven't whitelisted eff.org or otherwise enabled plugins there.
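For what it's worth, the "bits" Panopticlick reports are just the surprisal of a trait: log2 of how rare it is among observed browsers. The figures above can be checked directly:

```python
import math

def surprisal_bits(one_in_n):
    """Identifying information, in bits, carried by a trait that is
    shared by one in every `one_in_n` observed browsers."""
    return math.log2(one_in_n)

# A font list seen in one in 1,769,122 browsers carries about 20.75 bits;
# roughly 33 bits is enough to single out one person on Earth.
```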

Looks like Panopticlick is using JavaScript/CSS font detection methods: http://www.lalit.org/lab/javascript-css-font-detect/

Hmm... Panopticlick reports "No Flash or Java fonts detected" when I try it with IE9 on the same system. Is IE9 doing something to block Javascript/CSS detection of those fonts, or does Panopticlick have a bug with IE9 or what? Looks like that method worked for IE6/7...

Whoops, I have to take back what I wrote above.

Somehow my Chrome had lost its click-to-play setting. I don't remember unsetting that... hrm.

Anyway, with click-to-play enabled you are correct: Panopticlick cannot see my fonts. Sorry about the confusion.

However, with click-to-play definitely enabled, Panopticlick can still see my Browser Plugin Details.

Ah, the old "no no, just trust us" defense.

This is just a thing Chrome has to live with, it's a browser developed by a company that makes money selling user behavior to advertisers. You'd have to be stupid not to think this is a possibility.

Must be frustrating to the developers who know what it does and doesn't do though.

The trouble with privacy comments like this blog post is always the same: no matter what the current situation is or how good the intentions of the person making the comment may be, unless they are an executive with the authority to legally bind the company in question to a privacy policy that has real repercussions if subsequently violated, in the end anyone can still be screwed over on the whim of whoever has the data.

In this case, that "whoever has the data" has been publicly dismissive of fundamental privacy concerns up to and including CEO level, and has a business model built around extracting as much value as possible from that private data without regard for the privacy concerns of any individual.

We put the source out there and try to be as transparent as we reasonably can. In the end though, people make their own decisions.

Open source != trustworthiness.

This is a very tough cookie to crumble. The counter-argument to "we are open source" is that the binaries can potentially be assembled from an altered source code. Ideally, the binaries should be assembled by multiple independent "build points" and compared against vendor's version. There was a secure smartphone OS vendor (the name escapes me ATM, it was several years ago) and they did just that - an open source project with audited build system - and it was a major hassle by the looks of it.

The only sure way to deal with the trust issue is to not have a conflict of interest in the first place, which seems to be nigh impossible with Chrome.

One problem is that Chrome is not open source - Chromium is. But 99% of people use Chrome, and there is no way to build Chrome from source to be sure what code is running.

We have to take Google's word for it that Chrome is identical to Chromium as regards privacy, and as others stated, Google's business interest is clearly to track user information, not to respect their privacy.

I've already explained why this framing is incorrect: http://news.ycombinator.com/item?id=3034628

So, go ahead and check out a release branch, set the "Official" build flag yourself, wait anywhere from 2-8 hours for the binaries to get built, and verify it against the bits we ship.

Thanks for the information, I have never seen this stated officially anywhere.

So one can build Chromium with "Official", then add some DLLs (Flash, etc.), and get something 100% identical to Chrome?

Edit: And is there an official statement of this somewhere?

I went through it, and there's still some ugly hackery involved on Windows. These are technical issues (e.g. how ffmpeg is compiled via MinGW), but they make it complicated to generate a non-branded, pseudo-official build. That said, all the pieces you need (minus the closed-source plugins) appear to be in the repository and can be built and compared against an official release. It could definitely be made easier, but that would require some non-trivial engineering.

Chrome source isn't out there AFAIK. Chromium source is. Not a small difference.

For whatever reason, Chrome does seem to make life hard for privacy extensions like Ghostery:

"As Chrome's resource blocking API is not yet comprehensive, some elements may execute." - http://www.ghostery.com/download

The main dev on the WebRequest API sits right behind me, and is making steady progress. It's the first synchronous extension API that interacts with the network stack, and it's a nontrivial effort to get it running. I can assure you, however, that making life hard for privacy extensions is the exact opposite of what we're being paid to do.

Look at the progress on privacy-related APIs over the last year: WebRequest is coming along nicely, WebNavigation and ContentSettings are feature complete and in the final stages of polish, and Proxy went out to stable in Chrome 13. Privacy and Clear just landed in experimental, and we're iterating on them rapidly.

(Details on the state of each are available at http://code.google.com/chrome/extensions/trunk/experimental....)

Is it possible to restrict requests to the host in the URL I'm visiting? XSS, DNT, etc. become much less of an issue when you can block all external resources requested by a page.

That will be possible with the WebRequest API, yes: http://code.google.com/chrome/extensions/trunk/experimental.... For example, you'd be able to intercept and block network requests from a particular tab that didn't hit the same domain.
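The decision itself is a one-liner; the hard part is the extension-API plumbing that lets it run synchronously on every request. Roughly (a hypothetical helper, not the actual API surface):

```python
from urllib.parse import urlparse

def allow_request(page_url, request_url):
    """Allow a subresource request only when it targets the same host
    as the page the user is visiting; block everything third-party."""
    return urlparse(page_url).hostname == urlparse(request_url).hostname
```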

The API is in experimental now, so download a dev release of Chrome/Chromium, enable experimental APIs via `about:flags`, and start filing bug reports. :)

FWIW, this is already possible with Firefox by using the RequestPolicy addon.

Glad to hear it. Chrome is great in every other way.

Engineers are actively working on APIs for this, but it's quite a bit more complicated than it may seem at first. Low-level capabilities like these need to be implemented such that they don't conflict with our extension permission model and general security posture. For instance, we don't want an extension with the WebRequest API to be able to prevent you from uninstalling it, or to manipulate other extensions and internal browser configuration.

Star this issue so you can get updates as it progresses through the release cycle: http://code.google.com/p/chromium/issues/detail?id=60101

At the end of the day Google is really not the company you have to be worried about - it's the ad networks. They are the ones implementing zombie cookies using the 10 or so methods of storage, and the ones that directly sell your information to even seedier companies. I used to work for one, so I know what goes on. This will always be a dance between smart developers thwarting tracking efforts and smart developers coming up with new tracking methods.

I really don't understand the extreme sentiment that your info via cookies is the worst form of privacy breach there is. Why would you not be more concerned with the companies that charge $.50 per call to access an API that can fetch information on anyone who fills out a form ("Oh, I never knew my neighbor only makes $48k a year")? I guarantee you these companies know more about you than Google, Twitter and Facebook (unless you post every thought ever, of course). Case in point: they knew my coworker's wife was, at the time, 4 months pregnant. Unless you can derive such information from searches (doubtful, and completely not worth the effort), this information obviously came from elsewhere. The doctor's office, for instance.

"At the end of the day Google is really not the company you have to be worried about - it's the ad networks."

Google is the largest ad network on the Web.

I'm pretty sure Chrome sends URLs to Google at least for indexing purposes. I've put up random pages on my websites, not linked to them anywhere at all but visited them in Chrome and then BAM - indexed soon after in search.

Chrome does not arbitrarily send URLs to Google. We go out of our way to avoid doing that, actually.

Look at the implementation of SafeBrowsing, for instance, which does some clever work with hashes to ensure that Google never knows exactly what URL you visited that triggered the warning. It would have been _much_ simpler to just send the URLs, I assure you.

Chrome, or Chromium? One is open source, one really isn't. The set of things that Chrome does is certainly based on Chromium, but Google certainly has the freedom to add some very useful tracking behavior... if they want to.

On one, we have source. On the other, we just have the quote "We go out of our way to avoid doing that, actually." What other evidence?

Given that I'm responding to "I'm pretty sure Chrome sends URLs to Google at least for indexing purposes," let's first agree that there's been no evidence provided that I can respond to. :)

I mean both Chrome and Chromium. The source is there, we develop in the open, and I'm not sure what additional evidence I can provide to you to prove a negative. Wireshark?

Well, the source for Chrome is not open (edit: correction acknowledged - I thought only Chromium was open), though if you are a Chrome developer confidently saying this is impossible then I would give merit to that. My experiences were non-scientific but still gave me enough suspicion to write this. I may try a more rigorous/careful/documented experiment sometime in the future.

The source to Chrome is definitely open and you can compile an effectively identical version yourself (minus the branding). There are some closed-source plugins (e.g. PDF and Flash) that are shipped as dynamic libraries. However, you could disable them (via about:plugins) or simply delete the corresponding binaries from a Chrome distribution, and everything remaining would be open source.

When you say "effectively identical" do you mean all the preference defaults, all the Google service connection points, etc? Could I do a build from the Chrome source and get a hash match with the release build from Google?

When I had this happen, or when I heard this happen, there was always a reason that was far less sinister than initially thought.

* Public referrer logs created a backlink

* Somebody (else) published the URL

* Somebody (else) shared the URL

* You pinged the URL to search engines or other services.

* The URL appeared in your RSS feed

* The URL appeared in your sitemap

* Your pages' URLs are easily guessable (/item.php?id=1007, /item.php?id=1008) and get traversed by a search engine.

And more recently, something less innocuous: You simply added a Google +1 button to your pages.

  When you add the +1 button to a page, Google assumes that
  you want that page to be publicly available and visible 
  in Google Search results. As a result, we may fetch and 
  show that page even if it is disallowed in robots.txt.

Try putting up new pages, only visit them in non-Chrome browsers, and see if they still get indexed. If they do, then it's not Chrome.

And don't forget to disable Safe Browsing (which pings Google), e.g. in Firefox the "block known blah and bleh sites" setting.

Last time I checked, Safe Browsing wasn't leaking URLs; hashes were sent around. Has this changed?

What did you visit after hitting those pages? There is also "referer"...

nope... no outbound links

IIRC, it has components of Google Toolbar built in. If you accessed those pages with a browser with the Google Toolbar you'd have experienced the same.

You recall incorrectly. Chrome does not send the URLs of pages you visit to Google for indexing, or for any other reason.

Please see the privacy policy. It answers these questions quite clearly:

> When you type URLs or queries in the address bar, the letters you type are sent to your default search engine so the Suggest feature can automatically recommend terms or URLs you may be looking for. If you choose Google as your search engine, Google Chrome will contact Google when it starts so as to determine the best local address to send search queries. If you choose to share usage statistics with Google and you accept a suggested query or URL, Google Chrome will send that information to Google as well. You can disable this feature as explained here.


> If you navigate to a URL that does not exist, Google Chrome may send the URL to Google so we can help you find the URL you were looking for. You can disable this feature as explained here.


> If you enable the optional AutoFill feature, which automatically completes web forms for you, Google Chrome will send Google limited information about the structure of pages that have web forms and information such as the arrangement of the form so that we can improve our AutoFill service for that page.

(end excerpts)

Could you give us more information about the AutoFill feature? Does it send the URL along with the listed information? Does it do this for every URL that contains a web form?

I'm hoping you're offering that as an FYI, rather than as a justification... because it's not a very good justification.

Why not just use Chromium?

The same thing also applies to Android.

Not entirely sure why I'm being downvoted, but the same privacy concerns should apply to Android as well as Chrome.

I thought it was general knowledge that the closed-source Chrome sends (some) data back to Google while the open-source Chromium does not.

Disclosure: I use Chrome (default) and IE9 (whenever Chrome fails).

Usage statistics and crash reporting are strictly opt-in (and the default at install is opted out). There's also sync, but you must explicitly enable the feature and log in.

That leaves five other places where data is sent back to Google for things like search suggestions and malware detection. You can find an explanation of those features and instructions on disabling them here: http://www.google.com/support/chrome/bin/answer.py?answer=11...

Chromium contains the RLZ identifier.

The RLZ identifier is present only if Chrome is installed through some sort of third-party deal like a system or software bundle. If you install the stock Chrome from Google it's not present. You can find a good overview of RLZ here: http://blog.chromium.org/2010/06/in-open-for-rlz.html

If you don't like third party cookies then I encourage you to go to chrome://flags/ and Enable the 'Block all third-party cookies' experimental feature.

Simple as that.

Interesting. Thanks
