Google dragged to UK watchdog over Chrome's upcoming IP address cloaking (theregister.com)
107 points by Beggers1960 6 months ago | 62 comments



Combined with Privacy Sandbox [0], this means the browser can track you directly without cookies, but websites can't track you via IP address. So it has the effect of centralising the tracking to Google only.

I haven't used Chrome since this rolled out, because I didn't see a way to object to the new tracking. This is what the linked article [0] says:

> It’s unclear if toggling these features off will stop Chrome from collecting these data altogether, or if it just won’t share the data with advertisers.

[0] https://theconversation.com/google-chrome-just-rolled-out-a-...

[1] https://developer.chrome.com/en/blog/shipping-privacy-sandbo...


Depending on the user’s aims, that could still be a net win… limiting tracking to a single party versus N parties. Obviously not as ideal as no tracking at all, though.


Normalizing VPN/proxy use is a fantastic development for the entire market. Sure, I can see a future where websites only allow traffic from direct (unproxied) connections, the Apple proxy, or the Google proxy. But I can just as easily see a future where many more "normie"-appearing providers spring up and website managers move on to some other feel-good checkbox besides IP discrimination.


The particular issue here is the same complaint many people have about Cloudflare: you're concentrating a massive number of people's traffic with a single provider. Hence governments now have a one-stop shop for all the information they need.


That's assuming you can trust the single party not to pick winners to share data with and losers to keep out.

With Apple we know they have a bunch of revenue streams based on selling actual products to users, so there's much less of a hazard there. With Google... who knows what they'll do to juice their next earnings report?


> Depending on the user’s aims, that could still be a net win… limiting tracking to a single party versus N parties.

It's actually worse in every way regardless of the user's aims.

Having individual sites track you across their affiliates is much better than having a single party track you across everything for commercialisation purposes.

TBH, the tech community asked for this. The support for Google Chrome over Firefox is driven primarily by technical folk.

The majority of people really don't give a damn what browser they use.


It makes that one party much more powerful and difficult to deal with in the future.

Google is already at the point of too big to fail.


It's not meaningfully different. Nobody actually wants the data. They want to do stuff with the data and take actions based on that data. Google will collect all the info and then process it any way the first party asks. And the first party will still take the same action whether Google or they themselves processed it, because they still get that actionable insight. That actionable insight being actioned is what's detrimental to you.


Depending on the user’s aims, Standard Oil's monopoly could still be a net win.

- Some HN commenter in 1890s (probably)


>So it has the effect of centralising the tracking to google only.

I thought Privacy Sandbox could be used by any website, just the same as Google can use it. Am I wrong?

Disclosure: I work at Google, but as can be seen from my comment, I don't have much knowledge of Privacy Sandbox.


> The Google-run proxy can observe the user's IP address but not the websites being visited and the third-party proxy can see the web servers being visited but not the IP address of the visitor.

Google likely already has the user's IP from sync, Safe Browsing, etc. If Google's proxy doesn't know what sites are visited, it doesn't sound like this increases Google's access to any user data.


Indeed, that was my point: it doesn't hurt Google, just their competitors. Thus it concentrates power and increases their monopoly grip on the web.


> Combined with Privacy Sandbox [0], this means the browser can track you directly without cookies, but websites can't track you via IP address. So it has the effect of centralising the tracking to google only

Definitely not the actions of an org trying to exert monopoly power.


Certificate transparency has the same problem. It has the side effect of sending every domain you visit to the certificate log server. An advertiser's wet dream. If you are using Chrome, this is Google. Now, if you are using Chrome I always assumed it was sending everything you viewed to Google anyway. So I guess this is... OK, or at least not worse than the status quo.

https://en.wikipedia.org/wiki/Certificate_Transparency


No, it absolutely does not. I think you're confusing CT with OCSP, which is the browser checking to see if a certificate has been revoked. OCSP died a death not just because it was incredibly slow and unreliable (CAs' OCSP servers fell over often enough that OCSP had to fail open), but more critically because it was an incredibly invasive privacy problem.

CT does not require that; avoiding it was an explicit design goal of CT. To be trusted by a browser, a CA has to publish every certificate it issues to multiple CT logs. When the certificate is published to a CT log, the log provides a signed confirmation, and that is included in the final certificate.

Every browser has a set of CT logs that it trusts. When it performs the trust evaluation on a certificate, it verifies that the certificate is correctly signed by the CA, then goes through the included CT annotations, finds the ones from logs the browser trusts, and verifies that those annotations are signed with the respective log's public key.

At no point does the browser perform any other network requests (again, a problem with OCSP was that the additional network access during TLS handshakes was incredibly slow and unreliable; if CT required this it would mean _more_ fallible network loads). That makes CT faster, and resolves the privacy hole OCSP forced.
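
Roughly, the verification step looks like this. A minimal sketch in Python, assuming the SCTs have already been parsed out of the certificate and that the log uses ECDSA P-256; the `sct` fields and the `TRUSTED_LOGS` map are hypothetical stand-ins for the structures a real client builds per RFC 6962, not Chrome's actual code:

    # Sketch: offline SCT verification, no network access during the handshake.
    # TRUSTED_LOGS stands in for the log list shipped with the browser.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    TRUSTED_LOGS = {}  # log_id (bytes) -> EllipticCurvePublicKey

    def sct_is_valid(sct, signed_data: bytes) -> bool:
        """Is this SCT from a log we trust, and is its signature good?"""
        log_key = TRUSTED_LOGS.get(sct.log_id)
        if log_key is None:
            return False  # embedded SCT from a log this browser doesn't trust
        try:
            log_key.verify(sct.signature, signed_data, ec.ECDSA(hashes.SHA256()))
            return True
        except InvalidSignature:
            return False

    def certificate_satisfies_ct(scts, signed_data: bytes, required: int = 2) -> bool:
        # Policy check: enough valid SCTs from distinct trusted logs.
        valid_logs = {s.log_id for s in scts if sct_is_valid(s, signed_data)}
        return len(valid_logs) >= required

The point is that everything needed for the check ships with the browser or sits inside the certificate itself.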


What?

> It has the side effect of sending every domain you visit to the certificate log server.

No? It means that if a certificate is to be trusted, it needs to enter a public, append-only log server.

It does not "send every domain you visit to CT." The certificate simply has to be in a CT log before Chrome trusts it.

CT logs are public; you can (and many do) make archives of them.

The idea is simple, if a website is to be _publicly trusted_, it needs to also be publicly logged.

You have heavily misunderstood what CT does, and the paradigm of control around it.


To be charitable, he could be talking about the implementation of SCT auditing in Chrome, but it has never uploaded all the sites you visit.

    There are three possible states:

    No Safe Browsing protections -> no SCT auditing
    Default Safe Browsing protections -> SCT auditing logic selects a small proportion of TLS connections and performs a k-anonymous lookup on an SCT. If that privacy-preserving SCT lookup reveals that the SCT is not known to Google but should be, the client uploads the certificate, SCTs, and hostname to Google (but no other information).
    Enhanced Safe Browsing protections -> SCT auditing logic selects a small proportion of TLS connections and uploads the certificate, SCTs, and hostname to Google (but no other information).
https://groups.google.com/a/chromium.org/g/ct-policy/c/Fddjj...
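
(For anyone wondering what a "k-anonymous lookup" means in practice, here is a rough sketch of the general hash-prefix idea, similar in spirit to Safe Browsing lookups. The function name, prefix length, and RPC are made up for illustration, not Chrome's actual implementation.)

    # Sketch of a k-anonymous membership check: the client reveals only a short
    # hash prefix, the server returns every full hash sharing that prefix, and
    # the final comparison happens locally. Names and endpoint are hypothetical.
    import hashlib

    PREFIX_BYTES = 3  # short enough that many unrelated SCTs share each prefix

    def sct_known_to_auditor(sct_leaf: bytes, fetch_hashes_with_prefix) -> bool:
        full_hash = hashlib.sha256(sct_leaf).digest()
        prefix = full_hash[:PREFIX_BYTES]
        # The server only ever learns the prefix, which maps to many SCTs.
        candidates = fetch_hashes_with_prefix(prefix)  # hypothetical RPC
        return full_hash in candidates

Only if the SCT turns out to be unknown when it should be known does the fuller report described above get uploaded.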


Got it. Just for reference for others, SCT Auditing is different from SCT validation.

I was talking about the validation side. SCT Auditing basically helps ensure that the logs are keeping their promise, by letting the root programs (Chrome and Apple) see what certificates are flowing on the internet and whether the Merkle tree structure is valid.


Yes, but how does your browser trust it? It has to check the cert log. Who has the server for the cert log? For Chrome this is Google. Did you reach a different conclusion from the big table on the Wikipedia page, where it requires "1 SCT from a Google log"?


> Did you reach a different conclusion from the big table on the Wikipedia page, where it requires "1 SCT from a Google log"?

I've worked at both Let's Encrypt and Google Trust Services. CT is something I'm intimately familiar with.

> It has to check the cert log.

No, it does not.

> Yes, but how does your browser trust it?

Your browser ships with a log list: https://source.chromium.org/chromium/chromium/src/+/main:com...

The public key of the log is included there. It's checked to see if the SCT is signed with the correct public key.


Chrome hasn't required "1 SCT from a Google log" since April 15th, 2022.


Uh huh. And if I log what addresses reach out to me to check for the presence of a particular certificate in the logs?

You're not thinking about it hard enough. Your CT logs would effectively have to be implemented in a way where everyone had a locally cached copy of the log to check against. That at least shuts down a browsing-activity leak vector.

If you centralize it, the abuse is a matter of when, not if.


> Uh huh. And if I log what addresses reach out to me to check for the presence of a particular certificate in the logs?

Again, misunderstanding of how it works.

Logs are checked using the SCT in the certificate. It's a signed "promise" from the CT log that the pre-certificate will be appended to the log within a few hours of submission.

The various root programs monitor to ensure that the promise does in fact happen.

Your browser process does *not* reach out to the CT log on its own.


Isn’t this similar to what Apple does with Safari on iPhone, where they can hide your IP address by using iCloud servers as a relay?

Discussed here: https://news.ycombinator.com/item?id=31387019 or https://news.ycombinator.com/item?id=27467798

Why is it good when Apple does it but terrible when it is Google?


Is it because Apple’s motivation is perceived to be selling protection to its hardware customers, whereas Google’s primary motivation is perceived to be gaining a monopoly in the surveillance business?


From other comments it _sounds_ like Google's system is done as a single proxy, which is bizarre to me because it means Google can see every site that is loaded, which even for Google seems on the nose.

Apple's service is explicitly designed to prevent this exact problem. There's a write-up for it on Apple's security site (possibly part of the system security doc?). There are intentionally two layers: the connection from the device to Apple's servers, and then the connection from Apple's servers to Akamai or Cloudflare (or some other CDN). The request to Apple's servers is encrypted to a key from the second-layer CDN so Apple can't read it. That request is forwarded to the CDN, which decrypts it, makes the request, then encrypts the response to the client's key and sends it back to Apple; Apple forwards that encrypted blob on to the originating device, which can then decrypt it.

The end result is that Apple cannot ever see the destination or response, and the backend CDN can't see the device that made the request. That should be the design of _any_ privacy-conscious proxy service (including all the questionable "privacy!" VPNs). That's kind of why I'm surprised by the claim that Google's service is a single layer: it's so blatantly invasive.
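
To make the split concrete, here's a toy sketch of the idea (Python with PyNaCl sealed boxes; the hop names and addresses are made up, and real deployments layer this over MASQUE/OHTTP-style transports rather than bare blobs):

    # Toy model of a two-hop relay: hop 1 learns the client address but only
    # sees an opaque sealed blob; hop 2 can open the blob and see the
    # destination, but only ever hears from hop 1. Heavily simplified.
    from nacl.public import PrivateKey, SealedBox

    hop2_key = PrivateKey.generate()      # second hop's keypair (e.g. the CDN)
    hop2_pub = hop2_key.public_key        # published to clients

    def client_build_request(dest: str, payload: bytes) -> bytes:
        # The client seals (destination + payload) to hop 2's key up front,
        # so hop 1 never gets to see where the request is going.
        return SealedBox(hop2_pub).encrypt(dest.encode() + b"\n" + payload)

    def hop1_forward(client_ip: str, blob: bytes) -> bytes:
        # Hop 1 knows client_ip but holds only ciphertext; it just relays.
        return blob

    def hop2_handle(blob: bytes):
        # Hop 2 recovers the destination, but was never told client_ip.
        dest, payload = SealedBox(hop2_key).decrypt(blob).split(b"\n", 1)
        return dest.decode(), payload

    blob = client_build_request("example.com", b"GET / HTTP/1.1")
    dest, payload = hop2_handle(hop1_forward("198.51.100.7", blob))

The response travels back the same way, encrypted to a client-held key, so the blob is opaque to hop 1 in both directions.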


It’s not a single layer; it is designed the same way Apple's service runs. This is addressed in the article.


As justinclift points out elsewhere in this discussion, the article may have misreported that:

> We are considering using 2 hops for improved privacy. A second proxy would be run by an external CDN, while Google runs the first hop. This ensures that neither proxy can see both the client IP address and the destination.

https://github.com/GoogleChrome/ip-protection#core-requireme...

If they choose to go ahead with the second hop it would be the same as Apple’s approach. But it sounds like this has not been committed to yet.


This is what I was unclear on: I couldn't tell if this was one hop (and so tremendously invasive "privacy"), or two hops through an independent third party (and so actually a privacy feature).


In that case the complaints other commenters are making are simply wrong. There isn't a privacy concern here; I think Google has just burned so much trust that the _assumption_ is now that the goal is tracking.


It is not ok for Apple or Google to do this while at the same time operating an ad business.

If they feel this is in the best interest of the end user, then they should divest of either their ad business or control of the browser. Neither company is willing to do this. This IP move is anticompetitive as it consolidates even more control of the ad ecosystem in a handful of companies. Google’s response that they are placed at the same disadvantage as other third parties is not accurate. Google controls the browser and so has full control to communicate any data between the browser and their servers, bypassing the proxies.

There is only one thing that drives these companies and that is maximizing profits for the benefit of their investors. This objective is fine. However, it is disingenuous for either of these companies to hide behind the defense that they care about the privacy of end users.

If Apple cared about the privacy rights of all humans, why do they share all data belonging to their customers in China with the Chinese government? The only reason is profits. Google also shares all their customers' data with any government that asks.

If there were a thousand companies that each had access to a tiny sliver of a consumers data, we would have a system that naturally protects end user privacy. However, with a few companies controlling the vast majority of the consumer tech landscape, we now have a system where a few for-profit companies are keepers of our data and already sell out when their profits are at stake.


That's an excellent example of the conflict of interest of trying to be an ad company and a proponent for privacy at the same time.


Worse, it’s arguably anti-competitive behavior. Though, truth be told, once I looked at the facts, I would prefer Google to have my data siloed (which itself, yes, is not good) rather than random third-party data collectors that would sell it on data markets.


There's no conflict of interest, it's simply denying the enemy from pillaging big G's harvest (you).


> The Movement for an Open Web (MOW), an organization that has lobbied against Google's Privacy Sandbox initiative by claiming it's harmful to rival internet advertising businesses

They have much more to be annoyed at: VPN companies, The Tor Project, ad blockers, etc.

I use the Google One VPN [0] to cloak my real IP, but only sparingly. For most of my Internet surfing I use a two-hop VPN setup: one VPN router in kill-switch mode, and then I connect to another VPN service on top of that, a sort of fake Tor / Private Relay setup.

I don't funnel all my traffic into Google One's VPN. I like to compartmentalize and not put all my eggs in one basket. Looks like I'll be doing the same when this new Google-owned IP cloaking feature ships.

[0] https://one.google.com/about/vpn


>> The Movement for an Open Web (MOW), an organization that has lobbied against Google's Privacy Sandbox initiative by claiming it's harmful to rival internet advertising businesses

> They have much more to be annoyed at: VPN companies, The Tor Project, ad blockers, etc.

While I'm sure they dislike those, the key word here is rival - the claim is that Google has its own ad business, and is only deploying privacy features in a way that hurts other ad companies.


That seems unnecessary. Why not just forward through an anonymous SSH box?


Wouldn't that mean that all your web traffic is going through a google server?


Yes, but it’s end-to-end-encrypted.

See https://github.com/GoogleChrome/ip-protection/issues/10 for example.
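
As a rough illustration of what "end-to-end encrypted through the proxy" means on the wire (plain HTTP CONNECT here for simplicity; the actual feature proxies over encrypted HTTPS/MASQUE connections, and the proxy host below is a placeholder, not a real Google endpoint):

    # The proxy learns only the CONNECT target (host:port); the request,
    # response, and cookies travel inside a TLS session negotiated end to end
    # with the origin, which the proxy merely relays as opaque bytes.
    import http.client

    conn = http.client.HTTPSConnection("proxy.example.net", 8080)  # placeholder proxy
    conn.set_tunnel("example.com", 443)   # visible to the proxy
    conn.request("GET", "/")              # inside TLS; opaque to the proxy
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))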

Of course, Google can conceivably backdoor Chrome, and then the exfiltration of data wouldn’t be obvious from the client-side traffic.


If we assume Google would backdoor Chrome, the whole IP Protection discussion is irrelevant. They could backdoor it without needing that.


The CA system, and hence HTTPS/TLS, is already backdoored by construction. Any of the root CAs that your browser trusts can man-in-the-middle you, unless you keep track of and pin certificates like Google does. These root CAs are run by random companies and governments around the world. All of them have equal precedence in browsers.
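
Pinning, at its simplest, means hard-coding the hash of the public key you expect and refusing anything else. A rough sketch of an HPKP-style SPKI pin check (real deployments pin against keys in the chain and keep backup pins, which is skipped here; the pinned value is a placeholder):

    # Compute an SPKI pin for a site's leaf certificate and compare it against
    # a hard-coded allowlist. Uses the `cryptography` package.
    import base64, hashlib, ssl
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    PINNED = {"placeholder-base64-sha256-of-expected-SubjectPublicKeyInfo"}

    pem = ssl.get_server_certificate(("example.com", 443))
    cert = x509.load_pem_x509_certificate(pem.encode())
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()
    print("pin matches:", pin in PINNED)  # a rogue CA's cert carries a different key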


Any fake certificates the malicious CA signs will have to be written to a Certificate Transparency log, which would make it easy to notice that the CA has done something like this.


It’s not that easy. They would also need to MITM the actual traffic, or the DNS.


Same with plain http.


Chrome would need to contact Google servers to send the collected data. Such unexpected traffic would be more easily detectable without IP Protection. With IP Protection, Chrome is sending data to the Google proxy all the time, so it would be harder to detect extra data sent that way.


From the third paragraph of the article:

> It's designed to run Chrome browser connections through two proxies, one operated by Google and one operated by a third-party (eg, Cloudflare), so that the true public IP address of the user is obscured, hopefully thwarting attempts to track them around the web using that address.


Reading the front page of the GitHub repo with the Chrome "IP Protection" source code, it has this worrying statement:

    We are considering using 2 hops for improved privacy. A second proxy would be run by
    an external CDN ...
https://github.com/GoogleChrome/ip-protection

"Considering" means "we're thinking about it, but it's in no way final".

Sounds like Google could implement a proxy solution but decide the 2nd hop for improved privacy isn't needed after all.

Or make it optional, defaulting to OFF, etc.


^ This is an important "detail".


In other words, what Apple has been doing for years now.


Wouldn't this be useful as a way to avoid getting blocked or having to do a thousand CAPTCHAs when using Tor? Access the relay via Tor; the relay doesn't know who you are because of Tor, and the website sees you as just another relay user, which is too many ordinary people to block.


Yes, but such proxies will likely require authentication with a Google account, and Google will keep an audit log to prevent "abuse". So at best it works out for maintaining somewhat persistent nyms for well-scoped purposes (that you make with a burner SIM or buy from someone who did), but not as some panacea for Internet privacy.

This is of course assuming that the roadmap for these proxies isn't tied into their push for remote attestation a la "Safety" Net and WEI, in which case your "nym" becomes the entire device.


Yes. It’s like a Googled version of Apple’s Private Relay.


> Wouldn't that mean that all your web traffic is going through a google server?

Yup, just like most people using Android have their text messages routed through Google. Some don't even realize this.


RCS E2EE and Group E2EE are live, which makes this a less dangerous attack surface / a less useful ad data reservoir.

Google Voice users like myself are SOL, but I've never had any pretense that my messages there are private. They're probably stored in plaintext even; there's a lot of 15+ year old GrandCentral cruft underpinning Voice even to this day (speaking only as a user).


We really need to be thinking bigger here, Google included (to the degree there are pro-privacy folks in the org, which there are), in coming up with something that protects a lot more than the IP-based identity of Chrome users: all web traffic.

I mean, maybe even rethinking IP, TCP/IP entirely.


I always thought there was something that could be done with the BitTorrent protocol... I don't know how well it would work, but an Internet that isn't so brittle, where mirrors are commonplace and accepted wisdom, would be nice.

BT isn't anonymous, but a TCP/IP-replacement version could be.


In my opinion this would just be a cat and mouse arms race. Make something new and corporations will race to be the first to add their tracking to it.


The anti-abuse points from their GitHub do not sound convincing to me. There will be high value in farming accounts to either spam or attack. What do you do when the Google proxy is DoSing your service?


I was just watching a video [0] about how Private Relay works under the hood, and it sounds like authentication is somewhat tied to having genuine Apple hardware, as well as requiring an iCloud subscription.

If Google's relay just requires a Google account, there's no doubt that dummy accounts will be used to abuse the service. Apparently Private Relay sends a dynamic config of relay servers to use which they could leverage at any time to unmask you. I'm guessing Google will do it similarly.

[0] https://media.ccc.de/v/camp2023-57214-trustmerelay_investiga...


So Google is doing this why, exactly? Since they know everything about everyone, they now shut out the competition while sending all the IPs to their own proxy, which will log them for themselves, "for your privacy TM"???

I mean, I would like this if it weren't coming from Google.


Welp, Firefox it is then.


Is the proxy service available for user-modified browsers? Otherwise this sounds like WEI with a fake mustache.



