Google's position is that it's okay for a website to collect some amount of data about a user, as long as it doesn't, in total, cross the creepy line.
Everyone else's position is that if the data isn't required to operate, you don't need it. If we accept that the frozen User Agent string will still be served anyway to avoid breaking the legacy web, very little of this proposal adds value, and much of it adds harm. It isn't practical to stop serving the User Agent entirely, so any replacement for the data in it is pointless at its very best. The frozen UA provides enough to determine whether someone is on mobile, which is the only real need for UA strings. And while most browsers are working to reduce the tools websites have to fingerprint users, Google is introducing new ones.
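To illustrate, roughly this much is all a site should need, using either the low-entropy mobile flag or the frozen UA string itself. This is just a sketch assuming a standard browser context, not anything a spec mandates:

```ts
// Sketch: deciding mobile vs. not without any high-entropy data.
// navigator.userAgentData is Chromium-only and may not be in TypeScript's
// DOM typings, hence the cast.
function isProbablyMobile(): boolean {
  const uaData = (navigator as any).userAgentData;
  if (uaData && typeof uaData.mobile === "boolean") {
    return uaData.mobile; // low-entropy hint, exposed by default
  }
  // The "Mobile" token survives even in the frozen/reduced UA string.
  return /Mobi/i.test(navigator.userAgent);
}
```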
So Firefox's position on Privacy Sandbox as a whole is pretty logical: if the data is optional enough that it has to be requested, why offer it at all? The entire premise of Privacy Sandbox is that sites should have access to some amount of information about the user, and the position of every non-Google browser is that they want to give sites as close to no data as possible.
This is the core of the problem with a single company being legally permitted to operate both a web browser and an ad company. Every single browser developer that doesn't own an Ads and Analytics suite is opposed to Privacy Sandbox.
I don't think this categorization is accurate. For example, Apple built https://webkit.org/blog/8943/privacy-preserving-ad-click-att...
> if the data isn't required to operate, you don't need it
This is simple, but it's also wrong. Some counterexamples:
* Learning from implicit feedback: dictation software can operate without learning what corrections people make, or a search engine can operate without learning what links people click on, but the overall quality will be lower. Each individual piece of information isn't required, but the feedback loop allows building a substantially better product.
* Risk-based authentication: you have various ways to identify a user, some of which are more hassle for them than others. A login cookie is lowest friction, asking for a password adds more friction, and email / SMS / OTP verification adds even more. You don't want to ask all users to go through the highest-friction approach on every pageview, but you also don't want to let a fraudster who gets access to someone's cookie jar / leaked password / old device / etc impersonate the user. If you have a small amount of information about the current user's browsing environment, in a way that's hard for a fraudster to imitate, you can offer much lower friction for a given level of security. (Rough sketch after this list.)
* Incremental rollouts: when you make changes to software that operates in complex environments, it can be very difficult to ensure that it operates correctly through testing alone. Incremental rollouts, with telemetry to verify that there are no regressions or that relevant bugs have been fixed, produce better software. You're writing as if your position is Firefox's, but even they collect telemetry by default: https://support.mozilla.org/en-US/kb/telemetry-clientid
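To make the risk-based authentication point concrete, here's a toy sketch. The signal names and thresholds are illustrative assumptions, not anything a real product uses:

```ts
// Toy sketch of risk-based authentication: the more the current environment
// matches what this account has looked like before, the less friction we add.
// Signal names and thresholds are illustrative assumptions.
type Signal = "knownDeviceCookie" | "usualIpRange" | "usualTimeOfDay";
type Challenge = "none" | "password" | "emailOtp";

function chooseChallenge(matchedSignals: Set<Signal>): Challenge {
  if (matchedSignals.size >= 3) return "none";     // everything looks familiar
  if (matchedSignals.size >= 1) return "password"; // partially familiar
  return "emailOtp";                               // nothing recognized: most friction
}

// A returning user on a known device skips extra prompts; a fraudster with
// only a leaked password still hits the high-friction path.
chooseChallenge(new Set<Signal>(["knownDeviceCookie", "usualIpRange", "usualTimeOfDay"])); // "none"
chooseChallenge(new Set<Signal>());                                                        // "emailOtp"
```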
> the position of every non-Google-browser is that they want to give sites as close to no data at all as possible ... Every single browser developer that doesn't own an Ads and Analytics suite is opposed to Privacy Sandbox.
I cited Apple's conversion tracking API above, but another example of this general approach is Microsoft's https://github.com/WICG/privacy-preserving-ads/blob/main/Par... I don't know where you're getting that they're trying for "close to no data at all", as opposed to improving privacy and preventing cross-site tracking?
(Still speaking only for myself)
That sounds cool. How do I opt into it?
Regarding risk-based authentication, I see a lot of value in it, but I think the cost may be too high, and the less robust signals it often uses are a poor metric anyway. I gave an example elsewhere: someone might be using a wired PC and a wireless phone, on two different carriers with vastly different user agents, at the same time.
I think there's some merit in very rough geo-IP-based RBA, but I'm not sure how many other strategies for it I find effective. The fact that Outlook and Gmail seem equally happy to let an account that's never been signed into from outside the United States get logged into from Nigeria seems like low-hanging fruit in the risk-based authentication space. ;)
Do you mean that when you said "every single browser developer that doesn't own an Ads and Analytics suite" above, you meant to exclude nearly all the browser vendors? Google, sure, but also Apple and Microsoft. And then Opera, UC Browser, Brave, DDG, ... I think maybe everyone but Mozilla and Vivaldi has an ads product?
The only technical solution that is IMHO reasonable is to reduce entropy as much as possible, even below any arbitrarily set entropy limit.
Though in the end I think the right way is an outright (law-based) ban on micro-targeting and on collecting anything but strongly, transparently, and decentrally anonymized metrics.
Also, I don't see Google fully following through. For example, one area where Chrome is massively worse than Firefox wrt. entropy is the canvas (at least last time I checked). There are known, reliable ways to strongly hinder canvas fingerprinting, but I don't see Google using them, as that would conflict with Flutter Web rendering animations in the canvas (which inherently has problems and is technically sub-par compared to how the browser could render web animations, and does in the case of Firefox).
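To be clear about the kind of mitigation I mean: roughly the sketch below, i.e. slightly perturbing canvas readbacks so they stop producing a stable fingerprint. This is a simplified illustration of the general technique, not Firefox's or anyone else's actual implementation:

```ts
// Simplified sketch of one known canvas anti-fingerprinting technique:
// perturb pixel readbacks so hash-based canvas fingerprints stop being
// stable, while the rendering stays visually identical. Not any browser's
// actual implementation.
const realGetImageData = CanvasRenderingContext2D.prototype.getImageData;

CanvasRenderingContext2D.prototype.getImageData = function (
  this: CanvasRenderingContext2D,
  sx: number, sy: number, sw: number, sh: number
): ImageData {
  const image = realGetImageData.call(this, sx, sy, sw, sh);
  for (let i = 0; i < image.data.length; i++) {
    // Flip the lowest bit of a small random subset of channel values.
    if (Math.random() < 0.05) image.data[i] ^= 1;
  }
  return image;
};
```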
A. Browsers successfully reduce available entropy to where users cannot reliably be tracked across sites.
The thing is, if you can't get all the way to (A), then by removing bits you're just removing useful functionality and adding work for browser developers and web developers. Fighting fingerprinting is only worth it if you have a serious chance of getting to (A).
If you think (A) is off the table, then I agree a regulatory solution is the best option. Even then, #1, as exemplified by UACH, is still helpful because it makes tracking more visible. If every piece of information you collect requires active work, instead of sites just receiving lots of bits by default, then it's much easier for external organizations to identify excessive collection.
(Though potentially a bit tricky to craft and enforce)
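To illustrate what I mean by active work: with UA Client Hints a site has to explicitly request each high-entropy value, which leaves an observable trail. A rough sketch (the hint names are the standard ones; the API is Chromium-only, so this feature-detects):

```ts
// With the frozen/reduced UA, high-entropy details aren't sent by default;
// a site has to explicitly request each one, which is observable.
// navigator.userAgentData is Chromium-only, hence the cast and the check.
const uaData = (navigator as any).userAgentData;
if (uaData) {
  uaData
    .getHighEntropyValues(["model", "platformVersion", "fullVersionList"])
    .then((values: Record<string, unknown>) => {
      console.log(values); // each requested hint is an explicit, visible ask
    });
}
```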