However, there is one thing I want to stress. The problem is that pervasive tracking and device fingerprinting will go wherever the app functionality is. Currently, even with all of the many, many problems on the web, it's still better to use apps like Facebook via a website than it is to download them as native apps to your phone. The web has better fingerprinting resistance than most native platforms.
It's a really hard problem -- vulnerabilities like Spectre and Meltdown have made it worse. Now we're asking questions like, "is it actually reasonable for applications to have access to high-resolution timers?" People look at the web and say, "oh, this is a web problem." It's an everything problem. If we really want to get rid of pervasive tracking, we now have to think about how high-resolution timers are going to work on desktops.
What the web promised was a VM where anybody (technical or not) could run almost anything, without validating that the code was safe, and the VM would just protect them. We're finding holes in this particular implementation, but I still eventually want that VM as promised.
I don't want to deprive you of the VM as promised, but that seems very hard to implement, and I don't have time to help.
I just want to solve the much easier problem of giving people some way to use the internet to read documents without getting tracked.
By "document" I mean a page of text, images, links to other documents and maybe some other easy-to-implement non-privacy-compromising things.
The web is how almost all documents are made available on the public internet. Most document authors don't even consider or imagine any other way to do it. And the web is a privacy nightmare. That is the problem I'd like to solve.
I felt the need to write this because past discussions on this site of the problem I want to solve have gotten derailed into a discussion of how finally to achieve the vision of "a VM where anybody (technical or not) could run almost anything", which, like I said, strikes me as a much harder nut to crack.
: and without the need for anything as demanding of the user's time or the user's technical skills as you described when you wrote, "I run UMatrix and manually whitelist 3rd-party scripts, I've blocked web features like WebGL and Canvas behind prompts".
It would be very fast, reasonably private, probably a lot nicer to use (at least where documents are concerned), and nobody would use it. I don't necessarily disagree with your goal -- it seems very reasonable to want a document distribution platform that isn't encumbered with JS. But given that news sites can already speed up their pages dramatically by removing JS, and they don't, why would they support this new browser or platform? And without them supporting it, why would users move to it?
We've seen this play out with AMP. AMP is fundamentally flawed, but it did get one thing right: that news sites only care about search engine placement, and they do not care about user experience past that point.
I would (very cautiously) suggest that it might actually be easier to implement a VM that safely runs arbitrary code, than it would be to convince publishers to move to a user-friendly platform that only distributed documents. The only success I've seen in getting publishers to abandon scripting is via platforms like Facebook and Medium -- maybe that's replicable with a new browser or distribution layer on top of the web? I dunno; I think that also might just be a really, really hard problem.
I'd be happy to be proven wrong; I would happily start using a ubiquitous, script-free platform for document distribution.
: We maybe need to just bite the bullet and get rid of asset caching, or at least give asset caches a very short expiration date (<1 day). I guess making them domain-specific could help too.
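For what it's worth, the domain-specific idea can be sketched as partitioning the cache key by top-level site, so a third-party asset cached while visiting one site isn't visible from another. A minimal sketch (all names here are illustrative, not any browser's actual implementation):

```python
class PartitionedCache:
    """Cache keyed by (top-level site, resource URL), so an embedded
    third-party resource cached on one site can't be observed from
    another site and used as a cross-site identifier."""

    def __init__(self):
        self._store = {}

    def get(self, top_level_site, url):
        return self._store.get((top_level_site, url))

    def put(self, top_level_site, url, body):
        self._store[(top_level_site, url)] = body


cache = PartitionedCache()
cache.put("foo.example", "https://tracker.example/pixel.png", b"pixel-bytes")

# Same third-party URL, different top-level site: cache miss, so the
# cached entry leaks nothing across sites.
assert cache.get("foo.example", "https://tracker.example/pixel.png") is not None
assert cache.get("bar.example", "https://tracker.example/pixel.png") is None
```

The cost, of course, is that shared assets get downloaded once per site instead of once per browser.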
: IP addresses are a huge issue, and I don't think the tech community talks about them enough. Tor is not really scaling. The best solution for ordinary consumers is a VPN, and VPNs have a lot of pretty obvious problems.
: My guess would be, someone would need to make publishing much easier than building a website (i.e., Medium), maybe by adopting the DAT protocol and just offering free hosting for everyone. That runs into the same IP address problems (DAT and IPFS are not privacy friendly), but there's progress being made in that area. Or someone would need to find a way to get users to abandon the web en masse and move to the new platform.
Would the VM that safely runs arbitrary code render existing web pages or would it be necessary to persuade publishers to adopt it?
The most promising progress (I think) is in building tracking protection that is undetectable. For example, you can put a permission prompt in front of someone's location and block off the API if the user clicked "no", but you could also just lie about their location, which means you'd still be compatible with most existing pages, and publishers wouldn't be able to strong-arm users into turning off the setting. This is how Firefox currently handles high-resolution timers. You can request them, they'll just lie to you sometimes.
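The timer trick can be sketched in a few lines: clamp every timestamp to a coarse granularity (the 100 ms below is illustrative; the real precision is configurable) so that subtracting two reads can't recover a high-resolution interval. This is a conceptual sketch, not Firefox's actual algorithm:

```python
def reduce_precision(real_ms, granularity_ms=100, jitter_ms=0.0):
    """Clamp a millisecond timestamp to a coarse granularity, optionally
    adding jitter, so a page still gets a working timer -- just one too
    blunt to use for cache-timing or Spectre-style attacks."""
    clamped = real_ms - (real_ms % granularity_ms)
    return clamped + jitter_ms


# Two events 3 ms apart become indistinguishable to the page:
assert reduce_precision(1234567.0) == reduce_precision(1234570.0)
```

The nice property is exactly the one described above: the API still answers, so pages keep working, and a site can't easily tell it's being lied to.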
Again, it's yet to be seen whether that kind of stuff will work.
If that happens, WASM might be an opportunity to rethink web permissions. Might.
If neither of those approaches work, then anything is fair game. At that point, we might as well try to make a document-only web, or migrate everyone to a new platform. I think that will be very difficult though.
I don't understand what problem(s) asset caching is currently causing. (Although I am a programmer, I'm not a web developer.)
Is the asset caching done by the browser a major part of the problem you refer to?
Is the caching done by content delivery networks a major part of the problem?
Are there other major parts of the problem with asset caching?
On the browser side, by serving a unique set of resources to each user, I can identify them in future visits. Roughly speaking, the attack works by inserting a combination of unique asset URLs and repeated asset URLs, and then logging on the server which of those URLs the browser tries to fetch. By responding to fetch requests with 404s or bad assets, the server can avoid having your browser overwrite the unique combination of URLs it has cached, which allows for more persistent tracking.
These caches also work across domains -- that means that if "foo" requests asset "googlepixel/my_id", and "bar" also requests asset "googlepixel/my_id", both will get served from the same cache. This means that caches can be used to track browsers across multiple websites.
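To make the attack above concrete, here's a toy sketch of the re-identification step. The tracker embeds a set of "probe" asset URLs; on the first visit it lets the browser cache a unique subset of them. On a return visit, cached probes are not re-requested, so the pattern of missing requests spells the identifier back out. (Everything here is illustrative; real attacks use far more probes.)

```python
# Eight probes encode an 8-bit visitor ID, one probe per bit.
PROBES = [f"/probe/{i}.png" for i in range(8)]


def assign_identity(user_id):
    """Choose which probes this browser will end up caching on its
    first visit: one cached probe per 1-bit of the ID."""
    bits = format(user_id, "08b")
    return {p for p, bit in zip(PROBES, bits) if bit == "1"}


def reidentify(requested_probes):
    """On a return visit, the probes the browser did NOT request are
    the cached ones; read the ID back out of that subset."""
    cached = set(PROBES) - set(requested_probes)
    bits = "".join("1" if p in cached else "0" for p in PROBES)
    return int(bits, 2)


cached = assign_identity(0b10110010)
# Return visit: the browser only fetches probes it doesn't have cached.
requested = [p for p in PROBES if p not in cached]
assert reidentify(requested) == 0b10110010
```

This is also why short cache lifetimes and per-domain partitioning help: both shrink the window in which the cached bit pattern survives.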
On the content-delivery side, the cached images are likely unique to the website you're visiting, and by downloading them, you're letting the 3rd party know that your IP address visited the page they're associated with. In a centralized web this is a smaller concern, though it still allows Cloudflare/Google to know a lot more about what pages your IP is visiting.
In a distributed web like IPFS this is a bigger problem, because now you're connecting to completely random people online to get your assets downloaded. Even worse, you're asking for those images/documents not by going to one specific server and saying, "hey, server X said you had asset Y", but by announcing to the entire network, "hey, I want asset Y, does anyone have it?"
I have to agree with hollerith here and say this is not at all what was promised. Instead, what was promised, IMHO, was that there are linked, passive pages with an expectation of UI transparency with respect to when network accesses occur (in response to clicking on links or submitting forms), and this notion is even honored in the HTML 5 spec to some degree. A universal VM/runtime is desired by developers who want to sell services rather than software, or don't want to bother with deployment procedures in app stores, or want portable code across platforms, or for other plausible reasons, but is a non-priority next to the web's original purpose. OSs are far superior platforms for general-purpose apps, and turning browsers into platforms is only helping the (few) browser vendors left, but trampling by design on security, privacy, simplicity, and power efficiency.
Currently, none exist that I'm aware of. Phones aren't doing well on that front, and we haven't finished moving to Wayland yet, so that mess still exists. X11 is heaven for anyone who wants to fingerprint a device. And we still have to figure out whether or not we're going to allow high-resolution timers or raw access to the GPU, which is itself a pretty big fingerprinting target.
On mobile phones, the closest thing I have to a good adblocker is AFWall+, which doesn't work on iOS, and only blocks via the built-in IP-table, which isn't good enough to make me feel safe running apps like Facebook or Twitter. And most mainstream Linux distros (with a few exceptions like Qubes OS) are not shipping with the kind of process isolation that's necessary to guard against malware.
I guess macOS is making some progress in this area at least? But for the most part, none of our computer environments were designed to run untrusted code -- Linux in particular was primarily designed to protect you from other users. The prevailing advice was, "just don't download malware", which doesn't reflect how people use computers today.
I want to stress -- there could be a solution to this. We could make a user-friendly native platform that replaced the web. But I don't think anyone has made one yet.
Could you say more about this? Previously, I asked you for technical information, but here I'm after your aspirations and maybe your values. What is so great about a state of affairs in which the average consumer can decide where on the internet to go today and at each stop (e.g., web page) along the way, code written by the owner of the web page is sent to the consumer's computer and is transparently run without the user's having to install anything?
My guess is that you dream of using the internet to create compelling experiences that move many (millions?) of people, and you consider documents consisting of text, images and links to other documents woefully inadequate for that purpose, but let's hear from you.
(BTW, I don't care about using the internet to consume compelling or moving experiences -- or more precisely ordinary text documents, images, audios and videos are the only types of compelling / moving experiences that I use the internet to consume, and I have no need or desire for more than that.)
Someone might as well ask what's so great about general purpose computers, or Open Source. The web is a way to share documents, but even from its origin it was also a way to distribute software packages. There are a couple of things that also make it a reasonably decent software runtime, but more on that later.
My goal is to make it easier for ordinary people to share software and to share software modifications -- that means fewer gatekeepers (i.e., app stores), less complicated publishing (software should be as portable as possible), and less complicated installation. The removal of those barriers means that software is inherently less trustworthy -- I want ordinary people to be able to share code, but I also don't trust ordinary people that much.
On Linux, our thought process around software has been that distro packagers will read source code and hand-pick which packages are safe. Users can bypass their package managers, but for the most part shouldn't, unless they feel OK reading the source code and evaluating whether the author is trustworthy. This doesn't really scale (see Android), it requires a ton of volunteer work, it makes developing and distributing software much harder, and it puts burdens on end users that are unrealistic.
If we want a world where anyone can write software and anyone can run it, we have to make arbitrary code safer. It's never going to be 100% safe, but a user should feel comfortable downloading and installing an arbitrary app. When I say that currently the web is the best VM, this is what I'm referring to.
Across almost every axis, it is currently safer to visit a random website than it is to download a random app to your phone or desktop computer. And when I talk to people about hardening phone security, they're all caught up on moderation and approval processes, which are actively the wrong direction to go if you think of computers as general-purpose, democratizing devices.
From this point of view, it's less that the web should be a software runtime, and more that making software accessible requires us to have a good software runtime, and currently the web is better than the alternatives. It's pragmatic -- all of the other software runtimes are either less secure (Android/Windows), or less accessible (Qubes OS, actual VMs).
> My guess is that you dream of using the internet to create compelling experiences... and you consider documents consisting of text, images and links to other documents woefully inadequate for that purpose
I do want to be able to create compelling experiences and weird stuff, and I think there's an inherent value to having even flawed platforms that enable that. But, let's ignore weird canvas experiments and games, since not everyone cares about them. When we talk about traditional, normal software, my position is the opposite -- that document layout tools are adequate for most software.
Let's ignore the web and just talk about what a good general application framework would look like. Maybe about 60-70% of the software I run today could be using a terminal interface. Pure text is good enough for a large portion of application interfaces, and terminals are usually nicer to use than GUIs.
Most other applications I run natively are just documents, and they'd be better if their interfaces were HTML/CSS. Chat apps, text/database editors, git clients, file navigators, calendars, music players: these are not fundamentally complicated interfaces. The only applications I have installed natively that aren't just interactive documents are fringe-cases: games, image editors, Blender. There's a subset of programmers that get wrapped up in having pixel-level control over how their applications look, and I couldn't care less about how they want their applications to look -- all of their interfaces are just text arranged into tables with maybe a few SVGs on the side. They're documents that I can click on.
HTML and CSS have real problems, and we might want to fix a few of them. But they're already pretty good at laying out documents -- arguably better than most other interface tools that we have. And once you start thinking of applications as interactive documents, a lot of design decisions in HTML/CSS make a lot more sense. For example, if HTML is a language for building full applications, then it's dumb that there aren't more significant 2-way data-binding tools. But if HTML is a display layer, then it's obvious why we wouldn't want a lot of 2-way data-bindings -- they're hard for users to consume.
Where scripting is concerned, we have two options for this theoretical platform: we can run logic locally, or we can run it on a server. A lot of FOSS developers advocate for serverside logic, and I don't understand that, because I think that SaaS is (often) just another form of DRM that takes control away from users. I'd like to move more logic off of servers -- some of the biggest weaknesses of the web come from the fact that everything is so impermanent; you can't pin libraries, you can't run an older version of a website, you can't easily move data around. SaaS makes the majority of those problems worse. If a calculation can be done locally it is often better for the user to avoid the server entirely and bundle everything clientside.
I don't like either of those visions. I think most native platforms are just as bad as the web today (worse if you're thinking about security), and I think widespread SaaS is bad for users. Again, this is pragmatic -- it's not that the web is great, or that it doesn't have fundamental problems, it's that the web currently exists and is available to most people, and I don't think any of the native alternatives are comparable. If someone showed me something better, I'd abandon the web in a heartbeat.
The web promised a place where we could read structured text documents with the occasional embedded piece of media.
If it was known that it would be an app platform like today, decisions would have been made differently.
But if you just want a Turing-complete language that is incapable of fingerprinting the host environment, there is TeX, which might qualify if a bug it has is fixed and its file I/O is made more restricted.
Also, for internet stuff there are many other programs and protocols that could be used, e.g. SMTP, NNTP, Gopher, Telnet/SSH, etc. (In many ways they will even work better: no need to deal with complex user interfaces that do not work properly and do not even reflect the user's intentions.)
There is also Glulx, and other VMs, and I have also thought of making up a VM for this purpose.
what extension does this? there's plenty that disables it outright, but is there one that shows a prompt?
Since WebGL requires you to use canvas, this should also block that attack, although I'm currently going a step further and disabling WebGL entirely (webgl.disabled), since I've seen a few sites (panopticlick for example) get around the prompt specifically with WebGL, and I don't know how they're doing it.
Firefox's fingerprint resistance efforts are showing a lot of promise, although you will have to put up with some quirks (like learning to read UTC time).
While I mostly agree, it is very annoying seeing the exact same car insurance or <insert random local budget car dealer> ad three times during a ten minute video on YouTube about a super car I was interested in.
(side note - using TV as a high bar for the Internet would be...well, disappointing to say the least)
The author was describing "gets all users" as in, gets all users that Apple can get. What's next, would you point out how people in the 3rd world wouldn't be buying Apple/Google phones either?
The point was, when Apple has all of the market share they can expect to get, will they turn around and flip tracking back on, etc.
Please don't deviate from the topic for solely pedantic reasons. I generally love pedantic distinctions, but this one, I think, added little to no value.
The counter-arguments were that this would never happen because Apple's unlikely to deviate from the luxury pricing model that's worked so well for them. If Apple "gets all users that Apple can get" then turns tracking back on, while Android is still around with the same tracking but at a small fraction of the price, there's no reason for privacy-conscious users to stick with Apple.
There is a cost of switching platforms.
Google's revenue comes from advertisers.
Attribute incentives and motivations accordingly.
You essentially turn everything into a paperweight.
I don't think we have to worry about that. Apple has a small (10-20%) portion of the global market. In many sub markets Apple is struggling against the competition.
What we do have to worry about is digital privacy being unevenly distributed with people that have more money being able to afford more privacy.
That's assuming it's only a matter of Apple vs. Google and there is nothing to be said about it by Mozilla or Purism or anyone else whose products don't require Apple money to use.
And even if that turned out to be the case, it could sink the whole silly edifice anyway because advertisers want the customers with money. Tracking people who don't buy anything isn't nearly as valuable.
Not to mention card transaction data is already available for sale, and by the network owners, nevermind the banks!
It does seem odd for Apple to choose to hand that data over to the bank with the absolute least scruples. GS has made a name for themselves screwing their own customers.
I would’ve hoped Apple would partner with a smaller bank that has something to gain, and then say something like “because we care about your privacy, we have partnered with Bank X to develop the first credit card with truly private transactions” or something like that.
The fact that they just leak the data to the credit card industry is odd. With all their cash they could’ve bought a bank and issued the card themselves. Hopefully that’s their endgame.
In the US, you can't buy a bank without becoming a bank. Becoming a bank involves a lot of additional regulation.
It would have been awesome if Apple provided the lender and Visa/Mastercard/AMEX with a non-transferable license to use client/transaction data with Apple being the owner of the data.
It is goldman though after all, so you could be right!
> does that mean that Capital One was allowed to trade on this data for its own profit? Wouldn't that be amazing? Surely the answer is no: I assume that Capital One signed agreements with retailers (or rather, with Visa and MasterCard, which signed agreements with retailers) in which it promised not to disclose transaction data, or use it for nefarious purposes.
If you think about it this makes sense. A retailer selling material non-public information to a hedge fund which then trades on it is essentially the same as them executing trades based on the data via the fund, which is obviously insider trading.
>Employees of law, banking, brokerage and printing firms who traded based on information they obtained in connection with providing services to the corporation whose securities they traded;
Correct me if I'm wrong, but that sounds like this particular situation to me.
They still couldn't /legally/ trade on things like "companyX asked us for a loan, so let's buy/short/etc companyX before they announce it", as that would be servicing companyX directly, and they learned the information (about the loan) from that servicing.
It's not about Apple customers AND Goldman Sachs customers, it's about consumer trends. Having access to millions of purchases as they happen is useful unto itself when it comes to market trends. In this day and age effectively anyone can get HFT hardware / software, but having that pipe of data is a real moat that your average shop cannot compete with.
The card networks can even receive data of exactly what is purchased at what price with level 3 data:
I know staples.com reports it because it shows you everything purchased on the AmEx statement, and hotel stays will have check in/check out dates.
Is that a diverse enough customer segment to draw any wider conclusions beyond this particular group of people? It seems pretty narrow to me, but perhaps you are right that the data has some value regardless.
Brave is not a privacy-focused browser. It is an ad-focused browser, and Brave's business model is just this: ads, through Basic Attention Tokens. Privacy and BATs are in conflict, and Brave will never be incentivized to respect the privacy of all its users. If you want privacy for yourself and for everyone, any of the competition except Chrome is already better.
Brave is not a solution for a browser user's problem.
At this point there are only two browsers left: Firefox and Safari.
AMP, for what it’s worth, is based on existing web standards. It’s a restrictive way of using those standards, but it sits on top of existing standard tech. And Google is using their position as a search engine to push that, not their position as a browser vendor (as evidenced by the fact that AMP has the same effect in Firefox/Safari as it does in Chrome).
But this comment thread includes the claim that their influence over Chromium is being used to push AMP, and I don’t see how they are using their position with Chromium to push AMP, a technology that uses a specific subset of existing web technology and operates cross-browser already.
Each member gets one vote. Membership to the group is decided by the group, with a goal of no more than 1/3 coming from one company. The distinction I pointed out is both technically and practically correct.
Brave is clearly a privacy-focused browser: it takes very little time for a technically-minded, veteran HNer to kick the tyres on that project's focus and codebase to understand that privacy is their USP.
It's obvious the whole team believes in the idea of a user agent being an Agent of the User.
A little more time reviewing key figures, from Yan Zhu through to Johnny Ryan, reveals the calibre and integrity of people working on this project.
They are attempting to build a model that upsets the current surveillance capitalism status quo, so it's no surprise that there are attempts to spike perceptions around the project.
Google's Chrome has distorted that historical idea so they are agents of the ad network and advertiser, working against the user.
As for homophobe, I reject your definition. Call me what you want there, but “racist” is a lie. Either yours or your “last I checked” source’s.
From what you write, nonprofits are innocent and for-profits are guilty. I worked for a nonprofit or its wholly owned for-profit subsidiary for 11 years, and I can tell you that the profit motive does not go away in nonprofits. Check the 2017 form 990 on Mozilla’s site for the top salary, >$2.3m. I never got 1/3rd that and went down to 1/15th to start Brave.
Brave uses all open source for auditability and we pay for audits as well as bug bounties. We pay the user 70% of user private ad revenue. For publisher ads (not yet done, working with publisher partners) we will pay users the same 15% we get - publisher gets 70%. So we won’t make revenue without our users being happy and making more than we make. Let’s see Firefox share Google search revenue (which held up tracking protection in Firefox for years) with its users, giving more to the user than Mozilla gets.
Your ad hominem argument against an open source product is absurd on its face. Should right wingers use only software from righties? How many tribes must hive off and build their own software, and reject open source that's ritually unclean? Judge products on their observable design, implementation, and business properties.
If you don’t have enough brand awareness to get your PR out there just latch your story onto someone that does...
Do you think Mozilla, Chrome, and ad blockers were also part of this marketing effort, or is it possible that all this was just normal context?
That's why the Tor browser is set up to be as homogeneous as possible. Using Tor within Brave does not provide the privacy that users might expect, and Brave even points that out on their website. Hiding your IP is a good start alongside blocking known trackers, but it's only one component of properly avoiding tracking online.
Mozilla is investigating Tor integration: https://blog.torproject.org/mozilla-research-call-tune-tor-i...
Sure, moving forward they do, but they sure did take their time getting there.
No they don't. They just have a proven record of not being good at exploiting the massive amount of surveillance data they collect.
Anyone who thinks Mozilla is on the side of the user in privacy matters should start by asking why they support the beacon API.
At least they were "transparent" about the second one. But sending my browsing history to a company I've never heard of is a big ask. At least I can choose to have Chrome sync with Google, this wasn't ever asked.
All of their efforts to wean themselves of the Google cash have been not only pathetic failures, but also breaches of trust.
Many people and long time Firefox users felt back then that Mozilla did do something wrong.
Give an inch, take a mile. It's always like that every time by every single company in existence.
Anyway, I wanted to give this more airtime so I upvoted you ;)
Any links to best practise in this space?
There won't be the opportunity to lie back and say, "This emission isn't trackable": very smart people at public and private intelligence agencies (Facebook, Google, GCHQ, Spetssvyaz, NSA) are working to find a way.
You might not think FB or Google are evil. But we live in a cyberpunk world now, there are criminals who are learning to act more sophisticated. Eventually they'll get leverage over an employee at Google/FB/etc and the data they get access to will be used offensively.
The current guys working in tech, tracking your every move are on the friendly end of the spectrum. They just want to sell you things, or get you hooked on e-cigarettes.
EDIT: Also a note that iOS doesn't have any way to control app network access, I don't think Android does either. So there's another easy front.
I don't know about Apple/iOS, however Android has plenty of third party local vpns that exist specifically to filter per-app internet access (No Root Firewall, NetGuard, etc), as well as iptables GUIs like AFWall+ for those who are rooted.
Without installing a third party app, while you can't entirely prevent an app from going online, there's a toggle in app settings, "Background data: Enable usage of mobile data in the background", though I'm unsure of the exact effect this toggle has. With more technical knowledge, you can leave the Android platform and run a pihole or a privoxy. It should also be possible on Android to write a no-root VPN that switches between different proxies/profiles for various apps (i.e., use squid/privoxy on browsers, use DNS proxies for native apps, whitelist as necessary).
If you're rooted, you can also run your privoxy/pihole on the local device; I've had success with running a local dnsmasq, however it's far from battery friendly.
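The proxy-switching idea can be sketched as a routing table that the local VPN service consults for each app that opens a connection. Package names and ports below are made up for illustration, not any real app's config:

```python
# Sketch of a no-root VPN's per-app routing: classify each outbound
# flow by the app that opened it, then hand it to the right filter.
ROUTES = {
    "org.mozilla.firefox": ("http-proxy", "127.0.0.1:8118"),  # e.g. privoxy
    "com.example.chat":    ("dns-filter", "127.0.0.1:5353"),  # DNS proxy
}

# Whitelist as necessary: apps without an explicit route get nothing.
DEFAULT = ("block", None)


def route_for(app_id):
    """Pick the filtering strategy for a flow opened by app_id."""
    return ROUTES.get(app_id, DEFAULT)


assert route_for("org.mozilla.firefox")[0] == "http-proxy"
assert route_for("com.unknown.tracker") == ("block", None)
```

The hard part in practice isn't this lookup, it's reliably attributing flows to apps from inside the VPN service; the table itself is the easy bit.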
That'd make me definitely switch if I'd get faster ad-blocking and only lose HW accelerated video.
TFA is a little incoherent, however.
> Publishers and companies rely heavily on online tracking — i.e. collecting (anonymized) data about a user’s activity on the web — to keep tabs on your every move as you hop from one site to the other. [emphasis added]
> While this is typically used for targeted advertising, the implications go beyond just serving relevant ads in that it allows marketers to create detailed dossiers about your interests — resulting in significant loss of privacy.
None of that is at all "anonymized".
It sounds like many people now use "anonymized" whenever there's no obvious personal identifier (name, email, social security number, etc.) in the data. Never mind that a thorough profile doesn't need one to identify individuals.
IP addresses are PII under GDPR, for example. And it's well known that many HIPAA-level "anonymized" datasets have been deanonymized.
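A toy illustration of the point (the classic result being that ZIP code, birth date, and sex alone single out most of the US population). The data below is made up; the metric is just the size of the "anonymity set" for each record:

```python
from collections import Counter

records = [
    {"zip": "02139", "birth_year": 1970, "sex": "F"},
    {"zip": "02139", "birth_year": 1970, "sex": "M"},
    {"zip": "02139", "birth_year": 1984, "sex": "F"},
    {"zip": "94103", "birth_year": 1970, "sex": "F"},
]


def anonymity_set_size(record, dataset):
    """Count how many rows share this record's quasi-identifiers.
    A size of 1 means the 'anonymized' row is unique -- i.e. linkable
    back to a person despite containing no name or email."""
    key = (record["zip"], record["birth_year"], record["sex"])
    counts = Counter((r["zip"], r["birth_year"], r["sex"]) for r in dataset)
    return counts[key]


# No explicit identifier anywhere, yet every row here is unique:
assert anonymity_set_size(records[0], records) == 1
```

Scale the attribute list up to browsing-history profiles and the anonymity sets only get smaller.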
I wish the latest round of privacy restrictions (which I think are overall a decent idea) would take these use cases into account, or at least allow a mechanism to request the user's permission to use third party cookies for sites they trust.
Native apps have pretty robust permissioning systems. Why shouldn't websites?
(For context, these are some of the things we use at PayPal to build embeddable cross-domain components: https://medium.com/@bluepnume/introducing-paypals-open-sourc...)
They really don't, though. No better than websites. I was astounded when I first used Little Snitch and saw how often random apps were making network requests. For example, Translate Tab (a simple language translation app) sends every translation to Google Analytics. And you need a relatively sophisticated tool / expertise just to see this.
It made me rethink the superiority of native apps since all this is so hidden. I prefer websites over native apps because I can run uBlock/uMatrix. Native apps can do whatever they want when it comes to tracking. People aren't even talking about it.
Not only do native apps get off scot free, HN glamorizes them as superior with no questions asked.
Isn’t that what the Storage Access API is for?
I guess the next step for advertisers will be to enlist the websites themselves to send through more user data.
A big player like Facebook could ask media companies to set up fb.cnn.com or similar, but I imagine this is where we start to enter “security vulnerability” territory, where Apple uses different heuristics to ban this approach.
EDIT: Maybe not 'not at all', but it's remembered for a very short while anyway.
Marketing: Maybe. Let's put up a banner about it in every corner.
Tech: Find the most sensitive/concerning topic, and let people debate.
This can be overridden by manually setting DNS servers on the phone or computer to non-filtering public DNS servers, e.g. 8.8.8.8 or 1.1.1.1.
You can expect domains like revddit.com or hotnail.com to get blocked by anti-phishing filters.
I don't fill out surveys anyway, but I was feeling particularly pissed, just because it should be socially unacceptable to behave like this, just because you think you can legally do it and nobody can stop you. So I flagged it as spam.
Oh, and the chain store that did this has been in the news for data breaches, not long ago at all.
Security by obscurity doesn't generally work. FWIW, here's what WebKit has to say about what it'll block: https://webkit.org/tracking-prevention-policy/. I hear that a specific list may also be released at some point.
I work with an enterprise publisher that has literally every item listed as an essential part of how it does business, from running programmatic ads to SSO and Google Analytics. The Unintended Impact and No Exceptions parts would mean a rework of that entire business for all of WebKit. That is true of most web publishers today. I cannot overstate what a vast impact this will have on web publishing.
Maybe those things need to change. We want to move towards a member driven model and move away from ads, but in order to fund that transition ads need to continue running as the product is developed and user base grows. Even then, since we use Okta for SSO, that would also break or require a significant reimplementation to server-side auth, which could also break cache for authenticated users.
We have a grand, beautiful plan for creating a publishing model that is trackerless except for a first party event logger that hashes all PII before it’s stored. We need data to operate, and some of it needs to come from the client. We would share client data with zero 3rd parties.
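Concretely, that hashing step might look something like this (a sketch under my own assumptions, not anyone's production code). One caveat worth baking in: a plain unsalted hash of low-entropy PII like an email address can be reversed by brute force, so a keyed hash with a server-held secret is the minimum bar:

```python
import hashlib
import hmac

# Illustrative server-side secret; never shipped to the client.
SECRET_KEY = b"server-side-secret"


def pseudonymize(pii: str) -> str:
    """Keyed hash of a PII field before storage: stable enough to join
    events from the same user, but not reversible without the key."""
    return hmac.new(SECRET_KEY, pii.encode(), hashlib.sha256).hexdigest()


event = {"page": "/article/42", "user": pseudonymize("alice@example.com")}
assert "alice" not in event["user"]                       # raw PII never stored
assert pseudonymize("alice@example.com") == event["user"]  # stable join key
```

The trade-off is the usual one: the hash stays linkable across events (which is the point, for analytics), so it's pseudonymization rather than true anonymization.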
For now our pages are a brothel of 3rd-party scripts. We hate it, we can't survive without it, and forcing this change could force us and most web publishers today out of business.
Below is a quote from the link above:
There are practices on the web that we do not intend to disrupt, but which may be inadvertently affected because they rely on techniques that can also be used for tracking. We consider this to be unintended impact. These practices include:
Funding websites using targeted or personalized advertising (see Private Click Measurement below).
Measuring the effectiveness of advertising.
Federated login using a third-party login provider.
Single sign-on to multiple websites controlled by the same organization.
Embedded media that uses the user’s identity to respect their preferences.
“Like” buttons, federated comments, or other social widgets.
Improving the security of client authentication.
Analytics in the scope of a single website.