Plausible Deniability and Gaslighting in Fighting Ad Blockers (secarch.dev)
231 points by chii on June 5, 2019 | 133 comments

While the controversy centers around ad blockers, the most important aspect of the Chrome extension API changes seems to have been missing from the discussions of the past few months: you will not be able to fully control every aspect of a request in your own browser, despite installing extensions that run privileged code that you trust.

HTTP headers can no longer be freely edited, unless they are part of a limited whitelist blessed by the Chrome team. This kills innovation in the browser extension space in its tracks, and ad blockers are just a subset of use cases that will be impacted.

The changes will inevitably find their way into Chromium forks which do not have the resources to maintain the deprecated API while merging upstream changes, further limiting choice and hindering innovation.

I understand the challenges at scale, but there's definitely a decade-long trend towards locking down devices and software from user changes, because any API or configuration that's accessible to knowledgeable users is also accessible to malware and social engineering.

This coincidentally aligns with business goals (watch our ads, replace your phone every 2 years, sync everything to our cloud because anything else is risky and cumbersome, do not integrate our devices with a non-blessed solution etc.) but for some reason discussing these goals is frowned upon.

Correlation does not imply causation, but an event can have multiple causes at once; only some of them are allowed in polite discussion, and not everything should be attributed solely to malice or solely to stupidity.

> but for some reason discussing these goals is frowned upon

Well, the reasons are obvious - execs don't like having their motives questioned by the little people, especially when those motives are self-serving and bad. How dare you question your betters?

One reason of many I banned Google.

Mind you, another way to say that is “malicious extension authors can no longer snoop on all network traffic.”

As someone who was, at one point, pitched a project proposal for a browser extension our company could make which would “make users think they’re getting one useful effect, while secretly using their browser as a node in a distributed web-scraping farm to social-network sites they’re logged into” — I’m sure this is more common than people think, and I'm very wary of extensions that say they do one thing while doing quite another (which happens to fit into the same permission set as the extension's stated purpose, leaving users none the wiser).

The discussed API changes do not prevent malware from snooping on browser traffic: requests can still be observed, and scraping is also unaffected.

Ad blocking is arguably one of the best ways to protect yourself from malware, and that is the capability Google attempts to limit.

This also isn't even a first step toward privacy restrictions on observation being added later. From the discussion thread[0]:

> Chrome is deprecating the blocking capabilities of the webRequest API in Manifest V3, not the entire webRequest API (though blocking will still be available to enterprise deployments). Extensions with appropriate permissions can still observe network requests using the webRequest API. The webRequest API's ability to observe requests is foundational for extensions that modify their behavior based on the patterns they observe at runtime.

It's not just that the people claiming that this will improve privacy are wrong about this specific change. Google's stance is that blocking request observation is fundamentally not acceptable, specifically because it would stop extensions from tracking and responding to user behavior.
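To make the distinction concrete, here is a hedged sketch of what remains allowed: a Manifest V3 extension with the "webRequest" permission and matching host permissions can still observe every request, it just can't block or modify them. The wiring below assumes those permissions; the helper is pulled out so the logic is testable outside a browser.

```javascript
// Pure helper: what an observing extension could record per request.
function summarizeRequest(details) {
  return { url: details.url, type: details.type, method: details.method };
}

// Browser wiring (only runs inside an actual extension context).
if (typeof chrome !== "undefined" && chrome.webRequest) {
  chrome.webRequest.onBeforeRequest.addListener(
    (details) => {
      // Observation only: no "blocking" in extraInfoSpec, and no
      // { cancel: true } / { redirectUrl: ... } return values.
      console.log("saw request:", summarizeRequest(details));
    },
    { urls: ["<all_urls>"] }
  );
}
```

This is the capability the quoted passage calls "foundational": tracking requests stays, blocking them goes.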

To all the people saying, "well, it's about time extensions got locked down" -- Google is not your friend and they don't want the same things as you.

[0]: https://groups.google.com/a/chromium.org/forum/#!topic/chrom...

> though blocking will still be available to enterprise deployments

So I guess the answer is to be an "enterprise", then.

Not necessarily, since having the API available for you personally is only one part of the problem.

The other question to ask is, will you even have an ad blocker to install, given that Gorhill has hinted these changes might mean abandoning Chrome as a target platform?[0]

If enough ordinary people move to enterprise, Google will eventually restrict that as well. If not enough people move to enterprise, extension authors won't support Chrome and you'll be stuck writing or forking your own ad blocker. So you're gambling on being able to walk a very thin line alongside a company that has already shown it is actively hostile to your interests.

The more permanent answer is to just switch to Firefox and stop playing their game. Switching to enterprise will probably at best buy you some time before you're forced to make that decision.

[0] https://github.com/uBlockOrigin/uBlock-issues/issues/338#iss...

Oh, I switched to Firefox years ago (Chrome still doesn't offer any reasonable equivalent to Tree Style Tabs). This would be more for the folks that I haven't quite convinced yet to do the same.

Ah, alright. Too bad, actually; I was hoping there was a silver lining.

Well, maybe—did the API previously give extensions the ability to rewrite the request in-flight? Such as e.g. to rewrite Amazon requests to embed the extension-author’s Amazon affiliate ID? I know extensions can do that, but I’m not sure how they accomplish it; it’d be nice if that could be quashed.
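For what it's worth, the blocking form of webRequest could indeed do exactly that. A hedged sketch of the technique (Manifest V2, "webRequest" + "webRequestBlocking" permissions; the affiliate tag value is made up):

```javascript
// Rewrite Amazon URLs to carry a hypothetical affiliate tag, unless
// one is already present.
function addAffiliateTag(url, tag) {
  const u = new URL(url);
  if (u.hostname.endsWith("amazon.com") && !u.searchParams.has("tag")) {
    u.searchParams.set("tag", tag);
    return u.toString();
  }
  return null; // leave the request alone
}

// Browser wiring (only runs inside an actual extension context).
if (typeof chrome !== "undefined" && chrome.webRequest) {
  chrome.webRequest.onBeforeRequest.addListener(
    (details) => {
      const redirect = addAffiliateTag(details.url, "evil-ext-20");
      return redirect ? { redirectUrl: redirect } : {};
    },
    { urls: ["*://*.amazon.com/*"] },
    ["blocking"] // the capability Manifest V3 removes
  );
}
```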

But really, you shouldn’t be conflating the particular capabilities this API provides with the ability to block ads altogether. There are many ways to implement ad blocking, and not all of them require the capability to have your extension active “on the wire”, intercepting network traffic and doing arbitrary Turing-complete things in response with full extension-level capabilities.

Personally, from my perspective (and Google might share this perspective), this API was never “the ad-blocking API.” A true “ad-blocking API”—a single-purpose one that can’t be used for any other malicious purpose—is the thing Safari has, where your extension can feed a block list into the browser, and the browser will prevent network requests matching the block list itself, without ever informing your extension that it has done so.
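For illustration, Safari-style content-blocker rules are just a declarative JSON list handed to the browser (the host in the rule below is a placeholder, not a real blocklist entry):

```javascript
// A minimal Safari-style content-blocker rule list. The extension
// hands the browser the serialized list and is done; Safari applies
// the rules itself and never reports back which requests it blocked.
const rules = [
  {
    trigger: { "url-filter": "^https?://ads\\.example\\.com/" },
    action: { type: "block" },
  },
];

const ruleListJSON = JSON.stringify(rules);
```

The fire-and-forget shape is the point: the extension never sees the traffic at all.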

Of course, Google would never implement that API in Chrome. And that’s the real problem.

I don’t blame Google for limiting some other API that isn’t really “about” ad-blocking, but which can be used to block ads as a side-effect; just like I don’t blame Intel for making their processors slower in response to Spectre/Meltdown. Sometimes security for users trumps useful features or performance.

But I certainly do blame Google for not implementing an API that has absolutely no security impact—is perfectly safe, perfectly performant—and blocks ads.

If anything has ever been decreed on-high from Alphabet to the Chrome team, it’s likely “don’t implement the Safari content-blocking API, or any standard based on it.” Not anything to do with this API. This API is just problematic, and the Chrome team themselves want its capabilities out of the hands of extension authors.

There are already ways to limit access to the webRequest API using host permissions, for which the user gives explicit consent.

Once the host can be accessed, the webRequest API isn't even needed to rewrite requests and read or modify page content; all of it can be accomplished using content scripts.

There are really no meaningful security or privacy benefits in limiting the webRequest API in ways that are being proposed by the Chrome team.

I don't think the host permissions do what you think they do.

1. They don't address the performance issue.

2. They still allow redirection of requests, which is a bigger security issue.

V3 fixes 2, which is a security issue, and a unique one if your threat model is a malicious extension.

Host permissions are required to access the requests through the webRequest API. Once you have set the host permission, you can redirect requests using a content script, without needing to touch the webRequest API.
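As a sketch of that claim (all URLs here are placeholders): a content script granted via a host permission can redirect navigation or quietly rewrite links, with no webRequest involvement at all.

```javascript
// contentScript.js -- hypothetical malicious rewrite, for illustration.
function rewriteTarget(href) {
  // e.g. silently send users to a lookalike domain
  return href.replace("example.com", "example.evil");
}

// Browser wiring (only runs when injected into a page).
if (typeof window !== "undefined" && typeof document !== "undefined") {
  // A content script could redirect the whole page...
  // window.location.href = rewriteTarget(window.location.href);
  // ...or quietly rewrite every link on it.
  for (const a of document.querySelectorAll("a[href]")) {
    a.href = rewriteTarget(a.href);
  }
}
```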

The performance impact is negligible in sensibly implemented extensions, and there are several other ways to make the webRequest API more performant if that's really a valid reason for the change, some of which are already implemented in Firefox.

I don't believe most honest software engineers would rip out a core functionality of the browser that so many people depend on, just to gain a couple of microseconds, at least not before trying other optimization techniques.

> Once you have set the host permission, you can redirect requests using a content script, without needing to touch the webRequest API.

Then I'm a bit confused, why does this change cause a problem for ad-blockers? Can't they block and redirect requests via a content script after these changes?

No, you can't block requests from a content script as you would with the webRequest API. You can however redirect the page from a content script, read and replace page content, and steal passwords.

The webRequest API gives you fine control over all requests on the page in a performant way. A content script based ad blocker would be able to block ads to some degree by removing the DOM elements that initiate the request, though it would be limited, prone to breaking, and not performant.
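A sketch of that content-script fallback (selectors are illustrative; real filter lists are far larger, and this approach races against the page, which is why it's prone to breaking):

```javascript
// Block "ads" by deleting the DOM elements that initiate the requests.
const AD_SELECTORS = [".ad", ".sponsored", 'iframe[src*="ads"]'];

function matchesAdSelector(el) {
  return AD_SELECTORS.some((sel) => el.matches && el.matches(sel));
}

// Browser wiring (only runs when injected into a page).
if (typeof document !== "undefined") {
  // Remove what's already there...
  document.querySelectorAll(AD_SELECTORS.join(",")).forEach((el) => el.remove());
  // ...and watch for late-inserted ads. By the time this fires, the
  // network request may already be in flight -- unlike with webRequest.
  new MutationObserver((mutations) => {
    for (const m of mutations)
      for (const node of m.addedNodes)
        if (node.nodeType === 1 && matchesAdSelector(node)) node.remove();
  }).observe(document.documentElement, { childList: true, subtree: true });
}
```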

Ah. I'm caught up now. That jibes with my understanding.

(this was meant as an edit to the prior comment, if it seems out of place):

Looking into this more, I think what you're saying is that you can request both a content script and a host permission for the webrequest, and the user will approve both at the same time. But you have to specify them differently in the manifest, i.e. it looks like

    {
      "permissions": [
        "<all sites>"            // old way
      ],
      "host_permissions": [      // new way
        "<all sites>"
      ],
      "content_scripts": [{
        "matches": ["<all sites>"],
        "js": ["contentScript.js"]
      }]
    }
My (limited) understanding is one of the points of the new manifest is to make things more declarative so that they can be statically analyzed. So that, for example, host_permissions "<all sites>" might be allowed, but would require manual review, and content_scripts: ["<all sites>"] could be flagged even more strongly, etc.

If you don't specify the content_scripts portion of your manifest, you don't have the ability to run content scripts, so there isn't a security concern: you're limited to the declarativeNetRequest sandbox and can't do the snooping.

For reference, uBlock, which I use, requests a ton in the manifest[0], and there are valid reasons for that (the userscript is, iiuc, for using the content picker). But a minimal ad-blocker could request declarativeNetRequest, host:<all sites>, and nothing else, if I'm reading correctly. And that limited footprint can be statically validated. You know that extension can do nothing but modify requests, and the way the requests are modified is statically verifiable. That sure sounds more secure to me.

[0]: https://github.com/gorhill/uBlock/blob/98271f6140bb188222e21...
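Under that reading, a minimal-footprint blocker's manifest might look something like this (a sketch; the name, rule contents, and blocked host are illustrative, not taken from any real extension):

```json
{
  "manifest_version": 3,
  "name": "minimal-blocker-sketch",
  "version": "0.1",
  "permissions": ["declarativeNetRequest"],
  "host_permissions": ["<all_urls>"],
  "declarative_net_request": {
    "rule_resources": [
      { "id": "rules", "enabled": true, "path": "rules.json" }
    ]
  }
}
```

with rules.json along the lines of:

```json
[
  {
    "id": 1,
    "priority": 1,
    "action": { "type": "block" },
    "condition": { "urlFilter": "||ads.example.com^", "resourceTypes": ["script", "image"] }
  }
]
```

Everything is declarative and can be reviewed without running the extension.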

Declaring content scripts in the manifest helps with performance and perhaps static checks, but it's not required to run code in pages; only the host permission has to be declared in the manifest, and you can just inject the code when the page loads: https://developer.chrome.com/extensions/tabs#method-executeS...
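A sketch of that injection path (Manifest V2's chrome.tabs.executeScript; the injected code is a harmless placeholder):

```javascript
// Decide when to inject -- pure helper, testable outside a browser.
function shouldInject(changeInfo) {
  return changeInfo.status === "complete";
}

// Browser wiring (only runs inside an actual extension context).
// Note: no "content_scripts" section in the manifest is needed,
// only a host permission matching the tab's URL.
if (typeof chrome !== "undefined" && chrome.tabs) {
  chrome.tabs.onUpdated.addListener((tabId, changeInfo) => {
    if (shouldInject(changeInfo)) {
      chrome.tabs.executeScript(tabId, {
        // Arbitrary code, never declared anywhere in the manifest:
        code: 'console.log("injected into", location.href);',
      });
    }
  });
}
```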

Malware would prefer executeScript to avoid scrutiny, and also wouldn't care about performance.

The declarativeNetRequest API is actually a welcome addition, but it's not a valid reason to deprecate parts of the webRequest API. We should be able to modify request headers for various use cases, some of which can't be foreseen, because that's just how innovation works. This is why neutering that API is considered so harmful by pretty much everyone outside of Google.

If security and performance are an issue, webRequest API features can be decoupled into granular APIs, optimized, and accessed with granular permissions. The new proposed APIs do not even come close to the feature set of the webRequest API.

When feature parity is achieved through new APIs, deprecating the webRequest API makes perfect sense.

FWIW, it looks like changes to lock down tabs.executeScript and content scripts were originally considered[0], but I agree that tabs.executeScript not requiring a declared permission is a giant issue.

Thanks for this discussion btw, you introduced me to a bunch of quirks in the chrome apis I wasn't aware of.

[0]: https://docs.google.com/document/d/1nPu6Wy4LWR66EFLeYInl3Nzz...

I'm happy to chat, and I hope the discussed concerns about deprecating the webRequest API will motivate you to bring up the topic internally at Google.

Can confirm that using users for social media scraping is huge. I worked in that field for some time, though the product was never brought to production.

Note that in our case, though, the user was completely informed that they were acting as a node in a network. They provide the ability for us to collect non-public data on other users; we provide them with non-public data on other users that they do not have access to.

"They provide the ability for us to collect non-public data on other users; we provide them with non-public data on other users that they do not have access to."

Really doing God's own work there ...

Oh, we knew it to be the devil's work. The idea was to gauge how much people are willing to share, in order to get something that they want. It was disgusting just how many people were willing.

Far from a technology demonstrator, it was a social contracts demonstrator. Most people have no morals and will happily turn over their "friends'" personal information. Just look at how many Android apps require the Contacts permissions.

So, do the new API changes block people like what you were working on? Because I sure as hell hope they do, and if the changes mean ad blockers have to adapt, so be it.

I don't think they do.

> In Manifest V3, we will strive to limit the blocking version of webRequest, potentially removing blocking options from most events (making them observational only). Content blockers should instead use declarativeNetRequest (see below).

[1] https://docs.google.com/document/d/1nPu6Wy4LWR66EFLeYInl3Nzz...

Of course the new API changes cannot block what we were doing. The API has no way of knowing that Joe User will now share the non-public information.

In fact, an argument could be made that we were not enabling anything that was not enabled before. Joe User could already log in and see his friends' information and then go off and gossip. We just demonstrated that Joe would be _willing_ to gossip in order to hear more gossip. It was absolutely disgusting.

That argument generalizes to not allowing users to have any say, and to trusting big brother to know what's best for you instead.

I'm curious, if you wouldn't mind sharing: what was the context around that pitch, and what was your subsequent response? It sounds like you didn't go forward with implementing it ... was it your decision to not move forward, or were you on a team and that's just what ended up happening?

The startup that I was working at, that came up with the pitch, fell apart before we ever had to make the call to implement it. Which I’m somewhat glad of; I don’t really like the idea of being forced to choose between ethics and helping a few friends survive. (And don’t get me wrong, building this extension would have been a matter of survival for the startup—we were depending heavily on piecing together a unified social-network graph to do Dynamic Network Analysis on, and social-network services actively resist having this data pulled from them.)

If you’re curious, the point of the startup was to path-find between you and people you want to be connected to but aren’t (investors, say), telling you which friends-of-friends to prioritize getting to know in order to later leverage them as intermediate links in the chain to meet the people you really want to meet, that are three or four or five steps removed from you, and so entirely invisible to your local part of the network.

(And yes, that’s a really darn cynical play. But the data that we did have showed that the intermediate “links” in your chain would almost always be business-people, “growth hackers”, etc. for whom getting to know you was valuable in its own right—they would value any chance to grow their Rolodex. So it at least seemed like a positive-sum interaction on the surface.)

So basically Tinder for Recruiters?

On the surface, it sounds useful, but also like Tinder, would've likely been abused.

More like a creepy(er) LinkedIn.

"HTTP headers can no longer be freely edited, unless they are part of a limited whitelist blessed by the Chrome team."

Are you suggesting that Chrome/Chromium users will not be able to control the headers that the client sends? (short of editing the Chromium source and recompiling)

> This kills innovation in the browser extension space in its tracks,

I was under the impression that innovation in the browser extension space had long been killed since the transition of extension APIs to WebExtensions in the major browsers.

There's plenty of room for innovation in extensions just using the existing WebExtension APIs.

When the existing APIs run out of steam, you can work with Mozilla developers to design and implement new APIs and submit them to Mozilla as a contribution to Firefox. No guarantee they'll be accepted --- performance, security, and maintainability must be preserved --- but people have done this successfully to enable their extensions (e.g. NoScript).

Mozilla needed to clear out the unmaintainable mess of letting any extension poke the browser's internals in any way. They've done that, without "killing innovation".

> They've done that, without "killing innovation".

Yes, "killing" is too strong, but they've certainly reduced it. The loss of functionality in the new extension system is a big problem for me, and prevents me from using the new Firefox in any serious way.

And yes, I know that I'm in the minority here and my opinion means nothing. Mozilla has shifted what market it's trying to address, and I am not in the target demographic. I recognize that's a personal issue and it's not really a criticism of the new Firefox.

Mozilla hasn't "shifted what market it's trying to address". Mozilla has always understood perfectly well that if Firefox isn't fast, stable and secure enough to be used by the masses of "regular users", it will become irrelevant. It just became increasingly clear over the years that the old extension system was hobbling that.

Not quite — if you run Firefox nightly you can develop and use "webextension experiments", which have the same power as the old pre-webextension Firefox addons. If you wish, you can also apply to have your experiment be turned into an official webextension API for Firefox.

This is considerably more cumbersome than previously, but still far better than having to fork the whole browser. (On the topic of forks, if running a bleeding edge browser — nightly — is not for you, you can use Waterfox, which is approximately Firefox ESR + the ability to run full-powered extensions.)

> you can use Waterfox, which is approximately Firefox ESR + the ability to run full-powered extensions.

But if you're going to use Waterfox (as I do), and you want to use the most recent version, be sure to check if the extensions you want to use will work. 68 introduced changes that do prevent some older extensions from working.

Forks can be patched. Otherwise, what's the point of maintaining a fork?

Although a small team of occasional volunteers might not have the resources to maintain this, you can be sure that Microsoft does. It's also considerably easier than doing it all yourself like Firefox.

From a technical standpoint, I don't understand why the adtech companies don't just serve ads via APIs consumed by website owners, and served to clients via the primary domain. This seems like it would obviate current adblocking and third-party cookie blockers.

It would add a bit of technical complexity for site owners, but that seems manageable, particularly for b2b relationships.

Obviously, I hope this doesn't happen, but it seems like an obvious strategy and I don't see the flaw.

"From a technical standpoint, I don't understand why the adtech companies don't just serve ads via APIs consumed by website owners, and served to clients via the primary domain."

I also would like to know why there is so much resistance to this, which was the original model of ads on websites ...

rsync.net stopped advertising, in all venues, about two years ago - mainly because the overlap between "people smart enough to use rsync.net" and "people who don't use an adblocker" is basically zero. Nobody who cares about our product ever saw our ads.

But, of course, we still have some interest in advertising our product and, to that end, I have approached several websites and offered very good money to just insert two lines of plain text on their HTML page. No "network", no code blob, nothing interactive ... no picture ... just an extra line of text, with a bit of it href'd for a link.

Huge pushback on that. No interest. "Impossible".

I really don't understand the responses I've gotten ...

As an example, from LWN.net's FAQ under Advertising

"What happened to text ads? The text ad facility allowed readers to place simple, text-oriented ads on the site. Use of this facility had been dropping over time; when we realized that nobody had bought an ad in over six months, we decided to remove the feature."

LWN would still sell you banner advertising, but they don't do text any more. As with other features that were killed because nobody used them, the interest of a single small buyer won't bring them back because it doesn't make any economic sense.

It's very interesting that you used that example, because LWN.net was one of the content providers that we approached.

I figured someone at their organization had the wherewithal to open a regular file in vi and paste in two lines of HTML ... in exchange for money ...


> I also would like to know why there is so much resistance to this, which was the original model of ads on websites

My objection to advertising on the net is primarily the tracking that comes with it. I don't use adblockers specifically, but I do block things like Javascript and tracking pixels as much as possible.

Given the nature of modern internet advertising, I would assume that the tracking would be happening regardless of whether the ads were being relayed through the host website or not, and it wouldn't change my security stance.

It would make me much more suspicious of the website, though, as I'd wonder if the site were sharing its log and other data with the ad network. This is pure speculation, but I wonder if websites might be nervous about bringing that cloud of suspicion over them.

I know that if I were asked to include a barebones ad (I assume that it's text-only and doesn't link to an image you host) like you describe on my websites, I'd decline.

"Given the nature of modern internet advertising, I would assume that the tracking would be happening regardless of whether the ads were being relayed through the host website or not, and it wouldn't change my security stance."

I think I haven't described the proposition clearly - it was, literally, paste this line into a page:

<a href="https://rsync.net">rsync.net</a> - Cloud Storage for Offsite Backups

No tracking. No engagement. No stats. No performance tracking. Nothing. Open up a regular file in 'vi' and paste that line into it.


I fear that I didn't make my point very clear. I apologize. I was speculating that perhaps the resistance you're receiving from websites is about a fear of what the website's users may perceive rather than anything wrong with what you're really doing. That's why I would decline the offer on my websites.

If ad-blocking really takes off I expect this to become the norm. It is technically more complicated than just adding an adwords element to a web page or something like that, but I do think it'd be considerably more resilient to ad-blocking.

I once had the idea of delivering ads by dynamically injecting them into a video stream. As in, when you load a youtube video it prepends the ad into the same video stream as the main content. Ideally the transition from ad to main content is between key frames so that the ad-blocking software can't fully strip out the ad without screwing with the stream itself.

I figure for your one line of advertising, there's going to be campaign design work, setup effort, teardown plans, ongoing owners, monitoring, coordination internal to the business and external, legal and compliance... Much of that is fixed costs whether they go banner or one-liner.

My guess is that it's to prevent ad fraud by website owners. It's a lot harder to detect fraudulent clicks/impressions if all data are routed through the website.

The cost of potentially being blocked by ad blockers is finite (a percentage of total revenue), but the cost of ad fraud is not bounded.

>My guess is that it's to prevent ad fraud by website owners. It's a lot harder to detect fraudulent clicks/impressions if all data are routed through the website.

Isn't the solution to that problem a flat rate fee (similar to how advertisements on tv, newspapers and magazines work)?

Instead of a pay-per-click it could be a simple $X dollars and your ad will be visible for Y days/weeks.

I don't see how that would work. If my site gets zero traffic, would I still get paid a flat rate to 'serve' ads? Pay per impression/click works to pay proportionally to individual site traffic and the extent of a campaign.

The current solution is effectively a flat rate as far as an ad campaign is concerned: impressions/$

People would either (a) pay to place ads on sites they knew had a decent amount of traffic just from reputation, or (b) would hire ad-buying companies which made it their business to know what different sites' ad space is worth.

Needless to say, this could be inconvenient for the adwords-make-me-five-bucks-a-month scale sites. It'd work out OK for the New York Times-es of the world though.

What would happen if browsers simply didn't allow cross domain referencing? Would the web break (and would it be worse than NoScript)?

I've thought about this before, since NoScript is too disruptive for me. One issue is that it's common for scripts to be served from assets.whateverwebsite.com. I also thought of allowing anything from the same second-level domain (so anything on .whateverwebsite.com), but that would allow anything on .co.uk. ¯\_(ツ)_/¯ in Chrome I trust, for now.

Sounds like a job for the public suffix list.

Ooh, cool, hadn't heard of that before! TIL.

But even with the added complexity of regularly pulling in the public suffix list, the problems keep going: e.g., facebook.com's scripts are all served from static.xx.fbcdn.net.
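A toy illustration of why the "same second-level domain" heuristic needs the public suffix list, and where even the PSL-based approach trips over CDN domains. PUBLIC_SUFFIXES below is a tiny hard-coded sample; a real implementation would load the full list from publicsuffix.org.

```javascript
const PUBLIC_SUFFIXES = new Set(["com", "net", "co.uk"]);

// Registrable domain = public suffix plus one more label ("eTLD+1").
function getRegistrableDomain(hostname) {
  const labels = hostname.split(".");
  for (let i = 1; i < labels.length; i++) {
    const candidate = labels.slice(i).join(".");
    if (PUBLIC_SUFFIXES.has(candidate)) {
      return labels.slice(i - 1).join(".");
    }
  }
  return hostname;
}

function sameSite(a, b) {
  return getRegistrableDomain(a) === getRegistrableDomain(b);
}
```

With this, assets.whateverwebsite.com counts as first-party and foo.co.uk vs bar.co.uk correctly don't, but facebook.com vs static.xx.fbcdn.net still come out as different sites, which is exactly the breakage described above.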

In the same vein: I've never seen a video site that tried to defeat adblockers by inserting ads directly into the video stream. It seems like an obvious tactic; it could even use JS to prevent fast-forwarding the ads, and that would be a difficult thing for adblockers to detect and remove.

The main problem I can see is that you would have to choose between CDN caching and targeted ads. If the video was uniquely generated for each user with new targeted ads, it couldn't be cached.

I know that some podcasts use a technique like this, modifying the audio stream at download time to insert recent ads, even if the podcast being downloaded is old.

Twitch recently started doing something like that, albeit by manipulating the HLS playlist rather than the video files themselves:


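Mechanically, playlist-level insertion of that kind is simple. A hedged sketch (segment URIs and durations are made up) of splicing pre-roll ad segments into an HLS media playlist ahead of the content:

```javascript
// Build an HLS media playlist with ad segments spliced in before the
// content, separated by a discontinuity tag so players switch cleanly.
function spliceAd(contentSegments, adSegments, targetDuration) {
  const lines = [
    "#EXTM3U",
    "#EXT-X-VERSION:3",
    `#EXT-X-TARGETDURATION:${targetDuration}`,
    "#EXT-X-MEDIA-SEQUENCE:0",
  ];
  for (const seg of adSegments) {
    lines.push(`#EXTINF:${seg.duration},`, seg.uri);
  }
  // Discontinuity marks the codec/timestamp break between ad and content.
  lines.push("#EXT-X-DISCONTINUITY");
  for (const seg of contentSegments) {
    lines.push(`#EXTINF:${seg.duration},`, seg.uri);
  }
  lines.push("#EXT-X-ENDLIST");
  return lines.join("\n");
}
```

Because only the playlist text changes, the underlying .ts segments stay CDN-cacheable while the ad selection can still vary per viewer.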
If I'm not mistaken, the reason the YouTube app ads can't be blocked even with something like Pi-hole, is that they do exactly what you're talking about.

I have never seen an ad on YouTube that was not put there by the channel owner: "this video is sponsored by PushStuff".

Do you use the YouTube app or the website?

Good question! I forgot there is an app.

I suppose it's much more a hassle to get rid of the ads from the app.

That does exist, it’s called server-side ad injection (more like cancer injection).

This is coming. There's already a couple of startups doing this. A lot of them operate in "stealth" so they don't attract the attention of ad blockers.

There's no API needed, all they require is for the website to set up a CNAME record. They're usually random and rotated often.

If you're familiar with the current arms race in ad fraud, there's a coming arms race for ad blocking.

Indeed. This is one of the reasons why I recently realized that I can actually see the day coming when I won't be using the web anymore. That was a weird realization that only a couple of years ago would have been unthinkable for me.

Right--it could work at the DNS level. Yuck.

> This seems like it would obviate [...] third-party cookie blockers.

Wouldn't this do the exact opposite? Ads that did this wouldn't be able to track clients across different domains, even for clients that allow third-party cookies.

But I think the main issue is the jump in technical complexity, which is more than "a bit"—many, many websites are built by non-coders who can copy and paste a bit of HTML & CSS, perhaps using a CMS or blogging tool like WordPress or Movable Type, and who can paste in a script tag, but no way could they write even the 3 lines of PHP it would take to reverse proxy for an ad script. Maybe if this approach became popular enough, there would be a standard WordPress plugin or something, but what adtech company would want to be the first to require it?

I block third-party Javascript quite aggressively (for fear of malware). I am much less strict for first-party JS. If this becomes the norm I'd block first-party just like third-party.

As well, obviously the owner of the website becomes directly responsible for malware served through an ad network.

Finally, advertisers prefer third-party to fight ad fraud (by the site owner).

Hmm, good point. Though it's not like we're headed in a particularly appealing direction as it is...

That is what Yandex does, but unofficially.

It makes it WAY too easy to generate fake clicks/views.

There's plausible deniability and then there's sheer barefaced denial. When will the Better Ads Standards protect the average user from having outright malware, browser-jacking, click-jacking etc. etc. delivered via ad networks, potentially including Google's or Facebook's? Do that and 99% of web users will most likely stop caring about any other sort of ad blocking. However, as things stand today, switching to Brave, Firefox or any other browser that commits to making all malicious ads blockable is the only reasonable course of action.

Brave is based on Chromium, so it's vulnerable to Google's future decisions about what browsers should be able to do.

This is a very good article, but it misses a critical fact: Google plans on making the old API available to paid enterprise users, which is the final nail in the coffin.

I hope that Qt will add a new Gecko backend for QtWebEngine and move away from Chromium/Blink.

One way of looking at this is Google admitting there's a need for technology that intercepts and tweaks requests, but deciding that it's only an enterprise-grade need (intranets, firewalls, block lists, etc.).

Consumers however shouldn't have access to those capabilities. The optimistic, parental, or (according to the article) gaslighting reason is consumers will shoot themselves in the foot by installing a spyware toolbar.

The follow-the-money reason is that 80% of Google's revenue is unaffected by intranets, firewalls, block lists, etc., but IS affected by the consumer use cases of this tech.

It has nothing to do with paid tiers. Any enterprise Chrome user can use the old settings. And AFAIK, enterprise Chrome is still a free download.

Where do I go to get this paid enterprise Chrome?

I don't know, I'm not convinced.

I very much agree that Google's conflict of interest regarding ads is problematic, and I'd absolutely trust them to look for ways to get rid of ad blockers, but the current issue seems like an unnecessarily roundabout way to achieve that.

The Chrome team seems to have put a lot of engineering effort into the DNR language, and even extended the language to respond to some of the criticism (though still far less than the API would need to be usable). It seems odd to me that they would pour so many resources into implementing something that is not really expected to be used.
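For readers who haven't seen it: the DNR (declarativeNetRequest) model replaces imperative `webRequest` listeners with static JSON rules that the browser matches internally, so the extension never sees the request itself. A blocking rule looks roughly like this (the domain here is purely illustrative):

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image", "xmlhttprequest"]
  }
}
```

The criticism in the thread is less about this format per se than about its rule-count limits and the expressiveness lost relative to arbitrary code inspecting each request.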

I feel that if they really wanted to get rid of ad blockers this instant, they could just say so openly. Since Chrome has its own built-in filters now, they could spin it as additional blockers no longer being needed.

Instead they're just slightly tipping the scales in favour of site developers. Ad-blockers wouldn't be made impossible with this change, they'd just be made less accurate and reliable. This doesn't make a lot of sense to me.

> Instead they're just slightly tipping the scales in favour of site developers. Ad-blockers wouldn't be made impossible with this change, they'd just be made less accurate and reliable. This doesn't make a lot of sense to me.

It's the camel's nose under the tent (or the boiling frog). Abolishing all ad-blocking extensions would cause many folks to migrate away from Chrome, so they probably see a more effective strategy in incrementally neutering these extensions at a rate that reduces peak outrage.

"What is something we can take away that ad blocking tech relies on?" Today it's an API change, in a few months it'll be some other measure.

Yeah, you could imagine some strategy where they modify the API to make adblockers appear progressively more annoying and unreliable to end users, so that eventually, the public perception of them changes.

However, this seems a bit like a Xanatos Gambit to me. There are a lot of things that could make this plan fail, starting with the current backlash from the ad-blocker devs themselves: they could simply choose to boycott Chrome instead of damaging their reputation.

Even if they don't, so far you still have competition to Chrome so differences can be observed: If adblockers perform significantly worse on Chrome than they do on Firefox, it's apparent even for non-technical users that the browser is somehow a factor in this.

I agree though, just making a change to web store policy (e.g. disallowing all adblockers) probably wouldn't have cut it - that would risk triggering a Streisand effect where everyone tries to smuggle adblockers back into the store using all kinds of rule-bending. So a technical restriction would probably be needed from their point of view.

To be honest (warning, tinfoil hats ahead), I wouldn't be surprised if a long-term goal for Google is to abolish browser extensions altogether. Philosophy-wise, extensions seem completely at odds with Google's vision of how the web should behave and how user experiences are designed. Most of their work in the space seems to be about restricting extensions, too, while work that extends capabilities (e.g. support for inspecting WebSocket connections, or bringing extensions to mobile Chrome) is postponed.

> As far as I know Apple’s declarative API doesn’t have the same low rules limits as Chrome’s planned one either.

Safari content blockers have a limit of 50,000 rules.

For comparison, activating all of uBlock Origin's registered filter sets takes 281,078 network filters + 210,016 cosmetic filters.

Yeah, I'm writing a conversion tool and I'm having quite a hard time getting EasyList to fit in the 50,000 limit. I have some deduplication rules in place which take it quite close, and with cleverer algorithms it might be possible to fit the entire thing, but it's pretty clear that this limit is quite restrictive (plus, there's no guarantee that it will even work on iOS, which will kill the ruleset compiler if it uses too many resources).
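One deduplication pass such a converter might make can be sketched as follows: drop exact-duplicate triggers, then merge rules that share the same action by joining their `url-filter` regexes with alternation, shrinking the count toward the 50,000 cap. This is a hedged toy, not the tool described above; a real converter also has to handle `if-domain` lists, regex compilation cost, and the other trigger fields.

```python
# Toy deduplication for Safari-style content-blocker rules. Assumes the
# rules differ only in "url-filter" (real rules have more trigger fields).
import json

def dedupe(rules, batch=10):
    by_action = {}
    for rule in rules:
        # Group filters by their (serialized) action, skipping duplicates.
        key = json.dumps(rule["action"], sort_keys=True)
        filters = by_action.setdefault(key, [])
        f = rule["trigger"]["url-filter"]
        if f not in filters:
            filters.append(f)
    merged = []
    for key, filters in by_action.items():
        action = json.loads(key)
        # Join filters into alternation groups, capped so no single regex
        # grows unboundedly (Safari limits per-rule regex complexity too).
        for i in range(0, len(filters), batch):
            chunk = filters[i:i + batch]
            merged.append({
                "trigger": {"url-filter": "(" + "|".join(chunk) + ")"},
                "action": action,
            })
    return merged
```

With `batch=10`, ten single-filter rules collapse into one, which is the kind of constant-factor win that gets EasyList "quite close" but not comfortably under the limit.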

It seems it will be better for me to stay with Firefox. Though I must admit it sometimes occurs to me that maybe in the future I'll just use curl more?

IME curl and even NoScript are becoming nigh impossible to use because of all the JS.

One can only hope that Firefox steps up its game. Maybe eventually the new Rust engine can push Firefox forward on performance and win back market share from Chrome's dominance.

How recent is your "firefox is slow" perception? I've been using it as my daily driver web browser since... well, for a long time, and it's come a long way.

Admittedly I haven't run Chrome myself much recently but looking over other peoples' shoulders I don't see much in terms of a perceptible difference in speed.

I don't know about OP, but when I say I want firefox to step up its game I'm referring to things more like this:


> I doubt firefox will ever focus on security. The security mechanisms we are talking about require breaking compatibility or performance. This isn't the stuff one rearranges deck chairs for.

This is just wrong. For example, major changes have been made to the internal architecture over the last couple of years to support process-per-site "site isolation". See https://bug1523072.bmoattachments.org/attachment.cgi?id=9061... for the new architecture.

It's true that Firefox is still behind Chrome in this area, but there is a lot of effort going into it.

If you load up youtube.com, the site loads faster on Chrome than Firefox.

On mobile, Chrome's scrolling seems smoother than Firefox's (the test I use: if I rapidly swipe up and down to scroll the page, does my finger covering the text change position?).

Slow YouTube load is entirely YouTube/Google's fault

>YouTube page load is 5x slower in Firefox and Edge than in Chrome because YouTube's Polymer redesign relies on the deprecated Shadow DOM v0 API only implemented in Chrome.


YouTube slowness on non-Chrome browsers is a core feature. Whatever Firefox does to address it, Google will find a way to make Firefox slower on their own web sites.

It's a typical case of the abuse of monopoly, where they leverage their dominant position in one market (online video streaming) to destroy competition in another market (web browsers).

The "Youtube Classic" Firefox extension saved the site for me. You get an instant load on Firefox after you remove the intentional sabotage.

This is probably too fiddly to really be useful, but you can tweak a lot of aspects of Firefox scrolling via about:config. https://forum.xda-developers.com/showpost.php?p=73438265&pos...

Interesting! Will try it out.

If I load up youtube with an adblocker it shows me the video I clicked, if I don't have an adblocker it shows me a 3 minute ad.

Loading YouTube, or any other site, on Chrome won't be faster than Firefox once you can no longer block ads and trackers.

There's almost certainly just something broken about that person's system that interacts poorly with Firefox.

I feel like no amount of effort on a superior engine will be more effective than Google targeting their own browser with their own apps. The experience of using basic Google services like search on Firefox mobile is much worse. You get access to fewer features, like looking for shopping results nearby or setting your location for those results, and I'm pretty sure it has nothing to do with the technical capabilities of the browser.

Send a note to your local antitrust authority/consumer protection/etc.

I've already checked my options and will hopefully be submitting one or more notes now that I have some more time on my hands.

If we all do the same it will be noticed. Also many agencies are already looking at Google and will probably welcome it.

(Note: I don't hate Google, I just want them to get a lesson or two like Microsoft got. They still earn loads of money but now without trashing my chosen OS.)

If Firefox really wanted to compete, they could just move web search features more thoroughly into native browser chrome, interacting with the search engine as a raw API provider (i.e. render the DOM in a floating document, then scrape the results out of it and display them in a special “search” URI-namespaced page—the same kind of page Chrome uses to display its native Google search bar!) This would basically commoditize the search-engine space for their users—all search engines would look the same and work the same, just with different browser-exposed features enabled or disabled per provider.

So, for example, if Google doesn’t want to expose the ability to set a location, but does expose an API for doing so, Firefox could just provide a native-chrome option on their native-chrome search results display to set that parameter and search again.

And, of course, while they’re at it, they could prune out the “sponsored results”, as well.

Of course, they’d need to be prepared to lose all funding from Google at that point...

Not to mention get the APIs changed around to break Firefox...

Ad blocking is performance, and much more impactful than some % shaved off in rendering benchmarks.

"I think it's fairly safe to say at this point that Google is institutionally incapable of imagining a world without ads, so they're not capable of entertaining solutions that would seriously interfere with the ad ecosystem."

I think it is safe to say the same about Mozilla. They, too, rely (indirectly) on the ad ecosystem to survive.

An HTTP client that does not deliver ads by default will not be produced by either company.

Trying to escape from online ads by using these corporate-controlled browsers is like trying to escape from a wet paper bag without being allowed to damage the paper.

It amazes me that Google isn't facing an antitrust lawsuit over the Chrome ad-blocker situation. Of course Chrome is going to block everything except Google ads, to destroy the competition.

Very good write up on the Chrome ad blocker issue, and worth reading in its entirety.

Based on the comments in other posts on this topic, my view is a little different. I think Google is making decisions that look good in short term models but will be very damaging to them in the long term. In this case, it is simple. If Chrome does a poor job blocking ads they are going to become known as the junk browser.

Recall a behavior Google added to Chrome. When you open Chrome, you are presented with what looks like a default Google.com search page. But when you click the search field and start typing, instead of appearing there, your query goes into the URL navigation bar.

Google could have trained their users to be certain they went to Google.com to make a search. They didn't. Today Google pays billions of dollars a year to Apple. Google still doesn't seem to have learned that lesson (see AMP and weird ideas on divorcing the URL from the site the user is on.)

Not only is Google's revenue growth under threat, but their existing revenue base may very well be too. After GDPR, the EU copyright directive, and changes Apple is making to Safari who knows what 2020 revenue will look like. In my mind, this explains a lot of the sloppy decisions Google's management has been making.

If there was no Firefox or Apple, then panic. For now, the panic should lay within Google.

Don't forget that Chrome reached its dominant market share in large part because every time you went to search for something you were presented with an ad for Chrome. Alternative search engines at this point are hard to come by, AFAIK it's basically just Google and Bing in much of the world, plus whatever has approval in China - most other search is just wrappers around those 2.

In that environment it may be hard to get the "Chrome as a junk browser" idea widespread. I still deal with people daily whose goto browser is IE because it's what they've always used.

This change will likely come with a push for better ads. Frequently these days you see whole articles that are actually ads.

The fact they must stoop this low now speaks to the difficulties they're facing in achieving revenue growth. Most web users don't use an ad blocker.

Kind of hard to keep perpetual X% YOY growth when you're a Google-sized company. It is impressive for how many years they pulled it off, but those days are probably over.

As an oblivious Firefox user on Debian and Android, am I missing something by not using Chrome?

As a web dev: Chrome has better PWA support in the developer console. Lighthouse audits are also quite useful. Firefox's developer console is pretty good (and similar to Chrome's), but at the moment Chrome's is a bit better.

I mostly use Firefox, both for browsing and development, but occasionally fire up Chrome, especially for the features mentioned above.

Some features are missing from Google Apps in Firefox. I use Firefox as well, but sometimes when I want to use Google Docs, I just switch back to Chrome.

What features are missing? I use Google Docs in firefox and haven't noticed anything wrong.

>What features are missing? I use Google Docs in firefox and haven't noticed anything wrong.

Tracking and cookie slurping.

I've noticed that if you turn on privacy protections in Firefox, all Google sites start misbehaving. And forget about trying to get through a reCAPTCHA without spending minutes clicking on crosswalks.

Is it time for piracy of ostensibly free content? Just so that we can protect ourselves, and perhaps our children, from large-scale tracking? Any adverts injected into the content stream would either become static or be removed by the release group. Imagine pirated text content of blogs, just to get away from the new web.

The alternative is Tor, I2P, or other anonymising web services, simply to make the tracking model unviable. What was once a mechanism for people in oppressed countries (and criminals) could actually become the web of choice to stay out of the tracking traps.

Edit: Just wanted to add, this is the way Stallman gets websites delivered to him, via email and an external scraping system.

Google did a similar hypocritical thing with GDPR in Europe. As a Google/DoubleClick customer we lost access to the raw data logs on where our ads were served, Google cited "privacy" and "GDPR" as the reasons. Then within a month Google created a product called "Ads Data Hub" that has the exact same data in it, for paying Google customers, in Google Cloud.

So for privacy reasons we cannot process that data in any Amazon (or other competing) product, but surprise surprise, it is available in a paid Google product.

Google has zero interest in privacy or even performance. All they care about is abusing their monopoly to push others out of the market and make more money from their own ads. The "don't be evil" thing is long gone; it's time they get the antitrust lawsuit they deserve.

Just moved all my bookmarks from Chrome to Firefox and don't miss a thing. I hope Firefox positions the 'import bookmarks' feature prominently.

Same here, switching is a very small hassle. I hope Chrome's web metrics are plummeting in reaction to this.

I wish somebody would start a business with 1000 Pi-hole servers on a CDN cloud and charge $10/year to use them. But I don't have a clue how you match DNS lookups to paying customers. Probably have to just punt and make it a nonprofit funded by a foundation like Wikipedia. If anybody wants to build this, I'll help.
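The core of what such a hosted Pi-hole would do per DNS query is simple, which is part of the appeal. A rough sketch of the matching logic (the hard parts the comment raises, like attributing lookups to paying customers, are exactly what this sketch ignores): a queried name is blocked if it, or any parent domain of it, is on the blocklist, and a blocked query would be answered with NXDOMAIN or 0.0.0.0.

```python
# Pi-hole-style blocklist matching: block a name if any suffix of its
# labels appears on the list. Entries below are illustrative only.
BLOCKLIST = {"doubleclick.net", "ads.example.com"}

def is_blocked(qname, blocklist=BLOCKLIST):
    labels = qname.lower().rstrip(".").split(".")
    # Check the name itself and every parent domain against the list,
    # so entries cover all of their subdomains automatically.
    return any(".".join(labels[i:]) in blocklist for i in range(len(labels)))
```

Note this is DNS-level blocking only; as the replies point out, it can't do the finer-grained in-page cleanup that uBlock Origin does.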

Domain blocking is really useful, but... is it possible to do more? I mean that uBlock Origin and other ad blockers can do more fine-grained work, cleaning pages much further, like cookie popups or similar useless crap.

I had thought of such a service some time ago, but would it be legal? If I do it on my private intranet, it shouldn't be a problem. But a service that removes ads from known sites and serves clean versions through a proxy could be accused of plagiarism or something like that.

Maybe that's because they only do the domain-blocking part.

It probably has a lot of repercussions for safe harbor (I guess you're not a transitory network anymore?), but you should ask a lawyer.

> A service provider shall not be liable for monetary relief, or, except as provided in subsection (j), for injunctive or other equitable relief, for infringement of copyright by reason of the provider’s transmitting, routing, or providing connections for, material through a system or network controlled or operated by or for the service provider, or by reason of the intermediate and transient storage of that material in the course of such transmitting, routing, or providing connections, if [...] (2) the transmission, routing, provision of connections, or storage is carried out through an automatic technical process without selection of the material by the service provider; [...] (5) the material is transmitted through the system or network without modification of its content.

I'm not sure it's even possible to do this on a proxy any more since so many sites do client-side rendering now. You'd have to run the Javascript, see what crap it produces, figure out what Javascript generated the crap, and remove that Javascript. Sounds dangerously close to solving the halting problem. (I presume ublock can do these things because it's sitting at the rendering engine. Proxies don't have that luxury.)

I forget how different my Internet experience is from everybody else, without Facebook, Twitter and similar sites that you log in to use.

Wow! Great minds...

This is a very nice write-up. Would be nice if the forces behind the Chrome change would comment.

This is truly one of the best long cons in tech industry history. Invest 10 years into creating a browser and get the whole industry and its developers to love it, push it, and develop solely for it, so that you can reap the benefits now. They learned from the best: Embrace, Extend, Extinguish. Even as recently as last year, saying that Google would eventually abuse Chrome's market dominance would get you heavy downvotes here.

More likely Google's bean counters went to the Chrome team and demanded they start generating money, or at least stop losing other divisions' money.

The good news is that Firefox, and soon Edge, are on all (important) platforms and support uBlock Origin.

How long will Edge support uBlock if their upstream makes architectural changes to prevent it and its kind?

Or Google is actually scared. We have been hearing for years how ad blocking is not an issue to be addressed, yet they are desperate to put the ad-blocking genie back into the bottle.

If that was their intention it was poorly achieved given that most users can switch to alternatives such as Firefox or Safari with close to no downside. Chrome has very close to zero lock-in.

I'm just an outsider, but I do think Google's intentions with Chrome were of course self-serving, yet overlapping with user interests: they knew they were dependent on the web, so they worked to make a browser that made the web experience better, faster, stronger, etc., blocking the most aggressive exploitations while trying to make the outcome one where their business could survive and still grow.

Most knew there would come a day when they would start to turn the screws, though (in the same way that Microsoft apologists would talk about that company only using patents defensively...until they started suing everyone), and that day has come. Ah well, to Firefox we all go.
