> According to the research team, in most of the cases they analyzed, the Chrome extensions disabled CSP and other security headers “to introduce additional seemingly benign functionalities on the visited webpage,” and didn’t look to be malicious in nature.
Yes, because Chrome's architecture forces extensions to do this, ironically in the name of security.
Normal extension code can access a page's DOM but it cannot interact with any scripts on the page.
If you want your extension to interact with the page's JS, the only way is to play man-in-the-middle and inject your own JS as if it were loaded by the page.
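Roughly, the trick looks like this (helper name and logged message are made up, just a sketch of the general pattern):

```javascript
// Sketch of the injection the comment describes: a content script writes its
// own <script> tag into the page so the code runs in the page's JS context.
// A strict CSP (script-src without 'unsafe-inline') can block exactly this.
function buildPageScript(code) {
  return `(function () { "use strict"; ${code} })();`;
}

// Only wire up the DOM part when a document actually exists.
if (typeof document !== 'undefined') {
  const el = document.createElement('script');
  el.textContent = buildPageScript('console.log("hello from page context")');
  document.documentElement.appendChild(el);
  el.remove(); // the script has already run; the tag itself can go
}
```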
> Yes, because Chrome's architecture forces extensions to do this, ironically in the name of security.
To be fair, I think Firefox has this restriction too? I might be wrong, but I could have sworn I ran into some script-injection extensions that weren't working on some sites because of this problem.
It's been a while since I checked, I eventually gave up on the extensions because I wanted to thin out my list and I wasn't comfortable with them overriding CORS headers -- so things may have changed, or maybe I misunderstood the situation even back then.
Maybe? I vaguely remember webextensions not needing to do this. But I can't say that with certainty, because it might be that most websites didn't enable CORS so I never noticed, or I could just have the timeline mixed up in my head.
I don't think anything about webextensions requires this problem to exist. So if it is a problem with webextensions, it's a problem we've "chosen" to have. And browsers could fix this without drastically overhauling anything with extensions.
But I cannot stress enough how far out of my depth I am commenting on this; my full knowledge is basically a couple of days' worth of research a year or two ago and some interaction on a bug report.
Any proper implementation of the CSP eval/script constraints would block extension script injection, regardless of whether WebExtensions are used. You'd have to run code outside of the page context, which automatically poses issues since the page is probably in another process.
What are some use cases for interacting with other scripts, instead of interacting with the DOM?
I suppose that if you want to disable some behavior in the website (inserting your own extension's behavior before letting the website's own execute) you have to do that... but when would you want to?
When you want to hook into page functionality and extend it. For example, extensions for YouTube will routinely access the JS objects of the page and listen to player events in order to introduce new features. This is only possible if you run your code in the context of the page.
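As an illustration of that pattern (the `<video>` selector and event are standard DOM, standing in for YouTube's much richer player objects):

```javascript
// Page-context sketch: once injected into the page's own JS world, the
// extension can observe the site's player directly. The selector and the
// "feature" here are illustrative, not YouTube's actual internal API.
function describeRate(rate) {
  return `playback rate: ${rate}x`;
}

if (typeof document !== 'undefined') {
  const player = document.querySelector('video');
  if (player) {
    player.addEventListener('ratechange', () => {
      console.log(describeRate(player.playbackRate));
    });
  }
}
```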
You can add new scripts to the page though. You don't have to interfere with the existing scripts / change them. That doesn't break CSP if you do it the right way.
Interacting with the page as if your code would have been loaded by the page is only possible if you inject your code in the page context, typically using a script tag in Chrome, at which point you are governed by the CSP restrictions of the page.
You either relax those restrictions in some way, or you can't run your code in the JS context of the page.
It's possible to run code in a content script regardless of how restrictive the page's CSP headers are, but you are running in an isolated environment that can only access the page DOM.
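If I remember right, newer Chromium releases (Chrome 111+) also let a Manifest V3 extension declare a content script that runs directly in the page's JS world, injected by the browser rather than via a script tag, so the page's CSP doesn't get a say. A sketch of the manifest entry (match pattern and file name hypothetical):

```json
{
  "content_scripts": [
    {
      "matches": ["https://example.com/*"],
      "js": ["page-world.js"],
      "world": "MAIN"
    }
  ]
}
```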
Can you provide an example of a website that makes it impossible to insert JS from an extension? I'd like to tinker with it. I wrote a simple extension that injected JS into YouTube and it worked just fine. I thought YouTube was secure enough?
YouTube does not (currently) send CSP headers from what I can see in the dev tools. Hacker News does, but it specifies "unsafe-inline" which negates most of the protection. Not aware of any sites with more locked down policies off the top of my head; it's not exactly an advertised feature of most sites.
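For reference, the difference looks roughly like this (domain made up): the first policy blocks injected inline scripts, while the second waves them through.

```
Content-Security-Policy: script-src 'self' https://static.example.com
Content-Security-Policy: script-src 'self' 'unsafe-inline'
```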
Slack. I do so with my extension[0]. At one point they rewrote it and added CSP.
In order to inject my code before theirs, I had to come up with different approaches on FF and Chrome... and since it was starting to become a cat and mouse game, I just took the extension private.
After that, they didn't escalate their game. I'm thankful for that, so that I can keep enjoying it without spending lots of time figuring out how to mess with it.
Adblockers often inject anti-anti-adblocking scripts. AFAIK, these are often drop-in replacements and so need to communicate with other scripts in the same way as the original.
I'm one of the chrome extension authors that does this.
The reason I created the extension is that our company provides a service where loading your site in our iframe is necessary for us to provide our service.
Lots of sites use security headers to block this, so we created a chrome extension to strip the headers out.
This would obviously be horrible if we recklessly stripped the headers on ALL sites. But what we did to minimize the security risk is to register a dedicated domain where ONLY that iframe loading your site occurs, and that dedicated domain is the ONLY domain where the chrome extension strips security headers.
In other words, our extension doesn't do anything unless the user is actively logged into our service using our tool, which requires their site to load in an iframe.
The risk of doing this is minimal if you take the correct precautions and only strip the headers when absolutely necessary (and never strip headers when the parent frame is a domain outside of your control).
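A sketch of that filtering logic (domain name and header set are illustrative; in a real extension this would sit inside a webRequest or declarativeNetRequest handler):

```javascript
// Strip frame-blocking headers ONLY for requests initiated from our own
// dedicated embedding domain; every other site is left untouched.
const DEDICATED_ORIGIN = 'https://embed.example-service.com'; // placeholder

const BLOCKED = ['content-security-policy', 'x-frame-options'];

function stripSecurityHeaders(headers, initiatorOrigin) {
  if (initiatorOrigin !== DEDICATED_ORIGIN) return headers; // leave other sites alone
  return headers.filter((h) => !BLOCKED.includes(h.name.toLowerCase()));
}
```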
I think that the extension architecture should make it easy to declaratively state which URLs an extension will call back to, and then make it up to the browser to transparently update CSP headers to allow those calls.
As someone who runs a website with a strict CSP, when I first went live I was panicked by the fact that we got TONS of reports about CSP violations to our report URL. It was only then when I realized that our CSP was blocking a ton of extensions that people had installed. Honestly, I think at that point I would have preferred that the extension update our CSP, it's not like the extension couldn't already see everything on our pages.
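As a sketch of what that transparent rewrite might look like (pure string handling; the helper name and origin are made up):

```javascript
// Add an allowed origin to a CSP's connect-src so a declared extension
// endpoint wouldn't trigger violation reports. Purely illustrative.
function allowConnectSrc(csp, origin) {
  const directives = csp.split(';').map((d) => d.trim()).filter(Boolean);
  let found = false;
  const rewritten = directives.map((d) => {
    if (d.startsWith('connect-src')) {
      found = true;
      return `${d} ${origin}`;
    }
    return d;
  });
  if (!found) rewritten.push(`connect-src ${origin}`);
  return rewritten.join('; ');
}
```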
> I think that the extension architecture should make it easy to declaratively state which URLs an extension will call back to, and then make it up to the browser to transparently update CSP headers to allow those calls.
Adblockers need to play whack-a-mole with this URL list to provide mocks that pretend to be ads to anti-adblock mechanisms, updating that declaration with extension updates every few hours will only lead to frustration.
> I would much rather filter out extraneous CSP violations than have some exploited extension exfiltrate sensitive data
I am not familiar with the architectural framework of browser extensions, but would this already be possible, that is for an extension to read the contents of the page (which it obviously already has access to) but then send that information using a connection that doesn't operate in the same security context as the page that was read?
I mean, browser extension CSP violations are triggered because the extension just basically injects a script into the page. Is it possible for the extension to just filter the page context and make a remote request by some other means?
> Is it possible for the extension to just filter the page context and make a remote request by some other means?
Yes it is. I think what you describe is in fact how extensions that need to communicate with a remote service are expected to work, and when implemented that way, the CSP rules (rightly) don't apply.
From what I understood of the article, this alternative way of sending data isn't what they mean by extensions tampering with security headers. The article doesn't go into detail but it would be interesting to see if the header tampering is necessary in all these cases, or if a different approach could work without triggering CSP.
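The approach being described, roughly: the content script asks the extension's background context to do the network call, which is governed by the extension's own permissions rather than the page's CSP. A sketch (the message shape is made up):

```javascript
// Background side: performs the request outside the page, so the page's
// CSP connect-src never sees it. Message shape is hypothetical.
function handleMessage(msg) {
  if (!msg || msg.type !== 'fetch-json') return Promise.resolve(null);
  return fetch(msg.url).then((r) => r.json());
}

// Wire up only when actually running inside an extension.
if (typeof chrome !== 'undefined' && chrome.runtime) {
  chrome.runtime.onMessage.addListener((msg, _sender, sendResponse) => {
    handleMessage(msg).then(sendResponse);
    return true; // keep the message channel open for the async reply
  });
}
```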
Can you specify which policies are a nightmare to develop on? As a security-minded UI eng, I warmly welcome any new security policies that enable me to harden my applications and prevent exfiltration of sensitive data.
The hard part isn't breaking the security, the hard part is not breaking the security. Web extensions are often only provided with nuclear options like overriding CORS or requesting full-page access. There's not really any new policy you can take advantage of here, it's just developers complaining that they have to resort to wider-impact, potentially dangerous hacks in order to perform basic actions.
In my mind, it's similar to when people hand-wring about extensions requesting access on all URLs. The security model for extensions is not fine-grained enough to enable better behavior; it railroads extensions into over-requesting access to everything. I consider this to be a serious problem, but... I don't know, it isn't talked about that much. To be fair, the web makes granular permissions difficult, and also to be fair, Manifest V3 does try to make things more granular, at least in some ways. It's easier now to make an extension that only operates on some pages. But building limited extensions that don't have a lot of power is still somewhat difficult.
But regardless, I don't believe any of this is actionable advice you can use for your own pages. Which is good, because as a website author, you should not have the ability to override the user's decision about how extensions interact with your page; I would consider that to be an anti-web, anti-user sentiment in most cases. Websites don't get to decide what code can be run in an extension.
Yeah, CSP breaking bookmarklets and extensions is arguably not to spec[1]. Firefox finally partially fixed their non-conforming implementation recently[2], but I think Chromium still has an open bug here.
[1]: Policy enforced on a resource SHOULD NOT interfere with the operation of user-agent features like addons, extensions, or bookmarklets. These kinds of features generally advance the user’s priority over page authors, as espoused in [HTML-DESIGN].
Google could make tooling for extension authors which identifies permissions which have not been used and suggests removing them.
Then the process of developing an extension could be: (1) request all permissions; (2) write the code for the extension; (3) run the extension in a test session and exercise all functionality; (4) let the tool suggest the correct set of required permissions, which can be very granular.
This means the developers don't need to know the precise details of each of hundreds of permissions.
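For example, a test run that only ever touched one site and used local storage might let the tool shrink a blanket `"<all_urls>"` grant down to something like (URL hypothetical):

```json
{
  "host_permissions": ["https://example.com/*"],
  "permissions": ["storage"]
}
```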
Because even though a user can grant a Chrome extension access to a particular site, when that site has certain CSP policies enabled there's no way for the Chrome extension to interact with that page's JS. CSP is enforced on the extension's JS, even though the user wants to permit it.
Chrome could make things much easier for developers, and arguably safer, by offering some straightforward way for an extension to interact with a page that doesn't involve messing with CSP headers.
Because you're maintaining a legacy application that in development can only operate over HTTP, but if you override network resources in an HTTPS environment to point them at localhost, the browser will (understandably) throw a fit.
Something I would like is the ability to disable extensions on specific domains. I don't need any of my extensions on my webmail, GitHub, my bank, or local intranet dashboards.
I only install a few extensions that I need and trust but it would be very nice to be able to limit potential damages.
I ran into something along these lines just the other day. I have a private extension that replaces some embedded videos with a custom player. Last week, the server API I'd been pulling video metadata from went down. I found another API endpoint and modified the extension to use it, but the new URL had CSP enabled and threw CORS errors when used in-browser. Unlike Chrome, this browser's extensions can't modify security headers, so I wrote a little shim utility to perform the request on a local server and pointed the extension at that instead.
In other news, thousands of Chrome extensions just inject ads from sketchy ad networks or transmit browsing and other monetizable data by cross-site referencing specific 1-pixel transparent images.
I was developing an extension that automatically saves all loaded images without repeating requests, by loading them into a canvas and extracting the binary data. I was forced to modify CORS headers because otherwise the canvas is tainted by cross-origin images. (I consider CORS-related headers to be security headers as well.)
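For anyone who hasn't hit this: drawing a cross-origin image that wasn't approved via CORS "taints" the canvas, and reading it back then throws. A sketch (the image URL is hypothetical):

```javascript
// Same-origin check of the kind the browser applies before tainting.
function isCrossOrigin(pageOrigin, url) {
  return new URL(url, pageOrigin).origin !== pageOrigin;
}

if (typeof document !== 'undefined') {
  const img = new Image();
  img.crossOrigin = 'anonymous'; // only helps if the server sends CORS headers
  img.onload = () => {
    const canvas = document.createElement('canvas');
    canvas.width = img.naturalWidth;
    canvas.height = img.naturalHeight;
    canvas.getContext('2d').drawImage(img, 0, 0);
    canvas.toDataURL('image/png'); // throws SecurityError if the canvas is tainted
  };
  img.src = 'https://cdn.example.com/photo.png'; // hypothetical cross-origin URL
}
```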
> The most commonly disabled security header was CSP
Good. It drives me crazy how many user agents modify our site somehow, immediately footgun themselves by honoring the CSP, and then send us a report about it.
Fully disabling the CSP is reckless overkill but they should indeed be modifying it.
No. Academics rarely list these things to avoid lawsuits. As these are intrusive browser extensions, I'd expect lots of lawsuit-trigger-happy VPN extensions in there.
That doesn’t sound realistic. First, sketchy companies probably aren’t in your country, so good luck with those lawsuits. Second, listing your name in a list of “people with black hair” when it’s easily verifiable that you have black hair isn’t grounds for a lawsuit.
As long as the research doesn’t imply that every extension is malicious, then it’s covered. While maybe it doesn’t belong in the research itself, someone could publish the list in a separate blog post or gist.
There are plenty of reasonably trustworthy extensions. The logic of "it is a cybersecurity and privacy product, therefore it is the only thing I trust" is kind of terrible. That is, if anything, a category of products that is full of strange software; see "antivirus" software.
GP very much did not say to trust security and privacy products in general. They said to trust two such products in particular, which are well-established and trusted. There are prominent examples within the past few months of "privacy" extensions embedding trackers and monetizing user data.
No, GP said to only trust these two extensions and no others. While they are entitled to their opinion, IMO that is ridiculous. There are many, many extensions which provide value and don't do shady things.
I remember a time not so long ago when people were worried Chrome would drop the extension system as a whole, and with it the ability of users to customise how they view web pages.
Now extensions are a "security threat". Here we go...
Yes, if an extension can override CSP directives to allow arbitrary connect-src exceptions, it effectively means that any data in any form on any page can now be sent to an attacker-owned URL.
The solution doesn't necessitate removing extensions, it just means potentially constraining the API surface of extensions in order to mitigate the attack surface.
Extensions don't need to modify headers to view anything in the DOM or to make requests with the credentials of the page they're running in. The only thing they can't do is directly interact with the JavaScript running on the page (and vice versa).
This is a misunderstanding. Extensions can read any data even with a restrictive CSP. A malicious extension can then exfiltrate it over a channel other than the currently opened tab; there are many.
Extension users do want extensions to interact with pages, often including cross-origin requests. That is what extensions are for, and they won't work with a restricted API surface.
Is this snark meant to imply that you think that people shouldn't acknowledge a dilemma when one exists, unless they also have a solution to offer for said dilemma?
Google has nigh-on monopoly power over the browser ecosystem. Also, extensions are by-and-large malware. Making people aware that extensions are mostly malware helps Google to consolidate its monopoly; but also (obviously) helps people to avoid malware.
We don't want Google to control the Internet, and we don't want browsers to be full of malware. That's a dilemma. It has no obvious solution. But it's better to be aware of the dilemma — and so realize there is no easy solution — than to ignorantly push for one side or the other to "win."
At this point I'm convinced that there has to be a really strong opposition, or the Internet becomes Google's jail. We are basically seeing the dystopian side of "zero crime policies" acted out online as they slowly but surely evolve the definition of "malware" and erode away at our freedoms.
Google could be educating users to make their own responsible decisions, but it's far more profitable to keep them uninformed and feed them propaganda to maintain the paranoia that lets it monetise and take control away from them. Being ultimately an ad company, it thrives on deception.
> and so realize there is no easy solution — than to ignorantly push for one side or the other to "win."
Twenty years ago, what Apple, Google, and Microsoft do today with their software would be widely considered adware/spyware. One side has already won the battle; we can't let it win the war.
"Give me liberty or give me death," as the famous saying goes.
What makes those extensions not "potentially malware"? I am assuming it is because you (or someone you trust) vetted them and decided they were trustworthy.
Why wouldn't this same idea apply to any other extension? You are right, everything is potential malware (the two extensions you mentioned included). There is nothing magical about the two extensions you mentioned.
uBlock Origin only exists because gorhill's original uBlock became evil after a critical management mistake: onboarding an untrustworthy party. Everything is a risk.
I'd argue it's getting counter-intuitive though, similar to the bystander effect. Yes it is popular but if everybody thinks everyone else is verifying the source, then no one is.
When is the last time you checked out the UBO Git?
Sure, but the comment I replied to seemed SO sure that those were the only two legitimate extensions. My point was that there might be others that have also built up that level of trust, and assuming you know the exact list of trustworthy extensions puts too much confidence in your knowledge of everything.
So much this. Every extension is an attack vector. They have post-decryption access to all of your web content and largely make the entire benefit of HTTPS moot if you have a bunch of them.
Every extension you install should be carefully vetted and only enabled as long as absolutely necessary. It's the top malware vector I see today.
You could say the same thing about almost anything: browsers, new root certificates, firewall software, antivirus software, anything you install on your own device, etc. Essentially, they're saying "anything you want to execute on your machine is a potential attack vector", and the only way to solve it is: "your browser is now your dumb-terminal into the mainframe (website), and we won't let you the device owner install anything that may potentially tamper with the data coming from our safe mainframe to your screen".
This is not where I want computing to go, and we're all just playing cat-and-mouse games feigning "security" when really "we" just don't trust the "plebs" to administer their own machines and keep themselves safe. This is arguably where Android is already, so one need only look there to see the future of computing and how the browser and "security" is being used as a trojan horse to get there.
If you go to the average non-technical user's Chrome extensions tab, they have 10-12 extensions. 7 of them are actively malicious.
This isn't like a "hey, freedom allows you the freedom to make mistakes" thing. This is a "good practice hasn't even been attempted here, and it's the top vector of all bad things ever" thing.
It's fine for browser extensions to exist, they serve a purpose. However, all existing extension stores should probably delist the entirety of their collection, solely re-accept extensions which an actual software engineer has reviewed, and make it much harder than one click to inadvertently install one when a website asks you to.
If you're looking at it that way, then we should apply even more scrutiny over native apps we install on our machine and not single out extensions.
Also, not all extensions run on every single site. And some extensions actually help in blocking malware, crypto-trackers, etc., so the potential upsides are also great.
A script loaded by a site needs to escape its sandbox. I assume an extension script is not in its own sandbox, so it can read your banking details when you visit a banking website. I imagine they can read key presses, so an extension could know your banking passwords.
The Chrome Web Store is functionally a malware distribution system. There's no real moderation to speak of, and the vast majority of desktop PC malware is distributed by it.
Decentraleyes is also very helpful on the privacy front. It will cache commonly included libraries like jQuery from various CDNs like Cloudflare, preventing them from tracking you across the internet. ClearURLs is another one that I install on a freshly-opted-out Firefox, to avoid those ubiquitous tracking parameters in various URLs.
I think the fact that people are listing different sets of extensions they trust is good evidence that you can't just say "this list is the only list of extensions you can trust."
Frankly this is why I prefer a "maximalist" browser like Vivaldi (and old-school Opera) that focuses on building features into the core browser rather than supporting extensions. There are benefits to having a feature be provided directly by the browser, not the least of which is the trust model. There's a trade-off of fewer features overall and less customization, but it's a trade-off I'm personally fine with.
Different people use different extensions. I have seen quite a few people avoid NoScript, not because they don't trust it, but because they considered manually selecting which scripts could run not worth their time.