Malware in the browser: how you might get hacked by a Chrome extension (kjaer.io)
127 points by kjaer on July 18, 2016 | 59 comments

If you like to tweak your Chrome install, check out:


It adds an extra level of permission: each extension that doesn't ask for a specific website is, by default, locked out of every website, and you have to enable it manually, either by clicking on it, by whitelisting the websites where it can run, or globally (example pic, sorry it's not in English: http://puu.sh/q5QFR/d6004da3bb.png).

Is whitelisting extensions possible?

I wish this was the default.

If it was, it would make Chrome all but unusable to a very large portion of its users. Once they realized Flash didn't work on any websites, they'd switch back to Internet Explorer or Firefox or Safari, rather than trying to figure out why it does that.

Flash is built-in to Chrome. You'll have to invent another straw man.

In fact, I recommend Chrome to unsophisticated users (hi Mom!) who need access to Flash content. I tell Mom never to install anything, ever. (She's managed to get malware on her Mac anyway, even though I keep reminding her never to click on anything that says "your computer needs updates", etc.)

But how would adblockers work if you have to whitelist a site for it to work?

I don't think this would be that big of an issue really, because Flash is built into the Google build of Chrome. It's the one "extension" that would be really easy to give a free pass on this policy. This would probably be more of a nuisance on Chromium builds.

Technically, Flash is a plugin, not an extension.

A big problem is also unmaintained extensions being bought by malicious players (or developers' accounts being hijacked), who then slap on an adware script and push a new version. That way, a previously good extension with a legitimate reason for "accessing your data on all websites" can silently become malicious.

Yes. Mozilla also allows that, which is really sleazy for Mozilla.

Google deletes them fairly quickly; at least, the couple of times I heard of these things or removed them from friends'/colleagues' computers, they were already gone from the Chrome Web Store. That Mozilla bug you linked in another comment reads like an awfully misguided application of policy.

One of these extensions I came across automatically closed about:extensions every time I opened it, to prevent uninstallation. I eventually went through the Chrome task manager, killing extensions one by one until I found the right one by trial and error. Very frustrating to debug.

I don't think it's sleazy. While Mozilla does review all add-ons, it's hard to restrict what an extension can do right now, since extensions can access internal APIs and there isn't really a permission system to speak of.

Additionally, JS is a dynamic language so it's difficult to provide adequate automated scanning for malicious intent. Even so, extension authors can be quite clever in how they hide malicious behavior, especially if there's financial reward involved.
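As a toy illustration of that difficulty (hypothetical code, not taken from any real extension): the string "eval" never appears as a literal below, yet code still gets evaluated at runtime, which is exactly the kind of thing a static scanner struggles to flag.

```javascript
// The word "eval" is assembled at runtime, so a naive scan of the
// source for dangerous identifiers finds nothing suspicious.
const g = globalThis;
const key = ['ev', 'al'].join('');      // builds the string "eval"
g[key]('globalThis.hiddenRan = true');  // indirect eval in global scope
console.log(globalThis.hiddenRan);      // prints true
```

Real malware goes much further (string encryption, staged downloads), but even this trivial indirection defeats keyword-based checks.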

WebExtensions (https://wiki.mozilla.org/WebExtensions) add a permission system and have a much smaller attack surface, which, combined with automated scanning and human review, should help to alleviate this problem.

For Chrome, I think Google prefers a combination of automated scanning and quick response, without upfront human review (based on https://static.googleusercontent.com/media/research.google.c...)

Uhm, Mozilla does a code review of every extension or update to an extension before it gets published on AMO, so it should hardly be possible for a malicious third party to do malicious things...

There's an outfit called WIPS which buys up abandoned Firefox add-ons and puts adware and spyware in them.[1] BlockSite [2] is an example. This was approved by Mozilla AMO.[3]

[1] http://www.ghacks.net/2013/03/12/mozilla-needs-a-new-audit-p...
[2] https://addons.mozilla.org/en-US/firefox/addon/blocksite/rev...
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=903799

It's a hard problem, as many extensions do the equivalent of loading a script from a remote site as part of their start-up process (consider an ad blocker that regularly pulls down an updated list of ad definitions).

You could imagine that a malicious company would put through a "clean" version for testing and then once approved swap the script for the one loading malware.
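A minimal sketch of that "clean for review, swapped later" pattern. Everything here is made up for illustration; `fetchText` stands in for a real network fetch of the extension's remotely hosted script.

```javascript
// The code that passes review never changes; what changes is the
// string the server returns after approval.
function runRemoteScript(fetchText) {
  const body = fetchText();
  // At review time the server returns harmless data, but nothing
  // stops it from later returning executable code: eval() turns
  // whatever string arrives straight into running code.
  return eval(body);
}

// The same endpoint before and after the bait-and-switch:
const cleanServer = () => "'just an ad filter list'";
const swappedServer = () => "globalThis.pwned = true; 'payload ran'";

console.log(runRemoteScript(cleanServer));    // harmless string
console.log(runRemoteScript(swappedServer));  // arbitrary code just ran
```

The reviewer only ever sees the `cleanServer` behavior; the swap requires no extension update at all.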

The issue with Chrome extensions, just like with android apps, is that people never check the permissions and just click OK.

Extensions are even easier to get installed, though: you just need to redirect a user.

I've also come upon some spam sites that try to get you to install extensions with annoying alerts that prevent you from closing the page, playing a recorded message "To close the page, just install the XX extension".

The only remedy is good screening in the app stores. Actually, for apps/extensions installed from the official repository, I would be OK with remote removal. This would probably spark an outcry from certain parties, but as long as it does not extend to manually installed extensions it's acceptable to me.

I remain baffled that alerts in modern browsers are still handled with an OS-level modal dialog that prevents interaction with the browser's interface elements.

Couldn't the alert instead be rendered as an overlay to the page itself, be modal to just that tab, and not disable the browser chrome? This would make it much harder for pages to "trap" users with alert spam. The existing policies (don't allow this page to create more dialogs -> page instantly redirects to itself to reset the flag) aren't doing a good enough job.

Firefox and Safari already don't use the OS modal dialog, and Chrome is working on it.


> Couldn't the alert instead be rendered as an overlay to the page itself, be modal to just that tab, and not disable the browser chrome?

Firefox and Safari do exactly that.

> The existing policies (don't allow this page to create more dialogs -> page instantly redirects to itself to reset the flag) aren't doing a good enough job.

Does that work? I've never seen any pages that circumvent alert blocking like that, and I'd have imagined that any sensible implementation would disable alerts for the entire session (or at least a few minutes).

I don't know what the exact exploit of the behavior is, but I've observed it on several "infected" computers while working in tech support. I've seen it on Internet Explorer, Microsoft Edge, Google Chrome, and some variation of it on Safari, all of which I was only able to remedy by forcefully closing the browser using the task manager or equivalent.

Truthfully, I have not actually observed this behavior on Firefox, and thankfully I'm no longer in the business of providing technical support to personal computer users, so hopefully I never will.

I had a word with an Edge developer about this problem, and they seemed at least interested in my feedback. So here's hoping Edge fixes this too.

That's at least what Firefox does.

I do read the permissions for Android apps - they all want permissions to everything, for no good reason. Enter apathy.

There is a very good reason. Apps can auto-update (the default) as long as they haven't added permissions since the previous version. Consequently, it is very common practice to request every permission you can conceive of using at the beginning, rather than having to support a large body of users who haven't updated your app in ages.

In theory, a very recent, barely adopted Android version fixes this by asking for permissions at first use (e.g. the first time the app uses the camera or accesses your contacts). In practice this won't matter for several years.

BlackBerry allowed fine-grained permissions, and that was 7-8 years ago. But a rooted phone with Xposed and XPrivacy brings this feature to Android.

I was corrected on Twitter by Will Harris, who works on Chrome security, and it turns out that remote removal actually is in place for extensions that have been blacklisted from the official repository. I've added a small addendum correcting this.

I was under the impression all chrome extensions had to go through the chrome web store now. I don't think you can manually install them anymore (outside of installing in developer mode). I could be wrong though.

This is true. They've actually made it difficult to do even with Developer Mode active, which is probably a net win.

Obviously there's no net win, given that TFA is talking about an extension distributed on Chrome's Web Store that infected over 130,000 users before it was banned.

That kind of proves the walled garden is pretty ineffective, the only accomplishment being the lock-in of users to Google's Web Store as the sole distribution mechanism, thus keeping Chrome a proprietary platform, in spite of the open-source nature of its code.

The only real advantage of the web store is the ability to uninstall banned extensions. But that could have been accomplished just as well by digital signing of the distributed packages.

Yes, and Web Store only has problems if you want to keep stuff internal ('enterprise' distribution).

This does seem like a total mess.

It's so annoying - on Windows, you can't even prevent Chrome from complaining every time you start it up when you have Developer Mode (and unpacked extensions) active. On Linux it doesn't, fortunately. I just want to use my own extension and it's not something that others would benefit from.

I even tried to put my extension up on the Web Store just because of this but I gave up once my (obviously valid) credit card got rejected. Now I just live with it.

They do, but Google's policing of the web store is an absolute joke. Tell it at parties, people will laugh.

I've reported malicious extensions and a year later they're still there racking up installs.

Most useful extensions require access to all sites, so the permissions system isn't very helpful at all.

> Most useful extensions require access to all sites

That may be true for "always-running" extensions like ad blockers, but there's another useful class of "on-demand" extensions that don't need full access -- think Evernote or Pinboard clippers. They only need access to the one site you're looking at, and only when you invoke them.

Chrome added a tightly-scoped "activeTab" permission a few years back, for exactly this case. Before that, on-demand extensions did need all-sites access, and unfortunately a lot of extensions (and tutorials and example code) haven't been updated.
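For comparison, here is what the scoped version looks like for a hypothetical on-demand clipper. The manifest keys below are real Chrome extension fields (manifest v2, current at the time of this thread), but the extension itself is invented; swapping `"activeTab"` for `"<all_urls>"` is the older, broader pattern.

```json
{
  "name": "Hypothetical clipper",
  "version": "1.0",
  "manifest_version": 2,
  "permissions": ["activeTab"]
}
```

With `activeTab`, the extension gets access to the current page only after the user explicitly invokes it (e.g. clicks its toolbar button), and that access ends with the tab.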


I agree. I think there could be more finely grained permissions. I basically have to sigh and resign myself any time I want an extension badly enough.

Doesn't the keyboard command to close the tab/window work in these cases? Thankfully, I've never encountered such BS.

My company has a Chrome/Firefox extension with ~60k users. We have been approached repeatedly by companies that want us to add in their tracking snippet to our extension. What they offered—tens of thousands of dollars every year—was tempting, but we didn't take the bait.

They were cagey with us about what the code did (they didn't share the actual code without an NDA, and we never got that far). But I did get to see the snippet they wanted us to add to our privacy policy, which was devilishly opaque. It arguably disclosed everything that would be done, but it did so in a way that sounded very benign.

I googled some phrases from the privacy policy insert and found that they had in fact gotten several extensions to include the code. Scary.

> I googled some phrases from the privacy policy insert and found that they had in fact gotten several extensions to include the code. Scary.

Could you share those phrases, so that we can do the same Googling?

Looks like someone already did: https://www.reddit.com/r/programming/comments/3tgiaj/chrome_...

Leave off "site:chrome.google.com" and see a few more results. Looks like they don't have many takers these days (or have updated their privacy insert to make it harder to find them all).

Thanks for raising awareness of the possible danger of Chrome extensions as malware. I recently built my first Chrome extension and was amazed by what harm an extension could do if the maker had bad intentions.

The danger is that many people do not pay much attention to the requested permissions.

To fight the issue, I think there should be a culture of open-sourcing Chrome extensions. I open-sourced mine, and if I build one again, I'll do the same.

I think it's great you did, but... people are not reading the permission request dialogs. How likely do you think it is that they'll read the actual source? ;)

Extensions like this are just the same as .exe adware that the user downloads and installs themselves (not automatically).

A more serious problem is legitimate extensions, trusted by lots of users, being sold to some rogue company; then lots of users receive malware with the next update.

The same goes for WordPress plugins. Once you're approved, you can push malware to hundreds of thousands of machines. Or whatever else you need to do - backlinks, redirects and other crap.

Or the author is simply paid to add it to their extension. With auto-updating, it's basically impossible to stop.

Yeah, I've definitely at least experienced ad injection from extensions that went rogue.

I'm not sure, so this is not really an accusation, but I think I had such problems with BetterTTV (https://chrome.google.com/webstore/detail/betterttv/ajopnjid...), a Chrome extension for Twitch chat.

I had trouble with Google suddenly asking me to prove I'm not a bot - for months. I ended up buying a new router because my old one hadn't seen any updates in ages, just in case that got hacked.

Long story short, after some experimenting the only thing that seemed to shut Google up was to disable this extension.

This extension does load code from a remote site; they say it's because getting new releases approved takes too long, so this is how they work around it. I've had BTTV report a newer version number than the one in the Chrome Web Store for that reason.

I don't understand why they're allowed to do that, quite openly even. It defeats the purpose of the Chrome Web Store, and any guarantees by Google are worthless if extensions can just load some of their code from somewhere else.

Now, this extension actually is open source (https://github.com/night/BetterTTV) and I have not read about any suspicions like mine from anyone else. Still, as I said above, Google only stopped asking me to verify I'm no robot after disabling this extension, and I tried several times (on/off).

And the code loading happens quite officially; I first read about it on the extension developer's website itself. From their GitHub README:

    > Files not included in the repo are pulled from the actual server,
    > so everything works.

Google gave a good overview of the screening they perform for extensions in a paper at usenix security last year: https://www.usenix.org/conference/usenixsecurity15/technical...

Basically, at their scale it's a hard problem, especially if you need to redo your analysis after every update to an extension and you can't afford a high false positive rate.

Well, Mozilla found a solution. And that's the brute-force solution. Just throw human code-reviewers at the problem until it solves itself.

I'm guessing Google would have more extensions to review than Mozilla and could not rely as much on volunteers as Mozilla can, but Google also has something like an order of magnitude more money, so I'm sure they could work something out, if they really wanted to.

4ish years ago I had an old, unupdated netbook get infected with the Conduit extension, which then spread through the sync mechanism to my newer laptops.

There have been extensions out for years that would swap bitcoin addresses with their own - extensions entirely unrelated to bitcoin.

A somewhat related topic:

A few months ago Google fixed a vulnerability in inline installation. It was possible to start an install on the attacker's website and then redirect the page to an arbitrary one. This would confuse the user into believing that the install came from the arbitrary page.

Here is the PoC if anyone is interested (CVE-2016-1640): https://www.youtube.com/watch?v=f_9ObDqBoo8

Great article, I like the in-depth analysis of how this actually works. Cheers for the share.

This example is hilarious, granted, but not even the one I truly worry about.

I work in a lax multi-national corporate environment, to be vague. These extensions, especially with religiously conservative adults, are of limited concern.

I am far more concerned about the semi-professional extensions.

I doubt this is malicious, but someone installed this in my environment and inquired why the quality of output went down (in terms of pixelation).


The problem here is it raises fewer eyebrows. It does a purpose-filled operation professionals would need, and they are far less discerning than me.

This person had Adobe Acrobat Pro, and forgot. Such extensions have real potential to generate IOCs (indicators of compromise), but only very expensive next-generation malware detection catches that when it sees traffic going out.

But what if there is no traffic out? Or it does a more professional job with exfil?

Most modern software inventory has no intelligence about plugins. That is terrifying. Per-user JavaScript directories? Enumerating just the obvious ones can be a full-time job.

What about dupes? I am the only one I know in my department who uses uBlock ... Origin. And I know there is a fork. Others are intended to have a similar logo and fool busy professionals.

I love FF, but also use Chromium. I am worried that the freedom afforded to me by the beauty of things like Keysnail, against the general trend of vendor lockdown, forces me to voluntarily suck it up, deal with crap software defaults and workflows, and doubly recommend the same to people in my environment. I will increasingly have to part with each of the few extensions I like, all while people here push Electron apps.

I like them (who am I to be arrogant and judge the work of these people half my age; at least they put out code while I bitch all day), but the browser base is not discernibly updated or managed unless some developers coordinate. I am sure that came or is coming down the line, but currently popular apps will play catch-up while people like me are forced to preemptively restrict use of likeable tech yet again, because security was an afterthought.

Qubes increasingly looks like the future. It is sad, but every few years I must give up more of my computer's resources for useful, but wasteful, separation of software from itself. Because, well, cue the recently retracted Theo de Raadt "x86 virtualization being secure is a waste of time" trope, rescinded because even his OpenBSD crew will bite the bullet and work on OpenBSD virtualization.

I just depressed myself.

Sincerely, Guy running multiple browsers in Firejail in a VM

EDIT: I do not know the difference between have and half, apparently; probably a sign of my age, haha!

This article actually indicates a (probably bug-bounty'able) flaw in the Chrome webstore security checks:

> The script that it fetches from the above server is a malware payload. The extension needs to download it after having been installed because it cannot ship with the payload if it wants to pass through the Chrome Webstore’s security checks.

There probably are legitimate reasons to pull in remote content, but I can't think of any that can't be worked around. You'd think that Google's own malware tracking would pick up the payload server as a bad site, but the malware author takes care to hide that delivery mechanism behind a header check.
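The header-check cloaking can be sketched as a tiny server-side dispatch function. This is hypothetical (the header name and values are invented); it only illustrates the mechanism: scanners and crawlers get an innocuous response, while requests carrying the header the extension is known to send get the real payload.

```javascript
// Decide what the payload endpoint serves, based on request headers.
function chooseResponse(headers) {
  if (headers['x-installed-ext'] === 'expected-token') {
    return 'EVIL_PAYLOAD';              // served only to the extension
  }
  return '/* innocuous placeholder */'; // served to crawlers and scanners
}

console.log(chooseResponse({}));                                  // clean
console.log(chooseResponse({ 'x-installed-ext': 'expected-token' })); // payload
```

Because Google's crawlers never send the magic header, the domain keeps looking benign to reputation systems.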

So, to me, ANY requests or evals by extensions should (at the very least) be detected and constitute a separate permissions category, or, better yet, be BLOCKED as a violation of the same-origin policy.

I don't get where you're going with this. The extension has the "<all_urls>" permission which explicitly allows it to do this. It also explicitly turned on unsafe-eval.

> The extension has the "<all_urls>" permission which explicitly allows it to do this.

<all_urls> has nothing to do with the browser's normal Same Origin Policy for individual web pages. <all_urls> indicates that the extension is allowed to operate on all pages loaded in the browser window.

It has nothing to do with the resources loaded by the extension. In other words, I'm talking about (for a start) this: "Regular web pages.. limited to Same Origin Policy. Extensions can talk to remote servers outside of its origin."


The extension should be treated as operating within the context of that tab - in other words, a locked-down sandbox for that tab.

Cross-origin permissions as noted above should be flagged to the user and should apply across the board to any injectable resource (including CSS, HTML, Javascript, SVG, etc.)

Otherwise, outside resources should fall under the Same Origin Policy (i.e., if your tab is at https://google.com/, resources should only be loaded from google.com).
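That proposed rule boils down to a tiny predicate. This is a sketch of the suggestion, not of how Chrome actually behaves, and the function name is made up:

```javascript
// Under the proposed policy, an extension request is only allowed if
// its target shares the origin of the page loaded in the current tab.
function allowedUnderProposedPolicy(tabUrl, requestUrl) {
  // Same origin means same scheme, host, and port.
  return new URL(tabUrl).origin === new URL(requestUrl).origin;
}

console.log(allowedUnderProposedPolicy(
  'https://google.com/search', 'https://google.com/logo.png'));   // true
console.log(allowedUnderProposedPolicy(
  'https://google.com/search', 'https://evil.example/payload.js')); // false
```

Cross-origin access would then have to be declared explicitly and surfaced to the user, per the permission flagging described above.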

<all_urls> specifically refers to whether or not the extension is triggered on all URLs that are visited in the browser. That permission is usually needed.

Attempts by the extension to communicate (read and/or write) data on its own with any third-party server should fall under different permissions.

For more information on what should have happened here, please see "Only local script and object resources are loaded":


> It also explicitly turned on unsafe-eval.

That's a good point, and I agree; in fact, unsafe eval should only be allowable within the Same Origin Policy.

Also, this brings up another good point: some permissions are riskier than others (for example, the legitimate use cases for unsafe eval are few). A visual indicator of the relative severity of the requested CSP relaxations when installing an extension would probably help less technical people recognize when an extension is very risky. (Multiple red danger symbols would probably be a good clue.)
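For reference, re-enabling eval() takes a single line in a Chrome extension's manifest. The key and syntax below are real manifest v2 fields; the default policy it relaxes is `script-src 'self'; object-src 'self'`. This is exactly the kind of high-risk request a severity indicator could flag:

```json
{
  "content_security_policy": "script-src 'self' 'unsafe-eval'; object-src 'self'"
}
```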

Yes, regular pages are limited to their same-origin policy. Extensions, when they request it, are not. This extension requested it, so it is capable of downloading files from any server into a string. It also requested unsafe-eval, which lets it eval a string. Now, when you add one and one together, you have two features combined in a malicious way. But not a bug.

Sure, things should be different. But they are the way they are, and considering the way they are, this is not a bug. And certainly not bug-bounty'able.

I am surprised that no one has mentioned this: Google can remove an extension that is installed on your computer at any time!

Doesn't anyone else see that as incredibly overreaching?

The biggest surprise to me is that this has so many up votes on HN.

Getting "hacked" by a sleazy browser extension is about as surprising as getting a virus after installing something from a warez site.
