
I guess this is as good a place as any to post that I noticed something similar had happened to [User-Agent Switcher for Google Chrome](https://chrome.google.com/webstore/detail/user-agent-switche...) and [Block Site](https://chrome.google.com/webstore/detail/block-site/eiimnmi...). The "report abuse" link on the page is useless. The former is especially insidious in that it hides the malware in a .jpg file that appears benign at first (promo.jpg, for anyone who wants to analyze it). When loaded into a canvas element and decoded, the image yields JS that sends all of the user's HTTP requests to some domain while also injecting ads and redirecting to affiliate links.
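For anyone curious how an image can carry executable JS: once the image is drawn to a canvas, the extension can read the raw pixel bytes back out and reassemble a script from them. Here's a minimal sketch of the general technique, assuming a simple least-significant-bit scheme; the actual encoding in promo.jpg is unknown to me, and all names here are made up:

```javascript
// Sketch of the general steganography trick such extensions use: hide
// script bytes in image pixel data, then reassemble them after drawing
// the image to a canvas. The real promo.jpg encoding is unknown; this
// assumes a simple one-payload-bit-per-pixel-byte (LSB) scheme.
//
// In a browser the attacker would obtain the pixel bytes like this:
//   const ctx = canvas.getContext('2d');
//   ctx.drawImage(img, 0, 0);
//   const pixels = ctx.getImageData(0, 0, w, h).data;

function decodeLsbPayload(pixels, byteLength) {
  const out = [];
  for (let i = 0; i < byteLength; i++) {
    let byte = 0;
    for (let bit = 0; bit < 8; bit++) {
      // Collect the least-significant bit of each pixel byte, MSB first.
      byte = (byte << 1) | (pixels[i * 8 + bit] & 1);
    }
    out.push(byte);
  }
  // The extension would then hand this string to eval() / new Function().
  return String.fromCharCode(...out);
}

// Encoder, only here to demonstrate the round trip in isolation.
function encodeLsbPayload(text) {
  const pixels = [];
  for (const ch of text) {
    const code = ch.charCodeAt(0);
    for (let bit = 7; bit >= 0; bit--) {
      pixels.push(0xfe | ((code >> bit) & 1)); // plausible-looking pixel byte
    }
  }
  return pixels;
}
```

The point is that nothing in the packaged source looks like code; a reviewer sees an innocuous image and a bit of canvas plumbing.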



> it actually hides the malware in a .jpg file that appears benign at first (promo.jpg for anyone who wants to analyze) but when loaded in a canvas element and decoded in some manner

I am guessing the extensions had a "content_security_policy" key in their manifests[1], with an 'unsafe-eval' CSP directive in its value?

Any extension which declares such a CSP directive in its manifest should be presumed malicious until a thorough investigation proves otherwise.

'unsafe-eval' in a manifest essentially grants an extension the ability to execute, in the extension context, arbitrary code which can't be reviewed by reading the source files.
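For reference, such a manifest entry looks roughly like this (illustrative values, Manifest V2 style):

```json
{
  "name": "Some Extension",
  "version": "1.0",
  "manifest_version": 2,
  "content_security_policy": "script-src 'self' 'unsafe-eval'; object-src 'self'"
}
```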

EDIT:

"User-Agent Switcher for Google Chrome" confirmed to have an 'unsafe-eval' in its manifest.

"Block Site" does not declare 'unsafe-eval' in its manifest. It does, however, add many "script-src" directives in its manifest, including one for ".wips.com", which means the extension can pull JavaScript resources not bundled with the extension from its own web site (hence outside the Chrome store review process, if any), so the behavior of the extension can change at any time, as far as its permissions allow.
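A CSP along those lines would look roughly like this (illustrative; not copied from the actual manifest):

```json
{
  "name": "Some Site Blocker",
  "version": "1.0",
  "manifest_version": 2,
  "content_security_policy": "script-src 'self' https://*.wips.com; object-src 'self'"
}
```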

So I guess the suspicion should be extended to any extension declaring a "content_security_policy" key in its manifest.

===

[1] https://developer.mozilla.org/en-US/Add-ons/WebExtensions/ma...


So how do unsafe-eval and dynamically loaded scripts pass any sort of "Google security scan"?

It would seem obvious that the moment a script tries to load arbitrary code outside of the package, it should fail.


Because unfortunately many, many libraries and templating engines rely on evaling code.

Generated code is code that's outside of the package.
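To make that concrete, here's a toy template compiler in the style many templating libraries use (hypothetical and simplified to a single `<%= %>` substitution): the template string is compiled into real JS via new Function, which is exactly the kind of generated code 'unsafe-eval' permits.

```javascript
// Toy template engine: compiles a template string into a real JS
// function via new Function. Under a CSP without 'unsafe-eval', the
// new Function call below is blocked.
function compileTemplate(tpl) {
  // Turn "<%= name %>" into "${data.name}" inside a template literal.
  // (A toy: it breaks on templates containing backticks.)
  const body =
    'return `' + tpl.replace(/<%=\s*(.+?)\s*%>/g, '${data.$1}') + '`;';
  return new Function('data', body);
}

const greet = compileTemplate('Hello, <%= name %>!');
greet({ name: 'HN' }); // "Hello, HN!"
```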

There are also a few (keyword: FEW) valid reasons for using eval in situations where it's beneficial to pull updates and modules from a known and trusted location.

In those cases, though, if you're not signing server-side and validating signatures in the extension with a pre-shared cert, you've still got problems (a single MITM attack can compromise your extension going forward).

Long story short, eval is still a very useful feature. It needs to be used properly, and it should be used rarely, but it's incredibly powerful, and lots of things you don't really think about suddenly stop working if it goes away.


> Long story short, eval is still a very useful feature.

And yet this is Mozilla's stance regarding extensions pulling code from outside the extension's own package[1]:

> extensions with 'unsafe-eval', 'unsafe-inline', remote script, or remote sources in their CSP are not allowed for extensions listed on addons.mozilla.org due to major security issues.

This is the sane stance in my opinion, with the best interests of users at heart.

I would like to be shown an actual, real case where eval() is impossible to avoid for an extension, not just a theorized one with no sensible, convincing example. As much as I try, I can't come up with any such scenario.

[1] https://developer.mozilla.org/en-US/Add-ons/WebExtensions/ma...


You're really unable to come up with a single example of why treating data as code isn't useful?

I mostly agree with you: eval is dangerous and often improperly used, but it's ALSO an incredible tool.

The complete opposite of your argument is homoiconic languages (https://en.wikipedia.org/wiki/Homoiconicity): they're built in a way that explicitly allows manipulation of code, even of the language itself, as a FEATURE.

They include languages like Lisp/Clojure/Julia.

Worse, many of the ways to work around the lack of eval just reduce the attack surface or make an attack somewhat less likely (obfuscation, not security!).

Simple case: You need to apply certain policies to certain sites, those policies vary based on the browser in use, the country of origin of the user, and the country the site is hosted in.

The way that those policies vary... ALSO varies. As new legislation is passed, or corporate policies change, or certain countries become more or less stable.

A (completely valid) way to solve this problem would be to send both a policy engine and a policy set to the extension. The policy engine is eval'd and runs the policy set.

That allows real-time updates to the deployed extension's ability to parse new policies. Not just different data in the policies, mind you, but actual new policy capabilities.

Do you have to use eval? Nope, sure don't. You can define a custom EBNF grammar (hell, why not just use the EBNF for ECMAScript), write your own freaking engine, and run it. All without eval. But you're still just running eval. You just get to call it something else, introduce an incredible amount of overhead to recreate a feature someone has already provided, and generally get less performant results.

Or, you can just fucking sign the payload with a pinned cert and verify on the client, and get the same guarantees for security you have with HTTPS to begin with.


"Treating data as code" doesn't specifically require eval. Lisp macros avoid eval. Dynamic extensions to a running Lisp program don't require eval, but rather load.


Sure, but you're still taking data from outside of your original package scope and running it or allowing it to be run later.

That's just useful. JavaScript's eval gets a bad rap (this is entirely my own opinion) because

1. It's the de-facto language for the web, which both drastically increases the number of inexperienced developers, as well as the total attack surface.

2. Again, as a scripting language for browsers, parsing untrusted user input is a common need. Eval filled the need for parsing, but not the need for trust.

You can absolutely use eval in entirely safe ways. Most people don't, and doing so (especially in JS) is hard. That's because JS has made establishing trust hard. Only juuuuust recently, with the introduction of some of the WebCrypto work and the rapid rise of ubiquitous HTTPS, are we really getting to the point where doing so is feasible at all.


Indeed they do. I can think of much of the NPM ecosystem, which assumes (incorrectly) that packages are unique, stable, and good. That whole situation a while back showed that not to be the case.

I'd be OK with grabbing code from elsewhere, as long as I could guarantee that it would never change. Primarily, immutability. I'm thinking of something like an IPFS repo which is "elsewhere", but still very much crawlable, scannable for bad stuff (Turing issues aside...), and able to be shown to reproduce content if it is broken.

Also, using an immutable, self-certifying system would solve the second point, regarding the single MITM. The trust would be with the file/package, and not with some ephemeral cert (whose trust is brokered from above).


>I'd be OK with grabbing code elsewhere, as long as I could guarantee that it would never change.

But there are lots of times when the final result is much more valuable when the code CAN change.

You just have to have trust.

1. Trust that the folks who can change the code aren't malicious.

2. Trust that the code you think you're running is really the code you're running.

Neither of those things are really too much of a stretch. And the services and capabilities they allow are very, very nice.

Hell, statistically speaking... we're both typing these comments in Google Chrome, a browser that auto-updates itself all the time.

but 1. we trust that Google won't suddenly become malicious, and 2. we trust the mechanisms in place (HTTPS, cert pinning) to ensure the update is really the one Google sent.

In fact, this whole article actually boils down to a breakdown of trust: Turns out random extension devs aren't as trustworthy as we might like. They make mistakes and there's no safety net.


If it can't change, then what's the point of grabbing something elsewhere instead of just embedding it in the addon?


I am mortified. I had this extension installed for 2 years... What info did they get and what can they do to me? Please help... (I uninstalled it, talking about User-Agent Switcher)


I had the same concern when I found out. I narrowed it down: it was probably only within the last ~5 months that it was updated to include the badware. I don't have the skills to decode it fully (it was obfuscated quite heavily), but I know at the very least that it sends browsing history and injects ads.

I'm not sure if extensions also have the capability to capture HTTPS body info, but I erred on the safe side and changed passwords as well.


Are you sure it's "User-Agent Switcher for Google Chrome", not just "User-Agent Switcher"? See my other comment that refers to the difference.


That is some fine detective work there, Lieutenant! :) What tipped you off in the first place to the malware in User-Agent Switcher?


The odd POST requests I noticed to uaswitcher.org in Wireshark while I was trying to create a packet capture. I saw that they contained my browsing-history URLs in double-encoded base64. Interestingly, it appears the extension was infected ~4 years back, taken down, and somehow later re-added, only to be reinfected with malware within the last few months. Suffice it to say, I am now paranoid and have audited all my extensions, tossed out everything with obfuscated JS, and run all my extensions in developer mode so I can be sure they never update without my consent.


I don't use browser extensions at all because they are often made by unknown developers and I cannot trust them.


I always thought it was strange that Google bothered adding so many XSS prevention measures to Chrome when they also happily give UXSS abilities to extension developers, complete with the veneer of trust provided by the Chrome web store.


> veneer of trust provided by the Chrome web store

Seriously? Who trusts the Chrome store or the Android store for that matter? If you've ever once submitted an app and seen how loose the security is, I can't see how you'd have any faith in their system.


You're violently agreeing here. "Veneer" means a very thin layer of pretty material on top of cheaper material -- in this context the comment was saying that the trust afforded to Google is skin-deep and probably unjustified.


Ahh you're quite right. Apologies, not enough coffee yet.


And if an extension is not downright malignant, it is often slow.


> urls in double encoded base64 format

I've seen that somewhere else; is it just for obfuscation? Or is there some other reason for it?


From what I've seen online, a lot of these adware extensions do something similar. To me it doesn't make sense as an obfuscation method, since anyone capable of capturing traces of network activity (or using Chrome dev tools to do the equivalent) can probably recognize a base64 encoding and just run the decoder a second time.

Maybe it might fool some automated analyzers though.


I've definitely seen naive analyzers just do something like:

- Input string
- If string is Base64 encoded:
  - Run base64decode(string)
- Output new string

with no recursion for an iterated check of any sort of encoding.
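A slightly less naive analyzer just keeps decoding until the result stops looking like base64. A quick sketch (Node; the round-trip check is a crude validity heuristic, not a rigorous one):

```javascript
// Crude base64 validity check: charset test plus round-trip equality.
function looksLikeBase64(s) {
  if (!/^[A-Za-z0-9+/]+={0,2}$/.test(s)) return false;
  return Buffer.from(s, 'base64').toString('base64') === s;
}

// Repeatedly decode until the result no longer looks like base64,
// catching the double-encoded URLs described upthread.
function decodeIteratedBase64(input, maxDepth = 10) {
  let current = input;
  for (let depth = 0; depth < maxDepth && looksLikeBase64(current); depth++) {
    const next = Buffer.from(current, 'base64').toString('utf8');
    if (next === current) break; // fixed point, nothing left to decode
    current = next;
  }
  return current;
}
```

The depth cap guards against pathological inputs that happen to keep round-tripping.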


Beware: I also have a User Agent Switcher extension installed and was surprised to hear about this malware. Looks like the one I've been using safely is called "User-Agent Switcher" (https://chrome.google.com/webstore/detail/user-agent-switche...) and is only 23KB (whereas the "... for Google Chrome" is at 350KB).


For reference, on Windows the Chrome extensions are stored in a path similar to this one I have:

C:\Users\<login.name>\AppData\Local\Google\Chrome\User Data\Default\Extensions\lkmofgnohbedopheiphabfhfjgkhfcgf\2.0_0

"lkmofgnohbedopheiphabfhfjgkhfcgf" is the identifier for the user-agent extension I've been using for a while, and looking at the code, it's very straightforward and doesn't spy on anything. It has not been updated since 2013 (and the website listed for the dev, www.toolshack.com, is not online), so I'm guessing a lot of users just go with other extensions that have received updates.

BTW, Google itself offers a similar extension at 120KB called "User-Agent Switcher for Chrome" (not "... for Google Chrome") here: https://chrome.google.com/webstore/detail/user-agent-switche...


Damn! I was using this all along! Removed it immediately and checked the Chrome store for an alternative; there's one from Google itself.


You're referring to this one right?

https://chrome.google.com/webstore/detail/user-agent-switche...

Is there any way to confirm that this is actually from Google, though, and not just some 3rd party who chose "google.com" as their publisher name? Most official Google extensions have a "by Google" badge in the right-hand column, whereas this one lacks it.


It's interesting. If you go to the support tab and follow the link to the maker's support website, you end up on https://spoofer-extension.appspot.com/. The contact page there names "Glenn Willson" as the author, and his G+ page has a link to "The User-Agent Switcher Has A New Owner: Google!": http://www.glennwilson.info/2017/02/the-user-agent-switcher-...

Now, is that trustworthy or not? Good question.


Search for an official Google Chrome extension (such as Chrome Remote Desktop). You'll see a 'By Google' logo (https://chrome.google.com/webstore/category/collection/by_go...) as well as an 'offered by google.com'.

This one just has the 'offered by google.com'. I would have assumed a 'By Google' logo would have been added after an acquisition?


If you go to the listing of all Google extensions (the URL you linked) and open the "Chrome Sign Builder" Extension, go to Related -> "More from this Developer" you will find the User-Agent Switcher for Chrome. So it does seem legit.


Well, the LinkedIn checks out too: https://www.linkedin.com/in/wilsong

Of course, forgeable as well.

I've sent an email to his anonymized domain registrant contact to see if he can do anything about his malicious competitor.


Wow, I went to try and add a 1-star review to each to warn users (since both have extremely high ratings) and you need to install the malware-ridden extension to leave a review.

So we have a web store that average Chrome users think is "safe," especially when apps have high review scores, and no meaningful ability to report malware to Google or to other users. Nice.


There is literally a 'Report Abuse' button about two inches below the download button. That's what you use in this case. Being able to leave a review of something you haven't tried would just be silly.


Are there any known problems with Random User Agent?



