I am guessing the extension had a "content_security_policy" key in its manifest, with an 'unsafe-eval' CSP directive in its value?
Any extension which declares such a CSP directive in its manifest should be presumed malicious until a thorough investigation proves otherwise.
'unsafe-eval' in a manifest essentially grants an extension the ability to execute arbitrary code in the extension context, code that can't be reviewed by reading the source files.
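For reference, a Manifest V2 entry that relaxes the CSP this way looks something like the following (extension name and version are placeholders; the "content_security_policy" string format shown is the MV2 one):

```json
{
  "manifest_version": 2,
  "name": "Some User-Agent Switcher",
  "version": "1.0",
  "content_security_policy": "script-src 'self' 'unsafe-eval'; object-src 'self'"
}
```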
"User-Agent Switcher for Google Chrome" is confirmed to have an 'unsafe-eval' directive in its manifest.
So I guess the suspicion should be extended to any extension declaring a "content_security_policy" key in its manifest.
It would seem obvious that the moment a script tries to load arbitrary code outside of the package, it should fail.
Generated code is code that's outside of the package.
There are also a few (keyword: FEW) valid reasons for using eval in situations where it's beneficial to pull updates and modules from a known and trusted location.
In those cases, though, if you're not signing server-side and validating signatures in the extension with a pre-shared cert, you've still got problems (a single MITM attack can compromise your extension going forward).
Long story short, eval is still a very useful feature. It needs to be used properly, and it should be used rarely, but it's incredibly powerful, and lots of things you don't really think about suddenly stop working if it goes away.
And yet this is Mozilla's stance regarding extensions pulling code from outside the extension's own package:
> extensions with 'unsafe-eval', 'unsafe-inline', remote script, or remote sources in their CSP are not allowed for extensions listed on addons.mozilla.org due to major security issues.
This is the sane stance in my opinion, with the best interests of users at heart.
I would like to be shown an actual, real case where eval() would be impossible to avoid for an extension, not just a theorized one without a sensible, convincing example. As much as I try, I can't come up with any such scenario.
I mostly agree with you, eval is dangerous and often improperly used, but it's ALSO an incredible tool.
The complete opposite of your argument is homoiconic languages (https://en.wikipedia.org/wiki/Homoiconicity): they're built in a way that explicitly allows manipulation, even of the language itself, as a FEATURE.
They include languages like Lisp/Clojure/Julia.
Worse, many of the ways to "work around" the lack of eval just reduce the attack surface, or make an attack considerably less likely (obfuscation, not security!).
Simple case: You need to apply certain policies to certain sites, those policies vary based on the browser in use, the country of origin of the user, and the country the site is hosted in.
The way that those policies vary... ALSO varies. As new legislation is passed, or corporate policies change, or certain countries become more or less stable.
A (completely valid) way to solve this problem would be to send both a policy engine and a policy set to the extension. The policy engine is eval'd and runs the policy set.
That allows real-time updates to the deployed extension's ability to parse new policies. Not just different data in the policies, mind you, but actual new policy capabilities.
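The policy-engine-plus-policy-set idea could be sketched roughly like this (the policy shape and names are my own invention; note that the Function constructor, like eval, requires 'unsafe-eval' under an extension CSP):

```javascript
// Hypothetical remotely delivered policy engine: a function body shipped
// as a string, plus a data-only policy set that it interprets.
const policyEngineSource = `
  return policies
    .filter(p => p.browsers.includes(ctx.browser) && p.countries.includes(ctx.country))
    .map(p => p.action);
`;

// The policy set is plain data and can gain new fields over time,
// as long as the engine shipped alongside it knows how to read them.
const policySet = [
  { browsers: ['chrome'],  countries: ['DE'], action: 'block-tracker' },
  { browsers: ['firefox'], countries: ['DE'], action: 'warn' },
];

// The Function constructor is an eval-equivalent: this line is what
// needs 'unsafe-eval' in the extension's CSP.
const engine = new Function('policies', 'ctx', policyEngineSource);

const actions = engine(policySet, { browser: 'chrome', country: 'DE' });
// actions → ['block-tracker']
```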
Do you have to use eval? Nope, sure don't. You can define a custom EBNF (hell, why not just use the EBNF for ECMAScript), write your own freaking engine, and run it. All without eval. But you're still just running eval. You just get to call it something else, introduce an incredible amount of overhead to recreate a feature someone has already provided, and generally get less performant results.
Or, you can just fucking sign the payload with a pinned cert and verify on the client, and get the same guarantees for security you have with HTTPS to begin with.
1. It's the de-facto language for the web, which both drastically increases the number of inexperienced developers, as well as the total attack surface.
2. Again, as a scripting language for browsers, parsing untrusted user input is a common need. Eval filled the need for parsing, but not the need for trust.
You can absolutely use eval in entirely safe ways. Most people don't, and doing so (especially in JS) is hard, because JS has made establishing trust hard. Only juuuuust recently, with the introduction of some of the WebCrypto work and the rapid rise of ubiquitous HTTPS, are we getting to the point where doing so is feasible at all.
I'd be OK with grabbing code from elsewhere, as long as I could guarantee that it would never change: immutability, primarily. I'm thinking of something like an IPFS repo which is "elsewhere", but still very much crawlable, scannable for bad stuff (aside from Turing issues...), and can be shown to reproduce the content if something breaks.
Using an immutable, self-certifying system would also solve the second point, regarding the single MITM. The trust would be with the file/package, and not some ephemeral cert (whose trust is brokered from above).
But there are lots of times when the final result is much more valuable when the code CAN change.
You just have to have trust.
1. Trust that the folks who can change the code aren't malicious.
2. Trust that the code you think you're running is really the code you're running.
Neither of those things are really too much of a stretch. And the services and capabilities they allow are very, very nice.
Hell, statistically speaking... we're both typing these comments in Google Chrome, a browser that auto-updates itself all the time.
But 1. we trust that Google won't suddenly become malicious, and 2. we trust the mechanisms in place (HTTPS, cert pinning) to ensure the update is really the update Google sent.
In fact, this whole article boils down to a breakdown of trust: it turns out random extension devs aren't as trustworthy as we might like. They make mistakes, and there's no safety net.
I'm not sure whether extensions also have the capability to capture HTTPS body content, but I erred on the safe side and changed my passwords anyway.
Seriously? Who trusts the Chrome store or the Android store for that matter? If you've ever once submitted an app and seen how loose the security is, I can't see how you'd have any faith in their system.
I've seen that somewhere else; is it just for obfuscation? Or is there some other reason for it?
Maybe it might fool some automated analyzers though.
- If the string is Base64 encoded
- Output the decoded string

With no recursion to iterate the check for nested encodings of any sort.
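By contrast, an iterated check might look something like this (a rough Node sketch; the Base64 heuristic and helper names are my own):

```javascript
// Loose heuristic: Base64 alphabet, optional padding, length divisible by 4.
function looksLikeBase64(s) {
  return /^[A-Za-z0-9+/]+={0,2}$/.test(s) && s.length % 4 === 0;
}

// Keep decoding while the result still looks like Base64, so
// double- or triple-encoded strings unravel all the way.
function decodeFully(s) {
  while (looksLikeBase64(s)) {
    const decoded = Buffer.from(s, 'base64').toString('utf8');
    if (decoded === s) break; // guard against fixed points looping forever
    s = decoded;
  }
  return s;
}

const once = Buffer.from('hello!', 'utf8').toString('base64');
const twice = Buffer.from(once, 'utf8').toString('base64');
decodeFully(twice); // → 'hello!'
```

A heuristic this loose will false-positive on some plain text, which is fine for an analyzer flagging suspect strings but not for anything load-bearing.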
"lkmofgnohbedopheiphabfhfjgkhfcgf" is the identifier for the user-agent extension I've been using for a while, and looking at the code, it's very straightforward and not spying on anything. It has not been updated since 2013 (and the website listed for the dev, www.toolshack.com, is not online), so I'm guessing a lot of users just go with other extensions that have received updates.
BTW, Google itself offers a similar extension at 120KB called "User-Agent Switcher for Chrome" (not "... for Google Chrome") here: https://chrome.google.com/webstore/detail/user-agent-switche...
Is there any way to confirm that this is actually from Google, though, and not just some third party who chose "google.com" as their publisher name? Most official Google extensions have a "by Google" badge in the right-hand column, whereas this one lacks it.
Now, is that trustworthy or not? Good question.
This one just has the 'offered by google.com' line. I would have assumed a 'By Google' badge would have been added after the acquisition?
Of course, forgeable as well.
I've sent an email to his anonymized domain registrant contact to see if he can do anything about his malicious competitor.
So we have a web store that average Chrome users think is "safe," especially when apps have high review scores, and no meaningful ability to report malware to Google or to the users. Nice.