The examples he gives either have the potential of alerting the user to the spoof (via the missing image) or require significantly more work to spoof the user (via a complex proxy at the router level or obtaining a homographic URL).
Either way, the barrier to stealing users' credentials has gone up, which is exactly what security measures are intended to do. Hardly useless, and definitely not "worse than useless".
Your house probably doesn't have solid metal doors, metal bars over the windows, laser tripwires, a patrolling security guard, and angry Doberman Pinschers. All of those things would increase your home's security, but the cost of the security probably doesn't make sense compared to the risks involved with your house getting burgled.
Similarly, Hacker News doesn't require client TLS certificates, two-factor authentication, and insanely complex passwords. The risk of horrible and/or costly things happening because someone's HN account got broken into really isn't there, so implementing those security measures doesn't make much sense.
Even if the author's bank's security picture doesn't actually decrease fraud at all, the security picture was probably cheap to implement, required minimal user and employee training, and keeps the lawyers and regulators happy. The bank's risk exposure decreases, even if the author's doesn't.
Although the implementation might have been cheap, there's another factor to consider:
If a system provides a false sense of security, this almost certainly decreases the actual security of the system.
And putting effort (no matter how cheap) into something that decreases the overall security of the system is poor risk management.
You assume that the image being there will make people less likely to check other aspects of the site, like the URL. But consider the average user. Spoofing and phishing attacks work because people don't check these things.
The security image is difficult to spoof and is more likely to clue average users in to attacks. Therefore, it is useful as a security device and is not worse than not having it.
Also, security images enable other methods for making attackers' lives difficult. Note that forcing proxies to grab the images allows the bank to focus on the IPs that request the highest number of images, or at least block known Tor exit nodes. Sure this forces people running Tor exit nodes to call in, but the percentage is so small that the bank won't care.
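To make that concrete, here's a rough sketch (the log format and threshold are invented, not any bank's actual system) of how a bank could flag proxying phishers by spotting IPs that fetch security images for an unusually large number of distinct usernames:

```python
from collections import Counter

THRESHOLD = 20  # assumed cutoff; a real system would tune this

def suspicious_ips(request_log):
    """request_log: iterable of (ip, username) pairs for image fetches.

    An IP proxying a phishing page fetches images for many victims'
    usernames, while a legitimate user's IP fetches one or two.
    """
    per_ip = Counter()
    seen = set()
    for ip, user in request_log:
        if (ip, user) not in seen:   # count distinct usernames per IP
            seen.add((ip, user))
            per_ip[ip] += 1
    return {ip for ip, n in per_ip.items() if n >= THRESHOLD}
```

The same per-IP counting is where blocking known Tor exit nodes would slot in: treat requests from those IPs as suspicious regardless of volume.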
Again, the point is that the security image makes the attackers' lives more difficult. The image lends no "false sense of security" because without the image, you'd have the same sense of security.
I just tested this with Ally (the bank shown in the blog post), and I remember it being true with ING Direct. I haven't tested other banks.
More steps are more work for the attacker but that's not a big deal. The issue is that the security image isn't just another layer. It's a layer that the bank is making a guarantee about that it can't back up. They don't say, "Pick a security image to make it slightly harder for phishers." They say, "Pick or upload your own image so that you really know you're on our site."
"Most people don't see this because they check the "remember this computer" option the first time they login, so even showing the security question so that one can grab the image will seem suspicious to many users."
What a phisher can do is emulate the 'clean' state: not logged in, no cookie. Some users will get suspicious and leave the site, sure. It's like a sales funnel: you don't have to convert every visit to make money.
My problem with security images is not that they would never do any good, but that they will do more harm than good. They basically make a promise that they can't keep.
To deal more specifically with your example: I'm not sure what the most prevalent system is but the default one described doesn't involve an extra security question. The image is presented after the user enters their username but before they are asked for their password. If the site follows this flow, then we have a problem. Now in your case the flow is a little different.
It seems to me that showing users a different page based on a cookie is a good idea in that if a user hits the no-cookie version, they might be alerted. But the good part doesn't have anything to do with security images.
As others have posted, the real value of security images is not their security. It's marketing and compliance.
Agreed. Where we disagree seems to be regarding what constitutes "some" users. I contend that it's a large enough portion of the total that the security images do more good than harm. I admit that my position is based on intuition. If you have evidence to the contrary, please share it. (That's not meant to be snarky. I really would prefer basing my position on evidence than intuition.)
> I'm not sure what the most prevalent system is but the default one described doesn't involve an extra security question. The image is presented after the user enters their username but before they are asked for their password. If the site follows this flow, then we have a problem. Now in your case the flow is a little different.
I don't know what's most prevalent either. As I mentioned in my other comment, I've sampled too few banks to draw a conclusion, but 100% of the ones I've looked at ask a security question to register your computer before showing the security image. There's a chance I got lucky with the few that I sampled and the rest don't ask a security question, in which case you'd be right: it'd be trivial to defeat. I just don't see any evidence that that's true.
What do you mean by "the default one described?" Do you mean the one described in the blog post? If so, the screen shot in the blog post is from Ally Bank's website, which is one of the banks that I confirmed does ask a security question before displaying the security image.
> It seems to me that showing users a different page based on a cookie is a good idea in that if a user hits the no-cookie version, they might be alerted. But the good part doesn't have anything to do with security images.
The cookied version of a page must be sufficiently unique per user. Otherwise the phisher could emulate the cookied version of the page. You haven't proposed an alternative to the security images, so I'm not sure what you're suggesting here.
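For illustration, the "registered device" cookie itself can be made unforgeable by binding a random device id to the username with an HMAC over a server-side secret. This is a sketch of the general technique only, not any particular bank's scheme:

```python
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # illustrative; never leaves the server

def issue_device_cookie(username: str) -> str:
    # Bind a random device id to the username with an HMAC so the
    # cookie can't be forged or transplanted onto another account.
    device_id = secrets.token_hex(16)
    tag = hmac.new(SERVER_SECRET, f"{username}:{device_id}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{username}:{device_id}:{tag}"

def verify_device_cookie(cookie: str) -> bool:
    username, device_id, tag = cookie.split(":")
    expected = hmac.new(SERVER_SECRET, f"{username}:{device_id}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

A phisher who has never seen a victim's cookie can't produce a valid one, which is what makes the cookied page useful as a signal in the first place.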
> As others have posted, the real value of security images is not their security. It's marketing and compliance.
This is just an appeal to cynicism and doesn't add to the debate.
I've been basing my opinion on earlier uses of security images which were as I described, but I should not have called that the 'default' as I have no idea what is the most prevalent type. I know BoA had a system like that years ago.
I will say now that if you are _only_ showing the image to cookied users, then I don't have a problem with it.
I just reread the blog post to see how it is described there and the author doesn't make the distinction. But I can see both the username and the security phrase in the screenshots and you say that they come from Ally Bank (or another bank using the same software, I guess). So my criticism stands for the system _described_ in the blog post but not the one depicted.
As for the charge of cynicism: fair, because I didn't go into any details. For the compliance angle, I was relying on this comment further down. As far as marketing goes, it's similar to the little SSL padlock/shield icons at the bottom of a page. It's just theatre. Well, in fact they are supposed to be links to authentication sites, but in practice it's all about assuaging users' concerns. (OK, that's my inner cynic again.)
And how exactly do you plan on serving the image to the user? Do you have a rogue router? Or a homographic URL?
And consider, if you do have either of those things, then you can do so much more than spoofing an image, like reading all traffic, including SSL.
While it's true that adding security images will make the phisher's job a little harder (and yes, you will need a server but then the phisher is serving this page up somewhere already) it doesn't add that much work. And now you have a situation where you've told your users: "If you don't see the image, it's not secure. If you do see the image, you're in the clear!" You're completely undermining them. Don't you think a user is going to look at things a lot less critically if they see their pet dog or whatever staring back at them?
That's why it's worse. It creates a small amount of extra work for phishers, but once they've done that, they are in a much better position. It's like a bad gambit in chess.
Any complaints about the value of the security images should not be addressed to banks. You should direct your complaints to the FFIEC and/or to your bank's regulator (OTS, OCC or NCUA).
Because I can't think of a more frustrating and anti-security policy that claims to be "for your security".
(By requiring many more repeated logins, the risk rises that I'll slip one time and not carefully check that the DNS/SSL info is correct.)
If I intercept a session token via a proxy, network dump, XSS or browser bug I can use it and replay it at any time in the form it was intercepted.
Passwords get sent once and are usually protected in transit and hashed on the server. Session tokens are not, which is why they need to be temporary.
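The difference is easy to see in a toy token scheme. This sketch (not any bank's actual implementation) signs the username and an issue timestamp with an HMAC, so a replayed token stops working once its window expires:

```python
import hashlib
import hmac
import secrets
import time

SECRET = secrets.token_bytes(32)  # illustrative server-side key
TTL = 15 * 60                     # the 15-minute lifetime discussed above

def make_token(user: str, now=None) -> str:
    ts = int(now if now is not None else time.time())
    msg = f"{user}:{ts}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{user}:{ts}:{sig}"

def check_token(token: str, now=None) -> bool:
    user, ts, sig = token.rsplit(":", 2)
    msg = f"{user}:{ts}".encode()
    # Valid only if the signature matches AND the token is still fresh.
    good = hmac.compare_digest(
        sig, hmac.new(SECRET, msg, hashlib.sha256).hexdigest())
    fresh = (now if now is not None else time.time()) - int(ts) < TTL
    return good and fresh
```

An intercepted token is replayable for at most TTL seconds; an intercepted password mints fresh tokens forever, which is the asymmetry the parent comment is pointing at.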
But everything else isn't constant: shorter sessions mean more password-typing transactions, and especially into older tabs that have a "logout successful for your protection" message. That increases the risk of a successful phish, including by the same vulnerabilities you fear could compromise a session token. And practically, the problem with a password compromise is that it gives access to an indefinite stream of new session tokens.
So there's a balance between session-token-risks and login-transaction-risks. I doubt 15 minutes is the optimal tradeoff time -- I'm sure it isn't for me, with my habits on my own computers, and I haven't seen any rigorous evidence it's the right level for the banking masses. Its maddening uniformity across the industry "smells like" an arbitrary check-box from some regulatory document somewhere.
Online services such as iCloud, Facebook, GMail, etc. don't auto-logout, but they do have designated endpoints where you need to re-authenticate (when changing the password, address info, anything dealing with authentication processes, generally) while still logged in.
How would this work for banks? Besides reauthenticating when changing critical account info, should someone be forced to reauthenticate when they make a transfer? Or a transfer of a certain amount of money?
While banking software is very sophisticated, my impression is that the sophistication lives in the transactional system. My impression from using three different banks to manage my funds is that bank corporations are considerably less sophisticated in the user interface arena.
I think many people at HN remember American Express's debug mode snafu:
And I remember when their antiquated system couldn't handle anything more sophisticated than 8 character case insensitive passwords.
I much prefer that banks, for the time being, play it safe with the auto-logout.
If I say something is my secured home computer and I want a longer session, give me a few hours. And if you need to re-auth me "for my protection", do it when I try to do something fishy, like a transfer-out-of-bank or atypical-bill-pay... not just check my balance/ledger for whether a transaction has come through.
The error is the assumption that this does "play it safe": I'm unaware of any study that this decreases account misuse. And if login-phishing is a major (if not the largest) risk, then training someone to constantly expect some random tab to have a "timed out for your protection" screen, needing re-login, just gives phishers another hook where a user's guard is slightly lower.
My bank does this. It also allows batching transfers, so you only have to reauthenticate once.
Keep this in mind when you assess any of these systems: the authentication systems are not put in place to manage customer risk, they are put into place to manage regulatory risk.
Were you expecting it to say "security images are required"?
Are you familiar with reading federal regulator-ese? They do not ever come out and make blanket statements such as use XYZ, ensure X bit keys and so on. The entire process is based on the banks and the examiner's interpretation of the bank's risk profile. If you are interested in learning more about this reading some of the banking industry press coverage at the time may be easier to digest.
I say 12cfr30b because that is what got the ball rolling for OCC regulated banks, and at the time I worked for an OCC regulated bank. Depending on who the regulator is (OTS, NCUA, etc.), the "ball rolling document" will be different.
The issue is that most people rely on memory to store passwords. Any term that is memorable and meets most online password "standards" is short enough for an offline brute force to break pretty quickly, especially if the attacker has some decent resources. The answer to this is "real phrase" passphrases, but many sites with password rules won't allow these.
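The gap is easy to quantify: the entropy of a uniformly random secret is length × log2(pool size), so a four-word phrase drawn from even a modest word list already beats a typical short password (the pool sizes below are illustrative):

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    # Entropy of a secret built from `length` independent uniform
    # choices out of `pool_size` symbols: length * log2(pool_size).
    return length * math.log2(pool_size)

# 8 random characters from a 62-symbol alphanumeric alphabet
# vs. 4 words from a 7776-word Diceware-style list.
short_pw = entropy_bits(62, 8)     # ~47.6 bits
phrase   = entropy_bits(7776, 4)   # ~51.7 bits
```

And a real "memorable" password is far from uniformly random, so its effective entropy is well below the 47-bit ceiling, while adding a fifth word pushes the phrase past 64 bits.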
At what point can you start suing for negligence of proper precautions protecting your money?
It is extremely rare that customer money is at stake when it comes to banking website security. If someone guesses your password and empties your account, the bank will cover the loss--same as if someone held up the teller with a gun.
Online banking security measures--and the regulations that govern them--are more about helping/forcing banks to mitigate the financial risk to themselves, not to their customers.
The most secure authentication uses security cards/keys with a challenge code sent by the bank and the response computed by the key using public-key cryptography. The variant with a USB connection would be the most efficient, convenient and secure.
NFC on phones may look more attractive, but phones are insecure.
As part of the newest releases of RSA's security approach it has been deprecated. In a few years you won't see this anywhere on the web (or, if you do, you'll know that the login and security portion of that site hasn't been looked at in years... also scary).
The banking industry is moving toward one-time passwords sent out-of-band and/or Google Authenticator for "something you have."
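For reference, the Google Authenticator "something you have" codes follow RFC 6238 (TOTP), which is short enough to sketch in full (SHA-1 variant):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, now=None, step=30, digits=6) -> str:
    # RFC 6238 time-based one-time password, built on RFC 4226 HOTP.
    counter = int((now if now is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The RFC 6238 test vectors check out: with the ASCII secret `12345678901234567890` at time 59, the 8-digit code is 94287082. The server runs the same computation, so nothing secret ever crosses the wire at login time.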
So it helps for when you fuck up the username or something else is weird, but security doesn't really rely on it.
Though I don't think there are any banks in my country that don't use 2 factor, so it's a bit of a moot point anyway.
Coming from the US, I mis-parsed your post as (Social Security number) (equivalent for username) on first read and thought "That's so backwards! They're treating SSNs as less important than passwords". It's probably better to say "national ID number" or "national tax ID" rather than "Social security number equivalent".
The images give you as a user a sense of situational awareness -- I know based on the picture which of a half dozen accounts I have (personal, IRA, business, etc) I'm logging into.
They also make it more difficult to misdirect people to a lookalike site via phishing. Even old people recognize that their login picture, normally prominently displayed, is missing.
Cool you found a use for them, but that's not why they're there. Almost always when you're being asked to choose one, it's "for your security..."
> They also make it more difficult to misdirect people to a lookalike site via phishing. Even old people recognize that their login picture, normally prominently displayed, is missing.
First, it's not hard to mirror the interactions of the real site (there's actually a section in the post about sophisticated attacks which addresses this).
Second, I doubt very much that old people would notice anything missing, particularly if you masked it as a site redesign/upgrade. I myself am a fairly cautious user, but would I notice if the site for the MasterCard I recently got, which I've logged into maybe 3-4 times and never more than once a month, asked for my username and password up front rather than in a 2-stage format? I honestly doubt I would.
The less a person understands about computers, the more they rely on habit to use them. My mother calls me to ask what she should do whenever the tiniest change or unexpected balloon pop-up appears. The answer is invariably the same: "Ignore it". But she calls every time, without fail, regardless.
So you might not notice the security image at all. My mother, who's used to her bank website always looking a certain way, will become very concerned when the security image is missing.
Of course, he might still type in his password even if he decides not to go through and submit the form, in which case his data is still compromised.
While not a perfect system, it works to some degree IMO.
I still prefer two-fold auth.
It's just another safeguard, and I think it does a fine job being that. It's not meant as an iron-clad, utterly impenetrable phishing prevention mechanism. If OP believed that, perhaps he is the gullible one.
I really don't think this is a good argument. First of all, it's laughably easy (5 minutes work) to adapt, and sites need to be customized for phishing anyway. Second of all, phishing is the only thing it slows down, and there are many better ways they could ACTUALLY increase security (e.g. actually training people to look for SSL with a trusted certificate). For instance, why on earth don't they have a chrome/firefox/ie/whatever plugin that detects phishing sites? It would not be that hard to look for sites that look similar to Bank of America (for instance) that aren't at the url they're supposed to be at.
Hell, I would even be ok with a (heavily vetted and open sourced) plugin that made sure you never typed in your bank password anywhere but with SSL sites signed by that bank's certificate.
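A crude version of the lookalike-domain half of such a plugin is just edit distance against a list of known bank domains (the list and the cutoff here are invented for illustration):

```python
def edit_distance(a: str, b: str) -> int:
    # Classic Levenshtein distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[-1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

KNOWN = ["bankofamerica.com"]  # illustrative allow-list

def looks_like_typosquat(domain: str, max_dist: int = 2) -> bool:
    # Flag domains that are close to, but not exactly, a known domain.
    return any(0 < edit_distance(domain, k) <= max_dist for k in KNOWN)
```

Real homograph attacks (Unicode confusables like Cyrillic characters) need a confusable-character mapping on top of this, but the structure is the same.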
I feel like we're rehashing the whole thread here. This has been addressed elsewhere. It's more than five minutes, and adds significant expense to the operation because you need an adequate dedicated machine that can run your scrapers, whether you use Selenium, mechanize, phantomjs, or whatever. You also then make your IP a single point of failure and either must operate a network of your own proxies or hope that a public network like Tor isn't (and doesn't become) banned if you're to continue getting responses from the server. A serious phisher who makes a lot of profit from this would still have incentive to do it, but it definitely complicates the operation.
> (e.g. actually training people to look for SSL with a trusted certificate)
SSL isn't fool-proof, and the right approach to SSL verification is both much more complicated to explain and much less prominent than a server-side security image. If the explanation is "look for the solid green bar before the URL", well, nothing's stopping the phisher from getting an SSL certificate that displays that way, and the only thing that prevents self-signed certs is the "wow this thing looks bad" message the browser gives, which most users happily ignore anyway.
"Never type your password if you don't see that cute gorilla" is much simpler and not really much weaker than "Never type your password if you don't see the solid green bar prefixing the URL" (especially since you'll have to explain to many people what the "address bar" is).
> For instance, why on earth don't they have a chrome/firefox/ie/whatever plugin that detects phishing sites?
Because Chrome and Firefox (and probably IE and the others) already have this built in.
>Hell, I would even be ok with a (heavily vetted and open sourced) plugin that made sure you never typed in your bank password anywhere but with SSL sites signed by that bank's certificate.
Are you seriously saying that it would be a net positive for security if banks expected users to download and install any software they gave a link to as a prerequisite to using their site? This is an extremely suspicious tactic and a person vulnerable to phishing is surely not going to vet or confirm the validity of the downloaded code in the first place, so it would just create a precedent expectation of "Oh, I just need to execute this arbitrary code on my OS..." before a user could use their bank.
Though, it'd be interesting to think about extending browsers' built-in phishing detection with verification against SSL certs. It sounds good in theory but I'm not really sure if it would work in practice, unless you think it's OK to have false positives every time the SSL cert changes (which may happen more frequently than you think).
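A sketch of the pin-check half of that idea: hash the server's leaf certificate (in DER form) and compare it against a fingerprint the hypothetical plugin ships with. Certificate rotation is exactly the false-positive problem mentioned above, since the pin breaks every time the cert changes:

```python
import hashlib
import socket
import ssl

def cert_fingerprint(der_cert: bytes) -> str:
    # SHA-256 over the DER-encoded certificate bytes.
    return hashlib.sha256(der_cert).hexdigest()

def fetch_leaf_der(host: str, port: int = 443) -> bytes:
    # Retrieve the server's leaf certificate from a live TLS handshake.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)

def pin_matches(der_cert: bytes, pinned_sha256: str) -> bool:
    return cert_fingerprint(der_cert) == pinned_sha256
```

Pinning the CA's key instead of the leaf cert would survive routine rotation, at the cost of trusting everything that CA signs.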