Worse Than Useless: Personal Security Images (worsethanuseless.com)
93 points by lann on Feb 3, 2013 | 79 comments



Security is not about guaranteeing anything, it's about making it more difficult to break in. The lock on your front door does nothing to guarantee a burglar won't enter your home, it just makes it more difficult to do so.

The examples he gives either have the potential to alert the user to the spoof (via the missing image) or require significantly more work to spoof the user (via a complex proxy at the router level or a homographic URL).

Either way, the barrier to stealing users' credentials has gone up, which is exactly what security measures are intended to do. Hardly useless, and definitely not "worse than useless".


It's more complicated than that. Like physical security, computer security is applied risk management.

Your house probably doesn't have solid metal doors, metal bars over the windows, laser tripwires, a patrolling security guard, and angry Doberman Pinschers. All of those things would increase your home's security, but the cost of the security probably doesn't make sense compared to the risks involved with your house getting burgled.

Similarly, Hacker News doesn't require client TLS certificates, two-factor authentication, and insanely complex passwords. The risk of horrible and/or costly things happening because someone's HN account got broken into really isn't there, so implementing those security measures doesn't make much sense.

Even if the author's bank's security picture doesn't actually decrease fraud at all, the security picture was probably cheap to implement, required minimal user and employee training, and keeps the lawyers and regulators happy. The bank's risk exposure decreases, even if the author's doesn't.


> ... the security picture was probably cheap to implement ...

Although the implementation might have been cheap, there's another factor to consider:

If a system provides a false sense of security, this almost certainly decreases the actual security of the system.

And putting effort (no matter how cheap) into something that decreases overall security is not good risk management.


Not having the image provides even less certainty of security.

You assume that the image being there will make people less likely to check other aspects of the site, like the URL. But consider the average user. Spoofing and phishing attacks work because people don't check these things.

The security image is difficult to spoof and is more likely to clue average users in to attacks. Therefore, it is useful as a security device and is not worse than not having it.


^this.

Also, security images enable other methods for making attackers' lives difficult. Note that forcing proxies to grab the images allows the bank to focus on the IPs that request the highest number of images, or at least block known Tor exit nodes. Sure this forces people running Tor exit nodes to call in, but the percentage is so small that the bank won't care.


Complex proxy?? You mean a headless browser like phantomjs and slightly higher apparent latency for the client. Hardly difficult, which leads to the false sense of security these images provide. The attack is made only slightly harder: on the order of the minutes it takes to write a few extra lines of code.


It would probably need to be more complex than that if the bank is watching for unexpected activity from individual IP addresses.


The code for the proxy itself isn't that complex, no. But it would have to be tailored to the target's banking site. Again, not extremely complex, but more difficult. And actually implementing the attack, including getting a homographic URL or a rogue router, is quite a bit more difficult.

Again, the point is that the security image makes the attackers' lives more difficult. The image lends no "false sense of security" because without the image, you'd have the same sense of security.


I'm always surprised when I write a scraper/proxy (usually in perl) at how little added latency is involved. If I host the thing on a fat pipe (say an EC2 instance), it's not even noticeable at home.


To add to that, the security image can potentially raise the barrier a lot more than the author let on. Usually [1], the bank asks you to answer a security question before showing you the personal security image. Most people don't see this because they check the "remember this computer" option the first time they log in, so even being shown the security question (which a proxy would have to trigger in order to grab the image) will seem suspicious to many users.

[1] I just tested this with Ally (the bank shown in the blog post) and I remember it being true with ING Direct. I haven't tested other banks.
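
For anyone who hasn't seen this flow, here is a minimal sketch of the gating logic described above, assuming a small Flask app with an invented in-memory user store (the routes and field names are illustrative, not Ally's or ING's actual implementation):

```python
# Sketch of the "remember this computer" flow: the security image is only
# returned once a recognized device cookie or a correct security answer is
# presented. Demo data only; requires `pip install flask`.
from flask import Flask, request, session, jsonify

app = Flask(__name__)
app.secret_key = "demo-only"

USERS = {
    "alice": {
        "image": "/images/red_canoe.png",
        "question": "Name of your first pet?",
        "answer": "rex",
        "devices": {"trusted-cookie-123"},  # device cookies seen before
    }
}

@app.route("/login/start", methods=["POST"])
def login_start():
    user = USERS.get(request.form["username"])
    if user is None:
        return jsonify(error="unknown user"), 404
    if request.cookies.get("device_id") in user["devices"]:
        # Recognized computer: show the security image right away.
        return jsonify(step="password", image=user["image"])
    # Unrecognized computer: gate the image behind a security question,
    # so a phishing proxy can't fetch it with the username alone.
    session["pending"] = request.form["username"]
    return jsonify(step="security_question", question=user["question"])

@app.route("/login/answer", methods=["POST"])
def login_answer():
    user = USERS.get(session.get("pending", ""))
    if user is None or request.form["answer"].lower() != user["answer"]:
        return jsonify(error="wrong answer"), 403
    return jsonify(step="password", image=user["image"])

if __name__ == "__main__":
    app.run()
```

The point is simply that, in this flow, scraping the image requires getting past the security question first, which is the extra barrier being discussed.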


No, it really is worse than useless. It is trivial to fetch the image - anyone with a passing familiarity with jQuery can probably do it.


Maybe I'm missing your point but you can't do cross-domain requests in jQuery (without JSONP) so it would still require a server layer.


Whoops, yes of course. Still pretty easy to set up.


If doing it from the server side, I think you'll find that you have to answer the user's security questions before you'll be able to scrape the security image and phrase.

http://news.ycombinator.com/item?id=5160424


The whole point of this kind of man-in-the-middle phishing is that you present a fake page just like the bank's page. Then when they submit whatever it is they have to submit, you do the same via your server and then present them with the next page and so on until you are logged in.

More steps are more work for the attacker but that's not a big deal. The issue is that the security image isn't just another layer. It's a layer that the bank is making a guarantee about that it can't back up. They don't say, "Pick a security image to make it slightly harder for phishers." They say, "Pick or upload your own image so that you really know you're on our site."


You didn't bother to read my other comment that I linked to:

"Most people don't see this because they check the "remember this computer" option the first time they login, so even showing the security question so that one can grab the image will seem suspicious to many users."


I did read it, but I didn't address it directly enough. It seems like there are a few red herrings popping up in this conversation. Homographic URLs, for example. Similarly, if users are used to being logged in, or recognized, due to the presence of a cookie in their browser, then, yes, they are going to see something different. But that can hold true regardless of whether security images are used or not.

What a phisher can do is emulate the 'clean' state. Not logged in, no cookie. Some users will get suspicious and leave the site, sure. It's like a sales funnel, you don't have to convert every visit to make money.

My problem with security images is not that they would never do any good, but that they will do more harm than good. They basically make a promise that they can't keep.

To deal more specifically with your example: I'm not sure what the most prevalent system is but the default one described doesn't involve an extra security question. The image is presented after the user enters their username but before they are asked for their password. If the site follows this flow, then we have a problem. Now in your case the flow is a little different.

It seems to me that showing users a different page based on a cookie is a good idea in that if a user hits the no-cookie version, they might be alerted. But the good part doesn't have anything to do with security images.

As others have posted, the real value of security images is not their security. It's marketing and compliance.


> What a phisher can do is emulate the 'clean' state. Not logged in, no cookie. Some users will get suspicious and leave the site, sure.

Agreed. Where we disagree seems to be regarding what constitutes "some" users. I contend that it's a large enough portion of the total that the security images do more good than harm. I admit that my position is based on intuition. If you have evidence to the contrary, please share it. (That's not meant to be snarky. I really would prefer basing my position on evidence than intuition.)

> I'm not sure what the most prevalent system is but the default one described doesn't involve an extra security question. The image is presented after the user enters their username but before they are asked for their password. If the site follows this flow, then we have a problem. Now in your case the flow is a little different.

I don't know what's most prevalent either. As I mentioned in my other comment, I've sampled too few banks to draw a conclusion, but 100% of the ones I've looked at ask a security question to register your computer before showing the security image. There's a chance I got lucky in the few that I sampled and the rest don't ask a security question, in which case you'd be right: it'd be trivial to defeat. I just don't see any evidence that that's true.

What do you mean by "the default one described?" Do you mean the one described in the blog post? If so, the screen shot in the blog post is from Ally Bank's website, which is one of the banks that I confirmed does ask a security question before displaying the security image.

> It seems to me that showing users a different page based on a cookie is a good idea in that if a user hits the no-cookie version, they might be alerted. But the good part doesn't have anything to do with security images.

The cookied version of a page must be sufficiently unique per user. Otherwise the phisher could emulate the cookied version of the page. You haven't proposed an alternative to the security images, so I'm not sure what you're suggesting here.

> As others have posted, the real value of security images is not their security. It's marketing and compliance.

This is just an appeal to cynicism and doesn't add to the debate.


Good response.

I've been basing my opinion on earlier uses of security images, which were as I described, but I should not have called that the 'default' as I have no idea what the most prevalent type is. I know BoA had a system like that years ago.

I will say now that if you are _only_ showing the image to cookied users, then I don't have a problem with it.

I just reread the blog post to see how it is described there and the author doesn't make the distinction. But I can see both the username and the security phrase in the screenshots and you say that they come from Ally Bank (or another bank using the same software, I guess). So my criticism stands for the system _described_ in the blog post but not the one depicted.

As for the charge of cynicism: fair because I didn't go into any details. For the compliance angle, I was relying on this comment further down [1]. As far as marketing goes, it's similar to the little SSL padlock/shield icons on the bottom of a page. It's just theatre. Well, in fact they are supposed to be links to authenticating sites, but in practice it's all about assuaging users' concerns. (OK, that's my inner cynic again).

[1] http://news.ycombinator.com/item?id=5158212


Sounds like we're on the same page then. Glad we got that sorted out.


How is it worse?

And how exactly do you plan on serving the image to the user? Do you have a rogue router? Or a homographic URL?

And consider, if you do have either of those things, then you can do so much more than spoofing an image, like reading all traffic, including SSL.


Hold on a second, we're just talking about security images, not debating the whole phishing paradigm. Which would be futile, by the way, because phishing happens all the time - sometimes with homographic URLs, sometimes without. As for rogue routers and the rest, you don't need that kind of thing at all. Most phishing is just getting gullible people to click on links in their inbox.

While it's true that adding security images will make the phisher's job a little harder (and yes, you will need a server but then the phisher is serving this page up somewhere already) it doesn't add that much work. And now you have a situation where you've told your users: "If you don't see the image, it's not secure. If you do see the image, you're in the clear!" You're completely undermining them. Don't you think a user is going to look at things a lot less critically if they see their pet dog or whatever staring back at them?

That's why it's worse. It creates a small amount of extra work for phishers, but once they've done that, they are in a much better position. It's like a bad gambit in chess.


The article fails to recognize the value of the "security images" to the banks. The banks have used these images to satisfy the requirements of the FFIEC guidance "Authentication in an Internet Banking Environment"[1].

Any complaints about the value of the security images should not be addressed to banks. You should direct your complaints to the FFIEC and/or to your bank's regulator (OTS, OCC or NCUA).

[1] http://www.ffiec.gov/pdf/Auth-ITS-Final%206-22-11%20%28FFIEC...


Great, they're probably responsible for this too. Every time you try to use a new tab or navigate (back, forward, refresh) on my bank's website, you get kicked out to the main page. It's like someone has never heard of form keys.

http://i.imgur.com/erm0faA.png


Are they the people to blame for the 15-minute bank login timeouts everywhere, too?

Because I can't think of a more frustrating and anti-security policy that claims to be "for your security".

(By requiring many more repeated logins, the risk rises that I'll slip one time and not carefully check that the DNS/SSL info is correct.)


It's a trade off between that and allowing some random stranger to transfer your money if you get up and forget to lock your computer. Our first alpha didn't have the auto-logout and by far the most common piece of feedback we got was that we needed it.


It isn't just if you step away from a computer. A session token that never expires is as good as a password, but with weaker protection.

If I intercept a session token via a proxy, network dump, XSS or browser bug I can use it and replay it at any time in the form it was intercepted.

Passwords get sent once and are usually protected and encrypted or hashed on the server. Session tokens are not, hence why they need to be temporary.
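
As a rough sketch of what "temporary" means in practice (invented names, not any particular bank's code), expiry can be enforced on every request:

```python
# Sketch: sessions carry an expiry timestamp that is checked on every
# request, so an intercepted token is only replayable for a short window.
import secrets
import time

SESSION_LIFETIME = 15 * 60  # seconds; the 15-minute timeout discussed above
_sessions = {}              # token -> (user, expires_at)

def create_session(user):
    token = secrets.token_urlsafe(32)
    _sessions[token] = (user, time.time() + SESSION_LIFETIME)
    return token

def user_for(token):
    entry = _sessions.get(token)
    if entry is None:
        return None
    user, expires_at = entry
    if time.time() > expires_at:
        _sessions.pop(token, None)  # expired: a replayed token is now worthless
        return None
    return user
```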


Holding everything else constant, shorter session tokens reduce one avenue of exploitation, yes.

But everything else isn't constant: shorter sessions mean more password-typing transactions, and especially into older tabs that have a "logout successful for your protection" message. That increases the risk of a successful phish, including by the same vulnerabilities you fear could compromise a session token. And practically, the problem with a password compromise is that it gives access to an indefinite stream of new session tokens.

So there's a balance between session-token-risks and login-transaction-risks. I doubt 15 minutes is the optimal tradeoff time -- I'm sure it isn't for me, with my habits on my own computers, and I haven't seen any rigorous evidence it's the right level for the banking masses. Its maddening uniformity across the industry "smells like" an arbitrary check-box from some regulatory document somewhere.


What's the alternative?

Online services such as iCloud, Facebook, GMail, etc. don't auto-logout, but they also have designated endpoints at which you need to re-authenticate (when changing the password, address info, anything dealing with authentication processes, generally) while still logged in.

How would this work for banks? Besides reauthenticating when changing critical account info, should someone be forced to reauthenticate when they make a transfer? Or a transfer of a certain amount of money?

While banking software is very sophisticated, my impression is that that's in the transactional system. My impression from using three different banks to manage my funds is that bank corporations are considerably less sophisticated in the user interface arena.

I think many people at HN remember American Express's debug mode snafu: http://techcrunch.com/2011/10/06/zero-day-vulnerability-on-a...

And I remember when their antiquated system couldn't handle anything more sophisticated than 8 character case insensitive passwords.

I much prefer that banks, for the time being, play it safe with the auto-logout.


The alternative is longer timeouts, perhaps even indefinite when requested and for low-risk (view-only) activities.

If I say something is my secured home computer and I want a longer session, give me a few hours. And if you need to re-auth me "for my protection", do it when I try to do something fishy, like a transfer-out-of-bank or atypical-bill-pay... not just check my balance/ledger for whether a transaction has come through.

The error is the assumption that this does "play it safe": I'm unaware of any study that this decreases account misuse. And if login-phishing is a major (if not the largest) risk, then training someone to constantly expect some random tab to have a "timed out for your protection" screen, needing re-login, just gives phishers another hook where a user's guard is slightly lower.


>How would this work for banks? Besides reauthenticating when changing critical account info, should someone be forced to reauthenticate when they make a transfer? Or a transfer of a certain amount of money?

My bank does this. It also allows batching transfers, so you only have to reauthenticate once.


Interesting, I've always wondered why so many banks use this system online instead of something more robust like two-step auth.


The only reason the banks use these systems to begin with is the FFIEC guidance. These "improvements" were not voluntarily put in place by forward-thinking bankers; they had to because their regulators told them to. So it is all about the cost of the system. Before anyone says Chase/HSBC makes X billion dollars profit, keep in mind that these regulations also apply to smaller community banks with $5 billion in holdings.

Keep this in mind when you assess any of these systems; the authentication systems are not put in place to manage customer risk, they are put into place to manage regulatory risk.


When I signed up for USAA, by default they had a silly authentication system based on "security questions". I was very disappointed until I found out that they support several mechanisms, and allow you to disable the ones you won't use. Hence, I use the one where I combine my password with a token generated by a mobile app. Maybe other banks have alternate authentication mechanisms stashed away as well?


They do. Bank of America, which to my knowledge is the largest bank that still uses the "security image", also allows users to enable 2-factor authentication via SMS.


Not just via SMS, but you can also purchase a token card called a SafePass.


I've only seen two-step used to validate the user to the server. Are there sites that use two-step to verify the server to the client?


Not a lot of information regarding security images in there, and not all banks use them. BofA does, but not Chase. Therefore I have to assume it's some lame compliance committee within the bank determining usage.


You are safe assuming that risk management at banks is not implemented uniformly across the industry.

Were you expecting it to say "security images are required"?


I skimmed the PDF and didn't see what you're referring to; however, I did see it explicitly point out the ineffectiveness of "security" questions.


This all started with the first supplement the FFIEC put out regarding authentication in an internet environment in 2005[1]. This initial supplement was put out to clarify what was expected of banks especially with regards to the FFIEC examination handbooks regarding e-banking[2] and information security[3]. (This all began with 12CFR30B[4][5])

Are you familiar with reading federal regulator-ese? They do not ever come out and make blanket statements such as use XYZ, ensure X bit keys and so on. The entire process is based on the banks and the examiner's interpretation of the bank's risk profile. If you are interested in learning more about this reading some of the banking industry press coverage at the time may be easier to digest.

[1] http://www.ffiec.gov/bsa_aml_infobase/documents/new_5_2007/O...

[2] http://ithandbook.ffiec.gov/it-booklets/e-banking.aspx

[3] http://ithandbook.ffiec.gov/it-booklets/information-security...

[4] http://ithandbook.ffiec.gov/media/21989/occ-12cfr30-safe_sou...

[5] I say 12cfr30b because that is what got the ball rolling for OCC regulated banks and at the time I worked for an OCC regulated bank. Depending on who the regulator (OTS, NCUA, etc) is the "ball rolling document" will be different.


BMO, Bank of Montreal, uses these along with a security phrase. It's absolutely ridiculous that this is mandated by some standard, but there is no guidance on password strength itself. BMO has a strict 6-character (no more, no less) policy. Oh yeah, before anyone asks: no numbers, no special characters. Chosen by the customer when opening the account.


I find password restrictions often prohibit good but unconventional password models, like the "actual phrase for a passphrase" crowd. I think the possibility of an online brute force should already be near-zero for banking apps, and if an offline brute force attack can be conducted, it's likely that a) your password isn't going to matter much anymore and b) the typically arbitrary password requirements set up by the site aren't going to do much to stop any significant GPU-based hash attack.

The issue is that most people rely on memory to store passwords. Any term that is memorable and meets most online password "standards" is short enough for an offline brute force to break pretty quickly, especially if the attacker has some decent resources. The answer to this is "real phrase" passphrases, but many sites with password rules won't allow these.


Also, per xkcd et al., rate limiting login attempts on a per-user and global basis significantly increases the difficulty of brute-forcing access even given password frequency lists.
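
A toy sketch of that kind of throttling, assuming an in-memory store (a real deployment would use something shared such as Redis):

```python
# Toy sliding-window rate limiter: a few attempts per username per window,
# plus a global ceiling to blunt wide, distributed guessing.
import time
from collections import defaultdict, deque

PER_USER_LIMIT = 5       # attempts per user per window
GLOBAL_LIMIT = 10_000    # attempts across all users per window
WINDOW = 60              # seconds

_per_user = defaultdict(deque)
_global = deque()

def _prune(q, now):
    while q and now - q[0] > WINDOW:
        q.popleft()

def allow_login_attempt(username):
    now = time.time()
    user_q = _per_user[username]
    _prune(user_q, now)
    _prune(_global, now)
    if len(user_q) >= PER_USER_LIMIT or len(_global) >= GLOBAL_LIMIT:
        return False  # reject, or respond with a delay/CAPTCHA instead
    user_q.append(now)
    _global.append(now)
    return True
```

Each login attempt calls allow_login_attempt() first; rejected attempts can be delayed or challenged rather than silently dropped.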


My bank (BNP Paribas in France) has an even worse policy: 6 digits. No more, no less. They try to prevent phishing attempts with mouse-based PIN entry, require you to change the password every so many logins, but the fact remains: there are only one million possible combinations.


> BMO has a strict 6-character (no more, no less) policy.

At what point can you start suing for negligence of proper precautions protecting your money?


IIRC, most of these banks have insurance to cover that case, so in theory you shouldn't lose any money provided you notify them in a timely manner of unauthorized transactions.


This is an absolutely critical point that often gets missed in discussions of online banking.

It is extremely rare that customer money is at stake when it comes to banking website security. If someone guesses your password and empties your account, the bank will cover the loss--same as if someone held up the teller with a gun.

Online banking security measures--and the regulations that govern them--are more about helping/forcing banks to mitigate the financial risk to themselves, not to their customers.


Unless, of course, a reasonable implementation were used, tying the image to a cookie and using the browser security to prevent it being sent to different domains; if you're on a subdomain of a bank already, there are far more effective ways to execute an attack.


This is exactly how Yahoo implemented this. The downside is that you have to select a new "sign in seal" for each browser/computer that you use.


All an attacker has to do is present the "we don't remember this computer" screen and ask them to set up a new image once they "log in".


Exactly. Any proper implementation of this kind of security should not depend on the username alone; that would entirely defeat the purpose.


Bank of America ties it to your user name, which is one of the reasons I quit using them.


This could work if security questions (not the best form of security by themselves) are asked when the request comes from a computer that hasn't been used before. That way, if the phishing site is sending a request on my behalf, it would have to answer my challenge questions (without human intervention, that is) before getting to the image, which makes life harder for an attacker. Of course, the logic for identifying "the first time you are using this machine" needs to be non-stupid (for lack of a better word).


The image is a way for the user to "manually" authenticate the server. It's a weak authentication because an attacker could easily get a copy of this image once he knows the user identifier and forge an apparently valid page.

The most secure authentication is the one using security cards/keys with a challenge code sent by the bank and the response returned by the key using public-key cryptography. The ones with USB connections would be the most efficient, convenient and secure.

NFC on phones may look more attractive, but phones are insecure.
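
For illustration, the challenge/response idea boils down to something like the following sketch, using Ed25519 signatures from the `cryptography` package as a stand-in for whatever the card or USB key actually implements:

```python
# Sketch of challenge-response: the bank sends a random challenge, the
# user's key signs it, and the bank verifies the signature against the
# public key it has on file. Requires: pip install cryptography
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrollment: the private key lives only on the user's token/card.
device_key = Ed25519PrivateKey.generate()
bank_copy_of_public_key = device_key.public_key()

# Login: the bank issues a fresh random challenge...
challenge = os.urandom(32)

# ...the device signs it...
response = device_key.sign(challenge)

# ...and the bank verifies the response.
try:
    bank_copy_of_public_key.verify(response, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```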


This is part of a system called Passmark which was acquired by RSA many years ago.

As part of the newest releases of RSA's security approach it has been deprecated. In a few years you won't see this anywhere on the web (or, if you do, you'll know that the login and security portion of that site hasn't been looked at in years... also scary).

The banking industry is moving toward one-time passwords sent out-of-band and/or Google Authenticator for "something you have."
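
For reference, the Google Authenticator piece is just TOTP; a minimal server-side sketch with the `pyotp` library (enrollment details invented) looks like this:

```python
# Sketch of server-side TOTP verification (the scheme Google Authenticator
# implements). Requires: pip install pyotp
import pyotp

# Enrollment: generate a shared secret and show it to the user as a QR code
# (provisioning URI) that Google Authenticator can scan.
secret = pyotp.random_base32()
uri = pyotp.TOTP(secret).provisioning_uri(name="alice@example.com",
                                          issuer_name="Example Bank")

# Login: verify the 6-digit code the user types in. valid_window=1 tolerates
# small clock drift between the phone and the server.
def code_is_valid(user_code: str) -> bool:
    return pyotp.TOTP(secret).verify(user_code, valid_window=1)
```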


I kinda like my bank's implementation of it: Social security number equivalent for username, then you get the security phrase on the same page where you type in the password, then you get a two factor auth page (cellphone).

So it helps for when you fuck up the username or something else is weird, but security doesn't really rely on it.

Though I don't think there are any banks in my country that don't use 2 factor, so it's a bit of a moot point anyway.


Count yourself lucky for living in a country where your national ID number isn't assumed to be some kind of non-revocable terrible 9-digit pencil-and-paper OAuth token that's shared with half the world. I'm told that Norway's tax IDs are considered no less secret than phone numbers.

Coming from the US, I mis-parsed your post as (Social Security number) (equivalent for username) on first read and thought "That's so backwards! They're treating SSNs as less important than passwords". It's probably better to say "national ID number" or "national tax ID" rather than "Social security number equivalent".


The blog post is worse than useless.

The images give you as a user a sense of situational awareness -- I know based on the picture which of a half dozen accounts I have (personal, IRA, business, etc.) I'm logging into.

They also make it more difficult to misdirect people to a lookalike site via phishing. Even old people recognize that their login picture, normally prominently displayed, is missing.


> The images give you as a user a sense of situational awareness -- I know based on the picture which of a half dozen accounts I have (personal, IRA, business, etc.) I'm logging into.

Cool, you found a use for them, but that's not why they're there. Almost always when you're being asked to choose one, it's "for your security..."

> They also make it more difficult to misdirect people to a lookalike site via phishing. Even old people recognize that their login picture, normally prominently displayed, is missing.

First, it's not hard to mirror the interactions of the real site (there's actually a section in the post about sophisticated attacks which addresses this).

Second, I doubt very much that old people would notice anything missing, particularly if you masked it as a site redesign/upgrade. I myself am a fairly cautious user, but would I notice if the site for the MasterCard I recently got, which I've logged into maybe 3-4 times and never more than once a month, asked for my username and password up front rather than in a 2-stage format? I honestly doubt I would.


I think old people and the less tech-savvy users are exactly the people who would notice something like this.

The less a person understands about computers, the more they rely on habit to use them. My mother calls me to ask what she should do whenever the tiniest change or unexpected balloon pop-up appears. The answer is invariably the same: "Ignore it". But she calls every time, without fail, regardless.

So you might not notice the security image at all. My mother, who's used to her bank website always looking a certain way, will become very concerned when the security image is missing.


That's exactly how my mother behaves as well. She does online stock trading so she has good reason to be cautious.


I disagree with your second point. I manage a financial services portal that uses what looks like this same system (from RSA). We deployed it to satisfy regulatory requirements. Any time there is a hiccup in the system (such as RSA's servers going down or slow) we are flooded with calls asking if it is safe to login. Our users are not tech-savvy, and many are 40+.


This is true for a frequent user but honestly I log into so many different web sites that I'm pretty certain I'd not think much of anything if one of the login pages was redesigned without notice. Fortunately I'm also pretty sure I'd never fall for phishing links, but I'm sure the list of people who'd fall for both is sufficiently long that it doesn't matter.


Right. It's not correct that people will blithely accept an error like "Error: this image failed to load." These people don't think about errors the same way we do. To them all errors are the same and mean there's a catastrophic failure. Developers understand that some errors are minor, but a user faced with an error, where he expects a reassuring "security image," will probably become fearful and bail from that page.

Of course, he might still type in his password even if he decides not to go through and submit the form, in which case his data is still compromised.


Fine, so just omit the area for the image completely. Showing an error in place of the image is obviously a stupid choice over just asking for the password up front, and just omitting it (by pretending there shouldn't be one) will not trigger panic in non-savvy or forgetful users.


This makes me wonder how sophisticated phishing setups are. This seems like something that they would want to A/B test to determine which "converts" more "users".


I think you missed the entire point of the article. Re-read it please.


Passwords ID you to the entity. Images ID the entity to you.

While not a perfect system, it works to some degree IMO. I still prefer two-factor auth.


The image does not ID the system to you, that's the whole point of the article! A spoofer site would simply go to the spoofed site, fetch the image, and give it to you.


Theoretically, the image could be stored as a blob in your localStorage, encrypted with the server's public key. When you go to the bank's site, a bit of AJAX pops it up to them, they decrypt it server-side, then serve it back to you as an image (all over SSL, please.) The phisher can try to do all the same steps, but without the originator's private key, they'll be left with a useless encrypted blob that can't be turned into a servable image.
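
A simplified sketch of that idea, encrypting a small image identifier rather than the image bytes themselves (RSA-OAEP only handles small payloads; a real system would use hybrid encryption), using the `cryptography` package:

```python
# Sketch of the parent's proposal, simplified: the server RSA-encrypts an
# image *identifier* and hands the ciphertext to the browser to keep in
# localStorage. At login the browser posts the blob back; only the server,
# holding the private key, can decrypt it and serve the right image, so a
# phisher who copies the blob learns nothing from it.
# Requires: pip install cryptography
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

server_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_public_key = server_private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Setup: this ciphertext is what the browser would stash in localStorage.
blob = server_public_key.encrypt(b"user=alice;image=red_canoe.png", oaep)

# Login: the browser sends the blob back; the server decrypts it and serves
# the referenced image.
image_ref = server_private_key.decrypt(blob, oaep)
print(image_ref)
```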


Will never happen because it would make it way too hard to access your account on other computers.


The way this mechanism works, you're supposed to go through the image personalization step on each computer you access the account with anyway. (And if you use localStorage, that makes it per-browser).


Right, it doesn't verify the entity with 100% certainty, but it's still probably a good thing to have, because it should be a relatively simple change in the bank's code and it creates a lot of extra work for an attacker.

It's just another safeguard, and I think it does a fine job being that. It's not meant as an iron-clad, utterly impenetrable phishing prevention mechanism. If OP believed that, perhaps he is the gullible one.


I mean, I just think it makes the banks look bad/like they don't actually know how to secure a site.


I think it's a reasonably cheap and fast method to increase security somewhat. It doesn't really mean they don't know how to secure a site otherwise.


> I think it's a reasonably cheap and fast method to increase security somewhat. It doesn't really mean they don't know how to secure a site otherwise.

I really don't think this is a good argument. First of all, it's laughably easy (5 minutes work) to adapt, and sites need to be customized for phishing anyway. Second of all, phishing is the only thing it slows down, and there are many better ways they could ACTUALLY increase security (e.g. actually training people to look for SSL with a trusted certificate). For instance, why on earth don't they have a chrome/firefox/ie/whatever plugin that detects phishing sites? It would not be that hard to look for sites that look similar to Bank of America (for instance) that aren't at the URL they're supposed to be at.

Hell, I would even be ok with a (heavily vetted and open sourced) plugin that made sure you never typed in your bank password anywhere but with SSL sites signed by that bank's certificate.
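
What such a plugin would be checking amounts to certificate pinning; a sketch of the underlying check in Python (a real extension would do the equivalent in JavaScript against the browser's TLS info, and the pinned fingerprint below is a placeholder):

```python
# Sketch of the pinning idea: only treat a host as "your bank" if the TLS
# certificate it presents hashes to a fingerprint pinned in advance.
# The pinned value below is a placeholder, not any bank's real certificate.
import hashlib
import socket
import ssl

PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def cert_matches_pin(host: str, port: int = 443) -> bool:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der_cert).hexdigest() == PINNED_SHA256

# e.g. refuse to auto-fill a banking password unless cert_matches_pin(host)
```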


> First of all, it's laughably easy (5 minutes work) to adapt

I feel like we're rehashing the whole thread here. This has been addressed elsewhere. It's more than five minutes, and adds significant expense to the operation because you need an adequate dedicated machine that can run your scrapers, whether you use Selenium, mechanize, phantomjs, or whatever. You also then make your IP a single point of failure and either must operate a network of your own proxies or hope that a public network like Tor isn't (and doesn't become) banned if you're to continue getting responses from the server. A serious phisher who makes a lot of profit from this would still have incentive to do it, but it definitely complicates the operation.

> (e.g. actually training people to look for SSL with a trusted certificate)

SSL isn't fool-proof, and the right approach to SSL verification is both much more complicated to explain and much less prominent than a server-side security image. If the explanation is "look for the solid green bar before the URL", well, nothing's stopping the phisher from getting an SSL certificate that displays that way, and the only thing that prevents self-signed certs is the "wow this thing looks bad" message the browser gives, which most users happily ignore anyway.

"Never type your password if you don't see that cute gorilla" is much simpler and not really much weaker than "Never type your password if you don't see the solid green bar prefixing the URL" (especially since you'll have to explain to many people what the "address bar" is).

> For instance, why on earth don't they have a chrome/firefox/ie/whatever plugin that detects phishing sites?

Because Chrome and Firefox (and probably IE and the others) already have this built in.

>Hell, I would even be ok with a (heavily vetted and open sourced) plugin that made sure you never typed in your bank password anywhere but with SSL sites signed by that bank's certificate.

Are you seriously saying that it would be a net positive for security if banks expected users to download and install any software they gave a link to as a prerequisite to using their site? This is an extremely suspicious tactic, and a person vulnerable to phishing is surely not going to vet or confirm the validity of the downloaded code in the first place, so it would just set a precedent of "Oh, I just need to execute this arbitrary code on my OS..." before a user could use their bank.

Though, it'd be interesting to think about extending browsers' built-in phishing detection with verification against SSL certs. It sounds good in theory but I'm not really sure if it would work in practice, unless you think it's OK to have false positives every time the SSL cert changes (which may happen more frequently than you think).




