For example, when I install Windows 8 or Mountain Lion, one of the first prompts I must address is:
"Please choose an image to help you identify
valid system prompts"
"The user decides to use an image of a tiger"
See SiteKey: http://en.wikipedia.org/wiki/SiteKey
The whole SiteKey/tiger image solution only gives you an illusion of a solution. What happens when the system displays "System error, unable to display the image"? How will a convincingly-written error message prevent your average gullible or below-average-competence computer user from logging in to a phishing site?
Think of how many things can go wrong on a computer. Think of every time someone asked you why something works one way in this situation but another way in that one, and you had to use a technical explanation (excuse, really) for the inconsistency. Computing is full of that. Until we get to a place where people can actually TRUST and expect consistent behavior from their computing devices, the SiteKey/tiger approach will remain easy to circumvent.
As far as I'm concerned, SiteKey is a brilliant business idea (it sells because it satisfies the regulatory two-factor requirement) but a terrible idea in practice.
Frankly, I believe it's not something you can make happen. I remember a story here not long ago about honeypots in China: businessmen got a full briefing and warnings from MI5 before leaving the UK, and some would still leave their computers and smartphones powered on near the bed. I think it's the same with some users: they just don't learn and never will. (I have another theory that they don't want to learn anything about computers and expect the machine to magically read their minds, but I always end up cursing when I try to explain it, and besides, it's not the point :)
Unless of course they're adult diapers for the discerning (and ageing) security professional.
Please note that as of June 7th, 2012, the system prompt image identification system has been deprecated and is being replaced with new security measures.
If you have any questions or require assistance, contact technical support at email@example.com.
Some fairly large banks here in Norway have at times run with a not-completely-valid SSL certificate, making the bank's login page indistinguishable from a man-in-the-middle.
Answer from their phone support? "Oh yeah whenever you see that warning, just click 'allow' or 'ignore'."
1. Attacker prompts me (or my grandmother) for login name.
2. Attacker gives login name to bank.
3. Bank serves proper image to attacker. Attacker stores image.
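The relay in steps 1-3 is mechanical enough to script. Here is a minimal, hypothetical Python sketch of the flow; `bank_image_for` stands in for whatever HTTP request fetches the SiteKey image from the real bank (all names here are invented, and the bank call is stubbed so the sketch runs standalone):

```python
# Hypothetical sketch of the SiteKey relay attack described in steps 1-3.

def bank_image_for(login_name):
    # Stub: a real attacker would submit login_name to the bank's first
    # login page and scrape back the genuine SiteKey image it returns.
    return f"sitekey-image-for-{login_name}.png"

def phishing_page(victim_login):
    # 1. Victim types their login name into the attacker's page.
    # 2. Attacker forwards it to the real bank.
    image = bank_image_for(victim_login)
    # 3. Attacker shows the genuine image back, so the page looks
    #    legitimate, then asks for the password.
    return {"image": image, "prompt": "Enter your password"}

page = phishing_page("grandmother01")
```

The point of the sketch is that nothing in the image check binds the image to the page that displays it: whoever can ask the bank for it can show it.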
The parent poster suggested that all system messages carry the security image. The user is not prompted for some sort of ID first; they're already using the computer and are presumed to be logged in.
This is the right way to use security images, IMO, although they're still not perfect as others in the thread have pointed out. The way you describe, which I believe BoA uses (just hearsay), is bad security.
Security is never about 100% guarantees. It's about reducing the exploitability and impact of weaknesses.
"The user is then presented 10 images (a tiger,
a house, a moose, etc) from a library of 10,000 images."
The question is: are there 10,000 images that are different enough that people won't be fooled? Say my picture is a green house, and a prompt shows a picture of a red house. Will I accidentally think it's the right site key?
The good news is most people won't even have a picture of a house as their site key, so it will protect a large percentage.
1) If the attacker can scrape the screen, they can detect which image you are using - securing the entire pipeline to the screen is hard.
2) 10,000 images is way too few.
Even if we assume an even distribution of images, as an attacker I can serve the same image to all targets; 1 in 10,000 will now think that they are interacting with a trusted component.
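As a back-of-the-envelope check on point 2, assuming images are assigned uniformly at random from the library (the 10,000 figure comes from the quote above; the million-victim mailing size is an illustrative assumption):

```python
# Attacker serves one fixed image to everyone. With images assigned
# uniformly from a library of 10,000, the chance any given victim
# sees "their" image is 1 in 10,000.
library_size = 10_000
hit_rate = 1 / library_size            # 0.0001

# Against a mass mailing of a million victims, that still yields
# a non-trivial number who see the "right" image purely by chance:
victims = 1_000_000
expected_fooled = victims * hit_rate   # ~100 victims
```

And that is the floor: an attacker who serves the most popular images, or who relays the genuine image as described elsewhere in the thread, does far better.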
[T]he obvious giveaways are used as a pre-qualifier, to ensure with the least possible effort that the ONLY people who respond to the scammers' initial mass mailings (and therefore have to be brought along individually during the later stages) are the absolutely most gullible, ignorant, susceptible, suckers they can find.
Selection is probably not the best way to check whether something is browser content...
One of the big ideas of the web is selectable content. However, UI elements shouldn't be included in this set.
I've written a little more about this (with some screenshots to illustrate my thinking) here: http://blog.dcxn.com/2012/02/29/selectable-elements-are-driv...
For the Github example, you want to disable selecting the file list header and, bizarrely, you tend towards disabling selecting the file list itself. Considering the files and their meta information are Github's content, disabling copy-pasting file names and commit messages is incomprehensible to me: that's the last thing I'd consider disabling. And if I want to copy/paste the entire file list, I might want to copy the list header along with it for the benefit of the recipient. I wouldn't disable selection on anything in your Github example.
The graph label example is just as strange. I tried the linked Morris.js example, and I can't select the label text. How is that a better user experience? What if I want to IM a friend the 2011 Q3 numbers? What if I want to search for similar data? Both quintessential web actions.
I think breaking selection is almost as bad as breaking the back button, the cardinal sin of web apps.
I have used this property quite a few times for perfectly legitimate reasons. Any time that the user needs to click and drag to accomplish an action other than selecting text, you would want to use this.
Obviously, this is useful for a lot of things, disabling selection being only one.
You don't want users to accidentally drag your images all over the screen when they're trying to click on them (this happens daily when I test the games I work on).
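The property under discussion is CSS `user-select`. A minimal sketch of the legitimate use just described: disable selection only on the draggable game elements and leave real content selectable (the class names here are made up for illustration):

```css
/* Hypothetical: prevent accidental selection while click-dragging sprites */
.game-sprite {
  user-select: none;          /* standard property */
  -webkit-user-select: none;  /* older WebKit-based browsers */
}

/* Real content stays selectable; don't apply `none` site-wide */
.file-list,
.chart-label {
  user-select: text;
}
```

Scoping it this narrowly is the difference between the legitimate use and the Github/Morris.js complaints above.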
If something asks you about updating/downloading/etc., reject it. You decide what to do and when: you type the URL into the browser yourself, or go to the normal menu/dialog/tool for updating.
(This is partly why the Chrome browser is right and the normal approach is wrong: if/when it needs an update, it just does it.)
People are used to the computer being in charge and commanding them. This is bad from a UX point of view, but now I see it also affects security.
Yet another reason popups of all kinds should be forbidden.
When all application-initiated communications come from the OS notification area, this kind of dialog will make people wary. Which is a win.
This reminds me of login spoofing of yesteryear. How do you know if the login prompt on a shared computer or terminal is really from the OS or is a user-level program trying to steal passwords?
The usual solution was to hit a special attention key--like the "break" key under UNIX or Ctrl/Alt/Del for Windows--that user-level programs could not intercept.
Could we use the same idea here? Holding the "break" key will highlight genuine messages from the browser or the OS.
It didn't save any passwords or such; it just displayed some random funny nonsense message after the user entered a login and password, and then looped back to the login prompt with a failed-login error message.
Even with this obvious message, which should have warned an alert user that the terminal was suspicious, we (my friends and the lab manager) got a few laughs when people coming to the lab and finding all the VT terminals taken would use the PCs to log in and try several (many!) times before giving up, at which point we would tell them the truth. Mind you, these were people comfortable with VT terminals and the unix CLI, and somewhat computer savvy!
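The spoofer described above amounts to only a few lines. A hedged Python rendering of the idea (the original was presumably whatever scripting the VT lab machines had; all details here are invented to match the story):

```python
# Minimal sketch of the login spoof described above: it never records
# credentials; it just mimics a failed login attempt.
import getpass

def fake_login_once(read_user=input, read_pass=getpass.getpass):
    user = read_user("login: ")
    password = read_pass("Password: ")
    del user, password          # nothing is stored, as in the story
    return "Login incorrect"    # the same message a real failed login prints

# The prank simply ran fake_login_once() in an endless loop, so every
# attempt at that terminal appeared to fail.
```

This is exactly why secure attention keys exist: the only thing distinguishing this loop from the real login prompt is pixels on the screen.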
There's only so much we can do if the end user refuses to think. I suspect a lot of these people will be migrating to locked down/walled garden devices soon anyway.
I was watching a user a month or so afterwards and noticed they just pressed Enter every time an alert box popped up: immediately, without reading and without thought.
Alerts on computers aren't there to be read any more. They're confusing annoyances that you just click Yes to. They're usually badly written, in that they tell a normal person nothing; they're without context; and they usually exist because a programmer was putting off a decision.
As programmers we nagged our users too much; to turn around and blame them for not thinking is a sublime irony, given that we were the ones not thinking, constantly asking for reassurance that it wasn't us making a mistake.
Considering that polished pieces of software ship with typos as well, a typo isn't an indicator of a virus or malware.
Note to malware coders: add ignore, repair, quarantine buttons that run the same code.
The example I recall was a "ribbon" in the OS that slid out to reveal the dialog. If a dialog presented itself but the ribbon remained along the edge, you could immediately tell it was spoofed. Of course, this requires that the OS not allow untrusted code to reposition/hide the ribbon or present a full-screen display without prompting the user.
Another example: iOS grays out the background (including the status bar at the top) when presenting a modal password prompt. However, this could easily be spoofed by a full-screen native app. The only way to solve that is to require authorization to enter full-screen mode.
Browsers are improving. At least Chrome shows the URL at the top of all popup windows. Entering full screen mode requires user authorization.
That of course doesn't solve the OP's problem of spoofing a floating window purely inside a webpage, but that really needs to be solved at the OS level.
We tried this with "AOL Certified Mail", which had an unspoofable official chrome, and I don't remember any serious drop in phishing.
EDIT: Also: spot on! I thought browser-induced popups would have a clear signature of where they came from. Obviously this is no longer the case.
Anyway, asking the end user security questions is always a bad choice. There is an excellent paper about this by Ka-Ping Yee:
But then, of course, to relieve users of the burden of making security decisions, one needs the whole chain: authentication of executables, access control, and a trust system to dispense privileges.
EDIT: a better link to Yee's paper
The problem UAC solves is that you click on a harmless dialog, but suddenly an important dialog is swapped in under your mouse. A fake UAC can't do that.
But when I'm trying to explain to my dad how to know what to trust and what not to trust I realise it's completely hopeless. You can fake almost everything that a non-techie would know to check.
Other than Dropbox public URLs, the services that exist have so many images with the word "Download" in the resulting link, all of which look exactly like a UI element, that you have to click about half of them or play Sherlock Holmes to uncover the real download link. It's like a scratch-off lottery.
Put another way, if you want to steal a million dollars, do it by stealing $100 from 10,000 people. Much safer than stealing $100,000 from 10 people.
It turns out the letter really did come from BoA.
EDIT: and once Chase sent me a letter telling me to reply by September 31. http://danweber.blogspot.com/2009/08/chase-does-it-again.htm...
The customer service people at the card issuer had no clue what was going on. They'd never heard of it either. They told me they'd escalate the question to a manager and call me back, but they never did.
Then we just have to educate users to press this panic button whenever something that looks like a popup is on screen. If it's a real popup, it'll do the modal flash thing; otherwise the browser -- and everything in it -- turns yellow.
e.g. on Windows the browser looks like this:
instead of this:
I also note that Dell's laptop division has a number of malware authors hard at work.
Me: Okay mom, if you ever get a popup that you were not expecting, try to move it outside of the browser before clicking on it. If you can't, it's fake.
Fairly simple, for now.
But to the advantage of the good guys, anybody with the brains and discipline to do a better job of this kind of thing is much more likely to be able to make a better living honestly than through fraud and deception.
While poorly written English is a red flag to some of us, not all computer users are native speakers of English, even in English-speaking countries. They are much less likely to notice usage and spelling errors.
Yes. That's better than nothing.
The philosophy of HTML5 seems to be to allow applications to do a lot of things that don't require much trust to be placed in them (and most applications don't need much), rather than getting security by asking the user's permission (as most desktop OSs do) when users are unlikely to have much idea which developers they should trust, or through accountability/review (as iOS does), which adds barriers to entry.