I like the approach they take in Singapore - take the default posture that users will probably not be security aware and will also reject your advice.
Want to log in to your bank account? It's 100% required that you have a two-factor SMS token in addition to your user ID and password.
Want to do bill pay to a new payee? Not only do you need your two-factor SMS token to first log in and then make the payment, you also need the physical token they sent you to cryptographically sign the new payee's account information before you can add them.
Coming from the United States, I'm blown away by how much more secure (and convenient - I love bank-to-bank transfers, haven't used a paper cheque in 2+ years) banking is in Singapore. It probably also suggests why PayPal took off faster in the USA than here.
>Want to log in to your bank account? It's 100% required that you have a two-factor SMS token in addition to your user ID and password.
Thing is, if making the choice to have two-factor authentication for the bank is rationally wrong (i.e. the cost of security exceeds the damage of compromise), this isn't actually helping?
The point of the article is that users reject security advice for good reason. Forcing them to accept your security advice doesn't help; quite the opposite, it is forcing them to pay the cost of security despite it being not viable.
(in this particular example it may be that the two-factor auth is low-friction enough that it 'pays off'. If the article is right, simply giving people the option of two-factor auth will have them use it)
IME people don't worry too much about 2FA/really secure passwds until they're protecting something they think is valuable. I didn't 2fa my github account for a long long time, despite the ease of use, until I got added to some big projects.
Likewise, the average bank member is (speculating here) gonna say, "who wants to steal that $75 in my checking account? not worth it". But an account breach could end up costing a lot more than $75, especially to the bank who has to file insurance claims and do the paperwork, and ofc the time lost to the person who only had $75 to begin with.
What I'm trying to say is (I think) that the cost of a bank account breach is hard to measure, because there are a ton of marginal costs associated with it. Locking your credit reports, getting new bank accounts, changing all your autopays, possibly bouncing checks or autopays not going through, filing police reports, filling out claims paperwork, following up on all that, unlocking your credit next time you want to buy a car, etc. Lots and lots of marginal costs in terms of fees and your time.
> Thing is, if making the choice to have two-factor authentication for the bank is rationally wrong (i.e. the cost of security exceeds the damage of compromise), this isn't actually helping?
But it isn't rationally wrong, from the perspective of the bank. The bank isn't paying for the user's time while using 2fa; for the bank the cost is "only" creating and operating the 2fa system.
Likewise, if you are an ISP or a datacenter operator, security best practices might actually make economic sense, both because of the higher threat and the relatively high stakes.
The article is talking about total economic costs (i.e., to all users, the bank, etc.), not just from the perspective of the banks. In other words, we as a society may lose more by implementing "best security practices".
I get what you're saying, but living in a sketchy cell reception area makes me cringe at the thought of requiring SMS (let alone security concerns surrounding SMS as others have already mentioned). A mobile app instead solves it as near as I can tell... assuming you're okay with requiring a mobile device.
A point that Bruce has harped on for years is that the law often creates the wrong incentives for security policies. In particular, in the U.S., the downside of what we call "identity theft" often falls squarely on the consumer, while the security policies for the banking industry are designed by banks. Thus the policy makers may not have to live with the downsides of their own policies.
I'm guessing that the incentives created by the law for banking security are rather different in Singapore -- maybe not ideal, but certainly different.
Can you (or anyone else) comment knowledgeably on this?
Government regulation. The MAS (Monetary Authority of Singapore) basically decreed that all financial institutions follow a standard set of (really, really good) guidelines for Internet Banking. Two-Factor authentication and Transaction-Signing by External Token are just a couple of the 182 requirements.
For example, check out Appendix A - "COUNTERING MAN-IN-THE-MIDDLE ATTACKS".
It has specific guidance such as: "Digital signatures and key-based message authentication codes (KMAC) for payment or fund transfer transactions could be used for the detection of unauthorised modification or injection of transaction data in a middleman attack. For this security solution to work effectively, a customer using a hardware token would need to be able to distinguish the process of generating a one-time password from the process of digitally signing a transaction. What he signs digitally must also be meaningful to him. This means the token should at least explicitly show the payee account number and the payment amount from which a hash value may be derived for the purpose of creating a digital signature. Different crypto keys should be used for generating OTPs and for signing transactions."
The United States (at least the banks I dealt with) is way behind where Singapore is in terms of security.
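To make the transaction-signing requirement from the guideline quoted above concrete: the token computes a short code over exactly the payee account and amount shown on its display, under a key separate from the OTP key. A rough sketch of the general shape of such a construction - generic HMAC, not the actual MAS-mandated or any vendor's algorithm; the key, message format, and truncation here are my own assumptions:

    import hmac
    import hashlib

    # Provisioned into the hardware token; deliberately NOT the same secret
    # that generates login OTPs, per the guideline quoted above.
    SIGNING_KEY = b"hypothetical-signing-secret"  # illustrative only

    def sign_transfer(payee_account: str, amount_cents: int, digits: int = 8) -> str:
        """Return a short code bound to exactly the data shown on the token's display."""
        message = f"{payee_account}|{amount_cents}".encode()
        digest = hmac.new(SIGNING_KEY, message, hashlib.sha256).digest()
        code = int.from_bytes(digest[:8], "big")
        return str(code % 10**digits).zfill(digits)

    # The customer keys the payee and amount into the token, reads the code
    # off its display, and types it into the web form. The bank recomputes
    # the same HMAC server-side with its copy of the key; a man-in-the-middle
    # that swaps the payee or the amount produces a mismatch and the transfer
    # is rejected.
    print(sign_transfer("123-456789-0", 250_00))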
Those are good points, but I'm not a fan of two-factor SMS. It made sense back when a cellphone could make calls and send texts and nothing more, but smartphones are used like computers now. Say you have the typical situation: no PIN on the phone and the password auto-filled by the mobile browser for the site. Now if someone loses that phone, another person can log in, looking at history or auto-fill settings for clues. Alternatively, if there is a password for the site, it's either really simple or the password reset request will come in to the mail client on the phone. Since phones are way more than they used to be, the second factor these days should be a separate dongle that only does that one factor, and the vendor should force it to have a PIN before it can be used. So SMS two-factor is not really all that much better security-wise, and it makes things more inconvenient, doubly so for those like me who don't even have a cellphone.
If you are using your phone for the "something you have" factor in 2FA, you should probably have the device secured with a password, PIN, etc. That said, you make a good point: if someone has your phone, it's not locked down, and Google (or whoever) remembers your password on your phone, then you are screwed.
People use phones for the second factor because they are convenient and always with them. I think the solution to the problem you stated above is to use "something you are" (thumbprint biometrics) as the second factor of authentication instead of "something you have" (SMS code/mobile authenticator).
In the next couple of years, most high-end smartphones will probably have thumbprint scanning technology. This seems like a better solution (and more convenient), as you only have to scan your thumb and it'll work as long as you have data; you don't need to worry about whether you have good cell reception (for SMS).
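For what it's worth, the codes from a mobile authenticator app don't need SMS, cell reception, or even a data connection - they're typically TOTP (RFC 6238), which is just an HMAC of the current 30-second time window under a secret shared with the server at enrolment. A minimal standard-library sketch (the secret below is the usual documentation example, not a real one):

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        """RFC 6238 TOTP: HMAC-SHA1 over the current time step, dynamically truncated."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // period)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)

    # Both the phone and the server derive the same 6-digit code from the
    # shared secret and the clock, so nothing needs to be transmitted at
    # login time.
    print(totp("JBSWY3DPEHPK3PXP"))  # example base32 secret, not a real account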
In most cases you can remotely wipe the compromised device but it will take some small amount of time to do so. You can also de-authorize the device in your password manager, etc.
> Want to log in to your bank account? It's 100% required that you have a two-factor SMS token in addition to your user ID and password.
Still leaves the user open to MITM, viruses, and SMS interception. You should take a look at chipTAN [1] [2]. The user has to confirm account number and transfer amount on a little device before the crypto chip on the debit card generates a TAN that is valid only for this specific transaction.
I'm not a great fan of German IT (often unimaginative and slow-moving; e.g. none of the major German e-mail providers offer any form of 2FA), but this chipTAN thing is pretty fascinating from a crypto standpoint.
We use a system like this in Belgium. You slide your card into the device (which is not connected to the internet), then input a challenge and (sometimes) the account number to transfer to and the amount to transfer, followed by your pin. The device then shows a response which you input into the online banking to sign your transaction. The account number and amount only have to be input for large transactions or new account numbers. They used to not require that, but then a set of viruses went around which would show an error page to the user but use the response token to perform a large transaction in the back-end.
It's inconvenient, but relatively secure. All in all a worthwhile trade-off, I would say. It was shocking to me how casually credit card transactions were done when I went to the US over the past few years. I didn't even know the magnetic strip on my card had a purpose until I had it swiped at a register. I've never seen a store in Europe use the magnetic strip and signature; they always use the chip-and-PIN method.
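In case it helps to picture it, the card-reader flow described above is a classic challenge-response: the bank's site shows a challenge, the offline reader has the card check your PIN and then mixes the challenge (and, for large or new transfers, the payee and amount) with the card's secret, and the response proves you held the physical card. A very rough sketch of the shape of it, with made-up key handling - not the real EMV-CAP/chipTAN algorithm:

    import hmac
    import hashlib

    CARD_KEY = b"hypothetical-secret-inside-the-chip"  # never leaves the card
    CARD_PIN = "1234"                                  # verified by the card, offline

    def card_response(pin: str, challenge: str,
                      payee_account: str = "", amount_cents: int = 0) -> str:
        """What the offline reader displays after the card accepts the PIN."""
        if pin != CARD_PIN:
            raise ValueError("PIN rejected by card")
        # For big or first-time transfers, the payee and amount are mixed in,
        # so malware can't reuse the response for a different transaction.
        message = f"{challenge}|{payee_account}|{amount_cents}".encode()
        digest = hmac.new(CARD_KEY, message, hashlib.sha256).digest()
        return str(int.from_bytes(digest[:4], "big") % 10**8).zfill(8)

    # The user types this 8-digit response into the banking site, which
    # checks it against the same computation done on the bank's side.
    print(card_response("1234", challenge="87654321",
                        payee_account="BE68539007547034", amount_cents=12000))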
Yes - the Singapore system uses an offline "signing" physical dongle: when you wish to add a new payee, you enter the payee's account number into the dongle, click "sign", and then enter the resulting code into the website.
The thing is - once you get used to this system, the "cost" faced by the user goes away. I type in a 2FA code a dozen times a day when logging in, and the overhead (wait 3 seconds, type in a 6-digit number in 3 seconds) is really pretty insignificant.
Typing a 2FA code a dozen times a day is a pretty significant cost. You might be used to it, but from someone who does this at most once a day, I find it burdensome.
It's important to note that all this is bank security. Someone else gaining access to your online bank account is a catastrophe, losing control of your Steam account is not. Not to mention your 9gag/genericSite accounts. I mean, nobody wants them anyway. Passwords could be optional for all I care.
The only accounts actually worth protecting are bank/government accounts, email and maybe Facebook.
In the US, banks are required to make you whole for personal accounts if you didn't authorize the transaction. Thus it's in the banks' interest to clamp down on the fraud.
Small-businesses need to watch out. They often have significant cash in accounts, are targeted because of that, and don't have the same legal protection.
In some countries, banks give account holders books of banknote-sized paper 'cheques' or 'checks' - you can see pictures at https://en.wikipedia.org/wiki/Cheque - they can use for making payments.
They have some convenient properties, but poor fraud protection. As such, nowadays they're mostly relegated to applications where fraud isn't a big worry.
For example charity donations, payments to friends and family, paying for little johnny's karate lessons, stuff like that. Some utility companies also allow you to pay by mailing in a cheque.
Even so, two parties who trust each other can use them without calling the police about it. (Actually a post-dated check is not so much illegal as it is invalid and therefore not intrinsically worth anything.)
About ten years ago I worked for a major company, one which operated in all 50 US states, that paid me via post-dated checks. I had to wait a few days after receiving my check in the mail before I could deposit it.
Few charities or small martial-arts businesses will trust a paper check. The possibility for fraud and nonpayment is too threatening to a small business to be worth it - they'd rather miss the sale than take payment in check.
I haven't seen a non-utility, non-bank business accept paper checks in about 13 years.
I volunteered as a treasurer of a (very) small non-profit org in the USA and in the last couple years I'd conservatively say I deposited many hundreds of checks, with two bounces. Perhaps 3/4 the total cash flow. Figure maybe $10K/yr worth of rather small checks. Generally people trust a check made out to "xyz" will end up in the "xyz" bank account better than just handing money to someone and hoping for the best. Also being a small org ALL our payments were done by me hand writing checks for expense reimbursements, various fees, rentals, etc. To minimize the appearance of impropriety, no cash payments were permitted, all documented and traceable check payments with the check number linked to transaction paperwork (reimbursement form, etc).
Most businesses have to pay for the privilege of having a business account, plus per-transaction fees; as a volunteer org our banking expenses were zero, and all the competitors (Square, PayPal, various wallets, etc.) want a cut of the action, even if it's small, so we had no interest.
A startup in the wallet business could get a lot of buzz and transactions by offering free service to all volunteer orgs / non profits. Even just temporarily for a year...
I assume that was a response to the ancestor comment about "charity donations, payments to friends and family, paying for little johnny's karate lessons, stuff like that."
FWIW, personal checks for religious offerings seem to be common and accepted (at least accepted enough that I've seen them in offering plates in multiple churches, and I've never seen any church tell people that they'll reject personal checks), so I think you're right on that front.
Anecdotally, I pay for my music lessons with personal checks, and I paid rent in my SF apartment, to an individual landlord, with personal checks sent via USPS. In both cases there are reasonable means to complain if those checks bounce.
- They can be mailed, unlike credit cards and cash
- They have no fees for either the sender or receiver
Of course you sacrifice convenience and security in exchange for no fees, which is why they are most commonly used for large transactions between trustworthy parties. For example, tax payments to the IRS.
Applied to a population, the argument makes sense:
100 million users spend 1 minute/day verifying URLs --> cost of $33M of lost productivity (assume a wage of $20/h) --> avoids 10,000 successful phishing attacks (.01% of the population) --> saves $5M (each victim loses $500) --> not worth following the security advice (since $33M is far greater than $5M)
Applied at an individual level, the argument makes less sense:
1 user (i.e., me) spends 1 minute/day verifying URLs --> cost of $0.33 of lost productivity --> avoid .01% chance of phishing attack --> avoid .01% chance of loss of $500 --> but in the event I do get phished, my loss is $500 + WEEKS of hassle with banks, credit reporting agencies, etc, to clean up the mess!
This is like the antibiotics trade-off. We don't want the population to overuse antibiotics to avoid building resistance in the population. But if I'm sick, and there's only a 10% chance that the antibiotic is useful (and 90% chance that my illness is viral and therefore the antibiotic is useless but otherwise harmless), then it's still in my individual interest to take it.
Interesting logic but bad conclusion -- why didn't you do the final calculation?!?!
1 minute/day verifying URLs --> cost of $0.33 of lost productivity --> avoid .01% chance of phishing attack --> avoid .01% chance of loss of $500 --> avoid a loss of $0.05
With these numbers, it never makes sense to verify URLs ($0.33 > $0.05).
> my loss is $500 + WEEKS of hassle with banks, credit reporting agencies, etc, to clean up the mess!
This is the rub IMO. If it costs you $2,000 in time/lost wages/whatever to work through fixing it, the expected loss still only comes to $0.25/day, i.e. still not worth it. If it costs $3,000, then it's worth it.
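A quick sanity check on where the breakeven sits, using the same illustrative numbers (1 minute/day at $20/h, a 0.01% daily chance of being phished, $500 direct loss):

    # Breakeven cleanup cost for the URL-checking example above (illustrative numbers).
    daily_check_cost = (1 / 60) * 20.0    # 1 minute/day at $20/h -> ~$0.33
    attack_rate = 0.0001                  # 0.01%, treated as a daily probability
    direct_loss = 500.0

    # Checking URLs pays off only if the expected daily loss exceeds the daily cost:
    #   attack_rate * (direct_loss + cleanup_cost) > daily_check_cost
    breakeven_cleanup = daily_check_cost / attack_rate - direct_loss
    print(f"breakeven cleanup cost: ${breakeven_cleanup:,.0f}")  # ~$2,833

    for cleanup in (2_000, 3_000):
        expected_loss = attack_rate * (direct_loss + cleanup)
        worth_it = expected_loss > daily_check_cost
        print(f"cleanup ${cleanup}: expected loss ${expected_loss:.2f}/day, worth checking: {worth_it}")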
> This is like the antibiotics trade-off.
Not really - you saw "population" and made the connection, but it isn't there. In the antibiotics situation, there's a common resource - antibiotic effectiveness - that is slowly depleted as a member of the population partakes. It's in the population's interest to maintain the resource, and in the individual's interest to deplete it. This is called Tragedy of the Commons [0].
For security though there is no analogous common resource; my security practices as a user and yours aren't connected in that way. It's everyone for themselves.
You account for the time hassle in one case but not the other. You need to account for it in both, with the population using the average value of time for that population.
The end result is that it is likely worth it for some people whose time is worth more than for others.
The only problem is the worth of time. Is a CEO making $200/hr missing their kid's baseball game really losing more than the $10/hr admin staff missing their kid's baseball game?
>This is like the antibiotics trade-off. We don't want the population to overuse antibiotics to avoid building resistance in the population. But if I'm sick, and there's only a 10% chance that the antibiotic is useful (and 90% chance that my illness is viral and therefore the antibiotic is useless but otherwise harmless), then it's still in my individual interest to take it.
I'm not sure this is all that related. For antibiotics, the problem is that someone else taking an antibiotic has a cost to you (in bacteria becoming more resistant). Thus, you taking it gets you a gain of 1000 at a cost of 1 but someone else taking it gets you a gain of 0 at a cost of 1. This is an example of everyone using a strategy that helps them at the cost of others, which is a strategy that does not scale well when trying to maximize the gain of the whole group. In the security example, if it is beneficial for the average person to use the 'increased security' strategy, then the strategy is a net benefit when scaled (if done so evenly across the population, which may be a faulty assumption).
In short, security scales the benefit (or cost) evenly: k gain for 1 person, kn gain for n people; k cost for 1 person, kn cost for n people. Antibiotics do not: (k - j) gain/cost for 1 person, n(k - nj) gain/cost for n people.
Edit:
To be specific, the n(k - nj) is based on a person getting k benefit at j cost from using an antibiotic, while everyone else gets only the j cost. So one use of an antibiotic is 1 person getting k benefit while n people each get j cost, or (k - nj). This is then multiplied by those n people each taking an antibiotic.
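A compact way to write the same comparison, keeping the parent's k, j, and n and adding only a symbol c for the per-person cost of the security advice (a sketch of the model as I read it, not anything from the paper):

    % Keeping the parent's k (gain), j (externalized cost), and n (people);
    % the per-person security cost c is my own symbol.
    \begin{align*}
      \text{security advice, } n \text{ users:} \quad & W_{\mathrm{sec}}(n) = n\,(k - c) \\
      \text{antibiotics, one use:} \quad & \text{user nets } k - j, \quad \text{society nets } k - nj \\
      \text{antibiotics, } n \text{ uses:} \quad & W_{\mathrm{abx}}(n) = n\,(k - nj)
    \end{align*}

For large n the nj term dominates, which is the commons structure the parent is saying security advice doesn't share.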
In your fourth paragraph you include the lost time/money of cleaning up the theft. But in the second paragraph you don't. This means you're not actually comparing the two possibilities.
I treat password strength relative to the importance I give the service I'm using. If it is something I care about, I will use an 8-12 character password with a few uppercase letters and digits. If it is something I don't care about but that requires an account, "1234" should be enough.
I have even given up on registering on a few sites because they required a safe password. This is getting even more common for me with mobile apps. Typing long passwords on a small touch-screen keyboard is difficult.
Troy Hunt comments on this. If a non-important site shares a password with an important site, that is an attack vector (I'm sure you aren't doing this, but many users do). So if you stick to giving all non-important sites weak passwords, you'll probably be fine; you just have to make sure there is no attack vector to another site of more importance.
For example, if one of them has the last 4 digits of your credit card, they can call customer service at another, more important site and get more information, building to a full-scale attack. It could happen in a similar way to what happened to Mat Honan: http://www.wired.com/2012/08/apple-amazon-mat-honan-hacking/
However, that example leads to what the article is talking about. If it's a low probability then users figure the risk is worth it.
I use LastPass and it seems to work well. Have it generate a strong 12-character password with uppercase, lowercase, special characters and numbers (depending on the restrictions of the application). Secure it with a strong master password and change the master password on a regular basis.
That said, if someone guesses your master password, then you are in trouble.
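For what it's worth, the "generate a strong 12-character password" part is also easy to do yourself - Python's secrets module exists for exactly this kind of thing. A small sketch (the symbol set and the one-per-class rule are just one way to satisfy typical site restrictions):

    import secrets
    import string

    def generate_password(length: int = 12) -> str:
        """Random password with at least one lowercase, uppercase, digit, and symbol."""
        if length < 4:
            raise ValueError("need room for all four character classes")
        classes = [string.ascii_lowercase, string.ascii_uppercase,
                   string.digits, "!@#$%^&*"]
        # One guaranteed character from each class, the rest drawn from all of them.
        chars = [secrets.choice(c) for c in classes]
        alphabet = "".join(classes)
        chars += [secrets.choice(alphabet) for _ in range(length - len(chars))]
        secrets.SystemRandom().shuffle(chars)
        return "".join(chars)

    print(generate_password())  # different every run, e.g. 'q7R!kv@2Lw9e'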
For accounts that are unimportant to you, it seems logical to learn one complex password that you use across all these sites. However, there is a danger that an account is actually more important than you suspect - perhaps it gives away a clue.
That's actually probably the worst thing you can do. Password reuse is a bigger problem in practice than password guessability.
I use password generation schemes. For example, you might decide to use the highest-grossing films of various years. You can then write down the site name and a year in a file and then be able to derive a password, and it gives you dozens of unique passwords that are still resistant to dictionary attacks. It also tends to satisfy sites that require at least one number, one upper-case, and one lower-case letter.
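A sketch of what that kind of scheme looks like mechanically - the film list and the formatting rule below are purely illustrative stand-ins for whatever you actually memorize; the point is that only the (site, year) note ever gets written down:

    # Illustrative only: the memorized "source" could be any list you know well.
    HIGHEST_GROSSING = {
        2009: "Avatar",
        2012: "The Avengers",
        2013: "Frozen",
    }

    def derive_password(site: str, year: int) -> str:
        """Turn a written-down (site, year) note plus a memorized list into a password."""
        title = HIGHEST_GROSSING[year]
        initials = "".join(word[0] for word in title.split())
        # Made-up formatting rule; pick your own and keep it in your head.
        return f"{initials}{year}!{site[:3].capitalize()}"

    print(derive_password("github.com", 2012))  # -> 'TA2012!Git'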
It's worse than that. Since modern systems are multi-layered and many of the layers are not even administered by the user, even users that followed advice given are vulnerable to loss, so for folks trying to make an economic trade-off in terms of time and hassle, it's all just a crap shoot. Do some stuff that you feel might be reasonable, like install Norton or something, come up with a password that includes both your name and your ssn, then wear gloves when you click on pron sites.
It's really quite ludicrous the situation we put the average user in. There are folks who spend hundreds of hours worrying about security and still get taken to the cleaners. What chance does Joe Sixpack really have?
"Looking at various examples of security advice we find that the advice is complex and growing, but the benefit is largely speculative or moot. For example, much of the advice concerning passwords is outdated and does little to address actual threats, and fully 100% of certificate error warnings appear to be false positives."
I've thought the same about highly restrictive network firewalls for years. Most threats today are 'pulled' in via http, e-mail, software update feeds, etc., or entry is made via phishing or social engineering. Highly restrictive firewalls don't do anything about any of that, and they impose significant inconvenience. Your firewall is security theater.
Part of the problem with security is that it's a gut feeling, unsupported "expert" opinion, and tech-folklore driven discipline. At worst it's cargo-cultish and almost superstitious.
For one example consider the extremely common -- and utterly dumb -- belief among many that NAT improves security. It's a superstition. How? What threats does it mitigate that can't be mitigated otherwise? Get concrete, give examples, show data. Nope.
Tell me how you would ship a device for $20 that will support an arbitrary number of IP devices behind the firewall on virtually any ISP scenario out of the box with zero or minimal installation?
NAT itself doesn't create security, but it brings a standard use case that is easy to secure.
As far as security goes, firewalls don't require it. Not only that, but I share the admittedly minority opinion that firewalls are a crutch for bad system security and that we should be working to fix that problem. A system that requires a firewall to be secure is broken.
I think that rationally it's likely even worse now (since 2009) in the sense that these massive data breaches keep happening and it has absolutely nothing to do with our own personal security behavior. It doesn't matter how careful we are with our security, it's going into the hands of the baddies anyway if they want it.
As someone who uses 1Password for everything, the one thing that bothers me most is when passwords are limited to specific characters or to painfully short lengths. What the heck?
If users reject security advice because they studied the costs and benefits, and found it unprofitable given the risks, then that's rational.
But if users reject security advice because "oh God it's too hard and it's probably not too bad anyway I have nothing to hide, right?", that's not rational. That's ignoring the problem and coincidentally getting the right answer.
It's like concluding users are rational for refraining from buying a lottery ticket. It turns out, though, the users didn't actually do any math, and were just too lazy to get up that morning.
Instead of giving them advice, those of us who understand how this works should make these things the default and not leave users exposed. What can users do in a world where banks ask you to read your credit card details aloud over the phone and hand over all the details that way? The next thing you know, there's a fake call from a criminal organization pretending to be the bank. How would a user detect that it is fake? I think security should be about rules and enforced practices rather than advice that users can happily ignore.
> Instead of giving them advice, those of us who understand how this works should make these things the default and not leave users exposed.
This is missing the point. The article states the security advice is actively harmful, in that applying it is more costly than the expected returns warrant. Just enforcing those costs on users doesn't help.
Also, even if I did get taken by a criminal organization pretending to be the bank, I'm going to get the money back anyway once I report fraudulent charges. Not much risk on my end…
I see roughly 1 certificate error per week browsing the web "in the wild". I've learned how to click through. I barely consciously notice them anymore.
I think that part of the problem is that security is not explained in layman's terms. There is slang in security circles that is not shared with the rest of the world.
Here is an example of how to explain SSL in simple terms. I have not seen many of these kinds of watered-down explanations.
You should read the article. It isn't about rationally rejecting something that is "too complicated." It is about the overall cost for some of these security tips exceeding what is lost to the attacks they protect against.
I did read both articles: Schneier's comments and the paper itself. Very interesting analysis of user behavior.
The first paragraph of the conclusion begins with: "“Given a choice between dancing pigs and security, users will pick dancing pigs every time.” While amusing, this is unfair: users are never offered security, either on its own or as an alternative to anything else."
Users do get offered security in many ways: red web pages warning of an insecure redirect, open-lock icons in the address bar, etc. But the dancing pigs will be more amusing until the user understands the underlying concept behind the warnings. Security must begin with education. Kindergarten-level education.
The last paragraph of the conclusion ends with: "How did we manage to get things so wrong? In speaking of worst-case rather than average harm we have enormously exaggerated the value of advice. In evaluating advice solely on benefit we have implicitly valued user time and effort at zero."
My point is that part of the answer should be making things understandable. Making things understandable to everybody will reduce the cost of dealing with them.
But his analysis applies even to highly technical users, for whom the problem is clearly not a lack of understanding.
The reality is I had an argument about why we should be writing down passwords at work, because the projected security benefit of preventing a full breach is still less than the expected benefit of not losing our data all the time.
Could we have set up a better, more technical PKI than notes in the safe? Probably. But I'm not sure it would get us ahead on the cost/benefit curve.
Real security is about separating your porn watching from your banking; not about doing your porn watching to the security standards of your banking.
tl;dr: No, dancing pigs are always more amusing. No one wants to live in a perfectly safe box.
I think that we are two sides of the same coin. I completely agree with you. I think that your argument is sound in terms of technology implementation.
My argument talks about motivation, not implementation.
I think this still runs afoul of the fact that you may not be able to get the cost of understanding below the cost of ignoring. Which, for many is quite low, actually.
This is tough, as most talking points discuss the worst case cost for someone ignoring security. The normal cost for ignoring security is much much lower.
Older submissions should have a (YYYY) in the headline to indicate they're old, but posting older things is a common practice on HN. It just means, "Hey, I ran into this older thing that I think is still interesting, and I thought the community would too."
Does the fact that this article drifts up on the HN front page not inherently make the submission worthwhile?
Or should all up-voters clarify their reasoning behind doing so?
I mean, many people may have many reasons that got this article to the front page, does it really matter what they are (or what the single reason of the submitter was)?
The article definitely needs a (2009) on it. Who knows whether it would have drifted up if it had had that? No point in speculating, but it does need the year tag.
Even though this is the first time this article has been posted on HN, security advice goes out of date so quickly that this article very likely isn't relevant today.
I don't know how prevalent some of the password management tools and two-factor authentication were in 2009, but it's common to use them now. Browsers are more sophisticated and the landscape has changed a lot.
That all said the sentiment of the article still stands true. Users (like my family) hate worrying about security.
>* Updates still suck, users still can't tell the difference between fake and real ones
Browsers now automatically update. There is still the issue of Adobe updates, but automatic updates make this different. Yes, updates still suck in many applications, but doesn't that affect the article's argument?
>* Passwords are still annoying
LastPass, KeePass, and other tools give users a much simpler way to access our accounts. Also, being able to link Google/Facebook to an account does the same thing.
This article isn't 100% outdated, but it needs an update to address some of the changes that are there.
What about HTTP vs. HTTPS and signing in over Starbucks Wi-Fi? Does your average user know about that?
This is an issue of education and how to get the most bang for your buck: two-factor authentication (easy), password management software (easy), letting Google/Facebook/etc. authenticate your account (easy).
There are ways to make people's lives easier AND more secure. I don't know if these tools existed back then, but I've been using LastPass for 2 years, and back then it was clunky to use. Now I personally find it easy as heck. I'm more secure (than I was) and my life is easier. To that end, this article needs an update.
If password management software was so easy, my mom would use it and my dad wouldn't call tech support every week to figure out how to use his. I'll say that it's better than it has been, but I can't call it easy.
Did you read the article? It's not about security best practice. It's about human nature and how that relates to security best practice. Until human nature changes this article is going to be pretty close to timeless.
Sorry, I wasn't clear in my comment, but at the end I state the same thing. The Schneier article is still true, but the linked Microsoft Research article needs an update.
>That all said the sentiment of the article still stands true. Users (like my family) hate worrying about security.
When I said "article" I was talking about the full document which gets directly into security best practice.
Also, HN comment guidelines indicate you shouldn't ask "Did you read the article?"
> Please don't insinuate that someone hasn't read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."
https://news.ycombinator.com/newsguidelines.html
I was also having an off day yesterday specifically around HN, but seeing your comment made my day. Thanks :-) I hope you have a much better day today as well.
It is indeed still relevant, as it is not "security advice". Rather, it's advice to the security community on how to consider what advice they give to general users.
I was also talking about the paper by Cormac Herley from MSR. The lasting value of the paper is not the specific best practices that are not being applied. Those are examples of the claim people are being rational when they ignore security advice. The value in the paper is in the idea that users are acting rationally, and backing that up with some math and concrete examples.
I should not have called the article irrelevant, but instead stated that it needs an update to include modern techniques, practices and risk factors.
In one of his examples in 7.3, "User Effort is not Free", he mentions the user's time spent inputting a 6-digit PIN vs. an 8-digit password. But he doesn't include the use of a password management system. If you use a password management system, you can actually save time on password input.
Then look at his section on passwords. The same thing applies. And the article is not security advice but it contains security advice from 2009 which is different today.
People who know what they're doing may have better security, but when I have to return to the "mundane world" nothing's changed. Passwords no greater than 8 characters... case insensitive with no symbols allowed with a mandatory number... getting mailed my password back in plain text when I use recovery mechanisms (or, well, my wife actually since my password management generally outlasts my accounts)... it's all still out there.