In the can-you-really-use-cell-phone-for-trusted-computing department:
I have had support agents come to me and say, "This user was convinced to put his phone into developer mode and attach it to a computer running malware controlled by the attacker." Game over.
Okay, that is colossally stupid behavior. Unbelievable, to most of the audience here. But users will do the damnedest things, and platforms -- whatever their static security failings -- really need to be resilient against coerced or ill-guided user actions as well.
I've worked on platforms that have had very well designed security systems, but they also made very sharp distinctions between what could be done by a developer and a normal user, and for the most part those worlds did not intersect at all.
Android's barrier of "tap seven times here and you're a developer" is very low. It's clever, and good for many reasons, but user security isn't one of them.
It is clear that the user is the biggest risk in any system, but that doesn't take away from the more important, addressable question: is a password-less method better than what we do today (i.e., passwords)?
I'd hazard a guess that the number of users with easily guessable passwords outweighs the number of targeted malware attempts.
But I needn't guess: any of the password dump files provides a good statistic on the percentage of weak passwords. What was it, something like 0.6% are still "123456", and another 2-4% some similar-looking cousin?
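The statistic above is easy to compute yourself. A minimal sketch, assuming a dump of plaintext passwords (one per line) and a hand-picked set of trivial passwords (both the sample data and the trivial set here are illustrative assumptions, not a real leak):

```python
# Hypothetical trivial-password set; real analyses use much larger
# banned lists (e.g. top-10k leaked passwords).
TRIVIAL = {"123456", "password", "12345678", "qwerty", "111111"}

def weak_password_rate(lines):
    """Return the fraction of non-empty lines that match the trivial set."""
    total = 0
    weak = 0
    for line in lines:
        pw = line.strip()
        if not pw:
            continue
        total += 1
        if pw in TRIVIAL:
            weak += 1
    return weak / total if total else 0.0

# Small inline sample standing in for a real dump file:
sample = ["123456", "hunter2", "qwerty", "correcthorse", "123456"]
print(f"{weak_password_rate(sample):.1%} of sampled passwords are trivially guessable")
```

In practice you'd stream the dump from disk and compare against a far larger banned-password list, but the shape of the calculation is the same.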
If we go with this logic - we also wind up getting extra wins: better usability, and cheaper to deploy/manage. But that's a whole other topic.
Recently I've moved around a bit and have been connecting to a lot of different networks for my internet. My online bank account has flagged and reset my account about a half dozen times in the last month.
The strategy seems to be: "There's a snowball's chance in hell the account was compromised, so let's just lock online access and require a password reset just in case."
It's inconvenient, but you have to wonder at what point you need to take control from the user. It's hubris to think we can imagine all the edge cases of user behavior (like the ones you described).