Anyone who's spent time in hotels since they switched from mechanical keys to magnetic key cards knows that sometimes the card fails and the guest is locked out of their room, which of course they never discover until trying to open the door. Then they have to go to the front desk, probably stand in line, and request a replacement key.
Using smartphones as authentication devices suffers from this exact same problem. I can't speak for iOS, but every Android phone I've had has experienced slow operation, crashes, and other kinds of issues at inopportune moments. When I'm trying to log into something to perform a five-minute task, I don't want to be delayed for fifteen minutes while my phone chugs away at whatever.
I'm just one geek, but I have spent a lot of time thinking about "alternatives to passwords", and I have concluded that the password is king. We use passwords everywhere, and we will continue to use passwords everywhere, until someone invents something better. In hundreds of years, nobody has done that yet (mechanical keys are a kind of password).
Instead of trying to replace passwords, which are reliable and simple to use, with complicated authentication systems that are neither, we should focus our efforts on improving password authentication in our existing apps (no more arbitrary limitations), building great password tools like keepass.info, and encouraging the average user to use password tools and practice good password habits.
This. 2FA or "tap to login" is all nice until the phone melts down and, by design, you (normally) don't even have backups, so you have to fall back on recovery codes. Which aren't always available.
> I have concluded the password is king.
What about the keypairs?
They are the same as passwords (when done right): just long "random" strings of data. However, they don't have to be transferred over the wire for everyone to see, and they're more flexible in terms of the possibilities on the security-convenience spectrum.
There is no SRP standard for the web (JS crypto doesn't count), but almost every TLS-aware system (client or server) out there has support for client certificates. The only problem is that browser vendors genuinely hate this (and want to shove their own inventions at users), but if someone could somehow persuade them, it would just work.
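To illustrate how little is missing on the server side: requiring a client certificate in nginx is just a couple of directives today (the file paths below are placeholders, and the CA setup is assumed):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate         /etc/nginx/tls/server.crt;
    ssl_certificate_key     /etc/nginx/tls/server.key;

    # Accept only clients presenting a certificate signed by this CA
    ssl_client_certificate  /etc/nginx/tls/client-ca.crt;
    ssl_verify_client       on;
}
```

The missing piece is entirely on the browser side: a sane UX for generating, backing up, and selecting certificates.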
Under what circumstance would that be considered normal, expected, or acceptable?
My phone just doesn't do that, and never has. Sure it crashes maybe once a month or so, but then I'm able to use it again within ~15 seconds.
I think that experience is mirrored by most people.
Monthly crashes are no big deal (in fact, I think my phone crashes only once every few months). A slight nuisance at most - e.g. if the crash corrupts the Android app cache and the system takes awfully long minutes to boot, re-compiling the apps. However, I have three different mobile devices (two phones and a tablet), from different vendors (Nokia, Acer, Samsung), that suffered a hardware failure after some 5-8 years of use. Three dead eMMCs.
So I'm sort of wary. It's exceptional and infrequent, but it happens quite unexpectedly and is very frustrating when it does. Especially if the recovery keys (which are rarely accessed, by design) are lost, inaccessible (you're on the road), or misplaced.
With strong passwords and an auto-completing app like KeePass, this is simply not an issue: I can log in from another device.
Also, reliance on mobile devices means reliance on some corporation's cellular network. Putting control of your vital access into the hands of people who you know are not your friends, and who would love to take more than just your money, is never a good idea, in my opinion. When their system crashes you can't just reboot your phone to fix the problem; you can't do anything, in fact. You have to sit and stew in the hell of your creation until their system magically comes back online. Not worth it.
Get a Yubikey and store all your OATH tokens on that, as well as your phone.
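For context, the OATH tokens in question are just shared secrets; the Yubikey (or your phone) computes a time-based one-time password from them on demand. A minimal sketch of that computation, TOTP per RFC 6238, using the RFC's own test secret:

```python
import hashlib
import hmac
import struct


def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 TOTP code (HMAC-SHA1) for a Unix timestamp."""
    counter = for_time // step                      # 30-second time window
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 test vector: secret "12345678901234567890", time 59 -> "94287082"
print(totp(b"12345678901234567890", 59, digits=8))  # → 94287082
```

A Yubikey keeps the secret inside the token and only ever outputs the six- or eight-digit codes, which is exactly why it survives the phone melting down.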
Easier said than done: there are import restrictions on any cryptographic devices in the country where I live. :( The Russian government genuinely hates civilian cryptography.
The last news I heard, late this spring, was that some company had managed to negotiate and obtain the necessary certifications and permissions, but they're still setting up warehouses and logistics.
I thought of getting an ATECC508A or a similar Secure Element IC and building a DIY HSM, but had no luck either. An acquaintance who runs an electronic component retail business said he'll try, but these are a rare find here and usually out of stock.
(I wonder if there's a way to buy a Yubikey or Nitrokey token while visiting the EU as a tourist... Customs probably won't bother checking what some USB stick in my luggage is - everyone has flash drives.)
I'm not sure what sort of logic they use for screening. They'd probably let anything pass if it were declared as a "USB flash drive" and shipped from China in a typical envelope (tons of such stuff is bought on AliExpress every day), haha - but they may well screen parcels in less common cases.
A counter-terrorism^W mass surveillance law recently passed, so they will have to start screening parcels in 2017 - but that's another story.
I have had support agents come to me and say, "This user was convinced to put his phone into developer mode and attach it to a computer running malware controlled by the attacker." Game over.
Okay, that is colossally stupid behavior. Unbelievable, to most of the audience here. But users will do the damnedest things, and platforms -- whatever their static security failings -- really need to be resilient against coerced or ill-guided user actions as well.
I've worked on platforms that have had very well designed security systems, but they also made very sharp distinctions between what could be done by a developer and a normal user, and for the most part those worlds did not intersect at all.
Android's barrier of "tap seven times here and you're a developer" is very low. It's clever, and good for many reasons, but user security isn't one of them.
I'd hazard a guess that the number of users with easily guessable passwords outweighs the number of targeted malware attempts.
But I need not guess: any of the password dump files provides a good statistic on the distribution of passwords. What was it - something like 0.6% are still "123456", and another 2-4% some similar-looking cousin?
If we go with this logic, we also wind up with extra wins: better usability, and cheaper deployment and management. But that's a whole other topic.
The strategy seems to be: "there's a snowball's chance in hell the account was compromised, so let's just lock online access and require a password reset, just in case."
It's inconvenient, but you have to wonder at what point you need to take control from the user. It's hubris to think we can imagine all the edge cases of user behavior (like the one you described).
But the idea of storing a private key on a locked-down, app-whitelisting, disk-encrypted device (like an iPhone), with a protocol that does not rely on a third party (currently mostly Google and the social networks - the last people on earth I would want to share which sites I log into with), is appealing.
Or you could combine them: when Bluetooth isn't available, fall back to a QR code.
And I'm not talking about root access here - just userspace access to record mouse movements and keystrokes and then replay them, with a couple of clicks and letters changed, against some service that uses this authentication. If the replay is done right, those couple of changed clicks might lower the confidence of the behaviour analysis, but not enough to lock the account (that level of sensitivity would make the scheme infeasible). Once it's authenticated, the malware can stop pretending and quickly move the mouse around and type to do whatever it wants - maybe it downloads your emails and uploads them somewhere.
The point is, your method requires no interaction for the majority of authentications and is potentially always online.
EDIT: That said, I do think this is a cute idea.
1- We only need a secure multi-party computation algorithm if we cannot trust the server. In cases where the server can be trusted with the behavioral fingerprints, we can let the server do the comparison.
2- One can assume that in some cases the server should not know the behavioral fingerprint. For example, if this procedure is implemented as a service, it might not be proper to send client-side mouse movements and key presses to the server. The server can still be trusted as a mediator, but should not learn anything more than that the fingerprints are almost equal. You are right that behavioral fingerprints like mouse movements are fuzzy - especially since the agent and the browser run on two different threads, they get different timestamps for each mouse location. In this case you have to allow for some fuzziness, as you mentioned: some statistical comparison. This is not as easy as checking equality securely (like the socialist millionaires problem), but in theory you can take any circuit and make it secure, so that the circuit exposes only fuzzy equality and nothing more about the data. See secure multi-party computation: https://en.wikipedia.org/wiki/Secure_multi-party_computation
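For the trusted-server case, the fuzzy comparison itself can be as simple as a distance threshold over timing features; a toy sketch (the feature names and the threshold are made up for illustration, and in the untrusted-server case this same comparison would run inside the secure computation instead of on plaintext):

```python
import math


def distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


# Hypothetical features: mean key interval (s), its stddev,
# mean mouse speed (px/s), mean click dwell time (ms)
enrolled = [0.21, 0.05, 310.0, 42.0]  # stored at enrollment
observed = [0.23, 0.06, 295.0, 40.0]  # measured during this login

THRESHOLD = 20.0  # would be tuned on real enrollment data
print(distance(enrolled, observed) <= THRESHOLD)  # → True
```

A real system would normalize the features first (raw units like px/s dominate the distance otherwise) and use a proper statistical model, but the shape of the check is the same: a noisy measurement accepted within a tuned tolerance.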
But your concern is valid, since in practice the computation involved in secure multi-party computation might, in this case, be too demanding for a browser. I have yet to verify that in practice. Keep in mind that our case is a bit more relaxed than the general secure multi-party computation problem, since we have a server that can be trusted a little bit; maybe that can help us in devising a secure computation scheme. Any volunteers to work on that? :)
Link to draft: official one is behind paywall
> behind paywall