We use computer monitors which customers face from the same angle as us. I'm sure someone thought it would make the retail scenario more inclusive, but security-wise it's a mess. I can't verify account details without pulling up those same details for the customer to see. So I ask people for their details, click the button, and cross my fingers that they're right. If they're wrong, what then? They might legitimately not have known whose name it was under. It might be under their dad's, mom's, partner's or business's name. Doesn't matter: the system has absolutely no design affordances for giving multiple people different levels of security privilege on accounts that are used by more than one person.
Furthermore, we have no organisational clarity about access privileges. Everyone makes up their own standards. Some people in the company are very strict, and won't do a SIM swap without photo ID or full ID over the phone. Some people will do one if the customer quotes the same last name and could be theoretically the account-holder's child. But does it matter when any customer can easily find out name, DOB and address from coming in store, then call up and get the SIM changed over the phone? We do have account PINs but very few people set them. And you could find it out in store if you were sharp-eyed.
There's a constant tension between providing a good customer experience and protecting security and privacy. But our commission is based partly on customer experience feedback scores - and if you're the one asshole who tries to follow all the rules (or follow what you decide should be the rules, because there aren't any haha) then you're gonna a) get bad feedback and b) alienate and make life difficult for the people involved in the majority of ambiguous security events, who I'm sure are 95-99% trustworthy.
Anyone relying on two-factor auth with a phone number who uses my company is vulnerable. Simple as that. It would take a determined attacker a day to get control of your number. All you'd notice was that your SIM stopped working. It would all be too late by the time you'd gotten a new one re-activated - and you're still vulnerable.
I'm not sure what telcos are like in other countries but I doubt much better.
TL;DR Someone create a startup to better identify people remotely.
Remote identity verification over the internet is not perfectly solved, but FIDO's U2F is pretty good. One problem is that hardware tokens cost money, and most people won't buy them. To prevent getting locked out you have to buy multiple hardware tokens (and the service has to support them), which protects against loss or breakage. To prevent targeted token theft attacks, the token needs some kind of biometric verification (iris scans would be good), but that gets very expensive in a device that needs to be reliable and yet kept on a keychain.
That or something similar is the only way to provide verification while preventing the creation of a centralized identity database [I think a cryptographically assured identity verification system that dramatically limits identity repudiation would turn into a privacy nightmare dwarfing current identity database efforts being made by many companies]. With something U2F-like, each company or service stores their own verification seed value that is used in the future to verify you. It could be on a mobile device, like standard TOTP auth, or on a separate specialized hardware token. It could use pre-shared seed values and hashing, or nonces and asymmetric crypto.
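A minimal sketch of the pre-shared-seed-and-hashing variant mentioned above (all names here are hypothetical illustrations; real U2F uses nonces with asymmetric signatures rather than a shared seed):

```python
import hashlib
import hmac
import secrets

# Enrollment: the token generates a fresh random seed per service, and the
# service stores its own copy. Nothing links the seed to seeds held by other
# services, so there is no central identity database to leak.
seed = secrets.token_bytes(32)

def service_challenge():
    """Service side: issue a fresh random nonce per login attempt."""
    return secrets.token_bytes(16)

def token_response(seed, nonce):
    """Token side: prove possession of the seed without revealing it."""
    return hmac.new(seed, nonce, hashlib.sha256).hexdigest()

def service_verify(seed, nonce, response):
    """Service side: recompute the MAC and compare in constant time."""
    expected = hmac.new(seed, nonce, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

nonce = service_challenge()
assert service_verify(seed, nonce, token_response(seed, nonce))
```

Because the nonce is random per attempt, a captured response can't be replayed later, unlike a static password.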
Except that could be socially engineered pretty easily.
Plus, if your phone dies, you're in for a major inconvenience, because no one remembers actual phone numbers any more. I remember the number I had as a kid, but nowadays, even though my mom lives in the same place, I reach her only through VoIP or cell, and both those numbers are stored on my phone, not in my head.
All your excellent examples will dwarf mine, but I'll still tell it, as a very cheap medium:
When I was at Fortis Luxembourg, the bank gave me a passive token: A card with a few dozen digits on it. At each login it would request 3 of those along with the password.
The key point of this is that the full key was never transmitted over the wire, so someone who intercepted the communication could never rebuild my full password.
Cost for the bank? A few cents. Security? The best I ever had from banks.
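A sketch of how such a card scheme might work (my reconstruction, not Fortis's actual system): the bank asks for a few randomly chosen positions per login, so a single intercepted session reveals only those digits:

```python
import secrets

CARD_LEN = 40  # hypothetical: a card with 40 printed digits

def make_card():
    """Bank side: print a card of random digits and keep a copy on file."""
    return "".join(str(secrets.randbelow(10)) for _ in range(CARD_LEN))

def pick_positions(k=3):
    """Per login, request k distinct positions (1-indexed on the card)."""
    positions = list(range(1, CARD_LEN + 1))
    secrets.SystemRandom().shuffle(positions)
    return sorted(positions[:k])

def verify(card, positions, answer):
    """Check only the requested digits; an eavesdropper learns at most k."""
    expected = "".join(card[p - 1] for p in positions)
    return secrets.compare_digest(expected, answer)

card = make_card()
asked = pick_positions()
print(f"Please enter the digits at positions {asked}")
```

An attacker who records one session learns 3 of 40 digits; they'd need many intercepted sessions before they could answer an arbitrary challenge.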
I'm baffled. In Germany, the chipTAN method is pretty standard, which uses the bank card as a cryptographic element. And usually, German IT seems to be years behind the industry standard (e.g. I don't know of a popular German e-mail provider that offers 2FA).
[Edit] This is the best thing about chipTAN: Even if the computer is subverted by a trojan, or if a man-in-the-middle attack occurs, the TAN generated is only valid for the transaction confirmed by the user on the screen of the TAN generator, therefore modifying a transaction retroactively would cause the TAN to be invalid. [/Edit]
 In action: https://www.youtube.com/watch?v=5gyBC9irTsM&t=41s
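The transaction-binding property described above can be sketched as a MAC over the exact transfer details (an illustrative stand-in; real chipTAN runs on the bank card's EMV chip, not like this):

```python
import hashlib
import hmac

def generate_tan(card_key, recipient_iban, amount_cents):
    """Derive a 6-digit TAN from the exact transaction details.

    Because the recipient and amount are MAC inputs, a man-in-the-middle
    who alters either one invalidates the TAN the user generated - exactly
    the retroactive-modification protection described above.
    """
    message = f"{recipient_iban}|{amount_cents}".encode()
    digest = hmac.new(card_key, message, hashlib.sha256).digest()
    code = int.from_bytes(digest[:4], "big") % 1_000_000
    return f"{code:06d}"

key = b"\x01" * 16  # stand-in for the secret key on the card's chip
tan = generate_tan(key, "DE89370400440532013000", 12500)
# Deterministic for identical transaction details:
assert tan == generate_tan(key, "DE89370400440532013000", 12500)
```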
Pretty clever. On my chipTAN (Belgium, ING) I have to enter the number by hand (part of the account# of recipient, amount).
On the positive side, mine does ask for a PIN before generating the TAN, so it is probably a bit more secure (balanced against wear of the keys on the TAN generator, of course - so it is arguable which one is better).
Mind you, dynamic 2FA frequently only narrows the time window in which phishing is effective. Even with transaction-based 2FA, you'd need people to actually read the text message the bank sends them with the transaction authorisation code.
As always, it's a security/convenience tradeoff - I've gone from needing "something I know and something I have" to "something I know and any one of several things I have".
Your tradeoffs there may vary - if I were a political-dissident/whistleblower/drug-czar I'd probably consider the risk of losing access altogether preferable to opening up additional avenues for vulnerabilities - an NSA-level adversary would probably have a significantly easier time if they knew they only needed to stealthily subvert one of several devices (at least one of which I don't usually have on my person) to get access to all my tfa secured assets, but the additional risk if I'm protecting myself from 4chan-grade griefers or non-network-pervasive internet criminals is - for me - low enough to accept for the additional reliability and convenience of multiple authorised tfa token generating devices.
I feel reasonably secure about this (as secure as I'm feeling about all the passwords already there in 1password) and I have a huge advantage that changing my phone won't require remembering to disassociate all accounts first if I don't want to lose access to them.
As TOTP works without a back-channel, that QR code stays usable until I manually revoke that key on the respective web site.
Backup codes may be a good option if kept somewhere very safe.
I've got at least Gmail, AWS(/Amazon), GitHub, Dropbox, Zoho, and several TOTP TFA-protected WordPress sites on 3 different devices using this method. It definitely works. I see additional devices start to generate the same codes when I add the same seed (so long as their clocks are reasonably synced...)
This is using the Google Authenticator app on iOS and Android; I _think_ any RFC 6238 compliant TOTP app that lets you type in a string to key it should "just work".
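For reference, the whole of RFC 6238 TOTP fits in a few lines, which is why any compliant app given the same seed agrees; a minimal sketch (the seed and test vector below come from the RFC, not from any of the services above):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter.

    Any device holding the same base32 seed (the QR code's payload) and a
    reasonably synced clock produces the same code - which is why adding
    one seed to several devices "just works".
    """
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at T=59s the 8-digit SHA-1 code is 94287082.
assert totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8) == "94287082"
```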
Mind you, it's probably the best idea.
Less secure, of course, but my desktop and laptop bypass two factor.
Did you downvote and respond to the wrong thread?
(Accidentally deleted a comment of mine, this attempts to copy it)
Also, what's with the downvotes?
'"SMS is not designed to be a secure communications channel and should not be used by banks for electronic funds transfer authentication," Stanton told iTnews this week.'
These companies build on their users but, when their users need them, they betray them.
Unfortunately - when you explain this to people there's no really good answer to their immediate "so what should I do?" question.
I no more "own" the bigiain.com domain than I own "bigiain" on HN, or "firstname.lastname@example.org". While I can ensure I keep paying for its registration, I have no doubt that if Monsanto or Goldman Sachs or Apple launched a new thing and trademarked it "Bigiain", my registrar would fold instantly to a legal demand from their lawyers, and I'd be just as out-in-the-cold as all those people without friends-of-friends in high enough places at Instafacetwigoo to "fix things", or with publicity platforms like @mat behind them.
I suspect in the future, there'll be a well known way to tie your online activity/reputation/network to a strong public key (with some distributed blockchain-like revocation/renewal audit trail). If anyone's working on something like that - I'd love to hear about it...
I am curious if you guys have any really good thoughts on products that could implement a crypto identity key that solves a real life problem. Would love to discuss.
That being said, I think it's a bit unfair to say companies won't lift a finger to help protect their users' usernames. On the technical security level, many companies put a lot of effort into things like 2FA, general internet security, etc. - in particular Google, but also Dropbox, GitHub, and others. On the service level (i.e. what happens when you have to talk to someone) everybody could probably improve quite a bit. OTOH that's costly and would ultimately need to be paid for by the customers somehow.
On the legal level, there isn't really anything these companies could do for you. If you do not own a trademark for your chosen domain name (account name, page name, ...), you'll lose it to someone who does. That also won't change if you have all kinds of friends in all kinds of places - your problem then is basic trademark law, not the goodwill of some company (which has to adhere to the law, after all).
Disclaimer: I work for Google.
Possibly with the exception of the account or domain name being your legal name, but I don't think there's a general norm for that.
In our era, for many people an account at an online social network is part of their identity. Losing it can be devastating. An account at Google is even more; it is one's documents, emails, contacts, calendar, photos, various data and digital purchases.
So it is very important that there is support when you need it. Is it really so costly? I don't know. How many cases of account theft are there every day if the technical (prevention) measures are good? Maybe affected users would be willing to cover some of the cost?
That particular problem can be solved by getting a domain that nobody else would want. In my case, I've registered my first name+last name.com, which will certainly never be considered for a trademark.
I require your address, SSN, mother's maiden name and the name of your first pet to verify your answer.
Thank you, have a great day!
Ah, so it doesn't work for probably 99+% of the internet.
My telecom company was helpful at first, but then we began to see circle-the-wagons behavior from them. We were at least able to get the call forwarding off of the account, but they would not tell us any details about what had happened on the account.
Until your story (and even now), I'm not exactly sure whether my hacker was able to forward the text messages, or simply routed the phone calls to his phone and, using Google's password reset process, got a robo call to accomplish the same thing.
All of this is seriously making me consider creating my own 2FA service, only slightly better.
One quick recommendation I would add would be to put a passcode on your account with your mobile provider. Just call them and say "I'd like to add a passcode to my account", so you can at least add one extra layer of security there.
This is the weakest part of the chain. We all forget our passwords.
Note, this isn't specific to any carrier; these are FCC regulations that poorly trained CSRs ignore.
Personal aside: does anyone know if I could call Verizon support and ask them to require a specific passcode be used before accepting any call relating to my account? Before I call and ask, I'd like to know the odds of them actually agreeing and actually abiding by it.
- two factor login (you need password + sms text)
- account recovery (using only a phone) THIS IS DUMB.
I only use an alternate email for recovery (my wife and I cross). Thus, each recovery account is still 2FA secured.
There's already been a story floating around about a young kid charging his dad's credit card because of the phone recovery option (he had the Android phone in this case). This is NOT the same as 2FA.
1) Password(A)
:| Hacker must break A
:| Losing A locks you out
2) Password(A) + SMS recovery(B)
:( Hacker must break A or B
:) Losing A and B locks you out
3) Password(A) + SMS(B) 2FA
:) Hacker must break A and B
:( Losing A or B locks you out
4) Password(A) + SMS(B) 2FA + SMS password recovery(B)
:| Hacker must break B
:| Losing B locks you out
5) Password(A) + SMS(B) 2FA + SMS password recovery(B) +
:( Hacker must break B or (A and C)
:) Losing B and (A or C) locks you out
6) Password(A) + SMS(B) 2FA + Code sheet(C) + 3rd channel password recovery(D)
:) Hacker must break (A and (B or C)) or (D and (B or C))
:) Losing (A and D) or (B and C) locks you out
Only the 6th option is unambiguously better than a single password. I guess using a friend's phone for password recovery and your own for 2FA would achieve that.
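The break/lockout conditions above are just boolean formulas, so you can sanity-check them mechanically; a sketch for option 6 (function names are mine):

```python
# Factor algebra from the list above: A=password, B=SMS, C=code sheet,
# D=third-channel recovery. For break checks, True means the attacker
# compromised that factor; for lockout checks, True means you lost it.
def broken_option6(A, B, C, D):
    """Option 6: hacker must break (A and (B or C)) or (D and (B or C))."""
    return (A and (B or C)) or (D and (B or C))

def locked_out_option6(A, B, C, D):
    """Option 6 lockout: losing (A and D) or (B and C)."""
    return (A and D) or (B and C)

# Breaking the password alone is no longer enough...
assert not broken_option6(A=True, B=False, C=False, D=False)
# ...whereas under option 1 (password only) it would be game over.
assert broken_option6(A=True, B=True, C=False, D=False)
```

The same exercise on option 4 makes the problem there obvious: `broken` reduces to just `B`, i.e. SMS alone, which is strictly weaker than a single password plus SMS.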
As far as I know these authorization texts are only sent when your Gmail username and password have been entered correctly. This would indicate that the attacker knew your long random password. Keylogger? From there they only needed your 2FA code to access your account.
The case was this: he was cheating on his girlfriend; a friend of hers accessed my friend's text message log, saw the evidence, and told the girlfriend about it. Apparently (though I never confirmed this), the friend who read the messages worked at my friend's cellphone provider.
Since then I've known I can't ever trust my cellphone again - though I'd always suspected this could be possible.
And of course some workers will abuse that access for personal reasons.
What did you expect? That workers at a phone company wouldn't be able to access your account info? Ideally, it would be compartmentalized, but...
That kind of person could work there for as little as several months (or even just weeks), do huge damage, and then what? No control?
Call forwarding didn't even cross my mind, but it just goes to show how ridiculously broken SMS-based two-factor authentication really is then, and even worse than I thought.
Ideally what I'd want is an NFC ring or a smart band/watch that can use FIDO's U2F or a similar protocol that works through NFC, to do 2-step verification for me.
I've got my eyes on a Yubikey NEO for just this kind of use: https://www.yubico.com/products/yubikey-hardware/yubikey-neo...
The precautions you can take at the moment are to use a mobile app (or preferably a security key!) rather than SMS backup, and, if you're feeling especially uncharitable toward your phone company, change the backup number Google makes you enter to a Google Voice number rather than that of your actual phone - creating a circular situation where it can't really be used as a method for account recovery / hijacking.
I have a unique email address for PayPal--different from my normal email address--that I want to keep secret. The problem is that every time I make a purchase, the merchant gets this email address (in addition to the normal email address I gave to the merchant). I know that merchants get it because I get junk mail at my secret PayPal address from merchants I did business with.
Is there no way to make a PayPal payment without PayPal handing my email address over to the merchant?
As a related question, why do I have to trust the merchant to redirect me to PayPal's website to make the payment? There are many ways I can get fooled into entering my PayPal password directly into merchant's website (for example, the merchant opens the PayPal site in a frame or pop-up, so you can't verify that it's really PayPal). Isn't there a way I can open my own browser window, login to PayPal, and give some sort of invoice number to PayPal to direct payment to the merchant?
You can right-click the page in Firefox and choose "view page info", then on the security tab you can see if it's paypal, see the certificate, etc.. Someone could hijack right-click, it's going to be a bit of effort though. I think in FF shift+rightMouseClick overrides normal right-click to give you the browser menu, but probably that's capturable by the site too.
Ctrl+I is the shortcut, but I don't think it handles frames.
It's the password recovery by phone that's the weakness. But I think people getting locked out of their own accounts is probably a bigger problem for Google than people getting hacked, so they err on the side of saving you from getting locked out.
As far as I understand, though, 2FA increased the attack surface in this case. A web interface itself still remains impenetrable, doesn't it (know your hard-to-guess password and you should be fine)? Mobile provider was the weakest link and any system is as secure as its weakest link.
2. Email randomized password stored in PasswordDatabase
3. PasswordDatabase is stored in CloudDrive
4. CloudDrive randomized password stored in PasswordDatabase
5. CloudDrive with 2FA
6. PasswordDatabase secured by weak password
7. 2FA codes from 2FApp
8. PasswordDatabase, CloudDrive, Email only available together on devices with a human-friendly password. Those 3 and the 2FApp are all on the phone, secured by human-friendly password, on me always.
(How do I make 8 mathematically stronger?)
Has any CloudDrive service been socially engineered? I didn't find any results in my rudimentary search.
I don't know of any off the top of my head, but there was that time a few years ago when Dropbox accidentally let anyone in without a password. This isn't to pick on Dropbox, but security lapses happen and it's wise to have multiple layers of strong defense to reduce your risk. (Also, if someone compromises the email associated with your CloudDrive, they can use that to get your CloudDrive by invoking a password reset.)
EDIT: Wolfram|Alpha estimates the entropy of a password generated using the constraints I used for mine as roughly 85 bits (the relevant space would take 14 trillion years to enumerate). It actually has a pretty information-heavy password strength estimator (though I can't attest to its reliability as I'm not familiar with the internals).
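For comparison, the back-of-the-envelope version of that estimate (my assumptions, not Wolfram|Alpha's internals: 13 characters drawn uniformly from the ~95 printable ASCII characters, and a guess rate picked to land in the same ballpark as the quoted figure):

```python
import math

# Entropy of a uniformly random password: bits = length * log2(alphabet_size).
length, alphabet = 13, 95  # assumed constraints
bits = length * math.log2(alphabet)
print(f"{bits:.1f} bits")  # ~85.4 bits

# Time to enumerate the whole space at a given guess rate. The quoted
# "14 trillion years" implies an assumed rate around 10^5 guesses/second;
# a fast offline attacker would be orders of magnitude quicker.
guesses_per_second = 1e5
seconds = 2 ** bits / guesses_per_second
years = seconds / (365.25 * 24 * 3600)
print(f"~{years:.1e} years")  # on the order of 10^13 years
```

Note that enumeration time scales linearly with the guess rate, so the "years" figure is only as meaningful as the rate you assume.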
SMS is not two factor authentication, and should never be part of an authentication system.
I think someone is confused about what "increase" means.
(And this is all the more surprising because in general I see all sorts of parts of Google making great end-user security decisions.)
They are entirely different. If SMS OTPs were actually 2FA, the hacker would have needed to steal the phone too.
The difference between two-step verification & two-factor authentication.
And it adds nothing, since it still has fallbacks to the existing systems.
I think when I enabled iCloud 2FA it included 2 channels for communication with my phone: one as a named iOS device (where the OS handles receiving and displaying codes), and another as just its phone number. Is that for SMS? Why would they even do that?
Though it seems like a lot of work, it's hard to imagine going through this... with a similar mindset.
How did the hacker know his mobile number?
Domain name registration?
Of course it's not me.