When MFA isn't MFA, or how we got phished (retool.com)
393 points by dvdhsu on Sept 13, 2023 | 277 comments



Beyond having hardware keys, this scenario is why I really try to drive home, in all of my security trainings, the idea that you should instantly short-circuit any situation where you receive a phone call (or other message) and someone starts asking for information. It's always okay to say, "actually, let me get back to you in a minute," hang up, and call back on a known phone number from the employee directory, or communicate on a different channel altogether.

Organizationally, everyone should be prepared for and encourage that kind of response as well, such that employees are never scared to say it because they're worried about a snarky/angry/aggressive response.

This also applies to non-work related calls: someone from your credit card company is calling and asking for something? Call back on the number on the back of your card.


I've had a wide range of responses from people calling me when I tell them I won't give personal details out based on a cold call.

A few understand immediately and are good about it. Most have absolutely no idea why I would even be bothered about an unexpected caller asking me for personal information. A few are practically hostile about it.

None, to date, have worked for a company that has a process in place for safely establishing the identity of the person they're calling. None. Lengthy on-hold queues, a different person with no context, or a process that can't be suspended and resumed, so the person answering the phone has no idea why I got a call in the first place.

(Yet I'll frequently get email full of information that wouldn't be given out over the phone, unencrypted, unsigned, and without any verification that the person reading it is really me.)

The organisational change required here is with the callers rather than the callees, and since it's completely about protecting the consumer rather than the vendor, it's a change that's unlikely to happen without regulation.


> None, to date, have worked for a company that has a process established for safely establishing identity of the person they're calling

What's fun here is, the moment they ask you for anything, flip the script and start trying to establish the caller's identity instead.

Tell them you need to verify them, and then ask how they propose you do that.

Choose your own adventure from there.


> Tell them you need to verify them, and then ask how they propose you do that.

Last time I did that, the caller said "but you can just trust that I'm from <X>." So I replied that they, likewise, could just trust that I'm me, and you could practically hear the light bulb click on. They did their best to help from there but their inbound lines aren't staffed effectively so my patience ran out before I reached an operator.


Credit card fraud departments are generally good about this.


I can't remember which company it was, but I got a call a few years ago about some issue with an account, and they wanted some information to "verify my identity"

I said wait a minute, you called me. Shouldn't I be verifying who you are?

The guy kind of laughed and said yeah, but this is the process I've been given to follow. I said I would call back on the public customer service number and he said that would be fine.

It turned out it was a legit call, but just weird that they would operate that way.

I wish I could remember who it was. A credit card, I think.


This happened to me with AT&T here in Mexico: I have an AT&T pre-paid SIM card that expires after a year. At the end of the year I got a call, supposedly from someone from AT&T, telling me about some special discount offer if I pre-paid for another year. The catch was that I needed to pay right there over the phone ... (give my card details).

I told her that I preferred to call the AT&T number, and asked her to tell me what options I should press to get to her. She couldn't give me an answer to that.

Most likely a scam I guess.


Are they? Are they still asking about "mother's maiden name"?!


Anecdotally, I seem to have had the opposite experience. I've been doing this for at least 15 years, and never had a negative reaction. With bank, credit card, or finance-related companies, they seem to understand immediately. With other callers I've gotten awkward pauses, but ultimately they were politely accommodating or at least understanding that some issue would have to be processed through other channels or postponed.

However, I don't have strict requirements. When a simple callback to the support line on the card, bill, or invoice doesn't suffice--and more often than not it does, where any support agent can field the return call by pulling up the account notes--all I ask for at most is an extension or name that I can use when calling through a published number. I'll do all the leg work, and am actually a little more suspicious when given a specific number over the phone to then verify. Only in a few cases did I have to really dig deep into a website for a published number through which I could easily reach them. In most cases it suffices to call through a relatively well attested support number found in multiple pages or places[1].

I'm relatively confident that every American's Social Security number (not to mention DoB, home address, etc) exists in at least one black market database, so my only real practical concern is avoiding scammers who can't purchase the data at [black] market price, which means they're not very sophisticated. A callback to a published phone number for an otherwise trusted entity that I already do business with suffices, IMO. And if I'm not already doing business with them, or if they have no legitimate reason to know something, they're not getting anything, period.

[1] I may have even once used archive.org to verify I wasn't pulling the number off a recently hacked page, as it was particularly off the beaten path and a direct line to the department--two qualities that deserve heightened scrutiny by my estimation.


Someone needs to standardize a simple reverse-authentication system for this.

For example whenever a caller is requesting sensitive information, they give you a temporary extension directing to them or an equal, and ask you to call the organization's public number and enter that extension. Maybe just plug the number into their app if applicable to generate a direct call.

Like other comments have mentioned, the onus should be on them. Also, they would benefit from the resultant reduction in fraud. Maybe a case study on fraud reduction savings could help speed the adoption process without having to invoke the FCC.
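A minimal sketch of what that callback-extension flow could look like on the organization's side (every name here is hypothetical; this isn't any telephony vendor's API):

    // Issue a short-lived extension when an agent places an outbound call, so the
    // customer can hang up, dial the public number, and be routed straight back.
    interface PendingCallback {
      agentId: string;
      expiresAt: number; // epoch ms
    }

    const pending = new Map<string, PendingCallback>(); // extension -> pending callback

    function issueCallbackExtension(agentId: string, ttlMinutes = 30): string {
      const extension = String(Math.floor(100000 + Math.random() * 900000)); // 6-digit code
      pending.set(extension, { agentId, expiresAt: Date.now() + ttlMinutes * 60_000 });
      return extension; // the agent reads this out before asking for anything sensitive
    }

    function routeInbound(extension: string): string | null {
      const entry = pending.get(extension);
      if (!entry || entry.expiresAt < Date.now()) return null; // fall through to the normal IVR
      pending.delete(extension); // single use
      return entry.agentId; // connect the caller to the agent who initiated contact
    }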


In Sweden we have a special authentication system that is owned by the banks. It is called BankID and generally works well, but it has flaws, especially that you shouldn't use it if they call you and ask you to do it, since that is a security risk by itself.

It works if I call a bank or insurance company or something like that. A robot voice will ask me to authenticate, and when I have done so and am transferred to an operator, they will see that I authenticated. So it works when I call them but not the other way around. We need a new system.


I've had my cable company call me directly about an account issue. I told them I couldn't validate it was them, and the person got somewhat irate with my response, insisting there was no one I could call to verify them and that it had to be handled on that call. Turns out it was just a sales call (upselling a product) - which probably speaks to the level of talent they hire for that.


> this scenario is why I really try to drive home, in all of my security trainings, the idea that you should instantly short circuit any situation where you receive a phone call (or other message) and someone starts asking for information.

The trouble is, calling the number on the back of your card requires actually taking out your card, dialing it, wading through a million menus, waiting who-knows-how-long for someone to pick up, and hoping you're not reaching a number that'll make you go through fifteen transfers to get to the right agent. People have stuff to do; they don't want to sit around with one hand occupied for fifteen minutes waiting for the call to get picked up. When the alternative is just reading your information out over the phone... it's only natural that people do it.

Of course it's horrible for security, I'm not saying anyone should just give information on the phone. But the reality is that people will do it anyway, because the cost of the alternative isn't necessarily negligible.


I say "If this is a scam call please hang up now, otherwise give me an invoice or ticket number or name and department and I'll get back to you," and they usually do hang up. The case where you need to actually call your bank is really rare.

Note that it's very important not to let them give you an actual phone number to call on. This sounds obvious but I know someone who hung up but called back on a number given by the scammers, which was of course controlled by them and not the bank.


I'm going to add to this that "hang up" means physically do that. I've heard that many are tricked by the attacker playing a "dial tone" sound into the phone, thus keeping the line open and "answering" when you thought you had called your bank.


You may be right that some people are tricked into thinking the call has been terminated by the caller, when in fact the caller is playing a dial tone over the line. It's worse than that though. In some telephone systems, the call is not ended when the callee hangs up.

https://security.stackexchange.com/questions/100268/does-han...


Yikes, although please note that BT reduced this time to 2s in 2015: https://www.bbc.co.uk/news/business-34714531


I don't think most people who get scammed this way pause to say "oh, this might be someone stealing my credit card number", then disregard that thought because it's too much of a pain to call back on an official line. Instead I think they don't question the situation at all, or the scammer has enough information to sound sufficiently authoritative. Most non-technical people I've talked to about this are pretty scared of getting scammed, but tell me the thought never crossed their mind they could call back on a trusted number.

I like the "hang up, call back" approach because it takes individual judgment out of the equation: you're not trying to evaluate in real time whether the call is legit, or whether whatever you're being asked to share is actually sensitive. That's the vulnerable area in our brains that scammers exploit.


I'm sure a lot of people are like what you describe (this doesn't occur to them), but I think it does affect those who are a bit suspicious/on the fence, potentially like the person in the article. ("Throughout the conversation, the employee grew more and more suspicious, but unfortunately did provide [the MFA code].")


The old adage is that a con artist makes the best mark.


I think the parent poster is arguing that we should normalize this behavior, not that there's no excuse for not calling the number back given the reality we have today.

You're saying it's natural for people not to want to call back and wade through a million menus, and I agree.

But the conclusion from this is that companies should change their processes so that calling back is easy, precisely because otherwise people won't do it.

And the more people that do it despite the costs, the more normalized it'll be, and the more companies will be incentivized to make it easier.


We certainly should normalize this, but my point was that it's going against the grain, so efforts like this may be in vain without a bigger lever to pull. e.g., I imagine you'd need to convince some sort of authority (CISA? FIPS? not sure who the right entity is) to point out the best practices here before organizations start paying attention.


Unfortunately there's a dark incentive. Providing support costs money, but an automated phone menu does not (or at least, it's negligible). So you want to chuck your customers into hold music hell and winnow out as many as you can. This also makes your staff scheduling easier.

If your customers are captive, this is all upside. And most customers will tolerate this. The ones that do churn somehow don't generate blame for the psychopaths who implement this hostile practice, those bastards cut support costs and get promoted out.


Great point. But it could be easily solved with something like: “Call the number on the back of your credit card. Push *5 and when prompted enter your credit card number and you will be immediately connected back to my line”


Or just connect you directly if you call back within a few minutes from the same number they called, no need to press anything. But I guess that's too advanced for 2023 technology


If my bank cold calls me, I can say "I just need to verify the legitimacy of your call, so send me your direct number in the online bank app, and I'll call you". It works every time, but it also works because all the employees have a direct number.

Normally we just write messages back and forth in the banking app, and if we talk it's an online meeting with video. Only for large business do I go to the physical site.


>This also applies to non-work related calls: someone from your credit card company is calling and asking for something? Call back on the number on the back of your card.

There's a number of situations, not just credit card ones, where it's impossible or remarkably difficult to get back to the person that had the context of why they were calling.

Your advice holds, of course, because it's better to not be phished. But sometimes it means losing that conversation.


My mother recently started having to deal directly with utility bills and the like, and this was something we impressed on her very early: you should never agree to billing or hand over CC/account information in a phone call you didn't initiate. She hasn't run into an issue yet - most utilities, online stores, and other entities have call-in numbers if you need to resolve a billing dispute. That random company you bought a plumbing valve from has an office somewhere with a secretary who gets a phone call maybe three times a month from customers looking to resolve issues - and Amazon has mostly centralized support for small sellers, with lines you can call to resolve any disputes, which may forward you to the original selling party but often just resolve the issue directly.

Honestly, the worst experiences are usually with large companies that funnel all customers into massive phone centers - I've probably lost the better part of a week to Comcast over my lifetime.


Yeah, I'm talking about situations where it's a department that's not tied to the main call center. The credit card fraud people, for example[1]. Or, with Comcast, some guy saying your modem return was missing a piece, etc. Those are often hard to reach by calling the main number.

[1] For at least one place, the people that proactively identify things that could be fraud and call you...they aren't the same people you call to report fraud on your own. Why? No idea.


Definitely, sometimes they'll have a case number or agent id you can use to get back to them, but there are cases where you have to assume if it's important to them they'll continue to nag or reach out on another channel.

I have had at least one situation where I spent a while trying to get back to a quite convincing/legitimate sounding caller this way, where, as I escalated through support people it became increasingly clear that the initial call had been a high quality scam, and not in fact a real person from the bank.


I put in very limited effort in returning cold calls. The contact is being initiated by the other party, the interest in the exchange is theirs, and the onus on making it work is theirs.

Companies, including banks, don't call you to protect _your_ interests, they call you to protect themselves.


Except some banks and credit card companies will call you to notify you of fraud on your account.


Advice I haven't even followed myself:

It's probably a good idea to program your bank's fraud number into your phone. The odds that someone hacks your bank's Contact Us page are small but not zero.

The bedrock of both PGP and .ssh/known_hosts could be restated as, "get information before anyone knows you need it".

Fraud departments contacting me about potentially fraudulent charges is always going to make me upset. Jury is still out on whether it will always trigger a rant, but the prognosis is not good.


At least once I have gotten a terribly phrased and link-strewn "Fraud Alert" from a bank, reported it to said bank's anti-phishing e-mail address, gotten a personalized mail responding that it was in fact fraud and that they had policies against using third-party subdomains like... And then found out a day later that yes, that was their real new anti-fraud tool and template.

There will need to be jail time for the idiots writing the government standards on these fraud departments before we get jail time for the idiots running these fraud departments before it gets better.


Last time I talked to someone about this they pointed out that fraud depts are often outsourced. Which is a lovely plan because now your customers hate you for something an entirely different company did to them. And also they are directing you away from the official website every single time you interact with them.

I'm not sure what grounds you issue arrest warrants on, but I appreciate the sentiment.


Ironically, fraud. They have done substantial financial and "real" harm by pretending to be competent at things that they are clearly not, and have been a combination of remiss in their duties and complicit in the crimes of fraudsters.

Sufficiently advanced incompetence is indistinguishable from malice, and should be prosecuted as such.


Years ago I bought a tv. Clear on the other side of town. When I got home I had a message from the fraud dept about charges. I called them up to explain I did in fact buy that tv.

They weren’t calling about the TV. They were calling about the car wash I stopped to get near my old neighborhood on the way home. For $8. Wat.

And for a while they would flag me every time I went on a road trip or travelled, because they pegged me as a non-traveler. I travel, but quality over quantity. And you're basically psychologically fencing me into a profile you've written about me that's wrong, by punishing me every time I step out of it? Fuck you.


> someone from your credit card company is calling and asking for something? Call back on the number on the back of your card.

This recently happened to me, and bizarrely they wouldn’t tell me what’s actually going on on my account because of not being able to verify me. (They were also immediately asking for personal information on the outbound call, which apparently really was from them.)


Financial companies, the government, ... I always try to raise the issue afterwards, but (not that I think my comments alone would do anything) so far nothing I've taken issue with has changed, I don't think.

A big one I'm aware of many others complaining about in the industry is local governments in the UK soliciting elector details via 'householdresponse.com/<councilname>' in a completely indistinguishable from phishing sort of way.

(They send you a single letter directing you to that address with 'security code part 1' and '2' in the same letter, along with inherently your postcode which is the only other identifier requested. It's an awful combination of security theatre and miseducation that scammy phishing techniques look legit.)


Ha, this reminds me of driver licences in Australia. At this point almost everyone's licence has been leaked multiple times (and just having the details used to be enough to open a bank account online; not sure if this is still the case).

I received an email from my state’s RTA, saying they were adding 2-factor authentication to licences. Great! I assumed this might be an oauth type scenario, or maybe even just email.

Nope. The “second” factor is a different number printed on the licence. Surely this communication had to go through multiple departments, get vetted for accuracy. Yet no one picked up that this isn’t multi factor authentication.

Its only purpose is to make it easier for them to issue a new licence _after_ you’ve been defrauded out of all your money, because most states refuse to issue people with new licence numbers. It does nothing more than fix an incompetence in their system/process. Yet it was marketed as some kind of security breakthrough, as if it would add protection to your licence.


That's the big problem, isn't it? People think it's okay to give out information on an incoming call because often it is really okay. If it were unreliable 99% of the time, phishers would not use this method as an attack vector.


Amen, amen, amen. IMO "Hang up, look up, call back" should basically be the only thing that security training focuses on, and it should be culturally ingrained: https://krebsonsecurity.com/2020/04/when-in-doubt-hang-up-lo...


> any situation where you receive a phone call (or other message) and someone starts asking for information.

I had AWS of all places do this to me a year or two ago. The rep needed me to confirm some piece of information in order to talk to me about an ongoing issue with the account. If I recall correctly, the rep wanted a nonce that had been emailed to me.

"I'm terribly sorry but I won't do that. You called me."

Ultimately turned out to be legit, but I admit I was floored.


This is where some kind of chaos monkey might be good. Imagine something that randomly slacks from one human account to another asking for passwords and then the receiver has to press a "suspect message" button as a form of ongoing awareness training.

As part of that a genuine ask for a password would get the same response, and perhaps the button sends a nice message like "Looks like you have asked for a password. We get it, sometimes you need to get the job done, but please try to avoid this as it can make us insecure. Please read our security policy document here."


>This also applies to non-work related calls: someone from your credit card company is calling and asking for something? Call back on the number on the back of your card.

This is a policy I've implemented as well, both for myself and loved ones: don't provide any information to unverified incoming calls. Zero.

Sometimes I'll get some kind of sales call, which I may even be interested in. I'll say, proceed with the pitch, to which they'll reply "first we need to confirm your identity". Then I refuse: you called me. Why do you need me to provide private information to confirm my identity?


What does unverified mean in this case?


I've expanded this to the general case and don't answer phone calls.


I've stopped listening to people. I limit myself to talking at them. The upside is that I'm never fooled. The downside is that so far as I can tell half the world hates me and the other half think I'm a lunatic.


> the idea that you should instantly short circuit any situation where you receive a phone call (or other message) and someone starts asking for information

It really irritates me that some significant companies openly encourage customers to ignore this advice, teaching them bad practice. The most recent case I know of is PayPal calling me. It was actually them (new cc account; I thought I'd set up auto payment but it wasn't, so I was a couple of days late with the first payment), but it so easily could have not been. The person on the other end seemed rather taken aback that I wouldn't discuss my account or confirm any details on a call I'd not started, and all but insisted that I couldn't hang up and call back. In the end I just said I was hanging up, and if I couldn't call back then that was a them problem, because at that point I had no way of telling if it was really the company or not. At that point she said she'd send a message that I could read via my account online, which did actually happen, so it wasn't a scammer. But to encourage customers to perform unsafe behaviour with personal and account details is highly irresponsible.


> someone starts asking for information

Especially OTP codes.

I can't understand how someone works at a tech company and is clueless to the point of sharing an auth code over the phone. My grandma, sure, but a Retool employee? C'mon, haven't we all read enough of these stories?


You can't understand at all how someone with your coworker's voice might lull you into a false sense of urgency and safety?

Security is a weak-link problem, not a strong-link one. You have to plan for the least security-minded people, the tired and stressed employee.


I can understand how someone with my coworker's voice might lull me into a false sense of urgency and safety.

To the point of sharing an OTP code over the phone from a strange number? I'm sorry, no.


You could give them a false number and see what happens. That might trip up their script enough to reveal they aren't who they seem to be. Just play dumb -- "I can't understand why it's not working, that's the number on my phone...."


That's also a good idea.


They could trivially spoof the number they're calling from to match.


Or I could call them on a known point of contact, like a phone number or IM.


True, why are people so eager to pick up the phone?

The millennials are right in not picking up the phone.


What vendors for hardware keys would be recommended besides yubico?


Maybe it’s just me, but I am really skeptical about the DeepFake part - it’s a theoretically possible attack vector, but the only evidence they possibly could have to support this statement would be the employees testimony. Targeting a particular employee with the voice of a specific person this employee knows requires a lot of information and insider info.

Also, I think the article spends a lot of effort trying to blame Google Authenticator and make it seem like they had the best possible defense and yet attackers managed to get through because of Google's error. Nope, not even close. They would have had hardware 2FA if they were really concerned about security. Come on guys, it's 2023 and hardware tokens are cheap. It's not even a consumer product where one can say that hardware tokens hinder usability. It's a finite set of employees, who need to do MFA a certain number of times for certain services, mostly using one device. Just start using hardware keys.


Hi, David, founder @ Retool here. We are currently working with law enforcement, and we believe they have corroborating evidence through audio that suggests a deepfake is likely. (Put another way, law enforcement has more evidence than just the employee's testimony.)

(I wish we could blog about this one day... maybe in a few decades, hah. Learning more about the government's surveillance capabilities has been interesting.)

I agree with you on hardware 2FA tokens. We've since ordered them and will start mandating them. The purpose of this blog post is to communicate that what is traditionally considered 2FA isn't actually 2FA if you follow the default Google flow. We're certainly not making any claims that "we are the world's most secure company"; we are just making the claim that "what appears to be MFA isn't always MFA".

(I may have to delete this comment in a bit...)


Thanks for all this insight; this is why HN rules. What is your impression of law enforcement? Everyone claims to reach out after an attack, but I've never seen follow-up of successful law enforcement activity resulting in arrests or prosecution. Thanks again.


(May also have to delete this later, but...)

Law enforcement is currently attempting to ascertain whether or not the actor is within the US. If it's within the US, I (personally) believe there's a good chance they'll take the case on and presumably with enough digging, will find the attacker. (The people involved seem to be... pretty good.)

But if they're outside the US (which is actually a reasonably high probability, given the brazenness of the attack, and the fact that they're leaving a lot of exhaust [e.g. IP address, phone number, browser fingerprints, etc.]), then my understanding is that law enforcement is far less interested, since it's unlikely that even an identification of the hacker would lead to any concrete results (e.g. if they were in North Korea). (FWIW, the attack was not conducted via Tor, which to me implies that the actor isn't too worried about law enforcement.)

To give you a sense, we are in an active dialogue with "professionals". This isn't a "report this to your local police station" kind of situation.


On the plus side, if the attacker is outside the US, and a foreign national - the NSA's illegal wiretap evidence is legal!


The collection is legal as far as the NSA's mandate, but whether it's admissible in court...


FWIW engaging simultaneously with both the FBI and the USAO/DOJ and putting pressure on DOJ to act on the case typically results in better outcomes than just assuming the SA assigned is going to follow through and bugging them about it.


Thx again!


> …we believe they have corroborating evidence through audio that suggests a deepfake is likely…

Does that mean they have audio of the call?


Most attacks like this use stolen credentials for VOIP providers, i.e. Twilio. It's likely the FBI quickly obtained a subpoena which produced a recording. The attacker may not have known the call was being recorded.


This is an example of Google sabotaging a technology it doesn't like. I'm not saying it is a conspiracy. But by thwarting TOTP like this, Google is benefiting.

I really like TOTP. It gives me more flexibility to control keys on my end. And you can still use a Yubikey to secure your private TOTP key. But you can also choose to copy your private key to multiple hardware tokens without needing anyone's permission. Properly used, you can get most of the benefit of FIDO2 with a lot more flexibility.

I actually recently deployed TOTP, and everyone was quite happy with it. But knowing that Google is syncing private keys around by default, I no longer think we can trust it.


Thanks for the reply! Wasn't expecting one.

Since you might have to delete the reply anyway, can I get a candid answer on why hardware 2FA tokens weren't a part of the default workflow before the incident? Was it concerns about the cost, the recovery modes, or was it just trust in the existing approach?


One problem with hardware keys is still SaaS vendor support. There is a very narrow path for effective enforcement: require SSO, then require hardware tokens at the SSO level. But even that is difficult to truly enforce, because the IdP often has "recovery" mechanisms that grant access without a hardware key. Google is also guilty of not adding a claim to the OIDC/SAML response verifying that a hardware token was used to login, so vendors cannot be configured to decide to reject the login because it didn't use a hardware token.

If you have any vendors without SSO (like GitHub, because it's an Enterprise feature), you're lucky if they support hardware tokens (cool, GitHub does) and even luckier if their "require 2FA" option (which GitHub has, per organization) allows you to require hardware keys (which GitHub does not).

Distributing hardware keys to employees is one thing. Mandating them is quite another.
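As an aside, OIDC does have an optional claim that could carry this: "amr" (Authentication Methods References), with RFC 8176 defining values like "hwk" for proof-of-possession of a hardware-secured key. A hedged sketch of what a vendor could check if the IdP actually populated it (the jose package is assumed here, and real code would verify the token's signature first):

    import { decodeJwt } from "jose"; // decode only; verify the signature before trusting claims

    function usedHardwareKey(idToken: string): boolean {
      const claims = decodeJwt(idToken);
      const amr = (claims.amr as string[] | undefined) ?? []; // e.g. ["pwd", "hwk"]
      return amr.includes("hwk"); // RFC 8176: proof-of-possession of a hardware-secured key
    }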


If your organization is rich enough to buy hardware keys for everybody, but too stingy to pay for GitHub Enterprise, I'm not sure what to say.


Yubico's U2F security key (good for FIDO2, WebAuthn, etc.) is $25, each member of your organization needs only 1 key (if they lose their key, they can get another one from IT, which can remove the old key and enroll the new one for them), with a handful of IT personnel possibly having more than 1 key for backup (this is less necessary when a group of IT holds admin permissions, as they serve as key backups for each other). $25/key amortizes out to well under $1/month considering that keys will last for years and can be transferred from one employee to the next when an employee leaves the company, and is of course usable for any vendor that supports hardware keys.

Much, much cheaper than $21/user/month for GitHub Enterprise. I'm not sure what universe you live in where buying hardware keys is expensive compared to Enterprise licensing?
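Back-of-the-envelope on that amortization (a sketch assuming a 3-year key lifetime; the Enterprise figure is the one quoted above):

    // Per user, per month.
    const keyPerMonth = 25 / 36;   // $25 security key over an assumed 36-month lifetime, about $0.69
    const enterprisePerMonth = 21; // $21/user/month for GitHub Enterprise, as quoted
    console.log(keyPerMonth.toFixed(2), enterprisePerMonth); // "0.69" 21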


The universe I'm in is the one where you have to staff the IT department and they have to support the device. The IT department costs way more than $21/month.

You have a valid point that we need SaaS vendor support for SAML/whatever, but GitHub, specifically, supports SSO. Yeah, it costs money to get that feature, but security doesn't just happen. Security is expensive, but it's more expensive not to have it. In this case, it costs $21/user/month. If that's too expensive to protect the source code of the company's product, that says a lot about the company.


I've personally worked for multiple startups where rolling out hardware keys did not require making additional IT hires (we're talking about companies smaller than ~50 people). Perhaps at BigCo size, you end up needing dedicated personnel to support a hardware key rollout at that scale, but at that scale you have the budget for GitHub Enterprise anyway so the point about pricing is moot; at BigCo size there is also even more of an incentive to roll out hardware keys since you're that much more likely to get spear phished.


Yep, we just had a cabinet with a bunch of hardware keys for anyone to take.

Remote employees take a bit more support though.


Hardware keys are much cheaper than GitHub enterprise


> unfortunately did provide the attacker one additional multi-factor authentication (MFA) code

How is this Google's fault?

Which rock was this employee living under to not have understood you NEVER give an OTP code to anyone?


> the only evidence they possibly could have to support this statement would be the employees testimony

I've set up my phone to record all calls. The employee could have too.


Very sophisticated attack, I would bet most people would fall for this.

I'm surprised Google encourages syncing the codes to the cloud... kind of defeats the purpose. I sync my TOTP between devices using an encrypted backup, even if someone got that file they could not use the codes.

FIDO2 would go a long way to help with this issue. There is no code to share over the phone. FIDO2 can also detect the domain making the request, and will not provide the correct code even if the page looks correct to a human.
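For reference, a minimal sketch of that domain binding from the browser side (example.com and the client-generated challenge are placeholders; in practice the challenge comes from the server and the assertion is verified there):

    async function signIn(): Promise<void> {
      const assertion = await navigator.credentials.get({
        publicKey: {
          challenge: crypto.getRandomValues(new Uint8Array(32)), // placeholder; use a server-issued challenge
          rpId: "example.com",    // the browser refuses this rpId on any other (look-alike) domain
          allowCredentials: [],   // empty: let the authenticator offer a discoverable credential
          userVerification: "preferred",
        },
      });
      // The signed response also embeds the real origin (clientDataJSON), so even a
      // pixel-perfect phishing page gets nothing replayable, and there's no
      // human-readable code for a caller to talk you into reading out.
      console.log(assertion);
    }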


> I'm surprised Google encourages syncing the codes to the cloud... kind of defeats the purpose.

Depends on what you think the purpose is. People talk about TOTP solving all sorts of problems, but in practise the only one it really solves for most setups is people choosing bad passwords or reusing passwords on other insecure sites. Pretty much every other threat model for it is wishful thinking.

While I also think the design decision is questionable, the gain in security from people not constantly losing their phones probably outweighs, for the average person, the loss of security from it all being in a cloud account (as a Google account is, for most people, probably one of their best-secured accounts).


It's also a reasonable defence against naive keylogging techniques - including shoulder-surfing either directly or eg via security cameras. In some places this can be a pretty big threat.


I think it's reasonable against spur-of-the-moment shoulder surfing. I'm a little doubtful about how common that attack vector is - I think showing passwords as ** is a reasonable deterrent against literal shoulder surfing as well. Once you get security cameras involved, things get more sophisticated and people can watch the feed live or do other things with physical access to the device.

Ultimately, I think for the average user the attacker is mostly not in physical proximity (although there certainly are exceptions), and if you are being targeted explicitly then you are screwed if they are installing cameras and modifying your hardware.

Maybe the big exception would be a camera in a coffee shop looking for people (not live) logging into their bank accounts. I could imagine this being a helpful defense.


Well, all Google needed to do to make it at least a little harder was encrypt the backup with a password.

The user can still put in an insecure password but uploading all your 2FA tokens to your primary email unencrypted is basically willingly putting all your eggs in one basket.


TOTP is helpful when you don’t fully trust the input process. If rogue javascript is grabbing creds from your page, or the client has a keylogger they don’t know about, TOTP can help.


Blizzard was one of the first large customers of TOTP, and what we learned from that saga is that 1) keyloggers are a problem and 2) impersonating people for TOTP interactions is profitable even if you're only a gold farmer.

The vector was this: Blizzard let you disable the authenticator on your account by asking for 3 consecutive TOTP outputs from your device. That would let you delete the authenticator from your account.

The implementation was to spread a keylogger as a virus, and when it detected a Blizzard login, it would grab the key as you typed it, and make sure Blizzard got the wrong value when you hit submit. Blizzard would say try again, and the logger would collect the next two values, log into your account, remove the authenticator and change your password.

By the time you typed in the 4th attempt to log in, you'd already be locked out of your account, and by the time you called support, they would already have laundered your stuff.

This was targeting 10 million people for their imaginary money and a fraction of their imaginary goods. On the one hand that's a lot of effort for a small payoff. On the other, maybe the fact that it was so ridiculous insulated them from FBI intervention. If they were doing this to banks they'd have Feds on them like white on rice. But it definitely is a proof of concept for something much more nefarious.


No it can't.

The rouge javascript or keylogger would just steal the totp code, prevent the form submission, and submit its own form on the malicious person's server.

Not to mention if your threat model includes attacker has hacked the server and added javascript, why doesn't the attacker just take over the server directly?

If the attacker installed a keylogger, why don't they just install software to steal your session cookies?

This threat model doesn't make sense. It assumes a powerful attacker doing the hard attack and totally ignoring the trivially easy one.


> Not to mention if your threat model includes attacker has hacked the server and added javascript, why doesn't the attacker just take over the server directly?

If the attacker can only hack the server that hosts your SPA, but not your API server, they can inject javascript into it, but can't do a lot beyond that.


So assuming server-side compromise, not XSS - in theory the servers can be isolated; in practice it's rare for people to do a good job with this except at really big companies.

Regardless, if they got your SPA, they can replace the HTML, steal credentials, act as users, etc. Sure, the attacker might want something more, but this is often more than enough to do anything the attacker might want if they are patient enough. Certainly it's more than enough to do anything TOTP would protect against.


> attacker has hacked the server and added javascript

adding javascript doesn't necessarily mean the server is hacked. XSS attacks usually don't require actually compromising the server. Or a malicious browser plugin could inject javascript onto a site.


rogue javascript. It's naughty, not red.


great insight:

> in practise the only one it really solves for most setups is people choosing bad passwords or reusing passwords on other insecure sites. Pretty much every other threat model for it is wishful thinking.

Why is no one talking about this?


The other side of this is that (to pull numbers out of my hat) 90% of non-targeted attacks are password reuse and 9.9% are phishing with 0.1% being something else. The fact that TOTP doesn't solve phishing does get talked about.

Ultimately TOTP & SMS-based 2FA is used because it solves the real business problem that websites face (the business problem being that when enough users get hacked they blame the business, not themselves, so we just need to save most of them, not all). Yes there is some fear mongering to make people sign up for 2FA, but it is actually solving a big problem effectively. It doesn't matter that it's not helpful in more fanciful scenarios, since those scenarios are largely imaginary to begin with (for the average user).


>FIDO2 can also detect the domain making the request, and will not provide the correct code even it the page looks correct to a human.

I could not agree more with this sentiment! We need more of this kind of automated checking going on for users. I'm tired of seeing "just check for typos in the URL" or "make sure it's the real site!" advice given to the average user.

People are not able to do this even when they know how to protect themselves. Humans tire easily and are often fallible. We need more tooling like FIDO2 to automate away this problem for us. I hope the adoption of it will go smoothly in years to come.


The problem with Fido (and other such solutions, including smartphone-based passkeys) is that they make things extremely hard if you're poor / homeless / in an unsafe / violent family situation and therefore change devices often. It's mostly a non-issue for Silicon Valley tech employees working solely on their corporate laptops, and U2F is perfect for that use-case, but these concerns make MFA a non-starter for the wider population. We could neatly sidestep all of these issues with cloud-based fingerprint readers, but the privacy advocates won't ever let that happen.


Biometrics aren’t a great key because they cannot generally be revoked. This isn’t a privacy concern, it’s a security problem. You leave your fingerprints nearly everywhere you go, and they only need to be compromised once and then can never be used again. At best, you can repeat this process a sum total of 10 times without taking your shoes off to login.


I’ve always referred to biometrics as a “non revokable username” and not a “password.”

100% agree with you here.


You're right, software security is only really available to rich and tech minded folks.

That's kind of what I was trying to get at with my previous statement about humans being tired and fallible. The way we access and protect our digital assets feels incredibly un-human to me. It's wrapped up in complexity and difficulty that is forced upon the user (or kept away from, if you want to look at it that way).

As it is now, all of the solutions are only really available to someone who can afford it (by life circumstance, device availability, internet, etc) and those who can understand all the rules they have to play by to be safe. It's a very un-ideal world to live in.

When I brought up FIDO2, I was less saying "FIDO2 is the answer" and more saying, "we need someone to revolutionize the software authentication and security landscape because it is very very flawed".


The claim was that fido protocol is better than totp protocol no matter where you store keys. Your claim is that hardware key storage is difficult, but it doesn't differentiate between protocols: if you lost a device with hardware totp keys, you're not in a better position than if you lost a device with hardware fido keys.


Stronger security can also help the marginalized. If your abusive SO has the phone plan in their name they can order up a new SIM card and reset passwords on websites that way too often fallback from “two factor” to SMS as a root password.


> I'm surprised Google encourages syncing the codes to the cloud... kind of defeats the purpose

Probably so when you upgrade/lose your phone you don't otherwise lose your MFA tokens. Yes, you're meant to note down some recovery MFA codes when you first set it up, but how many "normal people" do that?


A number of sites I've signed up for recently have required TOTP to be set up, but did not provide backup codes at the same time. There's a lot of iffy implementations out there.


The TOTP recovery code is just a base32-encoded secret key, which is also present in the QR-encoded URL.
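To illustrate (a sketch; the secret is a common documentation example, and otplib is just one library implementing RFC 6238):

    import { authenticator } from "otplib"; // assumed package for the demo

    const secret = "JBSWY3DPEHPK3PXP"; // base32 secret -- this *is* the "recovery code"
    const uri = `otpauth://totp/Example:alice@example.com?secret=${secret}&issuer=Example`;
    console.log(uri);                            // scan this payload on any device to clone the token
    console.log(authenticator.generate(secret)); // current 6-digit code from the same secret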


gross


With Google Authenticator some years ago it wasn't even possible to restore your codes even if you had a local backup of the device. I'm not sure if that still is the case today but it was a common issue which we saw at our service desk before we switched to a different solution.


Yeah I had to re-enroll my phone when I got a new one a few years ago.

I never did get around to doing all of them so I still have the old phone in a drawer for those rare times I need it.


In my company, such a communication would never come via a text, so that would be a red flag immediately. All such communications come via email, and we have pretty sophisticated vetting in place to ensure that no such "sketchy" emails even arrive in our inboxes in the first place.

Additionally, we have a program in place which periodically "baits" us with fake phishing emails, so we're constantly on the lookout for anything out of the ordinary.

I'm not sure what the punishment is for clicking on one of these links in a fake phishing email, but it's likely that you have to take the security training again, so there's a strong disincentive in place.


After initially thinking it was a good idea, I've come to disagree pretty strongly with the idea of phish-baiting employees. Telling employees not to click suspicious links is fine, but taking it a step further to constantly "test" them feels like it's placing an unfair burden on the employee. As this attack makes clear, well-done targeted phishing can be pretty effective and hard for every employee to detect (and you need every employee to detect it).

Company security should be based on the assumption that someone will click a phishing link and make that not a catastrophic event rather than trying to make employees worried to ever click on anything. And has been pointed out, that seems a likely result of that sort of testing. If I get put in a penalty box for clicking on fake links from HR or IT, I'm probably going to stop clicking on real ones as well, which doesn't seem like a desirable outcome.


Every company I’ve worked with has phish baited employees and I’ve never had any problem. It keeps you on your toes and that’s good.

What happened in the article — getting access to one person’s MFA one time — is not exactly a catastrophic event. It just happens, as with most security breaches, a bunch of things happened to line up together at one time to make intrusion possible. (And I skimmed the article but it sounded like the attacker didn’t get that much anyway, so it was not catastrophic.)

And things lining up rarely happens but it will happen enough times for there to be an article posted to Hacker News once in a while with someone saying that it’s possible to make it perfectly secure.


> constantly "testing" them feels like it's placing an unfair burden on the employee.

Meh, it's not that disruptive, maybe one email every couple of months.

> Company security should be based on the assumption that someone will click a phishing link and make that not a catastrophic event rather than trying to make employees worried to ever click on anything.

Agreed. I think both things are important: keeping employees on their toes, which reduces the possibility of a successful attack, as well as making it not catastrophic if a phishing attack succeeds.


On the other hand, having your device die means that without cloud backup you either lose access, or whoever was relying on that 2FA needs to fall back on something else to authenticate you.

After all, if I can bypass 2FA with my email, whether 2FA is backed up to the cloud doesn't matter from a security standpoint.

Certainly I would agree with the assertion that an opt-out for providers of codes would be nice. Even if it is an auto-populated checkbox based on the QR code.


The workaround I've seen is to issue a user two 2FA keys, one for regular use and one to store securely as a backup. If they lose their primary key, they have the backup until a new backup can be sent to them. Using a backup may prompt partial or total restriction until a security check can be done. If they lose both, yes, there needs to be some kind of a reauth. In a workplace context like this it's straightforward to design a high-quality reauth procedure.


They could do what Authy does. Codes are backed up to the cloud, so you're not completely fucked if the phone is stolen. But the backup is encrypted, and to access it on a replacement device you must enter the backup password.
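A rough sketch of that password-protected-backup idea (not Authy's actual scheme; the parameters are illustrative):

    // Derive a key from the backup password and seal the TOTP secrets with it, so the
    // copy stored in the cloud is useless without the password.
    import { scryptSync, randomBytes, createCipheriv } from "node:crypto";

    function encryptBackup(totpSecretsJson: string, backupPassword: string) {
      const salt = randomBytes(16);
      const key = scryptSync(backupPassword, salt, 32); // password-derived 256-bit key
      const iv = randomBytes(12);
      const cipher = createCipheriv("aes-256-gcm", key, iv);
      const ciphertext = Buffer.concat([cipher.update(totpSecretsJson, "utf8"), cipher.final()]);
      return { salt, iv, ciphertext, tag: cipher.getAuthTag() }; // all of this can live in the cloud
    }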


That relies on someone remembering their backup password that they probably don't use often.


I suspect that this sort of issue is the real reason for making it difficult to not back up secrets to the cloud. On the one hand, you will have some number of people pissed off because they were taken advantage of and they realize that it was enabled by having backups in the cloud. On the other, you have people pissed off because they couldn't manage the final step in keeping their shit secure and are now locked out of something. The number in the latter category is vastly larger than the number in the former.


Authy makes the user enter this on a periodic basis to refresh their memory, which is a good thing imho


They could’ve just had employees use Okta Verify as opposed to Google Authenticator


Sophisticated... ok

I mean, it's a great reason to use a U2F / WebAuthn second factor that cannot be entered into a dodgy site

https://rakkhi.substack.com/p/how-to-make-phishing-impossibl...


Not surprised. A team at Google identified this as a vector to juice growth, submitted the metrics which now govern their PSC and didn’t add the necessary counter-metrics to measure negative effects.

That’s normal because that’s how the game is played. All the way up the chain to the org leader, there is no incentive to not do this.


You live in a funny alternate reality. You should consider what it might be like to live in one where everyone else isn't dumber than you.

I will tell you a truth: People who think they're smarter than everyone else are generally missing important context or information.


Please don’t project your inferiority complex onto me.


> I sync my TOTP between devices using an encrypted backup, even if someone got that file they could not use the codes.

What do you use to accomplish this?


Not OP, but I store my TOTP secrets along with all my other passwords in a KeePass database and sync the encrypted database to my devices with Dropbox. All the clients I use to open a KeePass database can generate TOTP codes from the secrets at this point, so I don't use a dedicated TOTP app like Google Authenticator or Authy anymore.

Not multifactor anymore, but also not vulnerable to catastrophic phone destruction or Google account banning. It is what it is.


After the sync, you have exactly two devices that you can use to answer the MFA challenge, instead of one. It's a backup.


> Very sophisticated attack, I would bet most people would fall for this.

No. If you think people at your company would fall for this, then IMO you have bad security training. The simple mantra of "Hang up, lookup, call back" (https://krebsonsecurity.com/2020/04/when-in-doubt-hang-up-lo...) would have prevented this.

Literally like 99% of social engineering attacks would be prevented this way. Seriously, make a little "hang up, look up, call back" jingle for your company. Test it frequently with phishing tests. It is possible in my opinion to make this an ingrained part of your corporate culture.

Agree that things like security keys should be in use (and given Retool's business I'm pretty shocked that they weren't), but there are other places that the "hang up, look up, call back" mantra is important, e.g. in other cases where finance people have been tricked into sending wires to fraudsters.


The ineffectiveness of "security training" is precisely why TOTP is on its way out - you couldn't even train Google employees to avoid getting compromised.


IMO most of this is because most security training I've seen is abysmal. It's usually a "check the box" exercise for some sort of compliance acronym. And, because whatever compliance frameworks usually mandate hitting lots of different areas, it basically becomes too much information that people don't really process.

That's why I really like the "Hang up, look up, call back" mantra: it's so simple. It shouldn't be a part of "security training". If corporations care about security, it should be a mantra that corporate leaders begin all company-wide meetings with. It's basically teaching people to be suspicious of any inbound requests, because in this day and age those are difficult to authenticate.

In other words, skip all the rest of "security training". Only focus on "hang up, look up, call back". Essentially all the rest of security training (things like keeping machines up to date, etc.) should be handled by automated policies anyway. And while I agree TOTP is and should be on its way out, the "hang up, look up, call back" mantra is important for requests beyond just things like securing credentials.


It's not just because it's abysmal, it's because it was found, empirically, not to work, no matter how good you make it. The mitigation you're describing is also susceptible to lapses and social engineering, just like what got them into trouble in the first place.

The simpler mitigation of 'the target employee with the Google account full of auth secrets should have had it U2F protected' would have worked even if the phone person had just read out the target's Google password to anyone who called and asked for it.

They could have enforced that with a checkbox in their GSuite admin console.


But aside from beating employees over the head with it, how many companies actually operate in a way that encourages and reinforces such an approach? I'd bet it's not many, and honestly if it's a non-zero number I'd be at least a bit surprised.

You can have all the security training in the world, but every time IT or HR or whoever legitimately reaches out to an employee, especially when it's not based on something initiated by the employee, the company is training exactly the opposite behavior Krebs is suggesting. Hanging up and calling back will likely at minimum annoy the caller and inconvenience the employee. Is the company culture accepting of that, or even better are company policies and systems designed to avoid such a scenario? If a C-suite person calls you asking for some information and you hang up and call them back, are they going to congratulate you on how diligently you are following your security training?

You're not wrong that the Krebs advice would help prevent most phishing, but I'd argue it has to be an idea you design your company around, not just a matter of security training. Otherwise you're putting the burden on employees to compensate for an insecure company, often at their own cost.


This fails to satisfy one of the core lessons here: trust nothing, not even your own training and culture.


Reminds me of a situation early in my career where I was talking with the CTO about some security concern and I said "well it's all on the internal company network" and he immediately said "why on earth do you think you can trust our internal network?"


So I take it you are employed by someone that allows you to connect to nothing and change nothing? Because if you can do any of those things, your employer is clearly Doing It Wrong, based on your interpretation.

(If you happen to be local-king, flip the trust direction, it ends up in the same place.)


Especially not the information security team. They're the most likely to be compromised.


They just have to catch someone half-awake, or already very stressed out, or otherwise impaired once.


I’ve done 6 different versions of “security training” as well as “GDPR training” over the past few years. I think they are mostly tools to drain company money and wasting time. About the only thing I remember from any of it is when I got some GDPR answer wrong because I didn’t resize your shoe size was personal information and it made me laugh that I had failed the whatever quiz right after I had been GDPR certified by some other training tool.

If we look at the actual data, we have seen a reduction in employees who fall for phishing emails. Unfortunately we can’t really tell if it’s the training or if it’s the company story about all those millions that got transferred out of the company when someone fell for a CEO phishing scam. I’m inclined to think it’s the latter, considering how many people you can witness letting the training videos run without sound (or without paying attention) when you walk around on the days a new video comes out.

The only way to really combat this isn’t with training and awareness, it’s with better security tools. People are going to do stupid things when they are stressed out and it’s Thursday afternoon, so it’s better to make sure they at least need an MFA factor that can’t be hacked as easily as SMS, MFA spamming and so on.


To emphasize, I 100% agree with you. I'm not arguing for more security training, I'm arguing for less.

"Hang up, look up, call back". That's it. Get rid of pretty much all other "security training", which is just a box ticking exercise for most people anyway.

I also agree with the comment about better security tools, but that's why I think "hang up, look up, call back" is still important, because it teaches people to be fundamentally suspicious of inbound requests even in ways where security tools wouldn't apply.


Then I guess we agree! My mantra is to just never click on any links. Even when I know they aren’t phishing I don’t click on them.

Of course that’s probably easier for a programmer than most other employees. I’m notoriously hard to reach unless it’s through a user story on our board that’s first been vetted by a PO. Something you probably can’t get away with if you’re not privileged enough by being in an in-demand role where they can’t just replace you with someone more compliant. I do try not to be an asshole about it, but we have soooooo many fake phishing mails and calls from our cyber awareness training that it’s just gotten to the point where it’s almost impossible to get trapped by one unless you ignore things until someone shows up in person. Luckily one of my privacy add-ons in Firefox prevented me from getting caught when I actually did click one of the training links on one of those famous Thursday afternoons. So I still don’t have the “you’ve clicked a phishing link” achievement… which I’m still not sure why it’s there, because I sort of want it now that it is, and eventually that urge is going to win.


> We use OTPs extensively at Retool: it’s how we authenticate into Google and Okta, how we authenticate into our internal VPN, and how we authenticate into our own internal instances of Retool.

They should stop using OTPs. OTPs are obsolete. For the past decade, the industry has been migrating from OTPs to phishing-proof authenticators: U2F, then WebAuthn, and now Passkeys†. The entire motivation for these new 2FA schemes is that OTPs are susceptible to phishing, and it is practically impossible to prevent phishing attacks with real user populations, even (as Google discovered with internal studies) with ultra-technical user bases.

TOTP is dead. SMS is whatever "past dead" is. Whatever your system of record is for authentication (Okta, Google, what have you), it needs to require phishing-resistant authentication.

I'm not high-horsing this; until recently, it would have been complicated to do something other than TOTP with our service as well (though not internally). My only concern is the present tense in this post about OTPs, and the diagnosis of the problem this post reached. The problem here isn't software custody of secrets. It's authenticators that only authenticate one way, from the user to the service. That's the problem hardware keys fixed, and you can fix that same problem in software.
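
To make "authenticators that only authenticate one way" concrete, here is a toy sketch (Python, using the cryptography library; this is only the idea, not the actual WebAuthn message format): the authenticator signs the server's challenge together with the origin the browser actually saw, so a code captured on a look-alike domain produces a signature the real service rejects.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Authenticator side: key pair created at registration; the private key
    # never leaves the device. The server only ever stores the public key.
    private_key = ec.generate_private_key(ec.SECP256R1())
    public_key = private_key.public_key()

    def authenticator_sign(challenge: bytes, origin: str) -> bytes:
        # The origin the browser reports is bound into the signed data, so a
        # homoglyph phishing domain cannot replay this to the real site.
        return private_key.sign(challenge + origin.encode(),
                                ec.ECDSA(hashes.SHA256()))

    # Relying party side: issue a random challenge, then verify it came back
    # signed for *our* origin (raises InvalidSignature otherwise).
    challenge = os.urandom(32)
    signature = authenticator_sign(challenge, "example-corp.okta.com")
    public_key.verify(signature, challenge + b"example-corp.okta.com",
                      ec.ECDSA(hashes.SHA256()))

A TOTP code, by contrast, is just six digits with no notion of where it is being typed, which is exactly why it relays so well through a phishing page or a phone call.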

(All three are closely related, and an investment you made in U2F in 2014 would still be paying off today.)


For others new to WebAuthn and Passkeys (like me), worth noting that Passkeys come with important privacy/ease-of-use trade-offs (nice summary here: https://blog.passwordless.id/webauthn-vs-passkeys)

Less of an issue though once more non-platform vendors start supporting them (e.g. Bitwarden https://bitwarden.com/passwordless-passkeys/)


Worth noting that implementing FIDO2/Passkeys is more challenging than it looks both from a UX standpoint and from a threat modeling standpoint. We tried to cover some of this in a blog post, in case anybody is interested: https://www.slashid.dev/blog/passkeys-security-implementatio...


Are there self-hosted versions of something akin to what okta does? Push notifications with a validation step that the actual user initiated the authn request?

Knowing how dead simple TOTP is technically, it's blown my mind that more companies don't host their own totp authn server.
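
(To illustrate the "dead simple" part, a minimal sketch of the server side, assuming the pyotp library; the hard part is wiring this into your primary auth flow, not the TOTP math.)

    import pyotp

    # Enrollment: generate and store a per-user secret, show the otpauth URI
    # (usually as a QR code) exactly once.
    secret = pyotp.random_base32()
    uri = pyotp.TOTP(secret).provisioning_uri(name="alice@example.com",
                                              issuer_name="Example Corp")

    # Login: verify the submitted code, allowing one 30-second step of drift.
    accepted = pyotp.TOTP(secret).verify("123456", valid_window=1)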


Most places don't host TOTP auth servers because generally you want to bundle up the whole authn/authz package. Since you need your MFA flow to be connected to your primary auth flow, having one provider for one and then self-hosting the other is generally not smooth or easy.

Push notifications are also, in my experience, a massive pain (both in terms of the user flow where you have to pull out your phone, and in terms of running infra that's wired up to send pushes to whatever device types your users have). Notably, now you need a plan for users that picked a weird smartphone (or don't have a smartphone).

The better option is to go for passwordless auth, which you could self-host with something like Authentik or Keycloak, and then it handles the full auth flow.


What would be your recommendation for replacing TOTP today?


FIDO2


>The caller claimed to be one of the members of the IT team, and deepfaked our employee’s actual voice. The voice was familiar with the floor plan of the office, coworkers, and internal processes of the company.

Wow that is quite sophisticated.


And obviously untrue. If you’re an employee who just caused a security incident, of course you’re going to make it seem as sophisticated as possible. But considering Retool has hundreds of employees from all over the world, the range of accents is going to be such that any voice will sound like that of at least one employee.

Are you close enough to members of your IT team to recognise their voices but not be close enough to them to make any sort of small talk that the attacker wouldn’t be able to respond to convincingly?

If you’re an attacker who can do a convincing french accent, pick an IT employee from LinkedIn with a french name. No need to do the hard work of tracking down source audio for a deepfake when voices are the least distinguishable part of our identity.

Every story about someone being conned over the phone now includes a line about deepfakes but these exact attacks have been happening for decades.


Fully agreed, saying a deepfaked voice was involved without hard proof is deflecting blame by way of claiming magic was involved.


I think it’s right to be skeptical, but it’s also easy to do this if you’ve identified the employee to train on the voice of. You could even call them and get them to talk for a few minutes if you couldn’t find their Instagram.


Instagram for sound? You must know some people that use it very differently than the people I know (if they're even still there).


Sophisticated enough that I’d just suspect the employee unless there was additional proof.


Highly reminiscent of the sort of social engineering hacks Mitnick would run. In his autobiography, he describes pulling this off by starting small: simply asking lower-ranking employees over the phone for low-risk info like their names, so that when it came time to call higher-ranking ones he had trustworthy-sounding details to drop. The attack is clever for sure, but not necessarily any more sophisticated than multiple well-placed calls.


inside job?


Anything's possible, but the simplest explanation (per Occam's razor) is just that the employee was fooled.

Is it plausible that if a good social engineer cold-called a bunch of employees, they'd eventually get one to reveal some info? Yes, it happens quite frequently.

So any suggestion that it was an inside job, or used deep fakes, or something like that would require additional evidence.

Kevin Mitnick's "The Art of Deception" covers this extensively. The first few calls to employees wouldn't be attempts to actually get the secret info, it'd be to get inside lingo so that future calls would sound like they were from the inside.

For example, the article says the caller was familiar with the floor plan of the office.

The first call might be something like "Hey, I'm a new employee. Where are the IT staff, are they on our floor?" - they might learn "What do you mean, everyone's on the 2nd floor, we don't have any other floors. IT are on the other side of the elevators from us."

They hang up, and now with their next call they can pretend to be someone from IT and say something about the floor plan to sound more convincing.


how's that Zero Trust architecture working out for everyone?


They mention in the article that their zero-trust architecture is what prevented the attacker from gaining access to on-prem data. So it seemed like it worked pretty well in mitigating the damage.


I'm curious if they actually mean "Zero trust" in the "perimeterless" sense (https://en.wikipedia.org/wiki/Zero_trust_security_model) or if they just mean their on-prem solution doesn't require trusting some central service operated by Retool.


The latter


What’s this got to do with zero trust?


It is a cynical comment that is meant to highlight the relationship between humans, where oppressive and untrusting employment leads to an increase in antipathy, ill-will, and feelings of being abused, all of which leads to insider theft and serious premeditated betrayal?


Zero Trust is such a bad branding for how the architecture works. It's just "always prove" architecture.


“Always prove” and “zero ambient trust” are basically the same thing, no?

Perhaps “authenticate everything, everywhere” is better, but falls into the trap of trying to define “everywhere” and “everything”: should every single client application have to authenticate? Should you have to authenticate Ethernet frames?


I think we may mean the same thing, but zero trust has a connotation of negative rights, versus "always prove", which frames things as a more positive assertion. At least that's worked for me at the last couple of places I've been.

Should every client application have to authenticate and authorize? Probably not every one, but the overwhelming majority should, and those that don't should have a good justification for why not. The challenge after that is "how long is this good for?".


It does seem to sound pretty good to the executives signing the deals when they hear the marketing talk.


Are the claims of a deepfake and intimate knowledge of procedures based on the sole testimony of the employee who oopsed terribly? This reads like a novelisation of events.

Retool needs to revise the basic security posture. There is no point in complicated technology if the warden just gives the key away.


> Retool needs to revise the basic security posture.

Couldn't agree more. TBH I thought this post was an exercise in blame shifting, trying to blame Google.

> We use OTPs extensively at Retool: it’s how we authenticate into Google and Okta, how we authenticate into our internal VPN, and how we authenticate into our own internal instances of Retool. The fact that access to a Google account immediately gave access to all MFA tokens held within that account is the major reason why the attacker was able to get into our internal systems.

Google Workspace makes it very easy to set up "Advanced Protection" on accounts, in which case it requires using a hardware key as a second factor, instead of a phishable security code. Given Retool's business of hosting admin apps for lots of other companies, they should have known they'd be a prime target for something like this, and not requiring hardware keys is pretty inexcusable here.


> Google Workspace makes it very easy to set up "Advanced Protection" on accounts, in which case it requires using a hardware key as a second factor, instead of a phishable security code.

This isn't immediately actionable for every company. I agree Retool should have hardware keys given their business, but at my company with 170 users we just haven't gotten around to figuring out the distribution and adoption of hardware keys internationally. We're also a Google Workspace customer. I think it's stupid for a company like Google, the company designing these widely used security apps for millions of users, to allow for cloud syncing without allowing administrators the ability to simply turn off the feature on a managed account. Google Workspace actually lacks a lot of granular security features, something I wish they did better.

What is a company like mine meant to do here to counter this problem?

edit: changed "viable" for "immediately actionable". It's easy for Google to change their apps. Not for every company to change their practices.


> What is a company like mine meant to do here to counter this problem?

What is hard about mailing everyone a hardware key? I honestly don't see the problem. It's not like you need to track it or anything, people can even use their own hardware keys.

1. Mail everyone a hardware key, or tell them if they already have one of their own they can just use that.

2. Tell them to enroll at https://landing.google.com/advancedprotection/

> Google Workspace actually lacks a lot of granular security features, something I wish they did better.

Totally agree with that one. Last time I checked you couldn't enforce that all employees use Advanced Protection in a Google Workspace account. However, you can still get this info (enabled or disabled) as a column in the Workspace Admin console so you can report on people who don't have it enabled. I'm guessing there is also probably a way to alert if it is disabled.


I can't tell you how happy I am that I don't have to fight with Google Workspace administration anymore. When I was doing it, getting TOTP enforcement enabled was very problematic. You couldn't just set the org to be enforced, because new users wouldn't be able to log in, and then you'd have to turn it off for the org any day new people started, then make sure that everybody was enrolled (including existing employees that turned it off while they could), etc.

They finally fixed it, but it took them a long time, and in the meantime, horrible workarounds.

They also had no way of merging two companies' accounts, which is fine because M&A never happens, and Google never acquires anyone using Google Workspace (I certainly would refuse to be acquired by them after using their software, but I'm extra grumpy).


It is not based on the sole testimony of the employee. (Sorry I can't go into more details.)


Employees are only human. Even smart, savvy, well-trained employees can be fooled by good social engineering every once in a while.

The key to good security is layering. Attackers should need to break through multiple layers in order to get access to critical systems.

Compromising one employee's account should have granted them only limited access. The fact that this attack enabled them to get access to all of that employee's MFA tokens indeed sounds like the right thing to focus on.


Fantastic write-up. Major props for disclosing the details of the attack in a very accessible way.

It is great that this kind of security incident post-mortem is being shared. This will help the community level up in many ways, especially given that its content is super accessible and not heavily leaning on tech jargon.


I disagree. I appreciate the level of detail, but I don't appreciate Retool trying to shift the blame to Google, and only putting a blurb in the end about using FIDO2. They should have been using hardware keys years ago.


Hi, I'm sorry you felt that way. "Shifting blame to Google" is absolutely not our intention, and if you have any recommendations on how to make the blog post more clear, please do let me know. (We're happy to change it so it reads less like that.)

I do agree that we should start using hardware keys (which we started last week).

The goal of this blog post was to make clear to others that Google Authenticator (through the default onboarding flow) syncs MFA codes to the cloud. This is unexpected (hence the title, "When MFA isn't MFA"), and something we think more people should be aware of.


I felt like you were trying to shift blame to Google due to the title "When MFA isn't MFA" and your emphasis on "dark patterns" which, to be honest, I don't think they are that "dark". To me it was because this felt like a mix of a post mortem/apology, but with some "But if it weren't for Google's dang dark patterns..." excuse thrown in.

FWIW, nearly every TOTP authenticator app I'm aware of supports some type of seed backup (e.g. Authy has a separate "backup password"). I actually like Google's solution here as long as the Workspace accounts are protected with a hardware key.

The only real lesson here is that you should have been using hardware keys.


This comment reads more poorly to me than the actual blog post. It _should_ be your intention to shift partial blame to Google, and you should own it. It's ridiculous that they make an operation like syncing your MFA keys seem so innocuous. I just changed phones, so I'm just seeing this user flow for the first time, and it is ghastly how they've made it the default path.

Changing things to make it less offensive to someone who was offended really waters down your position.


There is also the bit about the phishers deep-faking an employee's voice. Yeah, right. That happened. /sarcasm


In fairness dvdhsu responded about that point elsewhere: https://news.ycombinator.com/item?id=37502239


It was also a bit weird how they kept emphasizing how their on-prem installations were not affected, as if that lessens the severity somehow. It's like duh, that's the whole point of on-prem deployments.


To deepfake the voice of an actual employee, they would need enough recorded content of that employee's voice... and I would think someone doing admin things on their platform isn't also in DevRel with a lot of their voice uploaded online for anyone to use. So it smells like someone with close physical proximity to the company would be involved.


There’s a lot of ways to get clips of recordings of someone’s voice. You can get that if they ever spoke at a conference or on a video. Numerous other ways I won’t list here.


Probably wasn't a "deepfake" just someone decent with impressions and a $99 mixer. After compression this will be more than good enough to fool just about anyone. No deepfake is needed. Just call the person once and record a 30 second phone call. Tell them you are delivering some package and need them to confirm their address.


That is more plausible. But the embellishment about the phisher knowing the layout of the office etc makes me think it was just straight up an inside job, with the employee willingly handing over the OTP and trying to cover their tracks.


One possibility would be to just call the employee and record their voice. One could pretend to be a headhunter.


This would almost certainly be it. Calling someone to record them and using their voice later to impersonate them was done even before deep-fake voices were a concept. With the tools available now, even a short call + the grainy connection of a phone voice line would be more than enough to make a simulated voice work.


I'm already cautious about answering calls from unknown numbers. This could be a good reason to be even more cautious.


While the google cloud thing is a weird design, that seems like the wrong place to blame.

TOTP and SMS based 2FA are NOT designed to prevent phishing. If you care about phishing use yubikeys.


> The caller claimed to be one of the members of the IT team, and deepfaked our employee’s actual voice. The voice was familiar with the floor plan of the office, coworkers, and internal processes of the company.

huh.. this raises way more questions than it answers; my first two are:

- how did the voice of some random employee (in IT, for that matter) get learned outside the company, well enough to be deepfaked (and, I presume, on the fly)? Maybe we should record fewer conversations (looks at Teams, Discord, Zoom)

- were there already leaks of 'internal processes'?


MFA is a scam resulting from Google first, and then others, wanting to get users' phone numbers associated with more data they collect on them. It provides no tangible security benefits, creates a lot of headaches for the IT department, creates big gaps in developers' productivity (if used in a programming company) and, actually, creates a new attack vector (phones are lost or stolen a lot more often than any other means of authentication).

Since Github now requires MFA, I'm throwing away my account: I'll never give them any physical evidence to connect me to other data they have on me.

In the company I work today (20-something thousands employees) the latest security breach was through MFA. Data was stolen. Perpetrators made jokes in company's Slack etc.

Last time I had to upgrade my phone (while working for the same company), it took IT about two weeks to give me all the necessary access again, which required a lot of phone calls, video conferences, including my boss and my boss' boss.

It's mind-boggling that this practice became the norm and is recommended by IT departments even of companies who have nothing to gain from collecting such data.


Wait, what? MFA (multi-factor authentication) existed long before Google was founded. RSA SecurID tokens were introduced in the 1980s or so.

MFA is an easy and good way to prevent hostile account takeovers. Especially with the amount of data breaches, one-time passwords are way more secure than memorized “static” passwords.

SMS based two factor is the one Google pushed. Even Google recommends other ways of MFA these days (using hardware like YubiKey or apps like Authy).

The public’s phone numbers are not that valuable to a company like Google. Until very recently they were listed in publicly available phone books.


MFA in its current form owes its existence to a lawsuit filed against Google being a monopoly on Android when packaging and selling advertisement data to ad campaign management companies.

It's not about being able to tie your phone number to your name. It's about being able to tie your browsing, purchasing, and other behavior history to an id that doesn't change much.

Google by itself doesn't run ad campaigns. It sort of has API to design a campaign yourself... but that's super ineffective. There are multiple companies who manage ad campaigns which run on Google. In order to be effective they need to have some predictive power over user's future browsing, purchasing etc. choices. Being able to consistently identify the user (and tie that to their history) is the most valuable ad-related info anyone can sell.

Whatever existed in the 80s has nothing to do with what MFA is today. Today it's a scam that helps big tech companies who want to be an advertisement platform harvest and catalogue data, helping advertisers predict user behavior. All it gives end users is inconvenience and less security. All it gives IT is an extra headache and more procedures that may potentially go wrong.


I don't understand:

> Google first, and then others wanting to get users' phone numbers associated with more data they collect on them

Perhaps you mean SMS 2FA, instead of a non phone number related MFA such as T-OTP?


Google were sued because they were selling to advertisers information about Android users that other advertising platforms couldn't possibly have had. Advertisement data such as user preferences, their history of clicking on ads, browsing history etc. would all be organized by the ID derived from the Android device. Once the court decided they cannot do that, and users should opt in to be tracked, they promptly created MFA that relied on collecting data about physical devices. Which they then again used to sell advertisement data.

The whole point of this exercise is not to enhance security, but to have an edge as an advertisement platform. If today you can trick the system into not using a phone, it's a temporary thing. The more users join, the tighter will be the system's grip on each individual user, and the "privilege" of not divulging your phone number will be taken away.

Google did this before with e-mail access for example, multiple times, actually. Remember how GoogleTalk used Jabber? -- Not having to use a proprietary chat protocol was a feature that made more users join. As soon as there were enough users, they replaced GoogleTalk with Hangouts or w/e it's called.

GMail used to provide standard SMTP / IMAP access, but they continuously undermined all clients other than Google's. Started with removing POP access. Then requiring mandatory TLS. Then requiring a bunch of nonsense "trusted application registration". Finally, this feature is now behind MFA, which makes it useless anywhere outside Google's Web client / Android app etc. All of this was delivered as "security improvements", while giving no tangible security benefits. It was a move to undermine competition.


I use 1Password's authenticator, so no-one needs my phone number, and I don't have to worry about losing my phone, as there is a Linux CLI, a browser extension, etc.


You forgot to add: for now.

There is no genuine interest on the other side to provide you with better security. There is no genuine interest on the other side to make your life easier / to care about your privacy. You are allowed to opt out through a complicated mechanism because the provider needs a high volume of users. As time goes on, either the law will catch up to the provider and make them carve out mandatory exceptions to this nonsense, or they will just exploit you whichever way they can.


Google repeatedly sends the 2fa to my phone even though I have an authenticator app set up for it. Even if I use the app, it still asks me to authenticate their own thing on my phone. So I just gave up and use that all the time now.


Unfortunately, MFA has become synonymous with SMS, email, and OTP. All of these methods require sharing a secret between two parties without any way to verify the authenticity of either party.

Key based authentication where both parties have private keys that are not shared is a much better alternative. Unfortunately, client side TLS certificates, which are application level protocol agnostic, never really caught on.


There are U2F/FIDO keys / passkeys, which are what you describe, the latter only very recently becoming widely available. When/if they become successful is another question. U2F/FIDO etc. keys are only supported by a subset of websites.


> U2F/FIDO etc keys are only supported by a subset of websites.

But a growing number. https://passkeys.directory/ is a good place to check.

Ask for it. MFA via SMS/email/etc was not very common 10 years ago, but it is now. That's due in part to people asking for it.


But they're not application level protocol agnostic. Based on my understanding, they require use of HTTP. If I want to get MFA using an email client communicating via SMTP and IMAP, then the email client needs to be able to interact with the HTTP API.


You can use FIDO tokens for other protocols: I use it for SSH, for example since OpenSSH 8.2 or so.


That requires the client to implement FIDO support. This was added to openssh 8.2p1. For example, mutt doesn't have FIDO support and you have to use an external script for oauth2 support. Both require implementing support for interacting with a HTTP API (which is not application level protocol agnostic).

On the other hand, you can configure mutt to use a client side TLS certificate and SMTP servers (e.g., postfix) and IMAP servers (e.g. dovecot) both support client side TLS certificates without having to support sending HTTP requests or parsing HTTP responses.


It’s not HTTP - the design uses a much smaller binary protocol (hardware tokens are very constrained) called CTAP:

https://fidoalliance.org/specs/fido-v2.0-id-20180227/fido-cl...

OpenSSH uses that protocol to request encryption operations. Mutt could do that the same way but it’d need a server which supports the same crypto algorithm FIDO2 specifies. That’d be great but also somewhat pointless if you’re using Yubikeys which support x509 auth which IMAP and SMTP have supported for decades.


Client-side TLS certificates are the worst of all the options. They only make sense fully managed or for server-to-server communication.


Why did they need to call? They could’ve phished the password and MFA by simply MITMing?

Perhaps we need a distinction between phishable MFA and unphishable U2F/WebAuthn-style MFA


> The caller claimed to be one of the members of the IT team, and deepfaked our employee’s actual voice. The voice was familiar with the floor plan of the office, coworkers, and internal processes of the company. Throughout the conversation, the employee grew more and more suspicious, but unfortunately did provide the attacker one additional multi-factor authentication (MFA) code.

> The additional OTP token shared over the call was critical, because it allowed the attacker to add their own personal device to the employee’s Okta account, which allowed them to produce their own Okta MFA from that point forward.

They needed to have a couple of minutes to set things up from their end, and then ask for the second OTP code. A phone call works well for that.


Ahh, thanks and apologies for not re-reading before asking.

That is indeed interesting; keep the con going a bit longer to get a proper foothold.


Question for security folks out there:

So often I see these kinds of phishing attacks that have hugely negative consequences (see the MGM Resorts post earlier today), and the main problem is that just one relatively junior employee who falls for a targeted phishing attack can bring down the whole system.

Is anyone aware of systems that essentially require multiple logins from different users when accessing sensitive systems like internal admin tools? I'm thinking like the "turn the two keys simultaneously to launch the missile" systems. I'm thinking it would work like the following:

1. If a system detects a user is logging into a particularly sensitive area (e.g. a secrets store), and the user is from a new device, the user first needs to log in using their creds (including any appropriate MFA).

2. In addition, another user like an admin would need to log in simultaneously and approve this access from a new device. Otherwise, the access would be denied.

I've never seen a system like this in production, and I'm curious why it isn't more prevalent when I think it should be the default for accessing highly sensitive apps in a corporate environment.
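
Roughly what I'm imagining, as a hypothetical sketch (all names invented, not any particular product): a login from an unrecognized device to a sensitive system stays pending until a second, independent person approves it.

    from dataclasses import dataclass, field

    @dataclass
    class AccessRequest:
        requester: str
        resource: str
        new_device: bool
        approvals: set = field(default_factory=set)

        def approve(self, approver: str) -> None:
            if approver == self.requester:
                raise PermissionError("requester cannot approve their own access")
            self.approvals.add(approver)

        def granted(self, required: int = 1) -> bool:
            # Known device: the user's own MFA login suffices.
            # New device + sensitive resource: someone else must co-sign.
            return (not self.new_device) or len(self.approvals) >= required

    req = AccessRequest("alice", "secrets-store", new_device=True)
    assert not req.granted()
    req.approve("bob-the-admin")
    assert req.granted()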


Teleport has two person rule + hardware token enforcement, https://goteleport.com/resources/videos/hardened-teleport-ac...


Really, really appreciate you sending this! I will dig in but this seems to be exactly what I was asking about/looking for. I'm always really curious why the big native cloud platforms don't support this kind of authentication natively.


You're looking for quorums, or key splits. They aren't super common. You see them with some HSMs (need M of N persons to perform X action).


not good with acronyms, what is hsm here?



Hardware Security Module


I've worked in places where to get access to production or other sensitive stuff, an employee would need to submit a request which had to be approved by whoever was designated to approve such things. Then the employee got a short-lived credential that could be used to log in. Everything they did was logged. Once used, the credential could not be used for subsequent logins. Their session was time-limited. If they needed more time, they needed to submit another request.


Mechanisms like this exist, but they probably aren't integrated into whatever system you are using, and delays which involve an approval workflow add a lot of overhead.

In most cases the engineering time is better spent pursuing phishing resistant MFA like FIDO2. Admin/Operations time is better spent ensuring that RBAC is as tight as possible along with separate admin vs user accounts.


Transactions (messages) can be required to have multi-sig, if that is desired.

There are smartphone apps and various tools to send a multi-sig message:

https://pypi.org/project/pybtctools


I wonder about this too, whether people really use Shamir secret sharing as part of some security compliance.
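
For anyone who hasn't seen it, the scheme itself is tiny. A toy sketch over a prime field (illustration only, use an audited library for anything real): splitting produces n points on a random degree k-1 polynomial whose constant term is the secret, and any k points recover it by Lagrange interpolation at x = 0.

    import secrets

    PRIME = 2**127 - 1  # Mersenne prime, big enough for a 16-byte secret

    def split(secret: int, n: int, k: int):
        """Split `secret` into n shares; any k of them reconstruct it."""
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]

        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

        return [(x, f(x)) for x in range(1, n + 1)]

    def combine(shares):
        """Lagrange interpolation at x = 0 recovers the secret."""
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
        return total

    shares = split(123456789, n=5, k=3)
    assert combine(shares[:3]) == 123456789 and combine(shares[2:]) == 123456789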


Some startup, please make a product that uses AI to identify these obviously fake emails.

Hello A, This is B. I was trying to reach out in regards to your [payroll system] being out of sync, which we need synced for Open Enrollment, but i wasn’t able to get ahold of you. Please let me know if you have a minute. Thanks

You can also just visit https://retool.okta.com.[oauthv2.app]/authorize-client/xxx and I can double check on my end if it went through. Thanks in advance and have a good night A.


does iOS have a "is there a call in progress" API?

If so, it would be a good idea for OTP apps to use it and display a prominent warning banner when opened during a call.


You know how I never get phished? I never answer any call or SMS asking for anything, and a link in a text message is ALWAYS a major red flag. I know everyone is talking about the MFA, but the entry point was the employees' phone numbers; how did they get those in the first place? Especially since, from the article, the attacker knew the internals of this company.

As for the MFA, Google should have on-demand peer-to-peer sync rather than cloud save. For example, when a new device is added, your Google account is used to link the new device to the existing device; you click sync, and your old device asks you whether the new device requesting the sync should be allowed. Obviously nothing is saved in the cloud, just a peer-to-peer sync with Google as a connection broker.


I was thinking about this the other night. Is there really a solution to this? Best case scenario, let’s say you have a hardware key and everything is sealed up really well. You get your phishing call, but instead of asking for an MFA code they have a real-time AI-enhanced video call from your daughter or mom with a gun to her head, and they just walk you through a set of steps that will expose your IT systems. Do you do as they demand with a loved one’s life at stake? Or maybe it’s a scam? What do you do? You have 5 seconds to decide. Me? I go John Wick on them, but I’ve had more than 5 seconds so that doesn’t count.


This is the biggest reason why MFA codes shouldn't be in the cloud. Use SMS-based MFA which is much more fool-proof though a pain in the ass. I have stopped using software based MFAs for this particular reason.


I don't think it's really accurate to describe it as not MFA. The attacker phished a password and 2 TOTP codes. So the attacker phished 3FA.

So yes, Google Authenticator sync made the security worse, but it didn't downgrade the security from MFA to non-MFA. And even if the sync was off, the TOTP codes in Google Authenticator could have been phished as well, so Google Authenticator can't be blamed so heavily, because the attack could have been done without it.

Disclosure: I work at Google but not on Google Authenticator.


TOTP: better than nothing, but not a lot better. It depends on the human being infallible. U2F doesn't, and would have worked here to prevent the takeover and the need for blaming the employee for not being infallible, but hey, at least less than $50 of hardware costs was saved by the employer!


One thing that is left out is to use unphishable MFA like hardware security keys (YubiKey, etc.).


The only takeaways you need from this:

- Your on-premise customers are the smart ones. Networks containing sensitive information should be isolated, not all pooled together.

- Google still has actually no understanding of practical security. Literally ban their products from your networks.


After reading all of the hype in the comments, I was disappointed by the actual article. There's about one paragraph of actual material about the ("spear") phishing attack.

There are not any details about the progress of the attackers or the speed of the attack, which would have been interesting to me. There are no details about any losses from the attack (or profits to the attacker).

Once the employee provided a TOTP code to the attacker, the only surprise is that they get control of the other codes by cloud sync (as extensively commented on here).

Regardless of the hate, this could happen to anyone. But... big L for reading out your TOTP code to somebody. (If more details about the deepfake come out, then it might be more exciting.)


1. install pass otp

2. pass otp add whatever/otp/me

3. paste in "otpauth://totp/whatever?secret=whateveritis"

4. pass git init; push to remote

Now you have MFA on any device that has git and your gpg key.


I don't understand: Why on earth does google want to sync MFA tokens? They're one-time use, aren't they? Or... feh, I can't even fathom


Answering myself, this helps a bit: https://www.zdnet.com/article/google-authenticator-will-now-...

I guess we need a better way to handle "Old phone went swimming, had to buy another, now what?"


Which is funny because the 2nd factor is "something I have", which means if you don't "have it" then you can never complete the 2nd factor. This means the 2nd factor, when your phone goes swimming, is ultimately your printed codes.


They mean they are syncing the private key used to generate the tokens on demand.


Do all these 2FA apps - like say Microsoft Authenticator - have these hidden/not-so-hidden private keys? From other posts it sounds like you can view the token and write it down... MA doesn't have that, I don't think.


TOTP (Time-based One-Time Password) needs a shared secret (and two synchronized clocks) to work, so yes.

FIDO2/WebAuthn relies on public key technology, so it also has a secret key, but that key is designed to be kept secret from the service/server one authenticates against.

In use, FIDO2 is more like a multi-service ID, like a driver's license that many services accept as ID. If you lose it, you don't restore a backup copy from a safe; you use your passport until you get a new one issued.

This makes more sense than with TOTP as the services only need your public key(id) on file.

https://en.wikipedia.org/wiki/Time-based_One-time_Password

https://en.m.wikipedia.org/wiki/WebAuthn


Which FIDO2 service do you recommend?

I get tired reading all these security articles. The more I read, the more I feel they are hiding something.


> Which FIDO2 service do you recommend?

Generally what comes with your phone and one or two hw tokens for backup? Looks like token2.com is a reasonable choice if you just want NFC/USBc and FIDO2 (and not storage for ssh/gpg keys). But I have little experience with hw keys.


SSH and PGP keys are not based on the same functionality.

the keys from Token2 support *-sk key storage

https://www.token2.com/site/page/using-token2-fido2-security...

But not PGP


Thank you for the reminder that ssh now has FIDO2 support!


Answering myself again, yeah, they all seem to have this private key hidden away somewhere. Didn't know that.

https://frontegg.com/blog/authentication-apps#How-Do-Authent...?


Well that's even worse isn't it?


Syncing of "MFA codes" is really syncing of the secret component of TOTP (time based one time password).

And it's a good thing, and damn any 2fa solution that blocks it. I don't want to go through onerous, incompetent, poorly designed account recovery procedures if a toddler smashes my phone. So I use authy personally, while a friend backs his up locally.


> I don't want to go through onerous, incompetent, poorly designed account recovery procedures if a toddler smashes my phone

Why don't you use the printed recovery tokens?


Not all websites offer them.

Hell, no bank I use (several large and several regional) supports generic TOTP. Some have SMS, one has Symantec VIP, proprietary and not redundant.

Edit: since I'm posting too fast according to HN, even though I haven't posted in an hour, I'll say it here. Symantec is TOTP, but you cannot back up your secrets and you cannot have backup codes.


Symantec VIP is TOTP under the hood.

https://github.com/dlenski/python-vipaccess


> Why don't you use the printed recovery tokens?

I currently see 53 2fa tokens in my private bitwarden.

You expect me to print, keep safe and manually reset them all when I buy a new phone?


The toddler got there first.

Seriously, though, it's hard to keep track of something that gets used once every five years.


Who has a printer these days?


Local libraries, print shops... but yeah that may be an attack vector.


A better way to fix this is to have multiple ways to log in. Printed backup codes in your safe with your personal papers and/or a Yubikey on your keychain. This works for Google and Github, at least.

Passkey syncing is more convenient, though, and probably an improvement on what most people do.


If you can back up a key, it is not MFA. It's just a second password, not another factor. The solution to having your phone smashed is to have multiple "something you have" factors, so you have a backup.


For me the question is "who the fsck uses Google Authenticator to store all their tokens, both company and personal?"


Google Authenticator was I believe the first available TOTP app, and is by far the most popular. It used to be open source and have no connection to your Google account. Many people installed it years ago when they first set up MFA, and have just been adding stuff to it ever since because it's easy and it works. Even for technical users who understand how TOTP works, there is no obvious reason to think it unsafe to put all your tokens in the app (until you read this article).

Look at the MFA help page for any website you use. One of the first sentences is probably something like "First you'll need to install a TOTP app on your phone, such as Google Authenticator or Authy..."

It really did used to be the best option. For example, see this comment from 10 years ago when Authy first launched:

> The Google Authenticator app is great. I recently got (TOTP) 2-factor auth for an IRC bot going with Google Authenticator; took about 5 minutes to code it up and set it up. It doesn't use any sort of 3rd party service, just the application running locally on my phone. TOTP/HOTP is dead simple and, with the open source Google Authenticator app, great for the end user.

- https://news.ycombinator.com/item?id=6137051


I think technically Blizzard Authenticator (even the app) was available before Google Authenticator, but obviously for extremely limited use.


Also, since it doesn't allow you to extract the private keys, you're kind of stuck with it once you've started using it.


Am I the only one questioning the deep fake of the voice?


If they have an audio recording of the person, there's a bunch of sites where you can create them on the fly for free. Don't know about the quality, though I imagine distortion can be dismissed as being from the phone rather than the fake.


Yeah, just add some twangy dropouts like a poor cell connection has.


MFA still means single point of failure - the person who has all the MFA is the one who can be hacked, like in this social engineering scenario.


I just call them one-time passcodes (otp)

Most of the time I am not using multifactor or 2factor the way it was designed

But it is accurately a one time passcode


Where can one find a breakdown of how to implement a TOTP generator? For curiosity's sake.


The basic premise is in https://datatracker.ietf.org/doc/html/rfc6238, although today I'd use SHA-256, not SHA-1, if possible.

But I'd disfavor TOTP over hardware tokens that can sign explicit requests.
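
If you want to see the mechanics, a minimal sketch with the common defaults (SHA-1, 6 digits, 30-second step), which you can sanity-check against any authenticator app:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        key = base64.b32decode(secret_b32.upper().replace(" ", ""))
        counter = int(time.time()) // period           # HOTP counter = current time step
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # example base32 secret

Note that both sides need the same raw secret, which is exactly what ends up being synced to the cloud, or read out over the phone, in stories like this one.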


Stopped reading at "deepfake".

It's the new advanced persistent threat, a perfect phrase to divert any responsibility.

(Yes, there are deepfakes. Yes, there are APTs. This is likely neither.)


I am (genuinely, really asking) curious why you think so. I've got no hand in this, but the skepticism around phishing attacks from this site of all places really surprises me. People like Kevin Mitnick have done more sophisticated phishing with fewer tools. Why wouldn't someone intent on running a social engineering scam use one of the widely available voice faking technologies that are available now? Keep in mind that they're simple enough to use that people are making memes with voices generated from ~5 seconds of voice recordings.


Making a meme is nothing like an interactive telephone conversation.

It's not that it's impossible, but it's not trivial either. But mainly, it's just unnecessary.

If a user is not fooled by a well-crafted phish, because they take the most trivial countermeasures such as calling back, they are not going to be fooled by a deepfake either. In practice, work on phishing is mostly better spent elsewhere. So while we shouldn't dismiss it completely, it clearly doesn't make sense against a smallish company with limited economic value, so it's very unlikely to be the case here.

There have been a handful of high-profile media cases involving deepfakes, none of which has held up on further investigation. It is understandable, nobody wants to be known as the one who didn't recognize his own kid on the phone, but the truth is more simple and actually helps us when designing countermeasures.


I suppose the disconnect then would be that we fundamentally disagree on what the simpler answer is. It's my understanding that a deepfake voice being used as part of a phishing scam is something that can be done trivially (or at least by a determined actor using free tools, so at least trivial enough for this case), so to me that would be the simplest, most obvious answer when compared to a company-wide conspiracy, but I can see your point if it is assumed that that isn't the case and that deepfake voices are actually hard to do.


Try it! The tools are publicly available. You might find that it's harder than you think. We are very sensitive to uncanny conversations. Analogue imitations and pitching the voice are much easier to work with.

However, my point is that none of that matters. After all, deepfakes are only going to get easier, so it is only a matter of time before it is as cheap as you describe. It is that imitating a voice has very little impact on the outcome of a phishing operation. Sure, it might not hurt, but other things affect the success of a scam. Don't rely on impersonating a voice, especially since a trivial callback completely defeats it, no matter how many resources you put into it.

Which is also why none of these recent media stories make sense. And when investigated, none of them has held up to scrutiny, precisely as expected. I have not done this myself, but look out for follow-up stories by respected bloggers and journalists.

Lots of people work on defending against these operations, and none of them spend any effort on identifying deepfakes, for a reason. Don't take my word for it, I am not in the business, but ask anyone who is whether they find these details believable.


I wonder how long it'll be before a similar attack happens because someone's/a company's passkeys are synced to the cloud.


Excellent write-up, thank you.


Naming/training issue imo.

We need a better name than MFA.

Something like “personal password-like token that should only be entered into a secure computer on a specific website/app/field and never needs to be shared”


It's well known that OTP is not immune to phishing. Force your users on webauthn or some other public key based second factor if you're aiming at decreasing the incident rate.


I blame SAML and any other federated login being an "enterprise only" feature on most platforms.

So users get used to sharing passwords between multiple accounts and no centralised authority for login. This causes the "hey what's your password? I need to quickly fix this thing" culture in smaller companies which should never be a thing in the first place.

If users knew the IT department would never need their passwords and 2FA codes they would never give them out, the reason they give them out is because at some point in the past that was a learned behaviour.


Ugh, or being able to generate an API/service token. It just ingrains the bad passwords and password sharing if you have to use passwords everywhere.


Well, push based 2fa with "select this number on your 2fa device" helps prevent some vectors. Simple totp doesn't do that.

"Never give your totp or one time code over the phone" is good advice.

"Never give info to someone who called you, call them back on the official number" is another.

This is user error at this point.


I disagree. Especially now that companies are centralizing on a couple of 2FA companies (like Okta from TFA), this is just ripe for phishing. Okta itself is terrible at this; they don't consistently use the okta.com domain, so users are at a loss and have basically no protection against impersonators.


For okta, if it is set up properly, the user should get push notifications. And in that push notification is a number they need to select to validate the push.

This eliminates credential phishing and "notification exhaustion" where a user just clicks "ok" on an auth request by a bad actor.

As much as I advocate for non cloud services, what okta provides is very secure.


man you should see what people are getting up to with evilginx2 these days. They are registering homoglyph URLs just for you and running MITMs that pass through the real site 1:1, forwarding to the real thing once they skim your login token so you never even notice. The really crappy phishes and jankfest fake sites are pretty much obsolete.

Then they hang out in your inbox for months, learn your real business processes, and send a real invoice to your real customer using your real forms except an account number is wrong.

Then the forensics guy will have to determine every site that can be accessed from your email and whether any PII can be seen. What used to be a simple 'hehe i sent spam' is now a 6-month consulting engagement and telling the state's attorney general how many customers were breached.


I've been thinking along these lines for a while. The whole "factors of authentication", where higher=better is no longer a good summary of the underlying complexity in modern authn systems.

We need better terminology.



