So, what does this say about Apple security? There's a lot of speculation and insinuation that all the security lapses started with the purchase of a refurbished MacBook, but there's zero evidence other than some coincidental timing. The author clearly wasn't using many security precautions prior to being compromised. They had many interconnected accounts; reused passwords; limited use of 2FA; phone/SMS-based 2FA in the few places they had it; no separate password for Chrome browser sync's DB; no secure password management app; and kept the keys to their crypto accounts in the cloud. The list of compounded failures is long. There's no reason to think this has anything to do with Apple at all.
They haven't learned any lesson, either. Their advice after this? Turn your laptop off when you're not using it (useless) and use Google Voice for 2FA. This is worse than useless; this is actively bad advice and you should not follow it.
The average user should install 1Password and use a TOTP application. Anyone can learn to do that, and it's really all you need. More advanced users, those with particularly extreme security needs, and pedantic nerds can use YubiKeys, hardware wallets, self-hosted password vaults, PGP-encrypted backup codes, and other measures that are worth considering, but aren't as approachable for everyone.
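For the curious, a TOTP app isn't magic: the codes are derived from a shared secret plus the current time (RFC 6238). Here's a minimal, illustrative Python sketch using only the standard library; the base32 secret below is made up, and a real one comes from the setup QR code.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        """Current TOTP code (RFC 6238) for a base32-encoded shared secret."""
        key = base64.b32decode(secret_b32.upper().replace(" ", ""))
        counter = int(time.time()) // period                    # 30-second time step
        msg = struct.pack(">Q", counter)                        # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()      # HOTP default is HMAC-SHA1
        offset = digest[-1] & 0x0F                              # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # example secret only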
Sometime back, I had 2FA set up on a phone, which eventually gave up the ghost. That locked me out of Google and many other services I depended on. Most painful was being locked out of email. Any suggestions on how to mitigate device/hardware failure?
Not GP, but I use Authy, which allows multiple devices, approving each new one with one of the others.
I don't recommend using its browser add-on, though, so this only fixes your case if you have two (or more) devices active at the same time, so that all but one can die without locking you out.
Also, have multiple forms of 2FA (such that any one may be used), e.g. a FIDO key or printed codes.
Save the security codes. I keep a small file encrypted with VeraCrypt containing the "top-level passwords": things like email, password managers and one-time recovery codes. And since this is the last recourse, the password for this file is only stored physically.
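Not VeraCrypt itself, but as a rough illustration of the same idea - a tiny secrets file locked behind a single passphrase-derived key - here's a sketch in Python using the third-party cryptography package; the file name and contents are just placeholders.

    import base64, getpass, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def derive_key(passphrase: bytes, salt: bytes) -> bytes:
        # Stretch the passphrase into a 32-byte Fernet key
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(passphrase))

    passphrase = getpass.getpass("Vault passphrase: ").encode()
    salt = os.urandom(16)                              # stored alongside the ciphertext
    token = Fernet(derive_key(passphrase, salt)).encrypt(
        b"email: ...\npassword manager: ...\nrecovery codes: ..."
    )
    with open("top-level-passwords.enc", "wb") as f:   # placeholder file name
        f.write(salt + token)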
It is also worth considering your threat model in this case. I am unlikely to face an attacker with nation-state resources, so I actually print out my backup codes on paper, along with instructions on how to use them to bootstrap my entire password/auth chain. I then put that paper into a safe deposit box with a copy of my will and other docs my wife and family might need if I drop dead of a heart attack. No passwords to remember (or to discover I've forgotten at exactly the point where I least want that to happen), and it's secure against the threat models I consider most likely.
I switched from 1Password to LastPass because (at the time) 1Password’s support on Windows was rudimentary, and on Linux was basically non-existent. And then I switched from LastPass to BitWarden, because I became uncomfortable with LastPass (for reasons which I don’t recall).
And then when 1Password released their 1Password X system, which works via a browser extension that works really nicely across Windows/Mac/Linux/iOS/etc., I switched from BitWarden back to 1Password again. (Caveat: I haven’t actually tested the Chrome extension myself; only the Firefox and Safari ones.)
RE: BitWarden vs 1Password, they’re pretty similar in most respects. 1Password is arguably more polished (especially in the native OS X app), and has support for storing multiple separate vaults of data (a feature which I don’t actually use). BitWarden has the ability to self-host the server (which I’ve never done, but I very much appreciated that it was an option), and 1Password historically let you just sync your vault file however you wanted to; I had mine in a Dropbox folder to share between computers, for a while, and it worked great. But now with 1Password X, they want you to use their servers. Which obvs is less great, but.. I’m willing to put up with it, personally. (I mean, I was doing the same with LastPass and BitWarden already, anyway).
For me, the big killer feature for 1Password is that it can handle “Authenticator” features; scan the QR code into 1Password and it’ll handle the TOTP code for 2FA in addition to your username/password, etc. It’s so much nicer to have it all integrated together in a single flow than to need to find my phone and launch Authy separately every time I want to log into a web site.
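Worth noting that the QR code being scanned is just an otpauth:// URI carrying the shared TOTP secret, which is why 1Password, Authy or any other authenticator can consume it. A quick sketch with the third-party pyotp package; the URI values here are invented:

    import pyotp

    uri = "otpauth://totp/Example:alice@example.com?secret=JBSWY3DPEHPK3PXP&issuer=Example"
    totp = pyotp.parse_uri(uri)     # same data a phone authenticator reads from the QR code
    print(totp.now())               # the current 6-digit code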
Services like GitHub, Gmail, Apple etc allow you to use recovery keys in place of 2FA for this exact circumstance.
The keys are generated by the service and you must make a note of them ahead of time. And keep them secure obviously. You get about 10 possible keys from GitHub iirc.
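The exact format varies by service, but the idea is just a batch of high-entropy one-time codes you store offline. An illustrative Python sketch (not GitHub's actual scheme):

    import secrets

    def recovery_codes(n: int = 10, chunks: int = 2, chunk_len: int = 5) -> list[str]:
        """n random one-time codes such as 'a3f9c-7kx2e' (format is illustrative)."""
        alphabet = "abcdefghjkmnpqrstuvwxyz23456789"   # drop look-alike characters
        make_chunk = lambda: "".join(secrets.choice(alphabet) for _ in range(chunk_len))
        return ["-".join(make_chunk() for _ in range(chunks)) for _ in range(n)]

    for code in recovery_codes():
        print(code)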
Also, some services (Google, Apple) allow you to perform 2FA via one of multiple options: TOTP, phone call, SMS or other device interaction (in Apple's case).
So if the device gets bricked, find a cheap replacement and Bob's your auntie.
Your comment doesn't add any value to uncovering the root of the issue; it just blames the author without having the full picture.
> The author clearly wasn't using many security precautions prior to being compromised ...
Just because I don't mention the exact security precautions I use in the article doesn't mean that I don't actually use them.
I've spent 12+ years in tech, and started off my career by specialising in networks and security. And all my life I exhibited as much confidence as you show in your comment.
The moral of the story is that if you are in tech, a security professional, or work in crypto - don't take your security for granted. No matter where you are in your career, what your salary is, or how many people report to you. Take time annually, bi-annually, or quarterly to review your online security. Especially if you've been on the internet since the 90s - you might not even remember all the websites you've signed up for or the email accounts you have.
> So, what does this say about Apple security?
If you can explain how the Apple 2FA call was bypassed, that would be helpful. What wasn't mentioned in the article is that I spoke with two Senior Advisors from Apple and they haven't been taking this very seriously, to say the least. "Consider reinstalling the OS" and similar suggestions. It's an obvious next step to take, but it doesn't answer the question of how 2FA was breached. Reinstalling the OS, or taking any other common measures, doesn't prevent similar incidents in the future when the attack is this sophisticated.
New account, just posting on your most recent comment in the hope you'll see it. I wish you could message here.
It could be anyone, of course, but after briefly looking into you and what you said about everything, I have a weird suspicion.
Have you ever knowingly talked with someone, or talked about a site you were competing with at one point, whether recently or years ago? Especially anything related to crypto, because I see too many similarities with something I was researching a lot recently, and found exciting info to say the least.
Anyway, good luck with the investigation. Be careful that your other devices aren't infected with something crazy.
For any significant bitcoin amounts I would buy some cheap laptop and use it as offline storage without ever connecting it to anything. I don't trust hardware wallets because they are an obvious target for attacks, but one can't attack an offline computer.
> buy some cheap laptop and use it as offline storage
A cheap laptop might not have redundancy, so if your SSD dies you might be in for a rough ride. Best case, you can recover your wallet; worst case, you're SOL.
I don't think there is anything wrong in particular with using Chrome's browser sync or password management system at all - so long as the linked account is secure.
The problem here is the weakest link, which I imagine is the recovery email assigned to his Google account (i.e. probably the Yahoo account). That was likely compromised because of the phone/SMS-based 2FA.
That has nothing to do with Google; users should be aware of the security of their recovery emails.
The attack vector in this case appears to have been read access to the user's SMS messages. (Via some kind of a tee rather than a traditional SIM swap, since the user was also getting copies on their phone. My best guess would be that their mobile operator runs one of those crappy services for reading your SMS via a web browser.)
There is no sign that the attacker had a keylogger on the user's laptops, for extracting passwords. If they did, they wouldn't have needed to do account recovery on all these accounts. So the master password of a traditional password manager would not have been compromised.
Email services (e.g. Gmail) are often used as a unique identifier for logins. Want to reset your password? It's often by email.
Storing passwords in the same account creates a single point of failure. If I get into your Gmail account I have total control of everything.
Third-party password stores separate passwords from logins, including for the email account itself.
Now I cannot get complete control of all your accounts just via your Gmail/logins account. At best I can control them for a short period of time, which means I maybe get your Skype account for a day or so.
It's all about making it harder for an attacker to get something valuable. The harder you make it the less likely they are to succeed.
Because the login accounts and passwords are separated, an attacker basically has to do more work.
Addendum: the single point of failure actually becomes the third party password store. Which is good in some respects and bad in others.
Addendum 2: also, iirc, Google stuff doesn't ask if you want to create a password for the account, only if you want to store it. Which encourages password reuse.
Password managers often have the ability to generate a random password up to the maximum allowed length, meaning no two account passwords are ever the same, and they're harder to crack.
Furthermore some of them can automatically update your passwords for you if they've become stale.
So all I have to do is remember (and regularly change) my one super strong pass phrase for the password manager.
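For what it's worth, the generation step mentioned above is the easy part; here's a sketch of the kind of random password a manager produces, using Python's secrets module:

    import secrets, string

    def random_password(length: int = 32) -> str:
        """A random password drawn uniformly from letters, digits and punctuation."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(random_password(64))   # prefer the site's maximum allowed length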
I'm not sure if I missed something crucial (or several things?), but this seems to be entirely an essay on how SMS is an anti-factor in authentication. That they're using Apple devices and services doesn't seem to factor into it.
Inevitable torrent of "It was your fault for X, Y or Z reason".
Nobody is perfect.
Every system has known or unknown vulnerabilities.
We need to be building systems that are forgiving of errors, and store important data redundantly.
I've been wondering a lot about how to truly secure an identity. Is there a way to have a meaningful and secure digital life if all your devices could be compromised and your memory is not perfect? I wouldn't want to trust my entire economic life to any single point of failure.
I've noticed that he has an app called "Whoscall" installed providing Caller ID in the Phone app. I wonder if this has access to Messages on the phone and is able to read/upload SMS?
A quick search online suggests that this is a Chinese app.
It is my understanding that Apple’s API for supplementing Caller ID info (and also optionally blocking calls) requires an app to provide a database of numbers in advance, and the system queries that database locally. The app does not get any access to call history or Messages history.
Setting up a new device is a very vulnerable time. You’re downloading and installing new software and signing into all your accounts. It’s very easy to do the wrong thing, like click through the wrong dialog while you’re blasting through it all.
"A SIM swap scam is a type of account takeover fraud that generally targets a weakness in two-factor authentication and two-step verification in which the second factor or step is a text message or call placed to a mobile telephone." [1]
Well, not your keys, not your money. Isn't that what cryptocurrency advocates always tell us? Being your own bank carries high risks, and in this case the risk caught up with the author.
I've never had an SMS MFA code from Apple. It's always been sent through to other registered Apple devices through their network and OS.
New logins to iCloud etc. always pop up on my MacBook / iPhone with a map image showing me where the request came from and asking if I want to allow it; only then can I get the code.
Apple doesn't use SMS 2FA; more to the point, any message you get from someone not in your contact list gets red text added to the bottom saying "Apple will never ask you for any information over messages".
The Apple 2FA model is built around trusted devices - essentially turning each trusted device into a YubiKey-style hardware 2FA dongle.
Verification prompts also include a GPS-level "this is where the request is coming from", so the attacker needs to know your exact location when they're going after you.
Apple 2FA only uses SMS if there isn't a more secure method available, and it must be explicitly selected when enabling 2FA. Since 2016, the default is trusted Apple devices (running a minimum of iOS 9 or OS X 10.11) or web browsers that are registered to your Apple ID. It can fall back to an automated phone call with two trusted phone numbers.