iMessage Key Verification (support.apple.com)
239 points by simpleintheory | 121 comments



This seems somewhat similar to Matrix's (and other apps') approach of comparing keys to verify identity (plus, I guess, some extra hardware requirements and attestation).

I'm interested to see what the uptake is among users, because even though Matrix has done a fair amount to smooth this process, verification is still a pretty large source of friction from what I can tell, and I'm not completely sure how it could be made easier. I guess the idea here is that once you verify a contact, that verification syncs to their other devices, but in theory Matrix also does that, and in practice I still see some friction.

It's possible Apple's implementation will just be better, or that they'll rely on attestation to such a degree that they'll be able to skip some other friction points. But even with the public verification setup (which gets rid of the problem of needing to verify devices at the same time as the person you're talking to), I'm still slightly skeptical that users are going to copy and paste a code into their messaging app to verify contacts. My experience is that even popping up a button and asking "do you and your friend see the same emoticons?" is too much work for a lot of users.

Maybe I'll be wrong. And I guess ideally if iOS users get used to doing this, they might be more tolerant of doing the same thing in other messengers too.


Trevor Perrin, who co-designed the Signal Protocol, made the point that most people don't have to do this. If a few people do, an adversary won't know whether the target is verified or not. If they MITM, they might be discovered instantly, which gives the entire herd protection.

- https://www.youtube.com/watch?t=2001&v=7WnwSovjYMs


Not a great argument IMO. If only 0.1% of people check the keys, the attacker may be just fine with the 0.1% chance of being discovered – especially if there are no consequences for them.


Only for mass attacks. A targeted attack will encounter the risk of the attacker being exposed.

Think journalists, politicians, public figures


> A targeted attack will encounter the risk of the attacker being exposed.

What "risk" is there? I'm not aware of illegal spying by intelligence or law enforcement agencies having ever had any adverse consequences for them, in any country, at any point in history.


Risk of revealing their attack and losing whatever exploit made it possible, if nothing else. The stuff Citizen Lab has published is also creating problems for some of the companies selling spyware.


I don't mean to be snippy, but this is kinda what the whole Cold War was about. There were constant consequences for the spying. For domestic examples, I think we can point to Watergate, the Iran-Contra affair, and the Snowden leaks. I have some more recent examples, but I think mentioning them would result in arguing and move away from the topic at hand. You may not agree that the consequences were severe enough, but there were consequences. I think there's also a strong bias in that consequences come later (often months or years after) and get less attention, so we often aren't even aware of them. But when consequences do happen, it means the rage machine was effective, even if far from optimal. Worth noting that there is a danger in this lack of attention to consequences, since it can lead to apathy and thus actually enable consequence-free actions in a self-fulfilling prophecy.


What consequences did the Snowden Leaks have?

I mean for the intelligence agencies – not for Edward Snowden. I'm of course aware his life has been destroyed. But what consequences were there for the people and institutions responsible?


This contains a decent summary, including some laws: https://www.eff.org/deeplinks/2023/05/10-years-after-snowden...

I'd mention there are two big but abstract consequences.

1) The leaks significantly harmed international relationships, and the result gave much more ammunition to political adversaries like China and Russia. People argue that this is a consequence of Snowden's leak, but that's like arguing that a mass shooting was only problematic because the news informed everyone. In a way yes, but it's not like those people would be alive if the news didn't report... It's not the real problem, even if you wanted to argue over-sensationalism.

2) It seriously galvanized the battle for encryption and paved the way for the subsequent rapid rise in usage of tools like Signal, and for more funding and energy going into building tools like Matrix and many others. Google's Project Zero was certainly influenced by this event.

While I get that these are more abstract, they are certainly consequences and certainly nothing to be scoffed at. That's another problem with the perception of consequences: often they are more subtle or abstract. But subtle or abstract doesn't mean any less impactful, just more difficult to trace. More opaque. We don't have a counterfactual to prove that these things wouldn't have happened without the leaks, but I'm certain the timing and degree would have been different. Do you think the world would be different had he not released them? I don't think this is an easy question to answer, because it requires being exceptionally detailed and paying very close attention to a lot of events.


There were several instances where a person of interest suspected something was wrong with their phone and, knowing they could be a target of government surveillance, promptly submitted their devices to security companies. That's how some zero-days were uncovered by Apple.


It might still be an acceptable risk. Most governments around the world probably don’t care that much if it’s discovered they are surveilling a journalist or lawyer.

In most of the world everyone knows that journalists and lawyers are being monitored.


I think you and notpushkin are perhaps missing some of the "economic" angles on this. It's not just about the what, it's about the how. High value targets are highly likely to be following decent practices and at least staying up to date on software. Which implies that cracking iMessage would require a 0-day, of which there are only so many at any given time, and which Apple will eliminate forever the moment they discover it. Part of the point of highly targeted, careful attacks is to stretch those out; it's not just about keeping the target from knowing (though that's not irrelevant), it's also about future targets.

So as with a lot of matters in intelligence work, it's subject to cost-benefit calculations. If using it against a given target means they are incredibly unlikely to notice, and it can then be used again and again, it doesn't take much target value for a government to deploy it, which pushes towards more mass use. On the opposite end, if using it means it will immediately become useless, then the expected target value has to at least exceed the market cost (which itself will rise if 0-days are being consumed faster than they're produced) every time. In between is a spectrum of less or more use. Apple wants it as far towards "use it and lose it" as possible, and Trevor Perrin's argument makes sense here: even a relatively small increase in the percentage of "use it and lose it" amongst the population could significantly change the mean weighted cost for threat actors.

If they could know for sure whether a given countermeasure was deployed, that'd reduce the cost again, but if they can't, there is indeed a population benefit. It's like a minefield: there don't have to be that many mines scattered around to really hurt people's willingness to cross it!
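To put rough numbers on that intuition, here's a toy model (mine, with entirely made-up figures) of how the population's verification rate moves the expected cost per use of a 0-day:

    # Toy model, illustrative numbers only: if a use of the exploit is
    # noticed (and thus burned), the attacker eats its replacement cost.
    EXPLOIT_COST = 2_000_000  # hypothetical market price of an iMessage 0-day

    def expected_cost_per_use(verification_rate: float) -> float:
        # Simplification: a verifying observer == exploit detected and burned.
        return verification_rate * EXPLOIT_COST

    for rate in (0.0, 0.001, 0.01, 0.05):
        print(f"{rate:>6.1%} verify -> expected cost per use "
              f"${expected_cost_per_use(rate):,.0f}")

Even at a 1% verification rate, every deployment carries a five-figure expected cost, which is the minefield effect: mass use stops being free, while a single high-value target may still be worth it.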


> High value targets are highly likely to be following decent practices and at least staying up to date on software.

Not even close. The vast majority of journalists, lawyers, activists, even public figures, don't have the knowledge to secure their digital lives, don't have access to an expert to do it for them, and in many cases aren't even fully aware of the nature of the threat (beyond some vague idea along the lines of "I'm probably being monitored").

On top of that, it has been my experience that people who don't understand threat mechanics on a deeper level (such as active MITM attacks) quickly stop following whatever best practices they have been trained to adhere to (in this case, peer key verification), because those practices have no observable effect to them and without actually understanding what's going on, it's hard for them to see what the point is.


>Not even close. The vast majority of journalists, lawyers, activists, even public figures, don't have the knowledge to secure their digital lives, don't have access to an expert to do it for them, and in many cases aren't even fully aware of the nature of the threat (beyond some vague idea along the lines of "I'm probably being monitored").

Citation needed. Because everything I have ever seen is that iOS users almost all leave on autoupdate and the move to the latest version is the overwhelming majority, very rapidly. Seriously, look at adoption each release over the last 5 years on a site like Statista [0] or wherever, or the various ones aimed at developers. If you want to claim that people at higher risk aren't part of the 60-85%, I'd honestly be curious to see your numbers. Note I said "decent" not "best" practices. Whatever its flaws, mixed incentives, and issues (which are real), Apple has expended significant effort in making the normal default paths provide an OK security baseline for regular people and discouraging leaving them. Which isn't even something a lot of HNers like! If anything, I'd be unsurprised if HN types lag in some respects, because we want more control and to do things outside the well-trod path. I've jailbroken a lot; is that something most people do? No.

In this specific case, the minimum needed to avoid a zero-day exploit is (by definition) merely to always have the OS updated and all security patches applied while staying firmly within the walled garden. Which it's objectively clear the supermajority of regular people do. If you just go with the defaults and let Apple update your device whenever Apple wants, then it's a truism that anything you get hit by is something Apple hasn't yet patched. And in turn, anything that raises the population-level probability that a 0-day actually gets noticed and potentially reported raises the risk of using the 0-day. The whole point of this feature is that it lets a normal person who doesn't necessarily understand threat mechanics go "huh, that's funny" and then maybe say so on their social media/blog/wherever, at which point if even one person who follows them (and we're talking journalists or other types with enough influence to get targeted by major threat actors, right?) recognizes what's going on and says "quick, call Apple/a security researcher/tell HN", now it's out there.

>because those practices have no observable effect to them

Literally the entire point of this new feature is to create an observable effect of tampering. Kind of a weird statement in context.

----

0: https://www.statista.com/statistics/565270/apple-devices-ios...


Turning on automatic updates, while a great choice for the vast majority of iOS users, does not protect against sophisticated adversaries who use zero-day exploits. The fact that everyone is already on the latest version (they’re not, because of phased rollout, but it’s not too relevant here) means that an exploit that has value targets the latest iOS by default.

Opt-in additional protections, such as Lockdown Mode, which aren’t perfect but do help, are rarely enabled by those who need them, despite being marketed to people who are targeted. Part of this is that it’s opt-in, but part of it is that a lot of the people targeted aren’t journalists: they’re the spouses of political leaders, or random government leaders, who don’t have a good security posture, nor do they have people managing their devices for them to create one.

Also, do note that just because someone appears to have tampered with a conversation doesn’t mean you’ve burned your 0-day: it provides no indication of how they did so.


> Because everything I have ever seen is that iOS users almost all leave on autoupdate and the move to the latest version is the overwhelming majority, very rapidly.

Outside of the US, Android's market share dwarfs iOS's. And most people's Android phones are from vendors that stop providing updates, including security updates, after 2 years or so. There are hundreds of millions, if not billions, of vulnerable Android phones out there.

> Literally the entire point of this new feature is to create an observable effect of tampering.

Which, since most connections aren't tampered with, isn't actually observable in practice for most people. So the next time they meet someone new, they might not even bother asking them to do key verification.


Has warrantless mass surveillance really become so normalized that such gross violation of people's rights is just casually brushed aside like some unsurprising everyday occurrence, so common it can't be helped? Lawyers and journalists are people too, they're citizens, human beings with rights and they don't deserve to be "monitored" by anyone. If "everyone knows" they're being monitored, why is nobody doing a thing about it?

All these three letter agencies operate in the darkness and away from the public eye. That's where they belong, because what they do to their own citizens is supposed to be unconstitutional. If they've really gotten so brazen as to operate openly instead of clandestinely and are still enjoying complete impunity then there really is no hope left.


WhatsApp has this scheme. And to my knowledge, there has never been a report of verification failing.

If an adversary were discovered 0.1% of the time, there would be at least one person on a support forum with the text of the error that occurs when it fails...


I get the warning "your contact key has changed..." all the time with various contacts on WhatsApp. What am I supposed to do? There are no clear next steps to debug or report suspicious activity. In such cases, users get trained to become complacent about such warnings.


You're supposed to meet up with that contact and verify the new key.

If even 0.1% of users did that, it would be 2 million verifications. And yet nobody has ever announced they have found a non-matching key.


The argument is context dependent, as is essentially anything related to security. Key verification isn't for most people and can even create more noise, since normal people frequently change phones. But the average threat environment isn't the only threat environment. In higher-risk settings (politicians, journalists, etc.) verification rates are expected to be higher than 0.1%, because these people frequently are more knowledgeable about security practices and/or have better advisors than the general public. While the context isn't explicitly stated, I think it is fair to assume that most can infer this, and that if not, someone can explain it. Often, things that appear ridiculous but are common practice stop looking ridiculous once context is considered (that doesn't make them good, just less absurd, and it becomes understandable why the ridiculous-seeming thing is done).


Apple’s installed base is so large (around 2 billion devices) that if 0.1% of them verified their keys, that would still be a useful deterrent.


Tangential, but Keybase (and later Keyoxide [1]) with their “social proof” mechanics are a more human-friendly way to verify the encryption keys. I kinda wish Matrix had that integrated, too.

[1]: Here's my Keyoxide page for example: https://keyoxide.org/alexander@notpushk.in


Is Keyoxide based on the Keybase codebase, or is it a new development?

I quite enjoyed Keybase back in the day, but then they pivoted to being a crypto wallet and were ultimately acquired by Zoom (a move I understand less every day, since they obviously gave up on the bold promises of end-to-end encryption they made back in 2020).


It's completely new, and based entirely on PGP. The proofs are stored with your PGP key, so it's also decentralized. It doesn't have any amenities like chat or file storage – it only maps your social networks to a given PGP key. But I believe third parties (like Matrix) could step up and support it natively – all the benefits of Keybase, none of the drawbacks.
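For the curious, the mechanism is simple: each proof is an OpenPGP notation named proof@ariadne.id on the key's self-signature, pointing at a location that must reference the key's fingerprint back. A minimal sketch of extracting the claimed proof locations (the notation values here are hypothetical; a real verifier reads them from the key via an OpenPGP library and then fetches each location):

    # Sketch: Keyoxide identity proofs are OpenPGP notations of the form
    # "proof@ariadne.id=<location>". The values below are hypothetical;
    # a real tool reads them from the key's self-signature.
    PREFIX = "proof@ariadne.id="

    notations = [
        "proof@ariadne.id=https://fosstodon.org/@example",  # hypothetical
        "proof@ariadne.id=dns:example.org?type=TXT",        # hypothetical
        "preferred-email-encoding@pgp.com=pgpmime",         # unrelated
    ]

    for n in notations:
        if n.startswith(PREFIX):
            print("claimed proof location:", n[len(PREFIX):])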


Not entirely clear, but it doesn't seem so from the launch post's verbiage:

https://blog.keyoxide.org/keyoxide-launch/

Code lives here if you want to dig:

https://codeberg.org/keyoxide


> I'm interested to see what the uptake is among users

My suspicion is that it'll be quite low for many years, for two reasons:

- It requires a recent iOS and macOS version on all of a user's devices. Still got an old iPad lying around somewhere that doesn't receive software updates anymore? No key verification for you. (In a similar way, Apple has been making older devices obsolete by preventing Notes sync in some previous iOS version. This is only an issue because all of these apps are not updateable outside of the core OS.)

- It requires users to be logged in to the same Apple ID for iCloud and iMessage.

The former will only change once these old devices completely die – I just don't think many users will value key verification enough.


iCloud Advanced Data Protection also requires modern hardware and won’t enable if you have outdated/vintage stuff logged in on your Apple account.


This is super annoying. I don't have iCloud stuff enabled on my old iPad, but it works just fine as a media streaming device for the kids. I want to enable Advanced Data Protection, but it won't let me until I replace the perfectly good iPad :(


Couldn't you just make a dedicated account for that device? Wouldn't that be preferable anyway? You could still use family controls.


Any idea what happens if you sign out, enable it, and try to sign back in? Is the error as cryptic as I might imagine?


IIRC, it's just a generic failure to log into iCloud


You can remove that iPad from your account if you don't need iCloud for it


We do use it for TV+ shows like Snoopy in Space, so alas, I'm stuck with it being registered


Unregister your account, register a new iCloud account, and add it to your iCloud family. You can have 6 people in your family account.

I have ADP on my devices but no one else in my family has it on and we’re all in the same iCloud family.


I don't like how Matrix does it. I tried to get very technical people to use it and they struggled. Plus, it assumes you have enough trust in your contacts to share your device details with them, instead of just a unique identifier.


The article is a little short on details, but it's not immediately clear to me how Apple's UX will differ. This is exactly my concern, I agree that Matrix's setup can be difficult for new users, but I'm not sure what a good UX for this even is. Apple's non-public verification method seems to be (at least at first glance) almost identical to what Matrix is doing.

If Apple rolls out a similar system and it works or they're able to identify pain points and make it easier to use, then cool. Maybe Matrix can take pointers from the UI if that's the case. But I wonder if that will be the case, or if Apple's implementation will suffer from the same UX problems that Matrix's does.


This Apple support page describes how the UI/UX for both automatic and manual verification presents itself to the user.

https://support.apple.com/en-us/HT213465


Same thoughts, I guess. This describes the process, and the process (at least for on-device comparison) sounds almost identical to what Matrix does today. I'm not sure what code is going to be compared; Matrix uses emoji, which I've found helps a lot, and neither Apple article specifies what they'll use.

But :shrug:, unless I'm not seeing a broader picture or there are details here that I don't understand, it does kind of sound like this is going to have the same problems that Matrix has. Although, to be fair, I've run into validation errors and syncing problems with Matrix before that theoretically Apple won't have? So maybe it'll be the same UX, but slightly more stable? Although, also to be fair, Matrix doesn't require me to update all of my computers in order to verify an identity, and Apple seems to be saying that users will need to do that, so I'm not necessarily taking it as a given that Apple's system won't have its own share of annoying caveats.

It's a tiny bit disappointing, my takeaway from Matrix is that this all needs to be easier to do, and I was mildly hopeful that there would be some UI takeaways from Apple's implementation.

Or maybe people will just be more tolerant if it's Apple asking them to jump through the hoops instead of an Open Source messenger? If that's the case, and if the UX really is basically the same as Matrix's, maybe some of that tolerance will bleed over to Matrix as well.


Here’s my verification key, so you know what they look like, since you were wondering what would be shown/compared:

APKTIDJ_J3S3UhVqZKCX5EgKYnh9ez4pO9Hsr5YWv_5pXF5GUcLA


Ow. Okay, I take it back, unless there's something I'm missing then Matrix's system is better than this.

I'm sorry, I just cannot imagine asking a non-technical person to copy and paste that into a messenger and then needing to help them debug which letter they left off. It's hard enough to get them to validate "I see a cat, a dog, a horse, a pizza, and a basketball."

I guess I'll wait and see what happens with it, but I'm going to temper my expectations about people adopting this.


To be clear, that code is only for offline verification. For live verification (akin to Matrix's emoji), Apple has you compare an 8-digit code.
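Neither article spells out how that short code is derived, but conceptually it's a small commitment to key material that both ends can compute and read aloud. A purely illustrative sketch (not Apple's actual scheme) using a truncated hash over both parties' public keys:

    import hashlib

    # Illustrative only, NOT Apple's actual derivation: both devices hash
    # the same key material and render 8 digits for humans to compare.
    def short_code(key_a: bytes, key_b: bytes) -> str:
        material = b"".join(sorted((key_a, key_b)))  # same order on both ends
        digest = hashlib.sha256(material).digest()
        return f"{int.from_bytes(digest[:8], 'big') % 10**8:08d}"

    alice_pub = b"alice-public-key-bytes"  # placeholder key material
    bob_pub = b"bob-public-key-bytes"      # placeholder key material
    print(short_code(alice_pub, bob_pub))  # both phones would display this

The tradeoff is the usual one: 8 digits are easy to compare over a call, but short codes only work when compared live; the long code above is what makes asynchronous/public verification possible.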


Okay, fair, that's a lot better then. Still not ideal, but... yeah, my guess would be then that maybe people mostly do live verification.

I don't know, we'll see what happens. Maybe I'll be wrong and the system will take off.


They both suck; TOFU is bad. Apple should use their central PKI to certify that contact with their iCloud ID.

TOFU is a good idea when you don't want a central party arbitrating identities, as with federated Matrix. It makes little sense with Apple.


You know, that is a good point. Far be it from me to encourage Apple to do more attestation -- to be clear, UX problems aside I don't want a centralized identity management service.

However, from Apple's perspective, this does kind of feel like the worst of both worlds. People have to update their devices to the most recent iOS version, apparently being signed in on an old device just turns off verification, and apparently it's not even per-device?

So if that's the case, Apple has all of the downsides of attestation right now. Why also take on the downsides of keys and in-band verification as well? It does seem like it would be simpler for them to make this something tied into iCloud that gets set up only by the person who wants to be verified. Again, I'm not saying I want that, I don't want Apple arbitrating identities, but... why wouldn't they? Why have a system with both sets of downsides?

I'm sure there are caveats I'm not thinking of, but it does seem like they could probably do this in a less federated/decentralized manner?


Is it any different from copying a URL? That said, it might be formatted as a URL, like a TOTP URL.


In theory no, but in practice, wow, do people seem to struggle with keys. Matrix's current system went emoji-only because even numbers seem to be too much for people. And arguably, even emoji are too much for people.

There are larger UX problems surrounding when/where to copy and what the caveats are, but even ignoring those, people do seem to struggle with copy and paste, especially across devices. I'm not sure what the solution is.


For those interested, the underlying technology for this feature is transparency logs. Some technical details for iMessage's approach can be found here:

https://security.apple.com/blog/imessage-contact-key-verific...

The same technology powers WhatsApp's key transparency:

https://engineering.fb.com/2023/04/13/security/whatsapp-key-...

Less than a month ago the first workshop on "transparency systems" was held at ACM CCS:

https://catsworkshop.dev

Shameless plug: I'm one of the designers of the Sigsum public transparency log, as well as System Transparency, a security architecture intended to bring transparency to the reachable state space of a remote running system.
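For a flavor of what a transparency-log verifier actually does, here's a minimal RFC 6962-style Merkle inclusion-proof check (a generic sketch; Apple's and Sigsum's actual encodings differ):

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def leaf_hash(leaf: bytes) -> bytes:
        return h(b"\x00" + leaf)  # domain-separate leaves from inner nodes

    def verify_inclusion(leaf: bytes, index: int, proof: list[bytes],
                         root: bytes, tree_size: int) -> bool:
        # Recompute the root from the leaf and its audit path.
        node, pos, size = leaf_hash(leaf), index, tree_size
        for sibling in proof:
            if pos % 2 == 1 or pos + 1 == size:
                node = h(b"\x01" + sibling + node)  # we're the right child
            else:
                node = h(b"\x01" + node + sibling)  # we're the left child
            pos, size = pos // 2, (size + 1) // 2
        return node == root

    # Tiny 3-leaf demo: prove the third logged key update is in the tree.
    leaves = [b"key update 0", b"key update 1", b"key update 2"]
    h0, h1, h2 = (leaf_hash(l) for l in leaves)
    node01 = h(b"\x01" + h0 + h1)
    root = h(b"\x01" + node01 + h2)
    assert verify_inclusion(leaves[2], 2, [node01], root, 3)

The point is that a client can cheaply check that the key directory shows everyone the same key material, instead of trusting the server blindly.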


Is this to kill off Beeper?

EDIT: no, it wasn't. It was announced a year ago per other comments...


I think this feature already exists in other chat apps like Matrix and Telegram. We’ll see how Apple’s implementation compares, but it’s a great addition nonetheless.


Why does Apple couple iMessage with the rest of iCloud? Why are iCloud and iCloud Keychain required for secure iMessage to function? That seems like a poor design choice to me.

Someone who cares about their communication security deeply enough to do contact public key verification would likely want to turn off iCloud syncing of iMessage across multiple devices. They are also unlikely to have the same iCloud account on multiple devices. In such cases, what's the value of requiring iCloud Keychain to be turned on?


iMessage Key Verification does not require that you use 'iCloud for iMessage' (aka iCloud message syncing); it just requires that you're signed into iCloud and have iCloud Keychain enabled. Your critique is still valid, but I think it's important to note that you're not required to store all your actual encrypted iMessage content in iCloud.


Maybe it's a dark pattern, like safari downloads defaulting to iCloud instead of the phone.


Given how hard it is to find your downloads locally on your phone, I think most users want sharing with other devices by default.

Makes sense to download stuff and have it be in downloads on the laptop or iPad.

I don’t think that’s really that dark.


How safe is the contact that is uploaded to iCloud? How safe is the contact from being modified by some app on your iPhone? The contact containing the verification code seems to be one of the weaker links in this whole thing.

If Mallory can change the verification code in the contact to their own, the communication between Alice and Bob is no longer protected.


They can't just change the verification code, and it's not based on the fields of the contact card. You can think of it as a fingerprint for their iMessage public key, the one used to encrypt messages end-to-end. If the key with which your phone encrypts iMessage payloads has changed, it indicates that the conversation is being intercepted.

WhatsApp supports this too, see "Verify Security Code" on this page: https://faq.whatsapp.com/820124435853543

So does Signal: https://support.signal.org/hc/en-us/articles/360007060632-Wh...

So does Telegram: https://telegram.org/faq#q-what-is-this-39encryption-key-39-...


But the verification code is stored in the contact card, so the parent comment still stands. Anything that can access contacts, e.g. apps or iCloud (since Contacts are not part of Advanced Data Protection i.e. E2E encryption), can modify the verification code in the contact used by Messages for validation.


According to https://security.apple.com/blog/imessage-contact-key-verific..., the actual verified hash of the account key is stored in an end-to-end encrypted CloudKit container and merely linked to from the contact card.


Oh interesting, that is not at all clear from the Contacts UI, which shows it like any other field.


The iOS Contact APIs shouldn't allow modifying this.

You can also try exporting the contact to a vCard .vcf file using the Share Contact button. I believe the iMessage key verification info won't be included. (But as you noted the most important thing is that it can't be modified)


That’s just good UI design. Make complex stuff look dead simple.


Are you saying the iOS contacts API lets apps read and write the verification code? That seems like terrible design. What need would a 3P app have for that capability?


No, there is no way for apps to read and write this information.


It doesn’t matter; the keyword is “in transit”, but since it requires iCloud Keychain, it also means it’s compromised with respect to LE. Also, most people who have iCloud will also back up their messages…

So it’s nice that it’s encrypted in transit, but since iMessage is Apple-only and requires… see above!


iCloud Keychain compromised with LE? As in law enforcement?

How so? iCloud Keychain is E2EE with a key derived from your device password/passcode.


Doesn’t sharing your public verification code also kind of build a network of “proof” that this is you? As in, sure, Billy knows it’s you, but a court will have that same evidence, making the denial “that’s a spoofed message” harder.


It looks like I would need the following for this to work:

To use iMessage Contact Key Verification, you’ll need: iOS 17.2, watchOS 9.2 and macOS 14.2 on all devices where you’ve signed in to iMessage with your Apple ID

Unfortunately my work iMac isn’t on Sonoma, it’s on Monterey. I suppose I could log out on that machine, but still, it’s a bit of a shame older versions aren’t supported.

Am I reading the requirements correctly? Does this mean that for CKV to work, all devices’ OSes need to be updated? Or will it refuse to do CKV on any device if even one device is unsupported?


Yep, I have a 2017 iMac at home and won’t be able to use CKV unless I sign out of iCloud on that machine.


That’s what I wondered!


There is a huge opportunity here for Apple to do a proper chain of trust.

“You want to talk to Adam, but you haven’t verified their keys yet. However your contacts Anna and Derek have confirmed Adam’s identity”


This is such a privacy leak that I have a hard time thinking you're serious.

“You want to talk to Family Lawyer D. Ivorstein, but you haven’t verified their keys yet. However your contact Wife has confirmed D. Ivorstein’s identity”


Perhaps you could address that issue through explicit "family" or "friend" groups, where people can choose whom they wish to verify for. That would limit the usefulness of the trust network but prevent the privacy issue you mention.


It would have to be optional sharing. The UX would need a bit of work.

I would trust my technical friend with their chain of trust, but not my hairdresser.


I'm always confused by this. This merely validates that Anna, at the time, thought that was Adam's number. What else?

It does not guarantee it's Adam reading.


If you have a cryptographic primitive and a robust system to protect it (secure hardware, biometric auth), and you can confirm digital identity in real life, then you can be reasonably assured Adam is reading. Chain of integrity.


Just like blockchain, meatspace applications of digital chains of trust require too much benevolence. At the end of all the state-of-the-art crypto is Grandma pushing a button.


Speaks to the robustness of the legal system, with code simply a crude layer on top of it.


At its best, it verifies that "Adam" is using a device that a trusted 3rd party verified as having been added to the network by Adam himself. So it is more about trusting devices than individual interactions.


Quite disappointingly, this requires being logged in with iCloud as well as iMessage on the same device, so I can't use it on my work computer (I have different Apple IDs at work and home). I don't really see why the two need to be tangled together.


If you use two different Apple IDs on two different devices, how does that prevent you from using iMessage Key Verification? As far as this system is concerned, you are essentially two different people, both of whom can have key verification on independently (sort of the point).

The only scenario where this might break is if you log into personal accounts on work devices or vice-versa. I think that’d be ill-advised…


You sign into your personal Apple accounts on your work computer? Seems like a very bad idea to mix work and personal.


I think many people end up in that situation.

An Apple account is required in many situations (e.g. you want to download something from the Mac App Store, or you want Find My Mac), but Apple doesn't cleanly support multiple accounts on any of their devices (and they probably have no incentive to do so).

It's also a PITA to have single devices with single accounts. For instance, 2FA is a pain, and you can't use features like Sidecar.

All in all, Apple is really bad at this and makes you jump through hoops if you intend to have clean separation between your work and personal accounts.


That's exactly the problem, in a nutshell. Everything is tangled in a big ball of yarn with Apple:

- Theoretically the iTunes/App Store/TV account is independent of iCloud – except that it's tangled with Apple Podcasts.

- iMessage used to be mostly standalone (iCloud sync was explicitly optional!) – but now it's tied to iCloud via contact key verification.

- Books is a weird mix of iCloud (for media) and iTunes (for purchases).

- Having my device as a trusted login factor is a complete mess: I still haven't figured out what makes or doesn't make a device "capable of generating authentication codes".

- iTunes subscriptions can somehow only be managed on an Apple device or in iTunes – and logging in for that purpose messes up podcasts (see the first point).

At least on macOS, it's possible to make a second account and log in to most of these cleanly, but it's still a hassle compared to e.g. Google's seamless support for multiple accounts in almost all of their products.


The solution I landed on is having 2 iCloud accounts in the same “family” so things can be shared, but in a controlled manner.


That's exactly what I'm not doing (iMessage is ok for me, all my iCloud data definitely not), hence no contact key verification for me.


I mean, you literally are - you've signed into iMessage with your personal account on a work device.

I know it's not iCloud, but it's functionally the same as iCloud with all the checkboxes disabled.


How are the two the same? iCloud automatically logs you into iMessage, but the reverse is not true.

Getting more access beyond iMessage requires another authentication step (it’s definitely not just “enabling more checkboxes”), and most importantly, iCloud Keychain won’t even be touched without the required second factor (usually another device’s passcode on the same iCloud account).


I wonder if the timing of this is in response to Beeper Mini gaining access to the iMessage network?



It's unlikely. They had pre-existing infrastructure for this via their Apple Messages for Business[0] program, which includes all the backbone to show that an account is verified (including the checkmark and info tab). This appears to be an extension of that for consumer accounts.

[0]: https://register.apple.com/resources/messages/messaging-docu...


No, this feature has been in beta builds for quite a while.


> gaining access to the iMessage network?

I wouldn't say they "gained access to iMessage network".

They figured out a weakness in Apple's authentication that allowed a user with a fake serial number to authenticate. Apple is slowly making it stricter and checking the serial numbers better (my opinion/guess).


I think it would be hard to push out such a feature in such a short span. Beeper’s userbase is tiny and not an immediate security threat to Apple.


Nor a distant security threat.


Unlikely.


> Published Date: December 11, 2023


As mentioned in another comment, it was announced well before this month: https://www.macrumors.com/2022/12/07/new-imessage-apple-id-s...



It's been a few months since that post. Looking forward to the complete technical walkthrough of their implementation.


I wonder, why now? Smells like a warrant canary.


How do you mean? As in, Apple is requested to share info, and when they do so, they modify data that would cause the key verification to fail, notifying any contacts of the suspected user via notification?


Yes, and they are not allowed to disclose that, but they are allowed to ship new security features.


Seems like Apple is tacitly acknowledging that sophisticated actors have successfully been man-in-the-middling iMessage users. I wonder if they have clear evidence of that since I haven’t seen any coverage on this.


Or! They're trying to get ahead of legislation (looking at the UK) by presenting a fait accompli.


I don’t get that impression. Given that iMessage is such a high value target, I wouldn’t be surprised either way, but adding more security features is not a tacit admission of compromise.


The attack is that anyone can make an iMessage account and pretend to be your friend ("new phone who this"); this feature is how you prevent that.


Agreed with the sibling comment. To quote Apple, this feature can "detect sophisticated threats against iMessage servers". Essentially it's to protect against state-sponsored attackers MITMing you. Doing that probably also requires the attackers to have access to some root CA private keys, so it's a very small pool.


I don’t think that’s the attack this feature aims to prevent.

Rather, it aims to prevent someone who has compromised iMessage infrastructure from pulling something dodgy with keys.
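In toy form (my illustration, not Apple's protocol), the attack class and why out-of-band comparison catches it:

    import hashlib

    # Toy illustration: a compromised key directory hands Alice the
    # attacker's public key for "bob"; fingerprint comparison exposes it.
    def fingerprint(pub: bytes) -> str:
        return hashlib.sha256(pub).hexdigest()[:16]

    bob_real_key = b"bob-real-public-key"  # placeholder key material
    mallory_key = b"mallory-public-key"    # attacker's key

    directory = {"bob": mallory_key}       # server-side key substitution

    served = directory["bob"]
    # Without verification, Alice just encrypts to `served` and can't tell.
    # With verification, Alice and Bob compare fingerprints out of band:
    if fingerprint(served) != fingerprint(bob_real_key):
        print("fingerprint mismatch: possible interception")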


Sucks that it requires iCloud Keychain to be enabled, and also removing your Apple ID from any legacy Macs and iPhones. Wish they explained the reasons for this, because I'm having a hard time seeing one.


Because iCloud Keychain sync is how each person can have one key for others to verify, rather than a separate key for every device they are logged into iMessage with.

It’s described in more detail here:

https://security.apple.com/blog/imessage-contact-key-verific...

(This is end-to-end encrypted, by the way; Apple can’t get at people’s private keys.)

And this is a new protocol, so no surprise it doesn’t work with older operating systems. (It doesn’t say you have to remove your Apple ID completely, just log out of iMessage.)


Probably requires a hardware security module for generating secure attestation keys.


Removing your Apple ID from legacy Macs and iPhones is needed because they don't have the hardware security module. So rather than having a broken iMessage experience on those devices, they decided it's better for you to just unregister.


Beeper?


In fact, it seems like this would make Beeper and similar approaches an order of magnitude harder.

And I could totally see Apple making non-verified contacts' bubbles a different color sooner rather than later...


The contact key verification feature predates Beeper’s iMessage support.


So, like, is “sophisticated threats” a passive-aggressive way of saying “Beeper”?


This was announced last year, way before any of Beeper’s shenanigans.

https://www.macrumors.com/2022/12/07/new-imessage-apple-id-s...


Beeper was released at least two years prior to that.


And that’s not what I’m referring to by their shenanigans.

Specifically, I’m talking about their Beeper Mini spoofing of Apple devices, not the other Beeper setup that forwarded content to/from an actual Mac.


I don't think this is related. This is just public key verification, the kind that other E2E messaging apps already had. I don't know the specifics, but as I understand it, Beeper Mini could implement this too if they wanted to.


Theoretically the two are orthogonal. There's no reason why Beeper wouldn't be able to implement contact key verification; there's no hardware attestation component to it, as far as I can tell.

Practically, the added complexity of having to integrate with iCloud Keychain certainly won't help.



