Apple iMessage Zero-Click Hacks (wired.com)
262 points by curmudgeon22 on Sept 6, 2021 | 151 comments



Contrary to the article, blocking ALL media besides plain text from random senders who aren't in your contacts is exactly what most people would want and should be the default. I don't see any downsides to that approach.


I wouldn't want this at all. Just off the top of my head I can think of a ton of use cases this makes hard:

* I ask a seller on FB marketplace to send me some pictures of an item

* I need to send pictures of some documents to my solicitor

* A new friend I've just met in a bar tries to send me her contact card

* My mechanic tries to send me a PDF of the invoice for his work

Sure, there are ways around all of these, but it makes iMessage (or any messaging service) a lot less useful.


All of this can be on a "Tap to view" basis for the first media received.

Right now, iMessage processes everything in the background upon receiving.

That enables zero-click, instantly-deleted-message attacks. The only trace you have is a random iMessage sound or vibration with no corresponding notification.

Attacks can even be timed to arrive at 4 AM, when most people have Do Not Disturb on.
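The gating described above can be sketched as a tiny policy function (a hypothetical illustration in Swift; the type and names are made up, not Apple's actual code):

```swift
// Hypothetical sketch: media from unknown senders is never parsed
// automatically; it sits behind a "Tap to view" placeholder instead.
enum AttachmentPolicy {
    case autoRender   // sender is a known contact: render as today
    case tapToView    // unknown sender: show placeholder, parse only on tap
}

func attachmentPolicy(sender: String, contacts: Set<String>) -> AttachmentPolicy {
    contacts.contains(sender) ? .autoRender : .tapToView
}
```

The point is that no attacker-controlled parsing code runs until the user acts, turning a zero-click bug into a one-click one.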


This a million times.

WhatsApp of all things has this (mostly to save on bandwidth, because WhatsApp was all about efficiency at one point).

There is no real reason to auto-process untrusted data. I would have thought we'd have learned from the years of exploits Outlook dealt with in the late '90s/early 2000s.


Sometimes my dad sends me photos over WhatsApp, and I have noticed that they appear in my Photos app before I have opened/viewed the actual WhatsApp message. I assume this is happening because I have given WhatsApp access to my Photos. But it does appear that attachment/image processing is happening via WhatsApp without my control, without my viewing the message and its attachments.


For those who don't know how to turn it off:

Settings -> Storage and Data -> Media auto-download

> But, it does appear that attachment/image processing is happening via Whatsapp without my control/without my viewing the message + its attachments.

For me at least, on iOS WhatsApp doesn't insert pictures and videos into my photo stream even if I tap "download".


Except people want to view the content.

If they think my friend has sent a picture, or a business has sent an invoice, they're likely to click it.

Especially in the 3 receiving situations that bb123 outlined (1, 3, and 4), you're not likely to have the contact info, or it's plausible that the person could be using another phone to send that message.

Now - I know to check the sender, don't click strange links, all of that. I run trainings on how to avoid phishing and improve personal & organizational digital security. But it comes down to the fact that people are using technology as a tool.

Apple made headway with BlastDoor but it's clearly not good enough.


Perhaps this idea could be extended to treating content differently for all messages from unverified identities. Getting people to verify identities in an end-to-end encryption system is a huge problem. This could provide a conceptual hook that would mean that unverified contacts are untrustworthy contacts. This would be a persistent state, rather than something you get nagged about from time to time.

Otherwise an attacker will usually be able to figure out a way to fake a message from someone already in your contact list.

Of course, iMessage doesn't do identity verification, which is why it does not have effective end-to-end encryption in the first place. So they would have to solve that, perhaps larger, problem first.


Or anyone that you haven’t replied to yet. I text many people who aren’t in my contacts.


A small way to reduce attack surface: set iMessage up with just your iCloud email address instead of your phone number. Phone numbers are becoming increasingly useless.

> In fact, Citizen Lab researchers and others suggest that Apple should simply provide an option to disable iMessage entirely.

There's a checkbox in Settings > Messages that does exactly this? It seems strange they published this.


> Phone numbers are becoming increasingly useless.

Not really, there are a ton of government services that require you to have a phone number (depending on where you live). I don't see any real suggestion for an alternative to having a phone number, if nothing else to receive notifications. You can't really rely on iMessage, WhatsApp, Signal, and similar services; you need one system that you're sure will cover 98% of all people. Third parties can't even integrate with many of these services.

You could use email, but I don't really see how that's any better, and many seniors will use SMS but not email to any great extent.

SMS is still the only unified messaging service you can be sure that all your friends and family will have.


I use voip.ms and have SMS forwarded to my email. I can also reply via email. I only use a phone number for services that require one. Family and friends I will use email, iMessage, or Signal.


So you do still have a phone number, just not a SIM.


Nope. When I'm on the go, I certainly get far better communication quality using phone calls than with whatever VoIP service du jour.

Phone numbers, like emails, are very robust and reliable, interoperable, not centralized to one entity, and the quality of service vs cost ratio is excellent.

Not to mention text messages:

- they work no matter whether the person is using WhatsApp, Telegram, Signal, or the new hype stuff

- no GAFAM is collecting my text history to sell me ads

- they require no internet connection

There are 3 things that we must absolutely cherish and preserve in this race for tech: cash, emails and phone numbers.

They are a beacon of stability in this sea of ever moving innovation greed.

And I say that while I'm thinking about setting up an IPFS website, compiling the Python 3.10 beta to test it, and buying a secondary e-ink screen for my laptop. I'm not technophobic.


Email and SMS aren't reliable, you've no way to know if they've been read.

SMS is unencrypted so someone's harvesting your data.

They require a cell tower connection, that's only 1 step away from an internet connection, probably 0 in many cases.

Cash and Phone numbers are trivial to steal.


> Email and SMS aren't reliable, you've no way to know if they've been read.

Most people do not want this feature. 9/10 of my iMessage contacts turn off read receipts; I bet the number would be similar on Facebook/Whatsapp if they allowed it.


> Email and SMS aren't reliable, you've no way to know if they've been read.

For me, ephemerality, one-shot, and unidirectionality are characteristics, not issues.

> SMS is unencrypted so someone's harvesting your data.

There is not a single entity that is getting all of it, which is the most important to me. Encryption is nice, but for most of my communications, that's not the most important feature.

> They require a cell tower connection, that's only 1 step away from an internet connection, probably 0 in many cases.

I'm regularly in situations where the phone works but not the internet. On the move, or in the countryside.

> Cash and Phone numbers are trivial to steal.

Sure, and so is a bike. But I don't always want to take the bus.


Given your WhatsApp messages are encrypted, no one's getting them.

You've no idea if your SMS ever arrived, or if your email even got into the Inbox of the reader instead of the spambox.


SMS is also exploitable though, right? (Both types of messages go through Messages.app.) And you can't disable SMS entirely, I don't think.


> you can't disable SMS entirely I don't think

Buy a data-only subscription, and use Google Voice or some sort of PBX powered app to still be able to receive regular phone calls.

Preferably, I'd want a really basic, voice-only, open-source PBX-powered app for iOS that I could use. Then I could get a data-only plan and SIM.

Caveat: I still need Norwegian BankID to work with my SIM though. I dunno if any of the data-only plans available in Norway support BankID, or if you need a regular subscription like I have now in order to use that.


> Caveat: I still need Norwegian BankID to work with my SIM though.

Same in India. Banking and payment apps need to verify that the SIM, i.e. the phone number, is indeed the one associated with the bank account, so they send SMSes in the background at random intervals.

I don't do real-time communication, so I had the lowest-tier prepaid carrier plan just for this purpose. But the oligopolies suddenly removed SMS from the low-tier plans, and all my payment apps are now deactivated!

Meanwhile, scammers continue to use phone numbers (SIMs) bought a dime a dozen with fake identity cards [1].

WhatsApp should never be forgiven for making the phone number the (flawed) identity of a living person. It's disappointing that Signal continued with it.

[1] https://twitter.com/Abishek_Muthian/status/14069649600815718...


Data-only subscriptions can still receive SMS messages in my experience, you just can't send them.


And you still need them because some services only provide sms verification.


And we don't need those services. We just want them for convenience.

Data-only plans are a good path to follow; it at least makes it more obvious each time a service clearly wants too much personal data.


Okay then, let me just go close my bank account then. It's just a convenience. :P


It is a convenience that is difficult to live without, I'll give you that. But so many other services we can't imagine living without require a cellular number. Accepting a flawed authentication mechanism and legal-but-abusive privacy breaches is what has kept them up and running.

This threat demonstrates how difficult it is to keep ourselves safe from hacks. We can keep our bank accounts and one day have to deal with fraud recovery, or we can ask the banks to stop; some banks allow 2FA activation via a voice call instead.

Until banks cease to dominate and control our finances, we should at least do what we can to protect ourselves from their incompetence.


Banks can and do rollback fraud and theft in most cases; that’s the advantage of working in a system supported by law and regulation. The alternative is trusting that every component in your crypto tech stack has perfect security (which is impossible) or else risk losing all your money in a manner that can’t be reversed. I’ll choose banks, thanks.


And that's fine; the majority of the current generation will stick to traditional banking out of fear. Maybe it is safer that way, you're right.

But let's not ignore the fact that crypto technology is actually safer as a whole: the method of transaction is orders of magnitude more hardened, with private keys and signatures, and also more flexible. By the way, cryptocurrencies and smart-contract networks do support rollbacks, as shown with the Ethereum vulnerability that was exploited years ago.

The irreversibility is a feature. We figured it is safer to adopt a push approach to transactions, as when paying with cash, rather than the pull approach used with debit/credit cards, and that making transactions irreversible leads to less fear from the creditor and promotes fluidity. If reversibility is desired, multisig and escrow exist. They are as rarely used as on eBay, because for most transactions we don't care.

Nothing is perfectly safe, you're right, but humanity, despite its stickiness to what is tolerable, has always eventually adopted less-understood but more enabling technologies when they prove to be superior in so many respects.


Also, in my experience in the US, data-only SIMs tend to only be available if you have a primary 'regular' account with the carrier.

I haven't had too much luck just being able to get a stand-alone data sim from Verizon, AT&T or TMobile...


I know both T-Mobile and AT&T frown upon this and have been known to shut down accounts that do it. Data-only is for non-phone accounts in their eyes, and anyone trying to circumvent that is committing some sort of fraud (in their eyes).


That seems like an awful lot of effort to go to for what really ought to be a settings toggle.


>SMS is also exploitable though, right

It's less feature-rich, so presumably there's less attack surface.


> email address instead of phone number. Phone numbers are becoming increasingly useless.

Fantastic way to not have people send you messages anymore.

Phone numbers are technically becoming useless, but they are still, practically, by far the dominant choice when someone goes to send you a new message.

We’ve had decades of texting phone numbers to reinforce this.


WhatsApp doesn't rely on SMS protocols, nor does it rely solely on phone numbers, but it is still being exploited quite often.

Instant-Messaging = Worthy target for exploits.

Just like web-browsers get exploited after years of patching.


>whatsapp doesn't rely on SMS protocols

Wrong. WhatsApp relies on SMS for its 2FA OTP.


I don't think that is the context that the commenter was using when they said WhatsApp doesn't rely on SMS.


Well, that has nothing to do with the subject. The subject is zero-click exploits; this is not about authentication.

The point with these apps is that I can get content (a picture, message, video, etc.) onto your local device and have it processed.


"... Citizen Lab researchers and others suggest that Apple should simply provide an option to disable iMessage entirely."

You can do this already. If you "manage" your iPhone with Apple Configurator, you have fine-grained control over every little thing it does. You can disable iMessage (and many other things, like the App Store, etc.)


You can disable iMessage just by going into the Settings and turning the “iMessage” toggle to off. You don’t need to supervise the device or install a profile.


Maybe what they mean is that some or all of the iMessage code is still running on the device even if you disable it?


SMS will still enter through Messages.app if you have SMS enabled through your carrier and plan. But no, iMessage is not used when it's disabled.


iCloud isn't end to end encrypted for the most part - anyone security conscious should be avoiding iCloud in the first place.


Wouldn't not parsing incoming media unless it's from someone in your contacts be a first step that could largely reduce the threat?

Things would stay the same for people in your contact list, but messages from unknown senders would require a tap to load.



A reasonable trade-off would be disabling parsing for people not in your contacts list.


For the general population this probably applies, but in the case of this exploit it looks like it's targeted at specific individuals. If you were someone being targeted by an oppressive government, for example, you might be a bit more cautious when clicking on suspicious-looking incoming media so I think the suggestion has value.


Fair point. The point still stands, though in a weaker way. Even people who say/claim they care make mistakes.

The idea is that contacts can still send you media without prior approval. Do their children never make mistakes? Do their spouses/gardeners/etc.?


That might sound good in theory, but in practice it's unlikely to go well (by default).

Many services, from banks to healthcare, use SMS as a main way of communicating with end users, and many rely on dynamic numbers.

Moreover, spoofing SMS messages is not that hard.

Messaging apps, whether SMS or alternatives like WhatsApp, Telegram, etc., will always offer a powerful vector for infecting devices.


I think banks don't use inline media in their messages too often.


It is part of my job. Many banks do, often embarrassingly so.


You have made an argument against _never_ parsing media from untrusted _SMSes_.

This is iMessage, and instead of _never_ parsing, the proposal is to require user action to parse. That would cut out the majority of the no-click exploits and make iPhones safer for most people.

This of course doesn't protect against spear phishing. But it should give Apple time to fix their shit.


I turned off imessage.

I seem to be under attack lately.

3-4 times a day, random messages sent to iMessage from Gmail addresses or unknown phone numbers with sketchy-looking links in them.


Can you war-dial attack with these? It seems like it would be super easy for a script kiddie to just start at 111-111-1111, send a message, increment by 1, repeat. Maybe narrow it down to valid area codes and whatnot, but it seems like a super low-budget thing to do.


Sure.

https://calleridreputation.com/blog/robotexts-are-replacing-...

"Robotech spammers are also targeting group messages by using automated programs to send thousands, even millions of group texts to random phone numbers with the hopes that somebody will take the prey and respond."

Also, some users give random apps access to their address book for whatever reason; then there is a whole list of known-good emails and numbers to spam.


>Also, some users give random apps access to their address book for whatever reason

"There's a sucker/fool born every minute." --PT Barnum

And there are businesses of all types where that is their sole business model.


Well, send from what? Every iMessage comes from an account with an Apple ID, so I presume stolen credentials would be the only way to really do this, adding to the cost.


If you know the email address that is used for the Apple ID, you can send it a message without it being in your contacts. You can also send a text to a phone number via email, based on the carrier and knowing how to structure the address. So it's not impossible to do this at all. No stolen credentials necessary.
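For the carrier gateway trick mentioned above, the address is just the digits of the number at the carrier's gateway domain (a sketch; this helper is hypothetical, and gateway domains such as vtext.com for Verizon vary by carrier and change over time):

```swift
// Hypothetical helper: build an email-to-SMS gateway address.
// Gateway domains are carrier-specific and not guaranteed to stay valid.
func smsGatewayAddress(number: String, carrierDomain: String) -> String {
    let digits = number.filter(\.isNumber)   // strip punctuation and spaces
    return "\(digits)@\(carrierDomain)"
}
```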


>If you know the email address that is used for the Apple ID, you can send it a message without being in messages.

Though this option precludes the random war-dialing explanation.


I hope that’s their starting point. My number has at least 1+ 0s in it. =)


You'll just be at the end when it rolls over.


I’m also getting these—no idea what the exploits actually are or how they work. Am I theoretically already exploited?


Not really.

1. There's no reason why a threat actor would have to send you 3-4 messages per day. Of the exploits I've seen, they only need to send one. Sending 3-4 messages per day just unnecessarily increases the risk of getting caught (ie. the target getting suspicious and asking on hacker news whether they're getting hacked)

2. There's no reason why the message has to contain sketchy links. They could very well disguise messages as ads/notifications from well-known businesses, political organizations, or random people who got the wrong phone number.

3. There's no reason why the attacker can't erase any trace of the initial message after your device is infected, so unless you're staring at your phone 24/7 it's very easy to miss the message.


Disagree with all 3 points.

If I am sneaking a payload in and I have different exploits for different OS versions, I would disguise it exactly as spam.

Pretending to be a business, or a random person with a wrong number, and then DELETING IT is a notable indicator of compromise.

I know this isn't how Pegasus works, but I'm sure there are more exploit kits being sold in the world. Some may not be as sophisticated, and may rely on spraying and praying with different exploits.


>If I am sneaking a payload in, and I have different exploits for different OS versions, I would exactly disguise it as spam.

Right, but the point is that GP seems to have been tipped off by the "sketchy links", rather than the spam itself, and that there are far better ways to compose your spam texts than ones with sketchy links.

>Pretending to be a busines, or a random person with wrong number, and then DELETING IT is a noteable indicator of compromise.

It depends on the nature of the exploit. I was operating under the assumption that "0 click" means the exploit gets run as soon as the phone receives it, which would allow for the exploit to clean up after itself without alerting the owner, unless the owner was staring at the phone the exact moment the message came in.


So enlighten us: how did you turn off iMessage?


“Settings” -> “Messages” -> Toggle “iMessage” to off. Couldn’t be simpler.


If you're not familiar with NSO Group and Pegasus, I highly recommend episodes 99 and 100 (just released) of the Darknet Diaries podcast.


"Apple hasn't issued a fix for this particular vulnerability"..."new defenses are coming with iOS 15, which will likely come out next month".

That's completely insane, isn't it? Has Apple just given up, or am I missing the scope of this vulnerability?

Also, how can Apple not have better security with such an incredible amount of money in the bank?


It can take time to fix things properly. You don't want to half-arse a mitigation, only for the attacker to bypass it in a week.


I got corrected last time this topic came up. I originally thought Messages was part of the OS and not a pre-installed userspace app. However, if it's in userspace, why is it such a vulnerable vector for compromising the phone? Is there some privilege-escalation component to this that I haven't read about?


iMessage is one of the few apps that have broad permissions to execute code in response to notifications.

For other apps, like Telegram, the server can send a predefined notification message.

For iMessage, when you get something even from someone outside your contacts, its daemon invokes specific code to handle the message, and its attachments.

Whilst this doesn't help if someone opens the app, it does at least change this from a zero-click attack to a one-click attack.

(This is also another example of Apple not following its own app store rules. It has privileged access to frameworks.)


This is not true. With the right setup, you can send a push notification to your app to wake it up and execute code. See this delegate method for example: https://developer.apple.com/documentation/uikit/uiapplicatio....


So, in theory other messaging apps have the same vulnerabilities, but I’d have to open the message to get burnt?


No. This attack is specific to iMessage, since iMessage can access the springboard (the singleton class, not SpringBoard.app), while apps downloaded from the App Store cannot.


Yes. These attacks have also happened to WhatsApp.


Still hard to understand. If it’s just image preview code, why isn’t Safari vulnerable?


It's not exactly specific to the image preview code, but rather the code that handles the notification when receiving an iMessage.

The attack mentioned in the Wired article[1] relies on iMessage asking the sandboxless Springboard[2][3] to deserialize a maliciously crafted field, included in the incoming iMessage, to escape the sandbox. This specific vulnerability doesn't appear to apply to other apps.

[1] https://googleprojectzero.blogspot.com/2019/08/the-fully-rem... [2] https://en.wikipedia.org/wiki/SpringBoard [3] https://iphonedev.wiki/index.php/SpringBoard


It might be, it’s just that using Safari would turn a zero-click attack into a one-click one (click on my shady link). (Also, WebKit runs with a different sandbox that may require special effort to escape out of.)


A bit off topic, but when did we ever expect integrated apps to be restricted to App Store rules? They're literally sold as part of the device, not as an app through the App Store.


iMessage is a pre-installed userspace app that uses frameworks that ship with the OS.


Is it a fair assumption that any code written in C / C++ / Objective-C has a high likelihood of allowing zero-click hacks?


Objective-C has bounds checks and lengths built into NSData, NSArray, and NSString, so many of the buffer-overflow techniques likely won't work against it. However, images and video seem to hit C++ code, and from all of the past CVEs this seems to be a giant attack surface over and over again.

I'm surprised this code isn't being rewritten in something like Rust, but perhaps there are more things at play, like the plist deserialization attacks that end up decoding esoteric classes that contained various bugs.


As a certified member of the Rewrite It In Rust (RIIR) Reaction Force, let me answer this by saying that it’s very, very hard to get the software right, even the second time, and RIIR trades one set of unknowns for another. There are also a huge number of people who are convinced that Rust is a fad, or that C is good enough, or whatever. The same people who swore they could outbrake ABS decades ago. They do not want to learn a new, hard thing, and Rust can be hard at first.


Haha, love the ABS analogy!


Depends where the vulnerability lies. Plenty of iMessage DoS attacks [1-4] have targeted CoreText, which I believe is written in C, and CoreFoundation. I think all of these DoS attacks could be mitigated by not viewing the message and disabling message previews in notifications.

[1] https://habr.com/ru/post/191654/

[2] https://appleinsider.com/articles/15/05/27/bug-in-ios-notifi...

[3] https://blog.zecops.com/research/analyzing-the-ios-telugu-cr...

[4] https://www.macrumors.com/2018/05/09/how-to-fix-black-dot-im...


> I’m surprised this code isn’t being rewritten in something like Rust

I imagine the internal discussion is something like "by rewriting it, we will just introduce 100 new bugs. once we squash these last few bugs in the current tried-and-tested C code, it will be bulletproof!"

Being Apple, I imagine they would prefer to rewrite it in Swift, but Swift may not be mature enough.


Objective-C can run into other issues, though: NSSecureCoding exists for a reason ;)


Which is why I mentioned plists.


Aren't these making it past BlastDoor, written in Swift?


The exploit targets existing C code that does image decoding and uses CoreFoundation to handle data reads. BlastDoor is Swift-based and likely very memory-safe. What I'm curious about is where the limits of BlastDoor lie such that it couldn't contain this particular attack. I'm under the impression that BlastDoor was built for exactly this type of attack.


Swift has, e.g., UnsafePointer if you want to work more directly with memory. Presumably, if BlastDoor uses it to work directly with memory, then it could still be vulnerable, though I'm not sure because I'm not very familiar with it. If I were excited about pointers, I wouldn't be using Swift...
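For context, Swift's unsafe escape hatches look like this (a minimal standalone illustration, not BlastDoor code):

```swift
// Swift is memory-safe by default; withUnsafeBytes/UnsafePointer are
// explicit opt-outs, e.g. for viewing a value's raw bytes:
var value: UInt32 = 0xDEADBEEF
let rawBytes = withUnsafeBytes(of: &value) { buffer in
    Array(buffer)   // copy the 4 raw bytes out of the unsafe buffer
}
// Pointer arithmetic or type punning done inside such a closure is just
// as exploitable as the equivalent C, which is why its use matters here.
```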


It is highly unlikely BlastDoor is using much of those, except when interacting with system frameworks.


No


I wonder if Apple's devs are just going to say, screw it, we'll rewrite the whole thing in Rust with audits and formal analysis the whole way...


Apple had job postings last year looking for Rust developers to rewrite a service that was written in C, so it's not without precedent.

Job posting links are dead now, but there was a reddit thread about it: https://old.reddit.com/r/rust/comments/fkngza/apple_hiring_r...



Rust or Swift. I am not a security expert, but I'd bet that rewriting in either would reduce the number of such embarrassing exploits by at least 10x.

From what I can tell, the combination of unsafe-by-default languages like C/C++/Obj-C and the way the human brain works is Not-A-Good-Combination©. Too many opportunities for error.


Some Apple devs still seem to love C and Obj-C (at least the ones my former employer worked with directly) and hate on Swift. Both Swift and Rust can be written to a much higher standard, where the language protects you from stupidity, but only if you give up the past and use them. While you can write pretty good C-ish code (i.e. Linux), it's far too easy to slip up once, and the language does nothing to save your ass.

Some of Apple's OS code is pretty ancient. Switching to Swift or Rust is not necessarily a panacea if you call too many OS routines still written in C-ish code.


I agree. Swift is a nice language, compared to the alternatives.

I have a few months experience in it, and I can definitely agree that if you're writing Swift-only, it's very nice. The emphasis on values, and value semantics is definitely a differentiator from most other languages.

However, any time you have to use or interop with an older API designed for Obj-C (for example, AVFoundation), it's much more of a pain. Effectively, you're writing Obj-C in Swift.

If someone is insisting on Obj-C instead of Swift in 2021, I would attribute it to a form of Stockholm syndrome. Many people form psychological bonds with whatever they are familiar with.


Yes, there are some people who simply prefer Objective-C, but you need to also realize that Swift is still not ready for system-level programming. Analysis tools aren't ready; debugging basically means you resort to printing variables to stderr and praying. The standard library defaults to crashing at runtime for simple float <-> integer conversion bounds errors, which you'd think would be caught statically with more thoughtful design. Still a lot of rough edges.

SwiftUI in particular is excellent and if you can use it you should. But you can’t say Swift in general is ready to replace Objective-C. It’s not.


Swift is not ready, but it's not for those reasons. The real problem is that Swift needs a hefty runtime and is fairly slow due to excessive ARC traffic, plus it has no way of recovering from memory exhaustion. So you can't really use it in the kernel, but it's perfectly fine for writing system frameworks and daemons.


Realistically, none of the commonly used systems languages has a mechanism to recover from memory exhaustion. Some pretend to, but if you actually try to use them, yeeeeech.

The big missing features (from my perspective) are fixed-size arrays, placement allocation, and the ability to guarantee that no allocations or refcount operations occur in a marked critical section. There’s a lot of other stuff I would like to have, but those are the things I can’t live without.


Yeah, I agree, I'm just saying that Linus won't use it unless he feels like it gives him that "control" ;)


> The standard library defaults to crashing at runtime for simple float <-> integer conversion bounds errors which you’d think would be caught statically with more thoughtful design.

There are very real ways in which Swift isn't ready for systems programming, but this sure ain't one of them.

- In C and C++, this doesn't trap, it's undefined behavior. Trapping is _always_ better than UB. Are C and C++ "not ready for systems-level programming"? (Yes, but that hasn't stopped people from doing it).

- C and C++ compilers don't catch this statically either by default. They just silently invoke UB (https://godbolt.org/z/seTh9cva6).

- Unlike C and C++, Swift's standard library provides the tools you need to easily do something about it: if you don't want to trap, you can write `Int(exactly: x.rounded(.towardZero))` and get an Int? that is nil if a floating-point value is out of range.

There are rough edges here, but they are much, much less rough than the languages that people routinely use for systems programming. Sibling poster got at some of the real problems that _do_ need to be addressed.
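A tiny illustration of that last point, using only the Swift standard library:

```swift
// Int(huge) would trap at runtime; Int(exactly:) instead reports
// failure as nil and lets the caller decide what to do about it.
let huge = 1e100
let lossy = Int(exactly: huge.rounded(.towardZero))     // nil: out of range
let fits  = Int(exactly: (3.99).rounded(.towardZero))   // Optional(3)
```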


One can make the argument that UB allows implementations flexibility in how they define overflow, allowing e.g. -fwrapv and trapping to be permissible based on what you desire. But it's a fairly weak argument and doesn't detract from the rest of your points.


Yeah, I used to think that, but compilers are already allowed to do whatever they want in non-standard modes. Defining the result wouldn’t prevent implementing fwrapv or trapv. (Also note that neither of these applies to float-int conversions, so let’s not pretend that either helps here).


I can totally relate to people who love the simplicity of C, but Obj-C? I couldn't fathom why anyone would prefer this mess to Swift.


Swift is by far an easier language, a safer language, and has a lot of power and nice functional bits, but _oh my god_ those compile times and the lack of a stable debugger and refactoring tools can make it miserable to work in.

And ever since Swift 4 or so, I feel like we're leaning towards the C++ "everything and the kitchen sink" kind of problems when it comes to features. There used to be fairly obvious "right" or idiomatic ways of doing things, but now it's gotten a lot more complicated: property wrappers, Combine, all the stuff with opaque types, the insane things you can do with protocols and protocol extensions, and all the fad architectures and patterns. Most Swift codebases I've worked in are highly over-engineered and feel like the developer of the project just learned about X cool thing in Swift and wanted to use it everywhere.

It's amazing how the crash rate of apps I've worked on in Swift is consistently less than 1%; it's easy to learn and very modern-feeling. But gosh, some days I'm just longing for some classic Objective-C spaghetti code that compiles instantly and gives me that great debugger I've come to rely on. Oh, and don't get me started on the abysmal autocorrect, code completion, and error messages.

Swift still has a ways to go, IMO, but I do think it's the choice for an iOS app in 2021; just be thoughtful about which language features you use and don't go bananas with extensions and protocols.


Objective-C is really a fairly simple language. Swift has complexity approaching that of C++.


Low level library stuff like JPEG and PNG parsing/decoding is exactly where Rust could offer massive security wins for a relatively modest change of code. The main requirement would be to ensure that the Rust versions had absolute feature parity with zero regressions.


You are talking about the same company that shoved their proprietary WiFi protocol, full of holes, straight into the kernel. Judging from the kind of messages it emits, the latter is a true masterpiece, by the way.


Well, I would dare to say iMessage isn't the biggest target to convert to Rust.

At the end of the day, it is still an app with app-level permissions, sandboxing, etc.

Kernel/kernel modules are far more likely targets, as they allow vastly more access than an app.


I would pick the baseband processor as the biggest target. https://www.theiphonewiki.com/wiki/Baseband_Device

Reasons for worry about the baseband code:

a) Code is written by third parties

b) Apple is more restricted in applying defence-in-depth (customised security CPU changes like PAC, customised compiler changes, etcetera)

c) Harder to detect intrusion?

Versus reasons not to worry so much:

z) Baseband has more limited access to information

y) Harder to make an exploit survive a reboot - mainly useful as part of a chain of exploits into the main CPU?

x) Baseband code is device-specific - the attacker has to know which device they're attacking


It's a great target considering that a lot of other exploits go through the kernel just to get access to your iMessages.


If you successfully exploit a kernel vulnerability, you don't need an iMessage bug....

you can pretty much access whatever you want.


My point is that the thing you would often do after that is go after people's iMessages anyways.


I think you misunderstand how things work on a modern mobile OS.

You don't need to access the Messages app in order to get access to the messages.

It's the opposite, actually: the messaging app needs permission to use the system-level messaging component.


I assure you that I know enough to at least hold an intelligent conversation on mobile security. On iOS there is no "system level messaging component". (i)Messages are stored in a SQLite database that is protected via entitlements and sandboxing; the Messages app is given the ability to access it legitimately. Attackers can either exploit the Messages app itself and (via code execution in that process) grab a user's messages, or they can exploit something else (such as the web content process) and then escalate privileges from there to bypass the sandbox.


Again, if you get a kernel exploit, you don't need access to the messaging app or to escalate privileges.

You're already root.

You can access any component without much restriction.

How the data is stored has nothing to do with this.


This is correct. My point is that you would want to access messages data after doing that.


So let's circle back to the original question.

Pwning the app will only provide access to whatever permissions it has, and we are still sandboxed.

Pwning a kernel module/driver will provide access to everything, whether it's messaging, call logs, pictures, etc. We are not sandboxed; we don't need an LPE exploit.

I think the priority is clear.


Exploiting the kernel is obviously always desirable, but it's not always possible.


Unless someone applies the squeaky wheel rule. The thing causing everyone to look at you gets pushed to the top of the list.


iMessage has the huge bonus that it's exposed to the internet. The kernel is much harder to actually get close to. iMessage? Send them a text.


Does this mean that iMessage evaluates messages as code for some reason? Why on earth would that be the case? It's a foundational security principle to not do that.

And even if they did then why is that so hard to fix?


It’s more like: if you send someone a photo, iMessage will decode the photo and display it. If the imaging library has a bug, a maliciously crafted image may be able to exploit it.

iMessage has more integrations than that, too. If you send someone a URL, for example, the recipient will see a preview of the content.

iMessage does a lot to reduce the attack surface, but people still get through.


Can Apple not rewrite the parsing components in a memory-safe language?


Replacing libjpeg, libpng, the h264 & h265 codecs, etc. is a gargantuan task. Even if Apple employed another 200 Rust programmers (who don't exist in the market, so it's not possible), it would take years before that project came close to finishing. So intermediate solutions are necessary until then. A rewrite would also likely introduce other security issues (not memory-safety issues) which would take time to fix.

Rewriting these libraries is probably also a common good that would be better done through open-source initiatives.


There are more than 200 people working on the Rust project itself. Depending on how you define “Rust programmer” there are already companies that employ that number of people individually.

That said you’re not wrong that it’s a gargantuan task that can’t be realistically undertaken, just you’ve really really underestimated the number of Rust developers.


iMessage is also the only messaging app that triggers all its decode functions upon notification, because of its special privileged status.


I imagine that iMessage isn't executing the code; rather, the malware is packed into some part of the metadata that some dumb library needs to parse, and some sort of buffer overflow attack is accomplished. The library is probably assuming the data is safe to parse.


Why aren’t lightweight hypervisors used more outside the public cloud? It seems that would go a long way in protecting the rest of the device from poorly written C code parsing user input.


The answer is probably plain old complexity.

Getting an application that's running in a hypervisor to seamlessly accept, for example, deep-link clicks is more complicated for the same reason that it's more secure: that extra boundary is another wall, another interface. And of course that means more complexity for app developers, and more compute cost/battery usage.


For one, Apple's chips lacked hardware support for virtualization until last year.


On device? The vast majority of people don't give a flying fuck about privacy; for them, the decrease in battery life would not be worth it.


So we should get rid of all security features then? No memory management, no code signing, no HTTPS, no certificate pinning?

Virtualisation has a negligible impact on power consumption.


> Virtualisation has a negligible impact on power consumption.

[citation needed] Virtualization on embedded devices with constrained power is a big ask. They are not asking to get rid of all security protections, just pointing out that losing 10-20% of battery life for a marginal security improvement isn't a product winner.


What? How did you derive "we should get rid of all security features" from my reply? I'm saying your idea of using virtualization on-device is not feasible due to the power envelope available vs customer demands.



Am I safer if I disable iMessage or can the zero-click hacks exploit the Messages app through SMS?


How do zero-click hacks work?

Does iMessage accept arbitrary code that it can execute?


Usually by exploiting holes in some code that does parsing, e.g. for images. Here [1] is a nice write-up.

[1] https://googleprojectzero.blogspot.com/2020/04/fuzzing-image...


Similarly, homebrew on the PSP exploited libjpeg or libtiff, so this is one of those vectors that we're still dealing with 16 years later.


Similarly to Android, there have been attacks that involve exploiting bugs in the code that parses incoming messages, and via the exploit you can get remote code execution.

For example (IIRC this was a real bug): if you exploit a bug in the text layout code, you could attack a device just by getting a notification to appear on the lock screen - and SMS messages usually trigger a notification.


There was a developer who discovered a bug with the XML parser, and wrote a whole blog post about how he was able to cause iOS's security system to malfunction using a specially-crafted XML permissions file and allow his app to do anything he wanted, even escape the sandbox. He kept it secret for years for his private experimentation until Apple patched it by accident, by adding a 5th XML parser to the other 4 for some reason and using that one instead for the permissions.



Are there seriously 5 XML parsers available in iOS? Are they all written in C? Do they validate all of them whenever a bug is found in one? I can’t tell if this is some sort of defense in depth or just swiss cheese copy-paste…


Yes, no (some are C++), lol no, no to both: it’s more that people just kept adding more for each domain


Depends on the hack, but the majority seem to come from parsers for the various formats a message can contain: images, Unicode text data, etc.

A message has to be able to display so many different types of content, and a flaw in any one of those parsers could be exploited. Combine a bunch of flaws together and you can suddenly do quite a bit.


this header field is always 4 bytes, no need to parse length


Are these exploits patched in iOS 14.7.1 or earlier?



