Contrary to the article, blocking ALL media besides plain text from random senders who aren't in your contacts is exactly what most people would want and should be the default. I don't see any downsides to that approach.
All of this can be on a "Tap to view" basis for the first media received.
Right now, iMessage processes everything in the background upon receiving.
That enables zero-click, instantly-deleted message attacks. The only trace you have is a random iMessage sound or vibration, with no corresponding notification.
Attacks can be timed to arrive at 4am, when most people have Do Not Disturb on.
WhatsApp of all things has this (mostly to save on bandwidth, because WhatsApp was all about efficiency at one point).
There is no real reason to auto-process untrusted data. I would have thought we'd learnt from the years of exploits Outlook dealt with in the late '90s/early 2000s.
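The gating policy proposed above can be sketched roughly like this. All names (`Message`, `on_receive`, `decode_media`) are illustrative, not any real iMessage API; the point is simply that untrusted media stays an opaque blob until the user explicitly taps it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Message:
    sender: str
    text: str
    media: Optional[bytes] = None
    media_decoded: bool = False

# Senders in the user's contact list are treated as trusted.
contacts = {"+15551234567"}

def decode_media(msg: Message) -> None:
    # Real decoding (image/video codecs) would happen here, ideally sandboxed.
    msg.media_decoded = True

def on_receive(msg: Message) -> str:
    """Decide what to do with an incoming message *before* any media parsing."""
    if msg.media is None:
        return "show_text"            # plain text from anyone is fine
    if msg.sender in contacts:
        decode_media(msg)             # known sender: parse eagerly, as today
        return "show_media"
    return "tap_to_view"              # unknown sender: defer all parsing

def on_user_tap(msg: Message) -> None:
    """Only an explicit tap lets untrusted media reach the parsers."""
    decode_media(msg)
```

The key property is that `decode_media` is never reached for an unknown sender without a user action, which is exactly what turns a zero-click exploit into a one-click one.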
Sometimes my dad sends me photos over WhatsApp, and I have noticed that they appear in my Photos app before I have opened/viewed the actual WhatsApp message. I assume this is happening because I have given WhatsApp access to my Photos. But it does appear that attachment/image processing is happening via WhatsApp without my control, without my viewing the message and its attachments.
If they think my friend has sent a picture, or a business has sent an invoice, they're likely to click it.
Especially in the 3 receiving situations that bb123 outlined (1, 3 and 4) you're not likely to have the contact info, or it's plausible that the person could be using another phone to send that message.
Now - I know to check the sender, don't click strange links, all of that. I run trainings on how to avoid phishing and improve personal & organizational digital security. But it comes down to the fact that people are using technology as a tool.
Apple made headway with BlastDoor but it's clearly not good enough.
Perhaps this idea could be extended to treating content differently for all messages from unverified identities. Getting people to verify identities in an end to end encryption system is a huge problem. This could provide a conceptual hook that would mean that unverified contacts were untrustworthy contacts. This would be a state, rather than something that you get nagged about from time to time.
Otherwise an attacker will usually be able to figure out a way to fake a message from someone already in your contact list.
Of course iMessage doesn't do identity verification which is why it does not have effective end to end encryption in the first place. So they would have to solve that, perhaps larger problem, first.
A small way to reduce attack surface: set up iMessage with just your iCloud email address instead of a phone number. Phone numbers are becoming increasingly useless.
> In fact, Citizen Lab researchers and others suggest that Apple should simply provide an option to disable iMessage entirely.
There's a checkbox in Settings > Messages that does exactly this? It seems strange they published this.
> Phone numbers are becoming increasingly useless.
Not really; there are a ton of government services that require you to have a phone number (depending on where you live). I don't see any real suggestion for an alternative to having a phone number, if nothing else to receive notifications. You can't really rely on iMessage, WhatsApp, Signal and similar services; you need one system that you're sure will cover 98% of all people. Third parties can't even integrate with many of these services.
You could use email, but I don't really see how that's any better, and many seniors will use SMS, but not email, to any great extent.
SMS is still the only unified messaging service you can be sure that all your friends and family will have.
I use voip.ms and have SMS forwarded to my email. I can also reply via email. I only use a phone number for services that require one. Family and friends I will use email, iMessage, or Signal.
Nope, when I'm on the go, I certainly have a way better communication using phone calls than whatever VOIP du jour.
Phone numbers, like emails, are very robust and reliable, interoperable, not centralized to one entity, and the quality of service vs cost ratio is excellent.
Not to mention text messages:
- they work no matter if the person is using whatsapp, telegram, signal or the new hype stuff
- no GAFAM is collecting my text history to sell me ads
- they require no internet connection
There are 3 things that we must absolutely cherish and preserve in this race for tech: cash, emails and phone numbers.
They are a beacon of stability in this sea of ever moving innovation greed.
And I say that while I'm thinking about setting up an IPFS website, compiling a Python 3.10 beta to test it, and buying a secondary e-ink screen for my laptop. I'm not technophobic.
> Email and SMS aren't reliable, you've no way to know if they've been read.
Most people do not want this feature. 9/10 of my iMessage contacts turn off read receipts; I bet the number would be similar on Facebook/Whatsapp if they allowed it.
> Email and SMS aren't reliable, you've no way to know if they've been read.
For me, ephemerality, one-shot, and unidirectionality are characteristics, not issues.
> SMS is unencrypted so someone's harvesting your data.
There is not a single entity that is getting all of it, which is the most important to me. Encryption is nice, but for most of my communications, that's not the most important feature.
> They require a cell tower connection, that's only 1 step away from an internet connection, probably 0 in many cases.
I'm regularly in situations where the phone works, but not internet. On the move, or in the country side.
> Cash and Phone numbers are trivial to steal.
Sure, and so is a bike. But I don't always want to take the bus.
Buy a data-only subscription, and use Google Voice or some sort of PBX powered app to still be able to receive regular phone calls.
Preferably I’d want a really basic voice only, open source PBX powered app for iOS that I could use. Then I could get me a data-only plan and SIM.
Caveat: I still need Norwegian BankID to work with my SIM though. I dunno if any of the data-only plans available in Norway support BankID, or if you need a regular subscription like I have now in order to use that.
> Caveat: I still need Norwegian BankID to work with my SIM though.
Same in India, Banking & Payment apps need to verify that the SIM i.e. Phone number is indeed the one associated with the bank account and so they send SMS in the background at random intervals.
I don't do real-time communication, so I had the lowest-tier prepaid carrier plan just for this purpose. But the oligopolies suddenly decided to remove SMS from the low-tier plans, and all my payment apps are now deactivated!
Meanwhile scammers continue to use phone numbers (SIM) bought dime a dozen with fake identity cards[1].
WhatsApp should never be forgiven for making phone numbers the flawed identity of a living person. It's disappointing that Signal continued with it.
It is a convenience that is difficult to live without, I can give you that. But so many other services we can't imagine living without require a cellular number. Accepting a flawed authentication mechanism and legal-but-abusive privacy breaches is what has kept them up and running.
This threat demonstrates how difficult it is to keep ourselves safe from hacks. We can keep our bank accounts and one day have to deal with fraud recovery, or we can ask them to stop; some banks still do 2FA activation via a voice call.
Until banks cease to dominate and control our finances, we should at least do what we can to protect ourselves from their incompetence.
Banks can and do rollback fraud and theft in most cases; that’s the advantage of working in a system supported by law and regulation. The alternative is trusting that every component in your crypto tech stack has perfect security (which is impossible) or else risk losing all your money in a manner that can’t be reversed. I’ll choose banks, thanks.
And that's fine; the majority of the current generation will stick to traditional banking out of fear. Maybe it is safer that way, you are right. But let's not ignore the fact that crypto technology is actually safer as a whole: the method of transaction is orders of magnitude more hardened, with private keys and signatures, and also more flexible.
BTW, cryptocurrencies and smart-contract networks do support rollback, as shown in the Ethereum vulnerability that was exploited years ago. The irreversibility is a feature. We figured it is safer to adopt a push approach to transactions, like when paying with cash, rather than the pull approach used with debit/credit cards, and that making the transaction irreversible leads to less fear from the creditor and promotes fluidity. If reversibility is desired, multisig and escrow exist. They are as rarely used as on eBay, because for most transactions we don't care. Nothing is perfectly safe, you are right, but humanity, despite stickiness to what is tolerable, has always eventually adopted less-understood but more enabling technologies when they prove superior in so many respects.
I know both T-Mobile and AT&T frown upon this and have been known to shut down accounts that do it. Data-only is for non-phone accounts in their eyes, and anyone trying to circumvent that is committing some sort of fraud (in their eyes).
"... Citizen Lab researchers and others suggest that Apple should simply provide an option to disable iMessage entirely."
You can do this already. If you "manage" your iphone with Apple Configurator you have fine-grained control over every little thing it does. You can disable imessage (and many other things like the app store, etc.)
You can disable iMessage just by going into the Settings and turning the “iMessage” toggle to off. You don’t need to supervise the device or install a profile.
For the general population this probably applies, but in the case of this exploit it looks like it's targeted at specific individuals. If you were someone being targeted by an oppressive government, for example, you might be a bit more cautious when clicking on suspicious-looking incoming media so I think the suggestion has value.
You have made an argument against _never_ parsing media from untrusted _SMSs_.
This is iMessage, and instead of _never_ parsing, it would require user action to parse. That will cut out the majority of the no-click exploits and make iPhones safer for most people.
This of course doesn't protect against spear phishing. But it should give Apple time to fix their shit.
Can you war-dial attack with these? Seems like it would be super easy for a script kiddie to just start at 111-111-1111, send message, increment by 1, repeat. Maybe narrow it down to valid area codes and what not, but seems like a super low budget thing to do.
"Robotext spammers are also targeting group messages by using automated programs to send thousands, even millions of group texts to random phone numbers with the hope that somebody will take the bait and respond."
Also, some users give random apps access to their address book for whatever reason, and then there is a whole list of known-good emails and numbers to spam.
Well, send from what? Every iMessage comes from an account with an Apple ID, so I presume stolen credentials would be the only way to really do this, adding to the cost.
If you know the email address that is used for the Apple ID, you can send it a message without it being in your contacts. You can also send a text to a phone number via email, based on the carrier and knowing how to structure the address. So it's not impossible to do this at all; no stolen credentials necessary.
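The carrier email-to-SMS addressing mentioned above looks roughly like this. The gateway domains below are commonly cited examples for US carriers; the real list varies by carrier and changes over time, so treat it as illustrative rather than authoritative.

```python
# Commonly cited US carrier email-to-SMS gateway domains (may change over time).
GATEWAYS = {
    "att": "txt.att.net",
    "tmobile": "tmomail.net",
    "verizon": "vtext.com",
}

def sms_gateway_address(number: str, carrier: str) -> str:
    """Build the email address that a carrier forwards to a phone as SMS."""
    digits = "".join(c for c in number if c.isdigit())
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]               # strip the US country code
    if len(digits) != 10:
        raise ValueError("expected a 10-digit US number")
    return f"{digits}@{GATEWAYS[carrier]}"
```

Any mail server can then deliver to that address, which is why no credentials for a messaging service are needed to reach a phone.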
1. There's no reason why a threat actor would have to send you 3-4 messages per day. Of the exploits I've seen, they only need to send one. Sending 3-4 messages per day just unnecessarily increases the risk of getting caught (ie. the target getting suspicious and asking on hacker news whether they're getting hacked)
2. There's no reason why the message has to contain sketchy links. They could very well disguise messages as ads/notifications for well known businesses, political organizations, or from random people who got the wrong phone number.
3. There's no reason why the attacker can't erase any trace of the initial message after your device is infected, so unless you're staring at your phone 24/7 it's very easy to miss the message.
If I were sneaking a payload in, and I had different exploits for different OS versions, I would disguise it exactly as spam.
Pretending to be a business, or a random person with a wrong number, and then DELETING IT, is a notable indicator of compromise.
I know this isn't how Pegasus works, but I'm sure there are more exploit kits being sold in the world. Some may not be as sophisticated, and may rely on spraying and praying with different exploits.
> If I were sneaking a payload in, and I had different exploits for different OS versions, I would disguise it exactly as spam.
Right, but the point is that GP seems to have been tipped off by the "sketchy links", rather than the spam itself, and that there are far better ways to compose your spam texts than ones with sketchy links.
> Pretending to be a business, or a random person with a wrong number, and then DELETING IT, is a notable indicator of compromise.
It depends on the nature of the exploit. I was operating under the assumption that "0 click" means the exploit gets run as soon as the phone receives it, which would allow for the exploit to clean up after itself without alerting the owner, unless the owner was staring at the phone the exact moment the message came in.
I got corrected last time this topic came up. I originally thought Messages was part of the OS and not a pre-installed userspace app. However, if it's in userspace, why is it such a vulnerable vector for compromising the phone? Is there some privilege-escalation component to this that I haven't read about?
iMessage is one of few apps that have broad permissions to execute code in response to notifications.
For other apps, like Telegram, the server can send a predefined notification message.
For iMessage, when you get something even from someone outside your contacts, its daemon invokes specific code to handle the message, and its attachments.
Whilst this doesn't help if someone opens the app, it does at least change this from a zero-click attack to a one-click attack.
(This is also another example of Apple not following its own app store rules. It has privileged access to frameworks.)
No. This attack is specific to iMessage, since iMessage can access SpringBoard (the singleton class, not SpringBoard.app), while apps downloaded from the App Store cannot.
It's not exactly specific to the image preview code, but rather the code that handles the notification when receiving an iMessage.
The attack mentioned in the Wired article[1] relies on iMessage asking the sandboxless Springboard[2][3] to deserialize a maliciously crafted field, included in the incoming iMessage, to escape the sandbox. This specific vulnerability doesn't appear to apply to other apps.
It might be, it’s just that using Safari would turn a zero-click attack into a one-click one (click on my shady link). (Also, WebKit runs with a different sandbox that may require special effort to escape out of.)
A bit off topic, but when did we ever expect integrated apps to be restricted to App Store rules? They're literally sold as part of the device, not as an app through the App Store.
Objective-C has bounds checks and lengths built into NSData, NSArray, and NSString, so many of the buffer-overflow techniques likely won't work against it. However, images and video seem to hit C++ code, and from all of the past CVEs this seems to be a giant attack surface over and over again.
I'm surprised this code isn't being rewritten in something like Rust, but perhaps there is more going on, like the plist serialization attacks that end up decoding esoteric classes that contained various bugs.
As a certified member of the Rewrite It In Rust (RIIR) Reaction Force, let me answer this by saying that it’s very, very hard to get the software right, even the second time, and RIIR trades one set of unknowns for another. There are also a huge number of people who are convinced that Rust is a fad, or that C is good enough, or whatever. The same people who swore they could outbrake ABS decades ago. They do not want to learn a new, hard thing, and Rust can be hard at first.
Depends where the vulnerability lies. Plenty of iMessage DoS attacks [1-4] have targeted CoreText (which, I believe, is written in C) and CoreFoundation. I think all of these DoS attacks could be mitigated by not viewing the message and disabling message previews in notifications.
> I’m surprised this code isn’t being rewritten in something like Rust
I imagine the internal discussion is something like "by rewriting it, we will just introduce 100 new bugs. once we squash these last few bugs in the current tried-and-tested C code, it will be bulletproof!"
Being Apple I imagine they would prefer to rewrite it in Swift, but Swift may not be mature enough
The exploit targets existing C code that did image decoding and used CoreFoundation to handle data reads. BlastDoor is Swift-based and likely very memory-safe. What I'm curious about is where the limits of BlastDoor are, such that it couldn't contain this particular attack. I'm under the impression that BlastDoor was built exactly for this type of attack.
Swift has e.g. UnsafePointer if you want to work more directly with memory. Presumably if BlastDoor uses them to work directly with memory then it could still be vulnerable, though I am not sure because I am not very familiar with them. If I was excited about pointers I wouldn't be using Swift...
Some Apple devs still seem to love C and Obj-C (at least the ones my former employer worked with directly) and hate on Swift. Both Swift and Rust can be written to a much higher standard, where the language protects you from stupidity, but only if you give up the past and use them. While you can write pretty good C-ish code (i.e. Linux), it's far too easy to slip up once, and the language does nothing to save your ass.
Some of Apple's OS code is pretty ancient. Switching to Swift or Rust is not necessarily a panacea if you call too many OS routines still in C-ish.
I agree. Swift is a nice language, compared to the alternatives.
I have a few months experience in it, and I can definitely agree that if you're writing Swift-only, it's very nice. The emphasis on values, and value semantics is definitely a differentiator from most other languages.
However, anytime you have to use/interop with an older API designed for Obj-C (for example, AVFoundation), it's much more of a pain. Effectively, you're writing Obj-C in Swift.
If someone is insisting on Obj-C instead of Swift in 2021, I would attribute it to a form of Stockholm Syndrome. Many people form psychological bonds with whatever they are familiar with.
Yes, there are some people who simply prefer Objective-C, but you need to also realize that Swift is still not ready for system-level programming. Analysis tools aren't ready; debugging basically means resorting to printing variables to stderr and praying. The standard library defaults to crashing at runtime for simple float <-> integer conversion bounds errors, which you'd think would be caught statically with more thoughtful design. Still a lot of rough edges.
SwiftUI in particular is excellent and if you can use it you should. But you can’t say Swift in general is ready to replace Objective-C. It’s not.
Swift is not ready, but it's not for those reasons. The real problem is that Swift needs a hefty runtime and is fairly slow due to excessive ARC traffic, plus it has no way of recovering from memory exhaustion. So you can't really use it in the kernel, but it's perfectly fine for writing system frameworks and daemons.
Realistically none of the commonly used systems languages have a mechanism to recover from memory exhaustion. Some pretend to, but if you actually try to use them, yeeeeech
The big missing features (from my perspective) are fixed-size arrays, placement allocation, and the ability to guarantee that no allocations or refcount operations occur in a marked critical section. There’s a lot of other stuff I would like to have, but those are the things I can’t live without.
> The standard library defaults to crashing at runtime for simple float <-> integer conversion bounds errors which you’d think would be caught statically with more thoughtful design.
There are very real ways in which Swift isn't ready for systems programming, but this sure ain't one of them.
- In C and C++, this doesn't trap, it's undefined behavior. Trapping is _always_ better than UB. Are C and C++ "not ready for systems-level programming"? (Yes, but that hasn't stopped people from doing it).
- C and C++ compilers don't catch this statically either by default. They just silently invoke UB (https://godbolt.org/z/seTh9cva6).
- Unlike C and C++, Swift's standard library provides the tools you need to easily do something about it: if you don't want to trap, you can write `Int(exactly: x.rounded(.towardZero))` and get an Int? that is nil if a floating-point value is out of range.
There are rough edges here, but they are much, much less rough than the languages that people routinely use for systems programming. Sibling poster got at some of the real problems that _do_ need to be addressed.
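For readers who don't know Swift, the `Int(exactly:)` behavior described above can be mimicked in a few lines of Python: instead of trapping (Swift's default `Int(x)`) or invoking undefined behavior (a C cast), the conversion returns `None` whenever the float is non-integral or out of a 64-bit range. This is an analogy, not Apple's implementation.

```python
import math
from typing import Optional

# Bounds of a 64-bit signed Int, mirroring Swift's Int on modern platforms.
INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

def int_exactly(x: float) -> Optional[int]:
    """Return x as an int only if the conversion is exact and in range."""
    if math.isnan(x) or math.isinf(x):
        return None
    if x != math.trunc(x):
        return None                      # fractional part: not exact
    n = int(x)
    if not (INT64_MIN <= n <= INT64_MAX):
        return None                      # would overflow a 64-bit Int
    return n
```

The caller is forced to handle the `None` case explicitly, which is the whole point: the failure mode is a value, not a crash or silent garbage.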
One can make the argument that UB allows implementations flexibility on how they define overflow, allowing e.g. -fwrapv and trapping to be permissible based on what your desire is. But it's a fairly weak argument and doesn't detract from the rest of your points.
Yeah, I used to think that, but compilers are already allowed to do whatever they want in non-standard modes. Defining the result wouldn’t prevent implementing fwrapv or trapv. (Also note that neither of these applies to float-int conversions, so let’s not pretend that either helps here).
Swift is by far an easier, safer language, and it has a lot of power and nice functional bits, but _oh my god_ those compile times and the lack of a stable debugger and refactoring tools can make it miserable to work in.
Ever since Swift 4 or so, I feel like we're leaning towards the C++ "everything and the kitchen sink" kind of problems when it comes to features. There used to be fairly obvious "right" or idiomatic ways of doing things, but now it's gotten a lot more complicated: property wrappers, Combine, all the stuff with opaque types, the insane stuff you can do with protocols and protocol extensions, all the fad architectures and patterns. Most Swift code bases I've worked in are highly over-engineered and kind of feel like the developer of the project just learned about X cool thing in Swift and wanted to use it everywhere.
It's amazing how the crash rates of apps I've worked on in Swift are consistently less than 1%; it's easy to learn and very modern-feeling. But gosh, some days I'm just longing for some classic Objective-C spaghetti code that compiles instantly and gives me that great debugger I've come to rely on. And don't get me started on the abysmal autocorrect, code completion, and error messages. Swift still has a ways to go IMO, but I do think it's the choice for an iOS app in 2021; just be thoughtful about which language features you use, and don't go bananas with extensions and protocols.
Low level library stuff like JPEG and PNG parsing/decoding is exactly where Rust could offer massive security wins for a relatively modest change of code. The main requirement would be to ensure that the Rust versions had absolute feature parity with zero regressions.
You are talking about the same company that shoved their proprietary WiFi protocol full of holes straight into the kernel. Judging from the kind of messages it emits, the latter is a true masterpiece by the way.
I assure you that I know enough to at least hold an intelligent conversation on mobile security. On iOS there is no "system level messaging component". (i)Messages are stored in a SQLite database that is protected via entitlements and sandboxing; the Messages app is given the ability to access it legitimately. Attackers can either exploit Messages itself and (via code execution in that process) grab a user's messages, or they can exploit something else (such as the web content process) and then escalate privileges from there to bypass the sandbox.
Pwning the app will only provide access to whatever permissions it has, and we are still sandboxed.
Pwning a kernel module/driver will provide access to everything, whether it's messaging, call logs, pictures, etc.; we are not sandboxed and don't need an LPE exploit.
Does this mean that iMessage evaluates messages as code for some reason? Why on earth would that be the case? It's a foundational security principle to not do that.
And even if they did then why is that so hard to fix?
It’s more like, if you send someone a photo, iMessage will decode the photo and display it. If the imaging library has a bug a maliciously crafted image may be exploitable.
iMessage has more integrations than that too. If you send someone a URL, e.g., the recipient will see a preview of the content.
iMessage does a lot to mitigate the attack surface, but people still get through.
Replacing libjpeg, libpng, the H.264 & H.265 codecs, etc. is a gargantuan task. Even if Apple employs another 200 Rust programmers (who don't exist in the market, so not possible), it would take years before that project is close to finished. So intermediate solutions are necessary until then. A rewrite would also likely introduce other security issues (not memory-safety issues) which would take time to fix.
Rewriting these libraries is probably also a common good, that would be better done through open source initiatives.
There are more than 200 people working on the Rust project itself. Depending on how you define “Rust programmer” there are already companies that employ that number of people individually.
That said you’re not wrong that it’s a gargantuan task that can’t be realistically undertaken, just you’ve really really underestimated the number of Rust developers.
I imagine that iMessage isn't executing the code; rather, the malware is packed into some part of the metadata that some dumb library needs to parse, and some sort of buffer-overflow attack is accomplished. The library is probably assuming the data is safe to parse.
Why aren’t lightweight hypervisors used more outside the public cloud? It seems that would go a long way toward protecting the rest of the device from poorly written C code parsing user input.
Getting an application that's running in a hypervisor to seamlessly, for example, accept deep-link clicks is more complicated for the same reason that they're more secure. That extra boundary is another wall, another interface. And of course that means more complexity for app developers, and more compute cost/battery utilization.
> Virtualisation has a negligible impact on power consumption.
[citation needed] Virtualization on embedded devices with constrained power is a big ask. They are not asking to get rid of all security protections, just pointing out that losing 10-20% of battery life for a marginal security improvement isn't a product winner.
What? How did you derive "we should get rid of all security features" from my reply? I'm saying your idea of using virtualization on-device is not feasible due to the power envelope available vs customer demands.
Similarly to Android, there have been attacks that involve exploiting bugs in the code that parses incoming messages, and then via the exploit you can get remote code execution.
For example (IIRC this was a real bug), if you exploit a bug in the text layout code, you could attack a device by getting a notification to appear on the lock screen - and SMS messages usually trigger a notification
There was a developer who discovered a bug with the XML parser, and wrote a whole blog post about how he was able to cause iOS's security system to malfunction using a specially-crafted XML permissions file and allow his app to do anything he wanted, even escape the sandbox. He kept it secret for years for his private experimentation until Apple patched it by accident, by adding a 5th XML parser to the other 4 for some reason and using that one instead for the permissions.
Are there seriously 5 XML parsers available in iOS? Are they all written in C? Do they validate all of them whenever a bug is found in one? I can't tell if this is some sort of defense in depth or just Swiss-cheese copy-paste…
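The danger of multiple parsers isn't just more bugs, it's parser differentials: if the component that *checks* a document and the component that *acts* on it use different parsers, anything they disagree on can be smuggled through. Here is a deliberately minimal toy demonstration (neither parser is real iOS code): one parser treats `<!-->` as a complete empty comment, the other strips only well-delimited `<!-- ... -->` spans, so the two see different `<key>` entries in the same document.

```python
import re

def keys_lenient(xml: str) -> list:
    """Treats '<!-->' as a complete, empty comment before stripping the rest."""
    xml = re.sub(r"<!-->", "", xml)
    xml = re.sub(r"<!--.*?-->", "", xml, flags=re.DOTALL)
    return re.findall(r"<key>(.*?)</key>", xml)

def keys_strict(xml: str) -> list:
    """Strips only comments with a full '<!--' opener and later '-->' closer."""
    xml = re.sub(r"<!--.*?-->", "", xml, flags=re.DOTALL)
    return re.findall(r"<key>(.*?)</key>", xml)
```

A checker using `keys_strict` would approve the document below (it sees only "allowed"), while code using `keys_lenient` would also honor the hidden entry, which is the shape of the sandbox-escape bug described above.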
Depends on the hack, but the majority seem to be from parsers for various formats, from images to Unicode and text data, etc.
A message has to be able to display so many different types of content. A flaw in any one of those could be exploited. Combine a bunch of flaws together and you suddenly can do quite a bit.