Beyond the great tech writeup, what I find particularly interesting about this from a geopolitical perspective is the level of restraint.
The malicious actors in this case leveraged iOS zero-days for years, yet they do not seem to have overextended themselves or risked exposure by widening their pool of targets. What I mean is: they clearly could have achieved a massive infection rate by pairing this with a hack of a well-known popular site, or by driving more visits from (say) social media, but instead they chose to run the exploits against a smaller set of intended targets for much longer while remaining undetected.
This, to me, hints at a state actor with specific intent.
Crazy that the Implant was never detected directly on a phone. It's certainly visible to iOS itself, but apparently iOS isn't looking for unexpected processes running on the phone. The Implant was also sending network traffic that no one ever noticed or tracked down, and presumably it had some effect on battery life. But all of this just disappears into the noise of everything else going on on an iPhone.
I wonder if the Implant ever showed up in any of the crash reports collected by iOS and uploaded to Apple.
It's a user-facing option: https://support.apple.com/en-us/HT202100
I'm the tech lead for my company's in-house mobile app crash reporter. Every so often we get reports that make no sense. Sometimes they are corrupted in strange ways, which I just chalked up to bad device memory or storage, but who knows whether something like this was the cause. Semi-related, but I used to have jailbreak detection in the crash reporter SDKs and had to remove it. Just attempting to detect a jailbroken device was in some cases causing the SDK to crash, because the anti-jailbreak-detection code injected into the app was buggy.
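For context, the kind of naive heuristics such an SDK typically relies on looks something like the sketch below. The specific paths and URL scheme are common public examples, not our SDK's actual checks:

    import UIKit

    // A minimal sketch of typical naive jailbreak heuristics, purely
    // illustrative; not the checks from any particular SDK.
    func looksJailbroken() -> Bool {
        // 1. Well-known jailbreak artifacts on the filesystem.
        let suspiciousPaths = [
            "/Applications/Cydia.app",
            "/Library/MobileSubstrate/MobileSubstrate.dylib",
            "/bin/bash",
            "/usr/sbin/sshd"
        ]
        if suspiciousPaths.contains(where: { FileManager.default.fileExists(atPath: $0) }) {
            return true
        }

        // 2. Sandbox probe: a stock app should not be able to write here.
        let probePath = "/private/jailbreak_probe.txt"
        do {
            try "probe".write(toFile: probePath, atomically: true, encoding: .utf8)
            try FileManager.default.removeItem(atPath: probePath)
            return true
        } catch {
            // Expected on a non-jailbroken device.
        }

        // 3. Cydia's URL scheme. Note: on iOS 9+ this only works if the
        // scheme is declared in the app's LSApplicationQueriesSchemes.
        if let url = URL(string: "cydia://package/com.example.package"),
           UIApplication.shared.canOpenURL(url) {
            return true
        }

        return false
    }

Ironically, it's exactly these probes that jailbreak-hiding tweaks hook, and a buggy hook in that path means the "detection" code itself ends up generating the crash.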
I wonder how many people you would need to infect, on average, until you were detected?
I would guess, with this exploit chain and the lack of auditing available for iOS internals, that the actual exploit could run a billion-plus times before detection. The biggest risk is someone noticing the wedged WebKit renderer process and trying to debug it. I bet that causes oddities when hooked up to a Mac with devtools open.
Of the whole thing, the HTTP network traffic is probably by far the biggest red flag, and perhaps 1 out of 10 million people might notice or investigate that. Simple things like never connecting over Wi-Fi (the cell network is far harder to sniff) and routing the traffic, encrypted, via a popular CDN would be a good way to hide it.
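The "cellular only" part would be trivially cheap to implement, too. As a rough illustration (a minimal sketch against Apple's public Network framework, not anything taken from the actual implant), a couple of lines is all it takes to tell which interface the active path is using:

    import Foundation
    import Network

    // Sketch: watch the active network path and report whether it is
    // cellular or Wi-Fi. Any process with network access can do this.
    let monitor = NWPathMonitor()
    monitor.pathUpdateHandler = { path in
        if path.usesInterfaceType(.cellular) && !path.usesInterfaceType(.wifi) {
            print("cellular only: much harder for a victim to sniff")
        } else if path.usesInterfaceType(.wifi) {
            print("on Wi-Fi: a curious user watching their router could spot the traffic")
        }
    }
    monitor.start(queue: .global(qos: .background))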
True, but wouldn't that lead Western three-letter agencies to an account with a credit card attached? Sure, criminals can get stolen cards, but I imagine those have a limited lifespan, and chasing after payment problems is not what the hacker wants to be doing.
For a three-letter agency, legal issues might be the bigger hurdle. E.g. you might have to make sure the data passing through the CDN, and thus the CDN itself, isn't in some other jurisdiction when spying on your own citizens.
Isn't that a problem with the iOS walled garden? Not even security researchers can properly investigate users' devices and detect infections like this, the way they can with desktop operating systems.
I suppose it depends on how "special" they are—do they run standard iOS with normal Safari and actual apps?
NB: I heard some infosec research companies actually get rooted phones from Apple, with some big caveats.
It's on us, the end users.
No, please don't tell me about Android. That's a fucked-up minefield altogether.
We need to remember that there is no such thing as 100% security, and there never will be. While iOS is not open (which is worth nothing to the average user anyway), given all we know, it gets us pretty close.