Hacker News new | past | comments | ask | show | jobs | submit login

These are fascinating. It would be very interesting to know what the character and subject matter of the infecting sites were.

Outside of the great tech writeup, what is particularly interesting about this, to me, from a geopolitical perspective is the level of restraint.

The malicious actors in this case leveraged iOS zero-days for years, yet do not seem to have overextended themselves or risked exposure by overly widening their pool of targets. What I mean is: they clearly could have achieved a massive infection rate by combining this with a hack of a well-known popular site, or by funneling more visits from (say) social media, but instead they chose to run the exploits against a smaller set of targets for much longer while remaining undetected.

This, to me, hints at a state-actor with specific intent.




> Earlier this year Google's Threat Analysis Group (TAG) discovered a small collection of hacked websites. The hacked sites were being used in indiscriminate watering hole attacks against their visitors, using iPhone 0-day.

Crazy that the Implant was never detected directly on a phone. It's certainly visible to iOS itself, but apparently iOS isn't looking for unexpected processes running on the phone. As well, the Implant is sending network traffic that no one ever noticed or never tracked down. And presumably it has some effect on battery life. But all of this just disappears into the noise of everything else going on on an iPhone.

I wonder if the Implant ever showed up in any of the crash reports collected by iOS and uploaded to Apple.


It would be fun to check, but the attackers could just turn it off.

It's a user-facing option: https://support.apple.com/en-us/HT202100


Of course they could, but according to Google the Implant made no attempt to hide itself.

I'm the tech lead for my company's in-house mobile app crash reporter. Every so often we get reports that make no sense. Sometimes they are corrupted in strange ways, which I just chalked up to bad device memory or storage, but who knows whether something like this was the cause. Semi-related: I used to have jailbreak detection in the crash reporter SDKs, but I had to remove it. Merely attempting to detect a jailbroken device would in some cases crash the SDK, because the anti-detection code injected into apps on jailbroken devices was buggy.
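For context, the common detection pattern is a set of filesystem heuristics: probe for paths that only exist on jailbroken devices. A minimal sketch of that pattern (in Python for illustration; a real SDK would do this in Objective-C/Swift, and the path list here is a hypothetical example, not our actual checks):

```python
import os
from typing import Callable, Iterable

# Well-known jailbreak artifacts. Real SDKs check more signals
# (Cydia URL scheme, fork() success, sandbox write tests).
JAILBREAK_PATHS = [
    "/Applications/Cydia.app",
    "/Library/MobileSubstrate/MobileSubstrate.dylib",
    "/private/var/lib/apt",
]

def looks_jailbroken(paths: Iterable[str] = JAILBREAK_PATHS,
                     exists: Callable[[str], bool] = os.path.exists) -> bool:
    # The filesystem probe itself is the weak point: jailbreak tweaks
    # that hook stat()/access() to hide these paths run inside the app
    # process, and a buggy hook crashes the app doing the probing.
    return any(exists(p) for p in paths)

print(looks_jailbroken())  # False unless jailbreak artifacts are present
```

The `exists` parameter is just there to make the heuristic testable; the crash mode described above comes from the hooked filesystem call, not from this logic.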


If you are under constant attack from well funded enemies who want to destroy you, attacking iOS with well focused exploits is a good way of getting intelligence. The enemy may be lulled into a false sense of security on iOS and if the exploit goes undiscovered for a long time, a lot of intelligence can be obtained.


> the malicious actor chose to limit their intended recipients

I wonder how many people you would need to infect, on average, until you were detected?

I would guess that with this exploit chain, and the lack of auditing available for iOS internals, the actual exploit could run a billion-plus times before detection. The biggest risk is someone noticing the wedged WebKit renderer process and trying to debug it; I bet that causes oddities when the phone is hooked up to a Mac with devtools open.

Of the whole thing, the HTTP network traffic is probably by far the biggest red flag, and perhaps 1 out of 10 million people might notice and investigate it. Simple things like never connecting over Wi-Fi (the cell network is far harder to sniff) and redirecting traffic, encrypted, via a popular CDN would go a long way toward hiding it.
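To put numbers on the guess above: if each infection independently gets noticed with some small probability p, the number of infections before the first detection is geometric with mean 1/p. A sketch, using the 1-in-10-million figure from above (a guess, not data):

```python
# Back-of-the-envelope detection math, treating each infection as an
# independent chance of a user noticing the odd HTTP traffic.

def expected_infections_before_detection(p_notice: float) -> float:
    """Mean of a geometric distribution: on average 1/p infections
    occur before the first one is noticed."""
    if not 0.0 < p_notice <= 1.0:
        raise ValueError("p_notice must be in (0, 1]")
    return 1.0 / p_notice

def prob_any_detection(p_notice: float, n_infections: int) -> float:
    """Probability that at least one of n infections gets noticed."""
    return 1.0 - (1.0 - p_notice) ** n_infections

p = 1e-7  # the parent comment's guessed 1-in-10-million rate
print(expected_infections_before_detection(p))      # 10000000.0
print(round(prob_any_detection(p, 10_000_000), 3))  # 0.632
```

So even at ten million infections, this toy model gives only about a 63% chance anyone has investigated, which is consistent with a campaign quietly running for years.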


> redirecting traffic, encrypted, via a popular CDN would be a good way to hide it

True, but wouldn't that lead western three letter agencies to an account with a credit card attached? Sure, criminals can get stolen cards, but I imagine those have a limited lifespan and chasing after payment problems is not what the hacker wants to be doing.


This isn't script-kiddie level hacking. Considerable resources were spent to develop these exploits, probably by a nation state. It's highly unlikely that such actors couldn't obtain untraceable cards or identities.

For a three-letter agency, legal issues might be the bigger hurdle. E.g. you might have to make sure the data passing through the CDN, and thus the CDN itself, isn't in some other jurisdiction when spying on your own citizens.


> leveraged zero-days for iOS for years

Isn't that a problem with the iOS walled garden? Not even security researchers can properly investigate users' devices and detect infections like this, the way they can with desktop operating systems.


A step in the right direction: Apple recently announced that next year security researchers will have access to special iPhones: https://www.theverge.com/2019/8/8/20756629/apple-iphone-secu...


I agree this is "a step in the right direction", but if the iPhones are special, would the exploit run on them?

I suppose it depends on how "special" they are—do they run standard iOS with normal Safari and actual apps?


I think it just means they can run unsigned code. But yes, I would presume that Safari and the other default apps are present.


Production iPhones can do that already. These phones probably give researchers root access and allow for disabling AMFI, etc.


No, it has nothing to do with that. The only difference is that you can’t buy snake-oil virus scanners in the iOS App Store because they can’t even pretend to work.


The Apple iOS model has advantages and disadvantages. The issue cited is not a problem unique to the iOS walled garden model.

See: CVE-2019-1162

See: CVE-2017-6074


I hear this a lot, but is it something that actually happens to a meaningful extent on other systems? Does it balance out the fact that black hats (and state-sponsored actors) can find exploits they would not otherwise have found without access to the source?


Absolutely. It is such a disgrace to open society that we have allowed our phones, computers and cars to be so taken over by corporate interests that we cannot even peek inside.

NB I heard some infosec research companies actually get rooted phones from Apple with some big caveats.


That's why Android is far safer. No exploits there.


This is what the free market chose. People bought these devices en masse and continue to do so.

It’s on us - the end user.


there's no better alternative.

no, please don't tell me about Android. that's a fucked-up minefield altogether.

we need to remember that there is no such thing as 100% security, and never will be. while iOS is not open (which is worth nothing to the average user anyway), it gets us pretty close, given all we know.



