That's pretty astonishing. The MMIO abuse implies that the attackers either have truly phenomenal research capabilities or hacked Apple and obtained internal hardware documentation (more likely).
I was willing to believe that maybe it was just a massive NSA-scale research team, up until the part with a custom hash function and its sbox. Apple appears to have known that the feature in question was dangerous and deliberately hidden it, whatever it is, and then gone further and protected it with a sort of (fairly weak) digital signing feature.
As the blog post points out, there's no obvious way you could find the right magic knock to operate this feature short of doing a full silicon teardown and reverse engineering (impractical at these nodes). That leaves hacking the developers to steal their internal documentation.
The way it uses a long chain of high-effort zero days only to launch an invisible Safari that then starts from scratch, loading a web page that uses a completely different chain of exploits to re-hack the device, is also indicative of a massive organization with extreme internal siloing.
Given that the researchers in question are Russians at Kaspersky, this pretty much has to be the work of the NSA or maybe GCHQ.
Edit: misc other interesting bits from the talk: the malware can enable ad tracking, and can also detect the cloud iPhone hosting services that are often used by security researchers. The iOS/macOS malware platform seems to have been in development for over a decade, and actually does on-device ML for object recognition and OCR on photos, to avoid uploading image bytes: they only upload the ML-generated labels. They truly went to a lot of effort, but all that was no match for a bunch of smart Russian students.
I'm not sure I agree with the speaker that security through obscurity doesn't work, however. This platform has been in the wild for ten years and nobody knows how long they've been exploiting this hidden hardware "feature". If the hardware feature was openly documented it'd have been found much, much sooner.
> If the hardware feature was openly documented it'd have been found much, much sooner.
Well, the point of Kerckhoffs's principle is that it should have been openly documented, and then anyone looking at the docs even pre-publication would have said "we can't ship it like that, that feature needs to go."
This is a fairly incredible attack, and I agree with your analysis. The hidden Safari tab portion where they “re-hack” the device could be bad organizational siloing as you mentioned, or indicative of a “build your own virus” approach like the kits script kiddies used in the 90s. Could be a modular design for rapid adaptation, i.e. perhaps less targeted.
I don't think hiring an ex-Apple dev would let you get the needed sbox unless they stole technical documentation as they left.
So it either has to be stolen technical docs, or a feature that was put there specifically for their usage. The fact that the ranges didn't appear in the DeviceTree is indeed a bit suspicious, the fact that the description after being added is just 'DENY' is also suspicious. Why is it OK to describe every range except that one?
But the really suspicious thing is the hash. What kind of hardware interface does arbitrary DMA protected by a secret but weak hash function? Is there any legitimate usage for such a thing? I've never heard of such an interface before.
If it's a genuine backdoor and not a weird debugging feature then it should be rather difficult to add one that looks like this without other people in Apple realizing it's there. Chips are written in source code using version control, just like software. You'd have to have a way to modify the source without anyone noticing or sounding the alarm, or modifying it before synthesis is performed. That'd imply either a very deep penetration of Apple's internal network sufficient to inject backdoors into hardware, or they have one or more agents.
This really shows how dangerous it is to intel agencies when they decide to attack security professionals. Attacking Kaspersky has led directly to them burning numerous zero days, including several that might have taken fairly extreme efforts to set up. It makes you wonder what is on these guys' iPhones that's considered so valuable. Presumably, they were after emails describing more zero days in other programs.
> What kind of hardware interface does arbitrary DMA protected by a secret but weak hash function?
I agree. This appears to likely be an intentional backdoor injected at the hardware level during design. At such a low-level I think it could have been accomplished with only a handful of employees in on it. There would have been no need to subvert Apple from the top down with large numbers of people at many levels being privy.
In early silicon there can be a bunch of registers and functions implemented for testing which are later pulled out. Except maybe one set of registers doesn't get pulled but instead a door knock is added with a weak hash function, making the registers invisible to testers and fuzzing.
It seems a little too convenient that the door knock hash was weak. After all, strong hash functions aren't unknown or hard. The reason it had to be a weak hash function was to create "plausible deniability". If it was a strong hash then once any exploitation was discovered there would be no denying the vuln was intentionally placed. If it really was just a test DMA function that someone supposedly 'forgot' to remove before production silicon, I can't think of a reason to have it behind any kind of door knock in the first place.
I read that it was patched by adding these addresses to the "access denied" list. While I don't know anything about Apple security, I'm stunned that any such low-level access list isn't 'opt-in' instead of 'opt-out'. If it was 'opt-in', it seems like any such 'undocumented' register addresses would be denied by default. And if they were on the 'opt-in' list, yet remained undocumented, then it would be obvious to anyone looking at the security docs that something was amiss.
It should be very easy to add one without somebody noticing. This is the same Apple that, only a few years ago, shipped a version of macOS for months that added the ability to log in as root with any password.
Their review processes are so incompetent that even one of the most security-critical components, root login, let a totally basic “fail your security 101 class” bug through. It is absolutely inexcusable to have a process that bad, and it is indicative of their overall approach. As they say, “one cockroach means an infestation”.
Mistakes happen but Apple's reputation for strong security is well deserved. They invest heavily and the complexity of this exploit chain is evidence of that. Linux has had its fair share of trivial root login exploits that somehow got through code review.
No, that is a level of error similar to delivering cars with no airbag in them for months. In any other industry that would indicate an unimaginable level of process failure. Only in commercial software are egregious, basic mistakes swept under the rug as “mistakes happen”.
Just to list a few process failures off the top of my head.
No proofs of specification conformance. No specification conformance tests. No specification. No regression testing. No regression testing of common failure modes. No testing of common failure modes. No enhanced review for critical components. No design conforming to criticality requirements. No criticality requirements. No intention to establish criticality requirements.
In actual safety- and security-critical software development you do all of those except maybe the first. Doing none of them is rank incompetence and clear evidence that you do not know the first thing about actual security that can protect against real professionals. And fancy that, Apple cannot, and never has, protected against attackers with minimal resources, like small teams with only a few million dollars.
We can talk about a reputation for “strong” security when they can protect against the standard, commonplace 10M dollar attacks we see every day.
Uh, very few folks outside of Cupertino know better than Mike how much of a total shitshow Apple were a decade or more ago. His team had to deal with their insanity on a regular basis. :-)
Since then, though, they have cleaned up their act (I've never been a fan). As a silly example, I'm pretty sure they finally check the configuration of their networking equipment into source control... Plus they do a lot more to counteract the sophistication of today's nation-state attacks. They've come a long way since the hacks of 2014, when they had to scramble to enable 2FA for iCloud, which had previously been used only for account changes or purchases.
As for this vulnerability, it might be a plain bug or some NSA-style backdoor with plausible deniability, but we probably won't know which for years.
Idk who Mike Hearn is, and I'm always hesitant to call people out on HN for exactly this reason... but I think OP's airbag analogy was perfectly fair, since it was referring to the Apple bug where you could become root by simply clicking "okay" instead of entering a password. [0] It was comical in its simplicity.
But that said, overall I agree with Mike, in that Apple is clearly committed to security of their users. It's one of the main reasons I buy Apple hardware for my mobile, personal and work devices. Nobody is forcing them to put so much effort into that security, either - they do it largely by choice, and it distinguishes them from other vendors whose business models are inherently in conflict with their customers (I don't want to buy a phone from a company that derives most of its revenue from facilitating psychological warfare between advertisers and me.)
But I've always found this security to come with interesting tradeoffs. On the one hand, I'm pro-privacy and pro-user. But certain security objectives can't be achieved without reducing privacy or increasing restrictions on the user. Over the years I've come to appreciate this tradeoff, and I think Apple does a good job of balancing it. While I certainly don't feel like the device is "fully mine," I do feel like I'm getting some extra security in exchange for giving up some freedom to tinker.
Where do Apple have a reputation for strong security?
Compared to other mainstream operating systems, they seem to constantly be the last to introduce things like stack canaries, non-executable memory segments, and all the rest of what is considered best practice now.
I’m not trying to defend Apple but I think that line of thinking is pretty cynical and could be used to condemn basically any company or open source project that attracts enough interest for attackers.
APTs probably routinely identify and target such developers. With multi-million dollar payouts for single bugs and high state level actor attention, employee profiling is clearly a known attack vector and internal security teams probably now brief on relevant opsec. FWIW the only Apple kernel developer I knew has somewhat recently totally removed themselves from LinkedIn.
Not the software kind, the good old listening devices.
There's a deep tool chest of spying. Heck, remember when keyboard presses were harvested through a wall? That was at least a decade ago. I think audio of key presses can be disambiguated.
> But the really suspicious thing is the hash. What kind of hardware interface does arbitrary DMA protected by a secret but weak hash function? Is there any legitimate usage for such a thing? I've never heard of such an interface before.
Never attribute to malice that which can be attributed to incompetence. There are plenty of examples in the wild of going halfway with strong security, but halfway still leaves the barn door open.
> Never attribute to malice that which can be attributed to incompetence. There are plenty of examples in the wild of going halfway with strong security, but halfway still leaves the barn door open.
That rule should only be applied in the normal world. In the world of security, where you know bad actors are out there trying to do stuff, it doesn't apply. And there are examples of spy types injecting plans to go halfway with security for their purposes - not that this proves the origin of a given plan; incompetence is still one possibility. It just returns to the original point, that this stuff is mysterious.
As a defender, you should treat malice and incompetence as functionally equivalent. Save the attribution for the post-mortem (or better yet, don't let it come to that).
> It makes you wonder what is on these guy's iPhones that's considered so valuable. Presumably, they were after emails describing more zero days in other programs.
My theory is that defensive cyber security is so hard that it's literally easier to hack the entire world (with a focus on security people) to see if anyone has breached your systems.
What I mean is that (assuming the sbox values are actually random) you couldn't memorize it short of intensive study and practice of memory techniques. If the "sbox" is in reality some easily memorizable function then maybe, but even then, how many people can remember long hex values from their old jobs?
Go onto LinkedIn, search for Apple Global Security staff and you’ll get an answer. The head of and much of the staff are ex-USIC people. Now perform those searches over time and do a little OSINT and observe a revolving door where they are not so ex-.
I wouldn’t be surprised if one or two very senior people in large tech companies are agency agents, willingly or not.
I don’t really have any proof, but considering the massive gain it shouldn’t surprise anyone. The agencies might not even need to pay large sums if said assets have vulnerabilities.
Very interesting, thanks. Summarizing that thread:
- The "hash" is probably an error correcting code fed into GPU cache debug registers which will be stored in the cacheline itself, you're expected to compute the ECC because it's so low level. That is, the goal isn't to protect the DMA interface. (but this isn't 100% certain, it's just an educated guess)
- The "sbox" is similar to but not the same as a regular ECC as commonly used in hardware.
- Martin argues that the existence of such registers and the code table could have been guessed or brute forced, even though a compromise or info leak from Apple seems more likely. Or possibly even from the old PowerVR days. But if it's the NSA then who knows, maybe they are literally fuzzing hidden MMIO ranges to discover these interfaces.
- This is possible because the GPU has full DMA access without an IOMMU for performance reasons, so it's fertile ground for such exploits. Probably more will be discovered.
Because you are DMA-ing the raw bits into cache with the GPU, but the CPU is going to check those ECC codes on read, as the caches on Apple SoCs are ECC-native. It's an integrity 'protection', not a security 'protection'.
Or, like, they have a root kit and it works so why reinvent the wheel? They have an attack payload so why reinvent the wheel? Just plug and play all the packages you need until you can compromise your target device.
S-box is used in symmetric ciphers in cryptography. S-box = substitution box or lookup table [3][4]. You can see an example of Rijndael s-box in a python script here [1][2].
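For illustration, here's a minimal sketch of what an s-box lookup amounts to; the 4-bit table below is made up for the example, not the real Rijndael one:

```python
# Toy illustration of an s-box (substitution table) lookup.
# This 4-bit table is invented for the example; it is NOT the Rijndael s-box.
TOY_SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
            0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

def substitute(nibble: int) -> int:
    """Replace a 4-bit input with the table entry at that index."""
    return TOY_SBOX[nibble & 0xF]

print(hex(substitute(0x3)))  # 0x5: the input is simply an index into the table
```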
>there's no obvious way you could find the right magic knock to operate this feature short of doing a full silicon teardown and reverse engineering (impractical at these nodes).
Then how did these researchers do it? Not being cheeky, I just don't follow security super closely.
What do you mean? Both the GPU and CPU design are proprietary to Apple. They used to use regular ARM designed cores but the last one of those before switching to their own core design was something like the A5 days (from memory). It uses the ARM instruction set but isn’t actually designed by ARM at all.
Similar for the GPU too. They may have started with HDL licensed from others (like I think their GPU might actually have been directly based on the PowerVR ones they used to use, but I believe the ARM one is basically from-scratch) but this vulnerability seems unlikely to have existed since then…
Being able to put together tooling with these capabilities makes the attacker an APT by definition. These are generally assumed to be national intelligence services, though that is an assumption. (Among other things, there are multiple countries where the lines between intelligence agencies and their contractors are... fuzzy.)
And while Kaspersky is refusing to speculate at all about attribution, the Russian government has claimed (without giving specific evidence) that it's NSA.
“This iMessage exploit is crazy. TrueType vulnerability that has existed since the 90s, 2 kernel exploits, a browser exploit, and an undocumented hardware feature that was not used in shipped software”
I really doubt it's a backdoor. After reading the blog post and this thread chain from a prolific M1 MacBook hacker (marcan), I think it was just an unused or very rarely used feature that was left enabled by accident.
First, yeah, the dbgwrap stuff makes perfect sense. I knew about it for the main CPUs, makes perfect sense it'd exist for the ASCs too. Someone had a lightbulb moment. We might even be able to use some of those tricks for debugging stuff ourselves :)
Second, that "hash" is almost certainly not a hash. It's an ECC code*. I bet this is a cache RAM debug register, and it's writing directly to the raw cache memory array, including the ECC bits, so it has to manually calculate them (yes, caches in Apple SoCs have ECC, I know at least AMCC does and there's no reason to think GPU/ASC caches wouldn't too). The "sbox" is just the order of the input bits to the ECC generator, and the algorithm is a textbook ECC code. I don't know why it's somewhat interestingly shuffled like that, but I bet there's a hardware reason (I think for some of these things they'll even let the hardware synthesis shuffle the bits to whatever happens to be physically optimal, and that's why you won't find the same table anywhere else).
>I really doubt it's a backdoor. After reading the blog post and this thread chain from a prolific M1 MacBook hacker (marcan), I think it was just an unused or very rarely used feature that was left enabled by accident.
Why? Apple isn't exactly a small family business, and this is quite the drastic "feature" to be left enabled by accident.
Because 1. it helps with debugging at development time; 2. it may take unreasonable effort to disable, possibly from a hardware team's point of view with no direct security background; 3. it may be worth keeping around for future patching flexibility.
Source: [0]
> Apple isn't exactly a small family business and this is quite the drastic "feature" to be left enabled by accident.
No matter how large and hierarchical a company is, there will always be teams making hundreds of small, apparently localized decisions in their own line of work, without consulting anyone outside their team, and without seriously considering ramifications. It's humans all the way down.
> How would one look from your perspective?
A feature where you poke a seemingly random 64-bit value into an apparently arbitrary memory address, which grants you access to something you wouldn't normally have. That'd be a backdoor to me.
In the case at hand, the feature neither has a hidden MMIO address (it's discoverable in a small-ish search space), nor does it require a secret knock (but instead apparently just a somewhat convoluted checksum.)
> Why? Apple isn't exactly a small family business and this is quite the drastic "feature" to be left enabled by accident.
Mistakes happen, maybe it's used in the manufacturing or fab phase to test something. Maybe it's used by the GPU firmware itself to perform some operations.
Maybe it was used by some older SoCs to do something and was just never removed.
Apple may not be a small family business but silly mistakes still happen all over the place in nearly every large system.
To quote another post by marcan, because I feel like it's super relevant when trying to analyse how this kind of stuff comes about.
@nicolas17 @nil Correct. The question is, how many more of these fun bypasses are left undiscovered? The GPU is a huge amount of hardware and firmware.
Computers are so incredibly complicated nowadays that these kinds of hardware bugs are likely sitting around elsewhere as well. They don't come around that often because the effort required to find them is huge, which is likely only viable for a nation-state actor.
CoreSight is not some backdoor - it's a debug feature of all ARM CPUs. This looks like a necessary extension to CoreSight to work with Apple's memory protection stuff.
Even though no public documentation exists, I'm sure thousands of Apple engineers have access to a modded gdb or other tooling to make use of it.
That the secret registers are in fact cache test registers, as explained at that link, is a very plausible explanation for their existence.
Nevertheless, this does not explain at all the astonishing fact that they were mapped by default in the accessible memory space, unless listed and explicitly denied in the system configuration files.
No amount of incompetence seems enough to explain such a default policy, so the supposition of an intentional backdoor still seems more likely.
> No amount of incompetence seems enough to explain such a default policy, so the supposition of an intentional backdoor still seems more likely.
I think you're overestimating how granular the memory mappings are (or how isolated the debug registers are). They're usually used to control access on a peripheral level, not to individual registers within a peripheral.
Apple's mitigation was in fact to alter boot-configured memory mappings to deny access. (And as to the mappings... if they were in the middle of a range of documented registers, or close to one, sloppiness and poor internal communication are at least plausible...)
I would hope that all memory outside the known ranges is denied by default... Apple should know all the memory mapped hardware in the SoC, so why would they allow IO to something that doesn't exist?
It's just a few lines of code to parse the memory map and deny all undefined regions. As well as being good for security, it also helps find out-of-bounds write bugs, and ensures you can't accidentally ship an out of date memory map.
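Something like this minimal sketch (the region list and addresses are made up for illustration):

```python
# Deny-by-default sketch: only MMIO ranges listed in the known device map are
# allowed; any access outside them is denied. Addresses/sizes invented.
KNOWN_REGIONS = [          # (base, size) of documented peripherals
    (0x2_0000_0000, 0x10000),
    (0x2_0400_0000, 0x4000),
]

def is_access_allowed(addr: int, regions) -> bool:
    """Allow the access only if it falls inside a known region."""
    for base, size in regions:
        if base <= addr < base + size:
            return True
    return False

print(is_access_allowed(0x2_0000_0040, KNOWN_REGIONS))  # True: documented register
print(is_access_allowed(0x2_0600_0000, KNOWN_REGIONS))  # False: undocumented => denied
```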
For Wi-Fi–only devices, the Messages app is hidden.
For devices with Wi-Fi and cellular, the Messages app is still available, but only the SMS/MMS service can be used.
SMS/MMS messages and non-emergency cellular radio traffic can be disabled by a SIM PIN, e.g. when using the device for an extended period via WiFi.
We purchased an iPad with cellular, with the plan to put my home country's SIM card in it so I can still receive SMS (as most of the banks there still require SMS verification when you log in), and it turns out that an iPad with cellular does not really show you SMSes that aren't from the carrier of the SIM card.
iPads can't send SMS text messages through Apple's Messages app. Even if you have an iPad with a cellular data plan for mobile internet on the go, you still can't send SMS text messages.
>In the Messages app , you can send text messages as SMS/MMS messages through your cellular service, or ...
Also, my own experience is that it can at least receive SMS text messages; it just won't show them to you if they're not from your carrier (if they're from your carrier, it shows them via a popup window or something, I can't really remember as that was several years ago).
I've never understood why iPads can't be used as phones with an ordinary cellphone SIM. Is it simply because Apple doesn't want to pay a Qualcomm licensing fee or some equivalent? Who is it in the chain/ecosystem that does not want tablets being used as full phones, the carriers? Apple?
I'm guessing it doesn't fit well with the carriers' price structure. Adding a tablet / smart watch / etc. is cheaper than adding another phone to your account. I wouldn't have a cellular iPad if it was a lot extra per month, but I think I pay $10 for both the tablet and the watch, which is fine with me.
Notice that the hash value for a data write of all zeros is zero...
And for a single bit, the hash value is a single value from the sbox table. That means this hash algorithm could reasonably have been reverse engineered without internal documentation.
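In other words the hash is linear, so single-bit probes reveal the whole table. A toy sketch of that reasoning, with an invented table, pretending the correct hash per probe can be observed (in reality you'd have to brute-force the accepted value for each probe, which, as noted elsewhere in the thread, is only ~20 bits):

```python
# Toy sketch: if hash(0) == 0 and hash(1 << i) == table[i], the hash is linear
# (an XOR of per-bit constants), so single-bit probes recover the whole table.
# The "secret" table here is invented for illustration.
SECRET_TABLE = [((i + 3) * 0x153) & 0xFFFFF for i in range(64)]  # 20-bit values

def black_box_hash(data: int) -> int:
    h = 0
    for bit in range(64):
        if (data >> bit) & 1:
            h ^= SECRET_TABLE[bit]
    return h

# Recover the table one bit at a time...
recovered = [black_box_hash(1 << bit) for bit in range(64)]
assert recovered == SECRET_TABLE

# ...and then predict the hash for arbitrary data.
data = 0x0123_4567_89AB_CDEF
predicted = 0
for bit in range(64):
    if (data >> bit) & 1:
        predicted ^= recovered[bit]
assert predicted == black_box_hash(data)
```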
This 'smells' like a typical way to prevent memory writes to random addresses accidentally triggering this hardware. Doesn't look like it was intended as a security feature.
In fact, this is how I'd implement it if someone said to me it was important that bugs couldn't lead to random writes. This implementation also effectively prevents someone using this feature whilst giving a buffer address they don't know the contents of.
10 bits of security is probably enough for that as long as you reboot the system whenever the hash value is wrong. The coresight debug functionality can totally reboot the system if it wants to.
Like a CRC? I'm reminded of the Broadcom compression algorithm that required tedious reverse engineering, or a look at the Wikipedia page with sample code.
On the Amiga, you had to write to a blitter control register (BLTSIZE?) twice with the same value or it wouldn't do anything. This might be the same, only a lot more paranoid.
What are the chances this MMIO register could have been discovered by brute force probing every register address?
Mere differences in timing could have indicated the address was a valid address, and then the hash could perhaps have been brute forced too since it is effectively a 20 bit hash.
It looks like the registers could have been identified fairly easily via brute force. They're physically close to documented GPU registers, and accessing them triggers a GPU panic, which is how the researchers attributed them to the GPU component. The attackers could have used that same test to identify the existence of the registers.
The part that's less easily explained is how they were able to reconstruct a custom sbox table to execute the debug code. That's where the "insider threat" insinuations are strongest, but personally I'm not convinced that it precludes any number of other plausible explanations. For example, the attackers could have extracted the sbox from: older firmwares, OTA update patches, pre-release development devices (probably purchasable on ebay at some points), iOS beta releases, or a bunch of other leaky vectors.
The researcher basically says "I couldn't find this sbox table in any other binary where I looked for it." Well, that's not necessarily surprising since it appears to be Apple specific and thus there are a limited number of binaries where it might have appeared. And as the researcher notes, this includes now unpublished binaries that might have been mistakenly released. It's totally plausible that the attackers got lucky at some point while they were systematically sniffing for this sort of leak, and that the researcher is unlikely to have the same luck any time soon.
Looking at that sbox implementation, I can't believe it was implemented as a lookup table in the hardware of the chip - there must be some condensed Boolean expression that gives the same result.
The fact the attackers didn't know that Boolean expression suggests they reverse engineered it rather than had documentation.
In addition to the contents of the presentation, in terms of timeline...
2018 (September): First SoC with the undocumented MMIO launched, the Apple A12 Bionic.
2021 (December): Early exploit chain infrastructure backuprabbit.com created 2021-12-15T18:33:19Z, cloudsponcer.com created 2021-12-17T16:33:50Z.
2022 (April): Later exploit chain infrastructure snoweeanalytics.com created 2022-04-20T15:09:17Z suggesting exploit weaponized by this date.
2023 (December): Approximate date of capture (working back from the "half year" quoted analysis period + mid-2023 Apple reports).
The presenters also state that signs within the code reportedly suggest the origin APT group has used the same attack codebase for "10 years" (i.e. since ~2013) and also uses it to attack macOS laptops (with antivirus circumvention). The presenters note that the very "backdoor-like" signed debug functionality may have been included in the chips without Apple's knowledge, e.g. by the GPU developer.
So... in less than 3.5 years since the first vulnerable chip hit the market, a series of undocumented CoreSight debug MMIO registers for the GPU, requiring knowledge of a lengthy secret, were successfully weaponized and exploited by an established APT group with a 10+ year history. Kaspersky are "not speculating", but IMHO this is unlikely to be anything but a major state actor.
Theory: I guess since Apple was handed ample evidence of ~40 self-doxxed APT-related AppleIDs, we can judge the identity using any follow-up national security type announcements from the US. If all is quiet it's probably the NSA.
It's not, it really isn't. Honestly, just apply this mentality to one other scenario to test the waters. We should stop publishing YARA rules because it tips our hand to the malware makers? It's nonsense to even say.
The extra hardware registers might have been discovered by examining the chip itself. One could find where the registers were on it, and notice some extra registers, then do some experimenting to see what they did.
How many ex-Apple employees work(ed) at NSA? It may just have been the right person doing their regular 9-5 job, with no subterfuge. The list of employers for Hardware security folks is likely a couple of dozen companies, and Apple and NSA are among the most prestigious of them. I expect some employees to move in both directions.
> The extra hardware registers might have been discovered by examining the chip itself.
Perhaps. But it's easier to phone the technical librarian and say "Hi! I'm Bob from the password inspection department. Can you verify your current password for me?"
You can find the register file relatively easily because it's a block of memory that's the same on each core but isn't cache, but it isn't a 1:1 map from architectural registers that we would recognize: the chip is designed to find an optimal allocation of slots in the register file to runtime values.
These particular registers aren't part of the CPU proper anyway, so not in the register file in that sense -- they're mmio mapped, and https://securelist.com/operation-triangulation-the-last-hard... concludes that they are "a block of CoreSight MMIO debug registers for the GPU coprocessor".
Philip Zimmermann a while back was working on a secure phone product called the Black Phone. I tried to convince him that a secure phone should not contain any microphones of any kind. That sounds a bit weird for a phone, but it's OK: if you want to make a voice call, just plug a headset into it for the duration of the call. He wasn't convinced, but this iPhone exploit makes me believe it more than ever.
Maybe I'm too dumb to find it on this page but if you are looking for the actual recording instead of a calendar entry in the past, it's here (a stream dump for now, fast forward to 27 mins):
>Hardware security very often relies on “security through obscurity”, and it is much more difficult to reverse-engineer than software, but this is a flawed approach, because sooner or later, all secrets are revealed.
The latter works when you are not as big as Apple. When you are as big as Apple, you are a very hot target for attackers. There is always effort vs reward when it comes to exploiting vulnerabilities. The amount of effort that goes into all this is worth thousands of dollars even if someone is doing it just for research. If I was doing this for some random AliExpress board it would be worth nothing; security by obscurity would probably mean no one really cares, and the latter part works in that case.
But I wonder what Apple is thinking when they rely on obscurity, because people must start working on exploiting new hardware from day 1. You can literally get one on every corner in a city these days. Hardware security by obscurity would be fine for, say, cards sold by someone like Nvidia to only some cloud customers: those are assumed obsolete in a few years, so even if someone gets them on eBay the reward is very low. iPhones, on the other hand, are very much a consumer device, and people hang on to their devices for a very long time.
Depends on usage and threat models? If one expects their device to be attacked immediately after a forced restart, SIM PIN can stop non-emergency usage of the cellular line, while Apple Configurator can restrict WiFi to a predefined SSID that captures device traffic for forensic analysis.
Layperson here so just guessing. If not using the hard reset method, the exploit might fake the reboot sequence to maintain its own persistence. AFAIK, a hard reset is built in hardware and thus impossible to fake.
I see that one of the steps in exploit was to use GPU registers to bypass kernel memory protection. Does it mean that the vulnerability cannot be fixed by an update and existing devices will stay vulnerable?
The kernel cannot protect against this; in fact, the attackers have full read/write control and code execution capabilities to mount this attack. The fix is blocking this range from being mapped, using features that are more powerful than the kernel.
I didn't hear anyone mention fuzzing once. I guess there was probably very specific insider knowledge being made use of and they wanted to point a finger, which is fair enough I guess. I'm just a bit surprised that it has not been mentioned so far in the discussion. Anyhow, it seems that an allow-list approach by Apple would have been better than a deny-list approach! Literally not checking out of expected bounds!
Fuzzing is about searching the state space of an entity (a function, a method, and I suppose even a hardware block) for unexpected, undefined, or maybe even undocumented behavior.
Certainly this could have been used by the exploiters of these bugs to find undocumented but desirable effects in the hardware blocks of iOS devices.
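As a rough sketch of what probing a hardware block for undocumented registers might look like, with the actual MMIO access stubbed out by a toy simulator (a real probe would need kernel-level access and would have to survive faults and panics, or rely on timing differences):

```python
# Abstract sketch of MMIO-range probing: walk candidate register addresses
# near a documented peripheral and classify how the hardware reacts. The
# access itself is stubbed out; the addresses below are invented.
HIDDEN_REGS = {0x2068_8040, 0x2068_8048}  # pretend "undocumented" registers

def probe(addr: int) -> str:
    """Stub for a real MMIO read: returns how the hardware reacted."""
    if addr in HIDDEN_REGS:
        return "responds"        # e.g. different timing, or a GPU panic
    return "bus-error"           # unmapped address faults immediately

def scan(base: int, length: int, stride: int = 8):
    found = []
    for addr in range(base, base + length, stride):
        if probe(addr) != "bus-error":
            found.append(addr)
    return found

print([hex(a) for a in scan(0x2068_8000, 0x100)])
```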
It's one of the major arguments against backdooring systems, even if you think this is acceptable.
In the end you create a backdoor for everyone, even if you don't do it as moronically as here. You are the hostile actor.
> If we try to describe this feature and how the attackers took advantage of it, it all comes down to this: they are able to write data to a certain physical address while bypassing the hardware-based memory protection by writing the data, destination address, and data hash to unknown hardware registers of the chip unused by the firmware.
Did the systems software developers know about these registers?
I'm curious to know from experts if there's anything Apple can do to create a step-change in terms of security of iPhones? Like if the going rate for a zero day is $1 million, is there anything Apple can do that can drive that up to $2 or $3 million? Or is it just going to be a perpetual cat and mouse game with no real "progress"?
I am by no means a security expert whatsoever. Period. But reading the article carefully, there is a step in the chain of exploits (CVE-2023-32435) which depends on exploiting Safari. Apple implemented a "Lockdown mode" (https://support.apple.com/en-us/105120) which might have handled this (?).
Answering more broadly to your question, the "step-change" that you're asking for is precisely the "Lockdown mode" in iOS devices. It disables most of the features in order to reduce the attack surface of the device.
If you read a better article with technical details [1], you'll see that Apple SoCs contain a "feature" (that resembles a debugging tool) that allows bypassing memory protection by writing into undocumented and unused GPU registers. Apple locks down kernel memory to stop exploits, but these registers allow that lock to be bypassed.
This is the key vulnerability, without which the whole exploit chain would be useless.
The original article doesn't have as many technical details as the article I linked to. That is why I added a link to another article which is, in my opinion, better; it is difficult to understand the vulnerability from the original article. The original article also doesn't say anything about how Apple tried to fix it.
The Safari vulnerability wasn't necessary (the device was completely owned before that), and was really just a "nice to have" - it allowed verification of the targeted user and, presumably, customizable malware delivery. From the article, if you look at the bullet points under the Kaspersky diagram of the exploit chain:
> After exploiting all the vulnerabilities, the JavaScript exploit can do whatever it wants to the device and run spyware, but attackers chose to: a) launch the imagent process and inject a payload that cleans the exploitation artifacts from the device; b) run the Safari process in invisible mode and forward it to the web page with the next stage.
Yeah, lockdown mode might have handled it. If I'm reading the article right, the first step of the exploit was a PDF file sent with iMessage.
When I tried out lockdown mode out of curiosity, I found that it was aggressive about blocking PDF viewing. I quickly bailed on it because I often read research papers on the web, and it switched them from view to download.
It could author its format parsers in https://github.com/google/wuffs, and make them BSD-like open source to maximize adoption.
An even bigger change: It could allow users to choose their iMessage client freely. Why not open up the protocol? I’m sure a security focused client would be popular and in the grand scheme of things easy to author.
Perhaps they could open up more of the OS and apps. Perhaps their claims about the security of users and the App Store is kind of BS.
Either a third party app would still use the same vulnerable frameworks as iMessage, or they would re-implement them potentially with more vulnerabilities, or just not implement the features, which is what Lockdown Mode gives you.
But others probably would. Maybe the user-experience on the security focused clients will be degraded (missing features or disabled features for the sake of privacy and security, lower performance due to some other security features), but you'd still have this kind of niche clients.
Browsers are not the only thing concerned here, there are many sectors impacted by this philosophy. Qubes-OS for Linux Distributions, replacing containers (and especially Docker) with daemon-less or even MicroVM (Firecracker, Podman, ...). I'm sure there are also heavily sandboxed clients for Matrix, SMS or Emails.
Sure. Rewrite sensitive parts of their stack in memory-safe languages. They have Swift after all. A lot of the iOS security improvements over time have really been more like mitigations that try to contain the damage when the giant pile of decades-old C gets exploited.
That is exactly their plan. Swift could always link into C applications, and they have recently come out with C++ interoperability [1] so things like WebKit etc. can start having parts re-written or new parts written from the start in Swift so they can gradually replace C and C++ codebases instead of trying to rewrite everything (which sucks because even for things much, much less complex than WebKit, you can have a team working for three years working on a replacement and it’ll have less features than the original had when you started).
They’re even working on an embedded subset, so microcontrollers for things like battery management, the Secure Enclave, etc. can run it.
I think memory safety + integer overflow checking by default would have blocked many of these. Not the hardware mitigation disable but getting to the point where that matters required some safety problems that can be excluded by better languages.
It's already $2-3 million+. Apple has amazing security, especially for the iPhone, and continuously monitors it and dishes out silent patches. For a REALLY high-level example, it restricts system calls per process and requires all calls to be signed with an Apple key, AND it restricts who you can make the system call to; these are continuously monitored and updated. Not only this, but persistence on iPhone is effectively dead, meaning you have to reinfect the device after every reboot. One of the big things you notice in the article is the use of ROP; Apple requires every executable page to be signed by them, hence why you end up with these assfisting ROP chains.
2-3 million dollars is not “amazing”. That is less than the cost to open a McDonalds. You can get a small business loan in the US for more than that. There are literally tens of millions of people in the world who can afford that. That is 1/5 the cost of a tank.
2-3 million dollars is pocket lint to people conducting serious business, let alone governments. It is at best okay if you are conducting minor personal business. This ignores the fact that attacks at the 2-3 million dollar range are trivially wormable. If you had actual cause to hack every phone you are only incurring marginal cents per attack. Even relatively minor attacks like targeting 10,000 people are less than one phone of cost per attack.
I don't know. $2-3m for reading code in Ghidra and throwing stuff at a wall until something sticks? Maybe some fuzzing, etc.
I get that you theoretically could find an exploit that for example, you send to 100 known wealthy people, and with it you steal saved cookies + device IDs from financial apps and then try to transfer their funds/assets to an account you control but...
Could you really pull that off 100 times before Apple catches on?
I guess you could... easily... now that I think about it.
This has the (un)fortunate consequence of being illegal. Writing exploits and selling them to a friendly government, on the other hand, is totally legal. Plus, then you can sell support contracts for that sweet recurring revenue!
Need to restart your non responsive iPhone, hope you have some dexterity...
----
Force restart iPhone
If iPhone isn’t responding, and you can’t turn it off then on, try forcing it to restart.
Press and quickly release the volume up button.
Press and quickly release the volume down button.
Press and hold the side button.
When the Apple logo appears, release the side button.
We'd need to scrap decades of work in hardware and software for that.
Modern software sits on a foundation that was designed for a different era. They didn't have in mind the current challenges in terms of security and scale.
I don't understand what you mean. They've always been making progress, driving the price up. They can just keep doing what they're doing, and there will be progress from today.
Perhaps Apple should provide research devices directly to the US's and China's intelligence agencies and pit them against each other to help close more vulnerabilities. The agencies can decide on their own where to strike the balance between offense and defense, but I suspect it would lead to more closed vulnerabilities over time.
I think that would be the case if they were given to the US and other Western intelligence agencies. My scenario requires competition between geopolitical opponents to create an incentive to worry about defense as well.
I mean, this is already an extremely complex chaining of exploits that requires extremely sophisticated research. I can assure you that this is not being used on the average person.
Knowing more about the exfiltration component, where it sends data to a remote server, would be helpful. According to the article it's sending large microphone audio recordings. I assume a company like Kaspersky would explicitly deny all outgoing network connections and then approve them one by one.
> yeah, I’m wondering the same. Maybe they can’t point a finger at who did it, but there were no clues on the exfiltration?
From the articles at the above link...
C&C domains
Using the forensic artifacts, it was possible to identify the set of domain names used by the exploits and further malicious stages. They can be used to check DNS logs for historical information, and to identify the devices currently running the malware:
addatamarket[.]net
backuprabbit[.]com
businessvideonews[.]com
cloudsponcer[.]com
datamarketplace[.]net
mobilegamerstats[.]com
snoweeanalytics[.]com
tagclick-cdn[.]com
topographyupdates[.]com
unlimitedteacup[.]com
virtuallaughing[.]com
web-trackers[.]com
growthtransport[.]com
anstv[.]net
ans7tv[.]net
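For anyone who wants to sweep their own DNS logs against that list, a minimal sketch (assuming a plain-text log with the queried domain somewhere on each line; the domains above are defanged with "[.]" and need re-fanging first):

```python
# Minimal IOC check: re-fang the defanged C&C domains and grep DNS logs.
# Assumes a plain-text log with the queried domain somewhere on each line.
DEFANGED = [
    "backuprabbit[.]com",
    "cloudsponcer[.]com",
    "snoweeanalytics[.]com",
    # ... plus the rest of the list above
]

IOCS = {d.replace("[.]", ".").lower() for d in DEFANGED}

def hits(log_path: str):
    with open(log_path) as log:
        for line in log:
            lowered = line.lower()
            if any(domain in lowered for domain in IOCS):
                yield line.rstrip()

# Example usage (path is hypothetical):
# for line in hits("dns.log"):
#     print(line)
```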
I think lockdown drops most iMessage features, so I would suspect the answer is yes. But as far as I can tell, lockdown prevents use of MDM, so it might be a net negative for security… instead, using the MDM policy that disables iMessage might be preferable.
Lockdown prevents new enrollment in MDM/adding profiles, but you can use an MDM you're already enrolled in. It's pretty good from a security perspective.
What I dislike is that it applies to all devices in your iCloud profile, and is overall pretty intrusive/annoying. Best practice, if you're going to use it, is probably to have multiple iCloud accounts (maybe in a "family" for license sharing), and enable Lockdown Mode on one of them for the more secure devices. I tried using it for all of my devices last year and it was pretty unusable.
(Main pain point was how it handles unsecure wifi networks; I consider ~all networks insecure regardless of wifi encryption, but not being able to save or otherwise autoconnect to a hotel network with an iPad with nothing on it, etc. was the last straw. With a decent travel router it's fine.)
Are hashes of the data ever used in known chip debugging features?
Since they're supposed to be disabled in production, what would be their point?
I'm no electronic engineer, but isn't it best for them to be fast and simple, to reduce the chance that they cause interference themselves..?
And isn't it strongly unlikely that an attacker in the supply chain (TSMC??) would be able to reliably plant this in all Apple chips from the A12 to the A16 and the M1 ??
That’s going to be a Chinese tool. Knowing the hardware that intimately and having all these convenient undocumented areas to play with is exactly the kind of thing you can put in place if you control the manufacturing.
Reminder that Lockdown Mode helps reduce the attack surface of your iPhone. It also helps tremendously with detection.
https://support.apple.com/en-us/105120
I've had Lockdown mode enabled for a few months. It's great, and not much of an annoyance at all. You do need to be fairly tech-savvy and remember that it's enabled, because sometimes something silently breaks and you need to opt-out of it (which you can do for a specific website, or WebViews within a specific app). And it won't auto-join "insecure" WiFi which can be annoying at a hotel, but frankly it's probably for the best. Also you won't receive texts with attachments in them, which is usually desirable but breaks workflows like activating a new SIM card while traveling (it's possible this was broken for me due to some other setting to exclude texts from unknown numbers).
The most noticeable difference is that SVG elements (?) are replaced with emojis. I'm not sure how that fallback works but it's funny to see buttons have seemingly random emojis embedded in them. (Does anyone know the details of how this replacement is done? Is it actually glyph fonts being replaced, not SVG?)
Isn't the most obvious answer that Apple, like other US tech firms such as Google, simply creates these wild backdoors for the NSA/GCHQ directly? Every time one's patched, three more pop up. We already know Apple and Google cooperate with the spy agencies very eagerly.
I consider that plausible with Google due to Google's funding history [0], but Apple is afaik way less "influenced" and the way this pwn was pulled off could also have been done by compromising Apple's hardware supply chain and not Apple itself.
Particularly considering how in the past Apple has been very willing to be on the receiving end of negative headlines for not giving US agencies decrypted access to the iCloud accounts of terrorist suspects. With Google, I don't remember it ever having been the target of such controversy, which suggests they willingly oblige with all incoming requests.
How so? Any competent intelligence service will not just depend on the goodwill of a corporation to secure access to assets and intelligence.
If they cooperate that's good and convenient, but that does not mean the intelligence service will not set in place contingencies for if the other side suddenly decides not to play ball anymore.
I said nothing about anything you stated, that’s all clearly possible, I specifically refuted the unsupported claim that Apple “eagerly cooperate with spy agencies”, where there’s ample evidence to support an opposite claim.
Ahem, you mean you have a single example, from a decade ago, one where Apple was hardly a key player (hence why Apple didn’t sign onto PRISM until half a decade after Yahoo, Microsoft, Google, et al.), as conclusive evidence of “eagerness to partner with spy agencies”, despite numerous public cases where they’ve done the opposite… got it!
No, I won’t agree to context free blanket statements which are specifically worded to imply something which is simply not provably true, especially given evidence to the opposite. If you knew anything about PRISM at all, even the technical details publicly available with the minimalist of effort on your part, you wouldn’t be asking.
> nobody takes time out of their day to defend Apple and PRISM on the Internet
I’m definitely not defending Apple to be clear. I just believe facts matter, especially when it involves security topics. Parroting around misleading and/or straight false statements related to security topics does nobody any good.
>This attachment exploits vulnerability CVE-2023-41990 in the undocumented, Apple-only TrueType font instruction ADJUST for a remote code execution. This instruction existed since the early 90’s and the patch removed it.
This is getting ridiculous. How many iMessage exploits have there now been via attachments? Why aren't Apple locking down the available codecs? Why isn't BlastDoor doing its job?
This is really disappointing to see time and time again. If a simple app to send and receive messages is this hard to get right, I have very little hope left for software.
If I've read the rest of the documentation correctly, the exploit is actually triggered from an attached ".watchface" file, which of course, has the font vulnerability in it.
I'd like to meet the person who suggested even sending .watchface files as iMessage attachments in the first place. What were you thinking? Did you not have a large enough attack surface already?
If I were an embassy employee (covert or overt), I'd want zero iMessage features beyond ASCII and the thumbs-up/down reactions. No attachments, no GIFs, no games, no Apple Pay, no easter eggs, no rich text
iOS has a reputation for having the best security, but how many times have Android/WhatsApp had these sorts of silent-instant-root exploits via invisible messages? I don't remember it happening. Maybe the strategy of writing lots of stuff in Java is paying off there.
Sigh…there has never been an 0day Stagefright exploit in the wild. And even if there was it wouldn’t have worked on all Android devices due to the OS differences among OEMs.
Also, there are plenty of old iPhones that do not receive updates anymore and are just as vulnerable so I’m not sure why you needed to get that in.
It’s quite unfortunate that Apple doesn’t allow users to uninstall iMessage, it seems to be the infection vector for advanced threats like this, NSO group, etc. Presumably it’s to avoid the support burden, but they could gate it behind having Lockdown Mode enabled for a week or something to shake out the vast majority of mistaken activations.
Actually lockdown is better. It leaves E2E encryption alone, but restricts attachment types, which should be enough to block the initial exploit in the chain.
Disabling iMessage would fall back to SMS, allowing messages to be snooped / modified in transit.
Hopefully they’ll also have a way to disable RCS, since it allows attackers to modify messages, and also has a larger implementation attack surface than SMS.
"Most message attachments are blocked and some features are unavailable."
iMessage with blue bubbles still works in Lockdown Mode. I think GIFs don't display properly and certain other attachments, but I can share photos, audio clips and video so I otherwise don't really notice that Lockdown Mode is enabled.
Unfortunately, Lockdown Mode disables Live Photos from being received via iMessage... That's a pretty big iPhone feature to not work under Lockdown Mode!
There are different aspects of security here. iMessage is tied to a physical device, so if you want to spam people, you have to purchase and burn through iPhones.
Rate limiting phishing attacks is certainly a useful security feature, but it does nothing to protect against targeted attacks.
Can someone explain to me why we can load vast quantities of untrusted code and a wide variety of image formats in our browsers all day long and be mostly safe today, but somehow even first party messenger apps seem to be a relatively easily compromised? Why can't messenger apps be sandboxed as well as browsers?
Sending these through messaging apps is appealing because that usually requires zero user action - you just send a message and the device runs the exploit as it generates preview thumbnails.
But browser exploits require the user to visit an infected website, which is much tougher. If I receive an email or SMS with "visit applesupport.info" I'm not going to click it.
It's all relative. Chrome has plenty of sandbox escapes. Microsoft found one lately where Chrome was passing strings from JS straight into the Windows TTS engine, which turned out to be parsing XML from it with a C++ parser that was full of memory errors.
Beeper is the user's choice. And Apple is preventing other companies from providing a more secure iMessage alternative, e.g. one that doesn't even parse messages from people not in the contact list, or doesn't even parse anything without a click, etc.
Apple has had so many zero-click exploits in iMessage, yet they insist that you have to use Lockdown mode to do anything about it, and then proceed to bundle Lockdown mode with lots of potentially unwanted behavior.
I don't think there's any way to claim that Apple is just doing what's in the customer's best security interest.
Me deciding to ride the subway to work for free is a user's choice, but that doesn't mean it's right. Using infrastructure for free because I feel like it is certainly my choice but I can't justify anger when someone makes me pay to use it since I should have paid in the first place. Currently Apple doesn't run iMessage as an open standard so it runs in "authorized riders only" mode.
>I don't think there's any way to claim that Apple is just doing whats in the customer's best security interest.
This isn't what I claimed. I claimed Apple treated unauthorized 3rd party access to their infrastructure as a security risk and worked to shore up that risk. As you pointed out, there have been plenty of zero-click exploits in iMessage. Limiting the devices sending iMessages increases security. I believe Apple doesn't allow iOS VMs in general for the same reason.
The attack vector is still smaller if Apple restricts iMessage to official devices only compared to any rooted Android phone being able to spam iMessage payloads.
They gotta, gotta, have those blue bubbles. Some teenagers fight to get an overpriced phone solely to avoid the deep deep shame of having a green bubble when chatting.
If Apple is forced to stop making iMessage the exclusive option and has to ship some pure SMS application, they might see a sudden noticeable drop in market share.
Absolutely - but the business interest of wanting to keep teenagers on iPhones absolutely would impede Apple from allowing users to uninstall the application.
Blue bubbles bad syndrome. Gotta bring it up whenever humanly possible.
Nvidia has a very similar "green man bad" syndrome going on too. As the length of an HN discussion on Nvidia increases, the probability of someone mentioning that Linus said “fuck you nvidia” approaches 1, even though it’s irrelevant to the topic, or that he's a mercurial asshole who's said a whole lot of things.
The casual fanboyism disrupts all discourse on these topics because there’s a large minority of users who have adopted what PG describes as “hater-ism” and allowed it to dominate their thinking on a topic. Negative parasocial attachment is the same process as positive parasocial attachment and just as problematic, but largely never called out.
In short: lotta fanboys on these topics who don't even realize they're fanboys/adopting fanboy frames, because they don't realize that anti-fanboys are still parasocially attached too. And we've casually accepted the low level of discourse on these topics, and it pollutes the whole discussion of a lot of interesting topics because of who's doing them.
Can you explain how disliking Nvidia because it was systematically problematic at some point (and maybe still is) amounts to fanboyism or parasocial attachment?
No they didn't, because the green came first, in 2007, when the iPhone only supported SMS. iMessage launched four years later. The conversation probably went like:
"Okay well, now that we're launching an alternative to SMS, how will we distinguish iMessage messages from regular SMS messages?"
"Hm, well, SMS messages are green, so what if we picked another color?"
"Yeah okay, blue? ¯\_(ツ)_/¯"
"Sounds good, mock it up and send it to the engineers"
edit: The reason for picking green originally was probably because all the "communication"-related apps had a green color scheme, including Messages. This persists today — the app icons for Phone, Messages, and FaceTime are all green.
... And they've already announced[1] that they will be retaining the exclusive blue bubble for iMessage messages for... reasons? The green/blue bubble distinction will continue even when there is no technical difference between messages.
People use “green bubbles” to just mean “no guaranteed delivery or delivery receipts, no read receipts, very low quality image and videos, bad support for reactions, threaded replies, and group chats”.
…the color isn’t the problem. It’s shorthand for the real underlying issues
The color is a big part of the problem: white on green is one of the hardest combinations to read because of the distribution of color cone cells in our retinas. Only white on yellow, maybe, would be worse.
Yep, why would they drop it? It’s especially egregious as Apple disregards its own human interface guidelines to make green bubbles excessively low-contrast. Very intentional.
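For what it's worth, the contrast complaint is easy to put a number on with the WCAG contrast-ratio formula. A quick sketch in C, assuming a #34C759-style green (the exact bubble color is my assumption and varies by iOS version):

    /* Computes the WCAG contrast ratio for white text on an assumed
     * iOS-style green. Build with -lm. */
    #include <math.h>
    #include <stdio.h>

    static double channel(double c) {            /* sRGB value (0-255) -> linear */
        c /= 255.0;
        return c <= 0.03928 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
    }

    static double luminance(double r, double g, double b) {
        return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
    }

    int main(void) {
        double white = luminance(255, 255, 255);
        double green = luminance(0x34, 0xC7, 0x59);   /* assumed bubble color */
        double ratio = (fmax(white, green) + 0.05) / (fmin(white, green) + 0.05);
        /* WCAG AA asks for at least 4.5:1 for normal text; white on this
         * green lands well below that. */
        printf("contrast ratio: %.2f:1\n", ratio);
        return 0;
    }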
I bought the very first iPhone the day after its release, long before iMessage was introduced; at the time it only supported SMS. People forget, but those bubbles, the original SMS ones, were green. Blue bubbles showed up only when iMessage debuted four years later.
Asking who had the motive to target Russian government officials, knowledge of the attack vectors, a history of doing so, and the technical and logistical ability to perform the attack leads Kaspersky and me to the only rational conclusion: that Apple cooperated with the NSA on this exploit. I assume they only use, and potentially burn, these valuable methods in rare and perhaps desperate instances. I expect the Russian and Chinese governments' ban on the use of iPhones will not be lifted and will expand to other governments. Just as the sanctions have backfired, this tactic will also backfire by reducing trust in Apple, which is the core of their value proposition.
This looks like a typical modern security hole. There’s a giant stack of layers of unnecessary complexity, and all of them are garbage. The composition is also garbage.
All the NSA needs to launch attacks like this is to get a bunch of mediocre engineers to layer complexity atop complexity. They don’t need Apple to know about the attack.
Honestly, they probably didn't actually have to do anything to get Apple (or any other large company) to pwn itself, by hiring and promoting engineers and project managers for adding features but not for improving product stability or software correctness, or for deleting forgotten legacy cruft.
Anyway, the most effective approach to sabotage is to be indistinguishable from incompetence, so it’s hard to say if the people responsible for the vulnerability chain were working with the NSA or not.
You make a good point that a team of mediocre engineers could be responsible for the vulnerabilities. Those doing code review and change control would also need to be mediocre. It could be a combination of compromised and mediocre people coordinated by a manager who is in service of the apparatus. To keep it quiet, knowledge of the operation would preferably not go all the way up the ranks.
It is true that Kaspersky, by policy, does not make attributions without concrete proof. It is the responsibility of intelligence agencies to make the call based on a preponderance of evidence. The video linked above narrows suspicion down to very few options. The attacker left a list of Apple IDs in one place in the code to check against. Kaspersky provided them to Apple, and Apple did not respond with any details about the users of those Apple IDs. One of the main vulnerabilities has been available for over ten years.
Yeah, sure. Just a Russian IT company owned by a Russian billionaire who stayed in Russia after 2022 and kept his business in Russia after 2022. Sorry, but everyone who is big enough and not working for the Kremlin has already left the country.
> leads Kaspersky [..] to the [..] rational conclusion: that Apple cooperated with the NSA on this exploit
Doesn't the article state precisely the opposite? That while the FSB accuses Apple of cooperation, Kaspersky does not have any reason to believe so, especially since the campaign does not look like the work of any known state actor.
You are talking about an unsustainable war economy that is overheating. Soaring inflation, brain drain and a falling ruble are only just the short term phenomena.
If you would truly believe what you say, you should convert all your savings from dollar to rubles. No serious economist would think that doing so would be a masterstroke though.
The dollar as the reserve currency already has a serious impact on the US (i.e. the big upside is that it allows the US to borrow very cheaply, but the nasty downside is that it keeps the purchasing power of the USD artificially high, which is not great for the non-finance sectors of the US, not great for people who work in those sectors, and double-plus-not-great for US exports [which are not the dollar itself]); basically it's the "natural resource curse" again.
A weak dollar is good if you own a company that relies on exports. For the rest of us who are paid in dollars and need to buy imports, a weaker dollar hurts.
That is one opinion. We can already see China and Japan selling off their US bonds and the BRICS countries are working on solutions to get off the dollar with high priority.
They were long reads, but thank you. They generally cover history and speculation on BRICS, but we will need to see how it works out. I have seen their meetings and open statements about intent to diversify away from the dollar for trade as a high priority. The articles don't really explain what happens if/when they do figure out international payment systems that avoid dollars. Think of this: You have a trillion dollars you printed floating around the planet. It didn't cost you much to print them, but you did get goods and services for them. If that trillion is halved to $500B, what happens?
Okay, so it's an especially hard topic, because the soundbites seem simple and dangerous (dedollarization, the end of dollar hegemony, BRICS will move off the dollar, the first signs of the beginning of the inevitable and long-predicted, extremely overdue fall of the West, etc.), but the prosaic, technicality-dense details are simply long and turn out to be extremely anticlimactic.
Payment systems that avoid the dollar are already here. (From the extremely simple blockchain-ish digital-synthetic currencies like Ripple XRP to China's classic SWIFT-like CIPS[1].) And these are already in use. After Russia got thrown out of SWIFT, it is now basically using CIPS. (Of course Russia also has its own version, SPFS. "Coincidentally," its development started in 2014.)
However, on the CIPS Wikipedia page you can notice that the important things needed for actual settlement are a boring list of details about each member institution (account numbers, settlement procedure description, credit rating). And each member institution has its own rules about which transfers to accept. And of course in hard times trust is in low supply, transfers start to get manually reviewed, tolerances start to decrease, everyone starts to hoard good money, and thus only bad money remains.
All in all, the important thing is that there's no magic system that can handle payments without the usual institutional-societal framework. (Well, of course there are blockchainish things. For example Visa is doing something on Solana. And Solana is pretty fast and cheap. And a horror show to develop smart contracts on, but that's not really relevant now, and not important for Visa or Russia/China/banks, because they don't care much about the ethos of decentralization, they just want to have something quasi-trustless, fast, and cheap.)
> If that trillion is halved to $500B, what happens?
It depends, but, well ... nothing really. Most money is already at rest. It represents exactly the stuff you got for it. It represents all the wealth created. It was printed to keep inflation around 1-2%. If it disappears in some computer system people will start to scratch their heads, but the ratios will remain mostly the same, so purchasing power and wages/salaries will not change.
That said, if some bank decides to flood the market with cheap US Treasury bonds, nothing much happens. Those are already denominated in USD, and the bank loses a lot on the transaction. And a lot of these reserves are in bonds.
Okay, what happens if that bank asks not for USD for the cheap bonds but, let's say, rubles? Okay, they end up with a shitton of rubles. The exchange rate shifted, but nothing actually happened; sure, the ratio of the flow of goods and services will adjust, as the overpriced rubles are exchanged for a bit more goods and services than they would have been without this huge transaction. But what does this lead to? More exports from the typical exporters. It's not particularly good for anything that's hard to scale up; it will just result in price inflation. And then eventually the exchange rate will go back to reflecting the actual flow of goods and services.
Thank you for the thought out explanation to a complex topic important for us to understand.
You recognize impact to exchange rates when one currency is in more demand than another. If this causes an increase in exports sold in the devalued currency, rates could eventually stabilize, but that depends on many things. Those exports could be gold in our treasury, US land, and factories. Those assets that are fixed in place are only valuable to a foreigner if they can be assured they will not be confiscated.
I know this is simplistic, but let's walk through this flow. I print up a $100 bill and give it to China for a washing machine. China will take it because they need it to buy a barrel of oil from Saudi Arabia. That $100 floats around in the world perhaps never returning as long as others accept and use it. If Saudis start accepting Yuan in payment for oil, China does not need that $100 bill as much. They start reducing their dollar reserves and US bonds. They use those dollars to buy the gold, US land, and factories. The US then has a lot of dollars but not as many assets. The dollars become less valuable because we have so many of them but not as much demand for them. If they are afraid of US sanctions, China will be less inclined to buy assets that could be seized and so the currency is less useful to them.
Admittedly there are many other factors. We will need to see how it plays out.
So let's do it realistically. You take out a loan for 100 USD; it gets printed by some bank, let's say Chase. You buy a washing machine from China, and they put the dollars in their central bank. (And they don't buy bonds, let's assume.)
What if they want to buy oil, which happens to be sold by the Saudis, who want 100 USD for it. They can do it, or they can print more yuan, and use that to buy more USD on some exchange. (And they did it a lot, to keep the yuan artificially low. That's basically half of how they ended up with this huge reserve.)
And since their inflation has been around 2 percent since 2010, and is currently even negative, unfortunately, they can print a lot.
And this is how economic development and exchange rates connect. One man's trade deficit is another's reserve basically. As long as there's some slack in economies (mostly some unemployment metric is used as a proxy for this) it makes sense to spend. (Otherwise it'll just push up prices more, ie. lead to inflation. Hence the very technical sounding name of NAIRU, Non-Accelerating Inflation Rate of Unemployment, I think hands down the worst name for any concept over many fields.)
If China starts using its reserves to buy US assets, that leads to a lot of USD getting back into the US economy, it's like a stimulus. It would push up prices of course, the Fed would increase interest rates, maybe it would even start fiddling with some other knobs (it could increase the fractional reserve ratio, it could increase interest paid on reserves, or interest paid on excess reserves).
> If they are afraid of US sanctions, China will be less inclined to buy assets that could be seized and so the currency is less useful to them.
Yes, and one read of the belt and road initiative is basically this, instead of giving it to citizens to spend, they tried to use it for geopolitical/colonialist/mercantilist projects.
All in all, my understanding is that using their huge reserve to cause some crisis would be a zero-trick pony (because after 2008 and the recent bank crisis, and the Russian sanctions implementations the West seems capable of handling speedbumps), and a slow decoupling would be good anyway. (As it would help the non-finance sectors of the US.)
That’s only “rational” for Kaspersky because in their world they can’t function without having actual intelligence operatives on staff. I seriously doubt the NSA needed help here.
My adjacent conspiracy theory is that the NSA and other state agencies do both original research and pay hackers for exploits that Apple hasn’t yet discovered.
The Darknet Diaries episode "Zero Day Brokers" goes into this. Apparently Argentina hosts a lot of outsourced exploit development. Here's the transcript: https://darknetdiaries.com/transcript/98/
Like how the NRO used to design and launch satellites that cost more than aircraft carriers but are now working closely with private companies like Maxar to find more economical solutions.
Thing is, the fundamental laws of physics give us a good idea of the capabilities of the NRO given a certain launch platform. Like how, when scientists in the late 70s were figuring out the best telescope they could launch, they ended up with almost the exact specs of the Keyhole spy satellites, which share much of their design heritage with Hubble.
Here's my serious answer that still works if you hate Apple.
Your question assumes two things: (1) That Apple intentionally leaves vulnerabilities in the stack, and (2) that Tim Apple is occasionally willing to share this candy with governments.
Having worked at Apple, I don't believe (1) can be true. Not only is it extremely unlikely that it could be kept a secret, but Apple's thing is "obsessive control", a mindset borne of organizational PTSD which originated with its near-death experience in the mid-to-late 90s. The Apple I know would not risk intentionally leaving back doors unlocked for enemies to find and leverage.
As for (2), the existence of a "Binder of Vulns" by nation-states would expose Apple to existential risk. It's possible that it could be kept secret within Apple's walls if it were never used, but once shared with a government it could not be contained. The splash damage of such a discovery could easily kill Apple.
I am assuming or knowing that the national security apparatus can both coerce and incentivize companies and individuals to give it what it wants. Their power is great and relatively unchecked to do both. Coercion tactics include releasing compromising information on a company, person or family member and more directly injuring person or company. Incentives include favorable regulation, taxation, and deals with other companies they control.
Knowledge of a binder of vulnerabilities is perhaps one of the greatest secrets that must be protected. Wikileaks releasing the Vault 7 leak was the death knell for Julian Assange. It proved, in great detail, that such a binder exists.
I don't hate Apple, but assuming they can't be reached, seems naïve.
This happened at a company I worked at so it’s not out of the question. I figured it out by reverse engineering and quit on the spot. They tried to tell me I’d never work again if spying on users was a dealbreaker. They showed me a natsec slide deck that identified other collaborating companies as a way of making their point. Among them was Apple.
You are telling me that natsec people give every rando the full list of participants in the conspiracy? That just doesn't make sense for any (semi)competent security agency to disclose.
It can be defended against. The detail is that the only way to harden those defenses is to toss it out in the world and let folks poke holes in it.
This was an extremely complex exploit. It was complex because of all of the defenses put in place by Apple and others. It required State level resources to pull it off.
We also don't know what, if any, external skullduggery was involved in the exploit. Did someone penetrate Apple/ARM and get internal documentation? Compromise an employee? Did Apple/ARM participate? Maybe they just dissolved a CPU cover, and reverse engineered it.
But, that cat is not out of the bag, and it's been patched.
Progress.
As many folks say, when it comes to dealing with security, consider the threat model. Being under the lens of an advanced state is different from keeping your younger brother out of your WoW account.
This exploit wasn't done by a bunch of scammers selling "PC Support". That's the good news.
When stuff like this happens, I always go back to Stuxnet, where not only did they breach an air gap, they went in and did a sneak and peek into some other company to get the private signing keys so that their corrupted payload was trusted. There's a difference between an intelligence operation and a "hack".
Making stuff like this very expensive is part of the defensive posture of the platform.
An official visits the headquarters, and informs that certain employees need to be hired at certain departments “to help with national security”. End of story.
What even makes people think that executives whose job is to deal with everyone in order to “do business” are their long distance friends, or some kind of punks who'd jump on the table and flip birdies into faces of people making such an offer?
>It was complex because of all of the defenses put in place by Apple and others.
I don't know jack about hardware but it would seem obvious that when one designs a chip, one makes sure it does not have 'unknown hardware registers' or unknown anything when it comes back from the manufacturer.
This makes everything written on this page worthless...
> I don't know jack about hardware but it would seem obvious that when one designs a chip, one makes sure it does not have 'unknown hardware registers' or unknown anything when it comes back from the manufacturer.
Well, you are in trouble then. All modern hardware has such hidden parts, and they are most of the time referred to as "undocumented" rather than "unknown". I know this seems pedantic, but to the public eye, anything undocumented is unknown. What makes these registers special, however, is that they are not used at all by public software, and are thus truly unknown: one can only guess at their use or even their mere existence.
> I don't know jack about hardware but it would seem obvious that when one designs a chip, one makes sure it does not have 'unknown hardware registers' or unknown anything when it comes back from the manufacturer.
Either Apple or Arm has employees that know what these registers do. They are likely used for debugging and/or testing.
A lot of those registers can do very interesting things, since e.g. fault-injection is an important part of testing. A security-minded implementation will allow these to either be fused off or disabled very early in the boot process. The latter is probably more common, and any disconnect between the hardware and software side can cause this step to get missed.
However this one seems to have been coordinated with Apple. A nonprofit nonaligned independently managed project could be more immune to pressures of the national security apparatus. I think it is incredibly naïve to think that the largest US corporation does not cooperate. This is why I keep donating to GrapheneOS.
Reading between the lines of TFA, it seems the researchers may also suspect that to be the case:
> Our guess is that this unknown hardware feature was most likely intended to be used for debugging or testing purposes by Apple engineers or the factory, or that it was included by mistake. Because this feature is not used by the firmware, we have no idea how attackers would know how to use it.
However, keep in mind that this level of "bugdooring" is possible without Apple's explicit cooperation. In fact, the attackers don't even need to force a bug into the code. It would probably be sufficient to have someone on staff who is familiar with the Apple hardware development process (and therefore knows about the availability of these tools), or to simply get a copy of the firmware's source code. Sophisticated attackers likely have moles embedded within Apple. But they don't even need that here; they could just hire an ex-Apple employee and get all the intel they need.
This is the same conspiracy mindset of flat earthers, and you deserve your own netflix mockumentary over it.
Because a bug is a bug, its very nature means you cannot prove it isn't malicious; therefore you take it as positive proof of malice and sit pretty, because no one can prove a negative.
We had backdoors, then PRISM was revealed. We have bugdoors now. No reason to think the three-letter glowies would like to give up any amount of control. They have the 'power' to straight up lie to Congress under oath; see Clapper.
I always get weird consultants reaching out to me on LinkedIn asking for deets on my org's layout and - curiously - our tech stack. They offer something like $500+ an hour but I don't want to be complicit in some compromise. Private intelligence is such a fascinating industry.
Since they've gone to the trouble of protecting it with an insecure hash, couldn't they also have designed this hardware feature so that it could be completely disabled until the device is rebooted? This vulnerability doesn't persist through reboots, so it would be sufficient to have the firmware lock the feature out during startup outside of development or manufacturing contexts.
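What that could look like, as a purely illustrative C sketch; the register address, bit layout, and build flag are invented for the example, not Apple's:

    /* Illustrative only: a hypothetical write-once "lock" register that early
     * boot firmware could set to disable a debug DMA feature until the next
     * reset. Addresses and bit layout are made up for the sketch. */
    #include <stdint.h>

    #define DEBUG_DMA_CTRL   ((volatile uint32_t *)0xF0000000u)  /* hypothetical MMIO register */
    #define DEBUG_DMA_LOCK   (1u << 31)                          /* sticky until reset, in this sketch */

    static void lock_debug_dma(void) {
        /* Once set, the hardware would ignore further writes to the feature's
         * registers until a full reset, so a later kernel compromise cannot
         * re-enable it. */
        *DEBUG_DMA_CTRL |= DEBUG_DMA_LOCK;
    }

    void early_boot_init(void) {
    #if !defined(FACTORY_TEST_BUILD)
        lock_debug_dma();   /* leave unlocked only in manufacturing/test images */
    #endif
    }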
I just don't get this mentality. Here is proof positive (if you believe the attribution) that the NSA is using exquisite and exotic techniques to force their way into iPhones, and you look at it and come up with the exact opposite conclusion, that Apple is letting them into the iPhone. It's not a backdoor if you're smashing in the window.
Based on past history, it would be more surprising if Apple wasn't actively cooperating with the NSA, that was the case with PRISM (wiki):
> "The documents identified several technology companies as participants in the PRISM program, including Microsoft in 2007, Yahoo! in 2008, Google in 2009, Facebook in 2009, Paltalk in 2009, YouTube in 2010, AOL in 2011, Skype in 2011 and Apple in 2012. The speaker's notes in the briefing document reviewed by The Washington Post indicated that '98 percent of PRISM production is based on Yahoo, Google, and Microsoft'"
With the rise of end-to-end encryption in the wake of the Snowden revelations, this put large tech corporations in a bind, given the conflict between consumer desire for secure, snoop-proof devices and government desire for backdoor access. Pressure might have been applied via government contracting decisions, so no cooperation == no big government contract. The general rise of end-to-end encryption also meant that things like deep packet inspection along the trunk no longer worked, putting a premium on breaking into devices to install keyloggers etc.
All the fear of China doing this with Huawei (probably well-justified fear) may have arisen in part as projection by politicians and insiders who knew the US government was doing it already with Apple, Android, Intel, ARM, etc. The US government has certainly retained legalistic justification for such behavior, even though the Act expired in 2020[1]. Also, corporations have been given retroactive immunity for similar illegal activities before [2], so Apple has that precedent to go by.
There's a fundamental category error at play here: exploit chains like this one and the one behind FORCEDENTRY[1] cost millions, if not tens of millions, of dollars to discover and weaponize, even before operationalization.
The people finding and building these chains are doing so as part of nation-state intelligence operations; they go well beyond what any reasonable civilian threat model contains.
Put another way: if someone in a competent nation state's IC decides that you're worth $10+ million to compromise, they are going to get you. This is true whether you have an Android, an iPhone, or a Tamagotchi. The only thing that sets Apple apart here is that they've historically beaten Google to the punch on mitigations for these kinds of exploits. But from a threat modeling perspective, this attack is not comparable to the kind that most people have to deal with. Treating it as indicative of an overall security differentiator will not help you make ordinary security decisions, because anybody who gets this kind of attention will be Mossad'ed upon[2].
Sure they do, and yet at the bottom of them we keep finding... iMessage. Which is like a funnel that takes untrusted external input and feeds it into various ancient, unmaintained native code blobs that were thrown into iOS for the sake of time to market. This time it's a 90s Apple extension to TrueType in a 90s Apple library that presumably no font on an iPhone actually uses; last time it was a 90s fax-machine image compression algorithm in a never-updated open source library. The full exploit cost many, many millions, but at the bottom there are entirely self-inflicted basic failures.
It would be so great if someone at Apple could get the buy-in to clean out this zoo but try explaining that to a product manager at these places.
> It would be so great if someone at Apple could get the buy-in to clean out this zoo but try explaining that to a product manager at these places.
It’s happening! Admittedly it’s happening slowly, but it is happening. PostScript support recently got stripped out of macOS and iOS explicitly because the security risk was too great, and the effort to make the parsers and renderers safe was greater than any residual benefit from the PostScript format.
It also looks like the “fix” for the TrueType exploit was to simply strip out the ancient extension, because it’s not used anymore. As for why that didn’t happen before now, it's probably just because nobody knew it still existed.
It may cost a million, but it doesn't follow that every use(r) costs the same (could even also call this a category error).
Neither is "going to get you" a given, maybe another agency is in charge of the alternative methods of getting you, and they have different priorities that doesn't include your target (or alternative ways are much more expensive or too slow to be worth it)
The point is that it's incorrect to think of the US (or any other country's) IC as a force of nature, blasting out 0days to random civilians just for kicks. These things are expensive, very expensive, and are carefully orchestrated. They don't look anything like the average civilian's security breach, which is somewhere between "accidentally leaked their own password" and "TSA asks you to unlock your phone."
Nor is it correct to believe the myth of careful orchestration, with only rational institutional goals and deep care about other people's money (though again, developing and using exploits are different "these things").
Your “threat model analysis” takes for granted that a “civilian” is a billion times less important than a “nation-state”. It makes no sense to waste any time analyzing anything after such a conclusion. Therefore, something is wrong here.
I think you've misunderstood. The point was that there are (to simplify) two different threat models at play here: one where your most powerful adversary is somewhere between your family and domestic law enforcement, and another where you are worth $10+ million to a nation state.
99.99% of the world lives in threat model 1; our goal as security minded people is to protect these people. These people want general purpose networked computers in their pockets.
0.01% of the world lives in threat model 2; our goal is also to protect these people. But these people don't get protected while also having general purpose networked computers in their pockets.
Both groups are civilians, and both deserve security. But they also have different demands; if Apple forced Lockdown Mode's usability restrictions onto a billion people tomorrow, a large percentage of them would switch to materially less secure hardware and software vendors.
From the inside of the status quo, those threat models, the well-informed reasoning, the descriptions of hierarchy, the talk of what “should” and “should not” be possible, and of “millions will react like this, millions will react like that,” are valid and respected. From the outside, there's a tiny bit of a problem: an ordinary human has no value apart from that of a cog in the machine, there is really nothing to protect, an empty place, so all those powerful words crumble like a house of cards in situations like these.
What I'm hinting at is that that assumption is wrong, it's a dead end from the start. Just like tribes sitting around the bonfire, and discussing legendary totem animals and gods making their life reasonable and orderly, people today are charmed by the images of events “above”. It's just a belief in the workings of “modern world”. One needs a better base to live one's own life, not an axiom that one, for all intents and purposes, doesn't matter.
Challenging this axiom is why when I think of the threat model I face traveling from place to place, I come to the inevitable conclusion that I am very likely a target of assassination attempts.
Consequently, I ensure that my motorcade of armed guards are always with me, and accept cars loaded with only the toughest reinforced glass. I am, of course, paranoid about physical proximity to strangers and that mostly incentivizes me to avoid public travel, particularly flights (though even my private jets must be carefully maintained only by my most trusted insiders lest they find themselves tumbling out the sky near Tver). When flying on particularly important trips I send a decoy plane, and I make sure that air space near me is well controlled.
Or maybe, just maybe, I don’t actually face the same threat profile as presidents, spy agency bigwigs, and leaders of paramilitary organizations, and pretending I do is an act of high fantasy that far from empowering my true human soul would be so ridiculous as to make life unlivable!
Why is it that everyone balks at including these shadowy government agencies in threat models? It feels like people just don't want the heat. Would people just give up if it was some corrupt narcostate instead?
They've proven numerous times they couldn't care less about the rights of their own citizens. The US agencies in particular can't even muster any respect for their own allies. I don't even want to imagine what they feel justified in doing to foreigners. They're basically a threat to everyone on earth at this point and we all need the ability to defend against people like them.
So it costs millions to compromise someone? We need to find ways to make it cost billions then. Then we make it cost trillions. They should have to commit crimes against humanity in order to get anyone at all.
Nobody's balking at it. Apple and Google both dedicate significant engineering efforts towards making these kinds of exploit chains even more expensive and unreliable. See for example Lockdown Mode in iOS 16.
The point is this: good security means being able to intelligibly state your threat model and respond to its specific capabilities. Failing to do this results in all kinds of muddied thinking, making it harder to defend against more quotidian adversaries. If your threat model genuinely involves the US IC, then turning on Lockdown Mode is about the best you can do short of throwing your phone in the ocean. By all appearances, that would have prevented this chain.
Not that I know of. There are other hardware-ish exploits (like checkm8), but I think most have been purely software.
(Hopefully what I said wasn't interpreted as a value judgement about hardware security specifically -- the only point I was trying to make is that ICs spend significant resources discovering exploits on all of these platforms.)
> “Due to the closed nature of the iOS ecosystem, the discovery process was both challenging and time-consuming, requiring a comprehensive understanding of both hardware and software architectures... "
-Kaspersky researcher Boris Larin
supports your point but it's not an easy argument to win either way.
It's "everyone can see it so the good guys will find it first" vs "bad guys have harder time discovering vulns but once they do they have gold"
For starters, it took them four years to detect an intrusion that takes advantage of a messaging service (iMessage) that wasn't the corporate choice for internal communications, and allowed the exfiltration of gigabytes of data from the C-suite and high level managers.
This is Kaspersky. They had to be aware of the long history of iMessage security bugs. They could have disabled it. They chose to ignore it instead.
This is an exploit from an adversary that reversed undocumented MMIO registers to completely bypass kernel write protections. Do you honestly believe that disabling iMessage would keep them out? They’d just field the exploit using Exchange or HomeKit or any of the dozen other ways there are to get content onto the devices.
> Do you honestly believe that disabling iMessage would keep them out?
Maybe, maybe not. But it is still a testament to their poor security posture that iMessage was enabled on corporate devices when it is not part of their software catalog.
Kaspersky has tried to publicly switch blame to Apple twice in a row now. Maybe it is time for them to take some.
What they mean is there's evidence in the captured binaries and from other victims that this campaign has been running for at least four years, not that they were compromised for four years. It actually sounds like they detected their own compromise immediately.
Imho, at the end of the day, open source vs. closed doesn’t matter for the number/severity of security issues and ends up just being ideological posturing. The bugs exist for a variety of other reasons and tend to have the same root causes attached.
OSS has other considerations though around security. Flaws may be easier to identify and either exploit or fix. Flaw fixing is trickier though because you need to do it in such a way as to not advertise it to the world either before it’s sufficiently deployed.
It has never been said it's your security. It's their security, of their data, on their devices, against their threats and competitors/partners. The user is just an unprivileged data input daemon digitizing “unique personal experiences”, or some other corporate language term.
It's easy to laugh at Juicero users, it's harder to notice the bigger elephant in the room.
Yeah people keep talking about reverse engineering but it’s just as real a possibility that this was simply engineered to be there. Apple and the government made a big public show about the San Bernardino iPhone situation[1] but that could have easily been a cover to convince people the government can’t get in to iPhones - because eventually the government dropped the court case, got in anyway, and the whole thing was quickly forgotten.
We can imagine that the government either has ideological capture of apple - that the management of apple agree to install hard to exploit vulnerabilities tailored for US government use - or legal capture through FISA rulings.
I’d be curious if anyone can summarize the latest understanding of FISA court actions in this realm.
"the government" isn't really a single entity. domestic LE and foreign intelligence have different laws and processes enforced by the constitution (thankfully). Its certainly reasonable that domestic LE really can't force Apple to handover US citizens data, while foreign intelligence services can effect supply chain attacks, back-dooring and other methods not permitted for US citizens..
The problem with conspiracies is everyone involved knows it’s a secret. If you’re the CIA, it’s much less risky to compromise a chip design engineer than to have everyone from the CEO down at Apple in on the plan.
Maybe but then again what’s another secret when at a high level these firms are already very secretive.
It’s not Apple, but I think a lot about how Eric Schmidt of Google was directly meeting with US military officials and talking about how important US defense was.
You can end up with a situation where the chip designer and some higher up both know what is happening and the higher up is there as a check to provide cover in case the chip designer is caught up in suspicion. (“No we asked for this for the manufacturing team.” Kind of thing.)
Of course this is all conjecture with no evidence and I understand why we don’t want to spend much energy on discussions we can’t confirm, but at the same time it is frustrating when the default assumption is that apple had no knowledge about this. The truth is that we don’t know and likely will never know.
I do wonder if the people earning millions of dollars a year think the same way however. Considering how compartmentalized Apple is it would not take many people to be in on this.
Yes, the famously compartmentalized Apple, whose roadmap regularly leaks out to the press years (cars, AirTags, Vision Pro) and months in advance (leaks about every iPhone), absolutely a great target for subterfuge.
> several of the MMIO addresses the attackers used to bypass the memory protections weren’t identified in any device tree documentation, which acts as a reference for engineers creating hardware or software for iPhones. Even after the researchers further scoured source codes, kernel images, and firmware, they were still unable to find any mention of the MMIO addresses
I'd disagree with this. Apple execs surely know that if this information got leaked they'd lose 30% of their market cap in a single day; why would they risk something like that when administrations change every 4-8 years?
I think the charitable explanation here is that this was an undocumented debugging interface. Apple knew about it and did not disclose it in any publicly available material. The NSA almost certainly has access to Apple's source code and documentation. Just look at the Snowden leaks, when it was disclosed that the NSA was MITMing Google's DC-to-DC links. They already knew Google wasn't encrypting those links before they surreptitiously dug up the fiber, and they already knew enough about the system architecture to make sense of that firehose of data. Clearly, either through NSLs or by bribing some insiders, they had already exfiltrated a bunch of internal documentation and source code. Why would Apple be any different?
I wouldn't expect them to have HSM keys or anything but a mirror of their VCS? Yeah the NSA probably has that.
Power is more important than profit. Those running the national security apparatus have been in power for 60 years. The fact that they still haven't released the documents on the JFK assassination evidences that they are still in power.
Let’s say your old boss was embezzling and got away with it. Now you are the boss, and if you go public with it, not only are they already out of power and likely nothing would happen to them, but all the freedom and flexibility you have in the same position is gone, and you or your friends have an island-problem you would rather not get into.
Maybe it’s just better to not rile up the shareholders.
People will call you a crank or a conspiracy theorist, but that is only because they are afraid to think about the answers to those questions themselves. It's easier to pretend it couldn't happen.
Now I am thinking Kaspersky should not have published this information. What a wrong decision. Instead they should have sold it to the Russian government, which I am sure could find a lot of interesting uses for these "debugging features" and offer a good reward.
Kaspersky are already firmly under the full control of the state, selling anything is redundant. The whole video comes off as a massive flex, giving off the same vibes as athletes representing the country at the Olympics.
I was disappointed that nobody in the audience dared to ask the obvious question of how much time passed between disclosing the vulnerability to the state agencies and to Apple. I very much doubt the state didn't seize the opportunity to use the exploit against its enemies first and tactically disclose it later. If anything, the talk demonstrates that if they opted to disclose such a valuable exploit, they could afford to, because they have the capability to discover more and have other exploits that have not yet outlived their usefulness. I bet there is an interesting story behind the talk; hopefully the details will eventually surface.
>The resulting shellcode, in turn, went on to once again exploit CVE-2023-32434 and CVE-2023-38606 to finally achieve the root access required to install the last spyware payload.
Why isn't Apple detecting the spyware/malware payload? If only apps approved by Apple are allowed on an iPhone, detection should be trivial.
And why has no one bothered to ask Apple or ARM about this 'unknown hardware'?
>If we try to describe this feature and how the attackers took advantage of it, it all comes down to this: they are able to write data to a certain physical address while bypassing the hardware-based memory protection by writing the data, destination address, and data hash to unknown hardware registers of the chip unused by the firmware.
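To make the quoted mechanism concrete, here is a purely illustrative C sketch of the shape of such an interface as Kaspersky describes it; the base address, register layout, and hash function are invented stand-ins, since the real ones are exactly what remains undocumented:

    /* Illustrative only: a hidden peripheral that accepts a destination
     * physical address, a data word, and a hash of both; if the hash checks
     * out, it performs the write behind the kernel's back, sidestepping the
     * page-protection hardware. Addresses and hash are made up. */
    #include <stdint.h>

    #define KNOCK_BASE   ((volatile uint64_t *)0x200000000ull)  /* invented base address */
    #define REG_DEST     (KNOCK_BASE + 0)   /* destination physical address */
    #define REG_DATA     (KNOCK_BASE + 1)   /* value to write */
    #define REG_HASH     (KNOCK_BASE + 2)   /* "signature" over dest + data */

    /* Placeholder for the undisclosed sbox-based hash. */
    static uint64_t secret_hash(uint64_t dest, uint64_t data) {
        return dest ^ data;   /* stand-in; the real function uses a custom sbox */
    }

    void dma_write_bypassing_protection(uint64_t dest_pa, uint64_t value) {
        *REG_DEST = dest_pa;
        *REG_DATA = value;
        /* Writing a matching hash is the "knock" that triggers the write. */
        *REG_HASH = secret_hash(dest_pa, value);
    }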
And finally does Lockdown mode mitigate any of this?
I think Lockdown would help here since it doesn’t decode message attachments. So the original link in the chain (decoding a PDF) would be impossible.
As for detecting unauthorized apps, I would imagine that once you’ve taken over control of the OS kernel, it’s game over for such software-based restrictions. The halting problem guarantees such limitations for any software-based restriction, and as long as you can form a Turing-complete mechanism from pieces of the computer, those limitations will apply.
This chain isn’t delivered via an app, it is sent through iMessage. The checks for “only apps approved by Apple” are not relevant if you exploit your way past them.
Thanks. I did see that the researchers posted how the malware gets into memory, but I still feel like, since Apple tightly controls the environment, it should be able to detect anything running there that shouldn't be.
It’s kind of simple imo. Apple is an American company and after Jobs died, Apple quickly signed up to working with the NSA and enrolled in the Prism programme.
Apple, like any other US company, has to abide by the laws and do what they are told. If that means hardware backdoors, software backdoors, or giving the NSA a heads-up about a vulnerability during the time it takes to fix it (to give the NSA time to make good use of it), then they will.
Only someone with great sway (like Jobs) could have resisted something like this without fear of the US Govt coming after him. His successor either didn’t have that passion for privacy or the courage to resist working with the NSA.
Anyone, anywhere, with an iPhone will be vulnerable to the NSA breaking into their phone anytime it pleases, thanks to Apple. And with Apple now making their own silicon, the hardware itself will be even more of a backdoor.
Almost every single staff member at Apple will be none the wiser about this obv and unable to do anything about it even if they did - and their phones will be just as fair game to tap whenever the spies want.
I am speculating. But in my mind, it's really quite obvious. Just like the argument I had with someone who was a die-hard Apple fan and thought Apple would protect privacy at all costs... six months later, Snowden came along, PRISM was revealed, and that won me the argument.