A deep dive into an NSO zero-click iMessage exploit: Remote Code Execution (googleprojectzero.blogspot.com)
1005 points by arkadiyt on Dec 15, 2021 | hide | past | favorite | 341 comments



From the top of the article:

> We want to thank Citizen Lab for sharing a sample of the FORCEDENTRY exploit with us, and Apple’s Security Engineering and Architecture (SEAR) group for collaborating with us on the technical analysis.

This reminded me that NSO went after Citizen Lab on multiple fronts. They even tried to use a spy to talk to JSR (https://www.johnscottrailton.com) and make him say controversial things, which could be later used to malign Citizen Lab. Darknet Diaries covered this incident recently: https://darknetdiaries.com/episode/99/


The transcript is such an intriguing read. You don't expect these things to happen in real life, yet here they are: tie cameras, pen recorders, driving circles around the block and all.


100% with you on this one. My life feels so boring in comparison!


Darknet Diaries is so good. To anyone who hasn't listened: highly recommend. Jack hits a home run each week, and the story about JSR and NSO was buck wild.


Jack is awesome. Btw, I must give credit to the many in the HN community who have recommended this podcast so many times in various threads. On my own I would never have found it.


I wish they would write articles too... I don't have the patience to listen to podcasts :) But I've been told it's really good.


A good podcast app may help. For slower podcasts, instead of speeding them up, which may chipmunk the audio or otherwise cause artifacts, Podcast Addict trims silence. You may miss stuff below the gate threshold, but it is still slick.
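The silence-trimming idea is simple enough to sketch (a hypothetical toy, not Podcast Addict's actual implementation): drop any run of samples that stays below a gate threshold for long enough to count as a pause, but keep short dips so speech isn't clipped.

```python
def trim_silence(samples, threshold=0.05, min_gap=3):
    """Remove runs of >= min_gap consecutive samples below threshold."""
    out, quiet = [], []
    for s in samples:
        if abs(s) < threshold:
            quiet.append(s)          # buffer potential silence
        else:
            if len(quiet) < min_gap: # short dip: not a real pause, keep it
                out.extend(quiet)
            quiet = []
            out.append(s)
    if len(quiet) < min_gap:         # flush a trailing short dip
        out.extend(quiet)
    return out

audio = [0.5, 0.4, 0.0, 0.0, 0.0, 0.0, 0.6, 0.01, 0.7]
print(trim_silence(audio))  # the long pause is dropped, the brief dip survives
```

Real apps do this on PCM frames with a decay envelope rather than raw samples, but the gating logic is the same shape.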

I only listen to 2 podcasts; I don't really like the format and never have. I prefer reading with music or silence, and have always loved weblogs for that. However, it's worth a shot to find a podcast app that "clicks" with your style of input.

With podcasting 2.0, the new features should allow producers that don't want to be part of an ecosystem to still make some income, plus you get streaming album art where supported, and liner notes and all sorts of other goodies.

I think podcastindex has a list of compatible 2.0 podcasting apps, but I'm unable to check.


yeah, it really is refreshing to hear a podcast for the masses that actually gets technical details correct. It's a pet peeve of mine when people oversimplify and get it badly wrong.


He sometimes messes up the small details, but frankly they’re tiny enough that it doesn’t destroy the overall experience


every other week release for the podcast, not weekly*


every other week release for STORIES FROM THE DARK SIDE OF THE INTERNET*


https://9to5mac.com/2021/12/15/pegasus-spyware-maker-nso-run...

hopefully this company is on the way out...


I'd assume they're using the Erik Prince/Constellis business model, taking some time off and getting the band back together under a different name to do the same work.


I think when you are providing such a valuable service, there is almost zero chance they just stop.


Yep. The way I think about it in mainstream terms is, "Jeffrey Epstein's clients didn't just stop wanting what he was providing, someone took his place. Who is that?"


I literally just listened to this episode today. Some crazy stuff.


This is mind boggling. NSO used a compression format's instructions to create logic gates and then from there "a small computer architecture with features such as registers and a full 64-bit adder and comparator which they use to search memory and perform arithmetic operations", all within a single pass of decompression. Combine this with a buffer overflow and you've got your sploit.
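For anyone curious how bitmap operations become logic gates, here's a toy model (purely illustrative, not NSO's actual construction, and glossing over how the exploit encodes operands as JBIG2 segments): treat a 64-bit-wide bitmap row as a register, note that JBIG2's AND/OR/XOR canvas operators are already gates, and compose them into a ripple-carry adder.

```python
MASK = (1 << 64) - 1  # model a 64-bit "canvas" row

# JBIG2-style bitmap combination operators, acting bitwise on whole rows
AND = lambda a, b: a & b
OR  = lambda a, b: a | b
XOR = lambda a, b: a ^ b

def add64(a, b):
    """Ripple-carry 64-bit adder built only from the gate ops above."""
    carry = 0
    for _ in range(64):                   # at most 64 carry-propagation steps
        s = XOR(a, b)                     # sum without carries
        carry = AND(a, b) << 1 & MASK     # carries, shifted into position
        a, b = s, carry
        if b == 0:
            break
    return a & MASK

print(add64(2**63 + 5, 2**63 + 7))  # wraps mod 2**64 like a real register
```

The comparator falls out similarly (subtract and inspect the sign bit); the cleverness in the exploit is doing all this with nothing but decompression directives in a single pass.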


That reads like some handwavy explanation of a hack in a movie scene...

"Now I just have to embed a 64-bit computer architecture into my compression algorithm and... boom. We're in."


"I found a 3rd party library that uses eval, so we just send it code we want to run and...boom. We're in."

"I found a popular chat app that after install leaves a tool with full sudo privileges behind for us to take advantage of, located clickityclickity... here. We're in."

Sometimes, it can be even more pedestrian sounding. Hackers don't always have to be clever if other people are absolutely dumbasses before their arrival.


To be clear, what this exploits is nothing like what you've mentioned.

The article does a very good job of describing the relevant parts of the image format. They built a VM inside an image's single-pass decompression routine. I'd highly recommend reading the article.

This is just one of the exploits in a very large chain.

To quote some of the nation's top security researchers:

> Based on our research and findings, we assess this to be one of the most technically sophisticated exploits we've ever seen, further demonstrating that the capabilities NSO provides rival those previously thought to be accessible to only a handful of nation states.

This bears no relationship to eval().


This is really a piece of artistic work. Crazy out-of-the-box thinking, and I'm jealous lol.


Yeah. Even I know about eval. I'm just happy Google and Apple actually care about security unlike the 2000s companies and can rival the smartest hackers to keep my phone safe!


I'm thinking you're missing the larger idea. The whole point is that while these "geniuses" did something really "impressive" and difficult, there are also thoroughly unimpressive, not-difficult things found in the wild that have caused problems as well.


Why bring that up? It is something everybody knows and it adds nothing to this conversation.


It's called a counterpoint. Several people actually found it interesting, but you're welcome to the opinion that you don't. It did add to the conversation, as there were multiple replies to it. Your comment about it is the thing that doesn't really add anything.


Then you can "Enhance".

https://www.youtube.com/watch?v=Vxq9yj2pVWk

Joking aside, this does illustrate the "magical" properties of technology to the layperson. As a corollary, failure modes end up quite surprising and hard to reason about without a certain proficiency in these technologies.


Enhancing works with trained AI these days.

Maybe not for evidence collection, but for pointing a human being at a lead to follow, sure.


I've seen some examples of this. It's very clearly trained on a white-male dataset.

I've also seen it "enhance" an image of a resistor into a human face.

I don't care how much AI you have, you can't add back data that wasn't in the original image. The best you can hope to do is get a vague approximation, and you must have a very, very good (comprehensive) training dataset for that to be remotely viable.


The premise of the technology is not adding more information to the image, but rather recognizing that the image may have a description far smaller than its file size suggests; rendering it then becomes a matter of applying world-aware encodings. The resolution may appear higher, but it is actually a filtration of the original data. And nothing says that, just because current technology is overfitted to present-day datasets, such a filter (one actually useful for common images, or for enhancement leveraging known or few-shot examples of the same target object) cannot exist.


> It's very clearly trained on a white-male dataset.

TBF the Beatles look amazing in the Peter Jackson documentary, though the original material was shot on 16mm.


There is a world of difference between upscaling something digital and something analog. 16mm film actually does contain more information than could be shown with the original prints, and we have better scanning techniques today that can extract that information.

Upscaling something digital, on the other hand, does require creating information out of thin air.


>Maybe not for evidence collection,

Kyle Rittenhouse was possibly almost convicted due to "enhance with AI".


Bring it up with the appeals court in the event it occurs, unless you run out of money. Don't run out of money.


Well, that and the explanation is missing the details. Conceptually being able to construct something like that from XOR and NOT primitives is stuff from undergrad computer engineering curriculum. But it's certainly a respectable feat to find this combination of compression format and the vulnerability therein of all the supported formats, and think to apply it like this.


It reminds me of the Nand to Tetris course: https://www.nand2tetris.org
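In that nand2tetris spirit, the whole tower bottoms out in a single primitive. A classroom sketch (not the exploit's construction), deriving every gate and a full adder from NAND alone:

```python
def NAND(a, b): return 1 - (a & b)      # the single primitive
def NOT(a):     return NAND(a, a)
def AND(a, b):  return NOT(NAND(a, b))
def OR(a, b):   return NAND(NOT(a), NOT(b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))

def full_adder(a, b, cin):
    """One-bit full adder: returns (sum, carry_out), NAND all the way down."""
    s1 = XOR(a, b)
    return XOR(s1, cin), OR(AND(a, b), AND(s1, cin))

# exhaustive check of the truth table
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, cout = full_adder(a, b, c)
            assert 2 * cout + s == a + b + c
print("full adder checks out")
```

Chaining 64 of these gives the 64-bit adder the article describes; the course walks the same road all the way up to a CPU.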


It's amazing how they took a buffer overflow and ran with it to create a whole Turing-complete machine. It's mind-boggling how complex these exploits can be; no wonder they sell for millions.


It also demonstrates how much more work there is after “buffer overflow” until you get to RCE.


Now - that is a big change.

Historically the jump from overflow to RCE was much much shorter.

Still, the iMessage attack surface is just massive, and running in an unsafe language kind of crazy?


> running in an unsafe language kind of crazy?

It sounds like their first step in remediation was to move the GIF copy operation into the BlastDoor sandbox, which is written in Swift.


> Historically the jump from overflow to RCE was much much shorter.

Not really. I'm about to read the article, but it sounds like return-oriented programming[1]: chaining "gadgets", small bits of existing code that you re-purpose into executing arbitrary code by manipulating the stack. It's an extremely common exploitation technique, even if not trivial. Who said an exploit or RCE was ever trivial?

Edit: I was a bit quick to dismiss. The technique is certainly interesting, although the article doesn't go into the details of how the control flow is handled and where that register is stored. However, I'd like to point out that ROP is quite complex on its own, as it's kind of like using a computer with an arbitrary instruction set that you have to combine to create higher-level functions, hence my original confusion.

[1] https://en.wikipedia.org/wiki/Return-oriented_programming


I think what he means by historically is before ASLR, DEP, and other mitigations, when a buffer overflow meant you could simply overwrite the return address on the stack, jump to the stack, and run any shellcode. Mitigations have made exploitation much, much more complex nowadays. See for example https://github.com/stong/how-to-exploit-a-double-free
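To make the "chaining gadgets" idea concrete, here's a toy model in Python (purely illustrative; real ROP operates on machine addresses and the hardware stack): the attacker injects no code at all, only a list of addresses of pre-existing snippets, and each snippet's "return" hands control to the next attacker-chosen address.

```python
def make_machine():
    regs = {"r0": 0, "r1": 0}
    gadgets = {  # pre-existing code snippets ("gadgets") at fixed addresses
        0x1000: lambda: regs.__setitem__("r0", regs["r0"] + 1),  # inc r0; ret
        0x2000: lambda: regs.__setitem__("r1", regs["r0"] * 2),  # r1 = r0*2; ret
    }
    def run(stack):
        while stack:
            gadgets[stack.pop(0)]()  # "ret" = jump to next attacker-chosen addr
        return regs
    return run

run = make_machine()
# attacker-controlled "stack": inc r0 three times, then r1 = r0 * 2
print(run([0x1000, 0x1000, 0x1000, 0x2000]))  # arbitrary computation, no new code
```

The point the thread is making: with enough gadgets this composes into arbitrary computation, which is why W^X alone doesn't stop exploitation.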


Exactly. This escape is frankly quite cool technically, in terms of creativity.

That said, my own view is that messages from untrusted contacts should be straight ASCII, parsed in a memory-safe language, with no further features until you interact (i.e., write back, etc.).


Safeguards should be applied uniformly to all senders. A trusted sender could have been already exploited.


It's this attitude that is diminishing our security posture. Users want gifs, they want shared locations, they want heart emojis, they want unicode.

The fact that you force EVERY user you interact with to get the same treatment is the problem. Some people I let into my house unsupervised. Some as guests. Some I don't let in at all.

We need to start modeling this approach online more.

I don't think you understand how far users will go to work around safeguards if they interfere with their daily life.


> Users want gifs, they want shared locations, they want heart emojis, they want unicode.

I want all of those things. I use them every day. I don't trust any of my contacts to not have an infection.


ROP chains are similar in spirit but typically created by hand and thus not all that long (several dozen steps, at most). Creating a 70,000 step program via a Turing tarpit is very interesting.


> 70,000 step program

My initial assumption was that they would compile a program, take the binary output as an image and JBIG2-compress it, as I don't really get how they would use the result of the binary operations to branch to different code. Reading the article a bit more, I think they can loop multiple times over the area, by changing w, h and line dynamically over each pass, which would give them some kind of basic computer. That part is still unclear to me, but that would indeed be a lot more impressive.

There are no details on how control flow is handed over to the program either, so it's possible that they loop multiple times over the scratchpad (1 loop = roughly 1 clock cycle), especially if the memory area is non-executable and they have one shot at computing a jump pointer.

In any case, they can probably copy arbitrary memory addresses into the new "scratchpad" area to defeat ASLR (we'll see in part 2).


iOS does not allow the modification or generation of new executable code (at least, it will not at this stage of an exploit). So they are likely creating a weird machine to patch various data and then redirecting control flow with the altered state by overwriting a function pointer.


> […] then redirecting control flow with the altered state by overwriting a function pointer.

The analysis calls this out specifically:

> Conveniently since JBIG2Bitmap inherits from JBIG2Segment the seg->getType() virtual call succeed even on devices where Pointer Authentication is enabled

Which is disturbing. Was the code even compiled for the arm64e architecture in the first place, or is it a bug in the LLVM compiler toolchain? The ARMv8.3 authenticated pointers were introduced to preclude exactly this, but they don't stop the exploit.


Pointer authentication cannot protect against all pointer substitutions, because doing so to arbitrary C++ code would violate language guarantees. https://github.com/apple/llvm-project/blob/next/clang/docs/P... is a good overview of which things can and can’t be signed because of standards compliance.


Right, and they get there via a decompression pass on totally untrusted input sent over the network. This is why it's so crazy that Apple has this huge attack surface.

My own suggestion: ASCII-only messages unless the sender is in your address book or is a contact you've communicated with in your message history (however long you keep that) within the past year. Once you reply, these untrusted Saudi contacts can send you the GIF memes.


"Hello this is the state police, your mother just got in a car accident, please respond"


In the US if something serious like this happens the police will physically notify the next of kin of it, not send you a text.


The "police" already email and call me about my overdue IRS bill and my imminent arrest. I ignore all that crap.

Never interacted: maybe ASCII only. Interacted: allow Unicode and some other features (basic emojis? photos?). Full contact? Allow the app integrations, heart sensor, animated images, videos, etc.


[calls phone number]


Ah yes, let’s just force ASCII so that anyone using a language that’s not English has to suffer.


I wonder how they test the code? Maybe they can write a meta VM using a testable environment(e.g. in C) and transpile it into the instructions that library uses?


If I were them, I'd test each part of the toolchain (which I assume is a high-level compiler of some sort targeting their RISC VM) independently, as you would for any component of this type. For the actual exploit itself, it's probably a regular debugger with facilities tailored to their VM.


Suffice it to say, this exploit was not simply chaining gadgets.


Right, my bad. I now read the article, the technique is intriguing, but I can't say much more for lack of details!


I read through this and my jaw dropped. Pretty amazing detective work and a really amazing exploit. Presumably you could run Doom on it :-).

Sometimes I feel like it's hopeless but my brain cannot help but work on creating solutions to this sort of problem.


Indeed amazing, and a very well-written article too.

I wonder how much time it took to develop, I assume the whole general programming language from NAND gates is not something they had to come up with from scratch.

Putting the pieces together though, that's a work of art


They probably defined some microcode operations -> created a minimal assembly language -> wrote it in C -> hand-optimized the asm output -> compiled to "machine" code

All the steps are things you cover in a computer engineering degree (I think), but putting them all together in a tightly constrained environment (or even recognizing that the exploit can happen in the first place) takes a ton of skill, resources, and dedication.


absolutely brilliant, genius work.

I was confused about how they got the thing to run for an unbounded amount of time, but I guess they probably have the final operation at the end of a "processor cycle" be to overwrite the next SegRef so that it loops back to the current SegRef.

I'd love to see the thing in more detail - what the shellcode looks like, how the CPU was designed, everything.

a scummy company but such transcendental brilliance..


Stop weird machines!

http://langsec.org/occupy/


In this particular case:

If you are Wrangling Untrusted File Formats, you should be doing so Safely, using WUFFS.

You can't make this mistake in WUFFS. Your WUFFS image decoder might decode the image incorrectly, maybe Rudolph has a green or blue nose, maybe he's upside down or just a sea of noise, but it can't have a buffer overflow even if you screwed up really badly.

For example, any equivalent of the repeated addition numSyms += ((JBIG2SymbolDict *)seg)->getSize(); in WUFFS will get flagged, it clearly could overflow and WUFFS wants you to write code explaining how you're going to prevent that because overflows aren't allowed in WUFFS.

This leaves outfits like NSO with nothing much to attack. Sending me pictures of Rudolph with a green nose by "exploiting" a bug in my image decoder isn't very useful, unlike taking over my phone...
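For reference, the bug class WUFFS rules out looks like this when sketched in Python (the sizes and the 32-bit wraparound model are illustrative; the real accumulator is a C int in Xpdf's JBIG2 decoder): an unchecked accumulation like `numSyms += seg->getSize()` silently wraps, so a later allocation ends up far too small, while the checked version proves the add can't overflow before doing it.

```python
U32_MAX = 2**32 - 1

def unchecked_sum(sizes):
    num = 0
    for s in sizes:
        num = (num + s) & U32_MAX   # silent wraparound, as in 32-bit C
    return num

def checked_sum(sizes):
    num = 0
    for s in sizes:
        if num > U32_MAX - s:       # prove the add can't overflow first
            raise OverflowError("symbol count overflows 32 bits")
        num += s
    return num

sizes = [2**31, 2**31, 70]          # attacker-supplied segment sizes
print(unchecked_sum(sizes))         # wraps around to a tiny 70: undersized buffer
try:
    checked_sum(sizes)
except OverflowError as e:
    print("rejected:", e)
```

WUFFS makes the second form mandatory at compile time, which is the whole point of the comment above.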


the problem is, this is an ancient PDF feature from the 90s that nobody has time to write and debug

so they just took an open source parser and slapped it in there

it’s hard to rewrite PDF parsing in a new language so that everything still works, especially those weird features that people forgot about.



It seems we're now at the point where anything Turing complete can be a vector. Wow...


This wasn't Turing-complete until they exploited it to make it so. JBIG2 executes arbitrary binary bitmap operations, but sequentially (no looping). Using the exploit, they presumably found a way to send it into a loop, probably by overwriting the pointer to the next segment or something.

Theoretically I guess you don't need that, but then you'd have to send a payload linear in the number of cycles needed to run the shellcode, and that wouldn't lend itself to a processor-like design; it'd just be too big.


This reminds me of the original Story of Mel, in which Mel managed to do similar things in assembly. Amazing stuff, and I wish I had the chance to work with a similar genius.


Basically any input language beyond the regular category is risky and difficult to secure. See weird machines / langsec. This is a prime example.


Well, when combined with an integer overflow at least.


Technically it's not a buffer overflow; it's an integer overflow bug.


It's a real shame that the people who came up with this exploit are working for NSO and not on solving P = NP or something. I'm sure if we got them and the ones working on crypto at NSA in a room together, we'd have it and clean unlimited energy in a week.

I often feel sad thinking about how many brilliant engineers are dedicating their time to helping governments spy on people or other governments.


People of this caliber are available here:

https://ctftime.org/

Here you have a decade of performance history for experts/top competitors in:

security, reverse engineering, crypto, low-level work, malware analysis, OS internals, memory corruption

some of them even work at Google Project Zero :)


And some of them work at NSO...


If they solved P = NP, their first intention would be selling it to the highest bidder. NSO hackers are the digital equivalent of mercenary soldiers.


They don’t really have to; they can just mine Bitcoin by reversing SHA-256 in polynomial time, inspect HTTPS messages to banks, or send Bitcoin to themselves by forging an ECDSA signature… or just set up a software-as-a-service and have the biggest business in the world.


That's assuming that the solution to whether P=NP is P=NP and not P≠NP, isn't it?


>they can just mine Bitcoin by reversing SHA256 in polynomial time

This wouldn't be faster than ASICs. The coefficients would be too big for it to be practical.


n^10 is a polynomial too.

The equality of P and NP would not itself mean there are fast solutions.


Heck, even if P=NP meant there were fast solutions, merely proving that P=NP wouldn't necessarily give you those solutions, and they might turn out to be even harder problems!


> digital equivalent

In fact, many are literally mercenary soldiers!


Kinda like Wernher von Braun, maybe. He just wanted to make rockets. Whether they were for Nazi Germany or the US didn't matter, whether they were missiles or spacecraft didn't matter; he just wanted to build them.


Which we have a descriptive word for: unethical. The colorful word would be: disgusting


"Once the rockets are up, who cares where they come down? That's not my department, says Wernher von Braun." ~ Tom Lehrer


"Don't say that he's hypocritical, Say rather that he's apolitical"


There’s nothing unethical about a scientist working on weapon development for their country in the middle of a war. Imagine it’s 1935 and you lack the modern perspective. I mean you might not like it, but I don’t think there’s an ethical violation here.


I'm not sure what world you live in where saying "I don't care if I'm making missiles for Germany or the USA" is not an ethical violation and is somehow patriotism... but I don't want to live in it with you.


You’re being unfair by enforcing a modern perspective, shaped by the victors, on an enemy from 80 years ago. Were Soviet weapons scientists unethical too?


I think it's barbaric to calmly send a request for more slaves (Jews and other minorities) for the factory work because they were inconveniently dying too fast from the inhuman working conditions. I've read that he did all that. He wasn't merely a patriotic, unaffected scientist; he played a part in the Holocaust.


You’re moving the goalposts: the subject discussed was the participation of scientists in weapons programs. He might have been scum (and I think that trivializes what happened in Germany at that time, but I digress), but his participation in the war effort is not unethical in itself. Thousands of kilometers away, other scientists toiled on a weapon that makes all the weapons the Nazis developed seem benign.



maybe "amoral"


I think you could also say the same about gambling, porn and other questionable industries.

The thing is, it's usually much easier making money off these things than making money from solving impactful problems.

If you're a regular joe and you could spend your next 5 years with a 100% chance of making millions for finding exploits, or a 0.01% chance of solving P=NP, I think the irrational decision would be picking the latter.


The problem isn't that they spend 5 years making fuck-you money.

The problem is that they don't realize that they can stop once they have it.


I don't know what the quantitative trend is.

I do know of several founders whose first company was a nasty ad-tech company (spyware), and after making their millions, their second company is a much more honorable digital health company.

You can probably find examples where such people keep on creating nasty companies, so it would be interesting to see if there has been research on whether or not people pursue more honorable goals after they get lots of cash.


“The best minds of my generation are thinking about how to make people click ads. That sucks.”

~ Jeff Hammerbacher, fmr. Manager of Facebook Data Team, founder of Cloudera

This quote isn't just about people working directly on ad tech and ad targeting algorithms, but any product that is "free" and ad supported.


As a side note I just went to the Cloudera website, because I did not know about the company.

After selecting "Reject all" in the cookie dialog, it literally spun (they have a spinning-wheel animation for processing your cookie response!) for >5s on "We are processing your cookie settings request". If this is what the best minds of our generation are achieving, then God help us!


It's not really true. This was true 15 years ago, when the smartest people worked at Google (whose goal is to get people to click on ads), but now a lot of very smart people have found other businesses.


It's just another example that bug bounties are undervalued and that the experience of doing anything "white hat" is too disastrous to be worth it.

Responsible disclosure is for the gullible.

The market keeps saying “this is what its worth”


What if vulnerabilities’ PR cost to companies will always be less than the price on the black market for exploits?


Why do you think some random hacker is smarter than all the academics we have? Somehow clean unlimited energy isn't achieved because people are working on exploits or optimizing ad revenue? I doubt it.


Because they were smart enough to go where the money is?/s


I feel the opposite. All this stuff, and even the more hardcore crypto stuff, is all relatively simple math. It's not even close to comparable to the things mathematicians do, or even what physicists have achieved with the LHC or fusion research.


Surely cracking cryptographic algorithms is pretty hard math given people don't have that much success with it, even with a huge incentive (decrypting all communications worldwide)?


It's considered to be impossible. I doubt there is much (if any) serious research going on to mathematically crack RSA or ECC. Besides, that's not what OP was talking about: it was about hackers finding standard vulnerabilities in code and exploiting them, not about mathematical flaws in crypto.


Kind of ironic to use P = NP as an example of something to work on considering the biggest implications of proving P = NP :)


Forgive my ignorance, but what would they be - the complete implosion of all forms of known security, or something else?

This is a bit beyond my ken :)


Well for one, the safety of encryption rests on certain problems being intractable. (In a theoretical sense; there are always implementation bugs that destroy security).

If P=NP, then those problems previously thought to be intractable are actually tractable, and the foundation of a lot of security-related engineering collapses.


Is proving P = NP equivalent to knowing how any intractable problem can be solved? Is it possible for P=NP and yet a class of intractable problems to remain unsolved?


It would mean that a large class of problems that have solutions that can be verified quickly can be solved quickly. Which cuts both ways.

While that means most protocols used for cryptography would need to be replaced (hashing, digital signatures, etc.), it also means other combinatorics problems (traveling salesman, protein structure prediction) would become solvable, which may be a boon for logistics and/or computational science.

(I think this is correct) If P=NP there will still be intractable problems; they would be ones where the solutions can't be verified in polynomial time... along the lines of verifying the solution is correct is as complicated as brute forcing the solution.

Note: it's been a while since my computation theory class. ;) I am reading over https://en.wikipedia.org/wiki/P_versus_NP_problem and relearning the fine house of cards theoreticians have divided this problem into. There is a "consequences of P=NP" towards the bottom that sums it up better than I can.
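A concrete way to see the "verified quickly, solved slowly" asymmetry is subset-sum (a standard NP-complete example; the sketch is purely illustrative): checking a claimed solution is one pass over the certificate, while finding one naively means trying every subset.

```python
from itertools import combinations

def verify(nums, target, certificate):
    """Polynomial-time check: just sum the certificate and confirm membership
    (membership check ignores multiplicity; fine for a sketch)."""
    return sum(certificate) == target and all(c in nums for c in certificate)

def solve_brute_force(nums, target):
    """Exponential search: up to 2**len(nums) subsets in the worst case."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = solve_brute_force(nums, 9)
print(cert, verify(nums, 9, cert))
```

P = NP would mean the finding side also admits a polynomial algorithm for every such problem; nobody knows one, and problems whose *verification* is already super-polynomial would stay hard regardless, as the comment above notes.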


'quickly' is doing a lot of work in that sentence.

"A proof that P = NP could have stunning practical consequences if the proof leads to efficient methods for solving some of the important problems in NP."

Note the "if". It is extremely important to the meaning. It's very possible for P to equal NP, but for that "if" to be false.


There's also the chance that while we may be able to come up with a polynomial algorithm for integer factorization, it still may not be practical to run. Remember that computational complexity discards the constants on that polynomial. Practically speaking, x^2 + x is a lot different from 2^64·x^2 + 2^32·x + 2^16 :)
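A quick numeric check of that point (with hypothetical coefficients): the "fast" polynomial with huge constants loses to a "slow" exponential until n gets large enough for the exponent to dominate.

```python
def big_poly(n):            # polynomial, but with enormous constants
    return 2**64 * n**2 + 2**32 * n + 2**16

def small_exp(n):           # exponential, but with constant 1
    return 2**n

for n in (10, 64, 80):
    # is the "fast" polynomial actually more expensive than the exponential?
    print(n, big_poly(n) > small_exp(n))
```

For realistic key-sized inputs this flips eventually, but a 2^64 coefficient buys the exponential algorithm a very long head start.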


Among other things, mostly encryption. Most of our current methods depend on P != NP. So no need for 0-days if you can just read encrypted data as if it weren't encrypted.


> Most of our current methods depends on P != NP.

Not really. They depend on guessing being slow. P, just P, can do slow. P can be bigger than the universe even with low values of n.


Yes. Just like in that movie Sneakers


I feel the same way about all the smart engineers solving problems for Facebook, Twitter, etc...


Those engineering problems are trivial compared to many real problems. Turning all those engineers loose on, say, cancer wouldn't necessarily result in any new breakthroughs. Case in point: all the brilliant software engineers who thought they could solve Covid (https://www.protocol.com/Newsletters/pipeline/very-venture-c...) only to find themselves out of their depth. The software-engineering approach doesn't translate to all things and can even be harmful in some fields (cough Theranos). Physicists are another group that tend to have this conceit, i.e. if they weren't so busy solving physics problems they would solve the economy and world peace.


Usually software developers have big egos, but in reality it's much more difficult to handle DNA meaningfully than to scale computers.


Dear sir, the entire NSA staff are currently plugging away on that pesky P=NP problem, including datacenter janitors.


I have suspected for a while now that the Bitcoin blockchain is actually an attempt to break SHA-256. Bitcoin is built around incentives, and it has created an incentive for people all over the world to basically brute-force this algorithm and maintain a recursive set of low-entropy outputs.

Which would make the btc blockchain an incredibly expensive and valuable data set, for someone armed with the right mathematical theory.


Part of security is knowing your adversary's power. I think you might be on the right track. You can't put a price on the security of an algorithm. However, Satoshi gave us a very good metric for calculating it.


How could you break it without destroying its value?


Why would the NSA care about destroying Bitcoin's value?


I don't see anything fundamentally novel here, other than that we're no longer just laughing at weird things that turn out to be Turing complete; they're all practical intrusion vectors now.


You would be surprised at the skills at the highest level of academia.


all the other best minds of our generation are working on getting people to click more ads


I'm not sure what you mean by unlimited energy here, whether it's literal or a metaphor, but I sense a second-law-of-thermodynamics violation.


What if they are working on both?


NSO gets way too much credit/dramatization these days. They are mostly two things:

* a shiny UI for customers

* a bank of 0-days

Those 0-days could be found in house, could be brought in from a new employee copying a previous employer, or could simply be purchased.

Most people in the IDF understand that when a great security researcher leaves 8200, the company they move to will probably end up with some of their secrets; there's really no way to stop a 0-day from leaking via a researcher like that.

This exploit has been closed, but we haven't heard anything about Pegasus not working anymore, so I'm just assuming they moved on to the next exploit. Previously there was a big WhatsApp exploit FB closed that had them hurting. I'm sure they always have multiple backups for when this happens.

There is, and has always been, a 7-figure market for high-quality 0-days. Hell, maybe it's 8 figures these days. NSO is just "in your face," which makes people angry.

NSO was caught, and that's why Google is crediting them. But this same exploit could have been heavily used by 8200/NSA/who knows who else.


I think you get credit for having a bank of actual zero days, self-discovered or not

Trying to trivialize the threat they pose only helps NSO

Plus, "willing to sell to nations with bad human rights records" should be on that list


“All the us army is is a bunch of tanks and planes”


>There is, and has always been, a 7 figure market for high quality 0days. Hell, maybe its 8 figures these days.

Popular social media account handles go for 4 figures.

People have wallets on their phones with 6+ figures in crypto.

OSINT a billionaire's phone number, leverage a 0-click, and you are looking at 8+ figures' worth of trade, personal, and national secrets.


This has already allegedly happened to Bezos (attacked by Saudi Arabia, IIRC, which is an NSO customer). This was likely over his ownership of the Washington Post and its reporting on the killing of Khashoggi.

Yeah, billionaires and trillion-dollar-company CxOs have to step up their electronic security.


Bezos willingly gave his personal WhatsApp number to a prince, just to "be in touch," and got hacked as a direct result.

The Saudis wanted leverage, obtained via Bezos's affair, but the US cannot let (national security) leverage escape its borders, and so it leaked his affair.

Shit is just lulz to me.


Those conversations are important for a CEO like Bezos, though. Let's not pretend MBS is not mega powerful. I would say those kinds of relationships are probably a bigger part of a multinational CEO's job than, say, micromanaging teams.

But it was super stupid that he had just one phone combining personal, business, and more-private business. At least from the reporting, that's what it sounds like happened.

Even the dumbass Jan 6th coordinators and Meadows used burner phones. IIRC that's standard practice for political 'execs'/important legislative committee staff.


The researcher that leaves the military takes with them general skills in reverse engineering and exploit development, but they cannot use specific 0days they know about from their military service. The specifics of everything done in the military is classified. People told me they couldn't mention in job interviews some of the skills they have because it's a secret. Like, if someone developed this Turing complete architecture on top of jbig2 decompression while they were in the military, it would be considered a secret that cannot be revealed.


> They cannot use specific 0days they know about from their military service

Of course they can, it is just illegal and might be classed as treason or similar.

Remember we are talking about getting exploits for nation states here rather than just some regular company - hiring spies is part of standard operations for the intelligence community and would be a valid zero-day acquisition strategy (depending on the protection offered for NSO by Israel).


> further demonstrating that the capabilities NSO provides rival those previously thought to be accessible to only a handful of nation states

I mean, the whole "nation state" or "nation-state-backed" hackers thing was always a liiiiitle (very) ambiguous, right?

Does the evidence really even move the goalposts or mitigate the convenient scapegoating?

Politicians and CEOs and certified IT professionals are all incentivized to say "it was a nation state, there's nothing we could have done!" and rely on their sycophants to never question it, instead of "we're incompetent and powerless against random teenagers who rented a rootkit before renting a compromised Windows machine that happened to be located in Russia."


It's a useful distinction because it clarifies your threat model: any attempt at security without a threat model is hokum, IMO. It's good to know the limits of your security stance by modeling how many resources your opponent can muster, and how many you can spare to defend yourself.

The resources required to develop these exploits (and to mitigate them) were at least an order of magnitude above the next tier, because there was very little sharing and reuse (except among allies). Now, thanks to NSO, any backwater tinpot dictatorship that can't provide reliable electricity or offer a coherent policy for longer than a few months at a time qualifies as a "nation state" (i.e., can hack anyone in the world), if it can spare a 6- or 7-digit budget to hire exploits.

What NSO/HackingTeam and similar offensive security companies did was to lower the bar on nation-state capabilities by removing the need to develop a local program over many years, and allowing the reuse of infrastructure, personnel and exploits by countries that aren't allies. Call it a SpaceX for hacking as opposed to space launches.


I think it is worthwhile to distinguish the two, and I think generally speaking it's the use of bespoke 0days that separates nation state attackers from all others.

One can't really fund computer scientists/mathematicians working full-time on the thankless job of finding vulnerabilities without nation-state kinds of money, as opposed to employing known vulnerabilities, which carry a lesser chance of success and a greater chance of blowback in their execution.

After all, NSO is itself an IDF Unit 8200 outfit.


So nation states as clients isn't the same as being state-sponsored or state-backed, and a nation state as a former employer isn't the same either.

But ultimately I’m not sure the distinction matters if the main result is that hackers get away unscathed and the victims just deflect attention to the wrong targets


Blaming entire nations gives domestic justification for retaliation. No point giving up a card when it's handed to you. It is in a government's best interest to exploit every opportunity handed to them -- it's less effort than fabricating a reason when you need it later.


All security comes down to how many zeros of money it's built to protect against.


> Based on our research and findings, we assess this to be one of the most technically sophisticated exploits we've ever seen, further demonstrating that the capabilities NSO provides rival those previously thought to be accessible to only a handful of nation states.

> Using over 70,000 segment commands defining logical bit operations, they define a small computer architecture with features such as registers and a full 64-bit adder and comparator which they use to search memory and perform arithmetic operations. It's not as fast as Javascript, but it's fundamentally computationally equivalent.

They create an emulated computer by decompressing an old image format inside a PDF file which has a .gif extension! That is top notch!


And still, in 2021, after so many exploits, after realizing the futility of trying to fix these bugs and adding their BlastDoor process, some Apple dev calls image-parsing code where it doesn't belong. The people who are supposed to maintain the part of the OS most abused by nation states don't know the internal APIs they are working with, even just to display looping GIFs.

This negligence is killing journalists and activists.


How do we know this code was written in 2021? GIF support was added to iMessage (I think) in 2015 or 2016; I couldn't find an exact date.


That's a fair point, but it doesn't change the fact that the code remained there after the code review that was surely(?) done after implementing BlastDoor, especially for image-parsing APIs, considering they've always been the brittle part of the iMessage pipeline.

Not sure what Apple dev machines look like, but usually I can find stuff like this with a well-written grep or a shell script and my API definition, if I'm not sure that I caught everything while refactoring. The codebase might be massive, but the team should be sizeable enough to handle it.


It really does seem like a failure of BlastDoor.

Parsing any untrusted data should always be sandboxed.


The image library also runs sandboxed, and they found a way to "escape it with ease."

Sandboxes don't reduce the need for careful programmers. They just trade less safe APIs for ones deemed safer. The sandbox will have as many holes as the number of APIs you add to it.


In this case they obviously didn't realize it needed to be called inside the sandbox. That function name really is amazingly misleading about what it will do. Anyone could have made that mistake.


> iMessage has native support for GIF images, the typically small and low quality animated images popular in meme culture. You can send and receive GIFs in iMessage chats and they show up in the chat window. Apple wanted to make those GIFs loop endlessly rather than only play once,

Any chat or message software you want to be REALLY secure should not have support for rich media of any type. I am even suspicious and skeptical that Signal supports embedding animated images.

I can name exploits of this type on desktop PC operating systems going back probably 22-23 years...

I do realize that lack of rich media inline in messages is a non starter for most non-technical consumer end users.


Signal lets you embed animated images but they still won't let you send native resolution images from your phone to someone else. Signal drastically recompresses any image sent. The only end to end encrypted software I know of that allows that is iMessage.


How could Signal recompress images while retaining end-to-end encryption? Wouldn't any "recompression" happen entirely on the client-side, and therefore be fair game for hackers to bypass with their own payloads?


https://sneak.berlin/20210425/signal-is-wrecking-your-images...

It's done clientside, and you can't remove it (on iOS) because only official Signal-published builds will receive push notifications of new messages from Signal servers (via APNS).

This doesn't apply to a sender of an exploit, but does apply to normal people who wish to send full res images or patch out the DRM in the Signal app.


They definitely don't do this, but in principle they could use homomorphic encryption to do the compression server-side with zero knowledge.


Not an expert on this, so I might be wrong, but I'm pretty sure that in homomorphic encryption you can't run an algorithm that reduces the size of the encrypted payload. You could recompress such that, after decrypting, the result is smaller, but that only helps after decryption.

Besides, it's also totally impractical.


Nope! You can reduce the size. A trivial example is XOR: Enc(A) xor Enc(B) = Enc(A^B), so two bits in, one bit out.

What you can't do is implement variable-size compression like Huffman trees. If you think about implementing Huffman as a circuit, you have a fixed-length output (the worst-case length). You can't read the output any more than you can read the input, so you don't know how much padding you can throw away. Therefore it's useless.

The same principle applies to running any algorithm with dynamic computational complexity: you can run a Turing machine in FHE, but you won't know when it halts.


It's my understanding that the signal client which is sending the image reads the jpg/png/whatever image file from local storage, recompresses it local client side, and then sends the smaller version.


Then that offers no security at all, since an attacker could use a hacked client. Unless clients also refuse to receive anything but one very well-validated format, so that sending anything funky would be futile.


No, just have the server reject anything at the /SendMessage endpoint over a certain size; presumably the client is resizing / recompressing images to hit a specific target.


That won't help much.

• Compressing to a file size limit is actually difficult/expensive. Tools usually target some good-enough quality level, and then the file size depends on remaining entropy in the image. The limit would need to be conservatively high.

• Exploits aren't necessarily larger than an average image. Adversaries in this case are quite skilled, and may be able to codegolf it if necessary.


Messages support arbitrary files up to 100 MB. Images are resized or compressed for user experience on different devices. The server doesn't know what's in a message.


There is no 'server' in a Signal client-to-client link except as a directory server for the clients to find each other.


This is not true. The server in the Signal protocol is responsible for message storage and delivery, just in a way where it's hard to associate individual message payloads with individual users (except by IP address, of course).


> Signal drastically recompresses any image sent. The only end to end encrypted software I know of that allows that is iMessage.

I believe they won't use Signal's own client to send their exploit. Client-side TX restrictions make no sense.


As others have commented, this is absolutely mind-bogglingly hardcore. Kudos to the NSO Group engineers who designed and built this (regardless of your allegiances, and whether you like or dislike that they do this, and whether it's objectively good or evil or somewhere in between, you have to admit that it's deeply technically impressive).

Does anyone have a sense of who they sold this to and who used this particular 0-click exploit?


My country’s dictator (Viktor Orban) uses it to spy against the opposition and the president to make sure that he keeps control of Hungary. I would give more kudos to NSO if they helped us get rid of corruption in my country.


Sorry, but I can't agree here: this stuff is proper evil for most of the world's population, which also includes most HN readers (no, it's not just SV and 5 other guys). It's more often than not used to oppress ordinary citizens, free thinkers, and truth tellers.

They are actively making this world a much worse place long term, and why? Pure greed for money and power. They don't even try to act like there is some moral/legal filter when choosing their customers.

NSO as a company is a highly amoral business too; kind of goes hand in hand.


I don't think any of what you said refutes the point that this is deeply technically impressive.


It does to me, but then I have my own moral values.

How the Nazis organized the extermination of millions of Jews in concentration camps might also be amazing from a bureaucratic and organizational point of view, yet I completely fail to marvel at such an achievement in efficiency.


> regardless of your allegiances and whether you like or dislike that they do this and whether it's objectively good or evil or somewhere in between, you have to admit that it's deeply technically impressive

Might as well praise German logistics circa 1940-1945.


"Say what you will about those Fascists, at least they made the trains run on time!"

(Note on the above oft-repeated cliché: they did not, in fact, make the trains run on time.)


German logistics in that era was actually pretty inefficient, a big part of why they failed to beat their opponents.


I admire its purity.

> Does anyone have a sense of who they sold this to and who used this particular 0-click exploit?

From the article:

> Earlier this year, Citizen Lab managed to capture an NSO iMessage-based zero-click exploit being used to target a Saudi activist.


The Indian government used it to spy on opposition politicians

https://theprint.in/opinion/only-15-indians-know-about-pegas...


I remember reading this. Are you aware of a detailed account? Regardless of Indian politics, has it actually been proven/researched?


> has it actually been proven/researched?

Yes.

"The Wire has confirmed the numbers of at least 40 journalists who were either targets or potential targets for surveillance. Forensic analysis was conducted on the phones of seven journalists, of which five showed traces of a successful infection by Pegasus." https://thewire.in/rights/project-pegasus-list-of-names-unco...

The forensic analysis was conducted by Amnesty International's Security Lab and was peer-reviewed by Citizen Lab 1. https://www.amnesty.org/en/latest/research/2021/07/forensic-... 2. https://citizenlab.ca/2021/07/amnesty-peer-review/

Also, "In the midst of the heated West Bengal assembly election, the phone of poll strategist Prashant Kishor was broken into using NSO Group’s Pegasus spyware, according to digital forensics conducted by Amnesty International’s Security Lab and shared with The Wire." https://thewire.in/government/prashant-kishor-mamata-banerje...

Govt of India in parliament on NSO Group (Dec 3, 2021): "There is no proposal for banning any group named NSO group" https://twitter.com/A2D2_/status/1466700684573642752


Thank you. I had previously read the Citizen Lab report, but didn't realise that Indians were targeted as well.


I'm assuming NSO just buys these exploits and then packages them.


Some subset of their expansive customer list.

This isn't a one-time thing. They're a funnel.


On the one hand, you've got people writing insanely complex hacks like this. On the other hand, there's the guy who was doing whatever he wanted for years just by crafting dodgy plist files. https://blog.siguza.net/psychicpaper/


Same class of bug that completely broke Android app signing. In that case it was about ZIP file parsing differences (apps checked at install time by one parser, executed using another parser). And also the same bug class that allowed us to bypass Nintendo's mitigation for the Twilight Hack (Wii Zelda savegame exploit), twice. In that case it was about how they handled the savefile archives slightly differently from existing save data and the game itself.

Having multiple parsers for security-sensitive data that isn't just outright signed at an external layer is always a recipe for disaster.


However that plist hack wasn't anywhere near useful to someone like NSO. Different ballgame.


Ok, they apparently made a VM using just the JBIG2 logical operators, that’s both hilarious and amazing.

Still hate NSO though.


Not just a VM - effectively a computer. Holy crap that's amazing (ly evil).


Not just an effective computer, but a computer that scans iOS memory and achieves RCE exactly as planned, without crashing or failing.


Right? I was using VM as shorthand; it is, after all, a virtual machine, just more virtual than usual :)


I think VM is the correct term here. They are technically emulating hardware


Yeah, but I also understand that in general parlance VM means a higher level virtualization.

Eventually though you get into one of those annoying simulation vs emulation style arguments so I’m happy to accept either definition of VM, just as long as both sides agree on what it is that they’re discussing :)


It's less virtual than usual; it has full access to and control over the embedding process. This is an RM, a Real Machine running in the original access space.


I see "...a small computer architecture..." in the article and my instinct is to ask "Yeah-- but can it run DOOM?"


And since you already have graphical output, because it is a GIF displayed in iMessage, and you have access to gestures, since you exploited OS and can get access to any input, you should be able to have fully playable DooM in iMessage! You can even share that game with friends (who run unpatched iOS)!


I think that allowing overflows to go unnoticed is a mistake. Overflow on addition should cause an exception by default. It should be easy to implement in hardware, and since it is UB in C, correctly written programs wouldn't break.

For example, imagine you are counting money and, because of an overflow, millions turn into a few cents.

Another evil thing is indirect jumps. They should be implemented using an index into a jump table.


Integer overflow is not UB in C in general; only signed integer overflow is. Unsigned integer overflow is defined to wrap modulo 2^w. And there are plans, in C2x or C23 IIRC, to make signed integer overflow well-defined in terms of two's complement too.


Integer overflow is UB because on some architectures it was trapping.

In practice for the last 30+ years the default behaviour has been non-trapping. So much so that making it trapping would break vast amounts of software that depend on it, so you can't change the general case behaviour in C, C++, etc, or "safe" languages like Java, C#, etc.

Newer languages do recognize this and make trapping the default behaviour, but "rewrite everything at once" is simply not a tractable problem.


> In practice for the last 30+ years the default behaviour has been non-trapping. So much so that making it trapping would break vast amounts of software that depend on it, so you can't change the general case behaviour in C, C++, etc

You can change it in C and C++, since the current behaviour is undefined i.e. give control of your computer to hackers.

GCC and Clang should make -ftrapv the default. They won't, because whichever one does it first will then perform worse on benchmarks than the other, and that's the only thing the devs care about. But they should.


Out of band update:

The overflow that starts this exploit chain is an unsigned overflow. Unsigned overflow in C and C++ has defined behavior. That behavior is wrapping.


You can change it to be trapping behavior, but doing so is problematic for architectures that cannot detect overflow at all of the supported widths in hardware because the software checks are slow.

Unfortunately (IMO), C and C++ have a sizable community that is unwilling to accept pessimizing behavior for various atypical architectures. This is not unreasonable, but it hugely limits the ability of the language to make decisions that work great for the 99th-percentile case.


No, you can't.

Because too much code is completely broken if you do.

The only things that make use of overflow being UB are optimizing compilers, and they have reliably broken code because of this for 20 years. This means most developers have realized that pretending non-two's-complement architectures still exist is nonsense, and both C and C++ are under significant pressure to actually define overflow as two's complement.


> Because too much code is completely broken if you do.

Any code that gets broken by that already has a security bug.

> The only things that make use of overflow being UB are optimizing compilers, and they have reliably broken code for because of this for 20 years.

Exactly! Code that can be broken by this is already broken. Using -fwrapv won't make it any more broken, it just makes the way it breaks safer.


No. It isn't broken. There is a huge amount of code that assumes this behavior, and works correctly, because that is the way hardware works. That is a huge amount of code that has worked reliably for decades, because it is correct, because the expected behavior matches the hardware behavior.

Saying "it is UB therefore is a security bug" is nonsense.

Saying it shouldn't be UB is useful, and it is being addressed by the C and C++ standardization committees, and that work will not change the behavior; it will simply remove the "it's UB" loophole that optimizers occasionally use. At that point the defined behavior will be two's complement overflow.

Saying it's UB and therefore can be arbitrarily broken is equally nonsense; breaking code that is correct within the confines of actual real machines, for no reason other than "it's UB," is no more helpful than saying "why don't you just rewrite it all in X."

It's actually incredibly difficult, given your definition of what is allowed, to write anything in C that is not UB.


> There is a huge amount of code that assumes this behavior, and works correctly, because that is the way hardware works.

It doesn't, though, because gcc et al. don't care how the hardware works; they can and do happily miscompile that kind of code into security vulnerabilities instead. If you're talking about embedded code compiled with a specific vendor's non-optimizing compiler, then yes (but changing GCC's and Clang's defaults will have no effect on that kind of code); if you're talking about code for mainstream desktop/server systems, then no, it already doesn't and can't rely on wrapping overflow.

> Saying it's UB and therefore can be arbitrarily broken is equally nonsense, breaking code that is correct, within the confines of actual real machines, for no reason other than "it's UB" is not anymore helpful than saying "why don't you just rewrite it all in X".

But it's not correct, not just in theory but in practice. In real life, code that does this and gets compiled with a modern optimizing compiler like GCC or Clang is already an RCE unless proven otherwise. Yes wrapping is what a naive assembly translation would do. But the compilers don't do naive assembly translation and haven't for decades.

> It's actually incredibly difficult given your definition of what is allowed, to write anything in C that is not UB.

Yes, which is why we keep getting security vulnerabilities like this one.


In C# you can have trapping behaviour by using checked, and in fact there is an ongoing discussion about enabling it by default in VS project templates.

The same applies to the Algol lineage.


And NSO is the value option. Now imagine what nation states with an actual budget have at their disposal.


The problem with nation states is that they don't pay people well. I don't think a nation state could ever come up with something like this: it takes passion and genius, and those qualities demand higher premiums than governments are ever willing to hand out.


Their entire business model is predicated on nation-states paying NSO more than NSO pays their employees.


Yes, but they're paying for a proven exploit, not sinking money into R&D.


Nation states are willing to pay a LOT more for a contract; they just don't want to pay people directly.


It's still pretty expensive! NSO charged a flat $500,000 fee for installing Pegasus. It charged government agencies $650,000 to spy on 10 iPhones; $650,000 for 10 Android users; $500,000 for five BlackBerry users; or $300,000 for five Symbian users.


Feels weird that a private company can target individuals for a price. How was this legal? Isn’t it illegal to hack the phone of a private individual? Or do they simply say here’s the tool, here’s the manual, do what you want just don’t tell us?


Phone companies charge governments to tap a phone line; this isn't that different, except that phone companies usually only follow requests made within the country they operate in, and those are usually backed by a warrant.

As with any complicated tech, things are a bit more involved: their exports are controlled under the same regime as weapons exports in Israel, so there is likely oversight, at levels beyond NSO as a company, to ensure the tech doesn't leak out and isn't used outside the agreed bounds.

These exports were very much part of Israeli, and quite likely US, foreign policy.

Some deals, like the one with KSA, probably should never have been greenlit, but many others have unfortunately had the outrage steered away from the main culprits.

Among their exports, they've also sold to European nations such as Poland.

Poland, an EU and NATO member, used this software to have one of its government agencies spy on a prosecutor in charge of an investigation into some of the leading party's members. Yet it didn't seem to generate as much outrage, and what little there was got directed at NSO or Israel, which is laughable.

Poland isn't a state that would normally fall under any arms embargoes or export restrictions.

This software likely had very little to do with Khashoggi's fate; they didn't use it to lure him into a trap or to track him for an assassination. He was killed in an embassy after being invited in, and he came in of his own free will.

I'm far more interested in how some of their Western clients have used this software, and unfortunately so far no one seems to want to pick up or steer the story that way.


NSO have been trying to argue that they're shielded from responsibility for their actions by being a de facto extension of the state that they've sold to and therefore enjoy sovereign immunity.

The collapse of that argument in the Facebook case is why Apple are now suing as well.


> Feels weird that a private company can target individuals for a price. How was this legal? Isn’t it illegal to hack the phone of a private individual? Or do they simply say here’s the tool, here’s the manual, do what you want just don’t tell us?

It's only illegal if you get caught. And then find someone to prosecute you.


Israel classifies it as a weapon. Compared to companies that make guns and bombs, spyware seems mild.


Depends whether the spyware ends up with you being cut to pieces with a hacksaw while a Saudi prince watches over a teleconference link, I guess.

I'd rather be blown up.


For the capability and who they're charging, frankly, that sounds cheap. For an example of what a nation state is willing to spend to target a single individual, a hellfire missile costs about $150k and doesn't generate intel in the process.


A lot of people are talking about governments as NSO's top clients... How do the governments actually pay them? Out of their pockets? The state budget? Either way, shouldn't this kind of payment be easy to track? Why is no one talking about this? Why isn't this forbidden at the international level? The US (as world peacemaker) seems pretty chill about it. I thought this was 'merica!


It came out in the recent trial that the FBI couldn't open Kyle Rittenhouse's iPhone, which was the latest generation at that time last year.


The FBI couldn't, mainly because they haven't caught up to this level; it's not their primary objective, as they are first and foremost an investigative outfit.

The NSA, not to mention the entirety of the US defense industry, could easily have found a way to break the encryption on a single device, especially since they only had to break a relatively simple password/passcode; it's just a question of how much it would cost and how long it would take.


Wait, but Kyle Rittenhouse was still alive and participated in the trial. They couldn't just make him open it himself?


First of all, that's kind of a sticky subject in the US due to the Fifth Amendment. Second of all, I have no idea what the parent commenter is talking about. Rittenhouse willingly turned his phone over to investigators.


That might just be what the FBI wants you to think...


Of course that is possible, but they also released their aerial infrared video of the event to the prosecution, which was not previously known.


How can you be so sure NSO is not funded by nation state(s)? :)


Well, I imagine knowing whom foreign intelligence services are spying on is very valuable to Mossad :)


This is quite clever, but fundamentally it's only possible because of a buffer overflow. If the JBIG decoder had been written in Rust (just to cite one example of a language safer than C), this would have been impossible. Use dumb languages, pwn valuable prizes.


Why are you so certain? The core of the exploit is an integer overflow issue: an attacker causes a 32-bit integer to overflow by repeated addition.

Rust in release mode does not check for overflow on addition. Only debug mode Rust does that.


That was step one; it would then core dump on step two when trying to go out of bounds.

Naturally this could eventually be used in another way.

There are no invincible fortresses, but some have enough defenses in place that almost everyone is kept away or dies trying, except the very resourceful ones.


This case is easy. No need to rewrite: deleted code is even safer than Rust.

Apart from Rust, Wuffs is also a good candidate for codecs: https://github.com/google/wuffs/blob/main/doc/wuffs-the-lang...

Too bad that Swift isn't that good for low-level codecs, so a Swift rewrite of Messages couldn't remove C dependencies.


If the code can't be deleted then an alternative to rewriting is to sandbox it like Firefox recently started to do with wasm. That would have kept any exploit in the sandbox - let them have fun in there with that 70,000 step program where it can't touch anything...

Sandboxing using wasm has around 10% overhead, so a full rewrite might end up running faster. But recompiling the code takes less time and effort and will not introduce new bugs, so it's a useful option too.


The sandbox the article mentions as Apple's mitigation makes use of Swift.

https://googleprojectzero.blogspot.com/2021/01/a-look-at-ime...


Even if this were properly written C++ this wouldn't have worked, i.e. if the authors had used a std::vector instead of a hodgepodge of new and malloc. Even if you had overflowed the integer and used it to reserve some capacity in a vector, calling `push_back` would have reallocated the vector instead.

Containers and move/by-value semantics in C++ inherently avoid a lot of this. It's sad to see that so many C++ developers don't actually know the language and instead stick with a "C with classes" style that doesn't provide any extra security compared to C.


As much as I'd like to agree with you, the real reason this is happening is because humans wrote the code and humans make mistakes. And even if you rewrite all of the software in Rust, it'll still have exploitable bugs.

Does it matter if it's a buffer overflow or a Rust panic, if the end result is someone reading your device's memory?


In a language with proper bounds checking, regardless of which one, the result of this attempt would have been a core dump.

Sure there are other ways to then take advantage from killing a critical process, but it would be one attack vector less.


You're assuming that no additional attack vector is being introduced due to features unique to Rust. In my opinion, unknown issues are worse than known issues.


Since the first Fortran compiler one of the basic tenets of computer programming has been that the computer itself should help the human programmer express his/her ideas and help the human avoid basic accounting mistakes.

C doesn't do the latter. Blaming the human programmer for stupid accounting mistakes is misplaced. The human's mistake was in choosing a bad language.


TL;DR - the ending of the post is all you need:

“JBIG2 doesn't have scripting capabilities, but when combined with a vulnerability, it does have the ability to emulate circuits of arbitrary logic gates operating on arbitrary memory. So why not just use that to build your own computer architecture and script that!? That's exactly what this exploit does. Using over 70,000 segment commands defining logical bit operations, they define a small computer architecture with features such as registers and a full 64-bit adder and comparator which they use to search memory and perform arithmetic operations. It's not as fast as Javascript, but it's fundamentally computationally equivalent. The bootstrapping operations for the sandbox escape exploit are written to run on this logic circuit and the whole thing runs in this weird, emulated environment created out of a single decompression pass through a JBIG2 stream. It's pretty incredible, and at the same time, pretty terrifying.”
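The "64-bit adder built from logic gates" idea in that quote can be sketched in a few lines. This is only a toy model of the technique, not the exploit's actual segment commands: given nothing but bitwise AND/XOR/OR over memory words, a ripple-carry adder (and from there arbitrary computation) falls out:

```rust
// Build a 64-bit adder purely from bitwise logic "gates", one bit at a
// time, the way the exploit composes JBIG2 segment operations.
fn gate_adder(a: u64, b: u64) -> u64 {
    let mut sum = 0u64;
    let mut carry = 0u64;
    for i in 0..64 {
        let x = (a >> i) & 1;
        let y = (b >> i) & 1;
        // Full adder: sum bit from XOR gates, carry-out from AND/OR gates.
        let s = x ^ y ^ carry;
        carry = (x & y) | (carry & (x ^ y));
        sum |= s << i;
    }
    sum
}

fn main() {
    assert_eq!(gate_adder(0xdead_beef, 0x1234_5678),
               0xdead_beef + 0x1234_5678);
    assert_eq!(gate_adder(u64::MAX, 1), 0); // wraps like hardware
    println!("64-bit adder built from logic gates checks out");
}
```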


This one will be another talking point right beside the "arbitrary code execution in SNES games via controller inputs" as a rebuke to arguments about even small systems (like an image decompressor) being "made secure".

I also keep thinking "The Cylons would totally write an exploit like this."


> I also keep thinking "The Cylons would totally write an exploit like this."

Reminder that the Cylon attack was an inside job in the rebooted series (from a model Number Six).


Do we know for a fact the NSO group are in fact NOT Cylons?


They must have spent tons of engineering effort to create this virtual computer to act as their foundation for further exploits. They don't deserve any sympathy of course, but it must really suck that their foundation disappears the moment the vulnerability is fixed.


I suspect once written it can be adapted to a wide range of Turing complete instruction sets.


I actually think you can write and test a NAND-gate solution in any external environment that is much easier for development and testing (say, plain C on Linux), and then transpile it into any other Turing-complete arch. This could actually be a really interesting project to work on: 1) write a VM in C, and 2) write a transpiler targeting another Turing-complete arch.
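A minimal sketch of the "develop against one primitive, transpile later" idea (in Rust here rather than the C the parent suggests): derive every gate from a single NAND, test the derivations in a comfortable environment, and NAND becomes the only thing the transpiler has to map onto the target's primitive:

```rust
// The single primitive the target architecture must provide.
fn nand(a: bool, b: bool) -> bool { !(a && b) }

// Every other gate derived from NAND alone.
fn not(a: bool) -> bool { nand(a, a) }
fn and(a: bool, b: bool) -> bool { not(nand(a, b)) }
fn or(a: bool, b: bool) -> bool { nand(not(a), not(b)) }
fn xor(a: bool, b: bool) -> bool {
    // Classic 4-NAND XOR construction.
    let n = nand(a, b);
    nand(nand(a, n), nand(b, n))
}

fn main() {
    // Exhaustively verify each derived gate against its truth table.
    for a in [false, true] {
        for b in [false, true] {
            assert_eq!(and(a, b), a && b);
            assert_eq!(or(a, b), a || b);
            assert_eq!(xor(a, b), a ^ b);
        }
    }
    println!("all derived gates match their truth tables");
}
```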


You should read the entire post.


Reading breakdowns like this gives me imposter syndrome


These people are probably among the smartest developers in the world. I wouldn't compare myself with them.


You think? These devs are just some of the devs in Israel. The best get too popular to work in secret labs like NSO. I find it hard to believe that the best devs are the secret ones in Israel. But obviously, I could be wrong.


I'd imagine there's a fair share of amazing developers who have no interest in flaunting or broadcasting their talent, since at some level it just isn't necessary to advance their career.


It does not give me imposter syndrome. It just tells me I know absolutely nothing despite having a so-called advanced degree from "a top school". God damn it.

