A Saudi woman's iPhone revealed hacking around the world (reuters.com)
400 points by arunc on Feb 19, 2022 | 174 comments



"The image files tricked the iPhone into giving access to its entire memory, bypassing security and allowing the installation of spyware that would steal a user's messages."

Seems like there is a lot behind this summary of the hack.

How does an image file trick the iPhone into all that?



Wow, so basically:

1. iMessage has a feature to send and receive GIFs

2. These GIFs are copied to a specific path early in the message processing pipeline (even before the message is displayed)

3. But the copy code doesn't just copy the GIF. It uses the CoreGraphics APIs to _render_ the image to a new GIF file at the destination path.

4. The code uses the ImageIO lib to guess the image format, ignoring the .gif file extension. So you can trick this code into accepting a non-GIF file.

5. You can use the above to invoke one of over 20 image codecs that were not intended to be invoked in this code, including the CoreGraphics PDF parser.

6. The CoreGraphics PDF parser has a very specific vulnerability in its JBIG2 image codec.

7. JBIG2 takes an image of text, identifies repeating glyphs and uses that fact for better compression. To avoid confusing slightly differing glyphs in things like images of poor quality prints (think e and é, or 3 and 8), it has a way of applying a diff over each instance of an identified repeating glyph.

8. This logic has an integer overflow bug: the 'number of symbols' variable is a 32-bit integer, which can be overflowed using a carefully crafted file. The attacker can thereby make the symbol buffer allocation much smaller than the actual symbol count (a minimal sketch of this overflow pattern appears at the end of this comment).

9. Making a long story short, this allows overwriting heap memory, setting arbitrary values in the objects used in the JBIG2 logic.

10. The JBIG2 logic uses AND, OR, XOR and XNOR operations when iterating through these objects (to apply the 'diff' on glyphs). The attacker can craft a file that strings together these logic operations so that it basically forms a software logic circuit.

11. So this exploit basically emulates a computer architecture inside an image codec, which can be used to operate on arbitrary memory!

Is that right? If so, this is mind-blowing.
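
To make step 8 concrete, here is a purely hypothetical C++ sketch of that class of integer overflow; this is not Apple's actual code, just the shape of the bug:

    // Illustrative only -- not Apple's code. A 32-bit symbol count read
    // from the file overflows during the size computation, so the
    // allocation ends up far smaller than the number of symbols that
    // later get written into it.
    #include <cstdint>
    #include <cstdlib>

    struct Symbol { uint32_t width, height; uint8_t *bitmap; };

    Symbol *alloc_symbols(uint32_t count_from_file) {
        // 32-bit multiply wraps: count_from_file = 0x20000000 with a
        // 16-byte Symbol gives 0x20000000 * 16 == 0 (mod 2^32).
        uint32_t size = count_from_file * (uint32_t)sizeof(Symbol);
        Symbol *syms = (Symbol *)malloc(size);  // tiny (or zero) allocation
        // ...later code trusts count_from_file and writes past the buffer,
        // corrupting adjacent heap objects. The fix: reject counts where
        // count_from_file > SIZE_MAX / sizeof(Symbol) before allocating.
        return syms;
    }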


If a hack can be called beautiful, this fits the bill. How do people come up with these?


Being trained in the Israeli intelligence corps, moving to civilian life, retaining your spook skills, and being funded by a Saudi billionaire prince who hates human rights activism and criticism.


Or maybe by running steps 1 and 2 and then working for the iOS team, as you can see if you mine LinkedIn a little bit.


You actually only have to check 2/5 of those boxes.


The biggest issue here is that this image parsing was done by such a high-privileged process. What happened to all the sandboxes and stuff?


From the original article [0], last line: "In a future post (currently being finished), we'll take a look at exactly how they escape the IMTranscoderAgent sandbox."

[0]: https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...


Is the CoreGraphics ImageIO stuff privileged?


I think the most critical part of the flow is the integer overflow bug, and it is totally avoidable. I am a software engineer at Microsoft. Half of my time was spent on security and compliance. We have the right tools and the right policies to keep such things from happening. That said, I'm not saying Microsoft software is free of integer overflow bugs. I don't intend to advertise Microsoft C/C++ development tools here, but they are the ones I know best.

Let's get to the technical part: if you are asked to implement the binary search algorithm in your favorite programming language, how do you verify your code? Unit tests. How many test cases will you need? More than 10. Binary search implementations easily suffer from integer overflow bugs (remember the one in the JDK?); as long as you have enough tests, you don't need to worry too much. But how much is enough? The reason people couldn't implement binary search correctly for decades is not that we don't know the algorithm well enough, or that we lack excellent software engineers; it's that we don't know how to test our code thoroughly. Any non-trivial C/C++ function may need tens of thousands of test cases. You simply can't write them by hand.
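
For reference, the JDK bug mentioned above was in the midpoint computation of java.util.Arrays.binarySearch. A minimal C++ sketch (not the JDK source, which is Java) of the bug and its fix:

    // For arrays longer than ~2^30 elements, low + high can exceed
    // INT_MAX, making mid negative and the array access out of bounds.
    int binary_search(const int *a, int n, int key) {
        int low = 0, high = n - 1;
        while (low <= high) {
            // BUG was: int mid = (low + high) / 2;   // can overflow
            int mid = low + (high - low) / 2;         // safe midpoint
            if (a[mid] < key)      low = mid + 1;
            else if (a[mid] > key) high = mid - 1;
            else                   return mid;
        }
        return -1;  // not found
    }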

You need the right tools: fuzzing and static analysis.

At Microsoft, every file parser should go through fuzzing, which basically means generating random inputs and running your tests on them. Not very fancy. But there is another kind of fuzzing: symbolic execution, which tries to find all the possible execution paths of your code. If you run symbolic execution on your binary search code, you can get 100% test coverage, and it is guaranteed bug-free; it is like a math proof. Note that this advantage rests on the surprisingly great advances in SAT solvers over the last 20 years. And often you need to make compromises between your business goals and security: most functions can't reach 100% test coverage, so you need to simplify them. See https://github.com/klee/klee to get a quick start. Though C/C++ is often considered unsafe, it has the best fuzzers.
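
For anyone curious what that looks like in practice, here is a hypothetical KLEE harness for the binary search sketch above (klee_make_symbolic and klee_assume are real KLEE intrinsics; the harness shape is illustrative):

    // KLEE marks the inputs symbolic and then explores every feasible
    // path through binary_search, including any overflowing one.
    #include <klee/klee.h>

    int binary_search(const int *a, int n, int key);  // from the sketch above

    int main() {
        int a[8], key;
        klee_make_symbolic(a, sizeof(a), "a");
        klee_make_symbolic(&key, sizeof(key), "key");
        for (int i = 0; i + 1 < 8; ++i)
            klee_assume(a[i] <= a[i + 1]);  // precondition: sorted input
        binary_search(a, 8, key);
        return 0;
    }

You compile this to LLVM bitcode (clang -emit-llvm -c) and run klee on the resulting .bc file.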

Then there are SAL annotations and the static analyzer. In C, whenever you pass a pointer to an array to another function, you should also pass its length with it, and in the callee you should check that length. If you forget, your static code analyzer will give you a warning. That way, if you didn't allocate enough memory, it only results in an error code being returned instead of undefined behavior.
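
A small sketch of what that looks like; the SAL annotations are real MSVC annotations from <sal.h>, while the function itself is hypothetical:

    // With /analyze, the compiler warns if a caller passes a buffer
    // shorter than `count`, or if the callee reads past it, so a length
    // bug surfaces as a warning rather than undefined behavior.
    #include <sal.h>
    #include <cstddef>

    _Success_(return == 0)
    int sum_buffer(_In_reads_(count) const int *buf,
                   size_t count,
                   _Out_ long long *result) {
        if (!buf || !result)
            return -1;           // error code instead of undefined behavior
        long long sum = 0;
        for (size_t i = 0; i < count; ++i)
            sum += buf[i];
        *result = sum;
        return 0;
    }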

The last thing: use SafeInt to wrap the size arithmetic that feeds your malloc calls. https://docs.microsoft.com/en-us/cpp/safeint/safeint-library...
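
A sketch of that pattern, assuming the MSVC SafeInt header; the wrapper function is hypothetical:

    // Any overflow in the size computation throws instead of silently
    // wrapping, so an attacker can no longer shrink the allocation.
    #include <safeint.h>
    #include <cstdlib>

    void *alloc_records(size_t count, size_t record_size) {
        try {
            msl::utilities::SafeInt<size_t> bytes(count);
            bytes *= record_size;  // throws SafeIntException on overflow
            return malloc(bytes);
        } catch (msl::utilities::SafeIntException &) {
            return nullptr;        // treat overflow as allocation failure
        }
    }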

When we move from the binary search toy example to a real code base, you can clearly see how much extra effort is needed to make the code safe. Please pardon me, but most OSS libraries don't have the resources. Many famous OSS projects are "mom-and-pop" shops: they don't have any compliance rules and they invest very little in fuzzing. So the big companies really should help them. Now you see an integer overflow bug was found in Apple's image renderer, but was the code written by Apple? Not necessarily. We all see the importance of the Open Source movement; it's time to think about how to harden its security. For example, even if I wanted to spend my free time adding SAL annotations to an OSS project I love, would the maintainers accept it?


Why aren’t you using higher-level memory-safe languages for that? In C#, the runtime checks for integer overflow can be enabled with a single compiler switch. The switch is not set by default for some reason, but it's easy enough to enable manually: a single line in the *.csproj file.

If you think GC performance is not good enough, see this proof of concept: https://github.com/Const-me/Vrmac/tree/master/VrmacVideo/Con... That C# code implements a parser for the MPEG-4 format. That format is way more complicated than GIF or even PDF, yet the code runs fine even on very slow computers (Raspberry Pi 4). There’s another similar one in that project for the MKV format.


I'd prefer to catch such errors at compile time. The more static the language is, the more optimization and analysis can be done. Sometimes the problem can be simplified: when your CPU is 64-bit capable but you limit array sizes to 2 GB, you can use 64-bit math to compute memory sizes and avoid integer overflow. Java and Google protobuf are two such examples. Sometimes the 2 GB limit is acceptable, sometimes it is not. Did you know protobuf even tries to limit string sizes to tens of MB for safety? That kind of simplification can't be accepted as a general solution.

Back to your Raspberry Pi 4 example: the CPU is 64-bit, but most users run a 32-bit OS on it, while today most Linux installations are 64-bit. I believe Google doesn't care much about protobuf's security on 32-bit systems, and neither do other OSS projects. So if you take it seriously: it works, but it is not safe (when we are talking about integer overflow).


> I'd prefer to catch such errors at compile-time.

I don't believe it's possible. These integers often come from user input, disk, or network. The compiler can't validate them simply because it doesn't have the data.

Even when it is possible, it's insanely complicated and computationally expensive to catch at compile time, yet very simple at runtime.

Runtime performance overhead is very small because branch prediction is quite efficient on modern CPUs: these branches are almost never taken, the JIT compiler knows that, and it emits code which will be predicted correctly even when uncached.
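
Sketched in C++ (since that's what the rest of this thread deals in) with the real GCC/Clang overflow builtin, a checked add is roughly what the C# JIT emits:

    // The overflow branch is almost never taken, so the branch predictor
    // learns it quickly and the steady-state cost is near zero.
    #include <cstdint>
    #include <stdexcept>

    int32_t checked_add(int32_t a, int32_t b) {
        int32_t sum;
        if (__builtin_add_overflow(a, b, &sum))  // true if the add overflowed
            throw std::overflow_error("integer overflow");
        return sum;
    }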

> if you take it seriously, it works but it is not safe

Not sure I follow. Let's pretend I am taking it reasonably seriously, despite this being an old and unpaid hobby project.

Why is it not safe? The MPEG-4 and MKV parsers are written in C# and compiled with that <CheckForOverflowUnderflow> option set to True.


Holy shit. My relatives have asked me in the past "could this [image|video|other supposedly innocuous file format] be a virus or hack my phone?". I've always told them not to worry. Can't do that anymore.


https://en.wikipedia.org/wiki/Windows_Metafile_vulnerability Long story short: Windows library routines for handling an obscure, obsolete image format had a parser flaw. Simply rendering an appropriately crafted image via the standard Windows APIs -- whether in a web browser, file explorer, file preview, word processor, anywhere -- resulted in kernel-level arbitrary code execution.

Now, we've gotten a bit smarter about this sort of thing since. Both at a high level architecturally (don't process image files in the kernel) and at a lower level (use a language that takes measures to constrain its buffers). But the basic scenario hasn't been entirely eliminated. There could be a parser bug somewhere in your web browser for example that allows a properly crafted input to hijack the browser process.


> There could be a parser bug somewhere in your web browser for example that allows a properly crafted input to hijack the browser process.

Bit of a caveat: Chromium and Firefox are probably some of the most hardened software programs in the world (for other browsers, all bets are off).

Chromium distributes its logic over multiple processes per tab, so that even if you, e.g., find a zero-day in V8, you still can't use it to get arbitrary file access without a sandbox escape. Last I checked, Firefox was getting there. Also, Firefox compiles some parsing and image-processing libraries to WebAssembly for another layer of sandboxing (and to protect against ROP exploits), and increasingly uses Rust for sensitive tasks.

That's not to say they're safe, but I don't think they're the biggest source of exploits.


> Bit of a caveat: Chromium and Firefox are probably some of the most hardened software programs in the world (for other browsers, all bets are off).

There is a certain irony in the idea that people should rely on a Google product to avoid having their privacy compromised.


I guess with Google, at least you know and give consent to the privacy "invasion" when you use their products.


> I guess with Google, at least you know and give consent to the privacy "invasion" when you use their products.

Google tracks you and adds things to your profile when you explicitly choose incognito mode to avoid the privacy invasion. This doesn't seem like informed consent to me.


Which is why the only safe way to operate is assume anything that is susceptible to outside data is already compromised - and so run them in sandboxes.


You should read the writeup. This was run in a sandbox. Sandboxes are not silver bullets and can have bugs too.


The tech is the easy part: iOS/Android have the best security teams in the world, and an unlimited budget, and sandboxing is an old, proven tech.

I guess that the politics here are the real barrier.


Not only politics: the reason some languages and OSes rule is that real progress only happens one generation at a time, to put it in friendlier wording.


"Science progresses, one funeral at a time."

https://en.wikipedia.org/wiki/Planck%27s_principle

As a general principle in life, it's quite frightening considering ever increasing life spans.


That's just a mitigation for tens of millions of lines of code written in C / C++.


That's not a solution. You're just piping the outside data into your sandbox; it can have bugs too.


This is why I run a 1-task only Windows VM inside a Linux VM on a Mac. Ain’t nobody ripping through x3 0-days for my chats.


If you're a targeted journalist, they'll go through more than three to get you. Full chains are fairly long these days.


lmao bro does all that in front of his IoT Samsung toaster that has a speaker... speakers are microphones.


Macwinux


There's no such thing as a perfect solution, only solutions that improve a bad situation.


I'm not so much saying it's a bad idea as that what my parent comment described was a logical contradiction. It isn't possible to run "anything that is susceptible to outside data" in sandboxes, because that makes the sandbox susceptible to outside data. If you're genuinely assuming that anything susceptible is already compromised, then the sandbox is accomplishing literally nothing.


I always remember a passage from a sci-fi book I read about the "multi-planet Internet" it described.

It was layer upon layer upon layer of protocols and software.

Because it wasn't possible to remove old layers (because some satellites or wormholes or whatever would stop working.)

So it was super easy to hack... and to send spam. Well, you would get killed for that, though.


Would be interested in the name of the book?


It sounds like Vernor Vinge’s A Fire Upon The Deep (& sequels), well worth finding.


It depends on which sandbox you are using. In Qubes OS on desktop, you rely on hardware virtualization, which is virtually unbreakable.


I thought Spectre and Meltdown also allowed host data leakage from a compromised guest?


Yes, microcode vulnerabilities are a problem indeed. Hopefully Qubes Air (next version 5.0) will compartmentalize even that by using separate devices as qubes: https://www.qubes-os.org/news/2018/01/22/qubes-air/.


I wish OSs would just can support for legacy stuff nobody uses and make it an explicit install for the 1% of people who need it.


Each feature has a different 1% who use it


These exploits are only really an issue for your grandparents and the like if some large-scale mass hack is happening[^2]. As long as they stay up to date, anyone not targeted by nation-state actors and not holding millions in cryptocurrency[0] likely has nothing to worry about, as these exploits are better used hacking journalists trying to expose corruption or political opponents running against the incumbent[1].

0: https://news.ycombinator.com/item?id=30322715

1: https://www.seattletimes.com/business/rights-group-verifies-...

^2: For instance, the Coinbase Super Bowl ad that was only a bouncing QR code would have been a very interesting way to start WW3 if it were Russia hacking millions upon millions of Americans' phones, exfiltrating any potentially sensitive information (company emails, etc.) in an instant, and/or destroying the device via some exploit chain that destroys the OS and requires a full firmware factory reset to recover.


I think it’s very possible to be in the grey zone. For example, I have worked as an activist, and a campaign I led was widely lauded by the mainstream media as being the key factor in a powerful government minister losing. Now, this is a western liberal democracy that probably doesn’t need to buy tools from NSO Group. But still, am I a fair target? Or what about a friend who is a journalist whose articles have recently attracted negative attention from the government of a fairly corrupt, despotic nation. Are they a target? I’m not talking about Snowden or Assange or executives at trillion-dollar companies. In my circle I know a bunch of folk who are basically just normal people with jobs, but for whom their job means that someone working for some government somewhere might like to read their texts. How wide is the net? How can we protect ourselves?


You're really cavalier about whether widespread hacks happen. See any of the text message attacks from the past decade.


Vulnerabilities on iOS are getting really scarce. People spend truckloads of money finding them and you need to pay twice that for the permission to burn said exploit.

That stuff isn't burned on mass hacks on random phone users, it's way too valuable.

BUT. There is a small sliver of time between the exploit being used on a high value target and Apple patching the hole. That's the spot where Joe Schmoe should be cautious.


You are probably right, but this attack only became visible because it had a bug. How many others are invisible currently? Well that's what I'm asking myself :)


A lot, but they're still only used for high-value targets. They're way too valuable to waste on some random person who happens to click a link.


That's a bad argument for defense.

If it can be used on one random person then it can be used on the hundreds of millions of random persons who use iPhone and Android.

And getting even 1% of those massive user bases to click on a link, then stealing their money or private information, would be incredibly lucrative even for the short period until the patch rolls out, especially with the wealthier iOS user base as a target.

In my EU country, I'm still getting regular spam SMS with links to what I presume is some older Android malware that wreaked havoc last year. So if attackers are still at it, months after a patch was rolled out, they must still be getting returns on their "investment".


Except we don't live in the past decade anymore. Even though people are still sometimes reluctant to update ("it only made my device slow!"), we've made significant progress on patch distribution.

In the past, a bug in the SMS stack could be mass-exploited and still not get fixed anytime soon. Not anymore. These bugs cost $10k–$100k now, and once you mass-exploit them, they are gone.


> Except we don't live in the past decade anymore.

You do know that is a terrible attitude for a real-world security posture meant to protect non-theoretical people's property and information against actual exploits?

> In the past a bug in the SMS stack could be mass exploited and still not getting fixed anytime soon. Not anymore.

While you may wish for patches to always take care of exploits before any phones are compromised, that's not much more than wishful thinking. You assume that all 0day exploits are both known and fixed immediately. That is 100% false.


> once you mass-exploit them, they are gone

That is only true of exploits that have obvious and visible impacts, right? If an attacker found an exploit and used it to put a rootkit on millions of phones, but did nothing with that rootkit and it had no outward markers, would anyone know?


Yes, probably the backdoors that security companies implement on the phones to exfiltrate and sell data would reveal that.


I wonder whether even as many as half of Android phones are less than, say, six months behind on security updates. Vendors are often quite slow in releasing them for any given model, and that's only while the phone still gets updates at all.


One of the earlier iPhone jailbreaks was a tiff image... complicated decompression/rendering algorithms leave room for implementation errors, which can be taken advantage of.

https://en.wikipedia.org/wiki/JailbreakMe#JailbreakMe_1.0_(i...


At some point it was also used as a way to get a custom firmware onto a PSP.

Then modders somehow managed to update the batteries' firmware (cf "Pandora battery") and use that. Sony couldn’t patch it, and it was basically game over for them until they released a new generation of hardware, with motherboards immune to the trick.

Fun times.


There's been buffer overflows/RCE exploits in all sorts of software that can parse images since, well, forever. I remember more than 20 years ago seeing a notice about the embedded Internet Explorer rendering engine in Microsoft Outlook Express having an RCE zero day which could be exploited by simply loading an image in the body of an email.

Rich multimedia parsing display systems in messaging apps are a very tempting attack surface for entities such as NSO.


Why does a messenger app need a picture viewer?


Because people send each other pictures?


> Why does a messenger app need a picture viewer?

A picture is worth a thousand words.


There are two different types of attacks.

One is drive-by attacks by random viruses and ransomware. For those cases, I would not worry about pictures.

The other is when you are targeted by regimes with an essentially unlimited budget. In that case, yes, the picture can be spyware.


> The other is when you are targeted by regimes with an essentially unlimited budget. In that case, yes, the picture can be spyware.

If this were the case, exploits would never be published or abused, and jailbreaks wouldn't exist, because this logic says that those who find exploits will either disclose them "responsibly" or sell them to a nation-state.

If the idea of non-state hackers doesn't bother you, recognize that organized crime is a billion dollar industry and fraud rings would love root access on tons of normal people's devices, including your own.


That's terrible advice, among the worst that could be given. There are many other types of attacks that are not viral, are not ransomware, and do not originate from state actors.


How does one know which category they are in?


Think about who would want to spy on you, what they'd want to know, and how much they'd be willing to spend to know it.

If the most they could get out of you was a few thousand bucks from your bank account and maybe your email password, you're probably in the first category.

On the other hand, if you have access to highly confidential information (think classified government info or you're literally working on the next iPhone) or are the type of person who makes enemies of spoiled rich oligarchs in despotic nations then you're probably in the second.


The problem is, everyone is in the second category over a long enough time frame. Hong Kongers probably thought the same, but suddenly there were crackdowns, and state actors probably would have loved to have unrestricted access to people's phones to see if citizens were exercising their "free speech" correctly.

Think about Ukraine today, the Russian government would probably love to have a way to compromise millions of Ukrainian citizens’ phones.

These people all use iPhones.


> These people all use iPhones.

A quick search tells me that pretty much all stats show Android use to be around 80% in Ukraine. Or did you mean Hong Kong? For the latter I see a 50/50 divide.

Just curious about that sentence. I don't think the stats take anything away from your general argument.


Who might be my enemy in the future? Well, maybe anyone who thinks I have something of worth to them. Let's say a social media account with a double-letter username, or anything I don't think has any worth now but that can be turned into a handsome buck tomorrow. People have been doxxed and SWATted over less.


I don’t know. If I was going to bust a move on, say, Taiwan, it might be handy to have root access to as many computing devices as possible so that I could wreak havoc on my enemy’s communication and banking systems.


Who will target you if you are working on the next iPhone?


everyone


China.

Xiaomi, Huawei, Oppo, Honor... there are quite a few Chinese phone brands that would benefit from knowing what Apple is working on.


You never really know.

But nobody is going to burn zero days on mass surveillance. It’s just for specifically targeted people.


Are you or someone you associate with interesting?

Negotiate big contracts? Work in aerospace or defense? Have access to inside information about a public company? Have access to a high-level political official, or are you one yourself?


If you are asking, you are probably in the first category, along with myself and the vast majority of people.



If you are a feminist activist in Saudi Arabia, I guess you know the deal.



Exploits via image libraries have been a perennial threat. A lot of jailbreaks on various devices and consoles over the last 20 years owe their existence to such exploits.


In the late 1990s there were a ton of hoaxes about image files supposedly being viruses. Most famously:

https://en.wikipedia.org/wiki/Goodtimes_virus

I remember telling lots of people at the time that this was impossible, because images weren't executable code, and viruses spread through running programs, not through viewing images.

Unfortunately, this elegant, straightforward distinction didn't hold up over time. :-(

https://en.wikipedia.org/wiki/Weird_machine


> Unfortunately, this elegant, straightforward distinction didn't hold up over time. :-(

I think it was more that it was never true, rather than not holding up in time. ;)

The earliest I can find is a vulnerability in Netscape 3.0 (1996), not found until four years later:

https://www.openwall.com/articles/JPEG-COM-Marker-Vulnerabil...


You just need a buffer overflow in a file format parser.

Thus the distinction has never existed. There has never been such thing as a “safe” format.


>because images weren't executable code

I believe that is what the creators of this virus must be relying on. All I hope is that creating this kind of image virus doesn't become common knowledge (because that would fundamentally reshape how we interact on social media).


> All I hope is that creating this image virus doesn't become common knowledge

All I hope is that devs start replacing parsers with ones written in a safe language.


The problem with CPUs is they don't know which instructions are supposed to run in what order. The pipeline cache goes a little way towards getting the instructions in order, but ultimately a CPU does not know which instructions it has to run for a group of instructions to not be malicious. Think of a CPU like an old human telephone exchange where the operator is plugging different cables into different sockets, and hopefully you get the idea.

I'm amazed that the tech giants, with all their funding, still can't build secure operating systems or muster the resources to reduce attack vectors within their own OSes.


We've gone from 'every OS and device is easily exploitable' to mass market devices/OS pairings where drive-by exploits cost a million dollars.


You should look up Rice's theorem, because what you are suggesting is intractable and has nothing to do with the design of CPUs.


I wouldn't consider Rice's theorem relevant to what I was thinking. Sure, all programs have common repeatable elements, like open a file, read/write, close file, so you wouldn't have an instruction or a few suddenly being run out of the blue, in effect out of context. But that's what's happening here: the normal instructions required to do a task suddenly start being interleaved with instructions that are not required in most cases, before going back to the rest of the instructions for the original task.

It's a bit like asking: would you expect virtualisation instructions to run when you load a JPG to display on screen? I wouldn't expect virtualisation functionality to be running in this example.

Or would I expect encryption instructions to run if I were to load a sound file to play over the speakers? No, I wouldn't expect that to happen, but that's the sort of thing happening here: instructions not normally associated with a task are occurring. So how do you detect, alert on, and maybe halt those instructions?

There isn't anything in the CPU, AFAIK, that would pick this up; it would need the OS to act as a co-party to perhaps halt it, and I don't know if the OS or even AV software goes to this extent. At best, you'd have something like a dmesg feed or Intel Processor Trace (https://news.ycombinator.com/item?id=30110088) to get the output of instructions being called (possibly independently of the OS), but like I say, I don't know of any OS or AV product that goes to this level of monitoring.

That's where I'm coming from.


"Cook's egg" is a recommended reading. Summary here (spoilers ahead): https://icdt.osu.edu/cuckoos-egg


That summary sparks interest indeed, thanks for recommending! Just ordered a copy.


Was "Cuckoo's Egg" autocorrected to "Cook's egg"?


Yes, indeed. I don't see an edit option.


Can't edit after two hours, but it's present before that cutoff.


Image parsers are complicated and often exposed to untrusted data, they’ve always been a big vector of exploits.


they are safe, unless they're being targeted by state actors.


As of Windows 7, the parsing of some font files was handled inside the Windows kernel.




>Fearful that her iPhone had been hacked as well, al-Hathloul contacted the Canadian privacy rights group Citizen Lab and asked them to probe her device for evidence

I wonder if she had a preexisting relationship with the group, or how she was connected with them.

Presumably her work included a network of journalist protection orgs. But had she had a device “probed” like this before?

It’s such an important point of contact I wonder about the details of this.


Loujain has strong connections in the metro Vancouver BC area. People working in network security/infosec and associated with UBC IT are well aware of Citizen Lab and possibly put her in contact with them. Citizen Lab at the University of Toronto is fairly well known throughout the Canadian academic research/computer science/network engineering community.


This Saudi women's rights activist, Loujain al-Hathloul, is one of the most courageous people I have ever heard of. She was arrested for driving a car in 2018 and remained in prison till 2021, and she was subjected to severe torture and abuse.

https://en.wikipedia.org/wiki/Loujain_al-Hathloul

Her husband is another exceptionally courageous person and it is inspiring to see their cheer and steadfast courage despite all the hardship they face. It is sad to see Saudi Arabia continuing to harass this couple.


Would compiling image parsers with ASLR and bounds checking prevent these zero-click hacks?

I haven’t researched the exploits in detail but it seems to me Apple can develop better protection against such zero-click exploits.

At the very least, iMessages shouldn’t preview images from unknown contacts.


The image parser uses ASLR. The Turing-complete NAND computing device they describe in the article was used to do computations on the pointers leaked with the infoleak, resulting in an ASLR bypass. Brilliant.


IKR, sad to see such ingenuity used to hack activists.


I'm convinced that a bad image parser is apple's backdoor, but I only have my paranoia as proof.


What does Apple stand to gain from a backdoor? It's clear what the cost of the risk is, but what is the gain?


Just spitballing, but market access to China and the like?


Why use a backdoor if you have the front door, the walls, the roof... and the entire server?

Censorship, Surveillance and Profits: A Hard Bargain for Apple in China - https://nyti.ms/3oAvIVH

> Apple has largely ceded control to the Chinese government.

Chinese state employees physically manage the computers. Apple abandoned the encryption technology it used elsewhere after China would not allow it. And the digital keys that unlock information on those computers are stored in the data centers they’re meant to secure.


Isn't the walled garden and locked down OS/hardware supposed to prevent these things?


>Isn't the walled garden

No, only real security helps; some dude reviewing GUI or business-related rules helps with nothing, and Apple's automated security checks are weak. Stronger sandboxes are needed, in walled gardens or walled gardens with a gate.


Only provably correct software would prevent such things.

A walled garden could make it simpler to enforce that only software proved correct can be installed, but without the proof it does not guarantee much, only makes certain things less probable.

Writing provably correct software is now a rare and expensive engineering feat. Most consumer OSes have nothing of this sort, sadly. And I mean just the limited set of system software, not third-party installable apps, of course.


Formal correctness proofs are both unattainable and insufficient. We don't know how to do them at the required scale, and they don't save us from flawed formal specifications; we'll have the bugs in the formal requirements instead of in the code.

There are more cost-efficient and proven ways to effectively address these kinds of vulnerabilities, like limiting complexity, using programming language features to eliminate classes of bugs, borrowing from other verifiable engineering disciplines, and compartmentalisation. Proofs also have their place locally, in verifying small components. But making this commercially viable would require large numbers of customers and users who are willing to make big compromises in functionality, features, third-party apps, etc.


Limiting complexity is often not an option, because the complexity is inherent to the subject area.

Things like image format decoders are the easiest to produce formal analysis and proofs for. Sadly, it's still too expensive and slow to do.

Fuzzing as a mandatory testing step could be useful in some cases.

I agree about bugs in specifications though :(


In this case limiting complexity would have worked: don't support a frivolous number of image codecs (like the fax format here), just support one or two, and make sure there is high assurance of their safety using known methods (security-oriented code design, a memory-safe language, fuzzing, and sandboxing).


"Beware of bugs in the above code; I have only proved it correct, not tried it." https://www-cs-faculty.stanford.edu/~knuth/faq.html


It prevents third-parties from introducing these kind of vulnerabilities, but it doesn't prevent Apple from introducing them. It just makes it really hard to find (and fix) them.


No. It is supposed to mitigate these things. In a complex real world system there's no such thing as perfect security.


No, it's supposed to make Apple's shareholders rich. It doesn't do jack (as the article clearly shows) for security.


Whoever advertised that, they lied. There's no actual way to prevent these things - aside from not participating at all.


Why doesn’t Apple have a team doing what Citizen Lab does, instead of victims contacting a third party?

Also, I wonder if Google Pixel is more secure?


Speaking of Google's security, this gem was in the article, "Soon after her release from jail, the activist received an email from Google warning her that state-backed hackers had tried to penetrate her Gmail account."


Because Animoji was more important.

In case anyone was wondering: yes, the amount they spent on that particular feature vastly eclipsed their spend on the kinds of teams that could have caught this.


To be fair, Animoji probably sell more phones than protection against state-backed adversaries does.


I really don't get why you would make a statement like this with such confidence. Apple employs hundreds of people on its security teams and pays each of those engineers fairly well. Animoji is a spin-off of their Face ID work, which you could arguably call a security feature in its own right, though not strictly a red team.


10 years from now:

“security researchers find a blank Animoji file that ended up being a security exploit wrapped around an OS written with image files”


I find it very hard to believe that the richest company in the world didn’t have enough money to implement Animoji without eclipsing the budget of its security team. Could you provide a source for that?


3rd richest if we are going by profits: https://companiesmarketcap.com/most-profitable-companies/


I mean, that's not really relevant to my comment: even if by profits it lags in 3rd place, it's significantly ahead of Microsoft and Google. And Apple doesn’t even have its own cloud business like Azure to take care of, so it's very tough to believe Apple spent all its money on a novelty trick.


The short answer is it doesn’t matter when a government comes after you because of their ability to spend the resources necessary to infiltrate your device.

Apple’s security teams aren’t public facing; I suspect if a corporate or university customer had security issues, they would be addressed.


Apple has recently started notifying people who they suspect to have been targeted by state-sponsored attackers. I believe Google has a similar program in place for a while.


A recent Pixel with GrapheneOS installed should be pretty secure.

https://en.wikipedia.org/wiki/GrapheneOS


How come the company that made Pegasus is being sanctioned, but the government that used Pegasus to abduct, kill, and then dismember a dissident - isn't?


The simple answer is that to the people who really matter in decisions like that, those in the intelligence and state departments of countries like the US, software like Pegasus is an important tool for staying ahead of adversaries. And in order to do its job effectively, it’s best if people don’t know about it. Pegasus may be sanctioned, but another company will make a product to take its place and the game will start again. A similar argument can be made in favour of keeping the Saudis on side; they are useful for what they can do to help attain the foreign policy goals of the US and its allies. Those goals are mostly related to making sure the US continues to have access to all the stuff it needs to stay on top of the geopolitical tree.


How does it help to stay ahead of adversaries when they're selling it to basically everyone including most adversaries?


You can sell the basic version, leaving the premium one for yourself.


Because that would set the precedent to sanction the government that rubber stamped the selling of Pegasus also.


Because it’s easier?


That government is an important US ally.


By all the things the US does for them, they must be really important.


Well, look at the map. Look at the neighbors. Look how many 9/11 culprits came from there. Look what is in the ground. The US is more than happy to keep such dictators in place to get access to and control over all this and more.

It really doesn't matter how many people they behead or cut to pieces in their embassies; if you are not a US citizen you mean little to nothing, and that's explicitly stated in US law. I guess average Americans have no idea how much this pisses off large parts of the world.


Your use of the word "happy" is a bit of a stretch. Welcome to realpolitik. Presently, Germany is more than "happy" to look the other way on Ukraine out of desperate need for Russian gas.


I have always wondered how exploits like these manage to stay secret for any length of time. I would imagine that leaking such a thing could be very rewarding, but I guess there is too much risk involved.


I guess enemies of a state should be using feature phones?


Feature phones are notorious for having terrible security. Most of them can be quietly hacked with an SMS from the carrier. And they support MMS so you're not immune to image parsing bugs either.


Loujain is a University of British Columbia graduate and was a political prisoner:

https://www.google.com/search?client=firefox-b-1-d&q=loujain...

https://en.wikipedia.org/wiki/Loujain_al-Hathloul

Let's not forget that the present Saudi regime (MBS) is responsible for luring political opponents/journalists, killing them, and dismembering them with bone saws.

https://en.wikipedia.org/wiki/Assassination_of_Jamal_Khashog...


The irony is that he's also the one who forced the country to allow women to drive.


It's just a very minor victory for women's rights, and it was a very impractical law anyway.

It's still a very oppressive country where being a dissident means getting threatening phone calls from the security agencies.

Remember 2011, when there were popular protests all around the Arab world? Well, they tried to do one in Saudi as well, but the intimidation was so high that it never got off the ground.


The entire reason Saudi is so powerful is its oil wealth and its special relationship with the US since WW2.


[flagged]


wonders of i'net..

You know, there is something called female circumcision. And the society and culture behind it have applied it even to women born and living outside it, when they visited their relatives.

(described in book: Ever Since Adam and Eve: The Evolution of Human Sexuality - Malcolm Potts and Roger Short, 1999)


Thanks for the non sequitur.


[flagged]


People are automatically assuming I'm siding with MBS.


Call me cynical, but no part of that seems anything other than a PR move, and a weak one at that. He's just trying to build an image, though how successful he's been is sometimes shown through (non-malicious) comments like yours.


Tech has essentially created this problem. Can’t tech fix it?


Not to sound like a broken record, but… this was yet another memory-management bug that would have been prevented by using a memory-safe language.


On a brighter note, Apple is currently in the process of converting almost all iMessage components to Swift for this reason. I'm sure it is taking many engineering hours, and image parsers/open-source libraries like this are the most difficult to convert.


Just one component, the one that parses incoming messages. The problem here is that it parsed the message and decided to pass it to ImageIO, which is written in C++.


Is there a link that mentions this? The bug was in the ImageIO/CoreGraphics layer, so are they rewriting all CoreGraphics components in Swift?


Yeah, something tells me that they're not going to be rewriting an image decoder to have a runtime...


Let’s hope they make swift work of it


1000 FTEs thwart the world’s nation state hackers?


I’m only a few more CVEs from advocating that C++ and Objective(ly)-C(rap) proponents be subject to registration and public humiliation whenever the (inevitable) next issue occurs.

I get it, legacy crap has momentum and you can’t ignore that. What’s not OK is the mountain of people who pretend that’s not a problem.


Hence why legal liability is so relevant for making this a reality.

The industry will only really change when pushed to do so.


Or memory safe hardware.


Construction has essentially created the problem of potholes. Can they fix it?


"The invention of the ship was also the invention of the shipwreck" ― Paul Virilio


Good analogy.

The question here is: can they fix potholes faster than new ones show up?

Seems the answer is no for tech. And for construction, these days.


It takes an expert to know that there's a vulnerability, whereas a construction engineer can "see" the pothole and fix it. A software engineer has to learn from exploits that there's a vulnerability before they can fix it. The day when OSes are written in memory-safe languages like Rust is not far away.


You mean far away like 1961?

https://en.m.wikipedia.org/wiki/Burroughs_large_systems

Nowadays still being sold to governments that care about security.

https://itupdate.com.au/page/unisys-clearpath-mcp-unsurpasse...

https://www.unisys.com/ms/client-education/course-catalog/cl...

Or maybe 1983?

https://en.m.wikipedia.org/wiki/Rational_R1000

Maybe 1982,

https://news.ycombinator.com/item?id=22375449

Plenty of examples (those are a tiny sample) of how safe OSes should be written; until there is liability, the easiest way will always win.


It's more complex to find security bugs, yes, but I think the analogy stands.

In order for a construction engineer to "see" a pothole, they need to actually know where the pothole is and physically go there.

When you have millions of kilometers of paving across a continental-sized country like the US or China, for example, this is unfeasible. "Seeing" a pothole isn't as simple as it might first appear...


I think the answer is probably a resounding yes for both, if you look at the trend of vulnerabilities per unit of software generated.

The move to a large majority of software being run in a sandboxed environment has drastically reduced this sort of thing.


They surely do, because if I can prove the pothole broke my car, I can sue them, or have my insurance take legal action.

Eventually this will be standard in software as well.


As long as people aren't put in jail for faulty software, it will never be fixed.

Remember DigiNotar?

Who knows how many lives were affected in Iran...


Surgeons aren't put in jail for faulty surgery. Wanting this for software is a bit draconian.


Surgeons can at least be held accountable and can lose their license. That has never happened to software developers.

That has never happened to software developers.


I think we have vastly different standards on what's reasonable and prudent between software developers and surgeons!


Nope, it can't. As long as you use tech, it's a risk management situation, and a cat and mouse game.



