Hacker News
You’ve Got 0-Click Mail: Unassisted iOS Attacks RCE via Mobilemail/Maild (zecops.com)
273 points by DyslexicAtheist 9 months ago | 128 comments



A number of interesting points:

* This was abused in the wild in targeted attacks against individuals including corporate executives and journalists, and, based on the presence of suspicious 0x41s in the exploit, likely purchased and minimally modified.

* On iOS 13 the attacks are even more powerful, as moving the mail parsing out-of-process means it is “0-click” and can silently crash without alerting the user of a problem (and the process’s VM layout is much more consistent to boot). Humorously, the error message that appears (“This message has no content”) is a common bug anyways so users have been trained to ignore it.

* Memory mapping files is hard, and there are some non-trivial error cases that need to be checked.

* A fix is in iOS 13.4.5 beta.

(By the way, MobileMail and maild from the title are process names and are spelled as such.)


An alarming point:

> Q: Since when iOS is vulnerable to these bugs?

> A: iOS is vulnerable to these bugs at least since iOS 6 (Sept’ 2012). We haven’t checked earlier versions.


In terms of threat modeling: Before we all go and disable the first-party mail app for however many weeks until 13.4.5, is there any reason to think this attack has leaked or will leak into the low-level scammer world?

It was presumably in the realm of million-dollar zero-days, so unless you have reason to believe you would be targeted by a state actor, is it safe to assume the current unpatched risk is negligible?


The original exploit was probably somewhat difficult to find, but the blog post gives a fair amount of detail on how one might go about exploiting this bug. That being said, I don't think the heap overflow is enough to actually get code execution; you'd probably need an info leak to defeat ASLR as well (although ASLR on iOS isn't the best…). Your average script kiddie isn't going to pull this off, but the knowledge required is likely far from nation-state level.


> disable the first-party mail app for however many weeks until 13.4.5

If the claims in this article are true (and I have no reason to doubt them), I don't think Apple will leave this unpatched for weeks.


> Humorously, the error message that appears (“This message has no content”) is a common bug anyways so users have been trained to ignore it.

Source? I've been using MobileMail for more than a decade and never seen this bug.


Anecdotally, I've seen that pretty regularly for several years.


They're after you.


I think it shows up more frequently for certain mail providers. Maybe you use Exchange or Yahoo or Gmail and someone else uses one of the other ones? That changes quite a bit about how MobileMail works.


> Source? I've been using MobileMail for more than a decade and never seen this bug.

I see it frequently, sometimes several times a week. I saw it just yesterday in the macOS mail client, but I don't know if it's related.


You must be an uninteresting target then.


does anyone have any idea when the fix will be pushed? i need to ascertain the amount of panic to apply to myself and my household iOS devices.


Current public release is 13.4.1 and the current beta is 13.4.5 beta 2, released on a weekly cadence with the next one predictably likely to arrive today. Not sure what happened with .2 through .4, but my bet is that 13.4.5 GM will come out today for developers and be promoted to release Friday.


When iOS 13.4.5 is released publicly, which is probably some time in the next month or two.


woah.

i hope you're wrong, no offense.


I sure hope so as well, but usually iOS versions with actual beta builds (that is, not a 0.0.1 "hotfix" release) come about 4-10 weeks apart. There is a chance that this "0.0.5" release is meant to be fast-tracked through the release process, but this kind of thing is rare anyways so I can't say much about it. As of today we are on developer beta 2 of iOS 13.4.5.


I remember working with an individual who stated on a public mailing list that an OS X version was going to gold master. This person was subsequently fired for releasing that information. How things have changed.


Most people make these statements as external third parties who have never worked at Apple or similar.

They've just noticed that Apple releases stuff on fairly strict schedules and predictable patterns, so it's easy to predict when the next minor point release will come out based on previous behavior. Similarly, a new iPhone is released every year, after July at the very least.


I would expect anyone who is caught willfully disclosing that information today to be harshly disciplined and likely terminated.


HN is all in on foolish consistency wrt title capitalization.


If you mention it, occasionally it will get fixed.


If you email the mods using the footer Contact link, it'll get fixed more often.


If you want to reduce your risk to these 0days:

1. Go to Settings > Password & Accounts. Set Fetch New Data to "Manual" and disable "Push". This will ensure that new mail is only downloaded when you specifically request it, and it will no longer do so automatically.

and/or

2. Use Safari or different e-mail clients such as Gmail and Outlook. These mail clients are unaffected by the currently disclosed 0days.

There's also no word on whether PAC (a new processor security feature available in current model iPhones) affects exploitation of these bugs. You're safer if you're running a device with an A12 CPU or higher (anything newer than an iPhone XS or XR).

You can use the iVerify mobile app to get push notifications when new public exploits for iOS come out. The news feed contains specific instructions to mitigate the impact. See more:

https://apps.apple.com/us/app/iverify/id1466120520

https://twitter.com/IsMyPhoneHacked/status/12529544579540090...


> anything newer than an iPhone XS or XR

I think you mean an XS, XR, or anything newer


With the iVerify app, I'm suspicious of any claim that it can detect threats. How would such a thing work on iOS? My understanding of the security model is that it makes that sort of thing very difficult.


iVerify uses a variety of methods to detect if the phone has a security issue.

Some of them are deterministic and report with complete certainty. For example, the Volexity report about EVIL EYE includes IOCs for the exploit payload and we can detect and report whether that payload exists on your phone. (read that report here: https://www.volexity.com/blog/2020/04/21/evil-eye-threat-act...)

We also include more probabilistic, generic, or environmental checks that look for deviations from the known good for iOS. These are based on our deep knowledge of iOS and use little-noticed side channels to report information about iOS to the iVerify mobile app. These can possibly detect unknown jailbreaks. We walk a thin line to write our checks without violating Apple's rules (e.g., no private APIs).

It's certainly not foolproof, there are ways that a malware toolkit could avoid iVerify, but it's a great extra seatbelt to wear. It's also extremely effective at determining exposure to an attack immediately after IOCs are released.


> We also include more probabilistic, generic, or environmental checks that look for deviations from the known good for iOS. These are based on our deep knowledge of iOS and use little-noticed side channels to report information about iOS to the iVerify mobile app. These can possibly detect unknown jailbreaks.

Does this differ significantly from existing jailbreak detection tools, like checking for suspicious-looking files on disk, address-space scans for unexpected code and invalid signatures, and lower-level APIs having different behavior?


There's a substantial limit to public knowledge about those methods and APIs. We've been able to discover more of them, and more that work on current versions of iOS, than other teams. Those techniques unfortunately go stale very quickly; most of what you find stopped working in iOS 12 or earlier, yet people still rely on them and think they work.


I wrote @ronomon/mime (https://github.com/ronomon/mime) as a strict fuzz-tested MIME parser in a memory-safe language to prevent similar attacks such as:

1. Malicious character encodings that can crash email clients such as Apple Mail.

2. Malicious RFC 2231 continuation indices designed to cause overallocation.

3. Base64 data containing illegal characters such as null bytes.

4. Unterminated comments and quoted-strings.

5. Missing multipart parts (e.g. no terminating boundary delimiter).

6. Malicious data designed to cause CPU-intensive decoding or stack overflows (aka MIME bombs, the email equivalent of zip bombs).

7. Malicious multiple occurrences of crucial headers and parameters, which could cause clients to render an email differently from that scanned by antivirus software.

8. Encoded words containing malicious control characters (cf. Mailsploit).

While it won't prevent this exact attack because it seems to be purely size-based, @ronomon/mime was able to detect attacks such as Tim Cotten's "Ghost Emails" Gmail hack [1] as well as recent CVEs reported against ClamAV [2] and SpamAssassin [3], which are variants of a MIME multipart attack I disclosed through Snyk, "How to crash an email server with a single email" [4].

[1] https://blog.cotten.io/ghost-emails-hacking-gmails-ux-to-hid...

[2] https://blog.clamav.net/2019/11/clamav-01021-and-01015-patch...

[3] http://mail-archives.apache.org/mod_mbox/spamassassin-announ...

[4] https://snyk.io/blog/how-to-crash-an-email-server-with-a-sin...


> as a strict fuzz-tested MIME parser in a memory-safe language to prevent similar attacks such as

As the "memory-safe language" appears to be JavaScript, which lacks integers, how does the parser safely handle the 7bit encoding that still comes out from a large variety of email hosts?


I just googled "email 7 bit encoding" and yeesh, that is gnarly (here's an explanation for anybody who's curious: https://stackoverflow.com/questions/25710599/content-transfe...).

However, I can tell you that JavaScript doesn't lack integers. It tries to make you think that numbers are just numbers, but if you store an integer in JavaScript it is represented as an actual integer. Equivalence checks work, there are even bitwise operators. It's only once a decimal is introduced that the representation changes to floating-point.


> Equivalence checks work, there are even bitwise operators.

That's not exactly proof that JS has integers. I mean you can still do this:

> 1 === 1.00000000000000000000000000000000000000000000001

JS implementations may use integers as an optimisation. The problem is when you need to be sure something actually is an integer.

You can also have bitwise operations on doubles, you just take part of the number and toss away the rest, and pretend it's an integer. But occasionally, and unexpectedly, it may cause a rounding issue, depending on how it is done.

However, looking more into it, this library relies heavily on the Buffer [0] API from Node (which does have proper integers; Buffer is a Uint8Array), which handles 7bit data by treating it as latin1 (ISO 8859-1) and tossing away 1 bit of every byte. That might not be a bad way of handling it.

I don't know enough about Node's particular implementation to judge whether or not there are problems here. I do know that 7bit encodings can break a whole lot of parsers in unexpected ways.

[0] https://nodejs.org/api/buffer.html


> I do know that 7bit encodings can break a whole lot of parsers in unexpected ways.

That would be interesting. Would you please share some unit test cases and I will run them past @ronomon/mime?


This is good work, thank you. I've added to my testing toolbox.


Important point buried waaay at the bottom:

> The vulnerability allows to run remote code in the context of MobileMail (iOS 12) or maild (iOS 13). Successful exploitation of this vulnerability would allow the attacker to leak, modify, and delete emails. Additional kernel vulnerability would provide full device access

This is not a full-device/root RCE, and the sandbox / ASLR does work to contain this.


Is it possible to use the exploit to send e-mails From the compromised device?

I’ve had a problem with lots of bounce mails today on one of my accounts. Apparently somebody used my SMTP for spam which I find hard to believe since I keep my password safe. It’s not part of any known public data set.


There is a good chance that the attackers have this capability also. It would be an additional component of an exploit chain. The authors did not investigate this.


Is that really that important when email data will often be the primary target of the attacker?


Yes? Details on the scope of an issue are pretty damn important.


Since it is a bit buried in the comments, and not explicitly mentioned in the article, here is a working mitigation.

From the founder/CEO of ZecOps twitter directly: https://twitter.com/ihackbanme/status/1252990118945681409

Q: If a user turns off mail sync for their accounts into the mail app, is that a mitigation?

A: Yes. If a user is not syncing emails it would mitigate the issue.


If you turn off mail sync you can no longer read mail right?


Correct. If you fetch mail, you're vulnerable (at least that's my understanding).

You can use third-party apps, or Safari if it's a webmail.


You would need to read mail with a third party app.


You can still manually fetch the mail.


Manually fetching would still execute any malicious code contained in the mail if I understand the article correctly. The only way to be protected seems to be to not use Mail.app until iOS is patched.


In a more general sense, what level or standard should advertised privacy claims be held to? How are "privacy" and "security" quantified? Are there any real-world tests they have to pass? Or is it simply: "Yup, our product is secure" (TM) And if it does get breached, should they release a statement correcting previous claims? Or need to hold off on running any privacy ads until it's fixed?

Actually, now that I re-watch an ad [0], they're not even making any claims about the iPhone's privacy. It's just showing a bunch of people doing private things, then posing a hypothetical question:

  "If privacy matters in your life"
  "It should matter to the phone your life is on"
  "Privacy. That's iPhone."
Notice "It should ..." How many rounds of legal do you think that text went through?

[0] https://www.youtube.com/watch?v=A_6uV9A12ok


I think Apple has been around long enough to know better than to make claims like "iOS is secure" or "macOS is secure".

Oracle, for example, apparently learned the hard way about claiming software is "unbreakable".

Everyone here on HN knows that pretty much nothing is "secure".



What specific “claims” are you talking about, since everything in that link appears to be clearly true?


My idea is that a company should be allowed to claim any numerical value they want for "privacy" or "security" in advertising, but, if the "privacy" or "security" is breached they should be required to pay out the number claimed to the entity who breached it. In the case of mass consumer products, they should probably also be required to specify a per-unit value.

This scheme has many advantages. First, it prevents companies from overstating too much since if they claim a number much more than the cost to find a breach, then it becomes profitable for white-hats to demonstrate that (e.g. they say $1 Billion, but it only costs $1 Million). In fact, if they really overstate it they will be appropriately "fined" for their false advertising. This means that the number will probably be similar to or less than the true cost.

Second, it allows companies and users to choose their risk. If a user has uniquely valuable data or a specific use case with greater requirements, they can choose a service with the level they deem appropriate. If a company manages unimportant data or is too new to have high "security", they can specify a low number to properly reflect the value of data they can or will protect. This avoids the problem of a single fixed value that could prevent low "security"/importance systems from being created and could shield high "security"/importance systems from the liability they should have to manage.


Can anyone confirm exactly what needs to be done to temporarily disable the Apple Mail app on the latest production version of iOS (13.4.1) until this is resolved?

I've never been entirely confident in my understanding of the background processing model for iOS apps, and this attack potentially affects people I know.


Disabling the mail feature on all the accounts configured on the phone (some accounts have multiple features like contacts, calendars, etc - those can be left enabled).

Alternatively, disable push e-mail, set the refresh interval to Manual, and make sure to never open the Mail app (as opening it could trigger a manual refresh).


Thank you, but I think I'm really asking how exactly that "disabling" should be done. The UI has changed repeatedly over the years, and it's not clear to me that things like stopping the front-end app from loading, force-closing it if it's already open, or changing settings related to mail accounts would necessarily be sufficient here. Presumably deleting all of the mail accounts on any affected phone would do it, but "You just need to delete everything and then you can set it all up again later" isn't great advice to have to give to a non-techie relative if it's not necessary to go that far.


On iOS 13: Settings > Passwords & Accounts > Select each account > Disable the "Mail" slider.


Thanks. That's what we concluded as well, and having now had chance to try it, it does seem to be sufficient here.


From the founder/CEO of ZecOps twitter directly: https://twitter.com/ihackbanme/status/1252990118945681409

Q: If a user turns off mail sync for their accounts into the mail app, is that a mitigation? A: Yes. If a user is not syncing emails it would mitigate the issue.


The fact that the report mentions a daemon (maild) means that it may not be possible for an end user on a non-jailbroken device to stop activity. (For a non-Apple app, simply force-quitting by swiping up in the switcher would go 99% of the way.)

I'd guess that removing mail accounts from the device would do the job, though.


It's possible to delete the Mail app (which I had already done on my phone). I'd hope that would kill related daemons. I'm jailbroken as well so I'll check if the daemon is running later tonight.


Good point; one would hope this would do it. :)


You can turn it off under Screen Time -> Content and Privacy Restrictions -> Allowed Apps.


There's no guarantee that restricting access to the UI of the Mail app would prevent it from refreshing accounts in the background.


I was under the impression that it disables it on a deeper level since disabling Camera, for instance, disallows apps (like Snapchat) from accessing the camera even if you'd previously given Snapchat camera permissions.


Is there a server-side filter that would kill these mails before they get sent to the device?


Good question, I'd expect it would be possible to add that to the Apple e-mail servers and automatically protect every version of iOS.


Does anyone use Apple email?


Yes, a lot of people do. However, as you indirectly point out, this would not be a real fix because presumably most people do not use Apple's servers for their email.


If Apple, Google, and Microsoft could catch these in-transit it would prevent a very large number of attacks. (And 100% of the devices I'm responsible for.)


> The suspicious events included strings commonly used by hackers (e.g. 414141…4141).

I had to look this up. "AAA…" (0x414141… in ASCII/UTF-8) is traditionally used as arbitrary user input that's easy to spot in places it shouldn't end up.

https://security.stackexchange.com/q/18680


Any email client that attempts to implement displaying rich text messages and multimedia in email bodies constitutes a huge attack surface. Outlook proved this 20 years ago. Apparently we haven't learned much.


Any email client that doesn't attempt to implement displaying rich text messages and multimedia in email bodies isn't going to get very far in the modern world.


I agree, but I also know people with access to some critical systems who either do their email using mutt, or run their email and other internet-facing things in a sandboxed vnc-over-ssh session to a secondary workstation OS.


Ideally software would present the user with the option of going into "maximum security" mode, which limits functionality in favor of a reduced attack surface. Then the user can decide for themselves if they are willing to make such a sacrifice.

As an added bonus, it provides users with interim fixes for situations like this, where the exploit is known and in the wild, but a patch is some time away.


It's not a solution for iOS, but FairEmail is an open source Android mail client that fits this description. It can work as a text-only client for sending and receiving, and defaults to pretty much everything being safe. Links are hyperlinks, but bring up a pop-up dialogue which shows you the destination, and it will even unmangle common trackers from links, or MS Outlook advanced-protection-mangled URLs.

It pretty much defaults to being this "reduced attack surface", and I wish more apps were like this. It's probably one of the more complex settings UIs you'll ever find in a mobile app, but that's because almost everything is able to be enabled or disabled... Want it to check and validate DKIM locally? That's a toggle switch. Want it to auto remove tracking pixels from html emails? That's a toggle. Want to view email from trusted contacts in html view? Pretty sure that's a toggle too.

No connection, just a happy user of a nice open source email client.


This bug appears to be about MIME parsing, so not displaying rich text or multimedia wouldn't have helped.

Implementing the parsing in a memory safe language would have helped.


Outlook has bugs in rich e-mail rendering, so it's a bad idea? Are browsers a bad idea? I hate looking at weirdly formatted e-mails, but it seems to provide pretty good value to some people.


Fantastic read, pretty terrifying to see how stealthy this attack is, glad this report came out.


Website is down for me (522 error from Cloudflare). I've found this cached version: https://web.archive.org/web/20200422184540/https://blog.zeco...


It's Outlook Express all over again.


this comment made me instantly 15 years younger but without the hair I've lost in that time.

RE:FW:RE:FW: that was a disaster.


If you were using Outlook in other languages, you would have seen Microsoft making the exact same mistake when it comes to localisation that they made so many times before. So you'd have email subjects like this:

  Re: SV: Re: SV: Re: SV: Re: SV: Re: SV: Re: SV: Some topic
And adding one of those for each message in the thread. (SV is short for "Svar", which means "Reply" in Swedish.)

You'd have thought the mess of localising the actual directory names in Windows (i.e. c:\Program Files, etc.) rather than making it a display property in the file manager would have taught them something, but it didn't.


One of the things I’ve wanted to do since my iPhone started showing weird behavior is the equivalent of a nuke and rebuild from a verifiable source, just to make sure any malware was removed. But I couldn’t find a way to do this; it seems you can only do a factory reset, but that just resets using the software on the phone, so how does it remove potential malware? Is there a way to do this?


When you plug your iPhone into a computer a button appears in iTunes labeled "Restore iPhone..."

In years past, you could use this to download a fresh copy of the OS and install. But I don't know if that's still what it does.


> But I don't know if that's still what it does.

It does. If you don't have a cached .ipsw for the latest iOS version, iTunes will download it for you. Your iPhone also contacts Apple to verify the IPSW iTunes gave it is both "from Apple" and still signed (https://ipsw.me/iPhone12,1) before installing it.


Ah! Thank you both, I’ll try this later.


How can Zecops keep sufficient records to identify instances of this attack in the past without keeping a massive treasure trove of private data?

Surely you would need to keep the raw content of every email of every client to identify this?


Perhaps they provide their clients with a script to run that identifies suspicious emails.


It was my understanding that iOS devices have kernel or hardware checks that prevent unsigned code from running. This is why JIT languages don't work there.

Is that true? If so, how do remote code execution exploits like this work?


Correct, most processes on iOS do not have the ability to JIT code. Usually exploits such as these rely on return oriented programming techniques (https://en.wikipedia.org/wiki/Return-oriented_programming) to bypass this, although Apple has added hardware mitigations for this in its newer chips.


The point of pretty much all exploits is to find ways to bypass that very feature.


Is the iOS Mail app still not sandboxed?


Not sure it matters a ton if it is or not. Think about how many accounts you have that allow password reset via email. What's even higher value that they'd look to get to from email?


I imagine access to your emails is what you would want anyway—these days it’s the strongest proof you have, stronger than ssn.


No, it is.


Any chance someone could fix the title? (Add a "!" or ":" after "Mail", as is in the original; fix the capitalization on MobileMail and maild)


While clever, I think the first part of the title can be dropped and “exploited in the wild” added.


So if you never used mail on your iPad, can you ignore this?


If you've never used Apple's Mail app on an iOS device, the vulnerability doesn't affect you.


What are the difficulties in making this into a worm and infecting billions of iOS devices?

It seems we have had a few 0-day iOS vulnerabilities lately but so far no one has cared enough to build something really malicious. I understand Apple has patched them quickly, but updates still take a week or two to reach all devices.


> …we surmise with high confidence that these vulnerabilities – in particular, the remote heap overflow – are widely exploited in the wild in targeted attacks

“Surmise” is the wrong word; the authors have actual evidence this is true.


Holy crap.


I keep reading about this and that new system feature which is supposed to make X class of bugs impossible. What silver bullet would prevent this kind of exploit?


Use of a memory-safe language would prevent the heap corruption. A better type system would make it hard to ignore the error.


So disable the mail app until the next iOS release?


This is yet another unsafe language bug. I know it's no easy task, but the sooner Apple gets iOS rewritten in Swift the better.


I'm sorry, but that is likely never going to happen. iOS is a custom version of OS X, which is a fork of FreeBSD. All that C is likely never going to be completely replaced.

I love Swift, it's a great language, but we shouldn't rewrite our stacks every time a better language comes out. This is where the engineering trade-offs come in. Maybe over time, but over a long period of time.


I think you are correct in saying that iOS will never get rewritten.

But I think that you are incorrect in saying that it shouldn’t get re-written.

Memory-safety is a compelling reason to re-write something. It’s not just a flavor of the month thing, there are real and large security benefits to rewriting your shit in a memory safe language.

Honestly, for any large project (i.e. the scale of iOS) it is amusing to see people think that they can forego a memory-safe language and still be “secure.” Subscribe to the iOS disclosed-vulnerabilities mailing list and take a look at how many of those vulnerabilities are because of the lack of memory safety. Hint: it’s often the vast majority of them.


What does not checking a syscall error have to do with the language?


Not sure if Swift has the mechanisms to do it, but for example, you could wrap your syscalls in something that requires an error to be checked. In C++ it can only be a runtime check (outside of extensions?). I _think_ Rust might allow a compile time check but I don't remember which mechanism you'd do it from. Don't know about Swift.

Edit: of course you're always allowed to check and discard the error. No language can stop people from purposefully shooting themselves in the foot, but at least safeties can be installed.


Rust has compile-time warnings for when you don't use the result of a function whose return type is marked as #[must_use].

You can easily silence such a warning by writing `let _ = f(...)`, though, but I don't think that's a bad thing.


Functions in Swift must be annotated to allow discarding their results without a warning.


At least on my system, ftruncate is annotated with __wur, so a simple -D_FORTIFY_SOURCE would suffice to produce a warning in this case.


C++ has the nodiscard attribute for this now.


This bug is just as easy to express in Swift.

To eliminate this class of bug you need to eliminate statements, such as in Haskell.


You'll have to explain for that to mean anything; the associated web page doesn't give any information. Also, what do you mean by removing statements?


Not sure how an Out-of-Bounds write (used in this exploit) could happen in Swift.


Zecops.com seems down...

Presumably they need more DevOps skills...


Install the iVerify app for analysis and instructions to limit your risk to these 0days. https://apps.apple.com/us/app/iverify/id1466120520

iVerify pushes news with analysis of every new public exploit for iOS. Here are the instructions that it also posted to twitter (https://twitter.com/IsMyPhoneHacked/status/12529544579540090...)

1. Go to Settings > Password & Accounts. Set Fetch New Data to "Manual" and disable "Push". This will ensure that new mail is only downloaded when you specifically request it, and it will no longer do so automatically.

and/or

2. Use Safari or different e-mail clients such as Gmail and Outlook. These mail clients are unaffected by the currently disclosed 0days.


I am honestly curious what "Threat detection" from the second screenshot is supposed to do.


iVerify looks for indicators that your phone may be running code it shouldn't be. Here's a brief overview from our initial release of the app:

https://blog.trailofbits.com/2019/11/14/introducing-iverify-...

https://www.vice.com/en_us/article/bjw474/this-app-will-tell...


Your app has a false positive if run on a previously (but no longer) jailbroken device.


Is iverify able to detect previous exploitation of this specific issue?


We don't have precise information about the payload for the ZecOps bugs, so we can't say.

The attacks profiled by Volexity earlier this week, the MobileSafari exploits that China was using to spy on Uyghurs, are now detected by iVerify. They released enough information that we pushed an update to all iVerify users to alert their phones if they were compromised.

The Volexity report is here: https://www.volexity.com/blog/2020/04/21/evil-eye-threat-act...


Dumb question first: why was the Mail app not 100% in user space like the others? Does Apple make its apps system apps just to make fun of 3rd party developers, or does it actually make sense?


It is, what makes you think it isn’t? The article even states that this vulnerability grants access only to the process that is parsing mail which, depending on the OS version, is the mail application or some background process.


The Mail app runs in userspace.


Why does Apple advertise as "secure by design"? Is their design really any different from anyone else's?

See: https://www.apple.com/business/docs/site/AAW_Platform_Securi...


On your computer, this exploit would likely be able to grab your children's pictures too.


Not iOS, but my Apple Mail (10.14) app randomly crashes too. It does not seem to be related to incoming mail, but as far as I know it is related to the JavaScript engine. I always feel I can report all the crashes I want, but some things never get fixed.



