Hacker News
iOS zero-day let SolarWinds hackers compromise fully updated iPhones (arstechnica.com)
235 points by wil421 6 months ago | 75 comments



I am both terrified and in awe at the technical prowess it takes to discover these vulnerabilities, let alone exploit them. Meanwhile I sit here fumbling with writing custom hooks


If I can write 4 lines of code without an error it’s a good day.


Heck, if I can get existing code to build properly on my local machine before I even modify it, it's a good day.


This made me laugh - I'm setting up some projects on a new machine and going through this process atm. I tried to get one going on Windows Subsystem for Linux and hit some issues, so I'm putting it on my old Mac just to get it to run. I will be happy when it does!


Use Vagrant.


To be clear, most of the folks doing this sort of security research are probably worse programmers than your average senior dev, or at least not noticeably better.

It's a distinct skillset.

To whatever extent they might build software that's more secure than the average dev's, that really comes down to applying their security skills and toolset to their own software (thinking like an attacker), not any particular skill at software engineering.


Of course, this is not always true: a lot of good security researchers are software engineers as much as they are exploit authors. Writing tooling isn't sexy, but sometimes you just have to do it, and if you don't do a good job you aren't going to be able to use it.


Software exploits and tooling don't need to be bug-free; they just have to work some of the time, on some devices, under some circumstances.

Luck too.

The birthday lottery still plays a role: being born with the particular set of predispositions, and the right family and environment to encourage strengthening them.


Bad tooling, like any other bad software, has limited utility: it cannot be reused or built upon. Sure, your quick script might work for now, but five years later everyone is going to be using the thing another person wrote because nobody wants to touch what you made.


I mean, that just sounds like enterprise software.


Define "programmer" - a programmer is not a software engineer by default.

They are exploiting the "quality" codebases written by your so-called average devs and software engineers.

And since when does software engineering not take security into account?


> most of the folks doing this sort of security research are probably worse programmers than your average senior dev, or at least not noticeably better.

Hard disagree. Maybe you can argue that they might not have the skillset or experience for writing good-quality code, but they need to know each component in depth, from JavaScript down to the kernel. It's not a separate skill. It's the same skill senior devs have, except that the folks who write exploits need to understand allocations at the bit level in every layer of the machine the code executes on. E.g., I would recommend watching the coding episodes of geohot, who was a hacker, and seeing how fast he can write "ordinary" code like setting up a website.


I'm a 10x engineer.

Guaranteed 10x bugs per line written.


I'm going to write me a new minivan this afternoon!

https://dilbert.com/strip/1995-11-13


/me takes 10x tries to get it working!


Thank you for making me feel okay with myself


Don't get as far as six lines, and they won't find something in them to hang you.


I think I wrote 3 lines last week. I'm still not confident they're error-free.


It's symbiotic: they rely on people like us to create the bugs they can exploit :)


How many get paid to write bugs like this?


Paying somebody to write buggy code on purpose is a bit like pissing into the ocean.


There are so many layers of abstraction interoperating to some degree that these vulnerabilities will only continue to be found and exploited, forever, until the end of time.


Status quo: yes.

Nonexistent ideal: no way, Jose. Your infotech is owned because it is fundamentally unsound.

There's a huge gap between cutting-edge security research at the hardware level and the implementation of consumer hardware/OSes.

Fuchsia is a good start.


Lots of interesting stuff going on here:

https://spectrum.ieee.org/tech-talk/computing/embedded-syste...

Microsoft's Azure IoT has some interesting hardware developments pertinent to separating public-facing hardware from an out-of-band control mesh.


Details of the specific exploit:

> After several validation checks to ensure the device being exploited was a real device, the final payload would be served to exploit CVE-2021-1879. This exploit would turn off Same-Origin-Policy protections in order to collect authentication cookies from several popular websites, including Google, Microsoft, LinkedIn, Facebook, and Yahoo and send them via WebSocket to an attacker-controlled IP. The victim would need to have a session open on these websites from Safari for cookies to be successfully exfiltrated. There was no sandbox escape or implant delivered via this exploit. The exploit targeted iOS versions 12.4 through 13.7. This type of attack, described by Amy Burnett in Forget the Sandbox Escape: Abusing Browsers from Code Execution, is mitigated in browsers with Site Isolation enabled, such as Chrome or Firefox.

For this to be effective, you would need to be logged in to your accounts in Safari rather than just in the respective app (FB, for example), so it would have had limited effectiveness. Also, if you browse with Safari in Private mode, it probably wouldn't have worked at all.
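
Tangentially, this class of cross-site cookie theft is what server-side cookie attributes are meant to narrow. A minimal stdlib sketch of the relevant Set-Cookie flags (the cookie name and token are made up, and none of this rescues a site from a renderer running attacker code):

    # Minimal sketch (Python 3.8+): Set-Cookie attributes that narrow
    # cross-site cookie theft. The session token here is made up.
    from http.cookies import SimpleCookie

    cookie = SimpleCookie()
    cookie["session"] = "opaque-token"       # hypothetical session token
    cookie["session"]["secure"] = True       # only sent over HTTPS
    cookie["session"]["httponly"] = True     # invisible to page JavaScript
    cookie["session"]["samesite"] = "Lax"    # withheld from most cross-site requests

    # Emits the Set-Cookie header a server would send:
    print(cookie.output())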


Maybe some of those apps use whatever Apple calls WebViews (SFSafariViewController?)?

As I understand it, those web views share cookie storage with regular Safari.


It used to, but shady developers were able to use it as a side-channel communication mechanism, so Apple dropped the shared cookies (and made the view controller not super useful) in iOS 11. Apps can prompt (using Authentication Services) to get temporary access.


Well, it's still useful in that it doesn't punt you over to Safari. But if you're looking to have your users already signed in, it is not useful for that, no.


The current title might be slightly misleading – the SolarWinds hack did not include an iOS compromise, as I initially thought when reading the headline. To quote the article:

> These are two different campaigns, but based on our visibility, we consider the actors behind the WebKit 0-day and the USAID campaign to be the same group of actors

Same group, but different campaign.


The big takeaway I have here is that security is a balance between usability and safety. In this case, malicious links were obfuscated (behind HTML?) to appear as legitimate LinkedIn links, and targets who clicked them were compromised.

If mail clients were to open a modal for each link and say "Are you sure you want to go to https://LinkMeIn.com/totally-legit?email=victim123@gmail.com" would this cut down on these attacks?

Taking the idea too far: a system like this would probably end up linking to some sort of cloud database to catch "emerging threats" (novel URLs that look malicious), but would that in turn threaten end-to-end encryption of email by sending links from emails to a cloud tracker?
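
As a toy command-line version of that modal (the function name and URL are illustrative, not from any real mail client):

    # Hypothetical sketch of the confirmation idea: show the real target
    # (and its host) before opening anything.
    import webbrowser
    from urllib.parse import urlsplit

    def confirm_and_open(href: str) -> None:
        host = urlsplit(href).hostname or "<no host>"
        answer = input(f"Are you sure you want to go to {href} (host: {host})? [y/N] ")
        if answer.strip().lower() == "y":
            webbrowser.open(href)

    confirm_and_open("https://LinkMeIn.com/totally-legit?email=victim123@gmail.com")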


I just wish all email clients would stop allowing HTML to hide a link's actual target. It needs to stop. Anchor tags and any type of onClick/onTouch event in an email should not do anything. Just stop letting them obfuscate the freaking address; it's that simple. Tell the marketing people to go to hell, and no, they cannot have their silly nicely printed link. :-)
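
The check a client could run is cheap, too. A minimal sketch in stdlib Python, flagging anchors whose visible text looks like a URL but whose href points at a different host (the HTML snippet is a made-up example):

    # Sketch: flag anchors whose link text is a URL on one host while
    # the href points at another. Stdlib only.
    from html.parser import HTMLParser
    from urllib.parse import urlsplit

    class LinkAuditor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.href = None      # href of the <a> currently open, if any
            self.text = []        # visible text collected inside it
            self.suspicious = []  # (shown_text, real_href) mismatches

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.href = dict(attrs).get("href", "")
                self.text = []

        def handle_data(self, data):
            if self.href is not None:
                self.text.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self.href is not None:
                shown = "".join(self.text).strip()
                if shown.startswith(("http://", "https://")) and \
                        urlsplit(shown).hostname != urlsplit(self.href).hostname:
                    self.suspicious.append((shown, self.href))
                self.href = None

    auditor = LinkAuditor()
    auditor.feed('<a href="https://evil.example/x">https://www.bankofamerica.com</a>')
    print(auditor.suspicious)  # [('https://www.bankofamerica.com', 'https://evil.example/x')]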


This. On phones especially, it's getting increasingly hard, as a nerd desiring to do so, to figure out what the sender's email address and a link's HTML target are :<


If you put your malicious URL behind a URL shortener or use a subdomain that looks legit, 99% of people are going to click on it anyway.
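
Which is an argument for clients expanding shorteners before display. A sketch (the short URL is made up; note this does send a HEAD request to the shortener):

    # Sketch: follow a shortener's redirects with a HEAD request so the
    # real destination can be shown before anyone clicks.
    import urllib.request

    def expand(url: str) -> str:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.geturl()  # where the redirect chain ended

    print(expand("https://sho.rt/abc123"))  # hypothetical shortener link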


The point would be that if it's from Bank of America, they shouldn't be using a URL shortener.


Why do you want this specifically for HTML emails and not on actual web sites? Surely it’s just as much of a threat in either place.


Email is push, websites are pull. If you never choose to visit a website, then you'll never see a link on it. Most of the websites you do visit already prevent obfuscating links. Emails just show up in your inbox when an attacker wants them to. We could of course change that, and have people only see emails from known contacts or that they have requested. However, this destroys a major value proposition of email. Instead, it makes sense to limit the ability for senders to obfuscate the contents of email.


An attacker could email you an unobfuscated link to a website which contains obfuscated links, and the threat model is exactly the same. Both require only that the victim 1) trust an email enough to click a link and then 2) trust the destination of that link enough to do the unsafe thing. Unless your computer propagates the "untrustworthiness" of the link from the email client to the browser and continues to prevent obfuscated links there, it seems like you've gained very little.


If you force copy and paste, you force the URL to be visible. Forced copy-and-paste also nudges people toward just logging in to an existing site in their history to check notifications: those "your Netflix is canceled, click here to restore service" emails get resolved quickly when you log in to Netflix directly instead of via netflix.d.nowhere.io.


This is part of why I read my email as text (in emacs, though that's certainly not mandatory).


You can’t run Javascript in an email.


Reality, unfortunately, isn't nearly as definitive as that statement.

Several versions of Outlook still in use actually do support certain subsets of JavaScript in email.


That would be nice :)


This would generate a ton of "false positives" (go look at a link in a random email newsletter and see what the actual URL is). People who do a lot of email stuff on their phone would be trained to click Yes a hundred times a day.

Meanwhile it seems very unlikely to stop such a determined attacker. They just need to compromise a site that you might plausibly want to visit, or create a convincing enough lookalike. The URL need not look suspicious.

IMHO expecting users to be able to discern "safe" from "unsafe" links by just looking at them represents a failure of our infosec systems.


For regular users, the answer to any question "Are you sure..." is always yes.


Sigh. We had to disable the Windows Scripting Engine company-wide because someone complained that his invoice wouldn't download no matter how many times he tried. His "invoice" in this case was a ransomware payload that the browser was fortunately stopping. Some people care, some care some of the time, and some just don't care.


A few years ago I almost self-signed the Transmission app bundle via the Xcode CLI tools because OS X detected KeRanger malware in it (falsely, I assumed) and tried to remove the app. And I was committed to running it by any means, because I was tired of uTorrent.

Turned out it wasn't a false positive; their distribution site got pwned. Other than that, I'm very careful with PC security.


So why was it disabled, then?


Chrome identified the ZIP download as malicious. It was sheer luck, otherwise the user would have opened the ZIP and executed the obfuscated VBS inside.


Everyone is always sure. If they aren't the first few times then they quickly get in the habit of just clicking "okay".


"Of course I'm sure I want to open this link."

and yet the gunfighter did not know where the link actually led.


You shouldn't get compromised just by clicking a link. There are generally multiple levels of isolation to allow running untrusted code (websites) on your machine/device, and being able to break through them is a serious failing.


Have you seen a link in Outlook, with the Safe Links feature? You have almost no idea where you will end up.


Security is largely not a balance between usability and safety at the level most companies operate. Both usability and safety could have been achieved in this case by just not being vulnerable to the attack, as was the case for many other browsers. Then you could click the link and still be safe, without any usability tradeoff in this specific case.

Obviously, there are ways to sacrifice usability to gain security, but doing so is by no means required or sufficient. There are plenty of ways to completely demolish usability without gaining any safety. And even where a tradeoff is necessary, most systems are so far from the actual edge of what is possible that you only need to sacrifice a negligible amount of usability to gain order-of-magnitude improvements in safety, if you are working with someone who knows what they are doing.


No, it wouldn't, because as long as the bulk of emails hold valid links, users will on average be conditioned to click. Sometimes all it takes to take down an organization is one insider clicking.


My email client definitely does this. It's called FairEmail, available for Android.


From Google:

"The exploit targeted iOS versions 12.4 through 13.7."

The title is incorrect. It exclusively targeted phones running older iOS versions.


Is an iPhone on an older iOS version considered "not fully updated"?

I'm using an older LTS version of Ubuntu; I get security updates daily, and I would call myself "fully updated".


I think it means it was targeting older-model iPhones that are updated to the newest software Apple makes available for them but aren't supported by the newest versions of iOS. Apple generally releases security updates for these older versions when it knows about major vulnerabilities, but as stated, this was a zero-day, so Apple had no opportunity to patch it before the attack.


Security fixes are backported with some frequency, but new mitigations are usually not. If a bug is made inexploitable by a mitigation in the latest OS, older OSes that have the same vulnerable code can probably be exploited, even though they still get security updates.


> The title is incorrect.

Misleading, but not incorrect. The old iOS versions mentioned still get security updates, so it's technically correct.



Would that have helped here? Wasn't the problem in clicking the link?


Assuming the messages were delivered via email, plain text offers the distinct advantage of non-obfuscated links.

More on the same theme:

The only safe email is text-only email

https://theconversation.com/the-only-safe-email-is-text-only...

https://news.ycombinator.com/item?id=15224199
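
A small stdlib sketch of the point: in a text/plain part, the link target is exactly the text you see (the message below is a made-up example):

    # Sketch: build/read a plain-text message with the stdlib email
    # package; plain text can't display one URL while linking another.
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alerts@bank.example"        # made-up sender
    msg["Subject"] = "Action required"
    msg.set_content("Reset your password: https://bank.example.evil.test/reset")

    # The only "link" is the literal URL in the body, in plain sight:
    body = msg.get_body(preferencelist=("plain",))
    print(body.get_content())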


What if the link contains a spelling mistake your brain is trained to overlook naturally?


Or it’s a legitimate link apparently from someone you trust to a site you haven’t visited before? Or to a site you trust but was compromised? These were very sophisticated attackers.


It doesn't help if the mail client detects URLs and presents them as links, like the majority of mail apps do. Seeing the link won't stop end users from clicking it if it's blue and underlined, and the slightly cleverer ones will copy and paste it.

The best bet is to rewrite links and pass them through a proxy that scans them on click. It's a shame free mail services don't do this; the only one I think offers it is Outlook.
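
The rewrite itself is trivial; a sketch, where scanner.example and its query format are hypothetical stand-ins for whatever the provider runs:

    # Sketch of the link-rewriting idea: point every href at a scanning
    # redirector that vets the target at click time. scanner.example and
    # its ?url= parameter are made up.
    from urllib.parse import quote

    def rewrite(href: str) -> str:
        return "https://scanner.example/redirect?url=" + quote(href, safe="")

    print(rewrite("https://netflix.d.nowhere.io/login"))
    # -> https://scanner.example/redirect?url=https%3A%2F%2Fnetflix.d.nowhere.io%2Flogin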


> The best bet is to rewrite links and pass them through a proxy that scans them on click.

Sure, if you want the proxy provider to know all the links you're clicking on.


The Project Zero stats imply three times the rate of detected zero-days versus last year. Apparently, this is largely due to the increasing output of private companies finding and selling exploits. Three of the four exploits discussed in this article were developed by the same private company and sold to two different government-backed actors.


Could this be used as an argument for allowing iOS to support other browser engines?


Not a lot. It's almost a wash. Let's say another engine takes half the market: now you've got twice the attack surface, but the vulnerable population for each is half as large.

But some attacks are only worth it if the pool of vulnerable devices is large enough. So the fragmentation helps, but mostly for lower-stakes attacks.


> So the fragmentation helps, but mostly for lower-stakes attacks.

Fragmentation also makes keeping all the variants up to date much harder.


"Security" - Apple Commercials

I suppose, to give them credit, putting a white word on a black background is enough plausible deniability.


More browsers = more exploits. An attacker may not know which browser you use, but the linked site could contain multiple exploits to cover more than one browser. Also, this exploit didn't even escape the iOS sandbox; instead, it exfiltrated cookies for active logins to social networking and email sites. The weakness was that the attackers were able to violate the same-origin policy, which allowed a malicious site to read those cookies. The article says these attacks are mitigated by Site Isolation, which Chrome and Firefox have. Hopefully Safari will add similar capabilities soon.


The exploit is described in "Forget the Sandbox Escape: Abusing Browsers from Code Execution": https://www.youtube.com/watch?v=a0yPYpmUpIA


Is it just me, or is this article more interested in asserting that Russian state hackers are unequivocally responsible for the chain of SolarWinds attacks than in discussing the subject of its misleading, arguably clickbait-y title?



