
Can't edit my original post anymore: Firefox and Chrome do seem to isolate even same-browsing-context-group sites into separate processes and bridge the required APIs via IPC, so hopefully Safari will catch up at some point.

Basically, there are three scenarios:

- Completely unrelated tabs (e.g. those you open manually, those opened via command-click, tabs opened via '<a target="_blank" ...>' or 'rel="noopener"' links, etc.) – these are relatively easy to isolate if the browser supports it at all. All major (desktop!) browsers now largely do this, including Safari.

- "Same browsing context group" (but different origin) sites. These can communicate via various APIs, and historically that was achieved by just letting them run in the same rendering process. But in the face of attacks such as this one, this can be insecure. Firefox and Chrome provide sandboxing via separate processes; Safari does not.

- Same-origin sites (without any stricter policy). These can fully access each other's DOM (if they have an opener/opened relationship), so there's not really any point in having them live in different renderers, except possibly for fault isolation (e.g. so that one of them crashing doesn't take the other down). As far as I know, all browsers render these in the same process.

Sites can opt out of the second and third categories into the first via various HTTP headers and HTML link attributes. If we were to design the web from scratch, arguably the default for window.open should be the first behavior, with an opt-in to the second, but that's backwards compatibility for you.
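
For illustration, here is a minimal sketch of that opt-out, assuming the opt-out mechanisms meant above are the Cross-Origin-Opener-Policy header and the 'noopener' link attribute / window feature (my own example, not from the comment itself), written as a small TypeScript/Node server plus the client-side call:

    // Sketch: a page opts out of sharing a browsing context group with whatever
    // opened it by sending Cross-Origin-Opener-Policy, and an opener can sever
    // the relationship from its side with rel="noopener" or the 'noopener'
    // window feature. Details are assumptions for illustration only.
    import * as http from "node:http";

    http.createServer((req, res) => {
      // Places this document in its own browsing context group, so cross-origin
      // openers/openees cannot script it and (in Chrome/Firefox) it can live in
      // a separate renderer process.
      res.setHeader("Cross-Origin-Opener-Policy", "same-origin");
      res.setHeader("Content-Type", "text/html");
      res.end('<a href="https://example.com" target="_blank" rel="noopener">link</a>');
    }).listen(8080);

    // On the client side, the opener can also drop the relationship explicitly:
    //   window.open("https://example.com", "_blank", "noopener");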




I worked on a browser team when Spectre/Meltdown came out, and I can tell you that a big reason why Firefox and Chrome do such strict process isolation is exactly because these speculative attacks are almost impossible to prevent entirely. There were a number of other mitigations, including hardening the code emitted by C++ compilers and JS JITs, as well as attempts to limit high-precision timers, but the browser vendors largely agreed that the only strong defense was complete process isolation.
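
As a rough, hedged illustration of the timer mitigation mentioned above (my own sketch, not code from any browser team): in a page you can estimate how coarse performance.now() has been made by spinning until the clock visibly ticks.

    // Sketch: estimate the effective resolution of performance.now(). Browsers
    // deliberately coarsen (and jitter) this clock as a Spectre-era mitigation,
    // because high-resolution timers make cache/speculation side channels much
    // easier to measure. Exact granularity varies by browser and isolation state.
    function estimateTimerResolution(samples: number = 200): number {
      let minDelta = Infinity;
      for (let i = 0; i < samples; i++) {
        const t0 = performance.now();
        let t1 = performance.now();
        while (t1 === t0) {
          t1 = performance.now(); // spin until the clock advances
        }
        minDelta = Math.min(minDelta, t1 - t0);
      }
      return minDelta; // milliseconds
    }

    console.log(`~${estimateTimerResolution()} ms timer granularity`);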

I'm not surprised to see this come back to bite them if after like 7 years Apple still hasn't adopted the only strong defense.


To add to this, and to quote a friend who has more NDAs regarding microarchitecture than I can count and thus shall remain nameless: "You can have a fast CPU or a secure CPU: Pick one". Pretty much everything a modern CPU does has side effects, and a sufficiently motivated attacker can most likely find a way to use them. While many are core-specific (register renaming and execution port usage, for example), many are not (speculative execution, speculative loads). Side channels are a persnickety thing, and nearly impossible to fully account for.

Can you make a "secure" CPU? In theory, yes, but it won't be as fast or as power-efficient as it could otherwise be, because the very features that make it fast and efficient are all possible side channels. This is, in theory, what the TPM in your machine is for (allegedly; TPMs have their own side channels).

The harder question is "what is enough?", i.e. at what point does it not matter that much anymore? Judging by the post above, the answer rests on quite a lot of risk analysis and design considerations. Those design decisions were the best balance of security and speed given the information available at the time.

Sure, can you build that theoretically perfect, secure CPU? Yes. But if it's so slow that you can't do anything on it that actually needs the security, do you care?


This is also a fundamental property: if an algorithm can save time in some code/execution paths but not in others (a very desirable attribute in most algorithms!), and knowing whether it ran faster or slower has security implications (as with most any crypto algorithm, unless it's very carefully designed), then this is just the way it is, and has to be.
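
To make that concrete, here is a small sketch of the classic example (my own illustration, not from the comment above): a byte comparison that exits early leaks, through its timing, how many leading bytes matched, while a constant-time variant always does the same amount of work.

    // Early-exit comparison: fast on mismatches, but the elapsed time reveals
    // how many leading bytes matched (a textbook timing side channel).
    function leakyEquals(a: Uint8Array, b: Uint8Array): boolean {
      if (a.length !== b.length) return false;
      for (let i = 0; i < a.length; i++) {
        if (a[i] !== b[i]) return false; // bails out at the first difference
      }
      return true;
    }

    // Constant-time comparison: touches every byte and only inspects the
    // accumulated difference at the end, so timing does not depend on where
    // (or whether) the inputs differ.
    function constantTimeEquals(a: Uint8Array, b: Uint8Array): boolean {
      if (a.length !== b.length) return false;
      let diff = 0;
      for (let i = 0; i < a.length; i++) {
        diff |= a[i] ^ b[i];
      }
      return diff === 0;
    }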

The way this has been trending is that in modern systems, we try to move as much of the ‘critical’ security processing as possible to known-slower-but-secure processing units.

But for servers, in virtualized environments, or when someone hasn’t done the work to make that feasible, we get these attacks.

So, ‘specialization’ essentially.


Your friend is genuine in their interpretation, but there is definitely more to the discussion than the zero-sum game they allude to. One can have both performance and security, but it often comes down to clever, nuanced design and careful analysis, as you point out.


> I'm not surprised to see this come back to bite them if after like 7 years Apple still hasn't adopted the only strong defense.

So Apple's argument that iOS can't have alternative browsers for security reasons is a lie.


Strange claim.

Security isn’t a one-bit thing where you’re either perfectly secure or not. If someone breaks into your house through a window and steals your stuff, that does not make it a lie to claim that locking your front door is more secure.

In any event, Apple’s claim isn’t entirely true. It’s also not entirely false.

Browsers absolutely require JIT to be remotely performant. Giving third parties JIT on iOS would decrease security. And we also know about Apple’s fetish for tight platform control, so it’s not like they’re working hard to find a way to offer secure JIT to third parties.

But a security flaw in Safari’s process isolation has exactly zero bearing on the claim that giving third-party apps JIT has security implications. That’s a very strange claim to make.

Security doesn’t lend itself to these dramatic pronouncements. There’s always multiple “except if” layers.


> Giving third parties JIT on iOS would decrease security.

Well, at least in this case it would have greatly increased security, since it would have made actual, native Chrome and Firefox ports available.

And otherwise: does Apple really have zero trust in its own OS to satisfy the basic requirement of isolating processes from each other? This has been a feature of OSes since the first moon landing.


If JIT is such a problem, then Apple shouldn't use it themselves. Sure, they let you disable it, but it's still enabled by default while everyone pushes the narrative that Apple is all about security.


JIT isn’t the problem. It’s giving control of JIT to third parties.

We can still hate on Apple; it’s just more accurate to say they don’t trust their own app sandboxes to stand up to LLVM/assembly attacks from malicious apps with JIT access.


I just don't buy that it's a special security concern at all. There are so many other possible security vulnerabilities to exploit that don't involve a JIT compiler. So why would Apple specifically bar third-party apps from using JIT?

It's realistically just another way to ensure they maintain control over app distribution. Safari sucks for web apps. Third-party browsers are just different shells over Safari on iOS. Apps built on things like React Native support hotfixing without slow App Store reviews, but your app will be slow without JIT, and the rules still force you to go through reviews for feature changes.

There's no issue with any of this on Android.


> It’s giving control of JIT to third parties

Any real-world examples demonstrating how it's insecure? Here and now, it demonstrably decreases security.


The alternative browsers have the required site isolation but aren't allowed. There's no fix for Safari, and you must use it. I think this very clearly decreases users' security.


Binary thinking is unhealthy.

Alternative browsers would introduce other security concerns, including JIT. It’s debatable whether that would be a net security gain or loss, but it’s silly to just pretend it’s not a thing.

Security is the product of multiple risks.

Discovering a new risk does not mean all of the other ones evaporate and all decisions should be made solely with this one factor in mind.


Can you provide any arguments that JIT would in fact decrease security other than "Apple says so"?

Every major mobile and desktop OS other than iOS has supported it for over a decade. Apple is just using this as a fig leaf.


"Decreasing the security" is not binary thinking. It's just a fact today. Also, ability to run software doesn't make you less secure. I never saw any real proof of that. It's the opposite: Competition between different browsers forces them to increase the security, and it doesn't work for Safari on iOS.


I think a detached, distanced perspective must come to the conclusion that vendor lock-in isn't healthy. Whether for security, performance, or flexibility, it tends to fall short sooner or later.

One could also talk about the relevance of a speculative attack that hasn't been abused in years. There can be multiple reasons for that, but we shouldn't just ignore Apple's main design motivation here. That would be frivolous and would preclude any serious security discussion.


Are you really surprised? Eventually the Apple distortion field starts to wane around the edges, but by then people have moved on to the new shiny thing.


Does Safari always open sites in separate processes when a new tab is opened manually (e.g., via Command+T, or via another macOS app sending a link to Safari), as opposed to a webpage opening a link in a new tab via window.open? If so, does that prevent the SLAP attack from working against the contents of those manually opened tabs? Wouldn't the best practice, then, be to (1) never log in to a website (or access a site where you are already logged in) by clicking a link on another site, and (2) when browsing a site where you are logged in, never click a link to another website, but instead copy the link, manually open a new tab, and paste the link into the address bar? Obviously, that's cumbersome and annoying, but if it mitigates SLAP, maybe it's worth the effort.



