[dupe] Google: Software is never going to be able to fix Spectre-type bugs (arstechnica.com)
48 points by metaphysics on Feb 24, 2019 | 34 comments




So, admittedly I'm still wrapping my brain around the nitty-gritty of Spectre, but what's the solution to this then? It can't actually be that we'll all just accept that sandboxing only works if you spin up a separate process for every single container.

I remember that when these attacks first came out, a few people were saying, "speculative execution was a mistake; we should just eat the performance costs and get rid of it." I vaguely remember that position getting a lot of criticism.

Is that where we are, though? Are the performance costs of abandoning speculative execution greater than the performance costs of relying on spinning up a new process every time you want sandboxing to work? Or is there a middle ground that I don't understand? Obviously there's a lot of old hardware that's never going to be replaced, but let's say I sit down today to build a new processor for a new computer. Do I just accept that it will be vulnerable and move forward anyway?

My naive reading of this is that Google is saying we need to rethink processors themselves if we want to fix this (and we really do want to fix it). Am I reading it correctly?


> My naive reading of this is that Google is saying we need to rethink processors themselves if we want to fix this (and we really do want to fix it). Am I reading it correctly?

That's the gist of it, yeah.


I understand the theoretical issues with the Spectre class of bugs: shared machine environments leaking data, JavaScript in browsers stealing encryption keys, etc.

What I don't know is the actual real-world potential for data theft. IIRC, the data could only be stolen at a few KB/second, and you would need to know which address to start poking at. If my sister VM has 64 GB of RAM and 64 GB of swap, what are the chances of finding anything before the end of time?


64 billion bytes at 2,000 per second is 32 million seconds, or 1 year to scan the entire contents of RAM once. Obviously data and processes are going to bounce around in that year, so just peeking randomly and naively at your RAM, the chances are perhaps slim, but not "end of time" levels of slim. Randomly probing millions of different targets, the chance of finding something of value is much higher.
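
For concreteness, a quick back-of-the-envelope check of that arithmetic (the 2,000 bytes/second figure is just the leak rate assumed above, not a measured number):

    // Rough scan-time estimate at the leak rate assumed in the parent comment.
    const bytes = 64e9;                            // 64 GB of RAM
    const leakRate = 2000;                         // assumed leak rate, bytes/second
    const seconds = bytes / leakRate;              // 32,000,000 seconds
    const years = seconds / (365 * 24 * 3600);     // ~1.01 years for one full pass
    console.log({ seconds, years });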


So, what you're saying is that we just need more RAM?


Can't speak for everyone, but I sure do.


The Chrome folks working on the largest mitigation to Spectre-class bugs -- Site Isolation -- are the same folks who were behind the controversial proposal that would have broken certain third-party ad-blocker extensions.

Their experience, literally, was, "we tried to fix it and the whole Internet yelled at us".

I believe a practical (if oversimplified) model of the trade-offs is more like "secure, useful, fast: pick two".


I'm not sure how the two things are connected. I don't believe the ad-blocker thing was connected to Spectre or site isolation.


I think he was just pointing out that that team appears to mostly be working on security, and it wasn't just some ad team behind that proposal.

FWIW, it's not surprising. I don't install extensions because I don't see how you can possibly trust arbitrary code published under a pseudonymous screen name with unlimited access to all of your browsing activity. That proposal sounded great to me; it would have been nice to be able to use an ad blocker.


The proposal, as it was initially written, would have prevented ad blockers from working. I think extending the proposal so people could grant that full access to extensions they trust (such as uBlock Origin, which is open source and which you can build yourself if you'd like) is the best way forward on that.


> The proposal, as it was initially, would have prevented ad blockers from working.

Is that true? I thought the proposal was to allow extensions to provide rules to be run on requests being made, but not to see the requests themselves. The same way Safari extensions apparently work?


> but not to see the requests themselves

No, the observation API is untouched[0]. It affected the ability of extensions to arbitrarily intercept and redirect requests, which (to be fair) is a potential MITM risk.

If an extension relied on the declarative model and ignored the webRequest API, you would get a reasonable increase in privacy, because extensions wouldn't be able to monitor what requests had been made or whether or not they were really blocked.

But there were no plans to discourage developers from using the non-blocking parts of the webRequest API, so those advantages were pure speculation.
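
To make the distinction concrete, here is a rough sketch of the two models (the filter string and rule values are made up for illustration). With the blocking webRequest API the extension's own code sees every request and returns a verdict; with the declarative model the extension only ships rules and the browser does the matching:

    // Blocking webRequest model (Manifest V2 style): the listener runs in the
    // extension and sees the full URL of every request it filters.
    chrome.webRequest.onBeforeRequest.addListener(
      (details) => ({ cancel: details.url.includes("ads.example.com") }),
      { urls: ["<all_urls>"] },
      ["blocking"]
    );

    // Declarative model (declarativeNetRequest): the extension ships rules like
    // this one; the browser applies them itself, and the extension never sees
    // the requests it blocks.
    const blockRule = {
      id: 1,
      priority: 1,
      action: { type: "block" },
      condition: { urlFilter: "ads.example.com", resourceTypes: ["script", "image"] },
    };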

> The same way Safari extensions apparently work?

Yep. It's worth noting though that:

A) Safari hasn't removed the ability for extensions to use the non-declarative model.

B) Safari's approach does have documented downsides that make adblockers using their new strategy objectively less powerful. Adblock has a decent page up describing their take on the tradeoffs between the two approaches[1], and you can find similar conversations surrounding other Safari blockers if you poke around long enough.

[0]: https://docs.google.com/document/d/1nPu6Wy4LWR66EFLeYInl3Nzz...

[1]: https://help.getadblock.com/support/solutions/articles/60000...


Yes, but the limits on what kinds of rules are allowed, and how many, left it unable to do a lot of what current ad blockers do, like blocking certain media types. I believe it also prevented them from blocking ad-serving WebSockets and the like, but I'm not positive on that one.


I think you might have a distorted view of the ad-blocker thing from reading Hacker News?

Most of the Internet probably never even heard about it.


It managed to get picked up by mainstream outlets like the BBC.


For the web devs out there, I would refer you to this paper from Microsoft Research on the realities of side-channel attacks in SaaS: https://www.microsoft.com/en-us/research/publication/side-ch...


I once heard that anywhere we try to get extra performance, there is probably the possibility of a side channel, and I think that's true. Any optimisation in such a complex system is likely to have some observable side effect, and given that timing alone is often sufficient as a side channel, it is going to be difficult to eliminate this class of attack.


"With careful construction, an attacker can make the processor speculate based on some value of interest and use the cache changes to disclose what that speculated value actually was. This becomes particularly threatening in applications such as Web browsers: a malicious JavaScript can use data revealed in this way to learn about the memory layout of the process it's running in, then use this information to leverage other security flaws to execute arbitrary code. Browser developers have assumed that they can construct safe sandboxes within the browser process, such that scripts can't learn about the memory layout of their containing process. Architecturally, those assumptions are sound. But reality has Spectre, and it blows those assumptions out of the water."

So this is yet another reason to avoid JavaScript.


Couldn't the CPU maintain a few speculative cache lines that commit to regular cache if the speculation worked, and dump if not?

This would keep the L1 clean, at least


Yes, but it wouldn't help enough. For example, a speculative load from one core can observably evict or change the state of a cacheline on another core.


That seems to require a much more sophisticated attack. Also, my point is that the speculative cache would be a separate cache (a peer to L1), only showing its side effects in L2/L3. That would be a bit harder to exploit, given the noise from other cores sharing those levels and their larger size.


The problem with that is: what happens when your speculative code changes values in a region of memory that another core just wrote to? Who wins? And how do you do it without violating a bunch of other guarantees, like one core having set up a memory fence? Deciding what to do with the result ends up so complicated that it's probably not worth the extra work.


Why not ask the folks who did TSX?


It's not _just_ JS though...


It's probably by far the largest vector for untrusted code in the world.

If its sandbox guarantees are broken by Spectre then the case for allowing it to run on one's machine becomes much weaker.


Are these Intel-only bugs? AMD uses a different approach to speculative execution and claims that the Meltdown-type attack won't work on post-Bulldozer CPUs.


Are Spectre-style bugs present on other processors like the ARM64 or POWER?




ARM processors with speculative / out-of-order execution are affected.


My understanding is that older A53 implementations of AArch64 / ARM64 are strictly in-order and thus not susceptible to Spectre, etc.


Any CPU with out of order speculative execution is probably vulnerable.


In other news, Google announces its new bare-metal cloud services.



