I remember when these attacks first came out, a few people were saying, "speculative execution was a mistake; we should just eat the performance costs and get rid of it." I vaguely remember that position getting a lot of criticism.
Is that where we are, though? Are the performance costs of abandoning speculative execution greater than the performance costs of spinning up a new process every time you want sandboxing to work? Or is there a middle ground that I don't understand? Obviously there's a lot of old hardware that's never going to be replaced, but let's say I sit down today to build a new processor for a new computer. Do I just accept that it will be vulnerable and move forward anyway?
My naive reading of this is that Google is saying we need to rethink processors themselves if we want to fix this (and we really do want to fix it). Am I reading it correctly?
That's the gist of it, yeah.
What I don't know is the actual real-world potential for data theft. IIRC, the data could only be stolen at a few KB/second, and you would need to know which address to start poking at. If my sister VM has 64GB of RAM and 64GB of swap, what are the chances of finding anything before the end of time?
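Rough arithmetic, taking those remembered numbers at face value (the ~2 KB/s rate is an assumption, not a measurement): a blind linear scan of 64GB RAM + 64GB swap = 128GB at 2 KB/s is 128 × 2^30 / 2048 ≈ 6.7 × 10^7 seconds, a bit over two years. So not quite "end of time", but long enough that blind scanning is impractical; the published PoCs generally read from addresses they already knew rather than searching for them.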
Their experience, literally, was, "we tried to fix it and the whole Internet yelled at us".
I believe a practical (if oversimplified) model of the trade-offs is more like "secure, useful, fast: pick two".
FWIW, it's not surprising. I don't install extensions because I don't see how you can possibly trust arbitrary code published under a pseudonymous screen name with unlimited access to all browsing activity. That proposal sounded great to me; it would have been nice to be able to use an ad blocker.
Is that true? I thought the proposal was to allow extensions to provide rules to be run on requests being made, but not to see the requests themselves. The same way Safari extensions apparently work?
No, the observation API is untouched. The proposal affected the ability of extensions to arbitrarily intercept and redirect requests, which (to be fair) is a potential MITM risk.
If an extension relied on the declarative model and ignored the webRequest API, you would get a reasonable increase in privacy, because extensions wouldn't be able to monitor which requests had been made or whether they were really blocked.
But there were no plans to discourage developers from using the non-blocking parts of the webRequest API, so those advantages were pure speculation.
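To make the distinction concrete, here's a rough sketch in extension-style TypeScript. The chrome.* calls follow the documented webRequest and declarativeNetRequest APIs, but treat this as an illustration (the filters and rule contents are made up), not a working extension:

    // 1) Blocking webRequest: extension code sees the request and decides
    //    its fate at runtime. This is the capability the proposal curtailed.
    chrome.webRequest.onBeforeRequest.addListener(
      () => ({ cancel: true }),
      { urls: ["*://ads.example.com/*"] }, // hypothetical filter
      ["blocking"]
    );

    // 2) Observation-only webRequest: still sees every request, but any
    //    return value is ignored. This part was left untouched, which is
    //    why the privacy gains were speculative.
    chrome.webRequest.onBeforeRequest.addListener(
      (details) => console.log("saw a request to", details.url),
      { urls: ["<all_urls>"] }
    );

    // 3) Declarative model: the extension hands the browser a static rule
    //    like this and never sees the matching requests at all.
    const blockRule = {
      id: 1,
      priority: 1,
      action: { type: "block" },
      condition: { urlFilter: "||ads.example.com" },
    };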
> The same way Safari extensions apparently work?
Yep. It's worth noting though that:
A) Safari hasn't removed the ability for extensions to use the non-declarative model.
B) Safari's approach does have documented downsides that make ad blockers using the new strategy objectively less powerful (a sketch of the rule format follows below). Adblock has a decent page up describing their take on the tradeoffs between the two approaches, and you can find similar conversations surrounding other Safari blockers if you poke around long enough.
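For the curious, Safari's content-blocker rules are declarative JSON along these lines (shape per Apple's documented format; the filter itself is a made-up example, written here as a TypeScript literal, though in practice it lives in a JSON file). The regex-only triggers and the hard cap on rule count (originally 50,000 per blocker) are the main sources of the "less powerful" complaints:

    // A minimal Safari content-blocker rule list: match a URL pattern,
    // apply a fixed action. No extension code runs per-request.
    const safariRules = [
      {
        trigger: { "url-filter": "ads\\.example\\.com" },
        action: { type: "block" },
      },
    ];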
Most of the Internet probably never even heard about it.
This would keep the L1 cache clean, at least.
If its sandbox guarantees are broken by Spectre then the case for allowing it to run on one's machine becomes much weaker.
The _Talospace_ and _TenFourFox Development_ articles are by Cameron Kaiser, who is active on HN (and Blogger) as classichasclass.
Edit: Add missing 'and' in last sentence.