[dupe] New Spectre Vulnerability Beats Mitigations, Performance to Badly Drop After Fix (techpowerup.com)
70 points by 0-_-0 15 days ago | 41 comments



The previous discussion about this paper can be found here: https://news.ycombinator.com/item?id=27000570


We need a way to switch mitigations on/off on a per-process basis. I don't want mitigations when I run computation-heavy code I have written myself, but I do want them when I run a browser or other Internet-connected apps.


I agree. I want these turned on for shared servers, and on consumer OSes for processes that I don't trust. But I would want an option to turn them off for JavaScript-heavy sites that I do trust.

I'm frankly amazed that the mitigations so far have not already been disastrous to performance. Either we accept drastically less performance from now until basically forever, or we adopt a more fine-grained security model.


> turn it off for JavaScript-heavy sites that I do trust

How can you trust any?


Downloading software through HTTP and executing it on your operating system is not very different from downloading software through HTTP and executing it inside your web browser's virtual machine.


PR_SET_SPECULATION_CTRL
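A minimal sketch of how a task can use it on Linux via prctl(2) (constants come in via <sys/prctl.h> on a reasonably recent kernel/glibc; note this covers only some mitigations, such as store bypass and indirect branches, not things like KPTI):

    /* Sketch: per-task speculation control on Linux. PR_SPEC_DISABLE
       turns the mitigation ON (i.e. the speculation feature OFF) for
       this task, and the setting is inherited by its children. */
    #include <stdio.h>
    #include <sys/prctl.h>

    int main(void) {
        /* Query the current store-bypass (Spectre v4) control. */
        long s = prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, 0, 0, 0);
        printf("store-bypass state: %ld\n", s);

        /* Opt this process in to the mitigation. */
        if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
                  PR_SPEC_DISABLE, 0, 0) != 0)
            perror("prctl(PR_SET_SPECULATION_CTRL)");
        return 0;
    }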


Nice, I learned something today. How is that currently used in relation to browsers, or to things like signed desktop apps that don't execute third-party code?


I've finally managed to finish reading this latest paper, and it strikes me that the real problem is that all these "invisible" resources (icache, dcache, TLB, uop cache, etc.) lack security tags. The architecture marks all the explicitly visible resources with the privilege level of whoever owns them, but the rest is a big free-for-all. I only know x86, so maybe there are other archs where this is done?


I kinda wish we stopped fixing these for consumer OSes. People don't usually run untrusted code for hours; the performance drop isn't worth it. So, IMO, this is only really applicable to cloud providers.


People regularly run untrusted JavaScript in their browsers, and Spectre is exploitable via JavaScript as well. There are some mitigations, but those are generally of the "make the timer API suck more" kind, and the like. They don't fully close the hole, and they get more and more toothless with each new variant of Spectre-like problems.

Spectre and the like need to be dealt with on everything that isn't strictly an embedded fixed-code device.


Is it possible to implement these mitigations for JavaScript in the browser only?


Some mitigations are implemented by browsers, but a lot of it has to happen at the OS level.


Is it reliably exploitable from JS, or is it again a theoretical thing nobody has ever bothered to implement in practice? Is there a web page that would show me some of my kernel memory?


https://leaky.page/ will show you browser memory. Kernel memory is not vulnerable with browser sandboxing and current mitigations.


The two tests do work for me, but the actual leaking doesn't. No matter how I tweak the parameters, it fails with "error: too many wrong false negatives in leak test (> 20%)". I guess my CPU might just be too old ¯\_(ツ)_/¯


What browser/CPU? I only get "error: could not infer memory layout" on Firefox and Brave with a Zen 3 CPU


Vivaldi and i7-3615QM



It's only theoretical until we discover that some state agency got it working three years ago and has been using it to quietly pwn activists since.


Firefox doesn't yet, but Chrome on desktop fully mitigates Spectre using site isolation: every origin (including via embedded iframes, etc.) runs in its own process.


> Chrome on desktop fully mitigates Spectre using site isolation

Spectre and pals can jump between processes so readily that it's no longer funny.


Realistically, that only applies to processes scheduled on sibling hyperthreads, and hyperthreading can be restricted to processes at the same privilege level, which is done on ChromeOS. The amount of microarchitectural state that sticks around after a context switch is finite, and whenever a new avenue like this one is found, it is usually easily fixed by a ucode update and a kernel fix.
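(For what it's worth, SMT can also be toggled globally at runtime on Linux, assuming a kernel that exposes the sysfs SMT control:)

    cat /sys/devices/system/cpu/smt/control        # on / off / forceoff / notsupported
    echo off | sudo tee /sys/devices/system/cpu/smt/control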


> easily fixed by a ucode update and a kernel fix

Those microcode updates and kernel fixes are anything but easy. They are anything but lightweight. Context switching overhead is a huge contributor to lag and general inefficiency in interactive systems. Every kind of Spectre mitigation increases that overhead. Every further mitigation wrecks more of the performance advantages that caches have historically provided.

From a performance, power consumption and efficiency perspective, Spectre is a disaster. Mitigations just move the problem from a security and privacy one to an efficiency and performance one.


Ironically, Firefox has done a better job of following COOP/COEP: right now (until Chrome 91, which isn't stable yet) it's completely possible in Chrome to read other sites' JS and images without their consent via Spectre.
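(For reference, cross-origin isolation is what a site opts into by sending these two response headers, per the spec:)

    Cross-Origin-Opener-Policy: same-origin
    Cross-Origin-Embedder-Policy: require-corp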


Spectre is an attack against that.


FWIW, every time Chrome claims to fix a security problem in their own way, somebody finds an exploit in it later.


Web browsers do. Especially with the V8 JIT, asm.js, and WebAssembly. People have used it in the past to do things like write HTML pages that reach all the way down, across the browser, across Docker, across gVisor, across SELinux, across the operating system, across the hypervisor, and reprogram your CPU microcode. https://www.usenix.org/system/files/conference/usenixsecurit... I kind of wish browsers gave us the freedom to turn all those wasm JIT optimizations off, for that reason. Does anyone know how to write a browser extension that does that? If you want something that should be done for consumer machines, push for ECC RAM.
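(Not an extension, but as a sketch of the knobs that do exist, assuming these flag/pref names are still current:)

    # Chromium: run V8 in interpreter-only mode, no runtime codegen
    chromium --js-flags="--jitless"
    # Firefox: about:config -> javascript.options.wasm = false  (disables WebAssembly)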


That hasn't happened. That paper describes the potential for backdoored microcode, not any existing vulnerabilities that can be triggered from an unprivileged state.


I will happily sacrifice some performance to be able to run JavaScript, WebAssembly, virtual machines and Docker containers safely on my laptop.


That seems reasonable to me. For especially paranoid people (like some of the HN crowd), one could design a "safe" computer within a computer that takes all the performance hits of in-order, cache-less designs and has two very controlled communication channels with the rest of the system: one for bidirectional data transfers to the main CPU, and a unidirectional one to stream (software-rendered?) framebuffer data to the GPU.

I highly doubt that the market is big enough to make such a product viable.


You also always have to wonder what the environmental impact of these security fixes is, both in terms of extra carbon pumped into the atmosphere and the resources going into replacement CPUs. It's probably very bad. I wonder whether it's ethical to publish them? :)


Does anyone know whether this, or a similar class of vulnerabilities, affects ARM processors?


I think it depends on how broadly you define the class of vulnerabilities. As I understand it, any processor capable of speculative execution is vulnerable to the broad class of Spectre/Meltdown vulnerabilities, which includes all commonly used processors implementing the ARM instruction set.
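The canonical illustration is the v1 bounds-check-bypass gadget from the original Spectre paper, which is ISA-agnostic; here is a sketch in C using the paper's array names:

    /* Spectre v1 gadget (sketch). If the branch is mispredicted, the
       CPU speculatively reads array1 out of bounds and leaves a
       secret-dependent line in the cache for the attacker to probe. */
    if (x < array1_size)                     /* attacker trains this branch */
        y = array2[array1[x] * 4096];        /* secret selects the cache line */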


A paper on Spectre-like speculative-execution exploits against the M1 was published recently, so at least for that class of ARM cores, yes, I believe so.


Dumb question: what would be the cost of a compiler-based mitigation, i.e. never using fused instructions (complex instructions broken down into several micro-instructions internally by the pipeline) but only micro-ops that won't use the u-op cache?

This would have an obvious impact on code size/cache efficiency, and on the number of registers used to store intermediate results, but how much would it hurt performance, depending on the use case?


It's probably impossible, by a wide margin. The compiler is limited to emitting plain old x86 instructions; the leaky u-ops (and the caches thereof) used to execute even harmless x86 instructions are a microprocessor implementation detail.
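For context, the compiler-level mitigations that did ship target indirect-branch speculation (retpolines) rather than the u-op cache; the spellings below are the GCC 8+ and Clang flags used for kernel builds:

    gcc   -mindirect-branch=thunk -mfunction-return=thunk ...
    clang -mretpoline ...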


The best mitigation is to not use multi-tenant hosts.


Sorry, but this is false, and we must end this misconception. Run JavaScript in a browser on a single-user machine? You are vulnerable.


All of these vulnerabilities affect desktops and servers. I guess I wasn't clear, but when talking about "multi-tenant hosts" I meant servers.


OK, so if I visit a site with malicious JS, it can now potentially steal the contents of my RAM at a blistering few KB/s for the duration that I am on the site.

That's really not that big a deal compared to slowing everything down again. Modern computers are slow enough thanks to the mountains of abstraction developers insist on using (web included!), the real-time AV scanner, ads, user tracking, and general software bloat.


Surely this will make everyone see the truth and end this silly javascript fad.

(/s)



