A Spectre proof-of-concept for a Spectre-proof web (googleblog.com)
117 points by theafh on March 12, 2021 | 29 comments



If you're just interested in the PoC, it's available here: https://leaky.page/ and it looks like the code is at https://github.com/google/security-research-pocs/tree/master...


Tried it in Firefox, but the timing result in the first step was already quite noisy (the two peaks overlapped by about 60%), and the second step didn't complete after 3 minutes of dismissing repeated "a website is slowing the page down" prompts.

That doesn't mean Firefox is safe, to be clear. Just that this PoC doesn't work well for my combination of browser and CPU.
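
For anyone wondering what that first step is: it's calibrating a way to tell cached from uncached memory accesses apart by timing them. A toy sketch of the underlying effect is below; this is nothing like the PoC's actual calibration (which times individual accesses and uses eviction and amplification, since a single access is far below the timer's resolution), it just makes the cached-vs-uncached difference big enough to see with a coarse timer. All names here are illustrative.

  // Toy sketch of the idea behind the first step: cached reads are measurably
  // faster than uncached ones. The real PoC times single accesses with much
  // more care; this just makes the effect visible with a coarse timer.
  const N = 1 << 22;                      // 4 Mi elements (~16 MB)
  const buf = new Uint32Array(N);

  function sumStrided(stride) {
    let sum = 0;
    const t0 = performance.now();
    for (let i = 0; i < N; i++) sum += buf[(i * stride) % N];
    const t1 = performance.now();
    return { ms: t1 - t0, sum };          // return sum so the JIT can't drop the reads
  }

  console.log('cache-friendly:', sumStrided(1).ms, 'ms');     // sequential reads, mostly cache hits
  console.log('cache-hostile: ', sumStrided(4099).ms, 'ms');  // scattered reads, mostly cache misses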


It may not work in Firefox due to the restricted resolution of performance.now() there: per the demo page, that is 5μs in desktop Chrome, while Firefox rounds to the nearest millisecond, per https://developer.mozilla.org/en-US/docs/Web/API/Performance....
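
You can get a rough feel for the granularity your own browser actually gives you with something like this (a quick sketch, not from the demo; the function name is made up):

  // Rough estimate of performance.now() granularity in the current browser:
  // busy-wait until the reported time changes and track the smallest step seen.
  function estimateTimerStep(samples = 100) {
    let minStep = Infinity;
    for (let i = 0; i < samples; i++) {
      const start = performance.now();
      let now = start;
      while (now === start) now = performance.now();
      minStep = Math.min(minStep, now - start);
    }
    return minStep;   // milliseconds
  }

  console.log('approx. timer step:', estimateTimerStep(), 'ms');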

That said, the demo also mentions a paper [1] on high-resolution timing channels available in JavaScript without needing to rely on the performance API. I don't know how well a Spectre attack relying on such a timing method would fare in any browser.

Looking at that paper's abstract and code samples, I think most if not all of the timing channels it describes rely entirely on tight loops, so one or more of them being attempted would explain the high CPU usage you noticed.
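
One such primitive is a counting thread: a worker spins incrementing a shared counter, and the main thread reads the counter as a clock. A rough sketch of that idea (my own, not the paper's code), assuming SharedArrayBuffer is available, which these days requires the page to be cross-origin isolated:

  // Rough sketch of a counting-thread timer (assumes SharedArrayBuffer is
  // available, which today requires COOP/COEP cross-origin isolation).
  const sab = new SharedArrayBuffer(4);
  const counter = new Uint32Array(sab);

  // A dedicated worker increments the counter in a tight loop;
  // exactly the kind of loop that pegs a CPU core.
  const workerSrc = `
    onmessage = (e) => {
      const c = new Uint32Array(e.data);
      while (true) Atomics.add(c, 0, 1);
    };
  `;
  const worker = new Worker(
    URL.createObjectURL(new Blob([workerSrc], { type: 'text/javascript' }))
  );
  worker.postMessage(sab);

  // The main thread then uses the counter value as a high-resolution clock.
  function ticks() {
    return Atomics.load(counter, 0);
  }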

[1] https://pure.tugraz.at/ws/portalfiles/portal/17611474/fantas...


Now of course if they enabled COOP/COEP like their own guide recommends[1], that would be different.

[1]: https://web.dev/coop-coep/
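
For reference, those are just two response headers on the top-level document; e.g. with Node's built-in http module (an illustrative sketch, not how any real page is served; the filename and port are placeholders). Together they opt the page into cross-origin isolation, which among other things is what gates SharedArrayBuffer in current browsers.

  // Illustrative sketch: serving a page with the COOP/COEP headers from the
  // guide, using Node's built-in http module.
  const http = require('http');
  const fs = require('fs');

  http.createServer((req, res) => {
    res.setHeader('Cross-Origin-Opener-Policy', 'same-origin');
    res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp');
    res.setHeader('Content-Type', 'text/html');
    res.end(fs.readFileSync('index.html'));
  }).listen(8080);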


I have `mitigations=off` and the PoC didn't work for me on Chrome / i7-3770.


Worked on my i7-7920HQ.


The memory dump demo seemed to work for my computer.

Does that mean I am missing some Spectre mitigation stuff? I thought this was already fixed a few years ago? How do I stop this demo from working? (Linux, Intel i7, Chrome 89)


Last year Chrome published a great paper on this[1]. The summary is that we no longer think it is possible to completely prevent speculative execution bugs. A big focus nowadays is on providing tools (mainly via HTTP headers) that allow a website to opt-in to a more strict security model where specific sensitive resources can't end up in a process that is running untrusted code. If you're curious, check out this[2] document which explains a bunch of these different mechanisms.

Disclosure: I work at Google and am involved in deploying some of these features internally.

[1]: https://arxiv.org/pdf/1902.05178.pdf [2]: https://w3c.github.io/webappsec-post-spectre-webdev/
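
To give a flavour of [2]: the recommendations boil down to a handful of defensive response headers plus, where possible, rejecting cross-site requests for sensitive resources based on Fetch Metadata. An illustrative sketch (not taken verbatim from the document; a real policy would usually still allow things like top-level navigations, and the endpoint here is made up):

  // Illustrative sketch of a Fetch Metadata "resource isolation" check plus a
  // couple of the defensive headers discussed in [2]; not a drop-in policy.
  const http = require('http');

  http.createServer((req, res) => {
    res.setHeader('X-Content-Type-Options', 'nosniff');
    res.setHeader('X-Frame-Options', 'DENY');

    // Refuse requests initiated from other sites (a real policy would usually
    // carve out exceptions such as top-level navigations).
    if (req.headers['sec-fetch-site'] === 'cross-site') {
      res.writeHead(403);
      return res.end('cross-site requests are not allowed');
    }

    res.setHeader('Content-Type', 'application/json');
    res.end(JSON.stringify({ secret: 'only for same-site callers' }));
  }).listen(8080);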


> Chromium’s threat model, for instance, now asserts that "active web content … will be able to read any and all data in the address space of the process that hosts it"

This was a huge WTF to me. I have been doing web dev for 10+ years and can barely get origin-based security right. Now we're expected to understand process-level security boundaries too???

That said, are there any resources explaining how the Chromium process model works? It has always been a black box to me. For example, if a form is being autofilled, don't those personal details and passwords have to be loaded into memory? There's an endless number of things that I thought were inaccessible solely because there's no JS API to access them, and which I now need to think about. Direct memory access is just a huge can of worms, no?


Yeah, I think it is a bit unfortunate that there doesn't seem to be any way of hiding this implementation detail from developers. In general though, Chrome is very thoughtfully designed so things mostly work as you'd hope. The core of the process model is that each site (e.g. `ycombinator.com` not `news.ycombinator.com`) gets its own process. This [0] has a great list of things they've considered when designing site-isolation. For example, Chrome's password manager does respect site isolation and is designed to operate across multiple processes[1].

[0]: https://chromium.googlesource.com/chromium/src/+/master/docs... [1]: https://chromium.googlesource.com/chromium/src/+/master/comp...
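
If the site-vs-origin distinction is unfamiliar: an origin includes the full host (plus scheme and port), while a site is roughly the registrable domain. A simplified sketch of the difference (real browsers consult the Public Suffix List rather than just taking the last two labels):

  // Simplified illustration of origin vs. site. Real browsers use the Public
  // Suffix List; taking the last two labels is wrong for e.g. example.co.uk.
  function origin(url) {
    const u = new URL(url);
    return `${u.protocol}//${u.host}`;
  }

  function approximateSite(url) {
    return new URL(url).hostname.split('.').slice(-2).join('.');
  }

  console.log(origin('https://news.ycombinator.com/item'));      // "https://news.ycombinator.com"
  console.log(approximateSite('https://news.ycombinator.com'));  // "ycombinator.com"
  // Under site isolation, news.ycombinator.com and ycombinator.com end up in
  // the same renderer process; a different site gets a different process.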


> in our tests the attack was successful on several other processors, including the Apple M1 ARM CPU, without any major changes.

Interesting!


Not interesting to my eyes. If the M1 were vulnerable to Meltdown, then Apple would need a smacked bottom, but Spectre is a lesser evil that is much harder to get rid of: there isn't an obvious way to avoid these side channels on a processor with multiple cores sharing a memory bus.


Really? Their proposed mitigations are various Cross Origin Resource restrictions?

That's the equivalent of saying "just don't run bad JS code". It's not workable. Have they given up?


See this[1] paper for more information. I think from the browser's POV it is more about admitting that Spectre itself just can't be reliably mitigated, and focusing instead on what the browser can do: ensuring that sensitive resources don't end up in processes running attacker JS.

Of course this could be fixed at the CPU level, but realistically very few people want that since that would drastically slow down modern CPUs which rely on speculative execution.

[1]: https://arxiv.org/pdf/1902.05178.pdf

Disclosure: I work at Google and am involved in deploying some of these cross-origin resource restrictions internally.


But it's still going to help exploits against the browser, isn't it? Letting code poke around until it finds addresses it needs or something like that.


As I understand it (though I don't work directly on Chrome), a key part of Chrome's threat model is that a compromised renderer process (where there is one renderer process per site) has limited security impact. So being safe against Spectre (which gives a read primitive in the renderer process) is just a subset of being safe against a compromised renderer process.


Per-site isolation =]

Which the (comparably) insecure likes of Firefox (unfortunately) do not have.


Not yet! But soon. :) See Project Fission [1]. Currently if you're using Beta or Nightly you can toggle it on and I believe it is getting very close to being ready to ship.

[1]: https://wiki.mozilla.org/Project_Fission


> "just don't run bad JS code". It's not workable.

A little NoScript goes a long way. At least that way you can pick what you want to run.


It seems to me like timers and multi-threading are what make these attacks possible, so why couldn't the website ask the user for access to those capabilities? Most websites don't need them.

HN, for example, should work without reading timers at all.


Oh yes, if you ask me, throw this JIT shit out. What's the Spectre leak rate like if you need to do it through slow-as-molasses bytecode? 1b/minute?


It's not the JIT that's the issue, but a deeper problem: it's untrusted instruction streams, which you can get from just untrusted data too.

NetSpectre was able to dump kernel memory from nothing but untrusted incoming network packets, no JIT required.


> however, in our tests the attack was successful on several other processors, including the Apple M1 ARM CPU, without any major changes.

Wow! How come Apple, on this brand-new processor, wasn't able to mitigate Spectre? (A quick Google shows that many Apple fan sites hailed the M1 as being immune to what they called "Intel bugs".)


Spectre is a consequence of just about any speculative execution combined with shared CPU caches (or even shared memory buses, even without a shared cache). Since nobody (including Apple) knows how to make a CPU fast without those things, all high-performance CPUs are potentially vulnerable. Unless we go back to single-core, single-processor systems without speculative execution (or flush all caches on every context switch), some variant or other of a Spectre attack will be possible.

If you want to be invulnerable to this, you're basically stuck with a microcontroller.
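
For anyone who hasn't seen it, the canonical variant-1 shape of the problem (bounds check bypass, as in the original Spectre paper) looks roughly like this; the leak comes from the mispredicted branch interacting with the cache, not from any particular language feature. This is a conceptual sketch only, with made-up names:

  // Rough JS rendering of the classic variant-1 (bounds check bypass) gadget,
  // for illustration only; a real exploit also needs branch training, cache
  // eviction, and a timing step to read the footprint back out.
  const data = new Uint8Array(16);           // the "allowed" array
  const probe = new Uint8Array(256 * 4096);  // one page per possible byte value
  let sink = 0;

  function gadget(index) {
    if (index < data.length) {       // branch the CPU learns to predict as taken
      const value = data[index];     // on a misprediction this runs speculatively
                                     // even when index is out of bounds
      sink ^= probe[value * 4096];   // leaves a cache footprint encoding `value`
    }
  }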


I will admit I don't fully understand the implications of this.

Doesn't this mean it's essentially game over for running untrusted JS by-default? Doesn't default-deny functionality like NoScript have to become mandatory in browsers for security? If not, why not?


It means game over for users who run browsers like Safari, which don't isolate each site in its own OS process.

If you load JavaScript from one site, that JS can read the entire state of memory for another site if it lives in the same OS process. This means that any site can include some nefarious JavaScript that reads the user's cookies and passwords for other sites and then logs in as them.


Seems like Firefox doesn't have this feature yet either?

https://wiki.mozilla.org/Project_Fission


Yeah, Firefox doesn't have it yet but as I understand it, they're getting very close to shipping Project Fission.


This only allows reading data from the current process. Chrome and Edge have something called site isolation, where every site gets its own process. In principle, this means that a site can only read its own resources. The catch is that there are a bunch of different ways a site can end up including potentially sensitive resources from other sites (e.g. by referencing them with an `img` tag). So sensitive endpoints need to opt in to additional protections that ensure they do not end up in cross-site browser processes.

But no, this isn't game over for running untrusted JS. It just means that we need to assume that JS can access anything in the same process.
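
For the `img`-tag style of inclusion mentioned above, the opt-in is largely a matter of the sensitive endpoint sending a Cross-Origin-Resource-Policy header, so the browser refuses to hand the response to a document from another site. An illustrative sketch (endpoint and port are placeholders):

  // Illustrative sketch: marking a sensitive response so browsers refuse to
  // deliver it to documents from other sites (e.g. via a cross-site <img>).
  const http = require('http');

  http.createServer((req, res) => {
    res.setHeader('Cross-Origin-Resource-Policy', 'same-site');
    res.setHeader('Content-Type', 'application/json');
    res.end(JSON.stringify({ balance: 'sensitive data' }));
  }).listen(8080);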



