
Ask HN: Could JavaScript be moved to a higher x86 ring than userspace? - somebodynew
There are various mitigations against Meltdown (and AMD is immune), but the general consensus seems to be that there isn't much that can be done about Spectre. The x86 architecture has four rings of protection, but usually only ring 0 (kernel) and ring 3 (userspace) are used. What if userspace was moved to ring 2 so that ring 3 could be used for even less privileged JavaScript execution, reducing Spectre back to a special case of Meltdown which can be solved at the hardware level without performance issues (except perhaps for the new cost of making "system calls" out of JavaScript)? This would also significantly hamper any attempt at remote code execution in the browser.

Is this viable?
======
oceanswave
No, not as written. Conceptually, this already happens in a different way, but
doing this wouldn't mitigate the issues caused by Meltdown and Spectre.

JavaScript executing in browser JS engines already lives in its own isolated
environment within ring 3 (effectively "ring 4," if you like). But these are
just conceptual rings: all unprivileged operations run in ring 3, and JS
engines (and other things) are simply designed to be isolated from other code
running in ring 3.

Microsoft has gone a step further and has something called "Windows Defender
Application Guard" that effectively runs MS Edge in a separate isolated VM,
which might be along the lines of your thinking.[1]

The problem is that code in any ring, ring 3 included, can exploit Meltdown
and Spectre, so even code running in Application Guard can still potentially
read privileged memory anywhere on the system. Even this is therefore not
effective at combating the problem.

It's not practical to make unprivileged code currently running in ring 3 more
trusted, and doing so would require far more invasive changes. You might
conceive of a completely separate system-on-system architecture dedicated to
running completely untrusted code, but this is overkill, as the current
architecture is _supposed_ to provide exactly that. In any case, anything
along these lines is still hypothetical and would require hardware changes
that can't be patched in.

[1] Microsoft's intent in doing so is a bit convoluted: they seem to be saying
that they don't trust their own browser engine to be fully sandboxed, but they
don't explain why they feel this, nor why they feel a VM wouldn't potentially
have its own escapes, such as the one demoed here:
[https://arstechnica.com/information-technology/2017/03/hack-...](https://arstechnica.com/information-technology/2017/03/hack-that-escapes-vm-by-exploiting-edge-browser-fetches-105000-at-pwn2own/)
So this is probably really just marketing exploiting the fears of business
users in order to sell enterprise licenses.

------
yuhong
Unfortunately, x86 paging only distinguishes between supervisor (rings 0-2)
and user (ring 3), so userspace moved to ring 2 would gain supervisor access
to kernel pages.
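To make that concrete: the page-table entry's User/Supervisor flag is a single
bit, so paging can express only two privilege classes, not four rings. A small
sketch (bit positions follow the x86 PTE layout; the helper and constant names
are made up for illustration):

```javascript
// x86 page-table entries carry a single User/Supervisor flag (bit 2),
// so paging can only express two privilege classes:
// U/S = 0 -> supervisor (rings 0-2), U/S = 1 -> user (ring 3).
const PTE_PRESENT = 1 << 0;
const PTE_WRITABLE = 1 << 1;
const PTE_USER = 1 << 2;

const isUserAccessible = (pte) => (pte & PTE_USER) !== 0;

const kernelPte = PTE_PRESENT | PTE_WRITABLE;           // supervisor-only page
const userPte = kernelPte | PTE_USER;                   // user-accessible page

console.log(isUserAccessible(kernelPte), isUserAccessible(userPte)); // false true
```

There is simply no bit pattern that means "accessible to ring 3 but not ring
2," which is why the proposed split can't be enforced by paging alone.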

------
en4bz
A possible mitigation is to disable high-precision timers in JavaScript, since
the attacks rely on having a timer with nanosecond precision. If you limit the
timer to microsecond resolution, it may not be possible to reliably measure
whether a cache line has been loaded.

I could be wrong but this was my first thought for a possible mitigation.

EDIT: I was wrong. I guess you would need to disable Web Workers too.

    
    
      JavaScript does not provide access to the rdtscp
      instruction, and Chrome intentionally degrades the
      accuracy of its high-resolution timer to dissuade
      timing attacks using performance.now().
      However, the Web Workers feature of HTML5 makes
      it simple to create a separate thread that repeatedly
      decrements a value in a shared memory location.

