
Spectre Mitigations Part 2 - cejetvole
https://www.wasmjit.org/blog/spectre-mitigations-part-2.html
======
twtw
I've asked this before, but I'm going to ask it again (sorry for the
repetition).

What makes this sandbox different from previous sandboxes (JVM, browsers, etc)
that makes proponents sufficiently confident to put it in the kernel? All
previous sandboxes designed to be secure have been broken on a regular basis.

The rationale seems to be performance, as if it is an innovation to realize
that having multiple isolated processes and privilege levels has a cost, and
things could be faster without context switches or virtual memory. But
everyone knows this already - the real innovation is daring to trust your
sandbox so much that you think you can do without process isolation.

Sorry to keep asking, but it strikes me as irresponsible to bash context
switching for its performance cost without noting that it is really one of
the foundations of computer security.

First time I asked:
[https://news.ycombinator.com/item?id=18406623](https://news.ycombinator.com/item?id=18406623)

~~~
ec109685
What attack scenarios are you envisioning here? If the host is only running
this software (e.g. a single-purpose server, firewall, etc.), the security
benefits of process isolation are reduced.

Clearly if you are trying to run hostile software from multiple vendors on the
same box (e.g. a browser), you want more sandboxes.

~~~
twtw
I see a future where an npm-style ecosystem develops with the JIT running
inside the kernel, so a "trusted" application ends up running a whole lot of
untrusted code via dependencies. Usually process isolation gives you some
limit on how much damage something like that can do.

Also, if you are 100% sure that the code in it is trusted, then there really
is no reason to sandbox it, right? If the intent is to only run trusted code,
why is this article about spectre mitigations?

~~~
kiriakasis
> if you are 100% sure that the code in it is trusted, then there really is no
> reason to sandbox it, right? If the intent is to only run trusted code,

Trusted code can have bugs; sandboxing, in a sense, is always useful (not
always beneficial)

------
titzer
Do _not_ run untrusted code in the kernel. Period. Double period. Triple
period.

Speculative vulnerabilities and side channels are a lot more than just variant
1 and 2.

WasmJIT is a nice project. But let me repeat. Do not put it in the kernel and
run untrusted code through it, period. Software mitigations are _not_
sufficient. This is not WASM's fault, this is not WasmJIT's fault; it's just a
fundamental reality of modern hardware.

~~~
reitzensteinm
What if the sandbox completely denies any method of timing to the untrusted
code? Including thread to thread communication.

Timing-related side channels seem unavoidable in general, but that doesn't
mean an answer-in -> answer-out computation is intrinsically able to exploit
them.

Also, if you can trust code to not be malicious, but not to be correct,
loading code via WASM and executing it in the same address space isn't
necessarily crazy.

~~~
titzer
> What if the sandbox completely denies any method of timing to the untrusted
> code? Including thread to thread communication.

You can construct timers from shared mutable memory (think: counter thread),
but even in shared-nothing systems, one can construct timers using message
passing (think: a crude timer that counts messages in one process and a sender
hammering it with messages).

> Also, if you can trust code to not be malicious, but not to be correct,
> loading code via WASM and executing it in the same address space isn't
> necessarily crazy.

Sure, agreed. This is why my comment mentioned _untrusted_ code. WebAssembly
sandboxing makes sense to protect the kernel from OOB writes, but it cannot
guarantee no OOB reads from speculative side-channels.

