
Post-Spectre Threat Model Re-Think - pedro84
https://chromium.googlesource.com/chromium/src/+/master/docs/security/side-channel-threat-model.md
======
voidmain
> We don’t believe it’s possible to eliminate, coarsen, or jitter all explicit
> and implicit clocks in the Open Web Platform (OWP) in a way that is
> sufficient to fully resolve Spectre.

I'm not sure, but I think they are giving up on the best strategy here. The
holy grail is for untrusted code to be totally deterministic, so that it
cannot exploit Spectre or any other side channel attack.

Browser Javascript, which is historically single-threaded and event-driven, is
much closer to this than native code ecosystems, where there are decades of
investment in shared-memory multithreading (which is very hard to make
deterministic!). I don't think it would be impossible to make a JS engine where
any given Javascript event handler and all the synchronous APIs it can call
are deterministic, so that all interaction with the nondeterministic outside
world happens through events (and the "outgoing" side of these interactions
would be delayed until the event handler stops running). Then I think you
could make it so that the runtime of JS isn't measurable, or at least is only
extremely coarsely measurable, via these events.

~~~
lioeters
I don't understand fully what it means for a language or piece of code to be
"totally deterministic", but it reminds me of a couple topics that arise in
discussions about JavaScript: about an advantage of TypeScript, that a
_strongly typed_ language allows more strict error checking at compile time,
because the compiler can map the behaviors of functions (which sounds
"deterministic"?); and an advantage of _functional programming_ , that each
function should be "pure", without side effects or dependency on global state
- which, if I remember right, is sometimes called "deterministic". I wonder if
these topics are related to what you mean by "untrusted code to be totally
deterministic, so that it cannot exploit Spectre or any other side channel
attack".

This is an intriguing idea: "a JS engine where any given Javascript event
handler and all the synchronous APIs it can call are deterministic, so that
all interaction with the nondeterministic outside world is through events".

It seems like that would be a significant re-think/re-organizing at a
fundamental architectural level, that it would almost be a new language. Maybe
Web Assembly could achieve this?

~~~
voidmain
Determinism is related to purity ("functional programming"), but is a weaker
notion. For example, this function

    var c = 0  // shared state that persists across calls
    function deterministic_not_pure() {
        return c++
    }

is not pure, and you can't reason over it equationally. But it is
deterministic: if I make a series of calls to this function in my browser, and
you make the same series of calls in your browser, and someone else emulates
the code using a Turing machine made of rocks in the desert [1], we will all
see the same results. (And consequently, there is nothing that such code can
do to learn about its environment!)
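
Contrast that with a function like this, which can observe its environment
through a clock and so gives different results on different runs:

    function nondeterministic_reads_clock() {
        return Date.now() // wall-clock time: an implicit input from
                          // the outside world, so runs disagree
    }

That kind of implicit input is exactly what a side-channel attacker needs.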

As far as I know, everything in Javascript-the-language has this property! [2]
(Equally, so does WebAssembly.) And most of the browser's APIs do as well,
though the sheer size of that surface means that any attempt to fix the rare
cracks will still be daunting. The vast majority of the information that
browser JS gets arrives through events (i.e. function arguments), and I think
most of the rest could be safely forced to be deterministic during the
execution of a JS function.

It's still obviously possible for these _events_ to leak timing information
about the execution of JS, but I think that can be fought. For example, you
could delay the delivery of every event by the number of nanoseconds that JS
has executed on the page so far, though of course that naive approach will
gradually degrade performance. But I think that, for example, accumulating
delay that way until you reach 10ms and then resetting would probably make
timing attacks totally impractical.
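
As a rough sketch of that delay scheme (deliver and jsExecutedNs are made-up
names; a real engine would track execution time and own event dispatch
itself):

    let jsExecutedNs = 0 // ns of JS executed so far; the real engine
                         // would maintain this, it's only faked here

    function deliver(event, handler) {
        // delay delivery by however long JS has run, so the arrival
        // time of an event reveals (almost) nothing about execution
        const delayNs = jsExecutedNs
        // reset once the delay reaches 10ms, bounding the slowdown
        if (delayNs >= 10_000_000) jsExecutedNs = 0
        setTimeout(() => handler(event), delayNs / 1e6) // ns -> ms
    }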

All of this needs way more thought, and I'm not at all suggesting that it's
easy. But browser JS is closer to an actually-securable-against-side-channels
state than almost any other widely deployed platform, and I think it's sad for
the people who control the platform (which is, let's face it, basically the
Chrome team) to throw it away because they love performance.now() and
SharedArrayBuffer.
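
To be concrete about why those two are dangerous: performance.now() hands out
timestamps directly, and SharedArrayBuffer lets you build your own clock by
having a worker spin on a shared counter. A rough sketch, with worker.js
hypothetical:

    // worker.js (hypothetical file) would spin on the shared counter:
    //   onmessage = (e) => {
    //     const counter = new Uint32Array(e.data)
    //     while (true) Atomics.add(counter, 0, 1)
    //   }

    const sab = new SharedArrayBuffer(4)
    const counter = new Uint32Array(sab)
    new Worker("worker.js").postMessage(sab)

    function ticks(fn) {
        const start = Atomics.load(counter, 0)
        fn()
        return Atomics.load(counter, 0) - start // fine-grained "clock"
    }

That counter is a clock good enough to tell a cache hit from a cache miss,
which is the measurement Spectre needs.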

[1] [https://xkcd.com/505/](https://xkcd.com/505/)

[2] Well, Math.random() is an exception, for example: ECMAScript doesn't
specify how its values are produced. But it _could_ (say, by mandating a
seeded, deterministic PRNG).

~~~
lioeters
Thank you, I was hoping to learn something by posting a comment, and was
richly rewarded by your reply. Hadn't seen the "computer of rocks", brilliant.

Yes, I'm starting to see the difference between pure and deterministic. It
seems like the latter is a description of a particular type of
predictable/repeatable "program flow".

I found a related Wikipedia topic [1], "What makes algorithms
non-deterministic?" Quote:

- If it uses external state other than the input, such as user input, a
global variable, a hardware timer value, a random value, or stored disk data.

- If it operates in a way that is timing-sensitive, for example if it has
multiple processors writing to the same data at the same time. In this case,
the precise order in which each processor writes its data will affect the
result.

- If a hardware error causes its state to change in an unexpected way.

Although real programs are rarely purely deterministic, it is easier for
humans as well as other programs to reason about programs that are. For this
reason, most programming languages and especially functional programming
languages make an effort to prevent the above events from happening except
under controlled conditions.

The prevalence of multi-core processors has resulted in a surge of interest in
determinism in parallel programming.

---

The above includes a number of topics you raised, about events and timing
information, and the possibility of making a language deterministic with some
exceptions "under controlled conditions".

This part feels relevant too: "easier for humans as well as other programs to
reason about".
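
If I understand correctly, the timing-sensitive case above is exactly what
SharedArrayBuffer reintroduces to JavaScript: two workers writing the same
shared memory produce a result that depends on scheduling. A sketch (racer.js
is a hypothetical worker file):

    // racer.js (hypothetical file) would repeatedly write its own id:
    //   onmessage = (e) => {
    //     const view = new Int32Array(e.data.sab)
    //     for (let i = 0; i < 1e6; i++) view[0] = e.data.id
    //   }

    const sab = new SharedArrayBuffer(4)
    for (const id of [1, 2]) {
        new Worker("racer.js").postMessage({ sab, id })
    }
    // the final value in the shared slot depends on how the two
    // workers were scheduled, not on any input to the program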

[1]
[https://en.wikipedia.org/wiki/Deterministic_algorithm#What_makes_algorithms_non-deterministic](https://en.wikipedia.org/wiki/Deterministic_algorithm#What_makes_algorithms_non-deterministic)

------
peteretep
> for example ensuring that password and credit card info are not
> speculatively loaded into a renderer process without user consent

Great that they're thinking about this, but I can't help but think many users
will happily just copy-paste their CC details into random sites protected with
a password of "password".

There's obviously value in defense in depth for security, and I applaud the
developers for caring so much even when users will attempt to subvert all
security measures.

~~~
nothrabannosir
_> Great that they're thinking about this, but I can't help but think many
users will happily just copy-paste their CC details into random sites
protected with a password of "password"._

To be honest, we're pretty much incentivised to by the fraud insurance
structure. Any fraud is automatically covered through chargebacks. Eventually
it's the merchants that end up paying for this, big time, so they have the
incentive to fix it. But merchants are usually so far removed from the actual
CC handling (it goes through PSPs, and the place a CC is stolen from is very
far from the merchant who is eventually defrauded with it) that there is very
little they can actually do.

I don’t care two pence about where my CC ends up, to be honest, beyond perhaps
the very minimum I need to in order to avoid being held liable for the
chargebacks through negligence. Which is essentially never.

It’s not healthy for the overall system, but I pay for the privilege (through
indirect fees), so I might as well use the comfort :/ tragedy of the commons.

CC fraud incentives are a tricky beast.

