Fantastic Timers: High-Resolution Microarchitectural Attacks in JS (2017) [pdf] (gruss.cc)
89 points by jhatax on Jan 5, 2018 | 34 comments



It's apparently a pretty controversial position to have now, but I'll say it again:

Leave JS off by default. Whitelist sites that absolutely need it. Running completely untrusted foreign code on your hardware has been, is, and likely will continue to be a source of security problems.

As a bonus, you get to enjoy much faster-loading, nearly ad-free and cruft-free pages on almost every site. Now couldn't be a better time to try it out.


I've had JS off for non-TLS sites for a while. Any site that wants to run JS should have the chops to know how to set up a cert, and my machine shouldn't be running JS that could have been MITM'd.


I'm astonished at how fast websites are without JavaScript. I just turned it on today to place an order on eBay and was immediately annoyed by how slow and sluggish it was. Why aren't there programs to buy stuff on the internet?

There are some other minor pain points though. Why can't I collapse comments here on HN without JavaScript?

There are more examples of minor issues that seem to be completely solvable by a statically defined markup language. Mozilla, where are you when we need you?

Allowing JavaScript only on certain sites is not a good solution. First, it leads to unfair conditions and thus centralization. Second, it continues the appification of the web, with its consequence of ever-increasing dependency on software that is outside the control of its users.


> Why can't I collapse comments

Features like this could easily be a declarative feature in HTML. e.g. something like:

    <div class="comment_wrapper">
        <action type="show,hide" for="foo">
          <img ...> <!-- or whatever -->
        </action>
        <div id="foo" class="comment"> ... </div>
    </div>
Unfortunately, development of new HTML features like this mostly stopped when a major browser developer decided to try to fight with Microsoft for the desktop/mobile markets by changing webpages into software.


You don’t even need that. You can work some magic with CSS to hide/show and even rotate/flip elements using inputs.

> input:checked + div { display: none; }

Some people don’t like that either though. :/


You can use the <details> tag for that: https://developer.mozilla.org/en-US/docs/Web/HTML/Element/de...


I've been doing it for more than a year; the first two weeks were very painful and I almost regretted it. However, it has become a lot better since I established a long enough whitelist and put my fixes in place.

The only thing I'm missing is the ability to run my own custom scripts on pages where I've disabled JavaScript. I'm able to fix quite a bit with Stylus, but not everything.

It's also weird to me that Firefox, which supposedly cares about privacy, doesn't have per-domain JS blocking like Chrome does, so I have to use third-party add-ons.


You will get extreme page breakage leaving JavaScript off on every site. If you're willing to deal with the micromanagement, that's great, but it isn't the right answer for most people.

Instead, I recommend using uBlock Origin in "medium mode". Medium mode allows first-party scripts to run but blocks all third-party scripts by default.

In medium mode, around 75% of pages work without any additional finagling and the others can be fixed by enabling selective script sources only on that specific host. I find it a reasonable middle-ground.

https://github.com/gorhill/uBlock/wiki/Blocking-mode:-medium...


That's the default behavior of uMatrix, which also lets the user select what to do with the other scripts. I use both: uBlock Origin for ads and for hiding parts of the page (on mobile especially: fixed menus), and uMatrix for scripts. I used NoScript, but its new post-Firefox 57 interface became more complex than uMatrix's, so I jumped ship. It's not so hard after all if one is a web developer. If not, maybe uBlock's medium mode would be OK, but do you have non-developer friends using it that way?


Non-developer friends certainly, non-technical ones not so much.

If that's all you use uMatrix for, you can remove it and get all that functionality with uBlock Origin alone.


uMatrix is a wonderful way to do this :)

It has a small learning curve if you understand a bit of how the web works.


A centralized, auto synced, cross-browser whitelist would be nice. Of course everyone's individual lists would be different.


Woah!

Summarizing:

...

Van Goethem et al. exploited more accurate in-browser timing to obtain information even from within other websites, such as contact lists or previous inputs.

...

Oren et al. recently demonstrated that cache side-channel attacks can also be performed in browsers. Their attack uses the performance.now method to obtain a timestamp whose resolution is in the range of nanoseconds. It allows spying on user activities but also building a covert channel with a process running on the system. Gruss et al. and Bosman et al. demonstrated Rowhammer attacks in JavaScript, leveraging the same timing interface. In response, the W3C and browser vendors have changed the performance.now method to a resolution of 5 µs. The timestamps in the Tor browser are even more coarse-grained, at 100 ms.

In both cases, this successfully stops side-channel attacks by withholding necessary information from an adversary. In this paper, we demonstrate that reducing the resolution of timing information or even removing these interfaces is completely insufficient as an attack mitigation.

...

Our key contributions are:

– We performed a comprehensive evaluation of known and new mechanisms to obtain timestamps. We compared methods on the major browsers on Windows, Linux and Mac OS X, as well as on Tor browser.

– Our new timing methods increase the resolution of official methods by 3 to 4 orders of magnitude on all browsers, and by 8 (!!) orders of magnitude on Tor browser. Our evaluation therefore shows that reducing the resolution of timer interfaces does not mitigate any attack.

– We demonstrate the first DRAM-based side channel in JavaScript to exfiltrate data from a highly restricted execution environment inside a VM with no network interfaces.

– Our results underline that quick-fix mitigations are dangerous, as they can establish a false sense of security.


> reducing the resolution of timer interfaces does not mitigate any attack.

> quick-fix mitigations are dangerous, as they can establish a false sense of security.

This demonstrates, again, the danger of treating security as default-permit. This blacklist-style thinking is very common, but it guarantees eventual failure because you cannot enumerate badness [1].

Limiting the granularity of performance.now assumes that providing any timing information at all is safe. It's the same basic misunderstanding of what it means to design for security that I hear far too often whenever a new security issue is discussed: someone always asks, "Is this an actual problem in the wild, or whining about some hypothetical that isn't a 'real threat'?" So what? Future threats are not limited to only the attacks we know about today.

As I said in a recent comment [2], the thing that nobody really wants to talk about is that it isn't possible to know the behavior of programs in a Turing-complete language without running them. Declarative documents in HTML+CSS were safe, but trying to run potentially malicious Turing-complete programs safely is a provably futile endeavor.

[1] http://www.ranum.com/security/computer_security/editorials/d...

[2] https://news.ycombinator.com/item?id=15708099


Why high-precision timing was ever introduced into browsers is beyond me. I don't see why timers should be more accurate than millisecond precision, or why a timer should return different values when polled more than once in the same execution context (given the event-based execution model).

Edit:

> Unlike other timing data available to JavaScript (for example Date.now), the timestamps returned by Performance.now() are not limited to one-millisecond resolution. Instead, they represent times as floating-point numbers with up to microsecond precision.

What a terrible idea. If this kind of profiling is needed, it should be better off as a feature of the browser developer tools, not a built-in function.

None of these examples are compelling to me either: https://w3c.github.io/hr-time/#introduction

HTML5 audio has its own standards, as does drawing. In fact, a high-precision timer is the worst solution to any of those problems.
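For concreteness, the precision gap the MDN quote above describes (values are illustrative; actual granularity varies by browser and has since been coarsened in response to these attacks):

    Date.now();          // e.g. 1515110400000      -> integer milliseconds since the epoch
    performance.now();   // e.g. 4521.319999999832  -> fractional milliseconds since page load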


It was introduced because "web developers are building more sophisticated applications where application performance is increasingly important. Developers need the ability to assess and understand the performance characteristics of their applications using well-defined interoperable measures." [1].

Browser-engine makers, like WebKit, were itching [2] to implement this, so they participated early on.

The proceedings of this W3C working group were conducted in the open; their mailing list archive is public [3].

For a cute and oblivious early message, see this post [4] from July 2010. Luckily it was eventually followed up by the first wave of enumerated privacy concerns [5] in October 2010.

[1] https://lists.w3.org/Archives/Public/public-web-perf/2010Jun...

[2] https://lists.w3.org/Archives/Public/public-web-perf/2010Jul...

[3] https://lists.w3.org/Archives/Public/public-web-perf/

[4] https://lists.w3.org/Archives/Public/public-web-perf/2010Jul...

[5] https://lists.w3.org/Archives/Public/public-web-perf/2010Oct...


I would say that, ideally, you'd have a dev-tools feature to make the function available, and just a stub by default that returns a constant or a low-resolution timestamp. That way, you can still do whatever kind of custom instrumentation you want, but it's not sitting there waiting to be exploited in an end user's browser.
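Just to illustrate the shape of that idea from userland (an assumption on my part; a real implementation would live inside the browser and be flipped by a dev-tools switch):

    // Sketch only: shadow performance.now with a coarse version so pages see
    // millisecond resolution at best; dev tools would restore the real one.
    const realNow = performance.now.bind(performance);
    performance.now = () => Math.floor(realNow());   // whole milliseconds only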


Someone once suggested the risk in opening email attachments was analogous to the risk in running executable code (e.g., JavaScript) from websites. Each can expose the user to a multitude of vulnerabilities. The argument against this comparison at the time was that JavaScript from websites, unlike email attachments, could be run in a "sandbox". Even if the code could not be trusted, it was "safe" because it was "isolated".

Today, many users do not run Javascript because all too often it exposes them to excessive advertising and resource usage. Now, after researchers defeated KASLR with Javascript, and with Meltdown and Spectre being implemented in Javascript, there are additional benefits of not running third party Javascript.


That was extremely quick. Did they have this paper up their sleeve, expecting browsers to implement these kinds of mitigations?


OP here. I didn’t write the paper, but had goose bumps (in a scared, WTF kinda way) after reading through it. Thanks for the comments so far...

I got the link to this paper from Luke Wagner’s post [1] on Mozilla’s Security blog regarding Spectre, and the mitigations being implemented in Firefox. I had a strong sense of foreboding after reading both the post and the linked PDF, for a few reasons:

1. The overall tone of the post is pretty dour because a veritable Pandora’s box has been opened by these findings.

2. The team doesn’t really know how to fix the issues surfaced without performance penalties.

3. Not every attack vector is known to the research and browser development community!

4. Mozilla (and other browser vendors) strongly believe that the web as a platform holds a great deal of promise. For this to become a reality, browsers need to perform close to native speeds. High-fidelity timers are deemed pivotal to the performance equation even though they risk the overall security of the system. This is why Luke and team are going to invest heavily in fixing these issues as they are reported.

This last point is key, and it surfaces the inherent Catch-22 faced by browser vendors. They need to balance performance and security considerations for the web platform to succeed. And right now, security is being sacrificed at the altar of performance.

This brings me to the question that I have been noodling over all week long: Is the notion of the “web as a platform” an anachronism? Can we ditch this idea because native apps have won?

More insightful and influential folks in the HN community should chime in here with their perspectives.

1. https://blog.mozilla.org/security/2018/01/03/mitigations-lan...

Edit: light wordsmithing


Personally, I'm betting on the native side.

My web development projects were a pleasure back when it was all about HTML/CSS instead of trying to duplicate an OS.


I was wondering how SharedArrayBuffer could be used to implement a high-resolution timer. It simply consists of a dedicated thread that keeps incrementing a counter in the SharedArrayBuffer; reading that counter value from other threads effectively gives you a high-resolution timer.

It can achieve 2 ns resolution.
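A rough sketch of that construction (my own illustration, not the paper's code; the worker.js file name is hypothetical):

    // worker.js -- a dedicated thread that does nothing but increment the shared counter
    onmessage = (e) => {
      const ticks = new Uint32Array(e.data);   // view over the SharedArrayBuffer
      while (true) Atomics.add(ticks, 0, 1);   // monotonically increasing "clock"
    };

    // main thread -- sample the counter before and after the operation being timed
    const sab = new SharedArrayBuffer(4);
    const ticks = new Uint32Array(sab);
    const worker = new Worker('worker.js');
    worker.postMessage(sab);                   // the buffer is shared, not copied

    function timeOperation(fn) {
      const start = Atomics.load(ticks, 0);
      fn();
      return Atomics.load(ticks, 0) - start;   // elapsed ticks, not nanoseconds
    }

The resolution is roughly the cost of one counter increment; calibrating against a known delay converts ticks into wall-clock time, which is how you arrive at figures like the ~2 ns above.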


So this paper appears to have been published on 2017-12-23 (according to https://link.springer.com/chapter/10.1007/978-3-319-70972-7_...), which was ahead of the Spectre and Meltdown disclosures. Truly unfortunate timing for the browsers implementing mitigations, huh.

It seems that one of the covert channels described here has been fixed (SharedArrayBuffers) but many of the other ones have not. The passive reading of data from timing-based side effects (like CSS animation as described in the paper) seems particularly hard to avoid in general. HTML5 video will have the same vulnerability, I bet.


I had a depressing feeling this was coming.

Are cache timing attacks just inherently impossible to stop? It’s sure starting to look that way with the last couple years of security research. Seems like every mitigation that gets thrown up is knocked down immediately.


It may be that the only true solution is to replace / re-engineer CPUs with these vulnerabilities expressly in mind, and guarded against.

Until then (and that will probably take a while, obviously) it's probably going to be a cat-and-mouse game. I would expect a lot of patching. Future discoveries probably won't have the luxury of a 6-month coordination/fixing window.

On top of that, this is drawing a lot of attention to a particular area, which will likely have a snowball effect: as more and more exploits are found, more and more people will search for them.

Finally, if the mitigation strategy is "don't let a JavaScript application accurately know how much time something takes", note that for 25 years this has not been a serious fundamental design consideration (quite the opposite, to the point that a super-accurate timer was given away freely in the form of `performance.now`). It seems like one big messy ball of yarn to untangle.


"It may be that the only true solution is to replace / re-engineer CPUs with these vulnerabilities expressly in mind, and guarded against. "

No such way that anyone knows about exists without huge performance loss. Without changing existing software/programming/etc paradigms, it is, IMHO, quite unlikely such a way exists.


No, they aren't. These mitigations are built by attempting to deny the ability to observe the side effect. This is, as this paper (and many others) shows, an exercise in long-term futility. Heck, the history of physics shows this is a long-term futility (i.e. we've always found ways to observe things previously thought unobservable).

Short term, it's an arms race.

But you could, for example, have the JIT generate lfence everywhere and nop them out as you prove speculation safety. You could make loads/stores constant-time (but with current memory technology this has a higher cost than not speculating at all).

There are no future magic processor bullets here. Memory instruction parallelism is the main driver of performance in most applications.

Speculation of loads/stores is the one thing processors could do in cases where compilers/JITs/etc. could not, because it was assumed they could undo the side effects (i.e. compilers don't do it because the accesses can fault; it was assumed processors could because they can make the fault go away. It turns out the fault is not the only observable side effect, and now all the speculation they do is not really safe).

Where do you go from here? Well, AFAIK, they can't make it safe to speculate the way they do now in any multi-core cache-coherent system that anyone has ever thought about (single-core, it's theoretically possible), because the side effect is observable not just by your processor but by the other ones. So even if your local processor could undo all the effects, because the cache is coherent it would have already been observable to another processor, and you lose again.

This leaves vendors in the lurch of trying to choose between:

1. Do we design cooperative software/hardware mechanisms to speculate, and eat the perf cost of not speculating otherwise?

2. Do we just tell everyone it's a software problem and keep on speculating everything?

3. Do we try to find an effective processor/ISA/etc. abstraction that lets us speculate by default and protect the things that matter?

etc

Nobody I've seen so far has any amazing ideas about what to do in the future.

The main saving grace here is that a lot of compute-bound workloads have been moving towards GPUs/etc., which generally don't do speculative execution to begin with (and are generally hard to observe anyway). It may in fact be that the future answer for CPUs is

"stop executing speculatively, eat performance hit, accelerate move of various perf sensitive workload types to more specialized processors that use paradigms that don't have this issue"


I may have misunderstood this, but from what I read it seems the reason Meltdown did not affect AMD is that when code running in user mode is speculatively executed it is not allowed to access kernel memory.

This doesn't save AMD from Spectre, because Spectre is not about bypassing hardware memory protection. It's about getting code that the hardware allows to freely access certain memory, but that was designed not to do so, to do so anyway.

So what if we added hardware memory protection that could be used by an application to protect its own memory space from itself? An application that wants to run sandboxed code could then use this to protect itself from that sandboxed code.

I think we already even have a mechanism that does most of this (or at least we did up until x86 went 64-bit...I'm not sure how much of this still works in 64-bit mode): protection rings. x86 has 4 protection rings. Most operating systems only use two of them: ring 0 for the kernel, ring 3 for applications. Why not use ring 2 for applications, and ring 3 for sandboxed code running inside applications?

Combine this with the x86 segmentation mechanism, which can be used to provide separate virtual address ranges within the application's virtual address space for each of the sandboxes. Even if speculation isn't fixed to respect memory protection, this could go a long way, because an instruction cannot generate an address outside of the currently mapped segments, except by first loading a new selector into the segment register, and that does a check to make sure the selector you are trying to load is for a segment that the current ring is allowed to use.

I may be misremembering a lot of the details on how the rings and the segment system works, because I haven't really looked at this stuff in a very long time [1].

[1] Probably last around 1986 or so, when I was on the team at Interactive Systems that ported System V Release 3 from the 3B2 to the 386 for AT&T. I had to deal with a lot of the segment stuff back then, especially since I was half the team on the subproject to write the thing that allowed running 286 Unix and Xenix binaries on 386 Unix.


Yes, something like this would work. Your solution is basically #1/#2 - Declaring this mostly a software problem, and giving a cooperative mechanism where software effectively marks the areas it wishes to protect. Rely on everyone to isolate everything they like, then continue speculating everything in sight and declare everyone who didn't isolate well enough to be "bad programmers" :)


One solution would be to ensure that each JavaScript interpreter is isolated in a process memory space where it cannot see anything it shouldn't. Basically, you have to design the software with the understanding that anyone who can execute even "sandboxed" code in your process effectively has full read privileges over all the bytes therein.


It is interesting that all the security measures the Tor developers implemented gave them exactly nothing. The researchers were able to get the same 2 ns resolution as with stock Firefox.

I wonder if the measures in Firefox to counter Spectre will fare any better.


I was wondering when we'd see a paper like this since I saw the post on Firefox reducing timer resolution. My immediate thought was "there have to be other ways to get good-enough timers to mount these attacks". Sure enough...


Nice timing. This follows on nicely from my previous HN comment that timers are everywhere. Pretty much any operation that has a small predictable time can be used as the basis for timing something else.
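One concrete way to see that: even a deliberately coarse clock can be refined by counting how many iterations of a tight loop fit between two of its ticks. A rough sketch of the idea (my own illustration, not code from the paper):

    // Estimate how many loop iterations fit into one tick of the coarse clock.
    // Counting iterations between edges then gives timing well below the
    // clock's nominal resolution.
    function iterationsPerTick() {
      const edge = performance.now();
      while (performance.now() === edge) {}   // align to the next clock edge
      let count = 0;
      const tick = performance.now();
      while (performance.now() === tick) {
        count++;                              // spin for exactly one tick
      }
      return count;
    }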


I'm pretty sure a timer on a webserver is going to be good enough to determine if an item is cached or not in the client. The time difference between cached and not-cached is huge.
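A rough sketch of what that could look like on the attacker's side (a hypothetical Node.js server with made-up /start and /done endpoints): the page pings /start, triggers a load of the probed resource, then pings /done, and the server-side gap between the two requests separates a cache hit from a miss without any client-side timer.

    const http = require('http');

    const starts = new Map();                       // probe id -> server timestamp

    http.createServer((req, res) => {
      const [path, id] = req.url.split('?');        // e.g. "/start?probe42"
      if (path === '/start') {
        starts.set(id, Date.now());
      } else if (path === '/done' && starts.has(id)) {
        const elapsed = Date.now() - starts.get(id);
        console.log(`probe ${id}: ~${elapsed} ms`); // small gap => likely cached
      }
      res.end();
    }).listen(8080);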



