
Pixel-perfect timing attacks with HTML5 - leetreveil
http://contextis.co.uk/research/white-papers/pixel-perfect-timing-attacks-html5/
======
jffry
This is a fascinating attack. Definitely read the bits on the SVG filter
timing attacks. They construct something that allows distinguishing black
pixels from white pixels, apply a threshold filter to an iframe, and then read
out pixels from the contents of that iframe.

Then they turn this around, set an iframe's src to
"view-source:https://example.com/", and read out information from there (in a
more efficient manner).
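
Roughly, the measurement loop is something like this (a sketch of the idea,
not the paper's code; the #expensive-filter id and the 30 ms threshold are
made-up placeholders):

    // Apply an expensive SVG filter to the element covering the pixel of
    // interest, then measure how long the next frame takes to paint. A slow
    // frame suggests the filter hit the "expensive" colour.
    function timeOneFrame(el: HTMLElement, filterValue: string): Promise<number> {
      return new Promise((resolve) => {
        el.style.filter = filterValue;        // e.g. 'url(#expensive-filter)'
        requestAnimationFrame((start) => {
          requestAnimationFrame((end) => {
            el.style.filter = 'none';
            resolve(end - start);             // approx. paint time of the filtered frame
          });
        });
      });
    }

    async function readPixel(el: HTMLElement): Promise<boolean> {
      const t = await timeOneFrame(el, 'url(#expensive-filter)');
      return t > 30;                          // slow frame => treat the pixel as black
    }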

~~~
aidos
I love the way timing attacks seem so unlikely but are actually easy ways to
extract information.

Everything about this attack is beautiful. A series of seemingly unrelated
issues that don't look like a problem from the outside, but when combined
produce a solid attack that you could roll out today.

Well worth reading through the whole article.

~~~
jffry
A lot of security exploits start with one seemingly innocuous little toehold
and then use, abuse, and combine the hell out of it to do surprising and
obviously-undesirable things. That's what I find so beautiful about this sort
of hack.

------
zubspace
The paper describes how to prevent the sniffing attack:

 _Website owners can protect themselves from the pixel reading attacks
described in this paper by disallowing framing of their sites. This can be
done by setting the following HTTP header:

X-Frame-Options: Deny

This header is primarily intended to prevent clickjacking attacks, but it is
effective at mitigating any attack technique that involves a malicious site
loading a victim site in an iframe. Any website that allows users to log in,
or handles sensitive data should have this header set._
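
For concreteness, a minimal sketch of sending that header from a Node/TypeScript
server (any web server or reverse proxy can set it; the port and response body
here are arbitrary placeholders):

    import * as http from 'http';

    // Every response carries X-Frame-Options, so browsers refuse to render the
    // site inside a frame on another origin.
    http.createServer((req, res) => {
      res.setHeader('X-Frame-Options', 'DENY');
      res.setHeader('Content-Type', 'text/html');
      res.end('<p>Not frameable.</p>');
    }).listen(8080);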

I wonder, why is this option an opt-out and not an opt-in? Shouldn't this be
the default?

~~~
danielweber
I know people who try to do interesting things for the users with iframes and
are completely frustrated by things like that. File under "why we can't have
nice things."

------
randallu
These same guys had previously used WebGL to suck out text in the same way;
unfortunately the demo is no longer at the same URL, but it is what's
responsible for the fairly weird implementation of CSS Shaders:
http://www.schemehostport.com/2011/12/timing-attacks-on-css-shaders.html

It's amazing that the same thing can be observed with the standard SVG
software filters, though. I'd imagine that using X-Frame-Options: DENY as they
suggest is a much better solution than killing all JS (because you just know
some incompetent ad network will manage to flip the switch and break millions
of pages with that ability...).

~~~
jffry
Would X-Frame-Options:DENY work to mitigate the view-source: attack?

~~~
jffry
Just threw together a test case. X-Frame-Options does seem to mitigate the
view-source attack:
http://jsfiddle.net/GEynT/2/embedded/result/
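
The gist of the test is something like the following (a reconstruction, not the
fiddle's exact code; example.com stands in for the target, and whether the
frame renders has to be checked visually, since cross-origin framing failures
aren't reliably exposed to script):

    // If the target sends X-Frame-Options: DENY, the view-source: frame should
    // stay blank; if the markup renders, the site is frameable.
    const frame = document.createElement('iframe');
    frame.src = 'view-source:https://example.com/'; // placeholder target
    frame.width = '600';
    frame.height = '400';
    document.body.appendChild(frame);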

~~~
joshfraser
To be clear, the hack is still possible without view-source. Using view-source
just makes the attack easier and more generic.

------
Someone
For those, like me, wondering why that 'detect visited' hack doesn't simply
embolden visited links or change their font or font size and use
getComputedStyle or getBoundingClientRect [1] to see whether that changes the
bounds of the element: that trick was mitigated three years ago. See
http://hacks.mozilla.org/2010/03/privacy-related-changes-coming-to-css-vistited/.

[1] not explicitly mentioned there, but I think the solution described intends
to plug that hole, too.

------
M4v3R
These attacks are getting more and more creative. I'm beginning to think that
there is no such thing as perfect security in a world that constantly demands
new features.

~~~
talmand
There is no such thing as perfect security.

~~~
Dylan16807
You can always turn the device off.

It really is a matter of features and how they're implemented.

Good luck picking a byte that can exploit a 7400.

~~~
talmand
If we're only talking about exploiting a device across a network, sure turn it
off or disconnect it from the network. But there's more to security than that.

One can always take the device and turn it on for oneself.

If one can't exploit the device, one can resort to rubber-hose cryptanalysis.

~~~
Dylan16807
We are only talking about exploiting a device across a network.

~~~
TeMPOraL
Then bridge the gap by infecting pendrives. That's how e.g. Stuxnet worked.

~~~
Dylan16807
I don't understand what you're replying to. Physical access is a great way to
bypass network security, but it has nothing to do with websites.

~~~
TeMPOraL
"Exploiting device across network".

Security doesn't exist in isolation. AKA. there's always a way.

~~~
Dylan16807
If you assume a specific target, there is always a way to get to them.

If you're talking about making a browser secure against internet-based
attacks, there is not always a way. This type of security is merely extremely,
overwhelmingly difficult.

------
mistercow
It seems to me like a web server ought to be able to send some signal to
browsers, on either a per-page or per-subdomain basis, that disables JS for
those pages. If another page includes such a JS-disabled page in an iframe,
then at the very least all scripts on the parent page should be immediately
terminated, and ideally loading of the iframe should fail if any scripts have
executed (obviously an exception should be made for, e.g., Chrome extensions).

This should completely nullify a vast number of potential attacks for sites
that are particularly sensitive. There's no reason, for example, that the
logged-in portion of a banking site should need to use JS. That seems like a
reasonable sacrifice for adding significant security to critical websites.

~~~
TylerE
> There's no reason, for example, that the logged-in portion of a banking site
> should need to use JS.

Said no one who has ever had to develop a decent web ui.

~~~
mistercow
That's what I do professionally, and I can say with confidence that a banking
website does not _need_ to use JS. There is no functionality there that can't
be done the old-fashioned way.

Online banking does not need to be a rich HTML5 experience, and online banking
worked just as well as it does today before the modern trend of trying to make
everything act like a desktop app.

Would developing the UI without using JS be harder? Yes, marginally. Is it
worth opening up security vulnerabilities to make development slightly easier?
No. Just in terms of how much each of those costs the bank, no. From the
users' perspective, no.

~~~
randallu
GMail, etc, is just as important from a security perspective as your banking
site since it could be used to perform a password reset. It could conceivably
be iframe'd and have its contents sucked out.

It's unlikely that every link in the chain will stop using JS, so we must
develop more creative methods.

There's also a history attack in here based on observing a repaint due to a
link changing color. So even if one did turn off JS due to some signal,
oppressive regime X could still sniff if their subjects had visited website Y
and do bad things to them. At this point tracking visited links seems like
it's more trouble than it's worth!

~~~
mistercow
>GMail, etc, is just as important from a security perspective as your banking
site since it could be used to perform a password reset. It could conceivably
be iframe'd and have its contents sucked out.

Now that is a good point. In general, I don't know what to do about the weak
link of email, which goes far beyond sniffing. I think it's hard for people to
properly respect the gravity of their email's security when the vast majority
of what comes through it is basically frivolous, or at least security-
noncritical.

------
tripzilch
I have a soft spot for side-channel attacks; they are often a beautiful
example of out-of-the-box thinking. This whitepaper is no exception, in
particular the second part about (ab)using the SVG filters.

I was thinking (it doesn't help much in mitigating this attack, of course):
they calculate average rendering times over several repeats of the same
operation, but when profiling performance timings it's usually much more
accurate to take the _minimum_ execution time. The constant timing that you
want to measure is part of the lower bound on the total time; any random OS
process or timing glitch is going to _add_ to that total, but it will not
somehow make the timespan you are interested in run faster. There might be
some exceptions to this, though (in which case I'd go for a truncated median
or a percentile-range average or something).
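
Something like this is what I mean (just a sketch; measureOnce stands in for
whatever timing primitive is actually being profiled):

    // The quantity of interest is a lower bound on each sample, so the minimum
    // over repeats is a better estimator than the mean, which scheduling noise
    // can only inflate.
    function minimumTime(measureOnce: () => number, repeats: number): number {
      let best = Infinity;
      for (let i = 0; i < repeats; i++) {
        best = Math.min(best, measureOnce());
      }
      return best;
    }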

I also had some ideas to improve performance on the pixel-stealing, as well as
the OCR-style character reading. For the latter, one could use Bayesian
probabilities instead of a strict decision tree; that way it would be more
resilient to accidental timing errors, so you wouldn't need to repeat as often
to ensure that _every_ pixel is correct: just keep reading out high-entropy
pixels and adjust the probabilities until there is sufficient "belief" in a
particular outcome.
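
Roughly like this (a sketch; the 0.9/0.1 likelihoods, i.e. how often a black
vs. white pixel yields a "slow" readout, are made-up placeholder numbers):

    // Keep a probability that the pixel is black and update it with each noisy
    // timing readout, instead of demanding that a single read be correct.
    function updateBelief(pBlack: number, observedSlow: boolean): number {
      const pSlowGivenBlack = 0.9;  // assumed readout characteristics
      const pSlowGivenWhite = 0.1;
      const likBlack = observedSlow ? pSlowGivenBlack : 1 - pSlowGivenBlack;
      const likWhite = observedSlow ? pSlowGivenWhite : 1 - pSlowGivenWhite;
      return (likBlack * pBlack) / (likBlack * pBlack + likWhite * (1 - pBlack));
    }

    // Usage: start at 0.5 and keep sampling until the belief is near 0 or 1,
    // e.g. while (p > 0.01 && p < 0.99) p = updateBelief(p, takeNoisyReadout());
    // where takeNoisyReadout() is whatever timing measurement is available.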

But as I understand from the concluding paragraphs of this paper, these
vulnerabilities are already patched or very much on the way to being patched,
otherwise I'd love to have a play with this :) :)

------
ptolts
That was the most interesting thing I've read in a while.

------
Sephr
To mitigate the new detect visited vectors, browsers could render everything
as unvisited and then asynchronously render a 'visited' overlay (in a separate
framebuffer) at a later time. SVG filters will have to be processed twice for
the visited-sensitive data, so a vendor may just wish to limit SVG filters to
only processing the 'unvisited' framebuffer for the sake of performance.

