
HEIST: HTTP Encrypted Information Can Be Stolen Through TCP-Windows [pdf] - jerf
https://tom.vg/papers/heist_blackhat2016.pdf
======
niftich
The techniques are interesting and clever, but isn't this "attack" only
possible on websites that fail to properly protect against unapproved cross-
origin requests, meaning they would be vulnerable anyway?

I re-read the specs to make sure [1], but isn't it on the server to alter the
response (potentially to nothing) if the cross-origin request didn't come from
a source the server expects?

The paper says:

"Similar to the defense of blocking 3rd-party cookies, the web server could
also block requests that are not legitimate. A popular way of doing this, is
by analysing the Origin and/or Referer request headers. However, it is still
possible to make requests without these two headers, preventing the web server
to determine where to request originated from. As a result, this technique can
not be used to prevent attacks"

On the contrary: the cross-origin access control spec (last edited in 2014)
has an entire section on how to prevent malicious cross-origin requests like
this, with verbiage like:

"This extension enables server-side applications to enforce limitations (e.g.
returning nothing) on the cross-origin requests that they are willing to
service."

If the server properly implements cross-origin access control by, say,
returning nothing, the malicious cross-origin requesting script can't perform
reflection attacks, and there's nothing interesting to measure with TCP
windows and compression and the like.
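
For illustration, a minimal sketch of that kind of enforcement in plain Node
(the endpoint and whitelist are made up):

    // Hypothetical server-side check: only serve the real content when
    // the Origin header, if present, is on a whitelist.
    const http = require('http');
    const allowed = new Set(['https://trusted.example']);
    http.createServer((req, res) => {
      const origin = req.headers['origin'];
      if (origin && !allowed.has(origin)) {
        res.statusCode = 403;
        res.end(); // "returning nothing"
        return;
      }
      // Note: requests that carry no Origin header at all sail straight
      // through this check.
      res.end('sensitive, reflected content');
    }).listen(8080);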

[1] [https://www.w3.org/TR/access-control/](https://www.w3.org/TR/access-control/)

~~~
pfg
The thing is: this attack doesn't even use CORS. It just sends a GET request
to a URL and looks at the timing, which can be done using the fetch and
Resource Timing APIs.
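
A rough sketch of that measurement (not the paper's exact code; the target
URL is a placeholder):

    // Fire a credentialed cross-origin request and time it. The fetch
    // promise resolves as soon as the response headers (first bytes)
    // arrive; the Resource Timing entry's responseEnd marks the end of
    // the download and is exposed even without Timing-Allow-Origin.
    function probe(url) {
      return new Promise(resolve => {
        const t0 = performance.now();
        fetch(url, { mode: 'no-cors', credentials: 'include' }).then(() => {
          const t1 = performance.now(); // first byte arrived
          (function poll() { // wait for the full timing entry to appear
            const entry = performance.getEntriesByName(url).pop();
            if (entry && entry.responseEnd) {
              // responseEnd - t1 ~ 0: the response fit in one TCP
              // congestion window; ~1 RTT: it needed more round trips.
              resolve({ firstByte: t1 - t0, complete: entry.responseEnd - t0 });
            } else {
              setTimeout(poll, 5);
            }
          })();
        });
      });
    }
    probe('https://example.com/account').then(console.log);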

Here's a JSFiddle to play around with this[1], based on the code in the paper,
using GitHub's account page as an example. You'll notice that the request
succeeds and the fiddle has access to timing information. Inspecting the
requests will show that no CORS headers (like Origin) are being sent based on
which the target could refuse to serve the request. (There's the Referer
header, I suppose, but blocking the request based on that would break pretty
much any third-party link to that page.)

I've tested this against various high-profile sites that include sensitive
information (Google's account page, Twitter's account page, etc.), and it
worked just fine on each of them. Most of them also have a parameter
reflection vector (typically the full request URL is included somewhere on the
page, which should be enough), so I'm pretty sure each of those sites is
potentially vulnerable (though I'm not sure if calling a _site_ vulnerable is
the right approach here).

[1]: [https://jsfiddle.net/39rsn2kg/](https://jsfiddle.net/39rsn2kg/)

~~~
nine_k
Would including a small random timeout before serving the request solve this?
Wouldn't load-balancer activity cancel out the effect anyway?

If anything, adding random garbage to the error page to approximately match
the size of a normal response should make the attack useless.
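
Something like this, presumably (a sketch; names are made up):

    // Pad every response with incompressible random bytes so its size
    // roughly matches a normal response, even after gzip.
    const crypto = require('crypto');
    function padToSize(body, targetBytes) {
      const missing = Math.max(0, targetBytes - Buffer.byteLength(body));
      const garbage = crypto.randomBytes(missing).toString('hex').slice(0, missing);
      return body + '<!-- ' + garbage + ' -->';
    }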

~~~
pfg
Typical (WAN) network conditions are not all that different from a small
random timeout, and these kinds of attacks still work. Adding random noise
just means you'll need more samples, but statistics are going to win
eventually. I guess a combination of random noise and tight request rate
limiting might mitigate this in practice (in exchange for a DoS vector, I
suppose).
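
A quick toy example of "statistics win eventually": a 1 ms real difference
buried in 0-50 ms of uniform jitter is still recoverable by averaging:

    const sample = base => base + Math.random() * 50; // true time + jitter
    const mean = (base, n) => {
      let s = 0;
      for (let i = 0; i < n; i++) s += sample(base);
      return s / n;
    };
    // With enough samples the jitter averages out and the 1 ms signal
    // reappears.
    console.log(mean(11, 1e5) - mean(10, 1e5)); // ~1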

------
Grom_PE
Ah, third-party cookies, a bad idea to begin with, are causing problems
again.

Hopefully this will force browser vendors to disable that misfeature by
default.

~~~
lossolo
You mean those vendors that make money using third party cookies to track
users?

~~~
Grom_PE
Sure. Those who haven't modernized their money-making technologies yet should
stop relying on this security hole. There are alternatives.

~~~
lossolo
If it were that simple, you would not have seen Flash anywhere on the web for
years already. Unfortunately, that's not the case.

~~~
beedogs
I haven't seen Flash on the web in years, because I've uninstalled it. Good
riddance to bad rubbish.

------
j_s
_Surprisingly, we found that not only do our attacks remain possible, we can
even increase the damaging effects of our attacks by abusing new features of
HTTP/2._

Interesting!

~~~
rubiquity
The paper goes on to describe why. The short version is that a big part of
HTTP/2's performance comes from header compression via the HPACK algorithm.
Anything that compresses can be compromised if the attacker can inject data
into the compressed output. This paper does a really nice job of concisely
explaining BREACH and CRIME.
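
The core oracle is easy to demo with plain DEFLATE (what CRIME/BREACH
attacked; HPACK works differently, but the principle is the same; the secret
and guesses here are made up):

    // When the attacker-controlled guess repeats part of the secret,
    // the compressor emits a back-reference and the output shrinks.
    const zlib = require('zlib');
    const secret = 'sessionid=s3cr3tvalue';
    const size = guess =>
      zlib.deflateRawSync('sessionid=' + guess + '&' + secret).length;
    console.log(size('s3cr3tva')); // right prefix: smaller output
    console.log(size('zzzzzzzz')); // wrong prefix: larger output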

~~~
Animats
There's a provision in RFC 7541 [1] that applies here: a way to indicate that
a header field must not be compressed. Security-critical header fields such as
keys and nonces should not be compressed. The RFC notes: "Note that these
criteria for deciding to use a never-indexed literal representation will
evolve over time as new attacks are discovered."

So the protocol has a feature for this. Everything that sends HTTP/2 needs to
update its list of fields not to compress, but that can be done by the sending
end without any change to the receiving end.
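
For reference, the wire format is simple; a minimal sketch of a never-indexed
literal (RFC 7541, section 6.2.3), without Huffman coding:

    // First byte 0x10 = 0001 0000: "literal header field, never
    // indexed", with a new (not table-indexed) name. Name and value are
    // length-prefixed octet strings with the Huffman bit clear
    // (assumes lengths < 127).
    function neverIndexed(name, value) {
      const n = Buffer.from(name), v = Buffer.from(value);
      return Buffer.concat([
        Buffer.from([0x10, n.length]), n,
        Buffer.from([v.length]), v,
      ]);
    }
    console.log(neverIndexed('cookie', 'secret'));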

[1] [https://tools.ietf.org/html/rfc7541#section-7.1.3](https://tools.ietf.org/html/rfc7541#section-7.1.3)

~~~
cperciva
_" Note that these criteria for deciding to use a never-indexed literal
representation will evolve over time as new attacks are discovered."_

Or, in other words: "We know that this is insecure. Maybe someone will fix it
later."

~~~
Animats
True, but the HTTP/2 people were building a wrapper for data, not a security
protocol. At least they put in a feature to suppress compression. Because of
that, this can be fixed from one end; it's not necessary to deploy new
browsers.

------
honkhonkpants
Just the text, for those of you unwilling to open PDFs from security
researchers:

[http://pastebin.com/rG3RD6hw](http://pastebin.com/rG3RD6hw)

~~~
joshka
A PDF probably isn't going to be able to run JavaScript in your browser (the
primary mechanism that this attack utilizes). Pastebin, on the other hand... ;)

~~~
pests
Except some major browsers like Chrome do their entire PDF rendering in
JavaScript.

~~~
JonathonW
Chrome's PDF renderer is a PPAPI plugin written in C++.

Firefox is the one with the Javascript-based PDF renderer.

~~~
pests
Oops yes you are correct.

My original point still applies though.

------
kyled
This sounds very similar to the TIME attack from 2013:
[https://youtu.be/rTIpFfTp3-w](https://youtu.be/rTIpFfTp3-w). I hope they got
credited.

They used a timing side channel to extract data from a response when
compression was enabled and user data was reflected back in the response. The
attack can be carried out inside the browser and abuses how data is segmented
when sent over TCP.

~~~
bluesmoon
The building blocks were described even earlier than that:
[http://www.slideshare.net/bluesmoon/messing-with-javascript-and-the-dom-to-measure-network-characteristics](http://www.slideshare.net/bluesmoon/messing-with-javascript-and-the-dom-to-measure-network-characteristics)

And here: [http://www.lognormal.com/blog/2011/11/14/analysing-network-characteristics/](http://www.lognormal.com/blog/2011/11/14/analysing-network-characteristics/)

~~~
niftich
See also another cross-site timing attack, from 2007 (pdf):
[http://crypto.stanford.edu/~dabo/pubs/papers/webtiming.pdf](http://crypto.stanford.edu/~dabo/pubs/papers/webtiming.pdf)

------
ComodoHacker
A common way to mitigate side-channel attacks is to add random noise to the
side channel. In this case the original side channel is the timing of network
requests, so we could randomize the arrival time of network packets,
simulating an unstable network link and trading speed for security. This could
be done in the browser, at the OS level on the client, or on the server side.
It's strange this method isn't mentioned in the paper.
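
For example (a hypothetical server-side variant in plain Node):

    // Delay every response by a random amount, trading latency for
    // (hoped-for) timing noise.
    const http = require('http');
    http.createServer((req, res) => {
      setTimeout(() => res.end('response body'), Math.random() * 30);
    }).listen(8080);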

I also wonder whether the inherently variable latency of mobile and busy
Wi-Fi networks can by itself prevent this type of attack.

 _Edit: grammar._

~~~
madgar
Actually, timing attacks in practice almost always rely on statistical
methods, because coarse timing data is inherently noisy. So adding random
noise very rarely helps; at best it increases the amount of data required by
the statistical methods the attack already uses.

------
userbinator
_purely by running (malicious) JavaScript inside the victim's browser_

I've said it many times and I'll say it again: keep JS off by default and
enable it only for the few trusted sites that absolutely need it.
Interestingly, the authors mention disabling 3rd-party cookies as a
countermeasure, but not JS.

~~~
jakeogh
Agreed. Surf makes that easy: CTRL-SHIFT-S enables JS for the current page.
It's downright annoying when I use another browser now. Ditching JS makes
everything faster.

[http://git.suckless.org/surf/log/?h=surf-webkit2](http://git.suckless.org/surf/log/?h=surf-webkit2)

I have modded it a bit:
[https://github.com/jakeogh/glide](https://github.com/jakeogh/glide) (not well
tested)

~~~
Scarbutt
Is there a Chrome plugin for that? Activate JS with a key binding and
automatically refresh the page.

~~~
jakeogh
Not sure. There's a FF plugin:
[https://addons.mozilla.org/en-US/firefox/addon/yesscript/?src=api](https://addons.mozilla.org/en-US/firefox/addon/yesscript/?src=api)
and if you disable JS by default it's a quick way to enable it per domain.

------
majortennis
almost perfectly acrostic

