Timing attack against HSTS to sniff browser history in Chrome and Firefox (mit.edu)
257 points by bretthopper 694 days ago | 94 comments



Mitigation: get your site added to the HSTS preload list.

Most of the advanced webappsec security features (CSP, HPKP, HSTS, etc.) leak information, and this was all 100% known.

This talk is excellent because it puts the attacks together into real PoCs and gives great information on how it all works. The attacks in the talk are quality too, rather than 'mostly' theoretical.

None of this was previously documented in one place, or at this level of quality. Way to go @bcrypt. High-quality work.


Can anyone who runs a high-traffic site and isn't on the HSTS preload list explain why not? I am genuinely interested.


There's a lot of pressure to turn the includeSubdomains flag on, with the new Google Submission Process (submit a form to hstspreload.appspot.com).

Not all businesses are in a situation where they can just turn on HTTPS across all subdomains. I'm sure agl will make exceptions if you reach out and are a high traffic site.


CSP, HSTS, and HPKP were all known to be exploitable for "supercookie" purposes, but not history sniffing like this.

So it's fair to call this a novel attack.


No. Leaks were known far beyond the "supercookie", but attacks were not as high quality as these.


Could you link some older sources discussing some of these leaks? I'm having trouble finding any. Not doubting you, since you're probably right; would just be interested in reading them.


[edit - just realized this may not answer the question you asked, which is leaks that take advantage of HSTS/HPKP/CSP. hopefully still useful info.]

I may write a blog post later about this, but here are a few that come to mind (only counting things that have demos or have been observed in the wild):

* CSS :visited browser history sniffing (fixed several years ago): http://dbaron.org/mozilla/visited-privacy
* HSTS unique-subdomain-combination supercookies: http://www.radicalresearch.co.uk/lab/hstssupercookies
* lcamtuf's cache timing attack: http://lcamtuf.coredump.cx/cachetime/
* WebRTC local IP leak: https://diafygi.github.io/webrtc-ips/
* Panopticlick: https://panopticlick.eff.org/
* evercookie: http://samy.pl/evercookie


AFAIK most of that info is in HN and twitter discussions between webappsec people. Here's a link[0] to a CSP leak that was briefly discussed on the webappsec mailing list. I also know Homakov wrote one[1].

That's one of the reasons I like this talk so much. It really cemented these types of issues with real attacks, instead of a loosely described bunch of PoCs. I remember reading somewhere in an RFC (perhaps CSP's) that reporting can cause unwanted information leakage, but I didn't find it in the security considerations with a quick control+F just now.

[0] - https://lists.w3.org/Archives/Public/public-webappsec/2015Ju...

[1] - http://homakov.blogspot.com/2014/01/using-content-security-p...


In addition to Egor's stuff, I'd recommend just reading the "Privacy and Security Considerations" sections of various RFCs and W3C specs. Lots of theoretical attacks in there that people simply haven't built demos for!

In case anyone's interested, slides are up at https://zyan.scripts.mit.edu/presentations/toorcon2015.pdf and talk recording at https://www.youtube.com/watch?v=kk2GkZv6Wjs


Like you said, the security considerations are mostly theoretical. Turning them into a real attack might seem basic, but it requires a fair bit of work.

It is always my favorite section of the RFCs. Along with anywhere that says "The UA [MUST|MAY|...] \w+". Much fun to be had....


Ah, I feel silly now. I remember reading Egor's blog post on that last year.

Thanks.


> Mitigation, get your site added to the HSTS Preload list.

If that could scale, we wouldn't need DNS?

Come to think of it, as IP addresses aren't distributed, why not make HSTS a flag in DNS rather than a header? (Yeah, I know, because DNS isn't secure - but that's the real problem, right there...)


Or, if the site you're visiting is DNSSEC enabled, you could just look for a TLSA record. If the TLSA record is present, don't allow unencrypted connectivity.

That's probably how you could replace HSTS with DNS.


Even without DNSSEC, HSTS could pretty much work as advertised.


So what does it mean if now UA vendors have 1 million hosts added to HSTS pre-load list? Are they all created equal? Do they share the same preload list? Is there a single form to submit to all UA vendors? What if you have internal hosts that you still want the same mitigation in place? What is UA vendors' SLA on updating preload list? Preload suffers the same problem as pinning, correct me if I am wrong.


Preload does not suffer the same issue as pinning via a header. All sites on the list would just look like you probably visited them, which means ambiguity.

Not all UAs share the same preload list, but they all borrow from the Chrome preload list. See https://hstspreload.appspot.com/.

It's not a great ecosystem, and I don't foresee it improving, with encryption being a big issue in some countries.


Probably a good idea to edit the title to indicate that this is an example attack site as well, not my favorite thing in general to land on without warning.

No JS seems to mean no worries, though.


creator here. sorry, i did not expect to be hn'ed. not running js is usually a good idea though!


The results never leave your browser, though, so there's not much to worry about.


The JS code sent to kentonv's browser might be safe, but you can't really meaningfully comment on hobs' situation, since you don't have a copy of the JS code that was sent to hobs' browser.


Actually I can. The author of this code is a friend of mine who works with the EFF, and the idea that she's secretly sending malicious scripts to some people but not others from her MIT web hosting space under her own name is simply not plausible.


Unless she's already compromised your browser and the two previous comments are in fact placed by her!

In all seriousness, I appreciate the added context your comments (particularly this last clarifying one) add to the discussion.


> The JS code sent to kentonv's browser might be safe, but you can't really meaningfully comment on hobs' situation [...]

But if that's the case then you shouldn't click on anything on HN.

I would argue that it is A-OK to post this link directly on hackernews.

It just says "look at what your browser can do! imagine if blackhats took advantage of this."


In any case, I didn't minify or remove comments, so you can just view-source:http://zyan.scripts.mit.edu/sniffly/index.js


That is what I thought about my browser history.


While I probably would have linked HN to the slides instead of the site, I do think calling it an attack site is paranoid and disingenuous.


Does the term "example attack site" not mollify that a bit? I tried to be clear that it's not a real attack site (or my comment would be much more alarming) but something that executed malicious behavior, even if it wasn't from a malicious actor.

Wasn't looking to castigate the author, just looking for a title with a warning.


Honest question: what is your concern?

Any link you click could possibly contain a hidden attack, perhaps a malicious one.

If this link had been labeled differently, would you not have clicked?


The site's attack seems to fail in Tor Browser. It returns a list of sites that have nothing to do with my browser history, probably because Tor Browser reduces the timing precision.[1]

It seems like it would be nice to require some sort of user permission grant, like accessing location or the webcam, before allowing high-precision timing. This would close off a huge class of timing-related side channels.

[1] https://trac.torproject.org/projects/tor/ticket/1517


Tor Browser restricts JS timing precision to 100 milliseconds, which makes this way harder. HTTPS Everywhere also creates a lot of false positives, although those can be subtracted out.

See discussion at https://trac.torproject.org/projects/tor/ticket/17423
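A quick sketch of why a coarse clock defeats the attack (the 100 ms granularity is from the comment above; the sample timings are made-up illustrations, not measured values):

```javascript
// Round a high-resolution timestamp down to a fixed granularity,
// roughly what Tor Browser's 100 ms clamp does to JS timers.
function coarsen(ms, granularityMs = 100) {
  return Math.floor(ms / granularityMs) * granularityMs;
}

// A ~1 ms cached-HSTS redirect and a ~30 ms network redirect land
// in the same bucket, so the attacker gets no signal from either.
coarsen(1);   // 0
coarsen(30);  // 0
coarsen(250); // 200
```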


I hit the page from Chrome 45.0.2454.101 and it literally did not get a single site that I regularly visit. It did make a hit on Reddit, I guess, but I have only visited the site two or three times, and you could probably say that about 2/3 of the population.


interesting, are you using any browser addons? please file a bug at https://github.com/diracdeltas/sniffly, thanks


In Chrome I am running ABP (which can't be very uncommon), Evernote Web Clipper, REST Console, XPath Helper, and a bunch of Google's own shims. I will file a bug later today if I can repro the results.


Same here. Most of the hits are sites I visited briefly, a long time ago.


Same here. The only addon I use is uBlock Origin. I also had to activate JavaScript on the page, which I normally don't.


Same on Firefox 41.0.2. It got 1 or 2 correct hits; a lot of correct ones in "sites you probably haven't visited" but then again that's not the hard part?


> you could probably say that about 2/3 of the population

Hardly.


The root problem seems to be that the web security model is broken: you can get both onload AND onerror callbacks from img elements pointing at 3rd-party sources, regardless of CORS.

You are not allowed to read the content of those resources for security reasons, and the fact that you can detect whether they have been loaded seems just as broken (although obviously this usually only has privacy impact, while reading the contents easily allows impersonating the user or stealing secrets).

This attack abuses this by using HTTP URLs and a Content Security Policy that disallows HTTPS, causing the HTTP->HTTPS redirect to trigger onerror. That lets the page measure the timing of the redirect, which is near-instantaneous if the site has already been visited, thanks to HSTS caching.
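A minimal sketch of what such a probe might look like. This is my own illustration, not Sniffly's actual code: the 10 ms threshold, the favicon path, and the function names are all assumptions.

```javascript
// Pure helper: an onerror that fires faster than any plausible
// network round trip suggests the browser rewrote http:// to
// https:// locally from a cached HSTS entry (threshold assumed).
function looksVisited(elapsedMs, thresholdMs = 10) {
  return elapsedMs < thresholdMs;
}

// Browser-side usage (illustrative only). The attacking page would
// need a CSP along the lines of "img-src http://*" so the HTTPS
// redirect target is blocked and onerror fires immediately.
function probe(domain, report) {
  const img = new Image();
  const t0 = performance.now();
  img.onerror = function () {
    report(domain, looksVisited(performance.now() - t0));
  };
  img.src = 'http://' + domain + '/favicon.ico'; // HTTP on purpose
}
```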

So I think two changes are needed:

1. The browser should never fire onerror on non-CORS-allowed 3rd party resources, but instead fire onload even if an error happens

2. The onload event should ideally be triggered 1 second after the resource has been seen regardless of whether it has been loaded or not. If that breaks too many websites, the timing granularity should be rounded to the greatest period possible.

There is also the problem of lots of websites having CORS policies that allow *, so browsers should probably also stop honoring wildcard CORS policies, at least for this purpose.

Alternatively, it might be possible to load those resources normally, but effectively consider them to be a "3rd party origin from 1st party origin" that is different from "3rd party origin" and from other 1st party origins for all caching purposes.

However, this seems risky because if the 3rd party site manages to somehow fingerprint the user (e.g. because the IP address is whitelisted) then that information is potentially leaked to the 1st party site. For example, it would be possible to detect whether your IP is whitelisted by a domain that otherwise returns errors to everyone.


> 2. The onload event should ideally be triggered 1 second after the resource has been seen regardless of whether it has been loaded or not. If that breaks too many websites, the timing granularity should be rounded to the greatest period possible.

That would be horrible. Just to take my own site as an example: I embed thumbnails that expand to the full image when clicked. The image is first loaded through JavaScript, and my code waits for onload so that it knows the expected dimensions; then the image in the DOM gets the new URL and is transitioned to the new size. Adding a delay here would make everything much slower and less pleasant to use.


You can still insert an external image into a page and poll its size, either directly or by detecting layout changes.

Maybe the root problem lies in HSTS and browsers using HTTP by default (because that is why we need HSTS)?


Indeed, that should be disallowed as well, e.g. by giving such external images and videos an intrinsic size of 1/3 the browser window size regardless of their actual size.

And disallow (non-CORS-allowed) cross-domain JavaScript and CSS.


Changing the intrinsic size of an image sounds like a "break the web" sort of change. I'm imagining all the super-blurry photos/gifs/etc. that would result, and not liking the thought.


It would be easier to disallow http-only CSP though.


This sounds like a good idea. I do have to wonder, if you use CSP and HTTPS for your own site (because what site deploys CSP without deploying HTTPS first?), what sort of HTTP-only directives would even be legal (i.e. could successfully load content) under mixed-content rules.

But I don't think this solves the whole problem, as the time for "redirect + load https image" vs. "HSTS redirect + load https image" is probably a significant enough difference to timing-attack. In that case you just need to time the onload of an image (or similar resource).


CSP can be useful on HTTP-only sites to prevent and log XSS attacks (e.g. somebody trying to insert a script into a page via a vulnerable server script). That is CSP's primary purpose, IIRC.


Firing onload instead of onerror for a CORS failure seems heavy-handed: the AJAX spec says a CORS failure should be treated as a network failure, and while that can cause headaches, it seems even worse to say something loaded when it in fact didn't. I don't really see a sane behavioral choice here. TBH I'm unsure there's a sane recommendation that solves the privacy leak without causing significant pain for the internet; knowing when something loaded or failed can be extremely useful. And besides, if you block onload/onerror handlers, you can possibly just change the PoC to use CSP error reporting and try to timing-attack that instead.


Having disabled NoScript, Privacy Badger, HTTPS Everywhere, and AdBlock Pro, it can tell I've been to npmjs.com. I'm not immediately worried.

Maybe too obvious to point out, but as far as I can tell, it looks for entries in the browser cache rather than the history per se. If you empty the cache when you close the browser (gotta watch those evercookies!), that's also not an issue.


sort of. the hsts cache gets cleared when a private browsing session is closed or when you clear it manually in browser settings. it takes a long time (up to a year) to expire on its own.


This approach is rather interesting. But I'm wondering whether a similar attack could be made by placing links on a web page and using the CSS :visited selector to change the style of visited web pages. Couldn't you then check which links have that formatting and which don't via JS?


This used to be possible, but browsers put into place various mitigations: restricting which properties a :visited selector can affect, always computing both the visited and non-visited style to avoid timing attacks, etc. http://dbaron.org/mozilla/visited-privacy has a writeup describing the issues and the solutions that were adopted.
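For reference, the now-patched sniff was roughly of this shape. This is a hypothetical sketch: the style value and function names are mine, not from any historical exploit.

```javascript
// Pure comparison at the heart of the old technique: does the
// computed color match the one our stylesheet gave a:visited?
function matchesVisitedStyle(computedColor, visitedColor) {
  return computedColor === visitedColor;
}

// Browser-only part (illustrative). With a rule such as
//   a:visited { color: rgb(1, 2, 3); }
// pre-2010 browsers reported the real computed color here; after
// the fix, getComputedStyle always returns the unvisited style.
function wasVisited(anchorElement) {
  return matchesVisitedStyle(
    getComputedStyle(anchorElement).color,
    'rgb(1, 2, 3)'
  );
}
```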


You used to be able to do that, but it was fixed in 2010: https://blog.mozilla.org/security/2010/03/31/plugging-the-cs...


Yes, this attack was demoed to work several years ago but has been patched as far as I know:

https://developer.mozilla.org/en-US/docs/Web/CSS/Privacy_and...

"The first change is that Gecko will lie to web applications under certain circumstances. In particular, getComputedStyle() and similar functions such as element.querySelector() always return values indicating that a user has never visited any of the links on a page."


One nice thing is that it is quite inaccurate; I've visited a large number of the sites it tells me I haven't.


It's accurate depending on what you're looking for. If it says you didn't visit a site, you may or may not have; but if it says you did visit a site, you did for sure. In the case of a negative the results are meaningless, but in the case of a positive they are accurate. Similar to a Bloom filter.


I saw several sites pop up on the list of sites I visited that I'm quite sure I've never explicitly been to. It's possible these were all loaded via ads or some such thing, but it seems unlikely.


I've just checked my history and the sites I thought I have not visited do not show (2/14). Maybe 85%?


Yeah, I think the accuracy so far has been around 75-80% among my friends (once HTTPS Everywhere is disabled). PS: you can check whether sites are in your HSTS cache in Chrome at chrome://net-internals#hsts


what browser and extensions? feel free to file a bug at github.com/diracdeltas/sniffly


I think many popular sites will be false positives, because some browsers ship with a list of HSTS domains, so visiting them over HTTP first wouldn't happen. https://code.google.com/p/chromium/codesearch#chromium/src/n...


Preloaded HSTS entries are filtered out https://github.com/diracdeltas/sniffly/blob/c645af76cb53a21a...


Strange. I disabled "scripts from 3rd party sites" and "frames from 3rd party sites" by default in uBlock Origin, and this site successfully loads 329 external JS resources from other sites (none manually enabled). Does anyone see the same? Is this a bug in uBlock Origin or expected behaviour?


It loads images, not JS, if I understand correctly.


Ah, thanks! That's why these external resources are shown but not blocked. Interesting attack vector!


A slide deck discussing the HSTS+CSP attack (this one) and also an HPKP attack: https://zyan.scripts.mit.edu/presentations/toorcon2015.pdf


This was wildly inaccurate for me. I think it was wrong far more than it was right.


The core issue is the same one that leads to cross-domain search timing attacks [1] (which can be prevented with CSRF tokens).

With timing HTTP->HTTPS redirections, maybe the issue is not that the response can be timed, but that HTTP exists in the first place? There are other similar timing attacks that can easily be used to identify whether a user is logged in to a specific website [2].

[1] https://news.ycombinator.com/item?id=10211306

[2] http://crypto.stanford.edu/~dabo/papers/webtiming.pdf


Completely inaccurate for me.


Hm, I tried this on my Android phone first and it didn't seem accurate - it might have found 1 or 2 sites I actually visited, plus a few I wasn't sure of, and the results would randomly change between reloads. It seems to work much better in desktop Chrome (Win 7). I wonder why there's a discrepancy.


I'm using Chrome 46.0.2490.80 without HTTPS Everywhere, and it couldn't find any site I visited (it tells me I visited none). I tried visiting a few and restarting the test; it didn't find anything either.

Edit: retried in Firefox 41.0.2; it's giving accurate results.


How much of this is accurate, and how much of this is simply the top XXX sites that people visit?


I pulled the list of domains from the Alexa Top 1M, plus some domains that my friends run. But I'm not biasing results toward showing up as visited by popularity or anything like that.

I didn't set up analytics to figure out how accurate results are for the average person; having manually checked with a few people's browsers, I'd say the accuracy rate is ~75%.


At a glance, looks about 75% accurate for me. But what really freaked me out is that it correctly flagged a financial institution (not a major one) where I have an account yet feel pretty sure I haven't visited the website in months.


Yeah, part of the nice/scary thing about HSTS is that it is a highly persistent cache. The browser is reluctant to clear it because it's a security feature. So HSTS pins can be stored for up to a year in FF/Chrome, even if you are deleting cookies regularly.


Thankfully, this particular attack does not work with µMatrix running. I do see a bunch of image requests to all these domains, but it shows that I haven't visited any (except the ones blocked completely by hosts files, amusingly enough).


adblocked domains are indistinguishable from hsts blocked domains in terms of timing. so they show up as false positives. a clever attacker could subtract them out though.
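That subtraction step could be as simple as filtering the raw positives against a known blocklist. A sketch under my own assumptions (the domain names are placeholders, and a real attacker would use something like a published filter list):

```javascript
// Drop domains an ad blocker would have cancelled anyway: those fail
// fast for every user, so they carry no HSTS timing signal.
function subtractBlocked(positives, blocklist) {
  const blocked = new Set(blocklist);
  return positives.filter(domain => !blocked.has(domain));
}

subtractBlocked(
  ['example.com', 'doubleclick.net'],
  ['doubleclick.net']
); // ['example.com']
```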


I was skeptical at first; then I visited some of the sites it said I had not visited, and a second round caught them, eep! A little machine learning could probably make this much more accurate; the PoC is clear.


Are timing attacks over the internet affected by buffer bloat?

Not a single positive even though I have visited a number of the sites on the list. There's even tabs open for Reddit right now.


this worked on an older version of chrome for me, but not the recent ones -- the sites listed were incorrect.

additionally, some alternative browsers based on chromium also gave inaccurate results when used on ipad and android.

this is strictly based on my experience with my own machines. i'm a security researcher, so maybe my machines have more unconventional settings/environments that are atypical of the avg user.

will try again later on a different machine. this is interesting stuff, good work.


Security features (like any software features in general) are additional vulnerability vectors, prone to implementation flaws.


Tried with Firefox for Android 41 and it is quite accurate.

Note: I'm in China, so sometimes HTTPS actually fails to establish.


Doesn't seem to be working in Safari on Mac. I'm getting over 650 CSP errors in my console.


I'm using HTTPS Everywhere, and it couldn't get a single website correct.


Yes, HTTPS Everywhere blocks this particular attack, hence why the top bar says to disable it.

(Incidentally, the author of this attack is also one of the authors of HTTPS Everywhere.)


Does not seem to work without Javascript enabled.


A different version of this would have been to mess around with the :visited CSS selector, since it's assuming a list of domains that you probably visit.



Yes. If you're using a browser version that's been updated since 2010-2011, this will no longer work.


Yes, and if you're running an older browser, then the webserver can provide arbitrary code that just reads the history directly. :)


Wow. Our corporate proxy isn't going to like that many requests that quickly from one box. I wonder if they can add a landing page.


Any picture/script heavy main page (news etc) probably produces the same number of requests


Yan has hit the HN homepage before, so hopefully it can weather this storm too.


I believe voltagex_ meant outgoing requests from his/her location.


Sites you probably haven't visited: "myspace.com" < pffft. That's too easy.


Luckily I have taken entire classes of attacks against the browser (this one included) out of play using simple, baseline configs that harden the browser. I don't know the stats on how many people harden their browser in some way, but I suspect large swathes of web users are at risk here.

Since this is Hacker News, my efforts to educate the masses on weaponized browser attacks like this would be futile; I am sure many here have already configured their browser in some rudimentary way to ward off the assholes.

There is a certain sigh of relief, a 'ha, catch me if you can' feeling, when I see PoCs like this rendered obsolete and inert.



