Most of the advanced webappsec security features leak information: CSP, HPKP, HSTS, etc. This was all well known.
This talk is excellent because it pulls the attacks together into real PoCs, real attacks, and great information on how it all works. The attacks in the talk are quality, too, instead of being 'mostly' theoretical.
This wasn't really documented all in one place, or at this level of quality, before. Way to go @bcrypt. High-quality work.
Not all businesses are in a situation where they can just turn on HTTPS across all subdomains. I'm sure agl will make exceptions if you reach out and are a high-traffic site.
So it's fair to call this a novel attack.
I may write a blog post later about this, but here are a few that come to mind (only counting things that have demos or have been observed in the wild):
* css-visited browser history sniffing (fixed several years ago): http://dbaron.org/mozilla/visited-privacy
* HSTS unique-subdomain combination supercookies: http://www.radicalresearch.co.uk/lab/hstssupercookies
* lcamtuf's cache timing attack: http://lcamtuf.coredump.cx/cachetime/
* webrtc local ip leak: https://diafygi.github.io/webrtc-ips/
* panopticlick: https://panopticlick.eff.org/
* evercookie: http://samy.pl/evercookie
That's one of the reasons I like this talk so much. It really cemented these types of issues with real attacks, instead of a loosely described bunch of PoCs. I remember reading somewhere in an RFC (perhaps CSP's) that reporting can cause unwanted information leakage, but I didn't find it in the security considerations with a quick Ctrl+F just now.
 - https://lists.w3.org/Archives/Public/public-webappsec/2015Ju...
 - http://homakov.blogspot.com/2014/01/using-content-security-p...
In case anyone's interested, slides are up at https://zyan.scripts.mit.edu/presentations/toorcon2015.pdf and talk recording at https://www.youtube.com/watch?v=kk2GkZv6Wjs
It is always my favorite section of the RFCs. Along with anywhere that says "The UA [MUST|MAY|...] \w+". Much fun to be had....
If that could scale, we wouldn't need DNS?
Come to think of it, as IP addresses aren't distributed, why not have HSTS be a flag in DNS rather than a header? (Yeah, I know: because DNS isn't secure. But that's the real problem, right there...)
That's probably how you could replace HSTS with DNS.
Not all UAs share the same preload list, but they all borrow from the Chrome preload list. See https://hstspreload.appspot.com/.
It's not a great ecosystem, and I don't foresee it improving, with encryption being a big issue in some countries.
No JS seems to mean no worries, though.
In all seriousness, I appreciate the added context your comments (particularly this last clarifying one) add to the discussion.
But if that's the case then you shouldn't click on anything on HN.
I would argue that it is A-OK to post this link directly on Hacker News.
It just says "look at what your browser can do! imagine if blackhats took advantage of this."
Wasn't looking to castigate the author, just looking for a title with a warning.
Any link you click could possibly contain a hidden attack, perhaps a malicious one.
If this link had been labeled differently, would you not have clicked?
It seems like it would be nice to require some sort of user privilege escalation, like that for accessing location or the webcam, in order to access high-precision timing. This would close off a huge class of timing-related side channels.
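A complementary mitigation, short of a permission prompt, is to coarsen the clock exposed to pages. A minimal sketch of the idea (illustrative only, not any browser's actual implementation):

```javascript
// Round timestamps down to a fixed granularity so that sub-granularity
// differences (e.g. a local HSTS upgrade vs. a network round trip)
// become indistinguishable to the page.
function coarsen(timestampMs, granularityMs) {
  return Math.floor(timestampMs / granularityMs) * granularityMs;
}

console.log(coarsen(3.7, 100));   // -> 0
console.log(coarsen(247.2, 100)); // -> 200
```

With a 100 ms granularity, both a ~2 ms CSP-blocked HSTS redirect and a ~50 ms local network error round down to the same value, which is the point.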
See discussion at https://trac.torproject.org/projects/tor/ticket/17423
You are not allowed to read the content of those resources for security reasons, and the fact that you can detect whether they have loaded seems just as broken (although obviously this usually only has a privacy impact, while reading the contents easily allows impersonating the user or stealing secrets).
This attack abuses that by using HTTP URLs and causing HTTPS to trigger onerror via a content security policy that disallows HTTPS, which makes it possible to measure the timing of the HTTP->HTTPS redirect; the redirect is near-instantaneous if the site has already been visited, thanks to HSTS caching.
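A rough sketch of that probe might look like the following. This is illustrative only (names, threshold, and the favicon path are all assumptions, not the author's actual code); the attacking page would serve a CSP that only allows http: image loads, so an HSTS-upgraded https: request is blocked and errors almost instantly, while a genuinely unvisited host requires a network round trip first.

```javascript
// Pure classifier: a very fast onerror suggests the HTTP->HTTPS upgrade
// happened internally (an HSTS entry exists), i.e. the site has probably
// been visited before.
function classifyTiming(elapsedMs, thresholdMs) {
  return elapsedMs < thresholdMs ? 'probably visited' : 'probably not visited';
}

// Browser-only probe, guarded so this sketch is inert under Node:
function probe(host, thresholdMs, callback) {
  if (typeof Image === 'undefined') return; // not in a browser
  const img = new Image();
  const start = performance.now();
  // With a CSP like "img-src http:", the upgraded https: load is blocked
  // and fires onerror; we time how long that takes.
  img.onerror = function () {
    callback(classifyTiming(performance.now() - start, thresholdMs));
  };
  img.src = 'http://' + host + '/favicon.ico?' + Math.random();
}

console.log(classifyTiming(2, 100));   // -> probably visited
console.log(classifyTiming(250, 100)); // -> probably not visited
```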
So I think two changes are needed:
1. The browser should never fire onerror on non-CORS-allowed third-party resources; instead it should fire onload even if an error happens.
2. The onload event should ideally be triggered a fixed interval (say, 1 second) after the resource was requested, regardless of whether it has loaded or not. If that breaks too many websites, the timing granularity should be rounded to the greatest period possible.
There is also the problem of lots of websites having CORS policies that allow *, so browsers should probably also be fixed to not honor wildcard CORS policies, at least for this purpose.
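The second proposed change could be sketched roughly as follows (hypothetical, implemented in no browser): always report success, and only at a fixed delay after the request started, so the page-visible timing of a cross-origin load carries no signal.

```javascript
// Compute how much longer to wait before firing the (always-success)
// onload event, so that every cross-origin load appears to take exactly
// fixedDelayMs from the page's point of view, whether it errored,
// loaded quickly, or was served from an HSTS-upgraded redirect.
function maskedLoadDelay(startedAtMs, nowMs, fixedDelayMs) {
  return Math.max(0, fixedDelayMs - (nowMs - startedAtMs));
}

console.log(maskedLoadDelay(0, 5, 1000));    // -> 995 (wait out the window)
console.log(maskedLoadDelay(0, 1200, 1000)); // -> 0 (already past the window)
```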
Alternatively, it might be possible to load those resources normally, but effectively consider them to be a "3rd party origin from 1st party origin" that is different from "3rd party origin" and from other 1st party origins for all caching purposes.
However, this seems risky because if the 3rd party site manages to somehow fingerprint the user (e.g. because the IP address is whitelisted) then that information is potentially leaked to the 1st party site. For example, it would be possible to detect whether your IP is whitelisted by a domain that otherwise returns errors to everyone.
Maybe the root problem lies in HSTS and browsers using HTTP by default (because that is why we need HSTS)?
But I don't think this solves the whole problem: the time for "redirect + load HTTPS image" vs. "HSTS redirect + load HTTPS image" is probably a large enough difference to exploit in a timing attack, in which case you just need to time the onload of an image (or similar resource).
Maybe too obvious to point out, but as far as I can tell, it looks for entries in the browser cache rather than looking at the history per se. If you empty the browser cache when you close the browser (gotta watch those evercookies!), that's also not an issue.
"The first change is that Gecko will lie to web applications under certain circumstances. In particular, getComputedStyle() and similar functions such as element.querySelector() always return values indicating that a user has never visited any of the links on a page."
With timing of HTTP->HTTPS redirects, maybe the issue is not that the response can be timed but that HTTP exists in the first place? There are other similar timing attacks that can easily be used to identify whether a user is logged in to a specific website.
Edit: retried in Firefox 41.0.2; it's giving accurate results.
I didn't set up analytics to figure out how accurate results are for the average person; having manually checked with a few people's browsers, I'd say the accuracy rate is ~75%.
Not a single positive, even though I have visited a number of the sites on the list. There are even tabs open to Reddit right now.
Additionally, some alternative browsers based on Chromium also gave inaccurate results when used on iPad and Android.
This is strictly based on my experience with my own machines. I'm a security researcher, so maybe my machines exhibit more unconventional settings/environments that are atypical of the average user.
Will try again later on a different machine. This is interesting stuff; good work.
Note: I'm in China, so sometimes HTTPS actually fails to establish.
(Incidentally, the author of this attack is also one of the authors of HTTPS Everywhere.)
See http://dbaron.org/mozilla/visited-privacy and https://blog.mozilla.org/security/2010/03/31/plugging-the-cs...
Since this is Hacker News, my efforts to educate the masses on weaponized browser attacks like this would be futile, and I am sure many have configured their browser in some rudimentary way to ward off the assholes.
There is a certain sigh of relief and 'ha, catch me if you can' that washes over me when I see PoCs like this rendered obsolete and inert.