

Detecting login state for almost any website on the internet - zemnmez
http://words.zemn.me/csp

======
mike-cardwell
There is a similar flaw that doesn't rely on CSP that I wrote about a few
years ago. It seems that this is a fundamental problem with the web. I don't
think it will ever be fixed.

[https://grepular.com/Abusing_HTTP_Status_Codes_to_Expose_Pri...](https://grepular.com/Abusing_HTTP_Status_Codes_to_Expose_Private_Information)
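
A rough sketch of that status-code technique (not the linked post's exact code; the probe URL is a placeholder, not a real endpoint): a cross-origin URL that answers 200 only for logged-in users is loaded as a `<script>`, and onload vs. onerror leaks which case occurred.

```javascript
// Interpreting the outcome: a successful load implies a 2xx status,
// which for these endpoints implies the victim is logged in.
function interpretStatus(status) {
  return status >= 200 && status < 300;
}

// Browser-only part (defined, not run, outside a browser). The URL is
// hypothetical; any endpoint whose status depends on login state works.
function probeLoginState(url, onResult) {
  const s = document.createElement('script');
  s.src = url;                       // the browser attaches the victim's cookies
  s.onload = () => onResult(true);   // resource loaded => logged in
  s.onerror = () => onResult(false); // 4xx/5xx => logged out
  document.head.appendChild(s);
}
```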

~~~
pbhjpbhj
Doesn't detect my logged-in state at FB, thought it might be noscript but even
enabling globally still doesn't allow the page to discover my FB login.

FF30 on Kubuntu.

~~~
mike-cardwell
Did you read the first paragraph on the page?

"Update: Some of the example tests on this page no longer work (I don't have a
Facebook account anymore and the Twitter URL I used went offline). The
technique it's self is still alive and relevant though."

~~~
pbhjpbhj
Yes, though I hadn't fully processed it. It says that he doesn't have an FB
account - at the time of the update - so he can't confirm whether it still
works, only that some examples may not.

Admittedly it's not the most insightful post ever, but it gives a data point:
a reportedly untested example did in fact fail. If you were looking for an FB
login verification method then ...

------
retroencabulato
The linked W3C mailing list[1] provides some interesting discussion, as well
as a blog post from Jan outlining this attack[2].

The conclusion: the security benefits of CSP outweigh the cons.

[1] [http://lists.w3.org/Archives/Public/public-webappsec/2014Feb...](http://lists.w3.org/Archives/Public/public-webappsec/2014Feb/0036.html)

[2] [http://homakov.blogspot.de/2014/01/using-content-security-po...](http://homakov.blogspot.de/2014/01/using-content-security-policy-for-evil.html)

~~~
gpvos
Thanks. Especially the homakov post is enlightening, as it explains that
removing the report-uri feature is not even enough to make this exploit
impossible, as the onload/onerror events also signal success or failure.

Is there anything good that report-uri is used for that is more important than
removing this exploit possibility?
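
For reference, the core of the attack from the homakov post can be sketched roughly like this (URLs illustrative, and pre-CSP2 source matching simplified to an exact-URL comparison):

```javascript
// The attacking page sets a policy that whitelists only the logged-in
// URL. Logged-out users get redirected to a login page that the policy
// does not allow, so the block itself is the signal - via report-uri
// or, as noted above, via the image's onload/onerror.
//
// In the attacking page (browser-only):
//   <meta http-equiv="Content-Security-Policy"
//         content="img-src https://www.facebook.com/me">
//   <img src="https://www.facebook.com/me">
function violatesPolicy(allowedUrl, finalUrl) {
  return finalUrl !== allowedUrl; // mismatch => blocked => violation
}
```

So `violatesPolicy('https://www.facebook.com/me', 'https://www.facebook.com/login.php')` is true for a logged-out victim, and false when `/me` loads directly.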

~~~
pfraze
Report URIs are pretty important for deploying CSP on an existing site.
Without it, you'd have to risk breaking the experience for a lot of users
(because it's hard to nail the policy on the first try) and you'd never get
any logs explaining what was blocked.
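
A sketch of that staged rollout: ship the policy in Report-Only mode first, so nothing is blocked but violations are POSTed to the report endpoint for tuning. The header name is the real one; the policy value and report path are illustrative.

```javascript
// Build the non-enforcing variant of the CSP header. Once the reports
// stop showing false positives, the same value moves to the enforcing
// Content-Security-Policy header.
function reportOnlyHeader(policy, reportPath) {
  return {
    name: 'Content-Security-Policy-Report-Only',
    value: `${policy}; report-uri ${reportPath}`,
  };
}

const h = reportOnlyHeader("default-src 'self'", '/csp-reports');
// h.value: "default-src 'self'; report-uri /csp-reports"
```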

------
jacquesm
This is about as serious as the CSS exploit that allowed detection of which
websites you'd visited. In a way it is more serious because it allows the
attacker to detect a relationship on a higher level (logged in versus merely
visited).

[http://browserspy.dk/css-exploit.php](http://browserspy.dk/css-exploit.php)

~~~
f-
Such attacks are interesting, but the CSP part is a red herring to some
extent; we had this problem without CSP and the issue is mostly that nobody
has any good ideas on how to get rid of this class of attacks without breaking
the web:

[http://lists.w3.org/Archives/Public/public-webappsec/2014Feb...](http://lists.w3.org/Archives/Public/public-webappsec/2014Feb/0043.html)

~~~
ZoFreX
To what extent is CSP a red herring here? Is there any part of this that would
be mitigated if we didn't have it, or can you do everything here without it?

~~~
jlogsdon
It's all possible with HTTP statuses according to a link[1] posted above.

[1]
[https://grepular.com/Abusing_HTTP_Status_Codes_to_Expose_Pri...](https://grepular.com/Abusing_HTTP_Status_Codes_to_Expose_Private_Information)

~~~
ZoFreX
What legitimate use do those onerror / onload callbacks have... that seems
like the kind of thing that should be restricted to same origin!

~~~
f-
Similarly to CSP, onload and onerror are not the only ways to pull it off. The
effect of successfully or unsuccessfully loading images or scripts can be
usually inferred without that; for example, images have dimensions that, even
if you take away the ability to read them directly, can be inferred from the
changes to the layout of the nearby elements.
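
That layout side channel can be sketched roughly as follows; the element handling and the probe URL are hypothetical, and only the inference step is shown as a real function.

```javascript
// If the cross-origin image rendered, the sibling element below it is
// pushed down; comparing its position before and after the load attempt
// reveals the outcome without touching onload/onerror.
function inferLoadFromLayout(topBefore, topAfter) {
  return topAfter > topBefore; // sibling moved => image loaded
}

// Browser-only usage sketch:
//   const before = sibling.offsetTop;
//   img.src = 'https://target.example/private-image';
//   setTimeout(() => report(inferLoadFromLayout(before, sibling.offsetTop)), 1000);
```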

------
molf
It seems like this class of attacks is based on embedding content that is not
designed to be embedded in the first place.

Wouldn't it in theory be possible to require browsers to send an "Embedded-On"
HTTP header that contains the domain of the embedding page? Then it's trivial
for a website (Facebook/Google in this case) to block all requests from
unrecognised domains with HTTP 403 Forbidden – regardless of your login state.
It only requires that website owners know which domains they themselves use.

~~~
zemnmez
This isn't the case. Embedding a non-image as an image is only used to trip
the CSP. If the X-Frame-Options header is not set to DENY or SAMEORIGIN, you
can equally iframe the contents. CSP restrictions are capable of controlling
almost any kind of content on a page. Here I used images for my own
convenience because it makes the requests simpler; you can write similar
rules for frames, scripts and other content and come up with similar
exploits using those rules.

When it comes to Embedded-On, that is what the Origin header that is used in
most modern browsers is for. It supersedes the privacy impinging Referer
header. What you are describing is essentially 'hotlink' protection for
generic resources.
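
A minimal sketch of that generic hotlink protection, assuming the server checks the Origin request header against an allow-list (the list contents are illustrative):

```javascript
// Requests whose Origin is present but not recognised get a 403 before
// any login-dependent behaviour can leak. Requests without an Origin
// header (e.g. direct navigations) are allowed through.
const ALLOWED_ORIGINS = new Set(['https://www.example.com']);

function checkEmbed(originHeader) {
  if (originHeader && !ALLOWED_ORIGINS.has(originHeader)) {
    return 403; // unrecognised embedder: block regardless of login state
  }
  return 200;
}
```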

~~~
kretor
What does "This isn't the case." refer to?

~~~
zemnmez
> It seems like this class of attacks is based on embedding content that is
> not designed to be embedded in the first place.

It's not the case that this type of attack hinges on the use of images.

------
kretor
Couldn't Google, Facebook etc. just set a CSP that forbids embedding certain
pages of theirs? E.g. Facebook forbidding embedding their facebook.com/me.

Edited to add:

There are two standards to disallow embedding, but neither applies to <img>
tags:

The X-Frame-Options header [1] applies to frame, iframe and object elements.

CSP's frame-ancestors directive [2] applies to "frame, iframe, object, embed
or applet tag, or equivalent functionality in non-HTML resources".

There may still be an option for blocking embedding as an image though: when
trying to load a URL, AFAIK a browser sends a list of accepted content types.
This should make it possible for a website to detect when a browser is trying
to load a webpage as an image, and then simply respond with e.g. an error
code.

[1] [http://tools.ietf.org/html/draft-ietf-websec-x-frame-options...](http://tools.ietf.org/html/draft-ietf-websec-x-frame-options-00)

[2] [https://w3c.github.io/webappsec/specs/content-security-polic...](https://w3c.github.io/webappsec/specs/content-security-policy/#directive-frame-ancestors)

~~~
pfraze
I'd rather they create a CSP directive like 'embed-ancestors' to restrict
embedding. That seems like less overhead than a request header that's attached
to all <img> requests.

~~~
kretor
The request header is already sent by all browsers. Chrome's developer tools
show me that it sends

    accept: image/webp,*/*;q=0.8

when requesting images. I'm with you that a special CSP directive would be
nice.

------
ctz
Another reason why "web security" is the most immensely depressing topic in
modern computing.

It turns out you can't actually achieve a simple, secure system by repeatedly
rushing to fix problems as they arise.

~~~
jebblue
Actually, if we could all agree to go back to plain HTML, no CSS and
especially no JavaScript, and use solid client applications for client work,
the web would be infinitely more secure. That isn't going to happen though,
unfortunately.

~~~
x1798DE
Frankly I imagine at least I'd find it more pleasant. I'm actually really
tired of the bundling of content and presentation. It'd be nice if HTML were
more of a set of formatting hints, and we could theme our browsing experience
for all sites. It sucks when a site has good content but is poorly designed.
There's no particular reason those two functions need to be tied together, but
people have distorted text formatting into web design in such a way that it
breaks reflow and you're forced to view the content the way they've decided it
should be viewed. Alas.

~~~
slashdotaccount
Why is your browser paying attention to the design suggestions of website
operators? Surely it should only show their content using your preferred
design?

------
michaelt

      Google got back to me [...] they had internally discussed
      the information leak problems associated with CSP and had 
      come up with no solutions that did not hamper CSP's 
      function.

Surely disabling third party cookies would fix this?

If it doesn't say www.facebook.com in the address bar, the browser doesn't
send the facebook cookies, so the response is always consistent with the user
not being logged in.

It might mean '+1' buttons have to be regular links, but other than that who
loses out except ad networks?

~~~
sp332
Youtube.com couldn't show my logged-in state from a google.com server then?

~~~
x1798DE
With third-party cookies turned off, youtube.com definitely _shouldn't_ be
able to show your logged-in state from a google.com server. They generally
play some fancy footwork with redirects to deliberately sidestep third-party
cookie restrictions so that they can achieve "single sign-in". I,
personally, would prefer it if this were not possible, but it seems unlikely
that Google would find this a desirable outcome.

Personally, I isolate my logged-in Google services in their own separate
browser. I was hoping to find (or write) a browser plugin which opens URLs in
different browsers based on a regex of some sort (e.g. *.google.com, gmail.com,
youtube.com links open in Chrome, everything else in Firefox), but I believe
this is difficult to achieve because of the way browser sandboxing works (at
least in Chrome).

------
A1kmm
It looks like the Mozilla-Central tip doesn't support path-based host
policies: [https://mxr.mozilla.org/mozilla-central/source/content/base/...](https://mxr.mozilla.org/mozilla-central/source/content/base/src/nsCSPUtils.cpp#293)

It is still vulnerable if a site redirects to a different domain (or if
framing is allowed, loads resources from a different domain) depending on
private state.

The hash directive, from the CSP 1.1 draft and implemented in Mozilla Central,
however, probably opens up a whole lot of new problems - you can check whether
a given script matches a given hash.

------
homakov
Yeah, as people mentioned, since I wrote about it back in Jan
([http://homakov.blogspot.com/2014/01/using-content-security-p...](http://homakov.blogspot.com/2014/01/using-content-security-policy-for-evil.html))
there was a _massive_ discussion about how to mitigate it. They decided to
allow paths which come as the result of a 302 redirect. Good enough.

Dude, don't kill this bug - it's the only reliable detection method to use in
other exploits.
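
A sketch of that mitigation, under the stated assumption that path components are ignored when matching a resource that arrived via redirect (origins and paths below are illustrative):

```javascript
// Normal loads match the full source expression including its path;
// loads that are the result of a redirect match on origin only, so a
// policy path can no longer probe where a redirect landed.
function cspMatches(srcOrigin, srcPath, urlOrigin, urlPath, afterRedirect) {
  if (urlOrigin !== srcOrigin) return false;
  if (afterRedirect) return true;       // path ignored after a redirect
  return urlPath.startsWith(srcPath);   // ordinary path matching
}
```

Under this rule, a redirect from `/me` to `/login` inside the same origin no longer produces a distinguishing violation.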

------
zemnmez
It's important to note that CSP2 aims to mitigate this problem by adding a
header that is used when CSP is active. Web applications can then be more
careful about the kind of responses they give under those conditions.

[https://twitter.com/mikewest/status/486048317902946304](https://twitter.com/mikewest/status/486048317902946304)

------
anon4
An easy (though sort of annoying) fix would be for the browser to only store
cookies per-tab - so if I log in to site A in tab A, then open site B in tab
B, the cookies set in tab A don't apply to tab B, which also means that if I
open site A in tab B, I'll be asked to log in again; and also to delete any
stored cookies when navigating away from the domain for which they were set,
or when closing a tab, etc. This of course also applies to all other forms of
local storage. A sort of a stronger version of the "delete all history when I
close the browser" setting.

Now, I realise these will get very annoying very fast, so I suggest you could
whitelist which domains can store things permanently and then only send them
if the url in the location bar is from the domain which set them (because fuck
iframes)... Of course iframes will need to work, so you'll also want to have
whitelisting of cookies which can be used in an iframe from this or that
domain.

All of which in the end has a pretty harsh setup for average users, but I'd
like it.

~~~
madeofpalk
That doesn't _actually_ fix the problem. If you visit Facebook in your first
tab, then navigate to another page from within the same tab, the new page can
still use this exploit.

So you've just broken the internet in a pretty major way, but not actually
fixed the problem (esp. considering a lot of people don't use tabs or use them
very minimally)

~~~
anon4
If you navigate away from facebook, it will delete all facebook cookies in the
tab. That's the idea.

Also, it doesn't break the internet - it doesn't tinker with IP or TCP or HTTP
or anything. I'm also not suggesting that all browsers do this, it's obviously
way too annoying for an ordinary user to put up with. I just want my browser
to do it. I want to fix the internet for myself.

~~~
x1798DE
Well if it's a cookie problem, then what should be done is that third-party
cookie policies should be rigorously enforced, then it doesn't even matter
what tabs are open, your browser would never present a cookie when the site
you're on is anything other than the site that gave you the cookie.

Typically browsers haven't been great with this sort of thing, and usually end
up being tricked into accepting "first party cookies" from a site through
which you are redirected (e.g. Google's login process, where when you click
"login", you are redirected from accounts.google.com to youtube.com and
eventually to your final destination, so that you have "first party cookies"
with Youtube, despite never voluntarily going to the site. Paypal did a
similar thing with Doubleclick some years ago, where you clicked a link, it
took you to a page where you're redirected to Doubleclick, they give you a
tracking cookie, then redirected to the page you wanted to go to).

------
gcb0
Site is down and I can't find a cache.

Is this the CSS :visited issue again?

~~~
seszett
It's not the same issue, this one is about CSP. I don't have time to make a
pretty mirror (and chromium can't show me the source anyway, it seems) but I
had it loaded before it was down, so here's a pastebin of the contents:

[http://pastebin.com/XsWj7hDw](http://pastebin.com/XsWj7hDw)

------
andor
This should be solved by disabling third-party cookies, right?

~~~
marcosdumay
Wrong. The requests are made to the sites the attacker wants to test. There is
no third party sending cookies to you.

~~~
andor
Wrong. :-P It's not about receiving, but about sending cookies to a third
party. Let's say I visit marcosdumay.com, where you include some images and
scripts from other hosts on your page, and you want to know whether your
visitors are logged into Facebook. Since marcosdumay.com != facebook.com, all
content from Facebook on your page is third-party content.

How does Facebook track sessions again? Right, through cookies. If third-party
cookies are disabled, and an img src tag on marcosdumay.com triggers a request
to facebook.com, this request will not contain my Facebook session cookie, and
I'll appear to be logged out, even when I'm not.

~~~
marcosdumay
Take another look at the article. The attacker makes the browser send the
Facebook authentication cookies only to Facebook.

~~~
andor
Yes of course, that's how cookies work. And disabling third-party cookies will
stop the browser from receiving cookies from, and sending them to, any third
parties.

In your browser, you have several options to deal with cookies: allow all of
them, allow none, or disable third-party cookies only. Let's say I visit
attacker-site.com, which includes both local content and stuff from other
domains:

      attacker-site.com
      |`-- tracking-script.js
      |`-- tracking-image.gif
      |`-- facebook.com/me
      `--- twitter.com/tweet-button.png

Content from other domains is third-party content by definition, and leads to
third-party requests, in this case to facebook.com and twitter.com. If
third-party cookies are disabled, those requests will not include any cookies.
Even if I'm logged in to Facebook, to attacker-site.com it will appear as if
I'm not.
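
That rule can be sketched as a single decision function; registrable-domain handling is simplified to an exact host comparison here.

```javascript
// A cookie is attached only when the cookie's host matches the
// top-level page's host (i.e. it is first-party), assuming the
// third-party blocking setting is on.
function shouldSendCookie(topLevelHost, cookieHost, blockThirdParty) {
  if (!blockThirdParty) return true;
  return topLevelHost === cookieHost;
}

// An <img src="https://facebook.com/me"> on attacker-site.com goes out
// without the Facebook session cookie:
shouldSendCookie('attacker-site.com', 'facebook.com', true);  // false
shouldSendCookie('facebook.com', 'facebook.com', true);       // true
```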

------
cool-RR
That's incredible. Does anyone care to whip up a demo for it that would tell
me what my Facebook username is?

~~~
mike-cardwell
If you read the article, you would know this is not the way it works.

~~~
pbhjpbhj
It could work that way (cf Barack Obama example), you'd have to check against
a "list" of all Facebook users though; and if the target was logged out it
would fail.

It seems doable that you could use FB to list all people with a given surname
in a locality and craft a page that finds which of those people your visitor
is. So it _could_ be used for doxing at some level.

