

#1 CSRF Is A Vulnerability In All Browsers - homakov
http://homakov.blogspot.com/2012/03/1-csrf-is-vulnerability-in-all-browsers.html
First article that makes the point.
======
elisee
Just in case it might be a problem for anyone: The article uses the CSRF
vulnerability to log you out of all Google services (and says so in a PS at
the bottom).

Don't open the article if you don't want to have to log in to Google again
afterwards (might be a problem if you're using two-factor auth and you don't
have your phone handy for instance).

~~~
homakov
hm yep. should I hide that thing? hm.. Sorry guys in advance.

~~~
jerrya
FWIW, worked in Firefox. Didn't work in Chrome or Opera.

~~~
redthrowaway
Worked in Chrome for me on Vista (work comp).

------
tptacek
CSRF isn't a browser vulnerability. It's a serverside application
vulnerability.

To say otherwise is to say that there is some trivial policy, just an HTTP
header away, that would allow IE, Firefox, and Webkit to coherently express
cross-domain request policy for every conceivable application --- or to say
that no FORM element on any website should be able to POST off-site (which,
for the non-developers on HN, is an extremely common pattern).

There is a list (I am not particularly fond of it) managed by OWASP of the Top
Ten vulnerabilities in application security. CSRF has been on it since at
least 2007. For at least five years, the appsec community has been trying to
educate application developers about CSRF.

Applications already have fine-grained controls for preventing CSRF. Homakov
calls these controls "an ugly workaround". I can't argue about ugliness or
elegance, but forgery tokens are fundamentally no less elegant than
cryptographically secure cookies, which form the basis for virtually all
application security on the entire Internet. The difference between browser-
based CSRF protections (which don't exist) and token-based protections is the
_End to End Argument In System Design_ (also worth a Google). E2E suggests
that when there are many options for implementing something, the best long-
term solution is the one that pushes logic as far out to the edges as
possible. Baking CSRF protection into the HTTP protocol is the opposite: it
creates a "smart middleman" that will in the long term hamper security.

This blog post seems to suppose that most readers aren't even familiar with
CSRF. From the comments on this thread, he may be right! But he's naive if he
thinks Google wasn't aware of the logout CSRF, since it's been discussed _ad
nauseam_ on the Internet since at least 2008 (as the top of the first search
result for [Google logout CSRF] would tell you). Presumably, the reason this
hasn't been addressed is that Google is willing to accept the extremely low
impact of users having to re-enter their passwords to get to Google.

Incidentally, I am, like Egor, a fan of Rails. But to suggest that Rails is
the most advanced framework with respect to CSRF is to betray a lack of
attention to every other popular framework in the field. ASP.NET has protected
against CSRF for as long as there's been a MAC'd VIEWSTATE. Struts has a
token. The Zend PHP framework provides a form authentication system; check out
Stefan Esser's secure PHP development deck on their site. Django, of course,
provides CSRF protection as a middleware module.
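
The synchronizer-token pattern all of these frameworks implement can be sketched in a few lines (an illustrative sketch only; the function names are mine, not any framework's real API):

```python
import hmac
import secrets

# Minimal sketch of the synchronizer-token pattern: the server stores a
# random token in the user's session and embeds the same value in every
# form it renders. An attacking page can make the browser *send* a
# request, but it cannot *read* the victim page, so it never learns the
# token and its forged POST fails validation.

def issue_token(session):
    """Generate a per-session CSRF token and remember it server-side."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token  # rendered as <input type="hidden" name="csrf_token" ...>

def validate_token(session, submitted):
    """Compare the submitted token against the session's, in constant time."""
    expected = session.get("csrf_token")
    return bool(expected) and hmac.compare_digest(expected, submitted or "")
```

This is exactly the "edge" logic the end-to-end argument favors: the application, which alone knows which requests are state-changing, does the check.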

~~~
homakov
I see some points, but >CSRF isn't a browser vulnerability. It's a serverside
application vulnerability. - you didn't prove this one. CSRF is a browser
vulnerability. And I don't care about the other stuff you said - you're
probably right that most popular frameworks have the protection out of the
box. I know it, no surprise here :). But I did a pretty wide audit, and only
Rails' protection looks really elegant. Hm, probably I'm too much of a Rails
fan, true.

And, please: >Baking CSRF protection into the HTTP protocol is the opposite:
it creates a "smart middleman" that will in the long term hamper security.
Surely, I don't mean "stop securing your apps from CSRF, it's not your
problem". I just want to make browser vendors think about the issue the way
millions of developers have to. Because it is their issue; they are in charge.
But we are fixing it on the backend (and we will have to for the next 10
years, definitely).

~~~
Xk
CSRF is NOT a browser vulnerability. The browser is doing exactly what it's
supposed to do: load content. The browser can not (and should not) attempt to
identify the "evil" HTTP requests from the "good" ones. The browser's job is
to make requests.

Now, you could argue the browser's job should be to implement security
features as well. It does, after all, implement the same-origin policy. But,
if you think about it, there is no good way for the browser to fix the CSRF
issue. You can ask the user, which is what's suggested, but that never really
works. They'll do one of two things: click "okay" _every single time_ , or
stop using your browser.

I would guess well over half of all websites do one of the following: (1) load
an external JS file, (2) load an external image, (3) load an external CSS
file, (4) use an iframe which points to a different origin, (5) use a JS
redirect, (6) use a meta redirect, or (7) open a new window.

The proposed "solution" to CSRF stops ALL of these use cases. The user would
have to manually approve each and every one of them. Given that well under 1%
of alerts would be true attacks, the user would almost definitely "okay" on
the attacks as well: they would have been trained by thousands of other alerts
that this is an acceptable thing to do.

There was a paper by Barth and Jackson on CSRF defenses where they propose an
Origin header, but that's the extent to which security is implemented in the
browser. It is fundamentally up to the web application for verifying the user
did in fact initiate the request. No amount of code in the web browser can get
around this fact.
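
A server-side Origin check of the kind Barth and Jackson proposed would look roughly like this (a hypothetical sketch; the origin list is a made-up example, and a real deployment must decide what to do when the header is absent):

```python
# A browser that supports the Origin header sets it to the
# scheme://host[:port] of the page that initiated the request, and a
# cross-site page cannot forge it.
ALLOWED_ORIGINS = {"https://example.com"}  # hypothetical application origin

def origin_check(headers):
    """Accept a state-changing request only if its Origin is our own."""
    origin = headers.get("Origin")
    if origin is None:
        # Older browsers omit the header entirely; a real server would
        # fall back to token validation here rather than fail open.
        return False
    return origin in ALLOWED_ORIGINS
```

Note that even this check lives in the web application, which is Xk's point: the browser can only supply evidence; the server must do the verifying.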

~~~
homakov
>I would guess well over half of all websites do one of the following: (1)
load an external JS file, (2) load an external image, (3) load an external CSS
file, (4) use an iframe which points to a different origin, (5) use a JS
redirect, (6) use a meta redirect, or (7) open a new window. The proposed
"solution" to CSRF stops ALL of these use cases.

You're definitely kidding me. Please point out where in my post I said to deny
ALL requests. I was talking about ONLY POST requests. Probably I forgot to add
that :) So I'm talking only about form submissions, and GET is OK, sure.

~~~
Xk
Either you do it for everything, or you do it for only POST and you end up
missing half of the vulnerabilities. Correct me if I'm wrong, but your CSRF
attack used a GET request, did it not? [1]

Web applications make state-changing operations on GET requests. You might not
like it, but they do.

[1] <img src="https://mail.google.com/mail/u/0/?logout" style="display:
none;" />

~~~
homakov
>Web applications make state-changing operations on GET requests. You might
not like it, but they do.

but when a developer makes a mistake with GET, it is 100% his problem - that's
not even in question. He should be punished :D

~~~
baddox
You're both just choosing different places to draw the line between _developer
responsibility_ and _browser responsibility_.

~~~
tptacek
That is like saying "you're both _just_ suggesting two totally different
designs for the HTTP security model".

His model is wrong. Again: I assume he wants to know that, so, bluntness.

------
vectorpush
_it took me a long time to understand the point behind CSR (cross-site
requests) and CSRF fully enough to find them EXTREMELY malicious._

I think this is a very important line. The sense I get around most of my
colleagues is that CSRF exploits are only something "bad programmers" get
wrong. Of course, they're all rockstars who've never been exploited
(yet/AFATK) so it's not like they need to spend a weekend or five paging
through droll security papers. A little modesty would do us all well.

 _90% of developers just don't care and don't spend time on that._

Indeed. It takes time to learn, time to code, and unless you're working at a
big shop, there's little pressure (or even acknowledgement of the need) to get
this stuff right.

Keep up the good work OP.

~~~
divtxt
CSRF is like a kafka-esque joke.

Here's my take away from every CSRF article:

 _A malicious site will load your site in an iframe, fill in your form and
post it. Fixing it requires a token in your form, but I can see you don't
understand how an extra hidden field in your form will make a difference, so
you're clearly not going to handle it correctly. You're screwed. Go home._

As far as I can tell, CSRF should have existed since javascript & frames. How
have the browser vendors not fixed such a huge insecure-by-design flaw?

~~~
phleet
The difficulty is _how_ to prevent this from happening.

Pages making GET requests across domains is so common and necessary that
several technology standards would have to come together to propose a real
fix. Every image or script loaded from a CDN, anyone hosting their own static
assets on a separate domain, and anyone using a plugin from Google, Facebook,
Twitter, or Disqus relies on this ability.

The tech companies can't even easily create a system to whitelist sites
allowed to embed them, because that would severely limit third party's ability
to use their services freely and would introduce a huge performance
bottleneck.

I haven't seen any particularly compelling solution to this. Things guarded
only by a GET request can be loaded by script, link, embed, object, img and
iframe tags, and all of those have legitimate reasons for loading resources
cross-domain without requesting permission for each one from the user.

~~~
divtxt
I have no problem with cross-site GET requests because I know GETs should
behave as 'read-only' anyway for lots of reasons.

What I don't get is how arbitrary cross-site POSTs with malicious values are
allowed. As far as I can tell, anyone can post this form:

<form action="http://bank.com/send_money">
  <input name="to_account" value="SCAMMER-1234">
</form>

Worse, one article will tell you to only allow Referer == "bank.com", and
then another will tell you that even that is no longer enough?!!!

Why can't we change the browser or the web server layer to prevent this by
default?!

~~~
eurleif
Browsers don't prevent it because there are legitimate uses for cross-domain
posts. Good frameworks do prevent it with CSRF tokens.

~~~
divtxt
I don't want the legitimate uses prevented. The default behavior should be to
prevent, and the legitimate uses should explicitly opt-in. That way, you only
have to do security analysis for those explicit points.

~~~
tedivm
This to me is a server-side issue - but that doesn't necessarily mean it's on
the app developer. The behavior you're talking about can be set on most
servers directly, by adding the "X-Frame-Options" header to every response by
default. Then exceptions would have to be made explicitly, by either the
server admin or the application developer. If anyone should change the default
behavior (which I am not convinced is the case) it should be the server
developers, not the browsers.

~~~
eurleif
X-Frame-Options only prevents the page from being displayed in a frame. It
doesn't prevent a page on another domain from submitting a POST request.

~~~
tedivm
CSRF is solved very simply by using tokens for each field. If the attacking
site can't load the other page, it can't pull the token out, and without the
token the post gets discarded. If you've abstracted your form generation this
should be super simple to add.
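
The "abstract your form generation" approach can be sketched like this (a hypothetical helper, not any framework's real API): because every form passes through one function, every form automatically carries the hidden token.

```python
import secrets

def render_form(session, action, field_names):
    """Hypothetical form-builder: every generated form carries a hidden
    CSRF token tied to the session, so the protection comes for free."""
    token = session.setdefault("csrf_token", secrets.token_urlsafe(32))
    rows = [f'<input type="hidden" name="csrf_token" value="{token}">']
    rows += [f'<input type="text" name="{name}">' for name in field_names]
    inner = "\n  ".join(rows)
    return f'<form method="POST" action="{action}">\n  {inner}\n</form>'
```

The server-side half is symmetric: on POST, discard any request whose csrf_token field doesn't match the session's stored value.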

------
Zirro
Am I correct in interpreting that the proposed fix would be the same as the
functionality provided by RequestPolicy (which he mentions in the post)? I've
used it for quite a while now, and although it works well for me as a
power user (who is concerned about security), I can't imagine the confusion
and pain an ordinary user would feel despite the suggested message.

Blocking resources loaded from separate domains breaks a lot of sites today.
Few popular sites keep everything under the same domain (CDNs, comment
systems, captchas and Facebook/Google/Twitter resources, for example).
<http://www.memebase.com> is probably the worst "offender" I've come across.
Hacker News isn't one of them, which I'm happy to see.

Although if this was implemented I could see a lot of sites moving quickly to
remedy it, reducing the alerts. It'd still be a pretty hard transition
period, though.

Want to see how much would break today (and if the fix would work for the
average user)? Try: <https://www.requestpolicy.com>

~~~
driverdan
As my other comment highlighted, disabling 3rd party cookies will prevent most
CSRF. As an added bonus it will also increase your privacy by preventing some
(but not all) cross domain tracking.

~~~
tptacek
I'm not following (but I'm a little buzzed). What do third-party cookies have
to do with CSRF? CSRF is a flaw in the victim application.

------
seanalltogether
Is a GET request in an iframe now considered a CSRF vulnerability? As far as I
know, he hasn't actually done any cross site scripting. If i submit this as a
link on hacker news and get a bunch of people to click it, have I forged a
cross domain request as well?

<https://mail.google.com/mail/u/0/?logout>

~~~
christianmann
Cross site scripting (XSS) is not the same thing as CSRF. If you were to do
that, it wouldn't be a CSRF, because the action originated with the user.

Normally CSRFs are automatic, either in the form of an image (<img
src="https://...?logout" />) or an iframe src attribute. So, if you included
the above image tag on your page, then it would be a CSRF, sometimes also
called a Confused Deputy Attack.

------
citricsquid
Maybe this is a good time to ask:

I found an xss vulnerability in a website that can be used to cause noticeable
problems (enough that fixing it should be a priority) so I contacted the
developers behind the site and informed them what caused it, how to fix and an
example of it in practice and why it's bad: they've done nothing in over a
month. What do I do?

I guess the answer is "forget it", but I feel like if I don't _do_ anything
someone malicious will discover the issue and cause harm to users of the
website...

~~~
kaolinite
Is it a persistent XSS vuln or does it depend on malicious input being passed
via the URL or POST?

It's persistent if it can be saved in a comment or on a profile, etc., and is
much more dangerous if so. Non-persistent XSS realistically isn't too big a
deal; most sites are vulnerable, and it's usually only a problem if you're a
big website and therefore a phishing target.

~~~
citricsquid
I can link someone to a page and it can associate them with something they can
then never disassociate themselves with. For example I could create an
account, post illegal content (child pornography etc.) on the site then get
people to click a link and forcibly associate their account with that content,
which they are then tied to until a site administrator realises and fixes it.
(edit: without them ever knowing)

Imagine if I could make you the author of this comment, it's like that.

------
akavlie
For those who didn't see the recent kerfuffle: This guy recently found and
demonstrated a major Rails exploit on github. He seems to know a thing or two
about security exploits.

~~~
rosser
Clarification: he didn't recently find the exploit. He's been making noises
about it for a very long time and being ignored, so he took the (dubious, to
some) step of using the exploit publicly and loudly, to draw attention to the
problem.

~~~
technoweenie
Another clarification: He wasn't making noises for a long time with _GitHub_
and being ignored. His support responses were replied to nearly immediately
(except where the timezone differences came into play). We take security
reports very seriously.

We'd have preferred a more responsible disclosure, and I hope he (and others)
are more careful about this in the future. Most reporters we get act very
responsibly, and we are always gracious (and even contract work from them in
some cases). In his case, we saw activity that he didn't report to us, and
suspended his account while we did a deeper investigation.

The Rails community and we still think that his proposed solution is not a
good idea, but it did provoke exploration in some other ideas.

<https://github.com/rails/rails/issues/5228>

[http://techno-weenie.net/2012/3/19/ending-the-mass-
assignmen...](http://techno-weenie.net/2012/3/19/ending-the-mass-assignment-
party/)

<http://weblog.rubyonrails.org/2012/3/21/strong-parameters/>

~~~
ricardobeat
I'm sorry, but why should he? If I find a major hole in SHA1 key handling,
should I contact GitHub since you are users of it? Of course not.

~~~
tptacek
"SHA1 key handling"?

Anyways, you misread him. All he's saying is that the delay Egor Homakov
experienced was with the Rails dev team, not Github. Github's response to
Homakov's finding was very fast.

~~~
ricardobeat
I just pulled something out of my ass to fill in the gap. Was probably
thinking of RSA.

------
davepeck
My app's web site is built with Django. I use the built-in CSRF tools. (I
should emphasize that my site is strictly HTTPS.)

In theory, no normal user will ever fail CSRF checks. In practice, tons of
people have complained that they see Django's (very confusing) CSRF error page
when they try to sign up for my service.

This was surprising to me; I thought we were _way_ past this point. Digging
into it, I've learned that tons of people use extensions that muck about with
cookies in ways that break Django's CSRF feature. I don't really know a way
around it.

How common is this, in your experience?

~~~
jasonkeene
Yeah, this is something I run into often, as I don't accept cookies from sites
by default and don't send the Referer header (both are required for Django's
CSRF middleware over https). This is a good read if you are interested in the
rationale behind these decisions ->
<https://code.djangoproject.com/wiki/CsrfProtection>

As far as a solution for your users, I'd just let them know that you require
cookies to login (obviously) and if you are posting over https make sure they
have the Referer header which can be forged to just be the domain and not the
entire URL if they prefer. I use <https://addons.mozilla.org/en-
US/firefox/addon/refcontrol/> set to forge for django sites.

~~~
davepeck
Yeah, there are plenty of reasons to do what you're doing that seem fair to
me. But at the same time, through no fault of yours, your requests are
indistinguishable from potentially malicious ones. The whole thing is a mess,
effectively a band-aid on top of deeper issues with HTTP's statelessness.

Also: that's a good link. Thanks.

~~~
jasonkeene
If I'm blocking cookies/referer by default then the onus is upon me to enable
them for sites that require them for stuff like this. I wouldn't worry about
users who have this issue. Maybe customize django's CSRF failure page to say
they need to enable both to use your service and call it a day.

~~~
davepeck
I agree in principle. And I have built a custom CSRF page to help my potential
customers out.

In practice, lots of my potential users don't even understand that their
AdBlock/whatever extensions are mucking about with Cookies in ways that break
things. It's a tough sell to tell someone who is thinking about trying your
service: "sorry, I don't work with your browser the way it is" when so much of
the rest of the world is either HTTP, not HTTPS, or simply has decided to punt
on CSRF or be much more selective about it. It looks to them like _I'm_ the
one that's broken.

Argh. It's no-win.

~~~
jasonkeene
Hmm... I'm thinking you could write a middleware that checks for Referer over
https and, if it's not set, goes ahead and sets it to <https://yourdomain.com>.
That would allow you to continue to use the CSRF middleware for the nonce
check (just make sure yours runs before theirs).

~~~
nbpoole
Except an attacker can strip a referer header: if you fail open like that, you
leave yourself open to attack.

See [http://blog.kotowicz.net/2011/10/stripping-referrer-for-
fun-...](http://blog.kotowicz.net/2011/10/stripping-referrer-for-fun-and-
profit.html) for examples

~~~
jasonkeene
In order to exploit this an attacker would need to be MITM on the network or
on a subdomain by setting a wildcard cookie. The site would still keep the
nonce check. I don't see any way around this without poking a tiny hole in the
CSRF protection. Guess you gotta weigh the cost/benefit.

------
javajosh
This attack vector requires:

1) previous authentication to a service.

2) service which supports destructive actions as guessable URLs.

3) "third-party cookie" support in the user agent. [1]

4) a visit to a page with a malicious resource construct (an image, script,
iframe, external style sheet, or object). Note that this resource could be
generated by JavaScript, although this is not necessary.

Sadly, the first three criteria are widely met. If we are to systematically
remove this threat, then we have to look at removing each in turn:

1) Previous authentication to a service can be mitigated by simply logging out
when you are done, but this is inconvenient and requires manual user
intervention. However, there is an interesting possibility to limit
"important" services to a separate process - a browser, an "incognito" window,
etc.

2) Services should be built with an unguessable component that is provided
just prior to calling by a known-good API, probably with additional referrer
verification.

3) It is my belief that disabling third-party cookies is the right solution
here: users rarely, if ever, get value from third-party cookies. Denying them
would allow API authors to write simpler APIs that do not have a secret
component, and would allow users to maintain the same behavior and login to
all their services from the same browser.

4) While it seems that little can be done on this front apart from releasing
some chemical agent into the atmosphere that made people trustworthy and good,
actually it may be possible for browser makers to do some simple analysis of
resource URLs to detect possible hanky-panky.

[1]
[https://en.wikipedia.org/wiki/HTTP_cookie#Privacy_and_third-...](https://en.wikipedia.org/wiki/HTTP_cookie#Privacy_and_third-
party_cookies)

~~~
eurleif
Third party cookie support isn't necessary. You could just use a link instead
of an image.

~~~
javajosh
Yes, that's the active form of the attack. To me, the passive form is far more
pernicious (you are taking destructive action passively). At least with the
active form you know that you've done something unintentional.

But this does imply that the final onus is on the programmers of services to
design services that do not have guessable, destructive one-step inputs.

~~~
eurleif
You could use a redirect, too.

------
brian_peiris
Here's Google's reply to this particular "vulnerability":
[http://www.google.com/about/company/rewardprogram.html#logou...](http://www.google.com/about/company/rewardprogram.html#logout-
forgery)

~~~
simonw
I don't understand why Google say that this is an issue that can't be solved -
why can't they use a CSRF token on their logout feature, either by switching
to using a POST form or by appending a CSRF token to the query string?

~~~
mhansen
That won't work without javascript, and then you need another URL to fallback
to that'll respond to GET requests for non-javascript browsers. And then you
could just XSRF the non-javascript URL.

[Disclaimer: I work at Google, but not on any area related to this]

~~~
simonw
I don't understand why this would need JavaScript - regular CSRF protection
for POST requests works fine without JavaScript - why can't that be applied to
the logout button?

------
spamizbad
I'm having a little trouble parsing this post. Is he saying he's discovered a
variant of CSRF that cannot be stopped by using the Synchronizer Token
Pattern? Or has he found something that a lot of sites' protection patterns
don't follow?

~~~
huhtenberg
You seem to be familiar with the subject. I just read through CSRF and Token
stuff on [1] and there's one thing I don't seem to understand.

What would prevent an attacker from opening an _original_ site's page in an
iframe and then having a script fill in and submit the form on it? In other
words, say I am logged in to my bank's site. I then open a malicious page
that has an iframe pointing at <http://bank/operations/move-funds> that
contains a fund transfer form. Wouldn't this page include a correct CSRFToken,
making the form readily submittable by a malicious script?

[1] [https://www.owasp.org/index.php/Cross-
Site_Request_Forgery_%...](https://www.owasp.org/index.php/Cross-
Site_Request_Forgery_%28CSRF%29_Prevention_Cheat_Sheet)

~~~
njs12345
This is prevented by only allowing frames to interact with each other if
they're on the same domain. See (for instance) [http://msdn.microsoft.com/en-
us/library/ms533028(v=vs.85).as...](http://msdn.microsoft.com/en-
us/library/ms533028\(v=vs.85\).aspx)

------
huhtenberg
[ _repost from below_ ]

I just read up on CSRF and its mitigation with Synchronizer Tokens on [1] and
there's one thing I don't seem to understand. What prevents an attacker from
opening an original site's page in an iframe and then having a script fill in
and submit the form on it? In other words, say I am logged in to my bank's
site. I then open a malicious page that has an iframe pointing at
<http://bank/move-funds> that contains a fund transfer form. Wouldn't this
page include a correct CSRFToken, making the form readily submittable by a
malicious script?

Can anyone comment? It damn sure looks like a big gaping hole that is
virtually impossible to plug.

[1] [https://www.owasp.org/index.php/Cross-
Site_Request_Forgery_%...](https://www.owasp.org/index.php/Cross-
Site_Request_Forgery_%28CSRF%29_Prevention_Cheat_Sheet)

~~~
nbpoole
Because a script (I assume you're referring to JavaScript) can't fill in a
form on or read the contents of a third-party website. That's a violation of
the same-origin policy.

CSRF tokens are a well-understood solution to this issue. In order to submit a
valid request, you must include what is essentially a secret token that is on
the page (although the secret token can just be your session ID). For an
attacker to get that token, they would need to be able to do at least one of
the following:

A. Guess it, by having you make multiple requests. (so you make the token long
enough that it's infeasible to guess)

B. Be able to read it by intercepting the HTTP response or reading it in some
way, in which case you have much larger security issues.

C. Be able to read the token in the HTTP request that the browser makes.
Again, if an attacker can do this, your session is already compromised.
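
Point A rests on token entropy; a quick back-of-the-envelope calculation (illustrative numbers only) shows why guessing is infeasible:

```python
import secrets

# A 32-byte random token has 256 bits of entropy. Even an attacker who
# could somehow force a billion forged requests per second would need
# an astronomically long time to stumble on a valid token by chance.
token = secrets.token_bytes(32)
bits = len(token) * 8                       # 256 bits of entropy
expected_guesses = 2 ** (bits - 1)          # average guesses to hit one value
years = expected_guesses / 1e9 / (60 * 60 * 24 * 365)
assert bits == 256
assert years > 1e59                         # utterly infeasible to guess
```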

~~~
huhtenberg
Right, the same-origin policy, thanks. Just found it after a minute of
jsfiddling.

Now, let's say my script is not loading bank's page into an iframe, but rather
fetches it with an ajax call. Wouldn't that page (again) include a valid CSRF
token? Or is this mitigated by checking a referrer on the bank's side?

~~~
nbpoole
You can make but CAN NOT view the result of a cross-domain request via
XMLHttpRequest unless the site specifically opts in to it. Same-origin policy
again.

------
brian_peiris
He logs you out of Google with a simple

<img style="display: none;" src="https://mail.google.com/mail/u/0/?logout">

------
jiggy2011
CSRF is a bit of a pain to work around but how much of a problem is it in the
wild?

Most sites where this could do real damage (and have real gains for the
attacker), banks etc are going to be well protected.

You could use it to comment spam a blog but that's going to be a crapshoot.
Guessing which blog people are logged into etc, you would need very targeted
attacks.

Sure, signing out of Google is annoying, but if you have LastPass or similar,
signing back in is pretty frictionless.

~~~
homakov
>Most sites where this could do real damage (and have real gains for the
attacker), banks etc are going to be well protected.

You think so. In "the wild" even serious systems are vulnerable #OpApril1

~~~
ceol
Why are you spacing out the "release" of your information? I assume you've
found some CSRF vulnerabilities?

------
dfc
RequestPolicy + NoScript are the big reason I have not switched to chromium.

In order for requestpolicy to block this it needs to be in a fairly locked
down state too...

------
someone13
A bit of a note regarding REST:

RESTful services are as vulnerable to CSRF as anything else. See [1] for more
information (and I'm really sad that there's no second post, as mentioned).
However, since RESTful services imply no state on the server (i.e. no token),
the question is: how do you prevent CSRF attacks?

One really simple method is to deny all requests (on the server) with the
application/x-www-form-urlencoded content type, and deny all
multipart/form-data requests that include non-file parameters - these (along
with text/plain) are the content types that can be sent from an HTML form. For
your application, XMLHttpRequest can change the content type, and isn't
affected by CSRF.
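
A coarse sketch of that content-type policy (my own simplification: it denies multipart outright rather than inspecting for non-file parameters):

```python
# Content types an HTML form can produce via its enctype attribute.
# A stateless API that rejects these for state-changing methods cannot
# be driven by a cross-site <form>, while the application's own
# XMLHttpRequest calls (e.g. application/json) pass through untouched.
FORM_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def allow_request(method, content_type):
    """Return True if a request may proceed under this CSRF policy."""
    if method in ("GET", "HEAD", "OPTIONS"):
        return True  # expected to be side-effect-free anyway
    base = (content_type or "").split(";", 1)[0].strip().lower()
    return base not in FORM_CONTENT_TYPES
```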

EDIT: Also, sort-of-related: I recommend you set the X-Frame-Options header
too, in order to prevent clickjacking. Info at [2].

[1]: [http://blogs.msdn.com/b/bryansul/archive/2008/08/15/rest-
and...](http://blogs.msdn.com/b/bryansul/archive/2008/08/15/rest-and-xsrf-
part-one.aspx) [2]: [https://developer.mozilla.org/en/The_X-FRAME-
OPTIONS_respons...](https://developer.mozilla.org/en/The_X-FRAME-
OPTIONS_response_header)

------
jim_lawless
I had envisioned what I think is a more solid defense against CSRF ... I just
haven't had time to build a proof.

Earlier commenters have noted that each request back to the server should
include an unguessable token that cannot be derived by mining other pages on
the site with cross-site AJAX requests.

My hypothetical solution is to embed that token in the prefix to the hostname
after logging into the given site. The token would then be sent in the Host:
header for all dynamic requests.

Step 1: You log in to www.somesite.kom.

Step 2: You are then forwarded to dynXXXXXXXX.somesite.kom where XXXXXXXX
represents a unique, dynamically-generated token tied to your session.

The attacker must now know XXXXXXXX to properly form up a GET or POST request
to attack your account.

The site itself could then use relative URL's for dynamic content or could use
the appropriate templating system to ensure that any dynamic URL's ( either in
HTML markup or script text ) contain the generated hostname.
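
The hypothetical scheme could be sketched like this (the names and the HMAC derivation are my own assumptions; any unguessable per-session tag would do):

```python
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # hypothetical server-side key

def session_hostname(session_id, domain="somesite.kom"):
    """Derive the dynXXXXXXXX hostname for a session, as sketched above.

    Deriving the tag as an HMAC of the session ID means the server need
    not store it separately: it can recompute it on every request.
    """
    tag = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256)
    return f"dyn{tag.hexdigest()[:8]}.{domain}"

def host_is_valid(session_id, host_header):
    """Accept a dynamic request only if the Host header carries the token."""
    return hmac.compare_digest(session_hostname(session_id), host_header)
```

An attacker forging a GET or POST must name the full host in the request URL, which they cannot guess; the scheme does, however, require a wildcard DNS entry and TLS certificate for *.somesite.kom.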

------
slurgfest
There is a HUGE vested commercial interest in the CONTINUATION of the insecure
status quo, in which all the control is on the server side (hence, with the
companies doing tracking and advertising using third party requests, rather
than with the end user).

Furthermore, the players funding browser development all share strongly in
that vested interest. (Even for Firefox, follow the money - and if Firefox did
try to lock down without industry agreement, it would lose, which Mozilla
knows).

So you will not see any change. This also explains the degree of heat directed
at the suggestion that client behavior could be less insecure by default, with
regard to third party requests.

This is not new. Much of HTTP as originally conceived actually dictated a
great deal more user control over what happened. Those standards had to be
compromised from the word go in order to reach the present state.

------
txt
Adding an extra token for protection against CSRF attacks will only work if it
is changed on each request. Some of the biggest sites out there do not do
this. I know of one site in particular (I won't name it, but it's HUGE) that
generates a unique token every time a user logs in. The token doesn't change
until the user logs out; even if the user closes the browser and doesn't go
back to the site for a week, the token will be the same. So it does its job,
until somebody like me pokes around and finds a hole that will parse out that
token and generate a form that can make any request on behalf of that user in
an iframe without that user knowing a thing. Evil, yes, but I found this
months ago and it still works. And I haven't used it in any way, beyond a
proof of concept.
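For contrast, here is a sketch of the per-request (single-use) tokens the comment argues for. This is an illustrative in-memory version with made-up names, not any particular site's scheme:

```python
import secrets

class TokenStore:
    """Per-request CSRF tokens: each issued token is valid exactly once."""

    def __init__(self):
        self._current = {}  # session_id -> currently valid token

    def issue(self, session_id):
        # Called when rendering a form; invalidates any previous token.
        token = secrets.token_urlsafe(16)
        self._current[session_id] = token
        return token

    def consume(self, session_id, token):
        # Called when handling the POST; the token is removed even on failure,
        # so a captured token cannot be replayed later.
        expected = self._current.pop(session_id, None)
        return expected is not None and secrets.compare_digest(expected, token)
```

With a scheme like this, the parse-the-token-out-of-the-page attack still works for one request, but a token harvested once cannot be reused a week later.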

~~~
eurleif
You shouldn't be able to get the token from another domain, regardless of how
long it lasts. How are you able to?

~~~
txt
I'm getting it on the same domain, but the request can be sent from any
domain, as long as the user is logged in.

------
aprescott
If Google required a POST to log out (as it should, since logging someone out
changes the session state and is therefore not a "safe" GET-able
request[1]), we could fall back on CORS as protection, which removes the need
for a CSRF token. Since the only way (I believe) to get a POST to fire cross-
domain, without explicit user interaction through, say, a regular HTML form,
is through JavaScript, the browser would refuse to make the request unless the
CORS headers explicitly allowed it.

Still, using <form> buttons for logging out, consistently across the entire
web, would take some effort. CSRF tokens are probably less intrusive.

[1]: [http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.1](http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.1)
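A rough sketch of the server-side rule being described, i.e. logout must be a POST, and any request carrying a foreign Origin header is refused (the origin value is made up):

```python
ALLOWED_ORIGIN = "https://www.example.com"  # hypothetical site origin

def logout_allowed(method, origin_header):
    # Logout changes session state, so it must never be a GET (RFC 2616 9.1.1).
    if method != "POST":
        return False
    # Cross-origin XHR and modern cross-site form posts carry an Origin header;
    # refuse anything that names a different site. Allowing a missing Origin is
    # a concession to older browsers that omit it, and a stricter policy would
    # reject that case too.
    return origin_header in (None, ALLOWED_ORIGIN)
```

This is only a sketch of the idea in the comment; as the replies below it note, a plain HTML form can still POST cross-domain, so the Origin/CORS check has to be enforced server-side rather than trusted to the browser alone.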

~~~
nbpoole
" _Since the only way (I believe) to get a POST to fire cross-domain, without
explicit user interaction through, say, a regular HTML form, is through
JavaScript, the browser would refuse to make the request unless the CORS
headers explicitly allowed it._ "

I'm not quite sure what you're trying to say here. But you can make cross-
domain POST requests in two ways, both involving JavaScript:

1. Create an HTML form, use JavaScript to submit it.

2. Use XMLHttpRequest to make a cross-domain POST.

~~~
aprescott
Yes: "the only way [...] to get a POST to fire cross-domain [...] is through
JavaScript".

The request would go against the same-origin policy, at which point CORS comes
into play.

Edit: Ah, but creating a form in the DOM and submitting it via JavaScript...
that one I hadn't thought of.

~~~
nbpoole
To fire automatically, yes; though getting people to click on a button of
their own free will is easy.

------
jorgem
"April Fools' Day" will not be a good day for him to publish "secret stuff".

~~~
homakov
why? seems perfect.

~~~
pragone
Many people will assume it's a joke, and not take you seriously - of course,
at their own risk.

------
chubbard
Bravo! If you thought you were immune because you only visit "reputable" sites,
this will make you think twice. I tried putting it in an incognito tab in
Chrome with Google Apps in a normal tab. That didn't log me out, but if I put
both Google Apps and this site in incognito tabs, or both in normal tabs, then
it logged me out. Pretty important to log out of sites when you aren't using
them. But more important to fix my sites!

------
peteretep
I am hesitant to post this, because from experience, smart people frequently
misread or misunderstand it, but:

The easiest first-step solution is just to check the HTTP Referer header and
verify that it matches your domain.

Yes, this is easily faked by someone crafting their own HTTP requests. It is
/not/ easily faked by someone causing your browser to make requests, though.
And it provides very good coverage against the attack described here.
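As a sketch, the Referer check described above might look like this (the trusted host names are hypothetical):

```python
from urllib.parse import urlparse

TRUSTED_HOSTS = {"www.example.com", "example.com"}  # hypothetical site hosts

def referer_ok(referer):
    # Reject state-changing requests whose Referer is absent or off-site.
    # Note: being strict about a missing Referer will turn away users behind
    # proxies or privacy tools that strip the header.
    if not referer:
        return False
    return urlparse(referer).hostname in TRUSTED_HOSTS
```

As the reply below points out, this check is only as strong as the site's ability to avoid open redirectors, since those let an attacker manufacture an on-site Referer.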

~~~
nikcub
If you do that, you then have to make sure your app doesn't have a redirector
anywhere, or anything else that writes a Location header. Almost all apps do.
This matters because an attacker can use a redirect to get the site itself to
produce the right Referer.

e.g. if the URL of the attack request is (both of these are close to real-life
examples that work(ed))

[http://www.site.com/mail/filter?create=*%2Cattacker%40hush.com](http://www.site.com/mail/filter?create=*%2Cattacker%40hush.com)

usually the site will have a redirector on the login action, which takes the
user back to the page that they were on after login, so you just use the
attack URL as the redir URL

[http://www.site.com/login?return_to=%2Fmail%2Ffilter%3Fcreate=*%2Cattacker%40hush.com](http://www.site.com/login?return_to=%2Fmail%2Ffilter%3Fcreate=*%2Cattacker%40hush.com)

It's amazing how many login scripts still do the redirect even if the user is
already logged in, or even when there was no real login at all.

Even if you do a JavaScript in-place login, there is usually a mobile version
that has this pattern. I rarely meet a site that doesn't have some way to
bounce between URLs and fake the Referer.

I guess the real conclusion is that these types of attacks are complicated and
better understood fully than patched with a single short fix, because the next
response is always "but, if you do that, then..." and so forth, like a
matryoshka doll.

------
lucb1e
I wonder what will happen to websites which use cross-domain post requests to
log in securely, e.g. <http://example.com> submitting the login form to
<https://secure.example.com>

~~~
lojack
If they're using tokens, this should still work, as long as the token state is
shared across the two servers.

------
tlogan
I 100% agree: CSRF is a huge hack. Telling programmers to add CSRF tokens to
all their forms and requests basically shows that all these cookies and
security measures in browsers are worth shit. It seems like the entire web
security model needs to be redesigned from scratch.

~~~
marshray
I agree that web security needs to be rethunk from the ground up, but I don't
think it's fair to blame browsers for things that are fundamentally HTTP
protocol and server app problems.

------
kaolinite
This is a problem with app developers, not browser makers. Applications should
follow the HTTP spec and not make persistent changes via GET.

The "I can log you out of Facebook/etc" trick is one of those techniques that
script-kiddies love using on forums.

~~~
homakov
read again: it's not about GET, and the Google trick is just a trick.

------
postfuturist
Google allows a logout from a GET request, on Chrome of all browsers? Is there
a way to run browser tabs completely sandboxed regarding cookies/auth?
(private browsing mode and running different browsers is a bit too clunky)

~~~
timmaxw
Chrome allows you to set up multiple user profiles; each one is isolated from
the others. Like Incognito mode, it's per-window, not per-tab. I have one for
GMail, one for Facebook, and one for everything else.

------
infinitivium
Okay, this is a good idea, but how would it handle legitimate requests to
other domains?

Issues:

- some users would click allow anyway, so it doesn't completely solve the problem

- what about apps built using CORS, etc.?

------
beneth
Is this what's happening with "hacked" twitter accounts that don't ever seem
to have had anyone access them or change the password and seem to suddenly
start spewing bizopp spam for a day or two?

------
driverdan
Preventing this in the browser is actually pretty trivial for most sites.
Simply block 3rd party cookies. If the site uses cookies to track sessions the
request won't have your session cookie and won't work.

It's not up to browsers to prevent this. Just like how you can't rely on
client side data validation you must always take proper precautions on the
server. Browsers taking additional precautions to prevent this would be nice
but it's not the whole solution and never will be.

Edit: If you're going to downvote this, please leave a reply stating why. I
don't see an opposing point of view, unless you're using 3rd-party cookies to
track people across domains.

------
ck2
Don't allow actions through GET, always use POST.

~~~
gibybo
That doesn't solve it, because the attacker can just create a <form> and
auto-submit it with JS (or, if the victim has JS disabled, make a translucent
submit button the size of the entire page).

~~~
someone13
For example:

    <body onload="document.evil.submit()">
      <form name="evil" method="POST" action="https://mail.google.com/mail/u/0/?logout">
      </form>
    </body>
    

The massive button is left as an exercise to the reader ;-)

------
HeyImAlex
Aren't you protected if you just use CSRF tokens on sensitive POSTs? I thought
it was good practice to always do that anyway.

------
swah
Are REST APIs susceptible in the same way? Or are we OK as long as we don't
store the auth token in a client-side cookie?

------
iamwil
I'm having a bit of trouble parsing the post. Did he just discover CSRF and is
trying to raise awareness? Or did he discover a new variant of CSRF that makes
previous counter-measures ineffective?

For completeness, the Rails guide covers these security holes:
[http://guides.rubyonrails.org/security.html#cross-site-request-forgery-csrf](http://guides.rubyonrails.org/security.html#cross-site-request-forgery-csrf)

------
djbender
"Jeff Antwood" Entertaining read.

------
gcb
The web is already too useless because people can't think about security in a
decent way.

It still baffles me that I can't allow a script on one domain to automate
something on another domain for me.

------
sedictor
WOW You've discovered absolutely new type of vulnerability! Awesome work! lol
lmao

~~~
homakov
no food for troll here

------
rdl
You realize you could be monetizing these security vulnerabilities, right?

~~~
homakov
how? If I report nobody pays even 'thank you'.

~~~
rdl
Fuck reporting it, unless you're contractually obligated because they've
retained you (or, if it's an open source project you like, and want to
support). If vendors won't even listen to you, clearly they don't value your
time or their product, or their customers.

You can sell security vulnerabilities to a variety of parties. If you want
introductions, email me.

Some people view this as "wrong" in some ethical way, but meh. Money is good
-- it can be exchanged for valuable goods and services. There have been a lot
of arguments for "responsible disclosure", "anti-sec", "full disclosure", etc.
over the years.

I'd draw the line at blackhatting yourself with the vulnerability, but just
selling the info is legal. Generally, security companies are buyers, and their
clients tend to be governments, generally western (USA).

~~~
homakov
I've been a black hat in the past; now I'm completely white hat.

