Don't open the article if you don't want to have to log in to Google again afterwards (might be a problem if you're using two-factor auth and you don't have your phone handy for instance).
> To stir up your interest - check any google service e.g. gmail, you are logged out.
Great hook btw. Even more impressively, I have all js on his blog blocked through NoScript and it still worked.
You can inject the things above into somebody else's data, or hide them in your own page from the beginning, I suppose.
As a site developer, you can mitigate some mischief a bit by having any destructive update be a two step process: first get a form (or an "are you sure" page, if no real input is required), and add a nonce to the form, which is submitted back with the "request for destruction and subversion". Of course, the attacker can still request the form, harvest the nonce, and send it back with the attacking request, but now his attack has to be 2 steps instead of just 1. Also, if the nonce has a variable name, he has to know to grab everything off of the setup form, and not just resubmit a hard-coded name. Obviously, this won't stop everybody, but it does force them to try a little harder.
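The two-step nonce flow described above can be sketched in a few lines. This is a minimal illustration in Python (the in-memory session store and all names are hypothetical; a real app would use its framework's session machinery):

```python
import secrets

# Illustrative in-memory store mapping session id -> issued nonces.
# A real application would keep this in its session backend.
sessions = {}

def render_form(session_id):
    """Step 1: serve the form with a fresh one-time nonce embedded in it."""
    nonce = secrets.token_urlsafe(32)
    sessions.setdefault(session_id, set()).add(nonce)
    return f'<form method="post"><input type="hidden" name="nonce" value="{nonce}"></form>'

def handle_destroy(session_id, submitted_nonce):
    """Step 2: honor the destructive request only if it echoes a nonce we issued."""
    issued = sessions.get(session_id, set())
    if submitted_nonce in issued:
        issued.discard(submitted_nonce)  # one-time use: replays are rejected
        return "destroyed"
    return "rejected"
```

As the comment notes, an attacker who can read the form can still harvest the nonce, but a hard-coded one-shot request no longer works.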
<img src="http://targetsite.php/form?submit=1&data=gjoprgrger" />
Accept INCLUSION(XHR, SUBDOC) from SELF++
Imagine a real-world example. Alice wants to murder you. You have a kid. Alice calls you, exactly imitating the voice of your kid, telling you she's trapped under a girder at the abandoned bridge across town. You frantically race across town to free your trapped kid. You get to the bridge and notice it's on the verge of collapsing, and that there's a "no trespassing" sign posted. You ignore that and try to save your kid. You trip over something and that upsets the unstable structure of the bridge. It collapses, crushing you while you fall 3000 feet to your ultimate demise.
Guess what, you were just murdered.
Even though Alice did not physically drive a knife into your heart, she still killed you. She intended to kill you (that's the first part), and then set in motion a chain of events that resulted in you dying. That's murder.
Going back to our computer example... even though the author of this blog post didn't break into your house and log you out of Google, the result is the same. He intended for you to be logged out of Google without your permission or Google's permission, and you were logged out of Google. Therefore, your computer account was maliciously accessed.
The reality of the situation is that he's in Russia and I doubt Russia gives a damn about this.
By your argument if I shoot you with your gun, I am not guilty of murder.
I guess a lot of people disagree with this? To clarify, I don't think this guy's guilty of a felony-- to my knowledge computer fraud requires at least some malicious intent or damage. But you seriously think if somebody used CSRF to drain your bank balance that wouldn't count as hacking because it was your browser? That's absurd.
If that is considered 'without your consent', then so is every site that is embedding external plugins, images, and videos.
How are you supposed to 'give your consent'? Must you be given a list of all the content on the website before the browser will be allowed to display it?
When you requested the page, you gave your consent to load whatever was on that page. If you don't want that, then you should use wget instead of a browser.
Of course I don't know everything that's in cereal -- I trust the manufacturers to provide me with the product I paid for, whatever that involves. But I know for sure that that doesn't involve snakes, and had I had reason to believe that there might be a snake in there, I wouldn't have bought it. And the rational response to this is definitely not "If you didn't want snakes, you should have x-rayed the cereal before you bought it."
Arguing that my ignorance of something which I wouldn't have wanted had I been aware of it constitutes consent is severely shaky.
(Again, I'm not saying that logging me out of Google is like getting bit by a snake, and I do think it was a decently harmless demonstration of the issue with CSRF for anyone who was unaware. I'm just speaking hypothetically here.)
Imagine you are buying a 'random flavour' box of cereal, inside of which you may possibly find a snake flavoured cereal which does happen to contain a snake. Similarly, you don't know what you'll get on the Internet until you've received it, and you can't be sure that what you get will be safe.
Of course, you want the manufacturer to make sure there is no snake in your snake flavoured cereal. In this analogy, google is the manufacturer/snake owner. It is up to google to make sure their logout page can't be embedded in an image like in this blog.
But imagine trying to explain to a judge that the fact that I asked for the page means that it's okay that it did something that I didn't want to happen. She's not going to believe you, and she'll be right not to. That's all I'm saying.
FWIW, worked on Chrome / Mac OS X Snow Leopard.
To say otherwise is to say that there is some trivial policy, just an HTTP header away, that would allow IE, Firefox, and WebKit to coherently express cross-domain request policy for every conceivable application --- or to say that no FORM element on any website should be able to POST off-site (which, for the non-developers on HN, is an extremely common pattern).
There is a list (I am not particularly fond of it) managed by OWASP of the Top Ten vulnerabilities in application security. CSRF has been on it since at least 2007. For at least five years, the appsec community has been trying to educate application developers about CSRF.
Applications already have fine-grained controls for preventing CSRF. Homakov calls these controls "an ugly workaround". I can't argue about ugliness or elegance, but forgery tokens are fundamentally no less elegant than cryptographically secure cookies, which form the basis for virtually all application security on the entire Internet. The difference between browser-based CSRF protections (which don't exist) and token-based protections is the End to End Argument In System Design (also worth a Google). E2E suggests that when there are many options for implementing something, the best long-term solution is the one that pushes logic as far out to the edges as possible. Baking CSRF protection into the HTTP protocol is the opposite: it creates a "smart middleman" that will in the long term hamper security.
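To make the comparison with cryptographically secure cookies concrete: a token-based defense can be as small as an HMAC over the session identifier, requiring no server-side storage at all. A hypothetical sketch (the key and all names are illustrative, not any particular framework's API):

```python
import hmac
import hashlib

SECRET_KEY = b"illustrative-server-secret"  # hypothetical; load from config in practice

def csrf_token(session_id: str) -> str:
    """Derive a per-session token; only pages served to this session can read it."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf(session_id: str, submitted: str) -> bool:
    """Constant-time comparison, to avoid leaking the token byte by byte."""
    return hmac.compare_digest(csrf_token(session_id), submitted)
```

The server embeds `csrf_token(...)` in each form it renders and calls `verify_csrf(...)` on submission; a cross-site page cannot read the token, so it cannot forge a valid request. This keeps all the logic at the endpoints, per the end-to-end argument.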
This blog post seems to suppose that most readers aren't even familiar with CSRF. From the comments on this thread, he may be right! But he's naive if he thinks Google wasn't aware of the logout CSRF, since it's been discussed ad nauseam on the Internet since at least 2008 (as the top of the first search result for [Google logout CSRF] would tell you). Presumably, the reason this hasn't been addressed is that Google is willing to accept the extremely low impact of users having to re-enter their passwords to get to Google.
Incidentally, I am, like Egor, a fan of Rails. But to suggest that Rails is the most advanced framework with respect to CSRF is to betray a lack of attention to every other popular framework in the field. ASP.NET has protected against CSRF for as long as there's been a MAC'd VIEWSTATE. Struts has a token. The Zend PHP framework provides a form authentication system; check out Stefan Esser's secure PHP development deck on their site. Django, of course, provides CSRF protection as a middleware module.
I propose a new set of HTTP verbs, "SECPOST", "SECGET", etc that comes with the implication that it is never intended to be called by third-party sites or even navigated to from third-party sites. It is a resource that can only be called from the same origin. Application developers (and framework authors) could make sure to implement their destructive/sensitive APIs behind those verbs, and browser vendors could make sure to prevent any and all CSRF on that verb (including links and redirects).
First, every mainstream web framework already comes with a simple-to-use way to block forged requests. Even if we adopted new HTTP verbs to give them a common name in the protocol, by the time developers are making decisions they're not working with the ALL-CAPS-NAMES-OF-HTTP-VERBS anyways.
Second, there isn't anything inherently "cross-site" about CSRF, so denying off-site POSTs isn't a complete solution to the problem either. Every site that accepts any form of user-generated content must deal with intra-site request forgery as well.
So no, I don't think that's a great idea.
The things that are insecure here are serverside web applications. Changes to the HTTP protocol or to browsers are a red herring. There's no way around it: web developers have to figure out how to write secure code.
It troubles me deeply to have CSRF declared a purely server-side application problem. The browser is quite literally my agent in all of my interaction with the web. It is an extension of me, and when it does things that pretend that they are me, that feels very wrong. That is why I propose new HTTP verbs: my browser should know (and verify) that when it sends out a SEC* request, that my eyeballs are on that data and my finger physically clicked that button, and it can do this if those requests are, essentially, tagged as particularly sensitive.
To place the onus solely on the server side is for me to abrogate my responsibility to fully control my browser-as-agent. Frankly, even if the server successfully rejects forged attacks, it is not acceptable that my browser, acting as my trusted agent, attempted that attack in the first place.
At any rate: there isn't going to be SECGET and SECPOST, so the academic argument over whether end-to-end is better than Apple, Mozilla, Google and Microsoft deciding amongst themselves how security is going to work for every web application is moot.
While the vast majority of resource requests (both primary and secondary) are beneficial, some are not. The browser currently does not have enough information to make this distinction. New HTTP verbs would give the browser enough information to refuse to directly load damaging resources.
Serverside request forgery tokens don't rely on browser behavior to function. They provide a much simpler and more direct security model: to POST/PUT/DELETE to an endpoint, you must at least be able to read the contents of the token. This meshes with same-origin security.
Personally, I don't believe either of those things. Server authors should certainly take point on battling CSRF. But there is an important client-side piece to the puzzle that cannot be ignored. If users cannot even prevent their own browsers from attempting malicious actions on their behalf, then there is something critically wrong with browsers.
"You're right in principle, but you're right for the same reason and to the same limited extent as if you said "people have a responsibility to be aware of the locks on their front door and windows and to use them". Which is that you omit the other side of the social contract: we all have an obligation not to exploit our neighbors' negligence if they leave their door unlocked by burgling them."
What you've tried to argue here is that we should add new HTTP verbs to express "this endpoint shouldn't allow cross-domain requests". Or, more generally, that we should add HTTP features to allow browsers to prevent CSRF attacks.
But CSRF isn't a browser security problem. It isn't even necessarily a cross-domain problem! (CSRF is in that respect misnamed.) The specific changes you've suggested would drastically change the HTTP protocol but couldn't even theoretically solve the request forgery problem, not just because of intra-site CSRF but because your suggested locked-down HTTP requests would also break otherwise viable apps --- meaning that many apps couldn't use these features even if they wanted to, and would have to rely on something else for CSRF protection.
The fact is that having browsers police request validity just doesn't make sense. Even if they could do that, they still obviously have to rely on serverside signals to determine whether a request is or isn't valid. If the serverside knows a request isn't valid, it already has the means to block it! Why on earth would the server punt this to the browser?
Your suggestions give the impression that you're not familiar with how CSRF protection works in apps today. In many modern frameworks it is almost a one-liner.
Your argument is equivalent to saying that websites should protect themselves from DDoS attacks - and that users should simply accept that their machines will be hacked and will become part of a botnet (or several botnets) at some point in time. In other words, DDoS is a server-side problem, not a client problem. Whereas I (and I think that most people) believe that it is our responsibility to use our computing resources responsibly, and work hard to avoid being included in a botnet.
You seem like a smart person, and I'm sure you have something to contribute to the client-side of this issue, but that won't happen until you are convinced that there is a client-side problem.
In any event, somewhat selfishly I suppose, I've found this discussion quite useful in clarifying my own views on the matter. So, thank you for violently disagreeing with me. :)
You keep saying CSRF is a "client-side problem", but you haven't explained why you think that, other than that it's a problem that is occurring in a client-server system so somehow the client must be involved. That's flimsy logic.
Forgery is like DDoS in that they both use the unwitting (and unwilling) compute resources of an intermediate victim to mount the attack. The unit of distribution of the DDoS case is a binary rootkit (for example) and the unit of distribution for a forgery attack is a web page.
The impact of successful DDoS and CSRF attacks are very different, of course, but the mechanism used to carry them out is very similar. In particular, they both differ from an ordinary hacker-to-target penetration, DoS, forgery etc. attack.
In an honest, respectful discussion that would occasion a response along the lines of either: "Ah, I didn't think about it like that. Let me see about adjusting the line of my reasoning," or, "No, your correction is invalid because..."
If a form is served from domain A (via GET) in to an iframe on a page that was served from domain B, then the JS on the page from domain B is prevented from reading or writing data on the page from domain A (unless an x-domain policy is in place) though it may be able to post it.
>Baking CSRF protection into the HTTP protocol is the opposite: it creates a "smart middleman" that will in the long term hamper security.
Surely, I don't mean "Stop securing your apps from CSRF, it's not your problem". I just want to make browsers think about the issue as millions of developers have to. Because it is their issue, they are in charge. But we are fixing it on the backend (and we will have to for the next 10 years, definitely)
Now, you could argue the browser's job should be to implement security features as well. It does, after all, implement the same-origin policy. But, if you think about it, there is no good way for the browser to fix the CSRF issue. You can ask the user, which is what's suggested, but that never really works. They'll do one of two things: click "okay" every single time, or stop using your browser.
I would guess well over half of all websites do one of the following: (1) load an external JS file, (2) load an external image, (3) load an external CSS file, (4) use an iframe which points to a different origin, (5) use a JS redirect, (6) use a meta redirect, or (7) open a new window.
The proposed "solution" to CSRF stops ALL of these use cases. The user would have to manually approve each and every one of them. Given that well under 1% of alerts would be true attacks, the user would almost definitely "okay" on the attacks as well: they would have been trained by thousands of other alerts that this is an acceptable thing to do.
There was a paper by Barth and Jackson on CSRF defenses where they propose an Origin header, but that's the extent to which security is implemented in the browser. It is fundamentally up to the web application for verifying the user did in fact initiate the request. No amount of code in the web browser can get around this fact.
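For reference, a server-side check along the lines of the Barth and Jackson Origin proposal might look like the sketch below (the policy values are illustrative, and the fallback for absent headers is exactly why this can only supplement, not replace, request tokens):

```python
from urllib.parse import urlsplit

ALLOWED_ORIGIN = "https://example.com"  # illustrative: the site's own origin

def request_allowed(method, headers):
    """Reject state-changing requests whose Origin header is present and foreign.

    Requests without an Origin header are tolerated here (older browsers
    don't send one), which is why token checks are still needed as well.
    """
    if method in ("GET", "HEAD", "OPTIONS"):
        return True
    origin = headers.get("Origin")
    if origin is None:
        return True  # legacy browser; fall back to token-based checks
    scheme, netloc = urlsplit(origin)[:2]
    return f"{scheme}://{netloc}" == ALLOWED_ORIGIN
```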
You're definitely kidding me. Please point out where in my post I said to deny ALL requests. I was talking ONLY about POST requests. Probably I forgot to add it :) So, I'm talking only about form submissions, and GET is fine, sure.
Web applications make state-changing operations on GET requests. You might not like it, but they do.
 <img src="https://mail.google.com/mail/u/0/?logout" style="display: none;" />
But when a developer makes a mistake with GET, it is 100% his problem - that's beyond question. He should be punished :D
His model is wrong. Again: I assume he wants to know that, so, bluntness.
Then if 3rd party sites wanted to still use form submissions, they could use an auth token in the form (though I'm unsure why they would do this instead of using JSONP).
So, I'm talking only about form submissions, and GET is fine, sure.
Google's logout CSRF works because the logout link is a GET request. So, no, there is no quick fix.
A simple cross-site request is one that:
- Only uses GET or POST. If POST is used to send data to the server, the Content-Type of the data sent to the server with the HTTP POST request is one of application/x-www-form-urlencoded, multipart/form-data, or text/plain.
- Does not set custom headers with the HTTP Request (such as X-Modified, etc.)
This is actually a big deal, since it means you can send a cross-domain multipart POST with no preflight. That allows for an effective CSRF attack against file upload systems.
And of course, cross-domain POST requests via <form> tags have always worked and will continue to work.
Let's say you're logged into Gmail and Gmail had no CSRF protection anywhere.
This will not work even without CSRF protection. It would only work if Google sends back the header
as noted in the section you linked to.
Of course, I could also try to trick you into filling out a form whose action actually points at Gmail and that includes all the hidden input tags to set you up for forwarding emails to me, but you would know something fishy is going on because it would redirect you to Gmail.
It actually will work.
What you're describing is what's known as a "simple" request in XMLHttpRequest terms. That means there is no pre-flight necessary. Your browser will simply make the POST as requested and receive the response. It won't make the response available to you since the Access-Control-Allow-Origin header isn't set, but you're a malicious attacker in this example and you don't care what the response is: you just care that you were able to make the request. ;-)
If a pre-flight were necessary you would be right. The browser would send an OPTIONS request to the server, the server would respond without the appropriate headers, and the POST request would never be sent.
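The distinction can be summarized mechanically. Here's a rough classifier following the "simple request" rules quoted above (simplified; it ignores some of the spec's header fine print, and the names are my own):

```python
SIMPLE_METHODS = {"GET", "HEAD", "POST"}
SIMPLE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}
# Headers the browser may set on any request without triggering a preflight.
SAFE_HEADERS = {"accept", "accept-language", "content-language", "content-type"}

def needs_preflight(method, headers):
    """True if a cross-origin XHR would send an OPTIONS preflight first."""
    if method not in SIMPLE_METHODS:
        return True
    if any(name.lower() not in SAFE_HEADERS for name in headers):
        return True  # custom headers such as X-Modified force a preflight
    content_type = headers.get("Content-Type", "").split(";")[0].strip()
    if content_type and content_type not in SIMPLE_CONTENT_TYPES:
        return True
    return False
```

Anything that comes out "simple" here is exactly the set of requests an attacker can already fire via a plain `<form>`, which is why CORS allows them without asking the server first.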
Let me know if any of this needs further explanation!
Who the hell thought it was a good idea to allow cross-domain XMLHttpRequests? Given that the standard says POST is for modification, no other website should ever make those requests.
And the whole point of CORS is that some websites do want to make those requests. ;-)
This is handwaving. You were wrong about this. I assume you want to know that, so I'm saying it bluntly.
"I just want to make browsers think about the issue as millions of developers have to. Because it is their issue, they are in charge."
No, the web browsers are not in charge. The secrets and sensitive actions are occurring on the servers, not in the browsers. The servers are what matter. The browser isn't protecting your email. The server is. The browser isn't protecting your bank account. The server is. The browser isn't controlling who is or isn't your Facebook friend. The server is.
What about the X-Frame-Options and Origin headers? They are browser-enforced mechanisms driven by server-side hints, right?
(not for the classic POST case though...)
Read his comment. It's great.
I think this is a very important line. The sense I get around most of my colleagues is that CSRF exploits are only something "bad programmers" get wrong. Of course, they're all rockstars who've never been exploited (yet/AFATK) so it's not like they need to spend a weekend or five paging through droll security papers. A little modesty would do us all well.
90% of developers just don't care and don't spend time on that.
Indeed. It takes time to learn, time to code, and unless you're working at a big shop, there's little pressure (or even acknowledgement of the need) to get this stuff right.
Keep up the good work OP.
Here's my take away from every CSRF article:
A malicious site will load your site in an iframe, fill in your form and post it. Fixing it requires a token in your form, but I can see you don't understand how an extra hidden field in your form will make a difference, so you're clearly not going to handle it correctly. You're screwed. Go home.
Pages making GET requests across domains is so common and necessary that several technology standards would have to come together to propose a real fix. Every image or script loaded from a CDN. Anyone hosting their own static assets domains. Anyone using a plugin from Google, Facebook, Twitter, Disqus uses this ability.
The tech companies can't even easily create a system to whitelist sites allowed to embed them, because that would severely limit third party's ability to use their services freely and would introduce a huge performance bottleneck.
I haven't seen any particularly compelling solution to solving this. Things only guarded behind a GET request can be loaded by script, link, embed, object, img and iframe tags, and all of those have legitimate reasons for loading resources cross domain without requesting permissions for each one from the user.
What I don't get is how arbitrary cross-site POSTs with malicious values are allowed. As far as I can tell, anyone can post this form:
<form action="http://bank.com/send_money"><input name="to_account" value="SCAMMER-1234"></form>
Worse, one article will tell you to only allow Referer == "bank.com", and then another will tell you that even that is no longer enough?!
Why can't we change the browser or the web server layer to prevent this by default?!
So, in the context of this discussion, why don't the browsers make X-Frame-Options: DENY the default behavior?
It's user input. Don't trust user input.
A browser cannot do this. The OP probably saw some exploit code in an ad which was served in an iframe, but the same-domain security model will not allow you to interact with another window or iframe that is of a different domain.
Blocking resources loaded from separate domains breaks a lot of sites today. Few popular sites keep everything under the same domain (CDNs, comment systems, captchas, and Facebook/Google/Twitter resources, for example). http://www.memebase.com is probably the worst "offender" I've come across. Hacker News isn't one of them, which I'm happy to see.
Although if this was implemented I could see a lot of sites moving quickly to remedy this, reducing the alerts. It'd still be a pretty hard transition-period, though.
Want to see how much would break today (and if the fix would work for the average user)? Try: https://www.requestpolicy.com
That was my interpretation as well and I reached the same conclusion. Having the average user make application-level security decisions is a very bad idea.
RequestPolicy is a wonderful extension and I think its use should be encouraged. But the average user does not understand enough about an application and how it interacts with third-party websites to make informed decisions about whether a particular interaction is good or not. False positives (where the user flags a good interaction) will lead to loss of functionality while false negatives (where the user fails to flag a bad interaction) will lead to security vulnerabilities that website owners can't prevent.
And I am somewhat at a loss as to how the cross-domain part of what you are saying relates to cross-domain tracking.
Other than that, it should be hammered into developers' heads that GET should not have side effects.
No, it cannot. I've seen this assertion pop up a few times on HN this past week: it's conflating two similar but very different attacks in a dangerous way.
X-Frame-Options prevents a type of attack known as clickjacking. It is similar to a CSRF, but it involves creating an iframe to a page with a form and convincing the targeted user to submit it. It provides this protection at the browser level: if an HTTP response contains the X-Frame-Options header and the requesting page violates that directive, the response is not rendered.
It does not prevent a CSRF attack. It's impossible for it to do so: once the malicious request had made it to the server and the server has sent back a response, the attack is already complete. There's nothing the browser can do to prevent it at that point. If you use nothing but X-Frame-Options to try and prevent CSRF, you'll have a site completely vulnerable to CSRF.
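For completeness, here's how small the clickjacking side of the defense is: a sketch of a WSGI middleware that stamps the header onto every response (the function names are illustrative). Note again that this protects against framing only, not CSRF; the forged request still reaches the server.

```python
def deny_framing(app):
    """WSGI middleware: add X-Frame-Options: DENY to every response.

    The browser refuses to render the response inside a frame, which
    blocks clickjacking. It does nothing about CSRF, because by the time
    the header arrives, the forged request has already been processed.
    """
    def wrapped(environ, start_response):
        def start(status, headers, exc_info=None):
            headers = list(headers) + [("X-Frame-Options", "DENY")]
            return start_response(status, headers, exc_info)
        return app(environ, start)
    return wrapped
```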
Normally CSRFs are automatic, either in the form of an image (<img src="https://...?logout" />) or an iframe src attribute. So, if you included the above image tag on your page, then it would be a CSRF, sometimes also called a Confused Deputy Attack.
Facebook uses http://facebook.com/logout.php to log you out, but clicking that link won't do it.
I found an XSS vulnerability in a website that can be used to cause noticeable problems (enough that fixing it should be a priority), so I contacted the developers behind the site and informed them what caused it, how to fix it, an example of it in practice, and why it's bad: they've done nothing in over a month. What do I do?
I guess the answer is "forget it", but I feel like if I don't do anything someone malicious will discover the issue and cause harm to users of the website...
They certainly will. Usually responsible disclosure is defined as some form of contacting the party involved, working out some window of time that you both agree on during which they can fix the bug (~30 days say), then disclosing details of the vulnerability. This is like a very polite and necessary threat.
If you care I would contact them again and let them know you plan to make the vulnerability public, and ask how much time they need to fix it.
It's persistent if it can be saved in a comment or on a profile, etc, and is much more dangerous if so. Non-persistent XSS realistically isn't too big a deal, most sites are vulnerable and it's usually only a problem if you're a big website and therefore vulnerable to phishing attacks.
Imagine if I could make you the author of this comment, it's like that.
We'd have preferred a more responsible disclosure, and I hope he (and others) are more careful about this in the future. Most reporters we get act very responsibly, and we are always gracious (and even contract work from them in some cases). In his case, we saw activity that he didn't report to us, and suspended his account while we did a deeper investigation.
The Rails community and we still think that his proposed solution is not a good idea, but it did provoke exploration in some other ideas.
Anyways, you misread him. All he's saying is that the delay Egor Homakov experienced was with the Rails dev team, not Github. Github's response to Homakov's finding was very fast.
If you find a major hole in a part of Git, you are by no means obligated to tell GitHub. You are, however, legally obligated not to compromise their site using that hole.
Or, a better example: you can talk about XSS mitigation strategies all you want. You can't go around looking for XSS vulnerabilities on random websites and then exploiting them.
> You can't go around looking for XSS vulnerabilities on random websites and then exploiting them.
exploit: to use a situation so that you get benefit from it, even if it is wrong or unfair to do this; to utilize, especially for profit; etc
He used the exploit publicly and loudly (full disclosure to almost all affected parties) by doing a relatively harmless change to _rails_ _master_ on github.
If his actions should be called an attack, then it was highly targeted - at the people who could fix it - to get their attention.
Security vulnerabilities aren't the sort of things that go away just because you don't know they're there.
I won't try to impute motives but I think he did it this way because he felt like he was being treated poorly.
Egor found 2, and got ignored by the Rails team. His frustration led him to publicly demonstrating 1, which caused a whole lot of people a whole lot of trouble.
The people that are irritated at him are irritated at him because of 1, not 2.
It's like the difference between the class of buffer overflow exploits and the buffer overflow exploit in a particular piece of software.
There's a significant difference between the two.
In theory, no normal user will ever fail CSRF checks. In practice, tons of people have complained that they see Django's (very confusing) CSRF error page when they try to sign up for my service.
This was surprising to me; I thought we were _way_ past this point. Digging into it, I've learned that tons of people use extensions that muck about with cookies in ways that break Django's CSRF feature. I don't really know a way around it.
How common is this, in your experience?
As far as a solution for your users, I'd just let them know that you require cookies to login (obviously) and if you are posting over https make sure they have the Referer header which can be forged to just be the domain and not the entire URL if they prefer. I use https://addons.mozilla.org/en-US/firefox/addon/refcontrol/ set to forge for django sites.
Also: that's a good link. Thanks.
In practice, lots of my potential users don't even understand that their AdBlock/whatever extensions are mucking about with Cookies in ways that break things. It's a tough sell to tell someone who is thinking about trying your service: "sorry, I don't work with your browser the way it is" when so much of the rest of the world is either HTTP, not HTTPS, or simply has decided to punt on CSRF or be much more selective about it. It looks to them like _I'm_ the one that's broken.
Argh. It's no-win.
See http://blog.kotowicz.net/2011/10/stripping-referrer-for-fun-... for examples
Update: I noticed that later in the thread you mention that you already provide a custom error page. I'll leave this for others who might not be familiar with custom CSRF error pages.
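For anyone following along in Django, replacing the default error page is a small amount of configuration. A hedged sketch using Django's documented `CSRF_FAILURE_VIEW` setting ("myapp" and the template name are illustrative):

```python
# settings.py (Django) -- dotted path to a replacement failure view
CSRF_FAILURE_VIEW = "myapp.views.csrf_failure"  # "myapp" is a placeholder

# myapp/views.py
from django.shortcuts import render

def csrf_failure(request, reason=""):
    # Friendlier than Django's default 403 page: explain that cookie-blocking
    # extensions or disabled cookies are the most common cause.
    return render(request, "csrf_failure.html", {"reason": reason}, status=403)
```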
The issue is the "require that the browser includes it" part, since the information in the token must be available to the server too, and Django apparently puts that in a cookie (Rails does too, if the session is stored in the cookie).
So I believe you have suggested a fix for the bit that works already, but not for grandparent's actual problem :)
1) previous authentication to a service.
2) service which supports destructive actions as guessable URLs.
3) "third-party cookie" support in the user agent.
4) a third-party page able to induce the user agent to issue requests to the service.
Sadly, the first three criteria are widely met. If we are to systematically remove this threat, then we have to look at removing each in turn:
1) Previous authentication to a service can be mitigated by simply logging out when you are done, but this is inconvenient and requires manual user intervention. However, there is an interesting possibility to limit "important" services to a separate process - a browser, an "incognito" window, etc.
2) Services should be built with an unguessable component that is provided just prior to calling by a known-good API, probably with additional referrer verification.
3) It is my belief that disabling third-party cookies is the right solution here: users rarely, if ever, get value from third-party cookies. Denying them would allow API authors to write simpler APIs that do not have a secret component, and would allow users to maintain the same behavior and login to all their services from the same browser.
4) While it seems that little can be done on this front apart from releasing some chemical agent into the atmosphere that made people trustworthy and good, actually it may be possible for browser makers to do some simple analysis of resource URLs to detect possible hanky-panky.
1) Rename "Third-party cookies" to "Evil Cookies" and lobby all browser vendors to disable them in all circumstances. They are enabled by default(!) in the major browsers presumably to placate advertising networks.
2) Introduce a new HTTP verb, SECURE, which browsers will not send to a third-party website under any circumstance, including navigation events. That would make requirement number four impossible to satisfy (even for links and redirects).
But this does imply that the final onus is on the programmers of services to design services that do not have guessable, destructive one-step inputs.
[Disclaimer: I work at Google, but not on any area related to this]
One reason I've also heard cited is that you always want the logout links in your application to work: you want users to be able to terminate their sessions quickly and easily. If you have a CSRF token tied to your user's session and that user happens to click on an old logout link (maybe they had an old tab open or something), the user won't be logged out of the application.
What would prevent an attacker from opening the original site's page in an iframe and then having a script fill in and submit the form on it? In other words, say I am logged in to my bank's site. I then open a malicious page that has an iframe pointing at http://bank/operations/move-funds, which contains a fund transfer form. Wouldn't this page include a correct CSRFToken, making the form readily submittable by a malicious script?
<img src="https://mail.google.com/mail/u/0/?logout" style="display: none;">
I don't see a way browsers could effectively enable CSRF protections. How is it supposed to know you don't want to request that page as an image? What about sites linking to images on other domains? CDNs would be blocked, because how is Chrome supposed to know you actually wanted to load the image from fbcdn.net or s3.amazonaws.com?
The browser isn't just doing what it's "supposed to be doing" (always a flimsy argument in favor of the status quo, I agree!) but also all it can do, since only the server has the information needed to judge how sensitive a request is.
It's true that servers & browsers work together to create a semblance of a security model for the web. But the bulk of the job belongs to the server; there are hundreds of thousands of different applications each with different needs. And the servers have a means of enforcing controls flexibly: by authenticating requests.
The browser isn't protecting your email. The server is. The browser isn't protecting your bank account. The server is. The browser isn't protecting your HN karma. The server is. The browser isn't protecting your code repository. The server is. No simple HTTP standard will cover all these cases, and so it's silly to suggest that HTTP is where this security control should be expressed.
I just read up on CSRF and its mitigation with synchronized tokens on  and there's one thing I don't seem to understand. What prevents an attacker from opening the original site's page in an iframe and then having a script fill in and submit the form on it? In other words, say I am logged in to my bank's site. I then open a malicious page that has an iframe pointing at http://bank/move-funds, which contains a fund transfer form. Wouldn't this page include a correct CSRFToken, making the form readily submittable by a malicious script?
Can anyone comment? It damn sure looks like a big gaping hole that is virtually impossible to plug.
CSRF tokens are a well-understood solution to this issue. In order to submit a valid request, you must include what is essentially a secret token that is on the page (although the secret token can just be your session ID). For an attacker to get that token, they would need to be able to do at least one of the following:
A. Guess it, by having you make multiple requests. (so you make the token long enough that it's infeasible to guess)
B. Be able to read it by intercepting the HTTP response or reading it in some way, in which case you have much larger security issues.
C. Be able to read the token in the HTTP request that the browser makes. Again, if an attacker can do this, your session is already compromised.
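The synchronizer-token idea described above can be sketched in a few lines. This is a minimal illustration, not any particular framework's implementation; the function names and the choice of an HMAC over the session ID are assumptions for the example.

```python
import hashlib
import hmac
import secrets

# Server-side secret; in practice this would be persistent configuration.
SECRET_KEY = secrets.token_bytes(32)

def make_csrf_token(session_id: str) -> str:
    # Derive the token from the session so a token captured from one
    # session is useless in another.
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def check_csrf_token(session_id: str, submitted: str) -> bool:
    expected = make_csrf_token(session_id)
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(expected, submitted)
```

The server embeds `make_csrf_token(session_id)` in a hidden form field when rendering the page, and calls `check_csrf_token` on every state-changing request; a cross-site attacker can cause the browser to submit the form, but (per points B and C above) cannot read the token to include it.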
Now, let's say my script is not loading the bank's page into an iframe, but rather fetches it with an ajax call. Wouldn't that page (again) include a valid CSRF token? Or is this mitigated by checking the Referer on the bank's side?
<img style="display: none;" src="https://mail.google.com/mail/u/0/?logout">
Most sites where this could do real damage (and have real gains for the attacker), banks etc are going to be well protected.
You could use it to comment spam a blog but that's going to be a crapshoot. Guessing which blog people are logged into etc, you would need very targeted attacks.
Sure, signing out of Google is annoying, but if you have LastPass or similar, signing back in is pretty frictionless.
You think so. In "the wild" even serious systems are vulnerable #OpApril1
In order for requestpolicy to block this it needs to be in a fairly locked down state too...
RESTful services are as vulnerable to CSRF as anything else. See  for more information (and I'm really sad that there's no second post, like mentioned). However, since RESTful services imply no state on the server (i.e. no token), the question is, how do you prevent CSRF attacks?
One really simple method is to deny all requests (on the server) with the application/x-www-form-urlencoded content type, and deny all multipart/form-data requests that include non-file parameters, which are the only two content types that can be sent from an HTML form. For your application, XMLHttpRequest can change the content type, and isn't affected by CSRF.
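The content-type rule above can be sketched as a framework-agnostic check. The function name is illustrative, and the refinement mentioned in the comment (allowing multipart requests whose parameters are all files) is omitted here for brevity.

```python
# The only two content types a plain HTML <form> can produce.
FORM_TYPES = {"application/x-www-form-urlencoded", "multipart/form-data"}

def should_block_as_csrf(method: str, content_type: str) -> bool:
    """Return True if a state-changing request should be rejected.

    A cross-site <form> submission is limited to the two content types
    above; your own pages can use XMLHttpRequest with e.g.
    application/json, which a forged form cannot send.
    """
    if method.upper() not in {"POST", "PUT", "DELETE", "PATCH"}:
        return False  # reads are covered by other defenses
    base = content_type.split(";", 1)[0].strip().lower()
    return base in FORM_TYPES
```

A server applying this check accepts JSON-bodied requests from its own JavaScript while rejecting anything a cross-site form could have generated.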
EDIT: Also, sort-of-related: I recommend you set the X-Frame-Options header too, in order to prevent clickjacking. Info at .
Earlier commenters have noted that each request back to the server should include an unguessable token that cannot be derived by mining other pages on the site with cross-site AJAX requests.
My hypothetical solution is to embed that token in the prefix to the hostname after logging into the given site. The token would then be sent in the Host: header for all dynamic requests.
Step 1: You log in to www.somesite.kom.
Step 2: You are then forwarded to dynXXXXXXXX.somesite.kom where XXXXXXXX represents a unique, dynamically-generated token tied to your session.
The attacker must now know XXXXXXXX to properly form up a GET or POST request to attack your account.
The site itself could then use relative URL's for dynamic content or could use the appropriate templating system to ensure that any dynamic URL's ( either in HTML markup or script text ) contain the generated hostname.
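The hostname scheme above might be generated roughly like this. This is a sketch of the hypothetical proposal only; it assumes wildcard DNS (*.somesite.kom) and a wildcard TLS certificate, and the function name is invented for illustration.

```python
import secrets

def session_hostname(base_domain: str) -> str:
    """Build the per-session dynamic hostname described above.

    The random component ends up in the Host: header of every
    subsequent request, so an attacker forging a cross-site request
    must guess it to reach the right virtual host.
    """
    token = secrets.token_hex(8)  # the XXXXXXXX component
    return f"dyn{token}.{base_domain}"
```

After login, the server would redirect the user to `session_hostname("somesite.kom")` and associate that hostname with the session, rejecting dynamic requests addressed to any other host.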
Furthermore, the players funding browser development all share strongly in that vested interest. (Even for Firefox, follow the money - and if Firefox did try to lock down without industry agreement, it would lose, which Mozilla knows).
So you will not see any change.
This also explains the degree of heat directed at the suggestion that client behavior could be less insecure by default, with regard to third party requests.
This is not new. Much of HTTP as originally conceived actually dictated a great deal more user control over what happened. Those standards had to be compromised from the word go in order to reach the present state.
(sorry for being oblique, but I have no way to contact you privately and ask you more directly!)
Still, using <form> buttons for logging out, consistently across the entire web, would take some effort. CSRF tokens are probably less intrusive.
2. Use XMLHttpRequest to make a cross-domain POST.
The request would go against the same-origin policy, at which point CORS comes into play.
Plus it has nothing to do with using JS to submit a form.
The easiest first-step solution is just to check the HTTP Referer field and verify that it matches your domain.
Yes, this is easily faked by someone crafting their own HTTP requests. This is /not/ easily faked by someone causing your browser to make requests, though. And it provides very good coverage against the attack here.
for eg. if the URL of the attack request is (both of these are close to real life examples that work(ed))
usually the site will have a redirector on the login action, which takes the user back to the page that they were on after login, so you just use the attack URL as the redir URL
amazing how many login scripts still do the redirect even if the user is already logged in, or still do the redirect even if there is no real login
I guess the real conclusion is that these kinds of attacks are complicated and better understood fully than reduced to a single short solution, because the next response is always "but, if you do that, then..." and so forth, like a matryoshka doll.
The "I can log you out of Facebook/etc" trick is one of those techniques that script-kiddies love using on forums.
- some users would click allow anyway, so it doesn't completely solve the problem
- what about apps built using CORS.. etc
It's not up to browsers to prevent this. Just as you can't rely on client-side data validation, you must always take proper precautions on the server. Browsers taking additional precautions to prevent this would be nice, but it's not the whole solution and never will be.
Edit: If you're going to downvote this please leave a reply stating why. I don't understand an opposing point of view unless you use 3rd party cookies to track people across domains.
<form name="evil" method="POST" action="https://mail.google.com/mail/u/0/?logout">
For completeness, the Rails security guide covers these security holes
Still baffles me that I can't allow a script at one domain to automate something on another domain for me.
A lot of people are watching you from your blog posts and some of these watchers would pay you good money to do a security audit.
I don't know the breadth of your expertise but I would reach out to some well respected security consulting firm and use your blog to demonstrate your interest/passion for web security. This might be a great way to broaden your expertise.
If you contacted 10 security firms, I'm sure at least one would hire you and cover VISA issues if you plan on leaving the country.
You used to be a "nobody" -- just some unknown developer whose English communication skills are a bit weak and who was likely to get ignored. That is no longer true. You are now "famous" in security circles, and if you approach people in a professional manner then I am confident that you will be heard.
You can sell security vulnerabilities to a variety of parties. If you want introductions, email me.
Some people view this as "wrong" in some ethical way, but meh. Money is good -- it can be exchanged for valuable goods and services. There have been a lot of arguments for "responsible disclosure", "anti-sec", "full disclosure", etc. over the years.
I'd draw the line at blackhatting yourself with the vulnerability, but just selling the info is legal. Generally, security companies are buyers, and their clients tend to be governments, generally western (USA).
Money is not "good". Money is "necessary" in our society because people are greedy bullies.
I don't see a huge moral difference between smart hacker with $0 (publishing 0-day for the lulz) and smart hacker with $250k (selling vuln to a defense contractor).