Firefox has an open bug: https://bugzilla.mozilla.org/show_bug.cgi?id=795346
Microsoft does, too.
And so does WebKit.
WebKit allows voting, too, by filing duplicate issues in Apple's private "radar" issue tracker.
Which is to say, the WebKit bug is already filed in Radar as rdar://problem/27196358.
Apple has said publicly that if you want to "vote" for a given Radar issue, you should file duplicates for that Radar. (I find that weird, but that's the way they do it.) To do that, go here: https://bugreport.apple.com/
You can copy and paste the data from OpenRadar, a community tool where people share Radar issues that they want people to be able to search for and/or duplicate. https://openradar.appspot.com/radar?id=4963174633701376
Be sure to mention in the bug description that you're filing a duplicate of rdar://problem/27196358.
EDIT: And while you're in there voting for browser security features, consider voting for Subresource Integrity on Apple WebKit and Microsoft Edge.
Given that Radar is a private store with a public write-only channel (bug report submissions), the only way non-Apple employees can vote for something is to describe it again themselves and have all the duplicates merged on the Apple-private side.
I'm not saying that Radar being private isn't itself kind of weird, but the submission policy necessarily follows from that.
"most browsers" would become "the majority of browsers". If there were 5 browsers, "the majority of 5 browsers" would be "3 browsers or more".
It doesn't make any difference which ones are most used; the sentence (apparently now changed or removed) wasn't "most of the requests will be protected", or even "most users will be protected".
In Germany, nope: https://clicky.com/marketshare/de/web-browsers/
Germans sure love their Firefox. Now, that may not matter for you depending on what market you're targeting, but not everyone is as lucky.
An opt-in solution is not a solution. But it's still useful.
It is trivial to add, as long as you remember to do it.
Actually, it's silly that nginx and Apache don't serve those by default.
Yeah, the headline is certainly exaggerating quite a bit.
He also mentions checking the origin/referrer header. I would strongly recommend against this strategy; as he says, it doesn't work everywhere. Specifically, regular form submissions will not include the origin header in most browsers, and the referrer header is simply not reliable.
More importantly, using multiple strategies for CSRF protection is bad. You need to fall back on tokens anyway, so the "check origin first" method is basically just an extra bypass for attackers to abuse. Two checks in this case are significantly worse than one, because if either is broken you are insecure.
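For what it's worth, the single token check isn't much code on its own. A minimal sketch in TypeScript (the session shape and helper names are made up, not from any particular framework):

```ts
import { randomBytes, timingSafeEqual } from "crypto";

// Hypothetical session shape; in practice this lives in your session store.
interface Session { csrfToken?: string }

// Generate a fresh token when rendering the form and stash it in the session.
function issueCsrfToken(session: Session): string {
  session.csrfToken = randomBytes(32).toString("hex");
  return session.csrfToken;
}

// The one and only check on state-changing requests: compare the submitted
// token against the session copy, in constant time.
function isValidCsrfToken(session: Session, submitted?: string): boolean {
  if (!session.csrfToken || !submitted) return false;
  const expected = Buffer.from(session.csrfToken, "hex");
  const actual = Buffer.from(submitted, "hex");
  return expected.length === actual.length && timingSafeEqual(expected, actual);
}
```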
So CSRF is not dead after all.
It makes a case for companies to use WebSocket-based APIs.
Also, localStorage works better on mobile when using frameworks like Cordova, React Native, PhoneGap and others. That is because your .html files usually sit on the mobile device itself, so the local origin of the file won't match the one where your REST API/backend is hosted, and cookies won't be sent and don't work.
The only advantage of the cookie (with httpOnly) in this scenario is that malicious code can't access your session ID and use it later (but it can still hijack the session in place without knowing what your session ID is)...
Since sessions expire anyway, there is a sense of urgency; because of this, an effective XSS attack would typically be carried out in-place on the page (while the session is active). So in practice, there is very little added security value in the cookie approach.
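A rough sketch of how the localStorage approach described above usually looks on the WebView side (the host, endpoints and bearer-token scheme are assumptions for illustration, not any specific backend's API):

```ts
// Runs in the browser/WebView, where the page may be served from file://.
async function login(username: string, password: string): Promise<void> {
  const res = await fetch("https://api.example.com/login", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username, password }),
  });
  const { token } = await res.json();
  // Stored in localStorage because cookies won't flow from a file:// page
  // to the API's origin anyway.
  localStorage.setItem("sessionToken", token);
}

async function apiGet(path: string): Promise<unknown> {
  // The token is attached explicitly on every call; nothing is sent
  // automatically, which incidentally also sidesteps classic CSRF.
  const token = localStorage.getItem("sessionToken") ?? "";
  const res = await fetch(`https://api.example.com${path}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return res.json();
}
```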
In my opinion, XSS mitigation is the last barrier of defence.
Sessions usually implement `touch` functionality, which extends the session every time a request is made with it.
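Roughly like this, as a minimal sketch with an in-memory store and a made-up idle timeout:

```ts
interface Session { id: string; expiresAt: number }

const SESSION_TTL_MS = 30 * 60 * 1000; // 30 minutes of inactivity
const sessions = new Map<string, Session>();

function touchSession(id: string): Session | undefined {
  const session = sessions.get(id);
  if (!session || session.expiresAt < Date.now()) return undefined; // expired
  session.expiresAt = Date.now() + SESSION_TTL_MS; // extended on every request
  return session;
}
```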
The initial RFC draft was submitted in April 2015. It was updated multiple times but eventually expired in December 2016, unfortunately.
Luckily, that draft has been revived this month and updated again today, so that's probably the only news.
For people who need immediate support in PHP, I published a small library last June.
Still, the major problem today is the lack of browser support. Chrome (desktop and Android), Opera (desktop and Android) and the Android WebView have long supported this attribute, but Mozilla, Apple and Microsoft have not shipped it (yet).
Doesn't seem to actually solve the real security problem with the web, which is that people don't know how it works, and that there is no working security model that matches the physical security people do understand.
Once browser support gets a bit better, it would seem like a good idea to start making use of it on that basis, but I don't think I'd ever rely on it as the only form of CSRF protection on a site...
I can see that in certain cases it's a good idea to be strict (e.g. banking, e-commerce), but those are a relatively small percentage of the web. And the Lax policy doesn't solve the issue because, well, someone could always screw up and forget to reject GET submissions on an endpoint that should be POST-only.
Don't get me wrong, it is a useful and powerful tool. What I'm simply saying is that CSRF is paraphrasing Mark Twain right now: "The reports of my death are greatly exaggerated".
P.s: also the other comments are right...
So if news.ycombinator.com cookies were set to Strict, they would not be sent when I open the site, or indeed at all unless I was navigating from within the site (thus stopping cross-origin attacks, but also stopping them from being sent on first page load). If these cookies were used to identify a logged-in user, the user would not appear logged in, because the cookie would not be sent. The link still works, but the behaviour is perhaps unexpected.
One solution is to have a trusted, strict cookie that is required for any actions that originate from the origin site - upvoting, new posts, comments, etc. You then have a second, untrusted, non-strict cookie that is used only to identify a user. As long as this second cookie is not used for any trusted operations, you have restricted the potential attack surface a lot.
None of this breaks links; it only breaks user-experience expectations if utilised naively (like Hacker News using strict cookies for its persistent login tokens).
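As a sketch of that trusted/identity split (cookie names and values are made up, and I'm assuming `SameSite=Lax` for the identity cookie, though leaving SameSite off entirely would also work for pure identification):

```ts
// Set on login; the strict cookie only travels on same-site requests, the
// lax one also arrives on a first page load from an external link.
const loginCookies = [
  "trusted_session=abc123; Path=/; Secure; HttpOnly; SameSite=Strict",
  "identity=def456; Path=/; Secure; HttpOnly; SameSite=Lax",
];
// e.g. res.setHeader("Set-Cookie", loginCookies) in a Node handler.

// Hypothetical server-side rule: reads only need "identity"; anything that
// changes state (upvote, comment, post) must see "trusted_session" too.
function canPerformAction(cookieHeader: string): boolean {
  return cookieHeader.includes("trusted_session=");
}
```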
If, and only if,
(1) a website uses strict cookies on a cookie
(2) the website assumes that cookie (1) is always available for 'valid requests'
(3) the website assumes that 'valid requests' (2) include top-level navigation events (such as typing the website's url into the browser bar)
then any behaviour that relies on the existence of that cookie will break in some situations.
That is, only a naive use of this feature will cause breakage. Worth knowing about for developers, but in an ideal world it would never cause issues for users.
In an ideal world there would be a solution without this shortfall, but it seems like an almost necessary feature due to how some cross origin requests are made (by spawning a new window etc).
It doesn't really break the idempotency of GET, as the idempotency assumes the cookies sent with the request are the same. That is, the expectation remains that if you send the same GET request with the same headers (including the same cookies) you should get the same response back.
Note that this is a client-side feature. The browser is choosing to not send cookies based on this policy. If you're crafting a request yourself it is up to you to include the correct cookies, just as it always has been.
All this is doing is providing a way for websites to ask browsers "never send this cookie unless the user is already on my site" (strict) and "never send this cookie unless the user is already on my site, or is performing a top-level request with a safe method" (lax).
The protocol still expects idempotent behaviour, but the user may be surprised that the browser didn't include their auth token with a top-level request, if the site had requested it be a strict cookie. If that happens it's a shortfall of the site, not of this addition to the protocol.
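To make the "client-side feature" point concrete: a request you craft yourself carries whatever cookies you attach, SameSite or not. A tiny Node sketch (host, path and cookie value are illustrative):

```ts
import { request } from "https";

// SameSite is enforced by the browser, not the server: a client you write
// yourself simply sends whatever Cookie header you give it.
const req = request(
  { host: "example.com", path: "/profile", headers: { Cookie: "session=abc123" } },
  (res) => res.resume()
);
req.end();
```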
The two-cookie solution needed to fix problems with `SameSite=Strict` is more complicated than just using CSRF tokens.
And the `SameSite=Lax` solution creates a new way for developers to screw up. The Lax setting gives you no CSRF protection on GET requests, and it is too easy to accidentally accept GET requests on a critical form that should be POST-only.
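Here's the footgun in miniature, as a hypothetical Node handler (the route name is made up): with `SameSite=Lax`, a cross-site top-level navigation is a GET that still carries the cookie, so if the "POST-only" action also answers GET, it fires.

```ts
import { createServer } from "http";

createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  if (url.pathname === "/delete-account") {
    // BUG: no method check. A cross-site link to this URL is a top-level GET,
    // and with SameSite=Lax the session cookie comes along for the ride.
    // Fix: if (req.method !== "POST") { res.statusCode = 405; return res.end(); }
    res.end("account deleted");
    return;
  }
  res.end("ok");
}).listen(8080);
```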
I do think that using them provides some additional defence in depth, and specifically covers uses that CSRF tokens can't. These are listed under 'additional uses' in the post, and essentially boil down to the fact that the cookies are not sent at all.
In the wild, this would help today against any timing attacks that try to expose info based on if/when a cookie is included in the request.
I feel like I'm missing something but relying on the browser to protect your site is leaving yourself wide open.
With a CSRF attack, it's the victim's browser performing the request - which the attacker doesn't have control over.
> I feel like I'm missing something but relying on the browser to protect your site is leaving yourself wide open.
Sites can just add in the 'SameSite' attribute in addition to whatever CSRF mitigation measure they use.
Prepared statements fix SQLi. Encoding HTML entities on output fixes XSS. Etc. etc. Even SPA frameworks that default to encoding entities get hit with XSS.
Unless it is completely stamped out by default or simply no longer possible, I encourage solutions like this. And this is mostly just an XSS problem, but XSS that leverages simple CSRF-vulnerable endpoints is still a problem. To be safer you now need three flags on a cookie (HttpOnly, Secure, and SameSite).
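For reference, that combination as a raw `Set-Cookie` header (name and value are illustrative):

```ts
// HttpOnly keeps scripts from reading it, Secure keeps it off plain HTTP,
// SameSite keeps it off cross-site requests.
const hardenedCookie = "session=abc123; Path=/; HttpOnly; Secure; SameSite=Strict";
// e.g. res.setHeader("Set-Cookie", hardenedCookie) in a Node handler.
```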
This is a pain for developers and almost ensures weird edge case CSRF type vulns will still creep up on occasion. Similar to how memory corruption mitigations often force an exploiter to combine multiple vulns to achieve the effect of one old school vuln, this raises the bar, but does not eliminate the possibility.
So we're left with Lax SameSite cookies, which is pretty much the same as what existing web frameworks do.
The best you can say is that this is less likely to have bugs in the implementation.
Then I wonder, because of some of the delightful vulnerabilities I've heard about.
So I'll just say banks certainly should use Strict SameSite cookies, and in fact they seem designed to suit banking workloads (especially as banks don't usually offer persistent logins, because that's too dangerous).
Most people aren't getting hacked via an obscure Google Analytics + Django CSRF interaction. Not many people are getting hacked via client-side webapp vulns of any sort anyway.
See a Microsoft paper from 2011 that prominently features cookie isolation, or a 2012 proposal in Mozilla's bug tracker which resulted in some proof-of-concept code early on and a writeup in 2013 hosted on GitHub; it was also blogged about independently and contemporaneously by others, which hit HN.
I've always taken a dim view of cross-domain requests in general, and of the sprawling set of specifications (like most security headers) we developers have to learn and implement properly to stay one step ahead. I'm not particularly enthused that this is opt-in instead of a heavy-handed mandate like some other recently introduced features, and the default opt-in is the more secure but essentially session-destroying version, which guarantees a long and impassioned debate about whether Strict or Lax is the preferred balance.
It's fascinating to go way back to ~2006-2008 and read about when CSRF was first starting to be recognized by mainstream evangelists, commentators, developers and decision-makers as a problem instead of a feature of just how the web works.
This article on DarkReading from 2006 was soon after cited by the OWASP wiki, Jeff Atwood first wrote about it in 2008 and admitted that its subtlety and seriousness took him by surprise, and yet it's amusing that going back to 2003 you can find references to CSRF by that name and instructions on how to protect against it -- the author of the 2003 article, Chris Shiflett, is credited in the announcement about the 2008 Felten & Zeller paper: "On the industry side, I'd like to especially thank Chris Shiflett and Jeremiah Grossman for tirelessly working to educate developers about CSRF attacks."
The feature is for site owners to put additional security/trust restrictions on their cookies.