Hacker News
SPAs Are Dead? (leastprivilege.com)
99 points by wstrange 3 days ago | 112 comments





SPAs as in the UI/UX concept certainly not. SPAs as in “browser-based standalone applications that do cross-site authentication and API calls in the context of a modern identity and SSO architectures” – yes.

Has the latter ever been a definition of "SPA"? One would have thought the acronym "single page application" was fairly precise...


And the latter is not even true, unless I missed something significant; it's just lazy implementations that are dead. Having developed SPAs for much of my career, I did plenty of cross-site login work without ever realizing third-party cookies were even an option. I just figured they were bad practice, and that the right way was OAuth2 (understanding the limitations of front-end OAuth2 security, e.g. you can't really have a fully confidential client without a backend to keep the client secret) plus proper CORS. I think that's the way forward with the removal of third-party cookies, is it not?

I guess I'm more surprised that somebody was using third party cookies for non-tracking purposes than I am surprised they're being removed.

> Some people recommend replacing silent renew with refresh tokens. This is dangerous advice – even if your token service has implemented countermeasures.

If I follow the link, the author is clearly talking about public clients, which are always going to have significant security limitations. Somewhere else on the site he mentions the BFF (backend for frontend) architecture as a mitigation. Which I kinda thought was the whole point of confidential vs. public OAuth2 clients. The spec is super clear that public is less secure, which is just the nature of security: if you have to ship a secret to someone who shouldn't know it, but they need to use it, you're kinda out of luck.

I think I'm picking up that the author's definition of SPA also assumes no back end, in which case many of the security points make a lot more sense. But even if you just throw up a secure authenticating proxy, which can be done with very little code and a few off-the-shelf OSS products, you're back in business and more secure than ever.


Sounds like JAM stack to me

If I can trust 2018 OAuth-as-a-service vendor literature, it seems popular to signify "use grant flow X because your app has no backend."

Maybe I'm missing something, but this seems like a very narrow definition of SPA. A SPA can just sit on a different path from the API server and cross-site cookies aren't a problem. For instance, www.example.com/app and www.example.com/api

That's what I moved our stuff to, because it was just easier for a number of reasons (cookies and CORS stuff being amongst them). Any calls to other domains are done through the API hosted on the site's domain.

He does allude to that at the end of the post though: "So are SPAs dead? ... SPAs as in “browser-based standalone applications that do cross-site authentication and API calls in the context of a modern identity and SSO architectures” – yes.". That is a narrower definition though.


This is how I prefer to do it. No CORS preflight requests and no messing with CORS settings.

Seconding (thirding?) this.

I'm curious, are there any SPAs that aren't done this way?


I have my backend on a different subdomain instead of a subdirectory because it solves issues at the DNS level. This means I don't have to deal with setting up a reverse proxy or separate rules for Cloudflare.

This applies not only to production but also to local testing. I used to have a reverse proxy for local development.

Moreover, it gives you better security by default, since different backends are now treated as different origins, so the same-origin policy and SameSite (for cookies) kick in.


Yeah, I used rules in CloudFront for this. I only have one API backend, so security doesn't really factor into this much. Actually, I consider it slightly more secure because there was no way for me to misconfigure CORS.

> there was no way for me to misconfigure CORS.

This is a pretty big issue because there are a shit tonne of bad resources that poorly explain CORS; so many places just slap a wildcard in 'Access-Control-Allow-Origin' and call it a day.

Even a lot of the framework middleware can be confusing and unhelpful.

FWIW, once I actually got it set up, it was very simple and very easy. I highly recommend MDN's CORS page[1] as the only source someone should read, and to read the whole thing to actually learn it rather than just grabbing a library to solve the problem in 15 minutes.

Even then, I had to start with a small test project and test things at different levels to understand what a library would be doing. My back end is Go, and I used gorilla/mux, so I did things step by step to really know what was working and what wasn't. I've done it other ways with something like Spring Boot and libraries, where it's just a goddamn mess because it tries to automate too much for you and it becomes way too confusing.

[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS


The article is referring to all the myriad other things an SPA will bring in. Analytics, feedback components, interactive galleries, notification components, header components... you can sew a site together from SaaS components that all live in different domains.

Well... you could. Harder now.

From the article:

> So are SPAs dead? Well – SPAs as in the UI/UX concept certainly not. SPAs as in “browser-based standalone applications that do cross-site authentication and API calls in the context of a modern identity and SSO architectures” – yes.


> I’m curious, are there any SPAs that aren’t done this way?

For…enterprise reasons…the SPAs I’ve worked on in $DAYJOB use APIs hosted on a separate subdomain (and, therefore, origin) from the page itself, meaning we have all the CORS headaches. But, certainly, segregating by path rather than domain is a lot more convenient if you control both the page and the APIs it consumes.


app.example.com vs api.example.com ?

Sure, I've done that, but only for SPAs that are totally anonymous, and deployed api.example.com behind a CDN cache.

Yes, there are a ton. Actually it was a bit of a pain setting it up that way. I'm using Cloudflare to dynamically route to S3 or an ALB based on the path. If Cloudflare didn't have that option I would have had to roll my own solution to defeat CORS, which could have added another point of failure

Edit: I meant Cloudfront. I do that a lot


Indeed, the intended audience seems to be developers of SPAs which only use third-party authentication and data. The article even mentions "pretty much every authentication protocol – like SAML, WS-Fed and OpenID Connect." I'm obviously not in this intended audience, because in my experience this is an extremely niche SPA use case. I can't even think of a website that does this or would want to do this. Maybe if you want to build a third-party web client for something like Gmail or Apple Music?

In enterprise environments, I can see that being a problem. Tools like SAML are more prevalent when SSO is used.

Sounds like an overreaction. The only thing affected by the browser changes is some third party authentication mechanisms, and those can be easily fixed (as mentioned in the article itself).

I did web dev professionally for seven years until I started writing Swift a year ago. The learning curve for native iOS development is a little steep because Apple doesn't write adequate documentation, but the resulting application delivers a superb user experience with extremely low power consumption.

Web apps and APIs are not the answer to the biggest problem we have today, which is the lack of open alternatives to Apple's App Store and Google's Play Store. While it is true that Apple cripples their browser to make web apps second-class citizens, browsers will always add a layer of indirection. Even if the performance loss were made negligible, it would encourage the creation of user interfaces that target the lowest common denominator available on all platforms.

In my opinion, the best way to fix this situation in the long-run is to design completely open mobile devices. I'd love to replace my iPhone and iPad with devices that are repairable and run a mobile-optimized version of Haiku. We're not as far off from that as you might think; metal 3D printers such as the Markforged Metal X will make it easier and cheaper than ever to make cases that turn a pile of parts into a working phone.

What does this have to do with web apps? Well, if you're spending all your time keeping up with web development, perhaps it's time to learn some new skills and push things forward in a different direction.


The general trend here seems to be that browser security improvements are more likely to kill core-function-of-your-app-as-a-service type software products than SPAs in general or in particular.

What the article completely ignores is the fact that building cross-site apps has been next to impossible for a long time. Since you can't contact a server that doesn't explicitly allow connections via CORS, your browser-based app is doomed.

And I write this as someone who would love to build apps all day with web technologies.


I don't quite get what you mean. If the cross-origin server (to your app) wants to be consumed then it will respond with the correct headers. And if the server is under your control then you can configure it so.

For non-browser apps the server doesn't have to explicitly allow it; a simple curl can access any server it wants. If a browser-based app wants to access another server, that server needs to be configured to allow it (because of the Same Origin Policy), and how many servers do you know that allow anybody to access them via CORS?

So take, for example, a WebDAV server. In theory, you could build a web-based app that can access any WebDAV server out there on the internet. In practice, that WebDAV server requires a special CORS configuration in order to be accessible via a browser from a different origin. Any other normal desktop app doesn't care about CORS and can access any server it has credentials for, no matter what configuration it has.


Not all apps need to contact third-party servers, so the way you categorically claim that building cross-site web apps is next to impossible seems a bit exaggerated?

Edit: Unless you don’t consider it ”cross site” if you control the API server?


I would think any service that wants to be consumed will have CORS configured?

All else should be forbidden, that is the whole point of CORS.


No, the point of CORS is to mitigate a security vulnerability in web browsers.

Non-browser apps need no CORS settings at all.


No ;), the point of CORS is to open up the web for cross-origin consumption from browsers by allowing servers/services to opt in with special headers. The same-origin policy is the one that protected against cross-origin requests, and it is still the default when no CORS is in place.

You are missing the point. You are right about SOP vs. CORS, but the point is that most clients that are not browsers have no SOP and therefore also don't care about CORS.

So every server that is not specifically designed to accept connections from browsers cannot be reached by browser-based apps. And that in turn is a serious disadvantage for these apps, because it eliminates a complete class of use cases.

Yes, if you control both ends you can make it possible, but if you want to build an app that is able to simply connect to any server out there, you will be in trouble.


Let's try to get to the point then!

Any server not configured to reply with permissive CORS headers doesn't want to handle your cross-origin requests. I.e., they are not "public areas" for anyone to consume; they serve just their own front end.

CORS is a security measure to make it safe to consume cross-origin servers from browsers, both with and without credentials. Otherwise, with credentials you could read client data for some other service, and without credentials you could just use that service's resources without having permission. CORS gives the service operator a method to give permission.

Querying other servers from your back end is another story entirely. CORS isn't required there, but also, your server doesn't have access to any of the credentials available in the browser, so you wouldn't be able to get any client-specific data. You can get any other data, of course, but because this is a server it's easier to block, perhaps by IP or by adding a captcha.


> You are right about SOP vs. CORS, but the point is, that most clients that are no browsers have no SOP and therefore also don't care about CORS.

Clients that aren’t browsers don’t generally have access to cookies or similar user credentials for third-party services, acquired through user interaction with those services, that would allow exfiltration of data without user intent in the absence of SOP.

SOP is a solution to a browser-specific security issue.


> SOP is a solution to a browser-specific security issue.

Indeed it is, but in my opinion a pretty bad one, as it causes a lot of collateral damage. Instead, they should just use the (cookie) state from the origin that initiates the request instead of the origin that receives the request. AFAIK, that would have solved the security issue much more precisely. But now we have to live with SOP+CORS in the web-based world :-/


Well, WebDAV clients don’t usually run untrusted scripts downloaded from random servers, so they don’t need CORS.

In this sense (i.e. running dubious scripts from random untrusted sources) the closest thing to browsers would be npm, although its centralized nature (mostly) prevents malicious actors causing too much damage.


This. I suppose that's what has me confused about both the blog post and comment, i.e. why would you not control your servers?

Because that is actually the classical pattern of applications using the internet ;-) The whole point of internet protocols like HTTP, SMTP and the like is about not having control over the server and being able to let programs talk to each other anyway.

So if you want to build a classic client app, it would be independent from the server implementation. So maybe it would even be your own server that you would use. For example, you could build an app that could talk to a Dropbox server. But with the current SOP setup you would require Dropbox to configure their CORS accordingly.


Not entirely sure what parent meant but it seems that many web developers are still confused as to why they can't consume any site on the internet from the front end.

What the author describes was never a good idea anyway imho. We will just have to learn to point a subdomain to third party servers.

Or services we integrate with will have to start emitting the correct headers.

A better way to think about it: Who wants SPAs more, UI developers or their users?

If the demands of the developers outpace the demands of their users, AND those demands primarily determine product design decisions, then the product is not really designed to benefit the user, despite developers' pleadings to the contrary. That is a very pronounced example of bias.


Users just want pages to load fast. UI developers (presumably) just want to maximize user happiness.

A SPA is just a way to front-load resources for a website so users don't have to re-load redundant resources for each new page. Whether that is actually worthwhile for the user depends largely on how many redundant resources a site has and how many pages a user is likely to request in a single session. SPAs are a situationally useful tech just like blockchain, machine learning, JS component frameworks, etc.


It’s hard to know what users want if those wants are assumed, which serves to reinforce bias.

I am currently involved in an app development effort where a single code base is deployed to several platforms.

CORS has been a topic for the last two months. I am increasingly concerned about the future of JavaScript web apps and their build targets like Cordova/Capacitor/Electron, because there is no single CORS configuration which fits all build targets :(


Why do authors of these blog posts assume that everyone knows the context? Even if one assumes the term relates to computing and technology, there are four different things named SPA. Is it that hard to expand the term in the title or at the beginning of the post to help people quickly decide whether they are interested in it or not?

Sorry for the rant, but this is not the first time the article forced me to read it before I could decide if I'm interested in it.


To be fair, most of these blog posts are not written for us on HN, but for the followers of the blog in question. I do not mind having to figure out the context if it means I get to explore the esoteric and far corners of the internet from one place :)

I hoped that designers learned something and decided to stop the war against their users. I did not hold my breath.

To clarify I have nothing against actual web applications like an image editor or a game. However a wiki (I don't appreciate Notion), an online shop or a damned blog should not be one. I want my history, link copying, bookmarking, middle-clicks and Ctrl/Cmd-clicks to work as intended.


They work perfectly fine in a SPA. A SPA pushes history, which lets the back button work, and you can bookmark the URLs. They have real URLs, so you can open them in tabs.

Go look at https://www.target.com/ and see if you can tell the difference between a SPA and a regular web page, except for the speed aspect.


Long ago, Ruby on Rails had a feature that sort of cached all links on the current page to make navigating to them faster. What happened to that? Would that not provide the same speed benefit? I'm ignoring the inefficient use of data here, ofc.

It's still a thing: https://turbo.hotwire.dev/

HTML fragments are not all that less efficient than JSON on the wire, and they are much more CPU efficient for low-powered client devices.


I think you're referring to TurboLinks. TurboLinks is still around. Building on that technology and moving towards HTML over socket is https://hotwire.dev/

That would be TurboLinks. It's going strong, and has seen significant improvements over time. Especially when paired with something like Stimulus and/or Reflex and/or Hotwire, you can make some _very_ snappy feeling server-side rendered HTML. At this point, I won't even consider making SPAs for anything unless the product owner has an incredibly compelling reason.

If you're curious what the modern state of server-side HTML Rails can be from the user's perspective, head over to hey.com and sign up for their free trial.


If you have a history with multiple pages each with their own URI then it is not a SPA. SPA means "single page application".

You’re getting the downvote treatment, but I kinda agree with you that a SPA shouldn’t change history. But also, that was like the first versions of SPAs. The name stuck around (hello, AJAX) while the underlying infrastructure and mechanisms greatly improved.

"Page" is clearly too ambiguous as almost any acronym will be. The main tenet of a Single Page Application at least used to be, before it was buzzworded into ambiguity, that the browser loads everything it needs once and makes round trips only for new data or incremental functionality. I think the same definition is still applicable, though. It never had anything to do with the information architecture of your UI or whether the majority of the screen might change between pages. And nothing to do with the URL or history; that's all orthogonal and now can and should be done with deep linking and the history API to allow people to have faster feedback and still interact with sites in the browser the way they always have.

Single page load is the definition: the page never reloads. Even the first SPAs would use URL hashes to change the URL before pushState was adopted.

That's not true. Wikipedia [0] has a nice summary:

> A single-page application (SPA) is a web application or website that interacts with the user by dynamically rewriting the current web page with new data from the web server, instead of the default method of the browser loading entire new pages. The goal is faster transitions that make the website feel more like a native app.

[0] https://en.wikipedia.org/wiki/Single-page_application


Isn't it so relieving now that the pendulum is swinging back?

I'm so glad I moved into C# and non-web dev/management and got out of that rat race. Tech always swings back and forth, but I'm so very happy to have made it past SPAs being the end all be all.


> I want my history, link copying, bookmarking, middle-clicks and Ctrl/Cmd-clicks to work as intended.

In a well designed SPA, these will all work.


It's like with C or C++. Most people that do write it shouldn't, because they are not good enough or the process is broken, and they will shoot themselves in the foot. SPAs are the same way, because you have to implement all those normal website things again by yourself. There will be casualties.

Exactly, the problem is that you don’t get these baseline features for free. SPA, in my experience, is a lot of people poorly reimplementing the wheel.

> I want my history, link copying, bookmarking, middle-clicks and Ctrl/Cmd-clicks to work as intended.

Solutions for all of those for SPAs have been around for quite a while. Whether any particular SPA uses them or not is, of course, variable, but there is nothing inherent to SPAs that prevents them from working.


In fact, in every modern JavaScript framework or rendering library I'm aware of, you have to go out of your way to make links that don't work with middle-click or page navigations that don't work with history and bookmarks.

I should start noting all those bugs, because I see them almost every day. But to be honest even "regular" web sites are guilty, especially with ctrl/cmd-click.

I want all those things too, but not clear on how you think notion should be implemented.

I use it exclusively in the browser on a laptop. I find it slow but I do not think that is the fault of browser tech or SPA architecture, and I assume it will get addressed.


> and decided to stop the war against their users

The thing is, who are their actual users? The truth is that we aren't the actual users; by "we" I mean the tech-savvy crowd, and we are a minority. From what I have seen, the majority of Notion's users are happier with the way it has evolved, and at the end of the day Notion and places like it are going to follow that. I don't like it, but this is usually what the war against their "users" is a byproduct of.

Every single day I see my wife and relatives struggle with modern computing. Not to say that previously it was much better, but web applications are increasingly different from each other in the way they work. They are not tech savvy, but are bitten in the ass by everything that computers have to offer, especially web pages. I am angry, because I know how it can all work better, and I see a bunch of bugs in almost anything I use, but I can quite easily come up with workarounds; they can't.

The Web After Tomorrow by Nikita Prokopov (tonsky): https://tonsky.me/blog/the-web-after-tomorrow/

I don’t like the click bait title, but the conclusion is interesting.

I think SPAs are alive and well using the Solid project [1] with Inrupt's auth library [2], which supports OpenID Connect PKCE with DPoP tokens.

[1] https://solidproject.org/ [2] https://github.com/inrupt/solid-client-authn-js


I prefer using cookies, but most SPAs actually use JWTs.

Cookies and JWTs are not alternatives to each other. You can store a JWT in a cookie.

You can also store a JWT in localStorage and require an additional secure signature for it within a cookie (http-only). Best of both worlds.

If doing that, why not go full-mode and store JWT in cookie with http-only flag?

There are good uses for page content to know what's in the JWT (display username, show logged-in status, etc). Cookies also have stricter size limits. Additionally, cookies by themselves are uniquely vulnerable to CSRF, although I guess these days using SameSite property correctly mitigates that.

You can prevent CSRF attacks by simply requiring a custom HTTP header: https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Re...

True, although in the vast majority of cases JWTs are sent via HTTP headers (especially if you're making requests to multiple domains).

Cookies suck. The interface is beyond terrible, they were never scoped properly, and they don't have to be used.

Browser storage (sessionStorage, localStorage) is perfectly valid for storing an authentication token.


No, it is not. And I hope I never end up using any application developed this way.

Tokens stored in those storages you mention can be read by any javascript code, even third party.

That doesn't happen with http-only cookies.

Be careful with what you recommend publicly, as others might end up assuming this is fine, when it is clearly not.


If your third party libraries are so poisoned that you're leaking localstorage, you've got bigger problems than just localstorage... This argument against using localstorage makes no sense

Chrome extensions can also inject code via a content script and gather local storage data. You can't control what extensions people are running.

That also applies to cookies. Users can run any browser or script to access your site and do whatever they want with the cookies.

In the context of a compliant web browser, you can set a cookie as HTTP-only so as to disallow access to it via JS.

No they can’t, refer to the documentation on cookie flags and attributes like httpOnly: https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies

Not for http-only cookies it doesn't.

Security is not black and white. It is a spectrum of possibilities and an arms race between security measures and new attack strategies. So lowering all your barriers because one of them could be already weak is not a good strategy.

If you're sending your password in plain text without HTTPs, then saying "ok, nevermind, we already have that problem so let's just not use any CSRF protection" you're just making things worse.

Authentication tokens that allow anyone reading them to impersonate you are bad, no discussion about it. Go and google for this if you don't believe me. Ask any security expert (which I'm not).

Cookies have the exact same problem. Except if you set the http-only flag in them, which is what I'm advocating for here.

You can still use localStorage, just not for security tokens.


You seem to be unaware of the existence of capability based security. https://en.wikipedia.org/wiki/Capability-based_security

Related to this, I was always wondering where to store refresh tokens when using both an access token and a refresh token. Reference: https://stackoverflow.com/q/57650692

> And I hope I never end up using any application developed this way.

Slack [0] has entered the chat.

Jira [1] has entered that chat.

I could go on but I don't need to. Many web applications that people use store an unnecessarily obnoxious amount of data locally. I can only imagine how much of that is used just once or possibly even never (like images in settings windows that were never accessed).

[0]: On one instance of Firefox on one computer, app.slack.com has stored 1.9GB. On another instance of Firefox on another computer, app.slack.com has stored about 633KB. On a third instance of Firefox on yet another computer, app.slack.com has stored about 600MB.

[1]: Jira doesn't load _at all_ if you've got any secure settings enabled (like... no CORS and no cross-domain cookies and no cross-domain XHRs and ... the list goes on). So Jira loads in an incognito window in a VM. But suffice to say that its local storage is even more _fucking obnoxious_.

> Tokens stored in those storages you mention can be read by any javascript code, even third party.

I don't doubt you. But I am not a javascript developer. I'd like it if you could link to some demonstrations of how that can be abused.


I'm not saying you shouldn't use localStorage, or sessionStorage, or indexedDB. They are valid tools and they are there for a reason.

I'm saying you should not use them for SECURITY tokens which, if leaked, can allow others to impersonate your user. So it doesn't matter if it uses 40GB of cached data. Security is not black or white; it is a full spectrum, and although it would be bad for a hacker to obtain a full chat history or your list of Jira card titles, it is still less worrisome than somebody being able to impersonate you in those services. Security is an arms race: you need to raise the barrier more and more as attacks get more sophisticated. So "then just don't run extensions" or "just don't use untrusted third-party scripts", although it is something we should do, is not a justification for lowering the bar on all the other stuff we should be doing.

Regarding the references you ask for, what can I say. localStorage can be accessed by any JavaScript running in the page, even from browser extensions or XSS attacks. I'm pretty sure you will find people more knowledgeable than me here [1], and overall, just googling for it you will find a lot of resources about why storing authentication tokens in localStorage is a bad idea.

[1] https://security.stackexchange.com/search?q=localstorage+sec...


So the article is about being unable to access third-party cookies due to browser privacy updates.

1. If you are using HTTP-only cookies, these are clearly unrelated.

2. If you are sharing an authentication token with the browser, cookie or not, it can be read by any scripts on the same origin.

So...what exactly are you trying to say?


> If you are sharing an authentication token with the browser, cookie or not, it can be read by any scripts on the same origin.

No, it can't. Read about http-only cookies.


Meh. Working at a consulting shop I've seen hundreds of apps built with tokens stored in the big bad localStorage. It's easy, it works and if you don't load scripts from untrustworthy sources then there is no attack vector outside of rendering un-sanitized user input as HTML or browser extensions.

We've dealt with plenty of security issues, and exactly zero of them centered around XSS. I've seen more issues surrounding bad dependencies that could end up running right inside your API server. Browser extensions can do so many other things, like reading all the forms and meddling with your API requests. So, I can't take responsibility for that.

I would recommend http-only cookies over localStorage, but if you protect yourself in other ways then putting auth tokens in localStorage is not the end of the world. It hasn't ever affected us, so I'll leave the pearl-clutching over this up to the experts.


Yep, better to wait until the security issues happen to fix them. Great mentality.

Leaking your auth tokens through XSS sucks more.

If you have an XSS, your problem is much bigger than leaking an ephemeral access token through localStorage.

I don’t know whether you are referring to only local and session storage being feasible or not, but one can access cookies from JS as well.

Normal cookies are JS-accessible, but HTTP-only cookies should not be: "A cookie with the HttpOnly attribute is inaccessible to the JavaScript Document.cookie API; it is sent only to the server."

https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies#re...


Ah, thanks! This is new to me. That is indeed a concern, but it probably can be worked around, e.g. by proxying requests to third-party domains through the same domain.

You can't make the browser send you cookies for other origins, so you won't be able to use them from your server.

Which is why you use domain scoping, httpOnly and Secure cookie flags so they can only be read by matching hosts (with greater granularity than same-origin policy) over HTTPS and can’t be read by JavaScript. The Web Storage API does not offer these protections.

They can't be read, BUT the browser will send the cookie with every request. If you have an XSS, it is game over: the attacker can just send requests from your browser. You are merely taking away the convenience of the attacker performing the attack manually from his own browser, which he probably doesn't want to do anyway. If he can inject JS into your site, he will make your browser send the request(s) to perform the actions with your credentials in an automated and quick way, and your browser will attach the cookie automatically. Being able to read your tokens would merely be nice for the attacker; it is absolutely not necessary.

Like some here, I don't understand the hate around keeping tokens in localStorage. People immediately say "but js can read it!" but so what? If someone can put malicious js in my site, it is GAME OVER, secure http-only cookie or not. Given that, the saner option is doing away with an old and misused invention called cookies. The upside of ditching cookies is that you are an order of magnitude safer against CSRF, since your browser does not send anything automatically. You don't need to keep CSRF token state in your server(s) either (which helps with scale: one less piece of state to worry about); it is a win.

http-only secure cookies do not give you any additional security. Ditching cookies does.
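A minimal sketch of the cookie-less approach described above: the token lives in localStorage and is attached explicitly, so the browser never sends it on its own and classic CSRF does not apply. The helper names here are made up for illustration.

```javascript
// Sketch of the no-cookie scheme: nothing is attached automatically by the
// browser; the client adds the token itself on every API call.
// "authHeaders" and "apiFetch" are illustrative names, not a real API.
function authHeaders(token) {
  return { Authorization: `Bearer ${token}` };
}

async function apiFetch(url, options = {}) {
  // In a browser this reads from localStorage. A cross-site form post or
  // image tag cannot trigger this code path, which is the CSRF win above.
  const token = globalThis.localStorage
    ? localStorage.getItem("access_token")
    : null;
  return fetch(url, {
    ...options,
    headers: { ...(options.headers || {}), ...authHeaders(token) },
  });
}
```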


I think some people don't realize that CSRF tokens are basically the same thing as bearer tokens (which is what JWTs are too), except that they usually get regenerated every time you open a new page. So it's a bit ironic that everyone screams that tokens are bad while they're all using them to protect against confused deputy attacks.

I am talking about a much more general class of security than just XSS. You’re making perfect the enemy of good here - yes, of course XSS is not completely mitigated by httpOnly. That was not my point.

My actual point stands: the Web Storage API doesn’t offer the same protections as cookies. Don’t store sensitive data in localStorage; that is emphatically not its intended use.


>I am talking about a much more general class of security than just XSS.

And what would those be that are relevant to this discussion? The way we (ab)use cookies is arguably not their intended use either.

I can't think of a scenario in this context where an attacker says "damn he is using http-only cookies, I won't be able to do what I want to do"

The only pragmatic difference between both is js accessibility. That only matters when someone can inject scripts into your site. My point is, when that happens, cookies are also bust.


A good option is doing both.

Store a security token in localStorage and additionally store a secure signature for it in a secure, HTTP-only cookie. On your backend, verify validity of both the token and its additional signature contained in the cookie.


I don't believe it adds any meaningful security that justifies the cost (development, testing, hardening, scaling the state across servers if necessary, etc.). With security, "more complicated" does not necessarily mean "more secure". Doing it without multiplying the number of ways things can go wrong is deceptively hard.

With this method there is no additional per-user state, fortunately.

Yes, it's just that with regard to security I've seen too many people burned by "it can't hurt" additions. With your suggestion, assuming a perfect implementation, I personally can't see where it would help. If an attacker can run js in your site, they can just set the cookie as necessary before making requests (if the cookie does not exist already), since that is something they can already do. If the cookie exists (the most likely scenario), the browser will send it with each request anyway, so no added security there either.

By interface you mean... the HTTP protocol? At least they can be scoped on the domain level. Browser storage can't.

Local storage and Session Storage are scoped per origin as well: https://developer.mozilla.org/en-US/docs/Glossary/Origin


Got to admit I only learned about this because of the Github Private Page item posted earlier on HN. I'm still not clear what this list does or tries to do. Can't find much about it on MDN. Need to do a bit more digging.

localStorage doesn't work in many browsers' private modes, so no... it's not


