The madness began when SPAs (single-page applications) became fashionable along with simple REST APIs. This made the client experience smoother, but the network waterfall could get a bit long. For a shop you might have a workflow like [click product] -> [GET /product] -> [GET /product/reviews].
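
For example, the waterfall looks something like this in a typical SPA (endpoints and names made up for illustration):

    // Hypothetical SPA data loading against a REST API: the second
    // request can't start until the first finishes, so latency stacks up.
    async function showProduct(id: string) {
      const product = await fetch(`/api/product/${id}`).then(r => r.json());
      // Only after the product arrives does the reviews request go out.
      const reviews = await fetch(`/api/product/${id}/reviews`).then(r => r.json());
      console.log(product, reviews); // render here in a real app
    }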

Then GraphQL came along to solve the N+1 issue by letting you query everything in one network call... but only over HTTP POST, so it broke caching on CDNs.
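
Something like this, with a made-up endpoint and schema; note the POST, which is exactly what defeats CDN caching:

    // One POST to a GraphQL endpoint replaces the whole REST waterfall.
    const query = `
      query Product($id: ID!) {
        product(id: $id) {
          name
          price
          reviews { author rating body }
        }
      }`;

    const res = await fetch("/graphql", {
      method: "POST", // the part CDNs won't cache
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query, variables: { id: "42" } }),
    });
    const { data } = await res.json();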

Then edge workers/lambdas came along to solve that issue. You can also bring your data closer to the edge with replicated databases.

Most newer stacks like Next.js, Nuxt, or SvelteKit behave like a traditional server-side-rendered page at first and then dynamically load content like an SPA, which is the best of both worlds. They may also use HTTP/2 server push (or preload hints) to force-feed a bunch of assets to the client before it even requests them.
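
Roughly this pattern in Next.js's pages router (the page, endpoint, and props are invented for the example): the first request comes back as server-rendered HTML, then the client hydrates and behaves like an SPA:

    import type { GetServerSideProps } from "next";

    type Props = { product: { name: string; price: number } };

    // Runs on the server for the initial page load.
    export const getServerSideProps: GetServerSideProps<Props> = async (ctx) => {
      const res = await fetch(`https://api.example.com/product/${ctx.query.id}`);
      return { props: { product: await res.json() } };
    };

    // Rendered to HTML on the server, then hydrated on the client.
    export default function ProductPage({ product }: Props) {
      return <h1>{product.name}: {product.price}</h1>;
    }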

Ideally you'd have your data distributed around the world with GraphQL servers close to your users and use SSR to render the initial page markup.




The data that is queried by POST through GraphQL is the same data that used to have caching disabled on older web applications. You just do not want to cache that; you want to reload it every time the user asks for a reload.
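
In other words, something like this (Express-style sketch, route and payload made up):

    import express from "express";

    const app = express();

    // Dynamic, per-user data: tell every cache along the way not to
    // store it, so a reload always goes back to the origin.
    app.get("/api/cart", (req, res) => {
      res.set("Cache-Control", "no-store");
      res.json({ items: [], user: req.ip }); // placeholder payload
    });

    app.listen(3000);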

This is not the problem.

The problem the GP is pointing out is that modern frontends tend to scatter their data fetching all over the place and request each piece independently. GraphQL allows solving this problem, but in practice nobody uses it that way, because it breaks the abstractions your frontend framework gives you.

Anyway, this entire discussion is on a weird tangent from the article. If you (not the parent, but you reading this) don't cache your static assets forever (with guarantees that this won't break anything), then you should really read the article.


> but only over HTTP POST so it broke caching on CDNs.

That's why, when making requests for realtime data from the server, a POST is always needed! GET is ONLY for static content.


That is... not right.

While POST is effectively never cached, GET isn't always cached either. Cache headers should be set properly on GET responses, indicating the desired caching behavior (which is more than just 'on' or 'off', as the OP gets into), for any content, static or not.

The fundamental difference between GET and POST in HTTP is idempotency. A non-cacheable response to an idempotent request is sometimes an intentional, desirable thing, which is why you can make GET responses non-cacheable if you want.

Static content isn't the only thing you might want cached; there are all sorts of use cases for caching non-static content (which, again, can be configured in many ways; it's not just on or off).
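
To illustrate (Express-style sketch, routes and max-ages made up), these are all GET responses with different, deliberate caching behavior:

    import express from "express";

    const app = express();

    // Fingerprinted static asset: safe to cache "forever".
    app.get("/assets/app.abc123.js", (_req, res) => {
      res.set("Cache-Control", "public, max-age=31536000, immutable");
      res.send("/* bundle */");
    });

    // Non-static data that tolerates brief staleness: short shared cache.
    app.get("/api/prices", (_req, res) => {
      res.set("Cache-Control", "public, max-age=30, stale-while-revalidate=60");
      res.json({ widget: 9.99 }); // placeholder data
    });

    // Idempotent GET whose response must never be stored by any cache.
    app.get("/api/me", (_req, res) => {
      res.set("Cache-Control", "private, no-store");
      res.json({ name: "alice" }); // placeholder data
    });

    app.listen(3000);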


While you’re right in theory, it doesn’t always work in practice. Some systems fail to respect caching headers.

Further, in my experience, intermediate caches are mostly useless for non-binary product data. Either you need to make a round trip or you don't. Sure, you can cache and return "not changed" (a 304), but you still pay the latency. Just returning the data often isn't much slower.
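
For what it's worth, that revalidation round trip looks roughly like this (endpoint made up):

    // Conditional GET: revalidating with If-None-Match saves bytes on a
    // 304, but the round trip and its latency happen either way.
    let cachedEtag = "";
    let cachedBody: unknown = null;

    async function loadProducts(): Promise<unknown> {
      const res = await fetch("/api/products", {
        headers: cachedEtag ? { "If-None-Match": cachedEtag } : {},
      });
      if (res.status === 304) {
        return cachedBody; // "not changed": same latency, fewer bytes
      }
      cachedEtag = res.headers.get("ETag") ?? "";
      cachedBody = await res.json();
      return cachedBody;
    }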

POST avoids all of those issues by pretty much saying "give us what we need every time".


By that logic, would you write a web site where every link was actually a POST unless it was to "static content"? That would be a disaster, no?

I guess the person I was replying to, and you, were talking about JavaScript API calls rather than ordinary HTML executed by a "browser". It still seems like a wrong idea to me, but if this is what you do in actual apps and have success, I guess that's a thing.


> Some systems fail to respect caching headers.

Don't we call those bugs? HTTP has a pretty well-defined spec, no?


Customers don't care what's defined in the spec. They care how it works.



