> The main problem GraphQL tries to solve is overfetching.
My issue with this article is that, as someone who is a GraphQL fan, that is far from what I see as its primary benefit, and so the rest of the article feels like a strawman to me.
TBH, the biggest benefits of GraphQL for me are that (a) it forces a much tighter contract around endpoint and object definition with its type system, and (b) it makes schema evolution much easier than other API tech does.
For the first point, the entire ecosystem guarantees that when a server receives an input object, that object will conform to the type, and similarly, a return object received by a client is guaranteed to conform to the endpoint response type. Coupled with custom scalar types (e.g. "phone number" types, "email address" types), this can eliminate a whole class of bugs and security issues. Yes, other API tech does something similar, but I find the guarantees are far less "guaranteed" and it's much easier for errors to slip through. Like GraphQL always prunes return objects to just the fields requested, which most other API tech doesn't do, and this can be a really nice security benefit.
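To make the pruning point concrete, here's a toy TypeScript sketch (not any real server's code; the `prune` helper and the field names are made up for illustration):

```typescript
// Toy illustration of response pruning: the resolver can return a "fat"
// object, but only the fields the query asked for go over the wire.
function prune(
  obj: Record<string, unknown>,
  requested: string[],
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const field of requested) {
    if (field in obj) out[field] = obj[field];
  }
  return out;
}

// The resolver happily returns internal fields...
const fatUser = {
  id: 1,
  email: "a@example.com",
  passwordHash: "redacted",
  debugInfo: "stack trace",
};

// ...but the client only asked for { id email }, so that's all it gets.
const wire = prune(fatUser, ["id", "email"]);
```

That's the whole idea: the sensitive extras never leave the server, even if a resolver is sloppy about what it returns.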
When it comes to schema evolution, I've found that adding new fields and deprecating old ones, and especially that new clients only ever have to be concerned with the new fields, is a huge benefit. Again, other API tech allows you to do something like this, but it's much less standardized and requires a lot more work and cognitive load on both the server and client devs.
I 100% agree that overfetching isn't the main problem graphql solves for me.
I'm actually spending a lot of time in the REST-ish world, and contract isn't the problem I'd solve with GraphQL either. For that I'd go through OpenAPI and its enforcement and validation. That is very viable these days; it just isn't a "default" in the ecosystem.
For me, the main problem GraphQL solves, which I haven't got a good alternative for, is API composition and evolution, especially in M:N client-to-service scenarios in large systems. Having the mindset of "client describes what they need" -> "GraphQL server figures out how to get it" -> "domain services resolve their part" makes long-term management of a network of APIs much easier. And when it's combined with good observability, it can become one of the biggest enablers for data access.
Completely agree with this rationale too. GraphQL does encapsulation really, really well. The client just knows about a single API surface, but which actual backend services handle the (parts of each) call is completely hidden.
On a related note, this is also why I really dislike those "Hey, just expose your naked DB schemas as a GraphQL API!" tools. Like the best part about GraphQL is how it decouples your API contract from backend implementation details, and these tools come along and now you've tightly coupled all your clients to your DB schema. I think it's madness.
I have used and implemented GraphQL at two large-scale companies across multiple (~xx) services. There are similarities in how it unfolds; however, I have not seen any real-world problem being solved with this so far.
1. The main argument for introducing it has always been appropriate data fetching for the clients, where clients can describe exactly what's required.
2. The ability to define a schema is touted as an advantage, but managing the schema becomes a nightmare. (Btw, the schema already exists at the persistence layer if that was required; schema changes and schema migrations are already challenging, and you just happen to replicate the challenge in one additional layer with GraphQL.)
3. You go big and you get GraphQL servers calling into other GraphQL servers, and that's when things become really interesting. People do not realize/remember/care about the source of the data, you have name collisions, you get into namespaces.
4. You started on the pretext of optimizing queries, and now you have this layer that your client works with; the natural next step is to implement mutations with GraphQL.
5. Things go downhill from this point. With distributed services you had already lost transactionality, and GraphQL mutations just add to it. You get circular references because underlying services are just calling other services via GraphQL to get the data you asked for with a GraphQL query.
6. The worst: you do not want too many small schema objects, so now you have this one big schema that gets you everything from multiple REST API endpoints, and clients are back where they started from: pick what you need to display on the screen.
7. Open up the network tab of any *enterprise* application which uses GraphQL and it's easy to see how much unusable data is fetched via GraphQL for displaying simplistic pages.
There is nothing wrong with GraphQL; this pretty much applies to all tools. It comes down to how you use it and how good you are at understanding the trade-offs. Treating anything like a silver bullet is going to lead in the same direction. Pretty much every engineer who has operated at application scale is aware of this; unfortunately, they just stay quiet.
This is very much possible, and I have done it, and it works great once it's all wired up.
But OpenAPI is verbose to the point of absurdity. You can't feasibly write it by hand, so you can't do schema-first development. You need an OpenAPI-compatible lib for authoring your API, some tooling to generate the schema from the code, and then another tool to generate types from the schema. Each step tends to implement the spec to varying degrees, creating gaps in types or just outright failing.
Fwiw, I tried many, many tools to generate the TypeScript from the schema. Most resulted in horrendous, bloated code, the official generators especially. Many others just choked on a complex schema, or used basic string concatenation to output the TypeScript, leading to invalid code. Additionally, the cost of the generated code scales with the schema size, which can mean shipping huge chunks of code to the client as your API evolves.
The tool I will wholeheartedly recommend (and with which I am unaffiliated besides making a few PRs) is openapi-ts. It is fast and correct, and you pay a fixed cost: there's a fetch wrapper at runtime and everything else exists at the type level.
I was kinda surprised how bad a lot of the tooling was considering how mature OpenAPI is. Perhaps it's advanced in the last year or so, when I stopped working on the project where I had to do this.
I imagine you are very much in the minority. A simple hello world is like a screen full of YAML. The equivalent in GraphQL (or TypeSpec, which I always wanted to try as an authoring format for OpenAPI: https://typespec.io/) would be a few lines.
I see your point, yet writing openapi specs by hand is pretty common.
There is the part where dealing with another tool isn't worth it much of the time, and the other side where we're already reading/writing screens of YAML or YAML-like docs all the time.
Taking time to properly think about and define an entry point is reasonable enough.
Ha yes, see one of my other comments to another reply.
I never got to use it when I last worked with OpenAPI, but it seemed like the antidote to the verbosity. Glad to hear someone had a positive experience with it. I'll definitely try it next time I get the chance.
If you generate OpenAPI specs, and clients, and server type definitions from a declarative API definition made with Effect's own @effect/platform, it solves even more things in a nicer, more robust fashion.
There is JSON:API, a spec layered on top of plain JSON-over-HTTP, which offers support for fetching relations (and relations of relations, etc.) and selecting a subset of fields in a single request: https://jsonapi.org/
I used this to get a fully type safe client and API, with minimal requests. But it was a lot of work to get right and is not as mainstream as OpenAPI itself. Gql is of course much simpler to get going
You still have GQL requests to deal with. There's pretty much the same amount of code to build in a BFF as there is to build the same thing in GQL... and probably less code on the frontend.
The value of GQL is pretty much equivalent to SOA orchestration - great in theory, just gets in the way in practice.
Oh, and not to mention that GQL will inadvertently hide bad API design (e.g. lack of pagination)... until you are left questioning why your app with 10k records in total is slow AF.
Your response is incredibly anecdotal (as is mine absolutely), and misleading.
GQL paved the way for a lot of ergonomics with our microservices.
And there's nothing stopping you from just adding pagination arguments to a field and handling them. Kinda exactly how you would in any other situation, you define and implement the thing.
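For instance, a hypothetical sketch of "defining and implementing the thing" for cursor pagination on a field (the arg names mirror the Relay connection convention; the data and helper are invented for illustration):

```typescript
// Hypothetical resolver for cursor pagination on a field.
type PageArgs = { first: number; after?: string };

const episodes = ["ep1", "ep2", "ep3", "ep4", "ep5"];

function resolveEpisodes({ first, after }: PageArgs) {
  // Find where the last page ended, then slice out the next chunk.
  const start = after ? episodes.indexOf(after) + 1 : 0;
  const edges = episodes.slice(start, start + first);
  return {
    edges,
    pageInfo: {
      endCursor: edges[edges.length - 1],
      hasNextPage: start + first < episodes.length,
    },
  };
}

const page1 = resolveEpisodes({ first: 2 });
const page2 = resolveEpisodes({ first: 2, after: page1.pageInfo.endCursor });
```

Nothing GraphQL-specific about the handling; the schema just makes the arguments explicit and typed.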
tRPC sort of does this (there's no spec, but you don't need a spec because the interface is managed by tRPC on both sides). But it loses the main defining quality of GQL: not needing subsequent requests.
If I need more information about a resource that an endpoint exposes, I need another request. If I'm looking at a podcast episode, I might want to know the podcast network that the show belongs to. So first I have to look up the podcast from the id on the episode. Then I have to look up the network by the id on the podcast. Now, two requests later, I can get the network details. GQL gives that to me in one query, and the fundamental properties of what makes GQL GQL are what enables that.
Yes, you can jam podcast data on the episode, and network data inside of that. But now I need a way to not request all that data so I'm not fetching it in all the places where I don't need it. So maybe you have an "expand" parameter: this is what Stripe does. And really, you've just invented a watered down, bespoke GraphQL.
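To sketch the episode example (made-up data and names; each record map stands in for a real service or endpoint):

```typescript
// Made-up in-memory data; in reality each map would be a separate service.
type Episode = { title: string; podcastId: string };
type Podcast = { name: string; networkId: string };
type Network = { name: string };

const episodes: Record<string, Episode> = { e1: { title: "Pilot", podcastId: "p1" } };
const podcasts: Record<string, Podcast> = { p1: { name: "Good Pod", networkId: "n1" } };
const networks: Record<string, Network> = { n1: { name: "Pod Network Inc" } };

// REST-ish: each lookup here is a separate round trip in the real world.
function restFlow(episodeId: string): Network {
  const episode = episodes[episodeId];          // request 1
  const podcast = podcasts[episode.podcastId];  // request 2
  return networks[podcast.networkId];           // request 3
}

// GraphQL-ish: the server walks the same graph inside one request, shaped
// by a query like { episode { podcast { network { name } } } }.
function gqlFlow(episodeId: string) {
  const episode = episodes[episodeId];
  const podcast = podcasts[episode.podcastId];
  return {
    title: episode.title,
    podcast: { network: { name: networks[podcast.networkId].name } },
  };
}
```

Same data either way; the difference is where the graph walk happens and how many round trips the client pays for.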
I think BFF works at a small scale, but that's true with any framework. Building a one off handful of endpoints will always be less work than putting a framework in place and building against it.
GQL has a pretty substantial up front cost, undeniably. But you hopefully balance that with the benefit you'd get from it.
Thanks for mentioning this. I always find it unsettling when I've researched solutions for something and only find a better option from a random HN comment.
Fwiw, I tried every tool imaginable a few years ago, including kubb (which I think I contributed to while testing things out).
The only mature, correct, fast option with a fixed cost (since it mostly exists at the type level, meaning it doesn't scale your bundle with your API) was openapi-ts. I am not affiliated, other than being a previously happy user, though I did make some PRs while using it: https://openapi-ts.dev/
And yes, current models are amazing at reducing time it takes to push out a feature or fix a bug. I wouldn't even consider working at a company that banned use of AI to help me write code.
PS: Whether it's AI-generated or not is also irrelevant; what matters is whether it works and is secure.
There are literally users here that say that it works.
And you presume that the code hasn't been read or understood by a human. AI doesn't click merge on a PR, so it's highly likely that the code has been read by a human.
Agree whole-heartedly. The strong contracts are the #1 reason to use GraphQL.
The other one I would mention is the ability to very easily reuse resolvers in composition, and even federate them. Something that can be very clunky to get right in REST APIs.
re: #1, is there a meaningful difference between GraphQL and OpenAPI here?
Composed resolvers are the headache for most and not seen as a net benefit, you can have proxied (federated) subsets of routes in REST, that ain't hard at all
> Composed resolvers are the headache for most and not seen as a net benefit, you can have proxied (federated) subsets of routes in REST, that ain't hard at all
Right, so if you take away the resolver composition (this is graph composition and not route federation), you can do the same things with a similar amount of effort in REST. This is no longer a GraphQL vs REST conversation, it's an acknowledgement that if you don't want any of the benefits you won't get any of the benefits.
There are pros & cons to GraphQL resolver composition, not just benefits.
It is that very compositional graph resolving that makes many see it as overly complex: not as a benefit, but as a detriment. You seem to imply that the benefit is guaranteed and that graph resolving cannot be done within a REST handler, which it can be, and it's much simpler and easier to reason about. I'm still going to go get the same data, but with less complexity and reasoning overhead than using the resolver composition concept from GraphQL.
Is resolver composition really that different from function composition?
Local non-utility does not imply global non-value. Of course there are costs and benefits, but it's hard to have a conversation with a good-faith comparison using "many see it as overly complex": this is an analysis that completely ignores problem fit, which you then want to generalize onto all usage.
Contracts for data with OpenAPI or an RPC don't come with the overhead of making a resolver for infinite permutations when your apps probably need a few, or perhaps just one. Which is why REST plus something for validation is enough for most, and doesn't cost as much.
> Pruning the request and even the response is pretty trivial with zod.
I agree with that, and when I'm in a "typescript only" ecosystem, I've switched to primarily using tRPC vs. GraphQL.
Still, I think people tend to underestimate the value of the clear contracts and guarantees that GraphQL enforces (not to mention its whole ecosystem of tools), completely outside of any code you have to write. Yes, you can do your own zod validation, but in a large team, as an API evolves and people come and go, having hard, unbreakable lines in the sand (vs. something you have to roll yourself, or which is done by convention) is important IMO.
Pruning the response would help validate that your response schema is correct and that it is delivering what was promised.
But you're right, if you have version skew and the client is expecting something else then it's not much help.
You could do it client-side so that if the server adds an optional field the client would immediately prune it off. If it removes a field, it could fill it with a default. At a certain point too much skew will still break something, but that's probably what you want anyway.
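Roughly, as a sketch (a hand-rolled `normalize` helper, not any particular library's API):

```typescript
// Sketch: normalize a server response against the shape the client
// expects. Unknown fields are dropped; missing fields get defaults.
function normalize<T extends Record<string, unknown>>(
  response: Record<string, unknown>,
  defaults: T,
): T {
  const out: Record<string, unknown> = { ...defaults };
  for (const key of Object.keys(defaults)) {
    if (key in response) out[key] = response[key];
  }
  return out as T;
}

// Server added `newOptionalField` and removed `legacyField`:
const fromServer = { id: 7, name: "x", newOptionalField: true };
const clientView = normalize(fromServer, { id: 0, name: "", legacyField: "n/a" });
```

The added field is pruned off and the removed one is defaulted, so moderate skew in either direction is absorbed.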
You're misunderstanding. In GraphQL, the server prunes the response object. That is, the resolver method can return a "fat" object, but only the object pruned down to just the requested fields is returned over the wire.
It is an important security benefit, because one common attack vector is to see if you can trick a server method into returning additional privileged data (like detailed error responses).
Sorry, but I'm not convinced. How is this different from two endpoints communicating through, let's say, protobuf? Both input and output will be (un)parsed only when conforming to the definition.
Facebook had started bifurcating API endpoints to support iOS vs Android vs Web, and over time a large number of OS-specific endpoints evolved. A big part of their initial GraphQL marketing was solving this problem specifically.
> when a server receives an input object, that object will conform to the type
Anything that comes from the front end can be tampered with. Server is guaranteed nothing.
> GraphQL always prunes return objects to just the fields requested, which most other API tech doesn't do, and this can be a really nice security benefit.
Requests can be tampered with, so there's no additional security from the GraphQL protocol. Security must be implemented by narrowing down to only the allowed data on the server side. How much of it is requested doesn't matter for security.
I ran a team a few years ago. The FE folks really wanted to use GraphQL, and the BE folks agreed, because someone had found an interesting library that made it easy. No-one had any experience of GraphQL before.
After a month's development I found out that there was one GraphQL call at the root of each React page, and it fetched all the data for that userID in a big JSON blob, that was then parsed into a JS object and used for the rest of the life of that page. Any updates sent the entire, modified, blob back to the server and the BE updated all the tables with the changed data. This didn't cause problems because users didn't share data or depend on shared data.
Everyone was happy because they got to put GraphQL on their resume. The application worked. We hit the required deadline. The company didn't get any traction with the application, pivoted to something else very quickly, and was sold to private equity within two years. None of the code we wrote is running now, which is probably a good thing.
I get the feeling, from conversations with other people using GraphQL, that this is the sort of thing that actually happens in practice. The author's arguments make sense, as do the folks defending GraphQL. But I'd suggest that 80-90% of the GraphQL actually written and running out there is the kind of crap my team turned out.
- you can make changes to subcomponents without worrying about affecting the behavior of any other subcomponent,
- the query is auto-generated based on the fragment, so you don't have to worry that removing a field (if you stop using it in one subcomponent) will accidentally break another subcomponent
In the author's case, they (either) don't care about overfetching (i.e. they avoid removing fields from the GraphQL query), or they're at a scale where only a small number of engineers touch the codebase. (But imagine a shared component, like a user avatar. Imagine it stopped using the email field. How many BFFs would have to be modified to stop fetching the email field? And how much research must go into determining whether any other reachable subcomponent used that email field?)
If moving fast without overhead isn't a priority (or you're not at the scale where it is a problem), or you're not using a tool that leverages GraphQL to enable this speed, then indeed, GraphQL seems like a bad investment! Because it is!
Yes, Apollo not leading people down the correct path has given people a warped perception of what the benefits actually are. Colocation is such a massive improvement that's not really replicated anywhere else - just add your data requirements beside your component and the data "magically" (though not actually magic) gets requested and funnelled to the right place
Apollo essentially only had a single page mentioning this, and it wasn't easy to find, for _years_
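For anyone who hasn't seen colocation in action, here's the idea in a hand-rolled sketch (this is not Relay's actual API; the compose step stands in for what Relay's compiler automates):

```typescript
// Each component declares the data it needs right next to itself.
const AvatarFragment = `fragment AvatarFragment on User { avatarUrl name }`;
const BioFragment = `fragment BioFragment on User { bio joinedAt }`;

// A build step gathers the fragments into the page query, so removing a
// field from one fragment can't break a sibling component.
function composePageQuery(fragments: string[]): string {
  const spreads = fragments
    .map((f) => `...${f.match(/fragment (\w+)/)![1]}`)
    .join(" ");
  return `query Page { viewer { ${spreads} } }\n${fragments.join("\n")}`;
}

const pageQuery = composePageQuery([AvatarFragment, BioFragment]);
```

Each component owns its fragment, and the query "magically" tracks whatever the tree of components currently needs.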
Quite. Apollo Client is the problem, IMO, not GraphQL.
Though Relay still needs to work on their documentation: Entrypoints are so excellent and yet still are basically bare API docs that sort of rely on internal Meta shit
100% agree on the unnecessary connection between entrypoints and meta internals. I think this is one of the biggest misses in Relay, and severely limits its usefulness in OSS.
If you're interested in entrypoints without the Meta internals, you may be interested in checking out Isograph (which I work on). See e.g. https://isograph.dev/docs/loadable-fields/, where the data + JS for BlogBody is loaded afterward, i.e. entrypoints. It's as simple as annotating a field (in Isograph, components define fields) with @loadable(lazyLoadArtifact: true).
Neat! I basically just reimplemented some of the missing pieces myself, but honestly, for the kind of non-work GraphQL/Relay stuff I do, React Router with an entrypoint-like interface for routes (including children!) to feed route params into loadQuery, plus the ref to the route itself, got me close enough for my purposes.
I’ll have a play though, sounds promising :)
Oh this is interesting, sort of seems like the relay-3d thing in some ways?
Yeah, you can get a lot of features out of the same primitive. The primitive (called loadable fields, but you can think of it as a tool to specify a section of a query as loaded later) allows you to support:
- live queries (call the loadable field in a setInterval)
- pagination (pass different variables and concatenate the result)
- defer
- loading data in response to a click
And if you also combine this with the fact that JS and fragments are statically associated in Relay, you can get:
- entrypoints
- 3D (if you just defer components within a type refinement, e.g. here we load ad items only when we encounter an item with typename AdItem https://github.com/isographlabs/isograph/blob/627be45972fc47.... asAdItem is a field that compiles to ... on AdItem in the actual query text)
And all of it is doable with the same set of primitives, and requiring no server support (other than a node field).
Do let me know if you check it out! Or if you get stuck, happy to unblock you/clarify things (it's hard for me to know what is confusing to folks new to the project.)
Agreed on fragment masking. Graphql-codegen added support for it but in a way that unfortunately is not composable with all the other plugins in their ecosystem (client preset or bust), to the point that to get it to work nicely in our codebase we had to write our own plugins that rip code from the client preset so that we could use them as standalone plugins.
> The main problem GraphQL tries to solve is overfetching.
this gets repeated over and over again, but if this is your take on GraphQL, you definitely shouldn't be using GraphQL, because overfetching is never such a big problem that it would warrant using GraphQL.
In my mind, the main problem GraphQL tries to solve is the same "impedance mismatch" that ORMs try to solve. ORMs do this at the data-fetching level in the BE, while GraphQL does this in the client.
I also believe that using GraphQL without a compiler like Relay or some query/schema generation tooling is an anti-pattern. If you're not going to use a compiler/query generation tool, you probably won't get much out of GraphQL either.
In my opinion, GraphQL tooling never panned out enough to make GraphQL worthwhile. Hasura is very cool, but on the client side, there's not much going on... and now with AI programming you can just have your data layers generated bespoke for every application, so there's really no point to GraphQL anymore.
> I also believe that using GraphQL without a compiler like Relay or some query/schema generation tooling is an anti-pattern. If you're not going to use a compiler/query generation tool, you probably won't get much out of GraphQL either.
How is this easier or faster than writing a few lines of code at BFF?
If you're interested in an example of really good tooling and DevEx for GraphQL, then may I shamelessly promote this video in which I demonstrate the Isograph VSCode extension: https://www.youtube.com/watch?v=6tNWbVOjpQw
TLDR, you get nice features like: if the field you're selecting doesn't exist, the extension will create the field for you (as a client field.) And your entire app is built of client fields that reference each other and eventually bottom out at server fields.
Wait, what? Overfetching is easily one of the top 3 reasons for the enshittification of the modern web! It's one of the primary causes of the incredible slowdowns we've all experienced.
Just go to any slow web app, press F12 and look at the megabytes transferred on the network tab. Copy-paste all text on the screen and save it to a file. Count the kilobytes of "human readable" text, and then divide by the megabytes over the wire to work out the efficiency. For notoriously slow web apps, this is often 0.5% or worse, even if filtering down to API requests only!
It is still a major problem, yes. Interestingly, if you go back to the talks that introduced GraphQL, much of the motivation wasn’t about solving overfetching (they kinda assumed you were already doing that because it was at the peak of mobile app wave), but solving the organisational and technical issues with existing solutions.
Hilariously, React Server Components largely solve all three of these problems, but developers don't seem to want to understand how or why, or they suggest that they don't solve any real problems.
I agree though worth noting that data loader patterns in most pre-RSC react meta frameworks + other frameworks also solve for most of these problems without the complexity of RSC. But RSC has many benefits beyond simplifying and optimizing data fetching that it’s too bad HN commenters hate it (and anything frontend related whatsoever) so much.
I'm probably about as qualified to talk about GraphQL as anyone on the internet: I started using it in late 2016, back when Apollo was just an alternate client-side state/store library.
The internet at large seems to have a fundamental misunderstanding about what GraphQL is/is not.
Put simply: GQL is an RPC spec that is essentially implemented as a Dict/Key-Value Map on the server, of the form: "Action(Args) -> ResultType"
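In toy form, that key-value-map view looks like this (illustrative only, not a real GraphQL executor; the field names are invented):

```typescript
// The map: "Action(Args) -> ResultType", literally a key-value dictionary.
type Resolver = (args: Record<string, unknown>) => unknown;

const resolvers: Record<string, Resolver> = {
  ping: () => "pong",
  user: ({ id }) => ({ id, name: `user-${id}` }),
};

// Executing a request is, at its core, dispatching on that map.
function execute(action: string, args: Record<string, unknown> = {}): unknown {
  const resolver = resolvers[action];
  if (!resolver) throw new Error(`Unknown field: ${action}`);
  return resolver(args);
}
```

Everything else (selection sets, fragments, the type system) is layered on top of this dispatch.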
As someone who’s used GraphQL since mid-2015, if you haven’t used GraphQL with Relay you probably haven’t experienced GraphQL in a way that truly exploits its strengths.
I say probably because in the last ~year Apollo shipped functionality (fragment masking) that brings it closer.
I stand by my oft-repeated statement that I don’t use Relay because I need a React GraphQL client, I use GraphQL because I really want to use Relay.
The irony is that I have a lot of grievances about Relay, it’s just that even with 10 years of alternatives, I still keep coming back to it.
For me it’s really about the component-level experience.
* Relatively fine-grained re-rendering out of the box because you don’t pass the entire query response down the tree. useFragment is akin to a redux selector
* Plays nicely with suspense and the defer fragment, deferring a component subtree is very intuitive
* mutation updaters are defined inline rather than in centralised config. This ended up being more important than expected: having lived the reality of global cache config with our existing urql setup at my current job, I'm convinced the Relay approach is better.
* Useful helpers for pagination, refetchable fragments, etc
* No massive up-front representation of the entire schema needed to make the cache work properly. Each query/fragment has its own codegenned file that contains all the information needed to write to the cache efficiently. But because they’re distributed across the codebase, it plays well with bundle size for individual screens.
* Guardrails against reuse of fragments thanks to the eslint plugin. Fragments are written to define the data contract for individual components or functions, so there's no need to share them around. Our existing urql codebase has a lot of "god fragments" which are incredibly painful to work with.
Recent versions of Apollo have some of these things, but only Relay has the full suite. It’s really about trying to get the exact data a component needs with as little performance overhead as possible. It’s not perfect — it has some quite esoteric advanced parts and the documentation still sucks, but I haven’t yet found anything better.
That's something you should only really do in development, and then cement for production. Having open queries where an attacker can find interesting resolver interactions in production is asking for trouble
> That's something you should only really do in development, and then cement for production
My experience with GraphQL in a nutshell: A lot of effort and complexity to support open ended queries which we then immediately disallow and replace with a fixed set of queries that could have been written as their own endpoints.
But has this been thoroughly documented and are there solid libraries to achieve this?
My understanding is that this is not part of the spec and that the only way to achieve this is to sign/hash documents on clients and server to check for correctness
Well, it seems that the Apollo way of doing it now, via their paid GraphOS, is backwards of what I learned 8 years ago (there is always more than one way to do things in CS).
At build time, the server generates random string identifiers that map onto queries, 1:1 and fixed, because we know exactly what we need when we are shipping to production.
Clients can only call those random strings with some parameters; the graph is now locked down, and the production server only responds to those identifiers.
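Something like this sketch, using a hash as the opaque id (hypothetical build step; real persisted-query setups differ in the details):

```typescript
import { createHash } from "node:crypto";

// Build step: every query the client can ever send gets an opaque id.
const knownQueries = [`query EpisodePage { episode(id: "e1") { title } }`];

const persisted = new Map(
  knownQueries.map(
    (q) => [createHash("sha256").update(q).digest("hex"), q] as [string, string],
  ),
);

// Production server: only ids are accepted; arbitrary query text is not.
function handle(queryId: string): string {
  const query = persisted.get(queryId);
  if (!query) throw new Error("Unknown persisted query");
  return query; // the real server would execute it here
}
```

The client ships with only the ids, so the open-ended query surface simply doesn't exist in production.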
I mean yeah, in that Persisted Queries are absolutely documented and expected in production on the Relay side, and you’re a hop skip and jump away from disallowing arbitrary queries at that point if you want to
Though you still don’t need to and shouldn’t. Better to use the well defined tools to gate max depth/complexity.
yup, and while they are fixed, it amounts to a more complicated code flow to reason about compared to your typical REST handler
Seriously though, you can pretty much map GraphQL queries and resolvers onto JSONSchema and functions however you like. Resolvers are conceptually close to calling a function in a REST handler with more overhead
I suspect the companies that see ROI from GraphQL would have found it with many other options, and it was more likely about rolling out a standard way of doing things
GraphQL is best if the entire React page gathers all requirements from subcomponents into one large GraphQL query, and the backend converts that query into a single large SQL query that requests all the data directly from the database, where table- and row-level security make sure no private data is exposed. Then the backend converts the SQL result into a GraphQL response, and React distributes the received data across subcomponents.
Resolvers should be an exception for the data that can't come directly from the database, not the backbone of the system.
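A crude sketch of that field-to-column mapping (table, column, and field names are all invented for illustration):

```typescript
// Hypothetical mapping from GraphQL fields to SQL columns.
const columnFor: Record<string, string> = {
  id: "u.id",
  name: "u.name",
  postTitles: "array_agg(p.title)", // Postgres-flavored aggregate, illustrative
};

// Turn the requested GraphQL fields into one SELECT, instead of firing a
// resolver (and its own query) per field.
function toSql(requested: string[]): string {
  const cols = requested.map((f) => columnFor[f]).join(", ");
  return (
    `SELECT ${cols} FROM users u ` +
    `LEFT JOIN posts p ON p.user_id = u.id GROUP BY u.id, u.name`
  );
}

const sql = toSql(["id", "name", "postTitles"]);
```

One round trip to the database per page query, with the database doing the joining it's good at.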
This is a genuinely accurate critique of GraphQL. We're missing some extremely table-stakes things, like generics, discriminated unions in inputs (and in particular, discriminated unions you can discriminate and use later in the query as one of the variants), closed unions, etc.
I strongly agree here, and would add that reasoning about auth flow through nested resolvers is one of the biggest challenges because it adds so much mental overhead. The reason is that a resolver may be called from completely different contexts, and you have to account for that.
The complexity and time lost to thinking about it are just not worth it; especially once you ship your GraphQL app to production, you are locking down the request fields anyway (or you're keeping yourself open to more pain).
I even wrote a zero-dependency auth helpers package and that was not enough for me to keep at it
Authz overhead for GraphQL is definitely a problem. At GitHub we're adding GitHub App support to the enterprise account APIs, meaning introducing granular permissions for each GraphQL resource type.
Because of the graph aspect, queries don't work until all of the underlying resources have been updated to support GitHub Apps. From a juice-vs-squeeze perspective it's terrible: lots of teams have to do work to update their resources (which, given turnover and age, they may not even be aware of) before basic queries start working, until you finally hit critical mass at some high percentage of coverage.
Add to all that the prevailing enterprise customer sentiment of "please, anything but GraphQL" and it's a really hard sell: it's practically easier and better to ask teams to rebuild their APIs in REST than to update the GraphQL.
I mean, the use of GraphQL for third-party APIs has always been questionable wisdom. I'm about as big a GraphQL fan as it gets, but I've always come down on the side of being very skeptical that it's suitable for anything beyond its primary use case: serving the needs of first-party UI clients.
GQL was always one of those things that sound good on the surface, but in practice it never delivers, and the longer you're stuck with it the worse it gets. The majority of tech is actually like this. People constantly want to reinvent the wheel, but in the end a wheel is a wheel and it will never be anything else.
The problem with this article is that GraphQL has become much more of an enterprise solution over the last few years than a non-enterprise one. Even though the general public opinion on X and HN seems to be that GraphQL has negative ROI, it's actually growing strongly in the enterprise API management segment.
GraphQL, in combination with Federation, has become the new standard for orchestrating microservices APIs, and the development of AI and LLMs gives it yet another push, as MCP is just another BFF, and that's the sweet spot of GraphQL.
Side note, I'm not even defending GraphQL here, it's just about facts if we're looking at who's using and adopting GraphQL. If you look around, from Meta to Airbnb, Uber, Reddit or Booking.com, Atlassian or Monday, GitHub or Gitlab, all these services use GraphQL successfully and these days, banks are adopting it to modernize API access to their Mainframe, SOAP and proprietary RPC APIs.
How do I know, you might say? I'm working with WunderGraph (https://wundergraph.com/), one of the most innovative vendors in the market, and we're talking to enterprises every day. We've just come home from apidays Paris, and besides AI and LLMs, everyone in the enterprise is talking about API design, governance and collaboration, which is where GraphQL Federation is very strong and the ecosystem is very mature.
Posts like this are super harmful for the API ecosystem because they come from inexperience and lack of knowledge.
GraphQL can solve overfetching, but that's not the reason why enterprises adopt it. GraphQL Federation solves a people problem, not a technical one. It helps orgs scale and govern APIs across a large number of teams and services.
Just recently there was a post here on HN about the problems with dependencies between Microservices, a problem that GraphQL Federation solves very elegantly with the @requires directive.
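For readers who haven't seen it, here is a minimal sketch of the pattern (type and field names are hypothetical): `@requires` lets one subgraph declare that its resolver depends on a field owned by another subgraph, and the federation router fetches the dependency before calling the dependent resolver.

```graphql
# Subgraph A owns Product and its weight.
type Product @key(fields: "id") {
  id: ID!
  weight: Float!
}

# Subgraph B (shipping) extends Product. @requires tells the router
# to fetch `weight` from subgraph A before resolving shippingCost here,
# so the dependency between services is declared, not hand-wired.
type Product @key(fields: "id") {
  id: ID! @external
  weight: Float! @external
  shippingCost: Float! @requires(fields: "weight")
}
```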
One thing I've learned over the years is that people who complain about GraphQL are typically not working in the enterprise, and those who use the query language successfully don't usually post on social media about it. It's a tool in the API tool belt besides others like Open API and Kafka. Just go to an API conference and ask what people use.
What I liked about GraphQL was the fact that I only have to add a field in one place (where it belongs in the schema) and then any client can just query it. No more requests from Frontend developers like „Hey, can you also add that field to this endpoint? Then I don’t have to make multiple requests“. It just cuts that discussion short.
I also really liked that you can create a snapshot of the whole schema for integration test purposes, which makes it very easy to detect breaking changes in the API, e.g. if a nullable field becomes not-nullable.
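A toy sketch of that snapshot idea (a real project would use the `graphql` package's `findBreakingChanges`; the helper names here are made up): diff the committed schema text against the live one and flag removals, which catches exactly the "nullable field becomes non-nullable" case.

```typescript
// Toy schema-snapshot check; real setups use graphql-js utilities.
function schemaLines(sdl: string): Set<string> {
  return new Set(
    sdl.split("\n").map((l) => l.trim()).filter((l) => l.length > 0)
  );
}

// Any line present in the committed snapshot but missing from the live
// schema is flagged; removals/renames are what usually break clients.
function removedLines(snapshot: string, current: string): string[] {
  const now = schemaLines(current);
  return Array.from(schemaLines(snapshot)).filter((l) => !now.has(l));
}

const snapshot = `
type User {
  id: ID!
  email: String
}`;

// The live schema made `email` non-nullable, a breaking change:
const current = `
type User {
  id: ID!
  email: String!
}`;

console.log(removedLines(snapshot, current)); // → [ "email: String" ]
```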
But I also agree with lots of the points of the article. I guess I am just not super in love with REST. In my experience, REST APIs were often quite messy and inconsistent in comparison to GraphQL. But of course that’s only anecdotal evidence.
But the first point is also its demise. I have object A, and want to know something from a related object E. Since I can ask for A-B-C-D-E myself, I just do it, even though performance or spaghettiness takes a hit. You then end up with a frontend that's tightly coupled to the representation at the time, when "in the context of A I also need to know E" could have been a specialized type hiding those details.
> You then end up with a frontend that's tightly coupled to the representation at the time, when "in the context of A I also need to know E" could have been a specialized type hiding those details.
GraphQL clients are built to do exactly that (Relay originally, and Apollo in the last year), if I'm understanding what you're saying: any component that touches E doesn't have to care about how you got to it. Fragment masking makes short work of it.
> No more requests from Frontend developers like „Hey, can you also add that field to this endpoint? Then I don’t have to make multiple requests“.
Do people actually work like this in 2025? I mean, sure, when you have entire teams just for frontends and backends, then yeah, but your average corporate web app development? It's all full stack these days. It's often expected that you can handle both worlds (client and server), and increasingly it's even a TypeScript "shared universe" where you don't even leave the TS ecosystem (React w/ something like RR, plus a TS BFF w/ SQL). This last point, where frontend and backend meet, is clearly the way things are going in general. These days React doesn't even beat around the bush and literally tells you to install it with a framework; no more create-react-app. Server-side rendering is a staple now, and Server Components are going to be a core concept of React within a few years, tops.
Javascript has conquered the client side of the internet, but not the server side. Typescript is going to unify the two.
> It's all full stack these days. It's often expected that you can handle both worlds (client and server)
Full stack is common for simple web apps, where the backend is almost a thin layer over the database.
But a lot of the products I’ve worked with have had backends that are far more complex than something you could expect the front end devs to just jump into and modify.
How do GraphQL based systems solve the problem of underlying database thrashing, hot shards, ballooning inner joins, and other standard database issues? What prevents a client from writing some adversarial-level cursed query that causes massive internal state buildup?
I’m not a database neckbeard but I’ve always been confused how GraphQL doesn’t require throwing all systems knowledge about databases out the window
Most servers implement a heuristic for "query cost/complexity" with a configurable max. At the time the query is parsed, its cost is determined based on the heuristic and if it is over the max, the query is rejected.
There are a lot of public-facing GraphQL servers that use it without issue, other than frustrating users with non-adversarial but complex requirements. The problem is that it is generally applied on a per-request basis.
An adversary is going to use more than a single query, so it mostly protects against well-intentioned folks.
Other forms of protection, such as rate limiting, are needed for threat models that involve an adversary.
The same problems exist with REST, but there it is easier because you can know query complexity ahead of time at each endpoint. GraphQL has to have something to account for unknown query complexity, hence the additional heuristics.
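As a rough illustration of such a heuristic (the `Field` shape and the numbers are invented; real servers walk the parsed GraphQL AST), cost can be charged per field and multiplied by the requested page size, so nested list fields blow past the max quickly:

```typescript
// Minimal sketch of a query-cost heuristic over a parsed selection tree.
interface Field {
  name: string;
  pageSize?: number;      // set when the field returns a paginated list
  selections?: Field[];
}

// Each field costs 1; a list field multiplies its children's cost by
// the requested page size, so nesting compounds multiplicatively.
function cost(fields: Field[]): number {
  let total = 0;
  for (const f of fields) {
    const children = f.selections ? cost(f.selections) : 0;
    const multiplier = f.pageSize ?? 1;
    total += 1 + multiplier * children;
  }
  return total;
}

function allow(query: Field[], maxCost: number): boolean {
  return cost(query) <= maxCost;
}

// friends(first: 100) { friends(first: 100) { name } } explodes fast:
const nested: Field[] = [{
  name: "friends", pageSize: 100,
  selections: [{
    name: "friends", pageSize: 100,
    selections: [{ name: "name" }],
  }],
}];
console.log(cost(nested), allow(nested, 1000)); // 10101 false
```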
> GraphQL isn’t bad. It’s just niche. And you probably don’t need it.
> Especially if your architecture already solved the problem it was designed for.
What I need is to not want to fall over dead. REST makes me want to fall over dead.
> error handling is harder than it needs to be
GraphQL error responses are… weird.
> Simple errors are easier to reason about than elegant ones.
Is this a common sentiment? Looking at a garbled mash of linux or whatever tells me a lot more than "500 sorry"
I'm only trying out GraphQL for the first time right now because I'm new to frontend stuff, but coming from life on the backend, having a whole class of problems compiled away, where the server and client agree on what to ask for and what you'll get, is so nice. I don't actually know if there's something better than GraphQL for that, but I wish that when people wrote blogs like this, they'd fill them with more "try these things instead for that problem" than simply "this thing isn't as good as you think it is; you probably don't need it".
If isomorphic TS is your cup of tea, tRPC is a nicer take on client-server contracting than GraphQL, in my opinion. Both serve that problem quite well, though.
On OpenAPI vs GraphQL: I disagree with the premise that OpenAPI achieves the same thing. GraphQL is necessarily tightly coupled to your backend — you can't design a schema that does something other than what's actually implemented. OpenAPI, on the other hand... I've seen countless implementors get it wrong. Specs drift from reality, documentation lies, and you're trusting convention. Sure, OpenAPI can do whatever you want, but for those of us who prefer convention over configuration, GraphQL's enforced contract is the whole point.
On authentication concerns: Yes, auth in GraphQL has varied implementations with no open standard. But REST doesn't thrive here either... it's all bespoke. This is a tooling problem, not a GraphQL problem. Resolvers become your authorization boundary the same way endpoints with controller actions do in REST. Different shape, same responsibility.
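One common shape for that boundary, sketched here with invented names: wrap protected resolvers in a higher-order function, so the check travels with the resolver no matter which parent field reaches it.

```typescript
// Sketch of "resolvers as the authorization boundary". All names are
// made up for illustration; real servers get Context from the request.
interface Context {
  userId: string | null;
  roles: string[];
}

type Resolver<A, R> = (args: A, ctx: Context) => R;

// Wrapping means the check cannot be forgotten at any call site:
// the resolver enforces its own requirement whether it is reached
// from the root query or via a deeply nested field.
function requireRole<A, R>(role: string, resolve: Resolver<A, R>): Resolver<A, R> {
  return (args, ctx) => {
    if (!ctx.roles.includes(role)) {
      throw new Error(`Forbidden: requires role "${role}"`);
    }
    return resolve(args, ctx);
  };
}

const deleteUser = requireRole("admin", ({ id }: { id: string }, _ctx) => {
  return `deleted ${id}`;
});
```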
On type generation: In my experience, the codegen tooling with Apollo and Relay is incredible. I haven't seen anything on the OpenAPI side that comes close to that developer experience.
This is only an issue if the spec is maintained manually. In my opinion, best practice is to generate the specification from the actual implementation—assuming you didn’t start by hand-crafting the spec in the first place.
If the spec is the source of truth, server and client stubs can be generated from it, which should likewise prevent this kind of drift.
I realize that working with OpenAPI isn’t always straightforward, but most of the friction usually comes down to gaps in understanding or insufficient tooling for a given tech stack.
If all your experience comes from Apollo Client and Apollo Server, as the author's does, then your opinion is more about Apollo than it is about GraphQL.
You should be using Relay[0] or Isograph[1] on the frontend, and Pothos[2] on the backend (if using Node), to truly experience the benefits of GraphQL.
Incidentally, v0.5.0 of Isograph just came out! https://isograph.dev/blog/2025/12/14/isograph-0.5.0/ There are lots of DevEx wins in this release, such as the ability to have an autofix create fields for you. (In Isograph, these would be client fields.)
Production-Ready GraphQL is a pretty good read for anyone who needs to familiarize themselves with enterprise issues associated with GraphQL.
My favorite saying on this subject is that any sufficiently expressive REST API takes on GraphQL-like properties. In other words, if you're planning a complex API, GraphQL and its related libraries often come with batteries-included conventions for things you're going to need anyway.
I also like that GraphQL's schema-driven approach allows you to make useful declarations that can also be utilized in non-HTTP use cases (such as pub/sub) and keep much of the benefits of predictability.
IMO the main GraphQL solutions out there should have richer integrations into OpenTelemetry so that many of the issues the author raises aren't as egregious.
Many of the struggles people encounter with the GraphQL and React stack come from the fact that it's simply very heavyweight for many commodity solutions. Much as folks are encouraging the monorepo route these days, make sure your problem can't be solved by server-side rendering, a simple REST API, and a little bit of vanilla JS. It might get you further than you think!
This doesn’t really make sense. Obviously, if you combine GQL with a BFF/REST layer you're going to have annoying double work: you're solving the same problem twice. GQL lets you structure your backend into semantic objects and then have the frontend do whatever it wants without extra backend changes, which lets frontend devs move way faster.
This is the true big benefit; the others talking about overfetching are not wrong, but they're overfocusing on a technical merit over the operational ones.
My frontend developers had their minds blown when they realized that because we’re using Hasura internally, the only backend work generally needed is to design the db schema and permissioning, and then once that’s done frontend developers aren’t ever blocked by anything (which is not a freedom that I would want to give to untrusted developers, hence emphasis on internal usage of GQL)
(Unfortunately Hasura has shifted entirely into this VC-induced DDN thing that seems to be a hard break from the original product, so I can’t recommend that anymore… postgraphile is probably the way)
There is a pattern where GraphQL really shines: using a GraphQL native DB like Dgraph (self-hosting) and integrating other services via GraphQL Federation in a GraphQL BFF.
I would agree that REST beats GraphQL in most cases regarding complexity, development time, security, and maintainability if the backend and frontend are developed within the same organization.
However, I think GraphQL really shines when the backend and frontend are developed by different organizations.
I can only speak from my experience with Shopify's GraphQL APIs. From a client-side development perspective, being able to navigate and use the extensive and (admittedly sometimes over-)complex Shopify APIs through GraphQL schemas and having everything correctly typed on the client side is a godsend.
Just imagining offering the same amount of functionality for a multitude of clients through a REST API seems painful.
I don't agree with the author on most of this. GraphQL is far better than REST in almost every way and I disagree that the server side resolvers are somehow difficult to write. In a true enterprise setting, the federation capabilities are fantastic.
There are plenty of things to dislike about GraphQL that he doesn't touch on, like:
* lack of input type polymorphism
* lack of support for map types
* lack of support for recursive data structures (e.g., BlogComments)
* terrible fragment syntax
I would encourage you to write an educated person's critique of GraphQL, because OP's article + https://bessey.dev/blog/2024/05/24/why-im-over-graphql/ etc. suck up all of the oxygen, and no one hears about the genuine issues like that.
(And don't forget lack of generics, no support for interfaces with no fields, lack of closed unions/interfaces, the absolutely silly distinction between unions and interfaces, the fact that the SDL and operation language are two completely different things...)
> GraphQL is far better than REST in almost every way
I hear this so often, but never do I hear more than one or one and a half ways in which it is better. No one seems capable of explaining how it's "better in almost every way" without diverging into very specific examples with cutout problems.
You may be interested in checking out https://www.youtube.com/watch?v=lhVGdErZuN4, where I talk about the benefits of Relay. This isn't (currently) possible without GraphQL, so it's a pretty compelling case for GraphQL.
But yeah, IMO, GraphQL doesn't justify itself unless you're using a client like Relay, with data masking and fragment colocation.
Exactly! Once its working, it can be very healthy. And especially on the client. For a very, very, very long time. We started using GraphQL at the very beginning, back in 2015, and the way it has scaled over time -- across backend and frontend -- has worked amazingly well. Going on 10 years now and no slowing down.
What I’ve realized over time is that the idea is beautiful, and the problem it solves is partly one of API/schema discovery.
Yet I am conflicted on whether it’s a real value-add for most use cases. Maybe if there are many microservices and you need a nice way to tie them all together. Or if the underlying DB (the source-of-truth data stores) can natively support responses in GraphQL. Then you could wrap it in a thin API transformation BFF (backend for frontend) per client and call it a day.
But in most cases, you’re just shifting the complexity + introducing more moving parts. With some discipline and standardization (if all services follow the same authentication mechanics), it is possible to get the same benefits with OpenAPI + an API catalog. Plus you avoid the layers of GraphQL transformations in clients and the server.
100% based on my anecdotal experience supporting new projects and migrations to GraphQL in < $10B market cap companies (including a couple of startups).
I tend to agree with the author. GraphQL has its use cases, but it is often times overused and simplicity is sacrificed for perceived elegance or efficiency that is often times not needed. "Pre-mature optimisation of root of all evil" comes to mind when GraphQL is picked for efficiency gains that may never become a problem in the first place.
Facebook invented GraphQL to solve a very specific problem for mobile devices back in 2012. Having to make multiple queries to construct the data needed in the FE on mobile clients constrains bandwidth (back then, over 3G networks) and is harmful for battery life, so this technology solved that problem neatly.
However, these days when server-to-server communication is needed over an API, none of the problems Facebook invented the protocol for are problems in the first place. If you really want maximum efficiency or speed you probably ought to ditch HTTP entirely and communicate over some lower level binary protocol.
REST is not perfect either, one thing I liked about SOAP was that it had a strong schema support and you got to name RPCs the way you liked, and didn't have to wrangle everything around the concept of a "resource" and CRUD operations, which often times becomes cumbersome to fit into the RESTful way of thinking if you need to support an RPC that "just does magic with multiple resources". These are the things I like about GraphQL, but on the other hand REST is just HTTP with some conventions, which you necessarily don't have to follow if things get in your way, and is generally simpler by design.
The only thing I wish with REST is stronger vendor support for Swagger/OpenAPI specs. One of the things my team supports is a concept of Managed APIs for our product: https://docs.adaptavist.com/src/latest/managed-apis — we primarily support RESTful APIs, but also a couple of GraphQL-based ones, and the issue we face is that the REST API specs for many products are either missing, incomplete, or simply outdated, so we have to fix them ourselves before we generate our Managed API clients, or write them by hand if the specs don't exist. It's becoming easier with AI these days, but one thing I personally regret from when we transitioned from SOAP to REST as a community is that strong schema support became a secondary concern. We could no longer just throw an API client generator at SOAP's WSDL and generate a client; we needed to start handcrafting the clients ourselves for REST, which is still an issue to this day unless a perfect spec exists, which in my experience is a rather rare occurrence.
We have a BFF and were considering for a while to go with GQL but eventually scrapped the idea: it seemed like a lot of work on the BE side.
But we are quite constrained on resources, so now even the BFF is consuming more and more BE development time. Now we are considering letting the FE use some sort of bridge to the BE's DB layer in order to directly CRUD what it needs and thereby skip the BFF API. That DB layer already has all sorts of validations in place. Because the BE is Java and the FE is JS, it seems the only usable bridge here would be gRPC. Does anyone have any other ideas, or has anyone done anything in this direction?
I work on an open source server project that is deployed in many different contexts and with many different clients and front ends. GraphQL has allowed us to not feel bad about adding extra properties and object to the response, because if a particular client doesn’t want them, they don’t request them and don’t get them. It has allowed us to be much more flexible with adding features that only few people will use.
Feels like a schema design issue?
If your REST backend exposes a single path to remove an item, are there any reason why your GraphQL schema doesn't expose a root mutation field taking the same arguments?
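Agreed. As a sketch (names illustrative), the mutation can take exactly the arguments the REST route does, e.g. mirroring `DELETE /items/{id}` one-for-one:

```graphql
type Mutation {
  # Same single argument as the REST endpoint; no extra graph ceremony.
  deleteItem(id: ID!): DeleteItemPayload!
}

type DeleteItemPayload {
  deletedItemId: ID!
}
```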
Exactly. If it's that verbose and painful for a public API like Shopify/GitHub (where the 'flexibility' argument is strongest), it makes even less sense for internal enterprise apps.
We are paying that same complexity tax you described, but without the benefit of needing to support thousands of unknown 3rd-party developers.
we have a mixed graphql/REST api at $DAY_JOB and our delete mutations look almost identical to our REST DELETE endpoints.
TFA complains about needing to define types (lol), but if you're doing REST endpoints you should be writing some kind of API specification anyway (Swagger?). So ultimately there isn't much of a difference. However, having your types directly in your schema is nicer than bolting on a fragile OpenAPI spec that will quickly become outdated when a dev forgets to update it after a parameter is added/removed/changed.
I hated GraphQL and all the hype around it, until I finally got how to use it and what for.
I thought the same about Nest.js and Angular.
All of them are hard to internalize at the beginning; later (a few years in), you feel it and get the value.
Sounds stupid, but I tried to reimplement all the benefits using class transformers, Zod, custom validators, and other packages, and I always ended up with: "alright, GraphQL does this out of the box".
REST is nice, same as Express.js, if you're writing non-production code. The reality is you need to love that boilerplate. AI writes it anyway, though.
The appeal of GraphQL is that it eliminates the need for a BFF and easily solves service meshing. Overfetching is more of a component design problem than a performance issue.
I thought that the main selling point of GraphQL was a single query per SPP argument, i.e. fetch your app state with a single query at the beginning instead of waiting for hundreds of REST calls. This also goes out of the window when you need to do some nested cursor stuff, though, i.e. open the app with the third page selected, and inside the page have the second table on the 747th row selected.
My hot take is that if you’re using GraphQL without Relay, you’re probably not using it to its full potential.
I’ve used both Relay and Apollo Client on production, and the difference is stark when the app grows!
- you don't have a normalized cache. You may not want one! But if you find yourself annoyed that modifying one entity in one location doesn't automatically cause another view into that same entity to update, it's due to a lack of a normalized cache. And this is a more frequent problem than folks admit. You might go from a detail view to an edit view, modify a few things, then press the back button. You can't reuse cached data without a normalized cache, or without custom logic to keep these items in sync. At scale, it doesn't work.
- Since you don't have a normalized cache, you presumably just refetch instead of updating items in the cache. So you will presumably re-render an entire page in response to changes. Relay will just re-render components whose data has actually changed. In https://quoraengineering.quora.com/Choosing-Quora-s-GraphQL-..., the engineer at Quora points out that as one paginates, one can get hundreds of components on the screen. And each pagination slows the performance of the page, if you're re-rendering the entire page from root.
- Fragments are great. You really want data masking, and not just at the type level. If you stop selecting some data in one component, it may affect the behavior of other components if they do something like JSON.stringify or Object.keys. But admittedly, type-level data masking + colocation is substantially better than nothing.
- Relay will also generate queries for you. For example, pagination queries, or refetch queries (where you refetch part of a tree with different variables.)
There are lots of great reasons to adopt Relay!
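To make the normalized-cache point concrete, here is a toy version (Relay and Apollo do this with far more machinery; every name here is invented): entities are stored once under a `__typename:id` key, so a detail view and an edit view read and update the same record, and changes propagate automatically.

```typescript
// Toy normalized cache: one record per entity, keyed by __typename:id.
type Entity = { __typename: string; id: string; [k: string]: unknown };

class NormalizedCache {
  private records = new Map<string, Entity>();

  private key(e: Entity): string {
    return `${e.__typename}:${e.id}`;
  }

  // Writes merge into the existing record, so a partial update from an
  // edit view doesn't clobber fields only the detail view fetched.
  write(e: Entity): void {
    const existing = this.records.get(this.key(e)) ?? e;
    this.records.set(this.key(e), { ...existing, ...e });
  }

  read(typename: string, id: string): Entity | undefined {
    return this.records.get(`${typename}:${id}`);
  }
}

const cache = new NormalizedCache();
// Detail view fetches the full entity; edit view writes a partial one:
cache.write({ __typename: "User", id: "1", name: "Ada", bio: "..." });
cache.write({ __typename: "User", id: "1", name: "Ada L." });
console.log(cache.read("User", "1")?.name); // "Ada L."
```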
And if you don't like the complexity of Relay, check out isograph (https://isograph.dev), which (hopefully) has better DevEx and a much lower barrier to entry.
The article pretty much sums up why I've been a bigger fan of OData than GraphQL, especially in the business cases. OData will still let you get all those same wins that GraphQL does but without a sql-ish query syntax, and sticking to the REST roots that the web works better with. Also helps that lots of Microsoft services work out of the box with OData.
- Overly verbose endpoint & request syntax: $expand, parentheses and quotes in paths, actions, etc.
- Exposes too much filtering control by default, allowing the consumer to do "bad things" on unindexed fields without steering them towards the happy path.
- Bad/lacking open-source tooling for portals, mocks, examples and validation versus OpenAPI & GraphQL.
It all smells like unpolished MS enterprise crap with only internal MS & SAP adoption TBH.
One interesting conjecture that GQL makes, I think, is that idempotent request caching at the http level is dead... Or at least can't be a load bearing assumption because the downstream can change their query to fetch differently.
Do we think this has turned out to hold? Is caching an API HTTP response of no value in 2025?
GraphQL is one of those solutions in need of a problem for most people. People want to use it. But they have no need for it. The number of companies who need it could probably be counted on both hands. But people try to shoehorn it into everything.
I wish I had read that before. It is very interesting and I would probably not have over-engineered my API so much (though I am not even using GraphQL).
A blog post about GraphQL in an enterprise setting, that fails to address the biggest GQL feature for enterprises. Not unlike most material on HN about microservices. Federated supergraph is the killer feature imo.
The author states that in their experience, most downstream services are REST, so adding a GQL aggregation layer on top isn't very helpful. It seems possible they would have a different opinion if they were working with multiple services that all implemented GQL schemas.
In that (common) case, the advantage is the frontend/app developers don’t need to know what a hot mess of inconsistent legacy REST endpoints the backend is made of, only the GQL layer does. Which also gives you some breathing room to start fixing said mess.
Another problem the article doesn't mention is how much of a hassle it is to deal with permissions. Depending on the GraphQL library you are using, sure, but my general experience with GraphQL is that the effort needed to secure a GraphQL API increases a lot the more granular permissions you need.
Then again, if you find yourself needing per-field permission checks, you probably want a separate admin API or something instead.
Using GraphQL, specifically Apollo, was one of my regrettable decisions when I was designing a system 3 years ago, and one that haunts me still today: weird bugs, too much effort to upgrade the version while the previous version still has bugs, etc. And I lost the performance and simplicity of REST on top of that.
It took me a while to learn the "right way" of doing Apollo. An alternative like Relay is much more opinionated so perhaps that would've helped me get there faster. But I eventually came around and now I agree that Apollo is an incredible piece of technology. I later worked on a REST API and found myself wanting to recreate much of Apollo. Especially the front-end caching layer.
Over a decade of web dev experience and constantly lurking on HN, I've never heard the initialism BFF. What is a Backend for Frontend and where did that term gain traction?
> The main problem GraphQL tries to solve is overfetching.
GraphQL is solving another problem: the problem of communication between frontend and backend teams. When the frontend team needs yet another field exposed, it needs to communicate this to the backend team. GraphQL lets them do this with code instead of a Jira ticket, and now the communication between the teams can be done asynchronously and batched. No more waiting for a backend implementation each time. And if the backend exposes too much, then it's a backend problem, and the frontend has nothing to do with it, so it again can be solved without granular communication between the backend and frontend teams.
GraphQL was created to solve many different problems, not just overfetching.
These problems at the time generally were:
1) Overfetching (yes) from the client from monolithic REST APIs, where you get the full response payload or nothing, even when you only want one field
2) The ability to define what to fetch from the CLIENT side, which is arguably much better, since the client knows what it needs and the server does not until a client is actually implemented (so this is hard to fix with REST unless you hand-craft and manually update every single REST endpoint for every tiny feature in your app). As mobile devs were often not the same people as backend devs at the time GraphQL was created, it made sense to empower frontend devs to define what to fetch themselves, in the frontend code.
3) At the time GraphQL was invented, there was a hard pivot to NoSQL backends. A NoSQL backend typically represents things as Objects with edges between objects, not as tabular data. If your frontend language (JSON) is an object-with-nested-objects or objects-with-edges-between-objects, but your backend is tables-with-rows, there is a mismatch and a potentially expensive (at Facebook's scale) translation on the server side between the two. Modeling directly as Objects w/ relationships on the server side enables you to optimize for fetching from a NoSQL backend better.
4) GraphQL's edges/connections system (which I guess technically really belongs to Relay which optimizes really well for it) was built for infinitely-scrolling feed-style social media apps, because that's what it was optimized for (Facebook's original rewrite of their mobile apps from HTML5 to native iOS/Android coincided with the adoption of GraphQL for data fetching). Designing this type of API well is actually a hard problem and GraphQL nails it for infinitely scrolling feeds really well.
If you need traditional pagination (where you know the total row count and you want to paginate one page at a time) it's actually really annoying to use (and you should roll your own field definitions that take in page size and page number directly), but that's because it wasn't built for that.
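The two styles look roughly like this in SDL (type names are illustrative): a Relay-style connection built for infinite scroll, next to a hand-rolled page-based field for when you need totals and jump-to-page semantics.

```graphql
type Query {
  # Relay-style connection, built for infinite scroll:
  feed(first: Int!, after: String): PostConnection!
  # Hand-rolled page-based alternative for jump-to-page UIs:
  postsPage(page: Int!, pageSize: Int!): PostPage!
}

type PostConnection {
  edges: [PostEdge!]!
  pageInfo: PageInfo!
}

type PostEdge {
  cursor: String!
  node: Post!
}

type PageInfo {
  hasNextPage: Boolean!
  endCursor: String
}

type PostPage {
  items: [Post!]!
  totalCount: Int!
}

type Post {
  id: ID!
}
```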
5) The fragment system lets every UI component builder specify their own data needs, which can be merged together as one top-level query. This was important when you have hundreds of devs each making their own Facebook feed component types but you still want to ensure the app only fetches what it needs (in this regard Relay with its code generation is the best, Apollo is far behind)
There's many other optimizations we did on top of GraphQL such as sending the server query IDs instead of the full query body, etc, that really only mattered for low-end mobile network situations etc.
GraphQL is still an amazing example of good product infra API design. Its core API has hardly changed since day 1 and it is able to power pretty much any type of app.
The problems aren't with GraphQL; they're with your server infra serving GraphQL, which outside of Facebook/Meta I have yet to see anyone nail really well.
It depends very much on the language/server you are using. In Rust, IMO, GraphQL is still the best, easiest and fastest way to have my Rust types propagated to the frontend(s), while making sure that I will have strict and maintainable contracts throughout the whole system. This is achieved via the "async_graphql" crate, which allows you to define/generate the GraphQL schema in code by implementing the field handlers.
If you are using something which requires you to write the GraphQL schema manually and then adapt both the server and the client... it's a completely different experience and not that pleasant at all.
The ability to pick fields is nice, but the article fails to mention GraphQL's schema stitching and federation capability, which is its actual killer feature, one yet to be seen in any other "RPC" protocol, except gRPC, which is insanely good for the backend but maybe too demanding for the web, even with grpc-web *1.
It allows you to split your GraphQL schema into multiple "sub-graphs" and forward them to different microservices, facilitating separation of concerns at the backend level while presenting them to the frontend as one unified graph: the best of both worlds, in theory.
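Roughly, in Apollo-style federation two teams own their own sub-graphs and the gateway joins them on a shared key. A sketch, with made-up types:

```graphql
# users sub-graph (owned by the accounts service)
type User @key(fields: "id") {
  id: ID!
  name: String!
}

# reviews sub-graph (owned by the reviews service)
# re-declares User with the same key so it can hang fields off it
type Review {
  id: ID!
  body: String!
  author: User!
}

type User @key(fields: "id") {
  id: ID!
  reviews: [Review!]!
}
```

The gateway composes these into one schema, so a client can query `user { name reviews { body } }` without knowing two services are involved.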
Yet unfortunately, both stitching and federation are rarely used in practice, due to people's lack of the fundamental ability to comprehend and manage complexity, and because web development moves so fast that one product replaces another year by year; the old code is basically thrown away and left unmaintained, eventually "siloified"/solidified *2. So it is natural that a simple solution like REST with OpenAPI/Swagger beats the more complicated GraphQL, because the tech market right now just wants to ship the product quick and dirty, get the money, then let it go, rinse and repeat. The last 30 years of VC is basically that.
So let me tell you the real reason GraphQL lost: GraphQL is the good money that was driven out, because the market just needs the money, regardless of whether it is good, bad or ugly.
It is so natural, and I've tried to make it run in the new single-file C#, plus dependency injection and NativeAOT... I think I posted the single-file code in their discussions tab, but I couldn't find it.
Another honorable mention would be this: https://opensource.expediagroup.com/graphql-kotlin/docs/sche..., which I used with Koin and Exposed, but I eventually went back to Spring Boot and Hibernate because I needed the integrations, despite loving the innovation.
*1: For example, why force everyone to use HTTP/2, and thus TLS enforced by convention? This makes gRPC development quite hard: you need self-signed keys and certificates just to start the server, and that is already a big barrier for most developers. And protobuf, being a compact and concise binary protocol, is basically unreadable without the schema/reflection/introspection, whereas GraphQL still returns JSON by default and lets you opt into MessagePack/CBOR based on what the HTTP request headers ask for. Yes, grpc-web does return JSON and can be configured to run over H2C, but it feels more like an afterthought, not something designed for frontend developers.
*2: Maybe the better word would be "enshittified", but enshittification is a dynamic process toward the bottom, while what I mean is more like rotting to death like a zombie. Is that too overboard?
I dunno. I still really like Lighthouse (for Laravel).
It's about the only thing about my job I still do like.
The difference is that it is schema-first, so you are describing your API at a level that largely replaces backend-for-frontend stuff. If it's the only interface to your data you have a lot less code to write, and it interfaces beautifully with the query builder.
I tend not to use it in unsecured contexts and I don't know if I would bother with GraphQL more generally, though WP-GraphQL has its advantages.
I don't like GraphQL; it feels strange to me (to my REST brain).
Despite the many REST flaws I know about, which can feel tedious sometimes, I still prefer it.
And now with AI that can scaffold most REST code, the pain points of REST are mostly gone.
Now that people are using tRPC a lot, I wonder: can we combine gRPC + REST into something essentially type-safe, where the client is guaranteed to understand what the model response looks like?
Yeah, but that's a React library. I'm talking about a standard like the OpenAPI schema but with the gRPC model, with discovery that can auto-build a model response and inject it into most programming languages.
GraphQL was designed to add types and remote data fetching abstractions to a large existing PHP server-side codebase. Cypher is designed to work closer to storage, although there are many implementations that run Cypher on top of anything ("table functions" in Ladybug).
Neo4j's implementation of Cypher didn't emphasize types; its relatively schemaless design made it easy to get started. But the Kuzu/Ladybug implementation of Cypher is closer to DuckDB SQL.
They both have their places in computing as long as we have terminology that's clear and unambiguous.
Look at the number of comments in this story that refer to GraphQL as GQL (which is an ISO standard).