Why not use GraphQL? (wundergraph.com)
460 points by jensneuse 79 days ago | 456 comments

For me GraphQL is the epitome (in the Web tier) of 'we need to solve the same problems as FAANG does'.

You'll likely never be in a situation where over-querying via a non-granular REST call is an issue worth optimising around.

If you're shipping multi-megs of JS to a client, don't then pretend that micro-optimising the API call waterfall is your KPI; it's disingenuous at best.

At best it's a band-aid around dysfunctional inter-team working.

As something of a war-weary veteran - I disagree.

It's a good instinct to be suspicious of new technology. But I have personally seen, many times, what REST APIs can grow into - unless you are very, very careful, "GET user/1" can turn into a god object with every field under the sun, non-optional. I've seen `users/1` be over a megabyte with (e.g.) comments, friends, comments of friends, likes, likes of friends, and every other thing the front-end team ever asked for. GraphQL solves that.

Yes, it can be avoided with strict, diligent design. But it often isn't. GraphQL solves the whole class of problem. And that's why I like it, despite my mistrust of "new hotness" technology.

We have a `user` GraphQL type. It has 200+ fields and resolvers into other data. Fortunately, clients don't need to pull everything.

Well, within one of our frontend apps, someone wrote a "GetUser" query fragment, wrapped it in a react hook to make it easy to use, and now anytime anyone anywhere wants to get a user, even with one field, they're getting 100+ fields they don't want. Anytime someone needs a new field on a user, they don't create a new query; they just add the field to that "GetUser" query.

Now, I've told several GraphQL advocates that setup (in our local community and online), and unfailingly they balk at it. Well, you're doing GraphQL wrong, everyone makes mistakes, you should be more selective, etc etc etc. I think that's fair; being more specific sounds like a good pattern.

Except, we are not certain the performance wouldn't suffer should we move everything to more selective queries. The issue is caching. If you have ten queries each requesting one field, the cache can't saturate until after ten network requests. But with one mega query, the remaining nine queries can just pull from the cache. Sure, the mega query takes longer than one mini-query, but it's not longer than ten mini-queries.

I only outline this to say: GraphQL is not a magic bullet. Every single thing you can do which would ruin a REST API is also possible in GraphQL. REST API responses naturally evolve into god objects. GraphQL APIs solve the god object issue server-side by saying "god objects are fine", then push the responsibility of not abusing it onto the client (and it should be obvious, the client is the worst place to solve almost every problem, but that's another topic).

GraphQL is easier and better for some things, far harder for others. At the end of the day, it's no worse than REST. I don't recommend it, but only because it's new; unless a new technology offers some very easy-to-express, tangible benefits for your business, just wait for it to mature. When you get down to it, GraphQL's benefits are definitely not tangible, but they're still there, and I think over time it will evolve into something very interesting.

I agree with your point about caching, except, I think, it is missing one important detail that makes your argument less one-sided against GraphQL.

What you described is absolutely correct, but only if we cache by query. If we cache by objects and fields, none of the issues you described are relevant, and caching by object/field (as opposed to caching by query) generally seems like the better practice IMO, aside from certain very specific scenarios.

In fact, it seems like the official GraphQL docs recommend that approach as well [0].

0. https://graphql.org/learn/caching/

No, it's still an issue. If you have QueryA, which requests User { firstName }, and QueryB, which requests User { lastName }, both queries have to request their data over the network before both fields are cached.

If, instead, you have QueryC which does User { firstName, lastName }, but two usages of it (QueryC1/QueryC2), after QueryC1 requests, QueryC2 can use the cached results. This works whether you're doing Query/Operation level caching or Field/ID level caching. The former example works in neither. And QueryC is trivially faster than QueryA+QueryB because of network overhead.
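The behaviour is easy to see with a toy simulation of a field/ID-level cache (a hypothetical sketch, not a real client implementation; `makeClient` and its field names are made up for illustration):

```javascript
// Minimal model: a client caches fields per object ID and only goes to
// the network when at least one requested field is missing.
function makeClient() {
  const cache = new Set();   // cached field names for User:1
  let requests = 0;          // network round-trips so far
  return {
    query(fields) {
      if (!fields.every(f => cache.has(f))) {
        requests++;                          // cache miss: one round-trip
        fields.forEach(f => cache.add(f));   // response saturates the cache
      }
      return requests;
    },
  };
}

// Scenario 1: QueryA { firstName } then QueryB { lastName }
const a = makeClient();
a.query(['firstName']);
const separate = a.query(['lastName']);            // two round-trips total

// Scenario 2: QueryC { firstName, lastName } used twice
const b = makeClient();
b.query(['firstName', 'lastName']);
const combined = b.query(['firstName', 'lastName']); // still one round-trip

console.log(separate, combined); // → 2 1
```

The second usage of the combined query is served entirely from cache, which is the asymmetry described above.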

This isn't necessarily an issue with GraphQL (and, I thought I was clear about this, but: I'm not against GraphQL). It's a behavior of both typical REST implementations and GraphQL. And, to be clear, GraphQL's out-of-box caching story is more powerful than that of any REST implementation I've seen short of hyper-engineered FAANG companies, because it enables really powerful field/ID-level caching across multiple queries. But it doesn't work in this case.

The point is that it's still very immature. Even the thought leaders (Apollo being the biggest one and worst offender) write blog posts filled with Best Practices and Recommendations that often convey horrible advice and flat-out misrepresent GraphQL's actual advantages compared to REST. GraphQL solves a lot of REST's problems; it does NOT solve REST's "god-object" class of problems, contrary to what the grandparent comment suggests; and it introduces many new classes of problems that remain unsolved in the ecosystem because of how immature it is. One great example is OSI L7 inspection in the many tertiary tools a typical SaaS app has. Products like Datadog, CloudFront, and AWS ALB can do some really cool and powerful stuff out-of-the-box just by inspecting standard HTTP headers. REST is basically just standard HTTP: your resource is in the path, query parameters, and HTTP headers; it's very standard. GraphQL is not, so many of these tools don't work out-of-box with GraphQL. People are catching up, but again, it's immature.

Thanks for clarifying, I genuinely appreciate comments like this one that go into actual details, without vague fluff or generalized claims. At this point, I am fully with you on this one.

I wonder if this can be statically analyzed? If I have two child components that request bits of data, in theory those requests could flow up the component tree and only be triggered at the root by a component that intelligently coalesces requests: add in some logic to bucket requests by some amount (50ms/100ms) or some logic to explicitly trigger pending requests and it might allow the best of both worlds?

I think any timebox-based batching strategy would effectively just trade frontend performance for backend performance. Your backend would have to deal with fewer requests, and there's always a number of "things" the backend needs to do with every request regardless of the content (e.g. check a token against a database), so fewer requests is nice. But some components would have to wait up to X milliseconds to get their data, and if we're talking about a network request that takes 100ms, balancing the value of X to be big enough to actually have an impact, while being small enough to not double the time it takes for some components to get data, would prove very difficult.

The backend performance you could gain is kinda spurious anyway. We're talking about N requests being coalesced into 1 mega-request; I would rather clients send me N small requests, not 1 mega-request. Horizontal scaling is easy to set up and quick to do in real-time; vertical scaling is harder.

And I think, while a solution like this is interesting in the domain of "you've got two sibling components rendered at the same time whose queries could be combined", that's not exactly the issue I described a few comments up. Caching really doesn't enter into play here; it would not be measurably faster to just issue one mega-request for one component and let the other wait for the cache (which is, in the end, what we're talking about). I mean, it saves one network request, but 90% of a network request is just waiting on IO, and if there's one thing JavaScript is really good at, it's sitting around and waiting. It can wait for dozens of things at a time.

The issue is more specifically surrounding two components being rendered at different times. Imagine a User Preview card which displays a user's first name; the user clicks on that card, which takes them to a new page, the User Profile, which displays both their first and last name. Short of using a God Query for the Preview which covers a broad set of common user fields, like firing a shotgun and hoping you've got the cache saturated for unknown upcoming components, this situation will take two requests. The shotgun approach is what we do, and what many companies do; it works, but it's imprecise. It can hurt the performance of the leading components which pulled the short stick, if that God Query gets too big, and as an application evolves you're gonna forget to keep that God Query updated with new things you may need.

This problem is enticing because it feels like there should be a solution, and we just haven't, as a community, found it. I think GraphQL, at its core, has the capability to solve this; it's not like REST, which is so undefined and hazy that any solution to something super-complex like this would only work in the domain of one company's "way of doing things". I hope one day we can figure it out, because I suspect that a general, easy-to-use way of preemptively saturating a cache like this would be a MASSIVE performance boost to every GraphQL client that's ever been written.

I think this is explicitly what Relay does.

Yes, caching is a pro FOR GraphQL, not against it.

I disagree. Where I work (and for some reason I expect this is the case in plenty of other places), APIs don't grow into being shit because the programmer doesn't know that "returning comments on the response of GET user/1" is bad, but because their manager asks them to implement that feature as fast as possible. Given that requirement, of course the guys will just throw the comments in that already existing endpoint instead of building "GET comments?user=John", right?

I'm a big fan of the saying "complexity has to live somewhere" and I think that's exactly what GraphQL is doing: moving the complexity to another layer.

My REST endpoints allow a `fields` parameter that specifies the requested fields of the endpoint. Even if you do have a huge object, you can pare it down. To get only `id` and `first_name` on a `User` endpoint, you do `.../rest/User/?fields={"id":true,"first_name":true}`. Non-standard, but effective. The backend doesn't retrieve or send the un-needed fields.

Solr has had that for ages in its REST API. They even have fancier, calculated fields one can request.


And you can extend this pattern by allowing "includes", which signal to the backend to load and deliver the relations listed in the includes part of the query.

This alone makes it impossible to get the backend to return god entities with dozens of relations without explicitly specifying them.

I accomplish this with the same `?fields` query param and it is fractal. You can specify which _fields_ of the relations below the User you also want.

If `User` had `groups` and `groups` had `permissions`, then I can do:

    let results = await fetch('/rest/Users/?fields=' + JSON.stringify({
        id: true,
        first_name: true,
        groups: {
          name: true,
          permissions: {
            name: true
          }
        }
    })).then(res => res.json());

My system has its limitations, however. It can't do relational constraints like GraphQL or renaming of fields, but it is likely the 20% that gets you 80% of the way.
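The server-side half of that fractal `fields` scheme can be sketched with a small recursive helper (hypothetical `pickFields` function and sample data; the actual backend presumably skips retrieval too, which this sketch doesn't model):

```javascript
// Recursively keep only the requested fields: `true` marks a leaf field,
// a nested object marks a relation to recurse into.
function pickFields(obj, fields) {
  const out = {};
  for (const [key, spec] of Object.entries(fields)) {
    if (!(key in obj)) continue;                 // ignore unknown fields
    if (spec === true) {
      out[key] = obj[key];                       // plain field
    } else if (Array.isArray(obj[key])) {
      out[key] = obj[key].map(item => pickFields(item, spec)); // to-many relation
    } else {
      out[key] = pickFields(obj[key], spec);     // to-one relation
    }
  }
  return out;
}

// Sample entity with nested relations, as in the User -> groups -> permissions example.
const user = {
  id: 1, first_name: 'Ada', last_name: 'Lovelace',
  groups: [{ name: 'admins', internal_flag: true,
             permissions: [{ name: 'read', level: 3 }] }],
};

const result = pickFields(user, {
  id: true,
  first_name: true,
  groups: { name: true, permissions: { name: true } },
});
console.log(JSON.stringify(result));
// → {"id":1,"first_name":"Ada","groups":[{"name":"admins","permissions":[{"name":"read"}]}]}
```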

To be honest, once you can traverse relationships like this, you've basically just got GraphQL in disguise and might as well be using it. You're going to have all the same difficulties as GraphQL in making it perform well, but don't have the elegant (IMO) query language -- and lose some of the expressivity of fields/relations being able to accept arguments.

Except I don't have to start with GraphQL. I can add the complexity when I need it instead of starting with it. Yes, I finally got _one_ of the features of GraphQL, but I didn't have to swallow GraphQL or require everyone else to. They don't have to use the field specifier, and it still looks very simple, in contrast to GraphQL.


My endpoints also have fields that will only be sent if you specifically request them.

What about nested structures and their fields? You can definitely roll your own anything if the cost of the widely used solution doesn't work for you.

Meh, people have been selling "technology X" using this exact same argument since the beginning of time. You could swap out the "GraphQL" in your final sentence for "micro-services" and I'd swear that I've seen it in another thread ^_^

The existence of a God Response / God Object / or just a generally horrific and coupled system strikes me as a particularly human-induced problem which no specific technology can actually fix.

I think that’s just pretty basic bad design, to be honest.

That's the whole point: REST facilitates and often encourages bad design.

How does it encourage this, and how does graphQL discourage this?

I've always held to a strict rule where the R in REST stands for Resource (which isn't true). There's a User, an Invitation (and not an invite action on User, nor a state:invited or invited_at on User). There's a Session. There are links that clients follow, and everything is as small as possible. I really don't see how REST encourages God objects.

I have seen many people try to solve the "too many requests" problem by slowly expanding REST responses into "god" objects.

GQL solves this problem elegantly by allowing you to specify the data that you need in a single request.

That is just one example, there are many many others that you can google if you are actually interested.

This is just going around in circles. “This thing is bad practice.” OK, don’t do that thing. This is not an argument against REST or for GraphQL.

I think the point the parent reply is trying to make is not that you cannot follow good practices with REST and that you somehow magically get it with GraphQL. I think their point is that GraphQL makes it much easier for devs to follow good practices and with less resistance.

Put differently, Graphql gives good defaults out of the box, and REST bad defaults.

Apologies for being rude, but that’s just nonsense. What ‘good defaults’ does graphql give you?

You can do a lot of the things that GraphQL does with just a regular old JSON HTTP API. It's just not automatic, and you will have to choose your own conventions.

You no longer have to code data selectors explicitly for each endpoint.

> I have seen many people try to solve the "too many requests" problem by slowly expanding REST responses into "god" objects.

Fair point.

But I'll ask my question again, worded differently: how does GraphQL solve this? Why does GraphQL solve "many requests", and why can't you solve or avoid this in REST?

> For me GraphQL is the epitome (in the Web tier) of 'we need to solve the same problems as FAANG does'.

I work at a small startup. We use a REST and Redux architecture. It's worked very well for us, but we're starting to hit some pain points. As we look to re-architect, our list of challenges pretty much mimic the challenges GraphQL looks to address. These aren't scaling issues or "big company issues". In my opinion, they're core issues with every client-server code base.

* Client Data and State Management

* API Coordination and Versioning (even more complex if you have a public API)

* Payload Management and Relational Data (it can be very expensive to pro-actively return embedded/dependent data, so you need to manage it somehow)

* Caching and De-duping

* Typing and Data Interfaces between Client/Server

* Error Management (not that hard to standardize, but still something you need to account for)

* Pagination, Filtering, Sorting, etc.

There are issues that you get with GraphQL that are generally much easier to solve with REST, though. While it's easy to set up a GraphQL API that technically makes your whole graph available, going off of your default resolver functions alone is going to cause serious performance issues as soon as queries start to get more complex. The default behavior is basically n+1. To fix that, you have to do query introspection, which is significantly more complicated than just having a dedicated endpoint. Depending on your data sources & how easy it is to cache, this can be more or less of a problem.
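The n+1 shape is easy to see with a toy sketch of default-style resolvers (a hypothetical in-memory "database"; real resolvers would hit a datastore):

```javascript
// Count every "database" round-trip the resolvers trigger.
let dbCalls = 0;
const db = {
  posts: () => {
    dbCalls++;  // 1 query for the list
    return [{ id: 1, authorId: 10 }, { id: 2, authorId: 11 }, { id: 3, authorId: 10 }];
  },
  userById: id => {
    dbCalls++;  // 1 query per author, resolved field by field
    return { id, name: 'user' + id };
  },
};

// Default resolver behavior: resolve posts, then resolve each
// post.author independently with its own lookup.
const posts = db.posts();
const withAuthors = posts.map(p => ({ ...p, author: db.userById(p.authorId) }));

console.log(dbCalls); // → 4 (1 for the list + 1 per post)
```

With n posts that is 1 + n queries, which is exactly what dedicated endpoints (or batching, below in the thread) avoid.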

This isn't to say that GraphQL is bad or anything, but there are definitely some gotchas that are important to evaluate before diving in.

The n+1 problem can be fixed without introspection by using dataloaders: https://github.com/graphql/dataloader. It's actually quite pleasant to work with them, but it's definitely one more thing you need to know.
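The core batching idea behind DataLoader can be sketched in a few lines (this is a toy model, not the library's actual implementation; `createLoader` and the user shape are made up for the example):

```javascript
// Collect every load() issued in the same tick and hand the keys to one
// batch function, so per-field resolvers share a single round-trip.
function createLoader(batchFn) {
  let queue = [];
  return key => new Promise(resolve => {
    queue.push({ key, resolve });
    if (queue.length === 1) {
      // Flush once per microtask: all loads from the current tick batch up.
      queueMicrotask(async () => {
        const batch = queue;
        queue = [];
        const results = await batchFn(batch.map(e => e.key));
        batch.forEach((e, i) => e.resolve(results[i]));
      });
    }
  });
}

// Three resolver-style loads collapse into one "database" round-trip.
let roundTrips = 0;
const loadUser = createLoader(async ids => {
  roundTrips++;  // one batched query for all requested ids
  return ids.map(id => ({ id, name: 'user' + id }));
});

const done = Promise.all([loadUser(1), loadUser(2), loadUser(3)])
  .then(users => console.log(roundTrips, users.length)); // → 1 3
```

The real library adds per-key caching and de-duplication on top, but the scheduling trick is the same.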

(I'm arguing theoretically here:)

> I work at a small startup.

This means you can afford a lot of technical debt, even such technical debt as might be incurred by GraphQL (as argued by the post and many people in this conversation).

The concerns you've posted are certainly valid, but GraphQL is not the only (or even necessarily best) way to address them.

If you have a huge engineering staff like FB, then perhaps it's better to just "move the complexity to the server side", but for most businesses that's not what's going to happen. What's going to happen in practice is DoS galore.

I agree that GraphQL is not the only way to solve them, but it does provide a consistent approach to most of them. At a startup, it's very, very beneficial to be able to point to externally maintained documentation and best practices. It's one less thing to do internally and can help onboard new employees more quickly.

I'd rather take on technical debt related to our product features than to load us up on fundamental networking and caching issues.

That's fair. Reasonable people can disagree about these things.

EDIT: Sorry, a bit of a rambly post, too tired and emotional to rephrase.

I will say, though, that "fundamental networking and caching issues" sounds very... weasel-wordy, if you see what I mean? It's not very concrete about what the actual problems you might experience would be. (Yes, latency and huge numbers of requests are issues, but is it really an issue until you get big enough?)

Almost all successful applications in existence until about 2018 (or so? Not sure exactly when GraphQL was invented) have done just about fine. If we're talking about a brand new type of application which couldn't have been done without it, then fine. If not... well, we're not talking about technical limitations.

I truly do see the appeal of GraphQL for developers, and especially the fast decoupled iteration that it enables, at least theoretically. The thing is that when you need more data on the client, you still have to add that data on the server... somewhere. It's great that the client can choose the overall "shape" of the data, but that doesn't solve the problem of the data not actually being on the server.

I'm surprised at all the people that prefer REST over GraphQL.

I'm a developer of my own projects for fun. I don't work in tech. My frontends are on iOS / Android / the web with Typescript. So my confusion is from a perspective without expertise.

I started with REST via Django and a few others, and now I've switched to GraphQL. I love GraphQL over REST backends due to the type automation tools such as GraphQL Code Generator and query tools like Apollo. Also, being able to construct a query and access children via one request is super nice. For example, my old REST APIs call for a post, then get pictures for that post, then get comments for that post, then get the username and other user data for the owner that made the comment. It's four requests. My GraphQL requests just get them all in one customizable request. Post can contain pictures and comments and all their properties. Comments can contain the owner of the comment and all its properties, including things like username.

The result is a typescript object that is strongly typed and has all the data I need. Before GraphQL, in REST, this would be four requests, three of which are in series (post -> comment -> owner to get username). I know I could make a custom REST API to do the same thing, but it was just so easy in GraphQL, I didn't have to worry about it.
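The single request described above would be shaped roughly like this (the field names are illustrative, not an actual schema):

```graphql
query PostPage($id: ID!) {
  post(id: $id) {
    text
    pictures { url }
    comments {
      text
      owner { username }
    }
  }
}
```

One round-trip replaces the serial post -> comments -> owner request chain, and the code generator can derive the TypeScript result type directly from this document.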

I'm not working in teams, I'm not creating a super large backend, it's not complex projects... so I don't know what I don't know.

But what am I missing? Maybe not everyone needs / uses the typing I do? That's the big benefit for me. Maybe not everyone cares about being able to query children (or create children via a nested create)? Maybe these things were easier than how I was doing it in REST?

These are all excellent points in favor of productivity with GraphQL. I always avoid new technologies for a few years to let the hype cycle mature (for example, everybody talking about how great mongodb is and writing blog posts about how they migrated, followed two years later by everybody talking about how terrible mongodb is and how they had to migrate to postgresql). As a dev working on a commercial software project, especially if you're in leadership, you have to look very carefully at every technology you introduce into your stack, because you are going to be supporting it for years. If it turns out to be the wrong decision, or you get caught out by a bunch of edge cases that weren't well known in the early days, you can end up losing millions of dollars in productivity, and possibly killing the company when you can't fix bugs and ship new features. So I've been suspicious of GraphQL, but keeping an eye on it. I still don't know if the productivity gains you get up front are actually technical debt in disguise, but if that turns out not to be the case, and it really is a matter of only having to write one query instead of making five requests and several dozen lines of types, validators, and async redux middleware, I'll be happy to adopt it in the future.

> Maybe these things were easier than how I was doing it in REST?

That might be the case. Those things (such as returning children - and generally crafting a custom endpoint for specific use-case) are indeed very easy in my language/framework of choice (ASP.NET with a little help from LINQ).

I also get automatic generation of client-side data classes and strongly typed functions that encapsulate endpoints on the server using a small code generation framework that literally took 2-3 days to write (it's really not that complex).

You (and we) have no clue how bad your code is.

You're suffering from Dunning-Kruger.

For example, what you described about making 4 requests with your 'REST' API, you didn't have to do that. YOU made that terrible API. That's not a feature of REST, but you seem to have mixed the two up in your head.

What feature of REST would allow you to separate your entities as they should be while being able to arbitrarily query them together?

The feature where you can write an endpoint that returns whatever it is that you need. Granted, it’s not automatic as in some GraphQL frameworks - not sure if that’s what you meant by “arbitrary”. But then again, such things are a step beyond just using GraphQL as a protocol.

The comparison would be:

REST: any time there is a new "view", you need to go to your server and write a new endpoint.

GraphQL: you can stitch data together however you want, as long as that data is defined.

So, for example, if I need the name of someone and the URLs of their children, querying alone is enough with GraphQL, whereas with REST I either need to fetch multiple times or write a new endpoint that serves both together.

The idea of a REST endpoint is to represent a resource, not a view. How will you bring good RESTful API design together with the requirement of an API that serves one very specific view depending on multiple REST resources? This seems to be in conflict, doesn't it? What's your idea for resolving this conflict without abandoning the principles of RESTful API design?

> The idea of a REST endpoint is to represent a resource, not a view

Resource is not what you think it is. The conflict you’re talking about doesn’t exist.

GraphQL also allows the frontend team to iterate faster, because they don't need to ask the API team to change an endpoint, or add one. The API team also moves faster: they define the relationships and they are done, with no going back and updating endpoints.

GraphQL creates a data graph that powers the UI; REST creates a database that you query from the UI and have to compose together and manage yourself.

In my experience GraphQL slows down all teams. If you're on a new project and that's the stack you pick, it can work well for a while, but as soon as your product is large enough you get multiple GraphQL APIs built by different teams that don't behave in a similar fashion. If you're in an org with historical code, now you've got APIs that are RESTful and some that are GraphQL-based. Making those work together is either a hack on the frontend or a rewrite project.

Ultimately the thing that speeds up teams, again in my experience, is predictability not flexibility. If you tell someone they have a RESTful API they know exactly how to work with it, document it, etc. GraphQL which is shiny, new, and used by in-vogue orgs is fine, you can use it to get the job done, but I don't believe that it warrants much praise.


It is. ROFL!?

What if you don’t have separate teams, or what if your teams are in the same room? Why do you want your frontend to select its own data rather than consuming predefined known quantities? Is this not analogous to type safety? Is GraphQL not in fact closer to having a database that you query from the frontend, not least of all because it literally has ‘query language’ in the name?

> What if you don’t have separate teams, or what if your teams are in the same room? Why do you want your frontend to select its own data rather than consuming predefined known quantities?

Then you don't share many of the reasons GraphQL was originally created for. Act accordingly.

I agree. Sometimes it seems like GraphQL degenerates into a generic REST API where you construct and pass the SQL directly from the client.

Another way to put that is to say that GraphQL is just a way to ship your raw DB schema as JSON. I thought we learned that it was bad to shape your front-end code to mirror the DB schema, sometime back in the early dot-com era.

But the entities, names, dates, and photo URLs in my database are exactly what I want to show on the frontend, by and large. Maybe a join, aggregation, or something in there.

Right, but now when you decide on a schema change, your frontend clients are going to break, because you didn't create an API, you just allowed DB querying from the front end.

Web API contracts are primarily routes and request/response data structures. This allows front end and backend concerns to be separate which allows a lot more flexibility over time for both front end and backend developers. From my limited knowledge of GraphQL, this is still possible but more work than just exposing your db schema types/DTOs.

At my last place the "product" team owned the clients (website, app) and the GraphQL layer, and the "platform" team owned the business logic just beneath the GraphQL layer (it happened to be in monolith though, so the boundaries between teams were intentionally fuzzy).

You do realize other patterns exist where the frontend team is largely in control of the first endpoint, right? Backend-for-Frontend is one such pattern, although I'm sure that is nothing new either.

> because they don't need to ask the API team to change an endpoint

Bollocks. What happens is you need some field implemented in the microservice or DB that your GraphQL layer talks to. This task gets tossed into your ticket system, gets managed by half a dozen managers, and weeks to months later you finally get the field you need and can finally bubble it up through your fancy toy API.

> GraphQL creates a data graph

does anyone even know what a graph is today? Serious question.

This seems to raise the issue of a higher degree of the DB structure getting exposed to the frontend. Faster iteration of the front end (presumably during a period of more rapid change) comes at the cost of forcing the backend to remain more static.

That seems like a fair trade-off to the oft-seen alternative of the front-end remaining more static because of slow iteration of the back-end, in the sense that I think that there are many situations in which that would be the least bad option.

How many features do teams tend to build that use exclusively existing data, versus new data? How much existing data is lying around that isn't already exposed via API?

I'm not sure that the query layer solves team collaboration and prioritization problems.

Redesigns, UX changes etc, will all change how a UI interacts with data but won't necessarily require any changes to the underlying data model.

Adding brand new capabilities to a system (or refactoring existing ones) will usually requires changes to the GraphQL layer, but this is only a subset of frontend work. Iterating on what already exists also represents a lot of development time.

> You'll likely never be in a situation where over-querying via a non-granular REST call is an issue worth optimising around.

While I agree with your beginning sentence, the one I quoted doesn't sit well with me. It would be good to remember that not everybody has 100 Mbit fiber or 5G download speeds.

We all saw the data on how milliseconds affect user retention, then proceed to completely forget about it when building our apps.

I think we can do better.

Even if users don’t have 5G, why is your rest api so slow that it’s an issue? Why is graphql the solution? The featured article points out that optimising rest can be a better solution, in terms of time and effort, than implementing graphql. The problem that graphql solves isn’t this one.

Agreed, but I think the point was that over-fetching is likely very far down the list of optimisations.

> If you're shipping multi-megs of JS to a client, don't then pretend that micro-optimising the API call waterfall is your KPI; it's disingenuous at best.

It absolutely is the bottleneck in almost every web app that relies on Ajax fetches. The latency is an absolute killer.

On every app that I've optimised, I have to get people to stop going with their gut and look at the traces (both synthetic and real-user). People generally think they should be optimising their JS execution. But in fact the most important thing is usually sequencing loading correctly, followed by minimising JS bundle sizes.

Modern browsers are getting really good at parallelising loading and parsing JS and other resources. But if you're whacking some fetch in there that happens as a result of JS execution then you're generally looking at a wasted 300-500ms for most endpoints/device combos.

Well said. Plus, proper query-based load balancing now has to read, parse, and interpret (based on the currently deployed application) the entire HTTP request in order to make any balancing decisions, whereas header-based (e.g. path-based) balancing only has to read a small portion of the request before forwarding all remaining connection traffic to the destination backend service.

GraphQL has never made sense to me for anything beyond toy or small-scale projects.

I completely agree. Too often developers fall into the trap of engineering solutions inappropriate to the size and scale of their business.

This 1000x

Working for a company much smaller than FAANG, I've seen issues with REST. However, the waterfall problem was a small issue compared with problems with bandwidth and server-side compute when you are processing and returning more data than is actually needed. That isn't to say that graphql is a magic bullet for those problems, but the issues with REST can show up far before you get to FAANG scale.

One good fit we may have found is for back office/admin.

In a company you often have APIs geared for client apps, and then internal APIs between services. But rarely anyone develops good APIs for backoffice. And then you have an admin interface struggling to retrieve the data it needs from a dozen different API calls.

Slap a simple GraphQL server in front of these dozen calls, and you have a more streamlined development for internal tools.

Sounds like you agree with the OP; " But rarely anyone develops good APIs for backoffice" describes an organizational problem, and the way you've described using GQL does indeed sound like it's bandaiding that problem.

Indeed, this is more of an organisational problem and the question of what to prioritise next. Backoffice is usually de-prioritised in favor of customer-facing features.

GQL has issues like every tech does, but it feels so much more ergonomic from an api consumer point of view.

Simply point Insomnia at the base URL and it’ll introspect the schema and provide typed query auto-completion. No dozens of requests to different endpoints with different query parameters that can only be kept track of by having the API docs open on a second screen.

It's only ergonomic if you've never done REST before.

I'm not being flippant. If you're already using REST tools such as Postman and every caching tool in existence (browser, http proxy, etc. etc.) then GraphQL tossing it all out is the exact opposite of "ergonomic."

The caching story in GraphQL is a joke. What the browser gives you for free with GET caching takes weeks and months of fine-tuning and tweaking and head scratching with something like Apollo. Then you'll probably try decreasing the payload size (there's a third-party solution for this), or batching requests together (guess what... there is a third-party solution for this too). The amount of tooling and implementation work GraphQL needs to get up to par with built-in REST is pretty incredible.

Some people do need to solve the same problems as FAANG, and GQL is an excellent solution to those problems.

Some people also want an agile api surface so they can iterate quickly, GQL is an excellent solution to that problem.

It is also an excellent solution for:

- client side caching
- unified RBAC
- service stitching / microservice hell
- API documentation via GraphiQL
- decoupling FE dev from API dev
- API automation
- typed API surface

The list goes on.

To say that GQL is just a band-aid for bad leadership is nonsense, and discounts the very real reasons that many, many experienced teams are switching to it.

I would encourage you to re-evaluate your position, as if you voiced those opinions in an interview with basically anyone I know, you would quickly get passed up in favor of someone who actually knows what they are talking about.

Boo. Let's not be too close-minded. I'm a FAANG guy. A lot of our problems aren't even these mystical "FAANG problems" people talk about, and they get solved by REMARKABLY boring tech. So, I'd welcome his opinion in an interview. ^_^

Picking a non-sexy, non-scalable, "wrong" technology that does nothing more than solve the actual requirements at hand is a rare and amazing quality for an engineer to have, imo.

Everyone wants to build infinitely flexible, infinitely scalable machines using the most rapid iteration tools possible as though that's what "engineering" is. But sometimes... all you need is a REST API. Sometimes you need GraphQL. Sometimes, all you need is to stuff data in a bucket somewhere and call that an "integration point". All solutions have trade offs. Pretending otherwise, or faulting others for weighing those trade offs differently than you, is silly.

More of this please :)

That is exactly the kind of problem solver I like to work with. Most of the time, “boring” tech will solve the problem cheaper, faster and as reliable and scalable as any shiny new tech. I work with GraphQL, REST APIs and even JSON RPC APIs on a daily basis and when you truly use those tools, you get to know where they really shine.

And then 50 people stuff data into different "buckets" and call them "integration points" and you spend months refactoring their horrible choices after you have identified why your services can't scale.

Non-sexy "wrong" tech is exactly that. Pick smart tools that do the work for you so that you don't have to babysit your teams' choices and micromanage every project.

I have weighed the trade offs, and it is literally my job to identify the problems that come from them.

Ah, ok, guy. We get it. It's either your way and your preferred tech or it's total unbridled chaos which is doomed to fail. Turns out there IS a silver bullet after all. I'm embarrassed.

Yes, because I am the only person on the planet advocating for gql, and more broadly advocating against allowing individual developers to drop data into "buckets" or whatever point you were trying to make about how freelancing in production systems should work.

I am glad you understand now.

> I would encourage you to re-evaluate your position, as if you voiced those opinions in an interview with basically anyone I know, you would quickly get passed up in favor of someone who actually knows what they are talking about.

I would encourage you to re-evaluate how you react to opinions that diverge from your own, especially in an interviewing context. GQL is not a panacea, and candidates should not be discounted because they understand this.

OP stated:

"At best it's a band-aid around dysfunctional inter-team working."

I would absolutely discount a candidate for having a moronic opinion like that.

> I would encourage you to re-evaluate your position, as if you voiced those opinions in an interview with basically anyone I know, you would quickly get passed up in favor of someone who actually knows what they are talking about.

Ha. Now we’re introducing threats. Don’t agree with my technical opinions? You’ll never work in this town again! Funny, funny stuff.

Unfortunately he's somewhat right. Not because the interviewers necessarily know wtf they're talking about - they probably don't. But going against the current hype in an interview can only hurt your chances. Liking something like GraphQL, or React, or Framework Of The Month can't really hurt you even if your interviewer doesn't like it. But not liking it can be a real issue! I would suggest at least staying neutral on the current hype stacks in an interview. It's like debating religion - don't go there...

So not liking the tools your employer uses makes you less appealing as a candidate?

Also, React is a hype stack now?

Almost anything FB (and FAANG) builds is hyped to infinity. Everyone drank the Kool-Aid on this one.

React and GraphQL are extremely hyped. People build static HTML pages with React because... React. People use GraphQL APIs because... GraphQL.

These days the first thing every JS developer I've seen asks is: can we use GraphQL (by which they in fact mean Apollo)?

I'm not saying it doesn't have its use-cases and merits, but it's very much hype tech.

Edit: typo

You would expect to be able to have a discussion of the pros and cons, and recognition at the least that other approaches are viable.

> React is a hype stack now?

Yeah, kinda. You definitely need some sort of front end framework, but I feel like there should be something better than react. I use react every day and it’s fine, but I’m waiting for something else to come and take its dominant position.

Idk, a lot of the back office apps I build are basically glorified forms with some content pages / dashboard. You really don't always need a framework.

Try svelte, see if you like it

Yeah, svelte is cool.

I build apps for living and hiring a svelte dev is not going to happen. Stick to react and go hiking on the weekends with the guys in your svelte meetup.

We're a GraphQL town, buddy. You take your [use of literally any technology which isn't GraphQL, because GraphQL is the best, because nothing else needs to be used anymore] and gtfo of here.

Well I think it was just a bit of advice really, not a threat.

If you can't see the positive benefits of a new type of technology, then you might be passed up for someone else more open-minded.

But yes, I agree that you should be critical of all new tech, and not just accept it blindly because FANNG says so...

> If you can't see the positive benefits of a new type technology, then you might be passed up for someone else more open-minded

Conversely, if you are also incapable of seeing the drawbacks, then you are a bad engineer who also deserves to be passed up for someone else.

It wasn't a threat, just a suggestion that denigrating some extremely useful tech that is saving organizations tons of time and money may not be a winning career strategy.

> I would encourage you to re-evaluate your position, as if you voiced those opinions in an interview with basically anyone I know, you would quickly get passed up in favor of someone who actually knows what they are talking about.

If you are a person who is not capable of seeing drawbacks as well, then you would also be a bad engineer who should also be passed up for other candidates, who are more capable of evaluating both the benefits and drawbacks of certain solutions.

If your workplace is solving FAANG type problems sure you would want someone who supports the concept of GQL.

Is everyone you know solving FAANG problems? If they are not, are they emulating those FAANG companies' tooling because they expect to be one or work at one in the future? Or is it a type of signaling?

There is this over-tooling, over-scaling wave going on.

Most startups fail because, with their limited investments, they over-tool and under-sell.

Do you work at a startup or a FAANG?

GQL has solved legitimate problems for us at scale, and is flexible enough to be used on some of our smaller projects as well.

I don't see the tradeoff that people are talking about, with GQL there has been nothing but upside.

Sounds like a great way to filter out employers.

I dont disagree with this, but I also see in our case that good tooling is immensely valuable in keeping things consistent and moving forward (in the same direction) without having to discuss every change.

I would strongly disagree. Scale is scale, and though I don't have the volume problems that FAANG have, I also have substantially fewer resources.

Being able to very quickly connect a series of tools and tech together in a way that is selective of the data being moved around saves me money and time, period. That matters.

> it's a band-aid around dysfunctional inter-team working.

If inter-team communication isn't a hard problem in software development, why are we even talking about API technologies?

If inter-team communication wasn't that hard, you could just email pc and ask him to email you a snippet of Ruby to process a credit card payment.

I respectfully disagree, at least in some contexts. If you support an API that has multiple clients with ever-changing needs, GraphQL can make your life much simpler, with much less code. I can only speak to the experience of using Lacinia in Clojure, but I found it to be fairly easy to use.

I thought micro-services is the epitome of 'we need to solve the same problems as FAANG does'

"You'll likely never be in a situation where over-querying via a non-granular REST call will ever be an issue worth optimising around."

I've def been in this situation before. But I agree that it is not the norm

You posted this comment 10 hours ago and it's at the top, despite the down votes and biased opinions by people who have a lot to lose if you're right. The silent majority seems to agree with you.

Eh, I think it's "get off my lawn" folks supporting this -- I'd like to see a tally of folks who've actually built GraphQL apis.

I do agree a lot of software is built just for the sake of software - there is loads of needless complexity in things. But GraphQL is not that imho.

> You'll likely never be in a situation where over-querying via a non-granular REST call will ever be an issue worth optimising around.

Sure, but that's only one advantage of GraphQL. Personally, I'd use GraphQL even for the smallest of apps. With statically generated sites, you can forgo a back-end server and just set up a GraphQL instance. It's a one-time cost to learn, and then you easily set up your APIs, and if you are using React/Vuejs, it's even easier to integrate with your API.

You're making a very good point here. You should always ask yourself if it's really worth it.

So a bit like microservices?

You say band aid, I say that it can be a great catalyst to "leave past, bad practices behind". Don't discount the psychological effect of switching to a new paradigm. It can truly empower people to feel that they can "do it right".

I'm not saying it's enough, far from it. But it can be a very liberating first step.

Disagree - GraphQL is a legit better way to build APIs.

As a backend engineer, I publish a schema of all the available data - the frontend can fetch it in any shape it prefers. Web UIs and mobile UIs often have different views, so they want different data, and fetching it in one request or many is up to the preference of the UI.

I don't have to build different REST endpoints presenting some data in a different shape for performance reasons or because different applications wanted to request the data in a different way.

As a frontend engineer, all the data is often already there unless I'm building some new feature the backend doesn't understand yet. I'm free to refactor my application without bothering changing the REST API - and GraphIQL lets me inspect the api and mock queries for how I might fetch the data for a given view.

It's a cleaner contract to let different people get their job done with less fuss - game changer.
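To make the contrast concrete, here's a hypothetical schema with two clients requesting different shapes from the same field (all type and field names are illustrative, not from any real API):

```graphql
# Schema the backend publishes once
type User {
  id: ID!
  name: String!
  email: String!
  avatarUrl: String
  friends: [User!]!
}

# The mobile UI asks only for what its small view needs...
query MobileHeader {
  user(id: "1") {
    name
    avatarUrl
  }
}

# ...while the web dashboard fetches a richer shape in one request,
# without the backend adding a second endpoint for it.
query WebDashboard {
  user(id: "1") {
    name
    email
    friends {
      name
      avatarUrl
    }
  }
}
```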

Core Contributor of urql here, another JS GraphQL client.

I think this post is very good at stating why the particular post it’s referencing (“Why GraphQL” in the Apollo docs) isn’t a full picture overview, and it succeeds at that very well, but I think it doesn’t go on to expand on these views just as much as I’d like to.

If we look at popular comments from the GraphQL community on this, it isn’t hard to find the claim that “the quickest & easiest client is to just fetch a GraphQL API” (said quite frequently in similar forms), and that’s an excellent point.

But going on to say:

> However, you should always consider the cost of adding a GraphQL client to your frontend.

This is definitely true, but as a creator of another GraphQL client I’d say that there are major benefits as well, the main selling point of Apollo, Relay and urql (for the latter this is optional) being that you can utilise a normalized cache.

But these tools often provide great frameworks to solve some of the problems that are stated to be woes with GraphQL, like persisted queries, subscriptions, file uploads, etc. So they’re a great starting point to dip into multiple parts of the community and get out-of-the-box solutions for your problems.

And generally I’d say that sums up GraphQL: it’s not novel or anything new. Instead it combines a lot of ideas into a solid community and ecosystem. With tools like Swagger or gRPC it’s easy to see how any part we look at for GraphQL isn’t novel.

What is great is that the exact set of problems it solves it does so with a larger (and growing) community and a standard that encompasses multiple of these solutions.

Finally, I’d say again, GraphQL clients and servers aren’t “all of GraphQL,” Apollo isn’t GraphQL (although they’ve made themselves synonymous with it) and hence there are different tools, clients, and libraries to create GraphQL servers, clients, and interact with GraphQL APIs. As a user you’ll still have to choose, but a lot of the good parts come from agreeing on one language and standard that allows for introspection and the ease with which we interact with these APIs on the client-side, from simple HTTP calls to more complex caching clients.

Hey, thanks for your reply. I think you guys have done an amazing job creating a very powerful GraphQL client. However, to me "smart" GraphQL clients don't make much sense. My approach with WunderGraph is the following: You write down all your operations using GraphiQL. We automatically persist them on the server (WunderNode) and generate a "dumb" typesafe client. This client feels like GraphQL and behaves like GraphQL but is not using GraphQL at all. It's just RPC. This makes using GraphQL more performant and secure at the same time. Additionally it's a lot less code and a smaller client because those RPCs are a lot simpler.

That sounds really cool!

But I don't think it addresses why I'd want a "smart" GraphQL client: normalized caching on the client.

Say I have a dashboard where multiple panels on a given page make their own requests, since the panels are shared between many pages. But they share some objects. If I get updated data in a request from one panel, I'd like to see that update in all panels, without triggering more requests.

Side note that a magic layer to have each of those components combine their requests into one would actually hurt performance, since it's better to load the requests in parallel. And manually merging them into one would be quite a chore.
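The dashboard behavior described above can be sketched with a toy normalized cache: entities are stored once, keyed by type and id, so an update seen by one panel's query is visible to every other panel that references the same object. This is an illustration of the idea only, not any particular client's implementation:

```python
class NormalizedCache:
    """Toy normalized cache: one copy of each entity, shared by all queries."""

    def __init__(self):
        self.entities = {}  # (typename, id) -> merged field dict
        self.queries = {}   # query name -> list of entity keys

    def write(self, query_name, records):
        keys = []
        for rec in records:
            key = (rec["__typename"], rec["id"])
            # Merge rather than replace, so partial responses update
            # shared fields in place for everyone.
            self.entities.setdefault(key, {}).update(rec)
            keys.append(key)
        self.queries[query_name] = keys

    def read(self, query_name):
        return [self.entities[k] for k in self.queries.get(query_name, [])]


cache = NormalizedCache()
cache.write("PanelA", [{"__typename": "User", "id": "1", "name": "Ada"}])
cache.write("PanelB", [{"__typename": "User", "id": "1", "name": "Ada Lovelace"}])
# PanelA now sees PanelB's fresher data without triggering a refetch:
print(cache.read("PanelA")[0]["name"])  # Ada Lovelace
```

Real clients (Apollo, urql's Graphcache) do far more work here — keying config, cache invalidation, optimistic updates — which is exactly why the comment calls it hard to get right.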

Client side caching with a normalized cache implementation is very hard to get right. I see why you would want that feature and if it were simple to implement I'd always want to use it. However I think we can get away with a solution that is a lot simpler than normalized caching.

With persisted Queries we can apply the "stale while revalidate" pattern. This means we can invalidate each individual page and would have to re-fetch each page in case something updated. This is some overhead but the user experience is still very good.

Normalized caching in the client can get super hairy with nested data. In addition, normalized caching adds a lot of business logic to the client which makes it hard to understand the actual source of truth. From a mental model it's a lot simpler if the server dictates the state and the client doesn't override it. If you allow the client to have a normalized cache the source of truth is shared between client and server. This might lead to bugs and generally makes the code more complicated than it needs to be.

Is it really that bad to re-fetch a table? I guess most of the time it's not. I've written a blog post on the topic if you want to expand on it further: https://wundergraph.com/blog/2020/09/11/the-case-against-nor...

> Client side caching with a normalized cache implementation is very hard to get right

Absolutely true! When I worked on this at $prevCo, it was tricky and sometimes caused bugs, unexpected behavior, and confused colleagues.

I will say that a proper implementation of a normalized cache on the client must have an ~inherent (thus generated) understanding of the object graph. It also must provide simple control of "whether to cache" at each request. Most of the problems we experienced were a result of the first constraint not being fully satisfied.

My impression is that Apollo does a good job on both of these but I haven't used it so I can't say.

I'll also note that the approach of "when one component makes an update, tell all other components on the page to refetch" sounds like a recipe for problems too – excess server/db load, unrelated data changing in front of users' eyes (and weird hacks to prevent this), etc.

Of course, with the WunderGraph architecture, it sounds like the answer to these questions would simply be to load a given table only once per page – which means no more defining queries on the "panel" components in the dashboard, for example.

All tricky tradeoffs! The right answer depends on what you're building. The Wundergraph approach sounds pretty cool for a lot of cases!

For many use cases, adding an avoidable server round-trip between a user interaction and a view update is an absolute non-starter. Milliseconds matter.

Does it lead to greater complexity somewhere, and all the issues around making that complexity bulletproof? Sure. But the user experience is so viscerally different that some will demand it. I think it’s admirable to work on getting that complexity correct and properly abstracted so that it can be re-used easily.

You can avoid this problem by using ETags, the stale-while-revalidate pattern, as well as prefetching. This keeps the architecture simple without any major drawbacks.
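The ETag part of this works the same as for any GET endpoint: the client replays the request with `If-None-Match`, and a `304` means the cached body is still valid. A minimal sketch of that revalidation loop, with a simulated server standing in for the network:

```python
def revalidate(cached, send_request):
    """cached: {'etag': ..., 'body': ...}.
    send_request: callable taking conditional headers, returning
    (status, etag, body) — here a stub, in reality an HTTP call."""
    status, etag, body = send_request({"If-None-Match": cached["etag"]})
    if status == 304:
        return cached["body"]  # nothing re-transferred but headers
    # Content changed: store the fresh body and its new validator.
    cached.update({"etag": etag, "body": body})
    return body


def fake_server(headers):
    # Simulated origin: content unchanged, so matching validators get 304.
    current_etag = '"v1"'
    if headers.get("If-None-Match") == current_etag:
        return 304, current_etag, None
    return 200, current_etag, '{"user": "Ada"}'


cache = {"etag": '"v1"', "body": '{"user": "Ada"}'}
print(revalidate(cache, fake_server))  # body served from cache via a 304
```

With stale-while-revalidate on top, the client would return the cached body immediately and run this revalidation in the background.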

Not really, implementing a GraphQL cache is like a day or two of work.

Aside: why is there not an RSS feed for the WunderGraph blog?

I think Jens Neuse is making two important observations about GraphQL:

1. GraphQL's single URL/endpoint [1] is possibly an anti-pattern

2. ETags are important for Cache-Control and Concurrency-Control on REST endpoints

The concept of prepared statements is useful for my SQL-centric brain. WunderGraph effectively creates a REST endpoint for each prepared statement (GraphQL DML). Like prepared statements in SQL, WunderGraph uses query metadata to determine the types of input parameters and the shape of the JSON response.

Kyle Schrade makes an important point about canonical GraphQL queries: response payloads can be reduced by filtering JSON fields, similar to SQL projection (i.e. the columns specified in the SELECT clause). It seems that WunderGraph can potentially support both approaches by allowing optional GraphQL queries on each REST endpoint that can be used to filter the endpoint specific JSON response.

[1] https://graphql.org/learn/serving-over-http/#uris-routes
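The prepared-statement analogy can be sketched roughly like this (names and request shapes are invented for illustration — this is not WunderGraph's actual API):

```python
# Persisted operations registered at deploy time, like prepared statements:
# clients may only invoke these by name, with variables.
PERSISTED = {
    "UserByID": "query UserByID($id: ID!) { user(id: $id) { id name } }",
}


def handle_rpc(operation, variables):
    """Server-side handler: look up the prepared query; reject free-form text."""
    query = PERSISTED.get(operation)
    if query is None:
        # Arbitrary query documents from the client are refused outright.
        return {"error": "unknown operation"}
    # A real server would now execute `query` against the schema
    # with `variables`; here we just report what would run.
    return {"executed": operation, "variables": variables}


print(handle_rpc("UserByID", {"id": "1"}))
print(handle_rpc("query { everything }", {}))  # free-form text is refused
```

As with SQL stored procedures, the public surface is a fixed set of named, typed entry points rather than an open query language.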

I don't see a problem allowing a generic GraphQL handler. It's just that I don't like the approach of allowing arbitrary queries from clients you cannot control. If this use case has a lot of demand I don't see why I wouldn't support it. I'd just rather implement a seamless developer experience for code generation so that you won't really want to skip it and lose the benefits.

I think we are on the same page. From my perspective, arbitrary queries are a vector for a Denial of Service event (both intentional and accidental). This has long been one of the use cases for Stored Procedures in SQL; restrict the public interface to guard against expensive queries (large scans and sorts). Faceted Search [1] may be a counter-example but I suspect that these interfaces are implemented at least partially with Full Text Search indexes rather than purely dynamic GraphQL/SQL.

It might be a useful exercise to prototype an online shopping site using WunderGraph.

[1] https://en.wikipedia.org/wiki/Faceted_search

Let me know how I can help you get up to speed. Would love to get in touch!

That sounds interesting! But isn’t this like persisted queries (the Relay kind and not Automatic kind) without the benefit of prototyping your queries as a front end dev as you’re working on the front end?

I’d say that’s completely fair still, just wondering. I’d also say I understand the carefulness and stance on “smart clients,” i.e. normalized caching, which is why this isn’t a default in urql, but without it I think the discussion here is much more nuanced.

It’s so to speak much easier to rely on an argument with a smarter client and the Apollo ecosystem, than the rest. Anyway, I like your approach with Wundergraph so I’ll definitely check it out!

I was asking myself an important question: when you write a query, what activity are you actually involved in? You're trying to understand the API so you can query it. What's the easiest way to understand an API? Read the documentation. Where is the documentation? It's the schema, hence GraphiQL/Playground. So why would you want to switch back and forth between documentation and code when you want to understand an API?

On the other hand, if you already use GraphiQL in your workflow, how does this look? You write a query in GraphiQL, then copy-paste it into your code. Now if you want to add something else you go back to GraphiQL, search for another field and copy-paste again.

Compare that to WunderGraph: you keep coming back to GraphiQL and extend your queries. You hit save and the code generator re-generates the client. You don't even have to change the code if you just extended a query; the function call in the frontend simply returns more data. I wrote a feature page about this: https://wundergraph.com/features/generated_clients I'd really appreciate your feedback on it!

It all seems very interesting, and I may try experimenting with putting it in front of my current prod GraphQL schema and making a few queries, once I get the auth stuff figured out. One question though: is any of this going to be open source? The on-prem-first focus you have is certainly a selling point for me, as I already run my entire backend in Amazon's ECS, so adding another service for WunderGraph would be very simple - however, I'm always wary of using non-open-source software that I can't fork and patch, as I've had to do that many a time due to not being able to wait for patches to be upstreamed. Regardless, I think the points you make in your blog posts are spot on, and I'm looking forward to watching this project evolve.

We'll open source all of it except the control plane and a component we're currently working on which lets you share, stitch and combine APIs of any type across teams and organizations. All the other parts will be open source: the engine, the WunderNode, CodeGen. We don't want to be locked into a vendor ourselves. You can always not use our proprietary services. The core functionality described above will always work offline without using any of our cloud services. We will offer a dirt-cheap cloud service where we run WunderNodes on the edge for you, but if, for any reason, you don't want to use this, you're free to host your own nodes. I'd love it if you could contact me so we can have a chat about your use case. I'd really like to get your take and build out the next steps as close to user expectations as possible. I don't want to build something that doesn't work for the community.

What I can't quite glean from the docs is how you can do row-based security, ie authZ on user ownership of a row when you're trying to filter by certain things other than the ID.

Another thing is mutations - does WunderGraph support mutations at all yet? Security for those is also even more important, as you might want to restrict what entities you can attach to the entity you're creating etc.

I guess the root of my question is how much business logic can you achieve with WunderGraph itself? It's probably not something that's necessary if I really think about it: if it just handles the authN and then passes tokens with claims and user IDs to the data sources, Hasura/Postgraphile et al can handle the row-specific authZ and business logic, and then WunderGraph can just be the BFF for each app client. I'd still definitely use it in that setup, as the generated clients and federation subscriptions would be a marked improvement over Apollo for me.

WunderGraph can inject variables or claims into a query. If you want to implement ownership-based authorization, e.g. with Hasura, Postgraphile, Fauna or Dgraph, the value to determine ownership needs to be part of the schema, e.g. an owner field on a type or a permission table/type. Then you supply an owner ID from the claim and that's it. This works because you don't allow this value to be submitted by the client; it always gets injected from a claim in the JWT. This leads to a big advantage over using one of the auth implementations from said vendors, like row-level security: you decouple auth from the storage. You can always move to another database and are not stuck with a specific auth implementation. You could also delegate auth to a completely different service like Open Policy Agent. If you don't want to use WunderGraph anymore you can re-implement the logic in a backend for frontend. This way you evade vendor lock-in for both the database and the middleware layer.
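A rough sketch of the injection idea being described (variable and claim names are invented for illustration): the gateway overwrites the ownership variable with a value from the verified JWT before the operation is executed, so a tampering client can never choose it.

```python
def inject_claims(variables, claims, mapping):
    """Overwrite selected operation variables with values from verified
    JWT claims. mapping: variable name -> claim name (made-up names)."""
    out = dict(variables)
    for var_name, claim_name in mapping.items():
        # Whatever the client sent for this variable is discarded.
        out[var_name] = claims[claim_name]
    return out


client_vars = {"limit": 10, "ownerId": "someone-else"}  # tampering attempt
jwt_claims = {"sub": "user-42"}                          # from the verified token
print(inject_claims(client_vars, jwt_claims, {"ownerId": "sub"}))
# -> {'limit': 10, 'ownerId': 'user-42'}
```

The key property is that the trust boundary sits in the middleware, not in the database's own row-level security rules.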

Mutations are fully supported. When generating clients all we do is treat mutations like POST requests and queries and subscriptions like GET http2 streams falling back to http1 chunked encoding.

WunderGraph doesn't want to contain business logic. We are the front door, making everything secure and establishing a stable contract between client and server. We mediate between protocols and we map responses so that every party gets data in the format and shape they expect. Other than that, if you want to add custom logic, just run a lambda with any of the supported protocols, e.g. GraphQL, REST and in the future gRPC, SOAP, Kafka, RabbitMQ, etc., and we do the mediation. But as we're the middleware layer, I'd try not to put business logic into this.

That said I'd love to get in touch and discuss how WG can add value for you.

That does sound very interesting. I believe the issue lies in the fact that this is a workflow-based “sales pitch.” What I mean is that this is a difference that doesn’t always apply depending on what tools you use (like client dev tools, type generation / hints, etc)

But what it does do is constrain. Now, constraints are great. They’re always a great tool to introduce new innovations. What I’m ultimately thinking is, how much do you bring to the table compared to persisted queries and tools like GraphQL Code Generator and the added flexibility that comes with those tools?

Genuine questions of course, not criticism

First, with this approach you're able to add authentication rules to operations, not just the schema. That is, you can inject claims from the auth JWT into variables. This gives you a lot more flexibility than schema directives or a resolver middleware. This feature is unique to WunderGraph.

Next we're able to execute the persisted query on the edge using etags for low latency.

WunderGraph adds the capability to use @stream & @defer on top of any existing GraphQL or REST API. You don't have to change anything on your existing GraphQL server. This works especially great with Apollo federation. WunderGraph is a replacement for Apollo gateway. We support federation with subscriptions, @defer and @stream, another unique feature to WunderGraph. The generated code gives you simple to use hooks, in case of react, to fetch data or streams.

Finally the generated code is authentication aware. WunderGraph has its own OIDC server. Generated clients know if authentication is required for a specific persisted query. This way a query will wait until the user authenticates and then fire off.

I think this should be enough. I don't want to get too much into the details as there are a lot more benefits.

I didn't know WunderGraph, but this sounds similar to OneGraph [1], i.e. you write your GraphQL query, identify its input if needed, then persist (once) the query on the server. This returns a unique query ID that can be used to execute that query server side. In OneGraph, you can use just HTTP for that, no need for a GraphQL client library. You can use any HTTP client to trigger a POST request with the persisted query ID and its input params in the request body. This way it seems a bit easier and simpler than your approach with RPC in WunderGraph. I need to read your docs to have a full picture though. ;)

[1]: https://www.onegraph.com

When I say RPC, I actually mean JSON-RPC. So it's more or less the same approach. You can use curl to call a WunderGraph API.

Thanks for the clarification

The main issue with GraphQL is awful tooling, especially outside of JS.

For example, we did a small rewrite of the Apollo Android library and achieved roughly a 100x performance boost simply by writing it the way it should have been written. Literally very minor changes and a better DB.

Typesafe code generation? Anything you can find still feels very beta.

Compare that to something like gRPC, which has strictly typed clients for almost anything.

100% this! Outside of JS, the tooling seems non-existent.

What's so amazing to me is that it is really easy to spin up a server that speaks GraphQL in any language. But there is no client support.

And hand-crafting queries is a million times worse than reading REST docs.

Oh, you want to get a list of items by owner? You can do that, but you'll also have to paginate both the inner and outer lists. And the client will have to follow these pages while also having a deep understanding of your model.

Without good client tooling, graphql is a nightmare.

It's easy to spin up a server. Spinning up a server that can actually access your data in a way that has reasonable performance is another matter entirely.

Is it so hard to create a GraphQL client?

As far as I know, the JS clients Lokka and Urql are pretty small and could easily be ported to a different language.

I think this is the important point here; it's not hard to call GraphQL, or even to create a client starting from a copy of @urql/core for instance, but I suppose at some point, if you want the same benefits on your other platforms, normalized caching will be of interest, which means it isn't as trivial to get to feature parity. So I think it's still a valid point.

What kind of client-side support do you need? You send a query document in a POST and get JSON back. I'm pretty sure all languages have an HTTP client and a JSON library.

How is pagination worse than the REST equivalent?

I guess I could see a case for client-side caching based on IDs, but unless you couple this with subscriptions, that's kind of brittle.
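To illustrate the point above, a bare-bones "client" really is just JSON over POST. A sketch using only the Python standard library (the endpoint and query are placeholders, not a real API):

```python
import json
from urllib import request

def build_payload(query, variables=None):
    # A GraphQL request body is plain JSON: a "query" string
    # plus an optional "variables" object.
    return json.dumps({"query": query, "variables": variables or {}}).encode("utf-8")

def graphql_post(endpoint, query, variables=None):
    # Any HTTP client works; nothing GraphQL-specific beyond the body shape.
    req = request.Request(
        endpoint,
        data=build_payload(query, variables),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)  # typically {"data": ...} or {"errors": [...]}
```

What this deliberately leaves out is what full clients like Apollo or urql add: normalized caching, typed results, and subscription transports, which is where the real porting effort goes.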

The main difference with pagination is that your pages are returned inside an object. And you can have several different objects that all have different pages.

With REST, you can only paginate that particular call. You can't really paginate different results within that one call. The most you can do is, say, get the list of objects at this endpoint. And that is far easier to grok than GraphQL's pagination, even in the simple case.

I mean, just look at the official guide on this: https://graphql.org/learn/pagination/

And that only covers a simple use case!
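Concretely, the cursor-based pagination that guide describes (the Relay connection spec: `edges`, `node`, `pageInfo`) looks like this on the wire; the schema itself is illustrative:

```graphql
query FriendsPage($cursor: String) {
  user(id: "1") {
    friends(first: 10, after: $cursor) {
      edges {
        cursor
        node { id name }
      }
      pageInfo {
        endCursor     # feed this back as $cursor for the next page
        hasNextPage
      }
    }
  }
}
```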

I've implemented the connection spec many times and I find it very clear and usable.

Graphql allows you to paginate several collections in the same call, but you certainly don't have to. You can do just like REST and do one at a time.

> With rest, you can only paginate that particular call. You cannot really paginate different results in that one call.

Where did you get this idea that REST in any way dictates how you do pagination?

Dunno if it'll help at all, but I did write a thing about GraphQL connections that may clear up some things: https://andrewingram.net/posts/demystifying-graphql-connecti...

Hand crafting GraphQL queries is pretty easy, with auto-suggestion built into every client like GraphiQL for example.

But isn't GraphQL the Swiss Army knife where you don't have to write queries anymore, as some posters suggest?

My point remains, why write queries when you can query endpoints and write filters easily without having to learn a new query language?

With WunderGraph I actually want to go the other way around. Instead of having a "smart" GraphQL client, I let users write their Queries/Mutations/Subscriptions in GraphiQL. I'll then take the operations and generate a fully typesafe "dumb" client which has no clue about GraphQL at all. The operation gets persisted as an RPC which the dumb client can then call. Feels like GraphQL, behaves like GraphQL, but is very lightweight, performant and secure at the same time. All it takes is one code-generation step, but I think that's acceptable. gRPC does it the same way.

This was a big reason we decided not to release a GraphQL version of our REST API at my previous employer. Most backend languages have poor gql clients, if any at all.

Better to just provide a nice, typesafe client library generated from OAS.

Strange, but my impressions are the opposite.

We are using Hasura (which is written in Haskell), and I'm interested in dgraph.io (which is Golang).

I had a terrible experience writing a GraphQL service in TypeScript. I finished it, but I didn't like its strange behaviour and bloated Docker image size. I rewrote it in 2 days with graphql-go/go and then it became OK: predictable behaviour and a ~16MB Docker image. I'm thinking about rewriting another service too.

On the client side I used:

- the Python client `gql`
- .NET `graphql-dotnet`
- Apollo client
- graphqurl
- the JS client from the Hasura team

The worst impressions are from Apollo, starting with their documentation. IDK why they explain everything starting from the UI.

I have only used GraphQL in JavaScript and Rust, and the tooling was perfectly fine for me.

I would argue that even in JS the tooling is bad compared to other options. If you can't get autocomplete, strict types, etc out of the box without backflips, it isn't really worth whatever other benefits you may be getting.

When should we expect an article about this? I'm all eyes and ears!

Apollo tooling would be less awful if you mindfully contributed your improvements back to the community

We’ve trialled combinations of REST, GraphQL and gRPC-web across a bunch of different products. GraphQL has reliably won for us in trading off DX, UX and feature speed. Reasons -

- Auto-generated, type safe entities from source to client, including relationships = fewer bugs

- Ability to unify different backends (eg a database, warehouse, external APIs, cloud storage)

- The “application data graph” concept always brings huge clarity to the architecture design - you get to build your mental model in code

- GraphiQL is excellent

Having said this, most of the benefits come from the incredible tooling, e.g. for our stack Graphene, Graphene-X bridges, Typescript, Apollo. I would never consider writing a GraphQL server from scratch, and we've had bad experiences with instant database -> GraphQL solutions (not really keen on writing my application logic in SQL). It's also not an either-or - our current app uses REST for file uploads and for streaming large datasets to the client. And ofc, you can achieve many of these benefits with other solutions.

Wundergraph looks like a great addition to the ecosystem, it would already remove boilerplate from our app.

I have a backend generator that also generates TypeScript and Dart models for the frontends.

> Ability to unify different backends

Why would that be a GraphQL-exclusive thing?

> - The “application data graph” ...

In my generator I supply the models with relationships, type-checked using Go. So when creating those models I write code directly, with no need to use a pseudo-language like GraphQL.

> (not really keen on writing my application logic in SQL)

Maybe you should learn it, as it's really simple and powerful, as opposed to the GraphQL language. But for the record, neither do I write SQL queries when using my generator; the last time I had to write SQL was in the 2000s, when I was still using PHP. Nowadays I just write my data models, generate the backend, and implement the frontend. And I don't need GraphQL for it. @ngrx-data for Angular simplifies communication with the backend. And since I generate my notifier/updater too, I have real-time updates via websocket. There's nothing GraphQL offers that I haven't already been doing since 2016.

> Auto-generated, type safe entities from source to client, including relationships = fewer bugs

Are you saying that your client (interface) has a direct relationship to your database schema...?

No. The server can inspect its GraphQL schema (what the frontend will be querying against) and emit Typescript types to type check against. If someone changes a field name on the backend, type checking the frontend will fail in all the places that expected the original name. No grepping around and chasing chains of passed data.

Did you trial the usual suspects for getting those features you list, e.g. REST+WADL, SOAP+WSDL?

I’m not familiar with those solutions in particular.

For type safety, many backend frameworks generate OpenAPI specs automatically, and you can generate Typescript stubs based on this. Ditto for gRPC and gRPC web. We use these.

But I’ve not seen a replacement for the “application data graph” (but would be interested to learn about them!). The link from @jefflombardjr [0] explains it nicely - it is a great abstraction (in certain situations). Modelling your application data graph is like modelling a good database schema - when you get it right, the rest of the application follows naturally. It’s magical when it works, and I’d happily do it even if it’s just me working on the project.

And GraphQL has a great ecosystem, that is the advantage over niche tools. Example - last week we added auto-generated GraphQL types and relationships for Postgres JSON fields, with the help of [1]. No more malformed JSON breaking our app.

Note this is all in the context of a web app. Reading other comments, the tooling seems to be less developed on other platforms. And again, without decent tooling (especially for the server) I wouldn’t touch it.

[0] https://medium.com/@JeffLombardJr/when-and-why-to-use-graphq...

[1] https://github.com/graphql-python/graphene-pydantic

I don't think either of these are a usual suspect anymore. I hadn't even heard of these. Maybe these are more common in some parts of the industry.

I haven't used WADL, but WSDL stands for Web Services Description Language and is an amazing way to define endpoints and messages with type safety. From these you can generate bindings for pretty much any language or protocol that has a generator (and there are plenty, though probably not for the cool kids™, Golang and Rust).

One of the reasons it fell out of use is that XML is verbose and type safety is not cool anymore.

There are a couple for Go and at least one for Rust, although I don't know how well they fare. It shines more with languages that allow first-class runtime reflection and code loading, but you only care about that if you're dealing with runtime-configured services. Also, WSDL 2.0 supports REST.

I would not say it fell out of use; most enterprise-grade APIs, old or new, will have one: think bank/government/big corp. GraphQL is not there yet. As other commenters pointed out, the tooling is not good enough outside the JS world, security is complicated on the service side, and the protocol is just too young (the official release was two years ago), so it may start appearing there just around now.

> typesafety is not cool anymore

This is rapidly reversing as the javascript ecosystem has increasingly adopted typescript in the last two years. I, for one, refuse to start a new front end without it, and the only other js dev I've spoken with recently who disagreed had never actually used typescript and didn't want to "write a bunch of interfaces like java".

The same goes for Python, where typechecking with mypy is getting pretty popular.

Would love to get in touch and discuss this further!

This is a refreshing read. Every time I ask someone what they like about GraphQL, or read an article about the benefits of GraphQL, I get the same boilerplate non-answers that made me avoid GraphQL on my latest app, after working with it on 2 freelance projects over the last couple of years.

The tooling around Apollo is pretty good, and that’s great, but so many of the “why GraphQL” arguments are either short-sighted, assume you are doing REST badly, or imply you are an organization that is much bigger and has scaling problems most of the projects I’ve worked on do not have. So far I am very happy I went back to a REST API.

Oh and here’s another REST benefit: when working on my REST API I spend most of my time thinking about good API design and testing practices. When working with GraphQL I spent most of my time figuring out how GraphQL ecosystem libraries worked. I would much rather spend my time thinking about how to do the job well than how to glue together an ecosystem of tools that someone else designed. Admittedly this is a very subjective line to draw, for instance I loved Rails and some people make the same criticism of Rails that I make here for the GraphQL ecosystem.

GraphQL is great when your entities are complicated and interconnected with each other. If you just need to get one user object, it's not different from rest. If you need to get user, then 5 of his posts, then 50 of comments for each post and then 100 reactions for each post, all connected by IDs in your DB, GraphQL allows you to do all of this in a single HTTP request, single SQL query and with 0 lines written by backend developer.

(This example is pretty silly, but I have very similar and not silly examples that I just can't share because of respect for my employer's NDA).
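As a sketch of that (admittedly silly) example, the single request might look like this against a hypothetical schema:

```graphql
query UserFeed {
  user(id: "1") {
    name
    posts(first: 5) {
      title
      comments(first: 50) { body }
      reactions(first: 100) { kind }
    }
  }
}
```

Servers that map a schema directly onto a database (e.g. PostGraphile or Hasura, both mentioned elsewhere in this thread) can compile a query like this into a single SQL statement with joins, which is what "0 lines written by a backend developer" is gesturing at.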

My experience is that giving front-end developers who do not understand databases the ability to do this is a great way to kill performance across the board.

I don't particularly want that to be easy, because I've too many times had to clean up the consequences.

If you have a team where everyone understands the database implications, then awesome. In that case this consideration may not apply.

It can be a very bad thing or an acceptable price for developer velocity, depending on your business priorities for the project.

I have a couple of issues that I worked on that started exactly like this: front-end developer implemented a feature based on GraphQL, but when we deployed this feature, we quickly disabled it after receiving alerts from production database, and re-enabled it back only after backend developer (me) went and worked on database indexing.

But for my team, at the current point of the project's development, this scenario is a net positive overall, because we're exactly in "move fast and break things" mentality, which suits our product and our place in the market. If our project would require a lot of 9s, or worked with very confidential data, or had 10s of millions of daily users, we would have completely different internal procedures and development practices: much safer and much slower.

Also, I wrote about this exact problem in another thread under this post: https://news.ycombinator.com/item?id=25014918

I agree with this - my current main focus is effectively a CRM where I've exposed abilities for front-end devs to outright write SQL directly - as a shortcut when you need to get things done fast it can be fine.

But I've also never seen this done without getting horribly abused, and so I tend to lock things down gradually as a service matures or the team grows.

Then again, developers will try to work around it - I had one project many years ago where I on purpose introduced a very restrictive template language to ensure proper separation of content and logic, in large part because the templates were translated into 17 languages, and avoiding breakage was a lot easier the less logic there was in them...

Cue two of the developers trying to sneak in a tag to allow embedded Perl...

And this is why in general I don’t like ORMs, even my favorite ORM, Django’s. They hide from you the query complexity — and perhaps also the number of queries — that your ORM query produces. There are situations where a different ORM approach would yield similarly shaped data yet consumes orders of magnitude more or fewer resources. And this is without adding any new indices.

In general I don’t like giving anyone the ability to write queries unless I trust they understand the performance ramifications. This can be difficult to enforce because someone “on the outside” can always abuse your thoughtfully constructed set of access points you’ve provided by sticking them in nested loops.

Perhaps an answer is strict allocation of quotas outside the realm of assumed competence and good will.
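The hidden-query-count problem described above is the classic N+1: lazy, ORM-style attribute access fires one extra query per row. A toy simulation of the access pattern (no real ORM or database, just a query log; all names are illustrative):

```python
# Simulate ORM-style lazy loading with a simple query log.
queries = []

def fetch_posts():
    queries.append("SELECT * FROM post")
    return [{"id": i, "author_id": i % 3} for i in range(10)]

def fetch_author(author_id):
    # Called lazily, once per attribute access -- this is the hidden cost.
    queries.append(f"SELECT * FROM author WHERE id = {author_id}")
    return {"id": author_id, "name": f"author-{author_id}"}

# Looks innocent, but issues 1 + 10 = 11 queries:
for post in fetch_posts():
    _ = fetch_author(post["author_id"])["name"]

print(len(queries))  # 11
```

An eager-loading variant (e.g. Django's `select_related`, which joins in one query) collapses this to a single statement; the danger is that both spellings look nearly identical in ORM code.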

That's why I only use an ORM for the simplest queries. It helps so much when the queries are just select/where and create/update/delete, rather than composing the queries myself.

I totally agree. I much prefer tools like the now-deprecated Yesql for Clojure that provide for easy mapping of actual SQL queries to function wrappers for those queries. Also, given that the database is where your data model’s consistency should be enforced, stored procedures and views are your friends. The last fifteen years of web app frameworks and developer practices have abused/neglected RDBMS systems by regarding them as simple table stores. ORMs are a classic “now you have two problems” non-solution.

> GraphQL is great when your entities are complicated and interconnected with each other.

It's great for the frontend, but in this case it's awful for the backend.

> GraphQL allows you to do all of this in a single HTTP request, single SQL query and with 0 lines written by backend developer.

Yeah, right. Because that SQL query will just magically write itself. Especially if that ad-hoc GraphQL query requests extra fields (or omits expected ones) or taps into data that's only available from an external service, etc.

As someone who has worked on both front-end and back-end performance problems, moving a performance problem from the front-end to the back-end is a big win.

You have much more flexibility to solve problems on the back-end. On the front-end, you are fundamentally limited by the fact that you have to transfer all of the data and code to the client-side while the user is waiting.

That is true. You also have access to a wider range of options with regards to caching, for example.

Yes, it can and often will. That’s the point.

No, it can't, and it often won't. Just because you stumbled across a single application server that can do that doesn't mean it's true for GraphQL in general.

And once you go beyond toy examples on minuscule data, even those magical tools immediately run into problems: https://news.ycombinator.com/item?id=25014918 (note: this is by the author of the comment I replied to) and https://news.ycombinator.com/item?id=25015173

I think these misconceptions are a result of a lack of experience/knowledge in GraphQL/databases and FP in general. I don’t see people writing assembly code trying to beat the compiler optimizer very often; same thing here. We just need more competent engineers and a bit of time. If anything, we’ll move forward, to more conceptually complex protocols, definitely not back to plain REST, which I see many feel nostalgia for.

> I think these misconceptions are a result of a lack of experience/knowledge in GraphQL/databases and FP in general.

Which conception will magically convert an ad-hoc GraphQL query into "single SQL query and with 0 lines written by backend developer"?

> I don’t see people writing assembly code trying to beat compiler optimiser very often, same thing here.

No idea what you're talking about and how this is relevant

> If anything, we’ll move forward, to more conceptually complex protocol, definitely not back to the plain rest, that I saw many have nostalgia about.

As long as you pretend that GraphQL is magic that requires 0 backend work (but at the same time requires "more competent engineers and a bit of time"), then sure.

In the thread you pointed to before, there’s a link to PostGraphile. Even without tweaks it already generates decent code. With a little effort you can optimize anything you want, score query complexity, etc. If you don’t want unpredictable performance, use persisted queries in production.

Nobody says backend work will magically disappear, but ignoring a generational improvement only because of job security concerns is insane.

> Nobody says backend work will magically disappear,

That was literally what was directly stated in the original comment I responded to.

> but ignoring a generational improvement only because of job security concerns is insane.

Ah yes. People who have only one database with a magical tool try to berate people with significantly different requirements.


The original post, if you’ll read it again, carefully, pointed out that you can compose a complex query in a single GraphQL query. And if you need this data, you have to get that data one way or another. What’s so hard about that? Now if you can make a single query to select that data, then great. In most cases there’s enough information to generate it automatically and efficiently, and to refine it if not. What’s not clear about that?

Oh, maybe you have some magical and complicated infrastructure? Well, you have to query it anyway, right? It’s not automated, you say? Sure, but you can always get insight from libs like Haxl, made by the same Facebook. The magical database tool is a simple and comprehensible example (but it is still brilliant in how well it works).

The bottom line is that if the dev team is capable of working with GraphQL, it’s just a better choice. Most of the projects I’ve seen are unfortunately made in such a way that you wish they had stuck with good old REST, because when you do that, i.e. use GraphQL as REST, the result will be disappointing. But come on, that’s the same as people migrating from React to Vue.js because it feels less alien, or using MobX and other bi-directional stores instead of Relay or Redux; they’re obfuscating the problem for short-term benefit.

> The original post, if you’ll read it again, carefully

Literally says this, emphasis mine:

--- start quote ---

If you need to get user, then 5 of his posts, then 50 of comments for each post and then 100 reactions for each post, all connected by IDs in your DB, GraphQL allows you to do all of this in a single HTTP request, single SQL query and with 0 lines written by backend developer.

--- end quote ---

No. In general, it doesn't. There's a magical tool that they use, and they still have issues with it: https://news.ycombinator.com/item?id=25014918

So much for "single SQL with 0 lines of backend code" when you have "weird joins that we won't optimise for, and sooner or later there will be determined DDoSers who will figure it out".

> The magical database tool is a simple and comprehensible example (but still is brilliant in how well it works).

Yeah, this magical tool is only a tool, that works with a single database, for a single set of problems, and has to be constantly fine-tuned because you can't optimise ad-hoc queries.

But yeah, dismiss all that and just shout to the world: "REST sucks, GraphQL is so much better because we have this one single tool". The moment you step outside the limitations of that tool, you're screwed. But you haven't reached that point yet, so you consider yourself "competent".

> The bottom line is that if dev team is capable of working with graphql, it’s just a better choice.

This still has to be empirically proven by anyone without magical handwaving and dismissing any issues.

Omg, how hard is it to understand that any problem you attribute to GraphQL will exist in REST. Every single one, and more. Also, you can fine-tune any ad-hoc query, even in that magical tool, yeah.

You know, your assumption of having deeper expertise in this area is cute. I think you've imagined some dude who played with these magic tools and appointed himself an expert without a glimpse of understanding of the tech behind it. Well, that's not true, but if that's how you prefer to win arguments, I'm fine with it. Otherwise, please give me something, maybe some examples where you show how you have better control of something, or better performance, using REST over GraphQL? Because I don't know if you understand that GraphQL is just a glorified JSON generator. Yeah, it's much harder to design good resolvers, queries and limits, but it's worth it. After all, if you have problems querying complex nested data (and you really, objectively need it on the front end) in GraphQL, it can only be harder to handle correctly in REST. Please, please prove me wrong.

> how hard is it to understand that any problem you attribute to graphql, will exist in rest

Ok, so then you agree that the person who said that this would allow you to solve these problems "with 0 lines written by backend developer." is completely wrong?

Awesome! Great. You now agree with the person you are responding to, that these problems cannot be solved "with 0 lines written by backend developer.", and that he was correct to criticize this obviously false argument.

# part 2

It feels like there are the following types of people zealously hating GraphQL:

1. Those who for some reason don't understand it. That's obvious from the scope of problems they highlight. The problem of resolving the data is not really that hard. If someone's complaining only about it, it can only mean they got stuck at the first step: making it work. Those who got it to work talk about other problems, which are real: caching, for example. Or the lack of URIs and trouble with supporting linked data, structuring mutations, etc., etc. Here, in other threads, some people are saying it's a FAANG-only thing for big problems. Others say it's small-project-scale, a solution only to toy problems. Make up your mind.

2. Those who would feel comfortable with the backend if things had stayed at the CGI/Perl level. I actually have nothing against that.

> The problem of resolving the data is not really that hard.

Says the person who also wrote this: "Yea, it’s much harder to design good resolvers, queries and limits".

> Those who got it work, talk about other problems, that are real: caching, for example. Or lack of URI and troubles with supporting linked data, structuring mutations, etc, etc.

Those who understand the problems with GraphQL talk about this, too. If only you were able to see what they are saying. For example: https://news.ycombinator.com/item?id=25014009


- much harder to design good resolvers, queries and limits

- problems with caching, for example. Or lack of URI and troubles with supporting linked data, structuring mutations

And yet, people who talk about these problems are somehow "incompetent developers who pine for the good old days of cgi/perl". Impeccable logic.

If you keep cherry-picking, this conversation will stop being entertaining. So yes, people who know what they’re talking about are discussing a different set of problems than resolving data in general. I’m more and more convinced that you are not. And don’t twist my words, btw. I have nothing but respect for the good-old-days guys, and they are far from incompetent. But if you think you belong to the 2nd category, I want to point out that you can easily belong to both.

lol, no. but nice cherry-picking, btw. too bad the original (and unedited) post is still available.

# part 1

Let me break it down for you:

> (This example is pretty silly, but I have very similar and not silly examples that I just can't share because of respect for my employer's NDA).

So that's an example, an illustration. Like a metaphor, it works if the other person is willing to communicate. It doesn't work if you cut and stretch it to fit your agenda. It's not the object of discussion; it is a pointer. Ok?

> If you just need to get one user object, it's not different from rest.

That simply says: if you need some data, you have to get that data. That statement is correct no matter what protocol you use. What would you do in REST, btw? Have a single endpoint where you try to guess what the structure of this data should be? Have separate endpoints and make the client cycle through them to get what it needs? Limit its ability to traverse the domain schema? In my opinion, these are f-off solutions. Because you're basically saying: that's the structure I know how to optimise on the backend, live with it. Look, it works great! What does it matter that you can't efficiently get your data? Not my problem; see, each of my endpoints is fast and simple.

> If you need to get user, then 5 of his posts, then 50 of comments for each post and then 100 reactions for each post, all connected by IDs in your DB, GraphQL allows you to do all of this in a single HTTP request, single SQL query and with 0 lines written by backend developer.

Yes, yes. That's exactly what will effectively work for that toy example. You actually can do that. Now, of course, real life is much more complicated, and even if you're going to map GraphQL to a database, you don't want to map its structure as-is. It could be an interesting conversation on how to do that, but unfortunately, it's not. But back on track: I believe the idea behind the quoted sentence is to illustrate that you can do it, you can optimise a sequential and hierarchical query where that's possible. And in more complex cases, because you have a lot of information statically available, and at query time you have a fully structured query, you can know its complexity ahead of time and you know exactly which resolvers could possibly be executed to resolve the query. That gives you options:

1. Dumb and naive. Just call each resolver separately. That probably mirrors what would happen in REST. Because I'll repeat myself once again: you need the data, you have to get the data. In terms of transferring, transforming and querying, there should be zero difference.

2. A slightly smarter one. You can compose certain resolvers by grouping requests for the same edges. That's fairly easy, and should I explain why it's better than trying to do the same thing using REST, probably relying on the intricacies of some query language a wise backend developer invented to solve a solved problem?

3. You can optimise for request patterns. Yeah, that's right. You know that in order to satisfy a GraphQL client for such-and-such a use case, it needs certain data. So instead of making a separate endpoint with arbitrary degrees of freedom, you can have a better, cooler resolver that will perform your complex query, which is exactly what you need. And if the client's requirements change, he can extend that query; your optimised resolver will then work as a scaffold and additional resolvers will run on the unexpected leafs. Which, again, you can optimise if you need to. But that doesn't change the fact that the client needs data. And he'll certainly get it anyway.

In reality, a reasonable combination of these 3 options is required. But that toy problem in the topic illustrates exactly that: you can have a nice computer-assisted solution. And I don't think anybody doubts that in this particular case a computer can generate a query almost as good as an average backend dev, "with 0 lines written by backend developer".
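Option 2, grouping requests for the same edges, is essentially what DataLoader-style libraries automate. A minimal, synchronous sketch of the idea (names and data are illustrative; real loaders batch per event-loop tick and cache):

```python
# Collect the keys requested while resolving one level of the query,
# then issue a single grouped fetch instead of one fetch per key.
class BatchLoader:
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn
        self.pending = []

    def load(self, key):
        self.pending.append(key)

    def dispatch(self):
        # One call for all collected keys,
        # e.g. SELECT ... WHERE id IN (1, 2, 3)
        return self.batch_fn(sorted(set(self.pending)))

calls = []
def fetch_users(ids):
    calls.append(list(ids))
    return {i: {"id": i} for i in ids}

loader = BatchLoader(fetch_users)
for user_id in [1, 2, 2, 3]:   # e.g. the authors of four comments
    loader.load(user_id)
users = loader.dispatch()
print(len(calls))  # 1 batched fetch instead of 4 separate ones
```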

> If you just need to get one user object, it's not different from rest.

So then in this specific example, you agree that definitely it is not different than rest, and we cannot solve these problems "with 0 lines written by backend developer." then? Cool. Awesome. You agree with the person you responded to on that specific issue.

To be even more specific, it seems like you agree that it would not require "0 lines written by a backend developer" in order to do something like "taps into data that' only available from an external service etc.".

> you need the data, you have to get the data

Thats great that you agree with the other person that it does not require "0 lines" in order to do something like "taps into data that' only available from an external service etc.".

> But that doesn't change the fact that client needs data

Awesome. So then you agree with the person you are responding to, that you cannot, with "0 lines written by a backend developer" do something like "taps into data that' only available from an external service etc.".

Great. You are in agreement with him on that specific point.

Omg, that’s clearly trolling :) Or you’re really just talking to yourself.

No it is not.

It sounds like you agree with the original person, on the specific point that it definitely requires more than "0 lines written by a backend developer" to do something like "taps into data that's only available from an external service etc.".

Great. You are in agreement with him on that specific point.

But that’s absolutely not what the original example was showing. It was showing that even a dumb tool can do cool tricks on simple tasks. And that’s because you have an abundance of information about the query and schema, so you can possibly do more high-level reasoning about how to fulfill it.

There are design flaws, but the direction is very promising.

It is still not really clear because you haven't really said yes or no.

But I am going to take a guess and say that you agree that:

it definitely requires more than "0 lines written by a backend developer" to do something like "taps into data that's only available from an external service etc."?

This is a yes or no question here. It should be pretty simple. Just say yes or no.

Since you keep up the misdirection, though, I am going to assume that the answer is yes: you agree with the original statement.

> Like I’m saying long live the backend job

So then you agree that it requires more than "0 lines written by a backend developer" to do something like "taps into data that's only available from an external service etc.".

Awesome. You agree with this statement.

I was talking about the example originally provided, which had no data tapping into external services. That was a detail added by the guy I was replying to; idk why he decided it’s relevant there. If you want my yes or no answer on whether there is an out-of-the-box solution for querying external services, then probably not. There can be, for some use cases, but I never researched them. And that’s actually not a bad idea, to have some generic kick-in lib for that. So here, you have it.

I hope you understand that my issue is with the person who’s currently very busy coming up with the infinitely recursive GraphQL query.

> then probably not.

Alright, cool. So then you agree with that statement. Got it. Great. You are in agreement!

Furthermore, when you made the statement "Like I’m saying long live the backend job", it also amounts to agreeing that there are definitely more than 0 backend lines required to handle something such as "if that ad-hoc GraphQL query requests extra fields", as that person originally stated.

I don't feel like I want to agree with that part, sorry. But I can admit I'm wrong if given a counter-example relevant to that toy example.

I'm very confused about where you're going with that.

> Because that SQL query will just magically write itself

Do we have any disagreement on whether we can have a fully computer-generated query, for the case where there are a few tables connected by FKs, that is no worse than anything a human could write? If not, can I have some hint on why?

He’s trying to make it seem like I believe in magic. Like I’m saying long live the backend job. Hell no, but it lets you work with arguably better abstractions. And that PostGraphile didn’t appear from thin air; it’s not a part of GraphQL, somebody wrote it.

> any problem you attribute to graphql, will exist in rest. Every single one and more.

And then you go on to say:

> fine-tune any ad-hoc query

REST doesn't have ad-hoc queries, so no, this problem doesn't exist in REST.

> it’s much harder to design good resolvers, queries and limits

So, it's much harder than in REST, but somehow REST has the same problems. Riiiight.

> After all, if you have problems quering complex nested data (and you really objectively need it on a front end) in graphql, it can be only harder to handle correctly in rest, please, please prove me wrong.

Once again: in REST you know exactly what your query will be. And even if it's hard to query and retrieve nested data, you can optimise that retrieval for that specific REST request you provide. With me so far?

GraphQL allows ad-hoc queries of unbounded complexity. This is the one and most significant problem that REST doesn't have. With me so far?

So, given all that, and given that you're saying "it's much harder to design good resolvers, queries and limits" in GraphQL, how is handling complex data harder to handle correctly in REST?

It is not unbounded, unless you let it. Once again, at query time you have the full query. It’s not infinitely recursive. And if you split fetching the data by multiple endpoints and pretend your job is done, basically delegating the problem to your clients, what else can I say?

> It is not unbounded, unless you let it.

It is unbounded by default. And the tool you're so enamoured with, PostGraphile, even has a dedicated section on this: https://www.graphile.org/postgraphile/production/

Let me quote: "Due to the nature of GraphQL it's easy to construct a small query that could be very expensive for the server to run".

And lo and behold, it doesn't really have a solution against it.

So you have to either revert to basically REST with a predefined number of whitelisted queries, or pay for an experimental extension that attempts to calculate the cost of a query.

> And if you split fetching the data by multiple endpoints and pretend your job is done, basically delegating the problem to your clients, what else can I say?

You can say something that actually shows that you know what you're talking about. Because you clearly have very little knowledge about REST and very sparse knowledge about GraphQL. You don't even know that unbounded complexity and infinite recursion are inherent in GraphQL.

Show me an example of how you make an infinite recursive query in graphql, I’ll wait.

And I can spell it out for you again: you know the structure and complexity of your query before you execute it. Feel free to ignore it and disguise the ignorance I’ve pointed out by turning it back on me. I’m asking pretty simple questions, while you’re deflecting with the assumption that I need to prove I'm worth your time, lol. I rarely engage in online discussions, and only if I know for sure what I’m saying.

> Show me an example of how you make an infinite recursive query in graphql, I’ll wait.

Literally in the example provided by postgraphile. It literally shows how to DDOS a GraphQL service by constructing a simple recursive query. It literally shows how even a few levels of recursion will break your server. It literally shows that by default GraphQL - and postgraphile - has nothing against this. So yes, you can increase recursion in the query ad infinitum, which is my point that you fail to understand.

> Feel free to ignore it and disguise ignorance behind I’ve pointed out by reverting that back on me.

Stop projecting. You can't even understand what the tool you mentioned does, and the problem the tool's own documentation describes.


[PostGraphile author here, and I wrote that page of documentation.]

Firstly, GraphQL does not allow for infinite recursion; it is literally not possible to do infinite recursion in GraphQL; the GraphQL spec even has a section on this: https://spec.graphql.org/draft/#sec-Fragment-spreads-must-no...

Secondly, it's extremely easy to add a GraphQL validation rule that limits the depth of queries; here's an example of one where it takes just a single line of code: https://github.com/stems/graphql-depth-limit . This isn't included by default because there are plenty of solutions you're free to choose between, many of which are open source, depending on your project's needs. For most GraphQL APIs, persisted queries/persisted operations is the tool of choice, and is what Facebook have used internally since before GraphQL was open sourced in 2015. (Unlike what you state, this does not turn your API into a "REST API," it acts as an optimisation on the network layer and once configured is virtually invisible to client and server.)
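For illustration, here is a deliberately simplified sketch of what such a depth rule checks, done by counting nested selection sets in the raw query text (a real validation rule such as graphql-depth-limit walks the parsed AST, which also handles fragments and string literals correctly; `estimateDepth`/`validateDepth` and the threshold are hypothetical):

```javascript
// Simplified sketch: estimate query depth by tracking nesting of
// selection sets ({ ... }). Not production-grade: ignores braces
// inside string literals and fragment spreads.
function estimateDepth(query) {
  let depth = 0;
  let max = 0;
  for (const ch of query) {
    if (ch === "{") max = Math.max(max, ++depth);
    else if (ch === "}") depth--;
  }
  return max;
}

function validateDepth(query, maxDepth) {
  const depth = estimateDepth(query);
  if (depth > maxDepth) {
    throw new Error(`Query depth ${depth} exceeds limit ${maxDepth}`);
  }
}

const malicious = `
  query { thread { messages { thread { messages { thread { id } } } } } }
`;
console.log(estimateDepth(malicious)); // 6
```

The point is the same one made above: the full query is available before execution, so rejecting pathological depth is a cheap static check.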

> Firstly, GraphQL does not allow for infinite recursion; it is literally not possible to do infinite recursion in GraphQL

It's literally impossible to do infinite recursion anywhere because it's physically impossible to write down an infinite recursion.

However, if you look at the very example you provide on that page, you will see what I mean by infinite recursion. Moreover, you link to the Apollo page which literally has this example:

--- start quote ---

This circular relationship allows a bad actor to construct an expensive nested query like so:

  query maliciousQuery {
    thread(id: "some-id") {
      messages(first: 99999) {
        thread {
          messages(first: 99999) {
            thread {
              messages(first: 99999) {
                thread {
                  # ...repeat times 10000...
--- end quote ---

Is 10000 infinite? No. Does it illustrate my point? Yes. Have you missed the point? Also yes.

> Secondly, it's extremely easy to add a GraphQL validation rule that limits the depth of queries

1. This statement is not even remotely true in the general sense

2. It is not the default behaviour of any GraphQL implementation (because it's inherent in GraphQL)

3. The "extremely easy" solution for this particular case relies on an external package that needs to be added on top of something else. In your case it's not even added to postgraphile. It's added as an extra middleware to some other graphql library.

And that covers only one dimension: potentially infinite recursion. The other dimension is potentially unbounded complexity. For which the following is true:

1. It's inherent in GraphQL

2. Is not even solved by PostGraphile, except in an experimental paid package

3. The primary mode of mitigating this is disallowing arbitrary queries by providing only a whitelist of allowed queries (so, basically falling back to REST)

So in the end you end up piling more and more complexity on top of other complexities to arrive at a whitelist of allowed queries, ... which is basically just poorly implemented and over-engineered REST (well, REST-ish).

Honestly, no idea why you're fighting the facts of life that you yourself even document on your own product's pages.

So you gave me an example of a nested query that is not infinitely recursive, and even admitted it. One that, as I said before, you can easily identify before execution, along with any possible variations in both width and depth; from which I conclude that you lack some basic algorithmic knowledge or have trouble applying it.

I know I’m arrogant, but yours is off the charts. Thanks for confidence boost!

Ping me when you’re able to show an infinitely recursive query in GraphQL. Until then, I agree, there’s no point in continuing; good luck with that :)))

What prevents me from DDoSing your fancy REST API? Nothing, if you do nothing about it first. Why do you assign omnipotent requirements to one technology, but not the other?

If your front end is coded correctly, you know every query, their exact shape and complexity, so you can optimize for them and you know exactly what you’re optimizing for. If you, let’s say, have a public API, you’ll have more control than in REST, no less, because it’s easier to reason about what your API users need. It’s a foundation, not a final solution. You have to do some work to get something beyond mediocre from it; I thought that was obvious.

> If you let’s say have a public api, you’ll have more control than in rest, no less, because it’s easier to reason about what your api users need.

REST: you know exact queries that frontend uses, and can optimize accordingly

GraphQL: users can and will construct ad-hoc queries of any complexity, so they can and will hit unoptimised paths

lyxsus on HN: GraphQL gives you more control than REST.

This problem is solved with WunderGraph. We save all queries as stored procedures when you deploy your app. We disallow arbitrary queries in production so you can test and optimize all stored queries before deploying. This gives you the flexibility of GraphQL with predictable performance and security as if it were a REST API.

Then what's the point when you essentially have the equivalent of REST

The reason why this is so powerful is that you can decouple the frontend from the backend with this pattern. If something is missing on a REST API you have to ask the backend developer to change it or create a new endpoint. In the former case, the REST API gets bloated when there are many API consumers. Compare that to GraphQL and persisted queries. The frontend developer can change the query themselves. If a change is required to the API they would still have to ask the backend developer to implement it. However, due to the nature of GraphQL, other API consumers don't get affected by the change. All in all you get more for less.

If you’re a backend developer, it’s just more work to do, unless you really make use of GQL abstractions; they’re kind of more expressive than raw REST. For clients: instant explorability, type safety, much more flexibility.

> We disallow arbitrary queries

and at the same time

> This gives you the flexibility of GraphQL

The flexibility of GraphQL is in the arbitrary queries.

You just don’t get that you can apply limitations to a GraphQL query, not unlike how you don’t allow fetching a billion records from a REST endpoint? Is that what this is about? Because that’s a question of how, and it's solvable.

You only need this flexibility during development. In production it's rarely the case you want to change a query without deploying a new version of the app. So there's no flexibility lost.

Thank you for the explanation!

They can’t reach arbitrary complexity, unless you let them.

> lyxsus on HN: GraphQL gives you more control than REST.

I like that :)

> It's great for the frontend, but in this case it's awful for the backend.

So what you are saying is it's better to leave this complexity for the frontend developer to handle on the client side? Very wise!

The complexity will always live somewhere. Just because you are a frontend developer, and GraphQL magically makes your life easier, it doesn't mean that complexity is just gone. That backend you're so dismissive of is what powers your site/app, and it needs to be developed and maintained, and has to be performant etc. etc.

And yes, GraphQL makes backend significantly more complex, fragile, and prone to significant performance issues.

To me that sounds like wilfully ignoring all the caching and concurrency opportunities you could have in addition to forcing all data through a single point of congestion.

At least caching can be solved somewhat, but not on the protocol level.

I think GraphQL is great for applications where there are no apparent caching or concurrency options.

For me it boils down to:

- Flexibility
- Type safety
- Developer experience (like caching out of the box with Apollo)
- Self-documenting

There's something I never quite understood about GraphQL, and this seems like a good thread to ask!

With REST, all my APIs are defined and I can easily test all the database queries my server will run, check that they're indexed, etc. But with GraphQL, my understanding is a client might be able to request something like "Give me all users whose phone number starts with 555". It's possible that query isn't indexed, and after deploying the app we end up tanking our database performance, right? That seems like a huge potential issue to me, but I might be misunderstanding how it works in practice.

> Give me all users whose phone number starts with 555

There's no magic there, it's left up to you whether you expose such functionality and you are in full control of all fields that make up your API. Most of the time your APIs will reflect your database associations `{ users { posts { comments } } }` which should be indexed anyway. Custom queries on top of that, like a search filter, can be indexed/optimized individually. Resources can be paginated quite easily and you can also enforce a maximum depth when requesting associations, so that you don't end up with requests too large to deliver.

The main problem with GraphQL comes from the many different ways you can use it, which makes caching or eager loading difficult.

> Most of the time your APIs will reflect your database associations `{ users { posts { comments } } }` which should be indexed anyway.

That's not exposing database associations. At most that's exposing aspects of the domain model which are also reflected in the persistence model.

But how often does your persistence model really reflect the domain model that accurately? In going from domain to relational at least, you pick up a lot of details that are key to relational modeling but are irrelevant to the domain model, as in the indexing here.

As best I can tell from my limited experience, GraphQL is just exposing the bones of your relational schema without giving it much domain behavior. It's the software equivalent of offering a grocery store full of ingredients when all the hungry person wants is a sandwich.

What do you mean?

(Not OP)

The example `{ users { posts { comments } } }` reflects that, in the abstract model of a message board, this relationship exists. The representation of this relationship will change depending on the database implementation; a document db may store the data explicitly in hierarchical form, while a relational db would reconstruct it with a series of joins.

I see, thanks for the explanation! I think a lot of my coworkers think of GraphQL as some magic where it lets the client query for arbitrary things and avoid us having to add query parameters where appropriate, so I never got the whole picture. It sounds like the main benefit over REST isn't so much the queries themselves, but being able to control what data you get back, which is more in line with the article.

You can still have that magic, I've done that in a couple projects and it's certainly possible. Unfortunately implementing something "good enough" might get super expensive depending on your data model and security constraints.

You can use tools that automatically generate the GraphQL schema and operations from a database, or you can design the schema & operations yourself and control how the queries and mutations operate. The former is where some of the original concerns may come from, but the latter isn’t different from REST design.

The former would be something like Hasura right? As a backend dev, I get nervous when I see a tagline like "Instant GraphQL APIs for you data", because I worry about the schema and operations that are exposed.

Yes it'd be like Hasura.

There are also libraries (usually in-house) that let you query every relation off a specific table. You can imagine how it works: just match up the FKs and expose them in the GraphQL schema. That also gives you control over what not to expose.

You need to set permissions manually for every GraphQL operation to be exposed with Hasura.

I think backend devs should be more worried about losing 75% of their work when it comes to Hasura.

GraphQL is 2010's SOAP. Whatever happens in the database is not GraphQL concern. GraphQL is just a contract between the client and the server, and a better one than HATEOAS or whatever REST.

Thanks for saying that. I haven’t used GraphQL but when I read about it I was like “Isn’t this basically RPC again? Like XML-RPC (for those that remember that) and SOAP?”

To me the acid test, since getting burnt by both XML-RPC and SOAP, has always been: “can I drop a standard HTTP proxy like nginx in front of or behind this thing to cache reads?”

GraphQL _seems_ like you can’t really do that so you end up having to build caching into your app, which in turn - in my experience - always leads to systems you can’t predictably reason about

You generally wouldn’t run a cache in front of your GraphQL server, but can definitely have a cache between your data sources and your GraphQL resolvers. For instance, we have a single GraphQL interface in front of many backing micro services. Some of those are very hot and constantly handle direct reads, others only access the backing services through redis reads, others basically do REST requests (which are cached as normal) and drop unrequested fields (basically the BFF pattern from the article), and others even have hybrid approaches where accessing a certain subset of available fields sends you to a cache and accessing others triggers a live read. The resolver architecture gives you a lot of flexibility, but it also can enable a lot of complexity (though it is fairly easy to reason about because all of the connections are explicit). We’re a big org, so we’d have complexity either way, and switching to GraphQL has absolutely been a huge improvement that no one regrets (though we’ve learned a lot along the way).
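The hybrid approach described above (cheap fields from a cache, others triggering a live read) boils down to cache-aside logic inside a resolver. A sketch; the `cache` Map (standing in for e.g. a redis client), `fetchFromService`, and the field names are all hypothetical:

```javascript
// Cache-aside resolver sketch: cheap fields are served from a cache,
// expensive ones trigger a live read against the backing service.
const cache = new Map(); // stand-in for e.g. a redis client

async function fetchFromService(id) {
  // Hypothetical backing micro-service call.
  return { id, name: `user-${id}`, balance: 42 };
}

async function resolveUser(id, requestedFields) {
  const cheap = ["id", "name"];
  const cacheKey = `user:${id}`;
  const onlyCheap = requestedFields.every((f) => cheap.includes(f));

  if (onlyCheap && cache.has(cacheKey)) {
    return cache.get(cacheKey); // cache hit, no service round trip
  }
  const user = await fetchFromService(id); // live read
  cache.set(cacheKey, { id: user.id, name: user.name }); // warm cache
  return user;
}
```

The requested-field set is available to every GraphQL resolver, which is what makes this routing decision possible per query rather than per endpoint.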

You can cache reads with a HTTP proxy. I know that’s what we are doing at work anyways.

We build our GraphQL schema partially by running introspection on our MySQL schema. Only fields that are indexed in MySQL are valid sort/filter parameters in the GraphQL schema. To make a field sort/filter-able we add an index in a MySQL migration and regenerate the GraphQL schema from the new database.
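That generation step can be sketched as a simple gate over introspected column metadata; the column shape and `filterableFields` helper here are hypothetical, for illustration only:

```javascript
// Hypothetical column metadata, as produced by MySQL schema introspection.
const columns = [
  { name: "id", indexed: true },
  { name: "email", indexed: true },
  { name: "bio", indexed: false },
];

// Only indexed columns become valid sort/filter arguments in the
// generated GraphQL schema; everything else is simply never exposed.
function filterableFields(cols) {
  return cols.filter((c) => c.indexed).map((c) => c.name);
}

console.log(filterableFields(columns)); // ["id", "email"]
```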

This is not an issue from my point of view. If you do GraphQL the right way you'll always use persisted Queries. For each persisted Query you know exactly how it behaves and you can test it before you go to production. WunderGraph uses persisted queries by default. We don't allow arbitrary queries at runtime. Here's a post on the topic if you're interested to dive into this a bit more: https://wundergraph.com/docs/concepts/persisted_queries

> This is not an issue from my point of view. If you do GraphQL the right way you'll always use persisted Queries.

But that's not using GraphQL "the right way", is it?

I mean, GraphQL's value proposition is quite literally letting front-end developers run all sorts of ad-hoc queries without having to bother about indexes, and only care about the data you wish to extract.

If all you want is to run fixed queries then you're already better off putting up a REST API.

Facebook, the inventors of GraphQL, didn't just invent the language itself. At the same time they invented the Relay client. This client, back then, persisted all queries at compile time. They never had actual queries at runtime. It was Apollo, with their Apollo client, who made it popular not to follow this path. The devs at Facebook never said you must use GraphQL their way, but I think if you don't follow their best practices you're kind of on your own. Why not follow their advice? They must have learned this lesson already.

Yeah if you have a fixed number of well-known queries that you support then a plain old REST API is superior in pretty much every way

Front end developers are api consumers in the same way that a third party client is.

As others have stated, persisted queries are the answer. You can disable them in local dev if you want to give your devs more flexibility, but I have found it usually isn't necessary.

Your GraphQL API is capable only of what you code it to do. GraphQL is nothing more than a "contract definition" on what callers can ask your API to do. The responsibility is still totally up to you to determine what database query to run and to optimize/index appropriately.

Zero difference from your REST call.

I see, so you're saying that the GraphQL contract wouldn't actually allow querying by phone number in the first place?

It’s up to whoever built the backend. GQL doesn’t care either way. If the person who built the back end wanted to allow query by phone number, they’d index that field to maintain query performance.

You don't directly query your database from GraphQL. Similar to REST, you read the parameters from the request and build your database query using the object–relational mapping library of your choice. A popular one at the moment for Node is Prisma.

But in this imaginary situation, there's no efficient way to query the database without adding an index, and the suggestion is it's harder to know what your clients will call versus REST because one is statically defined while the other is more freeform.

This problem is solved at the db layer. If your data is not queryable in an efficient manner from the DB then it doesn't really matter if you use REST or GQL.

It’s up to the graphql server implementation to resolve queries however it sees fit. In many cases this type of query would be implemented with a cursor, so not all users would be returned at once.

A cursor doesn't really help much if there's no index on our imaginary `users.phone_number` column. But based on another comment, it seems that we'd limit the ability of the clients to query by phone number in the first place?

You can whitelist queries!

Sorry for the stupid question, but how are REST and database indexes related?

They're not really related here. The premise of my question is more around the fact that with REST, which database queries you'll run are fairly static and are known by the backend based on what APIs they expose. Therefore it's easy (ish) to make sure said queries are indexed properly.

With GraphQL, I'm asking about how clients _could_ write arbitrary requests, that since the backend doesn't know about them ahead of time, can't optimize for with indexes.

That makes sense, thanks!

They aren't.

There is just a lot of anti-patterns floating around because of tightly coupled API / DB's.

People apply this same logic to GQL and get confused.

> People apply this same logic to GQL

And programmers will apply this same anti-pattern to GQL, so it doesn't really solve that problem, and arguably makes it worse for the reasons stated in the un-indexed field example.

And that is why you use GQL codegen and typing to limit their ability to do so.

Part of the benefit of GQL that people often overlook is its self-documenting nature and, when tied to automation tools, its ability to provide a lot of flexibility for developers without compromising your db.

The gql server is ultimately what is responsible for this, and we use schema introspection to ensure the searchable / sortable fields are only exposed if indexed.

Should I go back into my archives and look over the number of technologies leaning on code generation and model introspection that failed to catch on?
