As you say, smaller, simpler queries are more performant. They also tend to be more relevant. It's kinda like the argument for 2 dozen eggs at $1.60 instead of a dozen eggs for $1. Buying 2 dozen at a time optimizes the cost per egg. But if you only ever eat 10 eggs before you throw the rest away, cost per egg is not the most important factor.
I prefer to pull data on demand in smaller chunks. At which point, GraphQL makes less sense for my use case. But I think it's cool that somebody thought it up and I'm sure it's very helpful in lots of situations.
I imagine most JS frameworks fetch greedily because it's extra work to be lean here.
If you have a general endpoint without strict performance guarantees that you need to distribute widely, it seems to make sense (e.g. a product you want to sell to other companies with a general query interface). I'm sure there are good use cases; but I'm not sure a general company website, for example, is one. I'm curious what the general advantage is as a "default option" for frontend devs.
I tend to use GraphQL as a default for anything moderately complex or likely to become complex because of the development flow it enables. But this flow also requires apollo; GraphQL is most appealing when you have a good frontend client like apollo.
Here’s the flow: I’ll get a spec for a feature and figure out what data it will need. Specifically, I’ll figure out what data each component in my frontend will need (I often use React). Then I’ll write a naive GraphQL query for each component to fetch the data it needs.
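To make that concrete, here's a sketch of what one of those naive, component-level queries might look like (the schema and field names are hypothetical, just for illustration):

```graphql
# A query colocated with a hypothetical UserBadge component:
# it asks only for the fields this one component renders.
query UserBadge($userId: ID!) {
  user(id: $userId) {
    id
    name
    avatarUrl
  }
}
```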
That might sound like a lot of querying and something you could just as easily implement with REST, which would seemingly defeat the purpose of using GraphQL. But apollo enables two things that offset this, which it can do because you can ask for multiple pieces of data at once and because of the type information GraphQL sends back. If a bunch of queries are made in rapid succession, you can configure apollo to combine those queries into a single network request. It will also cache entities by id, type name, and parameters. This means that all of the little queries you write will typically pull from the cache rather than the server, and will be updated automatically if another query or mutation elsewhere in the app alters that entity.
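The caching half of that is easier to see with a toy model. This is not apollo's actual API, just a minimal sketch of the underlying idea: entities are normalized into a flat store keyed by type name plus id, so two different queries that touch the same entity share one cache entry.

```typescript
// Minimal sketch of normalized caching by type name + id (the idea
// behind apollo's InMemoryCache; the real API differs).
type Entity = { __typename: string; id: string; [field: string]: unknown };

const cache = new Map<string, Entity>();

const keyOf = (e: Entity) => `${e.__typename}:${e.id}`;

// Writing a query (or mutation) result merges each entity into the
// cache under its key, so later reads see the freshest fields.
function writeResult(entities: Entity[]): void {
  for (const e of entities) {
    cache.set(keyOf(e), { ...cache.get(keyOf(e)), ...e });
  }
}

// A later query for the same entity is served from the cache
// instead of hitting the server.
function readEntity(typename: string, id: string): Entity | undefined {
  return cache.get(`${typename}:${id}`);
}

// One query fetches a user; a mutation elsewhere updates the same
// entity, and every component reading it sees the update.
writeResult([{ __typename: "User", id: "1", name: "Ada" }]);
writeResult([{ __typename: "User", id: "1", name: "Ada Lovelace" }]);
```

The batching half (combining rapid-fire queries into one network request) is a separate, link-level concern in apollo and isn't modeled here.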
Writing small, entity-based queries is typically much faster to implement than creating a new page-specific endpoint for every new page/task, as you can reuse and combine queries/types that you’ve already defined. And because of the optimizations apollo makes when deciding whether to query, you often don’t get a performance penalty from it.
At this stage, I’ll evaluate whether or not the performance is acceptable. Normally it is, but sometimes it’s not. When it’s not, I’ll move those little queries that were happening in the components out, and have the component get the data from higher up in the component hierarchy. The component logic stays the same, so the move is cheap in terms of dev time.
I’ll then write more performant, page/task-specific queries (top-level graphql query types) that fetch a whole bunch of stuff at once and do so efficiently. If a frontend dev could write a task-specific Node endpoint, they could write a graphql resolver for a task-specific top-level query the same way.
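Shape-wise, such a resolver is just a function. Here's a sketch, with a hypothetical dashboard page and an in-memory stand-in for the repository; the `(parent, args)` signature follows graphql-js conventions:

```typescript
// Hypothetical task-specific top-level query: one resolver fetches
// everything the dashboard page needs in a single pass, rather than
// via many small entity queries. `db` is a stand-in repository.
type Dashboard = { user: { id: string; name: string }; recentOrderIds: string[] };

const db = {
  getUser: (id: string) => ({ id, name: "Ada" }),
  getRecentOrderIds: (_userId: string) => ["o1", "o2"],
};

// Shaped like a graphql-js resolver: (parent, args, ...).
const resolvers = {
  Query: {
    dashboard(_parent: unknown, args: { userId: string }): Dashboard {
      // One place to join/batch the underlying fetches efficiently,
      // exactly as you would in a task-specific REST endpoint.
      return {
        user: db.getUser(args.userId),
        recentOrderIds: db.getRecentOrderIds(args.userId),
      };
    },
  },
};

const result = resolvers.Query.dashboard(undefined, { userId: "1" });
```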
Sure, you can use the "include" directive in ".gql" files to subset the fields. But it's a pain to use, especially in nested queries.
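For reference, the built-in `@include(if:)` directive looks like this; once queries nest, the boolean variables multiply quickly (field names here are hypothetical):

```graphql
query User($userId: ID!, $withOrders: Boolean!, $withItems: Boolean!) {
  user(id: $userId) {
    id
    name
    orders @include(if: $withOrders) {
      id
      items @include(if: $withItems) {
        sku
      }
    }
  }
}
```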
Care to share any specific problem you are currently facing?
Another is what to do when subqueries yield errors. We're often faced with three choices there: make the entire query return an error, make the subquery return a friendly error, or make the subquery return null/empty.
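The spec does allow a middle ground for the last option: a response can carry partial data alongside an `errors` array, with the failing subquery nulled out and its location recorded in `path`. Something like this (field names hypothetical, response shape per the GraphQL spec):

```json
{
  "data": {
    "user": {
      "id": "1",
      "orders": null
    }
  },
  "errors": [
    { "message": "orders service unavailable", "path": ["user", "orders"] }
  ]
}
```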
Finally, we're suspecting that in most cases where it might first appear we need a dataloader, maybe we don't if we change how the frontend queries, since the "+1" part of the "N+1" can often make its own group query to whatever repository we're using.
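A sketch of that idea, with a hypothetical in-memory repository: instead of each child resolver doing its own lookup (the repeated part of N+1), the parent makes one grouped query up front, in the style of `WHERE user_id IN (...)`, so no dataloader is needed.

```typescript
// Hypothetical repository data.
const ordersByUser: Record<string, string[]> = {
  u1: ["o1", "o2"],
  u2: ["o3"],
};

// N+1 shape: a naive per-entity resolver does one lookup per user
// (imagine each call being a separate DB round trip).
function ordersForUserNaive(userId: string): string[] {
  return ordersByUser[userId] ?? [];
}

// Grouped shape: one query fetches orders for all users at once,
// and children read from the grouped result.
function ordersForUsersGrouped(userIds: string[]): Map<string, string[]>  {
  const result = new Map<string, string[]>();
  for (const id of userIds) result.set(id, ordersByUser[id] ?? []);
  return result;
}

const grouped = ordersForUsersGrouped(["u1", "u2"]);
```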