The point is exactly to push data fetching complexity from the Client to the API server. And that's generally a worthwhile tradeoff for these applications because you can deal with server complexity by throwing more money at your API server cluster, but you can't force your users to upgrade to more performant clients.
Of course, server-side overfetching is a problem that can drastically limit the scalability of your system by overloading the components you can't easily scale by spinning up more machines (i.e. most databases). Naive GraphQL server implementations can be ridiculously chatty, requiring multiple server-to-database roundtrips to resolve even the simplest of queries, which is arguably an even nastier problem than dealing with multiple client-server roundtrips against a RESTful API.
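To make the chattiness concrete, here's a minimal self-contained sketch of the classic N+1 pattern: a fake in-memory "db" counts roundtrips, and a naively-resolved `{ posts { author } }` query issues one query for the post list plus one per post for its author. All names here (`allPosts`, `userById`, `naiveQuery`) are illustrative, not a real GraphQL library's API.

```typescript
// Fake database that counts roundtrips, standing in for a real one.
let roundtrips = 0;
const users = new Map<number, string>([[1, "ann"], [2, "bob"]]);
const posts = [
  { id: 10, authorId: 1 },
  { id: 11, authorId: 2 },
  { id: 12, authorId: 1 },
];

const db = {
  async allPosts() { roundtrips++; return posts; },
  async userById(id: number) { roundtrips++; return users.get(id); },
};

// Naive resolution of `{ posts { author } }`: each nested author field
// fires its own query, so N posts cost 1 + N roundtrips.
async function naiveQuery() {
  const ps = await db.allPosts();                // 1 roundtrip
  return Promise.all(ps.map(async (p) => ({
    id: p.id,
    author: await db.userById(p.authorId),      // +1 roundtrip per post
  })));
}
```

With three posts this performs four roundtrips; a list of a thousand posts would perform a thousand and one, which is exactly the degenerate behavior batching layers exist to prevent.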
This is why non-trivial GraphQL servers generally use some kind of resolver batching/caching layer (Facebook provides a library called DataLoader to facilitate this: https://github.com/facebook/dataloader) or a query planner at your root resolver (like Join Monster: https://github.com/stems/join-monster).
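The core batching trick behind DataLoader can be sketched in a few lines: collect all the keys requested during one tick of the event loop, then resolve them with a single batched fetch. This is an illustrative toy (`TinyLoader`, `BatchFn`, and the queue internals are made-up names, not the real library's API), but it shows why the N+1 roundtrips collapse into one.

```typescript
// A batch function takes all collected keys and returns values in the
// same order, e.g. backed by `SELECT ... WHERE id IN (...)`.
type BatchFn<K, V> = (keys: K[]) => Promise<V[]>;

class TinyLoader<K, V> {
  private queue: { key: K; resolve: (v: V) => void }[] = [];
  private scheduled = false;

  constructor(private batchFn: BatchFn<K, V>) {}

  // Each resolver calls load(); keys requested in the same tick are
  // coalesced into a single batchFn call.
  load(key: K): Promise<V> {
    return new Promise((resolve) => {
      this.queue.push({ key, resolve });
      if (!this.scheduled) {
        this.scheduled = true;
        // Defer the flush so sibling resolvers can enqueue their keys first.
        queueMicrotask(() => this.flush());
      }
    });
  }

  private async flush(): Promise<void> {
    const batch = this.queue;
    this.queue = [];
    this.scheduled = false;
    // One roundtrip for the whole batch instead of one per key.
    const values = await this.batchFn(batch.map((b) => b.key));
    batch.forEach((b, i) => b.resolve(values[i]));
  }
}
```

A resolver for `Post.author` would then call `loader.load(post.authorId)` instead of querying the database directly; N concurrent calls in one tick become one batched fetch. The real DataLoader adds per-request caching and error handling on top of this same idea.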
Neither of these approaches is trivial to implement, but then again, doing efficient data fetching on the client against a RESTful API is just as difficult, if not more so. Efficient data fetching is simply a hard problem, and it involves essential complexity that has to live somewhere. At the end of the day it's up to you whether you want to deal with that complexity in your clients (using a traditional RESTful API) or on your API servers (using a query mechanism like GraphQL).
Server-side rendering is one example, though Hyperfiddle is a bit more sophisticated than that.