GraphQL is super awesome at what it does, but it's definitely not designed for rapidly prototyping applications.
The thing about GraphQL is that it's middleware. It's designed to act as really nice glue between multiple backends.
It solves a lot of real problems, like over-fetching data, calling too many APIs, etc.
The problem is that you really don't need these to get an app shipped immediately.
The REAL sweet spot for GraphQL is a company like Netflix or Facebook, where you have 1500 APIs and tons of problems with data over-fetching, and you have the time to sit down and do things right.
I think I'm going to end up going with Firebase just because you can bang something out FAST and get it shipped.
It's not going to be perfect but you can ship an MVP and start making revenue and/or grow your user base while you figure things out.
Where you start running into issues is the surrounding tooling. Integrating a typical REST API into an APM monitoring solution is a cinch, because all of these tools know how to read the incoming requests, HTTP methods, paths, bodies, etc. With GraphQL, you might be left building glue for your APM tool of choice, or just using the highly limited, but at least specialized, Apollo Engine. Enforcing strict rate limiting is easy with REST; very difficult with GraphQL due to how complex and free-form queries are.
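As a toy illustration of why request-count rate limiting stops working with GraphQL (everything here is made up; real servers use static query-cost analysis or depth limits for this), you can score a query by its nesting. One deeply nested GraphQL query in a single HTTP request can cost as much as a pile of REST calls:

```typescript
// Toy cost model for a GraphQL selection set: deeper fields cost more,
// so a single request can hide arbitrarily expensive work.
type Field = { name: string; children: Field[] };

function queryCost(fields: Field[], depth = 1): number {
  return fields.reduce(
    (sum, f) => sum + depth + queryCost(f.children, depth + 1),
    0
  );
}

// { posts { author { friends } } } — one HTTP request, compounding cost.
const deepQuery: Field[] = [
  {
    name: "posts",
    children: [
      { name: "author", children: [{ name: "friends", children: [] }] },
    ],
  },
];

const cost = queryCost(deepQuery);
console.log(cost); // 6 = 1 (posts) + 2 (author) + 3 (friends)
```

A REST rate limiter only needs to count requests per path; a GraphQL one has to run something like this over every incoming query before deciding whether to serve it.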
Optimizing your backend to support those free-form queries is also (I dare say intractably) difficult; I haven't seen a single backend framework which doesn't actively encourage an N+1 problem on any query that returns multiple objects. And AppSync, my god, is that an evil play from AWS; if you've got separate Lambda functions serving all the different nodes in your graph, a single query could trigger dozens, or even hundreds, of invocations. Combine that with their guidance to use Aurora Serverless, and any casual observer might say they're actively exploiting the unfortunate ignorance of engineers trying to jump on the latest trends.
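A minimal sketch of that N+1 shape, with in-memory stand-ins for the data (hypothetical `posts`/`authors` tables; each resolver call stands for one backend round trip, or one Lambda invocation in the AppSync case):

```typescript
type Post = { id: number; authorId: number };

const posts: Post[] = [
  { id: 1, authorId: 10 },
  { id: 2, authorId: 11 },
  { id: 3, authorId: 10 },
];
const authorsTable = new Map<number, string>([
  [10, "Ada"],
  [11, "Grace"],
]);

let roundTrips = 0;

// Naive per-field resolver for { posts { author } }: one lookup per
// post, so N posts cost N extra round trips on top of fetching the posts.
function resolveAuthor(post: Post): string {
  roundTrips++; // e.g. SELECT name FROM authors WHERE id = ?
  return authorsTable.get(post.authorId)!;
}

const authorNames = posts.map((p) => resolveAuthor(p));
console.log(roundTrips); // 3 — one extra query per post
```

Nothing in a typical resolver-per-field framework pushes back on this; the field resolver has no idea its siblings are about to do the exact same lookup.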
I don't believe any of these things are problems with GraphQL. I think they're issues with ecosystem immaturity, and I hope they get better over time. Frankly, every single backend library I've used sucks; it's designed to be awesome on the frontend, and it is.
I think you're right that, right now, it's best suited to large organizations. Large organizations can engineer around all of its issues and extract a LOT of value from it. Medium organizations are almost immediately going to run into ecosystem immaturity and scaling issues. Small organizations are going to get the most value from an "all in one" solution, whether that's Firebase, or a simple REST API on App Engine, or something like that.
But I could be wrong in my analysis that it's not a core issue with GraphQL; maybe there are subtle complexities in the API definition language which make scaling it intractable for anyone who isn't Facebook. Time will tell.
Well then you haven't really looked :)
These are all examples of tools/libs that implement a GraphQL API without an N+1 issue.
I think Facebook's DataLoader is closer to a solution.
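The batching idea behind DataLoader, sketched minimally (the real library batches automatically per event-loop tick and adds per-request caching; this toy version flushes explicitly to stay synchronous):

```typescript
// A stripped-down DataLoader-style batcher: resolvers queue keys
// individually, and one flush turns them into a single backend call.
class TinyLoader<K, V> {
  private pending: K[] = [];
  private cache = new Map<K, V>();
  constructor(private batchFn: (keys: K[]) => Map<K, V>) {}

  // Queue a key; returns a thunk that reads the value after flush().
  load(key: K): () => V {
    this.pending.push(key);
    return () => this.cache.get(key)!;
  }

  // One backend round trip for every queued key, deduplicated.
  flush(): void {
    const keys = Array.from(new Set(this.pending));
    this.pending = [];
    const fetched = this.batchFn(keys);
    fetched.forEach((v, k) => this.cache.set(k, v));
  }
}

let batchCalls = 0;
const loader = new TinyLoader<number, string>((ids) => {
  batchCalls++; // a single query, e.g. SELECT ... WHERE id IN (ids)
  return new Map(ids.map((id) => [id, `author-${id}`]));
});

// Three field resolutions, one batched fetch.
const thunks = [10, 11, 10].map((id) => loader.load(id));
loader.flush();
const loadedNames = thunks.map((t) => t());
console.log(loadedNames, batchCalls); // three values, batchCalls = 1
```

This is what turns the N+1 pattern back into two queries (one for the list, one batched lookup for the children), without the resolvers having to know about each other.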
OData did exactly this years ago, and got heavily criticised for giving too much querying power to the client side, as (without expert usage on the server side to whitelist the query) a client could run queries that would overload the server.
You're damned if you do, and damned if you don't.
When reads and writes become real-time sync, though, I've always thought Firebase was underappreciated.
* The mapping of the response to the types used in your app balloons. There's heavy use of Swift's Codable to map the JSON result to objects. I'm finding a lot of cases where I make a query and don't need the resultant root object, but rather a value one or two levels deeper. This has caused me to write a number of "shell" types to help streamline the decoding process.
* Different GraphQL requests can query for different fields on the same type, forcing your app to have two different types for arguably the same thing, or to have optional fields in more places than I would like.
* There are security implications to exposing your backend to arbitrary requests. GraphQL supports hashed (persisted) queries, so that the entire query isn't sent over each time, which prevents the abuse that can result from an exposed API. Setting up and supporting this infrastructure takes some amount of resources.
* It's more complex to provide response metadata, such as cache control and request/response IDs. Granted, some of this could/should be moved to the response headers, but more complex types are trickier to handle.
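The "shell type" problem in the first bullet looks roughly like this (hypothetical schema, shown in TypeScript rather than Swift for brevity): the response mirrors the query's nesting, not your domain model, so intermediate wrapper types exist only to be peeled off.

```typescript
// Wrapper types that exist purely to match the GraphQL response shape.
interface ViewerShell { viewer: AccountShell }        // root wrapper
interface AccountShell { account: { email: string } } // intermediate wrapper

// The JSON you get back mirrors the query, e.g. { viewer { account { email } } }.
const response: ViewerShell = {
  viewer: { account: { email: "ada@example.com" } },
};

// The decode step exists only to peel off the shells and reach the
// value the app actually wanted.
function decodeEmail(r: ViewerShell): string {
  return r.viewer.account.email;
}

console.log(decodeEmail(response)); // "ada@example.com"
```

In Swift's Codable the same thing costs a struct declaration per level of nesting, which is where the ballooning comes from.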
That all being said, I'm quite happy with the trade offs compared to using REST, including:
* With REST, some of the endpoints ended up returning massive results as they had to support Android, iOS, and Web use cases. It's really hard to audit what fields were still in use by the app and the ROI on cleaning up the endpoints was minimal.
* Related to the above, it's easier to deprecate certain fields and make the changes on all the platforms appropriately. Given that GraphQL supports tracing of query/field usage, it's a lot easier to know when a field is no longer in use and be able to clean it up. Granted, this is more of a backend plus than a client one, but it provides a much smoother migration process for the client.
* Explicit declaration of non-null fields. Fantastic for mapping types. The entire GraphQL query fails if a resolver returns null for a non-null-defined field, giving the app peace of mind with regards to type safety.
I agree there's definitely a short-term cost but a big ROI.
What is the GraphQL story on caching and closest-point-of-presence redirection?
We build a mobile app that consumes various "enterprisy" HTTP-based APIs. Often, due to how the APIs are designed to support a range of different frontends, we have to either fetch more data than we need, or do a bunch of granular requests where we would prefer to do a single large one. But most of the time that is outweighed by the fact that many responses are cached in CDN (Content Delivery Network). Since our users are spread out globally, going to the origin server for every response would in many cases imply a latency of 100-200 milliseconds, which wouldn't be acceptable.
When comparing apples to apples, GraphQL is amazing for rapidly iterating.
I think the biggest risk with GraphQL is it’s too easy to just mirror your data structures as an API.
Also, unrelated to the above, but we use Firebase for a few auxiliary real-time needs at work and it goes down constantly.
Rapidly iterating what?
- create a new action
- turn that action into requests via some middleware
- denormalize and put in your store
- maybe write a selector to get the data you need from your store
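Those four steps, sketched as a Redux-style round trip (all names here are hypothetical; the point is how much hand-written plumbing sits between "I want this data" and "it's on screen"):

```typescript
type User = { id: number; name: string };
type Store = { usersById: Record<number, User> };
let store: Store = { usersById: {} };

// 1. create a new action
const fetchUsers = { type: "FETCH_USERS" as const };

// 2. middleware turns the action into a request (network simulated here)
function middleware(action: typeof fetchUsers): void {
  if (action.type === "FETCH_USERS") {
    const response: User[] = [{ id: 1, name: "Ada" }]; // pretend API result
    dispatchLoaded(response);
  }
}

// 3. reshape the response and put it in your store, keyed by id
function dispatchLoaded(users: User[]): void {
  store = { usersById: Object.fromEntries(users.map((u) => [u.id, u])) };
}

// 4. a selector to get the data you need back out of the store
const selectUserName = (s: Store, id: number) => s.usersById[id]?.name;

middleware(fetchUsers);
console.log(selectUserName(store, 1)); // "Ada"
```

A GraphQL client like Relay or Apollo collapses all four steps into a query plus a normalized cache, which is what "rapidly iterating" buys you here.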
I had the impression it was like Firebase for quick prototypes (saw someone build a product in only 4h once), but you get CloudFormation templates out of it, which makes it much more flexible in the long run, when you customize more and more.
Standards allow the information economy to function... but influential companies want their internal practices to become 'the standard'.
FB has GraphQL stable and doing exactly what they want it to do. Now they can pass it off to a foundation for maintenance and blame without having to pay out of pocket.
The existence of the foundation is orthogonal to FB's internal investment and usage of GraphQL, which was and will continue to be significant.
Right pronunciation, but it's Latin. :)
Having tutored Latin when I was 15, I die a little inside every time I read "per say".
I think OP was going for greed.
- Relay by Facebook, its own GraphQL client: https://facebook.github.io/relay/
- Urql by FormidableLabs, an effort to build a simple React GraphQL client that covers the 80% use case: https://github.com/FormidableLabs/urql
- GraphQL Request by Prisma, a minimal, universal GraphQL client: https://github.com/prisma/graphql-request
Plus a variety of smaller ones like Lokka (https://github.com/kadirahq/lokka), FetchQL (https://github.com/gucheen/fetchql) and micro-graphql-react (https://github.com/arackaf/micro-graphql-react).
It's all about the tradeoffs you want to make: if you're building a React app you can't go wrong with Relay or Urql, if you're writing one-off (universal) scripts graphql-request is probably your best choice, etc.
I write native iOS and Apollo code generation can be quite hit and miss.
I’m an Australian. I’m currently on holiday in Girona, Spain.
Holy shit, how do you people deal with these cookie notices all day?! TechCrunch covers the entire screen with something you have to click, and it's hardly alone.
This is insanity. Who was this meant to benefit?
Most of the time, in my app, resource names do not even correspond to single tables or backend data structures but are simply presented that way for ease of use for external consumers.
Subjectively, implementing it feels more like old-school CORBA than REST or SQL.
If I have to write something that’s REST-ish, I’ll do my damnedest to make sure it’s HATEOAS, but in my experience, the number of developers who can consistently produce useful and consistent HATEOAS APIs is…vanishingly small.
I can learn more about an API from its GraphQL schema than most of the other API documentation (like OAS/Swagger or RAML) out there.
Because relatively few developers truly understand HATEOAS let alone REST. I think that did play a small part in the increased popularity of GraphQL.
GraphQL allows the backend to say "here's the data I offer and how it's structured, pick which fields you want from that" and the frontend query just specifies the "what." It's like picking from a menu (although fields can have parameters), and the results you get back are structured objects with nesting.
On the other hand, the data offered by a SPARQL endpoint has no inherent structure, the query is what gives the data structure. You not only ask what you want (the projection) but how to resolve the fields using (often complex) relational logic. It's orders of magnitude more powerful, which is awesome, but also more work and more confusing. Its biggest downside is that, like SQL, the results are row-oriented. For anything other than completely simplistic queries, the resulting data often needs to go through a transform/reducer step instead of being used directly, because real-world needs aren't always row-oriented (especially on the frontend). If you want to request information about both entities and the "children" of those entities, the projection is going to be (1) incredibly messy and (2) full of duplicate data, or you're making multiple queries.
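For example, asking for authors and their posts in one row-oriented query duplicates the parent columns on every row, so the client has to reduce the rows back into the nested shape it actually wanted (toy data below, standing in for SPARQL/SQL result bindings):

```typescript
// Row-oriented results: the author column repeats for every post.
type Row = { authorName: string; postTitle: string };

const rows: Row[] = [
  { authorName: "Ada", postTitle: "On Engines" },
  { authorName: "Ada", postTitle: "Notes" },
  { authorName: "Grace", postTitle: "Compilers" },
];

// The transform/reducer step: collapse duplicated parent columns into
// one object per author with nested posts — i.e. the shape a GraphQL
// query would have returned directly.
const nested = rows.reduce<Record<string, { name: string; posts: string[] }>>(
  (acc, r) => {
    if (!acc[r.authorName]) {
      acc[r.authorName] = { name: r.authorName, posts: [] };
    }
    acc[r.authorName].posts.push(r.postTitle);
    return acc;
  },
  {}
);

const authors = Object.values(nested);
console.log(authors);
// [{ name: "Ada", posts: ["On Engines", "Notes"] },
//  { name: "Grace", posts: ["Compilers"] }]
```

Every frontend consuming a row-oriented endpoint ends up writing some version of this reducer; GraphQL moves that work behind the endpoint.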
I wouldn't want my frontend communicating with a raw SPARQL endpoint for the same reason I wouldn't want to expose a raw SQL endpoint (and basically nobody does). GraphQL puts the frontend "on rails" so to speak and lets the backend worry about the "how." That can be a positive or a negative depending on what you want to do, but in the frontend world (where REST is the norm), folks consider that a positive and it's the direction they've generally chosen.
On the other hand, GraphQL pushes its complexity (the worry about the "how") into the server's implementation. The complexity is still there. And it can't be automatically browsable.
The Semantic Web is just one application of SPARQL, not the only one.