A REST View of GraphQL (hasura.io)
121 points by wawhal 32 days ago | 67 comments



I'm resistant to GraphQL, although I take the caveat that I was also initially resistant to JSX and CSS-in-JS and my thinking has since evolved.

My two main annoyances are a) GraphQL could be thought of as a custom media type that almost "slots in" to the REST model by defining a request content-type and a response content-type with almost entirely optional fields, and b) the incessant idea that REST has to replicate the n+1 query problem.

For a) "it's strongly typed" has never struck me as much of an argument. It's appealing if you're already lacking a strong definition of your data model, but GraphQL doesn't inherently fix that, it just forces you to confront something you could fix within your existing architecture anyway.

For b), it seems that people tend to map something like /users to `SELECT * FROM users`, and 12 months later, complain that it's returning too much data, and that they have to make N queries to /groups/memberOf.

Am I alone in thinking the obvious solutions are to i) add a comma-separated, dot-notation `?fields=` param to reduce unnecessary data transfer, ii) add an endpoint/content-type switch for `usersWithGroups` and iii) realise this is a problem with your data model, not your API architecture?
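To make (i) concrete, here's a toy sketch of a `?fields=` pruner in Python (hypothetical field names, no particular framework assumed):

```python
def prune(obj, fields):
    """Keep only the requested fields. `fields` is a set of
    dot-notation paths, e.g. {"name", "group.id"}."""
    result = {}
    for path in fields:
        head, _, rest = path.partition(".")
        if head not in obj:
            continue
        if rest:
            # descend one level and merge with any sibling paths
            result.setdefault(head, {}).update(prune(obj[head], {rest}))
        else:
            result[head] = obj[head]
    return result

user = {"id": 1, "name": "ada", "email": "a@example.com",
        "group": {"id": 7, "name": "admins"}}
print(prune(user, set("name,group.id".split(","))))
```

A param like `?fields=name,group.id` then trims the transfer without putting any new query language on the wire.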

As an additional c), my other concern is GraphQL touts "one query language for all your data", but tends to skip over the N+1 problem when _implementing_ the queries to disparate data sources. If your existing queries are bolted into `foreach (input) { do query }` then GraphQL isn't going to give you any speed-up, it's just going to bring slightly more simplicity to an already-slow backend.

Granted, I work with "legacy" (i.e. old but functional) code, and I secretly like the idea that adopting GraphQL would force us to fix our query layer, but why can't we fix it within the REST model?

(I happen to be about to sleep, but I am genuinely interested in waking up to see what HN thinks)


For me one of the biggest advantages of using GraphQL is that clients consuming data can directly express what they want. The backend implementation can then do all sorts of magic to turn intent into execution.

Regarding the n+1 problem: of course that can be solved with plain REST endpoints as well, but it requires you to shape your API around it, whereas if you have one unified endpoint, solving the n+1 problem is a mere implementation detail on the backend.

In situations where it's desirable that a system's complexity is expressed in the API design, I can see GraphQL not being a good fit. If I compare it to SQL it is understandable that you sometimes do need to restrict what can be done to clear predefined operations, say for performance reasons. But if you can get away with this general "intent-based" querying, which GraphQL does well, I recommend it over plain rest endpoints.


Many outlets gloss over the n+1 query problem, but I find it to be a major shortcoming of the gql spec. Sure there are solutions, but they are not very ergonomic. In one of our products we found simple requests blowing out to 100s of downstream calls to the legacy REST APIs. The pain in optimizing the GQL resolving layer negated almost any benefit of the framework as a whole.

I'm happy GQL brought API-contract type safety to a wider audience, but it is similarly solved with Swagger/OpenAPI/protobuf/et al.

Folks may be interested in Dgraph, which exposes a GQL API and does not fall victim to the n+1 problem, since it is actually a graph behind the scenes.


The n+1 problem is real but I do think dataloader is a very ergonomic solution. In a simple REST API using your ORM's preload functionality is even more ergonomic, but in more complex cases I've seen a lot of gnarly stuff like bulk-loading a bunch of data and then threading it down through many layers of function calls which definitely feels worse than the batched calls you make to avoid N+1's in gql.
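For readers unfamiliar with the pattern, the batching idea behind dataloader reduces to "queue keys during a tick, fire one bulk query, fan results back out." A toy synchronous sketch (the real facebook/dataloader is promise-based; the names here are made up):

```python
class ToyLoader:
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn  # one call taking many keys
        self.queue, self.cache = [], {}

    def load(self, key):
        self.queue.append(key)
        return lambda: self.cache[key]  # deferred read

    def dispatch(self):
        keys = list(dict.fromkeys(self.queue))  # dedupe, keep order
        self.cache = dict(zip(keys, self.batch_fn(keys)))

calls = []
def fetch_groups(user_ids):
    calls.append(user_ids)  # one query instead of one per user
    return [f"groups-of-{u}" for u in user_ids]

loader = ToyLoader(fetch_groups)
deferred = [loader.load(u) for u in (1, 2, 2, 3)]
loader.dispatch()
print([d() for d in deferred], calls)
```

Four `load` calls collapse into a single batched fetch, which is exactly the N+1 fix the resolvers need.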


> gloss over the n+1 query problem, but I find it to be a major shortcoming of the gql spec.

Why would an implementation detail like n+1 be part of the spec?


The n+1 query problem arises from having an API that doesn't let you express the full query that you want so the server can send you back all the data in one go instead of N+1 goes. The n+1 query problem arises naturally from bad API design. That's a spec problem.


Unless I'm missing something in the conversation, this is exactly what GQL is designed to solve.


I'm similarly skeptical of GraphQL. However Hasura (the company that posted the article we're discussing here) does provide a couple of key benefits:

1. It actually implements the entire API for you (you can slot in auth with a webhook or using JWTs). If you use it then your query logic is fixed by default, because you haven't implemented it at all, and Hasura implements it in a sensible way.

2. It gives you realtime/data watching capabilities. Generally in a REST model, you'd have to have a separate websocket channel and then implement the watching logic yourself. Hasura does all this for you, and allows you to reuse standard GraphQL queries for subscriptions.

We're not using it for everything (we also have REST APIs), but it's pretty handy where we are using it (and it sits on top of a standard Postgres database which you provide the credentials for, so it's super easy to integrate with an existing codebase if you're using Postgres).


Can Hasura provide record history in Postgres? This is typical in many financial systems - changing a record stores a copy of the previous record's values in a history table.


Yes (by integrating with Postgres and providing some extra information about the user making the change). This might help: https://hasura.io/docs/1.0/graphql/manual/guides/auditing-ta...


(Author here)

You are right about REST not necessarily replicating the N + 1 Query problem. ORMs have already provided solutions for this problem. See https://guides.rubyonrails.org/active_record_querying.html#e... for example.

Looking at GraphQL as being a media type for REST misses the point because:

1) If you were to build a system that primarily used GraphQL for communication, you would be breaking the spirit of the uniform resource constraint. Since the URI is the resource in REST, you would have exactly one resource. The article talks a bit more about this.

2) The value add from GraphQL is primarily faster (and more efficient) front-end development. So the question is: do you want to implement the GraphQL media type?

Typed vs untyped interfaces are a personal preference. Typing adds safety but historically made getting started harder. That has changed in recent years though, as tooling has significantly improved.

I've talked about sparse field sets in the article. They are an alternative, but any form of query language requires a parser and implementation in the backend and GraphQL has good tooling for this.

On the third point of N + 1 queries: Hasura compiles the query to SQL where possible and otherwise uses a batching technique for external APIs, similar to Haxl.


The n+1 query problem arises naturally from having limited query support and limited representation of result sets. If your REST binding for your data is one URI per-entity and a query returns the URIs of the entities, then you have the n+1 problem. On the other hand, if your REST binding is that a query is an entity in itself returning a collection representation of result entity set, then you don't have the n+1 problem.

Apart from that, expressing queries in URIs is bloody hard.

I rather like the PostgREST approach.

Another idea is to POST queries to get a nice URI for each query; then you can GET the URI for a query, using URI query parameters to supply values to the... query's parameters. (Look ma', no SQL injection.) PostgREST gives you this if you wrap queries with functions, which is a bit sucky, but it gives the DB owner a lot of control over what queries get run.
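The POST-then-GET idea can be sketched with a toy in-memory registry (illustrative only; PostgREST's actual mechanism differs):

```python
import hashlib

queries = {}  # registered query templates, keyed by id

def post_query(sql_template):
    """POST the query text once; get back a stable URI for it."""
    qid = hashlib.sha1(sql_template.encode()).hexdigest()[:8]
    queries[qid] = sql_template
    return f"/queries/{qid}"

def get_query(uri, **params):
    """GET the query URI; the server binds params - no SQL injection."""
    qid = uri.rsplit("/", 1)[1]
    return queries[qid], params

uri = post_query("SELECT * FROM users WHERE age > :min_age")
print(uri, get_query(uri, min_age=21))
```

Because the query text is fixed server-side, the GET is cacheable and clients can only vary the bound values.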


> Hasura compiles the sql query where possible and otherwise uses a batching technique for external APIs, similar to haxl.

This assumes the backend (the REST API behind GraphQL) supports a fetch-many-by-collection-of-ids API - at the end of the day, this needs to be done whether we are solving the N+1 problem with REST or GraphQL.


I agree with you 100%. In fact, in the REST standard in our company, we have 3 things that help with these problems:

1. All of our responses are well documented, both in separate docs and in a self-documenting format (you can GET .../$docs on any URL to get a description of each field)

2. We support an "?includes=" query param that can be used to get arbitrarily deep data with a single GET request

3. Each object contains links to its relations, and these can often be relational-style queries. For example, you can make a request to /users and that would return a list of User objects, each having a field named "groups" with a link like /groups?userid=0. However, if you want the users with all their groups directly, you can request /users?include=groups, and the backend will automatically populate that field as well.
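For illustration, the two shapes described in point 3 might look like this (hypothetical payloads, not this company's actual wire format):

```python
# GET /users - relations are links the client may follow
plain = {
    "id": 0,
    "name": "ada",
    "groups": {"href": "/groups?userid=0"},
}

# GET /users?include=groups - the backend populates the field itself
expanded = {
    "id": 0,
    "name": "ada",
    "groups": [{"id": 7, "name": "admins"}],
}

print(plain["groups"], expanded["groups"])
```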


> you can GET .../$docs on any URL

Wow, that is some next level stuff right there. Very nicely done. What do you get back? JSON, plain text, HTML, man page?


JSON, describing the fields of the objects. Honestly it's not very widely used, but it was an easy implementation so we decided to leave it exposed.


Philosophically the difference is a collection of resources vs a single connected data model and enforced object representation.

The single data model lets you provide resource join requests and the enforced encoding lets the service select only desired fields.

Field query params could be provided by web frameworks, but they can't be fundamental, because REST doesn't imply JSON or even resources with fields. Similarly, no fields means no join syntax. It will always be a bolt-on solution for REST APIs.

That said, I feel like GraphQL has a flaw around multiple schemas/services. You can build a REST API as a collection of endpoints from any number of services, and because you were doing all the work anyway it doesn't feel strange.

I'm no GraphQL expert but it seems like stitching multiple schemas and joining across them isn't possible from a client perspective. I'm not sure how this plays out in practice.

I would feel much better about GraphQL if it was a client side library that kept the query syntax. In theory something like that could wrap any number of REST services. REST services could opt in to added functionality like query joining and field filtering as needed. Missing functionality would fall back to the client.

If we're taking requests for how to take the improvements of GraphQL into REST, that would be my suggestion.


I see GraphQL as a primarily organizational tool. If, for whatever reason, it is too difficult or costly to get frontend teams and backend teams aligned on the precise functionality of a REST API, GraphQL provides a spec that is almost certainly able to satisfy frontend requirements at the expense of nontrivial backend complexity.

This could arise if the backend team is producing an API for public consumption or consumption by many frontend teams with differing requirements.

I consider this a fairly narrow niche - the typical case of a backend serving a small number of frontends with tight collaboration between backend and frontend teams is, to my eyes, almost always handled more efficiently by standard REST.


My backend takes a 'fields' parameter; I use it mostly in list views.

My backend has nested models. I can request the line_items field and get that list along with the root object, and submit it back with added fields easily.


That sounds pretty much like what Vulcain [1] is doing for you on top of an existing REST API.

[1] https://github.com/dunglas/vulcain


Where GraphQL really shines is when you need to decouple the front end from the backend, or explore the API. Documentation is built in, and just about any tool (Insomnia, Postman, GraphiQL, etc) can let you explore the introspected documentation right in the tool.

Where it falls flat on its face is unit testing (at least in C#), as it’s very cumbersome to inject all the things you need.

In my personal money management software, the GraphQL queries do not result in database queries. Instead they result in RPCs to microservices, starting orchestrations, putting items in queues, and sometimes even batching as required. It was originally a massive REST API that was hard to navigate, with a litany of query parameters, and it left the client doing too much. The backend and the front end were strongly coupled, but not anymore.


It sounds like you're talking about something like https://jsonapi.org/, specifically the sparse field sets https://jsonapi.org/format/#fetching-sparse-fieldsets

It feels like it has some similar goals to GraphQL by offering the client more control over the data returned, but not at the cost of potential performance problems.


I'm becoming somewhat suspicious of both REST and GraphQL for APIs. Not in the sense of thinking that they're bad or antipatterns or anything so hyperbolic as that, but in that they make some fundamental assumptions about how APIs are going to work that I don't think we should ever just take for granted.

On the REST side, it's the assumption that the API is for state transfers. Transfer is a heavy word there. You're not just assuming that the service is stateful, and you're not just assuming that clients can ask for state changes. You're assuming that the dominant model for effectuating those state changes is that the client determines what the new state should look like, and transfers a representation of that state to the server.

And then, on the GraphQL side, you're assuming that the service is basically just a big database. Perhaps one with lots and lots and lots of behavior that, from a high level, looks a lot like triggers. But still, a database.

Both these assumptions may work well for a whole lot of applications. If CRUD's your ticket, then both can serve you well. But choosing either might force you to make some compromises if you were instead hoping to do come up with, for example, a more domain-driven design.


What is commonly referred to as “REST” isn’t actually REST, and trying to cram that round peg into that square hole is a hobgoblin of little minds IMO. In practice, “REST” can be simplified to RPC over HTTP using JSON as a wire format. Or you can use gRPC and get strong typing and explicit RPC semantics instead.


It's not just using JSON as a wire format, it's overloading HTTP verbs and status codes as part of your protocol.

HTTP verbs and status codes map fairly cleanly to CRUD and, more generally, the "state transfer" paradigm. For generic RPC, though? Tends to get quite a bit messier.


While that’s true in the general case, there are some rules of thumb that get you most of the way there. Use GET for read-only, side-effect-free operations and POST for everything else. Return 2xx for success, 4xx for “client error”, and 5xx for “server error”. Use Swagger to document payload formats and expected response codes. Getting too far beyond that is a slippery slope that leads straight to the REST hobgoblin, who is a mere distraction from the dragons a service owner actually needs to battle.


One consideration I rarely see mentioned when discussing REST vs. TechDuJour is:

    who are your users?
If you develop an API for your front-end team, use the fastest, most efficient protocol. Use something that you can change quickly. Optimize for speed in all regards.

If you develop an API for other users, and they expect that API to be around for a while and gradually grow, then you need to think about how much pain you are going to subject your users to when your API changes.

GraphQL: You will not get your domain model right on the first attempt. So you will need to change your "Graph". Your "Graph" might change substantially. How do you deal with that?

REST: Same thing: there are well-known standards for how to grow and evolve a truly RESTful API. Yet you may still mess with your users, declare a /v2/ endpoint, and discontinue /v1/.


Not a general solution, but with Hasura, one approach we have seen is to use Postgres views to keep the old graph around as your data model changes.

In general, API evolution is tricky and perfect decoupling between API clients and servers is not possible (the article talks about this). What one can aim for is a combination of:

1) Adding new query parameters should not break clients

2) Adding new fields to the responses should not break clients

3) Adding new serialization formats (for newer clients) should not break existing clients

#1 & #2 are roughly what you get with GraphQL optional fields.
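Rule #2 works because a tolerant client only reads the fields it knows about (hypothetical payloads):

```python
old_response = {"id": 1, "name": "ada"}
new_response = {"id": 1, "name": "ada", "avatar_url": "/img/1.png"}  # field added later

def render(user):
    # reads only known fields; unknown additions are simply ignored
    return f"{user['id']}: {user['name']}"

# the same client code works against both API versions
print(render(old_response), render(new_response))
```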

Phil Sturgeon has good articles on API Versioning[1] and Evolvability[2] for REST API's

[1] https://apisyouwonthate.com/blog/api-versioning-has-no-right... [2] https://phil.tech/2018/api-evolution-for-rest-http-apis/


> perfect decoupling between API clients and servers not possible

Well, the ultimate REST client - the browser - comes pretty close to that. It doesn't care if you are rendering an app to manage your bank account, a blog, or Hacker News.


If you've made a catastrophic and backwards-incompatible error, a common solution is to write "V2" fields and then reimplement the previous iteration's fields with the new resolvers. Old clients will then be able to continue to resolve fields they expect, while new clients will be able to take advantage of new features. This is generally pretty rare, because usually the use case is that you're expanding the graph, and generally if you have followed the rules and best practices in writing the graph then expanding the graph should just work in a backwards-compatible way.

(aside: I agree that changing your graph / domain model is likely, and I take a dim view of tools that automatically generate APIs as a result.)

I've also seen https://github.com/ef-eng/graphql-query-rewriter and consider it an interesting thought, but have never seen it in production (and most people are rightly worried about doing such a thing).

GraphQL is our API for our frontend team, but it's also our API for end user devs. We just run it through an app that generates a REST API from a subset of our usual schema graph. (The decision was made not to expose the raw subset itself mostly just to cut down on docs duplication and avoid confusion for clients that don't know or need to know GraphQL.) This comes with advantages like allowing the use of `_expand` fields like you'd see on Stripe's API (https://stripe.com/docs/api/expanding_objects), without us having to break a sweat.


I've used GraphQL and Hasura extensively for my project (https://pagewatch.dev) and it has been a huge timesaver for getting a realtime service up and running. BUT, I'm still using REST for mutations (the POST/PUT part, really). At least for my use case, I find that almost every time you modify data there is some other component involved (e.g. starting a background task), so it might as well be a REST server endpoint. (Also, I find the mutation language the weakest/hardest-to-understand part of GraphQL.)


Mutations don't have the really clear advantages that queries do (compared to their REST counterparts), but the one thing that I think does make them worthwhile is that you get the cache updates/re-renders based on the mutation result (instead of needing to re-fetch data or whatever to get your UI to update).


True, but then I'm using the subscriptions model that Hasura provides, so in my case the UI update is actually automatic; it's just initiated by Postgres triggers instead of the cache (so a bit slower, but still fast enough).


How have you found the performance / resource usage of widespread use of Hasura subscriptions? We're using them in a few key places, but have been reluctant to rely on them too widely because it seems that they are actually based on polling the query every second.


Traffic for my service is too low to say for sure yet, but the Hasura docs seem to indicate that it could be quite scalable: https://github.com/hasura/graphql-engine/blob/master/archite...


You can put your REST API endpoints into Hasura through Actions. All you have to do is define an input and output GraphQL type for them, and give it the endpoint URL.

Then you get the benefit of a single, unified GraphQL API and can use Hasura's authorization model + connect the output fields back to the rest of the graph if you need.

https://hasura.io/docs/1.0/graphql/manual/actions/index.html

https://hasura.io/blog/introducing-actions/


Great article.

REST with JSON has always been a category error: JSON is not a hypertext and thus, as the author says, violates the most important and distinct aspect of the REST architecture.

I have written quite a bit about this:

http://intercoolerjs.org/2016/01/18/rescuing-rest.html

http://intercoolerjs.org/2016/03/15/hypertext-hypotext.html

https://threadreaderapp.com/thread/1276556432385531906.html


My main qualm about REST is the number of inputs for doing something: path variables, query variables, POST input, headers and, for shits and giggles, the HTTP method. To top it off, because people want "beautiful" REST APIs, everyone has a playground to do these kinds of things differently. And boy are they done very differently from company to company. And that leads to massively opinionated people. Finally, REST is tied to HTTP too tightly. It can't ever be independent of it. I know a lot of people believe that's a good thing, in that there are metrics, logs, and a variety of data that assists architecturally. But it has only reached that status because we're sitting on 15 years of REST toolchains.

I'm not saying GraphQL is the long term answer, but personally I'd rather be in a project using GraphQL (and hopefully not backed by REST at all)


This is partly because REST is not a standard like GraphQL, and you can't really treat them as being equivalent.

If you want a closer comparison, I think it's better to look at standards like OData, JSON:API, or even just, for example, the [Microsoft REST API guidelines](https://github.com/microsoft/api-guidelines).

REST by itself is a pretty general description of a type of architecture that isn't even tied to HTTP. Even HN by most definitions is a REST service that primarily uses `text/html` as its format.


> My main qualm about REST is the number of inputs for doing something: Path variables, query variables, POST input, headers and for shits and giggles the Http METHOD.

As a sibling comment has already extensively described why this is the way it is, I want to add one more thing: REST is a set of principles leveraging what already exists in HTTP. And HTTP gives you a lot. Here's an extensive HTTP decision diagram with links to explanations on basically everything: https://github.com/for-GET/http-decision-diagram/tree/master...

Caching? HTTP got you covered. Editing conflicts? HTTP got you covered. A decision path for auth? HTTP got you covered. No-op requests? HTTP got you covered.

Meanwhile GraphQL libs are busy reimplementing half of the HTTP spec inside a single POST request.
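One of those cases, editing conflicts, is just conditional requests. A minimal sketch of the header mechanics (no framework assumed):

```python
def handle_put(resource, headers, new_body):
    """Reject the write unless the client saw the current version (ETag)."""
    if headers.get("If-Match") != resource["etag"]:
        return 412, "Precondition Failed"  # lost update avoided
    resource["body"] = new_body
    resource["etag"] = str(int(resource["etag"]) + 1)  # new version tag
    return 200, "OK"

doc = {"body": "v1", "etag": "1"}
print(handle_put(doc, {"If-Match": "1"}, "v2"))  # succeeds, etag -> "2"
print(handle_put(doc, {"If-Match": "1"}, "v3"))  # stale tag -> 412
```

The second write fails with 412 because it was made against a version another client already replaced - the conflict handling comes from HTTP itself.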


Well, you have to dive into the origins of REST to understand where it stems from and how you'd use it.

REST isn't describing a concrete implementation of an API. Rather it's a set of first principles that define an architectural style of API. So it's more abstract than what people usually make of it. As such, there's no single, definitive, end-all REST API that gets it "the right way". There's just a multitude of different interpretations that each cater to a specific problem, or a set of problems.

REST is tied to HTTP for the simple reason that the person who came up with those principles, Roy Fielding, was also one of the architects of the HTTP 1.1 specification, back in the late 1990's. He developed his framework of ideas on REST as his doctoral dissertation, while working on HTTP 1.1 in parallel.

HTTP is a communication protocol, and as that was developed through RFC's and intense discussion, he had to boil down how the different concepts within the protocol - GET, POST, PUT, URI's, the structure of a HTTP message,... - could be ordered in a way that yields meaningful and efficient interoperability between systems. The REST principles are just one way that describe on a high level how you could lay out the constituent HTTP concepts to achieve just that.

https://en.wikipedia.org/wiki/Representational_state_transfe...

"Throughout the HTTP standardization process, I was called on to defend the design choices of the Web. That is an extremely difficult thing to do within a process that accepts proposals from anyone on a topic that was rapidly becoming the center of an entire industry. I had comments from well over 500 developers, many of whom were distinguished engineers with decades of experience, and I had to explain everything from the most abstract notions of Web interaction to the finest details of HTTP syntax. That process honed my model down to a core set of principles, properties, and constraints that are now called REST.[7]"

For all intents and purposes, REST doesn't concern itself with particular serialization formats, how you structure the data, how you validate data through schemas, how you structure URIs, how you process that data, nor what language or technology you use.

In my book, GraphQL is an entirely different beast which has little if anything to do with REST. The only thing it has in common with REST is that all communication happens over HTTP through GET and POST requests. But instead of using the constituent parts of HTTP itself to encapsulate meaning and salience, it just acts as a plain carrier of enveloped plain text messages represented as serialized JSON. In essence, GraphQL is just a query language much like SQL: you define a query, you submit it to a database, and you get a result back. It just happens that you interact with that service over HTTP.

You could even go as far and argue that a GraphQL service IS RESTful simply because it adheres to the HTTP protocol and resources can be dereferenced through URI's if only you're willing to stretch your own interpretation of the REST principles far enough.

Regardless, on a practical level, if GraphQL works out for you, then great, keep using it. Maybe you're just building a tiny web service with just a few URIs that yield static data; well, just consider modelling how clients interact with those URIs according to REST principles. Or maybe you simply have a plethora of libraries and tools in your toolbox such that whipping up a lightweight GraphQL API takes far less time than conceiving and implementing your own HTTP endpoint. It's just a matter of preference and context. One isn't inherently better than the other, after all.

A lot of people are indeed opinionated. They are also building those opinions on other people's assumptions and beliefs. Few people actually take the time to go back to Roy Fielding's dissertation or read up on the origins of the HTTP protocol and the REST principles.

In part, the confusion about REST also uncovers a structural problem: a big group of developers are well versed in building applications for the Web using modern tools and languages... but lack a foundational understanding of the protocols, the standards and the concepts that underpin the Web. This should also spark worry, as the foundational principles of the Web, especially the openness of the standards, are being contested by the likes of Google, e.g. their decision to hide URIs from users in their applications: https://www.androidpolice.com/2020/06/12/google-resumes-its-...


We should resist attempts to turn technology choices into religious devotions. Here's an example of a project that brings some of the most significant benefits of GraphQL to an established, mature framework for building REST APIs:

https://github.com/yezyilomo/django-restql

It's tempting to think your new paradigm/framework/whatever is so groundbreaking, so purely beneficial, that it warrants asking people to demolish everything and start over. Those kinds of breakthroughs have happened and will continue to happen, but the signal/noise ratio, especially in the world of Web Development, is incredibly low.


(Author here) Couldn't agree more on not turning technology choices into religious devotions.

It is of course completely possible for you to build a query language from the ground up. django-restql seems to be doing this. Sparse fieldsets and Netflix Falcor are other alternatives I've heard of. GraphQL, however, has good tooling, and if you do see the need for a query language on top of your APIs, using it is a good choice IMO.

GraphQL does not necessarily mean you have to throw away your existing backend and rebuild from scratch either. It's perfectly fine to use GraphQL as an additional API endpoint alongside existing APIs. A model we are seeing evolve with Hasura is to use Hasura for all the reads and delegate writes via the Actions framework either to a serverless function or another HTTP service.


Thank you for writing a nuanced, non-dogmatic article, and for your thoughtful response here!

I should have clarified that I am not criticizing your article. Rather, I am criticizing some zealotry that has emerged in the community promoting GraphQL as the end-all solution.

I have been - at different times and in no particular order - ambivalent, intrigued, fascinated, and skeptical in regards to GraphQL. I need to learn more and do something hands-on with it, of course.

It seems some of the biggest benefits of GraphQL are 1) accommodating client-side needs that the server-side developers did not anticipate and 2) avoiding over-fetching and under-fetching. Is that fair?

If so, it seems many would be served well by "retrofitting" their REST API with the ability to accept dynamic field requests and respond accordingly, as is done with django-restql.

As for tooling, REST is served well in that area with things like OpenAPI and Postman, although it is not clear (to me) how well they can adapt to non-standard GraphQL-like querying capabilities "tacked on".


Vulcain ( https://github.com/dunglas/vulcain ) is another example of getting GraphQL's benefits on top of existing technology.


I have been working on an idea to leverage SQL + REST APIs instead of GraphQL to solve the similar problems, and I would love to get some feedback on it.

GraphQL is great during development, but in production you often use "persisted queries" to prevent abuse. Persisted queries are also a way to use HTTP GET instead of POST, and thereby leverage HTTP caching. As such, if you swap out GraphQL and use SQL during the development phase, you can perhaps get similar benefits.
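The persisted-queries idea in miniature (a toy sketch, not any specific GraphQL server's protocol):

```python
import hashlib

registry = {}  # query text registered at build/deploy time

def persist(query_text):
    qid = hashlib.sha256(query_text.encode()).hexdigest()[:12]
    registry[qid] = query_text
    return qid

def execute_get(qid, variables):
    """Handle GET with a query id + variables: only known queries run."""
    query = registry.get(qid)
    if query is None:
        return 404, None  # arbitrary query text is refused: no abuse
    return 200, (query, variables)

qid = persist("{ user(id: $id) { name } }")
print(execute_get(qid, {"id": 1})[0], execute_get("deadbeef", {})[0])
```

Since the request is now a GET keyed by a stable id, ordinary HTTP caches can serve it.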

My solution (https://github.com/hashedin/squealy) uses SQL queries to generate the API response. Squealy uses a template language for the SQL queries, so you can directly embed WHERE clauses from the API parameters. The library internally binds the parameters, and is free from SQL injection.
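The template-plus-binding idea can be illustrated with a hand-rolled sketch (this is not Squealy's actual implementation; its template language is richer):

```python
import re
import sqlite3

def render(template, params):
    """Turn {{name}} placeholders into ? and collect bind values,
    so parameter values never get spliced into the SQL text."""
    values = []
    def sub(match):
        values.append(params[match.group(1)])
        return "?"
    sql = re.sub(r"\{\{(\w+)\}\}", sub, template)
    return sql, values

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("ada",), ("bob",)])

sql, args = render("SELECT name FROM users WHERE name = {{n}}",
                   {"n": "ada'; DROP TABLE users; --"})
print(sql)                                 # the SQL shape is fixed: ... = ?
print(conn.execute(sql, args).fetchall())  # injection attempt is inert
```

The template author controls the query shape; clients only supply values, which travel as bind parameters.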

Handling many-to-many relations, or multiple one-to-many relations, is done in memory after fetching data from the relational database. This provides the same flexibility as GraphQL, but of course requires you to know SQL.

Here is an example that builds various StackOverflow APIs using just a YAML+SQL based DSL - https://github.com/hashedin/squealy/blob/master/squealy-app/...

Squealy is still work in progress, so don't use it in production. But I would love to get some feedback!


This looks really cool! Particularly for the embedded analytics use case.

Do you have an example of how this would work on the front end with relationships? For example, if I want to fetch posts and their comments, what would the front-end code look like?


Yes, following relationships is a key feature of Squealy. Here is an example for a recent-questions API - https://github.com/hashedin/squealy/blob/master/squealy-app/....

In short, with 1 GET API call that uses 3 SQL queries, you are getting a list of questions, related answers and related comments.

You should parse that example as follows:

1. The first SQL query fetches all the questions, filtered by API parameters.

2. The second SQL query fetches all comments for the questions selected in the first query. See the `WHERE c.postid in {{ questions.id | inclause }}` part - questions.id comes from the output of the first query.

3. The output of the first and second queries is merged together on the basis of the questionid that is present in both query outputs.

4. A similar approach is used for all the answers.
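The in-memory merge in step 3 is simple enough to sketch. This is not Squealy's internal code, just an illustration of joining a parent query's rows with an IN-clause child query's rows in application code (the standard two-query alternative to N+1):

```typescript
// Sketch: merge parent rows (questions) with child rows (comments)
// fetched by a single IN-clause query, keyed on the question id.
// Types and data are illustrative.
interface CommentRow { postId: number; text: string }
interface Question { id: number; title: string; comments?: CommentRow[] }

function attachComments(questions: Question[], comments: CommentRow[]): Question[] {
  // Bucket comments by their parent question id.
  const byQuestion = new Map<number, CommentRow[]>();
  for (const c of comments) {
    const bucket = byQuestion.get(c.postId) ?? [];
    bucket.push(c);
    byQuestion.set(c.postId, bucket);
  }
  // Attach each bucket to its question; questions without comments get [].
  return questions.map(q => ({ ...q, comments: byQuestion.get(q.id) ?? [] }));
}

const questions = [{ id: 1, title: "REST vs GraphQL?" }, { id: 2, title: "N+1?" }];
const comments = [{ postId: 1, text: "Depends." }, { postId: 1, text: "Both!" }];
console.log(attachComments(questions, comments));
```

Two queries and one hash-map join, regardless of how many questions come back; that's the whole trick.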


I'm currently building my backend on GraphQL and TypeScript.

Great DX.

GraphQL Modules + TypeGraphQL, Apollo Server, TypeORM (pg for prod and sqlite3 for dev/test)

Works pretty nicely, auto-generating both the GraphQL fields/queries/mutations/schema and the entities/models from the database, all from a single source of truth. I made a template from type-graphql's graphql-modules example to work as a separate codebase; if someone is interested I could switch it from private to public and share the link here.


Wow, I hadn't seen before how you can mix model and gql code with TypeORM and TypeGraphQL. Looks really convenient, especially with an auto-generator. Have you run into footguns?

Did you try postgraphile? Any thoughts?

Curious to see how you string it all together...


Hey, I haven't tried postgraphile, but I have tried Prisma/Hasura, which seem to be in the same SaaS-GraphQL space. They're all valid products, but my stack is 100% fully open source, and there's no PRO tier to buy for any of the options.

I based it off the creator of TypeGraphQL GQL-Modules+TypeGraphQL v1 example: https://github.com/MichalLytek/type-graphql/tree/master/exam...

And a LogRocket's blogpost about TypeORM+TypeGraphQL: https://blog.logrocket.com/how-build-graphql-api-typegraphql...

I need to add typeorm/models to my template before sharing, but as an example here's a model which is also a type, using both @ObjectType() & @Field() (type-graphql) and @Entity() & @Column() (typeorm) class decorators.

```ts
// user.model.ts
import { Field, ID, Int, ObjectType } from 'type-graphql';
import { BaseEntity, Column, Entity, PrimaryGeneratedColumn } from 'typeorm';

@Entity()
@ObjectType()
export default class User extends BaseEntity {
  @Field(() => ID)
  @PrimaryGeneratedColumn()
  id?: number;

  @Field()
  @Column()
  name: string;

  @Field()
  @Column()
  email: string;

  @Field(() => Int)
  @Column()
  age: number;
}
```

and this is what type-graphql auto-generates for a user schema.

```graphql
# user.schema.ts
# !!! THIS FILE WAS GENERATED BY TYPE-GRAPHQL !!!
# !!! DO NOT MODIFY THIS FILE BY YOURSELF !!!

type Document {
  author: User!
  authorId: ID!
}

type Notification {
  author: User!
  authorId: ID!
}

type Query {
  users: [User!]!
}

type User {
  age: Int!
  email: String!
  id: ID!
  name: String!
}
```


Nice, thanks!

Fwiw, postgraphile is open source and can be plugged into any express app. But I don't think it provides an orm layer or equivalent, so I can see the advantage of this approach too!


Sorry, my bad, I briefly checked their homepage and saw a pro button somewhere: https://www.graphile.org/postgraphile/pricing/

Although it's a totally acceptable monetization strategy for any OSS to offer cloud+enterprise support, for my personal projects at least I prefer to choose projects which are more personal/community driven than commercial to put it simply.

But yeah, I basically like TypeORM; I'm also gonna use typeorm-seeding in the mix to stress test the app.


<checks the income from the pro plugin versus that from sponsorship> I can definitely confirm that we're significantly more personal/community driven than commercial.


I'm interested please share!


Thanks for this article! I'm currently writing my Bachelor thesis, designing a new API for an existing architecture is part of my project. I've been considering going with GraphQL instead of just overhauling the current REST API and this is a pretty good starting point.


I love GraphQL because it is a standard people can agree on, if nothing else.

It is a very long list of decisions your team does not have to make. How to handle mutations and queries, how to represent your domain objects, all of this can be decided through referencing the spec and best practice.

Throw in the fact that it REQUIRES you to define a strict schema that has validation, and I am sold. The tooling is great, you get great documentation out of the box, the architecture is good enough, it is totally win-win in my book.

I feel like with REST you have to reinvent the wheel over and over again, and the lack of standards leaves the burden on the team, who may not be senior-level engineers. (If I have to write another validation library I am going to flip out.)


Use GraphQL if you have multiple teams that don't own their back-end, you have multiple clients (Web, Mobile, SPA, Desktop...) consuming your data, your front-end always has to consume multiple APIs and resources, and you don't count every millisecond.


Having written https://glennengstrand.info/software/architecture/microservi... which is a blog post entitled "GraphQL vs REST", I must admit that I was initially on guard with this blog. In the end, they did compare and contrast GraphQL with REST. I don't wish to go over the article point-by-point, but their main arguments against REST seem to me to be these:

1. too many endpoints

2. too much data retrieval

3. too much coupling between clients and servers

The blog does not discuss when to choose REST over GraphQL, since that would be a conflict of interest for those who hold Hasura equity.

Technically, GraphQL does decrease the number of endpoints to one, but at the cost of increased client-side complexity, because now client devs have to write what is essentially a query. To do so, they have to understand the backend schema. I would argue that this is not the greatest in terms of clear separation of concerns. I would also argue that this means even more coupling between client and server.

I won't cover the n+1 point since others here have already done so.


GraphQL == Modern SOAP


GraphQL with Apollo on the front-end provides very nice front-end global state and cache management, but at the cost of increased complexity. For example, fetching data updates the global state so that any page accessing that data is updated automatically.

Are there any lighter weight RESTful JS client state/cache management solutions out there people can recommend?
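For reference, the core of what I mean is small: a keyed store with subscriber notification, so any view reading a URL re-renders when a fetch updates it. A purely illustrative sketch (names are made up, not a real library):

```typescript
// Minimal sketch of a lightweight REST client cache: values keyed by URL,
// with listeners notified on update, which is the essence of what Apollo's
// normalized cache provides for query results.
type Listener = () => void;

class RestCache {
  private data = new Map<string, unknown>();
  private listeners = new Map<string, Set<Listener>>();

  get(url: string): unknown {
    return this.data.get(url);
  }

  // Returns an unsubscribe function.
  subscribe(url: string, fn: Listener): () => void {
    const set = this.listeners.get(url) ?? new Set<Listener>();
    set.add(fn);
    this.listeners.set(url, set);
    return () => set.delete(fn);
  }

  // Store a fetched value and notify every subscriber of that URL.
  put(url: string, value: unknown): void {
    this.data.set(url, value);
    this.listeners.get(url)?.forEach(fn => fn());
  }
}

const cache = new RestCache();
let renders = 0;
cache.subscribe("/users/1", () => { renders++; });
cache.put("/users/1", { id: 1, name: "Ada" });
console.log(renders); // → 1
```

Everything beyond this (normalization across entities, invalidation, optimistic updates) is where the real complexity, and Apollo's value, lives.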


No, there's not. And your point about data updating the page automatically doesn't apply if you are using React, for example: you can pass the right data to the right component, and that is the point of GraphQL. If you keep all your data at the root component, you are missing the point.


I think REST works if you need to fetch data, but when you want to query data I prefer a well-defined language instead of hacking my HTTP headers into a query language that I need to define, implement, and maintain.


Check out https://open-rpc.org/ if you haven't seen it. Brings GraphQL-like tooling to JSON-RPC.


The primary difference here is that REST is an actual architectural style with constraints.

GraphQL isn't. It's a data query mechanism expressed in JSON.


GraphQL makes sense if it is consumed by a UI (frontend). But REST is still the best for server-side consumption, that is, one application interacting with another application in the backend. I have seen people use GraphQL for server-to-server calls as well, but it is over-engineering and increases complexity. IMHO GraphQL is for humans, REST is for machines.


I like what AWS built on top of GraphQL.

AppSync + Amplify DataStore is pretty nice tech!



