Don't build a general-purpose API to power your own front end (max.engineer)
128 points by hakunin on Sept 14, 2021 | hide | past | favorite | 107 comments

I will go even further with this one. I've been working on a personal project of mine with REST APIs on the backend and a fancy combo of TypeScript/React/Redux Saga on the frontend. Everything was shiny and cool, except the overall development process was super slow. I'd spend hours polishing components, figuring out various states, installing TS libraries, fighting the compiler and god knows what else. It was exhausting and tiresome, until one day I said "f#%k it, I'm done". I opened my favorite search engine and typed "bootstrap premium themes". jQuery? Whatever. HTML? Fine. Pure CSS? Sure! No AJAX. No fancy reloads. No history management. Plain old <form method="POST">, good ol' cookies and simple HTML files. Within just three weekends I was already MILES ahead of the previous stack.

Long story short, if you are working on a personal project, please consider the dumbest setup possible. With the vast array of super polished modern frameworks and themes, it'll take you pretty far. A few more weekends and I'll prob be ready to go prod.

Edit: remembered something fun. I have a page that requires polling the backend and then taking some action. So I thought a bit of Ajax would be fine. I opened the corresponding HTML file and started typing:

but then... hold on a minute! That's getting way too complex. META REFRESH FTW, M%F%CK%RS! :D

  <meta http-equiv="refresh" content="5">

Same here, except I didn't go back to that extreme. There is a wonderful, underrated middle ground, which is using tools such as Django + Unpoly[1] + Slippers[2] + Tailwind, or Rails + view_components + Hotwire + Tailwind, etc. You can be insanely productive while still keeping your code very maintainable.

People usually think it's either a modern SPA or jQuery spaghetti. Those are two extremes. If you put 10% of the effort you were putting into building your SPA + API into better organizing a "modern traditional" stack... it can be wonderful.

[1] https://unpoly.com/

[2] https://mitchel.me/slippers/

I just built my MVP in a Next.js monorepo. We got it done in a few weeks and actually landed some real customers with it.

I suspect doing it in Rails or Django would've been faster, but I don't have enough experience with either.

Let's say I want to create my app in Rails (or Django). I don't need it to be pixel perfect, but I do want some flexibility around the UI. ActiveRecord is fine. Can I just plug this into the admin and customize as I see fit? I know Django has a built-in admin; does Rails?

FWIW, I have used Django (without the admin) and tried learning Rails several years ago but it was too overwhelming for me. Rack middleware, the ruby syntax, the conventions—all of it seemed over my head. But if I can learn redux, react, etc, I feel like I can do Rails too.

Do you have any recommendations on resources? I want to learn how to make something that looks really good, is convention-based, lets me plug in my custom code when needed, and lets me prototype fast as hell.

I can't speak much about Rails, as I've only played with it. But I've used Django a lot in the past.

Regarding the Django admin (in Rails you have ActiveAdmin[1]), think of it as just a glorified database explorer. It is an internal tool for developers, product managers and maybe your support team. It is in no way meant to be used by end users. Every attempt I've seen to use it as such was a catastrophic failure.

With Django, if you know plain HTML and CSS, plus the tools I mentioned in the comment you're responding to, you can build almost anything. For example, say you need a highly interactive client-side table: you can always attach a Vue or React component for it using Unpoly compilers [2].

I'd say this stack is less useful the more your app needs to work fully offline, but if you don't have that constraint, I cannot think of anything that can't be built faster and safer. Just one example: authentication is something very risky to do yourself, and it has a ton of corner cases. In Django, just plug in django-allauth, configure a few settings and done! You have a rock-solid, battle-tested, well-documented authentication system, which would otherwise take you months or years to get right (both feature- and security-wise). Check Django Packages too [3].
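For a sense of how little glue "configure a few settings" means, here is a rough sketch of the django-allauth wiring, assuming a standard Django project layout (the fragments below are illustrative, not a complete settings file):

```python
# settings.py (fragment) -- roughly what allauth needs on top of a
# stock Django project:
INSTALLED_APPS += [
    "django.contrib.sites",   # allauth depends on the sites framework
    "allauth",
    "allauth.account",
]
AUTHENTICATION_BACKENDS = [
    "django.contrib.auth.backends.ModelBackend",          # admin login, etc.
    "allauth.account.auth_backends.AuthenticationBackend", # allauth flows
]
SITE_ID = 1

# urls.py (fragment) -- mounts signup/login/logout/password-reset views
from django.urls import include, path

urlpatterns = [
    path("accounts/", include("allauth.urls")),
]
```

After that, the signup, login, email-confirmation and password-reset flows exist at `/accounts/...` with overridable templates.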

And regarding learning resources, the official documentation is awesome. There are also popular books such as Two Scoops of Django, among others. And almost all video learning platforms have quite decent courses.

[1] https://activeadmin.info/

[2] https://unpoly.com/up.compiler

[3] https://djangopackages.org/

Hey, if you're already up to speed with Next.js, there's Blitz.js, a Rails-like framework built on Next.js.

Hope you enjoy it!

> Long story short, if you are working on a personal project, please, consider the most dumb setup.

This is not limited to personal projects. I can’t recall more than a single project I’ve worked on during the last decade where front-end code was really useful. Some cool stuff, ok, but never worth the pain.

I acknowledge it can be useful for some real-time projects. Not for your CRUD-for-a-living.

Nailed. It. Yes. I work with GCP (Google Cloud console) for work ALL DAY. It is a layered, gross mess of an SPA. Example: I need to list the servers. It will take 10-20 seconds to load the list of 30 servers. I don't care about the cool glitching transitions or boxes with fades. I want to list the servers. Hit the back button? OH WELL! All your session data is now lost. Anytime I type in GCP, the computer hangs while it processes thousands of lines of JavaScript to see if I will autocomplete. When I click "create", my browser tab hangs while my MacBook Pro fan cranks up.

I want to rewrite the GCP console with HTML, using their api and not one damn line of JS.

100%! I know it's not comparing apples to apples, but do have a look at Hetzner Cloud's UX. It's FANTASTIC! Super minimal, functional, and I can almost bet they didn't use a "UX Designer". I got the feeling they gave the UX design job to the programmer-already-on-staff who could name the most colours.

It's such a joy to spin up servers and work in that UX compared to GCP and AWS!! Scaleway had a good one a few years back, but now every page feels like a "marketing flyer" that screams BUY-ME all the time :/

I think this is their way of encouraging people to use the CLI.

I have the same challenges with the AWS ui, managing thousands of EC2 instances. While the CLI makes it a little simpler, it's still slow as molasses.

Some of these problems are because of frontend cruft (perhaps with the goal of nudging users towards APIs) to be sure. But plenty of them are due to the reality that the cloud providers' backends often do not, ironically, work very well at scale.

It’s easy to use the simplest possible stack on a personal project. You set the requirements and you will probably end up setting requirements that work with that stack.

On a professional level, the requirements may mostly be for a basic CRUD app but it will have a few requirements for interactive or real-time features that are not possible with a static html crud app. The client is not going to want to hear your hacker news spiel about how JavaScript has ruined the internet. Now you’ve got to embed mini SPAs into your static html, and you’ve increased the complexity past what it would have been had you just used a SPA in the first place.

And just scope creep in general. I can't imagine doing professional work without an SPA, because going without one exposes me to that risk.

You took a similar path that I did with my personal website. Kubernetes, istio, redis, OIDC, redux, sagas, you name it. All this unnecessary complexity was, however, deliberate. It was an excuse to learn about all the interesting things people are talking about. Then one day I decided the experiment was over and rewrote it from scratch in ~24 hours using plain create-react-app and Ubuntu on a $5/mo droplet. It was a valuable learning experience.

How does one reconcile moving fast like this with having fun coding? For me, I don't really want to use PHP, jQuery, etc, I want to use TypeScript and React simply because they're more fun to use, if a little slower.

Pick complex problems where you need to reason about the solution in a general way, regardless of implementation language.

Like: how do we efficiently receive, insert and query a gazillion rows of data? How can I use HTTP headers to my advantage to minimize data transfer in a dynamic app? What is the best way to push data to multiple clients? How should I organize a user permissions system with users, roles, groups and inheritance, then connect that to resources dynamically (resources can be added and removed) in the most efficient way (preferably with one query)? Or other complex problems you typically face.

The implementation is then just a detail, because the problem is already solved in your head (or whiteboard), thus the language becomes irrelevant. Just pick a language where you, the programmer, can be as efficient as possible to transform the solution to code.

That is why I mostly code in PHP. I don't get distracted by language nonsense; instead I focus on solutions to problems I solve in my head and then type them down.

Code should be minimal yet readable and understandable, thus elegant: condensed to what you're trying to solve, without the extra fluff around it.

This also has the consequence that many of my solutions never get typed down as code if there is no practical need at the moment; solving problems with your mind alone can be just as satisfying as coding them. Remember, we are engineers. We solve problems; programming languages are just tools we use. The problem and the solution remain the same regardless.

You don't need Redux most of the time, let alone sagas. It's a very common mistake. Mostly you can use `useState()` and just pass state down with props.

If you really need reactive global state, I'd prefer MobX.

I see this sentiment all the time, and some context is sorely needed because it's a dangerous statement to make. There's a reason why global state management is a thing, and it's not an edge case.

For small projects where prop drilling is enough, fine. But we are talking about very small projects. Anything bigger and there is a significant drawback to that approach.

You don't need MobX either – for global state you can just use useContext().

Classic React. The subject of state comes up and a debate starts as the noobs squint at the screen. Pretty much the whole point of this thread/post. I started a significant project using React and tried for days to figure out how to handle state. After endless debate and conflicting advice online I felt I needed to speak with someone who has built a real-world business app to explain how the fuck they pulled it off without pulling out their hair. (Still very curious but spooked enough to avoid sauntering onto a React project to find out.)

Use a networking library like relay, or react-query and 90% of your state management problems go away in most apps.

Your state is on the DB, those libs handle fetching and caching it.

Honorable mention, haven’t used it myself but there is RTK Query too, which is a library like those, but based on Redux, could be easier to debug.

That's because in the React world it's all about state management. Once it clicks that the view is just a function of state, these discussions will make more sense.

This is one of those statements I hear over and over from React devs that, while technically true, just glosses over the giant iceberg that is state management in React.

Lots of articles from people who know more than I do recommend NOT doing this. If you need global state (and you may not; in fact you probably don't), then use a state manager. There are a lot for React; some more standard, some simpler.

"...TypeScript/React/Redux Saga on the frontend..." This, plus RxJS + websockets + in-house validation libraries + ..., was the stack at my last company. For doing F**ng CRUD forms. This was one of the main reasons I left.

I took a different approach, having one hammer (Quasar, Vue.js, TS) for all nails.

It may be too heavy, or whatever, but I can easily build SPA, PWA and Electron apps from one place.

I forced myself not to look for the new shiny thing until I really get to a point where Quasar and Vue are not enough.

I am an amateur dev and all users will have evergreen browsers and fast connections so plenty of concerns are moot.

This is my recipe as well and I've launched a number of products [1][2] this way and have had no complaints from customers.

[1] https://fiers.co/

[2] https://getelodie.com/

Also, consider using Rails and Turbolinks if you're familiar with them. You can get pretty far with turbolinks and the responsiveness is similar to a single page app.

Thanks for the advice. I use Go on the backend, but couldn't resist some "fanciness". I embed all the HTML/CSS/JS/image files into a binary and serve everything from memory. Zero disk reads. It's blazing fast. Also, deploying the entire website becomes painless: scp the binary and restart the process.

You can still benefit from something like https://github.com/instantpage/instant.page as a poor man's replacement of Turbo(links).

> I embed all the html/css/js/images files into a binary and serve everything from memory. Zero disc reads. It's blazing fast.

I do that too. A friend "stole" that idea from the Tour of Go, and I really like it. Just run go generate before building to add the static files, then go build, done. One binary, it just works.

Surely modern OSes cache files as memory-mapped files...? Given the availability of physical memory?

I did similarly, moving away from Vue (which I like a lot and which can be great in some contexts) to Tailwind + htmx. Incredible upgrade!

Here's an idea – instead of passing the "visual page structure" to the client as JSON, use a markup language specifically designed for that purpose – HTML.

What ends up happening though is that you have to build that general purpose API for mobile apps regardless, and right after that developers start using it to render their web app components. Rinse and repeat.

If you are into server-driven UI, you can deliver structural data as JSON and have only the client know how to render components.

This is easier said than done, however, as you basically end up reinventing parts of HTML, but worse (e.g. attaching event handlers to UI elements, etc.).
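To make the "reinventing HTML but worse" point concrete, here is a minimal, hypothetical server-driven-UI renderer: the server ships a component tree as JSON and a thin client maps node types to markup (all node/field names here are made up for illustration):

```python
import html

def render(node):
    """Render one node of a hypothetical JSON component tree to HTML."""
    if node["type"] == "text":
        return html.escape(node["value"])
    if node["type"] == "button":
        # Event handlers are where this scheme starts to hurt: you end up
        # inventing an attribute protocol (here, "data-action") on top of JSON.
        return f'<button data-action="{node["action"]}">{html.escape(node["label"])}</button>'
    if node["type"] == "stack":
        return "<div>" + "".join(render(c) for c in node["children"]) + "</div>"
    raise ValueError(f"unknown component: {node['type']}")

tree = {"type": "stack", "children": [
    {"type": "text", "value": "Hello"},
    {"type": "button", "label": "Save", "action": "save"},
]}
print(render(tree))
# -> <div>Hello<button data-action="save">Save</button></div>
```

Every new interaction (focus, validation, navigation) forces another ad-hoc convention into the JSON schema, which is exactly the part HTML already solved.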

Or, use GraphQL. Or a BFF for each client.

> Here's an idea – instead of passing the "visual page structure" to the client as JSON, use a markup language specifically designed for that purpose – HTML.

Why didn't you just come out and say "Use Hotwire!" or "Use LiveView!" ;)

Wait, are you saying that "html-over-the-wire" is preferable even for mobile apps, or that having mobile clients makes using "html-over-the-wire" undesirable?

I totally agree with the idea of catering your BE to service your -- most likely -- single FE client. Every rails project I have worked on has this 1:1 mapping between endpoints and entities and it drives me insane. We set up these APIs to have clean lines between models and endpoints and consequently we push all the complexity of combining all of these related entities onto the FE. Then we wonder why the FE is so complicated! If I have to make 5+ API requests for one page then there was a severe failure in planning the API.

What's even more frustrating is that this is the rationale for moving to something like GraphQL. We have engineers advocating for it because then it's "just one request per page", and it doesn't click that the framework we are using is pushing us into a less-than-favorable API design.

GraphQL is appealing from the frontend perspective, but I've yet to see a case where it would do anything but make the backend 10x harder to develop, and it doesn't solve the problem of the frontend also having to know all of the entity relationships. Since you're using Rails, it's a lot easier to just add a controller endpoint that provides the composite data (as has been mentioned elsewhere already) for one UI action/page/navigation/state change, instead of using an entity-focused API. Controllers are supposed to abstract over potentially multiple models, not just be 1:1 endpoints for each type of entity.

If GraphQL makes your backend 10x harder to develop, then you are not doing it right.

The overhead of GraphQL vs REST on a backend is around 30 lines of code in my experience (I've developed 5 different production GraphQL servers, including a very large one). Routes become resolvers, but that's mostly the only change.

But in the end you get a much nicer API: GraphQL codegen, type safety, auto-generated docs, etc.

> Since you're using Rails, it's a lot easier to just add a controller endpoint that provides the composite data (as has been mentioned elsewhere already) for one UI action/page/navigation/state change, instead of using an entity-focused API.

It's possible, sure, but that's not the point. The point is the convention drives people to make entity-based endpoints, and that's what I cannot stand. People run the Rails generator to build an entity, and then when that doesn't scale we have to reconfigure the API. Overall, not a fan of Rails and the conventions it promotes.

> Overall not a fan of rails and the conventions it promotes.

To each their own. No framework will have the right conventions to see a complex project through 100% with zero customization or deviation. I'm still fond of Rails, though I haven't used it in a while. Nothing I've encountered since -- Node, Spring Boot, Scala + TwitterServer, various Kotlin libs/frameworks -- has been as productive or appealing for me as Rails. You don't know what you've got until it's gone, as they say.

> it doesn't solve the problem of the frontend also having to know all of the entity relationships

Isn't this the point of GraphQL? You can write your front end with easily generated typed responses.

Hasura with GraphQL Code Generator is an amazing, type-safe, and secure combo. I've never been able to develop a REST API faster, much less with documentation and type safety.

It might not be faster than rails/html, but it's a hell of a lot easier to reuse, understand, secure, and audit.

I may have misunderstood what you're saying, but binding your endpoints to your entities isn't a bad idea at all. For example, even when you're not making an SPA and are using good ol' server-side generated Rails, you would still have a 1:1 mapping of pages to resources, and your "app" would move you from page to page as you did stuff. This works out pretty well because your resources are what people want to interact with!

Try out Basecamp sometime - watch the routes, they're all well bound to resources, yet it feels very much like an app. The progenitors of Rails are still doing things the old way and they're pretty good at it.

I completely agree with the author. A few months back I examined the number of network requests required to populate the data for a page I was trying to improve performance on.

Let's just say it did not go well. Despite best intentions of being API-first with a SPA front-end, when you have a data-heavy and query-heavy application, it is absolutely the wrong choice ten times out of ten.

It leads to (not so) hilarious situations where the older, server-side rendered version of your app that uses jQuery absolutely demolishes your new hotness in performance. Try explaining that one with a straight face.

Did you consider endpoint composition? Instead of making 7 API requests (via fetch, or whatever), provide an endpoint that’s simply the composition of the 7 endpoints.
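Stripped of any framework, the composition is just one handler fanning out to the existing fetchers and returning a single payload; the function names below (`fetch_user`, `fetch_orders`, `fetch_notifications`, `account_page`) are hypothetical stand-ins:

```python
def fetch_user(user_id):
    """Stand-in for the existing GET /users/<id> handler."""
    return {"id": user_id, "name": "Ada"}

def fetch_orders(user_id):
    """Stand-in for GET /users/<id>/orders."""
    return [{"id": 7, "total": 42}]

def fetch_notifications(user_id):
    """Stand-in for GET /users/<id>/notifications."""
    return [{"id": 1, "text": "Welcome back"}]

def account_page(user_id):
    """The composite endpoint: one response shaped for one page,
    reusing the per-entity fetchers instead of making the browser
    issue three round trips."""
    return {
        "user": fetch_user(user_id),
        "orders": fetch_orders(user_id),
        "notifications": fetch_notifications(user_id),
    }

print(sorted(account_page(3)))
# -> ['notifications', 'orders', 'user']
```

Server-side the fan-out is cheap (same process, same DB connection); the saving is entirely in client round trips and client-side assembly code.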

Great question! Yes, this was thought of and it's an approach that might still be taken. That said, I consider this to be a band-aid solution - IMO, just have a back-end to your app or a heavily optimized query interface like GraphQL. Once you are adding special endpoints for mostly just your UI, it's no longer strictly a general purpose API and the entire point of the design is moot.

If you end up in this position I consider it a design failure - most data-heavy apps are probably better off with a command/query API.

> Once you are adding special endpoints for mostly just your UI, it's no longer strictly a general purpose API and the entire point of the design is moot.

Not really, it's a very valid solution when you have multiple clients (Web, Mobile, TV, etc.) and multiple API calls in the backend. Netflix had a similar solution back in the day (not sure they still do) to solve the issue of clients calling multiple APIs.[1] Which isn't much different from calling multiple endpoints of the same API. When you have one client and know everyone else isn't going to use your API, it might be overkill. But if you have a very granular general-purpose API (like, say, Twitter's) but want your Web or Mobile clients to have a very narrow and specialized one, it makes sense.

I wouldn't consider this solution a failure, just the outgrowth of a highly distributed backend system.

[1] - https://www.nginx.com/blog/building-microservices-using-an-a...

I feel like your last example is a known tradeoff, not an exception. Frontend frameworks never intended to solve performance; it has usually been at the cost of performance that we get a more powerful, dev-friendly experience.

They are heavier applications, not simple websites, and you pay for that in performance and often UX latency. The modern web feels slower than it did in 2010 in many situations: latency, plus lazy developers not implementing affordances for when things are loading or the page is changing. Your app is probably slow for everyone else; you need loading spinners! Especially if you are hijacking the browser navigation.

I would argue that mature frameworks like Rails or Django are peak productivity though - but, by all means, throw in a front-end framework like React to fill in the gaps if you need it.

I personally don't think the performance trade-off is worth it. And it drastically increases front-end complexity in terms of state management, which you now also have to manage in the client. The complexity is just moved further from the server; productivity is gained on the visual part but lost everywhere else.

With HTTP/2's multiplexing, it doesn't really matter how many requests the frontend fires off. Just use HTTP/2 when you have multiple requests on a page.

I think the problem in this case is less the number of requests, and more what it implies. If certain endpoints are even a little slower than they should ideally be to fulfill the requests, you now have a metric ton of not-efficient-enough requests, which adds up very quickly. It's not a problem of making the connection, but the abstraction level.

You could argue, well, fix the API performance right?

That would be a valid suggestion. However, if the API is general purpose, what performance level is acceptable? Should it be able to perform tasks as fast as the average API consumer would expect, or should it be fast enough to serve the UI as well?

It creates a requirements problem in my opinion. If you were to have an API team, is this really on them, or is it on you for trying to use it in a way it wasn't necessarily built for?

This disconnect is why I don't believe a UI should ever be written against a generic API that is data or query heavy. It would involve too much coordination to get it right, which removes the advantage of having components and teams separated in this way (which is often done nowadays).

What do you win by downloading MBs of JS over a multiplexed connection? You still need to parse it all on one core.

Just another reason to use graphql. I really don't understand why the industry has to spend another decade kicking and screaming until everyone transitions to using it.

I've built and used APIs for full systems in all three -- RPC, REST, GraphQL -- as both a creator and consumer, and as far as I'm concerned everything else is dead.

Do you honestly think that if we got a bunch of people to comment who'd used all three, they'd all say "Yep, GraphQL, that's my favourite"? None of them is dead, because there are people who prefer each one, and ceteris is rarely paribus: if one (I originally wrote RPC, but I suppose it could be any of them) is already in use for inter-microservice communication, say, that's going to tilt the balance.

I mostly agree with this (provided you are sure you should be building an SPA in the first place, which is currently a massively over-used architectural pattern).

The one thing I disagree with is dismissing the idea "But we can reuse this API for the mobile app too!"

Depending on how your organization is structured, it can be common for the mobile app team to end up needing APIs that aren't being delivered promptly.

Should that happen, having a web front-end that's entirely powered by APIs can level the playing field enormously - the website can no longer "cheat" and not bother with an API, which means the mobile team will get everything they need.

> Depending on how your organization is structured

I'm truly starting to think that organizations with silos around what you can code are just wrong and prone to this over-engineering.

Just make your backend the shared space between your teams. There is no reason a qualified front-end or mobile programmer couldn't at least write the controller for the endpoint they need.

This approach is described by the cutely named BFF – backend for frontend – pattern. (https://samnewman.io/patterns/architectural/bff/)

It's a solid pattern. Eventually I find you end up wanting a write API for validation, and a read API for flexible querying in some applications though.

That is well-served by GraphQL mutations and queries.

BFF pattern can be more approachable and reduce client code, however, and that's a plus.

I'll be honest, I wanted to make a bff joke, but kept running in to too many double entendres.

Best. Pattern. Ever.

Edit: spelling

One of the best parts of Twitter is that they load the main feed and the various sidebar widgets asynchronously. That lets you see whatever loads first, and makes it appear as if something is "happening".

I don't like this concept of sending the page structure as JSON.

It would require all parts of the UI to just be stuck on "loading" while the back-end essentially renders the entire page.

If you decide to turn this into "ok, well, make it /page/a/component/3", then we're back to square one on the whole idea.

If everything on the page is slow to load, then I would agree. Twitter is such a crazy outlier that I can't even imagine what they have to deal with. In a usual case you'd get a rare 1-2 slow retrievals, and the rest of the page can show up immediately.

Another use case where you are very probably better to build a general-purpose API: when you want offline support, especially if you want sync. Ad-hoc offline support will cause you far more trouble than the effort of making a principled design. (As far as foundations for a sync-capable web protocol are concerned, I’d suggest JMAP as generally a good choice for client-server sync, and even if it doesn’t match your requirements, it’s good reading for those unfamiliar with sync considerations to get a feel for what you’re going to need.)

I don’t think this approach mutually excludes the ability to create another endpoint for sync.

Ad-hoc offline or sync support invariably goes wrong in important ways that lead to persistent missing or incorrect data and data loss. It’s something that needs to be designed in from the start, and it necessarily takes the form of a data-oriented API, entirely opposed to what this article is suggesting (UI-oriented). If your main UI needs to work offline or with sync, you can’t use a UI-oriented backend.

Absolutely. Your application API and your data API have different needs and requirements:

Your application API is churny, specific and tuned for certain screens and user interfaces.

Your general data API is, well, general, rate limited, concerned with limiting the ability of that expressive power to damage your system, etc.


Once you get over the hump of splitting your data and app APIs, the next step is to realize that your application API can be a hypermedia API rather than a dumb JSON API, and you are off to the races.


I'll go one step further than this - build an endpoint for each component in your frontend. That way you can re-use each of these components on multiple pages, and you end up with components that are scoped fairly narrowly.

In practice this ends up building out a reasonable approximation of what a "public" API would be. E.g., your WidgetList component forces out a /widgets endpoint, which might get reused by some other widget too. That's fine. The point is you're still working UI-first and building the minimum viable backend.

With lots of components on a page, you might end up making multiple calls for the same information. That's also fine. You can optimize that later if it becomes a real problem.
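The component-per-endpoint idea can be sketched without any framework: components declare the endpoint they need, pages are just sets of components, and the API surface falls out of that (the registry, decorator, and endpoint names below are all hypothetical):

```python
ENDPOINTS = {}

def endpoint(path):
    """Register a data-loading function under an endpoint path."""
    def register(fn):
        ENDPOINTS[path] = fn
        return fn
    return register

@endpoint("/widgets")
def widgets():
    # Backs a hypothetical WidgetList component.
    return [{"id": 1, "name": "sprocket"}]

@endpoint("/user")
def current_user():
    # Backs a hypothetical UserBadge component.
    return {"name": "Ada"}

# Pages are just lists of the endpoints their components need.
# WidgetList appears on both pages, so /widgets is naturally reused.
dashboard = ["/user", "/widgets"]
catalog = ["/widgets"]

def load(page):
    """Fetch every endpoint a page's components require."""
    return {path: ENDPOINTS[path]() for path in page}

print(sorted(load(dashboard)))
# -> ['/user', '/widgets']
```

The registry is the "approximation of a public API": each entry exists only because some component forced it into being.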

How do you stop the spinner/load-jank hell of 100 different components all requesting their own data? Not to mention, even if you're running a low-traffic site so your backend performance is fine, the browser is limited to a set number of concurrent requests.

Locally, it might all go super fast, but as soon as you deploy it, that dashboard calling 30 endpoints is going to feel insanely slow just waiting for the network to become free for use.

If you're loading 100 pieces of data, maybe you have some UX/IA design issues? Nobody looks at 100 different things on a page. I'd say my average page has 5-10 requests, and it's plenty snappy.

Nanoservices? Snark semi intended.

(edit) What could go wrong? https://www.youtube.com/watch?v=y8OnoxKotPQ

I’m working on a project now with 64 front-end components. So you’re advocating for at least 64 calls to the backend?

Sure, maybe? Why do you have 64 data-loading components on a page? Is that typical or is this an odd exception that might make sense to optimize? How long does it take a user to look through the data in 64 separate boxes?

If there's a lot of redundancy in the requests, you can solve that in the request layer. Each request from a component doesn't need to become a separate network request. But if it's all unrelated data... maybe you have an IA problem.

No, I'd go about it like this: decide on a reasonable number of requests for each page to make. Let's say it's N. Divide each page in your app into N pieces. There might be elements repeated across all pages (like the navbar), so even with a very low N, like N=3, you can save a lot of redundancy. E.g. divide every page into navbar, content, and bottom content. Now you have a single /navbar/ and /bottom/ API, plus /pageHome/, /pageHistory/, /pageX/, etc. for each page.

> Have you seen that list of annoying decisions up there? For one, they are gone now.

Erm, even with that solution you still have to consider the changes that impact this schema. These problems didn't go away; they're simply masked differently.

I'd argue that the difference between "we're changing schema for all callers" vs "we're changing schema for 1 page of 1 frontend that we fully control" is so profound, as to render that whole concern moot.

I'm stuck in this quagmire and have been for 18 months. Right now it's manifesting as a React on GraalJS project written in Clojure. I've learned a lot, but actual progress is, of course, elusive.

I have done something similar with a previous project at work, though everything went through three layers: the front end (which was reasonable, as they describe), the front backend (which just serves each page's required data), and the back backend (mostly the legacy version of the site's back end, but it still had useful logic in it).

Ended up working out okay, and the idea that each page gets its own service endpoint was a bit weird at first, but it really gave us the flexibility to avoid touching the legacy layer very often.

I just use GraphQL and freeze the queries at compile time so only known queries can be used. That stops worries about misuse from the frontend.

On the backend, for performance we have two choices:

1. Over-fetch. It's relatively cheap doing that from a cache. Most queries don't use too many different variations.

2. Optimize what data we fetch based on the node's children in the GraphQL query received. I don't think people do this often enough, but GraphQL gives you the full query as an AST, so you can use it at runtime to know what your node's children will be rendering before they're hit. Because we enforce #1, we don't have to build some general-purpose query builder; a simple check of "Are you X named query?" is good enough. Internally it looks kind of like JSON RPC.
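A minimal sketch of that "known queries only" check; the query names and column lists are invented for illustration:

```javascript
// Whitelist of frozen queries, mapped to the columns each one actually needs.
// Anything not in this map is rejected before it touches the database.
const FROZEN_QUERIES = {
  UserProfile: ["id", "name", "avatar_url"],
  OrderHistory: ["id", "total", "created_at"],
};

// "Are you X named query?" — look up the frozen query by name,
// so no general-purpose query builder is needed.
function columnsFor(queryName) {
  const columns = FROZEN_QUERIES[queryName];
  if (!columns) {
    throw new Error(`Unknown query: ${queryName}`);
  }
  return columns;
}
```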

Am I doing it wrong? It seems to give you the best of both worlds because GraphQL APIs have great tooling and documentation behind them, and if you cut out all the 'general' purpose aspects in production you can be fairly efficient (e.g. our GraphQL API is about 5% slower than the JSON API it replaced).

I think this is totally fine if you're willing to accept some performance trade offs in favor of productivity, you're staying mindful of limits and scoping, and just generally you're accustomed to living in the GraphQL world.

I could not agree more. This is the way I design things and it makes life so much easier. It solves so many problems by just building a separate API for each app. They can all use and re-use the same underlying functions and logic, but each API serves the data in the most convenient way for whatever page is being rendered. I would never go back.

Don't bother looking back. I would encourage you to consider GraphQL depending on your use case, but for data-heavy apps the generic API approach is the worst option.

I am not very familiar with GraphQL, but on first look it seems more suited to a public API. For an internal app, when we know EXACTLY what data is needed for each page, this seems like wasted effort. Anything in particular you suggest checking out?

GraphQL was designed for mobile apps and UIs, and I think it’s perfectly suited. We started to build our front end using our public REST API and the experience was so harrowing that we started looking for something else.

The big win with GraphQL comes from the client-side tooling. You can have a React page with 100 components that each request their own data, and have it automatically batched up into a single request. Your page no longer has to predict what data the components will need. When a junior front end dev tries to reuse a component from page A in page B, the page B query will automatically be updated to fetch the data required for that component.

I’ve been using GraphQL seriously for a year on a client project now (an inherited decision). I can totally see why it would be useful for a public API or an internal API in a very large engineering org.

For everything else it is needless complexity and an additional failure mode. The tooling and operational aspects aren’t as mature as a conventional HTTP API (yet?) so I’ve found it to be a higher cost for limited gain.

Getting errors packaged as status 200 graphql responses (hey technically graphql worked fine, here's the result!) really bugs me.

I wish we could standardise on something that feels more 'native': RESTful (the Fielding tangent aside; as it's actually practiced) JSON responses, JSON Schema or OpenAPI even, as popular, widely implemented IETF RFCs.

Or the same with GQL, but with corresponding changes to HTTP to make it make more sense. Have a response be a type consisting of nullable success data and error data perhaps, or more layered status codes as rigid as they are but allowing what GQL wants to express by 'here is a successful response that contains errors'.

How else would you handle error codes without reducing efficiency by sending extra requests? One of the benefits of graphql is that you can query many different things in a single request, so some parts of your query could have errors without the whole thing resulting in an error.
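For concreteness, a partial-success response under the GraphQL spec looks roughly like this (the field names and error message are invented): one field resolved fine, another failed, and the whole thing still ships as HTTP 200.

```javascript
// Shape of a partial-success GraphQL response (illustrative values):
// "user" resolved fine, while "recommendations" errored out and was
// nulled, with the failure described under "errors" alongside "data".
const response = {
  data: {
    user: { name: "Ada" },
    recommendations: null,
  },
  errors: [
    {
      message: "recommendation service timed out",
      path: ["recommendations"],
    },
  ],
};
```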

There's nothing special about the enum status code and binary (header described) response data of today.

GQL has little choice: it can either call the whole thing an HTTP error or, and to be honest it probably does the right thing here, call it a success and describe the error in the response data. In a hierarchical model of your choice, that's genuinely an HTTP-level success.

In the sort of respecifying I'm describing, you could expand status codes to describe 'mixed' results, or 'upstream errors', or whatever. I don't really have great suggestions ready, because my only point is that I think the status quo is bad.

> For an internal app, when we know EXACTLY what data is needed for each page

I think that's only true for relatively small internal apps. I work on an internal app with a few hundred thousand lines of JS, and there are way too many different permutations of different types of pages to write an API catered to each use case. GraphQL has been fantastic.

This resonates with my experience, which is basically doing this the wrong way and watching all of it play out the way OP warns it will.

I'm sure there are others with the opposite experience but I can vouch both for the desire and expectation of devs to build general-purpose APIs when not needed, and the ongoing pain resulting from that year after year.

I agree with this. It's tangentially related to it being so standard to refer to every JSON HTTP API as a "REST" API. REST APIs (in the strict sense) are for public integrations (but how common is that really?) / other teams in your (largish) company.

> Your business logic has now moved from being haphazardly split between frontend and backend into just backend.

This. I have spent countless hours pulling my hair out trying to understand backend business logic, only to discover that part of it is implemented in the front end.

What's more, the supposed generic backend API makes quite a lot of assumptions about the front end orchestration, so the API can be used only in conjunction with the front end it is serving.

Now not only is your backend API not reusable but also the business logic is brain split between the front end and backend.

> #Do you have data to support your claims?

> I wish. It’s pretty hard to measure these kinds of things in our industry. Who’s gonna maintain 2 architectures for the same software for 3 years, and compare productivity between them? All I got is a mixed bag of personal experiences. Feels inductively justifiable.

Oh wow this is sort of fascinating. Maybe we could crowd source experiments like this. Like maintain some random open source app with two different backend structures for X years and blog about it, share battle scars.

Love this idea, and someday when you also have a mobile app please provide them an actual API that is just as useful for them, don’t ask them to use those bespoke RPC endpoints.

How is it different from what GraphQL gives (Other than being too flexible)? Front-end can do single API call for entire page and can be cached.

Did I miss the whole point?

This kind of supports the author’s point. Many times when using GraphQL you will fetch data in a way that matches how the front-end needs the data, not in the way that creates a clean decoupled abstraction for the backend.

Which is exactly what GraphQL was designed for: a tool for easily writing APIs to translate your clean decoupled general-purpose APIs into something that is convenient for the front-end to use.

Since I do almost exactly that, I agree with the sentiment.

I'll go even further on one point: "Imagine if you could just send it the whole “page” worth of JSON" => why do you even send it as a separate endpoint? Embed it in your page! It works great with a SPA: the first load use embedded JSON, and next partial data refreshes hit endpoints returning JSON with the same format.
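A hedged sketch of that embed-then-refresh pattern; the function names, the `page-data` id, and the escaping choice are all mine, not from the comment:

```javascript
// Server side: embed the page's initial state as a JSON <script> tag,
// escaping "<" so user-supplied data can't break out of the tag.
function embedPageData(data) {
  const json = JSON.stringify(data).replace(/</g, "\\u003c");
  return `<script type="application/json" id="page-data">${json}</script>`;
}

// Client side (sketch): in a browser you'd read the tag's textContent;
// here we pull the payload back out of the raw HTML so this runs anywhere.
function extractPageData(html) {
  const match = html.match(
    /<script type="application\/json" id="page-data">(.*?)<\/script>/
  );
  return JSON.parse(match[1]);
}
```

First paint parses the embedded blob; later partial refreshes hit JSON endpoints that return the same shape.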

I'm following this pattern by using NestJS on the backend, and it acts as the state of my frontend (no need for Redux). The API responses are already in a shape that I can easily consume. A single request gets all the data it needs (using relations). I like it!

Inertia.js does this very well. Using server routing on JS frontend and autoloading page data.

This approach does not help when you need to transform data models on the frontend (and possibly send it back).

Imagine trying to make sense of:

merge(pages.a.section.main.data, pages.a.section.secondary.data)

What does that even mean? It’s going to get hard to prep abstract data.

I think Hasura (automatic APIs for your DB) is the best thing since sliced bread. It handles 95% of my backend needs.

There should just be one endpoint that dumps the entire DB as JSON. The front end guys would love it!

We have an API similar to this.

I'd say it can make sense. But one big issue is how it muddies the line of where certain frontend decisions should be made. For example: if you have a person's profile picture, and you want to round it, should that be a property on the JSON structure? E.g.

{ "profilePicture": { "url": "https://mycoolpicture.com/pic.jpg", "rounded": true } }

Or a property of how the frontend transforms that structure into HTML? In other words, it's an opaque, intrinsic property of the "profilePicture" "JSON-component": every profile picture is rounded, because that's just how profile pictures are.

Until you hit the one page that wants one that isn't. No problem, maybe a generic "picture" "JSON-component". Or you hit the page that needs a squircle. Ok maybe we do need a "border" field, and why not just make its value the same as the CSS `border` property and oh my god are we just reinventing HTML?

There's no right answer to this question, and it will appear in literally every single component you build. Maybe it's text weight: how generic should it be? Very generic: "title". Kinda generic: "heavy". Or literally just CSS.

What do these styles mean when faced with user browser preferences, such as dark mode or accessibility systems? By the API contract, "cardTitle" means "roboto, 16px, #0000ff, whatever"; but not always, right? Does the frontend disobey the contract? Do you send these preferences to the backend and let it handle them? Does the contract allow for leeway in how its responses are interpreted? How much leeway?

Lets say I want a sidebar. That sidebar has six items, so we list them. But what happens when the site is displayed on a phone? Well, for the sake of just providing an answer to continue this line of thinking: the designers want it to become a bottom bar. No problem, except, uh, the API says it should be a sidebar and we're not really communicating how big the screen is to the API are we? Well, maybe we are, sure, we should have considered that during the v1. Is the server really the best source of truth on how many items a 976px wide screen can hold? Four maybe I guess? Maybe the frontend just sees `sidebar`, interprets it to "understand" "implicitly" that it means "bottom bar" on small screens, and do the best it can, and now the entire point of this exercise is out the window and we're not obeying the API.

Another problem is during visual redesigns. This may demand the recreation of your entire API surface; you just doubled the work of a visual redesign. If the data model were generic, your backend team may not have even had to have been consulted.

Another is in interaction. Button which opens a new tab to google.com; easy, no problem. But in every reincarnation of this idea, I've never seen a strong argument for how to handle even a basic form, with a button to submit the input to another API endpoint. Do you have something like

{ "form": [ { "formTextField": { "id": "firstName" } } ... { "formSubmitButton": { "endpoint": "/createUser", "verb": "POST", "arguments": { "id": "firstName" } ... } } ] }

How do you handle the response from /createUser? What if there's an error? What if one page that calls /createUser needs to display the error in a snackbar, but another page needs to display it in-line with the button? Do you just... not do that? Just throw every error into a snackbar? Ok, are you capable of updating the local cache with the response from createUser, such that another call to the page-rendering API isn't necessary?

This doesn't feel as good to me as the rest of this line of thinking, which is already not great. It feels like "we had this really cool idea to render our entire frontend in JSON, oh crap we need to support interaction patterns other than queries, uh, how do we shoehorn that into our cool idea, ok, hold my beer this is cute."

Look; there's a reason why data and views are separate. This idea isn't new. I hope you realize that, but I'm afraid you don't; our industry does tend to revolve in cycles, but this is one that shouldn't come back.

There are really specific use-cases for something like this which are actually powerful, from my point of view. The example from our app is, best put: imagine a Google search results page. No interaction except simple hrefs. Maybe you have a ton of "Link" { "title" "subtitle" "body" } cards, maybe a "MapResult": { "latitude" "longitude" }, basically displaying a list of cards which may have different content. The frontend needs to know how to render each card kind, but the specifics of how it's rendered are kept presentation-independent; nothing is asserted about structure, just content and relationships between the content. It can work for that. It still has problems, especially as the business wants more and more things shoved into a model that really wasn't built for them, but it can work.

I know it's a meme, or the buzzword of the day, or whatever, but: GraphQL is actually really good at solving the problems you think this solves. It doesn't do it for free. You have to think about N+1s and composite queries, and performance around those, and designing a good schema. But from the perspective of just the API language you're talking, it's actually pretty good. And if you're really having performance problems on a page, you can always amp up your API caching, or introduce page-specific queries which collate data on the backend into a small number of database queries, or whatever.

> if you have a person's profile picture, and you want to round it, should that be a property on the JSON structure?

You can decide what to leave to the frontend on a case-by-case basis. I would say that unless people can choose whether a picture is rounded and it's saved in the database, this is valid to leave up to the frontend.

> Maybe its text weight: How generic should it be? Very-generic: "title". Kinda-generic: "heavy". Or literally just CSS.

If text has sections that are formatted differently, backend could provide content for those sections separately. Styles are up to frontend.

> Does the contract allow for lee-way in how its responses are interpreted? How much lee-way?

Structure and content on the backend, presentation on the front-end.

> designers want it to become a bottom bar. No problem, except, uh, the API says it should be a sidebar

Call it something a little more encompassing, like "secondaryNavBar".

> you just doubled the work of a visual redesign. If the data model were generic, your backend team may not have even had to have been consulted.

There's a fair criticism in here, however the work is not doubled. 1. You can make some redesigns without changing field names, then slowly align JSON keys. The damage from this concession is contained in each page individually. That said, updating the JSON keys as you go should not be much more work than the redesign itself. 2. Should we be optimizing for a rare redesign when instead we could optimize for ongoing maintenance? 3. How likely is it that a redesign doesn't need anything new from the backend anyway? 4. How likely is it that a redesign won't introduce new N+1 problems?

> Do you have something like { "form": [ { "formTextField": { "id": "firstName" } } ... { "formSubmitButton": { "endpoint": "/createUser", "verb": "POST", "arguments": { "id": "firstName" } ... } } ] }

To render the form, you don't need to send the form to the frontend. The frontend can render the form by itself; you just send it any pre-filled values. The form's action/method are also okay to include.

> What if one page that calls /createUser needs to display the error in a snackbar, but another page needs to display it in-line with the button? Do you just... not do that? Just throw every error into a snackbar?

Just because you mostly render full pages, doesn't mean it's illegal to have endpoints that give you snippets of data in response. Your createUser endpoint can respond with just a list of errors. It wouldn't be bad for mutating endpoints to do that, the benefits of this approach still remain.
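One way that could look with a page-level cache; the endpoint shape and helper name are invented for illustration, not something from the thread. The mutation returns just a record or a list of errors, and the frontend patches its cached page JSON instead of refetching the whole page:

```javascript
// Cached page payload, as originally served by the page endpoint.
const pageCache = {
  users: [{ id: 1, name: "Ada" }],
  errors: [],
};

// Apply a mutation response to the cache: either record the errors,
// or merge the newly created record in, without refetching the page.
function applyCreateUserResponse(cache, response) {
  if (response.errors.length > 0) {
    return { ...cache, errors: response.errors };
  }
  return { ...cache, errors: [], users: [...cache.users, response.user] };
}
```

Each page then stays free to render those errors however it likes, snackbar or inline.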

I've addressed the points you've made, but it seems as though they allowed you to build up a strawman, and the rest of the comment is knocking that strawman down. That's not what's being proposed here. It's okay, maybe you are arguing against specifically what you have at your work. I could help adjust your framing if you'd like, let me know.

That's reasonable criticism of my criticism.

I would defend myself by saying that it feels like the opposite is also true. I built up a strawman and struck it down, but the recommendations in the original blog post, and here, are defined in extremely abstract terms, then defended with "you can do it however you want, there's no rules" such that every team who implements it will do it differently, and inevitably most will spend years landing on ten bad ways of doing it, never reaching the nirvana of what was promised in the abstraction.

I do like the idea; but in a limited capacity, and I'd caution teams from planning an entire application around it. I'd stick to fragments of pages which are highly data-driven, and be more cautious around layout and structure.

In case anyone else feels that this article leaves enough wiggle room to accidentally end up with something like 015a described, here are the 3 rules I implied in answering the above post:

1. All structure and content go to the backend.

2. Find good names for size-adaptive sections.

3. Responses to write requests can be anything.

I think 1 and 3 are covered in the article, but elaborating on them is probably out of its scope. And 2 is more of a general programming advice.

tl;dr YAGNI
