mikecaulley's comments

This doesn’t consider compute cost; the RAG model is much more efficient compared to infinite context length.


Agreed. I think that RAG implemented via tool-calling, with multiple agents talking to each other, is a much more likely future evolution than a single unified model.

I could very well be wrong! But we wouldn't want LLMs to be performing lots of arithmetic calculations by exploiting hidden parts of themselves that do linear regression or whatever; far better to just give them a calculator and get results faster and cheaper. Similarly, we can give them a search engine (RAG) and let them figure it out more efficiently.
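To make that concrete, here's a rough TypeScript sketch of the tool-calling loop I have in mind. The model client, its response shape, and the tool endpoints are all made up for illustration, not any particular vendor's API:

    // Rough sketch only: `Model` stands in for whatever LLM API you use, and the
    // response shape (toolCall/answer) is an assumption, not a real vendor schema.
    type ToolCall = { name: "search" | "calculator"; input: string };
    type ModelTurn = { toolCall?: ToolCall; answer?: string };
    type Model = (messages: string[]) => Promise<ModelTurn>;

    async function searchTool(query: string): Promise<string> {
      // Hypothetical retrieval endpoint; swap in your own index or search engine.
      const res = await fetch(`https://example.com/search?q=${encodeURIComponent(query)}`);
      return res.text();
    }

    function calculatorTool(expr: string): string {
      // Toy evaluator for "a + b" input; a real tool would parse expressions properly.
      const [a, b] = expr.split("+").map(Number);
      return String(a + b);
    }

    async function answer(model: Model, question: string): Promise<string> {
      const messages = [question];
      for (let i = 0; i < 5; i++) {                             // cap the tool-use loop
        const turn = await model(messages);
        if (turn.answer) return turn.answer;                    // model answered directly
        if (turn.toolCall?.name === "search") {
          messages.push(await searchTool(turn.toolCall.input)); // retrieved context
        } else if (turn.toolCall?.name === "calculator") {
          messages.push(calculatorTool(turn.toolCall.input));   // exact arithmetic
        }
      }
      return "No answer within the tool budget.";
    }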


I’m curious to hear what are some of the most frustrating points developing with Django, React, and GraphQL.


Same here, maybe we can improve things!


Thanks for the great comment. I often see GraphQL get a bad rap in comments on this site but most of the time it seems to come from a misunderstanding of how to use it.


How you model the data at rest does not need to match your GraphQL schema. The GraphQL types should look similar to your REST entities, but with traversable edges to other entities.
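A rough sketch of what I mean (hypothetical types): the schema exposes traversable edges even if the underlying rows live in different tables or services.

    // Hypothetical schema: entities resemble REST resources, but Author.posts and
    // Post.author are edges resolved on demand, independent of how data is stored.
    const typeDefs = /* GraphQL */ `
      type Author {
        id: ID!
        name: String!
        posts: [Post!]!   # edge: author -> posts
      }

      type Post {
        id: ID!
        title: String!
        author: Author!   # edge: post -> author
      }

      type Query {
        post(id: ID!): Post
      }
    `;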


I've found GraphQL to greatly improve the developer experience when building web apps. As a front-end developer you can easily get a full view of the data and relationships. Tools like GraphiQL make exploring APIs a pleasure. And regardless of what you need to present to the user on screen, you can quickly build a request that perfectly matches the data you need. There is also the nice addition of strict typing and there are libraries that automatically generate TypeScript types for you from the schema and/or your operations.
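As a concrete example, an operation and the TypeScript types a codegen tool would roughly emit for it could look like this (the schema here is hypothetical):

    // A query that asks for exactly the fields the screen needs (hypothetical schema).
    const POST_CARD_QUERY = /* GraphQL */ `
      query PostCard($id: ID!) {
        post(id: $id) {
          title
          author { name }
        }
      }
    `;

    // Roughly what a codegen tool would emit for that operation.
    interface PostCardQuery {
      post: {
        title: string;
        author: { name: string };
      } | null;
    }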


I do agree that GraphQL has improved developer experience, or at the very least improved the perceived experience (which is essentially the same as actually improving the experience). I cannot deny its popularity, and that comes from hype along with usefulness (perceived or actual).

> Tools like GraphiQL make exploring APIs a pleasure.

This is an innovation of the developers of GraphiQL, not of GraphQL itself. It's possible to build IDE support similar to that of GraphiQL with the rich metadata and schema data that is available by design in OpenAPI + JSONSchema + JSON-LD + HATEOAS land. The problem is that no one did, or was excited enough to; the slog from Swagger2 to OpenAPI3 (or the emergence of GraphQL) might have sapped the community's enthusiasm just enough. But it's not that the tooling isn't possible with other approaches.

> There is also the nice addition of strict typing and there are libraries that automatically generate TypeScript types for you from the schema and/or your operations.

This was already present with OpenAPI and related tools, so I personally don't put this as something that GraphQL brought about.


I don't want to piece together 4 different technologies to get the same thing.

And it's not even the same thing. You keep mentioning HATEOAS but that doesn't accomplish the same things as graphql at all. With HATEOAS I need to send out multiple requests to collect the data I need. With graphql I can send out one request to retrieve exactly the data I need. I can also combine multiple requests into a single request. GraphQL also only retrieves the fields you need by default. This is something you'd need to build manually with REST.

And yes, reducing the number of server requests does matter when you're trying to optimize for the largest market possible. Including people on slow mobile network connections where every request adds significant overhead.
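To illustrate, one operation like the following replaces what would otherwise be several REST round trips. The field names are made up, not from any real schema:

    // One round trip where REST would typically need /me, /me/notifications and /posts/:id.
    const DASHBOARD_QUERY = /* GraphQL */ `
      query Dashboard($postId: ID!) {
        me {
          name
          notifications(unreadOnly: true) { id message }
        }
        post(id: $postId) {
          title
          comments(first: 5) { body }
        }
      }
    `;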


> I don't want to piece together 4 different technologies to get the same thing.

> And it's not even the same thing. You keep mentioning HATEOAS but that doesn't accomplish the same things as graphql at all. With HATEOAS I need to send out multiple requests to collect the data I need. With graphql I can send out one request to retrieve exactly the data I need. I can also combine multiple requests into a single request. GraphQL also only retrieves the fields you need by default. This is something you'd need to build manually with REST.

All of this is reasonable, but my point was that we've abandoned a more flexible approach, one with composable functionality and agreed-upon standards, in favor of GraphQL. You gained some pipelining and some vertical filtering, but threw a lot out with the bathwater.

GraphQL does give you a way to do these things, and this is why it is popular (or at least a good reason why), but it is missing the wider possibilities of the other ecosystem. As people start to try and abstract over it they will be abstracting over a less robust, considered, standardized base.

> And yes, reducing the number of server requests does matter when you're trying to optimize for the largest market possible. Including people on slow mobile network connections where every request adds significant overhead.

Reducing the number of server requests does indeed matter; I did not mean to suggest otherwise, and reducing requests was already somewhat solved by the backend-for-frontend approach. There are lots of approaches to help people on slow mobile network connections, but the ultimate one is to server-render and trim as much as possible. I'm not sure GraphQL is much better at this than REST with efficient endpoint choice.
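For concreteness, this is roughly what I mean by a backend-for-frontend endpoint; the upstream services and response shapes are placeholders:

    import express from "express";

    const app = express();

    // One endpoint shaped for a specific screen; the upstream URLs are placeholders.
    app.get("/bff/dashboard/:userId", async (req, res) => {
      const { userId } = req.params;
      // Fan out to internal services on the server, where latency is cheap...
      const [user, notifications] = await Promise.all([
        fetch(`http://users.internal/users/${userId}`).then((r) => r.json()),
        fetch(`http://notify.internal/users/${userId}/notifications`).then((r) => r.json()),
      ]);
      // ...and return exactly what this screen needs in a single response.
      res.json({ name: user.name, notifications });
    });

    app.listen(3000);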


> You gained some pipelining, and some vertical filtering, but threw out a lot with the bathwater.

What are we missing that actually matters in practice? And yes, if you minimize the problems that graphql was built to solve it seems like a worse solution.

> As people start to try and abstract over it they will be abstracting over a less robust, considered, standardized base.

Facebook, a company whose apps serve billions of people, came up with graphql to solve real problems. Saying it isn't well considered is asinine. As for standardization, last time I checked GraphQL has a spec and the GraphQL Foundation is a member of the Linux Foundation.

Again, what problems are you talking about that graphql doesn't solve? I mean actual, practical problems.

> reducing request was somewhat solved before now with the backend-for-frontend approach.

So now I need to write a backend for each use-case. That sounds better. Or I need to make my endpoints extra configurable which starts to approach graphql territory.

> the ultimate one is to server-render and trim as much as possible

Again, we're now entering the "why graphql was invented" territory. Sure I could add these to my REST endpoints.. OR I could write one graphql endpoint and get all of these.


> What are we missing that actually matters in practice? And yes, if you minimize the problems that graphql was built to solve it seems like a worse solution.

> Again, what problems are you talking about that graphql doesn't solve? I mean actual, practical problems.

If you're using GraphQL and feel that it solves all your problems and isn't painting you into a corner, you are free to continue to use it and reap the efficiency rewards. I'm not here to convert you to REST or any other RPC methodology. One of the things that was lost with adopting GraphQL is partial responses -- being able to send part of the answer back before the entire request is complete. I'm not sure exactly what streaming looks like in GraphQL but do they have anything as simple and useful as SSE?
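For comparison, partial responses over SSE are about this simple on the server; a rough Express sketch with placeholder stages and payloads:

    import express from "express";

    const app = express();

    // Minimal SSE endpoint: flush each piece of the answer as soon as it is ready.
    app.get("/report", async (_req, res) => {
      res.setHeader("Content-Type", "text/event-stream");
      res.setHeader("Cache-Control", "no-cache");

      for (const part of ["header", "summary", "details"]) {     // placeholder stages
        const data = await computePart(part);                    // stand-in for real work
        res.write(`event: partial\ndata: ${JSON.stringify(data)}\n\n`);
      }
      res.write("event: done\ndata: {}\n\n");
      res.end();
    });

    async function computePart(name: string) {
      return { part: name, value: Math.random() };               // stand-in computation
    }

    app.listen(3000);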

> Facebook, a company who's apps serve billions of people came up with graphql to solve real problems. Saying it isn't well considered is asinine. As for standardization, last time I checked GraphQL has a spec and the GraphQL Foundation is a member of the Linux Foundation.

I did not mean to imply that GraphQL was not considered, just that it is less considered than the alternatives that existed, and certainly less battle-tested. Just because something has a spec does not mean the spec is good and well reviewed (though again, I'm sure GraphQL's has been looked at by smart people for long periods of time). I don't know what to make of the GraphQL Foundation being a member of the Linux Foundation.

> So now I need to write a backend for each use-case. That sounds better. Or I need to make my endpoints extra configurable which starts to approach graphql territory.

It's not that writing robust endpoints is approaching GraphQL territory; it's that GraphQL territory is bootstrapping robust endpoints for you, but it also locks you into its way of viewing the world. I prefer to use interoperable smaller tools rather than doing this, and maybe I'm losing out because of that, but I'd take the complexity of one or two or five robust endpoints over adoption of the entire GraphQL spec & ecosystem for what I'd consider minimal gain. To each their own.

> Again, we're now entering the "why graphql was invented" territory. Sure I could add these to my REST endpoints.. OR I could write one graphql endpoint and get all of these.

No, that was me pointing out that if you're trying to help low-bandwidth clients, that's how you definitively do it; GraphQL is not the right solution in that case. You try your best not to make any calls from the frontend at all, so the point seems moot there.


Also, it can potentially reduce the client-side logic needed to stitch said 4 requests together.

This can be a big complexity/bug preventer for multi-client apps (web, mobile, desktop).


To what extent are you just pushing complexity to the back-end developers?

> Tools like GraphiQL make exploring APIs a pleasure.

No argument there. Is there something similar for traditional REST? For some reason, I thought the point of HATEOAS was to make that kind of exploration possible.


GraphQL cannot ever be explored in the same way HATEOAS can because it is missing the linking part (and other related technology like JSON-LD). That said, GraphQL does offer introspection[0], which is not a bad start but lacks the breadth and depth of the hyperlink-based approaches.

[0]: https://graphql.org/learn/introspection/
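For the curious, hitting introspection from a client looks roughly like this. The /graphql endpoint path is just the common convention, not guaranteed for any given server:

    // Standard introspection query; where you POST it is server-specific.
    const INTROSPECTION = /* GraphQL */ `
      {
        __schema {
          queryType { name }
          types { name kind }
        }
      }
    `;

    async function listTypes(endpoint: string): Promise<string[]> {
      const res = await fetch(endpoint, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ query: INTROSPECTION }),
      });
      const { data } = await res.json();
      return data.__schema.types.map((t: { name: string }) => t.name);
    }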


I 100% agree with your statement. I also realize you are not championing HATEOAS here necessarily.

That said, I am curious if you have found this "linking" aspect of HATEOAS useful for actual implementations? I have been doing integration work with a system that strictly adheres to "REST level 3" and "HATEOAS" principles for the past few years, and I myself have found the "explore-ability" of the API super handy. That said, the self-documenting nature only goes so far, and in the end I'm not sure the internal linking stuff is preferable to robust documentation.

I'm not trying to be down on this necessarily, I actually generally push back on the adoption of GraphQL as the answer to every problem.


> That said, I am curious if you have found this "linking" aspect of HATEOAS useful for actual implementations? I have been doing integration work with a system that strictly adheres to "REST level 3" and "HATEOAS" principles for the past few years, and I myself have found the "explore-ability" of the API super handy. That said, the self-documenting nature only goes so far, and in the end I'm not sure the internal linking stuff is preferable to robust documentation.

You're right, this is a reasonable question and the answer is uncomfortable. I personally find it useful when paired with OpenAPI (generally by using annotations on controllers and models), but it is indeed rare to have use cases that fit the linking functionality well enough to be significantly better than what you would get from just good documentation.
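To make the linking part concrete, a HAL-style response looks something like this (the resource, fields, and URLs are illustrative, not from a real API):

    // HAL-style representation of an order; the client discovers what it can do
    // next by following links rather than hard-coding URLs.
    const orderResponse = {
      id: "order-42",
      status: "processing",
      _links: {
        self:     { href: "/orders/order-42" },
        customer: { href: "/customers/7" },            // traversable edge
        items:    { href: "/orders/order-42/items" },  // follow to fetch line items
        cancel:   { href: "/orders/order-42/cancel" }, // only present while cancellable
      },
    };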

The "killer app" of this space for me personally is a Django Admin[0][1] (or React-Admin[2]) clone that is 100% client-side automated. I don't have a demo yet, but once I do it'll be up on HN.

> I'm not trying to be down on this necessarily, I actually generally push back on the adoption of GraphQL as the answer to every problem.

Please, feel free to push back, that's what discussion is for, and ideas that can't stand up to push back probably shouldn't be adopted.

[0]: https://docs.djangoproject.com/en/3.1/ref/contrib/admin/#

[1]: https://djangobook.com/mdj2-django-admin/

[2]: https://github.com/marmelab/react-admin


I would like a backend developer perspective on the same.

I haven't done any GraphQL stuff myself, but the "I can get whatever data I need" aspect feels like a huge potential headache for backend devs. Won't be a problem at prototype scale, but once you have a significant client count, how do you deal with unpredictable data access patterns that can't be optimized ahead of time?


Generally speaking, it's not more difficult than creating a REST API controller. Rather than mapping your service code to a controller, you map it to a (GraphQL) resolver. It feels extremely similar.

There are different things to look out for, like handling N+1 queries (https://shopify.engineering/solving-the-n-1-problem-for-grap...), but nothing that I can say is too difficult.
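For what it's worth, the usual shape of the fix is batching with something like DataLoader. A minimal sketch; the Author type and the in-memory "table" are stand-ins for real data access:

    import DataLoader from "dataloader";

    interface Author { id: string; name: string }

    // Stand-in for a real database query that fetches many authors in one go.
    const authorsTable: Author[] = [
      { id: "1", name: "Ada" },
      { id: "2", name: "Grace" },
    ];
    async function fetchAuthorsByIds(ids: readonly string[]): Promise<Author[]> {
      return authorsTable.filter((a) => ids.includes(a.id));   // e.g. WHERE id IN (...)
    }

    // DataLoader coalesces every .load() made in the same tick into one batch call.
    const authorLoader = new DataLoader<string, Author>(async (ids) => {
      const rows = await fetchAuthorsByIds(ids);
      const byId = new Map(rows.map((a) => [a.id, a]));
      return ids.map((id) => byId.get(id) ?? new Error(`missing author ${id}`));
    });

    // Resolver for Post.author: resolving 50 posts still issues one batched lookup.
    const resolvers = {
      Post: {
        author: (post: { authorId: string }) => authorLoader.load(post.authorId),
      },
    };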


It is more difficult. With REST you know ahead of time which data you need, and you can optimise it as much as you can. With GraphQL at any given point in time you have no idea what the request is. Dataloaders solve it to some extent, but not much.


Of course, you are building something that is much more powerful. If you intend every possible GraphQL operation to perform perfectly, it'll take more development effort. Though if you optimize for only the most likely cases, the effort is not much greater.


> Though if you optimize for only the most likely cases, the effort is not much greater.

That's why most discussions involving complexities of GraphQL inevitably devolve into: "in production we only allow a subset of queries". Because actually implementing GraphQL as it's specified and marketed is quite an undertaking.
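One common shape of that "subset of queries" approach is a persisted-query allowlist: operations are registered at build time and clients send only a hash, with everything else rejected. A rough sketch (the hashing and storage are illustrative):

    import { createHash } from "node:crypto";

    // Queries registered at build time; anything else is rejected at the edge.
    const ALLOWED_QUERIES = new Map<string, string>();

    function registerQuery(query: string): string {
      const hash = createHash("sha256").update(query).digest("hex");
      ALLOWED_QUERIES.set(hash, query);
      return hash;
    }

    // At request time the client sends only the hash (persisted-query style).
    function resolveQuery(hash: string): string {
      const query = ALLOWED_QUERIES.get(hash);
      if (!query) throw new Error("Unknown query: not in the production allowlist");
      return query;
    }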


Inevitably you do get a map of your data flow. GraphQL can be a huge help here, because you must have resolvers for the queries. It forces you to think ahead of time about how you're going to get the data to fulfill the API "contract", if you will. You can optimize that. GraphQL is great for describing your APIs in a type-safe(ish) way where you get great decoupling by design.

I will also say that since a query can use multiple resolvers, that's where I have found it really shines. The model implicitly works best when you have async systems rather than synchronous ones for more complex queries (multithreaded/multiprocess).

That's my anecdote, as someone who is currently and actively building and maintaining GraphQL backend services.

It's true you can get the same with any API design pattern, really. However, GraphQL has specifications for all of this, and I think that's what makes it more powerful.

Another nice thing is no versioning. I can just use the `@deprecated` built-in directive, and when the usage of a deprecated part of an API stays consistently at 0 (for a specified period of time), I can just remove it entirely.
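For example, a deprecation in the SDL looks something like this (field names made up):

    // Hypothetical schema: the old field stays queryable but is flagged for clients
    // and tooling; once usage reaches zero it can be deleted without a version bump.
    const typeDefs = /* GraphQL */ `
      type User {
        id: ID!
        fullName: String!
        name: String @deprecated(reason: "Use fullName instead.")
      }
    `;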

OpenAPI and the like don't have a descriptive way to notify users of an API of this; you, for better or worse, have to version your API, which often ends up in situations where an old version sticks around for a very long time.

Like all technologies though, it can (and does) have its issues. It adds a certain amount of complexity to your applications (more so for the client, IMO, even though exploring APIs is a huge upside with tools like GraphiQL).


I don't have especially strong feelings either way, but the majority of this is possible with OpenAPI as well.


Let me look into what effort it would take to clean up the code and put it on GitHub. I'd be happy if it helped others starting a project. It heavily utilizes ReSwift and SnapKit, two libraries I wanted to gain some more experience with.


Would definitely appreciate it. Ooh nice, I would be keen to see how you implemented those libraries.

I’d love to create a few little ideas using the base concept.

