Hi HN, Harrison (CEO/co-founder of LangChain) here. I wanted to chime in briefly.
I appreciate Fabian and the Octomind team sharing their experience in a level-headed and precise way. I don't think this is trying to be click-baity at all, which I appreciate. I want to share a bit about how we are thinking about things, because I think it aligns with some of the points here (although this may be worth a longer post).
> But frameworks are typically designed for enforcing structure based on well-established patterns of usage - something LLM-powered applications don’t yet have.
I think this is the key point. I agree with their sentiment that frameworks are useful when there are clear patterns. I also agree that this is a super early, super fast-moving field.
The initial version of LangChain was pretty high level and absolutely abstracted away too much. We're moving more and more to low level abstractions, while also trying to figure out what some of these high level patterns are.
For moving to lower-level abstractions - we're investing a lot in LangGraph (and hearing very good feedback). It's a very low-level, controllable framework for building agentic applications. All nodes and edges are just Python functions, and you can use it with or without LangChain. It's intended to replace the LangChain AgentExecutor (which, as they noted, was opaque).
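To make the "nodes and edges are just Python functions" idea concrete, here is a toy graph executor in plain stdlib Python. This is an illustrative sketch, not LangGraph's actual API; the node names and state shape are made up:

```python
from typing import Callable, Dict

# A node is just a function: state in, updated state out.
Node = Callable[[dict], dict]

def run_graph(nodes: Dict[str, Node], edges: Dict[str, str],
              entry: str, state: dict) -> dict:
    """Walk the graph from `entry`, applying each node to the state,
    until a node has no outgoing edge."""
    current = entry
    while current is not None:
        state = nodes[current](state)
        current = edges.get(current)  # None terminates the walk
    return state

# Hypothetical two-node pipeline: draft an answer, then refine it.
nodes = {
    "draft":  lambda s: {**s, "answer": f"draft for: {s['question']}"},
    "refine": lambda s: {**s, "answer": s["answer"].upper()},
}
edges = {"draft": "refine"}

result = run_graph(nodes, edges, "draft", {"question": "what is 2+2?"})
```

Because every node is an ordinary function, control flow stays debuggable with plain print statements and a debugger, which is the opacity complaint the AgentExecutor drew.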
I think there are a few patterns emerging, and we're trying to invest heavily there. Generating structured output and tool calling are two of them, and we're working to standardize our interfaces for both.
Again, this is probably a longer discussion but I just wanted to share some of the directions we're taking to address some of the valid criticisms here. Happy to answer any questions!
Thanks Harrison. LangGraph (e.g. graph theory + NetworkX) is the correct implementation of multi-agent frameworks, though it looks further ahead, anticipating a future beyond where most GPT/agent deployments are today.
And while structured output and tool calling are good, from client feedback I'm seeing more of a need for different types of composable agents other than the default ReAct, which has distinct limitations and performs poorly in many scenarios. Reflection/Reflexion are really good, and ReWOO or plan-and-execute as well.
Totally agree. We've opted to keep LangGraph very low level and not add these higher-level abstractions. We do have examples for them in the notebooks, but haven't moved them into the core library. Maybe at some point (if things stabilize) we will. I would argue the ReAct architecture is the only stable one at the moment. Planning and reflection are GREAT techniques to bring into your custom agent, but I don't think there's a great generic implementation of them yet.
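For readers unfamiliar with the distinction: unlike ReAct, which interleaves reasoning and acting one step at a time, plan-and-execute produces the whole plan up front and then runs it. A minimal sketch, where `plan` and `execute_step` are made-up stand-ins for LLM/tool calls:

```python
from typing import List

def plan(goal: str) -> List[str]:
    # Stand-in for an LLM planning call: decompose the goal up front.
    return [f"step {i}: {part.strip()}"
            for i, part in enumerate(goal.split(","), 1)]

def execute_step(step: str) -> str:
    # Stand-in for an LLM/tool execution call.
    return f"done({step})"

def plan_and_execute(goal: str) -> List[str]:
    """The full plan exists before any step runs; a real implementation
    would also replan when a step fails or returns surprising results."""
    return [execute_step(s) for s in plan(goal)]

results = plan_and_execute("fetch data, summarize it")
```

The replanning-on-failure part is exactly where a generic implementation gets hard, which supports the point above that there is no great one yet.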
After using LangGraph for a month, every single "graph" I built was the same single solution. The idea is cool, but it isn't solving the right problem... (and the problem statement shouldn't be generating buzz on Twitter; sorry to be harsh).
You could borrow some ideas from DSPy (which borrows from PyTorch): their `Module` class with a `forward` method, chaining LM objects that way. LangGraph sounds cool, but it's a very fancy and limited version of basic conditional statements like switch/if that are already built into languages.
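The PyTorch-style pattern alluded to here looks roughly like the following. This is a toy imitation, not DSPy's real API; `EchoLM` is a made-up stand-in for a language model:

```python
class Module:
    """Composable unit in the PyTorch style: subclasses define forward(),
    and calling the instance dispatches to it."""
    def __call__(self, *args):
        return self.forward(*args)

class EchoLM(Module):
    def forward(self, prompt: str) -> str:
        return f"lm({prompt})"  # a real module would call an LLM here

class SummarizeThenTranslate(Module):
    """Chaining is just ordinary Python: call one module inside another,
    with plain if/for statements for any branching."""
    def __init__(self):
        self.lm = EchoLM()

    def forward(self, text: str) -> str:
        summary = self.lm(f"summarize: {text}")
        return self.lm(f"translate: {summary}")

out = SummarizeThenTranslate()("hello")
```

The appeal of this style is exactly the comment's point: composition uses language constructs you already know, rather than a bespoke graph DSL.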
I appreciate that you're taking feedback seriously, and it sounds like you're making some good changes.
But frankly, all my goodwill was burnt up in the days I spent trying to make LangChain work, and the number of posts I've seen like this one makes it clear I'm not the only one. The changes you've made might be awesome, but they also mean NEW abstractions to learn, and "fool me once..." comes to mind.
But if you're sure it's in a much better place now, then for marketing purposes you might be better off relaunching as LangChain2, intentionally distancing the project from earlier versions.
Sorry to hear that. Totally understand feeling burnt.
Out of curiosity - do you think there's anything we could do to change that? That's one of the biggest things we're wrestling with (aside from completely distancing ourselves from the LangChain project).
My advice is to focus less on the “chaining” aspect and more on the “provider agnostic” part. That’s the real reason people use something other than the native SDK of an LLM provider - they want to be able to swap out LLMs. That’s a well-defined problem that you can solve with a straightforward library. There’s still a lot of hidden work because you need to nail the “least common denominator” of the interfaces while retaining specialized behavior of each provider. But it’s not a leaky abstraction.
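The "least common denominator" interface described here might look like the sketch below. The class and method names are invented for illustration, and the fake providers stand in for real vendor SDK calls:

```python
from abc import ABC, abstractmethod

class ChatClient(ABC):
    """Common-denominator interface: every provider adapter implements
    complete(), hiding each vendor SDK's request/response shape."""
    @abstractmethod
    def complete(self, prompt: str, temperature: float = 0.0) -> str: ...

class FakeProviderA(ChatClient):
    def complete(self, prompt, temperature=0.0):
        return f"A:{prompt}"   # would call provider A's SDK here

class FakeProviderB(ChatClient):
    def complete(self, prompt, temperature=0.0):
        return f"B:{prompt}"   # would call provider B's SDK here

def answer(client: ChatClient, question: str) -> str:
    # Application code depends only on the interface, so swapping
    # providers is a one-line change at construction time.
    return client.complete(question)
</antml>```

The hidden work lands in the parameters: `temperature` maps cleanly across providers, but provider-specific features (tool schemas, logprobs, etc.) force a choice between widening the interface and losing the common denominator.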
The “chaining” part is a huge problem space where the proper solution looks different in every context. It’s all the problems of templating engines, ETL scripts and workflow orchestration. (Actually I’ve had a pet idea for a while, of implementing a custom react renderer for “JSX for LLMs”). Stay away from that.
My other advice would be to build a lot of these small libraries… take advantage of your resources to iterate quickly on different ideas and see which ones stick. Then go deep on those. What you’re doing now is doubling down on your first success, even though it might not be the best solution to the problem (or it might be a solution looking for a problem).
> My advice is to focus less on the “chaining” aspect and more on the “provider agnostic” part
A lot of our effort recently has gone into standardizing model wrappers, including for tool calling, images, etc. This will continue to be a huge focus.
> My other advice would be to build a lot of these small libraries… take advantage of your resources to iterate quickly on different ideas and see which ones stick. Then go deep on those. What you’re doing now is doubling down on your first success, even though it might not be the best solution to the problem (or it might be a solution looking for a problem).
I would actually argue we have done this (to some extent). We've invested a lot in LangSmith (about half our team), making it usable with or without LangChain. Likewise, we're investing more and more in LangGraph, also usable with or without LangChain (that's in the orchestration space, which you're separately not bullish on, but for us that was a separate bet from LangChain's orchestration).
Separating into smaller libraries is a smart move. And yeah, like you said, I might be bearish on the orchestration space, but at least you can insulate it from the rest of your projects.
Best of luck to you. I don’t agree with the disparaging tone of the comments here. You executed quickly and that’s the hardest part. I wouldn’t bet against you, as long as you can keep iterating at the same pace that got you over the initial hurdles.
Your funding gives you the competitive advantage of “elbow grease,” which is significant when tackling problems like N-M ETL pipelines. But don’t get stuck focusing on solving every new corner case of these problems. Look for opportunities to be nimble, and cast a wide net so you can find them.
I agree. Adopting a more modular approach is a great idea. Coming from the Java ecosystem, I still miss having something like the Spring framework in Python. I believe Spring remains an example of excellent framework design. Let me explain what I mean.
Using Spring requires adopting Spring IoC, but beyond that, everything is modular. You can choose to use only the abstractions you need, such as ORM, messaging, caching, and so on. At its core, Spring IoC is used to loosely integrate these components. Later on, they introduced Spring Boot and Spring Cloud, which are distributions of various Spring modules, offering an opinionated application programming model that simplifies getting started.
This strategy allows users the flexibility to selectively use the components they need while also providing an opinionated programming model that saves time and effort when starting a new project.
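As a toy analogue of that pattern, registry-based dependency injection can be sketched in a few lines of Python. The `Container` API and component names here are invented for illustration and are not Spring's actual model:

```python
class Container:
    """Toy IoC container: components register factories, and the
    container wires dependencies together at lookup time."""
    def __init__(self):
        self._factories = {}

    def register(self, name, factory):
        self._factories[name] = factory

    def get(self, name):
        # The factory receives the container so it can pull in
        # whatever it depends on, without hard-coding those modules.
        return self._factories[name](self)

container = Container()
container.register("config", lambda c: {"db_url": "sqlite://"})
container.register("orm",    lambda c: f"orm({c.get('config')['db_url']})")
container.register("cache",  lambda c: "cache()")

# The application selects only the modules it needs:
orm = container.get("orm")
```

The point of the pattern is the same as in the Spring description above: the ORM component never imports the config module directly, so modules stay independently swappable.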
I'm not sure. My suspicion is that the fundamental issue with frameworks like LangChain is that the problem domain they are attempting to solve is a proper subset of the problems that LLMs also solve.
Good code abstractions make code more tractable, tending towards natural language as they get better. But LLMs are already at the natural language level. How can you usefully abstract that further?
I think there are plenty of LLM utilities to be made- libraries for calling models, setting parameters, templating prompts, etc. But I think anything that ultimately hides prompts behind code will create more friction than not.
They were early to the scene, made the decisions that made sense at each point in time. Initially I (like many other engineers with no AI exposure) didn't know enough to want to play around with the knobs too much. Now I do.
So the playing field has changed and is changing, and LangChain is adapting.
Isn't that a bit too extreme? Goodwill burnt up? When the field changes, there will be new abstractions - of course I'll have to understand them to decide for myself if they're optimal or not.
React has an abstraction. Svelte has something different. AlpineJS, another. Vanilla JS has none. Does that mean only one is right and the remaining are wrong?
I'd just understand them and pick what seems right for my usecase.
You seem to be implying all abstractions are equal and it's just use-case dependent. I disagree: some really are worse than others. In your webdev example, it would not be hard to contrive a framework designed to frustrate. This can also happen by accident. Sometimes bad products really do waste time.
In the case of LangChain, I think it was an earnest attempt, but a misguided one. So I'm grateful for LangChain's attempt, and its attempts to correct course, especially since it is free to use. But there are alternatives that I would rather give a shot first.
I don't think the choices made sense even back when they were made. LangChain always looked like an answer in search of a question, a collection of abstractions that don't do much except making a simple thing more complex.
We released this as a way to make it easier to get started with LLM applications. Specifically, we've heard that when people were using chains/agents they often wanted to see what exactly was going on inside, or change it in some way. This basically moves the logic for chains and agents into these templates (including prompts), which are just Python files you can run as part of your application, making it much easier to inspect or modify them.
Happy to answer any questions, and very open to feedback!
> we've heard that when people were using chains/agents they often wanted to see what exactly was going on inside, or change it in some way.
I certainly agree, but I'm having trouble seeing how templates help with this. The templates appear to be a consolidation of examples like those that were already emphasized in the current documentation. This is nice to have, but what does it do to elucidate the inner workings?
LangChain co-founder here. There's lots of good feedback here (that also resonates with previous feedback) that we're working hard to address. On some key points:
- We genuinely appreciate all the thoughtful criticism and feedback. Our goal is to make it as easy as possible to build LLM applications (both prototypes and production-ready applications), and if we're falling short in an area we'd much prefer to hear it rather than not. We don't have the bandwidth to respond to all feedback directly, but we do (1) appreciate it, and (2) try to address it as quickly as possible.
- Documentation: we've heard this for a while now, and have been working to improve it. In the past ~3 weeks we've revamped our doc structure, changed the reference guide style, and worked on improving docstrings for some of our more popular chains. However, there is still a lot of ground to cover, and we'll keep pushing. Feedback on which specific chains/components need better documentation is particularly helpful.
- Customizability: we need to make it easy to customize prompts, chains, and agents. We're thinking of changes to more easily enable this - better documentation, more modular components. We'll up the priority of this.
- Other tooling: there are general difficulties in building LLM applications that aren't strictly related to langchain, such as debugging and testing. We're working on building separate tooling to assist with this that we hope to launch soon.
I think Langchain is like democracy, everyone complains about it and tries to poke holes in it but it is clearly better than all the alternatives.
Once I got "into" LangChain and how it did things, my life as a developer got infinitely easier. It is true that it is doing a lot of things that you "could" do elsewhere, but that is kind of the point of a library. For example, it makes it incredibly easy to switch between vector datastores or embeddings, with just a tiny code change. I love that.
Look at how much code it takes to actually get something done. It makes it trivial to take a file (or a number of files), chunk them, and load them into a vector store. Sure, I could write and maintain the code to do that, but why?
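Even the chunking step alone is fiddlier than it looks. A naive fixed-size character chunker with overlap, sketched without the library (function name and parameters are made up; a real splitter respects token counts and separators):

```python
def chunk_text(text: str, size: int = 200, overlap: int = 20) -> list:
    """Split text into fixed-size character chunks with overlap, so a
    sentence cut at one boundary still appears whole in some chunk."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step back by `overlap` each time
    return chunks

docs = chunk_text("x" * 500, size=200, overlap=20)
# Each chunk would then be embedded and upserted into a vector store.
```

This is the sort of glue code the comment is arguing is better maintained once, inside a library, than copy-pasted into every project.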
While I did find it challenging to get started with Langchain, it was more a lack of understanding of the ecosystem than anything else. Great abstractions aren't going to shield me from that without restricting choice. The documentation has improved noticeably in the last few weeks.
Great work, it is very much appreciated by the non-HN crowd. Don't let this feedback get you down.