Hacker News
The Problem with LangChain (minimaxir.com)
268 points by minimaxir 10 months ago | 92 comments



I'm in the exact same spot as the author just a few days in instead of months.

Frankly, I could tell LangChain is garbage software just by looking at the code.

It still helps me get shit done fast while figuring out how things are supposed to work. Sort of a cookbook of AI recipes. Once I have an approach narrowed down I'll rewrite everything on top of the stuff LangChain is supposedly wrapping. For now it's faster than tracking down individual libraries and learning their APIs. It will stay in notebooks, basically.


Yes! This is why I started working on AIPL. The scripts are much more like recipes (linear, contained in a single file, self-evident even to people who don't know the language). For instance, here's a multi-level summarizer of a webpage: https://github.com/saulpw/aipl/blob/develop/examples/summari...

The goal is to capture all the knowledge that LangChain has into consistent legos that you can combine and parameterize with prompts, without all the complexity and boilerplate of LangChain, and without having to learn all the Python libraries and their APIs. Perfect for prototypes and experiments (like a notebook, as you suggest); then, if you find something that really works, you can hand off a single text file to an engineer and they can make it work in a production environment.


This looks pretty rad. It is very expressive.

After scoffing at your shameless plug, I read some of the code and was surprised to find it was not written in Perl. Maybe some other options for the acronym could be:

* LASI (Language Artificial Specific Intelligence)

* LLMML (Large Language Model Markup Language)

* LLMMA (Large Language Model Metalinguistic Abstraction [this one is a stretch])


You don't need to look at the code: I looked at the release notes and the garbage fire of unrelated nonsense getting added and got to skip even installing it. https://github.com/hwchase17/langchain/releases

At this point LangChain seems almost required to accept any PR. They raised money, and now their growth metric is GitHub stars.

A simple wrapper around the APIs (I used llamaflow, which is now llm-api https://github.com/dzhng/llm-api) and a templating engine is most of what you need.

AI is not at a point where generalist prompts to do agent/memory/search things are a good idea for a real product. You need to integrate procedural guidance unless you want your UX to be awful.
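For concreteness, here's a minimal sketch of that "wrapper plus templating engine" setup using only the standard library; `call_llm` is a hypothetical stand-in for whichever client (openai, llm-api, etc.) you actually use:

```python
from string import Template

# The "templating engine": string.Template from the standard library is enough.
SUMMARIZE = Template("Summarize the following $doc_type in $n bullet points:\n\n$text")

# Hypothetical stand-in for a real API client; swap in openai / llm-api here.
def call_llm(prompt: str) -> str:
    return f"<response to {len(prompt)} chars of prompt>"

# substitute() fails loudly on missing parameters, which is what you want.
prompt = SUMMARIZE.substitute(doc_type="article", n=3, text="LangChain wraps everything...")
answer = call_llm(prompt)
```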


Pfft. You looked at the release notes? I knew it was garbage so fast that I got warped back in time to 2021, and stopped using LangChain before it even existed.


+1, but I figured it out in an hour. LangChain seemed like a ridiculous overcomplication of what would otherwise be basic Python. I've been off to the races since I decided not to use it.


LLM calls are just function calls, so most functional composition is already afforded by any general-purpose language out there. If you need fancy stuff, use something like Python's functools.

Working on https://github.com/eth-sri/lmql (shameless plug, sorry), we have always found that compositional abstractions on top of LMQL are mostly there already, once you internalize prompts being functions.
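To illustrate the parent's point, composing LLM "chains" in plain Python needs nothing beyond functools; the two steps below are hypothetical stand-ins for real model calls:

```python
from functools import reduce

def compose(*fns):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    return lambda x: reduce(lambda acc, fn: fn(acc), reversed(fns), x)

# Hypothetical "chain" steps; each is just a str -> str function that would
# call a model in real use.
def draft(topic: str) -> str:
    return f"draft about {topic}"

def critique(text: str) -> str:
    return f"critique of ({text})"

pipeline = compose(critique, draft)  # equivalent to critique(draft(x))
result = pipeline("pelicans")
```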


I added a Python library API to my LLM CLI tool recently which offers a very lightweight way to call models: https://llm.datasette.io/en/stable/python-api.html

    import llm
    model = llm.get_model("gpt-3.5-turbo")
    model.key = 'YOUR_API_KEY_HERE'
    response = model.prompt(
        "Five surprising names for a pet pelican"
    )
    print(response.text())
Or you can stream the responses like this:

    response = model.prompt(
        "Five diabolical names for a pet goat"
    )
    for chunk in response:
        print(chunk, end="")
It works with other models too, installed via plugins - including models that can run directly on your machine:

    pip install llm-gpt4all
Then:

    model = llm.get_model("ggml-vicuna-7b-1")
    print(model.prompt(
        "What is the capital of France?"
    ).text())
It also handles conversations, where each prompt needs to include the previous context of the conversation:

    c = model.conversation()
    print(c.prompt("Capital of France?").text())
    print(c.prompt("what language do they speak?").text())
I wrote more about the new plugin system for adding extra models here: https://simonwillison.net/2023/Jul/12/llm/


Super pumped about this, Simon. Will be trying it this weekend. Any of the gpt4all models work, but need to duplicate the model files for now, right?


Yeah I haven't figured out how to have it reuse the models from the desktop GPT4All installation yet, issue here: https://github.com/simonw/llm-gpt4all/issues/5


This looks wonderful, a similar breath of fresh air to using the requests library for the first time. Really impressed by the amount of documentation too.

Is support for embedding and querying a corpus of custom text planned at all? 99% of what I wanted to use langchain for was building a chatbot that can answer questions about my own documents.


Undecided yet, but I think there's a good chance embedding stuff will eventually show up as an LLM plugin.


Nice. Now all we need is a vector database atop SQLite.


I had a go at one of those a few months ago: https://datasette.io/plugins/datasette-faiss

Alex Garcia built a better one here as a SQLite Rust extension: https://github.com/asg017/sqlite-vss


Just implemented pgvector in an existing psql db, works quite well


Why is the conversation tied to the model?


Good question! It's because there's an aspect of conversations that differs between different models: the way the previous messages are injected into the context of the prompt.

I went back and forth on a bunch of different designs, but eventually decided to try to make it so that each plugin that implemented a new model would only have to subclass Model and add new methods.

You can see all of the design arguments I had with myself about this here, across the course of 129 pull request comments: https://github.com/simonw/llm/pull/65


Reading this was very cathartic, I was nodding along and laughing as I had the exact same journey of WTF. And all along I just assumed I was 100% of the problem.

Hearing this perspective helps put my frustration in context. I need to lower my expectations and just get used to its quirks. Despite its issues I've had a ton of fun building with langchain and will keep using it.


Amen! I had the exact same reaction. Like the author I eventually threw my hands in the air and started rolling my own solution that is already getting most of what I was interested in done without the hassle.


The core data structure, the Chain, is basically just a function. Combining chains is function composition, like literally it's just f(g(x)), but incompatible with _your_ f's and g's without an adapter.

Read this page and mentally swap "chain" for "function": https://python.langchain.com/docs/modules/chains/foundationa...

They build all these adapters and integrations and make it seem like they're helping you piece together a solution, but in how many cases were they necessary as a middleman? Like I really don't need a wrapper around the openai client, and for the more complex stuff like Agents, isn't this the most critical part of your app, if it's production? And for notebooks, is it any better than coding directly against your llm api? You probably won't swap llm backends, and if you're using the notebook for education/documentation, I think I'd want to show the actual openai api calls.

OpenAI's official documentation gives you the code for a tool-running agent anyway. Taking that and editing it can probably be done faster than pip-installing langchain and navigating its docs.
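As a rough illustration (not OpenAI's actual sample code), the whole tool-running loop fits in a few lines. The response shape follows the `functions`/`function_call` API of that era, and the model is stubbed out here so the sketch is self-contained:

```python
import json

# Tools the "agent" may call; names and signatures are made up for the sketch.
TOOLS = {"get_weather": lambda city: f"22C and sunny in {city}"}

def fake_model(messages):
    """Stub standing in for openai.ChatCompletion.create: returns a dict shaped
    like the `functions` API — first a function_call, then a final answer."""
    if not any(m["role"] == "function" for m in messages):
        return {"function_call": {"name": "get_weather",
                                  "arguments": json.dumps({"city": "Paris"})}}
    return {"content": "It is 22C and sunny in Paris."}

def run_agent(user_prompt, model=fake_model, max_steps=5):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = model(messages)
        call = reply.get("function_call")
        if not call:  # no tool requested: we're done
            return reply["content"]
        result = TOOLS[call["name"]](**json.loads(call["arguments"]))
        messages.append({"role": "function", "name": call["name"], "content": result})

answer = run_agent("What's the weather in Paris?")
```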


> ...but in how many cases were they necessary as a middleman?

Some prefer React, some NextJS. LangChain has its place.


LangChain doesn't support basic things, like splitting a list result into single items and processing them one by one, accumulating the results further down the road.

By the time you have built custom chains, custom prompts, and custom agents to support all that, you are basically using their interface and not their code. At that point, eh, it's pointless. It's great for demos, I'll give you that, but every time I tried to coerce it into a product, it fell short.
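That split-and-accumulate pattern really is only a few lines of plain Python; `call_llm` below is a stub standing in for a real model call:

```python
# Stub standing in for a real LLM API call, so the sketch is self-contained:
def call_llm(prompt: str) -> str:
    verb, payload = prompt.split(":", 1)
    return "alpha\nbeta" if verb == "List" else f"expanded {payload.strip()}"

def fan_out(list_prompt: str, item_prompt: str) -> list[str]:
    """Ask for a list, split the result into items, process each item
    separately, and accumulate the per-item results."""
    raw = call_llm(f"List:{list_prompt}")
    items = [line.strip() for line in raw.splitlines() if line.strip()]
    return [call_llm(f"{item_prompt}:{item}") for item in items]

results = fan_out("name two Greek letters", "Write a sentence using")
```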


React works. NextJS works. OpenAI's ChatCompletion API with `functions` works. Go on langchain's discord and ask their documentation bot anything.


The question is: who needs to abstract over 10 lines of code?


LangChain is definitely verbose, and personally I don't use it. That being said, they have some pretty interesting tools the author didn't cover: their example selectors, e.g. the MaxMarginalRelevance selector [1], are interesting and useful, and similar example-selector tools become close to necessary for managing large LLM applications.

I wish the code quality was better, but poking around their docs does give pretty interesting ideas you can build yourself, even if you don't use LangChain. I think the release of OpenAI function calling has also just kinda sideswiped the need for large parts of these kind of frameworks — you don't need much help in coercing to JSON or parsing anymore if you use the function calling API.

1: https://python.langchain.com/docs/modules/model_io/prompts/e...


Langchain is perfect to give you ideas for how to interact with LLMs, but for me it's been easier to implement everything myself than to use it.


I think I wrote pretty much the same comment here a few months ago. There's some good Langchain videos on YT to get you up to speed with the space in a day, then just go directly to the APIs you think are best suited to your problem.

Also: before using any new technology, go to hn.algolia.com and search HN for comments about it first.


I agree completely. I use it for inspiration and maybe a quick prototype to understand something, but I usually implement the pieces myself. Debugging LangChain performance and bugs is just an exercise in frustration.


I use lambda over let over lambda (lolol)[1] like this (Clojure):

    (def chatgpt
      (-> (endpoint :chat "gpt-4" auth)
          (chat/set-opts [:endpoint] {:stream true})
          chat/retry
          chat/catch-unkown-commands
          ;; chat/history
          chat/string-input
          (chat/file-io "file.txt" :stream true)
          #_(chat/preserve-ctx chatgpt-ctx)))
These higher-level functions take as input the next function in the chain, and return a function that is responsible for calling it and takes a context hashmap as argument. Additionally, you can pass messages to these functions like :history/reset, impacting the atom defined in the let part of lolol. I use the same pattern for higher level constructs:

[1]https://letoverlambda.com/index.cl/guest/chap2.html#sec_6


Would you mind explaining what's going on here? I know CLJ and read let over lambda, but not sure what your code is doing exactly without function definitions. I assume the first expr returns a context hashmap?

I was thinking about doing something similar with dynamic scoping in Emacs Lisp.


This is basically the middleware pattern, except that each function is responsible for calling the next function in the chain. As a consequence, and contrary to the classic middleware pattern, the chain goes both ways: up to the API call, and back down returning the result. The first expr is indeed special. It is in this context a level-0 function: it takes a context, does stuff, and returns the context. The other functions in the threading/-> expression are level-1 functions: they return a level-0 function; in short, they act like a factory. That returned function will then be threaded into the following level-1 function, and so on. In the end the defined 'chatgpt' element is the whole chain. Call order is to be read from bottom to top.

Here's how level-1 functions are defined. 'ctx-fn' is just an indirection macro for 'fn' I haven't really used yet.

    (defn preserve-ctx [f & [at]]
      (let [mem (or at (atom {:ctx true}))]
        (ctx-fn
         :preserve-ctx [ctx]
         (if (->> ctx :it (cmd? :preserve-ctx))
           (do (case (:it ctx)
                 :preserve-ctx/get   @mem
                 :preserve-ctx/reset (doto {:ctx true}
                                       (->> (reset! mem))))
               (println (-> ctx :it) "done."))
           (-> ctx
               (when-not-> (-> :states :preserved)
                           (as-> $ (ctx-it (merge @mem $) (-> $ :it))))
               f
               (assoc-in [:states :preserved] true)
               (doto (->> (reset! mem))))))))

Edit: reading what I wrote, 'endpoint' is in fact a function returning function, except that since it's the "end" of the chain, it doesn't take a function to call next as argument.
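For readers who don't speak Clojure, a rough Python transliteration of the same idea: level-1 factories wrap the next link, so the chain runs both down to the endpoint and back up. All names here are made up for the sketch:

```python
# A "level-1" function is a factory: it takes the next link in the chain and
# returns a "level-0" function of ctx. Because each link calls the next one
# itself, control flows down to the endpoint and results flow back up.
def endpoint(ctx):  # level 0: the end of the chain
    ctx["response"] = f"echo: {ctx['input']}"
    return ctx

def with_retry(next_fn, attempts=3):  # level 1: wraps the next link
    def run(ctx):
        for i in range(attempts):
            try:
                return next_fn(ctx)
            except RuntimeError:
                if i == attempts - 1:
                    raise
    return run

def with_history(next_fn, log):  # level 1: records on the way back up
    def run(ctx):
        ctx = next_fn(ctx)
        log.append(ctx["response"])
        return ctx
    return run

log = []
chain = with_history(with_retry(endpoint), log)  # call order reads bottom-up
out = chain({"input": "hi"})
```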


Thanks. Have you found good libraries for working with openai api's in clj?


According to https://github.com/search?q=language:clojure%20gpt&type=repo...

The most advanced lib for dealing with LLMs is

https://github.com/zmedelis/bosquet

There is also https://github.com/cjbarre/multi-gpt/tree/main but it hasn't been updated in 3 months and seems rather basic.

Alternatively, you can shoot me an email at

(->> '(102 117 110 116 97 105 110 64 109 101 46 99 111 109) (map char) (apply str))

and I'll prepare a repo for what I've been working on. It's usable but I wanted to clear some things up before a public release.


I generally agree, but why so polemic?

I mean, it starts by giving the full name of the original author… calls using his free software a waste of time… destroys the design as if tearing down someone’s work who made bad decisions at every turn when making a brand-new thing that is FOSS.

Did anyone else find that a bit strange? Did I miss the light-hearted tone buried somewhere?

That said, strangely… useful examples and up-to-date detailed technical analysis that I like. I just don’t get why this sector gets so nasty between people… fight the robots, guys! ;)


https://gwern.net/holy-war

Anyone who's been burned by LangChain, especially now that it has VC funding, has to be worried that LangChain will become the cross-LLM standard library™, and they'll be dealing with it and endless patches to it for the rest of their lives. (Think systemd or NPM or Python packaging.) If it's as bad as described, the time to stop LangChain is to strangle it in the cradle, before it can get too far or risks finding a killer-app/niche which will immortalize it no matter how bad it is.


It has an insane amount of traction and Harrison Chase is totally overplaying his hand in a self-serving way. I've seen the guy IRL an annoying number of times now, considering his claim to fame is raising VC money for a code dumping ground.

People are taking it personally because there's a shared realization that he represents the crypto-ization of raising for AI-based startups: going full hype leader over substance.


I'm a bit confused about why it's a Python library and where its value proposition lies. If LLMs are going to be ubiquitous in app development, wouldn't you expect Apple to release a Swift library that blows LangChain out of the water (similarly with JetBrains/Kotlin, or Microsoft/C#)?


Microsoft has Semantic Kernel: https://github.com/microsoft/semantic-kernel

> The SK extensible programming model combines natural language semantic functions, traditional code native functions, and embeddings-based memory unlocking new potential and adding value to applications with AI.

> SK supports prompt templating, function chaining, vectorized memory, and intelligent planning capabilities out of the box.

I can't speak from experience whether it's better than LangChain, however.


You could chain python anything with itertools… airflow, multiprocessing, jenkins, graph redis, celery… pick your poison.

I’d probably spend 15% too many manhours making it bespoke…

But The Creator of whatever just solved a problem before I did. Of course I’m going to use that until a better solution is practical.


There is MS Guidance for GPT-3.5 and 4 so I expect most devs will migrate there and LangChain will die out.


I think you're reading into the tone too much, but I address this very argument at the end.

> No one wants to be that asshole who criticizes free and open source software operating in good faith like LangChain, but I’ll take the burden. To be clear, I have nothing against Harrison Chase or the other maintainers of LangChain (who encourage feedback!). However, LangChain’s popularity has warped the AI startup ecosystem around LangChain itself and the hope of OMG AGI I MADE SKYNET, which is why I am compelled to be honest with my misgivings about it.


Thanks for making this:

"simpleaichat is a Python package for easily interfacing with chat apps like ChatGPT and GPT-4 with robust features and minimal code complexity. This tool has many features optimized for working with ChatGPT as fast and as cheap as possible, but still much more capable of modern AI tricks than most implementations"

https://github.com/minimaxir/simpleaichat

Separately, in the article, typo expect->except here:

"LangChain uses about the same amount of code as just using the official openai library, expect LangChain incorporates more object classes for not much obvious code benefit."


Thanks for replying. I get your angle a bit better now, and as a sarcastic guy should have gotten it!

My main headfake in this actually revolutionary-incremental advance over the past 8 or so months… has been more similar to yours than I read at first. Nice post!


I noticed the sentiment toward LangChain made an abrupt 180 as soon as they announced their funding. I don't think it's a conspiracy to ruin their value; more like HN readers were first interested in a cool open source project exploring this new field of generative AI, but now that they have millions, the expectations change and they find themselves under more scrutiny.


A conspiracy seems far less likely to me than just people noticing they exist because they got funding.


Disappointment? Langchain promises a lot, it's generated lots of hype, it's got a big VC investment. So it must be awesome. But when you try to use it, you find yourself wasting hours or days to discover there's nothing particularly exciting behind the curtain.


They're well funded, so go ahead and criticize them.

https://blog.langchain.dev/announcing-our-10m-seed-round-led...


I don't know, I'm 100% against tearing down other people's FOSS work, but I think it's okay to be more critical once VC money starts flowing. At that point, it's a product jointly owned by venture capitalists for return on investment, and less a labor of love.

Anyway, for those of us who've used LangChain a bit, I don't think anything in the article is revelatory, the codebase is structured as a first mover project, with all of the copy+paste and poor abstractions that come with it. It's absolutely great for prototyping and simple scripts, but I don't think I would use it in a production codebase, I found it very difficult to work around the bugs and wonky abstractions.


Thanks for referencing the ‘implementing LangChain in 100 lines of code’ post I wrote a few months back. Really glad it was useful. And yes, I completely agree, much of LangChain is an abstraction that does little more than obfuscate.


Calling an LLM API is just 2 lines of code, and building an LLM app is mostly API calling. LangChain is a framework for sequential API calling that wraps those 2 lines of code in 2 thousand lines of code, and wraps a Python while loop in a Chain object. It's like: https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...
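For what it's worth, the "2 lines" plus a plain loop look like this. The real API call appears only in a comment (older openai<1.0 style, stated as an assumption), and a stub is used so the sketch is self-contained:

```python
# The "2 lines" (openai<1.0 style; assumes OPENAI_API_KEY is set):
#   import openai
#   text = openai.ChatCompletion.create(model="gpt-3.5-turbo",
#       messages=[{"role": "user", "content": prompt}]).choices[0].message.content

# Stub for those two lines, so the sketch runs without a network call:
def call_llm(prompt: str) -> str:
    return f"[{prompt}]"

# Sequential "chaining" is then an ordinary loop — no Chain object required.
def run_chain(text: str, steps: list[str]) -> str:
    for step in steps:
        text = call_llm(f"{step}\n{text}")
    return text

out = run_chain("raw notes", ["Summarize:", "Translate to French:"])
```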


People have already said that LangChain is useful to get ideas about what can be done with LLMs past the single prompt-response dynamic. Something else I thought to add is that LangChain is also useful for prototyping, so you'll be able to have a proof of concept of your idea before the weekend ends. I do agree that once you have that nailed down then you should rewrite everything from the ground up.

I do recommend prototyping; even if you think your idea is feasible, you want to see it in action before committing significant resources. Personally, my prototype showed me that a single run of my idea takes more time and money than I initially expected.


I've looked at LangChain multiple times and there is some cool stuff in there to enable a quick prototype. That said, needing ALL the cool stuff in one particular use is unlikely and trying to figure out what to do when you have a specific requirement might not be worth the learning curve.

To illustrate the complexity of this, here's a list of things that you might have to do if implementing a document bot:

  1. Handle uploading or storing documents somewhere and keeping track of the location.
  2. Handling different document types. Sticking to PDFs for this list.
  3. Manually or using the PDF to augment the documents with tags, keyterms, titles, etc.
  4. At this point you need somewhere to store the metadata. Maybe a DB or using the vector store.
  5. Just dealing with PDFs requires some type of PDF library. Other documents may or may not require an additional library.
  6. Extracting text from the PDF with something like pdf2image. Not all PDFs have extracted (selectable) text in them. Also, PDFs have images, which sometimes have text.
  7. Doing some sort of OCR is very likely. Think about OCR'ing the whole thing to deal with no extracted text/images with text.
  8. Assuming that, converting pages to images. Also, consider images have data in them, so extracting an image from the image and running some type of detection on it...
  9. Using some OCR model to extract text, or figure out how to extract them from the PDF data.
  10. Cleaning up that text, then parsing it cleanly. nltk comes into play here.
  11. Fragmentation/windowing of text. How long to create the fragments? Or should it be variable?
  12. Using the text to get more text via a prompt to a model. Here we can get additional keyterms, or perhaps a summary or question about the text fragment. (we'll use something we write for prompting the LLM here in a second)
  13. Storing the fragment. Most people use a vector database for this now, so we can use Weaviate or Pinecone, or ???. Also, consider a moderate amount of fragments and their vectors can be stored in a pickle format with manual dot products for ranking.
  14. Figuring out where you are going to get a user prompt. Assuming the easiest thing, collect input from the user in a command prompt.
  15. What to do with the user's prompt once you get it. Do you ask an LLM for more info on the prompt? Or do you just jump to...
  16. Embed the user's prompt to get a vector back. (Weaviate does this transparently, but you can easily do it yourself using the ada-002 endpoint from OpenAI)
  17. Taking that vector (from the embedding/inference to the embed model) do a comparison to other vectors/text you've stored.
  18. Think about what text is important for a new prompt to the LLM. Should it contain directives? How much reference text from the documents does it need? Is a cosine distance or some approximate nearest neighbor match going to be enough?
  19. Think about augmenting the vector search with keyterms that were extracted earlier (by both the PDF itself + any LLM inference step you implement)
  20. Take the text you pull back from wherever you stored the vector/text and then build a long string to stuff into a prompt.
  21. Consider some type of template structure for the prompts, so you can tweak them without losing your mind. String templates for files in Python are great for $this.
  22. Calling the various LLM endpoints. There are multiple models, in a variety of API endpoints, with tokens usually for auth.
  23. Consider you may just want text back from the LLM, or maybe you want it to complete or write a dict or array (in which case you may want to make this configurable). You may want to eval things that the LLM writes too.
  24. Consider the LLM (ChatGPT for example) may do a completion that contains a block delimited by ```python or similar.
  25. Consider those two things may require different completion endpoints, and some endpoints may be deprecated by the provider later.
  26. Think if you need function completion calling. GPT-X supports this, so you need a function and a way to pass that function's parameters to the LLM. 
  27. Build the prompt and submit it. Don't forget to protect your tokens, using env or config.py files.
  28. Take the response and do something with it that makes sense. Maybe give it to the user, or use it to build another prompt.
  29. Loop back to interact with the user. If the interaction is complicated, like with Discord integration, you may have to do this asynchronously.
  30. Think about storing the interaction for use in building future prompts. Hack this into #15.
  31. Always consider optimizing your prompt length.
  32. Consider how many tokens you are chewing through doing all this.
  33. Consider that questions by the user like "what is on page 2?" need context. Another good one is "how many pages is this document", or "what is the title of the document?". A hard one would be "how many images are in this PDF?", meaning how many illustrations...
  34. If the document discusses code, and the model outputs code, or SQL, do you run it, and if you do, how?
Example of most of this in action: https://github.com/FeatureBaseDB/DoctorGPT
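As a sketch of steps 13 and 16-21 (the "pickle plus manual dot products" route), with toy 3-dimensional vectors standing in for real embeddings:

```python
import math, pickle

def cosine(a, b):
    """Manual cosine similarity: dot product over the product of norms."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Step 13: fragments and their (toy, 3-dim) embeddings, pickled to "disk".
store = [("Pelicans eat fish.", [0.9, 0.1, 0.0]),
         ("Paris is in France.", [0.0, 0.2, 0.9])]
blob = pickle.dumps(store)

# Steps 16-18: embed the user's question (here: a fabricated vector, where a
# real app would call an embedding endpoint), then rank fragments by similarity.
question_vec = [0.8, 0.2, 0.1]
ranked = sorted(pickle.loads(blob), key=lambda f: cosine(f[1], question_vec), reverse=True)
context = "\n".join(text for text, _ in ranked[:1])

# Steps 20-21: stuff the retrieved text into a prompt template.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: what do pelicans eat?"
```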


I used LangChain before for a job interview and wasn't confident about how it works under the hood, or how dangerous it would be if some injection were going on. So I used it as minimally as possible. It still took a lot of code, even used minimally. One of their examples is to call an API by letting the LLM parse a documentation page and call the API from its understanding, which looks very unreliable if the LLM goes off even a bit. I found it hard to give total control to LangChain.

I tried experimenting with building a library that makes it easy and transparent to use LLMs https://github.com/adityapurwa/jehuty and tried the middleware approach that might be more familiar in general. It's an experiment, so the API might change a lot until we find a sweet spot. If you have advice or suggestions, it would be helpful and appreciated.


The worst part of LangChain is the "documentation". Especially the one for the JavaScript implementation.


The Python one is bad too…


I played around with simpleaichat for a few minutes just now, and I really like it. Unlike LangChain, I can understand what it does in minutes, and it looks like its primitives are fairly powerful. It looks like it's going to replace the `openai` library for me, it seems like a nice wrapper.

I'm especially looking forward to playing with the structured data models bit: https://github.com/minimaxir/simpleaichat/blob/main/examples...

Well done, Max!


I do have a blog post about structured data planned. :)


I came to the same conclusion as the author and decided to make a port to Go, with a plan to adopt just the concepts and strip the unnecessary complexity and lots of the things mentioned by the article. Datastore, chain, tools, and model are pretty much implemented. However, coming to Agent, I'm now wondering if it's the right way to do things. It works, but is it efficient? At least I know it's not simple.


mind sharing the repo?


Sorry, I didn't share it because I don't think it's that good yet. But here you go :)

https://github.com/wejick/gchain


If you'd like another LLM framework option that also is a vector database, check out txtai (https://github.com/neuml/txtai).

Disclaimer: I am the author of txtai


Big +1 to this, txtai is an excellent library and is a breath of fresh air to use. In particular, the way that graphs are integrated with LLMs and with SQL or related databases is excellent!


After seeing how overcomplex the LangChain and AutoGPT code was, I made my own implementation that starts with a solid foundation for doing similar types of things https://youtu.be/KN_etwBLej8 and I share the code if you want to do the same


Recent post on the same topic: https://news.ycombinator.com/item?id=36645575


It's funny that these tools try to coerce LLMs to use JSON as an intermediate format.

As long as you're cycling data between LLMs, unstructured text is going to be just fine.

Have your tools accept natural language as well, and you're golden.


GPT-4 has good JSON support now. It's nice to be able to give it a schema and get structured JSON back.


But why tho? At the start of the chain there's a human and their natural language questions. At the end of the chain there's a human waiting for natural language answers.

In between you may want some form of data storage, and natural language can be that as well, as it makes retrieval trivial for LLM.


Why wouldn’t you want structured data out?

It also feels like you can keep it more on track by telling it to put the data into specific fields with very specific descriptions.


Your end of the chain assumption is wrong, is why ;)


To be fair, it started out as a set of ML notes and code snippets to piece together LLM applications. Its popularity resulted in more feature requests and contributions later on. Without a good up-front architecture (or time spent thinking about one), it's no wonder it's starting to make creaking noises.

My personal preference is LlamaIndex by Jerry Liu. It has clearer docs and better abstractions.


After running into these issues a few others and I wrote a typescript agent framework that I think significantly improves on LangChain in many ways: https://github.com/sciencecorp/buildabot/

It’s still very early days for software composing AI models and we almost certainly don’t have all the right metaphors yet. And I think there is a lot to be said for strong typing and simple, robust code!


I've played with LangChain for a couple weeks now (with some of the llama-derivative local models and Oobabooga's native & OpenAI APIs + TextGen https://python.langchain.com/docs/modules/model_io/models/ll... ) and find it not-too-insanely-hard for an idiot like myself to figure out, though I'm just experimenting at this point with different models, esp. using tools, etc. I've found that some of the recommended prompts in the demos, while perhaps working well with ChatGPT/GPT-4, need a lot of tweaking to work with, say, WizardLM. But then I can get them working, so that's kinda neat.

I also played with Hugging Face's transformer agent (https://huggingface.co/docs/transformers/transformers_agents ) and thought it was a lot easier to use as far as the tools go, though it is perhaps less capable for other things. I may go back to playing with that actually.


My primary use of langchain has been to turn text into a vector database with chroma and then query the vector database to return source materials to an LLM to use. I’d much prefer to move away from it because like the author I find LC’s adherence to system prompts is really bad. I can handle all of the agent side of things, I just want relevant source materials from the vector DB as strings — anyone have a better more direct way they recommend?


I found Chroma to be really slow. I recently switched to pgvector and I'm really happy with it.

This is an example of using pgvector that might be relevant for your use case:

    SELECT d.id,
           d.doc,
           1 - (embedding <=> (SELECT embedding FROM documentation_embedding WHERE id = 1)) AS similarity
    FROM documentation_embedding de
    JOIN documentation d ON de.documentation_id = d.id
    ORDER BY similarity DESC;


chroma is getting much faster on Monday.

be cautious with pgvector - the recall can be extremely bad (% retrieved nearest neighbors vs ground truth)


I've implemented multiple tools that do this and do not use langchain. You just use chroma, the openai sdk, and f-strings. I can't share the projects because they were implemented for my work and are private.
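The whole pattern is small enough to sketch without either dependency: rank stored chunks by cosine similarity against the query embedding, then interpolate the top hits into the prompt with an f-string. Everything below is a stand-in (hardcoded 2-d vectors instead of real embeddings; in practice chroma does the storage/search and the OpenAI SDK does the embedding and the final completion call):

```python
import math

# Stand-in corpus of (text, embedding) pairs. In a real setup the
# embeddings come from an embedding model and live in a vector store.
corpus = [
    ("Paris is the capital of France.", [1.0, 0.0]),
    ("The mitochondria is the powerhouse of the cell.", [0.0, 1.0]),
    ("France is in western Europe.", [0.9, 0.1]),
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def top_k(query_vec, k=2):
    # Rank every chunk by similarity to the query and keep the best k.
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Pretend this is the embedded user question, then build the prompt
# with a plain f-string -- no prompt-template classes needed.
query_vec = [1.0, 0.05]
context = "\n".join(top_k(query_vec))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: Where is Paris?"
print(prompt)
```

From there the only remaining step is one request to a chat completions endpoint with `prompt` as the user message.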


I'm in the exact same situation. Every time I try to do something with LangChain it is a major pain and inevitably leads to me having to do something extremely sketchy to make it work. Also, a lot of the functionality it provides in the first place is pretty terrible: a lot of the vector store/memory integrations barely work, and some have been broken for weeks. The JSON parsing is also incredibly sketchy and randomly breaks.


I tried langchain briefly. Then realized it was just way faster and easier to write some basic Python scripts to glue together any APIs I was calling.


I disagree. It provides an easy way to let the LLM use tools, and it has built-in tools. Btw, OpenAI's function calling got the idea from LangChain. I did not see this kind of sentiment before function calling was made public. Now everyone thinks it's garbage ...


I don't find it difficult to read the langchain code, although I think the documentation needs to be improved. It is based on a simple concept that is easy to understand, and once you get used to it, you will find it rather easy to read.

However, I think langchain's version should be bumped to 1.0 or beyond. The current latest version of langchain is 0.0.234, and it's a problem for production use that changes which ought to be separated into proper minor and patch releases are all lumped together into one stream of 0.0.x releases.


It is so trivial that it even gives me a headache to see the docs. People use it ‘because it’s easy’; it’s not, it’s quite badly done. And Copilot can generate the same things from scratch for the stuff you actually want to use.


Had the same frustrating experience. The system prompt being ignored when running an agent etc.

But as always I trust the OSS community to make it better or replace it with something else


Exact same feelings as the author. Tried using langchain for my Q&A task. I hoped it would let me avoid manually dealing with embeddings. Except... perf was horrible and it spent my entire OpenAI API quota in an hour or so.

Decided to reimplement it using embeddings and my own glue code. Took like a week (much less than the langchain work), and it's cheaper and better.


On my latest project I’ve been using it in a few places and rolling my own stuff in others. It’s definitely handy for ingesting documents, chunking, and handling IO with vector dbs.

With OpenAI functions though, I find it easier to just make a local sequence of function executions than work w langchain abstractions.
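Chunking in particular is easy to roll yourself if the langchain dependency ever becomes a liability. A minimal sliding-window splitter with overlap (the sizes and function name here are arbitrary, not from any library):

```python
def chunk(text, size=500, overlap=50):
    """Split text into fixed-size character windows that overlap,
    so content cut at one boundary still appears whole in a neighbor."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "x" * 1200
pieces = chunk(doc, size=500, overlap=50)
print(len(pieces), [len(p) for p in pieces])  # -> 3 [500, 500, 300]
```

Splitting on sentence or paragraph boundaries instead of raw characters is a small refinement on the same loop.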


What I'm most interested in is their long list of integrations.

If one can somehow reuse those integrations without even caring about/using langchain, it's a massive time saver.

I haven't looked at the code so not sure how reusable they really are.


Dude did ReactJS dirty in that parting shot. Say what you want about the modern ecosystem, but the core idea of declaring UI state with a virtual DOM was excellent. There is a reason why every major web framework followed suit.


The thing with langchain is that its documentation is terrible.

If I want to do something, I have to rely on other sources to figure out how to use it. That's just overkill.


happy to see I'm not the only one with this thought. The more I look at langchain code, the more I feel stupid for using it and learning how it works. They have turned a basic print("hello world") into

  class HelloWorldPrint(BasePrint):
      @validator
      def input_variables...

bros really just need llm.call(). I ended up rewriting my own tools, which goes 10x faster for me personally.
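The "just llm.call()" version really is about this much code: a thin wrapper that owns message formatting and hands off to one transport function, injected so it's testable without the network. (The class and names below are hypothetical, not any library's API; in production the transport would be a single POST to a chat completions endpoint.)

```python
class LLM:
    """Thin wrapper: no chains, no base classes, just a callable transport."""

    def __init__(self, transport, system=""):
        self.transport = transport  # function: list[dict] -> str
        self.system = system

    def call(self, prompt):
        # Assemble the message list and delegate; that's the whole job.
        messages = []
        if self.system:
            messages.append({"role": "system", "content": self.system})
        messages.append({"role": "user", "content": prompt})
        return self.transport(messages)

# A fake transport exercises the entire surface area under test.
fake = lambda msgs: f"echo: {msgs[-1]['content']}"
llm = LLM(fake, system="Be terse.")
print(llm.call("hello world"))  # -> echo: hello world
```

Swapping models, adding retries, or logging all happen in the transport function, with no framework in the way.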


I can't invest in OpenAI atm. Any alternatives to it so that I can make a "chatting interface" for PDFs?


Use Falcon and do your own document chunking/embedding.


What about LlamaIndex? Thoughts?


Honestly just shut the whole thing down at this point.



