d4rkp4ttern's comments | Hacker News

Poetry -> uv migration is missing: If you already have a project using Poetry, with a large pyproject.toml with many extras and other settings, there currently isn’t a smooth way to port this to uv. Someone can enlighten me if I’ve missed something.


Indeed, it has become the LLM equivalent of IBM, as in -- "No one ever got fired for choosing LangChain". A certain well-known ML person even runs a course on LangChain, as if it's a "fundamental" thing to know about LLMs. I was also surprised/disappointed to see that the otherwise excellent "Hands-on Large Language Models" book from O'Reilly has extensive examples using this library.

In Apr 2023 we (CMU/UW-Madison researchers) looked into this lib to build a simple RAG workflow that was slightly different from the canned "chains" like RetrievalQAConversation or others, and ended up wasting time hunting docs and notebooks all over the place and going up and down class hierarchies to find out what exactly was going on. We decided it shouldn't have to be this hard, and started building Langroid as an agent-oriented LLM programming framework.

In Langroid you set up a ChatAgent class which encapsulates an LLM-interface plus any state you'd like. There's a Task class that wraps an Agent and allows inter-agent communication and tool-handling. We have devs who've found our framework easy to understand and extend for their purposes, and some companies are using it in production (some have endorsed us publicly). A quick tour gives a flavor of Langroid: https://langroid.github.io/langroid/tutorials/langroid-tour/
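Roughly, the pattern looks like this (a simplified sketch along the lines of the quick-start in the docs; the model name and messages are placeholders, and API-key setup is elided):

    import langroid as lr
    import langroid.language_models as lm

    # LLM configuration (model name is illustrative; any supported model works)
    llm_config = lm.OpenAIGPTConfig(chat_model="gpt-4o")

    # ChatAgent = LLM interface + conversation state (+ optional tools / vector-store)
    agent = lr.ChatAgent(
        lr.ChatAgentConfig(
            llm=llm_config,
            system_message="You are a concise, helpful assistant.",
        )
    )

    # Single LLM turn, directly on the agent
    response = agent.llm_response("What is the capital of France?")

    # Or wrap the agent in a Task to get an interactive chat loop
    task = lr.Task(agent, name="Assistant")
    task.run("Hello!")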


You can have a look at Langroid -- it's an agent-oriented LLM programming framework from CMU/UW-Madison researchers. We started building it in Apr 2023 out of frustration with the bloat of then-existing libs.

In langroid you set up a ChatAgent class which encapsulates an LLM-interface plus any state you'd like. There's a Task class that wraps an Agent and allows inter-agent communication and tool-handling. We have devs who've found our framework easy to understand and extend for their purposes, and some companies are using it in production (some have endorsed us publicly). A quick tour gives a flavor of Langroid: https://langroid.github.io/langroid/tutorials/langroid-tour/
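Here is a rough sketch of the tool-handling side (a simplified version of the examples in the docs; the model name and the tool itself are just illustrative, and task-termination details are glossed over):

    import langroid as lr
    import langroid.language_models as lm
    from langroid.agent.tool_message import ToolMessage

    # A tool is a ToolMessage subclass; the LLM "uses" it by emitting a message
    # matching this schema, and the agent handles it.
    class SquareTool(ToolMessage):
        request: str = "square"  # tool name
        purpose: str = "To compute the square of a given <number>."
        number: int

        def handle(self) -> str:
            # stateless handler, called when the LLM generates this tool
            return str(self.number ** 2)

    agent = lr.ChatAgent(
        lr.ChatAgentConfig(llm=lm.OpenAIGPTConfig(chat_model="gpt-4o"))
    )
    agent.enable_message(SquareTool)  # let the LLM use, and the agent handle, this tool

    # In a real app you'd also specify when the task is done (e.g. interactive
    # mode or a done-condition); this is just the minimal shape of the loop.
    task = lr.Task(agent, interactive=False)
    task.run("What is the square of 17?")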

Feel free to drop into our discord for help.


Neat. Thanks!


Where’s the info on context length, etc.? Can’t seem to find the official specs page.


It shows the context length on the AI Studio site.

2 million tokens for gemini-exp-1206; 32k for the other experimental Gemini model (gemini-exp-1121, I think).


Setting up Azure LLM access is a similarly hellish process. I learned after several days that I had to look at the actual endpoint URL to figure out how to set the “deployment name”, “version”, etc.
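For anyone stuck on the same thing: the pieces of that endpoint URL map onto separate client settings. A sketch using the openai Python package (all values are placeholders, including the api-version):

    from openai import AzureOpenAI  # openai python package, v1+

    # An Azure OpenAI endpoint URL looks roughly like:
    #   https://<resource>.openai.azure.com/openai/deployments/<deployment-name>/chat/completions?api-version=<version>
    # and those pieces are what the client expects as separate settings:
    client = AzureOpenAI(
        azure_endpoint="https://<resource>.openai.azure.com",  # base of the URL
        api_version="2024-06-01",  # the ?api-version= query parameter (example value)
        api_key="<your-azure-openai-key>",
    )

    resp = client.chat.completions.create(
        model="<deployment-name>",  # the *deployment* name from the URL, not the model family name
        messages=[{"role": "user", "content": "hello"}],
    )
    print(resp.choices[0].message.content)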


Have a look at Langroid [1], a multi-agent LLM framework from CMU/UW-Madison researchers (I am the lead dev). It does not depend on any other LLM library (CrewAI, by contrast, uses LangChain). We started building Langroid in Apr 2023 as an agent framework from the ground up, with a particular focus on an agent orchestration mechanism that seamlessly handles inter-agent hand-off as well as tool/function-handling.

[1] https://github.com/langroid/langroid
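To give a flavor of the inter-agent hand-off, here is a rough sketch of the two-agent (teacher/student) pattern from the docs; the system messages and model name are placeholders, and termination is handled crudely by telling the Teacher to say DONE (the convention Langroid's examples use):

    import langroid as lr
    import langroid.language_models as lm

    llm_config = lm.OpenAIGPTConfig(chat_model="gpt-4o")  # illustrative

    teacher = lr.ChatAgent(
        lr.ChatAgentConfig(
            name="Teacher",
            llm=llm_config,
            system_message="""
            Ask your student short arithmetic questions, one at a time.
            After 3 questions, say DONE.
            """,
        )
    )
    student = lr.ChatAgent(
        lr.ChatAgentConfig(
            name="Student",
            llm=llm_config,
            system_message="Answer the question you are given, concisely.",
        )
    )

    teacher_task = lr.Task(teacher, interactive=False)
    # single_round=True: the student answers once and control returns to the teacher
    student_task = lr.Task(student, interactive=False, single_round=True)

    # The student task becomes a sub-task of the teacher task; the orchestrator
    # hands each teacher message to the student and routes the answer back.
    teacher_task.add_sub_task(student_task)
    teacher_task.run()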


Thanks, checking it out now.


In the Langroid [1] LLM library we have a clean, extensible RAG implementation in the DocChatAgent [2]. It combines several retrieval techniques, both lexical (BM25, fuzzy search) and semantic (embeddings), with re-ranking via cross-encoder and reciprocal rank fusion, plus re-ranking for diversity and lost-in-the-middle mitigation:

[1] Langroid - a multi-agent LLM framework from CMU/UW-Madison researchers https://github.com/langroid/langroid

[2] DocChatAgent Implementation - https://github.com/langroid/langroid/blob/main/langroid/agen...

Start with the answer_from_docs method and follow the trail.
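If you just want to try it before reading the code, a rough usage sketch (the document paths and question are placeholders; the config has many more knobs controlling the retrieval and re-ranking steps above):

    import langroid as lr
    from langroid.agent.special.doc_chat_agent import DocChatAgent, DocChatAgentConfig

    # Point the agent at some documents (local paths and/or URLs); they are
    # chunked, embedded and indexed into the configured vector store.
    config = DocChatAgentConfig(
        doc_paths=["docs/my_paper.pdf", "https://example.com/notes.html"],
    )
    agent = DocChatAgent(config)

    # One-shot RAG answer: retrieve (lexical + semantic) -> re-rank -> generate
    response = agent.llm_response("What are the key findings?")

    # Or run it as an interactive question-answering task
    task = lr.Task(agent)
    task.run()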

Incidentally, I see you're the founder of Kadoa -- Kadoa-snack is one of my favorite daily tools for finding LLM-related HN discussions!


Langroid: https://github.com/langroid/langroid

Langroid (2.7k stars, 20k downloads/mo) is an intuitive, lightweight, extensible and principled Python framework for easily building agent-oriented LLM-powered applications, from CMU and UW-Madison researchers. You set up Agents, equip them with optional components (LLM, vector-store and tools/functions), assign them tasks, and have them collaboratively solve a problem by exchanging messages. It has been in development since Apr 2023, predating other agent frameworks, and was built from the ground up to be agent-oriented (not as an afterthought); it does not depend on any other LLM library.

The framework is gaining popularity among developers due to its clean, principled design.

I am the lead dev/architect.

Areas needing help: CODE, DOCS, DESIGN

Level: There are a number of areas (see the issues and contribution docs) where beginners or advanced developers can contribute.

Contact: Drop into the discord and post a message in a suitable channel, or create an Issue or Discussion in the GitHub repo.

Discord: https://discord.gg/ZU36McDgDs


Would be nice if it had a way to use JetBrains keymaps (in addition to VSCode's). Currently my favorite editor is Zed, partly because I can set it up to use JetBrains keyboard shortcuts.


YouTube video of Sasha Rush presenting this:

https://youtu.be/6PEJ96k1kiw?si=7po7shOZOnQYd6hN

