Show HN: Freeact – A Lightweight Library for Code-Action Based Agents (github.com/gradion-ai)
122 points by cstub 10 days ago | 5 comments
Hello! We just released freeact (https://github.com/gradion-ai/freeact), a lightweight agent library that empowers language models to act as autonomous agents through executable code actions.

By enabling agents to express their actions directly in code rather than through constrained formats like JSON, freeact provides a flexible and powerful approach to solving complex, open-ended problems that require dynamic solution paths.

* Supports dynamic installation and utilization of Python packages at runtime

* Agents learn from feedback and store successful code actions as reusable skills in long-term memory

* Skills can be interactively developed and refined in collaboration with freeact agents

* Agents compose skills and any other Python modules to build increasingly sophisticated capabilities

* Code actions are executed in ipybox (https://github.com/gradion-ai/ipybox), a secure Docker + IPython sandbox that runs locally or remotely

GitHub repo: https://github.com/gradion-ai/freeact

Evaluation: https://gradion-ai.github.io/freeact/evaluation/

See it in action: https://github.com/user-attachments/assets/83cec179-54dc-456...

We'd love to hear your feedback!






> The library's architecture emphasizes extensibility and transparency, avoiding the accidental complexity often introduced by heavier frameworks that obscure crucial implementation details.

Can you give examples of crucial obscure implementations compared to your approach?


The main example lies in our core architecture, which uses just three fundamental abstractions: a code executor, a model, and the agent itself (see https://github.com/gradion-ai/freeact/blob/main/freeact/exam...). All actions are expressed through executable Python code (that you can easily inspect) rather than through multiple abstraction layers.

While other libraries or frameworks may offer convenience through extensive abstractions, this often creates black boxes that complicate debugging and customization when you go beyond standard use cases. For example, when you need to trace what prompts are being sent to the LLM or how tool outputs are being processed, multiple layers of abstraction can make this difficult to inspect. With our approach you can easily trace the flow from agent decision to code execution.

HTH!


You achieved equal (and from the looks of it, better) results with freeact zero-shot prompting than smolagents with few-shot prompting? If that's true, then nice :D

Great addition to the current frameworks and libraries out there. I'll give it a try on the weekend, but it looks promising from your evaluation.

Thanks for checking it out! Hope you have fun, and we’d love to hear your feedback!


