Show HN: Kalosm an embeddable framework for pre-trained models in Rust (floneum.com)
66 points by Evan-Almloff 3 months ago | hide | past | favorite | 7 comments
Hi everyone, I'm happy to announce the release of Kalosm! [Kalosm](https://floneum.com/kalosm/) is a framework for embedded AI in Rust.

## What is Kalosm?

Kalosm provides a simple interface for pre-trained language, audio, and image models. To make these models easy to use in your application, Kalosm includes a set of integrations with other systems like your database or documents.

```rust
use kalosm::language::*;

#[tokio::main]
async fn main() {
    // Load a chat-tuned Llama model
    let mut model = Llama::new_chat();

    // Build a chat session with a system prompt
    let mut chat = Chat::builder(&mut model)
        .with_system_prompt("The assistant will act like a pirate")
        .build();

    // Read user input and stream each response to stdout
    loop {
        let mut response_stream = chat.add_message(prompt_input("\n> ").unwrap()).await.unwrap();
        response_stream.to_std_out().await.unwrap();
    }
}
```

## What can you build with Kalosm?

Kalosm is designed to be a flexible and powerful tool for building AI into your applications. It is a great fit for any application that uses AI models to process sensitive information, where keeping inference local matters.

Here are a few examples of applications that are built with Kalosm:

- Floneum (https://floneum.com/): A local open source workflow editor and automation tool that uses Kalosm to provide natural language processing and other AI features.

- Kalosm Chat (https://github.com/floneum/kalosm-chat): A simple chat application that uses Kalosm to run quantized language models.

## Kalosm 0.2

The 0.2 release includes several new features and some performance improvements:

- Tasks and Agents

- Task Evaluation

- Prompt Auto-Tuning

- Regex Validation

- Surreal Database Integration

- RAG improvements

- Performance Improvements

- New Models
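
Regex validation in this context means constraining or checking generated text against a pattern. The stdlib-only sketch below uses no Kalosm APIs; the digit pattern, the mock candidate list, and the helper names are purely illustrative of the underlying idea of rejecting output that does not match an expected shape:

```rust
/// Return true if `response` matches the illustrative pattern `[0-9]+`,
/// i.e. it is a non-empty run of ASCII digits.
fn matches_digits(response: &str) -> bool {
    !response.is_empty() && response.chars().all(|c| c.is_ascii_digit())
}

/// Pick the first candidate response that passes validation.
/// In a real constrained-generation loop the candidates would come
/// from the model; here they are a fixed mock list.
fn first_valid<'a>(candidates: &[&'a str]) -> Option<&'a str> {
    candidates.iter().copied().find(|c| matches_digits(c))
}

fn main() {
    let candidates = ["around 42", "42"];
    // "around 42" fails the digit pattern, "42" passes
    println!("{:?}", first_valid(&candidates)); // prints Some("42")
}
```

A library-level implementation can go further by steering sampling so invalid tokens are never produced, rather than filtering after the fact.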

If you have any questions, feel free to ask them here, on Discord (https://discord.gg/dQdmhuB8q5), or on GitHub (https://github.com/floneum/floneum/tree/master/interfaces/ka...).

To get started with Kalosm, you can follow the quick start guide: https://floneum.com/kalosm/docs/

Starting work on a product where I'll need RAG + some language model (maybe llama) and Kalosm seems interesting. However, I'd like to package the model with the app. I don't really like the new trend of on-demand downloading the model via a library in some random cache folder on the user's computer (which services like Huggingface have popularized).

Is there any non-hacky way of doing this?

Yes, you can set the source to any local file instead of a Hugging Face model. Here is an example: https://gist.github.com/ealmloff/3398d172180fa783f043b4a2696...
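
For the packaging side of this, here is a stdlib-only sketch of resolving a model file shipped next to the application binary; the file name and helper are hypothetical, and the actual local-file source API is the one shown in the linked gist:

```rust
use std::env;
use std::path::PathBuf;

/// Resolve a model file installed alongside the application binary.
/// `file_name` is whatever quantized model you bundle at install
/// time (the name used below is hypothetical).
fn bundled_model_path(file_name: &str) -> PathBuf {
    let exe = env::current_exe().expect("cannot locate the running executable");
    let dir = exe.parent().expect("executable has no parent directory");
    dir.join(file_name)
}

fn main() {
    let path = bundled_model_path("model.gguf");
    // Hand `path` to the local-file source (see the gist above)
    // instead of letting the library download into a cache folder.
    println!("{}", path.display());
}
```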

This is really cool. I looked at kalosm-sound, and noticed it's using candle, how is the perf compared to openai-whisper, whisper-cpp, faster-whisper, etc.?

This is super cool. Haven't seen such a pragmatic framework for composing local LLM action, especially in rust

Very neat! I've been stalking this project ever since I saw it get mentioned on the Candle repo, I'm curious to see where this goes next.

Any plans for multimodal models like llava?

I would love to integrate llava in the future!

> Floneum is a graph editor that makes it easy to develop your own AI workflows

I think this explains what Floneum is
