Hacker News
Show HN: File-based persistent memory for LLM assistants
1 point by JohannesGlaser 1 day ago
I’ve been frustrated by how stateless most LLM assistants still are. Longer context windows and retrieval help with recall, but state usually disappears between sessions.

I ended up building a file-based memory architecture where memory, rules, and state live explicitly outside the model. It’s modular (notes, OCR, training logs, etc.) and I’ve been using it daily for about three months as a general-purpose assistant. It doesn’t require fine-tuning or custom infrastructure — just files and an existing LLM backend.
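To illustrate the general idea (this is a hypothetical sketch of the file-based approach described above, not code from the actual repos; the module names and JSON-lines layout are my assumptions):

```python
# Hypothetical sketch: memory lives in plain files on disk, one
# JSON-lines file per module (notes, training_log, ...), and recent
# entries are loaded and prepended to the prompt at session start.
import json
from pathlib import Path

MEMORY_DIR = Path("memory")  # assumed layout, not from the repos

def remember(module: str, entry: dict) -> None:
    """Append a memory entry to the module's file."""
    MEMORY_DIR.mkdir(exist_ok=True)
    with open(MEMORY_DIR / f"{module}.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")

def recall(module: str, limit: int = 20) -> list[dict]:
    """Load the most recent entries for injection into a new session."""
    path = MEMORY_DIR / f"{module}.jsonl"
    if not path.exists():
        return []
    lines = path.read_text().splitlines()[-limit:]
    return [json.loads(line) for line in lines]
```

Because state is just files, it survives restarts and is trivially inspectable and editable outside the model.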

I’m curious how others here handle long-term state: files, databases, event sourcing, vector layers, or something else?

For context: private, non-commercial use is free.

Architecture and repos are linked at http://metamemoryworks.com
