
I’ve been tinkering on a project I call Franklin — an experiment to see if I could run a self-hosted AI router inside Docker.

Right now it can:

Answer natural prompts like “what’s in /memory?”

Route them to local tools, a local model (TinyLlama via Ollama), or GPT in the cloud.

Stream responses back through a minimal Web UI.
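For anyone curious what "routing" means here: the core decision is just picking a backend per prompt. Below is a minimal sketch of that logic in Python — the function name, heuristics, and backend labels are my own illustration, not Franklin's actual code.

```python
# Hypothetical sketch of a prompt router: decide whether a prompt
# goes to a local tool, a local model (TinyLlama via Ollama), or
# a cloud GPT. The rules here are illustrative assumptions.

def route(prompt: str) -> str:
    """Return which backend should handle the prompt."""
    text = prompt.lower()
    # Prompts about known paths are handled by local tools,
    # e.g. "what's in /memory?" -> list the /memory directory.
    if "/memory" in text or text.startswith(("ls ", "cat ")):
        return "local-tool"
    # Short, simple prompts stay on the local model via Ollama.
    if len(text.split()) <= 12:
        return "ollama"
    # Longer, open-ended prompts escalate to the cloud model.
    return "gpt"

print(route("what's in /memory?"))  # local-tool
print(route("hello there"))         # ollama
```

In a real setup each branch would dispatch to a different client (a shell helper, the Ollama HTTP API, or the OpenAI API) and stream the tokens back to the Web UI.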

It’s still very early, and the last couple of weeks were mostly broken ports, missing API keys, and containers restarting — but I finally reached the point where Franklin just answered me back. That moment made it feel less like a pile of logs and more like an assistant actually running on my server.

I wrote up a bit about the journey here: https://www.dfrankstudioz.com/blog

Would love feedback from anyone else experimenting with local + cloud AI setups.
