Hacker News | advanced-stack's comments

It's mostly a test of the coding capabilities of the 9B model; the 0.8B model is used to make the Telegram bot smarter than an if/then/else chain.

I find LM Studio more usable for local setups (desktop/laptop), and I would use the llama.cpp stack directly for a (local) server deployment.


I revisited this, and running the models directly via ollama run was actually surprisingly fast.

A bug or misconfiguration in the connection to opencode seemed to be the culprit.


Hmm, interesting; I'll try to compare the performance.
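One rough way to run that comparison is to time identical prompts against each backend. Below is a minimal Python sketch using only the standard library; the endpoint URL (Ollama's default http://localhost:11434) and the model name are assumptions for illustration, not details from the thread.

```python
import json
import time
import urllib.request

# Default Ollama HTTP endpoint (assumption: local server on the standard port).
OLLAMA_URL = "http://localhost:11434/api/generate"
# Hypothetical model name; substitute whatever model you have pulled locally.
MODEL = "qwen2.5-coder"

def time_call(fn):
    """Run fn once and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn()
    return result, time.perf_counter() - start

def generate(prompt):
    """Send one non-streaming generation request to the local Ollama server."""
    payload = json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

To compare paths, you would wrap each backend's call in time_call (e.g. the direct Ollama call above versus a request routed through opencode) and run the same prompt against both.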

