Hacker News

Maybe related, but I've also got a Tailscale instance running for the same use case (on an older box, but...). I've also installed Open WebUI attached to ollama, so the interface I deal with on my phone is just a simple, nice-to-use webpage. May want to look into this? So far it's worked very slick.
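For anyone curious, the Open WebUI + ollama pairing above is usually just one container; a minimal sketch, assuming Docker and a host-local ollama on its default port 11434 (the volume name and `host.docker.internal` mapping are assumptions, not from the original comment):

```shell
# Run Open WebUI and point it at the ollama API on the host.
# OLLAMA_BASE_URL is Open WebUI's documented way to locate ollama.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

Then browse to port 3000 from any device on the tailnet (or expose it via `tailscale serve`).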



Nice, thanks for the suggestion. I got it set up just before leaving town for a few days, so I've been doing a little tinkering with it. I was hoping for a setup with LM Studio where my laptop could use the API server on the mini over the TS network. Unfortunately that doesn't seem to be the case, so I'll set up a configuration like you mentioned to just have a global client from any device on the network.

It's very cool to have access to such a high-horsepower machine from anywhere, though. Next step is figuring out the networking to be able to access the host GPU/ollama API from pods running in a Colima VM/k3s cluster setup.
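On the Colima/k3s piece: Colima runs on Lima, so guests (and pods using host networking paths) can often reach the macOS host via Lima's `host.lima.internal` gateway alias, provided ollama is listening beyond loopback (e.g. `OLLAMA_HOST=0.0.0.0`). A minimal sketch of calling ollama's generate endpoint from inside the VM — the hostname and model name here are assumptions:

```python
import json
import urllib.request

# Assumed endpoint: from inside a Colima (Lima-based) VM, the host is
# typically reachable at host.lima.internal; ollama's default port is 11434.
OLLAMA_URL = "http://host.lima.internal:11434"


def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for ollama's /api/generate endpoint."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    req = build_generate_request("llama3", "Why is the sky blue?")
    # Requires a reachable ollama instance; prints the model's reply.
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
```

If pods can't resolve `host.lima.internal` directly, the usual workaround is a Kubernetes `Service`/`Endpoints` pair pointing at the gateway IP.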



