Maybe related, but I've also got a tailscale instance running for the same use case (on an older box, but it does the job). I've also installed open-webui attached to ollama, so the interface I deal with on my phone is just a simple, nice-to-use webpage. Might be worth looking into? Thus far it's worked very slick.
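For anyone curious, here's a rough sketch of that setup. The hostname `mini` is just a stand-in for your machine's tailnet name; ollama's default port is 11434, and Open WebUI's official image listens on 8080 inside the container:

```shell
# ollama binds to 127.0.0.1 by default, so make it listen on all
# interfaces first (otherwise nothing else on the tailnet can reach it):
OLLAMA_HOST=0.0.0.0 ollama serve &

# Run Open WebUI pointed at the ollama API. OLLAMA_BASE_URL tells it
# where the backend lives; the volume keeps chats across restarts.
docker run -d -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://mini:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

# Then from any device on the tailnet, browse to http://mini:3000
```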
Nice, thanks for the suggestion. I got it set up just before leaving town for a few days, so I've been doing a little tinkering with it. I was hoping for a setup with LM Studio where my laptop could use the API server on the mini over the TS network. Unfortunately that doesn't seem to be the case, so I'll set up a configuration like you mentioned and just have a global client available from any device on the network.
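If the LM Studio server doesn't cooperate over the tailnet, ollama's plain HTTP API works from any device that can reach the host. A minimal stdlib-only sketch, assuming a tailnet hostname of `mini` and a model you've already pulled (both are placeholders, swap in your own):

```python
import json
import urllib.request

def build_generate_request(host, model, prompt, port=11434):
    """Build a request for ollama's /api/generate endpoint.

    `host` is the server's tailnet hostname ("mini" here is an
    assumption; use whatever `tailscale status` shows for yours).
    """
    url = f"http://{host}:{port}/api/generate"
    body = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

def generate(host, model, prompt):
    """Send the prompt and return the model's full response text."""
    req = build_generate_request(host, model, prompt)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# e.g. print(generate("mini", "llama3", "Why is the sky blue?"))
```

Since the API is just JSON over HTTP, the same call works from a laptop, a phone app that can hit arbitrary URLs, or anything else on the tailnet.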
It's very cool to have access to such a high-horsepower machine from anywhere, though. Next step is figuring out the networking so that pods running in a Colima VM/k3s cluster setup can access the host GPU/ollama API.
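One possible angle on that last part: Colima runs on Lima, and Lima exposes the macOS host to the VM as `host.lima.internal`, which pods inside a k3s cluster in that VM can usually resolve too. A rough sketch (image name and endpoint are just examples):

```shell
# 1. On the host, make ollama listen on all interfaces, not just
#    loopback, so the VM can reach it:
OLLAMA_HOST=0.0.0.0 ollama serve

# 2. From inside the cluster, the host's API should then be reachable
#    via Lima's host alias. Throwaway pod to test connectivity:
kubectl run curl-test --rm -it --image=curlimages/curl -- \
  curl http://host.lima.internal:11434/api/tags
```

If that resolves, pods can just take the base URL as config (e.g. an `OLLAMA_BASE_URL` env var pointing at `http://host.lima.internal:11434`) rather than needing any special GPU passthrough for inference, since the heavy lifting stays on the host.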