Yeah the real problem is that hosted providers basically see everything about you over time. Each prompt alone might seem harmless, but the whole history adds up to a pretty detailed profile.
The way I'd think about it: run a local model for anything it can handle. Keep that stuff on your machine entirely. Then for the stuff that actually needs a more powerful model, don't just dump your whole context in — strip it down to just the specific thing you need answered, like "given X, what's Y" with no surrounding story. The hosted model just sees a decontextualized task, not what's actually going on.
You basically become your own privacy layer. More friction, but the provider never gets the full picture.
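That routing logic is simple enough to sketch. Everything here is hypothetical — the model calls are placeholders (swap in your actual local runtime and hosted API), and the routing/stripping heuristics are stand-ins for whatever you'd actually use:

```python
# Sketch of a local-first privacy router. All names and heuristics
# here are made up for illustration -- not a real library or API.

def local_model(prompt: str) -> str:
    # Placeholder for a model running on your machine
    # (e.g. via llama.cpp or Ollama).
    return f"[local answer to: {prompt}]"

def hosted_model(prompt: str) -> str:
    # Placeholder for a call to a hosted provider.
    return f"[hosted answer to: {prompt}]"

def needs_big_model(prompt: str) -> bool:
    # Crude stand-in heuristic: send long/complex prompts to the
    # hosted model. A real setup might ask the local model to decide.
    return len(prompt.split()) > 200

def decontextualize(prompt: str) -> str:
    # Strip the surrounding story down to the bare "given X, what's Y"
    # question before it leaves your machine. Here it's a no-op
    # placeholder; in practice the local model could do the rewrite.
    return prompt

def ask(prompt: str) -> str:
    if not needs_big_model(prompt):
        # Handled entirely on-device; nothing is sent out.
        return local_model(prompt)
    # Provider only ever sees the stripped, decontextualized task.
    return hosted_model(decontextualize(prompt))
```

The interesting design choice is `decontextualize`: the whole point is that the context-stripping happens locally, so by the time anything reaches the provider it's already been reduced to an isolated question.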