Whatever you're doing with AI is not going to be fast enough to use in my jq pipelines, not to mention it's likely sensitive and I'm not sending it off anywhere for processing.
> There’s so many projects that just seem to call out to the OpenAI api even though they say something silly like “99% local.”
That used to be the case a year ago for lots of business-facing products that wanted to capitalize on the hype quickly, but I think the dust has settled (?)
For developer tools, however, things like llamafile are pretty much the standard.
Not to mention the pain of maintaining multiple API keys, different response formats, etc.
In my experience, local LLMs are slow to produce output and less accurate. I only use remote LLMs at this point, and they are getting cheaper by the day thanks to significant competition.