fforflo 4 months ago



Whatever you're doing with AI is not going to be fast enough to use in my jq pipelines, not to mention the data is likely sensitive and I'm not sending it off anywhere for processing.


My natural assumption would be that the AI helps write the script, but the execution itself stays plain old dumb execution.
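For illustration, a minimal sketch of that workflow; `llm` here is just a placeholder for whatever LLM CLI (local or remote) you prefer, not a specific tool:

    # Ask a model to *write* the filter once; after that the pipeline
    # is plain jq with no AI in the loop.
    filter=$(llm "Write a jq filter that extracts .name from every array element. Output only the filter.")
    jq "$filter" data.json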


It's actually the other way around: standard jq, but with AI capabilities.
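To make the distinction concrete, a purely hypothetical sketch of what that can look like; the `jq-with-ai` binary and the `llm(...)` filter are invented for illustration and are not the project's actual API:

    # Hypothetical: an otherwise standard jq program where a single
    # filter calls out to a model; everything else evaluates as normal jq.
    cat reviews.json | jq-with-ai '.[] | {id, verdict: (.text | llm("one-word sentiment"))}'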


Not all LLMs are remote. You can do just fine with local ones.


While this is true, almost all are remote. And the ones that are local typically mention it, since it's an important differentiator.

There are so many projects that just call out to the OpenAI API even though they say something silly like “99% local.”

For me, when I'm filtering projects, I look for authors who describe their work as all on-device pretty early on.

Not that calls to cloud AI are bad or anything; they just have very different use cases.


> There are so many projects that just call out to the OpenAI API even though they say something silly like “99% local.”

That used to be the case for a lot of business-facing products that wanted to capitalize on the hype quickly a year ago, but I think the dust has settled (?)

For developer tools, however, things like llamafile are pretty much the standard. Not to mention the pain of maintaining multiple API keys, different response formats, etc.
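For reference, llamafile serves an OpenAI-compatible endpoint on localhost, so one request format covers both local and remote backends. A rough sketch (the exact flags and whether the model field is honored vary by llamafile version):

    # Start a llamafile in server mode; it exposes an OpenAI-compatible
    # API at http://localhost:8080/v1 by default.
    ./mistral-7b-instruct-v0.2.Q4_0.llamafile --server --nobrowser &

    # Query it with the same request shape you'd send to a cloud API.
    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "LLaMA_CPP",
        "messages": [{"role": "user", "content": "Summarize: jq is a JSON processor."}]
      }'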


In my experience, local LLMs are slower to produce output and less accurate. I only use remote LLMs at this point, and they're getting cheaper by the day thanks to significant competition.


I use jq extensively, and my jq commands often stretch multiple lines.
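For instance, an illustrative multi-line filter of the kind described (field names made up):

    # Group log events by level and count them.
    jq '
      [ .events[]
        | select(.level != "debug")
        | {level, msg} ]
      | group_by(.level)
      | map({level: .[0].level, count: length})
    ' logs.json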

But I haven't stuffed jq into a production pipeline.

I worked with Perl for years. I don't need to go back to that.

Not sure I fit the demographic after all.



