
> How is that any different though from regular false or fabricated information gleaned from Google, social media or any other source?

It lowers the barrier to essentially nothing. Before, you'd have to do real work to generate two pages of (superficially) plausible-sounding nonsense. If it was complete gibberish, people would pick up on it very quickly.

Now you can just ask some chatbot a question and within a second you have an answer that looks correct. One has to actually delve into it and fact-check the details to determine that it's horseshit.

This enables idiots like the redditor quoted by the parent to generate horseshit that looks fine to a layman. For all we know, the redditor wasn't being malicious, just an idiot who blindly trusts whatever the LLM vomits up.

It's not the users who are to blame here; it's the wave of AI companies riding the sweet capital, who are malicious in not caring one bit about the damage their rhetoric is causing. They hype LLMs as some sort of panacea, as expert systems that can shortcut or replace proper research.

This is the fundamental danger of LLMs. They have crossed the uncanny valley. It takes a person of decent expertise to spot the mistakes they generate, and yet the models are being sold to the public as robust tools. So the public tries the tools and, in the absence of being able to detect the bullshit, regurgitates the output as fact.

And then this gets compounded by these "facts" being fed back in as training material to the next generation of LLMs.

Oh, yeah, I’m pretty sure they weren’t being malicious; why would you bother, for something like this? They were just overly trusting of the magic robot, because that is how the magic robot has been marketed. The term ‘AI’ itself is unhelpful here; if it were marketed as a plausible text generator, people might be more cautious, but as it is they’re led to believe it’s thinking.


