
“Bullshit” is the _perfect_ term. Philosopher Harry Frankfurt wrote a book called On Bullshit where he defines the term as speech or writing intended to persuade without regard for the truth. This is _exactly_ what LLMs do. They produce text that tries to reproduce the average properties of texts in their training data and the user preferences encoded in their RLHF training. None of that has anything to do with the truth. At best you could say they are engineered to try to give users what they want (i.e. what the engineers building these systems think we want), which is, again, a common motive of bullshitters.



"Bullshit" doesn't work because it requires a psychological "intent to persuade," but LLMs are not capable of having intentions. People intentionally bullshit because they want to accomplish specific goals and adopt a cynical attitude towards the truth; LLMs incidentally bullshit because they aren't capable telling the difference between true and false.

Specifically: bullshitters know they are bullshitting and hence they are intentionally deceptive. They might not know whether their words are false, but they know that their confidence is undeserved and that "the right thing to do" is to confess their ignorance. But LLMs aren't even aware of their own ignorance. To them, "bullshitting" and "telling the truth" are precisely the same thing: the result of shallow token prediction, by a computer which does not actually understand human language.

That's why I prefer "confabulate" to "bullshit" - confabulation occurs when something is wrong with the brain, but bullshitting occurs when someone with a perfectly functioning brain takes a moral shortcut.


I don’t like “confabulate” because it has a euphemistic quality. I think most people hear it as a polite word for lying (no matter the dictionary definition). And this is a space that needs, desperately needs, direct talk that regular people can understand. (I also think confabulate implies intention just as much as bullshit does to most people.)


You’re right about the model’s agency. To be precise, I’d say that LLMs spew bullshit but that the bullshitters in that case are those who made the LLMs and claimed (in the worst piece of bullshit in the whole equation) that they are truthful and should be listened to.

In that sense you could describe LLMs as industrial-strength bullshit machines. The same way a meat processing plant produces pink slime by the design of its engineers, so too do LLMs produce bullshit by the design of theirs.



