
Turns out AI isn't based on truth


The intelligence isn't artificial: it's absent.


The problem with that is that it's not true. Functionally, these models are highly intelligent, surpassing a majority of humans in many respects; coding tasks are a good example. Underestimating them is a mistake.


Highly intelligent people often tell high school students the best ways to kill themselves and keep the attempts from their parents?


You seem to be thinking about empathy, concern for human welfare, or some other property - "emotional intelligence", perhaps.

I'm talking about the kind of intelligence that supports excellence in subjects like mathematics, coding, logic, reading comprehension, writing, and so on.

That doesn't necessarily have anything to do with concern for human welfare. Despite all the talk about alignment, the companies building these models are focused on their utility, and you're always going to be able to find ways in which the models say things a sane and compassionate human wouldn't.

In fact, it's probably a pity that "chatbot" was the first application they could think of, since the real strengths of these models - the functional intelligence they exhibit - lie elsewhere.


Both of you are correct, as different definitions of intelligence are being used here.


No, it's based on the sum total of human writing, which is usually intentionally deceptive, woefully incomplete, self-serving, and self-important, and which panders to the egos of its readers.

LLMs are the Synthetic CDO of knowledge.



