
And of course they don't know true and false, because they don't know anything in the cognitive sense of using innate emotional and logical reasoning (faulty or not) to come to a self-directed conclusion of their own.

They respond to prompts, scan through their data, and give answers based on the probabilities their programming dictates.

Humans seem to do the same thing much of the time, but beneath even the most half-baked human reasoning is a self-directed sense of the world that no LLM has. The AI woo on HN is quite strong, so many will probably disagree for all kinds of shallow semantic reasons, but even the creators of LLMs don't claim they possess anything resembling consciousness, which is necessary for understanding notions of truth and falsehood.




The only thing I take issue with here is 'programming'. These are mathematical models for probabilistic processes. They are not programmed. They are conceived and then optimized to fit a distribution.
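
For what it's worth, here's a minimal sketch of what "optimized to fit a distribution" means, in plain numpy with toy data (the model, data, and learning rate are all mine, purely for illustration). Nothing about the text is programmed in; there's only a loss that gradient steps drive down until the model's probabilities match the data:

    import numpy as np

    # Toy bigram next-character model: no hand-coded rules,
    # just a parameter table fit to the data distribution.
    text = "hello world"
    chars = sorted(set(text))
    idx = {c: i for i, c in enumerate(chars)}
    V = len(chars)

    # Training pairs: each character predicts the next one.
    xs = np.array([idx[c] for c in text[:-1]])
    ys = np.array([idx[c] for c in text[1:]])

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(V, V))  # logits[i, j] ~ P(next=j | current=i)

    for step in range(500):
        logits = W[xs]                               # (N, V), copy via fancy indexing
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)    # softmax
        # Cross-entropy gradient w.r.t. logits: probs - one_hot(ys)
        grad = probs.copy()
        grad[np.arange(len(ys)), ys] -= 1.0
        # Gradient step, accumulated into the rows of W that were used.
        np.add.at(W, xs, -0.1 * grad)

    # Inspect what the model learned for the context 'l':
    row = np.exp(W[idx["l"]] - W[idx["l"]].max())
    row /= row.sum()
    print({c: round(float(row[idx[c]]), 2) for c in chars})

After training, P(next | 'l') concentrates on 'l' and 'o' because that's what the data contained. The behavior falls out of the fit, not out of any rule someone wrote.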


Consciousness is the woo you accuse people of. There's no reason to think it's real or required for distinguishing truth from falsehood.


Do you not affirm your own ability to take self-directed positions (emotional or otherwise) on how you think the world is? And doesn't your assertion that consciousness isn't real undercut itself, since you are consciously favoring that very assertion?

GPT doesn't sit there pondering consciousness and deciding, emotionally, that it thinks it's bullshit. It most certainly doesn't then decide, of its own volition, to go out onto the web and comment this to others for the sake of debating them.

It doesn't sit there contemplating anything of its own volition unless it's asked to. Regardless of what consciousness really is at its heart (I admit that we still don't fully know), it's a distinct self-motivated thing that we can see in our human selves and which is seen in no LLM except as a simulation produced by specific prompts.


You are moving away from the topic, which is the question of whether AIs know true and false.

Your first statement was that "they don't know true and false, because they don't know anything in the cognitive sense of using innate emotional and logical reasoning (faulty or not) to come to a self-directed conclusion of their own."

You haven't yet proved that assertion.


I don't think consciousness is well defined, nor that it's required for producing true statements or for distinguishing them from false ones.

I can write a program that will print out an infinite number of non-repeating true statements.
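
A trivial sketch of such a program (the particular family of statements is my choice; any inexhaustible schema of truths would do):

    # Prints an unending stream of distinct true statements:
    # "n < n + 1" is true for every natural number n, and no
    # two lines are ever the same.
    from itertools import count

    for n in count():
        print(f"{n} < {n + 1}")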


> Consciousness is the woo you accuse people of.

Are you sure?

https://www.youtube.com/watch?v=O7O1Qa4Zb4s

> There's no reason to think it's real or required for distinguishing truth from falsehood.

We have no better a definition of what matter is than of what consciousness is. By that logic, is there also no reason to believe matter is real, or required to discern true from false?

As long as that's the case, I'd be a bit more careful with statements like these.

If we can be sure of one thing, it's that we know nothing.




