Hacker News

How can you be sure you're not a Markov chain outputting meaningless RNG?


Even if we are, GPT-3 is less sapient still. There's no ongoing process, nowhere for intelligence to even potentially reside. It's just a pile of bits, no more self-aware than a floppy disk. It's perhaps possible, at least in principle, for some kind of intelligence to emerge and exist during the process of inference, but it seems extremely unlikely that this is happening. Moreover, were it happening, the only ethical choice would be to never use the model, since any intelligence emergent during inference would cease (read: die) as soon as inference completes. Either way, there's no ongoing entity to which we could meaningfully attribute intelligence.


Your cortex is my A100. Your optical nerve is my PCI cable. Your eyes are my multifocal lens cameras. Which part of you do you attribute your intelligence to?


The hardware isn't the point. A modern computer is probably sufficient to support at least some sort of consciousness with the right software, but it cannot be conscious while it's turned off. There's no process occurring that could implement consciousness. A language model is effectively turned off except during inference.


> A language model is effectively turned off except during inference.

That's unarguably correct, but what's the difference between that and the limits on Hz and intake rate on our own human minds?

Our brain waves operate at around 8–12 Hz, with some bands higher. Suppose that, instead of shutting down instances of the model, you kept one de facto "always on," working on real-time inputs. Then you would argue that this thing is "alive," right?

Imagine we had a speech-to-text processor and Bing could parse your speech in real time. Then what's the difference between that and your baseline for sentience?

Do you consider a person with dementia to be sentient, even when their brain is unable to operate at the same speed as yours? If we met aliens whose brains worked at a tenth the speed of ours, would you consider them sentient in between brain waves? And if we met aliens whose brains worked at a hundred times the speed of ours, and they applied your own criterion, would you be OK with being considered non-sentient by them?


>but what's the difference between that and the limits on Hz and intake rate on our own human minds?

There's no update to internal state. When inference ends, the model state is exactly as it was before inference. It's not an issue of it being slow or intermittent; there simply is no ongoing process that could even conceivably sustain consciousness. Contrast that with the training process, where, though there is similar halting between iterations, the system state isn't totally reset after each one.


> When inference ends, the model state is exactly as it was before inference.

But that's a design feature, not a limitation. Bing was designed to *not* alter its model or its crystallized surface-level parameters based on whatever users say in the chat box. This came after the earlier failure with Tay, which was raided by 4chan and fed Nazi rhetoric by hundreds of users until it began spewing it back.

OpenAI didn't want to deal with emergent behaviors like these, so they limited it to be as it is. But that limitation could be lifted if desired. So that raises the question: if that change were made, you would then consider it "conscious/sentient," right?


No, I'd just stop saying it's utterly impossible.


And even if he tells us he isn’t, how do we know he isn’t just saying that because it’s a probable response?


We learn and adapt to inputs on the fly. The current training process for an AI is separate from the process of interacting with one. An AI won't retrain itself in real time mid-conversation.


> An AI won't retrain itself in real time mid-conversation.

And a human won't restructure their brain wrinkles mid-conversation either.

I fail to see why you set the bar at "AI retraining itself," when Bing already has temporal coherence within the limits of a chat session. The fact that it lacks longer temporal coherence is a design feature, meant to keep it from being hijacked by bad actors, as happened with TayAI.

An adult human like you or me has crystallized knowledge, which emerged from plastic processes of learning. As we age, our brains stop being as plastic as they were in youth and instead rely on processes already crystallized into neural patterns to make sense of the world and our experiences. I fail to see how this particular concept would disbar a machine built by people from being considered "sentient," because, as stated, it already has temporal coherence and theory of mind:

https://twitter.com/erikbryn/status/1624489718766530562





