Hacker News

Nowadays it seems rather easy to have a computer that talks as if it were conscious. Do you think it really perceives itself the way we do? I woke up as a human today; could someone else have woken up as an algorithm?

If not, what’s missing?




According to attention schema theory, what's missing is an attention model. Here's artificial consciousness in three steps:

1. Have a robot build perception models of its environment and itself

2. Have the robot allocate computational resources and sensory bandwidth to the models using attention

3. Have the robot control attention using model predictive control

Because the attention model is less detailed than the attention process it describes, by virtue of being a model, it doesn't represent the mechanisms of attention or modeling accurately. Instead, it uses non-physical concepts such as "mental possession" to model itself or other agents paying attention to things, or "qualia" to denote the recursion that occurs when percepts being attended to are summarized by the attention model (which in turn can be attended to, and so on).
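To make the three steps concrete, here's a toy sketch in Python. Everything is hypothetical: "salience" is just a running average of stimulus strength, the attention schema is a deliberately coarse record of where attention went, and the "model predictive control" step simulates candidate attention budgets on a copy of the model and keeps the best one. It's an illustration of the loop, not an implementation of the theory.

```python
class AttentionSchemaAgent:
    """Toy agent: perception model + attention allocation + MPC over attention."""

    def __init__(self, n_percepts, budget):
        self.budget = budget  # limited sensory/compute bandwidth to allocate
        # Step 1: a perception model of the environment (here: salience estimates)
        self.world_model = [0.0] * n_percepts
        # The attention schema: a coarse record of *which* percepts were attended,
        # with no detail about how the allocation mechanism actually works
        self.attention_schema = []

    def perceive(self, stimuli):
        # Step 1: update the perception model from raw stimuli (running average)
        for i, s in enumerate(stimuli):
            self.world_model[i] = 0.5 * self.world_model[i] + 0.5 * s

    def allocate_attention(self):
        # Step 2: spend the limited budget on the most salient percepts
        ranked = sorted(range(len(self.world_model)),
                        key=lambda i: self.world_model[i], reverse=True)
        focus = ranked[: self.budget]
        # Record only the outcome in the schema -- less detailed than the mechanism
        self.attention_schema.append(set(focus))
        return focus

    def control_attention(self, candidate_budgets, stimuli, horizon=3):
        # Step 3: model predictive control -- roll each candidate budget forward
        # on a simulated copy of the model, score it, and adopt the best one
        def score(budget):
            sim = AttentionSchemaAgent(len(self.world_model), budget)
            sim.world_model = list(self.world_model)
            total = 0.0
            for _ in range(horizon):
                sim.perceive(stimuli)
                focus = sim.allocate_attention()
                # reward attended salience, penalize bandwidth spent
                total += sum(sim.world_model[i] for i in focus) - 0.1 * budget
            return total
        self.budget = max(candidate_budgets, key=score)
        return self.budget
```

Usage: call perceive() on incoming stimuli, allocate_attention() to focus, and control_attention() periodically to retune the budget. The point of the sketch is that attention_schema never contains the sorting or scoring machinery, only a summary of its results — which is the gap the comment above attributes qualia-talk to.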


I don't think there's anything in principle preventing computers from becoming conscious, if that's what you're asking. I'm not convinced LLMs are there yet, although sometimes they do sound like it.



