Hacker News

Aaronson has an interesting proposal for addressing the Chinese Room problem that I think makes a lot of sense. The idea is that the Chinese Room intuitively doesn't exhibit understanding because it's a constant-time, exponential-memory algorithm (a lookup table), whereas the algorithm that generated the entries in the Chinese Room's table (a human) is a super-constant-time, sub-exponential-memory algorithm, which leaves room for consciousness to emerge. So the only reason the Chinese Room is philosophically confounding is that it adds a layer of indirection (a cache table), which obviously can't be conscious, over the algorithm that actually might be conscious and that generates the table entries. http://www.scottaaronson.com/papers/philos.pdf
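A toy sketch of the contrast (my own illustration, not from Aaronson's paper): a lookup table that reproduces an algorithm's behavior over all prompts up to length n answers each query in constant time, but its size grows exponentially in n, while the generating algorithm needs only time and memory proportional to the prompt:

```python
from itertools import product

def respond(prompt: str) -> str:
    """The generating algorithm: does real work per query.
    (Reversing the prompt is a hypothetical stand-in for whatever
    nontrivial computation produces a reply.)"""
    return prompt[::-1]

# Build the room's rulebook by enumerating every prompt up to length n.
# The table holds |alphabet|^1 + ... + |alphabet|^n entries -- exponential
# in n -- while respond() itself uses only O(len(prompt)) time and memory.
alphabet = "ab"
n = 3
table = {
    "".join(p): respond("".join(p))
    for k in range(1, n + 1)
    for p in product(alphabet, repeat=k)
}

def room(prompt: str) -> str:
    """The Chinese Room: a constant-time lookup, no computation at query time."""
    return table[prompt]

assert room("aba") == respond("aba")  # identical behavior...
print(len(table))                     # ...but already 2 + 4 + 8 = 14 entries at n = 3
```

The room and the algorithm are behaviorally indistinguishable, but all the interesting computation happened when the table was filled in, which is where Aaronson suggests the interesting questions about understanding belong.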

