
I think it’s more likely we can train two neural networks: one to make the decision, and one to take the same inputs (or the same inputs plus the output of the first) and generate plausible language explaining the first one’s behavior. This seems to correspond to what we call consciousness, and frankly I doubt one system can accurately explain its own mechanism. People certainly can’t.
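To make the idea concrete, here is a minimal sketch of that two-network setup in PyTorch. Everything here is my own illustration of the architecture the comment describes, not a known implementation: the names (DecisionNet, ExplainerNet), the vocab size, and the choice to detach the decision logits are all assumptions.

    import torch
    import torch.nn as nn

    class DecisionNet(nn.Module):
        # Network 1: maps inputs to a decision (action logits).
        def __init__(self, in_dim, n_actions):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_actions)
            )

        def forward(self, x):
            return self.net(x)

    class ExplainerNet(nn.Module):
        # Network 2: sees the same inputs plus the first network's output,
        # and generates token logits for a plausible verbal rationale.
        def __init__(self, in_dim, n_actions, vocab, hidden=128):
            super().__init__()
            self.proj = nn.Linear(in_dim + n_actions, hidden)
            self.rnn = nn.GRU(hidden, hidden, batch_first=True)
            self.head = nn.Linear(hidden, vocab)

        def forward(self, x, decision_logits, seq_len=16):
            # Condition rationale generation on input + decision together.
            ctx = torch.relu(self.proj(torch.cat([x, decision_logits], dim=-1)))
            steps = ctx.unsqueeze(1).expand(-1, seq_len, -1)
            out, _ = self.rnn(steps)
            return self.head(out)  # (batch, seq_len, vocab)

    x = torch.randn(8, 32)                     # hypothetical batch of inputs
    decider = DecisionNet(32, 4)
    explainer = ExplainerNet(32, 4, vocab=1000)
    logits = decider(x)
    # detach() so the explanation loss never shapes the decision itself --
    # the explainer can only rationalize, not influence, the first network.
    rationale_logits = explainer(x, logits.detach())

The detach is the interesting design choice: it enforces the one-way relationship the comment implies, where the explainer produces plausible language about a decision process it cannot actually inspect or alter.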


