
Great share, this is quite impressive. The system doesn't require much training input to go from blank slate to conversation-capable (1587 input sentences -> 521 output sentences; see Appendix S1 for examples). This high learning efficiency might suggest that its architecture resembles human language processing: at the low level it's still neurons, but organised and connected in a specific, planned way.

A key point is that the central executive (the core component) is itself a trainable neural network: rather than relying on fixed rules, it learns to generate the "mental actions" that control the flow of data between the slave systems (such as short-term and long-term memory). This is what allows the system to generalise.
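To make that concrete, here's a minimal sketch (my own illustration, not the paper's actual code) of the idea: a small learnable controller that scores a set of "mental actions" for routing data between slave memory systems, and updates its weights from reward instead of following hand-written rules. All names here (`CentralExecutive`, the action list, the linear policy) are hypothetical.

```python
import random

# Hypothetical mental actions: which slave system to read from or write to.
ACTIONS = ["read_stm", "read_ltm", "write_stm", "write_ltm"]

class CentralExecutive:
    """A minimal trainable controller: a linear policy over mental actions."""

    def __init__(self, n_features, seed=0):
        rng = random.Random(seed)
        # One learnable weight vector per action.
        self.weights = {
            a: [rng.uniform(-0.1, 0.1) for _ in range(n_features)]
            for a in ACTIONS
        }

    def choose_action(self, state):
        # Score each action against the current state and pick the best.
        scores = {
            a: sum(w * x for w, x in zip(ws, state))
            for a, ws in self.weights.items()
        }
        return max(scores, key=scores.get)

    def update(self, state, action, reward, lr=0.1):
        # Learned, not fixed: nudge the chosen action's weights toward
        # states in which that action earned reward.
        self.weights[action] = [
            w + lr * reward * x for w, x in zip(self.weights[action], state)
        ]

# Usage: choose a routing action for a toy state, then reinforce it.
exec_net = CentralExecutive(n_features=3)
state = [1.0, 0.0, 0.5]
action = exec_net.choose_action(state)
exec_net.update(state, action, reward=1.0)
```

The point of the sketch is just the control structure: the routing decision comes out of trainable weights, so the same executive can generalise to inputs it never saw during training.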

(Some of the training itself doesn't seem to be that similar to training a human child, though. It's instead tailored to the system's architecture.)

I'm excited to see how much richer this and similar systems will become, as researchers improve the neural architecture and processing efficiency. Will we see truly human-like language and reasoning systems sooner than expected?
