
> Deep learning has succeeded tremendously with perception in domains that tolerate lots of noise (audio/visual). Will those successes continue with perception in domains that are not noisy (language)?

I wonder if it's just a different kind of "noise": higher-level and more structured.

> My guess is we may need a fundamental breakthrough in a newfangled hierarchical learning system that is better suited for language to “solve” NLP.

It seems fairly evident that there are many hierarchies inside the brain, each level working with the outputs of lower-level processing units. In that sense, something like AlphaGo is hierarchy-poor - it has only a few networks, loosely coupled through a decision mechanism.

But the brain probably implements a "networks upon networks" model, that may also include hierarchical loops and other types of feedback.

I think that to have truly human-level NLP, we'd have to simulate reasonably closely the whole hierarchy of meaning, which in turn is produced by the whole hierarchy of neural aggregates.
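To make the "networks upon networks" idea concrete, here's a minimal toy sketch (not any real brain model or AlphaGo internals - all names and dimensions are made up for illustration): two low-level units feed a higher-level unit, and a top-down feedback projection modulates the input on each subsequent pass, forming the kind of hierarchical loop described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(in_dim, out_dim):
    """One processing unit: a fixed random linear map plus a tanh nonlinearity."""
    W = rng.standard_normal((out_dim, in_dim)) * 0.1
    return lambda x: np.tanh(W @ x)

# Hypothetical stack: two low-level units whose concatenated outputs
# feed a higher-level unit, plus a top-down feedback projection.
low_a = unit(8, 4)
low_b = unit(8, 4)
high = unit(8, 3)       # consumes the concatenated low-level outputs
feedback = unit(3, 8)   # projects the high-level state back to input space

def run(x, passes=3):
    """Iterate bottom-up processing with top-down feedback mixed into the input."""
    h = np.zeros(3)
    for _ in range(passes):
        x_mod = x + 0.5 * feedback(h)  # feedback modulates the next pass
        h = high(np.concatenate([low_a(x_mod), low_b(x_mod)]))
    return h

out = run(rng.standard_normal(8))
print(out.shape)  # (3,)
```

The point of the loop is that the high-level state changes what the low-level units see, which is the qualitative property the comment attributes to the brain; obviously real hierarchies would be learned, recurrent, and far deeper.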



