Nothing would stop us from simulating human brain-like (or analogously powerful) structures to build a machine that genuinely understands natural language. I'm arguing that we can't just learn those structures by statistical optimization techniques though.

If it turns out that the easiest, or even the only, means of doing this is emulating the human brain, then it's entirely possible that we inherit a whole new set of constraints and dependencies, such that world-simulation and an embodied mind are required for such a system to learn. If this turns out not to be the case, and there's some underlying principle of language we can emulate (the classic "airplanes don't fly like birds" argument), then it may be that text is enough. But that rests on a new assumption: that our system came pre-equipped to learn language, rather than manufacturing an understanding from whole cloth. That the model weights were pre-initialized to specific values.

If there is an innate language structure in the brain, then we know it's possible to develop such a structure by statistical optimization, since that's exactly what evolution did, no?