Neural networks are going to make huge inroads into the AI language problem simply by exposing the model to example after example of words in varying contexts. But I wonder if the real problem is getting those networks to let go of unnecessary data. Humans rely on excited neurons to recognize patterns, but our neurons let a lot of sensory input pass us by to keep from getting bogged down in the details. Are the image-recognition AIs described in the article capable of selective attention, or will they get bogged down in a morass of information, trying to pattern-match every word against every image and context they learn?
Yes, they are: https://indico.io/blog/sequence-modeling-neural-networks-par... https://github.com/harvardnlp/seq2seq-attn
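Roughly, "attention" here means the network learns a weighting over its inputs and effectively ignores the rest. A minimal sketch of dot-product attention in plain numpy, with toy names I made up for illustration; the linked seq2seq-attn project does this inside a full encoder-decoder, so treat this as the core idea rather than that implementation:

    import numpy as np

    def softmax(x, axis=-1):
        # subtract the max for numerical stability
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def dot_product_attention(query, keys, values):
        # query:  (d,)    current decoder state
        # keys:   (n, d)  one vector per input position
        # values: (n, m)  what actually gets mixed together
        scores = keys @ query              # similarity of the query to each position
        weights = softmax(scores)          # attention distribution, sums to 1
        return weights @ values, weights   # weighted summary of the inputs

    # toy example: the query matches the second key, so the second value dominates
    keys = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
    values = np.eye(3)
    context, w = dot_product_attention(np.array([0.0, 1.0]), keys, values)
    print(w)  # highest weight on the second position

The "selective" part is the softmax: positions with low scores get weights near zero, so the network is free to learn to discard most of its input at each step.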
I wonder if there's AI-focused research that analyses how children learn language, especially kids picking up a "new" (second) language.
Paradox? In the extended reals, where infinity is the biggest number, infinity plus one gets you infinity, just as you'd expect. In, say, the ordinal numbers, there are many distinct infinite quantities and every one of them has a successor, so it doesn't make sense to talk about "the biggest number" at all.
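In symbols (just the standard conventions for each system):

    % Extended reals: a single absorbing infinity, so adding one changes nothing.
    \infty + 1 = \infty
    % Ordinals: \omega is infinite, yet \omega + 1 is strictly bigger,
    % and the ladder never tops out.
    \omega < \omega + 1 < \omega + 2 < \cdots < \omega \cdot 2 < \cdots < \omega^{\omega} < \cdots
    % Ordinal addition isn't even commutative:
    1 + \omega = \omega \neq \omega + 1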
"Infinity plus one" is a paradox to a child who believes that infinity is a finite number. When they realize what infinity actually means, the paradox will be resolved.