>> Neural nets represent one of our best attempts to model brains.
See now that's one of the misconceptions. ANNs are not modelled on the brain,
not any more and not ever since the poor single-layer Perceptron, which was
itself modelled after an early model of neuronal activation. What ANNs really
are is algorithms for optimising systems of functions, and the same description
covers things like Support Vector Machines and Radial Basis Function networks,
which don't even fit the usual multi-layer network diagram particularly well.
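To make the "it's function optimisation" point concrete, here's a toy sketch
(Python with NumPy and SciPy; the data, sizes and numbers are made up purely
for illustration): the same generic optimiser and the same loss get pointed at
two different parameter families, a one-hidden-layer MLP and an RBF network.
Swap the family, keep the loop; nothing brain-like is involved either way.

    # Toy sketch, illustration only: one optimiser, one loss, two parameter
    # families. Everything here (sizes, target, H) is an arbitrary choice.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X).ravel()                    # toy target function
    H = 10                                   # hidden units / basis functions

    def mlp(theta, x):
        # one hidden tanh layer plus a linear output, parameters in a flat vector
        W1 = theta[:H].reshape(1, H)
        b1 = theta[H:2*H]
        W2 = theta[2*H:3*H]
        return np.tanh(x @ W1 + b1) @ W2 + theta[3*H]

    def rbf(theta, x):
        # Gaussian bumps: centres, widths (kept positive via exp) and weights
        c = theta[:H]
        w = np.exp(theta[H:2*H])
        a = theta[2*H:3*H]
        return np.exp(-w * (x - c) ** 2) @ a

    def mse(model, theta):
        return np.mean((model(theta, X) - y) ** 2)

    # Same generic optimiser either way - the model is just a function family.
    for model, n_params in [(mlp, 3*H + 1), (rbf, 3*H)]:
        theta0 = rng.normal(scale=0.5, size=n_params)
        res = minimize(lambda t: mse(model, t), theta0, method="L-BFGS-B")
        print(model.__name__, "final MSE:", round(res.fun, 4))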
It's unfortunate that this sort of language and imagery is still used
abundantly, by people who should know better no less, but I guess "it's an
artificial brain" sounds more magical than "it's function optimisation". You
shouldn't let it mislead you though.
>> Taking a position that neural networks cannot ever result in strong AI is as blind as taking a position that they must.
I don't agree. It's a subject that's informed by a solid understanding of the
fundamental concepts - function optimisation, again. There's uncertainty because
there are theoretical limits that are hard to test in practice. For example, a
multi-layer perceptron with a single hidden layer can approximate any continuous
function to arbitrary accuracy given enough hidden units, while on the opposite
side, Gold's result says that superfinite classes of languages are _not_
identifiable in the limit from positive examples (not ANN-specific, but limiting
what any learning algorithm can do), etc. But the arguments on either side are,
well, arguments. Nobody is being "blind". People defend their ideas, is all.