We need this kind of insight for the current wave of AI hype. Once again we're seeing a lot of wishful thinking, a lot of aspirational labels (e.g. that DNNs are "thinking" up solutions), and the inevitable extrapolation of toy solutions on the assumption that they will scale to real-world scenarios (e.g. the fully autonomous driving fiasco). The hype does not correspond to reality.
However, we're not even at a point yet where we can articulate specifically what the physical process of thinking _is_ - much less reproduce simplified artificial versions of it.
To imply that we've somehow captured the essence of thinking in a DNN - and that it just needs to get bigger and more complex - is exactly the kind of thing this guy is mocking (deservedly so).
Of course, this doesn't apply to knee-jerk reactions. I don't carry on an inner monologue in order to realize "that thing over there is a stop sign".
A lot of what people call "Artificial Intelligence" these days would be much better described as "Artificial Knee-Jerk Reactions".
>As McDermott says, “The whole problem is getting the hearer to notice what it has been told. Not ‘understand,’ but ‘notice.’” Suppose that instead the physicist told you, “Light is made of little curvy things.” Would you notice any difference of anticipated experience?
>How can you realize that you shouldn’t trust your seeming knowledge that “light is waves”? One test you could apply is asking, “Could I regenerate this knowledge if it were somehow deleted from my mind?”
LessWrong is great as intellectually stimulating entertainment, but I don't think it's actually any good at making people more rational.