I don't think anyone major ever disputed that.
Having said that, a thousand times yes to the author's concerns. Deep learning is AI's cryptocurrency in terms of being overhyped, although its main proponents are not to blame for that.
Seriously, why do we want to replace human intelligence? I get augmenting it with narrow forms of automation, but AGI is a different animal.
Deep learning is not magic: for every network architecture that beats the state of the art, there are a hundred very similar ones that completely fail, run too slowly, or don't fit into GPU memory ... and the only way we know to get improvements is to fiddle with the hyperparameters until everything works out.
The black-box part is also an issue that both the author of the paper and literally everyone else, including Google's Peter Norvig, are concerned about. But that's not related to the hype part.
Is it that obvious though?
Any good evidence that the human brain, on a planet-scale cluster, can't be modelled with DL?
Oren Etzioni has been shouting about knowledge representation for years (full disclosure: this is something I'm focused on, too).
3.1. Deep learning thus far is data hungry. First, you could argue that at a low level, an animal or human gets quite a lot of visual and audio input, so it's data hungry as well. Then, you could argue that evolution already did some sort of pretraining/pre-wiring which helps, using millions of years of data. Related to this are the topics of unsupervised learning and reinforcement learning. As for learning from small amounts of data, there are the active research topics of one-shot learning, zero-shot learning, and few-shot learning. Related is also meta-learning.
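To make the few-shot idea concrete, here's a minimal sketch of one-shot classification by nearest class prototype (in the spirit of prototypical networks). Everything here is synthetic and illustrative: the "embeddings" are just Gaussians, where a real system would use a pretrained encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an embedding space: in a real system these vectors
# would come from a pretrained encoder, not a Gaussian (illustrative only).
def sample(mean, n):
    return rng.normal(mean, 0.3, size=(n, 8))

support_a = sample(0.0, 1)   # one labelled example ("shot") per class
support_b = sample(2.0, 1)
queries_a = sample(0.0, 20)  # unlabelled points to classify
queries_b = sample(2.0, 20)

# Class prototype = mean of that class's support embeddings;
# with a single shot, it is just that one embedding.
proto_a = support_a.mean(axis=0)
proto_b = support_b.mean(axis=0)

def predict(x):
    # Assign each query to the nearest prototype (Euclidean distance).
    dist_a = np.linalg.norm(x - proto_a, axis=1)
    dist_b = np.linalg.norm(x - proto_b, axis=1)
    return np.where(dist_a < dist_b, "a", "b")

correct = np.concatenate([predict(queries_a) == "a",
                          predict(queries_b) == "b"])
acc = correct.mean()
print(f"one-shot accuracy: {acc:.2f}")
```

The point being: no gradient updates at test time at all — a single labelled example per class is enough once the embedding space is good.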
3.2. Deep learning thus far is shallow and has limited capacity for transfer. Transfer learning, meta-learning, and multi-task learning are active research areas which deal with this.
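As a toy illustration of the transfer idea (a synthetic sketch, not any particular library's API): keep a "pretrained" feature extractor frozen and train only a small new head on the target task. The frozen extractor here is just a fixed random projection; in practice it would be, e.g., a CNN trained on ImageNet.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a pretrained feature extractor: a fixed (frozen) random
# projection plus tanh. Illustrative only -- in real transfer learning
# these weights come from training on a large source task.
W_frozen = rng.normal(size=(32, 64)) / np.sqrt(32)

def features(x):
    return np.tanh(x @ W_frozen)   # frozen: never updated below

# Small labelled dataset for the *new* task.
X = rng.normal(size=(200, 32))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train only a new linear "head" on top of the frozen features
# (plain logistic regression by gradient descent).
F = features(X)
w = np.zeros(64)
b = 0.0
lr = 0.5
for _ in range(1000):
    p = 1 / (1 + np.exp(-(F @ w + b)))   # sigmoid of the head's logits
    w -= lr * F.T @ (p - y) / len(y)
    b -= lr * (p - y).mean()

acc = ((p > 0.5) == y).mean()
print(f"train accuracy, head-only training: {acc:.2f}")
```

Only 65 parameters get trained here; the appeal of transfer is exactly that the expensive part (the extractor) is reused, which is also why the "limited capacity for transfer" criticism matters when the source and target tasks differ too much.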
3.3. Deep learning thus far has no natural way to deal with hierarchical structure. There are various approaches to this as well; it is also an active research area.
3.4. Deep learning thus far has struggled with open-ended inference. This is also an active research area.
3.5. Deep learning thus far is not sufficiently transparent. This too is an active research area. And then, you could argue that the biological brain suffers from the same problem.
3.6. Deep learning thus far has not been well integrated with prior knowledge. This is also an active research area.
And of course, just because there's research in a given area doesn't mean that progress will necessarily be made. For example, research on semantics has been going on since the dawn of AI and we're not even close yet.
Personally, I think it's always good to have people pointing out the limitations of a technique. Minsky and Papert caused a lot of consternation back in the day with Perceptrons, but without that, who knows when the ANN researchers would have gotten off their butts and tried to solve real problems.
It isn't really worth responding to - it's either attacking claims which were never made, or so outrageously wrong it appears to be trolling.
Care to give more info on your second paragraph?