
Deep Learning: A Critical Appraisal [pdf] - sarosh
https://arxiv.org/ftp/arxiv/papers/1801/1801.00631.pdf
======
vadimberman
> deep learning must be supplemented by other techniques if we are to reach
> artificial general intelligence

I don't think anyone major ever disputed that.

Having said that, a thousand times yes to the author's concerns. Deep learning
is AI's cryptocurrency in terms of being overhyped, although its main
proponents are not to blame for that.

~~~
bitL
Why do you think it's overhyped? It pushed state-of-the-art results in quite a
few difficult domains by quite a high margin; it's deservedly praised. Or do
you just dislike that you can't really understand what is going on inside,
despite it using simple math and primitive non-linear optimization, which makes
it "conceptually" inferior and not as "tasty" as other ML methods where we can
actually prove something?

~~~
yorwba
Deep learning is especially overhyped by articles that describe it as "just
like the human brain". It's a useful technique that lets you skip lots of
feature engineering by just letting the classifier learn its own features, but
it is also really easy to project abilities onto neural networks that they
don't really have.

Deep learning is not magic: for every network architecture that beats the
state of the art, there are a hundred very similar ones that completely fail,
run too slowly, or don't fit into GPU memory ... and the only way we know to
get improvements is to fiddle with the hyperparameters until everything works
out.
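The "fiddling" described above is often made at least semi-systematic with random search. A minimal, hypothetical sketch; `score()` here is a toy stand-in for a full train-and-validate run, which in practice dominates the cost:

```python
import random

def score(lr, width):
    # Toy stand-in objective with a known best region near
    # lr=0.01, width=128; a real run would train a network
    # and return its validation accuracy.
    return -((lr - 0.01) ** 2) * 1e4 - ((width - 128) / 128) ** 2

random.seed(0)
best = None
for _ in range(50):
    cfg = {"lr": 10 ** random.uniform(-4, -1),  # sample on a log scale
           "width": random.choice([32, 64, 128, 256, 512])}
    s = score(**cfg)
    if best is None or s > best[0]:
        best = (s, cfg)

print(best[1])  # best configuration found among 50 random trials
```

Sampling the learning rate on a log scale is the usual trick, since its useful values span orders of magnitude.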

------
starchild_3001
This is somewhat of an opinion piece. We need more articles like it to
counterbalance the "AI is the new electricity" crowd. Hyping deep learning
isn't healthy.

------
albertzeyer
Almost all the concerns in the paper are active research topics and have
partial solutions that use some sort of deep learning approach. Depending on
your viewpoint and interpretation, you could say that some of these approaches
are hybrid solutions, but that is really just a matter of interpretation. No
one is denying that the stated concerns are valid. But also, no one would say
that the knowledge gained from current deep learning research will be useless
in the future. Of course, some aspects may need radically new ideas, but I
doubt that future methods will use nothing from the current ones.

E.g.:

3.1. Deep learning thus far is data hungry. First, you could argue that at a
low level, an animal or human also gets quite a lot of visual and audio input,
so it is data hungry as well. Then, you could argue that evolution already did
some sort of pretraining/pre-wiring which helps, using millions of years of
data. Related to this are the topics of unsupervised learning and
reinforcement learning. As for learning from small amounts of data, there are
the active research topics of one-shot, zero-shot, and few-shot learning.
Meta-learning is related as well.
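As a concrete illustration of the few-shot idea: a nearest-class-mean ("prototype") classifier over frozen pretrained features is a common few-shot baseline. This is a hypothetical sketch, and the "pretrained" encoder is faked with a fixed random projection just for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 4))  # stand-in for a frozen pretrained encoder

def embed(x):
    return np.tanh(x @ W)  # fixed features, no training at test time

# One labeled example ("shot") per class is enough to build prototypes.
support_x = np.stack([np.full(16, 1.0), np.full(16, -1.0)])
support_y = np.array([0, 1])
prototypes = np.stack([embed(support_x[support_y == c]).mean(axis=0)
                       for c in (0, 1)])

def classify(x):
    # Predict the class whose prototype is nearest in embedding space.
    d = np.linalg.norm(embed(x)[None, :] - prototypes, axis=1)
    return int(d.argmin())

print(classify(np.full(16, 0.9)))   # near class 0's single example -> 0
print(classify(np.full(16, -0.8)))  # near class 1's single example -> 1
```

The point is that all the learning capacity sits in the (here faked) pretrained embedding; the per-class adaptation needs only one example.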

3.2. Deep learning thus far is shallow and has limited capacity for transfer.
Transfer learning, meta-learning, and multi-task learning are active research
areas that deal with this.

3.3. Deep learning thus far has no natural way to deal with hierarchical
structure. There are various approaches also for this. This is also an active
research area.

3.4. Deep learning thus far has struggled with open-ended inference. This is
also an active research area.

3.5. Deep learning thus far is not sufficiently transparent. This, too, is an
active research area. And you could argue that the biological brain suffers
from the same problem.

3.6. Deep learning thus far has not been well integrated with prior knowledge.
This is also an active research area.

Etc.

~~~
YeGoblynQueenne
In some of those cases, the active research has been going on for as long as
deep learning itself; for instance, one-shot learning comes from the '90s, if
memory serves, and so does transfer learning ('93, according to Wikipedia). My
hunch is that in such cases only mediocre solutions exist.

And of course, just because there's research in a given area doesn't mean that
progress will necessarily be made. For example, research on semantics has been
going on since the dawn of AI and we're not even close yet.

Personally, I think it's always good to have people pointing out the
limitations of a technique. Minsky and Papert caused a lot of consternation
with Perceptrons, but without that, who knows when the ANN researchers would
have gotten off their butts and tried to solve real problems.

------
irickt
More context here:
[https://news.ycombinator.com/item?id=16083325](https://news.ycombinator.com/item?id=16083325)

------
DrNuke
Different perspectives and research backgrounds converging on the same limits
of a given tool is very good for defining a boundary while containing the
hype. More generally, it still seems inefficient (and very risky from a
regulator's point of view) to deploy full AI agents in dynamic, imperfect,
human environments, e.g. self-driving cars in ordinary traffic.

------
nl
This isn't a great paper (as you can tell by how often the author cites
himself).

It isn't really worth responding to - it's either attacking claims that were
never made, or so outrageously wrong that it appears to be trolling.

~~~
srirachahot
Most academics cite their own work... That’s not a quality marker, it’s the
norm.

Care to give more info on your second paragraph?

