
Almost all of the concerns in the paper are active research topics, and each already has partial solutions that use some sort of deep learning approach. Depending on your viewpoint, you could call some of these approaches hybrid solutions, but that's really just a matter of interpretation. No one is denying that the stated concerns are valid. But no one would claim, either, that the knowledge gained from deep learning research so far won't be useful in the future. Maybe some aspects will need more radical new ideas, but I doubt that future methods will use nothing from the current ones.

E.g.:

3.1. Deep learning thus far is data hungry. First, you could argue that at a low level, an animal or human also receives quite a lot of visual and audio input, so it's data hungry as well. Then, you could argue that evolution already did some sort of pretraining/pre-wiring that helps, using millions of years of data. Related to this are unsupervised learning and reinforcement learning. As for learning from small amounts of data, one-shot, zero-shot and few-shot learning are active research topics (see the few-shot sketch after this list). Meta-learning is related as well.

3.2. Deep learning thus far is shallow and has limited capacity for transfer. Transfer learning, meta-learning and multi-task learning are active research areas that deal with this (see the transfer-learning sketch below).

3.3. Deep learning thus far has no natural way to deal with hierarchical structure. There are various approaches to this as well; it is an active research area.

3.4. Deep learning thus far has struggled with open-ended inference. This is also an active research area.

3.5. Deep learning thus far is not sufficiently transparent. This too is an active research area. And then, you could argue that the biological brain suffers from the same problem.

3.6. Deep learning thus far has not been well integrated with prior knowledge. This is also an active research area.

Etc.
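
To make the few-shot idea from 3.1 concrete, here's a minimal sketch in the spirit of prototypical networks (Snell et al., 2017): classify queries by the nearest class-mean in an embedding space, so only a handful of labeled examples per class are needed. The embedding function and the random data here are hypothetical stand-ins, not anything from the paper:

    import torch

    def prototypes(support_x, support_y, embed, n_classes):
        """Mean embedding per class, computed from a small labeled support set."""
        z = embed(support_x)
        return torch.stack([z[support_y == c].mean(dim=0) for c in range(n_classes)])

    def classify(query_x, protos, embed):
        """Assign each query to the class with the nearest prototype."""
        dists = torch.cdist(embed(query_x), protos)  # shape: (n_queries, n_classes)
        return dists.argmin(dim=1)

    # Hypothetical usage: 3 classes, 5 labeled examples each, identity embedding.
    embed = lambda x: x
    support_x = torch.randn(15, 8)
    support_y = torch.arange(3).repeat_interleave(5)
    protos = prototypes(support_x, support_y, embed, n_classes=3)
    preds = classify(torch.randn(4, 8), protos, embed)

In the real thing, the embedding is a deep net trained episodically; the point is just that classification at test time needs very little labeled data.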
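
And for the transfer learning mentioned in 3.2, the everyday recipe is to reuse a pretrained backbone and train only a new head. A minimal PyTorch/torchvision sketch, assuming those libraries are installed; the 10-class target task and the learning rate are made up for illustration:

    import torch
    import torch.nn as nn
    from torchvision.models import resnet18, ResNet18_Weights

    # Start from ImageNet-pretrained weights: the transferred "prior knowledge".
    model = resnet18(weights=ResNet18_Weights.DEFAULT)

    # Freeze the pretrained backbone so only the new head gets trained.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final classifier with a fresh head for the (hypothetical) new task.
    model.fc = nn.Linear(model.fc.in_features, 10)

    # Only the new head's parameters go to the optimizer.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)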




In some of those cases, the active research has been going on for as long as deep learning itself: for instance, one-shot learning comes from the '90s, if memory serves, as does transfer learning ('93, Wikipedia says). My hunch is that in such cases only mediocre solutions exist.

And of course, just because there's research in a given area doesn't mean that progress will necessarily be made. For example, research on semantics has been going on since the dawn of AI and we're not even close yet.

Personally, I think it's always good to have people pointing out the limitations of a technique. Minsky and Papert caused a lot of consternation with Perceptrons, but without that, who knows when ANN researchers would have gotten off their butts and tried to solve real problems.



