Deep Learning vs. Machine Learning vs. Pattern Recognition (quantombone.blogspot.com)
187 points by platz on Mar 22, 2015 | 13 comments



This is as good a post as any on the theory of why deep learning works so well, especially if you know a little about spin models from statistical mechanics:

https://charlesmartin14.wordpress.com/2015/02/26/why-does-de...

additional flavor: http://arxiv.org/pdf/1412.0233.pdf

and slides 16-22 of https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsd...


From the article linked in this post:

> Multiscale convolutional neural networks aren't that much different than the feature-based systems of the past. The first level neurons in deep learning systems learn to utilize gradients in a way that is similar to hand-crafted features such as SIFT and HOG. Objects used to be found in a sliding-window fashion, but now it is easier and sexier to think of this operation as convolving an image with a filter. Some of the best detection systems used to use multiple linear SVMs, combined in some ad-hoc way, and now we are essentially using even more of such linear decision boundaries. Deep learning systems can be thought of as multiple stages of applying linear operators and piping them through a non-linear activation function, but deep learning is more similar to a clever combination of linear SVMs than to a memory-ish Kernel-based learning system.

> Features these days aren't engineered by hand. However, architectures of Deep systems are still being designed manually -- and it looks like the experts are the best at this task. The operations on the inside of both classic and modern recognition systems are still very much the same. You still need to be clever to play in the game, but now you need a big computer.

> http://quantombone.blogspot.com/2015/01/from-feature-descrip...
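
To illustrate that "stages of linear operators piped through a non-linearity" view, here's a toy numpy sketch (the layer sizes, random weights, and ReLU choice are arbitrary picks for illustration, not any particular system):

    import numpy as np

    def relu(x):
        # Non-linear activation applied between the linear stages.
        return np.maximum(0.0, x)

    # Toy forward pass: each "stage" is a linear operator (matrix multiply
    # plus bias) piped through a non-linearity -- the view described above.
    rng = np.random.default_rng(0)
    layer_dims = [64, 32, 16, 10]          # arbitrary toy sizes
    weights = [rng.standard_normal((m, n)) * 0.1
               for m, n in zip(layer_dims[:-1], layer_dims[1:])]
    biases = [np.zeros(n) for n in layer_dims[1:]]

    x = rng.standard_normal(64)            # a fake input feature vector
    h = x
    for W, b in zip(weights, biases):
        h = relu(h @ W + b)                # linear operator -> non-linearity
    # h is now the output of a stack of linear decision boundaries,
    # loosely analogous to combining many linear SVMs.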


Yes,

It's worth saying that deep learning is very powerful at finding features but isn't adaptive, requiring careful hand-tuning for a given feature-set.

My question is, at what point does this start to resemble the situation with computer programs and their interfaces, where once a good-enough computer-based solution to a problem arises, people are forced to adapt to the solution rather than having the solution adapt to them (learning the irrational and counter-intuitive interfaces of program X, and then having that knowledge codified as a skill, etc.)?

I already find myself speaking in a chirpy, robotic voice when I am called by the chirpy robotic programs which may sometimes understand me.


I think what sets deep learning apart from other machine learning techniques is the ability to form a hierarchy of features by itself. Most of today's successful deep learning systems seem to focus on supervised learning. I certainly want to know more about CNNs and Caffe, but I also want to learn how to apply deep learning to unsupervised learning and reinforcement learning. Any paper/course suggestions?


You can look up the recently released paper on playing Atari with deep learning; that one is a reinforcement learning application.

When deep learning first took off, it was actually very focused on unsupervised learning, with stacked autoencoders and DBMs; the most well-known paper from around that time is probably the YouTube cat-recognition paper from Google/Stanford [0]. A lot of the "early" work (from around 2006-2010) is unsupervised learning as well; just look up Geoff Hinton's papers from that period.

[0]: http://static.googleusercontent.com/external_content/untrust...
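
To make the stacked-autoencoder idea concrete, here's a minimal numpy sketch of greedy layer-wise pre-training (the tied weights, squared-error loss, plain gradient descent, and toy sizes are all my own simplifications for illustration, not the setup from [0]):

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_autoencoder(X, hidden, epochs=50, lr=0.1):
        # Train one tied-weight autoencoder layer by plain gradient descent.
        n, d = X.shape
        W = rng.standard_normal((d, hidden)) * 0.1
        for _ in range(epochs):
            H = sigmoid(X @ W)             # encode
            R = H @ W.T                    # decode (tied weights, linear output)
            err = R - X                    # reconstruction error
            # Gradient of 0.5*||R - X||^2 w.r.t. W, through both the
            # encoder path (X @ W) and the decoder path (H @ W.T).
            dH = err @ W * (H * (1 - H))
            grad = X.T @ dH + err.T @ H
            W -= lr * grad / n
        return W

    # Greedy layer-wise "stacking": train a layer, encode the data,
    # then train the next layer on those codes.
    X = rng.standard_normal((200, 32))     # fake unlabeled data
    W1 = train_autoencoder(X, hidden=16)
    H1 = sigmoid(X @ W1)
    W2 = train_autoencoder(H1, hidden=8)
    # sigmoid(H1 @ W2) gives unsupervised second-layer features.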


This is not completely accurate. Deep learning at the moment simply means the use of "deep" architectures in neural networks. Graphical models, standard Bayesian hierarchical models, and the like all form hierarchies of features as well and are commonly practiced.


You want to look for something called self-organizing maps (SOMs).
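
Roughly: a SOM is a grid of units whose weight vectors get pulled toward the data, with the winning unit's grid neighbors dragged along, which is what produces the "map". A toy numpy sketch (grid size, decay schedules, and data are arbitrary choices for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    grid_w, grid_h, dim = 10, 10, 3
    som = rng.random((grid_w, grid_h, dim))    # unit weight vectors
    coords = np.stack(np.meshgrid(np.arange(grid_w), np.arange(grid_h),
                                  indexing="ij"), axis=-1)

    X = rng.random((500, dim))                 # toy data, e.g. RGB colors

    for t, x in enumerate(X):
        lr = 0.5 * np.exp(-t / 250)            # decaying learning rate
        radius = 3.0 * np.exp(-t / 250)        # shrinking neighborhood
        # Best-matching unit: grid cell whose weights are closest to x.
        dists = np.linalg.norm(som - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # Gaussian neighborhood around the BMU, measured on the grid.
        grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        influence = np.exp(-grid_d2 / (2 * radius ** 2))
        # Pull the BMU and its neighbors toward the sample.
        som += lr * influence[..., None] * (x - som)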


What does the author mean by "The Linux Kernel of tomorrow might run on Caffe"?


What Tom means, if I know him as well as I think, is not that the literal Linux kernel of tomorrow will run on Caffe, but rather that "the Linux kernel of tomorrow", meaning some fundamentally important, unifying aspect of computing that emerges in the future, may turn out to be machine-learning based.


Yes to @eof's response. He does know me well, and should be able to clarify in my absence.

There is going to be an AI engine that we use in the future in a similar way to how we use Linux today. I mean that "future Linux-ish AI engine", and unless Linus has been busy ML-ing, it won't be called Linux.


This part seemed a bit of an exaggeration.


As was calling Duda & Hart's book "infamous," but I'm guessing he doesn't know that has a negative connotation. That was my textbook at MIT, and I found it quite useful!


You're correct. I really should have used a word like "invaluable". I love my Duda & Hart. I'm not sure why "infamous" felt appropriate. Thanks for the catch.



