Well, I think a big part of this is that we have finally gotten to the point where the algorithms and the required computing power are becoming widely available. Cheap graphics cards, or accelerators like Intel's Xeon Phi, plus techniques like dropout to prevent overfitting, are enabling much more sophisticated things to be done in a reasonable wall time. Granted, multilayer neural networks aren't a free lunch that will solve everything, but large classes of problems are falling to these techniques all the time. We are also finding that neural networks scale very well to very large architectures, better than some other techniques. I understand we should be careful not to overhype, since previous waves of excitement have caused a mass exodus from this research before. I think, however, that people like Hinton were always right and this was awesome stuff; we just couldn't take advantage of it because we could never train for long enough and hadn't yet learned how to do things efficiently.
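For anyone unfamiliar with the dropout technique mentioned above, here is a minimal numpy sketch of the common "inverted dropout" formulation; the function name, keep probability, and seeding are illustrative assumptions, not taken from any particular library:

    import numpy as np

    def dropout(x, p_drop=0.5, train=True, rng=None):
        # Inverted dropout: during training, zero each unit with
        # probability p_drop and scale survivors by 1/(1 - p_drop),
        # so activations need no rescaling at test time.
        if not train or p_drop == 0.0:
            return x
        rng = rng or np.random.default_rng(0)
        mask = rng.random(x.shape) >= p_drop
        return x * mask / (1.0 - p_drop)

    h = dropout(np.ones((2, 4)))  # ~half the units zeroed, survivors scaled to 2.0

Randomly dropping units forces the network not to rely on any single co-adapted feature, which is what combats overfitting.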



Yes, you are right.

Still, deep learning has done nothing more than classification so far.

What about predictive distributions, regression of complicated outputs (e.g. periodic data), and, most of all, heterogeneous inputs? Right: nothing impressive has been done in those areas, despite the huge number of practical problems they cover.

Let's see if deep learning generalizes to those things. If it does (and I personally believe it will), let's be happy. Until then, we still have to envy what Gaussian processes, gradient boosting machines, and random forests can do that DL so far cannot.
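As a concrete illustration of that gap, a Gaussian process returns a full predictive distribution (mean and uncertainty) for periodic data almost for free. A hedged sketch using scikit-learn's GaussianProcessRegressor, where the toy data and kernel choice are my own assumptions for illustration:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

    # Toy noisy periodic data (illustrative, not from the thread).
    rng = np.random.default_rng(0)
    X = np.linspace(0, 10, 50).reshape(-1, 1)
    y = np.sin(X).ravel() + 0.1 * rng.normal(size=50)

    # A periodic kernel plus a noise term; the fitted GP yields a
    # predictive distribution, not just a point estimate.
    gp = GaussianProcessRegressor(kernel=ExpSineSquared() + WhiteKernel())
    gp.fit(X, y)

    X_new = np.linspace(0, 12, 25).reshape(-1, 1)
    mean, std = gp.predict(X_new, return_std=True)  # predictive mean and std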


Still, deep learning has done nothing more than classification so far. What about predictive distributions, regression of complicated outputs...

http://homepages.inf.ed.ac.uk/imurray2/pub/12deepai/ has predictive distributions from deep learning, passed on to time-series smoothing for articulatory inversion. It's a previous neural net approach made deep, which works better as a result.
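For concreteness, one common way a deep network emits a predictive distribution is to give it two output heads, a mean and a log-variance, trained with the Gaussian negative log-likelihood. A minimal sketch of that loss, where the names are illustrative and this is not the linked paper's exact setup:

    import numpy as np

    def gaussian_nll(y, mean, log_var):
        # Per-example negative log-likelihood of y under N(mean, exp(log_var)).
        # Minimizing this trains the net to output a distribution per input
        # rather than a single point estimate.
        return 0.5 * (np.log(2 * np.pi) + log_var
                      + (y - mean) ** 2 / np.exp(log_var))

    # Example: predicted mean 1.0, variance exp(-1) ~= 0.37, observed 1.2.
    print(gaussian_nll(1.2, 1.0, -1.0))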

(I agree that like any machine learning framework, neural networks have their strengths and weaknesses, and open challenges.)


Okay, I should have worded that differently. There is also a paper by Salakhutdinov on learning a kernel for Gaussian processes; that would cover that case as well.

My point, which I did not really make above, is that deep learning does not stand unchallenged in this domain. Its dominance is so far "only" apparent in vision and audio classification tasks.



