There are still applications where e.g. random forests beat the crap out of all kinds of deep learning algorithms in (a) training time (b) predictive quality (c) prediction time.
We should stop hyping this. I am a researcher working in deep learning myself, but the current deep learning hype is actually what makes me worry that I will have trouble getting a job because industry will be disappointed a third time.
Still, deep learning has done little more than classification so far.
What about predictive distributions, regression with complicated outputs (e.g. periodic data) and, most of all, heterogeneous inputs? Right: nothing impressive has been done in those areas, despite the huge number of practical problems they cover.
Let's see if deep learning generalizes to those things. If it does (and I personally believe it will), let's be happy. Until then, we still have to envy what Gaussian processes, gradient boosting machines and random forests can do that DL so far cannot.
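To make the "strong baseline" point concrete, here is a minimal sketch (my own illustration, not from the thread) of how little tuning a random forest needs on a small tabular regression problem; the synthetic data and all parameter choices are assumptions for the example, using scikit-learn:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
# Synthetic tabular data: 5 features, only 2 of which matter,
# with a periodic component in the target (the kind of structure
# mentioned above).
X = rng.uniform(-3, 3, size=(2000, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Default-ish forest, no feature scaling, no architecture search.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # R^2 on held-out data
```

The point of the sketch is the workflow, not the numbers: fit, score, done, with no preprocessing or hyperparameter search, which is exactly what makes tree ensembles hard to beat on problems like this.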
http://homepages.inf.ed.ac.uk/imurray2/pub/12deepai/ has predictive distributions from deep learning, passed on to time-series smoothing for articulatory inversion. It's a previous neural net approach made deep, and working better as a result.
(I agree that like any machine learning framework, neural networks have their strengths and weaknesses, and open challenges.)
My point is (which I did not really make clear above) that deep learning does not stand unchallenged in this domain. Its dominance is so far "only" apparent in vision and audio classification tasks.
Edit: update link
There are some subtle differences between HTMs and straight-up deep learning, mainly the requirement that HTM input data be both temporal and spatial.
I know Andrew used to sit on an advisory committee at Numenta, I don't know if he still does.
This is guaranteed not to be terribly general, considering the many bits of matter on this planet that exhibit intelligence without a neocortex. By many, I mean ones that hugely outnumber humans.
So very interesting stuff, but not the answer that I think he wants it to be.
Basically he just wants something that's very good at recognizing patterns over time, which I can imagine the neocortex would be great at.
Though, he also references the thalamus and hippocampus in the books a lot, as very important parts of the brain to his framework. [http://en.wikipedia.org/wiki/Memory-prediction_framework#Neu...]
It's probably a harder problem, creating smarter-than-human intelligence on a machine, but research isn't as constrained by laws and ethics (they don't have to bemoan not being able to experiment with living human brains). I wish more people were active in the area.
We have some probabilistic models that successfully predict various future states of the brain from past states or stimuli. That is not the same as understanding it, or even approximating it.
Also, I believe it's still possible to join the current session (first assignment was due this weekend, but you can turn it in late with just a 20% penalty.)
It is one of the best Coursera classes. I had a blast, and strongly recommend it. I decided to continue learning ML, mostly because of Prof. Ng.
You are also right that you need a lot of processing power to get neural networks to work well. But that is changing rapidly. Hinton's convolutional neural network holds the state of the art on the ImageNet benchmark, yet was trained using significantly less computing power than Google Brain. Regardless, you don't need Google-scale computation to get deep networks to work well. The point of Google Brain is to see how far one can push neural networks.
Would you mind naming some of these techniques, if you're familiar with them? I'd like to take a deeper look.
This video drives the point home, and is made by the author of this technique.
Neural networks of course "mimic" the way the brain works, but the resemblance stops there.
Not to mention that several modern AI techniques have nothing to do with mimicking biology (like SVMs).
This looks like a reporter pushing his own agenda to make for a colorful story.
In this regard, I thought I would mention the extraordinarily simple and elegant talk by G. Hinton last summer: http://www.youtube.com/watch?v=DleXA5ADG78
It starts from a simple and clever improvement to an existing deep learning method and ends up with beautiful (and simple!) insights on why neurons use simple spikes to communicate.
It beats "This guy is wrong (source: random blog post that I didn't really read)", anyway.
Edit: To be clear, I don't love the Reddit snowclones, but there's nothing wrong with the sentiment behind "I'm a scholar in this field, and I think this guy is a hack."
I agree with the sentiment, but "properly cited" suggests a bit more than a one line comment from a username with barely a handful of posts (generally pertaining to bitcoin and intermediate level networking certifications) on an anonymous website.
>It beats "This guy is wrong (source: random blog post that I didn't really read)", anyway.
I have to disagree. At least a random blog post presents the potential for useful information or a fully articulated opinion. What we have here is four words and an unsubstantiated appeal to authority.
Isn't the purpose of citing claims precisely so others can effectively verify or discount their validity?
Citation: a post from an anonymous Internet user who claims to have a graduate degree. Take it for what it is. What's wrong with that?
> I have to disagree. At least a random blog post presents the potential for useful information or a fully articulated opinion. What we have here is four words and an unsubstantiated appeal to authority.
Conversely, it is far easier to engage in vigorous debate on HN than a random blog. I call it a wash.
(Besides, my read of the comment that started this was that it's quite tongue-in-cheek. He was basically saying "Don't trust this any more than any other comment you read on the Internet.")
If you're going to throw broad statements around some examples would really help lend you credibility.
Scientist: X can help us get full AI!
Scientist: Because of reason R.
You: But, reason R is a non sequitur...
More seriously, reasoning similar to that behind deep learning has been offered multiple times in AI's history and failed (e.g. Thinking Machines).
I would suggest that these folks remain calm and build something on the scale of IBM's Watson using just deep learning.
This might be slightly off-topic, but I'll try it here anyway: can anyone recommend any books/other learning resources for someone who wants to grasp neural networks?
I'm a CS student who finds the idea behind them really exciting, but I'm not sure where to get started.