

Google acquires neural network startup that may help it hone speech recognition - boh
http://www.engadget.com/2013/03/13/google-acquires-neural-network-startup-dnnresearch/

======
dia80
I think we are at a tipping point for machine learning. Deep learning is just
getting ramped up and the results are extremely impressive. [1]

How far this could go is hard to say right now, but things like this [2] (10
million images, 1 week of training, 16,000 cores) are bringing enormous
processing power to bear for the first time on these kinds of problems.

Is anyone able to comment on what moving to custom hardware will mean for
performance on these kinds of tasks?

[1] <http://www.cs.toronto.edu/~hinton/absps/imagenet.pdf>

[2]
[http://static.googleusercontent.com/external_content/untrust...](http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/large_deep_networks_nips2012.pdf)

~~~
gcr
It's my understanding that deep learning networks have to be tuned almost
perfectly to perform well. Not my area of expertise, but I'm not convinced
just yet.

~~~
SatvikBeri
It's true that Deep Belief Networks take a lot of work compared to, say,
Random Forests, but they make it much easier to get good Neural Networks.

Before Deep Learning, Neural Networks required a lot of hand tuning. Without
that kind of tuning, Neural Networks with more than 1 or 2 hidden layers
tended to perform very badly, because they would get stuck in plateaus or poor
local optima.

In contrast, (one facet of) Deep Learning focuses on using unsupervised
learning to better initialize NNs, which in turn allows for much higher levels
of accuracy. In essence, Deep Learning helps pick the NN parameters
automatically. So it's actually _decreasing_ the amount of hand tuning needed.
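The greedy, layer-wise unsupervised pretraining described above can be sketched in a few lines. This is just a toy illustration with tied-weight autoencoders, made-up layer sizes, and plain gradient descent, not the actual setup from Hinton's papers (which used RBMs and contrastive divergence):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_layer(data, n_hidden, epochs=200, lr=0.5, seed=0):
    """Train one tied-weight autoencoder layer; return encoder weights."""
    rng = np.random.default_rng(seed)
    n_visible = data.shape[1]
    W = rng.normal(0.0, 0.1, (n_visible, n_hidden))
    b = np.zeros(n_hidden)   # hidden bias
    c = np.zeros(n_visible)  # reconstruction bias
    for _ in range(epochs):
        h = sigmoid(data @ W + b)       # encode
        recon = sigmoid(h @ W.T + c)    # decode with tied weights
        err = recon - data              # reconstruction error
        # backprop through both the decoder and encoder paths
        d_recon = err * recon * (1 - recon)
        d_h = (d_recon @ W) * h * (1 - h)
        grad_W = data.T @ d_h + d_recon.T @ h
        n = len(data)
        W -= lr * grad_W / n
        b -= lr * d_h.mean(axis=0)
        c -= lr * d_recon.mean(axis=0)
    return W, b

def greedy_pretrain(data, layer_sizes):
    """Stack autoencoders: each layer trains on the previous layer's codes."""
    weights, x = [], data
    for n_hidden in layer_sizes:
        W, b = pretrain_layer(x, n_hidden)
        weights.append((W, b))
        x = sigmoid(x @ W + b)  # codes become the next layer's input
    return weights
```

The returned `(W, b)` pairs would then initialize a deep network that is fine-tuned with supervised backprop, which is the "picking the NN parameters automatically" part: instead of hand-tuning a deep net's starting point, each layer starts near features that already reconstruct its input.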

------
Cowen
I took Geoff Hinton's Intro to Neural Networks course a few years back. After
class he would have optional Q&A sessions where he would answer anything about
anything. They were amazing. I think we even asked him once about his views on
Margaret Thatcher.

After we covered deep belief nets, RBMs, and autoencoders, I asked him if
anyone was using deep belief nets in the real world. He said "no, but these
results are too impressive to deny. Within a few years, they will be."

Looks like he was right. Congrats, Geoff.

------
myth_drannon
Hopefully he will be able to continue teaching the Neural Networks course on
Coursera <https://www.coursera.org/course/neuralnets>

------
mark_l_watson
I took Geoff Hinton's Coursera NN class last year - excellent.

I am glad that he is remaining part time at his university.

I share other people's opinions on the importance of deep learning. At first,
when it was introduced in class, I thought that it was a nice hack - but it is
much better than that.

I was on a DARPA NN advisory panel in the late 1980s and did a fairly
successful commercial NN product. Lots of advancements in the field in the
last 20 years!

------
jhartmann
Congrats to Prof. Hinton, Alex, and Ilya. I have worked some with these
techniques, and they are so powerful. I'm sure being part of the Google team
will give them the access to the processing power and the raw data to make
some really cool stuff.

