
Yann LeCun, Geoffrey Hinton and Yoshua Bengio win Turing Award - mmq
https://www.nytimes.com/2019/03/27/technology/turing-award-hinton-lecun-bengio.html
======
ignoramous
Geoffrey Hinton's tech-talk in 2007 (!) at Google is a great watch (with a
heavy dose of technical jargon [0] plus some dry British humour interlaced
throughout). He explains digit recognition (vs SVM) [1], document
classification (vs LSH), and briefly summarises image classification [2]
problems and how they were solved:
[https://youtu.be/AyzOUbkUf3M](https://youtu.be/AyzOUbkUf3M)

You could instantly see the results he presents were way better than what was
state of the art at that time. Amazing.

\---

[0] Grant Sanderson (3Blue1Brown) started a YouTube series covering Neural
Networks (4 episodes, so far) that helps build an intuitive grasp of the
topic:
[https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi](https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi)

[1]
[https://en.m.wikipedia.org/wiki/MNIST_database](https://en.m.wikipedia.org/wiki/MNIST_database)

[2]
[https://en.m.wikipedia.org/wiki/AlexNet](https://en.m.wikipedia.org/wiki/AlexNet)

~~~
li4ick
Although 3b1b's videos are awesome, his NN videos are heavily inspired by
this:
[http://neuralnetworksanddeeplearning.com/](http://neuralnetworksanddeeplearning.com/).
Even the code on his github page is taken from there.

~~~
ignoramous
Yes. Sanderson does mention in one of his videos that he's borrowing the
content and structure from Michael Nielsen.

------
orbifold
For a more balanced view of the history of deep learning, see Schmidhuber's
reviews
[http://people.idsia.ch/~juergen/deep-learning-overview.html](http://people.idsia.ch/~juergen/deep-learning-overview.html)
and
[http://people.idsia.ch/~juergen/who-invented-backpropagation.html](http://people.idsia.ch/~juergen/who-invented-backpropagation.html).

~~~
bobosha
No invention occurs in a vacuum; most creations are recombinant effects. The
pieces of the puzzle might very well have existed, but IMHO Hinton deservedly
receives credit for putting the various pieces together for backprop.

~~~
obviuosly
Various researchers, e.g. Werbos, Linnainmaa, Bryson and Yu-Chi Ho, had been
doing backprop in neural networks before Hinton et al. Hinton was merely a
popularizer of the idea.

~~~
nabla9
Wright brothers were most likely not the first to achieve heavier-than-air
powered flight. But they made the first controlled, sustained flight.
Controlled flight was the real breakthrough in aviation.

It's the same with Hinton et al. They were among many other pioneers, but
what really sets them apart is that they made it all work. They also analyzed
why it works.

~~~
simonster
Schmidhuber's take on the Wright brothers:

[https://www.nature.com/articles/421689c](https://www.nature.com/articles/421689c)

[https://www.nature.com/articles/d41586-019-00491-5](https://www.nature.com/articles/d41586-019-00491-5)

~~~
jessriedel
> In 1890 Clément Ader made the first manned, powered, heavier-than-air
> flight, of 50 m, in his bat-winged monoplane.

Seems disingenuous to not mention that this device only worked due to ground
effects and "flew" 8 inches off the ground (according to Wikipedia).

------
darksaints
This is well deserved, but IMO it also signals an upcoming AI winter. This
cycle has happened several times before:

1) Some scrappy researcher on a low budget develops a new AI technology that
shows promise in a specific area.

2) More researchers take that idea and successfully apply it more broadly.

3) Massive investment in research happens, pushing PhD candidates into
extracting every possible nuance out of that technology.

4) Researchers, encouraged by the broad success, begin to think we've found
the one true key in the quest towards general intelligence. Suddenly all the
researchers are Neats. The research funding now comes mostly from businesses
instead of governments and non-profits, which grow overconfident that the
technology can be used for virtually anything, funding everything in sight.

5) Starry-eyed futurists start lionizing the founders of the new revolution as
geniuses, publishing flowery bullshit about how the world will be forever
changed and how the General AI revolution is only 5-10 years away.

6) Surprise! It actually can't be used that broadly. Improvements stall, and
CEOs start to realize that they can't just dump data in and get money out. It
becomes a commercial disappointment, triggering disinvestment.

7) The Neats, disappointed that their glamorous and simplistic theory of
general intelligence didn't pan out, start to disappear. Newspapers begin to
make fun of all of the visionaries, comparing them to the people who said
flying cars were 5-10 years away.

8) A handful of Scruffies take over the now low-funding AI winter, working
hard on minimal budgets until a new breakthrough is found and we return back
to #1.

This article is showing that we're solidly within stage 5, and we're already
seeing signs of stage 6. When everybody is buying, it's time to sell.

~~~
mindcrime
There's good reason to think that the AI Winter / AI Spring cycle is done. AI
techniques are now creating so much real-world value (more so than in the 80's
/ 90's) that there may well not be any more "AI Winter" events. Or perhaps
there will be, but they'll be less pronounced. Maybe an "AI Fall" instead of
an "AI Winter".

Part of that too, is that I think people now realize that narrow AI is
sufficient to create tremendous value, and that it doesn't necessarily matter
if the "AGI breakthrough" happens anytime soon or not.

It's hard to be sure, but I don't see the kind of collapse that happened in
the past happening anytime soon.

~~~
darksaints
AI winter doesn't mean people stop using it. AI winter and spring aren't
phenomena specific to AI; in essence it's the same dynamic found in economic
bubbles. In almost every economic bubble we see actual economic benefit
underlying the hype, but the hype grows bigger than the benefit can account
for, triggering an eventual collapse of the hype. The thing that you should
notice is that when the bubble collapses we don't stop buying, we just stop
_over_ investing.

We haven't stopped buying tulips, trains, stocks, technology, or real estate.
And just as well, we haven't stopped using symbolic AI, single-layer neural
nets, ensemble models, expert systems, or logic programming languages. The new
topological enhancements to Neural Nets won't ever go away either...but that
doesn't mean we won't see a drop in investment once the general public
realizes that your neural nets aren't going to learn how to do double-entry
accounting any time soon. The AI winter isn't characterized by the technology
going away, it is characterized by lofty idealism being shattered and
investment dropping back to reflect reality.

~~~
mindcrime
_AI winter doesn't mean people stop using it. ... The thing that you should
notice is that when the bubble collapses we don't stop buying, we just stop
overinvesting._

Right, but that doesn't match the way I feel the term "AI Winter" has been
used. Now I could be mis-interpreting things, but I've always looked at an "AI
Winter" as a period of _under_ investment, created as an over-reaction to the
mechanics you're referring to.

 _but that doesn't mean we won't see a drop in investment once the general
public realizes that your neural nets aren't going to learn how to do
double-entry accounting any time soon._

Right, but again, I don't think most people use "AI Winter" to mean a simple
"drop in investment". If it were something that straightforward, there would
be no need for the "Winter" metaphor.

And that's why I say there may indeed be a drop... an "AI Fall" if you will,
that still represents a pull-back of sorts, but perhaps just a less pronounced
and extended pullback like we've seen in the past.

~~~
mindcrime
D'oh. Too late to edit, but this:

 _less pronounced and extended pullback like we've seen in the past._

should read:

 _less pronounced and extended pullback than we've seen in the past._

------
luminati
The Chinese must be furious.

I was at a local government plenary session in Chengdu (long story) a couple
of years back. The lead speaker kept waxing on about how China invented AI and
contributed it to the world.

I mentioned some points about China's AI contributions/influence:

0\. Algorithms -> The OGs, as celebrated by this Turing Award (Canadian, but
mostly led by American universities. One thing I did mention was that, due to
the sheer factory production of Chinese PhDs, there is a lot of stuff on arXiv
from China)

1\. Frameworks -> Tensorflow, PyTorch (American)

2\. Hardware -> Nvidia (American)

3\. Distribution -> Github (American)

4\. Education -> Medium, Github, Youtube, Fast.ai (American)

5\. Cloud -> AWS, Google, Azure (American)

China has equivalents for all of them, e.g. PaddlePaddle by Baidu, AliCloud,
etc., but none of them have the reach, influence or domination of their
American counterparts.

Suffice to say that my points weren't taken too well, and I have been
disinvited since then.

~~~
briga
Do you really think China has done anything on the level of these three?
Basically every new neural network architecture in recent memory is
underpinned by Hinton's work on back-propagation.

China has done some fantastic work in ML, but I think the award was a long
time coming for these three.

~~~
scarejunba
It's obvious he doesn't think China has done anything on the level of these
three. Literally everything about his comment says that.

------
li4ick
I think Schmidhuber should be up there. LSTMs are everywhere, not just NLP.
The latest Starcraft 2 bot from Deepmind uses LSTMs extensively in its
architecture.

~~~
nabla9
Schmidhuber's significance in the field is not at the same level. He invented
one type of neural network, but the "Canadian mafia" launched the whole Deep
Learning revolution with multiple breakthroughs and theoretical understanding.

LSTMs are clever and they were bleeding edge up to 2014, but once they were
better understood as a bypass mechanism, attention, context vectors, averaging
networks and causal convolutions started to replace them.

~~~
Nimitz14
Schmidhuber did way more than LSTMs. And they still are SOTA on many tasks.
You clearly are not very familiar with the field.

~~~
dang
Whoa. Personal attacks will get you banned here. We've warned you about this
more than once before. Please review
[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)
and follow the rules when posting here. When you do, don't miss this one,
since it should have changed your assumption about the GP:

" _Please respond to the strongest plausible interpretation of what someone
says, not a weaker one that's easier to criticize. Assume good faith._"

~~~
srean
Just a note, I did not find it any more rude than other comments that get a
free pass. Commenting that more familiarity with the literature is necessary
can be a very factual claim.

~~~
dang
It was an obvious violation of HN's rules. If you see other comments that
violate the rules going unmoderated, the likeliest explanation is that we
didn't see them; we don't come close to seeing everything that gets posted. In
that case the thing to do is to flag the comment (described at
[https://news.ycombinator.com/newsfaq.html](https://news.ycombinator.com/newsfaq.html)).
In egregious cases, you can email hn@ycombinator.com as well.

------
currymj
Extremely pleased to see the weird feud with Schmidhuber sparking up again;
completely pointless feuds like that are very entertaining, especially since
all the involved figures have done good work and don't seem to be sabotaging
each other's students. It's like pro wrestling for nerds.

------
mosicr
30% of Google's ML workload is LSTM. Only 5% is CNN.
[https://cloud.google.com/blog/products/gcp/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu](https://cloud.google.com/blog/products/gcp/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu)
Schmidhuber should have gotten the award too.

------
myth_buster
Well deserved!

The ImageNet challenge in 2012 was an inflection point, but few slogged
through the AI winter and made it through to the other side.

~~~
sachin18590
Was this the year ReLU became mainstream? The paper itself was out there a few
years before too, right?

~~~
rjtavares
I believe he's referring to this:

> On 30 September 2012, a convolutional neural network (CNN) called AlexNet
> achieved a top-5 error of 15.3% in the ImageNet 2012 Challenge, more than
> 10.8 percentage points lower than that of the runner up. This was made
> feasible due to the utilization of Graphics processing units (GPUs) during
> training, an essential ingredient of the deep learning revolution. According
> to The Economist, "Suddenly people started to pay attention, not just within
> the AI community but across the technology industry as a whole."

[https://en.wikipedia.org/wiki/ImageNet](https://en.wikipedia.org/wiki/ImageNet)

------
ArtWomb
Congrats! Absolutely well deserved. Simons' Foundations of Deep Learning
seminar this Summer 2019 will seek to illuminate some of the issues around
reproducibility, failure modes, etc.

[https://simons.berkeley.edu/programs/dl2019](https://simons.berkeley.edu/programs/dl2019)

------
g9yuayon
Companies should jointly give CIFAR a big fat award too. :-D Without the 10
million dollars CIFAR gave back in 2003 to the 15-person research group led by
Hinton, the group may have dissolved due to lack of funding, and our history
would be different.

------
amelius
And we still don't know if/how biological brains perform backpropagation.
Perhaps we should consider calling NNs "weight networks" instead.

~~~
ska
That was pretty much hashed out in the 70s when NN interest got going again.
I'm not sure there is much value in recapitulating the discussion....

~~~
erikpukinskis
Well, we do have a new perspective after 50 years.

Also, there is educational value in conversation.

What is the value to you in shutting down a conversation someone else is
interested in having?

At least cite the answer to the question if you feel it's settled scholarship.

------
deepnotderp
Tbqh Schmidhuber absolutely deserves a spot here. Not just for LSTMs, but also
for predictability minimization, various interesting intrinsic motivation
concepts, etc.

------
Jach
Well deserved. I was thinking maybe it's a bit too soon (the Turing Award
isn't exactly timely awarded) but remembered it's 2019...

If you wanted a first-rate CS education, you could do a lot worse than to go
through the winners of the Turing Award[0] and review their seminal works.
I've only sampled maybe half of them but every time I pick one I learn
something interesting.

[0]
[https://en.wikipedia.org/wiki/Turing_Award#Recipients](https://en.wikipedia.org/wiki/Turing_Award#Recipients)

------
devilmoon
It's interesting to me that most comments are confusing Deep Learning with the
whole field of AI in general, when it is actually a subset of Machine
Learning. Well deserved anyhow!

------
return0
I somehow associate Hinton with Terry Sejnowski more than with his later
collaborators. They are all pioneers and this is well deserved regardless,
even if late!

------
tw1010
Do you think (HN) that this incentive boost will mean more people who're
working on fringe risky ideas will get the energy and persistence to keep
going? Will we see a bump in 10 or so years of innovations coming from people
who would have otherwise given up (had this award not been given)? Hard to
measure of course, but what's your hunch?

~~~
BucketSort
If anything it's demotivating. Other people that have also contributed greatly
were just ignored by this award, while those who received it are mostly backed
by big corps, Facebook (LeCun) and Google (Hinton) -- even though these three
are deserving as well, especially Hinton. Awards like this are quite stupid.
The award isn't even based on a single breakthrough, but on their "collective
achievement." This was just a PR thing. Who does science for awards anyway? I
thought that was an athlete thing.

~~~
ignoramous
> Other people that have also contributed greatly were just ignored by this
> award.

Curious: apart from Jurgen Schmidhuber, who else do you think was left out,
and why do you think so?

------
Tistel
I would love to hear a long-form podcast with all three (and a
CS-knowledgeable interviewer).

~~~
kasperset
This is the closest I could find:
[https://www.thetalkingmachines.com/episodes/history-machine-learning-inside-out](https://www.thetalkingmachines.com/episodes/history-machine-learning-inside-out)

~~~
Tistel
That's a great interview, with the right level of detail. The interview
starts ~10 minutes in. Thanks for the recommendation!

------
mistrial9
As a non-specialist, it seems to me like several important and distinct areas
of inquiry are just lumped together in the celebration of a "winner" for ML:

text, language and human chat;

image recognition from photos that are blurry, or taken from multiple views
and under multiple lighting conditions;

formal patterns with large numbers of variations.

These are not at all the same, yet the praise seems to want to declare "the
best" and "beating competitors"... why is something so multi-faceted reduced
to the logic of a sports event?

~~~
robotresearcher
The achievement is mainly about the fundamental techniques they developed, not
the application domain. The techniques have been applied to many things since,
hence the large impact.

------
ipunchghosts
Wake-sleep algorithm, baby! Long live the Helmholtz machine!

------
mostafab
Academic geopolitics. The "deep learning mafia" has struck again, excluding
the guy from the other side of the Atlantic, Schmidhuber.

------
scienceyang
They are great, but deep learning is not AI, and it is not science. I doubt
the independence of the Turing Award committee.

------
panchicore3
How much has changed since those geniuses started to work in this particular
field? I come from full-stack web dev, where things change fast and I find
myself jumping from approach A to approach B very quickly these days. I just
want to know your perception of how fast things go and change in ML.

~~~
Voloskaya
You change frameworks and tools pretty frequently but they are still all built
around the same building blocks: html, css, javascript etc.

It's pretty much the same in ML. These people built the building blocks that
we still use in ML everyday (e.g. backpropagation, ConvNets etc). But the
fine-tuning, packaging and tooling of these techniques also changes all the
time and it can be hard to keep up. Having moved from full-stack web to ML I
have a similar feeling about the pace of things.
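To make the "building blocks" point concrete, here is a minimal sketch of the backpropagation idea, i.e. using the chain rule to push error gradients back into the weights, for a single sigmoid neuron. The toy AND dataset, starting weights and learning rate are arbitrary illustrative choices; real frameworks automate exactly this bookkeeping:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy dataset: the AND function, learnable by a single neuron.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.1, -0.1, 0.0  # arbitrary small starting weights
lr = 1.0                    # learning rate

def loss():
    """Sum of squared errors over the dataset."""
    return sum((sigmoid(w1 * x1 + w2 * x2 + b) - y) ** 2
               for (x1, x2), y in data)

before = loss()
for _ in range(2000):
    for (x1, x2), y in data:
        a = sigmoid(w1 * x1 + w2 * x2 + b)  # forward pass
        # Backward pass (chain rule): dL/dz = 2*(a - y) * a*(1 - a)
        delta = 2 * (a - y) * a * (1 - a)
        w1 -= lr * delta * x1               # dz/dw1 = x1
        w2 -= lr * delta * x2               # dz/dw2 = x2
        b -= lr * delta                     # dz/db = 1
after = loss()
print(before, after)  # the loss drops by orders of magnitude
```

Stacking many such units into layers and applying the same chain rule layer by layer is the backpropagation that underlies today's tooling, however much the frameworks around it churn.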

------
swalsh
I have a browser extension that replaces the phrase "Artificial Intelligence"
with the phrase "A bunch of if statements". This might have been one of the
top results.

~~~
0xfaded
I'm also quite dismissive of AI overuse. My preferred substitution is
s/Machine Learning/Machine Guessing.

However, these are 3 people who overcame a lot of nay-sayers to prove that
something could actually work. I remember back in university the lecturer said
that neural networks couldn't scale because of the vanishing gradient problem.

It was actually very easy to be critical: you want to fit a massive number of
not-very-orthogonal parameters to an under-constrained problem? Sounds pretty
dumb to me!

I think these 3 deserve recognition for their tenacity and for the new world
of "under-defined gradient descent", sometimes called DL, which they opened
up.

~~~
TaupeRanger
Computational Statistics is probably most accurate.

------
gzeus
It's about time :)

------
TicklishTiger
I have the feeling this got so many upvotes because people think this is
related to the turing test. And that we have a chatbot now that can trick the
average person into thinking it's a human.

It's not.

It's about some incremental progress in making neural networks classify
stuff.

~~~
fooker
Pretty sure the Turing award is at least as well known as the Turing test.

~~~
TicklishTiger
"Pretty sure" how?

Google trends seems to indicate the opposite.

[https://trends.google.com/trends/explore?q=turing%20award,tu...](https://trends.google.com/trends/explore?q=turing%20award,turing%20test)

But it's not about "well known" anyhow. It's about significance. A
Turing-test-winning chatbot would be a much bigger breakthrough than an
incremental improvement in image and text classification.

~~~
ccjnsn
Depends on what bias you want to read the data with.

It could mean no one needs to Google the Turing award because everyone knows
what it is but doesn't know what the Turing test is.

------
amelius
I think NVidia deserves part of the award, as they have been a major factor in
the progress of NNs.

~~~
BucketSort
Is this a meme? No, they have not. Sure, they are a popular hardware
provider, but NNs had been around before they were even a company. I think
Fukushima deserves a spot here too, before giving it to Nvidia, but so many
people have contributed to NNs that I guess those in the spotlight take the
glory, if it must be taken.

~~~
yongjik
...Fukushima? Is it a typo for someone (or some company)?

~~~
BucketSort
He's credited with first introducing the CNN architecture (Neocognitron) and a
way to train it with unsupervised learning, inspired by the neurological
research of Hubel and Wiesel and his earlier work in training neural networks
with unsupervised learning (backprop is now used instead). Probably one of the
most famous neural network papers.
[https://www.rctn.org/bruno/public/papers/Fukushima1980.pdf](https://www.rctn.org/bruno/public/papers/Fukushima1980.pdf)

