
A landmark 2012 paper transformed how software recognizes images - eaguyhn
https://arstechnica.com/science/2018/12/how-computers-got-shockingly-good-at-recognizing-images/
======
rasmi
The paper is "ImageNet Classification with Deep Convolutional Neural Networks"
by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, and is available
here:

[https://papers.nips.cc/paper/4824-imagenet-classification-wi...](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks)

------
genericone
I enjoyed genji256's comment on the article:

genji256:"Anecdote: I was one of the three reviewers for that paper and I tend
to review harshly. A few years after it was published, I started worrying that
I had given it a bad score and completely missed a field-changing paper. I
frantically dug through my emails and found the review. Turns out I gave it a
7/10 so it wasn't THAT bad, though my summary makes me cringe a bit: 'A paper
which, by giving precise details on the various tricks used, is a useful
addition to the deep learning literature. I wish comparisons with other
techniques were somewhat fairer.' "

------
rococode
I'm curious, are there papers in other ML fields that could be considered
breakthroughs comparable in impact to AlexNet?

For NLP the recent ELMo and BERT papers for word embeddings come to mind,
although their scope is somewhat different than AlexNet.

~~~
bkanber
Sure, lots! We've made a lot of progress just in the last decade or two: LSTMs,
recurrent nets, and convolutional nets. Slightly older (late '90s), but I think
Random Forests were a pretty significant breakthrough.

Another huge one is Paul Graham's own work using Naive Bayes to filter spam.
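
The Naive Bayes approach Graham popularized is simple enough to sketch in a few
lines. This is a toy illustration with invented token counts, not Graham's
actual implementation: score a message by the log-ratio of its tokens'
likelihoods under the spam and ham classes, with Laplace smoothing for unseen
words.

```python
from collections import Counter
import math

# Toy training data: token counts in spam vs. ham messages (invented numbers).
spam_tokens = Counter({"free": 20, "money": 15, "meeting": 1})
ham_tokens = Counter({"free": 2, "money": 3, "meeting": 12})
n_spam, n_ham = 40, 60  # number of training messages of each class

def spam_score(message):
    """Naive Bayes log-odds: log P(spam) - log P(ham) given the tokens."""
    # Start with the prior log-odds from class frequencies.
    score = math.log(n_spam / n_ham)
    for token in message.lower().split():
        # Laplace (add-one) smoothing so an unseen token can't zero out a class.
        p_spam = (spam_tokens[token] + 1) / (sum(spam_tokens.values()) + 2)
        p_ham = (ham_tokens[token] + 1) / (sum(ham_tokens.values()) + 2)
        score += math.log(p_spam / p_ham)
    return score  # > 0 leans spam, < 0 leans ham

print(spam_score("free money"))    # positive: leans spam
print(spam_score("team meeting"))  # negative: leans ham
```

The "naive" part is the independence assumption: each token contributes its
log-likelihood ratio separately, which is wrong in general but works remarkably
well for spam.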

~~~
p1esk
Both convolutional and recurrent architectures were developed in the late '80s.
LSTM (an improved RNN) was published in 1997.

------
crescentfresh
Man, I took a neural network class in uni and loved it. It was offered by the
Psych department (though I was in Comp Sci). All I remember now is the MATLAB
labs and some of the terms, but otherwise nothing at all. My career took me
nowhere near this subject matter and I've regrettably lost most of my
recollection of it, so I appreciate this article explaining the basics again.

------
Hendrikto
> Right now, I can open up Google Photos, type "beach," and see my photos from
> various beaches I've visited over the last decade. I never went through my
> photos and labeled them; instead, Google identifies beaches based on the
> contents of the photos themselves. This seemingly mundane feature

“Seemingly mundane”?? This is scary as hell.

~~~
Retra
Scary how? If you send a photo to Google's computers, expect someone to look
at it and classify it. If the expectation is that one can drown that
possibility away using _volume_ , then one would need to reassess their
understanding of the purpose of computers.

What are your expectations, and why would they elicit such an extreme reaction
to the status quo?

~~~
Hendrikto
I knew they were doing this. I am just objecting to calling this “mundane”.

~~~
stan_rogers
"[S]eemingly mundane". As in something you (a human) wouldn't need to put any
thought or effort into. It doesn't actually occur to Muggles that there are
things that they do that are actually hard problems for computers, even if
they understand that there are all sorts of things computers can do in a flash
that would exhaust their personal resources for weeks on end.

------
loisaidasam
Secure Connection Failed.

What's going on arstechnica?

~~~
louwhopley
Seems like they've got some server problems and are working on it
([https://twitter.com/arstechnica/status/1080502151363715072](https://twitter.com/arstechnica/status/1080502151363715072))

It's also gradually coming back together, one page resource at a time.

