
A Learning Advance in Artificial Intelligence Rivals Human Abilities - pen2l
http://www.nytimes.com/2015/12/11/science/an-advance-in-artificial-intelligence-rivals-human-vision-abilities.html
======
sawwit
Paper:
[https://www.sciencemag.org/content/350/6266/1332.full](https://www.sciencemag.org/content/350/6266/1332.full)

Code: [https://github.com/brendenlake/BPL](https://github.com/brendenlake/BPL)

Abstract: People learning new concepts can often generalize successfully from
just a single example, yet machine learning algorithms typically require tens
or hundreds of examples to perform with similar accuracy. People can also use
learned concepts in richer ways than conventional algorithms—for action,
imagination, and explanation. We present a computational model that captures
these human learning abilities for a large class of simple visual concepts:
handwritten characters from the world’s alphabets. The model represents
concepts as simple programs that best explain observed examples under a
Bayesian criterion. On a challenging one-shot classification task, the model
achieves human-level performance while outperforming recent deep learning
approaches. We also present several “visual Turing tests” probing the model’s
creative generalization abilities, which in many cases are indistinguishable
from human behavior.
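To make the "Bayesian criterion" concrete: the real BPL model fits stroke-level motor programs to each character, but the one-shot classification idea can be illustrated with a toy sketch. Here each concept is a single training example treated as the mean of an assumed isotropic Gaussian (a stand-in for the paper's far richer generative programs), and a new image's features are classified by the highest posterior under a uniform prior. All names and the Gaussian noise model are illustrative assumptions, not the paper's method.

```python
import numpy as np

# One training example per class (the one-shot setting): each "concept"
# is represented here by a single feature vector (hypothetical features).
train = {
    "a": np.array([0.0, 0.0]),
    "b": np.array([5.0, 5.0]),
}

SIGMA = 1.0  # assumed isotropic noise scale


def log_likelihood(x, prototype, sigma=SIGMA):
    """Log p(x | concept) under an isotropic Gaussian centered on the one example."""
    d = x - prototype
    return (-0.5 * np.dot(d, d) / sigma**2
            - len(x) * np.log(sigma * np.sqrt(2 * np.pi)))


def classify(x, train):
    # With a uniform prior over concepts, the posterior argmax
    # reduces to the likelihood argmax.
    return max(train, key=lambda c: log_likelihood(x, train[c]))


print(classify(np.array([0.5, -0.3]), train))  # -> a
print(classify(np.array([4.2, 5.1]), train))   # -> b
```

The paper's contribution is precisely that its "prototypes" are compositional programs (strokes and sub-strokes with relations), so the likelihood rewards structural explanations rather than raw pixel proximity as in this sketch.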

~~~
username3
People require tens or hundreds of examples to get to the point of learning
new concepts from just a single example.

~~~
TuringTest
And this algorithm has required hundreds or thousands of previous attempts at
machine learning (by these and other researchers) to get to the point where it
could replicate that human feature.

------
rhaps0dy
"[...] for a narrow set of vision-related tasks."

Which is good and interesting news! But then I would not have clicked, because
the information I was looking for would have been in the title.

~~~
freddref
It's gotten to the stage where I just read the comments first.

~~~
sanoli
Me too. Sometimes I wish there was a tl;dr under the title of each submission.

~~~
alexc05
What an amazing design idea that would be for dealing with clickbait though.

A crowd-sourced subtitle could be added to the link and possibly voted on for
accuracy (an expanding list could show alternate submissions or let you submit
your own).

Since headlines can no longer be trusted, we could turn to the crowd.

~~~
freddref
This is already in effect as the top comment on HN usually ends up being the
tldr, and the creator gets the karma.

~~~
alexc05
Right, but consider the impact if there was a crowd sourced summary on
Facebook (for example)

"this jaw dropping thing that happened will make you weep for humanity" `baby
drops ice cream, dog eats it`

"Carrie Fisher _destroys_ good morning America" `she throws out a few
lighthearted quips, has a one-liner about jabba`

I think it really becomes effective on video content with extreme linkbait.

Personally (and as a card-carrying crotchety old man), I don't follow those
links on principle. I'm carrying on my own little "boycott of one".

Instituting a platform change, though, to solve for linkbait inline (without
requiring a page load to reach the comments) would change the value proposition
of headlines, and I think it would be more likely to reward quality content.

A TL;DR summary is not the same as the top comment, and the top comment is not
always a TL;DR... One line, under the headline, crowd-sourced and voted on. I
think it could fundamentally change behavior, and, if the platform doing it
were sufficiently large, the composition of the internet.

~~~
eric_h
> Personally (and as a card-carrying crotchety old man), I don't follow those
> links on principle. I'm carrying on my own little "boycott of one".

Me too, so that makes it at least a boycott of two. I'm not even a crotchety
old man yet (early 30s), but it seems to be coming on fast ;)

------
orbifold
For anyone interested this seems to be the code (written in Matlab) that they
used to get their results:

[https://github.com/brendenlake/BPL](https://github.com/brendenlake/BPL)

~~~
bronz
Why do you think they wrote it in Matlab?

~~~
omginternets
Short answer: because everybody uses Matlab in research.

- Lots of libraries

- Nice graphical interface

- Lots of online help

- Matrices as a first-class citizen

I hate Matlab as much as the next guy, but it's deeply ingrained in academia.

~~~
bronz
Someone down-voted me just for asking. I have no idea about any of this stuff
and I was just genuinely curious. Apparently people are pretty polarized about
Matlab. Thanks for explaining that to me.

~~~
omginternets
>Someone down-voted me just for asking.

People take the internet way too seriously :/

>Apparently people are pretty polarized about Matlab

Yes, very much so. Much of the contention comes from the obscene licensing
prices coupled with the fact that it's a terrible language.

~~~
tormeh
A terrible language with awesome libraries. In the language wars, libraries
are king.

I hate Matlab, but I've got to admit that the toolboxes are without peer.

~~~
pmoriarty
Perl had tons of awesome libraries. Yet Python, Ruby, and Go, which had
virtually none compared to Perl when they started, still became very popular,
and are arguably more popular than Perl now.

------
tangled_zans
Can somebody who's read the paper or code briefly comment on how the algorithm
works and where the innovation is compared to the currently used Bayesian-
learning algorithms?

------
gnrlist
This sounds a lot like elastic distortions that have been around for a while.

