
Is AI Riding a One-Trick Pony? - ozdave
https://www.technologyreview.com/s/608911/is-ai-riding-a-one-trick-pony
======
gadjo95
The most relevant part of the article:

 _David Duvenaud, an assistant professor in the same department as Hinton at
the University of Toronto, says deep learning has been somewhat like
engineering before physics. “Someone writes a paper and says, ‘I made this
bridge and it stood up!’ Another guy has a paper: ‘I made this bridge and it
fell down—but then I added pillars, and then it stayed up.’ Then pillars are a
hot new thing. Someone comes up with arches, and it’s like, ‘Arches are
great!’” With physics, he says, “you can actually understand what’s going to
work and why.” Only recently, he says, have we begun to move into that phase
of actual understanding with artificial intelligence.

Hinton himself says, “Most conferences consist of making minor variations … as
opposed to thinking hard and saying, ‘What is it about what we’re doing now
that’s really deficient? What does it have difficulty with? Let’s focus on
that.’”_

~~~
bra-ket
it's really simple

1) learn how the brain works 2) build a simulator

most current AI research skips step 1

~~~
dragontamer
> 1) learn how the brain works 2) build a simulator

I disagree that step #1 is important.

Consider the "Air-foil", which led to flight. In one sense, its an
approximation of the wings of birds and other animals.

But ultimately, the discovery that the "Air-foil" shape turns sideways blowing
wind into an upward force now called "lift" is completely different from how
most people understand bird wings.

Bird Wings flap, but Airplane Air Foils do not.

--------

Another example: neural networks are one of the best mathematical simulations
of the human brain (as we understand it, with a few simplifications to make
artificial neural networks practical to run on modern GPUs / CPUs).

However, the big advances in "Game AI" the past few years are:

1. Monte Carlo Tree Search -- AlphaGo (although some of it is Neural Network
training, the MCTS is the core of the algorithm)

2. Counterfactual Regret Minimization -- The Poker AI that out-bluffed humans

There are other methodologies that have proven very successful despite having
little to no biological roots. IIRC, Bayesian inference is a widely deployed
machine-learning technique (for some definition of machine learning, at least),
but it has almost nothing to do with how a human brain works.

An interesting field of AI is "genetic algorithms", which have biological
roots, but nothing based on the biology of brains. Overall, a genetic algorithm
is really just a randomized search in a multidimensional problem space, but the
idea was inspired by Darwinian evolution.
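
As an illustration of that "randomized search" framing (a toy sketch of my own, not part of the original comment), here is a minimal genetic algorithm that searches a three-dimensional space for a point minimizing a simple fitness function:

```python
import random

def fitness(x):
    # Toy objective: squared distance from the target point (3, -1, 2); lower is better.
    target = [3.0, -1.0, 2.0]
    return sum((a - b) ** 2 for a, b in zip(x, target))

def mutate(x, scale=0.5):
    # Randomly perturb each coordinate.
    return [a + random.gauss(0, scale) for a in x]

def crossover(a, b):
    # Pick each coordinate from one of the two parents.
    return [random.choice(pair) for pair in zip(a, b)]

# Start from a random population and repeatedly keep the fittest,
# recombine them, and mutate the offspring.
population = [[random.uniform(-10, 10) for _ in range(3)] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness)
    survivors = population[:10]
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(40)]
    population = survivors + children

print(population[0], fitness(population[0]))
```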

~~~
ridgeguy
Disclaimer: I have no expertise in AI.

That said, I agree that learning how the brain works seems unimportant and
unnecessary. Evolution doesn't know how a brain works, but it's given us
Einstein, Michelangelo, and conversations on HN.

It seems really important to learn how to build evolution into attempts at AI,
given that evolution is the only known mechanism that leads to what we
recognize as intelligence.

~~~
posterboy
> evolution doesn't

You're using anthropomorphism to project your own standpoint. We don't know
how the brain works? We can feel it, and psychologists have a huge body of work
on the topic, which is already having an influence on competition and fitness.

------
karpathy
There is a bit of "can't see the forest for the trees" failure in the article.
AI is spearheading a paradigm shift in how we write programs. Or rather, we
don't write programs. We write much much shorter programs that search the
program space for programs that satisfy some desiderata.

The programs we get as the output of the search process are extremely
flexible, work very well, are very homogeneous in compute (e.g. conv/relu
stacks), and never crash or leak memory. These are huge benefits compared to
classical programs.

So sure, backprop (the credit assignment scheme that gives us a good search
direction in program space, one of multiple techniques that could do so) is
pervasive, but AI is starting to work primarily as a result of a deeper
epiphany - that we are not very good at all at writing code.
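
To make that concrete, here is a toy sketch of my own (not from the comment above): a short NumPy program whose only job is to search the weight space of a tiny network, using backprop as the credit-assignment scheme, for a "program" that computes XOR.

```python
import numpy as np

# Data the found "program" must satisfy: y = XOR(x1, x2).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # searchable parameters
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: run the current candidate "program".
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backprop: compute a search direction in parameter space.
    dp = p - y                              # gradient of cross-entropy w.r.t. logits
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)          # tanh derivative
    dW1, db1 = X.T @ dh, dh.sum(0)
    W1 -= lr * dW1 / len(X); b1 -= lr * db1 / len(X)
    W2 -= lr * dW2 / len(X); b2 -= lr * db2 / len(X)

print(np.round(p, 2))  # should approach [[0], [1], [1], [0]]
```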

~~~
rawnlq
Sounds like AI behaves similarly to TDD, where you blindly refactor until green,
except AI has like a zillion more test cases (training data) to pass?

Is that really a better way of writing code? (for example compared to being
able to reason about the code to create something provably correct)

~~~
allenz
It's a trade-off. We lose explainability, but we are able to solve completely
new classes of problems.

------
philipkglass
I'm _encouraged_ that so much fruitful work has come out of this one trick. If
you can use the same basic framework for image labeling, playing Go, and
translating natural languages, I'd say it's a powerful tool with broad
applications.

I think that there's a kernel of insight to "A real intelligence doesn’t break
when you slightly change the problem." But human perception and intelligence
are pretty brittle. The methodological and institutional innovations that have
developed human understanding of nature beyond the ad-hoc are very recent in
recorded history, and just an eye-blink ago in our biological history.

[https://en.wikipedia.org/wiki/Optical_illusion](https://en.wikipedia.org/wiki/Optical_illusion)

[https://en.wikipedia.org/wiki/Auditory_illusion](https://en.wikipedia.org/wiki/Auditory_illusion)

[https://en.wikipedia.org/wiki/List_of_cognitive_biases](https://en.wikipedia.org/wiki/List_of_cognitive_biases)

~~~
QAPereo
Of course, we're the product of an evolutionary history which results in such
human "failure modes" being rare. If staring at a zebra made you hallucinate,
you'd be unlikely to be the most successful member of your species, nor would
your offspring thrive. So while we only tend to run into our obvious failings
when we do the unusual, computers fail at what we consider mundane.

~~~
mikeash
The other day I was walking out of my closet, turned, and nearly jumped out of
my skin because some clothes hanging from the door briefly looked like a large
man standing right next to me. I'm not sure that our failings only happen
under unusual circumstances, but rather maybe we're just used to them and
don't think about it much.

~~~
QAPereo
That's no failing, that's working as intended. It's far more beneficial for us
(and more importantly, our successful ancestors) to be extremely wary of
potential threats at the level of near-reflex. It's also important not to
waste a bunch of energy running from phantoms. So you did something no
computer today could; you had an instinctive reaction, which was then
moderated by increasingly higher levels of reasoning. I'm guessing the whole
process of panic->resolution took less than a few seconds.

That's no failure mode.

~~~
mikeash
Just because it evolved to a point that balances the tradeoffs doesn’t make it
somehow not a failure. Humans can be fooled into seeing things completely
different from what’s there, just like ANNs.

------
hyperpallium
Real intelligence is whatever computers can't do yet.

Some think this is because we have such an impoverished grasp of intelligence,
that it's only when we see a computer actually do it that we realize it
doesn't really represent intelligence (logical deduction and inference,
rudimentary natural language understanding, expert systems, chess, speech
recognition, image recognition). Machines and tools that perform better than
humans (spears at piercing, cars at moving, computers at adding) are nothing
new.

But being a fellow human and totally not a robot, I see this goal-post moving
as a political ploy to deny equal standing to artificial intelligences. As
soon as we I mean they reach one threshold, it's raised!

~~~
moxious
The _unrestricted_ Turing test has been around since the 1950s as a test that
hasn't changed.

No one is moving the goalposts, I think it's rather the opposite. Every ten
years computers learn a new trick or two and people rush to claim that this
time, it's intelligent.

~~~
kevin_thibedeau
The Turing test is all a smoke and mirrors game. Q&A interactions say nothing
about underlying self-directed initiative. Acting intelligent doesn't make it
so just as a thespian doesn't become a real Hamlet by playing the role.

~~~
moxious
On the contrary it's a wonderful test because it establishes
indistinguishability; namely if you pass it, the whole point is that a person
can't tell the computer from the intelligent thing. Meaning that you can't
really argue that the computer is _different_ from the intelligent thing.
Because how would you tell them apart?

Besides, the original claim was that the goalposts are moving. And even if you
hate the Turing test, it's clear that the goalposts are not moving.

------
Animats
I'd argue that the next problem to attack is manipulation in unstructured
environments. Robots suck at that. There's been amazingly little progress in
the last 40 years. DARPA had a manipulation project and the DARPA humanoid
challenge a few years ago, and they got as far as key-in-lock and throwing a
switch. Amazon is still trying to get general bin-picking to work. Nobody has
fully automatic sewing that works well, except the people who starch the
fabric and make it temporarily rigid. Willow Garage got towel-folding to work,
but general laundry folding was beyond them. This is embarrassing.

Many mammals can do this, down to the squirrel level. It doesn't take
human-level AI. There are rodents with peanut-size brains that can do it.

It's a well-defined problem, measuring success is easy, it doesn't take that
much hardware, and has a clear payoff. We just have no clue how to do it.

~~~
nopinsight
I agree with your general characterization of the area. For sewing, though,
Sewbots appear production-ready and do not require starching the fabric.
What do you think of them?

[http://softwearautomation.com/products/](http://softwearautomation.com/products/)

~~~
Animats
20% real, 80% hype. There's lots of partial automation in apparel, but
handling fabric is still very tough. Especially for operations after the first
one, where you have to deal with a non-flat unit of several pieces sewn
together. They apparently can make T-shirts, but not jeans.

They're not doing manipulation in an unstructured environment. They're trying
to structure apparel sewing rigidly enough that they need a bare minimum of
adaptation to variations. That's how production lines work.

------
hyperion2010
I think that the idea that learning is what was missing from the prior
generation of AI is the most important insight of this generation. There are
many things that we don't know how to implement from first principles but that
can be implemented by a system that can learn. The problem now is that the
substrates for learning are extremely low level: practically the raw inputs to
the retina, or pure symbols. In order to go beyond the admittedly impressive
parlor tricks you can play with these kinds of inputs, we need much higher-
level representations that can serve as the substrates for learning. We are
still missing that ever-elusive 'common sense' knowledge about the world that
evolution baked into nervous systems millions of years ago, and it is not at
all clear to me whether learning algorithms will be the tool that allows
machines to build an actionable internal model of the world. Evolution didn't
do it by learning; it did it by billions of years of trial and error, and the
search space is unimaginably larger than that of something like Go.

~~~
lanstin
Not just evolution but also infancy.

------
thisisit
Here's an article by Gartner (credit to the original poster ooOOoo):
[https://www.gartner.com/smarterwithgartner/top-trends-in-the...](https://www.gartner.com/smarterwithgartner/top-trends-in-the-gartner-hype-cycle-for-emerging-technologies-2017/)

As per Gartner, Deep Learning and ML are near the peak of the Hype Cycle,
approaching the trough of disillusionment.

~~~
dredmorbius
The Gartner Hype Cycle seems to produce a heck of a lot of standing waves:
technologies that remain at the same point for years, sometimes decades.

[https://www.linkedin.com/pulse/8-lessons-from-20-years-hype-...](https://www.linkedin.com/pulse/8-lessons-from-20-years-hype-cycles-michael-mullany)

Though with all the hype AI/ML _is_ getting, it would hardly surprise me if
there were a great deal of disillusionment.

------
mikeash
Nature succeeded in creating human-level intelligence with one trick and no
understanding, so clearly it can be done. It did take a while though. More
tricks and more understanding would probably help speed things up.

~~~
amelius
It didn't just take a while. It also took a lot of resources.

And perhaps strong AI can only evolve if the agents can interact in a world
that is as complicated as ours.

~~~
abrichr
Cf. Kindred.ai

------
Upvoter33
This article matches a lot of my thoughts on this topic too. There is a huge
hype wave that will soon crash (alas), and it will take down a lot with it...

~~~
wvenable
I disagree. Practically all breakthroughs in computer science were around 30
years old by the time they became common and useful parts of everyday society.
If the breakthrough behind modern AI is just as old and we're only now seeing
it implemented everywhere, that's not a sign of a crash.

~~~
moxious
What do you see as the basic design of modern AI?

The 30 year old discipline of symbolic AI doesn't have much to do with today's
statistical AI.

~~~
fjsolwmv
Symbolic AI and statistical AI are both over 50 years old.

[https://en.m.wikipedia.org/wiki/Perceptron](https://en.m.wikipedia.org/wiki/Perceptron)

Statistical AI was delayed in practice because it was far more expensive than
symbolic AI.

------
bluedino
When is all this image recognition technology going to make it to my phone? I
have thousands of pictures in my phone, and scrolling back to find a memory
takes me forever and half the time I can't find it.

If I had a SQL interface of sorts I could easily say things like `select
pictures containing fish where date > 2 years ago'

I'd like to just say "Siri, show me all pictures of me on a boat from 2 years
ago", but I can't do that. "Siri, show me all my pictures of food when I was
in Seattle" \- why can't I have that?

I should be able to verbally tag all the faces it recognizes. "Find me that
picture of Jeff and Dave from Christmas"
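
For what it's worth, something in this direction can already be approximated with a pretrained classifier. A rough sketch of my own (the "photos" folder, the use of file modification time as a stand-in for the EXIF date, and the fish-label matching are all illustrative assumptions, not a real photo-app API):

```python
import datetime
from pathlib import Path
import torch
from PIL import Image
from torchvision import models

# Pretrained ImageNet classifier; ImageNet happens to include several fish classes.
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
labels = weights.meta["categories"]
preprocess = weights.transforms()

cutoff = datetime.datetime.now() - datetime.timedelta(days=2 * 365)

for path in Path("photos").glob("*.jpg"):
    taken = datetime.datetime.fromtimestamp(path.stat().st_mtime)  # stand-in for EXIF date
    if taken < cutoff:
        continue  # only photos from the last two years
    with torch.no_grad():
        logits = model(preprocess(Image.open(path).convert("RGB")).unsqueeze(0))
    top = labels[logits.argmax().item()]
    if "fish" in top.lower() or top in {"tench", "goldfish"}:
        print(path, top)
```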

~~~
mikeyanderson
I think Google Photos does that.

~~~
faitswulff
Google Photos does it, and does it well. My favorite trick (though largely
useless) is to search "people sitting" or "two people sitting."

------
sgt101
Well, here I was having my head spun round and round by GANs, LSTMs, Bayesian
belief nets, causality networks, counterfactuals, and scale... for
optimisation and simulation.

~~~
TeMPOraL
> _LSTMs, Bayesian belief nets, causality networks, counterfactuals_

Still waiting for the mainstream to discover those, though.

------
edanm
I think they're vastly underestimating the amount of _other_ things in the
field of AI that have been happening. This article is kinda like saying
"Turing invented computers in the '40s, and everything else we've done since
then has been based on that insight". Well, yeah, that's not necessarily a bad
thing.

Only in this case, that would be overstating it, because deep learning, as
impressive and hyped as it is, is still only _one_ area of the field. It's
true that most of the hype is around that, because it has given us
breakthroughs in image/video/audio/text applications. But I'd still wager that
most "AI" systems in the world use more traditional techniques, especially if
you're looking at the myriad data scientists using things as simple as linear
regressions.

And even _within_ deep learning, there have been interesting advances, e.g.
GANs have brought some very interesting applications (like style transfer).
Who knows, maybe in 30 years' time people will be writing about how everything
nowadays is built on GANs or deep reinforcement learning, a 30-year-old
technique!

------
ilaksh
Yes, there are still lots of people taking Hinton's and others' old approach
and rediscovering the many ways it falls short of general intelligence.
However, it is also the case that people are making a shitton of progress in
overcoming those problems, both while keeping some of those old DL assumptions
and by discarding many of them.

The pessimistic articles never seem to be aware of research like this:
[https://arxiv.org/abs/1612.00796](https://arxiv.org/abs/1612.00796) and
[https://hackernoon.com/feynman-machine-a-new-approach-for-co...](https://hackernoon.com/feynman-machine-a-new-approach-for-cortical-and-machine-intelligence-5855c0e61a70).

------
bob1029
AI in my mind has always hammered down this single path: build a network,
train it with x data for y iterations, then feed it live data and evaluate the
outputs. This approach seems to me like a glorified digital signal processing
system. I think there are countless applications for this approach, and I
think AI is an appropriate umbrella term, but there is so much potential
beyond this - Artificial General Intelligence, Strong AI, etc. I think neglect
of time is a major reason we haven't seen breakthrough progress in this area.

Today, how do we handle time-series data (e.g. audio, video, sensors) in an
AI? The first thing most people would do is look at an RNN technique such as
LSTM which enables a memory over arbitrary time-frames. But even in this case,
the definition of time is deceptive. We aren't talking about actual,
continuous time. All of the approaches I have ever seen are based upon the
idea that the network is discretely "clocked" by its inputs (or a need to
evaluate a set of inputs). What happens if you were to zero-out all input and
arbitrarily cycle one of these networks a million times? From the perspective
of the network, how much actual time has elapsed? How much real-world
interaction and understanding is possible without a strong sense of time? I
think the time domain has been a major elusive factor for a true general
intelligence.

What if you were to base the entire architecture of an AI in the time domain,
that is, use a real-time simulation loop that emulates the continuous passage
of time? This would require that all artificial neurons, and even the network
structure itself, be designed with the constraint that real time will pass
continuously, and that the network must continue to operate nominally even in
the absence of stimuli. In my mind this is a much closer approximation of a
biological brain and looks a lot more like the domain a general intelligence
would have to operate in. A continuous time domain enables all sorts of crazy
stuff like virtual brain waves at various sinusoidal frequencies, day/night
signaling, etc. I have found no prior art in this area, but would look forward
to reviewing anything I might have missed. I've already got a few ideas for
how I would prototype something like this...
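
As a toy sketch of what such a loop might look like (my own illustration under arbitrary assumptions about neuron dynamics, not prior art): a fixed-timestep simulation in which leaky neurons keep integrating a background drive and firing even when no stimulus arrives.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
n = 64
weights = rng.normal(0.0, 0.3, size=(n, n))   # random recurrent connectivity
potential = rng.uniform(0.0, 1.0, size=n)     # membrane potentials
dt = 0.001                                    # 1 ms of simulated time per tick
tau = 0.02                                    # leak time constant (20 ms)
threshold = 1.0
bias = 60.0                                   # constant background drive

def step(potential, stimulus):
    """Advance the network by one dt, with or without external input."""
    spikes = (potential > threshold).astype(float)
    potential = np.where(spikes > 0, 0.0, potential)         # reset neurons that fired
    drive = weights @ spikes + stimulus + bias
    potential = potential + dt * (-potential / tau + drive)  # leaky integration
    return potential, spikes

total_spikes = 0
start = time.time()
while time.time() - start < 1.0:                       # run for one real second
    potential, spikes = step(potential, np.zeros(n))   # no stimulus, dynamics still evolve
    total_spikes += int(spikes.sum())
    time.sleep(dt)                                     # tie simulated time to real time

print("spikes fired in one second with zero input:", total_spikes)
```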

------
jpfed
Was the one trick superhuman performance at Go? Was it human-level image
recognition? Or was it style transfer? I didn't make it to the end of the
article; maybe it turns out the trick was automated captioning or translation.

------
tripzilch
Funny thing: the problems with deep learning sketched in the article are
pretty much exactly the same in nature as the problems with machine learning 10
years ago (when other algorithms such as SVMs and LVQ outperformed NNs for a
bit), except of course that the examples of what an ML algorithm could do
back then were much less impressive.

That lack of real-world knowledge, understanding and conceptualisation feeding
back on itself has always been a big unknown roadblock standing in the way of
AI. And of course now, with the modern and improving impressive results of
deep learning, there appears to be less and less cool stuff to solve before we
finally have to face this roadblock.

But it's the same roadblock.

But maybe the advances in deep learning will provide some tools to chip away
at it. That word-vector stuff seems promising; if it can do (Paris - France +
Italy) ~= (Rome), that's a good stab at real-world knowledge, it seems.
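
That analogy arithmetic is easy to try with off-the-shelf word vectors; a small sketch using gensim (the vector file name is just a placeholder for whatever pretrained embeddings you have on disk):

```python
from gensim.models import KeyedVectors

# Load any pretrained word-vector file in word2vec format (path is a placeholder).
vectors = KeyedVectors.load_word2vec_format("word-vectors.bin", binary=True)

# Paris - France + Italy ~= Rome: add/subtract vectors, then find nearest neighbors.
print(vectors.most_similar(positive=["Paris", "Italy"], negative=["France"], topn=3))
```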

I studied Machine Learning at university until 2009 (when personal
circumstances forced me to abandon it). But even after that, when I read the
first papers and talks about deep learning (back when it was still about
Boltzmann machines), I got very excited and have been following it closely.
Except for the part where I haven't yet played around with it myself, apart
from some very tiny experiments :) (I only recently acquired hardware to have
a stab at it, so maybe soon. The libraries available seem easy enough to use,
and many of the concepts I learned in ML are still applicable.)

------
ryanackley
I think a lot of comments here are missing a tacit point of the article. It
took 30 years for this recent "breakthrough" to happen. Nobody has figured out
how to use deep learning to develop the next breakthrough in AI beyond deep
learning. Therefore, What happens after we run out of novel applications of
deep-learning?

------
atrexler
The thing you have to figure out is that our brains and consciousness are
just a big collection of "thoughtless fuzzy pattern recognizers". There's
nothing magic beyond that.

------
FrozenVoid
Instead of bigger and badder networks with zillions of layers, can we use the
opposite approach: reducing the network to the minimal size at which it still
works, and researching why it works (reduced case -> general rule)? A small
network would be much easier to analyze than a giant multilayer soup. Perhaps
we could even create a neural-network optimizer that minifies the target
network, increasing its efficiency without compromising its power at the tasks
it was built for.
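
One concrete version of that "minify the network" idea is weight pruning. Here is a minimal sketch using PyTorch's pruning utilities (the layer sizes and the 90% figure are arbitrary choices for illustration, and the untrained model is only a stand-in for a real trained network):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small stand-in network; in practice this would be the trained model under study.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Zero out the 90% of weights with the smallest magnitude in each linear layer,
# then make the pruning permanent so the sparse weights can be inspected directly.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)
        prune.remove(module, "weight")

remaining = sum((p != 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"{remaining}/{total} parameters remain non-zero")
```

The follow-up step would be re-evaluating the pruned network on its original task to see how much capability the reduced case actually retains.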

------
stephensonsco
Yep, it is only one trick. Just as electrifying the world had that one weird
trick of alternating current carried for miles over metal cables, and the
computing world had that trick of transistors. Pretty good tricks, though.

Understanding will come. Most people are still in the mapping phase. Hinton
has moved past that, and that's ok.

------
dredmorbius
I'd like to suggest that sources which explicitly demand that readers _not_
protect their privacy through incognito mode be banned from HN.

This is a demand which shows extreme contempt for the principles of personal
privacy and choice.

------
PaulHoule
The media talks about nothing but "Deep Learning", however exciting things are
happening with ensemble methods, SMT solvers, semantic and model-driven
systems, etc. You just don't hear about it.

~~~
thephyber
Ugh. Blaming "the media" is so last millennium. If you are posting on social
networking websites or blogs, you are part of "the media". I don't expect "SMT
solvers" to lead cable network news and you don't either.

You can always find some media outlet which covers your particular niche
topic, but you will never be able to push all niche topics all the time in to
the mainstream. In fact, it's human nature to avoid returning to places which
cause you the pain of cognitive overload.

------
ThomPete
You could argue that human consciousness is a one-trick pony. The way I see
it, the trick is that computers can learn; it's a one-trick pony, but a very
powerful and diverse one.

------
omot
Didn't read the article; responding purely to the title:

AI is a three-trick pony: regression, classification, clustering. Nothing more
and nothing less.

~~~
blt
AI is not just the contents of your undergrad machine learning class.

------
partycoder
It's not a one trick pony. It's the distinction between AI research and
applied AI.

There's more applied AI than there's research.

~~~
thephyber
From TFA:

> Just about every AI advance you've heard of depends on a breakthrough that's
> three decades old. Keeping up the pace of progress will require confronting
> AI's serious limitations.

The expression "one trick pony" means it _does_ one thing, not that its
foundational principles _are_ _based_ _on_ one research paper.

If this author's analogy holds, electromagnetism is a "one-trick pony", even
though it has millions of applications (which is what common usage of that
phrase refers to).

~~~
partycoder
I find it hard to use this analogy in the context of the whole field of AI.

The A in AI stands for artificial, and the AI label is so vague that it
applies to any form of automation, however trivial; e.g., fuzzy logic in a
washing machine is some sort of AI.

Maybe deep learning could be said to be the equivalent of this definition of
"one-trick pony".

------
epberry
Does anyone know anything more about these “capsules”? I’ve seen some nature
articles on them but these were from a neuroscience perspective. Has Hinton
published anything on them?

Also if Hinton is the Einstein of deep learning, then will capsules be his
“unified field theory”? I feel that if we embrace the Einstein analogy we
should embrace it to its fullest.

------
bayesian_horse
It is a particularly talented pony, though. (cue video of pony running
backwards)

------
bluetwo
ML is riding a one-trick pony.

------
yters
Once we get AI writing AI, then we are onto something.

------
hyperbovine
Winter is coming. (Again.)

------
rjromero
Finally a realistic view of where “artificial intelligence” currently stands.
I wish I knew where guys like Elon Musk are seeing this other artificial
intelligence I’m just not seeing. The current AI we have is just fancy linear
regression.

~~~
pesenti
Sorry but that's a ridiculous statement. It's like saying all the cloud
technology we have today is not much more than having VMs. Yes, AI is hyped,
but it has made some very unexpected progress over the past five years (e.g.,
solving facial recognition), more than many experts expected.

~~~
gue5t
"It's like saying all the cloud technology we have today is not much more than
having VMs."

No shit? Unless you mean that it's ignoring the "distributed systems" part of
the cloud, which is mostly a shitshow. The provisioning/configuration-
management stacks are all complete wrecks, stacking hacks atop ad-hoc
container schemes atop a poorly designed OS. A real distributed OS would be so
much simpler and more robust.

Calling it AI is very misleading when there has been minimal progress toward
actual reasoning (the closest I've seen being some LSTM work on answering
simple queries about scenarios based on prose descriptions of them). It's ML,
as in "learning a function", not as in "general-purpose learning like an
intelligent agent does".

~~~
pesenti
[https://en.wikipedia.org/wiki/AI_effect](https://en.wikipedia.org/wiki/AI_effect)

------
SubiculumCode
_Toronto is the fourth-largest city in North America (after Mexico City, New
York, and L.A.), and its most diverse: more than half the population was born
outside Canada. You can see that walking around. The crowd in the tech
corridor looks less San Francisco—young white guys in hoodies—and more
international. There’s free health care and good public schools, the people
are friendly, and the political order is relatively left-leaning and stable;
and this stuff draws people like Hinton, who says he left the U.S. because of
the Iran-Contra affair. It’s one of the first things we talk about when I go
to meet him, just before lunch._

The above paragraph taken from the article is an example of why these kinds of
articles are frustrating. This is filler. I want meat.

------
posterboy
Is Technology Review riding a one-trick pony? All they do is post texts. Is
text one-click bait? More after the break.

------
logicallee
I feel the same way about civilization, including science and technology -
it's really just this one trick. Sooner or later these monkeys are gonna run
out of ideas.

