
Why 2015 Was a Breakthrough Year in Artificial Intelligence - JohnHammersley
http://www.bloomberg.com/news/articles/2015-12-08/why-2015-was-a-breakthrough-year-in-artificial-intelligence
======
peter303
I am old enough to remember three A.I. booms and two intervening "winters".
The first boom was picking all the low-hanging fruit: playing checkers,
moving blocks, solving word problems. The power of computers in the '60s and
'70s was pitiful. Then the easy stuff was done for a while.

The second boom was expert systems and logic supercomputers. Those systems
never worked that well, and A.I. went into a long sleep.

Now it's supercomputer data mining and greatly improved neural networks.

~~~
applecore
It's different this time!

~~~
rquantz
I think GP is saying it actually is different this time.

~~~
m-photonic
I don't think there's going to be a winter this time. The current boom may
only take us so far, but there are a couple more booms on the horizon that in
all likelihood will be cusping well before this one is finished.

~~~
karmacondon
Care to give any examples of those upcoming booms?

------
mojuba
Back in 2005 I suggested that it would be interesting to build a system that
would play any game without being given directions (that blog post has since
disappeared from the web). This, and not image recognition, would mean
intelligence, I thought at the time: you can't truly tell a donkey from a dog
without having some general knowledge about the world donkeys and dogs (and
your AI) are living in.

The academic image-recognition machine, though, seems unstoppable, and yes,
it does improve over time. I honestly don't know what the limits of "dumb"
image recognition are in terms of quality, but calling it AI still doesn't
make sense to me.

~~~
gnaritas
> but calling it AI still doesn't make sense to me.

That's the core problem of AI: no matter what progress is made, it's
instantly called "not AI" anymore, while the goalpost of what AI is keeps
being pushed out to "not this". The real issue, of course, is that we don't
know how intelligence actually works, so it's impossible to set a fixed
goalpost for when AI is truly achieved.

~~~
maaku
> The real issue, of course, is that we don't know how intelligence actually
> works, so it's impossible to set a fixed goalpost for when AI is truly
> achieved.

No, the real issue is that people still think, intuitively, that there's a
little homunculus in their head making the decisions. Each time we build
something that doesn't look like a homunculus, we've failed at attaining
"real" intelligence...

~~~
mojuba
Making decisions is not that complicated, nor is it interesting. Your iPhone
can decide to kill a background process that takes up too many system
resources. What's more interesting (and what seems to be the main function of
the so-called homunculus) is being aware of your own location in space and
time, as well as remembering previous locations. In other words, having some
model of the world and knowing your place in it is what computers haven't
achieved yet in any meaningful way.

~~~
dllthomas
So, map building is now what will _truly_ define AI until it's also just
another technology.

~~~
dkarapetyan
How is map building AI? It's a pretty mechanical process. Start somewhere,
make some measurements, move along, repeat. At what point is there any notion
of intelligence involved?

~~~
dllthomas
It's deeper than you describe...

[https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping](https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping)

... but I agree that there's nothing in particular that distinguishes it from
other problems.

My comment was satirical in nature. SLAM is an interpretation of what the
parent comment had described:

_"[B]eing aware of your own location in space and time, as well as
remembering previous locations. In other words, having some model of the
world and knowing your place in it[.]"_

There is a general pattern of statements of the form "We'll only _really_ have
AI when computers X", followed by computers being able to X, followed by
everyone concluding that X is just a simple matter of engineering like
everything else we've already accomplished. As my AI prof put it, ages ago,
"AI is the study of things that don't work yet."
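For what it's worth, the predict/measure/correct loop at the heart of SLAM-style localization can be sketched in a few lines. This is a toy, not real SLAM: the landmark position is given rather than estimated jointly with the pose, and every name and number here (`localize`, the noise levels, the step count) is invented for illustration.

```python
import random

# Toy 1D localization: a robot dead-reckons with noisy motion and corrects
# its position belief from noisy range measurements to a *known* landmark.
# Real SLAM would also estimate the landmark position; this only shows the
# predict/update (Kalman filter) loop.
def localize(steps: int = 20, seed: int = 0) -> float:
    random.seed(seed)
    landmark = 50.0                      # known map: a single landmark
    true_pos, belief, var = 0.0, 0.0, 0.0
    motion_sd, meas_sd = 0.5, 0.3        # noise standard deviations

    for _ in range(steps):
        # Predict: command "move +1"; actuation is noisy, belief drifts.
        true_pos += 1.0 + random.gauss(0.0, motion_sd)
        belief += 1.0
        var += motion_sd ** 2

        # Update: a noisy range to the landmark implies a position estimate.
        observed_range = landmark - true_pos + random.gauss(0.0, meas_sd)
        implied_pos = landmark - observed_range
        gain = var / (var + meas_sd ** 2)        # Kalman gain
        belief += gain * (implied_pos - belief)
        var *= 1.0 - gain

    return abs(belief - true_pos)                # final estimation error

print(f"error after 20 steps: {localize():.2f}")  # stays small despite drift
```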

~~~
cwyers
Or it could be that a system capable of reasoning its way to doing X would be
intelligent, but you can also teach to the test, so to speak, and build a
system that does X without being generalizable, thus satisfying X without
being intelligent.

~~~
gnaritas
> Or it could be that a system capable of reasoning its way to doing X would
> be intelligent, but you can also teach to the test, so to speak, and build
> a system that does X without being generalizable, thus satisfying X without
> being intelligent.

Which is exactly what we do with many kids today; makes you wonder how many
times we might invent AI and not know it because we don't raise it correctly
so it appears too dumb to be considered a success.

------
intrasight
I see no AI breakthroughs. I see image processing.

~~~
Houshalter
Image recognition requires AI. It used to be believed that it was simple. A
famous AI researcher in the '60s once set a group of grad students to solve
it over the summer. They then started to realize just how complex and
impossible the task was.

Fifty years later, we have finally made general-purpose learning algorithms,
vaguely inspired by the brain, which are just powerful enough to do it. And
because they are general purpose, they can do many other things as well:
everything from speech recognition to translating sentences to controlling
robots. Image recognition is just one of many benchmarks that can be used to
measure progress.

~~~
elijahz
Relevant xkcd: [http://xkcd.com/1425/](http://xkcd.com/1425/)

~~~
kazinator
Thanks to that, I found [http://xkcd.com/1428/](http://xkcd.com/1428/). That
is genius.

------
ced
2700 deep learning projects at Google... What are they doing, besides the
obvious? What's a good ball-park estimate of the number of "projects at
Google"?

~~~
hiddencost
2700 doesn't strike me as remotely crazy. I'm guessing that some of those
projects all serve the same goal, e.g., for their speech system, they have:
acoustic modeling; language modeling; named-entity recognition; intent
classification; domain classification; grapheme-to-phoneme conversion;
language detection; wake-word detection. This ignores other stuff that happens
around speech (for example, I know they were using a CRF to label different
types of numbers for training their spoken-to-written form converters, which
AFAIK are still using WFSTs, although at this point I wouldn't be shocked if
both of those systems were converted to DNNs). So let's take an estimate of 10
DNNs for their speech systems. Per language, so make that 200 DNNs to support
20 languages. This ignores that they have separate models for YouTube, voice
search (one model for on-device and a cloud-side model), voicemail.

Their machine translation system probably has a similar # of DNNs, and there
you have to deal with language pairs, rather than single languages. Let's call
it another 400.

That's two side-projects. Then you pull in query prediction, driverless cars,
all kinds of infrastructure modeling, spam detection, all of the billions of
things that are happening in ads, recommendations, I haven't really even
mentioned search yet... Honestly, if I'm right in assuming that the cited
figure is really "# of DNNs that do different things", then I'm surprised it's
not higher.
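The arithmetic above can be written out as a back-of-envelope sketch. Every number here is a guess taken from this comment, not an official Google figure:

```python
# Back-of-envelope count of DNNs from just two product areas.
# All counts are guesses from the comment above, not official figures.
speech_models_per_language = 10      # acoustic, language, NER, intent, ...
languages = 20
speech_total = speech_models_per_language * languages    # 200

translation_total = 400              # language *pairs*, so roughly double

print(speech_total + translation_total)  # 600 before search, ads, cars, ...
```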

------
n0us
Why are there no units on those graphs?

------
bsder
I'll believe it's AI when the speech recognition error rates finally start
dropping again.

~~~
frik
Speech recognition has barely improved since the 1990s.

We had Dragon NaturallySpeaking on a 133MHz Win95 PC (offline, of course).
After training it for about 10 minutes, it worked as well as or better than
Ford's Sync car assistant (offline) and Siri/Google Now/Cortana. Then again,
all of these services licensed Nuance's speech technology, which Nuance got
by buying the company behind Dragon NaturallySpeaking. The Ford board
computer runs WinCE at only 233MHz and is still sold in many 2016 Ford cars
around the world. And with cloud hosting, to scale the service, each user
gets only a small slice of total CPU time anyway.

What I want is offline speech recognition software on my mobile devices! So
do I have to install Win95 in an emulator on my smartphone just so my multi-
core high-end smartphone can do what a Pentium 1 could do in 1996? My hope is
in open-source projects. But most such OSS projects are university projects
with little documentation on how to build the speech model, little community,
an outdated site, code written in Java 1.4, and no GitHub page. There is
definitely a need for a good, competitive C/C++ (native code) TTS and speech
recognition project.

~~~
sangnoir
> Speech recognition has barely improved since the 1990s.

I find that hard to believe. Do you have any citations for that, or is it
just your gut feeling?

A cursory search shows a 26% error rate[1] for Dragon NaturalSpeaking in the
year 2000 (beaten by IBM in the same report at 17%).

By May 2015, if Sundar Pichai is to be believed, Google has an 8% error
rate[2]. In my books, 26-to-8% (or even 17-to-8%) is far from _barely_
improved.

1\.
[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC79041/#!po=1.56250](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC79041/#!po=1.56250)
Table 3, General Vocabulary

2\.
[http://venturebeat.com/2015/05/28/google-says-its-speech-recognition-technology-now-has-only-an-8-word-error-rate/](http://venturebeat.com/2015/05/28/google-says-its-speech-recognition-technology-now-has-only-an-8-word-error-rate/)
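For context on what those percentages measure: speech recognition benchmarks typically report word error rate (WER), the word-level edit distance between the recognizer's output and a reference transcript, divided by the reference length. A minimal sketch (assuming a non-empty reference):

```python
# Word error rate: Levenshtein distance over words, normalized by the
# number of words in the reference transcript.
def wer(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance table.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("the" -> "a") out of 6 reference words: ~0.17, i.e. 17%.
print(wer("the cat sat on the mat", "the cat sat on a mat"))
```

So an 8% WER means roughly one word in twelve is wrong, versus about one in four at 26%.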

~~~
bsder
> By May 2015, if Sundar Pichai is to be believed, Google has an 8% error
> rate[2].

Much of Google's stuff is for search-term recognition only. Its functionality
on general dictation is nowhere near that good.

------
mrdrozdov
This is an old article. Posted December 12, 2015.

~~~
drdeca
That's less than a month ago?

