
Andrew Ng on the state of deep learning at Baidu - calebgarling
https://medium.com/backchannel/google-brains-co-inventor-tells-why-hes-building-chinese-neural-networks-662d03a8b548
======
rrggrr
"I think the fears about “evil killer robots” are overblown. There’s a big
difference between intelligence and sentience. Our software is becoming more
intelligent, but that does not imply it is about to become sentient."

It is the period between sentience and "advanced" artificial intelligence that
should be worrying for two reasons: First, it is close at hand. More
importantly, the unintended consequences of tech are never well thought out in
the initial stages of adoption.

I'm reading Eric Schlosser's book on the early days of nuclear weapons and the
many, many near misses the US experienced as it adopted nuclear arms without
much thought to risk management. I see parallels in the race to develop and
deploy pre-sentient A.I. Link below to Schlosser's book.

[http://www.amazon.com/Command-Control-Damascus-Accident-Illu...](http://www.amazon.com/Command-Control-Damascus-Accident-Illusion/dp/0143125788/ref=sr_1_1?ie=UTF8&qid=1422904872&sr=8-1&keywords=nuclear+command+safety)

~~~
karmacondon
This appears to be an unpopular idea on HN, but: we are nowhere close to
developing sentient machines. We have made no progress toward it, and we're
not getting closer. Statistical optimization and sentience are parallel
lines: no matter how far you travel down one, you don't get any closer to
the other.

If "evil killer robots" are created, it won't be because of advances in big
data or deep learning or any other buzzword. It'll happen independently,
potentially aided by current state-of-the-art techniques but not because of
them. It could happen at any time; some guy in his basement could be on the
verge of a breakthrough as we speak. But Google's ability to detect cat faces
and Watson's win on Jeopardy are not signs of a coming apocalypse. It's
highly unlikely that we can cluster or gradient-descend our way to
human-level intelligence, no matter how many cores or hidden layers are used.

This whole "famous people are worried about AI" thing is understandably media
friendly, but it's a bit of a distraction. Fortunately the majority of active
researchers aren't taking it seriously at all.

~~~
olalonde
To be honest, I'm far more worried by cellular level simulations like
OpenWorm[0] because there seems to be a clearer path to human level AI: more
processing power and more advanced knowledge of brain biology. Given enough
time and effort, it seems inevitable to me. Luckily or unluckily - depending
on your perspective - we're still far from simulating a human brain.

[0]
[http://www.artificialbrains.com/openworm](http://www.artificialbrains.com/openworm)

~~~
djulius
"The modelling data that is processed by the engine is written in XML."

You don't need to fear much: by the time the engine evolves that data into
something scary, the universe will be long gone.

------
yongjik
> One of the things we did with the Baidu speech system was not use the
> concept of phonemes. It’s the same as the way a baby would learn: we show
> [the computer] audio, we show it text, and we let it figure out its own
> mapping, without this artificial construct called a phoneme.

Eh? A human baby normally doesn't even start to learn _text_ until well after
it shows a complete mastery of every phoneme in its native language along with
some basic vocabulary and a decent understanding of syntax.

I find it baffling that such a distinguished AI expert would assert that
phonemes are unnecessary invented constructs. Are there many who share the
same belief?

~~~
jsprogrammer
Complete mastery? I don't think so. Parents read books with children before
the child can speak.

As Ng noted, phonemes are artificial constructs. _If_ they are necessary for a
particular language then they should be learned in the same way other
constructs are, not imposed on the learning algorithm from the outside.

Are phonemes imposed on human babies? Maybe if you were raised on Hooked on
Phonics[1], but I'd wager that most humans did not have the idea imposed on
them.

[1]
[http://en.wikipedia.org/wiki/Hooked_on_Phonics](http://en.wikipedia.org/wiki/Hooked_on_Phonics)

~~~
yongjik
Well, think of it this way. Characters are artificial constructs, and even
much more artificial (with only thousands of years of history): there are
still languages in this world without a writing system, and they're doing
fine. (Well, maybe not _fine_, but that's more to do with the influence of
more powerful languages than with a lack of writing.)

However, no serious NLP researcher would suggest building a text-to-speech
system by getting rid of the middle OCR layer and just hooking the raw image
directly to the expected sound. (Well, at least I think so, but I'd be glad to
know if I'm wrong.)

Phonemes are _absolutely_ imposed on human babies. Sure, they wouldn't know
the word "phoneme", but that doesn't make the concept any less real. Imagine
how a typical lesson on reading and writing would start: "This is G. It is
used for words like grass, game, and girl. Actually, it can also be used for
gem or genie!"

Can you imagine an English-speaking child listening to this and not
immediately understanding that there are two distinct sounds involved, that
the first sounds for "grass", "game", and "girl" are somehow the _same_ (even
though the waveforms are different), and that this sound is somehow
_different_ from that of "gem" or "genie"? If a child doesn't understand it,
then the child probably needs speech therapy.

~~~
anko
Andrew is simply saying (if I read him correctly) that if you use generative
models you might classify sounds into something other than phonemes at one
layer, and that classification may actually be better than phonemes for
recognising sounds. Phonemes are a construct for us to express a concept with
language, but they might not be the best way to represent a sound in a
machine. So why write routines to map sounds to phonemes when you can train
the model to learn whatever categories help it recognise the sounds better?
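
The idea can be sketched with a toy network: audio-like feature vectors map
straight to character labels, and the hidden layer is free to discover its
own sound categories rather than being forced through a hand-designed
phoneme inventory. Everything here (the random data, the 13 MFCC-like
features, the layer sizes) is illustrative, not Baidu's actual system:

```python
import numpy as np

# Toy end-to-end sketch: features -> learned hidden categories -> characters,
# with no explicit phoneme layer. Data is random and purely illustrative.
rng = np.random.default_rng(0)

n_samples, n_features = 200, 13   # e.g. 13 MFCC-like features per frame
n_hidden, n_chars = 32, 27        # 26 letters + space (hypothetical labels)

X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_chars, size=n_samples)

W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_chars))

def forward(X):
    h = np.tanh(X @ W1)                            # learned "categories"
    logits = h @ W2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)

def cross_entropy(p, y):
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

losses = []
for _ in range(500):                               # plain gradient descent
    h, p = forward(X)
    losses.append(cross_entropy(p, y))
    grad_logits = p.copy()
    grad_logits[np.arange(len(y)), y] -= 1.0       # softmax + CE gradient
    grad_logits /= len(y)
    grad_h = (grad_logits @ W2.T) * (1 - h**2)     # backprop through tanh
    W2 -= 0.1 * h.T @ grad_logits
    W1 -= 0.1 * X.T @ grad_h

print(losses[0], losses[-1])                       # training loss should drop
```

The hidden units end up encoding whatever sound distinctions help predict
characters; whether those line up with linguists' phonemes is an empirical
question, which is roughly Ng's point.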

~~~
yongjik
Hmm, OK, that sounds much more reasonable. :)

------
dmix
> Ng insists that Baidu is “only interested in tech that can influence 100
> million users.”

Is this an effective way to start projects? Sure, it's the end goal, but
using it as the starting point might be unnecessarily limiting, constraining
ideas to things that seem safe at the beginning.

The seeds of most novel ideas often seem very niche/minor, and only after
they eventually explode in popularity does their importance start to become
apparent (see Twitter).

I'd rather invest in a large group of people broken up into small teams
trying varied approaches, from bold DARPA-style ideas to more practical
localized problems, similar to Valve or YC.

~~~
nl
There's a difference between the kind of things you are referring to as
"projects" and the kind of things Ng is referring to as "tech".

A "project" is usually an application of existing technology done in an
innovative way. It may well end up being the next Facebook, but it isn't
something that usually requires breakthroughs in scientific fields.

Ng's (and Baidu's) "tech" is a much more basic level of research. Baidu isn't
interested in funding research into translating English into Latin because -
while it is novel - it is unlikely to impact 100 million users.

OTOH: Chinese/English translation? Yes. World leading voice recognition?
Yes[1]. Understanding images? Of course![2]

[1] [http://www.forbes.com/sites/roberthof/2014/12/18/baidu-annou...](http://www.forbes.com/sites/roberthof/2014/12/18/baidu-announces-breakthrough-in-speech-recognition-claiming-to-top-google-and-apple/)

[2]
[http://arxiv.org/pdf/1410.1090v1.pdf](http://arxiv.org/pdf/1410.1090v1.pdf)

------
dharma1
Is anything happening with cultured neural networks? If we can't approximate
neurons or neural networks well enough in software or hardware, why not just
grow a large biological neural network and interface with that?

