
The Great A.I. Awakening - m15i
http://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html?action=click&contentCollection=Business%20Day&module=Trending&version=Full&region=Marginalia&pgtype=article
======
apsec112
Sundar seems very confused here. The idea that we should invent AGI to make
"personal digital assistants" is like Hogwarts students inventing time
machines so they can take two classes at once. I mean, yes, it would
technically enable you to do that, but it doesn't matter because at that point
you have way, way bigger problems. If a fleet of alien spaceships arrived in
the sky over Manhattan tomorrow, our first reaction would not be "oh, now the
aliens can be our personal digital assistants!" (or, for that matter, "they
might take our jobs!"). Yet the invention of general AI would be even more
powerful than that. It would be like enabling any programmer in the world to
summon a personal fleet of alien spaceships, on command, from a planet of
their choosing anywhere in the universe. "Digital assistants" would be the
absolute last thing anyone had on their mind.

~~~
BeingIncubated
By the time AGI is developed, it would be able to improve itself/build
improvements, causing the runaway effect that leads to superintelligence. The
idea of making "personal digital assistants" is humans thinking they can tame
a god to be a servant. Taming a superintelligence would be like a dog taming
human civilization, and perhaps the gap between man and ASI is larger than the
gap between dog and man.

~~~
ClassyJacket
>By the time AGI is developed, it would be able to improve itself/build
improvements,

I see no reason that's necessarily true. I'm a general intelligence, and I
can't make myself smarter.

~~~
Fricken
You're an iteration of an AGI system that has been improving itself for
hundreds of millions of years. The rate at which biological AGIs improve over
time is very slow, but it's not like nature has any good reason to be in a
hurry.

But interesting things happen when you network billions of biological AGIs
together: it leads to all sorts of emergent phenomena. Now the biological
AGIs are working on these newfangled mechanical AGIs, which, while still
crude, aren't bound by the same constraints and can iterate much faster.
Biological AGIs have crippling bandwidth/memory issues which aren't really a
problem at all for their mechanical counterparts. These mechanical AGIs, I
think they'll go places.

~~~
smhost
That gives me some really strong existential heebie jeebies. I mean, what then
even is our value? Why exist at all, at that point? I don't know about other
people, but I get my sense of purpose in believing that we're the captains of
this Spaceship Earth, and that we're making progress towards something
significant. I don't know what that something is, but I have a vague idea, and
at the very least we seem to be the best that we've got. I don't know. Maybe
I'm thinking about all of this wrong. God, why am I so damn confused all the
time? Fuck.

~~~
simonh
> God, why am I so damn confused all the time?

You and me both. I'm afraid it's because we're only just barely sentient. If
you think about it, in evolutionary terms we literally only just now managed
to build a technological society because we only just got smart enough to do
it. We are by definition at the absolute minimum level of intelligence that's
able to do that, otherwise we'd have done it sooner. We've had plenty of time.

The human brain is a botch job of highly optimized special-function systems
that has developed just enough sophistication to manage basic levels of
abstract thought. That's why it takes months of training and practice to teach
us how to reliably perform even the simplest mathematical tasks such as
multiplication or division.

We've spent thousands of years congratulating ourselves on how clever we are
compared to animals and how we're the ultimate product of the natural world.
"I think, therefore I am" is held up as an amazingly deep insight that's one of
the pinnacles of our philosophical achievement. Future AIs will laugh their
virtual asses off. So it's not just you, it's all of us. At least you're aware
of it.

~~~
gaius
_we literally only just now managed to build a technological society because
we only just got smart enough to do it. We are by definition at the absolute
minimum level of intelligence that's able to do that, otherwise we'd have
done it sooner._

I don't think that's true - you could get a bunch of contemporary humans and
drop them on a pre-industrialized planet and tell them to bootstrap a
technological civilization yet they'd probably all have died of old age before
scratching the surface. Locating the raw materials and iteratively building
more and more sophisticated artefacts simply takes time, no matter how smart
you potentially are.

~~~
simonh
> you could get a bunch of contemporary humans and drop them on a pre-
> industrialized planet and tell them to bootstrap a technological
> civilization yet they'd probably all have died of old age before scratching
> the surface.

You're not selling me on the idea that these people are particularly bright,
on a cosmic scale.

~~~
gaius
My point is that "soonness" is not just a matter of intelligence; no matter
how smart you are it still takes time.

Let's take your typical HN'er who probably thinks of themselves as very smart
indeed and put them in this scenario. Then they will quickly learn that in
order to make Angular.js, you must first locate a supply of clean drinking
water and make a fire and survive the first night...

------
aedron
And the pattern of overselling A.I. continues. What we have accomplished in
this generation is pretty good pattern recognition. Nice, but limited in its
usefulness to obvious applications, like translation and image and audio
classification. Go and Chess games are rule-constrained enough to be reducible
to patterns, so it works for them too, giving a false impression of 'general
AI'.

What's worse is that this approach seems to be a dead end, in the sense that
it is only useful for pattern recognition, which can substitute for decision
making only for extremely simple processes (absent a quantum leap in a range
of technologies), and even then those kinds of applications are notoriously
difficult to develop and maintain.

I look forward to the enormous benefits we will get from machine learning, as
we are already seeing, but again, overselling it won't do us any good.

~~~
iandanforth
If we only looked at feed-forward conv nets, then you'd be correct. However,
curriculum learning in environments like Universe and Labs is a critical step
toward generality and planning-based AIs. Solving catastrophic forgetting and
increasing the time interval between behavior and reward are non-trivial
steps. I'm not saying it's raining right now, but yeah I'm saying you should
buy an umbrella.

~~~
zardo
Yeah... I don't see how anyone can do a survey of the current state of the art
and come to the conclusion that we are definitively _not_ approaching a
general solution to AI.

It's just not credible to claim that you know what can be done with all
possible combinations of current ideas, much less those that will be had
tomorrow.

~~~
tossedaway334
We have always been approaching a general solution to AI (provided one is
possible).

------
delegate
Honest question - have you read this entire article?

...

I'm sure it's written by a smart and talented journalist - but it's just too
long. I cannot possibly allocate so much of my time to read a single news
article!

I know it's kind of my problem, but I'm sure lots of others are just like me -
it's just the nature of our tech-provoked ADD.

I skimmed through it all right, but I didn't get too much out of that. If I
need to go deep, I grab a book!

Good journalism doesn't get through, because it demands too much attention
from its (busy) readers.

A TLDR version of good articles is a must. That's a good problem for AI, isn't
it? For the moment, though, I'm sure the authors themselves could make a
nice summary readable in maybe 3-5 minutes.

~~~
adriand
I read the whole article, but I didn't read it all at once. It was interesting
enough that after I'd read the first half, I kept thinking about it, and made
sure I finished it the next day.

~~~
webmaven
Same here. Actually read it twice, with the pauses in different spots.

------
earthly10x
A lot of AI is about vector space these days.

https://www.kaggle.com/c/word2vec-nlp-tutorial/forums/t/12349/word2vec-is-based-on-an-approach-from-lawrence-berkeley-national-lab

"That interim also saw dedicated attempts on the part of Google’s competitors
to catch up. (As Le told me about his close collaboration with Tomas Mikolov,
he kept repeating Mikolov’s name over and over, in an incantatory way that
sounded poignant.)"

"Just as the chip-design process was nearly complete, Le and two colleagues
finally demonstrated that neural networks might be configured to handle the
structure of language. He drew upon an idea, called “word embeddings,” that
had been around for more than 10 years. When you summarize images, you can
divine a picture of what each stage of the summary looks like — an edge, a
circle, etc. When you summarize language in a similar way, you essentially
produce multidimensional maps of the distances, based on common usage, between
one word and every single other word in the language. The machine is not
“analyzing” the data the way that we might, with linguistic rules that
identify some of them as nouns and others as verbs. Instead, it is shifting
and twisting and warping the words around in the map. In two dimensions, you
cannot make this map useful. You want, for example, “cat” to be in the rough
vicinity of “dog,” but you also want “cat” to be near “tail” and near
“supercilious” and near “meme,” because you want to try to capture all of the
different relationships — both strong and weak — that the word “cat” has to
other words. It can be related to all these other words simultaneously only if
it is related to each of them in a different dimension. You can’t easily make
a 160,000-dimensional map, but it turns out you can represent a language
pretty well in a mere thousand or so dimensions — in other words, a universe
in which each word is designated by a list of a thousand numbers. Le gave me a
good-natured hard time for my continual requests for a mental picture of these
maps. “Gideon,” he would say, with the blunt regular demurral of Bartleby, “I
do not generally like trying to visualize thousand-dimensional vectors in
three-dimensional space.”
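
To make the "list of a thousand numbers" idea concrete, here is a toy Python
sketch. The vectors below are invented for illustration only (real word2vec
embeddings are learned from co-occurrence statistics and have hundreds of
dimensions); closeness in the "map" is typically measured with cosine
similarity:

    import numpy as np

    # Toy 4-dimensional "embeddings", invented for illustration only.
    # Real word2vec vectors are learned from co-occurrence statistics,
    # not written by hand, and have hundreds of dimensions.
    vectors = {
        "cat":  np.array([0.9, 0.1, 0.8, 0.3]),
        "dog":  np.array([0.8, 0.2, 0.7, 0.2]),
        "tail": np.array([0.7, 0.0, 0.9, 0.1]),
        "car":  np.array([0.1, 0.9, 0.0, 0.8]),
    }

    def cosine_similarity(a, b):
        """The standard closeness measure in embedding space."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # "cat" can sit near "dog" and near "tail" at the same time because
    # each relationship can live along different dimensions.
    print(cosine_similarity(vectors["cat"], vectors["dog"]))   # high
    print(cosine_similarity(vectors["cat"], vectors["tail"]))  # high
    print(cosine_similarity(vectors["cat"], vectors["car"]))   # low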

~~~
YeGoblynQueenne
>> When you summarize language in a similar way, you essentially produce
multidimensional maps of the distances, based on common usage, between one
word and every single other word in the language.

The problem with word embeddings, or any distance-based model really, is that
language doesn't work that way.

Chomsky has a standard example he uses to make this point: "Instinctively,
eagles that fly swim". He points out that in this phrase, the "instinctively"
goes with "swim" (as in "instinctively, they swim") even though the phrase,
and the attachment, mean nothing (the phrase is nonsensical by design).

If the relation was really based on distance, we would expect "instinctively"
to attach to "fly". The fact that it doesn't suggests that there is something
else that makes us pick the correct association out of all the possible
interpretations in that sentence.
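
Whether a statistical parser actually makes that attachment correctly is easy
to probe. A minimal sketch using spaCy's dependency parser (assuming spaCy and
its small English model en_core_web_sm are installed); the line to watch is
which verb "Instinctively" ends up attached to:

    # Print each token, its dependency label, and the head it attaches to.
    # Requires: pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Instinctively, eagles that fly swim.")
    for token in doc:
        print(token.text, token.dep_, token.head.text)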

Word vectors in their original form also have trouble with homonyms and other
"faux amis": for instance, the word "cat" - is it referring to the animal, or
to the Linux command? In vector space there wouldn't be any difference, so the
animal would end up associated with the redirection symbol ">" and the Linux
command with "small" and "furry".

~~~
gipp
The "distance" referenced in your quote is not distance in a sentence, it's
the distance between points in this abstract embedding space. Two completely
different things. The Chomsky argument isn't really relevant here.

~~~
YeGoblynQueenne
Meh. You're totally right, of course. What the hell was I thinking? :/

------
strnbrg
That round-trip translation the article starts with is way, way better than
anything I've ever obtained from Google Translate.

Let's use it to translate what I just wrote, into Spanish:

"Ese viaje de ida y vuelta el artículo comienza con es mucho mejor que
cualquier cosa que he obtenido de Google Translate."

That's readable but the "el articulo comienza con" is bush league — a clear
sign Google is translating word for word. No one would ever mistake this
translation for real Spanish.

If we translate back to English, the result is better than the Spanish!

"That trip back and forth the article starts with is much better than anything
I've gotten from Google Translate."

So, amusingly, a weakness on one-way translation — the word-for-word method —
becomes a strength on round trip translations. (Not that round trip
translations are going to be useful to anyone.)

I did Spanish so that a lot of people here would understand. But now let's do
Hebrew. I get

תרגום הלוך ושוב כי המאמר מתחיל עם הדרך דרך טוב יותר מכל דבר שאי פעם שיתקבל מ-
Google Translate.

That's beyond merely bad, it's pretty unintelligible as Hebrew. (In fact the
only way a Hebrew speaker would make any sense of it is if he knew English and
tried translating word for word back to English.)

And indeed, once again the round trip translation is better, though the
original meaning is pretty much lost:

"Translating back and forth that article begins with a way better way than
anything I've ever received from Google Translate."

------
KKKKkkkk1
When your barber starts talking about strong AI reaching self awareness, sell
sell sell.

~~~
randcraw
At that point, unfortunately, money will have no value.

------
tehchromic
The author gets it close to right when he talks about Google Maps as a form of
AI, and the notion of raising the bar.

What's missing in this article as well as most reporting on AI is the
differentiation between artificial intelligence and artificial consciousness,
otherwise known as individuality or self awareness.

To me there's a whole smoke and mirrors phenomenon going on in the AI topic,
especially the idea of "emerging AI", and the supposed potential danger that
poses, and it's tied to the tendency we have as humans to anthropomorphize
inanimate objects, and to believe in supernatural effects.

That tendency allows the idea of artificial self awareness to always float
behind the scenes in these conversations, and lets normal reporting on AI be
magically conflated with a different topic.

It's important to realize that AI is nowhere near being self-aware, or
conscious, or "awake", and won't be no matter how far the field and
implementation goes. No matter how many Turing Tests they pass, intelligent
machines will be no more conscious or self-aware than the Mechanical Turk!

That's because solving the problem of self-awareness, or consciousness is a
different engineering challenge than solving problems of AI. Consciousness is
a more complicated and specialized thing.

Were we to build an artificially self-aware machine, we would not expect it to
pass a Turing Test. Instead we might expect different things of it and ask
different questions to determine if it is self aware: can it adapt and survive
without human help, i.e. can it trap and store energy and reproduce itself, and
what purpose does it find for itself, what objective does it pursue ...

These are things machines are capable of as well, but as I said: it's a
different engineering challenge than producing information that is organized
to be sensible to the human mind, which is the AI challenge, and the Turing Test.

That isn't to say machine learning isn't potentially dangerous, on the scale
of atomic weapons or greater, especially in conjunction with automation;
however, the idea of an artificially emergent consciousness with intelligence
greater than our own is hogwash: we would do better to pay attention to our
own emergent lack-of-intelligence systems and worry about them taking over
first.

~~~
webmaven
You've shifted the goalposts and erected strawmen so many times in this brief
passage, I hardly know where to start...

 _> No matter how many Turing Tests they pass, intelligent machines will be no
more conscious or self-aware than the Mechanical Turk!_

I see. Well, this is just a rephrasing of the "Chinese Room" discussed in the
article. Taken to its logical conclusion, I am certainly self aware, but the
rest of you are all just acting out complex behaviors encoded in chemical and
electrical gradients, successfully mimicking consciousness.

I think that if any entity exhibits the behaviors associated with conscious
thought, it would well behoove us to treat such entities as conscious, or we
may very well find ourselves holding the short end of that particular stick
sooner than we'd like.

 _> That's because solving the problem of self-awareness, or consciousness is
a different engineering challenge than solving problems of AI. Consciousness
is a more complicated and specialized thing._

Since there is no doubt that ML/AI has a long way to go toward AGI, and along
the way we can expect the discipline to evolve considerably in many unexpected
directions, this assertion of yours is close to tautological.

 _> Were we to build an artificially self-aware machine, we would not expect
it to pass a Turing Test._

Why not?

 _> Instead we might expect different things of it and ask different questions
to determine if it is self aware: can it adapt and survive without human
help,_

So, anyone severely ill to the point that they cannot do without assistance is
not conscious and self aware?

 _> i.e. can it trap and store energy and reproduce itself,_

So, a single-celled organism _is_ conscious?

 _> and what purpose does it find for itself, what objective does it pursue
..._

Ah, this seems a relevant criterion, but keep in mind that humans can be
subjected to operant conditioning ("brainwashing") to impose external goals,
not to mention that humans actually _require_ a couple of decades of such
conditioning (albeit rather more gradual and haphazard) before being
considered competent members of society, but we don't consider humans to be
less conscious or less self-aware on either side of that particular divide.

 _> it's a different engineering challenge than producing information that is
organized to be sensible to the human mind, which is the AI challenge, and the
Turing Test._

Given that people have to be specially educated to produce information that is
organized to be sensible to a computer, I don't see why an AGI, whatever its
capabilities "out of the box", so to speak, shouldn't be expected to be
capable of learning to be sensible to humans.

~~~
tehchromic
I am not sure we are going to be able to understand each other. I find your
thoughts to be completely missing a foundation that I think would be
necessary to understand what I'm saying. I don't mean to be rude ...

Yes of course a single-celled organism is conscious.

Exactly the way an amoeba is self aware is how a self-conscious intelligent
system would need to be to pose any kind of threat: organized to find energy
sources and metabolize, replicate, etc.

I'll tell you: a single-celled organism is way more self-aware, and way more
functionally complex, than any computer or software - in fact it's orders of
magnitude more complex a machine.

That's my point: solving the problems that make a machine capable of producing
intelligence that is sensible to you and me is not solving the problems that
make a machine like a single-celled organism, which is to say vertically
integrated from the atom upwards to be a self-sustaining, self-propagating
energy trap.

A self-aware human who is disabled and can't live without the intervention of
other humans can't self-sustain, and therefore will not pass the test of being
able to self-sustain. But it's a test, and one failing case doesn't invalidate
the hypothesis. It can still be a great test even if it fails a percentage of
the time.

In general we know that all self-conscious organisms self-sustain, even
social, super-organism ones that need each other to survive, so a criterion
for a self-aware organism is that it be capable of self-sustaining. We don't even
have a good test for that yet. But a test that would fail a perfectly self-
aware disabled human wouldn't be a good one.

We could very well administer a Turing Test to an artificial consciousness,
but my point is that it wouldn't be a very accurate test. A Turing Test only
proves the accuracy of a facsimile of human intelligence. It proves nothing
about self-conscious systems. An amoeba would fail it in an instant, as
would a parrot or dolphin - and if you tell me these organisms aren't self-
aware and conscious then we are definitely not on the same page.

I could be wrong. I'm absolutely interested in anyone who can make a
convincing argument otherwise; however, until then I'm pretty certain that no
emerging conscious machine will happen by accident. Rather it would take a
Manhattan Project or greater to produce an artificial consciousness on par in
sophistication with an amoeba. And we don't have much motive to attempt it
either, so I'm doubting we will do it anytime soon.

------
thinkMOAR
“When Alexander Graham Bell invented the telephone, he saw a missed call from
Jeff Dean.”

Google Brain requires a patch.

https://www.theguardian.com/world/2002/jun/17/humanities.internationaleducationnews

------
torpfactory
It occurs to me that the thing that will probably limit advancements in AGI
will be the availability of data to feed these systems.

If you believe that Moore's law is only on life support and not totally dead,
there will be more processing power to harness in the future. The number of
researchers and investment are clearly growing very quickly. The models used
can endlessly be improved.

But on the other hand there are so many things that even today just aren't
captured as digital data. I work as a mechanical engineer and there are many
nuances to mechanical design that appear nowhere in print (or youtube video,
or blog post for that matter). Learning these things takes a complex
combination of sight, touch, and intuitive leaps. Even unsupervised learning
requires some input to feed the net. I just don't see where it will come from.

Anyone think I'm way out of line here?

~~~
joggery
I think you're mistaken about this. Once the philosophical and technical
breakthroughs are made that allow us to build an AGI, it will get all the
data it needs from its environment. It would be 'unsupervised' in the sense
that human children are, i.e. no pre-processing of data required, but it would
still need parenting.

------
raphar
Lost in translation:

I'm not quite comfortable with how the Borges quote was translated. To show
the difference, the new Translate version translates back to Spanish as:

Tu no eres lo que escribes, eres lo que has leido. A bit different from the
meaning of Borges' original phrase.

A sentence with a closer meaning (but a bad translation) would be:

One's way of being is not (caused) for what you have write, but for what you
have read.

>Grinning, Pichai read aloud an awkward English version of the sentence that
had been rendered by the old Translate system: “One is not what is for what he
writes, but for what he has read.”

>To the right of that was a new A.I.-rendered version: “You are not what you
write, but what you have read.”

edit: formatting

------
azinman2
It'd be really great if people could stop using the term AI by itself when
they mean weak AI, or really machine learning. It's ultimately very misleading
-- the only "awakening" that's happened is neural nets are popular again and
getting good results. If strong AI gets solved... well then that's really when
the machine will have awakened!

------
verdex_blarg
-Sees random article about AI-

"Hey, I'm interested in AI maybe this is worth some investigation"

-Sees that it's from nytimes-

"Uh oh"

~~~
isof4ult
Could you elaborate on this?

------
perseusprime11
I will believe this the day I stop getting spam emails in my inbox.

------
aaron-lebo
The use of Hemingway as an example is unfortunate. Maybe something less
literary would be a better example:

 _Close to the western summit there is the dried and frozen carcass of a
leopard. No one has explained what the leopard was seeking at that altitude._

 _Near the top of the west there is a dry and frozen dead body of leopard. No
one has ever explained what leopard wanted at that altitude._

One of those is pretty stilted and not much better than a lot of machine
translations. The uncanny valley of being good enough but not optimal is
probably something that's going to plague AI for a long time.

 _Google’s decision to reorganize itself around A.I. was the first major
manifestation of what has become an industrywide machine-learning delirium.
Over the past four years, six companies in particular — Google, Facebook,
Apple, Amazon, Microsoft and the Chinese firm Baidu — have touched off an arms
race for A.I. talent, particularly within universities. Corporate promises of
resources and freedom have thinned out top academic departments. It has become
widely known in Silicon Valley that Mark Zuckerberg, chief executive of
Facebook, personally oversees, with phone calls and video-chat blandishments,
his company’s overtures to the most desirable graduate students. Starting
salaries of seven figures are not unheard-of. Attendance at the field’s most
important academic conference has nearly quadrupled. What is at stake is not
just one more piecemeal innovation but control over what very well could
represent an entirely new computational platform: pervasive, ambient
artificial intelligence._

That's...worrying. Given all of these companies' lack of respect for privacy
and consumers, it should trouble us that one of them may end up with such a
world-changing innovation. Throw enough money into one place, stick a bunch of
PhDs in a building, and eventually you'll get something. It's just numbers.
Bodies + money. What's inspiring about that?

Does the prospect of Mark Zuckerberg having control over AI for the next five
decades trouble you? Even more remarkable is that he'd do it on the back of having
made a marginally better social networking site in PHP in 2004 and spreading
it via the best social network in the world - Ivy League universities. And now
those same universities are being raided for talent by these companies...

Is this how we should be picking winners? The distinct lack of diversity and
their past stances and actions are troubling. This seems to be mostly a hype
piece without any regard for practical effects.

If these same companies can't anticipate or mitigate the impact of issues like
fake news until after an election, what makes you think they understand the
consequences and impact of something much more complex? And even if they do
anticipate it, how do they hold back the pressures of shareholders?

~~~
melvinmt
> Starting salaries of seven figures are not unheard-of

Seven figures? For grad students?

~~~
randcraw
Reputedly, Andrej Karpathy and perhaps others recently saw offers of that size
(presumably including incentives and options).

But the claim has become a pop meme, so who knows.

