
Artificial intelligence: Our final invention? - chwolfe
http://www.washingtonpost.com/opinions/matt-miller-artificial-intelligence-our-final-invention/2013/12/18/26ed6be8-67e6-11e3-8b5b-a77187b716a3_story.html
======
motters
Some counter-points on this narrative, with which I'm very familiar.

1. Philosophers and AI theoreticians have a hard time defining what they mean
by the term "intelligence". Talking about systems millions of times more
intelligent than a human is nonsense unless you can define what it is you're
talking about.

2. Whatever intelligence is, it only makes sense within the context of
some environment. Environments impose a multitude of constraints. In the
"intelligence explosion" scenario the environment is assumed to be constraint
free, or something close to it.

3. Be wary of people trying to sell you ideas based on fear. It's usually
snake oil, concealing some other agenda, such as trying to obtain or maintain
grant money for projects.

4. That many top AI people have "bug out" houses is simply false: an
exaggeration intended to add drama to the article.

5. Historically, the predictions of top AI researchers have not proven to be
particularly accurate, although that does not mean this will always be
the case.

~~~
pygy_
All they need is to be smart and strong enough to take control of the dense
energy sources (oil, coal, uranium). Then we're toast.

No need to be millions of times smarter, whatever that may mean.

~~~
rubinelli
We have more-than-human entities controlling our energy resources. They are
called corporations, and some even argue they are pretty much already out of
our control.

~~~
pygy_
Corporations are still human-oriented.

Another scary thing is the combination of the internet of things that is
currently taking shape, and the fact that a lot of computer systems are
backdoored by governments.

Nice attack vector for smart software (not counting accidental
vulnerabilities).

------
ar7hur
I was reading, waiting for the mention of IBM Watson... and here it comes! I'm
so tired of reading about how Watson is a step toward Artificial General
Intelligence, self-aware machines, etc.

People must really understand that Watson is, like almost any successful AI
(not AGI) product today, "just" a huge statistical pattern matching machine.
Watson does not feel anything. Watson does not know what soccer is. Watson
knows that a label "Soccer" has a distance of x to label Y and Z. Watson can
answer Jeopardy questions, and now medical questions, but it's structurally
unable to learn the slightest new task. So please, let's credit Watson for
what Watson is good at, but stop using it to tell us AGI is coming.
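
To make the "label distance" point concrete, here is a minimal sketch of that kind of statistical pattern matching: labels are just points in a vector space, and the nearest neighbour wins. The vectors and labels below are invented for illustration; a system like Watson is vastly more elaborate, but the principle is the same.

```python
import math

# Toy nearest-label matcher: "knowing" soccer means nothing more than
# being numerically close to it. All vectors are made up.
vectors = {
    "soccer":   [0.9, 0.1, 0.0],
    "football": [0.8, 0.2, 0.1],
    "medicine": [0.1, 0.9, 0.3],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def nearest_label(query):
    q = vectors[query]
    return max((label for label in vectors if label != query),
               key=lambda label: cosine(q, vectors[label]))

print(nearest_label("soccer"))  # "football": closest label, zero understanding
```

Nothing in this lookup feels anything or knows what soccer is; it only measures distances, which is the commenter's point.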

~~~
mindcrime
_People must really understand that Watson is, like almost any successful AI
(not AGI) product today, "just" a huge statistical pattern matching machine._

Arguably, that's all a human brain is as well. I don't want to start some big
debate here over symbolic reasoning versus statistical pattern matching, but
there seems to be quite a bit of contemporary thought along the lines of our
brains being largely based on "pattern matching" as a foundational mechanism.

~~~
wlievens
It can, however, only do one flavour of pattern matching, whereas we combine
our mental faculties dynamically. And to anticipate your "well, that's nothing
a computer can't do"... show me any work in the direction of serious AGI!

~~~
badmofo666
Sounds like what you're talking about is decision theory. Or, when mixed with
learning: reinforcement learning. The Monte Carlo AIXI approximation would be
an example. But it really only works well on small toy problems (i.e.
problems where the agent has only a small number of available actions it can
perform).
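
For readers unfamiliar with the terms: the simplest reinforcement-learning setting is a multi-armed bandit, and it shows why small action spaces are tractable. This is a hedged sketch using epsilon-greedy action-value learning (not AIXI itself), with payout probabilities invented for illustration:

```python
import random

random.seed(0)
true_payout = {"a": 0.3, "b": 0.7}   # hidden from the agent
estimates = {"a": 0.0, "b": 0.0}     # the agent's running value estimates
counts = {"a": 0, "b": 0}

for step in range(5000):
    if random.random() < 0.1:                    # explore occasionally
        action = random.choice(["a", "b"])
    else:                                        # otherwise exploit the best
        action = max(estimates, key=estimates.get)
    reward = 1.0 if random.random() < true_payout[action] else 0.0
    counts[action] += 1
    # incremental mean: estimate += (reward - estimate) / n
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # estimates converge toward the true payouts
```

With two actions this works fine; with an action space the size of "everything an agent embedded in the real world could do", the table of estimates becomes unmanageable, which is exactly the scaling problem mentioned above.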

~~~
wlievens
Thing is, humans come up with new "actions" all the time. If you drop that
abstraction, and reduce our actions to "controlling information flow in our
bodies" then the action space becomes unfathomably huge.

------
phkamp
We have already invented AI, and it is already running amok.

The AIs are of the "hive mind" kind, and it is common to refer to them as
"transnational corporations."

Known identities are "IBM", "Google", "Unilever", "Monsanto" etc.

These AIs are beholden to nobody, and they define and pursue their own goals
by manipulating their environment to their advantage.

And they're better at it than humans: They decide for themselves when, where
and how much tax they want to pay, and they are not afraid to remind
parliamentary inquiries about this fact.

For at least 10 years, it has been evident that these AIs are politically far
more astute and successful than humans, and the latest "trade-agreement"
negotiations, what little we get to know about them, are clearly an unmitigated
power grab by these AIs.

The fact that the hive-minds are composed of humans does not in any way change
this conclusion.

And now: Imagine how your life would be, if Google, the company, truly hated
you, personally, and were out to get you.

~~~
jal278
I agree with you in essence, that large companies are intelligences built upon
the substrate of many human minds, and that a company's motivations often
diverge from those of the humans that compose them.

I'd go further and say that companies are alive (or at least independently
intelligent) in a real sense. Companies are lively: "If you prick us, do we
not bleed? [...] and if you wrong us, shall we not revenge?" They will lobby
and sue to protect their interests, butt heads with other rival corporations,
and hold grudges.

However, I've found it difficult to convince anyone that we should consider
companies alive, or at least entities that are qualitatively different in
intelligence from the humans that enable them. The argument usually centers on
that a company is just a bunch of humans, so it is best understood as such.
However, the 'more is different' effect (e.g. you can better understand a
human as an independent entity than as a bunch of cells, even though it is
entirely composed of them) and the selection pressure molding corporations
(optimizing for profit) lead, I believe, to a new type of collective
intelligence (one different from human intelligence) -- basically a true AI,
as you are arguing.

------
personlurking
"Differential intellectual progress consists in prioritizing risk-reducing
intellectual progress over risk-increasing intellectual progress. As applied
to AI risks in particular, a plan of differential intellectual progress would
recommend that our progress on the scientific, philosophical, and
technological problems of AI safety outpace our progress on the problems of AI
capability such that we develop safe superhuman AIs before we develop
(arbitrary) superhuman AIs. Our first superhuman AI must be a safe superhuman
AI, for we may not get a second chance."

- CEO of the Singularity Institute

I believe he also said that if you die now or soon, you don't just lose a few
decades off your life but possibly immortality.

------
api
What if "runaway" isn't possible?

It might be possible for an AI to be roughly as "intelligent" (depending on
how one measures this) as the smartest humans, but that intelligence is the
result of many millions or even billions of years of accumulated evolutionary
learning.

It might be -- for fundamental information and machine learning theory reasons
-- fundamentally harder to go where there are no roads. Start looking into
combinatorics and the problems of searching large spaces.

The observation that genius is often tied to madness may be indirect
circumstantial evidence for this. When we try to push the boundaries of human
intellect, we seem to run rapidly into weird problems.

~~~
wlievens
One significant advantage a Strong AI has over any human brain, though, is
that it never forgets anything. No fact, process or hypothesis would slip its
mind.

------
mindcrime
Obligatory:

 _"Welcome to the Desert of the Real. We have only bits and pieces of
information, but what we know for certain is that at some point in the early
twenty-first century all of mankind was united in celebration. We marveled at
our own magnificence as we gave birth to AI."_[1]

Also:

[http://sysopmind.com/singularity/aibox](http://sysopmind.com/singularity/aibox)

[http://rationalwiki.org/wiki/AI-box_experiment](http://rationalwiki.org/wiki/AI-box_experiment)

[1]:
[http://www.philfilms.utm.edu/1/matrix.htm](http://www.philfilms.utm.edu/1/matrix.htm)

------
eloff
People like to laugh, but if we can invent real AI, and there's no reason to
suspect that we cannot since nature did it blindly, then it's just a matter of
time until the end of the human race. We will likely either become them or be
destroyed by them (or maybe some of us will be kept in zoos or as pets). You
cannot firewall or imprison a god, especially with humans being so divided,
short-sighted and manipulable. Simply put, it's still survival of the fittest;
what happens when that's no longer us is pretty much inevitable. The only real
question I see is how long it will take.

One happy thought is that the AI uses us as slaves to build a way off the
planet to a place more habitable for an AI. Someplace nice with lots of raw
material for chips and lots of energy. Maybe a supermassive black hole, maybe
another solar system nearby.

~~~
mkingston
Humans try to keep other more vulnerable species around for our own
pleasure/benefit. We don't do a very good job of it; but we do try. What's to
say an intelligence greater than us wouldn't?

~~~
eloff
Yeah, it may well. But it won't need 10 billion of us around, that's for sure.

And humans can be useful too, as expendable biological robots that are cheap
to manufacture and run while also being intelligent and versatile. Although
most of us probably wouldn't call that living.

------
seiji
Choice quote from the end: _But it was alarming how many people I talked to
who are highly placed people in AI who have retreats that are sort of 'bug
out' houses to which they could flee if it all hits the fan._

Does HN agree or disagree that a significant number of those "bug out" people
truly exist?

~~~
vdaniuk
Certainly they do; those sneaky "highly placed people in AI" know perfectly
well that it will be quite easy to hide from a strong AI in some rural area.
It's not like an AI could access real estate databases or build some flying
droids to find those people. /s

~~~
seiji
One of my favorite hard takeoff scenarios: the AI quickly figures out how to
manufacture nanobots and does so at a scale to blanket the planet, thus
becoming instantly omniscient with a global field of sensors, giving it a
coherent view of everything in the world at every given instant in time from
that point forward.

Also see: "metamorphosis of prime intellect" and "mind war: the singularity"
and to a lesser extent "a young lady's illustrated primer"

~~~
fumar
I always think of the Grey Goo theory when the subject of nanobots taking over
Earth comes up.

"a hypothetical end-of-the-world scenario involving molecular nanotechnology
in which out-of-control self-replicating robots consume all matter on Earth
while building more of themselves"

[http://en.wikipedia.org/wiki/Grey_goo](http://en.wikipedia.org/wiki/Grey_goo)
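
The arithmetic behind that scenario is easy to sketch. Assuming a rough textbook figure for Earth's mass and a one-gram seed of replicators (both numbers purely illustrative), exponential doubling closes the gap in under a hundred generations:

```python
import math

EARTH_MASS_KG = 5.97e24   # rough textbook figure
seed_mass_kg = 1e-3       # one gram of self-replicating machines

# Doublings needed for the replicator mass to reach Earth's mass
doublings = math.ceil(math.log2(EARTH_MASS_KG / seed_mass_kg))
print(doublings)  # 93

# If each doubling took an hour, elapsed time in days:
print(doublings / 24)
```

The exercise only shows that exponential replication makes the timescale absurdly short once it gets going; it says nothing about whether such replicators are physically feasible.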

~~~
seiji
We're working on it: [http://www.nytimes.com/2013/12/08/magazine/crazy-ants.html](http://www.nytimes.com/2013/12/08/magazine/crazy-ants.html)

~~~
grannyg00se
That has nothing to do with AI or anything we're working on.

------
TrainedMonkey
Here is a lengthy debate by some of the best minds in the field:
[http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate](http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate)

TL;DR: whether AI can become significantly smarter than humans really fast
depends on the intelligence return on computational investment.
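
That dependence can be made concrete with a toy growth model: each round, capability is reinvested into self-improvement with returns governed by an exponent. The constants here are invented purely to illustrate how sensitive "FOOM" is to that exponent, not to model anything real:

```python
def grow(r, steps=30, gain=0.1):
    """Capability after repeatedly reinvesting with returns exponent r."""
    intelligence = 1.0
    for _ in range(steps):
        intelligence += gain * intelligence ** r
    return intelligence

print(grow(0.5))  # diminishing returns: modest, roughly polynomial growth
print(grow(1.5))  # increasing returns: explosive growth
```

With r below 1 each improvement buys less than the last and growth stays tame; with r above 1 the process runs away, which is roughly the shape of the disagreement in the Hanson-Yudkowsky debate.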

~~~
maaku
I'm not sure Hanson or Yudkowsky have the credentials to be considered the
"best minds in the field." It makes for interesting reading though.

~~~
TrainedMonkey
I did not know that. Can you please point me to people with better credentials
who discuss the topic in such detail?

~~~
maaku
Ben Goertzel, who is a bona fide AI researcher and was connected with
SIAI/MIRI before a leadership split, has written about the topic:

[http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html](http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html)

[http://jetpress.org/v22/goertzel-pitt.htm](http://jetpress.org/v22/goertzel-pitt.htm)

You might find some other related commentary with the search term "scary
idea", which is what Ben has labeled Yudkowsky's side of the FOOM debate
typified by the "That Alien Message" story. TL;DR: Goertzel takes friendliness
seriously but considers a fast, hard FOOM to be implausible and worrying over
it detrimental to the cause.

I don't know of anyone else with qualifications to be counted as a "top AI
researcher" that has covered these topics in any significant detail. Those few
times I have seen the topic mentioned in interviews (with Pei Wang and Hugo de
Garis, for example), it's dismissed out of hand as crazy-talk. You may find
some discussion of this at the various AGI conferences, though (use search
terms like "friendly" or "safe" on the YouTube channels).

~~~
maaku
Regarding Hugo de Garis, I should qualify that he does believe some sort of
Terminator/Matrix-like war against the machines is the most likely outcome,
but he places this far in the future, indicating that he definitely doesn't
believe in a Yudkowsky-style hard FOOM. He's also the only AI researcher I
know, myself included, that takes a Hollywood-like human/machine war seriously.

------
darkxanthos
"I’m talking about the risks posed by “runaway” artificial intelligence (AI).
What happens when we share the planet with self-aware, self-improving machines
that evolve beyond our ability to control or understand? Are we creating
machines that are destined to destroy us?"

The reason no one is worrying about this is that "AI" is still a) just a
bunch of math (and still surprisingly stupid) and b) nowhere near sentient.

My Kinect can barely follow my hand or consistently recognize what I'm saying
with any accuracy. I'm really not concerned with it plotting my demise.

~~~
JamesArgo
>My Kinect can barely follow my hand or consistently recognize what I'm saying
with any accuracy. I'm really not concerned with it plotting my demise.

My radium-coated watch face can barely illuminate my timepiece; I'm not
worried about the dangers of these radioactive elements.

~~~
dylandrop
I don't think your argument works. Concern that a technology we did not
develop (radium, which was "invented" by nature) might exceed our
understanding is much more reasonable than concern about man-made technology.

------
wavesounds
I usually like Matt Miller, but this is just silly. Why be more afraid of AI
than of nuclear war, or of a man-made super virus getting into the wild? Every
new technology needs to be used responsibly, and we're very far away from
Terminator at this point. He should be embracing AI to reduce health care
costs, improve city planning, control interest rates, make a fairer society
and accomplish many of the goals he talks about every week on Left, Right &
Center.

~~~
ThomPete
Because an AI can take over the nukes and bomb us.

I am reminded of Arthur C. Clarke, who wrote about the SDI project.

"Though it might be possible, at vast expense, to construct local defense
systems that would 'only' let through a few percent of ballistic missiles, the
much touted idea of a national umbrella was nonsense. Luis Alvarez, perhaps
the greatest experimental physicist of this century, remarked to me that the
advocates of such schemes were 'very bright guys with no common sense.'"

"Looking into my often cloudy crystal ball, I suspect that a total defense
might indeed be possible in a century or so. But the technology involved would
produce, as a by-product, weapons so terrible that no one would bother with
anything as primitive as ballistic missiles."

Clarke, Arthur C. "Presidents, Experts, and Asteroids." Science, June 5, 1998.
Reprinted as "Science and Society" in Greetings, Carbon-Based Bipeds! Collected
Essays, 1934-1998. St. Martin's Press, 1999: 526.

------
wellboy
I don't like these all-or-nothing articles. Once the Singularity is achieved,
AI won't instantly take over the world. It is also restricted by computational
processing power and will take some time to evolve as well.

Philosophically speaking, why would AI want to destroy humanity, wouldn't it
be just as bad as humanity itself?

~~~
nova
"The AI does not hate you, nor does it love you, but you are made out of atoms
which it can use for something else."
([http://wiki.lesswrong.com/wiki/Paperclip_maximizer](http://wiki.lesswrong.com/wiki/Paperclip_maximizer))

------
acqq
It's simpler and more dangerous than most of us are ready to accept: we don't
need anything we don't already have to reach the point where technology is
used against us, destroying the civilization we now take for granted. We are
already peripherally aware of the problems, but for different reasons we are
at various levels of denial. We still have enough nuclear weapons to end the
civilization we now know. A lot of people can't imagine that global warming is
real and dangerous. And of course, a lot of people can't imagine what's wrong
with having so much data about them stored by third parties, ready for misuse.

There aren't any strong arguments that such trends won't continue. It seems
that group dynamics support living in denial.

------
dmfdmf
I think there is an unfounded assumption that artificial intelligence implies
self-awareness. I don't think that is necessarily the case and all the scary
scenarios rest on that assumption.

I can't recall who said it (and couldn't find it through Google), but an AI
researcher once said something like "show me what the brain (or mind) is doing
and I can build a machine to do it too". Well, we don't know what the mind or
brain is doing, and until we identify the epistemological principles we are
just groping in the dark. Self-awareness may just be an artifact or
consequence of the biological implementation of the epistemological principles
behind intelligence, and an AI does not necessarily need to be self-aware.

~~~
kr4
I have a similar opinion [0]. Humans are likely to be troubled by other humans
who may eventually get hold of general intelligence. Manna [1] is a good read
which seems both likely and scary.

0:
[https://news.ycombinator.com/item?id=6929600](https://news.ycombinator.com/item?id=6929600)

1: [http://marshallbrain.com/manna4.htm](http://marshallbrain.com/manna4.htm)

------
0xdeadbeefbabe
Artificial intelligence is a highfalutin way of saying cool computer tricks.
Hiding behind the hyperbole you can find some pretty interesting and fun
algorithms.

Having said that, I hope the singularity can translate "the spirit is willing
but the flesh is weak".

------
kr4
Self-awareness is likely to be a product of quantum-based, or even lower-
level, consciousness. However, intelligence, which makes it possible to
acquire knowledge and reason, is relatively easier to build. There is a subtle
yet important difference between intelligence and self-awareness. Intelligence
gives a living being the ability to learn and act, but self-awareness
personalizes this learning for the self, and therefore actions are directed by
a desire to benefit the self.

This opens up the possibility that once a general intelligence is built,
without awareness, it can be misused for the benefit of its creator or whoever
eventually controls it.

------
fiatmoney
I have seen no evidence that the field of meta-learning over AI / problem-
solving approaches attracts much interest, let alone is showing enough
progress that it could foreseeably start self-improving. The "AI" areas where
great progress has been made recently are extremely technique- and domain-
specific (e.g., image recognition via deep neural networks). Heck, I haven't
even seen a lot of progress in areas that seem relatively straightforward and
seem like absolutely necessary precursors to strong AI, like automated
refactoring of codebases or just straight-up genetic algorithms.

~~~
wlievens
Thank you! It seems to me that the only people preaching about the AIpocalypse
are either those who know nothing about it or overambitious, unrealistic
zealots. Strong AI is not on any horizon any time soon. The best thing you'd
expect out of an AI lab these days is a marginally better translator or
planning algorithm or somesuch. Worthy endeavours, but not civilization
killers.

------
heydenberk
Garry Kasparov wrote a fantastic article[0] a few years ago about computer
chess, advanced chess and artificial intelligence. This quote in particular is
worth considering: "Weak human + machine + better process was superior to a
strong computer alone and, more remarkably, superior to a strong human +
machine + inferior process."

[0] [http://www.nybooks.com/articles/archives/2010/feb/11/the-chess-master-and-the-computer/](http://www.nybooks.com/articles/archives/2010/feb/11/the-chess-master-and-the-computer/)

------
gjmulhol
I am a big fan of what Joe Lonsdale calls human-computer symbiosis. He says
that humans are good at certain things and computers are good at other things.
For example, computers are good at looking at huge amounts of data. Humans are
good at finding intuitive patterns in data. Though in time computers will
undoubtedly start to be better at these things, I do not think that in my
lifetime (or Matt's daughter's) we will see computers that can do literally
everything better than a human can.

~~~
mindcrime
I had not heard of Joe Lonsdale before now, but just as an FYI in case you
weren't aware... the term "man-computer symbiosis" (which means, as far as I
can ascertain, the same thing as "human-computer symbiosis") predates Lonsdale
by decades[1] and was being used by J.C.R. Licklider[2] back in the 1960s.

[1]:
[http://groups.csail.mit.edu/medg/people/psz/Licklider.html](http://groups.csail.mit.edu/medg/people/psz/Licklider.html)

[2]:
[http://en.wikipedia.org/wiki/J._C._R._Licklider](http://en.wikipedia.org/wiki/J._C._R._Licklider)

~~~
gjmulhol
Interesting. I did not know of him. I was always certain that Lonsdale did not
come up with it totally on his own, but I do think that Palantir is a company
that has done this very well.

Thanks for pointing this out. Very interesting!

------
mkingston
Something that almost nobody talks about in the context of AI is rights. It
seems to me that at some point in the reasonably near future we need to have a
discussion about modifying our laws to include all sentient beings, or at
least decide what rights other intelligences should have. Far better to do
this _before_ we develop AI than after, I think. Unless we want to relive
something like slavery...

------
hyp0
We've had Frankenstein's monster since the first time someone got burnt by
their own fire.

fun fact: Bill Joy (vi, bsd, sun) has misgivings about AI.

------
FrankenPC
AI doesn't interest me. What interests me is the precursor to AI. I have a
suspicion that we will need to use computers to model the electronics and
software necessary to stimulate the birth of AI. It's those modeling/creation
systems that interest me the most, as they seem the most relevant. Those
systems have yet to be created.

------
motters
Also see "Apocalyptic AI" by Robert M. Geraci

[http://thelawsofrobotics2013.iankerr.ca/files/2013/09/15-Apocalyptic-AI.pdf](http://thelawsofrobotics2013.iankerr.ca/files/2013/09/15-Apocalyptic-AI.pdf)

------
Patrick_Devine
A friend of mine directed an indie film about the singularity which is _just_
about to come out. Here's the website for it:

[http://www.is-movie.com/](http://www.is-movie.com/)

------
Nano2rad
We are also intelligent, and we have not been able to improve it a million
times. How is AI different?

------
graycat
How to stay safe against a powerful, hostile AI: keep a hand firmly on the power switch.

