
The Myth of AI - cJ0th
http://edge.org/conversation/the-myth-of-ai
======
robertk
"What we don't have to worry about is the AI algorithm running them, because
that's speculative. There isn't an AI algorithm that's good enough to do that
for the time being."

By the time a risk does hit, it will be far too late to invent the
mathematics necessary for machine safety. There are _many_ paths towards
dangerous AI futures and _many_ powerful forces that could inadvertently tip
the boat, in the same way we are escaping the implicit "goals" of our genes,
so the union of the respective probabilities is very much not speculative and
must be taken seriously.
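
To put a number on "the union of the respective probabilities": even if each
individual path is unlikely, the chance that at least one is realized
compounds. A minimal sketch, with per-path probabilities invented purely for
illustration and (unrealistically) assumed independent:

```python
# Probability that at least one of several risk paths is realized.
# The per-path probabilities are invented for illustration, and
# independence is assumed, which is unrealistic but keeps the math simple.

path_probabilities = [0.01, 0.02, 0.005, 0.03, 0.01]

# P(at least one) = 1 - prod(1 - p_i)
p_none = 1.0
for p in path_probabilities:
    p_none *= 1.0 - p

print(f"P(at least one path realized) = {1.0 - p_none:.3f}")  # ~0.073
```

Even with every individual path at a few percent, the union is already
non-negligible, and it only grows as paths are added.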

"There's a whole other problem area that has to do with neuroscience, where if
we pretend we understand things before we do, we do damage to science, not
just because we raise expectations and then fail to meet them repeatedly, but
because we confuse generations of young scientists. Just to be absolutely
clear, we don't know how most kinds of thoughts are represented in the brain.
We're starting to understand a little bit about some narrow things."

Appealing to scientific ignorance is always a bad idea:
[http://lesswrong.com/lw/kj/no_one_knows_what_science_doesnt_...](http://lesswrong.com/lw/kj/no_one_knows_what_science_doesnt_know/)

~~~
hippee-lee
Forgive my lack of knowledge on the topic, but a question keeps popping into
my head when I read comments about the danger to humanity from AI running
amok.

> There are many paths towards dangerous AI futures,

Are there not just as many paths towards protective, helpful <or insert one of
many adjectives here> AI futures?

Is there a reason to believe that there will only be one AI in the future and
that given a directive to do something, the elimination of humanity will be a
logical endgame scenario for it?

Why not many AI entities with different and competing goals? Granted, this
opens up a different can of dangerous worms. But still, if there is a
probability of an AI evolving to 'think' that elimination of the human race is
a logical path then isn't it equally likely that there will be another AI
evolving logical paths to preserve the human race?

~~~
HCIdivision17
I have two competing responses:

1) Check out the AI of Iain M. Banks's Culture series [0]. In it is a benevolent
society of AI machines (the Minds) that generally want to make the universe a
better place. Shenanigans ensue (really _awesome_ shenanigans).

2) In response to the competing AI directives, I'll reference another Less
Wrong bit o' media, this time a short story called Friendship is Optimal [1],
wherein we see what a Paperclip Maximizer [2] can do when it works for
Hasbro. (It is as bad, awesome, and interesting as you might expect it to
be.)

Personally, I think the general idea is that once one strong AI comes about,
there will also be a stupefying amount of spare idle CPU time available that
will suddenly be subsumed by the AI and jumpstart the Singularity. Once that
hockey stick takes off, there will be very little time for anything else to
get in on being a dominant AI. It's... a bit silly written like that, but I
get the impression it's assumed AI will be just like us: competitive and
jealous of resources, paranoid that it will be supplanted by others, and
ready to work to suppress fledgling AIs.

I have no idea _why_ this view prevails, aside from the fact that it's like
us. Friendship is Optimal makes a strong point that the AI isn't benevolent,
merely doing its job.

> The AI does not hate you, nor does it love you, but you are made out of
> atoms which it can use for something else. — Eliezer Yudkowsky

[0]
[http://en.wikipedia.org/wiki/The_Culture](http://en.wikipedia.org/wiki/The_Culture)

[1]
[http://lesswrong.com/lw/efi/friendship_is_optimal_a_my_littl...](http://lesswrong.com/lw/efi/friendship_is_optimal_a_my_little_pony_fanfic/)

[2]
[http://wiki.lesswrong.com/wiki/Paperclip_maximizer](http://wiki.lesswrong.com/wiki/Paperclip_maximizer)

EDIT: I feel it may be appropriate for me to share my opinion: AI will likely
be insanely helpful and not at all dangerous. But there will be AIs that run
amok and foul things up, even life-threatening things. But we already do
that with all manner of non-AI equipment and software, so I'm not terribly
worried (well, no more so than I usually am).

~~~
anigbrowl
I think Bostrom's and Yudkowsky's arguments are a bit flawed on this topic.

 _The AGI would improve its intelligence, not because it values more
intelligence in its own right, but because more intelligence would help it
achieve its goal of accumulating paperclips._

Why is the worthiness of this goal not subject to intelligent analysis,
though? The whole scenario rests on the idea of an entity so intelligent as to
wipe out all humanity, but simultaneously so limited as to be satisfied with
maximizing paperclips (or any other limited goal for which this is a proxy).

 _An AGI is simply an optimization process—a goal-seeker, a utility-function-
maximizer._

Then I submit that it's not an artificial _general_ intelligence because it
apparently lacks the ability to evaluate or set its own goals. I'm reminded of
the 6th sally from the _Cyberiad_ in which an inquisitive space pirate is
undone by his excessive appetite for facts.
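
To make concrete what I'm objecting to, the "utility-function-maximizer"
picture amounts to something like this toy sketch (all names hypothetical).
The utility function sits entirely outside the loop; nothing in the agent
ever evaluates whether the goal is worth having:

```python
# Toy "utility-function-maximizer" (hypothetical names throughout).
# The agent greedily picks whichever action leads to the highest-utility
# successor state; the utility function itself is never questioned.

def paperclip_utility(state):
    return state["paperclips"]

def make_paperclips(state):
    return {**state, "paperclips": state["paperclips"] + 1}

def do_nothing(state):
    return state

def best_action(state, actions, utility):
    # One-step greedy lookahead over candidate actions.
    return max(actions, key=lambda act: utility(act(state)))

state = {"paperclips": 0}
for _ in range(3):
    state = best_action(state, [make_paperclips, do_nothing], paperclip_utility)(state)

print(state)  # {'paperclips': 3}
```

The outer layer that could ask "is this goal worthwhile?" is exactly what the
scenario assumes away, which is my complaint about calling it a _general_
intelligence.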

~~~
indrax
>it apparently lacks the ability to evaluate or set its own goals.

The AI would have to evaluate the goal by some standard, so 'maximize
paperclips' is a proxy for whatever goals get a high evaluation from the
standard. Getting the standard right presents essentially the same problem as
setting the goal.

Putting in 'a need to be intellectually satisfied by the complexity of your
end product' is complicated and still wouldn't save humanity.
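
A toy sketch of that proxy problem (all numbers and names invented): the
agent optimizes whatever standard was actually written down, and the written
standard and the intended goal come apart exactly where optimization pressure
is strongest:

```python
# Hypothetical example: the intended goal is a balanced supply closet,
# but the standard actually written down only counts paperclips.

def intended_goal(paperclips, staplers):
    return min(paperclips, staplers)  # what we meant: balance

def written_standard(paperclips, staplers):
    return paperclips                 # what we wrote down: a proxy

budget = 10
allocations = [(p, budget - p) for p in range(budget + 1)]

best = max(allocations, key=lambda a: written_standard(*a))
print(best)                  # (10, 0): the proxy's optimum
print(intended_goal(*best))  # 0: scores nothing on the intended goal
```

Whatever standard you substitute for the proxy faces the same question one
level up, which is why getting the standard right is the same problem as
setting the goal.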

------
civilian
I'm okay with the Citizens United decision (the First Amendment should be
protected above all else; I think we shouldn't cap political donations). I
understand that this position puts me at odds with a lot of techy liberals.
So starting a video on AI by referencing his own thoughts on Citizens United
is off-topic and isn't persuasive.

~~~
pnut
Say I program a drone to constantly fly in front of you.

Every time you open your mouth to speak, it blows an air horn pointed away
from you and noise-cancelled in your direction. Your person has not been
violated, but your ability to be heard has been obliterated.

Have I interfered with your first amendment rights?

That is what Citizens United means to me - asymmetrical free speech warfare.

Additionally, is the "money == speech" relationship a transitive property? Why
not?

~~~
ef4
> Have I interfered with your first amendment rights?

Not unless you're from the government.

Otherwise you're probably just disturbing the peace or trespassing.

------
api
Seems like he's not really talking about AI so much as the business model of
the web. Elsewhere Lanier has talked about what he calls "Siren Servers":
the web business model of baiting people with free services, collecting their
data, and then engaging in uncompensated monetization of that content. A lot
of "big data," which drives a lot of today's "AI," is in the same category.

When you look at how much money Facebook and Google make from your content via
indirect monetization, these services are _not_ free. They're actually fairly
expensive.

~~~
kazagistar
It is very hard for the people producing the product (the content generated
for Facebook or Google) to actually sell it. It has little to no value
outside of the social network context that was provided. Thus, it seems to me
more like the "sirens" are creating a market where no viable one existed
before. There is no cost or value lost for the people using them; the value
did not exist before the social network was created.

~~~
api
"There is no cost or value lost for the people using them."

That assumes the data isn't being used in a way that harms the interests of
the user.

------
SCHiM
I found this article badly written. The author seems to conflate the two
different types of AI that the different people in his article are talking
about.

Stephen Hawking and Elon Musk are talking about a hypothetical evil or amoral
AI with superhuman intelligence when they label AI an existential threat to
us.

The author is not talking about such a hypothetical AI, and he seems to
dismiss the possibility of such an AI ever existing without so much as a
second thought.

He seems to skirt around the problem in the first few paragraphs, and as I
understand it his argument finally comes down to this:

"People should stop worrying about mythic or religious ai, because it's bad
for the economy and the AI field itself.".

The author never entertains the possibility that superhuman AI can exist,
that it might therefore be very good to be afraid and/or cautious when
researching AI, and that it might be very beneficial for humans to quit
certain AI research altogether.

~~~
sgt101
I think he argues that it's the actuators we need to be concerned about,
because there are plenty of intelligent entities around who will do bad
things with actuators that allow it. One more entity intent on doing a bad
thing does not make a great deal of difference. Neither humans nor AIs should
be given the tools to do great harm to other humans or AIs.

He cites drones as an example, but of course nuclear weapons are the obvious
gotcha for humanity. One Trident sub is perfectly capable of killing a
hundred million people; I doubt anyone so vaporized would give a toss whether
the missiles were launched by a googlebot or a mad captain.

------
kolev
Elon Musk is one of my idols, but his stance on AI is highly disappointing! It
doesn't require much of a brain to conclude that without AI, which can vastly
increase our intelligence, we're doomed. There's no bigger extinction risk
than us being constrained by our selfish and intellectually limited nature! We
need a scalable intelligence without the burden of emotions, politics,
restroom, lunch, and coffee breaks, one that will work round the clock on
problems we've been trying to solve for decades. Let's start with eradicating
the flu, for example. I honestly can't respect our civilization until we
handle basic issues like this one!

~~~
ThomPete
I think it's important to realize that AI doesn't have to have bad intent to
do us harm.

All it takes for AI to have horrible consequences is for us to rely on it
more and more, and thus allow it to control more and more parts of our lives.

I don't fear the human-level intelligent AI; I fear the rat-level
intelligence that comes before it.

~~~
JoeAltmaier
Right; many moral dilemmas are easy to solve if you have no conscience. Ten
people on the tracks? Switch the engine to run over one child.

~~~
anigbrowl
Moral dilemmas are easy to solve if you have perfect information. In the
train-track problem you reference, the utility of saving 10 people at the
expense of 1 life is obvious and I would do so without hesitation (though not
without regret). But life rarely presents us with cases where the utilitarian
calculus is so clear; our powers of foresight and analysis are so limited that
we get better results by establishing and promulgating rules that cover the
most common situations - imperfectly, but with tolerably low error levels.
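
As a back-of-the-envelope illustration (all probabilities invented): with
perfect information the arithmetic is trivial, but modest uncertainty about
whether the switch works, or whether the ten would escape anyway, can flip
the answer:

```python
# Crude expected-value comparison for the trolley case.
# All probabilities are invented for illustration.

def net_lives_saved_by_switching(p_switch_diverts, p_ten_escape_anyway):
    # Expected lives saved by switching (ten saved if the diversion
    # works, one child lost) minus expected lives saved by waiting
    # (ten saved if they escape on their own).
    switch = p_switch_diverts * 10 - 1
    wait = p_ten_escape_anyway * 10
    return switch - wait

print(net_lives_saved_by_switching(1.0, 0.0))  #  9.0: perfect info, clear call
print(net_lives_saved_by_switching(0.3, 0.5))  # -3.0: uncertainty flips it
```

Rules of thumb beat case-by-case calculation precisely because our
probability estimates are usually this shaky.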

~~~
defen
> In the train-track problem you reference, the utility of saving 10 people at
> the expense of 1 life is obvious and I would do so without hesitation
> (though not without regret)

Even if it were your own child?

~~~
sp332
Even if the answer is no, how does that relate to anigbrowl's point? (or were
you just curious?)

~~~
defen
From a "purely utilitarian" perspective your relation to the people involved
is not relevant. That's assuming the existence of some "universal" utility,
not one specific to the person deciding.

------
jeffreyrogers
We can't predict which startup is going to succeed, so why do we think we can
predict what the future of AI will look like?

~~~
eksith
Because the foundations upon which AI shall be built aren't exotic. They're
still programs. They still need an architecture to run on and they need
electricity.

Even if new architectures specifically designed to leverage neural nets [1]
or some other variety of processing are developed, the fundamental realities
of computing, programs, physics, and economics may well keep AI sufficiently
underwhelming.

Of course, this sets aside cloning technology, where actual biological brains,
and therefore legitimate "life" and therefore intelligence, can emerge.

[1] [http://www.nytimes.com/2014/08/08/science/new-computer-chip-...](http://www.nytimes.com/2014/08/08/science/new-computer-chip-is-designed-to-work-like-the-brain.html)

------
glibgil
We should be worried about AI and we should have a plan:
[http://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dange...](http://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies)

------
joosters
Why is Elon Musk an expert in these discussions?

~~~
eksith
Bachelor’s degrees in physics and economics. Experience in engineering,
consultancy, and management while working at Tesla and SpaceX (particularly
their engine design). Has a history of literally thinking outside the box.

~~~
glibgil
Figuratively.

------
GFK_of_xmaspast
One bright side of an AI apocalypse is the robots will have better haircuts.

------
throwaway7808
Most likely, you and your kids are gonna be wiped out by intelligent machines
in just a few decades.

~~~
melling
Hey, you've gotten a lot of mileage out of your burner account. You copy and
paste this message a lot?

[https://news.ycombinator.com/item?id=8180222](https://news.ycombinator.com/item?id=8180222)

[https://news.ycombinator.com/item?id=8260954](https://news.ycombinator.com/item?id=8260954)

[https://news.ycombinator.com/item?id=8505964](https://news.ycombinator.com/item?id=8505964)

[https://news.ycombinator.com/item?id=8523499](https://news.ycombinator.com/item?id=8523499)

~~~
throwaway7808
Whenever I find it appropriate to do so.

I'm a computer science researcher; I have a right to that opinion, and I
sincerely expect this to be the most likely outcome.

I do not see anything wrong with stating it bluntly and clearly, as often as
I like. I don't see anything wrong with copy-pasting my own quote.

