

What Artificial Intelligence Is Not - graghav
http://techcrunch.com/2014/12/13/what-artificial-intelligence-is-not/

======
jedharris
I think the most important point is "Additionally, AI is not a single entity.
... AI is not a he or a she or even an it, AI is more like a 'they.'"

All the horrible "clippy" scenarios involve ONE AI that becomes super
intelligent (and therefore powerful -- another fallacy) without any similarly
intelligent and powerful entities around it. Instead we'll have incremental
progress and if we ever do get super intelligent (but probably not super
powerful) machines they'll be embedded in an ecology of other machines nearly
as intelligent and quite likely more powerful.

I'm not saying this doesn't pose risks, but they aren't the risks that the AGI
threat folks are studying.

------
pottspotts
This article only confirms Elon Musk's fears. "Decades away" and "only need to
worry about the people behind it".

Well, yeah.

------
deepsearch
I've observed that there are a few good examples of what it actually is.

------
mindcrime
Very well said. Even if we achieve a full-fledged AGI, it would be a mistake
to anthropomorphize it and assign it human-like intentions, desires and
behavior - unless somebody explicitly programmed it that way. The idea of an
"evil" AI seems downright silly to me.

That's not to say that an AI could never be dangerous in some scenario, but
the "demon" comparisons and other recent hyperbole are, IMO, a bit misplaced.

~~~
rrockstar
We cannot say that AIs will not be dangerous just because they will not have
human-like behaviour. We shouldn't let inaccurate sci-fi movies cloud our
judgement like that. Just because the future will almost certainly not look
like Terminator does not mean AIs are not a danger. The Elon Musk comment
about the demon was not anthropomorphizing AIs; he was saying that inventing
superintelligent general AIs could be like summoning a demon and trying to
contain it, in the sense that it could be a point of no return. We may find
out after the fact that we made a huge mistake and cannot control it. I think
the remarks of Elon Musk and Stephen Hawking sound very alarmist, but they
want to make sure that we think about the implications of superintelligent
AIs now, and not after we invent them. I am sure we can think of proper ways
to 'contain the demon', but we need to think about that now and not when they
are already 'set free'.

~~~
mindcrime
_We cannot say that AIs will not be dangerous just because they will not
have human-like behaviour._

I agree, and I'm not making a blanket statement like that. But I think that a
lot of the recent hyperbole about AI seems to assume an AGI with "bad
intentions" or dangerous behavior rooted in some anthropocentric notion of
goals, desires, etc. To the extent that people are saying any of those
things, I think they're barking up the wrong tree, since an AI - no matter
how _intelligent_ - still isn't _human_.

Now, could an AGI still be dangerous, whether intentionally or by accident?
Yes, and I'd guess the "by accident" bit is more likely. I don't think this is
an issue that should be ignored, and I don't expect that it will be. But
whipping up public hysteria over "unleashing the demon" and "AI could end the
world" strikes me as overly reactionary. But that's just me.

