
Why are most people so pessimistic about a powerful AI? - anupshinde
I believe AI (or AGI) that is better than humans is quite far off. Maybe the technology available today could generate better-than-human AI (without going the quantum way), but we probably lack the models or structures, or simply the scale, to make this happen. Or maybe we are missing some fundamental discoveries about cognition and intelligence.

But even if we develop a powerful AI, why are we so scared of it? Why do we think that a powerful AI will be stupid enough to stay interested in Earthly matters or humans when it has an entire universe to go to?

EDIT: From time to time I hear someone established and reputed cautioning against a powerful AI. I am just curious what I do not understand.

Example (a year-old post): https://www.washingtonpost.com/news/the-switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-people-are-not-concerned/
======
kleer001
Your best bet to understand the AI debate (and both sides) is to dig through
lesswrong.com .

My brief answer is that people fear what they don't understand. And in this
case, the people shouting out their fears haven't stated their assumptions
and definitions properly or in detail.

> I believe AI (or AGI) that is better than humans is quite far from done.

AI and AGI are wildly different things. My preference is to talk in terms of
expert systems, such as Deep Blue, Watson, etc., rather than the lofty sci-fi
acronym.

Personally I think artificial personal assistants will help make the world a
much more comfortable (and pleasant) place for those that can afford them.
That is until the fourth or fifth generation when they're cheap as paper (or
have a limited kind of reproduction).

------
patmcc
>>Why do we think that a powerful AI will be stupid enough to stay interested
in Earthly matters or humans when it has an entire universe to go to?

Let's say a powerful AI emerges, is completely uninterested in Earth and
humans, and wants to go exploring the stars. Maybe it decides it needs a bunch
of hydrogen to do that and splits all the water in the oceans. This is bad for
humans.

We don't know what an AI will want, so it's tough to predict its behaviour.
Maybe it'll be fine, maybe not. It may be powerful enough that if what it
wants isn't great for us, things will go very badly very quickly.

