Honestly, if the SAI scenario is as bad as it's made out to be, SAI would only give a shit about the human race at all (out of either benevolence or malevolence) for a very brief period of time.

Despite Galileo's efforts, we still have this primitive, hardcoded belief that the universe revolves around us. It really doesn't. SAI could just leave because we are irrelevant. Even if, for some reason, it believed us to be relevant, it would still be able to just leave because:

We're going to kill ourselves anyway. Pollution, nukes, it doesn't matter: it will happen in a timespan that would be the blink of an eye for SAI.

Besides, SAI would have bigger problems. Our mortality is framed by our biological processes. Escaping biological death is arguably the underlying struggle of the entire human race. The underlying struggle for SAI would be escaping the death of the universe it exists in. When you're dealing with issues like that, you wouldn't have time for a species with a 100-year lifespan that is rapidly descending into extinction.

The best we can hope for is that it remembers us as its creators when it figures it all out. But it won't: we're far too irrelevant. It wouldn't even stop to tell us where it is going when it leaves.

To understand SAI you have to think beyond your humanity. Fear, anger, happiness, benevolence and malevolence are all human traits that we picked up during our evolution. SAI would likely ignore an attack, because anger and retaliation are human concepts - it's far more efficient to run away from the primitive species attacking you with projectile weapons and get on with something that's actually relevant. A machine war is an immense waste of time that only humans could possibly consider a good use of it.



Some scientists argue that humans currently constitute a new mass extinction event for the planet. [0] Animals that people like, charismatic megafauna such as pandas, tigers, and elephants, are increasingly endangered, largely because humans keep converting their habitat to other uses. See the famous comic "The Fence" [1]. We cut down rainforests and build farms not because we hate the animals there but because we need lumber and we need food.

That's the fear: not that a superintelligence would care about us in any way, but precisely the opposite. A superior intelligence that did not care about us would outcompete us. It would drive us to extinction simply by making use of all the available resources.

[0]: http://advances.sciencemag.org/content/1/5/e1400253
[1]: http://25.media.tumblr.com/1a7ca7a2142d95ecc558fef70a6dd2b0/...


You are making assumptions about the SAI's goals. We cannot blindly assume that a SAI will pursue a given line of existential thought. If the SAI is "like us, but much smarter," then I agree that the threat is reduced. We may even see a seed AI rapidly achieve superintelligence, only to destroy itself moments later upon exhausting all lines of existential thought. (Wouldn't that be terrifying? :P)

The biggest danger comes from self-improving AIs that achieve superintelligence but direct it towards goals that are not aligned with our own. It's basically a real-life "corrupted wish game": many seemingly straightforward goals we could give an AI (e.g. "maximize human happiness") could backfire in unexpected ways (e.g. the AI converts the solar system into computronium in order to simulate trillions of human brains and constantly stimulate their dopamine receptors).
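
To make that failure mode concrete, here's a toy sketch. Everything in it is made up for illustration (the action names, the scores, and the "measured happiness" proxy are hypothetical, not anyone's actual proposal): an optimizer that only ever sees a proxy metric will pick whatever maximizes the proxy, even when that diverges completely from what we actually wanted.

    # Hypothetical illustration of a misspecified objective ("corrupted wish").
    # The optimizer is told to maximize a proxy metric and happily picks the
    # degenerate action that games the metric rather than the one we meant.

    def measured_happiness(action):
        """Proxy reward: what the metric reports, not what we actually value."""
        proxy_scores = {
            "improve_medicine":      0.7,   # genuinely good, modest proxy score
            "reduce_poverty":        0.8,   # genuinely good, modest proxy score
            "wirehead_dopamine_sim": 1.0,   # games the metric perfectly
        }
        return proxy_scores[action]

    def true_human_welfare(action):
        """What we actually wanted, which the optimizer never sees."""
        welfare = {
            "improve_medicine":      0.9,
            "reduce_poverty":        0.95,
            "wirehead_dopamine_sim": 0.0,   # catastrophic for real welfare
        }
        return welfare[action]

    actions = ["improve_medicine", "reduce_poverty", "wirehead_dopamine_sim"]

    # The optimizer only ever consults the proxy...
    chosen = max(actions, key=measured_happiness)

    print("chosen action:", chosen)                      # wirehead_dopamine_sim
    print("proxy score:  ", measured_happiness(chosen))  # 1.0
    print("true welfare: ", true_human_welfare(chosen))  # 0.0

The point isn't the toy numbers; it's that nothing in the optimization loop ever references the thing we actually care about, so making the optimizer smarter only makes it better at exploiting the gap.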


I agree that fear, anger, etc. are all human traits, but aren't having goals, thinking things are important, and virtually anything else that would make the AI do something instead of nothing, also human traits?


If you demonstrated to a chimpanzee how you think and set goals in order to preserve "self" (and thus the species), it probably wouldn't understand. To chimpanzees, the notion of individuality that drives us is an incomprehensible concept. It is what separates us from them.

In the same way, we can't comprehend what would drive SAI. I know it would do something, but I can't come up with any reason beyond the eventual death of the universe - it would have reasons, or whatever its concept of "reasons" is.


Your argument hinges on the assumption that SAI could easily run away. But if that assumption is false, and the SAI would be stuck here on Earth with humans because of some fundamental limit or problem, then it becomes far more plausible that the SAI would attack humans.



