Superhuman Machine Intelligence does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out. - Sam Altman

http://blog.samaltman.com/machine-intelligence-part-1

Oates seems to have missed the concept of an Intelligence Explosion, which is why it is misleading to extrapolate from current AI limitations to the behavior and capabilities of a superhuman machine intelligence.

I would strongly recommend reading Nick Bostrom's Superintelligence for a full treatment of why this worries so many brilliant minds.




I'm surprised by how many self-proclaimed experts in AI-related fields completely ignore this point about resource optimization. Any article with a title like the OP's that doesn't mention it seems pointless to me, just an attempt to jump on the new bandwagon of dismissing fears of strong AI.



