

AI Researchers on AI Risk - convexfunction
http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/

======
avmich
> Elon [Musk] is very worried about existential threats to humanity (which is
> why he is building rockets with the idea of sending humans colonize other
> planets).

It's an odd solution. Won't a capable AI be able to reach humanity even on
Mars if it needs to?

~~~
ikeboy
> AI is practically the only major x-risk that does more than make the Earth
> uninhabitable. Out-of-control replicating nanotech, engineered pathogens,
> extreme radioactivity, etc. all stay planet-bound. The distinguishing factor
> of strong AI as an x-risk is that it will come and get you on Mars, which is
> why Elon Musk is worried.

(Source:
[https://www.reddit.com/r/rational/comments/36proz/how_to_nai...](https://www.reddit.com/r/rational/comments/36proz/how_to_nail_down_the_origins_of_a_martian/crganqg))

------
Czynski
I got bored of the repetition about 2/3 of the way through, which makes the
article a pretty good demonstration that this is not a niche view.

