Hacker News

AI wouldn't have antagonists with comparable capabilities? Why?

Also, no, individuals are not a problem. Not after the Nazis, the Khmer Rouge, and the Russians.




> AI wouldn't have antagonists with comparable capabilities? Why?

Not individual/human ones. Relying on other AIs to prevent an AI apocalypse seems very optimistic to me, but it may be viable (?)

> Also, no, individuals are not a problem. Not after the Nazis, the Khmer Rouge, and the Russians.

Those are examples where the "alignment" of participating individuals was successful enough. But all of those examples seem very fragile to me, and they would be even less stable if the main intermediate goal were literally to "end all of humanity".





