> AI wouldn't have antagonists with comparable capabilities? Why?
Not individual/human ones. Relying on other AIs to prevent an AI apocalypse seems very optimistic to me, though it may be viable (?)
> Also, no, individuals are not a problem. Not after Nazis, Red Khmer, and Russians.
Those are examples where the "alignment" of participating individuals was successful enough. But all those examples seem very fragile to me, and they would be even less stable if the main intermediate goal were literally to "end all of humanity".