
I don't know if that's necessarily true by their definitions. As I see it, they could be identical intelligent devices with different programmed goals. One group of nanobots you tell, "Make as many copies of yourself (including this instruction) as possible." The other you tell, "Make as many paperclips as possible."

I think each group would make it its first order of business to destroy the other, postponing its primary goal in order to build weapons or enlist allies. If they failed to destroy each other, they would reach some kind of peace treaty. Or maybe they would try peace first, realizing that war carries too high a chance of wiping them both out.
