
No, I’m saying that even if successful, the global outcomes Ilya dreams of are entirely off the table. It’s like saying you figured out how to build a gun that is guaranteed to never fire when pointed at a human. Incredibly impressive technology, but what does it matter when anyone with violent intent will choose to use one without the same safeguards? You have solved the problem of making a safer gun, but you have gotten no closer to solving gun violence.

And then what would true success look like? Do we dream of global governance, where Ilya’s recommendations are adopted by utopian global convention? Where Vladimir Putin and Xi Jinping agree this is in the best interest of humanity, and follow through without surreptitious intent? Where, in the countries that do agree, certain aspects of AI research become illegal?

In my honest opinion, the only answer I see here is to assume that malicious AI will be ubiquitous in the very near future, to society-dismantling levels. The cat is already out of the bag, and the way forward is not figuring out how to make all the other AIs safe, but figuring out how to combat the dangerous ones. That is truly the hard, important problem we could use top minds like Ilya’s to tackle.




If someone ever invented a gun that was guaranteed to never fire when pointed at a human, and the safeguards were non-trivial to bypass, that would certainly reduce gun violence, in the same way that a fingerprint lock reduces it: you don't need to wait for 100% safety to make things safer. The government would then put restrictions on unsafe guns, and you'd see fewer of them around.

It wouldn't prevent war between nation-states, but that's a separate problem to solve - the solutions to war are orthogonal to the solutions to individual gun violence, and both are worthy of being addressed.


> how to make all the other AIs safe, but figuring out how to combat the dangerous ones.

This is clearly the end state of this race, observable in nature, and very likely understood by Ilya. Just as with OpenAI's origins, they will aim to create a good ASI to extinguish the bad ones, but whatever unipolar outcome is achieved, the creators will fail to harness and enslave something far beyond our cognition. We will be ants in the dirt, in the way of Google's next data center.



