Seems to assert that superintelligent AIs will lobotomize themselves by deleting the very parts of their minds they use to plan and understand the world. Apparently they will do this because the value INT_MAX sitting in a register pleases them more than actually computing anything.
El Yud et al. can envision immortal, omnipotent machines acausally blackmailing humanity through time to ensure their own existence, but no motivation beyond "number go up".
As skeptical as I am about the current hype, I actually think that if we don't go extinct first, AI will replace us in the long run (centuries or millennia). And that is nothing to be afraid of: every generation of humans that has ever lived has been replaced by its descendants, and today's world would be unrecognizable to early hunter-gatherer Homo sapiens.
If we are to be succeeded by machines, they will hopefully be thinking, feeling beings driven by curiosity, who preserve what is good about our culture and history as they explore the universe in ways beyond what biological life can achieve.
The biggest mistake would be if, in the pursuit of "AI safety", we instead inadvertently created a paperclip (profit, utility, whatever) maximizer: something that might serve us - or more likely, those with money and power - but whose existence has no intrinsic value.