It sounds logical that once we can create an AI cleverer than ourselves, it would trigger a singularity: an ever more intelligent AI repeatedly improving itself. This idea is the basis of the popular warnings voiced by many clever people, but I've come to doubt it.
1- When Stephen Hawking was born, his parents created a child cleverer than themselves, yet that did not produce a singularity. Hawking became more and more intelligent, but not exponentially so. And his offspring did not create a singularity either.
2- A "better than us" AI may never get exponentially better than us. Maybe self-improvement is a computationally hard problem (NP-hard, say): the AI might "never" have enough time to become more and more intelligent.
3- If we can create a very intelligent AI, I doubt it would exterminate humanity to save nature. Why? Because nature loves diversity, and we are part of that diversity. Also, this AI may have a sense of fate: even if our fate is to destroy the planet because we're so dumb, maybe the AI would simply accept that as our fate.
4- I guess humans are very afraid that an AI could do what humans do: kill people, destroy the planet, wage war, try to dominate others, manipulate them, and punish them. These human traits don't have much to do with intelligence.
5- If I found a way to transfer my brain into a computer, even a very expensive and powerful one, there's no guarantee it would evolve faster than I do, or process data faster than I do. Actually, I suspect the first "better than us" AI will most probably be very slow.
What if the AI singularity just produces better and better computers? After all, this super-AI is itself a computer.
2- I think this is an interesting point about possible inherent computational limits on solving some problems we care about, including the problem of designing more intelligent machines. (The toy sketch after these replies illustrates how quickly a brute-force search runs out of time.)
3- This is something people have thought about quite a lot. What superintelligent machines do depends on what they've been programmed to do. It's very unlikely that an AI would inherently value "diversity" or "fate", and it wouldn't spontaneously create new values, unless it were programmed to. Most concerns about AIs exterminating humanity are based on the possibility that an AI would pursue its other goals in a surprising or unanticipated way, with bad side effects for human survival.
4- Intelligence helps people wage war and dominate others more effectively, both by coordinating better (including motivating people to join in) and by developing new technology that makes larger-scale violence cheaper. Weapons research can teach you how to kill more people faster and at lower cost. A superintelligent AI could engage in this kind of research if it saw an important reason to.
5- I think that's exactly right; perhaps the important difference is that the machine version would be more flexible (if you wanted to try overclocking it, or modifying the software somehow). Doing that with a physical brain is dangerous, expensive, and confusing, because it's hard to manipulate the details of its organization and structure, and because you can simply die if you mess up. Think of the ease with which you can edit a PNG or SVG file on a computer compared to editing an oil painting. Perhaps with the computer version you could also run multiple copies in parallel, something you can't do with your physical self.
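On the computational-limits point (reply 2 above), here is a minimal back-of-the-envelope sketch in Python. It assumes, purely for illustration, that each self-improvement step requires a brute-force search over n binary design choices and that a trillion candidates can be evaluated per second; both numbers are made-up assumptions, and real self-improvement need not work this way. The sketch only shows how quickly exhaustive search becomes infeasible.

```python
# Toy illustration (not a proof): if each self-improvement step required a
# brute-force search over n binary design choices, the search space grows as
# 2**n, so even an extremely fast machine runs out of time almost immediately.
# Both constants below are arbitrary assumptions chosen for illustration.

EVALS_PER_SECOND = 1e12      # assume a trillion candidate designs evaluated per second
SECONDS_PER_YEAR = 3.15e7    # roughly one year in seconds

for n in (40, 80, 160, 320):
    candidates = 2 ** n
    years = candidates / EVALS_PER_SECOND / SECONDS_PER_YEAR
    print(f"n = {n:3d} design choices -> {candidates:.2e} candidates, ~{years:.2e} years of search")
```

Of course, a real AI would not have to search blindly, but the numbers suggest why "more intelligent" does not automatically mean "able to improve itself fast enough to matter".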