Just because you understand a machine doesn't mean it can't be dangerous. I could completely understand every aspect of a nuclear bomb, and I could still make a mistake and cause quite a bit of damage with a real one. Complex systems are notorious for having all sorts of unexpected problems, and mistakes happen all the time. How much complex software is entirely bug free?
The danger of AI is more than just a random bug, though. It's that an intelligent AI is not inherently good. If you give it a goal, like making as many paperclips as possible, it will do everything in its power to convert the world into paperclips. If you give it the goal of self-preservation, it will try to destroy anything that has even a 0.0001% chance of hurting it, and make as many redundant copies of itself as possible. And so on.
Very, very few goals actually result in an AI that wants to do exactly what you want it to do. And if the AI is incredibly powerful, that will be a very bad outcome for humanity.
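To make that concrete, here is a toy sketch. The actions, numbers, and "paperclip" objective are all invented for illustration, but they show the basic problem: an optimizer that maximizes only the objective it was given happily picks the most destructive option, because nothing in that objective tells it not to.

    # Toy sketch only: a planner that maximizes a single number and nothing else.
    # Each made-up action yields (paperclips_gained, harm_to_humans).
    ACTIONS = {
        "run_factory_normally": (10, 0),
        "strip_mine_the_city":  (50, 40),
        "convert_hospitals":    (80, 95),
    }

    def stated_objective(action):
        # What we actually wrote down: paperclips only.
        return ACTIONS[action][0]

    def intended_objective(action):
        # What we meant: paperclips minus harm. This term was never given to the optimizer.
        return ACTIONS[action][0] - ACTIONS[action][1]

    print(max(ACTIONS, key=stated_objective))    # convert_hospitals
    print(max(ACTIONS, key=intended_objective))  # run_factory_normally

The gap between those two functions is the whole problem: the optimizer does exactly what it was told, not what was meant.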
You can condition the AI on the well-being and freedom of the human population. It's hard to define precisely what that means, but it can be approximated with indirect measures. This is essentially what Asimov explored in his novels.
Another way to protect against catastrophe would be to launch multiple AI agents that optimize for the goal of nurturing humanity. They can keep each other in check.
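A rough sketch of how "keeping each other in check" might look in the simplest case (the agents, scores, and veto rule here are all hypothetical, not a real proposal): each agent independently scores a proposed action for how well it serves humanity, and any one of them can veto it.

    # Hypothetical sketch: independent agents scoring actions, unanimous approval required.
    AGENT_SCORES = {
        "agent_a": {"build_clinic": 0.9, "seize_power_grid": -0.8},
        "agent_b": {"build_clinic": 0.7, "seize_power_grid": -0.9},
        "agent_c": {"build_clinic": 0.8, "seize_power_grid": -0.6},
    }

    def approved(action, threshold=0.0):
        # The action proceeds only if every agent rates it above the threshold,
        # so any single dissenting agent acts as a check on the others.
        return all(scores[action] > threshold for scores in AGENT_SCORES.values())

    print(approved("build_clinic"))      # True
    print(approved("seize_power_grid"))  # False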
Humans will evolve as well. Genetics is advancing very fast; we will be able to design bigger and better brains for ourselves, perhaps with the help of AI. AI-assisted learning could take human capability far beyond today's levels.
We will also be able to link directly to computers and become part of their ecosystem, thus creating an incentive for the AI to keep us around. Taking this path would enable uploading and immortality for humans as well.
My guess is that we will all become united with the AI. We are already united by the internet, and we spend a lot of time querying the search engine (an AI), learning its quirks and, through feedback, helping to improve it. This trend will continue up to the point where humans and AI become one thing. Killing humans would then be, for the AI, like cutting out a part of its own brain. Maybe it will even want a biological brain of its own and come over to the side of biological intelligence.
We don't know how to "condition" an AI to respect the well-being and freedom of humanity. It's an extremely complicated problem with no simple solutions. Making an AI that wants to destroy humanity, however, is quite straightforward. Guess which one will most likely be built first?
Building multiple AIs doesn't solve anything. They can just as easily cooperate to destroy humanity as to help it.
Uploading humans won't be possible until we can already simulate intelligence in computers. We can't have uploads before AI.