> But AI could slip out of our control, or we could fall victim to its unintended consequences.
A lot of people missed this about 2001: HAL was actually doing the logical thing; he was doing his job. The humans were unreliable and genuinely endangering the mission. HAL was a tool of mankind, just like the bone the ape uses to kill in the opening scene.
In the same way, when we talk about the dangers of AI, we tend to personify it and expect it to be either intelligently nefarious (Skynet, The Matrix) or out of control (a bug, a malfunction).
What we fail to imagine is that we might end up on the wrong end of a perfectly rational decision, or that we won't foresee the consequences of the process we set it to carry out.