Humans could already be on a path to go extinct in a variety of ways: climate change, wars, pandemics, polluting the environment with chemicals that are both toxic and pervasive, soil depletion, monoculture crop fragility...
Everyone talks about the probability of AI leading to human extinction, but what is the probability that AI is able to help us avert human extinction?
Why does everyone in these discussions assume p(ai-caused-doom) > p(human-caused-doom)?
I think it is because anything that can be used for good can also be used for bad. Advances in gene editing can provide a miracle medicine or a new means for biological warfare.
AI is the same: we can use it to do some great things, but it can also be leveraged by bad actors, and easily so. The broad scale on which it can be deployed means there is a lot of attack surface for change in either direction.
The AI systems don't even need to be that advanced to cause real issues, simply because of the sheer scale of society as it stands. AI can be used in a form of Aikido, turning the weight of the system against itself.
I think the only defense against superintelligent AI doom is superintelligent AI thinking about how to prevent AI doom.
Fewer projects with concentrated control are thus more dangerous than a diversity of projects: the majority of AIs won't want to destroy the planet or all humans, and will thus fight the ones that do.
I wouldn't say that very many projects, if any, would set out to destroy the world. The vast majority of people are out to set the world right, even if that vision differs wildly depending on perspective. By that I mean one person's terrorist is another's freedom fighter.
What I fear is that something innocuous will have wildly unintended outcomes. A good example: it is thought that part of the 2008 credit crash was caused by algorithms handling the bundling of securities. By bundling bad debt with good, they could hide a lot of the risk in the system.
It comes down to the whole issue of trying to define the environment in which we deploy these things, defining the goals, and hoping there are no gaps the system can exploit while optimizing for its outcome.
I think it's very unlikely that any of those would lead to human extinction, especially since most of those take decades to unfold, and would still leave large parts of the earth habitable.
Sure, but think about how humans drove other species extinct. We never decided to "kill all woolly mammoths"; we just wanted to use their meat and habitats for other things.
The correlation you mention seems noisy enough that I wouldn't want to bet my civilization on it.