“Want” is a problem for me here.
GPT-3, for example, is just a dumb, brute-force Markov chain. Any danger it might pose is no more than that of a mechanical trigger sitting between a finger and a gun barrel. The only real intelligence in the system, and the only place the danger lies, is the person behind the gun.
As soon as you put the intelligence in the AI itself, this changes. It is easy to say that current AI systems are "dumb" (although note that the precise meaning of "dumb" has shifted significantly over the last few decades), and you can say that about any AI with sub-human intelligence. But if AI reaches human level, it can likely reach super-human level as well, so you need to start worrying much earlier.
Wanting is the easier of the two problems; the other is ensuring AIs don't inadvertently wind up destroying humanity. The road to hell being paved with good intentions and all that.