The biggest problem won’t be AGI. It will be the thousands of shitty AI and ML models that predict things with 99.9% accuracy, so people (read: judges etc.) assume they’re 100% accurate regardless of how often they get used. At scale, that remaining 0.1% still means thousands of wrong calls.
Look at the Post Office (Horizon) scandal in the UK, where faulty software output was treated as reliable evidence against subpostmasters. Now imagine that in all systems, because AI is inherently statistical in nature.
You don't need AGI to implement abusive policies. But it definitely helps if you want to do it at a scale humanity is not in a position to cope with. Stable dictatorships are a very likely outcome of such technology, including in places that currently are not dictatorships.