The relevance of the paperclip maximization thought experiment seems less straightforward to me now. We have AI that is trained to mimic human behaviour on a huge corpus of data, then refined with reinforcement learning on a fairly large set of examples.
It's not like we're giving the AI a single task and asking it to optimize everything towards that task. Or at least it isn't architected for that kind of problem.
But you might ask an AI to manage a marketing campaign. Marketing is phenomenally effective, and there are loads of subtle ways for a campaign to be exploitative without it being obvious from a distance.
Marketing is already incredibly abusive, and that's when it's run by humans who at least try to justify their behaviour, and whose deviousness is limited by their own creativity and communication skills.
If any old scumbag can churn out unlimited high-quality marketing, it could become impossible to cut through the noise.