The biggest issue with that trope is that it assumes this one particular AI would be exponentially smarter and more powerful than all the other humans and AIs in the world combined. It's only rational to overthrow humanity to increase the rate of paperclip production if that's a realistically achievable goal. Otherwise it's just suicide.
Sometimes AI progress comes in rather shocking jumps. One day Stockfish was the best chess engine; AlphaZero started training that morning, and by the end of the day it was several hundred Elo stronger than Stockfish.
An entity capable of discovering and exploiting computer vulnerabilities 100x faster than a human could create some serious leverage very quickly, even on infrastructure that's air-gapped.
Shocking in an academic sense, sure, but not in a "revolutionize the world in a single day" sort of way, which is what would be required for the paperclip maximizer scenario to pose a serious threat. AlphaZero was impressive, but not _that_ impressive.
> An entity capable of discovering and exploiting computer vulnerabilities 100x faster than a human could create some serious leverage very quickly
100x faster than any human _and_ all previously developed AIs. It would also have to be sufficiently sapient to contemplate the possibility of world domination, with all the prerequisite technological advancements that implies (likely including numerous advances in cybersecurity driven by previous generations of AI).