This may or may not be true. In the meantime, computer programmers all over the world are working on computerising domains that have traditionally involved political power and influence. Pulling the humans out and replacing them with shell scripts is, in this scenario, of course a source of contention.
Fundamentally, governments and politics are broken. Computers can be used to fix them. However, this is one of the most controversial areas of computerisation and - like politics and governance themselves - a cause of never-ending social strife.
It's almost as if something "ethical" is missing from the equation.
I agree with the first part, but personally disagree with the second part.
Yes, I find politics broken, but I have absolutely no faith in fixing it in any way. So I hack away on problems I feel like I can solve or at least improve on. This keeps me sane.
The argument for this belief is simple: people keep gaming systems. No matter how smart your system is, given enough incentive someone will hire someone smarter to cheat it. Even mathematics won't save you here, because math is too narrowly scoped: for any system that is provably secure on paper, you can find holes in its messy real-life implementation. See, e.g., quantum entanglement used to create untappable communication channels being defeated by simply tapping the classical endpoints.
A well-known example: target people of a particular political persuasion, make them mad or shocked with a bait headline, and collect the clicks. This is the familiar phenomenon of "clickbait", which exists outside politics too. As applied to politics, such tactics may reinforce "echo chambers" or epistemic closure, but they have also made some people a fair bit of cash.
The same goes for what large social networks are probably already doing to keep everyone happy in a polarized environment: curating each person's news feed to match their "political tribe", e.g. the "red feed, blue feed" concept reported on in the Wall Street Journal (http://graphics.wsj.com/blue-feed-red-feed/). Again, the same concern: this may reinforce "echo chambers" or epistemic closure. But the alternative might be users quitting the social network, which means fewer ad dollars. So there is no incentive to promote balance.
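To make the incentive concrete, here is a minimal sketch (all names and scores hypothetical, not any network's actual algorithm) of tribe-matched feed ranking: items matching a user's inferred tribe score higher, and nothing in the objective rewards balance, so the feed drifts toward an echo chamber.

```python
# Hypothetical sketch of engagement-driven feed ranking.
# Items matching the reader's "tribe" get a bonus; balance earns nothing.

def rank_feed(items, user_tribe):
    """Sort feed items so those matching the user's tribe come first.

    items: list of dicts with 'headline', 'tribe', and 'base_score' keys.
    """
    def score(item):
        # Matching the reader's tribe is rewarded, mirroring the
        # ad-revenue incentive described above; neutrality gets no bonus.
        tribe_bonus = 1.0 if item["tribe"] == user_tribe else 0.0
        return item["base_score"] + tribe_bonus

    return sorted(items, key=score, reverse=True)

feed = [
    {"headline": "Outrage in blue camp", "tribe": "blue", "base_score": 0.4},
    {"headline": "Neutral policy analysis", "tribe": "none", "base_score": 0.5},
    {"headline": "Red camp vindicated", "tribe": "red", "base_score": 0.3},
]

# For a "red" user the weakest item jumps to the top of the feed:
print([item["headline"] for item in rank_feed(feed, "red")])
# → ['Red camp vindicated', 'Neutral policy analysis', 'Outrage in blue camp']
```

The point of the sketch is that no ill will is needed: a one-line bonus term in an otherwise bland ranking function is enough to produce the red-feed/blue-feed effect.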
Unfortunately, humans are very tribal creatures. Breaking away from this requires actively resisting a very human trait, and I doubt any human alive has done so completely.
So even in the "superpower AI god" case, probably another "tribe" would come along to make another "superpower AI god" with a completely different filter, and we'd be in the same place.