The author's example of regime-beneficial censorship is poorly chosen. He says that DeepSeek refuses to answer questions about the Tiananmen Square massacre but will list US-backed human rights violations, and that's true. But there are plenty of topics sensitive to US policy that it also refuses to discuss. Try having a politically incorrect conversation with it about race relations, certain events regarding Israel, or how certain ethnic groups view other ethnic groups, and try to get it to argue for or against controversial viewpoints on those topics. You'll quickly get the dreaded "Sorry, I'm unable to help with this request."
So you're saying the Chinese should have gone out of their way to collect and include MAGA literature and talking points in their training set, because... why?
They don't, because once their product mimics the low-value, race-obsessed MAGA "race realists", spouting things like "Buddha was Blonde Aryan", all their hard work automatically loses value (see the Microsoft Tay debacle).
(Not that "liberals" are any less race-obsessed; they at least don't sound like absolute retards when they speak. Paul G said something similar.)
Nothing to see here except weak whataboutism. There's no obvious sign that other models, like those from OpenAI or Anthropic, do anything unusual to bias answers about the US or Europe in a positive direction. This is similar to other whataboutism I see in DeepSeek-related discussions, where someone brings up the Israel-Gaza conflict as some kind of criticism of America and OpenAI that justifies whatever DeepSeek and China do. Comparing the vast censorship built into DeepSeek by design (http://promptfoo.dev/blog/deepseek-censorship/) with models that simply provide neutral answers based on the data they were trained on is ridiculous.
There, I saved you 3 minutes.