It's arguably harder to do on decentralized platforms since they are... well, decentralized. Those 240,000,000 person-hours are then spread across all of the decentralized platforms instead of being focused on one or a few. The cost of moderation is also spread across everyone instead of a single entity having to pay for all of it.
The same analysis makes it untenable for centralized platforms, as you noticed yourself.
It is not easy, but it is most definitely tenable for centralized platforms with sufficient funding and motivation, e.g. Facebook, Google, etc. They have:
1. Many millions (billions, in Facebook's case) of users, some percentage of whom are willing to click those "Report..." buttons
2. Their own farms of in-house workers
3. Their own bespoke tools and heuristics to identify trends and flag malicious actors/content for review by the human moderators. These would presumably incorporate reports from users.
It's arguably harder to do it on decentralized platforms
How does a malicious social media team "attack" Facebook? For the most part, they produce content that winds up being spread virally. They don't spread the information around Facebook; the users do. This would function largely the same way in a decentralized network.
Keep in mind that today's decentralized networks are a bit like Usenet was before "Eternal September."
In other words, these decentralized networks are currently populated primarily by relatively savvy users. Not perfect by any means, but on average more savvy than the typical FB or Twitter user who will eagerly share articles and links without even cursory vetting. That will change in a hurry if any of these decentralized networks ever gains critical mainstream mass.