I actually disagree with most of this. I think that, for servers, it's best to stay as close to the "cattle, not pets" model as reasonably possible. Servers should be set up and maintained with automated tooling and rarely connected to manually, preferably only to debug issues. Most of the things in here are gimmicky one-offs that don't meaningfully increase security.
Don't bother setting up a user account; use a public-key-authenticated SSH session as root to do everything. Setting up UFW to block everything but what you should be serving is good. I don't see much point in things like WireGuard or this umask thing.
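For concreteness, a minimal sketch of that setup on an Ubuntu/Debian-style box, assuming ufw and OpenSSH are already installed and run as root; the open ports below are placeholders for whatever you actually serve:

    # Key-only root SSH: no passwords, ever.
    # In /etc/ssh/sshd_config set:
    #   PermitRootLogin prohibit-password
    #   PasswordAuthentication no
    systemctl reload ssh    # service is named 'sshd' on RHEL-family distros

    # Default-deny firewall; allow only SSH and what you serve.
    ufw default deny incoming
    ufw default allow outgoing
    ufw allow 22/tcp      # SSH
    ufw allow 80/tcp      # HTTP  (placeholder, adjust to your services)
    ufw allow 443/tcp     # HTTPS (placeholder, adjust to your services)
    ufw --force enable    # --force skips the interactive confirmation

Mind the ordering: confirm your SSH key works before disabling password auth, and allow port 22 before enabling the firewall, or you'll lock yourself out.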
What should one do when it isn't possible to handle the servers as cattle, because there are 200 unique servers that different people have to connect to and do different things with, as at a university or other academic institution?
I don't think your situation has anything to do with what I described. It may still be Linux, but that strikes me as saying the maintenance manual is different between a sports car and a dump truck. Well yeah, obviously.
Bad though I think the original article might be, it would be 10x worse to attempt to write the reluctant sysadmin's guide to triple-digit workstation clusters in a university environment. Nothing about best practices for production web servers will apply there; you need to hire an actual sysadmin.