It's always practical to use a configuration management tool. It's not always practical to use specific tools like Chef/Puppet.
Even a shell script that automates your standard install steps is better than doing it by hand: it ensures you don't forget any steps, and it can verify the state afterwards so you don't skip any of the verification steps either.
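As a minimal sketch of that idea — an install step plus an explicit verification pass. The config file and settings here are illustrative (it's demonstrated on a scratch file), not from any particular setup:

```shell
#!/bin/sh
# Sketch of an install script that verifies its own work afterwards.
set -eu

ensure_line() {  # append a line to a file if it isn't already there
  grep -qxF "$2" "$1" 2>/dev/null || echo "$2" >> "$1"
}

verify_line() {  # fail loudly if a step didn't take effect
  grep -qxF "$2" "$1" || { echo "VERIFY FAILED: '$2' missing from $1" >&2; exit 1; }
}

# Demonstrated on a scratch file; point CONF at the real config when ready.
CONF="$(mktemp)"
ensure_line "$CONF" "PermitRootLogin no"
ensure_line "$CONF" "PasswordAuthentication no"

# Verification pass -- the script, not your memory, checks the end state.
verify_line "$CONF" "PermitRootLogin no"
verify_line "$CONF" "PasswordAuthentication no"
echo "all checks passed"
```

Because `ensure_line` only appends when the line is missing, the script is also safe to re-run.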
The problem is that most of the recommendations are bad.
Having an up-to-date system and only accepting security updates is a good policy. Fail2ban is a good tool (but it's a starting point; you should be doing other things to detect suspicious behavior).
The rest is just bad advice.
Having everyone log in using a single user account is a terrible idea. You can't audit who did what, ever. You have to remember to remove people from authorized_keys when they leave, and also make sure that they haven't left themselves a backdoor -- a cron job that reinstates the key, an extra user account, even just changing the root password to something else (how often do you actually check the root password on your boxes?).
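For reference, a quick sweep for those leftover-access spots might look like this. The key comment "alice@laptop" is made up, and a real audit should cover more than these three checks:

```shell
#!/bin/sh
# Sketch: sweep a box for leftover access after someone leaves.
# "alice@laptop" is an illustrative key comment; substitute the real one.
PATTERN="${1:-alice@laptop}"

# 1. Is their key still present (or re-added) in any authorized_keys?
find /home /root -name authorized_keys 2>/dev/null \
  | xargs -r grep -l "$PATTERN" 2>/dev/null

# 2. Cron jobs that touch authorized_keys (the classic key-reinstating trick)
grep -rl 'authorized_keys' /etc/cron* /var/spool/cron 2>/dev/null

# 3. Accounts that can actually log in -- any you don't recognize?
awk -F: '$7 !~ /(nologin|false)$/ {print $1}' /etc/passwd
```

None of this catches a changed root password, which is the point: you have to go looking for that one.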
User account management is a pain, which is why we have things like LDAP. Everyone has their own user account. You can audit who does what on every machine, and for stuff that requires root, sudo will log what people do (of course, if you let people have root shells, that's harder). The only people who get access to a local account (and/or root, though I still think root should just have a random password that no one knows) are a few sysadmins. When someone leaves, you kill their account on the LDAP server.
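For a sense of scale, the per-box moving parts are small. This is a sketch assuming the pam/nss LDAP client modules are installed; the group name and log path are illustrative:

```shell
# /etc/nsswitch.conf -- look up accounts in LDAP after local files
passwd: files ldap
group:  files ldap
shadow: files ldap

# /etc/sudoers.d/admins -- everyone uses their own account; sudo logs the rest
%admins ALL=(ALL) ALL
Defaults logfile=/var/log/sudo.log
Defaults log_input, log_output
```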
Even better, if this is a possibility, put up a VPN, and only allow ssh access via the VPN (using a firewall). Tie the VPN login to LDAP (and don't let non-VPN-admins ssh directly to the VPN server), and then you can be sure that without a user account in LDAP, no one can log into your servers.
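The firewall half of that is a couple of rules. 10.8.0.0/24 is an assumed VPN subnet here; adjust to whatever your VPN hands out:

```shell
# Allow ssh only from the VPN subnet, drop it from everywhere else.
iptables -A INPUT -p tcp --dport 22 -s 10.8.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```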
Blind-updating systems in production is a terrible idea. Things break in the open source world all the time when you do this. Never ever use unattended-upgrades. You just need to be on top of security updates. Period. No excuses.
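Staying on top of them by hand isn't much typing. On Debian/Ubuntu, for example, a simulated upgrade lets you review what's pending without installing anything:

```shell
apt-get update
apt-get -s upgrade | grep -i securi   # -s simulates: list the updates, install nothing
```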
You should never even need a "my first five minutes on a server" routine in the first place. Rolling out a new server should be fully automated. By the first time you log into the server, it should be completely ready to go; better yet, it should be ready without you needing to log in at all. This takes a small amount of up-front effort, and it pays off the moment you bring up your second server.
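As one sketch of "ready before you log in": most cloud providers (EC2 included) will run a user-data script on first boot. Everything here is a placeholder — the package list, the `deploy` user, and the truncated key:

```shell
#!/bin/sh
# First-boot provisioning sketch (e.g. EC2 user-data). Illustrative only.
set -eu
apt-get update -q
apt-get install -qy fail2ban sudo
useradd -m -s /bin/bash -G sudo deploy
install -d -m 700 -o deploy -g deploy /home/deploy/.ssh
echo 'ssh-ed25519 AAAA...your-key... deploy' \
  > /home/deploy/.ssh/authorized_keys
chown deploy:deploy /home/deploy/.ssh/authorized_keys
chmod 600 /home/deploy/.ssh/authorized_keys
```

A real rollout would hand this job to your config management tool; the point is that a human never has to do the first five minutes.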
I agree on a lot of things, except LDAP. Of course, it sounds like the ideal way, but in practice, there's a matter of scale to take into account.
Frankly, you need a lot of developers and servers before the time spent setting up an LDAP deployment, integrating it with logins, and debugging the inevitable "why can nobody access anything anymore" outages becomes more worthwhile than just rsync/pssh-ing into all the servers and editing /etc/passwd.
Actually, if I had to do it, I'd use Chef to automate creation (and destruction) of user accounts over LDAP any day. Chef can be a pain to learn and use, but any sort of LDAP is even worse.
I would probably stand up an LDAP server once I had 4 or 5 user accounts, or, more importantly, more than, say, 3 servers to deal with.
I think there's a lot of fear and hate surrounding LDAP, but that's mostly for historical reasons. LDAP has gotten a lot easier to set up. Even in 2009 a colleague and I set one up (using openldap) and had other machines authenticating off of it in an afternoon. It's gotten even easier than that since then.
And hell, you should be using Chef to set up your LDAP master and slave. So once you have the config correct once, you can bring up another machine without trouble when needed.
My friend and I are rebuilding an old, text-based browser RPG, and another friend occasionally helps out with art and game content.
We're on LDAP, which we use to SSH into our ec2 servers, and which we use for authentication when we deploy using `git push production master` to a GlusterFS cluster. We're running our LDAP, application, and file servers on Gentoo. We can easily add new accounts, and we have it set up with group permissions (so the friend can deploy game content to test but not prod, for example).
I refuse to believe that LDAP is "too complicated" or "has to scale before it's useful", when a couple of guys can, in their free time, set it up for themselves. It's saved us a load of time in managing servers that would otherwise take away from the limited time we have to actually write code.
It's also a whole lot cleaner than a bunch of Chef scripts pushing changes across a pile of servers; too often, Chef ends up being a crutch instead of a fix for the actual issue.
You are assuming that they have several developers. I bet it's one or two.
The article presents this as general advice and doesn't mention what team size the recommendations are appropriate for. Maybe this guy has only 1 or 2 developers working on boxes, but IMO, with any more than one person, you don't want shared accounts.
Also, you are assuming these servers are staying with the company, which may not be the case.
In many cases they will be, and, again, the article presents this as general advice. If a server is being configured for a client and sent out, then a general process of setting up local user accounts probably doesn't make sense anyway (unless you only have one client).
Agreed that LDAP isn't the friendliest thing to set up, but there are how-tos for the common use-cases, and if you have any skill as a sysadmin, you can do it. Once it's set up, it's simple to maintain. Manually keeping accounts in sync across multiple machines is rarely simple. Maybe LDAP is overkill for a handful of hosts and user accounts, but if you expect to grow even a little, expect to need some kind of centralized user account system.