Comments like those are why I normally point people to actual security experts (like, say, Schneier), and why I recommend that new admins ignore as much as possible the practices chanted by the industry. A secure server does not need a firewall. A firewall can be used to secure a server against a specific threat, but that's it. The days of the ping of death are behind us.
I would like to point out that by following the article's guide and firewalling away ICMP, you can end up in a lot of trouble (see http://serverfault.com/questions/84963/why-not-block-icmp). Some ICMP messages are not blocked by default by ufw, so I'm unsure how damaging ufw is when used like this.
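For what it's worth, a stock ufw install does let the essential ICMP types through before the default policy applies. On the Ubuntu versions I've looked at, /etc/ufw/before.rules ships with lines along these lines (exact contents vary by release, so treat this as a sketch, not the canonical file):

```
# ok icmp codes for INPUT (excerpt from a stock /etc/ufw/before.rules)
-A ufw-before-input -p icmp --icmp-type destination-unreachable -j ACCEPT
-A ufw-before-input -p icmp --icmp-type time-exceeded -j ACCEPT
-A ufw-before-input -p icmp --icmp-type parameter-problem -j ACCEPT
-A ufw-before-input -p icmp --icmp-type echo-request -j ACCEPT
```

So ufw "deny by default" is less aggressive about ICMP than a naive DROP-everything iptables policy, which is exactly why blanket advice to "block ICMP" on top of it is both unnecessary and risky (path MTU discovery relies on destination-unreachable, for instance).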
At any rate, a firewall is a block. A fresh server install won't have ports that need to be blocked. By putting up a firewall, there is nothing to be gained: before the firewall, the ports are closed; after adding the firewall, the ports are closed. All that is gained is a hurdle the next time one wants to install something like a monitoring tool (like Munin) or a new service.
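The "closed is closed" point is easy to demonstrate without any firewall at all. A TCP connect to a port with no listener is refused by the kernel itself; the sketch below (plain Python stdlib, nothing firewall-specific assumed) shows the check I mean:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused / timed out: nothing is listening (or it's filtered).
        return False

# On a box with no service on this port, the kernel refuses the
# connection outright -- no firewall rule is involved.
print(port_open("127.0.0.1", 54321))
```

From the outside, "no listener" and "firewalled" differ only in whether you get an RST or silence; either way the port is unreachable.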
It might be useful as a last line of defense against malware regarding outgoing traffic. I am normally against that kind of thing, however (focusing on the cause is better than the effect). At best, one can catch a piece of spam malware, but any botnet, web server, DDoS or other type of malware is untouched by the rules (ports 80 and 443 are allowed). If the server has email sending configured so root messages can be sent, then the spam malware can use that route and the firewall will just sit there.
So let's take a newly installed machine. What threats can be identified, and what risks are we trying to mitigate with the help of this firewall (as specified by the article)? The only things I can think of are either a zero-day TCP/IP stack vulnerability (not a realistic threat), or that the admin doesn't trust the other admins when they install new services. Yes, if an admin installs a new email server and enables relaying to the whole world against the explicit recommendation in bold font by the install wizard and the configuration file, a firewall can block that admin's actions. Then again, that same admin could just as well have disabled the firewall to "get the mail to work", so I'm not sure it's a viable defense against bad admins.
You are correct that a firewall will not magically solve all your problems, but it does help to protect against programs that open ports you didn't know about.
Recommending against them doesn't make sense, and implying that they are only useful to prevent TCP/IP zero day vulnerabilities is silly (especially since the firewall likely wouldn't protect against that anyway).
This is about as far from a server installed with Ubuntu in 2012 as one can get. You are not going to find any such article by Schneier promoting default firewall installations. I suggest checking out Secrets and Lies by Schneier; it makes rather clear that a firewall needs to be configured against the specific threats one can identify. If you fail at identifying threats, the firewall is likely not to be useful at all, or will simply work identically to NAT. At worst, it will give a false sense of security.
I think it's less about defense against "bad admins" than it is about protecting against accidental bone-headedness. :-) I typically set up a restrictive firewall policy even when I have a clear list of the services I'm running and/or I am the only admin. This comes in handy every once in a while, in cases where...
* A service is expecting more ports to be open than are documented. (Happens not-infrequently with license servers.)
* I'm re-using an old image and there are undocumented services enabled by default.
* A user decides to run a network service in their own account without informing the admins.
In all those cases, am I likely to change the firewall to "make it work"? Sure. But having to actually make that change helps keep an audit trail, and helps keep the admins explicitly aware of the attack surface. It's similar to why it's a good idea to periodically run nmap against your own servers.
Here nmap does shine, and periodically running nmap is a technique that should be taught in universities. It's a great way for students both to learn about computer systems and to learn how to debug problems.
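For anyone teaching this, the core of what an nmap TCP connect scan (`nmap -sT`) does is small enough to sketch by hand. This is a toy illustration in Python, not a substitute for nmap itself (no SYN scanning, no service detection, and the port range is just an example):

```python
import socket

def scan_ports(host: str, ports: range, timeout: float = 0.3) -> list[int]:
    """Tiny nmap-style TCP connect scan: return the ports that accept a connection."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            # Refused or timed out: treat as closed/filtered.
            pass
    return open_ports

# Example: scan the well-known range on localhost, then compare the result
# against the services you *think* you are running.
# scan_ports("127.0.0.1", range(1, 1025))
```

Diffing the output of a periodic scan against the documented service list is exactly the audit-trail habit described above.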
"The days of the ping of death are behind us."
Flashback quite a few years: I was working in IT and a coworker asked me if my WinXP (IIRC) machine was up to date. I said "yes". Next thing I know, it crashed hard. Oops, my buddy had just hit me with a ping of death.
If the firewall itself has that specific NIC, it could be brought down by this attack, but you could still use it to prevent SIP traffic from hitting machines which are vulnerable.
Robust, redundant deployments should be part of an overall security policy, protecting you from outages and downtime.