Admittedly, I don't recall every incident where automatic failover minimized downtime, and if a human had had to intervene in each of those, the cumulative downtime would probably have been more significant.
But boy, it sure doesn't feel like it.
Assuming you have that, it's OK to rely on a human to assess the situation, make sure the dead master is really dead, salvage any partially replicated transactions, and crown a new master. With the right tools, it could take only a few minutes -- a bit longer if you have to wait for the old master to boot to see if it had locally committed transactions that didn't make it to the network. If it takes 5 minutes to resolve this (including time to get to the console), you can do this ten times a year and still have three nines.
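To put numbers on that back-of-the-envelope claim, here's the arithmetic as a quick sketch (the five minutes and ten incidents are the figures from above):

```python
# Downtime budget for "three nines" (99.9% availability) over one year.
MINUTES_PER_YEAR = 365 * 24 * 60          # 525,600 minutes
budget = MINUTES_PER_YEAR * (1 - 0.999)   # ~525.6 minutes of allowed downtime

# Ten manual failovers a year at five minutes each.
spent = 10 * 5                            # 50 minutes

print(f"budget: {budget:.1f} min/year, spent: {spent} min")
# -> budget: 525.6 min/year, spent: 50 min -- well within three nines
```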
In the more likely case where it's just a network blip, the situation resolves itself (in a nice way) by the time the operator gets to the console.
Stateless applications are simple. Systems built with failover in mind, like Cassandra, Elasticsearch, or Redis with Sentinel, get it right out of the box after you tune two or three settings, like minimum quorum sizes.
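As a concrete illustration, a minimal Redis Sentinel setup needs little more than the master's address and the quorum; the name, address, and timeouts below are placeholder values:

```
# sentinel.conf -- monitor a master named "mymaster"; at least 2 Sentinels
# must agree that it is down before a failover is attempted (the quorum).
sentinel monitor mymaster 192.168.1.10 6379 2

# Consider the master down after 5 seconds without a valid reply.
sentinel down-after-milliseconds mymaster 5000

# Abort a failover attempt that hasn't finished within 60 seconds.
sentinel failover-timeout mymaster 60000
```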
But what if you have systems without this built in, like NFS, MySQL, or Postgres? I guess we don't hear about the successful automated failovers, but we surely hear about the really messy automated failover attempts.