This is something I don't think the wider community understands, nor do they understand the incredible amount of work it takes to back-port major kernel/etc. features while maintaining a stable kernel ABI as well as a userspace ABI. Every single other distribution stops providing feature updates within a year or two. So LTS really means "old with a few security updates", while RHEL means it will run efficiently on your hardware (including hardware newer than the distro) with the same binary drivers and third-party packages for the entire lifespan.
AKA, it's more a Windows model than a traditional Linux distro in that it allows hardware vendors to ship binary drivers, and software vendors to ship binary packages. That is a huge part of why it's the most commonly supported distro for engineering toolchains, and a long list of other commercial hardware and software.
I think the gap is the question of how many people there are who want enterprise-style lifetimes but don't actually want support. If you're running servers which don't need a paid support contract, upgrading Debian every 5 years is hardly a significant burden (and balanced by not having to routinely backport packages). There's some benefit to, say, being able to develop skills an employer is looking for, but that's not a huge pool of users.
I think this is the reason behind the present situation: CentOS' main appeal was to people who don't want to pay for RHEL, and not enough of those people contribute to support a community. That led to the sale to Red Hat in the first place, and it's unclear to me that anyone else could be more successful with the same pitch.
But lifetimes are support. Support isn't just, or even primarily, about making a phone call and saying "Help, it's broken." After all, there's nothing keeping someone from taking a snapshot of a codebase and running it unchanged for 10 years. Probably not a good idea if you're connected to the network, but certainly possible.
I'm not sure there are enough people left who have the “don't touch it for a decade” mindset, aren't working in a business environment where they're buying RHEL/SuSE/Amazon Linux/etc. anyway, and are actually going to contribute to the community. 100% of the people I know who used it were doing so because they needed to support RHEL systems but wanted to avoid paying for licenses on every server and they weren't exactly jumping to help the upstream.
Red Hat bought CentOS in the first place because they were having trouble attracting volunteer labor and I think that any successor needs to have a good story for why the same dynamic won't repeat a second time.
I think there are two primary reasons.
1.) A developer wants to develop/test against an x.y release that only changes minimally (major bug and security fixes) for an extended period of time.
2.) The point release model where you can decide when/if to install upgrades is just "how we've always done things" and a lot of people just aren't comfortable with changing that (even if they effectively already have with any software in public clouds or SaaS).
I largely agree with your other points.
Re: point 1, I'm definitely aware of that need but the only cases I see it are commercial settings where people have contractual obligations for either software they're shipping or for supported software they've licensed. In those cases, I question whether saving the equivalent of one billable hour per year is worth not being able to say “We test on exactly the same OS it runs on”.
CentOS 8 was released a few days ago with kernel 4.18, which is not even an LTS kernel and is older than the current Debian stable kernel(!).
If you need to install anything beyond the base distro you need ELRepo, EPEL, etc., which I'm not sure can be counted as part of the support.
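To be fair, adding EPEL at least is a one-liner on CentOS 8 (a sketch; ELRepo takes one more step via its release RPM, see elrepo.org):

    # EPEL is packaged in the base repositories
    sudo dnf install epel-release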
This means that RHEL 7, using a "kernel version" from 2014, will still work fine with modern hardware for which drivers didn't even exist in 2014.
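You can see this on any RHEL 7 box; a quick illustration (the ixgbe network driver is just an arbitrary example):

    # the version string never moves off the 3.10 base...
    uname -r                        # e.g. 3.10.0-1160.el7.x86_64 on RHEL 7.9
    # ...but individual drivers inside it are far newer than anything 3.10 shipped
    modinfo ixgbe | grep ^version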
And for the same reasons that the affected users chose a "stable" and "supported" distro they were also unable to upgrade to one where the issue was fixed.
They bought some fancy new computers at work. Our procedures say to use CentOS 7, so we tried it, and it ran like shit. Then we reinstalled with CentOS 8: same thing. It worked, but the desktop was extremely slow. After much hair-pulling I found the solution: add the elrepo-kernel repository and update to a 5.x kernel.
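For reference, the whole fix boils down to a few commands (a sketch from memory; the release-RPM path is ELRepo's published one for EL8, and kernel-ml is their mainline-kernel package name):

    # trust ELRepo's signing key and add the repository
    sudo rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    sudo dnf install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
    # pull the mainline kernel from the (disabled-by-default) elrepo-kernel repo
    sudo dnf --enablerepo=elrepo-kernel install kernel-ml
    sudo reboot   # boot into the new kernel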
No amount of backporting magic will make an old kernel work like a new kernel.
If your use case doesn't consider avoiding noncritical behaviour changes for a decade to be a feature, you have other options.
People that run CentOS in prod are normally running ERP systems, databases, LoB apps, etc., and the only thing we need is the base distro and the vendor binaries for whatever service/app needs to be installed, and probably an old-ass version of the JDK...
We need every bit of that 10-year life cycle, and we're glad that we will probably only have to rebuild these systems 2 or 3 times in our careers before we pass the torch to the next unlucky SOB that has to support an application that was written before we were born...
that is CentOS Administration ;)
I always wonder how many major vulnerabilities are introduced into these super old distros due to backporting bugs.
Plus, there are changes (especially around memory management or scheduling) that are fiendishly hard to do regression testing on, so they are backported more selectively.
It also helps that Red Hat employs a lot of core developers for those fast moving packages. :)
Software is always gonna have bugs; it's written by humans, after all. The important thing is to acknowledge them and work towards an ideal outcome.
That said, if you're really in the position of depending on a free project for over five years of security support, you probably will be totally fine with just ignoring the fact it's out of support. Just keep running Debian 6 for a decade, whatever. The code still runs. Pretend you've patched. Sure, there are probably some vulnerabilities, but you haven't actually looked to see if the project you're actually using right now has patched all the known vulnerabilities, have you?
(Spoiler, it hasn't: https://arxiv.org/abs/0904.4058)
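If you do want to look, Debian ships a tool for exactly that check; a minimal sketch (substitute the suite you're actually pinned to):

    sudo apt install debsecan
    # list known CVEs affecting the packages installed on this host;
    # --only-fixed restricts it to issues that already have a fix available
    debsecan --suite bullseye --only-fixed --format summary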
> Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.
> Thus the Debian LTS team takes over security maintenance of the various releases once the Debian Security team stops its work.
And there is even commercial support for Extended LTS now.
Also, it's worth noting that Debian provides security backports for a significantly larger set of packages and CPU architectures than other distributions.
I would say a better reason is that while both are Linux distributions, they are distinct dialects and ecosystems. It isn't impossible to switch, but for institutions that have complex infrastructure built around the RHEL world, it is a lot of work to convert.
Consider an appliance that will be shipped to a literal cave for some mining operation. Do you want to build that on something that you would have to keep refreshing every year, so that every appliance you ship ends up running on a different foundation?
A decade ago I was technical co-founder of a company that made interactive photo booths, and I chose CentOS for the OS.
There are some out in the wild still working and powered on 24/7, and not a peep from any of them.
We only ever did a few manual updates early on - after determining that the spotty, expensive cellular connectivity wasn't worth wasting on non-security updates - so most of them are running whatever version was out ten years ago.
You either need to upgrade or unplug (from the internet).
There are still places out there that are running Windows NT or even DOS, because they have applications which simply won't run anywhere else or need to talk to ancient hardware that runs over a parallel port or some weird crap like that. These machines will literally run forever, but you wouldn't connect them to the internet. Your hypothetical cave device would be the same.
Upgrading your OS always carries risk. Whether it's a single yum command or copying your entire app to a new OS.
Besides, if you're on CentOS 8 then wouldn't you also be looking at Docker or something? Isn't this a solved problem?
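That's the idea, at least: pin the userland in an image and stop caring what the host runs. A minimal sketch (assumes Docker is installed; centos:7 is the public image):

    # a CentOS 7 userland running unchanged on any modern host kernel
    docker run --rm -it centos:7 cat /etc/centos-release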
What does Docker have to do with this discussion?
How would I run our SAP ERP apps/databases without "running servers that long"?
And while it may be cumbersome or cause some downtime or headaches, I find the very need to do it once every 1-3 years forces your hand to get your shit together, rather than a once-per-decade affair of praying you migrated all your scripts manually and that everything will work, as your OS admins threaten your life because audit is threatening theirs.
The main reason people choose Linux is its stability.
It is totally ok to run servers in 2020.
Also, your statement sounds like the default cloud-vendor lingo used to push people towards adopting proprietary technology with high vendor lock-in.
Not everything can be highly ephemeral or a managed service, so running servers yourself is totally okay like you said.
But I agree, I also get the tone of "servers should be cattle and not pets, just kill them and build a new one". Which can also be done on bare metal if you're using VMs/containers. It seems like most people forget these cloud servers need to run on bare metal.
We have about 40. The oldest is around 17 years old. Our newest server is 9 years old. Our average server age is probably around 13 years old.
The most common failure that completely takes them out of commission is a popped capacitor on the motherboard. Never had it happen before the 10 year mark.
Never had memory failure. Have had disk failures, but those are easy to replace. Had one power supply failure, but it was a faulty batch and happened within 2 years of the server's life.
Also, most of my experience is with rented dedicated servers and they just give me a new one completely so I never really see if they're fully scrapped.
Ideally you'd never upgrade your software in the usual way. You'd simply deploy the new version with automated tooling and tear down the old one.
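A rough sketch of what that swap looks like (container names, image tag, port, and health endpoint are all hypothetical):

    # bring up the replacement alongside the old instance
    docker run -d --name app-v2 -p 8081:8080 example/app:2.0
    # verify it is healthy before touching anything
    curl -fsS http://localhost:8081/health
    # only then retire the old one
    docker rm -f app-v1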
Also, "running a server for ten years" does not need to mean that it has ten years of uptime. I think that wasn't meant.
If it is connected to the Internet, then I guess kernel hot-patches need to be applied to avoid security issues.
Were hot kernel patches available ten years ago? I remember a company that did this (for Linux), and it was quite a while back, so it's possible. But I doubt it was mainstream.
I recall long ago that SunOS boxes had to be rebooted for kernel patches.
I don't remember about Solaris.
I'm not familiar with other Unices.
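On Linux they were just arriving back then: Ksplice was selling live kernel patching around 2009 (Oracle bought it in 2011), and today kpatch/kGraft/livepatch are mainstream. With kpatch it looks roughly like this (the patch-module name is hypothetical; modules come prebuilt from your vendor or from kpatch-build):

    sudo kpatch load kpatch-CVE-XXXX-YYYY.ko      # apply to the running kernel
    sudo kpatch list                              # show loaded patches
    sudo kpatch install kpatch-CVE-XXXX-YYYY.ko   # persist across reboots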
Do you have any idea how much effort it is to change everything over to "treating your servers as disposable"?! It's going to eat up a third (to half) of my "fun time" budget for the foreseeable future!
the 'usual way' is automated tooling
When I worked for a hardware vendor we had customers who ran hundreds of CentOS boxes in dev/test alongside their production RHEL boxes. If there was an issue with a driver, we simply asked that they reproduce it on RHEL (which was easy to do). If they had been running Debian or Ubuntu LTS the answer would have been: I suggest you reach out to the development mailing list and seek support there.
Whether you like it or not, most hardware vendors want/require you to have an enterprise support contract on your OS in order to help with driver issues.
There is a large world of proprietary enterprise software that is tested, developed, and supported solely on RHEL. CentOS (and theoretically, Rocky Linux) can run these applications because they are essentially a reskin of RHEL. Debian and Ubuntu LTS cannot (or at least not in a supported state) because they are not RHEL.
I'm not familiar with Debian; do they have the same infrastructure and documentation quality as RHEL? For example, do they have anything like Koji for easy automated package building?
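Not Koji exactly, but Debian's buildd network fills the same role upstream, and locally sbuild/pbuilder give clean, reproducible chroot builds. A minimal pbuilder sketch (the .dsc filename is hypothetical):

    sudo pbuilder create --distribution bullseye   # one-time: set up the chroot
    sudo pbuilder build mypackage_1.0-1.dsc        # build inside a clean chroot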
We used CentOS as dev environments, and RHEL as production. It gave us the best of both worlds; an unsupported but compatible and stable dev environment we could bring up and throw away as much as we wanted _Without_ licensing BS. And when the devs were happy with it, the move of a project to RHEL was easy and uneventful.
And don't even get me started on the 'free' dev version of RHEL. It's a PITA to use; we've tried. It's also why we've halted our RH purchasing for the moment. Sure, it's caused our RHEL reps no end of consternation and stress, but too bad. I've been honest with them, and told them that they are probably lying through their teeth (without knowing it) when they parrot the line that RH will have some magic answer for "expanded" and/or "reduced cost" Streams usage in "1st half of 21". That trust died when RH management axed CentOS 8 like they did.
The branding stuff was a plus to the sys-admins and Linux die hards.
Most developers use Ubuntu on their laptops. Virtualized on Windows, but Ubuntu nonetheless.
In fact I’ll go a step further and say Windows and macOS got this right, in that third party developers should do the work to “package” their apps.
It would be insane for Microsoft to maintain packages for every piece of software that ships on Windows, but somehow that’s the situation we’re in with Linux.
Hopefully Snap or Flatpak changes this!
And this is why installing, e.g., FileZilla on Linux is safe and easy, and doing the same on Windows is neither.
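On the Linux side the whole exercise is one command against a signed distro repo (Debian/Ubuntu shown; the package is just called filezilla):

    sudo apt install filezilla   # signed packages, updates handled by apt from then on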
This is not the first time I've seen this prediction. What is its basis?
Besides, isn't that pretty much the exemplar of FUD?
On the other hand, maybe Ubuntu is providing something special that Debian can't do - then it may make sense to go with Ubuntu and maybe even swallow Microsoft's fishing hook if it comes.
CentOS was the rational other free choice, not that Red Hat hasn't made other equally strange decisions.
Sometimes I think we'd be better off rolling our own, like Amazon does.
The server side isn't open, and Canonical repeatedly claims wide industry support ... despite not having it.
I recommend that the first step on any Ubuntu system you use is to disable snap. Use something portable like Flatpak, which at least has some support, is open source, and seems to have a healthy ecosystem.
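Concretely, that looks something like this (assumes no snaps you want to keep; the Flathub URL is the standard remote):

    # remove snapd and stop apt from pulling it back in
    sudo apt purge snapd
    sudo apt-mark hold snapd
    # install flatpak and add the Flathub remote
    sudo apt install flatpak
    flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo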