What's the "risk profile" and why is it too great? Kubernetes requires dedicated knowledge and can fail in surprising ways. Using a simple server with some established technology can be surprisingly/boringly reliable.
Some hosts can reach incredible scale with just a few beefy dedicated machines, given the right software architecture.
The fact that it's a dedicated physical host means that any of a dozen components could fail and bring the box down permanently. The hosting company would have to reassign or build a new server for the customer, which could take from an hour to a week (or more in a pandemic with scarce parts). It depends on their excess capacity and on the machine's specifications.
If it were one virtual machine I'd say, sure, just use one box; you can bring up a new one, automated, in 10 minutes. But if it's physical, it could either last 10 years without going down or 10 weeks, you never know. Physical machines are precisely the case where a distributed system is worth having as a hedge against expensive failures.
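A back-of-envelope sketch of why two boxes work as a hedge. The failure probabilities here are assumptions for illustration, not measurements:

```python
# Hypothetical numbers: assume each box has a 5% chance of a hard
# failure in a given year, and that failures are independent.
p_single = 0.05

# One box: you're down whenever it fails.
p_down_single = p_single

# Two boxes with failover: you're only down if both fail.
p_down_pair = p_single * p_single

print(f"single box down: {p_down_single:.2%}")  # 5.00%
print(f"both boxes down: {p_down_pair:.2%}")    # 0.25%
```

The independence assumption is the weak point: boxes in the same rack share power, cooling, and the occasional fire, which is exactly the offsite-backup argument made elsewhere in this thread.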
Another consideration is cost: at $1200/year you could just keep buying servers and colo them. It's more work, but you get more (old) hardware to play with.
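Rough arithmetic for that trade-off. The $1200/year is from the comment above; the used-server and colo prices are made-up assumptions:

```python
managed_per_year = 1200   # figure quoted above

# Hypothetical assumptions for the DIY route:
used_server = 700         # one-time cost of an older 1U server
colo_per_year = 600       # rack space, power, bandwidth

years = 3
managed_total = managed_per_year * years          # 3600
colo_total = used_server + colo_per_year * years  # 2500

print(managed_total, colo_total)
```

With numbers anywhere in this ballpark the colo route comes out ahead on cash, at the cost of your own time when hardware breaks.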
It's true that it could fail in 10 weeks, but it's pretty likely that it lasts the whole 10 years. Computer hardware is pretty good like that. Especially if you've got power conditioning and a UPS so it never even reboots (except when you want it to).
At home I'm running an old desktop PC with Linux as my router, on more than 10-year-old hardware. I've got several other old PCs running too; that's just the oldest, so it's not completely a fluke.
'Pretty likely' isn't a measurement, it's just a way to dismiss thinking about the problem. Most hard drives, for example, are only reliable for 3-5 years. Other components have varying lifespans. And that's just an average; in a hot, vibration-filled building with heavy carts banging into server racks, it gets more dodgy. And I wouldn't bet on the datacenter even having a UPS or multi-phase power set up correctly.
Assuming your one giant server will be as reliable as a 10 year old box sitting on your desktop is a gamble. All I'm saying is be prepared to lose.
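To put a number on that gamble: with a constant annual drive failure rate of, say, 5% (an assumed figure roughly consistent with the 3-5 year lifespan mentioned above, not a measurement), a single drive's odds of making it through a full decade are not great:

```python
afr = 0.05                       # assumed annual failure rate of one drive
years = 10

# Survival = (chance of surviving one year) ** years,
# assuming the rate stays constant (real drives wear out faster with age).
p_survive = (1 - afr) ** years

print(f"{p_survive:.1%}")        # 59.9% — roughly a coin flip
```

So "pretty likely" here means something like 60-40 under generous assumptions, and worse once wear-out, heat, and vibration are factored in.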
> any of a dozen components could fail and bring the box down permanently
The same is true of VPS and "cloud" setups. As the OVH fire last year showed, offsite backups are not a luxury. The chances of losing a box permanently, though, are ridiculously low. Higher-end dedicated hosts have very low failure rates because they use higher-grade hardware and rotate it every few years; they also usually have very competent and available technicians on site, who are paid for by your monthly bill.
> If it was one virtual machine I'd say, sure, just use one box, you can bring up a new one automated in 10 minutes.
The same is true of dedicated servers. If you house them yourself then, sure, bringing up a hardware replacement is going to take longer. But if you use a managed dedicated hosting solution (as mentioned in the article), provisioning times are roughly similar to those of a VPS (a few minutes).