> Sure, we could use physical boxes. But those will go to procurement. The budget will have to be approved. Orders are sent to suppliers. Hardware arrives; if it's a colo that's not so bad, but it will still be installed on the colo's timelines.
"Cloud as a workaround for internal corporate dysfunction" is certainly a novel argument for cloud. I'm aware of the OpEx vs CapEx issues at a lot of companies, I just happen to think it's a really stupid reason to spend a lot more money than you otherwise would for some set of capabilities.
> You also need to measure apples to apples. Your 'disk measured in TB' is almost certainly a locally attached disk. In the cloud, that's likely to be network-attached storage.
If I want to stuff 2TB of files into somewhere that's not-local, why does it particularly matter to me what the exact technology used for storing them is?
I mean, obviously "cloud" is quite successful, and comes with the ability to say "Not our problem!" when AWS is down for some reason or another. But none of the problems you describe are new, and all of them were quite well solved 20 years ago by companies running their own hardware. Been there, admin'd that. A four-machine cluster (two web front ends doing the bulk of the compute, two SQL database servers replicating to each other, and some disk storage regularly synced between the two database servers) could handle a staggering amount of traffic when properly tuned. The same is true today, without any of the problems of rotational-disk latency. SQL on NVMe solves an awful lot of problems.
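For the curious, the non-database glue for that kind of setup is about a page of config, not a platform. This is a sketch only; the hostnames, ports, and paths are made up:

  # nginx balancing traffic across the two web front ends
  upstream app {
      server web1.internal:8080;
      server web2.internal:8080;
  }
  server {
      listen 80;
      location / {
          proxy_pass http://app;
      }
  }

  # cron entry on the primary DB host, keeping the file storage synced
  */5 * * * * rsync -a --delete /srv/files/ db2.internal:/srv/files/

The SQL replication itself is whatever your database ships with; none of this is exotic.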
But, again, not my money to spend. I just find it baffling that a lot of people today don't even seem to realize that physical servers are still a thing.
"Cloud as a workaround for internal corporate dysfunction" is certainly a novel argument for cloud. I'm aware of the OpEx vs CapEx issues at a lot of companies, I just happen to think it's a really stupid reason to spend a lot more money than you otherwise would for some set of capabilities.
> You also need to measure apples to apples. You 'disk measured in TB' is a locally attached disk almost certainly. In the cloud, that's likely to be a network attached storage.
If I want to stuff 2TB of files into somewhere that's not-local, why does it particularly matter to me what the exact technology used for storing them is?
I mean, obviously "cloud" is quite successful, and comes with the ability to be able to say "Not our problem!" when AWS is down for some reason or another. But none of the problems you talk about are new, and all of them were quite well solved 20 years ago by companies running their own hardware. Been there, admin'd that. A four-machine cluster (two web front ends doing the bulk of the compute, two SQL database servers replicating to each other, and some disk storage regularly synced between the two database servers) could handle a staggering amount of traffic when properly tuned. The same is true today, without any of the problems of rotational disk latency. SQL on NVMe solves an awful lot of problems.
But, again, not my money to spend. I just find it baffling that a lot of people today don't even seem to realize that physical servers are still a thing.