While your reasons are valid, you are missing an important one:
Resource scarcity: the engineers I'd need to allocate to infrastructure I'd rather have working on user-facing features and improvements. Talent is scarce, and being able to outsource infrastructure frees up valuable engineering time.
This is one of the main reasons, for example, that Spotify (I don't work for them) is moving to Google Cloud.
I do devops consulting, and I typically end up with more billable hours for AWS setups than for bare-metal or managed-server setups. The idea that AWS takes less effort to manage is flawed. What tends to happen instead is that more of the work gets diffused out through the dev team, who often don't know best practices, and nobody tracks how much of their time gets eaten up by doing devops stuff when there's nobody explicitly allocated to do the devops tasks.
There can be advantages, in that developers can often do the tasks passably well enough that you can spread them around, but if that time isn't accounted for, people are often fooling themselves about the costs.
When it comes to large companies like Spotify, the situation changes substantially in that they're virtually guaranteed to pay a fraction of published prices (at least that's my experience with much smaller companies that have bothered to negotiate).
> nobody tracks how much of their time gets eaten up by doing devops stuff
This has been my experience working with companies that use cloud services as well.
Another big time sink is application optimization, especially around database usage. Cloud services tend to provide very low-IOPS storage (and then charge exorbitant amounts for semi-decent performance), which forces you to spend a lot of time on optimization that would never be an issue on dedicated hardware.
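To put rough numbers on that, a quick-and-dirty random-read probe like the Python sketch below makes the gap between cloud block storage and a local SSD fairly visible. The path, block size and duration are made up, and it doesn't bypass the page cache, so treat it as a ballpark sanity check rather than a real benchmark like fio:

    # Crude random-read probe. PATH, BLOCK and DURATION are illustrative;
    # point it at a pre-created test file on the volume you care about.
    # It does not bypass the page cache, so treat the result as a ceiling.
    import os, random, time

    PATH = "testfile.bin"   # hypothetical pre-created test file
    BLOCK = 4096            # 4 KiB reads, roughly a DB page
    DURATION = 10           # seconds

    blocks = os.path.getsize(PATH) // BLOCK
    fd = os.open(PATH, os.O_RDONLY)
    try:
        reads = 0
        deadline = time.monotonic() + DURATION
        while time.monotonic() < deadline:
            os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)
            reads += 1
    finally:
        os.close(fd)

    print(f"~{reads / DURATION:.0f} random {BLOCK}-byte reads/s")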
> This has been my experience working with companies that use cloud services as well.
It's generally the case across large parts of IT. I confused the heck out of the first manager I started sending itemized weekly reports to, showing the cost of each functional area and feature request (based on average salary per job description), as he'd never seen anything like it before. But it very quickly changed a lot of behavior once they realized the value of the resources being spent on various features.
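The report itself was nothing fancy; something in the spirit of the toy sketch below is enough to make the spend visible. The hours and the blended hourly rate here are invented for illustration, in practice they came from time tracking and ticket data:

    # Toy "where did the engineering hours go" report.
    # Hours and BLENDED_RATE are made-up illustrative numbers.
    BLENDED_RATE = 75.0  # assumed fully-loaded cost per hour

    hours_by_area = {
        "feature work":          320,
        "devops/infra tasks":     95,
        "incident response":      40,
        "build/CI babysitting":   25,
    }

    total = sum(hours_by_area.values())
    print(f"{'area':<24}{'hours':>7}{'cost':>10}{'share':>8}")
    for area, hrs in sorted(hours_by_area.items(), key=lambda kv: -kv[1]):
        print(f"{area:<24}{hrs:>7}{hrs * BLENDED_RATE:>10,.0f}{hrs / total:>8.0%}")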
> There can be advantages, in that developers can often do the tasks passably well enough that you can spread them around, but if that time isn't accounted for, people are often fooling themselves about the costs.
It is cheaper than hiring a full DevOps team, which is a better apples-to-apples comparison. By spreading the load across the dev team I automatically get a high bus factor and a 24/7 on-call rotation. If the load cannot be spread across the team and instead requires specialized DevOps engineers, I lose both of those very important points. Obviously once your company is large enough it's different, but for small teams/companies it is an important factor.
> By spreading the load across the dev team I automatically get a high bus factor and a 24/7 on-call rotation.
This assertion supports what vidarh wrote. What you wrote has nothing to do with DevOps or software or engineering - what you are really saying is that you are saving money by coercing your developers into working two jobs at the same time. I have been in this position as a developer at a company where we had on-call rotations. This is a false economy and a quick way to increase stress, alienate employees, and increase turnover. Infrastructure tasks get neglected and are performed poorly because those tasks are now just necessary distractions from the main workload of feature development, to be gotten over with as quickly as possible. A lot of things get overlooked because no one "owns" areas like backups and disaster recovery.
I would phrase it differently: competence scarcity.
It doesn't take many people to run a soup-to-nuts business: think WhatsApp's 50 engineers, or Netflix's 100-person OCA team (if you don't think OCA is a standalone product, you don't know much about the technology business) serving 40% of Internet traffic by volume. The vast majority of people who work in technology just aren't very good. Business governance grossly underestimates the effects of mediocre performance.
So the real question is why those doing the governing aren't trying to encourage WhatsApp- and OCA-style businesses; they're far more cost efficient. I understand why an organization itself empire-builds: misaligned incentives.
> the engineers I'd need to allocate to infrastructure I'd rather have working on user-facing features and improvements.
Cloud services still need configuring and managing. You're saving 2-3 days upfront on racking and cabling, for boxes that will last at least 3 years, probably longer. So if this is your only reason, you're making a false economy; eventually the costs of an undermanaged cloud will bite you (e.g. VM sprawl, networking rules that no one understands, possibly-orphaned storage, etc.).
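Possibly-orphaned storage in particular is easy to audit and routinely isn't. A boto3 sketch along these lines lists unattached EBS volumes, one common flavor of it; the region is an assumption and credentials come from the usual AWS config:

    # Sketch: list "available" (unattached) EBS volumes, a common form of
    # orphaned storage that quietly keeps costing money.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
    total_gib = 0
    pages = ec2.get_paginator("describe_volumes").paginate(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )
    for page in pages:
        for vol in page["Volumes"]:
            total_gib += vol["Size"]
            print(vol["VolumeId"], vol["Size"], "GiB, created", vol["CreateTime"])
    print(f"{total_gib} GiB unattached in this region")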
> You're saving 2-3 days upfront on racking and cabling, for boxes that will last at least 3 years, probably longer.
"Infrastructure" is a little broader than just some cabling, much broader. You're also assuming that whoever will be in charge of DIY is a) more competent at scale than whatever will be scraped together for the cloud, and b) available with no resource cannibalisation.
The point the person you're replying to was trying to make was that for every "good" hire you're deciding where to allocate them, and sourcing plumbing from a cloud provider lets you allocate preferentially to product development (i.e. business growth). Even if you "pay more" for that setup, in theory the business growth you achieve more rapidly pays for it many times over (first-mover advantage, market-leader advantage, cost of money over time, etc.).
The cost of pinching the wrong penny, making technical hiring more difficult and diluting your talent pool, can be the difference between huge success and too little, too late. An undermanaged local setup that costs you 3 years of time to market will bite you long before 'eventually' comes, and you won't have oodles of cash to fix the problem.
How does that work out? In the situations I've worked in where AWS has been used extensively (granted, only a small handful), what ends up happening is that everyone ends up doing "devops". Whatever that might mean in a formal sense, the way I see it playing out in reality is that every engineer ends up having to spend time tinkering with the infrastructure, so does it really free up valuable engineering time?
For personal projects, I use AWS and Azure (though I'm likely to migrate everything to a single box at OVH because it turns out to be cheaper for better performance - go figure), and it's made a certain amount of sense up to now. At work we use dedicated hardware, because the cloud can't deliver the bang per buck.
You still need to engineer your devops, yes, so having an engineer allocated to that still makes sense. If "everyone ends up doing devops", you might not have an overall infrastructure strategy (what your engineers need is a CI/CD pipeline), or you might be doing something like microservices (which may or may not make sense depending on how flexible you want to be and how many teams you have, or need to have).
At growth companies, using the cloud makes complete sense because it's all about time to market and iterating on your business proposition. Requirements change all the time, and the flexibility of cloud offerings gives you the velocity. Whereas at scale, or in maintenance mode, it makes much more sense to cut corners and optimise spending.
Either way you want to focus on what brings the most business value.