I think one could only say that if those jobs didn't still exist under the new name "DevOps Engineer".
I would argue that abstraction layers like VMs and containers have a significant potential to create more work, as they increase overall complexity/configurability.
> SRE at Google are responsible for a ridiculous amount of computing power per person.
Although I'm confident Google still has remarkable "compute" infrastructure efficiency, even looking just at human labor, you'd need to count more than just the SREs. You'd certainly need to include all of their network, hardware, and datacenter operations folks, and a good number of their software engineers working on internal tools.
I'm pretty sure people grossly underestimate how custom and exotic Google's environment is, to the point where it doesn't translate to anything even vaguely close in the rest of the tech world.
Yeah, DevOps is the same thing; it's just the in-vogue name right now. But I still think that, in general, that person, regardless of title, can provision more computing power than ever before.
On the other point, I read once that they do a ton of custom work, like adding all kinds of sensors to their servers so they can use automation to turn them off and on as needed.
But yeah, I think you could argue there are whole teams dedicated to tooling, and their labor hours are moved to them.
> that person regardless of title can provision more computing power than ever before.
I'm not entirely convinced that this is true, at least not in the sense of "order of magnitude more" or even "multiples more". I am convinced of the truth of a statement that replaces the word "can" above with the word "do".
That's because I believe the reason for modest scale was (and still is) cost and not administrative burden, at least as far back as 15 years ago. That is, even with just the (judicious) use of RPM/YUM, Kickstart, and the early CM tools, it was much less than 10x as hard to provision 2000 servers as 200. It might not even have been 2x as hard, apart from more exotic logistics and network engineering.
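To make that concrete: with Kickstart/PXE-style provisioning, the per-host work is basically stamping out a boot entry from a template, so host count barely changes the effort. A minimal sketch (the hostnames, MACs, and template below are invented for illustration, not any real site's config):

```python
# Sketch: generating per-host PXE/DHCP boot entries from a host list.
# The template and host data are illustrative, not a real site's config.

PXE_TEMPLATE = """\
host {hostname} {{
  hardware ethernet {mac};
  filename "pxelinux.0";
  option host-name "{hostname}";
}}
"""

def pxe_entries(hosts):
    """hosts: iterable of (hostname, mac) pairs -> one DHCP/PXE stanza each."""
    return [PXE_TEMPLATE.format(hostname=h, mac=m) for h, m in hosts]

# Provisioning 2000 hosts is the same code path as 200 -- only the list grows.
hosts = [("node%04d" % i, "52:54:00:00:%02x:%02x" % (i // 256, i % 256))
         for i in range(2000)]
entries = pxe_entries(hosts)
```

The point being that the marginal effort per host is near zero once the template exists; the 10x that remains is the hardware bill.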
It's just that nobody did it, unless they really needed it, because it still cost 10x. I suspect that AWS changed the dynamic with EC2 because now one could have tremendous capacity at reduced cost, if one only uses that capacity part time (but the provisioning cost is still there). I don't discount the possibility that a fall in the price of low-end and mid-range server products helped, but I don't have good data for that.
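The part-time economics are easy to sketch. With made-up round numbers (none of these prices are real quotes), owned capacity costs the same whether you use it 4 hours a day or 24, while on-demand only bills for hours used:

```python
# Toy cost comparison: owned servers vs. on-demand instances.
# All prices are invented round numbers for illustration only.

owned_cost_per_server = 300.0   # amortized hardware + power + space, per month
on_demand_per_hour = 0.80       # hourly rate for a comparable instance

def monthly_cost(servers, hours_used_per_day):
    """Return (owned, rented) monthly cost for the same workload."""
    owned = servers * owned_cost_per_server                   # flat, regardless of use
    rented = servers * on_demand_per_hour * hours_used_per_day * 30
    return owned, rented

# Full-time use: owning wins. Part-time (say, 4 h/day): renting wins.
full_owned, full_rented = monthly_cost(100, 24)
part_owned, part_rented = monthly_cost(100, 4)
```

Which side of the crossover you land on depends entirely on utilization, which is the dynamic EC2 changed.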
Whatever caused it, more computing power is now being provisioned, and the variety in tools (at least for CM, which, arguably, isn't really for provisioning) to support it has followed.
> they do a ton of custom work like adding all kinds of sensors to their servers so they can use automation to turn them off and on as needed.
I hadn't read that. I thought they pretty much only removed components from otherwise standard boards, but, at their scale, I suppose adding to out-of-band management would be OK.
Still, even something from SuperMicro has a remarkable number of sensors already there. My point is that a great deal of even low-level automation is (and has been) possible, but almost nobody bothers to do it, because that's a hard problem with, arguably, not enough return on investment, except at Google scale.
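For a sense of what that low-level automation could look like: something like `ipmitool sensor` already exposes temperatures and power draw out of band, and a small script can act on the readings. A hedged sketch (the sample output, sensor names, and threshold below are made up; real IPMI output varies by board):

```python
# Sketch: parse ipmitool-style sensor output and flag idle nodes for power-down.
# The sample text and the 100 W threshold are illustrative, not from real hardware.

SAMPLE = """\
CPU Temp         | 38.000     | degrees C  | ok
System Temp      | 29.000     | degrees C  | ok
PW Consumption   | 92.000     | Watts      | ok
"""

def read_sensors(text):
    """Parse 'name | value | unit | status' lines into {name: float}."""
    readings = {}
    for line in text.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) >= 2:
            try:
                readings[parts[0]] = float(parts[1])
            except ValueError:
                pass  # skip non-numeric readings (e.g. 'na')
    return readings

def should_power_down(readings, idle_watts=100.0):
    """Toy policy: a node drawing under idle_watts is a candidate to switch off."""
    return readings.get("PW Consumption", float("inf")) < idle_watts

sensors = read_sensors(SAMPLE)
```

The hard part was never reading the sensors; it's making the power-down policy safe across a fleet, which is where the return on investment argument kicks in.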
Instead, there's automation of newly fabricated work, like provisioning of something created by an abstraction layer (e.g. a VM).
> you could argue there are whole teams dedicated to tooling and their labor hours are moved to them.
I argue exactly that, but I admit I have no idea if, in moving those labor hours, they managed to conserve any (or instead inflate them).
More importantly, though, it's that, because so much of that labor is "hidden" elsewhere, any other company thinking that they could emulate staffing levels by just counting SREs is in for a rude awakening.
Or maybe only 8x, if your negotiating position got that much stronger at higher volumes. But that's not a foregone conclusion when you have schedule needs too; it might also be higher than 10x.
Gotta love corporate: where big decisions are made by the least qualified.
1) AWS came up with their AZ layout (small nearby buildings) to avoid needing industrial-scale power and cooling gear.
2) Industrial electricians despise working with web startups because they ask stupid questions like, "can we failover the megawatt bus bars periodically?". Those are intended for an emergency failover maybe once a year, not some kind of API you call repeatedly.