There are many, but the problems are more subtle than this video really gives credit for.
I worked at Google at the time this video was made, and empathized with it (in fact I had been an SRE for years by that point). Nonetheless, there are flip sides that the video maker obviously didn't consider.
Firstly, why did everything at Google have to be replicated up the wazoo? Why was so much time spent talking about PCRs? The reason is that Google had consciously established a culture up front in which individual clusters were considered "unreliable" and everyone had to engineer around that. This was a move specifically intended to increase the velocity of the datacenter engineering groups, by ensuring they didn't have to get a billion approvals to make changes. Consider how slow it'd be to get approval from every user of a Google cluster, let alone an entire datacenter, to take things offline for maintenance. These things had tens of thousands of machines per cluster, and that was over a decade ago. They'd easily be running hundreds of thousands of processes, managed by dozens of different groups. Getting them all to synchronize and approve things would be impossible. So Google said: no approvals are necessary. If the SRE/NetOps/HWOPS teams want to take a cluster or even an entire datacenter offline, they simply announce it in advance, and everyone else has to handle it.
This was fantastic for Google's datacenter tech velocity. They had incredibly advanced facilities years ahead of anyone else, partly due to the frenetic pace of upgrades this system allowed them to achieve. The downside: software engineers have to run their services in >1 cluster, unless they're willing to tolerate downtime.
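To make that tradeoff concrete, here's a back-of-the-envelope sketch. The numbers are made up for illustration (they're not Google's actual figures), and it assumes cluster outages are independent, which is generous. The point is just that if any one cluster can be pulled for planned work, a single-homed service inherits all of that downtime, while two or three replicas shrink the exposure dramatically.

    # Illustrative sketch only: assumed numbers and an independence
    # assumption. A single-homed service eats every cluster outage;
    # N replicas only go dark when all N are out at the same time.
    def availability(cluster_uptime: float, num_clusters: int) -> float:
        """P(at least one replica cluster is up), assuming independent outages."""
        return 1 - (1 - cluster_uptime) ** num_clusters

    # Say a cluster loses ~1 week/year to planned work (~98% uptime).
    for n in (1, 2, 3):
        a = availability(0.98, n)
        print(f"{n} cluster(s): {a:.4%} up, ~{(1 - a) * 8760:.1f} hours/year down")

That prints roughly 175 hours/year of downtime for one cluster, about 3.5 hours for two, and minutes for three. Real outages aren't independent and planned maintenance is announced in advance so traffic can be drained, but single-homing still puts you at the mercy of the maintenance schedule.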
Secondly, why couldn't cat woman just run a single replica and accept some downtime? Mostly because Google had a brand to maintain. When she "just" wanted to serve 5TB, that wasn't really true. She "just" wanted to do it under the Google brand, advertised as a Google service, with all the benefits that brought her. One of the aspects of that brand that we take for granted is Google's insane levels of reliability. Nobody, and I mean nobody, spends serious time planning for "what if Google is down", even though massive companies routinely outsource all their corporate email and other critical infrastructure to them.
Now imagine how hard it'd be to maintain that brand if random services kept going offline for long periods without Google employees even noticing. They could say: sure, this particular service just wasn't important enough for us to replicate or monitor, and the DC is under maintenance; we think it'll be back in 3 days, sorry. But customers and users would freak out, and rightly so. How on earth could they guess what Google would or would not consider worthy of proper production quality? That would be opaque to them, yet Google has thousands of services. It'd destroy the brand to have some parts be reliable and others not, according to essentially random factors nobody outside the firm can understand. The only solution is to ensure every externally visible service is reliable to the same very high degree.
Indeed, "don't trust that service because Google might kill it" is one of the worst problems the brand has, and that's partly due to efforts to avoid corporate slowdown and launch bureaucracy. Google signed off on a lot of dud uncompetitive services that had serious problems, specifically because they hated the idea of becoming a slow moving behemoth that couldn't innovate. Yet it trashed their brand in the end.
A lot of corporate process engineering is like this. It often boils down to tradeoffs consciously made by executives that the individual employee may not care about, value, or even know about, but which are good for the group as a whole. Was Google wrong to take an unreliable-DC-but-reliable-services approach? I don't know, but I really doubt it. Most of the stuff that SWEs were super impatient to launch, and got bitchy about the bureaucracy over, wasn't actually world-changing, and a lot of it never achieved any kind of escape velocity.
It works in less arid places too. You can use the humidified, cooled air to cool the incoming air; the now-warm air is then pumped out. Essentially, you can achieve this by putting a wet filter on the house outflow of an MVHR system, before the heat exchanger. The problem with MVHR is that the flow rates aren't very high. I have an inkling, though, that with a well-designed, efficient house you could maintain a more consistently comfortable temperature with very low energy usage (think a few W and perhaps 1 l/hr of water). The psychrometric chart suggests you can get a temperature delta of better than 10 °C at 35 °C and 40% RH.
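As a quick sanity check on that delta, here's a sketch using Stull's (2011) empirical wet-bulb approximation. The wet-bulb temperature is the floor for evaporative cooling; real systems only recover perhaps 80-90% of the depression, and the formula assumes roughly sea-level pressure.

    # Rough check of the claimed delta using Stull's (2011) empirical
    # wet-bulb formula (dry-bulb temperature in C, relative humidity in %).
    from math import atan, sqrt

    def wet_bulb_c(t_c: float, rh_pct: float) -> float:
        """Estimate wet-bulb temperature (C) from dry-bulb temp and RH%."""
        return (t_c * atan(0.151977 * sqrt(rh_pct + 8.313659))
                + atan(t_c + rh_pct)
                - atan(rh_pct - 1.676331)
                + 0.00391838 * rh_pct ** 1.5 * atan(0.023101 * rh_pct)
                - 4.686035)

    t, rh = 35.0, 40.0
    tw = wet_bulb_c(t, rh)
    print(f"Wet bulb at {t} C / {rh}% RH: {tw:.1f} C, depression ~{t - tw:.1f} C")
    # prints roughly 24.5 C, i.e. a theoretical delta of a bit over 10 C

So the chart reading checks out: the theoretical wet-bulb depression at 35 °C and 40% RH is a little over 10 °C, of which a practical system would deliver most but not all.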
At this point I can't tell what you're trying to pin me to, but just to be clear: no, I do not think there is an innate difference between men and women that is the cause of male mass incarceration. I believe there are profound cultural forces that create that result.