Yep. The amount of power that can be delivered to a building is a difficult thing to change, so the biggest cloud companies try to maximize the share of that power used for compute. There is a metric for this called Power Usage Effectiveness (PUE): the ratio of total energy usage to energy used for compute. Most modern cloud companies have PUEs below 1.3.
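To make the ratio concrete, here's a minimal sketch of the PUE arithmetic; the 1300 kW / 1000 kW figures are made up for illustration:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / power reaching IT gear."""
    return total_facility_kw / it_equipment_kw

# A hypothetical facility drawing 1300 kW total, 1000 kW of which goes to servers:
print(pue(1300, 1000))  # -> 1.3
```

A perfect facility, where every watt goes to compute, would have a PUE of exactly 1.0.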
The data centers in Ashburn, Virginia are mostly leased facilities, and these tend to be traditional raised-floor, air-conditioned buildings. Even so, they are reasonably efficient, with PUEs in the 1.6 range.
I can’t find where that stat comes from, however. As for everything else you said, it would be great to have sources. As far as I know, cooling is one of the major costs of running a data center; it would be illuminating to know why that’s no longer true.
The 70% thing came from a business development official in Loudoun County, a person whose job is to promulgate misleading boosterism. It doesn't make even a sliver of sense to someone who thinks it over for a moment.
Cooling is a major data center cost, but it's not 40% any more. All of the major datacenter operators claim a PUE of 1.11 or less. Facebook has been the most open and vocal about how they achieve this, for example at https://tech.fb.com/hyperefficient-data-centers/ (scroll down to "Systems").
"Today, power-hungry computer room air conditioners (CRACs) or computer room air handlers (CRAHs) are staples of even the most advanced data centers."
False. The absence of CRACs is, in fact, the defining characteristic of "the most advanced data centers".
"In most data centers today, cooling accounts for greater than 40 percent of electricity usage."
That might be true under some improper weighting (say, counting small server rooms alongside hyperscale facilities), but for advanced large-scale cloud data centers, the figure is more like 8%.
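That ~8% figure is consistent with the claimed PUEs above. A quick back-of-envelope check, assuming the 1.11 PUE cited earlier:

```python
# At a claimed PUE of 1.11, ALL non-IT overhead (cooling, power conversion,
# lighting, etc.) combined is (PUE - 1) / PUE of total facility power.
pue = 1.11
overhead_fraction = (pue - 1) / pue

print(round(overhead_fraction * 100, 1))  # -> 9.9 (percent)
```

So at PUE 1.11, cooling alone can't exceed ~10% of total power, and 8% leaves room for the other overheads. A 40% cooling share would instead imply a PUE well above 1.6.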
"Virginia’s “data center alley,” the site of 70 percent of the world’s internet traffic"
This author has no idea what they are talking about.