There was a uranium enrichment plant in the south of France that was colocated with a nuclear power plant. It consumed three quarters of the plant's output, about 2.7 gigawatts.
Fun fact: uranium enrichment centrifuges don't play nice with earthquakes. They spin so fast that they have huge inertia. When there's an earthquake, the support of the machine moves with the Earth, and the spinning part stays in place, so the machine is ripped apart. It happened with an early Iranian covert nuclear project.
I was wrong. It was Pakistan, not Iran. The enrichment program was certainly covert though.
In September 1981, a powerful earthquake measuring 6.1 on the Richter scale shook Islamabad and the surrounding area. Pakistani scientists at Kahuta were on a lunch break when the earth shook, forcing them to run to work stations only to hear the sounds of explosions. Some four thousand centrifuges operating in the Khan Research Laboratory had crashed. The earthquake had unbalanced the rotors, operating in a vacuum at some 64,000 revolutions per minute (RPMs); they hit their casings and turned to powder, making sounds like hand grenades exploding.
You can visit a gaseous diffusion site outside of Oak Ridge, TN at a new, albeit small, museum: https://amse.org/k-25-history-center/ Much of the infrastructure has been leveled, but there are a few remnants. Site selection factors included obscurity, abundant water from nearby rivers, and electricity availability; they needed megawatts at a time when a kilowatt was a lot of power. There is also the AMSE itself and tours of supercomputers at ORNL.
Thanks to the hydro generation available in the early 20th century, Alcoa (the Aluminum Company of America, namesake of Alcoa, TN) also established operations in the same area for the energy its smelting required. Their other large operations hub? Niagara Falls.
In the same vein, I heard in a recent video by The Tim Traveller (https://www.youtube.com/watch?v=3SQVIZxUrDg) that about a third of the power generated from one mine's coal went to powering that mine's own excavators.
The difference being that in that case, about 1/3 of the power was spent on production, while in the case of the nuclear power plant, less than 1% was. (The uranium enrichment facility produced materials for a lot more than the plant that was powering it.)
Your math seems off. If 75% of four reactors' output (so the equivalent of three reactors) was used, that uranium would have needed to fuel 300+ reactors for the share to come in below 1%. France has never gotten close to those numbers (~300 gigawatts).
What seems more believable is that they were making highly enriched uranium for bombs, not just reactor-grade uranium, or that the number is larger than 1%.
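For what it's worth, a quick back-of-the-envelope check, assuming the 2.7 GW figure from upthread and roughly 60 GW of installed French nuclear capacity (both numbers are approximations):

```python
# Rough sanity check; both inputs are approximations.
enrichment_draw_gw = 2.7      # Eurodif's stated consumption (from upthread)
french_nuclear_gw = 60.0      # ballpark installed French nuclear capacity

share = enrichment_draw_gw / french_nuclear_gw
print(f"share of national nuclear output: {share:.1%}")               # ~4.5%

# Output needed for the draw to fall below 1%:
print(f"capacity needed for <1%: {enrichment_draw_gw / 0.01:.0f} GW")  # 270 GW
```

So even against all of France's reactors, the enrichment draw looks like a few percent, not under 1%.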
I wonder why we don't see datacenters in places with "infinite" potential for electricity generation in cold climates like Iceland. If your workload isn't sensitive to external network latency (you're just crunching numbers or doing long-running computation on data you can move there at a normal pace), that sounds very cheap.
We do. Microsoft, Google, Facebook, and Apple have big data centers in Des Moines to take advantage of the cheap wind power. The local utility claims that last year they generated more wind power than their customers used in electricity.
They've got to make windmills rated for South Dakota. I drove the entire width of that state with a north wind and had to keep my wheel turned to the right the whole way. By the time I got to Montana I thought I needed new tires. The tire guy asked how long I'd be there; I said only that night, then driving straight back. He told me not to bother, the tires would wear back to even on the return trip.
Pretty sure Google's data center is in Council Bluffs, but yeah. Iowa produces the greenest electricity of any state in the country, with upwards of 60% of it coming from renewables in 2022 (and climbing rapidly).
Oh wow; I could swear I saw Iowa at the top of a bunch of “states by share of renewable energy” lists, usually followed by South Dakota. I feel like there must be some difference in how these lists measure things that I'm missing.
I don’t think any of it is ethanol. These figures are about electricity generation and it’s like 62% wind out of 63% renewables or something like that.
You still need personnel in the datacentre, and a remote location makes that more difficult. But I would imagine that latency is the biggest issue. Even university supercomputers, where I would imagine latency is not an issue, tend to be close by.
Yes, you need personnel, but not very many. Local governments that give these tax incentives for data centers often think the presence of a data center is going to lead to a lot of high-tech jobs. This is simply not true. If you've ever walked around a data hall in a big data center, it's an amazingly creepy and lonely place. The first time I visited a huge data center in Altoona, Iowa I was shocked at how few people there were.
I'd imagine the optics of this have changed with the recent focus on ocean surface temps. I would also bet the negative PR of pumping waste heat directly into cold climates would offset any savings.
The heat we directly dump into the environment, except where it affects the local ecosystem directly (e.g. warm rivers killing local wildlife, algae blooms), is a laughably small portion of the global warming problem - greenhouse gases are multiple orders of magnitude more effective (at being part of the problem).
Oceans are insanely large (citation needed) heat sinks.
> Oceans are insanely large (citation needed) heat sinks.
If my math is right (hopefully), and assuming there's no cooling (hopefully not), it would take 790 billion years for Microsoft's datacenter to raise the temperature of the ocean by 1 degree.
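Roughly how that works out, assuming an underwater pod on the order of Microsoft's ~240 kW Natick deployment and textbook values for the ocean (every number here is a rough approximation):

```python
# Order-of-magnitude estimate; all inputs are approximate.
ocean_mass_kg = 1.4e21            # mass of the world's oceans
specific_heat_j_per_kg_k = 3990   # seawater specific heat
pod_power_w = 240e3               # ~240 kW pod, all of it ending up as heat

joules_per_kelvin = ocean_mass_kg * specific_heat_j_per_kg_k
seconds = joules_per_kelvin / pod_power_w
years = seconds / 3.15e7          # seconds per year
print(f"{years:.1e} years to warm the ocean by 1 K")   # ~7e11, i.e. hundreds of billions
```

Same ballpark: hundreds of billions of years for a single pod-sized datacenter.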
I think it is disingenuous to pretend that I am making an argument on logistical, thermodynamic, oceanographic, etc. grounds. I claimed one thing - bad optics, bad public relations.
Sea surface temps are in the environmentalist spotlight at the moment. Between when Microsoft started their pilot program in 2018 and now, surface temps have changed dramatically. Some experts believe this change has made storms more energetic and less predictable.
Salt-water cooled power plants along with desalination plants have been getting pushback from environmentalists for years because of the negative effects of their heated effluent. This will only intensify.
For Microsoft, public relations have a documented cost. My claim is that their PR team would say that right now the cost of a large phase II deployment would outweigh the savings.
If it was that big a deal, they would channel the waste heat to useful purposes such as heating nearby homes for free. The marketing benefits are obvious.
That'd be great. The Volts podcast just had an episode about district heating. Data centers were among the list of readily available heat sources. Also reduces the data center's cooling costs. Win/win.
Iceland absolutely does do that. There are multiple Icelandic datacenter operators trying to win European customers with the promise of being powered by cheap renewable energy.
(But Iceland's potential isn't as "infinite" as it may sound. Yeah, they could still build plenty of electricity generation. But they stopped massively building out electricity generation after building the Kárahnjúkar power plant, which was probably the most controversial infrastructure project in Iceland ever. They still build smaller hydropower plants and lately a bit of wind.)
Connectivity could be easy. There are lots of extant subsea fiber connections in the Arctic region, because of the satellite downlink stations there (polar orbits are an easy way to guarantee revisits to the same Earth location every orbit cycle). Fucking Svalbard, of all places, has 2.5 terabits of dark fiber, paid for partly by the USA:
And the inverse is true as well: Starlink, the LEO satellite constellation, [edit: this is *no longer true*] spends the most loiter time and has the highest coverage, highest bandwidth, over the poles. I imagine you could plausibly get high-bandwidth, low-latency satellite backhauls from an Arctic datacenter.
It's Iridium that had the best coverage over the poles (and it was actually useful there, because Iridium has had inter-satellite links since day one). All of its sats pass right over the poles, and that's where the latitude rings are shortest.
Meanwhile, Starlink has its poorest coverage over the poles; you can see mostly empty space near them:
https://satellitemap.space/
My knowledge was out of date! Still, the general principle is visible in that 3D visualization: the density of satellites is greatest at the highest latitudes within their range. They cluster where the derivative d(latitude)/dt goes to zero.
Iceland is trying to grow its datacentre industry on the back of having large amounts of cheap and "green" electricity and a cold climate. I think historically its relative remoteness in terms of connectivity has been something of a constraint, but they have been working to address this with new submarine fibre cables recently.
To some extent, we do. Quincy, Washington is one of the most popular data center locations in the U.S. The general vicinity of Montreal, Quebec (in Canada) is also pretty popular.
Neither is Iceland, but both have relatively low temperatures and cheap hydroelectric power.
You can radiate it away, but that is really hard compared to just dumping it into the atmosphere or a body of water, especially as computers need quite low temperatures.
Elton John told me that space is cold as hell. Presumably the ISS is putting its waste heat somewhere, but I don't really know the limitations of radiators in space.
A cubic meter of space has very little heat energy, that's true, but it's more about having only a few molecules here and there in any cubic meter of space. That leads directly to the problem, though: if there are no molecules to contact you, steal some kinetic energy, and then leave, reducing your heat, the only option is to radiate it away. Radiation is far less efficient at rejecting heat from an object than using some material to carry that heat away through physical contact.
Fluid pipes are threaded through them for transferring heat. These contain mixed-phase ammonia: I believe it should condense from vapor to liquid under the radiators, the coldest point. That phase change adds a huge boost to their heat-carrying capacity.
If you think about why a vacuum flask or thermos is used to keep drinks hot (or cold) for an amount of time you'll realise that space / a vacuum is a very good insulator.
An object in space loses heat according to the Stefan-Boltzmann law of radiation. The emitted power scales with the fourth power of the radiator's absolute temperature (minus the tiny contribution radiated back by the roughly -270 C background of space). BUT the Stefan-Boltzmann constant is 5.67 × 10⁻⁸ W/(m²·K⁴), and that small constant is the problem.
You can do the numbers yourself but various internet sources suggest that spacecraft radiators can only cool between 100 and 350 W of internally generated heat per square meter. That is a big radiator for not much cooling.
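A minimal version of that calculation, assuming an ideal one-sided radiator at roughly room temperature with an emissivity of 0.9 and nothing but deep space in view (real radiators do worse once solar and Earth IR loading and view factors are included):

```python
# Ideal radiator facing deep space, Stefan-Boltzmann law.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2 * K^4)
emissivity = 0.9         # assumed
T_radiator = 300.0       # K, roughly room temperature
T_background = 3.0       # K, deep space

flux_w_per_m2 = emissivity * SIGMA * (T_radiator**4 - T_background**4)
print(f"{flux_w_per_m2:.0f} W/m^2")   # ~413 W/m^2 ideal; 100-350 W/m^2 in practice
```

So even the ideal case is only a few hundred watts per square meter, which is why real spacecraft radiators end up in that 100-350 W/m² range.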
OK, to correct myself: due to the lack of air, heat would only dissipate by black-body radiation.
One would need to calculate whether that could cool the chips efficiently.
It won't. There's a reason we use fans. There are two practical ways to handle high heat loads, convection or evaporation, and you would run out of evaporative material in space pretty quickly.
> You can radiate heat from the surface of the earth into space via the sky, but you don't lose much energy that way.
That's not quite true. Black-body radiation is the only reason the Earth does not heat up much more. It's the only factor with a negative radiative forcing.
Solar radiation is the main incoming source of energy, and black-body radiation is the only outgoing flow, with almost the same magnitude (which is huuuuge, btw).
What I mean is that building a system to radiate energy into space from the surface of the Earth isn't very cost-effective, though it's undoubtedly far less expensive than building a system you would launch into space.
A ground-based system at least has the advantage of being able to use the whole surface of the Earth.
On Earth you can use things like fans and evaporative cooling. That's far superior to building huge space radiators, but nothing is free.
I think this is a very popular mistake to make. It is sort of a well known bit of trivia, but only because everybody has to correct themselves on it at least once, haha.
Air cooling occurs by convection, which is completely absent in space. Conduction and advection seem similarly unlikely to contribute much. That leaves mainly radiation, which is a lot slower: https://en.wikipedia.org/wiki/Heat_transfer#Heat
How much cloud workload is 100% async from everything else? And while electricity might be cheap, I'd guess you lose that gain to taxes on importing equipment and to trying to recruit qualified staff.
Maybe AI training would work, but there you get into data security issues. These are often custom builds with special security measures to stop spying/theft.
I wish district heating was more of a thing in the US.
Unlike a steel mill, a datacenter could go almost anywhere. So why not put one in an urban neighborhood, heat water with it, and pump the hot water around the neighborhood through a hot water network?
Any occupied space could be heated with that water in the winter and could draw on the same supply for domestic hot water. The datacenter gets a much cheaper cooling bill. It'd effectively be geothermal cooling on steroids.
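Rough numbers to put a scale on the idea (every figure here is a hypothetical, order-of-magnitude assumption):

```python
# How many homes a modest facility's waste heat might cover in winter.
datacenter_it_load_kw = 5_000   # hypothetical 5 MW facility; nearly all of it becomes heat
recoverable_fraction = 0.8      # assume not all of the heat can be captured usefully
home_heat_demand_kw = 8         # rough winter heating load for one home

homes_heated = datacenter_it_load_kw * recoverable_fraction / home_heat_demand_kw
print(f"~{homes_heated:.0f} homes' worth of winter heat")   # ~500 homes
```

Even a modest facility could plausibly cover a neighborhood-sized heat load, which is why district heating operators find data centers interesting.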
One could also use it rurally to heat greenhouses for crops that need warmer climates. Apparently there is one company using their data centre to heat a swimming pool in the UK, however!
Anyone remember Rackable Systems Inc.? Their selling point was all about reducing data center costs by doing all the DC conversion in massive converters at the top of the cage. They supplied something like 48 V rails down the back of the cage that their servers would then plug into via special connectors on the back of each server. They promised more efficient power conversion and better thermal management, amongst other things.
I was never comfortable working with or around that stuff. I was always worried about a mishap that could bring me (or the cage itself) into contact with 48 volts of pure DC doom.
This is how the big cloud vendors do it; see the OCP server and rack designs for an example. IIRC the rack connects to one leg of a three-phase supply and converts to DC inside the rack's UPS.
It's great for efficiency, but it makes bringing in third-party equipment a hassle, since you need to make accommodations for the regular AC power that everything else is designed for.
I was looking into getting a UPS for my server. Some UPSes do power conditioning, and all do AC/DC conversion. I got to wondering why I can't buy a UPS that outputs DC and feed that directly into a silly looking PSU.
While we are at it, can anyone explain to me why so much power is spent on cooling datacenters to ~20C, when you could just run them "hot" at ~40C, with all the hardware being just fine and cooling costs being many times lower (and capex lower too)? Are my assumptions wrong, or is there a good reason for this?
Huge disclaimer that I'm not an expert on this, but:
Googling suggests that you're trading off electricity used to cool the data center with electricity used to blow the air over the electronics, and that ~25C is the temperature where the sum of those is minimized:
On Oxide and Friends (a podcast by a startup building servers) they claim their choice to use larger, quieter fans reduces fan power consumption from ~20-30% of the power a rack consumes to ~2%. That might suggest the ~25C number is more a result of poor hardware choices for moving air than anything else.
It's not even just the occasional human, either; humans are in data centers all the time.
Temperatures of 40 degrees Celsius are easily high enough to kill someone doing manual labor, especially because that heat isn't coming from the sun: if someone starts suffering from heat exhaustion, going into the shade isn't going to do anything to help them.
One reason cooling is used is to protect the Li-ion or lead acid batteries in the uninterruptible power supplies that provide backup power to the servers. Both Li-ion and lead acid lose calendar life and reliability when operated at 40 C. So, you have extra battery replacement costs and reliability concerns at higher temperature.
I also would imagine that you need some type of cooling system even to keep the ambient temperature at 40C. Otherwise the servers are just continuously pumping out heat and the temperature may get high enough to cause equipment issues. It could be the case that the equipment needed to cool to 25C is not that different from, or more expensive than, the equipment needed to maintain a temperature of 40C.
My impression is that these power supplies are usually in separate rooms or even buildings, so you can have different temperatures there.
> *It could be the case that the equipment needed to cool to 25C is not that different from, or more expensive than, the equipment needed to maintain a temperature of 40C.*
From what I've heard, the cost difference is quite large.
I have a background in running 150,000+ GPUs in all sorts of environments.
I'm in the process of building a new GPU based supercomputer and we are going with an air cooled datacenter with a 1.02 PUE, while everyone else is going with 1.4 PUE.
We confirmed with our vendors that air cooling is just fine. The machines will automatically shut down to protect themselves if they get too hot, but I doubt that will happen because we just expel the hot air immediately.
The capex to build a low PUE data center is a fraction of the cost of a traditional tier 3. Just think about all the infrastructure you don't have to build out when you don't have chillers. Essentially, all you need is a big box with fans on it.
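For context, PUE is total facility power divided by IT power, so the gap between 1.02 and 1.4 is bigger than it looks. A small sketch with a hypothetical 10 MW IT load (the load figure is my assumption, not from the post above):

```python
# Overhead implied by the two PUE figures above (10 MW IT load is hypothetical).
it_load_mw = 10.0

for pue in (1.02, 1.4):
    total_mw = it_load_mw * pue
    overhead_mw = total_mw - it_load_mw
    print(f"PUE {pue}: {total_mw:.1f} MW at the meter, "
          f"{overhead_mw:.1f} MW of cooling/overhead")
# PUE 1.02 -> 10.2 MW total, 0.2 MW overhead
# PUE 1.40 -> 14.0 MW total, 4.0 MW overhead
```

That's roughly 20x less overhead power, on top of the capex you skip by not building chillers.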
I know HN hates Bitcorn, but the fact is that the people who mine it have developed some really innovative solutions around data center and cooling designs. We're going to take advantage of that for our new (not crypto) supercomputer. The money saved there can go into buying more compute and growing the business itself.
To push waste heat to a building across the street, it's usually necessary to add heat at the source in order to transfer the thermal energy to the receiver more efficiently.
OTOH other synergies that require planning and/or zoning:
- Algae plants can capture waste CO2.
- Datacenters produce steam-sterilized water that's usually tragically not fed back into water treatment.
- Smokestacks produce CO2, nitrogen, and other flue gases that are reusable by facilities like Copenhill, and probably for production of graphene or similar smokestack air filters.
Worse, we build Bitcoin mining farms near cheap power, and that mining essentially produces nothing of value and in fact can raise power prices for everyone (e.g. [1]).
30% of the stuff in your "useful" datacenters is just porn. Then we have 30% gaming servers, and so on.
Who are you to decide that this is more important than digital currency?
In my view Social Media is characterized by being advertisement funded (HN technically fulfills this), being poorly moderated (HN does not fulfill this IMO), being engagement driven (HN isn't) and encouraging parasocial relationships (HN doesn't). HN is closer to a web forum of the 90s than modern social media.
But it would be a bad argument. Products necessarily compete for attention. The only way to avoid that would be to implement a centrally planned economy.
They "compete", but that competition doesn't have to be a push-based model where products insert themselves into unrelated parts of your life vying for dominance (between themselves and over the things in your life you actually care about). If there were a place you could search and have a reasonable expectation of finding just the subset of products worth the price for you personally and able to make your life better, would that count as an ad to you?
Parts of Europe ban political advertising near campaign periods. Yet obviously democracy works just fine. I view product advertisements in the same vein. I can review candidates on their platform and track record, just as I can review products on their merits by researching on my own.
I wouldn’t mind if all of the “impulse buys” that no one plans, but gets from an ad, went away.
And aluminum production near hydroelectricity (all the way back to the beginning of hydroelectricity in narrow valleys in the French Alps), heavy water in Norway (hydro), and currently aluminum in Iceland (geothermal) and... well... the same in Saudi Arabia et al. (natural gas to electricity to aluminum - still more effective than shipping the gas, it seems).
Aluminum is easier to ship than electricity.
What's interesting is that there are more constraints today. Can a deal be made for consistently low prices in the long run? The data center "unit" these days is pretty large: is there enough power, and is it available 24/7? (That's a problem for wind: a data center can soak up nearby wind power when it's cheap, but it still has to get predictable power the rest of the time.) Another development: location used to matter for most data centers, for legal and latency reasons, but for "pure compute" AI tasks it may matter a lot less, since latency hardly matters if the process will churn for days. So perhaps Iceland has some data centers in its future, with no need to export the aluminum.
There are other weird things about pricing vs exports: California exporting water in the form of almonds. And rice.
Hennessy and Patterson is obviously a major work in our field, but its hard numbers are slightly antique. Patterson has produced several papers, talks, and presentations recently that are better informed by his time as a hyperscaler insider.
The link in the article doesn't provide any evidence of the steel mill vs data center claim. I'll be charitable and assume that he got his links mixed up because he's got the same target URL for two links.
But anyway... I don't think we need many more steel mills. Most steel production is actually recycling of old steel. We are very close to being in a steady-state loop with regard to steel. So you need to find other uses for cheap power ;)
I think reprocessing scrap steel is a really good candidate for solar. Capital costs are low enough that it's worth chasing the lowest-priced electricity, and it doesn't matter whether the lowest price is at night or during the day. Solar is probably better because the marginal cost is close to zero: if you're buying electricity at night from a natural gas plant, the plant still has to burn gas to supply you power, but buying excess capacity from a solar farm doesn't cost the solar farm anything.
> Intel recently released a new generation of chips with significantly improved performance per watt that doesn't have much better absolute performance than the previous generation.
When was this published? I can't see a date. The Intel reference seems old, this stuff is still relevant today though.
Also no mention of all those GPUs used for AI LLMs of late.
Solar farms have made it much cheaper to expand electricity production, but the point of this article is still true: efficiency gains in data centers have a compounding effect.
I don't think this is the kind of generation they are talking about.
Solar is intermittent and you would need expensive energy storage to even things out.
I guess unless you can shift your compute usage to match production. Maybe that would work? But then your compute utilization is probably suboptimal. Why own hardware that is only in use 8-12 hours a day?
The web software I've seen has multiple layers and parts with much larger latencies than that: requests going through all kinds of geographically distributed layers, spammy responses, slow database queries, etc. (PostgreSQL by default doesn't even log anything under 10 ms.)
Maybe for FPS games it doesn't make so much sense but for everything else...
https://en.wikipedia.org/wiki/Tricastin_Nuclear_Power_Plant
- "Three out of four reactors were used for powering the Eurodif Uranium enrichment factory until 2012, the year that Eurodif was closed."