Servers Too Hot? Intel Recommends a Luxurious Oil Bath (wired.com)
76 points by theklub on Sept 2, 2012 | 38 comments



This of course looks very cool, but to me it does not seem practical outside of edge cases, even though the technology is not new in any way: high-power transformers, the kind you find in power plants, for example, have been oil-filled basically forever.

The practical problems, on the other hand, are not to be ignored. The oil needs to be kept rather clean, otherwise it will lose its good insulating properties, hence you need sealed containers or purification devices. In industrial-scale deployments this means keeping an eye on the chemical composition of your coolant through regular chemical analysis.

Connectors and cables need to be really oil-tight, otherwise the oil will creep out through cables hanging out of a closed vessel (even if they run higher than the oil surface)!

It's not an efficient technology in terms of the amount of coolant used: you put everything into the oil bath, the high-energy-density components (CPU, graphics card, maybe chipsets, fast RAM) and everything else alike, regardless of their individual energy consumption. Imagine a full data center: you'd basically need thousands of cubic meters of pure, constantly filtered oil... For the good old Cray computer often used as an example it was different: there, computing was spread out over a vast number of logic gates that deposited their waste heat across a really large volume.
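A quick back-of-envelope on that "thousands of cubic meters" claim. The server count and per-server oil share below are illustrative assumptions, not figures from the article:

```python
# Rough estimate of total oil volume for a fully immersed data center.
# Both inputs are assumptions for illustration only.

servers = 50_000           # a large data center (assumed)
oil_per_server_l = 100     # litres of oil attributed to each server (assumed)

total_m3 = servers * oil_per_server_l / 1000  # 1 m^3 = 1000 litres
print(f"{total_m3:,.0f} cubic meters of oil")
```

With these inputs the answer is 5,000 m³, squarely in the "thousands of cubic meters" range the comment describes.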

The examples I have seen (case modders being a good example) were also rather generous with container size and amount of coolant, where enough circulation and cooling was provided by natural convection. "Professionally", one would use this technology for a reason, hence in devices that are even more densely packed than usual blade centers, and there I expect issues with forced circulation of warm/hot oil just as we have hot/cold air distribution nowadays.

A technology like currently deployed water-cooled devices, where the majority of heat is collected by water at the concentration points (mainly the CPU) and air takes care of the (much reduced) remainder, seems much more sensible to me.


> The oil needs to be kept rather clean, otherwise it will lose its good insulating properties, hence you need sealed containers or purification devices. In industrial-scale deployments this means keeping an eye on the chemical composition of your coolant through regular chemical analysis.

Considering that they say that it only needs to be changed once a decade, it sounds like they've got that figured out.

>Connectors and cables need to be really oil-tight, otherwise the oil will creep out through cables hanging out of a closed vessel (even if they run higher than the oil surface)!

You've already proposed one solution to that problem. Another possibility would be a simple lipophobic coating on a small section of the cable.

>you'd basically need thousands of cubic meters of pure, constantly filtered oil...

Once again, if it's changed once a decade, that's not really a big deal.

>"Professionally", one would use this technology for a reason, hence in devices that are even more densely packed than usual blade centers, and there I expect issues with forced circulation of warm/hot oil just as we have hot/cold air distribution nowadays.

OK, but Intel has been testing this in data centers for over a year. This isn't an idea that they're toying with. It's something that has been put into production.

>A technology like currently deployed water-cooled devices, where the majority of heat is collected by water at the concentration points (mainly the CPU) and air takes care of the (much reduced) remainder, seems much more sensible to me.

Well, I mean, that should be compared with numbers. They say this takes energy consumption down to 3%. What are the numbers for water cooling?


This may be only partially relevant, but I've been reading about transformer protectors and I can see all sorts of things going wrong here. In transformers, extreme heat causes arcing in the oil and consequently fire and/or explosion. I don't know how that'd play out here.


Arcing is a problem with open air transformers too, and one of the functions of oil in oil-filled transformers is to prevent arcing. Probably this property simply breaks down at very high temperatures, but since arcing isn't generally a problem for computers in the first place, it shouldn't be a problem regardless of oil temperature.


> Arcing is a problem with open air transformers too, and one of the functions of oil in oil-filled transformers is to prevent arcing.

That's part of the reason for oil-filled transformers. The other reason is to cool the heat-generating parts by relying on a liquid's tendency to create convection loops between the warmer and cooler areas.

http://en.wikipedia.org/wiki/Transformer_oil

A quote: "Transformer oil or insulating oil is usually a highly-refined mineral oil that is stable at high temperatures and has excellent electrical insulating properties. It is used in oil-filled transformers, some types of high voltage capacitors, fluorescent lamp ballasts, and some types of high voltage switches and circuit breakers. Its functions are to insulate, suppress corona and arcing, and to serve as a coolant."


Right, that's why I said "one of".


I wasn't being critical, only expanding your point.


That's a good point. Yes, it does break down under certain specific conditions; that's what causes transformer explosions during overloads and the like. Of course, oil being oil only intensifies the problem.


In spite of components having different energy densities, all of the components produce some waste heat, so it all has to be dissipated by the cooling system one way or another. In a large datacenter, even the low energy density components (combined across thousands of machines) generate a tremendous amount of heat.

Also, it would seem fairly straightforward to design optimized versions of components such as cables to make them better suited for oil, as well as a simple filtration / purity monitoring system.


I built a mineral oil PC a couple of years ago.

I epoxied the motherboard mount from an old computer case to some wood and then stuck the wood to the inside of a cheap fish tank. I installed the computer sans hard drive and then poured in 12 liters of mineral oil that I'd bought from a vet (vets use it as a horse laxative).

It resulted in a perfectly silent system, but because I didn't cover the top it was also a really effective fly trap: within a few months there were a bunch of dead flies floating at the top of the tank. Also, the weight made it a real effort to move around, and no matter how careful I was there was always a mess when I needed to swap out or adjust a part.

Still, it was a lot of fun to build, and I used it as my primary machine for about a year.


It's fascinating that I heard about homebrewers doing this kind of thing around a decade ago (although I believe they were keeping the hard drives un-submerged rather than sealing them up) yet this is only now being considered "cutting edge" as a commercial option.


The Cray-2 (1985) supercomputer featured 'liquid immersion' cooling: the cubes of processors were dropped into a bath of inert fluorocarbon fluid (Fluorinert) [1][2][3].

[1] http://en.wikipedia.org/wiki/Fluorinert

[2] Seymour Cray speaking about liquid cooling http://www.youtube.com/watch?v=QTfi_gNPuh0&list=FLZwq3bl...

[3] Visible cooling fluid within a 90s era Cray at the NSA: http://www.youtube.com/watch?v=BH8X8w8a4f4#t=23s


It's just a reminder that "commercially viable" is not the same thing as "possible".


Yet I wonder what really has changed that makes it commercially viable now. Pumping fluids around isn't a technology that has significantly advanced, after all. Is it just that Moore's law has ceased to apply to heat and power usage? Is it that CPU manufacturers finally got around to testing and falsifying the long-assumed-true hypothesis that servers need to be kept in ultra cool rooms to begin with?


I spent some time reflecting on this, and I think it's because the bottleneck has changed over the years. For a while, I/O was the real challenge for servers, but hard drives have improved a ton, and many-drive systems have improved a lot too. Memory had its day, but its price has plummeted.

Today, the limiting factor everyone is focusing on is core density, and cores generally produce more heat than any other component aside from high-end graphics and displays.

Of course, power consumption and thus heat has been continually improving, but not by orders of magnitude. Core density, on the other hand, has been going up by a few orders.

As a result, you've got to move more heat, and because density is what's going up, you've also impaired airflow.

Take this with a grain of salt though, it's just speculation based on what I've seen & heard.


Yup, I was reminded of those too! Feeding the nostalgia: http://web.archive.org/web/20010623203101/http://www.hardwar...


I like how the author of that completely failed to understand how heat conduction works:

> Just submersing your components in a liquid doesn't do you much good as the liquid will be room temperature and your CPU will burn up just as fast as if it were in the open.

Well no, because see, the thermal conductivity of air is about 0.026 W/mK and the thermal conductivity of mineral oil is at least 0.1 W/mK and oh god I'm trying to argue with someone 11 years in the past.
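For what it's worth, the gap in those two textbook conductivity values already makes the point (the exact oil figure varies by grade; 0.13 W/(m·K) is a typical value consistent with the "at least 0.1" above):

```python
# Conductive heat transport through a thin fluid layer scales linearly
# with the fluid's thermal conductivity k (Fourier's law: q = -k * dT/dx).

k_air = 0.026         # W/(m*K), air at roughly room temperature
k_mineral_oil = 0.13  # W/(m*K), typical mineral oil (grade-dependent)

improvement = k_mineral_oil / k_air
print(f"Oil conducts roughly {improvement:.0f}x better than air")
```

That's about a 5x advantage from conduction alone, and oil's far larger volumetric heat capacity helps even more once convection starts moving it around.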


Reminds me of a Bioshock Casemod from a few years ago[0]. From what the creator said, it doesn't even sound that expensive.

0: http://www.reddit.com/r/gaming/comments/dmrpv/bioshock_miner...


I really cannot understand why this is not more prevalent. The cost savings of not having air conditioning for a datacenter should be well worth any mess of oil.


There are lots of problems.

First, the standard vertical rack wouldn't work: you would not be able to remove any blades from the rack, since they are submerged.

You would not be able to lift the entire rack out of the tub either: you might not have enough headroom, and worse, the other computers in that rack would probably burn up from the sudden loss of cooling.

So you would have to switch to horizontal racks, so you could lift one blade out without affecting the others. Of course, that wastes tons of space: if you want lots of servers you would need multiple racks stacked one above the other, but then how would you service the upper ones?

And you are wasting space again: you need clearance above each rack. With a normal vertical rack, the gap you need to remove a server is the same gap you need to walk there, so no extra space is wasted.

And then there are weight issues: a typical 21.5 inch x 19 inch x 8 foot rack would weigh about half a ton just from the mineral oil! (Not including any metal.) Good luck building something strong enough to hold lots of them, and forget mounting them one above the other.
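The half-ton figure checks out. A quick sanity check of the arithmetic, treating the rack interior as a solid block of oil (a slight overestimate, since the hardware displaces some of it):

```python
# Mass of mineral oil filling a 21.5 in x 19 in x 8 ft rack volume.

IN, FT = 0.0254, 0.3048  # metres per inch / per foot

volume_m3 = (21.5 * IN) * (19 * IN) * (8 * FT)  # ~0.64 m^3
oil_density = 850                               # kg/m^3, typical mineral oil

mass_kg = volume_m3 * oil_density
print(f"{mass_kg:.0f} kg of oil")  # about half a (metric) ton
```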

All of this can be designed around, but it's not a simple matter - for now you would need to make it custom.


Blades in huge datacenters are not serviced one by one. Typically you'd wait until say 20% of the blades fail and only then fix and upgrade the whole rack at once.


What you describe are not faults; they are tradeoffs. Perhaps instead of having 14' ceilings and sub-floors and AC, a datacenter would have 6' ceilings and no AC.

Whether the technology makes sense depends on the overall cost-effectiveness. Google putting a data center near a river and using the river water in a complex water-cooling system, or routing used non-potable water to provide cooling, are both approaches that likely have at least as many challenging aspects, yet they are in use today because they lower overall costs.


My thoughts exactly, although most of the time space should be cheap enough where the DCs are located. But you would need a DC at least 2-3x the size to handle that.

The oil also costs money, plus the extra construction costs, etc. And if Google can already achieve a 10-20% figure, then it is probably not worthwhile to do this for that ~18% of savings.


It is not prevalent because they are still testing it, and the oil-immersed servers are not yet available for purchase. I do hope that when they do become available, the price is not too high.



The articles around this are interesting and consider how it could become prevalent, and they rightfully point to high-performance computing. I think another obvious area is countries like India, which have very high power costs and not much in the way of existing datacenters.


Immersion cooling was used on the Cray-2 supercomputer with Fluorinert[1]. Water cooling is fairly common in high-end supercomputing setups (I think all the BlueGene systems are water-cooled, for example).

[1] http://en.wikipedia.org/wiki/Cray-2


Midas Green Tech, anyone? http://www.midasgreentech.com


According to the article, Intel recommends removing the grease between the processor and the heat sink. Question: Why does it need a heat sink if it's being cooled by oil?


> Why does it need a heat sink if it's being cooled by oil?

The heatsink couples the heat source (the electronics) to the oil. The metal heatsink has much higher thermal conductivity than the oil. The advantage of the oil is that it circulates and carries heat away to another heatsink that ultimately dissipates heat to the air.
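To see why the extra surface area still matters under oil, here's a sketch using Newton's law of cooling, Q = h·A·ΔT. The convection coefficient and the areas are illustrative assumptions, not measured values:

```python
# Temperature rise needed to shed a fixed heat load into oil, with and
# without a heatsink. All numbers are illustrative assumptions.

h_oil = 100.0   # W/(m^2*K), assumed natural-convection coefficient in oil
power = 100.0   # W of CPU heat to dissipate

bare_lid_area = 0.0004  # m^2: roughly a 2 cm x 2 cm processor lid
heatsink_area = 0.02    # m^2 of finned surface (assumed)

dt_bare = power / (h_oil * bare_lid_area)  # rise above oil, bare lid
dt_sink = power / (h_oil * heatsink_area)  # rise above oil, with sink
print(f"bare lid: {dt_bare:.0f} K above oil; with heatsink: {dt_sink:.0f} K")
```

With these (rough) numbers, a bare lid would need an absurd temperature rise to move 100 W, while the heatsink's 50x larger area brings it down to a survivable level, which is exactly the coupling role described above.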


I interpreted that more as "After you remove the heat sink, make sure there isn't any of the conducting grease left over because it can ruin the mineral oil" but they could mean using a heat sink with no grease as well.


Well, the obvious answer is that the thermal resistance from the bare processor surface to the oil is too high to remove all the heat.

Hence you still need a heatsink (a smaller one though, and no need for a fan).


Larger surface area to dissipate the heat from the CPU to the oil?


I recently saw a video of a similar build at an annual PC-modding conference in Germany, where it won first place. It looked pretty awesome: the whole PC was immersed in a greenish liquid. You can see it at the end of the video, after 8:33.

http://www.youtube.com/watch?v=HMduN09xCgs&feature=playe...


Interesting that Supermicro is already moving towards putting these on the market. I hadn't heard of them until the other day with the Etsy setup, but this looks very promising for future datacenters large and small...


Check out LiquidCool Solutions too. I think this type of technology could very well be the standard in the future. http://www.liquidcoolsolutions.com


No risk of short-circuits?


No; mineral oil, in the grades they're using with the required specs, is an even better insulator than air.



