The practical problems, on the other hand, are not to be ignored. The oil needs to be kept rather clean, otherwise it will lose its good insulation properties, so you need sealed containers or purification devices; in industrial-scale deployments this means keeping an eye on the chemical composition of your coolant through regular chemical analysis.
Connectors and cables need to be really oil-tight, otherwise the oil will creep out through cables hanging out of a closed vessel (even if they go higher than the oil-surface)!
It's not an efficient technology in terms of the amount of coolant used: you put everything into the oil bath, the high-energy-density components (CPU, graphics card, maybe chipsets, fast RAM) along with everything else, regardless of each part's energy consumption. Imagine a full data center: you'd basically need thousands of cubic meters of pure, constantly filtered oil... For the good old Cray computer often used as an example it was different: there, the computing was spread out over a vast number of logic gates that deposited their waste heat over a very large volume.
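A quick back-of-the-envelope check of that volume claim. Every figure here is an assumption for illustration only (per-server oil volume, rack density, and rack count are guesses, not measured values):

```python
# Rough estimate of total oil volume for a fully submerged data center.
# All three inputs below are illustrative assumptions, not real figures.
oil_per_server_l = 15    # assumed litres of oil attributable to one server
servers_per_rack = 40    # assumed dense rack
racks = 2000             # assumed mid-size data center

total_m3 = oil_per_server_l * servers_per_rack * racks / 1000  # litres -> m^3
print(f"~{total_m3:,.0f} cubic metres of oil")  # ~1,200
```

So with these guesses you land on the order of a thousand cubic metres, consistent with the ballpark above.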
The examples I have seen (casemodders being a good example) were also rather generous with container size and amount of coolant, so that enough circulation and cooling was provided by natural convection. "Professionally", one would use this technology for a reason, hence in devices even more densely packed than the usual blade centers, and there I expect issues with forced circulation of warm/hot oil, just as we have hot/cold air distribution nowadays.
A technology like currently deployed water-cooled devices, where the majority of the heat is collected by water at the concentration points (mainly the CPU) and air takes care of the much-reduced remainder, seems much more sensible to me.
Considering that they say that it only needs to be changed once a decade, it sounds like they've got that figured out.
>Connectors and cables need to be really oil-tight, otherwise the oil will creep out through cables hanging out of a closed vessel (even if they go higher than the oil-surface)!
You've already proposed one solution to that problem. Another possibility would be a simple lipophobic coating on a small section of the cable.
>you'd basically need thousands of cubic meters of pure, constantly filtered oil...
Once again, if it's changed once a decade, that's not really a big deal.
>"Professionally", one would use this technology for a reason, hence in devices that are even more densely packed than usual blade-centers and there I expect issues with forced circulation of warm/hot oil just as we have hot/cold air distribution nowadays.
OK, but Intel has been testing this in data centers for over a year. This isn't an idea that they're toying with. It's something that has been put into production.
>A technology like currently deployed water-cooled devices, where the majority of heat is collected by water at the concentration points (mainly CPU) and air is taking care of the (much reduced) remains seems much more sensible to me.
Well, I mean, that should be comparable with numbers. They say this takes energy consumption down to 3%. What are the numbers for water cooling?
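One way to put such figures side by side is as a PUE-style number, treating each quoted percentage as cooling energy relative to IT load. Only the 3% value comes from the claim above; the water and air figures are assumptions for comparison, not measurements:

```python
# Convert cooling-overhead fractions into PUE-style figures for comparison.
# Only the 3% value is from the thread; the other two are assumptions.
it_load_kw = 1000.0
cooling_overhead = {
    "oil immersion (3% claim)": 0.03,
    "water cooling (assumed)": 0.15,
    "conventional air (assumed)": 0.50,
}
for name, frac in cooling_overhead.items():
    print(f"{name}: cooling {it_load_kw * frac:.0f} kW, PUE-ish {1 + frac:.2f}")
```

(Real PUE counts all facility overhead, not just cooling, so this is only a rough framing.)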
That's part of the reason for oil-filled transformers. The other reason is to cool the heat-generating parts by relying on a liquid's tendency to create convection loops between the warmer and cooler areas.
A quote: "Transformer oil or insulating oil is usually a highly-refined mineral oil that is stable at high temperatures and has excellent electrical insulating properties. It is used in oil-filled transformers, some types of high voltage capacitors, fluorescent lamp ballasts, and some types of high voltage switches and circuit breakers. Its functions are to insulate, suppress corona and arcing, and to serve as a coolant."
Also, it would seem fairly straightforward to design optimized versions of components such as cables to make them better suited for oil, as well as a simple filtration / purity monitoring system.
I epoxied the motherboard mount from an old computer case to some wood and then stuck the wood to the inside of a cheap fish tank. I installed the computer sans hard drive and then poured in 12 liters of mineral oil that I'd bought from a vet (vets use it as a horse laxative).
It resulted in a perfectly silent system, but because I didn't cover the top it was also a really effective fly trap: within a few months there were a bunch of dead flies floating at the top of the tank. Also, the weight made it a real effort to move around, and no matter how careful I was there was always a mess when I needed to swap out or adjust a part.
Still it was a lot of fun to build and I used it as my primary machine for about a year.
Seymour Cray speaking about liquid cooling: http://www.youtube.com/watch?v=QTfi_gNPuh0&list=FLZwq3bl...
Visible cooling fluid within a 90s-era Cray at the NSA:
Today, the limiting factor everyone is focusing on is core density, and cores generally produce more heat than any other component aside from high-end graphics and displays.
Of course, power consumption and thus heat has been continually improving, but not by orders of magnitude. Core density, on the other hand, has been going up by a few orders.
As a result, you've got to move more heat, and because density is what's going up, you've also impaired airflow.
Take this with a grain of salt though, it's just speculation based on what I've seen & heard.
Just submerging your components in a liquid doesn't do you much good, as the liquid will be room temperature and your CPU will burn up just as fast as if it were in the open.
Well no, because see, the thermal conductivity of air is about 0.026 W/mK and the thermal conductivity of mineral oil is at least 0.1 W/mK and oh god I'm trying to argue with someone 11 years in the past.
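For a feel of the difference, compare steady conductive flux through a thin fluid film using Fourier's law, q = k * dT / d. The layer thickness and temperature difference below are assumed values, and in practice convection and heat capacity matter even more than raw conductivity:

```python
# Conductive heat flux through a thin fluid layer, air vs mineral oil.
# q = k * dT / d (Fourier's law); d and dT are illustrative assumptions.
k_air = 0.026   # W/(m*K), air at room temperature
k_oil = 0.13    # W/(m*K), typical mineral oil
d = 0.001       # m, assumed 1 mm fluid layer
dT = 40.0       # K, assumed temperature difference

q_air = k_air * dT / d
q_oil = k_oil * dT / d
print(f"air: {q_air:.0f} W/m^2, oil: {q_oil:.0f} W/m^2 ({q_oil / q_air:.0f}x)")
```

Same geometry, same temperatures, roughly five times the heat flux just from conductivity alone.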
First the standard vertical rack wouldn't work - you would not be able to remove any blades from the rack since they are submerged.
You would not be able to lift the entire rack out of the tub, since you might not have enough height, but worse the other computers in that rack would probably burn up from a sudden loss of cooling.
So you would have to switch to horizontal racks, so you would lift one blade out without affecting the others. Of course that wastes tons of space. If you want lots of servers you would need multiple racks one above the other - but then how would you service the upper ones?
And you are wasting space again - you need air height above each rack. Normally in a vertical server, the gap you need to remove the server is the same gap you need to walk there, so no extra wasted space.
And then there are weight issues - a typical 21.5 inch x 19 inch x 8 feet rack would weigh about half a ton just from mineral oil! (Not including any metal.) Good luck building something strong enough to hold lots of them, and forget mounting them one above the other.
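That half-ton figure checks out if you assume the rack volume is entirely oil (ignoring displacement by the hardware; the oil density is a typical textbook value):

```python
# Sanity-check: mass of mineral oil filling a 21.5 in x 19 in x 8 ft rack.
IN_TO_M = 0.0254
width, depth, height = 21.5 * IN_TO_M, 19 * IN_TO_M, 8 * 12 * IN_TO_M
volume_m3 = width * depth * height

RHO_OIL = 850  # kg/m^3, typical mineral oil density
mass_kg = volume_m3 * RHO_OIL
print(f"{volume_m3:.2f} m^3 of oil -> about {mass_kg:.0f} kg")  # ~546 kg
```

About 546 kg of oil, i.e. roughly half a (metric) ton, before counting any metal.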
All of this can be designed around, but it's not a simple matter - for now you would need to make it custom.
Whether the technology makes sense depends on the overall cost effectiveness. Google putting a data center near a river and using the river water in a complex water-cooling system, or routing used non-potable water to provide cooling, are both systems in use today, and they likely have at least as many challenging aspects, yet they are used because they lower overall costs.
The oil also costs money, and there is the extra cost of construction, handling, etc. And if Google can already achieve a 10-20% figure, then it is probably not worthwhile to do this within that 18% of savings.
The heatsink couples the heat source (the electronics) to the oil. The metal heatsink has much higher thermal conductivity than the oil. The advantage of the oil is that it circulates and carries heat away to another heatsink that ultimately dissipates heat to the air.
Hence you still need a heatsink (smaller, though, and with no need for a fan).
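A series thermal-resistance sketch shows why: without fins, the case-to-fluid resistance dominates. All the resistance values and the oil temperature below are illustrative assumptions, not datasheet numbers:

```python
# Junction temperature with and without a heatsink, series-resistance model:
#   T_junction = T_oil + P * (R_junction_to_case + R_case_to_fluid)
# Every number below is an assumed, illustrative value.
P = 65.0        # W, CPU power
T_oil = 45.0    # degC, assumed bulk oil temperature
R_jc = 0.3      # K/W, junction-to-case (assumed)
R_bare = 1.5    # K/W, bare lid to oil: small wetted area (assumed)
R_sink = 0.3    # K/W, finned heatsink to oil: large wetted area (assumed)

T_bare = T_oil + P * (R_jc + R_bare)  # no heatsink
T_sink = T_oil + P * (R_jc + R_sink)  # with heatsink
print(f"bare lid: {T_bare:.0f} degC, with heatsink: {T_sink:.0f} degC")
```

With these guesses the bare lid would run far past safe die temperatures, while the finned version stays in plausible territory; the fins buy wetted area, which is exactly what the slow-moving oil needs.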