Wow, when I was a wee intern at Maxtor's server division ten years ago, I vaguely remember my boss talking about helium-filled drives. I never realized the technology had another decade of development to go! Though I guess it's possible the tech is purely Seagate's, and Maxtor's research was lost in the merger.
Correction: It's HGST (now merging with Western Digital), not Hitachi, and the HGST Helium-filled drives have been available for several years. Seagate is playing catch-up.
I'm pretty sure that economics, not technology, was the holdup. Pressurized hard drives have been around for a long time, the halon-filled IBM 3390 being a classic example [0]. Yes, helium is more difficult to contain than halon, but it's a long-solved problem.
You do know that helium is the second most abundant element in the Universe, right? We are not going to run out of helium. Someone will stand to make some money in going out there and getting it, perhaps, but there is plenty.
In the universe, certainly. Look at this [1] and you'll find that helium is an exceedingly small percentage of the Earth. Helium has an amazing ability to escape containers over time due to its almost completely non-reactive state. Once airborne, it readily escapes to space. We get it primarily from the decay of radioactives [2] that gets caught in pockets. It's not a renewable resource. Worse, it's not recyclable or reclaimable once lost. It's a big deal and we should be conserving it heavily.
It is abundant, so we're not going to "run out", but it may reach a point where it's impractical to recover.
As the second most abundant element, a lot of it is found diffuse in the interstellar medium and in stars. It's not useful to us there.
Right now, it's easily and cheaply available by extracting it from certain deposits of natural gas. We should price it based on long-term need and long-term availability, which doesn't include the helium found in distant stars.
Large power station alternators are hydrogen-cooled... that's right, the whole alternator is filled with hydrogen. You have to monitor how close you get to the upper explosive limit.
At a tenth the windage and ten times the heat-transfer capacity of air, the stator and rotor can be a hell of a lot closer and the magnetic coupling more efficient, hence massive savings in construction and more efficient conversion to electrical energy.
Hydrogen escapes from any container you put it into. Normally it's contained in 70 kg lead bottles (1 kg of hydrogen per bottle), but all of it still leaks out within 3-4 weeks (and as it does so, it makes the metal brittle).
Hydrogen is a reactive gas. It is much more difficult to work with. Helium being a noble gas avoids many of the worst complications like making metal containers brittle.
"Usage of helium inside a hard drive helps to reduce the drag force acting on the spinning disk stack and lower the fluid flow forces affecting the disks and the heads. As a result, HDD makers can install up to seven platters into a standard drive and also use lower-power motors and mechanics. This reduces the power consumption of the HDDs."
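As a rough sketch of why the gas matters: turbulent windage power on a spinning platter scales roughly with gas density, and helium is far less dense than air. The densities below are standard values at room temperature; the linear-scaling assumption is my simplification, not something from the article:

```python
# Rough windage comparison: assume turbulent drag power on a spinning
# platter scales roughly linearly with gas density (a simplification).
AIR_DENSITY_KG_M3 = 1.204      # dry air at ~20 C, 1 atm
HELIUM_DENSITY_KG_M3 = 0.166   # helium at ~20 C, 1 atm

def windage_ratio(gas_density, reference_density=AIR_DENSITY_KG_M3):
    """Fraction of air-filled windage power for a drive filled with a gas."""
    return gas_density / reference_density

ratio = windage_ratio(HELIUM_DENSITY_KG_M3)
print(f"Helium windage is roughly {ratio:.0%} of air windage")
```

On this crude model the drag power drops to around a seventh of the air-filled figure, which is consistent with the quoted claims about lower-power motors and extra platters.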
Nitrogen ≈ air. Around 78% of air is nitrogen; much of the rest is oxygen, then argon, a little CO2, and trace gases.
You don't get any improvement in most things by using pure nitrogen instead of air, unless one of those other gases was causing you a problem (in the case of tires, oxygen degrades the materials the tires are made of). The point of helium is that its molecules are much smaller and lighter than those of regular air, so there's less friction. You don't get that with nitrogen. You would get it with hydrogen (which is quite a bit smaller even than helium), though there's probably some other problem with H2 that prevents that.
And yeah, a vacuum should be ideal; the problem there is that atmospheric pressure, at 14.7 psi (IIRC), is pretty significant, strong enough to hold your suction-cup-mounted cellphone to your car windshield, so you'd need a pretty robust hard drive enclosure to maintain that vacuum without crushing. Vacuum chambers are usually really beefy. It would be interesting to see a vacuum hard drive; I wonder if WD ever tried that, just to see what it could do. They probably just can't make it while sticking with the regular 2.5" or 3.5" form factors. Maybe a double-height 3.5" size (with the internals being the same size as a regular 3.5" drive, the rest just being extra structure) would be possible.
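To put a number on that "pretty significant": a back-of-the-envelope estimate of the atmospheric load on an evacuated 3.5" drive lid. The lid dimensions here are my own approximation of the 3.5" form factor, not a spec figure:

```python
# How much force does the atmosphere exert on a fully evacuated 3.5" drive lid?
# Lid dimensions are an assumption: roughly 101.6 mm x 146 mm (the 3.5" form
# factor footprint), ignoring the smaller side and bottom faces.
ATM_PA = 101_325                # standard atmosphere in pascals (~14.7 psi)
LID_AREA_M2 = 0.1016 * 0.146    # top-cover area in square meters

force_n = ATM_PA * LID_AREA_M2
print(f"~{force_n:.0f} N (~{force_n / 4.448:.0f} lbf) pressing on the lid")
```

That's on the order of 1500 N (several hundred pounds) trying to crush the cover, which is why vacuum-rated enclosures end up so beefy.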
I've always found that odd these days. Given that we're already not doing dumb magnetic manipulation anymore, is it not feasible to use a static magnetic field between the head and platter to cushion the head at all? The magnetic interaction with the platter would presumably be predictable, since it's the oscillation that's used to write the data.
Considering that normal hard drives have regular air in them, and regular air is over 78% nitrogen, I guess bumping it to 100% nitrogen wouldn't make any difference.
From experience, I would say so. In various arrays I've worked with, the Seagates always seem to be the first to fail SMART. Also they are really noisy when compared to a WD or Hitachi (not really an issue in a Datacentre, but I'm long in the tooth enough to automatically associate 'noisy' with 'failing drive').
Tell me about it. I currently have a four month old Seagate Archive 8TB drive that is failing all over the place and returning corrupted data. And it looks like I'll be having to make threats under the Consumer Rights Act to get it replaced.
"Archive" drives are not meant for day to day use - that's why they are called "Archive" (as in, write-mostly-once-read-occasionally). They use SMR layout, which is just as fast for writing, but much slower for rewriting.
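A toy model of why rewrites hurt on SMR: tracks in a shingled zone overlap like roof shingles, so modifying data partway through a zone means rewriting everything from that point to the end of the zone. The zone size below is illustrative; real drives also have media caches and translation layers that soften this:

```python
# Toy model of SMR rewrite amplification. Sectors in a shingled zone overlap,
# so updating one logical sector forces a physical rewrite of that sector and
# every sector after it in the zone. Zone size is an illustrative assumption.
SECTORS_PER_ZONE = 65_536   # e.g. a 256 MiB zone of 4 KiB sectors

def sectors_rewritten(sector_index):
    """Physical sectors rewritten to update one logical sector in a zone."""
    return SECTORS_PER_ZONE - sector_index

# Appending at the end of the zone is cheap (1 sector rewritten)...
print(sectors_rewritten(SECTORS_PER_ZONE - 1))
# ...but rewriting near the start drags the whole zone along (65536 sectors).
print(sectors_rewritten(0))
```

Sequential, append-style workloads (like archive backups) always hit the cheap case, which is why the drives are marketed that way.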
That said, if it is returning corrupted data, you should check the memory chips in your computer (If you can do a non-UEFI boot, an Ubuntu live disk has a memory test utility you can use).
Even if the on-disk data is corrupted, the drive should detect that and return an error and no data, rather than corrupted data.
Yes, I'm aware of the nature of the drive. It is purely used to store incremental backups from my main ZFS set of three, so sequentially-written large tar files, with associated par2 parity data.
No, the drive isn't technically returning corrupted data. I ran "par2 v" over the stored files (which is a read-only verify of the parity data), and it reports several of the files corrupted. However, looking in /var/log/messages, I do have several messages saying "Medium Error" and "Unrecovered read error - auto reallocate failed".
The drive also has quite bad SMART stats, for instance:
I think the trick could be to keep the helium inside the drive at a pressure lower than the external pressure. That way you wouldn't need to keep the helium in, just the air out.
Ballpark, that's 100,000 days, or roughly 300 years. So I guess that's MTBF, not "rated for", which reinforces the impression that we are talking about per-device life expectancy.
MTBF means that, across the combined running time of all devices (within their expected lifetime), the expected time between failures is 2.5 million device-hours.
So, if you buy 3000 of them, you can expect one failure within about a month (assuming the expected lifetime is much larger than that, which is not a stretch).
Also, MTBF says nothing about what happens outside the expected lifetime. If that is 10 years, whether they all break down on the first day of year 11 or somewhere between year 10 and year 100 doesn't affect MTBF.
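The fleet arithmetic above can be checked directly; the 730 hours/month figure is just 24 × 365 / 12:

```python
# Expected failures per month in a fleet of drives, given a 2.5M-hour MTBF.
# MTBF is a rate statement about a population, not a per-device lifetime.
MTBF_HOURS = 2_500_000
FLEET_SIZE = 3000
HOURS_PER_MONTH = 730   # ~24 * 365 / 12

device_hours = FLEET_SIZE * HOURS_PER_MONTH
expected_failures = device_hours / MTBF_HOURS
print(f"~{expected_failures:.2f} expected failures per month")
```

That comes out just under one failure per month for 3000 drives, matching the comment above.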
So, what do they expect the lifetime of these devices to be?
Also note that that is "over a five year service life". So, they do not guarantee the helium to be still there after that. Given the nature of this stuff, it likely will slowly escape, but I guess professionals using these will want to retire them fairly soon afterwards.
The drive would be 10 TB, 7200 RPM. WD is already selling similar drives for $700, with a mean time between failures about 3x that of usual drives and a 50% better workload rating than non-helium-filled drives. Given the cost, I guess this won't matter much to consumers for now.
100MB/s * 60 seconds * 60 minutes * 24 hours = 8,640,000MB (or 8.64TB) per day
That makes sense because I have performed 7-pass wipes on 1TB hard drives and it took just about 24 hours (not my decision to do 7 passes, but a customer requirement)
Of course it's going to vary based on the drive and how far away from the center of the drive the tracks are, and some high performance platter drives have benchmarked at 200MB/s.
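Those numbers are consistent with the 7-pass wipe anecdote above; a quick sanity check, assuming ~100 MB/s sustained (decimal units, and ignoring the slowdown toward the inner tracks):

```python
# Sanity check: time for a 7-pass wipe of a 1 TB drive at ~100 MB/s sustained.
PASSES = 7
CAPACITY_MB = 1_000_000    # 1 TB in decimal MB
THROUGHPUT_MB_S = 100      # assumed average sequential rate

seconds = PASSES * CAPACITY_MB / THROUGHPUT_MB_S
print(f"~{seconds / 3600:.1f} hours")   # ~19.4 hours
```

About 19.4 hours at a flat 100 MB/s; since real drives slow down on the inner tracks, "just about 24 hours" is entirely plausible.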
I'm talking about lifetime writes, not daily writes.
If you look at the Samsung 850 PRO SSD specs page [1], the most important metric in SSD lifetimes is the number of terabytes you can write, or TBW. Yes, they do list the reliability in hours, but if you hit the TBW first, then it's game over.
Reliability for the 850 PRO:
> MTBF : 2 million hours(125GB/256 GB/512 GB/1 TB), 1.5 Million Hours Reliability (2 TB)
So the 2 TB model can (theoretically) be used for up to 1.5e6 hours, but if you write 300 TB, it will simply fail. This is all due to limitations of NAND flash at the device level, so it's essentially unavoidable.
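To see which limit bites first, pick a sustained write rate (100 MB/s here is my illustrative choice, not a spec figure) and compare the time to exhaust the TBW rating against the quoted MTBF:

```python
# Which limit bites first for the 2 TB 850 PRO: 300 TBW or 1.5M hours MTBF?
TBW_LIMIT_TB = 300
MTBF_HOURS = 1_500_000
WRITE_RATE_MB_S = 100   # illustrative sustained write rate (an assumption)

hours_to_tbw = TBW_LIMIT_TB * 1_000_000 / WRITE_RATE_MB_S / 3600
print(f"~{hours_to_tbw:.0f} hours (~{hours_to_tbw / 24:.0f} days) to hit TBW")
# Writing continuously at 100 MB/s exhausts the TBW rating in roughly a month,
# around 1800x sooner than the quoted MTBF would suggest.
```

Of course almost nobody writes continuously at full speed, but it shows why TBW, not MTBF, is the binding limit for write-heavy workloads.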
There are primarily two different techniques for NAND flash storage: SLC and MLC. In SLC, you store 1 bit per flash cell, whereas in MLC you store 2 bits per cell. Due to the nature of MLC, you need multiple reads/writes to read/store data for each cell. As a result, SLC SSDs are much more reliable, but this comes at a higher cost. Most consumer SSDs are MLC or even TLC (3 bits/cell). See [2] for more info.
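The SLC/MLC/TLC distinction boils down to how many distinct charge levels each cell must hold, which is why endurance drops as bits per cell go up:

```python
# NAND cells store bits as discrete charge levels: 2**bits levels per cell.
# More levels means tighter voltage margins between them, hence slower
# sensing/programming and lower write endurance.
def levels_per_cell(bits):
    return 2 ** bits

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    print(f"{name}: {bits} bit(s)/cell -> {levels_per_cell(bits)} charge levels")
```

Each extra bit doubles the number of levels that must fit in the same voltage window, so SLC's two well-separated levels tolerate far more wear than TLC's eight.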