Western Digital Announces Ultrastar He12 12 TB and 14 TB HDDs (anandtech.com)
84 points by seycombi on Dec 9, 2016 | 109 comments



I'm guessing these aren't really consumer drives. :(

Might as well ask here. My 2TB Plex hard drive is getting full. What's the best solution if I want, say, 10TB to store my media for Plex, but very hands-off?

I don't want to build some contraption; I just want to plug something in and have it be visible to Windows 10.


I've got a FreeNAS system built by iXSystems. It works well. But a NAS system is more expensive than a single drive. The FreeNAS box replaced a NetApp StoreVault unit.

Generally I've gone to an archive-first strategy for recovery, though. I didn't want to buy a tape drive to back it up (10TB usable after dual-parity protection), so I write media to BD-Rs as I get it, then put it on the NAS box for accessibility. I also back up individual projects on their own DVD-R or BD-R. That negatively impacts 'time to reconstruct', because each source has to be located, loaded, then verified. But it means essentially everything on the NAS box is already available on shelf-stable, read-only media. It also means the recovery option after a ransomware attack is to reformat and reload. The challenge is maintaining the discipline of making the restore media first.


I probably have the same (FreeNAS Mini, right?). Quite happy with it, although it can be fiddly to set up. Has tons of features though.

In fact, as I speak I'm upgrading to 4x8TB Western Digital Red drives (I back up a number of computers, and have some large media files). Resilvering takes foreeeevveeerrr btw. (looks like about 36 hours per disk, times 4)


Would ZFS snapshots protect against ransomware?


Yes, absolutely. ZFS snapshots are immutable: once a snapshot is created, nothing can change it (until you destroy the snapshot). They weren't designed with ransomware in mind, but it's a use case they protect against perfectly. (You also want backups, of course.)

Edit: I guess I'm assuming the ransomware doesn't have access to the raw disks, which would be true if you were using a NAS (e.g. FreeNAS) and connecting to it via NFS, CIFS, etc.
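
A minimal sketch of what that looks like (pool/dataset names made up):

  zfs snapshot tank/media@2016-12-09    # point-in-time, read-only snapshot
  zfs list -t snapshot                  # enumerate what you can roll back to
  zfs rollback tank/media@2016-12-09    # revert the dataset, e.g. after
                                        # ransomware scrambles the share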


Correct, no access to FreeNAS via ssh. Ransomware would have to find a vulnerability in a service exposed by FreeBSD / FreeNAS.


On the client? Sure, as long as it doesn't grow a brain and start to SSH into the storage and wipe that clean, too.

On the storage? Nope.

Get backups.


Curious, how/how often do you test the BD-Rs? I'd be a bit worried about their long-term stability.


Had a mass of DVDs/BDs from another project. The DVDs were 10-15 years old, and we checked a batch half a year ago with no loss. If you are concerned about your BD-Rs, you could use M-DISCs: https://en.wikipedia.org/wiki/M-DISC


> iXSystems

Looks like a Supermicro reseller - at least their rack offerings.


For hardware, pretty much. But they sponsor the development of FreeNAS, so they aren't just a reseller. They do add a lot to the equation.


Get yourself an Avoton motherboard, buy as many WD Green/Red drives with the best cost-per-TB as your chosen enclosure and budget can handle, install FreeNAS, enjoy ;)


If you're going to put WD Greens in a FreeNAS box, run wdidle first to increase or disable the drive's idle spin-down timer.
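
From memory, the flags look roughly like this; double-check against your version's help (idle3ctl from idle3-tools is the Linux-friendly equivalent):

  wdidle3 /R      (report the current idle3 timer)
  wdidle3 /S300   (set it to 300 seconds)
  wdidle3 /D      (disable it entirely)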


How commonly do WD Greens actually fail? I'm either strangely lucky, or the whole unreliable-WD-Greens thing is way overblown, because I've been using three 1TB ones without incident for 5 years now.


It's actually hard to say. You might be very lucky; someone else might be very unlucky. Generally, RAID was 'invented' to operate on cheap, commodity hardware where the failure rate might be high. When it comes to data, the most reliable source you can find (mostly due to its sample size and the lack of competing reports of similar scale) is Backblaze's drive reliability reports.


Exactly, that's why I switched to HGST. They really seem to be a cut above the rest.


I've exclusively purchased WD drives - and most of them are green. I have ten 2TB drives and two 5TB drives. 0 failures.

However, from the IRC channels I frequent, many people have had one or two die over the years. So I consider myself lucky.


I had a 1TB WD Green drive that failed. Also had another failure with a 320GB USB WD Passport.

Probably just bad luck, but I don't feel like buying another WD drive.


>Might as well ask here. My 2TB Plex hard drive is getting full. What's the best solution if I want say 10TB to store my media for Plex but very hands off?

You probably need a NAS. To store 10TB securely you are looking at a bill of materials of about $1,200. This includes a NAS from Synology and three 6TB Western Digital Red drives, giving you 12TB of storage space in a RAID 5 configuration.

  Synology DS416 NAS, diskless: $365
  WD Red 6TB: $253 (x3)
  Total: $1,124

For a little more you could pick up an HP Microserver for just under $400 and run ZFS on a pool with smbd, and get about the same results but with more hands-on fun.

Alternatively, just aim your requirements a bit lower (e.g. only 6TB) and pick up a 2-bay NAS and two drives running as a mirror; it will be much cheaper (~$700-800).

Resources: http://www.raid-calculator.com/default.aspx


Don't run 12TB of data on consumer hardware in a RAID 5. If you have the money, stick with RAID 1, 10, maybe 6, or even JBOD.


I am running raidz1 on a 5-disk home fileserver with the philosophy that if I do get bitten by a multiple-drive failure or latent bad sectors during a rebuild, I can count on zfs to tell me which files were affected (see the sketch after the list) and then I'll just restore them from backup. (Unlike md raid, where rebuilds seem to be much more all-or-nothing.) My underlying assumptions here are:

* A disk failure where some moderate number of sectors become unreadable is far more likely than a disk going completely toast.

* Even if I were running raidz2 or raidz3 I'd still want the offsite backups in case of some disaster befalling the entire array (or more likely, me accidentally rm'ing something)

* It's a home fileserver used mostly for archival purposes. Unlike a business scenario, the loss of availability while I restore from backup is not going to hurt too badly.
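
The sketch: after a scrub, zfs names exactly which files took unrecoverable damage (pool name and file path are made up here):

  zpool scrub tank       # read and verify every block in the pool
  zpool status -v tank   # after errors, the output ends with e.g.:
  #   errors: Permanent errors have been detected in the following files:
  #           /tank/archive/some-file.mkv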

Did I miss something here or do you think this is sound?


Looks good to me.


That's good advice. I was trying to cut the price down a bit so OP didn't fall out of his chair. Maybe 10TB was just a number plucked out of the air, and two 8TB drives running in a mirror will suffice.


RAID 6 will be almost as slow and vulnerable as RAID 5. The best one in terms of speed and data security would be RAID 10 or RAID 100.


Gah! Why is anyone even uttering the word "RAID". Ugh. ZFS people. ZFS.


What's the implication here? Do you eat the 50% storage efficiency with mirrored pools? Or do you use a wider stripe for better resiliency (raidz2, raidz3)?


I've got an 11-disk system with raidz3.

All the striped modes will (in effect) limit write rates to the bandwidth of a single disk, but ZFS is good about this: So long as the writes are reasonably large, as they'll likely be for a media server, you'll usually get the maximum streaming bandwidth of the disk. So, 150-200 MB/s. Reads are much faster.

It's not something you'd want to use in a DC, but for mostly-idle storage at home it's perfectly fine. Add in an SSD cache, and it'll be downright speedy.

That gives an effective capacity of 64 TB, given 8TB disks, and the chance of four disks dying before you can replace even a single one is pretty low. The usual limitation of ZFS applies, however: You have to decide the number of disks up-front, as you can't expand the size of a RAIDZ vdev later on.

(Well, you can, but only by replacing all the disks with larger versions.)
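
For reference, creating a pool like that is a one-liner (device names made up):

  zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10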


I run striped mirrors with ZFS, 2x2. RAIDZ is simply too slow in comparison. Like, A LOT slower. I'll just eat the storage loss with mirrors. Disk is cheap.


Disk is cheap, but how you attach the disks starts becoming an important cost component as you add more disks. :-(

IOPS aren't cheap, and you will find out just how much that matters the moment you have to replace a disk in a pool with a lot of data in it. Resilver times of days can be pretty agonizing in a home setting.

I won't build a pool without at least double redundancy per rotating-drive vdev, meaning at least 3-way mirrors (zpool create foo mirror disk0 disk1 disk2 mirror disk3 disk4 disk5) or raidz2, as I have had to go to backups for failures during resilvers of raidz (never again!) and 2-way mirrors.

3-way mirrors conveniently provide high read IOPS. OpenZFS does a good job aggregating and scheduling writes to the point that writes are mainly choked by random reads. (Very full pools induce quasi-random reads while hunting for space in metaslabs, which is the main reason writes to very full pools perform poorly).

2-way mirrors are only really fine for vdevs that have a negligible random-access penalty (and thus likely high IOPS), or where rebuilding the entire pool from backups takes time comparable to resilver and scrub times. A replacement leaf device might itself fail during resilvering, and you cannot know the data on the replacement device is OK until you have read it (which generally means scrub-after-resilver); you can also hit errors reading the surviving leaf device(s).


I agree, you can see from my other comments in this thread, I am a strong advocate for FreeNAS :)


RAID is fine. As long as it's RAID 10 haha.


What's your reasoning behind that? I would trust a 6+2 RAID 6 more than a 4+4 RAID 10.


Resilvering time. You get a broken drive replaced faster from a mirror. This gets more important as drive capacities increase.

There is an article somewhere calculating the probability of two drives failing during resilvering, and how this gets worse with ever-increasing capacities, which increase resilvering times.

But I guess you already account for this by using RAID 6 with +2.
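
The back-of-envelope version of that argument, assuming the oft-quoted consumer spec of one unrecoverable read error per 1e14 bits and a full read of one surviving 10TB drive:

  P(at least one URE) = 1 - (1 - 1e-14)^(8e13 bits) ≈ 1 - e^(-0.8) ≈ 55%

Multiply that across every surviving drive in a wide RAID 5 and a clean rebuild becomes a coin flip at best.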


I can't think of any reason resilvering should be significantly slower with RAID 6. Even an extreme bottleneck of attaching all the drives to a single SATA channel would let you resilver a 10TB disk in roughly one day.
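
Back-of-envelope, assuming the replacement disk sustains ~120 MB/s sequential writes:

  10 TB / 120 MB/s ≈ 83,000 s ≈ 23 hours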

Triple parity can give you pretty good integrity, but at a certain point it's wasted effort. Disk failures cluster too much for any single array to be safe; you need backups.


If you're that concerned with redundancy, use striped 3-disk mirrors with ZFS. Redundancy AND great I/O.


>You probably need a NAS. To store 10TB securely you are looking at a bill of materials of about ~1200usd.

For just 10TB, I would recommend that most home users stay away from RAID and buy 3 Seagate 10TB drives (street price currently $399 each) for total of ~$1200. Use 1 drive as primary copy and the 2 others as backups. With 3 drives, you could even rotate the 2 backups to an offsite location.

I quit using RAID in 2007. All of them (FreeNAS, Drobo, Synology, etc.) required more troubleshooting and babysitting than I wanted to deal with. It didn't matter if it was Linux software or dedicated hardware (a NAS appliance).


>"For just 10TB, I would recommend that most home users stay away from RAID and buy 3 Seagate 10TB drives (street price currently $399 each) for total of ~$1200. Use 1 drive as primary copy and the 2 others as backups. With 3 drives, you could even rotate the 2 backups to an offsite location."

And how do you plan to detect if your data is silently corrupted? Unless you're using some archive file format that includes checksums/hashes, you have no way of knowing whether your backups are any good until it's too late.


Your tone makes it sound like silent RAID corruption doesn't happen. That's not true, even for NAS appliances like Synology.[1]

If you meant that a filesystem like ZFS with checksum blocks is designed to protect itself against corruption errors, one can use that without RAID. A single-disk ZFS is orthogonal to RAID.

To point back to the OP's question, he's running Windows with Plex. In that case, a 10TB NTFS filesystem with periodic binary comparisons/checksums (or format the disk with the newer ReFS[2] that stores checksums similar to ZFS) is simpler than RAID5/6. He also gets the extra benefit of an offsite backup without paying for cloud storage.

I'm not against RAID in all cases, but a home media computer with just 10TB doesn't meet the threshold for dealing with its extra complexity. However, if one wants to consolidate 25TB+ of disk space -- or is running a business -- or needs aggregated I/O bandwidth, the cost/benefit drivers for RAID over JBOD make more sense.

[1]https://www.google.com/search?q=raid+data+corruption+synolog...

[2]http://www.windowscentral.com/how-use-resilient-file-system-...
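
For the periodic binary-comparison part, a minimal sketch on a unixy box (paths made up; on Windows, certutil -hashfile or PowerShell's Get-FileHash fills the same role):

  find /media -type f -exec sha256sum {} + > manifest.sha256  # on the primary copy
  sha256sum --quiet -c manifest.sha256                        # re-run against each backup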


The advantage of a NAS is that the file system works equally transparently on OS X, Windows, and Linux. Attached drives can't do that. (Not easily and reliably, anyway.)

You may not need that feature, but it's a very useful option to have.

The downside for media storage is that a straight NAS is often incredibly slow compared to USB3. For decent speed you need iSCSI, which adds cost and complication.


I built a home backup server that works well enough for me:

* zfs for snapshots (and send my incremental snapshots from my laptop)

* zfs fs with snapshots to rsync non-zfs-enabled devices (read: wife's mbp && imac)

* rpi3 as brain (slow, but works)

* 2x2TB drives on a zraid

It took me a while to get it to where I wanted, but it was cheap (~$230), low-power, and safe enough for home. I recently did a full reinstall from scratch and it took me less than 15 min to get my system back (I have my /home backed up as a zfs fs).
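
The laptop-to-server part is just incremental zfs send/recv (dataset and host names made up):

  zfs snapshot laptop/home@week49
  zfs send -i laptop/home@week48 laptop/home@week49 | ssh rpi3 zfs recv tank/laptop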


What are your components???


I would second going with something like a Synology NAS vs. a PC box with FreeNAS, mostly because of power consumption. A cheap PC box idles at ~60W when just sitting there doing nothing as a server, while the NAS idles at 10-15W.


What is your strategy if the NAS (controller) dies on you and you need the data? Is Synology using proprietary formats, or could you just put the drives in a desktop machine? Is the strategy to buy a new Synology? Or do you treat the NAS as a backup and add an offsite archive (e.g. S3), so that you can restore the archive to any device you want (new Synology, FreeNAS)?

No offense intended; I'm just interested in where one draws the line.


Synology's NASes use Linux mdadm, so you could use either a replacement Synology NAS or a PC [1] to access your data.

[1] https://www.synology.com/en-global/knowledgebase/DSM/tutoria...
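
Per the linked tutorial, recovery on a Linux PC boils down to roughly this (from memory; device and volume-group names vary by model and volume type):

  apt-get install mdadm lvm2
  mdadm -Asf && vgchange -ay         # assemble the md array(s), activate any LVM
  mount /dev/vg1000/lv /mnt -o ro    # or /dev/md2 for non-LVM volumes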


You can restore drives from a dead Synology like other people said, and I also have an Amazon Cloud Drive backup. The NAS is for fast onsite backup. I use Arq to back up to both destinations.

You always want 3 copies: one on your computer, one onsite, and one offsite. If there is a fire, or everything electronic is stolen from your house, you still have the offsite copy.


From what I read on the Synology website, they use ext4 or Btrfs for the internal disks, so no proprietary format.


I've been very pleased with my Synology. I have a two bay with 1TB in RAID1. I will upgrade next year - probably will get a DS416 with four Red 6TB in RAID10.


I just put a Synology 1815+ together, with five 4TB Hitachi drives (shout-out to Backblaze for the recommendation). Using Synology's 2-drive redundancy option I have about 10.5TB available and 3 bays free for expansion.

The only hassle so far has been backups. The built-in backup apps can't span media (external USB drives), something I used to do decades ago with 3.5" floppies... So I have to do a file-copy "backup" across the network.


I've found that a ZFS pool is the best solution for your kind of problem. ZFS has its own awesome software RAID system. You could get six 3TB drives, use two of those drives' worth of capacity for redundancy with RAID-Z2, and get 12TB of ultra-reliable private storage.


Yeah, for servers - but in 2-3 years you'll be able to pop one into your workstation, for sure. Myself, I'm still recovering from the surprise a few months ago when my wife mentioned we should update our backups and dropped a 5TB external drive on my desk - I hadn't bought a hard drive for a few years and didn't even know consumer drives had hit such capacities. So my advice is forget about it for now, and then be surprised when you stumble across a 20TB drive at the bottom of your cereal box in the not-too-distant future.


Is this data that you could redownload easily enough if the drive or RAID was lost? A Plex server full of movies generally goes into this category. If so, get multiple 4-5TB drives, as these will be the cheapest per GB. You could buy a 5TB one now, and then when you are running low on space, add another drive. Eventually it will be worth upgrading the whole system for power savings and better performance. A RAID configuration is going to be more expensive per GB than just having a few drives.

Is the data very expensive to replace or one of a kind? Photographs and home movies fall into this category. First, you should have a backup. CrashPlan and Backblaze are good options for consumers. On a smaller scale, cloud-based options like Google Drive or Dropbox work for this. These protect you not just from hardware failure but from malware like CryptoLocker, accidental deletion, and total computer failure. If you have a large amount of data, like professional photography or video production, a NAS with RAID 6 is advised: you can lose 2 drives without losing any data, but you lose 2 drives of usable space to parity, and you have to deal with the overhead of setting it up.


For images/home movies I use Google Photos... for the free aspect, that is :)


8TB USB drives are ~$200 these days. Just pick up a couple and tell Plex where the media is. I've built up ~30TB in odd drives here and there, all available to Plex and Ubooquity, and it's just about pain-free to manage.

Not high perf, but my entire family can stream off of it fine so that's about all I care about.


A few years ago, I splurged on an 8-bay Synology NAS and never regretted it. It has been rock solid for the past 5 years, all drives in RAID 6.

I had some drive failures at some point (a bad WD batch), and even though I messed up and removed the wrong drive at one point, I was able to get everything back. Normal drive replacement is painless (barring stupid-human syndrome).

It has Plex built in and loads of other apps. I use it to automatically keep a full, synched version of my Dropbox account; it does my automated torrent downloads for me, plus a host of other ready-to-use functions. It has data scrubbing, drive health monitoring, etc., with email notifications.


Walmart currently has a decent 6TB drive on sale for $160: https://www.walmart.com/ip/48735757

Put 3 or 4 of these in a Synology enclosure or attach directly in a uATX case. I use btrfs "raid1" which protects against any one drive failing and has filesystem-level checksums and compression, but people also say good things about ZFS in a similar mode. Just use either one of those with scrubbing.
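
A minimal sketch of that setup (device names made up):

  mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc   # mirror metadata and data
  mount /dev/sdb /mnt/pool                         # mounting either device works
  btrfs scrub start /mnt/pool                      # periodic checksum verification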


> Walmart currently has a decent 6TB drive on sale for $160

This would have sounded like science fiction to 12 year old me. Still kinda sounds ridiculous.


An eon ago, when I was 14, I found out about Moore's Law and computers. I remember writing a BASIC program on an Apple II to calculate the results for different years in the future. I boggled at the numbers that came out for these decades, and I am still amazed that it's all kept pace, more or less.

I should see if I still have the program somewhere in storage.


The spec in TFA shows a SATA version, so these should work just fine with any desktop system.


When I say consumer, I mean price-wise. Sure there may be a 200TB drive but if it costs $20,000 it's not really aimed at general consumers.


You will need some kind of NAS, I suppose, or you can use something like StableBit DrivePool.


I love my Synology rack.


I've heard good things about Synology NASes.


They're great when they work, and horrible when they don't. Power failure outside of warranty? Good luck buying a replacement; you'll be hunting for broken units to scrap for parts. Same for the motherboard and pretty much anything else. Once you give up and just want your data back, you're screwed by their proprietary raid-fs-thingy.

I had one, it died, I went through all of that, and decided to never make this mistake again. Got myself a nice FreeNAS box; if anything fails there, I can just pop the drives into any FreeBSD system to recover my data and migrate it to a new home.


You've heard correctly. If asked to name just one thing I don't like about my DS410, I'd be at a loss to do so. I guess maybe it's too appliance-like and therefore I don't get to fiddle with it much.

Now, back to dreaming about slapping four 12TB drives in that thing...


I tremble at the thought of trying to backup 48TB of data. The only reason I'd have a local NAS carry that much data in my home would be as a cache backed by something like Backblaze's B2 storage service.


Tape is your friend. Get a library / autoloader, though; you don't want to juggle that many tapes all the time... Also, with tape, the option of deduplicated backups goes out the window, at least with open-source software.


I was looking into tape, but it's so expensive. Even LTO-4 drives were ~$500, and that's "only" 800GB. Most tape drives require a SAS interface, and for LTO-5 and up, you're looking at $1,500-$2,000. At that point, you could just buy another NAS and mirror your data that way...


The SAS interface itself might not be an issue. Interface cards are pretty cheap, and even reverse breakout might work.

But the tape drive itself? Oh yeah. You can pay for Backblaze for a long time before you've made up that cost.


+1. No maintenance/admin required, syncs with S3 or Dropbox.


Does it sync w/ Amazon Drive?


Yes, Amazon Cloud Drive.


If you're just storing media (no total disaster if the disk dies), I'd go with an 8TB drive in a usb3.1 enclosure or something similar. You can get two, and mirror them with some form of software raid, or backup one to the other.

Eg something like: https://www.amazon.com/Seagate-Expansion-Desktop-External-ST...

Or something like this: https://www.amazon.com/Mediasonic-Bay-Dock-2-5-SATA/dp/B0078...

With a couple of: https://www.amazon.com/Seagate-Ironwolf-3-5-Inch-Internal-ST...

(Note that the 8TB and 6TB are roughly equal per GB; the 10TB version is at a bit of a premium.)

Benefit of these docks is it's easy to just have a "backup" disk that you drop in from time to time and sync to, and then you can keep that drive in a fireproof box, or off-site. The down side is that I'm sure the odds of physical drive failure go up when it's so easy to move the drives around all the time...

If you want "high performance", it's going to be a bit more expensive -- I'd try and pick up a cheap HP Gen8 mini server[1] or something similar from Dell - anything that supports ECC ram and a few 3.5 drives. Install a ~512GB SSD, and set up freeNas/openSolaris/Ubuntu with zfs for linux - and use the disks in mirroring or raidz, with journaling on the ssd.

I've yet to build one myself, but it's what I've been mulling over lately. In the short term, 6TB drives might be a bit more affordable, but I'd probably prefer 2x8TB in a mirror right now (to be expanded as 10TB drives come out, or with two more 8TB drives as prices come down a bit) rather than some kind of parity-based RAID setup with 3+ 6TB drives.

This still leaves the issue of backup of that 8-16tb of data though.

It might very well be that two 8TB USB 3.1 drives are better - one for "data", one for "backup" - with the option of keeping the "backup" drive in a different physical location (say, the office) to avoid a single fire eating both drives.

[1] https://www.hpe.com/us/en/product-catalog/servers/proliant-s...

I've seen a series of these being sold off cheap (not sure if it's a campaign, or new models being phased in) - and I've basically seen them at around the price of a "dedicated" NAS cabinet, for devices with 8GB of ECC RAM and no disks.


Each standard server rack can store 2,400 TB of data if fully populated with 10 TB drives.

240 disks in a rack?? http://www.advancedhpc.com/data_storage/raid_storage/dell/PV... is 60 disks per 4U.

That's 600 in a 42U rack.


AWS was doing 1,100 disks in their previous-generation 42U storage rack (using 8TB disks, for 8.8PB).

Source: James Hamilton's talk at re:Invent 2016 (http://mvdirona.com/jrh/talksandpapers/ReInvent2016_James%20... p25)


SuperMicro has a 90-disk top-loading 4U storage server. It's deeper, of course.

However, they probably assumed standard 2U servers with no second disk row, so 12 disks per 2U = 252 disks per rack.


Ah, yeah, forgot about those. So many disks.

edit: https://www.supermicro.com/products/chassis/4u/946/SC946ED-R... if you want to look and drool.


I think the Sun Thumper kinda invented the concept - many successful copies since :)


HDD is the new tape.


Nearly. It has the benefit of being random access, but for backup purposes it has the disadvantages of 1) being online (usually), 2) high per-drive overhead (even with huge SAS JBODs chained together), and 3) a shorter lifespan.

Of course nobody wants to deal with the difficulties of tape, and tapes have not kept up with the density of drives, either.


LTO-7 is 6TB/tape, at ~$115/tape (on sale[0]). The cheapest 6TB HDD on newegg is a refurbished WD Red for $198[1].

[0]https://www.tapeandmedia.com/hp-lto-7-tape-ultrium-tapes-c79...

[1]http://www.newegg.com/Product/Product.aspx?Item=9SIAAY949U33...


But the cost of the drives is extravagant. Not anywhere near the price point for individual use.

And for enterprise use, a tape drive doesn't make sense unless it is part of a tape changer. Fortunately, the drives are so expensive that the cost of the changer is marginal in comparison.


Even then, you can get a used (still good, for a couple of years at least) LTO-5 drive for around 200-250 EUR. The tapes cost about 20-25 EUR each and are good for 1.5 TB -- with a small tape library you'd break even versus hard drives beyond maybe ~20 TB.

Remember, as disks get better, so will tape - the technology isn't the same, but similar enough that advances in one will certainly cause similar advances in the other.
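
Rough numbers behind that break-even, using the prices above and assuming ~190 EUR for a 6 TB disk:

  tape: ~22 EUR / 1.5 TB ≈ 15 EUR/TB, plus ~225 EUR once for the drive
  disk: ~190 EUR / 6 TB ≈ 32 EUR/TB
  break-even: 225 / (32 - 15) ≈ 13 TB, so ~20 TB with some slack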


I've built an archival application that can turn off individual drives at will, and full JBODs too. That makes the disk "offline" - useful for people who don't want to have any tapes. The application works with tapes too, anyway :)


That's a really cool system, and absolutely great for power consumption. For backup purposes, if the drive can be turned on without a physical unlock, IMHO it's not truly offline, as an attacker could still cause data loss without physical infiltration.


We're going to dead-end soon with SSDs, right? Once we get to a certain density, then what? You can't get any 'better' than that, like you did going from paper tape to magnetic tape to disks to flash.


Here's Intel's bet on the next generation of storage tech beyond Flash: https://en.wikipedia.org/wiki/3D_XPoint


It appears the (SSD) answer is "3D stacking" and I have no idea where that might end...


My understanding is that MRAM cells can scale down to smaller process sizes than flash cells.


Don't HDDs also have more bitrot than tape?


  Tape is Dead
  Disk is Tape
  Flash is Disk
  RAM Locality is King

-- Jim Gray, ten years ago


I bet he didn't predict the MacBook Pro RAM joke, eh? The same RAM for 5 years?


I thought there's nothing worse than Apple fanbois trapped in the reality distortion field. There is: scorned Apple fanbois trapped in the reality distortion field.


I've yet to find something more impressive to me than the fact that a hard drive can accurately find a single bit out of trillions on a spinning disk.


Helium is really hard to contain (just ask SpaceX!). Do these leak and degrade, or do they have some magical way to keep the gas in?


Here's a whitepaper from Seagate with some details: http://www.seagate.com/files/www-content/product-content/ent...

Sounds like they design them for a 5 year life.


yes and yes, perfect planned obsolescence


These drives contain Helium at 5500 atmospheres while submerged in cryogenic LOX or RP1? Wow.


Are these what people theorized Dropbox was using to get the densities they claimed in the BackBlaze pod threads?


There was talk about Dropbox using 14TB drives. It's possible that these drives have actually existed for over a year and HGST is only now deciding to announce them to the public, but there's really no evidence one way or the other.

The cloud space is full of rumors about awesome yet secret equipment, but they don't really pass Occam's Razor, since if it's so awesome, why keep it secret?


>since if it's so awesome why keep it secret?

Because if Facebook or Google says "I'll take every drive you can make for the next year", you've got guaranteed minimum revenues for the next 12 months.

If you tell them they can only have half, and you release the other half to the retail market, it may or may not sell in the quantities you're hoping for. And you will likely lose the goog/fb contract.


Certainly anything is possible if you're a large enough (or strategic enough) customer, but I tend to doubt that Dropbox had a year or two head start on these models. For one thing, I wouldn't build out my entire storage system with one model from one vendor that hasn't been field proven yet.

With 4TB drives, you can easily achieve raw storage of 2.9PB in a rack. The previous top of the line (10TB drives) yields 7.4PB in a rack. 14TB pushes that up to 10.4PB.

Higher densities are possible as well. The most I'm aware of is 90x 3.5" in 4U; available from Supermicro, Dell and others. The depth of the chassis can be problematic, and the way Supermicro designed it isn't super great in my opinion.

Sacrificing hot-swap capability could potentially yield higher density (~100-ish drives), which I think is a reasonable tradeoff if you're deploying a ton of these hosts and you can let dead drives hang out for a while. I'm not aware of anyone who makes a chassis like that that can be bought off the shelf, though.

At these drive sizes though, I would be leery of having that much storage attached to a single host from a fault domain perspective. I think the next step is either a highly dense 2U chassis (HP has something like this, but only 28 bays) or a two server 4U chassis (Dell has something like this).


I just wish someone would come out with a chassis like the Synology DS2416's and such. I've been using my actual Synology for so many years because I keep holding out hope that the perfect chassis will come out.


I bought a Synology NAS this year: Highlights include the Amazon Cloud Drive backup, Plex Server, and it can hold four 4TB drives. No problems so far.


Why helium? I know it's inert but there are many other noble gases. Why not some other gas? Also, why not create a vacuum to minimize friction?


Vacuum = requires special lubricants or no lubricants at all, since they would dissipate due to the lowered vapour pressure. Lubricant everywhere would likely be problematic.

It's also harder to seal in, and would likely require a much more rigid construction to avoid warping of the drive (these are precision devices after all).

Lastly, the hovering of the R/W heads over the platters requires an atmosphere - they are lifted by the air/gas accelerated by the platters (this is a well-defined lift force, and it self-regulates the head height relative to the platter).


"The density of helium is one-seventh that of air, which reduces drag force acting on the spinning disk stack and lowers fluid flow forces affecting the disks and the heads, allowing for platters to be thinner and a larger number to be packed more densely." [0]

[0] http://www.anandtech.com/show/10106/western-digital-introduc...


The drive heads float on air, which is why vacuum doesn't work, and the benefit of helium is that it's light -- not that it's inert.


In addition to its lower drag, it has higher thermal conductivity than air. Reduces certain heat effects.



