Seagate Announces 16TB HDD (nexthive.com)
165 points by rbanffy 5 days ago | 212 comments





From the article: "In a world where SSDs have become more and more common, one might ask what the point of even developing newer and bigger HDDs is."

Solid state drives still cost about 10 times more per gigabyte. I'm just a web developer, but a spinning-disk drive looks 100 times more complicated to me to manufacture. I knew that at the beginning hard drives would be cheaper, simply because SSDs were newer and there was R & D to amortize. But by now I would have thought that SSDs might have matched hard drives or even undercut them. Not only has it been over a decade, but hard drives themselves are using new R & D too. For example, this article mentions that this new hard drive uses a new technique called HAMR. Can the old factory machinery from 2005 be used to manufacture these new HAMR disks?


NAND demand is insanely high. It isn't just computer SSDs: phones in particular, and hundreds of other kinds of devices, are competing for the same manufacturing capacity.

Another issue with demand is IOPS. Enterprise is rarely limited by storage capacity, but instead by reliable operations per second. Pretty much every enterprise wants every SSD it can get at the right price point, which is much higher than consumers will pay. For the time being it will be worthwhile for NAND makers to sell drives to companies at a much higher price point.


By that logic things should get more expensive, not cheaper, as the demand increases.

To put it another way, why wouldn't someone undercut the current suppliers and sell at just above their marginal cost and take over the whole market?


The whole NAND industry is suspected to be engaging in price fixing/collusion[1]. They've already been caught doing it in the past and paid out a $300 million settlement over it back in 2006.

[1] https://bit-tech.net/news/tech/memory/samsung-micron-sk-hyni...


That article is about DRAM, not NAND. Any article that talks about SK Hynix as a major player but doesn't mention Toshiba is clearly not about flash memory.

Ah yes, my bad. The lawsuit in that article is indeed specifically about DRAM. A lot of the reporting on this mixes them up for some reason. But the DRAM and NAND industries are largely the same companies and both have been under investigation for price fixing recently.

https://www.extremetech.com/computing/261330-china-investiga...


The DRAM and NAND markets are similar, but there are important differences. The NAND market has several more major players and is significantly more competitive.

And the Chinese accusations of price fixing need to be understood in the context that they're trying very hard to get their own NAND fabs up and running, almost certainly using at least some stolen tech. Even without Chinese influence, NAND prices have fallen dramatically since that article was written, due to ordinary healthy market forces.


Sandisk and Toshiba have been investigated for price fixing.

What a weird article. The latest brouhaha is about memory, not storage. The website shows a NAND SSD as the main picture, though... strange, and incorrect.

>To put it another way, why wouldn't someone undercut the current suppliers and sell at just above their marginal cost and take over the whole market?

Moat, or the entry barrier. You can't build a fab, ignore the yield, and expect it to be profitable. TSMC famously said it would never enter the NAND and DRAM market, regarding those as commodities. In 2017 that looked utterly stupid, when NAND and DRAM prices were sky high and, while not widely reported, made Samsung Electronics the most profitable company in the world (it owns ~60% of the world's DRAM and NAND market), even more so than Apple. By 2018, a 5% drop in worldwide smartphone shipments triggered a knock-on effect, and prices have since been plummeting. Still profitable, just not as much as they were.

It was once thought the $100B USD from China would solve this problem. Even with 50% lower yield and net zero margin, forcing Chinese smartphone companies to use a percentage of those components should have been enough to sustain the Chinese-based NAND fabs. It turns out it wasn't quite as easy as they thought. Most of the DRAM and NAND was far too inferior. 3D NAND (multilayer stacked NAND), along with a leading-edge node, is a huge barrier to entry.

One reason why you keep hearing there is a lot of IP theft going on.

Note: all the Apple products sold in China use Chinese-based NAND.


> By that logic things should get more expensive, not cheaper, as the demand increases.

This is... exactly what everyone expects, and the assumption of every economic graph in the world?

The stupidest, most basic supply-demand model is:

Prices fall with falling demand and/or rising supply.

Prices rise with rising demand and/or falling supply.


> This is... exactly what everyone expects, and the assumption of every economic graph in the world?

There are actually markets where the opposite holds. Basically anything with high fixed costs, because the more units sold, the less of the fixed costs each unit has to cover.

But fabs don't really work like that because, while very expensive, they have finite capacity. A fab that can produce a million units costs a billion dollars. If you want two million units, that'll be two billion dollars and be available a few years from now. And if you build more capacity than there is demand, you're bankrupt. So they purposely build somewhat less than expected, which when demand is higher than expected results in much more demand than supply.


The second fab on an identical process node is cheaper than the first.

Sure, but it's not as if the first one is a billion dollars and the second is a nickel. The numbers are of approximately the same order of magnitude.

Because factories for that are hard and very expensive to build?

You need excess supply before you can try that. The question is: can anyone raise prices and still sell out? This is a complex question (and often the answer is yes in the short term, but you shouldn't, because your customers might invest in an alternative and not buy at all in a couple of years).

This is the Econ 101 way of thinking about it. Reality is more complicated, and more advanced economics models reality more closely. Simply put, what you are describing is what's called a perfect market. A perfect market is one in which there are no barriers to enter or leave the market as a producer or consumer. However, the vast majority of markets are not like this. For example, the sheer complexity and amount of money needed to start a new NAND factory is itself a giant barrier to entry. Most markets that have a high barrier to entry take on a market structure called an oligopoly. An oligopoly is when there are only a few producers in the market, and it is difficult to enter the market. The difficulty need not be because of anything nefarious from the producers; it might only be because the production process is complex and/or requires a lot of technical expertise.

In oligopoly markets, because there are only a few producers, the producers have more leverage over their buyers and are therefore able to charge prices higher than the marginal cost of production.

To your suggestion: let's say I realize this and want to start a NAND factory that undercuts my competitors by a little bit but still leaves me plenty of profit. I first have to build a factory large enough to reach the economies of scale that my competitors have. The reason is that, for a period of time, the marginal cost of production decreases as total production increases, because the process gets more streamlined and automated. To do that, I need to hire engineers who know how to make NAND, and maybe I need permission from the Chinese Communist Party. Then I need to get a bank to agree to give me a loan so I have enough money to build my factory and hire my engineers. Banks are probably not going to be too keen on lending to an unknown quantity and will likely charge me a higher interest rate to compensate for that risk. Only then can I enter the market.

So as you can see, while it's not impossible to enter the market, it's not easy either. Since that's the case, the few existing producers can, without any nefarious activity (which is not to say there isn't any, but it's not required), charge prices higher than the marginal cost of production.


> the producers have more leverage over their buyers and are therefore able to charge prices higher than the marginal cost of production.

pretty nefarious if you ask me. we gotta own the means of production! take-over these oligopolies and turn them into worker-owned-and-directed cooperatives!


[flagged]


Your comment would have been more valuable without the jibe at the person you were replying to.

Sure, that's the economical aspect of it.

On the other hand it means the technology is established and people prefer to buy SSDs over HDs. Speaking of myself, I wouldn't consider buying an HD laptop ever again.

This means companies can invest in very long-term endeavours to transform their HDD production businesses into SSD production businesses. The costs are high, but the risk is low: lower than, or at worst equal to, the risk of continuing with HDDs, if you ask me.


The article below has a graph showing the number of "Client SSD vs HDD units" shipped by Samsung over the last 6 years. Given current trends, the SSD shipments will very likely exceed HDD in the next 12 months.

https://www.theregister.co.uk/2018/11/23/ssd_market_q3fy18/


That's interesting and thanks for the link.

I would have guessed SSDs had already surpassed HDDs on the client side two years ago.


Is average price that useful in this context? Surely there would be very fast high end drives that would push that cost up?

It seems you are much smarter than the rest of us when it comes to understanding supply and demand curves, but why don't you help us walk through an example, okay?

Suppose it costs -- and I am making this up -- $10 to produce the first 1000 units, but only $1 after that.

Suppose people are charging a lot of money, say $50 for a unit.

Won't you increase production to try to capture that demand?

After that, because you have increased production you enjoy economies of scale, and the price comes down.

If you are the only producer, you have no incentive to come down, but if there are other producers you will want to undercut them. In this case, you will bring down your price to gain more market share.

Please help us understand what we are missing with your shrewd understanding of supply and demand curves.


Because supply and demand curves aren't smooth - Increasing production scales linearly up to a point, and then you need to build a new factory. Samsung is pumping out all the SSDs it can produce. It is probably also building new factories, but the capital investment and lead-time on that are significant, and until those factories come online, they're simply sunk cost.

Customer demand is not smooth. I will buy precisely 0 SSDs at prices > $125 per 1T drive, and I will suddenly buy 4 as soon as the price drops below there. There are thousands to millions of other customers with scattered plot points around that area, but while you can draw a smoothed graph over them and be reasonably correct, they do not actually represent a mathematical law.
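
A minimal sketch of that, with made-up numbers: aggregate a population of all-or-nothing buyers like the one above and you get a staircase demand curve that only looks smooth from a distance.

  # Hypothetical illustration: many step-function buyers add up to a staircase
  # demand curve that merely approximates the smooth one in the textbook.
  import random
  random.seed(0)
  # each buyer: (price threshold in $, units bought once price drops below it)
  buyers = [(random.uniform(80, 200), random.randint(1, 4)) for _ in range(10000)]
  def units_demanded(price):
      return sum(qty for threshold, qty in buyers if price < threshold)
  for price in (200, 150, 125, 100, 80):
      print(price, units_demanded(price))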

Further - Information is imperfect and markets, for all their elasticity, are only trending towards efficiency.

This all assumes you're arguing in good faith. This in itself is an unreasonable assumption - Your last, mocking line seems to belie it. As does this line.


Nothing in the argument I laid out has anything to do with smooth curves. The same arguments apply for lumpy or unsmooth curves.

OP was talking about 10 year lags! Please read the whole thread in context.


NAND prices have dropped considerably over the last 10 years, but not as fast as hard drive prices.

https://static.seekingalpha.com/uploads/2017/9/24/9577541-15...


Increasing production of semiconductors means bringing up a new fab. It's incredibly expensive and takes years.

Samsung et al do build new fabs, but at a significant lag to demand for logistic and risk-mitigation reasons.


It seems that you assume that if one producer sells its product for slightly less than its competition, then it will get all the customers, à la Bertrand competition.

But if that is not the case, and lowering your price only wins you _some_ but not all customers, then there is no reason to expect that the price point that balances gaining new customers against selling each product for less will be equal to the marginal cost rather than above it.
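
A toy model of that point (every number here is invented): if cutting your price below a rival's only wins you part of the market, the profit-maximizing price sits well above marginal cost.

  # Toy model with partial customer switching; all numbers are made up.
  marginal_cost = 20.0    # $ per unit
  rival_price = 50.0
  base_units = 1000.0
  def units_sold(price):
      # assumption: each $1 below (above) the rival wins (loses) 30 customers
      return max(0.0, base_units + 30.0 * (rival_price - price))
  best_profit, best_price = max(
      ((p - marginal_cost) * units_sold(p), p) for p in range(20, 81)
  )
  print("profit-maximizing price:", best_price)   # 52 here, far above the $20 cost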


That the growth of demand is outpacing the growth of supply. I thought he was fairly clear about that.

Hard drives are cheap because, regardless of capacity, you make a bunch of moving bits and electronics, and then cover a disk with iron oxide and that holds all the data. When manufacturing SSDs, you need to make circuitry for each bit.

sounds similar to the CRT to LCD evolution of displays.

I agree with the other comments. I'd add that just because the end product is visibly a complex mechanical artifact doesn't mean it's inherently more complex to manufacture than something solid state. A lot of complexity goes into making the solid state device too; it just happens out of sight in the fab.

It's (somewhat) the same situation as with EV and ICE automobiles. The engine and drivetrain in a modern ICE vehicle is extremely complex. Much more so at a macro level than EVs. But we're actually pretty good at building reliable complex mechanical systems. More things shift to solid state over time but it's a long process.


> but a spinning-disk drive looks 100 times more complicated to me to manufacture.

Fun fact: the "head fly height", i.e. the distance from the underside of the R/W head to the platter surface, is less than a handful of nanometers in most hard drives; this makes the gap not just smaller than any production transistor, but also smaller than any production feature size.


How?

If two surfaces are flat enough and moving fast enough, the tiny amount of air (or helium these days) between them acts hydrodynamically and they float across each other, which is great because there's essentially zero friction, which means they never wear out.

The spindle in a hard drive is also an air bearing.


Boundary-layer aerodynamics. The head is literally flying on a cushion of air.

The big difference is that the overall performance of spinning disks is not much different from what it was 10 years ago. However, a best-in-class 2018 SSD is an entirely different beast than a 2008 model. They are somewhere around 5-30x faster now with significantly better write endurance, wear leveling, and capacity (you can get up to 4TB in consumer models for reasonable prices). The market hasn't pushed for solid state drives replacing bulk hard drive storage; instead it's pushed for much, much faster drives (yielding 1-2 orders of magnitude improvement over just the last decade alone) along with small form factor portable storage (SD cards and USB drives). The technology is there for someone to manufacture slow SSDs with the same capacity as hard drives at around the same cost, but the market demand isn't.

> 2008 ...They are somewhere around 5-30x faster now with significantly better write endurance,

That's because 2008 was to SSDs what 1982 was to HDDs. If you look at 2011, both of your premises are false. SATA SSD in 2011 was as fast as SATA SSD in 2018, but with about 10x more endurance.

The technology to produce a crappy SSD is there, but it will "forget" its contents in a couple of months when kept unplugged, or burn through erase cycles in a year while sitting idle, just keeping content alive.


> SATA SSD in 2011 was as fast as SATA SSD in 2018

That's a meaningless distinction: M.2 SSDs are incredibly popular and are nearly 10x faster than anything you could get in 2011 (for example, an OCZ Vertex 3 would max out at 550/500 MB/s sequential read/write speeds while a Samsung 960 EVO can do 3.2/1.8 GB/s).


and the real world difference when loading a game? or word document? non existent.

Semiconductor lithography is a lot more expensive than plain old manufacturing. Fabs cost tens of billions of dollars to build or expand and have limited capacity; coating a glass disc in rust is cheap.

Also, fabs have investors who want returns on their money, so they're not super keen on crashing the market either. There are not-so-secret cartels between all the major players that conspire to keep DRAM and NAND prices high.

We could certainly have the government dump a half-trillion dollars and bring up a bunch of fabs though. That's exactly what China is doing. Their economy runs on assembling the goods, not lithographing the chips (which is largely done in Taiwan or South Korea or sometimes Malaysia).


> Can the old factory machinery from 2005 be used to manufacture these new HAMR disks?

Probably yes, for everything except the heads. These HDs are built upon many decades of investments in manufacturing capacity.


AFAIK, HAMR is just about using heat to allow for more targeted writes. Write density increases but the actual disk composition doesn't change, just the write head.

The fabs where NAND chips are produced are hugely expensive, and in very high demand.

I suspect it has to do with die area and yield.

Also, a hard drive head and electronics has to be made once for each platter, so its cost does not scale with the number of bits on the platter.


Don't forget the cheapest storage is actually tapes.

Some teammates and I switched our dev machines from SSDs to HDD arrays. After 2-3 years of heavy usage (huge C++ builds), the SSDs start to silently fail in various ways, from slowing down at 60-70% capacity to final outright failure even to read from. Note: the hardware is top-line enterprise, pretty expensive. The perf difference between SSD and HDD is imperceptible for our use case, as we are core-hungry and have enough RAM.

By "dev machines" do you mean workstations or CI servers? There's no way you could have been burning out top of the line enterprise SSDs in 2-3 years with developer workstation usage patterns.

(I could maybe believe that you ran into trouble if you used low-end enterprise SSDs that also force themselves to go read-only as soon as the warrantied write endurance is exhausted, rather than continuing until the flash itself is actually starting to fail.)


It is dev workstations.

>top of the line enterprise SSDs

I wasn't precise here. They aren't top-of-the-line enterprise SSDs (like the ones you'd use for databases, and which cost accordingly). I meant the top-tier corporate enterprise vendor with top-tier hardware in the corresponding categories ( we're a BigCo ).


> I meant the top-tier corporate enterprise vendor with top-tier hardware in the corresponding categories ( we're a BigCo ).

So what you're probably getting is ordinary cheap consumer-grade SSDs, but perhaps with encryption capabilities actually turned on. I'd be surprised if you were getting something like the Samsung 850 PRO without specifically ordering premium SSDs.


>So what you're probably getting is ordinary cheap consumer-grade SSDs, but perhaps with encryption capabilities actually turned on. I'd be surprised if you were getting something like the Samsung 850 PRO without specifically ordering premium SSDs.

We've got various models through the years, yet still a notch higher: the 850 PRO has a TBW rating at 300x-500x its capacity, whereas the ones we've been getting have TBW around 1000x, and I think the earlier hardware was even close to 2000x-3000x.
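
For scale, a rough back-of-the-envelope on those ratings (the drive size and daily write volume below are assumptions, not the GP's numbers):

  # How long a TBW rating of 1000x capacity lasts under an assumed write load.
  capacity_gb = 512          # assumed drive size
  tbw_ratio = 1000           # rated total writes = 1000x capacity, per the comment above
  writes_gb_per_day = 100    # assumed heavy developer workload
  rated_writes_gb = capacity_gb * tbw_ratio
  years = rated_writes_gb / writes_gb_per_day / 365
  print(f"~{years:.0f} years to exhaust the rated endurance")   # ~14 years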


oO. I think that is very strange.

>Solid state drives still cost about 10 times more per gigabyte

I think you should go shopping, because the time when SSDs cost 10x as much as HDDs is over.

When a WD Blue 1TB SATA 6 Gb/s 7200 RPM 64MB is $46 on Amazon and a Crucial MX500 1TB 3D NAND SATA 2.5 Inch is $134.

Unless you're comparing the cheapest 1TB HDD you can find to a Samsung 970 pro 1TB or something... which really isn't a reasonable comparison imo.


I agree that for a 0.5-1 TB disk there really is no reason not to get an SSD. However, the price difference jumps significantly once you're looking at larger disks. The cheapest 4TB SSD costs about 8 times more than the cheapest 4TB HDD, for example.

>When a WD Blue 1TB SATA 6 Gb/s 7200 RPM 64MB is $46 on Amazon and a Crucial MX500 1TB 3D NAND SATA 2.5 Inch is $134.

Those are cherry-picked numbers, because 1TB is at the very low end for HDD sizes. Compare with a more typical HDD size (3TB or 4TB) and you'll see the differences become more obvious.


I don't know about you, but 500-1000 GB is more than enough for my SSD needs, so the comparison at 1TB is much more relevant to me than at 4TB.

Wasn't the parent discussion about a new high-capacity HDD and the per-capacity cost of HDD vs SSD? I don't think that people satisfied with under 1TB are relevant to that topic.

In that market, people are comparing a pile of disks in an array to a pile of solid state storage needed to replace its capacity. The bulk price of storage inverts when it is cheaper to store hundreds of TB on huge SSDs rather than on huge HDDs. That would be the death of HDD, since nobody really wants spinning disks in their datacenter.


> a pile of disks in an array to a pile of solid state storage

It doesn't even need to be a pile of disks. Even for a PC the differences are big.

My computer has a 1 TB SSD which is a decent size for an SSD. It's still a bit tight for me, so I complement it with a spinning-rust NAS. If I had an HDD instead it would have been like 8 TB and I wouldn't necessarily need the NAS. I think that's what GP is hinting at when he says "my SSD needs". I think he's also complementing with an HDD somewhere for bulk storage (be it an external drive, a NAS, or even cloud storage).


If I'm GP, yeah; I use 250-500GB SSDs as primary disk and have a small NAS with a RAID1 of two 5 TB disks for slow archival storage. For that slow archive, though, I don't care what factor SSD price is to HDD price (as long as it's still >1.0).

Sure, that's fair.

Cost isn't linear with storage. On Newegg a 1 TB drive is $37, a 2 TB is $61, a 4TB is $84, a 6 TB is $158 for $26/TB. The metal shell of the HD costs a certain amount no matter how much storage is inside. Still not a full factor of 10 but certainly more significant.
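
A quick least-squares fit of the price points above (price = fixed cost + rate per TB) makes that split explicit; this is a sketch on four data points, not a rigorous cost model.

  # Fit price = fixed + rate * TB to the Newegg numbers quoted above.
  capacities = [1, 2, 4, 6]          # TB
  prices = [37, 61, 84, 158]         # $
  n = len(capacities)
  mean_x = sum(capacities) / n
  mean_y = sum(prices) / n
  rate = (sum((x - mean_x) * (y - mean_y) for x, y in zip(capacities, prices))
          / sum((x - mean_x) ** 2 for x in capacities))
  fixed = mean_y - rate * mean_x
  print(f"~${fixed:.0f} fixed + ~${rate:.0f}/TB")   # roughly $10 + $23/TB for these points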

The HDD market is basically a monopoly, with arbitrary prices not reflecting the actual cost of anything. They only compete on price with SSDs.

It’s a monopoly? Kryder’s law may have ended, but HDD prices were $40 a terabyte two years ago.

1TB HDDs are expensive per GB though. The sweet spot is at 4 or 6 TB these days.

You can't compare 1TB hard disks; they are comparatively expensive per TB. The cheapest per TB I can find right now is 20 euros for HDD and 120 euros for SSD. So a factor of 6 if you buy the cheapest of each.

> I think you should go shopping

Sorry, you're right! I had quickly searched for a few multi-terabyte drives. But at around 100 GB the two kinds are close.


Regarding SSD technology, it's just demand still being greater than supply at this point.

Companies are overbidding at the fabrication lines to get their latest phones built. Data centers want SSD offerings for customers with insatiable demand for SSD servers at a premium.

The whole thing is just demand, demand, demand, and the chips can't come out fast enough.

Will last for a while at the current rate.


I always thought that at some point the price/gb lines would cross. I am starting to believe this will never actually happen.

It will. SSDs counterintuitively scale down in price much better than hard drives at small sizes; there's no cost advantage to making an HDD with less storage than a single platter, or about 1/5 to 1/8 (in the case of helium drives) of the largest on the market.

Eventually, an SSD with cost parity with a low end HDD will be big enough, after which there's no reason for the low end consumer market not to switch over. The high end will have already as well (I certainly will never buy another laptop containing a HDD).

At that point, HDDs will essentially only be found in external drives and data centers. The writing will be on the wall, and R&D will slow down, starting a cycle that will lead to collapse.


This is already playing out. SSDs are cheaper than hard drives at 120GB, and that crossover point is moving upward. Sub-500GB hard drives will probably disappear from the market next year, now that 240GB SSDs are dropping below $30.

SSD controllers are cheaper than the fixed costs of hard drive motors, actuators and clean-room assembly. Adding an extra platter gets you more incremental GB per dollar, but getting that first platter working is much more expensive than the minimum viable SSD.
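
A sketch of why that crossover exists, with illustrative (not measured) cost structures for the two technologies:

  # price(capacity) = fixed platform cost + per-TB media cost; all numbers illustrative.
  fixed_hdd, per_tb_hdd = 30.0, 20.0     # motor, actuator, clean-room assembly + platters
  fixed_ssd, per_tb_ssd = 10.0, 120.0    # controller, PCB + NAND
  crossover_tb = (fixed_hdd - fixed_ssd) / (per_tb_ssd - per_tb_hdd)
  print(f"SSD is the cheaper option below ~{crossover_tb * 1000:.0f} GB")   # ~200 GB here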


Interestingly enough SSDs have already outpaced the density of HDDs in the 2.5" form factor, presumably due to exactly the effect you describe. You literally can't buy a 2.5" hard drive bigger than 2 TB last time I checked. All of the thin-and-light stuff like ultrabooks is SSD only too, HDDs can't even match the dimensions you can do with NVMe (maybe something like Microdrive could) let alone capacity.

Cost is still up there but only about 2.5x as expensive as 2.5" HDDs... a 2 TB HDD is about $80 while a 1 TB SSD is ~$100-120. And there are (kinda shitty) 2 TB SSDs that are hitting $250 these days (eg Micron 1100).


> You literally can't buy a 2.5" hard drive bigger than 2 TB last time I checked.

You can get them for sure[0] and they're not really that expensive at all. That particular one gave me about 95MB/s write speed on large files.

https://www.amazon.co.uk/WD-Elements-Portable-Hard-Drive/dp/...


Those are 15mm drives though - unlikely to fit inside a laptop (7mm is thin, 9.5mm is standard).

A 1TB SSD this past Black Friday was at the lowest about $115/1TB. That's about the price a 1TB HDD was back in 2010.

I'm not sure if this was meant to refute my point somehow, but hard drives haven't come down that much in price since 2010...

Sorting by price on Newegg, if you're looking for a cheap laptop drive you can choose right now between a 320gb HDD for $28.55, a 60gb SSD for $19.99, or a 240gb SSD for $31.99.

For a 1TB drive, you're looking at $38.50 HDD vs $109.99 SSD.

Large SSDs remain fairly expensive for now, but the low end is being eaten.


Agreed.

"Eventually, an SSD with cost parity with a low end HDD will be big enough, after which there's no reason for the low end consumer market not to switch over. The high end will have already as well (I certainly will never buy another laptop containing a HDD)."

Reliability and mean time between failures figure in to the equation as well -- at both the low end and the high end.

Data centers don't want to be swapping out bad drives all the time, and low-end consumers can't afford to buy new drives all the time. Both want to stretch the drives they have as long as they can, and at least in the past SSDs were less reliable and failed a lot sooner than mechanical drives did.

I personally do like to buy laptops with mechanical drives in them, because I can get a lot of storage without paying insane prices, and because I want the drive to last as long as possible. And just in general, I don't trust SSDs. They have yet to prove themselves to me.


> Both want to stretch the drives they have as long as they can, and at least in the past SSDs were less reliable and failed a lot sooner than mechanical drives did.

> And just in general, I don't trust SSDs. They have yet to prove themselves to me.

I'm pretty sure you're cherry-picking the data you're willing to look at. It's definitely not true that any datacenter ever avoids SSDs due to worries about drive failure rates. These days, high-end consumer or enterprise SSDs are warrantied to survive more writes than it is physically possible to send to a hard drive during the same 5-year span. Flash memory write endurance stopped being a serious concern by the time consumer SSDs reached capacities that made them sufficient for use as the sole storage device in a mainstream laptop.

Controller/firmware bugs are the only source of SSD failure that you have a non-negligible chance of encountering in the wild, but the rate of such failures is very small, especially if you stick to the major reputable SSD brands. And that's in the consumer market where the vendors aren't specifically validating each SSD model with your specific servers before you put anything into production.


Maybe an SSD under a heavy datacenter I/O load is indeed less reliable than an HDD.

But in a laptop, a device that you lug around, move around while working, and sometimes even drop, a precise mechanical device has a much larger chance to misbehave, to my mind. Its I/O load is way lower than in the datacenter, too.


It used to be that whole series of SSDs had data corruption problems (which in one case ultimately led to the demise of the manufacturer). It still happens. Recent example: Apple.

Power failure behavior of most consumer oriented SSDs is also data corrupting.


Used to be that entire series of HDDs had data corruption problems, too.

I'd expect QLC NAND to help with this.

Big data also needs to be stored somewhere, especially all the environmental data, video data, and photographic data we are creating all the time with our smart phones and youtube uploads.

An added complication is that pure silicon in the required quantities is not cheap.

The amount of silicon we're talking about here is only a couple of cents per device.

'16TB' of silicon at current rates is not a few cents, not even close.

On a related note - are there any small nas devices that take m.2 size drives? Seems like a good product which I haven't seen.

Nobody (especially not all the ultra-capitalist pigs here) would like to admit this, but in reality the reason is price fixing. There are very few companies that manufacture and hold the necessary patents to do so. You can thank the free market for it.

https://www.theregister.co.uk/2018/04/30/dram_vendors_sued_a...


You seem to think price is set due to cost, when this is not how it works at all. Price is set by what people will pay.

As a web developer, do you charge based on how much it costs for you to eat and pay rent for the day?


I don't quite recall when it happened, but at some point my reaction to announcements like this changed from, "Yay, more space!" to "Good, the size I need will be even cheaper when I upgrade 2-3 years from now."

I record and produce my own music as a hobby. Each project can easily reach gigabytes in size, and my biggest projects are eight gigabytes or so (eight drum mics, guitar, bass, vocals, and synthesizers, all uncompressed 24/96 audio, often with a number of different takes). I go through HDDs very quickly. I know that for development and most uses bigger HDDs are unnecessary, but for media production these improvements are really valuable.

Why not use FLAC? It gets 50-70% compression and is lossless.

Offhand I actually don't know of too many DAW programs that can record the individual tracks in FLAC format. Reaper does. Checking the Wiki (https://en.wikipedia.org/wiki/List_of_hardware_and_software_...), it looks like Cakewalk Sonar does as well. But that leaves a lot of major players (Protools, Logic, Cubase, Ableton Live, FL Studio, etc.) that cannot.

It's not something I personally would do because (as others imply) IMHO the CPU trade-off is not worth it. Usually I use a blend of plug-ins (which take up no space) and recorded hardware; consequently the amount per track ends up being more like 100-600MB and not gigabytes. This is rather manageable.

However, I will note that sample based software synthesizers and sample packs can be huge these days, even despite in many cases lossless compressing being applied to the sample library. Omnisphere 2 for instance comes with a 60GB+ sample library (and that's before you add the add-ons like Moog Tribute). At the current extreme end, orchestral sample company Spitfire offers a string library (https://www.spitfireaudio.com/shop/a-z/hans-zimmer-strings/) that is 183GB in size and a sampled piano that's a whopping 211GB compressed (https://www.spitfireaudio.com/shop/a-z/hans-zimmer-piano/).


And that's why when I use Omnisphere (which I love, BTW), I send my projects to my best friend's computer; there's no way in hell I could fit that on my computer's internal SSD.

Even the relatively small software instruments (I've been really into Soniccouture's Glass/Works, for instance) use up like 8 GB apiece.


Audio mixing and mastering, particularly with a lot of channels and hosted effects, can be a very CPU-intensive task. As a general rule you wouldn't want to add compression/decompression to that process as well.

Granted, the industry standard in this area (ProTools) is an absolute CPU hog to begin with, so it needs all the help it can get.


DAWs usually encode as WAVs while you're working on stuff because it's faster. But it would be nice for them to losslessly compress your files down to FLAC when you're done and shut down the program!

It would be interesting to have a media-friendly archive file format for the final tracks of completed projects that automatically compresses/decompresses WAVs as FLAC... and other raw data formats as their lossless counterparts... the closest thing I can find is zipx with wavpack.


> it would be nice for them to losslessly compress your files down to FLAC when you're done and shut down the program!

Renoise does that. With software synths and sample-based instruments, the biggest chunk for me is always the vocals... and I would have to have a lot of takes to get over 100 MB. Of course, my music isn't the music people who record a bunch of live instruments make, so it's not a fair comparison, but still, for my uses it's fine, and it's super fast. I wish there was an option to bake VSTs into songs; I would love to be able to share full songs as "source" (obviously it would have to be songs only made with freeware VSTs, but art is all about limitations etc. blah blah :)


You can attach zip files containing anything you want on Bandcamp. I put a few Serum presets in one EP.

When you have enough channels and plugins running in real time, you'd rather optimize for CPU rather than disk space.

Surely OP isn't working on all projects all the time, and could "archive" some in FLAC form?

My guess is that in terms of a time vs. money trade off, buying a new drive and shelving the old one is the preferred method of archiving. The maintenance required to convert all files to some compact format might be enough of a PITA (however minor), that it's simply more worth it to buy a new drive.

Would be good if the tools had an archive option that automatically saves all the files as flac and can decompress to wav again with a single button press.
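
In the meantime you can script something like that outside the DAW. A rough sketch using the reference "flac" command-line encoder (the project path is made up, and this assumes flac is installed and on your PATH):

  # Sketch: losslessly archive a project's WAVs as FLAC, reversibly.
  # Assumes the reference flac encoder is installed; the path is hypothetical.
  import pathlib, subprocess
  project = pathlib.Path("~/Music/Projects/song1").expanduser()
  for wav in project.rglob("*.wav"):
      subprocess.run(["flac", "--best", str(wav)], check=True)   # writes song.flac next to song.wav
      wav.unlink()                                               # drop the WAV once encoded
  # to restore: subprocess.run(["flac", "-d", str(flac_path)], check=True)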

Reaper's Save As dialog will let you collect all samples and convert them with the project file. I wish all DAWs had that.

Basically what everyone else said. DAWs all prefer uncompressed WAV or AIFF, and it takes quite a bit of time to convert back and forth to FLAC. Plus, even if I did that I'd still have to use external HDDs, just cheaper ones.

WAV is the standard, period. That won't change for the foreseeable future.

With that much data, I am curious as to how you do your backups. Do you backup off-site at all?

Not offsite, which I know I should. I use macOS, so I just have a Time Capsule set up with a giant HDD plugged into that for stuff I'm working on presently. Stuff I don't use actively I just make sure to put on two different HDDs in different places, and so I have two whole parallel sets of HDDs which I occasionally synchronize by hand. The whole system is very awkward, and I would be delighted if I really could rent reliable, trustworthy, offsite storage for a reasonable price.

You can. I've been using Arq [1] for years. You can select a storage provider of your choice (I use Dropbox), but Arq also provides its own cloud storage these days if you prefer to keep it as simple as possible.

After the initial upload it's feasible even with large files since it of course only uploads incremental, block-level changes. I let it run on a daily schedule during the night.

[1] https://www.arqbackup.com/


Yea, I remember going up from a 3TB to a 6TB, using the old one as a backup, and climbing up that chain every 2-3 years... 8TB, etc. Currently I have a 12TB for primary and a 10TB for backup. Once I go over 10TB I'll probably look at what's currently available. It'd be nice to see 100TBs eventually. I honestly think a set of those could last for over a decade.

I'm curious as to why your backup drive is smaller than your primary. For most people the other way around would be desirable.

For me, at least, the main answer is that not everything I have is worth backing up.

This is how I have felt recently, specifically when it came to purchasing a new phone. I would think that these super-sized drives will cater more toward creatives who have to work with large amounts of audio and video content.

Much more than any creatives will be the use of these for all sorts of "big data" projects, from AI training to mass-spying to science. All of these fields have a bottomless appetite for data storage that will eat up these drives like candy.

As a professional storage-system developer, my reaction to every announcement of larger disks is something close to horror. Capacity keeps increasing while performance remains darn near constant, so the gap between the two keeps getting wider. Yes, we can use flash to absorb the I/O demand, treat disk more like tape, yadda yadda, but all of that takes significant effort. You do want your storage system to be correct despite the greater complexity of heterogeneous hardware, don't you?

Looking on the bright side: infinite job security. Yay?


The part that makes me recoil in horror is the potential damage wrought by one failure - at what point does the rebuild time for your RAID Array start to collide with the MTBF for the drives? ;-)

This is why the really big storage systems use erasure coding. At scale, RAID-5 and even RAID-6 are vulnerable to these kinds of overlapping failures. Not theory; seen it happen.

We are getting there; this already happens with unrecoverable read errors (UREs).

With RAID 5, for example, rebuilding a 4 TiB array is expected (i.e., >50% chance) to have at least one URE.
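
For anyone who wants the math behind that kind of claim, a rough sketch; the 1-in-10^14 figure is the usual consumer-drive spec-sheet URE rate, and the result depends heavily on how much data the rebuild actually has to read:

  # Probability of at least one URE while reading N bytes, assuming independent
  # errors at the spec-sheet rate of 1 per 1e14 bits (an idealized model).
  import math
  ure_per_bit = 1e-14
  def p_at_least_one_ure(bytes_read):
      bits = bytes_read * 8
      return 1 - math.exp(-bits * ure_per_bit)   # Poisson approximation
  tib = 2 ** 40
  print(f"read 4 TiB:  {p_at_least_one_ure(4 * tib):.0%}")    # ~30%
  print(f"read 12 TiB: {p_at_least_one_ure(12 * tib):.0%}")   # ~65%, e.g. three surviving 4 TiB drives in a rebuild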


But it was always like that for HDDs. You always had to assume no change in seek time with more capacity. And it was always wrong to treat HDDs as random IO devices with all the random writes and b-trees. At least now there is a pressure on everyone involved to do it right.

That's a completely different issue. Yes, it's important to do disk accesses right. That was even more true when I started doing this stuff in 1990 than it is now, because disks back then didn't have big caches or fancy controllers, and RAID wasn't a thing. My point is that even if you do everything right it's getting harder and harder for a disk-based system to keep up with I/O demand. That means more and more gyrations to keep the demand away from those disk-based systems, which means more overall system complexity. In particular, more places for inconsistency to creep in. As though we didn't have enough to deal with making these things distributed.

I get it, but take RAID for example. It always presumed weird things: that you either have enough performance and little enough load to be able to check disks or rebuild one without much degradation, or that you are able to copy an entire disk in a few hours, or that an error means you need to replace the disk. These assumptions break once you account for the fact that capacity increases but seek time stays the same.

Obviously there will be some inherent complexity in dealing with that, but not really more complexity; it's just different, maybe proper for once.


I'd say it's a lot more complexity. The first way most people became aware of the capacity/performance gaps is that a backup couldn't complete before the next one was scheduled. Ah, the good old days. That was the easy case. The system I work on today has a dozen such maintenance activities going on all the time. Making sure they all complete in reasonable time, without affecting the endless stream of new user I/O too much, requires increasingly sophisticated scheduling as the gap gets wider. That's a lot of complexity right there. On top of that, there's this little formula:

  load = warmth * capacity
Warmth is effectively constant for relevant time scales, so increasing capacity means increasing load. Scaling out doesn't help in this case, not even with perfect linear scaling, because every bit of added capability brings added capacity and load with it. Until you run out of users, I guess, but those seem to be infinite. ;)

Thus, systems with the same basic architecture get more and more overloaded as per-disk capacity increases, until they become useless. The only escape is to change the architecture, inevitably toward greater complexity (e.g. cache layers and burst buffers). None of this would be necessary with a better capacity/performance ratio. Believe me, nobody wants to make these systems more complicated. But every time that gap gets bigger, there will likely be a new increment of complexity to deal with it.

I don't see how that's "proper" or "where it belongs" or any such. It's not that all such systems were poorly designed for the hardware as it was when they were developed. It's that even the best designs have to keep adapting. The demise of Moore's law and the ever increasing number of cores per die or per system have increased complexity in the compute domain. The capacity/performance gap is the storage equivalent.
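
To put rough numbers on the warmth * capacity point above (values are illustrative, not from any particular system):

  # Per-disk IOPS stays roughly flat while per-disk capacity grows, so the IOPS
  # available per TB of data stored keeps shrinking as drives get bigger.
  disk_iops = 150    # ballpark for a 7200rpm drive, then and now
  for capacity_tb in (1, 4, 8, 16):
      print(f"{capacity_tb:>2} TB disk: {disk_iops / capacity_tb:6.1f} IOPS per TB stored")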


Cheap cold storage will find its use. So I think it's reasonable to expect the gap to increase by another 10x or so in the future, before SSDs take over storage completely. Have to plan at least for that.

OTOH, I was in the storage industry until recently, and HD capacity increases were always welcome. That is because we were a bandwidth play, sold on the basis of density and capacity. Hello backup and archive. That is frankly where most of HD storage is today. 40 drives sitting in 4U is still a few tens of thousands of ops per second, but it has a few tens of GB/sec of bandwidth too. So, if your access patterns are long linear reads/writes, then HDDs work just fine. Even for midrange IOPS applications, if you burst a couple GB from disk to an SSD cache, it's possible to absorb a lot of IOPS.

Also, rebuild/scrub operations aren't really an issue either if you're running RAID6 and your system is sufficiently designed/overprovisioned to deal with rebuild/scrub operations happening in the background. If you're IOPS-limited, OTOH, you likely have a problem.


> Hello backup and archive. That is frankly where most of HD storage is today

I'm not sure whether that's true or not. At one end, tape still exists. At the other end, a lot of data that's already stored with high levels of redundancy doesn't get backed up anywhere else. Do you have a good source for a definitive answer?

> rebuild/scrub operations aren't really an issue either if you're running RAID6

Rebuild/scrub operations still happen, and are still an issue, with RAID6. They might not be quite as visible because they're not soaking up host cycles, but they are soaking up disk IOPS. In any large or even medium-sized storage infrastructure (by today's standards) you'll have some going on and occasionally you'll have some that overlap because the first one took too long. Before long you'll hit a case where you get a third failure while the first two are still going on, and you stop relying on insufficient RAID6. Then you're into erasure codes and your own kind of scrubbing. Those aren't exotic situations or responses any more.

> If you're IOPS-limited, OTOH, you likely have a problem.

Yep, sure do. Can't wish it away, or make it all better with a magical free SSD caching layer. Have to solve it, which requires effort and expense. What I'm saying is that bigger disks make that harder. I'm not complaining, it's what I and others choose to do, but it's a fact.


I don't have a direct summary link, but if you read random storage revenue/capacity reports that seems to be the case.

There are a few large buyers of bulk storage which appear to be using it for some form of nearline. At least in my experience (which is obviously warped by the part of the industry I was in), enterprise applications needing IOPS have been overwhelmingly moving to pure flash. There remains a large amount of revenue in hybrid arrays, but the volume is shrinking (although maybe not the raw capacity, similar to the mainframe, which is selling record amounts of capacity in fewer and fewer machines). A few years ago I stopped being surprised to see a couple racks of Infortrend or Supermicro storage chassis (or any number of other 2nd-tier vendors' products) sitting at one end of random datacenters, where the local storage admins were running some huge snapshot repo, or Ceph, or whatever on them, and the resulting capacities were frequently powers of ten greater than the online storage.

Also, I've seen plenty of tape arrays too, but they don't seem nearly as common these days. Partially because tape has the same problems as disk (lots of bandwidth, but it never seems to be enough). It seems everyone still has one, but you have to hunt for it, and it might only be getting the most critical data (or stuff that is required for compliance with some law), which turns out to be limited by the 2-4-8 drives they have constantly spinning, where an operator walks in every few days and swaps a couple dozen tapes offsite. Sure, there are larger libraries, but it seems most admins start any conversation about tape with a groan and eye-rolling, which tends to be an attitude that keeps finite resources from being heavily invested in them. So people will buy 10+Gbit links between their data centers (which sometimes tend to be shockingly inexpensive) rather than spend $10k on a tape library and a FedEx account.


> Capacity keeps increasing while performance remains darn near constant,

The recent capacity increases seem to be mostly driven by stuffing more platters in, so as long as they don't have independently servoed heads (used to be a thing, maybe it becomes a thing again? Perhaps not fully independent, but still "ganged up" with some sort of micromechanics in each head to servo it to nearby tracks?) or increase data density (bits/cm^2), throughput must plateau.


Even as a home user, expanding a RAID on a consumer NAS with a 10 TB harddisk takes days due to the size. It would take weeks with a 100 TB drive! (then again, that's not something you'd need to do often with 100 TB to play with :)

The bandwidth of these drives is increasing due to the fact that rotational speeds are tending to remain the same while the density is increasing. So, as the linear in-track BPI increases, so does the throughput. Tracks per inch, which is where most of the density has been, doesn't improve bandwidth. This is where a lot of the density improvements a few years ago were coming from (higher TPI). At one point the TPI was actually higher than the linear BPI, which was something completely non-obvious to me when I discovered it.

HAMR is an increase in linear BPI, so expect these drives to get faster.
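
A rough way to see why linear density is what moves throughput while TPI doesn't (the per-track figure below is an assumed ballpark, not a spec):

  # Sustained transfer rate is roughly revolutions per second times bytes per track.
  # More tracks per inch adds capacity but not speed; more bits per track adds both.
  rpm = 7200
  bytes_per_outer_track = 2_000_000    # assumed ballpark for a modern 3.5" drive
  throughput = (rpm / 60) * bytes_per_outer_track
  print(f"~{throughput / 1e6:.0f} MB/s at the outer tracks")   # ~240 MB/s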


I think it's a question of supply and demand.

If professionals need throughput, they will buy a couple of hard drives and array them together. Latency is generally not requested.


> a couple of hard drives
> Latency is generally not requested.

That might be your experience, but it's not mine. I'm on a team that runs one of the three or four largest storage infrastructures in the world. In aggregate, those few account for a significant fraction of all the disk drives sold - enough that we have to account for possible market distortions in our planning. So I think our experience is relevant too.

In my world, the professionals very much do care about latency. Yes, the systems are built for massive throughput, but that throughput has to be within a certain latency. If latency goes too high, we get calls in the middle of the night. To keep that system-wide latency low, we deploy many racks' worth of equipment at a time, or shift load between similarly sized parts of the system.

For us and our peers, this capacity/performance gap is a huge issue. Has been for years, and it keeps getting worse. Those concerns might not apply to everyone, notably you, but they are quite real.


And you lack reading comprehension.

I didn't say professionals don't care about latency. I said that professionals don't ask for latency for hard drives, they go straight to SSDs when latency is important.


Reading comprehension doesn't cover you retconning what you said. You didn't mention SSDs at all in your previous comment. What you actually said did imply that professionals don't care about latency. Full stop. Perhaps that's not what you meant to say, but that has more to do with your writing than anyone else's reading.

More to the point, your gaslight version is still wrong. Professionals do have latency expectations even with hard disks. Those requirements might not be as stringent for hard disks as for flash (or for distributed vs. local) but they're very much still there. If a significant fraction of users' requests are taking too long, they do complain. Loudly. Data scientists don't care if the system is simultaneously delivering dozens of gigabytes per second to other users, and doing ten kinds of background maintenance stuff besides. They care that their job is slow. When you're dealing with truly large amounts of data being processed by thousands of machines, and ad hoc queries none of which are likely to be repeated, "just use flash" isn't an answer.

Maybe you're dealing with a different kind of professional than I am. That's fine, but you shouldn't keep making these super-general statements that are wrong for one of the largest classes of data professionals. "Couple of hard drives" was so far off the mark I literally laughed out loud.


No, I didn't. It was your problem you misinterpreted what I said.

Nobody else misunderstood me but you.


> Nobody else misunderstood me but you.

I think it would be more correct to say that nobody at all misunderstood you, and nobody else bothered to correct your claims. I do see there were some downvotes, though, which suggests what others thought of them.


There was one downvote: yours.

Nope, wasn't me.

I know that big disks are useful for a lot of people/applications. However, I topped out at 500GB about 10 years ago. That gives me plenty of headroom. Since then, I have gone from a 500GB HDD to a SSD, but my space usage hasn't really gone up. The largest disk I own is a 1TB disk in my Time Capsule (which I could probably stand to upgrade, but I don't feel like taking it apart).

I think this is probably because I stream everything, and don't download much content. Even most of my applications are web-based these days. Granted, I'm sure that a lot of storage in the cloud is required to service me...


As you say, you stream your videos. Now imagine getting that shiny new 4K video camera and starting to store raw footage even after you have edited it. A rough ballpark: 120 Mbit/s ==> 15 MB/s ==> 900 MB/minute ==> 54 GB/hour. This is why I keep adding more and more disks to my NAS.

What type of NAS do you have? I am researching alternatives would appreciate your advice.

Sibling comment is dead, but I second their recommendation of the Synology 1618+. It's been excellent in my experience. Even better when paired with a 10GbE NIC.

You can vouch for dead comments to un-dead them (bring it back to life?). I've done it for the sibling comment (by twothumbsup).

Not everyone can do that. I can flag submissions but not comments, and I can't vouch for anything. No idea why. Maybe my posts get flagged more often than I realize.

Ah thanks, didn’t see a link, but I probably just wasn’t looking for it. Looks like they’re undead now :-)

Thank you very much.

I have a Synology DS1618+ that I'm happy with. Just upgraded to it this year from a DS413j I'd been using since Jan 2012.

On the build-your-own basis, there are a number of nice Atom/Xeon-D micro-ATX motherboards with 10G Ethernet and 8+ SATA/SAS ports. Combined with something like the old SilverStone DS380B (8 hot-swap 3.5" drives, plus a couple internal), that makes for a nice small-form-factor NAS.

Why would you build your own? First, because the NAS vendors charge on a non-linear scale for drive slots and 10G Ethernet. The latter is basically a $50 additional charge on a motherboard, or a $100 add-in card, but will easily add $500+ to an off-the-shelf NAS. Then there are the drives: today you might be happy with a two- or three-drive NAS, but in 5 years adding an additional 8-12TB drive (or three) will be just a hundred dollars or so, vs buying a whole new NAS.

Then there is actually using it. Running a NAS at 400-600MB/sec is a far cry from using one at 80-120MB/sec, which is where you will peak out with 1Gbit Ethernet. This becomes really noticeable when you're copying one of the 50GB 4K videos you record during the kids' birthday, or just running a machine backup to the NAS.

Frankly, at the end of the day, I find the flexibility to be the largest advantage. When I got tired of my old NAS's Plex encoding perf, I swapped the motherboard for $300, without having to copy any data, because I'm running a stock Linux distro. Much of the old advantage of NASes was the 3rd-party marketplaces to get things like Plex/CrashPlan/etc. Now those things are frequently found in Docker containers, so with a bit of setup your application installs tend to stay continuously updated, rather than hoping your NAS vendor's marketplace is maintained.


So helpful, thanks very much. One thing I am not familiar with is the quality of the user interface for a DIY NAS. For example, are there applications to upload videos from your phone directly?

Just from a camera perspective, RAW capture from digital cameras can eat up space over time. At ~60MiB per image on 24MP cameras and a usage rate of 2000 images each year, you consume ~120GiB of storage each year. Video and post-processing storage adds further.

Cameras keep increasing in resolution, getting better sensors, ...


Yea when I started shooting in RAW I became very thankful for these high capacity drives. It's still cheaper and easier just to deal with buying a pair of drives than using any type of online backup when talking about files at that size.

I play games, and rarely get time to finish them. So I usually have a hard drive dedicated just to game installs. Then I have disk(s) for storage of code, pictures, ebooks, music, etc. My boot drive is usually 250 GB or 500 GB, but I use numerous TB drives.

I had a lot of performance issues with >2 TB HDDs before, though, and I generally avoid them.


You also do not game a lot. If you look at https://www.game-debate.com/games/index.php?g_id=9339&game=R... you will see that you need 100GB for just that single game.

I mean, my own music library, which is rather small, is 250GB...


I game a bit, but play mostly indie games that are a few GB or less.

I game a lot, but none of the games I play even approach half that size. not all games are bloated, disk space eating messes.

It actually depends. You can either have a game with extremely fast loading, which means that assets won't be compressed much, or you can compress them and go for file size; you can even reduce the quality. It always depends on your target audience.

But still, modern games for PC/Mac are around ~10GB per game, which would limit your installed game base by a lot.

(Also, working with Docker (PC and Mac, not Linux) or VMs in general will quickly eat your 500GB hard disk.)


I have a 200 GB or so SSD in my gaming computer. Yeah, I basically just install one AAA game at a time.

Depending on the storage and the compression, loading compressed assets and decompressing them in RAM can often be faster than loading uncompressed assets.

10GB is an order of magnitude less than what GP referenced.. I don't mind being limited to having tens of games installed on a 500GB drive.

My photo library is 400 GB alone. And I don't even own a camera, that's all photos taken with my phone! No RAW or anything fancy like that, the worst thing in there are short clips of highly-compressed 4K HEVC. And it's ever-growing as I have a kid, a second one on the way...

Modern video games are reaching 100GB in size per game. I have been waiting for bigger hard drives so I can store raw Blu-ray rips, which are about 50GB each.

For general file storage, spinning-disk HDDs still win on cost, but there is a decreasing need for general file storage for the average person. For most people one terabyte is more than enough, and it may as well be an SSD that is also their boot drive.

Personally I'd love to replace my multiple 2TB drives with 8TB drives if the price comes down a bit.


Games are the only reason most people will need more.

Most new AAA games are approaching 100GB each. And on the extreme end, Gears of War 4, for example, takes up over 250 GB.


Datacenters as well. People's data storage needs have gone up; it's just that now we put all the data on someone else's computer.

Me too, but the idea of having so much of my data on a single drive terrifies me. Not sure about you, but I'd probably buy a couple (from different manufacturers or, at least, different batches), just to RAID-1 them.

And automatically ship logs to another set of disks, preferably on a different continent.


I work at a computer resale/repair place and I believe 9 out of 10 customers would be fine with a 128GB SSD, which can be had on Amazon for $20! Most people just watch YouTube and Netflix, pay some bills, keep a few documents, etc.

As a person who doesn't know much about HDD technology, I always wonder: what were the advances that allowed this particular bump to happen?

ELI15: to pack bits closer together, the magnetic material has to resist flipping, which means an ordinary write head can't generate a field strong enough to change a bit's polarity. With HAMR, a tiny laser briefly heats the spot being written, making it much easier to flip with a modest magnetic field; once it cools, the bit is stable again. So each bit can occupy a smaller area, and the platter can store more data.

From TFA:

>The Exos 16 TB hard drive using HAMR technology is now the world’s biggest HDD in terms of capacity overtaking the 14 TB Barracuda Pro.

>HAMR, which is the acronym for heat-assisted magnetic recording, to be precise. This replaces the regular PMR, perpendicular magnetic recording, found in most HDDs. To the average consumer, this doesn’t mean much at all. However, Seagate believes that HAMR is the key to making significantly larger capacities readily available shortly.


What are the realistic theoretical limits for HDD storage capacity?

From the sounds of it, this 16TB HAMR-enabled drive is just the first of many we will see year over year to keep HDD sales going.



Seagate says they'll get up to 100 TB with HAMR. Curious at what point it's no longer financially reasonable to keep researching ways to increase density.

We produce and process more data in a few days than we did in a year even as recently as a decade ago (and, by "we", I mostly mean companies that want to store every detail of every human life for analysis in order to maximize profits). So far, the increasing need for storage has kept pace with the ability to store it.

That is a great question :-)

The HDD industry (basically Seagate, Toshiba, and WD together) is milking it for as long as possible. The data created per year, and even its growth per year, far outweighs the growth in HDD capacity.

At some point, surely, NAND will offer more storage in 4U than HDDs do in a 40U rack, with lower energy use and higher speed.

I have been asking this a lot: at what price point does the TCO of NAND, with its superior speed and higher density per rack, cross over with HDDs so that it makes sense to store everything in NAND?
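
A toy sketch of how such a crossover comparison might be set up; every number below is a hypothetical placeholder rather than a real quote, and a serious comparison would also fold in rack space, cooling, and IOPS:

    # Toy 5-year TCO-per-TB comparison with entirely hypothetical inputs.
    def tco_per_tb(price_per_tb, watts_per_tb, years=5, kwh_price=0.12):
        energy_cost = watts_per_tb / 1000 * 24 * 365 * years * kwh_price
        return price_per_tb + energy_cost

    hdd = tco_per_tb(price_per_tb=25.0, watts_per_tb=0.6)    # placeholder HDD figures
    ssd = tco_per_tb(price_per_tb=100.0, watts_per_tb=0.3)   # placeholder SSD figures
    print(f"HDD 5-year TCO per TB: ${hdd:.2f}")
    print(f"SSD 5-year TCO per TB: ${ssd:.2f}")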


I just checked the price of that 14TB Seagate drive, and you can get two of their 10TB drives for the cost of one 14TB drive. Who is buying this 14TB drive right now if it costs that much?

People who need mass storage and either have limited space or want to maximize the space they do have. A thirty-two-drive rack full of 14TB drives is 448TB of storage space instead of 320TB (thirty-two 10TB drives). And if you only need 320TB of storage, you only need a rack that holds twenty-three drives, so you might downsize to a twenty-four-drive rack.
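
A quick sketch of that drive-count math (the 320TB target and the two drive sizes are taken from the comment above):

    import math

    # Drives needed to hit a fixed raw-capacity target with each drive size.
    TARGET_TB = 320
    for drive_tb in (10, 14):
        drives = math.ceil(TARGET_TB / drive_tb)
        print(f"{drive_tb} TB drives: {drives} needed "
              f"({drives * drive_tb} TB raw, before any RAID/redundancy)")
    # 10 TB drives: 32 needed (320 TB raw)
    # 14 TB drives: 23 needed (322 TB raw)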

I updated a lot of my 4/6TB drives to 8TB drives for this reason: too much physical space was being taken up on my desk by having so many 4TB drives.


You can fit more terabytes in a rack, and (likely; I haven’t checked) use less power and cooling per terabyte of storage, at the cost (again: likely; I haven’t checked) of performance.

Including the cost of racks and power, this may even be cheaper.



The Library of Congress is only 10 terabytes.

Disagree, I sold them tape cartridges by the PB

If it's anything like their 8TB offerings, it's unreliable rubbish.

Statistics with representative sample size disagree. https://www.backblaze.com/blog/2018-hard-drive-failure-rates...

I don't know if you're looking at the same list I am, because the large Seagate disks on here have some of the highest failure rates outside WDC.

The Seagate 8TB has an AFR of around 1%; that is not only really good, it also makes anecdotal evidence from a single consumer fairly useless.

When are we getting 8TB SSDs? It's been a while. SSDs should have caught up to spinning rust in capacity by now.

Enterprise 15 TB SSDs are already available. It would be easy to create 8 TB consumer SSDs but they don't see demand. There's tons of empty space inside 2.5" SSDs: https://www.anandtech.com/Gallery/Album/6783#3

Right, but a huge part of the consumer SSD space has moved to M.2, where there isn't a lot of room to just pack in another couple of dies. While you might conceivably be able to make an 8TB drive in M.2, it's going to be easier in 2.5", and people with the money to buy an SSD that large probably don't want it on SATA. Which is why, as you point out, they exist; here is an 8TB U.2 drive:

https://www.newegg.com/Product/Product.aspx?Item=1E4-006U-00...

for only $2600.


Anyone have an opinion about Seagate? I had one of their HDDs and it died on me within a year.

Not sure if this was a 'one-off' or other people had issues with their quality.

I had a similar issue with a Samsung SSD.


Check the BackBlaze reports: https://www.backblaze.com/b2/hard-drive-test-data.html

A few Seagate models were problematic. See the 2nd table here (did you have one of those?) https://www.backblaze.com/blog/hard-drive-stats-for-2017/

I switched to HGST a few years back, after seeing the color graph on this page: https://www.backblaze.com/blog/hard-drive-reliability-q3-201...


I've had a drive from every major manufacturer fail at some point. Brands don't matter, backups do. Buy hard drives in pairs and put an item on your calendar to pop the other one in once a month, sync it, and put it back in a fire-resistant safe.
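
A minimal sketch of that monthly sync, assuming rsync is installed and using placeholder paths for the data directory and the rotated drive's mount point:

    #!/usr/bin/env python3
    import subprocess
    import sys

    SOURCE = "/home/me/data/"    # hypothetical data directory (trailing slash matters to rsync)
    DEST = "/mnt/backup/data/"   # hypothetical mount point of the rotated offline drive

    def sync():
        # -a: archive mode (recurse, preserve times/permissions); --delete: mirror deletions
        result = subprocess.run(["rsync", "-a", "--delete", SOURCE, DEST])
        if result.returncode != 0:
            sys.exit(f"rsync failed with exit code {result.returncode}")

    if __name__ == "__main__":
        sync()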

> Buy hard drives in pairs and put an item on your calendar to pop the other one in once a month, sync it, and put it back in a fire-resistant safe.

Alternatively, buy one hard drive and keep a copy in Amazon Glacier Deep Archive for $1/TB-month.


And then wait weeks and pay real money to pull it all back if you have 8TB :)

Yeah, it's something like $700 to retrieve 8 TB.
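
Roughly where a figure in that range comes from; the per-GB prices below are approximate Glacier Deep Archive numbers from around this time and should be treated as assumptions, not current pricing:

    # Ballpark cost to pull 8 TB out of Glacier Deep Archive.
    GB = 8 * 1024
    BULK_RETRIEVAL_PER_GB = 0.0025   # assumed bulk-retrieval price per GB
    EGRESS_PER_GB = 0.09             # assumed data-transfer-out price per GB

    retrieval = GB * BULK_RETRIEVAL_PER_GB
    egress = GB * EGRESS_PER_GB
    print(f"retrieval: ${retrieval:,.0f}")          # ~$20
    print(f"egress:    ${egress:,.0f}")             # ~$737
    print(f"total:     ${retrieval + egress:,.0f}") # ~$758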

AWS Lightsail can transfer 1 TB from S3 for $5. And there's no fear of losing a second drive while you perform a network synchronization.

Meanwhile, for highly redundant tape cartridges, you can store that amount for maybe $100-200, but the device to read it back is about 10x more expensive even used, unless I am just struggling to find the real deals :/

I have Seagate drives that have been spinning for more than 5 years now with no data loss.

Some drives fail. It's a fact of life. Always buy more than one (from different manufacturers, if possible), remember RAID-1 is your friend and do your backups as if your life depended on them. Because, in this age, it does.


In 2013, I ordered 6 x Seagate Barracuda drives (3 TB). By 2017, 5 of the 6 had failed. There was a famed "bad batch" that I think my drives were a part of, but I will never buy Seagate again.

I think Backblaze has the best public hard drive statistics available.


I've had 5 Seagates die on me, 4 of them all within 11 weeks of each other, all different models and capacities, 1 of them was a 2.5" laptop drive, 1 of them failed after being in use for 30 minutes. I told myself I'd never buy Seagate again, but when I had a drive failure in another laptop a few years ago... yep, it was that 5th Seagate; I didn't realise I had it.

I've had one Samsung HD502HJ failure -- my 3 other HD502HJs are still going. I haven't had any other drive failures. I have more than 30 drives, and some of them are more than 8 years old.


Seagate is well known for quality issues. I usually have no more than one Seagate in a RAID and plan for it to fail first.

My understanding is the IronWolf drives are really good.

IMO what's important isn't getting good drives, but avoiding simultaneous drive failure. I usually do this by building RAIDs out of a mix of brands and manufacture dates.

IronWolf is AFAIK the only non-helium non-SMR 8TB drive.

I have had bad experiences in the past, maybe not on the scale of [1], but quite close. The newer drives are better (adjusted for age).

[1] https://www.youtube.com/watch?v=oHcHA5riKbc


>Not sure if this was a 'one-off' or other people had issues with their quality.

My last two HD failures were Seagate. On one of the PC building forums I occasionally frequent, they've had a sticky for years telling people not to buy Seagate.


Same here. One is theoretically still good but a fragging firmware bug prevents it from being enumerated properly. Never again Seagate with your crap reliability.

I've used them exclusively for the past 10 years in my web servers and I couldn't be happier. I've had the occasional drive fail on me here and there, of course, but some of those 10-year-old drives are still humming along nicely.

I’ve owned three or four Seagate drives (at least two of which were enterprise-class), and they all failed. My many HGST disks, with one exception, were all still alive and well when I retired them. I’ve never had a WD disk fail.

Backblaze reports always have Seagate with significantly higher failure rates. I suspect they just RMA them all since failures don't actually affect them too badly.

I sold my Seagate stock a while back, so I have no horse in this race, but I needed to point out that _outside the notoriously bad 3tb line that one year_, Seagate’s reliability has been on par or better than Western Digital’s in every single Backblaze report I have ever read. I challenge you to prove to me that the Backblaze data says Seagate is worse than WD outside of that line.

I bought a few enterprise models a couple of years ago and they're stellar (Constellation line-up), apart from the fact that they're quite loud.

Same but the one I got from the RMA has been running for 5-6 years without any issues.

I have had my Seagate 3TB drive running since, I think, 2012.


