Solid state drives still cost about 10 times more per gigabyte. I'm just a web developer, but a spinning-disk drive looks 100 times more complicated to manufacture to me. I understood that at the beginning hard drives would be cheaper, simply because SSDs were newer and there was R & D to amortize. But by now I would have thought that SSDs might have matched hard drives or even undercut them. Not only has it been over a decade, but hard drives themselves are using new R & D too. For example, this article mentions that this new hard drive uses a new technique called HAMR. Can the old factory machinery from 2005 be used to manufacture these new HAMR disks?
Another issue with demand is IOPS. Enterprise is rarely limited by storage capacity, but instead by reliable operations per second. Pretty much every enterprise wants every SSD it can get at the right price point, which is much higher than consumers will pay. For the time being it will be worthwhile for NAND makers to sell drives to companies at a much higher price point.
To put it another way, why wouldn't someone undercut the current suppliers, sell at just above their marginal cost, and take over the whole market?
And the Chinese accusations of price fixing need to be understood in the context that they're trying very hard to get their own NAND fabs up and running, almost certainly using at least some stolen tech. Even without Chinese influence, NAND prices have fallen dramatically since that article was written, due to ordinary healthy market forces.
Moat. Or the entry barrier. You can't build a fab, ignore yield, and expect it to be profitable. TSMC famously said they would never enter the NAND and DRAM market, since they think of those as commodities. In 2017 that looked utterly stupid: NAND and DRAM prices were sky high, which, while not widely reported, made Samsung Electronics (which owns ~60% of the world's DRAM and NAND market) the most profitable company in the world, even more so than Apple. By 2018, a 5% drop in worldwide smartphone shipments triggered a knock-on effect, and prices have been plummeting since. Still profitable, just not as much as they were.
It was once thought the $100B USD from China would solve this problem. Even with 50% lower yield and net-zero margins, forcing Chinese smartphone companies to use a percentage of those components should have been enough to sustain those Chinese-based NAND fabs. Turns out it wasn't quite as easy as they thought. Most of the DRAM and NAND produced was far too inferior. 3D NAND (multilayer stacked NAND), along with a leading-edge node, is a huge barrier to entry.
One reason why you keep hearing there is a lot of IP theft going on.
Note: all the Apple products sold in China use Chinese-based NAND.
This is... exactly what everyone expects, and the assumption of every economic graph in the world?
The stupidest, most basic supply-demand model is:
Prices fall with falling demand and/or rising supply.
Prices rise with rising demand and/or falling supply.
There are actually markets where the opposite holds. Basically anything with high fixed costs, because the more units sold, the less of the fixed costs each unit has to cover.
But fabs don't really work like that because, while very expensive, they have finite capacity. A fab that can produce a million units costs a billion dollars. If you want two million units, that'll be two billion dollars, available a few years from now. And if you build more capacity than there is demand, you're bankrupt. So they purposely build somewhat less capacity than expected demand, which, when demand turns out higher than expected, results in much more demand than supply.
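To make the fixed-cost-plus-finite-capacity point concrete, here is a minimal sketch in Python (the dollar figures are made up, in the spirit of the billion-dollar example above):

    # Toy model of fab economics: a huge fixed cost plus finite capacity.
    # All numbers are illustrative, not real fab costs.
    FAB_COST = 1_000_000_000   # $1B to build one fab
    FAB_CAPACITY = 1_000_000   # units per year one fab can produce
    MARGINAL_COST = 20         # $ of materials/labour per unit

    def cost_per_unit(units_sold, fabs_built=1):
        """Average cost per unit: fixed cost spread over units actually sold."""
        units_sold = min(units_sold, fabs_built * FAB_CAPACITY)  # can't sell what you can't make
        fixed = fabs_built * FAB_COST
        return (fixed + units_sold * MARGINAL_COST) / units_sold

    print(cost_per_unit(1_000_000))                 # fab fully used: $1020/unit
    print(cost_per_unit(500_000))                   # demand undershoots: $2020/unit, hello bankruptcy
    print(cost_per_unit(2_000_000, fabs_built=2))   # want 2M units? pay for a second fab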
In Oligopoly markets, because there are only a few producers, the producers have more leverage over their buyers and are therefore able to charge prices higher than the marginal cost of production.
To your suggestion: let's say I realize this and I want to start a NAND factory that undercuts my competitors by a little bit but leaves me plenty of profit. I first have to build a factory large enough to reach the economies of scale that my competitors have. The reason is that, for a period of time, the marginal cost of production decreases as sheer production increases, because the process gets more streamlined and automated. However, to do that I need to hire engineers who know how to make NAND, and maybe I need permission from the Chinese communist party. Then I need to get a bank to agree to give me a loan so I have enough money to build my factory and hire my engineers. Banks are probably not going to be too keen on lending to an unknown quantity and will likely charge me a higher interest rate to compensate for that risk. Only then can I enter the market.
So as you can see, while it's not impossible to enter the market, it's not easy either. Since that's the case, the few existing producers can, without any nefarious activity (which is not to say there isn't any, but it's not required), charge prices higher than the marginal cost of production.
pretty nefarious if you ask me. we gotta own the means of production! take-over these oligopolies and turn them into worker-owned-and-directed cooperatives!
On the other hand it means the technology is established and people prefer to buy SSDs over HDDs. Speaking for myself, I wouldn't consider buying an HDD laptop ever again.
This means companies can invest in very long-term endeavours to transform their HDD production businesses into SSD production businesses. The costs are high but the risk is low: lower than, or at worst equal to, the risk of continuing with HDDs, if you ask me.
I would have guessed SSDs had already surpassed HDDs on the client side two years ago.
Suppose it costs -- and I am making this up -- $10 to produce the first 1000 units, but only $1 after that.
Suppose people are charging a lot of money, say $50 for a unit.
Won't you increase production to try to capture that demand?
After that, because you have increased production you enjoy economies of scale, and the price comes down.
If you are the only producer, you have no incentive to come down, but if there are other producers you will want to undercut them. In this case, you will bring down your price to gain more market share.
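A quick sketch of how those made-up numbers play out (the $10 / $1 / $50 figures are the ones above; nothing here is real market data):

    # Average cost per unit: $10 each for the first 1000 units, $1 each after that.
    def avg_cost(units):
        if units <= 1000:
            return 10.0
        return (1000 * 10 + (units - 1000) * 1) / units

    PRICE = 50  # what people are currently paying per unit

    for units in (1_000, 10_000, 100_000, 1_000_000):
        c = avg_cost(units)
        print(f"{units:>9} units: avg cost ${c:.2f}, margin at ${PRICE}: ${PRICE - c:.2f}")

That margin is exactly what tempts another producer to undercut, which is why the interesting question is what stops them (the entry barriers discussed elsewhere in the thread).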
Please help us understand what we are missing with your shrewd understanding of supply and demand curves.
Customer demand is not smooth. I will buy precisely 0 SSDs at prices > $125 per 1TB drive, and I will suddenly buy 4 as soon as the price drops below that. There are thousands to millions of other customers with scattered plot points around that area, and while you can draw a smoothed graph over them and be reasonably correct, they do not actually represent a mathematical law.
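A tiny sketch of what that looks like (the thresholds are hypothetical; the point is that the smooth curve is just the sum of a lot of step functions):

    # Each customer has a step demand function: some quantity below their
    # threshold price for a 1TB drive, zero above it. The "demand curve" is the sum.
    customers = [
        (125, 4),   # me: 4 drives the moment the price dips below $125
        (150, 1),
        (100, 2),
        (80, 10),
    ]

    def units_demanded(price):
        return sum(qty for threshold, qty in customers if price < threshold)

    for price in (160, 130, 120, 90, 70):
        print(f"${price}: {units_demanded(price)} units")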
Further - Information is imperfect and markets, for all their elasticity, are only trending towards efficiency.
This all assumes you're arguing in good faith. This in itself is an unreasonable assumption - Your last, mocking line seems to belie it. As does this line.
OP was talking about 10 year lags! Please read the whole thread in context.
Samsung et al do build new fabs, but at a significant lag to demand for logistic and risk-mitigation reasons.
But if that is not the case, and lowering your price only wins you _some_ but not all customers, then there is no reason to expect that the price point that balances gaining new customers against selling each product for less will be equal to the marginal cost rather than above it.
It's (somewhat) the same situation as with EV and ICE automobiles. The engine and drivetrain in a modern ICE vehicle is extremely complex. Much more so at a macro level than EVs. But we're actually pretty good at building reliable complex mechanical systems. More things shift to solid state over time but it's a long process.
Fun fact: the "head fly height", i.e. the distance from the underside of the R/W head to the platter surface, is less than a handful of nanometers in most hard drives; this makes the gap not just smaller than any production transistor, but also smaller than any production feature size.
The spindle in a hard drive is also an air bearing.
That's because 2008 was to SSDs what 1982 was to HDDs. If you look at 2011, both of your premises are false. A SATA SSD in 2011 was as fast as a SATA SSD in 2018, but with about 10x more endurance.
The technology to produce a crappy SSD is there, but it will "forget" its contents in a couple of months when kept unplugged, or burn through erase cycles in a year while sitting idle, just keeping its contents alive.
That's a meaningless distinction, M.2 SSDs are incredibly popular and are nearly 10x faster than anything you could get in 2011 (for example, an OCZ Vertex 3 would max out at 550/500 MB/s sequential read/write speeds while a Samsung 960 EVO can do 3.2/1.8 GB/s).
Also, fabs have investors who want returns on their money, so they're not super keen on crashing the market either. There are not-so-secret cartels between all the major players that conspire to keep DRAM and NAND prices high.
We could certainly have the government dump a half-trillion dollars and bring up a bunch of fabs though. That's exactly what China is doing. Their economy runs on assembling the goods, not lithographing the chips (which is largely done in Taiwan or South Korea or sometimes Malaysia).
Probably yes, for everything except the heads. These HDs are built upon many decades of investments in manufacturing capacity.
Also, a hard drive head and its electronics have to be made once for each platter, so their cost does not scale with the number of bits on the platter.
(I could maybe believe that you ran into trouble if you used low-end enterprise SSDs that also force themselves to go read-only as soon as the warrantied write endurance is exhausted, rather than continuing until the flash itself is actually starting to fail.)
>top of the line enterprise SSDs
I wasn't precise here. It isn't top-of-the-line enterprise SSDs (like the ones you'd use for databases and which cost accordingly). I meant the top-tier corporate enterprise vendor with top-tier hardware in the corresponding categories (we're a BigCo).
So what you're probably getting is ordinary cheap consumer-grade SSDs, but perhaps with encryption capabilities actually turned on. I'd be surprised if you were getting something like the Samsung 850 PRO without specifically ordering premium SSDs.
We've got various models through the years, yet still a notch higher - the 850 PRO has TBW at 300x-500x capacity, whereas the ones we've been getting have TBW around 1000x, and I think the earlier hardware was even closer to 2000-3000x.
I think you should go shopping, because the time where SSDs cost 10x as much as an HDD is over.
A WD Blue 1TB SATA 6 Gb/s 7200 RPM 64MB is $46 on Amazon, and a Crucial MX500 1TB 3D NAND SATA 2.5 Inch is $134 - roughly 3x, not 10x.
Unless you're comparing the cheapest 1TB HDD you can find to a Samsung 970 pro 1TB or something... which really isn't a reasonable comparison imo.
Those are cherry-picked numbers, because 1TB is at the very low end for HDD sizes. Compare with a more typical HDD size (3TB or 4TB) and you'll see the differences become more obvious.
In that market, people are comparing a pile of disks in an array to a pile of solid state storage needed to replace its capacity. The bulk price of storage inverts when it is cheaper to store hundreds of TB on huge SSDs rather than on huge HDDs. That would be the death of HDD, since nobody really wants spinning disks in their datacenter.
It doesn't even need to be a pile of disks. Even for a PC the differences are big.
My computer has a 1 TB SSD, which is a decent size for an SSD. It's still a bit tight for me, so I supplement it with a spinning-rust NAS. If I had an HDD instead it would have been something like 8 TB and I wouldn't necessarily need the NAS. I think that's what GP is hinting at with "me SSD needs". I think he's also supplementing with an HDD somewhere for bulk storage (be it an external drive, a NAS, or even cloud storage).
Sorry, you're right! I had quickly searched for a few multi-terabyte drives. But at around 100 GB the two kinds are close.
Companies are overbidding at the fabrication lines to get their latest phones built. Data centers want SSD offerings for customers with insatiable demand for SSD servers at a premium.
The whole thing is just demand, demand, demand, and the chips can't come out fast enough.
Will last for a while at the current rate.
Eventually, an SSD at cost parity with a low-end HDD will be big enough, after which there's no reason for the low-end consumer market not to switch over. The high end will have switched already as well (I certainly will never buy another laptop containing an HDD).
At that point, HDDs will essentially only be found in external drives and data centers. The writing will be on the wall, and R&D will slow down, starting a cycle that will lead to collapse.
SSD controllers are cheaper than the fixed costs of hard drive motors, actuators and clean-room assembly. Adding an extra platter gets you more incremental GB per dollar, but getting that first platter working is much more expensive than the minimum viable SSD.
Cost is still up there, but only about 2.5x as expensive as 2.5" HDDs... a 2 TB HDD is about $80 while a 1 TB SSD is ~$100-120. And there are (kinda shitty) 2 TB SSDs that are hitting $250 these days (e.g. the Micron 1100).
You can get them for sure and they're not really that expensive at all. That particular one gave me about 95MB/s write speed on large files.
Sorting by price on Newegg, if you're looking for a cheap laptop drive you can choose right now between a 320GB HDD for $28.55, a 60GB SSD for $19.99, or a 240GB SSD for $31.99.
For a 1TB drive, you're looking at $38.50 HDD vs $109.99 SSD.
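Per gigabyte, those listings work out roughly like this (prices exactly as quoted above):

    # $/GB for the Newegg listings quoted above.
    drives = [
        ("320GB HDD", 320, 28.55),
        ("60GB SSD",   60, 19.99),
        ("240GB SSD", 240, 31.99),
        ("1TB HDD",  1000, 38.50),
        ("1TB SSD",  1000, 109.99),
    ]
    for name, gb, price in drives:
        print(f"{name:10s} ${price / gb:.3f}/GB")

At the low end the absolute prices are nearly the same, even though the HDD still wins slightly per GB; at 1TB the HDD is still roughly 3x cheaper per GB.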
Large SSDs remain fairly expensive for now, but the low end is being eaten.
Reliability and mean time between failures figure in to the equation as well -- at both the low end and the high end.
Data centers don't want to be swapping out bad drives all the time, and low-end consumers can't afford to buy new drives all the time. Both want to stretch the drives they have as long as they can, and at least in the past SSDs were less reliable and failed a lot sooner than mechanical drives did.
I personally do like to buy laptops with mechanical drives in them, because I can get a lot of storage without paying insane prices, and because I want the drive to last as long as possible. And just in general, I don't trust SSDs. They have yet to prove themselves to me.
> And just in general, I don't trust SSDs. They have yet to prove themselves to me.
I'm pretty sure you're cherry-picking the data you're willing to look at. It's definitely not true that any datacenter ever avoids SSDs due to worries about drive failure rates. These days, high-end consumer or enterprise SSDs are warrantied to survive more writes than it is physically possible to send to a hard drive during the same 5-year span. Flash memory write endurance stopped being a serious concern by the time consumer SSDs reached capacities that made them sufficient for use as the sole storage device in a mainstream laptop.
Controller/firmware bugs are the only source of SSD failure that you have a non-negligible chance of encountering in the wild, but the rate of such failures is very small, especially if you stick to the major reputable SSD brands. And that's in the consumer market where the vendors aren't specifically validating each SSD model with your specific servers before you put anything into production.
But in a laptop, a device that you lug around, move around while working, and sometimes even drop, a precise mechanical device has a much larger chance to misbehave, to my mind. Its I/O load is way lower than in the datacenter, too.
The power-failure behavior of most consumer-oriented SSDs is also data-corrupting.
As a web developer, do you charge based on how much it costs for you to eat and pay rent for the day?
It's not something I personally would do because (as others imply) IMHO the CPU trade-off is not worth it. Usually I use a blend of plug-ins (which take up no space) and recorded hardware, consequently the amount per track ends up being more like 100-600MB and not gigabytes. This is rather manageable.
However, I will note that sample-based software synthesizers and sample packs can be huge these days, even though lossless compression is applied to the sample library in many cases. Omnisphere 2 for instance comes with a 60GB+ sample library (and that's before you add the add-ons like Moog Tribute). At the current extreme end, orchestral sample company Spitfire offers a string library (https://www.spitfireaudio.com/shop/a-z/hans-zimmer-strings/) that is 183GB in size and a sampled piano that's a whopping 211GB compressed (https://www.spitfireaudio.com/shop/a-z/hans-zimmer-piano/).
Even the relatively small software instruments (I've been really into Soniccoture's Glass/Works, for instance) use up like 8 GB apiece.
Granted, the industry standard in this area (ProTools) is an absolute CPU hog to begin with, so it needs all the help it can get.
It would be interesting to have a media-friendly archive file format for the final tracks of completed projects that automatically compresses/decompresses WAVs as FLAC... and other raw data formats as their lossless counterparts... the closest thing I can find is zipx with WavPack.
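In the meantime, a crude stand-in is just to transcode the WAVs yourself before archiving. A minimal sketch, assuming ffmpeg is on the PATH; the project path is hypothetical and this is not a real archive format, just lossless per-file compression:

    # Losslessly transcode every WAV under a finished project to FLAC.
    # Decode the FLACs back to WAV later to restore the audio bit-for-bit.
    import subprocess
    from pathlib import Path

    def archive_wavs(project_dir, delete_originals=False):
        for wav in Path(project_dir).expanduser().rglob("*.wav"):
            flac = wav.with_suffix(".flac")
            subprocess.run(
                ["ffmpeg", "-y", "-i", str(wav), "-compression_level", "8", str(flac)],
                check=True,
            )
            if delete_originals:
                wav.unlink()

    archive_wavs("~/projects/finished_song")  # hypothetical path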
Renoise does that. With software synths and sample-based instruments, the biggest chunk for me is always the vocals... and I would have to have a lot of takes to get over 100MB. Of course, my music isn't the music people who record a bunch of live instruments make, so it's not a fair comparison, but still, for my uses it's fine, and it's super fast. I wish there were an option to bake VSTs into songs; I would love to be able to share full songs as "source" (obviously it would have to be songs only made with freeware VSTs, but art is all about limitations etc. blah blah :)
After the initial upload it's feasible even with large files since it of course only uploads incremental, block-level changes. I let it run on a daily schedule during the night.
Looking on the bright side: infinite job security. Yay?
With RAID5 for example, rebuilding a 4TiB array is expected (i.e. >50% chance) to have at least one URE.
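For anyone who wants to sanity-check that, the usual back-of-the-envelope goes like this (a sketch assuming the commonly quoted consumer spec of one URE per 10^14 bits read; enterprise drives are usually rated 10x better, and how many bytes a rebuild actually reads depends on the array geometry):

    # Probability of hitting at least one unrecoverable read error (URE)
    # while reading `bytes_read` during a rebuild, given a per-bit URE rate.
    def p_at_least_one_ure(bytes_read, ure_per_bit=1e-14):
        bits = bytes_read * 8
        return 1 - (1 - ure_per_bit) ** bits

    TiB = 2 ** 40
    for tib in (4, 8, 12):
        print(f"{tib:>2} TiB read: {p_at_least_one_ure(tib * TiB):.0%} chance of a URE")

The result is very sensitive to the assumed URE rate and to how much data the rebuild actually reads, which is why the same arithmetic gets quoted as anything from "RAID5 is dead" to "not a big deal".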
Obviously there will be some inherent complexity in dealing with that, but not really more complexity; it's just different, and maybe proper for once.
load = warmth (how often each byte gets accessed) * capacity
Thus, systems with the same basic architecture get more and more overloaded as per-disk capacity increases, until they become useless. The only escape is to change the architecture, inevitably toward greater complexity (e.g. cache layers and burst buffers). None of this would be necessary with a better capacity/performance ratio. Believe me, nobody wants to make these systems more complicated. But every time that gap gets bigger, there will likely be a new increment of complexity to deal with it.
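One way to see the gap widening: per-disk throughput has been roughly flat for years while capacity keeps climbing, so the time just to read (or rebuild, or scrub) one whole disk keeps stretching. A rough sketch with ballpark numbers:

    # Hours to read an entire disk end-to-end at a fixed sustained rate.
    # ~200 MB/s is a ballpark sustained figure for large modern HDDs.
    THROUGHPUT_MBS = 200

    for tb in (2, 4, 8, 16):
        hours = tb * 1e6 / THROUGHPUT_MBS / 3600
        print(f"{tb:>2} TB disk: ~{hours:.1f} hours to read end-to-end")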
I don't see how that's "proper" or "where it belongs" or any such. It's not that all such systems were poorly designed for the hardware as it was when they were developed. It's that even the best designs have to keep adapting. The demise of Moore's law and the ever increasing number of cores per die or per system have increased complexity in the compute domain. The capacity/performance gap is the storage equivalent.
Also, rebuild/scrub operations aren't really an issue either if you're running RAID6 and your system is sufficiently designed/over-provisioned to deal with rebuild/scrub operations happening in the background. If you're IOPS-limited, OTOH, you likely have a problem.
I'm not sure whether that's true or not. At one end, tape still exists. At the other end, a lot of data that's already stored with high levels of redundancy doesn't get backed up anywhere else. Do you have a good source for a definitive answer?
> rebuild/scrub operations aren't really an issue either if you're running RAID6
Rebuild/scrub operations still happen, and are still an issue, with RAID6. They might not be quite as visible because they're not soaking up host cycles, but they are soaking up disk IOPS. In any large or even medium-sized storage infrastructure (by today's standards) you'll have some going on and occasionally you'll have some that overlap because the first one took too long. Before long you'll hit a case where you get a third failure while the first two are still going on, and you stop relying on insufficient RAID6. Then you're into erasure codes and your own kind of scrubbing. Those aren't exotic situations or responses any more.
> If you're IOPS-limited, OTOH, you likely have a problem.
Yep, sure do. Can't wish it away, or make it all better with a magical free SSD caching layer. Have to solve it, which requires effort and expense. What I'm saying is that bigger disks make that harder. I'm not complaining, it's what I and others choose to do, but it's a fact.
There are a few large buyers of bulk storage which appear to be using it for some form of nearline. At least in my experience (which is obviously warped by the part of the industry I was in), enterprise applications needing IOPS have been overwhelmingly moving to pure flash. There remains a large amount of revenue in hybrid arrays, but the volume is shrinking (although maybe not the raw capacity, similar to the mainframe, which is selling record amounts of capacity in fewer and fewer machines). A few years ago I stopped being surprised to see a couple racks of Infortrend or Supermicro storage chassis (or any number of other second-tier vendors' products) sitting at one end of random datacenters, where the local storage admins were running some huge snapshot repo, or Ceph, or whatever on them, and the resulting capacities were frequently powers of ten greater than the online storage.
Also, I've seen plenty of tape arrays too, but they don't seem nearly as common these days, partially because tape has the same problems as disk (lots of bandwidth, but it never seems to be enough). It seems everyone still has one, but you have to hunt for it, and it might only be getting the most critical data (or stuff that is required for compliance with some law), which turns out to be limited by the 2-4-8 drives they have constantly spinning, where an operator walks in every few days and swaps a couple dozen tapes offsite. Sure, there are larger libraries, but it seems most admins start any conversation about tape with a groan and eye rolling, which tends to be an attitude that keeps finite resources from being heavily invested in them. So people will buy 10+Gbit links between their data centers (which sometimes tend to be shockingly inexpensive) rather than spend $10k on a tape library and a FedEx account.
The recent capacity increases seem to be mostly driven by stuffing more platters in, so as long as they don't add independently servoed heads (that used to be a thing; maybe it becomes a thing again? Perhaps not fully independent, but still "ganged up", with some sort of micromechanics in each head to servo it to nearby tracks?) or increase data density (bits/cm^2), throughput must plateau.
HAMR is an increase in linear BPI, so expect these drives to get faster.
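A very rough proportionality sketch of which knob moves which metric (idealized; it ignores zoning, servo overhead, outer-vs-inner tracks and so on):

    # Relative effect of drive parameters on throughput vs capacity.
    def relative_throughput(bpi, rpm):
        return bpi * rpm              # bits passing under the head per second

    def relative_capacity(bpi, tpi, platters):
        return bpi * tpi * platters   # areal density times recording surfaces

    base          = dict(bpi=1.0, tpi=1.0, rpm=1.0, platters=1.0)
    more_platters = dict(base, platters=1.5)   # stuff in 50% more platters
    higher_bpi    = dict(base, bpi=1.2)        # +20% linear density (the HAMR-style knob)

    for name, d in (("base", base), ("+platters", more_platters), ("+BPI", higher_bpi)):
        print(f"{name:10s} throughput x{relative_throughput(d['bpi'], d['rpm']):.2f}"
              f"  capacity x{relative_capacity(d['bpi'], d['tpi'], d['platters']):.2f}")

More platters move capacity but not throughput; higher linear density moves both, which is why HAMR drives should also get faster.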
If professionals need throughput, they will buy a couple of hard drives and array them together. Latency is generally not requested.
That might be your experience, but it's not mine. I'm on a team that runs one of the three or four largest storage infrastructures in the world. In aggregate, those few account for a significant fraction of all the disk drives sold - enough that we have to account for possible market distortions in our planning. So I think our experience is relevant too.
In my world, the professionals very much do care about latency. Yes, the systems are built for massive throughput, but that throughput has to be within a certain latency. If latency goes too high, we get calls in the middle of the night. To keep that system-wide latency low, we deploy many racks' worth of equipment at a time, or shift load between similarly sized parts of the system.
For us and our peers, this capacity/performance gap is a huge issue. Has been for years, and it keeps getting worse. Those concerns might not apply to everyone, notably you, but they are quite real.
I didn't say professionals don't care about latency. I said that professionals don't ask for latency for hard drives, they go straight to SSDs when latency is important.
More to the point, your gaslight version is still wrong. Professionals do have latency expectations even with hard disks. Those requirements might not be as stringent for hard disks as for flash (or for distributed vs. local) but they're very much still there. If a significant fraction of users' requests are taking too long, they do complain. Loudly. Data scientists don't care if the system is simultaneously delivering dozens of gigabytes per second to other users, and doing ten kinds of background maintenance stuff besides. They care that their job is slow. When you're dealing with truly large amounts of data being processed by thousands of machines, and ad hoc queries none of which are likely to be repeated, "just use flash" isn't an answer.
Maybe you're dealing with a different kind of professional than I am. That's fine, but you shouldn't keep making these super-general statements that are wrong for one of the largest classes of data professionals. "Couple of hard drives" was so far off the mark I literally laughed out loud.
Nobody else misunderstood me but you.
I think it would be more correct to say that nobody at all misunderstood you, and nobody else bothered to correct your claims. I do see there were some downvotes, though, which suggests what others thought of them.
I think this is probably because I stream everything, and don't download much content. Even most of my applications are web-based these days. Granted, I'm sure that a lot of storage in the cloud is required to service me...
Why would you build your own? First, because the NAS vendors charge on a non-linear scale for drive slots and 10G Ethernet. The latter is basically a $50 additional charge on a motherboard, or a $100 add-in card, but will easily add $500+ to an off-the-shelf NAS. Then there are the drives: today you might be happy with a two- or three-drive NAS, but in 5 years adding an additional 8-12TB drive (or three) will be just a hundred dollars or so, vs buying a whole new NAS.
Then there is actually using it. Running a NAS at 400-600MB/sec is a far cry from using one at 80-120MB/sec, which is where you will peak out with 1Gbit Ethernet. This becomes really noticeable when you're copying one of the 50GB 4K videos you recorded at the kids' birthday, or just running a machine backup to the NAS.
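To put rough numbers on that (line rates converted to ballpark real-world MB/s):

    # How long a single 50 GB copy takes at typical NAS throughputs.
    SIZE_GB = 50

    for label, mb_per_s in (("1 GbE  (~110 MB/s)", 110), ("10 GbE (~500 MB/s)", 500)):
        minutes = SIZE_GB * 1000 / mb_per_s / 60
        print(f"{label}: ~{minutes:.1f} minutes")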
Frankly, at the end of the day, I find the flexibility to be the largest advantage. When I got tired of my old NAS's Plex encoding perf, I swapped the motherboard for $300, without having to copy any data, because I'm running a stock Linux distro. Much of the old advantage of NASes was the 3rd-party marketplaces to get things like Plex/Crashplan/etc. Now those things are frequently found in Docker containers, so with a bit of setup your application installs tend to stay continuously updated, rather than hoping your NAS vendor's marketplace isn't forgotten.
Cameras keep increasing in resolution, getting better sensors, ...
I had a lot of issues with >2 TB HDDs in terms of performance though before, and generally avoid them.
I mean, my own music library, which is rather small, is 250GB...
But still, modern games for PC/Mac are around ~10GB per game, which would limit your installed game base by a lot.
(Also, working with Docker (PC, Mac, not Linux) or VMs in general will quickly eat your 500GB disk.)
Personally I'd love to replace my multiple 2TB drives with 8TB drives if the price comes down a bit.
Most new AAA games are approaching 100GB each. And on the extreme end Gears of War 4 for example takes up over 250 GB.
And automatically ship logs to another set of disks, preferably on a different continent.
>The Exos 16 TB hard drive using HAMR technology is now the world’s biggest HDD in terms of capacity overtaking the 14 TB Barracuda Pro.
>HAMR, which is the acronym for heat-assisted magnetic recording, to be precise. This replaces the regular PMR, perpendicular magnetic recording, found in most HDDs. To the average consumer, this doesn’t mean much at all. However, Seagate believes that HAMR is the key to making significantly larger capacities readily available shortly.
From the sounds of it, this 16TB HAMR-enabled drive is just the first of many we will see year after year to keep HDD sales going.
At some point, surely NAND will offer more storage in 4U than a 40U rack of HDDs, at lower energy and higher speed.
I have been asking this a lot: at what price point does the TCO of NAND (with its superior speed and higher density per rack) cross over with HDD, so that it makes sense to store everything on NAND?
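A minimal sketch of the kind of crossover math I mean; every number below is a placeholder, not a quote:

    # Toy TCO per TB: media cost plus electricity over a service life.
    def tco_per_tb(media_usd_per_tb, watts_per_tb, years=5, usd_per_kwh=0.10, pue=1.5):
        energy = watts_per_tb / 1000 * 24 * 365 * years * usd_per_kwh * pue
        return media_usd_per_tb + energy

    hdd  = tco_per_tb(media_usd_per_tb=25,  watts_per_tb=0.6)   # placeholder HDD numbers
    nand = tco_per_tb(media_usd_per_tb=100, watts_per_tb=0.3)   # placeholder NAND numbers
    print(f"HDD : ${hdd:.0f}/TB over 5 years")
    print(f"NAND: ${nand:.0f}/TB over 5 years")
    # Add rack space, cooling and performance into the model and the NAND
    # crossover point moves earlier; that's the price point being asked about.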
I updated a lot of my 4/6TB drives to 8TB drives for this reason: too much physical space was being taken up on my desk by having so many 4TB drives.
Including the cost of racks and power, this may even be cheaper.
for only $2600.
Not sure if this was a 'one-off' or other people had issues with their quality.
Similar issue with Samsung on SSDs.
A few Seagate models were problematic. See the 2nd table here (did you have one of those?) https://www.backblaze.com/blog/hard-drive-stats-for-2017/
I switched to HGST a few years back, after seeing the color graph on this page: https://www.backblaze.com/blog/hard-drive-reliability-q3-201...
Alternatively, buy one hard drive and keep a copy in Amazon Glacier Deep Archive for $1/TB-month.
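The break-even is easy to sanity-check (taking the $1/TB-month figure at face value and ignoring retrieval/egress fees, which matter a lot if you ever actually restore):

    # Months of Glacier Deep Archive (~$1/TB-month) before it costs more
    # than buying a second drive outright. Ignores retrieval fees and drive failures.
    def break_even_months(drive_price_usd, drive_tb, usd_per_tb_month=1.0):
        return drive_price_usd / (drive_tb * usd_per_tb_month)

    print(break_even_months(180, 8))   # hypothetical $180 8TB drive: ~22.5 months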
Some drives fail. It's a fact of life. Always buy more than one (from different manufacturers, if possible), remember RAID-1 is your friend and do your backups as if your life depended on them. Because, in this age, it does.
I think Backblaze has the best public hard drive statistics available.
I've had one Samsung HD502HJ failure -- my 3 other HD502HJs are still going. I haven't had any other drive failures. I have more than 30 drives, and some of them are more than 8 years old.
My last two HD failures were Seagate. On one of the PC building forums I occasionally frequent, they've had a sticky for years telling people not to buy Seagate.