
Can someone explain to me why SSDs still cost more than HDDs?

When I look at all the moving parts in an HDD, I'm shocked they can still be produced for less.




You're paying for $5 billion in lithography steppers and other fab equipment for one factory (you'll LOL at the cost of 1 deep-UV immersion litho stepper). That's amortized over 5 years. There are also mask and process research costs, in addition to the basic materials costs.

Did you know that a silicon wafer is a perfect crystal, structured like a diamond? Silicon is right underneath Carbon in the periodic table, which means it shares the same outer electron shell configuration. Making that ain't cheap.

And if one atom is in the wrong place, you have to throw away the chip.

That kind of core expense doesn't exist in a hard drive factory. The disks in a hard drive don't have to be perfect crystals, for example. It's a LOT more expensive to produce chips.

Add in all the distribution and sales costs, and you'll understand why it's so expensive.
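
A back-of-the-envelope sketch of what that amortization means per wafer (the $5B and 5 years are from above; the wafer throughput is an assumed round number, not a figure for any real fab):

    # Amortized fab capital cost per wafer -- all inputs are illustrative round numbers.
    fab_capex_usd = 5e9               # ~$5B of steppers, etch, deposition, etc.
    amortization_years = 5            # amortized over 5 years, as above
    wafer_starts_per_month = 50_000   # assumed throughput for a large fab

    wafers = wafer_starts_per_month * 12 * amortization_years
    print(f"amortized equipment cost per wafer: ~${fab_capex_usd / wafers:,.0f}")
    # -> roughly $1,700 per wafer, before masks, materials, labor, or yield loss

Even with generous throughput assumptions, the equipment alone dwarfs the raw cost of the wafer itself.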


> And if one atom is in the wrong place, you have to throw away the chip

this is certainly the case for CPUs

but DRAM & NAND? those are the typical case of designs where you can add redundancy to accommodate manufacturing defects.


I thought the reason there are so many different models (i3, i5, and i7, for example) is that the more defects a chip has, the more of its surface becomes unusable, thus reducing the number of transistors in use (which is why an i3 is less powerful than an i5). Don't they technically have some sort of redundancy?


The most obvious is that they have multiple cores, and it's easy to completely disable a non-functional one.

Now, whether Intel can be more granular than the core level, like running a core with a defective ALU, I really don't know.

What's publicly known about the binning process is that it involves disabling cores, reducing total cache size, and finding the maximum working frequency.

The bottom line is that it requires more effort to deal with defects in complex logic; for DRAM they would just reduce the total memory size.


At the cost of increased die-size.

If it helps any just think of the costs as buying diamonds.

"Wow that's 16TB of diamonds!"

or:

"this GPU uses a bigger diamond than that GPU"


It doesn't exactly help your cause that many people also believe the cost of diamonds is artificially inflated.


The cost of diamonds IS artificially inflated. They are useful for purposes other than the one in which the price inflation is a big deal, but it is a demonstrable fact that the price is inflated.


The cost of industrial diamonds is -one presumes- not subject to much artificial inflation.


Only problem with that is that a silicon wafer is like $50 off the shelf individually. They are incredibly cheap.


That's still a lot more expensive than aluminum platters. And you will need a lot of silicon to produce an SSD. You don't do it with just one chip.


That price is for single wafers off the shelf. They get a lot cheaper if you buy in bulk, and you can slice them a lot thinner than the sizes I used to use (where I know the price from).


> If it helps any just think of the costs as buying diamonds.

This is an unhelpful analogy.

Comparing the retail price of diamonds to the retail price of CPUs, RAM boards, and GPUs, I am led to believe that whatever is used as the substrate for modern high-performance ICs is actually rather cheap. I can -after all- get a reasonably fast combination CPU and GPU for $45.

If we ask the USGS, we discover that in 2003, the price of synthetic diamond suitable for reinforcing saws and drills sold for $1.50->$3.50 per carat. However, large synthetic diamonds with "excellent structure" suitable for -one presumes- processes that rely on the crystal's fine structural properties -just as CPU manufacture relies on silicon wafers with fine structural properties-, sold for "many hundreds of dollars per carat". [0]

One carat is 200 milligrams. An entire Core i3 appears to weigh 26,800mg [1]. Let's be generous and assume that the CPU die is 1/100th of that weight, or 268mg, or about 1.34 carats. Given that CPU manufacture requires a substrate with excellent structure, just how much of a substance that costs many hundreds of dollars per carat can there be in a 1.34 carat device? (Especially when ones of similar weight constructed with similar materials can be had for $45 per, retail?) :)
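
Spelling that arithmetic out (the 1%-of-package-weight figure for the die is just my generous guess from above):

    # Rough carat math for the diamond comparison -- illustrative only.
    package_weight_mg = 26_800   # listed weight of a Core i3 [1]
    die_fraction = 1 / 100       # generous guess: die is ~1% of that weight
    mg_per_carat = 200

    die_carats = package_weight_mg * die_fraction / mg_per_carat   # ~1.34 carats
    for usd_per_carat in (200, 500, 1000):
        print(f"at ${usd_per_carat}/carat, ~${die_carats * usd_per_carat:.0f} of crystal in a $45 CPU")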

[0] http://minerals.usgs.gov/minerals/pubs/commodity/diamond/dia...

[1] http://www.cpu-world.com/CPUs/Core_i3/Intel-Core%20i3-2100%2...


Yes, a wafer is something like a hundred dollars; that's not the expensive part.


I know that the substrate isn't the expensive part. Even a cursory gut-check reveals a claim to the contrary to be bunk.

I felt that a somewhat detailed analysis of the inappropriateness of the analogy was better than a "Nuh uh! You're wrong!" response.


Well the thing is you're analyzing in a way that's both in-depth and shallow at the same time. It doesn't matter that they have 'excellent structure' unless you care about actual wafer costs. Just use diamond prices and dimensions.


> ...you're analyzing in a way that's both in-depth and shallow at the same time.

I can't really dispute that. I'm no expert in the field.

> Just use diamond prices and dimensions.

Isn't that more or less what I did?

Diamond price per gram depends on the quality of the diamond. If we're gonna address an opinion that includes statements like "Think of the cost of a modern high-performance IC as if it was made of diamonds, because diamonds and silicon are both crystalline structures, and silicon is chemically much like carbon, therefore the substrate manufacturing costs are bound to be very similar." [0], then it seems that we need to look at the cost of high-quality diamonds that are used for their crystalline properties, rather than just for their hardness.

I'm not at all sure, but I would suppose that it would be far more expensive to make one high-quality diamond sheet the size of a silicon wafer than it would be to make a bunch of high-quality diamonds each the size of a CPU die, or maybe cut down a larger one. If it is, then an analysis based just on like-sized crystals would be dramatically unfair. Perhaps you know far more about this than I do? Industrial crystal production is not exactly in my wheelhouse. :)

[0] Direct quote: "Did you know that a silicon wafer is a perfect crystal, structured like a diamond? Silicon is right underneath Carbon in the periodic table, which means it shares the same outer electron shell configuration. Making that ain't cheap." via [1]

[1] https://news.ycombinator.com/item?id=10056870


What I'm saying is: it might be appropriate to look at specific kinds of diamond because of the complexity of lithography and such. But the purity of the wafer doesn't matter because that has nothing to do with chip cost.

That post was wrong about that being a driver of costs, and it's not fruitful to build on that wrongness.

An analogy that leads you to the right conclusion for the wrong reason is a toxic thing.


Do you mean to say that silicon wafers with higher guaranteed purity are not more expensive than those with lower guaranteed purity? I'm seriously asking; I don't know.

To speak to the rest of your comment:

mozumder made an incorrect argument and backed it up with a dangerously misleading analogy. I attacked the analogy by demonstrating its inappropriateness.

In my most recent post, I have attacked his argument with an analysis of what appear to be the actual costs of the thing he's talking about.


A wafer cost isn't insignificant. It's still a lot more expensive than aluminum platters in a hard drive, especially when you're dealing with gobs of chips in an SSD.

Add in processing costs and it really becomes a mess.

So, yes, wafer costs matter when you have to produce tons of silicon for an SSD.


> A wafer cost isn't insignificant.

This [0] seems to indicate that in mid 2009, one could get a 300mm silicon wafer for -worst case- ~$120.

Wafer area (300mm diameter): ~70,700mm^2

Largest Intel i3 processor (Haswell) die area: 181mm^2

Max dies per wafer: ~390 (ignoring edge loss)

Silicon wafer cost per die:

* Assuming 0% defect rate: $0.31

* Assuming 50% defect rate: $0.62

* Assuming 99% defect rate: $30.77

Cheapest (Celeron) Haswell on sale at Newegg today: $44.99. Average i3 Haswell price: $140. [1]

Unless Reuters is misinformed, or wafer costs have exploded in the past six years[2], the cost of the wafer truly does appear to be insignificant, even if we assume that wholesale prices are 50% of retail prices.

[0] http://www.reuters.com/article/2009/07/21/shinetsu-idUSBNG50...

[1] https://pcpartpicker.com/trends/price/cpu/

[2] This seems unlikely, as memory and chip costs haven't exploded in the past six years.
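
The same estimate as a quick sketch (wafer price from [0], Haswell die area from above; dividing gross wafer area by die area ignores edge loss and scribe lines, so the real die count would be somewhat lower):

    import math

    # Raw-silicon cost per die at various yields -- rough numbers from this thread.
    wafer_price_usd = 120                            # worst-case 300mm wafer, mid-2009 [0]
    die_area_mm2 = 181                               # largest Haswell i3 die
    wafer_area_mm2 = math.pi * (300 / 2) ** 2        # ~70,700 mm^2
    max_dies = int(wafer_area_mm2 // die_area_mm2)   # ~390

    for yield_fraction in (1.0, 0.5, 0.01):
        cost = wafer_price_usd / (max_dies * yield_fraction)
        print(f"yield {yield_fraction:.0%}: ${cost:.2f} of raw wafer per good die")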


This thread is about SSDs. You're forgetting that you need a hundred of these to make a 1TB SSD.

Consider the $0.31/die, and multiply that by 100 to get a 1 TB SSD drive.

Your SSD now has a minimum cost of $31, just for the silicon. That's extremely expensive. You can never sell your SSD for less than that, just to cover the silicon costs of a 1TB drive, never mind processing, manufacturing, distribution, sales, and profit. And you're competing against 5TB hard drives that sell for $100. (The 16TB SSD meanwhile apparently uses 500 chips..)

This is why wafer costs are like diamonds, instead of aluminum platters.
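
Taking those figures at face value (the $0.31/die is the best case from the sketch upthread, and the 100 dies per terabyte is an assumption, not a measured count):

    # Silicon-only cost floor for a 1TB SSD vs. HDD retail price per TB -- illustrative.
    wafer_cost_per_die_usd = 0.31   # best-case raw wafer cost per die, from upthread
    dies_per_terabyte = 100         # assumed flash die count for 1TB

    print(f"raw silicon per TB of flash: ~${wafer_cost_per_die_usd * dies_per_terabyte:.0f}")
    print(f"retail price per TB of a 5TB HDD: ~${100 / 5:.0f}")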


> This thread is about SSDs.

This sub-thread is about your unhelpful and misleading equivalence and analogy. :)

Silicon wafer costs are like silicon wafer costs. Your diamond analogy is simply inappropriate.

We don't say "Aircraft grade aluminium costs are like diamonds, rather than hard drive aluminium platters." or "Fission reactor grade steel costs are like diamonds, rather than..." because this is an immensely silly thing to say that obfuscates the true cost of the material in question.

What's more, we can generally discover the high end of the true price of the material in question with a little work. As I demonstrated in my replies to you, silicon wafer costs are substantially cheaper than equivalent diamond costs.

If you had said something along the lines of "Due in part to the cost of silicon wafers, silicon-based data storage technologies are now and will be for the foreseeable future substantially more expensive on a per-GB basis than spinning rust or tape-based technologies.", I would have had absolutely nothing at all to object to.

> Consider the $0.31/die...

That figure is based on a particular die area. I would expect a flash memory die to be substantially smaller than a CPU die. This would drive the base cost per die down even further. Moreover, that figure was from 2009. Up to date figures are required to really put a floor on chip prices. :)

> And [that 1TB SSD] competes against 5TB hard drives that sell for $100.

Sort of. For every use that I have except for bulk data storage, I recognize the vast superiority of an SSD. The only HDDs in my computers are the ones I got for free with my laptops-turned-servers that don't do much disk IO, and the disk array that holds my 5TB-and-growing Postgres database.

For the average computer user, I would strongly recommend replacement of the HDD in their computer with an SSD. If you don't need to store more than 1TB of data[0], the performance gains over HDDs are just too great to use anything else.

I'm fairly confident that HDDs will be substantially cheaper per GB than SSDs for the foreseeable future. I'm -however- not convinced by your implicit argument that SSDs will always be -price-wise- unattractive when compared to HDDs. SSDs seem to be sold at the price-per-GB of the HDDs of ~3->5 years ago. We will inevitably see 500GB SSDs at the $80 retail price point.[1] This will make them a no-brainer for every big computer manufacturer. A really fast disk makes slow kit feel really fucking fast.

[0] In my experience, almost no non-technical user has more than 500GB of data that they care about on their machine at any one time.

[1] They're only a little more than twice that price now.


unfortunately, SSDs are going to be more expensive for the foreseeable future, and part of that is because the material costs are much higher.

We need to make sure that everyone understands why, and part of it is that we're using lots of silicon crystal, which has the same lattice structure as diamond and is always going to be more expensive than aluminum platters.

If you take it to the limit, an SSD won't be cheaper than hard drives even as processing costs go down, because they use so much silicon.

You say the silicon costs are insignificant, but they will become the limit as prices go down.

The diamond analogy works appropriately, and it's unhelpful and inappropriate to claim material costs are insignificant.

And people are always going to end up using as much space as given, so that's another mistake you're making. They will find ways, especially with high-res smartphone cameras everywhere.


> ...people are always going to end up using as much space as given...

I used to be certain of that, based on my personal space usage habits. Based on my ongoing survey of both technical and non-technical computer users, I no longer believe that to be true.

The rise of The Cloud(TM) means that there are shockingly few users who intentionally keep a local copy of their data. Media streaming and synced storage means that a wide swath of the computer-using population store that shit remotely and throw away data when The Cloud(TM) gets full.

> ...it's unhelpful and inappropriate to claim [silicon wafer] costs are insignificant.

When the analysis demonstrates that the costs are an insignificant fraction of the total cost, then it is entirely appropriate to make that claim. :)

Silicon wafers may well remain more expensive than hard drive platters. The price of silicon wafers may well mean that SSDs will never reach price parity with HDDs. These facts don't magically make ~$0.30 per chip a significant factor in the manufacturing cost of a product that also required substantial original research and development to come to market. :)


You need to look at the amount of moving parts in the factory, not in the product.


Exactly. I just went to a talk by someone from ASML: a modern processor may go through 700 sequential steps of (UV) illumination, "sanding" (to molecular flatness), and adding a new layer of lithographic material. It's insane, and there are hardly any working specimens in the first batches.


>there are hardly any working specimens in the first batches

Why is that?


Way too many atoms in the wrong places.

Building a transistor on a chip is like making a building by bombarding meteors from space and hoping the craters form the shapes you want.

Building a chip is like making a city with that process.


I think the post was asking why the first batches are special in that regard (atoms in the wrong places) as opposed to the (presumably successful) later batches.


process tuning


It's ultraviolet voodoo, and every successive generation is tackling new problems of minuteness that have never been dealt with before.

When was the last time your first program in a language you've never used before compiled & ran on the first try?


Back in 1963, and it was my first dozen programs, one a week for 12 weeks. The 13th week, I wrote my first assembly language program, and it had a comma where it should have had a period (or was it the other way around?:-) ). I never did get that program working, because that was the week NASA sent up a new weather satellite, and there was no more computer time to waste on high school students and their trivial programs.


A large piece of spinning rust and some mechanics are cheaper to produce than billions of NAND cells.

Humans are pretty damn good at making small precise mechanical movements.


And it's a more mature arena; mechanical watches are, I'm sure, a better example (Gerry Sussman likes to tinker with them to relax, or at least did while doing his Ph.D. thesis to stay (semi-)sane), but in the area I know best, everything came together for high-power rifles in the 1898 Mauser (although reliably getting the steel right took longer worldwide, even for the first world).

Or look at internal combustion engines. The two absolutely critical technology improvements that changed things from WWI to WWII were "merely" refinements in those and in radios. WWI IC engines were so rough they pretty much required wooden frames, and engine power was anemic, as WWI tanks show. Fast forward not very many years and we have much smoother, more reliable, more powerful engines, and the seeds of today's jet engines had been developed before the first transistor was demonstrated in 1947.


HDDs cost less because they're built from technologies that have been understood and implemented in factories for a long time. Manufacturing SSDs is different, and it'll take some time for manufacturing companies to get up to speed, at least to overtake the scaled assembly-line operations of HDD manufacturing.

My bet is by the end of next year, SSDs will be cheaper than equally sized HDDs. HDDs are certainly on their way out.


SSDs are just chips on boards, which are manufactured using very mature, very high volume technology and yet they improve at best 2x/year. There is no reason why SSD prices would drop 10x in one year.

Hetzler et al. have calculated that the industry would have to spend over $800B to build enough fabs to replace hard disks with SSDs. http://storageconference.us/2013/Papers/2013.Paper.01.pdf


I think the big story here is the number of chips they can stack. From the article you pointed to, they were using 8 stacked dies.

"The later capacity is accomplished using 8 stacked and thinned (< 75 um) NAND chips in a 1.2 mm package"

The Samsung one has 48 layers per the article. So that's a 6x improvement.


You're kind of mixing up a few things. 3D NAND (aka V-NAND) has many (48) layers per die, and then they also stack those dies. But they made each cell much larger, so going from 1 layer to 48 layers is only a net density increase of 2x.
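
A rough way to see what that implies (the 2x net gain is the figure from the parent; the per-layer footprint just falls out of it):

    # If 48 layers only double density vs. planar NAND, each 3D cell's per-layer
    # footprint must be roughly 48 / 2 = 24x a planar cell's -- rough, illustrative.
    layers = 48
    net_density_gain = 2.0
    print(f"implied per-layer cell footprint: ~{layers / net_density_gain:.0f}x planar")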


You are absolutely correct. Thanks.


It's not always intuitive! Another incredible one: the silicon industry absolutely bends over backwards to keep using silicon and UV, instead of GaAs, E-UV, and other exotic processes. As much as silicon & UV can be a colossal billion-dollar pain in the butt, it's still easier & cheaper than E-UV.

Or automotive engines. We have all sorts of technology tacked on around the basic ICE gasoline engine to make them better; the combustion chamber is just a tiny piece of the machine, which is tended by countless devices managing temperature, airflow, fuel flow, air velocity, etc. Tremendously complex compared to electric motors, but in the end, they are still more popular than electric motors because of the fundamental problem of batteries.



