SSD Storage: 2018 in review (anandtech.com)
173 points by PeterZaitsev on Jan 14, 2019 | 126 comments



I've been following the news and it's amazing how SSDs are still progressing, at a rate a bit slower than, but similar to, Moore's Law. Prices for Samsung's 1TB external USB SSD products:

- 2015 $600: Samsung T1 http://www.thessdreview.com/our-reviews/samsung-portable-ssd...

- 2016 $430: Samsung T3 https://www.storagereview.com/samsung_portable_ssd_t3_review

- 2018 $200: Samsung T5 https://www.androidheadlines.com/2018/12/samsung-t5-portable...

Another incredibly exciting point is the amount of competition coming up; it's not just 2-3 large vendors anymore.

If things continue at this pace it seems sensible to expect a 10TB SSD consumer drive for under $1000 within a couple of years.

Note: not affiliated at all, just a happy consumer.


Meanwhile Apple is charging $1,200 for a 1 TB MacBook upgrade. Hoping this year we see a significant jump in consumer availability of these drives. I've been using a 512GB laptop since 2010 at least. Would be nice to start seeing multi-terabyte drives in devices besides aftermarket or external drives.


> Meanwhile Apple is charging $1,200 for a 1 TB MacBook upgrade.

This is specifically one of the reasons I ended up switching to a Lenovo ThinkPad X1 Yoga, even though I'd planned to buy the new MacBook Air. The ThinkPad's SSD is officially user-replaceable, and I bought a 2TB WD Blue SSD for less than a third of the cost of Apple's upgrade. Even an NVMe SSD would have been half Apple's price.

One caveat: although Lenovo has excellent official repair videos (https://www.youtube.com/watch?v=PqrspYc21PY), I must be doing something wrong, because the Phillips and JIS heads in my iFixit Mako toolkit don't seem to fit or grip the ThinkPad's SSD screw. I'm worried I'll strip the screw if I keep trying, so I guess I'll take it to a Lenovo repair shop for some help.


Sometimes manufacturers put a bit of blue Loctite on the threads... it makes the torque needed to initially 'break' the threads and unscrew a bit higher. I wouldn't be that concerned; a replacement screw kit for the HDD caddy will be a couple dollars on eBay if you really do strip the head.


At the end of the day, if you're buying Apple for the hardware, you were doing something wrong to begin with. macOS is one of the few good reasons to get Apple products; otherwise it should be a given that you get an alternative device. Combine this with the fact that Apple has made it impossible to get stuff repaired, and with what you've just said, and I don't see why anyone would buy an Apple laptop for the hardware alone.


For a long time, Apple had the best laptop hardware on the market. Since the web revolution and fewer people needing Windows for work, it's no wonder that Mac sales have exploded. I believe that's coming to an end, though. MacBook Pros have gone too far in the direction of compromise for the sake of design and are on the road to nichedom again.


I just bought the X1 Extreme with a 256GB SSD and 16GB of RAM because I can upgrade it later.


How long have you had it / how do you like it so far? I've been considering getting the 4K version, but I worry about battery life. Seems there's a BIOS/driver bug that keeps the Nvidia GPU always running.


Just bought it, waiting for it to arrive.

There's a BIOS update to fix that issue, according to Reddit.


Thus, one of the many reasons why you'll pay less for non-Apple computers.


To be fair, the $200 1TB SSD linked above is just a standard 2.5" SSD, whereas the Apple SSDs are M.2 with triple the I/O speeds. These Apples can't be compared to the list of oranges above.



Things really have changed over the years. I used to shop in the US and bring everything back with me to Europe because computer parts were half the price. Now I'm thinking about doing the reverse[0] (240€ for the same NVMe SSD over here).

[0]: https://www.amazon.de/gp/product/B07CGJNLBB/ref=ppx_yo_dt_b_...


It's not the same product: 970 PRO vs 970 EVO.


Another shop:

https://www.alza.de/samsung-970-pro-1tb-d5319060.htm

276,47 EUR w/o VAT, 329 EUR with VAT. Currently, that's some 377 USD.


On Samsung's US website it's $350 (free s/h):

https://www.samsung.com/us/computing/memory-storage/solid-st...


Yeah, that link I posted isn’t sold by Amazon but some random person. It’s probably marked up a bit.


NVMe, not M.2. All MacBook drives are now soldered straight to the motherboard.


Thanks. I confused the M.2 form factor with the NVMe protocol.


No, it is not. I linked to the external Thunderbolt SSD (the last one), so there are many more pieces involved in the Samsung product. But it's true that the Apple one is a lot faster since it's internal.


Sure, it connects via Thunderbolt, but it's still just a SATA-style SSD (max speeds around 550MBps). I'd love to find an external M.2 enclosure to get 1200MBps externally, but this seems to be a unicorn still. Although, I'm still happy to have 550MBps on a SATA SSD vs 110MBps via platters.


Here you go: 3412MBps (read) and 1884MBps (write) for the Samsung X5: https://www.techradar.com/reviews/samsung-x5-portable-ssd

Granted, it's a heck of a lot more expensive at $700 for 1TB. It seems the only reason for this drive to exist is raw transfer speed, because in every other way it seems worse than the T5.


If you want USB, forget it. The best you'll get from a USB 3.1 Gen 2 to NVMe enclosure is about 1GB/sec.

If you're willing (to pay) and able (to plug it in) to use Thunderbolt 3, there are several commercial products that use an NVMe drive internally and boast up to 2500MB/sec.
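
(Back-of-envelope: USB 3.1 Gen 2 is 10 Gbit/s on the wire; divide by 8 and take off encoding and protocol overhead, and you land right around that 1GB/sec figure.)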


I have one of those drives - it's not Thunderbolt, just USB 3.1 Gen 2.

It's also not a 2.5" drive, since it's obviously smaller.


Oops, that's on me :)


What would consumer availability have to do with the $1,000 Apple tax?

1TB SSDs are already significantly cheaper than what they are charging. They don't seem to base their pricing on COGS.


This is the new Apple RAM surcharge -- remember back when Apple let you swap out DIMMs, they would charge 3-5x market rate, betting you wouldn't. Now they're doing it on SSDs, but you can't get away :) If you follow r/buildapcsales it's pretty easy to get a 1TB 2.5" SSD from a top manufacturer for under $120, and M.2 for $20-30 more. I just got a Crucial MX500 1TB for $114 all in.


I would counter that the SSDs aren't entirely the same. The most comparable consumer NVMe M.2 would be the 960 Pro, and it's still around $300. Definitely cheaper than Apple, but it's also not in the $150 range for the same performance.


You are definitely right; an NVMe SSD is going to spank my MX500 in bulk throughput. That is still 3-5x market -- 4x specifically :) In today's market there's no way you can justify a $1,200 upcharge from a 500GB to a 1000GB SSD. Of course, Apple doesn't have to, though.


The 970 Pro is faster than the Apple MacBook SSDs and about 1/3 the price: $349.99 direct from Samsung for 1TB, with 3,500MB/s seq. read and 2,700MB/s seq. write.


99% of the time SSDs are a boolean: "Do you have an SSD? Yes/No", and performance between different SSDs is basically unnoticeable. Change my mind.


> performance between different SSDs is basically unnoticeable

Spinning rust drives will likely give you up to ~110 MB/sec.

A SATA SSD will likely give you up to ~550 MB/sec, a 5x increase.

An NVMe SSD will double the write and quadruple the read speed of a SATA SSD, in 1/3 the size.

Yes, any SSD is likely good enough for your average user to feel like a computer is fast. If anything you do is even remotely heavy on I/O, more speed = better.


Modern HDDs easily go beyond 110MB/s, but nonetheless stay well below 200MB/s afaik. It'll probably be somewhere in between. Otherwise your figures are fairly correct, if not on the conservative side for NVMe drives (a Samsung 970 Pro will sustain sequential speeds above 3GB/s read and 2GB/s write).

But focusing on sequential read speed misses the main improvement that SSDs give you over HDDs: random reads and writes ("seek speed"). Reading or writing lots of small files from an HDD will ruin your transfer speeds unless they're written sequentially, whereas a SATA SSD will do quite well and an NVMe SSD will do really well. This is the cause of the noticeable speedup when you upgrade to an SSD.
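
Back-of-envelope sketch of why random IO dominates (the seek time and IOPS figures below are rough assumptions, not measurements):

    files = 10_000                 # random 4 KiB reads

    hdd_seek = 0.010               # assume ~10 ms avg seek + rotational delay
    hdd_time = files * hdd_seek    # transfer time is negligible at 4 KiB

    ssd_iops = 90_000              # plausible 4K random-read IOPS for a SATA SSD
    ssd_time = files / ssd_iops

    print(f"HDD: ~{hdd_time:.0f} s, SSD: ~{ssd_time * 1000:.0f} ms")
    # -> HDD: ~100 s, SSD: ~111 ms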


Downvotes, for the speeds of various storage media?

Well done, HN. Just when I thought you couldn't get more ridiculous, you one-upped yourself again.


Generally, yeah, but the exceptions are pretty significant.

Whether or not these use cases are common enough to disprove the "99% of the time" claim is debatable (these kinds of use cases may be far less than 1%) but regardless, check these out:

https://www.storagereview.com/samsung_970_pro_1tb_review

- 10x latency improvement in SQL Server stress tests between slowest and fastest SSD

- 3x rendering time improvement from slowest to fastest SSD

Now, for anything I do in my daily life as a software engineer? Yeah, SSDs were fast enough years ago. The only time I even scratched the surface of their performance was when I copied a VM from one drive to another.


Definitely relative to a hard drive.

Relative to each other, what really matters are your 4K random read/write IOPS and some measure of reliability. Back in the day there were in fact huge differences between certain SSDs (Samsung controllers vs. Indilinx Barefoot [1] back in the early '10s). Today it's a non-issue.

SSDs saturated the SATA III bus years ago with respect to peak transfers. We're talking an order of magnitude these days, with SATA capping out at 550-600MB/s burst (my MX500) vs NVMe's 3GB/sec (970 PRO). That doesn't paint the whole picture, though, because 4K random IO is still <<100MB/s in either case.

Large DRAM buffers vs. unbuffered can make a big difference too. Reliability these days from any of the big players is usually very good, since their NAND comes from only a handful of big manufacturers -- ditto for the controllers. I guess if you buy shady "Kingston" knockoff SSDs with recycled NAND, that's a different matter.

tl;dr: yep, you're right. Any modern SSD from a big player is a good bet; beyond that it's just gamesmanship.


Agreed. I'm still using my late-2013 MacBook Pro with a 1TB SSD and 16GB of RAM, because it feels to me that Apple's prices for the 1TB SSD have not come down at all. I want to upgrade and I can afford it, but I hate being ripped off, so I'm stubbornly waiting for a price drop. I like 1TB because I need to run both Windows 10 and macOS. Apple makes the most reliable Windows PC according to some stats I saw from Microsoft (a few years ago), and I believe them; it runs Bootcamp Windows 10 with 64-bit Office and very heavy Excel workbooks better than any other machine I've tried.


I'm surprised how quickly prices are falling.

I purchased a 500GB EVO 860 only 7 months ago for $139.99.

Just checked and it's currently $82.99.

For only $8 more than I paid for that 500GB SSD, I could now get a 1 TB SSD.

(Naturally, I've barely used that drive, which was slated for secondary storage, and now I wish I had waited.)


In June 2015, I found the price of a 1 TB SSD was 400 USD, and by December the price had fallen to 300 USD. In June 2017, I checked the price again and found it was 250 USD. I put the numbers into my calculator to perform a curve fit with an exponential function, and it suggested I'd be able to purchase a 1 TB SSD for ~140 USD in 18 months; I left the number as a note to my future self. Now it turns out the regression analysis was almost correct (only for unnamed cheaper brands for now; I expect more reliable products are coming soon). "Moore's Law" really is working.
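
For the curious, the same fit is a couple of lines of numpy (a sketch; a calculator's curve fit may weight the points differently, so the exact extrapolation varies):

    import numpy as np

    # (months since June 2015, observed USD price for a 1 TB SSD)
    t = np.array([0, 6, 24])
    p = np.array([400, 300, 250])

    # fit p = a * exp(b * t) via linear regression on log(p)
    b, log_a = np.polyfit(t, np.log(p), 1)

    # extrapolate 18 months past the last data point (June 2017 + 18 months)
    print(np.exp(log_a + b * (24 + 18)))   # ~180 USD from this naive least-squares fit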


I'm seeing $135 for a Crucial MX500 1TB or WD Blue 1TB (and lower during the recent holiday sales), so we've already hit your target with name-brand drives from major retailers like Amazon and Newegg.


It's pretty much there -- the Samsung EVO 860 1 TB is $147 at Amazon.


I wouldn’t worry about it. If you barely use it, what are the odds you’ll ever fill it up? And if you do fill it up, SSDs will be even cheaper then.


I await the day I can fill my NAS with SSDs. Might save on power usage as well.


Not sure if it's a good idea, though. SSDs tend to fail without warning. I guess that's why we have RAID, but still, I wouldn't want my backup storage to be so volatile.


You think HDDs can't fail without warning? OK, I get that SSDs don't implement S.M.A.R.T., but...


SSDs do implement SMART and usually do give an early warning.

I've destroyed a few Samsung 840 Evo drives in a few servers and didn't lose a single byte. The failure condition was a very degraded read speed. It took a few hours to copy the 120GB to a fresh SSD.


> I get that SSDs don't implement S.M.A.R.T.

Although the S.M.A.R.T. data is less predictive on SSDs, they do have it, at least for SATA disks...


NVMe drives have it as well, at least the Samsung ones.

You can read it on Linux by running:

    nvme smart-log /dev/nvmeX


smartmontools can grab most of that info too, even on Windows.
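
If you want to script it, a minimal sketch using smartmontools (assuming it's installed; SMART attribute names vary by vendor, the ones below are common on Samsung SATA drives):

    import subprocess

    # Dump SMART attributes; smartctl works on Linux, macOS and Windows.
    out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                         capture_output=True, text=True).stdout

    # Wear/endurance indicators to watch (names are vendor-specific).
    for line in out.splitlines():
        if "Wear_Leveling_Count" in line or "Total_LBAs_Written" in line:
            print(line)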


In which case, there's even less reason to hate on SSDs for home storage in RAID enclosures.

I have a spun-down RAID farm as long-term storage. It's worrying me that the unit the disks are in is a SPARC-based motherboard now over 10 years old. Those caps don't last forever.


And the magnetized bits on an HDD have an expected lifetime of 25 years, unless read and rewritten every so many years.


I had planned on doing this, but even with gigabit WiFi and half-gigabit transfer speeds for the SSD, NAS encryption algorithms seem to cap the transmission speed at around 100 megabytes per second.

I have all the parts; I just haven't formatted my SSDs to go into the NAS yet, and I'm not sure if I will.


100 megabytes per second sounds more like the limit of a gigabit connection than an encryption algorithm. Most NAS boxes of the last 5 years should be doing encryption on the CPU via things like AES-NI instructions, which should be hitting ~500 MB/s per core.
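
Easy to sanity-check on any box; a rough single-threaded sketch using the Python `cryptography` package (the exact figure depends on your CPU, and a real NAS path adds disk and network overhead):

    import time
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    aes = AESGCM(AESGCM.generate_key(bit_length=256))
    buf = bytes(64 * 1024 * 1024)   # 64 MiB test buffer of zeros
    nonce = b"\x00" * 12            # fixed nonce is fine for a benchmark only

    t0 = time.perf_counter()
    aes.encrypt(nonce, buf, None)
    dt = time.perf_counter() - t0
    print(f"~{len(buf) / dt / 1e6:.0f} MB/s AES-256-GCM, one core")

With AES-NI this should print well above the 100 MB/s a gigabit link can carry.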


Figured someone would say that.

I can get 400 megaBYTES per second on my iPhone from external sources.

Inside my network I can get faster.

Don't want the NAS to be a bottleneck for my wireless convenience.

edit: still megabits


iPhones "only" support WLAN with a theoretical max of 867 Mbit per second (802.11ac with 2x2 MIMO), so no, you don't get 400 megabytes of data transfer to it.


You're right, let's keep dividing by 8.

The point is that I am trying to avoid bottlenecks, and I'm not sure my NAS with SSDs does that.


Even if the throughput is no better, SSDs will have much better latency and IOPS.


Not to mention sheer storage density that has already far outpaced spinning disks (just not at an affordable price, yet): https://www.theverge.com/circuitbreaker/2018/2/20/17031256/w...

This thing is 30TB in a 2.5" format, while the largest-capacity HDD I can remember hearing about is 15TB, and that's in a 3.5" format, which is about double the physical volume and weight.

I can't wait for the day when something like this becomes economical, so I can stick two of them in a compact portable NAS for 30TB of redundant storage that I can bring with me anywhere.

Not to mention, SSDs on average seem to have more predictable failure modes that scale based mostly on usage, and tend to be overall more reliable than HDDs that can often suddenly die on you with no warning.


SSDs consume more power, not less, than comparable spinning HDs, especially in always-on scenarios. Once started, the amount of power required to keep a platter spinning is minuscule.



Those HDDs have more than 10 times the capacity. This means you need more than 10 times as many SSDs. Even with the most efficient 0.5W SSD, that means all 10 of them would consume 5W, which HDDs still manage to beat. In reality that 0.5W SSD is an outlier and most consume 1W, so in truth the power consumption is closer to 10W for the same amount of storage. That's above the least efficient HDD. Perhaps bigger SSDs enjoy similar "economies of scale", but what you linked tells a completely different story.


No. Actually, the image I posted before is outdated. Modern 1TB/2TB SSDs consume a lot less: milliwatts.

https://images.anandtech.com/graphs/graph9451/75923.png

Full article:

https://www.anandtech.com/show/9451/the-2tb-samsung-850-pro-...


I wouldn't use external SSDs as the price benchmark, since they had a big price premium over an equivalent mSATA SSD back then.

For example, I bought a 1TB Samsung mSATA SSD in 2015 for about $320 and its enclosure for $15. The equivalent SSD today is about $150. So instead of a 3x price decrease over the past 3 years, it's more like 2x.


I agree, there is more overhead. But since I'm interested mainly in external SSDs, I'm tracking those :)

Please feel free to create a similar list with 2.5", mSATA or M.2 drives; it's probably even more shocking.

The other awesome thing about those 3 I linked is that the interface has also changed: Micro USB 3.0 => USB-C Gen 1 => Thunderbolt.


mSATA SSDs are so rare these days.


True, but Samsung is still making them. A 1TB mSATA 860 EVO is currently only $3 more than the M.2 SATA version of the same, so it's still a reasonable comparison against drives from several years ago. (However, both mSATA and M.2 SATA versions of the 860 EVO are more expensive than the 2.5" version that faces far more competition.)


Not sure if I'm asking this correctly, but all SSDs now have their own internal filesystem for managing access to blocks, right? Are we getting to a point where traditional filesystems like NTFS or ext4 will simply go away? Or will they still stick around and just act as lightweight layers on top of the SSD's filesystem?


The flash translation layers in SSDs implement things equivalent to the journalling functionality in modern filesystems, but they don't really provide any hierarchical organization of data. So you'll still need a normal host-based filesystem for the foreseeable future, but filesystem design can be influenced by the assumption that it's running atop flash storage.

There are several efforts underway to provide more specialized software interfaces to SSDs. Several vendors have produced models that expose a key-value store instead of fixed-size block storage; with those drives, you can throw away RocksDB and speak directly to the drive with the same semantics (subject to limitations on supported key and value sizes).
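
To make the difference in semantics concrete, a purely hypothetical sketch of the two interfaces (not any vendor's actual API):

    # Hypothetical interfaces, only to contrast the two models.

    class BlockSSD:
        """Classic block device: fixed-size sectors addressed by LBA.
        A filesystem or KV store (e.g. RocksDB) lives on top of this."""
        SECTOR = 4096

        def read_block(self, lba: int) -> bytes: ...
        def write_block(self, lba: int, data: bytes) -> None: ...

    class KeyValueSSD:
        """KV-SSD: the drive's own FTL maps keys to variable-size values,
        so the host can skip the embedded KV store entirely (subject to
        drive-imposed limits on key and value sizes)."""

        def put(self, key: bytes, value: bytes) -> None: ...
        def get(self, key: bytes) -> bytes: ...
        def delete(self, key: bytes) -> None: ...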

There are a few competing standards for open-channel SSDs that move some of the FTL onto the host CPU so that the journalling overhead doesn't have to exist at multiple layers; the different solutions here vary in terms of how much complexity they move to the CPU vs. how much abstraction the SSD still provides for the sake of software portability. Most of the potential benefits this approach provides are being subsumed by extensions to the NVMe protocol that allow the host and SSD to exchange optional hints about data layouts, GC status, etc.

SK Hynix recently announced they're working on a SSD with transactional storage support, so that the host can send multiple write commands and either commit or abort the transaction as a whole.


At the SSD level, the drives are actually doing block management, not file management. You still need a filesystem to store metadata, manage the layout of the data, etc.

An example of something that a SSD's controller does that the operating system/filesystem doesn't have to worry about is managing bad blocks. If the SSD detects a bad block, it will replace it with a working block and update the data used by its flash translation layer to move the blocks around. This is completely opaque to the operating system; as far as it knows its underlying storage works exactly the same (until there are so many bad blocks that the drive can't keep up this convenient deception).

An example of something a filesystem does that the SSD doesn't provide is storing operating system-specific file metadata, such as permissions, creation times, multiple data streams, directory layouts, etc. SSDs deal only in blocks of data, not arbitrarily-sized units, nor metadata.

The reason this behavior isn't more tightly integrated is that some of the details of managing the underlying flash blocks tend to be specific to the type of flash, or even to different models of flash. For example, the article mentions QLC flash becoming mainstream - we're only now getting to this point because previously, QLC was so difficult to manage that your filesystem had to be aware it was writing to QLC flash to use it effectively. There are a few filesystems designed for direct flash management, like yaffs[0], but this isn't quite as efficient as an SSD's dedicated processor and software stack.

[0]: https://yaffs.net/
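
A toy sketch of the bad-block remapping described above (illustrative only; a real FTL also does wear leveling, garbage collection, etc.):

    # Toy flash translation layer: logical block -> physical block,
    # with transparent bad-block replacement from a spare pool.

    class ToyFTL:
        def __init__(self, n_blocks: int, n_spares: int):
            self.map = {lba: lba for lba in range(n_blocks)}  # logical -> physical
            self.spares = list(range(n_blocks, n_blocks + n_spares))

        def physical(self, lba: int) -> int:
            return self.map[lba]

        def retire(self, lba: int) -> None:
            """Called when the physical block behind `lba` goes bad."""
            if not self.spares:
                raise RuntimeError("spare pool exhausted; drive goes read-only")
            self.map[lba] = self.spares.pop()  # the OS never sees this happen

    ftl = ToyFTL(n_blocks=8, n_spares=2)
    ftl.retire(3)
    print(ftl.physical(3))  # -> 9, a spare physical block; same logical address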


> This is completely opaque to the operating system; as far as it knows its underlying storage works exactly the same (until there are so many bad blocks that the drive can't keep up this convenient deception).

Is there a way for the drive to tell the FS that a block is bad? Or does the drive simply keep a bunch of blocks set aside just in case?


Hard drives and SSDs both keep a pool of spare blocks so that they can remain functional after having to retire some defective blocks from use. This spare pool tends to be much larger for SSDs. Ordinary IO doesn't convey anything about whether a block had to be retired in the process, but there are SMART indicators that track this stuff.


Those flash translation layers on the SSDs are generally used to maintain QoS and do wear leveling, block management, etc.

I don't think we'll see file systems go away, but what we may see is more knowledge pushed into the file system, instead of keeping it down in the controller.

We have started to see that with the advent of LightNVM, which exposes a more raw API into the drive, with the FTL maintained in the kernel. The current "generic" implementation of the LightNVM FTL is called pblk:

https://elixir.bootlin.com/linux/latest/source/drivers/light...


Probably not putting drives in charge of the entire filesystem, but object storage would be a good way to let the drive manage itself and take care of the details of where to store everything. The SCSI commands for object storage have existed for a decade but so far haven't seen much uptake.


Those filesystems seem to handle different design requirements, though. I also don't trust most vendors to properly encrypt the drives in firmware [1], so I end up putting dm-crypt over most drives. It also doesn't account for RAID setups, which are usually done at the block level. (Although I like snapraid for some use cases.)

[1] https://www.welivesecurity.com/2018/11/15/security-researche...


I think it's unlikely that manufacturers will want to turn their consumer SSDs into dumb flash storage and move their FTL into the open (like an API layer inside the Linux kernel). That's where their added value lies.


I think we're talking about the opposite where the SSD could take over some functionality from the OS.


Which OS? There are many, which is why they won't.


Two interesting updates based on what's been announced so far in 2019:

PCIe 4.0 will be arriving in the consumer market this year, with a new generation of AMD Ryzen CPUs providing host support, and at least one or two consumer-class NVMe SSD controllers supporting PCIe 4.0 should be ready to start shipping in retail products by the end of the year. (The enterprise/datacenter storage market's transition is already well underway.)

Seagate became the first vendor I'm aware of to start marketing an SSD to the prosumer/SMB market for NAS usage. It's a rebadge of one of their recent enterprise SATA drives and isn't even using QLC NAND, so it's probably going to be pretty pricey, but the idea of a solid-state NAS is no longer completely laughable.


Not _quite_ consumer, but we run a QNAP TES-3085U at work. Fully populated with 24 x 4TB EVO drives, it ends up being way faster and cheaper than our spindle-based EMC (although less reliable, since it won't survive a CPU failure).


How does it work out cheaper? Is this DIY vs a product?


EMC is basically the Oracle of network storage, so it's presumably more like "hyper-converged infrastructure" vs. "NAS box".


A solid-state NAS is not laughable, just expensive.

RAID 10 a few NVMe drives and you can get decent throughput (and storage size) with existing technology.


> RAID 10 a few NVMe drives and you can get decent throughput (and storage size) with existing technology.

Is there a good reason to do this in a consumer setup? Max realistic throughput over gigabit ethernet is only ~120MB/s, which can easily be saturated by sequential reads or writes to/from a single modern spinning-rust drive.
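
That ~120MB/s figure is just the gigabit line rate minus framing overhead; roughly:

    link_bps = 1_000_000_000   # gigabit ethernet
    efficiency = 0.94          # rough allowance for ethernet/IP/TCP framing
    print(link_bps / 8 * efficiency / 1e6)   # ~117 MB/s usable, best case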


My NAS has 3x 1Gbit Ethernet ports.


10Gb ports are quite cheap as well, especially if you're only going point to point (the switches are still a little pricey, but the ports for a NAS-to-VM-host link are easily under $100 now).


Even switches like the Mikrotik CRS305-1G-4S+IN are getting into the 100 EUR range (if you are fine with running optic cables instead of metallic ones).


Thanks!

I am fine with fiber; it's cheap, and for a few ports at home I don't worry about the power (especially compared to the servers it cross-connects).


Optic fiber latency is questionable


I wouldn't RAID 10 anything, but it's totally feasible to build a ZFS RAIDZ2 pool with a hot spare out of a whole bunch of 2TB SATA3 SSDs. People do similar for "budget" video editing setups.


NVMe SSDs only significantly increase bandwidth, not random throughput. Unless you have a 40Gbit network, you're not going to see the difference between one NVMe SSD or two if you're accessing them over the network.


I do everything bare-metal, and my servers have a few extra (and quite decent) NICs for direct connection to each other, so that not even switches or routers bring their overhead.


How much are you gaining by having multiple interfaces, with each one going to a separate machine, instead of bonding them and using a switch?

Basically you dedicate 1/N to each server, instead of allocating it dynamically on demand.


I gain over 10% extra performance for what I use my servers for (with 100% meaning one full new server)

Latency is critical for some tasks, with bandwidth being a distant second.


I don't know that I'd consider RAID 10 for prosumer use. RAID 1, absolutely. RAID 10 seems like needlessly adding more complexity and higher failure rates.


Great point. Complexity is a killer in home setups.


Having to sort files among multiple volumes also adds complexity and can lead to mistakes.


Whoa, check out that Samsung NF1 form factor [0]. Looks like you could pack a ton of small, finger-sized SSDs vertically into a small space.

[0] https://images.anandtech.com/doci/13752/IMGP2727_575px.jpg

This is one way in which it's a cool time to be alive. I'll never forget, circa 1996, waiting 20+ minutes for my Apple Performa 550 to read and load a 30MB file from the SCSI disk into memory, all so QuickTime could play a grainy, low-resolution video clip for all of 30 seconds. At the time I was like, "That's almost 1 minute per second of video! WTF? That sucks."


That picture is actually the EDSFF 1U Short form factor, which is the closest competitor to Samsung's NF1 out of the variety of form factors defined by EDSFF. Both allow for about 36 drives in the front of a 1U system.


How many M.2 2280 format NVMe SSDs can you fit in one of those modules?


That must get really hot; I wonder how well it cools?


There's no backplane to get in the way of airflow, so it is quite a bit easier to cool than a typical hot-swap bay full of 2.5" drives: https://images.anandtech.com/doci/13218/IMGP2722.jpg

The new connectors between the drives and the mid-plane board are probably the most important innovation these new form factors bring to the table, though something similar could also be done for existing SAS/U.2 connectors.


Don't they just use 1W each, making 16W in total? I might be dead wrong, but 16W should be easily carried off by the metallic case and a smallish fan?


The other amazing thing is size: how much smaller and cooler M.2 drives are compared to old 3.5" HDs. If you have integrated graphics, PCs can now be tiny. The big tower cases are really for gamers only.


Even gamers don't need towers: games don't benefit much from anything > 4 cores, and the GPU is the only real limiting factor, so all you really need is a mini-ITX motherboard, 8-16GB of RAM, and a beefy GPU attached. There are plenty of console-sized cases out there you can build in that can fit a 2080 or Vega card.

Big cases are really for tons of mechanical drive capacity, > 8-core CPUs, add-in cards like network switches, or a ton of GPUs in a compute cluster.


Why is there competition in the SSD market but not in the RAM market?


I'm not sure. The pattern in the DRAM market seems to have been that only the biggest, most successful manufacturers can weather price crashes while still investing enough in R&D to stay on the leading edge. In the NAND flash market, we may simply not have yet seen any oversupply serious enough to do that, for all that we're in the middle of an oversupply-driven price crash.

It's probably easier to stay in business with inferior NAND; Samsung beat everyone else to 3D NAND by a few years, but Toshiba still made a killing off cheaper, lower-quality planar NAND, and Intel/Micron didn't seem to suffer meaningfully from their first-generation 3D NAND being so slow. Now everyone other than SK Hynix has caught up to Samsung, and the number of players in the market is actually increasing.


The recent scandals about DDR4 price fixing lead me to believe it's probably collusion.


Could it also be partly due to the maturity of the technology and the market? You don't see new RAM vendors, and there are fewer avenues of improvement left to explore. That leaves a race to the bottom, which is pretty unexciting for everyone.


I wish Intel Optane NVMe sticks had more capacity. The 120GB drive made my i7-6500U laptop TWICE as fast as a desktop AMD 1700X with a regular drive. For example, Python tests on a fairly big app ran for 3 minutes on the laptop, and the same tests took about 7 minutes on the supposedly more powerful desktop machine.

Edit: by regular I mean a 960 Evo


Exciting times indeed; just type "ssd" into your favorite search box. I'm about to upgrade to a 512GB SSD that costs half of what an SSD half its size used to cost 5 years ago! Starting this year, I'm considering getting a low-end 120GB drive for $25 every January 1st to use as a write-once backup for all my work.


As I understand it, an SSD would be an especially poor choice for this: an SSD left unpowered for an extended period will begin to experience data-durability issues. The firmware of a powered SSD mitigates this by engaging in periodic refresh activity.

This is a classic use case for tape. LTO drives support reading up to 2 generations back, i.e. an LTO-7 drive can read an LTO-6 tape. By not using the latest standard you should be able to find cheaper drives, but the risk becomes always being able to find a (working) tape drive capable of reading your specific tape.

All of a sudden spinning rust drives start to look pretty good...


What about an industrial SLC SD card? Like this one:

https://swissbit.com/products/nand-flash-products/cards/sd-m...

Is it safe for backup, or does it have the same problem?


My understanding is that SLC is seen as safer, and lower densities are also seen as safer. Beyond that I don't know for sure.

I imagine that even an "industrial" grade flash card is still engineered with the assumption that it will be turned on for a while every now and then.


Flash cells leak and degrade. Some firmwares also have bugs - not refreshing cells frequently enough even when the drive is powered. The leakage is even worse on high-write-endurance enterprise drives. Make sure you read the specs, and don't expect to be able to read an SSD that was offline for 1-2 years without errors.


> write-once backup for all my work.

I'd expect USB drives to be more durable. Would be nice to know.


edit: i cannot arithmetic today, or sanity check my arithmetic. sorry.

sigh.


I'm not sure how you got that $25 = 41 years of 120GB of storage on Backblaze.

Their current pricing appears to be $0.005/GB-month... which would be $295.20 for 41 years of 120GB of storage.
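
The arithmetic, for reference:

    rate = 0.005                          # USD per GB-month (B2 pricing at the time)
    print(f"{rate * 120 * 12 * 41:.2f}")  # -> 295.20 USD for 120GB over 41 years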

> i wouldn't trust a cheap SSD to still work in 41 years

I wouldn't trust almost any company to still exist in 41 years, including Backblaze. Preserving data over that length of time will probably require active management of the data in some form or another. DVD or Blu-ray discs might last that long... but it's hard to trust any storage medium to last for 41 years. Modern storage technology just hasn't existed that long.


> I'm not sure how you got that ...

sigh. bad at arithmetic today. sorry.

> I wouldn't trust almost any company to still exist in 41 years, including Backblaze

as i mention in the reply to the sibling comment, the backblaze corporate entity doesn't need to last.

> DVD or Blu-ray discs might last that long... but it's hard to trust any storage medium to last for 41 years

on an anecdotal note, i put a bunch of data on DVDs circa 2004-2008, and tried to retrieve it all in spring 2018. all of the inexpensive DVDs were garbage. only the most expensive survived, and even those had a few bit errors.


Don't write optical media at the maximum possible speed, reasonably protect it from light, and it will last much longer. I always did that and haven't lost a disc yet. I'd need to recheck a few really old ones, though...


And don't stack them one on top of another. I recall reading that added weight on the discs (common when they're stacked horizontally) can result in data loss because of the dye changing structurally. Store them standing sideways, and as much as possible, closer to a 90° vertical angle.


> that same $25 buys you 41 years of storage on backblaze

What? I just checked Backblaze; it was $5 a month, or $50 a year.


do you trust a startup to still exist in 41 years, and to honor your plan that long?


(my arithmetic was wrong, so 41 years was garbage, but addressing the more general point: )

the corporate entity that is currently doing business as backblaze doesn't need to last 41 years, or even 41 more days. the stored data is all a valuable stream of revenue from customers who'd like it preserved, so unless their whole model is unsustainable, i'd expect the company to be sold to/absorbed by another business.

as for honoring the plan, i don't think cloud storage rates are going to go _up_. so maybe they wouldn't want me as a customer at some point, but so long as they're taking consumer business i can't imagine the pennies per GB*month going up unless there is some sort of massive societal upheaval.


Just bought an MX500 2TB on Black Friday for about 200 euros. Maybe it will get cheaper this year, but 0.1 EUR/GB is already a good deal for me.


Surprised these are still a thing. I bit the bullet and moved to M.2 storage across PCIe lanes. It makes my current SSDs feel like spinning disk.

https://en.wikipedia.org/wiki/M.2


TFA includes discussion of all SSDs, the M.2 form factor included.



