- 2015 $600: Samsung T1 http://www.thessdreview.com/our-reviews/samsung-portable-ssd...
- 2016 $430: Samsung T3 https://www.storagereview.com/samsung_portable_ssd_t3_review
- 2018 $200: Samsung T5 https://www.androidheadlines.com/2018/12/samsung-t5-portable...
Another incredibly exciting point is the amount of competition emerging; it's not just 2-3 large vendors anymore.
If things continue at this pace, it seems sensible to expect a 10TB consumer SSD for under $1000 within a couple of years.
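Back-of-the-envelope, assuming the T-series prices above are for comparable capacities (my assumption, not stated above), the trend roughly supports that:

    # rough extrapolation from the prices listed above; assumes the
    # models are comparable capacities, which may not hold exactly
    start, end, years = 600, 200, 3         # 2015 -> 2018
    yearly = (end / start) ** (1 / years)   # ~0.69x price per year

    # hypothetical 10TB drive at today's ~$200/TB, two more years out
    print(10 * 200 * yearly ** 2)           # ~961 -- just under $1000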
Note: not affiliated at all, just a happy consumer.
This is specifically one of the reasons I ended up switching to a Lenovo Thinkpad X1 Yoga, even though I'd planned to buy the new MacBook Air. The Thinkpad's SSD is officially user-replaceable, and I bought a 2TB WD Blue SSD for less than a third of the cost of Apple's upgrade. Even an NVMe SSD would have been half the price of Apple's.
One caveat is that although Lenovo has excellent official repair videos (https://www.youtube.com/watch?v=PqrspYc21PY), I must be doing something wrong: the Phillips & JIS heads in my iFixit Mako toolkit don't seem to fit/grip the Thinkpad's SSD screw. I'm worried I'll strip the screw if I keep trying, so I guess I'll take it to a Lenovo repair shop for some help.
There's a BIOS update to fix that issue, according to Reddit.
276.47 EUR w/o VAT, 329 EUR with VAT. Currently, that's some 377 USD.
Granted, it's a heck of a lot more expensive at $700 for 1TB. It seems the only reason for this drive to exist is raw transfer speed, because in every other way it seems worse than the T5.
If you're willing (to pay) and able (to plug it in) to use Thunderbolt 3, there are several commercial products that use an NVMe drive internally and boast up to 2500MB/sec.
It's also not a 2.5" drive since it's obviously smaller.
1TB SSDs are already significantly cheaper than what they are charging. They don't seem to base their pricing on COGS.
Spinning rust drives will likely give you up to ~110 MB/sec.
A SATA SSD will likely give you up to ~550 MB/sec, a 5x increase.
An NVMe SSD will double the write and quadruple the read speed of a SATA SSD, in 1/3 the size.
Yes, any SSD is likely good enough for your average user to feel like a computer is fast. If anything you do is even remotely heavy on I/O, more speed = better.
But focusing on sequential read speed misses the main improvement that SSDs give you over HDDs: random reads and writes ("seek speed"). Reading or writing lots of small files from an HDD will ruin your transfer speeds unless they're written sequentially, whereas a SATA SSD will do quite well and an NVMe SSD will do really well. This is the cause of the noticeable speedup when you upgrade to an SSD.
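If you want to see the difference on your own hardware, here's a minimal sketch (the scratch file name is made up; note the OS page cache will flatter these numbers unless the file is larger than RAM or you drop caches first):

    import os, random, time

    PATH = "scratch.bin"        # hypothetical scratch file on the drive under test
    SIZE = 256 * 1024 * 1024    # 256 MiB; use more than your RAM for honest numbers
    BLOCK = 4096                # 4 KiB, the usual "random IO" unit

    with open(PATH, "wb") as f:
        f.write(os.urandom(SIZE))

    def bench(offsets):
        fd = os.open(PATH, os.O_RDONLY)
        start = time.perf_counter()
        for off in offsets:
            os.pread(fd, BLOCK, off)
        elapsed = time.perf_counter() - start
        os.close(fd)
        return len(offsets) * BLOCK / elapsed / 1e6   # MB/s

    seq = [i * BLOCK for i in range(SIZE // BLOCK)]
    rnd = seq[:]
    random.shuffle(rnd)
    print("sequential: %.0f MB/s" % bench(seq))
    print("random 4K:  %.0f MB/s" % bench(rnd))

Expect the random figure to collapse to a few MB/s on an HDD while an SSD barely flinches, which is exactly the "seek speed" point above.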
Well done HN, just when I thought you couldn't get more ridiculous, you one-upped yourself again.
Whether these use cases are common enough to disprove the "99% of the time" claim is debatable (they may well be far less than 1% of usage), but regardless, check these out:
- 10x latency improvement in SQL Server stress tests between slowest and fastest SSD
- 3x rendering time improvement from slowest to fastest SSD
Now, for anything I do in my daily life as a software engineer? Yeah, SSDs were fast enough years ago. The only time I even scratched the surface of their performance was when I copied a VM from one drive to another.
Relative to each other, what really matters is 4K random read/write IOPS and some measure of reliability. Back in the day there were in fact huge differences between certain SSDs (Samsung controllers vs. Indilinx Barefoot back in the early '10s). Today it's a non-issue.
SSDs saturated the SATA III bus years ago with respect to peak transfers. We're talking a ~5x difference these days, with SATA capping out at 550-600MB/s burst (my MX500) vs NVMe's 3GB/sec (970 PRO). But peak numbers don't paint the real picture, because 4K random IO is still <<100MB/s in either case.
Large DRAM buffers vs. unbuffered can make a big difference too. Reliability these days from any of the big players is usually very good, since their NAND only comes from a handful of big players -- ditto for the controllers. I guess if you buy shady "Kingston" knockoff SSDs with recycled NAND that's a different matter.
tl;dr: yep, you're right. Any modern SSD from a big player is a good bet; beyond that it's just gamesmanship.
I purchased a 500GB EVO 860 only 7 months ago for $139.99.
Just checked and it's currently $82.99.
For only $8 more than I paid for that 500GB SSD, I could now get a 1 TB SSD.
(Naturally, I've barely used that drive, which was slated for secondary storage, and now I wish I had waited.)
I've destroyed a few Samsung 840 Evo drives in a few servers and didn't lose a single byte. The failure condition was a severely degraded read speed. It took a few hours to copy the 120GB to a fresh SSD.
Although S.M.A.R.T. data is less predictive on SSDs, they do have it, at least for SATA disks...
For NVMe drives, you can read the equivalent log on Linux by running:
"nvme smart-log /dev/nvmeX"
(for SATA drives, "smartctl -a /dev/sdX" shows the classic S.M.A.R.T. attributes).
I have a spun-down RAID farm as long-term storage. It worries me that the unit the disks are in is a SPARC-based motherboard now over 10 years old. Those caps don't last forever.
I have all the parts; I just haven't formatted my SSDs to go into the NAS yet, and I'm not sure I will.
I can get 400 megaBYTES per second on my iPhone from external sources.
Inside my network I can get faster.
Don't want the NAS to be a bottleneck for my wireless convenience.
edit: still megabits
The point is that I'm trying to avoid bottlenecks, and I'm not sure my NAS with SSDs does that.
This thing is 30TB in 2.5" format, while the largest capacity HDD I can remember hearing about is 15TB, and that's in a 3.5" format, which is about double in physical volume and weight.
I can't wait for the day when something like this becomes economical, so I can stick two of them in a compact portable NAS for 30TB of redundant storage that I can bring with me anywhere.
Not to mention, SSDs on average seem to have more predictable failure modes that scale mostly with usage, and they tend to be more reliable overall than HDDs, which can suddenly die on you with no warning.
For example, I bought a 1TB Samsung mSATA SSD in 2015 for about $320 and its enclosure for $15. The equivalent SSD today is about $150. So instead of a 3x price decrease in the past 3 years, it's more like a 2x price decrease.
Please feel free to create a similar list with 2.5", mSATA or m.2 drives, it probably is even more shocking.
The other awesome thing about those 3 I linked is that the interface has also changed: Micro USB 3.0 => USB-C Gen 1 => Thunderbolt.
There are several efforts underway to provide more specialized software interfaces to SSDs. Several vendors have produced models that expose a key-value store instead of fixed-size block storage; with those drives, you can throw away RocksDB and speak directly to the drive with the same semantics (subject to limitations on supported key and value sizes).
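To make those semantics concrete, here's a toy stand-in (illustrative only; the class and its size limits are my invention, not any vendor's SDK):

    # toy model of the key-value SSD interface described above; a real
    # drive stores the pairs in NAND itself, with vendor-specific limits
    class ToyKVSSD:
        MAX_KEY = 255           # assumed key size limit, for illustration
        MAX_VALUE = 2 << 20     # assumed 2 MiB value limit, for illustration

        def __init__(self):
            self._store = {}

        def put(self, key: bytes, value: bytes) -> None:
            if len(key) > self.MAX_KEY or len(value) > self.MAX_VALUE:
                raise ValueError("exceeds device key/value size limits")
            self._store[key] = value

        def get(self, key: bytes) -> bytes:
            return self._store[key]

        def delete(self, key: bytes) -> None:
            del self._store[key]

    dev = ToyKVSSD()
    dev.put(b"user:1234", b'{"name": "alice"}')
    print(dev.get(b"user:1234"))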
There are a few competing standards for open-channel SSDs that move some of the FTL onto the host CPU so that the journalling overhead doesn't have to exist at multiple layers; the different solutions here vary in terms of how much complexity they move to the CPU vs. how much abstraction the SSD still provides for the sake of software portability. Most of the potential benefits this approach provides are being subsumed by extensions to the NVMe protocol that allow the host and SSD to exchange optional hints about data layouts, GC status, etc.
SK Hynix recently announced they're working on a SSD with transactional storage support, so that the host can send multiple write commands and either commit or abort the transaction as a whole.
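In API terms, presumably something like the following (my guess at the semantics from the announcement, not SK Hynix's actual interface):

    # toy sketch of transactional writes: stage several block writes,
    # then land them all or none; the method names here are invented
    class ToyTxnDrive:
        def __init__(self):
            self.blocks = {}        # committed, durable state
            self.pending = {}       # staged writes

        def write(self, lba: int, data: bytes) -> None:
            self.pending[lba] = data            # staged, not yet durable

        def commit(self) -> None:
            self.blocks.update(self.pending)    # all staged writes land atomically
            self.pending = {}

        def abort(self) -> None:
            self.pending = {}                   # none of them land

    drv = ToyTxnDrive()
    drv.write(0, b"header")
    drv.write(7, b"payload")
    drv.commit()    # both blocks become durable, or (after abort) neither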
An example of something that a SSD's controller does that the operating system/filesystem doesn't have to worry about is managing bad blocks. If the SSD detects a bad block, it will replace it with a working block and update the data used by its flash translation layer to move the blocks around. This is completely opaque to the operating system; as far as it knows its underlying storage works exactly the same (until there are so many bad blocks that the drive can't keep up this convenient deception).
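A toy model of that remapping (illustrative only; a real FTL also copies the data out of the failing block, wear-levels, garbage-collects, etc.):

    # toy FTL: logical block addresses map to physical blocks, with a
    # few spares held back; a bad block is silently remapped to a spare
    class ToyFTL:
        def __init__(self, physical: int, spare: int):
            self.map = {lba: lba for lba in range(physical - spare)}
            self.spares = list(range(physical - spare, physical))

        def mark_bad(self, phys_bad: int) -> None:
            for lba, phys in self.map.items():
                if phys == phys_bad:
                    self.map[lba] = self.spares.pop()   # the OS never sees this
                    return

    ftl = ToyFTL(physical=100, spare=4)
    ftl.mark_bad(ftl.map[17])   # physical block behind LBA 17 goes bad
    print(ftl.map[17])          # 99: quietly redirected to a spare block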
An example of something a filesystem does that the SSD doesn't provide is storing operating system-specific file metadata, such as permissions, creation times, multiple data streams, directory layouts, etc. SSDs deal only in blocks of data, not arbitrarily-sized units, nor metadata.
The reason this behavior isn't more tightly integrated is that some of the details of managing the underlying flash blocks tend to be specific to the type of flash, or even to different models of flash. For example, the article mentions QLC flash becoming mainstream - we're finally getting to this point because previously, QLC was so difficult to manage that your filesystem had to be aware it was writing to QLC flash to use it effectively. There are a few filesystems designed for direct flash management like yaffs, but this isn't quite as efficient as a SSD's dedicated processor and software stack.
Is there a way for the drive to tell the fs that a block is bad? Or does the drive simply keep a bunch of spare blocks in reserve, just in case?
I don't think we'll see file systems go away, but what we may see is more knowledge pushed into the file system, instead of keeping it down in the controller.
We have started to see that with the advent of LightNVM, which exposes a more raw API into the drive, with the FTL maintained in the kernel. The current "generic" implementation of the LightNVM FTL is called pblk.
PCIe 4.0 will be arriving in the consumer market this year, with a new generation of AMD Ryzen CPUs providing host support, and at least one or two consumer-class NVMe SSD controllers supporting PCIe 4.0 should be ready to start shipping in retail products by the end of the year. (The enterprise/datacenter storage market's transition is already well underway.)
Seagate became the first vendor I'm aware of to start marketing a SSD to the prosumer/SMB market for NAS usage. It's a rebadge of one of their recent enterprise SATA drives and isn't even using QLC NAND so it's probably going to be pretty pricey, but the idea of a solid-state NAS is no longer completely laughable.
RAID 10 a few NVMe drives and you can get decent throughput (and storage size) with existing technology.
is there a good reason to do this in a consumer setup? max realistic throughput over gigabit ethernet is only ~120MB/s, which can easily be saturated by sequential reads or writes to/from a single modern spinning-rust drive.
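the arithmetic behind that ~120MB/s figure, for reference:

    # gigabit ethernet: 1 Gbit/s line rate, minus framing/TCP overhead
    line_rate = 1e9 / 8 / 1e6      # 125 MB/s raw
    efficiency = 0.94              # rough figure for ethernet + TCP/IP headers
    print(line_rate * efficiency)  # ~117.5 MB/s in practice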
I am fine with fiber; it's cheap and for a few ports at home, I don't worry about the power (especially as compared to the servers it cross-connects).
Basically you dedicate 1/N to each server, instead of allocating it dynamically on demand.
Latency is critical for some tasks, with bandwidth being a distant second.
This is one way in which it's a cool time to be alive. I'll never forget circa 1996 when I waited 20+ minutes for my Apple Performa 550 to read and load a 30MB file from the SCSI disk into memory. All so QuickTime could play a grainy, low-resolution video clip for all of 30 seconds. At the time I was like "That's almost 1 minute per second of video! WTF? That sucks."
The new connectors between the drives and the mid-plane board are probably the most important innovation these new form factors bring to the table, though something similar could also be done for existing SAS/U.2 connectors.
Big cases are really for tons of mechanical drive capacity, >8-core CPUs, add-in cards like network switches, or a ton of GPUs in a compute cluster.
It is probably the case that it's easier to stay in business with inferior NAND; Samsung beat everyone else to 3D NAND by a few years but Toshiba still made a killing off cheaper, lower quality planar NAND, and Intel/Micron didn't seem to suffer meaningfully from their first-generation 3D NAND being so slow. Now everyone other than SK Hynix has caught up to Samsung and the number of players in the market is actually increasing.
Edit: by regular I mean 960 evo
This is a classic use case for tape. LTO drives support reading up to 2 generations back, e.g. an LTO-7 drive can read an LTO-6 tape. By not using the latest standard, you should be able to find cheaper drives, but the risk is whether you'll always be able to find a (working) tape drive capable of reading your specific tapes.
All of a sudden spinning rust drives start to look pretty good...
Is it safe for backup, or does it have the same problem?
I imagine that even an "industrial" grade flash card is still engineered with the assumption that it will be turned on for a while every now and then.
I'd expect USB drives to be more durable. Would be nice to know.
Their current pricing appears to be $0.005/GB-month... which would be $295.20 for 41 years of 120GB of storage.
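For reference, the math:

    # $0.005 per GB-month * 120 GB * 41 years' worth of months
    print(0.005 * 120 * 12 * 41)   # -> 295.2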
> i wouldn't trust a cheap SSD to still work in 41 years
I wouldn't trust almost any company to still exist in 41 years, including Backblaze. Preserving data over that length of time will probably require active management of the data in some form or another. DVD or Blu-ray discs might last that long... but it's hard to trust any storage medium to last for 41 years. Modern storage technology just hasn't existed that long.
sigh. bad at arithmetic today. sorry.
> I wouldn't trust almost any company to still exist in 41 years, including Backblaze
as i mention in the reply to the sibling comment, the backblaze corporate entity doesn't need to last.
> DVD or Blu-ray discs might last that long... but it's hard to trust any storage medium to last for 41 years
on an anecdotal note, i put a bunch of data on DVDs circa 2004-2008 and tried to retrieve it all in spring 2018. all of the inexpensive DVDs were garbage. only the most expensive survived, and even those had a few bit errors.
What? I just checked Backblaze, and it was $5 a month, or $50 a year.
the corporate entity that is currently doing business as backblaze doesn't need to last 41 years, or even 41 more days. the stored data is all a valuable stream of revenue from customers who'd like it preserved, so unless their whole model is unsustainable, i'd expect the company to be sold to/absorbed by another business.
as for honoring the plan, i don't think cloud storage rates are going to go _up_. so maybe they wouldn't want me as a customer at some point, but so long as they're taking consumer business i can't imagine the pennies per GB*month going up unless there is some sort of massive societal upheaval.