Benchmarking Cheap SSDs for Fun, No Profit (louwrentius.com)
166 points by louwrentius on March 27, 2023 | 137 comments



> The hard drive shows better write throughput and latency as compared to most of the tested SSDs. Yes, except for the initial few minutes where the cheap SSDs tend to be faster (except for the Kingston & Crucial SSDs) but how much does that matter?

I love the idea of this article, but it lost me here.

“Except for the initial few minutes” is a weird thing to dismiss, since the majority of desktop operations will be done in less than a few minutes. Most are in the span of seconds.

The only time someone will typically go past a few minutes of sustained writes is during very large file copy operations. It’s weird to put an emphasis on this relative edge case while downplaying the importance of burst performance.

Anyone who has switched from even a fast hard drive to a cheap SSD can see the difference an SSD makes. It’s true that my NAS can sustain higher throughput for longer than a cheap SSD, but it’s much slower at doing directory listings and scanning random files than even the cheap SSDs.


It's quite clearly phrased in the article:

Although cheap SSDs do perform fine regarding reads, sustained write performance can be really atrocious.

Notice the sustained write performance.

So you consider copying large files a fringe or edge case, but personally, I'm not so sure. Especially for people who buy cheap 1TB or 2TB+ SSDs, they may be unpleasantly surprised when they copy some media files or start downloading a game from steam.

Don't forget that it will take quite a while - due to the slow flash - to empty the SLC cache before you can benefit from it again.

> Anyone who has switched from even a fast hard drive to a cheap SSD can see the difference an SSD makes.

This article is not about the merits of SSDs, that's a done deal: a good SSD beats an HDD by every conceivable metric.

My point is that people should watch out for cheap SSDs, as their sustained write performance is so slow.


> So you consider copying large files a fringe or edge case,

No, I consider sustained writing at maximum speed for more than several minutes to be an edge case.

Consider the 240GB ADATA in the article. It can write over 400MB/sec for over 150 seconds before throttling kicks in. That's 1/4 of the entire drive.

The Crucial drive has no problem doing at least 100MB/sec for the entire capacity of the drive, if that's your thing.

> but personally, I'm not so sure. Especially for people who buy cheap 1TB or 2TB+ SSDs, they may be unpleasantly surprised when they copy some media files

You can write 40GB to 3 of these drives and never even throttle. That's an entire Blu Ray, and it would occupy 1/3 of the 120GB drives tested.

If someone has a use case that involves writing 50% of the drive at full speed over and over again, a cheap SSD is not the ideal tool. But that's really an edge case for a budget 120GB drive.

> or start downloading a game from steam.

The slowest SSDs in the article can consume the entire bandwidth of a Gigabit internet connection until the drive is full.

The ADATA has dramatic throttling, but it only kicks in after 60GB written at full speed.

You're not going to encounter these throttling scenarios under normal operations. If you're only getting a drive for 100GB sequential transfers over and over at the highest possible speed, get something else. But then again, you're probably not looking at $20 120GB SSDs anyway.


> The Crucial drive has no problem doing at least 100MB/sec for the entire capacity of the drive, if that's your thing.

That's quite slow, to be frank; as stated, it's slower than an HDD, which is quite disappointing for an unsuspecting consumer who does want to transfer larger files.

P.S. see the footnote where after 1 hour or about 300 GB the transfer speed starts to collapse entirely.

> Consider the 240GB ADATA in the article. It can write over 400MB/sec for over 150 seconds before throttling kicks in. That's 1/4 of the entire drive.

> No, I consider sustained writing at maximum speed for more than several minutes to be an edge case.

It's not specifically about the small 120/240 GB SSDs, the 1TB Crucial shows that larger drives exhibit the same problem. If your SSD is 1TB or 2TB, that 50GB transfer doesn't feel so enormous anymore as compared to drive size.

I'm going to agree that most people won't have a use case for transferring large files. I've actually repurposed the Kingston as an OS drive for my lab server.

But the key objective of my blog post is to show that cheap SSDs exhibit this behaviour in the first place. Many people are simply not aware. They can still decide that it's no problem for them. But for some, it will be an issue.

This is not about the small <$20 SSDs, but about the concept of cheap SSDs often having terrible sustained write speeds, regardless of capacity. And many review sites don't highlight this issue or actually show when the throttling kicks in.


> Especially for people who buy cheap 1TB or 2TB+ SSDs, they may be unpleasantly surprised when they copy some media files or start downloading a game from steam.

That would be an entirely different experiment. This one has highlighted the most important pitfall of using critically undersized SSDs, so its conclusions say next to nothing about multi-TB drives.


I'm not so sure given that the Crucial MX500 1TB already performed at the edge of bad, although it could sustain a gigabit download.


The Crucial MX500 is a rather old piece of hardware now.

Tom's Hardware NVMe benchmarks include a "Sustained Write Performance and Cache Recovery" component. Whenever there's a good sale price on an NVMe drive, that is just about the only metric I hunt down now. Even the worst drives in those tests will beat a mechanical disk, but then again, the worst drives Tom's Hardware ever tests are still decent drives.

I grabbed one of the cheapest SATA SSDs last week to replace a failing lvmcache drive. It is a NETAC 1 TB that might still be on sale on eBay for $34. I expected the worst, and I did want to test its sustained write performance, but I wasn't nearly as scientific as you!

I just ran dd for a while and watched it stay between 420 and 470 megabytes per second for about 120 gigabytes straight before I stopped the test. The meanest I am to this cache is dropping 50 GB of video on two different days each month, so that was all the data I needed.
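For anyone who wants to repeat that kind of quick-and-dirty check, a minimal sketch (device name hypothetical, and it will destroy everything on the target, so only use a scratch disk):

    # write ~128 GiB directly to the block device, bypassing the page cache,
    # and watch the reported throughput for the moment it falls off a cliff
    sudo dd if=/dev/zero of=/dev/sdX bs=1M count=131072 oflag=direct status=progress

oflag=direct is what makes the numbers reflect the drive rather than RAM; without it, dd mostly measures how fast the kernel can buffer writes.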

Had I known that I would be reading your blog four days later I would have let the dd finish so I could take better notes! Thank you for taking the time to do the science for us!


> The Crucial MX500 is a rather old piece of hardware now.

Depends on when you bought it. Crucial/Micron decided to stop introducing new branding when they updated their SATA SSDs, but the hardware inside has changed several times to incorporate new generations of NAND flash memory, and probably at least one update to the SSD controller by now. None of that matters to the top-line specifications they advertise, but such changes can be relevant for more stressful, more thorough or less realistic benchmarks.


> Especially for people who buy cheap 1TB or 2TB+ SSDs, they may be unpleasantly surprised when they copy some media files or start downloading a game from steam.

Especially since someone who has the means to pay for fiber to the home that can sustain 500 Mbit/s+ downloads (below that, even cheap SSDs should sustain the write speed anyway, right?) and has the means to buy games from Steam can probably afford to spend 50 EUR on a fast SSD?

TFA mentions a 137 EUR Samsung SSD from 2019, but prices have dropped since then. And nowadays all mobos ship with NVMe M.2 PCIe slots, and you find stuff like this for 44 EUR: a Sabrent M.2 NVMe SSD, 256GB internal solid state, 3400 MB/s read, PCIe 3.0 x4 2280; or for 80 EUR: a Samsung 970 EVO Plus MZ-V7S1T0BW (that was a 200 EUR+ drive two years ago, I think).

Would have been interesting to compare vs those beasts, which are also cheap.

My point is: if you've got a setup allowing you to max the write speed of a cheap SSD, you've got the 30 or 40 additional EUR to buy an ultra fast beast.

P.S.: I don't mind paying a bit more, so as advised here, for my new build I bought a Western Digital SN850X Black.


For tiny SSDs, it's also important to consider how the volume of data written compares to the total capacity of the drive. Having the horizontal axis of the graph be time kinda implies the test could keep going indefinitely. But if the data were recast in terms of % of the drive, it would be easier to see how quickly it runs beyond any realistic use case: re-writing more than half the drive in one operation is simply not how people use their drives except on rare occasions (eg. restoring from backup).


I’d be interested to see some numbers that specifically keep that in mind. I suspect that flash drives suffer when they start to get full.


Says everyone until they have a power loss event…


In realistic use cases (unlike the ones in the test) the hard drives are accessed very often.

For anything but very light use, I would expect the drives to be written to just about all the time.


Copying big files is not a fringe case. I often find myself copying files that take more than a minute, because I work with videos from my phone and camera for video editing. I also have lots of data which needs to be transferred frequently, and it is frustrating when the SSD cache is exhausted during a copy and, as a result, the drive becomes even slower than my old spinning hard drive (HDD).


Copying big files onto a 120GB SSD is a fringe use case, unless it's an external SSD you're using to transport those files. Editing and organizing large videos is usually done on drives large enough to hold more than two of them (plus the OS and applications).


Practically speaking, some are more concerned with reliability over ludicrous file write speed. For example, the Intel M.2 consumer drives are considered relatively slow, but run relatively cool without a heat sink. Notably, thermal issues are still very common for laptops/tablets, and premature flash aging tends to be temperature dependent as well. Very surprised Intel transferred the SSD line to Solidigm.

Also, some Linux users bloat their IO buffer size to several GiB, set the eviction priority to 1 (rarely flushes dirty cache back to storage), and use F2FS for /home. This limits SSD wear; TRIM is auto-run once a week, and hardware storage drivers can still regularly flush the drive's internal high-speed SLC area as needed. Even the cheapest Sandisk and Samsung SATA drives from 6 years ago are still working just fine on the old hosts with this setup, and we expected them to EOL 4 years ago.

Tip: always sort by lowest negative ssd product reviews first. =)


If you want cheap-ish, EXTREMELY reliable, and "pretty fast" there's always the Intel P "datacenter" series. You can find a lot of PCIe variants on eBay[0] and elsewhere. Yes used typically != reliable but listings will include drive life/use percentages and many of them are > 90%. You can and should verify this upon receipt.

As one example the Intel P3700 can do 17 full drive writes per day[1] over five years! With a MTBF of 2 million hours (230 years).

Over the years I've used these as my boot/OS drives and I've never seen one fail. Yes, 5-10x the price of the extremely cheap drives in this post but all in all not bad if you just don't want a drive to fail.

[0] - https://www.ebay.com/itm/295520377095?hash=item44ce631907:g:...

[1] - https://www.intel.com/content/dam/www/public/us/en/documents...


Genuine question: how those Datacenter-class SSDs manage that insane TBW? Samsung's 970 Pro, which was the last (consumer) MLC "Pro" drive by Samsung, got only about 800 TBW for a 1TB drive. Do they overprovision memory cells on those SSDs? Or are the cells just larger in order to sustain more wear over time?


Datacenter drives tend to have a bit more overprovisioning (ie. 960GB usable capacity rather than 1000GB). They also don't usually do SLC caching. They also grade write endurance based on different criteria: consumer SSDs are supposed to be able to retain their data for a full year after reaching the end of their write endurance, but enterprise SSDs only need to have 3 months data retention at end of life (albeit at a higher temperature).

But mostly, it's a matter of the consumer drives having low-balled ratings so that they don't cannibalize sales of the enterprise drives. Because the write endurance ratings are more about when the warranty expires than about when the memory itself is actually worn out.


> bloat their io buffer size to several GiB, set the eviction priority to 1

How do you tweak these? I'm aware of dirty_writeback_centisecs and the likes, but you are most likely referring to something different.


YOLO mountflags for ext4 that minimize synchronous writes:

    data=writeback,journal_async_commit,lazytime,nobarrier,commit=99999
Good for caches, scratch space, build dirs and anything that can be rebuilt from other data but it'll get corrupted during any non-graceful shutdown.

overlayfs offers an even more aggressive mount option "volatile" which ignores all O_SYNC or fsyncs, but no other filesystem exposes that tradeoff.
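A sketch of what that looks like (paths hypothetical; the volatile option needs a relatively recent kernel, 5.10+ if I recall correctly):

    # upper/work/merged must already exist; "volatile" skips all syncs,
    # so the upper layer is disposable after any crash
    mount -t overlay overlay \
      -o lowerdir=/srv/lower,upperdir=/srv/upper,workdir=/srv/work,volatile \
      /mnt/merged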


Have a look at dedicated external journal disk performance for ext4. It actually is pretty fast, and relatively safe.

I find that, performance-wise, a log-structured fs like f2fs is actually not as terrible as one would expect for most use cases:

UUID=abc /home f2fs defaults,noatime,nodiratime,noquota,discard,nobarrier,inline_xattr,inline_data 0 2


No, you're thinking along the right lines. These things are kernel tunables. If you thumb through the sysfs kernel docs you can find loads of options on this sort of thing.


These settings heavily depend on your OS, hardware, and use case.

This profile is what I prefer for AORUS 5/RTX3070/i7-12700H/16GB laptops, and despite how terrible the OEM hardware is... this setup will run acceptably well with dual Intel 670p M.2 drives.

The following should work with most Debian variants, but is hardly optimal for every platform. But if your laptop is similar, then it should be a good place to start. One caveat: when ejecting media, it may take some time to flush your buffers.

sudo nano /etc/sysctl.conf

    net.ipv4.conf.all.rp_filter = 1
    net.ipv4.conf.default.rp_filter = 1
    # Ignore ICMP broadcast requests
    net.ipv4.icmp_echo_ignore_broadcasts = 1
    # Disable source packet routing
    net.ipv4.conf.all.accept_source_route = 0
    net.ipv6.conf.all.accept_source_route = 0
    net.ipv4.conf.default.accept_source_route = 0
    net.ipv6.conf.default.accept_source_route = 0
    # Ignore send redirects
    net.ipv4.conf.all.send_redirects = 0
    net.ipv4.conf.default.send_redirects = 0
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_max_syn_backlog = 2048
    net.ipv4.tcp_synack_retries = 2
    net.ipv4.tcp_syn_retries = 5
    net.ipv4.conf.all.log_martians = 1
    net.ipv4.icmp_ignore_bogus_error_responses = 1
    net.ipv4.conf.all.accept_redirects = 0
    net.ipv6.conf.all.accept_redirects = 0
    net.ipv4.conf.default.accept_redirects = 0
    net.ipv6.conf.default.accept_redirects = 0
    net.ipv4.icmp_echo_ignore_all = 1
    # ban list mem
    net.core.rmem_default=8388608
    net.core.wmem_default=8388608
    # prevent TCP hijack in older kernels
    net.ipv4.tcp_challenge_ack_limit = 999999999
    # may be needed to reduce failed TCP links
    net.ipv4.tcp_timestamps=0
    net.ipv4.tcp_rfc1337=1
    net.ipv4.tcp_workaround_signed_windows=1
    net.ipv4.tcp_fack=1
    net.ipv4.tcp_low_latency=1
    net.ipv4.ip_no_pmtu_disc = 0
    net.ipv4.tcp_sack = 1
    net.ipv4.tcp_mtu_probing = 1
    net.ipv4.tcp_frto=2
    net.ipv4.tcp_frto_response=2
    net.ipv4.tcp_congestion_control = cubic
    net.ipv4.tcp_window_scaling = 1
    kernel.exec-shield=1
    kernel.randomize_va_space=1
    # reboot on kernel panic after 20 sec
    kernel.panic=20
    vm.swappiness=1
    vm.vfs_cache_pressure=50
    # percentage of system memory that can be filled with dirty pages
    # run to check io performance with: sudo vmstat 1 20
    vm.dirty_background_ratio=60
    # maximum amount of system memory filled with dirty pages before committed
    vm.dirty_ratio=80
    vm.dirty_background_bytes=2684354560
    vm.dirty_bytes=5368709120
    # how often the flush processes wake up and check
    vm.dirty_writeback_centisecs=10000
    # how long something can be in cache before it needs to be written
    vm.dirty_expire_centisecs=60000
    vm.min_free_kbytes = 16384
    # increase system file descriptor limit
    fs.file-max=120000
    # CONNTRACK_MAX = RAMSIZE (in bytes) / 16384 / (number_of_bits_in_a_pointer / 32)
    # low power CPU should halve mem usage limits
    net.ipv4.netfilter.ip_conntrack_max = 16384
    net.netfilter.nf_conntrack_max = 16384
    net.nf_conntrack_max = 16384
    net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 86400
    kernel.pid_max = 32767
    net.ipv4.ip_local_port_range = 2000 65000


Interesting. I've had terrible luck with 'cheaper' SSDs where I thought performance wasn't important. I got a Crucial USB 3.2 drive for backups for my Mac mini, which I stupidly thought would be good given the bandwidth of 3.2, but it can barely manage 100MB/sec of sustained writes, which is close to an old hard drive.

Managed to repeat the mistake again on Black Friday with a cheap SATA drive that cannot write anywhere close to SATA speeds. I even checked the specs on this, but real-world performance isn't anywhere close. Probably using different chips with the same model number?

And to top it off I just got a 256GB M2 Macbook air for travelling which has horribly crippled IO perf compared to the other models, which I didn't realise until after I bought it.

It's all tolerable really, given I have some good NVMe drives in my main workstation, but I cannot understand how those do 2000MB/sec no problem, but cheaper drives struggle with 100MB/sec. NVMe drives have plunged in price over the last couple of months so maybe getting them with a USB enclosure/SATA convertor is a better way to assure higher quality.


Don't confuse the interface for the drive itself. In most consumer tech the interface is way beefier than it needs to be because of marketing. You can see that right now with PCIe gen 5 drives, which could just as well be gen 4, except that no one would be talking about them if they were.


> You can see that right now with PCIe gen 5 drives which could just as well be gen 4 except that no one would be talking about them if they were.

Where? I've only seen a few models get talked about, and they could all easily bottleneck a gen 4 connection.

It's still hard to find a use case, but the interface is not overspecced compared to the flash and controller on the ones I've seen.


Yes, the problem is it's hard to tell what the underlying drive is. I read reviews of one SATA drive, but it seems they swapped out the actual chips but kept the same model number, as all the reviews said it could saturate it, but mine can't, despite having the same model number.


For that matter, how many drives are actually capable of maxing out SATA's 6Gbps?

There are architectural reasons for moving away from SATA I know, but the raw bandwidth is there.


You might be confusing Gigabits for Gigabytes. 6 Gbps means 6 gigabit/s which translates to about 550 megabyte/s sequential in practice.
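As a back-of-the-envelope sketch of that conversion:

    # SATA 3 runs at 6 Gbit/s with 8b/10b line coding, i.e. 10 bits on the wire per data byte
    echo $(( 6000 / 10 ))   # => 600 MB/s raw; protocol overhead leaves roughly 550 MB/s usable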

M.2 PCIe 3.0 x4 is 32 Gbps (4 GB/s), the same with PCIe 4.0 is about 64 Gbps, and PCIe 5.0 gives you 128 Gbps. 5/10/20 times faster than SATA 3, respectively.

Hard drives manage a bit above 250 MB/s nowadays, sequential. MLC and TLC SATA SSDs can usually saturate the interface, but QLC SSDs are generally much slower with write speeds between 50 and 150 MB/s.


SATA is 6 Gbps, not 6 GBps. Tiny cheap drives like the ones in this article are the only ones that struggle to saturate a SATA link during sequential transfers. Any NVMe drive 1TB or larger will offer sequential read performance several times higher than SATA could handle, and if it's a TLC drive rather than QLC the sequential write performance will also generally be at or above the SATA limit even after the SLC cache runs out.


A $50 budget NVMe will do that 5x over on cached writes or reads just fine. Remember SATA is 6 gigaBITS per second and drive speeds are often measured in BYTES per second. After overhead you only get ~500 MB/s if your SSD is attached via SATA; quite a few of the dirt-cheap SATA drives in the article even seem to be running into that limitation at first.


> And to top it off I just got a 256GB M2 Macbook air for travelling which has horribly crippled IO perf compared to the other models, which I didn't realise until after I bought it.

When buying anything from Apple (be it an iPhone or a MacBook), you always need to make sure it's at least one notch above the minimum storage tier.

For example, if the lowest tier is 512 GB, should go for 1 TB storage.

This helps both performance and longevity.


The performance of that crippled MacBook is still so much better than this cheap crap; it's still perfectly fine, and longevity is also likely fine.


What? Why? I'd expect the opposite, which is that the most popular variant would be the most stable and well-tested.


Because the number of physical storage chips has a direct correlation to speed. Often the lowest tier may have a single chip, so the throughput is limited to that chip's throughput potential. Multiple chips are basically like RAID 0: you spread the reads/writes over multiple chips and get the aggregate speed. You also have more 'sectors' to spread errors over as the chips degrade over time.


If the goal is performance and longevity, I can see how multiple chips might help with performance. But what about longevity? It seems that more chips means more things that can break, and when one of them breaks your whole drive will be broken. I guess the question is what's more likely: one of one chip breaking, or one of two chips breaking? Given what you said about reads/writes being spread over chips, maybe it's not so simple as assuming more chips equates to a higher chance of failure.


More NAND dies means a given number of TB of writes will require doing fewer write/erase cycles per memory cell, because you have more memory cells.

NAND flash almost never fails a whole die at a time. An individual NAND die fresh out of the fab will already have a few defective memory cells, and as the drive is used, more write cycles will result in more memory cells failing and being retired. This gradual, partial failure is fundamental to how SSDs manage flash memory.


> It seems that more chips means more things that can break

Nope.

> I guess the question is what's more likely, one of one chips breaking, or one of two chips breaking?

Doesn't matter. In each case you lose your data. And if the system is so frail that the addition of another chip raises the failure rate through the roof, then talk about the reliability of such a system, no matter how many chips it has, is moot.


They didn't speak to testing or stability; I doubt it differs at all. If you want a tech purchase with soldered storage and memory to last longer, you upgrade from the base spec at the time of purchase.


> but I cannot understand how those do 2000MB/sec no problem, but cheaper drives struggle with 100MB/sec.

No DRAM cache and fewer flash chips means less throughput. All these cheap SSDs I've tested only have one physical flash chip.

High-performing SSDs often spread their capacity over multiple chips so they can leverage the individual chip bandwidth and thus have more throughput.

But there are likely many more factors that I'm not familiar with.


> And to top it off I just got a 256GB M2 Macbook air for travelling which has horribly crippled IO perf compared to the other models, which I didn't realise until after I bought it.

Do you really feel the crippled IO perf? I mean, this machine still has read rates between ~800 and 1700 MB/s. That's not exactly the definition of slow.


I debated about spending the extra whatever to go to the larger/faster storage due to this problem. In the end I went for the base/lowest cost MBA M2 because it's still plenty fast day to day for me and only on rare occasions would I need that extra bump in speed. Even then the difference is probably seconds rather than minutes or hours.

The most ridiculous part of all of this is that they're pushing a pro machine with crippled IO performance.


Not for writing and reading actual files, no. But the 8GB of RAM means it can swap to disk quite easily, and the faster speeds really make a difference with that. My M1 Mac mini, which I got to play around with the new arch, feels much, much faster when memory constrained because of this.


When pointing out that HDDs can outperform these SSDs, 'sequential' is the key word. I regularly pull remote backups with syncoid (i.e. `zfs send | zfs receive`) and over time that fragmented the receiving side considerably. In the end `zpool list` showed over 80% capacity and 40% fragmentation. The hard drives were seeking constantly and the syncoid task would take over eight hours to complete. I replaced the disks with SSDs and now the task completes within 20 minutes.
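For anyone curious, those numbers came from something like this (pool name hypothetical):

    # capacity and fragmentation are standard zpool properties
    zpool list -o name,size,capacity,fragmentation tank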


Back in the day, a common suggestion for speeding up your PC was to defragment your hdd. I didn't start using Linux until right around the SSD transition, so I've never done it there, but for setups like this are there not still tools to do something similar?

I'm sure you got other benefits out of swapping to SSD's, but your comment just got me thinking.


No, there is no defragmentation for ZFS, unfortunately. A way to get around that is to send the pool's content to another (fresh) ZFS pool, where it would be written sequentially. But for that you would need a set of drives of the same (or larger) capacity.
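A minimal sketch of that migration, assuming pools named oldpool and newpool:

    # snapshot everything recursively, then replicate the whole pool
    zfs snapshot -r oldpool@migrate
    zfs send -R oldpool@migrate | zfs receive -F newpool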

There are ideas on how one would do an actual defrag. They are generally based on a concept called block pointer rewrite, which Matt Ahrens once said could be the 'last feature ever implemented in ZFS', as it would make everything so much more complicated that it would be hard to add new features afterwards [1].

[1] https://www.youtube.com/watch?v=G2vIdPmsnTI#t=44m53s (Link to the beginning of the explanation; the 'last feature ever implemented' quote is at around 50:25)


There's no point in defragging an SSD unless the low-level controller is doing it; the controller is always presenting a false picture of the mapping between data addresses and physical location of pages.

There's no good ZFS defragging tool, although the initial send to a new pool will accomplish that. This is just a thing for COW-style filesystems.


> This is just a thing for COW-style filesystems.

It doesn't have to be.

ZFS in particular has an architecture that's very hostile to ever moving things.

BTRFS has a design that's amenable to defragmentation, but the builtin option doesn't work with snapshots and the external programs I've tried are partial and finicky.


Long ago I worked on a graphical tool that showed disk fragmentation. Of course all the devs would test on their various hardware, pre-SSD. It was true that you could change the performance of daily tasks with some fragmentation management.

In recent years I use Linux with default ext4 mostly. Linux and ext4 appear to me to regularly maintain the disk allocations somehow, but I do not have a graphical tool to show that; details welcome.
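For what it's worth, ext4 does ship command-line tooling for this (no GUI), e.g.:

    # report a fragmentation score without changing anything (path hypothetical)
    sudo e4defrag -c /home
    # show the extents of an individual file
    filefrag -v /path/to/some/file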


The moment you do IO seeks on hard drives they just suck, as you experienced.

In 'almost' every user-based usage scenario an SSD is going to perform better than an HDD. About the only time an HDD is better is when you're writing out large singular data files. But even then you have to be cautious, as if the drive is shared with other read/write operations you can find the performance again drops off a cliff.


I remember writing a comment on AnandTech years ago where I accused SSD makers of being in a race to make an SSD that performs worse than an HDD, and this blog post vindicates me.


You were wrong. Windows 11 is unusable when installed on a fast 7200 RPM hard drive and works perfectly well on a super budget SSD. Budget SSDs are an order of magnitude faster than an HDD when it comes to small random reads and writes (which is what a storage medium does most of the time on an average PC/laptop).


If you have an extremely specific use case of ingesting large amounts of purely sequential data, then you can find situations where a fast HDD performs better than the cheapest SSDs you can find.

But that’s about it. The random I/O of even a cheap SSD will be far superior to the limited IO of a fast mechanical hard drive for typical workloads.


> One of the funnier conclusions to draw is that it's better to use a hard drive than to use cheap SSDs if you need to ingest a lot of data. Even the Crucial 1TB SSD could not keep up with the HDD.


Key information: this is for sequential writes.

HDDs are actually pretty good at sequential writes. Random writes show a much much bigger gap.


I wish manufacturers exposed a way to manually provision the drive as all SLC / MLC / TLC; a small SLC cache drive would be great for several use cases.


I'd gladly pay the same price for a 40GB SLC SSD over a 120GB TLC one, but the former is going to have an endurance >100x more, and that's apparently why there are seemingly no cheap SLC SSDs available. The capacity increase with more bits per cell is multiplicative, but endurance and retention get exponentially worse. Manufacturers would rather go the planned obsolescence route even at the same prices. (SLC also needs far less fancy ECC and wear leveling algorithms, to the point that early SSDs with only SLC didn't have anything but 1-bit ECC and no wear leveling; I still have a 64MB USB drive with a full binary capacity and it would theoretically be capable of 6.4TBW before wearing out.)


Life would be better if you could just buy this stuff on sticks and the operating system would figure out what to do with it.


The 'expensive' SSD used for comparison is a 6-7 year old Crucial MX500

Oddly enough this test doesn't even get 300MB/s writes for the Crucial drive, where other benchmarks are 400MB/s or above for the same drive

https://www.anandtech.com/show/12165/the-crucial-mx500-1tb-s...


Yes, a lot of SSD vendors scam their customers by releasing the first batch with a lot of SLC; reviews are released, and then months or years later they silently swap it out for inferior flash.


Can confirm. I still use MX500s.


"Disclaimer

I'm not sponsored in any way. All mentioned products have been bought with my own money."

Funny that the author felt the need to disclaim that they are not taking money from anyone to write the article. How dare they!


how else would one know they aren't sponsored by Big HDD?


A lot of newer cheap NVMe SSDs don't have DRAM, but instead they use the HMB (Host Memory Buffer) feature, which essentially allows them to use a small amount of your system RAM (maybe 64MB).
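On Linux you can check whether a drive actually negotiated an HMB; a sketch using nvme-cli (the HMB feature ID is 0x0d, if I recall correctly):

    # dump the Host Memory Buffer feature in human-readable form
    sudo nvme get-feature /dev/nvme0 -f 0x0d -H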

None of this applies to SATA SSDs, which is probably why it was omitted from the article, but it's a good thing to keep in mind if you're shopping for one for yourself. (These days I would only consider a SATA SSD for a system that did not support NVMe. Otherwise low end NVMe is significantly better.)


This is a subject that gets a lot of reviewer attention and results in considerable gnashing of teeth. Yet for most users, how often do we generate many-gigabyte streams of sequential writes? It’s just not a usage pattern worthy of infinite optimization. As others have pointed out already, it’s the reads that matter. In particular, end users care about low queue depth, kind-of-random read throughput. And here, the cheapest SSD thrashes the best spinning disks.

Imho it’s a little bit like complaining that long, sequential writes to DRAM are slower than reads from L1$. It’s true, it just doesn’t matter for most of us.


I can attest that getting one of those cheapo SSDs for a gaming PC and then downloading games on Steam over a good connection (500 Mbit/s) can overwhelm the SSD and cause the whole system to be unusably slow, with similar multi-second latency numbers being shown in Task Manager on Windows 10. It’s a relatively common situation among the gaming community, I’d say.


Generally the limit here is Windows/antivirus making your storage a ton slower. Linux/Mac don't have this problem nearly as badly.


Windows isn’t alone in this problem… someone uploading gigabytes of files to a web server can cause this too.


Are you talking about a web server running in somebody's basement, or a web server running in a data center? Because the SSDs in a real server wouldn't have SLC caching in the first place.


Windows is alone in the problem of having an OS with a really slow file system and being commonly deployed with antivirus that will scan every file.


I read that disabling the Windows write cache for the drive helps with this problem. Don't ask me why though.


The OS stops sending bazillions of bytes to the drive, overwhelming its caches, and instead waits for the writes to complete. Less throughput from the OS => more time for the SSD to complete whatever inner shenanigans are happening.


I've seen people complain exactly about this.


>> many-gigabyte streams of sequential writes?

More often than many would expect. Downloading/installing games can do it. Many complaints to Valve about slow servers are likely the fault of slow consumer SSDs not able to keep pace with modern download speeds. And people with the smaller/slower SSDs are more likely to be installing/removing games more often. It's a strange, strange world where local storage might bottleneck a residential internet connection.


I've run into this when I first got access to gigabit internet. I was still storing all of my games on good old SATA hard drives.

I set up a 4 gigabyte RAM disk for downloads and such so my hard drives weren't slowing me down. Eventually I got myself some nice and fast SSDs, but even now I run some games off hard drives because of the still significant cost of replacing terabytes of storage capacity.


I do long sequential writes quite frequently: copying a media file (often from torrenting) for viewing on another machine or to take to another location. The speed of that does matter.


I disagree strongly.

Writes are often just as important; there is no picking and choosing between read and write performance, both must be acceptable.

And that's true for both sequential and random I/O.

If you ever have to ingest some large data set on any of these cheap SSDs you'll be in for an unwelcome surprise.

And although outside the scope of my article, random write I/O performance is beyond terrible as soon as you're out of the SLC cache.


I'm more interested in the patterns some of the drives show. Like the one with periodic huge spikes (orange), or the one that goes up and down like crazy the whole time (red).

Any clue what causes these?


Probably related to how cache flushes are implemented. Some may lock the whole cache during a flush, some may implement a ring buffer, etc.


The thing with the HDD beating the SSDs is that this is pure sequential writes, no seeking involved, which is quite rare in real life. If this were smaller random writes, the HDD would be awful.


Maybe it's rare for you to ingest large files or a large dataset, but I'm going to bet it's not that uncommon, especially if you work with photography, virtual machine images, (raw) video files and so on.

None of the cheap SSDs are fit for purpose in my opinion for any of this.


I would say, show me the data (and with a file system on top). Log-structured file systems don’t overwrite data, so the randomness of writes becomes moot, as they convert all writes into sequential ones.


I wouldn't expect using a log-structured filesystem to help much, because the SSD is already running its own log-structured filesystem internally.

A log-structured filesystem doesn't magically turn a random write workload into sequential writes; it incurs more or less the same overhead that the SSD's FTL does: read-modify-write cycles causing write amplification.


Fragmentation? Cluster sizes?


>One of the funnier conclusions to draw is that it's better to use a hard drive than to use cheap SSDs if you need to ingest a lot of data. Even the Crucial 1TB SSD could not keep up with the HDD

Sure, but a cheap SSD is still (supposedly) faster than an HDD when it comes to random writes/reads, which is a big deal for running applications and OSes. I'm missing a random I/O benchmark to put things like this in perspective.


Something’s up with the benchmarks. The MX500 should perform significantly better than it has. I’m using several MX500s and get sustained writes close to 390MB/s and reads around 440MB/s. My purchase decision was derived from a direct comparison with the Samsung 870 Evo, where the Crucial was nearly as good while costing much less.

I cannot speak for the other drives.


Maybe you're right, I'm open to suggestions. But I ran the benchmarks multiple times.

Regarding the MX500, I actually used the drive and made a backup with dd beforehand. Afterwards I wrote the backup back onto the drive (an entire drive write) with dd, and sustained write performance fluctuated between 40 MB/s and 130 MB/s if I recall correctly.

https://louwrentius.com/static/images/cheapssd06.png


Just curious, when you ran dd, did you also increase the block size? I usually do bs=1M

My real world experience with MX500 is pretty well aligned with: https://ssd.userbenchmark.com/Compare/Samsung-870-Evo-250GB-...

I wonder how your other drives score on those tests.


Yes, just as in the fio test, I used bs=1M.
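Not the exact job from the article, but a sketch of the kind of sustained sequential write test fio runs here (target device hypothetical, and it overwrites the drive):

    sudo fio --name=seqwrite --filename=/dev/sdX --rw=write --bs=1M \
        --ioengine=libaio --iodepth=16 --direct=1 \
        --time_based --runtime=3600 \
        --write_bw_log=seqwrite --log_avg_msec=1000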

Some people report that Crucial uses different hardware under the same MX500 brand; maybe that explains things.

How old is your drive?


They are all from 2020-2022


Ok, mine is also new, likely from 2022. I really can't explain the difference.


Do you have a chance to test it on different hardware? Or have you?


No, not anymore, I've restored my backup and I'm using the SSD again.


You can get cheap NVMe drives for around the same price with a controller supporting HMB (Host Memory Buffer) which to a large degree neutralizes the disadvantage of having no expensive DRAM on board. If you need some small form factors like 2242 or 2230 you pretty much don’t have any DRAM options anyway.


>possible to buy a 1TB solid-state drive for less than €60

That's for the fancy brand-name ones! Chinese no-name drives (unless you consider "blue" a brand :P) can be had for <$30 for 1TB and <$40 for 2TB with free shipping. No doubt using Chinese manufactured NAND flash, I have to wonder about write endurance and reliability.

https://www.ebay.com/itm/385414212830 https://www.ebay.com/itm/165934839845 https://www.ebay.com/itm/394522781098


> No doubt using Chinese manufactured NAND flash, I have to wonder about write endurance and reliability.

I wouldn't count on it; there's only one Chinese NAND manufacturer (YMTC) and they are still a relatively new and low-volume market participant. Most cheap drives use NAND made by one of the other major manufacturers, but it's the leftovers that had initial defect rates too high for the better brands/models to use.


I wonder if there’s a meaningful difference between SATA and NVMe anyway in terms of latency and command queue buffering. In particular, it doesn’t feel like seconds of latency come from the storage tech vs saturating the speed of submitting I/O requests.


This article shows that cheap SSDs are basically cheap USB sticks with a SATA connector attached.

Any run of the mill, entry level USB flash drive behaves the same way. Fast until caches fill, then limping while trying to write this data to the proper flash back end (at 10MB/sec in most cases).

If you pay the price, your USB drive will be a proper SSD with a USB to SATA controller, SMART and everything (Sandisk's Extreme Pro drives, for example).


If your metric is "how long does my computer take to boot" or "how long does it take my game to load", a cheap SSD, like really cheap with no NAND cache and slow peak speeds, performs within a couple of % of the most expensive ones. Outside of a few specialty use cases there is really no need to go beyond the "expensive enough to not be garbage" category.


I remember learning about the "Slow TLC backed by fast SLC" a few years ago when I got a new SSD and was cloning the old one.

It went super fast at first, hitting the ~550 MB/s limit of the SATA 6 Gbps bus, for about the first 16 GB. Then it dropped to about 50 MB/s, slower than a hard drive.

For my use case (gaming), it would never matter, but it still made me scowl a bit.


I have an old 32GB ADATA SP300 SSD. It's a 200/40 MB/s read/write, bottom-of-the-barrel SSD, was $20 in 2014 and perfect for a router, where it lived for 2 years. It finally broke after one more year in a retro gaming Windows XP setup with no TRIM support. Now writing the first 2GB goes at the full ~40MB/s, but then it drops to... 2KB/s :D No amount of trimming, zeroing, or secure erasing is able to recover it :(. All SMART stats are perfect, including life left.
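For reference, roughly the commands behind 'trimming, zeroing, secure erasing' (device name hypothetical, and all of them destroy the drive's contents):

    sudo blkdiscard /dev/sdX                                    # TRIM the entire device
    sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress      # zero it out
    sudo hdparm --user-master u --security-set-pass p /dev/sdX  # set a temporary ATA password
    sudo hdparm --user-master u --security-erase p /dev/sdX     # ATA secure erase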


I'd love to see what flash chips the cheap drives are using. I expect at least some of them to be Micron chips, which are the same chips used in the Crucial drive. I would be very amused if one of them was using SpecTek, which is the brand Micron uses for their lowest-binned chips.


The 8TB 7200 RPM Toshiba drive has decent sustained write performance. But if the author had tested a cheaper hard disk, such as a 8TB Shingled Magnetic Recording (SMR) Seagate drive, they'd have seen sustained writes even worse than the cheapest SSDs.


One of these days I should write an article on how the invention and sale of SMR drives is a hate crime and a violation of the Geneva Convention.


These are low absolute prices but normalized for size they are pretty bad. I.e. a WD Black SN850X 1TB is $95 retail. That's double the space per dollar and orders of magnitude better performance depending on how you look at it.


I'm going to bet that if consumers didn't go for the lowest prices and paid a little bit more, sustained write speeds would likely be 'fine', as in at least on par with an HDD or far beyond.


How old are those cheap SSDs? When using older (as in production date, not usage) ones I'd be way more concerned about their reliability than speed, especially when sustained writing is involved.


I bought them brand new. I don't know when they were produced.


This is the point where I would pay a bit more for the DRAM cache.


I think the DRAM issue is mostly a red herring. For a 120GB SSD, the SRAM on the controller will be able to hold a significant fraction of the logical to physical address mapping table. The bigger problem is that these tiny drives only have one or two NAND dies, so any operation that causes a block erase will tie up a quarter of the capacity for hundreds of milliseconds.


The control drive, the Crucial MX500 1TB, does have DRAM, but I'm not sure it helps much.


A DRAM cache generally doesn't matter for sequential transfers. A DRAM cache will most noticeably help with random reads over the whole drive (random reads from a small range of the drive can be handled well with just a few MB of SRAM on the controller).


Such are the woes of modern TLC/QLC flash memory.


I'm not so sure it's an issue of TLC/QLC memory at all. It's that to make cheap drives, costs must be cut and this can be the outcome: bad write performance.


With a TLC-configured controller each write is actually three consecutive program/erase operations on the affected NAND cell. With QLC it's four such operations. All MLC schemes - DLC, TLC and QLC - are cost-cutting measures, and the technique inherently incurs a performance and endurance toll.


I know and understand this. However, there are plenty of well-performing TLC drives at higher price points, so it doesn’t seem specific to TLC memory; there are other factors at play, I think.


Larger drives are faster because in order to provide more storage they have to use multiple NAND chips, which the controller interfaces with in parallel. Older DLC SSDs of 250-500 GB could often sustain e.g. 550 MB/sec write speed (SATA-III bus maximum) indefinitely, but most 120 GB models from the past few years use a single NAND chip, and as of late this is also the case with most 250 GB models. The idea that the cheaper drives are slower because of some unspecified generic cost-cutting is a fallacy. They're primarily slower because of iterative MLC exploitation and because NAND-cells-per-chip is growing.


Yes, I also know this, which only shows that TLC memory is not the issue: it's a cost issue.

Indeed, all the cheap SSDs I've tested use just one chip.


That Kingston looks to be quite good value, although I personally would stay away from anything using TLC flash and worse.


I'm unaware of any SSD made in 2023 that doesn't use TLC or worse.


Even Samsung "Pro" flagship consumer SSDs are using TLCs nowadays. I want to believe they found a way to make TLCs suck less. At least, they can sustain the throughput on benchmarks, but I'm yet to find some benchmark on durability (vs past MLC iterations).


afaik those are around 400 erase cycles :(


Samsung rates the 990 Pro 1TB model at 600 TBW vs 800 TBW from the 960 Pro (MLC)... It's bad, but a shot I would take for the much better performance (4 times IOPS) if those numbers are to be trusted.


"worse" is subjective.


In my opinion they all have quite good GiB-per-dollar value these days for what they really are: a medium that's crazy fast to read from, but crazy slow to write to.


I upgraded my mom's PC from spinning rust to a semi-cheap SSD and Windows 10 now boots in seconds rather than minutes. Reads are what matter most for most people.


Actually, I've taken the Kingston as my OS drive for my lab computer. The storage for my VMs is a bunch of datacenter SSDs at quite different price points, and the performance shows.


Cheap SSDs slow down when they run out of the faster cache memory!


And so do the expensive SSDs. Usually writing the first 30% of the available space goes through the fast pseudo-SLC mode, and then the speed drops drastically.

The most in depth SSD reviews I've seen are on this YT channel: https://www.youtube.com/@prossd

They are in Russian, but subtitles are available.


LTT just did a video on this, looking at the latest gen PCIe drives and found similar results https://www.youtube.com/watch?v=jnMMtbVP0ps


What software did you use to make the charts?


It's linked in the article, but the name is fio-plot

https://github.com/louwrentius/fio-plot



No read performance? Other than being solid state, that's the real advantage of SSDs, especially for non-sequential reads. I typically use an SSD for the OS and applications, and then use a regular hard drive for the actual content that I work with, which needs better write performance. It may take (very) slightly longer to install the applications to the SSD, but they start up faster.


It seems to me extremely unlikely that an HDD has anywhere near the sustained write performance of a proper SSD. E.g. a Samsung 980 Pro 2TB will write at ~2GB/s indefinitely (well, until you wear it out, which at that rate will only take about 2 days). That's the aggregate speed of a whole box full of HDDs.


Sorry, the point of my article is that cheap SSDs have terrible write performance. Read performance may be fine, but write performance does matter and people often don't know how bad it can be.

And then they wonder why copying some files is so slow.



