
Everything I Know About SSDs - classified
http://kcall.co.uk/ssd/index.html
======
StavrosK
Mirror: [http://archive.is/K9SFI](http://archive.is/K9SFI)

~~~
exikyut
archive.is links have not been working for me for a while now. I wasn't sure
if something was wrong with the site or what, but seeing this link posted here
convinced me to do a bit of digging.

I have news: 1.1.1.1, aka CloudFlare's DNS, resolves archive.is to 127.0.0.5.

Live proof:
[https://digwebinterface.com/?hostnames=archive.is&useresolve...](https://digwebinterface.com/?hostnames=archive.is&useresolver=1.1.1.1)

 _Archived_ proof: [http://archive.is/utJfW](http://archive.is/utJfW)

Of course the moment I changed 1.1.1.1 to 8.8.8.8 in resolv.conf I was able to
access the site again.

~~~
leesalminen
1.1.1.1 has been acting up for me recently as well but haven’t had a chance to
investigate. Glad to hear it’s not just me

~~~
vsl
It’s really archive.is acting up here...

------
notacoward
The part about filesystems is slightly incorrect.

> The way the file system handles this is incompatible with the workings of
> NAND flash.

That's true of most conventional filesystems, but log-structured filesystems
are much more flash-friendly. That's why there has been a resurgence of
interest in them, and also why a typical flash translation layer bears a
striking resemblance to a log-structured FS. There are also flash-specific
filesystems.

> to an HDD all sectors are the same.

This is not true because of bad blocks. Every disk has a reserve of blocks
that can be remapped in place of a detected bad block, transparently, much
like flash blocks are remapped. Beyond that, it's also useful for a disk to
know which blocks are not in use so it can return all zeroes without actually
hitting the media. There are special flags to force media access and commands
to physically zero a block, for the cases where those are needed, but often
they're not. Trim/discard actually gets pretty complicated, especially when
things like RAID and virtual block devices are involved.

~~~
rckoepke
> to an HDD all sectors are the same.

Also, I believe some people (and filesystems?) intentionally stored certain
data towards the inside or outside of the HDD, because the simple cylinder
geometry allowed faster reads in those regions. However, I'm not seeing
conclusive proof that modern HDDs show performance variation with respect to
radius.

~~~
monocasa
I/O schedulers would also understand the geometry and reorder commands to
minimize seek distance (and therefore seek time).

~~~
tzs
At some point, though (I think around the mid '90s or early 2000s, but maybe
earlier), seek time on widely available drives was fast enough that for random
access you spent, on average, about as much time on the right cylinder waiting
for the target sector to rotate under the head as you did seeking to that
cylinder.

You could get some decent gains then if you made your scheduler take rotation
into account. A long seek that arrived just before the target sector came
under the head could be faster than a short seek that would arrive just after
the sector passed the head.

On the other hand, taking rotation into account could make the scheduler quite
a bit more complex. You needed a model that could predict seek time well, and
you needed to know the angular position of each sector in its cylinder.

I don't think that there were any drives that would tell you this. SCSI drives
wouldn't even tell you the geometry. IDE drives would tell you _a_ geometry,
but it didn't necessarily have anything to do with the actual geometry of the
drive.

At the time I worked at a company that was working on disk performance
enhancement software (e.g., drivers with better scheduling, utilities that
would log disk accesses and then rearrange data on the disk so that the I/O
patterns in the logs would be faster [1], and that sort of thing).

We had a program that could get the real disk geometry. It did so by doing a
lot of random I/O and looking at the timing of when the results came back. If
there were no disk cache, this would be fairly easy. (Well, it didn't
necessarily get the _real_ geometry, but rather a purported geometry and seek
and rotational characteristics that could predict I/O time well).

For instance, read some random sector T, then read another random sector, then
read T again. Look at the time difference between when you started getting
data back on the two reads of T. This should be a multiple of the rotation
time.

If the disk has caching, that can still work, but you need to read a lot of
random sectors between the two reads of T to try to get the first read out of
the cache.
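
A minimal sketch of that timing trick on Linux (not the actual tool,
obviously), assuming a spinning disk with 512-byte logical sectors at a
made-up device path; it needs root, and O_DIRECT to keep the page cache out of
the way:

    import os, mmap, random, time

    DEV = "/dev/sdX"       # hypothetical raw device path; needs root
    SECTOR = 512           # assumes 512-byte logical sectors
    FLUSH_READS = 2048     # random reads in between, to push T out of the drive's cache

    fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
    dev_sectors = os.lseek(fd, 0, os.SEEK_END) // SECTOR
    buf = mmap.mmap(-1, SECTOR)   # O_DIRECT wants an aligned buffer; mmap is page-aligned

    def read_sector(lba):
        os.preadv(fd, [buf], lba * SECTOR)
        return time.perf_counter()            # timestamp when the data came back

    T = random.randrange(dev_sectors)
    t1 = read_sector(T)
    for _ in range(FLUSH_READS):              # defeat the drive's read cache
        read_sector(random.randrange(dev_sectors))
    t2 = read_sector(T)

    # With no caching effects, the two completions of sector T should be an
    # integer number of rotations apart; repeat this many times and the
    # clustering of the deltas hints at the rotation period (~8.33 ms at 7200 rpm).
    print(f"delta between the two reads of T: {(t2 - t1) * 1e3:.2f} ms")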

Anyway, we had to give up on that approach because the program to analyze the
disk took a few days of constant I/O to finish. Management decided that most
consumers would not put up with such a long setup procedure.

[1] Yes, that could mean that it would purposefully make files more
fragmented. A fairly common pattern was for a program to open a bunch of data
files and read a header from each. E.g., some big GUI programs would do that
for a large number of font files. Arranging that program and those data files
on disk so that you have the program code that gets loaded before the header
reads, then the headers of all the files, and then the rest of the file data,
could give you a nice speed boost.

The flaw in this method is that, to use the above example, if another big GUI
program also uses those same font files, the layout that makes the first
program go fast might suck for the second program. If you've got a computer
that you mostly only use for one task, though, it can be a viable approach.

~~~
zozbot234
> I don't think that there were any drives that would tell you this. SCSI
> drives wouldn't even tell you the geometry. IDE drives would tell you a
> geometry, but it didn't necessarily have anything to do with the actual
> geometry of the drive.

I thought the point of native command queuing was precisely to enable the
drive itself to make these lower-level scheduling decisions, while the OS
scheduler would mostly deal with higher-level, coarser heuristics such as
"nearby LBA's should be queued together."

BTW, discovering hard drive physical geometry via benchmarking was extensively
discussed in an article that's linked in the sibling subthread. I've linked
the HN discussion of that as well.

~~~
jlokier
Native command queuing won't help with _placement_ decisions. (And the NCQ
queue is rather short anyway for scheduling optimisation.)

For example, even something as trivial and linear as a database log or
filesystem log can benefit from placement optimisation.

Each time there's a transaction to commit, instead of writing the next commit
record to the next LBA number in the log, increment the LBA number by an
amount that gives a sector that is about to arrive under the disk head at the
time the commit was requested. That will leave gaps, but those can be filled
by later commits.

That reduces the latency of durable commits to HDD by removing rotational
delay.

Command queueing doesn't help with that, although it does help with keeping a
sustained throughput of them by pipelining.
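
As a toy illustration of that placement idea (not how any real log manager
implements it), suppose the rotation period and the slot currently under the
head have already been calibrated; every constant and name below is invented:

    import time

    # Invented drive model: one log track, calibrated rotation period.
    ROTATION_S        = 60.0 / 7200     # ~8.33 ms per revolution at 7200 rpm
    SECTORS_PER_TRACK = 1024
    LEAD_SECTORS      = 8               # margin for command overhead before the head can write

    def sector_under_head_now():
        """Toy model: which slot of the log track is under the head right now."""
        frac = (time.perf_counter() % ROTATION_S) / ROTATION_S
        return int(frac * SECTORS_PER_TRACK)

    def next_commit_lba(track_base_lba, used_slots):
        """Pick the free slot that will arrive under the head soonest, rather than
        the next sequential LBA; the slots we skip are left as gaps for later commits."""
        start = (sector_under_head_now() + LEAD_SECTORS) % SECTORS_PER_TRACK
        for i in range(SECTORS_PER_TRACK):
            slot = (start + i) % SECTORS_PER_TRACK
            if slot not in used_slots:
                used_slots.add(slot)
                return track_base_lba + slot
        raise RuntimeError("log track full")

A plain append-to-next-LBA policy would instead wait, on average, about half a
rotation before every commit.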

~~~
wtallis
> (And the NCQ queue is rather short anyway for scheduling optimisation.)

Isn't it 31 or 32 commands in the queue? That's a worst-case of around a
quarter second for a 7200rpm drive, which sounds like an awfully long time
horizon to me.

~~~
jlokier
That's probably why the queue depth is limited in NCQ. It's not intended to do
bulk parallel scheduling, and as you imply, you wouldn't want the drive
committed to anything for much longer than that.

But for ideal scheduling, you need something to deal with the short timings as
well.

For example, if you have 1024 x 512-byte single-sector randomly arriving
reads, of which 512 sectors happen to be in contiguous zone A and 512 sectors
happen to be in contiguous zone B, all of those reads together will take about
2 seek times and 2 rotation times.

Assuming the generators of those requests are some intensively parallel
workload (so there can be perfect scheduling), which is heavily clustered in
the two zones (e.g. two database-like files), my back-of-the-envelope math
comes to <30ms for 1024 random access reads in that artificial example, on
7200rpm HDD.
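
Roughly that envelope, with assumed rather than measured numbers:

    # Back-of-the-envelope for the two-zone example; the numbers are assumptions.
    rotation_ms  = 60_000 / 7200    # ~8.33 ms per revolution at 7200 rpm
    avg_seek_ms  = 6.0              # assumed seek between the two zones
    zone_read_ms = rotation_ms      # 512 contiguous sectors = 256 KiB, well under one
                                    # track on a modern drive, so about one revolution
                                    # covers rotational latency plus the transfer

    total_ms = 2 * avg_seek_ms + 2 * zone_read_ms
    print(f"~{total_ms:.0f} ms for all 1024 reads")   # ~29 ms with these assumptions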

Generally that's what the kernel I/O scheduler is for.

------
sn
Things I've learned from using SSDs at prgmr:

Since the firmware is more complicated than a hard drive's, they are way more
likely to brick themselves completely instead of degrading gracefully.
Manufacturers can also have nasty firmware bugs like
[https://www.techpowerup.com/261560/hp-enterprise-ssd-
firmwar...](https://www.techpowerup.com/261560/hp-enterprise-ssd-firmware-bug-
causes-them-to-fail-at-32-768-hours-of-use-fix-released) . I'd recommend using
a mix of SSDs at different lifetimes, and/or different manufacturers, in a
RAID configuration.

How different manufacturers deal with running SMART tests under load varies
drastically. Samsung tests always take the same amount of time. The length of
Intel tests varies depending on load. Micron SMART tests get stuck if they are
under constant load. Seagate SMART tests appear to report being at 90% done or
complete, but the tests do actually run.

Different SSDs are also more or less tolerant of power changes. Micron SSDs
are prone to resetting when a hard disk is inserted into the same backplane
power domain, and we have to isolate them accordingly.

Manual overprovisioning is helpful when you aren't able to use TRIM.

What a drive does on an enhanced secure erase can differ too. Some drives only
change the encryption key and then return garbage on read. Some additionally
wipe the block mappings so that reads return 0.

~~~
wtallis
Have you found any real value to instructing in-service SSDs to run SMART
self-tests, vs simply observing and tracking the SMART indicators over time?

~~~
sn
It's not as valuable in and of itself as monitoring the SMART counters. We've
only had a single SSD report failures during a long test, and it also reported
an uncorrectable error. However, not finishing the test is a good proxy for
whether a drive is overloaded and less able to perform routine housekeeping.
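
For what it's worth, the counter-watching approach can be as simple as
scraping smartctl on a schedule. A rough sketch; the device paths are examples
and the attribute names are vendor-specific, so yours will differ:

    import subprocess

    DEVICES = ["/dev/sda", "/dev/sdb"]             # example device list; needs root
    WATCH = ("Media_Wearout_Indicator",            # Intel-style name
             "Wear_Leveling_Count",                # Samsung-style name
             "Reallocated_Sector_Ct",
             "Reported_Uncorrect")

    def smart_attrs(dev):
        """Parse `smartctl -A` output into {attribute_name: (value, raw)}."""
        out = subprocess.run(["smartctl", "-A", dev],
                             capture_output=True, text=True).stdout
        attrs = {}
        for line in out.splitlines():
            f = line.split()
            # attribute rows: ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW
            if len(f) >= 10 and f[0].isdigit():
                attrs[f[1]] = (int(f[3]), f[9])
        return attrs

    for dev in DEVICES:
        found = smart_attrs(dev)
        for name in WATCH:
            if name in found:
                print(dev, name, found[name])
    # Feed the output into whatever time-series/alerting you already run, and
    # alert on trends rather than waiting for a self-test to notice.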

------
Stratoscope
"Website is sleeping"

000webhost lives up to its name!

In the meantime:

[https://archive.is/20200115095916/http://kcall.co.uk/ssd/ind...](https://archive.is/20200115095916/http://kcall.co.uk/ssd/index.html)

~~~
geek_at
strange. archive.is is resolving to 127.0.0.5 for me

even if I do

    nslookup archive.is 1.1.1.1
    Server:   one.one.one.one
    Address:  1.1.1.1

    Name:     archive.is
    Address:  127.0.0.5

~~~
Stratoscope
Here are a couple more mirrors:

[https://webcache.googleusercontent.com/search?q=cache:http%3...](https://webcache.googleusercontent.com/search?q=cache:http%3A%2F%2Fkcall.co.uk%2Fssd%2Findex.html)

[http://cc.bingj.com/cache.aspx?q=url:http%3A%2F%2Fkcall.co.u...](http://cc.bingj.com/cache.aspx?q=url:http%3A%2F%2Fkcall.co.uk%2Fssd%2Findex.html&d=4855675377620122&mkt=en-
WW&setlang=en-US&w=1oy47Uw_sjWgG1ZfbJ2ltPcVHIfT1mID)

Hopefully one of them will work for you.

~~~
lucb1e
Those will stop working some time soon. Long-term mirror:
[https://web.archive.org/web/20200115163630/http://kcall.co.u...](https://web.archive.org/web/20200115163630/http://kcall.co.uk/ssd/index.html)

------
cattlemansgold
There are a few technologies that I’ve tried very earnestly to understand,
only to find out that it’s basically black magic and there’s no use in trying
to understand it. Those things are modern car transmissions, nuclear reactors,
and SSDs.

~~~
24gttghh
A very basic nuclear reactor can be explained pretty simply I think. You
enrich a bunch of let's say uranium. Pack it together in a rod, and put a
bunch of those rods in a pond. Those rods have controlled (ideally) nuclear
decay from their being in close proximity to other rods which generates a lot
of heat, which is transferred to a separate cooling loop that boils water to
make steam which drives an electric turbine.

Now I'm no nuclear scientist so please be forgiving with that description, but
that's how I understand them to work :)

I can't even begin to explain how an SSD works, but I know there are no moving
parts besides electrons.

edit: moved the "(ideally)"

~~~
cattlemansgold
I guess what I meant is that the overall concepts are understandable. Nuclear
fuel gets hot, boils water, drives a turbine. For transmissions, different
sized gears allow things to turn at different rates.

But as soon as I dive into the details, I get lost. How exactly can you
control the nuclear decay? How exactly do the gears in the transmission move
around and combine with each other to create a specific gear ratio? These
concepts are probably pretty simple for a lot of people, but they just make my
head spin.

~~~
toast0
> How exactly can you control the nuclear decay?

That's what the control rods are for. The uranium in one fuel rod in isolation
decays at whatever natural rate, which would warm water but not boil it, and
placing the rods near each other allows for the decay products (high energy
particles) to interact with other fuel rods and induce more rapid decay.

The control rods slot in between the fuel rods, and absorb the decay products
without inducing further nuclear decay. Usually these are graphite rods.

> How exactly does do the gears in the transmission move around and combine
> with eachother to create a specific gear ratio?

It really depends on the specific transmission. A manual transmission uses the
shift lever movement to move the gears into place. An automatic transmission
most likely uses solenoids to move things. (A solenoid is basically a coil of
wire around a tube with a movable metal rod inside; when you put current
through the wire, the metal rod is pulled into the tube. You attach the larger
thing you want to move to the end of the rod, sometimes with a pivot or
whatnot, and use a spring, another solenoid, gravity, etc. to make the reverse
movement.) A solenoid by itself gives you linear movement; if you need
rotational movement, one way to get it is to put a pivot on the end of the
solenoid rod, then a rod from there to one end of a clamp on a shaft, so that
when the solenoid pulls in its rod, the shaft rotates (this is the basic
mechanism for pinball flippers).

~~~
labawi
> The control rods slot in between the fuel rods, and absorb the decay
> products without inducing further nuclear decay. Usually these are graphite
> rods.

AFAIU graphite rods increase fission by slowing (not capturing) neutrons,
which in turn have a better chance of propagating further fission, because...
physics.

Quite nifty actually: without the moderator, the fuel won't burn.

~~~
toast0
I think you're right; I misinterpreted the term 'graphite-moderated reactor'
to mean something it doesn't. Graphite will slow the neutrons so they react
more. Also, the Chernobyl reactor design has graphite tips on its control
rods, which I misremembered as the primary substance of the rod.

The primary substance of the control rods is (usually) a neutron absorber, and
most reactors with control rods have a passive safety system, so gravity and
springs will force the control rods in to significantly slow the reaction
unless actively opposed by the control system.

The Chernobyl rods had graphite ends so that when fully retracted, the reactor
output was higher than if there was simply no neutron absorber present.
Unfortunately, this also meant that going from fully retracted to fully
inserted would increase the reactivity in the bottom of the reactor before
reducing it; in the disaster, this process overheated the bottom of the
reactor, damaging the structure so that the control rods got stuck, and then
really bad things happened.

Long story short, most control rods don't have graphite. ;)

------
mjw1007
I hoped it would discuss how SSDs cope with sudden power loss, but it doesn't
seem to.

I remember this page but I don't know of a modern update:
[http://lkcl.net/reports/ssd_analysis.html](http://lkcl.net/reports/ssd_analysis.html)

These days, if I want an SSD for my desktop and want to minimise the chance I
have a disk problem and have to restore from backup, would I be better off
with one "data centre" drive (eg Intel D3-S4510), or two mirrored "consumer"
drives (perhaps from two different manufacturers)?

The prices look similar either way.

~~~
bb88
The big problem with one data center drive is that if that goes bad, you still
lose all your data. You're assuming their marketing MTBF is correct.

They do make NVME raid solutions now -- with the advantage being that NVME can
be faster than SATA. And there are various price points for the NVME drives
depending upon speed.

This one from 2018 (not sure if it has full raid or uses VROC)(EDIT: it
requires software raid)

[https://www.pcworld.com/article/3297970/highpoint-7101a-pcie...](https://www.pcworld.com/article/3297970/highpoint-7101a-pcie-
nvme-raid-card-review.html)

This one is cheaper but relies upon Intel VROC (which has been hard to get
working on some mobo's apparently)

[https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQ...](https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z/)

In either case you're looking at a max throughput of 11 gigabytes per second,
which is roughly 20 times faster than SATA 3 (6 gigabits per second on the
wire, or about 600 megabytes per second in practice).

~~~
wtallis
Almost all NVMe RAID products—including both that you've linked to—are
software RAID schemes. So if you're on Linux and already have access to
competent software RAID, you should only concern yourself with what's
necessary to get the drives connected to your system. In the case of two
drives, most recent desktop motherboards already have the slots you need, and
multi-drive riser cards are unnecessary.

~~~
SlowRobotAhead
PERC H740P controllers in Dell servers, IIRC, are hardware RAID for the
flex-port U.2 and backplane PCIe NVMe drives.

~~~
wtallis
Yes, that's one of the cards that use Broadcom/Avago/LSI "tri-mode" HBA chips
(SAS3508 in this case). It comes with the somewhat awkward caveat of making
your NVMe devices look to the host like they're SCSI drives, and constraining
you to 8 lanes of PCIe uplink for however many drives you have behind the
controller. Marvell has a more interesting NVMe RAID chip that is fairly
transparent to the host, in that it makes your RAID 0/1/10 of NVMe SSDs appear
to be a single NVMe SSD. One of the most popular use cases for that chip seems
to be transparently mirroring server boot drives.

~~~
SlowRobotAhead
So stay under 8 physical NVME and it should be fine?

~~~
wtallis
A typical NVMe SSD has a four-lane PCIe link, or 2+2 for some enterprise
drives operating in dual-port mode. So it usually only takes 2 or 3 drives to
saturate an 8-lane bottleneck. Putting 8 NVMe SSDs behind a PCIe x8 controller
would be a severe bottleneck for sequential transfers and usually also for
random reads.
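
The arithmetic behind that, using rough PCIe 3.0 figures (about 985 MB/s
usable per lane after encoding overhead):

    # Rough PCIe 3.0 throughput figures; real drives and links fall a bit short.
    per_lane_gbs  = 0.985                     # ~GB/s usable per PCIe 3.0 lane
    uplink_gbs    = 8 * per_lane_gbs          # x8 controller uplink: ~7.9 GB/s
    per_drive_gbs = 4 * per_lane_gbs          # typical x4 NVMe SSD: ~3.9 GB/s

    print(f"drives to saturate the x8 uplink: ~{uplink_gbs / per_drive_gbs:.0f}")
    print(f"8 drives behind it could source ~{8 * per_drive_gbs:.0f} GB/s, "
          f"but the uplink caps out near ~{uplink_gbs:.0f} GB/s")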

~~~
SlowRobotAhead
I need to think about this for a second.

You’re saying the performance gains stop at two drives in raid striping.
RAID10 in two strip two mirror would still bottleneck at 8 total lanes?

I also need to see about the PERC being limited to 8 lanes - no offense - but
do you have a source for that?

Edit: never mind on source, I think you are exactly right [0] Host bus type
8-lane, PCI Express 3.1 compliant

[https://i.dell.com/sites/doccontent/shared-content/data-
shee...](https://i.dell.com/sites/doccontent/shared-content/data-
sheets/en/Documents/Dell-PowerEdge-RAID-Controller-H740P.pdf)

To be fair, they have 8GB of NV RAM, so it's not exactly super clear cut how
obvious a bottleneck would be.

------
Ariez
I was under the impression that if you do not encrypt an SSD from the first
use, then any attempt at overwriting with 0s is futile, as well as any other
method to securely delete the files. The files will be easily recovered.

This guy seems to say the opposite, in that the files are "simply not there
anymore", contrary to everything I've read: who's right here?

~~~
blattimwind
In practice all SSDs are always encrypted because they use the encryption to
whiten the data written to them. That's why "Secure Erase" takes less than a
second on SSDs, it doesn't erase anything but the key.

~~~
nemo1618
Interesting! Do you happen to know which encryption algorithm is used? I would
think that, if the only goal is whitening (as opposed to robust security), a
fairly weak algorithm would be used, or perhaps a strong algorithm with a
reduced number of rounds.

~~~
sigstoat
the hardware is going to use AES because their ASIC vendor will have well
tested AES IP that they can just throw down on the chip. any other algorithm
would require massive development effort for zero benefit.

and by using AES they can probably claim to satisfy some security standards
that will make their marketing people happier.

------
ololobus
"As for writes, at 1 gb a day - far more than my current rate of data use - it
would take the same 114 years to reach 40 tb."

Maybe it works for the author's very specific system or use case, but on my
personal MBP laptop, with a very occasional usage pattern (some days during
the week I do not use it at all), I end up with 10 GB of writes per day on
average. At that rate it would be only 11.4 years, which is not that many. And
I do not do anything very disk-expensive on it like downloading torrents or
database testing, only general development tasks, watching online videos, web
surfing, docs, etc.

~~~
ls612
I have an NVME SSD as a boot drive in my desktop and three years and change in
(say 1200 days) it’s used 40tb of write (out of an advertised 400tb endurance
so I’m not too worried). That works out to about 30gb of writes per day, which
seems about right for medium to heavy use.

I guess what I’m saying is that for modern SSDs I don’t think write endurance
is a binding constraint in most cases.

~~~
wtallis
For client/consumer SSDs, most vendors seem to view 50GB/day as plenty of
write endurance for mainstream, non-enthusiast products. Virtually all retail
consumer SSDs have a warranty that covers at least that much, and usually
several times more for larger drive capacities (since write endurance is more
often specified in drive writes per day).
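
The conversion between TBW-style and DWPD-style endurance ratings is simple
arithmetic; the drive and warranty terms below are made up for illustration:

    # Hypothetical drive: 1 TB capacity, 600 TBW rating, 5-year warranty.
    capacity_gb    = 1000
    rated_tbw      = 600
    warranty_years = 5

    gb_per_day = rated_tbw * 1000 / (warranty_years * 365)
    dwpd       = gb_per_day / capacity_gb
    print(f"~{gb_per_day:.0f} GB/day, i.e. {dwpd:.2f} drive writes per day")
    # ~329 GB/day and ~0.33 DWPD -- far above a 30-50 GB/day desktop workload.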

~~~
rasz
Coincidentally, 50GB/day is what Chrome craps out to the hard drive during a
few hours of usage, doing such clever things as "caching" YT videos (the YT
player NEVER reuses the data, and rewinding always generates a new server
fetch with a new randomly generated URL).

------
mxfh
I get that it's meant to be subjective, and the premise makes it clear that
it's an ongoing knowledge collection, but for that some sources would be nice.

Currently my troubles with SSDs in PC/server/NAS environments are somewhat
more practical, more about compatibility: NVMe vs. SATA, M.2 key types, PCIe
port bifurcation support vs. PLX switches, none of which are even mentioned.
Advice on this is notoriously hard to find; resorting to trusting rare and
random forum posts is the current state of my knowledge there.

~~~
shifto
On forums like ServeTheHome all this information is readily available. It's
spread between some guides and posts but if you have a specific question those
people will be able to answer it most of the time.

------
computator
> _Deleted file recovery on a modern SSD is next to impossible for the end
> user_

OK, he’s talking about SSDs, but I want to mention that I’ve easily recovered
many large deleted files from SD or micro-SD cards (formatted as FAT32 or
exFAT) using Norton Unerase or an equivalent utility.

Are the controllers for SSDs that different from the controllers for SD cards?
Has anyone tried Norton Unerase or an equivalent program on an SSD? I’d like
to hear a first hand account to help confirm (or deny) what the author claims.

~~~
toast0
It would likely depend on whether the filesystem uses TRIM to notify the SSD
of deleted sectors.

If it doesn't, the deleted files should retain their data until the filesystem
reuses the sectors, as normal. If TRIM is used, the SSD doesn't have to retain
the data, but that doesn't necessarily make it unreadable immediately; there
are many implementation strategies for TRIM.

~~~
wtallis
Enterprise SSDs for the most part promise that reading a trimmed range of the
drive will return zeros. Consumer SSDs usually won't make that strong of a
guarantee, so that if you're running enterprise software that requires this
behavior you have to pay extra for enterprise drives. As originally
formulated, the TRIM command was supposed to be more of a hint that the SSD
could ignore if it was too busy or if the TRIMMed block was too small for the
drive to do anything useful with.
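
For SATA drives you can see what the drive itself advertises in its ATA
identify data. A small sketch shelling out to hdparm (needs root; the device
path is a placeholder, and NVMe drives report the equivalent elsewhere):

    import subprocess

    DEV = "/dev/sdX"   # hypothetical SATA SSD
    # `hdparm -I` dumps the ATA identify data; SATA SSDs report TRIM support there,
    # and drives that promise zeroes print "Deterministic read ZEROs after TRIM".
    info = subprocess.run(["hdparm", "-I", DEV],
                          capture_output=True, text=True).stdout
    for line in info.splitlines():
        if "TRIM" in line:
            print(line.strip())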

------
0xff00ffee
Excellent summary. One thing he left off: some SSDs continue to copy/erase
blocks even if there is nothing new to write, because multi-level cell state
does degrade over time. There is a concern that some MLC drives will suffer
bit corruption over time if not regularly powered up to allow this to happen
in the background. Citation needed: I only recall this from when I was
interviewing to work for Western Digital many years ago.

~~~
wtallis
This problem was most prominent right before the switch to 3D NAND, when
planar NAND dimensions were at their smallest and the consumer market had
already mostly switched over to 3 bit per cell TLC rather than 2bpc MLC. In
the worst case, we were down to about 8 electrons difference between cell
voltage states. That's now been relaxed by 3D NAND allowing for larger cell
sizes, and most 3D NAND also switched from floating-gate to charge-trap cell
design so leakage is less of an issue. Nowadays, data retention in SSDs is
only a concern toward the end of their lifespan (as measured by write
endurance), and it's probably inadvisable for the SSD to start doing
background data scrubbing until the raw read error rate starts climbing.

~~~
0xff00ffee
Thank you for bringing me up to speed!

------
rahuldottech
There's some interesting content about SSDs and HDDs here:

SSDs:
[https://superuser.com/questions/tagged/ssd?tab=Votes](https://superuser.com/questions/tagged/ssd?tab=Votes)

HDDs: [https://superuser.com/questions/tagged/hard-
drive?tab=Votes](https://superuser.com/questions/tagged/hard-drive?tab=Votes)

------
fareesh
I wonder if anyone here has experienced something similar: I have a Samsung
Evo 860 SSD. Sometimes after powering on my desktop the BIOS either "forgets"
the drive (sets some other drive as primary boot drive) or doesn't recognize
it at all.

The non-recognition issue goes away after I power off and power on again.

It's been this way for about 8 months. It happens about 1 in every 15 times I
power on. I've heard it may have something to do with "sleep mode" or
something like that. I always shut down via software, though.

~~~
cptskippy
That sounds like the drive is failing to initialize in time. Have you tried
enabling a POST test or Boot Delay? I suspect the problem might magically go
away if you do.

~~~
fareesh
I'll try it out

------
plughs
I found these helpful. The first one has a link to a video which is pretty
ELI5

[https://flashdba.com/2015/01/09/understanding-flash-
floating...](https://flashdba.com/2015/01/09/understanding-flash-floating-
gates-and-wear/)

[https://www.youtube.com/watch?v=s7JLXs5es7I](https://www.youtube.com/watch?v=s7JLXs5es7I)

------
tjoff
> _The OCZ Myth: [...] with one overwrite pass of zeroes [...] A sort of
> RETRIM before that was invented._

It wasn't a myth, that was the idea all along.

> _SSD Defragmentation [...]_

An important factor left out is wear leveling: it doesn't make as much sense
to arrange data in "file-system order" when the bits on the drive move around.

------
creeble
Anyone know how TRIM works with Linux?

I find myself copying entire partitions between SSDs from time to time, is
there a utility to clear the destination SSD before copy?

Is it possible to do the same for an SD card, so that writing a new Raspi OS
to it doesn't do unnecessary garbage collection?

~~~
leetcrew
I believe all major OSes support trim in a reasonable way now. afaik it needs
hardware support though, so idk how it would work for an SD card.

~~~
cm2187
Unless you have a RAID card in the middle.

~~~
leetcrew
I don't mean to say it wouldn't work, just that I don't know enough about sd
cards to say whether it would/should.

~~~
cm2187
Sorry, I meant for TRIM being passed down to an SSD.

~~~
leetcrew
oh gotcha. not sure why I thought it was reasonable to think you were talking
about a bunch of SD cards in a raid array...

------
Rafuino
The author forgot about non-NAND SSDs (e.g. Optane SSDs). There's no garbage
collection to worry about, for example.

~~~
wtallis
Optane SSDs do need wear leveling. What makes it much simpler is that there
isn't the mismatch between small-ish NAND pages and massive NAND erase blocks,
so you don't have to suffer from the really horribly large read-modify-write
cycles.

------
pettycashstash2
Says the website is sleeping? Did he go over the bandwidth limit?

------
lostgame
It lost me at Comic Sans, and the further font and design choices made me
almost ill.

So I finally kicked it into Reader View, only to find a lot of questionable
spelling and grammar issues.

These kinds of basic things go a long way to making an article valuable.

------
lousken
site doesn't work - archive link
[https://archive.is/K9SFI](https://archive.is/K9SFI)

------
cptskippy
The CSS of this site manages to make reading this article equally
inconvenient and painful on both mobile and desktop.

~~~
amarshall
Indeed. Thankfully, Reader View in both Firefox and mobile Safari remedies it.

------
cbdumas
I love a minimal, text-only website as much as the next crotchety HN reader,
but a little bit of CSS goes a very long way in terms of readability.

Edit: Upon further inspection I see that this page was designed to be hard to
read. Very curious.

~~~
occamrazor
Reader view helps a lot, and with such a simple website it is guaranteed to
give a good result.

~~~
dr_zoidberg
Not just a lot; it actually makes it readable. However, I do feel this article
needs a few graphs and diagrams. I've written [0] about data recovery, and
SSD/flash storage is a massive beast to tackle even with diagrams to give you
a vague idea of what's going on.

[0] in old fashioned tree-based paper, unfortunately

~~~
ShroudedNight
Is there an available method to acquire some of these marked-up dead trees?

------
vira28
Thanks for writing.

Is it just me, or has anyone else noticed that the font is too tough to read?

~~~
Thessdauthor
I am the author of kcall.co.uk/ssd/index.html and I trust that the site is now
back up and running without further 'sleeping' episodes. The reason for the
interrupts was, as some have surmised, that the site was mentioned here and in
other places and that caused a surge in hits that exceeded the host's usage
limit. Whilst I am pleased that so many people have shown an interest in my
efforts it does mean that I have had to move the site to a more generous host.

It is apparent from the first sentence and the website itself that this
article is, or arises from, the musings of, shall we say, an amateur. It's not
professional because I'm not a professional. Whilst some of the comments in
this thread are helpful, some are baffling. To the person who couldn't get
past the header because Comic Sans offended him: the following 8058 words were
in Century Gothic, which is perhaps not so offensive. I have however accepted
the comments on readability and changed the header and all the rest of the
text to Calibri, made the font larger, and changed the line spacing to web
standards. I hope it is now more soothing and readable. As for the claim that
the text is 'spoiled by having dozens of typos and spelling mistakes' I'm not
sure what he is reading (or smoking). Apart from the removal of one stray
colon, the current text is entirely unchanged - it is the same file. I don't
claim to be perfect and I'm sure that in such a large essay there are some
typos and other errors, but to say that there are dozens is patently untrue.
Just let me know where they are and I will happily correct them.

Apart from all that, I am surprised that there are so few comments about the
veracity or otherwise of the contents. I had great difficulty in finding
material that was up to date and actually delved into the technicalities of
SSDs without baffling me, as much of it did.

I shall end by thanking those who found the article interesting and I hope
that some of it at least has been of some help.

