
Modern SSDs simply don't work like this; they all internally use some variant of log-structured storage, so that regardless of the user's write pattern a single continuous stream is generated and only one method is needed to distribute modified pages across the available flash. This means an infinite loop that rewrites the first 128 KB of the device with random data will eventually fill (most of) the underlying flash with random data (128 KB because that's a common erase block size).
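
To make that concrete, here's a toy sketch in Python (the class and names are hypothetical; a real FTL also does garbage collection and wear leveling, which this omits):

    # Toy log-structured FTL: every write lands at the log head, so
    # overwriting the same logical range still spreads across the flash.
    class ToyFTL:
        def __init__(self, num_physical_pages):
            self.flash = [None] * num_physical_pages  # physical pages
            self.mapping = {}                         # logical page -> physical page
            self.head = 0                             # log head position

        def write(self, logical_page, data):
            # Even an "overwrite" is appended at the log head; the old
            # physical copy is merely unmapped, not erased in place.
            phys = self.head % len(self.flash)
            self.flash[phys] = data
            self.mapping[logical_page] = phys
            self.head += 1

    ftl = ToyFTL(num_physical_pages=1024)
    # Rewriting the same 32 logical pages over and over still marches
    # the log head across all 1024 physical pages.
    for i in range(2048):
        ftl.write(logical_page=i % 32, data=f"random-{i}")
    print(sum(p is not None for p in ftl.flash), "of 1024 pages touched")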



Write patterns still matter until you're talking about something like megablock granularity. mmap will swap out pages at random (relative to on-disk layout), and page granularity is far smaller than a megablock. It's certainly possible for controllers to handle this properly, and I don't want to tell you that they never will, but even the very expensive PCIe flash we use at FB demonstrated this "bad behavior".

-----


Are there standard practices for securely erasing any random SSD without having to look up its implementation details? Or is this the sort of thing you just use a shredder for?

-----


Encrypt it and store the key anywhere except on the drive. To erase, simply destroy the key. Many motherboards come with a tamper-proof key storage device you can reset on command (the TPM). There's a SATA secure erase command, but it's been shown that multiple vendors have managed to botch its implementation. So if you can't make the encryption approach work, a shredder is probably still your best bet.
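
A minimal sketch of the crypto-erase idea in Python, assuming the third-party `cryptography` package; a real setup would be full-disk encryption (e.g. LUKS/dm-crypt) with the key sealed in a TPM, not a script:

    # Crypto-erase in miniature (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # stored anywhere but on the drive
    ciphertext = Fernet(key).encrypt(b"data as written to the SSD")

    # "Erasing" the drive is now just destroying the key: the ciphertext
    # left behind on the flash is computationally useless without it.
    del key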

-----


Standard practice in government and large enterprise is still physical destruction, for exactly the reason you mention.

http://www.monomachines.com/shop/intimus-crypto-1000-hard-dr... or you can get a service to come out and do it on site.

-----


Does it make any difference for append-only writes vs. in-place modifications?

-----


Can you cite anything for this? Wear leveling is not the same as log-structured storage.

-----


Indirection in a log-structured form is the best way to increase write IOPS and optimize write amplification. More sophisticated SSDs actually have multiple log heads for data with different lifecycle properties.

You get a write amp of 1 until the drive is filled the first time. After that, it's a function of:

1) how full the drive is (from the drive's point of view; this is why TRIM was invented)

2) the over-provisioning factor

3) usage patterns, such as how much static data there is

4) how good the SSD's algorithms are

5) other (should-be) minor factors, such as wear leveling
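
For intuition on 1) and 2), here's a toy, hypothetical simulation in Python (uniform random overwrites, greedy garbage collection; all parameters are made up, and nothing here resembles production firmware):

    import random

    PAGES_PER_BLOCK = 64
    NUM_BLOCKS = 128
    OP = 0.10                                   # over-provisioning fraction
    USER_PAGES = int(NUM_BLOCKS * PAGES_PER_BLOCK * (1 - OP))

    valid = [set() for _ in range(NUM_BLOCKS)]  # live logical pages per block
    programmed = [0] * NUM_BLOCKS               # pages written into each block
    where = {}                                  # logical page -> block
    free = list(range(NUM_BLOCKS))
    open_blk = free.pop()
    flash_writes = 0

    def program(lpn):
        """Append one page at the log head (host writes and GC moves alike)."""
        global open_blk, flash_writes
        if programmed[open_blk] == PAGES_PER_BLOCK:
            open_blk = free.pop()
        if lpn in where:
            valid[where[lpn]].discard(lpn)      # invalidate the old copy
        valid[open_blk].add(lpn)
        where[lpn] = open_blk
        programmed[open_blk] += 1
        flash_writes += 1

    def gc_if_needed():
        """Greedy GC: recycle the full block holding the fewest valid pages."""
        while len(free) < 2:
            victim = min((b for b in range(NUM_BLOCKS)
                          if b != open_blk and b not in free),
                         key=lambda b: len(valid[b]))
            for lpn in list(valid[victim]):
                program(lpn)                    # relocations are extra writes
            programmed[victim] = 0
            free.append(victim)

    HOST_WRITES = 300_000
    for _ in range(HOST_WRITES):
        gc_if_needed()
        program(random.randrange(USER_PAGES))

    print(f"write amplification ~ {flash_writes / HOST_WRITES:.2f}")

Write amp stays at 1 during the initial fill, then climbs once GC has to relocate live pages; raising OP or shrinking USER_PAGES brings it back down.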

Source: I used to be an SSD architect.

-----



