Self-encrypting deception: weaknesses in the encryption of solid state drives [pdf] (zdnet.com)
224 points by exceptione 3 months ago | 109 comments

The paper is here: https://www.ru.nl/publish/pages/909275/draft-paper_1.pdf.

(That was the submitted URL but download links don't work so well for HN stories.)

Litany of failures:

* Firmware protection in drives is almost uniformly broken, so that they can get code execution (through JTAG or through hacked firmware images) routinely. This is bad, but shouldn't be the end of the world, since in the drive encryption threat model you don't want to have to depend on the firmware anyways. But:

* Two Crucial SSDs encrypt the drive with a key unrelated to the password; the password check is enforced only with an "if" statement in the firmware code, which can be overridden.

* Another Crucial SSD uses PBKDF2 to derive keys, but then has a master override key, which is blank by default. It also has a multi-volume encryption interface (Opal) with slots for volume keys, all of which are populated whether they're in use or not, and if they're not in use, they're protected with an all-zeroes key that recovers the master key for the device.

* Two Samsung drives implement PBKDF2, but not in the default mode, which is "password is checked in an if statement, like the Crucial drive". Also, the wear-leveling logic in one of the drives doesn't zero out old copies of the master key, so that when you change your disk password (or set it for the first time), unprotected copies of the data encryption key are left in blocks on the device.

* The Samsung T3 portable drive uses the drive password in an "if" statement and is trivially unlocked through JTAG. Its successor, the T5, is no more cryptographically sound, but is simply harder to obtain code execution on.
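The "password is checked in an if statement" failure above is easy to model. A toy sketch (my own simplification, not actual firmware code): because the data-encryption key has no cryptographic relationship to the password, anyone with code execution can skip the check and read the key out directly.

```python
import hashlib
import os

# Toy model of the broken design: the data-encryption key (DEK) is
# generated independently of the password, so the password check is
# nothing but a branch in firmware.
class BrokenDrive:
    def __init__(self):
        self.dek = os.urandom(32)  # no cryptographic tie to the password
        self.password_hash = None

    def set_password(self, password: bytes):
        self.password_hash = hashlib.sha256(password).digest()

    def unlock(self, password: bytes):
        # The only "protection": an if statement.
        if hashlib.sha256(password).digest() == self.password_hash:
            return self.dek
        return None

    def unlock_via_code_execution(self):
        # An attacker with JTAG or a hacked firmware image skips the
        # check entirely and reads the DEK out of the key store.
        return self.dek
```

A sound design would instead wrap the DEK with a key stretched from the password, so no code path can hand out the DEK without it.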

People have strange ideas about what disk encryption is good for (in reality, full-disk encryption really only protects you from the situation where your powered-down, locked device is physically stolen from you and never recovered [if you get the drive back, you have to assume, at least from a cryptographic standpoint, that it's now malicious.])

But the net result of this work is that Samsung and Crucial couldn't even get that right. This paper is full of attacks where someone simply steals your drive and then unlocks it on their own. It's bananas.
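The wear-leveling failure above is also easy to model: flash translation layers write to fresh blocks rather than overwriting in place, so the logical view shows the new, wrapped key while the raw flash still holds the old, unprotected copy. A toy sketch (class and values are mine, for illustration only):

```python
# Append-only store mimicking a flash translation layer (FTL): a logical
# overwrite allocates a fresh physical block and leaves the old one intact.
class ToyFlash:
    def __init__(self):
        self.blocks = []   # physical blocks, never erased here
        self.mapping = {}  # logical address -> physical block index

    def write(self, addr, data):
        self.blocks.append(data)
        self.mapping[addr] = len(self.blocks) - 1

    def read(self, addr):
        return self.blocks[self.mapping[addr]]

flash = ToyFlash()
flash.write("keyslot", b"PLAINTEXT-DEK")         # before a password is set
flash.write("keyslot", b"PASSWORD-WRAPPED-DEK")  # after setting a password

# The logical view looks safe...
assert flash.read("keyslot") == b"PASSWORD-WRAPPED-DEK"
# ...but the raw flash still contains the unprotected key.
assert b"PLAINTEXT-DEK" in flash.blocks
```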

Great summary. I'll also note that FDE lets you quickly erase a drive by zeroing out a key. That can eliminate the need for overwriting and/or physically destroying the drive at disposal time. It can also help in situations where an attack is incoming and the operators want to avoid duress. That's why so-called Inline Media Encryptors and enclosures with zeroize buttons were used a lot in the defense sector. An example I modeled some designs on was ViaSat's, which the NSA recommended:


It also has 2FA, a trusted path for PINs, and emanation shielding. Being Type 1 means they focused hard on the RNG, algorithm implementations, firmware/protocol code, and error-handling code, which is typical for high-assurance crypto. It's just one of a few internal and external enclosures that do the job right, thanks to security regulation for certain products. We can't buy them, of course, since they actually work and the NSA uses them.

Then, unregulated vendors in the private market do the least they can for actual quality/security, do the most they can on marketing it, and make a killing with little or no liability for preventable defects. Typical. I still maintain that pretty much all of them are insecure and/or negligent until proven otherwise.

The high-assurance community selling to defense shows they could've built better designs 10-20 years ago if they had just followed standards, had a specialist for crypto, and hired some experienced breakers to check the work. Peanuts compared to these big companies' profits, and useful in protecting the IP that generates those profits, too. They still do this shit... Always will without better incentives and/or regulation...
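The crypto-erase point above can be sketched in a few lines. The XOR keystream below is a stand-in for a real cipher, purely for illustration: the idea is that destroying the 32-byte key is equivalent to destroying every byte of data encrypted under it, no matter how large.

```python
import hashlib
import os

# Toy illustration (not a real cipher) of crypto-erase: data is encrypted
# under a data-encryption key (DEK); destroying the tiny DEK renders the
# whole body of ciphertext unrecoverable, which is why zeroizing a key
# slot "erases" a drive in one small write.
def keystream_xor(key, data):
    out = bytearray()
    for i in range(0, len(data), 32):
        # Derive a per-block pad from the key and a block counter.
        pad = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        chunk = data[i:i + 32]
        out.extend(a ^ b for a, b in zip(chunk, pad))
    return bytes(out)

dek = os.urandom(32)
ciphertext = keystream_xor(dek, b"terabytes of user data, notionally")

# Crypto-erase: overwrite the 32-byte key, not the terabytes of data.
dek = b"\x00" * 32
assert keystream_xor(dek, ciphertext) != b"terabytes of user data, notionally"
```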

> physically stolen from you and never recovered

... or taken out of service and scrapped/resold/otherwise leaves your control. Which all drives eventually will.

Are there regulations in any industry which require the name of software authors on binaries/products, or at least the name of someone who approved the security claims made by the device?

Absolutely not.

... but in this particular case, people used their e-mail addresses as PBKDF2 salts, so we do have some idea. ;-)

(Of course, blaming whatever engineer isn't useful.)

Buy drives with FIPS 140-2 certification. At least then you know a third party lab has reviewed the implementation.

It is almost definitely not the case that FIPS certification of any kind reliably informs you that a capable crypto engineer verified the implementation.

Respectfully, I'll need a citation on that.

Sure. Find any device crypto vulnerability, and check to see if the device was FIPS 140-2 certified. For instance, Infineon with their batshit RNG (the ROCA vulnerability).

Does your argument boil down to "any review process that doesn't catch 100% of problems is worthless"? I don't think anyone claims FIPS certification guarantees perfection, but it's better than nothing. It also stipulates things like tamper-evident seals, which would mitigate attacks that need JTAG access to muck with the firmware.

If someone lifted your drive (the thing FDE is supposed to protect against), tamper-evident seals are not going to offer much in the way of mitigation.

that isn’t what the seals are for.

They keep the data USDA Grade A fresh.

So looking at the ROCA vulnerability, it looks like the problem was with how the primes were generated, not with the correctness of the actual crypto algorithm. FIPS typically certifies that the algorithm is correct, but it can't be faulted for bad primes.

So I'm still going to have to ask for an actual citation.

Edit: I also didn't see where that device is FIPS certified, either.

> FIPS typically certifies that the algorithm is correct, but it can't be faulted for bad primes

I think I rest my case.

How? I honestly don't follow your logic at all.

Treating a certification of algorithm correctness as a certification of a well engineered product is a poor choice, because many security critical things in the product are not considered in the certification.

That makes a lot of sense, thank you! I got the impression that the parent comment meant that the certification itself means nothing, hence my confusion.

I mean, a crypto certification that doesn't tell you much about the effective security of the product doesn't provide much value, does it? All it tells me is that someone thought it would increase their sales enough to justify the time and money. They may not have spent time and money to generate random numbers properly, and may just use 4 every time; they could probably still get a FIPS cert, because that's outside the scope of the certification.

Pretty much, yep.

In this SSD case, if they only certified the algorithm then all the SSDs would be fine (probably). The encryption itself is probably good and wouldn't be easily broken if you only had access to the encrypted data. It's the key generation and usage outside of the algorithm that is broken in these drives.

If the algorithm is FIPS 140-2 compliant but the key-handling/checking fails best practices...
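The key-handling failures in question are concrete: the paper describes unused Opal key slots "protected" with an all-zeroes key that recovers the master key. A toy sketch of that flaw (the XOR wrap and parameters are my own illustration, not the drive's actual scheme; real designs use AES key wrap):

```python
import hashlib
import os

# Toy key-wrap: XOR the master key with a pad derived via PBKDF2 from the
# slot's key-encryption key (KEK). Illustration only.
def wrap(kek, master_key):
    pad = hashlib.pbkdf2_hmac("sha256", kek, b"slot-salt", 100_000,
                              len(master_key))
    return bytes(a ^ b for a, b in zip(master_key, pad))

unwrap = wrap  # XOR wrapping is its own inverse

master_key = os.urandom(32)
# An unused Opal slot, "protected" with the all-zeroes key:
unused_slot = wrap(b"\x00" * 32, master_key)
# The all-zeroes key is public knowledge, so anyone recovers the master key:
assert unwrap(b"\x00" * 32, unused_slot) == master_key
```

The algorithm (PBKDF2, the cipher) can be perfectly correct here; the design around it is what gives the game away, and that design is exactly what an algorithm certification does not cover.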

Why would anyone be surprised about this?

If you can't read a whitepaper and inspect the algorithm to the degree that you could create your own provably-compatible implementation, or easily inspect the vendor's source code, then you should just assume it's implemented incompetently and is completely and utterly compromised.

The same goes for all cellular/mobile phone encryption standards, proprietary VPN solutions, proprietary DRM, and crypto used in banking (Chip-and-PIN, etc.). All compromised. Period. Don't trust it. All crypto implementations should be guilty until proven innocent under serious peer review.

The same attitude should be taken with anything not encrypted. Yes your ISP is spying on your browsing habits, logging everything, and probably will one day sell that data. Yes your bank is analyzing your spending habits. Just assume it's happening. There's enough evidence out there now that this is the new reality.

Really, not so much.

The problem here isn't so much with transparency as with delegating cryptography to non-cryptographers (or, for that matter, trusting that other vendors have had cryptographers review their designs). Based on the track record not only of open source cryptography but also of internal review of closed-source cryptography (that is: closed source software reviewed by the vendor that wrote it), having access to designs and code is of marginal importance. Crypto bugs of comparable severity have lived for many, many years in these systems.

What you actually want is an assurance that no link in the security chain of a system protected by cryptography depends on a cryptographic capability that hasn't been formally assessed. Ideally, you want the specific name of the person who assessed it. For instance: Intel's SGX setup has been audited by third parties and was also designed and in some sense overseen by Shay Gueron. You will never see any of that underlying code, but you would probably be crazy to trust a comparably ambitious un-vouched cryptographic enclave more than SGX. Similar story with Amazon and KMS (again: Shay Gueron, but also in-house crypto experts at AWS).

If anything, this story illustrates how weak an obstacle closed source really is. This is an academic project, and they tore up something like 8 different drives and game-overed most of them.

Of course, I agree with the broader point that Microsoft shouldn't have trusted hardware cryptographic capabilities they clearly knew nothing about (Microsoft also has sharp crypto people, all of whom would have barfed all over any of these designs).

Sorry, no. I don't trust Shay Gueron either, and even if I did, I wouldn't trust what Intel, AWS, or anybody else did after he left the building. Same goes for WhatsApp. It might be based on Moxie's protocols and excellent work, and I might trust Moxie, but I still don't trust WhatsApp to be eavesdrop-free, because I can't easily create compatible clients, inspect their protocol, or look at their code.

That's not to say you shouldn't use TPMs, HSMs, or CPU enclaves. It's just that, if at all possible, you shouldn't elevate them to being the only thing protecting you. In fact, most of these devices can be coupled with software encryption by splitting secrets between them and the user's brain. My Android phone, for instance, is encrypted but only protected by a 6-digit PIN. I'm aware that I'm totally dependent on the SoC vendor here to rate-limit and otherwise fend off brute-force attacks and protect the key ultimately protecting my data. But you know what? The encryption itself is still done in software and can be verified, and I have the choice to use a ridiculously long passphrase instead.
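The secret-splitting idea can be sketched roughly like this (my own illustrative scheme, not any vendor's implementation): stretch the low-entropy passphrase, then bind it to a secret that never leaves the hardware, so neither a broken enclave nor a stolen PIN alone yields the disk key.

```python
import hashlib
import hmac
import os

# Illustrative derivation: final disk key depends on BOTH a hardware-held
# secret and the user's passphrase.
def derive_disk_key(hw_secret, passphrase, salt):
    # Stretch the (possibly low-entropy) passphrase first...
    stretched = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 600_000)
    # ...then bind it to the secret that never leaves the SoC/TPM.
    return hmac.new(hw_secret, stretched, hashlib.sha256).digest()

hw_secret = os.urandom(32)  # held inside the hardware in a real design
salt = os.urandom(16)

k = derive_disk_key(hw_secret, b"123456", salt)
# The right PIN with the wrong hardware secret yields a different key:
assert derive_disk_key(os.urandom(32), b"123456", salt) != k
```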

What Microsoft did is choose a stupid default, and put their users at unknown risk.

You can have whatever criteria you'd like for trust. I'm familiar enough with the "open source security" argument on Hacker News to know where this discussion converges.

I'm just offering a counterpoint:

1. Closed source wasn't too much of an obstacle for a pair of interested researchers to independently validate, without assistance from vendors, the firmware and hardware crypto interfaces on a bunch of drives.

2. The determinant of whether something is likely to be cryptographically sound is less likely to be "open source" (open source is full of cryptographically unsound things, many of which won't be ascertained as such for years to come) than whether you know the identities of the people who verified the design.

1. There's a difference between kicking the tyres and looking under the bonnet of a car, and taking the schematics and building your own. It's easier to show something is crap than convince people that it is good when you're outright hiding things. If the things you aren't hiding (well) are crap, then god only knows how terrible the things you are really hiding are.

2. Making claims based on the calibre of the companies and individuals involved doesn't work. People don't trust DJB's algorithms simply because of his personal track record. He also publishes whitepapers about their design, and demonstrates their advantages over the state of the art. That's hard science, not "it must be good because DJB is a rock star". The fact that people can then take his designs and implement them themselves, in the open, is also important.

I don't know many people who actually work in cryptography who believe that verification is cheaper and easier than building systems.

People absolutely do, for the most part, trust DJB because of his track record. Find 10 people who have adopted Poly1305 and ask them what Poly1305 does better than GCM. That's not to say their decision is uninformed; rather, they've delegated a very specialized part of the decision that very few people are actually qualified to make out to an expert they trust, and trust for good reason.

In practical cryptography, the opposite approach --- people who trust nothing until they verify it from first principles --- is extremely problematic. That's how you end up with oddball libraries that only look like they can do a secure scalar multiplication or reliably add two numbers together. As Bruce Schneier is fond of pointing out, everyone can design a system they themselves can't break.

> Find 10 people who have adopted Poly1305 and ask them what Poly1305 does better than GCM.

Two things, one of which cannot be credited to Poly1305 directly.

1. Poly1305 is easier to implement without introducing side-channels (modulo integer multiplication on some CPUs; in which case, I defer to Thomas Pornin on how to avoid them).

2. The Chapoly constructions that use Poly1305 do so in a way that leaking the one-time Poly1305 key doesn't buy you much. (This is the one that cannot be attributed to Poly1305 directly.)

Others may have better or different answers. I implemented it in PHP [1], so anyone reading this thread should keep that in mind. :)

[1] https://github.com/paragonie/sodium_compat/blob/f3b2d775a50d...

> delegating cryptography to non-cryptographers

Well, ok, but who is a cryptographer then? How can a non-cryptographer tell whom it's safe to delegate cryptography to? You can't even really complete a degree in cryptography (you can do a CS degree with a specialization in cryptography, but that doesn't mean all that much). There's no meaningful regulatory standards body there; the CISSP is a joke and is focused on producing compliance reports anyway, rather than anything much to do with cryptography.

Those are all good points. It's not easy.

> Crypto bugs of comparable severity have lived for many, many years in these systems.

Do you have any examples of bugs as bad as "password is checked with if statement" living for many years in widely deployed open source software?

Heartbleed was added to OpenSSL two years before it was reported. Some of the nasty downgrade bugs (especially around export encryption) were probably there longer.

Yep, yep, and yep. I've said it enough to sound like a broken record:

Transparency is a dependency of trust.

If you can't see how it works, then it's objectively impossible to trust it to do the right thing. This applies to everything on a computer, really. Is your browser transparent? Is your operating system transparent? What about the drivers? Device firmware? That "security" solution that unexpectedly ushered in the Year of the MINIX Desktop™?

Of course, we sacrifice some of those points in favor of pragmatism, but if you ain't paranoid about your computer's lack of transparency, you should be.

It's not really about vendors being malicious. It's just that both the carrots and the sticks involved in providing secure products are weak. You can slap a "Uses 256-bit military grade encryption" sticker on your product, pull some half-arsed home-baked crap and make more money. When someone exposes you you just go "Darn, those hackers... ruining things for good everyday folk"

It's an attitude problem. Most people wouldn't buy a glass house where people on the street could watch you shower and poop, but that's what we have in tech.

Never said it was about malice. Incompetence and negligence are just as good of reasons for someone to be untrustworthy :)

>Transparency is a dependency of trust.

I think you go too far here. Trust is a dynamic spectrum, not binary. Transparency is one potential avenue to add trust, but is neither necessary nor sufficient in most cases. Every instance must be evaluated in light of its specific details, threat scenarios, and available tradeoffs.

>If you can't see how it works, then it's objectively impossible to trust it to do the right thing.

For most people, "seeing how it works" is utterly meaningless by itself, because few are able to properly evaluate the system (both knowledge and time play a role). Complex systems are simply beyond the capability of any human (or any system, for that matter) to retroactively fully evaluate. Transparency can help positively change the incentives all involved players face, lower the upfront cost of evaluation (which in turn may allow users to expand their circle of trusted parties to include actors with a more direct interest in their interests), allow more casual white-hat investigation even when there is no directly compelling need, etc.

In this specific case, yes, open should definitely be the standard, given the stakes, and because pure symmetric crypto is simple, fast, and long since standardized. There is no competitive advantage or secret algorithm to be had here. Furthermore, as something that is effectively "infrastructure", it can receive less attention than flashier stuff.

But transparency is not a panacea, and closed software can perfectly well be probed anyway (as this very paper, and every vulnerability ever found in proprietary software, demonstrates!). The biggest value of openness is less in finding issues than in fixing them. That was always the big driver of open source: not that it would somehow be magically bug-free by the power of a thousand eyes, but that when issues were discovered, users wouldn't be up shit creek because whoever made it had gone out of business or moved on to new products. That has happened constantly with proprietary products for decades, and the frustration with it is the huge practical driver for OSS. Over a long enough time span, a lot of proprietary software devs will die, pivot, or abandon useful, expensive "old" tools without support, and in turn their customers will get screwed. Transparency/openness can help with that, or at least offer more options. But in terms of just finding security vulnerabilities, it's much less of a big deal than it gets made out to be, which again should be obvious given all the security vulns constantly being found in proprietary software.

I did not read GP as saying it was a binary choice, but rather a prerequisite. That is to say, without transparency there can be no trust; but you can also have transparency without trust.

For someone like me, who is somewhat technically competent, but not a security expert, openness means that names and figures I do trust have the chance to examine whatever I intend to use and comment on it, observe it, and make assessments, including shaming poor implementations claiming traits they have no right to. Turns out there are plenty of people more competent than me that can catch these things.

Transparency also allows less hiding of incompetency in otherwise commercial and/or proprietary designs.

>I did not read GP as saying it was a binary choice, but rather a prerequisite.

That is a binary statement, though: saying it's a prereq is saying that without it, trust is zero.

>That is to say, without transparency, there can be no trust, but you can also have no trust with transparency.

That would clearly be incorrect, though. Transparency is not needed to probe a system; it mostly helps by changing the incentive structure for development in the first place (discouraging laziness), making the system more likely to be casually looked at, and allowing fixes when the original party is slow or absent. There can absolutely be trust without transparency when other factors are strong enough; it's just that transparency lowers the strength needed, and in turn helps in cases where those factors aren't there.

>For someone like me, who is somewhat technically competent, but not a security expert, openness means that names and figures I do trust have the chance to examine whatever I intend to use and comment on it, observe it, and make assessments, including shaming poor implementations claiming traits they have no right to. Turns out there are plenty of people more competent than me that can catch these things.

>Transparency also allows less hiding of incompetency in otherwise commercial and/or proprietary designs.

That's what I said, though? But are you arguing, in turn, that the typical iPhone is a lot easier to break into and less trustworthy than the typical Android or PC? It's the least transparent, after all. And what about the majority of the population, who lack even the technical meta-knowledge we have, let alone anything more? They still have to exercise trust, so how do you think that works from their perspective?

"Transparency is not needed to probe a system"

Maybe not strictly, but without transparency that probing becomes more difficult. It also suggests that there's something worth hiding, which is all the more reason to not trust it.

"names and figures I do trust have the chance to examine whatever I intend to use and comment on it, observe it, and make assessments, including shaming poor implementations claiming traits they have no right to" does not necessarily require openness - third party audits, if done by people who you do trust, can do the same thing for closed systems, for example as described in the tptacek's comment https://news.ycombinator.com/item?id=18385507 above.

Audits done by a select few who were given privileged access by definition are not open or transparent. You don't just want conclusions to be published, you want the methodology and source material to be too so the findings are reproducible.

You're right that transparency alone is insufficient. It's a dependency of trust, not the dependency. However, it's still absolutely necessary.

Sure, you can create your own variety of transparency by reverse-engineering something to figure out how it works (or hiring someone you trust to do so on your behalf), but if you have to resort to that, it's because you already (rightfully) don't trust the thing.

"For most people 'seeing how it works' is utterly meaningless by itself"

Well yeah, most people aren't going to be able to make sense of even a perfectly-transparent system of sufficient complexity. That's why they hire auditors to do it for them. It ain't about the user actually doing it; it's about the user being able to do it should the user have the time and technical competence to do so.

Taken another way: if you don't even have the ability to inspect a transparent thing, then what makes you think you can meaningfully inspect an opaque thing?

"closed software can perfectly well be probed anyway (as this very software and every single vulnerability ever found in proprietary software demonstrates!)"

Correction/clarification: closed source software can occasionally - and with a bit of luck and a heck of a lot more skill and time and effort - be probed for specific reasons to be untrustworthy. You're unlikely to ever reach the point where you have full understanding of the system (and if you do reach that point, then it's pretty transparent - to you at least - and therefore possible to be trustworthy, at least until the next update). Without transparency, there's always the possibility of something nasty (a fatal bug, or an overreaching telemetry "feature", or somesuch) lurking in the places yet to be probed. Sure, it's possible for those to hide in transparent (e.g. FOSS) programs, too (though I'd argue they're not being particularly transparent in those cases), but it's much easier to find that nastiness in a transparent program than an opaque program.

I've been moving more of my purchases back to cash. Not that I'm purchasing anything nefarious -- well, until I consider how e.g. purchasing medication X may be used against me, something I consider to be nefarious on the part of these companies doing and using said data mining.

Anyway, not that I'm purchasing anything nefarious. But to hell with them.

P.S. I also occasionally think about the 2-5% price markups we universally face, put in place to compensate for credit card processing fees. Although there are valid counterarguments about the risk and cost for businesses, and people in more personally vulnerable environments, of handling cash.

"We have analyzed the hardware full-disk encryption of several SSDs by reverse engineering their firmware. In theory, the security guarantees offered by hardware encryption are similar to or better than software implementations. In reality, we found that many hardware implementations have critical security weaknesses, for many models allowing for complete recovery of the data without knowledge of any secret. BitLocker, the encryption software built into Microsoft Windows, will rely exclusively on hardware full-disk encryption if the SSD advertises support for it. Thus, for these drives, data protected by BitLocker is also compromised. This challenges the view that hardware encryption is preferable over software encryption. We conclude that one should not rely solely on hardware encryption offered by SSDs."

we know how bad general code quality is in commercial vendor firmware, it's not surprising to see this

I still think it's shocking, and I imagine a lot of companies and professionals rely on hardware encryption on the presumption that it is safer than software-based encryption. I myself trusted the Opal certification too.

There was really no option for the vendors to stunt this feature.

For me, this paper is rather alarming and should be on the front page.

It's genuinely interesting to hear that this is the case. I mean, I don't disagree with you that it's important to have out in the open regardless. But I honestly assumed from the start that Opal would generally be a badly implemented feature checkbox, not something dependable. For crypto code like this, it didn't seem trustworthy unless it was open, standard, and trivially auditable. Unlike Apple, for example, these products don't have the market share, clear public profile, and constant open as well as covert attacks that would be necessary to grant a higher degree of confidence despite being closed; the barrier needs to be lower given the lower profile. So I've never used the hardware crypto in any SSD; it's always been software on top, although in fairness that has become easier as AES-NI has become near-universal in CPUs. That gives a lot of the assurance and speed of a dedicated hardware crypto chip while still using higher-level software.

I guess in retrospect it's quite reasonable that most people would trust it, or for that matter not think about it at all but rather trust Microsoft or whomever to be getting it right for them, and that checking the FDE box in the GUI would mean what it said. A good example of having personal blinders on, for me; I'm too used to just ignoring Opal entirely, since the day it was introduced.

> the presumption that it would be safer than software based encryption

Is this really a common presumption? Why are companies making security decisions based on mere presumption? Wikipedia has a citation from 2013 [1] that discusses a number of vulnerabilities exploiting constant-power hot swapping, so if you'd done any research at all during the last decade you wouldn't be so shocked.

It seems SEDs are merely a convenience as far as transparency and overhead go, and a last resort when proper software FDE isn't available. All this talk about BitLocker and such, rather than LUKS, suggests they're targeted at consumers, which would explain the shoddy engineering and proprietary specs.

[1] https://www.cs1.tf.fau.de/research/system-security-and-softw...

> We show, however, that depending on the configuration of a system, hardware-based FDE is generally as insecure as software-based FDE [like TrueCrypt and BitLocker]

Note that, despite Linux being mentioned in the paper and utilized for tests, dm-crypt/LUKS is not; only software solutions like BitLocker are. Likewise in the OP paper. Which makes me think this is a consumer-class vulnerability, given the focus the researchers take. Surely enterprises are using something other than SEDs?

I fully agree with you

EDIT: I found this link in the references section to a Travis Goodspeed talk (he writes a lot for PoC||GTFO) - it is phenomenal: https://www.youtube.com/watch?v=8Zpb34Qf0NY

What surprised me was that BitLocker could make use of it. I always assumed that software encryption would be much safer, but I had no idea that in-device hardware encryption could be leveraged by software.

> we know how bad general code quality is in commercial vendor firmware, it's not surprising to see this

Quite. My experience of firmware is that it's usually total junk, from Internet connected web cams, to HTTP routers, to backplanes for NAS devices.

I was putting together a FreeNAS box for home a couple of months ago and this was the first time I'd seen drives with a self-encrypting ability. I ignored it and went for software encryption.

> This challenges the view that hardware encryption is prefer-able over software encryption.

WAT? Who ever thought that? For exactly the reason you state, I certainly would not have recommended anyone to ever trust "hardware encryption". And it's not just that firmware tends to be terrible: We had plenty of "secure storage" thingies that weren't.

Properly implemented, HW encryption is better because it does not expose key material in RAM, even for a brief moment.

Well, yeah, sure, but who would possibly expect something like this to be implemented properly?

95%+ of consumers.

In case you're wondering whether this affects you (I know I was), the relevant command to run in an elevated command prompt is:

`manage-bde -status <drive_letter>`

Then look at the output for "Encryption Method". If it says something like "XTS-AES 128", I think that means you're using software encryption. If it mentions hardware encryption, then it's using it :) (more info: https://helgeklein.com/blog/2015/01/how-to-enable-bitlocker-...)

FWIW on my Win 10 install with a Samsung PM871 it was set to software encryption.
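For checking this across several machines, a small helper along these lines could interpret that output (the function names and the hardware-string match are my assumptions; manage-bde's exact wording may vary by Windows version, so treat this as a sketch):

```python
import re

# Hedged helper: extract the "Encryption Method" line from
# `manage-bde -status` output and classify it.
def encryption_method(status_output):
    m = re.search(r"Encryption Method:\s*(\S.*)", status_output)
    return m.group(1).strip() if m else None

def is_hardware_encrypted(status_output):
    # Assumption: hardware-encrypted volumes mention "Hardware" in the
    # method line; software AES shows e.g. "XTS-AES 128".
    method = encryption_method(status_output) or ""
    return "hardware" in method.lower()

# Sample line as it appears in a software-encrypted configuration:
sample = "Encryption Method: XTS-AES 128"
assert encryption_method(sample) == "XTS-AES 128"
assert not is_hardware_encrypted(sample)
```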

I never used HW disk encryption (other than the TPM), because I always saw it as unnecessary: it doesn't really improve on SW+TPM in terms of performance or compatibility, and could only introduce data-recovery and security issues.

There is no performance benefit; in fact, with modern CPUs that have crypto extensions it's often slower. And I never trust commercial solutions, ever since you could dump the key from the SanDisk/McAfee "secure" flash drives. All the previous HDD password-protection schemes, like the ATA passcode, were so shit I didn't even understand why people bothered with them in the first place.

The HW drives absolutely perform better if you have a RAID.

Is there a source for that? XTS-AES was actually slower with some of the drives. Block chaining for RAID is done on the RAID logical blocks, so I don't see how HW would be faster; in fact, until fairly recently HW encryption didn't really work with RAID setups, at least those available to consumers.

That was my point. Software AES-XTS gets slow when you use it on a RAID.

If you do use it on a RAID, put it under the RAID: make one decrypt device for each drive, then RAID those.
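A sketch of that layering with dm-crypt and mdadm (device names are hypothetical, commands run as root; this is one way to do it, not the only one):

```shell
# Open one dm-crypt device per physical drive (LUKS shown; a shared
# keyfile keeps unlocking two devices manageable).
cryptsetup luksFormat /dev/sda1
cryptsetup luksFormat /dev/sdb1
cryptsetup open /dev/sda1 crypt0
cryptsetup open /dev/sdb1 crypt1

# Build the RAID on top of the decrypted mappings, so each drive's
# encryption runs in its own kernel worker in parallel.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/mapper/crypt0 /dev/mapper/crypt1
mkfs.ext4 /dev/md0
```

The md layer only ever sees plaintext through the mappings, so mirrors stay consistent even though the on-disk ciphertext differs per drive.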

Does anyone have any info on whether this affects Macs as well? I recently bought an external Samsung T5 SSD. I'm using APFS with encryption. Does that mean it is using the broken hw encryption that Samsung provides or is OSX actually doing this properly?

If you are using FileVault you are not affected. The encryption is done in software.

As far as I know, no other popular full-disk encryption (LUKS, geli, FileVault) delegates to the SSD’s hardware encryption (by default, anyway) as BitLocker apparently does [0]. Anyone know otherwise?

[0]: https://docs.microsoft.com/en-us/previous-versions/windows/i...

Lots of enterprise SAN/NAS equipment uses SEDs, although one would hope those are actually secure considering how much they charge for it.

"although one would hope those are actually secure considering how much they charge for it"

Considering BitLocker requires a Windows purchase (unless you pirate it) and LVM costs exactly $0, I think it's safe to say that the amount of money thrown at it is not a good indicator of actual security.

Yeah. Would love to see if a Toshiba PX04/PX05 with SED/SED FIPS is vulnerable. That's a very expensive add-on.

I did not know that BitLocker relies on hardware encryption if the SSD has support for it. That seems like an extremely dangerous default to have, especially as the implementation is closed source in most (all?) cases.

But BitLocker itself is closed source. Why do you trust Microsoft more than the disk vendor? Further, if I were going to choose an opaque blob to trust, I would choose the one with the smaller attack surface.

Because historically, hardware vendors have had pretty bad security, and are usually not getting reviewed. There's also many of them, which makes any specific one less likely to undergo review.

Meanwhile, BitLocker has received at least some level of review, it is the most common disk encryption product for Windows, and Microsoft can be reasonably expected, based on past experience, to put somewhat competent people on it.

Additionally, at least for parts of BitLocker, there is at least high-level documentation how it is supposed to behave (e.g. https://docs.microsoft.com/en-us/windows/security/informatio..., there may be more detailed documentation elsewhere), plus there is likely reverse-engineered research available confirming the basic functionality.

Whether or not Microsoft can be trusted is not relevant. It's just very surprising to me that Microsoft itself blindly trusts disk vendors, instead of using their own encryption layer on top.

Ahh. I would guess that they aren't blindly trusting them. Given Microsoft's historical relationships with hardware vendors I would bet they have at least partially audited the firmware.

Yeah, I can imagine they'd conduct audits for firmware on the hardware they ship with their own products. I doubt they look at much beyond that though.

Because Windows, due to its attack surface, is much better researched and understood; Microsoft actually has a decent track record of investing in security; and it can be patched.

The attack surface of windows is many orders of magnitude larger and also includes the attack surface of the CPU itself. It is _much_ easier to do a good job securing simple firmware.

You are looking at it wrong: when the computer is on, the attack surface of hardware or software encryption is the same. If the OS, or any other major part like your CPU, is compromised, the attacker has everything.

When the computer is off, the software has zero attack surface, so your only options are a cold-boot attack against the computer (in which case it doesn't really matter whether it's HW or SW encryption, as long as the keys are in the TPM) or an offline attack.

With an offline attack, the attack surface of HW encryption, which might also store a copy of the key (encrypted or not), is now greater.

Also, attack surface alone is only a small part of the risk metric: how easy a flaw is to fix matters just as much, if not more, than how likely a vulnerability is. Firmware flaws, not to mention controller-level flaws in the cheapest SoC with AES encryption the SSD vendor could find, are a much, much harder thing to fix than a software solution.

If someone compromises your OS, then your data is compromised anyhow. For what FDE is supposed to protect against, namely unauthorized access when the device is out of your control and powered off, the software stack does not pose a greater attack surface.

Literally the only case in which the "software" solution might be more vulnerable is when your device is suspended with the key in memory, which means an attacker can attempt memory extraction through physical means (e.g. freezing the RAM and transferring it to a reader before the charge fades). In that case there is no guarantee that the HDD solution would be any better, nor any guarantee that you don't hold a copy of the key in memory regardless of which mode is used.

If the device is simply locked, then the HDD is in an unlocked mode anyhow; and if they can unlock your OS through some sort of exploit, then HW or SW, they still get your data.

And yet that still seems to be quite difficult, as evidenced by this very paper.

Sure. Security is hard.

It's a very difficult process to turn on and isn't something that happens automatically. You must flip the bit using the SSD manufacturer's tool, reinstall Windows, and then enable BitLocker. Even then, it only works with a handful of SATA SSDs and maybe 2 NVMe SSDs with 1 or 2 motherboards.

I knew something was fishy when I enabled Bitlocker on an OPAL compliant SSD and it took a few minutes to encrypt it. OPAL drives are supposed to be encrypted by default, and I expected Bitlocker to use that, but it didn't. If only one specific set of hardware is needed, why bother?

Check out sedutil it makes using opal drives easy.
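For reference, the basic sedutil flow looks roughly like this (based on the Drive-Trust-Alliance sedutil documentation; device name and password are placeholders, and the exact flags may differ by version):

```shell
sedutil-cli --scan                                  # list drives and their Opal support level
sedutil-cli --initialsetup mypassword /dev/sdX      # take ownership of the drive
sedutil-cli --enablelockingrange 0 mypassword /dev/sdX   # lock the global range at power loss
```

After that the drive refuses reads/writes until unlocked, typically via a pre-boot authentication image.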

BitLocker probably uses hardware-accelerated encryption if the CPU supports it, or maybe (unlikely) an encryption card. The implementation is closed source and I don't know if it is documented, but there also is an open-source implementation:


Also, GFD, there's yet another policy that has to be set to force things to be configured correctly since it literally doesn't ask (or even warn me that it's defaulting to assuming hardware encryption is desired)...

Computer Configuration\Administrative Templates\Windows Components\BitLocker Drive Encryption

Configure use of hardware-based encryption for fixed data drives


In case it hasn't been obvious by now, BitLocker has been designed to be compatible with law enforcement requests. That means, for one, that the vast majority of Windows users will never see it work by default on their machines, the way you see encryption enabled by default on Android and iOS devices. And second, most of those that do enable it will be relegated to the broken and/or backdoored encryption of the OEMs.

Thirdly, BitLocker itself may have a backdoor, or at the very least Microsoft continues to design it in such a way that they (and law enforcement) have, or can get, your private key when needed. I remember a while ago people were complaining that BitLocker keys are automatically saved to their OneDrive account, where Microsoft of course can see them.

I’m sure that the Department of Defense would like to know if you have proof that some of their security measures are backdoored!


Can I see some citations for your first set of claims? Because I know some law enforcement officers would be very happy to learn this.

Here's what I love the most about this: If you have a full-disk encrypted Windows laptop, which is fully powered down (or hibernated), and the laptop contains PHI, _and_ you lose the laptop, then you probably do _not_ have to report it as a data breach.


But with this revelation, if you have an affected SSD, and you are running Windows, then losing such a laptop may now be a reportable event.

There's a very strong argument that encryption should only be implemented in software, in something GPL, LGPL, BSD or Apache licensed so that its full source code can be examined by cryptographers. I'm extremely wary of vendor supplied hardware-based "crypto" that is totally opaque with what's going on under the hood.

IMO, it should be common sense at this point

BitLocker, ehh? Truecrypt.org, before its final demise, claimed it was obsolete due to FDE being native in Windows nowadays (i.e. BitLocker).

I remember this being widely interpreted as sarcasm, or a Straussian double meaning, at the time.


I wonder what this implies for SSDs that claim to support instantaneous ATA Secure Erase. Many SEDs, when issued this command, will simply discard their internal key and generate a new one, in theory rendering data irretrievable. I've encountered this method recommended over traditional wiping for reasons of security (you can't be sure all blocks are wiped when overwriting due to internal reservation), speed, and flash longevity.

My own personal policy is to allow SSD reuse internally, but to just physically destroy the drives at the end of useful life.
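For the record, the ATA Secure Erase sequence via hdparm looks like this (device name hypothetical; given the paper's results, treat the "key discard" as a best-effort measure rather than a guarantee):

```shell
hdparm -I /dev/sdX            # check the Security section: "supported", "not frozen"
hdparm --user-master u --security-set-pass p /dev/sdX   # set a temporary password
hdparm --user-master u --security-erase p /dev/sdX      # issue SECURITY ERASE UNIT
hdparm -I /dev/sdX            # security should read "not enabled" afterwards
```

On a compliant SED this completes in seconds because only the internal key is replaced; on others it rewrites the whole flash.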

I am the original inventor of self-encrypting drives. My 3-4 page comment is too long, so you can read it here: www.privust.com/sedlies. It describes the work at RU, its partial truths, and the lying tweets that followed. Robert Thibadeau, Ph.D.

You'll probably get more credibility if that's linked in some way with your page at CMU or your Linkedin profile.

I spent a lot of time this year looking at SSDs with encryption support. I've come to the conclusion that what is really needed is to separate the crypto hardware from the storage. Make a 2.5" drive sled, for example, that takes any M.2 and adds crypto to it. Then you can choose the best supplier for both crypto and flash independently.

Who would have thought that proprietary firmware encryption implemented by hardware vendors would be crappy?

Who would have thought water is dry?

Not related to encryption, but I enjoyed this article where they hack into the microcontroller on a typical HDD: http://spritesmods.com/?art=hddhack

I'm not surprised. My understanding was that encryption was originally added as a whitening step to the data, a product guy somewhere heard about it and added it as a checkbox.

Based on the presented analysis, TCG Opal encryption for the Samsung 850+ series appears to be secure.

I don't see a reason to still use software encryption. I would sooner expect a backdoor in CPUs than in SSDs. Almost nobody has the capability to even analyze the microcode for modern CPUs. It could contain a backdoor that stores N AES passwords in the CPU itself, sorted by the amount of data encrypted. Using AES-NI makes it relatively trivial. At worst, both SSD controllers and AMD/Intel CPUs would have backdoors like that, but if that's the case there's nothing to be done.

> I don't see a reason to still use software encryption.

Why not? Software encryption can still be hardware accelerated (thanks to encryption instructions in the CPU), and it is fast enough not to be a bottleneck unless you have a very fast IO device (and very fast in this case means either Optane or modern SSDs in RAID-0). Also, the impact on CPU usage/power consumption is low.
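One way to check the hardware-acceleration claim on your own machine, assuming OpenSSL is installed (the `OPENSSL_ia32cap` mask is the commonly cited one for disabling AES-NI; verify it against your OpenSSL version's docs):

```shell
# Throughput with AES-NI enabled
openssl speed -evp aes-256-cbc

# Same benchmark with the AES-NI capability bit masked off (pure software AES)
OPENSSL_ia32cap="~0x200000000000000" openssl speed -evp aes-256-cbc
```

On typical hardware the accelerated run is several times faster, which is why CPU-side encryption is rarely the bottleneck for a single drive.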

It would have been interesting to see the result with Intel SSD

Warning - this file will automatically download instead of opening up in the browser.

It's a direct link to a PDF, what it does depends on how your browser is configured. For me on FF it shows a dialog asking if I want to open or save.

Doesn't the website set a header or something like that for the preferred way of consuming it: in-browser viewing or downloading to disk?

We changed the link. The pdf is at https://www.ru.nl/publish/pages/909275/draft-paper_1.pdf.
