(That was the submitted URL but download links don't work so well for HN stories.)
* Firmware protection in drives is almost uniformly broken, so code execution on them (through JTAG or through hacked firmware images) is routinely achievable. This is bad, but shouldn't be the end of the world, since in the drive encryption threat model you don't want to have to depend on the firmware anyway. But:
* Two Crucial SSDs encrypt the drive with a key unrelated to the password; the password check is enforced only with an "if" statement in the firmware code, which can be overridden (see the sketch after this list).
* Another Crucial SSD uses PBKDF2 to derive keys, but then has a master override key, which is blank by default. It also has a multi-volume encryption interface (Opal) with slots for volume keys, all of which are populated whether or not they're in use; the unused slots are protected with an all-zeroes key that recovers the master key for the device.
* Two Samsung drives implement PBKDF2, but not in the default mode, which instead checks the password with an "if" statement, like the Crucial drives. Also, the wear-leveling logic in one of the drives doesn't zero out old copies of the master key, so when you change your disk password (or set it for the first time), unprotected copies of the data encryption key are left behind in blocks on the device.
* The Samsung T3 portable drive uses the drive password in an "if" statement and is trivially unlocked through JTAG. Its successor, the T5, is no more cryptographically sound, but is simply harder to obtain code execution on.
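To make the difference concrete, here's a minimal sketch (Python, with invented names; this is not the vendors' firmware, just an illustration of the two patterns above). In the broken pattern the data-encryption key exists independently of the password and an "if" statement merely gates access to it; in the sound pattern the key that wraps the DEK is derived from the password, so there is no branch to patch around.

```python
# Hypothetical sketch, not vendor code: names and structure are invented to
# illustrate the findings described above.
import hashlib

# Broken pattern: the DEK is stored independently of the password, and the
# password check is just a branch in firmware. Anyone with code execution on
# the controller can skip the branch and read the DEK.
def unlock_broken(stored_password_hash: bytes, attempt: str, stored_dek: bytes):
    if hashlib.sha256(attempt.encode()).digest() == stored_password_hash:  # just an "if"
        return stored_dek
    return None

# Sound pattern: the DEK is wrapped under a key derived from the password
# (PBKDF2 here). There is no branch to bypass; without the password the
# wrapped DEK is just ciphertext. (XOR stands in for a real key-wrap such as
# AES-KW, to keep the sketch short.)
def unlock_sound(salt: bytes, wrapped_dek: bytes, attempt: str) -> bytes:
    kek = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return bytes(a ^ b for a, b in zip(wrapped_dek, kek))
```

With code execution on the controller, the first check can simply be skipped; the second leaves an attacker with nothing but ciphertext and a password-guessing problem.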
People have strange ideas about what disk encryption is good for (in reality, full-disk encryption really only protects you from the situation where your powered-down, locked device is physically stolen from you and never recovered [if you get the drive back, you have to assume, at least from a cryptographic standpoint, that it's now malicious]).
But the net result of this work is that Samsung and Crucial couldn't even get that right. This paper is full of attacks where someone simply steals your drive and then unlocks it on their own. It's bananas.
Also has 2FA, a trusted path for PINs, and emanation shielding. Being Type 1 means they focused hard on the RNG, the algorithm implementations, the firmware/protocol code, and the error-handling code. Typical for high-assurance crypto. Just one of a few internal and external enclosures that do the job right, thanks to security regulation for certain products. We can't buy them, of course, since they actually work and the NSA uses them.
Then the unregulated vendors in the private market do the least they can for actual quality/security, do the most they can on marketing it, and make a killing with no or limited liability for preventable defects. Typical. I still maintain pretty much all of them are insecure and/or negligent until proven otherwise.
The high-assurance community selling to defense shows they could've built better designs 10-20 years ago if they had just followed standards, had a specialist for crypto, and hired some experienced breakers to check the result. Peanuts compared to these big companies' profits. Useful in protecting the IP that generates those profits, too. They still do this shit... Always will without better incentives and/or regulation...
... or taken out of service and scrapped/resold/otherwise allowed to leave your control. Which all drives eventually will be.
(Of course, blaming whatever engineer isn't useful.)
So I'm still going to have to ask for an actual citation.
Edit: I also didn't see where that device is FIPS certified, either.
I think I rest my case.
If you can't read a whitepaper and inspect the algorithm to a degree where you could create your own provably compatible implementation, or easily inspect the vendor's source code, then you should just assume it's implemented incompetently and is completely and utterly compromised.
The same goes for all cellular/mobile phone encryption standards, proprietary VPN solutions, proprietary DRM, and crypto used in banking (Chip and PIN, etc.). All compromised. Period. Don't trust it. All crypto implementations should be guilty until proven innocent under serious peer review.
The same attitude should be taken with anything not encrypted. Yes your ISP is spying on your browsing habits, logging everything, and probably will one day sell that data. Yes your bank is analyzing your spending habits. Just assume it's happening. There's enough evidence out there now that this is the new reality.
The problem here isn't so much with transparency as with delegating cryptography to non-cryptographers (or, for that matter, trusting that other vendors have had cryptographers review their designs). Based on the track record not only of open source cryptography but also of internal review of closed-source cryptography (that is: closed source software reviewed by the vendor that wrote it), having access to designs and code is of marginal importance. Crypto bugs of comparable severity have lived for many, many years in these systems.
What you actually want is an assurance that no link in the security chain of a system protected by cryptography depends on a cryptographic capability that hasn't been formally assessed. Ideally, you want the specific name of the person who assessed it. For instance: Intel's SGX setup has been audited by third parties and was also designed and in some sense overseen by Shay Gueron. You will never see any of that underlying code, but you would probably be crazy to trust a comparably ambitious, un-vouched-for cryptographic enclave more than SGX. Similar story with Amazon and KMS (again: Shay Gueron, but also in-house crypto experts at AWS).
If anything, this story illustrates how weak an obstacle closed source really is. This is an academic project, and they tore up something like 8 different drives and game-overed most of them.
Of course, I agree with the broader point that Microsoft shouldn't have trusted hardware cryptographic capabilities they clearly knew nothing about (Microsoft also has sharp crypto people, all of whom would have barfed all over any of these designs).
That's not to say you shouldn't use TPMs, HSMs, or CPU enclaves. It's just that, if at all possible, you shouldn't elevate them to being the only thing protecting you. In fact, most of these devices can be coupled with software encryption by splitting secrets between them and the user's brain. My Android phone, for instance, is encrypted, but only protected by a 6-digit PIN. I'm aware that I'm totally dependent on the SoC vendor here to rate-limit and otherwise fend off brute-force attacks and protect the key ultimately protecting my data. But you know what? The encryption itself is still done in software and can be verified, and I have the choice to use a ridiculously long passphrase instead.
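As a rough illustration of that split (my own sketch, not Android's actual key-derivation scheme, which has more moving parts), the key that actually protects the data can be derived from both a secret the hardware refuses to divulge and something only the user knows, so neither the SoC nor the PIN alone is enough:

```python
# Illustrative only: shows the "split between device and brain" idea, not any
# vendor's real implementation.
import hashlib, hmac

def derive_disk_key(device_secret: bytes, user_pin: str, salt: bytes) -> bytes:
    # Stretch the low-entropy PIN; the hardware is expected to rate-limit
    # guesses, but stretching still costs an attacker who extracts device_secret.
    stretched = hashlib.pbkdf2_hmac("sha256", user_pin.encode(), salt, 200_000)
    # Mix in the device-held secret so neither half alone yields the key.
    return hmac.new(device_secret, stretched, hashlib.sha256).digest()
```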
What Microsoft did is choose a stupid default, and put their users at unknown risk.
I'm just offering a counterpoint:
1. Closed source wasn't too much of an obstacle for a pair of interested researchers to independently validate, without assistance from vendors, the firmware and hardware crypto interfaces on a bunch of drives.
2. The determinant of whether something is likely to be cryptographically sound is less likely to be "open source" (open source is full of cryptographically unsound things, many of which won't be ascertained as such for years to come) than whether you know the identities of the people who verified the design.
2. Making claims based on the calibre of the companies and individuals involved doesn't work. People don't trust DJB's algorithms simply because of his personal track record. He also publishes whitepapers about their design and demonstrates their advantages over the state of the art. That's hard science, not "it must be good because DJB is a rock star". The fact that people can then take his designs and implement them themselves, in the open, is also important.
People absolutely do, for the most part, trust DJB because of his track record. Find 10 people who have adopted Poly1305 and ask them what Poly1305 does better than GCM. That's not to say their decision is uninformed; rather, they've delegated a very specialized part of the decision, one very few people are actually qualified to make, to an expert they trust, and trust for good reason.
In practical cryptography, the opposite approach --- people who trust nothing until they verify it from first principles --- is extremely problematic. That's how you end up with oddball libraries that only look like they can do a secure scalar multiplication or reliably add two numbers together. As Bruce Schneier is fond of pointing out, everyone can design a system they themselves can't break.
Two things, one of which cannot be credited to Poly1305 directly.
1. Poly1305 is easier to implement without introducing side-channels (modulo integer multiplication on some CPUs; in which case, I defer to Thomas Pornin on how to avoid them).
2. The ChaPoly constructions that use Poly1305 do so in such a way that leaking the one-time Poly1305 key doesn't buy you much. (This is the one that cannot be attributed to Poly1305 directly.)
Others may have better or different answers. I implemented it in PHP, so anyone reading this thread should keep that in mind. :)
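For what it's worth, most people should consume it through a vetted library anyway. A minimal sketch with the Python `cryptography` package (an assumption on my part, not something discussed above): the one-time Poly1305 key is derived per nonce internally and you never touch it, which is point 2 in practice, and the side-channel concerns of point 1 become someone else's reviewed problem.

```python
# Sketch assuming the Python "cryptography" package is available. The library
# derives the per-nonce Poly1305 key and does constant-time tag checks
# internally, so the caller never handles either.
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()   # 256-bit key
aead = ChaCha20Poly1305(key)
nonce = os.urandom(12)                  # must never repeat under the same key
ct = aead.encrypt(nonce, b"plaintext", b"associated data")
assert aead.decrypt(nonce, ct, b"associated data") == b"plaintext"
```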
Well, OK, but who is a cryptographer, then? How can a non-cryptographer tell whom it's safe to delegate cryptography to? You can't even really complete a degree in cryptography (you can do a CS degree with a specialization in cryptography, but that doesn't mean all that much). There's no meaningful regulatory standards body there; the CISSP is a joke, and is focused on producing compliance reports anyway rather than anything to do with cryptography.
Do you have any examples of bugs as bad as "password is checked with if statement" living for many years in widely deployed open source software?
Transparency is a dependency of trust.
If you can't see how it works, then it's objectively impossible to trust it to do the right thing. This applies to everything on a computer, really. Is your browser transparent? Is your operating system transparent? What about the drivers? Device firmware? That "security" solution that unexpectedly ushered in the Year of the MINIX Desktop™?
Of course, we sacrifice some of those points in favor of pragmatism, but if you ain't paranoid about your computer's lack of transparency, you should be.
It's an attitude problem. Most people wouldn't buy a glass house where people on the street could watch you shower and poop, but that's what we have in tech.
I think you go too far here. Trust is a dynamic spectrum, not binary. Transparency is one potential avenue to add trust, but is neither necessary nor sufficient in most cases. Every instance must be evaluated in light of its specific details, threat scenarios, and available tradeoffs.
>If you can't see how it works, then it's objectively impossible to trust it to do the right thing.
For most people "seeing how it works" is utterly meaningless by itself because few are able to properly evaluate the system (both knowledge and time play a role). For complex systems they're simply physically beyond the capability of any human to retroactively fully evaluate, or any system at all for that matter. Transparency can help positively change the incentives all involved players face, lower the upfront resource cost to evaluation which in turn may allow users to expand their trust parties to include actors with more direct interest in their interests, allow more casual white hat investigation even if there is less direct compelling need, etc.
In this specific case, yes, open should definitely be the standard, given the stakes and because pure symmetric crypto should be simple, fast, and long since standardized. There are no competitive advantages or secret algorithms to be had here. Furthermore, as something that is effectively "infrastructure", it can get less attention than flashier stuff.
But transparency is not a panacea, and closed software can perfectly well be probed anyway (as this very software and every single vulnerability ever found in proprietary software demonstrate!). The biggest value of open is less in finding issues than in fixing them. That was always the big driver of open source: not that it would somehow be magically bug-free by the power of a thousand eyes, but that when issues were discovered, users wouldn't be up shit creek because whoever made it had gone out of business or moved on to new products or whatever. That has happened constantly with proprietary products for decades, and the frustration with it is the huge practical driver for OSS. Over a long enough time span, a lot of proprietary software devs will die, or pivot, or abandon useful, expensive "old" tools without support, and in turn their customers will get screwed. Transparency/openness can help with that, or at least offer more options. But in terms of just finding security vulnerabilities, it's much less of a big deal than it gets made out to be, which again should be obvious given all the security vulns constantly being found in proprietary software.
For someone like me, who is somewhat technically competent, but not a security expert, openness means that names and figures I do trust have the chance to examine whatever I intend to use and comment on it, observe it, and make assessments, including shaming poor implementations claiming traits they have no right to. Turns out there are plenty of people more competent than me that can catch these things.
Transparency also allows less hiding of incompetency in otherwise commercial and/or proprietary designs.
That is a binary statement, though: saying it's a prerequisite is saying that without it, trust is zero.
>That is to say, without transparency, there can be no trust, but you can also have no trust with transparency.
That would clearly be incorrect, though. Transparency is not needed to probe a system; it is mostly helpful in changing the incentive structure for development in the first place (helping avoid laziness, mostly), in making the system more likely to be casually looked at, and in fixing it if the original party is slow about it or absent. There can absolutely be trust without transparency when other factors are strong enough; it's just that transparency lowers the strength needed and in turn helps in cases where those factors aren't there.
>For someone like me, who is somewhat technically competent, but not a security expert, openness means that names and figures I do trust have the chance to examine whatever I intend to use and comment on it, observe it, and make assessments, including shaming poor implementations claiming traits they have no right to. Turns out there are plenty of people more competent than me that can catch these things.
>Transparency also allows less hiding of incompetency in otherwise commercial and/or proprietary designs.
That's what I said, though? But are you arguing in turn that the typical iPhone is a lot easier to break into and less trustworthy than the typical Android or PC? It's the least transparent, after all. And what about the majority of the population, who lack even the technical meta-knowledge we have, let alone anything more? They still have to exercise trust, so how do you think that works from their perspective?
Maybe not strictly, but without transparency that probing becomes more difficult. It also suggests that there's something worth hiding, which is all the more reason to not trust it.
Sure, you can create your own variety of transparency by reverse-engineering something to figure out how it works (or hiring someone you trust to do so on your behalf), but if you have to resort to that, it's because you already (rightfully) don't trust the thing.
"For most people 'seeing how it works' is utterly meaningless by itself"
Well yeah, most people aren't going to be able to make sense of even a perfectly-transparent system of sufficient complexity. That's why they hire auditors to do it for them. It ain't about the user actually doing it; it's about the user being able to do it should the user have the time and technical competence to do so.
Taken another way: if you don't even have the ability to inspect a transparent thing, then what makes you think you can meaningfully inspect an opaque thing?
"closed software can perfectly well be probed anyway (as this very software and every single vulnerability ever found in proprietary software demonstrates!)"
Correction/clarification: closed source software can occasionally - and with a bit of luck and a heck of a lot more skill, time, and effort - be probed for specific reasons to distrust it. You're unlikely to ever reach the point where you have a full understanding of the system (and if you do reach that point, then it's pretty transparent - to you, at least - and therefore possible to trust, at least until the next update). Without transparency, there's always the possibility of something nasty (a fatal bug, or an overreaching telemetry "feature", or somesuch) lurking in the places yet to be probed. Sure, it's possible for those to hide in transparent (e.g. FOSS) programs, too (though I'd argue they're not being particularly transparent in those cases), but it's much easier to find that nastiness in a transparent program than in an opaque one.
Anyway, not that I'm purchasing anything nefarious. But to hell with them.
P.S. I also occasionally think about the 2-5% price markups we universally face, put in place to compensate for credit card processing fees. Although there are valid counterarguments about the risk and cost of handling cash, both for businesses and for people in more personally vulnerable environments.
We know how bad general code quality is in commercial vendor firmware, so it's not surprising to see this.
There was really no option for the vendors to stunt this feature.
For me, this paper is rather alarming and should be on the front page.
I guess in retrospect it's quite reasonable that most people would trust it, or for that matter not think about it at all but rather trust Microsoft or whomever to be getting that right for them, and assume that checking the FDE box in the GUI would mean what it said. A good example of having personal blinders on, for me: I've been too used to just ignoring Opal entirely since the day it was introduced.
Is this really a common presumption? Why are companies making security decisions based on mere presumption? Wikipedia has a citation from 2013 that discusses a number of vulnerabilities exploiting constant-power hot swapping, so if you'd done any research at all during the last decade you wouldn't be so shocked.
It seems SEDs are merely a convenience as far as transparency and overhead go, and a last resort when proper software FDE isn't available. All this talk about BitLocker and such, rather than LUKS, suggests they're targeted at consumers, which would explain the shoddy engineering and proprietary specs.
We show, however, that depending on the configuration of a system, hardware-based FDE is generally as insecure as software-based FDE [like TrueCrypt and BitLocker]
Note that, despite Linux being mentioned in the paper and used for tests, dm-crypt/LUKS is not examined; only software solutions like BitLocker are. Likewise in the OP paper. Which makes me think this is a consumer-class vulnerability, given the focus the researchers take. Surely enterprises are using something other than SEDs?
EDIT: I found this link in the references section to a Travis Goodspeed talk (he writes a lot for PoC||GTFO) - it is phenomenal: https://www.youtube.com/watch?v=8Zpb34Qf0NY
Quite. My experience of firmware is that it's usually total junk, from Internet-connected webcams, to HTTP routers, to backplanes for NAS devices.
I was putting together a FreeNAS box for home a couple of months ago and this was the first time I'd seen drives with a self-encrypting ability. I ignored it and went for software encryption.
WAT? Who ever thought that? For exactly the reason you state, I certainly would not have recommended that anyone ever trust "hardware encryption". And it's not just that firmware tends to be terrible: we've had plenty of "secure storage" thingies that weren't.
`manage-bde -status <drive_letter>`
Then look at the output for "Encryption Method". If it says something like "XTS-AES 128", I think that means you're using software encryption. If it mentions hardware encryption, then it's using that :) (more info: https://helgeklein.com/blog/2015/01/how-to-enable-bitlocker-...)
FWIW on my Win 10 install with a Samsung PM871 it was set to software encryption.
There is no performance benefit; in fact, with modern CPUs that have crypto extensions it's often slower. And I never trust commercial solutions, ever since you could dump the key from the SanDisk/McAfee "secure" flash drives, and all previous HDD password protection schemes like the ATA passcode were so shit I didn't even understand why people bothered with them in the first place.
If you do use it on RAID, put it under the RAID: make one decrypted device for each drive, then RAID those.
Considering BitLocker requires a Windows purchase (unless you pirate it) and LVM costs exactly $0, I think it's safe to say that the amount of money thrown at it is not a good indicator of actual security.
Meanwhile, BitLocker has received at least some level of review, it is the most common disk encryption product for Windows, and Microsoft can be reasonably expected, based on past experience, to put somewhat competent people on it.
Additionally, at least for parts of BitLocker, there is at least high-level documentation how it is supposed to behave (e.g. https://docs.microsoft.com/en-us/windows/security/informatio..., there may be more detailed documentation elsewhere), plus there is likely reverse-engineered research available confirming the basic functionality.
When the computer is off, the software has zero attack surface, so your only options are a cold boot attack against the computer (in which case it doesn't really matter whether it's HW or SW encryption, as long as the keys are in the TPM) or an offline attack.
With an offline attack, the attack surface of HW encryption, which might also store a copy of the key (encrypted or not), is now greater.
Also, the attack surface alone is only a small part of the risk metric; how easy a flaw is to fix is just as important as, if not more important than, how likely a vulnerability is. And firmware flaws, not to mention controller-level flaws in the cheapest SoC with AES support the SSD vendor could find, are much, much harder to fix than a software solution.
If someone compromises your OS, your data is compromised anyhow. For what FDE is supposed to protect against, i.e. unauthorized access when the device is out of your control and off, the software stack does not pose a greater attack surface.
Literally the only case in which the "software" solution might be more vulnerable is when your device is suspended with the key in memory, which means an attacker can attempt memory extraction through physical means (e.g. freezing the RAM and transferring it to a reader before the charge fades). In that case there is no guarantee that the HDD solution would be any better, nor is there any guarantee that you don't hold a copy of the key in memory regardless of which mode is used.
If the device is simply locked, then the HDD is in an unlocked state anyhow; if they can unlock your OS through some sort of exploit, then, HW or SW, they still get your data.
Computer Configuration\Administrative Templates\Windows Components\BitLocker Drive Encryption
Configure use of hardware-based encryption for fixed data drives
Thirdly, BitLocker itself may have a backdoor, or at the very least Microsoft continues to design it in such a way that they (and law enforcement) have, or can get, your private key when needed. I remember a while ago people were complaining that BitLocker keys were automatically saved to their OneDrive account, where Microsoft of course can see them.
But with this revelation, if you have an affected SSD, and you are running Windows, then losing such a laptop may now be a reportable event.
I don't see a reason to still use software encryption. I would sooner expect a backdoor in CPUs than in SSDs. Almost nobody even has the capability to analyze the microcode for modern CPUs. It could contain a backdoor that stores N AES passwords in the CPU itself, sorted by the amount of data encrypted. Using AES-NI makes that relatively trivial.
At worst, both SSD controllers and AMD/Intel CPUs would have backdoors like that, but if that's the case there's nothing to be done.
Why not? Software encryption can still be hardware accelerated (thanks to encryption instructions in the CPU), and it is fast enough not to be a bottleneck unless you have a very fast IO device (and "very fast" in this case means either Optane or modern SSDs in RAID-0). Also, the impact on CPU usage/power consumption is low.
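For a rough sense of scale, here's a quick sketch (Python with the `cryptography` package, which is OpenSSL-backed and so uses AES-NI where available) that times AES-XTS, the mode software FDE typically uses. The number is machine-dependent and this is not a proper benchmark, but on AES-NI hardware it usually lands well above what a single SATA SSD can deliver:

```python
# Rough throughput check, assuming the "cryptography" package is available.
# Illustrative only; a real benchmark would warm up, repeat, and pin the CPU.
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)                    # AES-256-XTS takes a 512-bit key
tweak = os.urandom(16)                  # real FDE uses a per-sector tweak
data = os.urandom(64 * 1024 * 1024)     # 64 MiB of "disk" data

enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
start = time.perf_counter()
enc.update(data)
elapsed = time.perf_counter() - start
print(f"{len(data) / elapsed / 1e9:.2f} GB/s")
```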