Are passwords stored in memory safe? (stackexchange.com)
129 points by amalantony06 1257 days ago | 72 comments



The raw memory itself isn't safe either.

Ed Felten did some great work (2008) where he physically removed the sticks of DRAM from one computer, stuck them in another, and read their contents. But doesn't DRAM lose its contents without continuous power? Not if you turn a can of compressed air upside down and spray the chips first, cooling them to -50C! He used this to recover encryption keys and defeat whole-disk encryption.

Pretty crazy stuff: https://citp.princeton.edu/research/memory/

update: link to images/videos: https://citp.princeton.edu/research/memory/media/


For Linux there is TRESOR (http://www1.informatik.uni-erlangen.de/tresor). It uses CPU registers, which can prevent this sort of attack. However, you're limited to one encrypted drive with this method.


Modern CPUs have a lot of cache these days. Couldn't the keys be stored there as well (assuming the threat is just the removal and reading of RAM)?


Hi. L3 caches are indeed huge these days. For example, the latest Mac Pros are shipping with 30MB L3 caches.

You can do a lot with that space. My company PrivateCore runs an entire Linux/KVM stack within the L3 cache, then fully encrypts main memory. Someone physically acquiring memory would only obtain plaintext.

Note that a software compromise could still read memory. That's one reason why it's necessary to fully attest a system before provisioning it with any keys or passwords.

Here's a CanSecWest talk with more details: http://cansecwest.com/slides/2013/PrivateCore%20CSW%202013.p...


You mean ciphertext. Plaintext is cleartext, i.e., unencrypted.


Very cool! How small of a cache can you work in?


Thanks!


The caches are designed to be transparent, so you can't store data in them separately. Not to mention that your program could be interrupted at any point, so the OS would have to back up this data into RAM (like it does with registers).


TRESOR only helps against passive attacks, since the code is still exposed in memory. Active attacks that modify memory can easily circumvent it.


Not sure what you mean. TRESOR prevents cold boot attacks. If your Linux OS is compromised, TRESOR wouldn't help you anyway. But that's not the point of it, either.


Yes, TRESOR can help against cold boot attacks, which are passive and read-only.

Several physical attack vectors can modify the contents of memory, and thus compromise the software stack and divulge keys kept in registers or cache.

Our approach at PrivateCore is to fully encrypt main memory with an authenticated cipher mode and keep the software stack pinned in cache. An attacker able to physically modify memory can only conduct a DoS attack by inserting junk data.


This doesn't work on DDR3 RAM; cold boot attacks are effectively extinct. There's also stuff like this you can combine with encrypted swap: http://www.onarlioglu.com/privexec/


So does this mean that the non-upgradeable computers with soldered RAM (i.e., MacBook Air) are more secure?


It means that they're resistant to that attack, sure (which is distinct from being "more secure," as if there were some continuum). The Air is also resistant to the FireWire DMA attack, in that it doesn't have FireWire.


It should be resistant to the FireWire DMA attack even if it had FireWire. Modern processors protect against this. AMD calls their mechanism the Device Exclusion Vector; I forget what Intel calls theirs.


IO virtualization (IOMMU) prevents this. Intel calls it VT-d.


Wouldn't the inclusion of Thunderbolt (similar concept to FireWire) make that sort of attack still possible?


Thunderbolt doesn't provide raw access to DMA (and thus the ability to directly access any process's memory) the way Firewire did.


It seems thunderbolt in fact DOES provide raw DMA: http://www.breaknenter.org/projects/inception/

But at least on Macs, the attack is somewhat mitigated by the OS shutting off thunderbolt/firewire DMA if the screen is locked.


Windows has a CryptProtectMemory() function [1] that can be used to encrypt in-memory secrets using an OS-allocated session key. As far as I know the key is stored in non-paged memory in kernel memory space.

On Linux, libgcrypt can do encrypted malloc, which might also help.

[1] http://msdn.microsoft.com/en-us/library/windows/desktop/aa38...


We store only a one-way hash + salt of each password; we don't even know what our users' passwords are, and they are never in memory to begin with. Even if someone somehow got access to the stored value in memory, it would take them a long time to find an input that produces that hash.


I think this is worrying more about things like database passwords and API keys used by your application.


Are passwords stored in ____________ safe? No. Next question. ;)

Are they "safe enough"? Maybe, it depends entirely on your use case.


Interestingly, one value that fits in that blank is "your head."


Still unsafe because I'll reveal it if they (e.g., NSA, CIA) torture me, threaten to kill my family etc.


Or a new technology is invented to read the human mind.


Or a technology already exists :)

"UC Berkeley scientists have developed a system to capture visual activity in human brains and reconstruct it as digital video clips..."

http://gizmodo.com/5843117/scientists-reconstruct-video-clip...



That's the idea -- even your mind is not a safe place for a password.


As this story illustrates: http://www.bbc.co.uk/news/uk-25745989


+1. First, I wish to know what technology he used. Is TrueCrypt sufficient (with a similarly strong password)? Not because I have anything to hide, of course; my wife just wants me to encrypt our spicy photos, just in case :)

Second, I am not sure I understand the judge's decision. If the man truly forgot it, or let's say the files on the USB stick got corrupted (can happen) and the password he is giving out does not work, then how can you continue to jail him (for that reason)? Is this something specific to the UK, or are there similar US cases?


Send me your photos and I'll encrypt them for you!

I think the law is very vague due to the people writing it not knowing much about it. I imagine they had a case like this guy's in mind and wanted jail to be the outcome of anything other than seeing the incriminating evidence.

I guess it doesn't help that it turned out he did know the password and it had a load of incriminating stuff on.


What if I have just erased my USB stick with a good dose of /dev/random and haven't actually formatted it to any filesystem yet? Good encryption is indistinguishable from randomness, so could they blame me for not giving up a password and jail me for carrying randomness in my pocket?


In many countries they can jail you for failure to disclose encryption keys – for example, in the UK you could spend two years in jail unless you could convince the court that there was in fact no encrypted data on that USB key:

http://en.wikipedia.org/wiki/Key_disclosure_law#United_Kingd...


If they did arrest you for that, and you aren't really Jack Bauer, you could at least get points on HN for writing up "How I was arrested for carrying random data on my USB stick."


I think they will use that to put you away, if they don't believe you...


Thumbscrews are all anyone would need with me.


The threat of lengthy prison sentences is the more 'civilized' version of this.


Yeah. Safety and security seem to be fundamentally misunderstood by most people: they simply don't know what a threat analysis is, how to do one, or why it's important. If you can't answer "From whom?", then the answer is flatly "No."

I have a hard time understanding why such a skill is so rare. It seems sort of useful.


So given the stories about the Target hack, which mention "memory scrapers": how can this be done on an entire network of (Windows, in this case; no idea what version) systems? I assume you would have to have discovered an escalation attack that gave you sufficient privileges to read another process's memory. But even given that, how do you find which bytes are useful to scrape?


You do it with entropy calculations. Cryptographic material (encrypted data and/or keys) has higher entropy than unencrypted data (even if the data is binary). You scan the memory looking for high-entropy regions.
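The scan described above is easy to sketch. Here's a minimal Python illustration; the window size and threshold are arbitrary choices for the example, not anything taken from actual scraper malware:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: close to 8.0 for random/encrypted data, much lower for text."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def find_high_entropy_regions(mem: bytes, window: int = 512, threshold: float = 7.0):
    """Slide a window over a memory image and flag likely cryptographic material."""
    return [off for off in range(0, len(mem) - window + 1, window)
            if shannon_entropy(mem[off:off + window]) > threshold]
```

For example, scanning a buffer of repeated text followed by 512 random bytes would flag only the random region.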

Note that this doesn't work if the entire memory is encrypted, but I can't imagine how a computer could function with the whole RAM encrypted.


In past cases, some of these ram scrapers have been in the form of dlls which are injected, by Windows (AppInit_DLLs), into the intended process.



Some servers used to have a resettable "case has been opened" flag in the BIOS; the hardware that flag was based on could be leveraged against DMA attacks. Overwrite certain items in memory, or maybe just power off the system, when the box is opened, and obstruct opening of the box (a lot of glue? or some such) to push the delay past the "recover memory" window.

Yes, I realize this would still be susceptible to coercion of the humans involved, and other issues, but it could be a building block of some degree of NSA-proofing.


I see this option in my BIOS switched every time I open my computer. I never thought it might be a security feature, just a flag that must be reset for someone with mild OCD.

I'm still not sure how my motherboard detects if my case is open...


Are there any instances of this being used in the field?


Yes, very easy. On a Mac you just plug in a FireWire/Thunderbolt device[1]. More extreme measures involve freezing the RAM. Both require physical access to the machine, but it's a bit scary that plugging your laptop into a public display/TV can give an attacker control of your computer and passwords.

Full disk encryption TrueCrypt/BitLocker/FileVault can act as countermeasures[2] and modern versions of OSX don't allow DMA from the login screen anymore.

[1] http://www.breaknenter.org/2012/02/adventures-with-daisy-in-...

[2] http://www.researchgate.net/publication/49277520_Cold_Boot_M...


Would something like TRESOR[1] along with RAM encryption help? Or would this just move the attack target to the CPU itself? (Certainly this would be harder to attack than sticks of DRAM, like was demonstrated by a group a few years ago)

I guess the number one thing is to prevent physical access, and failing that, make an attack that targets RAM take longer.

1. http://en.wikipedia.org/wiki/TRESOR


No, and neither are cryptographic keys derived from passwords, and neither are cipher tables derived from cryptographic keys.

A simple defense would be to access all such material through a permutation table. (Which need not be explicitly stored in memory, but could be computed by means of multiplication modulo a prime.)
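One way to read that suggestion, as a sketch: with a prime just above the byte range (257) and a multiplier, multiplication mod the prime yields a bijection on the indices 0..255, so key bytes can be scattered in memory and addressed on the fly without the table itself ever being stored. The values of P and G below are illustrative, not anything from the comment:

```python
P = 257  # smallest prime above the byte range
G = 113  # any multiplier in 1..P-1 gives a distinct permutation

def perm(i: int) -> int:
    """Permutation of 0..255, computed on the fly; no table lives in memory."""
    # i+1 is in 1..256; multiplication by G is a bijection on Z_257*,
    # so the result (minus 1) covers 0..255 exactly once.
    return (G * (i + 1)) % P - 1

def scramble(key: bytes) -> bytes:
    """Scatter the key's bytes so a linear scan doesn't see them contiguously."""
    out = bytearray(256)
    for i, b in enumerate(key.ljust(256, b"\x00")):
        out[perm(i)] = b
    return bytes(out)

def read_byte(scrambled: bytes, i: int) -> int:
    """Recover logical byte i of the key through the permutation."""
    return scrambled[perm(i)]
```

This only raises the bar for naive memory scanning; an attacker who can read your code can invert the permutation.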


> virtual machines and cloud computing cannot be ultimately safe

Can somebody with more expertise comment on this? I was under the impression that virtualization software (Xen etc) was deemed safe?


It depends on your assumptions. How do you know you're running on Xen and not EvilXen?


Virtualization software protects the host against software running in the virtual machine. It's completely useless for protecting the virtual machine from the host (as you'd want on the cloud).


Yes, but how is hacking the host's infrastructure different from hacking your own (in the case of a dedicated server)? You'll always have trusted external third parties unless you also maintain your own datacenter and network provider. I can't see how the cloud is inherently less secure unless the host acts against their own interests.

(all this in context of the statement that "if you are serious about security, use dedicated hardware")


Don't forget mlock().


This won't secure the memory, just prevent it from being paged to disk. If I had access to your user or to root, I could just read it directly via, say, /proc (or any of the other attack vectors).


Don't forget mlock() regardless. They're talking in TFA about how the OS will page to disk; my comment answers that with mlock, for platforms that have it, of course :)
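For the curious, here's a rough sketch of the mlock() pattern from Python via ctypes (Linux/macOS only; assumes the process is allowed to lock a page). As the parent notes, this only prevents paging; the zeroing step at the end matters just as much:

```python
import ctypes

# CDLL(None) exposes the C library's symbols on Linux and macOS.
libc = ctypes.CDLL(None, use_errno=True)
libc.mlock.argtypes = [ctypes.c_void_p, ctypes.c_size_t]
libc.munlock.argtypes = [ctypes.c_void_p, ctypes.c_size_t]

# Keep the secret in a mutable C buffer so it can be zeroed later.
buf = ctypes.create_string_buffer(b"hunter2")

# Pin the pages holding the secret so they are never swapped to disk.
if libc.mlock(ctypes.addressof(buf), ctypes.sizeof(buf)) != 0:
    print("mlock failed, errno:", ctypes.get_errno())

# ... use the secret ...

# Zero the buffer before unlocking; mlock does nothing against anyone
# who can already read your process's memory.
ctypes.memset(ctypes.addressof(buf), 0, ctypes.sizeof(buf))
libc.munlock(ctypes.addressof(buf), ctypes.sizeof(buf))
```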


Another recommendation that I learned from an old boss is the following.

1. Store the password in a function and return it.

2. Whenever the password is needed, call the function.

Example in JS.

  function getPassword() {
    return 'I am a password';
  }

  if (req.data.password == getPassword()) {
    passwordIsCorrect();
  } else {
    passwordIsNotCorrect();
  }

This is not foolproof: core dumps and debug dumps usually dump variables, but not the source code of the program.


This is bad advice. The password ends up in the program's binary where it is easily accessible. Also, users can't change it after it's compromised.


I'm pretty sure that this wouldn't be safe. By simply disassembling the software you would see the strings stored in the application itself, so you could easily find any password stored that way.


That would mean you'd have access to the source code. Or do you mean the software is stored in memory as well?


In a compiled language, it would be part of the binary. If you opened up "checkpassword.exe" in a hex editor, you'd see the password clear as day somewhere in the static data section.

In an interpreted language, the interpreter would create some kind of string object containing the contents of the password in memory, and getPassword() would return that object when called.


Even simpler, Unix has a tool called "strings". Call it on any piece of data and it will return all readable strings contained in that data. Works on binaries as well.
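The behavior of strings(1) is simple enough to sketch yourself: scan for runs of printable ASCII of some minimum length. A rough Python analogue (the default minimum of 4 matches the real tool):

```python
import re

def strings(data: bytes, min_len: int = 4):
    """Rough Python analogue of Unix strings(1): find runs of printable ASCII."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]
```

Run over a binary (or a memory dump), any hardcoded password falls right out.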


Good to know. I guess the best thing is to NOT store the password at all in the code? Rather in encrypted databases?


You should never store a password. If absolutely needed you could store a derivation of that password from a KDF like scrypt, and then see if the user's password derives to that same value. This is how checking for login passwords is done on Unix and Unix-like systems with shadow passwords (which never store the user's password).

And even with all that I probably screwed something up, but that advice is still better than storing plain passwords.
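A minimal sketch of that check using Python's stdlib scrypt binding; the cost parameters here are illustrative, not a recommendation:

```python
import hashlib
import hmac
import os

def derive(password: str, salt: bytes) -> bytes:
    # Cost parameters (n, r, p) are illustrative; tune them for your hardware.
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

salt = os.urandom(16)
stored = derive("correct horse battery staple", salt)  # persist (salt, stored) only

def check(attempt: str) -> bool:
    # compare_digest avoids leaking information through comparison timing.
    return hmac.compare_digest(derive(attempt, salt), stored)
```

The password itself is never written anywhere; only the salt and the derived value are.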


Nobody ever thinks of the case where a program has to supply a password rather than just check one. For example, suppose your program (which runs unattended) has to interact with some JSON-RPC API, and the remote server expects a password. That password has to come from somewhere.


You should use public-key crypto for this case when possible. Any key you bake into the binary will be a publicly known password for anyone who gets access to that binary. With public-key crypto it's safe to include the public key and then make an ad hoc session key using secure key exchange.


You actually could get this from memory while the program is running, but even if you couldn't, you could get it by using a hex editor. If you've got access to the "strings" command, that will also usually work for things like passwords.


Ah OK, so what is the recommendation for storing passwords in a scripting language like Ruby, Python, RingoJS, NodeJS, PHP, etc.? I don't think they have fancy functions for storing variables in CPU registers or encrypted memory.

I get the feeling that all of this should have been taken care of at the hardware level, but someone just got too lazy during the design process and forgot to patch up the security flaw.

Kinda like how everyone was still using telnet to get into their *nix servers in the 90's.


If you're storing passwords on disk, you're going to use something like bcrypt (http://en.wikipedia.org/wiki/Bcrypt).

The way it generally works is you keep each user's hashed password with a unique per-user salt stored on disk (usually in a database of some sort). When you need to authenticate a user, your script asks for their password, hashes that input with the stored salt, and compares the result with the stored hash. If they match, you're good. At no point do you store passwords on disk that have not been hashed (via bcrypt or similar). Depending on the security needed (i.e., your threat model and risk), this can get tricky if things like hibernation or virtual machines are involved.
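Here's the flow sketched in Python, with the stdlib's scrypt standing in for bcrypt (which needs a third-party package); the `users` dict stands in for the database table:

```python
import hashlib
import hmac
import os

users = {}  # username -> (salt, hash); stands in for the database table

def hash_password(password: str, salt: bytes) -> bytes:
    # Cost parameters are illustrative; tune for your hardware.
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)

def register(username: str, password: str) -> None:
    salt = os.urandom(16)  # unique salt per user
    users[username] = (salt, hash_password(password, salt))
    # The plaintext password is never stored.

def login(username: str, password: str) -> bool:
    if username not in users:
        return False
    salt, stored = users[username]
    return hmac.compare_digest(hash_password(password, salt), stored)
```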

If you're confused at all about this, I would browse the security forums (http://security.stackexchange.com/) and ask for expert advice.


> Ah OK, so what is the recommendation for storing passwords in a scripting language like Ruby, Python, RingoJS, NodeJS, PHP, etc.? I don't think they have fancy functions for storing variables in CPU registers or encrypted memory.

You shouldn't need to store passwords (or any sensitive info) in memory. In fact, you shouldn't store passwords at all. Normally, you would put the password in memory temporarily (for hashing or something), and then you would securely wipe it when you are done.

Failure to wipe memory (or using it improperly) can have awful consequences. IIRC, that was how the Target hack worked: basically, the attackers were able to search through memory and dump credentials.

You should use wrappers for trusted/vetted libraries (like Crypto++ or something). I know for a fact that Crypto++ has a concept of "secure buffers", whose contents are securely erased before destruction. All libraries of that sort should wipe any sensitive information from memory (by overwriting it with garbage or zeroes) before freeing it.
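A sketch of the wipe-when-done pattern in Python, with the usual CPython caveats: str and bytes are immutable and can't be wiped, so the secret has to live in a mutable buffer, and copies made elsewhere (interning, logging, GC moves) are out of your control:

```python
def wipe(buf: bytearray) -> None:
    """Overwrite a mutable secret in place before letting it go."""
    for i in range(len(buf)):
        buf[i] = 0

secret = bytearray(b"API-KEY-12345")
# ... use the secret (pass the bytearray around; never copy it into a str) ...
wipe(secret)  # contents are gone even if the object lingers on the heap
```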

That particular library probably does not have wrappers for other languages. However, GnuTLS and OpenSSL probably do.


I'm pretty sure it is (I might be wrong though)


Thank you, everyone, for correcting my ignorance.

The more diverse the people are, the more perspectives we can bring to a problem:

collective intelligence >= individual common sense

Thanks HN




