
Are passwords stored in memory safe? - lucb1e
http://security.stackexchange.com/q/29019/10863
======
InclinedPlane
If you can't trust the OS you're screwed regardless. If you are concerned
about physical access based attacks (like cold boot) then there are
alternatives.

Here's some interesting reading: <http://en.wikipedia.org/wiki/TRESOR>

~~~
zurn
If the OS is actively plotting against you, it's a losing game, but if it's
just security-oblivious then there may be some things you can do to mitigate.
Consider, for example, an OS with a habit of writing random parts of your
process memory out to disk.

~~~
derefr
Here's a question I've been pondering: suppose you are a program delivered
from an origin (trusted) to a client machine (untrusted.) You start without
any credentials, but you have the option of dialing out and _asking the
origin_ for a shared secret (e.g. a private key.) Is there any useful way for
the origin to require that you, the delivered program, _prove_ that the client
machine you're running on can be trusted with the shared secret? If it's
possible at all, I'm guessing it involves a "secure boot" with a TPM chip.

~~~
kefka
Why would I want a machine, which I own, to trust someone other than the
owner?

And no, I do not trust the TPM in its current iteration. We mere owners are
prevented from knowing its private key. Nor can we generate and store our own
private key (or buy a chip with a known private key).

~~~
derefr
So you can, for example, participate in a distributed computing project where
the results sent in by your machine can be trusted.

(An online game that calculates physics client-side is a special case of a
distributed computing project ;)

It doesn't require you to give up ownership over your _entire_ computer, mind
you. If your own OS ran in a hypervisor that was in one TPM "domain" (you have
the key to this domain), but then applications could request to be run
directly on the hypervisor with a separate TPM domain (and thus keep keys your
own OS wouldn't ever be able to touch), that'd be good enough to allow for any
secure distributed computation you might want to do. At any time, you'd still
be able to wipe out those domains (and thus kill the apps running in
them)--but you wouldn't be able to otherwise introspect them.

Basically, it's like the duality of "OS firmware" and "baseband firmware" on
phones--except it would all be being handled on the same real CPU.

~~~
VLM
"you wouldn't be able to otherwise introspect them"

Can't implement it until you define its behavior. If you define its behavior
you can emulate it (which, outside this discussion, is really useful). If you
can emulate it, you can single step it, breakpoint it, dump any state of the
system including memory, reboot it into "alternative" firmware...

Your only hope is playing games with timing. So here's a key, and it's only
valid for one TCP RTT. Well, if they want to operate over satellite they must
allow nearly a second, so move your cracking machine next door, emulate a long
TCP path, and you've got nearly a second to work with. On the other hand, if
instead of running over the internet you merely wanted to prove Bluetooth
distances or Google Wallet NFC distances, suddenly you've gone from something
I can literally do at home easily to a major lab project.

Another thing that works is "prove you're the fastest supercomputer in the
world by solving this CFD simulation in less than X seconds". Emulating that
would take a computer much faster than the supposedly fastest computer. So
this is pretty useful for authenticating the TOP500 supercomputer list, but
worthless for consumer goods.

~~~
derefr
> Your only hope is playing games with timing.

This is inane. My question was about _mathematically provable_ secure
computation, not kludges that any old advanced alien civilization could bypass
by sticking your agent-computer in a universe simulator. :)

Let's ignore the computers. You are a spy dispatched from Goodlandia to
Evildonia. You want to meet with your contact and exchange signing keys. You
can send a signal at any time to Goodlandia that will tell them to cut off all
contact with you, because you believe you have been compromised. (A
certificate revocation, basically.)

Your contact, thus, expects one of three types of messages from you:

1\. a request for a signing key with an attached _authentication proof_ ;

2\. a message, signed with a key, stating you have been compromised and to
ignore all further messages sent using that key;

3\. or a message, signed with a non-revoked key, containing useful
communication.

Now, is there any possible kind of "authentication proof" that you could
design, such that, from the proof, it can be derived that:

1\. you have not yet been compromised;

2\. you will _know_ when you have been compromised;

3\. and that, in the case of compromise, you will be allowed to send a
revocation message before any non-trusted messages are sent?

You can assume anything you like about the laws of Evildonia to facilitate
this--like that it is, say, single-threaded and cooperatively multitasking--
but only if those restrictions can also carry over to the land of
Neoevildonia, a version of Evildonia running inside an emulator. :)

~~~
VLM
It might be possible to exclude enough realistic current day threats to
eventually end up with something that "works" but I don't think that's useful
in any way.

Nonetheless, if you want to exclude computers, the human equivalent of "stick
it in an emulator" is the old philosopher's "brain in a vat" problem. That's
well-traveled ground, and no, there is no proof you're not in a vat.

There is no way to prove you have not been compromised, because there is no
way to prove that no theoretical advancement will ever occur in the field in
the future (or not just an advancement, but an NSA declassification, etc). So
you're limited to one snapshot in time, at the very least.

You're asking for something that's been trivially broken innumerable times
outside the math layer.

It's hard to say whether you're asking for steganography (which isn't really
"math"), an actual math proof, or just a wikipedia pointer to the kerberos
protocol, which is easily breakable but, with enough added constraints, might
eventually fit your requirements.

~~~
VLM
"Bitcoin-style block-chain consensus"

Majority rule, not consensus. Given a mere majority-rule protocol, I think
your virtual world idea could work.

~~~
derefr
Eh, either way, it's the same problem. Imagine you're an agent for BigBank,
thinking you're running on Alice's computer. If you authenticate yourself to
BigBank, BigBank gives you a session key you can use to communicate securely
with them--and then you will take messages from Alice and pass them on to
BigBank.

But you could also be running, instead, on an emulator on Harry's computer--
and Harry wants Alice's credit card info. So now Harry reaches in and steals
the key BigBank gave you, then deploys a copy of you back into the mesh,
hardcoded to use that session key. Alice then unwittingly uses Harry's version
of you--and Harry MITMs her exchange.

In ordinary Internet transactions, this is avoided because Alice just keeps an
encryption key (a pinned cert) for BigBank, and speaks to them directly. If
you, as an agent, are passed a request for BigBank, it's one that's already
been asymmetrically encrypted for BigBank's eyes only. And that works... if
the bank is running outside of the mesh.

But if the bank is itself a distributed service provided by the mesh? Not so
much. (I'm not sure how much of a limitation that is in practice, though,
other than "sadly, we cannot run the entire internet inside the mesh.")

------
DanBC
There are attacks on passwords stored in RAM. There's an example against the
Apple keychain. Root can run the software and it collects a bunch of passwords
for logged in users ([http://juusosalonen.com/post/30923743427/breaking-into-
the-o...](http://juusosalonen.com/post/30923743427/breaking-into-the-os-x-
keychain))

But there are best practices for passwords, and those reduce the risks; and
most attacks need privileges and access to the machine, which again reduces
the risk.

If you're worried about stormtroopers kicking the door down and squirting
liquid nitrogen on the RAM you probably have enough money to have very strong
perimeter defences.

~~~
StavrosK
What does liquid nitrogen on the RAM do? I was under the impression they
plugged the computer into a UPS or something and took it.

~~~
ygra
The bits stored in DRAM remain readable for a time (sometimes minutes) even
after the power is cut. In normal operation frequent refresh is needed to
avoid decay, but the decay doesn't happen nearly as fast as the refresh cycles
make it seem. Cooling the cells lengthens that time, from minutes up to hours
(depending on the temperature), permitting an adversary to read them without
much time pressure.

Paper on that: <http://citp.princeton.edu.nyud.net/pub/coldboot.pdf>

~~~
StavrosK
I see, thank you very much.

------
Murk
At the company I worked for previously we frequently used a firewire DMA
attack such as inception (<http://www.breaknenter.org/projects/inception/>) to
gain access to computers, and dump ram to recover other passwords.

~~~
mappu
I'm familiar with DMA attacks, but it's always shocking to see publicly
available GPL code that just works against popular and recent versions of
Windows, OS X and Linux. UEFI Secure Boot is no help when the 1394 driver
itself is signed : )

Everyone should read the mitigation steps and caveats as appropriate.

If you have physical access to an unlocked Windows machine, I'd reach for
mimikatz. Instant plaintext.

------
amadvance
There are well known attacks that allow to read memory, and even write,
thought DMA.

See for example:

0wned by an iPod - hacking by Firewire
<http://md.hudora.de/presentations/#firewire-pacsec>

More papers are linked in the wikipedia page:
<http://en.wikipedia.org/wiki/DMA_attack>

------
FourthProtocol
Reading comments here and on stackexchange I'm surprised that no mention is
made of the Data Protection API (DPAPI) on Windows, which is designed
specifically for this purpose.

<http://msdn.microsoft.com/en-us/library/ms995355.aspx>

I've been using it for years, and while nothing is infallible, sensitive plain
text in my apps isn't around for longer than it takes to encrypt it and
destroy the original.

I can't comment on Linux or OSX but would be surprised if the OS didn't offer
a similar API tied to the principal to protect in-memory data.

~~~
laumars

        > Reading comments here and on stackexchange I'm surprised
        > that no mention is made of the Data Protection API (DPAPI)
        > on Windows, which is designed specifically for this purpose.
    

It was mentioned and quickly dismissed as not being effective:

1) if it can be decrypted by the API, then it can be cracked by any process
given enough time and resources.

2) further to point #1, the Data Protection API _was_ reverse engineered in
2010.

3) security is only as strong as your weakest link, and that API doesn't
address the "weakest link" of running malware locally. eg it's much easier to
keylog passwords to begin with than to scan the RAM.

I'm inclined to agree with those comments. While that API is a nice idea, I
think it's a little ineffective in practice.

~~~
FourthProtocol
Can't find anything online about a fix from Microsoft (didn't give it much
effort - a little pressed for time), but it seems that decryption is possible
because the master key timestamp isn't protected by an HMAC mechanism
(perpetuating access to the secret). Also, the user's previous SHA1 hashes
aren't salted.

Both of those seem easy enough to fix, which I imagine Microsoft has done (the
exploit was discovered 3 years ago). I'm going to do a little more research
when I have time.

Lastly, and importantly, this attack is an offline attack. I wasn't able to
find anything that compromises in-memory data. Granted, nothing other than 2FA
will protect anyone (or DPAPI) against key loggers, but that's true for all
OSs.

If local malware is the strength of the argument against DPAPI then I might as
well go so far as to say that the most secure system is no system.

~~~
laumars

        > Both of those seem easy enough to fix, which I imagine Microsoft has
        > done (the exploit was discovered 3 years ago). I'm going to do a little
        > more research when I have time.
    

Well, by your own admission you've not found a fix thus far, and Microsoft
have been known to let weak encryption APIs go unpatched for great lengths of
time even when a problem is known (eg NTLM passwords aren't salted), so I
wouldn't be the slightest bit surprised if this hasn't been patched either.

However, regardless of whether it has or hasn't, such patches only increase
the CPU time it takes to crack the passwords; they don't make the passwords
impossible to crack. And, as I'd already said, there are weaker links that can
be exploited anyway.

    
    
        > Lastly, and importantly, this attack is an offline attack.
    

All the attacks we're talking about here are offline attacks.

    
    
        > Granted, nothing other than 2FA will protect anyone (or DPAPI) against key
        > loggers, but that's true for all OSs.
    

Of course it's true for all OS's. I never once suggested otherwise. Honestly,
I'm puzzled why you'd even bring that point up.

    
    
        > If local malware is the strength of the argument against DPAPI then
        > I might as well go so far as to say that the most secure system is no system.
    

Which is what I and pretty much everyone else in this discussion have already
been saying. At some point, there needs to be a balance between usability and
security. I think NT (and Linux / BSD / Mach too, for that matter) offer
enough security to make in-memory password hacking awkward enough that it's a
minor security risk while still leaving the host OS highly usable.

For the average user, social engineering will always remain the most exploited
vector of attack, and for ultra-sensitive servers, all we can do is lock them
down as best we can based on the latest fixes and proofs-of-concept. But a
sufficiently determined attacker will usually find a way in if there's a high
enough incentive. Our job (or at least mine) is to make it sufficiently hard
that attackers lose interest in trying. But if they've gained root access (in
order to run any of the aforementioned attacks), then it's already game over -
regardless of whether they manage to decrypt your RAM or not. Which is why I
think the aforementioned API is akin to _the emperor's new clothes_ (ie all
hype, no practical security).

------
qompiler
For this reason the Java JPasswordField getPassword() method returns an
array, with no other copies around. An array can also easily be zeroed out
with fill().

[http://docs.oracle.com/javase/tutorial/uiswing/components/pa...](http://docs.oracle.com/javase/tutorial/uiswing/components/passwordfield.html)
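A minimal sketch of that consume-then-wipe pattern (checkAndWipe is a made-up helper; in real code the char[] would come from getPassword(), and the Swing plumbing is omitted):

```java
import java.util.Arrays;

class WipeDemo {
    // Compare a password char[] (e.g. from JPasswordField.getPassword())
    // against an expected value, then zero it so no plaintext copy lingers.
    static boolean checkAndWipe(char[] password, char[] expected) {
        try {
            return Arrays.equals(password, expected);
        } finally {
            Arrays.fill(password, '\0'); // overwrite the only plaintext copy
        }
    }
}
```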

~~~
viraptor
Isn't the JRE using a compacting GC, which makes the "no copies" guarantee
void anyway?

~~~
RyanZAG
The char and int datatypes do not have their values stored as objects. When
you change the value of a char inside a char[] array, that value is changed
directly in RAM.

This will leave the "hello" in ram (subject to GC):

    
    
      String x = "hello";
      x = null;
    

This will clear the "hello" from ram:

    
    
      char[] x = new char[]{'h','e','l','l','o'};
      x[0] = '0'; x[1] = '0'; ...
    

This will leave the "hello" in ram (subject to GC):

    
    
      char[] x = new char[]{'h','e','l','l','o'};
      x = null;

~~~
ygra
The point was about the garbage collector compacting memory regions, thus
moving objects around. If you don't pin your array it _could_ leave "hello"
somewhere in memory when it's moved before you zero it.

------
thomas-st
Is it possible that a (non-privileged) process could read data from a process
that has previously terminated by looking at uninitialized memory, and gain
access to sensitive information that way?

~~~
FooBarWidget
Most operating systems zero a process's RAM pages when it exits, or before
handing a page to another process, so no.

------
FooBarWidget
One method that I've thought about in the past is hashing your password using
bcrypt, then zero and free the original password, and check all future
authentication attempts against the bcrypt hash. Nobody, not even you, knows
the password now, just whether a given password is correct.
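A minimal sketch of that scheme, under stated assumptions: SHA-256 stands in for bcrypt here (real code should use a slow, salted password hash), and the char-to-byte cast assumes ASCII passwords.

```java
import java.security.MessageDigest;
import java.util.Arrays;

class HashOnce {
    // Hash the password immediately, wipe the plaintext, and verify later
    // attempts against the hash alone: nobody knows the password anymore.
    static byte[] digestAndWipe(char[] password) throws Exception {
        byte[] bytes = new byte[password.length];
        for (int i = 0; i < password.length; i++) bytes[i] = (byte) password[i];
        Arrays.fill(password, '\0');   // plaintext chars are gone
        byte[] hash = MessageDigest.getInstance("SHA-256").digest(bytes);
        Arrays.fill(bytes, (byte) 0);  // and so is the byte copy
        return hash;
    }

    static boolean verify(char[] attempt, byte[] stored) throws Exception {
        // isEqual is a constant-time comparison
        return MessageDigest.isEqual(digestAndWipe(attempt), stored);
    }
}
```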

~~~
Mahh
Hashing passwords for storage is standard practice in all systems that involve
password based authentication.

Even then, the password must reside in memory at some point in order to
compute its hash [using bcrypt or whatever scheme], which is necessary both
for generating the hash the first time AND for authentication attempts. This
is the issue described in the given link.

[http://en.wikipedia.org/wiki/Cryptographic_hash_function#Pas...](http://en.wikipedia.org/wiki/Cryptographic_hash_function#Password_verification)

------
eaxbin
Random Access Memory Memory?

~~~
RKearney
It's where ATM Machines store their application code while running.

------
surferbayarea
Here's what I do. The login page in my browser is sent to my phone, which
creates an https session with the remote website and then hands the session
back to the browser. The mechanics are a bit tricky, but it's only a few days
of coding. The advantage: your password never enters your computer's RAM (or
HDD/network)... take that, keyloggers!

~~~
alanctgardner2
What about phone malware? And what stops someone reusing the https session as
you move it from your phone to your desktop?

------
icebraining
Smart cards can move the private key from the PC to a dedicated, self-
contained and (supposedly) safer machine - the card itself.

~~~
DanBC
I'm worried that even when the smart cards and their software are well
designed, you still have to rely on other vendors to do their bit securely.

Fravia said of dongles that they were often great, with nice libraries, but
when software vendors implemented them they would use stupid methods.

(<http://home.scarlet.be/detten/tuts/dongle_zeezee.htm>)

> _Don't panic when you read all info about dongle security. They ARE secure.
> OK. You can't crack them unless they're done by complete idiots. OK. But you
> want to crack the application, NOT the dongle. When you read about RSA
> encryption, one-way functions and see in the API some interesting
> Question/Answer hashing functions, remember that it's only API. No one uses
> it. Only simple functions like Check/Serial Number/Read and sometimes Write
> are used._

~~~
rdl
The long-term solution is probably some kind of super-smartcard (essentially
an HSM) which can put per-application logic inside the secure envelope. Things
like rate limits on decryption requests, heuristics to require higher levels
of authentication as transactions are more suspicious, etc.

Combine that with per-application virtualization and various forms of user
authentication (other than passwords), and public key cryptography, and you
could probably start to build substantially more secure services. Same stuff
on clients and servers.

ARM's TrustZone is actually more interesting than TPMs on x86; you can
essentially start the general-purpose CPU as a trusted device and then
partition off less-trusted pieces. If you're going to have a single processor,
vs. a specialized security processor, this is probably how to do it, not the
x86 + TPM + TXT way.

Probably all meaningless until there's a framework as simple as Ruby on Rails
was vs. everything else in 2005, or php, which makes doing things securely the
easy default.

------
yxhuvud
Safety is a float - _not_ a boolean.

A more appropriate question would have been: 'How safe is it to store
passwords in memory?'

------
_cbdev
A quick CTRL+F didn't find anything, so I might actually be the first to point
out that "RAM memory" is a case of the RAS Syndrome :)

<http://en.wikipedia.org/wiki/RAS_syndrome>

------
jayfuerstenberg
All you can do, if you don't trust the OS, is assemble the password at the
point of use (each time) and erase the memory location directly afterwards.

And even that is not 100% foolproof (the OS can read it in between those two
steps).

~~~
mariusmg
Or keep it encrypted in memory.

~~~
ars
And how exactly would that work? You would need to decrypt it to use it - but
then you need to store the decryption key in memory.

Gaining you exactly nothing.

~~~
lucb1e
You could store the decryption key on disk, loading it only when needed, and
possibly byte-by-byte. This is all hackable, especially once such techniques
are in mainstream use, but it increases the amount of work needed to hack
something. In the end it's the OS's responsibility, of course.

~~~
ars
What's the point of that? If you are going to do that, just store the original
key that way.

Not that it helps in any way at all.

------
niggler
This is why ssh-agent is awesome: it doesn't hold the passwords in memory
(only the keys), so you minimize the number of times you type your password
and the number of places it is stored.

~~~
ygra
But the key is what you authenticate with, after all, so why go after the
password when reading memory, when you can go after the key directly?

~~~
niggler
The keys expire, so even if you have one it won't be useful later.

------
wslh
No. I once found a password in the page file of an old Windows install.

------
diminish
Hey security experts: are microprocessor registers or caches subject to any
security attacks?

~~~
dfox
It depends on whether you consider attaching a JTAG ICD and just reading out
the whole state of the CPU a security attack. In some respects, just attaching
an ICD to a desktop CPU is simpler than attacks on physical security that
involve freezing DRAM chips and reading their contents with a patched BIOS or
whatever. On the other hand, it mostly requires the attacker to have a JTAG
ICD that supports that particular CPU. Almost all x86 chips have some kind of
ICD interface, usually a very low-level and complete one. But because of the
low-level nature of the registers you see through it, the JTAG register maps
and so on are NDA-only, and thus ICDs that support non-embedded CPUs tend to
be very rare and in the realm of "you can't afford it if you have to ask the
price". But they exist, and buying one is more about the price than about any
questions of the kind "why do you need that?". Because of the rarity of these
things, most current research simply ignores this attack vector.

Bottom line: there is no such thing as security against local attacks on
almost any kind of commodity hardware (including TPMs, excluding devices
explicitly designed to be reasonably secure, like gaming consoles). When you
need that, you also need proper tamper-proof hardware.

