
Cryogenically frozen RAM bypasses disk encryption methods (2008) - andreyvit
http://www.zdnet.com/article/cryogenically-frozen-ram-bypasses-all-disk-encryption-methods/
======
moyix
There has been some more recent work on this lately:

[https://www.dfrws.org/2016eu/proceedings/DFRWS-EU-2016-7.pdf](https://www.dfrws.org/2016eu/proceedings/DFRWS-EU-2016-7.pdf)

Essentially, with newer RAM (DDR3), data is scrambled before being stored on
the physical chip to improve reliability:

> Storage of bit streams which are strongly biased towards zero or one can
> lead to a multitude of practical problems: Modification of data within such
> a biased bit stream can lead to comparatively high peak currents when bits
> are toggled. These current spikes cause problems in electronic systems such
> as stronger electromagnetic emission and decreased reliability. In contrast,
> when streams without DC-bias are used, the current when working with those
> storage semiconductors is, on average, half of the expected maximum.

So once you image the RAM you have to figure out the scrambling and undo it.
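For intuition: these scramblers are typically just an XOR with a pseudorandom stream derived from a per-boot seed, so once the stream is recovered (e.g. from regions known to be zero-filled), descrambling is a second XOR. A toy sketch, where the LFSR width, taps, and seed are illustrative rather than actual controller parameters:

```python
# Toy model of an XOR-based memory scrambler. Real DDR3 controllers use
# per-boot seeds and different generators; the 16-bit Galois LFSR here
# is purely illustrative.

def lfsr_stream(seed: int, nbytes: int, taps: int = 0xB400) -> bytes:
    """Generate a pseudorandom byte stream from a 16-bit Galois LFSR."""
    state = seed & 0xFFFF
    out = bytearray()
    for _ in range(nbytes):
        byte = 0
        for _ in range(8):
            bit = state & 1
            byte = (byte << 1) | bit
            if bit:
                state = (state >> 1) ^ taps
            else:
                state >>= 1
        out.append(byte)
    return bytes(out)

def scramble(data: bytes, seed: int) -> bytes:
    """XOR with the keystream; applying it twice restores the original."""
    stream = lfsr_stream(seed, len(data))
    return bytes(d ^ s for d, s in zip(data, stream))

plaintext = b"AES key material here"
imaged = scramble(plaintext, seed=0xACE1)   # what a cold-boot image would hold
recovered = scramble(imaged, seed=0xACE1)   # undo: same XOR stream again
assert recovered == plaintext
```

The hard part in practice is recovering the stream and the seed, not applying the XOR.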

Related: [https://github.com/IAIK/DRAMA](https://github.com/IAIK/DRAMA)

~~~
nickpsecurity
I didn't know DDR3 RAM was scrambled. Thanks for the link & tip. Analog
scramblers of the past were defeated in a number of simple and clever ways.
Most security engineers stopped trusting scrambling, since it almost always
fails: at best it's useful obfuscation on top of genuine encryption and
authentication in RAM, as in my other comment. No surprise we have another
broken scrambler on breakers' resumes, this time in the RAM area.

Note: Nice shortcut link to the blog and research on it. LAVA is still on my
backlog to check out although some itch in me suggests it has potential in
high-assurance for testing tools, mechanisms for recovery-oriented
architectures, or even teams with simulated subversion. Just haven't had time
so far after work and looking at preventative stuff.

~~~
moyix
Thanks! I'll be writing follow-up posts on the technical details and
evaluation of LAVA as well. It's a really cool problem and I do hope that it
can actually make bug-finding software better.

~~~
nickpsecurity
So, without digging into the paper, it's basically about creating many test
cases to assess the strengths and weaknesses of static analysis tools that
should theoretically have found those injected bugs? As in my previous
comment, that could be a whole field of research in itself if it isn't
already. I'm not just talking about the common case of injecting bugs to test
static analysis. Here's a few that pop into my mind as I run through the old
Orange Book and EAL lifecycle:

1. Inject flaws into the formal specification of the problem. Knowledge-based
methods might do that via modifications in the domain representing slip-ups of
experts. Semantic methods might modify specs directly to flip a logical
property. Preferably one similar to it, to increase the odds that they
overlook it.

2. Formal policy of safety or security. This is a policy, type system, info
flow labels, access control matrix... any of that. The injections would
deliberately weaken the policy. This would combine templates that ensure
common flaws got represented plus truly random ones. Lean more toward
templates here as opponents will want to create things that break policy in
catastrophic ways. That likely, but not certainly, means they'll focus on
specific areas of policy for mods.

3. Formal verification. Similar to above or code-level tests but for
intermediate forms or verification conditions. Maybe for proof tactics too if
someone is trying to design a prover that doesn't get lost for too long. I
think a simple time-out with trace log would suffice for that, though.

4. Code is the next part. You people largely have it covered in isolation.
However, it must map to specs or requirements in high-assurance products. So,
the mappings could be screwed up: code tweaked to mismatch the spec, similar
to how specs are tweaked above. Also, the Design-by-Contract or other
invariants might be modified to mislead an evaluator like SPARK Examiner or
Astrée Analyzer. If the language supports it, compiler directives or pragmas
might be modified to make the compiler inadvertently break the code.

5. Testing. There's already a lot of work on this one where compilers,
analyzers, tests, and/or fuzzers are mixed. I'll leave them to it as only
basics are really needed for high-assurance. Other benefits or methods are
still debated.

6. Effects of optimizations on source-to-object translation. Starts with
equivalence checking ability. From there, different optimizations are used to
try to break safety/security/correctness properties of the code with tool
attempting detection. Goal of injector is to prevent that. Eventually, a
corpus would be generated here of software fragments, transformation, and
result that could tell us more via machine learning techniques. That's true of
above in general but SSA etc is low level & small.

7. Covert channel analysis. Step most "secure" software misses although
mainstream rediscovering it slowly as "side channels." Mainly focus on
storage, timing, and resource exhaustion channels. Create tools to find the
connections automatically in software either statically or dynamically with a
test suite in progress. Inject covert channels all over the place. See what it
finds.

8. Type systems. They're usually eyeballed or analyzed with formal methods.
Often abstractly. I'd like to see above code- and formal-level injection
combined to produce and test possible failures of executable, formal spec of
type system and/or implementations of it. I bet it would find something.

So, there's what's off the top of my head as I apply my mental framework for
high-assurance security to the abstract of your LAVA tool. Much potential for
research and results if this grows into its own field. All I ask, if you or
one of your PhD students turns this into real papers, is credit for
contributing the idea to do so. :)

Regardless, what do you think of these? Are other people doing them? Are they
novel? Where do you see the most bang for the buck? As far as I can tell, and
based on current market share, the biggest impacts in order are static
analysis, compiler optimizations, type systems, covert channels to stop memory
leaks, and improved formal policies to support type systems and
design-by-contract. The rest are possibly useful, but less so.

Note: The shortcut link on your Twitter to your blog was hilariously named.
If it was intentional.

------
nickpsecurity
The problem here was already known before the publication of the paper, even
though the paper was still a clever attack. Most security research, including
high-assurance software, was largely ignoring attacks on hardware. There was a
subfield growing that didn't trust the RAM, disk, peripherals, etc. These
designs drew a boundary at the ASIC or SOC level: anything outside it was
assumed hostile, and tampering was countered with crypto, PUF's, etc. The
first I saw was Aegis:

[https://people.csail.mit.edu/devadas/pubs/aegis-istr-august6...](https://people.csail.mit.edu/devadas/pubs/aegis-istr-august6-2005.pdf)

Joshua Edmison's dissertation lists a number of others along with his own,
interesting scheme:

[https://theses.lib.vt.edu/theses/available/etd-10112006-2048...](https://theses.lib.vt.edu/theses/available/etd-10112006-204811/unrestricted/edmison_joshua_dissertation.pdf)

Nobody has learned anything different since then as far as the fundamentals
go. The fundamentals are still to use authenticated crypto of some sort on
RAM, so attacks there are detected and the system fails safe at worst. Also,
use special IO/MMU's, SOC mechanisms, and software protected by them to handle
what's on disk. Stopping the cold boot attack is straightforward on such
architectures, since they don't trust RAM in the first place.
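The detect-and-fail-safe part can be sketched in a few lines. Real designs like Aegis use hardware hash trees over memory; in this toy, HMAC-SHA256 stands in for the on-chip integrity check, and `SOC_KEY` stands in for a key fused inside the chip (both are illustrative, not any particular design's mechanism):

```python
# Sketch of detect-and-fail-safe: every block stored in untrusted RAM
# carries a MAC keyed by a secret that never leaves the SoC. Binding the
# address into the MAC also blocks relocating valid blocks elsewhere.
import hashlib
import hmac
import os

SOC_KEY = os.urandom(32)  # stand-in for a key fused inside the chip

def write_block(addr: int, data: bytes) -> tuple[bytes, bytes]:
    """Return (data, tag) as it would be stored in untrusted RAM."""
    msg = addr.to_bytes(8, "little") + data
    return data, hmac.new(SOC_KEY, msg, hashlib.sha256).digest()

def read_block(addr: int, data: bytes, tag: bytes) -> bytes:
    """Verify the tag before trusting the data; fail safe on mismatch."""
    msg = addr.to_bytes(8, "little") + data
    expected = hmac.new(SOC_KEY, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise RuntimeError("RAM tampering detected: halting")
    return data

data, tag = write_block(0x1000, b"secret page contents")
assert read_block(0x1000, data, tag) == b"secret page contents"
try:
    read_block(0x1000, b"Secret page contents", tag)  # attacker flips a bit
except RuntimeError:
    pass  # tampering detected, system would halt rather than proceed
```

Confidentiality would additionally require encrypting the block; this only shows the detection half.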

From there, we move into the cat-and-mouse game of SOC attack and defense.
Most of those attacks require physical possession for more than a few minutes,
though, often with destruction of the chip as a result. So, this is a
significant step forward in security versus just snatching the RAM out of the
system.

------
lunixbochs
[https://en.wikipedia.org/wiki/TRESOR](https://en.wikipedia.org/wiki/TRESOR)

OS X has a setting called "destroy FileVault key on standby" in `pmset` which
mitigates cold boot attacks.

I kinda want the CPU/MMU to support loading encryption keys to transparently
encrypt some or all of RAM (could also toss in error checking while we're at
it). SGX has this in the trusted containers, but I think it makes sense for
general use too.

~~~
kdeldycke
And here is the command to activate FileVault key destruction on standby:
[https://github.com/kdeldycke/dotfiles/commit/32a01c08e196f87...](https://github.com/kdeldycke/dotfiles/commit/32a01c08e196f871a670467ab8e8d40257f79306)
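Spelled out, the setting looks like this (assuming a reasonably recent OS X; check `man pmset` on your version before relying on it):

```shell
# Destroy the FileVault key in standby, and hibernate (mode 25) so the
# key must be re-entered on wake. Requires admin rights.
sudo pmset -a destroyfvkeyonstandby 1
sudo pmset -a hibernatemode 25

# Verify the active settings:
pmset -g | grep -i destroyfvkeyonstandby
```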

------
Canada
I heard that when power is interrupted ACPI still has time to inform the
system, and not only that, the CPU will continue to execute many, many
instructions before it's finally deprived of power. The computer seems to turn
off instantly to us, but at the time scale the CPU operates at it's actually
quite a while. I heard this was enough time for an operating system to detect
power failure and zero out megabytes of memory.

Anyone know if this is true or not?
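The arithmetic is at least plausible. A rough sanity check, using assumed round numbers for an ATX-class supply and DDR3-era memory bandwidth (assumptions, not measurements):

```python
# Back-of-envelope: how much RAM could be zeroed between a power-fail
# warning and the rails actually dropping? All figures are rough
# assumptions for an ATX-class supply and DDR3-era bandwidth.

warning_s = 1e-3    # ATX-style PWR_OK falls at least ~1 ms before rails sag
write_bw = 10e9     # ~10 GB/s sustained for a large memset, assumed

guaranteed_mb = write_bw * warning_s / 1e6
print(f"guaranteed window: ~{guaranteed_mb:.0f} MB zeroed in 1 ms")
# The hold-up time after AC loss (~16 ms on an ATX-class supply) gives
# considerably more slack than the guaranteed 1 ms, if detection is fast.
```

So zeroing "megabytes of memory" before power fully dies looks feasible in principle, provided the OS gets the warning and reacts immediately.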

~~~
derefr
> zero out megabytes of memory

Is there some reason that RAM has to be zeroed serially? It seems like
nothing's theoretically stopping the entirety of a RAM memory module from
being erased, in parallel, in a single memory write cycle.

Would it be very much more costly to design a memory module with a signal line
that, when high, would e.g. broadcast a single write to every memory cell on
the module?

~~~
frozenport
There is an easier way: RAM needs to be refreshed. If you can ensure the
refresh signal gets disrupted, then, among other things, reads become
destructive.

------
teddyh
[https://en.wikipedia.org/wiki/Cold_boot_attack](https://en.wikipedia.org/wiki/Cold_boot_attack)

------
mschuster91
There's only one solution to prevent this if you're operating a server that
might be of federal interest (which might even be one running an open proxy or
Tor relay):

1) Rent an entire rack with a 19" rackmount UPS, locks wired to the server to
signal if the rack has been opened, motion sensors, and a compass

2) If the outside power goes down, or the lock/cage alarm triggers, or the
motion sensor/compass detects movement, wipe the RAM section that contains the
HDD encryption keys and power down the machine.

Why a compass? Because in case the cops try to move the entire rack carefully
(to not trigger a motion sensor with false-alarm filtering), and they rotate
the rack, the compass will detect it.

~~~
AdamJacobMuller
Figure out which part of the rack points north, stick a really strong magnet
there.

Also, to defeat your next countermeasure: spoof the GPS signal, wrap your rack
in a Faraday cage, and broadcast an identical set of WiFi/cellular signals.

My point isn't that any of this is more or less practical, simply that any
protection you can dream up can be defeated if they know about it and are
expecting it.

Also, your password was 'lovesexsecretgod' and you left SSH open :)

~~~
spraak
> Also, your password was 'lovesexsecretgod' and you left SSH open :)

Is this sarcasm or did you actually get this from the above poster?

~~~
psk
I think it's a reference to the movie Hackers.

------
aaron695
That's nice... first time I saw it.

Any evidence of it in the wild in the past 8 years, like, you know, actually
used once?

~~~
gizmo686
Back in high school, a couple of friends and I tried this attack ourselves
(not going after disk encryption, just looking for strings in the memory
dump).

By simply plugging in a bootable USB drive we made, power-cycling the computer
[0], and selecting boot-from-USB in the BIOS, we were able to get a memory
dump from the computer.

By analyzing this memory dump, we were able to reconstruct the HTML of
webpages that were open, and generate a list of password looking strings,
which did contain several actual passwords.
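That analysis step can be approximated in a few lines: extract printable runs, as the `strings` tool does, then keep the password-shaped ones. The filtering heuristics and sample dump below are made up for illustration:

```python
# Rough sketch of the dump analysis described above: pull printable
# ASCII runs out of a raw memory dump, then keep candidates that look
# like passwords. Heuristics here are crude and purely illustrative.
import re

def string_candidates(dump: bytes, min_len: int = 8, max_len: int = 32):
    """Yield printable ASCII runs (no spaces) of plausible password length."""
    for m in re.finditer(rb"[\x21-\x7e]{%d,%d}" % (min_len, max_len), dump):
        yield m.group().decode("ascii")

def looks_like_password(s: str) -> bool:
    """Crude filter: at least two character classes, and not a URL/path."""
    classes = sum(bool(re.search(p, s)) for p in (r"[a-z]", r"[A-Z]", r"\d"))
    return classes >= 2 and "/" not in s and "\\" not in s

# Fake dump fragment: HTML-ish noise, one password-shaped string, one word.
dump = b"\x00\x00GET /index.html\x00Tr0ub4dor&3\x00\xff\xffplainword\x00"
hits = [s for s in string_candidates(dump) if looks_like_password(s)]
print(hits)  # ['Tr0ub4dor&3']
```

On a real multi-gigabyte dump you would stream the file in chunks rather than load it whole, but the idea is the same.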

I guess this isn't really in the wild, as it was one of our own laptops, but
there was nothing stopping us from doing it on school computers to hack into
other people's accounts if we wanted to; and the person whose computer we used
did end up changing her passwords.

More importantly, if a couple of highschoolers could turn this idea into a
usable exploit, I would be amazed if this was never done for actual attacks.

[0] Most of the time it worked if we manually powered the computer down then
up, but it worked consistently when we rebooted from within the OS.

------
amelius
I have the feeling this could be trivially solved by adding reset lines to the
RAM design, and triggering them on shutdown (perhaps powered by some
capacitor).

~~~
dietrichepp
I don't think that would be trivial at all. How would it work?

~~~
amelius
Have a look at the design of a DRAM cell (search for it in google images).
Basically you can just pull all the address lines high, and set the bit lines
to ground. SDRAM works similarly.

~~~
dietrichepp
But that would require an immense amount of current, compared to what those
lines can ordinarily source. That's what I'm talking about. High current
density can cause problems with electromigration.

------
arca_vorago
This has been a known attack vector for quite some time (hence the 2008...).
One of the best training courses I ever took was a forensics course, and this
was one of the first techniques taught for a "black bag" job, along with
Faraday bags for all the things.

I have never gotten to use it irl though.

------
mirimir
Use Arctic Alumina[0] to fill all USB and FireWire connectors, and to embed the RAM.

[0]
[http://www.arcticsilver.com/arctic_alumina_thermal_adhesive....](http://www.arcticsilver.com/arctic_alumina_thermal_adhesive.htm)

~~~
deftnerd
Wouldn't the embedded aluminum particles cause a risk of shorting out the
electrical connections?

~~~
AdamJacobMuller
> Pure Electrical Insulator: Arctic Alumina Adhesive is a pure electrical
> insulator, neither electrically conductive nor capacitive.

I'm not entirely sure what it means by capacitive in this context, but it's
not conductive, and the overall sentence strongly indicates that it
specifically will not short anything out.

~~~
mirimir
Not capacitive means that there are no isolated electrically conductive layers
that act as capacitors. That is, the stuff isn't only non-conductive for DC.
The alumina also increases thermal conductivity, and strengthens the epoxy,
making removal much more difficult.

------
imjustsaying
So why don't OSes just zero out the RAM as part of the normal poweroff cycle
now?

------
sandworm101
Kickstarter idea: memory modules with a built-in temperature sensor. Below
0°C, they just stop. Put that tiny circuit into the silicon and the problem
goes away.

------
dec0dedab0de
This research is neat, but it was also neat in 2008 when it was released.

