
Meltdown Proof-of-Concept - __init
https://github.com/IAIK/meltdown/
======
martin1975
I'm curious if someone can point me to any source that discusses how the next
generation of CPUs that Intel, AMD, and ARM might be working on is actually
going to address this and the Spectre issue architecturally. It's great that
we have a potentially performance-killing fix, but the real "fix", or rather
solution, is to alter the architecture. Since I'm not an EE/CE dude... is
anyone aware of where such discussions on the WWW might be taking place?

by the way, that PoC was intense. Makes you wonder if the NSA knew about it
all along :)

~~~
arkadiyt
> Makes you wonder if the NSA knew about it all along :)

Former head of TAO Rob Joyce said "NSA did not know about the flaw, has not
exploited it and certainly the U.S. government would never put a major company
like Intel in a position of risk like this to try to hold open a
vulnerability." [1]

Who knows if that's true or not, though. Certainly the U.S. government has
done exactly that many times in the past (like with heartbleed).

[1]: [https://www.washingtonpost.com/business/technology/huge-
secu...](https://www.washingtonpost.com/business/technology/huge-security-
flaws-revealed--and-tech-companies-can-barely-keep-
up/2018/01/05/82ccbe18-f24e-11e7-b3bf-ab90a706e175_story.html)

~~~
SheinhardtWigCo
It's odd to publicly state that they didn't know about it, because now if they
don't do the same after the next big flaw comes out, the implication will be
that they indeed knew and were quietly exploiting it. I thought that was why
they generally don't comment on these things. The less-charitable assumption
is that they'll make this claim every time regardless of whether it's true.

The claim that "the U.S. government would never put a major company like Intel
in a position of risk" is obviously bullshit. TAO's job necessarily involves
exposing companies both in the US and overseas to that kind of risk on a daily
basis.

~~~
dirtbox
It's the type of announcement that makes me wonder if they had the chip makers
incorporate it specifically for them to exploit.

~~~
mehrdadn
> It's the type of announcement that makes me wonder if they had the chip
> makers incorporate it specifically for them to exploit.

...sorry, _what_?

It makes you wonder if the NSA had chip makers incorporate speculative
execution and caching because... timing attacks?

~~~
dirtbox
No.

It's just that it's highly suspicious that anyone is making any type of
mention of it at all.

------
runesoerensen
The Project Zero bug report (with PoCs/timeline) was also made public a few
minutes ago [https://bugs.chromium.org/p/project-
zero/issues/detail?id=12...](https://bugs.chromium.org/p/project-
zero/issues/detail?id=1272#c3)

~~~
ehPReth
I wonder what happened to "This bug is subject to a 90 day disclosure
deadline. After 90 days elapse or a patch has been made broadly available, the
bug report will become visible to the public." Executive meddling?

Edit: Probably the 'extreme circumstances' bit mentioned in
[https://news.ycombinator.com/item?id=16108434](https://news.ycombinator.com/item?id=16108434)

~~~
adjkant
I think for a bug this big it is pretty understandable. So far, it seems clear
the actions of all involved were in a good spirit of responsible disclosure.

~~~
HugoDaniel
Except if you are into *BSD. In that case you might want to label it
"selective disclosure" instead of "responsible disclosure".

~~~
Xylakant
Well, since some of the BSD folks publicly stated that they’d ignore any
embargo, that seems like a pretty predictable consequence. And in this case I
understand that it took a while to develop workable mitigations. Immediate
disclosure might have caused great harm.

~~~
JdeBP
Tarring all of the BSDs with the same brush is wrong, both in general and here
specifically. There's also the matter of both Matthew Dillon and Theo de Raadt
discussing this topic months or even years before Google Project Zero made its
discovery.

* [https://news.ycombinator.com/item?id=16086047](https://news.ycombinator.com/item?id=16086047)

* [https://news.ycombinator.com/item?id=16074531](https://news.ycombinator.com/item?id=16074531)

* [https://news.ycombinator.com/item?id=16075744](https://news.ycombinator.com/item?id=16075744)

Moreover, the OpenBSD people have made some remarks about how it was
commentaries in Linux patches and discussions on _LWN_ that actually let the
cat out of the bag this time.

* [http://pythonsweetness.tumblr.com/post/169166980422/the-myst...](http://pythonsweetness.tumblr.com/post/169166980422/the-mysterious-case-of-the-linux-page-table) ([https://news.ycombinator.com/item?id=16046636](https://news.ycombinator.com/item?id=16046636))

* [https://news.ycombinator.com/item?id=16084404](https://news.ycombinator.com/item?id=16084404)

~~~
tptacek
The bugs Theo was talking about were unrelated to these ones.

~~~
JdeBP
I did point to
[https://news.ycombinator.com/item?id=16074531](https://news.ycombinator.com/item?id=16074531).
However, it is also wrong to err in the other direction, as you have, and to
say that they were unrelated. Others have already made this point in
[https://news.ycombinator.com/item?id=16075744](https://news.ycombinator.com/item?id=16075744),
which I also pointed to.

------
kodablah
This was the GitHub repo mentioned in the meltdown.pdf that was 404'ing until
now. We have native Spectre replication code too. What still seems to be
elusive is the JS-based Spectre impl (probably waiting at least for Chrome 64,
though I confirmed via
[https://jsfiddle.net/5n6poqjd/](https://jsfiddle.net/5n6poqjd/) that Chrome
seems to have disabled SharedArrayBuffer even before they said they would,
which wasn't the case a few days ago).

~~~
diyseguy
This is the closest thing to a javascript implementation I have seen:
[http://xlab.tencent.com/special/spectre/js/check.js](http://xlab.tencent.com/special/spectre/js/check.js)

from:
[http://xlab.tencent.com/special/spectre/spectre_check.html](http://xlab.tencent.com/special/spectre/spectre_check.html)

~~~
kodablah
Nice. Reviewing the code, it is as the PDF said: they are constantly
incrementing a value in the shared buffer to get a fairly precise timer. But
it seems to use that timing across 256 indices (99 tries each) to check for
cache hits. So just removing this timer is not enough; it only increases the
amount of data you have to read and sift through to see if you have someone
else's memory? Anyone have a writeup on this?

~~~
JonathonW
Isn't the high-precision timer required to detect a cache hit or miss? As in,
the side channel being exploited here is the timing of a cache hit or miss;
there's no data leaked directly into JavaScript.

That's not to say that removing SharedArrayBuffer (and high-precision
performance timers, which were removed a couple years back to mitigate some
other timing-related vulnerabilities) is enough to completely eliminate
Spectre; there might be other methods that can time accurately enough to
reveal information.

(I might be completely wrong here, but this is my current understanding of the
situation, at least.)

------
thebeardedone
Moritz Lipp's twitter is actually interesting to follow. He is reconstructing
images which do not fit into cache. Quite amazing.

[https://twitter.com/mlqxyz/status/950378419073712129](https://twitter.com/mlqxyz/status/950378419073712129)

(I personally do not have a twitter account but was looking for the paper and
stumbled upon it, glad I did!)

------
trendia
Linux 4.15 and the appropriate modules protect against the attack.

To test, set CONFIG_PAGE_TABLE_ISOLATION=y. That is:

    
    
        sudo apt-get build-dep linux
        sudo apt-get install gcc-6-plugin-dev libelf-dev libncurses5-dev
        cd /usr/src
        wget https://git.kernel.org/torvalds/t/linux-4.15-rc7.tar.gz
        tar -xvf linux-4.15-rc7.tar.gz
        cd linux-4.15-rc7
        cp /boot/config-`uname -r` .config
        make CONFIG_PAGE_TABLE_ISOLATION=y deb-pkg
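
Once the rebuilt kernel is booted, a quick way to confirm that page-table
isolation actually took effect is something like the following (the sysfs
vulnerabilities files only appeared around 4.15, so this is a best-effort
check whose paths and wording may vary by kernel):

```shell
# Best-effort check for KPTI / Meltdown mitigation status.
# The sysfs interface below only exists on kernels that ship it (~4.15+);
# fall back to dmesg on older kernels.
if [ -r /sys/devices/system/cpu/vulnerabilities/meltdown ]; then
    cat /sys/devices/system/cpu/vulnerabilities/meltdown
else
    dmesg 2>/dev/null | grep -i 'page table isolation' || echo "KPTI status unknown"
fi
```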

~~~
noobermin
I have CONFIG_PAGE_TABLE_ISOLATION on. I roll my own kernel and all that.

Trying the kaslr program right now, it's not figuring out the direct map
offset and it's probably already been a minute or two. So it works?

EDIT: After 40 minutes, it has attempted all addresses and did not find the
direct map offset.

~~~
trendia
It took about an hour for it to find the offset for me.

I think that the page isolation slows it down, even if it doesn't completely
eliminate it.

The second test had something like a 0.05% success rate on my PC, and took
over an hour to get a few dozen values read.

After trying this with the new kernel, I started up an AWS instance and ran
the tests there. The first test (KASLR) succeeded within a few seconds, and
the second test had a 100% success rate (read 1575 values in a few seconds).

~~~
noobermin
Basically, the first test (kaslr.c) did not even work for me, and it scanned
all addresses and wrapped around and started again.

You probably know this (I see you're the person I replied to initially), but
for others reading this who want to check that it's on: "dmesg | grep
isolation" should tell you whether page table isolation is active after you
enable it in the kernel.

Given the other tests require the offset, I think I'm safe? I'm going to run
it again just to be sure.

------
tptacek
libkdump is really clean code, nicely wrapping the inline assembly you need to
do the flush+reload and keeping the algorithms in pretty simple C. It's worth
taking a few minutes to read through it.

This code is from TU Graz; I assume this is from Daniel Gruss's team, who
participated in the original research.

------
samsonradu
High-level programmer here. Can someone please explain (I already read the
ELI5 in previous threads) how the attacker extracts the actual data from the
processor's L1 cache after tricking the branch prediction and having the CPU
read from an unauthorized memory location?

I understood that the "secret" data stays in the caches for a very short time
until the branch prediction is rolled back, which makes this a timing attack,
but I don't get how you actually read it.

EDIT

So perhaps someone can ELI5 me "4.2 Building a Covert Channel" [1] from the
Meltdown paper which is what I didn't understand.

[1]
[https://meltdownattack.com/meltdown.pdf](https://meltdownattack.com/meltdown.pdf)

~~~
ajanuary
Caveat: I am also a high level programmer.

My understanding is that the problem is that the data in the cache _isn't_
rolled back.

You fetch the secret data. You then fetch a different memory address based on
the contents of the secret data, e.g. fetch((secret_bit * 128) + offset) [1],
so if secret_bit is 0 it fetches the memory at offset into the cache, and if
secret_bit is 1 it fetches the memory at offset+128 into the cache.

After the speculative work is rolled back, the data it fetched into the cache
still remains. You then time how long it takes to fetch offset and offset+128.
If offset comes back quickly, secret_bit was 0. If offset+128 comes back
quickly, secret_bit was 1.

_That_ is where the timing attack part comes in: "timing attack" refers to
using measurements of how long something took to glean information, not that
you need to do it quickly.

[1] In reality you do it on the byte level and use &, but I wanted to keep it
to guessing a single bit to make it simpler.

~~~
samsonradu
> You fetch the secret data. You then fetch a different memory address based
> on the contents of the secret data ...

I was under the impression that there is no interface to read data from the
CPU caches and that the cache is managed by the CPU itself only.

~~~
ajanuary
Right, which makes it a bit of a tricky attack to pull off. But if you know
what you're doing you can do some operation that requires memory address x and
be reasonably sure it will end up in the CPU cache. If you then do an
operation on memory address x, and it happens really quickly, and you do an
operation on memory address x+128, and it happens a bit slower, you can assume
that x was in the cache and x+128 wasn't.

~~~
samsonradu
Yes, I got the part where you can time if memory address X is in cache and
X+128 isn't. But how does one read the data at memory address X?

~~~
ajanuary
You load it into a register. If you're trying to drive it from a high level
language, I guess you can do something like an add which will get compiled
into instructions to load it into a register first.

------
krylon
I have run the first test on several machines, with mixed results, but on my
workhorses (ThinkPad x220, Zenbook UX305) the exploit seems to work.

I thought the recent kernel-/firmware-/ucode-patches should have prevented
that.

EDIT: The other demos fail, though, as they should. _sigh_

EDIT: For some reason, demo #2 (breaking kaslr) works on my Ryzen machine, but
not on the others. :-?

~~~
cookiecaper
Spectre should work on most modern computers. There are no kernel patches in
stable to prevent Spectre right now. Only Meltdown is mitigated by KPTI. The
new Intel microcode and the kernel code to control it will propagate out in
the next couple of weeks.

------
anonymousDan
Looks like Intel SGX is at least vulnerable to Spectre attacks too:
[https://github.com/lsds/spectre-attack-sgx](https://github.com/lsds/spectre-
attack-sgx)

------
pbhjpbhj
This is the first I've read about this, so I thought "who's shorting Intel
now, I wonder"; turns out it's the CEO [kinda]:

>"reports this morning that Intel chief executive Brian Krzanich made $25
million from selling Intel stock in late November, when he knew about the
bugs, but before they were made public" ([https://qz.com/1171391/the-intel-
intc-meltdown-bug-is-hittin...](https://qz.com/1171391/the-intel-intc-
meltdown-bug-is-hitting-the-companys-stock-big-time-while-rival-amd-is-
soaring/))

I assume he's now supposed to be prosecuted; that sounds like insider dealing?
[I'd like to say "will be prosecuted" but ...]

~~~
stefs
As far as conspiracy theories go (I read this some days ago on Reddit): he
won't be prosecuted because he cooperates with the NSA. Refuse to cooperate
with them and you join Nacchio and Qwest.

------
aeleos
I am running a Razer Blade 2017 with Ubuntu 16.04 and so far all of the PoCs
have worked. I currently have my KASLR offset and I am now testing the
reliability. So far it doesn't seem very good, with a 0.00% success rate at 60
reads. It did take a while to find my KASLR offset, with multiple passes
through the entire randomization space, so I may need to stress my CPU more to
improve the rate of successful branch speculation.

~~~
jeshwanth
I installed the recent kernel release from Ubuntu, but the tests are still
working fine.

------
Uplink
Not sure what this means, but while I'm mining Monero on the CPU with xmr-stak
the PoC is thwarted.

First, the "Direct physical map offset" comes back wrong in Demo #2. Second,
if I use the correct offset, the reliability is around 0.5% in Demo #3 - but
not consistently... after a few tries it did come back with >99%

Basically, screw up your caches continuously.

------
srcmap
From the papers, these two bugs are also exploitable on ARM.

Does that mean a hacked iOS/Android app could also (in theory) sniff a
password entered in a system dialog, as demonstrated in the video?

    
    
       Realtime password input - https://www.youtube.com/watch?v=yTpXqyRYcBM

~~~
gok
Important to differentiate between ARM the company, the instruction set
architecture(s) and the specific implementation of those ISAs. The licensable
nature of ARM means there very likely are (possibly undiscovered)
implementations of the ARM ISAs floating around which are susceptible to
Meltdown.

~~~
palotasb
I was under the impression that they generally license the IP cores (or at
least some IP blocks) to implement the ISA and downstream vendors don't
implement those differently.

------
Acen
macOS 10.12.6 (Sierra) is yet to get a patch to resolve this.

~~~
K0nserv
It is patched on Sierra; this was part of the 2017-002 [0] security update on
the 6th of December.

0: [https://support.apple.com/en-gb/HT208331](https://support.apple.com/en-
gb/HT208331)

~~~
ridgeguy
That link shows Meltdown in reference only to High Sierra, not Sierra. What am
I missing?

~~~
K0nserv
You're right, can't believe I missed that.

Edit: See the archive [0]; apparently I'm not going mad, and it used to say
that the patch was applied to Sierra and El Capitan, but Apple has since
changed that.

0:
[https://web.archive.org/web/20180105102220/https://support.a...](https://web.archive.org/web/20180105102220/https://support.apple.com/en-
us/HT208331)

~~~
ridgeguy
Thanks. I thought I was going nuts, too.

------
rstuart4133
Does anyone have a link to Linux PoC code for Meltdown that uses speculative
branch execution?

I've only seen two implementations. One is based on just doing the access to
kernel memory, catching the SIGSEGV, and then probing the cache. Obviously
that could be closed by the kernel flushing the cache prior to handing control
back to user space after a SIGSEGV. Doing that would have no impact on normal
programs.

The second exploits a bug in Intel's transactional memory implementation. But
I assume Intel could turn that feature off, as they have done in the past.
Since bugger all programs use it, doing so wouldn't have much impact.

Which means the approach being taken now is done purely to kill the
speculative branch method (i.e., Spectre pointed at the kernel). The authors
say it should work, but also say they could not make it work. I haven't been
able to find any working PoC for my Linux machines.

So my question is: is there any out there?

~~~
rstuart4133
Never mind: [https://bugs.chromium.org/p/project-
zero/issues/detail?id=12...](https://bugs.chromium.org/p/project-
zero/issues/detail?id=1272#c2)

------
VikingCoder
Can the videos be put on YouTube for convenience?

~~~
che_shirecat
#1 - realtime password input -
[https://www.youtube.com/watch?v=yTpXqyRYcBM](https://www.youtube.com/watch?v=yTpXqyRYcBM)

#2 - physical memory leak -
[https://www.youtube.com/watch?v=kn0FopiF16o](https://www.youtube.com/watch?v=kn0FopiF16o)

the videos aren't very long, someone should compress it to <10mb as an
animated gif and do a pull request to put it in the README

~~~
garblegarble
>the videos aren't very long, someone should compress it to <10mb as an
animated gif and do a pull request to put it in the README

There's no need to use an awful format like gif, just embed an efficiently
compressed video file with the <video> tag

~~~
che_shirecat
I completely agree with the sentiment; however, GitHub currently does not
support embedded video in markdown [1].

Animated gifs do work when embedded, but need to be <= 10mb [2]

[1] [https://stackoverflow.com/questions/4279611/how-to-embed-
a-v...](https://stackoverflow.com/questions/4279611/how-to-embed-a-video-into-
github-readme-md)

[2]
[https://stackoverflow.com/a/46701929](https://stackoverflow.com/a/46701929)

~~~
garblegarble
Oh wow, that's crazy - especially 8 years since they said they'd look at it!
Thanks for the info

------
yuhong
One of the reasons I don't consider the timing attacks that important is that
there are often easier ways to bypass ASLR.

~~~
tedunangst
Are there easier ways to read kernel memory?

~~~
yuhong
The point is what reading kernel memory would be useful for.

~~~
tedunangst
One wonders why /dev/mem was ever read restricted to start.

~~~
yuhong
Reading /dev/mem is far easier/faster and typically provides more data than
this attack would.

------
revelation
The secret program confirms what others have seen: it's not so much "read any
physical memory" as "read memory in cache".

~~~
K0nserv
> "read any physical memory" as "read memory in cache"

You can force values from any memory to affect the cache in a predictable
manner which enables you to read all physical memory. See
[https://news.ycombinator.com/item?id=16108574](https://news.ycombinator.com/item?id=16108574)
or read the paper yourself
[https://meltdownattack.com/meltdown.pdf](https://meltdownattack.com/meltdown.pdf)

~~~
koolba
> You can force any memory into the cache so yes it's is read any physical
> memory.

Is there a direct method for that or do you mean that you can repeatedly try
reading memory addresses until the address that you want to access is actually
in the cache prior to your access?

~~~
K0nserv
The exploit is based on reading values that you shouldn't be allowed to access
during speculative execution and then using the returned values to create
persistent changes in the cache (they persist even after the CPU detects your
illegal access). Those persistent changes are then read via a side-channel
attack.

So you read any address you want speculatively and then use the result to
prime the cache in such a way that you can determine what the value you read
speculatively was. This works because modern operating systems map
kernel-space addresses into normal processes to make syscalls faster.

I'd recommend reading the paper[0], it's fascinating stuff.

[https://meltdownattack.com/meltdown.pdf](https://meltdownattack.com/meltdown.pdf)

------
john_teller02
These two bugs (Meltdown and Spectre) are really very speculative things. It
is like when human beings became aware of asteroid orbits, they thought that
Earth was in danger of being hit by one. Now that is indeed a theoretical
possibility, but what are the chances? These two bugs have existed for 20
years and there are no known exploits of them. The GitHub demos also mention
that they will only work if "For this demo, you either need the direct
physical map offset (e.g. from demo #2) or you have to disable KASLR by
specifying nokaslr in your kernel command line." So you basically start with a
broken system to exploit these bugs.

~~~
firethief
This is literally a PoC. It's too late for the standard "I can't imagine how
to exploit this so surely it cannot be done" fallacy. You are looking at an
example of how to do it.

