
How a researcher hacked his own computer and found 'worst' chip flaw - sonabinu
https://www.reuters.com/article/us-cyber-intel-researcher/how-a-researcher-hacked-his-own-computer-and-found-worst-chip-flaw-idUSKBN1ET1ZR
======
binaryapparatus
This is so deep and complex that I don't expect to be at ease using a
computer in the future. There can only be more flaws that reopen this can of
worms every time somebody pries at it.

The only half-secure way I see is to carefully pick what I install and forget
about running any JS code in the browser, ever. Typing this in w3m/vim, btw. Makes sense?

The FreeBSD system I'm using just issued a news item saying it isn't ready to
fix this vulnerability yet. Fun.

[https://www.FreeBSD.org/news/newsflash.html#event20180104:01](https://www.FreeBSD.org/news/newsflash.html#event20180104:01)

~~~
jerf
That might be a bit of an overreaction. But I will concede this is as close as
I've ever seen this get to reality:
[http://ansible.uk/writing/c-b-faq.html](http://ansible.uk/writing/c-b-faq.html)

Which is really a sequel to the short story
[http://www.infinityplus.co.uk/stories/blit.htm](http://www.infinityplus.co.uk/stories/blit.htm),
and which is followed up with
[http://www.lightspeedmagazine.com/fiction/different-kinds-of-darkness/](http://www.lightspeedmagazine.com/fiction/different-kinds-of-darkness/)
(the "FAQ" I linked above falls between those two).

~~~
JeremyBanks
I think the parallel's a bit of a stretch, but I really enjoyed the story, so
thanks for sharing.

~~~
jerf
I agree it's a stretch, but it's a fun link.

The parallel I see is that this is the closest I've ever seen this industry
come to having to start over from so far back. I very much suspect that we're
going to be playing cat and mouse with timing attacks for the next 10 years,
while the pie-in-the-sky research projects that will finally fix it are only
just getting started, more or less today.

The Mill CPU people have popped up a few times today... they seem to have an
interesting opportunity here.

------
seanalltogether
> “When I saw my private website addresses from Firefox being dumped by the
> tool I wrote, I was really shocked,” Gruss told Reuters in an email
> interview, describing how he had unlocked personal data that should be
> secured.

I wish I knew how to interpret this comment. Did he manage to dump his own
Firefox memory using client-side JavaScript? Or perhaps a local script running
in a terminal? Or is it totally different, and he's describing running a
process on his own web host that broke out of the VM container and printed out
other hosted websites' IP addresses?

~~~
gefh
I'm pretty sure he's running an unprivileged binary on his local machine that
shouldn't be able to see Firefox's memory, but can with this attack. I don't
know of any js exploit - this attack relies on specific machine code
instructions that would be very difficult to get a js engine to generate.

~~~
Klathmon
>I don't know of any js exploit - this attack relies on specific machine code
>instructions that would be very difficult to get a js engine to generate.

The Spectre paper shows that this is not only possible to do in JavaScript;
they included a proof-of-concept of it working in JavaScript to read anything
from that process's memory.

~~~
GunlogAlm
Their proof-of-concept only reads memory from the browser it's running within,
right? Violating browser sandboxing, yes, but does that mean it would also
(theoretically) be able to access the memory of other programs?

~~~
Klathmon
As far as I understand it, the JavaScript PoC only allows you to read memory
from the same process, meaning that if all processes are isolated, you can't
read anything other than your own tab's memory. However, many browsers don't
do process isolation, and just about none of them do it completely (even
Chrome won't have mitigations that prevent iframes from other origins from
mounting this attack for another week or two).

And while that alone might seem pretty innocuous, think of the things that are
kept in memory for a specific tab. This could let XSS attacks grab HTTP-only
cookies, or read some autofill data, or potentially password-autofill
information. I don't know enough about the architecture of those things to
know whether they are kept in the same process, but if they are, it's
possible.

That being said, I didn't read any specific reason why that JavaScript PoC
couldn't be expanded to work on other processes just like the rest of the
Spectre attack. And personally I'm assuming it's only a matter of time before
it is figured out.

~~~
trendia
> Even Chrome won't have mitigations to this that prevent iframes from other
> origins doing this attack for a week or 2

You can enable (experimental) Chrome process separation here:

chrome://flags/#enable-site-per-process

------
Pxtl
Okay, one thing I don't get: fundamentally, the Meltdown problem comes from a
process getting to inspect the wreckage after the CPU _tried_ to clean up an
access to protected memory, but failed to properly account for a fun trick of
caching.

That is, the process tried to read some data from protected memory A and did
some fancy stuff with it that left enough breadcrumbs that, after the CPU
responds to the illegal read, the process still has enough info to piece
together what was in A, at least from the timing of reads, right?

Why not kill the process instantly on a prohibited read, then? Do these
prohibited reads happen too often to switch from "oh, that's okay, you can try
again and do other stuff" to "the punishment is death"?

~~~
apendleton
My understanding:

Because it may not be known, at the time a read is speculatively executed,
whether it will have ended up being prohibited. Think of a simple for loop
over the contents of an array: at the end of each iteration you jump back to
the beginning of the loop, until the last time, when you don't, because i >=
array.length or whatever. The branch predictor will likely predict that, the
last time, you'll jump back again (you always have before, and it doesn't know
the actual value of array.length yet, since that takes hundreds of cycles to
come back from memory) and try to speculatively execute the contents of the
loop body on what turns out to be out-of-bounds data. That's not typically
malicious; it happens at the end of pretty much every iteration over an array,
ever.

So, if we can reliably get our loop to operate on a bit of data that might
turn out to be out of bounds, the trick is to have a loop body like this:

    if (potentially_illegal_bit == 1) {
        load_legal_location_1();
    } else {
        load_legal_location_2();
    }

After the branch misprediction occurs, the state of the program gets rolled
back, so we no longer know what potentially_illegal_bit was, but we don't have
to: we can surmise it from the cache contents. The program never performs an
illegal read architecturally; it reads both legal locations and sees which one
is faster (only one of them should be cached). Both reads are legal and will
succeed, so the only difference is the speed. We've now exfiltrated one bit of
data from outside the array bounds, but the only actually illegal read was
speculative, which, as far as the CPU is concerned, is par for the course
(predictions are wrong all the time), so no harm, no foul.
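The mechanism above can be modeled in a few lines. This is only a toy
simulation - a Python set standing in for the cache, with no real speculative
execution, and the names (exfiltrate_bit, legal_location_1/2) are made up -
but it shows why two legal loads are enough to recover the secret bit:

```python
def exfiltrate_bit(secret_bit: int) -> int:
    """Toy model of the gadget above: recover one secret bit from
    which of two *legal* locations ended up cached."""
    cache = set()  # addresses currently "cached"

    # Speculative window: the mispredicted branch body runs on the
    # out-of-bounds secret bit and touches one of two legal lines.
    if secret_bit == 1:
        cache.add("legal_location_1")
    else:
        cache.add("legal_location_2")

    # Architectural state is rolled back, but the cache is not.
    # The attacker now reads both legal locations and times them;
    # here "in the set" stands in for "comes back fast".
    return 1 if "legal_location_1" in cache else 0
```

Here exfiltrate_bit(1) returns 1 and exfiltrate_bit(0) returns 0, without the
attacker ever architecturally reading the secret.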

~~~
Pxtl
Ahhh, that's what I missed. I thought the "speculative execution" people were
talking about was executing the tricks on the copied data after the illegal
read.

But not only that: the entire illegal operation can be fenced behind
speculative execution with a simple "if" branch. And illegal reads happen in
that kind of context all the time, because (for example) that's how every for
loop ends.

There are really two places where pipelining and speculative execution cause
this problem: the obvious one, where the CPU is allowed to manipulate
protected data before everything gets blown away (except cache loads of legal
data, hence the problem), but also a second one, which means that reads of
protected data happen _all the time_ in normal branch-prediction misses.

When it happens in normal flow, you can't punish it by killing the process.
Reading protected memory in a speculative branch isn't some digital "attempted
breaking and entering" that you can clamp down hard on; it's totally normal
behavior.

I get it now, thanks.

~~~
bennofs
The other thing is that it is not necessary to be in the same process for
this: you could in theory have one process do the access (which gets killed)
and then observe the cache result in another process.
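This cross-process variant can be sketched the same way. Again a toy model
(a shared Python set stands in for a physically shared CPU cache, and all the
names are made up for illustration), but it shows why killing the offending
process doesn't help - the signal has already escaped into shared state:

```python
# A shared set standing in for a physically shared CPU cache.
shared_cache = set()

def doomed_accessor(secret_bit: int) -> None:
    """'Process A': touches a cache line based on the secret, then dies.

    The kill (SystemExit here, standing in for a SIGSEGV) erases the
    process, but not its cache side effect."""
    shared_cache.add("line_1" if secret_bit else "line_2")
    raise SystemExit("killed on the prohibited read")

def observer() -> int:
    """'Process B': probes the shared cache afterwards to recover the bit."""
    return 1 if "line_1" in shared_cache else 0

try:
    doomed_accessor(1)
except SystemExit:
    pass  # process A is gone, but the cache line it touched remains
```

After this, observer() recovers the bit even though the process that did the
access no longer exists.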

~~~
Pxtl
Daaang, I hadn't even thought of that.

------
tischler
How was this bug independently discovered by three research teams (Graz
University of Technology, Google's Project Zero, Cyberus Technology)? Was it
known beforehand that such an attack, or a similar one, could be possible in
theory, such that all the teams came up with the same idea?

~~~
exhilaration
The article suggests that all the research teams were inspired by one source:
_The team quickly got in touch with Intel and learned that other researchers -
inspired in part by Fogh’s blog - had made similar discoveries._

~~~
tischler
This is the blog article:
[https://cyber.wtf/2017/07/28/negative-result-reading-kernel-memory-from-user-mode/](https://cyber.wtf/2017/07/28/negative-result-reading-kernel-memory-from-user-mode/)

~~~
oh-kumudo
But this blog post was written on 07/28 last year, and Google reported this
bug back on 06/01, right?

~~~
DannyBee
Yes. The timeline mentioned is the timeline for others, not Google.

------
forgot-my-pw
“Jann Horn developed all of this independently - that’s incredibly
impressive,” he said. “We developed very similar attacks, but we were a team
of 10 researchers.”

That is very impressive.

------
neduma
NSA?

~~~
crb002
Hubris.

For over a decade I've been trying to wrap my head around how these side-
channel attacks between L1 caches and branch-prediction caches don't happen.

Wish I had published my November 2016 research into function call leaks. CPU
registers leaking information between function calls is still a huge
unmitigated issue.

If you want to shit yourself, goose GCC or LLVM to instrument your function
calls with a dump of everything in the CPU registers before they start. Do
this with libraries that people blindly link in dynamically.

~~~
quacj
> _CPU registers leaking information between function calls is still a huge
> unmitigated issue._

How is that a security issue? It's in the same process.

~~~
crb002
Say you have a memory pointer to sensitive info in a CPU register.

Could the subroutine trawl through memory and find it? Yes, but it would be
orders of magnitude harder than having that pointer in a register.

Better example is if you are using 64 bit ints in registers as security
tokens. Those should be zeroed after use and the compiler needs to not
accidentally optimize away zeroing them.
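The "zero it after use" point can be illustrated outside of C, too. A hedged
Python sketch (the names are mine; in C you would reach for something like
explicit_bzero so the compiler can't elide the wipe): keep secrets in a
mutable buffer you can overwrite, rather than an immutable object you can't.

```python
def scrub(buf: bytearray) -> None:
    """Overwrite a secret in place so the bytes don't linger in memory.

    This works because bytearray is mutable; an immutable str or bytes
    object can't be wiped, only garbage-collected eventually."""
    for i in range(len(buf)):
        buf[i] = 0

token = bytearray(b"64-bit-security-token")
# ... use the token ...
scrub(token)
assert all(b == 0 for b in token)
```

The same caveat from the comment above applies: the language runtime or
compiler must not be free to skip the overwrite of a buffer it thinks is dead.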

~~~
quacj
Do you have a proof of concept you can share for this type of attack?

I don't quite see how an attacker would take advantage of this in a real-world
scenario.

~~~
crb002
Yes: a John Deere tractor running Wind River Linux, with heavy use of dbus and
Qt. Any compromised tractor-app vendor could inject it.

You could probably reproduce it from the upstream Yocto distro and the free
version of Qt.

There are much larger attack surfaces to worry about, though, like their
craptastic security around the USB sticks that farmers plug in to read maps.

------
danjoc
I'm surprised practically nobody cares about this. NPR was running a story
about meditation this morning. Here at HN, we're all excited about a new Linux
laptop with a defective Intel chip included.
[https://news.ycombinator.com/item?id=16073039](https://news.ycombinator.com/item?id=16073039)

I guess this is why Intel can get away with it. Nobody cares.

