
More Intel speculative execution vulnerabilities - wolf550e
https://mdsattacks.com/#ridl-ng
======
xucheng
> We are particularly worried about Intel's mitigation plan being PoC-oriented
> with a complete lack of security engineering and underlying root cause
> analysis, with minor variations in PoCs leading to new embargoes, and these
> "new" vulnerabilities remaining unfixed for lengthy periods. Unfortunately,
> until there is sufficient public / industry pressure, there seems to be
> little incentive for Intel to change course, leaving the public with a false
> sense of security. Slapping a year-long embargo after another (many news
> cycles apart) and keeping vulnerabilities (too) many people are aware of
> from the public for a long time is still a viable strategy.

This is troubling.

~~~
segfaultbuserr
Perhaps researchers can use the tested-and-proven "Full Disclosure" tactic to
exert public pressure on Intel. They wouldn't need to disclose everything; two
or three additional unpatched PoCs with full source code would be enough.

However, unlike buffer overflow exploits, most research on CPUs is conducted
within academic institutions, and doing this would certainly breach their
codes of conduct. Also, CPUs are the most critical components of all computers
and their vulnerabilities are difficult to fix; doing this would put a lot of
users at immediate risk, unlike a root exploit, which is less risky and can be
fixed within a week. Full disclosure of hardware exploits that users cannot
fix is much more ethically problematic than full disclosure of software
exploits.

But leaving users in the dark and allowing Intel to delay its fixes by _not_
exerting pressure is obviously irresponsible, which is the original argument
for full disclosure.

So I'm not sure. Perhaps Google's Project Zero is a good model and a good
compromise between responsible and full disclosure: embargo for 90 days, full
disclosure afterwards, all information becomes public after 90 days and no
extension is allowed, period. For CPUs, perhaps we can use 180 days.

~~~
hghhbvv
I wish FPGAs were fast enough, because I don't see how else this problem can
be solved. It's easy to issue a patch that fixes logic implemented in an FPGA;
it's impossible to fix issues rooted in hardware unless they can be worked
around at a huge performance cost, if that is possible at all.

Not an EE, so I'm just dumping my thoughts.

~~~
segfaultbuserr
One problem with FPGAs is the absolute dependence on proprietary tools from
the vendor; the hardware industry is much more closed in comparison. By using
those tools, you have to agree to terms and conditions such as the following
(this one is from Xilinx):

> By using this software, you agree not to: [...] display the object code of
> the Software on any computer screen.

From a security perspective, it doesn't inspire confidence: there's no ability
to do independent verification, nothing like doing a reproducible build with a
compiler whose source code is open to audit. There's no equivalent of GCC or
LLVM for FPGAs.

Fortunately, there are some people working on it - I just saw this on the
homepage - although there's still a long way to go...
[https://news.ycombinator.com/item?id=21522522](https://news.ycombinator.com/item?id=21522522)

I'm not an EE, just my 0.02 USD.

~~~
abjKT26nO8
_> By using this software, you agree not to: [...] display the object code of
the Software on any computer screen._

I.e. ... I can print it? Good.

------
Jonnax
So Intel failed to mitigate the vulnerability when it was first reported. Then
they extended the embargo from May until November.

And they still didn't fix it.

What's going on with Intel? It's like they're going all in with lying in
benchmarks against AMD and straight up forgetting what has been reported to
them as security issues.

~~~
IntelThrowaway
One of the problems with Intel culture - especially under BK was that the
philosophy was "Focus on our key goals to the exclusion of everything else".
It was meant to keep focus and ensure we moved quickly. The problem is that it
made us entirely unresponsive. It doesn't matter if something important has
come up, because you've already agreed what the priority is; you've already
committed to what you're going to do. So even if something does come up,
communicating that problem to the team that needs to fix it is impossible,
because you'll get ZBB'd ("if we do this, we will drop that"). Then, once
you've got engineering to commit, the bureaucracy won't let you just release
anything, so you need to line up with a release process.

I'm sure no one intended to mislead, but organisationally Intel just isn't
designed to fix bugs. It doesn't have a process to respond to issues.

~~~
jhalstead
Dumb question: Can you clarify what ZBB'd means in this context? I've never
seen it before. There's a wiki page [0] with various meanings for the
abbreviation, but nothing seemed to fit. Maybe Zero-based budgeting?

[0] [https://en.m.wikipedia.org/wiki/ZBB](https://en.m.wikipedia.org/wiki/ZBB)

~~~
epsilon_greedy
Zero-based budgeting. It effectively means that a project is no longer getting
funded or staffed and is therefore dead.

~~~
nieve
I have to wonder how many bitter Pyrrhic-victory jokes get made by victims of
poor implementations, based on the creator's last name - Pyhrr. I've seen too
many companies destroyed by superficially attractive budgeting schemes; even
with how much tech has started to acknowledge perverse incentives, they seem
to get overlooked. The worst I had direct experience with was a company that
moved to crediting all income to marketing & bizdev and treating all other
parts of the company as losses to be minimized. We were both advertising
supported (and thus needed content to sell anything) and had professional
products that required analysts, but when the market started looking rougher
they kept the sales & ad people and concentrated layoffs on the people who
produced things or ran the infrastructure.

Ultra-simplified bookkeeping interpreted through the lens of too much coke
nearly destroyed them. The dotcom bubble was a very strange time.

------
jannemann
To reiterate the authors' written words: Intel prevented this particular
problem from reaching the public, and then they flat out lied to the
customers.

------
wyldfire
> On July 3, 2019, we finally learned that, to our surprise, the Intel PSIRT
> team had missed the PoCs from our Sep 29 submission, despite having awarded
> a bounty for it, explaining why Intel had failed to address - or even
> publicly acknowledge - many RIDL-class vulnerabilities on May 14, 2019.

What does this usage of the word 'missed' mean in this context? That they lost
it / failed to deliver the PoC to the relevant team? Or that they released a
"fix" knowing that it didn't defeat the PoC?

~~~
nolok
From the way the sentence is phrased, I believe they released a fix that
covered all previously known PoCs but not the ones from that submission.

Generally speaking, that really illustrates the dumb way Intel is going about
this: fixing on a per-PoC basis rather than going after the underlying
problem. It basically screams "there will always be issues, the question is
whether you can find them!"

------
iforgotpassword
AMD is suffering much less from these flaws. It seems they didn't ignore as
many security boundaries in their implementation.

~~~
wolf550e
AMD (and ARM OoO chips) are vulnerable to Spectre variant 1 (bypass in-process
array bounds checking) but not to the vast majority (any?) of the other issues
which are Intel-only.

AMD chips don't determine speculation failure at instruction commit time, when
it is already too late, so most of these issues simply can't happen on them.

~~~
rrss
AMD processors were also vulnerable to Spectre v2. I don't know the status of
the mitigations or whether it was fixed in Zen 2.

EDIT:

I found the list I made a few months back. No guarantees, but I think it is
mostly accurate.

Meltdown: Intel, IBM, some ARM

Spectre v1: Intel, ARM, IBM, AMD

Spectre v2: Intel, ARM, IBM, AMD

Spectre v3a: Intel, ARM

Spectre v4: Intel, ARM, IBM, AMD

L1TF: Intel, IBM

Meltdown-PK: Intel

Spectre-PHT: Intel, ARM, AMD

Meltdown-BND: Intel, AMD

MDS: Intel

RIDL: Intel

~~~
kllrnohj
Spectre v2, like v1, isn't one that is "fixable." The mitigations (retpoline &
microcode updates) are essentially barriers inserted at the places where
security checks are done, to disable speculation for that particular check.
But you still have to choose when and where to use them, or whether to use
them at all.

There are no sweeping fixes for either v1 or v2, and there probably won't be
for a long time at best.

But the positive news is that v1 & v2 only matter at all if you do in-process
sandboxing of untrusted code. Which most things don't do, so most things are
not at any risk from it.
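For v1 specifically, the per-callsite fix often takes the form of branchless index masking, as in the Linux kernel's array_index_nospec(). A minimal sketch (my own names, modeled loosely on the kernel helper rather than copied from it):

```c
#include <stddef.h>
#include <stdint.h>

/* All-ones mask when index < size, zero otherwise, computed without a
 * branch, so a mispredicted bounds check can't steer the load out of
 * bounds: the masked index is forced to 0 instead. */
static size_t index_mask_nospec(size_t index, size_t size)
{
    return ~(size_t)((intptr_t)(index | (size - 1 - index))
                     >> (sizeof(size_t) * 8 - 1));
}

static uint8_t load_clamped(const uint8_t *array, size_t len, size_t index)
{
    index &= index_mask_nospec(index, len);  /* 0 when out of range */
    return array[index];
}
```

The point is exactly as above: this has to be placed by hand at every vulnerable access; nothing sweeps the whole binary.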

~~~
rrss
> But the positive news is that v1 & v2 only matter at all if you do in-
> process sandboxing of untrusted code.

I don't think this is accurate. It seems to be a widespread misunderstanding
that started because the original proof of concept was within a single
process. Spectre, before mitigations, allowed userspace to read kernel memory
if appropriate gadgets in the kernel could be identified and exploited.

My understanding is the impact is only intra-process after mitigations.

------
foxes
Another nail in the casket lake. Is the solution just to throw everything out
and start again? Do we just abandon speculative execution?

~~~
nolok
The problem for most of these is not speculation itself; it's not doing proper
ACL checks during speculation. Just because the CPU is speculating doesn't
mean it shouldn't check whether you are allowed to access this or that, but
that's what Intel did: they only did the security check at the end, before
handing back the result, and at that point it's too late - you've already
accessed it.

Other manufacturers, AMD included, weren't affected by those variants.
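To make the ordering concrete, here's a toy model of the two policies (nothing like real hardware, just the sequencing; the function names and the `cache_touched` flag array are made up):

```c
#include <stdbool.h>

/* A "load" either checks permissions before touching the cache (early
 * check) or touches the cache first and only faults at retirement
 * (late check). The cache footprint is what a Meltdown/RIDL-style
 * attacker observes after the fault. */

enum { LINES = 8 };
static bool cache_touched[LINES];

static int load_early_check(int line, bool allowed)
{
    if (!allowed)
        return -1;              /* fault before any microarchitectural effect */
    cache_touched[line] = true;
    return 0;
}

static int load_late_check(int line, bool allowed)
{
    cache_touched[line] = true; /* data fetched speculatively... */
    if (!allowed)
        return -1;              /* ...fault raised only at commit: too late */
    return 0;
}
```

Both variants return a fault to the program, but only the late-check one leaves a trace behind.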

~~~
koheripbal
I wonder what the performance costs of those ACL checks are.

~~~
nolok
A single one might not be that big, but they add up (if there are several such
checks to do in your speculated branch, instead of doing them as they appear
during speculation, Intel chips pooled them all up at the end), and they
disappear if the speculation was wrong (speculated the wrong branch? Then you
didn't lose any time on permission checks).

They're the ~10% or more of CPU performance that Intel chips have lost over
the last few years with all the mitigations.

~~~
kllrnohj
> They're the >~10% of cpu perf intel chips have lost in the last few years
> with all the mitigations.

Given that AMD's Zen 2 has comparable IPC to Intel at this point without doing
the ACL check late, it's not evident that the late ACL check was a key
efficiency gain.

~~~
nolok
I was not talking about Intel's 10% lead over AMD; I was talking about chips
from 3-4 years ago which, when tested again now with mitigations, perform 10%
or more worse in affected workloads (see
[https://www.phoronix.com/scan.php?page=article&item=intel-
ic...](https://www.phoronix.com/scan.php?page=article&item=intel-icelake-
mitigations&num=4) for example).

~~~
resoluteteeth
Surely the mitigations are not the same as "doing proper ACL checks during
speculation" in the first place, and have a worse performance hit?

------
TheMagicHorsey
I worked at Intel in 1998 as a young engineer a few years out of college. Back
then they were riding high on their CPU monopoly.

I left in about a year and a half and moved to a startup company. My issue
with Intel was that, as a monopoly, they had grown fat and complacent.

I am not exaggerating when I say that in 1998-99, engineers were working maybe
4-5 productive hours in a week. Political savvy and alliance-building were the
most important things for promotions and influence.

Those who actually produced good work had the credit for it diluted through
many layers of management. You could do something amazing and your manager's
manager might present it in a PowerPoint to the company without mentioning
your name at all, acting like the idea was his all along.

I'm surprised the company has lasted this long. It's a place where mediocre
people gather.

------
jacquesm
Is there a list of the exploits found 'in the wild' that depend on speculative
execution?

~~~
wnevets
For regular consumers, it's doubtful there are any.

------
aeiou1234
Another 0-4% performance hit for Skylake.

~~~
atq2119
The really damning part is that it applies even to processors that are
supposedly fixed in silicon, because Intel dropped the ball by playing
whack-a-mole with proof-of-concept exploits instead of thoroughly building
their chips with security in mind.

If the history of Microsoft and Windows security is any indication, it'll take
Intel many many years to turn that ship around.

There's a question of whether AMD has been mostly unaffected only because
their chips haven't received as much scrutiny, but for the time being it does
seem that if you care about security, you'd better go with Epyc.

~~~
wtallis
I don't think anyone seriously expected Intel to be able to thoroughly harden
their chips against Spectre-style attacks given just a year to tweak their
existing microarchitecture. They were able to move some permissions checks
ahead of some speculative actions, but they simply haven't had enough time to
design an architecture that can unwind all observable side-effects of
mispredictions. It was obvious that all the near-term fixes in silicon would
be equivalent to or only slightly better than the microcode or OS-based
mitigations.

------
duxup
I recall a Google security blog that argued, more generally, that with so many
layers of code whose behavior you can't know, they effectively consider any
speculative execution that existed at the time of writing to be a potential
vulnerability.

------
ageofwant
Running the below over my machines gives me back the 8-30% of the cycles I
originally paid for, depending on load type. This will have to do until
everything is swapped to AMD. Note that on later kernels you only need
'mitigations=off'.

    
    
        - name: Disable CPU-sapping security mitigations
          become: yes
          lineinfile:
            path: /etc/default/grub
            line: GRUB_CMDLINE_LINUX_DEFAULT="noresume noibrs noibpb nopti nospectre_v2 nospectre_v1 l1tf=off nospec_store_bypass_disable no_stf_barrier mds=off mitigations=off"
    
        - name: Update grub
          become: yes
          command: /usr/sbin/grub-mkconfig -o /boot/grub/grub.cfg
    

No I don't give a fuck about the 'risk' this introduces, but I expect my bank
to.

~~~
wolf550e
Your web browser runs untrusted code. On a computer that never runs untrusted
code you can do that.

~~~
scandinavian
There have been more Chrome/Firefox 0-days than speculative execution
vulnerabilities exploitable from JavaScript (zero). Sure, there is a chance
that the Chrome/Firefox teams missed something, but no exploits have been
sighted since the release (of Spectre v1) and the browser fixes.

It's not a crazy threat model to have on a personal PC; the risk is so very
minimal. If your threat model is that strict, you shouldn't be running JS
anyway.

~~~
dijit
What are you even talking about? The introduction to this problem came with a
proof-of-concept _IN JAVASCRIPT_.[0]

Session keys, private keys, passwords and all other kinds of access tokens
that your system is using - it's the next worst thing to remote code
execution.

Your browser runs so much untrusted code that it's really unreasonable, and
yes, we should definitely be pushing back hard on that. But disabling these
mitigations is probably the most stupid thing you can do, because they're not
just theoretical: they're real, they're here, and everyone knows about them.

This is like anti-vaxx philosophy. "The risk is low" - well, maybe the risk is
low because of herd immunity: it's not feasible to run these attacks while
they'll be obvious to those who have the mitigations in place (100% CPU), but
if there's a 0.00000001% return then it becomes profitable to exploit, just
like mail spam.

Do not fucking turn off these mitigations on desktop computers; they are too
complex and run untrusted code all the time. Unless you can work without
JavaScript - and I doubt you can, because the web today is basically unusable
without it.

If you have a database which is only accessible internally, you can disable
the mitigations, because those workloads are hit hardest by the mitigations
and do not run untrusted code.

But really, your desktop is running untrusted code _a lot_. Please do not do
this, not only for your own sake but for everyone's sake. Don't make it
profitable for malicious agents to run these attacks.

[0]:
[https://www.reddit.com/r/javascript/comments/7ob6a2/spectre_...](https://www.reddit.com/r/javascript/comments/7ob6a2/spectre_and_meltdown_exploit_javascript_example/)

~~~
scandinavian
From the comment in your link:

> It's missing the entire actual implementation of the Spectre attack, which
> requires analysis of read times to see if you're hitting the processor cache
> or not.

"Analysis of read times" is what the browsers "fixed" to mitigate the attacks
(plus site isolation later). Again, there have been no working attacks on
updated browsers.

Please feel free to link an example of one, though; I will gladly admit I'm
wrong. You just seem to frankly have no idea how the exploits actually work
(did you actually read the code in the reddit post?), so I suspect this
conversation will be a waste of time.

~~~
dijit
The browser mitigations only work with the kernel mitigations. Neither works
well without the other. And yes, I'm very well versed in this topic.

~~~
scandinavian
>The browser mitigations only work with the kernel mitigations.

That's just not true. The timer precision doesn't have anything to do with the
kernel mitigations.

~~~
dijit
The timing precision thing by itself doesn't thwart anything; it just makes
the attacks harder or more time-consuming.

The browser vendors themselves said this, and it's not a permanent solution,
as tech such as Stadia and WebVR relies on high-precision timers.

But, whatever man, I'm telling you that it's stupid and you want to bury your
head in the sand.

You just make these attacks more likely; I'm not going to be impacted except
for a few trillion CPU cycles of idiots trying to exploit me.

You're the one who puts their entire digital life on the line by eking out 5%
performance.

~~~
scandinavian
>The timing precision thing by itself doesn't thwart anything, it just makes
the attacks harder or take more time.

Oh, so it just takes more time - so you have knowledge of an exploit? Fine,
show me any PoC or similar that bypasses the lowered timer accuracy and site
isolation.

You are such a big part of the problem with how this whole class of exploits
has been handled. No technical knowledge, just spewing stuff like "you're the
one who puts their entire digital life on the line", when there is no
indication that anything like that can transpire.

Please stop spreading misinformation.

~~~
dijit
Except that the Spectre paper already takes degraded timers into account and
suggests using a Web Worker thread that increments a value in a loop as a
replacement.

This is not misinformation, _you_ are spreading "certainty" of safety
surrounding a dangerous idea.

[https://spectreattack.com/spectre.pdf](https://spectreattack.com/spectre.pdf)
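The counting-thread idea is trivial to reproduce natively, which is why degrading the official clock alone doesn't remove the timing source. A toy version of the paper's Web Worker counter, assuming POSIX threads (all names mine):

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

/* One thread spins incrementing a shared counter; readers sample the
 * counter as a timestamp. The increment rate, not performance.now()'s
 * precision, bounds the resolution of this clock. */
static atomic_ulong ticks;
static atomic_bool running = true;

static void *counter_thread(void *arg)
{
    (void)arg;
    while (atomic_load_explicit(&running, memory_order_relaxed))
        atomic_fetch_add_explicit(&ticks, 1, memory_order_relaxed);
    return NULL;
}
```

In the browser the same pattern needs a shared buffer between worker and main thread, which is exactly why SharedArrayBuffer was pulled after Spectre shipped.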

Even if I was wrong, and very wrong, why the hell would you choose to be less
safe? This whole thread chain is absolutely baffling. Buy an AMD CPU or leave
the mitigations on. Everything else is needlessly opening yourself up.

~~~
scandinavian
>Except that the Spectre paper already takes degraded timers into account and
suggests to use a Web Worker thread that increments a value in a loop as a
replacement.

Yeah, which is why SharedArrayBuffer was disabled when Spectre v1 was
released. It is still disabled in Chrome if site isolation is disabled, and
it's still disabled in Firefox.

You should really know all this if you are so very well versed in the subject.

>Even if I was wrong, and very wrong, why the hell would you choose to be less
safe? this whole thread chain is absolutely baffling. Buy an AMD CPU or leave
the mitigations on. Everything else is needlessly opening yourself up.

I don't run without mitigations. I commented on the original parent comment's
threat model, and I think it's perfectly logical. And I maintain this: if your
threat model is so strict that you are afraid of speculative execution
vulnerabilities hitting you through JavaScript, you should not run JS at all,
since regular JS 0-days have hit while no actual speculative execution browser
exploits have.

~~~
dijit
“Threat model is so strict” is a weird thing to say when any ad network can
potentially access any and all memory on your desktop.

That's a very wide attack surface.

~~~
scandinavian
You have misunderstood how the browser attacks work. They are limited to the
memory assigned to the browser process. If site isolation and the other
browser mitigations were somehow bypassed, an ad network would potentially be
able to read some data from other loaded tabs.

You can't use the speculative execution vulnerabilities to just read all
system memory from a JavaScript exploit. For instance, the exploit that is the
topic of this post can't be used in a browser at all, as you are limited to
what the JIT executes; you can't just execute TSX instructions in a browser.

You might be thinking of when you have native code execution.

~~~
saurik
The website covers a handful of things, and the RIDL exploits don't require
special instructions.

> We leak a string from another process using Javascript and WebAssembly in
> the SpiderMonkey engine.

~~~
scandinavian
> We leak a string from another process using Javascript and WebAssembly in
> the SpiderMonkey engine.

They leak in-flight data, using a detached SpiderMonkey engine patched to make
performance.now() return rdtscp, at a rate of 1 B/s, while the victim
application is spamming a string-load instruction as fast as possible.

This does not allow:

>any ad network can access any and all memory on your desktop

This allows an ad network to access random bits on the cache line. If the
timing mitigations didn't already prevent this, it still seems impossible to
me to get anything useful from it; the precision and bitrate are just too low
(which is why the exploit just spams load instructions in a while(1) loop).

>and the RIDL exploits don't require special instructions.

Weird - the new addendum says it uses TSX, and the PoC uses XBEGIN. Must be a
mistake.

------
ksec
And yet, despite all of Intel's security problems and issues, server vendors
are buying more Intel than ever. On one hand they complain about Intel's
pricing, threatening to build their CPUs on POWER or ARM; on the other hand
they happily use AMD as a tool to get better pricing.

The sales numbers don't lie: AMD doesn't even account for 10% of server CPU
shipments, and may not get there in 2020 given Intel's new price cuts.

------
m_eiman
Is it possible to apply the mitigations at a per-application level in Windows?
IMHO it'd be pretty useful to have them on by default, but to be able to
disable them for specific applications where you care about maximum
performance and know that you won't be running untrusted code.

~~~
alkonaut
Surely if you emulate a processor without speculative execution with good
fidelity (such as a good NES emulator), then the program running ON that
emulator can't deduce anything from speculative execution?

Is the answer (for _home users_ ) to just sandbox some processes under an
emulator layer? I'd be happy to just sandbox some sensitive processes like my
browser even if it took a huge performance hit, so long as some other apps
like games did not take the same hit.

~~~
HelloNurse
Not very likely: the attacker is outside the emulated vulnerability-free
sandbox, and the state of the emulator is exposed like the state of any other
program.

Accessing the emulator's memory means accessing the emulated program's memory,
it's just slightly obfuscated.

~~~
alkonaut
How is the attacker outside, assuming it's a process running on the emulator
(i.e. the attack surface here in the emulator example would be only the NES
game, so he has to work with NES CPU opcodes, NES memory locations, etc.)?

~~~
HelloNurse
The attacker is not a process running on the emulator, not even if you
_assume_ it is. Security is about the worst case, not about hit or miss
partial solutions.

~~~
alkonaut
I’m talking about a hypothetical future where there are mitigations in place
such as running each app sandboxed in an emulator. Obviously if a malicious
process can sidestep any such mitigation and run any way it wants, then
presumably it can read the memory of any process too? Why would an app even
need to rely on vulnerabilities then?

~~~
HelloNurse
If this is your scenario, the attacker runs in its own emulator. A fairly
thick pair of gloves, but not enough to prevent exploiting speculative
execution vulnerabilities and reaching outside the emulator to poke into other
processes.

As another comment points out, attacks from Javascript code that escape the
browser sandbox have been demonstrated, which is exactly your sandboxing
scenario minus the easy part of targeting what an emulator is emulating.

~~~
alkonaut
> attacks from Javascript code that escape the browser sandbox have been
> demonstrated

As far as I understand, you still need some kind of timing information or CPU
state available in the sandboxed program, which is possible if the
emulator/sandbox runs close enough to the metal (such as a JS program in a
modern browser, because they need to be fast). Remove ALL timing info and it
should be possible to make speculative execution impossible to exploit. It
might run 1000x or 10000x slower than a modern JS engine, however.

~~~
HelloNurse
I need to reiterate that optimism ("remove all timing info...") has no place
in information security.

If you think you have removed all timing information sources you are aware of,
many remain: those you aren't aware of at all, those you failed to recognize
as exploitable, those you didn't actually remove by mistake, those that are
degraded but still present... The attacker should be assumed to be clever and
knowledgeable; as the saying goes, creating a system that _you_ don't know how
to crack is easy.

------
zaphirplane
OK, this has been bugging me for a while: how does speculative execution roll
back side effects like a write to disk or a packet sent on the network when
the speculation turns out to be wrong? At a guess, there are safe instructions
that can be run when doing branch prediction?

~~~
wolf550e
Nothing like that happens. Writes to DRAM, PCIe devices, or the data bus
cannot be rolled back; you have a misunderstanding of what speculative
execution entails. The things that get rolled back are writes to
general-purpose registers.

~~~
gmueckl
How is this implemented? Undoing register renaming?

~~~
monocasa
Exactly. Also not committing writes that are still in the store buffer.
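A toy sketch of that rename-table rollback (purely illustrative; real cores also track free lists, in-flight maps, and so on - all names here are made up):

```c
#include <string.h>

/* Architectural registers are names mapped onto physical registers.
 * At a branch, the rename table is snapshotted; speculative writes
 * allocate fresh physical registers, and a mispredict simply restores
 * the old table. Nothing is "undone": the speculative values just
 * become unreachable. */

enum { ARCH_REGS = 4, PHYS_REGS = 16 };

struct cpu {
    int rename[ARCH_REGS];      /* arch reg -> phys reg */
    int phys[PHYS_REGS];        /* physical register file */
    int next_free;              /* next free physical register */
};

static void write_reg(struct cpu *c, int arch, int value)
{
    int p = c->next_free++;     /* allocate a new physical register */
    c->phys[p] = value;
    c->rename[arch] = p;
}

static int read_reg(const struct cpu *c, int arch)
{
    return c->phys[c->rename[arch]];
}
```

Rolling back is then just a memcpy of the saved rename table over the current one, which is why recovery from a mispredict can be fast.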

------
voidmain
Can we please have an architectural MSR to disable TSX?

~~~
hansendc
The good news: There's a new MSR which lets you do this:

[https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c2955f270a84762343000f103e0640d29c7a96f3)

The less good news: as far as I can tell, Intel did not commit to how
architectural this will be going forward. Considering the role TSX has played
in speculation-based attacks, it appears to me to be a generic mitigation that
would be great to accompany TSX wherever it is available in the future. Now
that MSR_IA32_TSX_CTRL is defined, it should be easier to implement going
forward.

Disclaimer: I work on Linux at Intel.

~~~
voidmain
...speculation attacks, and outright bugs in TSX, and the fact that even when
everything is working it's a timing mechanism that VMX can't intercept. I
really hope it becomes architectural (and gains a bit for HLE)!

------
mikorym
What would the performance impact be if they simply took away speculative
execution (but not caching)?

------
annoyingnoob
Is disabling hyper-threading a viable workaround for these vulnerabilities?

~~~
OrangeMango
My understanding is that this works for many (but not all) of the
vulnerabilities. Interestingly, in the past few years Intel has dramatically
increased the number of non-SMT processors on their product list.

If that is going to be your personal way of mitigating the issue, you've got a
choice of 4, 6, and 8 core parts at a significant discount compared to their
HyperThreaded variants.

------
openbsd4lyfe
Lord Theo was right again!

------
systemdtrigger
One question: do these vulnerabilities, including Spectre and Meltdown, only
help in stealing information, or can they also hijack your computer to do
arbitrary things?

~~~
paulddraper
To exploit these vulnerabilities, you already need (unprivileged, sandboxed)
RCE.

These vulnerabilities "only" steal information; however that information could
of course be leveraged into privilege escalation or anything else.

~~~
gpm
This isn't true, unfortunately.

Being able to manipulate the control flow of code that already exists on the
computer can be sufficient. See NetSpectre for an example that worked on real
Google Cloud VMs and local wired networks.

[http://www.misc0110.net/web/files/netspectre.pdf](http://www.misc0110.net/web/files/netspectre.pdf)

~~~
paulddraper
Wow, that is impressive.

Yes, in _theory_ you could do that, but I would have guessed it couldn't
actually be exploited in practice.

~~~
icedchai
Don't get too excited. From the paper: "In the Google cloud, we leak around 3
bits per hour from another virtual machine." This is, of course, under ideal
conditions.

~~~
paulddraper
They estimated that with some dedicated hardware they could improve that by
2-10x.

Still not very useful for an attacker.

But still fascinating and impressive they could do it at all.

