
The Next Vulnerability: Looking Back on Meltdown and Spectre One Year Later - DyslexicAtheist
https://www.rambus.com/blogs/the-next-vulnerability-looking-back-on-meltdown-and-spectre-one-year-later/?hss_channel=tw-19615847
======
deepnotderp
The basics of the Spectre exploit were discoverable 20 years ago and run deep,
to the very core of branch speculation. This is nothing but marketing for
RISC-V and Rambus; there's no magic in RISC-V that prevents Spectre except
that the ecosystem is so primitive that they barely have any speculative
processors...

We could have found the Spectre exploit 20 years ago. To me, the fact that we
didn't until now is maybe the most interesting thing.

~~~
ufo
I think one of the reasons is that nowadays we run a lot more untrusted code
on top of our hardware, due to things like JavaScript and virtualization.
Meltdown and Spectre would not have been as dangerous 20 years ago.

------
saagarjha
> These challenges are going away as chipmakers and innovators collectively
> leverage open-source to develop better solutions and reduce time-to-market.
> The open source RISC-V architecture is particularly notable for its
> availability of unencumbered reference implementations and compiler/software
> support. As a result, RISC-V greatly reduces the amount of ancillary work
> required for a processor security project, allowing design teams to move
> more quickly and focus on areas of innovation – including security.

Is this a thing? I have not heard of anyone working on trying to replace x86
with RISC-V because of Spectre…

~~~
tyingq
It probably shifted a few orders from Intel to AMD, where the performance hit
of the Intel patches vs. the AMD ones changed the performance/$ enough.

But probably nothing of enough volume to be notable.

RISC-V would be unlikely, since there's no real server-class silicon yet.

------
pdimitar
I view the current situation as a good time to invent 32+ core CPUs without
fancy features like speculative execution or branch prediction (at least not
to the extent that made Spectre/Meltdown possible, anyway). And a good time
for a next-gen kernel to emerge that can make use of several kernel-reserved
cores and has better preemptive scheduling overall.

I'm not a hardware guy or a driver programmer, but I believe this area stands
to see some innovation.

~~~
gpderetta
We already have 32+ core CPUs with minimal or no branch prediction and
speculation. They work spectacularly well on some specific workloads but are
terrible for general-purpose computation. They are called GPUs.

Speculation and branch prediction are not going away. On the contrary, they
are going to get more and more sophisticated.

What needs to die is the belief that you can rely on just software (i.e.
memory safety) for isolation between trusted and untrusted code[1].

[1] Yes, Meltdown bypasses hardware protection, but that's an Intel-specific
fuck-up, not an inherent issue with speculation.

~~~
twtw
> What needs to die is the belief that you can rely on just software (i.e.
> memory safety) for isolation between trusted and untrusted code[1].

I don't understand how you came to this conclusion. How does spectre indicate
that you can't rely on software for isolation?

> meltdown bypasses hardware protection, but that's Intel specific

It's not Intel specific. Certain ARM and POWER architectures are also
vulnerable.

Spectre bypasses hardware protections too, just in a more subtle way.

~~~
gizmo686
The idea behind Spectre is that speculative execution bypasses software
protections.

Given "if f(A) then B else C", one cannot safely assume that B gets executed
only when f(A) resolves to true. It is not reasonable to expect programmers to
produce safe code in an environment where they have to doubt if statements.

This could be solved without any major overhauls in hardware, just by updating
it to make speculative execution actually speculative: don't commit any
observable side effects (including cache state) until the CPU knows it went
down the correct branch.

~~~
twtw
Sure, I get it. Perhaps I phrased my response poorly.

This comment:

> What needs to die is the belief that you can rely on just software (i.e.
> memory safety) for isolation between trusted and untrusted code

is an argument that doubting if statements should be acceptable, and that the
assumption that only B has observable side effects when f(A) is true is not
sound. I disagree strongly with that.

You _should_ be able to rely on software checks for some things. Going
forward, the solution is _not_ to rewrite software to not depend on software
mechanisms (e.g. bounds checks) for protection (unclear what would be used
instead...) but to _fix the hardware_ so these software mechanisms work.

What needs to "die" is not "belief that you can rely on just software," but
hardware that violates fundamental guarantees.

It sounds like we agree on this, I just wanted to make my point clear since I
see now that my original comment was not clear.

FWIW, this is essentially the argument Torvalds made when Intel tried to add
feature flags for non-broken speculation.

~~~
gpderetta
I don't think Linus has any expectation that spectre v1 will ever be fixed in
hardware.

~~~
loup-vaillant
There's a difference between what you expect, and how you think things should
be. Linus was probably making a normative statement, not a prediction.

------
ivankolev
I feel uneasy giving up general purpose computing in the name of security.
Can't we have both?

~~~
tomxor
Yes, but at a performance cost. I _think_ general-purpose scalar processors
lack any of these timing vulnerabilities, since they almost literally do one
thing at a time. But you are winding back the performance clock 1-2 decades.
That's not to say they are unusable; for instance, the ARM core in the Pi Zero
is scalar... but I doubt there is any modern desktop x86 equivalent without
speculative execution.

~~~
gpderetta
More like 4 decades. OoO itself is about 3 decades old.

~~~
vardump
Yup. OoO is a thing from the mid-nineties for us mere mortals, with CPUs like
the Pentium Pro and AMD K5. Of course, the _first_ out-of-order CPU is way
older: the CDC 6600, back from 1964.

The MIPS R2000 from 1986 could do a simple form of speculative execution.

------
mehrdadn
How many people's computers are known to have gotten hacked via Meltdown or
Spectre?

~~~
mikeash
How would you know? It allows reading privileged data without actually taking
over privileged processes, and wouldn’t leave behind any signs of what
happened.

~~~
pmoriarty
It really would depend on what they did and how they did it.

If the "privileged data" they read was, say, the root password which they used
to gain root access and then started snooping around the system as root or
modifying parts of the filesystem, that could be easily detected.

If the exploit itself was performed over the network or if the privileged data
they read was transferred over the network, that might also be detected,
depending on how it was sent and where it was sent to.

If they tried to launch attacks or probes from the exploited system, they
could be detected as well.

Network intrusion detection systems and host intrusion detection systems could
both help here.

------
childintime
What strikes me the most is that the state of the art in CPU architecture is
immune to Spectre and Meltdown, yet it isn't mentioned once. It makes the
article sound like Intel-funded stuttering.

~~~
roblabla
Can you expand on exactly which CPU you are talking about? I'm genuinely curious.

From what I've seen, every major high-end CPU architecture is affected,
because they all rely on speculation. Of course, some low-end CPUs (such as
ARM Cortex-M cores) don't have speculation, so they aren't vulnerable. But on
the high-performance front, I haven't seen a credible alternative that wasn't
vulnerable to speculative execution side-channels. Which one am I missing?

~~~
twtw
Given the phrase "Intel funded", 'childintime is probably confused and thinks
AMD Zen is not vulnerable.

~~~
childintime
Intel is under threat, not only from AMD, but from the end of the x86 era. Of
course they need to protect their current bread and butter, but AMD isn't
their only worry. Far from it.

