
Why Raspberry Pi Isn't Vulnerable to Spectre or Meltdown - MikusR
https://www.raspberrypi.org/blog/why-raspberry-pi-isnt-vulnerable-to-spectre-or-meltdown/
======
tptacek
This is great, but remember that it covers Meltdown, not Spectre. Meltdown is
the more immediate disaster, but Spectre is the more batshit vulnerability.
You _really_ want to get your head around:

* The branch target injection variant of Spectre if you want to get a sense of how amazing this vulnerability is: you can spoof the branch predictor to trick a target process into speculatively running attacker-chosen code in its address space! This is crazy!

* The misprediction variant of Spectre if you want to get a hopeless feeling in the pit of your stomach, since the implication of misprediction is that certain _kinds_ of programs are riddled with a new kind of side channel we didn't really grok until last week, and no microcode update seems to be in the offing.

You could probably use the same Python conceit to illustrate the other two
attacks; someone might take a crack at that.
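
Taking a crack at the first of those: here is a toy simulation of the bounds-check-bypass variant in the same Python style. All of the names (`cache`, `victim`, the probe tuples) are invented for illustration; it simulates what the hardware does rather than being a working exploit.

```python
cache = set()            # addresses currently "cached"

secret = 0x2A            # a byte the attacker must not be able to read
array = [0] * 16         # victim array guarded by a bounds check

def victim(i):
    # Architecturally, an out-of-bounds i does nothing. But once the
    # branch predictor has been trained to expect "in bounds", the body
    # runs speculatively anyway; we model that by computing the load and
    # its cache footprint before the check "retires".
    value = array[i] if i < len(array) else secret  # models the OOB read
    cache.add(("probe", value))      # footprint survives the squash
    if i < len(array):
        return value                 # committed
    return None                      # result squashed -- cache change isn't

victim(100)   # out-of-bounds call after (hypothetical) mistraining

# The timing probe: exactly one "probe" line is fast, encoding the secret.
leaked = next(v for v in range(256) if ("probe", v) in cache)
```

The point of the model is the asymmetry: the architectural result is discarded, but the cache footprint is not.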

(I'm not disputing that the R-Pis aren't vulnerable to Spectre.)

~~~
ploxiln
This covers both Meltdown and Spectre.

> Both vulnerabilities exploit performance features (caching and speculative
> execution) common to many modern processors to leak data via a so-called
> side-channel attack. Happily, the Raspberry Pi isn’t susceptible to these
> vulnerabilities, because of the particular ARM cores that we use.

The reason Spectre is not a problem is that there is no branch
predictor in these simpler ARM cores. Instructions are processed in parallel
when possible, but not before their dependencies, including branch decisions.

EDIT: under "What is speculation?" branch prediction is described. Then in the
conclusion: "The lack of speculation in the ARM1176, Cortex-A7, and Cortex-A53
cores used in Raspberry Pi render us immune to attacks of the sort."

~~~
loeg
There is almost certainly a branch predictor even in these simple ARM cores.

~~~
phkahler
There's no reason to predict a branch if you're not going to execute
speculatively.

I need to re-read the papers but I think the real problem isn't even
speculative execution but allowing speculative cache changes.

The notion that "gadgets" didn't even need to return properly was both amusing
and eye opening for me. It doesn't matter because the result will be flushed
anyway! ;-)

~~~
cwzwarich
In an in-order CPU, you can still use a branch predictor to predict what to
fetch and decode, so that you don't stall waiting for instruction fetch to
finish after you resolve the branch.

In practice, advanced in-order designs contain more local reordering
mechanisms, e.g. in the load/store unit, but they lack the unified global
abstraction of a reorder buffer. The most successful timing attacks involve a
mis-speculated load, so they wouldn't apply to these mechanisms, but it's not
completely out of the question that they are also an effective side-channel.

------
SomeHacker44
This is a good overview of modern, superscalar, out-of-order, speculative CPUs
that literally any programmer could easily understand. Recommended reading for
every single engineer in the whole world (who doesn't already understand this
stuff from reading source material, e.g., the Google Project Zero post).

~~~
PuffinBlue
Non-engineer here, this bit is key right:

> However, suppose we flush our cache before executing the code, and arrange
> a, b, c, and d so that v is zero. Now, the speculative load in the third
> cycle:

> v, y_ = u+d, user_mem[x_]

> will read from either address 0x000 or address 0x100 depending on the eighth
> bit of the result of the illegal read. Because v is zero, the results of the
> speculative instructions will be discarded, and execution will continue. If
> we time a subsequent access to one of those addresses, we can determine
> which address is in the cache. Congratulations: you’ve just read a single
> bit from the kernel’s address space!

To my understanding it is that saying that by...

1) ...flushing the cache so you have a 'clean' state, you can get...

2) ...the speculative execution to 'pull in' to cache the address user_mem[x_]
but...

3) ...the particular address that's pulled into cache, 0x000 or 0x100, is
determined by whether...

4) ...the illegal read of kern_mem[address] 8th bit was a 1 or 0...

5) ...which you can then subsequently determine the value of by...

6) ...timing how long it takes to access that user_mem[x] address once again
and...

7) ...thereby leaking the value of kern_mem[address]...

So you still have to perform some logic on the speed of the subsequent access
to the secondary address, right?

If the read of 0x000 is slow you know the bit of kern_mem[address] was a 1
(and if fast, a 0), and if 0x100 is slow you know the bit was a 0 (and if
fast, a 1)?

Is that correct?

If it is, it seems that timing is the key, and actually the clever leap of
creativity in completing the exploit, at least to my untrained mind.

Please do correct anything I've got wrong, I'm not an engineer/developer!

~~~
pilom
You're exactly correct. This is why the browsers decreased timing resolution
in JavaScript, so that you couldn't time memory accesses accurately enough to
tell if the address was cached or not.
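
The decision step being confirmed here can be sketched in a couple of lines. The latencies are illustrative stand-ins, not measured values:

```python
CACHE_HIT_NS, CACHE_MISS_NS = 10, 100   # illustrative latencies only

def leaked_bit(time_0x000, time_0x100):
    # Whichever candidate address reads back fast is the cached one; a
    # fast user_mem[0x100] means the eighth bit of the stolen word was 1.
    return 1 if time_0x100 < time_0x000 else 0
```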

~~~
O_H_E
Does that mean -another- slight performance drop ???

~~~
tialaramex
Not in general.

Consider an Olympic 100 metre sprinter. Today we time this event very
accurately, I think it's to one hundredth of a second, using sophisticated
technology.

But even if the judges used a much less accurate mechanical stopwatch, Usain
Bolt wouldn't actually be slower, we'd just be less confident of how
ridiculously fast he is.

In some special cases, timing things very accurately might be essential to a
use of Javascript, but I can't think of any examples off the top of my head.

~~~
make3
gotta get that sweet rollover effect right to the 24th decimal baby

------
styfle
I understood everything up until the "suppose we flush our cache before
executing the code" part which is probably the most important part.

There was a comment below the article that explained this part a little
further:

> Imagine the value at the kernel address, which gets loaded into _w, was
> 0xabde3167. Then the value of _x is 0x100, and address user_mem[0x100] will
> end up in the cache. A subsequent load of user_mem[0x100] will be fast.

> Now imagine the value at the kernel address, which gets loaded into _w, was
> 0xabde3067. Then the value of _x is 0x000, and address user_mem[0x000] will
> end up in the cache. A subsequent load of user_mem[0x100] will be slow.

> So we can use the speed of a read from user_mem[0x100] to discriminate
> between the two options. Information has leaked, via a side channel, from
> kernel to user.

[https://www.raspberrypi.org/blog/why-raspberry-pi-isnt-
vulne...](https://www.raspberrypi.org/blog/why-raspberry-pi-isnt-vulnerable-
to-spectre-or-meltdown/#comment-1375375)
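
The two quoted cases can be checked directly. `probe_address` is a hypothetical name for the masking step the blog post performs on the stolen word:

```python
def probe_address(w):
    """Which user_mem offset the speculative load pulls into the cache."""
    return w & 0x100      # keep only the eighth bit of the stolen word

fast_line_a = probe_address(0xabde3167)   # 0x100: user_mem[0x100] gets cached
fast_line_b = probe_address(0xabde3067)   # 0x000: user_mem[0x000] gets cached
```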

~~~
JdeBP
Yes, that is the _this is left as an exercise for the reader_ part of the
explanation. (-:

The remaining part is to iterate the process over all of the bits in the word,
using different bitmasks. The resultant set of 0 or 1 results for each bit
yields the complete word.

Then one iterates _that_ whole process over all (useful) words in (mapped)
kernel memory.
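
A sketch of that iteration, with `leak_bit` standing in for one full run of the speculative-read-plus-timing dance (the secret word here is just a made-up illustration):

```python
SECRET = 0xABDE3167      # pretend kernel word, for illustration only

def leak_bit(value, bit):
    """Stand-in for one run of the side channel: recovers a single bit."""
    return (value >> bit) & 1

def leak_word(value, bits=32):
    word = 0
    for bit in range(bits):                 # one bitmask per iteration
        word |= leak_bit(value, bit) << bit
    return word
```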

~~~
rofex
Thanks, this was the missing piece in my understanding. I was wondering how
knowing only 1 bit would be useful. Suppose the attacker wants to read
this entire address (0xabde3167) using this method. Is it guaranteed that over
multiple runs, this address would be the same each time at that point in
execution?

~~~
ufo
It is certainly possible that the memory the exploit is trying to read might
be changing under its nose. An actual implementation of the exploit would need
to account for that.

------
zackmorris
Ok I think I understand the subtleties of these attacks now. But: _can anyone
tell me why the accessibility check for protected memory doesn't happen
before the cache loads the contents of RAM?_ If that happened then none of
these attacks would be possible.

I got my computer engineering degree in 1999 and ended up going the computer
science route making CRUD apps all day. I feel in my gut that some engineer,
somewhere, MUST have asked this question at one of the big chip manufacturers.

Am I missing something fundamental? Is the access check too expensive? If it
isn't, then can the microcode be updated to do this, or is
caching/accessibility checking happening at a level above microcode? If that's
the case then it would seem that pretty much all processors everywhere that do
speculation without protected memory access checks are now obsolete.

~~~
ProblemFactory
> can anyone tell me why the accessibility check for protected memory doesn't
> happen before the cache loads the contents of RAM?

From what I understand, it does happen on AMD, which is why AMD CPUs are not
vulnerable to the more dangerous Meltdown attack (any code reading kernel /
hypervisor host memory).

Intel / ARM delays the checks until later, to the time when the speculated
instructions are actually finalised and make their results available. This is
faster, and loading some memory into the cache is normally invisible to the
unprivileged code. The checks would still be done when actually reading that
memory. But nobody spent enough time considering the timing side-effects of
the cache.

Even if the protected-memory reads are fixed by the OS updates, that still
leaves the Spectre attack: code running in a process reading all of "its own"
memory, regardless of any software sandboxing. This means that all sorts of
sandboxing methods for JavaScript interpreters, bytecode interpreters, plugin
architectures, etc. are insecure. And the OS patches can't help here, because
the sandbox isn't in protected memory.
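
The check-ordering difference described in the second paragraph can be modelled as a toy (function names are invented; both designs deny the architectural read, but only the late check leaves a cache footprint to time):

```python
cache = set()   # model of the data cache

def load_check_late(addr, allowed):
    # Modelled on the behaviour described for Intel: the data is fetched
    # (touching the cache) and the permission fault is raised afterwards.
    cache.add(addr)
    if not allowed:
        raise PermissionError(addr)

def load_check_early(addr, allowed):
    # Modelled on the behaviour described for AMD: fault first, so a
    # denied read leaves no cache footprint.
    if not allowed:
        raise PermissionError(addr)
    cache.add(addr)

for load in (load_check_late, load_check_early):
    try:
        load(("kernel", load.__name__), allowed=False)
    except PermissionError:
        pass   # both designs deny the architectural read

late_footprint = ("kernel", "load_check_late") in cache
early_footprint = ("kernel", "load_check_early") in cache
```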

~~~
zackmorris
Thank you, that was a succinct explanation of the difference between AMD/Intel
and the order of speculation and protected memory access check for the
Meltdown attack, and makes it easier to understand:

[https://en.wikipedia.org/wiki/Meltdown_(security_vulnerabili...](https://en.wikipedia.org/wiki/Meltdown_\(security_vulnerability\))

I see now why the Spectre attack is so serious (reading out of bounds within
the same process memory). I feel like there may be ways to catch unallocated
memory access similarly to how protected memory works. But, that wouldn't help
reads from allocated memory in runtime environments like for Javascript (where
separate scripts in the same process space aren't meant to see each other's
data). This is clearer to me now:

[https://en.wikipedia.org/wiki/Spectre_(security_vulnerabilit...](https://en.wikipedia.org/wiki/Spectre_\(security_vulnerability\))

Going forward, we may have to assume that security is only possible with true
process isolation. For example this might put pressure on OSs to fix their
slow context switching implementations to encourage the use of processes
instead of threads. Beyond that, I can't see any easy way to fix the situation
and am highly skeptical of things like compiler fixes, because there will
likely always be another way to abuse various instructions to read outside
memory boundaries.

~~~
tathougies
The slow context switching between processes has nothing to do with OS
implementation. True context switching involves a page table flush. This is
slow due to caching, independent of the OS. The only thing the OS can do is
tell the processor which parts not to mark dirty, but -- as these attacks show
-- this can expose vulnerabilities.

------
Osiris
With all the news about these attacks lately, this is one of the best posts
I've seen in explaining to less knowledgable people how exactly speculation
causes a problem.

One question I still have that gets glossed over is how timing of instructions
is captured.

~~~
Fronzie
In order to exploit it from a script running in a web browser: there's a high-
resolution timer in JavaScript. This one is limited to 5 to 20 us resolution
to prevent such attacks.

Recently a shared-memory extension has been proposed. One JavaScript thread
just increments a counter in the shared memory, functioning as a clock for the
other thread.

In both cases, (Spectre) attacks can be prevented by browser updates, so any
performance impact is not system wide.

This is different from Meltdown, which (only?) affects Intel. That one
requires kernel changes which cause system-wide performance degradation.

~~~
blattimwind
> This one is limited to 5 to 20 us resolution to prevent such attacks.

* make such attacks more difficult.

~~~
Spivak
Impossible. This attack relies on detecting the timing between a cache hit and
miss. If your clock resolution is larger than a cache miss then you can't
differentiate the two events and so no information is leaked.

~~~
rocqua
Not quite. An instruction that takes 1us is much less likely to start and end
in a different 20us clock cycle than a 10us instruction. Simple repeated
sampling combined with statistics still yields a timing attack. It'll be
slower and less deterministic, but it's still a problem.
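
A quick simulation shows the separation. The 20-unit tick, the 1- vs 10-unit durations, and the uniformly random phase are all illustrative assumptions:

```python
import random
random.seed(0)   # deterministic run, for illustration

TICK = 20        # assumed clock granularity, in arbitrary time units

def coarse_ticks(duration):
    """Ticks observed across an operation starting at a random phase."""
    start = random.uniform(0, TICK)
    return int((start + duration) // TICK)

def mean_ticks(duration, samples=10_000):
    return sum(coarse_ticks(duration) for _ in range(samples)) / samples

fast = mean_ticks(1)    # a 1-unit op straddles a tick only ~5% of the time
slow = mean_ticks(10)   # a 10-unit op straddles one about half the time
```

The averages converge to duration/TICK, so the two operations remain statistically distinguishable despite the coarse clock.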

------
pslam
The cores in (all versions of?) Raspberry Pi _do_ speculatively execute. It's
just that the window of opportunity is tiny - just a few cycles (and maybe up
to twice that many instructions) - and there's (probably) no way to get an
indirected side-effect.

I wouldn't write off the ability to get a useful side-effect signal. The
variants widely documented are not the only possible methods of inducing
speculative side-effects.

~~~
Symmetry
Yes, the Raspberry Pi will issue loads before the branch resolves. But usually
a processor's pipeline won't be laid out in such a way that the AGU has time
to pass an address to the load pipe before the branch resolves and squashes
the load. The Cortex A8 was an interesting exception, but it was pretty deeply
pipelined compared to most in-order cores.

------
blattimwind
> The lack of speculation in the ARM1176, Cortex-A7, and Cortex-A53 cores used
> in Raspberry Pi render us immune to attacks of the sort.

I didn't check, but these will almost certainly have branch prediction. What
they probably lack is a predictor advanced enough to speculate on indirect
branches, which AIUI is the primary vector of Spectre.

~~~
Scaevolus
Branch prediction alone is insufficient. Speculative execution alone is
insufficient. You need speculative memory loads for _any_ of these attacks to
work.

The Cortex-A53 branch predictor [1] does prefetching to keep the core fed.
This ensures that the instructions are ready for decoding, but has no
architectural effects beyond the L1 instruction cache, which is already a
well-studied timing sidechannel.

[1]:
[http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc....](http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0500d/CJHEICEB.html)

~~~
bazizbaziz
What about the fact that these instructions might get partially executed in
the pipeline before the branch gets resolved and the pipeline flushed? If a
mis-fetched instruction can reach the LSU stage before the pipeline gets
flushed, it might serve as a speculative memory load...

~~~
Scaevolus
They're not partially executed. The branch predictor only fetches
instructions. They might be _decoded_, but it's not an out-of-order
processor -- pipeline stages only proceed if the previous phase is correct.

Here's the Cortex-A53 pipeline: [https://www.anandtech.com/show/11441/dynamiq-
and-arms-new-cp...](https://www.anandtech.com/show/11441/dynamiq-and-arms-new-
cpus-cortex-a75-a55/4)

It's an in-order CPU, so that "issue" phase (pipeline step 5) stalls until the
instruction pointer is resolved. Instructions must be issued to the "AGU Load"
functional unit, which is what actually performs the read and pulls data into
the cache hierarchy.

Note also that a single speculative memory load is insufficient for Spectre.
You need _two_ speculative memory loads.

------
thingification
I was _already_ on the lookout for a small ARM-based mini PC, just for doing
financial transactions and record-keeping. Now that seems more pressing but I
don't know of any such thing in existence.

I tried doing that on RPi 3, but the IO seemed not up to the job -- the CPU
appeared to be just about tolerable, but using micro SD as a disk was too slow
and prone to failure (I'd have tried an external USB disk but I believe the
problems were in part because of poor I/O bandwidth). Other single board
machines seemed to have better provision for disks that are up to the task I
had in mind, but lack software support, so that I had little confidence in
security updates, for example.

If somebody sold this I think they'd have my money tomorrow:

* An ARM mini-PC

* With a decent security update team behind it (probably the hard part?)

* That will let me run some basics: for me, a Unixy OS with Chrome/Chromium, emacs, ledger and python, without a big effort to install those and keep them up to date

* Ideally without too much anti-commodification BS (from my customer perspective) so that hardware can be swapped out if needed

Does anything like that exist?

~~~
shasheene
SD cards are optimized for sequential IO (reading/writing photos, video,
music). For an OS root partition, random IO is much more important for general
use. If the root partition is mounted from an external USB drive with higher
random 4K IOPS benchmarks, IO performance should be greatly improved.

------
jtchang
This is a fantastic read. Timing attacks are insidious and tend to crop up in
the oddest places. I first learned of them when learning how to securely
compare strings (used a lot with passwords). A naive implementation means that
you can easily guess if a character is correct depending on how fast the
compare function returns.
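
A minimal sketch of the two comparison styles (`hmac.compare_digest` is the Python standard library's constant-time comparison):

```python
import hmac

def naive_equal(a, b):
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:     # early exit: running time leaks the matching prefix
            return False
    return True

def constant_time_equal(a, b):
    # Examines every byte regardless of where the first mismatch occurs.
    return hmac.compare_digest(a.encode(), b.encode())
```

Both return the same answers; only the naive version's timing depends on how many leading characters match.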

~~~
Franciscouzo
You shouldn't be comparing plain-text passwords anyway; you should be using a
secure password hash, such as bcrypt. Sure, you should use constant-time
comparison, but in this specific case using a normal comparison won't really
make you vulnerable.

~~~
ufo
Normal comparison is bad even if you are comparing hashes. Letting the
attacker figure out the password hash allows them to attempt to crack the
password through an offline brute-force attack running on GPUs.

~~~
odonnellryan
I don't understand how this can be an issue. This is usually what happens:

> User enters password > PW gets hashed > Hash gets compared to the DB

I don't know of any sane system that would allow you to compare a hash to a
hash? Unless you have access to something you shouldn't, in which case it
doesn't matter anyway, because you can probably just read the hashes.

~~~
stordoff
If the user knows exactly how the hashes are generated (I believe a random
salt would prevent this), it could still be used to better target an online
bruteforce, though other defences such as rate-limiting should still kick in.

If, say, you submit a password with a hash that starts "b94", if the database
doesn't use a constant time comparison, you can use the timing to figure out
that the stored hash also starts with "b94" (statistically, given network etc.
delays involved), meaning you can pre-filter your submitted guesses (i.e.
bruteforce offline and only submit guesses that start "b94").

It's definitely an edge case, though (and probably not worth worrying about
unless you don't salt/rate-limit requests). I also don't know if the number of
requests needed to determine the timing would actually be less than just
making random guesses outright (intuitively it seems so because even if it
takes a lot of requests it shrinks the search space at each step).

------
lxe
This article wonderfully explains a complex context without losing a lot of
relevant detail.

------
Chaebixi
> In the good old days*, the speed of processors was well matched with the
> speed of memory access...Over the ensuing 35 years, processors have become
> very much faster, but memory only modestly so: a single Cortex-A53 in a
> Raspberry Pi 3 can execute an instruction roughly every 0.5ns (nanoseconds),
> but can take up to 100ns to access main memory.

In real-world terms, what's the fastest processor we could build today whose
execution speed is reasonably matched to its main memory access speed (so it
doesn't need caches, etc)?

I could imagine that a processor, with a simple design that closely matches a
naive model of how CPUs work, would be very useful for high-security
applications. It would be much easier to reason about up-front.

~~~
Symmetry
The problem isn't really the speed of the memory but its size. The planar
nature of memory and the speed of light impose a latency of access on a pool
of size N proportional to the square root of N. The L1 cache on your CPU has
kept up in speed with the processor and is the same size as the memory
computers had when computers could access their main memory quickly.

~~~
blattimwind
The latency of DRAM is mostly governed by Dennard scaling ("regardless of
transistor size, power density remains the same"), because it means that
making cells smaller reduces the available charge currents for bit-lines and
the like proportionally, so only a small latency advantage can be gained.

------
njitbew
I enjoyed reading this a lot. I wonder why the developers decided to allow
reading kernel-memory in the first place. When a scalar processor reads kernel
memory, it crashes. When a speculative processor reads kernel memory, it
relies on the assumption that the read is never committed to prevent leakage.
It takes no expert to realise this is a potentially dangerous decision (and,
as becomes clear now, is only valid in the absence of a cache).

To me it would make a lot more sense to use a special value to indicate the
read did not succeed and propagate this value until it is time to crash. I
guess this introduces some overhead (e.g. reserve a special value); but are
there any other drawbacks?

~~~
infinite8s
This is the part I don't understand. How is the processor able to read a
cacheline from a protected memory page without crashing (even if it was
speculative and wouldn't happen in the idealized execution due to branching)?

~~~
pwg
Because in the Intel design, for memory reads issued by speculative
instructions, any "access denied" results are also delayed until the CPU
control unit determines the instruction that issued the read should really
have been executed.

But the actual read from memory is allowed to occur, even if the "access
denied" signal is given. Which allows the read to affect the state of the data
caches.
This was likely done this way as a performance booster, because this would
allow speculative instructions to also perform cache pre-fetching during their
speculation window.

That seems to be why AMD CPU's are immune to Meltdown. AMD's design prevents
the read from occurring when the "access denied" signal appears, so the cache
state is not affected, so there is no side channel to detect.

~~~
stordoff
> But the actual read from memory is allowed to occur, even if the "access
> denied" signal is given. Which allows the read to affect the state of the data
> caches. This was likely done this way as a performance booster, because this
> would allow speculative instructions to also perform cache pre-fetching
> during their speculation window.

Why is this? Is it because the CPU doesn't know ahead of time what is valid
(because it depends on the "outcome" of instructions in flight), or is there
something I'm overlooking?

~~~
pwg
Well, if you try to put yourself in the mindset of a CPU designer, without
extensive cryptography experience [1] to be fully aware of timing side-channel
attacks, you would see the speculative execution memory reads as harmless. If
the predicted path is wrong, you'll reset the CPU state (cpu registers) so the
running program sees nothing different. And if you skip running the reads
through the full memory protection gamut (you still _have_ to do the address
translation) during the speculation window you'll save a few cycles on the
reads, and maybe a tiny bit of power. And in the case that the predicted path
was correct, any "access denied" signals need to be delayed until the
accessing instruction would commit anyway (to maintain proper sync with how
the signal works, in that it indicates which instruction took the memory
access fault, so you can't raise the signal until you know for sure the
instruction would have executed). And if you see the reads as harmless
(because they are thrown away if the speculative guess was wrong) then you
might also see them as "free" cache pre-fetch instructions (because they do
pre-warm the cache when the speculative path is the correct path).

In the end the result is a confluence of several different topics (speculative
execution, data caching, high-resolution timers [although these can be
simulated with a plural-CPU system]) that each in isolation is all but
harmless, but together emergent behavior appears that was not immediately
apparent from each one viewed individually. I.e., without caches there's no
side channel to monitor. Without speculative execution there's no way to read
a bad address and avoid taking a memory access fault. Without high enough
resolution timers it becomes very hard to detect the time difference between a
cache hit and a miss.

[1] a reasonably safe assumption - most CPU architecture designers are not
cryptographers, and most cryptographers are not CPU architecture designers,
and most timing side-channel attacks have historically been against crypto
algorithm implementations.

------
zakk
The best explanation of Meltdown I’ve read.

------
oblib
The best comment made on that blog post was by Eben himself:

"One almost wishes that they’d stuck with the original name for the KPTI
patchset: Forcefully Unmap Complete Kernel With Interrupt Trampolines.

[https://www.theregister.co.uk/2018/01/02/intel_cpu_design_fl...](https://www.theregister.co.uk/2018/01/02/intel_cpu_design_flaw/)
"

Now that's funny!!!

------
criddell
I've been wondering (and haven't seen it addressed anywhere) if these attacks
could be used to get the private key out of game consoles. These days I would
assume not - that the key would be in a secure enclave - but the current
generation of consoles are a few years old now and maybe that's not the case.

~~~
jenscow
The private key used to sign code? No, that wouldn't be in the console at all.

~~~
confounded
I imagine they mean the decryption key (rather, the ‘private, encryption’
key).

------
jtgeibel
Off topic, but I haven't seen this discussed anywhere yet. My understanding is
that font files can contain complex instruction sequences to control exactly
how a font is rendered. I believe Windows implements a kernel space VM to
execute these instructions. I know variants 1 and 2 did not necessarily
require eBPF but that it made the attack simpler because the desired
instruction sequences could be injected directly into kernel space (rather
than finding existing sequences in the code base). It seems that in theory
font rendering could serve a similar function on some platforms.

------
O_H_E
Now... I am interested in assembly :D Any recommendations ???

Really awesome explanation

~~~
teacpde
I highly recommend [http://www.nand2tetris.org/](http://www.nand2tetris.org/)
and the The Elements of Computing Systems book
[https://www.amazon.com/Elements-Computing-Systems-
Building-P...](https://www.amazon.com/Elements-Computing-Systems-Building-
Principles/dp/0262640686/ref=ed_oe_p)

~~~
O_H_E
Thanks :)

------
greguu
Hey, what about Intel XScale processors like the PXA2xx series?

These do have dynamic branch prediction/folding afaik and may be affected?

Does somebody have a spectre.c tuned for generic armv5tel for example?

Current versions of spectre.c, like this one
[https://gist.github.com/LionsAd/5116c9cd37f5805c797ed16fafbe...](https://gist.github.com/LionsAd/5116c9cd37f5805c797ed16fafbe93e4)
still contain "_mm_clflush" and therefore do not compile on ARM at all.

------
mirthflat83
Fantastic read. Before reading the article, I assumed there were many HN
readers who were extremely proud about their Raspberry Pi being invulnerable
to Spectre or Meltdown

------
lelandbatey
I've done some cursory searching and not found anything, so I'll ask here:
what mechanism is used to measure how long it takes to access a specific
address in memory?

I assume there is some way to tell the CPU "when memory location X is read,
store the current time in register Y" or some such thing. Could anyone share
what that mechanism is?

~~~
lucb1e
Elsewhere in the thread, someone asked the same question and got an answer:
[https://news.ycombinator.com/item?id=16080230](https://news.ycombinator.com/item?id=16080230)

~~~
lelandbatey
Thank you for that link! I'll write out the conclusion I came to from reading
those comments:

Instead of measuring the literal time interval between instructions, the
number of cycles between two points is measured (using the RDTSCP
instruction).

------
aplorbust
How many RPi users are using this board to run untrusted code?

The RPi may mitigate risk of these attacks simply in the way it is used.

Perhaps hobbyists use it to run their own small programs, not random third
party Javascript in an enormous web browser from some corporation.

------
mtgx
None of the RISC-V chips are either:

[https://riscv.org/2018/01/more-secure-world-risc-v-
isa/](https://riscv.org/2018/01/more-secure-world-risc-v-isa/)

~~~
userbinator
...nor are 486s, 386s, AVRs, 8051s, and a bunch of other low-performance in-
order CPUs.

There's also this...
[https://groups.google.com/a/groups.riscv.org/forum/#!topic/i...](https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-
dev/JU0M_vug4R0)

~~~
tomxor
That's a little unfair, the RISC-V design is intended for both low power and
high power applications, in implementation is has the potential to be
comparable to the Pi 3's Cortex-A53. Where as the two 86s you mention are very
old slow CPUs and the other two you mentioned are 8bit microcontrollers.

------
matte_black
What would finally bring this all together to me would be an example of a real
world attack that would be carried out using these methods on some target,
perhaps with an implementation.

~~~
oblib
Since it was an industry effort to find the flaw, and they are vulnerable to
the threat they've exposed, it would seem at odds with their interests (and my
own) to provide you or anyone else with an example of how to exploit it.

And I'll offer that if you're not capable of demonstrating it after reading
Eben's description of how it works then there is no good reason for you to
have an example handed to you.

If you think you are capable, I'll offer that your time would be better spent
working on fixes.

------
646754375
Didn't ARM say the Cortex A53 is vulnerable to Meltdown?

~~~
networked
According to [https://developer.arm.com/support/security-
update](https://developer.arm.com/support/security-update), the Cortex-A5 _7_
is affected, but the Cortex-A53 isn't.

~~~
Sephr
A Google Project Zero member said that they got Meltdown working on a
Cortex-A53.

------
daveheq
Does anyone expect bug revelations after this to be less severe, or is there
still a chance there could be vulnerabilities that are worse than these?

~~~
InclinedPlane
It's hard to imagine something worse to be honest. These vulnerabilities
basically amount to ripping away the entire veil of protections at every level
that we've built up over the years.

Future vulnerabilities that I could imagine being "worse" would be either
encryption vulnerabilities or signals level vulnerabilities.

------
tomxor
Technical details aside, I find it quite amusing that the hardware in my pi
zero is more secure than my desktop that is two orders of magnitude more
expensive

~~~
Demiurge
And yet, my abacus is even more secure ;-)

~~~
trynewideas
Only if you wear gloves while using it and shake it after finishing a
calculation.

~~~
jabl
Been reading Cryptonomicon, have we?

