
Intel CPUs afflicted with simple data-spewing spec-exec vulnerability - Nux
https://www.theregister.co.uk/2019/03/05/spoiler_intel_flaw/
======
userbinator
_will make existing Rowhammer and cache attacks easier_

I've said it before and I'll say it again: Rowhammer is a functional
correctness problem that the memory industry has been trying to hide ever
since it was discovered. Authors of memory testing tools have been convinced
to ignore it, with the rationalisation that "almost all memory would test as
defective." AFAIK it's only after ~2009 that RH became a problem; anything
older is not affected due to its lower density.

On the bright side, since Spectre was first disclosed, timer resolution in JS
has been reduced so much that, the last time I read about it, the rate at
which you can read memory (keep in mind that where you read is random, and how
large the whole address space is) was extremely low --- something like a few
bytes per hour; and that was after the researchers had already done a ton of
preparatory work to set up everything just right. (And, as with anything
timing/cache-based, simply running something else may already change the
timing and possibly invalidate some of those bytes being "read". Then comes
the question of where in memory you're actually reading --- a 64-bit address
space is _huge_ --- and what significance those bytes have. It could be a
private key, it could be random bits; the point I'm trying to make is, being
able to read memory is just one of the requirements for an actual attack, and
there are many more hurdles an attacker has to overcome.)

IMHO it's something to worry about if you have JS on by default _and_ are
being subjected to a _very_ targeted attack. If you have JS off by default and
aren't someone of particularly high interest, there is much less to worry
about.

~~~
inetknght
> _I've said it before and I'll say it again: Rowhammer is a functional
> correctness problem that the memory industry has been trying to hide ever
> since it was discovered._

How do you measure correctness? By reliability?

What trade-off would you make to improve density, throughput, or latency?

Why do you think that trade-off wasn't made?

~~~
userbinator
If memory is operating correctly, then the value last written to any location
should _always_ be the one which is read back. Any deviation from that means
there is something _wrong_ , in the same way that a calculator which
intermittently produces 1+1=3 would be considered _broken_.

The fact that the correctness of software since the beginning of computing
depends on the correctness of the hardware means that it's not a "trade-off":
memory that doesn't behave as memory should is simply defective. The
$$$-chasing manufacturers would like you to think otherwise, however, and IMHO
there's been a huge cover-up --- one the manufacturers are certainly trying
hard to maintain; just imagine recalling every single DRAM chip produced in
the last 10 years.

Security is only one important piece of the whole story. Imagine computations
being subtly incorrect (that includes things like "IsUserRoot()" being
occasionally wrong, but it affects correctness in general.) That undermines
everything about what computers are supposed to do. I only wish there was far
more outrage about Rowhammer than Spectre/Meltdown (I remember some people
suggesting recalling all CPUs made in the last 2 decades...), because while
timing side-channels are "only" a security concern that simply did not receive
much attention until recently, Rowhammer and similar corruptions affect _all_
computation. You don't have to be attacked, all that needs to occur is some
computation happens to have a "fragile" access pattern that flips bits
somewhere, and weird undefined behaviour appears. I mentioned this in a
comment I made 4 years ago about the same thing:
[https://news.ycombinator.com/item?id=9175734](https://news.ycombinator.com/item?id=9175734)

~~~
lixtra
We all know cosmic rays and other stuff can flip bits. So your reads are only
correct 99.99..9% of the time. Deciding how many nines you want is a trade
off. Welcome to the real world.

~~~
userbinator
Conflating a random natural event that occurs at a rate literally orders of
magnitude lower with specific access patterns that can rapidly and reliably
cause errors is pure disinformation.

------
msandford
I've been using AMD CPUs exclusively on the desktop for the last 3-4 builds
I've done and it sure feels nice all of a sudden. I recognize it's luck rather
than skill of course, but I'll take what I can get because this one is a
doozie!

"An attacker therefore requires some kind of foothold in your machine in order
to pull this off."

Right, but these days browsers are handing over footholds to anyone with a
webserver! It used to be that you worried about pop-ups because they were
annoying. Now it seems you need to worry about the modern-day equivalent
because they could at least theoretically ruin your digital and perhaps real
life.

~~~
dspillett
_> these days browsers are handing over footholds to anyone with a webserver_

It is bad for providers using non-dedicated cloud infrastructure too: some of
these flaws allow breaking out of the hypervisor's protections, so an attacker
can in theory read from other VMs on the same infrastructure, not just other
processes on your (virtual) server.

 _> I've been using AMD CPUs exclusively on the desktop_

That doesn't protect you all that much. While this particular flaw seems from
current reports to be Intel specific, some of the past ones affected AMD and
Arm designs also, and maybe there are some AMD/Arm/other specific attacks
waiting to be found too.

~~~
dvdkhlng
> That doesn't protect you all that much. While this particular flaw seems
> from current reports to be Intel specific, some of the past ones affected
> AMD and Arm designs also, and maybe there are some AMD/Arm/other specific
> attacks waiting to be found too.

I think this is called "security through minority" which is a special case of
"security through obscurity" :)

~~~
phkahler
Don't forget about Meltdown.

~~~
dfrage
Only AMD has avoided Meltdown; ARM (in some CPUs) and IBM (both POWER and
mainframe) have Meltdown bugs.

------
Causality1
Spectre attacks have so far only been observed at the nation-state level in
the wild. If you're truly paranoid about being in the first wave of victims,
disable Javascript and other active content on non-whitelisted pages. At some
point processors are going to have to drop into a non-speculative security
mode when dealing with sensitive data like passwords and handshakes, and pop
back out of it when they're done.

~~~
phkahler
That's not enough. You have to disable speculative execution on untrusted
code, not on sensitive data. There is no solution on shared infrastructure
because the untrusted code is another VM's trusted code.

~~~
akvadrako
He was talking about desktop computing. For the server side, of course you
need dedicated machines in your own locations. You basically have no security
against targeted attacks in shared hosting because you don't even have access
to cameras or the ability to vet the personnel with access to your hardware.

------
christophilus
> This security shortcoming can be potentially exploited by malicious
> JavaScript within a web browser tab,

Well, that’s lovely. Turning off JS just got more important.

~~~
aaaaaaaaaaab
I have basically zero sympathy for people who enable JS in their browsers and
then complain about their privacy being violated. I mean, how can someone
expect privacy when doing the exact opposite of best practices?

~~~
luma
Some people need most of the internet to work correctly. Like it or not, JS
has become a core requirement for modern web functionality and that is
unlikely to change any time soon.

~~~
userbinator
I have been browsing with JS off by default for over a decade and a half now.
The list of domains on which I allow JS has accumulated fewer than 100
entries. I do not use "appsites" much, and the rest of the document-centric
Web is perfectly usable without any scripting.

The more who turn it off and complain about useless appification, the more
likely the trend can change. There are already plenty of reasons besides
security to turn it off.

~~~
snazz
The only device on which it is truly impractical to disable JS is my company-
owned iPhone. It does not allow me to install apps from the App Store, so
while I can flip the switch to disable it in
Settings>Safari>Advanced>JavaScript, doing so is inconvenient since I cannot
whitelist the few domains that I’m okay running JS on.

That’s a pretty special case, however. On any other device, uMatrix + NoScript
works great.

------
dooglius
The title and opening paragraph are misleading; this only leaks physical page
mapping information, not data. So this can allow ASLR to be bypassed, but
isn't a vulnerability in the same class as Spectre.

------
gambler
How big of a fuckup do we need to see to realize that running megabytes of
arbitrary code from a dozen different domains for every website we visit is a
bad idea?

What needs to happen for engineers to realize that process isolation is
something that needs to be taken seriously at the lowest possible level
(hardware + OS), rather than through some magic abstraction layer (VMs,
hypervisors, containers, etc.)?

------
jakeogh
So, with surf, it's easy to browse without JS (set the default to False in
config.h). It's way better than the default 'on'. If something does not work,
I either hit ctrl-shift-s to enable JS for that WebKit process, or (more
often) just axe the window.

[http://surf.suckless.org/](http://surf.suckless.org/)
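
For reference, the relevant default in config.h looks roughly like this --- the exact option name and shape vary between surf versions, so check your copy rather than pasting this verbatim:

```c
/* surf config.h: JS default (webkit1-era surf; name may differ) */
static Bool enablescripts = FALSE;  /* JS off by default */

/* In webkit2-era surf the same default lives in the defconfig[] table,
   approximately: [JavaScript] = { { .i = 0 }, 1 }, */
```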

~~~
gambler
Browsing without JS is a major pain if you're enabling it for only certain
domains. If you don't do that, you don't add much security anyway.

~~~
jakeogh
It's per-process. Under most circumstances, each surf window/tab is its own
WebKit process. So for example, I might decide to enable it for a banking
session (although that's a perfect example of somewhere JS should not be
necessary).

------
exabrial
I'm beginning to think crypto operations should no longer be implemented in
code on general purpose cpus.

~~~
plandis
I’m not an expert in this but considering the purpose of these exploits is to
gain knowledge of memory layout to then execute exploits against DRAM, simply
securing your CPU isn’t going to be enough.

~~~
exabrial
Exactly. So don't keep the extremely valuable stuff in RAM and reduce the
attack surface.

------
tmikaeld
"...it's not something you can patch easily with a microcode without losing
tremendous performance..."

Spectre also caused performance drops, could Intel have made these flaws
intentionally - just to boost performance?

~~~
kevingadd
Speculative execution in general is a sizable performance boost that brings in
the risk of all sorts of attacks like this. You could of course try to
implement it all in a safe way - and of course not every form of speculative
execution in existing processors is unsafe - but in practice lots of vendors
messed it up and Intel seems to have just produced the least-safe speculative
execution implementation. I'd argue that speculative execution's risks were
not particularly scary when it became a common technique (a very long time
ago!) because things like shared hosting on anonymous machines with
hypervisors or multiprocess browser sandboxes storing important data were
nearly impossible to predict.

AFAIK Intel, ARM and AMD all have shipped chips that are vulnerable to a
handful of speculative execution attacks. Microcode can work around these
issues by doing things like changing the micro-ops that instructions decode
to, or by disabling processor subsystems --- though what they can or can't
disable is something I suspect only the vendor knows. It's common for vendors
to include many of what are referred to as 'chicken bits': essentially panic
switches that disable a feature or subsystem if it turns out to be broken.

~~~
dfrage
> AFAIK Intel, ARM and AMD all have shipped chips that are vulnerable to a
> handful of speculative execution attacks.

Some ARM CPUs, and IBM's POWER and mainframe CPUs, have Meltdown bugs. All
four of these vendors of high-performance, out-of-order, speculatively
executing CPUs, including AMD, have Spectre bugs.

