
Spectre chip security vulnerability strikes again; patches incoming - CrankyBear
https://www.zdnet.com/article/spectre-chip-security-vulnerability-strikes-again-patches-incoming/
======
gruez
previous discussion:
[https://news.ycombinator.com/item?id=17121811](https://news.ycombinator.com/item?id=17121811)
(172 points by brandon 19 hours ago | 20 comments)

~~~
CrankyBear
It's not a dupe. This article gathers up info from multiple and later sources
--Intel, Red Hat, etc.--and explains how to deal with it, not simply the first
mention of the problem.

------
hinkley
These performance regressions are getting a bit depressing.

Is there some other way we could architect chips or operating systems that
would let these branch predictors go full bore instead of having to hamstring
them? When this story first broke, I know we talked a bit about the impetus to
move code out of the kernel into user land.

~~~
blattimwind
Switching (or indexing) predictor state on task and privilege level.

~~~
yaantc
It's even more complex. It's not only the predictor state that would need
switching/indexing, but also the result of any prediction.

A speculative load into the cache is globally visible today, and that leads to
a possible side-channel attack. One might think of marking a speculatively
cached entry with its security context, the only context allowed to "see" it
until the speculation is resolved. But that is not enough: the entry may have
evicted another one, and that eviction is globally visible. Alternatively, one
could bypass the regular caches and add another lookup structure (a CAM) for
in-flight speculative loads (and the same for stores), indexed by context.

Another source of complexity: what is a security context? An OS could use
existing structures like processes, which makes sense, but it's not enough. A
given process or thread could run both untrusted sandboxed code (JavaScript)
and the trusted runtime. In this case, the application needs to help manage
security contexts. It's possible, but not backward compatible.

I expect progress will be made, but considering the complexity/costs it may
take some time...

~~~
titzer
> It's even more complex.

Oh, this one goes elbows, armpits, neck, ten thousand fathoms deep. Now just
imagine that every memory access the processor performs also consumes another
type of resource: memory bandwidth. There are a limited number of cache-fill
lines, and each can be occupied or not; after tracking down all the state of
things at rest, you then have to think about information leakage from the
occupancy of the various networks of the CPU--data in flight. Tougher to
analyze, but also tougher to secure.

It's gonna be separate CPUs, memory busses, and disks if we really want to get
rid of side-channels.

And that's actually simple. Simple is better for security!

~~~
hinkley
Wouldn't this mean that kernel calls are interprocess communication?

~~~
titzer
In general, yes. But one could, e.g., have a second kernel, or some microkernel
subsystems, running on the other system to handle the most common calls. In
fact, that'd be even safer.

It'd be safest to have a whole machine per process :)

------
ythn
I hope there's a way for users to disable these patches. I for one would
rather have a fast processor that does speculative execution than one that
takes a significant N% performance hit but is armored against speculative
attacks.

Security compliance is becoming more and more like a ball and chain shackled
to our legs. I expect security to slowly eat up a bigger and bigger share of
processing power which translates to more sluggish chips, more sluggish
software, etc.
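For what it's worth, on Linux at least, some of these mitigations can be
switched off with kernel boot parameters (which knobs exist varies by kernel
version and distro, so treat this as a sketch):

```
# /etc/default/grub -- opt out of the Spectre v2 and Meltdown mitigations
# (kernel >= 4.15; run update-grub and reboot afterwards)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash spectre_v2=off nopti"

# Current mitigation state is reported under:
#   /sys/devices/system/cpu/vulnerabilities/
```

Microcode-level changes pushed by the CPU vendor are another matter; the
kernel knobs only control the OS-side mitigations.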

~~~
zedder
Hi, could you please explain this further? I always assumed that if a program
or chip is vulnerable, then it is not operating as intended and must be patched
quickly. You're suggesting that if patching a security vulnerability costs us a
feature (in this case, processor performance), we should forgo the patch. Am I
understanding correctly?

~~~
hackinthebochs
I think the point is that "security" isn't absolute; it's always relative to
some threat model. But if untrusted code isn't a part of your threat model,
then you might prefer the performance benefits.

