
Intel CPUs impacted by new PortSmash side-channel vulnerability - Aissen
https://www.zdnet.com/article/intel-cpus-impacted-by-new-portsmash-side-channel-vulnerability/
======
ysleepy
The OS could make Hyper-Threading opt-in per application thread. So Chrome
could mark its render threads as HT-groupable and they can share a core
between them, but otherwise cores are not shared.

There might be different levels of thread opt-in; prime95 might not care about
other threads finding out what the 338134th digit of its prime is, and could
mark its threads as unrestricted-sharable.
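The opt-in levels described above amount to a bin-packing rule for the
scheduler. A minimal sketch of that rule, assuming a purely hypothetical
policy API (no OS exposes `SHARE_NONE`/`SHARE_GROUP`/`SHARE_ANY` today):

```python
SHARE_NONE = "none"    # never share a physical core with another thread
SHARE_GROUP = "group"  # share only with threads in the same group
SHARE_ANY = "any"      # unrestricted sharing (e.g. prime95's workers)

def assign_threads(threads, n_cores):
    """Greedy first-fit: place (name, policy, group) threads onto n_cores
    physical cores with two SMT siblings each, honouring each thread's
    sharing policy. Raises RuntimeError when nothing fits."""
    cores = [[] for _ in range(n_cores)]
    for name, policy, group in threads:
        target = None
        if policy != SHARE_NONE:
            # Prefer a half-occupied core whose occupant is compatible.
            for occ in cores:
                if len(occ) == 1:
                    _, opol, ogrp = occ[0]
                    same_group = (policy == opol == SHARE_GROUP and group == ogrp)
                    both_any = (policy == opol == SHARE_ANY)
                    if same_group or both_any:
                        target = occ
                        break
        if target is None:
            # Otherwise take a fresh core for this thread.
            for occ in cores:
                if not occ:
                    target = occ
                    break
        if target is None:
            raise RuntimeError("no core available for " + name)
        target.append((name, policy, group))
    return {i: [t[0] for t in occ] for i, occ in enumerate(cores)}
```

With three cores, two "chrome" render threads pack onto one core, a
SHARE_NONE crypto thread gets a core to itself, and two SHARE_ANY prime95
workers share the last core.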

~~~
illumin8
This is the right approach, IMHO. Let me run local apps like video rendering
and transcoding using all cores, but let me mark anything that has Internet
access as only executing on a dedicated core.

Apple's new hardware approach might work well for this: on the A12X chip used
by the new iPad Pro, they have 4 high performance cores and 4 low power cores.
Let Chrome have a couple low power cores, and that solves a few problems:

1\. I get security isolation between Chrome and the rest of my apps.

2\. I no longer care very much when poorly written javascript wants to
consume 100% of CPU because it's only running on a low power core.

3\. My battery life is also better because Chrome can't consume a high power
core.

~~~
amaccuish
FYI Apple didn't come up with the idea of having differing cores:
[https://en.wikipedia.org/wiki/ARM_big.LITTLE](https://en.wikipedia.org/wiki/ARM_big.LITTLE)

------
cperciva
This is not a big deal; the exploit code is new, but the vulnerability has
been known for 13 years -- and doesn't matter since any code vulnerable to
this is also vulnerable to other attacks.

[https://twitter.com/cperciva/status/1058424239156412416?s=19](https://twitter.com/cperciva/status/1058424239156412416?s=19)

~~~
yread
maybe they used it so that script kiddies can't start stealing everyone's
private keys

------
rayiner
> "This is the main reason we released the exploit -- to show how reproducible
> it is," Brumley told us, "and help to kill off the SMT trend in chips."

Maybe what we should be killing off instead is exporting everything to the
cloud and running untrusted native code willy nilly.

~~~
jlebar
It's not just untrusted _native_ code. A lot of these exploits are
exploitable through js as well.

And I know there are folks on hn who think js is an abomination and noscript
is the answer to all of life's persistent problems. "My web browser should be
exclusively for reading text." But personally I'm not interested in taking us
back to the 1990s.

~~~
josefx
I don't need to be in the 1990s, I just wish websites didn't need to draw
from ~200 untrusted sources and possibly the world's largest malware
distribution channel to show some ads from even less trusted sources right
next to badly formatted text.

~~~
SolarNet
I mean the text is probably from poorly trusted sources as well but that's a
different problem.

~~~
dijit
but text has a limited attack scope compared to say: an entire interpreted
language and DOM.

------
xoa
As predicted Spectre/Meltdown were just the first fruit of a major new wave of
investigations into these avenues of side-channel attacks in CPUs. Now we're
seeing more as it gets more attention, and it's interesting stuff. That said I
think the researchers might go too far here:

> _"This is the main reason we released the exploit -- to show how
> reproducible it is," Brumley told us, "and help to kill off the SMT trend
> in chips."_

> _"Security and SMT are mutually exclusive concepts," he added. "I hope our
> work encourages users to disable SMT in the BIOS or choose to spend their
> money on architectures not featuring SMT."_

I don't blame them for being security focused above all else, at any cost
and any layer; that's their gig. But I think the real response here is
likely to be a lot more subtle and interesting. Of course perhaps SMT can in
fact be fixed for this without a wholesale tossing, in which case it'll just
be a universal hardware revision somewhere down the line. This increasing
level of public research and awareness of this specific class is still in
relatively early days, after all. But taking it as a given for argument that
there really is a fundamental conflict, the fact would remain that SMT can
provide significant performance gains, and furthermore that we're still far
from the point where SaaS/IaaS is everywhere. Lots of systems are still
under single-user local control, and in turn attackers being able to co-run
their own arbitrary code on the same physical core isn't necessarily part of
the threat model at all (and more specifically, if attackers get that far,
least common denominator kicks in: they've already owned what's important).
Even if it's desirable to run _some_ risky code as well, hard core affinity
for non-secure processes is a brute-force solution in a local-system context
that seems like it shouldn't be a big deal, given the surfeit of
physical/logical cores for many workloads.

But perhaps this could be a leading edge of true processor level physical
differentiation required between IaaS and more traditional deployments, and
that might make for an interesting change to the competitive landscape there.
It'd change the cost/benefit in some scenarios, or require more custom
processor work to enact harder (and performance costing) boundaries that a
traditional setup might take care of with machine isolation. I wonder if that
could shift things back away from the Cloud and centralization trend in some
instances, or at least create a more dynamic market?

~~~
jcranmer
> Of course perhaps SMT can in fact be fixed for this without a wholesale
> tossing in which case it'll just be a universal hardware revision somewhere
> down the line.

There's a very simple solution here. Don't schedule two threads with different
trust domains on different SMT threads on the same core. No need to change any
hardware, no need to disable anything, just accept (as has been known for at
least a decade) that there is very likely to be a side channel attack if you
look hard enough when SMT is involved.
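The rule above is easy to state as an invariant a scheduler could check. A
toy checker, with illustrative data layout rather than any real scheduler's
API (on Linux, the sibling topology can be read from
/sys/devices/system/cpu/cpuN/topology/core_id):

```python
def violates_smt_isolation(placement, core_of, domain_of):
    """placement: {logical_cpu: task}, core_of: {logical_cpu: physical core},
    domain_of: {task: trust domain}. Returns the SMT-sibling CPU pairs
    (a, b) that currently run tasks from different trust domains."""
    bad = []
    cpus = sorted(placement)
    for i, a in enumerate(cpus):
        for b in cpus[i + 1:]:
            if core_of[a] == core_of[b] and \
               domain_of[placement[a]] != domain_of[placement[b]]:
                bad.append((a, b))
    return bad
```

A scheduler enforcing this would simply refuse any placement for which the
returned list is non-empty; two VMs from different tenants on CPUs 0 and 1
of the same core would be flagged, while stacking one tenant's threads is
fine.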

~~~
kbenson
At what point does the extra complexity and cost of including SMT in the
design outweigh the gains it provides, if it can only be used safely in
specific situations and may be a security problem when you don't err on the
side of caution?

If it was clearly outlined as 8 cores that support hyperthreads or 10 cores
that don't for around the same price, what do we suppose most people would
choose?

Part of the reason for the current status quo might be that hyperthreads are
a big differentiator between AMD and Intel, and play towards Intel's
strength (higher clock speeds at fewer cores).

We have something close to the proposed scenario starting to play out with
AMD's offerings that support more cores and no hyperthreading, but it's not a
perfect experiment because AMD's cores are also lower clock speed, and there's
a lot of brand name loyalty currently.

~~~
jcranmer
The point of SMT is that it is quite low overhead. You're not duplicating
most of the core, only the physical register file and the TLB units (I
think). The execution units and caches--where most of the die area of a core
goes--remain unreplicated. In terms of area tradeoff, you're not looking at
"we'll give you 2 more cores if we disable SMT." For the Pentium 4 (the
quickest figure I could find), adding SMT was a die overhead of about 5%.
The realistic trade-off under area constraints is between SMT, better
out-of-order parallelism scavenging, or more L2 cache.

~~~
kbenson
> For the Pentium 4 (quickest I could get), adding SMT is a die overhead of
> 5%.

Ah, that's what I was looking for. I'm aware that most of the resources are
shared, but I also assume there have been design choices in other components
to make SMT easier or faster, which possibly increases their die size a
small amount, or in general just complicates their design.

The question (which is ultimately unanswerable) is what would we have had
Intel not chosen SMT as the path to pursue? If they had instead invested those
resource into other areas (e.g. more cores) and never let SMT concerns enter
into the discussion, how would _those_ (theoretical) CPUs compare?

That's the CPU comparison I was alluding to in the prior comment, and why I
noted AMD vs Intel (even with all its other differences) may be the closest
match-up of that idea we'll see.

It wasn't meant as a rebuttal to anything, it was more a "wondering out loud"
type of comment spurred by yours, about what could have been had different
paths been taken.

------
Symmetry
My understanding is that best practice for writing crypto libraries these days
is that you avoid any data dependent jumps and always execute the same
instructions every time, using cmov or similar instructions to make use of one
calculation or another. In that case the port usage should be the same
regardless of input data and you should be immune to this attack and many
classes of timing attack you might be otherwise vulnerable to. You've still
got to worry about data dependent power draw but that's hard to attack even if
you have the chip on a bench in your lab. So would this attack actually be
workable against crypto libraries designed that way? Not that the ones in the
wild are necessarily but this might be an impetus.
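The cmov-style technique described above can be illustrated with a
branchless select. A sketch only: Python's big integers make no timing
guarantees, so real constant-time code has to live in assembly or carefully
audited C; this just shows the mask trick that replaces a data-dependent
branch.

```python
MASK64 = (1 << 64) - 1  # work in 64-bit words

def ct_mask(bit):
    """Expand a 0/1 bit into an all-zeros or all-ones 64-bit mask,
    without branching on the bit's value."""
    return (-bit) & MASK64

def ct_select(bit, a, b):
    """Return a if bit == 1 else b. Both operands are always touched,
    so the same instructions execute regardless of the secret bit."""
    m = ct_mask(bit)
    return (a & m) | (b & ~m & MASK64)
```

Compiled to native code, this pattern maps onto cmov (or and/or/xor
sequences), so the ports exercised per iteration don't depend on the secret
data, which is exactly what defeats a port-contention observer.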

------
ccnafr
Just throwing it out there - the researcher said "he strongly suspects that
AMD CPUs are also impacted."

~~~
iforgotpassword
I hope someone who isn't as incapable as me will try to make it work there. It
seems Intel is always the primary target since it has a much greater market
share, but if we want to consider AMD the safer x86 alternative we should
actually check their CPUs too :)

------
greggyb
I have a somewhat naive question, since this and many similar recent issues
(Spectre, Meltdown) are associated with SMT, and the concern is about
malicious code being run on the same core.

Can't IaaS vendors simply restrict VMs to always use whole cores? If you
want 3 cores in your VM, you get:

        1. Core0 Main thread
        2. Core0 Hyper thread
        3. Core1 Main thread
        Core1 Hyper thread is un-allocated

And then we don't have two actors on one core? Or just only offer 2-core VMs.
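The whole-core allocation sketched above fits in a few lines. Assumptions:
core c owns logical CPUs c*2 and c*2+1 (real topologies vary), and a
physical core may only ever hold one tenant.

```python
import math

def allocate_whole_cores(vcpus, smt_width=2):
    """Return (used, idle) logical CPUs for a VM of `vcpus` threads.
    Cores are granted whole, so the odd SMT sibling of a partially
    used core is left idle rather than given to another tenant."""
    cores = math.ceil(vcpus / smt_width)
    threads = [c * smt_width + t
               for c in range(cores)
               for t in range(smt_width)]
    return threads[:vcpus], threads[vcpus:]
```

A 3-vCPU VM then gets logical CPUs 0, 1 and 2, with CPU 3 (Core1's hyper
thread) deliberately unallocated, matching the layout in the comment.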

~~~
wrs
“Google Compute Engine employs host isolation features which ensure that an
individual core is never concurrently shared between distinct virtual
machines.”

([https://cloud.google.com/blog/products/gcp/protecting-against-the-new-l1tf-speculative-vulnerabilities/](https://cloud.google.com/blog/products/gcp/protecting-against-the-new-l1tf-speculative-vulnerabilities/))

------
stephenr
It seems that for 'personal' devices (i.e. laptops/desktops), the biggest
vulnerability here is probably actually javascript.

You can be smart about what software you run, but most people don't use the
web without JS.

Maybe it's time for javascript to be 'off by default'.

~~~
dijit
I got heavily downvoted for suggesting that maybe google shouldn't be forcing
JS for login yesterday.

I feel somewhat vindicated with this, however the fact remains that the web
today doesn't really work without JS. I don't see that changing for any reason
other than it offering a better (or more consistent) experience, but that
requires web developers to support that. Which I don't see happening any time
soon.

~~~
stephenr
Welcome to the hive mind.

------
DeepYogurt
No paper up yet, but there's a discussion on seclists

[https://seclists.org/oss-sec/2018/q4/127](https://seclists.org/oss-sec/2018/q4/127)

~~~
yread

        01 Oct 2018: Notified Intel Security
        26 Oct 2018: Notified openssl-security
        26 Oct 2018: Notified CERT-FI
        26 Oct 2018: Notified oss-security distros list
        01 Nov 2018: Embargo expired
    

Why even do an embargo if you give hardware people 1 month and software people
1 week?!

~~~
rincebrain
Any number of possible reasons.

OpenSSL shipped a patch for it in that interval. Intel isn't going to fix it
faster than OpenSSL ships a patch revealing it unless they already had a
convenient killbit for the affected things.

I have no idea what the relevant researcher's policies are, but I would assume
we'll hear about it if somebody requested a longer embargo and they refused.

(It also appears to be much harder to reproduce in the presence of dynamic
clock speeds, so the impact in most smaller environments is going to be low
unless someone does further work to make it reproduce well with that.)

------
bitL
Seems like variations of these bugs are going to wipe out the past 10 years
of CPU performance advancements... As security research tooling gets more
advanced, it can possibly touch even more intrinsic areas of computation,
leading to orders-of-magnitude slowdowns in exchange for minuscule increases
in security - when is security finally good enough?

~~~
51lver
Intel has done nothing it can stand behind since the Pentium 3. That chip
was amazing. They had to dead-end P4 development and go back to a P3 base to
develop Core. I betcha they will now have to dead-end Core and go back to P3
again. It could be competitive with a die shrink, a clock boost (3.5GHz P3?
drooool), a newer instruction set (I'm sure it's fast enough to decode, and
Intel is good at adding accelerators), and then running a ton of them on a
single socket like AMD is doing. Maybe pair it with some Optane for instant
boot or some gimmick. It'd sell.

~~~
ahje
As much fun as a 4 GHz Tualatin 4- or 8-core CPU sounds, it would probably
take several years of development to make something usable by today's
standards.

------
bsmith
This is definitely a nit, but isn't calling this a "new vulnerability"
incorrect? The vulnerability has existed, but has only recently been
discovered. I'd prefer it was referred to as a "new exploit."

~~~
chithanh
It is "new" in the sense that it has not been described before.

Much like a mathematical theorem, which of course has been true all the time
since someone formulated the axioms, but when someone proves it for the first
time it is "new".

------
bvxvbxbxb
Hyperthreading is thoroughly done.

------
jandrese
Intel's response to this is to cut back on Hyperthreading significantly in the
latest generation:
[https://ark.intel.com/compare/134896,186605,186604](https://ark.intel.com/compare/134896,186605,186604)

~~~
vbezhenar
I don't see it as a response to vulnerabilities, it's just new flavours of
market segmentation. They have flagship CPU with HT enabled after all.

~~~
jandrese
The i9 is their "performance at any cost, even security" chip. It's for people
who want/need the absolute fastest chip and damn the torpedoes. It's so far
past the optimal point on the price/performance curve that nobody sensible
will be buying it unless they have a very specific need.

~~~
keldaris
I'm no fan of the i9 series at all, but if you have SIMD-heavy workloads
(especially anything you're willing to port to AVX-512), those CPUs actually
make a lot of sense price/performance wise. It's a fairly specific niche, but
not necessarily a tiny one.

