
Intel FP security issue - stevekemp
http://www.openwall.com/lists/oss-security/2018/06/13/7
======
amluto
Linux hasn’t used CR0.TS for some time. I removed it a while back because
manipulating TS is very very slow.

(I am not part of any process with respect to this, embargoed or otherwise.)

Edit: the upstream commit is 58122bf1d856a4ea9581d62a07c557d997d46a19, called
“x86/fpu: Default eagerfpu=on on all CPUs”, and it landed in early 2016. Greg
K-H just submitted backports to all older supported kernels.

~~~
ericseppanen
Wasn't 212b02125f3 ("x86, fpu: enable eagerfpu by default for xsaveopt")
sufficient protection for everyone on a modern CPU?

~~~
hansendc
That commit disables lazy mode by default on processors supporting the
XSAVEOPT instruction. But it's still possible to override that default with
the eagerfpu= kernel command-line option, or for a hypervisor to mask out the
instruction even on hardware that supports it.

The point is: although relatively unlikely, it is still _possible_ that you
need some mitigation even if you have newer hardware (Sandybridge or newer is
where XSAVEOPT first showed up, I believe).

Disclaimer: I work on Linux at Intel.

------
blattimwind
The money quote (from OpenBSD) is this:

"3) post-Spectre rumors suggest that the %cr0 TS flag might not block
speculation, permitting leaking of information about FPU state (AES keys?)
across protection boundaries."

AES-NI is part of the vector/FP units and uses those registers as well.

~~~
mikec3010
I wonder if these "bugs" will create a market for security dongles that
perform AES, RSA, etc.? That way they wouldn't be black boxes like CPUs,
which literally have minds of their own these days (IME). I would like to own
a USB dongle that took files in and output them in encrypted form. Bonus if
they had an open spec so you could have various vendors or open-source FPGA
versions. Bonus if the key load were airgapped from the PC side, say via QR
code, hex buttons, microSD, Bluetooth with a hardware disable switch, or even
RFID.

Yes that does create some new attack vectors, but these "bugs" make me think
that the whole architecture is a rooted, burning trash fire.

~~~
bdamm
Well, yes. There is already a large market for these "security dongles", and
many libraries and protocols for interacting with them. They're called HSMs;
examples of libraries include PKCS#11, JCE, and MCE, and protocols like KMIP.
Widely used in the financial sector, by CAs (of course), in revenue
collection such as tolls, government functions such as passport issuance, and
some kinds of industrial control segments, among others.

It's long been the case that side-channel attacks can extract key material
from conventional CPUs. Power analysis alone has been a science for decades
now and isn't going away any time soon, made all the more exciting by the
prevalence of RF and the advancement of antennas. Spectre and the like are
just another wake-up call for those not paying attention, e.g. in cloud
services. Consider yourself one of the enlightened when it comes to crypto
material handling.

~~~
hamilyon2
Well, I've worked with one of the proprietary security tokens before. Nothing
to be proud of: unpatched software/firmware bugs, zero accountability from
the manufacturer, and a usability mess. The thing is, it's not only the
cryptographic hardware and software itself that has to be safe; the whole
system has to be up to date with no weak links, which is hard in practice and
few want to pay for it.

Makes me wonder whether there is any incentive to do crypto properly, or
whether security theater will always prevail.

~~~
tgtweak
I've had the unfortunate experience of integrating a Gemalto network HSM, and
the broken state of the documentation alone is enough to make you question
any engineering inside.

It's security through obscurity.

------
JStanton617
So Intel tried to shut *BSD out of the process again (like they did for the
original Spectre/Meltdown) so they didn't feel they had to respect any
embargo?

~~~
tomxor
> So Intel tried to shut *BSD out of the process again (like they did for the
> original Spectre/Meltdown) so they didn't feel they had to respect any
> embargo?

Yes and no... It's really important that this be viewed in the context of the
discussion opened by Theo in the video from the previous HN post (provided in
this thread by codewriter23).

Here's my TL;DW from the irritatingly poor-quality video:

Yes, they are pissed that they are being excluded (the rumour is that Amazon
and Google have been implementing fixes).

However, they are not necessarily "not-respecting" the embargo according to
the proposed methodology Theo outlines in the video: to (speculatively)
exclude _any_ potential source of speculative execution vulnerabilities to
ensure they are safe without giving weight to any one rumour. And then
gradually prune back the precautions as they become publicly disclosed.

Apparently they used a similar strategy previously to provide patches for
sshd before they were allowed to publicly disclose the vulnerability: prevent
the bug from being reachable without the commits revealing exactly what is
broken, by never touching the offending code. In this case the idea is to be
non-specific and disable a whole class of things even though it might not be
necessary (because here they really don't know where the problem is exactly).

Disclaimer: The above is not my opinion, it was my interpretation of the
relevant context from the video, i do not know if it matches their actions.

It seems possible the commenter on the oss-security mailing list is not aware
of this strategy and is giving more weight to OpenBSD's patch than it
deserves (and perhaps wrongly implying OpenBSD have disrespected the embargo
as a side effect).

However these patches are way beyond me so I cannot tell.

~~~
teamhappy
The breaking-the-embargo part is about the FPU issue that they published a
patch for a few days before Theo gave the talk.

The part you're referencing is Theo speculating about the next bug. He
suspects fixing it requires flushing a cache line, but he doesn't know which
one (because he doesn't know where the bug is), so he proposes flushing all
of them until the bug is published and then removing the flushes that aren't
necessary.
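
For concreteness, "flushing a cache line" means evicting it back to memory
with an instruction like CLFLUSH, and flushing everything conservatively is
just a loop over the lines of each sensitive buffer. A sketch using the
SSE2 intrinsic (the 64-byte line size is an assumption that holds on current
x86 parts; the function name is mine):

```c
#include <stddef.h>
#include <immintrin.h>

#define CACHE_LINE 64   /* line size on current x86; strictly, query CPUID */

/* Evict every cache line covering buf[0..len) back to memory. */
static void flush_range(const void *buf, size_t len)
{
    const char *p = (const char *)buf;

    for (size_t off = 0; off < len; off += CACHE_LINE)
        _mm_clflush(p + off);
    if (len)
        _mm_clflush(p + len - 1);   /* cover a straddled final line */
    _mm_mfence();                   /* order the flushes vs. later accesses */
}
```

Doing this on every boundary crossing is expensive, which is exactly why the
plan is to prune the flushes back once the real bug is disclosed.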

He then mentions the last serious OpenSSH bug. Instead of publishing a fix for
the bug (and thus disclosing the bug) they decided to publish a patch that
moved a bunch of code around and just happened to also make the buggy code
unreachable. Then they told everybody to upgrade and once that happened they
could safely disclose the bug and publish a fix for it. No embargo necessary
and everybody got the fix at the same time. (I assume that's why he brought it
up.)

~~~
tomxor
Ah, thanks for pointing that out. But I wonder if there is actually a
demonstrable exploit for that patch? Or is it the same preemptive approach? I
guess I'm arguing that the patches didn't necessarily wait for Theo's talk to
reveal why they are doing what they are doing.

Can someone with deep enough knowledge of the patch tell whether it
implicitly demonstrates the flaw (and therefore effectively breaks public
disclosure), or is it purely speculative? Oh god, the puns are killing me.

------
notaplumber
DragonFly went a little further with this as well... a very precarious future
ahead for us, I think.

[http://lists.dragonflybsd.org/pipermail/commits/2018-June/67...](http://lists.dragonflybsd.org/pipermail/commits/2018-June/672325.html)

~~~
koolba
I find it amazing how clean the diff for something like this is within the BSD
source tree:
[http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/9474...](http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/9474cbef7fcb61cd268019694d94db6a75af7dbe)

~~~
tomxor
:) Some people go to great lengths to make commit history produce nice clean
diffs... although I doubt anyone ever reads mine. Rebase, squash and split,
your future self will thank you!

------
stryk
Can anyone break down the impact and severity of this in more digestible
terms, for those of us not that deeply technical?

~~~
monocasa
Unpatched systems can leak SIMD/FP state between privilege levels. Pretty
fucking high severity since that's where we stick private keys these days.

The cost is more expensive context switches, since we'll now have to fully
unload and reload all SIMD/FP state. I'm sure Intel will fix this one in a
couple of generations.

~~~
stryk
Unpatched meaning systems without the Spectre/Meltdown mitigations enabled? Or
is this something unrelated to the previous bugs?

~~~
monocasa
This is unrelated and requires new patches. Somewhere else in the thread here,
someone is saying that Linux isn't vulnerable, but I don't know for sure.

~~~
stryk
Thanks for clearing that up for me. Wooo boy, another one.

------
pstuart
Does this impact AMD as well? If not, might this bring further performance
parity between the chipmakers?

~~~
greglindahl
Linux appears to have patched things to avoid this problem on Intel and AMD
more than 2 years ago, with the reasoning being that modern CPUs are fast in
"eager" mode and "lazy" is not needed. So no, no performance difference for
AMD as a result of this issue.

~~~
bonzini
It's not just that modern CPUs are fast in eager mode; the str* and mem*
functions nowadays use the FPU (via SSE/AVX), and the dynamic loader (ld.so)
uses those functions. So Linux's heuristics ended up preloading the FPU state
for most processes anyway, even before the program's own code started
running.

------
0x006A
an empire built on sand...

~~~
PedroBatista
Literally and figuratively.

------
jokoon
I am not a security expert, but I am still able to understand the low-level
details of how software works.

Yet I have always been mystified by how hard it is to understand security
stuff. Maybe it is because I don't find it interesting, as I am more
interested in creative things like gaming.

Honestly, it has been maybe 10 years since I abandoned the idea of caring
about security. I just do the minimum: passwords, avoiding sketchy websites,
not keeping sensitive files, using trustworthy software, etc.

Security is just too hard now. Maybe manufacturers like Intel are to blame,
and obviously there MUST be some political will to make sure that most
electronics are insecure, to give an advantage to intelligence agencies.

Ultimately, when I first heard about the Sony rootkit, and lately about the
HD firmware worm, I felt really powerless and outdated. Even as a guy who can
write software, not being able to protect myself effectively against those
attacks, and having to tell non-programmers "no, I cannot hack people's
computers", is starting to make me feel like an idiot.

As the years go by, electronics seem more and more vulnerable, and I still
feel completely unable to defend myself. I'm sure that even designing a
completely secure computer would be politically taboo, because people would
argue that it could help the bad guys, so one could not build such a device
with success.

The whole "I don't care since I have nothing to hide" line is really just a
fair excuse to admit that I'm not capable of defending myself, and that I
will let others fight their cyber wars without me caring at all. For now the
security of individuals seems to be lost, and I fear that one day it won't
only be state actors that exploit this insecurity for policing; it will be
petty criminals. If cyber chaos ensues, nobody will use computers anymore,
and they might even become banned from possession.

