[dupe] CPU.fail (cpu.fail)
282 points by razer6 9 days ago | 28 comments

List posts, which this home page is, lead to lowest-common-denominator discussions. People focus on what the list has in common and its gravity prevents specific items from gaining liftoff. Specific discussions tend to go deeper than generic ones, so we're going to unmerge these threads and have a separate one for each major disclosure:

Zombieload: https://news.ycombinator.com/item?id=19911341

MDS: https://news.ycombinator.com/item?id=19911277

This will take several minutes, so if you see weird incongruities or disappearances, hold your fire.

Edit: Ok, I've done as much of this as I'm going to do. If you notice anything wrong, can you let us know at hn@ycombinator.com so we can fix it?


This is the overview page. It comes alongside:

* https://zombieloadattack.com/ , on Hacker News as https://news.ycombinator.com/item?id=19911341 (The technical paper is hidden inside a collapsed part of the page and is at https://www.cyberus-technology.de/posts/2019-05-14-zombieloa... .)

* https://mdsattacks.com/ , on Hacker News at https://news.ycombinator.com/item?id=19911277

* Google's announcement about ChromeOS at https://sites.google.com/a/chromium.org/dev/chromium-os/mds-... , on Hacker News at https://news.ycombinator.com/item?id=19911406

(Several Hacker News discussions have since been merged here, and were then re-split.)


I thought Theo de Raadt was exaggerating when he said that Intel does not know how to build a CPU.

He was, obviously.

Intel certainly does not know how to build a secure CPU.

The blog post is buried a bit deep, but it has the actual technical information on the topic:

https://www.cyberus-technology.de/posts/2019-05-14-zombieloa...


The overview page, https://cpu.fail/ , is on Hacker News as https://news.ycombinator.com/item?id=19911715 .

(This comment was merged from a duplicate discussion.)


The worst thing about Heartbleed is that it introduced marketing into vulnerability disclosures. :(

In some cases I agree, but these are very interesting attacks, and having a nice, informative landing page about them is very welcome.

Boosting your impact factor is how to get tenure. Why stick to dry journals when you can leverage mass media?

How is that a bad thing?

It isn't. Some people just think "marketing" is the root of all evil; done right, it's actually just effective communication.

I never knew about the .fail TLD.

> Computer makers Apple and Microsoft and browser makers Google and Mozilla are releasing patches today.

Computer makers? Wouldn't that be OS makers? They are patching their OS to prevent leaking...


Apple makes Macs / MacBooks etc., Microsoft makes Surfaces / Surface Pros...

Yes, but I couldn't find anywhere whether Apple and Microsoft are patching this as a "hardware" fix for specific products.

Almost like saying that the "software maker" John Deere will fix their latest-model Harvester.
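
For what it's worth, the fixes really are OS- and microcode-level, not per-product hardware changes. One place this is directly visible is Linux, where kernels carrying the May 2019 MDS patches report mitigation status through sysfs. A minimal sketch (Linux-only; on an older kernel the file simply won't exist):

    #include <fstream>
    #include <iostream>
    #include <string>

    // Minimal sketch: read the kernel's self-reported MDS mitigation status.
    // Linux-only; the sysfs entry is only present on kernels that carry the
    // May 2019 MDS patches.
    int main() {
        std::ifstream f("/sys/devices/system/cpu/vulnerabilities/mds");
        std::string status;
        if (std::getline(f, status))
            std::cout << "mds: " << status << "\n";
        else
            std::cout << "mds: not reported (kernel predates the MDS patches?)\n";
        return 0;
    }

On a patched machine this prints something like "Mitigation: Clear CPU buffers; SMT vulnerable", depending on microcode and SMT settings.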


Also see https://mdsattacks.com for the RIDL and Fallout landing page.

Well designed, minimal, and useful. There should be an email alert subscription for upcoming exploits.

Is it time to just write an x86 API on top of GPUs and get rid of CPUs? Seems like the shortcuts we've been taking to get sequential speed are all blowing up in our faces, and fixes aren't possible without huge performance regressions.

"Is it time to just write an X86 API on top of GPUs and get rid of CPUs?"

How many days are you willing to wait for your computer to boot?

GPUs aren't "better" than CPUs, they're different. Between the two, CPUs probably make better GPUs than GPUs make CPUs, but it's a tough call; neither of them is very good at the other's job!


We've been building personal computers around GPUs for a long time now. The CPU-based computer was kind of an oddball IBM "thing". Old MOS (Nintendo, Commodore) and ARM systems, for example, seem to have the CPU serve the GPU in most configurations I've seen.

I didn't say it would be impossible in some abstract sense. Goodness knows even a single execution unit of a modern GPU is more powerful than my first IBM computer, if you just hook it to the right things. I asked how many days you'd be willing to wait for your computer to boot, by which I mean something like your current workstation.

Huge, huge swathes of your normal boot process would be trying to run on a single GPU execution unit, since there would be no parallelization available, and your GPU is terrible at out-of-order dispatch (i.e., basically can't do it, last I knew), so all the optimization we've spent the last 50 years putting into our CPUs won't be firing. You'd basically be trying to run your computer on something in the range of 100 MHz down to, for all I know, single-digit MHz-equivalent of your current Intel or AMD CPU (after all the penalties around not using any prediction, not having the proper caches, thrashing like hell in your GPU's memory caches, and all the other effects... I'm not even sure I'm willing to promise you'll never hit code with kHz-equivalent performance; you might just get that NES performance(!)).

Your modern GPU-based computer trying to boot Windows or Linux is gonna craaaaaaawwwwwwllllll. Can it be done? With the right work, yeah, probably, but you're not going to enjoy it, or be willing to use it. CPUs are terrible GPUs, but GPUs are terrible CPUs.

How well a computer could run if it was optimized for the GPU is an open question, but I guarantee you that if in some bizarre parallel universe everything was running on GPU-like hardware, but in 2010 suddenly people figured out CPUs and next year the Core Duos were available, people would be flipping out over how awesomely they perform and would be rushing to rewrite huge swathes of code in these new-fangled "in-order execution units".
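
How much that machinery buys you is easy to underestimate. Here's a rough, hypothetical illustration (mine, assuming a typical desktop CPU; actual ratios vary): run the same branchy loop over random bytes, first unsorted, then sorted. Sorting makes the branch predictable, and the predictable version typically runs severalfold faster. Build with mild optimization (e.g. -O1); at higher levels the compiler may compile the branch away and hide the effect.

    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <random>
    #include <vector>

    // Toy demo: the same branchy loop, unsorted vs sorted input. The sorted
    // run is typically severalfold faster because the branch becomes
    // predictable: a taste of how much the branch predictor does for
    // ordinary sequential code.
    static long long sum_over_threshold(const std::vector<int>& v) {
        long long sum = 0;
        for (int x : v)
            if (x >= 128)              // unpredictable on random data
                sum += x;
        return sum;
    }

    int main() {
        std::vector<int> data(1 << 24);
        std::mt19937 rng{42};
        std::uniform_int_distribution<int> dist(0, 255);
        for (int& x : data) x = dist(rng);

        auto time_run = [&](const char* label) {
            auto t0 = std::chrono::steady_clock::now();
            long long s = sum_over_threshold(data);
            auto t1 = std::chrono::steady_clock::now();
            double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
            std::printf("%-8s %8.2f ms (sum=%lld)\n", label, ms, s);
        };

        time_run("unsorted");
        std::sort(data.begin(), data.end());
        time_run("sorted");
        return 0;
    }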


Last year I got a video of my 75 MHz Pentium booting Windows faster than a web app could load on my MacBook Pro (1 TB SSD, 16 GB RAM, i7). General performance in 2019 is already horrible. Rethinking everything from the ground up would be quite pleasant imho.

Which Windows?

98 SE.

I’m not sure what you mean here. The GPU is a specialized parallel compute element; it doesn’t do general-purpose calculations with complex branch conditions very well. That’s why it can be so massively parallel. The CPU, conversely, does comparatively well with branching and arbitrary memory loads.

The examples you give are gaming machines, so the programs on the CPU, especially on older machines, are based around video interrupts to facilitate smooth graphics. It’s a program choice on the CPU.
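
To make the "arbitrary memory loads" point concrete, here's a hypothetical microbenchmark (mine, not from any of the papers): a pointer chase, where each load depends on the previous one. There's no parallelism to extract, so performance is pure per-access latency; exactly the regime where a big out-of-order core with deep caches does comparatively well and a single GPU lane would crawl.

    #include <chrono>
    #include <cstdio>
    #include <numeric>
    #include <random>
    #include <vector>

    // Toy microbenchmark: chase a random cycle of indices. Every load
    // depends on the previous one, so nothing can be overlapped; the loop
    // measures raw memory latency per hop.
    int main() {
        const size_t n = 1 << 22;              // ~4M entries, ~32 MB
        std::vector<size_t> next(n);
        std::iota(next.begin(), next.end(), 0);

        // Sattolo's algorithm: yields a single cycle, so the chase visits
        // every entry once instead of getting stuck in a short, cached loop.
        std::mt19937_64 rng{42};
        for (size_t k = n - 1; k > 0; --k) {
            std::uniform_int_distribution<size_t> d(0, k - 1);
            std::swap(next[k], next[d(rng)]);
        }

        size_t i = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (size_t hop = 0; hop < n; ++hop)
            i = next[i];                       // serial dependency chain
        auto t1 = std::chrono::steady_clock::now();

        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
        std::printf("%.1f ns per hop (end=%zu)\n", ns / n, i);
        return 0;
    }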


The shortcuts that we're talking about here are all pretty trivial in terms of performance impact and in terms of what it takes to fix the issue.

Meanwhile, GPUs are designed to run thousands of threads in parallel, and are orders of magnitude slower than a CPU when running only a handful of threads.


Haha. Yeah, if you think Intel is bad at security, I’m sure you’ll love Nvidia taking over that role...


