[dupe] Meltdown and Spectre (spectreattack.com)
347 points by dustinmoris on Jan 4, 2018 | 141 comments

It's not every day that a security issue is born out of a side effect of underlying theory; in this case, how out-of-order execution works. It's almost as bad as if someone proved P = NP, rendering all cryptography useless on the spot.

The game is changed forever, and CPU designers across the industry will likely have to can some or all of OoOE, setting performance back considerably. It could take years to regain the performance we're used to today at the required security level.

It can mostly be fixed by making sure aborted speculative execution has no side effects.

I think this is doable by adding a buffer to store cache entries evicted by speculative execution, so they can be put back, and a log of new cache entries, so they can be removed.

Also, the abort process itself needs to take a constant amount of time out of the critical path to avoid leaking info via timing (the timing then only depends on how fast the CPU determines it speculated wrong, but that doesn't depend on the speculated code).

EDIT: Hyperthreading also needs to be handled somehow, which other than disabling it might require adding a per-thread "L0" cache and using that exclusively for speculatively executed code.

There is still the problem of theoretical plain timing attacks on normally executed code though, but that's unrelated.

Making sure that the indirect branch predictor only makes predictions when the source address fully matches would be nice as well (although theoretically not required if there are no side effects).

I don't think undoing all cache changes is enough, since it went to memory it generated memory bus traffic, which could be observed by another core (as extra latency). And speaking of another core, the cacheline might have come from another core, which could observe the effect on its own cache, even if it's later put back.

Yes. I doubt this will be what actually happens, but perhaps the safest fix is to remove speculative execution in its entirety, increasing clock frequency and/or core count to mitigate the resulting performance loss by as much as possible.

> It can mostly be fixed by making sure aborted speculative execution has no side effects.

Not an expert, but if speculative execution has no side effects, not even on timing, then you might as well disable it.

Speculative execution is meant to speed up computations.

> The game is changed forever, and CPU designers across the industry will likely have to can some or all of OoOE, setting performance back considerably. It can take years to reachieve performance we're used to today at the security level required.

The approach Mill Computing [0] is taking is to have the compiler toolchain handle all instruction scheduling for their new design. All instructions have a fixed, known latency (in terms of clocks). And the processor doesn't have much in the way of global state (like condition code registers and such). All metadata is stored with the data on the 'belt' (replacement for registers) itself, which is saved / restored upon context switch.

They are also years away from silicon it seems.

[0] https://millcomputing.com/docs/

I’m a bit puzzled how this helps prevent handcrafted binaries from exploiting CPU designs.

Well, if nothing else, the design of the Mill has a much smaller "attack surface". There's not nearly as much in the way of interesting activities going on behind the scenes as compared to a modern OoOE CPU. That logic is pushed out of the hardware and into the software development toolchain.

The Mill has many other aspects that improve the security story. All protection is based on memory address, for example. And pointers are not integers. There are reserved portions of the address space that programs are completely not allowed to touch.

Maybe it's a sign that the current CPU architectures are too complex for their own good. Branch prediction is a really, really weird concept. It could be avoided if we switched away from the Von Neumann architecture.

> It's almost as bad as if someone proved P = NP, rendering all cryptography useless on the spot.

No, P=NP does not immediately break crypto, because that relation says nothing about whether the polynomial-time algorithms for problems we thought were outside P are easy to find, or efficient. Just because the problem your adversary has to break is in P doesn't make it easy.

Technically, if P=NP then we already have a working algorithm to solve NP problems in polynomial time.

It involves iterating through and running all possible programs ordered by length, giving them an increasing amount of time to run and checking (in polynomial time) if they produce the correct answer.

If there exists a program that finds the answer in polynomial time, this algorithm will eventually find it after wasting (at most polynomial) time iterating through other programs and tentative bounds.

If P != NP, it's still at least asymptotically optimal.
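The dovetailing structure of that algorithm can be sketched in a few lines. This is only a toy: real universal search enumerates all programs ordered by length, while here the "program space" is a hand-made list, and the subset-sum instance and helper names are made up purely for illustration.

```python
# Toy sketch of Levin-style universal search: in round k, run each of the
# first k candidate programs for k steps, and verify any answer in poly time.
# (Never terminates if no program ever produces a correct answer.)
from itertools import count

def universal_search(programs, instance, check):
    for k in count(1):
        for prog in programs[:k]:
            answer = prog(instance, step_budget=k)
            if answer is not None and check(instance, answer):
                return answer

# Example NP problem: subset sum. The checker runs in polynomial time.
def check_subset_sum(instance, subset):
    nums, target = instance
    return sum(nums[i] for i in subset) == target

def brute_force(instance, step_budget):
    # Stand-in "program": enumerate candidate subsets, one per "step".
    nums, target = instance
    for bits in range(2 ** len(nums)):
        if bits >= step_budget:
            return None  # out of budget this round; retried with a larger k
        subset = [i for i in range(len(nums)) if bits >> i & 1]
        if sum(nums[i] for i in subset) == target:
            return subset
    return None

print(universal_search([brute_force], ([3, 9, 8, 4], 12), check_subset_sum))
# -> [0, 1]
```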

Hmm, but running 'all possible programs' is an exponential time operation (in the lengths of the programs)...

Yes, but as you note, the enumeration is exponential only "in the size of the program", and the size of {the program that solves 3-SAT in polynomial time, if there is one} is a constant, so it's all constant.

Of course, that constant is astronomically impractical.

I like the term "Galactic Algorithm" for these algorithms with impractical constants.


Polynomial time is unbounded. Indeed, even constant time is unbounded.

Bare metal servers that run trusted code, like a build bot, are not affected. Maybe this will be a BIOS setting or a kernel flag to set.

Kernel flag at compile time? - Yes

Kernel flag at boot time? - No (the fix is to compile instructions differently).

Bare metal servers that only run trusted code are only unaffected so long as they explicitly opt out of the new security model, and I'm not sure how easy that'll be...

IIUC it's not possible to opt out at this time (short of reverting the patches). Linus expressed some concern about it: https://lkml.org/lkml/2018/1/3/797

What a tl;dr!

  Please talk to management. Because I really see exactly two possibibilities:
  - Intel never intends to fix anything


  - these workarounds should have a way to disable them.

  Which of the two is it?

In case of Meltdown and Linux you simply set the boot parameter pti=off and reboot the machine.

Ah, sorry. My comment was specifically directed at Spectre (though I didn't say it), the more serious of the two.

I feel like user-facing server-side scripting languages should be investigated to see whether they incur the same risk as running JavaScript in a browser (which is big and complicated). I'm thinking about stuff like Sievescript and IFTTT-like applications.

Side-effects strike again! Not only does the hardware have observable side-effects that are not cleaned-up, but our browser languages allow pervasive side-effects which are used for exploits (e.g. precision timing).

Note one of the mitigations for this is that Mozilla (and I'm guessing other browsers) is reducing the resolution of performance.now()

From the Spectre paper: Chrome reduced resolution of performance.now(), so they set up a webworker with a tight loop that increments a shared variable.
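The trick generalizes beyond the browser. Here is a hedged sketch of the same idea, transplanted from a JS web worker with a SharedArrayBuffer to a Python thread with a plain attribute; Python's GIL makes the resulting "clock" far coarser than the real attack's, so this only illustrates the structure, not the achievable precision.

```python
# A spinning thread that increments a shared counter acts as a makeshift
# high-resolution timestamp: read the counter before and after an operation.
import threading
import time

class CounterClock:
    def __init__(self):
        self.ticks = 0
        self._stop = threading.Event()
        self._t = threading.Thread(target=self._spin, daemon=True)
        self._t.start()

    def _spin(self):
        while not self._stop.is_set():
            self.ticks += 1   # tight increment loop = the "clock"

    def now(self):
        return self.ticks

    def stop(self):
        self._stop.set()
        self._t.join()

clock = CounterClock()
start = clock.now()
time.sleep(0.05)              # the "operation" being timed
elapsed = clock.now() - start
clock.stop()
print(elapsed)                # some large, operation-dependent tick count
```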

Yes, and creating one or more "web workers" (threads) is a significant side-effect. JavaScript, to me, looks more like a systems programming language (imperative with threads and sockets) than something trusted for interactive web sites. The web is a huge mess and today we learn that hardware is too.

This is why Mozilla also removed support for shared variables (i.e. SharedArrayBuffer).

How many years should we think back?

I have some old laptops at home and they work quite well with a proper Linux setup.

Should we think in terms of slowing everything down by some factor, or just slowing tasks that need an un-throttled CPU?

Qualifier: I'm working off articles and a few computer-architecture courses from college, so don't trust me too much here.

That said... Branch prediction and TLBs have been ubiquitous since about 1975. We're seeing vulnerabilities in ARM processors with unit costs of a tenth of a cent. We don't have to go that far back; if nothing else our toolchains and manufacturing techniques are far better. But if we're forced to discard speculative execution as a concept, which I think may be necessary to entirely prevent side channels, we'll be going way, way far back. Further back than the Pentium 4.

Fundamentally speaking, the power of out-of-order execution is not one of engineering. It's not like object-oriented programming or version control in that it simply makes it easier to engineer fast processors. It is more powerful on a fundamental level. And there's been so much theoretical and practical work put into it over the decades that it may take more decades to bring a different mathematical formalism up to the same level of development.

Let's just say that I really hope we don't have to throw out out-of-order execution. It'd suck. Hard.

It's not so much out-of-order execution as it is speculative execution: predicting a branch and then trying to run the code after the jump while you wait for the branch itself to finish resolving. At least that's my understanding from the papers. This means that otherwise normal out-of-order techniques, like memory load/store reordering, aren't subject to these particular attacks. I hope attacks on them aren't found either, because those are some of the biggest gains: they allow better cache behavior and can make the impact of a miss almost non-existent, since the CPU can execute other instructions while waiting on a load from system memory.

My understanding is: speculative execution is flawed, and out-of-order execution makes it hard to tell when the errors are going to be really bad.

> if we're forced to discard speculative execution as a concept

Do you have any reason to believe this will be the case?

> Further back than the Pentium 4

This is a bad example.

The Pentium 4's pitfall was a high clock speed with little concurrency, and thus a big heat problem that limited peak computational power.

Yes, we lose some concurrency without branch prediction, but post-P4 multi-core processors would have to take some very major hits to be comparable to the P4 in a modern multi-threaded computing environment.

The P4 is just not a good reference point for old processors because of its architectural differences.

The patches you're seeing go out amount to little more than hacks that try to stop the CPU from doing branch prediction in some cases. It depends on the CPU, really. For example, and this is just a guess, NetBurst-era Intel CPUs may take even more of a hit, as they relied on deep pipelines to reach higher clock speeds, and any step in that pipeline may involve a branch prediction.

I'm far from an expert in this field, but I just read the Wikipedia article on it and it says Spectre affects pretty much every processor since 1995.

Rather than changing OoOE, wouldn't it be much simpler to prevent unprivileged processes from seeing the real time? For maximum compatibility with existing software, a CPU could provide fake time, that on average passes at the same rate as the real time, but is completely determined by the instructions executed. If the length of process time slices are determined by fake time instead of real time, the process will never be affected by small variations in real execution time, no matter how much time passes.

Unless you never get the real time doesn’t this just push out the problem to statistical analysis of differences between real and fake time?

The fake time can be synced to real time at the beginning of the time slice of a process. So the process never sees the real time, but the fake time is fairly close to it.

Processes can run a long time and pass data around. This isn’t realistic.

I'm not seeing the issue with a long running process. If the fake time is regularly synced to real time, it should not matter how long the process runs.

If passing data involves frequent switches between processes, then, yes, I see there would be a problem with syncing on every process switch. I think syncing at longer intervals would solve that problem. All processes could share the same fake time, but see the fake time skip a small fixed amount at fixed regular intervals. That fixed amount of fake time would take a variable amount of real time, which absorbs the small variations in real execution time.
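The scheme sketched above can be written down in a few lines. To be clear, this is a hypothetical design, not an existing kernel feature; the class and all its names are made up for illustration.

```python
# "Fake time": the time visible to a process advances deterministically with
# the work it performs, and is re-synced to real time only at coarse,
# scheduler-controlled intervals - never on demand by the process itself.
class FakeClock:
    def __init__(self, ns_per_op=10):
        self.base_real_ns = 0   # set from the real clock at each sync
        self.ops = 0            # deterministic count of "work units" retired
        self.ns_per_op = ns_per_op

    def retire_ops(self, n):
        self.ops += n

    def sync(self, real_ns):
        # Called by the scheduler at fixed intervals, absorbing any drift.
        self.base_real_ns = real_ns
        self.ops = 0

    def now_ns(self):
        # What the process sees: depends only on work done, not on whether
        # an access hit or missed the cache.
        return self.base_real_ns + self.ops * self.ns_per_op

clock = FakeClock()
clock.sync(real_ns=5_000)
clock.retire_ops(300)    # a cache hit and a cache miss retire the same ops
print(clock.now_ns())    # -> 8000, regardless of real execution time
```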

[cough] Oh dear, Moore’s Law


Lots of simple cores would be inherently better.

But much harder to program and not effective for speeding up quite a lot of software.

Isn't that exactly what made PS3 hard to develop for?

Yep. It was bad to develop for, and it left a lot on the table compared to its contemporary, the Xbox 360, despite "theoretically" better specs. [1]

The Cell processor was never used in any other consoles, and the IBM Roadrunner supercomputer using them ended up being replaced by Cray+AMD hardware.

[1] https://kotaku.com/5885358/why-skyrim-didnt-play-nice-with-t...


That's the Xeon Phi concept... which is apparently being taken out back and shot.

Our current programming paradigms don't adapt well to massive parallelism, except for some small classes of "embarrassingly parallel" problems (e.g., make -jN).

There are also GreenArrays chips, like the GA144.[1] 144 cores on a chip.

The downside (or upside) is that they're designed for running Forth, and are not x86 compatible.


Most of what we do is pretty parallel: web hosting, servicing requests, databases, hosting tons of VMs and containers, etc. I don't buy it.

My guess on Phi is that it wasn't a big win over fewer deeper cores and so there was not much market. This might change things a bit.

Video demonstration of the Meltdown attack was just taken down: https://www.youtube.com/watch?v=RbHbFkh6eeE due to "violating YouTube's policy on spam, deceptive practices, and scams."

Thanks for pointing that out. I've escalated internally to have the video restored.

Edit: the video has been restored.

"I've escalated internally" wow almost as cool as "I called the POTUS and the TSA thing is fixed now"!! Kudos and thanks

Thanks a lot!

It's odd that the "LEARN MORE" isn't actually a link.

Yes it is. It redirects to the Knowledge Base. Check your adblockers.

edit: huh that's weird that so many people are getting an error, it works for me across two browsers on both Linux and Windows

Tried with and without uBlock Origin in Chrome, and with no adblocker in Edge, and it's black & white with the "learn more" text but no link. IE renders it correctly as a link.

Edit: Sorry for being off-topic, but here's a comparison - https://postimg.org/image/fhb9vlmt1/

I don't have any adblockers. I'm using Firefox. It's not a hyperlink for me, it's plain text.

Not a link for me either. Even with uBlock and Ghostery turned off.

Chrome - Mac OS X Sierra

Just tried with vanilla Chrome and it's still not a link.

Can you put it on vimeo?


This tweet has a GIF of the original YT video: https://twitter.com/misc0110/status/948706387491786752

Besides the LWN article Notes from the Intelpocalypse [1], this is the best high-level overview I've seen for now. Here are a few quotes (emphasis mine):

Am I affected by the bug?

Most certainly, yes.

Can I detect if someone has exploited Meltdown or Spectre against me?

Probably not. The exploitation does not leave any traces in traditional log files.

What can be leaked?

If your system is affected, our proof-of-concept exploit can read the memory content of your computer. This may include passwords and sensitive data stored on the system.

Which systems are affected by Meltdown?

Desktop, Laptop, and Cloud computers may be affected by Meltdown. More technically, every Intel processor which implements out-of-order execution is potentially affected, which is effectively every processor since 1995 (except Intel Itanium and Intel Atom before 2013). We successfully tested Meltdown on Intel processor generations released as early as 2011. Currently, we have only verified Meltdown on Intel processors. At the moment, it is unclear whether ARM and AMD processors are also affected by Meltdown.

Which systems are affected by Spectre?

Almost every system is affected by Spectre: Desktops, Laptops, Cloud Servers, as well as Smartphones. More specifically, all modern processors capable of keeping many instructions in flight are potentially vulnerable. In particular, we have verified Spectre on Intel, AMD, and ARM processors.

Which cloud providers are affected by Meltdown?

Cloud providers which use Intel CPUs and Xen PV as virtualization without having patches applied. Furthermore, cloud providers without real hardware virtualization, relying on containers that share one kernel, such as Docker, LXC, or OpenVZ are affected.

Why is it called Meltdown?

The bug basically melts security boundaries which are normally enforced by the hardware.

Why is it called Spectre?

The name is based on the root cause, speculative execution. As it is not easy to fix, it will haunt us for quite some time.

[1] https://lwn.net/SubscriberLink/742702/e23889188fce9f7f/

I don't see User-mode Linux, QEMU, or KVM on that list. Or am I mistaken and they are affected as well (so maybe some AMD users are safe)? Another reason I advocate that every spinup should have FDE by default, with an SSH key entry shim at initramfs. Or is this runtime CPU access, so it gets anything live? If so, that's scary.

I wonder if any of the companies that build their own ARM cores are not vulnerable?

I understand this "isn't the point" etc. For curiosity's sake: how practical are these exploits? From my reading, it seems using them in an attack against a sophisticated target is incredibly difficult, and against an unsophisticated target is still probably prohibitively difficult? I'm not an engineer so please pardon the ignorance.

Based on the description here: https://googleprojectzero.blogspot.com/2018/01/reading-privi...

These attacks are very practical: unprivileged code, doing nothing inherently wrong (so very hard to detect), reading the contents of arbitrary memory locations at a rate of about 2000 bytes per second.

Another blog somewhere (I no longer have the URL) indicated that the Google group that helped discover the issue also managed to read arbitrary, normally inaccessible, memory locations from Chrome by running JavaScript. Which makes the attack feasible remotely using nothing but a properly programmed malicious website.

In all fairness, they disabled SharedArrayBuffer and diluted clock precision, so afaik the JS attack vector can be eliminated for patched browsers (FF did the same).

I don't think this is enough. I think this just mitigates the ability to use high precision to determine cache misses on array accesses. The leaked data is still there and the paper says it's one of 64 possibilities. Enabling site isolation helps since it's only process memory. Someone correct me if I misunderstand.

EDIT: Also, just confirmed via https://jsfiddle.net/5n6poqjd/ that only FF has SharedArrayBuffer disabled. Chrome isn't going to ship version 64, which has SharedArrayBuffer disabled, for a couple of weeks.

The SAB and clock changes are all to reduce timer precision. Reducing timer precision doesn't _fix_ the problem, but does reduce the rate of the data leak, which means it takes longer to exfiltrate the same information.

Whether this is sufficient mitigation depends on your threat model and the exact amount of rate reduction. For example, a leak of 1 byte per hour might be OK in some threat models but not others.
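A quick back-of-the-envelope calculation makes this concrete, using the ~2000 bytes/s figure quoted elsewhere in the thread and the hypothetical 1 byte/hour rate:

```python
# Exfiltration time scales linearly with the leak rate, so degrading the
# timer buys time but never closes the channel.
def seconds_to_leak(num_bytes, bytes_per_second):
    return num_bytes / bytes_per_second

key = 32  # bytes in a 256-bit key
print(seconds_to_leak(key, 2000))      # 0.016 s at the PoC's ~2 KB/s
print(seconds_to_leak(key, 1 / 3600))  # ~115200 s (~32 hours) at 1 byte/hour
```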

Wasn‘t aware, thx!

A nice little gif from twitter: https://twitter.com/misc0110/status/948706387491786752

User typing into a "secure" Linux (edit: was MacOS - can we do strikethrough here?) password dialog while another process steals the data.

The first attack was incredibly difficult, but once there's a proof of concept, copies will proliferate.

That's Gnome, not MacOS ;)

That demo is Linux, not MacOS: "Full Ubuntu was running with Firefox in the background. [...] it is only a demo app (a simple GTK application on Linux)."

That looks like linux, not mac os.

But still.

Meltdown (which so far is proven to impact Intel whereas AMD's design appears to negate it pretty much entirely) is catastrophically practical to use. Meltdown can read kernel memory at a rate of around 503 KB/s according to the paper. Not enough for a RAM dump probably, but if you can generally figure out what you want and where to look you've got plenty of bandwidth to dump it. All you need is code execution at any privilege level and an Intel host without the kernel workaround (so that's, what, ~95% of servers/desktops in the world currently?).

Spectre appears far less significant in practice. The primary exploit target seems to be via JavaScript to snoop on the hosting browser process (aka, running untrusted code in your own process just became borderline impossible). If you're in your own process, though, being able to read other processes' memory appears non-trivial. There wasn't any example of actually doing this in the paper, but it was speculated to be possible. The example code has the attacking code and the victim code in the same process (aka, you could also just get the secret by reading it directly; the "attacker" had full read access to the memory to begin with).

It was speculated that this could be done via IPC to get another process to inadvertently leak, but there was no PoC of this and given how much overhead (in terms of CPU time/instructions) there is in all forms of IPC I'm not sure how realistic that actually is in practice.

Thank you for this very detailed reply. Re: Meltdown - It would still take quite some effort to craft a series of events that would result in significant access to say, a banking or government system (the crux being the code execution)?

If your bank or govt system uses AWS or any other cloud shared hosting provider it's probably quite possible to get code on the same physical machine.

Depends on the browser. Chrome has mitigations, but not solutions to this, which largely consist of degrading the quality of the timer required to pull off Spectre. WebWorkers give you a more accurate timing method that can pull it off. Basically, any website can now theoretically read your encryption key in RAM and something like noscript will be required, not optional for maximum safety.

What you need are some use cases. Imagine going to a website that had some JS that could read memory of that browser process. I'm not sure what's in Chrome's memory and the process is shared (without site isolation enabled) by iframes and referring pages and what not IIRC.

But basically, imagine you could go to a certain website and they could open bankofamerica.com in a hidden frame and if you were logged in, they could possibly find spots in memory that had that site's information. Or Google, or Facebook, or whatever. Could be worse depending upon how Chrome's password manager stores passwords in memory.

It looked like one of the proofs of concept was using javascript code to escape the sandbox and read memory from the browser's address space. So that seems to be quite a realistic attack vector, right?

Pleased to see a growing set of links at the bottom for the relevant information for different OSes, etc...

However, I'm somewhat surprised that Ubuntu isn't present. I've done a fair bit of searching today and have only found that the patches are in progress, which surprises me a bit given the disclosure timeline that appears to be in place, and that SUSE and Red Hat seem to have patches in place and ready to go.

* EDIT * - since posting this, there is now an Ubuntu link present, and an explanation that they were expecting disclosure on Jan 9th.

The linked entry states:

The original coordinated disclosure date was planned for January 9 and we have been driving toward that date to release fixes. Due to the early disclosure, we are trying to accelerate the release, but we don't yet have an earlier ETA when the updates will be released. We will release Ubuntu Security Notices when the updates are available.

SUSE Enterprise and Red Hat Enterprise had no problem releasing updates for their kernels.

That is a little surprising. Definitely making me re-evaluate my choice of local Linux.

This was supposed to release on January 9th, but Google pulled the trigger earlier due to community members putting the pieces together and starting to draw press attention. It isn't very fair to blame Ubuntu for being behind.

It’s not just that. I’ve been experimenting with different distros a lot this year (Fedora, Mint and Ubuntu for committed stretches) and Fedora is the only one where the machine didn’t regularly hang for some reason after a few days. Was happening multiple times a day with Ubuntu and currently much less often with Mint. Never happened at all with Fedora though.

Between that and Red Hat's promptness on this, it's just giving me more confidence in that side of the house.

Practical questions for ordinary users: as I understand it, a malicious actor could inject JavaScript into a web site which could read all my computer's memory. To read, for example, my bank credentials, would they have to be sitting in memory at the time the malicious code was executed? Could I mitigate this risk by, for example, quitting my browser and relaunching it after visiting my bank? Or will my unencrypted password still potentially be lingering somewhere in memory? Could vendors provide a "memory cleaner" that I could use between web site visits?

Typical allocator free() calls don't actually overwrite the memory, meaning that the data stays there until it happens to get overwritten by a future process, which is an unpredictable event. When I was doing research on the cold boot attacks I did a lot of "strings /dev/mem" and would notice stuff related to things I was working on several days ago. As our cold boot paper described, this could sometimes include things that you were doing even before the computer was last rebooted.

There is a really nice paper on this particular topic


which describes some useful countermeasures which haven't been widely implemented. If they had been, they could have somewhat reduced the impact of opportunistic exploitation of memory disclosure vulnerabilities.

Mostly OT, but is there an explanation for why the "Meltdown" vulnerability was named specifically as such, I mean besides the term's meaning when it comes to "melting" security barriers? Coincidentally, "meltdown" is currently a very popular term to describe reactions/revelations re: a soon-to-be-released politics book titled "Fire and Fury" [0]. I guess yet another example of "Naming things" being one of the hard problems.

[0] https://www.publishersweekly.com/pw/by-topic/industry-news/p...


> Why is it called Meltdown?

> The bug basically melts security boundaries which are normally enforced by the hardware.

Not a bug according to Intel.

The term is ominous because it is the common description of what happens when the core of a nuclear reactor overheats and melts. The theoretical result is a molten puddle of fissioning material that melts through the containment vessel then continues down into the earth.

It melts down stuff implemented on bare metal?

What does this have to do with politics? 'Meltdown' is a very common term.

Right, that's the problem. Doing a social media search for "meltdown" brings up irrelevant (to me) messages about the purported Trump vs Bannon feud, nevermind anything else in which "meltdown" is used to describe celebrity drama (i.e. every reality TV episode wrapup ever).

Compare to "Heartbleed" and "Sandworm", which while at the time had a little mocking for being a bit too polished/branded [0], at least had the benefit of being relatively scoped. "Heartbleed" seems to have very few collisions period as a noun/proper noun [1]. And while "Sandworm" will perpetually collide with sci-fi fans of Dune (a subgroup that likely overlaps with security research), discussion of Dune's sandworms won't be trending on a weekly/daily cycle.

[0] http://www.zdnet.com/article/the-branded-bug-meet-the-people...

[1] https://en.wikipedia.org/wiki/Heartbleed_(disambiguation)

I've been reading up a bit on these attacks and I was wondering if there are any particular requirements to implement them in an arbitrary language?

For example, can you implement the attack with Java but without JNI? i.e. are syscalls required to be able to leverage the exploit?

Would cable modems or wireless routers be vulnerable? My router's specification page does not list what CPU it uses. Haven't checked on my modem yet.

In order to be vulnerable, a system needs to run malicious code. This is an especially big deal in the context of virtual machines in shared hosting, because you have no idea what code is running in other virtual machines on the same host. It also impacts end-user systems because the web is basically 100% untrusted code.

Even if the cpu in your router is vulnerable, what untrusted code is it expected to run?

None of this is to say it's not something that should be fixed. But it's low priority, as it requires the ability to execute code remotely already.

Ah, thanks. :)

I don't think those use x86 chips so they should be OK.

Everything is vulnerable to Spectre.

Not everything. In-order CPUs such as Cortex-A7 and Cortex-A53 are not affected. Confirmed by Arm at https://developer.arm.com/support/security-update "all other Arm cores are NOT affected"

Doesn’t Spectre just require speculative fetching? In which case, in-order execution is not sufficient to guarantee immunity.

It’s Meltdown that relies on out of order execution.



From the Ubuntu wiki, it's clear that someone broke the coordinated release date of January 9th. Does anyone have more insights on why the news broke too soon?


From what I recall, patches started showing up in the Linux Kernel, causing lots of buzz and speculation among the security community.

People noticed that one patch was only being applied to Intel processors, leading some tech blogs to speculate that only Intel processors were affected by the bug, and that triggered a PR response from Intel. Then someone reverse-engineered an exploit and posted about it on Twitter, and the cat was out of the bag at that point, so they moved up the disclosure timeline.

And it's a good thing they did that because a motivated black hat with the correct skill set most likely did, too. Not to mention nation state cyber ops.

IIRC, the Google Project Zero folks went ahead with the announcement early, because what had already leaked / been speculated, was so close to the truth. Apparently a bunch of smart people were paying attention to a recent set of kernel patches, and starting putting "two and two together" relatively quickly, based on the contents of the patches.

https://meltdownattack.com/meltdown.pdf https://eprint.iacr.org/2013/448.pdf

TLDR: A userland process's read access to ring 0 memory will throw an exception (n.b.: kernel-mode memory is actually mapped into the process's address space), but before that, the instruction reading the memory is actually executed and the data is cached. The process can use the value of that data as an address in userland for another read instruction. Now the process just needs to check the range of possible addresses where the data could have landed and see how long it takes (using rdtsc) to access them: if it's quick, then we have a match.

Is that correct, or am I missing something? e: write changed to 2nd read
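That's the gist. Here is a toy model of the information flow just described; it's an assumption-laden sketch in which the "cache" is a set and the rdtsc latency measurement is replaced by set membership, so only the logic is modeled, not the microarchitecture.

```python
# Toy Meltdown: a faulting read of privileged memory still deposits a
# secret-dependent cache line, which the attacker then probes to recover
# the byte. One probe slot per page so slots don't share a cache line.
LINE = 4096

def transient_read(kernel_memory, addr, probe_base, cache):
    secret = kernel_memory[addr]           # architecturally this read faults...
    cache.add(probe_base + secret * LINE)  # ...but it leaves a cache trace
    raise PermissionError                  # the fault arrives after the access

def recover_byte(kernel_memory, addr, probe_base=0):
    cache = set()
    try:
        transient_read(kernel_memory, addr, probe_base, cache)
    except PermissionError:
        pass                               # attacker suppresses/handles the fault
    # Probe all 256 slots; the "fast" (cached) one reveals the secret byte.
    return next(b for b in range(256) if probe_base + b * LINE in cache)

kernel_memory = b"hunter2"                 # stand-in for privileged memory
leaked = bytes(recover_byte(kernel_memory, i) for i in range(len(kernel_memory)))
print(leaked)                              # -> b'hunter2'
```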

How does this affect the end-user security of the apps we use every day, like Gmail, Twitter, GitHub, Stripe, etc.?

All your passwords, are belong to the Internet

So far the possibility of JavaScript attacks looks like the biggest concern for normal users (assuming you don't install software from fishy sources, of course). Cloud and virtualization service providers are under the biggest threat.

If I understand it correctly (not entirely sure), an adversary needs to have code running on a system to exploit either attack - would Javascript running in a browser be in principle sufficient to be able to exploit this, or are there some mitigations (sandboxes?) in place to prevent this from happening?

Edit: Okay, found this post from yesterday: https://blog.mozilla.org/security/2018/01/03/mitigations-lan...

Thanks for the reply!

From the Spectre paper:

> As a proof-of-concept, JavaScript code was written that, when run in the Google Chrome browser, allows JavaScript to read private memory from the process in which it runs (cf. Listing 2).


Yes, JavaScript is sufficient.

I think Spectre is theoretically possible with only untrusted data (not untrusted code).

It would need an existing trusted program that has a branch-predicting loop like the one in the paper. The attacker would feed untrusted data into the loop and then observe secrets through cache timing.

It's weird code, so it's probably unlikely an existing program would have it, but it's not outside the realm of possibility. The attacker would also need a high-precision way to measure the timing of the operation, which also might be hard to find in an existing program, but not impossible.

Just something else to keep you awake at night!
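For reference, the "branch-predicting loop like the one in the paper" is roughly this shape, transcribed from the paper's C into Python (the stride value here is illustrative). Python obviously can't trigger real speculation; this only shows the code pattern an attacker would look for in an existing trusted program:

```python
# Roughly the vulnerable Spectre pattern: if x is attacker-controlled and
# the CPU mispredicts the bounds check, it speculatively reads array1[x]
# out of bounds and encodes that byte into cache state via the dependent
# array2 access, which the attacker later recovers by timing.

STRIDE = 512  # spreads each possible byte value onto its own cache line

def victim(x, array1, array2):
    if x < len(array1):                    # the mispredictable branch
        return array2[array1[x] * STRIDE]  # leaks array1[x] via cache state
    return None

array1 = [1, 2, 3, 4]
array2 = [0] * (256 * STRIDE)
print(victim(2, array1, array2))  # in-bounds call, returns 0
```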

They need code execution to observe that cache timing, though, and they need to prime the branch predictor and cache.

I think the branch predictor can be primed with well-crafted data? (start off with a long series of 0s for X)

For the other aspects like getting the cache timing, that does seem harder. I don't really know if it's feasible in practice.
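The "long series of 0s" idea corresponds to saturating the predictor toward "branch taken / in bounds" before supplying the one malicious out-of-bounds value. A toy simulation of a 2-bit saturating-counter predictor (a simplified textbook model; real predictors are far more complex) shows why that training works:

```python
# Model of predictor state only; actual speculative execution is a
# hardware effect that can't be reproduced in Python.

ARRAY_SIZE = 16

class TwoBitPredictor:
    def __init__(self):
        self.counter = 0  # 0-1: predict not-taken, 2-3: predict taken

    def predict(self):
        return self.counter >= 2

    def update(self, taken):
        self.counter = min(3, self.counter + 1) if taken else max(0, self.counter - 1)

def run(predictor, x):
    """Run the bounds check once; return whether the branch was mispredicted."""
    taken = x < ARRAY_SIZE
    mispredicted = predictor.predict() != taken
    predictor.update(taken)
    return mispredicted

p = TwoBitPredictor()
for _ in range(100):     # training phase: x = 0, always in bounds
    run(p, 0)
print(run(p, 1000))      # out-of-bounds input: prints True (mispredicted)
```

That one mispredicted check is the window in which the speculative out-of-bounds load runs.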

Yes, and one of the mitigations is to reduce the precision / accuracy of certain timer calls in Javascript.
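The idea behind that mitigation can be sketched in a few lines (the 20 microsecond figure is what Mozilla announced for performance.now(); the timestamps below are invented for illustration). Rounding timestamps down to a coarse grid makes a single hit-vs-miss difference of tens to hundreds of nanoseconds unobservable:

```python
GRANULARITY_US = 20.0  # microseconds, per Mozilla's announced mitigation

def coarsen(t_us):
    """Round a timestamp (in microseconds) down to the mitigation grid."""
    return (t_us // GRANULARITY_US) * GRANULARITY_US

# A cache hit at t=100.05us and a miss at t=100.25us become indistinguishable:
print(coarsen(100.05), coarsen(100.25))  # prints 100.0 100.0
```

Mozilla also disabled SharedArrayBuffer in the same update, since a shared-memory counter thread can be used to rebuild a high-resolution timer.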

The paper on this site is well written, explaining the method of using ROP with branch misprediction to load gadgets. For anybody interested in this: ROP, caches, and branch prediction/misprediction are all covered in the intro CS:APP book [1] and course [2].

[1]http://csapp.cs.cmu.edu/3e/labs.html [2]http://www.cs.cmu.edu/~213/schedule.html (Youtube lectures)

Just want to share a technical article about Meltdown covering things that aren't in the white paper: https://www.moesif.com/blog/technical/cpu-arch/What-Is-The-A...

Would a microcode update be possible to fix this? I assume not and that it's fundamental to the speculative execution architecture, but haven't yet seen anything that explicitly addresses the question. Presumably if it is possible then it would reduce any longer term financial consequences for Intel of the problem.

Skimming the flush+reload paper I see how timing alone is enough to get e.g. a private key if the underlying algorithms are known and prone to revealing what they're working on.

Meltdown however seems to be able to arbitrarily read memory (at about ~500 kB/s).

How does it manage to read arbitrary memory via the cache?

Are the chips used by Intel Management Engines subject to these attacks as well?

Yes, but the two have nothing to do with each other.

Does anyone know the amount of dollars involved in the bounty paid by Intel?

Marked as dupe? Seriously?

Other domain used to publish the same info: https://meltdownattack.com

How would one exploit a Google Cloud physical machine, for example? One would still need to inject/patch a file to exploit the processor, right?

Meltdown doesn't _directly_ allow modification of code. However, a guest VM could run userspace code that accesses ring 0 / kernel memory of the hypervisor / physical machine.

This would mean that security certificates, passwords, etc of the hypervisor could be exposed, which could then allow compromise.

I would also assume this means exploits on a machine provisioned with multiple VMs from different customers would allow a malicious customer to leak data from another customer's VM located on the same machine.

Well, I'm feeling mighty good about my purchase of a 2011 intel Atom chip back in the day... Now if only I knew which box it's in...

I wouldn't count on that to be enough. The Atom still speculatively issues and begins execution on instructions past branches and a quick glance at the pipeline[1] makes me think it can start on loads before the branch resolves, the same way an ARM A8 can despite both being in order.


Admins: why did you take this down? The previous submission didn't get any attention and this had some good discussion.

Which one did OSX partially protect against? Also, which one allows JS execution to activate the attack?

I don’t know what the OSX protection was, but that’s probably Meltdown. Meltdown = using native code to read kernel/system memory.

Spectre is the one that can be done with JS.

The youtube video says:

"This video has been removed for violating YouTube's policy on spam, deceptive practices, and scams."



We've already asked you to please not post dumb comments. Would you stop?


There is value in being able to package up the issue into something that is understandable and presentable to a larger non-technical audience. If a "cutesy logo" increases interest and engagement from a larger non-technical community, I'll take it.

Research has concluded that naming storms/hurricanes results in a much better public response. People are more likely to take preventive action against "Storm Eleanor" than against an unnamed bad-weather report, plus a name allows people to differentiate simultaneous events.

In the same way, it is not far-fetched to think that a name and logo are much more likely to convince users to quickly apply patches etc. than just CVE-2017-5753, CVE-2017-5715 & CVE-2017-5754. Plus, they are way easier to remember!

Very much agree. The other benefit that I see from the branding is that it serves as a bulwark against the (mis)information that will appear from those who might have a vested interest to downplay it (eg: intel).

"Can I use the logo? Both the Meltdown and Spectre logo are free to use, rights waived via CC0. Logos are designed by Natascha Eibl."

I'm sure she would love to hear your opinion personally. Site - https://vividfox.me/ Email - mail@vividfox.me
