The game has changed forever, and CPU designers across the industry will likely have to scrap some or all of OoOE, setting performance back considerably. It could take years to regain the performance we're used to today at the required security level.
I think this is doable by adding a buffer to store cache entries evicted by speculative execution, so they can be put back, and a log of newly added cache entries, so they can be removed on abort.
Also, the abort process itself needs to take a constant amount of time out of the critical path to avoid leaking info via timing (the timing then only depends on how fast the CPU determines it speculated wrong, but that doesn't depend on the speculated code).
EDIT: Hyperthreading also needs to be handled somehow, which other than disabling it might require adding a per-thread "L0" cache and using that exclusively for speculatively executed code.
There is still the problem of theoretical plain timing attacks on normally executed code though, but that's unrelated.
Making sure that the indirect branch predictor only makes predictions when the source address fully matches would be nice as well (although theoretically not required if there are no side effects).
Not an expert, but if speculative execution has no side effects, not even on timing, then you might as well disable it.
Speculative execution is meant to speed up computations.
The approach Mill Computing is taking is to have the compiler toolchain handle all instruction scheduling for their new design. All instructions have a fixed, known latency (in terms of clocks). And the processor doesn't have much in the way of global state (like condition code registers and such). All metadata is stored with the data on the 'belt' (replacement for registers) itself, which is saved / restored upon context switch.
They are also years away from silicon it seems.
The Mill has many other aspects that improve the security story. All protection is based on memory address, for example. And pointers are not integers. There are reserved portions of the address space that programs are completely not allowed to touch.
No, P=NP does not immediately break crypto, because that relation says nothing about whether the polynomial-time algorithms for problems we thought were outside P are easy to find, or efficient in practice. Just because the problem your adversary has to break is in P doesn't make it easy to break.
It involves iterating through and running all possible programs ordered by length, giving them an increasing amount of time to run and checking (in polynomial time) if they produce the correct answer.
If there exists a program that finds the answer in polynomial time, this algorithm will eventually find it after wasting (at most polynomial) time iterating through other programs and tentative bounds.
If P != NP, it's still at least asymptotically optimal.
Of course, the constant factor is astronomically impractical.
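To make the idea above concrete, here is a toy sketch of Levin-style universal search in Python. It's hedged heavily: a real universal search dovetails over an enumeration of *all* programs by length; here a small hand-picked list of generator functions (`spin`, `count_up`, both invented for this example) stands in for that enumeration, and the verifier is a simple predicate.

```python
def run_with_budget(prog, budget, verify):
    """Restart candidate 'program' prog (a zero-arg callable returning a
    generator) and advance it at most `budget` steps, verifying each
    candidate answer it yields. Return the first verified answer, else None."""
    gen = prog()
    for _ in range(budget):
        try:
            cand = next(gen)
        except StopIteration:
            return None
        if verify(cand):
            return cand
    return None

def universal_search(programs, verify, max_phase=20):
    """Dovetailing: in phase t, program i gets a budget of 2**(t - i) steps,
    so earlier (shorter) programs get exponentially more time. The total
    time wasted on wrong programs stays within a constant factor of the
    time the fastest correct program needs."""
    for phase in range(1, max_phase + 1):
        for i, prog in enumerate(programs[:phase]):
            found = run_with_budget(prog, 2 ** (phase - i), verify)
            if found is not None:
                return found
    return None

# Demo problem: find a nontrivial divisor of 91.
def count_up():          # a "correct program": tries 2, 3, 4, ...
    n = 2
    while True:
        yield n
        n += 1

def spin():              # a "wrong program" that wastes its budget
    while True:
        yield None

answer = universal_search(
    [spin, count_up],
    lambda n: n is not None and n > 1 and 91 % n == 0,
)  # finds 7
```

The astronomically impractical constant the comment mentions shows up here as the `2 ** (t - i)` budgets: a correct program buried deep in the real enumeration only gets meaningful time after exponentially many phases.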
Kernel flag at boot time? - No (the fix is to compile instructions differently).
Bare metal servers that only run trusted code are only unaffected so long as they explicitly opt out of the new security model, and I'm not sure how easy that'll be...
Please talk to management. Because I really see exactly two possibilities:
- Intel never intends to fix anything
- these workarounds should have a way to disable them.
Which of the two is it?
I have some old laptops at home and they work quite well with a proper Linux setup.
Should we think in terms of slowing everything down by some factor, or just slowing tasks that need an un-throttled CPU?
That said... Branch prediction and TLBs have been ubiquitous since about 1975. We're seeing vulnerabilities in ARM processors with unit costs of a tenth of a cent. We don't have to go that far back; if nothing else our toolchains and manufacturing techniques are far better. But if we're forced to discard speculative execution as a concept, which I think may be necessary to entirely prevent side channels, we'll be going way, way far back. Further back than the Pentium 4.
Fundamentally speaking, the power of out-of-order execution is not one of engineering. It's not like object-oriented programming or version control in that it simply makes it easier to engineer fast processors. It is more powerful on a fundamental level. And there's been so much theoretical and practical work put into it over the decades that it may take more decades to bring a different mathematical formalism up to the same level of development.
Let's just say that I really hope we don't have to throw out out-of-order execution. It'd suck. Hard.
Do you have any reason to believe this will be the case?
> Further back than the Pentium 4
This is a bad example.
The Pentium 4's pitfall was a high clock speed with little concurrency, and thus a big heat problem that limited peak computational power.
Yes, we lose some concurrency without branch prediction, but post-P4 multi-core processors would have to take some very major hits to be comparable to the P4 in a modern multi-threaded computing environment.
The P4 is just not a good reference point for old processors because of its architectural differences.
If passing data involves frequent switches between processes, then, yes, I see there would be a problem with syncing on every process switch. I think syncing at longer intervals would solve that problem. All processes could share the same fake time, but see the fake time skip a small fixed amount at fixed regular intervals. That fixed amount of fake time would take a variable amount of real time, which absorbs the small variations in real execution time.
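The fake-time scheme described above can be sketched as a quantized clock: processes only ever see time advance in fixed jumps, so real-time variations smaller than one quantum are invisible. This is a minimal illustration, not a claim about any real kernel mechanism; the function name and the deterministic stand-in clock are both invented for the example.

```python
def make_fuzzy_clock(real_clock, quantum):
    """Wrap `real_clock` (a zero-arg callable returning a number) so that
    callers only see elapsed time rounded down to a multiple of `quantum`.
    Variations in real execution time below one quantum are absorbed."""
    start = real_clock()
    def fuzzy_now():
        elapsed = real_clock() - start
        return (elapsed // quantum) * quantum
    return fuzzy_now

# Example with a deterministic fake clock (times in arbitrary units):
ticks = iter([0, 0, 3, 99, 100, 205])
fuzzy = make_fuzzy_clock(lambda: next(ticks), 100)
readings = [fuzzy() for _ in range(5)]  # [0, 0, 0, 100, 200]
```

The trade-off the comment identifies is visible here: a bigger `quantum` hides more timing variation but makes the fake clock less useful for legitimate timing.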
The Cell processor was never used in any other consoles, and the IBM Roadrunner supercomputer using them ended up being replaced by Cray+AMD hardware.
Our current programming paradigms don't adapt well to massive parallelism, except for some small classes of "embarrassingly parallel" problems (e.g., make -jN).
The downside (or upside) is that they're designed for running Forth, and are not x86 compatible.
My guess on Phi is that it wasn't a big win over fewer deeper cores and so there was not much market. This might change things a bit.
Edit: the video has been restored.
edit: huh that's weird that so many people are getting an error, it works for me across two browsers on both Linux and Windows
Edit: Sorry for being off-topic, but here's a comparison - https://postimg.org/image/fhb9vlmt1/
Chrome - Mac OS X Sierra
Am I affected by the bug?
Most certainly, yes.
Can I detect if someone has exploited Meltdown or Spectre against me?
Probably not. The exploitation does not leave any traces in traditional log files.
What can be leaked?
If your system is affected, our proof-of-concept exploit can read the memory content of your computer. This may include passwords and sensitive data stored on the system.
Which systems are affected by Meltdown?
Desktop, Laptop, and Cloud computers may be affected by Meltdown. More technically, every Intel processor which implements out-of-order execution is potentially affected, which is effectively every processor since 1995 (except Intel Itanium and Intel Atom before 2013). We successfully tested Meltdown on Intel processor generations released as early as 2011. Currently, we have only verified Meltdown on Intel processors. At the moment, it is unclear whether ARM and AMD processors are also affected by Meltdown.
Which systems are affected by Spectre?
Almost every system is affected by Spectre: Desktops, Laptops, Cloud Servers, as well as Smartphones. More specifically, all modern processors capable of keeping many instructions in flight are potentially vulnerable. In particular, we have verified Spectre on Intel, AMD, and ARM processors.
Which cloud providers are affected by Meltdown?
Cloud providers which use Intel CPUs and Xen PV as virtualization without having patches applied. Furthermore, cloud providers without real hardware virtualization, relying on containers that share one kernel, such as Docker, LXC, or OpenVZ are affected.
Why is it called Meltdown?
The bug basically melts security boundaries which are normally enforced by the hardware.
Why is it called Spectre?
The name is based on the root cause, speculative execution. As it is not easy to fix, it will haunt us for quite some time.
These attacks are very practical. Unprivileged code, doing nothing inherently wrong (so very hard to detect), reading the contents of arbitrary memory locations at a rate of about 2000 bytes per second.
EDIT: Also, just confirmed via https://jsfiddle.net/5n6poqjd/ that only FF has SharedArrayBuffer disabled. Chrome 64, which disables SharedArrayBuffer, isn't going to be released for a couple of weeks.
Whether this is sufficient mitigation depends on your threat model and the exact amount of rate reduction. For example, a leak of 1 byte per hour might be OK in some threat models but not others.
User typing into a "secure" Linux (edit: was MacOS - can we do strikethrough here?) password dialog while another process steals the data.
The first attack was incredibly difficult, but once there's a proof of concept, copies will proliferate.
It was speculated that this could be done via IPC to get another process to inadvertently leak, but there was no PoC of this and given how much overhead (in terms of CPU time/instructions) there is in all forms of IPC I'm not sure how realistic that actually is in practice.
But basically, imagine you could go to a certain website and they could open bankofamerica.com in a hidden frame and if you were logged in, they could possibly find spots in memory that had that site's information. Or Google, or Facebook, or whatever. Could be worse depending upon how Chrome's password manager stores passwords in memory.
However, somewhat surprised that Ubuntu isn't present - done a fair bit of searching today and have only found that the patches are in progress, which I'm a bit surprised about given the disclosure timeline that appears to be in place, and that SuSE and Red Hat seem to have patches in place and ready to go?
* EDIT * - since posting this, there is now an Ubuntu link present, and an explanation that they were expecting disclosure on Jan 9th.
The original coordinated disclosure date was planned for January 9 and we have been driving toward that date to release fixes. Due to the early disclosure, we are trying to accelerate the release, but we don't yet have an earlier ETA when the updates will be released. We will release Ubuntu Security Notices when the updates are available.
Status page for the fix.
Between that and Red Hat's promptness on this, it's giving me more confidence in that side of the house.
There is a really nice paper on this particular topic
which describes some useful countermeasures which haven't been widely implemented. If they had been, they could have somewhat reduced the impact of opportunistic exploitation of memory disclosure vulnerabilities.
> Why is it called Meltdown?
> The bug basically melts security boundaries which are normally enforced by the hardware.
Compare to "Heartbleed" and "Sandworm", which, while drawing a little mocking at the time for being a bit too polished/branded, at least had the benefit of being relatively scoped. "Heartbleed" seems to have very few collisions, period, as a noun/proper noun. And while "Sandworm" will perpetually collide with sci-fi fans of Dune (a subgroup that likely overlaps with security researchers), discussion of Dune's sandworms won't be trending on a weekly/daily cycle.
For example, can you implement the attack with Java but without JNI? i.e. are syscalls required to be able to leverage the exploit?
Even if the cpu in your router is vulnerable, what untrusted code is it expected to run?
None of this is to say it's not something that should be fixed. But it's low priority, as it requires the ability to execute code remotely already.
It’s Meltdown that relies on out of order execution.
People noticed that one patch was only being applied to Intel processors, leading some tech blogs to speculate that only Intel processors were affected by the bug, and that triggered a PR response from Intel. Then someone reverse-engineered an exploit and posted about it on Twitter, and the cat was out of the bag at that point, so they moved up the disclosure timeline.
TLDR: A userland process's read of Ring 0 memory will throw an exception (n.b.: kernel-mode memory is actually mapped into the process's address space), but before that the instruction reading the memory is speculatively executed and the data is cached. The process can use the value of that data as an address in userland for another read instruction. Then the process just checks the range of possible addresses the second read could have touched and measures (using rdtsc) how long each takes to access; if it's quick, we have a match.
Is that correct, or am I missing something? e: write changed to 2nd read
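The TLDR above looks right to me. Here's a runnable sketch of just the encode/decode logic, with the two hard-to-reproduce parts simulated: a toy cache object stands in for real cache state plus rdtsc-based Flush+Reload timing, and a plain function call stands in for the out-of-order window in which the faulting load's value gets used. All names (`SimCache`, `transient_read`, `recover_byte`) are invented for this illustration.

```python
PAGE = 4096  # real PoCs use page-sized strides to sidestep the prefetcher

class SimCache:
    """Toy cache: remembers which probe-array slots were touched and
    reports a fake latency, standing in for rdtsc-based timing."""
    def __init__(self):
        self.cached = set()
    def access(self, addr):
        slot = addr // PAGE
        latency = 40 if slot in self.cached else 300  # fake cycle counts
        self.cached.add(slot)
        return latency

def transient_read(cache, secret_byte, probe_base):
    # Models the out-of-order window: the load of kernel memory will
    # fault, but before the fault retires, its value has already been
    # used as an index into attacker-controlled probe memory.
    cache.access(probe_base + secret_byte * PAGE)

def recover_byte(cache, probe_base):
    # Time all 256 probe slots; the one fast (already-cached) slot
    # reveals which value the transient load carried.
    timings = [cache.access(probe_base + b * PAGE) for b in range(256)]
    return min(range(256), key=timings.__getitem__)
```

The real attack's difficulty lives in what this sketch fakes: getting the transient load to execute before the fault, and measuring cache hits with enough timer resolution.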
Edit: Okay, found this post from yesterday: https://blog.mozilla.org/security/2018/01/03/mitigations-lan...
Thanks for the reply!
I think Spectre is theoretically possible with just untrusted data (not just untrusted code).
It would need an existing trusted program that has a branch-predicting loop like the one in the paper. The attacker would feed untrusted data into the loop and then observe secrets through cache timing.
It's weird code, so it's probably unlikely an existing program would have it, but it's not outside the realm of possibility. The attacker would also need a high-precision way to measure the timing of the operation, which also might be hard to find in an existing program, but not impossible.
Just something else to keep you awake at night!
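The "branch-predicting loop like the one in the paper" is the classic Spectre variant-1 gadget. Below is a hedged Python simulation of it: the `speculate` flag models the mispredicted-branch window (which real hardware opens after the predictor is trained on in-bounds inputs), and the probe-array indirection is collapsed into a `touch` call on a toy cache. The buffer layout and all names are invented for the example.

```python
# Victim memory layout: a 4-byte "array1" with a secret byte sitting
# just past its bounds.
memory = bytes([1, 2, 3, 4]) + b"S"
ARRAY1_SIZE = 4

class SimCache:
    """Records which byte-indexed probe slots were touched; `was_touched`
    stands in for Flush+Reload timing of one probe-array slot."""
    def __init__(self):
        self.cached = set()
    def touch(self, slot):
        self.cached.add(slot)
    def was_touched(self, slot):
        return slot in self.cached

def victim(x, cache, speculate=False):
    """The variant-1 gadget. `speculate=True` models the window where a
    trained branch predictor runs the body despite x being out of
    bounds; memory[x] then reads past array1, and the dependent
    cache access encodes that byte in microarchitectural state."""
    if x < ARRAY1_SIZE or speculate:
        byte = memory[x]    # out-of-bounds read in the transient window
        cache.touch(byte)   # dependent access leaks it via the cache

# Attacker: trigger the gadget with x one past the bounds, then probe.
cache = SimCache()
victim(ARRAY1_SIZE, cache, speculate=True)
leaked = [b for b in range(256) if cache.was_touched(b)]  # [83] == [ord("S")]
```

This also shows why the comment's caveat matters: the attacker needs a victim that already contains a bounds-checked, attacker-indexed access like `victim`, plus a way to observe the cache afterward.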
For the other aspects like getting the cache timing, that does seem harder. I don't really know if it's feasible in practice.
http://www.cs.cmu.edu/~213/schedule.html (Youtube lectures)
Meltdown however seems to be able to arbitrarily read memory (at roughly 500 kB/s).
How does this work that it can read from the cache?
This would mean that security certificates, passwords, etc of the hypervisor could be exposed, which could then allow compromise.
Spectre is the one that can be done with JS.
"This video has been removed for violating YouTube's policy on spam, deceptive practices, and scams."
In the same way, it is not far fetched to think that a name and logo are much more likely to convince users to quickly apply patches etc. than just CVE-2017-5753, CVE-2017-5715 & CVE-2017-5754. Plus, they are way easier to remember!
I'm sure she would love to hear your opinion personally.
Site - https://vividfox.me/
Email - firstname.lastname@example.org