Hacker News

Spectre is fundamental to processor design, but Meltdown is pretty much a bug.



It is only a bug once you have discovered that speculative execution can be used as an attack vector. Until then, it was a perfectly valid design decision.


It was a bug right from the original design, because it sidestepped the privilege level separation guarantees of the CPU. That the bug could not be exploited for other reasons does not make it not-a-bug.


What actual data can be extracted by a Meltdown/Spectre attack? I still haven't found an answer to that; nothing online says anything specific.

Datacenters should probably be worried, but what about the hundreds of millions of users out there? Doesn't seem like a big deal, tbh - until an actual exploit is out there, why should they worry?


Meltdown allows userland native code (the Javascript your browser loads from random websites is JIT'd down to native code) to dump kernel memory.


It is worth clarifying that when people talk about "kernel memory" on x86-64, it really means all of memory, because all of physical memory is mapped into the kernel's address space. So really, Meltdown allows userland code to read anything in memory.


Incorrect. When people talk about kernel memory, they are talking about pages marked as supervisor in the page tables for a particular process. That is not "anything in memory."


Meltdown allows applications to read any mapped pages, regardless of the protection bits on those pages. That mainly means kernel memory, which is the only page set that's normally unreadable. The kernel mapping normally includes all of physical memory.


I think you missed the "all of physical memory is mapped into the kernel's address space" part.

For a typical kernel without Meltdown mitigations, the entire kernel, including that window into all of physical memory, is in the page tables of every 64-bit process at all times.


How do people find the addresses to even start probing for stuff like this?


Theoretically, any application data stored in memory. An example might be a key used to encrypt user data server-side. The scary part of this attack is how it basically breaks the assumptions that most software was written under around memory isolation. Consider an AWS instance that shares a physical system with a different customer’s instance.

Amazon, Google, Microsoft, and basically everyone else is scrambling to fix these issues because of the potential. Basically, it’s going to take years to cover the long tail for this issue, and waiting until exploit kits are commonplace isn’t necessary to understand the potential impact.


The page from the researchers is your best source: https://meltdownattack.com/. I also recommend reading the actual papers (https://meltdownattack.com/meltdown.pdf and https://spectreattack.com/spectre.pdf). I haven't gotten to the Spectre paper yet, but the Meltdown paper is excellent, readable, and clear.


I don’t think it’s fair to call Meltdown a bug. It’s like LLVM aggressively taking advantage of undefined behavior in C. Processors, like compilers, are designed making assumptions about what they owe the user and what they don’t. The architecture manuals promise that a user space read from protected kernel memory will trigger a page fault. They don’t make any other promises.


Not calling it a bug is ridiculous; it breaks x86 memory protection entirely.


What does “x86 memory protection” mean? What promises does the hardware make to the software?


That software in ring 3 cannot read memory in ring 0 unless the appropriate permission bits on the page table entries are set.


I don't think the spec actually makes that guarantee anywhere. It says that a page fault will be generated if a memory access violates page protection bits, but doesn't discuss any other potential side effects.

The closest thing I can find is the Intel 64 and IA-32 Architectures Software Developer's Manual, Volume 3, Section 5.1.1:

> With page-level protection (as with segment-level protection) each memory reference is checked to verify that protection checks are satisfied. All checks are made before the memory cycle is started, and any violation prevents the cycle from starting and results in a page-fault exception being generated. Because checks are performed in parallel with address translation, there is no performance penalty.

If you read section 11 on caching, the terminology "memory cycle" seems to exclude cache access. Indeed, Volume 3, Section 11.7 explicitly warns that implicit caching might happen that you would not expect:

> Implicit caching occurs when a memory element is made potentially cacheable, although the element may never have been accessed in the normal von Neumann sequence. Implicit caching occurs on the P6 and more recent processor families due to aggressive prefetching, branch prediction, and TLB miss handling.


The spec isn't really relevant to the point I'm making. I'm talking about what the purpose of the design is. Memory protection as a feature exists in order to (among other things) prevent code in ring 3 from reading data in ring 0 that ring 0 has not explicitly granted permission to read. If the implementation fails to do that, then it's failed to achieve its goal, and the whole exercise is pointless—why include the silicon at all? Documented bugs are still bugs.

Again to use the crypto analogy: An implementation of, say, RSA that uses timing-sensitive memcmp to compare signatures would follow the RSA specification. But everyone would agree that such software has a severe bug.
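The memcmp analogy is concrete: an early-exit compare leaks, through its run time, how many leading bytes of an attacker's guess are correct, while a constant-time compare touches every byte and branches only once at the end. A sketch of both (names like `tag_compare` are illustrative, not from any particular library):

```c
#include <stddef.h>
#include <stdint.h>

/* Timing-sensitive: returns at the first mismatching byte, so the
   run time reveals the length of the matching prefix. */
static int leaky_compare(const uint8_t *a, const uint8_t *b, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i]) return 0;
    return 1;
}

/* Constant-time: OR-accumulates the XOR of every byte pair and
   tests the accumulator only after the loop finishes. */
static int tag_compare(const uint8_t *a, const uint8_t *b, size_t n) {
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}
```

Real implementations (e.g. OpenSSL's CRYPTO_memcmp) follow the same accumulate-then-test pattern. Both functions return the same answers; only the timing differs, and that's exactly the kind of "correct per the spec, broken in practice" behavior being argued about here.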


Meltdown is a bug in the same sense that a crypto routine that uses a timing-sensitive memcmp has a bug. Memory protection is designed to prevent ring 3 code from reading memory in ring 0 (assuming the appropriate page tables are set). The implementation fails to do that.

The fact that speculative reads don't check the permission bits is arguably a design bug, not an implementation bug, but I'd still call it a bug.



