Hacker News

So this is something that I've never gotten a full answer to: what is the difference between a "thread" and a "process" in this model?

This isn't a facetious question. A thread is just, at its core, a process that shares memory with another process. (In fact, this is how threads are implemented on Linux.) But all, or virtually all, processes also share memory with other processes. Text pages of DLLs are shared between processes. Browser processes have shared memory buffers, needed for graphics among other things.

What separates processes that share memory from threads that share memory regarding Spectre? Is it the TLB flush when switching between processes that doesn't occur between threads? Or something else?




For Meltdown (Spectre variant 3, IIRC), it's not so much sharing memory as sharing an address space. Processes have different page tables. Threads within a process share page tables.

For spectre v1 and v2, right now (on existing hardware) mostly nothing separates threads from processes. In the future, process isolation is a good candidate for designing hardware + system software such that different processes are isolated (via partitioning the caches, etc).

You probably still want threads within a process to share cache hits.


So, if that's true, why is Chrome considered to have solved Spectre? Browser content processes from different domains share some memory. Moreover, if process boundaries don't have any effect on the branch predictor on current hardware, then why is process separation relevant at all? Doesn't all this mean Spectre is still an issue?


I guess I jumped the gun a bit in my comment above.

In terms of the possibility of exploit, as I understand it, there isn't at this point any isolation between processes.

In terms of the ease of exploit, being able to run untrusted code in the same process as the victim helps quite a bit. Otherwise, you have to find a gadget (i.e., a qualifying bounds check for v1, an indirect branch for v2) in the victim process that you can exploit from the attacker process. Possible, but quite a bit harder than making your own gadget.

This all ignores the forward looking reasons process isolation is a good idea. I can't keep track of the latest mitigations in Linux, but they pretty much all will only help between processes by flushing various hardware data structures. And hopefully someday we will have hardware actually designed to restore the guarantees of isolation between processes.

I'm pretty sure this is accurate, but I'm just a random guy on the internet so don't trust my word for it too much.


It's not really about process isolation then, but about how much control untrusted code has over a process. Which means that if everything the code can do is masked to some part of the process, you should be able to achieve the same isolation between such sub-compartments while staying within OS process boundaries. Although the paper claims this is too hard.


Chromium has not fully solved Spectre. It is still too expensive to run one process per domain, so many unrelated pages run in the same process. But Chromium contains a few mitigations that make exploiting Spectre from JS much harder.


The threat model is: code triggering Spectre v1 gets read access to the entire address space currently mapped in. ^*

Since process boundaries are enforced by not mapping any RAM not usable by the process, they don't get violated by Spectre v1. If you have two threads of execution which only share part of their address space, the unshared part is protected. Any executable or library mapped into multiple processes is readable from any of them.

^*: With modern CPUs, multiple processes can be mapped in simultaneously using ASIDs, but this doesn't matter because ASIDs work as they should and properly isolate the processes. You can just assume the model "only one process is mapped at a time".


Your description implies the existence of another mitigation. Namely: When you enter untrusted code you mprotect() all sensitive areas and remove PROT_READ. When exiting the untrusted code you add the permissions back.

Are you sure that works? As I understand it, the issue with Spectre is the branch predictor, not the memory mappings. The reason why process isolation works is that branch prediction gets reset on context switch (or that this will happen on newer generations of hardware in the future).


Mprotect should in fact work, but it is likely more expensive than actual process separation. Resurrecting x86 segments or using virtualization hardware from userspace (see libdune) might be workable alternatives.

The issue is that speculation allows bypassing software-enforced bounds checking, but, discounting Meltdown, the hope is that hardware can still enforce it.


mprotect does not issue a memory barrier (mfence), so while the page is theoretically protected, the change is practically delayed and the data can still be read from the cache via side channels. The same issue applies to an unguarded bzero call. A compiler barrier alone is not safe enough to erase secrets.


Mprotect should work because even under speculation the CPU shouldn't allow a read to an invalid address to be executed. Meltdown shows that some CPUs speculate past even this sort of check, but it seems that doing so is not inherently required for a high-performance implementation.


"Text pages of DLLs are shared between processes"

I thought this wasn't possible with ASLR'd relocations all over the place in the text?


Most modern architectures make extensive use of PC-relative instructions for branches and load/store. That means when rebasing a binary you just need to modify the pointers in the data segment (things like GOT entries, etc.) and can leave the text untouched.



