This isn't a facetious question. A thread is just, at its core, a process that shares memory with another process. (In fact, this is how threads are implemented on Linux.) But all, or virtually all, processes also share memory with other processes. Text pages of DLLs are shared between processes. Browser processes have shared memory buffers, needed for graphics among other things.
What separates processes that share memory from threads that share memory regarding Spectre? Is it the TLB flush when switching between processes that doesn't occur between threads? Or something else?
For Spectre v1 and v2, right now (on existing hardware) mostly nothing separates threads from processes. Going forward, the process boundary is a good candidate for where hardware and system software should be designed to enforce isolation (e.g., by partitioning the caches).
You probably still want threads within a process to share cache hits.
In terms of the possibility of exploit, as I understand it, there isn't at this point any isolation between processes.
In terms of the ease of exploit, being able to run untrusted code in the same process as the victim helps quite a bit. Otherwise, you have to find a gadget (i.e. a qualifying bounds check for v1, an indirect branch for v2) in the victim process that you can exploit from the attacker process. Possible, but quite a bit harder than making your own gadget.
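For reference, the v1 gadget shape looks something like the sketch below, modeled on the victim function in the original Spectre paper (the array names follow that example; everything else here is hypothetical illustration, not real exploit code):

```c
#include <stddef.h>
#include <stdint.h>

/* Classic Spectre v1 gadget shape. `x` is attacker-controlled; the
 * bounds check is architecturally correct, but the CPU may speculatively
 * execute the body with an out-of-range `x` while the comparison is
 * still being resolved, leaking array1[x] into the cache state. */
size_t array1_size = 16;
uint8_t array1[16];
uint8_t array2[256 * 512]; /* probe array: one cache line per byte value */

uint8_t temp; /* keeps the loads from being optimized away */

void victim_function(size_t x) {
    if (x < array1_size) {               /* the "qualifying bounds check" */
        temp &= array2[array1[x] * 512]; /* secret-dependent load */
    }
}
```

The attack trains the branch predictor with in-bounds values of `x`, then calls with an out-of-bounds `x`; the secret byte is recovered by timing which line of `array2` became cached. The point of the comment above is that finding this exact shape in someone else's binary is much harder than just writing it yourself.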
This all ignores the forward-looking reasons process isolation is a good idea. I can't keep track of the latest mitigations in Linux, but pretty much all of them only help between processes, by flushing various hardware data structures. And hopefully someday we will have hardware actually designed to restore the guarantees of isolation between processes.
I'm pretty sure this is accurate, but I'm just a random guy on the internet so don't trust my word for it too much.
Since process boundaries are enforced by not mapping any RAM the process isn't allowed to use*, they don't get violated by Spectre v1. If you have two threads that share only part of their address space, the unshared part is protected. Any executable or library mapped into multiple processes, though, is readable from any of them.
^*: With modern CPUs, multiple processes can be mapped in simultaneously using ASIDs; however, this doesn't matter because ASIDs work as they should and properly isolate the processes. You can just assume the model where only one process is mapped at a time.
Are you sure that works? As I understand it, the issue with Spectre is the branch predictor, not the memory mappings. The reason why process isolation works is that branch prediction gets reset on context switch (or that this will happen on newer generations of hardware in the future).
The issue is that speculation allows bypassing software-enforced bounds checks, but, discounting Meltdown, the hope is that hardware-enforced checks still hold.
I thought this wasn't possible with ASLR'd relocations all over the place in the text?
It's worth noting that no existing or announced common hardware is "properly designed" according to this condition. Even the "fixed" Intel hardware that's been announced is still vulnerable to spectre v1 across process boundaries.
AMD Zen is.
Spectre v1 (bounds check bypass) only works inside processes. All it allows you to do is read any memory location currently mapped into your address space, so it gives anything that can execute code complete read access to the address space of the process it's running in. On Intel CPUs, this also allows reading the kernel address space, unless KPTI is used. Eventually, the ability to read kernel memory will be removed in hardware, and KPTI will become unnecessary.
On all AMD post-Bulldozer CPUs, Spectre v1 cannot be used to read the kernel address space.
All the rest of Spectre (and Meltdown) can eventually be fixed, but it is effectively impossible to make a CPU that is both fast and doesn't exhibit Spectre v1.
I don't think this is true. If it is, why did Linux add speculation barriers to bounds checks in the kernel?
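For context, those barriers often aren't literal fences: the kernel's preferred fix is a branchless index clamp. The sketch below is in the spirit of the kernel's `array_index_nospec()`, not its exact implementation; the function names are mine, and it assumes the usual arithmetic right shift on signed values that mainstream compilers provide:

```c
#include <stddef.h>
#include <stdint.h>

/* Branchless index clamp: returns `index` if index < size, else 0,
 * with no conditional branch the CPU could mispredict, so even a
 * speculative execution path cannot use an out-of-range index. */
static size_t index_nospec(size_t index, size_t size) {
    /* When index < size, (index - size) underflows and is negative as
     * a signed value; the arithmetic shift smears the sign bit into an
     * all-ones mask. Otherwise the mask is zero. (Assumes arithmetic
     * shift of negative values, which gcc/clang guarantee.) */
    size_t mask = (size_t)((intptr_t)(index - size) >> (sizeof(size_t) * 8 - 1));
    return index & mask;
}

uint8_t table[16];

/* Bounds-checked read that stays safe even under misspeculation. */
uint8_t safe_read(size_t untrusted_index) {
    if (untrusted_index < 16)
        return table[index_nospec(untrusted_index, 16)];
    return 0;
}
```

Even if the `if` is speculatively taken with a bad index, the masked index is 0, so the speculative load touches only in-bounds memory.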
I was in a discussion of this last week on another thread - see my previous comments for why I think spectre v1 has impact across processes.
I think you were having that discussion with me.
So, I went and read the whole lkml threads you linked and, if I understood correctly, regarding Spectre v1 the kernel is only expected to be vulnerable to BPF-based attacks or similar. As far as I understand, the speculation barriers are used to protect arrays directly accessible by BPF programs.
There is a mention of out-of-process attacks on other userspace programs, but no details.
I'm ready to admit that, by carefully crafting inputs, it might be theoretically possible to attack some exploitable branches, but the big deal with Spectre is the high bandwidth that can be attained by directly running code in-process.
Do you have a pointer to any description of an even remotely practical out-of-process Spectre v1 attack that doesn't involve executing code in-process? Repurposing an interface that is not meant to run code (i.e. building your own VM) is fair game.
> If you read the papers you need a very specific construct in order to not only cause a speculative load of an address you choose but also to then manage to cause a second operation that in some way reveals bits of data or allows you to ask questions.

> BPF allows you to construct those sequences relatively easily and it's the one case where a user space application can fairly easily place code it wants to execute in the kernel. Without BPF you have to find the right construct in the kernel, prime all the right predictions and measure the result without getting killed off. There are places you can do that but they are not so easy and we don't (at this point) think there are that many.

> The same situation occurs in user space with interpreters and JITs, hence […] is particularly vulnerable to versions of this specific attack because the attacker gets to create the code pattern rather than have to find it.
> big deal with spectre is the high bandwidth that can be attained by directly running code in process
That depends on your perspective. If you are an OS developer who strives to guarantee process isolation, then it is a pretty big deal that Spectre v1 allows you to read memory from the kernel or from other processes, even if it might be tricky to do so. If you write a JS JIT, then yeah, you are probably most concerned about the single-process case.
> remotely practical
IMO, most Spectre attacks are not remotely practical. No, I don't have a pointer. The only actual demonstration of Spectre I've seen is the one included with the original paper (single process).
But then things moved on: standard ways of adding more cache to multiple cores were lapped up, and later on we found a design flaw that echoed back across a decade or more of all these multi-core CPUs.
Though in fairness, and to put some context on all this: CPU design is more complex than writing tax laws, yet we see exploits for tax laws appear and get used all the time by large corporations. The comparison is not ideal, and some would say unfair, but it does highlight that nothing is perfect, and what we class as perfect (or darn close) today could very well be classed as Swiss cheese in the future. It comes down to how far away that future is.
After all, we still use encryption that we have (on paper) shown to be vulnerable to future quantum computers!
But in a world that was aware of Y2K decades before the event, the tendency of business to push everything to the last minute for profit will always be a factor in advancements. After all, if CPU cores had isolated caches instead of shared ones, that would mitigate many of these issues, but it would cost more to make, and most consumers would not appreciate the extra cost for what is, to them, little value over the cheaper solution. That's business for you, and CPUs are made by businesses for profit.
That's why I said we need trusted cores - i.e. ones that don't implement speculative execution or share cache with other cores. Untrusted code needs to be run in physical isolation, not just virtual isolation.
Perhaps we just need to have a more restricted idea about what untrusted code is allowed to do.
E.g., if you use Haskell and just verify that the function you are running is not in the IO monad, you might miss some usage of unsafePerformIO. Even if you check their code, if you let them specify dependencies they might manage to sneak a buggy use of unsafePerformIO into a library they submitted to Hackage.
Plus, your restriction is essentially: no clock, no contact with the outside world, no threading, and a carefully considered interface to the host program to prevent timing leaks.
For many use cases, this is not workable.
Disallowing direct access to the outside world is a big restriction, but it may be that a lot of the things you'd want to do inside a sandboxed application that aren't safe could be delegated to trusted code through an appropriate interface.
Threading isn't necessarily a problem; the Haskell Par monad for instance should be fine as there is no program-visible way to know which of two sub-tasks executed in parallel finished first.
Presumably, this could be fixed easily by using the phantom type trick (same as ST) but it would make the type signatures ugly and possibly break existing code. (Maybe there's a more modern alternative to phantom types?) So, yeah, you might not want to use the Par monad as it's currently implemented in ghc as your secure parallel sandbox.
The online docs suggest using lvish if you want a safer Par monad interface, which I'm not familiar with (though the lvish docs say that it's not referentially transparent if you cheat and use Eq or Ord instances that lie).
The general idea seems sound, though -- it should be possible to have parallelism in a sandbox environment without allowing the sandboxed program to conditionally execute code based on which of several threads finished some task first.
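That fork-join idea can be sketched even outside Haskell, where it's a discipline rather than something the type system enforces. The C sketch below (my own illustrative names, using plain pthreads) joins both workers before combining results in a fixed order, so nothing in the output depends on which thread finished first:

```c
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

/* Deterministic fork-join: the caller cannot observe which worker
 * finished first, because both are joined before their results are
 * combined in a fixed order -- the property the Par monad enforces
 * by construction. */
typedef struct {
    const int64_t *data;
    size_t len;
    int64_t sum;
} task_t;

static void *sum_worker(void *arg) {
    task_t *t = arg;
    t->sum = 0;
    for (size_t i = 0; i < t->len; i++)
        t->sum += t->data[i];
    return NULL;
}

int64_t par_sum(const int64_t *data, size_t len) {
    task_t lo = { data, len / 2, 0 };
    task_t hi = { data + len / 2, len - len / 2, 0 };
    pthread_t t1, t2;
    pthread_create(&t1, NULL, sum_worker, &lo);
    pthread_create(&t2, NULL, sum_worker, &hi);
    pthread_join(t1, NULL); /* wait for BOTH before reading results */
    pthread_join(t2, NULL);
    return lo.sum + hi.sum; /* fixed combination order: no race visible */
}
```

Of course, in C nothing stops the sandboxed code from timing the joins itself; the point of doing this in a restricted language is that the program has no way to express "which finished first?" at all.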
umm, "on today's hardware"