The Mill CPU is Single Address Space (SAS). It has separate Protection (PLB) and Translation (TLB) structures, with the PLB sitting in parallel with the L1 cache and the TLB sitting between the caches and main memory.
Unlike previous SAS machines, the Mill supports fork() https://millcomputing.com/topic/fork-2/
PS sorry to everyone suffering Mill fatigue :(; we love bragging about the baby ;)
And so I hear about The Mill again. All we do is hear about it. I can't wait to benchmark its performance, reliability, and security against the others once it's perfected and released. ;)
And all this even assumes you have to do the switch. That's not necessary if you split your system between kernel and user threads that run side by side on different cores with message passing, memory, or CPU interrupts to do notification.
So, not only are VMs and unikernels old, their advocates are ignoring improvements on the other side of CompSci for MMU systems. Interestingly, that was the side making self-healing, live-updating, and NSA-resisting systems all these years on COTS hardware. A little weird that such architectures get the least attention. ;)
Do you happen to have any relevant references on modern anti-exploit technology like ASLR (which creates tons of MMU entries to describe the fragmented address space) and microkernels (which tend to rely on fast context switching)? I can imagine a few partial solutions, but... And no, "switch to OCaml" doesn't solve the problem. ;-)
As far as modern tech goes, I'll try to get you a few when I'm back on my PC at home. I have tons of them, actually, so I need to look again to apply the mental filter. Two interesting ones for you to Google for now are SVA-OS by Criswell and Code-Pointer Integrity. Certain aspects of those give strong prevention with minimal overhead.
The best solutions are CPU mods that give better protection at lower cost, especially memory safety. The architecture is the main problem; all this other stuff is how we BAND-AID it. ;)
Oh, no you didn't! You saying Green Hills ain't been shipping? Secure64? Infineon? Better security != not shipping. Although, if I read that as a confession, then it might make a bit more sense. :P
re security mitigations
Ok. Thirty-plus minutes into my collection shows, aside from no organization, that I need to narrow this down. Most of the great stuff is CPU modifications or compiler transformations that make safety/security easy. The HW has relatively low overhead with varying degrees of features supported, while the SW approaches have significant overhead but support monoliths like unikernels (or, say, Dom0). There are VM-style papers in my collection with clever, low-overhead stuff, but they're bound to be breached like other clever stuff was. The Nizza Security Architecture and MILS kernels are still the best of that breed.
So, I need to know if you're interested in the HW mods and/or stronger SW safety tricks. Honestly, they're most likely to pay off. Worst case: throw extra HW at either to cover the performance hit. It will get cheaper in volume. Plus, a few are simple enough to apply to your domain if they do custom CPUs for RedFox, etc.
Want me to send a few?
The memory was not bits and bytes, though. For example, a Symbolics 3600 used 36-bit words with all data being tagged: various number types, characters, strings, vectors, bitmaps, arrays, hash tables, lists made of cons cells, OOP objects, ...
That was used and checked at the processor level. That way software would not manipulate raw memory in some region, but actual data objects, knowing which space they use and what structure they have.
TI used 32-bit Lisp processors in their later machines, while Symbolics went to 40 bits to support larger address spaces.
As far as safe/secure goes, look at crash-safe.org for the best in class of those. The SAFE architecture isn't just tagged: it's holistic in addressing security and SW dev issues at each layer. Actually, they might be trying to do too much haha. I told people they should've just ported Oberon or Java to the CPU to give us an interim solution.
This is an odd statement. That footnote (2) directly refutes part of it.
Concretely, lots of CPUs have "address-space identifiers" that enable MMU context switches without TLB flushes. Intel Sandy Bridge and up has a limited form of this capability (Intel calls it PCID for incomprehensible reasons), and I'm working to enable it on Linux 4.6 or 4.7.
With ASIDs available, MMU context switches are a single instruction and have negligible cache footprint. The extra bookkeeping needed will be one or two cachelines.
Pretty much any core in the "Cortex-A" series (e.g. the popular Cortex-A8, A15, and A57) has support for fast, flush-less address space switching using ASIDs. Address space / context switch overhead has NOT been a good reason to avoid this kind of compartmentalization, for at least the last decade or so.
Nemesis has a single address space and memory protection: you can twiddle the permission bits in MMUs without necessarily incurring the cost of a TLB flush.
A unikernel system is usually a single-application system with no local multiuser capability; security is applied at the virtualisation layer and within the application. This is quite different from the traditional timesharing model.
I recall some discussion about leaking virtual addresses as an optimization to avoid needing to manage or GC large page tables. This was the mid-1990s so a 64-bit address space was more than anyone could possibly need. ;)
So, single address space or not? I ask: which way gives the smaller chip footprint?
I think we spend (waste?) too much time trying to do things in hardware when we could offer an extremely simple core to the s/w layer. That makes h/w designer's job easy and leaves a ton of footprint where you can put more cores.
Make the core RRISC (reduced-reduced-instruction-set-computer) and offer it in 64-core or 256-core versions and let the programmers have fun with it.
Looking at the brainiac-vs-speed-demon debate, I'm squarely in the speed-demon camp.
A Cell with a few megabytes of RAM in each support processor might be useful for bulk packet processing. IBM built some blade servers like that. They were powerful, but too weird to sell in volume, and were discontinued in early 2012.
That's the problem with exotic architectures. It's quite possible to build very high performance special purpose systems, but if the volume isn't there, they can't compete with commodity hardware. Google, Facebook, or Amazon, who consume enough hardware to justify going custom, might do things in this direction.
Also, I'm not sure why a simpler core (note: a simple architecture, not a slower core) would be so slow that it would be unusable. I mean, such cores are called "speed demons" for a reason, I assume.
I personally would love to see a 256-core 64-bit ARM-compatible part become mainstream (by compatible I mean take the ARM instruction set and cut it in half or to a quarter).
The main problem is access to DRAM is a terrible bottleneck, and adding cores makes this worse if they all need to access it. You can fill a die with ALUs but unless you can keep them fed this doesn't help you at all. That's why Tilera focused on the network stream use case; there's enough storage on-die for a few frames per core, and the data to operate on can "flow" through the system from one 10G Ethernet MAC to another.
This was circa 1992, if it matters.
They might have changed it in the very latest version, but from what I remember, applications used to crash the OS all the time.
You could argue that unikernels are by definition one component, so it's fine to share one fault domain. That's easy when they're missing all the facilities that even the OP admits still need to be built. If you're going to have something like a network, a filesystem, and so on, you need tools for understanding them. It seems like you'll want an interactive environment for using those tools, along with common facilities for filtering and otherwise processing that output. And we're back where we started -- with several discrete components that are better off in separate fault domains.
You can argue that the isolation is better provided by the language, and the article claims that "it’s easier to create quality tooling for something written in a single language with a decent type system that lives in a single address space". That's only true if you allow these components to be tightly coupled. But neither removing direct access to memory nor providing a rich type system magically eliminates the possibility for a bug in one part of a program to affect a different part of the program. And why should all these components be tightly coupled anyway? And besides all that, this is an argument for monoculture -- everything must use the same language and runtime environment. But different languages are better suited to different tasks.
The author also claims a false dichotomy between code reuse and multiple address spaces. But it's completely possible to build common facilities for instrumentation and reporting and still have them be loosely coupled.
All of this typifies a lot of my issues with unikernels: they represent a complete rejection of major advances in modern systems and software design without addressing the underlying problems that those advances were built to solve. There's some baggage in modern operating systems, but many (most?) of the major architectural decisions are the results of thoughtful incremental improvement by engineers looking at concrete problems. Let's not throw all of that away.
While using shitty languages, so not that relevant IMO.
> But different languages are better suited to different tasks.
I think the industry can support multiple competing unikernels in different languages.
> ...tightly coupled...
Rigorously define one's interfaces while hiding implementation details :). Yeah, it takes discipline, and OCaml/Rust/Haskell/etc. are not able to codify all the invariants one might want to enforce. But as more powerful languages are polished, I believe the situation will improve. I dream of a computing service where one submits software with a proof that it will "play nice" with other tenants, no hardware sandboxing needed.
That way we can reimplement existing, common facilities not once, but N times -- and still not support what I was alluding to (namely, allowing specific subcomponents written in a language appropriate for that component).
> Rigorously define one's interfaces while hiding implementation details
Fine, but then there's little advantage to mandating a single address space and language.
Ah, sorry, I didn't realize you meant that. Well, the other answer is that more powerful languages can support more expressive and diverse embedded languages.
> Fine, but then there's little advantage to mandating a single address space and language.
Rigorous interfaces != primitive interfaces, but Unix forces both on us. For example, how feasible is it to share a tree between two processes? Powerful languages allow us to specify the end goals of per-process address spaces etc., while leaving the means much more open-ended.
It would be great to have a standard API for this: say, a mux syscall that queues other syscalls and triggers them all at once after some period of time.
This stuff is literally a half-century old now. IBM did it with CP/CMS back in the mid-1960s on the original System/360 hardware. Thumbnail sketch: CP (Control Program) is now called VM (Virtual Machine). It is a hypervisor. CMS is the Conversational Monitor System, once the Cambridge Monitor System. It's about as complex as slime mold and/or MS-DOS: Single address space, no hardware protection. CP allowed people to run multiple instances of CP and CMS as guests. CMS provided a command line and an API, CP provided separate address spaces and multiplexed the hardware.
This argument is pretty poor, since we've done a lot of things -- cooperative multitasking, Windows versions with little to no isolation between the root user and other users, etc. That doesn't mean they are the best approach now, or were even a good idea at the time. Yes, we got them to work. Microsoft Bob and Windows ME worked too.
At the kernel level, performance/resource usage really does matter. I don't know enough about OCaml to comment on how it performs, but when you argue about how (allegedly) terribly Linux performs in certain circumstances -- and that's a lot of the selling point of unikernels -- it doesn't really follow to then talk about how 'acceptable' performance is possible from higher-level languages.
I'm almost willing to bet money VM has been used in production for longer than you've been alive. And of course machines had MMUs back then: They built a special one for the IBM System/360 model 40, to support the CP-40 research project, and the IBM System/360 model 67, which supported CP-67, came with one standard. IBM, being IBM, called them DAT boxes, because heaven and all the freaking angels forfend that IBM should ever use the same terminology as anyone else...
> This argument is pretty poor, since we've done a lot of things - cooperative multitasking, windows versions which had little to no isolation between the root user and other users, etc. etc. it doesn't mean they are the best approach now, or were even a good idea at the time. Yes we got them to work. Microsoft Bob and Windows ME worked too.
The difference between this and Windows Me is that we knew Windows Me was a bodge from day one. Windows 2000 was supposed to kill the Windows 95 lineage. (What's a "Windows 2000"? Exactly.)
Anyway, the hypervisor design concept came from people who'd seen what we'd now call a modern OS; in this case, CTSS, the Compatible Time-Sharing System (Compatible with a FORTRAN batch system which ran in the background...). They weren't coming from ignorance, but from the idea that CTSS didn't go far enough: CTSS was a pun, in that it mixed the ideas of providing abstractions and the idea of providing isolation and security into the same binary. The hypervisor concept is conceptually cleaner, and the article gives evidence it's more efficient as well.
You missed my point. My point was that it works acceptably fast (yes, it does, I've used it, and you won't convince me my perceptions are wrong) even though it's operating in the worst possible context: In a userspace process on an OS kernel, where everything it does involves multiple layers of function call indirection and probably a few context switches. Compared to that, getting a stripped-down unikernel written in OCaml to be performant has got to be relatively easy.
> At the kernel level performance/resource usage really does matter. I don't know enough about OCaml to comment on how it performs but especially when you make your argument about how (allegedly) terribly linux performs in certain circumstances, and that's a lot of what the selling point of unikernels are, it doesn't really follow to talk about how 'acceptable' performance is possible from higher level languages.
First: Only the unikernel would be written in OCaml. The hypervisor would have to be written in C and assembly.
Second: I never said Linux performs terribly. Linux performs quite well for what it is. It's just that what it is imposes inherent performance penalties.
Third: Although the article focused on performance, the main reason I support hypervisors is security. Security means simplicity. Security means invisibility. Security means comprehensibility, which means separation of concerns. Hypervisors provide all of those to a greater extent than modern OSes do.
You clearly know more about the details :) However, my point is that modern requirements aren't the same as those of the past, particularly with regard to security, but also workloads, use cases, etc., which are generally very different.
>The difference between this and Windows Me is that we knew Windows Me was a bodge from day one. Windows 2000 was supposed to kill the Windows 95 lineage. (What's a "Windows 2000"? Exactly.)
You said "The best argument is this: We've done it" - the point of these examples is that, yes we've done many things, so it's not a very good argument. If you want to skip over ME, then 3.1 - it used cooperative multitasking. Arguably this might be more efficient than pre-emptive multitasking (I'm not saying it is, rather saying maybe somebody _could_ argue this), and it was good enough for the time, but the fact we've done it doesn't mean we should do it.
This applies even more to security - for many years there were little to no efforts made towards hardening software. We live in a world where this just cannot happen any longer.
I'm not saying by the way that the past use doesn't have value or demonstrate the usefulness of the approach, it might do, just that the fact it was done before doesn't _necessarily_ mean it's a good idea now.
>Anyway, the hypervisor design concept came from people who'd seen what we'd now call a modern OS; in this case, CTSS, the Compatible Time-Sharing System (Compatible with a FORTRAN batch system which ran in the background...). They weren't coming from ignorance, but from the idea that CTSS didn't go far enough: CTSS was a pun, in that it mixed the ideas of providing abstractions and the idea of providing isolation and security into the same binary. The hypervisor concept is conceptually cleaner, and the article gives evidence it's more efficient as well.
Interesting. Not sure the article does demonstrate that though, it does suggest performance penalties, some serious, due to the abstractions of a modern OS. I'd want to look more closely at these before I believe the penalties are THAT severe, other than in the case of networking where it seems more obvious the problem would arise.
>You missed my point. My point was that it works acceptably fast (yes, it does, I've used it, and you won't convince me my perceptions are wrong) even though it's operating in the worst possible context: In a userspace process on an OS kernel, where everything it does involves multiple layers of function call indirection and probably a few context switches. Compared to that, getting a stripped-down unikernel written in OCaml to be performant has got to be relatively easy.
I raised the performance issue because this seems to be the main selling point of a unikernel, but now we're losing performance because it's acceptable? Ok fine, but I think a 'normal' kernel in most cases has acceptable performance penalties. This is something that really requires lots of data, and maybe even ocaml is nearly as fast anyway (I hear great things about it), but I just wanted to point out the contradiction.
>First: Only the unikernel would be written in OCaml. The hypervisor would have to be written in C and assembly.
>Second: I never said Linux performs terribly. Linux performs quite well for what it is. It's just that what it is imposes inherent performance penalties.
Absolutely, and agreed there are inevitable perf penalties (as the article describes well.)
>Third: Although the article focused on performance, the main reason I support hypervisors is security. Security means simplicity. Security means invisibility. Security means comprehensibility, which means separation of concerns. Hypervisors provide all of those to a greater extent than modern OSes do.
I really find it hard to believe that security is really wonderfully provided for in a unikernel - you have a hypervisor yes, but if you get code execution in the application running in the unikernel you have access to the whole virtual system without restriction. I'd bet on CPU-enforced isolation over software any day of the week, even memory safe languages have bugs, and so do hypervisors.
I may have made incorrect assumptions here so feel free to correct me. I'm certainly not hostile to unikernels, either!
... and so do CPUs! I do like CPU protections as long as they are dirt-simple, but it really scares me sometimes how complicated CPUs and chipsets are getting with their "advanced" security features. When an exploitable flaw is found, and malware survives OS/firmware reinstalls, it will be a mess.
I think you've missed the point: The hypervisor provides security. The unikernel does not. Everything in the same unikernel guest is in the same security domain, meaning everything in the same unikernel guest trusts everything else in that unikernel.
The hypervisor is the correct level to provide security because that's all it does. It exists to securely multiplex hardware, to allow multiple unikernel guests to run on the computer at the same time without having to be aware of each other. The hypervisor is, ideally, invisible, in that its only "API" is intercepting hardware access attempts and doing its magic at that point; it provides no abstractions of any kind, so it can be as simple as possible. You can't hit what you can't see, and you can't exploit code which isn't there.
The hypervisor of course uses all of the hardware tricks the CPU provides to provide isolation. Modern virtualization hardware fits the bill just fine here.
It's therefore possible for unikernels to establish and enforce their own security policy, just like how you can run Linux as a guest under Xen. It's just that it shouldn't be necessary in a proper hypervisor/unikernel setup, because everything running in the same guest should trust each other and only need to worry about information coming in from the outside world.
Is the idea that a single unikernel is equivalent to a single process? Surely we're getting into realms of serious performance issues if that's the case?
I do take your point on there being less going on meaning there is less to attack, and what you are saying is very interesting, don't get me wrong :) I'm just trying to understand it.
There have been hypervisor exploits, but of course far fewer than Linux/Windows/Mac escalations.
A unikernel is equivalent to a process in a more traditional system. We usually don't secure parts of a process against other parts of the same process. We just start more processes.
> Surely we're getting into realms of serious performance issues if that's the case?
Are you suggesting running an entire virtualised kernel in place of a process is not going to introduce a performance penalty?
There might also be latencies introduced in IPC.
Go read the old exokernel papers (see https://en.wikipedia.org/wiki/Exokernel#Bibliography, especially http://pdos.csail.mit.edu/exo/theses/engler/thesis.ps). They got nice performance improvements out of running their equivalent of unikernels, exactly because they can cut through all the layers of one-size-fits-all abstraction.
They also address IPC.
(This reminds me, I should go and re-read how they actually did IPC.)