Furthermore, why are all the APIs so diverse? Why aren't there reactive operating systems (as in, an OS with a reactive API)? All of these ideas can be explored in Rust, but on some level I'm not sure what the feature set of the OS of the future should be.
The current driver models aren't that great either.
We already had it in 1961 as ESPOL and NEWP.
Followed by many others, before UNIX's adoption due to its source being available for free.
> I'm not sure I want another UNIX implementation.
Yeah, they just keep repeating what was already done, without much research beyond adding mainframe features.
As Rob Pike puts it:
"We really are using a 1970s era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy."
What we really need are OSes that take the ideas from human-computer interaction from Xerox PARC and Viewpoints Research Institute and take them further into mainstream, using safe languages in the process.
> What we really need are OSes that take the ideas from human-computer interaction from Xerox PARC and Viewpoints Research Institute and take them further into mainstream, using safe languages in the process.
How are modern OSes prevented from using said ideas (which ideas exactly?) simply because they are a UNIX implementation?
Everything else is fluff, and could stay just like it was on UNIX System V.
Hence why for macOS, iOS, tvOS, Android and ChromeOS, having a UNIX kernel is just a matter of convenience, given their application models and programming languages.
NeXT based NeXTStep on UNIX as a door into the then-new UNIX workstation market, but it never had a CLI culture like the other UNIX vendors.
GUI workflows like on Xerox PARC were always important to Steve.
Is your answer to my question: "Modern UNIX OSes cannot take advantage of said ideas because they are POSIX compatible."?
So each UNIX clone project ends up replicating POSIX and doesn't move beyond yet another TWM clone with pretty graphics.
GNOME, KDE, Unity are the only ones that try to somehow modernize the experience and tend to get pretty vocal pushback.
The UNIX culture is the culture of the command line, and something like XFCE is probably the most GUI a TWM user is willing to accept to manage their XTerms.
This is something that's been interesting me lately. I've been working with the excellent futures and tokio libraries, and I've wondered what it would be like if that were the primitive for all OS interactions.
Basically, imagine a message passing system like Mach, but where the fundamental messages are all futures based. There might be some interesting things possible without holding locks in the kernel.
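To make the idea concrete, here's a toy userspace sketch of a "syscall as a future": everything in it (the `Syscall` type, the reply value, the busy-polling `block_on`) is invented for illustration, with a trivial executor standing in for the kernel scheduler. The point is only the shape of the interface: the caller gets a future back immediately and the kernel never blocks holding a lock on its behalf.

```rust
use std::future::Future;
use std::pin::Pin;
use std::ptr;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// Hypothetical kernel request: a message whose reply arrives later.
/// Here the "reply" simply becomes ready on the second poll.
struct Syscall {
    polled_once: bool,
}

impl Future for Syscall {
    type Output = u64; // the reply payload

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u64> {
        if self.polled_once {
            Poll::Ready(42)
        } else {
            self.polled_once = true;
            cx.waker().wake_by_ref(); // pretend the device replied
            Poll::Pending
        }
    }
}

/// Minimal busy-poll executor, standing in for the kernel scheduler.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker {
            raw()
        }
        fn noop(_: *const ()) {}
        RawWaker::new(ptr::null(), &RawWakerVTable::new(clone, noop, noop, noop))
    }
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is never moved after being pinned here.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    // A "read" that suspends once and then completes.
    let reply = block_on(Syscall { polled_once: false });
    println!("reply = {}", reply);
}
```

A real kernel would of course park the task and wake it from an interrupt rather than busy-poll, but the caller-visible contract (a `Future` per message) would be the same.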
Prove that an RTOS-style library with a simple scheduler can be written and works well.
That would go a long way toward it gaining adoption in the embedded world: show that traditional solutions work correctly with the new language first... then move on to an innovative OS.
In the case of this hypothetical OS idea... most of what people "would want" from an OS is a made-up thing they can't really express. There is no working backwards from a generalization. You work backwards from concrete ideas.
Realistically, the OS everyone wants is "one that runs all of their software." They would prefer one that "doesn't crash." However, starting over with no real software is a non-answer.
This need not throw everything away that we have today. Why not make the equivalent of higher level bindings over a C API for command line programs? Or maybe any of a hundred other options that might improve over what we have now? Is unix really the end stage evolution for user space?
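As one hedged sketch of what "higher level bindings over command line programs" could mean: wrap a CLI tool so its flags become typed struct fields instead of strings. `SortOpts` and `sort_lines` are made-up names, and this shells out to the POSIX `sort` utility; the point is only that the shell-level interface can be given a typed surface incrementally, without replacing the underlying program.

```rust
use std::io::Write;
use std::process::{Command, Stdio};

// Hypothetical typed options for the `sort` CLI tool: the type system
// now documents and checks what was previously a stringly-typed flag soup.
struct SortOpts {
    reverse: bool,
    numeric: bool,
}

// Run `sort` over `input`, translating typed options into flags.
fn sort_lines(input: &str, opts: &SortOpts) -> std::io::Result<String> {
    let mut cmd = Command::new("sort");
    if opts.reverse {
        cmd.arg("-r");
    }
    if opts.numeric {
        cmd.arg("-n");
    }
    let mut child = cmd.stdin(Stdio::piped()).stdout(Stdio::piped()).spawn()?;
    // Write the input, then drop the handle so `sort` sees EOF.
    child.stdin.take().unwrap().write_all(input.as_bytes())?;
    let out = child.wait_with_output()?;
    Ok(String::from_utf8_lossy(&out.stdout).into_owned())
}

fn main() {
    let opts = SortOpts { reverse: false, numeric: true };
    let sorted = sort_lines("10\n2\n1\n", &opts).unwrap();
    print!("{}", sorted);
}
```

PowerShell's object pipeline, mentioned below in the thread, is the same idea applied wholesale; this is the incremental version.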
The solution is incrementally improving these things. What you're talking about is absolutely the kind of thing I'm interested in. Arguably PowerShell is one piece of the right direction. Show me a better way with compilers etc. and I'm interested (I work in Scala and compile time is a big issue, but I couldn't stand to have SBT as my framework).
> This need not throw everything away that we have today. Why not make the equivalent of higher level bindings over a C API for command line programs? Or maybe any of a hundred other options that might improve over what we have now? Is unix really the end stage evolution for user space?
If you've got a way to introduce one of those things incrementally, I'm interested.
I want an OS that's easy to program, to the point that I don't have to rely on third-party software too much. This might be a pipe dream, but I'm not convinced it's not achievable on some level.
> They would prefer one that "doesn't crash." However, starting over with no real software is a non-answer.
I'm trying to open a discussion about what ideas have been tried in the past, seem good, but for one reason or another didn't catch on.
Move that to other areas. My car? Forget about it. I could have a fighting chance at building a bike. However, to build a good one, not a chance.
So... why is this something that is even desirable in the computer?
Now, do I want to be able to fix what I can? Of course. I run emacs in part because I like that I can pull up the source, quite easily, of any component I am using. I still rely mostly on code written by others. And have no shame or concern in doing so.
That's the point: you don't have much ability to alter the OS you are running. But you should. It's not unachievable, but the languages, tools and OS (and maybe HW) have to come together to allow for this.
> Move that to other areas.
A computer is different from any of those, since it's fundamentally a machine for modeling the human mind. I always feel like my OS is somewhat restrictive as to what it lets me do easily and what alterations it lets me make.
> Now, do I want to be able to fix what I can? Of course. I run emacs in part because I like that I can pull up the source, quite easily, of any component I am using. I still rely mostly on code written by others. And have no shame or concern in doing so.
Right. But the languages and tools that we use currently don't lend themselves to terseness and correctness. I think that that's the direction of future languages.
And I don't think languages really help here, given that many advanced languages have products that are, again, beyond me.
Directly, what sorts of edits do you wish you could do, but feel prevented from doing?
Part of this is that setting up a development environment is kind of tricky.
> And I don't think languages really help here.
They really do actually.
> Directly, what sorts of edits do you wish you could do, but feel prevented from doing?
Imagine an OS where you can do something like "view source" with html, you can edit it, and immediately re-execute it. I think that the list of things I wouldn't look into tweaking would be shorter.
And that is the thing, I rarely want to. Or, I'll want to, but it isn't easy enough to do that I can get it done. Often, the thing I would like to do is a melpa install away. And the amount of code involved with many features is far beyond my skillset, it usually seems.
So, can you give maybe a top three of concrete examples of things you would change?
Do you mean view the kernel's source? Or user program source? Either way, this already exists, it's just often unsuitable for OS software.
I basically want a simple RTOS and HAL/Driver body.
I want timers and clocks. I want some block devices and perhaps file-system-primitives etc.
I want TCP/IP and a USB stack.
I don't want anything nearly as complex as a Unix, nor need it be Posix compliant etc.
I would like it to be runnable from U-Boot, as it's ARM time now, and arm64 support should be a given.
I would think there is a substantial market for a minimalistic, ultra-simple, braindead-easy alternative to the "embedded Linux" (LOL) wave that seems to be proliferating on all "A"-class ARM processors...
I need determinism more than throughput, but a 64k-RAM microcontroller at 500 MIPS isn't going to cut it either.
There is a serious hole in the OS market for CPUs where *nix is overkill or unsuitable (and no, neither RTAI nor a realtime-patched kernel counts; the complexity of the entire codebase is still there).
I've heard about MirageOS from the OCaml people, but I have not been able to find any info about timing and concurrency etc...
Any info on edgy projects highly appreciated, please!
- strongly typed syscalls.
- easier way to circumvent the kernel / OS (unikernels are very active and tangible space, see HFT) while potentially staying relatively safe
- safe zero copy IPC
- projects like https://zinc.rs/ which fold Embedded Linux's Device Tree concept into a coherent declarative HW description that is consumable by the bootstrapper
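On the first wishlist item, a minimal sketch of what "strongly typed syscalls" could look like. Everything here is invented for illustration (`Fd`, `SysError`, `sys_write` are not any real kernel's ABI): the idea is that each syscall gets its own signature and error type instead of the traditional `long syscall(long nr, ...)` returning `-errno`.

```rust
// Errors a write can fail with, as an enum rather than a raw -errno.
#[derive(Debug, PartialEq)]
enum SysError {
    BadFd,
}

// A file descriptor is its own type, not a bare integer, so it can't
// be confused with a length or a flags word at the call site.
struct Fd(u32);

// Stub "kernel" entry point: the signature encodes what the call
// needs and what it can fail with.
fn sys_write(fd: &Fd, buf: &[u8]) -> Result<usize, SysError> {
    if fd.0 >= 1024 {
        return Err(SysError::BadFd);
    }
    Ok(buf.len()) // pretend the whole buffer was written
}

fn main() {
    let fd = Fd(3);
    assert_eq!(sys_write(&fd, b"hello"), Ok(5));
    // Mixing up argument order or passing a junk fd is now a
    // compile-time or well-typed runtime error, not silent corruption.
    assert_eq!(sys_write(&Fd(4096), b"hello"), Err(SysError::BadFd));
    println!("typed syscalls ok");
}
```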
Driver development is soooo far out of my field, but it's always fascinated me. I'm curious why hardware companies didn't go with something like a standard interface per product class instead of these highly complex drivers. Wouldn't it be possible, for each type of device, for a group to come together and decide on an interface, so that any time you plug a device in it just works? (And if a specialized driver could do better, you could install that separately, but the goal is making everything magically work, immediately.)
But since no one has done it, I feel like my idea is either horrible or short-sighted. But I do want to look into how it all works one day.
There really isn't much savings or advantage to this approach, so there are not a lot of Open Firmware and EFI drivers out there. You really do not want hardware manufacturers writing drivers either -- they tend to be terrible and full of bugs. Having the manufacturer publish the hardware specifications so that drivers can be easily implemented by developers is a much better approach; sadly, a lot of manufacturers cannot be bothered with this.
We used to have exactly that:
-- sound cards being compatible with AdLib and/or SoundBlaster and/or SoundBlaster Pro;
-- network cards being compatible with NE2000;
-- video cards being compatible with VESA standards.
Then Windows took over, buses changed, the shitfest started and never stopped. And now the shitfest comes with NDAs too.
Accelerated graphics is probably the biggest remaining disaster, honestly. At least with Vulkan/DX12 we might be getting thinner drivers...
> The iretq instruction is the one and only way to return from exceptions and is specifically designed for this purpose.
Not quite true. STI; LRET works too, and it's faster for stupid reasons.
Also, the AMD architects blew it badly here. That quote from the manual:
> IRET “must be used to terminate the exception or interrupt handler associated with the exception”.
Indicates that the architects didn't think about how multitasking works. Consider:
1. User process A goes to sleep using a system call (select, nanosleep, whatever) that uses the SYSCALL instruction.
2. The kernel does a context switch to process B.
3. B's time slice runs out. The kernel finds out about this due to an interrupt. The kernel switches back to process A.
4. The kernel returns to process A's user code using SYSRET.
This is an entirely ordinary sequence of events. But think about it from the CPU's perspective: the CPU entered the kernel in step 3 via an interrupt and returned in step 4 using SYSRET, which is not the same thing as IRETQ. Oh no!
It turns out that this actually causes a problem on AMD CPUs: SYSRET will screw up the hidden part of the SS descriptor, causing bizarre crashes. Go AMD.
Intel, fortunately, implemented SYSRET a bit differently and it works fine. Linux has a specific workaround for this design failure -- search for SYSRET_SS_ATTRS in the kernel source. I don't know how other kernels deal with it.
Of course, Intel made other absurd errors in their IA-32e design, but that's another story.
Interesting, didn't know that.
> This is an entirely ordinary sequence of events. [...] It turns out that this actually causes a problem on AMD CPUs
Sometimes I think that the hardware designers intentionally made kernel development complicated :D. Thanks for the heads-up!
In case you're curious, here's an implementation for Linux:
There are a couple of gotchas. RF and TF won't work right with the LRET hack. You need to make sure not to set IF until the STI, as otherwise you lose the magic one-instruction no-interrupts window. And it's unclear in the spec whether NMIs or MCEs honor that window, so if you want to be robust and your kernel can recover from NMI or MCE, you should detect if this happens, rewind one instruction, and clear IF again before returning.
Other than that, it appears to work perfectly. :)
> Unfortunately, Rust does not support [a save-all-registers calling convention]. It was proposed once, but did not get accepted for various reasons. The primary reason was that such calling conventions can be simulated by writing a naked wrapper function.
> However, auto-vectorization causes a problem for us: Most of the multimedia registers are caller-saved. [...] We don’t use any multimedia registers explicitly, but the Rust compiler might auto-vectorize our code (including the exception handlers).
This seems like a pretty convincing argument in favor of supporting this calling convention explicitly: only Rust knows what registers it is actually using. The current approach devolves into preserving every register that Rust might possibly use.
AVX-512 has 2 KiB of register state alone (32 ZMM registers at 64 bytes each)! That's a lot of junk to save to the stack on the off-chance that Rust decides to super-auto-vectorize something.
Note that LLVM loves to use the XMM registers to do memcpys. This is something that kernels definitely do. So it's definitely a tradeoff.
If a "save all registers" calling convention was natively supported by Rust, you would only pay the cost for registers that are actually used.
For caller-saved/scratch registers in interrupt handlers, not only do you have to avoid stomping on registers yourself, anything you call has to as well. You have three options here:
- Save all the registers in each interrupt
- Only call "save all registers" functions in your function, enforce this somehow. Since things like pagefault handlers can get pretty involved, you probably don't want to do this.
- Just compile your kernel with most extra registers disabled.
There is a fourth option, which involves whole-program taint analysis or something similar to track the registers stomped on by all transitive calls from the exception handler. It requires special compiler support, though.
And yes, optimizing this would require special compiler support. That is the point!
The compiler is the only component that is in a position to possibly do something smarter than spilling everything. Even if the compiler doesn't actually do this, letting users say what they mean is better than making them write something that will definitely be sub-optimal. It at least leaves open the possibility that the compiler could do something smarter.
There's hardware support to help with this; see "Task state segment" (16 and 32 bit x86 only, amd64 is different).
Sadly, AMD64 came up with a terrible design for SYSCALL, and an exception right after SYSCALL will not automatically switch stacks. The result is a big mess.
One other thing the series does is show how much we dink around due to x86 backward compatibility:
- GDT still required by CPU but not needed for programs.
- Ridiculous structure of IDT pointers due to multiple generations of bit width extension.
- Boot sequence.
Compared to the low-level setup for an ARM chip, it's night and day. ARM is what Intel was in the days of the 8088: load your program at an address, CPU jumps to that address, end of story!