I notice they put an ELF loader for executables in the kernel. It's better to have the loader outside the kernel. See this Linux vulnerability. They may get there; they don't have shared objects yet. QNX starts up new processes with a canned startup which links to a shared object to do the loading of the executable. So loading runs with the privileges of the thing being loaded, which is safe.
They put very few drivers in the kernel - just a serial console. That's good. Like QNX, the boot image contains user-level processes you want run at startup. So you don't need drivers in the kernel to get going. This is good for embedded applications. QNX comes with a boot image builder into which you put what's needed at startup. There's a set of services for disk-based systems which get you a UNIX desktop like environment. But for embedded use, you can build a diskless QNX image with much less.
One advantage of a small unchanging kernel is that it can be put in a boot ROM for the life of the machine. This is a win in embedded devices. Any updating is done by loading new processes in userland, not with some super-powerful low level thing that nobody understands and probably relies on security through obscurity.
In order to make this easy, ELF loading is part of the kernel. Otherwise, the ELF loader would have to be loaded somehow. How does QNX solve this issue? I could see there being a simpler executable format, or maybe the loader could do what the kernel does and modify its own paging tables.
With regards to the book, it was outlined by a contributor who is no longer part of the project, so it is likely to be restructured when there is time, and the sections may change.
There are drivers for what are considered to be critical features for any userspace inside the kernel. This mainly means the interrupt controller, but has included a serial console for kernel debugging. This will be an optional component in the future.
QNX, being a hard real time system, does not page at all. This has advantages. You never have to worry about something being paged out when you need it. Response is very consistent. That's worth considering. Really, you don't page in embedded, you don't page in mobile, and if you're paging on servers you're in big trouble. RAM is cheap. I'd suggest not putting in paging.
How does message passing work? Can't find the docs.
Did you make such a calculation for your microkernel?
I'm a bit of a novice when it comes to operating system implementation, so I'm not sure if this is a naive question, but how big of a priority is this for an OS written in Rust? By default, Rust statically links everything besides things written in C/C++ (e.g. libc), so would static linking be sufficient for an OS designed to implement everything in Rust?
You get the best of both worlds?
On the other hand if they crash or contain memory corruption bugs, the host process suffers as well.
If things don't work, I notice and fix it. If things aren't secure, I don't notice, get hacked, and my files get ransomed back to me.
Or alternatively, like LD_PRELOAD, just have an env variable; LD_DONT_PRELOAD. Though that would make it more complicated.
The problem, though, is that all the libraries are duplicated on disk. Flatpak and snaps get around this by recognizing when there is a shared dependency and only downloading it once. But if you "bake in" all your dependencies into a single executable file, that causes pretty significant inflation in program sizes.
This may be less of an issue now as disk space gets cheaper and cheaper, but it still feels like a step in the wrong direction.
It’s better than having static libraries that you cannot fix without recompiling from source.
If you edit a file, it could do a copy-on-write, to ensure you're not inadvertently changing a different file.
If you want something that's higher level, there are tools like git annex that do this.
The problem with this is DLL hell. You need a central repository of images for mapping, but that leads to conflicts. Possibly a good opportunity for CAS?
Rust is apparently capable of producing shared libraries: https://doc.rust-lang.org/cargo/reference/manifest.html#buil...
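Since the linked manifest documentation covers this, here's a minimal `Cargo.toml` fragment (the library name is a placeholder):

```toml
[lib]
name = "mylib"            # placeholder name
crate-type = ["cdylib"]   # C-compatible shared library (.so/.dylib/.dll)
```

There's also `crate-type = ["dylib"]` for a Rust-ABI dynamic library, though that ABI is unstable across compiler versions.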
Either way, if you find Redox interesting, I bet you'll find https://os.phil-opp.com/ interesting too. It's a much less full fledged project, but the blog posts are a great way to learn about Rust and OSes at the same time.
It has been my impression that one of the advantages of Unix is not having to rewrite the userland each time you write a new kernel or port an existing one to a new platform.
It still draws quite a bit from Unix though, and provides a shim translating `/dev/proc` to `proc:` or whatever.
But it seems like for type safety, you wouldn't want different OS services with different API's to have the same type? If they aren't actually interchangeable, pretending they are with a common naming scheme is just a source of bugs.
That's something new... So, for the N-th device type in addition to the device driver I might need 2*(N-1) translating drivers!
In any case, you only need 2N for translating both ways between file: and whatever: to get the same experience Unix does, and wherever that makes sense it is usually provided.
The win is of course that each of those protocols can be strongly typed and provide exactly all the operations that make sense for that protocol. Basically, think of all the things IOCTL does and give them their own names.
This alleviates the mistake and polishes away the worst of the early Unix conceptual fuckups.
it can be used as a primitive typing layer
There should always be a supported standard, but there should be nothing forcing you to it. This is the freedom we need to demand of our OSes.
As long as you don't have POSIX, only special-interest users will use your system.
I believe the only way forward is to start on POSIX and then move on step by step, deprecating it part by part.
No POSIX for userspace as official API, rather ISO C and C++ APIs plus Android native APIs.
Hell, even between supposedly POSIX systems there's a lot of #ifdef going on to make things work.
I don't know if deprecating parts of POSIX is going to work any better than deprecating parts of C++. If all the bad stuff is still there waiting to be misused...
Except we are? We are pretty POSIX compliant all the way into the kernel, we have "/dev", filemodes, etc. We don't have X11 or other UNIX staples, sure, but we are pretty UNIXy.
By "userspace" I was more talking about "the programs and interfaces that a normal user interacts with". Haiku is pretty unique in that regard.
In terms of GUI apps... sorta? We use extended attributes and IPC messaging more than most Linux desktops do, that's true, and our UI/UX is often different.
But if you're talking CLI, then, also no. Bash is the default shell, coreutils are installed by default, sshd is activated by default, etc.
Haiku's own extensions to the original design?
This might not be as big of a deal. Rust increases your productivity quite a bit and I'm really impressed with the pace of progress in the community. I can imagine that new, better & more integrated tools will be made.
I'm more concerned with drivers/firmware, which could be handled the same way, but seems less appropriate.
"Nebulet is a microkernel that executes WebAssembly modules in ring 0 and a single address space to increase performance. This allows for low context-switch overhead, syscalls just being function calls, and exotic optimizations that simply would not be possible on conventional operating systems. The WebAssembly is verified, and due to a trick used to optimize out bounds-checking, unable to even represent the act of writing or reading outside its assigned linear memory."
Of course, you can't have everything as WebAssembly, some core drivers will need to run some critical machine code, but those could be tightly enough integrated that the overhead is almost zero (ie, by using WA imports you can turn this into a function call overhead)
* a more lightweight memory manager (possible thanks to the borrow checker)
* GPU first OS
* a better shell. I know I can run whatever shell I want on unix but a better shell being native would go far.
If anyone knows work being done in this area I'd be curious to read more personally.
As for a better shell, I also completely agree, but I'm not sure it needs to break POSIX. Shameless little plug, I recently started a shell in Rust myself: https://github.com/nixpulvis/oursh
POSIX compatibility at the scripting layer is beneficial for being able to run existing shell scripts, but the sh scripting language sucks in many ways.
What I'd really like to have is a shell that supports both a POSIX compatibility mode for running existing scripts, alongside a more powerful and modern scripting language for use in writing scripts targeting the new shell directly. I'm not sure how to identify which mode an arbitrary script should run in though, or which mode should be used at the command line.
Take a look at: https://nixpulvis.com/oursh/oursh/program/index.html
Incidentally, the link to the `modern` module is broken, it's just program::modern (which is of course not a valid link). Given that I don't see a `modern` module in the TOC I'm assuming the module doesn't actually exist yet?
Until then I struggle with background jobs, because they are a fucking pain in the ass.
On a related note, here's something I've been thinking about:
I want to be able to insert shell script functions in the middle of a pipeline without blocking anything. Or more importantly, have two shell functions as two different components of the pipeline. I believe fish handles this by blocking everything on the first function, collecting its complete output, then continuing the pipeline, but that's awful. Solving this means allowing shell functions to run concurrently with each other. But given that global variables are a thing (and global function definitions), we need a way to keep the shell functions from interfering with each other. POSIX shells solve this by literally forking the shell, but that causes other problems, such as the really annoying Bash one where something like
someCommand | while read line; do …; done

runs the `while` loop in a subshell, so any variables set inside it are lost once the pipeline finishes.
So my thought was concurrently-executed shell functions can run on a copy of the global variables (and a copy of the global functions list). And when they finish, we can then merge any changes they make back. I'm not sure how to deal with conflicts yet, but we could come up with some reasonable definition (as this all happens in-process, we could literally attach a timestamp to each modification and say last-change-wins, though this does mean a background process could have some changes merged and some discarded, so I'm not sure if this is really the best approach; we could also use the timestamp the job was created, or finished, or we could also give priority to changes made by functions later in a pipeline so in e.g. `func1 | func2` any changes made by func2 win over changes made by func1).
When I first started typing this out I thought that this scheme didn't work if the user started a script in the foreground and then backgrounded it, but now that I've written it out, it actually could work. If every script job runs with its own copy of the global environment, and merges the environment back when done, then running in the foreground and running in the background operates exactly the same, and this also neatly solves the question of what happens if a background job finishes while a foreground job is running; previously I was thinking that we'd defer merging the background job's state mutations until the foreground job finishes, but if the foreground job uses the same setup of operating on a copy of the global state, then we can just merge whenever. The one specialization for foreground jobs we might want to make is explicitly defining that foreground jobs always win conflicts.
Also, the ability to "rerun" previous commands from a buffer without actually re-executing anything would be a cool somewhat related feature.
If you want to chat about shells anytime shoot me an email or something: firstname.lastname@example.org
> * entirely async api
Surely async only makes sense for IO?
> * GPU first OS
What does this actually mean? The GPU can't be used for everything
> * a better shell. I know I can run whatever shell I want on unix but a better shell being native would go far.
What do you mean by 'native'? Do you just mean 'ships with the OS'?
Chrome OS has a fixed app install location, as does Windows 10 in 'S' mode (since you can only install store apps).
I remember being intrigued by the database-as-filesystem idea when it was first touted - has any OS actually implemented this? I'd be interested to see how it works in practice.
Everything old is new again... this was from the mid-60s and was relatively popular for its time.
> It is named after one of its developers, Dick Pick.
What an unfortunate name.
WinFS was ahead of its time. Also, re file watching: I think I want a reactive API for a lot of those operations.
The actual end of the line for UNIX improvements done by the original designers.
I never got the mystique around the middle stop instead of the end of the line.
Without specifics about what differences you mean, lambda = function = process = thread = fiber = service = worker = ...
That lambda will need to be scheduled, and it will need to maintain its scope... All the same general issues could exist.
But users wouldn't have to be able to start processes. Instead, lambdas could be associated with persistent storage of state, and processes would be started by the OS to apply the lambdas in a simulation loop; users wouldn't directly start processes.
Perhaps thinking of those as processes isn't quite right either.
Interesting. Actually, I have one of my processes serializing its state and exporting it to a different process, while all of the clients continue playing the MMOsteroids game they're logged into.
But I think I kinda get your point, but would challenge you to think about this issue with the frame of mind of access control and permissions a bit. I think you'll find the need for some kind of process like task. Maybe not...
Note I already mentioned persistent stores of state.
I think you'll find the need for some kind of process like task. Maybe not...
Maybe re-read. I've already said that there would be a process like task. Lambdas will need to be associated with state. Users won't have to start processes. Instead, processes will be more like processors.
You were actually in that thread. Did the commenter's work not actually use FP or you not see it?
That's kinda trippy. It's like I was a different person. Also, I was working on the predecessor system to the one I'm currently working on. Back then, the thing was written in Clojure. I later ported it to Go. The design philosophy has changed a heck of a lot as well. Back then, I was going to have everything on the same very large and fast virtual instance, with modest goals for the largest population/scale. Now my system is scalable by adding more "workers," which I've spread out onto small AWS instances.
Software list is here https://static.redox-os.org/pkg/x86_64-unknown-redox/
Aren't these two contradictory? Unix-like would be a classic monolithic design.
That said, I don't think any of the issues in that thread are related to unsafety. It's perfectly safe to panic. None of those bugs are memory corruption, arbitrary code execution, or so on, which is what safety tries to protect against.
It's safe if you can prove it's safe. Even C with Frama-C. If the type system can't do it, use an external tool to do it. Tools exist, automated and manual, for doing such safety proofs on high-, low-, assembler-, and microcode-level software. There's a long tradition of verification in hardware, too. Rockwell-Collins even uses the tools common for hardware to verify software and hardware together.
The real problem, aside from just limited resources to build tooling, is a mindset of trying to rely on one tool for safety instead of mixing up different approaches to get their benefits.
Those are all bugs produced essentially on request. That doesn't bode well for the security and robustness of the project. The end user doesn't care whether the class of bug is memory related or whatever else if the end consequences are the same. Despite having the benefit of safety and the decreased burden on the programmer this offers, bugs still abound in Redox, which points to it being written in Rust as incidental at best.
NOTE: Observers should resist the temptation to interpret this post as an endorsement for the "rewrite everything in Rust!" crowd.
Linux kernel developers are the first to acknowledge that something has to be done to change the course of CVEs in the Linux kernel.
Just in 2017, 68% of exploits were caused by out-of-bounds errors.
I've installed Linux on like 5 of my desktops/laptops. The best way to describe my issues with it are "death by a thousand cuts". Namely that random stuff just doesn't work, either at all or the way I expect or want.
Installing Nvidia drivers is hit or miss. Wifi often doesn't work out of the box. The touchpad experience is far inferior. Also all of the desktop environments I've used have been really ugly (GNOME, KDE, Unity).
Even when installing Linux there are so many options for partitioning (what format you want, swap space size, etc) which are likely overwhelming for non technical users.
This is very much in the eye of the beholder. Linux with KDE has been my daily driver since at least as far back as 2009 (with the KDE 3.5 series), possibly earlier.
In no way would I say it's any uglier than windows, especially now with all the effort to make key GTK applications fit with Qt ones via theming. Windows 10's hodgepodge of old and new styles for things like settings is more offensive to me than anything a Linux graphical desktop does.
Agree the driver situation lagging on Linux vs. Windows is sub-optimal, but when the driver support is there, I don't find Linux to work any more poorly than Windows.
The reality is that every major desktop system has issues, but at least with Linux, if you learn enough about the plumbing, you can go in and try to fix or work around issues that arise. Until we move into a new world of robust, correct-by-construction, non-worse-is-better software, I'll take the lumps I get with Linux over the others whenever I have the choice.
Have you ever tried that? You'll find that the plumbing consists of 20 different standards of pipe cobbled together over the past 30 years by dozens of different plumbers, each with their own conception of how plumbing should work but too lazy to tear out the whole thing and replace it so they just patch in their change with duct tape and rubber bands.
And worse, that's the culture the community seems to prefer. Case in point: the one guy who's shown a willingness to unify that plumbing, Lennart Poettering, is loathed for being successful at it.
This is actually true:), but I defy you to find a desktop OS of which the same isn't true. Did MS ever fix the fact that they have 2 completely different control panels with partially-overlapping functionality? And I know Windows Explorer still can't open certain paths because DOS had a ... poor implementation of device files.
> Poettering, is loathed for being successful at it.
Poettering unified the plumbing by taking a demolition crew to the house and replacing the plumbing and electrical systems while people were living in it, informed us that objects being automatically thrown in the trash if they were on the floor when you left the room was a feature, and demanded that all faucet manufacturers adopt a new pipe size that only his plumbing uses.
 https://news.ycombinator.com/item?id=11797075 https://github.com/tmux/tmux/issues/428
If you're referring to the split that came with Windows 8, there's been (slow) progress in Windows 10. There's still a handful of settings left in the old Control Panel, but most of them have been moved to the new Settings app.
I have. I have been able to find offending runaway Flash sessions and kill them without taking out all my Firefox windows. I have been able to force wifi associations when some software flaw is preventing an automatic join to those networks. In similar situations Windows will simply not enumerate the network and recourse is limited. The examples go on for situations where things don't work.
Look, I agree with you and the criticisms of the CADT development model. I'm not claiming the Linux experience is objectively great, end-of-story. I'm claiming that if the hardware is supported by a mature enough driver (which is true of a lot of hardware!), I don't find the Linux experience to be more frustrating than Windows/Mac, subject to the caveats I made about commercial software in https://news.ycombinator.com/item?id=18444723 . And it's great to not to have to go out of my way to keep the OS vendor from gathering data from my system without my express permission.
Another nice thing about Linux is that it makes a good host for VMs, so for those times when Windows is needed (assuming not for games), it can be kept in a VM with some measure of control.
We are a long way from desktop software utopia, but real breakthroughs probably depend more on rigorously-architected and implemented environments vs. working on the edges of decades-old architectures whose fundamental shortcomings are legion and are implemented in unsafe languages. Windows, MacOS, and Linux (or name your choice of free Unix-alike) all suck in this regard.
> And worse, that's the culture the community seems to prefer. Case in point: the one guy who's shown a willingness to unify that plumbing, Lennart Poettering, is loathed for being successful at it.
I don't know that that's a fair characterization. With some of the people raging, you'll pry their current way of doing business out of their cold dead hands. Others welcome better and more sound ways of doing things <raises hand>. But there are plenty of problems with the ad-hoc, NIH, and questionable software quality approach that the systemd implementers use. There was a front-page HN submission just a day or two ago on readiness protocols (written by J. de Boyne Pollard) covering systemd shortcomings, not to mention udev screwups, dhcp issues (does systemd really need its own dhcp client??), etc. All in all, MHO is that systemd is a significant step forward but suffers mightily from its ad-hoc development approach.
Just for clarification: the punctuation in that sentence should not be mis-read as my FGA on readiness protocols covering udev and DHCP, which it does not. (-:
That's why I specified it as being for non-gamers. I have an old built-in Intel graphics controller (the same as in the old MacBook Air model that still has F-keys) and have no problems. That said, I have indeed always highlighted graphics driver quality as a permanent problem: nobody ever writes really good ones. Even Windows drivers are quirky, and Linux drivers are full of problems in every single version (yet it's rarely too hard to set up a configuration that works nicely and forget about it, if 3D graphics isn't among your primary computer usage tasks).
> Wifi often doesn't work out of the box.
On MacBooks only. In my experience it has worked out of the box on non-Apple PCs for about 7 years now.
> The touchpad experience is far inferior.
According to my experience exactly the opposite. But I haven't used PCs with multitouch touchpads so it's probable you're right.
> Also all of the desktop environments I've used have been really ugly (GNOME, KDE, Unity).
To me and the people I've shown them to, Unity and today's KDE5 (as shipped with Manjaro) look great (old KDEs looked ugly, I agree). And the look can be customized to whatever you may desire.
> Even when installing Linux there are so many options for partitioning (what format you want, swap space size, etc) which are likely overwhelming for non technical users.
It's exactly the same as with Windows: either use the default partitioning or whatever partitioning you want. The only difference is that Linux installers usually let you define more complex partitioning without third-party tools like PartitionMagic/Acronis. Anyway, it's always a great idea for a non-geek user to ask a geek friend to install an OS for them rather than doing it themselves, regardless of which OS they want installed.
Another huge problem for Linux desktops is the lack of support for high-quality hardware--for example, I haven't found any Linux laptops with trackpads that are in the same ballpark as Macs' (and installing Linux on Macs and configuring/calibrating it to behave sanely is a huge pain).
There is a lot of value to having some organization have both end-to-end responsibility and authority for the functioning of end-user software stacks such as desktop environments. Even the Red Hat model is not enough to keep all the myriad independently-developed and maintained pieces of FOSS synchronized and moving in the right direction collectively to make an appealing target for commercial desktop development. I don't know if there is a viable solution to this problem building on the FOSS ecosystem as it exists.
And of course, irrespective of what RMS would wish, it seems the only people willing to work on a lot of the hard and unsexy problems are in fact commercial developers that need to make money from the sale of the software, not just support.
But I still share the original poster's disappointment in yet another UNIX-like system, especially at a time when, in my opinion, Personal Computing and the Desktop in particular are being driven towards extinction.