Reasonable enough. The article focuses on client-side programs, especially Android, which have somewhat different requirements than the Posix model envisions. Most of the things you need to do on headless servers can be done through the Posix model.
The big weaknesses in Posix they point out are in interprocess communication and parallelism. Both were afterthoughts in UNIX.
UNIX/Linux land never had a good IPC mechanism. (As a former QNX user, I've seen this done right; check out MsgSend/MsgReceive there.) Posix thread-level locking primitives tend to be too slow, but that's partly a hardware support problem. Async I/O now seems to be popular; networks are so slow on mobile, and servers have scaled to the point that dispatching overhead for all the running threads gets to be a problem.
Yes, Posix doesn't do graphics. That's good; if it had a window API, everybody would be complaining about it. Look at the KDE/Gnome wars, or the OpenGL/Direct-X wars.
Here's what I'd change in Posix.
On the file system front, I occasionally argue that UNIX/Linux/Posix file system semantics should be slightly different. There should be several types of files. "Unit files", the normal case, can only be changed in their entirety. Opening a unit file for writing should create a new file, and on successful close (not program exit, abort, or system crash) the new version replaces the old version for future opens. Today, you still have to jump through hoops to do atomic file replace, and the hoops are different for different OSs. Posix doesn't address this. This is something that should Just Work as the default file semantics.
"Log files" should just extend. This is what opening for append should do. You can't go back and overwrite. On an abort or system crash, the end of the file should be at the end of some recent write, and correct out to the last byte of the file length. Logging may be behind after a crash, but should never trail off into garbage.
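Today's nearest approximation is O_APPEND, which at least makes each write() land atomically at the current end of file, though it gives none of the crash guarantees described above. A minimal sketch; log_line() is a hypothetical helper, not a standard call:

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Append-only logging: with O_APPEND, every write() is atomically
 * positioned at the current end of file, even with several concurrent
 * writers.  What O_APPEND does NOT give you is the crash guarantee
 * described above: after a crash, the tail can still be garbage unless
 * the filesystem orders its data and metadata updates. */
int log_line(const char *path, const char *line) {
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return -1;
    /* The file offset for this write is chosen atomically by the kernel. */
    ssize_t n = write(fd, line, strlen(line));
    close(fd);
    return n == (ssize_t)strlen(line) ? 0 : -1;
}
```

Calling log_line() from several processes interleaves whole lines rather than bytes, which is as close as the current API gets to the "log file" semantics proposed here.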
"Managed files" are for databases. You would get two async I/O completions - one when the OS has accepted the write, and one when it's safely committed to disk/flash. Rather than flushing, database programs would usually wait for the commit completion. This would be a special mode asked for with an "ioctl" call, used only by the few programs that really care about database recovery. The database people would love to have well-defined semantics like this.
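The closest current equivalent is a blocking two-step: write(), then fdatasync() to wait for the commit. A sketch of the pattern with a hypothetical commit_record() helper; the difference from the proposal above is that the second "completion" blocks the calling thread instead of arriving asynchronously:

```c
#include <fcntl.h>
#include <unistd.h>

/* Two-phase durability, approximated with today's blocking API.
 * write() returning corresponds to the first completion in the scheme
 * above (the OS has accepted the data); fdatasync() returning
 * corresponds to the second (the data is on stable storage). */
int commit_record(const char *path, const void *buf, size_t len) {
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, buf, len) != (ssize_t)len || fdatasync(fd) != 0) {
        close(fd);
        return -1;
    }
    close(fd);
    return 0;   /* durable, not merely sitting in the page cache */
}
```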
"Temp" files (in /tmp or some designated directory) would have only the guarantee that after a crash, they're gone.
On the interprocess communication front, I'd copy MsgSend, MsgReceive, and MsgReply from QNX. That's subroutine-call type IPC: send a message and wait for a reply back. This is what you usually want for multi-process programs. Connection setup could be improved over QNX: the mechanism for finding the receive port in another process needs work, and there should be at least one well-known port number (as stdin/stdout/stderr are 0, 1, and 2) that each process tries first. This needs to be integrated with the CPU dispatcher, as it is in QNX, so that when you do a MsgSend, the sending thread blocks and the receiving thread unblocks without going through the scheduler. Otherwise, you go to the back of the line for the CPU on each interprocess call.
I wish the whole (disk) I/O mess would be cleaned up. It's just laughable how broken it is across operating systems, and it's supposed to be the backbone of most of them.
Also, memory management. Only in very recent years have proprietary APIs surfaced in Linux and friends that let you leverage some of a modern MMU's capabilities (which is good). Meanwhile we still have stupid MM semantics like fixed, non-reversible allocation commits.
> The database people would love to have well-defined semantics like this.
Yes. Yes they would.
Right now there is no operating system in which you can actually do asynchronous disk I/O. You can do the equivalent of write(2) asynchronously, at least on Linux, Windows, and I think SunOS, but in all of them it's not really asynchronous: they run extent allocation synchronously, which can, in the worst case, mean multiple disk read/write cycles. This of course means that many applications that could make use of async writes need to use threads and queues to do it instead.
Like I said, it's just the extent (cluster run) operations: if you're overwriting, the writes shouldn't block, and I couldn't make them block in testing; if you're extending, they usually don't seem to block, but sometimes do.
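A partial workaround today, assuming a filesystem with native support (ext4, XFS), is posix_fallocate(3): reserve the extents up front so later writes into that range don't stall on allocation. A sketch with a hypothetical preallocate() helper:

```c
#include <fcntl.h>
#include <unistd.h>

/* Preallocate the file's extents so that later writes into [0, len)
 * don't have to run extent allocation, the worst-case synchronous
 * metadata I/O described above.  Caveat: on filesystems without
 * native fallocate support, glibc emulates this by writing zeroes,
 * which defeats the purpose, so check the filesystem first. */
int preallocate(const char *path, off_t len) {
    int fd = open(path, O_WRONLY | O_CREAT, 0644);
    if (fd < 0)
        return -1;
    /* posix_fallocate returns 0 on success, an errno value (not -1) on failure */
    int rc = posix_fallocate(fd, 0, len);
    close(fd);
    return rc;
}
```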
> Today, you still have to jump through hoops to do atomic file replace, and the hoops are different for different OSs
Do you? I may be misunderstanding you, but Posix says rename() is atomic.
Regarding your file system issues, note that you can have most of this with O_TMPFILE on Linux. I think the slight exception is that file modes can't be restored if you are not a member of the original file's user or group.
But disregarding that, the file system changes you suggest (strict separation of "unit", "log", "managed", "temp" files) are artificial, so I don't think that they belong in the kernel. It's the old mechanism vs policy theme.
A small issue I have with Posix file systems is that the transaction granularity is a single file or sub-tree. You can't have transactions that affect files in distinct sub-trees.
Posix "rename()" is now supposed to be atomic, but it isn't on NFS.[1] Windows has version problems. XP couldn't do an atomic rename. Vista could for NTFS file systems, using the NTFS transactional rename function. That's now deprecated in favor of ReplaceFile.[2] That's supposed to be atomic even if a copy across file systems is required. Not sure about Windows file shares.
In MacOS X, "rename" was not atomic before 2011. It's now believed to be.[3]
Even with "rename", you have to find the directory in which the old file lives, and create the new file there (as something like a ".part" file) before renaming. This requires resolving symbolic links and such. So it's non-trivial to get this right. My point is that this should Just Work as the default.
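For reference, the dance in question, sketched for the simple case where the path needs no symlink resolution; atomic_replace() is a hypothetical helper and error handling is trimmed:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* The "hoops": write a sibling ".part" file in the same directory,
 * force it to disk, then rename() over the original.  rename() is the
 * atomic step; the fsync() beforehand is what keeps a crash from
 * leaving an empty new file behind on some filesystems.  Real code
 * must also resolve symlinks so the .part file really lands in the
 * final file's directory. */
int atomic_replace(const char *path, const void *data, size_t len) {
    char tmp[4096];
    if (snprintf(tmp, sizeof tmp, "%s.part", path) >= (int)sizeof tmp)
        return -1;

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        unlink(tmp);
        return -1;
    }
    close(fd);
    return rename(tmp, path);   /* readers see old or new, never half */
}
```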
I don't know QNX, but for IPC you should also take a good look at L4 microkernel IPC. It might be what you describe here. The point is that it's fast: nearly function-call speed.
There should probably be a more complex IPC layer on top of or beside it. Something like DBus with schemas, names, groups, broadcasts, etc., because otherwise people will reinvent it poorly.
QNX IPC is deliberately just a delivery mechanism for a byte string. That's the OS level. You can put JSON, Google protocol buffers, or something else on top of it. In the hard real-time world, it's common to just pass known C structs with the necessary data items around. The OS doesn't care.
If you're implementing microservices, it's quite useful. I did a robot vehicle that way. All the Boston Dynamics robots run QNX.
The big question with IPC is what problem you are trying to solve. For microkernels you want blazingly fast cross-process function calls, but an IPC mechanism that can do only that is mostly useless for Unix userspace. For example, QNX-style IPC, traditional SysV-style Unix IPC, and pthreads all lack an API for select()-on-IPC-primitive (VMS and Symbian have that, and NT has a mechanism for it that mostly works).
> There should probably be a more complex IPC layer on top of or beside it. Something like DBus with schemas, names, groups, broadcasts, etc., because otherwise people will reinvent it poorly.
People also tend to reinvent poorly the stuff that they consider "bloated" or "unnecessarily complex for their use case". Just look at C++ vs. Java (Java was indeed marketed at the beginning on the claim that C++ is unnecessarily complex, besides the famous "write once, runs everywhere"), XML vs. JSON, or CORBA vs. SOAP (no, please don't start a flame war about which of these is the better one ;-) ).
I'd sum it up in terms of "standards bodies / enterprise would be perfectly happy if everything were ossified in a form that encompassed all possible use cases, regardless of complexity."
Whereas reimplementation in a lighter weight protocol is usually the result of "devs don't want to deal with the above when it isn't needed for 80% of common use cases."
They managed to write an entire article about POSIX and never mention the filesystem parts? Even the section about "asynchronous I/O" seems to have more to do with locking than actual I/O. Nothing about directory operations, permission models, consistency/durability requirements, and all the other things that make that part of POSIX the most egregiously out of date. I wrote a bit about this (http://pl.atyp.us/2016-05-updating-posix.html) and barely scratched the surface. I wish they'd dared to go there too.
There's a whole lot of POSIX that they didn't mention. They concentrated upon the system call API and left out all of the concepts (character sets, locale, environment variables, regular expressions, command line conventions), the filesystem, the shell, and the utilities.
Yep, only libc. And only for some popularly installed (after system install) programs.
> We selected these apps from the official marketplaces for each OS: Google Play (45 apps), Apple AppStore (10 apps), and Ubuntu Software Center (45 apps). We chose popular apps based on the number of installs, selecting apps across nine categories, ..
So mostly GUI programs. Not to mention that probably all those "apps" call into libc eventually (even android ones).
Only thing this made me think about is "why is this taken seriously here?".
The original goal of the POSIX standard was application source code portability. However, modern applications are no longer being written to standardized POSIX interfaces.
Actually the inverse: don't speak for yourself, because it's irrelevant.
What the authors did was try to speak for what happens at large -- which is what matters. Outliers will always exist, but their team analyzed thousands of packages across three different OS environments to come to conclusions.
It's not about there being counter-examples, it's about what they're saying being the norm.
I don't know what you're referring to with that distinction. AFAICT they weren't testing for POSIX compliance but rather where the POSIX layer was being used in practice, which statistically was by system-level frameworks rather than the applications themselves on the OSes they tested.
The notable change between now and 25 years ago is that the POSIX layer tends to be used at the lower levels of system frameworks, while the programs that run on those OSes tend to be written on top of higher level frameworks and do not directly use the POSIX APIs.
This is fine and makes sense given how much has changed in the last 25 years, but given that the POSIX layer is no longer being used (statistically speaking) for what it was originally designed for, it raises questions about to what degree that design is still valid, and whether or not it should be revisited to better suit what it's actually being used for.
I don't think anyone has any answers, but it's always good to question assumptions and never let anything in tech become a sacred cow. Maybe POSIX is fine for its current role, but that doesn't mean people shouldn't think about it.
Disclaimer: I skimmed the article pretty aggressively, so I may have misunderstood something.
I'm a little confused by the distinction. For example, if I use a framework that abstracts away pthreads from me, but still uses pthreads in its implementation of threading, is the application I write still POSIX-compliant?
No, the application would not be POSIX-compliant, but it would be running on a POSIX-compliant OS. I was confused by the comment I replied to because it sounded like he was saying that POSIX compliance was more likely in "modern" applications, but if anything the opposite is the case.
POSIX compliance is much more common in consumer OSes nowadays, to the point that every major desktop and mobile platform is built on a POSIX-compliant kernel and runtime except for Microsoft's offerings. In that sense POSIX has taken over the non-Microsoft world, but now it's being used as a much lower-level compatibility layer for system-level frameworks rather than what it was originally designed for.
Even Windows has had multiple POSIX implementations over the years, they were just built as compatibility layers on top of the native APIs. So POSIX-compliant operating systems are almost universal at this point, but POSIX-compliant applications/daemons/etc. (all the programs that run on those POSIX OSes) are rarer than ever.
it sounded like he was saying that POSIX compliance was more likely in "modern" applications
Yes. Part of this is because more operating systems are approaching POSIX compliance: the number of patches we need to port random Linux code to run on FreeBSD has dropped tremendously over the past 20 years.
> Try to write a modern iOS, OS X, Windows, Android, ChromeOS UWP with standardized POSIX interfaces.
You're confusing popularity with being outdated.
It's irrelevant if some vendors intentionally go out of their way to avoid implementing any standard interface which, at the very least, can't be controlled by them.
And it's also absurd that you're quoting platforms which even restrict which programming language can be used to write applications purely due to their business strategies.
"You're confusing popularity with being outdated."
POSIX was supposed to be popular to the point of being the default way that apps were written. It failed. Whereas there are quite a few portable apps on (insert framework here). Those succeeded, with many running on non-UNIX OS's too. You'll need those or similar things in your code instead of POSIX for truly portable apps. That's his point.
> POSIX was supposed to be popular to the point of default way that apps were written. It failed.
So POSIX was a long series of attempts to codify baseline Unix practice that arose because in the '87 time period there were tens of small, warring, incompatible Unix distributions ( cf. http://www.ugu.com/sui/ugu/show?ugu.flavors ), and systems software vendors could not write software that worked across all of these platforms properly.
The original report is a cri de coeur, a call to action written in POSIX magazine, attempting to oversell the story with a clickbait title in order to urge further POSIX work. This thread and others are loaded with people who have only read the title and have their own half-formed opinions.
There was never any idea that all apps everywhere would be written with POSIX. That's a ridiculous reading. I was there at the time. It was just an effort to get all of the Unix vendors on the same page so that the big commercial non-Unix OS vendor and their salespeople wouldn't eat the lunch of the high end workstation and server market.
And it succeeded phenomenally. With POSIX as a more-or-less stable substrate, it became possible to run, e.g., gnu's entire platform on almost anything that was POSIX-compliant or close; e.g. SunOS, Solaris, and eventually Linux, with minimal effort. It became possible for software like vi, emacs, mysql, sqlite to exist. Sysadmins could write shell scripts without anything like the kind of portability fear they had in the past.
Uncoincidentally, this continues today; all of the non-Windows software you're using is based on POSIX standards. Every phone is packed with POSIX-foundationed databases, daemons and sometimes even userlands. And of course there's Linux which, despite the continuing efforts of Red Hat, is about as POSIX as it comes.
Read the box at the bottom of page 9 of the original post. POSIX won, totally and unconditionally.
> I was there at the time. It was just an effort to get all of the Unix vendors on the same page so that the big commercial non-Unix OS vendor and their salespeople wouldn't eat the lunch of the high end workstation and server market.
Precisely.
Additionally, criticizing the technical merits of an international standard established in 1997 is a disingenuous ordeal. If anyone actually has any meaningful and tangible technical issue to point out regarding POSIX, they should simply put forth an implementation that actually addresses the issues and use it as a basis for a discussion to update the standard.
Standards are meant to be used as references, and therefore are meant to be updated when the need arises.
As I said, I think there are two conversations going on in these threads: POSIX for UNIX portability, and general portability. POSIX succeeded on the former but not the latter, as it wasn't designed for it. Today, we have portability frameworks that can do more than POSIX on more OS's. They definitely win on general portability, and they're worth considering for UNIX portability too. That's my position.
As far as the early days go, you might be able to help me out on a point I have few sources on. One history of Xenix posted here claimed its design choices had a strong influence on the adoption of features that later led to POSIX. The source was biased, so it's hard to assess. What's your take on Xenix's influence on the adoption of standard or POSIX-like APIs?
I bet your source was right. I don't have inside knowledge, but as one of the first Unixes to try to make a serious business out of it, and as an arguable progenitor of the SysV approach, it wouldn't surprise me.
That said, POSIX was never designed for general portability. Just as a seal of unifying approval and sanity for existing practice; far more descriptive than prescriptive.
If you read up on why companies such as Microsoft make it their point to break and avoid standards, you would understand why your comments regarding "popularity" are moot.
Business strategies don't make or break technical merits.
Linux is mostly POSIX compliant. It has layers on top that are not POSIX, but that doesn't mean it is not POSIX or that you can't expect your POSIX apps to work on Linux.
Not at all; it means that many mistakenly use Linux syscalls as a synonym for POSIX, thus making the same lock-in to GNU/Linux as Google, Apple and Microsoft do to their own systems.
*BSD devs jump for joy at having to port GNU/Linux applications that depend on D-Bus, systemd, or any other Linux-specific APIs.
Given that the *BSDs also have many of their own extensions (e.g. OpenBSD has many security extensions to the POSIX standard), that isn't the best example of a defense.
I would not call them moot if the goal of a standard is uptake by OS and application developers. Wide usage of a product = popularity. It didn't achieve the popularity it sought. This is independent of what businesses like Microsoft were doing.
Microsoft's strategy was to create a permanent dependence on them for long-term profitability. They had too many tactics for that to cover here. Once open standards proliferated, they subverted those too with proprietary extensions under the "Embrace, Extend, Extinguish" strategy. Companies buying into their solutions and using non-standard features paid the price.
Regardless, people still created apps that worked on such proprietary platforms and UNIXen via portability frameworks or libraries. Some products using them became quite popular (e.g. Apache, Netscape/Firefox), with some of the frameworks themselves becoming popular too (e.g. Qt, GTK, wxWindows, Tk). POSIX got into enough stuff that it was a default many picked up. Whereas these others, people went to willingly even when POSIX was available, and got even more portability. That's why they were objectively more successful on portability.
POSIX was never trying to be the universal interface for all kinds of applications, just some sane common interface for UNIX vendors, all of which do implement POSIX to a large extent these days.
You could perhaps make the argument that by the time POSIX saw enough adoption, software development had evolved, and that nowadays there are lots of interfaces that are not part of POSIX/not portable, but that's a slightly different issue.
That's close to my argument. There are kind of two conversations going on in these threads: UNIX portability and general portability. POSIX is fine on the former but worse on the latter, since it wasn't designed for it (among other things).
> I would not call them moot if the goal of a standard is uptake by OS and application developers.
That point is moot, as those platforms were designed with the express purpose of being particularly hostile regarding interoperability while forcing other corporate products as alternatives, in a well-known lock-in strategy.
"those platforms were designed with the express purpose"
They were designed with the purpose of giving users a way to get things done that was profitable to the company. The first part, along with lock-in techniques, is why most of the desktop market is Windows, with a large chunk of the server market. That means the kind of portability that matters to people wanting to maximize benefit to users, or profits to the company, had better include a Windows version. There are portability alternatives to POSIX that allow that, usually with other benefits on top. Far from moot...
> They were designed with the purpose of giving users
No, they actually weren't.
The "users" don't have any say whether Windows complies with any standard, or even if Microsoft breaks all of them to try to force the world to submit to their "vendor lock-in" business strategy.
Trying to pass off the consequences of vendor lock-in policies as technical arguments is somewhere between absurd and disingenuous.
You said the only purpose of what Windows did was locking in people. I said it was to provide software that let people accomplish what they want while making profit sustained by lockin tricks. Are you saying Microsoft never intended people to use Windows API or software to accomplish their personal goals? That there is zero benefit to users of Windows in consumer or business space?
Looking at that list, I'm struck by the fact that it includes most of the applications users know they're using, but only a small minority of the processes running on an OS X system.
POSIX is an application interface. How OS X is implemented under the hood is irrelevant. It does present an application interface mostly conformant to POSIX standards, so in that sense it is POSIX.
That's hardly encouraging, when all the basics and all the newer OS X abstractions are based on things other than POSIX standards, and ask programmers to connect to them with non-POSIX interfaces.
The fact that it also supports a POSIX layer is about as relevant to what we're discussing (whether POSIX is fit for today's needs and used in today's mainstream development) as the presence of some X Windows translation layer is in Wayland.
That's what majority of computer users (smartphones included) use though. As for developer-oriented CLI tools, I've also seen more and more GNU-isms and macOS-isms creeping in.
Counter example: Almost everyone uses the Apache webserver when they browse the web, but hardly anyone even knows what that is. We could easily compile a long list of similar crucial infrastructure software that virtually everyone is using and that doesn't have graphical user interfaces.
Even if everybody is using Apache: Apache httpd doesn't use POSIX directly but APR (the Apache Portable Runtime), which removes the POSIX dependency from the core of httpd.
That's one of the competitors to POSIX for application portability. Unlike POSIX, it was totally open as well, versus people telling me The Open Group will sell me a copy of the full standard.
Interestingly, I took a random tour through the source code and didn't find a single OS-specific API. Which is not to say that the code was intrinsically POSIX-specific. It is a surprisingly good example of multi-platform code. The #ifdefs that I saw were all to do with defining functionality from scratch if it didn't exist, rather than the more usual massively indented list of platform-specific hacks. Although I had fairly high expectations, I was still surprised at how nice the code was.
I didn't spend all that much time, so I'm sure I missed a lot, but I recommend taking a look for yourself as it is quite interesting.
> There is some sort of perverse pleasure in knowing that it's basically impossible to send a piece of hate mail through the Internet without its being touched by a gay program. That's kind of funny.
Reading comprehension. I did not say that people only visit pages that are served by Apache. Even if Apache has just 20% market share there is a very high chance that you will hit some pages served by Apache. Plus, the same argument really applies to most alternatives to Apache.
I'll agree that a lot of people prefer and recommend things like kqueue+kevent over select/poll. But my experience is that the statement that "hardly anyone uses" select/poll is still erroneous.
On an out-of-the-box installation of FreeBSD/TrueOS, most processes listed by "ps" show a wchan of things like ttyin, select, poll, pause, uwait, wait, or nanslp. On the FreeBSD machine that I am typing at right now exactly one program is waiting in kqread, one of the inner processes of Chrome.
Install my nosh toolset, and a fair amount of that changes, because my programs use kqueue+kevent and suddenly the "ps" output is full of kqreads. But what's left is still not "hardly anyone", by a long chalk.
Would that it were! There's a nasty kernel panic somewhere in the innards of the kevent system on FreeBSD, where a sleeping thread holds a non-sleepable lock, that only manifests itself (at least for me) when there are a lot of programs using kqueue+kevent on a system that is doing a lot of stuff. If more people used kqueue+kevent in more programs, there would be more people wanting it fixed. (-:
Yeah. I considered this critique, which is why I added the performance caveat. The thing is, libraries of other languages are being written abstractly over the native eventing systems on the OSes.
For example, the MIO library in Rust is an abstraction over epoll, kqueue, and the corresponding MS primitive, not one over the POSIX standard. This means that if you use that in a Rust library, you get that platform independence for free.
The standard select and poll syscalls perform just fine if you only have a handful of sockets. Heck, they perform just fine as long as a nontrivial proportion of your sockets are ready at any given time.
The place you need kqueue and epoll is when most of your connections are idle -- IRC servers and web servers with large keepalive times. But the vast majority of processes are not IRC or web servers.
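For the small-N case, the poll() side of the argument looks like this; wait_readable() is a hypothetical wrapper, and the linear rescan of the whole fd array on every call is exactly the cost that kqueue/epoll avoid:

```c
#include <poll.h>

/* Wait up to timeout_ms for any of the given descriptors to become
 * readable.  Returns the number of ready descriptors, 0 on timeout,
 * or -1 on error.  Rebuilding and rescanning the pollfd array on
 * every call is O(nfds), which is fine for a handful of sockets and
 * only hurts when most of thousands of connections are idle. */
int wait_readable(const int *fds, int nfds, int timeout_ms) {
    struct pollfd pfds[64];
    if (nfds < 0 || nfds > 64)
        return -1;
    for (int i = 0; i < nfds; i++) {
        pfds[i].fd = fds[i];
        pfds[i].events = POLLIN;
        pfds[i].revents = 0;
    }
    return poll(pfds, nfds, timeout_ms);
}
```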
One hilarious one is basename(3), which is two functions with different behavior, but the same name. The one you get depends on which headers you include, and what _GNU_SOURCE is set to.
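A sketch of the trap on glibc (gnu_base and posix_base are hypothetical wrappers): <libgen.h> #defines basename to the POSIX version, while <string.h> under _GNU_SOURCE declares the GNU one, and the two disagree about trailing slashes and about whether the argument may be modified:

```c
#define _GNU_SOURCE
#include <string.h>   /* under _GNU_SOURCE: declares the GNU basename() */

/* GNU basename never modifies its argument, and for a path with a
 * trailing slash it returns the empty string: "/usr/lib/" yields "". */
const char *gnu_base(const char *path) {
    return basename(path);
}

#include <libgen.h>   /* glibc: now #defines basename to POSIX __xpg_basename */

/* POSIX basename may modify its argument, and it strips trailing
 * slashes: "/usr/lib/" yields "lib".  Hence no string literals here. */
char *posix_base(char *path) {
    return basename(path);
}
```

Same identifier, two behaviors, selected by header order and feature macros; on glibc, gnu_base("/usr/lib/") returns "" while posix_base() on a writable copy of the same path returns "lib".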
CLI and GUI have different grammars and capabilities. If you think that CLI means "prehistory" and GUI means "future", then ask yourself why CLI has "survived" GUI for at least 30 years.
At the end of the 80s and throughout the 90s, a common belief was that the CLI could be completely replaced by the GUI; people just hadn't yet figured out how. The GUI, however, never delivered those last 10-20% that you need in automation, or in a context that requires reproducibility.
Windows NT was designed towards those promises, but eventually Microsoft realised their mistake and created PowerShell, ten years ago.
The entire dev world has been catching on to the fact that you get reproducible procedures when you use a CLI. Data scientists are another group who know how much power they get from a CLI.
Yet the myth of the obsolete CLI and the omnipotent GUI still is strong. Especially with those people who never really worked with a decent CLI.
Microsoft didn't have a mistake to realize. From very early on, one of the must-have toolsets for Windows NT was the Windows NT Resource Kit, with a whole bunch of command-line tools -- provided by Microsoft. There was in fact a whole load of command-line tooling for Windows NT, well before PowerShell and back to version 3.1 .
Far from this being the reversal that you propound, actual history is even more in favour of your argument. (-:
Look at the design of NT. Every output is based on IPC and calls to the event logger. That something like a stdout exists is due to backwards compatibility and mostly the Posix subsystem. The entire OS was meant to be used and administrated by means of GUI. The existence of a bunch of CLI tools that deviate from the main paradigm doesn't prove anything to the contrary. Have a look at the literature on the design and concepts of NT.
>"Every output is based on IPC and calls to the event logger."
What does that even mean?
I think you're painting an incorrect picture of the design goals of NT. The original NT was designed by a group of people (e.g. Cutler) with VMS background and they sure as hell didn't have any GUI in mind. In fact, Windows came to the picture much later. Whether NT is administered by a GUI or CLI is totally irrelevant to NT design. Today all the NT derivatives (Windows Server) can be administered with a rich variety of command line tools.
Stdout exists because the original subsystems (OS/2, Windows, MS-DOS) required that. Same applies TODAY. Where is the NT bias to GUIs exactly?
To be fair, the original Windows API didn't have any concept of stdin/out/err. In Win16 days most development environments had something like that, but it was emulated inside compiler-vendor-supplied runtime libraries. The CLI as a first-class OS API came with NT.
Ok. I'll bite. You're speaking from the perspective of someone locked into a way of doing work through the GUI, limiting your development experience to that of what the GUI developer (MS in this case) has provided you.
The CLI offers an infinitely combinable and customizable development experience, where huge amounts of innovation have been occurring. You can see this in modern web development toolchains and new build tools for new languages, which simply would not happen if you restricted your dev experience to just the GUI. I keep going back to just creating Makefiles for simple automation tasks.
Even the GUIs are starting to learn from this, which is why I think VSCode and Atom are becoming popular; they make it very simple to integrate new CLI tools into the GUI's workflow.
>Ok. I'll bite. You're speaking from the perspective of someone locked into a way of doing work through the GUI, limiting your development experience to that of what the GUI developer (MS in this case) has provided you.
No, I'm speaking from the perspective of someone that has worked in the old days with Sun OS (pre-Solaris) workstations and HP-UX machines, has used (for years) actual VT102 terminals, started using Linux distros around '97 and has been using the CLI in various forms since the mid-eighties or so.
There's nothing limited about the GUI (in theory, and, for most things that matter, in practice too). For example a visual flow language (think Automator in OS X or Quartz Composer, etc) can achieve all the "configurable pipeline" stuff people like in the CLI in a more controlled and formal way.
I also dislike the "dynamic typing" (everything is text/bytes) in the CLI, and would prefer something like PowerShell to have been the norm (with the accompanying toolset). Also note that there's nothing about the GUI that prevents someone from typing some commands too -- and in fact that's part of how lots of GUIs work.
Also, I wouldn't point to "modern web development toolchains" as something to advertise CLIs -- what would that be, Gulp, Grunt, Webpack and the like? All crufty solutions to inadequacies in steering the underlying language properly.
Even in theory, CLIs are not inherently more powerful; it's the other way around. They lack a dimension that GUIs offer (graphics), while GUIs don't lack any dimension: they can incorporate text commands and text fields and pipelines just fine.
Most Windows programmers I know have never really used an environment that doesn't punish you for using the command line. So what's your point? That most people you know never realised that many of their daily tasks can be easily automated if they wouldn't think that endless, tedious clickfests are the norm, and that this somehow proves that the CLI is obsolete?
I doubt that there is anyone at all in this entire discussion that can state with authority what "most Windows programmers" do. We can point to what we and the people we have met and talked to about this do, but to go from that comparatively scant few handfuls of people to a statement the size of "most Windows programmers" is an overgeneralization by several orders of magnitude.
I doubt that discussion and insight can happen without generalising. I also doubt that we need to cover 100% or even 99.999% of Windows programmers to be able to speak about what "most Windows programmers do".
Anybody who has experience from 2-5 different Windows shops and been to a few relevant conferences can make a quite accurate assessment of what "most Windows programmers" do.
Silly flamefest, this. Even back when I worked mostly for Windows/MS shops or clients, writing CLI tooling for business (not developer) needs was often the order of the day. Batch import/export tools and whatnot. Also prototypes of daemon-like Windows Services for client testing. All kinds of tools that weren't directly developer tools. Sure enough, we were interfacing with IT folks at the client end, but still, no programmers per se!
There's a reason GUI starts with a G. Other than "graphical" user interfaces, "other" user interfaces will also always have their place.
In some ways, looking at how much non-developer computer usage in the past decades has happened in front of Excel/Access sheets and tables, these aren't so "graphical" either. They essentially end up evolving into (haphazard) custom CLI-like keyboard-driven power tools: half-programmable, half requiring intimate familiarity with their "commands" (formulas) or even "training" to use them fluidly.
But if you'd rather code out your CLI needs in a PL REPL (aka CLI), more power to you ;D
I mean there may not be as many users, but they are massively more powerful (in the realm of data processing only, alas) than people who don't know the CLI. It may be obvious to us, but I should perhaps also point out that CLI ~= REPL for the language that you're using (Bash or whatever).
Sure, these days CLI may mean "a Jupyter notebook", but the same principle applies.
It may not always look the same, but the point is that a CLI/REPL should be a combinatorial force multiplier. Otherwise, it's not really worthwhile.
Find all files containing a particular string in a directory tree. For each of those files, if its filename appears in a file called Whitelist, write it to a file called Output. Otherwise, do nothing.
Here is one possible command line version (I haven't tested it as I'm on mobile):
Apparently you forgot to read the part where I mentioned "Languages like Python?".
import glob, os, shutil

whitelist = set(open('Whitelist').read().split())
# '**' with recursive=True is needed to actually walk the tree ('mydir/*' would
# not recurse); also check the file's contents for the target string, per the task
for path in glob.glob('mydir/**', recursive=True):
    if os.path.isfile(path) and 'some string' in open(path, errors='ignore').read() \
            and os.path.basename(path) in whitelist:
        shutil.copy(path, 'Output')
I think it would take at least a half hour, unless you have directory traversal, file I/O, and regex APIs memorized which most people don't. That's "trivial" in the grand scheme of things but it's really not practical to do day-in and day-out every time some random scripting necessity arises.
And you presumably have to clutter up your filesystem with random project files, etc, which is annoying. Are you going to have 1000 of these, one for every time you need to do some sort of minor task like this?
That command, on the other hand, took me 2 or 3 minutes to write, and I had to look up the syntax for `mktemp`. If I had remembered the mktemp syntax it would have taken a few seconds.
What? Those are not even POSIX environments (well, their primary and intended interface is not POSIX, but they maybe eventually inherit some of it because of their underlying environment or Unix influences, but that's irrelevant). Also, I never used OS X but AFAIK a lot of Unix software runs on it easily, so it must be mostly compliant if not completely (IIRC it's a certified Unix).
That's exactly the point of the report: Hardly anybody uses POSIX. Many people use frameworks on top of it though. If POSIX would go away many won't notice.
Isn't a nice common interface that those frameworks and higher-level language libraries and interpreters can be written for a nice thing? It's like a nice road network, you sure want to be in your car not on the road, but it's better if most of it is good for the common tire, you don't want to change your tires at every turn.
It's a nice thing if you're trying to run on many UNIXes. Most projects just target one these days: Linux. If mobile is supported, then add Android, which is also a Linux. That means they'd only have to code the portability framework for 1-2 POSIX-like OSes. POSIX offers about no advantage there. They'll just wrap it like they did Windows and Mac.
Just like CPU opcodes have failed... POSIX is a low level interface, that of the OS and the C library. It's another level of abstraction, necessary for the ones that are building other levels on top of it.
I don't think that's true. POSIX was first standardized in the late '80s, when many of these frameworks didn't exist. It's still nice that I can run GNU tools on basically any POSIX platform, and I think many (developers) would notice its disappearance.
Users probably won't notice, but they tend not to care about standards anyway.
Isn't the point that those are the most prolific platforms, where a common API to the system, POSIX, would be most beneficial to us as developers? But it's simply not possible to do?
From what I can tell POSIX has been replaced by per language stdlibs, with the portability of your software tied to the portability of a languages stdlib.
POSIX was mainly designed for C, most other languages have a higher level abstraction.
POSIX wasn't designed, per se. It's a standard, a lot of which was the codification of existing practices on operating systems where the C language was the primary language.
Also note the existence of FORTRAN language bindings, and the low level but persistent rumbles about C++ language bindings that have continued for the past 20 years or so.
Then note the number of languages where the doco tells you all about how "we wrap the C library", but what is actually there is in effect a language binding for POSIX for that language.
"POSIX wasn't designed, per se. It's a standard, a lot of which was the codification of existing practices on operating systems where the C language was the primary language."
That's what I was going to say about UNIX, C, and POSIX. UNIX and the C language made tradeoffs specifically oriented around the weak hardware they had. We no longer need those. POSIX evolved from a bunch of competing vendors doing a mix of justifiable and weird stuff, with a standard trying to abstract it all. It's inherently going to be worse than a clean design over the smaller number of modern UNIXen, without all the politics that went into POSIX.
And then we have the fact that many portable apps need to run on Windows, Mac, iPhone, etc., where POSIX ranges from useless to lacking platform-specific capabilities. So, a clean-slate design covering each of these platforms is the obvious solution. Fortunately, we have a bunch of them with varying styles and tradeoffs to choose from. :)
Yes, you are completely correct. And 'designed' was perhaps a poor word choice.
In terms of wrappers of POSIX, I'm definitely not saying that POSIX is dead and not used. It's great that it is a beginning point for porting to various platforms. What I was more pointing out is that as a developer, its your stdlib's portability that is key. For example, many languages use POSIX interfaces where they can, and then switch to platform specific interfaces where necessary, for performance or other reasons.
I think you're reading more into my comment than is there.
If you're trying to write a portable app between all of those platforms, would you choose POSIX or would you instead pick a framework or language that has a better high level abstraction?
Again, you're confusing popularity with being technically antiquated or obsolete.
The thesis presented in the article is that POSIX is outdated. The popularity of a platform, particularly one which was forced upon the world through a de-facto monopoly, is entirely irrelevant in any argument on the technical merits of a technical standard, particularly when some of those platforms were designed with the express purpose of locking out virtually all standardized programming languages.
Hm. The point of POSIX is to support cross platform development with a single standard. If you find that it is not keeping pace with the most modern and popular platforms, is it not outdated?
Technically speaking I think the standard is great, especially around having consistent definitions for things like close(). But there are better options on all platforms to the select() and poll() interfaces.
In terms of my desires for open platforms that support open standards, I'm completely with you. If anything what we should be debating is how to update the POSIX standard such that it is worthwhile to target during development.
I don't see how it's helpful to ignore what is popular out in the world.
> Hm. The point of POSIX is to support cross platform development with a single standard.
No, the point of POSIX is to specify a set of interfaces that software vendors should provide and target if their goal is to provide a UNIX variant or develop UNIX-compliant software.
The key issue is that the main requirement is willingness on behalf of the software vendors to comply with the standard, and offer/target the standardized interface.
If a software company is blatantly hostile regarding interoperability, and bases their business model on vendor lock-in strategies, then it's quite obvious that those companies in particular are not willing to play ball with others regarding interoperability.
Yet, somehow you're presenting those companies, and the consequences of the actions taken by companies that are openly hostile regarding interoperability, as some sort of proof that a specific interoperability target suffers from technical problems.
It's the other way around, really. The OS is what provides the POSIX API to the higher levels of the stack. All of those are POSIX under the hood, and their middleware is dominated by POSIX code that is reasonably portable.
Gee, I wonder why you cannot write a program for an interface that nobody imagined would be possible in 2000, using only a standard described in the late '80s. Must be some kind of conspiracy…
Totally agree; the authors seem to be missing the point that these libraries are a new layer between the POSIX APIs and the end result, which is typically a GUI application. The fact that these libraries also bring a walled garden to the table, bypassing the POSIX APIs, also demonstrates how security is being bypassed.
Plus, with the analysis only featuring three OSes (Android, OS X and Ubuntu), they missed that Windows pretty much mirrors what POSIX does as well.
1. "usage is driven by high-level frameworks, which impacts POSIX’s portability goals." That's the libraries'/frameworks' fault; plus, have they heard of GCC? It works on many OSes.
2. "extension APIs, namely ioctl, dominate modern POSIX usage patterns as OS developers increasingly use them to build support for abstractions missing from the POSIX standard". Perhaps they have missed the point that without the POSIX standard these extensions would not exist in the first place. They might as well complain about procedural coding practices being outdated compared to OOP, even though OOP brings slow bloat to the table as well. Perhaps a better question to ask is: why does a framework/library implement some functionality that could have used the POSIX APIs but didn't? Was it expedient to keep it within the framework/library and bypass the POSIX APIs altogether?
3. "new abstractions are arising, driven by the same POSIX limitations across the three OSes, but the new abstractions are not converging." So "abstractions" here is a new name for libraries and frameworks, not a concept in the sense that OOP is an abstraction. Perhaps complaining to the authors of the libraries and frameworks that don't port would be better?
POSIX is like a foundation that allows creativity to take place. This is why we see the differences in libs/frameworks: different OSes have different parameters, different design ethos or objectives. Even in Windows, some APIs are used more frequently than others; for example, I can't remember when I last used the APIs involved in getting a dial-up modem to work, or the APIs to print directly to a dot matrix printer. This paper just seems to be an analysis of APIs not used as frequently as they used to be, sometimes because libraries/frameworks have rightly or wrongly replaced them for a myriad of reasons.
Kudos to them for downloading so many apps and analysing them, though. Just what you need if you are looking for backdoors en masse or exploitable bugs.
Glancing at the article I think it misses an important aspect. I lived through the creation of the POSIX spec and reported to a CEO involved. POSIX is a political spec.
Customers were complaining to DEC, SUN, IBM, etc about how confused they were using UNIX (think of the same Linux fragmentation today). Major customer groups got together and said they would switch to Windows if the companies didn't create a stable environment for UNIX work. So the company CEOs got together before the "official" POSIX meetings to decide how little they could standardize (and cut their own costs) while still allowing them to have their important differentiators for sales. They decided that integrating the programming API and some minimal administration aspects would be the right approach. So they then got staff involved in the series of POSIX meetings to decide the spec (not telling them explicitly about the initial agreements between the CEOs).
Really, there is a lesson here. Customers (us) need to force the Linux vendors to create a more uniform experience for us. It doesn't "just happen". There is too much arbitrary variation (with no real benefit) and the learning curve is just absurdly hard for working with Linux (which is old enough that all the basic operations should be quite stable - and mostly quite simple.)
Whatever happened to Linux Standard Base? I don't ever see any apps compiled against that specification. Is this simply because it is out of date, or incomplete?
I know people tend to hold up POSIX compliance as something that's of utmost importance, but from my conversations with Bionic's maintainers, there's an awful lot of cruft and poor designs in some of the POSIX APIs.
One of the primary goals of POSIX was to standardize and encode existing behavior, but the behavior of Unices is sufficiently divergent that many APIs are uselessly underspecified. One especially hilarious case is close being impossible to use correctly in a cross-platform manner: on HP-UX, you must check for EINTR and try again; on Linux, close will close the file descriptor even if interrupted by a signal, so you must not try again [1]. Almost nothing is specified as being async signal safe. Tons of functions are thread-safe on bionic or glibc but not other libc implementations (e.g. readdir, which has a readdir_r counterpart which should not be used [2]).
Lots of APIs are fundamentally unusable in the presence of threads (e.g. there's no way to modify environment variables safely). This is a bit excusable, because pthreads were bolted on a decade after everything was already ossified.
Lots of APIs are a giant pain in the ass to use compared to alternatives provided in Linux/BSD (e.g. alarm vs timerfd or kqueue).
A few APIs are massive footguns that basically shouldn't ever be used. It's impossible to use POSIX shared memory without leaking. pthread_cancel is insane.
POSIX compliance is important if you value portability - especially for business critical systems (e.g. you want your bank's transaction processing data to work on IBM Z/Series's UNIX, whatever Linux distro they use, and say... SunOS).
My main takeaway from the article is that POSIX is too antiquated and anemic as a "system API" for modern-day use. However, I'm surprised the article doesn't touch on POSIX's shortcomings being a consequence of the UNIX design philosophy. A strong personal concern of mine is the future development course of UNIX-likes: should we continue to uphold "everything is a file", ioctl(), and piping text streams as the best way to build an OS?
Curiously, the article chides POSIX for lacking modern abstractions (specifically, graphics) but I disagree - POSIX would be a disaster if it included anything like OpenGL or Cairo (case-in-point: Win32 is tightly coupled with GDI and now we're paying the price by being physically unable to do deep-color graphics in Win32 without serious hacks) - I think that (to paraphrase one of the UNIX philosophies) "small APIs that do one thing and do it well" applies to POSIX (as it does process and system management, and nothing else) and it has worked.
Coming from a modern Windows developer experience - I think there's a great deal to be learned from the failures of the vaporware "object-oriented operating system" concepts of the mid-1990s and how the concept has been resurrected in the form of PowerShell object-piping and WinMD today and should be explored as the next possible step in OS design at a fundamental level - not just as a shim on top as it is with Windows.
That said, UNIX and GNU/Linux have always evolved in small steps - pretty much every great-leap-forward is doomed to failure or endless ideological controversy (see `systemd`) so maybe UNIX-like OSes are not the right place to test new design concepts - but some new OS will - and eventually that OS will supersede Linux in the way many people believe Rust will replace C++.
> should we continue to uphold "everything is a file"
POSIX doesn't do a very good job of upholding that. POSIX is full of entities (e.g. notably processes and threads) which can't be manipulated via file descriptors. Some POSIX implementations do better – e.g. FreeBSD's pdfork – but that isn't standardised. By contrast, Windows NT in which "everything is a handle" actually provides a more unified interface to OS objects than POSIX does.
Linux's /proc isn't a standard part of POSIX. It is an implementation-specific extension, which only some implementations have (OS X most notably lacks it), and even those who have it implement it in incompatible ways (e.g. /proc on Linux is quite different from /proc on Solaris).
Also, Linux /proc doesn't really allow you to treat processes as file descriptors like FreeBSD's pdfork does. Sure, I can open /proc/123, but operations on the resulting file descriptor have no relationship with the process – if I pass the FD into a select() I won't get any notifications about the process lifecycle.
I think this discussion misses the point what "everything is a file" is about.
The standard way to exchange (read, write) information should be through the Unix file interface where possible. Note that "file" here means "FILE *", i.e. an "opened file" or "stream". Not a file on a filesystem which is just one way to make one kind of stream.
In other words that mantra is about simplicity. Simplicity is what enables software to exist.
> I think this discussion misses the point what "everything is a file" is about.
I don't agree. POSIX systems provide APIs to wait on events on file descriptors – select, poll, etc. I can use those APIs to wait on an event on a pipe, a network socket, a device file, etc. To wait on an event on anything which is represented by an file descriptor.
But what if I want to wait for a child process to exit? I can't use select/poll for that since processes are not represented by file descriptors in standard POSIX – FreeBSD is a notable exception, but FreeBSD's facilities are non-standard.
If I want to simultaneously wait on both a child process to terminate and a message to arrive on a socket, I am forced to use multiple threads. For example, I could have a thread to execute waitpid() and then have it write to a pipe, and have another thread select/poll on both that pipe and the network socket. If processes were represented by file descriptors, I could have a single thread doing a select/poll on both the network socket and the child process.
> In other words that mantra is about simplicity. Simplicity is what enables software to exist.
Yes, and POSIX fails to be simple here. By failing to sufficiently unify its abstractions, it forces application code to be more complex than it should be.
> But what if I want to wait for a child process to exit? I can't use select/poll for that since processes are not represented by file descriptors in standard POSIX – FreeBSD is a notable exception, but FreeBSD's facilities are non-standard.
And Linux has signalfd (also not POSIX).
I agree with you, signals are a pain, and could probably be replaced by fds on the handling side (except, of course, KILL, STOP, CONT...). The overhead of polling vs asynchronous signal delivery could be optimized away with the vDSO.
> If I want to simultaneously wait on both a child process to terminate and a message to arrive on a socket, I am forced to use multiple threads.
As a matter of fact, you are not. I'm sure you know about EINTR.
Anyways I was more referring to your original parent ('should we continue to uphold "everything is a file"'), and it seems we both agree that "everything is a file" as a pure ideology is very worthwhile.
> As a matter of fact, you are not. I'm sure you know about EINTR.
The problem with using signals to manage child processes comes when child processes are started by libraries. If the library installs a handler for SIGCHLD, it could conflict with other libraries or with the application which wants to do the same thing. The great thing about FreeBSD style process descriptors is that no signal handlers are needed–a library can start a child process and then wait for it to complete (and wait for other things such as network sockets simultaneously) without relying on signal handlers which don't work well when an application is composed of numerous independently developed libraries (as many modern applications are).
I agree. Signals have process scope, so they are a bad idea in general (they counter composability), no matter what the delivery mechanism is. Signal FDs don't help here vs EINTR.
> and it seems we both agree that "everything is a file" as a pure ideology is very worthwhile.
I don't understand that ideology - I don't believe everything can be abstracted as a file, considering that ioctl() breaks that abstraction. For example, what does it mean to "read from /proc/{pid for Chrome}"? Would you be reading from the process' memory space? Reading from a rendered bitmap of the process' main framebuffer? Reading a human-readable text file of process metadata? If it's labelled metadata, then how do you deal with UX localization? The same goes for "reading from a /dev device" which isn't a storage device: what should it mean to "read from a GPU"?
The OS kernel needs to provide user space processes with access to kernel-managed resources such as files, devices, network sockets, processes, threads, etc. In order to do so, there needs to be some way of identifying these individual resources to the kernel. And then the question is, should every type of resource have a distinct type of identifier? Or should we have a single type of identifier which could refer to an instance of any one of those types of resources?
The pure "everything is a file descriptor" ideology (or its Windows NT equivalent, "everything is a handle") says we should have a single type of identifier, the file descriptor (or handle), which can represent resources of any type for which the process can invoke kernel services – processes, threads, files, network sockets, etc.
Standardised POSIX does a poor job of living up to this ideology, since APIs for managing processes (kill, waitpid, fork, etc) take and return PIDs, not file descriptors. Since a process is not a file descriptor, I can't select()/poll() on it. Using PIDs is also prone to race conditions, whereas file descriptors are less prone to this problem (although not completely immune to it).
The process descriptor functions such as pdfork provided by FreeBSD do a much better job of living up to "everything is a file descriptor" ideology than pure standardised POSIX does.
> I don't believe everything can be abstracted as a file considering that ioctl() breaks that abstraction. For example, what is it to "read from /proc/{pid for Chrome}"?
Why must every file support read() and write()? Some device files or other special files might only support ioctl(), and maybe also select() and poll(), and I see nothing wrong with that in principle.
You're asking the question backwards. It isn't a question of which one the file is -- "/proc/{pid for Chrome}" isn't even a file, it's a directory.
The idea is rather that there should exist a file you can read the process' memory space from, which there is. The path to and specific format of that file is specified by the system documentation.
Ioctl is a deranged joke. The rest has held up surprisingly well, and I regularly find myself dropping down a few layers of abstraction to get away from higher level APIs.
I can generally get more done faster with a binary pipe and some sort of generator for serialization than with higher level communication APIs.
COM can be said to be overly complicated (though at least not as bad as CORBA). I think WinMD is a step in the right direction, but both COM and WinMD are userland constructs and neither particularly pervades the OS (e.g. you can't pipe COM objects).
POSIX is neither antiquated nor anemic. There's nothing antiquated about a hierarchical filesystem with a single root. There's nothing antiquated about 'everything is a file'. In fact, that's continually useful to me on a day-to-day basis. There's nothing antiquated about byte streams, which is all they are. They aren't text streams.
A great example of how all of this turns out to be good design is that you can see how easily UNIX-based systems adopted Unicode. Say what you like about byte streams as a universal interface, but it sure was easy to switch from ASCII to UTF-8 when your system APIs weren't all built around the idea that everything would be encoded in a particular way.
I don't think systemd is unpopular or controversial because it's a great leap forward. That's just buying into the propaganda. It's unpopular and controversial because it presents itself as a great leap forward while not being one.
>but some new OS will - and eventually that OS will supersede Linux in the way many people believe Rust will replace C++.
C++ hasn't even replaced C. Why would Rust replace C++? When has a popular, stable, architecturally-important language ever been replaced?
The sort of people that think that Rust will replace C++ are the sort of people who thought Java would replace C++, who thought C++ would replace C, who thought Windows would replace Unix and who think that another system of opaque proprietary object-oriented APIs will replace POSIX. They're wrong, in every way.
When I look at the large amount of enterprise software: yes, that happened... I see Java everywhere. Also think of Android (just consider the amount of software). Of course it's seldom exclusively Java in a corporation.
FWIW, I think C++ is killing it on mobile and many other platforms and disagree with the post you responded to. Just pointing out that 2 samples isn't statistically significant.
100 % is a trap. Lots of games that require performance use c++ heavily. There's an NDK for a reason.
There's a lot of languages and many are used in different contexts. Just because people use Java and C# to write desktop apps doesn't mean a lot are not using c++ and Qt to do the same.
I can't think of any really big programmes written entirely in a single language. Every single programme out there is dependent on libraries written in C, kernels written in C, libraries written in C++, etc. Emacs is full of Lisp.
In my case a bunch of microservices. This included a rewrite of a legacy monolithic service written in Python. The C++ rewrite was not only more efficient and maintainable, but also ended up with quite a few more features fitting in a smaller codebase. People do underestimate the value of a rewrite from scratch, which when paired with a redesign trumps any language advantage.
My wife worked (she's a full-time parent now) on some gargantuan backend for a ticket-selling service. AFAIK the frontend to that is also C++.
I would say that you could probably have gotten the same benefits from rewriting the Python system into any AOT compiled language, like Go for example.
We have been doing such rewrites to Java and .NET stacks, with some C++ only on "as little as possible" basis.
Frontend apps are only done in C++, actually QML, if they need to be native cross platform.
Otherwise they are either native to the OS (WPF, Android, Cocoa) or pure web.
> My favourite C compiler most certainly is not written in C++.
So which one it is?
As gcc, clang, icc, VC++, C++ Builder are all written in C++.
Are you using tcc for production work?
> Many people write programmes in C++, unfortunately. Any Qt programme is written in C++, all performance-dependent modern libraries are written in C++.
Sure, even I do it.
But we only do it, because during almost two decades C and C++ became the only options available for compiling code AOT to native code, with support for value types.
And I would never ever use C willingly, so C++ it is.
But the wind is changing and the choice for programming languages with AOT compilation to native code and support for value types, is widening.
> Whether people write every layer of their programme in C++ is irrelevant.
No, it reduces the usefulness of the language: the bigger the upper layers are, the sooner the underlying layer can be migrated to something else.
For example, Oracle is planning to replace Hotspot (C++) with Graal (Java) in the long term. Likewise Microsoft has plans to replace parts of CLR with C# now that there is .NET Native.
Great example! I read reddit via a browser written in C++ on a Windows machine (written in C++) or on an operating system compiled by a compiler written in C++... or even an app written in Java.
Software engineering still uses languages/tools from the first few years that it existed, so it's unlikely that anyone means "completely replace every single LoC in existence" when they say "replace".
>"I don't think systemd is unpopular or controversial because it's a great leap forward. That's just buying into the propaganda. It's unpopular and controversial because it presents itself as a great leap forward while not being one."
How did you come to that conclusion? The bulk of the controversy I've seen about systemd has nothing to do with hype, the controversy I've seen has largely been focused on its invasiveness.
The controversy about systemd is that while it has some good ideas, it presents itself as being the only modern init system, when that isn't actually true. All the stupid things about it are defended on the basis that it has all these cool modern features, and its defenders continually compare it to sysvinit rather than other modern init systems (over which it has no advantages).
Its designers are also very anti-open-source, disliking any element of choice. They also seem to care only about desktops.
>"The controversy about systemd is that while it has some good ideas, it presents itself as being the only modern init system, when that isn't actually true."
Nope, as I said before the controversy is about its invasiveness. Read almost any article criticising systemd and you'll find people expressing something along the lines of 'what started as an init system has grown massively into something that resembles its own layer of the OS design'.
Bingo. Right now the way to do lid close detection on Linux is via systemd-logind(?!). More and more of what used to be individual projects under the freedesktop umbrella gets lumped into the system blob, and thus one is required to either use systemd wholesale or effectively recreate the Linux "desktop" from scratch.
Containers are to a large extent a solution to an artificial problem. If your app is a single binary file plus a single text configuration file, with no dependencies apart from system libraries (i.e. libc/POSIX), then you don't need a container; you're just a process. Containers are necessary because applications now consist of hundreds of small files with complex internal and external dependencies.
Containers don't provide additional security, nor do they provide additional ways to restrict resources. If you want to restrict resources or sandbox programmes, you can do so without containers.
Hell, if you want to do that, you can do it. But there's no reason to want to do that. Partitions are an implementation detail of the filesystem. You shouldn't care whether /home is on the same partition as / or not.
If you read about the semantics of mounting a filesystem, particularly with regard to bind mounts, you realise that this is poorly abstracted. Mounting a filesystem and then binding it to a namespace are two separate actions. There is no API for the former.
POSIX added openat and related calls that operate relative to a specified directory descriptor, rather than the root of the mounted directory hierarchy.
It would also be possible to take this a step further and have calls which take a filesystem descriptor and operate on a filesystem independently of the mounted filesystem hierarchy. If it were possible to mount a filesystem without binding it (obtaining a filesystem descriptor), you could then separately bind that filesystem into the mounted hierarchy using the descriptor. This would enable direct use of a filesystem without it being bound, which would allow, for example, private use of a filesystem by a process without it being globally visible. That might be useful for transient access to storage media without any potential for races (nothing else could open files or have a CWD in the filesystem).
I'll acknowledge that this is not strictly necessary, but it is an area where the underlying design is not well abstracted and is inflexible--sophisticated use of bind mounts is difficult, and the mount(8) dance to use them is terrible, all because it has to work around the lack of separation between mounting and binding.
Well, you could still add mounts – but while we currently can refer to devices by UUID, we can’t access their content by UUID, requiring ugly hacks such as systemd’s automounting to /run/media/USERNAME/uuid/.
That's a great idea if you want to limit the users' choice of disk layout, similar to antiquated software in Windows that only installs on c: because the paths are hardcoded.
Windows now runs all the workloads that Sun, SGI, DEC, HP, IBM yadda yadda workstations once ran. I don't remember the last time I saw a real Unix workstation outside my home office where I keep an Octane for nostalgia's sake, and the wife has her old SPARCstation.
Let me paint this in a more interesting colour: at $work, most of our development work is very POSIX-y software (it only runs on Linux, but historically, at least portions of it used to run on a bunch of BSDs, too, and they probably still do, but no one tried it in years). Almost no one in the office uses Unix on their computer, not even those of us who use Linux at home.
Most of us have Windows stations. We can install anything on them, but most of us didn't bother. They're glorified remote terminals anyway (the code is compiled and edited on a bunch of build servers). The servers do run Linux, but if we were to start from scratch now, we could probably pull off a decent build system under Windows as well. I'm using Emacs and relying on a bunch of Unix tools, either directly (grep) or indirectly (cscope through xcscope), but my colleagues who use Eclipse wouldn't feel much of a difference if the servers ran Windows.
I used to run Linux at my previous $workplace, but I definitely lost more time than my employer would be comfortable knowing disentangling things that regularly broke after updates (mostly Gnome- and systemd-related, really, but thanks to xdg's lovely practices, using a Linux system without something that can perform the black magic associated with sessions, seats, file associations and whatnot is very unpleasant).
My Windows laptop hasn't crashed once, the applications I need haven't broken once, and I haven't had to do the restart your computer to install this driver dance at all, I literally started it and hacked away. Plus, doing systems programming on a Windows system isn't all that bad, thanks to Dave Cutler and his colleagues being brilliant engineers.
I'm torn about using Windows at home, largely because I now have fifteen years' worth of BSD and Linux-managed files and countless convenience scripts to migrate, because I don't really trust Microsoft and because I think fighting my OS so that it will stop giving me ads and sending my stuff to Redmond is about as productive as wrestling with a modern Linux systems' innuvashon. But, much unlike 15 years ago, I probably could.
I used to run Linux at my previous $workplace, but I definitely lost more time than my employer would be comfortable knowing disentangling things that regularly broke after updates
Sounds like you were upgrading too often (almost sure, if you were already dealing with systemd). Windows releases a new version only every 3-4 years, there's no reason to update your Linux workstations any faster. At my work, we're still using a mix of Ubuntu 12.04 and 14.04, both of which are still supported, and we just add a couple of repositories for specific applications (particularly browsers).
I do use Debian Unstable on my personal laptop, but that's because I don't mind fixing it if I have to (which, mind, I haven't had to do in a long time). But for work? LTS all the way.
Sadly, the choice wasn't quite mine to make, so I had to follow the regular Ubuntu releases. What can I say, I looked at non-LTS releases and looked and looked but didn't see the beta marking and I thought they were actually production-ready...
I used to run Debian stable at home a while ago and did like the stability, but the security update situation is not exactly something I'm happy with.
It's not that Ubuntu releases aren't production-ready, it's that no major OS releases are production-ready - hence Windows 8.1, and one of the reasons why Enterprise is still on 7. By using LTS, you get to skip that nonsense for years, and jump directly into a release which has had its kinks ironed out (or at least documented).
There is a great deal of effort involved in backporting updates to frequently-updated packages. Debian, for instance, doesn't update WebKit (or at least didn't last time I checked, and had a policy for it). Consequently, things like Evolution (which uses WebKit internally) are a walking CVE museum on Debian stable. The situation is similar for a lot of other packages, on a lot of other distributions with long-term support (Debian actually has a large enough community of skilled enough developers that they're faring well in this regard).
I don't want to minimize or belittle the work that they're doing, I only mention Debian because it's been my go-to distro for a very long time. They're also alleviating the problem in the most common use cases (e.g. they do update Chromium if you need a webkit browser). Codebases like WebKit's are simply too large, too complex and too quickly-shifting for a community-driven project to be able to backport fixes.
Even where the codebase is small enough, backporting is a nasty business. I've seen it done commercially, so with proper funding and proper teams and whatnot, and the success rate is not something that I'd consider encouraging. I've shot myself in the foot while doing it, too.
There are certain types of setups that lend themselves well to long-term support models. Server systems, up to a certain degree of complexity, embedded systems with a restricted set of packages -- maybe. A modern Linux desktop is not one of these systems IMO. A Linux desktop with four year-old packages is very likely to be very buggy in very nasty ways.
I don't know enough about the POSIX APIs to comment with any authority on that, but I certainly wouldn't be surprised if a long running software project that has been required to stay largely backwards compatible as it evolved has become crufty.
However, the importance of POSIX isn't really to do with specific design details, but rather because it aids in software portability. This is still a desirable trait. In other words, the answer is to design a better portability solution (N.B. For anyone thinking of posting it, the XKCD standards link is not relevant here, you'd want to design something that had support from key stakeholders before it was implemented, there's no requirement to fragment the market).
I can tell you right off the bat that POSIX might be better than no "standard" at all, but porting stuff between any POSIX-ish OS still means an actual port.
Practically every non-trivial POSIX-ish program is not portable. This also includes quite a few supposedly higher-level applications written in things like Java, Go, Ruby, Python ... since they all leak POSIX details all over the place and make non-portable use quite easy.
Google should have innovated or they should have complied. Instead they built on POSIX's shoulders and then diverged only enough to harm portability. Not cool.
ChromeOS might be built on top of a POSIX like kernel, but the actual application stack is the browser.
Android is mostly Java, the NDK only white lists a restricted set of libraries that aren't POSIX related, Android 7 dynamic linker will kill any app that tries to link to device libraries not part of that list, the Linux fork lacks APIs like e.g. UNIX IPC.
Android Things is even tighter. Only user space drivers are allowed, with C support just added this week. Before only Java drivers were allowed.
Magenta/Fuchsia is a micro-kernel with Dart as main application stack.
> the NDK only white lists a restricted set of libraries that aren't POSIX related
Uh, libc?
> Android 7 dynamic linker will kill any app that tries to link to device libraries not part of that list
Yes, it prevents you from depending on things that aren't required to be there. Apps depending on the device's OpenSSL libssl.so were horribly broken when Android switched to BoringSSL after Heartbleed. Preventing apps from loading system libraries guarantees that they won't break when those libraries are changed or removed in future releases.
> the Linux fork lacks APIs like e.g. UNIX IPC.
POSIX/SysV IPC are unusable on any system with untrusted applications. They live in a global namespace not tied to a process, and there's no reasonable way of doing any sort of accounting on them. There isn't even a way to use them correctly. If you shm_open, and fail to shm_unlink (for example, if you're OOM killed), the shared memory object will live forever. In the specific case of shm, Android introduced a far superior API, ashmem, which tracks shared memory objects as file descriptors, instead of manually refcounting names in a global namespace (memfd was added in linux 3.17, which solves the same problem in the same way).
A conforming ANSI C implementation is not required to provide headers like unistd, or any kind of POSIX compatibility beyond what ISO/IEC 9899 requires.
> POSIX/SysV IPC are unusable on any system with untrusted applications.....
Either POSIX compatibility matters, or it doesn't.
> libc is the C runtime library, a subset of POSIX.
> A conforming ANSI C implementation is not required to provide headers like unistd, or any kind of POSIX compatibility beyond what ISO/IEC 9899 requires.
Yes, and Android's libc provides almost all of (all of? I don't know for sure) POSIX.1.
> Either POSIX compatibility matters, or it doesn't.
More to the point, POSIX compatibility matters up to the point where it's useful. Implementing the shared memory functions by returning -1 and setting ENFILE is a perfectly compliant implementation. It's also worse than just not providing an implementation.
For the record, I'm always sad when I can't do everything I want in POSIX and need to use, say, Apache APR (to support Windows), or the C++ standard library (to support Windows), especially in cases where I don't stray far from what POSIX supports. (I am not sad when I use something that actually adds value over POSIX; e.g., I like using C++ regexes instead of POSIX regexes because I don't have to guess how many matches I might have; and I'm happy to use std::unordered_map -- or the APR equivalent -- because I can have more than one at a time).
First I looked at who wrote the paper (since it is right there on the first page) and thought "oh, this should be detailed and objective". But alas, it was neither...
Android cares about POSIX about as much as Windows does. The programs tested were from "app stores", selected by download popularity on those same stores. POSIX equals only libc in this study. I could poke holes in this study all day; I just want to say that it is very much not an objective one.
PS: And that's overlooking the first sentence, which talks about age as if it matters.
POSIX has been outdated since the day it was released. Talk about binding OS development to the absolute lowest common denominator (requiring the presence of a Fortran compiler - http://pubs.opengroup.org/onlinepubs/9699919799/ - really?), and every additional release simply adds more bloat.
> The original goal of the POSIX standard was application source code portability. However, modern applications are no longer being written to standardized POSIX interfaces.
This is a weird claim to make. The three OS's that they're talking about (desktop Linux, iOS and Android) all have plenty of popular applications written directly at the Posix API level.
Heck, when we were architecting our mobile-network-enabler stack that had client side and server side components (all in user-space), it was a no-brainer for us to just pick the posix API as our base and we designed and wrote a lot of native level code that "just worked" on all three platforms.
Even if there's any sort of framework layer between the application and the system, the Posix semantics are so deeply embedded in the culture of programming today that frameworks can say things like "this API opens a new file for write" without having to explain what all that means. The very language we use to architect a solution has deep, unspoken assumptions built-in, and it's more or less based on what Posix semantics allow.
I don't think that the article addresses the real problem with POSIX/SuS. The fact that SuS does not deal with GUIs is a feature; also, no OS implements complex syscall multiplexers on top of ioctl() for that, since such multiplexing happens in hardware. (Try stracing glxgears: with my Radeon it calls ioctl() quite heavily, but only uses two functions, both of which are essentially "wait for interrupt from hardware".)
What I see as a bigger problem is that the current Single UNIX Specification essentially has nothing to do with UNIX. On the list of certified conforming systems, there is Solaris (which does not implement all the optional modules one would expect from something that _IS_ derived from SysV), z/OS, and a bunch of embedded RTOSes no one has ever heard of. And if you look at what is required and how underspecified some things are, you could make a "Certified UNIX" out of MS-DOS by rewriting the userspace tools and writing a libc with "POSIX-ly correct" syscall names.
Though the article begins by noting that POSIX was first developed over 25 years ago, that opener is actually misleading and sets the wrong tone for the article's content.
POSIX has been continuously developed since the first version was ratified.
The POSIX of 25 years ago didn't have threads or async I/O, for instance.
Then what the article criticizes is precisely the lack of use of the newer POSIX additions, not of what existed 25 years ago.
Nowhere do I see the claim that functions like open, unlink or chdir are being replaced by something home-spun in Java or what have you.
So the argument boils down to this TL;DR: "new activity in POSIX over the past quarter century has been increasingly bloating the specification with committee-designed cruft that few applications use."
This article is biased towards desktop environments, but misses embedded.
Lots of RTOSes are POSIX-compliant for things like pthreads and file systems. Off the top of my head, VxWorks, RTEMS, INTEGRITY, and QNX all have POSIX compatibility.
POSIX ACLs, nee TRUSIX ACLs, are one of several ironies of the world of international standardization. Whilst the attempt to produce a formal standards document for them failed, they actually got implemented in many operating systems anyway with a fair degree of consistency. So whilst they are not de jure standardized, one can enjoy the de facto existence of tools like getfacl and setfacl across multiple operating systems today.
That said, I agree that it would be most welcome if there were fewer niches where ACLs are still wholly unavailable: OpenBSD, tmpfs filesystems on FreeBSD, and so forth.
Who has put ACLs to good use anyway? Even the old model usually forces one to overspecify one's intent. It's too easy to shoot yourself in the foot, and too hard to figure out what you need in the first place.
I doubt there is a good general purpose permission model that's more complex than Unix modes.
Collaboration on filesystems just doesn't work very well. It could work for many small groups if files had no users associated and everybody had just R and W bits for a whole repository.
For special problems there are specialized solutions. For example distributed models like git, or complex client-server solutions.
Well, for starters, there's every Windows NT system you have ever used, which uses ACLs quite successfully and extensively for a whole host of reasons, and ... me. I put ACLs to good use myself.
Your wording, speaking of repositories and git, bespeaks a too-narrow idea of what ACLs can be used for. Aside from all of the examples that one can glean from Windows NT, from window stations to the trusted installer, there are things like using ACLs to govern access to in-filesystem control/status APIs, or using them to limit the things that a subverted logging daemon could possibly do.
Our business uses them to restrict portions of our shared drive to certain groups. They're also used to restrict programmatic access to system folders, by Windows.
It's too coarse-grained and has annoying and inflexible semantics regarding umask and the setgid bit. And you can only have a single group granted read or write access. What about granting an additional group read-only access? For situations where one group has responsibility for maintaining a dataset, but others need access to the data and are not permitted to modify it. You could use the "other" permissions, but not if you still need it private from everyone else.
umask: A user with a umask of 0220 can write to the shared directory but no other group members will be able to read
setgid: A user creating a directory with the setgid bit dropped will end up losing the shared group ownership and use their default group instead, again making it impossible for other group members to read and write data
ACLs such as NFSv4 ACLs include inherited ACEs for files and directories, so that the permissions set on newly created files and directories are controlled. Since these extended permissions are outside the scope of the standard group, setgid and umask settings, you have a system which is transparent to all users--once they are set up, no end user needs to care about them.
Multiple editors are rarely needed at my workplace, and I haven't had one case which required multiple editors and multiple readers but no global read-only access (in > 8 years).
If I had such a case, the ACL solution could easily be emulated by putting such a directory below another directory which is accessible only to readers and writers. (Yes, files would need mode 664 and the right group. Not saying it's not a kludge, but you can tell people to fix the modes, or write a script which does that. It's a minor pain given that this case seems to be really rare).
Contrast that with ACLs on Windows. What I do know is that it has Apache-insane chains of "access" and "deny" entries. I've messed up permissions more than once, as the only user of a system. Allegedly, confirming the prompt to view another user's files in Explorer silently causes destructive changes to all the files in there (that must be why it takes ages). The mechanism is more complicated, so it's harder to debug, and harder to understand where the problem is when things don't work as expected.
Anyway, when things get as complicated as you described, doesn't that indicate that a proper VCS should be put in place? Multiple editors is asking for problems. It's easy enough to just put these files in a git repo and be done. The traditional hierarchical file system is convenient for simple (i.e. most) uses, but in the end it's just an object graph with a restriction to avoid loops, to achieve some amount of "hierarchical". That object graph model isn't really made for treating subtrees of files uniformly. It doesn't have transactions. It's not good for much of anything except helping to organize a bunch of objects in a DAG.
How do inherited ACLs play with files that are linked from multiple directories?
Inherited ACLs take effect only at creation time inside that directory as far as I understand it; they are basically copied to the new child file/directory. You have to enable it explicitly for ZFS with the aclinherit property. Links won't affect anything--you can't link directories.
Regarding using a "proper VCS", it depends upon what kind of data you're dealing with. We use git for all our code, but git has zero permissions associated with it other than the execute bit. We use the filesystem for data files; we have many terabytes of uncurated and curated data for integration testing and other purposes. Some people have write access to curate it; some accounts need read access (CI slaves, developers); other accounts are not allowed to read it (some is public, some is under NDA, and other restrictions apply).
I certainly agree that on Windows this can look horrific (and is horrific). However, all I'm suggesting for this situation on Unix is to add a single group access control entry, and optionally file+directory inherited entries to ensure their propagation. You add them once at the root, and then that's it; they are set and done. Since you are using just one group permission, you're not getting a huge explosion of entries, and it will remain clean and understandable. I've seen repeated problems with the basic Unix perms over a five year period here which require repeated action by the admins or guilty parties to fix them up, plus the issue with access via Samba or NFS on client systems accessing the data. That adds up to a maintenance burden which can now be eliminated entirely, making life easier for everyone concerned and the infrastructure is more robust as well. None of the end users even need to be aware of the existence of ACLs; unless they manually screw with the automatically set ACEs, it will just work. As with everything there's a cost/benefit to consider; there is a cost, but ACLs are not universally bad.
> Inherited ACLs take effect only at creation time inside that directory as far as I understand it
There is "dynamic inheritance".
> Links won't affect anything--you can't link directories.
You can link files from multiple directories, leading to the question which is the parent from which permissions should be inherited.
> We use git for all our code, but git has zero permissions associated with it other than the execute bit.
Git is just an object model. It doesn't have an authorization scheme, and this is the advantage. It's easy to add authorization schemes that are truly repository-wide and match the requirements. You can write authorization logic which takes a (PERSON x REPO x COMMITBIT) table to handle pushes and pulls. Then for more complex requirements like groups, write a script which generates these per-person-per-repo bits from a per-group permission list and a group membership list - no actual need to code complex (and domain-specific) logic in the authorization layer. (If you have no proper authorization in place and don't want gitlab or gitolite, get in touch with me. It's easy to do it yourself; I even have a (somewhat over-engineered) solution, "gitadmin", on GitHub.)
> I've seen repeated problems with the basic Unix perms over a five year period here which require repeated action by the admins or guilty parties to fix them up, plus the issue with access via Samba or NFS on client systems accessing the data.
I know these problems from my workplace, and my assumption (backed by minor experiences from personal use) is that ACLs only make this problem worse. With static inheritance you still have to fix up permissions. Dynamic inheritance is hard to debug. It's easier to just have a script (maybe cronjob) which fixes up permissions.
They're related, but not the same. ACLs are Access Control Lists; basically a more sophisticated version of UNIX permissions. Mandatory Access Control is about restricting what users can put into their ACLs; e.g., "Alice is not allowed to make any of her files readable by Bob".
On Linux, ACL support is the "facl" family of functions (and the set/getfacl programs). These supplement the traditional owner/group and mode bits. Essentially, they allow you to set a separate set of read/write/execute bits on a file for an arbitrary number of specific users or groups, rather than being limited to simply "owner", "group", and "everyone else". If anything is set beyond the normal owner/group/world bits, GNU `ls -l` indicates this with a "+" next to the mode bits. I'm unsure how much of this differs on other kernels.
Linux supports POSIX.1e DRAFT ACLs. These are a de-facto standard by implementation, but were never formally standardised by POSIX. They are also supported by other kernels.
However, other systems support the more modern and featureful NFSv4 ACLs. NFSv4 ACLs are supported by the NFSv4 filesystem and also used by ZFS. They are part of the NFSv4 formal specification. On systems supporting ZFS (other than Linux), you get the full NFSv4 ACL support. Linux doesn't have VFS support for NFSv4 ACLs, and doesn't expose them even for NFSv4 mounts. In contrast, a FreeBSD mount of an NFSv4 filesystem served from ZFS exposes the full permissions model to the client. FreeBSD has a separate set of richer acl_* functions rather than the get/setfacl functions on Linux.
> Linux really needs to get proper support for NFSv4 ACLs.
Not to nitpick, but: why would it need to do that? Nobody (in Joel Spolsky definition of nobody) cares about NFSv4. The only place where ACL matters for Linux users is Samba. And Samba works fine with Linux ACLs as it is.
Most people aren't aware of what they are missing and how it could bring a great deal of benefit.
This isn't about NFSv4 (the filesystem), it's about the rich permissions model it standardised which is now used by more than just NFS. It's used by ZFS, and it's used by the FreeBSD and Solaris VFS, and likely others as well. It's the only rich ACL model in real use today on open platforms, and being a superset of both POSIX.1e DRAFT and Windows ACLs, it's needed for full interoperability between platforms.
Samba doesn't "work fine" except for the simple case. Using it with groups and group permissions can be exceedingly painful. When I export a ZFS filesystem on FreeBSD over Samba, it exposes the full ACL model to the Windows client, which is accessible and manipulable via the standard security properties dialog as well as from the command line on the server or on NFSv4 clients.
When working with group permissions locally or over NFSv4 or CIFS, you aren't restricted to the pitiful group and setgid permissions offered by the standard UNIX permissions or even POSIX.1e ACLs. You can add an @group ACE detailing the group access permissions and their inheritance, or custom ones per user, which can make working on shared data much simpler. No more pain when someone with a broken umask adds something and strips off the group permissions, or they drop the setgid bit and the inheritance is lost, or they access it via samba and it sets the wrong group ownership or permissions. All of which I see regularly on shares exported from Linux. We often have to export the same share multiple times with different configuration to work around the limitations (read-only, group and group perms), where if it was using proper ACLs just one would work for all cases.
However, your first sentence is valid for many things, not just ACLs. For example, there are people (ehm, me) who value being able to have Kerberos tickets for multiple principals from multiple KDCs at the same time. This is something that the most popular OS (Windows) is not able to do - and nobody minds. Because for most people, being part of one domain is good enough.
With ACLs, I find it similar. For most people, the complexity is too high, the reasoning about them so complicated, so there's no benefit to have them available, they won't use them anyway. Also, in our case, we just have multiple groups at the root of samba share and that's it. It is good enough. So maybe it is yet another case of worse is better.
"Temp" files (in /tmp or some designated directory) would have only the guarantee that after a crash, they're gone.
On the interprocess communication front, I'd copy MsgSend, MsgRecv, and MsgReply from QNX. That's subroutine-call type IPC; send a message and wait for a reply back. This is what you usually want for multi-process programs. Connection setup could be improved over QNX; the mechanism for finding the receive port in another process needs to be improved and there should be at least one well-known port number (like stdin/stdout/stderr are 0,1, and 2) that each process uses first. This needs to be integrated with the CPU dispatcher, as it is in QNX, so that when you do a MsgSend, the sending thread blocks and the receiving thread unblocks without going through the scheduler. Otherwise, you go to the back of the line for the CPU on each interprocess call.