
What Unix Cost Us [video] - stargrave
https://www.youtube.com/watch?v=9-IWMbJXoLM
======
BlackLotus89
This talk took some weird directions. It's called "What Unix Cost Us", but to
be honest that's not really what it's about.

Starts with USB-driver coding on Windows, Mac, and Linux.

Goes over to colonialism.

Made some good points about computer architecture and how it shapes
programming languages, and then some bad/wrong points.

Ends on community cultures and some controversial thoughts about gender
equality, and then he doesn't take any questions.

Kicks off a shitstorm and then isn't ready to face it. I don't know; I've
seen better videos on the same topic.

~~~
magicalhippo
It had some interesting points, but also some weird detours.

One of his core points revolves around hanging on to the way things were, and
how that often ends up poorly.

Though I got the feeling that, really, it's often just very difficult to come
up with good abstractions. Abstractions are, after all, what allow us to be
so productive with our hardware.

For example, Electron and similar frameworks abstract away the operating
system, which makes sense because for a lot of programs the specifics of the
operating system don't really matter. However, JavaScript and HTML can hardly
be described as the best way of obtaining that abstraction.

------
linguae
I thought this talk was an interesting overview of some of the limitations of
Unix (particularly Linux) and C, although I wish the speaker had discussed
more details as to why C is not a match for modern processor architectures and
how modern languages such as Rust address these issues.

Another thing I'm curious about is how much familiarity the speaker has with
Plan 9. Plan 9 from Bell Labs is the spiritual successor to Unix, created by
many of the Bell Labs researchers who originally worked on Unix. The creators
of Plan 9 pushed "everything is a file" to its limits. Sadly, although I have
read some Plan 9 papers, I haven't actually used the operating system. I'm
really curious what the USB example in this talk would look like in Plan 9.

I do wish there were more development in alternative operating systems. I like
Unix, but there were many ideas from VMS, Smalltalk, the Genera operating
system for Symbolics LISP machines, the classic Mac OS, BeOS, Plan 9, and
other operating systems that we can learn from and that would be very useful
to have today. Also, today's hardware is quite different from the hardware of
50 years ago. We have multi-core CPUs, GPUs with hundreds of cores, very fast
NVMe storage devices, and other amazing technologies. There have also been
many advancements in programming languages, with more people willing to
explore alternatives to C and C++ such as Python, Rust, Go, Clojure, Haskell,
Swift, and many more, each providing different abstractions for dealing with
diverse programming tasks. I wonder what an operating system for the 2020s
would look like given the advances in technology and the lessons learned in
the past decade.

~~~
pjmlp
> I wonder what an operating system for the 2020s would look like given the
> advances in technology and the lessons learned in the past decade.

While not perfect, and with their own sets of issues, macOS/iOS, Windows
(with .NET/UWP), and Android are a couple of steps in that direction.

Interesting that you refer to Plan 9, but fail to mention Inferno, which was
the final vision of the same researchers.

Regarding UNIX, one thing that keeps being missed about NeXTSTEP and its
derived OSes, and to go along with your remark, is that for NeXT and Apple,
UNIX was never really relevant in itself.

The UNIX compatibility was just a means to bootstrap the OS and attack the
UNIX workstation market, the Objective-C technology stack (now Swift as well)
was/is where the real value of the OS was.

Anyone using macOS, or nowadays WSL, as a UNIX replacement is replicating the
farmer's story from the talk.

~~~
cpach
I quite frequently see people mention Plan 9, but seldom Inferno. Does anyone
know why Plan 9 became more popular?

~~~
pjmlp
Lack of teaching would be one.

I bet not many professors bother to dive into it.

The documentation is still available,
[http://doc.cat-v.org/inferno/4th_edition/](http://doc.cat-v.org/inferno/4th_edition/),
and you can get it as open source and commercial variants,
[http://www.vitanuova.com/inferno/](http://www.vitanuova.com/inferno/)

So it is mostly a matter of actually caring to dive into it.

Not only will one discover how many of the Plan 9 design issues got worked
on; some of the design ideas that went into Go will also become clearer.

It is also revealing to dive into the Oberon OS lineage and then see how some
of its design went into ACME on Plan 9, dynamic loading on Inferno, and
eventually Go's method call and unsafe package design.

Instead most get off at the Plan 9 train station.

------
incompatible
I suppose on Linux, libusb itself is supposed to be the general programming
interface, not poking around with /dev or ioctl. The Windows and Mac
interfaces he was using are quite possibly implemented as libraries over a
more primitive layer.

Speculation, knowing nothing about USB specifically.

~~~
pjmlp
On Windows and Mac, the more primitive layers are private kernel APIs, which
you aren't supposed to call directly.

~~~
magicalhippo
Linux is the kernel, so really the equivalent on Windows is the Windows kernel
API, not the Win32 (user) API.

However, the kernel API seems to be somewhat similar to the Win32 version
presented in the talk and not nearly as gnarly as the Linux equivalent:

[https://github.com/microsoft/Windows-driver-samples/blob/master/hid/hidusbfx2/sys/usb.c](https://github.com/microsoft/Windows-driver-samples/blob/master/hid/hidusbfx2/sys/usb.c)

[https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/wdfusb/](https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/wdfusb/)

That said, I'm just an application programmer, so I've only ever used the
Win32 API.

------
soraminazuki
Not everything may be a file indeed, but I didn't get what he was trying to
point out with the specific example he gave. To make his point, he compared
the USB APIs provided by Windows, macOS, and Linux. Then he went on a rant
about how Linux looks terrible because you had to manipulate files and use
functions such as snprintf, ioctl, and fcntl. However, this is not at all a
fair comparison, because he was comparing the Linux _kernel_ interface
against much higher-level _userspace_ library APIs on Windows and macOS. So
what were the problems he encountered with this specific example that had
actually harmful consequences? I honestly can't tell.

~~~
pjmlp
It was completely fair, because other OSes don't expect users to directly
access kernel interfaces; most of them even forbid it, other than for writing
device drivers.

~~~
soraminazuki
How can it be "completely fair" when you're not comparing the same thing?
Linux applications can and should use higher-level libraries too.

~~~
emn13
Without getting into the specifics here, it's fair to compare Linux kernel
calls, sometimes, to Windows API calls, because that's simply a technical
detail, and in fact that's just how those OSes expect you to communicate with
the respective "OS" (whatever that even means; the Linux kernel is hardly the
whole OS). Calling the Windows APIs isn't calling some convenience wrappers;
it _is_ the API. The kernel API isn't (generally) public, and they try to
avoid stability promises too, though, this being Windows, I wouldn't be
surprised if the "internal" kernel syscalls were nevertheless usually stable
for legacy-support reasons.

~~~
soraminazuki
The original speaker is comparing two totally different layers of abstraction.
That's _not_ a technical detail when considering if the comparison is
appropriate. It's simply misleading when higher level libraries are available
and widely used in Linux.

On the other hand, whether the kernel interface is documented or not _is_ a
technical detail which simply doesn't matter in this context.

~~~
emn13
It's just not that simple. A kernel syscall just isn't an abstraction on
Windows, so if you want to compare the two, you _need_ to include other OS
APIs. Whether something is a syscall or not is a technical detail in the
sense that it simply doesn't matter for its actual function, but it _does_
matter for lots of non-functional concerns, possibly including API usability,
performance, etc.

I don't know anything about the USB issue here, so I can't comment on that.
But, typically, if you're looking for the closest equivalent to Linux
syscalls on Windows, you're unavoidably going to be looking at
non-kernel-mode DLL calls (which internally may or may not involve a
kernel-mode syscall; that's a private implementation detail).

It's not a question of choice of abstraction, simply of how the OS exposes
low-level functionality (at least, it's the OS's choice, not yours).

The point being that if you reject any comparison of Linux syscalls vs.
Windows API calls, then you effectively reject all comparisons between Linux
and Windows APIs. Fair enough, if you want; but it's not very helpful.

~~~
soraminazuki
If you want to compare the capabilities of Linux and Windows systems,
comparing Linux kernel APIs and Windows library APIs might make sense.
However, complaining that dealing with the Linux kernel interface directly
requires more work than calling Windows API functions doesn't make any sense,
because of course higher-level interfaces are more pleasant to work with
regardless of the target platform. If you want to make that comparison in a
meaningful way, you should be comparing Linux libraries against equivalent
Windows APIs.

~~~
jsjohnst
To make it a bit clearer: libusb would be a closer comparison to the
Windows/macOS APIs the OP is talking about being better.

~~~
emn13
Sure; nothing wrong with that comparison either. You're just not going to get
a straightforward apples-to-apples comparison. The Windows API really is part
of the OS, whereas libusb is a convenience library that, on Windows, must use
the lower-level API; it's not a peer. Then again, there's no reason to care
much about great convenience from a lowest-level API, so I'm not really sure
what all the fuss is about: as long as the API isn't so wrong-headed as to
make good wrappers difficult or error-prone, inconvenience seems like a
fairly small price.

------
clktmr
"Everything is a file" is one of the most misunderstood concepts I
encountered, to the point where people think that disk IO is involved. A file
doesn't even need a filename (eg sockets).

The main point is to have a common interface for system resources, which
happens to be the file API in Linux. Think of it as the base class of
everything. There is a bunch of tools and code that operates on file
descriptors, which you can now reuse.

Build better interfaces on top of that if you want to.

~~~
wbl
Sockets depart badly from that model with special purpose setup syscalls.

~~~
clktmr
Yeah, but that's rather because the socket API is older than Linux.
Underneath there is the sockfs implementation.

------
gdm85
Some parts of the talk did not really stitch together; if we consider the
main arguments against Unix (and the funny jab at Plan 9), I felt them a bit
lacking, in the sense that they do not propose alternative philosophies, but
only a liberating "mixing things together is cool".

I can totally understand that as a form of catharsis, and a mindset totally
apt for hobby projects, but if we are playing in the realm of OS design... I
would think we can do better in terms of architecture and philosophy?
"Anything else is better" does not really convince me; some structure is
usually better than none at all.

------
thunderbong
Wonderful talk. Lots of interesting points. Much of the technical content
went over my head, though. But from what I understood, the main point he's
trying to make is that we are stuck with a way of thinking in technology
which was right when it was thought up, but has outlived its usefulness.

I found his point with respect to *nix very insightful:

> Unix suited its time. I worry it has ended up straitjacketing the way that
> we think, because that was quite a while ago. It still works, which is
> amazing. But that doesn't mean that its tenets and its way of working
> should be sacrosanct. We should feel free to examine every idea and throw
> them out if we feel they no longer have value for what we're doing.

The analogies from history and communities were also very interesting (e.g.
meritocracy).

Other points that I liked -

\- Complex problems have simple, easy to understand, wrong answers.

\- Understand the past, but don't let it bind the future

------
kccqzy
I watched the video. He made a few good points: should everything really be a
file? He used the example of simply talking to a USB device on Linux, without
libusb, involving a bunch of snprintf calls to construct a file path and then
a big pile of ioctl calls. Setting up a device apparently requires creating a
bunch of magic directories and magic files and magic symlinks and mounting
magic filesystems... I feel he has a good point about this. Not sure how Plan
9 does this better, though.

Besides that, he also talked about how I/O was blocking, and how even when
non-blocking I/O became available it was still synchronous, until the recent
introduction of io_uring. Windows appears to do this "right" with its
completion ports. This part of the video was much less convincing, IMO.

~~~
h1x
Also thought that some of the technical points were good, but...

Mixing together Linux and Unix? Saying that killing a process from the
command line in Linux is bad because macOS has a GUI for that? Not explaining
in more detail the claim that C is responsible for Spectre? Topping off a
technical presentation with political statements?

That's too much for 30 minutes, I think.

~~~
catalogia
The Activity Monitor is hardly even the fastest way to kill a process in
macOS. Usually when I'm working on something that goes haywire and needs
killing, I already know the process name, and a simple `killall example` can
be typed out faster than he can ⌘-space his way into the Activity Monitor and
navigate that GUI. I don't think that proves anything profound though, just
that his example was weak.

I went into this video agreeing with the stated premise (and still do), but
this presentation was a bit underwhelming.

~~~
celticmusic
Linux has pkill as well.

------
swiley
Unix at least picked one abstraction and stuck with it; contrast Windows,
which is a dizzying mess of incompatible ideas. It's so bad that more
paranoid people might think Microsoft disorients people on purpose.

There are other operating systems that also chose singular abstractions, like
the Canon Cat and Smalltalk.

~~~
teddyh
Reference:
[https://en.wikipedia.org/wiki/Canon_Cat](https://en.wikipedia.org/wiki/Canon_Cat)

------
rs23296008n1
I think I prefer the Plan 9 approach of "everything is a file/filesystem"
over the Unix "everything is a file" as typically seen. Actual usage of
files, filesystems, and namespaces under Plan 9 was more consistent than the
file approach under Linux. It was a lot of fun playing in Plan 9.

Regarding the video: of the three examples, the Windows approach was shown as
the most succinct. I thought the Linux version was messy. The Mac version
seemed overly complex. Whether these were actually representative is another
matter. I can't say how, but I'm not convinced; there seemed to be some
artistic license / suspension of disbelief required.

The remainder seemed to meander, but I sense you had to be there.

------
layoutIfNeeded
I’m 29 now. I wonder if I’ll live long enough to be able to use cancellable
non-blocking mkdir.

------
pjmlp
Very entertaining talk, especially regarding the cargo cult that gets carried
around.

------
robert_tweed
I'd like to know, from anyone with Plan 9 experience, does its approach to
"everything is a file" solve these problems?

~~~
pjmlp
No. "Everything is a file" doesn't work at all if you care about
high-performance graphics and real-time audio.

~~~
kragen
qznc made the same claim. Why would "everything is a file" be incompatible
with high-performance graphics and real-time audio? I'm particularly
interested in this question because my knowledge of those areas is not very
deep, so I'm wondering what unknown unknowns I'm missing.

Here's my naïve thinking:

· Real-time audio requires low-latency scheduling, which is unrelated to the
API for audio itself, and FIFO depth monitoring so that you don't get either
buffer overruns or buffers that are so full that they induce latency in excess
of what your scheduling and processing latency unavoidably adds. For output,
you could provide this with a /dev/audio for writing data and a
/dev/audiooutputbuffersize for reading the FIFO buffer depth. For input, you
could provide this by making reads from /dev/audio nonblocking, by providing a
/dev/audioinputbuffersize you would read before risking a blocking read on
/dev/audio, or by reading in a separate thread.

· High-performance graphics covers a lot of ground, but one of the big pain
points is unnecessary memcpying on the data path from your CPU graphics
program into the GPU, whether that's textures, vertex buffers, or prerendered
pixel data (though maybe we could argue that that last case is never going to
be _high performance_). One way that unnecessary memcpying arises is from
buffers for write() that are not correctly aligned with respect to the start
of a page, which unavoidably means that they will have to get copied to a
place with the correct alignment. This is a big problem if you're streaming
out gigabytes per second of graphics data. In particular, you don't want the
data that goes to the GPU concatenated with data that tells the kernel
graphics driver what to do with it.

Solution: open a new file in a /dev/gfx directory for the data to be sent to
the GPU, then write the data to it from a properly aligned buffer. Fork a
separate thread to write this data, which is done by marking the pages COW
rather than copying them. (A more radical and less sneaky solution I'm
exploring for Wercam and BubbleOS is to transfer ownership of pixel data
buffers from the application to the window system, causing them to disappear
from the application's memory map, avoiding the need to either copy or COW
them. But this definitely departs from the Unix read()/write() model.)

· A different aspect of high-performance graphics, other than raw throughput,
is that you might need to synchronize your drawing with monitor refreshes to
avoid an extra half-frame of latency, particularly at low refresh rates like
60 Hz. This seems straightforward to solve in a filesystem interface: reading
from /dev/vbi produces a byte just before each VBI, giving you time to send
any necessary new data to the GPU.

Certainly it's true that Plan 9 does not provide high-performance graphics or
low-latency audio. But I don't think that proves that filesystem-based
interfaces can't provide them, just that the Plan 9 designers had spent the
1980s researching DSLs, fault-tolerance, concurrency, and typesetting, not
rasterization, real-time control, and animation.

In general, system-call or IPC interfaces of the form "invoke function foobar
with a baz handle and a quux struct or memory buffer" can be cleanly replaced
with "open /bazzes/$i/foobar and write a quux struct or memory buffer to it",
can't they? I mean, that's three system calls instead of one, so about 1 μs of
system-call overhead instead of 300 ns, but in the cases where that's the
performance bottleneck you can usually solve it by writing N quuxes instead of
one. Can't you? A potential problem arises when you need to combine multiple
handles to extra-process resources in a single system call, like SCM_RIGHTS or
rename(), but I don't know where those would come up in the particular
application domains you identify as problematic.

But maybe my proposed solutions above won't solve the problems I think they
will, or maybe I'm missing the biggest issues entirely. What am I missing?

(I'm not claiming that this approach would be easier, safer, or more
discoverable than the approach of adding a bunch of system calls; Benno's talk
discusses the many ingenious ways a filesystem-based API can be hard to use,
and no doubt some of the same criticisms can be leveled at the above strawman
proposals. I'm just saying I don't see where it inherently fails to meet
performance requirements.)

~~~
tedunangst
Perhaps getting into no-true-ship-of-Theseus territory, but you can redefine
read/write to do _anything_, and it's arguable whether that's still the same
philosophy. Like, why even have open()? You could start each process with "/"
already open, and then open other files by writing "open /etc/passwd" and
reading back a new fd.

~~~
kragen
Yeah, so, is there a there there? I think there is. The
open/close/read/write/getdents interface at least offers the _possibility_ of
certain kinds of REST-like uniformity, providing some benefits:

\- A thin hourglass-waist interface makes it relatively easy to add new
components to the system, whether clients, servers, or proxies. Plan 9
implemented both network fileservice and GUI windowing as filesystem-interface
proxies, and they composed properly, allowing you to remote your windowing
system over the network and to test new versions of the windowing system in a
window. Less remarked on is the fact that if you need 500 system calls to
invoke the full functionality of your operating system, most of those calls
are going to remain inaccessible to newly ported scripting languages until you
write a C extension module for them; and if implementing a fileserver involves
handling hundreds of protocol messages, you aren't going to have very many
kinds of fileservers. (CIFS, as far as I know, has only two: Windows and
Samba.) ioctl and its demonspawn brethren are the reason we didn't have rr in
the 1990s, when Michael Elizabeth Chastain wrote mec-replay.

\- Putting all the system resources into a single namespace means that you can
handle them uniformly in some other ways, like setting permissions and
interactively exploring the hierarchy.

\- A lot of system state can be meaningfully treated as data that can be
either read or written, with certain useful properties: if you write and then
read, then what you read is what you wrote (or is, unless there was an error
or a subsequent state change), and if you write back something you read at
some previous time, you restore it to the state it had at that time. The
framebuffer is one example from Plan 9. The contents of NVRAM are a thing it
might be more important to be able to back up and restore in this way.

\- Also, byte-oriented things don't care what size your reads and writes are;
you read the same sequence of bytes whether you read them one at a time or a
million at a time. Usually.

\- Naming things with strings means you can add more things later without
breaking backward compatibility.

\- A slightly richer interface that provides cache invalidation notifications
(like inotify) can enable caching proxies and polling proxies, which are sort
of dual to each other. Application containers (like Docker, but also like
Vesta's build environment, and like rr) can provide isolation,
reproducibility, observability, and auditing, but the difficulty of building
them is multiplied by the number of different namespaces and system calls they
need to interpose on.

\- It's useful to unify the interprocess communication interface for sending a
series of employee records from one process to another with the interface for
writing them to a file, a tape, or a terminal, and the interface for receiving
them from a file, a tape, or a terminal. script(1) and ttyrec and their kin
take advantage of this polymorphic interface to make it possible to replay a
terminal session later. Unfortunately as far as I know nobody has implemented
a system that lets you record and play back GUI screencasts or mouse-click
test scripts in such a simple way.

Now, open/close/read/write is really oriented toward the last point more than
anything else, treating files as nothing more than recorded output streams
that can be replayed. And you lose it if you start reading and writing
multiple files! There are other uniform interfaces that have similar benefits
for composability; SNMP provides one, REST provides another, and Named Data
Networking proposes a third, one which unifies asynchronous notifications with
satisfaction of read requests, which is sort of what Unix does too, except
that as Benno complains, in Unix you have to bend over backwards to wait for
any of multiple events, the polar opposite of the Win16 message loop.
(Hilariously, as Benno points out, the Win32 APIs for nonblocking file and
socket access are a total mess.)

The particular set of restrictions imposed by your chosen architectural style
and protocols will determine what kinds of things you can do to your system
once you have it running. Surely we can do better than the Unix filesystem
interface in 2020.

------
kragen
I'm skeptical of Benno's claim that we _should_ consider meritocracy "a dirty
word" because "it's a lie". Certainly it is true that no community of practice
achieves meritocracy, just as no polity achieves democracy — there are always
some citizens with more influence than others, so it is always possible for
the government to act against the interests and values of the majority of its
population. Should we therefore consider democracy "a dirty word" because
democracy "is a lie"? Perhaps it is better to consider it an ideal to which we
aspire, without feigning to have achieved it.

Let's consider what alternative ideals are available in place of meritocracy
for governing a community of intellectual practice.

We could strive for a democracy, in which the decisions are made by the
majority — but the majority of whom? For this to meaningfully distinguish a
community of intellectual practice from the surrounding community from which
it arose, as a lotus blossom arises spotless from the swamp, a distinction
must be made between voting members and outsiders. (Can you imagine a Linux
User's Group where all the presentations are about Microsoft Windows, or a
Python conference where all the talks are about Java?) But that is just a way
of postponing the question of who the voters are.

We could strive for consensus, like the Quakers, in which any collective
decision is postponed until every member agrees; but, like democracy, that
demands gatekeeping that draws an ingroup/outgroup distinction, so it is not
really an answer to the question of who governs, just how they govern.

We could strive for anarchy, in which all decisions are made individually, and
there are thus no collective decisions to be made, whether by the meritorious
or by anyone else. A variant of anarchy is "do-ocracy", where decisions are
made by whoever shows up and makes the effort required to implement them.

We could strive for a gerontocracy, in which the oldest members — perhaps by
length of membership rather than by physical age — make the collective
decisions.

We could strive for a high-school clique, where the decisions are made by
whoever is most popular.

We could strive for a plutocracy, where the decisions are made by whoever is
wealthiest, or who donates the most.

Given these alternatives, it seems to me that when anarchy and consensus
demand unacceptable tradeoffs, the least undesirable alternative is
meritocracy. In meritocracy, the decisions are made by the _best_ members of
the group, according to some measure of merit that seems worthwhile to the
group; in a community of intellectual practice, this usually amounts to some
kind of knowledge and skill, seasoned with judgment and perhaps a guess about
aptitude. If they are the wisest members, then they will make the best
decisions. The greatest foolishness is to subject the wise to the government
of the foolish. If some of the foolish and ignorant currently are so simply
because they have not had the opportunity to learn, we can best remedy that by
guiding them to learn from the wise, not by putting the foolish and ignorant
in charge.

Of course, meritocracy as an ideal cannot be reached, only striven for; but it
is a better ideal to strive for than a high-school clique, a gerontocracy, or
a plutocracy.

Which of these are the campaigners against meritocracy hoping for?

------
Aqueous
Am I the only one who thinks it's a good thing that his Linux example was
shorter than the other two and required less calling into hyper-specific API
functionality?

------
dscpls
Awesome talk. Especially connecting how leaving an entrenched mindset behind
applies both to architecture and to diversity in our communities.

And that our meritocracies are not as pure as we'd like to think.

