
The Collapse of the Unix Philosophy - andreygrehov
https://kukuruku.co/post/the-collapse-of-the-unix-philosophy/
======
ChuckMcM
That was a sad thing to read; the author is so clueless they don't even
notice when the reasons they _imagine_ for something being "broken" are wrong.

Back when UNIX was born the <tab> character was a first class citizen in every
computer on the planet, and many languages used it as part of their syntax.
Static binaries were invented when Sun and Berkeley co-developed shared
libraries and there needed to be binaries that you knew would work before
shared libraries were available (during boot before things were all mounted,
during recovery, etc.).

It always amazes me when someone looks at computer systems of the '70s through
the lens of "today's" technology and then projects a failure of imagination
onto the engineers of that era. I once pointed out to such a person
that the _font file_ for Courier (60-75K, depending) was larger than the entire
system memory (32KW or 64KB) that you could boot 2.1BSD in.

Such silliness.

~~~
sowhatquestion
> It always amazes me when someone looks at computer systems of the '70s
> through the lens of "today's" technology and then projects a failure of
> imagination onto the engineers of that era

True enough, but as a younger programmer, I find it pretty reasonable to look
back at computer systems of the 70s and wonder if we can do better _today_. I
feel a little bit gross every time I have to write a bash shell script (or
edit config files that aren't JSON/XML/YAML, for that matter), and I don't
think that's a bad impulse. That something so inelegant and unsafe is still in
widespread use in 2017 really ought to be a scandal. Even if the author didn't
frame the issue in the most charitable way for the earlier trailblazing
generations, he's calling attention to the right issues.

In other words, if you couldn't justify something being designed a certain way
de novo, why be content with the existing design!?

~~~
onion2k
JSON doesn't have comments so it's a bad choice for human-editable config.
YAML doesn't have an end marker so you can never be sure if you've got the
entire file. XML is a huge pain to edit by hand if the schema is complicated,
and overly verbose if it isn't. None of them are even close to being safe (for
example
[https://arp242.net/weblog/yaml_probably_not_so_great_after_a...](https://arp242.net/weblog/yaml_probably_not_so_great_after_all.html)).
All of those choices fail your "elegance" test.

TOML is my preferred config file language where I have a choice -
[https://github.com/toml-lang/toml](https://github.com/toml-lang/toml) - but
I suspect it suffers from a lot of the same problems.

~~~
vertex-four
The issue with all of these, of course, is that in order to get a system
running you have to configure multiple "independent" tools, processes and
daemons. Think setting up a web application - you have to configure the web
application to listen on a certain port/UNIX socket, then configure your web
server to go find it. You then need to scale this up across logical servers
separated by a network - your web servers need to communicate with your
database, they need some sort of authentication key/password, etc etc. You're
never just configuring one thing.
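
To make that concrete, here's the kind of duplication I mean; a minimal
nginx-style sketch, where the socket path is made up and must be repeated,
verbatim, in the app's own configuration:

    
    
      # web server side: must agree with whatever the app was separately told
      upstream app {
          server unix:/run/myapp.sock;   # duplicated in the app's config
      }
      server {
          listen 80;
          location / { proxy_pass http://app; }
      }
    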

The modern solution would be that there needs to be a network configuration
tool which generates specific configurations for each component, is capable of
encoding arbitrary invariants, and works consistently. Configuration also
needs to be "push"-based, driven by events - when a DNS server dies, it should be able
to figure out "we need at least 2 DNS servers, we have 1, fire up a new one -
then update all systems to know about the new one".

Configuration management systems for Linux, by and large, suck. They're very
good at starting from an Ubuntu Server install and building on that, and then
get more and more fragile as the system lives on. Some of them (Saltstack, for
example) do have some degree of event management - you can run certain
commands on certain things happening, but it's not declarative or reactive in
the way you'd hope - e.g. you can't just say "this system knows about all DNS
servers" and expect it to work. The Docker/Kubernetes ecosystems claim to
solve the network configuration problem (in a really awkward roundabout way),
but not really intra-system configuration, and it still takes a lot of manual
work.

NixOS gets a lot closer - but it needs to be expanded with a constraints
solver and an event processing system. It's Turing-complete, so you can encode
pretty much whatever you want into it, while still being a reasonable
configuration language (basic use of it looks a lot like JSON).
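
For instance, a minimal configuration.nix sketch (these are real NixOS
options; the values are illustrative):

    
    
      { config, pkgs, ... }: {
        networking.hostName = "web-01";   # declarative host config
        services.nginx.enable = true;     # the module generates nginx.conf
        time.timeZone = "UTC";
      }
    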

But the point is - the formats individual components use for configuration
should be more-or-less irrelevant. They could be completely opaque, so long as
it's possible to get from the network config to the individual component's
config and it's possible to update that config on-the-fly. In fact, it'd be
more useful to standardise on one library which can handle all that for you.

~~~
zzzcpan
I agree with most of your post, but this is still more complex than just
adding a constraints solver and an event processing system. Different things
don't just depend on each other; they also require different strategies for
dealing with failures. Trying to squeeze everything into a single model will
not work well. Maybe something like supervision trees for services might solve
that, where supervisors for each service are part of the package and handle
everything from automatic configuration to failures in any way they need.

------
Analemma_
Everyone should, at some point, read The Unix-Haters Handbook. A great deal of
it is outdated or simply wrong, but it does have a running theme of a
prediction that has largely been borne out: people assuming that all Unix's
flaws are actually virtues and that if you don't think so, then you Just Don't
Get It.

It's not hard to see how this happened: since pretty much all computers that
people normally interact with are either running Windows or a Unix-like
system, it has set up a dichotomy in people's minds. When the Unix-Haters
Handbook was released, there were still other operating systems which could
have a plausible claim to being better, but they have all faded away, leaving
only these two. And since the "Real Hackers" prefer Unix, Unix and all its
decisions must be the right ones.

Unix is great, but people need to be more realistic about its shortcomings
instead of mindlessly repeating mantras about "The Unix Way" without critical
examination.

~~~
nickbauman
I love motorcycles too. I've owned many sophisticated bikes: Ducatis with
their strange Desmodromic heads. With the ability to dial an exchange of
torque to horsepower at the handlebars. Buells with their fuel-in-frame
chassis. My current Suzuki even has 4 sensors in the airbox alone. One that
measures the air input pressure. One that measures the oxygen level. One that
measures the air temperature. Really amazing performance. It will be in a
junkyard within ten years though. I won't be able to find those sensors in a
few years.

So all that new and advanced technology doesn't really interest me anymore.
I'm looking for a 1969 Honda CL350 right now. They're still around and running
fine. They're much simpler and much more maintainable. No Engine Computer. No
sensors. Everything really easy to understand.

I kinda want my OS like that too. With all its warts I can keep it running.

~~~
sedachv
> It will be in a junkyard within ten years though. I won't be able to find
> those sensors in a few years.

Not true. You will be able to get an aftermarket ECU that can just ignore the
sensors and run in open-loop mode. That will be exactly the same as running
with carburetors: fixed fuel/air ratio that is almost always wrong. This is
also the failure mode for OBDII cars - sensor failures lead to the ECU running
in open-loop mode, which lowers MPG and increases emissions, which will
eventually foul the catalytic converters.

> I'm looking for a 1969 Honda CL350 right now. They're still around and
> running fine. They're much simpler and much more maintainable.

My wife has owned a CB175 and a CB550. Both required tons of work and were
maintenance nightmares. They really are piece of shit bikes when it comes to
reliability when compared to most Japanese bikes from 1990 onward. The prices
old Honda bikes command on the market are completely out of whack with what
you get because of strong demand from both the vintage bike enthusiast and
hipster demographics. I would not ride one if it was given to me for free.

~~~
mulmen
Maintenance "headaches" are part of the appeal of old bikes. It's much easier
to learn how an internal combustion powered vehicle works on an old Honda bike
than a new one, or on a new car.

Compared to other bikes of their day these are very simple to maintain and
they were designed from the start to be kept running by the average person. It
really depends on what you are looking for in a bike.

If enjoying turning a wrench on a Saturday makes me a hipster then pass the
beard wax.

~~~
sedachv
There is weekend wrenching and there is dealing with design flaws and poor
manufacturing. Problems with the CB175 and CB550 that were not regular
maintenance (carburetor/timing/valve/etc/etc) related:

* fast cylinder wear (poor materials/manufacturing, engine rebuilds all around)

* unreliable electric system ("mostly" fixed on CB550 with Charlie's solid state ignition and rectifier)

* Leaking gaskets (design flaw)

I know a lot of vintage Honda collectors and a few racers, and also a lot of
vintage BMW collectors. BMW motorcycles from the same era do not have these
problems.

~~~
mulmen
What gaskets leaked? Side covers? I haven't had a problem with side cover
gaskets but I did have some replacement non-JIS screws back out because they
were not torqued properly. Can't speak to cylinder wear, my bike has close to
10k hard miles and doesn't compression test real well but does work fine.

~~~
sedachv
Old side cover gaskets did, those were easy to replace. Something else was
leaking before the engine rebuild, and then something else entirely started
leaking after the rebuild.

~~~
mulmen
I have a hard time blaming either of those on design flaws.

------
Animats
There are many early UNIX design decisions that have outlived their shelf life
by decades.

Probably the biggest one is that UNIX is, at bottom, a terminal-oriented
multi-user time sharing system. This maps badly to desktop, mobile, and server
systems. The protection model is a mismatch for all those purposes. (Programs
have the authority of the user. Not so good today as in the 1970s.) The
administration model also matches badly. Vast amounts of superstructure have
been built to get around that mismatch. (Hello, containers, virtualization,
etc.) Interprocess communication came late to UNIX/Linux, and it's still not a
core component. (The one-way pipe mindset is too deeply ingrained in the UNIX
world.)

~~~
pjmlp
Hence UNIX on mobile is a Pyrrhic victory: iOS, Android, and ChromeOS rely on
Objective-C, Java, and JavaScript runtimes and their respective frameworks,
with just good enough POSIX support, which could be replaced by what is
expected from any ANSI C implementation.

~~~
favorited
Isn't that the whole point of POSIX though?

~~~
pjmlp
ANSI C and POSIX aren't the same thing.

POSIX is more like the extended runtime that C needs to be portable outside
UNIX walls, which isn't being fully implemented on iOS and Android.

[http://www.cs.columbia.edu/~vatlidak/resources/POSIXmagazine...](http://www.cs.columbia.edu/~vatlidak/resources/POSIXmagazine.pdf)

------
wyclif
_This article was written hastily, and I don’t want to further improve it.
You’re lucky I wrote it._

I feel so privileged to read this random guy's blog, and it's terrific that he
eschews inflating his ego so well.

~~~
whack
As a hobbyist blogger, I can perfectly empathise with why the author wrote
that. People love to crap all over a blogger who dares to post his thoughts on
a private blog, without first subjecting it to PhD-thesis-level scrutiny. This
really gets on my nerves for the same reasons that engineers get pissed off
when they decide to open source a pet project and suddenly start getting
"URGENT ASAP" feature requests from entitled users.

The author is taking the time to post his thoughts on a private blog. If
you're not happy with the level of rigor, then don't read it, don't share it,
and don't believe it. But no, the author has no responsibility to provide you
with a comprehensive list of citations and references.

~~~
ci5er
I stopped posting code from side-projects I was done with, when it started
taking on a maintenance life of its own, and getting angry emails. There was a
simplicity to the time when you could put a tarball on an FTP server and post
the path to the appropriate Usenet group: "Here's a tarball. Have at it. Or
don't."

~~~
dredmorbius
[https://www.reddit.com/r/dredmorbius/comments/5x9tws/gresham...](https://www.reddit.com/r/dredmorbius/comments/5x9tws/greshams_law_of_tarball_drops/)

------
pjmlp
"We really are using a 1970s era operating system well past its sell-by date.
We get a lot done, and we have fun, but let's face it, the fundamental design
of Unix is older than many of the readers of Slashdot, while lots of
different, great ideas about computing and networks have been developed in the
last 30 years. Using Unix is the computing equivalent of listening only to
music by David Cassidy."

Rob Pike 2004,
[https://interviews.slashdot.org/story/04/10/18/1153211/rob-p...](https://interviews.slashdot.org/story/04/10/18/1153211/rob-pike-responds)

~~~
Arizhel
>while lots of different, great ideas about computing and networks have been
developed in the last 30 years. Using Unix is the computing equivalent of
listening only to music by David Cassidy.

The problem here is that there aren't a lot of alternatives. You could use
Windows, which is like listening only to music by MC Hammer, or you could use
a Mac, which is like listening only to music by Duran Duran.

Because of software/backwards compatibility concerns, and how dependent
everything is on the underlying OS, it's really hard to change anything in the
OS, especially the fundamental design. It'd be nice to make a clean-sheet new
OS, but good luck getting anyone to adopt it: look at how well Plan9 and BeOS
fared.

~~~
TheRealDunkirk
> which is like listening only to music by Duran Duran

You say that like it were a bad thing.

~~~
salesguy222
I'm hoping you are being sarcastic, but I do not want to live in a world where
I only listen to my favorite band.

------
pmoriarty
This reminds me of the Stroustrup quote:

 _There are two types of languages, the ones everyone complains about, and the
ones nobody uses._

------
em3rgent0rdr
This whole article is a bunch of strawman arguments. It points out historical
mistakes by unix developers, but those mistakes aren't inherent to or a result
of the Unix Philosophy: (1) Write programs that do one thing and do it well.
(2) Write programs to work together. (3) Write programs to handle text
streams, because that is a universal interface.

~~~
TeMPOraL
The arguments in the article indeed don't have much to say about the Unix
Philosophy per se - they're just a list of various fuckups and idiocies Unix
accumulated for one reason or another. As for the Unix Philosophy, point (3)
in your summary is something that's a) dumb, and b) something they already had
better solutions for back when it was created.

Passing text streams around is a horrible idea because now each program has to
have its own, half-assed shotgun parser and generator, and you have to glue
programs together with _your own_, user-provided, half-assed shotgun parsers,
i.e. calls to awk, sed, etc.

Think of it this way: if, per Unix Philosophy (points (1) and (2) of your
summary), programs are kind of like function calls, and your OS is kind of
like the running image, then (3) makes you program in a dynamic,
completely untyped language which forces each function to accept and return a
single parameter that's just a string blob. No other data structures allowed.

I kind of understand how it is people got used to it and don't see a problem
anymore (Stockholm syndrome). What shocked me was learning that back before
UNIX they _already knew_ how to do it better, but UNIX just ignored it.

~~~
em3rgent0rdr
> "The arguments in the article indeed don't have much to say about the Unix
> Philosophy per se - they're just a list of various fuckups and idiocies Unix
> accumulated for one reason or another."

Right. The title should have reflected that: "Various idiocies Unix has
accumulated to this day". But since the article mentions the Unix Philosophy,
my point is that the article should have criticised the philosophy and not the
practice.

> "Passing text streams around is a horrible idea because now each program has
> to have its own, half-assed shotgun parser and generator, and you have to
> glue programs together with your own, user-provided, half-assed shotgun
> parsers, i.e. calls to awk, sed, etc."

But this has actually proved to be very useful as it provided a standard
medium of communication between programs that is both human readable and
computer understandable. And ahead of its time, since it automatically takes
advantage of multiprocessor systems, without having to rewrite the individual
components to be multi-threaded.

> "(3) makes you programming with a dynamic, completely untyped language which
> forces each function to accept and return a single parameter that's just a
> string blob. No other data structures allowed."

That may be a performance downside in some cases, but the benefit of having a
standard universally-agreeable input and output format is the time it saves
Unix operators who can quickly pipe programs together. That saves more total
human time than is gained from potential performance benefits.

~~~
TeMPOraL
> _And ahead of its time_

It wasn't ahead of its time. By the time Unix was created, people were already
aware of the benefits of _structured_ data.

> _it automatically takes advantage of multiprocessor systems, without having to
> rewrite the individual components to be multi-threaded._

That's orthogonal to the issue. The simple solution to Unix problems would be
to put a standard parser for JSON/SEXP/whatever into libc or OS libraries and
have people use it for stdin/stdout communication. This can still take
advantage of multiprocessor systems and whatnot, with an added benefit of
program authors not having to each write their own buggy parser anymore.
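
You can already see a rough approximation of this in jq, which acts as one
shared JSON parser bolted on from the outside. A hedged sketch, in which the
`ls --json` flag is hypothetical but the jq half is real:

    
    
      # hypothetical: ls --json emits one JSON object per directory entry;
      # jq is the single shared parser, replacing N ad-hoc awk/sed hacks
      ls --json | jq -r 'select(.size > 1048576) | .name'
    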

> _but the benefit of having a standard universally-agreeable input and output
> format is the time it saves Unix operators who can quickly pipe programs
> together. That saves more total human time than is gained from potential
> performance benefits._

I'd say it's exactly the opposite. Unstructured text is _not_ a universally-
agreeable format. In fact, it's non-agreeable, since anyone can output
anything however they like (and they do), and as a user you're forced to
transform data from one program into another via more ad-hoc parsers, usually
written in form of sed, awk or Perl invocations. You lose time doing that,
each of those parsing steps introduces vulnerabilities, and the whole thing
will eventually fall apart anyway because of million reasons that can fuck up
the output of Unix commands, including things like your system distribution
and your locale settings.

As an example of what I'm talking about, imagine that your "ls" invocation
would return a list of named rows in some structured format, instead of an
ASCII table. E.g.

    
    
      ((:columns :type :permissions :no-links :owner :group :size :modification-time :name)
       (:data
        (:directory 775 8 temporal temporal 4096 1488506415 ".git")
        (:file 664 1 temporal temporal 4 1488506415 ".gitignore")
          ...
        (:file 755 1 temporal temporal 69337136 1488506415 "hju")))
    

With such a format you could trivially issue commands like:

    
    
      ls | filter ':modification-time < 1 month ago' | cp --to '/home/otheruser/oldfiles/'
      find :name LIKE ".git%" | select (:name :permissions) | format-list > git_perms_audit.log
    

Hell, you could display the usual Unix "ls -la" table for the user trivially
too, but you wouldn't have to _parse it_ manually.

BTW. This is exactly what PowerShell does (except it sends .NET objects),
which is why it's awesome.
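
E.g. a rough PowerShell equivalent of the first pipeline above (the
destination path is illustrative):

    
    
      Get-ChildItem |
        Where-Object { $_.LastWriteTime -lt (Get-Date).AddMonths(-1) } |
        Copy-Item -Destination '/home/otheruser/oldfiles/'
    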

~~~
jstimpfle
There are no problems where you see them.

Most text formats are trivial to parse and space-separated or character-
separated is the way to go. It really doesn't help if you enclose shit in
parens. (Parens are sometimes a good way to encode trees, though).

    
    
        > (:columns :type :permissions :no-links :owner :group :size :modification-time :name)
    

That format doesn't solve any of the problems you mention. The problem is that
it's hard to agree on what data should be inside, not on how you encode it.

    
    
        > ls | filter ':modification-time < 1 month ago' | cp --to '/home/otheruser/oldfiles/'
        find -mtime -30 | xargs cp -t /home/otheruser/oldfiles
    
        > find :name LIKE ".git%" | select (:name :permissions) | format-list > git_perms_audit.log
        find -name '.git*' -printf '%m %f\n' > git_perms_audit.log
    

Use 0-separated if you care that technically filenames can be anything (except
/ and NUL). Or say "crap in, crap out". Or assert that it's not crap before
processing it.
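
E.g. the copy from above, NUL-separated (GNU find/xargs):

    
    
        find -mtime -30 -print0 | xargs -0 cp -t /home/otheruser/oldfiles
    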

> Hell, you could display the usual Unix "ls -la" table for the user trivially
> too, but you wouldn't have to parse it manually.

You don't parse "ls -la". You just don't.

> BTW. This is exactly what PowerShell does (except it sends .NET objects),
> which is why it's awesome.

Powershell is an abomination, and because it encourages coupling of
interacting programs it will never be as successful as the Unix model. There
will never be the same variety of interacting programs for very practical
reasons.

------
rsync
I think it's very telling that the author consistently refers to directories
as "folders".

All of UNIX makes perfect sense if you are using UNIX for UNIX.

If you're doing other things, like abstracting to "folders" and so on ... I am
open minded and can see where it starts to fall apart a bit.

But I use UNIX for the sake of UNIX ... I am interested specifically in doing
UNIX things. It works _great_ for that.

~~~
rocqua
What, then, are "UNIX things"?

This could be either a no-true-Scotsman or a tautology. To resolve this, you'd
need to specify what UNIX is good at.

~~~
rsync
"This could be either a no-true-Scotsman, or a tautology."

It's _even worse_!

I am saying that working in terminals, with strings of text and non-binary-
format config files ... and all of the tools built around that ... _is an end
in itself_.

Every single "broken" example in the OP is something that I find non-
remarkable and, in fact, makes perfect sense to me.

~~~
rocqua
To argue for the OP, consider the case of passwd being parsed on every system
call. That is simply sub-optimal. (It also seems exaggerated to me, and feels
like a prime candidate for caching).

Further, there is immense value in GUI-based systems: discoverability. On a
GUI, you can learn how to use a program without ever consulting a manual just
by inspecting your screen. This addition is what brought the computer to the
masses.

Finally, the terminal model of UNIX is just horrible. The hacks-on-top-of-
hacks that are needed to turn the equivalent of a line-printer into something
like nCurses or tMux are horrible. The current terminal is like this purely
because of legacy. If you'd design a system for "working in terminals, with
strings of text and non-binary-format config files" from the bottom up, it
would look totally different. Sadly, getting it to work with existing software
would be a total nightmare.

All that being said, UNIX still has the better terminal (though I hear good
things about PowerShell). Certainly, it is the best system for "working in
terminals, with strings of text and non-binary-format config files". Though
competition is sparse (Windows, and maybe Mac, depending on whether you
consider it to still be Unix or not).

~~~
GalacticDomin8r
> To argue for the OP, consider the case of passwd being parsed on every
> system call. That is simply sub-optimal.

passwd is not read on every system call, and anything that is read frequently is
almost certainly in the fs cache.

I got about 3 assertions into the article before I decided I had enough of
that bullshit.

~~~
paulddraper
Note that "read from disk" and "parsed" are two different things.

~~~
jstimpfle
In any case I haven't heard of performance problems. I think these files are
needed basically for logging in and when tools like "ls" convert UIDs to
names.

If there were performance problems, something would be done. And you can
easily switch to LDAP or your backend of preference.
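
The switch is a couple of lines in /etc/nsswitch.conf, and nothing that looks
up users has to change:

    
    
        passwd: files ldap
        group:  files ldap
    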

------
white-flame
Some of my primary beefs with Unix, stated more concisely than the rambling
article:

1) Text is for humans, and is generally incomprehensible to machines.
Encodings, arbitrary config file formats, terminals, etc, are all piles of
thoughtless one-off hacks. It's a horrible substrate for composing software
functionality, through either pipes or files.

2) Hierarchical directory structures quickly become insufficient.

3) The filesystem permission model is way too coarse grained.

4) C is a systems/driver/low-level language, not a reasonable user-level
application or utility language.

~~~
greenhouse_gas
1. Binary formats are great, until they break. I would take plain-text
configuration over the Win95 registry any day.

Actually, I'd take plain text conf over the Win10 registry any day too. How do
you transfer your (arbitrary) program settings from one computer to another? I
can tell you how on Unix (see the sketch at the end of this comment). How
would you on Windows?

2. /-based filesystems are head and shoulders better than Windows'.

Why should moving a directory from my internal hard drive to an SSD over USB
change its location?

On Unix I can keep my home directory (or Program Files) on an NFS share, SMB
share, or SSD hard drive.

Can I do the same on Windows?

3. It is, which is why SELinux was invented. But that's too hard, so no one
uses it.

4. All major OSs (both Unix/Linux and Windows) are C or C++ based.

But here's something interesting: in the 90s, it was considered "advanced" to
have the GUI an inherent part of the OS, rather than being just another
program. Windows and Plan9 did that.

Yet, it turned out that admining a system without a first-class command line is
a pain, so Windows is rolling that back with PowerShell.

Maybe Windows Server will one day be headless, with full sysadmining possible
through SSH.
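
(Re point 1, the Unix answer is plain file copying - a sketch, assuming the
usual dotfile locations:)

    
    
      # one program's settings
      scp -r ~/.config/someprogram otherhost:~/.config/
      # or a batch of them
      rsync -a ~/.vimrc ~/.bashrc ~/.config otherhost:~/
    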

~~~
white-flame
I'm pointing out flaws in Unix. The fact that other environments have
different/worse flaws than Unix doesn't make Unix's flaws not flaws. The point
is to think outside the box and stop being constrained by arbitrary decisions
of the past, be they from Unix, Windows, or wherever.

See my other response below about the text issue. It's not about one program
having a text file; it's managing all the text files on your system that
sucks, especially when you want them to interoperate in terms of data or
automation, or even to have some sort of consistency of expectation between
them in human terms. The term "text" implies data that is unspecified and
unpredictable, in encoding, syntax, and semantics, across various instances of
it.

~~~
jstimpfle
It is good that these are many separate files, because that avoids the problem
of accumulating cruft that nobody can sort out. Also different syntax is
possible for different problems. Contrast that with a key-value store with a
fixed set of datatypes (say, bool/int/string) where invariably the complexity
is pushed into string values (for example, screen geometry, hotkey
descriptions...). Nothing gained.

The term "Text" has different cultural connotations. You can have a very
strict encoding, and in fact in my perception many people agree today that "be
liberal in what you accept" is wrong. Personally I am a nazi in what I accept,
and it has worked out great so far.

I don't think text is a problem with automation either. The easiest way to go
is just to regenerate the whole config file when something changes. Also some
formats can safely be transformed with e.g. sed.

------
jstewartmobile
I wish Symbolics had a better run.

The Windows registry/PowerShell approach has the niceness of passing pieces of
data instead of one big blob of text that has to be re-parsed at every step,
but with the drawback of verbosity and fussy static typing.

Being able to directly pass s-expressions between programs without the
format/parse/typing hokey-pokey of Unix and Windows would be nice.

~~~
eschaton
Symbolics Genera doesn't pass S-expressions (i.e. text) between processes; it
passes objects. Everything runs in a single address space so this is cheap and
accurate.

Also, ZetaLisp and Common Lisp have way more types than those supported by
S-expressions. For example, they have real vectors/arrays, structures, and
full objects.

Don't assume all of the Lisp world uses just the subset of Scheme used in the
first couple chapters of SICP.

~~~
jstewartmobile
Only knowing Genera from a user perspective, I just _assumed_ that process-to-
process communication gives preference to the internal representation when it
is available.

------
radiowave
This is just beautiful: _"Standard utilities provide the output in the form
of a plain text. For each utility, we actually need a parser of its own."_

(More people would probably have read the Unix Haters Handbook if it was as
pithy as this.)

------
yellowapple
Most of the author's criticisms around the Unix Philosophy™ (aside from
perhaps the performance aspect) would be solvable in two steps:

1) Standardize on some structured text serialization format (I like YAML for
this)

2) Write a new shell

Both of these things are compatible with the Unix Philosophy™, and thus said
Philosophy is nowhere near collapse. Rusty around the edges, sure, and maybe
with some asbestos in the ceiling tiles, but certainly renovatable.

The philosophy is already prevalent in the world of "microservices"; an
application is split into a whole bunch of independent (usually containerized)
programs communicating via something like JSON over HTTP.

~~~
jstimpfle
One format for all use cases? Databases (passwd, group...), single-word files,
key-value(-list?) files, rc files for a thousand programs?

Great idea! We should use XML for that...

~~~
TeMPOraL
XML, YAML, JSON and s-expressions are all _just_ flavours of representing
_trees_.

So yeah, any of that would be a much better idea than unstructured text, and
yes, you can serialize all those use cases into trees. I'd steer away from XML
for the sake of efficiency and human-readability, though.

~~~
jstimpfle
Not everything is a tree, and neither XML nor JSON nor sexp are particularly
efficient or "beautiful". And there is no canonic representation. You could
strip _all_ whitespace or indent _all_ children, but... And YAML for example has
no nice way to put lists of single words on one line.

~~~
TeMPOraL
I've yet to see a practical data set that could not be encoded as a tree. Maybe
if you have a cyclical data structure and you want to save that directly, but
then it's a simple meta-level extension. For example, the Lisp reader does that
when reading S-expressions. If you want to create a list like this:

    
    
      1 ---> 2 ---> 3-|
      ^_______________|
    

you write: #1=(1 2 3 . #1#), where #n=OBJECT means "this is the object N", and
#n# means "here is the very same object N too".

~~~
jstimpfle
Yes, you can encode everything "as a tree". You can also encode everything "as
binary", "as a big integer", whatever. That doesn't mean it's a good idea.

~~~
yellowapple
Unlike "as binary" or "as a big integer", a tree is structured. "As an
arbitrarily-formatted string" would be much closer to those two comparison
points.

~~~
jstimpfle
If you think strings (or "binary") are "unstructured", think again. (Start
with: what does that even mean?)

~~~
yellowapple
I don't think they're _unstructured_ ; rather, I know they're _arbitrarily_
structured, usually requiring a great deal of ad-hockery to deal with them. A
standard structure means a lot less work for data consumers and producers
alike.

This comment is structured in the sense that it's two paragraphs of more-or-
less-correct English. That doesn't make it useful to tools that don't
understand English. As far as a tool like 'rm' is concerned, it might as well
be unstructured.

------
pdkl95
> Unix Philosophy

Note that this philosophy covers many concepts. These discussions often
mention the modularity and composition rules, but the other parts of the
philosophy are also important.

See "The Art Of Unix Programming" for a full explanation of the philosophy.

[http://www.catb.org/esr/writings/taoup/html/ch01s06.html](http://www.catb.org/esr/writings/taoup/html/ch01s06.html)

------
rmusial
"Taking into account the numerous mistakes of UNIX. However, no one raises
Plan 9 on a pedestal."

Suckless and cat-v.org would disagree. I'd also disagree since I'm a huge fan
of plan9port.

~~~
dekhn
The Plan 9 FS network protocol, P9, is extremely well regarded and the basis
for a number of security projects.

~~~
ptman
the protocol is actually called 9P (or 9p):
[https://en.wikipedia.org/wiki/9P_(protocol)](https://en.wikipedia.org/wiki/9P_\(protocol\))

------
Aloha
Unix as we know it is almost 50 years of accretion - sh/bash is a great
example of this. I think the Unix philosophy is still sound and alive, but the
movement of technology means that not everything that was universal before is now.

~~~
nine_k
There's a recurring theme (e.g. [1] among many examples) of comparing the Unix
Way to the way of functional programming. Both prefer small things that do one
thing and compose well.

What is missing in many cases is a _concepts guide_, explaining the key
ideas, how to combine things, and what's possible in various subject areas.

For GUI programs, menus / toolbars used to be the concept guide: what they
show is what's possible, and they offer context help. This is why a GUI feels
friendly. It sucks at composability, though. Current mobile interfaces,
unfortunately, tend to lack this.

If tiny GUI-oriented programs were easy to compose, had an easy way to save
the composed state, and a number of daily-use programs bundled with an OS came
in this form, providing example and reference, many people would consider
following suit, I suppose.

[1]:
[http://softwareengineering.stackexchange.com/questions/61814...](http://softwareengineering.stackexchange.com/questions/61814/is-programming-in-the-unix-philosophy-the-same-as-functional-programming)

~~~
rocqua
> For GUI programs, menus / toolbars used to be the concept guide

This simple fact seems like the key to getting the masses into computing. For
something like six years (say, ages 12-18) GUIs were the way I interacted with
computers. Need to do something and learn about it? Go and explore the UI
until you find the option. If the option has a shortcut printed, you will
remember it eventually.

Sadly, GUI design is a quite separate discipline from software design. This
means much open source software is missing GUIs. Those who write the software
aren't always GUI designers. This also creates the mismatch between composing
software and composing GUIs. As they are different disciplines, combining them
means different things.

A decent stopgap is heavy GUI frameworks and standardization, to make it
easier for devs to get a GUI. For the really good stuff, commercial entities
are best positioned: they need their stuff to be usable by everyone, and this
finances the hiring of GUI people.

There is the rare gem of a developer who can also do GUI right, but that only
has value in the case of small projects. When projects grow, unless all devs
have the GUI knack, you're gonna need some dedicated GUI people.

It would be great if we could get more GUI-oriented people into open-source
stuff but it seems like they aren't as attracted to open-source as devs are.
It might be because devs can be at the ground floor of a project, and GUI,
almost by necessity, comes later.

------
harel
He lost me right at the beginning, with "I guess there wasn’t even Microsoft
DOS at the time (I guess and I don’t bother to check, so check it yourself).".

This is the OS equivalent of a teenage Call of Duty online player on Xbox
Live.

~~~
rwallace
Well it's a true statement, even if made in an irascible way. If you're going
to insult the guy, at least wait until he makes a false statement.

~~~
harel
Wasn't intended as an insult. Just a statement as to where he lost me and why.
The tone of the post was so arrogant it might have carried over to my comment.

------
nanodano
"This article was written hastily, and I don’t want to further improve it.
You’re lucky I wrote it. Therefore, I may provide some facts without source
links."

I guess I am lucky he wrote this for a pleb like me.

~~~
sehr
It's a joke though?

~~~
fredoliveira
Doesn't read like one.

~~~
sehr
Would be commonplace in any banter with friends imo, normal affected
brashness.

------
bigger_cheese
For an interesting critique of Unix, the "Ghosts of Unix Past" series of
articles on LWN is worth reading.

[https://lwn.net/Articles/411845/](https://lwn.net/Articles/411845/)

The whole series is worth a read, especially part 3, 'Unfixable designs',
which talks about signals and the Unix permission model.

------
gwu78
"How can we recursively find all files with \ name in folder foo?"

Whenever someone tries to critique UNIX they always make up these nonsensical
problems.

No UNIX user would intentionally name a file with a backslash, a space, a
semicolon, etc.

But I will play along.

To find these files, a number of ideas come to mind.

Maybe the easiest would be to use mtree to make a file specification and then
search the specification.

Something like

    
    
        mtree -cp /foo |exec mtree -C
    

Any backslashes, spaces, or other nonsense in filenames would appear as
octal values.

If there are so many files that these specifications become large enough to
cause problems with UNIX utilities, I import them into a database like kdb.

Is there a way to hide files from mtree? Maybe.

But at least with the UNIX concept for a system, the user can look at the
source and see how it works.

The truth is, UNIX is not so impressive.

It is only that the alternative systems for doing the types of things
one does with UNIX have always been inferior/rubbish in the opinion of a
certain set of informed users.

It is this _contrast_ that makes UNIX seem impressive.

As usual, the author makes no suggestion or detailed comparison of any other
alternative that can be used for doing the tasks one does with UNIX.

~~~
deathanatos
He didn't even get the find command that he has such a distaste for correct:

> _How can we recursively find all the files with \ name in a folder foo? The
> correct answer is: find foo -name '\\\\'_

No, the correct answer:

    
    
      % find foo -name '\\'
      foo/\
    

> _We need to write four backslashes here as UNIX shell performs backslash
> expanding, and find does it too._

The single quotes prevent the shell from performing escaping. Find does need
escaping, and that's the only layer that does, so the correct answer has two
backslashes.

------
falsedan
I would prefer more constructive criticism: this reads like a synopsis of _The
UNIX-HATERS Handbook_ [0], and (while quoting from _Worse is Better_ [1])
glosses over how UNIX is terrible because it out-competed its peers by being
'better' (compromising its design and consistency to improve its delivery and
practicality).

It's valuable to challenge assumptions held in a community (to avoid the
_Normalization of deviance in software: how broken practices become standard_
[2]). It's more valuable to suggest improvements, and most valuable to improve
it.

[0]:
[http://web.mit.edu/~simsong/www/ugh.pdf](http://web.mit.edu/~simsong/www/ugh.pdf)
[1]:
[https://www.dreamsongs.com/WorseIsBetter.html](https://www.dreamsongs.com/WorseIsBetter.html)
[2]: [http://danluu.com/wat/](http://danluu.com/wat/)

------
czep
A lot of the author's criticisms are about UNIX utilities not gracefully
handling filenames containing special characters. But seriously, who puts a
newline in a filename? Instead of wasting time writing scripts that
meticulously handle all possible edge cases, I'd much rather fix whatever
broken process is putting control characters into file names.

~~~
TeMPOraL
"Doctor, it hurts when I do this."

"So don't do it."

The problem with filenames is just a symptom of the biggest problem of UNIX
conventions - passing around unstructured text. Filenames should have one
well-defined format (AFAIR kernel allows pretty much anything but the NULL
character). That's it. For most applications, filenames should be opaque data
blobs compared for binary equality. But because we're passing around
unstructured text, each program has to parse, reparse, and concatenate strings
via ad-hoc, half-assed shotgun parsers. Each program does it slightly
differently, hence the mess.
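
The canonical two-line demonstration of that mess, for anyone who hasn't been
bitten by it yet (plain POSIX shell):

    
    
      # ad-hoc parsing: word-splitting mangles any name with spaces or newlines
      for f in $(ls); do echo "$f"; done
      # names as opaque blobs: let the shell glob; never parse ls output
      for f in *; do echo "$f"; done
    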

~~~
fnj
When the design was made, no one was considering pathology. We were all too
invested in the wonder of making it all work to worry about people screwing
around in crazy ways, let alone purposeful attacks.

As long as everyone recognized that putting certain characters in pathnames
was counter-productive, things worked fine. Nobody ever dreamed of putting a
space character in a filename when they all came from a CLI background. When
barbarians came from Mac/Windows to UNIX, they lacked this background, and
there went the neighborhood.

I remember being stunned when I first encountered _two_ periods in one
filename! But it only took me a few seconds to grasp the fact that period was
just another character, "suffixes" were just conventions, and it all made
perfect sense. OTOH, the first time somebody showed me a file named "-fr\ \*"
and suggested I delete it, I got one of my first disillusionments.
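
(For the record, the escape hatches exist; they're just not discoverable:)

    
    
      rm -- '-fr *'    # "--" ends option parsing
      rm './-fr *'     # or hand it a path that can't look like an option
    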

P.S. - "/" is a character which is also "special" in pathnames. Actually, the
particular FS may make other exceptions which are not globally enforced by the
kernel. ZFS has several (such as "@").

~~~
TeMPOraL
> _When the design was made, no one was considering pathology. We were all too
> invested in the wonder of making it all work to worry about people screwing
> around in crazy ways, let alone purposeful attacks._

I could buy this if not for the fact that back when UNIX was created, there
were already better operating systems and sane solutions to those issues
existed. It's more likely that those aspects simply weren't really thought
through, but instead just hacked together.

Contrary to what seems to be a popular opinion nowadays, UNIX wasn't the first
real operating system, just like C wasn't the first high-level programming
language. I know I actually believed the latter, due to the way many C/C++
books were written. But no, in both the worlds of programming and operating
systems, there already were better thought-out solutions. It's a quirk of
history that UNIX and C ended up winning.

~~~
zzzcpan
"But no, in both the worlds of programming and operating systems, there
already were better thought-out solutions."

There were differently thought-out solutions, but not necessarily better or
worse. Perhaps the issues they thought out didn't matter as much at the time
and are very unlikely to matter even a little bit today, and who knows how much
they got wrong and made worse. It's very hard to speculate. But one thing I'm
sure of is that these things cannot be really well designed from scratch and
all the problems manifest only once they are used by people. So widely used
systems can only be compared to widely used systems, not something niche or
unused.

~~~
TeMPOraL
Sure, but we're not talking about niche systems. There was a whole flourishing
world of computing before Unix and C came to be. In fact, a lot of significant
theoretical and practical advancements came from that age.

Our industry does seem to be stuck going in circles, continuously forgetting
the ideas of past cycles and reinventing them, only for them to be forgotten
again. To see that phenomenon in action, one need look no further than the
last 10-15 years of JavaScript history, in which the web ecosystem slowly
reinvented long-established practices from desktop operating systems and GUI
toolkits...

~~~
cassowary
I don't think that's a fair characterisation of Javascript development. It's
more that people gradually realised they want to do more and more. But when
they've finally understood what they want to do, they quickly adopt lessons
from elsewhere. A lot of the latest and greatest developments (like React and
Redux) are directly inspired by theoretical work.

------
jff
Although I agree that Unix is a big collection of hacks and well past its
prime, the author displays several fundamental misconceptions about what he's
talking about. Here's a few examples:

- _Dirty hacks in UNIX started to arise when UNIX was released, and it was
long before Windows came to the scene, I guess there wasn’t even Microsoft DOS
at the time (I guess and I don’t bother to check, so check it yourself)._ At
least he acknowledges that he's being incredibly lazy, and he shows the
glimmer of an understanding as to why some of the things mentioned later
happened: because Unix is from the early 70s, which were a very different time
in computing & culture.

- _Almost at the very beginning, there was no /usr folder in UNIX. All
binaries were located in /bin and /sbin._ /usr was the place for user home
directories (there was no /home). Putting /home on a separate partition
remains a pretty common thing to this day because users will tend to have
greater storage requirements than just the root. /usr/bin and the like are the
result of people realizing that this secondary larger disk is an acceptable
place to put binaries and other files that _aren't_ needed at bootup.

- _In other words, if you’ve captured Ctrl+C from the user’s input, then the
operating system, instead of just calling your handler, will interrupt the
syscall that was running before and return EINTR error code from the kernel._
That's not the kind of interrupt they're talking about.

- _I’ve read somewhere that the cp command is called cp not because of copy
but because UNIX was developed with the use of terminals that output
characters very slowly._ Yep, terminals that print on paper are pretty slow,
as are 300 baud modems. I'm absolutely crushed I had to learn that 'cp' means
'copy'--it took hours to beat that into my head, and the thousands of
keystrokes I've saved over the years are a small comfort (except to my
RSI-crippled hands).

- _The names of UNIX utilities is another story. For example, grep comes from
command g /re/p in the ed text editor. Well, cat comes from concatenation. I
hope you already knew it. To top it all up, vmlinuz — gZipped LINUx with
Virtual Memory support._ 'cat' comes from 'catenate', in fact. What would you
name 'grep' instead? "searchregexandprint"?

- _at least the main website of C that would be the main entry point for all
beginners and would contain not only documentation but also a brief manual on
installing C tools on any platform, as well as a manual on creating a simple
project in C, and would also contain a user-friendly list of C packages_ This
is one of the most ridiculous ones. You're talking about a programming
language defined in the 70s, for Christ's sake. Lot of websites created in the
70s? There is a document with a good introduction to C, project examples, etc.
and it's call The C Programming Language, a book by K&R. When Kernighan made
another language a few years ago, yeah, he made a website for it--golang.org,
it's one of the best project sites I've seen.

The article points out some legit problems in Unix, but even leaving aside the
author's ESL challenges it's poorly-written, poorly thought-out, and poorly-
defended.

~~~
__jal
> What would you name 'grep' instead?

Yeah, I found that one an especially weird gripe. Grepping was a new thing, so
we needed a word for it. 'Grep' is short, easy to say and type, and relatively
hard to confuse with similar words in the domain. Works for me.

I can unfortunately imagine a modern startup implementing it, and shudder at
potential names my imagination is coming up with... Searchlr, the best way to
search text! ReadMonkey, your personal pattern recognizer! I'll stop now.

~~~
richardwhiuk
search?

~~~
flukus
Search what? File names? File contents? Users? Machines?

~~~
TeMPOraL
You could say the same about "mv". Move what? Files? File names? File parts?
Users? Machines? Screens?

There's always some default subject implied for every command name. For "find"
it is files, for "search" it could have been text.

~~~
flukus
> You could say the same about "mv". Move what? Files? File names? File parts?
> Users? Machines? Screens?

Files, the base type that's consistent across all the basic commands (AFAIK).

~~~
int_19h
It depends on what you mean by "basic command". There are plenty which take
arguments that aren't files - chown, chgrp and su, for example.

------
SFJulie
I prefer the original, older, funnier version:
[http://harmful.cat-v.org/software/operating-systems/linux/](http://harmful.cat-v.org/software/operating-systems/linux/)

I suspect the author of this site has contributed to the Linux Haters
Handbook.

Some quotes:

“Linux printing was designed and implemented by people working to preserve the
rainforest by making it utterly impossible to consume paper.” – Athas

“ALSA is like the emperors new clothes. It never works, but people say it’s
because you’re a noob.”

“Object-oriented programming is an exceptionally bad idea which could only
have originated in California.” – Edsger Dijkstra

“[firefox] is always doing something, even if it’s just calculating the
opportune moment to crash inexplicably” – kfx

....

------
gabrielblack
Am I the only one thinking that this article is full of errors? Starting
with "killing zombie process" (sic!) and so on?

------
pklausler
> I guess there wasn’t even Microsoft DOS at the time (I guess and I don’t
> bother to check, so check it yourself)

1969 < 1981.

~~~
kps
Heck, in 1969 there wasn't even CP/M that MS-DOS was modelled after.

Heck, in 1969 there wasn't even RT-11 that CP/M was modelled after.

There _was_ a brand new OS/8 that RT-11 was modelled after.

------
vzhang
Evolution is never clean.

~~~
TeMPOraL
Evolution does what evolution does. The problem is that selection pressure is
too weak, and things that should die off survive and flourish.

~~~
thehardsphere
Isn't that contradictory? If you start with the assumption that it's
evolutionary, how does it make sense to judge whether selection pressure is
"too weak" and that things "should die off?" Selection pressure is what it is.

~~~
TeMPOraL
Selection pressure may or may not be in a feedback loop with the evolutionary
process, but you can still view it as a separate component. In the case of
computing, the (broadly understood) market is the selection pressure. As for
the notion of what _should_ happen, this comes from humans who are capable of
thinking _about_ the evolutionary process and who value some goals over
others. In particular, those humans tend to notice that the selection
pressures in software industry do _not_ promote good, efficient, and well
thought-out solutions.

------
gluggymug
The comments on this thread have been more interesting than the article. I
think the common theme is a push to innovate and take the next step in
computer systems.

If we scan this thread we could come up with a list of issues/problems with
UNIX. Human inclination is to instantly look for each solution as each problem
presents itself. E.g. "Text as an interface is unstructured" -> "Use other
formats". The end result becomes a lot of feature creep, adding layers over
the OS to hide the usability issues of the past. Then, a new user comes along
and wonders, "How did it all get so messy?"

The ideation-style approach to innovation is NOT to leap straight to
solutions. Avoid analyzing the problems immediately. Don't criticize other
people's existing thoughts.

Instead we use our creativity to add to the list of problems. We build it up
even more. Add tangential issues that may be not just UNIX related problems.
Add future issues. Add past issues. You keep adding until you exhaust all the
avenues. You don't want to block your thoughts or anyone else's. If anything,
the previous problem someone raises should inspire your next one.

Once you have drained out all the issues and you can't squeeze out any more
complaints, then you can take a step back and look at all the problems as a
whole. You group them into categories that have common themes. You try to
generalize and re-express them in vaguer terms.

After you have your themes, you can think about making a list of solutions.
Again you don't want to be critical about the feasibility of a solution. You
just want to build a list of different solutions. Each solution should inspire
a new different solution. There are mental exercises that you can do to
inspire tangential thoughts - word games etc.

When you have a giant list of solutions, then you categorize again. Those
categories are the start point to building something innovative.

The alternative is just another iteration of what already exists.

My own thoughts on this are that I seem to need "power-user" abilities way too
much. My field is actually Hardware. Something is wrong if I am jumping
through a lot of hoops outside of my specialty just to get work done. It's not
just UNIX either. My friends and family often call on me to handle their
Windows or Mac issues because I am the closest thing to an expert they know.
They shouldn't have to.

------
mwpmaybe
For those of you who might be fooled into thinking this is about things like
SystemD and Docker—it's not!

~~~
__jal
More a reprise of the _Unix Hater's Handbook_ with a slightly different gloss,
for those who remember that.

~~~
randcraw
Though a Unix Lover(tm), I enjoyed the UHH:

[https://en.wikipedia.org/wiki/The_Unix-Haters_Handbook](https://en.wikipedia.org/wiki/The_Unix-Haters_Handbook)

------
jdblair
The most interesting thing I learned from this article is the sprintf() pre-
compiler ([http://blog.kazuhooku.com/2014/10/announcing-qrintf-and-qrin...](http://blog.kazuhooku.com/2014/10/announcing-qrintf-and-qrintf-gcc.html)).
That's awesome.

The rest, well, it's a helpful check on overenthusiasm to be familiar with the
shortcomings of UNIX. By dint of its popularity, diversity and long lineage,
UNIX and its derivatives are particularly rich in warts. Yet, I think it must
be true of all real-world operating systems with a long life that cruft,
workarounds and ossified bad ideas accumulate.

------
daemonk
I am not a power user. But I found this to be a great read. It's interesting
to see the historical and sometimes vestigial traces in Unix.

However, for all the flaws that were pointed out, I still can't imagine doing
my work on Windows vs Linux, even if I do sometimes have to spend effort
battling Linux's idiosyncrasies. At least when something goes wrong in Unix,
I can probably go and find/fix what went wrong because the underlying system
is mostly plaintext and transparent. Versus when something goes wrong in
Windows, it's more of a black box to me. But that could just be me not
understanding Windows enough.

------
pjmlp
Where everything is a file handle, except when it is not (sockets, IPC, ...).

~~~
teddyh
Note: A “file handle” is a FILE *, i.e. a stream as used by a lot of the
higher-level functions of the C library. The term you were looking for is
probably “file descriptor”. (Or possibly “inode”, “directory entry”, or “file
name”? It’s not entirely clear what you mean.)

------
w8rbt
I read once that the dd command (which stands for convert and copy) was not
named cc because the compiler was already called that, so they used the next
letters in the alphabet.

NAME dd - convert and copy a file

~~~
tmccrmck
No, it came from "data definition" in OS/360 JCL, which is why it has a
different syntax than the other utils.

------
markhahn
Newbie discovers historical and technical debt, interprets as collapse.

------
vatotemking
I'll wait for the satirical summary of this HN thread. :-)

------
dorianm
I do think we've lost a lot of the "do one thing and do it well" philosophy:
[https://en.m.wikipedia.org/wiki/Unix_philosophy](https://en.m.wikipedia.org/wiki/Unix_philosophy)

Part of it is that it's way more annoying to maintain a little tool than to
make it part of a bigger project with a much bigger community. A little tool
is also far harder to discover than something that's just in the docs of a
bigger project.

------
gotthemwmds
I'm sorry but I am not reading an article that says, "You're lucky I wrote
it," in the first paragraph.

And from the comments, it sounds like I'm not missing much.

Author, don't treat your readers like idiots. I can guarantee some of them are
much smarter and more experienced than you.

------
bch
> Some people think that UNIX is great and perfect

"...great and perfect" is a strawman. Whether "some people" think that is
irrelevant.

Some of this article is interesting, but the fact of the matter is 40-year-old
systems have signs of being 40 years old. If "fixing" everything were easy,
it'd be done. Tabs in Makefiles throw off the uninitiated for 10 minutes, then
they learn, shrug and move on. These scars and stories are part of the
package.

Reading further, some of this is just incorrect...

>That’s not to mention the fact that critical UNIX files (such as /etc/passwd)
that are read upon every (!) call, say, ls -l, are plain text files. The
system reads and parses these files again and again, after every single call!

Not on my system.

> It would be much better to use a binary format. Or a database.

On my system, it is (running "ls -ld ."):

    
    
      kamloops$ uname -a
      NetBSD kamloops 7.99.64 NetBSD 7.99.64 (GENERIC) #26: Thu Mar  2 07:15:26 PST 2017  root@kamloops:/usr/src/sys/arch/amd64/compile/obj/GENERIC amd64
    
      kamloops# dtrace -x nolibs -n ':syscall::open:entry /execname == "ls" / { printf("%s -%s", execname, copyinstr(arg0));}'
      dtrace: description ':syscall::open:entry ' matched 1 probe
      CPU     ID                    FUNCTION:NAME
        0     14                       open:entry ls -/etc/ld.so.conf
        0     14                       open:entry ls -/lib/libutil.so.7
        0     14                       open:entry ls -/lib/libc.so.12
        0     14                       open:entry ls -.
        0     14                       open:entry ls -/etc/nsswitch.conf
        0     14                       open:entry ls -/lib/nss_compat.so.0
        0     14                       open:entry ls -/usr/lib/nss_compat.so.0
        0     14                       open:entry ls -/lib/nss_nis.so.0
        0     14                       open:entry ls -/usr/lib/nss_nis.so.0
        0     14                       open:entry ls -/lib/nss_files.so.0
        0     14                       open:entry ls -/usr/lib/nss_files.so.0
        0     14                       open:entry ls -/lib/nss_dns.so.0
        0     14                       open:entry ls -/usr/lib/nss_dns.so.0
        0     14                       open:entry ls -/etc/pwd.db
        0     14                       open:entry ls -/etc/group
        0     14                       open:entry ls -/etc/localtime
        0     14                       open:entry ls -/usr/share/zoneinfo/posixrules
    
      kamloops# file /etc/pwd.db
      /etc/pwd.db: Berkeley DB 1.85 (Hash, version 2, native byte-order)
    

Now, I see that /etc/group -is- a plain file. This could get the same
treatment as /etc/passwd if it becomes a burden. In the meantime, if it's a
performance bottleneck, make a memoizing function to look up groups, and use
the '-n' switch to ls. The article is probably most valuable as a record of
the author thinking deeply about Unix, part of the developmental process of
the user.
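
(Concretely, the workaround looks something like this; "wheel" is just an
example group:)

    # ls -n prints numeric uid/gid, so no name lookups happen at all:
    ls -ln /tmp
    # and group lookups already go through nsswitch.conf, whatever the
    # configured backend is:
    getent group wheel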

...All the bluster (some of which is interesting), and then at the end the
author walks it back:

> So, I do not want to say that UNIX – is a bad system. I’m just drawing your
> attention to the fact that it has tons of drawbacks, just like other systems
> do. I also do not cancel the “UNIX philosophy”, just trying to say that it’s
> not an absolute.

Shame about the title... But maybe that's what landed it here on HN (?)

Edit: explained the "ls" command actually run.

~~~
metaobject
"This could get the same treatment as /etc/passwd if it becomes a burden. In
the meantime, if it's a performance bottleneck, make a memoizing function to
lookup groups and use a '-n' switch to ls."

Exactly, and this is the sort of thing that can be done with open source
software. It may not even be a lot of code depending on how it is approached.

~~~
rwmj
It's not even clear that, for small password files, scanning /etc/passwd is
any slower than a database. The file is likely already in memory, and a full
scan of a few kilobytes of text in highly optimized C should take only
microseconds.
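
(A quick, unscientific check; numbers will vary with machine and cache state,
but on a warm cache both paths typically finish in well under a millisecond:)

    # raw scan of the flat file vs. the normal NSS lookup path:
    time awk -F: '$1 == "root"' /etc/passwd
    time getent passwd root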

------
xkxx
> Let’s say we need to delete a file on host a@a. The name of the file is in a
> variable A. How can we do it?

> [...]

> Anyway, the correct answer is: `ssh a@a "rm -- $(printf '%q\n' "$A")"`

In zsh it's just `ssh a@a "rm -- ${(q)A}"`.
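
(For comparison, a sketch of what both approaches produce; $A here is a
deliberately nasty value:)

    # bash: printf %q emits a shell-escaped rendering of the value
    A='file with spaces; $(rm -rf ~)'
    printf '%q\n' "$A"
    # zsh: the (q) parameter-expansion flag does the same thing inline:
    #   print -r -- "rm -- ${(q)A}"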

------
jondubois
I think the problem is that Linux developers started abusing the Unix
philosophy to the point that you had to know about a lot of different programs
in order to be productive. Sometimes it's much more convenient if one program
can do everything that you want it to do out of the box (it requires less
understanding of the system).

The Unix philosophy is essentially the opposite of the Apple philosophy. It
gives you flexibility and composability at the cost of simplicity and the
overall experience.

The optimal solution tends to be somewhere in between. If you look at Linux,
it's actually a monolithic system (which goes against the Unix philosophy);
the popularity of Linux is in itself proof that people do want a single
cohesive product. If the Unix philosophy were the best approach, we'd all be
using Minix by now.

~~~
pmoriarty
I didn't find MacOS to be any simpler than newbie-oriented Linux distros like
Mint and Ubuntu. It was just filled with a ton of proprietary, bastardized,
closed-source garbage and limits that made it more difficult for power users
to understand and effectively manage the system.

~~~
jondubois
To be fair, Linux distros have improved a LOT in the past 5 years. When I
first used Ubuntu many years ago, you couldn't do anything without the command
line. Installing software was a pain (I had tons of problems with Ubuntu
Software Center, and it never seemed to work).

I use Ubuntu (Gnome) these days. The only thing I miss from Windows is Windows
Explorer. Nautilus just doesn't cut it in my opinion; I always end up browsing
the file system with the command line. That said, I still prefer Nautilus over
OSX's Finder.

~~~
SwellJoe
"Installing software was a mess."

Of all the things to complain about in Linux, you choose the one thing that
Mac OS and Windows _still_ don't have right, and Linux had pretty good even
back then?

~~~
jondubois
Installation on Windows was always much easier; all programs had a relatively
consistent UI wizard that stepped you through the installation process.
Installing software from disks on Windows was really convenient (and disks
were the real deal back then).

The fact that Linux relied on people to install stuff with the command line
was a massive oversight. UIs are just way more intuitive than shell commands.

~~~
SwellJoe
"Installation on Windows was always much easier"

I've rarely disagreed with something said on HN so strongly (at least among
things that, in the grand scheme of things, really don't matter that much, but
they matter a lot to my personal experience).

"The fact that Linux relied on people to install stuff with the command line
was a massive oversight. UIs are just way more intuitive than shell commands."

This has never been true in the past 12 years. You have to go back even
further to find a time when there weren't multiple GUIs for the leading
package managers. And, for at least the past decade, the core GUI experience
on every major Linux distro has had some sort of "Install Software" user
interface that was super easy and provided search and the like.

There's lots of things Linux got wrong (and some that it still gets wrong)
that Windows or macOS got right. Software installation _really_ just isn't one
of them, IMHO.

It's the thing I miss most when I have to work on Windows or macOS, and I miss
it constantly...like multiple times a day. A good package manager is among the
greatest time savers and greatest sources of comfort (am I up to date? do I
have this installed already? which version? where are the config files? where
are the docs? etc.) when I use any system, particularly one I haven't seen in
a while.
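
(On a dpkg-based system, for instance, those questions map to one-liners; rpm
has equivalents for each. A sketch, with openssh-server as an arbitrary
example package:)

    apt list --upgradable                   # am I up to date?
    dpkg -l openssh-server                  # installed? which version?
    dpkg -L openssh-server | grep ^/etc/    # where are the config files?
    dpkg -L openssh-server | grep /doc/     # where are the docs?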

I just really love a good package manager, and Linux has several. Windows and
macOS have none (because if the entire OS didn't come from a package manager,
it's useless...you can't know what's going on by querying the package manager,
if the package manager only installed a tiny percentage of the code on the
system). So, even though there's choco on Windows and Homebrew (shudder...) on
macOS, they are broken from the get-go because they are, by necessity, their
own tiny little part of the system with little awareness or control over the
OS itself.

~~~
pmoriarty
Why don't you like homebrew?

Also, if your problem with non-Linux package managers is that they only know
about and control their own packages, then you must have the same objection to
Nix and Guix, right?

What happened to wanting simple tools that do one thing and one thing right?
Don't we want package managers to only manage packages, to decouple them as
much as possible from the rest of the operating system, and leave system
configuration management to other tools?

~~~
SwellJoe
_" Why don't you like homebrew?"_

I've blogged about some of my problems with Homebrew. Generally speaking,
Homebrew is a triumph of marketing and beautiful web design over technical
merits (there are better options for macOS, but none nearly as popular as
brew).

The blog post: [http://inthebox.webmin.com/homebrew-package-installation-for...](http://inthebox.webmin.com/homebrew-package-installation-for-servers)

I get that it's easy and lots of people like it, so I mostly try to hold my
tongue, but every once in a while I'll see someone suggest something crazy
like using Homebrew on Linux (where there is an embarrassment of good and even
great package management options) and it makes me shudder. I'm not saying
don't use Homebrew on your macOS system if it makes your life easier. I just
would never consider it for a production system of any sort. I'm even kinda
mistrustful of it on developer workstations (though there are plenty of
similarly scary practices in the node/npm, rubygems, etc. worlds, so that ship
has kinda sailed and I am resolved to just watch it all unfold).

 _" What happened to wanting simple tools that do one thing and one thing
right?"_

I still want that. Doing one thing right in this case means doing more than
what packages on macOS or Windows do. One can argue about the complexity of
rpm+yum or dpkg+apt, and it's likely that one could come up with simpler and
more reliable implementations today, but if you want them to be more focused,
I have to ask which feature(s) you'd remove? Dependency resolution? That one's
a really complicated feature; a lot of code, and it's been reimplemented
multiple times for rpm (up2date, yum, and now dnf). Surely, we can just leave
that out. Or, perhaps the notion of a software repository? Is it really
necessary for the package manager to download the software for us? I mean, I
have a web browser and wget or curl. Verification of packages and the files
they install, do we really need it? Can't we just assume that our request to
the website won't be tampered with, and that what we're downloading has been
vouched for by a party we trust? I dunno...I'm not really seeing a thing we
can get rid of without making Linux as dumb as macOS or Windows.
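
(And that verification feature, for what it's worth, is a one-liner on the
established systems:)

    # verify installed files against the package database:
    rpm -Va       # rpm-based systems
    debsums -s    # dpkg-based systems; prints only files that changed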

 _" Don't we want package managers to only manage packages, to decouple them
as much as possible from the rest of the operating system, and leave system
configuration management to other tools?"_

This is the strangest question, to me. Why on earth would we want the OS
_outside_ of the package manager? Why would we want to only verify packages
that aren't part of the core OS? This is _why_ Linux is so vastly superior to
Windows and macOS on this one front. I'm having a hard time thinking of why
having the package manager completely ignorant of the core OS would be a good
thing. What benefit do you believe that would provide?

And, NixOS does not meet the description you've given. The OS is built with
nix the package manager. Running nix as a standalone package manager on macOS
does have the failing you've mentioned, but that's not the fault of nix. And,
yes, nix is a better option for macOS than brew, but the package selection is
much smaller and not as up to date in the general case...so maybe worse is
better, in that case.

I get a bit ranty about package management. I spend a lot of time working with
them (as a packager, software builder, distributor, etc.) and have strong
opinions. But, I believe those strong opinions are backed by at least better
than average experience.

------
webaholic
"You’re lucky I wrote it."

No Mr. Askar Safin. YOU are lucky that I am reading it.

------
drivingmenuts
I'd like to posit that at least part of the problem is we're using a multi-
user operating system on machines that are universally single-user. While it
may be nice to have that option, is it realistic to think that many people are
going to hand over their laptop/desktop machine to another person?

We're trying to use a server OS on a single-user machine, complete with all
the management cruft that comes along with a server-based OS. Consequently, we
don't bother re-thinking what the user needs vs. a sysop.

~~~
beojan
Windows did that, and look at the mess that got them into. Eventually they had
to introduce UAC, essentially a simplified Unix-style permissions system.

------
powera
TLDR - things have names, isn't that terrible?

------
vermaden
The author writes 'Unix' but backs his claims only with examples from Linux
systems ... and Linux is not Unix ... I will stop right here ;)

------
sedatk
The Collapse of The Blog Philosophy.

------
angry_octet
I spent many minutes reading this and at the end I wish I'd put it into the
TL;DR category.

------
smcdow
Can't wait to read the n-gate webshit weekly entry on this one.

------
rnhmjoj
I'm surprised it doesn't mention the X Window System.

------
howfun
Article is badly formated on mobile.

~~~
gkya
Well it's not decent on desktop either.

------
swolchok
Sounds like the author needs to read Worse Is Better
([https://www.jwz.org/doc/worse-is-better.html](https://www.jwz.org/doc/worse-is-better.html)).

~~~
mundo
^^^ Open incognito if you don't want to see jwz's NSFW salutations to HN
readers...

~~~
haylem
Funny that the parent was flagged, even though the _link_ is valid, and the
_parent's author is not at fault_.

Oh! The beauty of censorship by the masses.

Now I can see why jwz would choose to take a stab at HN. He may be right, and
even too kind.

That flag is just sad...

~~~
grzm
As a sibling comment points out, the submission actually quotes from Richard
Gabriel's _Worse is Better_. That, and the snarky tone of the comment may have
been what caused people to downvote and/or flag the comment. 'swolchok could
also have linked to the original, thus avoiding jwz's HN-referrer redirect.

~~~
haylem
I know all that, but if reposts or not reading articles were deserving of
censorship, there'd be quite a lot of [flagged] placeholders around here.

------
prodmerc
You're free to take it all and turn it into an OS of your vision.

Next thing - bitch about how Earth works, with every region, country, and
even, _gasp_, city being different because a few people a long time ago
decided "this is the way to go".

~~~
int_handler
Similarly, our appendices, which are useless to begin with, occasionally
decide to become infected, swell, and possibly burst, threatening our lives,
just because biology a long time ago decided "this is the way to go."

~~~
Arizhel
This is incorrect. People did indeed think appendices were just useless
throwbacks for a long time, but more recently the medical community recognizes
their usefulness. They're basically like a first-level bootloader for the GI
system, used to store bacteria in case of a severe illness like cholera or
dysentery. It also serves some other immune functions. See here:
[https://en.wikipedia.org/wiki/Appendix_%28anatomy%29#Functio...](https://en.wikipedia.org/wiki/Appendix_%28anatomy%29#Functions)

~~~
duncan_bayne
That common opinion is odd, in hindsight.

Natural selection ought to quickly do away with an organ that served no
purpose but to occasionally kill one of the luckless organisms that possessed
it.

It should have been obvious that it was doing _something_ else, or more
specifically, that the gene(s) for "having an appendix" were.

~~~
Arizhel
Did you miss the part about it being helpful for restoring GI function after a
bad disease? Things like dysentery and cholera were pretty common before
modern times. An organism that can recover from diseases like this is going to
live longer and pass on its genes more often. The downside of it occasionally
causing appendicitis and killing the organism is likely a small risk in
comparison: how often did people in pre-modern times (before they knew what it
was and had the ability to do surgery to remove it) die of appendicitis? Not
very often. It wasn't the main killer of people by a very long shot. But
people got sick all the time from various things that affected the GI system.

~~~
duncan_bayne
You're uncharitably mis-reading my post, and being quite hostile in your
reply.

What I meant was that - in hindsight - the fact that it had some beneficial
function like restoring GI function should have been obvious, because
otherwise, it would have been selected away.

That is, even a layperson _should_ have looked at the situation and thought "I
bet it does something for us". But that's the benefit of hindsight; most
(including myself) didn't. We just accepted the folk wisdom that it was
useless.

~~~
Arizhel
Sorry, you're right, I misread your post. I think you have a good point here,
but to be fair, I don't think it's that people weren't thinking in
evolutionary terms at all. Not everything in our (or other animals') bodies is
actually there for a good reason now. I'll give you one good example: horse
toes. I don't remember the exact term now, but horses' equine ancestors used
to have multiple toes, like us and many other mammals. Now they just have one,
with a big fingernail, called a "hoof". But if you look closely at the anatomy
of their legs, you'll see some of the vestigial toes still there, serving no
useful purpose now. Evolution isn't like engineering design, where we improve
a design, see something that's no longer needed (like an automotive engine
distributor), remove it and everything related to it entirely in one clean
sweep, down to mundane small things like bolt holes, and optimize the design
for the new system (coil-on-plug, in this analogy). With evolution, it's slow
and gradual. Another example is the recurrent laryngeal nerve in humans: this
nerve takes a rather bewildering and inefficient route, for seemingly no good
reason, but when looked at from an evolutionary perspective, it makes sense
why it takes that route. (The nerve itself is not vestigial and serves an
important purpose, but the route it takes is very sub-optimal and could be
called "vestigial" in a way.)

[1] [http://www.thehorse.com/articles/34382/where-did-horses-extr...](http://www.thehorse.com/articles/34382/where-did-horses-extra-toes-go)

[2] [https://unzipyourgenes.wordpress.com/2011/03/24/unintelligen...](https://unzipyourgenes.wordpress.com/2011/03/24/unintelligent-design-1-the-recurrent-laryngeal-nerve/)

So I think it's entirely reasonable, in the absence of contradictory evidence,
to use evolutionary thinking to argue that an anatomical structure no longer
serves any purpose, just like those extra horse toes. Unfortunately, they were
obviously wrong about the appendix, and I do think the approach is a little
dangerous: the assumption of vestigiality rests upon the lack of evidence for
the part having a modern use, and that assumption can cause people to stop
looking for the modern use. Then we wind up with what happened to the
appendix, where it took a really long time to learn the truth because we
assumed we already knew.

~~~
duncan_bayne
Agreed almost entirely - and I apologise for uncharitably calling your reading
uncharitable :)

I think there is a subtle difference between the appendix and the horse toe,
though: the appendix regularly kills people due to appendicitis. I _think_
that ought to have been a clue. Unsure though as it's not really my field.

------
redsummer
I've always wondered why a new open-source OS has not arisen to claim the
mantle of UNIX / Linux. There are so many ancient design decisions, piled up
upon each other, which spawn millions of man-hours of frustration. It
continues perhaps _because_ of its esoteric nature. People don't want to throw
away all the weird stuff they've learnt.

~~~
int_19h
And also, because backwards compatibility matters once something is deployed
on as large scale as Unix was.

(Windows also has a similar problem.)

~~~
redsummer
Maybe a parallel system needs to be implemented, like how OS X originally had
a Classic OS 9 system running alongside. Once everyone had come to OS X, the
classic environment was dropped.

------
ebbv
This is an absolute garbage-tier article. What on earth is it doing on the HN
front page? The author has no clue what he's talking about (and admits it
several times) and makes logical leaps all over the place (at one point the
author says Gnome should have used a registry, so UNIX configs are bad?).

------
vacri
> _[shell] It becomes especially bad when we try to develop in it, as it’s not
> a full-fledged programming language._

Who the hell 'develops' in shell? It's a glue language, not a development
language. I've never heard anyone say "We're a shell shop".

~~~
dllthomas
I spent a year working on ~80k lines of bash. It was an interesting
experience.

~~~
vacri
Was the primary product in bash? Or were you working on glue?

You must have mastered the bizarreness of bash arrays by the end of that :)
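
(For anyone who hasn't had the pleasure, a small taste of that bizarreness:)

    # classic bash array gotchas:
    a=(one "two words" three)
    echo ${a[@]}      # unquoted: "two words" splits into two arguments
    echo "${a[@]}"    # quoted: each element is preserved intact
    echo "${#a[@]}"   # 3 -- length needs its own sigil
    echo "$a"         # prints just "one": a bare name means element 0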

~~~
dllthomas
The actual product was hosting, more or less, although with some additional
constraints that meant more moving parts. The bash grew out of manual scripts
to provide automation and a UI for support.

> You must have mastered the bizarreness of bash arrays by the end of that :)

I mastered a lot of the weird corners, though we mostly kept them out of the
code. I've forgotten much since.

------
curiousGambler
> You’re lucky I wrote it.

I read this, and was like "screw that, I'm not going to read it," and then was
like, well let's see, and after reading, wish I had stuck with my initial gut
instinct to close that tab. smh.

------
Const-me
My favorite one: “everything is a file”.

My GPU has 4 times as many transistors as my CPU, and for parallel tasks, it
computes stuff 50 times faster. Just too much complexity for a file, even with
ioctl.

I think that ideology is the main reason for the current state of 3D graphics
on *nix- and BSD-based platforms.

~~~
rwallace
I didn't downvote you, but I'm not seeing the problem here. It's great that we
have these fantastically powerful GPUs, which of course don't use text files
as their internal representation, but then we have to tell them what to render
or otherwise compute, and very often some form of text file is the most
effective way to do that.

~~~
Const-me
Very often, the majority of GPU IO traffic is textures. No form of text file
is the most effective representation for them.

However, in my comment I didn’t mean text files. I meant that in *nix, a GPU
itself is a single file, /dev/dri/cardN. All that GPU’s complexity is squeezed
into a single ioctl system call for that special file. The approach is IMO one
of the reasons why Linux still doesn’t have reliable 3D acceleration support.
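
(You can see this directly on a typical Linux box; the exact node names
depend on the driver, but everything goes through ioctl() on these few device
files:)

    # the whole GPU, exposed as a couple of character devices:
    ls -l /dev/dri/
    # typically something like card0 plus a renderD128 render node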

Here’s a long article on why Linux ain’t ready for the desktop; note the
author listed 3D acceleration at the top:

[https://itvision.altervista.org/why.linux.is.not.ready.for.t...](https://itvision.altervista.org/why.linux.is.not.ready.for.the.desktop.current.html)

------
drivingmenuts
IMHO, a lot of the Unix philosophy boils down to: we do it this way because
the neckbeard of my father and the neckbeard of his father and all the
neckbeards before him unto the beginning of Unix on January 1, 1970 did it
that way and if you want to do it differently, well, you better have a
stronger neckbeard, because then we'll have two problems instead of just one.

~~~
technofiend
[http://dilbert.com/strip/1995-06-24](http://dilbert.com/strip/1995-06-24)

