
Unix Delenda Est (2015) - networked
http://housejeffries.com/page/3
======
bluejekyll
After becoming more interested in
microkernels/exokernels/unikernels, I've started wondering whether POSIX in
all of its glory (and I've always been a huge POSIX fan) is perhaps holding
us back from a different and better development model.

What I like about this post is that it's trying to point out some specific
things that are perhaps the most constraining, though I'm not sure the
solutions or conclusions are correct.

The question is the right one though.

~~~
tremon
Which question is that? I've so far only skimmed the article, but I haven't
been able to find a central theme or discussion point.

~~~
bluejekyll
I guess more specifically, the question is: does the UNIX/POSIX
principle/specification currently constrain our development methodology such
that we are building software that could otherwise be simpler?

------
qwertyuiop924
Yeah, UNIX sucks. We know this. The problem is, there isn't really anything
better. Oh, plenty of things have TRIED, but it's the elisp curse: It's not
the most elegant system, but it's easier to try and hack what you need on top
of it than make a better one.

>Tagged Filesystems

While not in common use, these exist now. And you can run them on your unix
machines. Look up BeFS.

>Typed Data

...I'd rather have 1000 ops on 1 crappy data structure than 10 ops on 100
good data structures. And that's what you usually end up with, regardless of
what you're trying for.

>Immutable Data:

This is actually something I'd like to see. OH WAIT, I have, it's called git.
Now we just need to integrate it into an FS. It's also called Nix, for
packages specifically.

>Server First:

Plan 9 did it, it's been emulated in userspace, and it should have caught on.

>Decent Shell Lang:

Python and Ruby anybody?

We really should make those defaults.

>Reasonably Secure.

Yeah, we need to fix this. We need less C.

>Better Security Model

We'll implement it if somebody can design one that f_cking works, and isn't
f_cking SELinux.

>Browsable Executables.

Yeah.

>Sandboxes

THE DOCKER REVOLUTION HAS ALREADY COME.

~~~
nextos
I think the winning solution could be Nix-like package management (like NixOS,
GuixSD or to a lesser extent GoboLinux) PLUS lightweight containers.

You get the best of both worlds, good package management within containers,
reproducibility, etc.

With Docker or apt you only get half-baked solutions.

~~~
qwertyuiop924
Meh. Containers are pretty heavy compared to nix/guix.

~~~
nextos
Lightweight ones? Guix already supports its own jails-like containers and they
are a breeze. So are the systemd ones offered by Nix.

~~~
qwertyuiop924
I meant docker, where there's an entire new set of system libs.

------
jstimpfle
> A single global, mutable tree of untyped text is a bad persistence model.

In a strict sense, almost all software accessing standard hierarchical file
systems is simply broken -- it breaks under certain concurrent operations,
and there is no way to forbid them.
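One concrete instance is the classic check-then-use race; a minimal sketch
(the file name here is made up):

```shell
# Check-then-use (TOCTOU) race: the filesystem offers no way to make
# the test and the use atomic, so another process can intervene.
if [ -f /tmp/report.txt ]; then     # check: the file exists...
    # <-- nothing forbids another process from deleting or
    #     replacing /tmp/report.txt right here
    cat /tmp/report.txt             # ...use: may now fail or read other data
fi
```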

But still hierarchical filesystems are the best usable-by-default model I
know. It's a humane approach from a cognitive standpoint: the real world of
"things that exist only once, in one place" is inherently hierarchical.

For different needs, it's often easy enough to flatten data a bit and use
tools like filename globs or regular expressions, to get a performant, good
enough solution that is also maintainable.
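For example, encoding a little metadata in the file names turns globs into a
passable query language (the layout and names here are made up):

```shell
# A flattened layout: service and date encoded in the file name.
mkdir -p logs
touch logs/web-2015-01-03.log logs/web-2015-01-04.log logs/db-2015-01-04.log

# "Query" by glob: every service's logs for one day.
ls logs/*-2015-01-04.log

# Regex via grep for anything a glob can't express.
ls logs | grep -E '^web-'
```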

For very specific needs, you can always use a relational, or whatever, DB.

~~~
bluejekyll
I think the default indexing features in our modern OSes are already getting
us into the DB filesystem mode. I use search more often these days than
traversing the hierarchical tree.

------
oldmanjay
I'm midway through the article and I don't get the point at all. Is this
person trying to explore the limits of the usefulness of uninformed opinion in
the guise of meandering rants? Maybe the second half resolves the mystery.

------
chris_wot
You know, this is all said very earnestly but the problem is that it tends to
ignore the fact that many, many great things have been achieved with current
technology.

I read the linked "worse is better" article years ago. The criticism of the
Unix failure mode is invalid: if you hit an error condition, then unless you
have kept your system's state for an indeterminate period of time, it's often
not feasible to back out of the program to where the error occurred.

In terms of tagging, that's never taken off. Why? Because it turns out tagging
things is harder than you might think, because you can call the same thing by
different names, which is somewhat ironically a major problem with
hierarchical file systems. You basically end up with needing to be able to
locate all the tags that are synonyms of each other.

At the very least, a file system comes with a convention. I know where
configuration files are on Unix - I go to the /etc directory. If I want to
find my executables, I go to either /usr/bin or /bin. For my files, I'm under
/home/chris.

If I was to tag stuff, I'd also need to stick to a convention, or chaos
reigns. I know, I've tried this before. I basically ended up with a
hierarchical tagging structure.

That's not to say tagging is awful, only that it's not a panacea for all
perceived ills.

As for version control - git is honestly the best example of a versioning
system that does everything you describe - content addressing, version
control, snapshots... And guess what? It's all in a DAG.
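The content addressing part is easy to see firsthand: git names a blob by
hashing its content, so identical content always gets an identical name.

```shell
# git derives a blob's object id from its content (a header plus the
# bytes), so the same input always yields the same id.
echo 'hello' | git hash-object --stdin
# ce013625030ba8dba906f756967f9e9ca394464a
echo 'hello' | git hash-object --stdin   # identical content, identical id
```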

Oh, and one more thing: all that "untyped text" that is a "bad persistence
model" ignores the fact that it's not untyped, that every file system
contains metadata, and that a file name is no different from a tag that is a
label.

~~~
blablabla123
>In terms of tagging, that's never taken off. Why? ... because you can call the same thing by different names

I guess for software packages some sort of hierarchy seems essential. At least
for documents I always imagined that tags would be perfect. It seems that
documents always fit in multiple hierarchies...

On the other hand, if you really want no hierarchy in software packages, some
standard tags are necessary. Like 'ConfigFile', 'Active', ... Also one should
keep in mind that data in software is also in most cases not hierarchical.
Hierarchical databases are a weird animal that nobody uses. Maybe hierarchical
filesystems have constrained our minds so much that we cannot easily imagine
some other way.

Regarding the rest... I think as well that Unix has some problems, mostly due
to its old design. Having to use regexes to match file names is so
error-prone; I think only for document file names would that be okay. I
really like the innovative approach of Windows PowerShell, even though the
syntax is still too weird.

~~~
chris_wot
A file system is so called because it adopted the metaphor of a filing
system, which was hierarchical in nature itself. Its main advantage was that
it enforced order, and once you knew the filing system you could easily
locate your important records.

The reason it is ubiquitous is that the human mind needs order, and a
filesystem imposes this. It ain't perfect, but it works.

~~~
blablabla123
Sure, but you don't necessarily need a hierarchical order.

Of course the whole desktop computer is a metaphor for the desk, and the
first commercially successful graphical interfaces resembled a desk.

And GUIs are what most people use most of the time; even most devs tend to
start their terminal from the desktop in a window, which looks a bit like
paper lying on the desk.

It's interesting that only a few graphical operating environments break with
this convention. It seems there hasn't been much conceptual innovation in the
space of "desktop computing". Even the tablet seems to represent either a
piece of paper lying on the table or the table itself, depending on the size.

It should be more efficient to design systems based on the available digital
infrastructure, instead of designing them based on partially obsolete office
table infrastructure.

------
otabdeveloper
If you want usable tags, then you first need a good ontology hierarchy.

Inventing good ontologies is a problem that's orders of magnitude harder than
a problem of making a filesystem hierarchy searchable. Look at Wikipedia --
despite all this time and all this truly immense planet-size effort, they
still haven't figured out a good categorization system!

------
pampa
Some of the complaints seem reasonable. But how did he make the jump from the
web-faxmachine to Unix? Did I miss something?

Web pages being saved as blobs and not providing raw data is not a problem
technologists can fix unfortunately.

~~~
seagreen
Sorry the intro moves so fast. The answer is here:

    But if web browsers are so bad, why do they keep spreading?
    [...] mainly because the alternative is defective.
    The startup, academic and open source communities all
    have their weight behind Unix, but Unix is a dead end.

Like many people I used to do everything in the browser (using Windows, but
treating it like a Chromebook). When I became a programmer I switched to Unix
and tried to use the command line for everything I could. It was amazing!
I wanted other people to share this experience, but couldn't make any
converts. It turned out that it was only working for me because I was willing
to plow hours into fixing problems. For most non-programmers, command line
Unix isn't a good enough user environment to be a competitor to the web. And
we really need that competitor.

~~~
tremon
_It turned out that it was only working for me because I was willing to plow
hours into fixing problems_

Fixing problems, or finding solutions for your problems? Every advanced system
has a learning curve, and Unix is an advanced system without a well-defined
domain. It's great that you invested the time to make it work for you, but
that you feel that it is inappropriate for "most non-programmers" is not
necessarily a failure of the system.

As a sibling post said, "there is not much interest in empowering non-
technical users". I think that's close, but I would add that it's a problem of
scale: non-technical users require domain-specific solutions. If there were
one single solution to empower non-technical users, there would surely be
commercial interest.

I'm sorry, but I can't help feeling that the article has a high dose of "lo!
I've seen the light and my light will solve every problem on the planet". It
ignores the many good things Unix has enabled, and is very light on the
details of how a better system would look (or how it would be secured --
befriend, really?).

~~~
seagreen
Man, "befriend" is the one part of the article I can totally stand behind! Why
the heck doesn't my computer have a "befriend" command? No wonder
FB/LinkedIn/every webapp ever is beating us (meaning the FOSS OS community,
i.e. Linux) on mindshare.

(You could implement this different ways, but I'm imagining that once you
friend someone they can see a select subset of posts on your computer. For me
that would be status posts, some photos, some WIP code, etc. This would be a
huge deal. Think about how many webapps it would make unnecessary.)

EDIT: Less flowery sentences.

~~~
tremon
I didn't mean to imply that was a bad idea. I meant that you provided no
details on how it could be implemented or secured. That's more than just an
implementation detail to me.

For example, your code as shown implies that you can reach your friend's
computer through an Internet name (through dns even?). That already assumes
that these non-technical friends can register their own Internet names, and
care enough to do so. But that assumption is well outside the bounds of "unix
sucks".

~~~
seagreen
That's a totally fair point. Hey, I never said it was a good article!

Perhaps a charitable reading would be: instead of "unix sucks" (sadly I use
almost those exact words in the article), read "the mismatch between the
tools provided by unix and the needs of a normal user, combined with
decisions made with good intentions at the time and kept for backwards
compatibility, makes the unix user environment a bad user environment for
normal people today".

But of course I didn't write that, so I'm in no position to complain too much.

------
VLM
The problem with hierarchical filesystems is that people who can't organize
in a hierarchical manner, regardless of the presence or absence of excuse or
responsibility, make a mess of size X. Likewise, the problem with tagged data
is that the group of people who can't organize tags is arguably equal or
larger, and they end up with a huge steaming mess, though of a size 1000
times X, especially WRT the human effort required to clean up and curate.

The classic analogy is you unleash genealogists on a village cemetery and have
them produce a family tree and the output is actually useful information.
Unleash the same team and ask them to digitally tag tombstones, and you just
get a pile of useless, error-filled data, packed with duplicates and weird
capitalization issues. This is a classic "information vs data"
misunderstanding.

Another problem with hierarchical vs tagged is that we have a somewhat obtuse
yet usable set of tools to handle truly immense, spammy hierarchical
filesystems, but strategies for handling and scaling tagged filesystems are
usually hand-waved away with "it'll never happen" or "we'll sprinkle magic AI
dust on it, then it'll evaporate away". A classic "make the simple easier,
while making the more difficult impossible". No thanks.

Much as there's nothing really new under the sun in IT, a generation or two
ago this was "sure, sendmail configuration is complicated, but all it needs
is a GUI, because in the small subset of problem domains that are a match for
GUIs, and for people with a problem simple enough to be handled in a GUI, it
works well, therefore it's perfect for everyone to do everything in".

Also, data typing is just a subset of tagging. Or rephrased: a type is a very
poorly implemented, limited single tag.

Ditto the above with a "decent shell language": again we're dealing with
people who can't handle paper calendars without a book telling them how (GTD,
43 folders, etc). They're just going to mess things up quicker with
automation, while people who can handle the cognitive challenge will jump
into org-mode or crontab without any serious effort on their part... You
"could" write a noob GUI for a nuclear reactor, or a UI for even newborn
babies to use matches, but you probably shouldn't.

As a meta observation, this is a classic "the future is already here, just
unevenly distributed" problem: I use databases and git all the time. However,
see the comment about the GUI for sendmail... Much like some people are
unqualified to be handed an acetylene cutting torch or a beryllium-Pu neutron
source, some people should not be handed relational DBs or KV stores or
version control; and if they had the background to understand and safely use
those tools, the existing tools aren't really all that hard to use for folks
of adequate cognitive level.

~~~
TeMPOraL
To your last part - I have a problem with the current software trend that's
present both in web and mobile. Everything is being dumbed down, so that it
can be used from the get-go.

It is _stupid_. The only way you can make someone an expert of a new tool in
15 seconds is by making the tool so simple that it's completely useless.
There's a reason you get a few dozen hours of training before you are allowed
to drive a car around. Or fly a plane. If you want to have a simplified car
UI, then one exists too - it's called _a taxi_.

The reason of course is the competitive market pressure. The primary interest
of a company is to _sell_ the product, not for the product to be useful. And
so most of current design effort is about making software easier to sell, not
about making it actually useful.

It's absurd to expect people to figure out by themselves how to use their file
systems, databases or - yes - even their paper calendars. All of those are
tools that require some learning to use effectively. We need to make people
read manuals again, and stop expecting everything to be usable out of the box
by an untrained monkey.

~~~
toyg
_> We need to make people read manuals again_

If you've ever read a technical manual, you're probably in the 1% of the
population. People don't even read two-lines popups, and you expect them to
read complicated manuals?

What you really need is adaptive interfaces that grow in complexity as
requirements expand, reliably detecting the newbie / power user / hacker
progression. That's what works best. Hackers ridicule UIs that bury stuff
under "Advanced" modes, but they're actually the best approach.

Doing three (or four, or five) times the work, though, is no fun for
developers, so we go from "you're a dumb user, be happy with limited options"
to "you're a clever user, surely you can read a 1000-page bible before
launching a command", because building multiple interfaces for the same
routines is boring.

~~~
jstimpfle
Although it depends on what you mean by "technical" manual, 1% seems way too
low.

------
anthk
>Sandboxes make a much better security primitive than sudo, file permissions,
etc. which are useless for protecting a single user’s data.

It's not only sudo, but sudo AND polkit. Are you sure you know what you're
talking about?

And about your sensitive data, AppArmor is more than capable. For other
sensitive data, create a folder outside /home and "chattr +i" your content
recursively, giving it proper permissions first.
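A sketch of that recipe, wrapped in a function because it needs root; it
assumes an ext* filesystem (chattr is ext-specific), and the /vault path and
user "chris" are hypothetical:

```shell
# Not invoked here: the commands inside require root.
lock_down() {
    mkdir -p /vault                 # keep it outside /home
    chown chris:chris /vault
    chmod -R go-rwx /vault          # proper permissions first...
    chattr -R +i /vault             # ...then mark it immutable
}
# Undo later with: chattr -R -i /vault
```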

And you rely on Plan 9 for your argument, which is more UNIXy than UNIX and
where everything IS a filesystem in a more hardcore way (something you regard
as obsolete), although factotum and namespaces are a completely different
beast.

~~~
legulere
Many unix systems are used by just one person (desktops, smartphones,
tablets). Most unix systems that are used by more people are servers, and
those don't use the unix user system to distinguish between users.

Very few unix systems out there have several users connecting, each with a
different unix user account. But exactly this is what the unix user system
was designed for - nothing else. Everything else, like root or users for
services, is just a dirty hack, because unix wasn't designed to offer real
isolation of applications.

AppArmor doesn't work because users creating profiles about what syscalls are
allowed is out of touch with reality.

The only unix systems partially offering the isolation actually needed today
are Android and iOS. You could still improve them massively though.

~~~
bluejekyll
Just because you don't have more than one "person" user on a system doesn't
mean you don't have multiple users. The UNIX security model is based on user
separation, and has been used in many server environments to define boundaries
between different privileges on the system.

Security wouldn't be possible without this. You can also think of
cgroups/lxc/docker as a direct extension of this.

~~~
spdionis
How much simpler would OSes and security be if we threw away multiple users
(and groups)?

I think we don't really need them. We only need 3 operating "modes" in 90% of
the cases: normal, administrator, guest. And after that various per-
application permissions a la Android or iOS.

We don't need multi-users systems.

~~~
bluejekyll
I think multi-user in this context is more about authorization of operations.
It's correct that we don't need users/groups for this today - there are
equally good mechanisms in cgroups and PAM - but users/groups are a simpler
abstraction in many cases.

I think you're mostly correct, but in most cases I would prefer individual
authorization rules, like AppArmor. There are always cases where you don't
want to escalate the entire app to root level, but need something more than
normal mode allows.

------
makmanalp
Inspiration for the title:
[https://en.wikipedia.org/wiki/Carthago_delenda_est](https://en.wikipedia.org/wiki/Carthago_delenda_est)

------
anthk
>We are not NASA. RHEL had 30 million lines of code, 71% of them C, in 2001!
How many hundred vulnerabilities lurk there, unseen by human eyes?

As if with "modern" languages you would be safer. Poor new delusional kids.

~~~
vpkaihla
> As if with "modern" languages you would be safer. Poor new delusional kids.

That's a rather amazing quip. How would I, in my mid-thirties, accidentally
write a buffer overflow in Haskell or Rust?

~~~
imglorp
Wait, what, these things have no bugs, you say? All software has bugs,
unintended side effects, untested code paths, and other cruft at the lower
layers, especially when interfacing with the OS. Look how many vulnerabilities
have been found in the JPG libraries alone, and that's just a bitmap; how hard
could it be, right?

[http://stackoverflow.com/questions/498234/vulnerability-in-the-functional-programming-paradigm](http://stackoverflow.com/questions/498234/vulnerability-in-the-functional-programming-paradigm)

~~~
icebraining
Interesting choice of a link, since all the answers essentially say that it
may be easier to make it consume lots of memory, leading to a possible DoS.
That's not pleasant, but hardly a major problem compared to remote code
execution.

All software has bugs, it's true - language runtimes included. But that
doesn't mean some languages can't be safe_r_.

