The Plan-9 Effect or why you should not fix it if it ain't broken (2016) (unipi.it)
81 points by Lwrless on Dec 10, 2022 | 158 comments



This kind of article develops a dangerous line of reasoning.

The failure of Plan9 is not about what went wrong or right. It is about how the market evolved and made decisions, which goes beyond the scope of this comment.

The first conclusion "...is not to try to fix things that are not broken..." is dangerous because what one person identifies as broken is not what another does. As a Plan9 user you may identify Unix deficiencies as problems, and a Unix user might disagree.

The second conclusion is "...to try to identify if there is a market...", and there are many things in life that contradict this (maths, basic science, maybe the beginning of Unix itself...).

And the final conclusion is about backwards compatibility. This is sound from the author's perspective because of the first conclusion, but it does not hold against good reasoning. Plan9 broke some compatibility because it needed to, and kept the rest because it was already good in the view of the developers/researchers.

The Plan9 effect, to me, is far different. It is about the fact that Plan9 is unable to die against all expectations. 9P is here to stay, 9front gets releases every year, /proc is everywhere, same for UTF-8, and so on.

Good ideas stick, they are hard to let go and even harder to ignore. Plan9 failed in the commercial OS sense, just as many others failed. "You may never know it's broken until you fix it" (I'm sure I heard it somewhere).


> Good ideas stick, they are hard to let go and even harder to ignore.

Horrible ideas are even stickier unfortunately. Like Unix, Plan 9 is also designed around forking processes and even has asynchronous signals.


Can you please elaborate on why those two ideas are "horrible"? Forking processes seems like a rather elegant concept, to me.

Granted, it's sometimes weird that in order to have a process running image A create a process running image B, it must first have a mirror image of itself for a while, then replace it. But the symmetry and simplicity at the conceptual level is nice, and the ability to have code for both parent and child in the same image is neat.


> Can you please elaborate on why those two ideas are "horrible"?

The problems with fork are thoroughly explained by this Microsoft paper:

https://www.microsoft.com/en-us/research/uploads/prod/2019/0...

As for asynchronous signals, they are a completely broken concept and implementation.

It's not safe to do pretty much anything in a signal handler other than setting a flag and returning. You cannot use any non-reentrant function, which eliminates pretty much everything useful, including the vast majority of standard library functions.

The system drops events. Signals are essentially a pending-delivery flag in the kernel; signaling a process just sets that flag to 1, so there's no difference between doing it once and 10,000 times.

Was your process in a system call when it was signaled? The call gets interrupted or cancelled: it returns EINTR and your code needs robust retry logic to handle that case. I've seen code that couldn't handle it and crashed.
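
To make the "set a flag and return" rule and the EINTR dance concrete, here's a minimal POSIX C sketch (my own illustration, not from any particular codebase):

    #include <errno.h>
    #include <signal.h>
    #include <unistd.h>

    /* The only safe thing: set a volatile sig_atomic_t flag and return. */
    static volatile sig_atomic_t got_sigint = 0;

    static void on_sigint(int signo)
    {
        (void)signo;
        got_sigint = 1;  /* no printf, no malloc, no locks: not async-signal-safe */
    }

    /* Every blocking syscall ends up needing its own EINTR retry wrapper. */
    static ssize_t read_retry(int fd, void *buf, size_t len)
    {
        ssize_t n;
        do {
            n = read(fd, buf, len);
        } while (n == -1 && errno == EINTR && !got_sigint);
        return n;
    }

    int main(void)
    {
        struct sigaction sa = {0};
        sa.sa_handler = on_sigint;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGINT, &sa, NULL);

        char buf[128];
        while (!got_sigint) {
            ssize_t n = read_retry(STDIN_FILENO, buf, sizeof buf);
            if (n <= 0)
                break;
            /* ... process the input outside the handler ... */
        }
        return 0;
    }

Even this still races (the signal can land just before read() blocks), which is part of why people reach for the signalfd/self-pipe tricks mentioned further down.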

There are race conditions everywhere. Signals can arrive while you're handling other signals; you need to block them if you don't want that. Every thread has its own signal handling behavior. Signals sent to a process are handled by "any" thread. I remember reading in some standard that using signals in multithreaded programs was undefined behavior. Who even knows what's going to happen?

The only borderline sane way to handle signals is with file descriptors: you block traditional signal handling on all threads and set up a signalfd that you can epoll along with everything else on a thread dedicated to the event loop. Even this is pretty bad:

https://ldpreload.com/blog/signalfd-is-useless

https://news.ycombinator.com/item?id=9564975
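
For reference, the setup described above looks roughly like this on Linux (a sketch with error handling omitted; in a threaded program every thread must block the signals, typically by masking before spawning threads, and the linked articles cover the remaining sharp edges):

    #include <sys/epoll.h>
    #include <sys/signalfd.h>
    #include <signal.h>
    #include <unistd.h>

    int main(void)
    {
        /* Block the signals first, or they are still delivered
           asynchronously instead of queuing up for the signalfd. */
        sigset_t mask;
        sigemptyset(&mask);
        sigaddset(&mask, SIGINT);
        sigaddset(&mask, SIGTERM);
        sigprocmask(SIG_BLOCK, &mask, NULL);

        /* Turn pending signals into a readable file descriptor... */
        int sfd = signalfd(-1, &mask, SFD_CLOEXEC);

        /* ...and poll it alongside everything else in the event loop. */
        int epfd = epoll_create1(EPOLL_CLOEXEC);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = sfd };
        epoll_ctl(epfd, EPOLL_CTL_ADD, sfd, &ev);

        for (;;) {
            struct epoll_event out;
            if (epoll_wait(epfd, &out, 1, -1) <= 0)
                continue;
            struct signalfd_siginfo si;
            read(sfd, &si, sizeof si);
            if (si.ssi_signo == SIGINT || si.ssi_signo == SIGTERM)
                break;  /* orderly shutdown */
        }
        return 0;
    }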


Asynchronous processor interrupts have analogues to basically all these issues that signals do. Signals are certainly a pain to implement and get right, but people seem to think they're something that they aren't, or expect them to be used for something they can't do. Clearly signals aren't a message passing scheme, they're a notification system. And obviously taking an asynchronous interrupt needs to use reentrant code.

As for the fork paper, seems like a typical academic type of critique.

"Fork today is a convenient API for a single- threaded process with a small memory footprint and simple memory layout that requires fine-grained control over the execution environment of its children but does not need to be strongly isolated from them."

I.e., exactly what it is good for and used for. And it goes on:

"Fork is incompatible with a single address space. Many modern contexts restrict execution to a single address space, including picoprocesses [42], unikernels [ 53], and en- claves [ 14]."

And talking about heterogeneous address spaces, and all other things academics love but nobody really uses.

They also make pretty outlandish claims like "Fork infects an entire system", based on the Windows implementation, where the Win32 API and kernel explicitly do not implement fork. And "infects" is provocative language -- the Linux kernel fork implementation is maybe a thousand lines of C code. And that does not constrain it or prevent it from offering several of the suggested alternatives.

K42 was also a failed research operating system, and another microkernel (surprise). Turns out (as usual) that designing a system around the latest craze or fad in technology, no matter how good (RCU and other lock-free algorithms), rather than designing it to deal with workloads that people actually use, is still a recipe for disaster and one of the main causes of second system syndrome.


> Asynchronous processor interrupts have analogues to basically all these issues that signals do.

I know. That doesn't really excuse the suckiness of asynchronous signals. It's quite simply stupid to have operating system facilities that are as limited as actual hardware interfaces. With signals, you're supposed to write code like your computer is a Super Nintendo or something.

> Clearly signals aren't a message passing scheme, they're a notification system.

Signals suck as a notification system too, simply because there is no queue. You're supposed to be notified by a signal when a child process exits, but the system can actually lose that information forever under certain conditions, which means it is literally impossible to build truly correct software; the best you get is good enough. You also get a signal when you write to a pipe with no readers, or when the terminal is resized, making it that much more of a pain to deal with that stuff.
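
The usual workaround for that coalescing is to treat SIGCHLD as a hint rather than a count and drain every exited child in a loop; a rough sketch:

    #include <errno.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    /* SIGCHLD only means "at least one child changed state"; several exits
       can be folded into one delivery. So whoever is woken by it must reap
       until there is nothing left to reap. */
    static void reap_children(void)
    {
        for (;;) {
            int status;
            pid_t pid = waitpid(-1, &status, WNOHANG);
            if (pid == 0)
                break;              /* children exist but none have exited */
            if (pid == -1) {
                if (errno == EINTR)
                    continue;
                break;              /* ECHILD: no children left */
            }
            /* record that `pid` exited with `status` ... */
        }
    }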

People build literal user interfaces using signals. SIGINT and SIGTERM, everybody knows about that. SIGHUP to make some running process reload configuration or something. Even dd prints a progress report if you send SIGUSR1, how insane is that? I really don't want to think about signal insanity when doing important I/O operations.

> And talking about heterogeneous address spaces, and all other things academics love but nobody really uses.

The paper mentions several concrete examples in wide use right now even in consumer machines. Systems on a chip with accelerators, GPUs...


> I know. That doesn't really excuse the suckiness of asynchronous signals. It's quite simply stupid to have operating system facilities that are as limited as actual hardware interfaces. With signals, you're supposed to write code like your computer is a Super Nintendo or something.

This isn't really comprehensible. The issues with signals are presented as something that makes them "broken". Are you claiming that hardware interrupts are broken?

> Signals suck as a notification system too, simply because there is no queue.

You are conflating two different things. A queue is for messages. An interrupt is for notification. This is how hardware interrupts work. Some work arrives somewhere (in a queue, a register, some memory, whatever), so an interrupt is raised to notify that work is pending.

Likewise signals can be and are associated with queues or messages or pending events that can be interrogated after the notification that there is work.

> You're supposed to be notified by a signal when a child process exits but the system can actually lose that information forever in certain conditions which means it is literally impossible to build truly correct software, best you get is good enough. You also get a signal when you write to a pipe with no readers or when the terminal is resized, making it that much more of a pain to deal with that stuff.

I won't go into every type of signal. There are some that are not defined in a way that can be used by what people want to use them for. That's not a problem with "signals", it's a problem with a particular signal or lack of additional interfaces around that to provide what is required.

> The paper mentions several concrete examples in wide use right now even in consumer machines. Systems on a chip with accelerators, GPUs...

For the most part not heterogeneous. Masters that have access to host address translation services (like cache coherent GPUs or FPGAs on some busses, like nvlink or CXL with ATS) have equal ability to access the entire process memory space. And fork doesn't look really different from many other operations on an address space from the point of view of an MMU, whether it's on the core or associated with an accelerator -- all it is is changing memory protections and taking page faults; the same kind of COW is done with private writable mappings, or page deduplication, for example. They really are just handwaving about things that nobody actually uses.


Good discussion both pro and against the paper here: https://lwn.net/Articles/785430/

Fork causes huge complications. I summarised some of the paper here https://news.ycombinator.com/item?id=31702952

Edit: I imagine forking and signal handlers don’t compose well, and I also would hate to have to think how forking and SCM_RIGHTS interfere with each other: https://googleprojectzero.blogspot.com/2022/08/the-quantum-s...


Fork is actually very fast. A paper about fork/exec speed coming from Microsoft, of all places, is too funny:

https://www.bitsnbites.eu/benchmarking-os-primitives/

Linux absolutely destroys the "proper" API. Nearly 40x faster at launching a program with fork+exec than Windows' CreateProcess. Not to mention the fact that vfork has always been available which is even faster.

Fork is also pretty scalable, it requires no global locks. It is thread-safe, it has defined semantics in threaded programs and can be used to exec a process. And it isn't insecure, it does what is advertised, as securely as advertised.

And close-on-exec is hardly a huge complication; it's actually a detail of exec(), not fork. It applies independently of exec, and you could make an exec that closes fds by default unless they're marked with a persist-on-exec flag. Library or runtime code can do this anyway without any "huge complication". I don't know what you mean about SCM_RIGHTS interfering with fork, do you have something in mind? The problem would really be at the exec boundary; fork does not purport to alter any security attributes of the child or parent, so it really doesn't make sense to call it insecure. It doesn't suddenly get new rights, or have any limits enforced.
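
To illustrate, here's roughly what the descriptor story looks like in a plain fork+exec; a sketch of my own, with the helper name, the log path and the `date` command being placeholders:

    #include <fcntl.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int run_one(void)
    {
        /* O_CLOEXEC: this descriptor silently goes away across exec, so an
           exec'd child never inherits it by accident. */
        int logfd = open("/tmp/example.log",
                         O_WRONLY | O_CREAT | O_APPEND | O_CLOEXEC, 0644);
        if (logfd < 0)
            return -1;

        pid_t pid = fork();
        if (pid == 0) {
            /* Child: logfd is still open here (fork copies the fd table),
               but execvp will close it because of O_CLOEXEC. */
            char *const argv[] = { "date", NULL };
            execvp("date", argv);
            _exit(127);             /* only reached if exec failed */
        }

        int status;
        waitpid(pid, &status, 0);
        close(logfd);
        return status;
    }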

I mean it is complicated stuff, but so is any process runtime environment that provides async notifications, threads, spawning, etc. Anybody who tells you they can make this simple and broadly usable is selling you snakeoil or a toy API. If people can't cope with reading documentation and thinking carefully about this stuff, they shouldn't use it anyway, they should use a higher level runtime or library to do process management. The handwringing about fork is a bit baffling. Reminds me of the handwringing about fsync, it seems that people just don't read documentation and make silly assumptions about how things should work, and then get embarrassed and blame the tools.

I mean fork retains file descriptors from the parent process. This is not some obscure undocumented behavior, it's like the second thing you read in the manual page. Same as execve. I don't like to make excuses for badly designed APIs and code, but honestly, if a programmer isn't capable of thinking about what happens to file descriptors there, they certainly should not be writing code that uses fork or exec, let alone something that's security sensitive. I don't think that's being unreasonable or elitist. You wouldn't want them writing security sensitive Windows code either, would you?


If you'd read the Microsoft Research PDF linked above, you'd have seen that fork scales with how much memory the parent is using, which would be invisible in these synthetic benchmarks. They say that Chrome on Linux might take up to 100ms to fork. That doesn't scream very fast to me.


> If you'd read the Microsoft Research PDF linked above, you'd have seen that fork scales with how much memory the parent is using, which would be invisible in these synthetic benchmarks.

Ah right, I was talking about SMP scaling, but yes fork does have an O(memory) scaling factor.

It certainly shows up on benchmarks because 99% of forking in Unix is on small processes (make, bash, etc.), not huge ones like Chrome. This kind of thing is why the Windows kernel is unable to compete with Linux in performance and scalability in a lot of important basic operations that make things like git slow. The focus was on some academic, supposedly "correct" interface or way of doing things, not what programs actually want to use.

> They say that Chrome on Linux might take up to 100ms to fork. That doesn't scream very fast to me.

Yeah probably true. If I needed a facility to be able to exec very frequently in a highly threaded application with a huge memory footprint and significant security concerns, I would almost certainly use a dedicated process to do that. You can potentially use posix spawn or clone directly, but forking from a thread from the main process in this case just seems unnecessary and asking for portability problems.
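
For what it's worth, the posix_spawn route looks roughly like this (a sketch of my own; `spawn_and_wait` and the `echo` command are placeholders, and posix_spawn is typically built on a vfork-like clone, so the parent's memory footprint doesn't enter into it):

    #include <spawn.h>
    #include <stddef.h>
    #include <sys/wait.h>

    extern char **environ;

    int spawn_and_wait(void)
    {
        pid_t pid;
        char *const argv[] = { "echo", "hello from the child", NULL };

        /* NULL file actions and attributes: inherit everything from the parent. */
        int err = posix_spawnp(&pid, "echo", NULL, NULL, argv, environ);
        if (err != 0)
            return -1;              /* err holds an errno value directly */

        int status;
        waitpid(pid, &status, 0);
        return status;
    }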

I don't say fork is perfect or can't be improved, but the hysteria about it "infecting" the whole system and causing some vast problem is just not at all true.


> You can potentially use posix spawn or clone directly

You cannot use clone directly if you link to any libc.


You can if you are careful what you use libc for, of course you can't use most of its process management, threading, or make any reentrancy or thread safety assumptions.

I don't mean you would be likely to use it if you were writing a normal application in C, but in a special case like creating your own runtime where existing interfaces don't do exactly what you like. You wouldn't be using those things in libc anyway in that case.
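
A heavily hedged sketch of what that special case might look like, assuming Linux, glibc's clone() wrapper, and a child that touches nothing in libc beyond exec (the helper names are made up):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdlib.h>
    #include <unistd.h>

    static int child_fn(void *arg)
    {
        (void)arg;
        char *const argv[] = { "/bin/true", NULL };
        execv("/bin/true", argv);
        _exit(127);                       /* exec failed */
    }

    /* CLONE_VM shares the address space (no page-table copying at all) and
       CLONE_VFORK suspends the caller until the child execs or exits, so
       this behaves like vfork with an explicit stack. */
    static pid_t spawn_with_clone(void)
    {
        const size_t stack_size = 64 * 1024;
        char *stack = malloc(stack_size);
        if (!stack)
            return -1;
        pid_t pid = clone(child_fn, stack + stack_size,   /* stack grows down */
                          CLONE_VM | CLONE_VFORK | SIGCHLD, NULL);
        free(stack);                      /* safe: child already exec'd or exited */
        return pid;
    }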


I would paraphrase your answer as: speed trumps complexity, and any programmer that gets caught out by the complexity is just a bad programmer.

Where speed is critical, other techniques are used to avoid processes. Often when using a separate process, the reason is for security and correctness. I do agree that performance does matter, for example in shell scripts.

Either way, your answer does not address the issues listed in the paper. Yeah, the paper probably has a Microsoft bias, but that doesn’t mean the identified issues should just be hand-waved away for performance reasons.


> I would paraphrase your answer as: speed trumps complexity, and any programmer that gets caught out by the complexity is just a bad programmer.

That's a strawman. A complete mischaracterization of what I wrote.

> Either way, your answer does not address the issues listed in the paper.

The "issues" are just wrong, as I pointed out. Like the laughable "infects the entire system" comment using as their example an implementation of fork in which major layers of the system were entirely unaware of fork. Contradicting themselves with their own example. It's not a paper, it's a rant, and it doesn't somehow gain gravitas or value just by being laid out in a particular way.


So M$ wrote this to convince people their CreateProcess is better than fork? Not smart.

Thinking signals suck just comes from people relying on them to do something they were not designed to do. You cannot complain that a pile of wood is not a table.


You got me. That is quite horrible.


Forking is one of those things that is a super elegant solution when things are simple, but breaks down when things become complicated.

Multithreaded app? Fork is now a liability, and is only useful if the only (more or less) thing you do in the child is exec. Might as well only have Windows-style CreateProcess at that point.

For single-threaded programs, sure, fork is fine and gives you more flexibility than a CreateProcess-type API.

Asynchronous signals have a lot of the same problems, but those problems are also present in single-threaded programs. Quite a few APIs have been added over time to try to make working with async signals easier and safer, but all of them add their own new gotchas.


> Asynchronous signals have a lot of the same problems, but those problems are also present in single-threaded programs. Quite a few APIs have been added over time to try to make working with async signals easier and safer, but all of them add their own new gotchas.

Isn't this referring to uses beyond what async signals are good for, or are you saying that async signals should just not exist in favor of something else? It's not like they're meant to be the only IPC mechanism, but they're good as a standard way to inform a process of certain things while having default handlers.

EDIT: Nevermind.


so... shitpost warning but...

Perhaps forking is the elegant refined mechanism and multi threading is the abomination that should have never been invented.

Multithreading is the concept that a process (an independent execution unit) can share its memory space with another process. And it turns out you can, only at the cost of making all your memory accesses extremely fragile and error-prone. The concept should never have been invented.


Sharing an address space was the default until the MMU was invented. That said, I agree with you that multiple processes with some optional shared memory seem like a much safer approach than share-by-default multithreading. I don't know why MT won.


> Forking processes seems like a rather elegant concept, to me.

Forking is good because it saves you from needing a rich process-manipulation API, and because, generally speaking, doing things to an execution environment always ends up clunkier than doing things in one. (Cleaning up after doing them is another matter.)

Forking is bad[1] because it (essentially if not literally) forces memory overcommit on you, at which point resource accounting becomes hopeless.

[1] https://lwn.net/Articles/785430/


It's called trolling. They're using the word "horrible" to bait a response on something that can be easily argued on.


It's called having an opinion and expressing it. You can accept my opinion, ask me to elaborate or convince me it's wrong. What you can't do is accuse me of trolling just because you disagree.


No, I thought you were being hyperbolic, but after you've explained your problems with signaling, I agree with it. Kind of depressed at the state of that, now...


You and me both. I've wasted way too many neurons trying to understand this legacy brain damage. Hyperbole doesn't quite do it justice, it's a design that deserves an epic rant like mpv's locale commit:

https://github.com/mpv-player/mpv/commit/1e70e82baa9193f6f02...


A more agreeable example would be Windows forbidding specific file and folder names, reserving them for device names, something it apparently inherited from CP/M.


Thank you for expressing in English the discomfort that grew in me while reading this (otherwise nice) article.

Market fit is 20/20 hindsight IMO, but you phrased it better.

Overall, my discomfort is that it's the type of reasoning that keeps you with both feet in the mud. Stuck.

I'm stealing "you don't know it's broken until you fix it"


Procfs appeared originally on UNIX, see USENIX paper.


And BSDs removed it, deeming it a security risk.

Personally, I don't like that it amounts to parsing text files to get a number, when you could have functions returning structs (of variable length, to get extensibility while preserving backwards compatibility).
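
To make that concrete, getting a single number out of procfs means text scanning along these lines (a Linux-flavoured sketch of my own), where a struct- or syscall-based interface would be one typed call:

    #include <stdio.h>
    #include <string.h>

    /* Get the resident set size (in kB) by scanning /proc/self/status
       for the line that starts with "VmRSS:". */
    static long rss_kb_from_procfs(void)
    {
        FILE *f = fopen("/proc/self/status", "r");
        if (!f)
            return -1;

        char line[256];
        long kb = -1;
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "VmRSS:", 6) == 0) {
                sscanf(line + 6, "%ld", &kb);
                break;
            }
        }
        fclose(f);
        return kb;
    }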


A fair assessment, but at the end of the day it is all just a stream of bits: either bits packed into a structure (how do you know the structure? how wide is each part?) or bits packed into an array of encoded bytes (what is the encoding? how do you parse it?).

The array of encoded bytes, despite its complexity overhead, has an advantage in that it lies on the human-visible side of computing, the part of computers designed for a human to use. I have to admit that more often than not I prefer to eat the overhead and have an interface that I can see.


The BSD approach at the time was to require setuid access for programs like ps to be able to read the kernel memory space via /dev/kmem to produce a running process list.

That is infinitely more stupid than procfs.


The stupidity of that is orthogonal to whether to use procfs or another sane, but less Plan 9-approved, design like a syscall interface or an ioctl.

I'm reminded of another thing that used to be file based but moved away, towards syscalls: random number generation. One criticism of the /dev/random approach I've seen is that open(2) could have some fringe error case (descriptor table too big?), and you don't want your secure RNG to bail on you. In particular with lazy initialization of a secure RNG, where the caller may not be able to check for errors.
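
That trade-off is a big part of why Linux eventually grew getrandom(2): the file-based path inherits every failure mode of open(2), while the syscall path has none of them. A rough sketch of the two (helper names are mine):

    #include <fcntl.h>
    #include <sys/random.h>
    #include <unistd.h>

    /* Syscall path: no file descriptor, so it can't fail because the fd
       table is full or /dev isn't mounted in this chroot/container. */
    static int fill_random_syscall(void *buf, size_t len)
    {
        return getrandom(buf, len, 0) == (ssize_t)len ? 0 : -1;
    }

    /* File path: every failure mode of open(2) becomes a failure mode of
       your RNG, which is exactly the criticism above. */
    static int fill_random_dev(void *buf, size_t len)
    {
        int fd = open("/dev/urandom", O_RDONLY | O_CLOEXEC);
        if (fd < 0)
            return -1;
        ssize_t n = read(fd, buf, len);
        close(fd);
        return n == (ssize_t)len ? 0 : -1;
    }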


Overall it's a far more universal interface: your app "just" needs to parse relatively simple text files instead of making an API call per data type.

The text format is a problem on its own though; "procfs but serialized using a single format" would IMO be the best middle ground between tying your app to what are essentially kernel headers and parsing a bunch of random text files.


Not just BSDs; no major operating system other than Linux uses procfs anymore.


Sure, but Linux is also on more devices than any other OS in the world.

Your statement would be more damning if Linux were a minority player or on the decline, but that's not the case.

Procfs seems... fine, really.


Being pedantic, isn't Minix more widespread because Intel embedded it in CPUs?


Also, most Linux machines, like phones, contain instances of things like seL4, often a couple of them.

And Linux... well, it managed to accumulate a lot of historical baggage for something that young. Device numbers are another example.


Nearly 30 years seems like plenty of time?

Many projects manage to accumulate plenty of tech debt in 30 months, without being a deliberate work-alike reimplementation of an existing system.. which I suspect means you're already starting with a certain amount of baggage.


Solaris has had procfs for years.


Solaris is history at this point.


Oracle and Fujitsu still sell and support it.


Really? Tell that to Oxide Computer... well, it's not Solaris but Illumos.


> Good ideas stick, they are hard to let go and even harder to ignore.

That's a noble wish, but that doesn't make it true.


If an idea does not stick, maybe it does not provide a benefit?

Then it's not really good, but merely good-looking.


It’s absolutely mind-boggling to me that everybody is ignoring the relevant difference between Unix™ and Plan 9: Unix source code was given away (or cheaply licensed) to hardware companies which all wrote their own operating systems on top (SunOS, Ultrix, HP-UX, etc. etc.). This made Unix the common factor of very many commercial workstation environments. Plan 9? It was sold directly as a commercial product, for no hardware platform in particular. Nobody wanted to buy it.

People liked Unix because it was free – either really free, via BSD, or as a Unix derivative provided at no cost when people bought their workstations. A new revolutionary operating system had absolutely no reason for anybody to buy it: No commercial developers wanted to develop to a platform without users, and no users wanted a platform without software.


I’m reminded of a similar blunder made by IBM with their PS/2 series of computers. The initial IBM PC, PC XT and PC AT models were very popular, because they could be cloned by third-party manufacturers, which in turn created an enormous market for third-party software and hardware accessories and peripherals. But IBM wanted a bigger piece of that pie than just the few sales of genuine (and very expensive) IBM hardware, so they created the IBM PS/2 line: Heavily locked down, with its own new proprietary bus (MCA), connectors and peripherals; all heavily patented, so only IBM could produce them. It was a disaster. The only thing which endured was the PS/2 style connectors for mouse and keyboards, and those were made obsolete with USB. Nothing beside remains.


> Nothing beside remains.

This is wrong. The influence is far wider than you realise.

The PS/2 standard for keyboards and mice is perhaps the only one that bears the name PS/2 but it is very far from the only standard that came out of the PS/2 range.

VGA is a PS/2 standard.

The VGA port, the still-ubiquitous SVGA plug, and the SVGA modes are PS/2 standards.

My Core i7 NVMe-only native-UEFI laptop still has both these ports on its vendor-supplied docking station.

3.5" HD 1.4MB floppy drives are a PS/2 standard. TTBOMK the 3.5" hard disk bay is also a PS/2 range innovation: while tower PS/2 machines came with 5.25" hard disks in the early models, the smaller desktop machines came with 3.5" hard disks. That was certainly the first time I ever saw a 3.5" hard disk and I vividly remember being boggled by how small it was.

PS/2s predated the rise of CD drives and optical media. The desktop ones didn't have 5.25" bays. That drove the success of 3.5" hard disks.

But the floppy drive was more influential. If you have an old PC with a floppy drive, then it's a PS/2 format floppy drive.

More to the point, the PS/2 range is the direct reason that Microsoft Windows was a success. The widespread success of Windows is what created the conditions for Linux to be written.

The reason the PC industry exists and thrives today is the IBM PS/2 range.

I had to try to explain this here on HN recently, as in just this autumn, and while I can't find that comment, here's the blog post I made from it: https://liam-on-linux.dreamwidth.org/87108.html

And that itself caused considerable discussion here: https://news.ycombinator.com/item?id=33019019


All right, yes, PS/2 spearheaded many more widely-used technologies than the keyboard and mouse port. Plan 9 also pioneered many things now widely used, including UTF-8. Now, does this in any way invalidate the analogy? If not, why go to such lengths in disproving a minor detail? The topic was Plan 9, and I made a comparison to IBM PS/2. Why write a minor essay about some detail about PS/2 which I got wrong, when that wasn’t the topic?


See the quote of yours that I started with.

You are saying nothing else remains of the PS/2 than its mouse/keyboard port.

That is not true. Lots of things from the PS/2 remain apart from that.

That's it. That is the point. Not metaphors, not concepts, not influence.

You said it only left 1 thing behind. It didn't. It shaped the whole PC industry from then on. All PCs after the PS/2 used PS/2 ports and PS/2 displays and PS/2 format storage media.

OK, I concede, the stuff about the PS/2 driving Windows is a different issue: that's the inverse, the negative, the shape it left.

But the ports and the media still stand.


Yes, but how does this relate to Plan 9?


The PS/2 series' closedness was not the problem.

The real problem was other vendors offering new hardware (with, for example, new CPUs) before IBM did.

The first case was the Compaq 386, which the wider public considered a real brand, and which appeared on the market before a 386 machine from IBM.

As far as I know, the Compaq 386 was not much of a commercial success, but it showed people they could buy excellent hardware from other brands, not only from IBM.

And after that happened, the PS/2 series was not as interesting as before.


The issue with PS/2 was the new MicroChannel bus that was incompatible with existing expansion cards (except for the 30/286, which had a couple of old-style AT bus connectors). This was a real problem for people that had devices requiring an expansion card (tape drives, network adapters, memory expansion, DTP display adapters, multi serial port cards) - you couldn't just move up to IBM. As a result the clone manufacturers came out with EISA - a card format that had all the benefits of MicroChannel, but was compatible (with a really over-engineered connector) with older AT bus cards. I believe ALR and Compaq were among the first with EISA.

Incidentally, Compaq really put it to IBM with their lunch box style portable 386 and 486 models.


I worked at Compaq just after the 486 model and those things were nice for their time


Yeah, I've seen Compaqs of that era. They have some quirks, but the overall impression is good.


Unfortunately, EISA in practice was too expensive to compete with MCA.

MCA was completely defeated only when PCI happened.

Because of this, VESA and some other exotics existed for a while.


I remember buying a bunch of ALR EISA 386 boxes that were about 10% less than an IBM equivalent (Model 70/80). The big difference was that we could re-use some of the cards in our existing machines, and back then that was $1500-$2k in savings per machine. You are very right that PCI killed MCA.


Certainly not the last time a company failed because they thought they could make big bucks by creating a vendor lock-in. Just look at Intel with Optane.


What works: make something good for building infrastructure and give it away. It will grow enormously. You will only take a small piece off this giant pie, but it will be large enough in absolute terms. Examples: Unix, IBM PC, Netscape, Android, Chrome, a huge lot of others.

What also works: make something completely unique which nobody is able to competently reproduce or replace. Lock it down heavily, reap the benefits. Examples: Apple, maybe Nintendo. Hardly anyone else: the order is really tall.

What does not work: make something fine but not uniquely great, lock it down. Failures are by the thousand.


Isn't "make something fine but not uniquely great" just what Oracle does, as a rule?


They are sufficiently uniquely great at getting government and big business contracts, though. Their leverage is not necessarily in their technology (which can be of all kinds, from excellent to barely usable), but in the skill of selling it to particular markets, and in access to those markets earned by decades of effort.


"People liked Unix because it was free"

Good point, but have you heard about https://en.wikipedia.org/wiki/Perfect_is_the_enemy_of_good ?

That is the point: better things must be hugely better than the ordinary to win.

For example, a friend of mine worked in a startup based on Erlang. Once, at a public talk, one guy asked, "Have you heard about Erjang, a virtual machine for Erlang that runs on the JVM? It is up to 5 times faster than BEAM." The answer was: "5 times faster is not enough to consider switching platforms; in my case, I'd need 10 times faster."


>Unix source code was given away (or cheaply licensed)

Yeah, that's not true; at the beginning it was ~10k per processor. But you can thank Berkeley for rewriting Unix and replacing all the AT&T parts. And yes, SunOS was from the BSD lineage.

https://www.quora.com/How-much-did-Unix-cost-during-the-70s-...


It was free up through and including V7. Because of the specifics of AT&T's monopoly on the phone system, they were banned from commercializing other technologies. It was only starting with System III and 32V that they were able to charge for it since their monopoly was taken away and they were broken up into the 'baby bells'.


>It was free up through and including V7

They could not sell it as a product, only the licenses.


They could not sell licenses. Hell, it wasn't even clear that software had copyright protection at all back then, with computer programs only being added to the US Code as explicitly copyrightable by the Computer Software Copyright Act of 1980.

They charged their tape copying fee that was a couple hundred dollars or so, but that's the kind of thing even the GPL explicitly says is OK for distribution.

They would later charge a license fee for even the older versions, but they didn't originally.

> In the early 1970s AT&T distributed early versions of Unix at no cost to government and academic researchers, but these versions did not come with permission to redistribute or to distribute modified versions, and were thus not free software in the modern meaning of the phrase. After Unix became more widespread in the early 1980s, AT&T stopped the free distribution and charged for system patches. As it is quite difficult to switch to another architecture, most researchers paid for a commercial license.

https://en.wikipedia.org/wiki/History_of_free_and_open-sourc...


>at no cost to government and academic researchers,

I really don't know why you're trying to change history; your own Wikipedia quote says it was only free for gov and edu.


They didn't distribute it to anyone other than edu and gov researchers at that point. This isn't changing history; they were forbidden from selling anything other than phone service.


> at the beginning it was ~10k per processor.

Was Unix popular at that time?


That was a golden time for developers. Their public status was like that of movie stars.

And for the West it was a normal price, because even considering the wide usage of cheap CP/M, at some level of organization size it was unavoidable to switch to Unixes or to a mainframe - CP/M was just too limited and not capable of handling big business loads.

When BSD first appeared, Unix machines were also too small and slow for big business, but very soon client-server architecture appeared, and later clouds (Beowulf is, I think, the oldest widely used private cloud software), and Unixes became unlimited.


Well maybe not as much as VMS but it started to gain traction.


It's mind boggling that there are still some people who don't think Plan 9 was a text book case of second system syndrome.


It's mind boggling that someone would consider it a case of second system syndrome.

Second system syndrome is not about "trying to make improvements to a first version of something and failing in the market".

It's about trying to make improvements and ending up with a convoluted mess. The Wiki definition is literally: "The second-system effect or second-system syndrome is the tendency of small, elegant, and successful systems to be succeeded by over-engineered, bloated systems, due to inflated expectations and overconfidence.".

That's nothing like Plan-9, whose developers insisted on a simple elegant small design, with the original ethos of Unix, kept it KISS at all times, and if anything made it even simpler and more coherent than the original (which is why Linux copied all the major Plan-9 features over the years).

Failing in the market is orthogonal to this being (or, in this case, not being) a "second system syndrome".

The reasons for that were that it was a proprietary platform, that the existing unices were "good enough" for it not to gain traction, and that it came at a time when companies turned first to Windows NT and soon after to Linux. Besides, ALL traditional UNIX vendors failed and most folded in the same period.


It is second system syndrome, and it absolutely is about the market. No wonder so many academic types fall prey to second-system syndrome: they don't think about their users, only themselves. Second system syndrome is absolutely related to market success when you're talking about a commercial system like this -- how else are you going to gauge success?

A transparent distributed single system, for example. The strange and unexplainable obsession of so many computer systems developers. The reality is almost nobody wants it, yet Plan9 was almost entirely designed around this concept. A complete boondoggle. Even the nerdy people who get a kick out of this kind of thing will set up a single-system distributed cluster, run a few commands, say "hey that's neat" and then go back to using Windows or Linux, complete with tools for remote administration of systems, and applications that can do clustering and distributed systems in the cases where it's necessary or helpful.


> The reality is almost nobody wants it

Hint: there is this thing called Kubernetes. It's quite successful now.

It is a direct attempt to bodge onto Linux what Plan 9 did in the kernel.

More examples...

Microkernels: trying to turn a monolithic Unix kernel into a fleet of cooperating userspace servers, without using the filesystem as the comms mechanism as Plan 9 did.

RPC: trying to bodge inter-node comms onto the standalone OS design of UNIX.

The definition of the term "second system effect" is:

« the tendency of small, elegant, and successful systems to be succeeded by over-engineered, bloated systems »

Source: https://en.wikipedia.org/wiki/Second-system_effect

Plan 9 is dramatically smaller and simpler than mainstream UNIX systems, such as Linux.

It is precisely and specifically the opposite of what you are claiming it to be.


Notice how all these things are being done in the real world on Linux, a monolithic Unix derivative.

I mean some of the connections you're trying to draw there are pretty shaky too, but I didn't say Plan 9 has no good ideas and nothing was ever learned from it. If you want something concrete, the actual 9p protocol is used to share certain resources between virtual machines in Linux today. I think certain aspects of mount, e.g., bind and union mounts at least came to Linux via Plan9 too.

That's not my argument. I'm not saying Plan9 is garbage with no redeeming features. But it is generally a failure, and a victim of second system syndrome.

> the tendency of small, elegant, and successful systems to be succeeded by over-engineered, bloated systems

Over-engineered being the important part, although that phrase is pretty arbitrary. It's obsessing over, and becoming irrationally myopic about, certain parts of the system that the designers liked or that may have been unique or interesting in the first system, in the mistaken belief that those are the important parts of the system.

> Plan 9 is dramatically smaller and simpler than mainstream UNIX systems, such as Linux.

Well naturally, Plan 9 is a relic, and was never more than a toy even when it wasn't.


I think you profoundly misunderstand what is meant by the "second system effect" and this comment in no way addresses that issue.

You're trying to redefine one term here but that does not defend your case. "Over-engineered" means "made more complex than it needs to be".

More definitions:

https://www.pcmag.com/encyclopedia/term/second-system-syndro...

https://wiki.c2.com/?SecondSystemEffect

The point of the second-system syndrome is that something is bigger and much more complex than its predecessor.

Plan 9 is smaller, simpler, and cleaner than Unix, because whereas Unix was designed and built on and for standalone, text-based, terminal-driven minicomputers, Plan 9 was built for networked graphical workstations.

The "graphical" part of that arguably is a liability, but the networked part is absolutely integral. All modern UNIX machines, to an approximation, are networked, and the Unix kernel at heart does not understand networking.

This also applies to microkernel Unixes such as the HURD, Minix 3, and so on.

The entire point of Plan 9 is that, by considering the developing nature of Unix workstations and adapting the core Unix concepts to the concept of a network (extending the kernel's core abstractions to include networking, and indeed specifically TCP/IP networking, also not yet a thing in the late 1960s when Unix was designed), Plan 9 profoundly simplifies the model.

The "second system" here is not Plan 9. It is commercial Unix and the best example of it is the Linux kernel. Everything is in there. Drivers for all the hardware under the sun, dozens of protocols, dozens of processors.

A "second system" is one that tries to include everything and becomes bloated and overcomplex.

Plan 9 is the opposite of this. It is stripped down to the essentials, elegant and lean and honed. Which makes it hard to learn and hard to use and not very practical.

Whereas once Unix escaped the original labs into industry, it became its own second system: big and over complicated. Then this got worse when it was reimplemented as FOSS when everyone could and did add in everything they wanted.

What the term "second system" really means is the diametric opposite of what happened with Plan 9. Plan 9 is the tiny elegant core of a powerful idea, built by geniuses for geniuses and unfortunately too abstruse and theoretical and difficult for ordinary joes with their simple editors and simple conceptual models.


It is made more complex than it needs to be. I feel you're just not able to accept that. Shoehorning all abstractions into a file, making a distributed system, these are examples of over-engineering.

It doesn't have to be "bloated", or particularly bigger than a previous system or competitors, you're fixating on that part of the second system syndrome too much. But if you're really fixated on that vs Unix then take a look at the original unix for something far smaller and more elegant.


I would expect you to defend your argument, but I do not accept it.

The term has a meaning: bigger and more complex. Plan 9 does not fit the meaning. It is smaller and simpler. Therefore, Plan 9 is not an example of the SSS/SSE. It is an example of many other things and has other reasons for failure, but the SSE is not it.


Making a transparent distributed operating system in the mid-late 80s and 90s as a Unix replacement was total over-engineering and over-complexity that never had any real demand or advantage. Therefore it is a great example, you just refuse to acknowledge that. Comparing it with actual real world operating systems of the time is misleading and does not refute the second system syndrome, because a lot of complexity in those came about from making them work in real systems, which Plan9 did not need to do. The fact is there was significant effort and complexity put into making it distributed, and that would not be required if it did not have this unnecessary capability.

You're grasping at straws a bit here, I would have expected you to defend your argument a bit better. First you try to justify the distributed capabilities of Plan 9 by saying it's now used in kubernetes 30 years on, and when I point out that's not transparent at the OS layer then you just change to not accepting it is a complexity at all.

Anyway the argument is pointless to continue. Plan 9 was a commercial failure and brought little new to systems design. It was a warmed over Unix with over-engineered distributed systems concepts that had been built up since the 50s and 60s and weren't new. At least contemporaries like OS/400 were attempting to bring something new even if they did not ultimately succeed (although OS/400 itself was far more of a commercial success than Plan9 of course).


Your replies are increasingly baffling to me.

I don't think you understand what I am saying at all, nor do you seem to understand the meanings of the phrases you are throwing about.

> Making a transparent distributed OS [...] was total over-engineering

That is not what "over-engineering" means.

> over-complexity

It is LESS COMPLEX. This is not a tricky or difficult concept.

> Therefore it is a great example, you just refuse to acknowledge that.

It's not a good example because you don't understand the term you are trying to use.

> First you try to justify the distributed capabilities of Plan 9 by saying it's now used in kubernetes 30 years on,

No, I did not say that, or mean that, or imply that.

You seem to be having real problems with comprehension here.

> when I point out that's not transparent at the OS layer

You did not point that out or even attempt to, but you did not understand what I wrote, so :shrug:.

> Plan 9 was a commercial failure

Agreed.

> and brought little new to systems design.

Totally disagree.

> It was a warmed over Unix with over-engineered distributed systems concepts that had been built up since the 50s and 60s and weren't new.

Now I think you don't understand Plan 9 either.

I would love to see some citations of the "existing distributed systems concepts" you claim.

Agreed re OS/400, though.


I think you just weirdly don't know what over-engineering means. It absolutely means adding things to a system that aren't necessary, even if you personally think they are simple or elegant or great or should be included. That's half the reason over-engineering happens in the first place: engineers let their desires or passion override rational analysis.

That's really one of the primary psychological drivers behind second system syndrome, actually: designers latch on to their pet ideas or theories in the mistaken belief that the success of their first product validated them. Everything-is-a-file is a great example. Everything-is-a-file is not what made Unix so successful, and yet they over-engineered all their network APIs into the file and mount interfaces unnecessarily. It's not that it wasn't neat or a fun toy, but it's just an over-engineering of an already solved problem in an incompatible way that offered no compelling real world advantage.

> It is LESS COMPLEX. This is not a tricky or difficult concept.

It was MORE COMPLEX than it could have been. Also not tricky or difficult.

> No, I did not say that, or mean that, or imply that.

You definitely did.

> Totally disagree.

What useful things did it bring to systems design?

And I gave some examples of existing distributed systems. Lots of material on those you can read up on if you weren't aware of them.


No. No to every line of this.

You cannot simply re-interpret existing terms as you wish.

You do not get to say "well clearly bigger can really mean smaller, and more complex can sometimes mean less complex, and therefore this smaller thing is an example of how things get bigger over time."

The statement is ridiculous, and your attempts to defend it are just getting increasingly absurd the more you flail around angrily trying to make your meaning fit.

When Lewis Carroll put these words in his character's mouth, he was mocking what you are doing:

“When I use a word,’ Humpty Dumpty said in rather a scornful tone, ‘it means just what I choose it to mean — neither more nor less.’

’The question is,’ said Alice, ‘whether you can make words mean so many different things.’

’The question is,’ said Humpty Dumpty, ‘which is to be master — that’s all.”

https://www.goodreads.com/quotes/12608-when-i-use-a-word-hum...


You've gone off the rails because you can't respond to the fact that things like the distributed design are bigger and more complex and over engineered than necessary.

And you also can't respond to the fact that your claim the system as a whole is smaller or simpler is meaningless because you're comparing a toy to real systems.

You were also unable to furnish me with any examples of what it brought to systems design.

So, this is just a total desperate clutching at straws to try to claim it's not over-engineered for some weird reason. Whatever you call it does not change what it is! And that thing is an unremarkable and un-notable system that grew overly complicated features that were not necessary because the designers fixated on or misidentified certain features from their first successful system, and failed to gain adoption or influence, in part because of these complexities causing incompatibility and lack of focus on more important areas. That is easily recognizable as second system syndrome by anybody, but you can call it something else if it pleases you.


Every paragraph is still wrong. You are, in the typical way of people whose belief systems are based on faith instead of facts and evidence, accusing me of what you are doing.

> You've gone off the rails because you can't respond to the fact

It is not a fact. You are wrong. This claim is false.

> that things like the distributed design are bigger

It is smaller.

> and more complex

It is simpler.

> and over engineered than necessary.

Therefore it is not over-engineered.

> And you also can't respond to the fact that your claim the system as a whole is smaller or simpler is meaningless because you're comparing a toy to real systems.

I am not. I am not comparing Plan 9 to anything other than things that compare in scope to Plan 9, in other words, previous and subsequent UNIX-type OSes.

You are.

And you're wrong.

> You were also unable to furnish me with any examples of what it brought to systems design.

I did not, because I did not need to. I am not making that claim. You are. This is called a "straw man argument", where you attempt to rebut me by claiming I said something I didn't say and then attempt to prove that wrong.

I didn't claim it was a success, I didn't claim it was influential, and I didn't claim it influenced anything else.

Therefore, your attack fails, because it doesn't matter if you disagree with things I didn't say.

> to try to claim it's not over-engineered for some weird reason.

"That is not what the phrase `over-engineering' means" is not a weird reason.

> Whatever you call it does not change what it is!

Again, it is what you are calling it that I am debating.

You are just shouting angrily, as far as I can see, because you are unable and unwilling to engage with what I am actually saying.

> And that thing is an unremarkable

The only way to refute this would be to give examples of rival systems that did so much in so little code.

I am not aware of any, and you have given none.

That makes it remarkable.

> and un-notable system

It is the last system from the group of developers who built the most successful OS that the world has ever known. That in itself is very notable, even if it's a flop.

> that grew overly complicated features

It is small and simple in design. That is not "overly complicated".

It has few features.

This is your core argument and what you claim is not true.

> that were not necessary because the designers fixated on or misidentified certain features from their first successful system,

That may be true. I am not disputing that.

The increasing number of typos reinforces my impression that you are angrily pounding away at your keyboard, angrily arguing with things you only imagined I said.

You are shouting at yourself.

> and failed to gain adoption or influence

It did. This is true. I am not arguing that.

> in part because of these complexities

I am arguing that, though.

> causing incompatibility and lack of focus on more important areas.

All fair. All valid. None are my points.

> That is easily recognizable as second system syndrome by anybody

Nope. I refer you again to the definition.

> but you can call it something else if it pleases you.

Being right pleases me, yes. It's my job. My real name is at the top of my posts, while you hide behind a pseudonym.

I've been a published writer since 1995, writing about technology for about 15 different print magazines on 4 continents, paid by hundreds of organizations around the world to explain tech for them and to them.

My job is to understand words, language and technology, and explain it.

I am proud of my record in this. I list many of those publications on my LinkedIn profile.

I don't have a homepage on the WWW because if you Google my name, you will find me. I do not use pseudonyms or screen names anywhere.

I am always "lproven" or "liamproven" online and have been since 1991, when I signed up for my first personal/home email address.

But do continue angrily telling me that I don't understand technology, even though it's been my day job since 1988. It's quite funny.


If Plan 9 had been released earlier, before Unix took over the world, then there's a good chance Plan 9 would've taken over the world instead. Bad decisions can be made without the second system effect coming into play, and vice versa. Releasing Plan 9 as a proprietary OS during that time was pure folly, but not the second system effect.

And the second system effect need not be about the market at all. I wrote a distributed caching thing once that was somewhat of a hack in many places, and the replacement for it has been stuck in development hell for over a year now. No market to speak of.


> If Plan 9 had been released earlier before Unix took over the world, then there's a good chance Plan 9 would've taken over the world instead.

If something different had happened things might have been different. Maybe. There's also a good chance it wouldn't have, in my opinion, and Unix still would have taken over the world.

> Bad decisions can be made without the second system effect coming into play, and vice versa.

Yes, but Plan 9 is a classic case of second system syndrome. Unix made "everything is a file, but not quite". Plan 9 took that to the absolute extreme, demonstrating that they clearly missed what was good about that Unix design choice, which was not some theoretical purity. Unix was a highly network-able system, and Plan 9 took that to the absolute extreme too, trying to develop a distributed single system image, again missing what was good about Unix networking, which was not the ability to make a remote machine behave as though it was the same system as the local one.

> Releasing plan 9 as a propietary OS during that time was pure folly, but not the second system effect.

No, you're right, the proprietariness of the OS wasn't the part that was the second system syndrome. That part was over-engineering something that was not needed or wanted, and releasing it 8-10 years after many successful commercial and academic Unixes had been in use.

I mean if it was a research effort to inform how Unix might evolve and be improved in the real world sure, but they expected actual users to switch over to this warmed over Unix-ish looking thing with lots of things you don't want, oh and it's incompatible with your existing systems. Come on.

I know this hurts Plan 9 fanatics to hear, but it's not amazing. You can listen on a network port with a file, and you can access a file or run a program on some other computer "in the ether". Neat. Then you go on with your life.


>If something different had happened things might have been different. Maybe.

You're absolutely right, this is too hazy and hand-wavy. Let's stick to what really matters: Plan 9 had no attributes of a "second system effect" as per the actual definition of the term.

To paraphrase a famous quote: You keep using this term "second system effect". I don't think it means what you think it means.


I notice you were unable to address the parts of my comment where I showed it does fit the second system syndrome. That's what I suspected.


Second system syndrome can happen with both systems sold commercially, and software systems used internally. I've seen both. Success is gauged in different ways for internal systems, supposed improved processes, efficiencies etc. There's no "market".

Transparent distributed systems are the biggest thing now in software development, both private and public "cloud" are moving towards orchestration tools like Kubernetes, abstracting away the underlying CPU/memory/network hardware.

People don't log in to systems to "administer" them anymore. Containers are cattle and thrown away when they break.

This is much more prevalent than just "applications that can do clustering and distributed systems in the cases where it's necessary or helpful."

People are finding that it's "necessary and helpful" for most software that is used commercially or personally.


None of that transparency is done at the OS level today though, it is done in layers above that, and that's not some new thing that's just started now. It's obvious that's the way to go and trying to shoehorn some over engineered network transparency into a kernel never had any real justification or demand for it.

It's not like Plan 9 was new to distributed systems or network transparency. You had many distributed systems, MOSIX, VMS, X11, NFS, etc., all a decade or more before Plan 9. For the most part they were an academic obsession that has mostly run out of steam these days, replaced with local operating systems and layers on top of those which add the distributed support.


By some standards, the Velvet Underground were an unsuccessful band, never having much radio airplay and no high-charting hits.

But... they were hugely influential. Someone once said that they may have only sold 500 albums, but each and every one of those purchasers started a band.

Success needs to be carefully defined in advance of declaring failure.


They sold 500 albums?

Crazy to think they're an absolute must-listen for high school kids (at least in the rural France of the '90s…).


The quote in question was from Lou Reed and was about the first pressing of their debut album, IIRC.


It was Brian Eno, and it was 30,000 copies, not 500.

https://quoteinvestigator.com/2016/03/01/velvet/


I don't think this guy understands what research is and what it's good for. He even says that the AT&T Research Group "were not used to create commercial software and AT&T has never been in the software business" as if that's a bad thing.


Yeah it always sounds like such a bad take when people use research operating systems as some kind of failure case against giant commercial behemoths which have a totally different model of operation and goals.

Key word: RESEARCH.

OpenBSD gets a lot of flak for this, which is funny because, outside of its performance issues (which are somewhat the result of all the mitigations), people sure love taking things that were first implemented there and using them in stuff like Windows, or using software that is an integral part of that project, like OpenSSH and tmux. I don't think some people realize that it's largely a research project led by professors out of universities in Alberta when they're really quick to heap criticism on it.


Yeah it's kind of weird that he acknowledges UTF-8 came from there and simultaneously calls it a failure.

UTF-8 alone would make it a success.


Well, that is how UNIX got its free-beer adoption, which workstation startups like Sun turned to their advantage.

Plan 9 and Inferno failed at that, because AT&T had already changed by then, and there was no more free beer.


This strikes me as revisionist. Research Unix v8-20 had quietly trended towards more things-as-files. The hardware environment that Plan 9 assumed was very different - separate file servers, compute servers and graphic workstations, plus it came out when Microsoft and Windows were at their maximum power.

Beyond all that, today's leading OS, Linux, has incorporated many of Plan 9's ideas. /proc filesystem, the signalfd system call, and UTF-8 are all examples.

I think it's dangerous to accept what this article advocates. You risk trying to make a faster horse, or maybe the Uber of $X.


Linux is still incorporating ideas from Plan 9 in a haphazard and unplanned way. "Containerization" (kernel-level namespaces, really) was a big one; user-space block devices were one of the latest.

I do think that Plan9's de facto requirement for graphics-capable hardware is a bit weird in retrospect. It really was made to be a research OS for networked workstations (and the default UI basically clones Oberon), even though the design concepts it ultimately leverages are far more broadly applicable.


> I do think that Plan9's de facto requirement for graphics-capable hardware is a bit weird in retrospect.

Graphical workstations had been the desired end result for decades, ever since the Mother of All Demos. In the '90s, GUIs were all the rage: Windows 95 removed the need for DOS, and MacOS was already leading in this space. So it makes sense that Plan 9 was designed graphics-first.


IBM mainframes got there first.


> Research Unix v8-20 had quietly trended towards more things-as-files

That brings up another point: UNIX v8-v10 are even more irrelevant and forgotten than Plan 9, despite presumably being more compatible with the venerable v7 UNIX. So pinning Plan 9's failure on it being different seems very misguided to me.


> Research Unix v8-20 had quietly trended towards more things-as-files

Hang on, what?

AFAIK UNIX 10 was the last ever version.

https://en.wikipedia.org/wiki/Research_Unix#Versions


> No one, in the Unix world, ever complained about the Unix's missed promise about file abstraction.

I have been complaining about the missed promise of file abstractions since the first time I was exposed to their potential: "Oh, that's great, but then why isn't XYZ accessible as a file? And why can't we have the kernel offload interpretation of path suffixes to userspace code? And why isn't `/proc` richer? And why do I need to mess around with BSD sockets if we can just go through files?" etc., etc.

Now, sure, I'm just a nobody (and was a complete nobody 25 years ago when I noticed this) - but the author's claim is just false.


So did I.

Just last week I needed a replacement for a container init that watches the process's output and kills the process on timeout. Surprisingly, I couldn't find anything readily available, so I had to hack my own tiny program.

I haven't done much systems programming recently, so my knowledge was kinda rusty. I knew the basics of what I should do: wait on the child PID, watch out for signals (because I'm running as the init process, I can't keep zombies around), and work with the piped stdout/stderr fds. Sounds trivial, except that it's three completely independent APIs and those aren't files, so select(2) is not my friend here. So I learned about signalfd, and then learned how it's broken. I almost thought I'd have to run some threads, but then I learned about pselect(2), which seems to do the job.

Either way, "gosh, I'd wish I'd have Plan 9 instead of all this mess" was one of my thoughts that day.


The problem is, it’s quite difficult to know in advance what the win will be. Things that help the development and maintenance processes are good, even if they don’t map 1-1 to a problem. Projects are always short of resources and code is hard to maintain. Perhaps Linux might have come from Plan 9 rather than Minix if there had been a better internet when it appeared. Perhaps we’d have plan gnine from rms.

I see value in being able to abstract away hardware issues, as it lets me reduce the complexity of what I’m working on. Operations within an abstraction can be improved or changed without changing what uses it, and the changes are made in a single place, simplifying maintenance. There can always be performance issues, but one should not set out to tune things without a reason to.


Exactly, and each system that did something different from standard AT&T V7 Unix did so because they thought something was broken. And they weren't entirely wrong. Yes, most people don't use Plan 9, or Hurd, or Minix today, but we don't use V7 Unix either. Unix/Linux evolved, and it didn't do so in a vacuum isolated from these other systems; it took things from them that turned out to be improvements.


I couldn't tell, looking at how some people keep using text-based workflows, TUIs, and multiple TTYs as if their latest Linux distribution had hardly changed since AT&T V7 UNIX.


What do you have against text? I can't cope with GUIs hiding everything while wasting half the screen on decorations. Text is simple, doesn't change, and is easy to remember. I started with GUIs too, and used Windows, Mac OS 9 and OS X before discovering the CLI, so this isn't nostalgia; I really prefer text. It's incredibly empowering to interact with the computer through a simple and comprehensible text interface.


Text is OK when it makes sense, and even then a REPL is a much better option.

Living in the past, when graphics workstations cost more than a house (if they were available at all), not so much.

When I started, text was the only option, so yeah, for me it doesn't make sense to throw away everything computing has gained since the 1980s.


If we're being nitpicky, did TUIs even exist on V7? IIRC, support for terminals that did more than scroll downwards was lacking in Unix at the time, only really starting with BSD.


"A few years later, Mary Ann Horton, who had maintained the vi and termcap sources at Berkeley, went to AT&T Corporation and made a different version using terminfo, which became part of UNIX System III and UNIX System V."

https://en.m.wikipedia.org/wiki/Curses_(programming_library)


Have you ever considered that's because this is an optimal solution to the problem?


The problem being living in the past.

There are these things called REPLs and scripting languages.


This article is kinda garbage; it gets simple facts wrong in the first sentence of the abstract (and again in the second sentence of the article): Plan 9 is not Unix. At all. It shares some ideas with it, sure, but if that's enough to make something Unix, then Windows NT is Unix too. It's also rather debatable that it "failed": a lot of good research happened there, and those ideas have gone on to influence production environments the world over.


Mmm, not written very well, but some things in it are true. I just object to "don't fix it if it isn't broken". Plan 9 fixed a ton of things that were broken. Everything-is-a-file isn't one of them, but that's not the only thing they changed.

I think a better conclusion is that Plan 9 blew its "weirdness budget". I can't remember where I read it (either here or on Reddit), but someone described the idea of a weirdness budget.

If you're making a new thing it can only be a bit weird and different. If it's too weird people will just stick with what they know and you'll be relegated to a research project that nobody uses.


I agree with your assessment. I tried p9 in 2002 and it was a daunting experience. Everything was just different enough to make typical tasks a pain in the ass.


I agree. The really sad thing is that, as well as improving on many other aspects of Plan 9, Inferno also makes the UI much more sane and comprehensible, by accepting that by the mid-1990s a lot of standardisation had happened in GUI design, and incorporating that.

Inferno is, for me, much easier and saner to use.

Inferno is to Plan 9 as Plan 9 is to UNIX: taking the ideas further, fixing the rough edges and making it more versatile and more general.

But Inferno is even more obscure and forgotten than Plan 9.

Some 15 years ago I was able to find an ISO image of x86-32 Inferno and boot it in a VM. Not today. :-(


> I am quite sure that you do not know what Plan-9 is and that, probably, you never have heard of it too.

First sentence is false. Confidently, arrogantly false. I'm hesitant to continue reading. Assuming too much of your audience like that can really detract from what you have to say.


First sentence made it clear I'm not the intended audience so I didn't read any further.


Right? I have had so many discussions about plan-9 over the years.

A pet peeve of mine is statements like that… just because you just learned about something doesn’t mean other people didn’t know about it before. Even if you ask all your friends and they don’t know, that doesn’t mean different circles don’t know about it.


Same here. Author comes off as a bit of a jerk. Stopped reading.


He comes off as Italian & opinionated, writing informally about a topic. He probably doesn't feel the need to be circumspect about his observations on his own blog. Even though I think he's mostly wrong about his conclusions, I wouldn't ascribe jerkishness to what is probably an unintended cultural impedance mismatch.


I know a lot of Italians who are less arrogant than this guy. I don't think you can blame it on him being Italian.


If you could get the ones you know to write a piece on Plan9 (in English), we could have a comparison. I'm pretty forgiving of people who want to have their say on things, even if they're not omniscient.


It's okay, the second sentence was so incredibly wrong I began to wonder if it was a parody site.


Things being files was never the interesting thing about Unix or Linux. Don’t make me tap the sign:

https://www.joelonsoftware.com/2001/04/21/dont-let-architect...


I think the other lesson to learn is that “everything is a …” abstractions don’t work as well in practice as in theory.

All abstractions leak. The more surface area an abstraction covers, the more leaky it is likely to be.


It actually works like a charm. The lack of adoption certainly isn't due to the 'everything is a file' abstraction layer; it's more that by the time it was released to the masses, the momentum behind Unix/Linux was absolutely massive and super hard to overcome. And they in turn were under fire from Windows and Mac on the desktop, which left only an extremely small niche, one that was too small to be viable.

But if you've ever worked with it then you will be longing for exactly those features forever and wondering why we did not manage to take more of its deep insights forward.

A lot of really good ideas have been slaughtered on the altars of commerce and perceived performance improvements. As a result we are still developing software roughly the same way that we did 50 years ago, a model that wasn't ideal but that was good enough.


There is a reason why high-performance graphics and real-time audio don't use file handles but rather shared memory (with different MMU configurations) between the GPU, kernel code, drivers and userspace.

"Everything is a file" is even slower than microkernel IPC.


High-performance networking also uses shared memory. I wonder if MMU-backed shared memory is a better “universal abstraction” than file handles (at least for high-performance workloads).
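
For concreteness, a minimal sketch of the pattern under discussion, using POSIX shared memory (the object name and size are placeholders): after the one-time shm_open/mmap setup, data moves between processes with plain loads and stores rather than per-transfer read()/write() calls, although the setup handle is, ironically, still a file descriptor.

    /* Create and map a shared-memory object; a second process would
       shm_open the same name and mmap it to see the same bytes. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const char *name = "/ring_demo";         /* placeholder object name */
        const size_t size = 1 << 20;             /* 1 MiB buffer */

        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        if (ftruncate(fd, (off_t)size) < 0) { perror("ftruncate"); return 1; }

        /* The kernel/MMU is involved only in this setup, not per transfer. */
        char *buf = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        memcpy(buf, "frames would go here", 21); /* stand-in for audio/video */

        munmap(buf, size);
        close(fd);
        shm_unlink(name);                        /* drop the demo object */
        return 0;                                /* link with -lrt on older glibc */
    }

Real audio and graphics stacks layer ring-buffer indices and synchronisation on top of this, but the point stands: the kernel stays out of the data path.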


The API/ABI structure of every modern OS is a hot mess; that has nothing to do with syscalls not being files, and everything to do with the way the systems have been extended over and over again for decades.

Making a simpler API is a great idea, and from your comment I guess they mostly succeeded, but they did not have to go down the everything-is-a-file path to do that.

What role this played in lack of adoption depends on what alternate course of action you compare it to. A non-file-based rewrite of the API would likely have had approximately the same result, in any case a good compatibility layer might have made a big difference.


The "everything is a ..." is an incredible powerful abstraction type, but you really have to go all the way for it to really work well.


This is a very shallow, incorrect analysis. The author needs to learn about composition to understand both Unix and Plan 9.

As others mentioned, the author also needs to consider what research is.


Even Minix, which was (IMHO) worse than Unix Version 7, was popular at one point! It's what Torvalds started from. With the advent of the i386, many people definitely wanted something better than MS-DOS or early Windows. If Plan 9 had been released as open source by 1992, things might have been very different.


Minix is arguably more popular than it's ever been, given its use in the management engine of Intel chips.


That's unrelated. It's about what people use, not what a company decided is a good fit for an extremely domain-specific part of a system. Most people have no idea what MINIX is.


That's one way of looking at it. I guess by that line of reasoning Minix and Linux/BSD/etc. are also unpopular, since they are effectively only used because companies decide they're a good fit for their systems (relatively speaking, there are very few people such as myself who run Linux on their main system).


Yes, companies. A huge number of them. I'd argue that makes Linux and BSD quite successful/popular systems. My point was not so much that it's a company, but that it's one company. I think measuring popularity by how many people chose it makes sense.

Though popularity is a very weird measure. You also have to consider in what space something is popular. Linux, and especially the BSDs, are unpopular in the consumer space, because not a lot of people chose them (even though a lot of people use them; consider iOS, Android, macOS, ChromeOS, PlayStation, et cetera). In contrast, Windows is rather unpopular in the server space; most choose Linux or some BSD.


That's Minix 3.

Minix 3 is a very different OS, with a different design, different goal, and different licence.

It's very important to distinguish between them, or else you make statements that sound akin to "Anglo-Saxon is the main language of trade in the 21st century."


You could buy commercial UNIX for the i386 in 1985; it worked well. I ran one version on my home PC before switching to 386BSD 0.1 in 1992.


No access to the sources was the difference.


Sources were not available for MSDOS and Windows.


Discussed at the time:

The Plan 9 Effect or why you should not fix it if it isn't broken - https://news.ycombinator.com/item?id=11791980 - May 2016 (107 comments)


> I am quite sure that you do not know what Plan-9 is and that, probably, you never have heard of it too.

Pretty sure I knew what both Plan9 and Inferno were by 2001.


I would have done more work with Plan9, but the GUI is an acquired taste and I just can't make myself like it enough to work with it. If only Plan9 had been released as an "improved Unix" that retained a primarily tty-console/CLI mode. DrawTerm, rio, etc. could have been released as add-ons instead of as an integral part.


Even when the fixes are arguably improvements, it's still tough sledding.

Ask Guido van Rossum re: Python 2=>3.


Plan-9 isn't Unix


I've always wondered about the hidden convergence of the "9" in Plan 9 and the "IX" in Unix. IX, after all, is how the Romans and Western civilization wrote nine until, say, 1200 CE.

Given how smart Pike, Thompson, Ritchie, etc. are and were, I doubt this is a coincidence!


It probably was, seeing as Plan 9 was named after a movie.


nit: iotcl in the article should be ioctl


The article is riddled with grammatical and spelling errors, on top of its many errors in reasoning.


I'll risque getting many minuses, but this is all true. Please read the whole comment, and if you can, write counter-arguments.

I must admit, this is a good article, because it illustrates different styles of thinking, using an excellent example of a really great failure that was absolutely avoidable.

Unfortunately, it is written with tech people in mind, in the language of tech people, on a resource intended for tech people, but it is much more relevant to entrepreneurs, business owners, investors, and freelancers.

I cannot add much to what has already been mentioned, except that there are a huge number of examples of big corporations using the best scientific research and still failing.

This is because science is right only in the dimensions of space-time and the https://en.wikipedia.org/wiki/Noosphere that are accessible to current science (and not in the huge non-digitized areas).

And usually, research describes the past, but a product is made for the future, and the future is mostly unpredictable for current science.

To summarize my comment: when you use research data for your work, you risque making a product for some abstract market that existed in the past but will not exist in the future.


> To summarize my comment: when you use research data for your work, you risque making a product for some abstract market that existed in the past but will not exist in the future.

Of course - this happens all the time. But I'd argue this is definitely not what happened with Plan 9. I think the article mentioned this as well, that Plan 9 was not really made by business people with commercial interests in a software company with experience. It was actually the polar opposite.

The people that worked on Plan 9 wanted to make a better system. I don't call it a failure, I call it an enormous success. It of course failed as a commercial product, but that was an afterthought from the beginning.

I digress. You said that Plan 9 was designed for a future that never got to be. What about all its legacy? All the things that I'm sure I don't have to repeat, that got into every other system eventually. Someone earlier in the thread mentioned a weirdness budget. I think that's the best explanation: Plan 9 used up its weirdness budget about 5% in. That made it an awfully good system that nobody used. We need systems like that to show us what's possible, but they'll never be mainstream.


"Risqué" (French: rees-KAY) is not the same word as "risk" (English).

In its main English use, "risqué" means "overly sexual and provocative".


Thank you. I sometimes make mistakes under pressure.



