The Unix-Haters Handbook (1994) [pdf] (web.mit.edu)
382 points by arpa on March 3, 2017 | 307 comments



When I read this 20 years ago I would never have believed I'd be typing this on a Mac laptop running yet another Unix variant. This line is now so funny...

As for me? I switched to the Mac. No more grep, no more piping, no more SED scripts. Just a simple, elegant life: “Your application has unexpectedly quit due to error number –1. OK?”


Ha! I switched to Mac so I could have pipes, grep and bash. I personally believe a big reason for the Mac resurgence has to do with the switch to UNIX. I was on Windows using Cygwin for years before anyone I knew was using a Mac. Then OSX came out, and all of a sudden all the academics I knew switched to Mac, and a couple years later most of the professional programmers I knew had switched.

The great thing about a Mac is that I get to have my user-level UNIX without having to know anything about the system-level UNIX. I don't need to be a sysadmin to run the thing. I get grep and pipes without having to know all the crazy commands to tweak network settings and display timings and boot scripts. When OSX came out, that was the state of Linux: you couldn't be just a user of it; you had to know way too much to get it running.


> I personally believe a big reason for the Mac resurgence

Mac resurgence is more of a perception than reality. In 2016, Apple sold ~18.5M Macs, down from each of the previous two years [1] and off more than 10% from the prior year.

Granted, Mac numbers are falling at a slower pace than the PC market as a whole, but "resurgence" is probably an over-characterization.

[1] http://finance.yahoo.com/news/state-apples-device-sales-1940...


The other comments here are absolutely correct -- I was referring to OS9 vs OSX, on a 10-20 year time frame. OSX ushered in new waves of adoption that outshine previous numbers by an order of magnitude.

Given that Mac has gone from well under half a percent market share to over 7% today, I'd say "resurgence" is an under-characterization, relative to what I was talking about. ;) OS9 never had the market share that OSX has, so it's less coming back, and more dominating like it never did before.


In this context, I think "Mac resurgence" refers to the Mac coming back from the dead over the past 20 years, not recent sales figures.


I think the resurgence refers to the market share Apple gained since the OS9 days.


Agree! A real relief to get a proper Unix shell after getting by with Cygwin.


As soon as I could install MySQL and Apache on OS X I had to have it. I no longer needed an internet/SSH connection to code.


Have you tried the Windows Subsystem for Linux?


Not OP, but I'm genuinely interested in what that experience is like. What do you think of it compared to using, say, iTerm2 on OSX? I've been thinking of switching back to a Windows PC since I haven't been very impressed by OSX in the last couple of years.


I've been using it since its release to the fast insider ring, and at work since its release to the slow insider ring. It does everything I need to do. Building C++ with GCC or Clang works. The ssh client works fine (have not tried the ssh server, but it's supposedly supported). Python and Perl work fine. I have had some UI issues with the console, but those have been fixed on the fast ring.

The team behind WSL is also very responsive to issues reported on GitHub.


It's quite impressive. E.g., Valgrind runs out of the box in the latest Insider builds.


I haven't tried it yet, but I am looking forward to it.


> The great thing about a Mac is that I get to have my user-level UNIX without having to know anything about the system-level UNIX. I don't need to be a sysadmin to run the thing. I get grep and pipes without having to know all the crazy commands to tweak network settings and display timings and boot scripts.

Luckily, that is basically true for modern Linux as well.


basically


Alright, I'll bite. What's missing?

I ask as I type from my Dell XPS 13" DE (aka Sputnik).


It's mostly hardware support, still. As an example, try using a Bay Trail/Cherry Trail laptop with Windows 10. Now try it in Ubuntu. Oh, you need Linux 4.11. Oh, that's not out yet, so you need to run a bleeding-edge kernel directly from git. Even Ubuntu 17.04 is supposed to ship with 4.10, so you'll have to wait until Ubuntu 17.10 before your hardware might be usable.


Before blaming Linux devs for lack of support for new chips, I would first check how well Intel is cooperating with them by releasing proper documentation in a timely manner.


As a counter-example, try buying a Baytrail/CherryTrail laptop with Windows 10. Now try to install OS X. Oh, you can't since it's not a Mac.

The argument is constantly made that OS X has much better hardware support than Linux, which is just not true. It might be true for OS X on a Mac compared to Linux on unspecified hardware. However, if you constrain the hardware choices even half as much as you do for OS X, that's no longer the case.


Yeah, but that's not really an argument for macOS, given Windows is not even Unix and OP mentioned macOS.


This, and I was also under the impression that Ubuntu was experimenting with Bay Trail/Cherry Trail support in 16.10:

> http://www.cnx-software.com/2016/10/14/ubuntu-16-10-images-r...

Seemingly confirmed here:

> http://linuxiumcomau.blogspot.com/2016/10/running-ubuntu-on-...

Where this guy updated to say that current ISOs of 16.10 should work...

> Whilst two ISO sets of various Ubuntu flavours for both 16.04.1 and 16.10 releases are provided, I recommend first trying one of the 16.10 ISOs as these are the most recent and incorporate the latest kernel, patch sets and fixes based on previous releases and feedback. In particular, the Yakkety 16.10 ISO kernels support micro SD cards (although with some limitations), includes a patch for I2C bus, has improved RTL8723BS wifi and bluetooth support and most recently I've included support for full disk encryption on Lubuntu and fixed the home directory encryption for all flavours.

Open source tends to lag hardware by nature... but I do believe there was some exaggeration here.


> No more grep, no more piping, no more SED scripts

Fedora/Gnome here. I never use this sort of stuff, and modern power users really don't have to. That's why I categorically state that practically no one in the real world "uses Unix", even though most OSes, including mainstream mobile ones, derive directly or indirectly from it, save for Windows and esoteric ones.

Do I enjoy modern Linux? Heck yeah. Do I want to use Unix? Piping and grepping and cat'ing and touch'ing and so forth? It's not rocket science, but it's awkward, not a smooth UX for me, and "not smooth" means not productive.

If I need to compose functionality for "shell"-based task automation, I whip up a small Go program. It's portable across OSes should I ever move, the bin dir is in PATH, it compiles fast enough to not need a script interpreter for iterating on the "script" (tool) at hand, and the syntax and semantics are saner to me than sh scripts, i.e. no impedance mismatch vs other coding tasks.

(Sure enough, though, for one-off "search these for that" tasks while already in bash etc., the built-in old-school tools still come in handy occasionally.)


If grepping and piping are not productive for you, you are simply doing tasks that do not require them to be productive.

OTOH, working on some non-trivial code bases, maybe including system-level code, typically requires grepping and piping for at least some developers on the team to be productive.

I too don't gratuitously use cat and touch and grep when I don't need to. I even less often use cmd stuff under Windows (using one of the dozen bash ports that exist there is better for interactive command-line use, when needed, on computers where I can have those), but if I need to I can do it.

Do I use Unix or not when I'm not typing into a traditional command line shell, but still using a Unix-based system as my direct terminal (or a light terminal connected to it)? "Interesting" question. Do I use an internal combustion engine when I'm driving a car? Do I use electricity when I switch on the light? I don't think we can answer any of those questions in a truly absolute way. But most of the time I would probably say yes.


How does one use a Mac without grep or piping?

I bought a macbook thinking I'd learn how to use it... but after 3 years, all I can do on a Mac is open a web browser and terminal.


This book was written long before Mac OS X.


I thought OSX was a weird Mach/BSD amalgam. In which case, not so much.


> How does one use a Mac without grep or piping?

Using Finder, Xcode, the Objective-C and Swift frameworks, ...


Bluh, don't get me started on Finder. Icons in directories would randomly be overlapping just because I hadn't been in them in a while and new files were added. They added this weird tagging system with colors that just doesn't make sense to me at all. It randomly thinks I want to view some sort of "recently used files and some random shit" list instead of my homedir. I don't know what the devs of it were thinking. Even bone-stock Windows Explorer seems better; easier to understand what's going on, at least. I don't know why people seem to think Apple is some sort of UI god when they can't even make a usable file manager.

Objective-C looks like a clusterfuck and I never want to touch it. Swift seems like quite a good solution to that and I hope it eventually becomes the full replacement - but now that more non-Apple laptops have high-DPI screens, I don't think I'll be around to see it happen.


> Objective-C looks like a clusterfuck and I never want to touch it.

This is pretty much moot now, since the Apple world is abandoning Obj-C, but AFAIK it's just old, not bad. Like, came out at the same time as C++ old, and it has seen fewer changes to the language than C++ has. It's a compiled, C-like language, so yeah, hard to use and very easy to crash, just like C & C++.

Interested to hear your point of view on what makes it seem like a cluster, if you want to share. (And I'm only interested in hearing and understanding your opinion, not on debating or contradicting your experience.)

Obj-C had a pretty cool calling mechanism underneath, known as "message passing" -- more or less the analogue of C++'s virtual member functions. But it is the only compiled language I know where you can actually call functions by name, like construct a string dynamically and call it, and you get those dynamic calls at the same speed as compiled code (minus a fast function lookup you can usually do once).

I used that mechanism in some games to build a nice state machine class for actor behavior. I did the same in C++, and the Objective-C code was way nicer & easier to use.


I remember trying to do the "construct a function name as a string and call it" thing in VB6 many years ago before I "knew better". Any idea what that functionality is called? I always wondered why languages wouldn't let you do that.


>I remember trying to do the "construct a function name as a string and call it" thing in VB6 many years ago before I "knew better". Any idea what that functionality is called?

It's called by various names (not sure if there is an official one) such as dispatch tables, jump tables, dynamic dispatch, etc. It's a pretty old technique; I would say it probably dates from the time of early high-level languages, and I seem to remember that it was used in assembly languages too (so probably even earlier), using various addressing modes such as indexed and indirect indexed (those terms are from the long-forgotten 6502 instruction set, BTW): you store the address of a function at a location and then jump to that address, where the address is set dynamically at run time by some other bit of code based on some condition.

Here is a simple example in Python:

Simulating the C switch statement in Python:

https://jugad2.blogspot.in/2016/12/simulating-c-switch-state...
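Roughly, the dict-as-dispatch-table idea looks like this (a minimal sketch of my own, not the code from that post):

    # Minimal dispatch table: strings mapped to functions, so the function
    # to call is chosen by looking up a name at run time.
    def start():
        return "starting"

    def stop():
        return "stopping"

    dispatch = {"start": start, "stop": stop}

    command = "start"                # could come from user input, a file, etc.
    handler = dispatch.get(command)  # look up the function by name
    if handler is not None:
        print(handler())             # -> starting
    else:
        print("unknown command:", command)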

Edit: Googled and got a couple of relevant links:

https://en.wikipedia.org/wiki/Dispatch_table

https://en.wikipedia.org/wiki/Branch_table


I don't think there's another widely used name for it except dynamic or virtual dispatch (with inline caching of the lookup table). C is too low-level and simple a language to have it in the string-lookup form, and in C++/Rust, name mangling, symbol stripping from binaries, and generics would make the feature very difficult to implement and rather unergonomic. C#/F# and CLR C++/VB probably already have this in the form of the reflection/.NET runtime APIs (including JIT for generics), but implementing it in C++ or Rust would be very complex and it would be yet another feature that bifurcates the language ecosystem (like heap allocation for embedded). Python and JavaScript do have this feature, but you still have to have access to the scope to do it (getattr(self, 'fn_name') in Python, or this['fn_name'] in JavaScript), and this exists in one way or another in most dynamic languages.

You can do this as a hack in most compiled languages by exporting symbols from a shared library and just using the OS-specific dynamic linker, but this is ugly, slow, and pointless in a static language.
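To make the dynamic-language version concrete, a minimal Python sketch of calling a method whose name is only assembled at run time (the class and names are made up for illustration):

    # Reflective lookup: the method is found by a string name via getattr;
    # most dynamic languages have some equivalent of this.
    class Robot:
        def walk(self):
            return "walking"

        def jump(self):
            return "jumping"

    r = Robot()
    action = "wa" + "lk"                # name constructed at run time
    method = getattr(r, action, None)   # look the method up by string
    print(method() if method else "no such action: " + action)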


You usually get that through either reflection or an "eval" command.


The Finder is a wart. Apple does have a lot of good UI elsewhere, but the Finder needs an overhaul so badly it hurts.

Know what drives me the most nuts? That there's a single default key (Enter) to rename a folder, while you have to use a key chord (Command-O) to enter a folder. It's so backwards; renaming is not something I spend even remotely as much time doing as moving around. That's not to mention the completely obvious missed opportunity to let the Enter key do what it says and enter the folder.


What's ridiculously annoying is how good the Finder was under classic Mac OS compared to what exists now. The new Finder is a bad translation of some of the classic functionality on top of the NeXT workspace manager. And it's a bad fusion of the two, in my opinion.


> (Command-O) to enter a folder.

I use Command+Down, which opens Finder shell-objects generally. It's the "descend" to Command+Up's "ascend."


Not sure why I haven't been using that, but that is better than Command+O for sure, thanks!

Still, all navigating should be single-key-able, right? I guess it is technically, with left & right arrow keys in the List & Columns views, but I don't use those or like those as much, for whatever reason.


Finder is absolutely boggling to me. I've considered switching to Mac as a mobile development platform a few times, but every time I sit down to actually USE one I realize I'd rather just put Ubuntu on a windows laptop or something.

>I don't know why people seem to think Apple is some sort of UI god when they can't even make a usable file manager.

I know! I ask myself how this is possible every time.


I completely agree with you; the "All My Files" view that it defaults to doesn't fit me very well.

The nice thing is that it is configurable.


From day one, they should have called it the "Loser" instead of the "Finder".


For anyone unaware, mdfind foo on a Mac does the same as locate foo on Linux: CLI Spotlight search. Also, open bar will open a thing (file, folder, application, whatever) in the associated application. I typically do open . to pop a Finder window if I want a GUI to mess with files in the current dir.


As most developers do. Most everything else is a pain in the Asteroid.


I switched back to Mac from Linux so I could have grep, pipes, sed and all the UNIX stuff alongside real commercial applications like Word, Excel and Photoshop.

The irony is now I run Linux in a VM because the toolchains for the embedded work I do all run there. In principle I can make it all work in MacOS, but it already works in Linux.


You beat me to that response! The book is great fun to read. I read it in grad school 20 years ago (where Unix was the only option if you wanted to write your articles in TeX) and re-read it with much pleasure today (when 75% of my computer time is spent, by choice, on Linux variants).


I like to come up with crazy questions and think about them. For example, what would the thing look like that would make people go "oh, this looks good enough to replace UNIX"?

Don't get me wrong, I am a big fan of UNIX, but I hope I will be alive (though I doubt it) when we see some new thing that makes UNIX feel dated. Now, some of you might jump in and say "Oh, but UNIX already feels dated", and that would be a conversation on its own, but I think people say that more because they are bored with UNIX, or because they dislike certain parts of it.

And what breaks my heart is the general disinterest in operating systems among young developers/students (I am a student too, but I find these to be the most interesting of all university courses). I see very few people doing OS work today. I wasn't there to see how it was in the 80s and 90s, but from what I've read, you had much, much more choice, though the quality was debatable. Why do we always consider operating systems a "solved" thing? Is it because of the way the von Neumann architecture works, so that we tend to abstract the computer as an entity in the way that gave us UNIX, and we won't be able to discover and build something different but as capable as UNIX without changing our way of thinking about what a computer is and how it works? Did we get used to computers as they are, especially the newer generations, taking things for granted and just going forward with what they inherited?


Google's Fuschia OS project is intriguing. It's open source, but I haven't been able to find any whitepapers or conference talks about its design or Google's plans for it. The speculation is that they wanted an ultra-lightweight OS for future low-latency augmented-reality applications.

(https://news.ycombinator.com/item?id=12271354)


For decades, I have challenged people with a dollar bet that they cannot correctly spell "fuchsia" given five tries.

Everybody thinks they can spell it, but I have never lost the dollar. Sometimes I pull out the same bet six months later and still win it.


Don't bet against German-speaking people. "Fuchs" means "fox" in German, so "fuchsia" basically means/sounds like "fox-ia" to them; I'm guessing they will be able to spell it correctly, since for them it is hard to misspell.


Relevant username?


Indeed, I seem to recall from the XKCD color survey that the correct spelling of "fuchsia" was not in the top five.

Which makes it a messy name for a project. Though maybe Google can change the spelling by brand-name fiat, just like they did with "googol".


I have a hard time understanding why augmented-reality applications need a different type of OS.

I get the low-latency part. But there are already OSes designed for low latency.


I wish someone would try to reimplement Symbolics Genera under an open source license and make it run properly on modern hardware, without VLM. Its design and philosophy are quite different from Unix derivatives, being so object-oriented in the right parts and extensible at the source code level. The user interface was very reminiscent of the notebook interface in Mathematica: output in the listener wasn't plain text as in a Unix CLI, but arbitrary objects like bitmaps or graphs or interactive widgets. You could click on any of them and inspect its state or open the source file that implements it.


The Genera terminal, or "listener", was a real eye-opener for me the first time I used it. It can display rich text and images, has mouseable elements like buttons and editable forms, and has a powerful incremental online help system. It's almost like a kind of scrolling desktop.

The thing I really liked was that the console output was independent of the input line. You can start typing in a command, hit <help> and have the (rich, hypertext) documentation appear in the console above where you are typing in. You don't have to delete everything and type man foo like in Unix, only to have the man page disappear when you quit.

Simply in terms of usability the Genera listener craps on any Unix or Windows terminal I have used.


I just hope that whatever replaces *NIX is still free software. Your computer OS is the last thing that should be controlled by corporate gatekeepers.


That's not even a question.


I find it hard to imagine a world in which, in 100 years, Unix-derived systems are still prevalent, let alone running everything from phones to supercomputers. I can't imagine what would come along, but whatever it is will eventually be just as crufty, because that's just the way of things.

What I'd really like to know is: what is Unix, at the core? I know POSIX defines certain standards, but what can we rip out and still call what remains Unix? If we're being really reductive, can't we take it beyond the features that were present in the very first Unix for the PDP-7? Is it simply the syscall interface which provides a filesystem abstraction? Do we call an OS a Unix if it only has a POSIX layer, like Windows NT?

So if we could strip everything out and leave just a few pieces, and still recognize it as Unix, the use of the word approaches meaninglessness. Unixes exist without memory management or process isolation. Think of how fundamentally dissimilar an Apple Watch running iOS is compared to a PDP-11. I think that versatility in the terminology itself lets us keep calling something Unix long after it's morphed into something else.

It's not hard to imagine replacing system calls with other ways of achieving the same outcome. I think fundamentally, at the lowest level, where the operating system is an interface to the core features of the processor, it is relatively invariant. For truly unique systems, we do have to look at the hardware. Unique operating systems have been tied to unique hardware, like the Xerox Alto, or the MIT CADR, whereas Unix as a generic set of functionality has made many computer architectures useful, but also reduced their variety.


I don't see a world in 100 years where anything is Unix-derived - but that doesn't mean it won't be Unix-compatible.


Why not? We still have software that was written 50+ years ago for IBM System/360s running on a derived architecture today (System z).


Of course, there are good reasons for running legacy software, and likely some of that will still be around.

How about: I don't see Unix-like systems as the be-all and end-all of operating systems, such that they will be the basis of anything in 100 years. To extend the timeframe indefinitely, is this system the one to last the ages?


Really, though, Unix is just the UI over the top of the underlying operating system. Like, Minix and Linux work in radically different ways, but they both expose a Unixy/Posixy interface with file descriptors and pipes and the same execution model. Further out, Haiku is even more different; and then there are all the different Unixy front ends to Windows, which is very far from a traditional Unix inside.

Then there's stuff like Genode, which is... difficult to describe concisely, but is a capability-based OS designed for running OSes in; they wanted it to have the ability to self-host, so they added a personality for running toolchains in, and of course it's Unixy, because what else is it going to be?

https://genode.org/documentation/release-notes/11.02#Noux_-_...

So, while I agree that I'd like to see a working environment that's not a traditional Unix, I think that's a UI thing rather than an OS thing.


Honestly, even Haiku etc. have underneath very Unix-like assumptions about the services and nature of the things an operating system provides, and a lot of that goes with the POSIX and ANSI libc slant that programmers expect everything to have now.

Basically every single mainstream OS out there has the assumption that there are files with filenames as binary/text blobs in a separate storage space from 'memory', and with that ... file descriptors, processes, threads, and sockets.

But it doesn't take much imagination to conceive of different models -- what if instead of a file system we had a relational model? Relations/relvars and tuples as fundamental storage units? What if instead of malloc/free and fopen/fclose (and mmap/msync) we had some unified notion of allocation/storage with hints/specifications of permanence requirements, performance requirements, type, and intended use, and the OS took care of where to put it (cache, main memory, flash, magnetic storage, network storage, etc.)?

One can think of any number of possibilities. But the needs of the C-language semantics and the orthodoxy of 50 years of OS/language patterns keep us where we are.
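To make that idea a bit more concrete, here is a toy sketch in Python. It is purely hypothetical: none of these names correspond to any real OS API; it just illustrates the shape of an interface where the program states durability requirements and an imaginary OS-level service decides where the data lives:

    # Hypothetical "unified storage" interface; both backing stores are plain
    # dicts here, standing in for RAM vs. durable media.
    from dataclasses import dataclass
    from enum import Enum, auto

    class Durability(Enum):
        SCRATCH = auto()     # may vanish at any time (cache, RAM)
        PERMANENT = auto()   # must survive power loss (flash, disk, network)

    @dataclass
    class StorageHints:
        durability: Durability = Durability.SCRATCH
        low_latency: bool = False

    class ToyStorageManager:
        """Stands in for the OS: picks a backing store based on the hints."""
        def __init__(self):
            self._ram = {}   # "cache / main memory"
            self._disk = {}  # "flash / magnetic / network storage"

        def store(self, key, value, hints):
            backing = self._ram if hints.durability is Durability.SCRATCH else self._disk
            backing[key] = value

        def load(self, key):
            return self._ram.get(key, self._disk.get(key))

    os_like = ToyStorageManager()
    os_like.store("tmp-result", 42, StorageHints())
    os_like.store("contacts", {"alice": "555-0100"},
                  StorageHints(durability=Durability.PERMANENT))
    print(os_like.load("contacts"))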


Well, the browser already did replace Unix for most people =( Presumably we'd be trying to design something good and not awful though, so let's not stop there. But it does give us a starting point:

+ one click installation of applications

+ applications are always up to date

+ applications are sandboxed

Then, since we want users to be able to tinker:

+ the source for all software installed through this mechanism is available

+ optional binary caches NixOS style so it's not slow

And while we're talking about NixOS:

+ immutable package directory so you can rollback at will

And then on to data storage, which is my favorite topic:

+ defaults to tags instead of a hierarchy for organization

+ everything is schema'd, so a file saying it's a contact entry or whatever actually has to be that thing

+ data is immutable by default, so undo and versioning can be built-in (you could still turn this off for certain tasks like video-editing where immutability isn't feasible)

I have more thoughts on data storage here if anyone's interested: https://juniorschematics.com/


You forgot:

+ web applications almost always have a shitty, slow, ad-infested, dumbed-down interface with bare-bones features

+ you're at the mercy of whatever company owns and runs the servers that the application is served from, if they decide to end the service or change it in a way you don't like tough luck for you

+ your privacy and control over your own data are usually forfeited

+ your data is usually trapped on the servers or difficult to extract and use independently of the web service itself


"Applications are always up to date" is a bug, not a feature. Updates sometimes break things, and the inevitable UI changes require adaptation. The last thing I want is an automatic update that changes my tools and breaks my workflow right in the middle of some important project. I'll update manually, when I feel like it and have the time to deal with it.


It's interesting that no one blinks twice at the thought of using a decades-old operating system - it's "mature", "battle tested", "proven over time" or some other phrase.

On the other hand, programming languages that are decades old are looked upon as antiquated and not fit for modern problems, so we create new languages and discard the old ones, including forgetting useful ideas those languages contained. Sometimes, we come back to some of those ideas. So a dynamic language decides that types are in fact quite useful, and a single exe file is so much simpler to distribute than endless tiny script files, and maybe speed does matter after all.

Not passing judgement, just saying that's how it is. And it is rather odd isn't it?


> It's interesting that no one blinks twice at the thought of using a decades-old operating system - it's "mature", "battle tested", "proven over time" or some other phrase.

We use maintained operating systems that started decades ago - there are basically no good maintained options outside of Unix and Windows these days.

You need a seriously extreme number of testers to have a relatively bug-free operating system. We run proprietary RTOSes for some stuff at work and the number of weird bugs is crazy. We have to reboot every 280-some days because otherwise the machine will crash, IGMPv2 randomly stops working after a couple of months (there's no v3 implementation at all), there are weird hacks just to make the thing boot, etc. And that's on top of the fact that their semi-official ports of modern open source tools don't work half the time and haven't been updated in years.

You really don't want to mess around with esoteric operating systems if you want any chance of having a stable useful application. OSes just do too much.


I think most programming languages that are decades old aren't looked at as 'not fit for modern problems' so much as 'inefficient for modern* development'.

Is it possible to do the same things in C that we can in Ruby, PHP or Node? Absolutely. But can it be written in the same time frame?

While I'm sure there are HNers out there who can - or who at least claim they can ;) - let's ask ourselves if that's really broadly truthful outside of trivial example applications. I'd wager 'no'.

Those higher-level languages might 'forget' something from the past and 'rediscover' it, but I feel like that's kind of a very minor sub-plot in the larger story of the usefulness of their abstraction and ability to represent more powerful ideas in fewer lines of code.

* - read 'web'. Old and low level languages are still obviously used in a wide variety of projects and industries.


Meanwhile, erlang...


That's the fundamental difference between infrastructure folks - inherently conservative, keep it running, 9 9s of uptime - and developers - restless, ohhhhh shiny, seekers of the new and novel.


Unix is weird because it evolved organically and without a unified direction. But it remains because power and familiarity beat user experience.

Yes, the "pure" Unix tools are awful, GNU improved on their usability a lot. But they're still a simple command that does something.

Except Autotools. Those should burn in eternal damnation.


Since when "evolve organically" become bad? Unix was created as a tool to solve unbounded problems. It is not possible to attempt to have a complete solution when the problems are unbounded. Today, vast computer users have bounded problems -- surfing webs, messaging, playing games. With the bounded problem, it is possible to have complete solutions -- Chromebook, iPad, etc. are for them. They are not supposed to deal with Unix just like drivers are not supposed to understand transmissions or engines. Engineers sit in between the components, tools and the products, consumers. If the average computer users have problems, blame engineers, not the tools or framework that engineers deal with. And for engineers, they still appreciate today that the underlying tools and frameworks are created versatile enough to solve unbounded problems. And for some comsumers would like to think they are smart (or cheap) and would like to bypass engineers, then they shouldn't complain or they are mixing up their problems.


"Autotools is the worst form of build system, except for all those other forms that have been tried from time to time." -Churchill, probably

In all seriousness, what's your preferred alternative? Seems like Autotools is a pain, but it gets the job done and is widely available. I haven't found a build system for C projects that is:

- Less complicated

- Available from default package repos so others who clone don't have to track down some esoteric package themselves

- Free software that runs standalone on a shell, not in an IDE or whatever


Like the quote says.

Autotools sucks big time, and what it does has been largely unnecessary since around the turn of the millennium [1]; but since it's used almost everywhere in infrastructure (often decades-old) software, you have to deal with it.

If you pull an xkcd 927 and make a grand unified build system, you get something like CMake, which sucks even more (its own build process takes longer than that of the software it's supposed to build, you need to deal with C++, have fun with CMakeLists build errors and customizations, etc.).

I believe git's build system is just a bunch of Makefiles (or a single one). That's what an optimal build (but not mine, unfortunately) looks like. Though pure (POSIX) make is insufficient for anything non-trivial, IMHO.

phk's famous rant on libtool:

[1]: http://queue.acm.org/detail.cfm?id=2349257


You should be able to build well-written code on a variety of modern systems using nothing more than a Makefile, or even just a shell script. See plan9port for an example of the latter.


Plan9port ships with, and uses, mk.


You're right, but first the INSTALL script makes a few customizations based on what OS you're running (Solaris, OS X, Linux, *BSD), then bootstraps mk and finally runs mk on all the application source.

The script takes into account differences in operating systems, without including thousands of lines of weird little leftovers from 30 years ago when you needed to check if you were actually running on a 16-bit Data General machine or whatever (I'm looking at you, autoconf)


As if I needed to build the latest version of GNU cat on a MicroVAX.


There are two ways to do what autotools do. One is the autotools way. The other is the way every other similar tool does it (to my knowledge): the imake way.

Autotools sucks. But everything else sucks more. There's a life lesson there, somewhere.


You could try cmake. :)

It also has better Windows support than autotools.


> - Less complicated

;) cmake is actually exactly what I had in mind when I wrote that. Maybe I haven't used it enough, but I find it basically just trades Makefile.am everywhere for CMakeLists.txt everywhere.


The thing is that it does have (or rather can be made to have) a good user experience for experienced users. The problem is the learning curve, and to a lesser extent, compatibility. MacOS is easy enough for non-techies to use while still having fast alt-tab, readline shortcuts in almost all apps (OS level I believe), and it's Unixy enough that we get things like Homebrew while also getting things like Photoshop, as others have mentioned.

Windows is OK for non-techies (being generous, but there aren't that many real choices if you're e.g. buying a computer from a store and don't know how to install an OS), but you really need a VM to do some types of work (though the Ubuntu subsystem is pretty awesome, and now we have MobaXterm instead of just cmd.exe and PuTTY).

Linux is fine for techies / power users but requires customization to get to the productivity levels that Mac gets to (IMO). Non-techies wouldn't be able to achieve the same thing or probably even install it in the first place.

You can buy a Mac in a box from a store and pay someone at the store to help you if it breaks or you don't understand how to do something. You can develop iOS apps on it. It also looks cool to people who don't know any better and people who do. Daft Punk and almost every other musician who works with computers on a stage uses Mac.

They all have their place and I run all 3 at home (Win desktop for gaming and some browsing, Linux laptop for personal programming and browsing + travel, Linux desktop for my home electronics lab bench mostly for embedded development, Mac laptop for work, FreeBSD for a NAS, MIPS Debian for router)


> readline shortcuts in almost all apps (OS level I believe)

Sorry but OH MY GOD. After 4 years of obsessive, hours-a-day usage of this macbook, I never realised I could do any of this. Literally just ctrl-K'd in Chrome's address bar. This is just next level. THANK YOU.


> while still having fast alt-tab

Yes, but it also has a slow, non-customizable animation when changing desktops, uses arbitrary keybindings you can't remap easily, and generally imposes lots of choices on its users without any way to opt out of them.

I was forced to switch from a very personalized Fedora/StumpWM setup to Mac OS and it was a disaster. For the first time in decades(!) I had to pay for external programs to customize things. And even with these, there are features I simply can't disable. It's been seriously harsh. It looks like the entire platform has lost all interest in customizability but, for me, customizability is one of the basic requirements for any tool.


Except that GNU is Not UNIX and many companies still have UNIX boxes without GNU on them.


Correct.

And the last time I came across one of those I compiled and used modern tools.

Because even 20-year-old Linux versions had tab completion and a vi that worked with directional keys.


Assuming one has the access rights for it.

I do occasionally ssh into boxes where I am a lame user without compilers installed.


Yeah, it's not always possible to compile your own tools

(At least they have ssh and not telnet)

What you can sometimes do is pick an alternate shell or find some other editor someone 'left lying around' (even typing vim instead of vi can give you something different)

And sometimes "tab completion" works with \ or ESC or something similar, I don't remember exactly


> At least they have ssh and not telnet

Well I recently went into ones that only had ssh, but then files were also copied over plain FTP or NFS.


Oh yeah

What was more amazing was people using X for accessing a terminal. You know, because that's how someone told them to use it, so they just went with the flow.


Sure it is. Heck, it's not even that hard to build gcc as a cross compiler, if some idiot hasn't installed a compiler.


It's quite easy to prevent clever users from doing that, and to punish them if they still try.


You can install stuff in your home directory. No access rights needed. Look up the --prefix argument.

This isn't Windows, after all.


You are assuming that:

1 - You can put them there;

2 - You have enough quota;

3- $HOME isn't mounted as noexec;

4 - IT doesn't have scripts that purge executables from $HOME, with the corresponding penalties for the respective user when found.

I am not a UNIX newbie, although I favor Windows.


Hey, the constraint was not having privileges, not being hounded by death-crazed, fire-breathing, Fascist IT goobs out to prevent all work.


Earnest question: why do you prefer Windows? From reading your other comments (and my experience) I'm guessing GUI programming and the Visual Studio debugger? I do glue programming (mostly Python) and admin. I've been at Linux shops for 10+ years and recently switched to a Windows shop. I've been trying to give it an earnest chance but can't stand it.


I've known Windows since 3.0 and have done GUI-related coding since those days; I also had some friends in the Amiga demoscene, mostly with the 500 model.

I've known UNIX since Xenix, and have used DG/UX, AIX, HP-UX, Solaris, OS X, FreeBSD and lots of GNU/Linux distributions, starting with Slackware 2.0.

After a decade of using UNIX-like systems, and Windows as well, while researching everything that came out of Xerox PARC, and being an Oberon user for a short while, I came to realize that the Apple and Microsoft developer cultures are closer to Xerox's dreams of how computing should look than UNIX, where X Windows is just a manager of xterms.

Also, I never was a big fan of C, even back in the MS-DOS days, when I would rather use Turbo Pascal or Turbo C++.


Wget or scp? Paste a binary into vi and save it?


You need to compile it first for the UNIX that you are accessing, and that is assuming IT hasn't locked down the ability to execute unauthorized binaries.


I guess you don't AIX much.


That's one of the ones I used, so your guess is wrong


SMIT happens.


Having worked extensively with some of those non-GNU Unixes, I feel that the difference is almost academic. Even if your box comes with nvi rather than vim, it probably still has Perl, and the bash shell is likely installed if not set as the default. Overall, the feel is not that different.


nvi is already an improvement over plain vi.


That's not necessarily as true as you think.

I worked in a Solaris shop that had GNU coreutils and packages installed everywhere with a "g" prefix, and another place running OpenBSD which had the same.

gecho, gcat, gln, etc.

I'm pretty sure GNU coreutils can be built on any Unix-like OS.


It depends on the IT guys; I have occasionally sshed into boxes where that wasn't the case and compiling wasn't an option.


- compile stuff statically on your own hw

- base64encode it

- paste the encoded text over ssh to a file

- base64decode it

- chmod +x the decoded file

- …

- profit


- Mount $HOME as noexec

- Disable execution bit for all directories for home

- If the UNIX variant allows it, jail/contain home for each user session

- Severely punish users (or their employer, if a consulting firm) that bypass IT regulations


You also mentioned these points in your other comments here, but, to me, they look over the top to the point of being paranoid. That's probably because I lack any serious experience on the admin side of things, or of working in environments where such measures would be required.

So I wanted to ask: where would such a level of security be required? And what attacks does it defend against?


- If Linux, find the current local privilege escalation bug, get root, and burn it all to the ground.


Followed by getting fired and charged.


Oh, you're no fun anymore!


> - Mount $HOME as noexec

/usr/bin/env $your_binary


> If the UNIX variant allows it, jail/contain home for each user session


Does that still work? I know the /lib/ld-linux.so /some/binary trick doesn't work anymore.


It should be "a lot of fun" to try to cross compile an AIX or Solaris binary on a Linux machine

You'll probably go insane doing it

(Linux x86 compiling to Linux ARM/PPC/etc is doable and done quite frequently)


Funny book, I have it in my hands (with the barf bag!). The anti-forward by Dennis Ritchie is awesome all by itself.

Some of it is long obsolete. For example, Usenet/NNTP is rarely used today. And many of the specific implementation problems they mention are long-fixed on modern systems.

Some of the problems they note are still valid. Some of the challenges of dealing with filenames are absolutely still true; because filenames are sequences of bytes (NOT sequences of characters), and allow stuff like leading dashes, control characters, and non-characters, you can have a lot of problems. See the stuff on pages 168-171. I talked about this in https://www.dwheeler.com/essays/fixing-unix-linux-filenames.... and https://www.dwheeler.com/essays/filenames-in-shell.html
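A quick way to see the problem for yourself (a small Python sketch; the file names are just illustrative):

    # Filenames on Unix are byte sequences; almost anything except '/' and NUL
    # is legal, including leading dashes and embedded newlines.
    import os, tempfile

    d = tempfile.mkdtemp()
    for name in ("-rf", "report\nfinal.txt"):   # both are perfectly legal names
        open(os.path.join(d, name), "w").close()

    print(os.listdir(d))
    # Any script that splits a directory listing on whitespace or newlines will
    # see three "files" here instead of two, and the first name looks like an
    # option to rm.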

That said, there are reasons that Unix-like systems took over everything. Many of their complaints are from lack of consistency. But some loss of consistency is inevitable when you have a marketplace of many people with many ideas. Many of the other systems they remember fondly were often tightly controlled by a small number of people - they were more consistent, sure, but they were also slow to implement newer ideas. When you're running a race, the one who runs faster usually wins.


Stop saying that! It's a foreword. Like 'foreplay' but with 'word'.


While entertaining, I'm left with the following question: if not UNIX, then what?

Are there any successful non-UNIX-y OSes that are worth checking out?

I may embarrass myself here, but I was under the impression that BSD, Plan 9, Solaris and HP-UX were all UNIX-y ...


Basically only two paradigms survived the 90s. The Unix way (via Linux, BSD, Darwin) and the VMS way (via Windows NT). Everything else is either ultra-niche, dying, a mainframe so ancient and terrifying nobody will go near it, or only of marginal historical import.

Bear in mind that when this book was written, the average desktop PC was running MS-DOS, and possibly Windows 3.1 if it was new and powerful enough.

What UNIX is being compared to is the other mainframe OSes of the 1980s, from DEC's VAX/VMS to IBM's OS/360 and OS/400


Seems like a good summary. Let's not forget that microkernels and RTOS's dominate embedded.

http://www.cs.vu.nl//~ast/reliable-os/

Microkernels are also favored in the safety- and security-critical niches in high-assurance systems. Although niche, it's tens to hundreds of millions of dollars in activity over the past decade. Seemed worth adding as a surviving paradigm given that. One of more successful ones below.

http://www.ghs.com/products/safety_critical/integrity-do-178...


The Xerox way also influenced Windows, OS X, iOS and Android, and to a certain extent ChromeOS.

In regard to IDEs, frameworks, and programming-language culture.


There's a good book about this, called 'Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age'

http://a.co/9Gp3tQ7


I googled "the VMS way" an got this: https://youtu.be/96LQ9_jiwQ4

Very 90's, but I think this is what I was actually looking for ;) https://en.wikipedia.org/wiki/OpenVMS


Note: this book is copyright 1994, the same year Linux 1.0.0 was released.


still worth checking out? or just worth checking out?

The coherency of VMS (and the VAX platform) is quite amazing -

Coherent command-line interfaces, APIs, documentation, hardware, and software, even down to coherent part numbering, all made by a single company, from small workstations up to larger-scale minis, and nearly transparent clustering.

I mention VAX specifically, since by the time Alpha arrived, DEC was competing in a much different environment, and was subsequently merged, etc etc etc.

IBM and other mainframe-y OSes, though definitely arcane, have had some amazing facilities (Parallel Sysplex: multi-architecture systems that are completely redundant and hot-swappable) and capabilities since the late 60s/70s, which are only now being caught up to, in crude and hackish form, by commercial PC servers.

BeOS had some cool concepts.


I've installed and played around with VMS on an emulator and even on my own VAXstation, but never used it in a production environment. I'm sure real-world usage left a lot to be desired, but my impression is that VMS as a product was well-rounded; it feels complete, as if there aren't a lot of loose ends. In contrast to the Unix philosophy, VMS feels DESIGNED, as if the people working on it had a coherent idea of the way the entire system was supposed to work and how its components interoperated. The documentation is superb. Linux distro maintainers would do well to revisit the VMS docs from time to time to see how it's done.

The versioning filesystem is something I've waited for since I first used VMS (1991) and discovered such a thing existed. Though, to be fair, recent Windows and OSX versions have something like it; it's not the same.


"VMS feels DESIGNED"

It was. This is one of the reasons, outside of its legendary reliability, that I keep bringing it up. It's a cathedral where most things fit. There was cruft to deal with, but overall the pieces work very well together, as the teams had a coherent vision for how they wanted the system to work. It also integrates solutions to problems such as distributed locking and node failures that other systems dump on the app developers.

Not to mention I never got these testimonials or personal experiences when I started with UNIX...

http://h41379.www4.hpe.com/openvms/30th/t_guestbook.html


Which emulator did you use? Simh?


I used Simh Vax, but I've also used Charon-VAX in the past when they were allowing free usage of the software (I don't know the current situation). Charon was quite fast at the time on the hardware I had in maybe 2008 or so. But I've used Simh recently and it's plenty fast enough on modern hardware.


Not OP, but I recently installed OpenVMS 7.3 on Simh following this guide [1] (ignoring the building-Simh part), and it works like a charm.

[1] https://www.wherry.com/gadgets/retrocomputing/vax-simh.html


What's the easiest way to get OpenVMS these days?


The official site is here:

http://www.openvmshobbyist.com/news.php

You can sign up as a hobbyist here:

http://plato.ccsscorp.com/hobbyist_registration.php

someone will email you in the next day or so with a license installer and the URLs to download the software.

Also they have the Alpha version of OpenVMS available.


I put the post below together for anyone wondering what UNIX alternatives existed with what superior attributes:

https://news.ycombinator.com/item?id=10957020

"Successful" was mostly social and economic factors rather than technical. The most successful in mass market are the OpenVMS variant called Windows, an OS building a GUI platform on a UNIX/Mach hybrid, UNIX OS's such as Linux, and an OS building a user-space platform on Linux. UNIX's and BSD-licensed code allowed companies to save money & increase adoption building on existing code/ecosystem that had tons of momentum. Mainframe OS's are still going pulling in huge revenues, including MCP via Unisys (Burroughs renamed post-merger). IBM i's succeeded AS/400 that succeeded modern System/38. Microkernels and mainframe VM's got reinvented as modern clouds with VM's or containers. Legacy continues even if original products mostly died out.


I think the important thing to accept is that just because something is the best option we have doesn't mean it couldn't be significantly improved -- I feel each of bash, vim and Emacs falls into this category, in its own way.


They all are, but this book dates from a time when Unix had nothing like the absolute dominance of paradigm it does now. True, by the early-mid 90s, the Unix model had effectively won - but there remained a very large number of people with fond memories of other platforms, many of which did at least some things better than any contemporary or even modern Unix.


>there remained a very large number of people with fond memories of other platforms, many of which did at least some things better than any contemporary or even modern Unix.

Yeah, that's what I'm curious about. Could you name names? :)


I still miss things from AmigaOS (still being developed, but ultra-niche; with "clones" in the form of AROS - which is open source - and MorphOS).

Things like ubiquitous scriptability of apps via ARexx (the language is awful, but you don't need to use the language much to call the APIs); heavy multi-threading throughout the OS; datatypes (new image format? drop a library in the right directory and every application that knows how to load images via datatypes can load it; same for text/documents, sound, etc.); and assigns. Think of assigns a bit like $PATH, but not limited to a single variable and enforced at the OS level, so e.g. C: in AmigaOS works roughly like $PATH, but by default there's also LIBS: for libraries, T: for temporary storage, CLIPS: for the clipboard, and by convention people tend to have e.g. WORK: pointing wherever they want their project data. You can define your own, and redefine them at will, and they can refer to each other, so e.g. your "C:" may refer to System:C and Work:C (at the same time), and either System: or Work: or both can be partitions or labels assigned to removable media (in which case the OS may ask you to insert the right disk if you reference it; it can reference a specific disk rather than just the drive), or another assign.

Or workspaces as an OS-level construct (Screens) that applications can open, close and manipulate, either for "private" use for just their own windows or by opening public screens that can be used to combine windows from multiple applications.

Or third-party standards like XPK, which lets any application transparently support file compression: similar to datatypes, you can drop in a library for any compression algorithm and all the apps gain support for it (there are several similar standards for e.g. archivers, disk images, etc.).

Current mainstream OS's still feel very backwards in many ways after being used to those things.


Furthermore, you get a new application on a floppy. The disk is named FOOAPP. The application can then request data files such as FOOAPP:data1.dat and FOOAPP:data2.dat. If the program is running and you remove the floppy, no issue. When the program tries to open FOOAPP:data3.dat, the operating system will ask you to insert disk FOOAPP (and there's no having to dismiss the alert box or anything; the system will detect the inserted disk and dismiss the alert box automatically).

Later, when you get a hard drive (or decide you like the app and want to continue using it), you can copy it to the hard drive (wherever you want) and add an assign "FOOAPP=MY_APPS:new/fooapp", and the application will still work just fine.

It's still something I miss.


There was lots of attention to details like that... I still hope AROS will get to the point where I can use it more. The big challenge is SMP and memory protection; without them a lot of software gets really hard to port (a lot of essential OSS software depends on fork()). It's in the works, though, so maybe one day...


Assigns sound a bit similar to how Plan9 handles the "everything is a file" thing


Lisp machines turn up pretty frequently in this context, and I understand why - some of their capabilities still haven't been replicated anywhere else, nor are they likely to be, and that's a real loss; in particular, there is a great deal to be said for a model in which changing the behavior of the running system, right down to the microcode, can be as simple as writing up the changes you want and evaluating them. No compile step, no restart, no nothing - you just point at something and say "do this instead from now on". If you like it, save it and hang it off the boot process; if you don't, reverting the change is as easy as evaluating what you replaced; if you discover you've broken the running environment, you can usually fix it without a reboot, and in extremis a (fast!) reboot cures all such ills.

About as close as you can get and stay relevant to the modern world is Emacs, which, while a lot of fun, I gather from those with real Lisp-M experience isn't really all that close. And it says something, I think, that the next former Lisp-M user I meet who has a bad word to say about that platform will be the first one.

In general, the book does these systems more justice than I can, and it's a fun read in general. I recommend it quite highly - in those areas it covers where I have experience of my own, I found little with which to disagree and much with which to laugh.


Live modification of OS code sounds like a recipe for disaster, unless you are an expert who codes himself an OS before breakfast.


Bear in mind that LispMs had well-designed tools for supporting this kind of development. For example, they had version control that tracked all changes everywhere at all times, even changes that were not saved in any file.

Also bear in mind that they were designed by programmers for programmers. They were not intended for the general public.


> they had version control that tracked all changes everywhere at all times, even changes that were not saved in any file.

No. The file system did auto-versioning, so you could generally recover recent history of a file, but there was nothing that "tracked all changes everywhere".

Well, ZMacs (the editor) did have unbounded 'undo' functionality, and you could even select a function and undo changes to that particular function, even if those weren't the most recent changes to the file. Maybe that's what you have in mind.


Thanks for the correction. I'm probably conflating features of ZMACS undo with features of Interlisp's Masterscope.


It's not something one would probably use every day, or without due care. But it is something you can do. And, as I said before, unless you take explicit steps to persist the change across boots, one flick of the big red switch and it's gone.


Very interesting, thanks. Why do you suppose Lisp-M isn't more widely used?


They were expensive. Sun workstations that cost five figures were cheap compared to LispMs.

Their UI was powerful and efficient, but not easy to learn. They were designed by programmers for programmers. Their UI assumed that you were willing to invest nontrivial time and effort learning a broad palette of specialized tools.

Their system design made some pretty different assumptions about how they were going to be used, and the environment they would be used in. For example, LispMs didn't have any kind of security or memory protection. They were wide open systems.


If you wanted a comparable Lisp experience on a Sun workstation (large bitmap screen, large disk, software, ...), the actual price difference wasn't that much. But there were a lot of underpowered and slow Suns, which were cheaper. Then again, you had better not use Allegro CL, Lucid CL or LispWorks on those: 16 or even 24 MB of RAM were simply not enough, and the GC was fighting with the virtual memory pager...

The big cost drivers at that time were memory and peripherals like disks.

The lab I worked for had slow SUN SPARCs. The idea was that you would net boot them and they would get their software from a main server. Relatively cheap. But it turned out a bit slow and clunky. People then often used their office Macs for software development, with Macintosh Common Lisp.


True, but the point is that there weren't any really cheap LispMs until it was too late to matter.

Besides which, a system that was so much designed for programmers was probably never going to command a very big market.


There was never a really cheap Lispm. The 'cheap' ones were a Mac II + TI microexplorer or the Mac II + MacIvory. But developer workstations were always expensive - Lispm or not - or underpowered.

What was also expensive was the software. I got a KEE license on a used machine I was given; I think it cost $50k. Similarly, Symbolics extensions for their OS could be extremely expensive. The graphics suite cost several tens of thousands of dollars per module (paint, model, render, tools, ...).

> a system that was so much designed for programmers was probably never going to command a very big market

Well, they could not live on developers alone. They were also looking for end users. Some of the machines were intended as delivery machines, cheaper and with fewer resources. There were also tools like 'firewall', which were supposed to keep users from coming into contact with the development environment (debugger, ...).

Actually Symbolics also targeted high-end end users, both with hardware and with software. They had to go beyond government developers to be able to sell machines/software. There were applications that were kind of development centric, like the ICAD CAD system - where one could program 3d models via Flavors extensions. But one huge area for Symbolics was the graphics market: 2d and 2.5d animation, 3d animation, paint/model/render. Much of that was used without programming. The graphics suite was mostly targeted at graphics professionals, often doing 3d visualizations for TV, cinema and games. Many TV stations had their logos animated on Symbolics machines.

https://www.youtube.com/watch?v=V4HXPJtym2Q

https://www.youtube.com/watch?v=f4Lo0IfUSPk

Early:

http://lispm.de/symbolics-3/symbolics-3.html

Later high-end hardware/software for accelerated graphics: http://lispm.de/symbolics-1/symbolics-1.html

Thus the UI had to be usable for non-programmers doing graphics work.

https://www.youtube.com/watch?v=gV5obrYaogU


It's a fair observation that you could build end-usery UIs on a LispM. The default UI was pretty programmery, but then that's true of pretty much all computers at the time they were being introduced.

I guess a more important factor is that PCs were cheap enough to create a market for friendlier UIs, and LispMs weren't.


> It's a fair observation that you could build end-usery UIs on a LispM.

Texas Instruments ran the Interface Builder on their Lisp Machine, which was demoed to Steve Jobs and which was then developed into the NeXTstep Interface Builder.

> I guess a more important factor is that PCs were cheap enough to create a market for friendlier UIs, and LispMs weren't.

The higher-end commercial applications on the Lispm were kind of user friendly. Using something like the font editor or a graphics editor was not difficult. One problem was that, during its lifetime, much of the UI landscape was still under development - both in look & feel and in APIs. There were even UIs on the Lispm that looked similar to Mac apps (like Plexi, a neural network toolkit).

Symbolics' Dynamic Windows GUI had some polish, which came from being used in real applications and from substantial investment in development. Where that investment was not made, applications lacked polish and remained experiments, demos, sketches, research prototypes, ...

TI also developed a more conventional UI toolkit for their Lispm.

> I guess a more important factor is that PCs were cheap enough to create a market for friendlier UIs, and LispMs weren't.

The market was small - I would guess that around 10,000 machines were sold. Additionally, the software was not portable to other platforms. CLIM was intended to support that - and some applications used it - but it was bound to commercial Lisp systems, some of them expensive... Generally I think the Lispm impact on other UI systems wasn't that great.


Was there anything preventing the development of a less hacker-focused UI, save lack of reason to do so?


There is a price list for Lisp machines in this archived Symbolics presentation made in 1986, page 14: http://bitsavers.informatik.uni-stuttgart.de/pdf/symbolics/h...


Lisp machines were expensive, the early ones at least took a very long time to boot, and I think there are weaknesses to the Lisp "image"/"world"/whatever model that Lisp-M partisans aren't quite willing to acknowledge. They're still extraordinarily cool machines.


> there are weaknesses to the Lisp "image"/"world"/whatever model that Lisp-M partisans aren't quite willing to acknowledge

I'd be interested to hear this expanded upon.


Old-fashioned Lisp and Smalltalk systems presented a programming model in which the language process is your running program, but it doesn't yet know how to do the right thing. Your job is to interactively teach it how to be your application. This you do by teaching it one little piece at a time until it is transformed into the application you want.

This process is facilitated by the ability to at any point save the state of the process for later resumption. These saved states are called "heaps" or "images" or "worlds". Start up one of them and you are more or less instantly returned to the last state you were in when the process was last running.

LispMs extended this model to the entire machine.

There are advantages and disadvantages. The advantages are legion, and they can enormously accelerate development in the hands of someone comfortable with that mode of working.

There are two main types of disadvantage. One is that, since you are interactively modifying a live, running system, if you make a mistake, the mistake becomes part of the running system. If you save the image with that mistake in it, it becomes part of the system in future sessions, too.

The second problem is that it becomes troublesome to separate your application from the scaffolding that you've used to build it. To take a very simple example, suppose you construct a few simple data structures for testing purposes. Those test structures then become part of the running system, and there is a risk that your application will inadvertently come to depend on their values, leading to obscure bugs if they change or if the application is deployed without them.

There are reasonably straightforward ways to deal with both classes of problem, and properly-designed Lisp and Smalltalk systems include tools that help solve these problems, but it's appropriate to acknowledge them as problems.

I like old-fashioned Lisp and Smalltalk systems, and I like image-based development. I prefer the programming model in which I am teaching the process to be my application, and I'm much more productive in such an environment than in the now more-mainstream model, where programming is more like building something from a blueprint than it is like teaching something to behave the way I want.

But that doesn't mean I don't acknowledge the problems of image-based development. They exist. They are solvable, but they do exist.


> very long time to boot

Actually a typical Symbolics 3600 didn't take much longer to boot than a SUN... my NXP1000 Lisp Machine boots in three minutes from a very large image.


I'm not trying to be fussy, but we would boot the 3670 overnight when the GC stopped being able to catch up with itself, because as I recall it took a couple of hours.


A full GC could easily take half an hour on a machine with large virtual memory.

Booting then was much faster.


OpenVMS: distributed locking (we're barely catching up with it, in 2017!) -- see e.g. http://download.oracle.com/otndocs/products/rdb/pdf/rdbtf05_... , filesystem versioning (we still have nothing like it, and applications like ClearCase (very poorly) try to implement their own over filesystems that don't support it). ASTs (see https://en.wikipedia.org/wiki/Asynchronous_System_Trap ) are also great, but I think something akin to them exists in Windows.


See Show Stopper!: The Breakneck Race to Create Windows NT by Zachary. It gives you an idea of what the Unix landscape was like, and how primitive Windows was when Dave Cutler's team from Digital began building NT (after building VMS at Digital), which became Windows XP (when Windows stopped crashing every hour).

Much of what we assume every OS does today was cutting edge development in 1990.


Yeah, everyone seems to rag on C for being ancient. They don't realize that, in 1990, writing a kernel in C was a huge improvement over assembly language! And it still is today -- there's really no other option.

If your only other choice is assembly language, then C starts to look really good. You don't complain about weird function interfaces, because at least you have functions!


The original NT graphics code was in newfangled C++. The book goes into detail on the time lost teaching the team C++, and argues that they should have stuck with C. ;)


Nitpick: NT became Windows 2000, and that's where Windows stopped crashing every hour. XP came next, and was where the 2000/NT line entirely superseded the 95/98/Me line, whereupon there was much rejoicing.


Amiga OS, BeOS


And consequently, HaikuOS


Well, Haiku is mostly POSIX, so I'd say it's mostly UNIX. It's more than that however, and the graphical subsystem is awesome. But I'd say it's unix.


Having a POSIX API doesn't imply a UNIX architecture.


ITS is often mentioned and I suppose PR1MOS (which was descended from ITS)


I'm familiar with ITS, but haven't heard this before, and am highly skeptical of any connexion with PRIMOS. The Wikipedia article on PRIMOS says "Legend has it that the unusual choice of FORTRAN for the OS programming language had to do with its history. Allegedly, the founders of Prime had worked for Honeywell on a NASA project. However, Honeywell at that time was uninterested in minicomputers, so they left and founded Prime, taking the code with them." ITS was written at the MIT AI Lab in MIDAS assembly language.


It seems more likely that PRIMOS would have been influenced by Multics.


- Temple OS. Very simple OS but has interesting innovations. http://www.codersnotes.com/notes/a-constructive-look-at-temp...

- Amiga OS / Morph OS

- Menuet OS / Kolibri OS. Minimalistic and 100% written in assembly.


A lot of the Unix-Haters Handbook compares Unix to the proprietary mainframes of the time.


There are lots of non-Unix operating systems that were never successful. But they are still an important, if forgotten, source of ideas: BeOS, RiscOS, OS/2, Pick Operating System.


I think that quite a bit in this book is outdated by now, but it's still a fun read, even as a Unix person.

The chapter on NFS was an especially fun read. When I first set up my home server, I had my desktop lock up completely a few times when the server became unavailable, because of NFS. (Then I found out one can mount NFS shares in interruptible mode so the requests will eventually time out... but it was so annoying up to that point.)


Many parts of the book are outdated, and some parts are just pedantic and outright trollish, but other parts are still painfully up to date.

For example, mandatory file locking. I bumped into this in a project once.

We wanted to prevent concurrent read/write access to a particular file (a serial device, actually). Unfortunately there was no simple and reliable way to do this on Linux.

The "traditional way" of doing this is via lock files: you create a file "somewhere", and other applications are supposed to check if this file exists before operating on the locked file.

But you have three problems here: 1) the location of the lock files is not standard; 2) there are race conditions between checking for the existence of a lock file and creating it; and, most critically, 3) applications can still ignore the lock file at their leisure.

There is also flock() (not mentioned in the book, IIRC; I think it was added to Linux at a later date). But it still blows, because it only solves (1) and (2). You can flock() all you want; if applications don't care for it, they can still do whatever they want.

We just ended up accepting our losses, used flock() for our applications, and hoped for the best.
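
For illustration, a minimal sketch of the flock() pattern (Python's fcntl module wraps the syscall; the device path here is just a placeholder, not anyone's actual setup):

    import fcntl

    # Advisory locking: this only excludes other processes that also call
    # flock() on the same file. Anything that skips the call is unaffected,
    # which is exactly problem (3) above.
    with open("/dev/ttyS0", "r+b", buffering=0) as dev:   # placeholder device path
        fcntl.flock(dev, fcntl.LOCK_EX)       # blocks until we hold the exclusive lock
        try:
            dev.write(b"AT\r\n")              # talk to the device while holding the lock
        finally:
            fcntl.flock(dev, fcntl.LOCK_UN)   # also released implicitly when the fd closes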


-o hard,intr is still in my fingers almost ten years after I quit being a sysadmin.


Any thoughts on UNIX security? I've been wondering what the most secure OS is for a long time, and the answer seems to be "systems that stopped being developed before you were born."


Android, iOS, and Chrome OS all have security models that are fundamentally superior to what you get on a base *nix system: Fine-grained, application-level permissions which the user can grant or deny on a case-by-case basis.

And before you say that all those operating systems are based on Unix: that's true, but it's mostly just an implementation detail. There's no reason, for example, that Chrome OS couldn't be based on an entirely different kernel and still have the same permissions model, and most Unix-like systems don't have fine-grained permissions like this out of the box.


Windows Phone 7 did fine-grained security much earlier, and with full device encryption as the default.

It's now also standard on WinRT/UWP on Windows 8/10.

Android was very late to the party regarding fine-grained security. Google knowingly put billions of consumers at risk for a long time. And even today its security is a nightmare.


If you're not aware of it already, have a look at these links as starting points.

https://en.wikipedia.org/wiki/Trusted_Computer_System_Evalua...

https://en.wikipedia.org/wiki/Security-evaluated_operating_s...

A lot of the lower-level criteria require features present in operating systems we all use from day to day.


Thank you, I'll take a look!


> I've been wondering what the most secure OS is for a long time

Out of the box, most systems with no public services running are usually decent (and that's hard to do with some systems). Beyond that it all comes down to configuration. There is no "most secure" OS. You can set them all up to be insecure or secure.


re UNIX security

It was proven impossible to fully secure by high-assurance engineers back when INFOSEC was being invented. UNIX was much smaller back then. Projects such as UCLA Secure UNIX and products such as Trusted Xenix were still unable to reach high assurance due to problems baked into the architecture and the UNIX principles themselves. Even with API changes, they still had problems with covert channels. That led the inventors of INFOSEC to abandon UNIX for secure applications in favor of clean-slate security kernels with user-mode or deprivileged layers for legacy UNIX apps. Anything truly trustworthy runs directly on the security kernel.

re secure OS's

Probably INTEGRITY-178B or Turaya with Linux VM's for now. Maybe SourceT for network appliances. They're sold for nice sums of money to militaries, governments, and companies with nice budgets. GenodeOS is working toward a FOSS version of Nizza architecture.

http://www.ghs.com/products/safety_critical/integrity-do-178...

https://os.inf.tu-dresden.de/papers_ps/nizza.pdf

http://www.perseus-os.org/content/pages/Architecture.htm

https://secure64.com/secure-operating-system/

Two of the original, high-assurance systems are still around for OEM license from BAE (STOP) and Aesec (GEMSOS). We've learned a lot of ways to break stuff since then. Who knows what the current level of assurance is. ;)

http://www.cse.psu.edu/~trj1/cse443-s12/docs/ch6.pdf

There are also language-based approaches being investigated that could be done in something more trustworthy than Java:

http://www4.cs.fau.de/Projects/JX/

Finally, there are CPUs designed to do isolation, reliability, and so on at the gate level. Rockwell-Collins uses one in their guards and crypto appliances. Sandia has a high-assurance Java CPU. Industry also sells Java CPUs for embedded use that might be converted into some safe execution platform combined with something like JX. Call these building blocks for a secure OS rather than the OS itself.

http://www.ccs.neu.edu/home/pete/acl206/slides/hardin.pdf

https://www.ajile.com/index.php?option=com_content&view=arti...


Wow! Thanks so much for all the resources. I have added them to my notes.


Previous discussion on Hacker News: https://news.ycombinator.com/item?id=7726115


The current post is also a follow-up to https://news.ycombinator.com/item?id=13777077 from yesterday.


I just pulled the book off my shelf and it still has the UNIX barf bag in the back.

https://goo.gl/photos/gHvGwHpC3z5KxLv59

(edited link so that people don't need to login to google to see it.)


see also: The Unix haters' UNIXUX server. It's pretty funny. http://www.art.net/~hopkins/Don/unix-haters/login.html


It was more realistic back when the <BLINK> tag worked:

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
    <HTML>
    <HEAD>
       <TITLE>UNIXUX: Click on the cursor.</TITLE>
       <META NAME="Author" CONTENT="Don Hopkins">
       <META NAME="GENERATOR" CONTENT="User-Agent: Mozilla/3.0Gold (Macintosh; I; PPC)">
    </HEAD>
    <BODY TEXT="#7CFE5A" BGCOLOR="#191919" LINK="#FF0000" VLINK="#FF3333" ALINK="#00FEFA">
    <P><B><TT><FONT SIZE=+1>UNIX HATERS Release 2.0 (unixux)</FONT></TT></B></P>
    <P><B><TT><FONT SIZE=+1>login: <BLINK><A HREF="password.html">_</A></BLINK></FONT></TT></B></P>
    </BODY>
    </HTML>


CSS to the rescue! :) https://jsfiddle.net/54bpngL3/


This is a good read. Especially if you use Linux/BSD/OSX on a fairly regular basis.

While it is mostly long Usenet rants about the _failure_ of Unix, there are some grains of truth buried in the 80's-esque proto-shit-posting.


This very book was brought up yesterday in a post critical of the unix philosophy. If you scroll down a bit, or do a search for Analemma_ you might find some useful info.

https://news.ycombinator.com/item?id=13777077


I've just found this subreddit looking for what else if not UNIX:

https://www.reddit.com/r/EsotericOS/


"We have tried to avoid paragraph-length footnotes in this book, but X has defeated us by switching the meaning of client and server. In all other client/server relation- ships, the server is the remote machine that runs the application (i.e., the server pro- vides services, such a database service or computation service). For some perverse reason that’s better left to the imagination, X insists on calling the program running on the remote machine “the client.” This program displays its windows on the “window server.” We’re going to follow X terminology when discussing graphical client/servers. So when you see “client” think “the remote machine where the appli- cation is running,” and when you see “server” think “the local machine that dis- plays output and accepts user input.”"

Yes, Garfinkel, et al., don't know what a "client" and "server" are.


I'm the "et al." who wrote that chapter, and I don't understand what you're trying to say.

What do you think "client" and "server" mean, and what do you think is wrong with that paragraph-length footnote (besides its length)?

Could you please try to rewrite it more accurately or in fewer words?


Hi, Al!

The X use of "client" and "server" is correct. The "remote machine"/"local machine" thing isn't.

See my other reply: https://news.ycombinator.com/item?id=13783802

Edit: Forgot to refer to you as Al. (Dang it, this is why I can't have nice jokes.) Apologies!


I remember back when I first started using X on a network: it was weird that I'd run an X client on the server and an X server on my client.

It makes sense when one thinks about it, but it's still strange until one gets used to it.


And yet if you are a programmer you'd be inclined to go with X's terminology.


That's about as true and insightful as 'mcguire doesn't know how to read'.


The X server manages the physical display and provides a virtual display abstraction for clients. X clients connect to the server and make requests.

The X server is called a server because it is a server and X clients are clients.

It's excusable to be confused by the terminology, as in the quote, of course. Unless you claim to understand the whole "network" thing.


Actually, the sale of babies for meat would not have eased economic conditions in 18th century Ireland.


As humor, it's like Tesla's appearance on Top Gear: kind of funny until the 57th time you have to explain to someone why it's wrong.


So you're saying these people don't know what 'client', 'server' and 'networking' mean and they should have stopped trying to be funny at 'Annals of Unix Hating, Vol. 56'. Tough crowd.


Yes. Yes, that is exactly what I'm saying.


I suppose I know who to call on when a real zebra's head transportation emergency arises.


I love this book. Yes, it's now out of date, but it's a) still funny and b) is a great book for learning to think about the design of systems.

The second point is sadly neglected. If you read it and your principal reaction is "Unix isn't like that anymore" you've completely missed the point.


Hate all you want, but I regularly have moments where I solve something using a string of commands and think to myself: gosh darn, I love Unix!

It's not always really Unix - sometimes it's a Macintosh with GNU tools installed - but mostly it's CentOS or Debian servers where all this magic takes place.

Sure, I need to have a lot of stuff in that thick head of mine to do magic, but the joy of stringing together a series of commands to solve a problem my Windows-using co-workers would have to spend license money to solve is just amazing.

Hating Linux is definitely right if you're looking at it from the perspective of universal user friendliness. But for true power users, the programmers, the scientists and the sysadmins, I can't picture life without it.


The question is: what would the ideal non-POSIX API look like?


For all of the usability flaws pointed out here, to what would you ascribe Unix's proliferation and modern ubiquity?

- Giving it away to universities?

- The C programming language?

What am I missing?


The common explanation is "worse is better" (see The Rise of Worse is Better[0])

[0]: http://dreamsongs.com/RiseOfWorseIsBetter.html


tldr.

The lesson to be learned from this is that it is often undesirable to go for the right thing first. It is better to get half of the right thing available so that it spreads like a virus. Once people are hooked on it, take the time to improve it to 90% of the right thing.


Giving it away to universities.

All the alternatives had lots of zeros on their price tags.

C just came along, like JavaScript on the browser.

We have had better systems programming languages since the Burroughs B5000 in the 60's.



"Steep, learning curve included free of charge." I'm happy to admire the architecture at a distance, not actually use it, and simply apply its good lessons to future work. ;)


I know. :)


The source licensing in the early days (notably before USG) was almost certainly a large part of the key, but it certainly wasn't "given away". The flexibility allowed for the BSD, which dramatically increased the number of hardware platforms which could run UNIX, at least in settings where an underlying source licence was available.

Mt Xinu (commercially supported 4BSD for anyone with a usable source licence) was wildly popular at university labs where TCP/IP was immediately useful for local internetworking across disparate minicomputers, and the distribution included translation, bridging, tunnelling and so forth of other protocols (DECNET, AppleShare) over IP.

Additionally, their port to x86 let humble 386SX microcomputers do the work of much more expensive minicomputers, and was "legit" in a way that USG argued Bill Jolitz's 386BSD was not.

Notably UNIX distributions where source licencing was exceptionally expensive (or otherwise rare) did quite badly in settings where BSD -- supported or otherwise -- was usable.

Crucially, the University of California decided to support the release of 4.4 BSD-Lite, which had all the AT&T/USL proprietary code removed, both for research purposes and in the obvious knowledge that "someone" would make Lite usable by filling in the missing functionality.

"Someone" was largely BSDI, which survived USL's lawsuit; other independent Lite completions also benefitted from this victory, and FreeBSD/NetBSD/OpenBSD are descendents of one of those.

Additionally, Linux came along.

So I think you got it mostly right in the first line.

C was mostly along for the ride; a usable compiler, toolchain, and programming environment (ed/ex/vi, lint and so forth) were included in the source distributions, and the "p" in pcc was its claim to be readily portable to arbitrary architectures (which was almost true, in part because it was written in C itself). Had it been a language other than C, that probably would have become the systems programming language instead. (Indeed, UNIX was a bit odd in that the number of general languages considered standard was tiny; for example, compare the number of languages one could use on TOPS-10 (six standard) and TOPS-20, which were never intended to run on arbitrary hardware).


My copy came with a barf bag.


Maybe there really wasn't such a thing as a "good" operating system long ago, just one that you could tolerate more than another. I remember using OS/2 Warp as a kid and thinking it was pretty neat, although I couldn't really do much with it, since I wasn't interested in writing my own apps for OS/2.


can anyone give a first-hand historical perspective on this, regarding what was going on and how this was seen in 1994?


It was written in an age when there were still plenty of computing options to go around.

On the consumer and small business end, there were DOS/Windows boxes, Windows NT boxes, and Macs, of course. But OS/2 was still a thing, Win95 was around the corner, and Atari and Amiga machines, while in decline by 1994, had still been somewhat sensible options for e.g. music or gaming a few years earlier.

On the corporate end, and more saliently for this specific book, you still had plenty of mainframes and such around, and an equally rich variety of options. Some of these computers were called Lisp machines, and built around Lisp rather than Unix/C. They had started to lose traction after the onset of AI Winter:

https://en.wikipedia.org/wiki/Lisp_machine

https://en.wikipedia.org/wiki/AI_winter

The authors of the book basically are Lisp aficionados who mourn the good old days when they could get by using elegant Lisp machines instead of clunky, user hostile Unix boxes.

The book itself is an entertaining read insofar as I can remember. (My memory might be playing tricks on me though; I read it 10-15 years ago.)

The authors obviously cherry-pick to maximize the Unix bashing, but they make a number of very valid points - some of which still apply today. The tone is usually playful. But then, at times, contempt with a pinch of self-importance emerges, and you're left wondering why you're bothering to continue reading a 300+ page long rant... only to realize that the points made are so obnoxiously valid (or, at least, were when I read it) that you continue.

Dennis Ritchie's anti-foreword returns the favor, too:

> Your book is a pudding stuffed with apposite observations, many well-conceived. Like excrement, it contains enough undigested nuggets of nutrition to sustain life for some. But it is not a tasty pie: it reeks too much of contempt and of envy.


I saw the anti-foreword, but missed that quote. Thanks for pointing it out, and for the thoughtful blurb. Great context!


Unix was very popular in the mid-90s amongst some. There were half a dozen RISC Unix vendors, each investing $100m a year in their Unix platforms to drive billions in new business. This was seen as one of the few insider critiques worth reading. It was a bit over the top and ranty, but there was some good insight in there. It wasn't such a devastating critique that it really shifted people's opinions about Unix.


Make sure to read the preface. It is the best preface I've ever read. Pay special attention to who wrote it.


I cannot find a distinct credit for the Preface. Where is it?


I assume the OP was referring to the Anti-Foreword by Dennis Ritchie.


OMG the anti-foreword by Dennis Ritchie. Pure gold.


"Your judgments are not keen, they are intoxicated by metaphor. In the Preface you suffer first from heat, lice, and malnourishment, then become prisoners in a Gulag."

Harsh words from Dennis, RIP.


I have to admit, I laughed at this: "C++ Is to C as Lung Cancer Is to Lung" :D


This book is one of my top three books on how to become a good software dev :)


Could you name the other two?


They're for good developers to know and everyone else to find out /jk

Seriously though, I think almost everyone has their own pick of books that influenced them the most and helped them become the best developer they could be. While most of the time those are contained within a single set of books, it's kind of fruitless to buy books you might not find interesting on the promise that "they will make you a good developer". I have a fair number of books I find difficult to read because of the style, etc., that I bought with that promise, and none of them improved my programming ability significantly more than a bunch of coding plus an internship has.

[What I would recommend is finding authors that you like reading and/or tend to get a lot from]


"The Art of Unix Programming" and "Structure and Interpretation of Computer Programs"


If I ever need to feel good about myself as a software developer, I only ever need to read this book's chapter about the X window system.


"If the designers of X Windows built cars, there would be no fewer than five steering wheels hidden about the cockpit, none of which followed the same principles—but you’d be able to shift gears with your car stereo. Useful feature, that."


An office I worked in in the early 90's had a shelf with a ten(?) volume set of X Windows books - and each individual book was a thick bugger. I'd only had fleeting experience of it, but remember thinking "how complex can this thing be?!"


I remember them. Those were both the guides and the references for the whole X11 ecosystem. I guess many modern pieces of software would be that large if properly typeset and printed on paper. Luckily we now fit all that on the web, in thousands of blog posts, Stack Overflow and more.


I was on Sun's X11/NeWS beta program, and each time they came out with a new release, they'd send me an entire new set of X11 and Adobe PostScript manuals! I appreciated it, but had those manuals coming out of my ears! It was great having all those Red Books to pass out at parties, but nobody ever wanted the XView manuals. ;)


I remember them --- they were really good.

The architecture was layered, too, so you typically only ever used one book at a time. Doing raw graphics operations? All you need is the Xlib book. Xt/Xaw? There's a book for that. Motif? There's a book for that, too, but it wasn't in our set.

The raw X protocol was, IIRC, documented in volume 0.


I think a lot of those books covered stuff that most people writing applications never really had to worry about - I'm pretty sure I got by with one on Xlib and maybe one on Xt?


Much of the mass was printed man pages for every single API call.


Yeah, but try and figure out how you'd do better... efficiently. Over the network...


As it turns out, the web eventually came around to using NeWS's extensible client/server architecture, once they rediscovered it 20 years later and called it "Ajax". (2005 [1] - 1985 [2] = 20)

[1] https://en.wikipedia.org/wiki/Ajax_(programming)#History

[2] http://www.chilton-computing.org.uk/inf/literature/books/wm/...


Ha! Yello, Don. Howzit?


Hi Jim! As it turns out, AJAX is immensely popular in Amsterdam, because it's the name of the local soccer team [1]. But they pronounce AJAX like "ah yax", and JavaScript like "ya va schkript"! ;)

[1] http://english.ajax.nl/streams/ajax-now.htm


The contemporary windowing system NeWS is worth a look: https://en.wikipedia.org/wiki/NeWS

Its architecture was more similar to modern web apps, with the UI running code and processing events on the client machine.


Less important and less popular, there was also MGR, which treated windowing more like a terminal session, and so was more Unix-philosophy than most. It was neat, and because of its smaller profile it got ported to a lot of systems (Atari ST, etc.) that struggled under the full X11 system.

http://www.hack.org/mc/mgr/


Uhler put MGR on the Tadpole SPARCbook. It was really cool.


MGR was awesome! Not extensible, but simple and clean, and efficient over a slow connection.

I once saw a great demo of it by its author, Stephen A. Uhler. For some reason, from then on, I always associate MGR with great big bushy moustaches.

https://media.licdn.com/mpr/mpr/shrink_150_150/p/2/000/013/3...


I remember it. It was far prettier than Motif at that time. And FrameMaker was so pleasant to use compared to Word.

Both NeWS and NeXT were using Display PostScript (https://en.wikipedia.org/wiki/Display_PostScript). Do you know if NeWS was an inspiration for NeXT?


Other than using PostScript for 2D rendering I don't think NeWS and Display Postscript had much in common.

In NeWS you could write your client applications completely in PostScript (or something compiled down to PostScript) and it had its own object-oriented, multi-threaded environment with event handling etc.

NeWS was more of a predecessor for Java than NeXT.


That's correct!

It's no coincidence that NeWS and Java were both designed and written by James Gosling.

It could be said that Java rose out of the ashes of NeWS.

Although I wouldn't go as far as to say that NeWS rose out of the ashes of Gosling/UniPress/EvilSoftwareHoarder Emacs and Mocklisp. But Gnu Emacs did! ;)

However, at UniPress we did develop a nice NeWS display driver for Emacs that worked efficiently over a slow network connection, providing local interactivity like text selection feedback, control panels, pie menus, multiple tabbed windows, etc.

Here's the PostScript source code of the UniPress Emacs 2.20 NeWS display driver [1], and a screen snapshot [2] of Emacs as an authoring tool for the NeWS version of the HyperTIES hypermedia system (plus a diagram of HyperTIES extensible client/server architecture).

[1] http://www.donhopkins.com/home/archive/emacs/emacs.ps.txt

[2] http://www.donhopkins.com/drupal/node/101


Here's the source code for the classic PizzaTool demo (BTW, I've just looked on Google Maps, and I think that Tony & Alba's pizza place in Mountain View no longer exists):

http://donhopkins.com/home/archive/NeWS/pizzatool.txt

...and here's a link to a HN thread a few months ago where Don Hopkins talks about Postscript and windowing systems; lots of good links there:

https://news.ycombinator.com/item?id=13196983


Tony & Albas has moved to 3137 Stevens Creek Blvd. in West San Jose.

https://www.yelp.com/biz/tony-and-albas-pizza-and-pasta-san-...

Also La Costeña, home of the World's Largest Burrito, is now at 235 E. Middlefield Rd. in Mountain View.

http://costena.com/

And Cho's Mandarin Dim Sum has moved to 209 1st St. in Los Altos. (The rent for his hole-in-the-wall on California Ave. in Palo Alto was too high!)

https://www.yelp.com/biz/chos-mandarin-dim-sum-los-altos

Please update your programs!


I actually worked on a project that used NeWS (actually HyperNeWS) as a front end for a Lisp based AI system written in Common Lisp.

HyperNeWS could do some neat things - e.g. you could draw a shape (any shape!) in the graphical editor and paste it as the shape of a window, all without writing any code.

Edit: changed "uses" to "used" - was quite a long time ago!


I really miss HyperNeWS [1], which I worked on with Arthur van Hoff at the Turing Institute, and I used it to port SimCity to NeWS on Unix [2].

Arthur van Hoff (who developed HyperNeWS and other stuff like Java [3]) and I are working together again, this time at his 360° VR video camera company, JauntVR [4]! I'm developing a secret project called HyperJaunt, that I can't say anything about yet, but you can guess by the name that I'm pretty excited about it! ;)

We are looking for a lead software engineer with leadership experience in Amsterdam! [5]

[1] http://www.art.net/~hopkins/Don/hyperlook/

[2] http://www.art.net/~hopkins/Don/hyperlook/HyperLook-SimCity....

[3] https://www.linkedin.com/in/aavanhoff

[4] https://www.jauntvr.com/technology/

[5] https://www.jauntvr.com/careers/apply/?gh_jid=251962

LEAD SOFTWARE ENGINEER, AMSTERDAM, NETHERLANDS.

The Role: This is a highly challenging and highly technical role offering the chance to define the early days of a new industry. Candidate would have to know or be willing to learn mobile 3D graphics programming (OpenGL/GLSL/Unity) with focus on interactivity, network optimization and performance tuning. Candidate should have strong leadership skills. While we are looking for prior experience as a good indicator of future success, our main criteria includes passion for VR, intelligence, creativity and strong work ethics.


PostScript isn't a complex language. It's probably quite straightforward to translate it into JavaScript and run it locally in a web browser, rendering into a canvas. With a websocket connection to the backend, it'd be possible to recreate NeWS and use it for traditional client/server Unix apps, but with an ordinary web browser instead of an X server.

I don't know whether this would be useful, mind, other than being a cool hack (which has value all of its own)... but it would be a very cool hack.
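
For a sense of how small the core can be, here's a toy sketch of the stack-machine part (Python for brevity, though the same structure ports almost directly to JavaScript). It's purely illustrative, not any real NeWS or PostScript code: no dictionary stack, no graphics operators, just numbers, procedures and a couple of operators.

    def tokenize(src):
        return src.replace("{", " { ").replace("}", " } ").split()

    def parse(tokens):
        # turn the flat token stream into nested lists; a { ... } procedure
        # simply becomes a sub-list
        out = []
        while tokens:
            t = tokens.pop(0)
            if t == "{":
                out.append(parse(tokens))
            elif t == "}":
                return out
            else:
                try:
                    out.append(float(t))
                except ValueError:
                    out.append(t)          # a name or an operator
        return out

    def execute(prog, stack, names):
        for item in prog:
            if isinstance(item, (float, list)):
                stack.append(item)         # numbers and procedures are pushed
            elif item.startswith("/"):
                stack.append(item[1:])     # literal name, pushed as a string
            elif item == "add":
                b, a = stack.pop(), stack.pop(); stack.append(a + b)
            elif item == "mul":
                b, a = stack.pop(), stack.pop(); stack.append(a * b)
            elif item == "def":
                val, name = stack.pop(), stack.pop(); names[name] = val
            elif item in names:
                val = names[item]
                if isinstance(val, list):
                    execute(val, stack, names)   # run a defined procedure
                else:
                    stack.append(val)            # push a defined value
            else:
                raise ValueError("unknown word: " + item)
        return stack

    # /double { 2 mul } def  3 double  =>  [6.0]
    print(execute(parse(tokenize("/double { 2 mul } def 3 double")), [], {}))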

Is any of the NeWS source code available?


Omar Rizwan has been developing a project called "dewdrop" [1] [2] to re-implement NeWS in the web browser, using canvas to render the graphics.

It's based on the WPS PostScript interpreter [3], which he's rewritten and extended in TypeScript.

He says: "It's _very_ incomplete (not many events, multi-canvas support unclear, no GUI toolkit, no network stuff), but a surprising amount of the core stuff is in, I think (cooperative multitasking including timer events, OOP, graphics). Thinking about where to go from here -- could start the GUI toolkit, or server/client stuff..."

A few years ago I started writing my own "SunDew" [4] [5] NeWS interpreter in JavaScript. It's not complete and doesn't have any graphics, but I've written some comments and stubbed out some classes that specify all the various NeWS data types and operators that would have to be supported.

But now that I have seen how much cleaner Omar's code is written in TypeScript, and how it elegantly takes advantage of TypeScript's asynchronous programming to implement NeWS light weight processes, I think it would be better to build on top of what he's done instead. But my code and the comments in it could at least help serve as a spec for the various types of NeWS objects and operators that are required.

There's still a lot of work to do, but it's certainly possible and would be really cool!

There are some nuances that need to be worked out where NeWS doesn't quite match up with JavaScript or standard PostScript.

Strings, arrays and dictionaries are references to shared object bodies, and arrays and strings of different lengths can share memory with sub-intervals.

References to shared object bodies on the stack and in other objects contain their own permission bits independent of the object body, so you can have read-only access from one reference, and writable or executable access from another reference.

And of course NeWS has magic dictionaries, which is a way of calling native code when you access dictionaries, which NeWS uses to implement fonts, canvases, processes, etc. [6]

There are also a lot of undocumented nuances in NeWS that would have to be reverse engineered or figured out by looking at the original source code.

[1] https://github.com/osnr/dewdrop

[2] http://dev.rsnous.com/dewdrop/executive/

[3] http://logand.com/sw/wps/

[4] http://donhopkins.com/home/sundew/sundew.js

[5] http://donhopkins.com/home/sundew/test.html (open the JavaScript console to see the logs)

[6] http://www.donhopkins.com/drupal/node/97


Ha. Of course it wasn't an original idea. Thanks --- will check it out!


I worked in Edinburgh and visited the Turing Institute a few times - I did meet Arthur a few times though I think we mainly spoke to some other chap (Danny?).

When Arthur moved to work for Sun I remember begging a copy of Java from him early in '95.


No, NeWS was around before NeXT, and it was developed independently by James Gosling.

NeXT's Display PostScript architecture was not concerned with networking, extensibility, providing local interactivity, or reducing client/server communication by downloading code, which is what the modern term "Ajax" refers to.

Here is one of Gosling's earlier papers about NeWS (originally called "SunDew"), published in 1985 at an Alvey Workshop, and the next year in an excellent Springer Verlag book called "Methodology of Window Management" that is now available online for free. [1]

Chapter 5: SunDew - A Distributed and Extensible Window System, by James Gosling [2]

Another interesting chapter is Warren Teitelman's "Ten Years of Window Systems - A Retrospective View". [3]

Also, the Architecture Working Group Discussion [4] and Final Report [5], and the API Task Group [6] have a treasure trove of interesting and prescient discussion between some amazing people.

[1] http://www.chilton-computing.org.uk/inf/literature/books/wm/...

Methodology of Window Management

F R A Hopgood, D A Duce, E V C Fielding, K Robinson, A S Williams

29 April 1985

This is the Proceedings of the Alvey Workshop at Cosener's House, Abingdon that took place from 29 April 1985 until 1 May 1985. It was input into the planning for the MMI part of the Alvey Programme.

The Proceedings were later published by Springer-Verlag in 1986.

[2] http://www.chilton-computing.org.uk/inf/literature/books/wm/...

5. SunDew - A Distributed and Extensible Window System

James Gosling

SunDew is a distributed, extensible window system that is currently being developed at SUN. It has arisen out of an effort to step back and examine various window system issues without the usual product development constraints. It should really be viewed as speculative research into the right way to build a window system. We started out by looking at a number of window systems and clients of window systems, and came up with a set of goals. From those goals, and a little bit of inspiration, we came up with a design.

GOALS

A clean programmer interface: simple things should be simple to do, and hard things, such as changing the shape of the cursor, should not require taking pliers to the internals of the beast. There should be a smooth slope from what is needed to do easy things, up to what is needed to do hard things. This implies a conceptual organization of coordinated, independent components that can be layered. This also enables being able to improve or replace various parts of the system with minimal impact on the other components or clients.

Similarly, the program interface probably should be procedural, rather than simply exposing a data structure that the client then interrogates or modifies. This is important for portability, as well as hiding implementation details, thereby making it easier for subsequent changes or enhancements not to render existing code incompatible. [...]

DESIGN SKETCH

The work on a language called PostScript [1] by John Warnock and Charles Geschke at Adobe Systems provided a key inspiration for a path to a solution that meets these goals. PostScript is a Forth-like language, but has data types such as integers, reals, canvases, dictionaries and arrays.

Inter process communication is usually accomplished by sending messages from one process to another via some communication medium. They usually contain a stream of commands and parameters. One can view these streams of commands as a program in a very simple language. What happens if this simple language is extended to being Turing-equivalent? Now, programs do not communicate by sending messages back and forth, they communicate by sending programs which are elaborated by the receiver. This has interesting implications on data compression, performance and flexibility.

What Warnock and Geschke were trying to do was communicate with a printer. They transmit programs in the PostScript language to the printer which are elaborated by a processor in the printer, and this elaboration causes an image to appear on the page. The ability to define a function allows the extension and alteration of the capabilities of the printer.

This idea has very powerful implications within the context of window systems: it provides a graceful way to make the system much more flexible, and it provides some interesting solutions to performance and synchronization problems. SunDew contains a complete implementation of PostScript. The messages that client programs send to SunDew are really PostScript programs. [...]

[3] http://www.chilton-computing.org.uk/inf/literature/books/wm/...

4. Ten Years of Window Systems - A Retrospective View

Warren Teitelman

4.1 INTRODUCTION

Both James Gosling and I currently work for SUN and the reason for my wanting to talk before he does is that I am talking about the past and James is talking about the future. I have been connected with eight window systems as a user, or as an implementer, or by being in the same building! I have been asked to give a historical view and my talk looks at window systems over ten years and features: the Smalltalk, DLisp (Interlisp), Interlisp-D, Tajo (Mesa Development Environment), Docs (Cedar), Viewers (Cedar), SunWindows and SunDew systems.

The talk focuses on key ideas, where they came from, how they are connected and how they evolved. Firstly, I make the disclaimer that these are my personal recollections and there are bound to be some mistakes although I did spend some time talking to people on the telephone about when things did happen. [...]

[4] http://www.chilton-computing.org.uk/inf/literature/books/wm/...

[5] http://www.chilton-computing.org.uk/inf/literature/books/wm/...

19. Architecture Working Group Discussions

19.1 INTRODUCTION

The membership of the Architecture Working Group was as follows:

George Coulouris (Chairman). James Gosling. Alistair Kilgour. David Small. Dominic Sweetman. Tony Williams. Neil Wiseman.

[...] The possibility of allowing the client process to download a procedure to be executed in response to a specific class of input events was discussed, and felt to be desirable in principle. However, more work was needed to establish the practicality in general of programmable window managers. The success of Jim Gosling's SunDew project would be an indicator, but it was felt that it would be fruitful to initiate a UK investigation into this issue. John Butler pointed out in discussion that in the Microsoft MS- Windows system an input event received by a client process could be sent back to the window manager for interpretation by one of a set of translation routines. [...]

[6] http://www.chilton-computing.org.uk/inf/literature/books/wm/...

21 Application Program Interface Task group

[...] There was a strong feeling that, at this stage in their development, window managers need to be very flexible. The downloading-of-procedures idea in James Gosling's work was seen as a nice way to achieve this. In this context protection issues were seen to be important. There need to be some limits on loading arbitrary code, especially since the window manager has in some sense the status of an operating system in that it must be reliable and not crash. One idea for achieving protection was through the use of applicative languages which are by their nature side-effect free. [...]

21.4 DISCUSSION

Teitelman: Referring to point (3) in your list, can you characterize the conditions under which a window manager would refuse requests from a client? It feels so soft that the user might feel uneasy. Is the window manager surly? Is it the intention that requests are honoured most of the time, and that failure is rare?

Gosling: Yes, but failure should be handled gracefully.

Bono: I think that there are two situations which arise from the same mechanism. The first is occasional failure such as a disk crash. The program environment should be robust enough to deal with it. The other situation is where device independence is written into the system. What happens if a colour device is used to run the program today, where a black and white device was used yesterday? This may show up in the same mechanism, so you cannot say that it is rare.

Gosling: When an application makes a request, it should nearly always be satisfied. The application program can inspect the result to see if it is satisfied exactly. If it asks for pink and it doesn't get it, it should be able to find out what it did get. Only then should the application deal with the complex recovery strategy that it may need. We need some sort of strategy specification. What sort of strategy should we use to select a font or colour if there is no exact match? What feature is more important in matching a 10 point Roman font, its size or its typeface? At CMU, if you point at a thing and want 14 point Roman you may get 14 point Cyrillic, which is not very useful. On point (7), are you implying a dynamic strategy, or one determined at system configuration?

Gosling: Harold (Thimbleby) is all for downline loading this. In reality this is not usually very easy. GKS adopts a compromise - an integer is used to select a predefined procedure. As you may only have 32 bits, this does not give you many Turing machines. Something of that flavour would not be a bad idea.

Cook: Justify synchrony in point (2).

Gosling: This is mostly a matter of complexity of program. Not many languages handle asynchrony very well. If we have Cedar or Mesa then this is possible.

Teitelman: How we do it in Cedar is that the application is given the opportunity to take action. In Mesa we require that the application catches the signal and takes any action. In the absence of the application program intervening, something sensible should be done, but it may impose a little bit more of a burden on the implementer.

Gosling: In Unix software there is no synchronization around data objects. In Cedar/Mesa there are monitors which continue while the mainline code is running; there are no notions of interrupt routines.

Teitelman: This is a single address space system. We are unlikely to see this in Unix systems.

Newman: How realistic is it to design an interface using your criteria?

Gosling: Bits and pieces already appear all over the place. The CMU system deals with most of this OK, but is poor on symmetry. The SUN system is good for symmetry, but not for synchrony. It is terrible on hints, and has problems with redraw requests. There is no intrinsic reason why we can't deal with all of these though. The problem is dealing with them all at the same time.

Williams: A point that I read in the SunWindows manual was that once a client has done a 'create window' then the process will probably get a signal to redraw its windows for the first time.

Gosling: Right, but it's a case of maybe rather than will. Some programs may redraw and redraw again if multiple events aren't handled very well, and give screen flicker.

Hopgood: Do you have a view on the level of interface to the window manager?

Gosling: Clients don't want to talk to the window manager at all, but should talk to something fairly abstract. Do you want to talk about this as the window manager as well? The window manager shouldn't implement scroll bars, or buttons or dialogues, we need another name for the thing which handles the higher level operations.


Ohhhhhhh good answer.

One day it'll win :) Actually I think we could do NEWS in JS. Somehow I don't feel that would make you happy tho :P


Does AJAX make you happy?

https://en.wikipedia.org/wiki/NeWS

NeWS was architecturally similar to what is now called AJAX, except that NeWS coherently:

used PostScript code instead of JavaScript for programming.

used PostScript graphics instead of DHTML and CSS for rendering.

used PostScript data instead of XML and JSON for data representation.


You can't do opengl efficiently over the network. At least xorg can't. Most applications use opengl these days.

For me, I've been using Linux fulltime for the last 15 years and I have never, not even once, had the need to connect to a remote X11 server. ssh has always been enough for me.


I think it is likely that your lack of need for running applications remotely largely reflects another change: the rise of the web. We run fewer native applications than we used to, for better and for worse. I run mostly a browser and a terminal. The browser has taken much of the role X11 had, and things like VNC have taken most of the rest.


Sure you can, it's just called WebGL! ;)

They just added another layer, flipped the words "server" and "client" around, and added more hardware.

Now you run the web browser client on top of the local window system server, through shared memory, without using the network. And both the browser and GPU are locally programmable!

And then the local web browser client accesses remote web servers over the network, instead of ever using X11's networking ability.

One way of looking at it in the X11 sense is that a remote app running in the web server acts as a client of the local window server's display and GPU hardware, by downloading JavaScript code to run in the web browser (acting as a programmable middleman near the display), and also shader code to run in the window server's GPU.

Trying to pigeonhole practices like distributed network and GPU programming into simplistic dichotomies like "client/server," or partition user interface programming into holy trinities like "model/view/controller," just oversimplifies reality and unnecessarily limits designs.


> the need to connect to a remote X11 server

You mean client, not server.

(ducks for cover)


> the need to connect to a remote X11 server

You mean X-Windows, not X11.

(ducks for cover)


XFree86 had problems with OpenGL over the network (to the extent of "it does not work at all" for any reasonable definition of "work"). Since the X.org fork this has mostly been fixed, and the related infrastructure (DRI, AIGLX, GLX_EXT_texture_from_pixmap, ...) is used extensively by other parts of the X server (for example to implement XVideo or Render in terms of OpenGL).

On the other hand, typical applications (i.e. games) usually expect that the channel to the GPU has quite large bandwidth and no meaningful latency, which simply isn't true for any kind of network connection.


> Most applications use opengl these days.

Now 'most' is a relative term, but I do not think that's true at all. There are very few applications that use OpenGL. Obviously games are an exception, but if you count all GUIs, I would say most target X11 (via GNOME or KDE).


GNOME and KDE can also use opengl for some tasks if it is available. But right, most applications don't directly use opengl. I believe that if it weren't such a pita to integrate opengl into desktop applications (thanks to X11's bad design), many more applications would be using it.

To give you an example, think about desktop effects. On Windows, several applications such as Explorer make parts of their windows semi-transparent. It's a nice simple effect that is impossible (without tons of hackery) to replicate using X11.


https://www.x.org/releases/current/doc/compositeproto/compos...

I don't think that this qualifies as "tons of hackery", given that other contemporary UI systems with truly transparent windows implement it in the same way (IIRC in pre-Vista Windows truly transparent windows are supported at the OS level, but the implementation involves hacks with backing buffers and synthesized expose events).


Hm, right. That is indeed possible because the client asks xorg for a surface with an alpha channel. What I don't think clients can do is the transparency effects themselves, because that involves using opengl and doesn't work (very well) in a networked context.


Clients doing transparency themselves is the hackish solution to this problem, because it involves the client somehow knowing what it paints over.

And all of this has nothing to do with OpenGL, except that GLX_EXT_texture_from_pixmap is a particularly efficient way to implement it on OpenGL-capable hardware. For simple transparency, the compositor can do the blending completely in software (which involves getting the drawables over to the client and back), do it via XRender (which may get translated into OpenGL by an AIGLX-supporting server), or call OpenGL directly. For a compositor that only cares about transparency, XRender is probably the better API; for 3D-ish effects (as in Xgl or Sun's Looking Glass), OpenGL makes more sense.
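
(Roughly what the XRender path looks like, as an untested fragment rather than a complete compositor: it assumes `dpy`, a client window `win` that was already redirected via the Composite extension, and a destination Picture `backbuf`; the function name is made up.)

    /* Fragment of a hypothetical compositor: blend one redirected client window
     * onto the composite back buffer with a Porter-Duff "over" operation. */
    #include <X11/Xlib.h>
    #include <X11/extensions/Xcomposite.h>
    #include <X11/extensions/Xrender.h>
    void composite_window(Display *dpy, Window win, Picture backbuf,
                          int x, int y, unsigned int w, unsigned int h)
    {
        /* The Composite extension exposes the window's redirected, offscreen contents. */
        Pixmap pix = XCompositeNameWindowPixmap(dpy, win);
        /* Wrap the pixmap in a Picture so XRender can use it as a blend source. */
        XWindowAttributes wa;
        XGetWindowAttributes(dpy, win, &wa);
        Picture src = XRenderCreatePicture(dpy, pix,
                                           XRenderFindVisualFormat(dpy, wa.visual), 0, NULL);
        /* src OVER backbuf: any per-pixel alpha in the window becomes transparency. */
        XRenderComposite(dpy, PictOpOver, src, None, backbuf,
                         0, 0, 0, 0, x, y, w, h);
        XRenderFreePicture(dpy, src);
        XFreePixmap(dpy, pix);
    }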


At Sun we experimented with implementing an X11 window manager in NeWS. We didn't have transparency at the time (1992), but we did support shaped windows!

The NeWS window manager supported cool stuff (for both X11 and NeWS windows!) like rooms, virtual scrolling desktops, tabbed windows, pie menus, was easily extensible and deeply customisable in PostScript, and ran locally in the window server so it could respond instantly to input events, lock the input queue and provide feedback and manipulate windows immediately without causing any context switches or dealing with asynchronous locking, unlocking and event handling. You'd never lose a keystroke or click when switching between applications, for example.

I touched on some of those ideas in this ancient window manager flamey-poo:

http://www.art.net/~hopkins/Don/unix-haters/x-windows/i39l.h...

Also on that topic (I can't believe I still love flaming about this stuff so many years later! Sorry if I sound like a broken record.):

https://news.ycombinator.com/item?id=5861229

https://news.ycombinator.com/item?id=5844345

https://news.ycombinator.com/item?id=8039156

https://news.ycombinator.com/item?id=13198492

https://news.ycombinator.com/item?id=11520680

https://news.ycombinator.com/item?id=11319498

https://news.ycombinator.com/item?id=11319783

https://news.ycombinator.com/item?id=9977226

https://news.ycombinator.com/item?id=13196983

https://news.ycombinator.com/item?id=11481604

And here's how I think you should design a programmable "window manager" these days -- but it would be much more than just a window manager! It would be great for integrating legacy desktop and mobile applications into VR, for example!

aQuery -- Like jQuery for Accessibility

http://donhopkins.com/mediawiki/index.php/AQuery

Don asks Peter Korn: Hey I would love to bounce an idea off of you! I didn't realize how much work you've done in accessibility.

There is a window manager for the Mac called Slate, that is extensible in JavaScript -- it makes a hidden WebView and uses its JS interpreter by extending it with some interfaces to the app to do window management, using the Mac Accessibility API.

So I wanted to make pie menus for it, and thought of a good approach: make the hidden WebView not so hidden, but in the topmost layer of windows, covering all the screens, with a transparent background, so the desktop shows through wherever you don't draw HTML.

Then just make pie menus with JavaScript, which I've done. Works like a charm!

THEN the next step I would like to do is this:

aQuery -- like jQuery, but for selecting, querying and manipulating Mac app user interfaces via the Accessibility framework and protocols.

So you can write jQuery-like selectors that search for and select Accessibility objects, and then it provides a convenient high level API for doing all kinds of stuff with them. So you can write higher level plugin widgets with aQuery that use HTML with jQuery, or even other types of user interfaces like voice recognition/synthesis, video tracking, augmented reality, web services, etc!

For example, I want to click on a window and it will dynamically configure jQuery Pie Menus with the commands in the menu of a live Mac app. Or make a hypercard-like user interface builder that lets people drag buttons or commands out of Mac apps into their own stacks, and make special purpose simplified guis for controlling and integrating Mac apps.

Does that sound crazy? I think it just might work! Implement the aQuery "selector engine" and heavy lifting in Objective C so that it runs really fast, and presents a nice high level useful interface to JavaScript.

Here is an issue I opened about it on the Slate github page, describing what I've done, but I haven't written up the "aQuery" idea yet. That's the next step!

http://www.donhopkins.com/home/archive/piemenu/uwm1/hacks.f

That's the FORTH source code from 1987 of a programmable multi threaded X10 window manager that lets you throw windows around so they bounce off of the edge of the screen!


I think he means "most by total runtime".


Chrome (and I guess most web browsers) do.

Stupid, stupid Nvidia drivers...


And that's basically the only application that counts these days.


Maybe you used Synergy to share a mouse and a keyboard over multiple machines. It's a use case that partly overlaps with running remote desktop applications on your display. Or VNC.


Plan 9 did it in an interesting way. You draw to the screen by writing to files under /dev. On Plan 9, all file operations take place over 9P, a networked file protocol. So if you are connecting to a remote machine and want to run a graphical program, you mount your local /dev/draw files on the remote end (this is actually taken care of automatically by cpu(1)) and just run the program. Its graphical functions access your local machine's /dev/draw and it all Just Works. Hard to explain but very neat when you use it.


Plan 9's "everything is a file" philosophy never worked for me, because it's far too low level, and I don't believe Unix's file I/O API itself is very pleasant: open, read, write, ioctl and select (gag).

I'd much rather have a real API with rich app-specific functional or event interfaces to call, and be able to pass actual typed parameters instead of just a stream of bytes, including structured data like JSON, s-expressions or PostScript data, or even (gasp) Turing-complete programs, like PostScript code!

NeFS, as defined in 1990 in the infamous NFS3 proposal, aka the "Network Extensible File System Protocol Specification", did just that, and it was actually a great idea that was simply ahead of its time. So it went over like a lead balloon, and was never actually adopted.

NeFS should not have been framed as a successor to NFS, because it required a revolution in how programs interacted with the file system. And of course there are the security and stability implications of running downloaded code in the kernel, which hadn't been properly addressed. ;)

Take for example (shown below) the act of copying a file to a backup file in the same directory.

With traditional NFS, the server would have to send each block of the file to the client, which would then send it back to the server, which would then write it to disk. That required a lot of network traffic, as well as many context switches between user and kernel space on both the client and the server (at a time in history where they were extremely expensive).

Instead, the client could just send a simple PostScript program to the server (or call one that was loaded from a library or sent previously -- see the example below), which copied the file in the kernel of the server, without sending it over the network, or requiring any context switches on either the client or server.

That's several orders of magnitude more efficient in terms of both CPU and network usage, and just the simplest and easiest to explain example possible of what you could do.

Just imagine how much more efficient, powerful and tightly integrated together other utilities like "find" and "grep" could be, tightly woven together procedurally in the kernel instead of communicating with a stream of bytes over a pipe!

http://www.donhopkins.com/home/nfs3_0.pdf

Introduction

The Network Extensible File System protocol(NeFS) provides transparent remote access to shared file systems over networks. The NeFS protocol is designed to be machine, operating system, network architecture, and transport protocol independent. This document is the draft specification for the protocol. It will remain in draft form during a period of public review. Italicized comments in the document are intended to present the rationale behind elements of the design and to raise questions where there are doubts. Comments and suggestions on this draft specification are most welcome.

1.1 The Network File System

The Network File System (NFS™*) has become a de facto standard distributed file system. Since it was first made generally available in 1985 it has been licensed by more than 120 companies. If the NFS protocol has been so successful why does there need to be NeFS? Because the NFS protocol has deficiencies and limitations that become more apparent and troublesome as it grows older.

1. Size limitations. The NFS version 2 protocol limits filehandles to 32 bytes, file sizes to the magnitude of a signed 32 bit integer, timestamp accuracy to 1 second. These and other limits need to be extended to cope with current and future demands.

2. Non-idempotent procedures. A significant number of the NFS procedures are not idempotent. In certain circumstances these procedures can fail unexpectedly if retried by the client. It is not always clear how the client should recover from such a failure.

3. Unix®† bias. The NFS protocol was designed and first implemented in a Unix environment. This bias is reflected in the protocol: there is no support for record-oriented files, file versions or non-Unix file attributes. This bias must be removed if NFS is to be truly machine and operating system independent.

4. No access procedure. Numerous security problems and program anomalies are attributable to the fact that clients have no facility to ask a server whether they have permission to carry out certain operations.

5. No facility to support atomic filesystem operations. For instance the POSIX O_EXCL flag makes a requirement for exclusive file creation. This cannot be guaranteed to work via the NFS protocol without the support of an auxiliary locking service. Similarly there is no way for a client to guarantee that data written to a file is appended to the current end of the file.

6. Performance. The NFS version 2 protocol provides a fixed set of operations between client and server. While a degree of client caching can significantly reduce the amount of client-server interaction, a level of interaction is required just to maintain cache consistency and there yet remain many examples of high client-server interaction that cannot be reduced by caching. The problem becomes more acute when a client’s set of filesystem operations does not map cleanly into the set of NFS procedures.

1.2 The Network Extensible File System

NeFS addresses the problems just described. Although a draft specification for a revised version of the NFS protocol has addressed many of the deficiencies of NFS version 2, it has not made non-Unix implementations easier, nor does it provide opportunities for performance improvements. Indeed, the extra complexity introduced by modifications to the NFS protocol makes all implementations more difficult. A revised NFS protocol does not appear to be an attractive alternative to the existing protocol.

Although it has features in common with NFS, NeFS is a radical departure from NFS. The NFS protocol is built according to a Remote Procedure Call model (RPC) where filesystem operations are mapped across the network as remote procedure calls. The NeFS protocol abandons this model in favor of an interpretive model in which the filesystem operations become operators in an interpreted language. Clients send their requests to the server as programs to be interpreted. Execution of the request by the server’s interpreter results in the filesystem operations being invoked and results returned to the client. Using the interpretive model, filesystem operations can be defined more simply. Clients can build arbitrarily complex requests from these simple operations.

[...]

Example: Copy a File

Make a copy of file (foo) called (bar). Both files exist in the same directory dfh. The request starts by looking up the filehandle for the file to be copied and creates a filehandle for the copy. The loop operator executes a procedure that copies the file using 1K reads and writes. It maintains a running count of the number of bytes yet to be copied.

    % Copy a file
    %
    dfh (foo) lookup /foofh exch def % get filehandle for (foo)
    dfh (bar) create /barfh exch def % create filehandle for (bar)
    /bytes foofh getattr /fsize get def % get size of (foo) so we know how much to copy
    /offset 0 def % initialize offset for (bar)
    {
        /data foofh offset 1024 read def % read up to 1K from (foo)
        barfh offset data write % write up to 1K to (bar)
        /bytes bytes 1024 sub def % decrement byte count by 1024
        bytes 0 le { exit } if % if it’s < 0 then we’re done
        /offset offset 1024 add def % increment offset by 1024
    }
    loop
    barfh getattr 1 encodereply sendreply % return the attributes of the new file to client.


How was it efficient?


I don't have numbers but in general I'd call it no less usable than X forwarding over ssh. For a long time I used to connect to a Japanese Plan 9 server using drawterm (a Windows/Linux application which essentially emulates the /dev/draw infrastructure) and it was pretty decent considering the latency--I used it to read email, follow IRC, and write code. Over a LAN, it's so fast as to be indistinguishable from something running on your local machine.


Yes; the chapter actually made me admire the design.


The whole book was like that for me. I think that's what it's all about - our love/hate relationship with Unix(-like systems).


Everyone who dislikes Win32 should program directly with Xlib and Athena, and rejoice in the experience.


Win32 with function signatures that look like this:

  HWND WINAPI CreateWindowEx(
    _In_     DWORD     dwExStyle,
    _In_opt_ LPCTSTR   lpClassName,
    _In_opt_ LPCTSTR   lpWindowName,
    _In_     DWORD     dwStyle,
    _In_     int       x,
    _In_     int       y,
    _In_     int       nWidth,
    _In_     int       nHeight,
    _In_opt_ HWND      hWndParent,
    _In_opt_ HMENU     hMenu,
    _In_opt_ HINSTANCE hInstance,
    _In_opt_ LPVOID    lpParam
  );
The only redeeming feature is the documentation.


Windows doesn't need a bookshelf with around 15 O'Reilly books just for GUI programming.

Since we are counting arguments:

    Window XCreateWindow(
        Display *display,
        Window parent,
        int x,
        int y,
        unsigned int width,
        unsigned int height,
        unsigned int border_width,
        int depth,
        unsigned int class,
        Visual *visual,
        unsigned long valuemask,
        XSetWindowAttributes *attributes
    );


Yeah, I don't understand the grandparent on that. The Xt API (and the Athena widget set) were hardly a great work of computer science. But they were relatively clean and easy to understand, and basically invented the "widget" (rectangle on the screen that can draw itself and respond to input in an encapsulated way) component model that we're all still thinking in today.


No they did not, Xerox PARC invented the widget.

They never had anything to do with UNIX.


I wouldn't say Xt invented the "widget", and I would never pass up the opportunity to state most emphatically that Xt widgets are neither clean nor easy to understand.

Here's a video tape by Brad Myers called "All the Widgets" [1] that shows demos of widgets from way back, many of them long before Xt. (Over 175 segments, from 30 systems, from 15 companies!)

[1] https://www.youtube.com/watch?v=9qtd8Hc90Hw

All the Widgets, by Brad Myers. This was made in 1990, sponsored by the ACM CHI 1990 conference, to tell the history of widgets up until then. Previously published as: Brad A. Myers. All the Widgets. 2 hour, 15 min videotape. Technical Video Program of the SIGCHI'90 conference, Seattle, WA. April 1-4, 1990. SIGGRAPH Video Review, Issue 57. ISBN 0-89791-930-0.

What kicked Xt's and Motif's ass, especially in terms of power and flexibility, as well as being clean and easy to understand, was TCL/Tk.

The reason TCL/Tk was so successful, in spite of how lame TCL was as a scripting language, is that TCL was there from day one of Tk's design, not as an afterthought.

As a result, there was no need for Tk to invent a bunch of half-assed kludges, and require applications to build complex Rube Goldberg devices on top of those, when Tk could simply call into TCL to do anything the scripting language interpreter could handle, including calling into the application's C code from TCL.

So there was zero overlap between what the scripting language could do, and what the toolkit needed to do. And that made the toolkit vastly simpler and more consistent, and much easier to flexibly program and modify.
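
Just to make that division of labor concrete, here's a rough, untested sketch (with hypothetical names, using the modern Tcl C API): the entire UI lives in the script, and the only thing the application contributes is one C command registered with the interpreter.

    /* Sketch: embed Tcl/Tk, register one application command in C, and let the
     * whole UI (widget creation, layout, bindings) live in the script. */
    #include <tcl.h>
    #include <tk.h>
    #include <stdio.h>
    /* Hypothetical application command, callable from Tcl as "app_save". */
    static int AppSaveCmd(ClientData cd, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])
    {
        printf("application save logic goes here\n");
        return TCL_OK;
    }
    int main(int argc, char *argv[])
    {
        Tcl_Interp *interp = Tcl_CreateInterp();
        if (Tcl_Init(interp) != TCL_OK || Tk_Init(interp) != TCL_OK)
            return 1;
        /* The only bridge the application needs: expose its C code to the script. */
        Tcl_CreateObjCommand(interp, "app_save", AppSaveCmd, NULL, NULL);
        /* Everything UI-related is plain Tcl/Tk -- no C widget plumbing at all. */
        Tcl_Eval(interp,
            "button .save -text {Save} -command app_save\n"
            "pack .save -padx 20 -pady 20");
        Tk_MainLoop();
        return 0;
    }

The button, its layout and its callback never touch C beyond that one registered command -- which is exactly the "zero overlap" I mean.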

Xt-based toolkits like Motif [2] had to be built on top of all that half-assed pseudo-object-oriented crap, trying to reimplement programming-language-level concepts like objects, components, properties, defaults, inheritance, events, handlers, delegation, layout, etc., in brittle C and macros.

[2] http://www.art.net/~hopkins/Don/unix-haters/x-windows/motif....

Motif was yet another proof of Greenspun's tenth rule of programming [3]: Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

[3] https://en.wikipedia.org/wiki/Greenspun's_tenth_rule

So just start with Common Lisp, or at least something in the ballpark like TCL, PostScript, JavaScript, Lua, etc.


"The only redeeming feature is the documentation."

Lmao. That's close to what I said when I first learned it. I was griping about how complicated window creation was vs some tools I had. "The MSDN docs are awesome, though!"


I never get why some people think the Win32 MSDN docs are adequate. They are typically several orders of magnitude less precise than the man pages for functions on similar topics -- if not outright containing errors!


They helped me a lot. Especially the code examples for various API functions. That simple.


I don't think that at the raw level there is that much of a difference. XCreateWindow() has mostly the same arguments as CreateWindowEx(), except that the X11 API only handles creating the window and does not mash it together with upper-layer concepts like event handling (lpClassName) and widget sets (hMenu).

The X11 model is also easier to reason about, because it is clear where the layer boundaries are: what is implemented in your process as opposed to in the kernel/display server, and what causes some kind of IPC (e.g. what happens to lParam when you send a message to a window whose WndProc is implemented in a different process?).
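
(For comparison, a bare-bones, untested sketch of that raw X11 path: the window is just a server-side resource, and all the dispatch happens in your own process via an explicit event loop -- nothing like a WndProc is registered anywhere.)

    /* Bare-bones Xlib client: create a window, then handle events ourselves. */
    #include <X11/Xlib.h>
    #include <stdio.h>
    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }
        int scr = DefaultScreen(dpy);
        XSetWindowAttributes attrs;
        attrs.background_pixel = WhitePixel(dpy, scr);
        attrs.event_mask = ExposureMask | KeyPressMask;
        /* Compare with CreateWindowEx: no class name, no menu, no instance handle --
         * just geometry, a visual, and the attributes we explicitly opt into. */
        Window win = XCreateWindow(dpy, RootWindow(dpy, scr), 0, 0, 400, 300, 0,
                                   CopyFromParent, InputOutput, CopyFromParent,
                                   CWBackPixel | CWEventMask, &attrs);
        XMapWindow(dpy, win);
        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev); /* events arrive over the X connection; the IPC boundary is explicit */
            if (ev.type == KeyPress)
                break; /* any keypress quits */
        }
        XCloseDisplay(dpy);
        return 0;
    }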


Why?


I doubt there is any worse UI tooling available.


Than win32!? Are you serious?

Xlib is pretty bad, but win32 is absolutely bonkers. Or are you saying the opposite?


Of course I am serious.

I have been doing UI coding since the Amiga 500 days and have never found an API as borked as Xlib, with so many parameters and configuration structures, and without any support for printing or proper use of fonts.

The number of hours of my life wasted on xlsfonts....

And those bare bones widgets, yet another headache.


And yet Xlib ran happily on machines with 4MB of RAM and would only use 512K of that; mind you that was X11R3. I used to run this very config on Apollo Domain/OS machines in 1990.


The Amiga had 512KB for the whole OS, including a GUI stack way better than X.

For those of us targeting desktop computing, the network features of X were never relevant; what mattered were GUI toolkits for workstation use, running on the same computer.


I remember when this was new and I've never found it as amusing as other people seem to. It's super easy to shit on software that's almost 50 years old. But where's the replacement?

Windows is still worse as a web server platform than various Unix descendants (Linux and BSD) despite Microsoft's best efforts, and it's not free.

Unix has its flaws but being a "Unix hater" is just dumb.


UHH absolutely does not attack UNIX for being old. See "Who We Are" in the preface. The authors worked with systems they felt were superior in many ways, until industry/economic forces pushed them to UNIX.

"This book is about people who are in abusive relationships with UNIX, woven around the threads in the UNIX-HATERS mailing list."

They're not a bunch of angry outsiders, they're UNIX insiders who reject the idea of "Worse is Better." Worse is Worse, but we're stuck with it.


"Worse is Worse" ignores the fact that purity in a vacuum is very pure and very useless. There is (extreme) value in network effects, and network effects also exist in technology, not only in social networks... (and, well, in technical fields, the two are related). Also, it's extremely easy to keep only good memories of extinct systems, and to remain focused on past flaws that have for the most part been corrected, or at least compensated for, by dozens of years of refinement in the systems that are still alive.

I don't deny that some technical choices are better than others in some contexts, or even, in some cases, in every respect. But the PC loser-ing problem, for example, has in practice hardly caused any problems (and is in some cases even a superior approach for userspace), and judging systems by that kind of detail is like a hypothetical crazy mechanical engineer refusing to buy a car because of an obscure technical detail in the engine -- one that has been made to work perfectly well by taking its peculiarities into account, but that he dislikes on principle and in an absolute way.

I'm also very aware of the enormously costly impact of some de facto industry choices. For example: the C language and its derivatives are shit and have arguably cost humankind an absurd amount -- compared to a hypothetical world where safer languages had been used. Does that mean this is a case of "Worse is Better"? Maybe, but then what would it mean? We have processors with MMUs; we even have IOMMUs, which are becoming more and more common. Given some existing software stacks this does not need to be an intrinsic requirement -- but IMO it is vastly better. Can Worse sometimes win, and Better also win on other subjects? What should we deduce from that, then? What insight can we extract? If none, this whole categorization is meaningless.

So in some cases, "Worse" is actually just better, the "worse" in question only being in the eyes of the opponent. Or you need to be extremely precise about the criterion you are using.

But I prefer to stick to considering all the advantages and drawbacks of whatever I'm talking about, and the situation I want to apply it to, and to avoid binary categorizations when they have no predictive value.


I wasn't saying they were attacking it because it is 50 years old; I said it's easy to attack something that is 50 years old. Computer science and systems design have evolved a lot since then, so of course a lot of decisions made back then are ones we wouldn't make now. And a lot of things evolved organically in UNIX, etc.

Anyway, I'm not gonna rehash the same old arguments with you.



