War of the workstations: How the lowest bidders shaped today's tech landscape (theregister.com)
174 points by lproven on Dec 25, 2023 | 100 comments



One thing I think is forgotten here, which is actually in the Worse is Better talk, but people tend to miss it.

These ST and Lisp systems failed in another respect: reuse. The biggest change of the past two decades in software engineering compared to previous generations is the amount of reuse. It is tremendous.

It is hard to talk of cause and effect here, but mostly this is due to the Internet. At this point, the vast majority of code running on any proprietary system is... open-source infrastructural packages.

This conditions a lot of the current ecosystem. You can only reuse code on systems on which said code runs well. As such, the Linux "stability" combined with x86 won, same as C and friends won because of the tooling that made the code "portable".

Yes, I know: it is far from magically portable, but it is far more portable than a full-machine living image like Smalltalk or Lisp.

As such, these "living code" are fundamentally evolutionary deadend. They are amazing but they cannot easily move to different machines and sharing parts of them is hard to separate from the rest of the living organism.

On top of this, a lot of the elements needed to make this kind of machine work require deep, in-depth expertise. As the piece shows, the Newton is a pale copy of the goal because they had neither that knowledge in house nor the time (or money) to create it.

Same thing all over the stack. A good, efficient logger needs deep expertise. Same for a good localization library. Same for a good set of graphics servers. Same for audio servers. Same for an HTTP parser or a network library. A good regexp engine is knowledge probably held by fewer than 10 people in the world.

Once you realise that, you realise that at scale reuse is the only realistic way forward for software as ubiquitous as it is today. And that is how we got the current FOSS ecosystem: not because the code is better, but because anything else would need too many licences to be manageable without breaking the bank on lawyers.

Same thing for Worse is Better. It works because it provides extension points and can adapt, something the Lisp and Smalltalk machines fundamentally failed to provide. And that is something Richard Gabriel focuses on far more in his talk than the whole New Jersey schtick.


> They are amazing but they cannot easily move to different machines and sharing parts of them is hard to separate from the rest of the living organism.

The code of a Symbolics Lisp Machine is written in so-called "systems". A "system" is a collection of files in a directory. Moving the code to another machine is either a) copying the directories to another directory, b) dumping an archive to copy somewhere else, or c) most of the time not necessary, since the Lisp Machine edits files on NFS file servers, which makes it possible to share the Lisp code directly with other Lisp systems on the network.

The Symbolics keeps track of versions of files and systems, something other Lisp systems typically don't do themselves.

> As such, the Linux "stability" combined with x86 won, same as C and friends because of the tooling that made the code "portable".

There is little stability. The main theme is ever-evolving fragmentation. Linux / BSD is fragmented into zillions of distributions, variants, versions, competing library variants, software archive systems, ... and open source UNIX is fragmented into various BSDs, Linux variants, ... Look at some portable code: it uses a huge configuration checker, which looks for all the variations of code and libraries. Zillions of checks...


Yes, but it is portable. On the Lisp machines, you could generally not port to a different type of machine from another producer, much less across versions. The Linux ABI is stable.


Sure you could, if you wanted. There was a bunch of software developed on the Lisp Machines which was available over a range of different machines. In 1984 Common Lisp was presented as a defined language, available from PCs up to supercomputers (Cray, Connection Machines). Common Lisp was the standard language for applications in Lisp. Symbolics itself had a Lisp system for the PC/Windows, intended as a delivery system, called CLOE (used for example to run Macsyma on the PC, IIRC). Early on, a bunch of Lisp applications were ported to UNIX-based Common Lisps. These platforms appeared in the mid-to-late 80s (Allegro CL, Lucid CL, LispWorks, CMUCL, AKCL, ...) and especially the first three got very powerful. All the big expert system development tools were on those, too. The Symbolics graphics suite was ported to UNIX and Windows NT using Allegro CL. Stuff like that can be more demanding, depending on how many machine features are needed.

The whole motivation to develop Common Lisp (which is mainly based on the ZetaLisp language for the Lisp Machine) was to have a common language for application development and delivery across operating systems and machine types.

When the Common Lisp Object System was developed in the late 80s, it was developed for ~15 Lisp implementations, from Lisp Machines to Apple's Macintosh Common Lisp.

To give you a recent example: ASDF is a build tool for Common Lisp software. It's 13k lines written in Common Lisp. I have here literally the same source file (Version 3.3.6) loaded into SBCL (macOS 14.2) and a Lisp Machine (Symbolics Portable Genera 2.0.5, running on an iMac Pro): https://i.redd.it/h94h3emjap8c1.png
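
For readers who haven't used it, an ASDF system definition is just a declarative form naming the files and their dependencies. A minimal sketch (the system name, dependency and file names here are invented for illustration):

    ;; Hypothetical system "my-app" -- names are made up for illustration.
    (asdf:defsystem "my-app"
      :description "Toy system to illustrate an ASDF definition"
      :depends-on ("alexandria")
      :components ((:file "package")
                   (:file "main" :depends-on ("package"))))

    ;; The same call loads it on any supported implementation:
    ;; (asdf:load-system "my-app")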

> The Linux ABI is stable.

That's only a part of the fragmented Linux ecosystem: CPUs, distros, ... A few days ago I installed the Linux subsystem for Windows 11 on an ARM64 machine. Obviously it is not that easy to get all kinds of Linux software running on it...


This is a fundamental misunderstanding of what the actual problem with sharing code was, at least with Lisp systems (I can't speak for the commercial Smalltalks of the time, as I have no experience with those).

The Lisp environments were fragmented, so sharing code between different systems was a problem - even assuming Common Lisp, a group working in facility A might be using Symbolics machines and depend on software packages like Dynamic Windows and the Symbolics-specific network stack, which meant the portable core logic would be missing various usability elements, like the GUI, when ported to facility B running CMU CL with Motif on Unix, or facility C, which had TI Explorers.

Another issue was that the machines were powerful enough that you didn't have to contact others just to get running. Working in C meant you either spent a lot of time filling in basic functionality, or you collaborated with others.

This was compounded, of course, by C and Unix being effectively free for a huge chunk of the zeitgeist-creating masses in academia.


> depend on software packages like Dynamic Windows

That was the reason for the specification and development of the "Common Lisp Interface Manager" (CLIM), which was a portable reimplementation of Dynamic Windows on top of Common Lisp and CLOS. It supported building the presentation-based user interfaces of Dynamic Windows. Implementations of CLIM were available for Lisp on Genera, Unix, Mac and Windows. CLIM 2 was then an attempt to blend in better with the other GUIs.

https://github.com/franzinc/clim2

https://mcclim.common-lisp.dev (McCLIM is an independent reimplementation of the CLIM 2 "standard".)
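
To give a flavour of the API, here is a minimal sketch of a CLIM application frame; it should run under McCLIM, and the frame and function names (HELLO-FRAME, DRAW-HELLO) are invented for illustration:

    ;; Minimal CLIM application frame (a sketch; names are invented).
    (clim:define-application-frame hello-frame ()
      ()  ; no slots
      (:panes (display :application :display-function 'draw-hello))
      (:layouts (default display)))

    (defun draw-hello (frame pane)
      (declare (ignore frame))
      ;; Draw a string at coordinates (20, 20) in the pane.
      (clim:draw-text* pane "Hello from CLIM" 20 20))

    ;; To start it:
    ;; (clim:run-frame-top-level (clim:make-application-frame 'hello-frame))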

TI came up with a UIMS (User Interface Management System) based on Common Lisp + CLOS + CLX, called CLUE (Common Lisp User Environment) and CLIO. That one was then also used by LispWorks as the low-level UIMS for its GUI and IDE. LispWorks later replaced CLUE with their own UIMS, called CAPI, which supported/s Motif (now obsolete), Gtk, Windows and macOS.

https://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/lang/...


Yup!

But if you didn't write with portable interfaces, you might not have bothered to port it later, etc.


Lisp and Smalltalk failed because it was near impossible to share the code. Living images are really hard to merge.

Smalltalk did eventually get a fantastic solution for code sharing between living images (Envy/Developer), but by that stage it was too late.

All that philosophical guff about "worse is better" is beside the point: what mattered was the practical ease of sharing files, rather than the tangled mess of code that Lisp and Smalltalk encouraged. Sharing files is definitely not "worse" by any definition.


You say Smalltalk encouraged "a tangled mess of code" when we can all see that Smalltalk encouraged code sharing in external files:

"Within each project, a set of changes you make to class descriptions is maintained. … Using a browser view of this set of changes, you can find out what you have been doing. Also, you can use the set of changes to create an external file containing descriptions of the modifications you have made to the system so that you can share your work with other users."

1984 Smalltalk-80 The Interactive Programming Environment page 46

https://rmod-files.lille.inria.fr/FreeBooks/TheInteractivePr...


Sharing those changes was never that easy. Because it was encouraged to make additions and even changes to the base classes along with the project, we would inevitably end up with clashes. A typical scenario was filing in one programmer's changes, then a second set of changes, only to find they didn't work together, requiring a third set of changes to be produced. Rinse and repeat in a big team and you've lost all the benefits of Smalltalk in hideous merges.

The change sets did not have hash-based versioning or anything like that, so we tried cramming entire classes into SCCS or PVCS, which kinda worked but quickly got unmanageable as base class extensions grew.

The Smalltalk change sets were fine for tracking one programmer's work but quickly multiplied into a giant headache in any team.


> encouraged to make additions and even changes to the base classes

By whom?

"Guideline 120 Avoid modifying the existing behavior of base system classes."

1996 Smalltalk with Style page 95

https://rmod-files.lille.inria.fr/FreeBooks/WithStyle/Smallt...


You had to be there. Those books were not available when I was using Smalltalk and not all programmers really “got” the system or the implications of their changes on their team.


You must have been a very early user; this was published in 1984 —

"At the outset of a project involving two or more programmers: Do assign a member of the team to be the version manager. … The responsibilities of the version manager consist of collecting and cataloging code files submitted by all members of the team, periodically building a new system image incorporating all submitted code files, and releasing the image for use by the team. The version manager stores the current release and all code files for that release in a central place, allowing team members read access, and disallowing write access for anyone except the version manager." (page 500)

1984 "Smalltalk-80 The Interactive Programming Environment"

https://rmod-files.lille.inria.fr/FreeBooks/TheInteractivePr...

Without a defined software development process, that everyone habitually uses, we can usually make "a tangled mess of code" with any programming language.


You do know what the difference between citing a book and actually being there might be, right?


Yes, we can all verify what's written.

I didn't use Smalltalk until 1988, so there was at least the opportunity to learn from the mistakes of others.


Glad to see you're still fighting the good fight on this one, lproven. If anyone finds this interesting, Liam has three great FOSDEM lectures out introducing this topic from the perspectives of business software dev, hardware tech, and open source. I found the first after reading The Unix-Haters Handbook, and as much as I like its style, Proven's videos are broader and more up-to-date, so much better for learning:

The Circuit Less Traveled: https://news.ycombinator.com/item?id=16309195

Generation Gaps; a heretical history of computing: https://news.ycombinator.com/item?id=22265615

Starting Over; a FOSS proposal for a new type of OS for a new type of computer: https://news.ycombinator.com/item?id=26066762

I also coincidentally found what might be the only good video about the Apple Newton on the internet today (although with the state of google search it's impossible to know outside of youtube). It makes a great supplement to the discussion of breaking the mold of OS conventions: https://www.youtube.com/watch?v=5kxRi34PqWo

The channel also has full interviews used in the documentary. I haven't seen the newest one, but it's two hours with a Mac and Newton architect so it's probably great: https://www.youtube.com/watch?v=KsaGx0loR3M


Oh, thank you! :-) That was a very pleasant Newtonmas surprise to read! :-D

Yes, the core of this article was adapted -- with my editors' full knowledge and agreement, of course -- from one of my FOSDEM talks. I plan to return with some more of the talk -- and some new stuff! -- in later pieces.

Sadly, the FOSDEM organisers did not accept my proposal for a talk this year. Ah well.


I started reading the article, and it was making so much sense that I scrolled back up to the top to see if you wrote it, and you did! So I came here to say I really love your work because you really get the big picture and do your research, but somebody beat me to it, dammit! After recently watching that depressing trending four hour youtube video about psychopathic plagiarizing charlatans, it's really refreshing and uplifting to read something original that's so well researched with great citations. I hope you don't get plagiarized too, but if you do, consider it the highest form of flattery.


I think the author confused lowest bidder with producing something people could afford. For all their weaknesses, micros could be purchased by normal people. You could not produce a Lisp machine or Alto equivalent in the '80s that regular people could afford.


And Lisp machines were extremely complicated to use. Every time I see one at vintage computer conventions I am reminded of that.


They were complicated to use but they were also self-teaching. I learned how to use a Lisp machine by sitting in front of one and trying things. By the end of a week I was fluent. And this was before the Internet; there was no Google. The machine itself contained all the docs and tutorials you needed to learn it. This was a remarkable achievement.

Once you learned it you felt like a god because of the abilities the machine gave you. Some degree of learning curve was more than acceptable for that kind of power.


I did not have this experience.

We had a pair of them, TI Explorers, in our office. I spent several hours over different days "fiddling" with them, and made no progress.

The single person who could use them was not a generalist at all. He simply knew whatever was necessary to dabble with the expert system bit he was working on.

Whatever intuitive relationships were to flower from access did not manifest with this person. Simply, he wasn't seeking them. He was focused on his little task. If the training docs said "type xxx and hit F5" he would do so without spending an iota of attention on what he was typing, why he was typing it, or what the F5 button did.

I made no progress on that machine. Perhaps if I had had an hour of knowledgeable introduction or guidance as a jumping-off point, it would have been different and it would have opened a world to explore.

But at the time, for me, it was inscrutable, and I went back to spending my time on the Suns that we had instead.


TI Explorer LX machines ran Unix with LISP on top. Very different.


Can't speak to your experience with TI machines because I never used them. I learned on Symbolics and LMI machines.


Unix had (has!) excellent manual pages (man ls, for those who don't know). I had a similar experience learning SunOS like this in the early 90s.


You can see the Symbolics documentation interface in action: https://vimeo.com/83886950

It comes with built-in hypertext documentation, which was available online and printed.


And once you let the Lisp Machine teach you everything about itself, you had superhuman abilities!

One of my favorite Lisp Machine stories (I've fixed dead links to point to archive dot org, and archived the file from stanford's ftp to my https server):

https://news.ycombinator.com/item?id=30812434

Reposting this from the 2014 HN discussion of "Ergonomics of the Symbolics Lisp Machine":

https://news.ycombinator.com/item?id=7878679

http://lispm.de/symbolics-lisp-machine-ergonomics

https://news.ycombinator.com/item?id=7879364

eudox on June 11, 2014

Related: A huge collection of images showing Symbolics UI and the software written for it: https://web.archive.org/web/20201216064308/http://lispm.de/s... ...

agumonkey on June 11, 2014

Nice, but I wouldn't confuse static images with the underlying semantic graph of live objects that's not visible in pictures.

DonHopkins on June 14, 2014

Precisely! When Lisp Machine programmers look at a screen dump, they see a lot more going on behind the scenes than meets the eye.

I'll attempt to explain the deep implications of what the article said about "Everything on the screen is an object, mouse-sensitive and reusable":

There's a legendary story about Gyro hacking away on a Lisp Machine, when he accidentally trashed the function cell of an important primitive like AREF (or something like that -- I can't remember the details -- do you, Scott? Or does Devon just make this stuff up? ;), and that totally crashed the operating system.

It dumped him into a "cold load stream" where he could poke around at the memory image, so he clambered around the display list, a graph of live objects (currently in suspended animation) behind the windows on the screen, and found an instance where the original value of the function pointer had been printed out in hex (which of course was a numeric object that let you click up a menu to change its presentation, etc).

He grabbed the value of the function pointer out of that numeric object, poked it back into the function cell where it belonged, pressed the "Please proceed, Governor" button, and was immediately back up and running where he left off before the crash, like nothing had ever happened!
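
(For anyone who hasn't used Lisp: every symbol has a "function cell" holding the function object it names, and that cell can be read and written like any other data. A simplified sketch in portable Common Lisp -- using an invented function MY-FUNC rather than a built-in like AREF, since redefining standard functions is undefined behaviour -- shows the idea behind that repair:)

    ;; Define a function, save the object in its function cell, clobber the
    ;; cell, then poke the saved object back -- and the function works again.
    (defun my-func (x) (* x x))

    (defvar *saved* (symbol-function 'my-func))

    (setf (symbol-function 'my-func)
          (lambda (x) (error "clobbered: ~a" x)))

    (setf (symbol-function 'my-func) *saved*)

    (my-func 4)   ; => 16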

Here's another example of someone pulling themselves back up by their bootstraps without actually cold rebooting, thanks to the real time help of the networked Lisp Machine user community:

ftp://ftp.ai.sri.com/pub/mailing-lists/slug/900531/msg00339.html => https://donhopkins.com/home/SlugMailingList/msg00339.html (I archived the whole directory so you can follow the "Followups" links to read the whole discussion.)

Also eudox posted this link:

Related: A huge collection of images showing Symbolics UI and the software written for it:

http://lispm.de/symbolics-ui-examples/symbolics-ui-examples....


Maybe something like LISP will be what AI naturally converges to on its own journey towards superhuman intelligence.


Slightly more complex than the UI of something like GNU Emacs, but with a better user interface. Keep in mind that not that many machines ever existed (I would think around 10,000), thus the UI was less polished, but more capable.

It's definitely more difficult, because it does not look and feel like other common user interfaces (say, the desktop metaphor used by Apple and others), plus it lacks a few decades of improvements.


I was around when our lab replaced Symbolics Lisp Machines with Sun workstations.

Maybe this was a European (or non-US) thing, but none of the Lisp vendors had a built-out worldwide distribution and service network, so buying and actually getting machines delivered was not a trivial undertaking.

Once Sun did add the support for performant garbage collection that Lisp needed, the choice was clear. Local sales and service clinched the deal, and versatility and broader support were cherries on top. Then the whole Lisp machine business collapsed, so they weren't even a remote option anymore.

A relatively short time later, the Suns were replaced by Apple Macintoshes running Macintosh Common Lisp. MCL was a very decent Lisp environment, the machines were cheap, and by now they were used for much more than just programming.

The next wave would have been PCs, which had far better support. Apple never seemed to understand that your work depended on your computer, and that driving to an Apple store to drop off your machine so you could hopefully pick it up again in 2 to 3 weeks was not a serious option. Meanwhile Dell and the likes offered cheap Next Business Day on-site service.

But PCs for Lisp would be held back for a long time by Allegro cornering the PC Lisp market with a product that at the time felt inferior to MCL, while having very aggressive (phone) sales techniques and extremely high prices.


There's a lot that rings true here, but the article keeps noting that the IBM PC won partially because it was cheap. That's only true versus the examples the author cites. In the business space, it was pretty expensive when compared to, for example, business-oriented Z80/CP/M platforms. And it was very expensive compared to the variety of home 8-bit choices. There were a lot of different markets/niches at the time, and things changed quickly, so it's hard to distill the history into soundbites about what happened and why.


Not quite.

Firstly, re predecessors:

DOS was a lot more capable than CP/M-80 and any 8-bit system. It displaced them for much the same reasons x86-64 displaced x86-32: it became affordable to have lots of RAM. In the early 1980s, the 8080/Z80 couldn't handle >64kB without clunky paging. Some 25 years later, x86-32 couldn't handle >4GB without clunky paging, in much the same way.

Secondly, re the PC's cheapness: I'd argue the PC was cheap for a business machine, but it's not simply that the PC hardware was cheap; the bigger point is that the bundle of the cheap machine and the cheap OS is what won out.

The PC launched with 3 OSes: PC DOS, the UCSD p-System (a self-hosted Pascal IDE based on a JVM-like environment), and DR's CP/M-86.

MS-DOS was about ¼ of the price of the other offerings.

DR had an amazing comeback planned: it added the multitasking of MP/M to CP/M-86, creating Concurrent CP/M, then bolted on MS-DOS emulation, to create a multitasking OS that could run and multitask MS-DOS apps on a 286: Concurrent DOS (CDOS for short).

It also had a multitasking version of the GEM GUI called X/GEM.

This is something even the later OS/2 1.x couldn't do, and of course OS/2 1.0 didn't have a GUI at all.

But CDOS/286 depended on a feature of the development versions of the 80286 which Intel removed from the final shipping version. So, when IBM launched the first 286 PC, CDOS didn't work.

Intel put the hardware feature back, but it was too late: the OS had been killed at birth -- it didn't work on shipping 286 kit, so only native apps would multitask.

DR had a fallback plan: it promoted CDOS as a realtime OS and made a living for years afterwards, but the big product to outdo DOS, which originated as a clone of DR's CP/M anyway, was thwarted.

So it's not just "the PC was cheap", no. It's bigger than that.

It's:

* On the business desktop, the PC was cheap, _and_ the dominant PC OS was cheap, and the PC was built from open COTS parts and ran a COTS OS so was cheap to clone.

* In academia and research, UNIX on mostly-COTS hardware killed off elaborate proprietary OSes on proprietary hardware. While PCs used x86, ST-506 and then ESDI and then IDE disks, ISA slot cards, etc., workstations used fancy 32-bit CPUs on fancy buses like NuBUS, VMEbus, and so on, with SCSI storage... and later moved to RISC chips with fancy buses and SCSI.

This killed off proprietary non-open non-standard OSes such as VMS, Apollo DomainOS, and other exotica.

Meanwhile, in business, DOS killed DR and the p-System and all the eight-bits.

Later, Windows killed all the 16-bit GUI machines, except the Mac, which only survived by self-destroying and becoming a Unix machine.


> ISA slot cards

This was actually bigger than I think people remember. IBM gave away all the information on their bus standard immediately. They clearly understood they had to create a robust third party add on market for the PC and this would be key to driving the platform. They were not wrong.


They were not wrong in terms of what had to be done to make the PC platform the dominant one, except they didn't get to profit from the PC business for very long. The openness "mistake" is what Apple fought to avoid.


And Apple almost died because of that opposition to openness. They only returned to profitability once they promised to be as open as everyone else: having intel processors and a standard Unix userland.

(They then got filthy rich by effectively pivoting to portable computing, a world where openness and standardization were never essential.)


> And Apple almost died because of that opposition to openness. They only returned to profitability once they promised to be as open as everyone else: having intel processors and a standard Unix userland.

Not really. The brief time when they allowed what we would now call "hackintoshes" was when they came closest to dying. They adopted intel processors for cost reasons, but went out of their way to keep their system "different". (And IIRC that was after they'd recovered to healthy profitability on the back of the iMacs and iPod)


> They only returned to profitability once they promised to be as open as everyone else: having intel

They were profitable in the early 2000s with PowerPC and they probably would've stayed on it had it remained competitive with Intel.


They gave it away because of the consent decree they were operating under. Until the consent decree, IBM preferred to rent/lease their computers to you, not sell them.


> In the early 1980s, 8080/Z80 couldn't handle >64kB without clunky paging. Some 25Y later, x86-32 couldn't handle >4GB without clunky paging, in much the same way.

This isn't analogous. At the time x86-64 was introduced virtually no one had memory limitation issues. PAE allowed the systems to have more than 4GB of RAM even if individual processes on 32-bit OSes were limited to 4GB. Even today you'd be hard pressed to find normal user workloads running into 4GB RAM limitations.

That's not to say larger memory spaces aren't useful and aren't used. It's just not the same as some very basic task hitting limitations of only 64K of RAM.


I disagree.

When CP/M etc. launched it was not a problem that they maxed out at 64kB of RAM, because that was $tens of thousands worth of memory.

When XP launched, it was not a problem that it maxed out at 3-and-a-bit GB of RAM, for exactly the same reason.

CP/M launched in 1974, I think, and was a current OS until say 1984 or so when the PC-AT came out and MS-DOS systems could address a whole megabyte of RAM and it was clearly game over for Z80 boxes.

(Aside: my 3rd ever personal computer was an Amstrad PCW9512, a new CP/M machine launched in 1987 when I was 20.

https://news.ycombinator.com/item?id=36649261

It shipped with 1/2MB of RAM and could take more. CP/M 3, AKA CP/M Plus, could use the rest as a RAMdisk. This is again analogous to LIM-spec EMS on DOS on 8086-class kit, and PAE on x86-32 servers.)

The point here being, it began to be a problem for rich folks and power users long before it was a problem for the mass market.

But the user base of the old OS was a problem that hindered adoption of something new and better that fixed the issue.

CP/M overcame being unable to handle >64kB of RAM, but too late to help. DOS was taking its place. The big-name apps (WordStar, dBase, SuperCalc, etc.) were rewritten for the new OS.

DOS overcame being unable to use a 386 and >640kB of RAM, but too late: Windows was taking its place. The big name apps (1-2-3, WordPerfect, FoxPro) were rewritten for the new OS.

Windows 9x overcame being unable to use >256MB of RAM or gigahertz-class CPUs, but too late: XP was taking its place. The big name apps... just got updated.

Windows Server 2003 (XP for servers) overcame being unable to use >3.5GB of RAM, but too late: Windows Server 2008 R2 (Win 7 for servers) went 64-bit only and replaced it.

The wheel turns, and all is one.


> But CDOS/286 depended on a feature of the development versions of the 80286 which Intel removed from the final shipping version. So, when IBM launched the first 286 PC, CDOS didn't work.

What feature was that?


[Author/submitter here]

It's described in the Wikipedia article on the 286, under "OS Support"

https://en.wikipedia.org/wiki/Intel_80286#OS_support

It was discussed here on HN earlier this year:

https://news.ycombinator.com/item?id=36649261

Be sure to follow all the links and read the contemporary accounts.

The comments are very amusing in a sad way. I recently discovered that it is a meme for us 40+ year old people now to have the experience of 30-somethings passionately telling us that historical events we personally witnessed never happened.

The angry denials in many HN threads are a perfect example.

I've had many examples myself, but then, I'm a published writer. Right now the Reddit discussion of this article is full of angry (I presume) youngsters telling me I am totally wrong and frantically downvoting me.


I'd argue that the Mac only survived because of the anti-trust case against Microsoft, which meant that MS needed (on paper) competition, so it bailed them out.


Well, the mainframes never went away. I know a guy - he's now retired, but I've known him for 15 years, back when he was very active in development - who was a mainframe programmer in banking all his career, and he is the happiest developer I knew. In that industry, the entire stack is coherent and stable, and exists to get things done, not to confuse or bullshit people into infinite rewriting for the next shiny framework. There's always a budget to get things done properly; clients know what can be done and what can't. Everything is conservative and things really get delivered, and work in production, on time and on budget, and that ecosystem appears to select people who are able and willing to work that way, not "move fast and break things". And he retired rich, never doing anything but actual coding for money.

But of course, I could never break into things like that because it's not built with people with ADHD and no formal educational credentials in mind, to say the least. That's real business, not hippie hackspace.


Really outstanding article. The opening paragraphs did not hint at the depth to come; I'm happy I put in the time to read the whole thing. There were little bits I disagreed with or thought overly simplified, but on the whole a banger.

This bit jumped out at me:

The language was called Dylan, which is short for Dynamic Language. It still exists as a FOSS compiler. Apple seems to have forgotten about it too, because it reinvented that wheel, worse, with Swift.

Have a look. Dylan is amazing, and its SK8 IDE even more so (a few screenshots and downloads are left). Dylan is very readable, very high-level, and before the commercial realities of time and money prevailed, Apple planned to write an OS, and the apps for that OS, in Dylan.

On investigating the screenshots, Dylan turned out to be remarkably reminiscent of Visual Basic - arguably another good design that has rather unfairly fallen by the wayside.


I was lucky enough to get the chance to play around with sk8 when I worked at Kaleida. It was absolutely amazing, totally reflective and introspective and browsable and editable and even graphically beautiful! I just wrote about it a month ago (plate of shrimp!!!):

https://news.ycombinator.com/item?id=38406111

DonHopkins 31 days ago | on: The Revival of Medley/Interlisp

When I worked at Kaleida (a joint venture of IBM and Apple), I had the wonderful opportunity to play around with Sk8, which was amazing! It was kind of like Dylan and ScriptX, in that it was an object oriented dialect of Lisp/Scheme with a traditional infix expression syntax. But it also had wonderful graphics and multimedia support, and cool weird shaped windows, and you could point at and explore and edit anything on the screen, a lot like HyperCard.

Q: What do you get when you cross Apple and IBM?

A: IBM!

https://en.wikipedia.org/wiki/SK8_(programming_language)

>SK8 (pronounced "skate") was a multimedia authoring environment developed in Apple's Advanced Technology Group from 1988 until 1997. It was described as "HyperCard on steroids",[1] combining a version of HyperCard's HyperTalk programming language with a modern object-oriented application platform. The project's goal was to allow creative designers to create complex, stand-alone applications. The main components of SK8 included the object system, the programming language, the graphics and components libraries, and the Project Builder, an integrated development environment.

[...]

The SK8 Multimedia Authoring Environment:

https://sk8.dreamhosters.com/sk8site/sk8.html

What is SK8?

SK8 (pronounced "skate") is a multimedia authoring environment developed in Apple's Research Laboratories. Since 1990, SK8 has been a testbed for advanced research into authoring tools and their use, as well as a tool to prototype new ideas and products. The goal of SK8 has been to enable productivity gains for software developers by reducing implementation time, facilitating rapid prototyping, supporting cross platform development and providing output to multiple runtime environments including Java. SK8 can be used to create rich media tools and titles simply and quickly. It features a fully dynamic prototype-based object system, an English-like scripting language, a general containment- and renderer-based graphic system, and a full-featured development interface. SK8 was developed using Digitool's Macintosh Common Lisp.

[...]

Sk8 Users Guide:

https://macintoshgarden.org/sites/macintoshgarden.org/files/...

Lots more information and discussion of Sk8 at this link -- I'll just include the first comment, but click for more:

https://news.ycombinator.com/item?id=21846706

mikelevins on Dec 20, 2019 | on: Interface Builder's Alternative Lisp Timeline (201...

Dylan (originally called Ralph) was basically Scheme plus a subset of CLOS. It also had some features meant to make it easier to generate small, fast artifacts--for example, it had a module system, and separately-compiled libraries, and a concept of "sealing" by which you could promise the compiler that certain things in the library would not change at runtime, so that certain kinds of optimizations could safely be performed.

Lisp and Smalltalk were indeed used by a bunch of people at Apple at that time, mostly in the Advanced Technology Group. In fact, the reason Dylan existed was that ATG was looking for a Lisp-like or Smalltalk-like language they could use for prototyping. There was a perception that anything produced by ATG would probably have to be rewritten from scratch in C, and that created a barrier to adoption. ATG wanted to be able to produce artifacts that the rest of the company would be comfortable shipping in products, without giving up the advantages of Lisp and Smalltalk. Dylan was designed to those requirements.

It was designed by Apple Cambridge, which was populated by programmers from Coral Software. Coral had created Coral Common Lisp, which later became Macintosh Common Lisp, and, still later, evolved into Clozure Common Lisp. Coral Lisp was very small for a Common Lisp implementation and fast. It had great support for the Mac Toolbox, all of which undoubtedly influenced Apple's decision to buy Coral.

Newton used the new language to write the initial OS for its novel mobile computer platform, but John Sculley told them to knock it off and rewrite it in C++. There's all sorts of gossipy stuff about that sequence of events, but I don't know enough facts to tell those stories. The switch to C++ wasn't because Dylan software couldn't run in 640K, though; it ran fine. I had it running on Newton hardware every day for a couple of years.

Alan Kay was around Apple then, and seemed to be interested in pretty much everything.

Larry Tesler was in charge of the Newton group when I joined. After Sculley told Larry to make the Newton team rewrite their OS in C++, Larry asked me and a couple of other Lisp hackers to "see what we could do" with Dylan on the Newton. We wrote an OS. It worked pretty well, but Apple was always going to ship the C++ OS that Sculley ordered.

Larry joined our team as a programmer for the first six weeks. I found him great to work with. He had a six-week sabbatical coming when Scully ordered the rewrite, so Larry took his sabbatical with us, writing code for our experimental Lisp OS.

Apple built a bunch of other interesting stuff in Lisp, including SK8. SK8 was a radical application builder that has been described as "HyperCard on Steroids". It was much more flexible and powerful than either HyperCard or Interface Builder, but Apple never figured out what to do with it. Heck, Apple couldn't figure out what to do with HyperCard, either.


Things to add: Wirth's Oberon hardware and software, a small but successful system that was ported to many other architectures, and whose language remains influential. Also Apollo's DomainOS, which had network transparency in the early 1990s.


Domain/OS was the final name of the system; it was originally called Aegis, and it had network transparency in the early 1980s.


ITS (which Richard Gabriel discussed in his "Worse is Better" paper in relation to the "PC-loser-ing" problem in particular, and the "better" MIT school of design in general) had a network transparent file system in the 70's!

https://en.wikipedia.org/wiki/Incompatible_Timesharing_Syste...

https://www.techopedia.com/definition/26097/incompatible-tim...

Techopedia Explains Incompatible Timesharing System

[...]

The ITS OS was developed in late 1960s and continued to be used up to 1990 at MIT, and until 1995 at the Stacken Computer Club in Sweden.

Some of the important technical features of ITS are as follows:

The operating system contained the first device-independent graphics terminal output. The screen content was controlled using generic commands created by a program. The content was usually translated into a sequence of device-dependent characters defined by the terminal the programmer was using.

Virtual devices were supported in software run in user processes called jobs.

It provided inter-machine file system access and was the first OS to include this feature.

It provided a sophisticated process management in which the processes were organized in a tree. Any process could be transparently frozen or restarted at any point in time.

A highly advanced software interrupt facility was provided, which could operate asynchronously.

It supported real-time and time-sharing operations, which worked simultaneously.

[...]

https://web.stanford.edu/class/cs240/old/sp2014/readings/wor....

[...]

Two famous people, one from MIT and another from Berkeley (but working on Unix) once met to discuss operating system issues. The person from MIT was knowledgeable about ITS (the MIT AI Lab operating system) and had been reading the Unix sources. He was interested in how Unix solved the PC loser-ing problem. The PC loser-ing problem occurs when a user program invokes a system routine to perform a lengthy operation that might have significant state, such as IO buffers. If an interrupt occurs during the operation, the state of the user program must be saved. Because the invocation of the system routine is usually a single instruction, the PC of the user program does not adequately capture the state of the process. The system routine must either back out or press forward. The right thing is to back out and restore the user program PC to the instruction that invoked the system routine so that resumption of the user program after the interrupt, for example, re-enters the system routine. It is called ``PC loser-ing'' because the PC is being coerced into ``loser mode,'' where ``loser'' is the affectionate name for ``user'' at MIT.

[...]


Reading some docs from MIT during that era, one notices instructions that required the use of multiple different machines, and how pervasively transparent network filesystem access was involved in that.

For example, building a dial-in server using a cheap PDP-11 - essentially a way for mid-1980s students to connect whatever random PCs they might have to MIT's network - involved building code using at least a PDP-10 and Lisp Machines: the PDP-10 to run the PDP-11 assembler and linker, and then a Lisp Machine because several of them had permanently attached EEPROM programmers. The burned EEPROM would then be used as ROM for a common cheap PDP-11 CPU board combined with an Interlan Ethernet board, and the ROM would download the remaining OS code over the network.

https://github.com/PDP-10/its/tree/master/src/mits_s


It's an interesting perspective but ultimately unsatisfying because it doesn't touch on basic questions it seems like anyone would think to ask.

For example: from a hardware perspective, today's "microcomputer" hardware is vastly more powerful than the mainframes of old, is it not? The time when it was much less powerful was long ago. If the old philosophy is so vastly superior, what's stopping folks from recreating it in today's computing ecosystem?


The article is comparing systems from the same or close time periods. To merely compare modern systems with systems from 30+ years ago ignores what happened in those 30 years, which is why your comment says so little.


It's also trying to suggest that that history explains the dominant computing paradigms today. I think that historically-driven thinking only goes so far when today's hardware is vastly more powerful than anything we used to have. Any paradigm that serves users better could be used on today's very capable hardware, and presumably it would be extremely attractive. So why does this article sound like moving to the old paradigm that it prefers is a lost cause? We're not at the end of history quite yet.


Well, in my opinion, inertia is a dominant factor. There is an enormous amount of inertia present in the current systems and thought patterns. Redesigning the world from scratch while applying different principles is hard, especially when you need to catch up with 30 years of evolution and situational developments.

The point of the article, I believe, is that the current set of principles underlying modern systems are not necessarily sacred and immutable. It does not follow that applying differing principles to today’s ecosystems, where current principles have metastasized, would be easy or even necessarily achievable.


I kind of think that's what private cloud is: mainframe-style workflows and architectures on (many) microcomputers.


I wonder if there’s room for a theory of convergent evolution, in software and languages, towards something that is generally appreciated.

Maybe there will always be camps arguing for one thing or another, but together and over the longest term, do they all pull in one general direction?

I’m not so sure I agree with most of the article. The perspective is valuable.


> I wonder if there’s room for a theory of convergent evolution, in software and languages, towards something that is generally appreciated.

There definitely seem to be a few attractors. Almost every serious language from the last 20 or so years is now essentially OCaml with some minor tweaks. (e.g. Java has pretty much spent 25 years gradually turning into OCaml; so has Python, despite a very different starting point). Meanwhile languages that get extensive macro capabilities early on tend to turn into Lisp.


In many ways Rust was an attempt to turn OCaml into C (deliberately stopping partway).


Most likely, just like all phones now are big black rectangles.

There are also orders of magnitude more programmers now, so the style of programming changed. Think classical music (few listeners) versus popular music (lots of listeners).


And all phones in sci-fi movies are big transparent rectangles! What's up with that???

Maybe it's meant to be scathing social commentary on the ultimate destination of today's mobile phone trends in terrible user interface design and terrible privacy. But probably not.

https://www.quora.com/Why-do-so-many-SF-shows-depict-future-...

https://www.reddit.com/r/Showerthoughts/comments/g5w1ph/the_...


> towards something that is generally appreciated.

I think it's simpler than that.

It's convergence to the lowest common denominator. The simplest implementation of the simplest thing that does the job, in the simplest easiest language that is capable of it.

But because "simple" can be taken to extremes of very clever but abstruse tools that only near-genius level minds can use effectively, and those folks do not work well together, what ends up cheap at first is the simple easy thing that is simple and easy to get working.

Lisp might be arguably simpler in a theoretical sense but for a lot of ordinary folks it's just too weird and too hard.

APL is "simple" inasmuch as a hundred lines of nested loops can be written as one line of a dozen hieroglyphics that does the same task in one operation... but there are only a few hundred people on the planet that can read it.

C is a simple tool for simple computers but you can pick it up and learn how to use it with just a book and some time.

It's fiddly and hard to do anything clever, which means the programmer must do grug-brain stuff.

https://grugbrain.dev/

And that means others can follow it, and work with it, and that means teams scale. Not very well but a bit. So companies can hire rooms full of grugs and make it work.

Result, fairly simple dumb tool beats clever fancy more capable tool.

Result, people who like the dumb tool have strength of numbers and they mock the speccy nerdy weirdos who like the weird tool.

Which is fine until you end up with a million programs built from a million lines of C each and a million techies trying to get them to work together.

Then you end up with an industry that makes billions of dollars a month, selling billions of lines of code that nobody understands, and hundreds of thousands of people trying to prop it up and keep it working.

By that point it becomes clear that you should have employed a dozen geniuses to do it in a few thousand lines of the weird nerdy tool. You'd have ended up with something smaller and simpler and easier to maintain, and even the paycheques of the dozen geniuses would be less than the paycheques of a thousand grug brain developers.

But it's too late. The geniuses retired or went off to grow bonsai or something. All that's left is people who only know the grug brained methods.

It is easy to believe it's some kind of elegant evolution of the best possible solution... but it's not really. It's lots and lots of the cheapest worst solution until everything else dies out.

That is what I am trying to get at here.

But I am always learning and I try hard to be open minded. If you have specific point by point rebuttals, I'd love to read them!


> By that point it becomes clear that you should have employed a dozen geniuses to do it in a few thousand lines of the weird nerdy tool. You'd have ended up with something smaller...

Probably.

> ... and simpler...

Maybe not. Some geniuses produce simple; some produce insane complexity.

> ... and easier to maintain

Almost certainly not, unless you have another genius.


So computers should’ve stayed expensive and inaccessible to most people. Yeah, that’s how markets grow.


That is not only not right, that is not even wrong.

It's not even a misinterpretation of what I was saying. It's like taking the first letter of each line, spell checking it until it reads as English somehow, and disagreeing with the result!

I am saying we shouldn't have done it the cheap way. We should have done it the best way when it was visible what was the better but more expensive way.

Because that is what we must do now, but it's going to be much harder and much more expensive now, because we need to replace 30+ years of accumulated useless cruft while staying interoperable with it throughout.


You’re not wrong, it’s just not how technology tends to develop. We always go with “good enough for now.” Even if some people understand in the moment that there is a better way, there is always a tension between goal and subgoal pursuit. Usually, developing IT systems is the subgoal to the domain goal. Once the IT system is “good enough” to accomplish the domain goal, we move on.


> We always go with “good enough for now.”

The whole point of the piece is that not everyone does this and there are plentiful counter-examples, and while they never took over the mainstream, they have survived for decades despite going against the flow.

I can say this with authority because I wrote the article, and delivered the original talk upon which it's based.


The description of Lisp machines sounds a lot like running Emacs in a way. Is Emacs… just a Lisp machine emulator?!

(There is a lot I dislike about Emacs Lisp; there is a lot of power in it though. :)


In some sense, you might consider it to be a crude one. If you configure your computer to boot directly into full screen emacs, never leave it, and do your damndest to interact only with in-process elisp, you might get some feel for what such a machine could be like.

It will also be a frustrating experience, since emacs was never really intended to be used in such a way. There’s going to be a lot of friction and limitations. Still, it’s a fun exercise for the right sort of person :)


You've just described the user interface to every cloud server I use.

0) ssh to server

1) run screen -RD

2) run user emacs in one screen, run user shells in emacs

3) run root emacs in another screen, run root shells in emacs

4) run top in another screen

5) run nvitop in another screen on gpu servers

6) try to remember not to leave tail -f foo.log running in any shells or server explodes after six months


Emacs predates Lisp machines, but a version of Emacs was written in Lisp and used there (EINE[1]). The point of a Lisp machine was that it ran Lisp natively or, at least, closer to native than, say, a 8086 DOS box.

1. https://en.wikipedia.org/wiki/EINE_and_ZWEI


EINE was the second Emacs (the first was TECO EMACS, an Emacs written in TECO macros) and EINE was the first one to be (completely) written in Lisp, on a GUI-based system.


It's a Lisp interpreter that happens to come with a text editor. I guess you can replace the text editing default skin, but I never tried it.


This is a very interesting article on computer history. There's a lot that I disagree with, though...


Like I said above -- please do read my earlier comments -- I'd love to hear what, and why.


If one wants to read a description of the hard- and software of a Lisp Machine from the 1980s, here are two good and lengthy overviews of second and third generation systems. The first generation were the research prototypes developed at MIT: CONS and CADR. The Symbolics 3600 is an improvement (and a commercial offering) and the TI Explorer then already moved to higher integration incl. a special Microprocessor. Both operating systems were based on the original MIT Lisp OS. The TI Explorer from 1988 already supported Common Lisp, which Symbolics at that time also did. The Symbolics 3600 from 1983 ran ZetaLisp (aka Lisp Machine Lisp), a very object-oriented Lisp, using the Flavors object-system.

Symbolics 3600 in 1983

The Symbolics 3600 was the first improved machine after Symbolics was founded, released in 1983. The first machine, the LM-2, was a re-packaged version of the MIT CADR. The 3600 was an improved design with a 36-bit architecture (plus 8 bits of ECC). Typically a new machine would have cost around $100k in 1983. When the machine boots up, it says "Yes, master" on an LED display. It was very extensible with memory, color graphics, disks, tapes, network, printer, ... It had a console with a special keyboard, mouse, high-res b&w screen and digital audio. A dedicated Motorola 68000 was the Front End Processor.

The Symbolics 3600 Technical Summary from 1983

http://www.bitsavers.org/pdf/symbolics/3600_series/3600_Tech...

A brochure for the machine: http://www.bitsavers.org/pdf/symbolics/brochures/3600_Jul83....

TI Explorer in 1988

The second overview is from TI and is from 1988, not long before they quit the market. Their machine range was called TI Explorer, after licensing some technology from LMI (Lisp Machines, Inc.), another Lisp Machine company. By 1988 TI had already developed a Lisp Megachip, a 553k-transistor 32-bit CPU for running Lisp. The chip design was done in-house using a dozen Lisp Machines (including several from Symbolics). It was at the time one of the first CPUs of that complexity. The CPU had been financed by DARPA, and one of the envisioned applications was adding a Compact Lisp Machine to a fighter plane, running an AI-based co-pilot, based on a large research project (Pilot's Associate). The TI Explorer itself used the NuBus as its extension bus. Its console was attached via a fiber-optic cable.

The TI Explorer Technical Summary from 1988:

http://www.bitsavers.org/pdf/ti/explorer/2243189-0001D_ExplT...


> The CPU had been financed by DARPA and one of the applications thought was adding a Compact Lisp Machine into a fighter plane, running an AI-based co-pilot, based on a large research project (Pilot's Associate).

So is that why everybody called it the "TI Exploder"? I always assumed it was because they occasionally exploded, not because they made other things explode!

https://cl-pdx.com/comp.lang.lisp/1988/apr/553.html#:~:text=...

http://computer-programming-forum.com/50-lisp/e1910d32684c7e...

https://gopherproxy.meulie.net/sdf.org/0/users/kickmule/unix...

(Oh wow cool, by googling for "TI Exploder" I found those old unix-haters archives, with some of my old sarcastic Unix Weenie / X-Windows / Perl combo-rants from '91, that I'd completely forgotten about! Including DSHR's epic "Sun Deskset == Roy Lichtenstein Painting on your Bedroom Wall" rant that somebody with the initials DH leaked. https://blog.dshr.org/ ) ... I'll transclude the highlights since it perfectly illustrates what Liam was getting at in the article and his comments here:

    Date: Thu, 8 Aug 91 18:20:00 PDT
    From: DH
    Subject: Simple Unix Weenie Question

      Date: Thu, 8 Aug 91 20:50:26 EDT
      From: J

      > =+= Gee, I can't seem to convince a shell script or a
      > program to change my working directory.  This makes some
      > sense, given how Unix handles programs (as separate
      > processes).  But when I put "cd" in a shell script, it
      > works for the duration of the shell script, and then
      > reverts back to where I was before.

      So, what's wrong with that?  running a shell script
      forks off another shell, and when it terminates, you are
      returned to where your parent shell left you.  If you
      want to end up in another directory, use an alias.

    The elegant approach is for the subshell to open /dev/mem
    and change the directory of its parent process
    directly. Another popular approach is for both shells to
    open connections to the X server, select for property notify
    events on screen 0, and negotiate an incremental selection
    transfer in a manner acceptable to both the parent process,
    child process, and the X Consortium Office of Management and
    Budget, falling back on the system wide default directory
    (which you must be running automounter to access) in case
    the server returns a BadAlloc error.

    From: DH
    Date: Thu, 8 Aug 91 19:07:55 PDT
    Subject: Simple Unix Weenie Question

    Oh yeah, in case you were wondering how to open an X
    connection from the shell, you have to be running
    "xperlsh".  But since it runs your .xperlshrc file through
    the C preprocessor first, you have to make sure you have an
    ANSII cpp in your path supporting ## style macro
    concatination, but other than that it's pretty much like you
    would expect.

    (But of course you have to set your path up right first, so
    you need to set the "SkipAheadOnErrorsAndTryAgainSomeOtherTime"
    option to xperlsh in your .xperlshinit file, which is
    sourced by your session manager (so naturally you have to
    restart the window system before it takes effect.).)

    Date: Mon, 14 Oct 91 18:01:14 PDT
    Subject: I Love Working At Sun ...

    -- Customers are really stupid: if you dump on them they
      quit buying your product ;-/

           - Author's name withheld (not me).

    From: DH
    Date: Mon, 21 Oct 91 14:18:19 PDT
    Subject: dr on Deskset

    Date: Wed, 31 Oct 90 09:39:51 PST
    From: OD
    To: tt
    Subject: fyi

    Hmmm...

    ----- Begin Included Message -----

    >From kh
    From: kh
    To: ws
    Subject: dr on Deskset

    From: DR
    Date: 18 Oct 90 17:02:39 GMT
    Newsgroups: sun.open-windows
    Subject: Re: Deskset environment

       [NS replied to me directly.  Her reply illustrates the
       reasons why I sent out yesterday's mail so perfectly that
       I'm taking the liberty of copying my reply to
       openwindows-interest]

    > When we give standard Deskset presentations, a couple of
    > things tend to "dazzle" the audience ...
    >
    > 1.    Use the MT Calendar template to generate an
    >       appointment.  Mail it to yourself, then
    >       drop it onto CM which will schedule it.  The
    >       template is totally hokey (we're working on
    >       it) but it works and is wizzy.
    >
    > 2.    Build a small application with GUIDE and make it
    >       on the spot.  Show it up and running on XView
    >       in minutes.  You can talk to BW about that
    >

    Thank you, but you have completely missed the point.  I
    don't want to show people how whizzy the standard default
    desktop environment is.  That's your job.

    I want to give a talk about a quite different subject.  I
    merely want to *use* the desktop environment to achieve my
    own ends.  And as soon as I try to actually *use* it for
    something instead of merely showing off the glitz, it falls
    to pieces in my hands.  Unfortunately, this is becoming all
    too common in Sun products these days, because we no longer
    *use* the things we build for anything but whizzy demos.

    Have you ever actually tried to *use* the desktop for
    anything?  Like, say, printing a PostScript file?  The
    answer has to be no - because dropping a PostScript file on
    the print tool doesn't work.  Or binding a shell command to
    a pattern?  Again no, because doing so depends on
    undocumented features of /etc/filetype.  Even trying to
    create a new icon from the standard set causes the icon
    editor to dump core.  I'm not joking when I say that I've
    been filing a bug report every couple of hours of trying to
    use the desktop.  Its this kind of fragility that shows me
    that I'm treading on fresh snow.  No-one else has walked
    this way.

    And that is a truly sad commentary on the state of Sun -
    no-one has been this way because no-one believes that
    there's anything worth doing over this way.  The reason Unix
    was such an advance over previous operating systems was that
    you could customize your environment in arbitrary ways.
    With just a few shell scripts, for example.  Its just like
    the cold war - in our anxiety to compete with the enemy
    we've ended up losing the things that made our way of life
    worth defending in the first place.  Like the freedom to
    disagree with the authorities.

    > I believe you're correct in saying that most people live
    > with the default environment, but I think it's only partly
    > because they don't know how to customize it.  We've done
    > some user testing and, surprisingly, people either prefer
    > the default environment or just don't want to take the
    > time to make it special.  This is particularly true of
    > people like admins, marketing, etc.

    Testing whether people actually do customize their
    environment is beside the point.  Of course they don't.  In
    order to do it, I have to write C code using bizarre
    features of Xview, exercise all my shell wizardry, and
    dredge up undocumented features of the system from the
    source.  And you're suprised when admins can't do this?  I
    don't expect admins to do it.  But I do expect ISVs and
    Sun's SEs to be able to do it, and right now they can't.

    PS - I notice that someone filed a bug today pointing out
    that even your example of dropping a mail message on CM
    doesn't work if CM is closed.  That's a symptom of the kind
    of arrogance that all the deskset tools seem to show -
    they're so whizzy and important that they deserve acres of
    screen real estate.  Why can't they just shut up and do
    their job efficiently and inconspicuously?  Why do they have
    to shove their bells and whistles in my face all the time?
    They're like 50's American cars - huge and covered with
    fins.  What I want is more like a BMW, small, efficient,
    elegant and understated.  Your focus on the whizzy demos may
    look great at trade shows, but who wants to have their tools
    screaming at them for attention all the time?  It's like
    having a Roy Lichtenstein painting on your bedroom wall.

    ----- End Included Message -----

    From: DH
    Date: Tue, 22 Oct 91 12:43:59 PDT
    Subject: Forwarding Messages

    Please don't forward that message about the deskset around.
    It is only for the enjoyment of people who truly hate Unix.
    Since the incredible bogosity of Unix is one of the best
    kept "open secrets" of the industry, I assumed that this was
    a relativly private mailing list.

    From: DC
    Date: Tue, 22 Oct 91 12:53:32 -0700
    Subject: Getting People Fired

      Well, it also made you some friends here.

    Yes; apparently quite a number of them.  Perhaps in fact it
    was a net gain!

    From: DH
    Date: Tue, 19 Nov 91 08:27:49 EST
    Subject: Once Again, Weenix Unies Reinvent History

    Yesterday Rob Pike from Bell Labs gave a talk on the latest
    and greatest successor to unix, called Plan 9.  Basically he
    described ITS's mechanism for using file channels to control
    resources as if it were the greatest new idea since the
    wheel.

    There may have been more; I took off after he credited Unix
    with the invention of the hierarchial file system!

The other archive contains the whole 1990 thread about ESR rewriting the JARGON file!

https://gopherproxy.meulie.net/sdf.org/0/users/kickmule/unix...

    Date: Fri, 14 Dec 1990 13:54 EST
    From: DH
    Subject: Re: MC:HUMOR;JARGON >

    The JARGON file is being updated.  The guy doing so has
    changed the nasty references to Unix to refer to MS-DOS
    because "all the ITS partisans have now become Unix
    partisans, since the Unix philosophy is the same as the
    ITS philosophy." as he says.

    Date: Fri, 14 Dec 1990 16:57-0500
    From: KP
    Subject: Re: MC:HUMOR;JARGON >

    Isn't there some pending federal law against colorizing
    things that were originally black and white?  Perhaps we
    should each call our congresspeople and lobby for its
    immediate passage of that so we can go after this vandal in
    court...

    Date: Mon, 17 Dec 90 15:04:18 EST
    From: MT
    Subject: Re: MC:HUMOR;JARGON >

    This guy is also a flaming political loony, so make sure to
    mention that you're an agent of the international communist
    conspiracy if you write to him (unless you're trying to be
    persuasive, in which case you should claim to be a sworn
    enemy of the ICC).

    Date: 27 Dec 90 07:17:26 GMT
    From: ESR
    Newsgroups: alt.folklore.computers

    For the record, I have been a GNU contributor and supporter since 1982.

    From: DH
    Date: 28 Dec 90 22:08:22 GMT

    Um, for the record, there wasn't any GNU project in 1982.

    Date: Tue, 1 Jan 91 17:25:11 EST
    From: RS
    Subject: This Is What Worries Me

    Eric the Flute's credibility has just taken another
    nose-dive...  you would expect someone who claims to be a
    purveyor of net history and lore to at least get the dates
    right when he's lying...

    From: ESR
    Date: Tue, 1 Jan 91 19:44:17 EST
    Subject: Re: This Is What Worries Me

    In recent spewage, dh calls me a liar for writing that
    I've been a GNU contributor and supporter since 1982.

    My history with GNU spans its entire lifetime to date (most
    recently, RMS accepted an SCCS control mode into the
    libraries for v19 EMACS).  It's possible that the project
    had an official start time that postdates 1982, but I was
    involved before that, when the project was just a gleam in
    RMS's eye (by 1982 I'd already known him for four years).

    He's welcome to ask RMS when I volunteered the code that
    became GNU sed.  That was before the project had even
    definitely settled on the GNU name.

    RMS may even recall that I was one of the first people --
    possibly *the* first -- to urge that an EMACS implementation
    be the flagship product of his embryonic project.

    I can only assume that gumby's remarks are a function of
    ignorance and his expressed distate for the Jargon File
    revision I am currently undertaking.  I leave it for others
    to judge whether any juxtaposition of the above facts with
    GNU's official history justifies a vindictive flame.  --
  >>eric>>

    From: AB
    Date: Tue, 1 Jan 91 21:09:08 EST
    Subject: But DH didn't know Mr. Raymond was on unix-haters!

       For the moment I have removed Mr. Raymond from
    unix-haters.  If anyone wants to put him back, they should
    first give him a crystal clear explanation of the boundaries
    of discussion, and they should also warn the rest of us in
    advance so that we can avoid repeating dh's faux pas.

       Personally, I think it's kind of inhibiting to have
    "prominent Unix personalities" on unix-haters.  It's hard to
    work up a really good flame about clueless, twinkie-crazed,
    brain damaged unix weenies when you have to worry that you
    might offend someone.

    This is not a list for the faint of heart.  Accuracy is
    -not- required on unix-haters.  Unix-haters requires only
    hatred.

    (Of course accuracy does have its role.  Nothing improves a
    really good anti-Unix rant more than the knowledge that the
    eye-popping misfeature in question really does work
    -exactly- as described.)


> Software built with tweezers and glue is not robust, no matter how many layers you add. It's terribly fragile, and needs armies of workers constantly fixing its millions of cracks and holes.

Dude, come on. I really don't think anyone who actually worked on a medium to large Lisp codebase would write something like this.


The author of the linked blogpost, Mark Tarver, has spent over a decade developing his own Lisp dialect, "Shen", and his own symbolic AI system, "The Logic Lab".


I have no say in the matter, but I think it's a question of crowd and sociology rather than of tools. I've seen people go into shock at the first mention of Smalltalk. "Message not understood," they said.

These languages are powerful weapons: for some they allow creating suns, for others they end in a self-damaging nuclear meltdown.


There is so much to unpack here, but I am going to focus upon one thing: interpretation.

The historical recounting in the article coincides with my understanding of the history of computing. The point about victors writing the history is certainly valid. While this is not literally the case, their success has lent a sense of legitimacy to their versions of history. Our acceptance is also shaped by our own experiences, which favour the victors simply because they won our acceptance back in the day.

Yet I also think that the "lowest bidders" interpretation is deeply flawed for two reasons. One reason is that the success of the successors was built upon making computing more affordable to a broader base of customers. That is to say that there is a lower end to the cost of a vacuum tube based computer, which is higher than the cost of a transistorized computer, which is higher than one based upon integrated circuits, which is higher than one based upon microprocessors. The goal at each stage isn't necessarily to be the lowest bidder, but it is to sell computers at a lower price point than the prior generation.

The second issue is that customers rarely needed the best technology at the time. Such an ideal would force many adopters into bankruptcy. The real point is to adopt technology in a way that improves efficiency, improves accuracy, or enables them to do more. That does not necessarily imply they need the best technology. It does imply they need technology that strikes a cost-benefit balance. The best is often the enemy of the good because it does not strike that balance. In a sense, that suggests that the best is not actually the best (i.e. it fails on some criteria).

Of course the real point of the article is that some technologies were delayed and some have never been adopted at a large scale. The delays were necessary, not just because they required more sophisticated software but because they required time to implement. To understand that, you only need to look at our mobile devices. We are more than a decade past the introduction of the smartphone, yet the software has yet to reach the sophistication of software on traditional desktop computers. While some of this is due to shifting priorities and the challenges of adapting traditional interfaces to fit within new ones, part of it is also due to the effort required to port software that may have 20 years of development behind it to a new platform. The second crack at a problem may be easier, but it is not easy.

As with the author, I have much disappointment in the paths not taken. Or, to be more accurate, the paths so rarely taken that the industry can, by and large, be considered to be striving towards homogeneity. While I have no real love for LISP[1], Smalltalk helped me to understand object-oriented programming much more deeply. While I have never truly pursued them, other oddities helped me realize that there is a plurality of paths the technology could have followed (Oberon comes to mind).

As for retrocomputing, I truly appreciate those who choose to document the history of computers. It has guided my understanding that what we have today is not a foregone conclusion, and that our pet technologies might have taken a very different path (and, perhaps, a better one) had the various detours been followed through to their conclusion. That said, I also believe that the path we have followed is viewed with too much cynicism by some. It would have been better to have more diversity in our toolkit than to prefer one approach over another.

[1] Okay, I'm weird. I love the syntax of LISP, brackets and all, while I can appreciate the benefits of functional programming. I simply haven't crossed the threshold to view it as a useful language, even in an abstract way.


> The story about evolution is totally wrong. What really happened is that, time after time, each generation of computers developed until it was very capable and sophisticated, and then it was totally replaced by a new generation of relatively simple, stupid ones. Then those were improved, usually completely ignoring all the lessons learned in the previous generation, until the new generation gets replaced in turn by something smaller, cheaper, and far more stupid.

Beautifully said. But for the last couple decades not even that is true. It's just been more and more of the same OSes, doing basically the same things. What it means to develop a native app is pretty fundamentally unchanged since Windows 3.11 or ISX days.

Software has changed, but only on the neo-mainframes. All the groupware and multi-tenant and scale-out software systems the author decries as lost? Rebuilt as online systems, powered by communicative capitalism and vast data-keeps ("the cloud").

> We run the much-upgraded descendants of the simplest, stupidest, and most boring computer that anyone could get to market. They were the easiest to build and to get working.

Unsaid here, I think, is where we really got onto the fixed path, and what begat the homogenization. Everyone seemed to have been competing to build better hardware+software+peripheral systems against everyone else (with some close alliances here and there).

1989 changed everything. The EISA bus was the reformation of a thousand different competing computing industries into a vaster cohesive competitive ecosystem, via the Gang of Nine. https://en.wikipedia.org/wiki/Extended_Industry_Standard_Arc... . This commodified what used to be closed systems. Everyone either adapted to the homogenized common systems, or, one by one, the DECs and Suns and every other computer maker folded or got sold off.

The business model, to me, defines what happened. Giving up on being proprietary, giving up on controlling "your" ecosystem, was the hinge point. Computing had been boutique, special systems competing on unique and distinct features and capabilities. With the rise of interchangeable computing parts it transitioned to a more collaborative & much more competitive system. Distinction carried the risk of being too far afield, of your stuff not being compatible with the others. You had to play in the big tent. Versus all the other upstarts playing against you. Meanwhile the legacy featureful, unique & distinct business-model players could never get enough market share to survive, could certainly never compete downmarket, and slowly had their positions eroded further and further upmarket until those last upmarket niches collapsed too.

The love of Lisp here has many of the same pitfalls. Sure, it's an elegant, modifiable system, one that can be shaped like a bonsai tree into distinct and beautiful forms. And I hope, I hope very much, that we make software soft again, that the big overarching ideas of cohesive computing, where OS and apps and the user's will blend together in dynamic fashion, rise again. But the peril is disjointedness. The peril is a lack of cohesion, where different users have vastly different bespoke systems and commonality is lost.

I'm tempted to stop here while I've said little that feels controversial. Putting in anything people don't want to hear risks the greater message, which is that the new ways could arise via many, many means. But I do think: the brightest, most malleable, most user-controlled system we have is the web. Userscripts are very powerful, and they grant us power over disjointed, chaotic industrial systems which have no desire to let the user shape anything.

I believe that, with a little effort, with tools already available like web components, we can build very sympathetic forms of computing. HTML, like LISP, can make visible and manifest all the component pieces of computing; it can be an S-expr-like system, as we see with modern front-end-derived router technology, for example. The whole web might not be terraformed in our lifetime, and we might not dislodge the contemporary industrial web practices enough to make all the web excellent, but I think many stable, wonderful media forms that spread and interoperate can take root here; I think they already resemble the alive system of the gods the author so nicely speaks to. I too speak for malleable systems everywhere (not mine, but https://malleable.systems ), and for the web as one conduit to bring us closer to the gods.
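
To make the web-components idea a little more concrete, here is a minimal sketch, assuming a plain browser page; the element name "note-card" and everything else in it is purely illustrative, not anything the comment above proposes:

    // A tiny custom element a userscript could register on any page.
    // The markup stays visible and reshapeable, which is the point being argued.
    class NoteCard extends HTMLElement {
      connectedCallback() {
        const title = this.getAttribute("title") ?? "untitled";
        this.attachShadow({ mode: "open" }).innerHTML = `
          <style>:host { display: block; border: 1px solid #888; padding: 0.5em; }</style>
          <strong>${title}</strong>
          <slot></slot>`;
      }
    }
    customElements.define("note-card", NoteCard);

    // Usage in ordinary HTML, which remains an editable tree much like an s-expression:
    //   <note-card title="malleable">Anyone can inspect and reshape this.</note-card>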


First, your comment was a nice addition to the lproven article and was thought-provoking.

>> The story about evolution is totally wrong.

Actually, this is exactly the story of evolution. Mammals were the simple, stupid things that showed up while the MUCH smarter and more capable dinosaurs ruled the world. But there is a HUGE amount of homology in the biological tree of life. That is true in computer evolution too - but more in core hardware and less in platforms and software, which is what lproven was mostly writing about. Core hardware evolves more like basic metabolic and signaling pathways and channels - very slowly and with stability. Hardware platforms evolved in pretty dramatic step functions. Some of those steps did effectively obsolete everything that came before - like the EISA bus. But computer software is very much like human software - it can be largely rewritten in an evolutionary blink of the eye.

> But for the last couple decades not even that is true.

It's not true in computers for the same reason it's not true with humans in ecological evolution. We've won. There will likely be no next evolutionary step biologically. That's a change from the history of the last several hundred million years, and one whose ramifications we don't yet fully understand. In computer tech, it's why we now have the Magnificent Seven, and why it differs from the Nifty Fifty. The Nifty Fifty didn't have an evolutionary moat.

I was very actively in the computer industry from 85 to 2000 and witnessed first-hand the very Darwinian evolution at work in computer tech. My first job (88-90) was at Westinghouse building a bespoke nuclear plant operating system running on minicomputers and Sun workstations. Then a new species arrived running on EISA bus and Windows OS and they kicked our ass even though we'd been doing that type of work for two decades.

I also saw it at my second job (91-95 - studied neuroscience in between) at Dataviews - the dominant computer workstation software vendor at the time. We ran on all the major and most of the minor Unix workstations. We'd port the software to your workstation for about $100K and had many takers - which meant that it ran on basically everything. But it didn't run on Windows. And so we hit a cost/productivity wall and had Windows-native companies kick our ass, just as had happened at Westinghouse.

The web arrived and HTML/Javascript kicked the ass of everything else. I was an X/Motif expert and trainer in the early 90s. Great tech. Dead end, because its license couldn't compete against free.

Is machine learning different? Certainly, there have been some step functions ("Attention is all you need") but mostly it's been a slow evolution of Moore's Law and better backpropagation stacks. I did my early independent research in AI at CMU in '88 and mostly ran my backpropagation models in Excel. Biologically, this isn't so different from a slug brain vs a human brain. Same basic hardware but 100,000x more capable. If I'd told my 20-year-old self that in 40 years we'd have backprop models with a trillion parameters that could run on PCs, I'd have said "yeah right - in some very distant future". But it was/is a pretty distant future.


This is one angle of the history and I do think it is a useful and accurate representation for what it describes.

My critique: this lacks context regarding the still-ongoing conflict between terminals and functional workstations. Time and again there is a back-and-forth in the computing realm. Case in point: I was selling SaaS deployments of what had traditionally been a large-cost, on-premises platform.

Who gets control is, to me, fundamentally a more significant angle on the history and future of computing. Again, will AI be on your phone, or reached via network calls to a host? Cost is a factor, but I believe it is quite downstream from how these decisions are truly reached at the time.


Local and personal vs remote and centralized tends to follow the same pattern as the hardware and OSes that this article describes. The wheel turns when the previous iteration gets complex and expensive.

Cloud offered an escape hatch from the nightmare of Microsoft-centric enterprise IT and expensive, complicated on-premises business applications, not to mention the awful bureaucracy of IT departments themselves.

Now I get the sense that cloud has become complicated, expensive, and encrusted with hordes of consultants and vendors layered on top of vendors. A back to local anti-cloud movement has been brewing for a while but hasn’t quite boiled over.

You’ll probably soon start seeing more and more articles about how much money a company saved by leaving cloud.

… and the wheel will turn once again.


> Local and personal vs remote and centralized tends to follow the same pattern as the hardware and OSes that this article describes. The wheel turns when the previous iteration gets complex and expensive.

I've been looking for a name for such a pattern: compression/diffusion based on cost, scale, and market. It's so prevalent.


Is this "Worse is Better", but in the snarky and obnoxious style of The Register?


The answer to this is in the article.


After reading 30% of the article, it became clear to me that the author has probably never created a GitHub account.


You didn't look very hard.

https://github.com/lproven


I’d be interested to hear how you reached that conclusion, and what that conclusion is meant to imply.


This ridiculous paragraph:

“Now Lisp and Smalltalk are just niche languages – but once, both of them were whole other universes. Full-stack systems, all live, all dynamic, all editable on the fly.

The closest thing we have today is probably JavaScript apps running in web browsers, and they are crippled little things by comparison.”


... And what's untrue about it?

I have tasted development on Smalltalk, with Common Lisp on modern tools, and even with Symbolics Genera 8.3/8.5. The kind of environment they provide pretty much by default is one that, in the case of JS, you at best have to bring along yourself. In the case of modern Lisp, it's usually a matter of installing an IDE plugin first, though commercial offerings come with IDEs as well.

The best of "built in" support for JS in browsers pales in comparison to what was one mouse click away under Genera. Similarly most desktop/server environments except niche ones (Erlang ones? modern Lisps, Smalltalks). Even the better parts of less capable language tooling is often overlooked (advanced debugger support in things like Visual Studio, etc.)

That paragraph is nowhere near ridiculous to anyone who actually used the tools, even if they agree with the dissenting voice about NT with (D)COM that was posted to Symbolics Lisp user groups around 1995 - because the approach described there is also now often disregarded unless one develops exclusively for Windows.


LISP fetishism must be resisted every time it appears.

Programmers who learn LISP and think they have become enlightened are no different than drunks who ramble on about how it is impossible to know if the colors they see are the same as the colors you see.


I'm not a Lisp fetishist and I personally find it nearly as unreadable as Perl and APL.

The article is in fact expressing my admiration of Dylan, if you look more closely, and Smalltalk and other things of that ilk.

Lisp was a tremendously important step... In 1959. They should have continued with the planned Lisp-2, but they didn't, and sadly, McCarthy is dead now.

In its absence, Dylan is about the best candidate for a Lisp-2 we have.


As something of a Lisp head, and even though I've mellowed with time, especially about syntax, I still don't think there's anything wrong with sexps. It might be a neurological trait, some people just can't live with a syntactic layer.. their brain might rejoice in it, think clearer and faster. I, for one, am the opposite: it removes one dimension from the problem space, and structural editing is so cute. I also have a tiny personal bet that in 30 years, if people are still writing code.. someone will make sexps mainstream.. just like closures are now, just like tree processing / AST / transpiling is mainstream now.. it just takes time to cook the mainstream.
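
To ground the "tree processing / AST" point, here is a minimal sketch in TypeScript (illustrative names only, not Lisp itself) of what s-expressions make explicit: the program is literally the same nested structure you manipulate.

    // (+ 1 (* 2 3)) written directly as a tree of nested arrays.
    type SExp = number | string | SExp[];
    const expr: SExp = ["+", 1, ["*", 2, 3]];

    // A tiny evaluator: walking the tree *is* running the program.
    function evaluate(e: SExp): number {
      if (typeof e === "number") return e;
      if (typeof e === "string") throw new Error(`unbound symbol: ${e}`);
      const [op, ...args] = e;
      const vals = args.map(evaluate);
      if (op === "+") return vals.reduce((a, b) => a + b, 0);
      if (op === "*") return vals.reduce((a, b) => a * b, 1);
      throw new Error(`unknown operator: ${String(op)}`);
    }

    console.log(evaluate(expr)); // 7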


> It might be a neurological trait, some people just can't live with a syntactic layer

Genuinely, I think this is the case, and it might be one of the biggest tragedies in computing history.

There is a type of brain that can deal happily with bare S-exps and once it learns to do so, sees syntactic sugar as waste.

As a result, that type of brain dominated Lisp's evolution and so the Lisp mainstream never got past that.

But, I suspect, most brains can't. And so demonstrably inferior languages thrived because they were prettier, or sweeter if that fits.


My brain just loves reduction and homogeneity and sexps / fp hits very high on this. It's like vector math, the building blocks are simple, the rest is on you.

And at the same time, I also remember enjoying the 'magic' of PHP array/map syntax.. it felt magical (thanks to the dynamic, loosely typed semantics too, I guess). But you rapidly hit syntactic hurdles that just don't exist in function-based languages.

Then there are tribal effects: when a bunch of guys work in Python or whatever else, they'll love it and resist anything too alien (even if.. under the clothes, it's mostly a limited Common Lisp :)


> My brain just loves reduction and homogeneity and sexps / fp hits very high on this.

OK then! Good for you. With great effort I can just barely follow what is going on; it is so very much not my thing.

But we all have different strengths and weaknesses.

> Then there are tribal effects

This is so very true and much underestimated.

Unix isn't an OS: it's a mindset and a set of cultural traditions that have led multiple groups of people to re-implement the same style of OS over and over and over again.

And now, it's so pervasive that many can't imagine anything else.

I personally come from an era when rich complex GUI OSes fit into a single digit number of megabytes, and were amenable to exploration and comprehension by a single non-specialist individual in their spare time, without external materials or help.

And I think that was hugely valuable and important, and we've lost it. We threw it out with the trash, not realising how vital it was.


It's a strange sort of Lisp fetishism that wants to bring back Smalltalk instead of Lisp.


Exactly! :-)



