Hacker News
Why did Symbolics fail? (dlweinreb.wordpress.com)
31 points by zachbeane on Nov 16, 2007 | 13 comments

DLW's thesis, that Symbolics lost as part of the general decline of custom hardware (including all the parallel computer companies), is basically correct. Lucid on Suns was as fast as ZetaLisp on Symbolicses.

But not so fast that that alone made me switch. What made me switch was that Lisp machines (both Symbolics and LMI) were so gratuitously, baroquely complex. The manuals filled a whole shelf. Each component of the software was written as if it had to have every possible feature. The hackers who wrote it were the smartest and most energetic around. But there was no Steve Jobs to tell them "No, this is too complex." So the guy in charge of writing the pretty-printer, for example, would decide: "This is going to be the most powerful pretty-printer ever written. It's going to be able to do everything!"

I learned how to program a Symbolics myself, from reading the manuals. I sometimes suspected that I was the only person who'd ever done this-- that everyone else who knew how to use the damn things had either learned how from the guys who invented them, or had learned from someone else who had. The libraries were so hairy that I generally found it was faster to write something myself than find and understand the documentation for the built-in way of doing it.

Unfortunately this complexity persists in Common Lisp, which was pretty much copied directly from ZetaLisp. In fact, both of the worst flaws in CL are due to its origins on Lisp machines: both its complexity and the way it's cut off from the OS.

It's true that the system was feature-laden. I think this was more true of the APIs than the user interfaces, though, and so I'm not sure that the Steve Jobs reference is exactly appropriate. Steve Jobs is making consumer products; most customers don't care much about the APIs.

It was also featureful because we didn't know which features were the ones that would turn out to be most useful; if there had been a second generation, we could have pruned out some of the stuff that never really got used. It was something of a "laboratory" that way.

Also, the kind of people who used Lisp machines, generally early adopter types, really did ask for amazing numbers of features. If you had been there, you would have experienced this. We wanted to make all our users happy by accommodating all their requests. It's probably similar to the reason that Microsoft Word has so many features. Everyone thinks there are too many and has a long list of the ones they'd get rid of; but everyone has a different list! I think Joel Spolsky wrote something very convincing about this topic once but I can't remember where.

Lucid on Suns was eventually as fast, if you turned off a lot of runtime checking and put in a lot of declarations. Later it was even fast if you didn't do that; the computational ecosystem changed a whole lot since the Lisp machine was originally designed. You have to remember how old it was. At the time it came out, it was very novel to even suggest that every AI researcher have his or her very own computer, rather than timesharing! That's early in the history of computers, by today's standards.
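For readers who never used a commercial CL: the "declarations" in question are Common Lisp's standard optimize and type declarations, which let the compiler drop runtime checks and open-code arithmetic. A minimal sketch (the function and variable names are hypothetical; the declaration forms themselves are standard CL):

```lisp
;; Trade safety for speed globally -- the kind of setting Lucid users
;; relied on to close the gap with Lisp-machine hardware.
(declaim (optimize (speed 3) (safety 0) (debug 0)))

(defun dot-product (a b)
  ;; With the element type declared, the compiler can emit unboxed
  ;; float arithmetic instead of generic dispatch with type checks.
  (declare (type (simple-array double-float (*)) a b))
  (let ((sum 0.0d0))
    (declare (type double-float sum))
    (dotimes (i (length a) sum)
      (incf sum (* (aref a i) (aref b i))))))
```

On a Lisp machine the hardware did the type checking in parallel with the computation, which is why this kind of annotation was unnecessary there.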

No, we didn't teach all of our customers personally, although we did have an education department that taught courses, and some of them learned that way. There were classes in Cambridge and in San Francisco. Allan Wechsler designed the curriculum, and he's one of the best educators I have ever met. (My own younger brother worked as a Symbolics teacher for a while.)

Common Lisp is complicated because (a) it had to be upward-compatible with very, very old stuff from Maclisp, and (b) it was inherently (by the very nature of what made it "Common") a design-by-committee. For example, consider how late in the lifetime of the language that object-oriented programming was introduced. (Sequences and I/O streams should obviously be objects, but it was too late for that. CLOS wasn't even in the original CLtL standard.)
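The cost of CLOS arriving late is visible in the standard itself: `print-object` made it in as a generic function, but the sequence functions predate CLOS and were never made generic. A small illustration (the `point` class is hypothetical):

```lisp
;; print-object IS generic, so user-defined types can extend it:
(defclass point ()
  ((x :initarg :x) (y :initarg :y)))

(defmethod print-object ((p point) stream)
  (format stream "#<point ~a,~a>"
          (slot-value p 'x) (slot-value p 'y)))

;; But LENGTH is an ordinary function, not a generic one, so
;; (defmethod length ((s my-sequence)) ...) is not portable CL --
;; redefining standard functions is undefined behavior.
```

Had sequences been objects from the start, user-defined sequence types could have participated in `length`, `elt`, `map`, and friends the way they now can in some implementations' nonstandard extensible-sequence protocols.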

In other words, I'm mainly not disagreeing with your points, just explaining how things got that way.

Have you ever tried to use the Windows API and COM? Often even a simple task requires several pages of complex, cryptic code. There is fine documentation for each function, but hardly anyone can write meaningful programs from that documentation alone; you need code samples. And the problem is not that the APIs are too low-level -- they are just too complex.

I never had the luck of using Lisp machines, but compared to the WinAPI/C++ world, I can say CL is absolute easiness, joy, and bliss.

Even pure C++ (without WinAPI) gets very cryptic with the modern template-metaprogramming stuff.

So... WinAPI and C++ were quite popular and prolific in the '90s despite their weirdness and complexity. So maybe complexity by itself isn't actually such a problem?

I never used LMs, but I wrote my dissertation in CL. What a monster (this was after 5+ years of Lisp programming, writing a couple of interpreters, and teaching it). A baroque morass, similar to X in complexity. (Btw, if you're comparing graphics APIs, X is a heck of a lot worse than the Windows API.)

To those commenting upon C++ and its metaprogramming complexity -- a lot of your complaints here really focus on the complexity of metacode WRITTEN in C++/templates, not on the template structure itself. Even with STL, C++ is simpler than and as rich as CL. If you exclude the paradigm shift, of course...

C++ is simpler and richer than CL? I've worked in both languages and would say exactly the reverse. In fact, I'm stunned by this statement; I would have thought it impossible from anyone with sufficient experience in both. Our minds must work very differently!

My feeling when I started working in Lisp (CL) was that of a fish who had spent years swimming in oil, glue, and other fluids, finally experiencing what it was like to swim in water. C++, by contrast, was my wet cement (and cement doesn't stay wet for very long). So while it makes sense to me that a Lisp connoisseur would find CL overly complex compared to other Lisps, I am flummoxed by the idea that anyone would think this in comparison to C++.

Programming language preference is far from rational. Not only is there a strong intellectual bias toward what one already knows (viz. Blub), there is a strong emotional bias toward what one /likes/, and that is conditioned by personal experiences. For example, if a language is imposed on you (by a professor or manager), you'll probably dislike it and look for things to hate about it. Conversely, if you discover something for yourself and associate it with the freedom to do and create what you want, you'll probably feel strongly about defending it against criticism. Of course, we spend a lot of time arguing that our emotionally driven choices are strictly rational. Such is human nature.

I don't think anyone is immune to this, and that's why working in a language that one loves will always be more important than working in an objectively better language (by some criteria).

I find it hard to imagine _any_ API more nightmarish than Win32 + COM + MFC + ....

As someone who has never used a Lisp machine, it interests me to hear about the complexities of the machine. I have been considering buying one, perhaps through the means detailed here: http://fare.tunes.org/LispM.html. I don't think I'd use it for any active development, but it would be a fun hobby tool to hack around on. Anybody here besides Paul have much experience on a Symbolics LM?

Oh come on, Common Lisp isn't that complicated...

You're arguing with the guy that wrote the book, _ANSI Common Lisp_. Just a comment. :)

I know :-) Are his gripes with it documented somewhere then? Or is that Arc?

It looks like you're trying to address the first problem (complexity) in Arc. Any attempts on the second (OS support)?

1. This sounds an awful lot like the story of NeXT, from making their own custom hardware, to the super advanced software development environment that they eventually tried to sell by itself for other platforms. Too bad there was no struggling, mainstream company to buy them that they could have then taken over from the inside.

2. Halfway through, I was thinking it would be fun to contrast this with ITA as bad and good examples of how to start a Lisp company with a bunch of guys from MIT. And then he says he works at ITA! So, the lesson is: if you're going to start a Lisp company, don't make Lisp the product, make it your secret weapon.

In all fairness, the manuals that filled a whole shelf documented a lot of major applications. On my desk right now, I have a copy of the O'Reilly book on Subversion (a source control system). I have another book on Emacs. And so on. ALL of those things were covered in that shelf.

Regarding simplicity versus complexity, please see http://www.joelonsoftware.com/items/2006/12/09.html. Different people want different things; you can't just provide the common 20%.

Over the last few days, I have been surveying the WWW for criticisms of Common Lisp. The two that I see most often are: (1) it's too big, and (2) it's missing so many important features like threads, sockets, database connectivity, operating system interoperability, Unicode, and so on. Ironic, no?

It is really too bad that Common Lisp was not defined as a language core, plus libraries. We did originally intend to do that (they would have been called the "White Pages" and "Yellow Pages"), but we were under too much time pressure.

There is no question that Common Lisp is a lot less elegant than it could have been, had it been designed from scratch. Instead, it had two major design constraints: (1) it had to be back-compatible with MacLisp and Zetalisp in order to accommodate the large body of existing software, such as Macsyma, and (2) it had to merge several post-MacLisp dialects, in a diplomatic process (run magnificently by Guy L. Steele Jr.) that made everyone reasonably satisfied. It was quite literally a design by committee, and the results were exactly what you'd expect.

But the imperative was to get all the post-MacLisp implementations to conform to a standard. If we failed, DARPA would have picked InterLisp as the reigning Lisp dialect, and we would have all been in a great deal of trouble. (Look where InterLisp is today; actually there's nowhere to look.)

You wonder how other people learned to use Symbolics machines. Some of them took courses - we had an extensive education department. Before you say "that proves that it was too complicated", keep in mind that the system was very large and functional because that's what its primary target market wanted. We did not get feedback from customers saying "make it simpler"; we got feedback saying "add more features, as follows". I bet the people who maintain the Java libraries are rarely asked to remove large amounts of the libraries.

I'm not sure what the reference to Steve Jobs is about. Look at how many features the Macintosh has now. It takes a long time to learn all of them. Their documentation is much shorter because they don't give you any; you have to go to the book store and buy David Pogue's "The Missing Manual" books.

I admit that some (not most) of the complexity was gratuitous and baroque, but not because we liked it that way. The complexity (mainly the non-uniformity) of Common Lisp was beyond our control (e.g. the fact that you can't call methods on an array or a symbol, and so on). Some of the subsystems were too complex; the "namespace system" (our distributed network resource naming facility) comes to mind.

In summary, I'm sympathetic to what you're saying, but the reasons for the problems were more involved.

-- Dan Weinreb
