
DLW's thesis that Symbolics lost as part of the general losing of custom hardware (including all the parallel computer companies) is basically correct. Lucid on Suns was as fast as ZetaLisp on Symbolicses.

But not so fast that that alone made me switch. What made me switch was that Lisp machines (both Symbolics and LMI) were so gratuitously, baroquely complex. The manuals filled a whole shelf. Each component of the software was written as if it had to have every possible feature. The hackers who wrote it were the smartest and most energetic around. But there was no Steve Jobs to tell them "No, this is too complex." So the guy in charge of writing the pretty-printer, for example, would decide: "This is going to be the most powerful pretty-printer ever written. It's going to be able to do everything!"

I learned how to program a Symbolics myself, from reading the manuals. I sometimes suspected that I was the only person who'd ever done this-- that everyone else who knew how to use the damn things had either learned how from the guys who invented them, or had learned from someone else who had. The libraries were so hairy that I generally found it was faster to write something myself than find and understand the documentation for the built-in way of doing it.

Unfortunately this complexity persists in Common Lisp, which was pretty much copied directly from ZetaLisp. In fact, the two worst flaws in CL are due to its origins on Lisp machines: its complexity and the way it's cut off from the OS.




It's true that the system was feature-laden. I think this was more true of the APIs than the user interfaces, though, and so I'm not sure that the Steve Jobs reference is exactly appropriate. Steve Jobs is making consumer products; most customers don't care much about the APIs.

It was also featureful because we didn't know which features were the ones that would turn out to be most useful; if there had been a second generation, we could have pruned out some of the stuff that never really got used. It was something of a "laboratory" that way.

Also, the kind of people who used Lisp machines, generally early adopter types, really did ask for amazing numbers of features. If you had been there, you would have experienced this. We wanted to make all our users happy by accommodating all their requests. It's probably similar to the reason that Microsoft Word has so many features. Everyone thinks there are too many and has a long list of the ones they'd get rid of; but everyone has a different list! I think Joel Spolsky wrote something very convincing about this topic once but I can't remember where.

Lucid on Suns was eventually as fast, if you turned off a lot of runtime checking and put in a lot of declarations. Later it was even fast if you didn't do that; the computational ecosystem had changed a whole lot since the Lisp machine was originally designed. You have to remember how old it was. At the time it came out, it was very novel to even suggest that every AI researcher have his or her very own computer, rather than timesharing! That's early in the history of computers, by today's standards.
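
To make concrete what "declarations and turned-off checking" looked like, here is a small sketch; the function and the types in it are purely illustrative, not anything from Lucid's manuals:

    ;; Tell the compiler to favor speed over safety, and declare the
    ;; argument types, so a native-code CL compiler can open-code the
    ;; float arithmetic instead of doing generic dispatch and bounds checks.
    (defun dot-product (a b)
      (declare (optimize (speed 3) (safety 0) (debug 0))
               (type (simple-array double-float (*)) a b))
      (let ((sum 0.0d0))
        (declare (type double-float sum))
        (dotimes (i (length a) sum)
          (incf sum (* (aref a i) (aref b i))))))

Without the declarations, a conforming compiler has to allow for any numeric type and check every access; the Lisp machine's answer was to do that type checking in tagged hardware instead.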

No, we didn't teach all of our customers personally, although we did have an education department that taught courses, and some of them learned that way. There were classes in Cambridge and in San Francisco. Allan Wechsler designed the curriculum, and he's one of the best educators I have ever met. (My own younger brother worked as a Symbolics teacher for a while.)

Common Lisp is complicated because (a) it had to be upward-compatible with very, very old stuff from Maclisp, and (b) it was inherently (by the very nature of what made it "Common") a design-by-committee. For example, consider how late in the lifetime of the language object-oriented programming was introduced. (Sequences and I/O streams should obviously be objects, but it was too late for that. CLOS wasn't even in the original CLtL standard.)
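
To make the sequences/streams point concrete, here's a small sketch (the class and generic function are made up for illustration): CLOS gives you generic dispatch for your own classes, but the standard sequence functions predate it and are ordinary functions, so they can't be extended to new sequence-like types.

    ;; CLOS dispatch works fine for user-defined generics...
    (defclass ring-buffer ()
      ((items :initform (make-array 8) :accessor items)))

    (defgeneric size (thing))
    (defmethod size ((rb ring-buffer))
      (length (items rb)))

    ;; ...but ELT, LENGTH, MAP, etc. are not generic functions in
    ;; standard CL, so (elt some-ring-buffer 0) just signals a type
    ;; error rather than dispatching to a method you could supply.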

In other words, I'm mainly not disagreeing with your points, just explaining how things got that way.


Have you ever tried to use the Windows API and COM stuff? Often even a simple task requires several pages of complex and cryptic code. There is fine documentation for each function, but barely anyone can write meaningful programs using only that documentation; you need code samples to do it. And the problem is not that the APIs are too low-level -- they are just too complex.

I never had the chance to use Lisp Machines, but comparing CL to the WinAPI/C++ stuff, I can say CL is absolute ease, joy, and bliss.

Even pure C++ (w/o WinAPI) gets very cryptic with modern template-metaprogramming stuff.

So... WinAPI and C++ were quite popular and prolific in the '90s, despite their weirdness and complexity. Maybe complexity isn't actually such a problem in itself?


I never used LMs, but I wrote my dissertation in CL. What a monster (and this was after 5+ years of Lisp programming, writing a couple of interpreters, and teaching it). A baroque morass, similar to X in complexity. (Btw, if you're comparing graphics APIs, X is a heck of a lot worse than the Windows API.)

To those commenting upon C++ and its metaprogramming complexity -- a lot of your complaints here really focus on the complexity of metacode WRITTEN in C++/templates, not on the template structure itself. Even with the STL, C++ is simpler than, and as rich as, CL. If you exclude the paradigm shift, of course...


C++ is simpler and richer than CL? I've worked in both languages and would say exactly the reverse. In fact, I'm stunned by this statement; I would have thought it impossible from anyone with sufficient experience in both. Our minds must work very differently!

My feeling when I started working in Lisp (CL) was that of a fish who had spent years swimming in oil, glue, and other fluids, finally experiencing what it was like to swim in water. C++, by contrast, was my wet cement (and cement doesn't stay wet for very long). So while it makes sense to me that a Lisp connoisseur would find CL overly complex compared to other Lisps, I am flummoxed by the idea that anyone would think this in comparison to C++.

Programming language preference is far from rational. Not only is there a strong intellectual bias toward what one already knows (viz. Blub), there is a strong emotional bias toward what one /likes/, and that is conditioned by personal experiences. For example, if a language is imposed on you (by a professor or manager), you'll probably dislike it and look for things to hate about it. Conversely, if you discover something for yourself and associate it with the freedom to do and create what you want, you'll probably feel strongly about defending it against criticism. Of course, we spend a lot of time arguing that our emotionally driven choices are strictly rational. Such is human nature.

I don't think anyone is immune to this, and that's why working in a language that one loves will always be more important than working in an objectively better language (by some criteria).


I find it hard to imagine _any_ API more nightmarish than Win32 + COM + MFC + ....


As someone who has never used a Lisp machine, it interests me to hear about the complexities of the machine. I have been considering buying one, perhaps through the process detailed here: http://fare.tunes.org/LispM.html. I don't think I'd use it for any active development, but it would be a fun hobby tool to hack around on. Anybody here besides Paul have much experience on a Symbolics LM?


Oh come on, Common Lisp isn't that complicated...


You're arguing with the guy that wrote the book, _ANSI Common Lisp_. Just a comment. :)


I know :-) Are his gripes with it documented somewhere then? Or is that Arc?


It looks like you're trying to address the first problem (complexity) in Arc. Any attempts on the second (OS support)?



