
I worked in an expert systems company in the 80s (one mentioned in the Phillips thesis referenced in the danweinreb article). Take all this with a dose of IIRC.

As part of our work, we evaluated and benchmarked Xerox Interlisp machines, Symbolics systems, VAXen, later Gold Hill, etc., to find a cost-effective delivery platform. We even eventually funded the development of a delivery-focused subset of Common Lisp.

One aspect that Symbolics didn't seem to understand back then was cost of entry and deployment: the Xerox D-machines were (IIRC) around 1/3 the cost of the Symbolics. Perhaps not as speedy, but adequate for our day-to-day development work as well as for the end customer's needs.

Symbolics had great development systems, but the delivery answers were late in coming; too late to help us.

There's lots more to be said about the late 80s collapse of AI (ES) applications and expectations, but the margins here are too small to contain it....




There's lots more to be said about the late 80s collapse of AI (ES) applications and expectations

On that subject, Richard Gabriel [1] writes about his experiences as a founder of Lucid, which produced Common Lisp for regular Unix workstations, in the "Into the Ground: Lisp" chapter of his book, "Patterns of Software", which is available as a free PDF from his web site [2].

(Lucid's pivot to developing a C++ environment is covered in the "Into the Ground: C++" chapter).

There's some interesting history there (and the rest of the book is probably worth reading as well for a variety of reasons).

[1] https://en.wikipedia.org/wiki/Richard_P._Gabriel

[2] http://dreamsongs.net/Files/PatternsOfSoftware.pdf


Lucid CL was a very nice implementation, and it's still maintained: it's now called Liquid CL, and the maintenance comes from the LispWorks guys.

Lucid took the money they earned and invested it into some ill-fated and ill-designed C++ environment. Their Lisp competitors from that time, Franz Inc. and LispWorks, are still in business.


Your memory of the costs of the systems is fine.

I couldn't afford a Lisp Machine; I just used Franz Lisp on an Atari ST.


You did benchmarking so you may be able to confirm/deny this: I've heard that Lisp machines went out of favor because Lisp just ran faster on a VAX. Was this the case?


Besides horrible management, they were killed by non-recurring engineering (NRE) costs. (Echoing Zigurd, a friend and contemporary: I was at one point reliably told that manufacturing, R&D, and marketing paid no attention to each other, such that manufacturing had built a factory that couldn't make the latest hardware R&D had developed, which in turn had been designed completely independently of what marketing thought was needed.)

Basically, they couldn't amortize the NRE for their custom hardware, and later most especially their chips, across the huge number of units that Intel and Motorola sold. (From memory: first a chipset that spread the CPU across several chips, rather like the one Western Digital did that, among other things, enabled the LSI-11; then of course an all-in-one chip.) They also canceled their RISC rethink of their basic low-level architecture on the day it was supposed to tape out; I don't know if it had a low enough gate count to be like the first SPARC processor, which was implemented on two 20,000-gate gate arrays, one for the CPU and one for the floating-point unit (gate arrays are all alike until a few layers of metal are put on top).

So soon enough you could run a full Common Lisp, almost certainly without as much run-time error checking, faster on cheap commodity hardware than on a Lisp Machine.

Something like that seems to have happened to Azul Systems, which apparently isn't developing any new hardware, but is selling their version of the HotSpot JVM to run on x86_64 hardware. A prior generation of their pauseless GC (pauseless vs. pauses of roughly 1 second per GiB of heap, a big deal if your heap is 100s of GiB), when run on stock hardware, required a software write or read barrier that cost ~20% of performance (all this from memory). It's likely that soon enough, even if it ran somewhat slower than on their custom hardware, it was a lot cheaper to run it on commodity Intel/AMD hardware.
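
For anyone who hasn't seen one, here's a minimal sketch in C of what a software read barrier of that general shape looks like. The names, bit layout, and slow path are all invented for illustration; this is not Azul's actual implementation, just the general check-and-heal pattern:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Invented bit layout: a few low-order "GC state" bits in each
       reference; in a real collector the expected value flips as the
       GC changes phase. */
    #define BARRIER_BITS ((uintptr_t)0x7)
    static uintptr_t expected_bits = 0;

    /* Invented slow path: a real collector would relocate the object
       and/or update its mark state here, then heal the slot. */
    static void *gc_fix_and_heal(void **slot) {
        void *fixed = *slot;   /* stand-in for relocate/remark work */
        *slot = fixed;         /* "self-heal": later loads take the fast path */
        return fixed;
    }

    /* The fast path runs on every reference load. On custom chips this
       check can be a single instruction; in software it's an extra
       test+branch per load, which is where a throughput penalty on
       commodity CPUs comes from. */
    static inline void *read_barrier(void **slot) {
        void *ref = *slot;
        if (((uintptr_t)ref & BARRIER_BITS) != expected_bits)
            ref = gc_fix_and_heal(slot);   /* rare: fix the ref, heal the slot */
        return ref;
    }

    int main(void) {
        void *obj = malloc(16);            /* aligned, so low bits are 0 */
        void *slot = obj;
        printf("loaded %p\n", read_barrier(&slot));  /* takes the fast path */
        free(obj);
        return 0;
    }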


I can't find a reference for this, but I seem to recall that Azul uses virtualization features of modern CPUs to decrease the read-barrier overhead; if that is correct, then that's a case of the general-purpose hardware fortuitously getting features to out-compete special-purpose hardware.


No, the older version of their GC used bulk VM operations, but not virtualization features, and there was still a penalty reported ... errr, I can't find it now. Probably in a Clifford Click blog posting. I just skimmed the new edition of Jones' GC book (http://www.amazon.com/gp/product/1420082795/), published before it could consider the newer one; it talks about the changes needed on stock hardware but I didn't see any estimation of costs while glancing through it. (I'm not searching any more right now because it's obsolete.)

The base papers are:

Pauseless GC, which uses a read barrier instruction in their custom 64-bit RISC chips: https://www.usenix.org/legacy/events/vee05/full_papers/p46-c...

And the newer one, the Continuously Concurrent Compacting Collector (C4), which they're using both on that old hardware and in the software-only Zing JVM on commodity hardware. I have not studied it (the paper was published 2 weeks after the Joplin tornado trashed my apartment and rather disrupted my life): http://www.azulsystems.com/sites/default/files/images/c4_pap...

It's possible that in C4 they figured out how to minimize or eliminate the penalties of the original software read barrier they applied to the Pauseless system (or perhaps the difference lies between their custom and the newer commodity hardware); I just did a quick skim of the relevant part of the C4 paper plus a few keyword searches and couldn't tell.

This is all great stuff that I hope to get back to soon....


I don't have any of the old papers / results. But my recollection is that the Lisp workstations greatly outperformed Lisp on a contemporary VAX for a given cost, in part because of the workstations' microcoded instruction sets tailored to Lisp. Over time, though, commodity hardware increased significantly in relative performance simply due to the economies of scale in producing it.
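
To make "tailored to Lisp" concrete, here's a hypothetical sketch in C of what a safe generic addition costs on stock hardware, using an invented 2-bit tag scheme (not any particular implementation's layout). The Lisp machines did this kind of type dispatch in microcode, overlapped with the operation itself, so fully checked code paid little or nothing extra:

    #include <stdint.h>
    #include <stdio.h>

    typedef uintptr_t lispobj;

    /* Invented tag scheme: low 2 bits are the type tag; fixnums are
       tagged 00 with the integer value shifted left by 2. */
    #define TAG_MASK        ((lispobj)0x3)
    #define FIXNUM_TAG      ((lispobj)0x0)
    #define MAKE_FIXNUM(n)  ((lispobj)(n) << 2)
    #define FIXNUM_VALUE(x) ((intptr_t)(x) >> 2)

    /* Invented slow path: bignums, floats, ratios, or a type error. */
    static lispobj generic_add_slow(lispobj a, lispobj b) {
        (void)a; (void)b;
        return MAKE_FIXNUM(0);   /* stand-in only */
    }

    /* What a safe generic (+ a b) costs on stock hardware: tag checks
       and a branch before the add (overflow check omitted for brevity). */
    static inline lispobj lisp_add(lispobj a, lispobj b) {
        if (((a | b) & TAG_MASK) == FIXNUM_TAG)   /* both fixnums? */
            return a + b;       /* tags are 00, so the sum stays tagged */
        return generic_add_slow(a, b);
    }

    int main(void) {
        lispobj r = lisp_add(MAKE_FIXNUM(2), MAKE_FIXNUM(3));
        printf("%ld\n", (long)FIXNUM_VALUE(r));   /* prints 5 */
        return 0;
    }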

You might've been able to buy a Xerox workstation for, say, $15K-$20K, while a VAX box was over $100K. BUT for a production system, the VAX could run multiple Lisp processes at the same time. (I'm guessing at costs here; it's been too long.)

Also, Richard Gabriel did a bunch of Common Lisp benchmarks; it might help to look for them. (Great fellow.)

The real threat to Symbolics et al., circa 1987, was things like Gold Hill Common Lisp running on, say, an IBM PC/AT with a 286 chip and maybe a meg of memory. At, perhaps, $3,000. It ran pretty fast; the Gold Hill tech folks were very good. But GHCL had an unsophisticated development environment compared to Symbolics / Xerox.

As excellent as the workstation environments were for development, a market demands that you eventually deliver a cost effective product into customers' hands. (Or is that too much old-think?)



