Another reason that Lisp machines were in high demand (despite a general lack of results) is a "natural trajectory" phenomenon. AI, and especially expert systems, were seen as the future of computing, due in large part to hype generated by Feigenbaum, a Stanford University professor. Tom Knight claims that Artificial Intelligence was oversold primarily because Feigenbaum fueled outrageous hype that computers would be able, for example, to replace medical doctors within 10 years. Additional hype was generated by the Japanese government's sponsorship of the "Fifth Generation" project. This project, essentially a massive effort by the Japanese to develop machines that think, struck a nationalistic chord in America. SDI funding was in some ways a means to hedge against the possible ramifications of the "superior" AI technology of Japan.
I went through Stanford CS when Feigenbaum was hyping away. For the hype, read his book, "The Fifth Generation". Much of the Stanford CS department was convinced that expert systems would change the world, despite having almost nothing actually working. It was pathetic. Especially the aftermath, the "AI Winter", when I once saw Feigenbaum wandering around the building amidst empty cubicles, looking lost.
Symbolics had some big technical problems. The biggest was that they didn't use a microprocessor. They had a CPU built up from smaller components. So their cost was inherently higher, their hardware was less reliable, and they didn't benefit from progress in microprocessors. Once people got LISP compilers running on Motorola 68000 UNIX workstations, LISP machines were not really needed. Franz LISP on a Sun was comparable to using a Symbolics (we had both where I worked), and the refrigerator-sized Symbolics wasn't worth the trouble. Symbolics was also noted for having a poor maintenance organization. They had to send someone out to fix your refrigerator-sized machine; you couldn't just swap boards as on workstations.
Eventually Symbolics shrank their hardware down to reasonable size, but by then, nobody cared.
There were a number of projects like that around at the time, including the Three Rivers/ICL PERQ which was optimised for PASCAL, and arguably DEC's PDP-11 range whose entire architecture was closely aligned with C and eventually C++, pointers and all.
These were all interesting machines without a future, because the early 80s were a crossover point when it turned out that model was too bloated to be sustainable. DEC scraped through the bottleneck with Alpha, but couldn't keep it together as a business. Meanwhile the 16-bit and early 32-bit architectures were eating everyone's lunch. SGI and Sun flared up in this space and died when their price/performance ratio couldn't compete with commoditised PCs and Macs - which happened sooner than almost anyone expected.
This is obvious now, but it wasn't at all obvious then. The workstation market and the language-optimised market both looked like they had a real future, when in fact they were industrial throw-backs to the postwar corporate model.
So it wasn't just the AI winter that killed Symbolics - it was the fact that both hardware and software were essentially nostalgic knock-offs of product models from 5-10 years earlier that were already outdated.
Meanwhile the real revolution was happening elsewhere, starting with 8-bit micros - which were toys, but very popular toys - and eventually leading to ARM's lead today, via Wintel, with Motorola, the Mac, and the NeXT/MacOS as a kind of tangent.
The same cycle is playing out now with massively accelerated GPU hardware for AI applications, which will eventually be commoditised - probably in an integrated way. IMO Apple are the only company to be thinking about this integration in hardware, and no one at all seems to be considering what it means for AI-enhanced commoditised non-specialised software yet.
Apple are giving it some thought, Google are thinking about it technologically, plenty of people are attempting Data Engineering - but still, the current bar for application ideas seems quite limited compared to the possibilities a personal commoditised integrated architecture could offer, because again current platforms have become centralised and industrialised.
There's a strong centrifugal and individualistic tendency in personal computing which I suspect will subvert that - and we'll see signs of it long before the end of the decade.
In the 1980s, the commercial market in electronics and computers passed the DoD market, first in volume and then in technology. This was a real shock to some communities. There were complaints of "premature VHSIC (Very High Speed Integrated Circuit) insertion" from DoD, by which they meant the commercial market using stuff DoD didn't have yet. DoD thought they were in charge of the IC industry in 1980. DoD had been the big buyer in electronics since WWII, after all. By 1990, DoD was a minor player in ICs and computing.
Symbolics was really a minicomputer manufacturer, building up CPUs from smaller parts.
They went down with the other mini makers - DEC, Prime, Data General, Interdata, Tandem, and the rest of that crowd. That technology was obsoleted by single-chip CPUs. Many of the others hung on longer, since they had established customer bases. But they were all on the way down by the late 1980s.
Up to 1987. In 1988 they switched to microprocessors.
> They went down with the other mini makers - DEC, Prime, Data General, Interdata, Tandem, and the rest of that crowd. That technology was obsoleted by single-chip CPUs.
Symbolics introduced their single chip LISP CPUs in 1988. That one was used in their workstations, boards for SUNs and Macs, and in embedded applications.
That's my Symbolics LISP Machine as a board for an Apple Macintosh Quadra:
It uses a microprocessor. The daughterboard is RAM.
I really like this quote and perspective. I think it was a byproduct of period communications - not today's ubiquitous connections.
Essentially it was a very small, even inbred community that talked mostly to itself. "Itself" including associated members at the likes of DARPA. And that was enough to get the funding (and closely related hype) ball rolling. There was little if any feedback from outside the crowd. Even the West Coast was another world to a certain extent.
I'm reminded of XKL - a Cisco founder going into business to (initially) produce modern PDP-10s in the early 90s. Because, you know, that's what the world (even that small inbred world) was waiting for.
That was Cisco’s actual business plan — they just sold a few routers (from a design they’d developed for Stanford) to get some bucks in the door while they geared up to build the PDP-10 clone.
(Obviously they never pivoted back to the original plan.)
The first "toy" computer I had that could run Lisp well was the Atari ST, I ported Franz Lisp to it. Had an 8086 machine at the same time but it didn't have a big enough address space.
The Symbolics Ivory microprocessor was introduced in 1988.
> Franz LISP on a Sun was comparable to using a Symbolics
The Lisp alternatives to Symbolics came later with commercial systems like the TI Explorer and then Allegro CL, Lucid CL, LispWorks, Golden Common Lisp, Macintosh Common Lisp.
Generally, all kinds of IDEs were common on small machines. Lisp had nice IDEs on small machines, like the early Macintosh Common Lisp, which ran usefully in 4 MB RAM on a Mac SE.
> found it easier to get things done in Franz LISP than in the rather overbuilt Common LISP systems of the era
Many others thought differently, and GUI-based Windows systems with IDEs won much of the market.
> I found it easier to get things done in Franz LISP than in the rather overbuilt Common LISP systems of the era.
Franz LISP was a dead end; it never made it to Windows as a product.
> Common LISP systems of the era were one giant application you never left.
Many of the CL systems of that era (which appeared in the mid-80s) could be used like Franz LISP, just with vi and a shell: CMUCL, KCL, Allegro CL, Lucid CL, LispWorks and many others.
Franz Inc. created Allegro CL, which ran bare-bones on Unix with any editor & shell, with GNU Emacs (via ELI), or additionally with its own IDE tools. Eventually it also ran on Microsoft Windows, including a GUI designer.
Among other things, I ported the Boyer-Moore theorem prover to Franz Lisp. It started life in Interlisp on a PDP-10. I later ported it to Common LISP, and I have a currently working version on GitHub today, for nostalgia reasons. It's fun seeing it run 1000x faster than it did back then.
SUN did not sell anything in 1980/81. The SUN 1 came to the market in mid/late 1982 as a 68k machine with a SUN memory management unit.
Basically, in 1980/1981 there was no UNIX system with a GUI on the market (i.e. commercially available) at all.
> including their rather clunky object system.
Common Lisp's object system was developed many years later. The first spec was published in 1988.
Franz LISP used the same object system as Symbolics: it shipped with an object system called Flavors.
Flavors in the Franz LISP sources: https://github.com/omasanori/franz-lisp/blob/master/lisplib/...
> I just came across “Symbolics, Inc: A failure of heterogeneous engineering” by Alvin Graylin, Kari Anne Hoir Kjolaas, Jonathan Loflin, and Jimmie D. Walker III (it doesn’t say with whom they are affiliated, and there is no date), at http://www.sts.tu-harburg.de/~r.f.moeller/symbolics-info/Sym...
> This is an excellent paper, and if you are interested in what happened to Symbolics, it’s a must-read.
His comments follow that.
If you'd like to read one of the papers that inspired him to make the change, read "The Completeness of Molecular Biology". It is a truly inspiring paper published in 1984 that is still relevant to synthetic biology.
From the paper: "Thomas F. Knight, Jack Holloway and Richard Greenblatt developed the first LISP at the MIT Artificial Intelligence Laboratory in the late 1970s."
From Wikipedia: "John McCarthy developed Lisp in 1958 while he was at the Massachusetts Institute of Technology."
Shouldn't the paper say "Symbolics LISP"?
Symbolics was a company founded later to commercialize that research, at the same time as its competitor Lisp Machines, Inc.
> The market created by funding from SDI was quite forgiving. The government was interested in creating complex Lisp programs and Symbolics machines were the leading alternative at that time. Officials who allocated funds for SDI did not demand cost-effective results from their research funds and hence the expert-systems companies boomed during this period.
Sorry to be glib but, maybe we should be grateful we got an AI winter, not a nuclear winter.
For example the DART logistics planning system written in Lisp for the Gulf War was said to have paid back all investments into AI research up to that point.
On page 14, several howlers in one, "[Word tagging] eliminates the need for data type declarations in programs and also catches bugs at runtime, which dramatically improves system reliability." Where to start? Dynamic type errors are a major cause of program failures in obligate runtime-bound languages, today. Word tagging turned out to be a dead end, easily outmatched by page tagging in normal architectures.
Higher up on page 14: "... an extra instruction in the final 386 architecture to facilitate garbage collection." What instruction? Was there one, really?
Also on page 14: Asserting novelty of virtual memory in the '80s, really?
Page 16: Calling out the importance of their proprietary debugger gives the lie to the claim on page 14.
Page 20: "Many potential customers of Symbolics were interested solely in Symbolics' software. However, customers could not buy the software without purchasing a expensive LISP machine." It looks like the authors would like to think the software was attractive. But of course any such potential customers could get the same software from MIT without a 5x$ machine hanging off. Did they?
Page 28: "The Symbolics documentation was outstanding ... former Symbolics employees still treasure them." People developing important things can't afford to spend enough attention on docs to make them outstanding, because the important things demand that attention.
And, yes, the kerning is positively abominable. No TeX?
Unable to discover the affiliation of its authors (not stated in the document itself), I can't tell where it did come from -- although it is hosted by several sites around the world, not just MIT. It shows internal evidence of font character code corruption, typical in the 2001 era of moving a document from one kind of system to another, or even just trying to change font-set. Note the prevalence of capital U's where an apostrophe belongs, and the clipped text in the second "competitive wheel" diagram. In fact, the diagrams show all the signs of being pasted in from an incompatible system: that wheel diagram is no longer circular, and the egregious box-and-pointer diagrams show the character misplacement typically exacerbated by PDF encoding.
None of that would have issued from Symbolics Press.
I may be a little defensive on the issue of paying attention to docs -- why else would I make a thoughtful response to a down-voted comment from someone with a history of them? At Symbolics, I was pleased that my technical writer colleagues shared the same title of MTS as us developers. I led the small team which developed Symbolics' document development system Concordia, and its document formatter -- which btw was contemporary with TeX and drew upon Knuth's published paragraph and equation layout algorithms. Also I personally inspected every pixel of every character (and all the pixels between) in every font used in Symbolics documentation. In a side-by-side comparison, Symbolics docs would look superior to something from 1986 TeX. One might see how even Prof. Knuth himself wished that Computer Modern had had the attentions of a professional font designer.
Finally, the paper itself explained that such in-house efforts were all of a piece with the rest of the company's engineering ethos.
And, they kerned badly. I leave to you which was the greater offense.
Also you’re confusing Symbolics software (post LM2) with the very barebones MIT CADR release. It was far better than that and the closely related LMI release. RMS rantings on this subject have only the most tenuous connection to reality (much like their originator ;).
Finally, Symbolics online hypertext docs (via Document Examiner) really were great. When one is attempting to expand the market for a powerful, yet esoteric system, good docs make that system more approachable. Sadly I think they really did run out of Lisp hackers as an audience but they did try.
An independent measure may be derived from the value placed on the code when the company went into receivership, and from how widely it found use, afterward, detached from its overpriced host machine.
It’s two different questions, to discuss the quality of their code and to discuss the value placed on it. It didn’t match the development and operational model of the tech world when they went into receivership, hence a low value.
Initial quality of code is largely independent of its importance, although important code improves. I am happy to stipulate that theirs was admirable code, besides being admirably documented. Still: the implication is that few people ran it, and any consequences of running it faded quickly.
When there's a choice of only two of good, important, or well-documented, a well-run organization chooses the first two.
If you're not giving them this, I don't expect you're reading the rest with a very charitable interpretation. And variable sized arrays weren't really supported in C/C++ until C11/C++11, and this was written in 2001.
The bullshit about C/C++ is immediately preceded by a claim that everything in "LISP" is represented by lists at the lowest level, which is in contrast with other languages that have fixed arrays, like C/C++.
The people who wrote this garbage paper were not properly familiar with Lisp, C or C++.
Annnnd, C99 VLAs as such were not in C++11 at all, and were removed ("made optional") in C11, recognized as a mistake. That does not, of course, mean that C or C++ functions' array arguments were ever restricted to a fixed size.
In c, an array has fixed size and dimensionality. You can, of course, create your own data structure using a pointer which allows you to access an unbounded number of objects; but that, in c parlance, is not an array.
And in c89, all variables must be declared at the beginning of a lexical scope (though not necessarily the beginning of a function).
Despite the tone problems of the post I'm replying to, I'm honestly curious about this: Is there such an opcode? I can't think of one, and I like to think I know things about x86 ISAs, despite all the dark corners that architecture has. Are they, somehow, confusing it with the iAPX 432, as unlikely as that sounds?
Anyway, here's the claim in context:
> (Interestingly, Symbolics at one time was working with Intel to build a development platform based on the 386, which led to the inclusion of an extra instruction in the final 386 architecture to facilitate garbage collection).
How did you get past "[o]n the lowest level, LISP represent [sic] all objects, even the expressions of the language itself as lists" right in the previous sentence?
> What kind of Lisp hacker does this?