A Brief History of Lisp Machines (andromeda.com)
67 points by auvi on Sept 21, 2013 | 43 comments

A cultural item missing from the MIT side of the history: The Lisp machines at the MIT AI Lab were named. First, the names were all dead rock stars (I was there in the 80s, so Hendrix, Joplin ...). Eventually there were enough machines that the namespace needed to be expanded, so dead actors were added to the list (Sinatra...). Some wag decided Reagan, who was president at the time, had done enough unpopular things to universities that he deserved to be added to the namespace.

It was all fun and games until one of the funding agents was on a tour, saw the Reagan machine, and put it together.

Edit: All above IIRC. It was a while ago.

And at Intellicorp the machines were all named after famous disasters -- Johnstown Flood, Tacoma Narrows Bridge, etc.

Not realizing this naming convention, a confused Apollo Computer salesman once asked "Where are the other 12 machines?"

Maybe they could have shown him Apollo 1: http://en.wikipedia.org/wiki/Apollo_1

I find the narrative from the Wikipedia page on Lisp Machines to be really enlightening (far more so than a dry chart like TFA): http://en.wikipedia.org/wiki/Lisp_machine . In particular, I think it is much easier to understand Stallman's outlook/philosophy if you put yourself in his shoes around this time. The MIT AI Lab went from a bustle of futuristic technology to essentially a ghost town, consisting of him and Marvin Minsky, following the dual raid of LMI and Symbolics on the AI Lab staff.

The Wikipedia page on Lisp Machines isn't great. Read Dan Weinreb's "Rebuttal to Stallman’s Story About The Formation of Symbolics and LMI" [1]. Weinreb was a much more credible source than Stallman.

The AI Lab hackers had free will and were not forced to go anywhere. Saying that these companies "raided" the AI Lab is, quite frankly, insulting. And Symbolics's practice of hiring the hackers full time was actually done entirely for ethical reasons. Other spinoff AI Lab companies were taking advantage of AI Lab resources, and Symbolics wanted to be sure that they didn't do that.

The MIT Lisp Machine code was owned and copyrighted by MIT. When MIT licensed the code to LMI and Symbolics, they wanted to be sure that the code would not be given away for free to other companies, including companies that they had also licensed code to. Stallman essentially wanted Symbolics to give their code for free to their competitor, LMI, which would have been illegal according to the licensing agreement (that MIT wrote) and would have resulted in Symbolics being shut down.

By the way, the comment thread on that blog ends up being about the origin of Emacs and is quite fascinating. Stallman isn't technically the creator of Emacs (Guy Steele and David Moon are), although he did so much to improve it that he is generally given credit for it.

[1] http://web.archive.org/web/20110719154038/http://danweinreb....

Of all the problems Symbolics had, LMI and Stallman weren't two of the bigger ones. Specialized development workstations just lost to more general-purpose ones that could be built and sold in much larger quantities.

Rational started out making Ada development workstations; that didn't go too well either.

Stallman is a prolific programmer, but ask yourself: how was he able to out-program entire teams of his former colleagues? Answer: because the code wasn't the hard part. Figuring out what to do was the hard part, and copying it once that was done was easy. The lesson that still hasn't sunk into the "free" community today is that just because a thing is cheap to reproduce doesn't mean it was cheap to create. Stallman did a massive disservice to his friends and the entire computing community. We are still paying for his sabotage today.

> Stallman did a massive disservice to his friends and the entire computing community. We are still paying for his sabotage today.

I don't think I understand your argument here. Are you saying that free software in general hurts hackers because it provides free alternative implementations of their ideas that the original author has no control over? If so, why is it a bad thing? As patio11 would surely confirm, you can successfully compete with free and open source software.

Not quite. I'm generalising over all sorts of IP. See the Dan Bricklin post from earlier today.

Which was the act of sabotage?

He made their products commercially unviable. We could all be using Lisp machines now, the PC would never have happened, no DOS, no Windows, no x86, no Linux either for that matter. We would be 50 years ahead of where we are now.

This statement is ridiculous. Symbolics is dead for a number of reasons, but Stallman's Emacs isn't one of them. Stallman copied some of the features in Zmacs. Zmacs was one small part of a much larger operating system which, among other things, included industry leading animation software. There were many, many systems (including the hardware architecture itself) that Stallman could not, and did not copy. No company picked an alternative to Lisp Machines simply because it also had something that looked like Zmacs. Your statement is akin to saying that a modern tablet with a web browser makes desktops (which can run much more powerful tools) obsolete.

Symbolics got used to selling hardware for tens of thousands of dollars and could not fully adjust when competitors were eventually able to sell machines for much less. The company also had a couple of bad CEOs, and suffered large financial losses because of a series of bad real estate deals. Efforts to port their software to other platforms were largely unsuccessful.

Lisp Machines were always a niche market because they were expensive, top-of-the-line machines. The target market never really overlapped with Windows. Cheap workstations that could have multiple users eventually became almost as capable as Lisp Machines and thus displaced them.

If you want to credit Stallman with sabotage then mention Elisp, because Elisp is the worst Lisp still in use (Zmacs ran ZetaLisp).

If I remember the story correctly, Stallman saw the new features of the Symbolics software, re-coded them, and handed them to Lisp Machines Inc.

So I guess your thesis is that by allowing LMI to maintain feature parity, Stallman fragmented the market, meaning neither company could survive? (We're ignoring the Xerox and Texas Instruments Lisp machines, obviously.)

But this ignores the fact that while this was happening, the Mac and the PC were already on the market at a tenth of the price of any of the Lisp machines and were selling by the bucketload. Essentially, the PC revolution was already well underway before Stallman even started his hacking.

Lisp machines failed because they were high-end, limited-use hardware running software that was good for a narrow range of highly specialised uses, but far too resource-heavy to be economically viable for most users. The VAX, for example, was a far better machine for nearly every real-world task except running a single niche programming language.

If you really think a single programmer stopped a $70,000 single-user machine from derailing the PC revolution and kick-starting a whole new world of computing heaven, then you and I have very different definitions of "cheap". And without cheap computing we wouldn't be 50 years ahead, we'd be 40 years behind.

Lisp machines (much like the Xerox Star series) were high-end workstations that ran a proprietary OS and didn't compete with PCs. One could make a case that Unix workstations killed Lisp machines, but to imagine that Emacs held them back until PCs stole the market is quite a stretch.

The real answer is that he didn't.

The bigger picture is that the MIT AI Lab was working on projects for DARPA ( DEFENSE Advanced Research Projects Agency ). This sponsor wanted to commercialize the research results, so that at the next stage companies would be suppliers of AI hardware and software for the military (for space-based weapons, fighter planes, autonomous vehicles, ...). Many of the early Lisp machines were bought by Reagan's star wars program.

Thus it was not LMI or Symbolics who raided some innocent lab, it was a new stage in a government directed and financed technology development program, primarily for the military.

Are there any active projects turning FPGAs into Lisp machines? This should get more interest now that every system we currently have is infested with backdoors and closed-source firmware. It would be nice, as Stallman put it, to press halt, change the source, and resume without having to recompile.

I've worked on turning a GA144 into a 24-core Lisp Machine (24 logical cores, each made of 6 F18 cores working together), with some success. I was able to define a tiny Lisp in under 16 bytewords of F18 code, with the rest of Lisp being à la carte and distributed across those six cores.

Because the core design is modular and all the cores are the same, the GA144 is much harder to backdoor, and it's easier to detect if it has been.

This sounds very interesting; is it open source? I am wondering how you made a processor built specifically for a stack-based language (ColorFORTH) work for Lisp.

I don't know about Forth, but I once worked on a project where I was writing code in Lisp and PostScript, and there is a definite "lispness" about RPN languages like PostScript and possibly Forth. This might make the mapping from Lisp to Forth easier than you might think.

I haven't open sourced it.

The stack based language works quite nicely with Lisp; in fact, Lisp machines were traditionally stack based.

Holy cow, that is very cool. I have a question about the GA144. How do you get the code to the cores? Once it's there, does it change at all?

You send the code to the cores with a bootstream from your SPI or Async port. It's a bucket brigade for code.

Once it's there, yes, you could change it, but in practice changing all of it, all the time, is hard, and that's one of the challenges with this hardware. It was originally developed for embedded use cases that need so little code that it doesn't have to change afterwards, but I want it to be fully general, and moving code in and out without paying forbidding time penalties is a bit hard.

Interesting. That makes me wonder what architectures are (or used to be) out there that do support rapidly reconfiguring the cores. Obviously, if deploying the code is so time-intensive that you'd lose the benefits of the parallelism for dynamic use cases, there's no point. It's a little reminiscent of the situation with GPGPU where the overhead of marshalling all the data can eat away your computational speedup.

The program I'm working on (a kind of JIT compiler) is inching ever more towards a strict dataflow model in which the computations are divided into blocks and each block knows exactly which downstream blocks to send its outputs to. The ultimate output of the computation is simply the last downstream block that collects whatever outputs were originally requested and pushes them back to the client. Seems like a sweet spot for an array-based architecture. However, it would need to rapidly reconfigure the cores. Each time a block finishes its work, its core would go back into the available pool and (hopefully soon afterward) be reprogrammed with the code for some other block as the compiler produces it. The scheduler would also try to position blocks near their downstream neighbors. All of this would have to be dynamic. So, a poor fit for the GA144 given what you're saying.

If the above makes sense, I'd be curious to hear what alternatives come to mind.

Running this code on such an architecture is not currently a priority, but it does excite my curiosity—it seems so obviously doable in principle.
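The block-and-downstream model described above can be sketched roughly like this (Python for illustration only; all names, including the "sink" convention, are hypothetical, and real cores would run blocks in parallel rather than off a single queue):

```python
from collections import defaultdict

class Block:
    """One unit of the dataflow graph: a computation plus the names of the
    downstream blocks that consume its output."""
    def __init__(self, name, fn, n_inputs, downstream):
        self.name = name
        self.fn = fn                  # list of input values -> output value
        self.n_inputs = n_inputs      # how many upstream values to wait for
        self.downstream = downstream  # names of blocks fed by our output

def run(blocks, initial):
    """Push-based evaluation: deliver values, fire each block once all of
    its inputs have arrived, and collect whatever reaches the final sink."""
    table = {b.name: b for b in blocks}
    inbox = defaultdict(list)   # block name -> inputs received so far
    queue = list(initial)       # (destination name, value) deliveries
    results = []
    while queue:
        name, value = queue.pop()
        if name == "sink":      # the last downstream block: collect output
            results.append(value)
            continue
        inbox[name].append(value)
        block = table[name]
        if len(inbox[name]) == block.n_inputs:
            out = block.fn(inbox.pop(name))
            for dest in block.downstream:
                queue.append((dest, out))
    return results

# Example: compute (2 + 3) * 10, wired as add -> mul -> sink.
blocks = [
    Block("add", lambda xs: xs[0] + xs[1], 2, ["mul"]),
    Block("mul", lambda xs: xs[0] * 10, 1, ["sink"]),
]
print(run(blocks, [("add", 2), ("add", 3)]))  # [50]
```

On an array chip, each `Block` would map to a core, and the scheduler described above would reassign freed cores to newly compiled blocks; that reassignment step is exactly the expensive reload being discussed.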

Well, it's hard to do in part because you need code to load code, and there's only 64 words of RAM. But if you got the code to the core, say you're only changing 32 words of code, about five to ten functions, you'd spend 32 × ~10 ns ≈ 320 ns on the reload. Not too shabby, but you have to get the code there, and that means using other cores. It's hard, but in theory there's no outstanding reason it would be a problem. Is the GA144 a poor fit? Barring a solution to the problem of getting the code there, yes, it's a poor fit.

I'd forgotten that there's so little RAM to work with. It might be doable; the blocks I'm talking about are mostly very primitive operations.

Do you think the benefits are valuable enough to be worth all the trouble of figuring out how to program this architecture, or is it more just a fun puzzle?

Unfortunately, Lisp Machine software was not particularly secure. Everything ran in the same address space, with no protection; ordinary user code could, for instance, just call a low-level disk driver. And the code running in this environment was written in pre-Morris-worm days, when no one had ever heard of a remote code execution vulnerability. (I remember some people needed to be convinced that running an "eval server" --- a server that would execute code that came over the network from unauthenticated sources --- was a REALLY BAD IDEA. In this environment!)
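For readers who haven't seen one: an "eval server" amounts to something like the following toy sketch (Python standing in for Lisp; the function name is hypothetical):

```python
def handle_request(expr: str) -> str:
    # No authentication, no sandboxing: whatever expression arrives over
    # the wire is evaluated with the full authority of the server process.
    # On a Lisp Machine there was no address-space protection either, so
    # "the server process" effectively meant the whole machine.
    return repr(eval(expr))

# Innocent use:
handle_request("1 + 1")   # "2"
# Hostile use -- remote code execution by design:
# handle_request("__import__('os').system('rm -rf /')")
```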


I don't know about the internals of Symbolics ones but the MIT/LMI/TI machines didn't have any hardware assistance for bounds checking.

I know of two such projects:

  0. LISP CPU: http://www.frank-buss.de/lispcpu/

  1. LispmFPGA: http://www.aviduratas.de/lisp/lispmfpga/
Maybe there are others.

Stanislav at loper-os.org is one of my absolute favorites, because he is an absolutist. I know he wrote previously about remaking a proper computer on an FPGA, but I don't know how he's progressing.

There has been work on an FPGA implementation of the MIT CADR by Brad Parker; he also wrote the software emulator for it.

I see. He named the revised version of the CADR "CADDR" :) From here: http://www.unlambda.com/cadr/

If you are interested in the hardware side, look at Hardware-Assisted and Target-Directed Evaluation of Functional Programs, by Matthew Francis Naylor.

It wasn't Lisp all the way down in the Lisp Machine design--it was a microcoded engine, of course. Though, yes, the system software was written in Lisp.

There was a front-end processor (designed by Howard Cannon, who also was the main designer of the Flavors system) which loaded the microcode and booted up the main processor.

I loved Symbolics. Indeed, I dated three different women from the company. :)

Hope those women were Lisp wizards :) I would love to hear about your experience with Symbolics Lisp machines. Those were pretty expensive machines, so you must have been doing something unusual to get to work on one.

Nope. They were in marketing. And I was a stock analyst.

Though I've also never used a Lisp machine, integrating my setup closely with Emacs, SLIME, and stumpwm has been one of the very best computing decisions I've made.

"Why Lisp Machines? The standard platform for Lisp before Lisp machines was a timeshared PDP-10, but it was well known that one Lisp program could turn a timeshared KL-10 into unusable sludge for everyone else. It became technically feasible to build cheaper hardware that would run lisp better than on timeshared computers. The technological push was definitely from the top down: to run big, resource-hungry lisp programs more cheaply. Lisp machines were not 'personal' out of some desire to make life pleasant for programmers, but simply because lisp would use 100% of whatever resources it had available."

This explanation doesn't seem to make sense.

If you wanted to prevent a Lisp program from using up 100% of CPU on a timeshared machine, all you would have needed to do is impose CPU quotas on the users.

And a lot of the engineering on Lisp Machines did go toward making life pleasant for programmers. The windowed GUI environment on a Lisp Machine was a whole lot nicer than logging into a PDP-10 on a VT-100 terminal. If running Lisp more cheaply was really the goal, they wouldn't have given these machines high resolution bitmapped displays, which were very expensive at the time.

> This explanation doesn't seem to make sense... impose CPU quotas on the users

CPU quotas don't help.

Take for example a VAX 11/780: 1 MIPS, a 32-bit architecture, and only a few MB of RAM (2 MB to 8 MB), plus virtual memory.

A single Macsyma running on top of Maclisp on such a computer could use all the memory and more. A garbage collection would touch all of memory and swap everything else out, and this swapping would bring the machine to a crawl. The CPU and main memory would all be consumed by a single process. Limiting CPU for the Lisp process does not help: the Lisp process would be slow, the machine's I/O would be slow, and most programs would be waiting on I/O.

The minicomputers (PDPs and later) were too expensive and optimized for timesharing - not for running Lisp images which were larger than main memory.

> If running Lisp more cheaply was really the goal, they wouldn't have given these machines high resolution bitmapped displays, which were very expensive at the time.

That would be even more expensive: now you have a costly bitmap display hooked up to an extremely expensive minicomputer. Btw., that was experimented with before the Lisp Machines appeared; Interlisp ran in such a combination.

As a relative youngster, I've only ever heard tall tales about Lisp Machines. I still wonder, with a tear in my eye, why they failed. In particular, Genera OS seemed like such a good idea.

The table omits the Xerox Daybreak (aka Xerox 6085, Xerox 1186), circa 1985.
