It was all fun and games until one of the funding agents was on a tour, saw the Reagan machine, and put it together.
Edit: All above IIRC. It was a while ago.
Not realizing this naming convention, a confused Apollo Computer salesman once asked, "Where are the other 12 machines?"
The AI Lab hackers had free will and were not forced to go anywhere. Saying that these companies "raided" the AI Lab is, quite frankly, insulting. And Symbolics's practice of hiring the hackers full time was actually done entirely for ethical reasons. Other spinoff AI Lab companies were taking advantage of AI Lab resources, and Symbolics wanted to be sure that they didn't do that.
The MIT Lisp Machine code was owned and copyrighted by MIT. When MIT licensed the code to LMI and Symbolics, they wanted to be sure that the code would not be given away for free to other companies, including companies that they had also licensed code to. Stallman essentially wanted Symbolics to give their code for free to their competitor, LMI, which would have been illegal according to the licensing agreement (that MIT wrote) and would have resulted in Symbolics being shut down.
By the way, the comment thread on that blog ends up being about the origin of Emacs and is quite fascinating. Stallman isn't technically the creator of Emacs (Guy Steele and David Moon are), although he did so much to improve it that he is generally given credit for it.
Rational started out making Ada development workstations; that didn't go too well either.
I don't think I understand your argument here. Are you saying that free software in general hurts hackers because it provides free alternative implementations of their ideas that the original author has no control over? If so, why is it a bad thing? As patio11 would surely confirm, you can successfully compete with free and open source software.
Symbolics got used to selling hardware for tens of thousands of dollars and could not fully adjust when competitors were eventually able to sell machines for much less. The company also had a couple of bad CEOs, and suffered large financial losses because of a series of bad real estate deals. Efforts to port their software to other platforms were largely unsuccessful.
Lisp Machines were always a niche market because they were expensive, top-of-the-line machines. The target market never really overlapped with Windows. Cheap workstations that could have multiple users eventually became almost as capable as Lisp Machines and thus displaced them.
If you want to credit Stallman with sabotage then mention Elisp, because Elisp is the worst Lisp still in use (Zmacs ran ZetaLisp).
So I guess your thesis is that allowing LMI to maintain feature parity fragmented the market, meaning neither company could survive? (We're ignoring the Xerox and Texas Instruments Lisp machines, obviously.)
But this ignores the fact that while all this was happening, the Mac and the PC were already on the market at a tenth of the price of any of the Lisp machines and were selling by the bucketload. Essentially, the PC revolution was already well underway before Stallman even started his hacking.
Lisp machines failed because they were selling high-end, limited-use hardware running software that was good for a narrow range of highly specialised uses but far too resource-heavy to be economically viable for most users. The VAX, for example, was a far better machine for nearly every real-world task except running a single niche programming language.
If you really think a single programmer stopped a $70,000 single-user machine from derailing the PC revolution and kick-starting a whole new world of computing heaven, then you and I have very different definitions of "cheap", and without cheap computing we wouldn't be 50 years ahead, we'd be 40 years behind.
Thus it was not LMI or Symbolics who raided some innocent lab; it was a new stage in a government-directed and government-financed technology development program, primarily for the military.
Because the core design is modular and all the cores are identical, the GA144 is much harder to backdoor, and it's easier to detect if it has been.
The stack-based language works quite nicely with Lisp; in fact, Lisp machines were traditionally stack-based.
Once it's there, yes, you could change it, but in practice changing all of it, all the time, is hard, and that's one of the challenges with this hardware. It was originally developed for embedded use, a use case that needs so little code that it doesn't need to change afterwards, but I want it to be fully general, and moving code in and out without paying prohibitive time penalties is a bit hard.
The program I'm working on (a kind of JIT compiler) is inching ever more towards a strict dataflow model in which the computations are divided into blocks and each block knows exactly which downstream blocks to send its outputs to. The ultimate output of the computation is simply the last downstream block that collects whatever outputs were originally requested and pushes them back to the client. Seems like a sweet spot for an array-based architecture. However, it would need to rapidly reconfigure the cores. Each time a block finishes its work, its core would go back into the available pool and (hopefully soon afterward) be reprogrammed with the code for some other block as the compiler produces it. The scheduler would also try to position blocks near their downstream neighbors. All of this would have to be dynamic. So, a poor fit for the GA144 given what you're saying.
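As a minimal sketch of that model (all names and the scheduling policy here are hypothetical, nothing is GA144-specific, and block placement near downstream neighbors is omitted): each block carries its compiled work plus the IDs of its downstream blocks, and a scheduler returns a finished core to a free pool before "reprogramming" it with the next block the compiler produces:

    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class Block:
        """One unit of the dataflow graph (hypothetical representation)."""
        block_id: str
        code: object        # compiled work for this block (a callable here)
        downstream: list    # block_ids to send outputs to

    class CorePool:
        """Toy scheduler: cores are slots that get reprogrammed per block."""
        def __init__(self, n_cores):
            self.free = deque(range(n_cores))
            self.inbox = {}  # block_id -> pending inputs

        def run(self, blocks, entry_id, initial_input):
            table = {b.block_id: b for b in blocks}
            self.inbox[entry_id] = [initial_input]
            ready, results = deque([entry_id]), []
            while ready:
                bid = ready.popleft()
                core = self.free.popleft()    # grab a free core...
                block = table[bid]            # ...and "reprogram" it
                out = block.code(self.inbox.pop(bid))
                self.free.append(core)        # core returns to the pool
                if not block.downstream:      # last block: push to client
                    results.append(out)
                for nxt in block.downstream:  # route output downstream
                    self.inbox.setdefault(nxt, []).append(out)
                    ready.append(nxt)
            return results

    # Tiny two-block pipeline: double the inputs, then sum them.
    blocks = [
        Block("double", lambda xs: [x * 2 for x in xs[0]], ["emit"]),
        Block("emit",   lambda xs: sum(xs[0]),             []),
    ]
    print(CorePool(n_cores=4).run(blocks, "double", [1, 2, 3]))  # -> [12]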
If the above makes sense, I'd be curious to hear what alternatives come to mind.
Running this code on such an architecture is not currently a priority, but it does excite my curiosity—it seems so obviously doable in principle.
Do you think the benefits are valuable enough to be worth all the trouble of figuring out how to program this architecture, or is it more just a fun puzzle?
0. LISP CPU: http://www.frank-buss.de/lispcpu/
1. LispmFPGA: http://www.aviduratas.de/lisp/lispmfpga/
There was a front-end processor (designed by Howard Cannon, who was also the main designer of the Flavors system) that loaded the microcode and booted up the main processor.
This explanation doesn't seem to make sense.
If you wanted to prevent a Lisp program from using up 100% of CPU on a timeshared machine, all you would have needed to do is impose CPU quotas on the users.
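(For a modern Unix stand-in for the kind of quota meant here, purely illustrative, since the timesharing systems of that era had their own mechanisms:

    import resource

    # Cap this process at 60 CPU-seconds (soft limit); the kernel sends
    # SIGXCPU when the soft limit is exceeded and kills the process at
    # the hard limit. Not the 1970s mechanism -- a modern analogue only.
    resource.setrlimit(resource.RLIMIT_CPU, (60, 120))

)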
And a lot of the engineering on Lisp Machines did go toward making life pleasant for programmers. The windowed GUI environment on a Lisp Machine was a whole lot nicer than logging into a PDP-10 on a VT-100 terminal. If running Lisp more cheaply was really the goal, they wouldn't have given these machines high resolution bitmapped displays, which were very expensive at the time.
CPU quotas don't help.
Take, for example, a VAX 11/780: 1 MIPS, a 32-bit architecture, and only a few MB of RAM (2-8 MB). It also used virtual memory.
A single Macsyma running on top of Maclisp on such a computer could use all the memory and more. A garbage collection would touch all of memory and swap everything else out, and this swapping would bring the machine to a crawl. Both CPU and main memory would be consumed by a single process. Limiting CPU for the Lisp process does not help: the Lisp process would be slow, the machine's I/O would be slow, and most programs would be waiting on I/O.
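A rough, assumed-numbers sketch of why (nothing here is measured; the page size is the VAX's 512 bytes, the rest is guesswork):

    # Back-of-the-envelope only; every figure below is an assumption.
    image_mb = 16      # Lisp heap noticeably larger than physical memory
    ram_mb   = 4       # VAX 11/780-class main memory
    page_kb  = 0.5     # 512-byte VAX pages
    fault_ms = 30      # one disk-backed page fault, circa-1980 disks

    faults  = (image_mb - ram_mb) * 1024 / page_kb
    seconds = faults * fault_ms / 1000
    print(f"~{faults:.0f} faults, ~{seconds / 60:.0f} minutes per full GC")

Under those assumptions a full GC costs on the order of 25,000 page faults and minutes of wall time, and a CPU quota never fires: the Lisp process is blocked on I/O the whole time, while everyone else's working set has already been evicted.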
The minicomputers (PDPs and later) were too expensive and were optimized for timesharing, not for running Lisp images larger than main memory.
> If running Lisp more cheaply was really the goal, they wouldn't have given these machines high resolution bitmapped displays, which were very expensive at the time.
That's even more expensive: now you have an expensive bitmap display hooked up to an extremely expensive minicomputer. Btw., that was experimented with before the Lisp Machines appeared; Interlisp ran in such a combination.