They were smoking crack. By the time this was written, regular RISC platforms were riding economies of scale and beating dedicated Lisp hardware, the metrics shown in the article notwithstanding.
In addition: two months to a compiler? Four to a complete chip? All from a team under 15 ppl?
In the 80s I programmed almost exclusively Lisp machines (CADRs, Symbolics and PARC) and the PDP-10 (also a Lisp machine) and even so I knew the market was tiny and not really growing.
>In addition: two months to a compiler? Four to a complete chip? All from a team under 15 ppl?
"Because of [Symbolics' automated design] methodology, we were able to do five different revs of the chip on four different technologies, which was pretty amazing [for the time]. ... We took about 10 to 12 man-years to do the Ivory chip, and the only comparable chip contemporaneous to that was the MicroVAX at DEC; I knew some people who worked on that, and their estimate was that it was 70 to 80 man-years to do the MicroVAX. That in a nutshell is the reason that the Symbolics lisp machine was great." -Kalman Reti (2012) https://youtu.be/OBfB2MJw3qg
Non-blub work environments bring some fantasy-tier goals down to just within possibility for a motivated team of geniuses, which Symbolics had. Reminds me a little of how Alan Kay describes the forgotten art of Starrett erecting the Empire State Building in under a year: "Gentlemen, this building of yours is going to present unusual problems. Ordinary equipment won't be worth a damn on it. We'll buy and make new stuff, fitted for the job. That's what we do on every big job; it costs less than renting secondhand stuff, and it's more efficient." https://youtu.be/QboI_1WJUlM
I really wish I could watch teams, past or present, working in similar environments. I understand mentally how they can remove all barriers to exploration and optimization, but I've never seen it live. Especially compared to mainstream dev daily life, where it's grind and grind and salty tears.
ARM (1986) has 4 Dhrystone MIPS, and Motorola 68020 (1984, as CISC as you can get) has 4.84 Dhrystone MIPS.
RISC was not riding economies of scale then; it barely existed on the periphery of mass computing, used in very expensive workstations or cheap arcade game consoles. The first RISC chip that made any significant waves was the Alpha AXP 21064, when banks worldwide started buying them in bulk.
The fact that the 68020 outperformed ARMv2 is what the people at Symbolics had at hand in 1986, when they wrote that document.
This is not a fair comparison. Motorola and Intel at that time outperformed the first RISC machines because they had a much better silicon process (even when their logic design was not the best).
But game consoles are a fair comparison, as they were specialized machines, made on good silicon and produced at large scale.
At the same time, I don't agree with calling the 68020 a genuine CISC machine. I think it is fairer to consider it a hybrid, a RISC-optimized CISC (or, as it was informally called, a "mini on one chip").
I think the genuine CISC machines were mostly the S/360 and some minis, like the Alto, which had a documented facility for writing your own microcode (on the S/360 this was a marketed, high-value feature, because with custom microcode a 360 could run software compiled for older hardware like the 1401 or the 7xx/7xxx series).
And I agree with you: RISC achieved market adoption once the microprocessor (single-chip CPU) market grew so large that these games of compatibility with old machines stopped being financially interesting.
So, to sum up: RISC machines were not players in the CISC market; they were essentially players in the micro market, which in the mid-1980s was not much more than games and terminals for mainframes and minis (a micro with, say, 128K of RAM could plausibly run the accounting for a small business, but not much more).
In the 1990s RAM became so cheap that micros swallowed the mini market, and clouds appeared, so it became possible to run large enterprise accounting on micro hardware. And at that moment the Alpha was created, and then the AMD K8, which was essentially an Alpha with a hardware translator from x86 instructions to RISC.
So, after all these adventures, RISC entered the market through the back door :)
To avoid misunderstanding: as I understand from mainframe talks, many of them were customizable like modern FPGA machines; vendors even considered making custom hardware (imagine being able to order a fully custom machine with additional registers, a custom ALU, etc.). That's why I compare the custom microcode of CISC machines to FPGAs, and why I think it is a lost feature that simply doesn't exist in minis or micros.
But they weren't looking at ARM, they were looking at MIPS, which handily outperformed them at the time. ARM wasn't on their radar at all, since that was an oddball UK chip not used by any of their competitors.
Symbolics competed in the very expensive workstation market, and their machines cost more than most in that category, so they had to keep up with the performance of the MIPS R2000.
The MIPS R2000 did not outdo Intel or Motorola at the time; those two, last but not least, enjoyed all the economies of scale in the world.
MIPS was slower clock-frequency-wise than competing offerings from Intel and Motorola, and the better pipeline design of MIPS did not add much.
Complex, expensive workstations like the IRIS used proprietary hardware to boost I/O bandwidth and/or to have several CPUs working in parallel. It was easier for SGI to make its CPUs and the hardware around them work in concert, and I honestly think Symbolics was able to pull off something like that.
Doesn't matter. Generic CPUs were getting 32-bit word lengths and increased speed, and Lisp compiler technology had improved to yield fast code on those CPUs.
The maker of Golden Common Lisp partnered with a hardware company to make a "Lisp machine on an ISA card" solution for PCs called the Hummingboard (not to be confused with present-day SBCs using that name). It was just a 386 and gobs of memory on a card (this predates the Compaq 386-based PC). A special build of Golden CL sent compiled code to be executed on the 386 instead of the PC's main CPU.
They state that the chip was 50% cache memory, so possibly the speedup would come from a simple RISC core plus the boost from the cache. 1987 brought the first SPARC V7 chips (V8 came out in 1990), with 110K transistors or so depending on the implementation.
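A back-of-envelope sketch of why that cache boost matters, in C. All the numbers here are illustrative assumptions, not figures from the article: a 1-cycle cache hit, a 20-cycle miss penalty, and two guessed miss rates. Average memory access time (AMAT) is hit time plus miss rate times miss penalty, so spending half the die on cache can cut effective access time sharply.

    /* Back-of-envelope AMAT (average memory access time) estimate.
     * All numbers are assumed for illustration: 1-cycle cache hit,
     * 20-cycle miss penalty (roughly 1987-era DRAM latency). */
    #include <stdio.h>

    int main(void) {
        double hit = 1.0, penalty = 20.0;
        double miss_small = 0.20;  /* assumed: little on-chip cache */
        double miss_big   = 0.05;  /* assumed: half the die is cache */

        printf("AMAT, small cache: %.1f cycles\n", hit + miss_small * penalty); /* 5.0 */
        printf("AMAT, large cache: %.1f cycles\n", hit + miss_big   * penalty); /* 2.0 */
        return 0;
    }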
However, the strategy of "retreat to the high end" rather than producing something in volume might have contributed to the rejection; they had no strategy for a volume system that would have gotten them more desktop / technical-workstation sales.
Absolutely – Symbolics should've ditched physical hardware for a virtual machine/their port to Alpha much, much earlier than they did. Were you writing MACLISP?
I don't think so. Their hardware was the better thing.
What they should have done is cannibalise themselves. Something extremely difficult, but it could be done.
Fairchild did that with the integrated circuit. There was an internal coup from the people who wanted to kill the IC and keep selling expensive parts in small quantities instead of moving to earning far less per unit at volume.
That's an interesting possibility to think about. What do you think they could have done that involved hardware?
I know Symbolics had coprocessor boards that used their existing architectures, stuck on a NuBus card and plugged into old Macs to run Genera in a window.
Perhaps they could have stuck with that: I could see SGI workstations with Symbolics coprocessor boards leveraging the foothold they had in 3D graphics and animation, for example. Or a RAD development environment for technical computing on NeXT computers offloading hard symbolic computations as needed, since NeXT had high-end RAD as their selling point.
> That's an interesting possibility to think about
[skip]
> I know Symbolics had coprocessor boards that used their existing architectures, but stuck it on a NuBus
That is the answer. Minicomputers first appeared as cheap alternatives to mainframes, for cases where it was impossible to just connect a client over a leased line.
Microcomputers were created when semiconductors reached the stage where a whole CPU could fit on one die, so they simply displaced the minis.
At first, semiconductors didn't allow putting a numeric coprocessor on the same die as the CPU, so for a short time wonders like Weitek existed; but by the time the first RISC chips appeared, the i486 with an integrated coprocessor already existed, so a discrete coprocessor was no longer economically viable.
Yes, much later ccNUMA appeared (a cache bus with crossbar-switch interconnect), but its complexity is IMHO close to that of a coprocessor, so it only became viable at very large scale, I think 16 cores or more.
I've seen mentions of Symbolics working on an SGI board but I’m not sure if they ever released that. At least they released a board for another Unix workstation vendor, Sun.
There was also an article mentioning a possible version for Sony News systems.
On SGI it seemed that (later?) several 3D graphics applications used Allegro CL, so perhaps that was powerful enough. For Symbolics to really make sense on SGIs, it would have been necessary to tap into the graphics hardware and software from the embedded Ivory boards.
On a related note - it's a common concern among performance-oriented programmers that OOP, pointer-chasing, polymorphism, etc. are poorly adapted to modern CPUs and cause terrible performance.
Have there ever been "OOP machines" developed that would optimize for / work around those problems, similar to (as I understand it) what "LISP machines" were trying to do for LISP?
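For concreteness, here is a minimal C sketch of the pattern that concern refers to; the names and sizes are made up for illustration. Summing values through heap pointers costs a dependent load (and a likely cache miss) per object, while a flat array streams through memory and benefits from hardware prefetching.

    /* Minimal sketch of the pointer-chasing concern. Illustrative only. */
    #include <stdlib.h>

    typedef struct { double value; } Obj;

    double sum_via_pointers(Obj **objs, size_t n) {
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
            s += objs[i]->value;   /* one pointer dereference per object */
        return s;
    }

    double sum_flat(const double *vals, size_t n) {
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
            s += vals[i];          /* sequential, prefetch-friendly access */
        return s;
    }

    int main(void) {
        enum { N = 1000 };
        static Obj *heap[N];
        static double flat[N];
        for (size_t i = 0; i < N; i++) {
            heap[i] = malloc(sizeof(Obj));
            heap[i]->value = flat[i] = (double)i;
        }
        /* Same result either way; the difference is memory traffic. */
        return sum_via_pointers(heap, N) == sum_flat(flat, N) ? 0 : 1;
    }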
That was really not the case; there are many misconceptions about the architecture of the Lisp Machine. The CADR, for example, is a very simple machine and has no knowledge of OO or methods; likewise for the Lisp Machine macrocode.
You keep pretending that the CADR is the only Lisp machine, while it’s basically a throwaway prototype for the real deal that was being implemented at Symbolics. ;)
There are no misconceptions in my post. The CADR software already included Flavors, which is what I meant. The Ivory definitely had the method dispatch optimisations.
There was nothing "prototype" or "throwaway" about the CADR, it was heavily used at the AI lab for over 10 years, with both the Lambda and 3600 heavily based on it, to the point that the Lambda was microcode source and Lisp FASL compatible with it.
The CADR "software" does not include "Flavors" -- that is part of the Lisp Machine operating system (which for a large of its life was shared between the CADR and Lambda, and even partially for the 3600 series), which is just plain Lisp and unrelated to hardware.
None of TI or Symbolics machines had these type of optimisations that you mention. The Ivory did not have any special method dispatching optimisations. The Explorer was basically a Lambda.
Yes, it’s a throwaway prototype that overstayed its welcome for over 10 years, as it tends to happen in computing all the time. You said it yourself elsewhere that it’s a grad project.
You are nitpicking here as usual. The point is that OOP was something expected to be available on Lisp machines.
Okay, I will spoon-feed you. Open the specification on Bitsavers and search for “4.7 Generic Function and Message Passing”.
I'm quite familiar, I've restored and written simulators and implemented FPGAs for almost all of them in various states of disarray. Your claim was that they had special architecture optimisations, that was not the case, and a common misconception -- your quite condescending tone notwithstanding. OOP was indeed available, but it was software -- just like in CL, and not backed by hardware, and essentially the same way it is done today.
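To illustrate what "done in software" means here, a rough C sketch of table-driven method dispatch (the vtable pattern; Lisp generic-function dispatch differs in detail, but it likewise compiles down to ordinary loads plus an indirect call, with no hardware support involved):

    /* Rough sketch of software method dispatch: each object carries a
     * pointer to a per-class table of function pointers, and a method
     * call is a table load plus an indirect call. */
    #include <stdio.h>

    typedef struct Shape Shape;
    typedef struct {
        double (*area)(const Shape *self);   /* one slot per method */
    } ShapeVTable;

    struct Shape {
        const ShapeVTable *vtable;           /* the object's "class" */
        double a, b;
    };

    static double rect_area(const Shape *s) { return s->a * s->b; }
    static double tri_area(const Shape *s)  { return 0.5 * s->a * s->b; }

    static const ShapeVTable RECT = { rect_area };
    static const ShapeVTable TRI  = { tri_area };

    int main(void) {
        Shape shapes[] = { { &RECT, 3, 4 }, { &TRI, 3, 4 } };
        for (int i = 0; i < 2; i++)   /* dispatch resolved at run time */
            printf("area = %g\n", shapes[i].vtable->area(&shapes[i]));
        return 0;
    }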
The K-Machine would be a super interesting retro-computing project to get going. The schematics and the associated Lisp Machine system are available (I'm unsure of the legal status). If anyone is interested in driving that, I'd be really interested to help (but my primary interest is just the CADR, due to lack of hours in the day :-) )
Rekursiv [1], though I'm not sure how viable it really was.
Edit: it's been discussed [2] here a few times before, and had an extremely colourful history.
Genera was embedded in Macs and Suns, running on the MacIvory and UX boards. For example, the Symbolics graphics suite was available on Macs, running on a MacIvory board and a NuVista graphics board. There were also talks about expanding this to other hardware like Sony News.
IIRC, that was used in an ATM switch, with network switch software running on top of Minima, "a real-time Lisp run-time environment and operating system for the Ivory processor".
Followed by Energize C++, a C++ IDE based on XEmacs, with a language server, incremental compilation, and code reloading, features that only became common decades later.
IBM had something similar with Visual Age for C++ v4, coming from the Smalltalk experience side.