One was the cost of the GC write barrier (cleverly managed for commodity hardware by using the MMU and toggling the write bits; I think it was thought up by Solvobarro). I think a slightly more sophisticated trick could be done with some extra TLB hardware to generalize this for generational collectors in any GC'd language, say Java. Another smart trick would be to skip transporting unless fragmentation got too bad. In a modern memory model, compaction just isn't what it used to be.
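For contrast, the portable software version of that barrier is a per-store check; the MMU trick's whole point is to make the common case free. A minimal card-marking sketch in C (heap size, card size, and names are my own, purely illustrative):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical 1 MB old-generation heap split into 512-byte "cards".
 * On a minor GC the collector scans only the dirty cards for
 * old-to-young pointers instead of the whole old generation. */
#define HEAP_BYTES (1u << 20)
#define CARD_SHIFT 9

static void   *heap[HEAP_BYTES / sizeof(void *)];
static uint8_t card_table[HEAP_BYTES >> CARD_SHIFT]; /* 1 = may hold young refs */

/* Software write barrier: every pointer store also marks its card.
 * The MMU trick replaces this per-store work by write-protecting clean
 * pages and letting the first store to a page fault once instead. */
static void write_ref(void **slot, void *value) {
    *slot = value;
    size_t offset = (size_t)((char *)slot - (char *)heap);
    card_table[offset >> CARD_SHIFT] = 1;
}
```

The TLB-assisted generalization mentioned above would amount to the hardware maintaining `card_table` as a side effect of the store, with no fault and no extra instructions.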
A second one is runtime type analysis. With the RISC-V spec supporting tagged memory, this could be sped up tremendously for Lisp, Python, et al. Is anyone fabbing chips with that option?
The nice thing today is that a lot of languages are revisiting ideas originally shaken out by lisp, so speeding those languages up can speed up Lisp implementations too.
PS: I wish this article had mentioned the KA-10 (the first PDP-10), which was really the first machine designed with Lisp in mind, with an assembly language that directly implemented a number of Lisp primitives.
While Intel borked their MPX implementation, similar memory-tagging schemes have been successfully adopted on Solaris SPARC and have been making their way across ARM implementations (v8+) and Apple's. Microsoft's Phonon might have something similar, but very few details are publicly available.
In the LISP machines, for example, you had an add instruction which would happily work correctly on pointers, floats, and integers, depending on the data type, at the machine-code level. That offers safety and also makes the compiler simpler.
But where this really shines is in things like, well, lists: since the tags can distinguish atoms from pairs and values like nil, fairly complex list-walking operations can be done in hardware, and pretty quickly too. It also makes hardware implementation of garbage collection possible.
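In software, that generic add looks roughly like this low-bit tagging sketch in C (the tag layout is invented for illustration); a tagged-ALU machine essentially does the mask-and-branch for free, in parallel with the arithmetic:

```c
#include <assert.h>
#include <stdint.h>

/* Heap objects are 8-byte aligned, so the low 3 bits of a word are
 * free to carry a type tag.  Tag assignments here are made up. */
typedef uintptr_t value;

#define TAG_MASK    0x7u
#define TAG_FIXNUM  0x1u   /* small integer, payload in the upper bits */
#define TAG_PAIR    0x2u   /* pointer to a cons cell */

static value    make_fixnum(intptr_t n) { return ((uintptr_t)n << 3) | TAG_FIXNUM; }
static intptr_t fixnum_val(value v)     { return (intptr_t)v >> 3; }

/* Generic add: inspect the tags, then pick the right operation.
 * In hardware the tag check runs alongside the ALU; in software every
 * generic "+" pays for a mask, a compare, and a branch. */
static value generic_add(value a, value b) {
    assert((a & TAG_MASK) == TAG_FIXNUM);  /* a type trap, in hardware terms */
    assert((b & TAG_MASK) == TAG_FIXNUM);
    return make_fixnum(fixnum_val(a) + fixnum_val(b));
}
```

The list-walking win is the same idea: `TAG_PAIR` vs. atom can be tested without a memory access, so `cdr`-chasing never touches a type word in the object header.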
This is just my intuition, but I suspect, these days, it all works out to about the same thing in the end. You use some of the cache for code that implements pointer tagging, or you can sacrifice some die area away from cache for hardwired logic doing the same thing. It probably is in the same ballpark of complexity and speed.
An interesting question for a modern ISA design would be to figure out how to make tagged words and memory work well with SIMD.
Not sure what the issue might be. Let’s say you’re doing a multiply-add: you’d call the one for the data type you want and if any operand were of the wrong type you’d get a fault. Am I missing something?
Also I expect any compiler would assume that the contents of an array subject to SIMD computation would be homogeneous anyway, perhaps trying to enforce it elsewhere.
In any case this doesn’t seem like a big deal to me…but I could be wrong!
I guess to really get into it, a good start would be to work with existing SIMD and take a quantitative approach to which part is actually hottest. I wonder if any existing language implementations (e.g. Common Lisp) attempt these kinds of things in the first place.
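One plausible shape for the tags-plus-SIMD question (my own sketch, not how any particular implementation does it): check the tags once for the whole vector, then run a branch-free untagged loop that the compiler is free to vectorize.

```c
#include <stddef.h>
#include <stdint.h>

/* Invented fixnum convention: low bit set marks a small integer. */
typedef uintptr_t value;
#define IS_FIXNUM(v)   (((v) & 1u) == 1u)
#define FIXNUM_VAL(v)  ((intptr_t)(v) >> 1)
#define MAKE_FIXNUM(n) ((value)(((uintptr_t)(n) << 1) | 1u))

/* Returns 0 (a type fault the caller must handle) if the array is not
 * homogeneous; otherwise sums it.  The second loop carries no tag
 * checks, so an auto-vectorizer can turn it into SIMD code. */
static int sum_fixnums(const value *a, size_t n, intptr_t *out) {
    for (size_t i = 0; i < n; i++)
        if (!IS_FIXNUM(a[i]))
            return 0;               /* fall back to a generic slow path */
    intptr_t acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += FIXNUM_VAL(a[i]);
    *out = acc;
    return 1;
}
```

This is the "enforce homogeneity elsewhere" idea made explicit: the tag scan is itself vectorizable (it's just an AND and compare per lane), so the cost of the check is small relative to the arithmetic it guards.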
I know it's been done, but it sounds like fun.
Wouldn't it be cooler to understand the architecture, upgrade it, and put it on an FPGA? Have a faithful Lisp machine with faster everything that fits in a $50 FPGA?
No Intel backdoors. No adtech. No telemetry. No X11 cruft. No SystemD boot mess. No Nvidia driver that doesn't play well with others. You could own the whole architecture - it's small enough for a few people to really grok.
Like restarting all of computing - with a lisp machine, for fun. Not relying on the million years of effort in Linux and the million years of effort on the modern processor below it.
You'd be relying on the fundamental advancements at the silicon layer - modern ASIC cells are practically perfect, compared to what was available in the 1970's. No need for multi-phase clocks, multi-power supply systems (you don't need +/- 10V?). No 10A just to drive 128kB of SRAM. It simplifies everything!
Architecturally a lot of the design in something from this vintage is "because they had to". Modern FPGA design is almost like having 'perfect' or textbook components. You can fan-out hundreds and hundreds of nets and meet timing at 100MHz - something designers would have killed for in 1980!
With "proper" design, on a modern FPGA fabric, you could run at 500MHz. You'd have the world's most roided-out lisp machine.
Boot time? Practically instant. Key lag? What lag?
Need extra horsepower for a scientific calculation? Attach an accelerator directly to the bus.
You could use Yosys and Verilator and the whole chain would be open. Nobody could ever take it from the community.
At outdated silicon nodes, you can build an ASIC. You could put the whole design on the SkyWater PDK and publish your transistor-level design. Would it be competitive with a 5nm processor? Absolutely not.
Would it be the ultimate expression of the Hacker rebellion? I think so.
Personally, I would refrain from "upgrading" and instead faithfully recreate the digital circuits (simply on an FPGA instead of in discrete logic), as was apparently done in the referenced project. It's the same intention as when (re-)implementing Babbage's machines. If it's just to do Lisp programming on a modern machine, everything is already there.
I chose to emulate the OpenCores ethernet controller in the CADR software emulator to make it easier to move images between software and FPGA implementations.
But is Lisp used for anything real?
I mean, even teaching - is it worth it to learn lisp? Aren’t other languages more practical to learn?
Isn’t learning lisp like learning a dead language that, once you leave the lisp class, you’ll never use again?
Wouldn’t you learn the exact same things you learned in lisp using a more widely used (practical) language?
>Eric Raymond has written an essay called "How to Become a Hacker," and in it, among other things, he tells would-be hackers what languages they should learn. He suggests starting with Python and Java, because they are easy to learn. The serious hacker will also want to learn C, in order to hack Unix, and Perl for system administration and cgi scripts. Finally, the truly serious hacker should consider learning Lisp:
>Lisp is worth learning for the profound enlightenment experience you will have when you finally get it; that experience will make you a better programmer for the rest of your days, even if you never actually use Lisp itself a lot. This is the same argument you tend to hear for learning Latin. It won't get you a job, except perhaps as a classics professor, but it will improve your mind, and make you a better writer in languages you do want to use, like English.
>But wait a minute. This metaphor doesn't stretch that far. The reason Latin won't get you a job is that no one speaks it. If you write in Latin, no one can understand you. But Lisp is a computer language, and computers speak whatever language you, the programmer, tell them to.
Small Lisps are sometimes used as an embedded or scripting language -- from AutoCAD to Emacs. And not just in old code: I saw a project the other day using one as the scripting language for a game engine written in Rust. When the language is crafted to that kind of purpose, it's often a good fit.
Not to harp on like a CS prophet, but Lisp is basically a syntaxless language. '(It (can (be . maddening))) but the simplicity has its advantages. Ridiculously easy to implement (see niche scripting applications again) and you never get bogged down by syntax when coding. Just functions and data. It encourages a very meta style of programming, often where your program helps write itself at runtime.
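To make "ridiculously easy to implement" concrete, here is a toy reader/evaluator for arithmetic s-expressions in C. This is a sketch, not a real Lisp (no symbols, no lambda, no GC), but it shows how little machinery the syntax demands:

```c
#include <stdlib.h>

/* A toy evaluator for expressions like (+ 1 (* 2 3) (- 10 4)).
 * Parsing and evaluation fit in two mutually recursive functions
 * because the syntax is nothing but atoms and parenthesized lists. */
static const char *p;  /* cursor into the input string */

static long eval_expr(void);

static long eval_list(void) {
    while (*p == ' ') p++;
    char op = *p++;                 /* operator: '+', '-' or '*' */
    long acc = eval_expr();         /* first operand */
    for (;;) {
        while (*p == ' ') p++;
        if (*p == ')') { p++; break; }
        long v = eval_expr();
        acc = (op == '+') ? acc + v
            : (op == '-') ? acc - v
            :               acc * v;
    }
    return acc;
}

static long eval_expr(void) {
    while (*p == ' ') p++;
    if (*p == '(') { p++; return eval_list(); }
    return strtol(p, (char **)&p, 10);  /* an atom: just a number */
}

static long eval_string(const char *s) { p = s; return eval_expr(); }
```

A full reader that produces cons cells instead of evaluating on the fly is only modestly bigger, which is why Lisp keeps reappearing as an embedded scripting layer.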
Even if you aren't sold on Common Lisp being a good language for career advancement (I'm not either, honestly) I do think something in the Lisp family is probably worth learning for the same reason you should learn at least one assembly language in passing. A handful of basic primitives support every major approach to programming, if you structure them correctly. Makes you think about the problem differently.
Common Lisp runs a lot faster than Python or Ruby and it is faster to write than C or Java.
Well, similar to learning a dead language, lisp has lots of benefits!
For example, learning some latin is very useful when you come across new (to you) words with latin roots for understanding what they mean!
Similarly, learning lisp is very useful for understanding useful ways to solve new programming problems that you come across.
This is what people mean when they say that lisp will change the way you think as a programmer.
But even beyond that, lisp can be very productive. Especially some of its derivatives (like clojure).
Edit: I did not see that someone already replied with a similar response. But I will leave this here anyway.
They are surely not surviving the last 30 years out of charity.
The problem, aside from a $50,000 PC being hard to sell, was maybe that Lisp was still a rather hefty language even on such generous hardware with specialized support, particularly with the more naive compilation techniques of the 70s and early 80s, and after adding a fairly sophisticated operating environment on top.
Very large customer bases like Nvidia can have annual design releases and keep up.
This dynamic is dead now, thanks to the slowing down of Moore's Law. We're even seeing a resurgence of special-purpose hardwired accelerators in CPU's, because "dark silicon" (i.e. the practical death of Dennard scaling) opens up a lot of opportunity for hardware blocks that are only powered up rarely in a typical workload. That's not too different from what the Lisp machines did.
Now I live in a combination of SBCL+Emacs+SLIME, and also LispWorks Pro. For newbies who want to learn a Lisp, I point them to Racket.
Instead of sharing one minicomputer with 8 MB RAM (or less) among tens or hundreds of users, the Lisp programmer had a Lisp Machine as a first personal workstation with a GUI (1981 saw the first commercial Lisp Machine systems, before SUN, the Lisa, Macs, etc.) - thus the Lisp programmer did not have to compete with many other users for scarce memory. Often Lisp programmers had to work at night, when they had a minicomputer to themselves - a global garbage collection would make the whole machine busy, and response times for other users were impacted, up to the point of making machines unusable for longer periods. When I was a student, I got 30 minutes (!) of CPU time for a half-year course on a minicomputer (a DEC10, later a VAX11/780).
So for Lisp programmers, their personal Lisp Machine was much faster than what they had before (a Lisp on a time-shared minicomputer). Initially, that was an investment of around $100k per programmer seat.
Later, clever garbage collection systems were developed which enabled Lisp Machines to make practical use of large amounts of virtual memory - for example, 40 MB of physical RAM and 400 MB of virtual memory. This enabled the development of large applications. Already in the early 80s, the Lisp Machine operating system was in the range of one million lines of object-oriented Lisp code.
The memory overhead of a garbage collected system increased prices compared to other machines, since RAM and disks were very expensive in the 80s.
A typical Unix Lisp system was getting cheap fast, though the performance of the Lisp application might have been slower. Note that there is a huge difference between the speed of small code (a drawing routine) and of whole Lisp applications (a CAD system). Running a large Lisp-based CAD system (like ICAD) at some point in time was both cheaper and faster on Unix than on a Lisp Machine. But that was not so initially, since the Unix machines usually had no (or only a primitive) integration of the garbage collector with the virtual memory system. Customers at that time were already moving to Unix machines, and new Lisp projects were moving there too. For example, the Crash Bandicoot games were developed on SGIs with Allegro Common Lisp. Earlier, some game content was even developed on Symbolics Lisp Machines; the software later moved to SGIs and even later to PCs. Still, a UNIX-based system like a SUN could cost $10k for the Lisp license and $40k for a machine with some memory. Often users later bought additional memory to get to 32MB or even 64MB. I had a Mac IIfx with 32MB RAM and Macintosh Common Lisp; my Symbolics Lisp Machine board for the Mac had 48MB of RAM, organized as 40-bit words with 8 bits of ECC.
Currently a Lisp Machine emulator on an M1 Mac is roughly 1000 times faster than the hardware from 1990, which managed a few MIPS (million instructions per second) - the CPU of a Lisp Machine then was about as fast as a 40MHz 68040. New processor generations were under development, but potential customers moved away, especially as the AI winter caused an implosion of a key market: AI software.
For an article about this topic see:
"The Lisp Machine: Noble Experiment Or Fabulous Failure?"
My Symbolics systems are elegant, don’t get me wrong. But Genera wouldn’t have been any less elegant if they’d taken their 80386+DOS deployment environment (CLOE) and used it as the basis for a true 80386 port of Genera. They were so stuck on being better than everyone else at designing hardware for Lisp that they missed the fact that Lisp no longer needed special hardware.
Actually, I think Lucid was founded because Symbolics did not want to invest further in a UNIX-based implementation.
Symbolics did support SUNs with Lisp Machine boards (the UX400 and UX1200). TI had Lisp Machines with UNIX boards.
Later, Symbolics developed a virtual Lisp Machine running Open Genera (a version of their Genera operating system) on the 64-bit DEC Alpha chip, on top of UNIX.
"The Symbolics Virtual Lisp Machine, or, Using the DEC Alpha as a Programmable Micro-engine"
They wouldn’t even let a company decommissioning a workstation give it to an employee who wanted to take it home without paying about the cost of a Macintosh in “license transfer fees” and then whoever got it had to pay “maintenance” to stay within the letter of the license.
VLM is decent but they’d have been better off retargeting 80386 and 80486 atop either Unix or Windows, rather than trying to maintain their own special fancy architecture forever.
To do a bit of an apples to apples comparison, look at the Apollo and Sun workstation lines versus the Symbolics workstation line from 1983/4-1991/2. That takes you from the Apollo DN300, SUN-1, and Symbolics 3600, through the Apollo DN10000, Sun SPARCstation 2, and Symbolics XL1200. They all started at about 1 MIPS but ended at very different positions.
The Symbolics system is only available as pirated and slightly buggy software for Linux (or in a VM running Linux). A better version exists, but that one is only available in limited commercial form. It's another parallel universe from 30 years ago; most development basically stopped in the mid-90s.
They will also give you that answer, for various reasons.