
Lisp Machine Inc. K-machine (2001) - mepian
http://fare.tunes.org/tmp/emergent/kmachine.htm
======
classichasclass
> correct-endian, i.e. little.

Hey now.

------
jf
The site appears to be down, here is the most recent copy available on the
Wayback Machine:
[https://web.archive.org/web/20181123210702/http://fare.tunes...](https://web.archive.org/web/20181123210702/http://fare.tunes.org/tmp/emergent/kmachine.htm)

------
znpy
> The machine is correct-endian, i.e. little.

Ah, back in those times it was already an argument.

------
FullyFunctional
As I was recently playing with McCarthy's interpreter [1] (and much more that
hasn't hit github), I had some thought about a modern Lisp machine, as an
extension of a RISC-V core [2].

Software is always a big concern and rather than requiring everything be
written in Lisp, I'd want to also be able to run regular binaries written in
C. This requires a way to safely embed the "impure" world.

The obvious way to do this is to extend the value domain with a hidden tag bit
[3] which is carried around everywhere, but can't be changed or inspected by
regular RISC-V instructions and in fact, almost all instructions trap if any
operand is tagged.

Memory space would be partitioned into a tagged space and untagged. Tagged
values can only live in the tagged space and registers (this is important for
precise GC). For regular user space code, tagged values can only be created
with a `cons` instruction, and accessed with `car`, `rplaca`, and similar
instructions. Having dedicated instructions would allow a hardware read
barrier for real-time (or incremental) GC. (Machine mode would be allowed
unsafe access for part of the GC and various tasks).

TL;DR: adding fast Lisp support to an existing RISC-V core is likely much
easier than building a new dedicated architecture from scratch.

(set! projects (cons 'RISCV-LISP projects))

[1] [https://github.com/tommythorn/lisp](https://github.com/tommythorn/lisp)
[2] [https://github.com/tommythorn/yarvi](https://github.com/tommythorn/yarvi)
[3] If you have a data cache, it's not hard to implement a cache line as,
say, a line of 32 33-bit words, backed by 33 32-bit memory words.

~~~
mepian
Someone at UC Berkeley is already adding hardware-assisted GC to a RISC-V
core: [https://people.eecs.berkeley.edu/~maas/papers/maas-asbd16-hw...](https://people.eecs.berkeley.edu/~maas/papers/maas-asbd16-hwgc.pdf)

~~~
FullyFunctional
I'm on the RISC-V J committee (headed by Martin Maas), but what I'm proposing
here is unlikely (?) to appeal to the mainstream RISC-V SoC developers, at
least not without a strong PoC.

------
pinewurst
Every time I see this (it's been posted a few times) I'm surprised that this
architecture was designed so memory-limited (26-bit addressing) and with a big
penalty for IEEE floating point. The need to transcend both these limitations
was pretty obvious at the time, or so I remember. If LMI had continued in
business along with the Lisp Machine gravy train, I can't imagine choosing a
K-machine over a Symbolics 36xx and/or its 40-bit follow-on (which wasn't that
much later).

It's another subject, but I think there's a bit of a Stallman-esque
glorification of LMI when they never seemed to go anywhere nor were Lambda
machines that appealing IMHO.

~~~
lispm
LMI had some interesting hardware: they were one of the first users of the
NuBus (which was later used in a smaller version in the Mac II by Apple), and
they had Lisp Machines with several CPU boards: these used the infrastructure
of a single machine, but each board could run its Lisp independently.

TI bought/licensed the technology from LMI and then developed a bunch of
interesting machines, like: Lisp Machines with an embedded Unix system running
on its own 68020 and communicating via a 68010, the first commercial Lisp
microprocessor (a 32-bit chip), compact Lisp workstations using that chip, a
NuBus board for the Mac II based on their Lisp chip, ...

~~~
classichasclass
There was also the Symbolics MacIvory. I've got one of those in a IIci.

~~~
lispm
The Symbolics MacIvory is a bit more complex, since it is a 40-bit machine and
also uses 48-bit ECC memory.

------
ngcc_hk
Reading that gives me the impression there were many issues in getting Lisp to
run fast: CDR-coding, type checking for fixnum add (sort of like a guessing
approach), floating point wider than 32 bits. Is it that hard to implement a
fast Lisp?

Is Lisp still relevant when the whole world is moving to deep-learning AI
where self-programming is the norm? Maybe. But dedicated hardware?

~~~
eschaton
Even back in the Lisp Machine days, array processors were used for neural
network stuff. You could get an LMI or Symbolics with a floating point
accelerator (a Weitek chip), or with an array processor board and library (a
whole bunch of Weitek chips and memory, for MIMD or SIMD operation).

------
user2994cb
Building a tagged architecture on top of a stock 64-bit system would be much
easier than with 32 bits - current systems use 48 bits for addresses, which
leaves plenty of bits for the tag (you could even represent all
non-floating-point values as 64-bit NaNs and avoid the problems with FP
mentioned in the article).

~~~
znpy
I am not sure, but if my memory serves me right, somewhere in the amd64
specification there was something specifically forbidding the use of the
uppermost 16 bits to implement tagged pointers, and this is one of the reasons
why tagged-pointer-based runtimes haven't been springing up like mushrooms
since the amd64 ISA was introduced.

~~~
user2994cb
Useful discussion on Stack Overflow:
[https://stackoverflow.com/questions/16198700/using-the-extra...](https://stackoverflow.com/questions/16198700/using-the-extra-16-bits-in-64-bit-pointers)

Upshot: you can probably get away with it if you are careful to canonicalize
pointers before using them.

~~~
znpy
thank you.

------
pmoriarty
I wonder if a kickstarter to create a new Lisp Machine could work.

I envision something like a Raspberry Pi, but with the hardware designed to
run Lisp, and taking inspiration from the original Lisp Machines.

~~~
eigenspace
I imagine that the hard part is probably not making a lisp machine but making
what made the lisp machines so special: their software and tooling.

My understanding is that the tooling available on lisp machines was so good it
made emacs look rigid and dumb. I'd imagine that recreating such tooling would
take more man-hours than building the lisp machine and its kernel, but I
don't know.

~~~
mepian
Perhaps the open-sourced software for the original MIT CADR Lisp machine could
be used as the starting point: [https://github.com/mietek/mit-cadr-system-software](https://github.com/mietek/mit-cadr-system-software)

