Lisp Machine Inc. K-machine (2001) (tunes.org)
114 points by mepian 53 days ago | 74 comments



I wonder if a kickstarter to create a new Lisp Machine could work.

I envision something like a Raspberry Pi, but with the hardware designed to run Lisp, and taking inspiration from the original Lisp Machines.


If you have a Raspberry Pi, you have already all the hardware to run Lisp. There is very little reason to develop special hardware. Lisp runs nicely on an ARM processor.

Actually, some of the first ARM processors were supposed to run a Lisp operating system. When Apple did the research and development for the Apple Newton, they originally attempted to run a Lisp-like system on mobile systems like PDAs and tablets. This Lisp dialect was derived mostly from Scheme and Common Lisp - the language was eventually called Dylan. Apple had ARM systems in their lab which ran a Dylan OS. When Apple at one point chose the ARM processor, they invested in ARM and used a chip which was able to support a garbage-collected language. Unfortunately these Lisp operating systems by Apple did not reach the market; Apple instead sold the Apple Newton with an OS kernel written in C++ and the software on top written and scripted in NewtonScript - a language with many aspects from Lisp, incl. garbage collection.


Could you explain what you mean by a "Lisp operating system"? I understand the idea of Lisp as a language, but what does Lisp as an OS mean?


Historically it's an operating system written in Lisp, typically running as a Lisp runtime+image on the metal, itself providing memory management, interrupt handling, process scheduling, network I/O, graphics, keyboard, mouse, windows, applications ... and everything else a typical OS provides.

For a recent attempt see: https://github.com/froggey/Mezzano


Is there any advantage to that over a generic OS? I have not, in general, cared what language my OS was written in.


First off, note that Lisp OSes were made before modern cybersecurity was a concept.

Since everything was Lisp all the way down, and Lisp is a very dynamic language, all calls that comprise the OS are available as plain function calls visible to the interactive REPL, introspection, and modification. They were very developer-oriented and had lots of ways to have live displays of objects, to reuse or interact with them, and very deep debugger and documentation integration, since all code was at the same "level".

Since it's a high level language that does not expose low level memory (although certainly there are obscure implementation-specific Lisp operations for doing so used deep in the guts), corruption of memory at a low crashy level isn't generally a thing. Protections between applications/processes aren't as necessary and it can all remain plain running threads & functions interacting with each other & the OS.


So there isn't really a distinction between the OS and the API? (That is, the Linux kernel, say, is a monolithic binary. Then we have the OS utilities that make up the GNU/Linux OS. And then we have the C - or whatever - standard library API. If I understand you correctly, those distinctions would not exist in a Lisp OS.)


Correct. It's not a process-oriented environment, with crash protection facilities keeping everything at arm's length.

There are namespacing facilities (though I'm not sure how far back in history they go) to delimit your code's reach, as well as to import the public API from others' code or, via a different mechanism, to bore into their private affairs, generally for admin/debugging/tracing/modification. "Global" variables have thread-local, dynamically scoped bindings, which are super useful for configuration, redirection, or setting other side-band broad context when calling into shared code.

Also it should be noted that these types of machines (at least from the Symbolics point of view) tended to be single-user workstations, though they were still networked to allow remote (and simultaneous) access to their running world, with varying levels of per-connection context.

One of the biggest downsides of ye olde Lisp machines was garbage collection times. Some had facilities to define dedicated heap regions, but generally the GC had to walk the entire workstation heap across all "applications" in 1980s hardware, which wasn't great. But at least as a programmer, you could freely code and only worry about minimizing GC pressure when it became an issue, instead of starting from a required malloc/free perspective and constantly worrying about leaks.


> Some had facilities to define dedicated heap regions, but generally the GC had to walk the entire workstation heap across all "applications" in 1980s hardware, which wasn't great.

A stop-the-world global GC was something one generally tried to avoid. Mid-80s Symbolics Genera had regions for data types (areas), areas for manual memory management, a generational GC, an incremental GC (the GC does some work while the programs are running) and an ephemeral GC efficiently tracking memory pages in RAM. For normal use, global stop-the-world GCs were then only needed in special situations like saving a GCed image or when running out of free memory. But one often preferred to just add more virtual memory, save the work and reboot: when Lisp crashed with an out-of-memory error, one could add paging space and continue from there.


Emacs.

(Only half joking)


Not really. GNU Emacs is not even good at multitasking.

Actual Lisp operating systems also don't look like an Emacs running on hardware. The MIT Lisp operating system had Zmacs, but it was one application among others, and while Zmacs had modes (incl. a Dired), the typical Lisp OS application was not a Zmacs mode. None of these were based on Zmacs: the Listener (the REPL), the process overview, the font editor, the inspector, the terminal, the chat program, the documentation reader, ... On a Symbolics there were basically only two other major programs reusing parts of Zmacs in their UI: the mail client and the documentation editor.

Thus when people think that a Lisp OS looks, feels or works like an Emacs on the metal, or that GNU Emacs looks like the typical Lisp operating systems from the 80s (from MIT, Symbolics, LMI, TI or Xerox), this is actually not the case.

See this for an example of how a 3D animator used a Symbolics Lisp Machine and parts of its graphics suite: https://youtu.be/8RSQ6gATnQU


Well, there is a port of glibc to UEFI, so one could very easily "boot into Emacs". Strangely I've not seen anyone do this.


Here is booting into Emacs, implemented by making Emacs the Linux init program: "Emacs standing alone on a Linux Kernel"

https://www.informatimago.com/linux/emacs-on-user-mode-linux...


This is different from what I was talking about -- your link is about running Emacs as PID 1 on Linux (cool, but not particularly innovative -- if you run Emacs in a Docker container that is effectively what you're doing). I was talking about booting from UEFI into Emacs, without the Linux kernel at all. Given that there is a glibc port for UEFI that uses just BootServices, it would be very possible to do this.


Ah yes, I see. Something like this has been done with Python. It was surprisingly difficult, I thought.

Python without an operating system https://news.ycombinator.com/item?id=9453677

Porting Python to run without an OS https://www.youtube.com/watch?v=bYQ_lq5dcvM


I imagine that the hard part is probably not making a Lisp machine but making what made the Lisp machines so special: its software and tooling.

My understanding is that the tooling available on Lisp machines was so good it made Emacs look rigid and dumb. I'd imagine that recreating such tooling would take more man-hours than building the Lisp machine and its kernel, but I don't know.


Perhaps the open-sourced software for the original MIT CADR Lisp machine could be used as the starting point: https://github.com/mietek/mit-cadr-system-software


An LMI Lambda (shipped 1st gen machine that was an evolved MIT CADR) emulator is at: https://github.com/dseagrav/ld


You could start with the old software, the MIT tree has been released. I have copies of the final LMI software including the K-machine stuff.

The software wasn't dumb, there was just a lot less of it. They didn't have IPv6, SSL, a web browser ...

One halfway house would be a Lisp unikernel.


Andreas Olofsson and his team at Adapteva have demonstrated that it's possible for a small experienced team to design and launch a niche processor and a single-board computer similar to the Raspberry Pi around it within a relatively small (less than 5 million USD) budget, and they even used Kickstarter to fund the later iterations: https://en.wikipedia.org/wiki/Adapteva


Custom hardware language machines fall behind implementations on general-purpose CPUs because the latter have an order of magnitude faster hardware release cycles. There were all kinds of clever architectures coming out in the 1980s - Lisp machines, Prolog computers, MasPar, Thinking Machines, etc. The first release tended to be economically competitive. But the next release 3 to 5 years later fell behind Moore's law.


Not only that, but Lisp compiler technology improved to further narrow the gap between Lisp on Lisp Machine and Lisp on generic CPU.


I don't think a Lisp machine would make much sense these days, as far as the hardware is concerned. Designing and building CPUs is very expensive, and the small market for such a machine would make them prohibitively expensive.

But an operating system built around the idea could be quite interesting. From what I know, the systems running on the Lisp machines of the 1980s were nightmares from a modern security perspective. But the idea of having a closely integrated development-runtime-environment centered around Lisp is appealing, I think.

Except that building an OS from scratch is a lot of work, at least if you want to support a reasonable range of hardware. But maybe you do not even need that. Now that I think of it, one might build a desktop environment for a modern Unix-ish system that aims to reproduce the Lisp machine development environment of the olden days. This would also allow one to harness the existing Lisp implementations. Does anyone know of such a project?


It would be fun if an FPGA implementation was done, perhaps from the VLSI design described in [1], "Design of LISP-based Processors, or SCHEME: A Dielectric LISP, or Finite Memories Considered Harmful, or LAMBDA: The Ultimate Opcode". I can imagine an ASIC could fit an array of those and use little power, given advances in densities since 1979.

[1] https://dspace.mit.edu/handle/1721.1/5731



That's interesting, and I wonder if it ever went into production, and if so how many units were sold.

Unfortunately, it seems to have been based around PicoLisp, which isn't exactly a popular Lisp implementation.

I expect that for any project like this to get wide traction in the Lisp community, it would have to be based around Common Lisp, as that's by far the most popular Lisp out there, with a huge existing ecosystem it could leverage -- an advantage akin to basing the Raspberry Pi around Java.


Speaking of which, a hardware JVM would also mean clojure on the metal :)


Only if the JVM had been implemented in Clojure, and all the other stuff, too. As it stands, the Clojure compiler is not written in Clojure but in Java. The JVM also, for example, has no TCP stack of its own; it interfaces with the OS - which in turn is written in C/C++.


An example of a Java Machine is JOP [1], it has a TCP/IP stack and is a microcoded architecture too.

[1] https://www.jopdesign.com/



Sorry, you can't sell Lisp. Maybe once upon a time that was possible, but those days are gone. And Lispers generally are a whiny, creaky bunch to deal with; me included. Pleasing that crowd is a serious undertaking.

I sometimes try to imagine a world where these things took off, but the closest thing I've touched is Emacs and I'm pretty sure that's not what they had in mind.


I would buy a Lisp machine, provided that I could afford it. That proviso was the undoing of them, IMHO.


I would also buy a Lisp Machine, provided I could afford one. I'm really interested to see this software environment that I've heard spoken so highly of. What I've understood so far of it sounds amazing.


Welp, in the meantime you can learn about the keyboards they used :) https://youtu.be/oDozftThFMw


Not bad, but Richard Greenblatt did not found Symbolics - his company was Lisp Machine, Inc., which he founded together with Stephen Wyle and Alex Jacobson. ;-) Though, LMI also sold its machine with a space cadet keyboard, and their first machine was also basically a CADR.

Symbolics also was not a joint venture. Tom Knight was actually a co-founder of Symbolics.


Wait another hundred years and it will be close to magical. We have the collective habit of ascribing great properties to old technologies while bitching about the technologies we have today. I'm certainly no exception here, but the fact is that if it really was that good it likely would have won in the marketplace. Nostalgia is a powerful agent of deception.


You should definitely read 'Worse is Better' by Richard Gabriel if you truly believe that the marketplace is the final arbiter of what is really good. Hint: It's not.

In order to see how important and iconoclastic (never mind good) the Lisp Machine paradigm is, examine the trickle-down effect that it put in motion. There are popular IDEs today that are only _just beginning_ to integrate concepts from that paradigm. The Lisp Machines were so far ahead of their time, so visionary, that it will take decades if not centuries for the consensus to catch up to what they represented.


The marketplace is not perfect. But it has - over time - been a pretty good arbiter for those things that could give a business an edge, and computing power definitely is such a thing. Yes, they were ahead of their time. But being ahead of your time is not an advantage per se; you need to be ahead of your time and you need to be able to deliver practical results which will give the users of your hardware an edge over the competition.

The marketplace sucks for things like healthcare, public transportation, infrastructure. But it works pretty well for many other things, and if there is one success story in the last six decades, computing would be it.


The marketplace gave us PHP and Javascript, Facebook and Google.

The marketplace also gave us the billion dollar computer security industry as a direct result of practices that were 'selected' by said marketplace. We find ourselves operating in an emergent, cascading-effects, all-subsuming space - that we have absolutely no control over - and is now threatening to destroy us.

I would go as far as to say that you are completely out of touch with technology if you think for a moment that the marketplace has long-term memory or strategic ability. This is validated by the constant re-invention / regurgitation of the same ideas. It seems that abiding by 'greed is good' short-term is not really a good way to wield the Promethean fire.


C'mon, you are really proving gp right here with that kind of superlative language. It's plain untrue. If the software was even very good for its time, why would nobody have cared to just port it to a general-purpose machine? After all LISP is all high level stuff, divorced from the hardware level. So, if the software was even just good, why did nobody care to carry it into the future?


> After all LISP is all high level stuff, divorced from the hardware level

Maybe you have not paid attention: we are talking about Lisp Machines with special hardware and their specific operating system written in Lisp from the ground up. Thus it's not your idea of a high-level Lisp, but a Lisp which runs the graphics card interface, handles memory and schedules processes, just as it implements the network interface incl. the whole TCP/IP stack. It runs on top of a special processor for which the compiler generates machine code.

And yes, there are various emulators for various Lisp Machines - there are some for Intel + SPARC (Medley, the emulator for Xerox's Interlisp-D) and there is Symbolics' Open Genera (DEC Alpha, and nowadays Intel 64-bit and ARM 64-bit). Those have been sold commercially. Then there are a handful of non-commercial emulators. But this is old software and there are only tiny user groups left. You can use those, but it's more like time-travel to an alternative universe and its state 20 years ago.


I have been paying attention, I've known that story for well over twenty years, and I fear I might have to listen to rehashes of the same damn old story at least every couple of years for the rest of my life, thank you very much. What's irking me is that the older the story gets, the more irrational and extreme the ongoing deification. I say if the UX and tooling were that godlike as some are wont to make out, you can easily have that without the need for an actual Lisp machine, just port the damn software. I maintain that 90% of those who irrationally praise that old system this much did never even work a single hour with one of them, so where are they getting their delusions from? How would they know it's not just as inefficient and frustrating at times as any other system? They don't, but they just can't keep from creaming all over any forum where the LISP machine is mentioned at all. Darn, I need my morning coffee, I'm cranky.


> just port the damn software.

That's what Open Genera does. Symbolics wrote an emulator for the Ivory architecture, which allowed people to use much of the OS on a DEC Alpha under Unix.

> I maintain that 90% of those who irrationally praise that old system this much did never even work a single hour with one of them

Possibly true, though I have. I know both its pros and cons (and it conses a lot).

> They don't, but they just can't keep from creaming all over any forum where the LISP machine is mentioned at all. Darn I need my morning coffee, I'm cranky.

I think you have a point here.

I think it's useful to have alternative approaches preserved, and some people have been influenced by it and implemented slightly similar stuff on current software (see for example McCLIM for Common Lisp, which is a portable reimplementation of some of the UI management and graphics substrate of the Symbolics Lisp Machine OS). It did not catch on, since much of that is very exotic and complex at the same time. But those people have probably never had the opportunity to format a disk in Lisp, read a Unix tape via a Lispm tape drive or copy a file via anonymous FTP onto a Lispm. It's not the fault of the current generation - they haven't had the chance to actually work with such computers (since the whole thing was mostly dead after the early 90s), and all in all they were very rare (roughly 10000 Lisp Machines in various forms were built) and the whole experience was very expensive - thus only affordable to companies and well-funded research. This makes it more mythical and mysterious. Then also the commercial systems haven't been 'freed' yet - their source code is not available under an open source / free software license - only the old MIT code has been made available, but that is really ancient.

It's the same with the old Lispm keyboards - the layout is cool, but using it for typing? Who has ever typed with the old mechanical keyboards from the 80s and would do that today? There are only a few...


+1 for the very detailed and eminently reasonable answer! I wasn't aware (but probably should have been) that a significant portion of the software is copyright-encumbered because it was not officially legally abandoned. I think ideally the current copyright holders would release the source code in question, both for reasons of preservation, and so that everyone gets the chance to explore these systems for themselves, on their own boxes at home.


How can you talk about irrationality when it's obvious that you've never worked on a Lisp Machine and are thus totally projecting and making up things?

Then of course you accuse the Lisp advocates of suffering from what you so clearly represent. If you had a clue, even a minimal one, you would never write 'just port the damn software'. This statement - and your previous comments - portray fundamental ignorance of what you are trying to argue against. May I suggest further education - Lisp Machine emulators are widely available - so as to stop making a fool of yourself.


I have done a fair bit of work on the various Lisp Machine emulators and don't think that asking why the software wasn't ported to something else is a stupid question.

The K-machine development was 'just porting the damn software' to a large extent.


"Lisp Machine emulators are widely available - so as to stop making a fool of yourself"

Thank you for proving my point, which you didn't get.

a: They were not successful, but should have been, and now it's too late.

b: If anybody cared it wouldn't be too late for success.

a: No see, somebody DID care!

b: And still no success... qed


The hardware wasn't all that special though, at least for the MIT/LMI/TI machines.


The LMI software could have been ported to a 68020 + custom MMU instead of the K-Machine, but wasn't. I suspect that they didn't have enough staff with experience of low-level coding to be able to see different options; they seem to have been reluctant to do much to extend the microcode (equivalent to assembler) to support things like new Ethernet cards - later drivers just worked by read-memory-word/write-memory-word function calls from Lisp.

Lots of the features of the software only work because everything is in the same address space; working out how to do similar things across multiple address spaces for a modern operating system is much harder.

I have talked with RG on Skype but not about historical stuff.


LMI ran out of steam in 86/87. The end of LMI was really strange.

TI modernized a lot of LMI's stuff (hardware and software). But TI management radically pulled the plug when the government funding for the various projects went away. DARPA financed their 32-bit Lisp microprocessor - but there was no one to fund the next round, and TI never even published an emulator or released the source code under some license. It just went away... there is an archive with code from them - but who would touch it without a license?


> After all LISP is all high level stuff, divorced from the hardware level.

A student travelled to the east to hear Master Sussman discourse on the Lisp-nature. When Master Sussman began holding forth on low-level programming in Scheme, the student interrupted him.

"Master," he said, "How can you do such bit-level programming in a high-level language such as Lisp? Would you not want to use C or pure assembly language?"

Master Sussman replied, "A fisherman was walking along the beach, when he spotted an eagle. 'Brother eagle,' said he, 'how distant and unreachable is the sky!' The eagle said nothing, and flew away."

With that, the student was enlightened.


Yes, but the opposite happens too. There are plenty of people who believe that in technology, the latest is the greatest, and everything that came before that must have been inferior.


> the fact is that if it really was that good it likely would have won in the marketplace

I don't think so, and, this being HN, I thought we all knew by now that just because the market favors something, that doesn't make it better in every aspect than other virtually unknown things. The most easily identifiable examples of this are Windows vs. Linux, GUI vs. CLI, and point-and-click interfaces (touchscreen or mouse) vs. keyboard.

The market favors what the masses favor, and the masses favor simple, intuitive interfaces with minimal learning curve. Versatility, efficiency, power and everything else that necessitates even the smallest of learning curves are all damned by the market. When's the last time we've seen an average joe read an instruction manual?

In any case, looking at Lisp OSes vs. Unix OSes, there is a design dichotomy where both options have great advantages. Anyone correct me if I'm wrong, but I understand Lisp OSes chose to have the whole OS work in a single high-level language, which allows a very natural coupling between programs, basically destroying the distinction between whole programs and program functions. On the other hand, Unix OSes chose to have a very unassuming framework for programs that would best support a great diversity of programming languages so that they could best interoperate despite the fact that they could work via very different semantics. This structure, as we all know, is built around the semantics of files, which could be thought of as global variables; plain text arguments as very unassuming (untyped and with no predefined arity) function call arguments; standard input, output, and error as lazily evaluated function arguments; and environment variables, which could be thought of as dynamically scoped variables.

I don't know exactly why Unix won over Lisp, but I could guess that it was because people favored language diversity or even just being able to stick to the language they already knew over having to learn Lisp.

I must say, though, I'm very interested in how Lisp OSes took advantage of this monolanguage property of theirs. I think I remember hearing that when an error (an exception) occurred in any application, the system could open the exact line that raised the error in an editor, allowing one to edit the source and load the modifications while the program is still running. That's just impossible in Unix by design and sounds super exciting. It would be totally useless for an average computer user of today, though. Maybe if the average computer user were a programmer they could see the value, but we all know that's never going to happen. In the beginning all users were programmers and administrators, but as the masses also became users, the administrators are now a minority and the programmers even more so. Technologies like Lisp OSes favor programmers, and that's their sin in the market.

That was then, though. Now, the real obstacle for Lisp OSes, more so than them favoring a minority, is that we've built a shitload of stuff on top of Windows and Linux. How much work would it be to port Chrome or Firefox (both of which are like OSes on their own) to a Lisp OS? Not to mention the other loads of apps that people depend on for their work.

That reminds me, you know how much people bitch about not being able to work because they can't find buttons with specific drawings when offered LibreOffice instead of MS Office? There's lots more to the market than it choosing solely based on quality. The software market is really depressing at times.

> Nostalgia is a powerful agent of deception.

I wasn't alive when Lisp machines were a thing, so no nostalgia applies to me here.


Try one of the emulators.


Nah, that would be a very wrong approach. If you want any chance, a Lisp machine needs to run on current OSes: Linux, Windows, macOS. It's very easy to run on any OS: besides VMs, we have seen Linux subsystems for Windows and iOS, Windows calls on Linux, and even a userland kernel with gVisor. So build it on a current OS.


It would probably be a better step to approach whoever owns one of the Lisp OSes, buy it with a fund raiser, and put it under an MIT license. Then go for custom hardware if needed.


The site appears to be down, here is the most recent copy available on the Wayback Machine: https://web.archive.org/web/20181123210702/http://fare.tunes...


Every time I see this (it's been posted a few times) I'm surprised that this architecture was designed so memory limited (26 bit addressing) and with a big penalty for IEEE floating point. The need to transcend both these limitations was pretty obvious at that time or so I remember. If LMI had continued in business along with the Lisp Machine gravy train, I can't imagine choosing a K-machine over a Symbolics 36xx and/or its 40-bit follow-on (which wasn't that much later).

It's another subject, but I think there's a bit of a Stallman-esque glorification of LMI when they never seemed to go anywhere nor were Lambda machines that appealing IMHO.


LMI had some interesting hardware: they were one of the first users of NuBus (which was later used in a smaller version in the Mac II by Apple), and they had Lisp Machines with several CPU boards: these used the infrastructure of a single machine, but each board could run its Lisp independently.

TI bought/licensed the technology from LMI and then developed a bunch of interesting machines: Lisp Machines with an embedded Unix system running on its own 68020 and communicating via a 68010, the first commercial Lisp microprocessor (a 32-bit chip), compact Lisp workstations using that chip, a NuBus board for the Mac II based on their Lisp chip, ...


There was also the Symbolics MacIvory. I've got one of those in a IIci.


The Symbolics MacIvory is a bit more complex, since it is a 40-bit machine and also uses 48-bit ECC memory.


Building a tagged architecture on top of a stock 64-bit system would be much easier than with 32 bits - current systems use 48 bits for addresses, which leaves plenty of bits left over for the tag (you could even represent all non-floating point values as 64-bit NaNs and avoid the problems with FP mentioned in the article).
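As a minimal sketch of the NaN-boxing variant mentioned here (the layout and names are my own illustration, not any particular runtime's): an IEEE double has a huge space of unused quiet-NaN bit patterns, enough to hide a small tag plus a 48-bit payload, while real doubles are stored unmodified.

```c
#include <stdint.h>
#include <string.h>

/* Quiet-NaN prefix: exponent all ones plus the top mantissa bit.
   Any 64-bit value with these bits set is one of our boxed non-doubles.
   (A real NaN produced by arithmetic would have to be canonicalized
   before boxing; omitted here for brevity.) */
#define QNAN    0x7ff8000000000000ULL
#define TAG_INT 0x0001000000000000ULL  /* one illustrative tag */

typedef uint64_t value_t;

static value_t box_double(double d) {
    value_t v;
    memcpy(&v, &d, sizeof v);  /* doubles are stored as-is */
    return v;
}

static int is_double(value_t v) {
    /* Anything outside the quiet-NaN space is a plain double. */
    return (v & QNAN) != QNAN;
}

static value_t box_int(uint32_t i) {
    return QNAN | TAG_INT | (uint64_t)i;
}

static uint32_t unbox_int(value_t v) {
    return (uint32_t)(v & 0xffffffffULL);
}
```

With 48 payload bits there is also room to box full user-space pointers the same way, which is what makes the scheme attractive on current 64-bit machines.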


I am not sure, but if my memory serves me right, somewhere in the amd64 specification there was something specifically forbidding the use of the uppermost 16 bits to implement tagged pointers, and this is one of the reasons why tagged-pointer-based runtimes haven't been springing up like mushrooms after the amd64 ISA was implemented.


Useful discussion on Stack Overflow: https://stackoverflow.com/questions/16198700/using-the-extra...

Upshot: you can probably get away with it if you are careful to canonicalize pointers before using them.
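A sketch of that upshot (purely illustrative names; assumes today's 48-bit "canonical" user pointers, i.e. sign-extended from bit 47): stash the tag in the top 16 bits, and strip it by sign-extending before any dereference.

```c
#include <stdint.h>

#define TAG_SHIFT 48

/* Pack a 16-bit tag into the (currently unused) top bits of a pointer. */
static uint64_t tag_ptr(void *p, uint16_t tag) {
    return ((uint64_t)tag << TAG_SHIFT)
         | ((uint64_t)(uintptr_t)p & 0x0000ffffffffffffULL);
}

static uint16_t ptr_tag(uint64_t v) {
    return (uint16_t)(v >> TAG_SHIFT);
}

/* Canonicalize before dereferencing: shift the tag out, then use an
   arithmetic right shift to sign-extend bit 47 back through the top
   16 bits. (Relies on the implementation-defined but ubiquitous
   arithmetic >> on signed types.) */
static void *untag_ptr(uint64_t v) {
    return (void *)(uintptr_t)((int64_t)(v << 16) >> 16);
}
```

One caveat worth noting: CPUs with 57-bit addressing (5-level paging) shrink the free bits, which is part of why such schemes are considered fragile.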


Thank you.


Reading that gives me the impression there are many issues in getting Lisp to run fast: CDR coding, type checking for fixnum add (sort of like the guess approach), floating point wider than 32 bits. Hard to implement a fast Lisp?

Is Lisp relevant if the whole world moves to deep-learning AI and self-programming becomes the norm? Maybe. But as hardware?


Even back in the Lisp Machine days, array processors were used for neural network stuff. You could get an LMI or Symbolics with a floating point accelerator (a Weitek chip), or with an array processor board and library (a whole bunch of Weitek chips and memory, for MIMD or SIMD operation).


Also, “machine learning” and “deep learning” are just other terms for neural network systems, which are good for a few specific tasks but are not a general architecture by any means.

Symbolic AI of the sort for which Lisp Machines were originally created is still extremely important as something is needed to provide “executive function” in intelligent systems.


Until 2010, A.I. was pretty much synonymous with LISP.


As I was recently playing with McCarthy's interpreter [1] (and much more that hasn't hit github), I had some thought about a modern Lisp machine, as an extension of a RISC-V core [2].

Software is always a big concern and rather than requiring everything be written in Lisp, I'd want to also be able to run regular binaries written in C. This requires a way to safely embed the "impure" world.

The obvious way to do this is to extend the value domain with a hidden tag bit [3] which is carried around everywhere, but can't be changed or inspected by regular RISC-V instructions; in fact, almost all instructions trap if any operand is tagged.

Memory space would be partitioned into a tagged space and an untagged one. Tagged values can only live in the tagged space and in registers (this is important for precise GC). For regular user-space code, tagged values can only be created with a `cons` instruction, and accessed with `car`, `rplaca`, etc. instructions. Having dedicated instructions would allow a hardware read barrier for real-time (or incremental) GC. (Machine mode would be allowed unsafe access for parts of the GC and various tasks.)

TL;DR: adding fast Lisp support to an existing RISC-V core is likely much easier than building a new dedicated architecture from scratch.

(set! projects (cons 'RISCV-LISP projects))

[1] https://github.com/tommythorn/lisp [2] https://github.com/tommythorn/yarvi [3] If you have a data cache, it's not hard to implement a cache line as, say, a 32-word line of 33-bit words, backed by 33 32-bit memory words.
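The hidden-tag-bit scheme described above can be modeled in plain C (a toy model of the parent's proposal; all function names are invented for illustration): tagged values can only be created by `cons`, only live in the tagged space, and ordinary ALU operations trap on any tagged operand.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Each word carries a hidden tag bit that ordinary code cannot set. */
typedef struct { uint32_t bits; uint8_t tagged; } word;

static word heap[1024];   /* the "tagged space", holding car/cdr pairs */
static uint32_t heap_top;

static word untagged(uint32_t v) { return (word){ v, 0 }; }

/* The only way to create a tagged value: allocate a pair in tagged
   space and return a tagged reference to it. */
static word cons_op(word car, word cdr) {
    uint32_t p = heap_top;
    heap[heap_top++] = car;
    heap[heap_top++] = cdr;
    return (word){ p, 1 };
}

static word car_op(word w) {
    if (!w.tagged) { fprintf(stderr, "trap: car of untagged word\n"); abort(); }
    return heap[w.bits];
}

static word cdr_op(word w) {
    if (!w.tagged) { fprintf(stderr, "trap: cdr of untagged word\n"); abort(); }
    return heap[w.bits + 1];
}

/* A regular ALU op: traps if either operand is tagged. */
static word alu_add(word a, word b) {
    if (a.tagged || b.tagged) { fprintf(stderr, "trap: tagged operand\n"); abort(); }
    return untagged(a.bits + b.bits);
}
```

A read barrier for incremental GC would hook `car_op`/`cdr_op`, which is exactly why having dedicated instructions for them helps.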


Someone at UC Berkeley is already adding hardware-assisted GC to a RISC-V core: https://people.eecs.berkeley.edu/~maas/papers/maas-asbd16-hw...


I'm on the RISC-V J committee (headed by Martin Maas), but what I'm proposing here is unlikely (?) to appeal to the mainstream RISC-V SoC developers, at least not without a strong PoC.


> The machine is correct-endian, i.e. little.

Ah, back in those times it was already an argument.


> correct-endian, i.e. little.

Hey now.




