Hacker News
uLisp (ulisp.com)
377 points by tosh 12 days ago | 87 comments





I think it's hilarious that this project is basically a 5k-line .ino file + immeasurable awesomeness: https://github.com/technoblogy/ulisp/blob/master/ulisp.ino

This is something that is accessible to read and modify in a weekend. Really cool.


For a similarly accessible lisp implementation, see Scheme48 which was originally written in two days.

https://www.s48.org/1.9.2/manual/manual-Z-H-2.html#node_chap...


There are actually a few of these.

https://github.com/zpl-c/tinyscheme/blob/master/source/schem...

And Scheme in one defun (SIOD), although it seems the recent version of this has been split into multiple files.

The video game Abuse also had a simple embedded Lisp: https://github.com/videogamepreservation/abuse/blob/master/a...


Another is s7, one C file to embed. I use it for computer music, I love it. It's a Scheme that is heavily influenced by Common Lisp.

s7 is amazing, but at ~100k loc it's hardly at the same scale

ha, I guess that's true. Instead of a SIOD, it's a SIOBFD. ;-)

It's similar in terms of ease of embedding, being all ANSI C, and coming from a TinyScheme, originally...


The mark-sweep GC implementation is beautiful in its direct simplicity.

Some past threads:

Lisp Badge: A single-board computer that you can program in uLisp - https://news.ycombinator.com/item?id=23729970 - July 2020 (25 comments)

A new RISC-V version of uLisp - https://news.ycombinator.com/item?id=22640980 - March 2020 (35 comments)

uLisp – ARM Assembler in Lisp - https://news.ycombinator.com/item?id=22117241 - Jan 2020 (49 comments)

Ray tracing with uLisp - https://news.ycombinator.com/item?id=20565559 - July 2019 (10 comments)

uLisp: Lisp for microcontrollers - https://news.ycombinator.com/item?id=18882335 - Jan 2019 (16 comments)

GPS mapping application in uLisp - https://news.ycombinator.com/item?id=18466566 - Nov 2018 (4 comments)

Tiny Lisp Computer 2 - https://news.ycombinator.com/item?id=16347048 - Feb 2018 (2 comments)

uLisp – Lisp for the Arduino - https://news.ycombinator.com/item?id=11777662 - May 2016 (33 comments)


Big fan of uLisp, I got it running on an ESP8266. Love how the whole language fits in a single file, making it easy to hack around.

Recently I learned how the author generates the uLisp variants for different platforms using Common Lisp:

https://github.com/technoblogy/ulisp-builder

..And an accompanying article to describe how it works:

uLisp Builder - http://www.ulisp.com/show?3F07

Also, a treasure trove of other Arduino and AVR projects by the author here:

http://www.technoblogy.com/


This is really cool, I would love to program my fleet of ESP8266s with anything but C. And I’ve been passively interested in using Lisp for many years. Using Lisp for my program as glue for the normal ESP C libraries would be awesome. I’ve read the documentation of uLisp a bit, two main questions remain:

1. What’s the workflow? REPL for development is super cool, but how do I persist my programs? How does »flash this code onto the microcontroller« work?

2. How can I interface with the large amount of C libraries out there? For example, uLisp does not provide an OTA library (for updating the software over Wifi), or one for MQTT. I don’t want to rewrite those myself, so how do I call existing C from uLisp?


The scheme distributions I'm familiar with (Chicken, Gambit, and Gerbil) have well-defined FFIs for C.

I think with Gambit you can have C inline with your Scheme:

https://www.iro.umontreal.ca/~gambit/doc/gambit.html#C_002di...

This is no help for programming for embedded targets? Maybe? I'm very close to clueless (I RTFM, but I still don't understand a large portion of the functionality of Schemes' FFIs).


Having never heard of Gambit, it looks like Gambit:Scheme::Awka:AWK

https://github.com/noyesno/awka


> I would love to program my fleet of ESP8266s with anything but C.

Have you looked into Embeddable Common-Lisp? (https://gitlab.com/embeddable-common-lisp/ecl)


The "Embeddable" in Embeddable Common Lisp means that you can embed your CL code in C programs, not that it's for embedded programming. Probably a poor naming choice because a lot of people have this perception about it.

All true, but ECL, after having been transpiled to C, has a fair chance of being cross-compiled for and run on a uC.

> but how do I persist my programs?

    (save-image 'your-fun)

I used uLisp in a project. I had no available non-volatile storage so I handled persistence by writing my code on the laptop and sending it serially to the micro.

When some functions proved to be stable and useful I ported them in as uLisp builtins with a C implementation.

I used it as a debug driver in an STM32 for an eZ80 CPU - the STM32 provided USB to TTY and a third serial channel for debug control.

It was a nice addition but I do wish it was designed in a way that made these modifications easier.


Another commenter already mentioned Gambit Scheme. That provides for inline C and therefore very easy interop with external libraries. It still has a runtime and GC though - those might pose a problem depending on your platform and task.

Ferret (https://github.com/nakkaya/ferret) and Carp (https://github.com/carp-lang/Carp) are both Lisp-like low level languages. Both seem to be fairly experimental in nature though.

> anything but C

Taking you literally, Rust and D can both compile for bare metal. D in particular has a "Better C" subset. (https://dlang.org/spec/betterc.html)

In the same vein, Terra is a C like language (manual memory management) that you metaprogram with Lua. (https://github.com/terralang/terra)

Taking you very literally, Forth is also an option.


I went down many of those routes myself, though not the Lisp ones. There's also Tcl. But I've settled on Nim as my favorite for embedded (at least for a while). For the ESP8266, Rust (or D) would be tricky. Nim can compile to C, unlike Rust, which doesn't support the Xtensa architecture found on most ESP chips. Not sure about D, but it seems unlikely to support Xtensa.

There's a Nim ESP8266 SDK: https://github.com/clj/nim-esp8266-sdk And a YouTube talk on programming the ESP8266: https://youtu.be/eCCrkZI0rVU

Just be sure to use ARC/ORC. If you're interested in ESP32s as well, I wrote a Nim wrapper of the esp-idf: https://github.com/elcritch/nesper


Just to clarify - Gambit, Chicken, and Carp all compile to portable C.

I hadn't realized LLVM mainline doesn't support Xtensa. I'm surprised.

D does support Xtensa via LDC (https://forum.dlang.org/thread/rpdwiusbevvphqyvjhdj@forum.dl...). It looks like GDC also nearly supports it, requiring only a minor patch at present.

A functioning LLVM backend does exist (https://github.com/espressif/llvm-project/issues/4) and appears to be making very slow progress towards being merged. A quick search shows that it works for Rust. I suspect (but don't know) that it might work for Terra as well.

There's also the LLVM C backend (https://github.com/JuliaComputingOSS/llvm-cbe) but I've no idea how efficient such an approach is when applied to real world embedded tasks.


in the 1970s there were two common interpreted languages on small-RAM (say 4k) microcomputers, BASIC and FORTH; the second of which is almost "take the parentheses out of your LISP program, reverse the order of the tokens, and... it works!"
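The correspondence the parent describes can be seen on a tiny, purely illustrative example:

```
Lisp:   (* (+ 1 2) 3)       ; evaluates to 9
Forth:  3 2 1 + *           \ parens dropped, tokens reversed: also 9
```

(It "works" up to argument order: for commutative operators like + and * the reversal is harmless, but for - or / the operands come out swapped, so it really is only *almost* the same program.)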

By 1980 or so I think there was a LISP for CP/M that fit in a large-memory micro (48k), but it was expensive and not that popular. A LISP runtime could have been an answer to the certifiably insane segmented memory model of the IBM PC, and I know people tried it, but other languages pulled ahead... In particular Turbo Pascal and other languages with very fast compilers and IDE user interfaces ripped off from LISP machines that were about as good as Visual Studio, Eclipse, and IntelliJ are today -- only the PC versions were much, much faster than the modern equivalents!

To be fair: modern IDEs are handling much larger programs. I remember a 1988 issue of BYTE magazine devoted to the idea of "software components", which the author thought were a long way off, but are now realized in the form of "npm", "maven", "pypi", if not the ancient CPAN, ActiveX, ...

I think it's also funny that circa 1988 Niklaus Wirth thought that true modularity in the module system mattered, the world didn't listen, and we've gone on to see Java only moderately harmed by the "Guava 13+ broke Hadoop" crisis and to see npm practice diverge into insanity with 2000+ dependencies for small projects -- but it works, because the half-baked module system is more modular than Java's.


> FORTH ... is almost "take the parenthesis out of your LISP program, reverse the order of the tokens, and... it works!"

Strictly speaking, FORTH does not have a lambda syntax. But if you literally "reverse the order" of function application (while keeping lambda abstraction the same) in lambda calculus, you get De Bruijn notation which is rather more compelling than the LISP default, and also rather FORTH-like as you note. (Even moreso if variable names are replaced by De Bruijn indices.)
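A small illustration of the notation (using the common 1-indexed convention, where an index counts how many binders outward its variable was bound):

```
λx. x                  →  λ 1
λx. λy. x              →  λ λ 2              (the K combinator)
λx. λy. λz. x z (y z)  →  λ λ λ 3 1 (2 1)    (the S combinator)
```

With names gone, alpha-equivalent terms become syntactically identical, which is the main appeal.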


For those like me who hadn't heard of De Bruijn notation, check out this PDF from Cornell. [0] I found the Wikipedia article pretty impenetrable, but no doubt it makes sense to a reader who already knows all about De Bruijn notation.

[0] (PDF) https://www.cs.cornell.edu/courses/cs4110/2018fa/lectures/le...


FORTH is very different from LISP overall, but it's similar in that you can use the quote operator to write control structures the same way you write any other function.

Common LISP has CLOS, which implements object-oriented programming in a style not too different from Python's, but as a library without compiler support -- and that was also true of late-1980s FORTH.


> but as a library without compiler support

I'm not sure what you mean by this; as part of CL, CLOS always had compiler (and run-time compiler) support.

Many early OO systems were implemented in Lisps, partially because they are flexible enough to add language features reasonably well, by default. See, e.g., Flavors and LOOPS. These predate CL and influenced the design of CLOS.


> can use the quote operator to write control structures

The quote operator cannot be used for writing control structures in Lisp, unless we are referring to something you generally should not do in production code, like:

  (defun meta-circular-if (cond then else)
    (if (eval cond) (eval then) (eval else)))

  (meta-circular-if (quote (> 3 2)) (quote (+ 4 4)) 0)
Quoting is nothing like referring to Forth words as data. When Forth code refers to a Forth word foo, that's more like (function foo) than (quote foo).

> The quote operator cannot be used for writing control structures in Lisp

I'm not sure this is what they had in mind, but I've actually done pretty much this when bootstrapping. Eg:

  (def (do-if cond)  # actually a built-in, but defined this way
    (if cond id quote))
  ((do-if tee) (foo))  # (id (foo))
  ((do-if nil) (bar))  # (quote (bar))

CLOS generics+methods style is actually quite a different model in some ways from the Smalltalk-like model Python runs on! Primarily, method dispatch has the generic function primary and the classes secondary: methods are separately-definable specializations of what would otherwise be functions, rather than “owned” by a receiver object via its class. There was a recent item on Python versus CL: https://news.ycombinator.com/item?id=27011942

I can't find a link to it at the moment, but Pierre de Lacaze gave an excellent talk about the CLOS/MOP systems at Lisp NYC about 5 years ago. If anyone is interested in these topics, I'd recommend digging around to see if you can find it.

The Python object model is quite unlike CLOS.

> Even moreso if variable names are replaced by De Bruijn indices.

De Bruijn indices are not simply variable names: they vary dynamically depending where in the term (i.e. under how many binders) they occur, which is why beta reduction is so hairy with the De Bruijn notation.


It's interesting that you cite package managers that are deeply flawed while not listing the two that are, in my opinion, the best in class:

- The JVM Maven Central repo

- Rust's Cargo

Also, you never say exactly which product from Wirth you are referring to; I'm assuming Modula-2. If so, all Modula-2 introduced was namespaces at the language level, and while this was a pretty novel idea at the time, it was very simplistic.

Also, the reason why Turbo Pascal and family were fast was that the compilers were not just single-pass but would abort on the first error.

I can't think of any reason why I'd trade the current compilers I use on a daily basis (mostly Kotlin and Rust) to go back to a single pass compiler, even for speed. My time is precious and a fast compiler that then forces me to spend three times as much time writing and correcting my code is not a good use of my time.


I did mention maven.

The advantage that Javascript has over Java in this respect is that in Javascript module A can import version B.1 of module B and module C can import version B.2 of module B and it works almost always.
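Schematically, npm gets away with this by nesting: each consumer can get its own private copy of a dependency (module names here are invented for illustration):

```
node_modules/
├── A/
│   └── node_modules/
│       └── B/          <- B@1.x, private to A
└── C/
    └── node_modules/
        └── B/          <- B@2.x, private to C
```

Java's flat classpath can load only one version of B, which is exactly what makes diamond dependencies like the one below so painful.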

Contrast that to the Guava/Hadoop incompatibility, in which a tiny change to Guava that broke HDFS in a racy way got Hadoop stuck at Guava 13, so you have the choice of going with G13 forever, ditching Hadoop, or ditching Guava.

I think that one is the worst diamond dependency in Java ever. It is one of the scaling problems Maven has as the number of imports goes up, though it might not be the worst one limiting what you can do with Maven. (E.g., if you have a platoon of developers and a corresponding number of projects, you might be able to complete a build that downloads SNAPSHOT(s) about 0.5 of the time.)


You're overdramatizing.

For one instance of the Guava/Hadoop incident (which could happen just the same in any environment), there are millions of builds happening every day that work just fine with the JVM + Maven Central repo combination.

And as opposed to the Javascript repos, you cannot delete anything on a Maven repo once it's been uploaded there.


Hi Paul,

beside Byte, do you know any place to read about that part of history ?


David Ahl's Creative Computing magazine is the rarest and best of all. My mom threw most of mine out when I was away in school, and so did most of the world's libraries!

The other day I found the first issue of "PC Computing" which did survive in my collection and it has a nice article on the "future of printing" that explains current-day inkjet and laser printing quite well.


Ha, my mom did the same thing. Luckily, my issues of The Transactor survived the purge.

Back issues of CC are archived here: https://archive.org/details/creativecomputing


I'd say for that part of history, I'd go with Dr Dobb's Journal from the period. Not sure if any are online or if you'd have to find paper copies at the library.

I used uLisp to program a really small microcomputer for a hobbyist rocket project. Essentially it collected telemetry data, sent it back to my laptop, and activated a locator beacon when it knew it had landed. Good fun to learn, but it was still slightly too large for the boards I was programming - I assume it'll be a lot better on an Arduino or something with a decent amount of memory.

That sounds like a really fun project. What board were you using? And you didn't by chance write about this somewhere, did you?


There's also Lua RTOS[1] plus Fennel[2]. Lua is easy to embed, but I find the syntax pretty bad, and Fennel fixes most of it for me.

[1] https://github.com/whitecatboard/Lua-RTOS-ESP32/ [2] https://fennel-lang.org


How does it compare to NodeMCU? I've tried to write some code to read I²C sensors and publish data via MQTT using NodeMCU, but the docs and the examples are outdated. I wasted time, only to find out that it's more productive to read the function prototypes. In the end I went with Tasmota: flash the appropriate image, set up WiFi and the MQTT endpoint, and you're done. The ESP32 version also has a Python-like embedded language called Berry.

https://tasmota.github.io/docs/Berry/


I haven't used it in an MCU, I am afraid. I've recently taken up Fennel for a few projects and was made aware of its MCU capabilities, but that's it.

I see that on the 16-bit micros, numbers range from -32768 to 32767, which means that bits aren't being stolen for either tags or garbage collection marking. Does that mean it uses boxing instead of tags, or are the tags just on the pointers, or is there some magic during the compilation step? I'm just partway through mal, which uses boxing, so I'd like to look at a simple implementation using tagging or another memory-friendly approach.

From http://www.ulisp.com/show?1AWG:

> Every Lisp object is a two-word cell, representing either a symbol, number, or cons.

(Also, it's fun to see that my article on GC was helpful. :) )


So basically using a full word for the tag, if I understand correctly.

Yup, exactly right.

I love Lisps, but isn't using a garbage-collected language on a micro a bit bizarre? I'm sure in lots of cases it won't matter, but then you could probably use some cheap ARM SBC and its IO pins (and a full dev environment).

the Venn diagram of situations where an SBC won't do and you don't care about random pauses seems kinda small.. I could be wrong


Lots of people are embracing MicroPython and CircuitPython on microcontrollers these days. The computational capabilities of many of these devices exceed what a high-end PC could do in the early/mid-'90s. Many applications, especially hobbyist projects, are doing relatively mundane and low-frequency work like turning things on and off and taking sensor readings at human-scale intervals and then going to sleep. So why not?

I'm not experienced in this whole area, so I meant to ask in an open ended way :)

Does uLisp play well with sleep modes and setting up interrupts to come out of sleep and stuff?

I've been wanting to build a low-power weather logger that can run on a battery for a year. While scoping out the difficulty of the project and digging into low-power stuff .. things got very hairy very fast (this was with STM32F1 chips). A GC seems like an added layer of complexity. Like what happens if garbage collection occurs during an interrupt handler? Or if your garbage collection is interrupted by something else..? Is garbage collection on its own timer/handler that you need to manage?

Or is this built on top of the Arduino main-event-loop model of programming? In which case it doesn't seem to be the normal interrupt-driven thing you'd be looking at for low-power applications.. I think


I run s7 Scheme in audio contexts (with Scheme for Max), which is quite similar (soft-realtime), and I've found the impact of the GC to be much smaller than I expected. I just need to run things such that an extra ms of either latency or jitter is acceptable, because the GC runs are bursty. So running in an audio app, if the I/O buffer is more than a few ms (latency) in size (the norm when any amount of heavy DSP is going on anyway), the GC runs just happen in that latency gap, and timing is solid. Or alternatively, timing is off by a ms on one pass and correct on the next, either of which is fine for music.

If you care about latency, not performance, I think it's fine? You can do a Cheney semispace collector in which it's safe to allocate in interrupt handlers if you have control over the ABI (and you've got a few registers to spare) or if your instruction set provides an "increment this memory location" instruction.

The overlap might indeed be small outside the hobby space, but I'd like to note that some MCUs, e.g. the RP2040 (of the Raspberry Pi Pico board), have additional hardware ("PIO") and support for it (in the form of an assembler) in a MicroPython implementation, which allows for some hard real-time applications with latencies in the sub-µs range. So with sufficient hardware support, performance and latency in the language used to program the core might not matter.

The PIO's "state machines" in the RP2040 offer fairly limited functionality, but the principle also applies to more capable hardware like TI's Sitara MCUs and FPGAs with soft-core CPUs.


> So with sufficient hardware support, performance and latency in the language used to program the core might not matter.

E.g. an 8-bit, 125 MS/s (!) arbitrary wave generator (AWG) in MicroPython: https://www.instructables.com/Arbitrary-Wave-Generator-With-...


It wouldn't (likely) be the main way you program a device so much as an interpreter available on the device for interaction.

I last used it on an ARM micro. Building and deploying a new image to flash is substantially slower than using a lisp console over USB serial, and requires me to wire up the programming headers instead of just using the USB line providing power and other services.


Lisp was born on the IBM 401; many of these micros are supercomputers versus '50s and '60s hardware.

Eh? IBM 704

Yeah, should have looked it up before commenting, thanks for the correction.

Very cool! Quite similar to fe [1], a tiny, embeddable Lisp by the magnificent rxi. Seriously, if you like games and gorgeous C and Lua code, check out his projects and the games on itch.io!

[1] https://github.com/rxi/fe


If you like Clojure there is also https://github.com/mfikes/esprit

Another one for the Clojurists - https://github.com/nakkaya/ferret which compiles down to c++11.

Though it doesn't offer a real lisp REPL or eval, so it's not a meaningful comparison with uLisp.


How does the speed compare with MicroPython?

They do both support very straightforward inline ASM, so there's an option for things that need the speed (LED matrix driving, etc) for either.

And there's also the option of just using brute force. Like the Teensy. It's kind of funny that a 32 bit 600MHz 1MB Ram device is now a "micro" controller.


That's not really what I'm getting at. Yeah you can inline ASM, but that kind of defeats the purpose. And yeah, micros are "powerful", but a 10x speed difference is still significant. "Speed" is a proxy for "power draw", and a 2 hour smartwatch is rather less useful than a 20 hour one.

They do have performance and benchmark pages. No direct comparison to MicroPython, but I assume you could recreate the crc32 and Fibonacci ones.

http://www.ulisp.com/show?36M3

http://www.ulisp.com/show?1EO1


One of the best and most useful Lisp sites. Good for learning and studying. Not sure anyone uses it for production, though.

I'd feel the limitation of 16-bit ints on my AVR8.

There is no language that seems like a good fit for the AVR8, thanks to its many unique characteristics: for instance, the Harvard architecture and 8-bit math with a fair-sized register file. You should be able to use 3 registers to do 24-bit math, allocate a block of registers for the exclusive use of the interrupt handler, do away with the "undefined behavior" that comes from using a stack, etc.

The C language environment from Arduino is acceptable, particularly for education, because you are not wasting your time learning C.

I write a lot of programs for the Arduino that are mostly "traverse a graph of data structures and do something" (say, with laser light show coordinate data or something like that), and I pack the data up with a "data assembler" written in Python... That is a weekend project's worth of tooling.

It would be fun to have either a high-level assembler or a Pascal-like language matched to the AVR8, but I think it won't happen because the AVR8 is a dead end in some respects. That is, they aren't going to make a better one; maybe I can get 4x better performance with assembler compared to C for one task, but if I care about performance I will buy a more powerful microcontroller and just recompile the C program.


It is fairly easy to implement N-bit arithmetic in a 16-bit Lisp, especially if the system provides a carry flag, which it seldom does, unfortunately.

For some stupid reason I had strange (10000 . 10000) arithmetic, where the big number X was (cons (/ X 10000) (mod X 10000)) and primitives like */10000 helped to use it (multiply and divide by 10000).

Ok. The stupid reason was it was aesthetically pleasing. For example the big number 12346789 was (1234 . 6789).


That was common with Forth. Limit? (2^16)-1. If you need something bigger, declare a long number on your own.

Many FORTHs were based on 16-bit ints, but I knew somebody who wrote a 32-bit FORTH for an 8-bit machine because they needed 32-bit ints.

Ask and you will be served,

https://www.mikroe.com/mikropascal-avr


I've got to check this project out. I've been playing around with microcontrollers quite a lot lately, and strangely Lisp is still my most fluent language from all my years of AutoLISP programming. What a great initiative.

How does it compare with Python on productivity and hardware power needed?

Is it called that because u looks a bit like μ? How is it pronounced?

Yes, u is often used as an approximation to μ. Pronounce it »micro«. The most prominent abbreviation is uc for microcontroller, and uLisp is »Microlisp«.

Oddly enough, when I wrote uLisp as µLisp, the project lead edited my post to use uLisp instead.

I pronounce it 'U'-Lisp.

micro

Is there a way for me to buy a pre-assembled Lisp Badge machine?

I'm paying the cost of the machine + shipping + $150. Throw in a printed manual for good measure, too. :)


Slightly surprised there's no Pico port, given other M0-based boards that are on the list.


Give it time, I'd say. The RP2040 has been out for just a couple of weeks, and availability has been spotty so far.

femtolisp?

I wonder if I could make a model to accurately predict the number of HNers over 50 years old by the number of Lisp related posts...hmmmmmm :-)

Well you know, once you hit your 40's (I'm 46), you've had lots of time to explore languages, and you really don't waste your limited life remaining coding in bad ones... haha

A few coders from each new generation rediscover the lisp arts... https://xkcd.com/297/


