This is really cool, I would love to program my fleet of ESP8266s with anything but C. And I’ve been passively interested in using Lisp for many years. Using Lisp for my program as glue for the normal ESP C libraries would be awesome. I’ve read the documentation of uLisp a bit, two main questions remain:
1. What’s the workflow? REPL for development is super cool, but how do I persist my programs? How does »flash this code onto the microcontroller« work?
2. How can I interface with the large number of C libraries out there? For example, uLisp does not provide an OTA library (for updating the software over WiFi), or one for MQTT. I don’t want to rewrite those myself, so how do I call existing C from uLisp?
So this is no help for programming embedded targets? Maybe? I'm very close to clueless (I RTFM, but I still don't understand a large portion of the functionality of Schemes' FFIs).
The "Embeddable" in Embeddable Common Lisp means that you can embed your CL code in C programs, not that it's for embedded programming. Probably a poor naming choice because a lot of people have this perception about it.
I used uLisp in a project. I had no available non-volatile storage so I handled persistence by writing my code on the laptop and sending it serially to the micro.
When some functions proved to be stable and useful, I ported them into uLisp as builtins with a C implementation.
I used it as a debug driver in an STM32 for an eZ80 CPU - the STM32 provided USB to TTY and a third serial channel for debug control.
It was a nice addition but I do wish it was designed in a way that made these modifications easier.
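For anyone wondering what that porting step looks like, here is a rough sketch in the shape of uLisp's C extension mechanism. The fn_scale function is invented for illustration, and helper names like checkinteger/number vary between uLisp versions, so treat this as the shape rather than copy-paste code:

    // invented example: a builtin that multiplies its integer argument by 10
    object *fn_scale (object *args, object *env) {
      (void) env;
      int n = checkinteger(first(args));  // unbox the Lisp integer
      return number(n * 10);              // box the result back up as a Lisp object
    }
    // the function is then registered in uLisp's builtin lookup table,
    // which binds it to a symbol name and an argument count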
Another commenter already mentioned Gambit Scheme. That provides for inline C and therefore very easy interop with external libraries. It still has a runtime and GC though - those might pose a problem depending on your platform and task.
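As a taste, a minimal sketch from memory of Gambit's c-lambda form (newer Gambits use ___return(...), older ones assign to ___result, so check the manual for your version):

    ; add-ints is an invented name; c-lambda is Gambit's inline-C FFI form
    (define add-ints
      (c-lambda (int int) int
        "___return(___arg1 + ___arg2);"))

    (display (add-ints 2 3))  ; prints 5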
I went down many of those routes myself, though not the Lisp ones. There's also Tcl. But I've settled on Nim as my favorite for embedded (at least for a while). For the ESP8266, Rust (or D) would be tricky: Nim compiles to C, so it can ride on the vendor toolchain, whereas Rust depends on LLVM, which doesn't support the Xtensa architecture found on most ESP chips. Not sure about D, but it seems unlikely to support Xtensa either.
A functioning LLVM backend does exist (https://github.com/espressif/llvm-project/issues/4) and appears to be making very slow progress towards being merged. A quick search shows that it works for Rust. I suspect (but don't know) that it might work for Terra as well.
In the 1970s there were two common interpreted languages on small-RAM (say 4K) microcomputers: BASIC and FORTH. The second is almost "take the parentheses out of your LISP program, reverse the order of the tokens, and... it works!"
By 1980 or so I think there was a LISP for CP/M that fit in a large-memory micro (48K), but it was expensive and not that popular. A LISP runtime could have been an answer to the certifiably insane segmented memory model of the IBM PC, and I know people tried it, but other languages pulled ahead... in particular Turbo Pascal and other languages with very fast compilers and IDE user interfaces ripped off from LISP machines that were about as good as Visual Studio, Eclipse, and IntelliJ are today -- only the PC versions were much, much faster than the modern equivalents!
To be fair: modern IDEs are handling much larger programs. I remember a 1988 issue of BYTE magazine devoted to the idea of "software components", which the author thought were a long way off but are now realized in the form of "npm", "maven", "pypi", if not the ancient CPAN, ActiveX, ...
I think it's also funny that circa 1988 Niklaus Wirth thought that true modularity in the module system mattered, the world didn't listen, and we've gone on to see Java only moderately harmed by the "Guava 13+ broke Hadoop" crisis and to see npm practice diverge into insanity with 2000+ dependencies for small projects -- but it works, because the half-baked module system is more modular than Java's.
> FORTH ... is almost "take the parentheses out of your LISP program, reverse the order of the tokens, and... it works!"
Strictly speaking, FORTH does not have a lambda syntax. But if you literally "reverse the order" of function application (while keeping lambda abstraction the same) in lambda calculus, you get De Bruijn notation, which is rather more compelling than the LISP default, and also rather FORTH-like, as you note. (Even more so if variable names are replaced by De Bruijn indices.)
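To make the reversal concrete (a sketch -- I'm hedging on De Bruijn's exact AUTOMATH conventions, in which the argument is written before the function):

    M\,N \;\rightsquigarrow\; (N)\,M
    \qquad\text{e.g.}\qquad
    f\,(g\,x) \;\rightsquigarrow\; ((x)\,g)\,f

Read left to right, the reversed form is exactly the "x g f" postfix order a FORTH programmer would write.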
For those like me who hadn't heard of De Bruijn notation, check out this PDF from Cornell. [0] I found the Wikipedia article pretty impenetrable, but no doubt it makes sense to a reader who already knows all about De Bruijn notation.
FORTH is very different from LISP, but it's similar in one important respect: you can use the quote operator to write control structures the same way you write any other function.
Common LISP had CLOS, which implemented object-oriented programming in a style not too different from Python's, but as a library without compiler support -- and that was also true of late-1980s FORTH.
I'm not sure what you mean by this; as part of CL, CLOS has always had compiler (and run-time compiler) support.
Many early OO systems were implemented in lisps, partially because they are flexible enough to add language features reasonably well, by default.
See, e.g., Flavors and Loops. These predate CL and influenced the design of CLOS.
> can use the quote operator to write control structures
The quote operator cannot be used for writing control structures in Lisp, unless we are referring to something you generally should not do in production code, like:
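    ;; (a guess at the sort of example meant: faking a control structure
    ;; by passing quoted forms and EVALing the chosen one)
    (defun my-if (test then else)
      (cond (test (eval then))
            (t (eval else))))

    (my-if (> 2 1) '(print "then branch") '(print "else branch"))  ; prints "then branch"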
Quoting is nothing like referring to Forth words as data. When Forth code refers to a Forth word foo, that's more like (function foo) than (quote foo).
CLOS generics+methods style is actually quite a different model in some ways from the Smalltalk-like model Python runs on! Primarily, method dispatch has the generic function primary and the classes secondary: methods are separately-definable specializations of what would otherwise be functions, rather than “owned” by a receiver object via its class. There was a recent item on Python versus CL: https://news.ycombinator.com/item?id=27011942
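A tiny generic sketch of that dispatch style (invented classes, nothing from a real codebase):

    ;; the generic function comes first; methods specialize it per class
    (defclass circle () ((r :initarg :r)))
    (defclass square () ((side :initarg :side)))

    (defgeneric area (shape))
    (defmethod area ((s circle)) (* pi (slot-value s 'r) (slot-value s 'r)))
    (defmethod area ((s square)) (* (slot-value s 'side) (slot-value s 'side)))

    (area (make-instance 'circle :r 2))  ; dispatches on the argument's class

Note that area isn't "owned" by circle or square the way a Python method is owned by its class.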
I can't find a link to it at the moment, but Pierre de Lacaze gave an excellent talk about the CLOS/MOP systems at Lisp NYC about 5 years ago. If anyone is interested in these topics, I'd recommend digging around to see if you can find it.
> Even more so if variable names are replaced by De Bruijn indices.
De Bruijn indices are not simply variable names: they vary depending on where in the term (i.e. under how many binders) they occur, which is why beta reduction is so hairy in De Bruijn notation.
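For example, with the usual convention that an index counts binders outward:

    \lambda x.\; x\,(\lambda y.\; x\,y)
    \;\rightsquigarrow\;
    \lambda.\; 1\,(\lambda.\; 2\,1)

The same x is index 1 under one binder but index 2 under two, which is why substitution during beta reduction requires re-indexing ("shifting").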
It's interesting that you cite package managers that are deeply flawed while not listing the two that are, in my opinion, best in class:
- The JVM Maven Central repo
- Rust's Cargo
Also, you never say exactly which product from Wirth you are referring to; I'm assuming Modula-2. If so, all Modula-2 introduced was namespaces at the language level, and while this was a pretty novel idea at the time, it was very simplistic.
Also, the reason Turbo Pascal and family were fast was that the compilers were not just single-pass but also aborted on the first error.
I can't think of any reason why I'd trade the current compilers I use on a daily basis (mostly Kotlin and Rust) to go back to a single pass compiler, even for speed. My time is precious and a fast compiler that then forces me to spend three times as much time writing and correcting my code is not a good use of my time.
The advantage that JavaScript has over Java in this respect is that in JavaScript module A can import version B.1 of module B while module C imports version B.2 of module B, and it almost always works.
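Mechanically, that's npm's nested node_modules layout (a simplified sketch; package names invented):

    node_modules/
      moduleA/
        node_modules/
          moduleB/        <- B.1, private to moduleA
      moduleC/
        node_modules/
          moduleB/        <- B.2, private to moduleC

Each require("moduleB") resolves to the nearest copy up the directory tree, so both versions coexist in one process.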
Contrast that with the Guava-Hadoop incompatibility, in which a tiny change to Guava that broke HDFS in a racy way got Hadoop stuck at Guava 13, so you have the choice of going with G13 forever, ditching Hadoop, or ditching Guava.
I think that one is the worst diamond dependency Java has ever seen, but it's just one of the scaling problems Maven has as the number of imports goes up, and it might not even be the worst one limiting what you can do with Maven. (E.g., if you have a platoon of developers and a corresponding number of projects, you might be able to complete a build that downloads SNAPSHOT(s) only about 0.5 of the time.)
For the one instance of the Guava/Hadoop incident (which could happen just the same in any environment), there are millions of builds happening every day that work just fine with the JVM + Maven Central repo combination.
And as opposed to the JavaScript repos, you cannot delete anything from a Maven repo once it's been uploaded there.
David Ahl's Creative Computing magazine is the rarest and best of all. My mom threw most of mine out when I was away at school, and so did most of the world's libraries!
The other day I found the first issue of "PC Computing" which did survive in my collection and it has a nice article on the "future of printing" that explains current-day inkjet and laser printing quite well.
I'd say for that part of history, I'd go with Dr Dobb's Journal from the period. Not sure if any are online or if you'd have to find paper copies at the library.
I used uLisp to program a really small microcomputer for a hobbyist rocket project. Essentially it collected telemetry data, sent it back to my laptop, and activated a locator beacon when it knew it had landed. Good fun to learn, but it was still slightly too large for the boards I was programming -- I assume it'll be a lot better on an Arduino or something with a decent amount of memory.
How does it compare to NodeMCU? I've tried to write some code to read I²C sensors and publish data via MQTT using NodeMCU, but the docs and the examples are outdated. I wasted time, only to find out that it's more productive to read the function prototypes. In the end I went with Tasmota: flash the appropriate image, set up WiFi and the MQTT endpoint, and you're done. The ESP32 version also has a Python-like embedded language called Berry.
I see that on the 16-bit micros, numbers range from -32768 to 32767, which means that bits aren't being stolen for either tags or garbage-collection marking. Does that mean it uses boxing instead of tags, or are the tags just on the pointers, or is there some magic during the compilation step? I'm just partway through mal, which uses boxing, so I'd like to look at a simple implementation using tagging or another memory-friendly approach.
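For the "tags just on the pointers" option, the classic trick (a generic sketch, not necessarily what uLisp actually does) is to steal the low bit that word alignment leaves free:

    #include <stdint.h>

    typedef uintptr_t value;  /* either a fixnum or a word-aligned pointer */

    /* real cell pointers are word-aligned, so their low bit is always 0 */
    #define IS_FIXNUM(v)    ((v) & 1)
    #define MAKE_FIXNUM(n)  ((((value)(n)) << 1) | 1)
    #define FIXNUM_VALUE(v) ((intptr_t)(v) >> 1)  /* assumes arithmetic right shift */

The cost is one bit of range (15-bit fixnums on a 16-bit machine), which is exactly why the full -32768..32767 range you observed suggests boxing rather than tagging.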
Very cool! Quite similar to fe [1], a tiny, embeddable Lisp by the magnificent rxi. Seriously, if you like games and gorgeous C and Lua code, check out his projects and the games on itch.io!
I love Lisps, but isn't using a garbage-collected language on a micro a bit bizarre? I'm sure in lots of cases it won't matter, but then you could probably use some cheap ARM SBC and its IO pins (and a full dev environment).
The Venn diagram of situations where an SBC won't do and you don't care about random pauses seems kinda small... I could be wrong.
Lots of people are embracing MicroPython and CircuitPython on microcontrollers these days. The computational capabilities of many of these devices exceed what a high-end PC could do in the early/mid-'90s. Many applications, especially hobbyist projects, do relatively mundane and low-frequency work like turning things on and off and taking sensor readings at human-scale intervals, then going to sleep. So why not?
I'm not experienced in this whole area, so I meant to ask in an open ended way :)
Does uLisp play well with sleep modes and setting up interrupts to come out of sleep and stuff?
I've been wanting to build a low-power weather logger that can run on a battery for a year. While scoping out the difficulty of the project and digging into low-power stuff... things got very hairy very fast (this was with STM32F1 chips). A GC seems like an added layer of complexity. What happens if you get a garbage collection during an interrupt handler? Or if your garbage collection is interrupted by something else? Is garbage collection on its own timer/handler that you need to manage?
Or is this built on top of the Arduino main-event-loop model of programming? In which case it doesn't seem to be the normal interrupt-driven thing you'd be looking at for low-power applications... I think.
I run s7 Scheme in audio contexts (with Scheme for Max), which is quite similar (soft real-time), and I've found the impact of the GC to be much smaller than I expected. I need to run things such that an extra ms of either latency or jitter is acceptable, because the GC runs are bursty. So, running in an audio app, if the I/O buffer is more than a few ms in size (the norm when any amount of heavy DSP is going on anyway), the GC runs just happen in that latency gap, and timing is solid. Or alternatively, timing is off by a ms on one pass and correct on the next, either of which is fine for music.
If you care about latency, not performance, I think it's fine? You can do a Cheney semispace collector in which it's safe to allocate in interrupt handlers if you have control over the ABI (and you've got a few registers to spare) or if your instruction set provides an "increment this memory location" instruction.
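A rough C11 sketch of the atomic-increment flavor of that idea (invented names; a real collector also needs object headers and a way to trigger the copy phase):

    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>

    static _Atomic uintptr_t alloc_ptr;  /* next free byte in to-space */
    static uintptr_t alloc_limit;        /* end of to-space */

    /* safe to call from an interrupt handler: the bump is one atomic op,
       so a handler can never observe a half-updated allocation pointer */
    void *gc_alloc(size_t n) {
        uintptr_t p = atomic_fetch_add(&alloc_ptr, n);
        if (p + n > alloc_limit) return NULL;  /* out of space: collect first */
        return (void *)p;
    }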
The overlap might indeed be small outside the hobby space, but I'd like to note that some MCUs, e.g. the RP2040 (on the Raspberry Pi Pico board), have additional hardware ("PIO") with support for it (in the form of an assembler) in the MicroPython implementation, which allows some hard real-time applications with latencies in the sub-µs range. So with sufficient hardware support, the performance and latency of the language used to program the core might not matter.
The PIO's state machines offer fairly limited functionality, but the principle also applies to more capable hardware like TI's Sitara MCUs and FPGAs with soft-core CPUs.
It wouldn't (likely) be the main way you program a device so much as an interpreter available on the device for interaction.
I last used it on an ARM micro. Building and deploying a new image to flash is substantially slower than using a lisp console over USB serial, and requires me to wire up the programming headers instead of just using the USB line providing power and other services.
They do both support very straightforward inline ASM, so there's an option for things that need the speed (LED matrix driving, etc) for either.
And there's also the option of just using brute force, like the Teensy. It's kind of funny that a 32-bit, 600MHz, 1MB-RAM device is now a "micro" controller.
That's not really what I'm getting at. Yeah you can inline ASM, but that kind of defeats the purpose. And yeah, micros are "powerful", but a 10x speed difference is still significant. "Speed" is a proxy for "power draw", and a 2 hour smartwatch is rather less useful than a 20 hour one.
I'd feel the limitation of unsigned 16-bit ints on my AVR8.
There is no language that seems like a good fit for the AVR8, thanks to its many unique characteristics -- for instance, the Harvard architecture and 8-bit math with a fair-sized register file. You should be able to use 3 registers to do 24-bit math, allocate a block of registers for the exclusive use of the interrupt handler, do away with the "undefined behavior" that comes from using a stack, etc.
The C language environment from Arduino is acceptable, particularly for education, because you are not wasting your time learning C.
I write a lot of programs for the Arduino that are mostly "traverse a graph of data structures and do something" (say, over laser light show coordinate data or something like that), and I pack the data up with a "data assembler" written in Python... That is a weekend's worth of tooling work.
It would be fun to have either a high-level assembler or a pascal-like language matched to AVR8 but I think it won't happen because AVR8 is a dead end in some respects. That is, they aren't going to make a better one, maybe I can get 4x better performance with assembler compared to C for one task, but if I care about performance I will buy a more powerful microcontroller and just recompile the C program.
It is fairly easy to implement N-bit arithmetic in a 16-bit Lisp, especially if the system provides a carry flag, which it seldom does, unfortunately.
For some stupid reason I had strange (10000 . 10000) arithmetic, where the big number X was (cons (/ X 10000) (mod X 10000)), and primitives like */10000 (multiply and divide by 10000) helped to use it.
OK, the stupid reason was that it was aesthetically pleasing. For example, the big number 12346789 was (1234 . 6789).
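In ordinary Lisp terms, that representation and its add-with-carry look something like this (a sketch with invented names):

    ;; big number X stored as (high . low) in base 10000,
    ;; e.g. 12346789 => (1234 . 6789)
    (defun make-big (x) (cons (floor x 10000) (mod x 10000)))

    (defun big+ (a b)
      (let ((lo (+ (cdr a) (cdr b))))
        (cons (+ (car a) (car b) (floor lo 10000))  ; propagate the carry
              (mod lo 10000))))

    (big+ (make-big 1234) (make-big 9876))  ; => (1 . 1110), i.e. 11110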
I've got to check this project out. I've been playing around with microcontrollers quite a lot lately, and strangely Lisp is still my most fluent language from all my years of AutoLISP programming. What a great initiative.
Yes, u is often used as an approximation of μ; pronounce it »micro«. The most prominent abbreviation is uC for microcontroller, and uLisp is »Microlisp«.
Well you know, once you hit your 40's (I'm 46), you've had lots of time to explore languages, and you really don't waste your limited life remaining coding in bad ones... haha
This is something that is accessible to read and modify in a weekend. Really cool.