This is something that is accessible to read and modify in a weekend. Really cool.
And Scheme in one defun (SIOD), although it seems the recent version of this has been split into multiple files.
The video game Abuse also had a simple embedded Lisp:
It's similar in terms of ease of embedding, being all ANSI C, and originally coming from TinyScheme...
Lisp Badge: A single-board computer that you can program in uLisp - https://news.ycombinator.com/item?id=23729970 - July 2020 (25 comments)
A new RISC-V version of uLisp - https://news.ycombinator.com/item?id=22640980 - March 2020 (35 comments)
uLisp – ARM Assembler in Lisp - https://news.ycombinator.com/item?id=22117241 - Jan 2020 (49 comments)
Ray tracing with uLisp - https://news.ycombinator.com/item?id=20565559 - July 2019 (10 comments)
uLisp: Lisp for microcontrollers - https://news.ycombinator.com/item?id=18882335 - Jan 2019 (16 comments)
GPS mapping application in uLisp - https://news.ycombinator.com/item?id=18466566 - Nov 2018 (4 comments)
Tiny Lisp Computer 2 - https://news.ycombinator.com/item?id=16347048 - Feb 2018 (2 comments)
uLisp – Lisp for the Arduino - https://news.ycombinator.com/item?id=11777662 - May 2016 (33 comments)
Recently I learned how the author generates the uLisp variants for different platforms using Common Lisp:
...And an accompanying article describing how it works:
uLisp Builder - http://www.ulisp.com/show?3F07
Also, a treasure trove of other Arduino and AVR projects by the author here:
1. What’s the workflow? REPL for development is super cool, but how do I persist my programs? How does »flash this code onto the microcontroller« work?
2. How can I interface with the large amount of C libraries out there? For example, uLisp does not provide an OTA library (for updating the software over Wifi), or one for MQTT. I don’t want to rewrite those myself, so how do I call existing C from uLisp?
I think with Gambit you can have C inline with your Scheme:
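If memory serves, it looks roughly like this (a sketch based on Gambit's `c-declare`/`c-lambda` forms; check the Gambit manual for exact details):

```scheme
;; Sketch of Gambit's C FFI (details approximate -- see the Gambit manual).
(c-declare "#include <math.h>")

;; Bind an existing C function directly:
(define c-sqrt (c-lambda (double) double "sqrt"))

;; Or write the C body inline, using the ___arg1/___return glue:
(define add1 (c-lambda (int) int "___return(___arg1 + 1);"))

(display (c-sqrt 9.))
```

The same mechanism is how you'd wrap an existing C library (MQTT, OTA, etc.) rather than rewriting it in Scheme.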
Is this any help for programming embedded targets? Maybe? I'm very close to clueless (I RTFM, but I still don't understand a large portion of the functionality of Schemes' FFIs).
Have you looked into Embeddable Common-Lisp? (https://gitlab.com/embeddable-common-lisp/ecl)
When some functions proved to be stable and useful I ported them in as uLisp builtins with a C implementation.
I used it as a debug driver in an STM32 for an eZ80 CPU - the STM32 provided USB to TTY and a third serial channel for debug control.
It was a nice addition but I do wish it was designed in a way that made these modifications easier.
Ferret (https://github.com/nakkaya/ferret) and Carp (https://github.com/carp-lang/Carp) are both Lisp-like low level languages. Both seem to be fairly experimental in nature though.
> anything but C
Taking you literally, Rust and D can both compile for bare metal. D in particular has a "Better C" subset. (https://dlang.org/spec/betterc.html)
In the same vein, Terra is a C like language (manual memory management) that you metaprogram with Lua. (https://github.com/terralang/terra)
Taking you very literally, Forth is also an option.
There's a Nim ESP8266 SDK: https://github.com/clj/nim-esp8266-sdk
And a YouTube talk on programming the ESP8266: https://youtu.be/eCCrkZI0rVU
Just be sure to use ARC/ORC. If you're interested in esp32's as well, I wrote a Nim wrapper of the esp-idf: https://github.com/elcritch/nesper
I hadn't realized LLVM mainline doesn't support Xtensa. I'm surprised.
D does support Xtensa via LDC (https://firstname.lastname@example.org...). It looks like GDC also nearly supports it, requiring only a minor patch at present.
A functioning LLVM backend does exist (https://github.com/espressif/llvm-project/issues/4) and appears to be making very slow progress towards being merged. A quick search shows that it works for Rust. I suspect (but don't know) that it might work for Terra as well.
There's also the LLVM C backend (https://github.com/JuliaComputingOSS/llvm-cbe) but I've no idea how efficient such an approach is when applied to real world embedded tasks.
By 1980 or so I think there was a LISP for CP/M that fit in a large-memory micro (48k), but it was expensive and not that popular. A LISP runtime could have been an answer to the certifiably insane segmented memory model of the IBM PC, and I know people tried it, but other languages pulled ahead... In particular Turbo Pascal and other languages with very fast compilers and IDE user interfaces ripped off from LISP machines that were about as good as Visual Studio, Eclipse, and IntelliJ are today -- only the PC versions were much, much faster than the modern equivalents!
To be fair: modern IDE's are handling much larger programs. I remember a 1988 issue of BYTE magazine devoted to the idea of "software components" which the author thought were a long way off, but are now realized in the form of "npm", "maven", "pypi", if not the ancient CPAN, ActiveX, ...
I think it's also funny that circa 1988 Niklaus Wirth thought that true modularity in the module system mattered, the world didn't listen, and we've gone on to see Java only moderately harmed by the "Guava 13+ broke Hadoop" crisis, and to see npm practice diverge into insanity with 2000+ dependencies for small projects -- but it works, because the half-baked module system is more modular than Java's.
Strictly speaking, FORTH does not have a lambda syntax. But if you literally "reverse the order" of function application (while keeping lambda abstraction the same) in lambda calculus, you get De Bruijn notation, which is rather more compelling than the LISP default, and also rather FORTH-like, as you note. (Even more so if variable names are replaced by De Bruijn indices.)
 (PDF) https://www.cs.cornell.edu/courses/cs4110/2018fa/lectures/le...
Common LISP had CLOS, which implemented object-oriented programming in a style not too different from Python's, but as a library without compiler support -- and that was also true of late-1980s FORTH.
I'm not sure what you mean by this; as part of CL, CLOS has always had compiler (and run-time compiler) support.
Many early OO systems were implemented in lisps, partially because they are flexible enough to add language features reasonably well, by default.
See, e.g., Flavors and LOOPS. These predate CL and influenced the design of CLOS.
The quote operator cannot be used for writing control structures in Lisp, unless we are referring to something you generally should not do in production code, like:
(defun meta-circular-if (cond then else)
  (if (eval cond) (eval then) (eval else)))

(meta-circular-if (quote (> 3 2)) (quote (+ 4 4)) 0)
I'm not sure this is what they had in mind, but I've actually done pretty much this when bootstrapping. Eg:
(def (do-if cond) # actually a built-in, but defined this way
  (if cond id quote))

((do-if tee) (foo)) # (id (foo))
((do-if nil) (bar)) # (quote (bar))
De Bruijn indices are not simply variable names: they vary dynamically depending on where in the term (i.e. under how many binders) they occur, which is why beta reduction is so hairy in De Bruijn notation.
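To make the hairiness concrete, here is a minimal sketch in Python (my own illustration; term encoding and names are hypothetical): beta reduction with De Bruijn indices needs an index-shifting pass so that free variables stay correct as they move under or out of binders.

```python
# Terms: ("var", k) | ("lam", body) | ("app", f, a), indices 0-based.

def shift(t, d, cutoff=0):
    """Add d to every variable index >= cutoff (free in the current scope)."""
    tag = t[0]
    if tag == "var":
        k = t[1]
        return ("var", k + d if k >= cutoff else k)
    if tag == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))          # one more binder
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Replace variable j with term s, adjusting s's indices under binders."""
    tag = t[0]
    if tag == "var":
        return s if t[1] == j else t
    if tag == "lam":
        return ("lam", subst(t[1], j + 1, shift(s, 1)))     # s enters a binder
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def beta(app):
    """(lam body) arg  ->  shift(subst(body, 0, shift(arg, 1)), -1)"""
    _, (_, body), arg = app
    return shift(subst(body, 0, shift(arg, 1)), -1)
```

For example, `(λ.λ.1)` applied to free variable 3 reduces to `λ.4`: the free variable's index must grow by one because it ends up under an extra binder.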
- The JVM Maven Central repo
- Rust's Cargo
Also, you never say exactly which product from Wirth you are referring to; I'm assuming Modula-2. If so, all Modula-2 introduced was namespaces at the language level, and while this was a pretty novel idea at the time, it was very simplistic.
Also, the reason Turbo Pascal and family were fast was that the compilers were not just single-pass but would abort on the first error.
I can't think of any reason why I'd trade the current compilers I use on a daily basis (mostly Kotlin and Rust) to go back to a single pass compiler, even for speed. My time is precious and a fast compiler that then forces me to spend three times as much time writing and correcting my code is not a good use of my time.
Contrast that to the Guava/Hadoop incompatibility, in which a tiny change to Guava that broke HDFS in a racy way got Hadoop stuck at Guava 13, so you have the choice of going with G13 forever, ditching Hadoop, or ditching Guava.
I think that one is the worst diamond dependency in Java ever, but it is only one of the scaling problems Maven has as the number of imports goes up; it might not even be the worst one limiting what you can do with Maven. (E.g., if you have a platoon of developers and a corresponding number of projects, you might only be able to complete a build that downloads SNAPSHOT(s) about half the time.)
For one instance of the Guava/Hadoop incident (which could happen just the same in any env), there are millions of builds that are happening every day that work just fine with the JVM + Maven Central repo combination.
Besides BYTE, do you know of any place to read about that part of history?
The other day I found the first issue of "PC Computing" which did survive in my collection and it has a nice article on the "future of printing" that explains current-day inkjet and laser printing quite well.
Back issues of Creative Computing are archived here: https://archive.org/details/creativecomputing
> Every Lisp object is a two-word cell, representing either a symbol, number, or cons.
(Also, it's fun to see that my article on GC was helpful. :) )
the Venn diagram of situations where a SBC won't do and you don't care about random pauses seems kinda small.. I could be wrong
Does uLisp play well with sleep modes and setting up interrupts to come out of sleep and stuff?
I've been wanting to build a low-power weather logger that can run on a battery for a year. While scoping out the difficulty of the project and digging into low-power stuff... things got very hairy very fast (this was with STM32F1 chips). A GC seems like an added layer of complexity. Like, what happens if garbage collection runs during an interrupt handler? Or if your garbage collection is interrupted by something else? Is garbage collection on its own timer/handler that you need to manage?
Or is this built on top of the Arduino main-event-loop model of programming? In which case it doesn't seem to be the normal interrupt-driven thing you'd be looking at for low-power applications... I think.
The PIO's "state-machines" in the RP2040 offer fairly limited functionality, but the principle applies also to more capable hardware like TI's Sitara MCUs and FPGAs with soft-core CPUs.
E.g. an 8-bit, 125 MS/s (!) arbitrary wave generator (AWG) in MicroPython: https://www.instructables.com/Arbitrary-Wave-Generator-With-...
I last used it on an ARM micro. Building and deploying a new image to flash is substantially slower than using a lisp console over USB serial, and requires me to wire up the programming headers instead of just using the USB line providing power and other services.
Though it doesn't offer a real lisp REPL or eval, so it's not a meaningful comparison with uLisp.
And there's also the option of just using brute force. Like the Teensy. It's kind of funny that a 32-bit, 600 MHz, 1 MB RAM device is now a "micro" controller.
There is no language that seems like a good fit for the AVR8 thanks to its many unique characteristics. For instance, the Harvard architecture and 8-bit math with a fair sized register file. You should be able to use 3 registers to do 24 bit math, allocate a block of registers for the exclusive use of the interrupt handler, do away with the "undefined behavior" that comes from using a stack, etc.
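For example, a 24-bit add can use three registers per operand and ride the carry chain (an AVR assembly sketch; the register choices here are arbitrary):

```asm
; 24-bit add on AVR8: A in r16:r17:r18 (low to high), B in r19:r20:r21.
; Result overwrites A.
add r16, r19   ; low bytes
adc r17, r20   ; middle bytes, with carry
adc r18, r21   ; high bytes, with carry
```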
The C language environment from Arduino is acceptable, particularly for education, because you are not wasting your time learning C.
I write a lot of programs for the Arduino that are mostly "traverse a graph of data structures and do something" (say, laser light show coordinate data or something like that), and I pack the data up with a "data assembler" written in Python... That is a weekend's worth of tooling work.
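A "data assembler" in this sense can be very small. Here's a hypothetical sketch (my own names and wire format, not the author's) that packs coordinate frames into a byte blob for the micro to traverse:

```python
import struct

# Hypothetical "data assembler": pack (x, y, laser-on) coordinate points
# into a compact little-endian blob that AVR-side C code can walk.
def assemble(points):
    blob = bytearray(struct.pack("<H", len(points)))   # 16-bit point count
    for x, y, on in points:
        blob += struct.pack("<hhB", x, y, 1 if on else 0)
    return bytes(blob)

def emit_c_array(name, blob):
    # Emit the blob as a C array for inclusion in the Arduino sketch.
    body = ", ".join(str(b) for b in blob)
    return f"const unsigned char {name}[] = {{ {body} }};"
```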
It would be fun to have either a high-level assembler or a Pascal-like language matched to the AVR8, but I think it won't happen because the AVR8 is a dead end in some respects. That is, they aren't going to make a better one; maybe I can get 4x better performance with assembler compared to C for one task, but if I care about performance I will buy a more powerful microcontroller and just recompile the C program.
For some stupid reason I had a strange (10000 . 10000) arithmetic, where the big number X was (cons (/ X 10000) (mod X 10000)), and primitives like */10000 helped to use it (multiply and divide by 10000).
Ok. The stupid reason was that it was aesthetically pleasing. For example, the big number 12346789 was (1234 . 6789).
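In Python terms, the representation and a carry-propagating add look like this (my own sketch of the idea, with hypothetical helper names):

```python
BASE = 10_000

def to_pair(x):
    # e.g. 12346789 -> (1234, 6789)
    return (x // BASE, x % BASE)

def from_pair(p):
    hi, lo = p
    return hi * BASE + lo

def add_pairs(a, b):
    # Add two (high . low) pairs, carrying out of the low half.
    hi, lo = a[0] + b[0], a[1] + b[1]
    if lo >= BASE:
        hi, lo = hi + 1, lo - BASE
    return (hi, lo)
```

The aesthetic payoff is that the decimal digits stay visible in the pair, unlike a base-2^16 split.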
I'm paying the cost of the machine + shipping + $150. Throw in a printed manual for good measure, too. :)