Conservative garbage collection has serious problems on 32-bit. Stuck pointers happen quite often. The usual culprit is floats, which thankfully don't usually appear in embedded code, but by no means is the problem limited to floats.
I'll probably take a look later today!
https://justinclift.github.io/tinygo-wasm-rotating-cube/ (WebGL, ~12kB)
Pros: you don't have to care about type information and precise stack/structure walking.
Cons: If you have range 0x8000-0x8010 allocated, and have a variable with integer value 0x8001 somewhere in the memory, it will keep that range allocated. It doesn't matter that it's an int, not a pointer. Floats and 32-bit pointers have quite a lot of accidental collisions that way.
Without careful programming you run out of memory, but that's more a consequence of assuming you have a large memory space than of GC in general.
I say this having watched 30+ years of alternate implementations of different languages and runtimes. Even those that failed (or maybe more so) indirectly inspired positive changes in the survivors.
Or missile firmware: https://groups.google.com/forum/message/raw?msg=comp.lang.ad...
Although if that’s the case, I personally don’t get the draw. Typically these managed/higher-level/interpreted languages are harder to use than the embedded-friendly alternatives if you’re not using their automatic memory management. But a lot of times people just want to stick to (really weird and distant) versions of languages they already know/use (D is probably the only exception here for obvious reasons).
> At the beginning of the project we considered using LLVM for gc but decided it was too large and slow to meet our performance goals. More important in retrospect, starting with LLVM would have made it harder to introduce some of the ABI and related changes, such as stack management, that Go requires but are not part of the standard C setup. A new LLVM implementation is starting to come together now, however.
There is more to a toolchain than translating source code to machine code. I'm sure the others do that job just as well, but only TinyGo combines that with a reimplemented runtime that optimizes size over speed and allows it to be used directly on bare metal hardware: binaries of just a few kB are not uncommon.
In my opinion, this feels very much in the hallowed traditions of Tiny Basic and Tiny C, but with modern tool chains.
For instance, UART transmit looks to be implemented using blocking IO. I don't know Go very well, but it would be interesting if this could be implemented as a buffered channel, which would provide a nice abstraction for the hardware FIFO buffer used by the UART and allow the CPU to be doing other things. The same could also be used for the I2S support, I think, which will often be used to send much more data (streaming audio) than the UART.
On closer inspection, looking at the issue tracker, it appears this is already in planning, which is great.
The first step to making this happen would be to add a PIC backend to LLVM.
LLVM is going to be much slower than the Go toolchain. It may produce smaller binaries, but mainly through dropping the bulk of the runtime on the floor. The main thing that LLVM does is (apparently) reduce the stack usage.
But it does support the CPUs that the author wants to target out of the box.