Ray tracing is conceptually beautiful. It's a great example of an algorithm whose essential insight is thought-provoking, and it's perfect for educational purposes, since a programmer gets immediate visual feedback on how the implemented code behaves.
Lisp, of course, is conceptually beautiful. I've only started learning it in the past year or so, but the more familiar I get, the more I appreciate the elegance of the language and the insights it contains about programming and the thinking process.
Microcontrollers look like a whole world of fun; I'm just dipping my toes in as a hobby. It brings back childhood memories of tinkering with assembly language on an 8086.
...And the Tiny Lisp Computer! (http://www.ulisp.com/show?2KZH) I'm sure it's been discussed on HN before, but my God, what a beautiful thing that is. Thank you, author, for sharing your work.
I am currently reading that book, so the title immediately reminded me of it.
Nice work making a microcontroller version of it :)
> On an Adafruit ItsyBitsy M4 this scene containing five spheres and a plane takes approximately 230 seconds to render. It occupies about 1460 Lisp objects, and should also run nicely on an Arduino Due or ESP32.
> The ray-traced image has a resolution of 160 x 128 pixels. To generate this we call tracer:
(defun tracer ()
  (dotimes (x 160)
    (dotimes (y 128)
      (plotpoint x y (apply rgb (colour-at (- x 80) (- 64 y)))))))
How much time is spent updating all of the pixels compared to the time taken to do the mathematical calculations of ray tracing? I imagine that the calculations dominate the time taken by a lot.
Can a whole region of the display be updated faster than the amount of time it takes to set each pixel of the region individually?
What I was thinking of is that since it takes so much time before the whole scene is done rendering, perhaps it would feel like it took less time if the code was changed to render the whole thing in finer and finer chunks.
What I mean is: first set the whole display to the color computed at coordinate (x, y) = (160, 128). Then divide the display into four regions, and update the three subregions that don't contain that coordinate, each with the color of its own bottom-right coordinate. Keep subdividing into fours and updating three of each like that, to quickly get a low-resolution sense of what the scene looks like as a whole.
It will take a bit more time in total, but if the calculations dominate by a lot, and if updating whole regions at a time is relatively fast, then it may be perceived as faster. Plus, if you are working on a scene, you can spot misplaced objects sooner, without having to wait until everything "before" has finished rendering.
I think a lot of 3D renderers used to let you render in that way, and perhaps some still do?
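For what it's worth, here's a rough sketch of that coarse-to-fine ordering (in Python, since the idea is language-independent). `colour_at` and `fill_rect` are hypothetical stand-ins for the tracer's colour-at and a fast block-fill display routine; for simplicity this version samples each block's top-left corner rather than the bottom-right. Note that a set of already-traced sample points means the finer passes reuse coarse samples, so no extra rays are traced at all, only extra display writes:

```python
WIDTH, HEIGHT = 160, 128

def colour_at(x, y):
    # Stand-in for the expensive per-ray computation.
    return (x % 256, y % 256, 0)

framebuffer = {}

def fill_rect(x, y, w, h, colour):
    # Stand-in for a fast block update of the display.
    for px in range(x, min(x + w, WIDTH)):
        for py in range(y, min(y + h, HEIGHT)):
            framebuffer[(px, py)] = colour

def progressive_render(block=64):
    # Pass 1 traces one ray per block-sized square, giving a coarse
    # preview almost immediately.  Each later pass halves the block
    # size; only sample points not already traced get new rays.
    done = set()
    while block >= 1:
        for x in range(0, WIDTH, block):
            for y in range(0, HEIGHT, block):
                if (x, y) not in done:
                    fill_rect(x, y, block, block, colour_at(x, y))
                    done.add((x, y))
        block //= 2
    return len(done)  # total rays traced
```

The final pass has block size 1, so every pixel ends up traced exactly once: the same 160 × 128 = 20480 rays as a plain scanline render, just in a viewer-friendlier order.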
Stuff like this makes me wish I had more time to devote to electronics hacking.