Ray tracing with uLisp (ulisp.com) — 102 points by lispm on Aug 2, 2019 | 10 comments

This article - in fact, the whole site and its contents - gives me pure joy.

Ray tracing is conceptually beautiful. It's a great example of an algorithm: its essential insight is thought-provoking, and it's perfect for educational purposes, since a programmer gets immediate visual feedback on how the implemented code behaves.

Lisp, of course, is conceptually beautiful. I've only started learning it in the past year or so, but the more familiar I get, the more I appreciate the elegance of the language and the insights it contains about programming and the thinking process.

Microcontrollers look like a whole world of fun, I'm just dipping my toes into it as a hobby. It brings back childhood memories of tinkering with assembly language on an 8086.

...And Tiny Lisp Computer! (http://www.ulisp.com/show?2KZH) I'm sure it's been discussed on HN before, but my God, what a beautiful thing that is. Thank you, author, for sharing your work.

The ray-scene hit test uses comparison between distances, which involves a square root for each sphere. If you switch to using squared distances, the comparison will work equally well and the code will be a lot faster in many cases. You will need to reorganize the code a bit, since the moment you actually find the hit, you may need to do the square root to calculate the actual distance (but it will only be once per pixel instead of once per pixel per object in the scene).
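A minimal sketch of that reorganization, with assumed names (this is not the article's code; sphere-intersect-sq is a hypothetical helper returning the squared distance from the ray origin to the hit, or nil on a miss):

```lisp
; Compare squared distances so no sqrt is needed per sphere;
; the sqrt happens once, for the winning hit only.
(defun closest-hit (ray spheres)
  (let ((best nil) (best-d2 nil))
    (dolist (s spheres)
      (let ((d2 (sphere-intersect-sq ray s)))
        (when (and d2 (or (null best-d2) (< d2 best-d2)))
          (setq best s)
          (setq best-d2 d2))))
    ; return (sphere distance), or nil if nothing was hit
    (when best (list best (sqrt best-d2)))))
```

Since squaring is monotonic for non-negative distances, the comparison picks the same nearest sphere either way; only the final reported distance needs the square root.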

> This ray tracer is developed from an example in Paul Graham's book "ANSI Common Lisp"

I am currently reading that book, so the title immediately reminded me of it.

Nice work making a microcontroller version of it :)

> On an Adafruit ItsyBitsy M4 this scene containing five spheres and a plane takes approximately 230 seconds to render. It occupies about 1460 Lisp objects, and should also run nicely on an Arduino Due or ESP32.

> [...]

> The ray-traced image has a resolution of 160 x 128 pixels. To generate this we call tracer:

  (defun tracer ()
    (dotimes (x 160)
      (dotimes (y 128)
        (plotpoint x y (apply rgb (colour-at (- x 80) (- 64 y)))))))

> This calls plotpoint to plot the pixel on the display device. For each pixel it calls colour-at to get the colour of the pixel:
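(The quoted excerpt stops before the colour-at listing. Purely as a hypothetical sketch in the style of Graham's tracer, not the article's actual code, with assumed names: *eye* is the viewpoint, unit-vector normalises a direction, and sendray traces a ray and returns an (r g b) list.)

```lisp
(defun colour-at (x y)
  ; direction from the eye through pixel (x, y) on the
  ; image plane at z = 0
  (let ((dir (unit-vector (- x (first *eye*))
                          (- y (second *eye*))
                          (- 0 (third *eye*)))))
    (sendray *eye* dir)))
```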

How much time is spent updating all of the pixels compared to the time taken to do the mathematical calculations of ray tracing? I imagine that the calculations dominate the time taken by a lot.

Can a whole region of the display be updated faster than the amount of time it takes to set each pixel of the region individually?

What I was thinking of is that since it takes so much time before the whole scene is done rendering, perhaps it would feel like it took less time if the code was changed to render the whole thing in finer and finer chunks.

First set the whole display to the colour of its bottom-right pixel at (x, y) = (159, 127). Then divide the display into four regions and update the three that don't contain that pixel, each with the colour of its own bottom-right pixel. Keep subdividing into fours and updating three of them like that, to quickly get a low-resolution sense of what the scene looks like as a whole.

It will take a bit more time in total, but if the calculations dominate by a lot, and if updating regions at a time is relatively fast then it may be perceived as being faster. Plus if you are working on a scene you can spot misplaced objects sooner without having to wait until everything “before” has finished rendering.

I think a lot of 3d renderers used to let you render in that way and perhaps some still do?
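The coarse-to-fine scheme above could be sketched roughly like this, assuming a fill-rect primitive (as in uLisp's graphics extensions) plus the article's colour-at and rgb:

```lisp
; Fill a block with the colour of its bottom-right pixel,
; then split it into quadrants and recurse.
(defun sample (px py)
  (apply rgb (colour-at (- px 80) (- 64 py))))

(defun progressive (x y w h)
  (when (and (> w 0) (> h 0))
    (fill-rect x y w h (sample (+ x w -1) (+ y h -1)))
    (when (or (> w 1) (> h 1))
      (let* ((w2 (truncate w 2)) (h2 (truncate h 2))
             (w1 (- w w2)) (h1 (- h h2)))
        (progressive x y w1 h1)            ; top-left
        (progressive (+ x w1) y w2 h1)     ; top-right
        (progressive x (+ y h1) w1 h2)     ; bottom-left
        ; the bottom-right quadrant shares the sampled pixel,
        ; so its initial fill is redundant ("three of the four")
        (progressive (+ x w1) (+ y h1) w2 h2)))))

; (progressive 0 0 160 128)
```

Each pixel ends up traced once at the leaves, like the original tracer, but the intermediate fills show a recognisable picture almost immediately.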

Nice suggestion - I'll try it.

A Lisp-1 derived from Common Lisp that runs interactively on microcontrollers? Awesome!

Stuff like this makes me wish I had more time to devote to electronics hacking.

This could be dramatically faster using the rtg-math library (https://github.com/cbaggers/rtg-math), but very nice work.

Does rtg-math run on an ATSAMD51 Cortex M4? Didn't see anything on the github to suggest it does, but maybe I missed it. Or is it just a well optimised device-non-specific math library? Does it work with uLisp?

Not sure about uLisp in particular, but the approach of the library could be used.

I've read so many blog posts on HN about ray tracers and I still haven't got tired of them. Very cool twist using Lisp and a microcontroller!

love this, cool little project and nice write up!
