
Writing a Raytracer in Rust – Part 1 - miqkt
https://bheisler.github.io/post/writing-raytracer-in-rust-part-1/
======
K0nserv
I too have been writing a raytracer in Rust[0] recently. I tend to use
raytracers as my go-to project when learning a new language, especially OOP
ones, since it touches a few different areas that are common, such as:

\+ Protocols/traits/interfaces and using them for polymorphism, think
intersectable/primitive types.

\+ It typically involves understanding both heap- and stack-allocated memory
in the language, as well as the general memory model, to build scene graphs
etc.

\+ Building a small linear algebra library usually touches things like
low-level operations and performance, as well as operator overloading if the
language supports it.

\+ Writing images to disk via pixel buffers
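The first bullet can be sketched in Rust roughly like this; the type and method names (`Ray`, `Intersectable`, `Plane`) are made up for illustration, not taken from the linked projects:

```rust
// Illustrative sketch of trait-based polymorphism for scene primitives.
struct Ray {
    origin: [f64; 3],
    dir: [f64; 3], // assumed normalized
}

fn dot(a: [f64; 3], b: [f64; 3]) -> f64 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

trait Intersectable {
    /// Distance along the ray to the nearest hit, if any.
    fn intersect(&self, ray: &Ray) -> Option<f64>;
}

/// An infinite plane through `point` with unit normal `normal`.
struct Plane {
    point: [f64; 3],
    normal: [f64; 3],
}

impl Intersectable for Plane {
    fn intersect(&self, ray: &Ray) -> Option<f64> {
        let denom = dot(self.normal, ray.dir);
        if denom.abs() < 1e-9 {
            return None; // ray is parallel to the plane
        }
        let diff = [
            self.point[0] - ray.origin[0],
            self.point[1] - ray.origin[1],
            self.point[2] - ray.origin[2],
        ];
        let t = dot(diff, self.normal) / denom;
        if t > 0.0 { Some(t) } else { None }
    }
}

fn main() {
    // A heterogeneous scene graph via trait objects.
    let scene: Vec<Box<dyn Intersectable>> = vec![Box::new(Plane {
        point: [0.0, -1.0, 0.0],
        normal: [0.0, 1.0, 0.0],
    })];
    let ray = Ray { origin: [0.0; 3], dir: [0.0, -1.0, 0.0] };
    for obj in &scene {
        if let Some(t) = obj.intersect(&ray) {
            println!("hit at t = {}", t);
        }
    }
}
```

Each new primitive type just implements the trait, and the render loop stays unchanged.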

Primarily though, I think raytracers are very fun projects because you can
produce nice-looking results quickly, which I find helps with motivation and
passion for the project. I'm pretty pleased with some of my renders already[1].

0:
[https://github.com/k0nserv/rusttracer](https://github.com/k0nserv/rusttracer)

1:
[https://raw.githubusercontent.com/k0nserv/rusttracer/master/docs/bit-later-render.png](https://raw.githubusercontent.com/k0nserv/rusttracer/master/docs/bit-later-render.png)

~~~
pasta
I also agree. Learning a new language by building a raytracer is a lot of fun.

What also helps a lot is that you can use only spheres to create a Cornell box
(use very large spheres for the walls). And ray-sphere intersection is 'easy'.
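For reference, here is a sketch of that 'easy' ray-sphere intersection via the quadratic formula; names are illustrative, not from any particular codebase:

```rust
/// Returns the smallest positive t with |o + t*d - c| = r, if any.
/// `d` is assumed to be normalized.
fn ray_sphere(o: [f64; 3], d: [f64; 3], c: [f64; 3], r: f64) -> Option<f64> {
    // Substitute the ray into the sphere equation and solve
    // t^2 + 2*b*t + cc = 0 (the t^2 coefficient is 1 because |d| = 1).
    let oc = [o[0] - c[0], o[1] - c[1], o[2] - c[2]];
    let b = oc[0] * d[0] + oc[1] * d[1] + oc[2] * d[2];
    let cc = oc[0] * oc[0] + oc[1] * oc[1] + oc[2] * oc[2] - r * r;
    let disc = b * b - cc;
    if disc < 0.0 {
        return None; // no real roots: the ray misses
    }
    let sqrt_d = disc.sqrt();
    // Prefer the nearer root; fall back to the farther one if we start inside.
    for t in [-b - sqrt_d, -b + sqrt_d] {
        if t > 1e-9 {
            return Some(t);
        }
    }
    None
}

fn main() {
    // A Cornell-box "wall": a huge sphere just below the camera stands in
    // for the floor plane.
    let floor = ray_sphere([0.0; 3], [0.0, -1.0, 0.0], [0.0, -1001.0, 0.0], 1000.0);
    println!("floor hit: {:?}", floor);
}
```

With walls this size the curvature is invisible at scene scale, which is why the sphere-only Cornell box works.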

Then the next step is path tracing. This will help you learn a lot about
handling recursive processes (with or without actual recursion).

Other areas that you learn:

\+ How scope is handled

\+ (In dynamic languages) how the conversion between floats and ints works.

\+ Multi threading

~~~
K0nserv
Totally forgot about recursion. The recursive steps for reflection and
refraction aren't, strictly speaking, path tracing, are they? I honestly don't
know that distinction well. Multi-threading was definitely a lot easier in
Rust than when I gave it a shot in my Swift version.
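Part of why it feels easier: scoped threads let each worker mutably borrow a disjoint band of the pixel buffer, and the borrow checker verifies the bands can't alias. A minimal sketch, where `shade` is a made-up placeholder rather than real shading:

```rust
use std::thread;

/// Placeholder per-pixel "render"; a real tracer would shoot a ray here.
fn shade(x: usize, y: usize) -> u8 {
    ((x + y) % 256) as u8
}

/// Split the image into horizontal bands, one per thread.
fn render(width: usize, height: usize, n_threads: usize) -> Vec<u8> {
    let mut pixels = vec![0u8; width * height];
    let rows_per_thread = (height + n_threads - 1) / n_threads;
    // Scoped threads (Rust 1.63+) may borrow from the enclosing stack frame,
    // and chunks_mut hands each thread a non-overlapping &mut slice.
    thread::scope(|s| {
        for (i, band) in pixels.chunks_mut(rows_per_thread * width).enumerate() {
            s.spawn(move || {
                for (j, px) in band.iter_mut().enumerate() {
                    let y = i * rows_per_thread + j / width;
                    let x = j % width;
                    *px = shade(x, y);
                }
            });
        }
    });
    pixels
}

fn main() {
    let img = render(64, 64, 4);
    println!("rendered {} pixels", img.len());
}
```

The same shape works with a real camera/scene in place of `shade`; data races simply fail to compile.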

~~~
pasta
I think path tracing is just a form of ray tracing. The main difference is
that when the ray hits an object you continue along another path from that
point, and collect all the light energy back to the pixel.

~~~
K0nserv
From a quick Google.

Path tracing doesn't trace towards light sources with shadow rays; instead it
just sends several rays in different directions and accumulates the resulting
colors of those rays.

~~~
alpaca128
Path tracing is just like ray tracing with randomized deviations of the rays
and reflections, and this is repeated many times to reduce the noise. This can
also be done with shadow rays to some extent, for soft shadows.

(But it seems like every person you ask knows a different meaning for terms
like raycasting, path tracing and physically based rendering.)
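The repeated randomized sampling can be sketched as a tiny Monte Carlo loop. Everything here is a made-up toy for illustration: the LCG is a stand-in PRNG and `trace_path` fakes the radiance a real bounce would return.

```rust
/// Tiny linear congruential generator, just to keep the sketch dependency-free.
struct Lcg(u64);

impl Lcg {
    /// Roughly uniform f64 in [0, 1).
    fn next(&mut self) -> f64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

/// Stand-in for tracing one randomized path; pretend the radiance is u*v.
fn trace_path(u: f64, v: f64) -> f64 {
    u * v
}

/// Average many jittered samples for one pixel; the estimate converges
/// (noise shrinks) as the sample count grows.
fn render_pixel(rng: &mut Lcg, samples: u32) -> f64 {
    let mut acc = 0.0;
    for _ in 0..samples {
        // Random offsets per sample; a real path tracer would also
        // randomize the bounce directions at each hit.
        let (u, v) = (rng.next(), rng.next());
        acc += trace_path(u, v);
    }
    acc / samples as f64
}

fn main() {
    let mut rng = Lcg(42);
    // For independent uniforms, E[u*v] = 0.25, so this approaches 0.25.
    println!("estimate: {:.3}", render_pixel(&mut rng, 100_000));
}
```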

------
gonewest
Peter Shirley's "Raytracing in One Weekend" e-books are a nice resource for
people who want to do this themselves from scratch.

~~~
xmonkee
It's kind of surreal to see this post and this comment here, because that is
literally what I did last weekend - adapted that book to Rust!

[https://github.com/xmonkee/rusttracer](https://github.com/xmonkee/rusttracer)

~~~
playing_colours
It looks like ray tracers are the new Fibonacci calculators / Tower of Hanoi
solvers :)

------
izym
> raytracing an image takes much longer than the polygon-based rendering done
> by most game engines.

Minor nitpick, but it has nothing to do with polygons as such; ray tracing can
also render polygons. More precisely, game engines use rasterization, which
works by projecting triangles onto the screen rather than tracing rays through
the screen.
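That projection step fits in a few lines; the conventions here (camera at the origin, image plane at z = -1) are an assumption for illustration:

```rust
/// Project a camera-space point (z < 0 is in front of the camera) to
/// normalized 2D screen coordinates on an image plane one unit ahead.
fn project(v: [f64; 3]) -> Option<[f64; 2]> {
    if v[2] >= 0.0 {
        return None; // behind the camera
    }
    // Similar triangles: dividing by -z lands the point on the z = -1 plane.
    Some([v[0] / -v[2], v[1] / -v[2]])
}

fn main() {
    // A rasterizer projects each triangle vertex independently,
    // then fills the resulting 2D triangle.
    for v in [[-1.0, 0.0, -2.0], [1.0, 0.0, -2.0], [0.0, 1.0, -4.0]] {
        println!("{:?} -> {:?}", v, project(v));
    }
}
```

A ray tracer inverts this: instead of pushing vertices toward the screen, it pulls a ray back out through each pixel.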

~~~
a_e_k
It also depends very much on the geometric complexity of your scene. With
hundreds of millions of polygons it's not difficult for raytracing to
outperform rasterization, especially if most of those polygons are instanced.

~~~
mnw21cam
Indeed, with hundreds of millions of polygons, a rasterisation method will
generally have to splat them all onto the screen one by one (minus some clever
occlusion pre-processing). By contrast, a ray-tracer can shove all the objects
into an R-tree or kd-tree, efficiently search for only those objects that
intersect the ray, and produce the objects in guaranteed order of distance
from the camera.

~~~
mschuetz
I'm a bit sceptical of this claim. You can also build acceleration structures
for rasterized polygons, create hierarchical level-of-detail representations
of your scene, and render whatever LOD is necessary. This reduces the number
of polygons that have to be rendered considerably. The claim that raytracing
is faster for tens of millions of polygons due to acceleration structures
always seems to miss the point that acceleration structures can also be
applied to rasterization.

~~~
corysama
Ray tracing and rasterization are pretty much duals of each other. Enough
effort put into one can achieve very similar results to the other.

Meanwhile, I don't know how much effort would be required to effectively
rasterize a beach with individually modeled grains of sand.
[https://www.fxguide.com/featured/the-tech-of-pixar-part-1-piper-daring-to-be-different/](https://www.fxguide.com/featured/the-tech-of-pixar-part-1-piper-daring-to-be-different/)

------
Patient0
I didn't follow this bit (from Part 2):

"This requires a bit more geometry. Recall from last time that we detect an
intersection by constructing a right triangle between the camera origin and
the center of the sphere. We can calculate the distance between the center of
the sphere and the camera, and the distance between the camera and the right
angle of our triangle. From there, we can use Pythagoras’ Theorem to calculate
the length of the opposite side of the triangle. If the length is greater than
the radius of the sphere, there is no intersection."

The two sides he describes have the camera in common - so the "opposite" side
of that triangle is the line from the center of the sphere to the right angle
- I don't see how this helps....

Edit: OK, I finally get it, but I think he should just label some of these
lengths on the diagram with letters (a, b, c, etc.) and then show how they are
related by stating Pythagoras' theorem explicitly.
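One possible labeling, written out as code (the names and conventions are mine, not the article's): hypotenuse h from the camera to the sphere center, adjacent side a along the ray, and opposite side b = sqrt(h² - a²), which is the distance from the sphere's center to the ray. The ray misses exactly when b > r.

```rust
fn dot(a: [f64; 3], b: [f64; 3]) -> f64 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

/// `dir` must be normalized. Returns true if the ray from `origin` hits the
/// sphere at `center` with the given `radius`.
fn hits_sphere(origin: [f64; 3], dir: [f64; 3], center: [f64; 3], radius: f64) -> bool {
    let l = [
        center[0] - origin[0],
        center[1] - origin[1],
        center[2] - origin[2],
    ];
    let h2 = dot(l, l); // hypotenuse squared: camera to sphere center
    let a = dot(l, dir); // adjacent side: projection of l onto the ray
    if a < 0.0 {
        return false; // sphere center is behind the camera
    }
    let b2 = h2 - a * a; // Pythagoras: opposite side squared
    b2 <= radius * radius // hit iff the ray passes within one radius
}

fn main() {
    // Sphere slightly off the ray's axis, but within one radius of it.
    println!(
        "{}",
        hits_sphere([0.0; 3], [0.0, 0.0, -1.0], [0.0, 0.5, -5.0], 1.0)
    );
}
```

Working through b² = h² - a² against the diagram is exactly the "state Pythagoras explicitly" step the edit asks for.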

------
mijoharas
Is anyone able to shed any light on the FOV correction? I'm not sure I
understand exactly what is happening there.

~~~
izym
Scratchapixel has a lesson on it [1] which takes the same approach.

[1]: [https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-generating-camera-rays/generating-camera-rays](https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-generating-camera-rays/generating-camera-rays)

~~~
mijoharas
Of course, because we have the plane fixed at 1 unit in front of the camera,
instead of moving that distance we have an adjusting factor that we multiply
the coordinates by to change our field of view.

Thanks, I looked at scratchapixel but only found the stuff about how pinhole
cameras work.
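A sketch of how that adjusting factor might look in code, assuming the common tan(fov/2) convention; the function name and coordinate conventions are illustrative, not from the article:

```rust
/// Direction of the primary ray through pixel (px, py), with the image plane
/// fixed at z = -1 in front of the camera.
fn ray_direction(px: u32, py: u32, width: u32, height: u32, fov_deg: f64) -> [f64; 3] {
    let aspect = width as f64 / height as f64;
    // The FOV factor: scales how much of the world the fixed plane spans.
    let fov_adjust = (fov_deg.to_radians() / 2.0).tan();
    // Map pixel centers to [-1, 1] (y flipped so +y points up).
    let x = ((px as f64 + 0.5) / width as f64 * 2.0 - 1.0) * aspect * fov_adjust;
    let y = (1.0 - (py as f64 + 0.5) / height as f64 * 2.0) * fov_adjust;
    // Normalize (x, y, -1) to get a unit direction.
    let len = (x * x + y * y + 1.0).sqrt();
    [x / len, y / len, -1.0 / len]
}

fn main() {
    // With a 90-degree FOV, tan(fov/2) = 1, so the plane spans [-1, 1].
    let d = ray_direction(50, 50, 100, 100, 90.0);
    println!("center-ish ray: {:?}", d);
}
```

Multiplying by tan(fov/2) is equivalent to moving the plane to distance 1/tan(fov/2), which is why the plane can stay put.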

------
santaclaus
> Despite that, it also happens to be the simplest way to render 3D images.

I'm not sure I would claim that -- with a line drawing routine in hand (a for
loop), you can have 3D perspective renderings of objects with a few matrix
vector multiplies.

~~~
tbabb
Line drawing algorithms can be surprisingly tricky. I don't think what you
describe would be easier than a basic raytracer, which would be about a page
of code, and the most complex math involved is the quadratic formula.

~~~
clarry
Surprisingly tricky being something around two or three dozen lines of C for
filling a triangle (with some constraints), after it's been transformed
appropriately.

But that is not the only way. You can iterate over a plane and test whether
the coordinates are within a triangle. It ends up being very similar to the
code you'd have for ray tracing a triangle.
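That brute-force approach can be sketched with edge functions (signed areas); the same sign test shows up inside a ray-triangle intersection once the hit point is in the triangle's plane:

```rust
/// Twice the signed area of triangle (a, b, c); the sign says which side of
/// edge a->b the point c lies on.
fn edge(a: [f64; 2], b: [f64; 2], c: [f64; 2]) -> f64 {
    (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
}

/// Point-in-triangle test; assumes counter-clockwise winding.
fn inside(p: [f64; 2], tri: [[f64; 2]; 3]) -> bool {
    let w0 = edge(tri[0], tri[1], p);
    let w1 = edge(tri[1], tri[2], p);
    let w2 = edge(tri[2], tri[0], p);
    // Inside iff p is on the same side of all three edges.
    w0 >= 0.0 && w1 >= 0.0 && w2 >= 0.0
}

fn main() {
    let tri = [[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]];
    // "Rasterize" a tiny grid by testing every pixel center.
    for y in 0..5 {
        for x in 0..5 {
            let c = if inside([x as f64 + 0.5, y as f64 + 0.5], tri) {
                '#'
            } else {
                '.'
            };
            print!("{}", c);
        }
        println!();
    }
}
```

Real rasterizers only iterate over the triangle's bounding box and increment the edge functions instead of recomputing them, but the core test is the same.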

------
kobeya
Pixar's RenderMan is not a ray tracer, btw.

~~~
hex12648430
This is correct: RenderMan is now a path tracer, which is more physically
correct in a lot of respects (light bounces, caustics, conservation of energy,
etc.). Before that it relied on the Reyes architecture with ray-tracing
extensions, according to Wikipedia, but Pixar stopped supporting it in 2016.

~~~
pixel_fcker
To be really pedantic about it, RenderMan is an API. Photorealistic RenderMan
(aka PRMan) was the Reyes implementation of said API, and the new path tracer
is called RIS.

------
saosebastiao
It'd be really cool if someone wanted to port this raytracer[0] to Rust and
compare benchmarks.

[0]
[http://www.ffconsultancy.com/languages/ray_tracer/benchmark.html](http://www.ffconsultancy.com/languages/ray_tracer/benchmark.html)

