
Show HN: Monte Carlo ray tracer in Rust - Dalamar42
https://github.com/Dalamar42/rayt
======
Dalamar42
Hi HN, first time posting here!

I started this project last year when I was looking for something fun to learn
Rust with, and Peter Shirley's excellent Ray Tracing in a Weekend series of
books was recommended to me. In the process I got interested in ray tracing
for its own sake, so I decided to finish all three books, and I am now
considering continuing further on my own.

I am sharing in case this is useful to others who are on a similar path to
where I was.

~~~
gdubs
Really beautiful results. It's interesting to see Ray Tracing's durability. In
the 90s it was so prohibitively expensive (computationally) that a whole bunch
of techniques were developed to achieve realism in somewhat... hackier ways.
Now we're seeing GPU support, etc.

~~~
ShamelessC
Since you seem knowledgeable about this -

Can you explain what the difference is between 3D graphics generated with Ray
Tracing and without? My understanding is that Ray Tracing is essentially the
brute-force solution to the problem of lighting. We developed "hackier"
approximations which run much quicker so our games can run at reasonable
framerates.

Is true Ray Tracing really that much better than the approximations we've
developed? Is it really worth the significant drop in performance? As an old
PC gamer, I've often regarded clever optimizations and approximations as not
only necessary, but undeniably elegant and (in my eyes) cooler.

Today it seems there's a shift in the other direction.

Now that our GPUs can just barely run Ray Tracing in modern games, people are
so hyped to use it. For me, it is strange that people would clamor for
graphics that are less optimized.

Is the visual benefit that high? Does a game that runs on lots of hardware
like Minecraft really need Ray Tracing that requires very specific GPUs?

I guess for me Ray Tracing feels like finding the primes by naively running
divisions on every number less than the one you're looking at. It's the easy
solution that is terribly slow. Existing lighting engines are (perhaps) like
using a sieve?
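To make that analogy concrete, here is a minimal Rust sketch of the two approaches (the function names are mine, purely for illustration):

```rust
// Trial division: test every candidate against all smaller divisors.
// Simple and obviously correct, but it redoes the same work over and over.
fn primes_trial(limit: usize) -> Vec<usize> {
    (2..limit)
        .filter(|&n| (2..n).all(|d| n % d != 0))
        .collect()
}

// Sieve of Eratosthenes: cross off each prime's multiples once,
// sharing work across all candidates.
fn primes_sieve(limit: usize) -> Vec<usize> {
    let mut is_prime = vec![true; limit.max(2)];
    is_prime[0] = false;
    is_prime[1] = false;
    let mut p = 2;
    while p * p < limit {
        if is_prime[p] {
            let mut m = p * p;
            while m < limit {
                is_prime[m] = false;
                m += p;
            }
        }
        p += 1;
    }
    (2..limit).filter(|&n| is_prime[n]).collect()
}

fn main() {
    // Both give the same answer; only the cost differs.
    assert_eq!(primes_trial(50), primes_sieve(50));
    println!("{:?}", primes_sieve(50));
}
```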

~~~
bryal
It's not just that ray tracing looks better than rasterization for the same
scenario, but that some scenarios are simply impossible / impractically
complicated to render with only rasterized graphics. Just one example is
specular reflections with non-screen space content and a dynamic environment.

Rasterized graphics can do specular reflections quite well as long as the
reflection only shows things that are currently in view. One can then perform
a limited form of ray tracing in the depth buffer to detect which part of the
screen is visible where in the reflection.

However, as soon as we want to reflect things outside of the screen it gets
tricky, as we can't simply perform this screen-space ray tracing. Instead, we
have to rely on pre-baked reflection map textures. This works well enough for
static objects, like the environment, but you can't see dynamic content, like
players, as they can't be pre-baked into the texture. It's also useless when
there are no static objects, as in Minecraft, where there's no such thing as a
static environment -- every block can change dynamically. This is basically
where rasterization's specular reflections hit their limit.

There are of course workarounds, like rendering the scene a second time from
the mirror's perspective and then combining the view render and the mirror
render, but you can imagine it gets prohibitively expensive quite fast as you
add a few more reflective objects to the scene. This method also doesn't work
for glossy reflections -- that's even more complex.
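The depth-buffer trick mentioned above can be sketched in a few lines; this is a hedged toy version (the names and the flat depth array are illustrative, not from any real engine):

```rust
// Screen-space reflection marching, reduced to its core idea: step a
// reflected ray through a depth buffer and report the first pixel whose
// stored depth is nearer than the ray (a "hit"). Rays that leave the
// screen return None -- the off-screen case discussed above, where
// screen-space reflections simply have no information.

struct DepthBuffer {
    width: usize,
    height: usize,
    depth: Vec<f32>, // one depth per pixel; larger = farther away
}

impl DepthBuffer {
    fn sample(&self, x: usize, y: usize) -> f32 {
        self.depth[y * self.width + x]
    }
}

fn trace_screen_space(
    buf: &DepthBuffer,
    mut x: f32, mut y: f32, mut z: f32, // ray origin in screen space
    dx: f32, dy: f32, dz: f32,          // ray step per iteration
    max_steps: usize,
) -> Option<(usize, usize)> {
    for _ in 0..max_steps {
        x += dx;
        y += dy;
        z += dz;
        if x < 0.0 || y < 0.0 || x >= buf.width as f32 || y >= buf.height as f32 {
            return None; // ray left the screen: no reflection data available
        }
        let (px, py) = (x as usize, y as usize);
        if buf.sample(px, py) < z {
            return Some((px, py)); // ray passed behind recorded geometry: hit
        }
    }
    None
}

fn main() {
    // A 8x1 "screen" that is far away everywhere except pixel 5.
    let buf = DepthBuffer {
        width: 8,
        height: 1,
        depth: vec![10.0, 10.0, 10.0, 10.0, 10.0, 1.0, 10.0, 10.0],
    };
    // A ray stepping right and away from the camera hits that near surface.
    let hit = trace_screen_space(&buf, 0.0, 0.0, 0.0, 1.0, 0.0, 0.5, 32);
    assert_eq!(hit, Some((5, 0)));
    println!("hit at {:?}", hit);
}
```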

So this is just one example of how rasterized graphics limits us: we can't
have more than a few reflective objects in a scene that features dynamic
geometry, and that is a very sensible thing to want!

In my opinion, ray tracing really is that much better than the approximations
we've developed. Also consider how much faster / cheaper it would be for a
studio to create new graphics engines when you only have to write 1000 lines
of ray tracing code instead of 100'000 lines of rasterization hacks (for a
worse-looking result!).

~~~
pixel_fcker
Hate to tell you this but ray tracing gets just as complicated. You’re just
shifting the realism bar much higher, but certain effects are always just out
of reach in a given time budget and require specialised solutions and hacks to
achieve.

~~~
bryal
In a given time budget, maybe, but that wasn't really part of the question I
answered. The question asker compared ray tracing to brute-force prime
finding vs. using a sieve -- and that's just not how it is.

Also, I'm not sure about "just as complicated". Is rendering refraction with
dispersion "just as complicated" to achieve in a given time budget with ray
tracing as with rasterization? I must admit I'm not well versed in modern
rasterization hacks, but as far as I know that is simply impossible to
achieve, regardless of how much time you have.

~~~
pixel_fcker
> Also consider how much faster / cheaper it would be for a studio to create
> new graphics engines when you only have to write 1000 lines of ray tracing
> code instead of 100'000 lines of rasterization hacks (for a worse-looking
> result!).

You were talking about games here, no? That's the ultimate hard time
constraint.

1000 loc gets you a very basic path tracer which isn't really going to be good
for very much.

Your dispersion example is interesting - you can't really do it correctly with
rasterization, no, although you can do a distorted background texture lookup
with individually offset/blurred RGB channels. If you want to do rough glass
you can just increase the blur amount. Not correct but looks 'good enough' in
a lot of cases.

With ray tracing you can just trace 3 rays (one for each of red, green and
blue). Simple! Except can you really afford 3 rays? Also, how much do you
offset them by? You could use Cauchy's formula and use real refraction
indices, but then you're going to get ugly separation between the channels.
You could sample the whole visible spectrum and use temporal accumulation to
build up the correct color, but now you've got color noise. What happens if
you want to simulate rough glass? That's going to be very noisy indeed.
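For the curious, the Cauchy route really is only a few lines of Rust. This is a hedged sketch: the two coefficients below roughly approximate BK7 glass (wavelength in micrometres), and everything else is illustrative:

```rust
// Cauchy's equation: n(lambda) = A + B / lambda^2.
// Shorter wavelengths (blue) get a larger index and thus bend more,
// which is exactly the per-channel separation being discussed.
fn cauchy_ior(lambda_um: f64) -> f64 {
    const A: f64 = 1.5046; // approximate BK7 glass coefficients
    const B: f64 = 0.00420;
    A + B / (lambda_um * lambda_um)
}

fn main() {
    // One refraction index per colour channel (red, green, blue).
    let (r, g, b) = (cauchy_ior(0.65), cauchy_ior(0.55), cauchy_ior(0.45));
    assert!(b > g && g > r); // blue bends the most
    println!("n_red = {:.4}, n_green = {:.4}, n_blue = {:.4}", r, g, b);
}
```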

What about shadows from the glass? You can't afford to render caustics to do
it correctly, after all. Do you just ignore them? That'll look weird. Use a
Fresnel-weighted transparent shadow? Probably, but now you have to handle that
correctly everywhere, and running a shader for shadow rays is expensive too,
so maybe you have to special-case that situation so most of your scene lands
on the happy path.

My point is that anyone can write a basic path tracer in a weekend that will
correctly simulate light transport given an infinite amount of time. Writing a
renderer that will produce an image of a given quality in a given amount of
time, incorporating a list of effects that an art director has decided are
essential to the look of your product, is still a very hard task. It's simpler
in a lot of ways, but also has to handle a lot of other complexities for the
things that aren't possible in a rasterizer but are still very expensive to
compute in a ray tracer.

~~~
bryal
You make many good points, but I still feel like you're really underselling
the potential of simple ray tracing. Hardware acceleration of intersection
testing and BVH construction/traversal is only going to become more prevalent,
and with temporal reprojection and spatial denoising methods, many kinds of
noise can be mitigated. It doesn't even have to be that complex -- "An
Efficient Denoising Algorithm for Global Illumination" by Mara et al. is only
7 pages (more like 5, really).

Regarding spectral path tracing and colored noise, Wilkie et al. have written
a good paper on this, "Hero Wavelength Spectral Sampling", if you're
interested.

------
bryal
I'll chime in with my own ray tracer!
[https://gitlab.com/JoJoZ/futhark-tracer](https://gitlab.com/JoJoZ/futhark-tracer)

It's a spectral path tracer written together with my friend as part of our
master's thesis. It's implemented in Futhark, which is a new language for GPU
programming. It's a purely functional array language, and it's very ML-like.
It makes GPGPU programming quite easy, and as far as we can tell, the
optimizations are really good!

------
boulos
For those looking to go beyond Pete’s “in a weekend” book, the full text of
the 3rd edition of PBRT is freely available online:

[http://www.pbr-book.org/3ed-2018/contents.html](http://www.pbr-book.org/3ed-2018/contents.html)

------
dagmx
Great work.

You might be interested in checking out this implementation of pbrt in Rust
too:

[https://github.com/wahn/rs_pbrt](https://github.com/wahn/rs_pbrt)

~~~
Dalamar42
That's interesting. Thanks, I'll have a look.

------
_bxg1
How do you get it so clean? My renders have artifacts no matter how high I
crank up the fidelity:
[https://github.com/brundonsmith/raytracer](https://github.com/brundonsmith/raytracer)

Edit: I just zoomed way in and I can see ever so slight bits of speckling :)
Maybe I'm just not turning the bounce-ray count up high enough...

~~~
boulos
The author is actually preferentially sending samples towards "attractors" [0]
for lights and dielectrics. Doing so massively reduces variance. My favorite
example is Figure 5 in this old paper [1] (I've never actually read Pete's ray
tracing in a weekend, so I don't know if this is in there).

The most common variance reduction technique though would be to do direct
lighting (aka "next event estimation" if you want to be overly pedantic). Your
lighting in that image looks like it's just a "Hi, I happen to be emitting
tons of energy" sphere in space, which will cause a lot of noise.
Alternatively, if you _are_ sampling it, you are likely not sampling the
sphere as well as you could. You'll want to sample the sphere following the
setup in Figure 2 in Pete's "Direct Lighting" paper [2] (which as a reminder,
is currently accessible at the ACM during Covid-19).

[0]
[https://github.com/Dalamar42/rayt/blob/fc57fa4afc080a578e21e...](https://github.com/Dalamar42/rayt/blob/fc57fa4afc080a578e21ececb153e6d717c9459f/src/world/materials/mod.rs#L103)

[1]
[http://graphics.stanford.edu/~boulos/papers/gi06.pdf](http://graphics.stanford.edu/~boulos/papers/gi06.pdf)

[2]
[https://dl.acm.org/doi/10.1145/226150.226151](https://dl.acm.org/doi/10.1145/226150.226151)
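The setup in Figure 2 of [2] boils down to sampling only the cone of directions that subtends the sphere light as seen from the shading point. Here is a hedged Rust sketch of just that geometry (the helper name and example values are mine, not from rayt's code):

```rust
use std::f64::consts::PI;

/// Given distance `d` from the shading point to a sphere light of
/// radius `r` (with d > r), return (cos_theta_max, pdf) for uniformly
/// sampling the cone of directions that subtends the sphere.
/// Concentrating samples in this cone is the variance reduction:
/// every sample has a chance of reaching the light.
fn sphere_cone(r: f64, d: f64) -> (f64, f64) {
    let cos_theta_max = (1.0 - (r * r) / (d * d)).sqrt();
    let solid_angle = 2.0 * PI * (1.0 - cos_theta_max);
    (cos_theta_max, 1.0 / solid_angle) // uniform pdf over the cone
}

fn main() {
    // A sphere of radius 1 seen from distance 2 subtends a cone with
    // a 30-degree half-angle (cos_theta_max = sqrt(3)/2).
    let (cos_max, pdf) = sphere_cone(1.0, 2.0);
    assert!((cos_max - 3.0f64.sqrt() / 2.0).abs() < 1e-12);
    println!("cos_theta_max = {:.4}, pdf = {:.4}", cos_max, pdf);
}
```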

~~~
Dalamar42
The 3rd book [0] is primarily about implementing an MC ray tracer and then
adding the techniques in the second paper you linked. That's what I've used in
my code as well. I just skimmed through the paper and I think the book covers
most of the material from the paper with the exception of spatial subdivision.
I will have a more careful read later to see exactly what my implementation is
missing from this.

[0]
[https://raytracing.github.io/books/RayTracingTheRestOfYourLi...](https://raytracing.github.io/books/RayTracingTheRestOfYourLife.html)

------
peterbraden
Here's my version:
[https://github.com/peterbraden/rays.rust](https://github.com/peterbraden/rays.rust)

------
kidintech
Hey, could the readme provide a list of how far along this project got (e.g.
material types, object imports, textures, optimizations for collisions)?

~~~
Dalamar42
Hey. At the moment what is implemented is exactly what you will find in the
three books of the Ray Tracing in a Weekend series. If I can find the time to
continue this project, I will make a git tag of the current version for people
who just want to use this while going through the books, and I will add a
changelog for any changes I make from that point onward.

