
Photorealistic Path Tracer - azhenley
http://thume.ca/ray-tracer-site/
======
trishume
Well I hoped to be on HN today for my high-effort blog post on generics and
metaprogramming ([http://thume.ca/2019/07/14/a-tour-of-metaprogramming-
models-...](http://thume.ca/2019/07/14/a-tour-of-metaprogramming-models-for-
generics/)) but at least I'm on HN for something :P

Edit: And at least right before I go to bed my post technically made it on the
front page in the very last slot :)

I'm also glad people like my ray tracer, it was super satisfying to build, and
nice to have a project that I know I put substantial effort into and can
really be proud of.

~~~
robbrown451
You're in the very first slot now. Congrats, this is super impressive, both in
implementation and documentation.

------
xal
Fun story about the OP: We brought Tristan on as a high school intern at
Shopify. I heard about how fantastic he was off and on during his internship
(I’m founder and CEO).

So when I heard that he was giving a town hall talk at the end of his stint I
definitely wanted to check it out.

What I walked into was a 30 minute take down of Liquid, the template language
that I wrote. He systematically took every single design decision I made and
dismembered it based on language design fundamentals.

That is still one of my favorite town halls. As a self-taught programmer, it
also sent me off to read about compiler and language theory for the next few
years, to better understand the errors of my ways. Thanks Tristan!

~~~
mberning
Can you share the critique? It would be an interesting read.

~~~
trishume
Basically, all the parsing is based on unanchored regexes. This means that
Liquid will just ignore syntax that doesn't match what it expects instead of
giving a syntax error.

One example is `{% if this && that %}`. That isn't the correct syntax for
logical and, but people write it in Liquid templates, see that it doesn't
produce an error, and expect it to work. In reality it behaves the same as
`{% if this %}` and silently ignores the invalid suffix.
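
A toy illustration of the failure mode (illustrative Python, not Liquid's
actual parser):

```python
import re

# An unanchored pattern finds a match anywhere in the input and ignores
# whatever surrounds it; an anchored pattern must consume the whole
# condition or fail, so it can report a syntax error instead.
loose = re.compile(r"(\w+)")
strict = re.compile(r"^(\w+)\s*$")

src = "this && that"
print(loose.search(src).group(1))  # "this" -- the "&& that" is silently dropped
print(strict.match(src))           # None  -- a chance to raise a syntax error
```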

~~~
mberning
Thanks!

------
donatj
Somehow my brain didn’t catch Steve and the pig coming out of the monitor in
the top hero picture, and I spent most of the article thinking that first
render was a photo of the author's desk.

When the Mandelbox came up I was fully expecting some sort of explanation
about how they thought it was so cool the author had to 3D print it.

At this point I was like “I wonder if they are going to model their desk from
earlier?” I was actually delighted and ecstatic when I figured out my mistake!
Wonderful work, truly.

------
magicalhippo
One of the best books I've read on the subject is
[https://www.pbrt.org/](https://www.pbrt.org/) (which the article also
mentions).

It manages to tackle both the math and the gritty details of implementing the
math. Blew me away the first time I read it.

~~~
trishume
Yah PBRT was my most important resource for completing this project, way more
important than anything I learned in course lectures.

I read it cover to cover before doing much, but then made sure I wrote things
how I wanted without referencing the book except for a few bits of formula-
heavy code I mention in the credits.

Definitely the best technical book I've ever read.

~~~
person_of_color
How did you read it without falling asleep?

~~~
bmurphy1976
He liked the content?

I used to read programming books cover to cover 25 years ago when I was in
high school and college. I don't do it so much these days; the books aren't as
useful, and I know too much, so most of it ends up being redundant now.

If you're learning something new you can consume voraciously.

~~~
colmvp
I'm just wondering how people retain the content? I've read a lot of books
over the last few years to learn AI/programming, ranging from Sutton and
Barto's Reinforcement Learning to Scott Meyers's Effective C++ series, and so
much of it ends up flying out of my brain.

~~~
delinka
I'm like this. I find topics fascinating, but I don't have good memory
retention. I find that I allow the subjects to influence me rather than
creating a catalog of stuff to reference in my brain.

~~~
corysama
Agreed. I let the author’s brain temporarily take over mine by thinking the
words of the book for so long. And, instead of copying the material to my
brain, it becomes more of an index. I know these ideas exist, can be
referenced, and roughly where to find them if necessary.

------
macspoofing
I recognized the final project structure of CS488. Great class, though quite
time-consuming.

The author links to past year galleries:
[https://www.student.cs.uwaterloo.ca/~cs488/gallery-A5.html](https://www.student.cs.uwaterloo.ca/~cs488/gallery-A5.html)

I'm not sure if this was done in all the years, but one of the interesting
(and slightly unfair) aspects of the final project was that you were not only
graded on the quality of the final render, but also on how closely you aligned
with your plan as stated up front. That is, if you planned an ambitious render
but had to scale down to something slightly less ambitious (due to difficulty
or time constraints), your mark would be lower than that of someone who
planned a simple render and executed it.

~~~
wjnc
That seems like quite the wrong incentive for learning. A lot of ambitious
plans fail. Being graded only on the final delivery already gives adequate
incentive to execute. I can see the educational signal that executing
successfully on a plan is a worthwhile skill, but isn't learning also about
experimenting?

~~~
cldellow
You're graded only in part on how closely you align with the proposal. You're
also graded on innovation, technical depth, and software design. The TAs
compare it to
figure skating / high diving -- you should be able to say what you can do, and
then nail it.

In my experience, the students who take the project courses at UW are
ambitious and driven. I suspect the chief effect of this policy is an
ambitious project whose plan has been fleshed out by a healthy back-and-forth
with the TAs upfront, vs a ridiculously ambitious project that hasn't been
fleshed out.

------
duck2
The path tracer is impressive, doing it in a month's worth of work is
impressive too :) I took a ray tracing course in second grade, and had to
leave everything else and write ray tracers during the term.

And, as an item of nostalgia, here is some of my glossy mirror code:

    
    
      if(matl->t == M_GLOSSMIRROR){
          double θ0 = acos(cosθ), Δθ = M_PI * matl->rough / 2, lθ, dθ = 2*Δθ/ray->w, dφ = 2*dθ, θ, φ;
          lθ = (θ0 - Δθ) < 0 ? 0 : ((θ0 + Δθ) < M_PI ? (θ0 - Δθ) : (M_PI - 2*Δθ));
          θ = lθ + (p+RAND1)*dθ; φ = -2*Δθ + (q+RAND1)*dφ;
          v4 z = hit->n, x = norv4(subv4(ray->p, scv4(dotv4(ray->p, z), z))), y = crv4(x, z);
          ...

~~~
sfkdjf9j3j
Now that is impressive. When I was in second grade I was still wrapping my
head around addition and subtraction.

~~~
trishume
I'm guessing that they're from a country where second grade means something
different. That or I should have been looking for this school that teaches ray
tracing in grade 2...

~~~
duck2
I meant second year in uni. Too much thinking in native language I guess.

~~~
CDSlice
Still really impressive! I'm in my second year of college right now and I know
I'd have a hard time with it, even though I'd love to learn.

~~~
duck2
It's simple to start! You can try writing a sphere-only, Blinn-Phong ray
tracer already.
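
A sphere-only tracer really is a small amount of code. Here's a hedged sketch
of the two core pieces (ray-sphere intersection and Blinn-Phong shading); the
scene values, shininess, and light position are all made up for illustration:

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def hit_sphere(origin, d, center, radius):
    # Solve |origin + t*d - center|^2 = r^2 for the nearest t > 0
    # (d is unit length, so the quadratic's leading coefficient is 1).
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2 * dot(oc, d)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def shade(point, normal, view, light_pos, shininess=32):
    # Blinn-Phong: diffuse = N.L, specular = (N.H)^shininess,
    # where H is the half-vector between light and view directions.
    l = normalize(tuple(lp - p for lp, p in zip(light_pos, point)))
    h = normalize(tuple(a + b for a, b in zip(l, view)))
    diffuse = max(dot(normal, l), 0.0)
    specular = max(dot(normal, h), 0.0) ** shininess
    return diffuse + specular

# Trace one ray down the -z axis at a unit sphere centered at the origin.
origin, d = (0.0, 0.0, 3.0), (0.0, 0.0, -1.0)
t = hit_sphere(origin, d, (0.0, 0.0, 0.0), 1.0)
p = tuple(o + t * di for o, di in zip(origin, d))
n = normalize(p)
result = shade(p, n, (0.0, 0.0, 1.0), (2.0, 2.0, 2.0))
print(result)
```

Loop that over a grid of pixels instead of a single ray and you have an image.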

------
p0nce
I feel a bit old now :) Really appreciate the ethic of citing sources,
explaining in detail, and presenting choices from first principles.

------
magicalhippo
Nice work and presentation; ray tracing is always fun. Bonus points for the
fractal :)

I assume you did this in RGB space, so no pretty prisms? I couldn't find any
explicit mention. From the report I guess you didn't implement refraction at
all?

Spectral leads to great images, but the color space conversions are such a
pain to get correct.

~~~
alkonaut
Also makes it a lot slower. For a really nice/readable (but therefore also
“naive” and slow) spectral one, look at this one:
[https://github.com/TomCrypto/Lambda](https://github.com/TomCrypto/Lambda)

It’s a lot easier to learn from than the pbrt one in my opinion.

There are ways to avoid having to do single wavelength rays (hero wavelengths
or basis functions) but it’s tricky.

------
lazzlazzlazz
Exceptionally well-documented. A very interesting read, and renewed
inspiration for my work ethic.

------
zokier
First of all, great work. In the "Final Scene" with DOF, the reflection of the
mandelbox on the stapler seems to be more in focus than the stapler itself, is
that actually correct?

~~~
stan_rogers
Yes, it would be. The virtual image in the reflection would be "behind" the
reflecting surface (with the distance being the sum of the camera-to-stapler
distance and the stapler-to-mandelbox distance). You're not focusing on the
reflecting surface, but on the object being reflected.
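
A quick thin-lens sanity check of that claim (all distances and lens
parameters below are invented for illustration):

```python
# Blur-circle (circle of confusion) diameter on the sensor for an object
# at obj_dist, given a lens of focal length f and aperture diameter A
# focused at focus_dist.
def blur_diameter(A, f, focus_dist, obj_dist):
    return A * f * abs(obj_dist - focus_dist) / (obj_dist * (focus_dist - f))

A, f = 0.025, 0.05   # 25 mm aperture, 50 mm lens (made-up values)
stapler = 1.0        # camera-to-stapler distance, metres
mandelbox = 0.5      # stapler-to-mandelbox distance, metres

# The virtual image in the reflection sits behind the surface, at the sum
# of the two distances. Focusing there leaves the stapler itself blurred:
focus = stapler + mandelbox
print(blur_diameter(A, f, focus, focus))    # 0.0: reflection in focus
print(blur_diameter(A, f, focus, stapler))  # > 0: stapler surface blurred
```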

------
danielbigham
UWaterloo CS488! Some of the funnest 200 hours of my life. Here's what I
produced 17 years ago. Kind of embarrassing next to your slick work!
[http://www.danielbigham.ca/raytracer/raytracer.htm](http://www.danielbigham.ca/raytracer/raytracer.htm)

------
Guthur
It looks good except for a few standout things.

The reflection of the cube in the stapler is too sharp, like a perfect mirror
finish.

The book in the background seems to lack any sort of ambient occlusion.

Still a good scene; the fact that these stand out so much to me is evidence of
how good it is :)

------
aasasd
Apparently my senses are burned from the years of hedonism, because the “no
tone mapping” pic is the one I want to touch. “Auto levels” in Gimp on the
“high contrast” pic also gives a result similar to the “no mapping.”

------
2_listerine_pls
Impressive resume

~~~
person_of_color
The definition of 10x

~~~
tachyonbeam
Apparently, 10x developers even design and build their own keyboards so they
can type faster (he really did that): [http://thume.ca/2014/09/08/creating-a-
keyboard-1-hardware/](http://thume.ca/2014/09/08/creating-a-
keyboard-1-hardware/)

~~~
dancek
It must have been very impressive in 2014, but nowadays DIY keyboards are a
big thing. Of course most people just build a kit but designing and building
isn't that hard with 3d printers, the QMK firmware and all.

Disclaimer: I'm in the middle of my first keyboard build, modified from an
existing 3d-printable design. The left half works already. I'll probably blog
about it when it's finished.

------
tempodox
So, no code and no math, just some nice pictures. How utterly boring and
disappointing. Oh well, at least we know it can be done with Blender.

------
androidfantasy
Congrats on the achievement. This is super impressive.

------
susam
This is amazing work. It is very well written and very well presented.
Congratulations!

Ray tracing is fun! Writing code that simulates physical optics and generates
a photorealistic image can be very gratifying. Many years ago, I dabbled in
ray tracing briefly and it was exciting. I have archived some of the code
here, in case you want to take a look:

\- [https://github.com/mycask/java-ray-tracing](https://github.com/mycask/java-ray-tracing)
(A very rudimentary orthographic ray tracer written from scratch in Java)

\- [https://github.com/mycask/pov-ray-tracing](https://github.com/mycask/pov-ray-tracing)
(Examples written while learning POV-Ray)

------
200px
This is cool. I want to start learning this. What programming language would
you suggest to write my code while I learn this? Is C++ pretty much necessary
for performance reasons or other languages would be okay too? Any specific
recommendation?

~~~
alkonaut
If you don’t already know c++ I’d go for Rust.

~~~
trishume
I would have used Rust if I hadn't already put 10 hours of work into an
assignment to write a basic ray tracer, which had to be in C++ unlike the
final project.

------
leowoo91
I just wonder why the author calls ray tracing path tracing..

~~~
sampo
_> I just wonder why the author calls ray tracing path tracing.._

Ray casting (1968) was simply casting rays from camera, one per each image
pixel, until the ray hits the closest object. Then the pixel gets the color
from the object.

Ray tracing (1979) made ray casting recursive for three cases:

1\. Reflection: The object surface is a perfect mirror, so we calculate the
reflection angle from the incoming ray angle, and continue the process.

2\. Refraction: The object is transparent, like glass. The Fresnel equations
give the refraction (transmission) and reflection angles and the distribution
of energy between them. For some angles, one of these can be 0.

3\. Shadow ray: Shadow rays are traced towards each light source. If the
shadow ray is blocked by any other object, we're in shadow with respect to
that light source. If a light source is visible, both the light source color
and the object surface color contribute to the color of this pixel. The object
surface can also have a reflectance model (BRDF), so it doesn't need to be a
matte Lambertian surface, but the BRDF is only applied to light coming
directly from the light sources.

An object can have a partially reflective surface (like glossy plastic), and
it can also be partially transparent (like colored glass). So at each
intersection, we may generate up to all 3 types of rays.
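
As a hedged sketch of that recursion: a toy scene with one mirror sphere on a
matte floor, scalar intensity only, refraction omitted for brevity. Every
constant and name here is invented for illustration, not any particular
renderer's code.

```python
import math
from dataclasses import dataclass

@dataclass
class Sphere:
    center: tuple
    radius: float

LIGHT = (0.0, 3.0, 0.0)               # point light above the scene
MIRROR = Sphere((0.0, 1.0, 0.0), 1.0)  # mirror sphere resting on the floor

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(v):
    l = math.sqrt(dot(v, v))
    return tuple(x / l for x in v)

def hit_sphere(o, d, s):
    # Nearest positive root of |o + t*d - center|^2 = r^2 (d unit length).
    oc = sub(o, s.center)
    b, c = 2 * dot(oc, d), dot(oc, oc) - s.radius ** 2
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 1e-6 else None

def trace(o, d, depth):
    if depth == 0:
        return 0.0
    ts = hit_sphere(o, d, MIRROR)
    tp = -o[1] / d[1] if d[1] < -1e-9 else None  # floor plane y = 0
    if ts is not None and (tp is None or ts < tp):
        # Case 1, reflection: bounce off the mirror sphere and recurse.
        p = tuple(oi + ts * di for oi, di in zip(o, d))
        n = norm(sub(p, MIRROR.center))
        r = tuple(di - 2 * dot(d, n) * ni for di, ni in zip(d, n))
        return trace(p, r, depth - 1)
    if tp is not None:
        # Case 3, shadow ray: direct light on the Lambertian floor.
        p = tuple(oi + tp * di for oi, di in zip(o, d))
        l = norm(sub(LIGHT, p))
        if hit_sphere(p, l, MIRROR) is not None:
            return 0.0            # light blocked: in shadow
        return max(l[1], 0.0)     # N . L with floor normal (0, 1, 0)
    return 0.0                    # background

print(trace((1.5, 3.0, 0.0), (0.0, -1.0, 0.0), 3))         # shadowed floor
print(trace((4.0, 3.0, 0.0), (0.0, -1.0, 0.0), 3))         # lit floor
print(trace((0.0, 3.0, 3.0), norm((0.0, -1.0, -1.0)), 3))  # floor via mirror
```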

What we miss here is that all surfaces reflect light diffusely to some degree,
and that diffuse incoming light from every direction contributes to the color
of a surface, not just direct light from visible light sources. (We also miss
caustics. Most notably, a transparent glass ball between a surface and a light
source will only cast a shadow in ray tracing.)

Historically, there was a time when computers were too slow for sampling, even
statistically weighted importance sampling, of diffuse light coming from all
directions to all surfaces. So the above model of handling only 3 types of
rays (mirror reflection, transparent object refraction, light directly from a
light source) became both popular, and established as ray tracing.
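
That diffuse sampling is commonly done as cosine-weighted hemisphere sampling
around the surface normal. A hedged sketch (illustrative code; the basis
construction below assumes the normal is not parallel to the x axis):

```python
import math
import random

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(v):
    l = math.sqrt(sum(x * x for x in v))
    return tuple(x / l for x in v)

def cosine_sample(n, rng):
    # Sample a unit disk and project up to the hemisphere: directions
    # arrive with pdf cos(theta)/pi, importance-sampling the Lambertian term.
    u1, u2 = rng.random(), rng.random()
    r, phi = math.sqrt(u1), 2 * math.pi * u2
    local = (r * math.cos(phi), r * math.sin(phi), math.sqrt(1 - u1))
    t = norm(cross((1.0, 0.0, 0.0), n))  # assumes n not parallel to x axis
    b = cross(n, t)
    return tuple(local[0] * t[i] + local[1] * b[i] + local[2] * n[i]
                 for i in range(3))

rng = random.Random(0)
n = (0.0, 0.0, 1.0)
samples = [cosine_sample(n, rng) for _ in range(20000)]
# Every sample lies in the upper hemisphere; the mean of N . d tends to 2/3.
mean_cos = sum(d[2] for d in samples) / len(samples)
print(mean_cos)
```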

Later advances toward including all kinds of physically possible light paths
adopted names other than ray tracing. They still trace rays, but they are not
called ray tracing, because that name is understood to mean only the first
method that became widely popular, with all its limitations.

~~~
leowoo91
You almost touched on photon mapping there, but as long as rays start only
from the screen, there is nothing wrong with calling any of these variations
ray tracing. Limiting the reflection count or casting multiple rays (Monte
Carlo) are often in line with the principles you described.

~~~
quickthrower2
Photon mapping? As in can render diffraction?

~~~
magicalhippo
Simplified, photon mapping is a technique where you shoot rays from the light
sources and record the hits on geometry. The name comes from pretending you're
tracing photons bouncing around.

Then, when doing the regular ray/path tracing, at each intersection of the
camera-ray, you see if the intersection location is near any of the light-ray
hits. If so, add in some contribution from the light-ray hits.

This avoids having to explicitly check each light at each intersection point,
saving quite a number of visibility tests. Also, due to the smoothing kernel
used when the "photon contributions" are added up, it leads to low-frequency
noise rather than the high-frequency noise that is normal in regular path
tracing. This is especially noticeable when caustics are involved.
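
A toy, hedged sketch of the two passes on a 1D "floor" (all constants are
invented; a real implementation stores photons in a kd-tree and works in 3D):

```python
import random

random.seed(0)
LIGHT_POWER, N_PHOTONS = 10.0, 10_000

# Pass 1: shoot photons from the light and record where they land.
# random.gauss stands in for emission direction plus scene bounces.
photon_map = [(random.gauss(0.0, 1.0), LIGHT_POWER / N_PHOTONS)
              for _ in range(N_PHOTONS)]

# Pass 2: at a camera-ray hit, estimate irradiance by gathering nearby
# photons (a kd-tree radius query in real renderers; a linear scan here).
def irradiance(hit_x, radius=0.2):
    total = sum(p for x, p in photon_map if abs(x - hit_x) <= radius)
    return total / (2 * radius)  # photon power per unit length

print(irradiance(0.0))  # near the light: bright
print(irradiance(3.0))  # far from the light: dim
```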

A nice introduction to the technique can be found here:
[https://web.cs.wpi.edu/~emmanuel/courses/cs563/write_ups/zac...](https://web.cs.wpi.edu/~emmanuel/courses/cs563/write_ups/zackw/photon_mapping/PhotonMapping.html)

