
Ray Tracing Essentials, Part 1: Basics of Ray Tracing - mariuz
https://news.developer.nvidia.com/ray-tracing-essentials-part-1-basics-of-ray-tracing/
======
optymizer
On the topic of Ray tracing books:

I'm currently at chapter 9 of The Ray Tracer Challenge and I have to say it's
a wonderful book and the stellar reviews are well deserved.

It holds your hand all the way, and that is such a breath of fresh air for
someone like me, who's not a math guy and doesn't instantly 'get' modern
graphics APIs, since there is a lot of prerequisite knowledge encoded in
them.

I also like to understand things from first principles, and it helps when
those principles are explained in a straightforward, even somewhat humorous
way, and I get to see results right away.

Learning practical things would be so much easier if the other books on my
shelf had the same approach.

It's already in the top 3 of my favorite books, along with Nature of Code.

~~~
chrsig
I had a really fun time going through the ray tracer challenge recently! It
really is an amazing introduction to ray tracing

I've since moved on to the pbr book[0] and find myself getting much more
lost in the math. I'm doing my best to brush up on and/or learn all of it,
but it's a bit daunting.

It's really telling how reassuring the tests are in the RTC. Right now I can
re-implement something from the pbr-book, but I can't really say if I got it
right other than by doing a render (and even then, it can be hard to tell).

My plan is to at least go through and write my own tests by doing the math by
hand and trying to verify my implementations.

I really wish there were some intermediary book between the two.

[0] [http://www.pbr-book.org/](http://www.pbr-book.org/)

~~~
optymizer
I wholeheartedly agree with you. The tests do help a lot.

Funnily enough, I own PBR too (it's in my reading queue), and, having skimmed
through it, it seems math heavy. I think I'll leave it for last. My current
queue is:

- RTC (for fun)

- 3D Math Primer for Graphics and Game Dev by Dunn (to solidify the math
part)

- Foundations of Game Engine Dev - Mathematics, by Lengyel (because when it
comes to math, overkill is underrated)

- CGPP (to get the basics down)

... not sure about the order for my other books, but then...

- OpenGL SuperBible (second time around, sadly)

- Real-Time Rendering

- PBR

For the math books, I was thinking to do the same thing as you: do the math by
hand, and then translate them into tests.

~~~
fesoliveira
> - OpenGL SuperBible (second time around, sadly)

I tried using the SuperBible to learn OpenGL a few years back, but it always
seemed a bit too dense for my taste. Since OpenGL itself is an API
specification, and you would usually learn the basics of 3D graphics before
delving into it, I recommend the fantastic Learn OpenGL site
([https://learnopengl.com](https://learnopengl.com)). It goes through the
basics of OpenGL from the ground up and touches on more advanced techniques,
such as shadow mapping and deferred shading. It is a great resource for
learning the API.

~~~
optymizer
Thanks for the advice, I'll give it a try!

------
mafm
Peter Shirley's Ray Tracing in One Weekend series
([https://raytracing.github.io/](https://raytracing.github.io/)) is a great
way to learn enough about ray tracing to implement it yourself.

~~~
Buttons840
It is a great project for learning a new language. I used it to practice Rust.

You get to implement vectors with basic operations on them, which gives you
a chance to practice some abstractions. It's also good to create some unit
tests to ensure your vector operations are correct, and there's good reason
to parallelize your code and run benchmarks. Abstractions, unit tests,
parallelism, benchmarks: you have an excuse to try them all.
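The vector part of that exercise can be sketched in Rust; all the names here (`Vec3` and its layout) are my own illustration, not code from the book:

```rust
use std::ops::{Add, Mul, Sub};

// A 3D vector with just enough operations for a toy ray tracer.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Vec3 {
    x: f64,
    y: f64,
    z: f64,
}

impl Vec3 {
    fn new(x: f64, y: f64, z: f64) -> Self {
        Vec3 { x, y, z }
    }

    // Dot product: the workhorse of intersection and shading math.
    fn dot(self, o: Vec3) -> f64 {
        self.x * o.x + self.y * o.y + self.z * o.z
    }

    fn length(self) -> f64 {
        self.dot(self).sqrt()
    }
}

impl Add for Vec3 {
    type Output = Vec3;
    fn add(self, o: Vec3) -> Vec3 {
        Vec3::new(self.x + o.x, self.y + o.y, self.z + o.z)
    }
}

impl Sub for Vec3 {
    type Output = Vec3;
    fn sub(self, o: Vec3) -> Vec3 {
        Vec3::new(self.x - o.x, self.y - o.y, self.z - o.z)
    }
}

impl Mul<f64> for Vec3 {
    type Output = Vec3;
    fn mul(self, s: f64) -> Vec3 {
        Vec3::new(self.x * s, self.y * s, self.z * s)
    }
}

// The kind of unit test the comment describes: checked against math done by hand.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn pythagorean_length() {
        assert_eq!(Vec3::new(3.0, 4.0, 0.0).length(), 5.0);
    }
}

fn main() {
    let v = Vec3::new(1.0, 2.0, 3.0) + Vec3::new(4.0, 5.0, 6.0);
    assert_eq!(v, Vec3::new(5.0, 7.0, 9.0));
}
```

Implementing the `std::ops` traits is what lets the rest of the tracer read like the book's math (`a + b * t` instead of method chains).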

~~~
dkersten
Oh, that’s a great idea! The main reason I still haven’t learned Rust is I
didn’t have a project to use with it, but this tutorial is something else I’ve
wanted to do, so it’s a perfect match.

~~~
sbeckeriv
I love seeing this book listed. I picked it up 4ish years ago while learning
Rust. I was converting the code to Rust and found a small bug[1] because I
could not convert the code as it was. Peter was amazingly responsive and
encouraging. I highly recommend this and the second book.

[1]
[https://github.com/RayTracing/raytracing.github.io/blob/7e2a...](https://github.com/RayTracing/raytracing.github.io/blob/7e2a8a10746f6de3e08f216d7a47baf059d46d77/books/RayTracingInOneWeekend.html#L1504)

~~~
dkersten
Oh cool, congrats on finding the bug!

------
willis936
From the example in this video I’m not quite sure how you get anti-aliasing
for free with sub-pixel sampling. I understand it in general, but in this
example the light is being reflected off a diffuse surface. Presumably where
the diffused ray goes has a large random component, so sub-pixel sampling
provides very little, if any, benefit. Is my take on this correct?

~~~
m1el
When you shoot a ray from camera perspective, you've already chosen which
pixel it's going to contribute to. "Platonic" pixels are infinitesimally
small.

In reality, pixels on the screen and camera sensor have an area. If you choose
a random position on that area, you get anti-aliasing "for free" because
there's going to be variation in the direction of rays that contribute to the
same pixel.
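A sketch of that jitter, with all names my own and a stand-in "scene" that is just a hard black/white edge (no real tracer here):

```rust
// Tiny deterministic LCG so the sketch needs no external crates.
struct Lcg(u64);

impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64 // uniform in [0, 1)
    }
}

// Stand-in "scene": a hard vertical edge at u = 0.5, which would alias
// badly with a single ray through each pixel center.
fn trace(u: f64, _v: f64) -> f64 {
    if u < 0.5 { 1.0 } else { 0.0 }
}

// Average several rays through random points inside the pixel's square
// footprint; pixels straddling the edge blend toward gray "for free".
fn pixel_color(px: u32, py: u32, w: u32, h: u32, samples: u32, rng: &mut Lcg) -> f64 {
    let mut sum = 0.0;
    for _ in 0..samples {
        let u = (px as f64 + rng.next_f64()) / w as f64;
        let v = (py as f64 + rng.next_f64()) / h as f64;
        sum += trace(u, v);
    }
    sum / samples as f64
}

fn main() {
    let mut rng = Lcg(42);
    // The middle pixel of a 3x1 image straddles the edge at u = 0.5,
    // so its jittered average lands strictly between black and white.
    let c = pixel_color(1, 0, 3, 1, 256, &mut rng);
    assert!(c > 0.0 && c < 1.0);
}
```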

~~~
chongli
You actually don’t get anti-aliasing for free on camera sensors. Many cameras
put a low-pass filter in front of the sensor specifically to avoid the problem
of aliasing. More recently, high-end cameras have been omitting the filter in
order to improve resolution. This occasionally does result in aliasing (called
moiré by photographers) when taking pictures of fine-structured repeating
patterns, however.

------
datashow
Interesting video. I didn't get how the ray casting process forms the
final picture in the eye.

~~~
logfromblammo
There's a point in space that represents the lens of the observer's eye, and a
rectangle in space that represents the viewport. This rectangle is divided
into pixel-equivalent square areas. For each area, a sampling of one or more
rays is drawn from the lens point through the bounds of the area, each traced
until it encounters a surface of the scene. At that point, the material rules of the
surface might generate another ray for specular reflection, a cone for diffuse
reflection, another ray for refraction, and also add the emissive light value
from that material. If the specular or diffuse reflections encounter a light
source or ambient light, they add some of that light to the pixel-equivalent.

The diffuse cones send out a sampling of rays and attenuate the light from the
light source, based on how many of those rays hit it, instead of some other
object.

Instead of drawing light onto the scene and calculating how much passes
through the viewport to the lens, ray-tracing cheats by working backwards,
because photons traveling backward in time follow exactly the same rules as
those traveling forward in time. Every photon that can travel backwards in
time from the eye to hit a light source must have emanated from a light source
with exactly the right direction and polarization to enter the eye. So the
only photons calculated are the ones that contribute to the scene as viewed by
the eye.
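Just the first step of the setup described above, as a sketch (the eye at the origin and a viewport rectangle at z = -1 are my own assumed coordinates):

```rust
// A ray from the lens point through a point on the viewport.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Ray {
    origin: [f64; 3],
    dir: [f64; 3],
}

// Map a pixel index to the center of its pixel-equivalent square on a
// viewport rectangle spanning [-1, 1] x [-1, 1] at z = -1, and draw the
// ray from the lens point (the origin) through it.
fn primary_ray(px: u32, py: u32, width: u32, height: u32) -> Ray {
    let u = 2.0 * (px as f64 + 0.5) / width as f64 - 1.0;
    let v = 1.0 - 2.0 * (py as f64 + 0.5) / height as f64; // +v points up
    Ray {
        origin: [0.0, 0.0, 0.0],
        dir: [u, v, -1.0],
    }
}

fn main() {
    // The single pixel of a 1x1 image looks straight down the view axis.
    let r = primary_ray(0, 0, 1, 1);
    assert_eq!(r.dir, [0.0, 0.0, -1.0]);
}
```

Everything after this (intersection, material rules, recursing on reflection and refraction) hangs off what each of these rays hits.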

~~~
datashow
Thanks. I think I got most of what you described.

If I understand it correctly:

1) a point / pixel in the scene (as viewed by the eye) sends out a cone of
rays, and the final color of this pixel is a combination of what those rays
hit. This is the ray casting process, the reverse of light traveling.

2) the overall picture of the scene is the combination of pixels each
calculated by the above ray casting process.

Am I right?

~~~
logfromblammo
Yes. The problem with working backwards is that some optical calculations have
probability elements. A photon that hits a half-silvered mirror has a 50%
chance of (specular) reflecting and a 50% chance of transmitting.

So for ray-tracing, you calculate along both paths and give 50% weight to
each. Every time a ray hits a triangle in the scene, the material properties
determine how the various components are weighted and summed into the color
of the pixel.
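That 50/50 branching can be sketched as a weighted recursion; the stand-in radiance functions below are my own, just to make the weighting concrete:

```rust
// Stand-ins so the sketch runs: a light source down the reflected path,
// darkness down the transmitted one. A real tracer would spawn actual
// reflected and refracted rays here.
fn trace_reflection(_depth: u32) -> f64 { 1.0 }
fn trace_transmission(_depth: u32) -> f64 { 0.0 }

// At a half-silvered mirror, follow BOTH paths and blend them 50/50,
// rather than flipping a coin the way a forward-traveling photon would.
fn shade(depth: u32) -> f64 {
    if depth == 0 {
        return 0.0; // recursion cap: no contribution past the depth limit
    }
    let reflected = trace_reflection(depth - 1);
    let transmitted = trace_transmission(depth - 1);
    0.5 * reflected + 0.5 * transmitted // the material's weights sum to 1
}

fn main() {
    assert_eq!(shade(2), 0.5); // half the light-path radiance survives
}
```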

------
dundercoder
Video is really well done. Will be looking for the next one.

------
signa11
slightly tangential question: does anyone know of resources i might be
able to use for plotting mesh surfaces, e.g. z = f(x, y)? all online
resources seem to point to how to use surface plots in either matplotlib or
gnuplot or some variant thereof. thank you kindly!

------
falcolas
Dear NVidia: while I and many others appreciate the quality of your video, in
an age where people are distributing ray-tracing code on business cards,
there’s not a ton of value in producing even more “Basics” or “Essentials”
of ray tracing educational material.

What would provide real value to those of us interested in ray tracing is
expanding on the territory covered by the PBRT book, making the material it
covers more accessible, and covering the changes to the state of the art
that have occurred in the decade since it was written (particularly the
elements that RTX hardware enables, such as real-time mixing of ray tracing
and rasterization rendering).

Thanks!

~~~
llukas
Something like the Ray Tracing Gems book they published last year? Or maybe
the CFP for the Ray Tracing Gems II book they have open right now?

~~~
falcolas
Sweet. Didn’t see these come out. Thanks for the heads up.

~~~
llukas
There are previous GPU gems available for free as well:
[https://developer.nvidia.com/gpugems/gpugems/contributors](https://developer.nvidia.com/gpugems/gpugems/contributors)

~~~
pjmlp
And the very first one of the NVidia series, "The Cg Tutorial", is also
available.

[https://developer.download.nvidia.com/CgTutorial/cg_tutorial...](https://developer.download.nvidia.com/CgTutorial/cg_tutorial_chapter01.html)

Granted, Cg is now just a curiosity in the context of shading languages, but
it is nevertheless interesting to read.

