
The Tungsten Renderer - bewuethr
http://noobody.org/tungsten.html
======
AaronFriel
This is very impressive! The project doc was a great read and, while beyond my
level, I appreciated the author's ability to distill the academic papers into
something workable and discuss the snags encountered when trying to
approximate the solutions.

For the author, if they're able to take any questions: I would like to know
how much time was spent rendering that final scene (the 3840x2160 or so
resolution one), and how much you were able to cut that rendering time while
prototyping it with your low-resolution or low-sample approximations of the
scene.

~~~
Tunabrain
Thanks!

The final render took about 30 hours at >5k samples per pixel. While
prototyping I typically used a resolution of 1000x563 (low-res 16:9) and used
the incremental renderer in the editor, meaning that the first images would
come back after a few seconds. This was usually enough to figure out whether a
composition would work or not.

To set up and tweak the lighting, I needed more converged renders, and that
could take anywhere from 15 minutes to several hours. Most of the museum
converged pretty quickly, making it easier to set up the sun and these sorts
of things, but the planet would stay noisy for a very long time. Multiple
scattering in the atmosphere has enormous variance and tends to just spray
bright blue dots throughout the scene. Any change to the parameters of the
planet, atmosphere or clouds would require a few hours of rendering before I
was able to check whether the change was for the better or worse.

Not the most efficient feedback loop, but I had other final projects to work
on while renders were running, so it wasn't too bad.

~~~
omega_rythm
Great work! The image quality is astounding, really.

So, I just tested your renderer with the material test scene, and while it ran
pretty fast, it seems to have used only 3 of the 4 cores on my system. Is there
a way to configure/force the renderer to use a specific number of threads?

~~~
Tunabrain
By default it will use one thread fewer than the number of cores available, so
that the rest of the system stays usable while a render is running.
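
Roughly speaking, the default boils down to something like this (an
illustrative C++ sketch of the idea, not the actual Tungsten code):

    #include <thread>

    // Pick a default worker count that leaves one core free for the rest of
    // the system. hardware_concurrency() may return 0 if unknown, so always
    // fall back to at least one thread.
    unsigned defaultThreadCount()
    {
        unsigned cores = std::thread::hardware_concurrency();
        return cores > 1 ? cores - 1 : 1;
    }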

It's not currently user-configurable, but I'll add it to the to-do list.
Thanks!

~~~
omega_rythm
It makes sense, thanks for the explanation.

------
jrapdx3
While all of the images are impressive, I found the hair rendering to be the
most interesting. It's very tough to get it right, that is, realistic-
appearing. As presented, it's very close to that ideal.

An attribute of hair that makes it hard to render is color. At least on my
monitor the "blonde" sample seems a bit too reddish. (Blonde is really sort of
a dull green.) Another factor is color variation, not only the result of
lighting, but also intrinsic to hair growth. (Commonly hair at its "roots" is
slightly darker/lighter, and there's natural randomness among individual
hairs.)

I hope that doesn't sound like picky criticism; it really does look very
promising. Portrait artists I know would empathize with the challenge of
convincingly depicting the hair of human heads.

~~~
Tunabrain
You're absolutely right. Getting the hair data right (groom, strand color,
little fiber variations) is just as difficult as implementing the right
shading model.

My renders are currently a bit lacking in that regard - I'm not an artist, but
I'll try to look out for that in the future. Adding some subtle random
variation between fibers is definitely doable.

As far as the hair color goes, I was following measured absorption
coefficients of real human hair. Eugene d'Eon's paper "An Energy-Conserving
Hair Reflectance Model" contains measured absorption coefficients for the
pigments eumelanin and pheomelanin, which are responsible for human hair
color. By mixing their concentrations, you can reproduce any hair color found
in real life. It's difficult to find the mixtures that are just right though,
and it's likely that I was a bit off for the blonde hair render :) I'll tweak
it some more in the future.
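
For the curious, the mixing itself is just a linear combination of the two
pigments' absorption coefficients, and the subtle per-fiber variation I
mentioned above can be added by jittering the concentrations per strand. A
rough illustrative sketch (the coefficient values here are placeholders, not
the measured ones from the paper):

    #include <array>
    #include <cstdint>

    using Vec3 = std::array<float, 3>; // RGB

    // Placeholder per-channel absorption coefficients per unit pigment
    // concentration. The real values are the measured ones from the paper;
    // these numbers are stand-ins for illustration only.
    const Vec3 sigmaAEumelanin   = {0.4f, 0.7f, 1.4f};
    const Vec3 sigmaAPheomelanin = {0.2f, 0.4f, 1.0f};

    // Combined absorption coefficient for given pigment concentrations.
    Vec3 hairAbsorption(float eumelanin, float pheomelanin)
    {
        Vec3 sigmaA;
        for (int i = 0; i < 3; ++i)
            sigmaA[i] = eumelanin*sigmaAEumelanin[i]
                      + pheomelanin*sigmaAPheomelanin[i];
        return sigmaA;
    }

    // Subtle per-strand variation: jitter concentrations with a hash of the
    // strand index so each fiber gets a slightly different color.
    Vec3 strandAbsorption(float eumelanin, float pheomelanin, uint32_t strand)
    {
        uint32_t h = strand*2654435761u;                  // cheap integer hash
        float jitter = (h & 0xFFFF)/65535.0f*2.0f - 1.0f; // map to [-1, 1]
        float scale = 1.0f + 0.1f*jitter;                 // +/-10% variation
        return hairAbsorption(eumelanin*scale, pheomelanin*scale);
    }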

------
Svenstaro
Absolutely stunning! I've been glued to the idea of real-time path tracing for
a while. How feasible do you think it would be to implement a real-time
version of this at a much lower quality? What would you change if you were to
do this for a real-time implementation?

~~~
Tunabrain
I think that an implementation targeting real-time would need an entirely
different design than an offline implementation, especially if it had to run
on the GPU.

The main challenge is probably achieving full utilization of SIMD units (CPU)
or streaming multiprocessors (GPU), which would require tracing and shading
multiple rays within one unit. There are lots of papers on how to do this for
geometry intersection (i.e. packet tracing), and I think there are a few
slides from NVIDIA that give hints on how to do it for shading (avoid
megakernels, sort all shading points by material id).
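
The material sorting trick is essentially: instead of shading each ray right
where it hits, buffer the hit points, sort them by material, and then shade
each batch together so one SIMD unit or warp runs the same shader code. A very
rough CPU-side sketch of the idea (hypothetical types, not taken from any
actual implementation):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Hypothetical hit record produced by the intersection kernel.
    struct ShadingPoint
    {
        uint32_t materialId;
        uint32_t pixelIndex;
        float u, v; // hit coordinates, etc.
    };

    // Sort buffered hit points by material so each shading batch runs one
    // material's code over many points, keeping SIMD lanes / warps coherent.
    void shadeSorted(std::vector<ShadingPoint>& points)
    {
        std::sort(points.begin(), points.end(),
                  [](const ShadingPoint& a, const ShadingPoint& b) {
                      return a.materialId < b.materialId;
                  });

        for (size_t begin = 0; begin < points.size();) {
            size_t end = begin;
            while (end < points.size()
                   && points[end].materialId == points[begin].materialId)
                ++end;
            // shadeBatch(points[begin].materialId, &points[begin], end - begin);
            begin = end;
        }
    }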

Brigade is a path tracer that is close to the real-time ideal (for outdoor
scenes and using high-end GPUs), so looking into their code would certainly be
interesting.

~~~
Svenstaro
As far as I know, Brigade isn't open source though. :/

------
ilaksh
I really want graphics hardware that does path tracing. Like if I could feed
that material test json into some graphics card or something and get realtime
results. That would be bad ass.

The reason I know something like this is doable with enough engineering is the
Brigade Engine demos.

~~~
rndn
It looks like voxel cone tracing is becoming popular at the moment. Both
approaches have the disadvantage that you need to keep the entire scene in
video memory; however, with VCT the memory usage is bounded by the voxel
resolution, and integration over large solid angles is very cheap and has no
aliasing. It's probably also a disadvantage of GPU path tracing that it makes
little use of the features that graphics cards provide.

~~~
wtracy
Here's an article on a voxel raytracer focused on realtime operation:
[https://directtovideo.wordpress.com/2013/05/08/real-time-ray-tracing-part-2/](https://directtovideo.wordpress.com/2013/05/08/real-time-ray-tracing-part-2/)

And here's a video of the raytracer discussed in the above link:
[https://www.youtube.com/watch?v=i8hSZGTXTx8](https://www.youtube.com/watch?v=i8hSZGTXTx8)

~~~
rndn
I'm not sure how well this renderer handles arbitrary scenes though. Pieces
of broken glass are very suitable for GPU raytracing, because rendering them
is not a "branchy" task: there is no scattering, only deterministic
reflection/refraction. The reflections also make it look like there are a lot
more triangles. With Monte Carlo sampling the error falls off as 1/sqrt(N), so
you need 4x the samples for half the error, which means that a lot of samples
are needed for scattered indirect illumination in a typical scene (I think on
the order of 100 to 1000). In VCT you only need 7-13 samples but a lot of
preprocessing.
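
If it's not obvious why quadrupling the samples only halves the error, here's
a tiny standalone toy demo (estimating a simple integral, nothing to do with
any renderer) that shows the 1/sqrt(N) behaviour empirically:

    #include <cmath>
    #include <cstdio>
    #include <random>

    // Estimate the integral of x^2 on [0,1] (exact value 1/3) by Monte Carlo
    // and print the RMS error over many trials for increasing sample counts.
    // Quadrupling N should roughly halve the error.
    int main()
    {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> uniform(0.0, 1.0);

        for (int n = 256; n <= 256*256; n *= 4) {
            const int trials = 1000;
            double sumSquaredError = 0.0;
            for (int t = 0; t < trials; ++t) {
                double sum = 0.0;
                for (int i = 0; i < n; ++i) {
                    double x = uniform(rng);
                    sum += x*x;
                }
                double estimate = sum/n;
                sumSquaredError += (estimate - 1.0/3.0)*(estimate - 1.0/3.0);
            }
            std::printf("N = %6d  RMS error = %.6f\n",
                        n, std::sqrt(sumSquaredError/trials));
        }
        return 0;
    }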

~~~
wtracy
The author mentions that he chose reflective materials for exactly that
reason.

Realtime renderers still struggle with full Monte Carlo light transport.

------
huuu
Is this really forward path tracing as in "follow the photons from a light
source"?

I'm very interested in forward rendering but it has always been slow as hell
because it's very hard to predict which ray of light will end up in the
camera.

But this is looking very good. Nice work!

~~~
berkut
It depends on the context: for path-tracing (uni-directional / bi-directional
without photon mapping / VCM), "forward" generally means starting at the
camera (eye) and bouncing around the scene.

"backward" generally means starting at the lights in this context (for bi-
directional).

"Forward" in the photon-mapping context generally comes from the fact that
multiple bounces didn't use to be done: you sent out photons from the lights,
they were stored, and then a final gather was done to create the image.

------
Ono-Sendai
As someone who enjoys making atmospheric simulation renderings
([http://www.indigorenderer.com/images/clouds-0](http://www.indigorenderer.com/images/clouds-0)),
this is nice work!

~~~
AceJohnny2
How're you rendering the clouds? A friend of mine did a thesis a few years ago
on real-time cloud rendering [1]. It was very interesting learning about cloud
transference functions (and what a Glory is!) and how they found a way to
optimize that into a manageable database (IIRC) usable for real-time and
realistic rendering.

[1] [http://www-evasion.imag.fr/Publications/2008/BN08a/](http://www-evasion.imag.fr/Publications/2008/BN08a/)

~~~
Ono-Sendai
The clouds are done with multiple-scattering path-tracing with participating
media. Basically the ray bounces around inside the cloud until it exits :)

The cloud density is defined by a shader, which basically takes the form of a
function from position to density (scattering coefficient).
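
As a toy illustration of what such a density function can look like (purely
illustrative, not the actual Indigo shader), a few octaves of noise remapped
to a scattering coefficient:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Placeholder noise in [-1, 1]; a real shader would use Perlin or
    // simplex noise here.
    float noise3(const Vec3& p)
    {
        return std::sin(p.x*12.9898f + p.y*78.233f + p.z*37.719f);
    }

    // Toy cloud density shader: position -> scattering coefficient (sigma_s).
    // A few octaves of noise, thresholded so most of space is empty, then
    // scaled by a base density.
    float cloudDensity(const Vec3& p)
    {
        float d = 0.0f, amplitude = 1.0f, frequency = 1.0f;
        for (int octave = 0; octave < 4; ++octave) {
            d += amplitude*noise3({p.x*frequency, p.y*frequency, p.z*frequency});
            amplitude *= 0.5f;
            frequency *= 2.0f;
        }
        const float threshold = 0.3f, baseDensity = 5.0f;
        return d > threshold ? (d - threshold)*baseDensity : 0.0f;
    }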

~~~
AceJohnny2
Sounds like you're taking the brute-force approach, and the aforementioned
thesis could show how to improve that. The powerpoint is surprisingly clear,
IMHO.

From the paper's conclusion: "We propose a new approach to find significant
light paths in clouds via searching for their collector area which comprises
their entry point on the lit surface of the cloud. In addition to this new
formulation we have proposed a new, GPU-based algorithm to find these
collectors and compute their contributions in interactive time. Similarly to
our previous model, we account for the varying anisotropy of light transport
by treating separately light paths of different orders."

Of course, that's avoiding assumptions about how easy it is to integrate in
your rendering... ;)

------
santaclaus
Cool looking project. I get a bit of a Mitsuba [1] vibe from the renderer, at
first glance -- I'm curious how the two stack up.

[1] [http://www.mitsuba-renderer.org](http://www.mitsuba-renderer.org)

~~~
ykl
Considering that the author and Mitsuba's author, Wenzel Jakob, are both at
ETH Zurich, I wouldn't be surprised if they knew each other. ;)

Additionally, the author has a report page [1] with side-by-side comparisons
of material renders with Mitsuba.

[1] [http://noobody.org/is-report/medium.html](http://noobody.org/is-report/medium.html)

~~~
Tunabrain
Yep! Wenzel was even one of the judges at the rendering competition.

Mitsuba was and continues to be an important inspiration.

------
bobbane
I was looking forward to an article about a 3-D printer that created objects
out of tungsten.

