

Problems Raytracing Doesn't Solve - blackhole
http://blackhole12.blogspot.com/2012/09/7-problems-raytracing-doesnt-solve.html

======
tsahyt
There are a number of problems with this article. First, the way ray tracing
is described is very odd. Rays aren't traced beginning from the light source
that emitted them, but are shot from an "eye point" through a pixel on the
screen and are _then_ traced back to a light source (possibly with multiple
bounces). Assuming all other operations are constant time, this yields a
linear-time rendering algorithm (linear in screen resolution). For some data
structures this is ridiculously effective. The way it is described in the
article is horribly inefficient. Not just that, it won't ever terminate since
a light source theoretically emits an infinite number of rays that have to be
traced. Infinite number of rays => Infinite time => algorithm never
terminates. There are ways to solve that, yes, but in general rays are traced
starting at the camera and _end_ at the light source.
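To make that concrete, here's a minimal sketch (my own illustration, not from the article) of camera-based ray generation: exactly one primary ray per pixel, so the ray count is linear in the resolution. The `trace` stub stands in for real intersection and shading code.

```python
# One primary ray per pixel: ray count is width * height, regardless of
# how many light sources the scene contains.

def trace(ray_origin, ray_dir, depth=0):
    # A real tracer would intersect the scene here and recurse on
    # reflection/refraction rays; this stub just returns a flat color.
    return (0.5, 0.5, 0.5)

def render(width, height):
    eye = (0.0, 0.0, 0.0)
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel center to a point on an image plane at z = -1.
            px = (x + 0.5) / width * 2.0 - 1.0
            py = 1.0 - (y + 0.5) / height * 2.0
            row.append(trace(eye, (px, py, -1.0)))
        image.append(row)
    return image

image = render(4, 3)  # exactly 4 * 3 = 12 primary rays
```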

On to the paragraphs on photorealism. Yes, ray tracing won't solve this
because you haven't even given it a problem yet. They're basically three
paragraphs going on about how we don't know how to define photorealism. Yes,
it is a bad term. Now, let me have a shot at defining realism in computer
graphics: The easier I can mistake it for a real scene, the more realistic it
is. What ray tracing does here is substantial! One of the main reasons why we
can easily distinguish a computer generated scene (except for some scenes in
movies nowadays that have been rendered off-line using thousands of hours of
processor time) is lighting. We instinctively "feel" things that are odd about
lighting. We're very sensitive to that. Ray tracing can provide better
reflections and better shadows than almost any other rendering method and
basically does them with just a couple of additional bounces. Given an
efficient data structure this is one of the quicker ways to do real time
lighting effects properly. To top that off, ray tracing can do proper
refraction, proper ambient occlusion, interactive indirect illumination and a
few other nice effects. Altogether this means one thing: stepping closer
towards realism.

Concerning complexity, ray tracing itself doesn't offer a solution. As I've
pointed out already, time complexity of ray tracing algorithms depends
_heavily_ on the data structures that are used for looking up collisions of
rays with geometry. There are good ones, there are bad ones. What ray tracing
does though is free occlusion. Only geometry that is visible is ever rendered.
That is pointed out in the article as well. What rubs me the wrong way here is
"it still has to navigate through the scene representation". Obviously that is
true, but as I just said, a lookup algorithm is _not_ part of ray tracing. RT
merely paves the way to use good data structures with it. It's not a problem
ray tracing can solve, because it is a _rendering algorithm_ , not a data
structure.
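As an illustration of the kind of lookup meant here (a hypothetical sketch, not tied to any particular engine), this is the standard slab test a BVH or kd-tree node would use to reject rays that miss its bounding box before any triangle is touched:

```python
def ray_hits_aabb(origin, direction, box_min, box_max):
    # Standard slab test: a ray hits an axis-aligned box iff the
    # per-axis entry/exit parameter intervals overlap.
    t_near, t_far = -float("inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            # Ray is parallel to this slab; must start inside it.
            if o < lo or o > hi:
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
    return t_near <= t_far and t_far >= 0.0
```

A hierarchy of such boxes is what turns "billions of triangles" into a logarithmic number of tests per ray, and it lives entirely in the data structure, not in the tracing algorithm.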

The scale problem with the stars. Well that is really a memory problem. Again,
an efficient data structure will work no matter the spatial distance a ray has
to travel. How are entire solar systems simulated? Well, on that scale you get
away decently with not loading everything at once into memory and fetching
data when needed. I suspect that this is exactly what is done with current
approaches and there's nothing that prevents you from doing the same while ray
tracing the scene.

Materials like aerogel or clouds are volumetric effects. Ray tracing is
_perfect_ for this. In fact, off line rendering uses ray tracing almost
exclusively to perform those stunts.
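For instance, a toy ray marcher (my own sketch; the `density_fn` callback, step count, and Beer-Lambert attenuation are illustrative assumptions) shows how a ray accumulates optical depth through a participating medium like a cloud:

```python
import math

def march_transmittance(density_fn, origin, direction, t_max, steps=100):
    # Ray marching through a participating medium: accumulate optical
    # depth along the ray, then apply Beer-Lambert attenuation
    # exp(-optical_depth) to get the surviving fraction of light.
    dt = t_max / steps
    optical_depth = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        p = tuple(o + t * d for o, d in zip(origin, direction))
        optical_depth += density_fn(p) * dt
    return math.exp(-optical_depth)
```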

The physics argument doesn't hold properly because it assumes that we've
implemented the renderer on the CPU and are therefore eating processor time in
big lumps. It would be suboptimal to implement the ray tracer on the CPU,
because RAM access times are usually slower than video memory access times
from the GPU. Since we're dealing with large amounts of data in all 3D
processing, it's a _much_ better choice to use CUDA, Stream or OpenCL to
program the GPU to do the entire raytracing and merely use the CPU to
occasionally shovel data into video memory. To top that off, ray tracing is
easy to parallelize, since the algorithm operates the same for each screen
pixel. Modern GPUs are bloody brilliant at that. Use them! This leaves some
room to process physics on the CPU, as well as game logic, AI, event
processing, network code, etc.
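The "same algorithm for each pixel" structure can be sketched like this (illustrative Python; on a real GPU each `shade_pixel` call would be one thread in a CUDA/OpenCL kernel rather than a `map` call):

```python
def shade_pixel(args):
    # Pure function of pixel coordinates: no shared mutable state, so
    # every pixel can be computed independently by a separate GPU thread.
    x, y, width, height = args
    # Placeholder shading: a gradient standing in for a traced color.
    return (x / max(width - 1, 1), y / max(height - 1, 1), 0.0)

def render_parallel(width, height):
    jobs = [(x, y, width, height) for y in range(height) for x in range(width)]
    # `map` here could just as well be multiprocessing.Pool.map or a GPU
    # kernel launch; the per-pixel work is identical and independent.
    return list(map(shade_pixel, jobs))
```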

I don't really get the content argument either. The content has to be made
_regardless of the rendering method_. It's the same content after all. How
does ray tracing make "it worse"(sic)?

TL;DR: Ray tracing is not a solution to all our problems but a good step
forwards in real time rendering. What's really keeping us from implementing it
in commercial engines yet is that it's not easily compatible with the
polygonal representation of geometry. _This_ is the problem that we should
actually be solving.

Oh and by the way, the link labelled with "boring" is actually pretty
impressive and _far_ above anything any real-time rendering engine available
today produces.

------
pgsandstrom
I believe raytracing is normally done from each pixel on the screen, into the
world that is being rendered. Not "a bajillion rays" from the light sources in
the scene. I.e. there is always a constant number of rays for each frame.

~~~
struppi
You are right. The author has absolutely no clue what he is writing about.
Exactly as you pointed out.

He also claims raytracing does not solve photorealism, but does not back this
claim. No single argument why! [Edit: But he's right here, for photorealism
you probably want a combination of methods, like ray tracing and radiosity or
ray tracing and photon mapping]

Complexity: For raytracing, you can store the scene in very efficient data
structures, so you do NOT have to search your "billions of triangles" for each
ray. Also, you can make sure rays degrade quickly since you trace them from
the camera backwards to the light source, as you have pointed out.
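The "rays degrade quickly" point can be shown with a toy recursion (my own sketch; the 0.5 reflectance, 8-bounce limit, and contribution cutoff are made-up values): each bounce scales the ray's possible contribution down, so it can be killed early once that contribution is negligible.

```python
def trace(depth=0, throughput=1.0, max_depth=8, min_throughput=1e-3,
          reflectance=0.5, emitted=1.0):
    # Toy bounce recursion: each bounce multiplies the throughput by the
    # surface reflectance. The ray is terminated once its possible
    # contribution is negligible or a bounce limit is reached.
    if depth >= max_depth or throughput < min_throughput:
        return 0.0
    return throughput * emitted + trace(depth + 1, throughput * reflectance,
                                        max_depth, min_throughput,
                                        reflectance, emitted)
```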

"How do you raytrace stars"? They are a light source at infinite distance.
This is a solved problem in computer graphics - And a pretty simple one at
that.

Materials, Physics, Content and AI have nothing to do with computer graphics
AT ALL.

~~~
blackhole
Storing triangles in a super efficient data structure is precisely what I mean
by traversing the scene. Do you really think that's a free operation when you
have billions of triangles inside a data structure that must be dynamic by
nature? You hit all sorts of arcane memory issues. I hit the same thing just
developing a basic kd-tree for 2D culling.

The approximation algorithms I pointed out are exactly how raytracing is done
backwards, but actually still must use a combination of camera rays and light
source rays in a stochastic method in order to determine a path from the camera
to the light source. Brute force raytracing is, in fact, just shooting out
rays from the light sources. That's obviously inefficient, which is why we
have the backwards lighting attempts, as discussed in the article.

Raytracing stars is not as simple as you think, not when you can visit them.
They can no longer be point sources at infinite distance, because the object
itself is not infinitely far away. You can therefore only substitute an
infinite light source when the object is essentially a point, which gets really
nasty when you are dealing with entire galaxies as light sources; and if it
isn't a point, you must deal with all the lighting effects in between.

Materials are _the most important part_ of computer graphics. If you don't
think this, you have absolutely no idea what you are talking about.

~~~
Geee
I don't think anyone has ever done purely a _from light source_ tracing,
because it doesn't make sense. Tracing _from camera_ is not an approximation.

Brute-force path tracing is random sampling _from camera_ to every direction
and bouncing/interacting until light source is hit. This is the standard Monte
Carlo path tracing method which is unbiased and arrives at the accurate
solution given enough time. This is unbiased, accurate and very simple to
implement, but very slow to compute.
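A toy version of that estimator (my own sketch; the half-lit "hemisphere" is a made-up stand-in for a scene): averaging many independent random path samples gives an unbiased estimate whose error shrinks like 1/sqrt(n).

```python
import random

def estimate_pixel(sample_path, n_samples, seed=0):
    # Monte Carlo estimator: the average of independent random samples.
    # Unbiased, and converges to the true value as n_samples grows.
    rng = random.Random(seed)
    return sum(sample_path(rng) for _ in range(n_samples)) / n_samples

def one_bounce_sample(rng):
    # Toy "path": pick a random direction; half the directions see a
    # light of radiance 1.0, the other half see darkness, so the true
    # pixel value is 0.5.
    return 1.0 if rng.random() < 0.5 else 0.0
```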

Metropolis and photon mapping are bidirectional methods where rays are
selectively traced back from light source to forward direction, to speed up
the path tracing computation.

------
vilya
The entire article seems to be founded on misconceptions about raytracing and
a wilful misunderstanding of what is meant by photorealism. Don't waste your
time.

~~~
danielbarla
Yeah, honestly, I stopped reading at the argument that 20 to 30 bounces is not
enough to render a kitchen. I mean, that is still around 19 to 29 more bounces
than what traditional 3D methods use for environment mapping. But
more importantly, I don't think it's common (even in a kitchen) to have more
than a handful of "bounces" being visible - things tend to get small at that
point (unless we're looking at two large, flat mirrors pointed at each other).

------
anonymouz
From the article: "The game industry spends all its computational time trying
to render a scene, leaving almost nothing left for the AI routines, forcing
them to rely on techniques from 1968. Think about that - we are approaching
the point where AI in games comes down to a 50-year old technique that was
considered hopelessly outdated before I was even born."

This can hardly be considered a problem of ray tracing specifically. And the
A* algorithm is a perfectly decent algorithm for path-finding. The notion that
an algorithm that does its job should be disregarded just because it is old is
ridiculous.
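For reference, textbook A* on a 4-connected grid fits in a few lines (illustrative sketch; the `grid` encoding, Manhattan heuristic, and unit move costs are assumptions):

```python
import heapq

def astar(grid, start, goal):
    # Textbook A* on a 4-connected grid; grid[y][x] == 1 marks a wall.
    # Manhattan distance is an admissible heuristic for unit move costs,
    # so the returned path is shortest.
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    came_from = {}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                ng = g + 1
                if ng < best_g.get((nx, ny), float("inf")):
                    best_g[(nx, ny)] = ng
                    came_from[(nx, ny)] = node
                    heapq.heappush(open_heap, (ng + h((nx, ny)), ng, (nx, ny)))
    return None  # goal unreachable
```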

~~~
jiggy2011
A* is fine if you have a single moving object navigating a static scene.
Problem is that most game scenes aren't static, so you either have to
recalculate on every movement (very expensive), break the route down into
smaller paths (possibly leading to very suboptimal paths), or wait for the
agent to get stuck (at which point it might be impossible to get unstuck).

------
delinka
There's an entire field of research around realism in computer simulation and
graphics. Raytracing is but one tool to solve the problems in this field. And
there's at least one company who has monetized its efforts in this field:
Pixar has been solving these problems for years. For quite some time, they've
solved a handful of problems with each movie they made. Then, they release
that knowledge in the newest version of Renderman. Now their products aren't
cheap, nor is the educational material related to it, so I guess that complete
"freedom" in the CGI space is quite a few years out, but I digress...

~~~
vilya
There are a LOT of companies monetising their efforts at making realistic
computer graphics - including the one I work for. It's not just Pixar. :-)

~~~
delinka
"at least one..."

At most ... indefinite.

------
optymizer
I think the author doesn't know how ray tracing is implemented.

> raytracing is the process of rendering a 3D scene by tracing the path of a
> beam of light after it is emitted from a light source, calculating its
> properties as it bounces off various objects in the world until it finally
> hits the virtual camera

With my limited knowledge of graphics, I was under the impression ray tracing
works exactly the opposite way: one sends 1 ray per pixel (based on the size
of the viewport) into the scene, bouncing it off objects (taking into account
their material properties) to finally compute the color value for that pixel.
It's the exact opposite of the real world (where the sun shines light on the
objects and the rays end up hitting our retinas).

Reversing the process has the advantage of being finite: only 1024x768 (or
1920x1200,etc) rays will be sent from the viewport into the scene. In the
author's scenario, there would be an infinite number of rays to be traced
(light sources generally emit light in all directions, and they try to emulate
a real light source, which emits an infinite number of rays).

Am I completely wrong here? Is my whole world a lie?

~~~
anonymouz
Usually one starts at the eye and traces back through the scene. But the
converse is also sometimes useful, and the terminology is slightly ambiguous
(different people have called different variants "forward" and "backward" at
different points in time...), Wikipedia has some details:

[http://en.wikipedia.org/wiki/Ray_tracing#Reversed_direction_...](http://en.wikipedia.org/wiki/Ray_tracing#Reversed_direction_of_traversal_of_scene_by_the_rays)

> Am I completely wrong here?

No.

> Is my whole world a lie?

Yes.

------
mattyppants
So maybe they should have called it: "3 problems that raytracing can't solve,
and 4 other ones that just suck about modeling real life in 3D (on a 2D
surface)".

------
moviewatcher
so, traditionally, things that have been implemented in software, have then
been implemented in hardware, to accelerate the process. Is this possible to
do with raytracing?

~~~
Geee
Current GPUs are very suitable for ray tracing, because raytracing can be
almost completely parallelized and GPUs are programmable nowadays. However,
one problem (huge) is random memory access. When rays can bounce at objects
scattered very far from each other, it's difficult to keep anything cached.
Nevertheless, most current realtime raytracers are GPU implementations, some
are hybrid GPU/CPU.

