"Ray Tracing" has always been an overhyped or misunderstood technology, at least in my experience. Because of the impressive lighting effects it's famous for producing, people view it as some "holy grail" of superior computer graphics technology that we just need to optimize a bit for use in games.
As Carmack described, those highly realistic ray traced renders come at a price: billions of ray calculations. You don't get all that pixel-perfect refraction, etc. for free (and probably never will, at least not with ray tracing). He also explains that for most people, this (e.g. pixel-perfect refraction) really doesn't matter when rasterization techniques exist that achieve extremely high-quality approximations at a fraction of the computational cost.
Conversely, though, ray tracing and related concepts (ray casting) are not at all without value. Many modern video games actually use a sort of hybrid rasterization / ray casting approach. Macrostructure objects are rasterized the normal way, while smaller high-resolution detail patterns (like a brick wall, or stones in the ground) are "added" to surfaces via per-pixel displacement mapping, which is ray casting at its core. This is one of the few cases where you can take advantage of the "log2" efficiency Carmack mentioned -- in a highly specialized implementation without a huge time constant.
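For anyone curious what that per-pixel ray march looks like, here's a rough sketch in Python rather than shader code (the height map, step count, and depth scale are made-up placeholders): walk the view ray through the height field until it dips below the surface, then shade with the texture coordinates found there.

```python
# Sketch of per-pixel displacement (parallax occlusion) mapping: march the
# view ray through a height field until it passes below the surface, then
# shade with the texture coordinates found at the hit point.
# The height map, step count, and depth scale are illustrative placeholders.

def sample_height(height_map, u, v):
    """Nearest-neighbour lookup into a 2D list of heights in [0, 1]."""
    rows, cols = len(height_map), len(height_map[0])
    x = max(0, min(int(u * cols), cols - 1))
    y = max(0, min(int(v * rows), rows - 1))
    return height_map[y][x]

def parallax_ray_march(height_map, uv, view_dir, depth_scale=0.05, steps=32):
    """view_dir is the tangent-space direction from the surface toward the eye."""
    # Texture-space offset per step, projected along the view ray.
    du = -view_dir[0] / view_dir[2] * depth_scale / steps
    dv = -view_dir[1] / view_dir[2] * depth_scale / steps
    u, v = uv
    ray_depth = 0.0
    for _ in range(steps):
        surface_depth = 1.0 - sample_height(height_map, u, v)
        if ray_depth >= surface_depth:   # the ray has gone below the surface
            return (u, v)                # shade with these displaced coordinates
        u += du
        v += dv
        ray_depth += 1.0 / steps
    return (u, v)                        # no intersection found: keep the last sample
```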
I'm trying to imagine how complex a 'core' would be that computed the incident rays on a single pixel, then figuring out how big that would be in an 18nm process technology, and then trying to see if I could fit 2M of them on a reasonable-size die. My head exploded, sadly.
Actually, it will when the complexity classes of the algorithms differ, which is true in this case -- the time taken to ray trace a scene rises as log(n) of the scene's size, while the time taken to rasterize rises linearly with the scene's size.
There exists a threshold of computing power/scene complexity above which raytracing beats rasterization in speed. However, like Carmack pointed out, the constant factors are massive, so this won't be reached in the near future, if ever.
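To get a feel for where that threshold might sit, here's a toy back-of-the-envelope in Python. Every constant in it is invented purely for illustration; only the shapes of the two cost curves come from the argument above.

```python
import math

# Toy crossover estimate. All constants are invented for illustration;
# only the shapes of the curves (linear vs. logarithmic) are the point.
#   rasterization cost ~ c_rast * n             (n = primitive count)
#   ray tracing cost   ~ c_rt * p * log2(n)     (p = pixel count)
c_rast = 1.0          # relative per-primitive cost of rasterization
c_rt = 200.0          # relative per-ray cost of tracing (the "massive constant")
p = 1920 * 1080       # one primary ray per pixel

def raster_cost(n):
    return c_rast * n

def raytrace_cost(n):
    return c_rt * p * math.log2(n)

# Double n until ray tracing pulls ahead.
n = 2
while raytrace_cost(n) > raster_cost(n):
    n *= 2
print(f"crossover somewhere around n = {n:.3e} primitives")
```

With those made-up numbers the crossover lands beyond 10^10 primitives, far larger than any scene you'd actually keep in memory, which is exactly the "if ever" above.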
The reason the constant factors are so huge is that the cost of raytracing rises linearly with the number of pixels to be drawn, while in rasterization a lot of the work can be shared between neighbouring pixels.
Intriguingly, this means that if you can reduce the pixel count, you can vastly improve the value of raytracing.
Notably, if you can do eye tracking and rendering in less than 15ms, you can reach the same perceived visual quality as a full-screen, high-resolution render by rendering only the area you are actually looking at in high resolution, and rendering the rest of the scene at progressively lower resolution farther from the focus point. The cone of high-precision vision is surprisingly small, something like tens of pixels across when looking at a screen from a normal viewing distance. If you did this, you should be able to cut the number of rays you need to send by at least two orders of magnitude, which would bring raytracing to real-time quality on modern hardware.
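As a rough sanity check on that claim, here's a toy Python estimate of the ray budget; the zone radii and per-zone sampling rates are assumptions I made up, not measured values.

```python
import math

# Rough estimate of how many rays foveated rendering could save.
# The zone radii and per-zone sampling rates are assumptions for
# illustration, not measured values.
width, height = 2560, 1440
full_res_rays = width * height          # one ray per pixel, whole screen

# (radius in pixels around the gaze point, rays per pixel inside that radius)
zones = [
    (40, 1.0),       # foveal region: full resolution
    (150, 1 / 8),    # near periphery
    (500, 1 / 64),   # mid periphery
]
far_rate = 1 / 256   # everything beyond the last zone

foveated_rays = 0.0
prev_area = 0.0
for radius, rate in zones:
    area = math.pi * radius ** 2
    foveated_rays += (area - prev_area) * rate
    prev_area = area
foveated_rays += (full_res_rays - prev_area) * far_rate

print(f"full-screen rays : {full_res_rays:,}")
print(f"foveated rays    : {foveated_rays:,.0f}")
print(f"reduction        : {full_res_rays / foveated_rays:.0f}x")
```

Even with these crude zones the reduction comes out to roughly a factor of a hundred, which is the kind of saving the two-orders-of-magnitude estimate above implies.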
Does your last paragraph have any implications for VR? Could something like the Oculus Rift benefit from raytracing technology? Or have I misunderstood what you are saying?
I suspect this would look highly unnatural in practice, though, as even static scenes would flicker and change, especially with reflections and any surface or technique that uses random sampling methods (which is virtually every algorithm that is both fast and looks good).
In paired tests, test subjects were unable to see a difference between the foveated image and the normally rendered one. The flicker can be removed by blurring the peripheral image until it no longer shows. Because your brain is used to a very low sampling rate outside the fovea, that actually helps hide the artifacts: they occur naturally in your vision anyway.
These renderers start at the light source and stochastically generate and follow photons. Like a digital camera they suffer from noise :-)
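A minimal sketch of that light-first idea, in the spirit of photon mapping (the one-plane "scene" and the emission scheme are toy assumptions):

```python
import random

# Toy sketch of light-first tracing: emit photons from the light, follow
# them into the scene, and record where the energy lands. The "scene" is a
# single diffuse floor plane at y = 0; that and the emission scheme are
# illustrative assumptions.

def random_direction():
    """Uniform random direction on the unit sphere (rejection sampling)."""
    while True:
        d = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        length_sq = sum(c * c for c in d)
        if 0.0 < length_sq <= 1.0:
            length = length_sq ** 0.5
            return tuple(c / length for c in d)

def trace_photons(light_pos, light_power, count=100_000):
    deposits = []   # (x, z, energy) hits on the floor plane y = 0
    for _ in range(count):
        direction = random_direction()
        energy = light_power / count
        if direction[1] < 0.0:                       # photon heads toward the floor
            t = -light_pos[1] / direction[1]         # intersect the plane y = 0
            hit = tuple(o + t * d for o, d in zip(light_pos, direction))
            deposits.append((hit[0], hit[2], energy))
    return deposits

hits = trace_photons(light_pos=(0.0, 5.0, 0.0), light_power=100.0)
print(f"{len(hits)} photons deposited on the floor")
```

The noise mentioned above is exactly the variance you see when too few photons land in any given region.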
Yes, that's similar to ray tracing, but going 'backwards'
Rasterizing is certainly faster, but it requires more work in the 'intelligence' of scene assembly.
With raytracing (or physics-based rendering) you have lights and objects 'naturally interacting' with each other, so shadows, reflections, and radiosity come "automatically" (still hard to do).
With rasterizing you have triangles with different colors, and it's up to you to paint them accordingly.
Carmack is right about games (surprise!). I can't imagine "Imagination Technologies" is pushing the Caustic R2500 at games.
Large scale "render farms" that exploit both already exist and are in commercial use in the VFX industry.
There are some tools that allow you to render a frame quickly by using many hosts to parallelise the task. This is usually only practical when the data that generates the frame is relatively small, in other words not gigabytes of fluid simulation voxels. It's also not realtime, by any stretch!
A renderfarm is usually not technologically any different to local rendering on a workstation. We generally don't parallelize across hosts within a single frame, but at least you can render lots of frames simultaneously!
Another thing that doesn't seem to have been mentioned in this post is that raytracing can be accelerated massively (where applicable) by using instancing. A single object can be used many times in a scene, differing only in its transformation. This allows the geometry to be stored in RAM once and re-used: incoming rays that hit an instance's bounding box can simply be transformed into the local space of the instance. Of course this is of no help in an extremely complex scene full of unique objects, but in practice you can make great savings (and create very complex scenes that are cheap to raytrace) this way.
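A small sketch of how that ray transform works (the 4x4 matrices are ordinary placement transforms, but `geometry.intersect` is just a stand-in for whatever mesh/BVH intersection routine you already have):

```python
import numpy as np

# Sketch of instanced ray intersection: one shared copy of the geometry,
# many transforms. The incoming ray is moved into the instance's local
# space instead of duplicating the geometry in world space.
# `geometry.intersect` is a stand-in for an existing mesh/BVH intersection.

class Instance:
    def __init__(self, geometry, world_from_local):
        self.geometry = geometry                          # stored once, shared
        self.world_from_local = world_from_local          # 4x4 placement matrix
        self.local_from_world = np.linalg.inv(world_from_local)

    def intersect(self, ray_origin, ray_dir):
        # Transform the ray: origin as a point (w = 1), direction as a vector (w = 0).
        local_origin = self.local_from_world @ np.append(ray_origin, 1.0)
        local_dir = self.local_from_world @ np.append(ray_dir, 0.0)
        # Test against the single shared copy of the geometry.
        return self.geometry.intersect(local_origin[:3], local_dir[:3])

# Usage idea: a thousand identical trees share one mesh, each with its own matrix.
# forest = [Instance(tree_mesh, placement_matrix(i)) for i in range(1000)]
```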
You're also forgetting about the flip-side. A classic benefit of scanline renderers was that a scene could be split into multiple parts, which (via a z-buffer) could then be combined without any further rendering. A raytracer, on the other hand, has to have access to the whole scene (if you consider reflections, which make culling objects essentially impossible) to calculate any given pixel sample.
I don't know whether this is still an issue, but for a while it was a barrier to raytracing scenes on a Pixar level.
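For what it's worth, the depth-composite step described above is tiny; something like this (plain Python over nested lists, assuming both layers were rendered from the same camera):

```python
# The z-buffer merge that lets independently rendered layers be combined:
# per pixel, keep whichever layer is nearer. Buffers are plain nested lists
# here; both layers are assumed to come from the same camera.

def depth_composite(color_a, depth_a, color_b, depth_b):
    rows, cols = len(depth_a), len(depth_a[0])
    out_color = [[None] * cols for _ in range(rows)]
    out_depth = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            if depth_a[y][x] <= depth_b[y][x]:       # layer A is nearer (or tied)
                out_color[y][x], out_depth[y][x] = color_a[y][x], depth_a[y][x]
            else:
                out_color[y][x], out_depth[y][x] = color_b[y][x], depth_b[y][x]
    return out_color, out_depth
```

The point being that nothing in this merge needs to look at the rest of the scene, which is exactly what reflections in a raytracer break.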
Something like this would really help with things like raytracing. But then again, as Carmack mentions, the huge disadvantage is the acceleration structures, which need to be discarded and rebuilt per frame for dynamic objects. It's like saying that raytracing is by its nature inferior (or at least very wasteful) from a performance point of view. It's the sort of problem you can't solve by simply saying "Just throw more hardware at it!"
I wrote a ray tracer once, but it was primitive. So I'm not completely talking out of my butt, just mostly.
Yeah, but surely not for the geometry itself. In rasterization, we may use simple low-overhead acceleration structures to efficiently traverse a relevant subset of the whole (but coarsely described) scene for some fancy culling, collision detection (OK, that's not really rendering)... but the geometry itself (vertices, polygons, vertex attributes) does not need to be traversed like that, and thus does not have to be stored as individual triangles in an octree or bounding-volume hierarchy or what not. Quite a difference in overheads here. In GPU terms, with rasterization you have geometry neatly stored in vertex buffers and an awesome Z-aware traversal method with vertex shaders. In a simplistic current-gen fragment-shader-based raytracer, each pixel traces a ray through your acceleration structure, which may be stored in a volume texture (ouch, so many texel fetches...) since vertex buffers are not sensibly accessible in a fragment shader.
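To make that per-pixel traversal concrete, here's roughly what the inner loop looks like when the acceleration structure is a uniform grid, written as plain Python rather than a fragment shader (the grid contents are a placeholder). Each iteration corresponds to another fetch into the structure:

```python
import math

# Sketch of the per-ray inner loop in a grid-based tracer (3D DDA in the
# style of Amanatides & Woo). In a fragment-shader tracer each iteration
# below would be another texel fetch into the volume texture that stores
# the grid; the grid contents themselves are a placeholder here.

def traverse_grid(grid, grid_size, origin, direction, max_steps=256):
    """Walk the ray cell by cell; return the first occupied cell, if any."""
    cell = [int(math.floor(c)) for c in origin]
    step, t_max, t_delta = [], [], []
    for axis in range(3):
        d = direction[axis]
        if d > 0:
            step.append(1)
            t_max.append((cell[axis] + 1 - origin[axis]) / d)
            t_delta.append(1.0 / d)
        elif d < 0:
            step.append(-1)
            t_max.append((cell[axis] - origin[axis]) / d)
            t_delta.append(-1.0 / d)
        else:
            step.append(0)
            t_max.append(float("inf"))
            t_delta.append(float("inf"))
    for _ in range(max_steps):
        if not all(0 <= cell[a] < grid_size[a] for a in range(3)):
            return None                        # walked out of the grid: no hit
        if grid[cell[0]][cell[1]][cell[2]]:    # the "texel fetch": occupied cell?
            return tuple(cell)
        axis = t_max.index(min(t_max))         # step across the nearest cell boundary
        cell[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return None
```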
> scenes that are indistinguishable from reality
This would also require a high dynamic range output device. Looking directly into perfectly raytraced sunlight still won't glare and blind me like the real world does, but indeed, by 2050...
You could try to look directly into a projector instead of at the screen, if glare is all you want. (And by 2050 I would suspect the scene is directly copied into the frontal lobe, bypassing the visual cortex.)
But that's from my simple understanding, possibly there would be other problems too...
I am 90% sure that the eventual path to integration of ray tracing hardware into consumer devices will be as minor tweaks to the existing GPU microarchitectures.
"I can send an IP packet to Europe faster than I can send a pixel to the screen. How f’d up is that?"
Brigade is one example: http://igad.nhtv.nl/~bikker/
Here's an in-game example: http://www.youtube.com/watch?v=6_DrgiwLABk
And a nice blog with posts about Brigade, Octane and others: http://raytracey.blogspot.nl/
For some reason I really want to understand what this sentence means. I don't know why it jumped out at me; maybe because it seems both accessible and arcane. I want some path to even just tour the arcane concerns of someone so deep into a (this) particular domain.
The simplest way to calculate the lighting is to follow another ray to each of your light sources, and if there's nothing in the way, you add the intensity of that light to the pixel.
The problem with this simple approach is that something is either blocking your path to the light source or it isn't, which creates absolutely sharp shadows because it's simulating the lights as if all the light's brightness is emanating from a single infinitesimal point.
In real life, lights tend not to be like that.
To get realistic-looking shadows with ray tracing you have to send multiple rays to different parts of each of your lights, adding a portion of the light's brightness each time (you're essentially doing a Monte Carlo integration over the area of the light). The more rays you send, the better the shadow edges look, but the more effort you spend calculating them.
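A compact sketch of both cases in Python; `occluded(a, b)` stands in for whatever scene intersection test the tracer already has ("is anything between a and b?"), and the rectangular area light is just an illustrative choice:

```python
import random

# Hard vs. soft shadows with shadow rays. `occluded(a, b)` is a stand-in
# for the tracer's existing scene intersection test; the rectangular area
# light is an illustrative choice.

def point_light_shadow(hit_point, light_pos, light_intensity, occluded):
    """One shadow ray to a point light: all of the light, or none of it."""
    return 0.0 if occluded(hit_point, light_pos) else light_intensity

def area_light_shadow(hit_point, light_corner, light_u, light_v,
                      light_intensity, occluded, samples=64):
    """Monte Carlo: average shadow rays over random points on a rectangular light."""
    visible = 0
    for _ in range(samples):
        s, t = random.random(), random.random()
        # A random point on the light's surface: corner + s*u_edge + t*v_edge.
        sample = tuple(c + s * u + t * v
                       for c, u, v in zip(light_corner, light_u, light_v))
        if not occluded(hit_point, sample):
            visible += 1
    return light_intensity * visible / samples   # fraction of the light that is seen
```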
Raytracing is rather fun because you can get really interesting results with relatively little code and it's very visual - you actually see your bugs. Some people do simple ray tracers as katas. I did a basic one as a way of learning Scala. If you're interested in having a look it's here (very simple of course - I do treat my light sources as points): https://github.com/kybernetikos/ScalaTrace/wiki/ScalaTrace
Isn't that exactly what a GPU is? They aren't terribly fantastic at traditional computation, but then again we're talking parallelism here.
Because ray tracing scales with the log2 of the number of primitives, while rasterization scales linearly, it appears that highly complex scenes should render faster with ray tracing. But it turns out that the constant factors are so different that no dataset that fits in memory actually crosses the time-order threshold.
I'm a Unix programmer but I suspect I could learn as much from him about software design as from weeks of talking with just about anyone else.