Getting nearer to photorealistic graphics isn't just about doing "more of the same, faster" (Moore's Law); it's increasingly about "division of labor" across many subcomponents.
I like this breakdown of the rendering passes for Battlefield 4 (there are 54 passes):
(Frostbite is DICE's engine; it started as the backbone of the Battlefield franchise and is now becoming the engine for EA, sharing tech across many games: Battlefield, Dragon Age, Mass Effect, Need for Speed, Star Wars: Battlefront, Mirror's Edge.)
See e.g. here how close the Frostbite engine can get to photo sources today (the latest Need for Speed game):
Honestly, these days I'd much rather play a game with good style and direction than one perfectly replicating reality. It's impressive and will continue to get better for a while, but I like my art a little less like a photo :)
(I'm a 3D artist myself, and I have to deal with the tradeoff between quality and time on a daily basis.)
I am a software engineer, and I think it's black magic. Probably inside your Nvidia chip there is an Ouija board summoning spirits to conjure images onto your screen. Why else would cards have names like Voodoo?
I had to dump the rendered frames off onto the video playback server every 4 hours for about a week over Christmas 1996, because the local disks would be full. That was a weird week!
I got on with Martin, and I particularly enjoyed the sequence flying around inside his psychedelic head. I made the animated textures and mapped them with C++, and then 3DS did the rendering.
The walking tiger is from a plastic toy. I took a scan on an A4 flatbed scanner, rotated the toy, took another scan, and repeated until I had a full rotation. I then turned the outlines into vectors with the same number of vertices and made those into a triangulated solid.
The three minutes of video took about three months of very long days.
You can poke around D3D11 and OGL games, though I haven't tried the OGL functionality. Next-gen API support is coming as well.
If you've poked around other engines, either via debuggers or actual engine development, you run into a lot of the same concepts for piecing a frame together. The interesting parts are in the details of those sub-passes, which is what the article's author analyzes.
Practical application of cutting edge graphics tech to both make a beautiful image and create an immersive world is so compelling to me. I also really like the tight visual/audio iteration loop games provide for day to day work. It's incredibly rewarding to me to conceptualize how to solve a problem and have a playable prototype in the next few hours that a real player can test.
I can imagine myself working in various fields, but I'm always drawn back to games, for better or worse. While as a developer you're just as likely to incite gamer rage with some small mishap, there is also the opportunity to make a piece of entertainment people love and remember. That's the goal I keep striving for in my work.
I think this is a better answer for why they use checkerboarding: http://gamedev.stackexchange.com/questions/47844/why-are-som...
Summary: alpha blending doesn't work in deferred shading, so they use screen-door transparency instead.
In case it still isn't clear after reading both descriptions: that method, and its slightly weird look, comes from using transparency via dithering/checkerboarding for LOD transitions in a deferred rendering pipeline.
When transitioning between levels of detail, you need to render both the higher-quality and lower-quality models at the same time and blend between them, so that there are no sudden visual discontinuities.
In a deferred pipeline, transparency is tricky and costly, so dithering is a clever workaround: it basically pierces holes in opaque solid models.
This would look quite ugly on regular transparent surfaces like glass, windshields, or water, so those are rendered in a different way (usually in a forward pass).
But for LOD transitions, which are short in duration, it can look quite acceptable (imagine superimposed images of two versions of one tree, one low-poly and one high-poly; the further away you are from the tree, the more pixels of the low-poly version you see).
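A minimal sketch of how such a screen-door cross-fade can decide per pixel which model gets to write. This is illustrative, not engine code; I'm assuming a 4x4 Bayer matrix as the dither pattern, which is a common choice for this technique:

```python
# Hypothetical sketch of screen-door (dithered) transparency for an
# LOD cross-fade. 4x4 Bayer matrix, values 0..15.
BAYER_4X4 = [
    [0, 8, 2, 10],
    [12, 4, 14, 6],
    [3, 11, 1, 9],
    [15, 7, 13, 5],
]

def keep_pixel(x, y, fade):
    """Return True if the fading-in model should write this pixel.

    fade = 0.0 -> no pixels kept, fade = 1.0 -> all pixels kept.
    The fading-out model uses (1.0 - fade), so between the two
    models every pixel is covered exactly once, with no alpha
    blending needed -- which is why it works in a deferred pipeline.
    """
    threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0
    return fade > threshold
```

At fade = 0.5 this keeps exactly half the pixels of a 4x4 tile, which is the checkerboard-ish pattern you can spot during LOD pops.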
It gives me a new appreciation for games and for the amount of work and creativity that goes into the final experience for the player.
What would you like to know? I have some time, so if anyone was hoping to learn more about this, ask me.
What should I read next?
Actionable advice: set up a program so that you have an array of pixels, say 512x512. Now write a while loop that randomizes those pixels. Finally, figure out some way to "create a window and put those pixels on the screen."
You now have a game loop. This is roughly how every game works at a basic level.
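A minimal sketch of that loop (the names and the 24-bit pixel format are my own illustrative choices; the actual "put pixels on screen" step is left as a comment, since it depends on whatever windowing library you pick):

```python
import random

WIDTH, HEIGHT = 512, 512

def make_buffer():
    # One 24-bit RGB pixel per entry, stored as an int 0x00RRGGBB.
    return [0] * (WIDTH * HEIGHT)

def update(pixels, rng):
    # "Randomize those pixels" -- the simplest possible frame update.
    for i in range(len(pixels)):
        pixels[i] = rng.randrange(0, 0xFFFFFF + 1)

def game_loop(frames=3):
    rng = random.Random(42)
    pixels = make_buffer()
    for _ in range(frames):
        update(pixels, rng)   # 1. update state
        # present(pixels)     # 2. blit to a window (SDL, pygame, ...)
    return pixels

buf = game_loop()
```

Update, present, repeat: that two-step loop really is the skeleton of every game.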
The next step is to realize that a triangle is the fundamental way to draw shapes quickly. Want to draw a square? That's two triangles. Want to draw a humanoid? Sounds complicated, but artists approximate humanoids with a combination of spheres, cylinders, cubes, etc. And all of those can be divided up into triangles.
A triangle is therefore the first place to start in understanding graphics in general. What's the goal? "Create a data structure to represent a triangle. Now try to draw a white triangle into your little pixel array."
It's going to be tough. But tough work is good. I remember how difficult it was for me to even get a basic white triangle up on the screen. But the rewards are worthwhile, because at this point a lot of other things will start to click. A "vertex buffer" will no longer be mysterious, for example, because you'll immediately see that your triangle structure (whatever you came up with) was really just "a vertex buffer with three vertices." And then you'll start to wonder why graphics programmers came up with such complicated words to describe such simple concepts...
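As a sketch of what that first white triangle might look like, here is one common approach, the edge-function (half-space) test; the tiny 8x8 buffer and the vertex layout are illustrative assumptions, not the only way to do it:

```python
# Minimal software rasterizer sketch: draw one white triangle into a
# flat pixel array using edge functions.

WIDTH, HEIGHT = 8, 8

def edge(ax, ay, bx, by, px, py):
    # Signed area of the parallelogram (a->b, a->p):
    # positive when p is on one side of the edge, negative on the other.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def draw_triangle(pixels, v0, v1, v2, color=0xFFFFFF):
    for y in range(HEIGHT):
        for x in range(WIDTH):
            px, py = x + 0.5, y + 0.5  # sample at pixel centers
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            # Inside the triangle if all three edge functions agree in sign
            # (handles either winding order).
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                pixels[y * WIDTH + x] = color

pixels = [0] * (WIDTH * HEIGHT)
draw_triangle(pixels, (0, 0), (8, 0), (0, 8))
```

Your triangle structure here is just three (x, y) tuples, which is exactly why "a vertex buffer with three vertices" stops sounding mysterious once you've done this.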
At this point, you'll have the ability to go in one of two directions: "Notch" or "Carmack." It depends entirely on what you find interesting. If you like the idea of making games, concentrate on creating Pong. (You have everything you need, because you just got a triangle up on the screen, after all.) If you like more along the lines of what the article talks about, then concentrate on creating a software rasterizer. "Software rasterizer" is another of those terms that sounds scary but turns out to be way easier than you'd expect. It's hard in the same way that learning to ride a bike is hard: it'll take a while, but you'll never forget it. After that, nothing else you ever do (in graphics) will ever seem even slightly mysterious. I have some thoughts on how to learn the latter, if anyone's interested.
In general, the reason is that these were not the first concepts that they came up with. Often they started out with even simpler concepts that didn't even need names -- but it eventually turned out that these were too simple to offer good performance, or did not map adequately onto evolving hardware.
The original way to draw a triangle in OpenGL didn't involve vertex buffers at all. Basically you just said "OpenGL, draw me some triangles":
glBegin(GL_TRIANGLES);
glVertex3f(0, 0, 1);
glVertex3f(1, 1, 1);
glVertex3f(0, 1, 1);
glEnd();
But when we got programmable GPU hardware that could execute whole programs separately from the CPU, these old simple ways became a tremendous performance bottleneck and an obstacle to implementing more advanced rendering algorithms. On the desktop, all the old OpenGL APIs still work, but they were removed entirely from the mobile edition (OpenGL ES).
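To make the contrast concrete, here is a sketch of the buffer-oriented equivalent: the same three vertices packed into one block of bytes that gets handed to the GPU in a single call, instead of one function call per vertex. The actual GL upload appears only as a comment; the packing itself is shown in Python:

```python
import struct

# Same three vertices as the immediate-mode example, laid out as
# tightly packed 32-bit floats: x, y, z per vertex.
vertices = [
    0.0, 0.0, 1.0,
    1.0, 1.0, 1.0,
    0.0, 1.0, 1.0,
]

# The raw bytes a driver would copy into GPU memory.
buffer_data = struct.pack(f"{len(vertices)}f", *vertices)

# With modern OpenGL, this buffer is uploaded once:
#   glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);
# and drawn with a single glDrawArrays(GL_TRIANGLES, 0, 3) call.
```

The key shift is that the vertices become data the GPU can chew through on its own, rather than a stream of CPU-side function calls.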
This domain is unique in that it's not a good idea to read.
Sure, learn by doing; but there are great resources to help you understand far, far faster than you ever will by experimenting alone. If nothing else, there is good code to read, and classic texts.
For example, Carmack was able to "invent" BSP rendering partly because he was (as far as I've heard) an avid reader of medical journals, specifically journals and papers about the graphics techniques used in that field at the time. Medicine turns out to be a lucrative field for an ambitious graphics programmer, because it's often at the frontier of what's currently possible. So apparently BSP trees were used to accelerate medical renderings, and Carmack saw their potential for realtime graphics. The only reason he was able to do that was by reading pretty much every possible thing.
None of that will help you unless you force yourself to do and not read, though.
This is just not true in either of its claims. It's not even useful hyperbole, really, it's just wrong.
Graphics programming is exactly like other domains of technical development: you will learn best by a combination of reading good summaries and examples of what is known, doing work on your own (not cutting corners), and talking to people who know more than you do.
The best I can say is that my career began from that method. And by asking a lot of questions on IRC.
The supporting media and presentation are as exceptional as the bespoke game engine (RAGE) used in the GTA series.
This series of blog posts is excellent.
Why would the engine do this? Is it to try to give a 'cinematic' feel to the game? Do our eyes suffer from chromatic aberration?
>The tech that built an empire: how Rockstar created the world of GTA 5 featuring an interview of Aaron Garbut. http://www.techradar.com/news/gaming/the-tech-that-built-an-...
>GTA V NVIDIA Performance Guide with details about the different graphics settings. http://www.geforce.com/whats-new/guides/grand-theft-auto-v-p...
>Renderdoc, which made poking into GTA V internals a breeze. https://github.com/baldurk/renderdoc
It really makes me appreciate some of the better-optimized games; it sounds like solving a very difficult puzzle.
> ... it’s a real technical prowess.
> ... it’s a real display of technical prowess.
> ... it’s a real technical achievement.