
Games Look Bad: HDR and Tone Mapping - megaman22
https://ventspace.wordpress.com/2017/10/20/games-look-bad-part-1-hdr-and-tone-mapping/
======
NathanKP
A lot of the oversaturation problem can be attributed to bad displays. Think
about early Instagram. Everyone used terrible, super-saturated filters on
their photos because early smartphone screens were so bad and washed out so
easily in sunlight. By overcompensating with filters you got a photo that
looked good on a bad screen, even in sunny conditions.

The same was true of early games. Gamers loved the bright colors of World of
Warcraft, for instance, because displays were bad and it was easier to watch
bright colors for hours on end while gaming. Even today my modern TV looks
pretty bad in the sunlight. As much as I love the muted colors of Breath of
the Wild, I have to admit that without closing my curtains it's really hard to
tell what's going on if a sunbeam is hitting the TV.

I think as gamers' screens get better we will start to see a transition away
from exaggerated brightness and a trend toward more realism, just like
Instagram has largely transitioned from a platform that overcompensated for
bad smartphone screens to a #nofilter style.

~~~
fulafel
WoW was released the same year as LCD monitor sales first surpassed CRTs
(2003). So in the early days of WoW, people actually got better colors than a
few years later when most people had dumped CRTs for the relatively poor
consumer LCD monitors of the day.

~~~
tomxor
I was just a kid, but as far as I remember the transition to LCD brought worse
colour, not better. To be clear, I'm not talking about early professional
colourists' LCD displays that were £5000 or whatever. Surely everyone is
familiar with those horrible TN panels in laptops... desktop LCDs used the
same tech once.

~~~
pilsetnieks
That's the point - that in the first years while CRTs were still quite
widespread, people had better displays than a few years later when many had
switched to terrible early LCD screens.

~~~
katastic
On the other hand, I cannot COUNT the number of headaches I got as a kid from
the TV and CRT monitors in our house.

It's physically painful to remember it.

I got a Compaq Portable (8088 clone) for a dollar years ago. It weighs a
"tiny" ~35 lbs. ("portable"!) I got it to boot up after some problems with the
floppy drives. It was so cool to see little BASIC programs displaying silly,
fancy, tech-demo graphics. But the whine... oh the whine... I'm half tempted
to replace the built-in screen with a small LCD and just disconnect the blue
and red wires to keep it solid green.

Sidenote: I still miss the warm glow of an amber screen though. There's
something magical about it.

------
vinistois
This article makes me appreciate the balance struck in GTA.

IMHO this type of scene is better than any of the examples shown in the
article:

[http://prod.hosted.cloud.rockstargames.com/ugc/gta5photo/YjB...](http://prod.hosted.cloud.rockstargames.com/ugc/gta5photo/YjB4SamD3UmQ1HF7Qf1bsg_0_0.jpg)

~~~
ttoinou
This picture is from a video game?!

~~~
wandererer
A more than four-year-old videogame, yes.

~~~
_frog
On last-generation hardware too, though the screenshot in question might be
taken from one of the ports to more modern systems.

~~~
driverdan
It's from a computer, not a console.

------
chrisseaton
> We would need 20 bits of just luminance to represent [the range 1,000,000:1]

I don't understand this statement. Why do you need a certain number of bits to
represent a large range? You can encode a range using any number of bits you
want, but if you use fewer bits the resolution within may be lower.

Representing a large luminance range isn't a problem of resolution though is
it? It's a problem of presenting that contrast ratio on a screen that is not
capable of a contrast ratio of 1,000,000:1. If I represented the same
luminance range with a thousand bits it doesn't do anything to solve the core
problem, does it? So why does the number of bits matter?

Not a graphics programmer, so genuinely asking, not saying the article is
wrong!

~~~
sandofsky
The larger the dynamic range, the more bits you need to represent the possible
values in between.

Imagine a gradient that smoothly transitions from black to white. 8 bit color
means only 256 shades of luminance, which would quickly show banding
artifacts.

In fact, the entire reason we gamma encode images is that 256 values is too
few for even a low dynamic range monitor. Gamma encoding is a clever technique
to spend more of those values on the shadows, which humans are more sensitive
to than highlights.
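
A quick way to see that (a rough Python sketch; it uses the standard sRGB
transfer function as the "gamma" curve):

    # Compare how finely 8-bit linear vs. 8-bit sRGB-encoded values can
    # represent dark tones. Illustrative sketch only.

    def srgb_decode(encoded):
        # Standard sRGB transfer function, encoded [0,1] -> linear [0,1].
        if encoded <= 0.04045:
            return encoded / 12.92
        return ((encoded + 0.055) / 1.055) ** 2.4

    # Smallest step above black each encoding can represent, in linear light:
    linear_step = 1 / 255             # code value 1 in linear 8-bit
    srgb_step = srgb_decode(1 / 255)  # code value 1 in sRGB-encoded 8-bit

    print(f"linear 8-bit first step: {linear_step:.6f}")  # ~0.0039
    print(f"sRGB 8-bit first step:   {srgb_step:.6f}")    # ~0.0003, ~13x finer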

~~~
chrisseaton
Right, but the contrast range 1:1,000,000 - that's a real range isn't it? Not
an integral one? So why does it ideally need to be represented with a
particular quantisation of 1,000,000 different states? I mean there's not
1,000,000 different values between the real numbers 1 and 1,000,000 are there?
There's an infinite number. There's only 1,000,000 values (20 bits) if you
decide to quantise on each integer. Why do you need to do that? Why not more?
Or less? What's special about the integral values on the number line?

~~~
ttoinou
Because 1 defines the lowest value you want to see. 20 bits is the minimum
number of bits you need (in linear light) if you don't want noise in the
darkest parts of your image.
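
(A two-line sanity check of that number, sketched in Python:)

    import math
    # Bits needed so that, in linear light, the darkest level "1" is still a
    # whole code step when the brightest level is 1,000,000.
    print(math.ceil(math.log2(1_000_000)))  # 20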

~~~
chrisseaton
There are an infinite number of luminance levels between a contrast of 1 to
1,000,000, right? You will need 20 bits if a human can distinguish 1,000,000
levels between those two extremes. Is that the case? Maybe a human can
distinguish 2,000,000 levels between 1 and 1,000,000, in which case you would
need 21 bits.

You can't take a real range and give a number of bits needed to represent it
without also stating what resolution you need. And I think it's unlikely that
the resolution that a human can perceive is coincidentally the same as the
factor of one extreme to the other.

~~~
Promit
This was certainly not meant to be a crucial piece of information, but sure,
let's get into it.

Much of the post comes from a general assumption that the goal of computer
graphics is primarily to replicate how a camera sees the world around it. Thus
I think it's easiest to start from the idea of real world light entering a
digital image sensor. Light in this setting is not continuous! Each subpixel
in an image sensor acts as a photon counter. One photon hits the sensor, the
count ticks up by one. There's no question of being able to perceive the
values between 1 and 2 because they don't even exist. Either the sensor
counted one photon or two. If you were going to literally create a digital
camera that can process the entire world, you need 20 bits to count up to a
million photons without losing any along the way. So if you were to build the
hypothetical rendering pipeline that works on "real world" data about the
scene, that 20 bit value would be the input.

As a practical matter, nearly all modern games store lighting levels
internally in floating point, in arbitrary units chosen by the developers.
Lighting pipelines are not integer based, but they're linear and not
perceptual. The conversion to perceptual 8 bit (gamma curve) happens as part
of the tone map stage. Doing things in floating point physical units is a
better idea than the photon counter anyway, but the line you're confused about
was really written with idealized cameras in mind.

( _Technically an image sensor is an analog device and the voltage increases
with each photon detection by an increment that is subject to noise of all
sorts and pre-amplification before hitting the ADC. Don't jump me on the
photon counter thing._)
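
For concreteness, here is a minimal sketch (in Python rather than shader code)
of that final stage; the Reinhard operator and the 2.2 gamma are just stand-in
choices, not what any particular engine does:

    def tonemap_to_8bit(linear_hdr, exposure=1.0):
        # Map a linear HDR luminance value (arbitrary units) to an 8-bit code.
        # Real engines pick their own operator (Reinhard, Hable, ACES RRT, ...)
        # and run it per channel on the GPU.
        v = linear_hdr * exposure
        v = v / (1.0 + v)        # Reinhard: squashes [0, inf) into [0, 1)
        v = v ** (1.0 / 2.2)     # perceptual (gamma) encoding
        return round(v * 255)    # quantize to 8 bits

    for lum in (0.01, 0.5, 1.0, 10.0, 1000.0):
        print(lum, tonemap_to_8bit(lum))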

~~~
tripzilch
> Technically an image sensor is an analog device and the voltage increases
> with each photon detection by an increment that is subject to noise of all
> sorts and pre-amplification before hitting ADC. Don't jump me on the photon
> counter thing.

But it's rather important, and not even for those reasons. This is not meant
to jump on you! (and I really loved your article, like you said it's not a
crucial point at all)

The 1:1000000 or 1:2^20 contrast ratio only corresponds to exactly 20 bits if
the 1 on the low end of this ratio corresponds to exactly one discrete unit of
light (photon). If it's off by a factor of 0.5, 1.618 or whatever, that's what
the whole argument is about.

First, the sensor counts not photons but a value relating to _photons per
second_ (because exposure time). If the 1 on the low end of the range
corresponds to some exact minimum number of photons, it's going to be "one
discrete unit of light per <exposure time>". Making the whole thing analog
from the start.

Second, those sensors most probably aren't able to count individual photons
anyway [0]. The human eye, after about 30 minutes to get optimally adjusted to
darkness, can _sort of_ perceive individual photons, or small bunches of maybe
2 or 3, kind of. Those barely-perceptible specks of light in the utter
darkness aren't the sort of resolution issue we're worrying about in the dark
end of these types of scenes. And as soon as you make a light source that can
be described as "emitting single photons" in a certain context, you get
uncertainty effects and all that quantum jazz (show me a photon/path tracer
renderer that gets the slit experiment correct [1] and you can have your
integer photon counters :) ).

So the sensor output values can (and should), for all intents and purposes, be
assumed to be an analogue value.

The number of bits you represent it with just puts an upper bound on your
signal-to-noise ratio (as per Shannon entropy). But since we're dealing with
2D images, the distribution of this noise over the spatial resolution (either
as a result of sensor noise at the input or explicit noise-shaping dithering
at the end of the pipeline) also comes into play when considering the quality
of the output.

If the signal-to-noise ratio of a sensor output happens to allow for 20 bits
of precision, for a sensor that happens to have a 1:2^20 brightness range,
that's coincidental. Sure it correlates because higher-end sensors tend to
perform better in both range and SNR. But I don't believe that the 1 on the
very low end of a discrete range represents precisely one photon per <power-
of-two times minimum exposure time>.

[0] correct me if I'm wrong about this btw. There might be specialized
scientific equipment that can, but I doubt even high-end cameras bother to go
to the accuracy of single photons. But, I mostly know about digital signal
processing, not about state of the art of camera hardware. Yet even if they
are able to detect individual photons, that's going to be a probabilistic and
per-unit-of-time measurement, so the rest of my argument holds.

[1] these probably exist, but aren't used for games or photorealistic
rendering purposes

------
ndh2
I feel like this post does nothing in terms of educating readers. It all
sounds very interesting, but I didn't learn anything. It alternates between
putting the blame on technical limitations and ignorance, so I would have
liked to see a suggestion on how to address either of these.

~~~
aaron-lebo
It's a well-written and informative post; it deserves a little more praise
than "does nothing in terms of educating readers".

What he is discussing is aesthetics. There are two things at play here:
the big studio business model relies on making blockbusters, so you pack
together a few hundred devs and artists and try to pop out the biggest game
you can. The other is that many devs are technically trained but not
especially artistically educated. They have a limited understanding of
lighting, color, framing a scene, etc.

Someone who has made some great criticisms of this as well as provided real
solutions is Eskil Steenberg.

~~~
xigency
The article isn't very concise and is actually quite repetitive; it uses a
bunch of jargon (without any description) and shifts blame around.

The topics are interesting and the opinions are there, but it certainly could
be written better.

As a programmer who has done a small amount of work in graphics but hasn't
worked in HDR rendering, I have no idea what any of these are...

    film LUT
    color grading
    post process LUT color grade

...but I could write the linked-to shader. So who is he helping?

> It alternates between putting the blame on technical limitations and
> ignorance, so I would have liked to see a suggestion on how to address
> either of these.

I second this.

~~~
jacobolus
CLUT = color look-up table

color grading = adjusting the color in an image, as you might do in Photoshop
or whatever

post process = any image adjustment that happens after the image is originally
captured

This author is assuming a basic background in lighting / human vision / film
technology.

~~~
xigency
Yes, I understand the acronyms, but they are being used as jargon, not in an
instructive way.

If I were to talk about a web framework and say, "we've got our JSON files
wrong and our package configs are cluttered, and don't get me started on the
XML files," it's clear that it is metonymy [1].

Knowing that JSON is JavaScript Object Notation, the package.json is used for
Node Package Manager, and that XML is a file format doesn't capture the
feeling that a developer might have reading that sentence, and it doesn't
instruct.

[1]
[https://en.wikipedia.org/wiki/Metonymy](https://en.wikipedia.org/wiki/Metonymy)

From the article:

>>> It was Reinhard in years past, then it was Hable, now it’s ACES RRT. And
it’s stop #1 on the train of Why does every game this year look exactly the
goddamn same?

If the problem the author is facing is a gap between art skills and technical
skills, then the post could be less angry and more informative about color
theory or rendering technology, maybe bridging the needed skills.

~~~
blattimwind
The title of the blog is literally "vent[ing] space".

~~~
xigency
Sure. I feel like this discussion has drifted away from the original comment
and reply. Great vent, mediocre article.

------
imcrs
Art style is far, far more important than graphical capability.

A game like Team Fortress 2, released in 2007 (!) looks so much better than
many modern games because there is a coherence and style to the art. It's not
"HD" for the sake of "HD."

We're in a period where the graphical canvas is getting larger every year, and
the temptation is to fill it with as much color and pop as possible. But some
restraint really works wonders.

~~~
theandrewbailey
> We're in a period where the graphical canvas is getting larger every year,
> and the temptation is to fill it with as much color and pop as possible.

It's been a while, but last I looked, it seemed that the temptation was to use
ever muddier and desaturated visuals, with as much glare and shiny surfaces as
possible.

~~~
subwayclub
It comes and goes in phases according to the available technology. The first
Quake was all muddy brown and gray. Quake 2 and Unreal both introduced colored
lighting and looked like a disco on Saturday night. Mid-2000s games added a
lot of new lighting and post-processing effects but they had limitations
causing harsh shadows and highlights, hence another round of gray/brown
photorealism came through.

But in this decade things are finally feeling more evened out. Lighting models
are sophisticated enough to allow for designs similar to a film set or photo
shoot, and post-processing is getting past basic glows and filters and into a
spectrum of quality/performance tradeoffs.

Of course, the games that don't aim for photorealism always age better. That's
been the case since people started digitizing photographs for games.

------
rcthompson
Everyone commenting here that they don't care about photorealism in their
games is missing the point. The example games (and movie) given at the start
of the article lack an appearance of photorealism not because they were going
for something else, but because they either didn't pay enough attention to the
HDR rendering, or else they didn't have the expertise to do it right. The
article also gives examples of games that intentionally went with a non-
photorealistic look to achieve a specific effect. So the point is not "these
games look bad because they aren't photorealistic", it's "these games look bad
because their HDR rendering is bad".

I don't know enough myself about HDR rendering to have an opinion on whether
the author is right or wrong about any of this, but people arguing that the
author is wrong because games shouldn't look photorealistic are attacking a
straw man.

~~~
kllrnohj
The author doesn't appear to have bothered to investigate or honestly ask
_WHY_ games are the way they are, though. They are just blindly attacking an
entire industry on basically the claim that the entire industry is stupid.

But the industry isn't stupid, and some of the "over-contrasty garbage" is
intentional, to help people rapidly identify the things that they need to
identify. Being able to quickly spot the things you need in order to play the
game is more important than sensible tone maps and contrast curves.

This is not a medium you observe, like film and paintings are; it's a medium
you _interact with_, and rapidly at that. Approaching the visuals with the
same perspective as something you only observe is a fundamentally flawed
approach.

This article is packed with baseless assertions, unfounded claims, and offhand
mentions of technical limitations that are never investigated. Hell, right
after saying "partly due to technical limitations" the author proceeds to
throw out a "nobody in the game industry has an understanding of film
chemistry" snark. The author doesn't even bother to elaborate on the technical
limitations they remarked on, opting instead to just hurl insults for no
reason.

~~~
zamalek
I was hoping that the author would go into how to do tone mapping correctly.

> ask WHY games are the way they are, though.

To add to your point, the main reason for "bad HDR" in FPS games is simulating
eye adaptation. Screens don't have a 1,000,000:1 contrast ratio and can't
shine as bright as the sun (we wouldn't want them to). This means that games
are forced to approximate this effect, inevitably resulting in shortcomings.

Furthermore, game development is a race against time. You have milliseconds to
render the scene and apply tone mapping. It may very well be mathematically
possible to create automated tone mapping that would make a cinematographer
proud, but can you make an algorithm that completes in less than 1
millisecond? I doubt it.

To my point, there are mods on PC that replace the tone mapper (I think
ReShade does this) and they absolutely destroy your framerate. However, the
results are absolutely gorgeous if you know what you're doing.

~~~
pfranz
> To add to your point, the main reason for "bad HDR" in FPS games is
> simulating eye adaptation...

How is that different from what photographs do?

> Furthermore, game development is a race against time. You have milliseconds
> to render the scene and apply tone mapping.

Maybe I know just enough to be ignorant of the topic, but LUTs are just lookup
tables. People working in film apply LUTs to the footage they're watching in
realtime on normal PC hardware. Fragment shaders have been common in games for
years. Last year I was working on a VR project (targeting 11ms per frame) and
I think the post-processing took between 0.25 and 1.5ms--any more and it would
have been dropped. For a desktop game, having 16 or 30ms to render many fewer
pixels is an eternity, comparatively. The only difference between a LUT and an
ICC profile (which most OSes support out of the box) is that ICC profiles are
meant to be calibrated against either an input or an output--they're used
everywhere in realtime.
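
To illustrate how cheap that lookup is, a small Python sketch (the LUT here is
a made-up per-channel contrast curve; real grading LUTs are usually 3D, e.g.
33x33x33, but the per-pixel cost is still just a fetch plus interpolation):

    # Apply a 1D per-channel LUT to an 8-bit RGB pixel.
    def make_contrast_lut(strength=0.2):
        # Bake a gentle S-curve into 256 entries (hypothetical example curve).
        lut = []
        for i in range(256):
            x = i / 255
            y = x + strength * (x - 0.5) * (1 - abs(2 * x - 1))
            lut.append(max(0, min(255, round(y * 255))))
        return lut

    LUT = make_contrast_lut()

    def grade_pixel(r, g, b):
        # The whole "color grade" is three table lookups per pixel.
        return LUT[r], LUT[g], LUT[b]

    print(grade_pixel(30, 128, 220))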

I've never actually used ReShade, so I had to look it up. It does way more
than color correcting. Depth of field, ambient occlusion, and antialiasing are
completely separate from tone mapping and notoriously computationally
expensive (not just in video games, but in film as well). I also think that
since third parties are writing them, they're not always tuned to be
performant.

~~~
zamalek
> How is that different from what photographs do?

A typically developed HDR photo is a single sample, representing the moment
after the eye has adapted. There are [hopefully] no over- or under-exposed
areas.

The dynamic range in HDR games is often used to simulate the entire process of
eye adaptation. It's a function over time that intentionally creates over- or
under-exposure some of the time.

~~~
pfranz
I still don't see how animating the exposure explains the "bad HDR" in games
(high contrast and oversaturation). I haven't implemented it or read any
papers on the subject, but the games I've played don't look that much
different from auto-exposure on camcorders in the 90s (or your phone today).
For actual cameras it's more difficult because you need to guess and adjust
the aperture or shutter instead of having an HDR image in a buffer and
deciding how to generate an LDR image to present. Calculating the exposure
(especially if you're animating it and only willing to go so far in one
direction or the other) should be fairly quick.
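
A sketch of the kind of calculation I mean (Python; the log-average luminance
and the exponential smoothing are common textbook choices, not necessarily
what any particular engine uses):

    import math

    def log_average_luminance(luminances, eps=1e-4):
        # Geometric mean of scene luminance, a common basis for auto-exposure.
        total = sum(math.log(eps + l) for l in luminances)
        return math.exp(total / len(luminances))

    def adapt_exposure(current, scene_luminances, key=0.18, speed=0.05):
        # Pick an exposure that maps the scene average to a mid-grey "key",
        # then move toward it gradually to mimic the eye adapting over time.
        target = key / log_average_luminance(scene_luminances)
        return current + (target - current) * speed

    # Walking from a dark room (average ~0.05) into sunlight (average ~5.0):
    exposure = 0.18 / 0.05
    for frame in range(5):
        exposure = adapt_exposure(exposure, [5.0] * 100)
        print(f"frame {frame}: exposure {exposure:.3f}")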

------
Raphmedia
I've always said (as a joke) that all game developers must have bad eyesight
and dirty glasses.

Everything is blurry, you can't see far, you get dirt over your vision and
there are glares and lens-flares all over.

What's worse is that this is done on purpose, and in almost all games. They
must think that this is what real life looks like.

~~~
turc1656
I rarely play games but the most recent game I played was Alien: Isolation. It
had been quite a while since I played a game before that so I was surprised to
see a lot of these things on by default. Specifically, chromatic aberration.
Why the hell would anyone add in a setting for that? I get that they were
going for the Super8/VHS type of look but it's just really distracting. I
forgot what the other settings were but I turned off a lot of those junk
visual effects and then things looked beautiful.

~~~
potatolicious
I also can't stand chromatic aberration in games - especially because it's so
freakishly exaggerated compared to actual CA you would experience with old
video/photo gear.

A lens that completely separates color channels is defective. It was defective
in 1950 and it's defective today, so unless your goal was to replicate the
look of hopelessly broken hardware, it's in no way "vintage".

Which I think goes more generally to my issue with game graphics: taking
interesting effects (depth of field, chromatic aberration, lens flares, etc.)
and cranking them up to 11 for their own sake.

------
jasonlotito
From part 0, they make the claim and question:

> So why the heck do most games look so bad?

And then they go on to pick a game that people have specifically been calling
out for looking amazing: HZD.

In fact, the problem seems to be:

> But all of them feel videogamey and none of them would pass for a film or a
> photograph.

To which I would reply: because they are video games. A photograph doesn't
pass as film, and film doesn't pass as a photograph (though by their nature
the two are related, film essentially being a lot of photographs). A video
game must serve a different purpose than a photograph or a film. The players
interact with the game, and therefore the intent and purpose is different.
Glares on the screen aren't just there to add lens flare, but to make it
harder to see, because that's part of the challenge. It is harder to see, and
you have to deal with that to overcome the challenge. That can be offset by
keeping the sun to your back. Same with grime on the screen, or other such
things. You want to highlight the things players need to react to, and in
such a way that allows them to react quickly.

There is so much more, and yet so far this article is only referencing the
pure look and seems to ignore the goals of the medium. I'll reserve judgement
until I read the next parts of the article, but until then I can't help but
wonder if this really matters.

~~~
HelloNurse
Most games attempt to look realistic, and they fail. They definitely _try_ to
look like film, so comparing them to film is perfectly fair: it is an
impossible standard that masochist, mannerist or hopeless game art directors
have freely chosen for their projects and by which they deserve and _demand_
to be judged.

Failure at realistic game graphics is so common, so taken for granted, that
the scale of artistic accomplishment shifts downwards: if HZD looks only
slightly bad, not constantly, and without disgusting the player too much, it's
considered to have "amazing" graphics.

------
Kiro
> But all of them feel videogamey and none of them would pass for a film or a
> photograph.

Until graphics and animation are photorealistic, I prefer my games to look
videogamey. All four of those screenshots are beautiful and I wouldn't want
them any other way.

------
AcerbicZero
I'm a fairly avid gamer, but there is a reason why my top 10 games are all
fairly mediocre in the graphics department. High-end AAA games with "Great"
graphics usually lack depth in their mechanics, and while this isn't always
true, it happens enough to keep me from enjoying them.

There is a reason why Skyrim, Mount&Blade, GTAV, etc are still dominating
playtime charts years after release. Good enough graphics + deep, flexible
mechanics is a much better combination in my opinion.

~~~
megaman22
> There is a reason why Skyrim, Mount&Blade, GTAV, etc are still dominating
> playtime charts years after release. Good enough graphics + deep, flexible
> mechanics is a much better combination in my opinion.

Those are all sandbox games that you effectively never stop playing. People
tell me that there's a story and an endgame in Skyrim, but I've played a few
hundred hours without doing any main-line quests past the first city.
Mount&Blade I've sunk even more into - there isn't even the pretense of a
story there, so you're pretty much on your own if you want to role-play, or
just take in the enjoyment of riding around, hacking at people, and looting
their corpses.

~~~
AcerbicZero
Those were just 3 of the more mainstream games with semi-limited graphics I
could think of, but you're right, they do share a sandbox style. Overwatch
would be another example of a game that combined "good enough" graphics with
much more refined mechanics (depending on who you ask, this last patch... eh)
to achieve success.

Anyway, the point I was going for is that the last thing we need is them
spending even more effort trying to make pretty-looking but mediocre-playing
games.

------
taw55
While on the subject of graphics in games...

Another pet peeve of mine: ambient occlusion (crude GI approximation) and
especially SSAO (crude approximation of a crude approximation).

Corners don’t look like that:
[http://nothings.org/gamedev/ssao/](http://nothings.org/gamedev/ssao/)

(Ambient occlusion proper and in moderation I think looks fine, but most games
really overdo it.)

~~~
sillysaurus3
Latch on to that idea -- corners don't look like that -- and follow it to its
logical conclusion: no one knows how to make any fully-simulated video look
indistinguishable from reality.

That's a huge leap, but it has the benefit of being true.

(To be precise: no one has ever created fully-simulated video capable of
fooling human observers anywhere close to 50% of the time. The video has to be
reasonably long (>30sec) and complex. But in a double-blind test, almost any
nature video will handily beat any synthetic video.)

~~~
sdwisely
> (To be precise: no one has ever created fully-simulated video capable of
> fooling human observers anywhere close to 50% of the time. The video has to
> be reasonably long (>30sec) and complex. But in a double-blind test, almost
> any nature video will handily beat any synthetic video.)

Out of curiosity, is this from something? It sounds like you're citing
something, and for realtime I'd agree. I think people are fooled every day by
raytraced images (movie CG, etc.).

> no one knows how to make any fully-simulated video look indistinguishable
> from reality.

I'd say we know how to do it, we don't know how to do it _fast_.

~~~
sillysaurus3
_I'd say we know how to do it, we don't know how to do it fast._

If all I do here today is hatch an egg of doubt in your head, I would be
delighted. Someone needs to carry the torch... My life has turned to other
interests.

It's the central problem. It's as hard as AGI, and it might be as impactful as
the invention of the airplane.

Think of it. Fully-simulated video, indistinguishable from reality.

In many ways, it was my first love. The desire to be a gamedev drove me to
learn programming. I ended up a graphics dev -- Carmack's old path. My job was
to make a certain game engine look better. What an innocent problem... 12
years later, I still feel the pull, the need to blow off everything in my life
and bend a computer to my will. To our will. The human mind has never once
achieved this goal. And it was the perfect problem... I never cared much about
fame or money. But to be _the first_. Think of it... How can you not want to
spend the rest of your life on this? The solution is out there, taunting us.
Everyone is pursuing physics, when all we need to do is pursue the fact that
video cameras can already generate images that look identical to real life.

The ancients have made up stories about the sky and stars since long before
civilization. Put yourself in their shoes, if they even had shoes.

Look up. It's the night sky, far brighter than anything we can see today.
Imagine staring up at the infinite complexity, wondering, how does it work?
Why do the stars go the way they go? We tell stories; could one of the stories
be right?

A few millennia later, one person had an idea: What if we watch the stars
very, very carefully, and collect the stories very carefully? We could compare
the movements of the stars to the stories, so that the alternative theories
might be distinguished from one another.

This was the key to modern science, and the root of wisdom. When you stop
thinking about what everyone else is doing, you're free to hit on solutions
that everyone else overlooked.

And think of the feeling you'd get when you finally solved it. Can you
imagine? You'd get the same rush as the Wright brothers, or Ford when he made
the assembly line, or McCarthy when he stumbled across Lisp.

If you or anyone else intends to take on this challenge, know this:

The fact that no one believes you when you say "No one has ever done this, and
no one has any idea how to do it," is your biggest advantage.

It means you're free to spend the next five years figuring out that solution
that everyone else missed because they were too busy chasing the pipe dream
that if you throw enough physics+time at a computer, it will produce synthetic
video that fools people into thinking it's real.

The moment people realize that it's probably not _that_ hard, you'll lose your
advantage, because every top tech company will start exploring your problem
space. Like if in 1902 you'd hinted to a top university the gist of Einstein's
thesis. No one would take you seriously. Lucky for you.

So, what's the secret technique? Well, if I knew that, I'd have fulfilled my
12-year dream. But I know a few things that will move you (12-N) years toward
the goal.

There is one rule, and one rule alone. You have to force yourself to stay true
to it, or else nothing else you do will matter. Here it is:

If you get a dozen people together, and show them a mix of 10 real videos and
10 simulated videos, and those videos are reasonably complex, like clips from
a nature documentary, then 12 out of 12 people will effortlessly call out your
fake videos as fake and your real videos as real. It's not even close. That's
how far away we are from the goal of fully-simulated video indistinguishable
from reality.

Maybe I've hooked you at this point. Maybe not. But if anyone comes up with a
way to fool those 12 people so completely that their responses are no better
than random chance, you win.

Let's call this the "Carmack criterion." If you tried administering the above
test to a dozen clones of Carmack, here's how they'd sound: "That's a fake.
That one's real. Fake. Fake. Real. Real." No matter how much ornament or
showmanship you throw into the video, you can't fool Carmack. He'll report
whatever his eyes are telling him.

And as of 2017, he'll be right 100% of the time. His eyes would shout: "None
of those fakes were even close to real! Are you kidding? One of the real
videos was of a lion taking down a gazelle. I know every artist in the gamedev
industry. None of them have ever produced anything approaching that level of
quality, even working together."

That, and that alone, is the game. Literally nothing else matters. If you can
fool people until their responses are statistically identical to RNG, you've
done it. You're world-famous. Yer a wizard.

Corollary: you can use the Carmack criterion like a compass for every decision
you face. Should you research physically-based rendering, or try to apply
machine learning? The latter seems unpromising. Yet Hollywood has been
administering the Carmack test to millions of people, most recently with
Avatar, which completely rules out physically-based rendering. So we know to
spend zero time on it.

As you can see, that razor is so sharp that it will cut away every illusion
you might try to cling to that humans are anywhere close to achieving the
Carmack criterion. Or that some smart hacker somewhere has a pretty good idea
of how to achieve it, or that it's just a matter of letting computers advance
another few decades, or any other false reason that those around you like to
tell themselves.

But if mainstream ideas are dead-ends, then what should we research?

I hesitate to give concrete suggestions, because the history of science
demonstrates that progress isn't made like that. Whatever the real solution
is, it's far beyond anything you or I can imagine today. People were forced by
mathematics to believe that planets' orbits were elliptical. An ellipse is the
only shape that makes the numbers come out right. Yet how many of our
ancestors came up with that idea? Even by accident, it's probably too bizarre
for anyone to seriously consider it. Not without mathematics.

Yet that's a positive statement: It meant that if someone _were_ audacious
enough to trust in mathematics alone, they _could_ determine the right answer.
The solution was always there, waiting for you to find it.

To make any progress at all, your ideas will need to seem shockingly
different. The whole world has spent two decades going over every inch of
physically-based rendering -- presumably hoping that if they put on a
different pair of glasses, maybe they'll spot anything other than a mountain
of evidence that it doesn't work.

So you _have_ to let yourself consider _every angle_, no matter how strange.

720 frames of 720p video. That's all you need. That's 30 seconds of HD
footage. Get a computer to conjure up those 720 frames. Summon DaVinci's
ghost, and you win.

Whenever someone finally solves this, you'll think "Oh, right. That technique
makes sense." But it only makes sense because you see it works. Till then,
that correct answer will seem to be a complete waste of time.

Think of Airbnb, and how awful their idea sounded. Yet when someone spent a
couple years exploring the problem space, shazam! Out popped a billion-dollar
company.

Since there is ~zero chance these ideas are anywhere close to the right
answer, here are the two avenues I left off with:

1. A video camera generates images that pass the Carmack criterion. Ask
yourself: why do those videos look real? And why is it so important to judge
video, not photos? (It's crucial.)

This is key: Are you absolutely certain you should be ignoring the fact that
any old camcorder's videos look real? Whip out your phone. Take a video. That
video looks real. Why? Quantify the difference vs footage of the latest game
engine.

(Try to avoid using the latest movies as a basis of comparison, because movies
mix real-life footage into their VFX. Our criterion of "fully-simulated video"
is strict by design: it keeps us honest about our progress. Especially to
ourselves.)

2. After meditating for a year on why crappy cellphone videos look real, you
may start thinking along the lines of "how can I write a program to mimic the
essence of that realism?" It looks real because the colors are _exactly_
right. Think of evolution, and how long we've been evolving. That whole time,
every single one of our ancestors was staring at images that they believed
were real. Our brains are wired to notice even a hint of strangeness. ("When
we notice there's something strange about a video, what exactly is going on
there? What do we mean by that?" is another "fun" question.)

Now, wouldn't it be handy if someone knew how to write a program that can
mimic real-life data? If only such a technique were possible... We even have
an infinite stream of pre-classified data to feed it: phones and webcams.

Hmmm. :)

~~~
sebastos
You've probably looked at this, but maybe it would help to avoid thinking "out
of the box", and try it more incrementally: create an extremely simple
cellphone video of an "easy" scene, like a teapot sitting on linoleum or
something. Then try to recreate it -perfectly- using computer graphics. Get it
to the level where you can literally compare the pixels for each frame.

Maybe that could bring you closer to understanding what the important factors
are that the current graphics pipeline can't do. Why is it hard to get the
pixels in the simulated video to be the same color as the cell phone ones? Are
the materials off? The shadows? If you can't even make the teapot look real,
then you've zeroed in on something fundamental that's still going to bite you
when you're busy trying to rig antelope skeletons.
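
A sketch of that comparison step (Python with NumPy; the frame arrays and any
threshold you pick are placeholders):

    import numpy as np

    def frame_rmse(real_frame, rendered_frame):
        # Root-mean-square error between two frames given as HxWx3 arrays.
        diff = real_frame.astype(np.float64) - rendered_frame.astype(np.float64)
        return float(np.sqrt(np.mean(diff ** 2)))

    # Hypothetical usage: `real` and `rendered` are lists of frames.
    # rmses = [frame_rmse(r, s) for r, s in zip(real, rendered)]
    # worst = max(range(len(rmses)), key=rmses.__getitem__)  # most divergent frame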

~~~
tripzilch
There's already one standard scene / benchmark like that, the Cornell Box:
[https://en.wikipedia.org/wiki/Cornell_box](https://en.wikipedia.org/wiki/Cornell_box)

Of course it's ridiculously simple, so you may want to raise the bar a little
to impress the GP :p

------
mstade
Compelling article, but I can’t help but feel the author hasn’t appreciated
some of these games on an HDR capable screen. They call out Battlefield 1 and
Horizon Zero Dawn, both of which look stunning in HDR mode on an LG OLED
screen. BF1 only got HDR mode implemented in a patch fairly recently, and
before then it was unplayable on an OLED with its deep blacks; it was a
contrast fest with no luminance range whatsoever. After the patch, however,
that all changed.

I’ve never played HZD in anything _but_ HDR mode (in fact, I bought the game
specifically to test HDR) so admittedly I don’t know the difference there. All
I know is in HDR mode, HZD is one of the prettiest games I’ve ever played,
with fantastic colors and brilliant transitions in luminance.

HDR I feel is one of those things you just can’t appreciate in screenshots or
comparison videos. You have to get a good screen (I can’t see myself coming
back from OLED, it’s – no pun intended – a game changer) capable of HDR and
just appreciate it for yourself, it’s really something.

I’m sure it’ll be even better as the technology matures, as game developers
become more familiar with it, and as the market penetration of HDR capable
screens increases, but to say they don’t know what they’re doing now seems a
bit of a stretch. Especially if all you have to go on are screenshots.

------
andrewguenther
Maybe I just play too many video games, but I don't think any of those
"terrible" screenshots look bad. If anything, the foggy image from Breath of
the Wild looks worse than any of them. What am I missing? What is supposed to
be bad about these?

~~~
djur
Some of it is just one's personal taste, but I would say that a practical
difference is that I find it difficult to pick out details in the images
called out as "bad". My eyes feel overwhelmed by the huge shifts in contrast
and it's hard to visually identify objects in the very dark areas and the very
shiny, bright areas.

I also get a very strange sense of depth from those images -- it feels like
everything is either very close or very far away, without having a sense of
how far apart objects actually are.

In comparison, in the BOTW screenshot, I can clearly identify objects of
interest (a shrine, a tower, a volcano, a river, a bridge) and have a sense of
where they are compared to each other and myself.

That said, I feel like it's not a completely fair comparison -- the "bad"
images all seem to have the camera in a dark area looking out and up into a
bright area, while the BOTW camera is on top of an exposed mountaintop looking
out into the sunset. The lighting is going to look a lot different regardless.
However, the HZD image after _that_ is really quite garish and harder to
justify.

------
bitwize
The first game to feature (pseudo) HDR was actually _Shadow of the Colossus_,
and it's still a beautiful game _despite_ its washed out colors, super high
contrast, and "that dynamic exposure adjustment effect nobody likes". It may
have been the template for most or all of the "bad" games in the article.

These effects are beautiful in _Colossus_ because they were chosen for
artistic reasons and they are artistically used. The lack of color saturation
complements the game's lush, but somehow bleak and foreboding, natural
environment. The extreme contrast highlights the differences between the dark
and dank indoor environments of the various ruins and temples, and the raw
untamed wilderness that lies outside them. And the dynamic exposure adjustment
effect is, vaguely, supposed to parallel the main character's eyes adjusting
as he moves from one kind of environment to another. (Either that or just "hey
look, we can do this on a PS2!")

It's kinda like how the parallax line-scrolling effects in _Shadow of the
Beast_ were amazing in that particular game at that particular time, but take
the same effect in a shitty game -- like, say, _Bubsy_ -- and it just looks
awful and tiresome.

------
pasta
Isn't this just a matter of taste? There are a lot of tone mapping algorithms
that everybody knows about. So it's just a matter of taste, choosing the one
you like.

Edit: As far as I know render engines use the complete dynamic range until the
last step which converts it to a 16 bit output range. So maybe games can let
the user decide which tone mapping algorithm to use.

~~~
pasta
*24 bit output range

------
gilbetron
The point of most video games is to be distinct, not to look like reality.
Furthermore, they use high-contrast to help distinguish game elements. Games
are not movies.

~~~
zanny
Came looking for this. Unnatural contrast is good in games because it can help
players distinguish different set pieces more easily.

This is also why a lot of people prefer older FPS titles, or why games like
TF2 used stylized character profiles that stand out. In some games you want
interpretation of the game scene to be a challenge, but in most games you want
players to easily tell what they are looking at. High contrast, distinct
models, distinct animations, and color differentiation all help to achieve
that.

~~~
cousin_it
Photos and film need to be readable too. I think gaming culture has poor taste
in art for historical reasons. Here are some games that were called "gorgeous":

[https://www.dualshockers.com/wp-
content/uploads/2016/06/COW-...](https://www.dualshockers.com/wp-
content/uploads/2016/06/COW-IW_E3_Ship-Assault-Zero-G-Combat_WM.jpg)

[http://assets1.ignimgs.com/2014/01/22/capture9png-e9804c.png](http://assets1.ignimgs.com/2014/01/22/capture9png-e9804c.png)

[http://www.gamesknit.com/wp-content/uploads/2015/09/dust-
an-...](http://www.gamesknit.com/wp-content/uploads/2015/09/dust-an-elysian-
tail-xbox-360-1345475788-020.jpg)

But there's a small minority of games that do look great:

[http://cdn3-www.playstationlifestyle.net/assets/uploads/gall...](http://cdn3-www.playstationlifestyle.net/assets/uploads/gallery/the-
order-1886-release-day-screens/the-order_-1886_20150215102558.jpg)

[https://www.gamegrin.com/assets/games/hollow-
knight/screensh...](https://www.gamegrin.com/assets/games/hollow-
knight/screenshots/hollow-knight-screenshot-0.jpg)

[https://cdn.arstechnica.net/wp-
content/uploads/sites/3/2015/...](https://cdn.arstechnica.net/wp-
content/uploads/sites/3/2015/08/cupheadcarrot.jpg)

~~~
PetitPrince
> Screenshot of Shovel Knight

I don't think it's fair to put Shovel Knight in your list. Yacht Club Games
tried really hard to recreate NES aesthetics and constraints (limited color
palette, sprites, etc.), with careful deviations only when really necessary
[1]. I don't think you'll find a more aesthetically pleasing game in the NES
library.

[1]
[https://www.gamasutra.com/blogs/DavidDAngelo/20140625/219383...](https://www.gamasutra.com/blogs/DavidDAngelo/20140625/219383/Breaking_the_NES_for_Shovel_Knight.php)

~~~
cousin_it
Yeah, NES games looked pretty bad. I can find only a couple screenshots that
don't make my eyes bleed:

[http://www.defunctgames.com/pic/review-batmannes-
big-1.jpg](http://www.defunctgames.com/pic/review-batmannes-big-1.jpg)

[https://www.myabandonware.com/media/screenshots/k/kirby-s-
ad...](https://www.myabandonware.com/media/screenshots/k/kirby-s-adventure-
epf/kirby-s-adventure_12.png)

Hence my mention of historical reasons that are holding game aesthetics back.
I think making good art requires criticizing your childhood tastes, not
following them.

------
jdright
I hate this idea that it should be photorealistic.

    > But all of them feel videogamey and none of them would pass for a film or a photograph.

Thanks; if they didn't look at least "videogamey" I wouldn't have any
interest. Stop this terrible trend of trying to be a film!

~~~
rcthompson
The point is that the lack of photorealism in the given examples is not
intentional, but happens because of a lack of proper attention to the HDR
aspects of rendering. The article also gives examples of games that
intentionally deviate from photorealism in order to achieve a specific effect.

~~~
jdright
HZD is intentionally the way it is, and it looks perfectly good on my non-HDR
TV. So he is missing the point.

------
theprotocol
Some pretty bold and controversial (and subjective) statements, but I think
the author makes a good case! Very interesting article.

~~~
kevlar1818
"ventspace" indeed, but I agree--it's an interesting perspective.

------
Bodell
Honestly, I'm tired of the race for games to just 'look' better. Most of the
time it seems as if it is a detriment to actual gameplay and frame rate.

I would definitely prefer games to look bad (or preferably stylized in a way
that works for the game) over having a somewhat realistic-looking game (only
when you're standing still so the frame rate doesn't dip) with terrible
mechanics and a bad story.

Far too much information and talk is about making games look better, and very
little about how to make games actually better. Modern gaming to me has become
the same as clicking the X on hundreds of popups on your Windows 98 machine,
just with somewhat realistic-looking ugly faces instead of X's.

------
ttoinou
Related to that, the colors of videos on the internet (and YouTube) are always
kinda ugly and (I think) not very faithful to what the color artists intended
to show...

I'm not sure, but I think one of the issues is the 16-235 channel remapping.

~~~
nts34nhteo90
It's probably more likely due to the fact that video codecs on YouTube tend to
have lower resolution in the color (chroma) channels, which leads to very bad
blocky artifacting around bright red and bright blue areas.

~~~
andrewf
I think everything else (Blu-Ray, DVD, cable TV) is using the same technique.

------
skybrian
It makes sense for artists to concentrate on the details of how games look,
and this article provides some good insight into that world.

But using stark moral terms for aesthetic judgements seems rather insular at
best. When I think about games like Minecraft, Factorio, RimWorld, and so on,
I wonder whether imitating movie-quality graphics is really as important as
the author thinks it is. It's only a small part of the experience.

~~~
fhood
Correct me if I am wrong, but I thought the author was protesting against the
misuse of techniques for imitating movie quality graphics. He held up BOTW for
not attempting such imitation.

------
HellDunkel
The author makes a point of Nintendo opting out of HDR but does not realize
that's due to hardware limits not allowing deferred shading. The HDR
techniques he does not like are all made possible by the use of deferred
shading. These techniques may sometimes be used to overly stylize the look,
but it is undisputed that they create more realistic images.

~~~
eropple
Promit Roy has been knowledgeable about graphics since before I was in
college. I kind of doubt he "doesn't realize" it. The dude is really, really
good.

And HDR doesn't really do anything to materially aid what he's calling out as
well done in the first place. It's ancillary. It's not important. What's
important is actual artistic intent and structure. That's what's lacking,
along with a misunderstanding of why certain techniques work in film and don't
when they are uncritically applied to games.

~~~
HellDunkel
I don't know what kind of HDR you refer to. My point was that for realistic or
physically based rendering, whatever you might call it, a linear workflow
(light calc in linearized space) is crucial. It is very important for the
artist too, if you aim at realism. Of course it brings challenges such as
exposure correction and managing dynamics. These challenges are not yet fully
solved, but to say we ought to abandon HDR and go back to sRGB is at least
questionable... we might as well go back to mode 13.
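
To make the "linear workflow" point concrete, a small Python sketch (standard
sRGB transfer functions; the 0.5 input is arbitrary): adding light in
gamma-encoded space over-brightens, while adding it in linear space and
re-encoding does not.

    def srgb_to_linear(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(c):
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

    # Two equal lights, each contributing an sRGB-encoded value of 0.5.
    encoded = 0.5

    naive = min(1.0, encoded + encoded)                 # 1.0: blown out to white
    linear_sum = min(1.0, 2 * srgb_to_linear(encoded))  # ~0.43 in linear light
    correct = linear_to_srgb(linear_sum)                # ~0.68 after re-encoding

    print(naive, round(correct, 3))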

------
pizza234
Does anybody know the technical reason why some games have dark areas turn
brighter (typically a squashed, dark green/blue) instead of darker?

I remember one game ("Through the woods") which had this problem in a very
aggressive form. It was very visible while playing, but it's hard to find a
screenshot.

Here, for example, the house on the left (but also the cabin in the center):

[http://cdn.akamai.steamstatic.com/steam/apps/368430/ss_74ec0...](http://cdn.akamai.steamstatic.com/steam/apps/368430/ss_74ec0ff9c9c29ba576d021b2332a3547800a35f2.600x338.jpg?t=1477404343)

What's the exact issue? Is it a tone mapping problem? I'm curious, because I
have a rather good monitor, and some dark games (eg. "Alien: Isolation") are
equally dark, but don't look as bad as this.

~~~
pasta
Looks like environment lighting to me.

I'm not sure this is the case in your example, but a lot of render engines use
an environment light when rays don't detect collision.

So when the camera is looking at the wall of the house, the rays bounce off
into oblivion and the environment values are used. And since the stones of the
house are gray, a lot of environment light bounces back to the camera. But
when the camera is looking at the trees, a lot of rays bounce into leaves and
the ground, returning almost no light back to the camera.

I would even say that your example looks realistic (although way too dark).

------
jdlyga
Half Life: The Lost Coast, this is all your fault!

------
coldcode
I play World of Tanks and have tried every one of their mappings (I forget
what they call them). Even the one I can tolerate makes the tanks look like
toys in a sandbox. It never feels like a real tank.

------
reiichiroh
HDR is the new "next-gen Brown!"

------
puzzlingcaptcha
As HDR panels used for TVs 'trickle down' to PC displays the HDR part of the
problem will solve itself. Perhaps then people will focus more on color
grading (although I suspect over-saturation will always be a staple in some
genres).

------
supernintendo
This can really be attributed to a general rule of thumb in game development:
aesthetics are more important than graphics.

------
lanius
I would guess this isn't a priority for the industry because gamers themselves
don't care. Case in point: gaming monitors. Gaming monitors advertise high
refresh rates and low input lag but rarely color quality. What percentage of
gamers use wide-gamut monitors or even perform color calibration?

~~~
ancientworldnow
FWIW, I'm a colorist and have very expensive color critical monitors and
calibration probes. For GUI monitors I've started recommending gaming panels
for suites that don't want to invest in professional graphics displays. Once
calibrated these monitors are surprisingly accurate and hold the calibration
well (at least as good as professional displays many times more expensive -
though still a far cry from color critical displays). The expanded refresh
rates are a nice bonus for eye relief as well.

------
mozumder
Some films are absolutely just as contrast-y as these HDR images. In
particular, Fujifilm RDP-III slide film. See:
[https://www.flickr.com/groups/provia100f/pool/](https://www.flickr.com/groups/provia100f/pool/)

But yes, Arri does have a beautiful soft look, along with Fuji digital
cameras, and plenty of film stocks do as well. It all depends on taste. Gamers
aren't going for soft reality. A lot of hard-edged gamers are going to go for
the high-contrast, desaturated ExTreMe ChaLLeNgE look.

------
hasenj
The Resident Evil screenshot looks incredibly realistic. I thought it was a
real photograph placed there to contrast reality with games, until I read the
paragraph below it.

------
gilrain
It's amazing how many commenters seem to be embarrassing themselves by trying
to be the first to wildly misunderstand the problems being discussed and
reduce them to trite memes. You're not even disagreeing with the thrust of the
article in your vehement objections.

Yes, this indicates the article was poorly written. I advise you to move on
rather than reflexively attack nothing.

