
How close are we to truly photorealistic, real-time games? - evo_9
http://arstechnica.com/gaming/news/2012/02/how-close-are-we-to-truly-photorealistic-real-time-games.ars
======
angdis
I think that the word "photorealistic" should be considered in a more
philosophical way.

The goal of games is to create an immersive and convincing environment but
that does NOT necessarily mean that it would (or should) be indistinguishable
from what a camera/eyeball would see if the environment were "real".

If you talk with computer game artists, you will find out that they
deliberately manipulate features, textures, geometry, distances, and colors in
a way that is decidedly different from what would exist in reality. The
purpose of this, ironically, is to make the environment more convincing (real-
looking) to the player when the game is on. It is all about perception.

I think that by looking at what is happening with computer animated films, one
can get an idea of what will happen soon with computer games. It will
certainly be possible to render an image at high fps such that it simulates
with stunning accuracy what would appear on a CCD sensor if a camera were
placed in a "real" scene. But then there will be deeper problems. Instead of
worrying about the nitty-gritty of ray-tracing fidelity, game makers will be
concerned with the same problems that photographers and filmmakers worry
about: lighting, focus, focal length, lenses, scene layout. And then there are
the well-known problems of "uncanniness"
(<http://en.wikipedia.org/wiki/Uncanny_valley>).

In other words, when you watch a great scene in a film with actors, it looks
"realistic", but that "photoreality" is actually a very careful and deliberate
fabrication-- it is not real.

------
evolvingstuff
One way to circumvent the need for 5000 teraflops mentioned in the article is
to exploit the fact that our eye is only capable of seeing a relatively small
area in great detail at any given instant. Per viewer, we only need to finely
render a tiny bit of the scene. This of course precludes the same level of
realism on a shared screen, but I suspect that viewer-specific devices (e.g.
<http://www.scientificamerican.com/article.cfm?id=virtual-reality-contact-l>)
will become the norm as we move further towards virtual reality.

~~~
Someone
I doubt that would work. A system like that would have to be able to predict
where saccades land. Otherwise, you would, for a short time after every
saccade, be looking at a part of the screen that is low in detail. Short
saccades take about 20ms, so if the graphics pipeline has higher latency than
that, we would have to be able to predict saccades.

I am not familiar with the state of the art here, but I do not think that is
possible.

~~~
soup10
Just 60 FPS = 17ms per frame. There's lag with the display response, graphics
pipeline, and detecting eye movements, but it doesn't seem unreasonable for
future tech.

~~~
evolvingstuff
Agreed. To elaborate:

 _Saccades to an unexpected stimulus normally take about 200 milliseconds (ms)
to initiate, and then last from about 20–200 ms, depending on their amplitude
(20–30 ms is typical in language reading)._

Saccades of 20ms in duration are ones that are very near to the current center
of focus (e.g. moving to the next chunk of letters while reading the words of
this sentence). This just means that detailed rendering needs to extend to a
slightly larger radius, but this is still significantly cheaper to render than
the full field of view. For larger jumps there is ~200ms during which the computer
can attempt to predict the final destination of the saccade, and thus begin to
do some preemptive computations. Once the saccade lands at the new location,
assuming a rendering speed of 100fps, there would be at most 10ms before the
high-res version kicked in, but again, with some degree of
preemptive/predictive computation, perhaps a slightly better version could be
available immediately.
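To put rough numbers on the savings, here's a quick sketch. The display resolution, screen field of view, and 5-degree full-detail radius are assumed illustrative figures, not numbers from the article:

```python
import math

# Hypothetical 4K display and viewing geometry (assumed numbers).
width_px, height_px = 3840, 2160
total_px = width_px * height_px

# Assume full detail is only needed within ~5 degrees of the gaze
# point, on a display spanning ~90 degrees of horizontal field of view.
fovea_deg = 5.0
screen_fov_deg = 90.0
px_per_deg = width_px / screen_fov_deg

# Approximate the full-detail region as a circle around the gaze point.
fovea_radius_px = fovea_deg * px_per_deg
fovea_px = math.pi * fovea_radius_px ** 2

print(f"full-detail pixels: {fovea_px:,.0f} of {total_px:,}")
print(f"fraction rendered at full detail: {fovea_px / total_px:.1%}")
```

Under these assumptions only a couple of percent of the screen needs full-quality rendering; the rest can be rendered coarsely, which is the whole appeal of gaze-tracked (foveated) rendering.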

------
ChrisNorstrom
VERY FAR I think.

Take Transformers 3 for example: "It took 288 hours PER FRAME to render the
Driller along with the photo-real CG building that includes all those
reflections in its glass."

Less complicated scenes in Pixar films take hours to render just ONE frame,
which is why they do it using thousands of powerful computers at render farms
working together.

And even then, you can tell the scene is CG.

Maybe in 50 years, when we all have the power of a render farm in our graphics
cards, we'll be able to. But even then, it's just not worth the cost of
development. Storing massive textures like that would require an insane amount
of hard drive space. Creating textures like that, plus the physics engines,
character models, worlds, lighting, etc., would exponentially multiply the cost
of game development to a point where it's just not worth it.

Is it possible? YES. Is it feasible? NO. Why? Because being able to render a
photo-realistic scene with humans in it is not the same as being able to
develop an entire game full of photo-realistic scenes without the studio going
bankrupt.

~~~
monochromatic
> 288 hours PER FRAME

So it took close to a year to render each second on-screen. I do not believe
this.

~~~
VikingCoder
288 cpu hours.

Divide that by a large number of CPUs.

Most rendering is very parallelizable.

~~~
astrodust
Yeah, that's a day on a 12-core system.
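The arithmetic supports both readings: the figure only sounds absurd if you read CPU-hours as wall-clock hours. A quick sanity check (the 1000-core farm size is an assumed illustration):

```python
# Quoted figure from the comment above.
cpu_hours_per_frame = 288
fps = 24  # film frame rate

# CPU-hours per second of footage; on a single core this is wall-clock time.
hours_per_second_single = cpu_hours_per_frame * fps
print(f"{hours_per_second_single} CPU-hours per second of footage "
      f"(~{hours_per_second_single / 24:.0f} days on one core)")

# Spread across a render farm of, say, 1000 cores (assumed size):
cores = 1000
print(f"on {cores} cores: ~{hours_per_second_single / cores:.1f} "
      f"wall-clock hours per second of footage")

# And astrodust's figure: one frame on a 12-core machine.
print(f"one frame on 12 cores: {cpu_hours_per_frame / 12:.0f} hours (a day)")
```

So 288 CPU-hours per frame really is "close to a year per second" on one core, and also "a day per frame" on 12 cores; on a real farm it's a few wall-clock hours per second of footage.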

------
Lagged2Death
_How close are we to truly photorealistic, real-time games?_

Who cares? Does anyone really want that?

Players don't come back again and again to games like WoW or Farmville because
of the graphics.

There's an argument to be made that obsession with higher and higher degrees
of pixel-slinging has crippled the triple-A games market, leaving real
invention and creativity (and ironically enough, the truly staggering hits, in
terms of P/E) to the indie scene.

------
davidsiems
If you're looking for a cutting edge whitepaper on this stuff check this out:
[http://maverick.inria.fr/Publications/2011/Cra11/CCrassinThe...](http://maverick.inria.fr/Publications/2011/Cra11/CCrassinThesis_EN_Web.pdf)

------
jseims
IMO, the hard part isn't rendering enough polygons to be photorealistic.

It's crossing the uncanny valley of human facial expressions, so they look
natural rather than creepy.

------
ypcx
> While this scene from Crysis 2 looks pretty good, in a few decades it's
> going to look like outdated crap.

Man, in a few decades you'll hit a button that blocks your short-term memory
and connects your brain to a game simulation that you will consider reality,
not knowing (and not even pondering) how you got there. Real shit™.

~~~
nuttendorfer
A Matrix-like scenario seems pretty possible to me, except that we humans would
do this voluntarily (no machine threat) because the world will be in a sad
state. Something along those lines.

------
beloch
Watch some animated or entirely CG movies. They look nothing like real life,
yet they feel far more real than any video game. This isn't due to shortcomings
in the visual quality of games. It's the believability of the physics of the
video game world and the behavior of its inhabitants.

Play any 3D video game on the market and it's almost guaranteed that you'll
see blatantly unphysical things happening. In most games, your own character
will move in ways such that parts of his body and clothing appear to magically
pass through each other without interaction! Run up to a wall and you'll
either be stopped by an invisible force or parts of you will go into the wall
like it was nothing, or both! Jumping and running and fatigue are almost never
even remotely realistic.

As bad as physics are, the behavior of the other creatures and beings you
encounter is usually much worse! Even the best RPG's have dialogue and
interaction that would be absolutely baffling and weird if encountered in any
other setting. Here's a humorous video most Skyrim fans out there have already
seen that illustrates the problem nicely.

<http://www.youtube.com/watch?v=YEMD28MMtNg>

The next major leap in video game fidelity is not going to be graphics. Quite
frankly, they're good enough now that you can suspend disbelief. It's the way
things move, interact, and how game creatures behave that really needs to
improve. Yes, in ten years Crysis is going to seem hilariously bad, but it
won't primarily be because of how a static screenshot looks. In fact, I bet in
ten years you'll be able to look at a screenshot from Crysis and say, "Gee,
that looks pretty good actually!", but when you try to play the game it will
feel very primitive.

In the near future I think we're going to see games devoting a lot more
processing power to simulating the physics of the environment. We're going to
see greatly improved motion capture technology for video game actors bringing
much more realistic facial expressions to video game characters. The most
difficult task ahead is getting video game creatures and characters to behave
realistically when they're _not_ reading from a script. That's going to be
much harder and will likely take much longer than a decade to perfect.

Incidentally, with video games taking increasingly large amounts of money to
develop and now being viewed as a form of art, I wonder if more attention will
be paid not just to preserving them for the future, but to making them
upgradeable to utilize the future's technology. e.g. Already we're seeing
modding communities springing up to improve various aspects of obsolete games
years or even decades after their developers abandoned them. What would happen
if video game companies started planning for future re-releases? i.e. Spend
some extra time when coding games such that people ten years in the future can
easily rip out an outdated graphics engine or AI behavior module, replace it
with something more modern, and then re-release the game for another hit of
profit.

This would be, for video games, similar to the advent of home video. Before
home video came along, old movies were trash in a film canister as far as the
producers were concerned. Most movies, once they'd had their theatrical run,
were done making money. Now movies that are a hundred years old can make
people money! People do currently play old video games, but on a relatively
small scale since the difference between Pong and Skyrim is a _lot_ bigger
than the difference between "Battleship Potemkin" and "Inception". When you
look at all the wonderful design, art, and acting that goes into a game these
days, it seems like an incredible shame that the current state of video game
physics should render the whole works primitive inside of ten years. I think
the economics of the video game business have reached the point where these
assets are too expensive to just throw away on the video game equivalent of a
"theatrical run". I think companies are going to start saving stories,
performances, artwork, etc. and then release new versions of games in a decade
or two.

------
InclinedPlane
Pretty close, actually. The easy part is the hardware and basic software. In 10
or 20 years a long list of technological innovations and cumulative advances
will put more than enough processing power on people's desktops, bringing
advanced rendering techniques like radiosity and ray tracing into the realm of
real-time rendering with commodity resources.

But that's not the hard, or interesting, part. The hard parts will be
making content and simulating physics convincingly. It is already hugely
expensive to have artists painstakingly recreate textures and models for 3D
environments; how much more difficult will it be to do so at higher levels of
fidelity? Obviously we need some pretty significant tooling breakthroughs to
make this problem even remotely tractable. The physics problem is even harder.
It's a lot easier to fake it and cram things into accepted game mechanics
(invisible walls, immovable and indestructible objects, etc.) than to perform
robust simulations. Physics may easily require more computing power than
rendering.

But a simple immersive photo-realistic virtual 3D world is but a tiny corner
of the implications of such technology. Think about augmented reality, for
example: integrating smartphone functionality into the world around you
convincingly using transparent displays, or using VR goggles that reproduce
images of the real world with overlaid data, seamlessly integrating infrared
or radar data, for example.

Imagine playing a game of cops and robbers except with enhancements that merge
it with a video game. Or simply imagine the world being enhanced in a seamless
and subtle way with computer data, like a hud but far more sophisticated.
Imagine if every single person in the world were not a stranger to you, if you
could instantly know their name, their job, their public history, and so on.
How does that impact society, friendships, business, government, war? The
implications are both broad and deep.

------
corysama
Here's a nice, short blog article on the subject

<http://timothylottes.blogspot.com/2012/01/games-vs-film.html>

"The industry status quo is to push ultra high display resolution, ultra high
texture resolution, and ultra sharpness. IMO a more interesting next-
generation metric is can an engine on a ultra-highend PC rendering at 720p
look as real as a DVD quality movie? Note, high end PC at 720p can have
upwards of a few 1000's of texture fetches and upwards of 100,000 flops per
pixel per frame at 720p at 30Hz."
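For scale, multiplying out the upper end of the quoted figures gives the sustained shader throughput that budget implies (this is just arithmetic on the numbers in the quote, nothing more):

```python
# Upper-end figures from the quote above.
width, height = 1280, 720      # 720p
flops_per_pixel = 100_000
hz = 30

total_flops = width * height * flops_per_pixel * hz
print(f"{total_flops / 1e12:.1f} teraflops sustained")
```

That works out to roughly 2.8 teraflops sustained, which is in the ballpark of a single high-end GPU of that era; presumably that is the quote's point about what an ultra-high-end PC at 720p could afford per pixel.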

------
ricksta
We won't have photorealistic games for a long time because of the way our GPUs
and 3D graphics pipelines work. With a pipeline architecture, every single
object inside a game is rendered individually. Things like shadows and
reflections in games are all "gimmicks" or "tricks" that approximate real
shadows and reflections. Until our computational power increases to the point
where we can do massive ray tracing in real time, we won't have truly
photorealistic games.

------
Samu
Would it be unreasonable to want all that, but also in stereo?

~~~
evolvingstuff
Only twice as unreasonable :)

