The Problem With Photorealism (blackhole12.blogspot.com)
163 points by blackhole on March 18, 2014 | hide | past | favorite | 120 comments



It's often been said that the difference between Steven Spielberg and Ed Wood is merely thousands of subtleties. Tiny things add up to either make or break created worlds.

Clipping is one of my biggest pet peeves with current video game engines. It really destroys the illusion when stuff that looks realistic and solid starts magically clipping through something else that looks realistic and solid. How about when the camera passes too close to something and you see the hollow insides? This is especially jarring when it happens to be a character's head and you see the backside of his/her eyeballs and teeth! Pick any AAA title at random, make a character, and then look closely at that character while standing in his/her default start-of-the-game gear. The odds are good that you'll see dozens of little bits of the gear clipping into itself while the character is engaged in the default standing animation. It's telling that even excellent artists can't solve this problem for the one combination of gear and poses they know the player will see with 100% certainty.

The next big step for computer game realism is going to be material physics. Simulated materials instead of textured polygons and bounding boxes. This is going to be computationally intensive, but the boost to realism is probably going to be far greater than we'll gain from ray-tracing.


"Simulated materials instead of textured polygons and bounding boxes."

What does this mean? Like, what do you envision this changing?


Think about how a game treats a block of stone right now. It's a hollow cube with textures pasted on it. Its bounding box is pretty easy to calculate, but a complex object that doesn't match its own bounding box closely will still clip right through it.

Now, imagine if we replaced that single block of stone with a matrix of tiny blocks. Also, imagine that we've made the bounding box of the complex object fit its geometry much more closely. If the same motion that caused clipping before is carried out, we'll detect it. It's going to take a lot more computation, but we will detect it.

Not only will we detect collisions that we wouldn't have before, we'll be able to handle them in a variety of gameplay-enhancing ways. If the stone is meant to be strong stuff, we make its mass high and the bonds between sub-blocks strong so that the complex object will come to a jarring halt when it makes contact instead of passing into the stone. If the stone is meant to be softer, we dial down the bond strength between the sub-blocks a bit so that they can be broken. The object might dig in a bit and leave a noticeable gouge as some of the sub-blocks are separated from the main block. If the stone is meant to be more like dirt, we make the bond strength very low so that a blow will cause a chain reaction that makes the whole block tumble apart. We can change the probability of blocks sticking together to simulate either wet, clumpy earth or dust that's ready to fly off in all directions when it's touched. Deforming the complex object is also an option. It could similarly be composed of sub-blocks, but with tweakable levels of stickiness/stretchiness holding them together that would allow us to capture things like flexing or bending.

In short, we really oversimplify the physical nature of objects in video games at present. This greatly reduces computation, but it makes the world seem unrealistic in how objects interact. By splitting things into smaller chunks that more closely resemble real matter we can start to simulate real materials, but at the cost of greater computation that isn't devoted directly towards making things pretty. Prying resources away from the graphics engine has long been something developers just don't want to do.
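A minimal sketch of the sub-block idea, in Python (every function name and threshold here is invented for illustration, not taken from any real engine): each sub-block carries a bond strength, and an impact detaches any block that receives more force than its bonds can hold.

```python
def apply_impact(bonds, center, force):
    """Detach any sub-block whose bond strength is below the force it
    receives; the force falls off with Manhattan distance from the
    impact point. `bonds` maps (x, y, z) -> bond strength; detached
    sub-blocks are removed in place."""
    cx, cy, cz = center
    detached = []
    for (x, y, z), strength in list(bonds.items()):
        dist = abs(x - cx) + abs(y - cy) + abs(z - cz)
        if force / (1 + dist) > strength:
            detached.append((x, y, z))
            del bonds[(x, y, z)]
    return detached

def cube(strength):
    """A 3x3x3 block of uniform material."""
    return {(x, y, z): strength
            for x in range(3) for y in range(3) for z in range(3)}

soft = cube(2.0)   # "soft stone": weak bonds chip near the impact
hard = cube(50.0)  # "hard stone": the same blow just comes to a jarring halt
chips = apply_impact(soft, center=(1, 1, 1), force=6.0)
assert len(chips) == 7            # center voxel plus its 6 face neighbors
assert apply_impact(hard, center=(1, 1, 1), force=6.0) == []
```

Tuning the single `strength` parameter already covers the strong/soft/dirt spectrum described above; a real simulation would add mass, momentum, and re-cohesion on top.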


Voxels, pretty much? Like Minecraft, but 10 or 1,000 or 1,000,000 times smaller than the objects/characters in the world.

It might look pretty blocky at lower resolutions, but considering the enduring appeal of "pixel art" (and Minecraft's success too), it's kinda surprising it hasn't been done yet. i.e. a voxel physics world at a similar resolution to old 8-bit games (for example, 8x8x8 voxel characters).

If voxel physics can be done largely locally (with long-range effects being a slow ripple), it could be a killer app for massively multicore devices.


Lexaloffle has been working on something like this for a couple of years: http://www.lexaloffle.com/voxatron.php

Apparently it can actually run well on a volumetric display!


wow, love the volumetric water (in video)!


So, that's cool and all, but we aren't making simulations--we're making games.

Nobody gives a hoot in hell how realistic the drapes on a window are if the game isn't fun.

Many games have gone for very good realism, and it hasn't really worked out so well.

Doom 3 had per-poly hit detection, and all it did was make many weapons feel less accurate and multiplayer more annoying.

ArmA 2, one of the most detailed military sims out there, did very accurate character-world collision detection, and as a side effect, actually moving around inside buildings became quite hard. A community mod was created to address that and restore the more fun, less realistic behavior.

Red Faction: Guerrilla tried to make more realistic destruction physics, but it mostly just made more trouble for level designers, because the things they built tended to fall apart.

Again, simulations aren't games.


Jarring details do spoil the fun a little. I'm fine with lack of realism. I am not fine with noticeable differences in realism.

My last example comes from Remember Me, which I played on the PS3. An excellent game with very good graphics… except for the self-shadows: they emphasise the underlying vertices. I'd rather have no shadow and a smooth face than a "realistic" shadow and a blocky face. Other titles that feature self-shadows display a horrible dithering (I have seen this in Mass Effect).

To date, the best consistency I have ever seen comes from Dragon Quest on the PS2. It's not realistic at all (its style is that of Japanese animation), but there are next to no mistakes such as clipping. Even more recent titles such as Ni no Kuni aren't as consistent.


I would guess that it would imply that meshes are no longer static, and that they could bend, stretch, and fold based on their properties. In some ways, we already get this in modern games that try their best to simulate cloth in flags and such, but it's mostly still smoke and mirrors.


Smaller polygons.


This guy isn't thinking outside the box. He's trying to solve the problem using techniques from the past.

Obviously the solution isn't to make levels manually; it would be to automate them. Why isn't EA flying scanning drones around football stadia to save artists thousands of man-hours?

They already use these techniques to scan actors' faces. I can imagine someone from 10 years ago predicting it would be impossible to script all the movements of lips and eyelids needed to produce realistic characters.

Disclaimer: I have no idea what I am talking about, but thought my off the cuff theory would have more weight if I wrote like I did.


Ironically, procedurally generated levels was a big thing in the early 90s. Castle of the Winds, Rogue, Nethack, and more were 100% dynamically generated.
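The generation those early games did can be sketched in a few lines; this toy (not any actual Rogue or Nethack algorithm, and every name in it is made up) carves random rooms into a grid and joins their centers with L-shaped corridors:

```python
import random

def generate_dungeon(width=40, height=20, rooms=5, seed=0):
    """Toy roguelike map: '#' is rock, '.' is floor.
    Carves random rectangular rooms, then connects consecutive room
    centers with one horizontal and one vertical corridor leg."""
    rng = random.Random(seed)
    grid = [['#'] * width for _ in range(height)]
    centers = []
    for _ in range(rooms):
        w, h = rng.randint(3, 7), rng.randint(3, 5)
        x, y = rng.randint(1, width - w - 1), rng.randint(1, height - h - 1)
        for j in range(y, y + h):
            for i in range(x, x + w):
                grid[j][i] = '.'
        centers.append((x + w // 2, y + h // 2))
    for (x1, y1), (x2, y2) in zip(centers, centers[1:]):
        for i in range(min(x1, x2), max(x1, x2) + 1):
            grid[y1][i] = '.'          # horizontal leg
        for j in range(min(y1, y2), max(y1, y2) + 1):
            grid[j][x2] = '.'          # vertical leg
    return '\n'.join(''.join(row) for row in grid)

print(generate_dungeon())
```

Every seed yields a different playable layout, which is exactly the "100% dynamically generated" property those games had, and also exactly why no two players could share a walkthrough.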

Truly dynamic agents existed in RollerCoaster Tycoon, where every single person that walked into your theme park had a nausea rating and money, and made individual decisions at every point in the game. G-force was measured on every ride and would affect different people in different ways. The amount of vomit in front of rides would disgust people in differing amounts. Etc., etc.

True AIs, agent theory, and dynamic games were the norm in the mid-to-late 90s. But this approach to gaming isn't what people want. The truth of the matter is that what makes money is these pre-scripted game-movies that sorta-kinda look photorealistic.

In fact, true AIs have been implemented in games by probably every game development team. When AIs fear for their own lives and run away from players (instead of standing and fighting), players tend to get bored. When AIs become so good at the game that they crush the player, players become frustrated and quit.

Believe it or not, few people are willing to put up with realistic and dynamic challenges. There are no game guides, no assistance, no "help menu" when a game is created dynamically. It is hard to share experiences with friends, because everyone is sort-of playing a different game.


People didn't turn away from these games because they don't want realism; they turned away because primitive world-generation and AI are not very convincing. Creating a building layout and a story by hand are much closer to reality than whatever Nethack can generate; the only problem is volume (Nethack could create one very unrealistic dungeon for every playthrough).


Reality has nothing to do with making a successful video game. The top 10 video games of 2013 included Animal Crossing, Monster Hunter, Pokemon X/Y, Assassin's Creed, Bioshock Infinite, and Grand Theft Auto.

Only FIFA 14 can claim to be somewhat realistic, maybe Call of Duty: Ghosts. But we all know what _real_ war is like... not fun. http://www.theonion.com/video/ultrarealistic-modern-warfare-...

I mean, doggies taking down helicopters is cool and everything and makes for an exciting storyline... but no one is going to convince me that the action inside Call of Duty: Ghosts is "realistic".

http://kotaku.com/oh-my-god-its-a-dog-taking-down-a-helicopt...

With regeneration (bullet shots to the chest recover in seconds), medpacks that magically heal you, and so forth... the modern FPS is hardly realistic... but a fantasy designed to make gameplay fun.

After all, "campers" make games go stale, even if camping is the most effective and realistic tactic in games. Modern games discourage camping and encourage close-up action.

----------------------

Those who fight for "realism" don't understand the typical gamer. Why have a realistic fight when you can instead, turn into a dragon? http://dota2.gamepedia.com/Dragon_Knight


Miguel Cepero of the Procedural World blog said something similar: that with the increasing complexity of AAA games, the only way forward is to procedurally generate much of the content. He talks the talk, too: the engine that he's currently building can algorithmically create varied terrain and architecture while still allowing you to manipulate the game world like in Minecraft. http://procworld.blogspot.com

Personally, I don't necessarily agree. I've had way more fun with simple-looking games over the past few years (Super Mario Galaxy, all the crazy indie platformers, Minecraft, Counter-Strike, etc.) than with any AAA games. I think the way forward for the medium is to focus more on tight and innovative game design rather than shiny graphics and enormous (but ultimately empty) spaces. No procedurally generated world will ever contain the intelligence of a single cleverly-designed level.

On the other hand, I've also had a lot of fun with roguelikes, and I really look forward to the innovations in procedural generation over the next few years.
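Classic procedural terrain, of the kind such engines build on, can be sketched with 1D midpoint displacement (a toy illustration only, not Cepero's actual algorithm; the function name and parameters are made up):

```python
import random

def midpoint_displacement(left, right, depth, roughness=0.5, rng=None):
    """1D fractal heightline: recursively displace each midpoint by a
    random offset whose range shrinks by `roughness` at every level,
    giving large-scale hills with small-scale jaggedness."""
    rng = rng or random.Random(42)
    heights = [left, right]
    spread = 1.0
    for _ in range(depth):
        new = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + rng.uniform(-spread, spread)
            new.extend([a, mid])
        new.append(heights[-1])
        heights = new
        spread *= roughness
    return heights

line = midpoint_displacement(0.0, 0.0, depth=6)
assert len(line) == 65   # 2**6 + 1 samples of a jagged but coherent ridge
```

The 2D version of this (diamond-square) generates whole heightmaps; the trade-off the comment describes is that none of those heights carry the intent a designer would put into a hand-built level.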


That kind of tech is definitely coming, but the game studios aren't developing it. They'll buy it once it comes out though.

Game studios aren't VCs. A VC buys pieces of HIGHLY speculative companies trying to maximize his/her odds of a 1000:1 exit. Game studios fund a small portfolio of games and try and make a profit on every single one.

The culture in a studio is so completely different that it's very unlikely anyone high enough up at a game studio to have the budget and other resources to develop it in-house will have the risk appetite to actually do so. And the people who have the risk appetite won't have the budget.


> Disclaimer: I have no idea what I am talking about, but thought my off the cuff theory would have more weight if I wrote like I did.

Ha. I like the cut of your jib.


Scanning and other forms of automatic content generation do not replace manual 3D modeling and texturing; just as cameras did not replace painting and sculpting.

Personally, I'm more excited about procedural generation than any of these other ideas. Programming seems to be the most powerful tool humans have ever devised. Why not apply it to create art?


If you could use automated scanning (or other means of observing the real world) combined with procedural content generation, that could be huge.

Buildings that look and feel like buildings because they were procedurally generated with a deep understanding of how buildings actually exist? Yes, please!


Usually you want to deliberately distort the real world significantly, to reduce the distances by an order of magnitude while keeping the 'feel'.

You don't want a GTA-style game to take 40 minutes to commute from one place to another, so you keep the noticeable landmarks but cut out huge parts in the middle. You want wild alley chases to be possible, so you make many places connected and cross-navigable in patterns that don't occur in real life. You want to drastically increase the density of 'interesting' places, not mirror their density in real life, and leave out the endlessly repeating miles of similar suburbs or apartment blocks.


So this is where you spend the man hours, tweaking a generated or autonomously created world.


No, please!

I already have the real world. That's not what I'm looking for in a game.

If you want a Boringness Simulator, there's always Farming Sim, or something.


A false world that's at least informed by the real world can be more compelling than something totally abstract.

I'll always remember riding around in stolen cars in Vice City, a city that, while not real, is lovingly informed by the real world. If someone made a procedurally generated Vice City that I could drive through forever, yes, please!


I sort of agree with you. I haven't been to Florence, but after playing so much Assassin's Creed 2 I recognized real photos.

On the other hand, if it's procedurally generated, it's not informed by the real world. There's a reason even procedural generation needs game design, you can't just throw random stuff together and expect it to be coherent.

Or it could take chunks of the real world and randomly sprinkle them through the game world. It would get boring fast.


I got deja vu in Paris after playing Battlefield 3. :)

Frostbite is an amazing engine, if a bit quirky.


Have you looked at google maps 3d view lately? That's all procedural, and the emphasis isn't even on photo realism. It's not difficult to conceive of a drone that does scan and take high res images and software that stitches it all together later.


The guy is also ignorant. SpeedTree generates all sorts of photo-real foliage for a million games and film/TV CGI, and has done for years. Even Forza Motorsport uses SpeedTree for trackside stuff. Also, loads of materials are generated procedurally; fur and hair are the obvious examples. For games, MGS V uses lots of procedural stuff in its modelling. http://www.youtube.com/watch?v=QZFR4H4LORU is the GDC 2013 presentation. It has a lot of neat tricks and optimisations for content generation in general, all in search of the photo-real.

The other problem is research. Loads of people are working on dynamics, rendering, and materials. Not very many are working on contextual object distribution to detail an interior so it has decent complexity and realism, or on ways to procedurally create Skyrim villages. Even fewer are working on generating complex quests, but when these things get "good enough", it'll be a huge step forward.


iRacing scans racetracks, but that's for physical accuracy; it likely increases the time and money involved rather than decreases it.

http://www.iracing.com/track-technology/


That's more problematic than you think. Many architects and/or building owners assert intellectual property rights over their buildings. Movie studios can be sued if buildings are used prominently in shots without permission.

So scanning could open devs up to lawsuits.


That only works if you're doing a game that takes place in real-life locations. If you want a made-up world, like most games do, then scanning won't help at all.


Debevec (yes that Debevec) is working on capturing a performance in a way that lets you re-light afterward. https://www.cgsociety.org/index.php/CGSFeatures/CGSFeatureSp...


The author's point is more abstract.

Yes, you can certainly work out a few problems, scaling up the computations in some way.

Some problems though, are inherently hard (eg. animations), and it's hard to imagine some breakthrough even in the next decade.


I'm puzzled. IIRC, a decade ago Endorphin brought hybrid keyframed/ragdoll/physics animation to studios, and a few years back to consoles. Am I wrong?


You are not wrong. The majority of comments in this thread, and the original article, are all written as if by people who do not actually know what the current state of the art even is. They just know their favorite video game is disappointing.

If someone wants to make you wrong, all they have to say is that they don't care for the aesthetics of Endorphin/Euphoria. Never mind that the tech didn't even exist 10 years ago. I was working on Euphoria-like technology in 2001, 5 years before Euphoria was announced. Ultimately, I just wasn't smart enough to solve all the problems, but I could see that it was possible and I knew it was coming. And arrive it did, only 5 years later. Likewise, I think I have a pretty good view of what the next steps are, and the next 5 years should be very exciting, especially in the voxel and procedural spaces.


I'm sure these are all private thoughts, but if you write publicly about them or know some interesting mailing lists or sources to follow, I'd be glad to hear about it, be it animation, light transport, geometry...


That's why they are called breakthroughs! :)


Just simulate the entire universe. It requires a computer the size of the universe.

There is one freely available, however, you will have to share it with a few other people. It's barely noticeable though because it's highly parallelizable.


It's rather poorly documented, though.


Proprietary assholes. Maybe if they just made the source available we wouldn't have to spend countless man-millennia trying to reverse-engineer this thing.


It's not poorly documented; it's very well documented. The problem is that we run inside the exe, not in the source code!


And extremely buggy.


It's pretty bad. The other day an entire plane just disappeared. And I have yet to see someone respawn properly.


I saw the greatest fake release notes entry the other day: "Fixed bug where matter would sometimes spontaneously become self aware."


It's a bug, and the 'AI' is trying to fix it.


Heh, actually this makes me think that Augmented Reality gaming is the way to go - like, proper real live-action quake overlaid onto RL environments. Then you only need to model the monsters, the environment's already done.

But I don't think we have the VR glasses for that one yet - I mean, Oculus is opaque.


You could lazily evaluate the universe and save some cpu power.


Isn't that only true if you assume locality?


And that's basically what we do already. When we throw global illumination and global fluid dynamics into the mix we quickly run up against the limitations of our machines.


Actually the problem is quite a bit simpler. You only have to simulate the observable universe, not the part that is outside our light cone, which is probably most of it. And the observable part makes it simpler still. You don't need to simulate planets more than a few hundred light years away, because they are not observable. It gives you time to upgrade your hardware before the next round of astronomical satellites goes up.


It's quite hard, actually. By simulating the observable universe, I am not just talking about drawing the objects on screen. It's much more than that, and computational astrophysicists are struggling like hell!


Just need some rocks. Well, a lot of them really. http://xkcd.com/505/


It would have to simulate itself, recursively, so probably a computer greater than the size of the universe is required.


Minecraft is an interesting case study here, in that it deliberately throws away realistic graphics in favor of a comprehensively mutable environment. It's interesting how that seemed like such a totally unprecedented idea at the time.


It wasn't unprecedented, just look at Dwarf Fortress. Tarn threw out all graphics in exchange for focusing on an incredibly complex and deeply layered game -- with an environment that is fully constructable and has both a geological and cultural history.

Once the 3D accelerator became something all gamers had, it seemed (perceptually) that all major game development shifted to producing "photorealistic" graphics. We ended up with an endless trail of games that basically produced the same thing.

I wonder: what would happen if the budget of something like Call of Duty were spent on a game with Minecraft or even ASCII graphics? Perhaps a total disaster, but what could be produced if it were done really well?


The current state of Dwarf Fortress probably has within an order of magnitude as many man-hours in it as an early Call of Duty. The difference is that the work on Call of Duty is mostly art, which is trivially parallelizable, whereas the work on Dwarf Fortress is the engine, which is not parallelizable to anywhere near the same degree.


I'm not sure where Minecraft came in the recent era of gaming... since it was built in 2009, that was well after games started being big on iOS, and some of those big games went for 8-bit graphics as a way to maintain a modern sense of style and artistic quirk with limited dev and system resources. Also, Notch got some inspiration from Dwarf Fortress, which made an even more extreme sacrifice of graphics (the precursor to Dwarf Fortress was a 3D game that was too difficult to develop for).

Either way, who could've guessed back in 2004ish that pixelated retro games (including FTL, TowerFall, and countless iOS hits) would be what gaming would look like 10 years later?


> Either way, who could've guessed back in 2004ish that pixelated retro games (including FTL, TowerFall, and countless iOS hits) would be what gaming would look like 10 years later?

Flash games were probably the 2004-era precedent.


A game with very stylised graphics will always win over a game with very realistic graphics as far as I'm concerned. If you've got a game with unassuming graphics it's a lot easier to focus on the gameplay.

Alternatively, stylising things _just right_ can get you a lot further in creating a specific atmosphere - games like Limbo work because everything fits together so perfectly.


Once upon a time there was an open source game called "Cube" which was a first person shooter. It allowed in-game multi-player level editing, with all the occlusion and lighting recomputed on the spot.

It was superseded by Sauerbraten, which improved the level structure to allow multiple floors, but didn't run as quickly due to the calculations being a bit harder.


Having worked on an FPS MMO, I can't agree with this enough. People always complained it didn't look real enough. I always wanted to ask them if they would mind buying 1,000 video cards up front and then waiting 10 years, buying 1,000 new ones every year, until I got it right. I figured by then I could retire and duck the problem. Reality is ridiculously hard to do right. Now try doing it over the internet with a half-second ping time and you realize that not only does reality bite, but physics bites harder.


While WoW looks the way it does partly because of its 10-year-old origins, its artwork also communicates to the player: "It's just a game." And once you're far enough away from an imitation of reality, the brain stops trying to fit it into its model of the world and complaining when it can't.


I don't know if you guys know how fast path-tracing is becoming reality. It just needs a few more years (of GPU advancement) to get rid of the noise, watch: https://www.youtube.com/watch?v=aKqxonOrl4Q

For art, next-generation photorealistic graphics will be 3D scanned and/or procedurally generated. Lots of work still (you can't scan cars or anything mechanical), but graphics could also be reused more often.


Yeah this is exactly what I was going to say and I was even about to post a Brigade engine video. LOL.

McClure kind of dismisses "ray tracing" offhand (and by implication/context, similar techniques like path tracing) with the premise that every texture would need to be manually developed by an artist.

The answer to that is as you mention 3d scanning and procedural generation.

There are quite a few path-tracing software efforts (and even one or maybe a few hardware efforts) out there, and many 3D scanning companies.

I believe that the main thing that is holding these technologies back now is just people not knowing that they are realistic technologies, which keeps them from being mainstream. But once things enter the mainstream consciousness of engineers, you get an order of magnitude increase in the number of people working on them, starting with some of the existing working ideas and you start getting much more practical and inexpensive solutions.

I believe that within say 7 years Nvidia and ATI will either acquihire or build hardware themselves that makes real-time path-tracing, procedural generation, and real-time physics, convenient and efficient.


By the way, if you are interested in (realtime) ray tracing technologies, check out this forum: http://ompf2.com/

By procedural/scanned I meant mainly procedural that is based on 3D scanning (scans used for learning set), such as FaceGen http://www.facegen.com/ Lots of other stuff could be made this way, too.


You don't understand.

The article is arguing in a different level. It says that reality is inherently artistically irrelevant, and this race to photorealism is a wild goose chase. You can't 3D scan Skyrim!


I get that, but path-tracing is realism; there's nothing to add and nothing to remove. It's a full optical simulation. Path-tracing is not about what 'effects' it can do; it's about speed, precision, and accuracy, and nothing else. That answers the technology question the article is talking about.

Another question was art and content. For that, I will say that new tools will be developed that combine huge libraries of 3D-scanned content with procedural generation. Artists are still needed for 'art direction', which is the part where they decide what general shapes and materials they want, and basically traditionally model almost everything, like houses etc. That is still a huge amount of work for games like Skyrim, but the end result will always be photorealistic, and it should not be the artist's job to care about how it looks anymore.
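The "few more years to get rid of the noise" point upthread is really just Monte Carlo variance: a path-traced pixel's error shrinks like 1/sqrt(N) in the number of samples. A toy estimator (no actual rendering here, just the statistics; all names are made up) demonstrates the convergence rate:

```python
import math
import random

def noisy_pixel(samples, rng):
    """Stand-in for one path-traced pixel: average `samples` random
    light contributions whose true mean is 0.5."""
    return sum(rng.random() for _ in range(samples)) / samples

def rms_error(samples, trials=2000, seed=1):
    """Root-mean-square error of the pixel estimate over many trials."""
    rng = random.Random(seed)
    errs = [(noisy_pixel(samples, rng) - 0.5) ** 2 for _ in range(trials)]
    return math.sqrt(sum(errs) / trials)

# Quadrupling the samples roughly halves the noise (1/sqrt(N) convergence),
# which is why path tracing demands so much raw GPU throughput.
assert rms_error(64) < rms_error(4)
```

That slow square-root convergence is the whole GPU-advancement story: a 2x cleaner image costs 4x the samples, before any denoising tricks.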


And that's a big part of why video games suck these days. Creating a photorealistic or cinematic experience is extremely difficult and requires a huge amount of artistic assets, and even then the illusion only holds up under certain tight constraints. So game developers do as much as possible to enforce these constraints all throughout the game.


Or it's because the games just suck and no amount of photorealism is going to cover the cracks.


100 times this. If you want to know how important photorealism is to getting your subject involved with the medium, go read a paperback copy of Lord of The Rings. It's pretty lo-fi.


And this is why I point to Dwarf Fortress as the state of the art in games.


I would love to play a DF with a decent UI, and preferably with simple 2D graphics. And I'm an ASCII DCSS player.

Are there any hacks out there that make the UI less memorization-focused? I know there are graphics hacks, but I was never able to get them to work...


There are tilesets, but honestly I think they detract. My extended ASCII characters are like 8x12 pixels and scan really easily. Even at 32x32 I can't reliably tell the difference between a dog, a cat, or a rat in most tilesets, especially at first glance, so a lot of memorization is still required with tiles; they allow less information to be displayed at once, and they are more complicated to recognize.

I don't even see the extended ASCII anymore ( http://i.imgur.com/7ci106g.png)


I agree, it is proof that content >> presentation, but there's no doubt a better interface and graphics would benefit DF greatly -- it currently has a huge cost barrier before the fun part, which most people just aren't willing to try to break.


I think there is a connection; "AAA" games suck for a lot of the same reasons that Hollywood films suck:

* The "average joe" gamer/moviegoer expects incredibly lavish productions with photorealism, explosions, over-the-top action, etc. Making a game/movie for this audience is very expensive and very risky: They need to spend a lot, and charge a lot to a huge audience to make up the difference.

* So that developer/studio tries to minimize their risk and sticks to what they know how to make and what they know people will buy: the same two genres over and over and over again (FPS and third person action, or over-the-top action and screwball comedy), lavish "set pieces" and gimmicks that look cool in advertising but don't contribute anything at all to the gameplay/story (like "the one building you can destroy" or the "quick-time event": "press X to watch the main character do something really cool instead of just doing something cool yourself"), big-name actors for promotional value, etc etc. Some aspects like writing quality aren't considered as important and aren't given as much attention. Some things that make a better game/movie would be actively harmful to its commercial success (true novelty, gameplay/plot that takes effort from the audience to understand/appreciate), and so these qualities are avoided.

* Despite being incredibly formulaic and so on, these games/movies are very successful with their target audiences. To add another layer of metaphor, they're the fast food of their mediums: unremarkable and probably bad for you, but highly available and consistent in their quality. You might wish it were better, but you'll probably enjoy yourself on some level regardless. Critics and more sophisticated gamers/moviegoers decry the state of their respective industries at large while temporarily ignoring all of the cool stuff put out by smaller studios and passionate hobbyists.

In the decades before expensive CG raised the expectations of gamers and moviegoers alike, there was always the equivalent of these successful but substanceless creations, but you saw a lot more experimentation coming out of the big studios. The same company could put out a generic space shooter (sorry, shmup fans) today and a unique exploration game tomorrow. It's simply not feasible for those same companies today to dedicate as many resources on niche titles as they do for their blockbusters, and at the same time they've forgotten how to do small productions. It's not the end of the world, because digital distribution and crowdfunding gives the little guys the power to strike out on their own, create niche, experimental productions, and survive, but it is sad to me that these media juggernauts do more to advance the state of the art in computer graphics than they do in gameplay or storytelling. (No offense, computer graphics programmers, I'm fascinated by the field)


No, I think the parent is right. The games suck because of all the constraints placed on them by the asset pipeline.


Well, that's part of the reason. Another part is that since the debut of the PlayStation, possibly a bit earlier, video games are marketed to a wider audience, and the emphasis has been shifted away from challenge and fun towards audiovisual razzle-dazzle. In order to sell to this new audience, the games had to be easy so that the average new player had a chance of beating them; and they had to make a visual impact. Sony set developer guidelines to make the graphical capabilities of the PlayStation a selling point and enforced them on third parties; in North America, for instance, sprite-based games were highly discouraged in favor of polygon games and also severely restricted. This restriction did not hold in Japan due to peculiarities of the Japanese market in which certain genres with recognizable tropes (shooters, 2D fighters, JRPGs, "visual novels") predominate.

I used to joke that the camera spin effect prominent in early PlayStation titles like Final Fantasy VII was mandated by Sony's PlayStation developer license. I still have my doubts as to whether that was entirely a joke.

Anyway, combine this massive shift in marketing emphasis with typical 90s xtr33m to the max attitude and the passel of limitations imposed by an asset pipeline that overwhelms the developers' technical capability to keep up, and you have a recipe for trivializing the medium to the point where the damage is still strongly felt today.


I noticed the most recent crop of engines (thinking of Frostbite 3 here mostly) do a lot of cheating to run well. This is most apparent on the awful draw distance of terrain decoration. Draw-in is very apparent at even the highest graphics setting.

Worse, despite games like Battlefield having beautiful art assets, it takes very little for the illusion of photorealism to be destroyed; it could be as simple as getting too close to a piece of tall grass and realizing it's actually just a sprite. Or noticing that a flag is always flapping one way, and smoke is drifting the opposite way.

It's unfortunate, but the brain is really good at picking these inconsistencies out. More stylized games seem to age much better; TF2 still looks great.


Takeaway: the problem isn't photorealism in games, although that's hard enough. The problem is actually having somebody create the zillions of points of detail required to have something to photorealistically render.

So, seems like the technical challenge is to take a small portal, say 64x64 pixels, and then create a procedurally-based photorealistic (including damage, real-world illumination, and a damage model) engine to power it, right?


> So, seems like the technical challenge is to take a small portal, say 64x64 pixels, and then create a procedurally-based photorealistic (including damage, real-world illumination, and a damage model) engine to power it, right?

You may be interested in .kkrieger as a proof of concept for this type of engine. The most incredible part is that the whole game is less than 100 KiB in size, but then we're talking about coders who are used to working within 4KiB limits: http://en.wikipedia.org/wiki/.kkrieger

Talking of the demo scene, you may be interested in this 4K demo, another great demonstration of what procedurally generated content can do: http://www.pouet.net/prod.php?which=52938


I prefer comics and cartoons to live-action movies. It's ironic, but somehow, a primitive smile or frown on a cartoon character's face feels more genuine and "real" to me than a professional actor pretending to be happy or sad. I can take a drawing as the unfiltered expression of the artist, but when I watch a live-action film and see real people carrying out the trappings of fiction, I have a hard time taking it seriously. It feels silly, kinda like how the so-called "uncanny valley" effect makes bad CG creepy and not just unconvincing.

I don't think that's a common opinion, given the lasting appeal of television and Hollywood, and the lack of respect comics receive in comparison, but in my mind, if a drawing feels more "real" than an actual human acting in front of a camera, then photorealistic CG doesn't stand a chance.


On the other hand it might just be a matter of getting used to it.

Japanese live-action movies are the textbook definition of overacting, and a few years ago I couldn't stand watching them. They made me physically uncomfortable. Fast-forward a few years and I enjoy watching them...


Wasn't there something about people actually preferring non-photorealistic renderings, at least in Google Earth?


It should be "the problem OF photorealism", not "WITH". We all enjoy photorealism, but it is hard to make. We have a problem OF 3 times 6, not a problem WITH 3 times 6.

Anyhow...

I worked on flight sims in the mid-90s. We had a problem of physics back then, so we didn't do real physics: we precalculated the physics based on a number of factors and loaded the results into tables, and if you were between point A and point B in a table we took the weighted average of the two. It worked well enough.

Animations today do the same thing.
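A sketch of that table approach (all the airspeed/lift numbers here are invented for illustration): sample the expensive model offline into a table, then take the weighted average of the two nearest entries at runtime.

```python
# Precomputed-table physics: solve the model offline, interpolate at runtime.
import bisect

# (airspeed, lift coefficient) pairs, precalculated offline (made-up values)
TABLE = [(100, 0.30), (150, 0.55), (200, 0.72), (250, 0.81)]

def lift_coefficient(airspeed):
    """Weighted average of the two nearest table entries (clamped at the ends)."""
    speeds = [s for s, _ in TABLE]
    if airspeed <= speeds[0]:
        return TABLE[0][1]
    if airspeed >= speeds[-1]:
        return TABLE[-1][1]
    i = bisect.bisect_right(speeds, airspeed)
    (s0, c0), (s1, c1) = TABLE[i - 1], TABLE[i]
    t = (airspeed - s0) / (s1 - s0)       # how far between point A and point B
    return c0 * (1 - t) + c1 * t          # the weighted average of the two
```

The runtime cost is a binary search and one lerp, regardless of how expensive the offline model was.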

Anti-aliasing is now done based on local contrast rather than on the whole image. This works well enough; you notice the jaggies on white-and-black borders far more than on 45% grey next to 65% grey.
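A toy illustration of contrast-gated anti-aliasing (the threshold and blend weights are invented, not any shipping algorithm's): only spend blending work on pixels where the local contrast is high enough to notice.

```python
# Blend a pixel with its neighbours only where local contrast exceeds a
# threshold, so a black/white edge is smoothed while 45% grey next to
# 65% grey is left alone.
def aa_pass(image, threshold=0.25):
    """image: rows of luma values in [0, 1]. Returns a blended copy."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            neighbours = [image[ny][nx]
                          for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                          if 0 <= ny < h and 0 <= nx < w]
            samples = neighbours + [image[y][x]]
            contrast = max(samples) - min(samples)
            if contrast > threshold:  # visible edge: blend toward the neighbours
                out[y][x] = 0.5 * image[y][x] + 0.5 * sum(neighbours) / len(neighbours)
    return out
```

A hard 0.0/1.0 edge gets softened, while a 0.45/0.65 boundary (contrast 0.2) is passed through untouched.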

This doesn't strike me as a well-informed post. Others have pointed out that it is directed at those new to the space; fine, but if that's the audience then explain the "uncanny valley" in there somewhere. Where you can't be "right", be far enough off that people accept it as fake, because it is. We see this all the time from Pixar. We also see it in racing games with "arcade physics".


On the last point, about generating content, I think it's extremely likely that this leads us down the path of voxels and procedural generation. Both techniques together would at least give us a starting point for an environment generated from certain chosen parameters, and then the "least effort" becomes modifying that environment for your game's needs.

I mean, if you can have a procedurally generated jungle with a huge mountain in the middle and a river cutting through the jungle, all you need to do really is to cut through the mountain for your secret underground base, create a couple of roads, camps and checkpoints and you've got a better remake of FarCry 1 (yeah I'm extrapolating of course but you get the idea).
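That workflow can be sketched roughly like this, assuming simple seeded value noise for the terrain (the `carve_tunnel` designer pass and every parameter here are invented for illustration):

```python
# Procedural base terrain + a manual "designer pass" on top of it.
import random

def heightmap(size, seed=42, octaves=4):
    """Seeded value noise: sum bilinear blends of random lattices at
    halving step sizes and amplitudes."""
    rng = random.Random(seed)
    grid = [[0.0] * size for _ in range(size)]
    amplitude, step = 1.0, size
    for _ in range(octaves):
        lattice = {(x, y): rng.random()
                   for x in range(0, size + step, step)
                   for y in range(0, size + step, step)}
        for y in range(size):
            for x in range(size):
                x0, y0 = (x // step) * step, (y // step) * step
                tx, ty = (x - x0) / step, (y - y0) / step
                # bilinear blend of the four surrounding lattice points
                top = lattice[(x0, y0)] * (1 - tx) + lattice[(x0 + step, y0)] * tx
                bot = lattice[(x0, y0 + step)] * (1 - tx) + lattice[(x0 + step, y0 + step)] * tx
                grid[y][x] += amplitude * (top * (1 - ty) + bot * ty)
        amplitude /= 2
        step = max(1, step // 2)
    return grid

def carve_tunnel(grid, y, depth=0.1):
    """Designer pass: flatten one row to make room for a road or base."""
    for x in range(len(grid[y])):
        grid[y][x] = min(grid[y][x], depth)
```

The generator gives you the jungle and the mountain for free; the human only touches the cells that matter for gameplay.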


I look at the loading scenes in GTAV and I wish the game looked like that, rather than the 'uncanny valley' that it is.

Borderlands II is a great example of not trying to do something you can't; instead it has great art that you can lose yourself in.


I've always wondered, given all the issues presented in this article and the expertise required to solve them, whether $50-$60 properly pays the teams working on these games. It's not just engineers; there are artists, actors doing motion capture, and authors writing scripts. It doesn't seem like $60 would cover everyone involved in creating a video game.


You can buy LOTR on Blu-Ray for about $20.


Can someone list the possible technical advancements in gamedev yet to happen?

-> AI programmers are negative towards AI, saying gamers want 'fun', not intelligence.

-> Graphics programmers say the graphics will be photorealistic soon. What will graphics programmers do after that?

-> Gameplay programmers seem to be focusing more on recreating the same styles?


Reminded me of this "What Is Real" trailer (https://www.youtube.com/watch?v=4dnw9dWISvw). I welcome any trick that can help the player's immersion.


I am a long-time 3D graphicist, and have written numerous software and accelerated renderers. I am really struggling to see the point of this article (it seems trivial and tautological to me), although I'll be glad if it generates some discussion.

This is an area where I feel very strongly that perception is reality, or at least so close to it as to make no difference. The game world is virtual, so the fact that say a rock is not made of zillions of atoms computing quantum chromodynamics doesn't come as a surprise to anybody. Well, that's a shortcut right there. Immediately, the game world is not arbitrarily "realistic".

But I don't think that's what the word "photorealism" usually means. I know I've never used it that way. I use it to mean "feels subjectively real". By that definition, as the OP alludes to, today's most advanced game engines come very close. In my opinion, they come so close, that the rest is gravy.

That's what will be coming in the next decade. More gravy, more cake, more icing. In the next year or so, somebody (Sony? Everquest Landmark?) will start the inevitable development toward Minecraft-style procedural/cellular automata based worlds, combined with increasingly photorealistic rendering.


I am really struggling to see the point of this article (it seems trivial and tautological to me)

That's because you're a veteran of the industry he's talking about. I see this as directed at people less familiar with 3D game programming: "This is a reminder that our simulations are less complex than reality by orders of magnitude, despite what you see while playing Crysis."

I agree that the focus on "photorealism" as it pertains to graphical rendering techniques was a bit off, because he was also talking about animation and kinematics, content generation, and good old processing horsepower (i.e. we already know a lot of math that would make more accurate simulations but it can't be done in real-time). Again, the tone of it feels like he was responding to a more populist sentiment (e.g. "photorealistic" catching on as a buzzword maybe), but I think a better term would have been the more general "digital simulation of real-world phenomena" rather than just "photorealism."

I think it's mostly just a "hey guys, we're a lot farther away than a lay observer might think, here's a few reasons why."


> so the fact that say a rock is not made of zillions of atoms

But that's the problem. When the player misses the target with a rocket and hits the rock, it gets a strange-looking explosion and then the rock is either still there, or simply 'removed'. Sense of realism? Lost.

In an open world game, the player might want to pick up the rock. If you took a shortcut and just made the rock an object that sits on top of the ground plane, when you pick up the rock there is normal grass underneath. If you throw the rock at a building, it bounces off and rolls around stupidly. Sense of realism? Lost.

> That's what will be coming in the next decade.

That's what the article is saying - you're deluded into thinking we're close to solving this stuff. There is no way it's coming in the next decade, it is a long way off with a huge number of very difficult problems to solve. This is like the AI debate. AI is always 'just around the corner'. No it isn't, we are nowhere close to even beginning to understand what we actually need to accomplish once the tricks are taken away.


Which is why tons of people play Minecraft: the voxel technology and the ability to deform the environment feel much more real than in many other games. There are some advancements in this field, but really, at the end of the day, the best "deformable field" is still Minecraft. And it hardly even tries to be realistic.

Fortunately, most gamers don't care about realism to this degree. Many FPS gamers want a tight engine and fun mechanics. Bunny hopping was left in the Halo series for this reason. Ditto with super-cancels in fighting games (tell me when MMA fighters cancel their jabs into super-attacks).

Realism for the sake of realism is hopeless. Game programmers need to focus upon entertaining the gamer. Some gamers require a degree of realism... but it should never be the primary point of a game.


I was wondering when someone was going to mention Minecraft because that's a great way of illustrating to people something of just how vast the gulf to be crossed is.

There are graphical mods for Minecraft, but none that bridge the gap from "cube world" to "our world."

And even Minecraft has artifacts like floating islands and limitless water -- the voxels are meant to represent fairly unchanging substances like dirt and stone, and even the trunk and leafy volume of a tree, and they support a limited amount of state being attached to them, but they are not being computed constantly in a way that would make it possible to implement something like erosion or a landslide, or realistic water motion, etc. And when you try to imagine going much beyond minecraft, you realize that, crap, you'd have to be computing the entire (essentially unlimited) voxel world maybe several times a second ...

I think it will be done, but there's a good case for it NEEDING to be done in a massively multiplayer way, because the work of simulating the world becomes too large to be done by any individual user's machine.


Single-player fans can play Dwarf Fortress, which creates a custom 1000+ year history and has the most "realistic" damage formulas I've ever seen:

http://dwarffortresswiki.org/index.php/DF2012:Combat

* Contact Area: Determines the surface area hit by the weapon. Likely in mm2.

* Velocity Multiplier: Effectively increases the velocity of the weapon swing.

* Blunt weapons are all about weapon mass, contact area, and velocity. Apply a large force to a small area for bone crushing goodness.

* Mass is likely material Density times weapon Size

* Momentum is Mass times Velocity

* Velocity is based on the Mass of the weapon, the Strength of the wielder, and the Velocity Multiplier of the weapon

* Any impact must have a conservation of momentum, and thusly, impart the weapon's momentum to the target

* Stress is the Force of the strike divided by the Contact Area

* Material Impact Yield determines the Stress required to dent the armor (likely not used)

etc. etc. etc.
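A toy version of the formulas listed above, with invented units and values (the wiki doesn't fully pin them down):

```python
# Mass = density * size; momentum = mass * velocity; stress = force / area.
def weapon_mass(density, size):
    return density * size

def swing_momentum(mass, velocity):
    return mass * velocity

def contact_stress(force, contact_area_mm2):
    # "apply a large force to a small area for bone crushing goodness"
    return force / contact_area_mm2

# a dense mace with a small striking face (hypothetical numbers)...
mace = contact_stress(swing_momentum(weapon_mass(8.0, 400), 5.0), 20)
# ...beats a light, broad club swung at the same speed
club = contact_stress(swing_momentum(weapon_mass(0.5, 1200), 5.0), 100)
```

Even this crude chain shows why contact area matters as much as mass: the mace concentrates its momentum into a far smaller area than the club.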

Oh yeah, Dwarf Fortress runs quite slowly. It's an ASCII art game that brings computers to their knees.


It's also quite badly coded. And there is absolutely no reason it couldn't do the sim on one core (or more) and rendering on another; there just happens to be no 3D coder on the team, let alone artists.


Badly coded beats not coded every time.

The pair of them tried the whole 3D thing with their previous game, Armok. They said that it limited their options too much, so they decided not to do it. I don't think the core rendering speed has anything to do with that decision.

Besides, there are passable tilesets available by replacing the fonts, support for individual bitmaps for every creature, and, if you don't mind directly accessing the memory, even an isometric renderer (Stonesense). All of these graphical enhancements are made by fans, freeing the two of them to just make the game.

The development log for this game is fascinating and occasionally hilarious. Today he implemented a system that lets refugees from conquered sites turn to banditry if they 'choose' to. A long time back, to test thermal conductivity, he took control of a magma man, grabbed on to an adventurer's head with both hands, and waited for its various tissues to catch on fire / melt.


Dwarf Fortress is memory-limited, not CPU limited. Running DF on multiple cores won't change the fact that the simulator taxes memory extremely hard. Taxing memory even more by including 3d rendering or PCIe transfers to a graphics card will slow down the game.

It's a known fact that Dwarf Fortress is mainly affected by memory latency (not cache size or memory bandwidth). Each simulated agent takes up a lot of RAM, and lots of their information needs to be updated every game tick. Cache optimization is near impossible.

When the single CPU that runs Dwarf Fortress is almost always stalled waiting for RAM, then adding more cores to the mix does NOTHING.


Multiple threads of execution means you can saturate memory instead of alternating between fetches and computations. But I thought pathfinding and fluid simulation were big CPU hogs, and those can cache very well.


I think that the difficulties you would face in trying to make minecraft more realistic are far greater than the difficulties you would face if you were starting from scratch. Trying to make minecraft realistic is like trying to paint over A Sunday Afternoon on the Island of La Grande Jatte to make it realistic. It just wasn't created with that sort of realism in mind, so you are working from a disadvantaged position.


Yeah, I don't mean a literal mod of minecraft. I just mean that it's a great illustration of what'd be entailed.

If you start from scratch, you still need something like voxels, ie keeping track of many points in 3d space representing different kinds of materials, which is what minecraft has now.

But then, from a graphical or behavioral perspective, Minecraft gives you an idea of what problems you'll run up against.

Graphically ... holy cow. I don't know of anyone that's doing that, i.e. taking voxel data and making it look amazing. I did see a demo that looked a bit less blocky than Minecraft; it wasn't cube-based, but rather used a 3D hexagonal mesh, I think. It looked less blocky of course, but it worked with only grass and stone materials.

Behaviorally .. also, holy cow. I'm not sure if there's anyone doing that in a 3d game engine. Ie, voxel or volume world data representing materials, in a way similar to minecraft, but if you undercut a bank of dirt, it collapses ... to do that, you need to be constantly simulating the entire world with CA rules meant to emulate various physical properties of earth, water etc (not just the currently loaded world chunks, and not just a few kinds of very limited material types, like Minecraft's water and lava) probably at least several ticks per second.

I suppose there might be ways to cheat or to legitimately exclude 'all-quiet' areas from needing to be updated, in some cases -- like the dirt example, I can see approaches that would propagate simulation outwards from an area centered on user-generated changes at the maximum speed that the evolution rules transmit information, and that might be able to leave large portions of the world relatively static ... although that wouldn't work so well for things like fluid simulation ... but the point is that, holy cow, we're a long way off!
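The simplest possible version of the "undercut a bank of dirt and it collapses" idea is a falling-sand cellular automaton. This sketch uses one invented rule (dirt falls into empty space below it); a real engine would need many materials, lateral slumping, and some way to skip all-quiet regions, as noted above.

```python
# One-rule falling-sand CA: each tick, dirt with empty space below moves down.
EMPTY, DIRT = 0, 1

def tick(grid):
    """Apply one gravity step bottom-up; returns True if anything moved."""
    moved = False
    for y in range(len(grid) - 2, -1, -1):      # bottom-up so columns settle
        for x in range(len(grid[y])):
            if grid[y][x] == DIRT and grid[y + 1][x] == EMPTY:
                grid[y + 1][x], grid[y][x] = DIRT, EMPTY
                moved = True
    return moved

# undercut a bank of dirt, then run ticks until the world settles
world = [[DIRT, DIRT],
         [EMPTY, DIRT],   # the player just dug out this cell
         [EMPTY, DIRT]]
while tick(world):
    pass
```

Even this trivial rule has to touch every active cell every tick, which is exactly why scaling it to an entire unbounded world is so daunting.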


https://www.youtube.com/watch?v=Gshc8GMTa1Y

Voxel technology is improving, and certainly exists at realistic levels. The question is... what game _cares_ about such details?


Check out The Powder Toy. It's everything you discuss, but in 2d only. And it really churns CPU for such a simple-looking game!


Many of these "non-realisms" in Minecraft are specifically for better gameplay.


We were deforming the world in Populous in 1989 ... and people drowned.


https://news.ycombinator.com/item?id=7424517

I know. Early 90s gaming was awesome. Such flexibility in games is a rare sight today.


I have the 5 1/4" floppy of Populous on my shelf (which is why I knew the year). Sadly I can't play it any more because of the 1980s DRM. Even if I could get a 5 1/4" drive working in my modern PC, the game asks you the populations of various world cities, for which the answers are on an un-photocopiable (black on dark red) sheet that came in the game box, which I have sadly lost.

hmm I just looked on GOG and it's $5.99 http://www.gog.com/game/populous

Annoying though, when you have it already :)

I got the three pack for $15, gog bless. Even better, you get free games too including Beneath a Steel Sky - I never finished that either - I am a happy gamer.


I was wrong about the DRM; that mechanism was Sim City's.

Populous asks you to identify a picture, which came on a separate sheet from the manual.

It is disabled in the GOG version (it still asks you the question, but doesn't check the answer).


Since this is inherently a moving-goalposts conversation, I am not going to argue with you. I will say that, speaking as a graphics programmer who has been writing advanced renderers for more than 20 years, I do think the sense of your comment is wrong. In my working life alone, I have seen the state of the art progress from 4-color pixelated 2D, to the current Frostbite/Unreal level engines. I believe that already crosses most of the gap from nothing, to photorealism. I am also thinking more about rendering than all the other things people are discussing in this thread. I also do not think the incredible and dramatic progress in rendering can be sensibly compared to the ongoing slow progress in AGI. Again, we all know that with graphics, it's all an approximation. Perhaps you, being of a later generation, have much higher standards. You don't say what you don't think is coming in the next decade, and you can set that bar arbitrarily high, so I don't see this discussion becoming productive.


I think the point is that a photorealistic scene isn't 'realistic' because you can't really interact with it, it's a painting.

And a world where you can actually interact with stuff, like minecraft, can stagger the most beastly rig, even though the graphics suck from a realistic perspective and your interaction is limited.


Well the article says they don't even come close.

If you focus on screenshots and scripted playthroughs, then maybe. But if you play a simple 3D physics demo where everything can be destroyed, and then Battlefield 4, where only five buildings can be destroyed, the subjective sense of realism is lost. It feels constricted.

An ugly 3d physics demo with hundreds of boxes becomes about as entertaining and feels subjectively as realistic as BF4, if you ignore screenshots and pay attention only to gameplay.

Believe it or not, shooting hundreds of tank and artillery rounds into the environment turns it all into rubble.

A major priority of game studios is marketing, which currently depends heavily on screenshots. That means they'll constrict environmental destruction to one building per level to preserve screenshot fidelity. They're afraid that if they turned down the graphics settings and allowed physics engines to contribute to the subjective feeling of realism, it wouldn't sell enough.

That's an approach some indie games have taken. Naturally it's more efficient to develop. Maybe efficient enough to get rid of $100 million a year teams and be profitable with lower sales.


As I see it, the OP is frustrated that people mistake mere graphics (and in particular, realtime raytracing) as the gold standard for "realism" in games, when there are open problems both more challenging and more important that are still unsolved (truly dynamic behavior, scalable content creation).

I don't know about industry, but in academic CS, graphics is a huge field; by contrast, my impression is that the other problems mentioned are rather neglected (though certainly not unknown). However, graphics and games aren't my specialty; I would love to be mistaken.


No, actually he dismissed realtime raytracing out of hand on the false premise that it still requires manual artist texture creation.

We have solutions for "truly dynamic behavior"; it's called a physics engine. Scalable content creation has solutions too, e.g. 3D scanning and procedural generation.

Real-time path tracing is where it's at these days, not ray tracing anymore.

All of this stuff can go into hardware, and it will within a few years. The main thing holding it back is a lack of resources: billions of dollars are being spent on tweaking hardware and software for cheaty approximations and manual artistic endeavours rather than on advancing the technologies I mentioned.


The article's point is essentially that the progress on graphics is much slower than it would seem.

This is not so obvious, because looking especially at static frames, it's very easy to think that graphics have advanced by leaps and bounds in the last decade. But when I look at the animations in real gaming experiences, honestly, I think there has been very little advancement, which is an example the author mentions.

I personally doubt that there will be any significant advancement in the next decade, assuming the current rate of progress (which is, of course, hard to predict).


Except graphics have improved by leaps and bounds in the last decade, even if they do not live up to the expectations of you or the OP.



San Andreas came out almost 10 years ago and had to run on the PS2 with 32MB of ram.

GTA 4 came out 5 years ago. http://media.gtanet.com/images/4488-gta-iv-screenshot.jpg


Ugh, I knew that. Have no idea why I typed four. I was going for a decade (2004).


GTA3: 5 cars and 5 people in one frame

GTA5: Static world and a single car


One of the things missing from 'realistic' games is specular reflection and friends. Little subtleties like that help. One of my favourite moments in the realism wars was that phase where everyone had to have 'realistic' water, and I remember logging into Morrowind for the first time. "The water does look nice, shimmery and shiny"... but it drew attention to the problem that nothing else in the game shined at all, it was all dead flat.



