Raytracing won't simplify AAA real-time rendering (c0de517e.blogspot.com)
128 points by Koiwai on Dec 28, 2020 | 185 comments



There's a really important point to grok: AAA titles are more about asset management and art than they are about coding. There is no silver bullet for simplifying the creation of games and artwork. Roblox is as close as you get today because it takes a ton of the work out of making many kinds of games and gives children enough templates and community-created free art to get started rapidly. Teens and young adults who started on Roblox have made some very impressive games: World // Zero, Arsenal, and Bee Swarm Simulator all come to mind.

If your game is close to a starting template, it's fast to create something fun, and that lets you focus on iterating with players. The further away you are, the more effort is involved. At some point the effort becomes equal to or greater than on the other platforms; however, most kids learning on Roblox don't have the skills to start with Unity or Unreal Engine.

Taking a step back and shifting back to the GPU hardware market, NVIDIA CUDA is the Roblox of the GPGPU world - a one-stop shop for really great templates and tools that get you 90% of the way to your scientific goal. That last 10% can actually be more like 90% if you're in an area where the platform is missing something (this is universally true for all platforms).

The computing industry is full of tradeoffs and people re-learning and re-creating patterns to solve similar problems to those that were solved 5, 10, 15, and 20 years prior.


> There's a really important point to grok: AAA titles are more about asset management and art than they are about coding

If that were true there wouldn't be so many logic and code bugs in AAA titles.

From what I've seen, the game industry has terrible software engineering practices - why have automated testing when your model is to crunch to release and then leave a skeleton crew fixing the bugs after you've shipped?

Being stuck in C++ doesn't help either: an ecosystem with, bizarrely, the most complicated frameworks I've ever seen (e.g. Boost) and yet the worst tooling of anything I've used with a comparable adoption rate.


You're misunderstanding what the parent poster was saying: when working on a large AAA game, content management (art, game design, etc.) is as much a bottleneck as engineering efforts. And a number of those bugs you're concerned about are rooted in content too, not lower level engine bugs.

I've worked about half of my career in the game industry. I've practiced TDD and written automated tests (and frameworks) for desktop, web and mobile apps. Some of those have been in the medical industry where the testing is crucial. I say this to make it clear that I'm familiar with solid software engineering practices.

With that in mind, games are the hardest software I've encountered for writing automated tests. It's just notoriously difficult to do in an effective manner. It's not impossible, but it's incredibly difficult.


>I've worked about half of my career in the game industry. I've practiced TDD and written automated tests (and frameworks) for desktop, web and mobile apps. Some of those have been in the medical industry where the testing is crucial. I say this to make it clear that I'm familiar with solid software engineering practices.

Did you work in the game industry before the other stuff or after? Because I went that way - game dev -> application development - and frankly a lot of the software engineering practices I brought from game dev turned out to be terrible in the transition. Not because good practices couldn't have been applied in game dev, but because I didn't know about them, nobody around me told me, and I hadn't seen them in other people's code.

>With that in mind, games are the hardest software I've encountered for writing automated tests. It's just notoriously difficult to do in an effective manner. It's not impossible, but it's incredibly difficult.

There's a ton of low-hanging fruit - running recorded controller inputs, partial scenario tests, gold-copy rendering tests, smoke tests, regression testing - frankly it's not that hard to raise the bar from 0. I'm not up to date on the industry, so maybe they aren't at 0 anymore, but from keeping tabs and occasionally playing games I would say it hasn't moved far.

Just the number of regressions in MMOs, for example - where you could easily write tests for the stuff that was fixed - makes it obvious that nobody was doing regression testing or adding tests after fixes. And this is in MMOs, which have an incentive to keep a healthy codebase (not just ship and forget).


Everything you have outlined was standard practice at studios I have worked at for more than five years. The lower in the stack you go, the easier it is to do things like unit tests. In my experience the section of the game industry that is the easiest to test in an automated way is AAA mobile. Where I work in “HD” AAA, it is considerably harder to test in the same ways but we do where it is effective. Don’t mistake the failures of some games or studios as a valid indictment of the industry.


> Don’t mistake the failures of some games or studios as a valid indictment of the industry.

It's an industry that intentionally minimizes code and best-practice sharing for fear of talent and project poaching. It's an industry where the "best practice" is still to reinvent as many wheels as possible every other game and open-source little to nothing. The worst mistakes of the worst games and studios should likely remain a valid indictment of the industry as a whole when the industry itself is so focused on making sure the tide rises as few boats as possible.


It's easy if the game engine you're using displays an in-memory model driven by deterministic algorithms. I have only briefly worked in the game industry and it's been more than enough for me :D. There were like 4-5 people with "10+ years" of experience working on a Candy Crush clone. It was literally riddled with bugs, and they used Unity's physics engine to make the stones fall (I mean, it doesn't get any more ridiculous than that).

So I came in, threw it all away, and within 2 days built a deterministic, fully tested and testable stone engine that implemented all the effects management wanted to see, some of which would have been close to impossible to implement with Unity's physics engine. The idea is basically to use the smallest time unit you want to support, and then base all algorithms on that. You don't work in milliseconds, because that is for the engine to fill out. You work in seconds, for Candy Crush at least. Each time step is basically a second. The time in between is filled out by potentially non-deterministic animations and particle effects and whatnot. But every second, the whole scenery synchronizes with the deterministic engine that drives it all.
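A minimal sketch of that loop in C++ (the names here are illustrative, not from the actual project):

    #include <cstdint>

    // Deterministic game state: advanced only in whole ticks, never from frame time.
    struct BoardState {
        int64_t tick = 0;
        // ... stone positions, match state, etc.
        void Step() { ++tick; /* apply one tick of deterministic rules */ }
    };

    // Frame loop glue: accumulate real time, step the simulation in fixed increments,
    // and hand the renderer an interpolation factor for the in-between animation.
    void RunFrame(BoardState& state, double& accumulator, double frameSeconds) {
        const double kTickSeconds = 1.0;  // "each time step is basically a second"
        accumulator += frameSeconds;
        while (accumulator >= kTickSeconds) {
            state.Step();                 // deterministic and unit-testable in isolation
            accumulator -= kTickSeconds;
        }
        const double alpha = accumulator / kTickSeconds;  // 0..1, used only for visuals
        // Render(state, alpha);  // animations/particles interpolate; they never feed back into state
        (void)alpha;
    }

Because Step() never touches wall-clock time or rendering state, tests can drive it tick by tick and assert on the resulting board.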

While I didn't have the chance to try this on shooters or MMORPGs, I think it would be a perfect fit, especially for MMORPGs. It solves sooo many problems; I would probably need a day to list all the benefits. Are there drawbacks? Probably, though I can't think of one right now, except that it goes against everything normal game developers believe in.

I think at some point, the gaming industry forked away from solid software engineering practices.


It's a lot more difficult than you make it out to be.

I'm a former game engine lead from an Activision studio. The whole engine architecture was my responsibility, testing included.

You can easily write tests for all the deterministic stuff: math, physics, save games, utility code, filesystem code, etc.
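For instance, that deterministic layer can be covered with plain unit tests without booting the whole game. A minimal sketch (the helper functions are hypothetical stand-ins, not any engine's actual API):

    #include <cassert>
    #include <cmath>

    // Hypothetical deterministic utility from the engine's math layer.
    static float Clamp(float v, float lo, float hi) {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    // Hypothetical save-game round trip for a single field.
    static int EncodeLevel(int level)   { return level ^ 0x5A; }
    static int DecodeLevel(int encoded) { return encoded ^ 0x5A; }

    int main() {
        // Math behaves the same on every platform and every run.
        assert(Clamp(5.0f, 0.0f, 1.0f) == 1.0f);
        assert(Clamp(-2.0f, 0.0f, 1.0f) == 0.0f);
        assert(std::fabs(Clamp(0.25f, 0.0f, 1.0f) - 0.25f) < 1e-6f);

        // Save data survives a round trip.
        for (int level = 0; level < 100; ++level)
            assert(DecodeLevel(EncodeLevel(level)) == level);

        return 0;  // all checks passed (asserts abort in a debug build on failure)
    }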

However, where it gets difficult is testing the content. Every platform has quirks which force you to create art custom-made for that platform (your other choice is to use the minimal subset of what they all support). If you want a good-looking game, your content has several versions, so now things like intersection and pathing code are slightly different between platforms. GPU behavior is different enough that using GPUs for anything results in multiple test code paths. AI isn't always fully deterministic, by design, making it even more difficult to test. So, what you have in the end are some deterministic tests to make sure your foundation is sound, "fuzzy" tests to catch many problems in the higher-level constructs, and your final line of defense is play testers, which is an awful job!

Now, we engineers know how to make games stable and reliable, and our time estimates are frequently at odds with the PMs'. You MUST make Christmas, which in the days when I worked - when CDs had to be mastered - meant a code freeze and asset freeze sometime in October. So, in this mad scramble to make a hard deadline, quality suffered.

It was much more important in the old days (PS1, PS2, GameCube, N64, etc.) to get it right the first time, as you couldn't ship patches, but the notion of patching games allowed release versions to be more buggy, since people kicked the can down the road. Granted, the earlier consoles only supported simpler games, and the difficulty was dealing with the particulars of the system, not the game itself. The PS1 had no Z-buffer, the PS2 had a CPU/GPU combo built by a crazy person with a PS1 as its IO controller, and the PS3 required you to write DMA management code, while the GameCube had a memory model that was nuts and matrices were only 4x3 so you couldn't do projections. So, you spent most of your time dealing with these quirks and the game ended up simpler due to the effort being spent elsewhere.

Now it's much easier; consoles are effectively PCs with powerful general-purpose CPUs and GPUs, so it should be possible to write high-quality, reusable engines.


I've always wondered why big studios don't employ some people that just write bots to play through scenarios etc.

I mean, the biggest issue with creating good bots is getting the required data to make decisions such as "you're getting attacked, you need to use this ability" etc., but if you're actually the company writing the game, there shouldn't be any issue creating APIs for that. Or even going directly into memory and reading it out - you don't need to worry about getting banned, after all.

Imagine integration tests in the form of bot scenarios. Is that too much overhead to implement in the usually very budget-oriented setting of game development?
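As a rough sketch of what such a bot-scenario test could look like, assuming a hypothetical headless build of the game that exposes state queries (none of this is a real engine API):

    #include <cassert>
    #include <string>
    #include <vector>

    // Hypothetical interface a headless test build might expose to bots.
    class HeadlessGame {
    public:
        virtual ~HeadlessGame() = default;
        virtual void LoadLevel(const std::string& name) = 0;
        virtual void Inject(const std::string& input) = 0;   // e.g. "attack nearest"
        virtual void TickSeconds(double seconds) = 0;        // advance simulation, no rendering
        virtual double PlayerHealth() const = 0;
        virtual bool PlayerInsideGeometry() const = 0;       // collision sanity query
    };

    // One recorded scenario: walk to an enemy camp, fight, and survive.
    // Invariants are checked after every step, not just at the end.
    void CombatScenarioTest(HeadlessGame& game) {
        game.LoadLevel("forest_camp");
        const std::vector<std::string> script = {
            "move_to camp_entrance", "attack nearest", "use_ability heal", "attack nearest"};
        for (const auto& step : script) {
            game.Inject(step);
            game.TickSeconds(2.0);
            assert(!game.PlayerInsideGeometry());  // never clipped through a wall
            assert(game.PlayerHealth() > 0.0);     // scenario is tuned to be survivable
        }
    }

The scripting half is the easy part; as the replies below point out, deciding which invariants are actually worth asserting at each step is where it gets hard.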


You can write a bot player that follows a script or applies some simple decision rules. The hard part is detecting whether or not the game is operating correctly at each step. It's not like a simple web application where things happen in discrete steps and you can inspect the DOM to verify that it contains the right nodes.


Lots of big and small studios have tried to automate game testing. They are interested in it, so if it seems easy and fun you should do it, I’m sure there are jobs out there writing game testing bots.

That said, it would be wise to assume that if you have not heard much about this and wondered why, the answer is probably that it’s harder than you think.

Having been a game lead for a decade, I can safely say that getting game telemetry into the test bots is not the hardest part of the job, it’s one of the easiest things to do. One example of something much more difficult would be testing the game while it changes. Don’t forget that writing a cheat bot for a game that’s done is nothing like testing games that are in development and changing every day.


How would it tell that something is wrong? Like, it went through a wall, and the app let it do so - what’s the problem?

Testing non-deterministic code paths is a really hard problem because you don't really know what to test for.


Extremely bad take, from somebody who has clearly not worked in AAA games, but believes all the things that gamers post on reddit. A few points:

1) you can only test-driven-develop so much in games, and that line usually stops at the engine level because the game itself is in flux so much. Automated testing is confined to making sure that checkins build on every platform. Game dev engineering is fundamentally different than programming in other fields because the goal posts move constantly.

2) If an engineer writes code expecting there to be no more than 1024 physics objects in the system, tells the designers and artists this, but then they turn around and put 2000 colliding pieces of silverware on a dinner table "because it needs to feel like a big feast", is that an engineering problem, or an art and asset management problem? Because something like 80% of my bugs are shit like this (the kind of budget check that catches it is sketched below).

3) Professional game codebases use their own styles (I hesitate to say dialects) of C++ that do the things we need them to do in the ways we need to do them. We don't use anybody else's framework; all of the bonus stuff we're doing lives in macros that can be inspected if an issue arises. But please don't push your language purism on anybody else. What a tired argument to have.
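On point 2: the kind of guard rail that catches that class of bug early is a budget check in the content pipeline or at level load, rather than a note in a design doc. A minimal sketch (the limit and the structure names are made up for illustration):

    #include <cstddef>
    #include <cstdio>
    #include <string>
    #include <vector>

    // Hypothetical description of a level as seen by the asset pipeline.
    struct LevelManifest {
        std::string name;
        std::vector<std::string> physicsObjects;  // one entry per rigid body placed by artists
    };

    // The budget the engine code was written against (the number from the comment above).
    constexpr std::size_t kMaxPhysicsObjects = 1024;

    // Returns false (and reports) if a level blows the physics budget.
    // Run during the content build so the dinner-table feast fails loudly before it ships.
    bool ValidatePhysicsBudget(const LevelManifest& level) {
        if (level.physicsObjects.size() <= kMaxPhysicsObjects)
            return true;
        std::fprintf(stderr, "Level '%s' has %zu physics objects (budget is %zu)\n",
                     level.name.c_str(), level.physicsObjects.size(), kMaxPhysicsObjects);
        return false;
    }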


None of your points are unique to game development. Moving goal posts and requirements changing constantly is a challenge at literally every single development job I've had. There is an eternal back and forth between what the sales team says the software can do, and what the programmers desperately attempt to account for late in the development cycle. Plenty of teams have idiosyncrasies around their tool chain, refuse to reflect on it and maintain that their project is a special case to justify all the mess.


Let me first say that game developers should do a better job about testing, especially when it comes to developing isolated systems to support unit tests. But it certainly seems like you misunderstand how much change there is in the vast majority of game development.

> Moving goal posts and requirements changing constantly is a challenge at literally every single development job I've had.

It is hard to describe because it sounds the same. But game development really is different, because there are no fundamental, stable things you care to test.

Even doing a combat sequence requires a herculean effort. You need basically the whole game running, because otherwise what is the point? You fake out most of the data so your test doesn't fail when the designers decide to make combat harder. But now it turns out that pressing A defends instead of attacks, because reasons, and all of your combat tests fail.

This is all fixable, but it makes the cost per test of anything but the tiniest things high enough that broad test coverage can sometimes be a detriment: you end up testing what the game is now, which means it will all be thrown away if your assumptions change.

Sure that can always be the case but "the sum of the lines equals the total" kind of tests are much less likely to backfire in this way.

> Plenty of teams have idiosyncrasies around their tool chain

C++ is chosen because the tooling doesn't exist outside of C++. Full stop. Otherwise you have to build all the tooling in language X, which, when we're talking about 3D graphics, is a huge amount of tooling.

Rust is starting to have some cool stuff but if you compare you will see there is a world of difference.

Thus if you choose something other than C++ you get to write C wrappers around your API and deal with all that nonsense since C++ interop is the worst in most languages.

At some point Rust will get proper C++ interop and then the gap will be smaller but for now you are giving up a ton for a slightly safer language by not choosing C++.

Also note that nearly everybody writes a huge amount of non-C++ code, they just call it a scripting language instead.


A recurring thing in indie dev over the past decade is that some programmer writes up a blog post about their "fully test-driven" game.

It always turns out that the game has a trivial state-of-the-art-circa-1980 design with a very small featureset. Nobody doing this is also writing a large RPG or even a Mario style platformer.

So, you can do it, but you spend a massive amount of "scope points" doing it.

The language tooling is much the same way. To do games - big or small - you need lots of I/O handling, and this immediately leads you towards talking to the OS directly, which leads you towards either C or C++ because that's where the tools and resources are. You can get a binding of SDL or whatever for your language, but that's effectively limiting the scope of your engagement with I/O - if the framework doesn't work, you have to debug it across a binding layer which is always iffy. And it can really hurt when talking about console dev.


Off-topic, but may I ask you some game/OpenGL-related questions?

I’m attending a computer graphics course and have to write a Minecraft clone - and I don’t yet “feel” the relative performance of the CPU vs the GPU, so sometimes I have trouble deciding which one to push.

Like, am I allowed to call glDraw* multiple times for different objects when the data is already on the GPU, or should I try to push somewhat dissimilar objects into the same buffer and write more complicated shaders differentiating them (without ifs, if possible)? Or are GPUs so performant nowadays that unless I do something stupid I shouldn't worry about it?


> Thus if you choose something other than C++ you get to write C wrappers around your API and deal with all that nonsense since C++ interop is the worst in most languages.

An example of this is the Swedish game dev company Embark Studios who, after a significant amount of effort, managed to get NVIDIA PhysX (which is widely used in AAA games) working with Rust [1].

[1] https://www.youtube.com/watch?v=RxtXGeDHu0w


You'll have to search very long to find a C++ game code-base that uses boost, game devs are not that stupid ;)

Also, Unity games are usually written in C# (I think it's quite safe to say that - overall - most games are not written in C++ but in C#), yet I've seen no data so far indicating that Unity games have any fewer problems than games written in C++ during production and after release (if anything, the opposite seems to be true, not for technological reasons, but because Unity is so much more beginner-friendly).

I'm no fan of C++ either, but blaming a programming language for bugs and quality problems without any counterexamples at hand is a bit ridiculous.


I'm not sure why you say it's a "quite safe" assumption that most games are written in C#. Unity's beginner-friendliness gives it a disproportionate presence online, while the vast majority of AAA games are still solidly C++.

If we eliminate all games with fewer than 1000 sales or something, I think it would be a very low-confidence estimate.


I think perhaps they meant "most Unity games", where Unity was supposed to be implied by the context. It probably is safe to say most Unity games are C#, if we ignore the portion of the engine runtime that is written in C++, which makes sense for some metrics and not others.


> You'll have to search very long to find a C++ game code-base that uses boost, game devs are not that stupid ;)

No need to search; Factorio used to use Boost: https://factorio.com/blog/post/fff-223


The context here, from the top-level comment, is AAA game companies and the developers working for them, though. In that context, the fact that an indie developer came to the same conclusion after a while is probably just more evidence for the original point.

In a larger studio, even if code sharing isn't common as is expressed here, at a minimum having someone around that sees boost and says "don't do that" is probably a given.


> You'll have to search very long to find a C++ game code-base that uses boost, game devs are not that stupid ;)

Boost has over 160 libraries and counting. I wouldn't recommend every one of them (some have wacky interfaces, slow compile times, and/or experimental designs), but many of them are excellent, and I don't think it's very difficult to tell them apart.

Regardless, I find your insinuation that Boost users are "stupid" to be extraordinarily uncharitable to library users and developers.


> Unity games are usually written in C#

That is true in the most literal sense, but Unity has adopted a programming model that makes much of the normal tooling around .NET useless, and abhors automated testing.

Adopting good testing practices in Unity still largely revolves around developing parts of your game as .NET libraries that get tested before being built and integrated into the game, and it's not too surprising that many developers don't go down this route.


I'm blaming C++ because:

- there is very little information in the community on how to do this kind of engineering efficiently (at least I haven't encountered it nearly as much as I have when I transitioned to application development in higher level languages)

- there is very little code sharing in the community and zero standardisation - everyone reinvents shit from standard library, coding conventions, what subset of the language is "allowed", etc. etc.

- this means developing good tooling, practices and patterns across large projects is hard

Unity itself is written in C++; C# is the scripting layer. More importantly, I doubt the Unity developers writing C++ are C# engineers with a C# application development background, where stuff like automated testing is pretty standard.


> there is very little information in the community on how to do this kind of engineering efficiently

If you aren't a huge studio writing a AAA game or thereabouts in complexity you probably can't write a game engine efficiently.

The number of places that do have that scale and need help on how to be efficient is probably vanishingly small so not a lot of self help pops up.

> (at least I haven't encountered it nearly as much as I have when I transitioned to application development in higher level languages

Pick an engine and use its language, whatever that is. Unity gives you C#. Unreal gives you a C++ dialect with some nice features and the Blueprint visual scripting system, which is quite powerful.

> there is very little code sharing in the community and zero standardisation

From my experience there is little code sharing within a studio so expecting there to be code sharing across the entire community is hard.

Assuming you are looking for high performance (AAA) you basically have to write a bespoke engine for your gameplay. And there are a lot of different ways of doing gameplay. Heck whether you have loading zones or continuous loading makes a huge impact on how some fundamental things work (again if you aren't huge there are workarounds that make it easier to generalize in exchange for performance)

> everyone reinvents shit from standard library, coding conventions, what subset of the language is "allowed"

The standard library has some huge performance problems for video games, mostly around allocation rules. Unsurprisingly, algorithms tuned for millions of things can have bad performance side effects on tens of things (and vice versa).

> this means developing good tooling, practices and patterns across large projects is hard

To be fair I think this is generally true. Games just tend to build up their codebases super fast but maintaining any huge code base is a pain in the best cases and it is never the best case.


>I think it's quite safe to say that - overall - most games are not written in C++ but in C#)

Not even close.


> From what I've seen, the game industry has terrible software engineering practices.

You'd be surprised. Here's a talk from Croteam on how they test their games and engine: https://m.youtube.com/watch?v=YGIvWT-NBHk

I'd wager all major engines are exhaustively tested.

Trouble is, there's a combinatorial explosion of game state, user input, assets, scripted behaviour and engine, so there's a huge area to cover.


I mean he's basically saying what I am - they are the exception, very few teams are doing it, no public information on how to do it or best practices, everyone reinvents everything on their own from scratch.

I'd wager the popular engines are well tested because of the number of titles shipped on them not because they have good testing automation - but TBH I haven't worked in this industry for almost 10 years so maybe things changed.


10 years ago was right when "TDD" became hugely hyped. Before that, test automation was patchy throughout the software world, not just in games.

I believe the same is true of "best practice" today: game studios aren't actually behind the curve, you just don't hear much about how things are progressing on this end because most of the conference talks aren't about broad concerns like testing, they're about the myriad specialities of the field.

And there absolutely is a legacy-code thing that hinders AAA in many cases. When the engine is old, that's good, because it's shipped something, but it's bad, because it's using older practices and Things Have Moved On.


That's a fair take too - maybe TDD wasn't as widespread in general, so I just got on board when everyone else did. Although I should note I'm not a fan of TDD and it's not something I would recommend for games or anything similar - it's a very narrow tool. I think you agree, since you put "TDD" in quotes; I just don't want to make it sound like I'm recommending it. I'm a fan of automated testing.


At least UE4 is not well tested at all. Most of the code doesn't have any decent tests. It's all "developer writes the code, runs it once on their machine, and then pushes to prod" type of code.


That isn't really true; Unreal uses assertions heavily, see https://docs.unrealengine.com/en-US/ProgrammingAndScripting/...

Unit tests etc. aren't that useful in games, so no, you won't find much of that stuff.


Why would you say something like this?

With everything else being equal, unit tests are as useful for games and game engines as they are for any other type of software. If you care about your software being correct (or let's say less incorrect) you need to test it, and preferably you write automated tests. Games (and especially game engines) aren't really an exception to this by default.

(Of course you can just say that this game is not worth the effort of getting better quality so we don't care if it's a bug fest and that's fine).


I have never found unit tests particularly useful outside of constrained situations where you have predictable input and output and an initial state that is easily reached.

Getting a game (or engine) into a particular initial state so you can make the unit test even work would be a massive pain in the ass.

Heavy assertions are more useful, as they test actual running code and are executed every time the program is run. You can still write "unit test"-like code using assertions, by having some code that only executes in the development build, on startup, to check your math library or whatnot.
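A minimal sketch of that pattern - a dev-build-only startup self-check written with plain assert (a real engine would use its own assertion macros; the math routine here is just an example):

    #include <cassert>
    #include <cmath>

    // Example engine math routine: interpolate between two angles along the shortest arc.
    static float LerpAngleDegrees(float a, float b, float t) {
        const float delta = std::fmod(b - a + 540.0f, 360.0f) - 180.0f;  // shortest signed arc
        return a + delta * t;
    }

    // Called once at startup in development builds. assert() compiles to nothing when
    // NDEBUG is defined, so shipping builds pay no cost for these checks.
    static void RunMathSelfChecks() {
        assert(std::fabs(LerpAngleDegrees(0.0f, 90.0f, 0.5f) - 45.0f) < 1e-3f);
        assert(std::fabs(LerpAngleDegrees(350.0f, 10.0f, 0.5f) - 360.0f) < 1e-3f);  // wraps through 0
    }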

I'm not defending Unreal - I don't particularly like that engine, but that has more to do with the code's age and bloat.


That doesn't sound right. Unit testing happens very close to the unit you want to test, i.e. a class or function most of the time. Most of these should be designed with testing in mind so that they're actually something you can write unit tests for. And there ought to be a whole bunch of classes/functions/components that can be tested this way, such as scripting code, networking code, audio code, etc.

It's not hard and definitely not impossible. It just requires a mindset oriented towards quality and testing.


Is that UE4 or games written in UE4? I can totally believe it for games written in UE4, but the engine itself? Doesn't pass the sniff test.


I've been working professionally on a UE4 game with a license to the full UE4 engine codebase, I can confirm that the engine has some unit and functional tests, but the overall coverage is abysmally low.


https://docs.unrealengine.com/en-US/TestingAndOptimization/A... is the overview of the automation system used for unit testing, feature testing, and content stress testing.


There's some built-in testing functionality for games, but there aren't many tests for the engine itself in the source package.

Oh and I spent about 3 years working with UE4. The engine is a bug fest.


> If that were true there wouldn't be so many logic and code bugs in AAA titles.

Those are pretty different concerns tackled by different groups of people on AAA projects. The art requirements can be an order of magnitude larger than the gameplay side of things. That doesn’t make the gameplay side of things easy.


My impression is that asset creation is more scalable (even if labor intensive) and they figured out a way to manage it - I haven't seen a AAA title with asset issues, and from what I understand hiring a bunch of asset creators is cheaper than hiring developers to automate their work.


Feels like the inevitable rise of AI powered content generation will at least free up _some_ resources at some point, right?


During the indie dev renaissance of the 2010s, procedurally generated content received a lot of investment and attention. (Today's marketers would call this AI-generated content.)

The idea was that your small indie team could keep up with the big content demand since you had algorithms generating seemingly evergreen content for your players.

What happens in practice is that the players begin playing your game at a meta level, learning how the generative process itself works; effectively removing the benefit of procgen content. As an example, consider Spelunky which claims it can generate millions of unique caverns to explore. If you assess that claim visually, it's true. But watch the streamers play and you will see that they 'speak' the algorithm. By observing the shape of a cavern in one area of a map, they know the algorithm had to make a specific concession elsewhere in the map. So this content isn't really procedural for them anymore.

Even if the "AI" content generators get more intelligent, it won't free up resources. Mechanically interesting content, as demonstrated by the indie games of the last decade, is just a different form of handmade content. A designer hand made the procgen algorithms. Players ultimately bond with the designer(s), no matter what meta level they build the content at. In a hand-built game you might start to get a sense for where the designers hide treasure chests. In a procgen game you learn the algorithms themselves and how to predict and abuse them.

There's another form of game content, story content, which remains an edge for humans. Algorithms, or "AI" if you must, can't compete here. It would be the same as waiting for AI that can write award winning movie scripts.

The marriage of story content and mechanical content is a superior game-making formula to the procgen approach. So the _some point_ you refer to is probably far away still.


Your claim that people learning to play the game "effectively removes the benefit" of procgen content is false. Games like Spelunky are played by people for much longer than the average indie game precisely because the process of learning the meta-game takes time but is still engaging.

If you want to calculate how much "AI generation" or "procgen" is freeing up resources you also want to look at how long people are playing these games for and how many resources were used in making them. If indie teams of 2 or 3 can collectively make the world play their games for longer than it plays games made by much bigger teams, then that's a definite freeing up of resources. And this is what's happening today to some extent.


And playing any game beyond a single go-through gets to meta-game exploration, be it arms races for "edges" in competitive play, or learning deep nuances of the terrain/playpen over multiple plays.

The best, most enduring games all seem to have procgen, randomization, or a "construction kit" for slower procgen: player-made levels and curation.


>If indie teams of 2 or 3 can collectively make the world play their games for longer than it plays games made by much bigger teams, then that's a definite freeing up of resources.

Pedantically, wouldn't that be increasing the consumption of resources defined as man-hours?


> >If indie teams of 2 or 3 can collectively make the world play their games for longer than it plays games made by much bigger teams, then that's a definite freeing up of resources.

> Pedantically, wouldn't that be increasing the consumption of resources defined as man-hours?

Only if you remove the distinction between paid employee man-hours and consumer man-hours (which, economically, is an externality, yes, but generally treated as a positive one, called 'engagement').


While I do hold out hope that procedurally generated _gameplay_ takes off and increases the size of game worlds, I was more just assuming that something akin to deepfake tech could be applied to some of the rote tasks of decorating levels, doing quick first drafts of character designs etc. I'm aware this is already happening for some types of animation and rigging.

I think it's right to be ambitious in the long term though. Procedurally generated levels in 10-20 years are going to be wildly more advanced (and satisfying, and unpredictable) than now. I also wouldn't bet against award winning movie scripts in my or my kids' lifetimes.


I also hope we're very close on good text-to-speech, which I think ought to be transformative. I have often lamented the transition from textual dialogue to voice actors, especially in RPGs, because it effectively limited the total amount of story available, and also killed off all sorts of interesting UIs that existed before, based on keywords and even natural language. Coupled with something vaguely GPT-ish, you could have exponentially more in-game flavour, leading to much deeper immersion (this coming from someone who collects all the books in Elder Scrolls games).


Oh true, there's a lot of computer assisted tools for level designers and it has definitely made game worlds feel more realistic. The strategy is to let the computer generate a baseline and then hand-tune from there.

Humans aren't really good at making an area feel wild or natural... we stink up the place with hidden order. So algorithms that generate terrain, tree placement, tree shape, etc are already outperforming humans because their form of pseudorandom is more natural-feeling than our own.

I don't know how familiar you are with the industry, so apologies if I'm over-explaining, but check out SpeedTree if you want to see one of these products. It's really cool.


Not only that, but our indie darlings have started to take shortcuts en masse. The promise of roguelike games used to be "no two games are alike". Today, multiple players criticize Noita for not having permanent unlocks. Steam threads like "When does the roguelike stuff kick in?(...)I died a couple of times and I always start from the beginning.". Or a Metacritic comment rating it 5/10 saying it's deep and interesting but lacks the meta-game unlocks all roguelike games have nowadays. That's what people expect. AAA games are known for usually having only one path through the game because "why waste time making content if the player won't see it all". Indie developers are actually using the same logic. Why waste time tweaking the algorithm to make unique runs if you can HARDCODE runs to be unique by providing a new unlock every now and then...


Procedural generation tends to spread out content. It's still based around hand designed content and once you see enough of it you notice repeating patterns.


Jevons' Paradox [0] suggests that it won't; the freed-up resources will just be used for other things.

Tooling for 3D modeling/texturing/rigging/etc is significantly more complex and powerful than it was 20 years ago, yet Pixar doesn’t need fewer artists for a movie today compared to Toy Story - in fact quite the opposite.

AI techniques useful to artists will get folded in the tooling and enable artists to make even more detailed/complex games & movies, but that doesn’t mean the AAA games of 2030 will require fewer artists.

However, talented small teams will likely be able to leverage them to create things that would have been inconceivable from a small team a decade ago.

0. https://en.m.wikipedia.org/wiki/Jevons_paradox


A more specific version of Jevons' paradox for the VFX industry is Blinn's Law [0], which states that "rendering time tends to remain constant, even as computers get faster."

[0] https://en.wikipedia.org/wiki/Jim_Blinn


Tools to make art content have gotten dramatically better in the last 10 years. Systems such as Substance, which create algorithmic textures to replace manual texture creation in Photoshop, have resulted in at least a 10x increase in productivity.

Naturally, one may wonder if this has resulted in the labor market disappearing for artists. But it hasn't. The demand for high quality art skyrocketed because the tools economically allowed for it. (In some ways the market for artists has gotten smaller, because there is less appetite for low-skill Photoshop monkeys, but highly skilled technical artists are much more in demand, because we are starting to see dramatic differences in productivity between artists just like we have seen in engineering.)

So, yes, we will see AI in content generation, but it isn't going to take the form of removing art bottlenecks. It will manifest in new tooling for artists, who will become more productive; there will be even fewer artists who have good mastery of these tools, and the demand for quality will increase further. Which would lead to bottlenecks similar to today's, though I believe artists will be paid better, and it will become even tougher to break in.


> Naturally, one may wonder if this has resulted in the labor market disappearing for artists. But it hasn't. The demand for high quality art skyrocketed because the tools economically allowed for it. (In some ways the market for artists has gotten smaller, because there is less appetite for low-skill Photoshop monkeys, but highly skilled technical artists are much more in demand, because we are starting to see dramatic differences in productivity between artists just like we have seen in engineering.)

But are the artists treated better than they were a decade ago?

The wife of a good friend of mine is a digital artist. She works on films. The pay in her field is crap, as are the working conditions - meanwhile, competition for paying jobs is fierce.

When computers became an order of magnitude cheaper, the demand for, and the pay rate for computer programmers sky-rocketed. It doesn't seem like the same thing happened with artists.


It's a good question. I don't know about film, but I do have visibility into the game artist market.

The pay is OK but not fantastic: generally in the 60-80k range, depending on experience. Many of the very low-paying jobs (the $15/hr types) have disappeared. And I do know decent artists who frequently get recruiter calls, albeit usually from a couple of large, poorly managed studios that have trouble filling positions.

I think it depends a lot on the exact nature of the work being done as an artist. If it's more technical, it seems to demand better wages, but in games there are still some fairly nontechnical roles that are either paid poorly or farmed out to offshore agencies.

Work conditions still aren't great, but this is mostly at bigger studios.


It probably already is. Texture de-lighting is already on the horizon; combined with infinite texture scaling, this will be a great help for artists.


Yeah, I feel like AI-powered content generation tools which can automate a lot of the grunt work will be the real game-changer.


So that demand can expand to fill all available space? Yes.


I wonder if that's a little "be careful what you wish for"...


> In fact, it's easier for AAA to become relatively irrelevant (compared to the overall market size - that expands faster in other directions than in the established AAA one) - than for it to radically embrace change.

This has already happened. AAA is now a small and ever-shrinking fraction of the overall games market.

Will raytracing make the best engine that a 30-person team can build in 2 years any simpler? No, because "the best engine that a 30-person team can build in 2 years" is at a particular level of complexity by definition. Will it make the gap between the best engine that a 30-person team can build in 2 years and the best engine that one guy in his bedroom can build much smaller? Yes, yes it will.

Custom game engines are already dinosaurs; most of the money in games is elsewhere. I'm sure they'll continue to be produced, just like you can still pay a lot of money for a mainframe today. Mainframes were never defeated, not exactly - they can still do things that commodity hardware can't do. But they just became irrelevant.


> AAA is now a small and ever-shrinking fraction of the overall games market.

I don't know how you can know this. No one has:

- a widely agreed-upon definition of AAA (budget? team size? total work-hours that went into it?)

- an estimate of how much revenue AAA games generate from subscriptions, DLC, and any other add-ons

Without both of those, how can you even guess at AAA games' collective market share? Citation definitely needed.

> Custom game engines are already dinosaurs; most of the money in games is elsewhere. I'm sure they'll continue to be produced, just like you can still pay a lot of money for a mainframe today.

What does raytracing have to do with the popularity of custom engines?

Whether I'm planning to build a custom engine or license an existing engine, aren't we discussing whether my job is easier with raytracing than without it?

Put another way: it seems like the article is comparing reusable engines without raytracing to reusable engines with raytracing, as well as comparing one-off engines without raytracing to one-off engines with raytracing.

I don't think being able to use raytracing is going to move any dev team from one option to the other. If they had the desire and resources to build a custom engine, they'll probably still do that (against all logic).

> Mainframes were never defeated, not exactly - they can still do things that commodity hardware can't do. But they just became irrelevant.

Off topic, but this is not a good analogy. The "cloud" is just a network of mainframes, which means mainframes are arguably the dominant form of computing (and will become more dominant as thin clients, like Stadia, rise in popularity).

Mainframes were not necessarily purpose-built, unique machines. They were instead defined by their sharing model, which is where the term "personal computing" comes from -- as a contrast to the mainframe/server model.


> What does raytracing have to do with the popularity of custom engines?

Raytracing is being used as a synecdoche for technological advances in rendering. The point is that as off-the-shelf rendering improves, the advantages of a vertically integrated engine diminish.

> Off topic, but this is not a good analogy. The "cloud" is just a network of mainframes, which means mainframes are arguably the dominant form of computing (and will become more dominant as thin clients, like Stadia, rise in popularity).

> Mainframes were not necessarily purpose-built, unique machines. They were instead defined by their sharing model, which is where the term "personal computing" comes from -- as a contrast to the mainframe/server model.

That's a very essentialist perspective; there are a lot of ways mainframe computing differs from typical personal computing and no single one of them is definitive. As a programmer, a mainframe offered you a reliable computing environment, because they're built to be highly available from the hardware up. That's very much not the case with the cloud, which follows a worse-is-better approach where your jobs can be terminated whenever and you're expected to handle it. While shared computing resources may be making a comeback, I don't think that kind of ground-up high availability will, so while the cloud has some aspects in common with mainframes, it's very much not the same thing.


> Custom game engines are already dinosaurs; most of the money in games is elsewhere.

It's interesting you say that, because a lot of the biggest games that get released are on custom engines. I'm most familiar with FPSes, but as I understand it, the CoD games, the Battlefield games, Destiny, the Halo games, CS:GO, Valorant, Cyberpunk, etc. are all on "custom" engines -- i.e. engines that are not provided commercially by third parties. I'm actually having trouble thinking of a big-name FPS that is on Unreal or Unity.

(I don't know how big these games are compared to mobile games, but I can say for sure that they are big enough that studios seem to be investing more and more on them, rather than less as you would expect if they were economically insignificant.)


I can give one hint: Epic :)

p.s.: I share your opinion


Using GP's definition - "custom" engines, i.e. engines that are not provided commercially by third parties - it's pretty easy to argue Fortnite also uses a "custom" engine. (And that's not pedantry - they obviously have knowledge and access that no licensee could dream of.)

That said, there is a decent set of AAA games that license Unreal (PUBG, State of Decay, the Borderlands series, the Arkham series, the XCOM series), but it's rare among the absolute biggest titles ("AAAA" if you will).


I guess I think of Fortnite as a "custom" game engine since Epic owns both the game and the Unreal engine. So it matches the pattern, "FPSes are built on engines owned by the same entity that owns the game." But you could argue that since Unreal is commercially available Fortnite runs on a non-"custom" game engine. Depends on your definition.


> AAA is now a small and ever-shrinking fraction of the overall games market.

This entirely depends on how you define AAA.


Here is a list of the top 10 selling games from 2018. Nearly all of them used in-house engines:

* Red Dead Redemption 2

* Call of Duty: Black Ops 4

* NBA 2K19

* Madden NFL 19

* Super Smash Bros. Ultimate*

* Marvel’s Spider-Man

* Far Cry 5

* God of War 2018

* Monster Hunter: World

* Assassin’s Creed: Odyssey

Further, what I would call the poster triplets for ray tracing - Control, Metro Exodus, and CP 2077 - all use in-house engines.


> Here is a list of the top 10 selling games from 2018. Nearly all of them used in-house engines:

Sure - those are the AAA games we're talking about (and even then, a lot of those are long-running franchises that reuse a significant chunk of engine code between multiple installments). The biggest sales numbers will keep coming from that segment for a long time. But it's a shrinking segment in terms of overall revenue (much of which is no longer up-front sales), and even more so if you look at profit; each year the line where it makes sense to use a custom engine gets higher, and a bigger share of money coming in is from games built on commodity engines.


> Custom game engines are already dinosaurs; most of the money in games is elsewhere

Not sure if it's true. Look at the profits of GTA, Cyberpunk, Witcher.


Profits of Cyberpunk? It's been delisted from one platform and has an open refund policy on two platforms because of how buggy it is.

If anything that tells me they may have been better off investing that developer time into the game instead of a custom engine.


The game sold about 5 million units in the first week. IIRC the game's budget was about 130 million, so yeah, there are lots of profits, with more to come next year when the next gen versions are released. And there will be single-player expansions and a multiplayer mode. Profits indeed.


They may be ever-shrinking, but their share of the revenue pie is growing.


I think the key benefit of raytracing is that of dynamic lighting in general: artists don't need to pre-bake lighting, they can just do it like live-action lighting, with faster iteration times. Game studios hire cinematographers etc. A revolution in AAA art workflow.

We're definitely in the hybrid zone for this console generation, though PS5 lead Mark Cerny said he'd been surprised to see some full raytracing (surely tech demos). Maybe PCs can do it, especially with the 3090 and the following years' cards, but AAA seems mostly console-first (though e.g. Cyberpunk 2077 is PC-first). Cross-generation games still need the old workflow, so you don't save anything until a whole studio can go fully next-gen exclusive... so, perhaps 2-3 years after the PS6 generation launches in 7 years: 2030. Or the generation after that.

A couple of boundary counterpoints: most game engines aren't written in assembly despite it being faster; most CGI movies are fully raytraced (though looking at Soul, they seemed to choose a less realistic Special World that was easier to render).


> though looking at Soul, they seemed to choose a less realistic Special World that was easier to render

Interestingly, my wife said the same thing when watching it, but I think the opposite is true. Everything in the Great Before is volumetric, so not easy to render at all.


As a total outsider in this space that's my takeaway. Ray tracing seems to "solve" lighting rather than have a multitude of different hacks for different situations.


It's a truism that, by definition, the key benefit of realtime ray tracing is the real time, as you say. We already have baked/non-realtime ray tracing.

As for cinematographers or some other change...it's all the same. Studios already think about lighting and framing and everything else in the scene. This won't be a revolution in that aspect of game design.


The author's argument fundamentally boils down to 'people want cutting-edge graphics and that always involves squeezing the last bit of performance from hardware with complex tricks'.

And while that's true for now, I think the title is plain wrong in the longer term. There will come a time when hardware is sufficiently powerful that one no longer needs these tricks to create top graphics. And physically based raytracing absolutely will simplify all of rendering down to a couple of core components when that time comes.


Optics is incredibly complicated, because matter is incredibly complicated.

Ray tracing is good for rendering surfaces, reflection and refraction, but what we see is not limited to these things.

Volumetric stuff is all over the place: fire and other plasma, smoke and other aerosols, other forms of mixed-state matter. Also, we have lots of liquid water on our planet; when it interacts with solid things and/or air, the visual complexity of the scene simply explodes. Look at boiling water, or streams of water, or a shoreline. And to complicate things even more, some visuals depend on the wave nature of light: a soap bubble, a rainbow, or a DVD disk.

Here's a 15-year-old tech demo: https://www.youtube.com/watch?v=HsWh66MvqBg

Stuff like the brick wall with those shadows and reflections is an awesome fit for RT. The final scene is also good. However, for the close-up scenes of the ground with water pouring all over the stuff, or that glass window with dripping water at 1:30-1:45 — RT won't help a bit. And rain / water / wet surfaces are about half of their shaders: https://developer.amd.com/wordpress/media/2012/10/ToyShop-Eu...


How long does it take to render a frame from Disney's newest CGI movie release? 12 hours? So yeah, once hardware improvements bring that 12 hours down to 16 milliseconds, then we won't need the hacks...


If you take The Mandalorian, which is arguably Disney's hottest ongoing project, it happens almost entirely in real time inside the Unreal engine, using their LED dome screen.

https://www.starwars.com/news/the-mandalorian-stagecraft-fea...


The requirements are different from animated movies or games, though.

Backgrounds are, by definition, not what the viewer is going to focus on. Oftentimes they'll even be out of focus and therefore slightly blurred.

Don't get me wrong, Disney's interactive set technique is revolutionary and shows how far Unreal has come, but I still expect future Pixar movies to take hours of rendering for every single frame (because by now that's basically Pixar's selling point).


There's no way the final rendering is realtime though, right? Post-production work is reduced, but not totally eliminated. But the advancements they made with that dome are amazing.


From what I understand from the documentary they put out, the background they render in real time is indeed the final picture. Of course mistakes might still need to be fixed later, but the rendering is intended to be final.


> There's no way the final rendering is realtime though, right?

No I think they literally do photograph the LED screen on the stage with the actors and that is the final composite. Just like with traditional back- and front-projection.

I guess it’s graded etc, but so is a completely natural scene.


Disney's latest CGI films are rendered on a much higher resolution and with multiple denoising passes and other settings set to max to hide even the smallest instances of grain. One would get a similar result with much less time and the differences would not be noticeable for 99% of the people, if they were to turn down some settings a bit.


> rendered on a much higher resolution and with multiple denoising passes and other settings set to max to hide even the smallest instances of grain

Is that to let them create something like a (say) 16K archival master that covers all the formats they might want for the foreseeable future without an expensive re-render?


Rendering at a higher resolution than the format target and downsampling to the target provides anti-aliasing. Gives them the "CGI" smoothness.
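For anyone unfamiliar with the idea: the downsample itself is trivial, the cost is in rendering the extra pixels. A minimal sketch of a 2x2 box-filter reduction over a grayscale buffer, purely illustrative:

    #include <cstddef>
    #include <vector>

    // Reduce a (2w x 2h) grayscale image to (w x h) by averaging each 2x2 block.
    // Rendering at 2x per axis and averaging down is classic supersampling AA:
    // edge pixels become a blend of whatever geometry covered their sub-pixels.
    std::vector<float> Downsample2x(const std::vector<float>& src, int w, int h) {
        std::vector<float> dst(static_cast<std::size_t>(w) * h);
        const int srcW = 2 * w;
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                const float sum = src[(2 * y)     * srcW + 2 * x]
                                + src[(2 * y)     * srcW + 2 * x + 1]
                                + src[(2 * y + 1) * srcW + 2 * x]
                                + src[(2 * y + 1) * srcW + 2 * x + 1];
                dst[static_cast<std::size_t>(y) * w + x] = sum * 0.25f;
            }
        }
        return dst;
    }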


While the gap between film and games has been large for a long time, it is definitely closing now, with the latest generation of film renderers based on ray tracing hardware and a large push in the film industry to incorporate real-time pre-vis workflows and reduce iteration times.

It's very common for film CG frames to take a few hours, but not usually 12 hours. You need to be able to start rendering jobs at the end of the day and have them ready for dailies in the morning. Jobs that take more than 4 hours risk clogging the queue and taking 2 days per iteration rather than 1, so people are motivated to keep it reasonable.

Another important difference between games and film is antialiasing techniques. Much of the difference in render time between them is in using fancier texture sampling and using lots more samples than games are willing to use or even need for decent quality.


Actually, people don't expect realistic simulation. They expect better. I love the original Lion King, but the new movie was just plain boring with its realistic graphics.


Making good entertainment has nothing to do with the limitations of the medium. It has to do with creativity and talent.

The original Lion King worked because of its hand-drawn graphics; the story, the dance, and the expressions on the cats' faces worked because the graphics were limited.

The new one was a dull imitation with no understanding of what made it successful.

You should check out the Screen Rant video on this: https://youtu.be/sbx97kKHUHw

Art and games have been successful even when the medium is limiting; people still read books, watch plays, and enjoy playing games like FNAF even though far superior tech is available.


Sure, you are right about all of this; I was just explaining that even when we have perfect simulations, people will want more.

If the limitations of the medium didn't matter, people and studios wouldn't pay billions of dollars for VFX.


I personally cannot wait; the YouTube channel YMS is doing a 3+ hour review. If it's close in quality to their Kimba video[0] we're in for a treat.

0. https://www.youtube.com/watch?v=G5B1mIfQuo4


I thought Lion King was OK.

The new Beauty and the Beast movie, however, was a whole lot worse than the cartoon. Lumiere, Cogsworth, Mrs Potts, etc. all felt a lot more human and emotional in the cartoon than with the "realistic" CGI. They actually looked creepy in the new movie.


> 16 milliseconds

Way too slow for VR, try 5 ms just to be sure.


Thankfully few [1] care about VR, and the fad cycle is over for now.

If they get it better and more relevant in the future we can check again!

[1] Few as in "not enough to matter much". There can still be millions of enthusiasts, but the market predictions and "disruption" didn't pan out.


> but the market predictions and "disruption" didn't pan out.

"Disruption" means that you create a less qualitative, but cheaper product that would cannibalize the margins of the product of a company whose products are high-margin (think cutting-edge CPUs of Intel). This "cheap" product will for a long time ridiculed as toy (but nevertheless sell in huge amounts because it is cheap) - until it suddenly "lifts off", becomes an inconvenient competitor, and disrupts the market.

Does this description of disruption sound like any market prediction from the last 10 years of how the market for VR glasses would develop? I don't think so.


> Does this description of disruption sound like any market prediction from the last 10 years of how the market for VR glasses would develop? I don't think so.

That's because the limiting factor on VR hardware value to the consumer isn't resolution, or scene complexity as measured in polygons, or FPS, or any of the other metrics by which desktop games and game hardware are evaluated. It is latency. Latency in estimating and rendering the user's pose, movement, and viewport changes.

We would have had widespread adoption of immersive content by now if VR experience developers were willing to sacrifice the 'traditional' metrics in favor of a ruthless focus on latency (and ideally push the hardware vendors in that direction too). Instead everyone keeps chasing "AAA graphics in a headset", which, given that AAA graphics are (as the OP states) an ever-rising bar, can't really work except on whatever the current highest-end hardware is. And any experience that breaks people's willing suspension of disbelief on everything but the highest-end hardware can't end up being a commercial success, any more than a multiplayer game with noticeable lag on anything under gigabit FTTH could, and for the same reason, except that in VR the lag starts with the single-player version and only gets worse from there.


>"Disruption" means that you create a less qualitative, but cheaper product that would cannibalize the margins of the product of a company whose products are high-margin

Not exclusively. That's disruption in the sense of "undercutting".

Disruption in the large also means:

"radical change to an existing industry or market due to technological innovation"

E.g. "things will change in how we do IT, entertainment, education etc due to VR".


No sane person thought it would be a total disruption- that's absurd.

But the fact is it brings an entirely new genre of games that simply cannot be played with a keyboard/mouse/joystick and monitor.

However you seem like someone who enjoys pretending like VR is a little fad and doesn't offer anything new to the game space- so whatever.


>No sane person thought it would be a total disruption- that's absurd.

Absurd or not, tons of pundits still touted it to the high heavens: big media outlets, tech startup sites, etc. The usual BS propping-up they do, like they did for everything from fuel cells to grid computing to autonomous cars at different times.

"The VR revolution", "The next big thing", "How VR will disrupt everything", "revolutionize the economy", and so on. This started around 2012 and peaked around 2016 or so.

>However you seem like someone who enjoys pretending like VR is a little fad and doesn't offer anything new to the game space- so whatever.

"Offering something new to the game space" is a spectacularly low bar compared to the oversold promise of consumer adoption and applications of VR.

And even at that, VR thus far hasn't even made a real dent in the gaming space, now that you've mentioned it. So much for all the 2012-2017 hype of the tech finally being "nailed".

It's doubly funny to me, because I have memories of the first VR fad, in the 90s. At least then, besides the media hype, we also got a few camp movies out of it ("The Lawnmower Man" and co).


Backdrops for the Mandalorian are rendered in real time.


They're backdrops though.


They have all kinds of motion, detail, etc., though, even people in the distance.

They just don't have close-ups of humans...


16ms is too slow. Gaming has long since moved past 60hz. 120hz to 360hz are the current targets, even the latest generation consoles are pushing 120hz modes.


> There will come a time in which hardware is sufficiently powerful that one no longer needs these tricks to create top graphics.

I deeply wish this were true. But it doesn't seem likely to me.

Games are real-time. And frame rate expectations are getting higher (60, 90, and even 120Hz) which means frame times are getting shorter (16ms, 11ms, 8ms). Rasterization is likely to always be faster than raytracing so the tradeoff will always be there.

Pixar movies look better and better every year. The more compute they can throw at the problem, the better pixels they can render. Their offline rendering only gets more complex and higher quality every year. It's so complex and expensive that I'm genuinely afraid small animation studios simply won't be able to compete soon.

Maybe raytracing will be "fast enough" that most gamedevs can use the default Unity/Unreal raytracer and call it a day. But imho the author is spot on that AAA will continue to deeply invest in highly complex, bespoke solutions to squeeze every last drop and provide competitive advantage.


FWIW i think it's possible. The question is whether it's within our lifetime.

There's an upper limit to what's useful - the real world. If our minds can't comprehend it, you've hit the upper limit of practical use. Once in game graphics become literally indistinguishable from real life then we'll start to plateau in terms of complexity of "tricks" and raw computing power will be able to catch up.


> Once in game graphics become literally indistinguishable from real life then we'll start to plateau in terms of complexity of "tricks" and raw computing power will be able to catch up.

Maybe. Alternatively games won't actually want to render photorealistic and will want to render with varying types of stylized graphics. Is that easier or harder? Probably a little bit of both.

We actually are at a point where we can real-time render photorealistic scenes... for certain types of objects. Primarily static environments. Photogrammetry is basically cheating, but it is highly effective! Mandalorian is famously filmed on a virtual set and it's cool as fuck.

Graphics is moving rapidly into physics. We might soon be able to render photorealistic scene descriptions. However, when it comes to simulating the virtual world, we still have a long, long way to go. By simulation I mean everything from the environment (ocean, snow, etc.) to characters. We most definitely cannot synthesize arbitrary virtual humans.

Will we someday see a movie that _looks_ like a live action movie but is completely virtual? Oof. Maybe? But even if we could, would we want to? I'm not sure.


I'm afraid we won't see this limit getting hit in our lifetimes though. The gap is way too many orders of magnitude.


Coming from the VFX field, I think that point is pretty far off still. Even with current gen offline renderers running on renderfarms, we're left doing cost-benefit analysis on rendertime vs. quality issues. (Noise being the big one). Real-time, fully generic, photorealistic rendering is decidedly not here yet, and I seriously doubt it's around the corner, either.


Monitor resolution and frame rate inflation seem to have endpoints at 4K or 8K and 144Hz. Scene detail still has some way to go before diminishing returns and so that will consume years of hardware evolution. I agree with you in principle but I don't think we're within a decade of that yet.


VR/AR will continue past 8K to 16K and beyond. 240+ Hz will be desirable there too. We're going to need to keep squeezing the hardware for all it's worth for the foreseeable future. I doubt there will be "sufficiently powerful" hardware to drive that kind of workload without heroic optimization work within my lifetime.


Nah, the human eye has only ~5 MPix worth of sensors, most of them grayscale at that. We just need inexpensive eye tracking and foveated rendering.

https://www.cambridgeincolour.com/tutorials/cameras-vs-human...


Our retina and occipital cortex undo a lot of rendering (e.g. line detection), therefore, a theory:

It will take such a long time to get realistic VR, that we will have direct neural interfaces that are much simpler because they can skip the encoding/decoding.

Tangent: and perhaps hook in at higher semantic neurons, so we can just invoke the concept of "cool graphics" without actually having to simulate it...


But our eyes can move, so while we might not need to render and update XXk@xxxHz we still need the display to have that resolution wherever we look right?


No, because you don’t need to render at full resolution where you’re not looking. That’s the appeal of foveated rendering: you render, say, a tiny 1080p region where you’re currently looking, and render the rest also at 1080p but stretched out over your entire FOV. With a fast enough frame rate and eye tracking, plus a decent blending algorithm, it will be unnoticeable.
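
As a rough sketch of the bookkeeping that implies (illustrative only, not any headset SDK's actual API), the per-frame decision is basically just picking the high-resolution rectangle from the gaze point:

    #include <algorithm>

    struct Rect { int x, y, w, h; };

    // Pick the small full-resolution "fovea" rectangle around the tracked gaze
    // point; everything outside it gets rendered at a lower resolution and
    // upscaled before compositing. Names and sizes are illustrative.
    Rect foveaRect(int gazeX, int gazeY, int screenW, int screenH, int foveaSize) {
        Rect r;
        r.w = foveaSize;
        r.h = foveaSize;
        r.x = std::clamp(gazeX - foveaSize / 2, 0, screenW - foveaSize);
        r.y = std::clamp(gazeY - foveaSize / 2, 0, screenH - foveaSize);
        return r;
    }

The hard parts are the ones the sketch leaves out: eye-tracking latency and blending the boundary so the transition stays invisible.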


I'm sorry if I wasn't explicit enough. What I meant is that we still need the display to have "the 16K" amount of pixels so that we can have super-high PPI wherever we choose to look. We would indeed only need to render high quality in a small region around where the eye is looking.


My bad, I see what you mean. You're quite right, although there is some research in using MEMS to move the projected foveated region physically, which would remove the need for ultra-high density displays.


Which headsets have you used? I’ve found I don’t move my eyes as much in the Oculus Quest, but it could just be that I’m conditioned at this point.

Earlier headsets had a narrower field of view, and maybe that made it more obvious when my eyes were moving.


I've used the Quest 2 and I did move my eyes when playing games with it. I also don't know of screens with different pixel density in different areas, if we were to assume people always look straight ahead.


I disagree about 240Hz for VR. Who is saying that? Oculus hasn't said it, and they know more about practical VR than anybody.


I think after 144Hz you hit a point of really diminishing returns. Is it even possible to feel 240Hz vs 144Hz in a blind test? It might give you something like a 1% in-game advantage for +100% of the cost.


FWIW I tried testufo [0] with a Samsung G7; 240Hz vs 144Hz was very noticeable on the image and text scrolling tests. I guess for a person who has never seen 240Hz it would be hard to identify in a blind test, but once you've seen how images/text look while scrolling, it's trivially identifiable.

- [0] testufo.com


But that's the point. If in a blind test you can't tell the difference, then there is very little to no perceptible difference.

If testufo was randomized with multiple repeating patterns and random offsets, it could be a much better test. Here you can accidentally bias yourself.


240 is not evenly divisible by 144. Curious to know if 240 vs 120 Hz is more or less noticeable on a 240 Hz monitor than 240 vs 144 Hz on a 240 Hz monitor.


With games, as long as you have good enough hardware to run the game at a steady frame rate, whether that rate is divisible by some magic number does not matter. Also, the old "standard" for games is 60Hz, so going to 120 and then to 240 are the "natural" steps.

The old 144Hz goal was/is nice if you want to display 24fps (movie) content, but games don't work like that. They display correctly without any judder/pulldown at whatever frame rate you want (as long as you have the hardware to run them at that rate).


> 240 is not evenly divisible by 144.

Sounds like the beginning of a sales pitch for a 720 Hz display. :-)


While they are more entertainment than science, I believe LTT performed a few tests on this topic:

- https://www.youtube.com/watch?v=tV8P6T5tTYs
- https://www.youtube.com/watch?v=OX31kZbAXsA


As someone who owns a 240hz monitor, absolutely.

And here's how I can probably immediately convince you that I can as well: drag your cursor at a rapid speed across your screen back and forth (e.g. 2-3 times across the width per second). Focusing on one spot, do you see individual cursors with gaps of background in between? (I'm assuming yes.)

Now on 120 Hz the gaps will be half as big as on 60 Hz. On 240 Hz they are half as big once again, but they are nevertheless noticeable. The size of the gaps easily allows you to distinguish between 120 and 240 Hz.

Until I can drag my cursor across the screen and see nothing but a continuous moving object, we haven't hit diminishing returns on monitor refresh rate yet.
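
To put rough numbers on that (assuming a 2560-pixel-wide screen and the 2-3 sweeps per second described above):

    $$ v \approx 2560\,\text{px} \times 3\,\text{s}^{-1} \approx 7700\,\text{px/s}, \qquad \Delta x = \frac{v}{f} \approx 128\,\text{px at 60 Hz},\; 64\,\text{px at 120 Hz},\; 32\,\text{px at 240 Hz}. $$

So even at 240 Hz the spacing between successive cursor images is still on the order of the cursor's own size, which is why the gaps remain visible.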


Just because there's some measurable difference doesn't mean it has significance.


> Monitor resolution and frame rate inflation seem to have endpoints at 4K or 8K and 144Hz.

laughs in light fields


I've seen many indie games made with Unreal Engine or Unity that have terrible performance rendering a simple level, where a AAA game like Tomb Raider will use fewer resources and have 10x more detail. So when hardware becomes powerful enough that indie developers can deliver graphics like today's AAA games, AAA games will still do it 10x better by having engineers optimize things.


It's not the engine's fault that performance is terrible on simple levels. Most indie devs don't bother with LODs, occlusion culling, or turning off unused features. Decades-old tricks like disabling unseen/far-away actors and merging meshes and materials to bring down draw calls are also alien to the makers of asset flips. Both engines provide many monitoring tools and utilities to fine-tune performance and settings for each quality level...


Correct, sorry I was not clear and it appeared I was blaming the engine. The indie devs are also not to be shamed for not doing the optimizations; most of the time they focus on the content.

My point was that even if hardware gets 100x better, we will still have a difference between a AAA game that can render a giant open world and a AA game that uses smaller worlds, like a space ship with loading screens for each room.


One recently released game has shown that most games are simply unoptimized, and that is Doom Eternal. It performs substantially better than any other game released in 2020 at the same resolutions, even on old GPUs.


I don't think so; more rays will equal better graphics for a veeery long time. The Rendering Equation [0] is basically an infinite-dimensional recursive integral, so computing it will be expensive. Today's Nvidia RTX hardware doesn't even come close to computing it properly; it can still only do bad approximations.

[0] https://en.wikipedia.org/wiki/Rendering_equation
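
For reference, the equation in question, in its usual hemisphere form:

    $$ L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i $$

The recursion hides in $L_i$: the light arriving at $x$ is the light leaving some other surface point, so evaluating it means integrating over paths of every length, which path tracers can only approximate with finitely many samples per pixel.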


In a different context, look at text rendering. It should be at the opposite end of the spectrum, stabilized long ago, yet we keep adding new rendering tricks and simplifications (not at the scale of 3D rendering, but keep in mind how simple the problem looks by comparison).


I think it's the opposite.

Text rendering used to involve complicated hinting and then subpixel rendering.

These days on Retina screens you don't need hinting or subpixel rendering.

You just render it like any set of curves, no longer any tricks about it being a font, because it's hi-res enough.

What rendering tricks are you suggesting are being added rather than being taken away, for fonts?


That's what I meant; we're still simplifying things in 2020 for text rendering. Antialiasing and ClearType are still used, so it's not a settled technique; the environment is still changing.


Well, there are signed distance field fonts for one thing.
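
For context, the runtime half of that technique is tiny; the work goes into baking the distance field. A minimal sketch of the per-pixel step (illustrative names, assuming a single-channel distance texture with 0.5 at the glyph edge):

    #include <algorithm>

    // Turn a signed-distance-field sample into an anti-aliased coverage value.
    // 'smoothing' is roughly half a pixel expressed in distance-field units.
    float sdfCoverage(float distanceSample, float smoothing) {
        float lo = 0.5f - smoothing;
        float hi = 0.5f + smoothing;
        float t = std::clamp((distanceSample - lo) / (hi - lo), 0.0f, 1.0f);
        return t * t * (3.0f - 2.0f * t); // smoothstep across the edge
    }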


I honestly wouldn't be surprised if Moore's Law dies off before we get to a point where a heavily optimized graphics pipeline isn't worth it.

I think the author makes a good point at the high end, but Raytracing is going to simplify the crap out of whatever we have that's like Unity in a decade or two (hopefully not Unity).

It takes care of so many issues (transparency sorting, AO & GI, depth of field, baked and real-time shadows, reflections) whose complexities you basically don't have to deal with as an end user.

But you'll be leaving a lot of performance on the table with a one size fits all solution.


Moore’s law ended quite a few years ago - or do you mean it regarding GPUs? I don’t know about that.

But even though parallelism can be increased, the gains from it are limited, as per Amdahl.


Moore's law is about transistor density and has not ended. See the performance ramp of recent model GPUs.

Amdahl basically does not apply to graphics tasks which are embarrassingly parallel. Raytracing will be no different.

And in fact you're getting rid of many preparation steps pre render that may be tricky to parallelize well, and replacing it with brute force.


I don't believe that's going to happen in the foreseeable future. Just like in film CGI, the hardware is always going to keep chasing the best images content creators want to create.

Hybrid render pipelines and AI-enhanced RT passes are the future.


[flagged]


I don't get the joke.


Communism is based on advanced economics, where production is significantly higher than demand.


That all depends on innovation from hardware manufacturers. For a computer graphics course, I've written a ray tracer in C# with some compute shaders that runs at 10 to 15 fps in a window on my 7700k and GTX 1080. Not particularly impressive, no, but thirty years ago games would run at this speed on similarly priced hardware.

Where traditional rendering code is all based on hacks and tricks that simulate real life, raytracing is simply a set of formulas borrowed from a physics textbook. An engine casting more realistic shadows than a ten-million-line rendering library can take less than a thousand lines of code.

The current raytracing acceleration is intended to boost some highlights and shadows in practice. The hardware isn't powerful enough or optimised for rendering real, full-screen games. Even a 3090 will have trouble rendering some Minecraft scenes, for example.

If the right hardware ever becomes available for the right price with enough consumers, proper raytracing engines will be feasible and complicated render paths will eventually be much simplified. Perhaps not to the point that they can be, no, but they'd be much simpler than the engines we're using now. We won't be getting Disney levels of quality, even at our own computers' resolution, but we don't need that. Equivalent to today's graphics but with proper lighting everywhere is an amazing graphical advancement already. Time will tell if that will ever happen.
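
To give a flavour of how small that core can be, here is a minimal sketch of a shadow test against a hypothetical scene made only of spheres (illustrative, not the parent's actual C# renderer):

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Sphere { Vec3 c; float r; };

    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Ray-sphere intersection for a ray o + t*d with d normalized.
    // Returns true if the sphere is hit at some t in (eps, maxT).
    static bool hitsSphere(Vec3 o, Vec3 d, const Sphere& s, float maxT) {
        Vec3 oc{o.x - s.c.x, o.y - s.c.y, o.z - s.c.z};
        float b = dot(oc, d);
        float c = dot(oc, oc) - s.r * s.r;
        float disc = b * b - c;
        if (disc < 0.0f) return false;
        float sq = std::sqrt(disc);
        float t = -b - sq;               // nearer root
        if (t < 1e-4f) t = -b + sq;      // ray origin inside the sphere
        return t > 1e-4f && t < maxT;
    }

    // A surface point is in shadow if anything blocks the segment toward the light.
    bool inShadow(Vec3 p, Vec3 toLightDir, float distToLight,
                  const std::vector<Sphere>& scene) {
        for (const Sphere& s : scene)
            if (hitsSphere(p, toLightDir, s, distToLight)) return true;
        return false;
    }

Real engines add acceleration structures, materials and sampling on top, but the physics-textbook part really is this short.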


> The current raytracing acceleration is intended to boost some highlights and shadows in practice.

A couple of days ago, while gaming with a few friends, I noticed a transparency ordering issue in the game we were playing, and while explaining to them how and why that happens, I realized something:

The raytracing accelerators we now have at our disposal can also be used to implement proper primitive-level transparency ordering on the GPU. Even if you don't spawn secondary rays, just being able to do a sparse ray-triangle hit test through the whole scene lets you build the render order list in situ.
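
Very roughly, the idea could look like this on the CPU side (a conceptual sketch only; in practice the hits would be produced by the ray tracing hardware, e.g. via repeated any-hit invocations, rather than handed over as a plain vector):

    #include <algorithm>
    #include <vector>

    // One transparent-surface hit along the primary (view) ray.
    struct TransparentHit {
        float t;        // hit distance along the ray
        float alpha;    // opacity of the surface at the hit point
        float color[3]; // surface color at the hit point
    };

    // Sort the hits by distance and blend back-to-front over the opaque background.
    void blendTransparents(std::vector<TransparentHit> hits, float background[3]) {
        std::sort(hits.begin(), hits.end(),
                  [](const TransparentHit& a, const TransparentHit& b) { return a.t > b.t; });
        for (const TransparentHit& h : hits)          // farthest first
            for (int i = 0; i < 3; ++i)
                background[i] = h.alpha * h.color[i] + (1.0f - h.alpha) * background[i];
    }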


Transparency is still very expensive, unfortunately. DICE spoke about it in a GDC talk last year and recommended that the any-hit shaders used for alpha testing (i.e., transparency) be avoided if possible, since they're rather expensive.

Their big problem is trees, since leaves are modeled as flat squares with the actual leaf shapes in textures (which are completely transparent around the leaves).

"Trees are our biggest problem in ray tracing. No matter what we do they're by far the slowest thing to trace, and they're much worse than on rasterizer, and if you have any ideas of how to ray trace trees efficiently, we're very happy to hear them. We've asked a lot of very smart people and no one has come up with a good answer yet. You can choose between geometry and alpha testing and both are kinda crap." - It Just Works: Ray-Traced Reflections in 'Battlefield V', GDC 2019.


Unfortunately it's still extremely expensive to do so. DXR documentation tells you to keep any-hit shaders simple.

It's also possible to fix transparency ordering in rasterization using OIT, and there is some hardware support for it (ROVs), but it's also expensive and can be hard to get right, so many games settle for rough sorting.


Naive raytracers fall down on many edge-cases and need a ton of optimizations that might not quite qualify as hacks but still drive up complexity a lot. And then you can add proper hacks like ML-based denoising on top.


Hi, could you or someone else with the right knowledge give some examples of those optimizations/hacks for ray tracers or point me to some place that explains those in more detail? Thanks in advance.


That's an entire research/engineering field. I guess you can read/view SIGGRAPH papers and presentations to get an idea what they're doing to achieve realtime performance.

Common things that make a scene too difficult to handle for brute force sampling approaches: caustics, difficult to reach light sources (small or behind openings), many light sources, camera or motion blur, volumetric materials, etc. Algorithms approximating the behavior of real materials also is a complex topic, e.g. skin or hair are handled separately from dielectric or metallic surface models. Some optimizations can solve one problem well in isolation but fail when encountering a combination of them.

Here's one that tries to solve caustics and small light sources at the same time: https://cgg.mff.cuni.cz/~jaroslav/papers/2012-vcm/2012-vcm-p...


That was very informative and accessible, thanks again!


Some time in the future we will be able to do real-time path tracing for a few megapixels (per eye if stereo) in under 5 ms, and THEN you might think we could stop using smoke and mirrors for games.

But even then, someone will think "if I just use some smoke and mirrors I can quite easily get twice the performance, and those spare resources I can use for better AI/larger worlds/whatever".

Ray tracing (and physically based rendering in general) already simplifies things. Content creators can use "natural" parameters for materials/lights, and you don't need to insert fake lights to achieve realistic shadows and dynamic lighting. That's in a way "simplifying" things, but in the end it just gives more realistic games for the same or higher effort. Until everyone can use ray tracing, it'll also add another layer of complexity, because you need to make a separate path for ray tracing, so level editors and content creators need to ensure everything looks good in both cases.


Weird article. Sure, rendering engines are very complex, but they produce very complex images. So you cannot just compare old engines against new ones and say that the simplification failed. When tech X is claimed to simplify rendering, the claim usually implies that the simplification happens for comparable visual output; achieving raytracing-like lighting and effects in conventional rasterizers would be even more complex, especially for dynamic scenes.


When the plough was invented, did they plough the same field in less time or in the same amount of time a larger field?


> When the plough was invented, did they plough the same field in less time

To be precise, before the plough was invented, they didn't plough. They often used hoes instead [0]. Intuitively, the plough would have let farmers process a greater area in the same time. But whether they did work a larger area might have depended on other factors such as land ownership - e.g. who owned the plot adjacent to yours. The actual history of agriculture is quite fascinating [1].

[0] https://en.wikipedia.org/wiki/Hoe-farming

[1] https://en.wikipedia.org/wiki/History_of_agriculture


You can get an off-the-shelf AAA ray tracing engine from Unreal with a 5% royalty after the first million; you can't get any simpler than that.

The real cost is not the rendering engine, that's a choice. The real cost in AAA-quality graphics is the assets.


I think the author conflates simplification of the whole codebase of a 3D rendering engine with the simplification of its architecture. Of course, complex image effects and arcane optimisation techniques will bloat the code, but real time ray tracing will allow throwing away whole series of hacks and tricks used now to produce believable image. The rendering pipeline will become much simpler in terms of stages.


Agreed, and it’s also conflating engineering cost with the cost of making games. The big benefit of using physically based rendering architectures isn’t saving engineering time, it’s saving artist time not having to fiddle with tuning each ad-hoc effect individually. It seems to also somewhat discount the increases in realism and fidelity we are getting in the same dev time as before.


How much slower is Ray tracing for equivalent quality? (Obviously it will vary, but I'd be interested in even a very handwavy estimate)


It depends entirely on what you do. Ray tracing can be much slower, or same time, or much faster for equivalent quality, depending on what you're rendering. An example of where ray tracing is much faster than rasterizing is when you have extreme amounts of geometry via instancing. With ray tracing, you can render a quadrillion virtual triangles at interactive / real-time rates, where rasterizing can't do that. This is because ray tracing renders a pixel at a time, where rasterizing renders a triangle at a time. Here's an example: https://www.youtube.com/watch?v=2blG7YXq_O8&feature=youtu.be


Correct me if I’m wrong, but ray tracing has to iterate over every object in the general direction of that ray - and that can be reduced with things like AABBs and the like, while rasterization can throw away triangles with Z-masking. Is the former all that much more efficient?


You’re right, z-masking and frustum culling can make rasterization much more efficient than having to iterate over all triangles in the scene, and those techniques are used commonly and very effectively in games. The example in that video above has everything in view though, z-masking won’t fix it or get it anywhere close to as fast as ray tracing in this specific case.

The main point is that basic ray tracing has an algorithmic complexity that is logarithmic or cube root of the number of primitives (depending on your BVH), while basic rasterization has linear complexity, and that generally remains true even with many effective culling techniques.

There are hybrid techniques that try to combine the best of both, so nothing here is absolute, but at some point if you develop a competitive log complexity per pixel rasterizer with a BVH, eventually it basically becomes ray tracing, and you might as well go with the method that trivially gives you shadows and reflections and bounce lighting too.
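
To put rough numbers on that, using the quadrillion-triangle figure from the video above (a back-of-the-envelope estimate, not a benchmark): with a binary BVH over N primitives, each ray visits on the order of log2(N) nodes, so

    $$ N = 10^{15} \;\Rightarrow\; \log_2 N \approx 50 \text{ node visits per ray}, $$

while a pure rasterizer would still have to process all N triangles even though almost none of them end up covering a pixel.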


Thank you for the informative answer!


I read the article a few times, and I'm still not sure I understand it correctly.

I think what the author is trying to suggest is that the cost of implementing real-time ray-traced rendering in a bespoke engine, with specific performance and quality requirements, still matters because it is a relatively cheap operation with respect to a AAA game's budget. The vast majority of the cost is in assets, specifically graphics assets.

I am not sure I agree, because given current trends I would have thought the incentives largely favour moving to Unreal 5 over time. I think it is more of an economics, business, unit-cost and TAM question than a technical one.

(Stop putting stupid money/budget into rendering pipelines or graphics assets and start paying attention to physics, gameplay, storyline, NPC interaction, player psychology, in-game economics, and goddamn bug fixes.)


I don't think it's just the desire to be cutting-edge. Every technique you use, including ones involving raytracing, has all sorts of tradeoffs. Surely no single set of techniques is going to be ideal for all games on all hardware?


> But it's doable, you can write a slow, photorealistic real-time renderer in relatively few lines of code, today.

I guess using the words "slow" and "real time" in one sentence isn't the best description of the state of the art :-)

Anyway, can anyone recommend an open source library that does that? No need for textures; a simple engine with light sources, coloured triangle vertices and transparency will be enough.


John Carmack retweeted this.


If only we could link retweets... For now I can only give a link to a reply: https://twitter.com/ID_AA_Carmack/status/1343322749033394182


AAA = ?

Anti-Alias Something?



'Triple A' as in purported quality. I say purported because it's typically associated with big budgets. Blockbuster games if you will. I find it's a pretty unhelpful designation.


Nobody here is mentioning that the implementation RTX brings to games seems like a big gimmick at best: turn it off and the game looks really silly; crank it to max and sometimes the game looks really silly because everything is wet. About 10 years ago Nvidia did the same thing with PhysX.


If you open your eyes in the real world you'll notice that most surfaces are shiny, not dull like you see in games. You just mentally filter it out. Plastics? Shiny. Wood? Shiny. Leather? Shiny. Paper? Shiny. Grey rock? Shiny. China? Shiny. The lack of shininess is how you can tell it's a game.


I think he's right, games overdo it. I look at high resolution photographs of battlefields today and everything is covered in dust. Floors? Dust. Walls? Dust. Soldiers? Dust. Corpses? Dust. What happens when a grenade is set off in an empty room? Nothing? Nah, it shakes plaster off the walls and covers everything with more dust. And if it's not dust, it's mud or some other sort of grime. Meanwhile, what does the game Battlefield show? A bunch of immaculately clean surfaces with crystal clear reflections. Where is the dust? Where is the grime? It's missing, because those do not showcase RTX...


Then the textures used for the reflections are bad - not the ray tracing.


While I see what you mean, it's also true that sometimes in games everything is TOO shiny. Or the wrong "kind" of shiny -- everything looks like a freshly waxed floor, or looks wet like GP comment mentions.

Raytracing helps make realistic art possible. But it still takes a great artist to do it. There are so many subtleties to the way common surfaces reflect light that are hard to capture, even once you are given all the sliders you can dream of. My experience dabbling with Blender was that it can be quite frustrating: you KNOW what you have set up doesn't look quite perfect yet, but you can't quite express WHY or what you need to change in the material definition to fix it. So you end up settling for "ok", meaning the resulting render looks pleasing -- but it still looks like a render.

But I guess that's why I'm not an artist.


I think nobody is mentioning that because raytracing is already a proven technology; we just need the hardware to match. RTX (Nvidia's implementation) is pretty gimmicky so far, since most devs are only using it for reflections, but even global reflections are a nice step forward.


A counterpoint from Digital Foundry, doing a breakdown of all the ray tracing features available in Cyberpunk 2077: https://www.youtube.com/watch?v=6bqA8F6B6NQ



