How Unreal Renders a Frame (interplayoflight.wordpress.com)
317 points by ingve on Oct 26, 2017 | 161 comments



I'm a little surprised at how surprised people are here, at how much work is done per frame!

Sure, game engines are really complex and impressive, but the same is true of a lot of other kinds of software. If you work on websites, have you thought about how much processing goes on when loading and rendering a page?


How much work the parsers, layout engine and renderer have to do in a browser is far more opaque than what a game engine does.

For game engines there is a lot of material out there explaining the basic processes and data flows; there are free or open source games and game engines around as well: Unreal engine, used in the article, is free, including full access to the source code. (This is by far the most widely used game engine in the entire gaming industry). And with a graphics debugger (free) you can look fairly easily at how games (pretty much any game you can run on your PC) render things as well.

That's not to say that game engines are simple; however, it is relatively easy to understand how they work at a basic level (i.e. what they have to accomplish and how they do so in general terms) and, specifically considering graphics, simple renderers are not particularly hard to develop yourself as a small side project.

With browsers, this is a lot more difficult. There are perhaps three modern browser engines, which are mostly open source but also very large pieces of software. There are no books or tutorials explaining how to develop browser engines. And the debugging tools in the browser only tell you how often a "DOM update" or "render" happened and perhaps how long it took.


Just think of all the bloat in a browser. Don't mistake size for complexity. Graphics/game engines are complex. Browsers are big.

Whilst true, your simple renderer analogy does not work because it is like me saying that if I can create a program to draw some scrolling text then I can create a browser. I presume you have not had to optimise GPU graphics code whilst maintaining compatibility on multiple GPU vendors with a common code base?

One game engine is not the same as another and therefore not suitable for bulk comparison with any particular thing. Each game/graphics engine has its own set of requirements, its own quality level, and its own level of understanding required to grasp what is actually going on.


UE4 is not the most widely used game engine in the industry. That would be Unity by a wide margin.

If you are restricting to AAA, I'm not even sure if that holds. There are a good number of big teams that use UE4, but I think I'd go with Frostbite due to the sheer number of titles that EA pushes out that are now based on Frostbite. It's certainly shipping more big games than UE4 is. I'd probably put Ubisoft ahead of them too with their internal engines.


It's difficult to measure game engine usage for AAA games since they can afford to

a) build their own internal engine

b) pay licensing fees to not disclose what engine they're using.

For overall game usage, however, I'd definitely say Unity is most used, with UE4 coming in second


Very good point! You don't get "how to write a browser" books in the same way you do with game engines. I wonder if it's because game engines are just more appealing? And also, you can build your own cut-down game engine and that's still useful, whereas there's a very high threshold of functionality needed before a browser is useful.

They're actually similar in some ways -- there are a handful of browsers and a handful of AAA game engines, and they're all either open source or at least licensable source.


browsers are complex beasts, but they don't need to really go 60+ fps... if a page loads in a second people don't sweat it. if a frame loads in a second, people throw their pc out of a window and rage your company into an early grave ;D


Browser authors disagree.


Hehe, this is only the rendering part per frame; imagine the AI computations and all the other game logic also going on, hopefully within this magical 16ms time =] Game engines in my opinion are some of the most savage pieces of software, pushing performance and really testing programming concepts thoroughly. There's a lot of great knowledge coming from these guys regarding programming in other areas too, because they are pushed so hard to make fast, fast code. :) love it! I'm surprised and amazed every time at what these engines crank out!


As a software person, I'm most impressed by CPU hardware -- all the crazy speculative out-of-order dynamic translation stuff that goes on at GHz timescales. It's astonishing that it works at all, let alone that it's almost 100% reliable.


I think people just aren't used to what a GPU does. Most devs are familiar with CPU loads and doing all this work as a CPU task would be pretty tough.


So all of this processing happens for one frame? And this is going on at 60 Hz?


Yes, and more. You’ve also got a rigid body (physics) simulation, ai/pathfinding, animation, networking, audio and gameplay logic all to run. If you’re making an open world game, you also need to have streaming code to load and unload parts of the game on the fly.

It doesn't always happen at 60Hz (which is 16ms total processing time for all the above plus what's in the post). Some games run at 30Hz (33ms), some run unlocked (varying times), and some can even run at 144Hz (7ms).

As an extreme example, VR headsets normally require rendering twice (once for each eye), and run at 90FPS, which is 11ms to do your entire game, and render it twice.

Furthermore, these aren't normally just guidelines; they're caps. If your game runs at 60Hz and you miss your target by 1ms, on most hardware you'll end up missing the hardware refresh, which means either you get a temporary drop to 30Hz or you get tearing (half the old image and half the new image), neither of which is particularly pleasant.


To clarify - In many cases you are actually running your physics or game loop at a different rate from the render loop. For instance, in CS:GO you can have a frame rate of well beyond 300, but the game loop (physics/ai/net/logic/etc.) is ticking at 60 (by default) due to the client-server architecture. This type of non-synchronous engine architecture is very complex to build reliably, so unless there is a hard requirement for it (e.g. client-server multiplayer model w/ advanced latency compensation) you will usually find a simpler synchronous approach used unless the underlying engine comes with the async architecture OOTB (UE4).

Some additional reading on the topic:

https://developer.valvesoftware.com/wiki/Source_Multiplayer_...

https://blog.forrestthewoods.com/synchronous-rts-engines-and...

https://gafferongames.com/post/fix_your_timestep/
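To make the last link concrete, here is a minimal sketch of the fixed-timestep pattern it describes (plain C++; State, update and render are placeholder names, not any engine's API): the simulation ticks at a fixed rate while rendering runs as fast as the display allows and interpolates between the two most recent simulation states.

    // Minimal "fix your timestep" sketch: logic ticks at a fixed 60 Hz,
    // rendering interpolates between the two most recent simulation states.
    // State, update() and render() are placeholders, not engine API.
    #include <chrono>

    struct State { double x = 0.0; };

    State update(State s, double dt) { s.x += 10.0 * dt; return s; }   // one fixed simulation step

    void render(const State& prev, const State& curr, double alpha) {
        double x = prev.x + (curr.x - prev.x) * alpha;  // blend states for smooth motion
        (void)x;                                        // submit draw calls here
    }

    int main() {
        using clock = std::chrono::steady_clock;
        const double dt = 1.0 / 60.0;   // simulation tick length
        double accumulator = 0.0;
        State previous, current;
        auto last = clock::now();

        for (int frame = 0; frame < 1000; ++frame) {    // stand-in for "while the game runs"
            auto now = clock::now();
            accumulator += std::chrono::duration<double>(now - last).count();
            last = now;

            while (accumulator >= dt) {                 // catch the simulation up in fixed steps
                previous = current;
                current = update(current, dt);
                accumulator -= dt;
            }
            render(previous, current, accumulator / dt);  // render at whatever rate the display allows
        }
    }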


That's not exactly accurate, actually (UE VR dev here). When you render a game at 90Hz (or 60, or really anything), that means your total throughput needs to be 90Hz, NOT that the frame needs to render in 11ms.

Since you're dealing with multithreaded CPUs + an asynchronous GPU, you can parallelize and sequentialize all this. How UE works in its DX11 and OpenGL renderers, on a 90Hz game, is that you're gonna have the Game Thread (Physics/Gameplay) running for frame N, while the Render Thread (GPU commands / math) runs for frame N-1, while the GPU executes frame N-2. This allows you a complete frame time of 33ms on a throughput of 90Hz, at the cost of more latency.
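As a toy model of that pipelining (definitely not UE code; FrameSnapshot and the one-slot hand-off are invented for illustration), a game thread can be producing frame N while a render thread consumes frame N-1, so throughput stays at one frame per slot even though each frame is a little older by the time it is drawn:

    // Toy two-stage pipeline (not UE code): a one-frame-deep hand-off between
    // a "game" thread and a "render" thread. Real engines add a third stage
    // (the GPU) and deeper buffering, which is where the extra latency comes from.
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <optional>
    #include <thread>

    struct FrameSnapshot { int frameIndex; /* copied gameplay state would go here */ };

    std::mutex m;
    std::condition_variable cv;
    std::optional<FrameSnapshot> slot;   // one frame of buffering between the two threads
    bool done = false;

    void gameThread(int frameCount) {
        for (int i = 0; i < frameCount; ++i) {
            FrameSnapshot snap{i};                    // run gameplay/physics for frame i
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [] { return !slot.has_value(); });
            slot = snap;                              // hand frame i to the renderer
            cv.notify_all();
        }
        std::unique_lock<std::mutex> lock(m);
        done = true;
        cv.notify_all();
    }

    void renderThread() {
        for (;;) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [] { return slot.has_value() || done; });
            if (!slot.has_value()) break;             // game thread finished and queue is empty
            FrameSnapshot snap = *slot;
            slot.reset();
            cv.notify_all();
            lock.unlock();
            std::printf("building GPU commands for frame %d\n", snap.frameIndex);
        }
    }

    int main() {
        std::thread game(gameThread, 5);
        std::thread render(renderThread);
        game.join();
        render.join();
    }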


At some point the perception of the human eye means you cannot increase latency further. Especially if the game is something where reaction time is important, like a twitchy first person shooter.

I wonder what the cutoff is where someone starts to notice, but I imagine it's not much greater than 50-100ms


Pretty sure it would be smaller than that.

I have a gaming display that has a 1ms response time, if I play something fast and twitchy on my PS4 (Titanfall 2 for example) on my display and then on my tv I can 100% notice the difference between the two, even when you adjust the tv to compensate (gaming mode on tv): you really do notice the extra time taken for events to occur. Titanfall 2 is actually a pretty good example because of the insane gameplay speed. Here's some examples: https://gfycat.com/FrayedTameDamselfly https://gfycat.com/BreakableWealthyBlowfish https://www.youtube.com/watch?v=o7ARc-lxc2s https://gfycat.com/FrailSaltyDanishswedishfarmdog https://gfycat.com/JubilantNeighboringAmericancrocodile


Minor nit: some of those needn't run every frame (ai/pathfinding, gameplay logic). It's also possible to calculate future points and interpolate between them per frame. A nasty trick is to have shadows running at half the frame rate (Crysis did this).


I don't really think it's minor, or a nit! But at the same time, I had to simplify or I'd never have stopped writing.


Brandon Bloom simplified it best I think: "You have to solve every hard problem in computer science, 60 times a second."


I'm curious how you would relate in-browser WebGL performance vs. performance of native software such as Unreal. Is running it in a browser 10x worse? or 100x?


It's not going to be a simple constant factor. The ideal case will be the same - ultimately the CPU is doing the same work (so provided the JIT picks it up correctly it'll be executing the same instructions) and the GPU is running literally the same shaders, so performance should be identical. It's more a question of how much work you have to do to hit that happy path, and what edge cases pull you off it.


The CPU is going to do some more work because WebGL can't allow the GL app to crash the machine or break into the OS kernel, which regrettably current OpenGL (And DirectX, And Metal, and Vulkan...) drivers allow.


Allowing a userspace application to crash the machine or break into the OS kernel is also a security violation that needs to be prevented; the consequences are less severe than when a web page does it, but it still shouldn't happen. So that should also be the same work in either case.


I agree with the "should" of course - the state of GPU drivers is just horrible and shows no signs of rapid improvement.


In principle WebGL should be competitive with OpenGL. Once it hits the GPU, it's the same.

Some of the bottlenecks in WebGL are:

- Javascript. Modern JS engines are great, so the overall speed can be good, but you're still missing things like 64-bit ints, SIMD and threads. (Do game engines make heavy use of threads though? I don't know, but many influential games programmers seem to be wary of them!)

- The WebGL -> OpenGL translation layer takes some time. In Chrome it sanity checks your input (which you definitely want in a browser! GPU drivers are very insecure) and executes it in a separate process. Not as expensive as you might think, especially if you minimize your draw calls, which is good practice anyway.

- WebGL is basically OpenGL ES 2.0 (and WebGL 2 is ES 3.0), which is missing some useful features of full OpenGL, and doesn't offer the low-level, low-overhead access of Vulkan or Metal.

Depending on what you're doing, I'd guess WebGL might be no more than 2x slower (assuming plenty of optimization work). Some fiddly things might be 10x slower, or just not possible at all within the WebGL API.

WebGL 2 has a lot of very important new features, but support for that still seems to be patchy, and it's only just catching up with mainstream mobile graphics. It's a generation behind Vulkan.

Apart from that, an AAA game will have massive amounts of graphical and audio assets. Delivering that over the network is a pain and HTML5 caching is a pain. Doable, but hardly comparable to just loading it from local storage.

Oh, and one more thing! A big game wants sound as well as graphics, and WebAudio is a mess. And audio mixing is typically done in a background thread, so that's one area where the lack of threads in JS is a real problem.

Overall WebGL is very nice, it was a great choice to follow OpenGL ES closely (security problems aside).


> Do game engines make heavy use of threads though?

Boy do they, yes.


Can you go into more detail? (Or point me at some up-to-date books or blog posts!)

I'm specifically curious about CPU-intensive stuff that is sharded out to multiple threads. That's your classic multithreaded programming, the sort of thing you'd do in scientific computing, but I always got the impression games people were skeptical about it, due to unpredictable performance and the high risk of bugs.


Motivation: The Xbox 360 pretty much forced gamedevs into heavy threading if they wanted to get anything done. It had 3 PowerPC cores with 2 hardware threads each. The cores had huge memory latency and no out-of-order execution. IBM's attitude about OOE was "statically compile for a fixed target" and "run 2 threads per core and that'll cut the effective stall cycles per thread in half". The PS3 had only 1 of those PowerPC cores, but it also had 6 unique cores that were practically high-power DSPs. If you manually pipelined data movement and vectorized execution, you could get amazing results. If you ignored those cores, the PS3 was crippled. The XBone and PS4 have friendlier cores, but they are still surprisingly low-power and there are 8 of them. So, you still need to thread and vectorize or you'll be dragging. Even on the PC, Sutter's "The Free Lunch is Over" is over 12 years old. Outside of games, the browsers force single-threading and cloud servers profit by selling multicore machines pretending to be many single-core machines. But games have to run on non-virtual hardware.

Execution: You can google around for "game engine job system", but unfortunately, game engine blogs really fell off a while back as most of that crew moved to Twitter. So, the best material out there is in the form of GDC presentations such as "Parallelizing the Naughty Dog Engine Using Fibers", "Destiny's Multithreaded Rendering Architecture", "Multithreading the Entire Destiny Engine", and "Killzone Shadow Fall: Threading the Entity Update on PS4". Slides and videos are available around the web.


Cool, thanks!

I never got to program one but I remember being fascinated by the weird architectures of the PS2 and PS3.

It occurs to me that there are some similarly weird architectures in mobile right now -- it's not uncommon to see Android flagship phones with 8(!) cores, which is just ludicrous. And there are lots of asymmetric "big.LITTLE" designs with a mix of high-speed and low-power cores.

Maybe I'm just reading the wrong blogs, but I've barely seen any discussion on how to optimize code for those crazy Android multicore CPUs, even though it seems like there's potentially a lot of upside. I guess Android is so fragmented and fast-moving that it's a tougher challenge than optimizing for two or three specific games consoles; also Android app prices are low so there probably isn't as much motivation.

Also, Apple is miles and miles ahead of everybody else in mobile performance, and they've consistently gone with just 2-3 cores. Their mobile CPUs and GPUs are very smartly designed, really well-balanced.


Here’s a good talk on a system used for the PS4 - https://www.gdcvault.com/play/1022186/Parallelizing-the-Naug... And another one on multithreading a cross platform game - https://m.youtube.com/watch?v=v2Q_zHG3vqg

There’s also this talk - http://developer2.download.nvidia.com/assets/gameworks/downl... which talks about collision detection and collision resolution on the GPU that actually explains the architecture of the cpu engine quite well and shows what parts are serial and what parts are well parallelised.


Modern game engines parcel out work via a job system, where functional tasks are dispatched to the cores in a system. This is opposed to the idea of 'one core/thread will own a task for the lifetime of the application, while checking in with a master core/thread'. Actually, most games have a hybrid model. Usually one thread/core is dedicated to rendering, another thread/core is dedicated to high priority tasks/jobs, and then the remaining resources are used by whatever jobs are left.

http://fabiensanglard.net/doom3_bfg/threading.php
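For a feel of what the job-system part looks like in miniature, here is a hedged C++ sketch (a generic thread pool, not Doom 3's or any shipping engine's implementation): worker threads pull independent jobs off a shared queue, and the submitting thread can wait for a batch to drain before kicking off the next stage of the frame.

    // Very small job-system sketch: workers pull independent jobs off a shared
    // queue; the submitting thread can wait for everything to finish before
    // the next stage of the frame. Not taken from any shipping engine.
    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    class JobSystem {
    public:
        explicit JobSystem(unsigned workers = std::thread::hardware_concurrency()) {
            if (workers == 0) workers = 1;            // hardware_concurrency() may report 0
            for (unsigned i = 0; i < workers; ++i)
                threads_.emplace_back([this] { workerLoop(); });
        }
        ~JobSystem() {
            { std::lock_guard<std::mutex> lock(m_); stop_ = true; }
            cv_.notify_all();
            for (auto& t : threads_) t.join();
        }
        void submit(std::function<void()> job) {
            { std::lock_guard<std::mutex> lock(m_); jobs_.push(std::move(job)); ++pending_; }
            cv_.notify_one();
        }
        void waitIdle() {                             // block until all submitted jobs are done
            std::unique_lock<std::mutex> lock(m_);
            idle_.wait(lock, [this] { return pending_ == 0; });
        }
    private:
        void workerLoop() {
            for (;;) {
                std::function<void()> job;
                {
                    std::unique_lock<std::mutex> lock(m_);
                    cv_.wait(lock, [this] { return stop_ || !jobs_.empty(); });
                    if (stop_ && jobs_.empty()) return;
                    job = std::move(jobs_.front());
                    jobs_.pop();
                }
                job();                                // e.g. animate one character, cull one cell
                std::lock_guard<std::mutex> lock(m_);
                if (--pending_ == 0) idle_.notify_all();
            }
        }
        std::vector<std::thread> threads_;
        std::queue<std::function<void()>> jobs_;
        std::mutex m_;
        std::condition_variable cv_, idle_;
        unsigned pending_ = 0;
        bool stop_ = false;
    };

    int main() {
        JobSystem jobs;
        for (int i = 0; i < 100; ++i)
            jobs.submit([i] { (void)i; /* per-object work for this frame */ });
        jobs.waitIdle();                              // e.g. all animation done before rendering kicks off
    }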


That threading style potentially fits OK with the JS "Web Worker" / "isolate" model, as long as you can pass messages around between threads/isolates very efficiently (probably not the case in current JS implementations).


There are Transferable objects (https://developer.mozilla.org/en-US/docs/Web/API/Transferabl...) so you at least don't have to serialize the data between workers.


Oh interesting. TBH, I have zero knowledge on how to do any of this on the web side. All my experience is from working with traditional game engines on the native side.


The worker pooling system you describe is eminently possible in the browser these days. Web Workers [1] are really just threads with a JS execution context and a facility for messaging back to the thread which created them. (Or, if you set them up with a MessageChannel [2], they can do full-duplex messaging with any thread that gets the other end of the pipe)

Of course, you're still dealing with the event loop in most cases, which is probably a stumbling block when it comes to really low-level stuff. That said, there are even facilities for shared memory and atomics operations [3] these days, which helps. I've messed around with it a little bit on a side project- as a JS developer, it's really weird and fun to say "screw the event loop!" and just enter an endless synchronous loop. :D

[1] https://developer.mozilla.org/en-US/docs/Web/API/Worker [2] https://developer.mozilla.org/en-US/docs/Web/API/MessageChan... [3] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


Not an expert, but the version of it that I've heard is your main game loop handles all interaction with the game state to avoid issues. You can offload rendering into another thread pretty safely, and anything that alters game state would queue up its state changes for batch processing.

That way if you're running a multithreaded physics simulation you can get two separate bullet collision detections on one object, and instead of trying to delete the object twice you put both "kill this thing" actions on a to-do list. When it comes time to handle that in the main loop, you sweep through it for conflicts before executing any of the changes.
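A rough sketch of that to-do-list idea (Command, World and the de-duplication rule are all made up for illustration, not any engine's API): worker threads only record intended changes, and the main thread applies them once per frame, collapsing duplicates like the double kill described above.

    // Sketch of a deferred command queue: parallel systems record intended
    // changes; the main thread applies them once per frame and de-duplicates
    // conflicting ones. Command/World are invented names for illustration.
    #include <mutex>
    #include <unordered_set>
    #include <vector>

    struct Command { enum class Type { Kill, Damage } type; int entityId; int amount = 0; };

    struct World {                                    // stand-in for real game state
        void kill(int id)            { (void)id; }    // remove entity
        void damage(int id, int amt) { (void)id; (void)amt; }
    };

    class CommandQueue {
    public:
        void push(Command c) {                        // safe to call from worker threads
            std::lock_guard<std::mutex> lock(m_);
            pending_.push_back(c);
        }
        void applyAll(World& world) {                 // main thread, once per frame
            std::vector<Command> batch;
            {
                std::lock_guard<std::mutex> lock(m_);
                batch.swap(pending_);
            }
            std::unordered_set<int> alreadyKilled;    // two "kill this thing" commands collapse to one
            for (const Command& c : batch) {
                if (c.type == Command::Type::Kill) {
                    if (alreadyKilled.insert(c.entityId).second)
                        world.kill(c.entityId);
                } else {
                    world.damage(c.entityId, c.amount);
                }
            }
        }
    private:
        std::mutex m_;
        std::vector<Command> pending_;
    };

    int main() {
        CommandQueue queue;
        World world;
        queue.push({Command::Type::Kill, 42});        // e.g. reported by two physics jobs
        queue.push({Command::Type::Kill, 42});        // duplicate hit on the same object
        queue.applyAll(world);                        // the object is only killed once
    }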

UE4 in particular I know handles all game logic in a single thread, and you can't touch UObjects from outside of that. But here's an example (without much technical detail) of someone implementing multithreaded pathfinding for UE4: https://forums.unrealengine.com/community/work-in-progress/1...


> UE4 in particular I know handles all game logic in a single thread, and you can't touch UObjects from outside of that.

Yeah, that's the kind of thing I was thinking of when I said games programmers seemed skeptical of threads.

Some of that coarse-grained parallelism could possibly be done in JS with Web Workers, but those have their own problems. (See the recent discussion here about the "tasklets" proposal: https://news.ycombinator.com/item?id=15511519)


A bit older but this presentation about Killzone 2's usage of the PS3's Cell architecture was pretty neat https://www.guerrilla-games.com/read/the-playstation-3s-spus...


WebGL isn't trying to be competitive with full blown game engines, is it?

For one, game engines normally don't use OpenGL unless they have to. The GL drivers on PC tend to be very weak. High end games target Direct3D on Windows/Xbox and these days are moving to Vulkan/Metal, even on mobile.

On Windows the GL driver situation is so bad that Chrome translates GL to Direct3D. This obviously will impose some overhead and complicate the driver bug situation still further.

Even when games do target GL they tend to exploit lots of vendor specific extensions and be tested against very specific graphics card/driver combos to enable them to workaround bugs and performance cliffs. Does WebGL even expose driver specific extensions? I don't think it does.

So you are not going to be competing with native apps on the web anytime soon in this area (as with all other areas...)


The recently released Doom game is using OpenGL (and Vulkan).


id has always made OpenGL engines. They get to deal with endless driver breakage. When Rage came out I believe AMD cards didn't work at all for over a week.


Bioshock was a DirectX game and I recall that shadows were broken on AMD at launch. There are some OpenGL-specific problems with driver breakage, but AMD would be in much better shape today if that's all it was.

If you read through the patch notes for your video card drivers, you'll probably notice that every major game release gets special support right in the driver.

NVIDIA has three advantages:

1. Most PC gamers have NVIDIA cards, so developers test primarily against NVIDIA cards.

2. NVIDIA has a boatload of driver developers to hack game-specific fixes and improvements right into their driver. They work around game bugs, rewrite slow shaders, and other stuff like that. I assume that's part of why the graphics driver is now well over a hundred megabytes.

3. NVIDIA sends developers to major game studios to optimize and add graphical effects to their games. For instance, volumetric lighting in Fallout 4 was added by an NVIDIA employee.

I think AMD does much of the same, but they just don't have the money to do it to the same extent. That means more breakage and slower fixes.


> WebGL isn't trying to be competitive with full blown game engines, is it?

Sure it is! Not right now maybe, but the web standards people working on it would love to be a viable platform for AAA games. Every so often they tout a new WebGL port of a well-known game as the harbinger of things to come.


Then why are they implementing an API that's been unpopular for years and is now being phased out entirely (for AAA games)? I think they're more concerned with the politics of it than the technical requirements of that userset. Games are heavy users of driver and card specific extensions for instance. But that'd be at odds with the web's portability commitments.


(I'm mostly playing devil's advocate here, I don't actually think HTML will be suitable for high-end games in the near future. But I think there are decent arguments to be made...)

> Then why are they implementing an API that's been unpopular for years and is now being phased out entirely (for AAA games)?

Mobile. They picked OpenGL ES 2.0 for WebGL because it had comprehensively won on mobile. Apple went with ES 2.0 and Android followed. It's taking a long time for the mobile industry to migrate to ES 3 (which would allow WebGL 2) but ES 2 has been a decent stable baseline for a good few years now. That's quite unusual, and very helpful, given how fast-moving everything in tech is.

[Edit to add: ES 2 is based on desktop OpenGL 2.0, which was the first version to add programmable shaders to core GL. That was a huge API change, and an admission that the D3D approach was better. ES 2 itself is barely 10 years old. So any "GL is unpopular" arguments based on the old fixed-function pipeline are a red herring, I think.]

Mobile was and is more important than either desktops or consoles, because the mobile market is huge and growing, while desktops and console are at best stable.

Google came up with a clever technical solution (the ANGLE library) to emulate ES 2.0 on top of Direct3D, so that sidesteps the technical problems of OpenGL on Windows.

Now, for AAA games, desktops and consoles are obviously far more important. I think there are two responses to that:

First, a bet that mobile will gradually catch up and become equally important. There are a lot of factors involved, but on raw technical terms it's not such a bad bet. Mobile hardware iterates very fast, and some mobile CPUs are getting very competitive with desktops (recent iPads and iPhones especially). Sustained performance is an issue, as mobile devices have much stricter thermal limits; but you can put the same mobile SoC in a bigger box, like the TV set-top boxes that Google, Apple, Amazon and others are experimenting with.

Second, there's no reason WebGL 3 couldn't be based on Vulkan. WebGL 2 hasn't even been fully adopted yet, so it would obviously take a number of years to make that happen. Maybe desktops and consoles will have moved on to something newer and better by then, but maybe they won't.

The big question is whether mobile+web is catching up on desktop+console, or if it'll always be a generation or two behind. I think you'd have to be pretty brave to bet against them ever catching up.

> I think they're more concerned with the politics of it than the technical requirements of that userset.

I'm sure politics plays into it, but for WebGL specifically, it must have been a pretty easy technical decision. Do you pick the 3D standard used on Windows, or do you pick the lower-end one used by iOS and Android (and can be made to work on Windows)?

You could ask why GL rather than D3D won on mobile in the first place. For that you have to look at Microsoft and ask why Windows Mobile failed (in all its different versions). I don't think you can blame that entirely on politics.

> Games are heavy users of driver and card specific extensions for instance. But that'd be at odds with the web's portability commitments.

That's a good point. From a web standards standpoint, it's a very tough conflict to resolve. I think the web people are pushing for common standards. That takes time and it can get very political, but I don't see a better solution. And if they can get it right, portability is a good thing! I don't see why that necessarily means you'll always be behind the curve on performance. A more standardized, portable system can catch up via economies of scale -- it might be easier to learn, have better tooling, a bigger potential market, etc.


I think your argument shows why the web is such a poor platform in many areas.

> Second, there's no reason WebGL 3 couldn't be based on Vulkan

Vulkan is a very low level API designed for ultra-high performance use in engines written by professional engine teams, like Unreal. It requires the developer to write large quantities of code to even render a single triangle because you have to take manual control over the GPUs low level details. To a large extent it's preferred to GL because of better interaction with multi-threading.

It'd make no technical sense to try and expose a low level hardware-oriented API designed for multi-threading to a slow single-threaded language like JavaScript.

> Do you pick the 3D standard used on Windows, or do you pick the lower-end one used by iOS and Android (and can be made to work on Windows)?

Somehow C++ does not have this problem. So how about: don't pick, expose all of them and let the developer use whichever is more appropriate?

You say you don't see any alternative to how WebGL handles driver extensions. Of course there are alternatives: just expose them all. Let there be vendor specific and proprietary stuff in web apps. Just because this is considered politically unacceptable by the ideologues who control the web platform does not mean it's actually unthinkable.

But that'd be against how the web is "designed" (browser makers cabalistically picking winners).

So in fact, I will continue to bet against the web and against mobile. People have been predicting total domination of iPhone/iPad since the day they were launched. I'll still be playing AAA games on consoles or high end Windows PCs 10 years from now, I'm sure of it.


On a couple of specific points--

> It'd make no technical sense to try and expose a low level hardware-oriented API designed for multi-threading to a slow single-threaded language like JavaScript.

Web Assembly will be mature soon, and on the timescales we’re speculating about it could well have some form of threading.

> Let there be vendor specific and proprietary stuff in web apps.

Unfortunately that would be a security nightmare. The Flash and Java plugins are good examples.


WebGL makes no sense for AAA titles. How do you even deal with the gigabytes of assets used in those games?


I'd love to build webgl multiplayer games. I hope to get the chance working on these things in the future.


Unreal has support for compiling to WebAssembly. Not sure how solid it is. But you can check out a demo here: https://s3.amazonaws.com/mozilla-games/ZenGarden/EpicZenGard... 200MB download. Probably requires Firefox.

In general, WebGL and asm.js have done a great job of keeping up with the capabilities of whatever is the current-gen iPad.


I'm afraid I'm nowhere near competent enough in browser technology to comment on the comparison, sorry.


Thanks anyways, I see peoplewindow has pointed out some important issues around my question!


Unreal can target WebGL: https://www.pcper.com/news/General-Tech/Epic-Games-Releases-...

(There are also older demos for asm.js + WebGL 1)


The situation has changed a little bit recently with the introduction of adaptive synchronization monitor standards(GSync and Freesync). On PC this means that framerates above some minimum target can drift without experiencing tearing or missed deadlines.


Indeed, that's what I meant by "most hardware", but that hardware is definitely outside the reach of most people right now. Firstly, consoles don't support adaptive sync, and most TVs don't either, so that eliminates all PS4/XB1 (and their derivatives). Then, to actually get a screen with GSync, you're talking ~350 pounds [0] for an entry-level one, and realistically you're going to need a medium to high-end graphics card (looking on Nvidia's website, it seems their support is actually far better than I expected it to be). Unfortunately, for the majority of people adaptive sync is still a few years off, and I'd be surprised if we saw it this decade.

[0]https://www.scan.co.uk/shop/computer-hardware/monitors/monit...


GSync is expensive because it needs a special chip in the screen. Freesync is cheap and has been made into a standard.


GSync is expensive because NVidia know people will pay for it.


Yes, but it still drops the frame as far as I know (works like a charm by the way, in my experience).


I'm completely fascinated that games can perform massive amounts of math and render 120 frames per second, while some business applications fail to do completely basic data manipulation in a reasonable time frame.


I think it comes down to requirements. If games aren't running smoothly, people simply won't play them.

Business apps have it a lot easier - the users are more forgiving, I guess either because they don't have a choice in the matter or any better options.


Also the people who buy business apps are often not the people who actually have to use them every day. So UI/UX ends up being a much lower priority than say cost.


And, if the program processes data too quicky, users think it isn't working. (Managers, perhaps, think it isn't working hard enough.)


Fortunately, there is a rails library for that. https://github.com/airblade/acts_as_enterprisey


If that were the problem, we would solve it by adding sleep() calls...


Data flow, data flow, data flow.

It's all about structuring your data such that you don't cache miss on your fetches and you don't stall your pipelines by doing operations that flush/occupy the same pipe.

You can easily see 10-50x performance boosts on the same dataset with the right approach.

Look up Data Oriented Design, Mike Acton talks about it a ton.


To expand slightly, a single read from memory that's not cached and actually has to go to RAM can take 100 cycles. That's 100 cycles during which your CPU does nothing but wait for electrical signals to travel to and from RAM. Well written data processing programs (game engines, physics simulations &c) strive to structure their data so they have as few of those as possible. If you just read "Design Patterns" and wrote a beautiful 10-level-deep class hierarchy in Java, where every object access can be behind something like 3 levels of indirection via pointers, every one of those indirections is a potential cache miss.

This is not a critique against either Design Patterns or Java, both of them are important and useful. And really, quite often you don't care about waiting 100 cycles, you have ten million times 100 cycles in every second. It's when you start crunching numbers in large quantities that it can get problematic. Modern CPUs are very fast, but only if you feed them data in the right order. Otherwise you're basically running on a machine from 20 years ago, just with 32GB of RAM. Best program design right now is to yes, do write your deep class hierarchies and whatnot, but also know how to identify the parts of your program doing heavy computation and how to restructure those so you can unleash the full power of a modern processor.
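A tiny C++ illustration of that layout point (not taken from any engine): the same particle update written once against pointer-chased objects and once against flat, contiguous arrays that the prefetcher can stream through.

    // Same particle update, two memory layouts. The pointer-based version can
    // cache-miss on every object; the flat arrays are streamed linearly.
    #include <cstddef>
    #include <memory>
    #include <vector>

    struct Particle { float x, y, vx, vy; };          // "object per particle" style

    void updateAoS(std::vector<std::unique_ptr<Particle>>& particles, float dt) {
        for (auto& p : particles) {                   // each -> is a potential trip to RAM
            p->x += p->vx * dt;
            p->y += p->vy * dt;
        }
    }

    struct ParticlesSoA {                             // "structure of arrays": contiguous fields
        std::vector<float> x, y, vx, vy;
    };

    void updateSoA(ParticlesSoA& p, float dt) {
        const std::size_t n = p.x.size();
        for (std::size_t i = 0; i < n; ++i) {         // linear, prefetch-friendly, vectorizable
            p.x[i] += p.vx[i] * dt;
            p.y[i] += p.vy[i] * dt;
        }
    }

    int main() {
        const std::size_t n = 100000;
        std::vector<std::unique_ptr<Particle>> aos;
        for (std::size_t i = 0; i < n; ++i)
            aos.push_back(std::make_unique<Particle>(Particle{0.f, 0.f, 1.f, 1.f}));
        updateAoS(aos, 1.0f / 60.0f);

        ParticlesSoA soa;
        soa.x.assign(n, 0.f); soa.y.assign(n, 0.f);
        soa.vx.assign(n, 1.f); soa.vy.assign(n, 1.f);
        updateSoA(soa, 1.0f / 60.0f);
    }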


To clarify: "That's 100 cycles during which your CPU does nothing but wait..." is true only in the absolute worst case. In the likely case, your superscalar, out-of-order processor is working on the next several instructions, possibly including some speculative instructions after branches, and simultaneous-multithreading is letting another thread use the now-idle resources. Furthermore, since it's superscalar there are tons of compute resources (integer ALUs, FP ALUs, vector ALUs, to mention a few) which the CPU scheduler will try to keep busy as close to 100% of the time as possible.

That said, in any performance critical application, you pay close attention to your memory hierarchy and you design and compile your application in a way to respect that as well as take advantage of the underlying hardware architecture and resources to the fullest.


> and simultaneous-multithreading is letting another thread use the now-idle resources.

This can actually be worse due to context switch overhead and caches you were using getting evicted by the now running thread hitting the same issue.

If you want to do this right you should be organizing your data so that the prefetcher can take advantage of it. All of those speculative things still have a cost.


Data IO and actually having to be correct is why business logic is so slow. Sure, there are megabytes of textures, but they're already uploaded to fast RAM on a heavily pipelined and massively parallel device.

If you had to look up CRUD vertex and texture data as database rows from another machine through the internet you wouldn't see a very smooth game.


The learning curve of game development is much steeper, especially if you want to create your own engine.

When programming with APIs like OpenGL or DirectX, you either know what you're doing and get everything right, or you get a completely black framebuffer and maybe some error codes. There's practically no in-between. For business applications you can get pretty convincing results just by throwing together some buttons and textboxes in NetBeans or Visual Studio.


Almost anyone can program a CRUD app. Very few people can write a game engine.


Different hardware setup - that business might have a database that is dealing with millions of other users at the same time too. In which case they need to invest more in their infrastructure.


It's not always though. Lots of businesses _aren't_ dealing with millions of other users, they're dealing with a handful at a time, and still manage to be painfully slow. Our Jira instance in work is laughably slow despite being hosted on-site on bare metal and used by 20 people (usually not concurrently).


Our Jira instance is shared across multiple academic institutions and is hosting hundreds of active projects and I still find page loads to be pretty much instant, so mileage varies I guess!

(Not to negate your point though. The crappy HR systems are painfully slow and incredibly frustrating to use, in comparison)


These applications may contain bad code, but on the other hand game engines exploit hardware acceleration, so it's kind of an unfair comparison.


Javascript can use WebGL to leverage hardware acceleration, but GPU acceleration doesn't suit all workloads.


Yes? You're looking at an entire vertically integrated software-hardware business built to make this possible.

A GTX 1060 has 4.4 TFLOPs of processing power. 1920 * 1080 = 2,073,600 pixels per frame, which at 60 fps is about 125 million pixels per second. Divide that into 4.4T and you get a budget of max 35,000 operations per pixel. That's quite a lot. In practice you will often spend time waiting for textures to arrive from RAM.
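Spelled out, assuming a 60 fps target and remembering that peak TFLOPs are a theoretical maximum:

    // Back-of-the-envelope only: peak FP32 throughput divided by pixel rate.
    #include <cstdio>

    int main() {
        const double flops           = 4.4e12;               // GTX 1060, peak FP32
        const double pixelsPerFrame  = 1920.0 * 1080.0;       // 2,073,600
        const double framesPerSecond = 60.0;                  // assumed target
        const double pixelsPerSecond = pixelsPerFrame * framesPerSecond;  // ~124 million
        const double opsPerPixel     = flops / pixelsPerSecond;           // ~35,000
        std::printf("~%.0f operations per pixel per frame\n", opsPerPixel);
    }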

The techniques themselves have also been built up over decades. There's a lot of giants to stand on.


Bear in mind however, that Unreal also had the best software renderer that could handle most of the visual effects on even something as simple as a 200 MHz Pentium w/ no 3d accelerator. I myself ran the game smoothly in 320x240 on such a system (and could run 400x300 if I didn't mind the slowdowns)


This mostly all translates to math equations, and with data oriented design a computer can crunch through this stuff extremely quickly. But this kind of stuff is why C++ is still popular - determinism is really important to get consistent performance at this low level. Also shader code is quite restrictive in order to keep performance up.

Edit: replaced 'driven' with 'oriented' - thanks Narishma


I think you mean data-oriented design, data-driven is something different.


> And this is going on at 60 Hz?

Or more: https://answers.unrealengine.com/questions/7459/question-is-... (person wondering if you can get more than 120 FPS in UE back in '14, one of the replies says they get "stat fps" in the 180s)


(I haven't read the whole article, but haven't seen other replies mention this)

Unreal uses an architecture called deferred rendering[1]. It's more complex, but allows for a lot more tricks and the main benefit is that lighting performance is closer to constant, instead of increasing or decreasing with how many lights you have.

Forward rendering is the simpler alternative[2]. For Unreal, they had a forward renderer for mobile and were introducing one as an option for VR since the minimum frame time can be shorter.

[1] https://en.wikipedia.org/wiki/Deferred_shading

[2] https://gamedevelopment.tutsplus.com/articles/forward-render...
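To make the difference concrete, here is a CPU-side toy of the deferred idea (real engines do this on the GPU with a G-buffer and shaders; every name below is illustrative): a geometry pass fills a buffer of per-pixel surface attributes once, and the lighting pass then loops over pixels and lights without touching the scene's triangles again.

    // CPU-side toy of deferred shading; real engines keep the G-buffer in GPU
    // textures and do this in a shader. All names are illustrative.
    #include <cstddef>
    #include <vector>

    struct GBufferTexel { float albedo[3]; float normal[3]; float depth; };  // written by the geometry pass
    struct Light        { float position[3]; float color[3]; float radius; };

    std::vector<float> lightingPass(const std::vector<GBufferTexel>& gbuffer,
                                    const std::vector<Light>& lights) {
        std::vector<float> lit(gbuffer.size(), 0.0f);         // single channel, for brevity
        for (std::size_t px = 0; px < gbuffer.size(); ++px) {
            for (const Light& light : lights) {
                // Each light only reads what the G-buffer stored for this pixel;
                // the scene's triangles were rasterized once, in the earlier pass.
                lit[px] += gbuffer[px].albedo[0] * light.color[0];   // stand-in for a real BRDF
            }
        }
        return lit;
    }

    int main() {
        std::vector<GBufferTexel> gbuffer(1920 * 1080);        // filled by the geometry pass
        std::vector<Light> lights(100);
        lightingPass(gbuffer, lights);                         // cost ~ pixels x lights, not draws x lights
    }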


In deferred rendering the cost of lighting is still dependent on the number of lights. In forward rendering it is dependent on both the number of lights as well as the scene geometry. You can apply various additional techniques to make each light much cheaper though.


But now the industry is moving away from pure deferred, as it scales poorly with higher resolutions. 4K, high res mobile devices, VR...


"Forward+" is the typical term for the newer style of forward rendering which includes a special pass for optimizing large numbers of lights(the most appealing part of deferred).


This is why I don't get why the game industry has such a bad rep for overworking people.

It's genuinely hard to do this kind of code so you'd think they take care not to scare away the few people who can do it.

I recall at one point Goldmans was bragging to my team about how they'd hired a game dev to do their click-to-trade currency UI. Seemed like the guy found an easier job for more pay.


This is a bit like saying "I don't get why web development is overworking people, just look at all this cool stuff Google is doing with the Go runtime!".

Most people who work in the game industry are not doing deep game engine hacking, but doing grunt work like scripting/designing some scene in a giant adventure game that's going to have thousands of those, and all of those need to be finished before the release date.


I wonder what the turn-over is for engine programmers? I'd guess very low.

I wonder how many commercial engine programmers there actually are, anyway? I'd guess very few.


Yeah, "low and falling" for both of them.

There are only a handful of major engines today and it's been that way for years. Unreal, Unity, Frostbite .... a few games still roll their own but the costs of keeping up are getting extreme.

I really doubt the Unreal Engine developers feel abused. For one thing, the release schedules of engines and games are now to some extent disconnected. Games get whatever the engine can do. Epic isn't going to move mountains to add a feature to UE for a licensee unless they get a TON of money to do it.


Whilst this is true for a lot of cases, it doesn't always hold true for companies which consistently release highly profitable games; an example of such a company is Traveller's Tales in the UK. With their volume of sales, the royalties outweigh paying for an internal team to develop the engine (multiple times over). You can check VGChartz to get some idea of their game sales, and if you are particularly inclined, use Companies House to drill down to get the P&L (although you may need to spend some time getting through their company structure to find the actual P&L which has the right information).

But for a lot of companies, using an existing engine is a no-brainer - even more so when the developer has no idea how well their game is going to do, using something like Unreal effectively means they share the risk with you.


It appears Traveller's Tales makes almost exclusively Lego games. I doubt those require AAA-level game engines. To clarify, I was talking about high-end 3D graphics.


We might also include engine component suppliers, such as SpeedTree, PhysX, and (whoever makes) Bink Video.


Bink is made by the awesome Rad Game Tools. They have some really good engineers on their payroll (ryg, cbloom) and they're consistently churning out cool things :)

(Disclaimer: Not affiliated, but I enjoy reading their developer's blogs)


I didn't realize cbloom was a Rad person too. His "Library Writing Realizations" rant is both a) awesome and b) really hard to link to.

If you haven't seen it, go to http://cbloom.com/rants.html and search for the title.


I'd never heard of cbloom or his rants. That was an excellent read, thanks for sharing!


If you haven't read ryg's blog, you should. https://fgiesen.wordpress.com/


Thanks for the recommendation!


It's kind of amazing how Unity forced change on Epic though. In the X360-PS3 era they absolutely _dominated_ the market; the list of games built on UE during that time is insane. However, after that, Unity started to come up as it had much nicer licensing, and as a response to that Epic made UE almost free to use for small studios and more palatable for big ones.


I don't think Unity forced change on Epic, Unity targeted the nascent indie market very well and was a great replacement for the deprecated XNA. I can't think of a AAA Unity title aside from Microsoft's Recore - which launched with problems at $40.

Epic's AAA market ended up with bigger budgets and a demand for more control, so many of them simply created their own engines. So most UE games seem to be big budget indie titles (Obduction, PUBG, Psychonauts 2) or tight budget AAA titles (Street Fighter V, Shenmue III)


At that point Epic were offering the UDK as the indie alternative. Unity absolutely showed them the way things were going in terms of monetizing technology. Especially given the at the time exorbitant license fees for UE3. Same thing happened at Crytek too.

Part of the continuing split of Unity indie and UE4 'AA/AAA' is less to do with the capabilities of each engine and more to do with who has the longest experience with each. They are both creeping in on one another's turf in that respect.


Hearthstone is built with Unity.


> I can't think of a AAA Unity title aside from Microsoft's Recore

Cities Skylines?


> I don't think Unity forced change on Epic,

I know, I know, Correlation does not imply causation. But just from wikipedia:

> On March 19, 2014, at the Game Developers Conference, Epic Games released Unreal Engine 4, and all of its tools, features and complete C++ source code, to the development community through a new subscription model.

Same year Unity took best engine.

> In July 2014, Unity won the "Best Engine" award at the UK's annual Develop Industry Excellence Awards.[1]

Unity made Epic react.

[0]: https://en.wikipedia.org/wiki/Unreal_Engine#Unreal_Engine_4 [1]: https://en.wikipedia.org/wiki/Unity_(game_engine)#History


It may be that Unity/Unreal influenced each other but it was hardly reacting to the way the market was going. Many top AAA games are now made with Frostbite which isn't even available publicly at all, it's entirely EA proprietary. If "cheap and open" was such a clear trend Frostbite wouldn't exist.


Wikipedia's insane list of Unreal games: https://en.wikipedia.org/wiki/List_of_Unreal_Engine_games


Imagine the royalties.


In a sense, the amazing thing is that Epic changed. In the past, incumbents didn't, and failed. Today, companies know about this dynamic, and try to adapt and undercut themselves before an upstart can. It takes some steel to let go of all that short-term profit.

Even so, it may be too little too late for Epic. It's straightforward to drop prices, but difficult to change business models, release timeframes, market position in customer's minds, codebase, customer feedback from a new usage.




Most AAA studios have "engine" programmers (they are called graphics programmers in the industry). Even if such a studio uses UE or some other licensed engine there are still programmers who are modifying it.


Graphics programming is not engine programming, although it can be viewed as a subset of it. Most large game studios have engine programmers that do very little graphics programming, if any. The majority of code in a game engine is not related to the renderer. I've worked in the industry on AAA and indie games and have worked in both roles, which are often different engineering teams.


I work in the industry now and have been since the '90s. "Engine" roles, if they exist, are for stuff like streaming, save/load etc. The thing described in TFA is graphics programming.


Game industry is like Hollywood. There's an endless supply of highly talented hopefuls from all around the world who want to get into the industry more than anything else.

The focus of exploitation is just different -- the Harvey Weinsteins of the game industry don't try to sleep with brilliant Romanian programmers, instead they work them to death.


"the Harvey Weinsteins of the game industry don't try to sleep with brilliant Romanian programmers, instead they work them to death."

As a Romanian programmer (although not in the game industry), I must admit this is very eloquently put. Hats off to you, sir.


As an ex-game dev this is pretty on the mark.

They've honed using your "passion" to extract as much work as they can from you for ~2-3 years. After which there's a 50/50 shot that the studio folds/etc.

In the AAA space there's also a vicious 90/10 rule where 10% of the games make 90% of the money. With rising costs it becomes increasingly difficult to keep hitting that 10% you need to keep your studio running.


This is just capitalism at work; in the game industry there is no guaranteed reward at the end of the development cycle, so you have to keep up the pressure in order to beat out your competitors, and increase your chances of winning. This leads to crunching, 60 hour weeks and mass layoffs when you lose - lookup any of the top AAA game companies on Glassdoor and you will find they are awful places to work (Projekt Red, Rockstar, the now extinct Crytek, EA - consistently voted 'Worst Company', etc).


> in the game industry there is no guaranteed reward at the end of the development cycle

The same is true for any product industry. There are things you can do (customer development, short iterations that involve your potential customers, etc) to increase the chances, but if you're not doing work for clients directly, then there is no guarantee.

Note that the "doing work for clients directly" exists in game development too, there are many freelance or services companies that do work for other companies, and from what I hear about the game industry, these have much more guaranteed pay, just like outside of games. Its the "build the product, when its done, hope someone buys it" type of work that has no guarantees, but that is in no way unique to game development.

The difference is, most other industries don't have such crazy crunch times...


Games don't inherently have crazy crunch times; the studios are simply going to fire a lot of people anyway, so they don't care about burning them out.

It's completely rational behavior that has nothing to do with games, just look at MMO's for a more reasonable development process.


Of course there’s nothing inherent about it, but it does seem that many games companies do follow some rather predatory practices.


Yeah I agree, the problem applies to most of the entertainment industry. As a result more investment is put into sequels of successful games, artists, TV shows, movies, books etc.

Edit: as for crunch times, I expect this is to do with the interactive element of gaming. There is a whole lot more work involved when the client has the ability to break your product.


My understanding is that crunch time (as well as the release of flagrantly buggy games with patches coming later) is driven primarily by marketing schedules/budgets. Ad buys, press releases, interviews etc. are negotiated and coordinated months in advance, and big releases from any given publisher need to be staggered largely so that they don't all demand marketing money at the same time.


Publishers give advances, have milestones, etc. It is uncertain whether a game will make money out in the market, but part of a publisher's role is supposed to be to have a portfolio and be able to spread that risk, funding some of the development of both the losers and the winners.


EA isn't voted worst company for their working conditions. They're not even that voted for the quality/popularity of their games (games like Battlefront or Battlefield still sell very well). It's just a recognisable brand that people like to criticise, and that makes it an easy target for organized voting. Ubisoft is also in that category.


I could argue that - years at EA, and their abuse.


Indeed. The fact that EA was successfully sued for overtime pay for salaried devs speaks volumes, both about their pay and the working conditions.


That's because game devs don't know what life is like for other programmers. Longer hours and lower pay, but you get to work on what you love (every blue moon or so).


Looked up several of the companies you mention and most of them have pretty high ratings. EA is voted worst company by consumers, not by employees, which is pretty misleading.

Please stop lying.


I'm not lying, read the actual reviews and don't just look at the review score. The good reviews come from people who don't mind working 60 hour weeks, being paid lower salaries, and getting overworked.


I don't have an account so I can only see 1 review for each company. But it is a bit strange to say that only some reviews count and not others.

Of course most people care if you're being overworked for a small amount of compensation. Your statement is a bit ridiculous.


I'm not going to argue as I see you're trolling, but it takes 2 seconds to sign in and read the reviews. Most good reviews from Projekt Red for example are from full time employees who say they love working there, but then list cons such as: 'Economically things can get tough due to lower salaries' and 'I would not recommend this company for those who are not ready to push themselves and those who just want a 9 to 17h without any trouble.' etc.


I'm not trolling, but you can't take 1 company and treat their reviews as "this is how it is in the entire industry".

I don't want to sign in though, since that service offers nothing for me as a non-American.


>Of course most people care if you're being overworked for a small amount of compensation. Your statement is a bit ridicolous.

And yet academia.


> Please stop lying.

This breaks the HN guidelines. Please post civilly and substantively, or not at all.


I think it depends a lot on which company you work for and where in the world you work. For example, there are a lot of game companies in my country and it's illegal to require people to work more than 40 hours a week.

I think some may do it anyway of their own free will or due to social pressure, but these people are probably pretty rare. In my country (Sweden) we don't really have a culture where companies pressure people into working more in general, so those who do often meet a lot of social uproar and protests.


As a fellow Swede, I think that's slightly disingenuous. Sure, companies can't officially force you to work overtime, but for sure there are many instances where people are encouraged to bring work home or pull some extra hours to make a deadline – especially at smaller companies.


Yeah sure, but it's far from the work environment in the States or even in other countries in Europe. Requiring people to do some extra hours sometimes is often part of the contract, but I have never been required to do so.

Once I had a boss that wanted me to work overtime "unofficially" but I refused. Needless to say, I left that place pretty quickly.

I don't work in the game industry, but I know some people who do and they never work overtime.


People get into the games industry for the "intrinsic reward" (ie they think it's cool) rather than the money or the working conditions. The games industry exploits this badly.


> Goldmans ... hired a game dev to do their click-to-trade currency UI. Seemed like the guy found an easier job for more pay.

Not necessarily easier... but definitely a different set of difficulties.


I interpreted "easier" as in "I don't have to work 16+ hour days regularly"


Does anyone know of a good book or course that does a deep dive into unreal engine source? I'm trying to level up my graphics programming skills and that seems like it would be an amazing reference, but I don't currently have the knowledge to navigate it myself.


Don't think there is any such book specifically for UE and its source. But there's a lot of good books on realtime rendering and graphics programming in general.

GPU Gems, Shader X and GPU Pro are good series for learning specific graphics programming techniques.

https://developer.nvidia.com/gpugems/GPUGems/gpugems_pref01....

http://www.realtimerendering.com/resources/shaderx/

For a general game engine overview: Game Engine Architecture by Jason Gregory (Naughty Dog)

Game Programming Patterns: https://www.amazon.co.uk/Game-Programming-Patterns-Robert-Ny...

Realtime rendering overview: https://www.amazon.co.uk/Real-Time-Rendering-Third-Tomas-Ake...

Related math: https://www.amazon.co.uk/Math-Primer-Graphics-Game-Developme...

Other recommendations:

http://mrelusive.com/books/books.html

http://fabiensanglard.net/Computer_Graphics_Principles_and_P...

It's fun to explore the source though, and NVIDIA has some cool experimental branches of the engine with their stuff integrated. https://github.com/NvPhysX/UnrealEngine


Have you actually taken a look at it (the source code)? It's actually very well written and easy to understand even for me (a sysadmin, not a programmer). I have a side project in UE4 that I've been trying to dev on GNU/Linux only, and I've been working hard to avoid the issues that games like PUBG have that make them unable to run on Linux (mostly because Epic has not been taking care of us Linux users like they promised, and marketplace assets often only work on Windows, so when a dev like the PUBG guys just throws a bunch of assets in, that's why they can't get things to work on Linux).

My problem is I don't know much C#, but it really confuses me why Epic decided to use C# for the framework of a C++ engine...

That project is on hold while I play with vulkan directly via https://github.com/KhronosGroup/Vulkan-Hpp.

I have too many projects.


Like the other commenter said, game engines are a huge topic. The game engine architecture book has a nice overview (even though it's hefty); you would probably want an equal-size book on each aspect of a game engine: graphics, networking, dynamics, audio, AI, etc. I haven't read it, but I've also heard good things about the real-time rendering book. I did look at the game engine architecture book for info about deferred rendering and it was fairly shallow (but that's not the goal of the book). If I remember correctly, the book actually points you to more detailed resources (papers) and books.


I agree, each topic in the game engine architecture book could be another book on its own. However, it does give a good introduction and briefly touches on most of the topics in this diagram https://i.imgur.com/SxydAoF.png

(diagram is from the book)


I'm not sure about unreal engine, but for a general overview the "Game Engine Architecture" book is a great starting point.


That book is great, but its discussion of graphical techniques is shallow (by design, it's an overview book). Real-Time Rendering was the book given to me back when I was first learning game development:

http://www.realtimerendering.com/book.html


Sheesh. How many engineer-hours have gone into Unreal?


A hell of a lot, the engine itself was in development for around three years IIRC


Which is why there is a saying in the games industry that one either builds an engine or a game.

Also why commercial game developers usually don't whine about 3D APIs like FOSS indie devs do.

Adding support for yet another graphics backend is a trivial task, compared to the overall feature set of a game engine.


The engine has engineering effort in it since the late 1990's since some of the code is that old. (I say this because I remember learning from the mod SDK back in the day, and I've looked into what it is now). So almost 2 decades worth of engineering effort has gone into it. Granted a bunch of that effort has been in rewriting systems which needed to be updated (like rendering).


According to the Wikipedia page at least, even just the current version (Unreal Engine 4) has been in development for 14 years now! With 2003-2008 apparently being just Tim Sweeney.


I have high expectations from simulation. I hope they will be able to reach 99.9% realism so we can use it to recreate any situation we want, for fun (games) and profit (pre-training robots).

<rant>Self driving cars already train on simulated roads because that allows the creation of any scenario. Even human pilots use simulators to train, especially for those rare situations. And since robot training is expensive and slow in the real world, the only alternative is to do it in a sim.

Simulation allows the composition of any scene that might be very hard to record in the wild - for example, an octopus sitting as a hat on the head of an elephant... where would you be able to get that photo? but surely you imagined it in 0.1 seconds, using your imagination - a powerful simulator humans have in their heads. Such images are crucial in training AI.

I think AI will reach human level when it will be equipped with a sandbox where it can try out its ideas and concepts, similarly to how scientists use labs to test their theories. When AI gets its world simulator, it will be able to learn reasoning and meaning that is grounded in verification. Just like us. We have the world itself as our fundamental "simulator" and experiment on it to learn physics, biology and AI. AI needs a simulator too.</>


Cool, but what does this have to do with the comment you replied to?


Remember that the people who worked on the engine for three years also have years and years and years of experience prior to this.


And that was from 1995 to 1998. It's had a lot of energy spent on it since then. It's an incredible piece of tech.


an unreal amount?


I personally believe much of this will become passe within five years or so.

I have seen a few demos of real-time path tracing software running on GPUs. I know GPUs are fast, but I wonder if there is a way to do the same math on an ASIC or FPGA that could be even faster? The main issue with existing approaches seems to be doing enough iterations to get a clear picture.

Anyway I believe a lot of the tricks related to triangle meshes and lighting approximations will be thrown out and replaced with procedural generation and real time path tracing.


Perhaps. Too bad we hit a GHz wall, so the only thing we can do now is add more cores.

FYI: For 6K and 8K, both nVidia and AMD tech reps have said that "multi-GPU" (not CORE) solutions will be required to reach those. But they also said once they hit 16K, they'll have "real eye quality" in terms of DPI which comes with lots of extra "realism" for free. Like, watch the new Jungle Book in 4K and certain scenes of mountains will blow your mind and feel "real" (without any 3d glasses) and your brain is like "holy shit, I'm not watching a movie (for this split second), this is something real." But most scenes still don't. We're so close to photorealistic, I can taste it! (Like that GTA 5 photorealism mod on HN yesterday.)

That's why they're all moving to Vulkan. OpenGL is single-threaded and makes it a PITA to exploit multi-GPU solutions. (Global state, one draw thread.)


Ehh, I'm kind of doubtful. I'm sure that stuff will get more popular, but it's not going to replace triangle-based rendering for MOST games for a very long time... especially since we're moving into the realm of VR where games need to render tons of pixels at very fast refresh rates.


> For stationary lights and dynamic props, Unreal uses per object shadows, meaning that it renders one shadowmap per dynamic prop per light

Isn't that a TON of shadowmaps?


This is why I like minimalist 3D graphics.




