Sure, game engines are really complex and impressive, but the same is true of a lot of other kinds of software. If you work on websites, have you thought about how much processing goes on when loading and rendering a page?
For game engines there is a lot of material out there explaining the basic processes and data flows, and there are free or open source games and game engines around as well: Unreal Engine, used in the article, is free, including full access to the source code (and it is by far the most widely used game engine in the gaming industry). And with a free graphics debugger you can look fairly easily at how games render things too - pretty much any game you can run on your PC.
That's not to say that game engines are simple. However, it is relatively easy to understand how they work on a basic level (i.e. what they have to accomplish and how they do so in general terms), and, specifically for graphics, simple renderers are not particularly hard to develop yourself as a small side project.
With browsers, this is a lot more difficult. There are perhaps three modern browser engines, which are mostly open source but also very large pieces of software. There are no books or tutorials explaining how to develop browser engines. And the debugging tools in the browser only tell you how often a "DOM update" or "render" happened and perhaps how long it took.
Whilst true, your simple renderer analogy does not work: it is like me saying that if I can create a program to draw some scrolling text, then I can create a browser. I presume you have not had to optimise GPU graphics code whilst maintaining compatibility across multiple GPU vendors with a common code base?
One game engine is not the same as another, and therefore not suitable for bulk comparison with any particular thing. Each game/graphics engine has its own set of requirements, its own quality bar, and its own level of understanding required to grasp what is actually going on.
If you are restricting this to AAA, I’m not even sure that holds. There are a good number of big teams that use UE4, but I think I’d go with Frostbite, due to the sheer number of titles that EA pushes out that are now based on it; that’s certainly more big games than UE4 is shipping. I’d probably put Ubisoft ahead of them too, with their internal engines.
Many AAA studios either:
a) build their own internal engine, or
b) pay licensing fees to not disclose what engine they're using.
For overall game usage, however, I'd definitely say Unity is most used, with UE4 coming in second.
They're actually similar in some ways -- there are a handful of browsers and a handful of AAA game engines, and they're all either open source or at least licensable source.
It doesn’t always happen at 60 Hz (which is ~16 ms total processing time for all the above plus what’s in the post). Some games run at 30 Hz (33 ms), some run unlocked (varying times), and some can even run at 144 Hz (7 ms).
As an extreme example, VR headsets normally require rendering twice (once for each eye) and run at 90 FPS, which leaves 11 ms to run your entire game and render it twice.
Furthermore, these aren’t normally just guidelines, they are caps. If your game runs at 60 Hz and you miss your target by 1 ms, on most hardware you’ll miss the hardware refresh, which means either a temporary drop to 30 Hz or tearing (half the old image and half the new image), neither of which is particularly pleasant.
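To put the budgets in one place, a quick back-of-envelope (frame time is just 1000 / refresh rate):

    // Frame budget: everything (input, physics, gameplay, rendering) must fit.
    for (const hz of [30, 60, 90, 144]) {
      console.log(`${hz} Hz -> ${(1000 / hz).toFixed(1)} ms per frame`);
    }
    // 30 Hz -> 33.3 ms, 60 Hz -> 16.7 ms, 90 Hz -> 11.1 ms, 144 Hz -> 6.9 ms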
Some additional reading on the topic:
Since you're dealing with multithreaded CPUs plus an asynchronous GPU, you can parallelize and pipeline all this. How UE works in its DX11 and OpenGL renderers, on a 90 Hz game, is that the Game Thread (physics/gameplay) runs frame N, while the Render Thread (GPU commands/math) runs frame N-1, while the GPU executes frame N-2. This gives you roughly 33 ms of total pipeline time per frame (three 11 ms stages) at a throughput of 90 Hz, at the cost of more latency.
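As a rough sketch of those offsets (made-up names, not UE's actual API; in a real engine the three stages run concurrently on separate threads, whereas this serial loop only illustrates which frame each stage touches):

    // Hypothetical 3-stage pipelined frame loop.
    // Throughput: one frame per stage time (11 ms at 90 Hz).
    // Latency: ~3 stages = ~33 ms from simulation to pixels.
    interface Frame { n: number; simState?: string; commands?: string[]; }

    function simulate(f: Frame)      { f.simState = `sim ${f.n}`; }           // Game thread: physics/gameplay
    function buildCommands(f: Frame) { f.commands = [`draw ${f.simState}`]; } // Render thread: GPU commands/math
    function gpuExecute(f: Frame)    { console.log(`GPU presents frame ${f.n}`); }

    const inFlight: Frame[] = [];  // newest first
    for (let n = 0; n < 6; n++) {
      inFlight.unshift({ n });
      simulate(inFlight[0]);                        // game thread: frame N
      if (inFlight[1]) buildCommands(inFlight[1]);  // render thread: frame N-1
      if (inFlight[2]) gpuExecute(inFlight.pop()!); // GPU: frame N-2, then retire it
    }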
I wonder what the cutoff is where someone starts to notice, but I imagine it's not much greater than 50-100ms
I have a gaming display with a 1 ms response time. If I play something fast and twitchy on my PS4 (Titanfall 2, for example) on my display and then on my TV, I can 100% notice the difference between the two, even when you adjust the TV to compensate (the TV's gaming mode): you really do notice the extra time taken for events to occur. Titanfall 2 is actually a pretty good example because of the insane gameplay speed.
Here are some examples:
Some of the bottlenecks in WebGL are:
- The WebGL -> OpenGL translation layer takes some time. In Chrome it sanity-checks your input (which you definitely want in a browser! GPU drivers are very insecure) and executes it in a separate process. Not as expensive as you might think, especially if you minimize your draw calls, which is good practice anyway (see the sketch after this list).
- WebGL is basically OpenGL ES 2.0 (and WebGL 2 is ES 3.0), which is missing some useful features of full OpenGL, and doesn't offer the low-level, low-overhead access of Vulkan or Metal.
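To illustrate the draw-call point from the first bullet, a minimal sketch (assuming a WebGL context `gl` with shader and vertex attributes already set up; 2D positions for brevity). Every call crosses the validation layer, and in Chrome a process boundary, so the batched version pays that fixed cost once:

    // One draw call per sprite: pays per-call overhead N times.
    function drawEach(gl: WebGLRenderingContext, sprites: Float32Array[]) {
      for (const verts of sprites) {
        gl.bufferData(gl.ARRAY_BUFFER, verts, gl.DYNAMIC_DRAW);
        gl.drawArrays(gl.TRIANGLES, 0, verts.length / 2);
      }
    }

    // Concatenate everything and issue a single call.
    function drawBatched(gl: WebGLRenderingContext, sprites: Float32Array[]) {
      const total = sprites.reduce((n, v) => n + v.length, 0);
      const batch = new Float32Array(total);
      let offset = 0;
      for (const verts of sprites) { batch.set(verts, offset); offset += verts.length; }
      gl.bufferData(gl.ARRAY_BUFFER, batch, gl.DYNAMIC_DRAW);
      gl.drawArrays(gl.TRIANGLES, 0, total / 2);  // one call for everything
    }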
Depending on what you're doing, I'd guess WebGL might be no more than 2x slower (assuming plenty of optimization work). Some fiddly things might be 10x slower, or just not possible at all within the WebGL API.
WebGL 2 has a lot of very important new features, but support for that still seems to be patchy, and it's only just catching up with mainstream mobile graphics. It's a generation behind Vulkan.
Apart from that, an AAA game will have massive amounts of graphical and audio assets. Delivering that over the network is a pain and HTML5 caching is a pain. Doable, but hardly comparable to just loading it from local storage.
Oh, and one more thing! A big game wants sound as well as graphics, and WebAudio is a mess. And audio mixing is typically done in a background thread, so that's one area where the lack of threads in JS is a real problem.
Overall WebGL is very nice, it was a great choice to follow OpenGL ES closely (security problems aside).
Boy do they, yes.
I'm specifically curious about CPU-intensive stuff that is sharded out to multiple threads. That's your classic multithreaded programming, the sort of thing you'd do in scientific computing, but I always got the impression games people were skeptical about it, due to unpredictable performance and the high risk of bugs.
Execution: You can google around for "game engine job system", but unfortunately, game engine blogs really fell off a while back as most of that crew moved to Twitter. So the best material out there is in the form of GDC presentations such as "Parallelizing the Naughty Dog Engine Using Fibers", "Destiny's Multithreaded Rendering Architecture", "Multithreading the Entire Destiny Engine", and "Killzone Shadow Fall: Threading the Entity Update on PS4". Slides and videos are available around the web.
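The core idea in those talks is the same everywhere: split frame work into many small independent jobs and let a pool of worker threads drain them. A minimal sketch (illustrative only; real engines add fibers, per-thread queues, and dependency tracking):

    type Job = () => void;

    class JobSystem {
      private queue: Job[] = [];
      schedule(job: Job) { this.queue.push(job); }
      // Stand-in for worker threads (Web Workers in JS, std::thread in C++);
      // here the "workers" drain the queue serially just to show the shape.
      runFrame(workers = 4) {
        for (let w = 0; w < workers; w++) {
          let job: Job | undefined;
          while ((job = this.queue.pop()) !== undefined) job();
        }
      }
    }

    const jobs = new JobSystem();
    const x = new Float32Array([0, 1, 2, 3]), vx = new Float32Array([1, 1, 1, 1]);
    for (let i = 0; i < x.length; i++) {
      jobs.schedule(() => { x[i] += vx[i]; }); // one tiny job per entity
    }
    jobs.runFrame(); // all jobs finish before the frame continues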
I never got to program one but I remember being fascinated by the weird architectures of the PS2 and PS3.
It occurs to me that there are some similarly weird architectures in mobile right now -- it's not uncommon to see Android flagship phones with 8(!) cores, which is just ludicrous. And there are lots of asymmetric "big.LITTLE" designs with a mix of high-speed and low-power cores.
Maybe I'm just reading the wrong blogs, but I've barely seen any discussion on how to optimize code for those crazy Android multicore CPUs, even though it seems like there's potentially a lot of upside. I guess Android is so fragmented and fast-moving that it's a tougher challenge than optimizing for two or three specific games consoles; also Android app prices are low so there probably isn't as much motivation.
Also, Apple is miles and miles ahead of everybody else in mobile performance, and they've consistently gone with just 2-3 cores. Their mobile CPUs and GPUs are very smartly designed, really well-balanced.
There’s also this talk - http://developer2.download.nvidia.com/assets/gameworks/downl... - which covers collision detection and collision resolution on the GPU; it actually explains the architecture of the CPU-side engine quite well and shows which parts are serial and which parts are well parallelised.
Of course, you're still dealing with the event loop in most cases, which is probably a stumbling block when it comes to really low-level stuff. That said, there are even facilities for shared memory and atomic operations these days, which helps. I've messed around with it a little on a side project; as a JS developer, it's really weird and fun to say "screw the event loop!" and just enter an endless synchronous loop. :D
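For the curious, the pattern looks roughly like this (a sketch, assuming an environment where SharedArrayBuffer is enabled; file names are made up):

    // main.ts - spawn a worker that never yields to its event loop
    const sab = new SharedArrayBuffer(4);
    const flag = new Int32Array(sab);
    const worker = new Worker("worker.js");
    worker.postMessage(sab);
    // ...whenever there's work to do, wake it:
    Atomics.store(flag, 0, 1);
    Atomics.notify(flag, 0);

    // worker.js - endless synchronous loop, no event loop involved
    onmessage = (e: MessageEvent) => {
      const sharedFlag = new Int32Array(e.data as SharedArrayBuffer);
      for (;;) {
        Atomics.wait(sharedFlag, 0, 0);  // block (synchronously!) while flag is 0
        Atomics.store(sharedFlag, 0, 0); // reset, then do a tick of work
        // ... crunch numbers here ...
      }
    };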
That way, if you're running a multithreaded physics simulation and get two separate bullet collision detections on one object, instead of trying to delete the object twice you put both "kill this thing" actions on a to-do list. When it comes time to handle that in the main loop, you sweep through it for conflicts before executing any of the changes.
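A minimal sketch of that to-do-list (deferred command) pattern, with hypothetical names:

    // Physics jobs record intents; the main loop dedupes and applies them.
    type EntityId = number;
    const pendingKills: EntityId[] = [];

    // Called from physics jobs - possibly twice for the same entity:
    function requestKill(target: EntityId) { pendingKills.push(target); }

    // Main loop: sweep for duplicates before mutating anything.
    function flushKills(world: Map<EntityId, object>) {
      const seen = new Set<EntityId>();
      for (const id of pendingKills) {
        if (seen.has(id)) continue;  // second bullet hit the same frame: ignore
        seen.add(id);
        world.delete(id);            // safe: only the main thread mutates state
      }
      pendingKills.length = 0;
    }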
UE4 in particular I know handles all game logic in a single thread, and you can't touch UObjects from outside of that. But here's an example (without much technical detail) of someone implementing multithreaded pathfinding for UE4: https://forums.unrealengine.com/community/work-in-progress/1...
Yeah, that's the kind of thing I was thinking of when I said games programmers seemed skeptical of threads.
Some of that coarse-grained parallelism could possibly be done in JS with Web Workers, but those have their own problems. (See the recent discussion here about the "tasklets" proposal: https://news.ycombinator.com/item?id=15511519)
For one, game engines normally don't use OpenGL unless they have to. The GL drivers on PC tend to be very weak. High end games target Direct3D on Windows/Xbox and these days are moving to Vulkan/Metal, even on mobile.
On Windows the GL driver situation is so bad that Chrome translates GL to Direct3D. This obviously will impose some overhead and complicate the driver bug situation still further.
Even when games do target GL they tend to exploit lots of vendor specific extensions and be tested against very specific graphics card/driver combos to enable them to workaround bugs and performance cliffs. Does WebGL even expose driver specific extensions? I don't think it does.
So you are not going to be competing with native apps on the web anytime soon in this area (as with all other areas...)
If you read through the patch notes for your video card drivers, you'll probably notice that every major game release gets special support right in the driver.
NVIDIA has three advantages:
1. Most PC gamers have NVIDIA cards, so developers test primarily against NVIDIA cards.
2. NVIDIA has a boatload of driver developers to hack game-specific fixes and improvements right into their driver. They work around game bugs, rewrite slow shaders, and other stuff like that. I assume that's part of why the graphics driver is now well over a hundred megabytes.
3. NVIDIA sends developers to major game studios to optimize and add graphical effects to their games. For instance, volumetric lighting in Fallout 4 was added by an NVIDIA employee.
I think AMD does much of the same, but they just don't have the money to do it to the same extent. That means more breakage and slower fixes.
Sure it is! Not right now maybe, but the web standards people working on it would love to be a viable platform for AAA games. Every so often they tout a new WebGL port of a well-known game as the harbinger of things to come.
Then why are they implementing an API that's been unpopular for years and is now being phased out entirely (for AAA games)?
Mobile. They picked OpenGL ES 2.0 for WebGL because it had comprehensively won on mobile. Apple went with ES 2.0 and Android followed. It's taking a long time for the mobile industry to migrate to ES 3 (which would allow WebGL 2) but ES 2 has been a decent stable baseline for a good few years now. That's quite unusual, and very helpful, given how fast-moving everything in tech is.
[Edit to add: ES 2 is based on desktop GL 2.0, which was the first version to make programmable shaders part of core GL. That was a huge API change, and an admission that the D3D approach was better, and ES 2 itself is barely 10 years old. So any "GL is unpopular" arguments based on the old fixed-function pipeline are a red herring, I think.]
Mobile was and is more important than either desktops or consoles, because the mobile market is huge and growing, while desktops and console are at best stable.
Google came up with a clever technical solution (the ANGLE library) to emulate ES 2.0 on top of Direct3D, so that sidesteps the technical problems of OpenGL on Windows.
Now, for AAA games, desktops and consoles are obviously far more important. I think there are two responses to that:
First, a bet that mobile will gradually catch up and become equally important. There are a lot of factors involved, but on raw technical terms it's not such a bad bet. Mobile hardware iterates very fast, and some mobile CPUs are getting very competitive with desktops (recent iPads and iPhones especially). Sustained performance is an issue, as mobile devices have much stricter thermal limits; but you can put the same mobile SoC in a bigger box, like the TV set-top boxes that Google, Apple, Amazon and others are experimenting with.
Second, there's no reason WebGL 3 couldn't be based on Vulkan. WebGL 2 hasn't even been fully adopted yet, so it would obviously take a number of years to make that happen. Maybe desktops and consoles will have moved on to something newer and better by then, but maybe they won't.
The big question is whether mobile+web is catching up on desktop+console, or if it'll always be a generation or two behind. I think you'd have to be pretty brave to bet against them ever catching up.
I think they're more concerned with the politics of it than the technical requirements of that userset.
I'm sure politics plays into it, but for WebGL specifically, it must have been a pretty easy technical decision. Do you pick the 3D standard used on Windows, or do you pick the lower-end one used by iOS and Android (and can be made to work on Windows)?
You could ask why GL rather than D3D won on mobile in the first place. For that you have to look at Microsoft and ask why Windows Mobile failed (in all its different versions). I don't think you can blame that entirely on politics.
Games are heavy users of driver and card specific extensions for instance. But that'd be at odds with the web's portability commitments.
That's a good point. From a web standards standpoint, it's a very tough conflict to resolve. I think the web people are pushing for common standards. That takes time and it can get very political, but I don't see a better solution. And if they can get it right, portability is a good thing! I don't see why that necessarily means you'll always be behind the curve on performance. A more standardized, portable system can catch up via economies of scale -- it might be easier to learn, have better tooling, a bigger potential market, etc.
> Second, there's no reason WebGL 3 couldn't be based on Vulkan
Vulkan is a very low-level API designed for ultra-high-performance use in engines written by professional engine teams, like Unreal's. It requires the developer to write large quantities of code to render even a single triangle, because you have to take manual control of the GPU's low-level details. To a large extent it's preferred to GL because of its better interaction with multi-threading.
> Do you pick the 3D standard used on Windows, or do you pick the lower-end one used by iOS and Android (and can be made to work on Windows)?
Somehow C++ does not have this problem. So how about: don't pick, expose all of them and let the developer use whichever is more appropriate?
You say you don't see any alternative to how WebGL handles driver extensions. Of course there are alternatives: just expose them all. Let there be vendor specific and proprietary stuff in web apps. Just because this is considered politically unacceptable by the ideologues who control the web platform does not mean it's actually unthinkable.
But that'd be against how the web is "designed" (browser makers cabalistically picking winners).
So in fact, I will continue to bet against the web and against mobile. People have been predicting total domination of iPhone/iPad since the day they were launched. I'll still be playing AAA games on consoles or high end Windows PCs 10 years from now, I'm sure of it.
Web Assembly will be mature soon, and on the timescales we’re speculating about it could well have some form of threading.
> Let there be vendor specific and proprietary stuff in web apps.
Unfortunately that would be a security nightmare. The Flash and Java plugins are good examples.
In general, WebGL and asm.js have done a great job of keeping up with the capabilities of whatever is the current-gen iPad.
(There are also older demos for asm.js + WebGL 1)
Business apps have it a lot easier - the users are more forgiving, I guess either because they don't have a choice in the matter or because they don't have any better options.
It's all about structuring your data such that you don't cache-miss on your fetches and you don't stall your pipelines by doing operations that flush/occupy the same pipe.
You can easily see 10-50x performance boosts on the same dataset with the right approach.
Look up Data Oriented Design; Mike Acton talks about it a ton.
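The classic illustration is array-of-structs vs struct-of-arrays; here's a sketch (the effect is most dramatic in C/C++, but typed arrays give you the same contiguous layout in JS):

    // Same update, two layouts. AoS: every entity is its own heap object,
    // so the hot loop chases pointers scattered all over memory.
    interface Entity { x: number; y: number; vx: number; vy: number; name: string; }
    function updateAoS(entities: Entity[], dt: number) {
      for (const e of entities) { e.x += e.vx * dt; e.y += e.vy * dt; }
    }

    // SoA: hot data packed into flat typed arrays - the loop touches only the
    // bytes it needs, in address order, which is what caches/prefetchers want.
    interface World { x: Float32Array; y: Float32Array; vx: Float32Array; vy: Float32Array; }
    function updateSoA(w: World, dt: number) {
      for (let i = 0; i < w.x.length; i++) {
        w.x[i] += w.vx[i] * dt;
        w.y[i] += w.vy[i] * dt;
      }
    }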
This is not a critique of either Design Patterns or Java; both are important and useful. And really, quite often you don't care about waiting 100 cycles - you have ten million times 100 cycles in every second. It's when you start crunching numbers in large quantities that it gets problematic. Modern CPUs are very fast, but only if you feed them data in the right order; otherwise you're basically running on a machine from 20 years ago, just with 32GB of RAM. The best program design right now is: yes, write your deep class hierarchies and whatnot, but also know how to identify the parts of your program doing heavy computation, and how to restructure those so you can unleash the full power of a modern processor.
That said, in any performance critical application, you pay close attention to your memory hierarchy and you design and compile your application in a way to respect that as well as take advantage of the underlying hardware architecture and resources to the fullest.
This can actually be worse, due to context-switch overhead and the caches you were using getting evicted by the now-running thread, which then hits the same issue.
If you want to do this right, you should be organizing your data so that the prefetcher can take advantage of it. All of those speculative things still have a cost.
If you had to look up CRUD vertex and texture data as database rows from another machine through the internet you wouldn't see a very smooth game.
When programming with APIs like OpenGL or DirectX, you either know what you're doing and get everything right, or you get a completely black framebuffer and maybe some error codes. There's practically no in-between. For business applications you can get pretty convincing results just by throwing together some buttons and textboxes in NetBeans or Visual Studio.
(Not to negate your point though. The crappy HR systems are painfully slow and incredibly frustrating to use, in comparison)
A GTX 1060 has 4.4 TFLOPs of processing power. 1920 * 1080 = 2,073,600 pixels per frame, which at 60 FPS is about 124 million pixels per second. Divide that into 4.4T and you get a budget of at most ~35,000 operations per pixel. That's quite a lot. In practice you will often spend time waiting for textures to arrive from RAM.
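Spelled out (assuming the 60 FPS target):

    const flops = 4.4e12;                        // GTX 1060: ~4.4 TFLOPs
    const pixelsPerFrame = 1920 * 1080;          // 2,073,600
    const pixelsPerSecond = pixelsPerFrame * 60; // ~124 million at 60 FPS
    console.log(flops / pixelsPerSecond);        // ~35,365 ops per pixel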
The techniques themselves have also been built up over decades. There's a lot of giants to stand on.
Edit: replaced 'driven' with 'oriented' - thanks Narishma
Or more: https://answers.unrealengine.com/questions/7459/question-is-... (person wondering if you can get more than 120 FPS in UE back in '14, one of the replies says they get "stat fps" in the 180s)
Unreal uses an architecture called deferred rendering. It's more complex, but it allows for a lot more tricks, and the main benefit is that lighting performance is much more predictable: each light costs roughly its screen coverage, independent of scene geometry, instead of every light multiplying the shading cost of every object.
Forward rendering is the simpler alternative. Unreal has had a forward renderer for mobile, and was introducing one as an option for VR, since its minimum frame time can be shorter.
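A sketch of the structural difference between the two (all names are made up; the real passes run on the GPU via shaders):

    type Mesh = { id: number };
    type Light = { id: number };
    declare function shadeWithAllLights(m: Mesh, lights: Light[]): void; // rasterize + light
    declare function rasterizeToGBuffer(m: Mesh): void;    // write albedo/normal/depth
    declare function screenSpaceLightPass(l: Light): void; // read G-buffer, add light

    // Forward: every object is shaded against the lights as it's drawn,
    // so cost couples geometry with light count.
    function forwardFrame(objects: Mesh[], lights: Light[]) {
      for (const obj of objects) shadeWithAllLights(obj, lights);
    }

    // Deferred: geometry is rasterized once into a G-buffer, then each light
    // is one screen-space pass - lighting cost tracks screen coverage,
    // not scene complexity.
    function deferredFrame(objects: Mesh[], lights: Light[]) {
      for (const obj of objects) rasterizeToGBuffer(obj);
      for (const light of lights) screenSpaceLightPass(light);
    }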
It's genuinely hard to write this kind of code, so you'd think they'd take care not to scare away the few people who can do it.
I recall at one point Goldmans was bragging to my team about how they'd hired a game dev to do their click-to-trade currency UI. Seemed like the guy found an easier job for more pay.
Most people who work in the game industry are not doing deep game-engine hacking; they're doing grunt work like scripting/designing some scene in a giant adventure game that's going to have thousands of them, all of which need to be finished before the release date.
I wonder how many commercial engine programmers there actually are, anyway? I'd guess very few.
There are only a handful of major engines today, and it's been that way for years: Unreal, Unity, Frostbite... A few games still roll their own, but the costs of keeping up are getting extreme.
I really doubt the Unreal Engine developers feel abused. For one thing, the release schedules of engines and games are now to some extent disconnected. Games get whatever the engine can do. Epic isn't going to move mountains to add a feature to UE for a licensee unless they get a TON of money to do it.
But for a lot of companies, using an existing engine is a no-brainer - even more so when the developer has no idea how well their game is going to do, using something like Unreal effectively means they share the risk with you.
(Disclaimer: Not affiliated, but I enjoy reading their developer's blogs)
If you haven't seen it, go to http://cbloom.com/rants.html and search for the title.
Epic's AAA market ended up with bigger budgets and a demand for more control, so many of those studios simply created their own engines. As a result, most UE games seem to be big-budget indie titles (Obduction, PUBG, Psychonauts 2) or tight-budget AAA titles (Street Fighter V, Shenmue III).
Part of the continuing split (indie on Unity, 'AA/AAA' on UE4) has less to do with the capabilities of each engine and more to do with who has the longest experience with each. They are both creeping in on one another's turf in that respect.
I know, I know, correlation does not imply causation. But just from Wikipedia:
> On March 19, 2014, at the Game Developers Conference, Epic Games released Unreal Engine 4, and all of its tools, features and complete C++ source code, to the development community through a new subscription model.
Same year Unity took best engine.
> In July 2014, Unity won the "Best Engine" award at the UK's annual Develop Industry Excellence Awards.
Unity made Epic react.
Even so, it may be too little, too late for Epic. It's straightforward to drop prices, but difficult to change business models, release timeframes, market position in customers' minds, the codebase, and customer feedback from new usage patterns.
The focus of exploitation is just different -- the Harvey Weinsteins of the game industry don't try to sleep with brilliant Romanian programmers, instead they work them to death.
As a Romanian programmer (although not in the game industry), I must admit this is very eloquently put. Hats off to you, sir.
They've honed the art of using your "passion" to extract as much work as they can from you for ~2-3 years, after which there's a 50/50 shot that the studio folds, etc.
In the AAA space there's also a vicious 90/10 rule where 10% of the games make 90% of the money. With rising costs it becomes increasingly difficult to keep hitting that 10% you need to keep your studio running.
The same is true for any product industry. There are things you can do (customer development, short iterations that involve your potential customers, etc) to increase the chances, but if you're not doing work for clients directly, then there is no guarantee.
Note that the "doing work for clients directly" exists in game development too, there are many freelance or services companies that do work for other companies, and from what I hear about the game industry, these have much more guaranteed pay, just like outside of games. Its the "build the product, when its done, hope someone buys it" type of work that has no guarantees, but that is in no way unique to game development.
The difference is, most other industries don't have such crazy crunch times...
It's completely rational behavior that has nothing to do with games; just look at MMOs for a more reasonable development process.
Edit: as for crunch times, I expect this is to do with the interactive element of gaming. There is whole lot more work involved when the client has the ability to break your product.
Please stop lying.
Of course most people care if you're being overworked for a small amount of compensation. Your statement is a bit ridiculous.
I don't want to sign in, though, since that service offers nothing for me as a non-American.
And yet academia.
This breaks the HN guidelines. Please post civilly and substantively, or not at all.
I think some may do it anyway, by free will or social pressure, but those people are probably pretty rare. In my country (Sweden) we don't really have a culture where companies pressure people into working more, so companies that do often meet a lot of social uproar and protest.
Once I had a boss that wanted me to work overtime "unofficially" but I refused. Needless to say, I left that place pretty quickly.
I don't work in the game industry, but I know some people who do and they never work overtime.
Not necessarily easier... but definitely a different set of difficulties.
GPU Gems, Shader X and GPU Pro are good series for learning specific graphics programming techniques.
For a general game engine overview: Game Engine Architecture by Jason Gregory (Naughty Dog)
Game Programming Patterns: https://www.amazon.co.uk/Game-Programming-Patterns-Robert-Ny...
Realtime rendering overview: https://www.amazon.co.uk/Real-Time-Rendering-Third-Tomas-Ake...
Related math: https://www.amazon.co.uk/Math-Primer-Graphics-Game-Developme...
It's fun to explore the source though, and NVIDIA has some cool experimental branches of the engine with their stuff integrated.
My problem is I don't know much C#, but it really confuses me why Epic decided to use C# for the framework of a C++ engine...
That project is on hold while I play with vulkan directly via https://github.com/KhronosGroup/Vulkan-Hpp.
I have too many projects.
(diagram is from the book)
Also why commercial game developers usually don't whine about 3D APIs like FOSS indie devs do.
Adding support for yet another graphics backend is a trivial task, compared to the overall feature set of a game engine.
<rant>Self-driving cars already train on simulated roads, because that allows the creation of any scenario. Even human pilots use simulators to train, especially for rare situations. And since robot training is expensive and slow in the real world, the only alternative is to do it in a sim.
Simulation allows the composition of scenes that would be very hard to record in the wild - for example, an octopus sitting as a hat on the head of an elephant... where would you be able to get that photo? But surely you imagined it in 0.1 seconds, using your imagination - a powerful simulator humans have in their heads. Such images are crucial in training AI.
I think AI will reach human level when it is equipped with a sandbox where it can try out its ideas and concepts, similarly to how scientists use labs to test their theories. When AI gets its world simulator, it will be able to learn reasoning and meaning that is grounded in verification. Just like us. We have the world itself as our fundamental "simulator" and experiment on it to learn physics, biology and AI. AI needs a simulator too.</rant>
I have seen a few demos of real-time path tracing running on GPUs. I know GPUs are fast, but I wonder if there is a way to do the same math on an ASIC or FPGA that could be even faster? The main issue with the existing demos seems to be doing enough iterations per frame to get a clear picture.
Anyway I believe a lot of the tricks related to triangle meshes and lighting approximations will be thrown out and replaced with procedural generation and real time path tracing.
FYI: for 6K and 8K, both NVIDIA and AMD tech reps have said that "multi-GPU" (not just multi-core) solutions will be required. But they also said that once they hit 16K, the DPI will give "real eye quality", which brings a lot of extra "realism" for free. Watch the new Jungle Book in 4K and certain scenes of mountains will blow your mind and feel "real" (without any 3D glasses); your brain goes "holy shit, I'm not watching a movie (for this split second), this is something real." But most scenes still don't. We're so close to photorealistic, I can taste it! (Like that GTA 5 photorealism mod on HN yesterday.)
That's why they're all moving to Vulkan. OpenGL is single-threaded (global state, one draw thread) and a PITA for exploiting multi-GPU setups.
Isn't that a TON of shadowmaps?