Cryengine Source Code (github.com)
212 points by TiccyRobby 5 days ago | 143 comments

This is on the front page again. Please take some time to read this wonderful function: https://github.com/CRYTEK/CRYENGINE/blob/release/Code/CryEng...

EDIT: That whole function is a minefield. Just taking a quick look:

* 814 lines of code

* goto inside 3 nested for-loops

* macros

* commented out code

* new/delete, with no RAII

* thread specific variables and locks (?)

I don't think it's the best code I've ever seen, but a lot of these are surface-level complaints.

> * goto inside 3 nested for-loops.

C has no pattern for breaking out of multiple for loops at the same time. Other languages like Java and JavaScript introduced "break label;" to handle this edge case. goto is perfectly acceptable to break out of multiple loops.
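The pattern on a made-up nested search (names and data invented):

```cpp
#include <cstddef>

// Hypothetical example: search a row-major 2-D grid for a target value.
// A single goto exits both loops at once, where break would only leave
// the inner one.
bool find_in_grid(const int* grid, std::size_t rows, std::size_t cols,
                  int target, std::size_t* out_r, std::size_t* out_c) {
    std::size_t r, c;
    for (r = 0; r < rows; ++r)
        for (c = 0; c < cols; ++c)
            if (grid[r * cols + c] == target)
                goto found;          // leave both loops in one jump
    return false;
found:
    *out_r = r;
    *out_c = c;
    return true;
}
```

The Java equivalent would be a labeled `break outer;`; C and C++ simply never grew that construct.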

> * new/delete, with no RAII

They use RAII, but it's not applicable here. In this case, the developers only want to allocate a temporary array when it needs to be resized. Since we don't want a large stack allocation (stack sizes are super small on consoles), using a plain new/delete to resize an array seems fine to me. Game developers have a deep-seated distrust of std::vector, and for good reasons.

RAII is used for the WriteCondLock which you criticize in the next bullet point, so it's not like they were unaware of it. Just not the right tool for the job.

> * thread specific variables and locks (?)

You seem to be upset that they have code that uses locks at all? I don't really know what this bullet-point is saying, other than "I looked for 5 minutes and didn't understand the threading structure".

If I had to take issue with this code, it's the lack of enums for e.g. iSimClass, despite there being an enum that defines the values. That's the sort of stuff that's difficult to reason about and follow along with without having the mapping in my head. And it has no overhead, so why not do it?
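To illustrate the point (the enumerator names below are invented; the real CryPhysics enum lives in the physics headers):

```cpp
// Sketch with made-up names: comparing against a named enumerator
// instead of a bare 3 costs nothing at runtime and documents intent.
enum ESimClass { SC_STATIC = 0, SC_SLEEPING = 1, SC_ACTIVE = 2, SC_LIVING = 3 };

bool is_living_entity(int iSimClass) {
    return iSimClass == SC_LIVING;   // rather than: iSimClass == 3
}
```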


I've found that a lot of more junior engineers just want to rotely pattern match on one-size-fits-all "style" rules in their code review without fully understanding why those rules exist in the first place.

Like all rules, these have exceptions. In a hot code path like the physics engine logic, you're going for performance optimizations (which often LOOK messy) and that means making slight compromises on readability. It's quite a bit different from the code implementing the business logic at your SaaS company.

> C has no pattern for breaking out of multiple for loops at the same time.

True, but it is cleaner to reorganize the code into several functions and use the return value to propagate across layers if needed. Performance should be the same.
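A sketch of that restructuring on a made-up nested search (all names invented):

```cpp
#include <cstddef>

// The inner loop becomes a helper; its boolean return value propagates
// the "found" state outward, so no goto is required.
static bool row_contains(const int* row, std::size_t cols, int target,
                         std::size_t* out_c) {
    for (std::size_t c = 0; c < cols; ++c)
        if (row[c] == target) { *out_c = c; return true; }
    return false;
}

bool find_in_grid(const int* grid, std::size_t rows, std::size_t cols,
                  int target, std::size_t* out_r, std::size_t* out_c) {
    for (std::size_t r = 0; r < rows; ++r)
        if (row_contains(grid + r * cols, cols, target, out_c)) {
            *out_r = r;
            return true;
        }
    return false;
}
```

With the helper `static` in the same translation unit, a decent optimizer will usually inline it, which is why the performance claim is plausible.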

> it's not applicable here

Why? If there is a new/delete pair anywhere, it should have been an object.

> Game developers have a deep-seated distrust of std::vector, and for good reasons.

Which reasons? std::vector (and std::unique_ptr) is universally useful (unlike many other std data structures) and codegen should be on par with a new/delete pair.

If the standard library is completely broken on some of the console platforms they need to support (likely, from what I hear here) then it can also be done with a custom heap-array wrapper.

So I don’t see the reason why that shouldn’t have been an object.

> True, but it is cleaner to reorganize the code into several functions and use the return value to propagate across layers if needed.

I disagree. Having to jump to another function definition which is inline is a bigger mental block than following a goto. The large amount of arguments you'd need to pass might also be a barrier, as is the mental overhead of checking to see if this function might be called from elsewhere. But reasonable people can disagree on this point.

I suggest you try to clean up the function yourself and see if your function-ed version is in any way improved.

> Why? If there is a new/delete pair anywhere, it should have been an object.

The case we want is that we have a large array of pairwise collisions. This is a temporary array used inside the method. If we ever have more than this, we need a new (contiguous) array. Are you suggesting that the code should have done something like this?

    template<typename T> struct TempArray {
        T* storage = nullptr;
        ~TempArray() { delete[] storage; }
        void resize(int n) { delete[] storage; storage = new T[n]; }
    };

I mean, sure, it's a very minor cleanup. It changes like, two lines though, and makes us have to access the array through an awkward ->storage pointer. Not ideal IMO.

> codegen should be on par as a new/delete pair.

Here's modern MSVC. Let's play "spot the difference": https://gcc.godbolt.org/z/KqR-LN

I'm not even doing anything but allocating the vector / array. It took me 2 minutes to navigate to godbolt and type this in. Don't just say "should be"... test it yourself!

If you enable /Ox, the codegen basically drops to what you would expect: the vector version emits essentially identical code to the new/delete version (modulo a memset to enforce the zero-initialization).
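For reference, the comparison being described is along these lines (a reconstruction, not necessarily the exact snippet behind the godbolt link):

```cpp
#include <vector>

// Two ways to get a zero-initialized temporary buffer. In an MSVC debug
// build the vector version drags in iterator-debugging machinery; with
// /Ox both boil down to an allocation plus a memset.
int sum_vector(int n) {
    std::vector<int> buf(n);       // value-initializes elements to zero
    int s = 0;
    for (int i = 0; i < n; ++i) s += buf[i];
    return s;
}

int sum_array(int n) {
    int* buf = new int[n]();       // trailing () forces zero-initialization
    int s = 0;
    for (int i = 0; i < n; ++i) s += buf[i];
    delete[] buf;
    return s;
}
```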

It is a good illustration of why debug stl builds are such hot garbage though...

It is a good illustration for why using the STL is not always a good idea: you can't blindly take the perf hit from that kind of overhead in a debuggable build of a game that you still want to run at reasonably interactive frame rates.

False, most standard libraries out there allow you to configure whether you want extra checks or not.

But at least in the case of MSVC that configuration option leads to binary incompatibilities that make it an all-or-nothing option for everything that gets linked together statically. And the checks are so heavy that the "all" option becomes unbearably slow quite quickly. If your project hits a reasonable size, you end up requiring some clever solutions.
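For the curious, the MSVC knob in question is `_ITERATOR_DEBUG_LEVEL` (assuming that's the option being described). Its value is baked into the linked binaries, so a mismatch across statically linked components is a hard link error, which is what makes it all-or-nothing:

```cpp
// MSVC: must be defined identically in every TU and static library that
// gets linked together, before any standard header is included.
// A mismatch produces a link-time error (LNK2038).
#if defined(_MSC_VER)
#define _ITERATOR_DEBUG_LEVEL 0   // 0 = checks off, 2 = full debug checks
#endif
#include <vector>

int sum_of_three() {
    std::vector<int> v{1, 2, 3};
    int s = 0;
    for (int x : v) s += x;
    return s;
}
```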

> Having to jump to another function definition which is inline is a bigger mental block than following a goto.

Local lambdas are ideal for this.
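A sketch of what that looks like on an invented example: an immediately invoked local lambda turns the goto into a plain return, and the capture avoids long parameter lists.

```cpp
#include <cstddef>

// Hypothetical nested search using a local lambda: `return` inside the
// lambda exits both loops at once, and [&] captures the locals that a
// separate function would need as parameters.
bool find_in_grid(const int* grid, std::size_t rows, std::size_t cols,
                  int target, std::size_t* out_r, std::size_t* out_c) {
    auto search = [&]() -> bool {
        for (std::size_t r = 0; r < rows; ++r)
            for (std::size_t c = 0; c < cols; ++c)
                if (grid[r * cols + c] == target) {
                    *out_r = r;
                    *out_c = c;
                    return true;    // leaves both loops
                }
        return false;
    };
    return search();
}
```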

> Are you suggesting that the code should have done something like this?

Yes, but you can manage the array inside too.

> I mean, sure, it's a very minor cleanup. It changes like, two lines though

The point is that TempArray can be reused everywhere. This is a typical class that many projects use (stack if small, heap if bigger than a threshold).
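A sketch of that class (all names and the threshold are invented; real versions add alignment handling and a growth policy):

```cpp
#include <cstddef>

// Scratch array: lives in the inline buffer while small, spills to the
// heap past N elements. resize() does NOT preserve contents -- this is
// temporary storage, matching the usage discussed above.
template <typename T, std::size_t N = 64>
class TempArray {
    T inline_buf[N];
    T* ptr = inline_buf;
    std::size_t cap = N;
public:
    TempArray() = default;
    TempArray(const TempArray&) = delete;            // avoid double delete
    TempArray& operator=(const TempArray&) = delete;
    ~TempArray() { if (ptr != inline_buf) delete[] ptr; }
    void resize(std::size_t n) {
        if (n <= cap) return;                        // already big enough
        if (ptr != inline_buf) delete[] ptr;
        ptr = new T[n];
        cap = n;
    }
    T& operator[](std::size_t i) { return ptr[i]; }
    bool on_heap() const { return ptr != inline_buf; }
};
```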

> Here's modern MSVC. Let's play "spot the difference"

The optimizer has been asked to leave everything as it is, so that is the expected result.

BTW, MSVC is not what you should be using if you want performance.

> Don't just say "should be"... test it yourself!

I always test codegen for all abstractions I use! So I agree.

> Local lambdas are ideal for this.

The problem with lambdas is that you can't mark their operator() with __forceinline or __attribute__((always_inline)). For this reason, when writing high-performance manually vectorized code, lambdas are borderline useless.
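The workaround is a handwritten functor, whose call operator can carry the attribute (a sketch; whether newer compilers now accept these attributes on lambdas may vary):

```cpp
// A hand-rolled functor can force-inline its call operator; a lambda's
// compiler-generated operator() historically could not be annotated.
#if defined(_MSC_VER)
#define FORCE_INLINE __forceinline
#else
#define FORCE_INLINE inline __attribute__((always_inline))
#endif

struct MulAdd {
    float scale;
    FORCE_INLINE float operator()(float a, float b) const {
        return a + b * scale;
    }
};

float apply_muladd(float a, float b) {
    MulAdd f{2.0f};
    return f(a, b);   // 1 + 3*2 style fused step, inlined at the call site
}
```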

> MSVC is not what you should be using if you want performance.

Security and compatibility have higher priority. gcc and clang don't deliver their C runtime libraries with Windows updates. Also, debugging and crash diagnostics are much easier with MSVC.

It's the same on Linux BTW, only with gcc.

> when writing high-performance manually vectorized code

The point was not about manually vectorized loops in particular. Why is that a problem if you are manually doing it, though?

> Security and compatibility has higher priority.

In commercial games, not really.

As for "compatibility", I am not sure what you mean.

> gcc and clang don't deliver their C runtime libraries with windows updates.

AFAIK you can use Windows libraries just fine. No need for using a different libc.

> Also, debugging and crash diagnostic is much easier with MSVC.

AFAIK, Clang can produce debugging info that you can use with VS.

I don't work on the environment, but it is what I have read here.

> Why is that a problem if you are manually doing it, though?

Here are a couple of examples: https://github.com/Const-me/DtsDecoder/blob/master/Utils/App... https://github.com/Const-me/SimdIntroArticle/blob/master/Flo... I would like to use lambdas instead of classes, but can't, due to that defect of C++.

> In commercial games, not really.

In commercial games too. While they don’t care about security, they do care about compatibility and crash report diagnostics.

> As for "compatibility", I am not sure what you mean.

A Windows update shouldn't break stuff. Software or a game should still run on a Windows version released 10 years in the future.

> Clang can produce debugging info that you can use with VS.

According to marketing announcements. This page however https://clang.llvm.org/docs/MSVCCompatibility.html only says “mostly complete” and also “Work to teach lld about CodeView and PDBs is ongoing”. PDB support is not about VC compatibility, it’s about Windows compatibility really: the debugger engine is a component of OS, even of the OS kernel. WinDbg is merely a GUI client for that engine, visual studio is another one.

Overall, in my experience, the platform-default compilers cause the least amount of issues. On Windows this means msvc, on Linux gcc, on OSX clang. Technically gcc and clang are very portable. Practically, when you’re using a non-default toolset, you’re a minority of users of that toolset, e.g. the bugs you’ll find will be de-prioritized.

I'm still getting up to speed on C++ (& ASM) so forgive the ignorance - are the differences you're referring to all the cleanup code that the vector does in the destructor?

As a side note, you're not calling delete[] on the array code, but I suppose it makes little difference if you take the vector cleanup into account.

Unfortunately MSVC produces hot-garbage asm; clang and gcc produce pretty much the same asm output using your example.

> C has no pattern for breaking out of multiple for loops at the same time. Other languages like Java and JavaScript introduced "break label;" to handle this edge case. goto is perfectly acceptable to break out of multiple loops.

Yeah, that's not what it does. The code looks like this:

    if (pgeom->Intersect(pentlist[i]->m_parts[j].pPhysGeomProxy->pGeom, gwd,gwd+1, &ip, pcontacts)) {
            if (dirUnproj.len2()==0) dirUnproj = pcontacts[0].dir;
            t = pcontacts[0].t;     // lock should be released after reading t
            return t;
    for(int ipart=1;ipart<m_nParts;ipart++) if (m_parts[ipart].flagsCollider & pentlist[i]->m_parts[j].flags) {
            gwd[2].R = Matrix33(qrot); 
            gwd[2].offset = pos + qrot*m_parts[ipart].pos;
            gwd[2].scale = m_parts[ipart].scale;
            gwd[2].v = -dirUnproj;
            if (m_parts[ipart].pPhysGeomProxy->pGeom->Intersect(pentlist[i]->m_parts[j].pPhysGeomProxy->pGeom, gwd+2,gwd+1, &ip, pcontacts))
                    goto got_unproj;
Notice (a) the if statement on the same line as the for loop, and (b) the fact that the goto jumps out of a conditional inside a for loop inside of a conditional into a conditional just before the for loop.

EDIT: Just realised the poster is referring to a different function.

IMO this one is more horrifying.

That's in a different function, so I wasn't counting it, but sure. I mean, I would write it differently, but it's fairly readable. It's jumping to a common function epilogue once it finds something it can collide with. Seriously -- try reading through it rather than gawking at the goto.

It's funny watching people reply to you without knowing anything about your skill set and experience.

Jasper_ is the real deal, people.

I still don't know who he is, but just from the arguments brought up you can tell this guy has experience in this stuff.

Those blanket statements like "never use goto" and "always trust the STL" generally make me wary. I started out with GW-BASIC once, writing horrible goto spaghetti code. When moving through Pascal and C I eventually learned the "never use goto" mantra and naively tried to follow it at all costs. After a while I eventually encountered the "breaking out of nested loops" problem and refrained from using goto since I didn't want to look like a complete dork. I think I ended up with a flag that was set in the inner loop and checked in the outer, and a break in each loop. That's what you get from blindly following rules that others present to you as god-given.

IMO it should be clear from the top of a for loop how many iterations it will run. A goto randomly inside the loop is akin to a side effect in a function. Sure it might be the easiest solution you can think of, and might even generate optimal code, but it's less readable and forces future maintainers to exert more effort to understand the code.

> IMO it should be clear from the top of a for loop how many iterations it will run.

How would you implement something simple like a lookup in an array? From this argument even a break in a normal for-loop would be bad since I wouldn't know anymore how many iterations it will take. So if you have a nested loop and a "goto end_outer_loop" that's perfectly fine.

Either that code is misleadingly indented and you copied one more closing brace than necessary, or you omitted the opening brace for the last if's body (which contains a single goto).

Good catch. I think I copied one more than necessary, and then messed up when trying to fix up the indentation for HN.

If the code was separated out into functions that are only ever called once, I'd find it harder to read.

Analysing code I'm not familiar with often consists of manually tracing through calls, producing documentation that inlines all the single use function calls.

Ideally function names act as shorthand for the body of the function, but if they only have one caller they have nothing to keep them honest. In older codebases, function names are as misleading as comments; semantic drift, special cases etc. mean you need to drill into them anyway.

I agree with the gist of your post, but one positive about single use functions is that you can be explicitly clear about data visibility.

In this contrived example, you can tell that formatting a title is not affected by user preferences, but formatting the body is. (And additionally that formatting a body has no information about the other fields of an entry)

  def formatRssFeedEntries(userSettings: UserSettings, data: List[Entry]) {
    val titles = data.map(entry => formatTitle(entry.title))
    val bodies = data.map(entry => formatBody(entry.body, userSettings))
  }

I agree. Blocks can somewhat substitute with scoping effects, and that's what I do with my inlined docs - they're nested {} with plain text description of contents and mentions of key variables and functions, scope is useful.

Nested ifs in for loops, with lots of things going on here; I don't know, but that's the definition of hard-to-maintain, hard-to-read code.

Plus the Hungarian notation and the whitespace are not helping either.

A test class to accompany this thing would maybe help to make sense of it faster. But that's still no fun.

Splitting it up and extracting use cases into functions would help tremendously.

As a professional game developer I disagree with most of your comments.

This code is clearly not perfect, but from what I've seen, this is something I could work with.

- Function names are easy to read and understand.

- Indirection is kept to a manageable level.

Not going to lie, playing Crysis was a lot of fun, and I never knew this was the underneath function running it.

Crysis shipped with a full-blown SDK that included most of its source code. You could actually rebuild the 50 MB DLL that controlled the whole game from it.

Old players may remember that Crysis multiplayer was the most-cheated game of its era. It was totally unplayable due to all the cheating, and that killed the game.

One way to make cheats: load up the SDK in Visual Studio. Find the code that decrements the ammo counter when shooting and edit it to do nothing (most of the physics and game logic was editable that way). Compile the DLL. Replace the original DLL in the game directory.

Was ammo in Crysis controlled client-side? I've always assumed such counters were stored server-side and thus a server won't apply a "fire" event if the player's ammo counter is at zero until it receives a "reload" event to reset the counter.

Almost everything was controlled client side, including collisions and kills. There was one cheat for sale for example that ruined the game, you'd join a server and all players would be automatically killed every 10 seconds.

I remember experiencing that actually. Just random damage every few seconds. Is there any reason why the "don't trust the client" mindset is not used in games despite it being accepted in web development?

The game was released in 2007, Windows XP era. The very idea of security was being discovered.

I mean, "don't trust the client" was a thing in the late '90s when people were exploiting client-controlled things in Ultima Online so much Origin had to change the way some communication worked to stop it.

The most famous example I can think of was UOExtreme revealing hidden players, because if a player was hiding their presence was still sent to the clients of every other player in the area just with a "hidden" flag set. There were a bunch of other similar exploits associated with that particular third-party tool, but that's the only one I remember.

(so while writing this I did some googling and found the patch notes where they fixed it: https://uo.com/wiki/ultima-online-wiki/technical/previous-pu...)

One explanation I've seen for weak server-side verification is that online multiplayer is a cost center, so developers wanted to offload as much as they could onto clients. At least before microtransactions, I guess.

If I recall correctly, it was even easier than that. There were so many "trainers" available that would give you infinite energy etc. Even easier than that, I remember one hack was basically just dropping an XML file in the game directory to give yourself the ol' reliable freezing-pistol hack.

Good game though, I think Crysis 1 multiplayer is to this day one of the most fun multiplayer games I have played. After Crysis 1 they really CoD'fied the game, which is a shame. That and BF2142, A-plus games.

Looking forward to Crysis Remastered, hope they don't screw it up.

I'd rather work with this than Enterprise Java.

At least the logic is all there and you only need to scroll to see it, instead of jumping around between a dozen or more different files.

Math-heavy code tends to look dense to those who are accustomed to more "mundane" LOB type applications.

This. The convenience of everything in a single file is underrated. You can do all the modern best practice like splitting into more functions, making functions small, and I wouldn't mind if they are all in the same file.

Being a Java dev (not enterprise any more) I agree that some enterprise software is an abomination for exactly the reason you mention.

However, can you imagine how bad an enterprise C++ project would look?

A game engine is as enterprise as it gets.

As a hobbyist who likes to tinker with game and interactive media development, a sentiment I often come across is that in 2020 it makes no sense to implement a game engine, and that I should just use something which already exists to avoid re-inventing the wheel. Code like this is one thing which helps me calmly ignore that sentiment.

I came across the same kind of thing when I was kicking the tires on the Unreal Engine, and I wanted to attempt to add a double jump. I thought surely this should be an easy task, I would just need to find where the jump occurs, add a counter, and remove the restriction which only lets a character jump when touching the ground. What I found was a monstrous tangle of indirection similar to this one.

Now that's not to say that these engines are "bad code" - when you look at all the things a modern game engine does, including supporting interactive editing for non-coders, I'm sure there is some explanation for the level of complexity seen in code like this just because of how many systems must be layered on top of each-other. But that is the thing which makes me question whether general-purpose game engines are really a good idea at all.

In most other domains of software we've long ago eschewed this type of do-everything monolithic software design in favor of more loosely coupled composible toolsets. I'm not entirely sure why it seems that game development has yet to escape this paradigm.

I have a feeling you may have been approaching this problem in UE4 the wrong way. Adding a double jump can be done in numerous ways, but one simple way is with Blueprints. See the link below, where the exact functionality is implemented with a really simple blueprint.


Oh I am certain I was not approaching the problem in the UE4 way. But the issue is that the way UE4 expects me to do things is not the way I would like to approach game development.

UE4 has a strong bias about the way things should work. If I am making something which is fairly well aligned to that bias, then it's fairly easy to make it work. But if I want to achieve something which is quite far from what the engine expects, then I have to invest significant effort undoing or circumventing what UE4 already does before adding my own functionality on top. I would greatly prefer to start from a blank slate, and only add precisely the behavior I actually want.

So basically this experience with the double jump just gave me a window into the level of complexity I would have to work around in terms of realizing my own goals.

I know what you mean. One of the first things I looked into when trying out Unreal was Version Control. The amount of hoops I had to jump through just to get things on git made me reach the exact same conclusion you did.

UE4 works hard to make you not edit its source code, yes. But the biggest reason for this is to let you update across engine versions with no struggle.

Games like Valorant are using their own modified version, and by their own admission, every version bump must be carefully considered and takes upwards of a month to reintegrate their changes in a painstakingly slow process.

To me this would be an argument against a tightly coupled monolithic engine like UE4. For instance, imagine if Valorant is only interested in a new version of the engine for new post-processing effects introduced into the renderer. Why should they have to migrate gameplay code to get benefits which have nothing to do with gameplay?

They actually _are_ changing things like the renderer, hence their need to take a long time for migrations. Basic gameplay code is written with blueprints, base classes are in C++ but do not replace the core code.

My point is not about what Valorant is doing specifically; it's about the drawbacks of a tightly coupled, opinionated system.

Well duh, this is a physics engine. Implement double jump by placing an invisible floor underneath the player object when you decrement the counter.

I cannot tell if you are joking or not, but needing to place an entity into the world for a single frame to accomplish this, rather than simply applying an impulse to the player, is precisely the kind of over-complication I'm referring to.

Without having read any of the Unreal Engine movement code, there's a complication here, and that is that "jump" might be influenced by the floor that the player is standing on. The force impulse might be related to the slope in some way. You have to figure out what "double jump" means in the context of there being no floor.

Placing a floor that has the influences you want under the player is, in a sense, a hack, but not an entirely misguided one. A better design might be not relying on whatever the "jump" button does, but writing your own, e.g. applying an impulse to the player directly.

If this sounds like semantics, consider a game like Super Mario Galaxy, which has spherical worlds. How do you determine the correct impulse for which way a "jump" goes? This is where these sorts of complications come from.
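As a toy sketch of the impulse approach (all names and numbers invented; a real controller would derive the jump direction from the gravity vector rather than assume a world-space "up"):

```cpp
// Minimal double-jump state machine: the counter resets on landing, and
// the second jump just overrides vertical velocity, floor or no floor.
struct Player {
    float vel_y = 0.0f;
    int jumps_used = 0;
};

const int   kMaxJumps    = 2;
const float kJumpImpulse = 8.0f;   // arbitrary tuning value

bool try_jump(Player& p) {
    if (p.jumps_used >= kMaxJumps) return false;
    p.vel_y = kJumpImpulse;        // impulse applied directly to the player
    p.jumps_used++;
    return true;
}

void on_landed(Player& p) {
    p.jumps_used = 0;
    p.vel_y = 0.0f;
}
```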

I think that captures an issue I have with engines like UE4 which I am trying to describe: it has to support virtually every type of game, so even things which could be relatively simple gain complexity because the engine has to cover cases which may be entirely irrelevant to the project I am working on.

Well, to be honest, the UE4 codebase is one of the biggest garbage dumps I've ever had the displeasure to work with. I think the biggest problem is that it's just been accumulating 20 years' worth of code and features without anything ever being refactored. And because of the current status you'd never actually want to modify any existing functionality because you're bound to break something. It's also super badly documented and riddled with bugs and really bad and confusing design decisions, such as using instances of your UClasses to represent the "types" and then having to use flags at runtime to see if "this" is an archetype or not.

Especially at the places where disciplines interact.

* goto inside 3 nested for-loops

It's actually a reasonable practice in C and C++ to use goto to leave nested loops (I am hesitating to say a recommended practice).

There is often no sane way to leave the loop otherwise, break/continue statements only have effect in the most inner loop. It's possible to set extra variables with lots of if/break but that gets crazy real quick and much slower (nested loops are often the hot code path).

I knew which function this was before even clicking the link. I spent months working on and debugging that exact function and the surrounding CryPhysics code improving network interpolation back in 2014. All the undefined behavior caused us some trouble while porting to the "next-gen" consoles of the time.

This is the worst code to read in the engine by far. At the end of the day though it worked, we shipped it and it performed well enough.

Last I checked Lumberyard still has a somewhat cleaned up version of this function.

inside a bunch of nested loops is where you need a goto, if the language doesn’t have labeled breaks (which is just a fancy goto)

Most of your points are things people notice any time real, actual, big software with performance constraints gets posted. So along with questioning the wisdom in that function, also question your own wisdom. Maybe some of what you think you know is wrong.

Code like this actually always makes me feel better about my own code when I’ve ultimately had to make a trade off between abstraction/organization and performance. I wonder who all these engineers are that see code like this, especially in hot paths, and can’t understand how it came to be and that there was a deliberate choice made.

This is what draws my attention more:

> //FIXME: There's a threading issue in CryPhysics with ARM's weak memory ordering.


Translation: "We have race conditions in our C++, but x86 is lenient enough and current MSVC not aggressive enough to make it crash and burn constantly on our main platform."

Coincidentally, I finished Crysis 1 today. That involved the first four crashes I had with the game, three of which were in the final battle.

This is not about a race condition. Rather, it is something more like why you need the volatile keyword.


Can you elaborate? volatile and threading in C++ are orthogonal to each other. I don't understand why you consider a C# volatile discussion relevant when talking about C++ race conditions.

Volatile in C++ has nothing to do with threading.


* magic numbers

    m_iSimClass = 3;
    return 1E10;
    sqr(m_size.x)*g_PI*0.25f && maxdim<m_size.z*1.4f)
* Very few comments explaining what's going on

When I was getting my CS degree, my professors required us, at the very least, to describe what each method does, what each argument is and its possible range of values, and the same for the return value.

I hate this "self-documenting" nonsense, which obviously doesn't work. This piece of code is a good example of that.

Not defending the use of magic numbers but some of it is clear for someone who has written physics code before.

This is physics, not your average function. Those who've never written physics should not complain. It's actually good code.

The code looks bad, but not for those reasons, in my opinion. The logic in this function doesn't look composable. It combines different kinds of mathematical functions to apply inertia, whether you're jumping, etc. into one big ball of spaghetti that would be hard to extend for anyone not deeply familiar with the code. If I were to try to refactor this, I would try to decompose it into standalone "behaviour" functions that could be attached to any entity.

I encourage you to try! However, a lot of gameplay and getting movement controls to feel good is difficult and at odds with the goal of "modularized physics", e.g. you might want to apply different amounts of ground friction depending on whether the player is moving, how they're moving, whether they're holding the jump key, and so on.

Ultimately, we're trying to simplify a large simulation of real-world physics and hundreds of controllable muscles and motor responses trained over a lifetime, down to 5 or 6 keyboard inputs. That means you tend to have such inputs doing multiple things, and it's often hard to make separable.

I guess, a sufficiently simple exercise would be to write a controller that handles ground movement, jumping with air movement on horizontal ground and moving platforms. Doesn't have to feel good to play, just be reasonably robust. Either handling of slopes and stairs or collisions with dynamic objects can be added for extra credits ;).

800 lines long. The movement component class in UE4 is about 10k lines. Why do game engines separate their code so much less than other software does?

Here's a good article from John Carmack on why that can be a good approach in game development: http://number-none.com/blow/john_carmack_on_inlined_code.htm...

I feel like this Cryengine example may be a bad example of that, though.

Not just in game development. If you have a function that is only called once, it shouldn't be a function yet. Make it a function when you have a second or third use for it. Then, and only then, you will know what the parameters should be.

I disagree. Splitting a function up can help with readability and testability. The parent function becomes shorter and the child function can have a descriptive name. The parameters to a function are simply the values it needs to do its job.

Function calls have an overhead, and if the function is only used once it makes no sense to break it out.

It also makes no sense to have a rule on the maximum number of lines for a function. These rules are often created by people who don't write software that needs to have high performance.

The compiler does a lot of optimizing, including inlining function calls so that they do not have overhead. My opinion is to favour readable and maintainable code. If there is a performance issue, then profile it, measure and then optimize. No need to prematurely optimize at the cost of code quality.

A function that is called once will always be inlined. Trust the compiler.

> A function that is called once will always be inlined.

That is objectively false. Even though I agree with the spirit. For starters, unoptimized builds won't have that. Second, exceptions tend to inhibit inlining. Compare https://godbolt.org/z/c8Jayf with https://godbolt.org/z/Uoo2YA. Third, it's easy enough to push one of the heuristics used for inlining high enough in the wrong direction (e.g. function size). I recommend the clang optimizer view on godbolt.

Also note that it can be hard for a compiler to prove that a function is not used outside the current translation unit (although anonymous namespaces help with that). Number of calls is only one of the heuristics for inlining though anyway.
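A small illustration of the linkage point (invented names): giving a single-use helper internal linkage lets the compiler prove it has exactly one caller instead of guessing.

```cpp
// An anonymous namespace (or `static`) gives the helper internal linkage,
// so the compiler knows no other translation unit can call it and can
// fold it into its sole caller without keeping a standalone copy.
namespace {
int helper_called_once(int x) { return x * x + 1; }
}

int entry_point(int x) {
    return helper_called_once(x) - 1;
}
```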

All languages? No.

Compilers are plenty stupid, and have not earned blind trust.

One of my professors used to encourage the heavy use of helper functions that just... well, like A or B in Carmack's article, break your code into chunks with clear names that can be unit tested.

How common is automated testing in AAA games?

Underlying helper libraries like math utilities are often extracted and put under independent test. A physics engine often has so much state (and isn't always guaranteed to be deterministic!) that doing any sort of unit testing at the functional level is not worth it. To those who reply with "use less state", I encourage you to show how. Oftentimes the unit tests I've seen from junior game programmers are worse than useless and aren't testing anything of tremendous value.

Games that use automated tests often drive high level systems and test high level output. See for instance Riot Games's automated League of Legends test suite.

Game development problems are often large global state manipulation problems. If you don't write games you will never realize this.

Almost everything taught in academia and in the enterprise about software development "best practices" are absolutely the wrong things for a game. (They're wrong for enterprise and academia, too, but I'm not willing to get into that fight on this site.)

Is there an industry-wide set of concepts and best practices that is applicable for large scale games? What sorts of things do hiring managers worry about at night?

Preface: I'm not a gameplay programmer. I'm not even a dynamics or physics specialist. And I don't hire for them. But as a core engine programmer:

The power of an engine is that it is a series of closely intertwined parts. Oftentimes the graphics code and collision code are linked; here's an example: shoot a bullet in any FPS, and the resulting decal is likely created from the collision data, since it's much faster to search than the highly tessellated graphics model.

Or, let's say, I have a melee attack. The specific frame that the attack occurs on is likely decided by the animation data. So that means that we need to run animations on the server, against an invisible skeleton.

Similarly, you might encounter a scenario where a bunch of stuff blows up, and you want 99% of that simulation to be baked offline and not simulated on the client, to ensure that the tower lands exactly where it needs to. So now you need to plug together your physics and animation systems, yadda yadda.

These are not bugs, or necessarily problems. People who suggest we turn all engines into tiny independent toolkits are missing the point. Of course everyone tries to make systems as independent as possible. But there's a bunch of power you can unlock if you have even a small bit of close integration between two systems.

How does one manage complexity? The first step is to enable everyone, even artists and designers, to be technical. They're already managing complex topologies, doing sophisticated lighting and shading techniques, and often writing custom Python scripts to help them clean up the massive amount of data. A big thing I've seen from developers out of FAANGs is that they're shocked that "non-programmers" are writing custom scripts and doing programming. Your artists and designers are incredibly technical people who are good at solving problems. Trust them.

Invest in custom tools; any bit of custom tooling to make your life easier will pay off massively. Basically, give your whole staff the ability to debug and fix issues, with custom UIs and logs specifically built for your purpose.

QA staff are undervalued here in the US, but anybody who's worked in games for three months knows what a difference a senior QA makes. They know your game better than anybody else. Watching a good QA go from a blurry cameraphone pic on an angry reddit post somewhere to complete repro steps in less than an hour is nothing short of magic.

Just want to chime in to say I have learned a lot from your participation in this thread, thanks for being part of what makes HN great.

Function calls push registers, allocate a stack frame, and push a return address. By breaking your inner loops into more functions you've now generated megabytes of pointless memory thrash... on a console. Additionally, at the time this stuff was written, compilers and toolchains weren't necessarily standardized, or were customized to accommodate certain use cases, so they needed more hand-holding to generate performant code. That's just a fact. I'm not saying you're wrong, but you're definitely viewing this with hindsight.

Function calls do none of those things if they’re inlined, and compilers have been able to inline functions for a long time. Perhaps this code is even older than that.

While this is often a good rule of thumb, occasionally it's useful to extract a function just to make the inputs clear for a complicated calculation. Particularly when it has few inputs and they are unlikely to change.

With modern C++ you could do this with a lambda
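Something like this, perhaps: an immediately-invoked lambda keeps the calculation's inputs explicit via the capture and scopes its temporaries (a hedged sketch with made-up names, not engine code):

```cpp
// Sketch: an immediately-invoked lambda isolates a calculation,
// making its inputs explicit. Names here are hypothetical.
inline float ComputeDamage(float baseDamage, float distance, float falloffStart)
{
    // The lambda captures exactly the values the calculation needs,
    // so a reader can see its inputs at a glance without a separate
    // named function (and inlining is trivial for the compiler).
    const float attenuation = [=] {
        if (distance <= falloffStart)
            return 1.0f;
        return falloffStart / distance;
    }();
    return baseDamage * attenuation;
}
```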

Which compiler that can compile modern C++ doesn't always inline a static single-caller function?

All of them? There's many cases where it doesn't make sense:

- The function is exported to a library

- The code generator emitted an indirect branch instruction

- The inlined code exceeds the size of a single page in memory

- The inlined code does not perform as well because it is not i-cacheable

There are other reasons for separating logic into functions besides just keeping it DRY. It's an opportunity to encapsulate concerns and then, in some other place, compose the story poetically and clearly.

...which is NOT high on the list of concerns for a game title. What you want instead is massive performance gains, so you can do more with less and in the end produce a better experience than your competitor. Function calls, indirection, and making your code 'easy' to understand all have a RUNTIME cost, and many small costs add up to a large cost; the reasoning is really that simple.

I struggle to see how composing into functions adds a runtime cost when using modern compilers; as others have pointed out the compiler will inline it if the function is only used once.

This rationalization doesn’t make sense to me, especially for game engines, which I assume are not limited to a release or two but are used to power multiple titles. If the software artifact is going to live for a long time, it’s probably worth the effort to make it easier to understand and test.

This! I'm not sure why we are discussing the runtime cost of a function with modern compilers.

Could this pattern result in code so slow the game won't work in development w/o high optimization flags, thus increasing compile time?

> the compiler will inline it if the function is only used once.

There may be other criteria, e.g. whether the function is likely to be called at all.

Breaking routines into functions makes testing possible. If it's really only called once, the compiler can inline it for you.
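For instance (a hypothetical helper, not from any engine): once a pure function is extracted, it can be exercised directly by a test harness, with no game state required.

```cpp
// A pure helper extracted from a longer routine is trivially
// testable in isolation. Hypothetical example.
float ClampSpeed(float speed, float maxSpeed)
{
    if (speed > maxSpeed)  return maxSpeed;
    if (speed < -maxSpeed) return -maxSpeed;
    return speed;
}
```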

That is a bold statement, but you are free to express yourself.

Sometimes with inherently complex performance code it gets nickel and dimed over time yet maintains so many cross-cutting concerns and state that there are no clean extraction points.

So function extraction ends up taking a bunch of "unrelated" state with it where, in the end, it feels like you accomplished nothing but split the code into arbitrary concatenation points, not logical units.

Games are certainly the most popular piece of end user software.

Software that a lot of people use a lot of time looks like this because the bugs are found and the bugs get fixed. The code for fixing a bug has to live somewhere.

What about bugs that computers find? Fuzzers rarely end up recommending fixes for bugs that affect human users. Part of that is that the kinds of bugs computers can find are not in, literally, "user interfaces"; they are in APIs and formats.

Anyway, end-user business software also has code that looks like this. It's not all just tools and infrastructure, even if it certainly feels that way. There are 800+ line SQL statements. 800+ line transactional method bodies. I hesitate to call this "real code", but the surprise comes from... well, eventually you have to make something the end user touches. And it's going to be gnarly.

Apart from performance considerations, it's also because in game development, different systems are much more entangled together in the requirements themselves, in many very small and unpredictable ways that still break architectural boundaries you put in.

For a classic example, it makes almost no sense to separate model and view when you're building a real-time action game, because your rendering code and your ballistics and physics work on very similar 3D meshes that are animated by the same skeleton animations and tied to the same objects.

I feel like C++ makes it particularly difficult to jump around big projects (mainly because of headers, especially back in the old days before good IDEs).

There is also a lot of fear of performance regressions by breaking up big chunks of code (arguably unfounded with modern compilers).

"You may not like it, but this is what peak performance looks like."

Also requires Visual Studio.

It’s not just that function; the file is over 2,300 lines long. It’s hard to tell where one function starts and another one ends in that mess.

Why not? Doesn't the indentation tell you perfectly where each functions starts and ends?

Is it an example of bad code? Should we avoid using Cryengine?

If it works, it's an example of ugly code. Plenty of game code is ugly as sin, though.

The code for VVVVVV infamously has switch/case for every possible screen/level in the game[0], which I think is hilarious. Best example of "it works!"

[0] https://www.polygon.com/2020/1/13/21064100/vvvvvv-source-cod...
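The pattern is roughly this (a toy C++ sketch with made-up room names, not the actual VVVVVV source, which hardcodes tile arrays per case):

```cpp
#include <string>

// Toy sketch of the VVVVVV-style approach: one switch case per
// room, each hardcoding that room's data inline. Room names and
// the coordinate encoding here are made up for illustration.
std::string LoadRoom(int rx, int ry)
{
    switch (rx + ry * 100)   // flatten the 2D room coordinate
    {
    case 0:   return "The Ship";
    case 1:   return "Welcome Aboard";
    case 100: return "Panic Room";
    // ...hundreds more cases in the real thing...
    default:  return "Outer Space";
    }
}
```

It works, it's fast, and every room is greppable; it's just an eye-watering amount of repetition.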

Games also have a very different set of constraints than most software. Simulating and rendering an interactive world at 60-144 Hz is not an easy task, and things like code cleanliness, structure and readability often lose out to performance concerns.

It's a bit of a strange thing; in my personal experience, after spending some real time with this type of constraints, it can be a bit painful to come back to "general software best practices". You become so aware of the performance implications of everything you do that all those things we do in the name of software quality can feel incredibly wasteful in terms of CPU and memory resources. One has to remind themselves that in 90% of cases that level of optimization is not warranted.

> new/delete, with no RAII

There is nothing wrong with this. Plenty of people have no issues keeping track of memory in their head.

I would argue that manual memory management should probably be avoided as a general best practice, but game development is a special case where memory management can often be critical to the performance of the final product, and a manual approach is sometimes warranted.
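As a sketch of the kind of pattern at issue (a grow-only scratch buffer reallocated with new[]/delete[] only when too small; hypothetical code, not the engine's):

```cpp
#include <cstddef>
#include <algorithm>

// Grow-only scratch buffer: reallocate only when the requested size
// exceeds the current capacity. Avoids both large stack allocations
// (console stacks are small) and per-frame heap churn. Hypothetical.
struct ScratchBuffer
{
    float*      data     = nullptr;
    std::size_t capacity = 0;

    float* Reserve(std::size_t count)
    {
        if (count > capacity)
        {
            delete[] data;                        // old contents discarded
            capacity = std::max(count, capacity * 2);
            data = new float[capacity];
        }
        return data;
    }

    ~ScratchBuffer() { delete[] data; }
};
```

Note that even here the destructor gives you RAII-style cleanup; the manual part is deciding exactly when reallocation happens.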

Many of the filed issues are "X not work" and not very interesting, but this one...this one is a real gem. https://github.com/CRYTEK/CRYENGINE/issues/763

I’m gonna go with this is a feature not a bug

Would it be an overreaction to not want to touch this licence with a barge pole?

Even ignoring the “We may change this licence at any time and it applies to you” parts, there seem to be serious restrictions on usage, and you basically have to ask them before doing _anything_.

Maybe it makes sense if you are already in a project that is using this licence? Is this intended as a general engine licence rather than for viewing the source code?

Unreal shares its source purely for the sake of people who already license it and need to know how something works/fix something/customize something. Wouldn't be surprised if the same is true here.

I considered using CryEngine recently, but there was an almost total lack of learning resources: I could barely find a tutorial that was newer than 5 years, especially one that involved its C++ APIs.

I suspect that Lumberyard's greatest advantage over CryEngine in the future will simply be usable documentation provided by Amazon. CryEngine is simply not usable without better docs, or else an incredible amount of time. Crytek is having financial troubles, but I bet their engine would have 10x the adoption if they hired a team of technical writers.

Unreal's docs are fairly bad also, but at least there are some third-party resources to turn to.

Unreal docs are fairly good for making games but if you want to modify the structure of the engine it's quite annoying/nonexistent.

Case in point: vehicle physics is nowhere near as good as the docs imply (not a toy, but still 20-year-old vintage), and there is almost no documentation of how PhysX interacts with the Unreal engine proper, i.e. you can get the PxRigidWhatever handle but you can't easily replace PhysX with a proper multibody (MB) package. Epic seems to be transitioning to Chaos, but it's not documented yet.

If I ever get good Vehicle physics working I'll write it up (it's definitely possible but I'm not sure how ACC does it)

People customizing the UE4 engine professionally or on large projects put a lot of work into the areas they are interested in, to the point where it's not really UE4 anymore in that area. Gears of War is a good example, where areas of the rendering system would be almost unrecognizable after countless man-years of work diverging from the base engine.

I'm sure it's possible I'd just rather not read thousands of lines of code to find the nitty gritty.

I want to make the basics of an open source (sim)racing game (as a test bed for writing tyre models), without using the fairly lacklustre offerings included by default (e.g. no sprung mass, no carcass stiffness etc., idealized suspension etc.). I have no need to go into the bowels of the rendering engine but it struck me that the interactions between PhysX and the actual actor model for the vehicle is almost not documented at all. I assume it's possible to do it solely with PhysX (proper suspension) but I cannot find any case studies of people doing it with the possible exception of their drive project which is under $$$ and NDA I'm guessing.

I was also slightly surprised that I had to go looking for the option to connect to the PhysX debugger. It wasn't hard to find but I was half expecting it to be included with the engine.

I used to work with CryEngine some time ago. That is by far the worst C++ codebase I've ever seen.

While your opinion is appreciated, it would be more helpful if you could point out some of the reasons why it was so bad in particular.

Interesting sentiment, I sometimes wonder if a "messy" codebase can have advantages for performance.

Many very highly performance tuned applications I saw in the wild would fall into the category of "horrible codebase" when looked at through that lens.

It falls more into the category of having a lot of bugs which could've been caught if they had used static code analysis, code review, etc.

I understand that you might think that messy could mean it's fine tuned for performance. In this case, I highly doubt it and think it's more reasonable to think it's messy because they had deadlines.

The messy part isn't about performance optimizations. It's more about things that got crammed in there and only works for a very specific subset of parameters. And even then you can't be sure it'll work...

I don't blame the programmers; it feels like they had deadlines to uphold from management.

> I don't blame the programmers; it feels like they had deadlines to uphold from management.

This is my own experience. The teams that spent the most time on standards usually had the least pressure, in terms of things like deadlines. Once the focus of the team shifts to having to ship things, there is less time to worry about having 100% code coverage (to pull out an arbitrary number), and so forth. Code review can slip into flagging only things that really matter, and leaving nitpicks for another day.

What sorts of static code analysis tools do people here use in their game projects? I know Carmack is a big fan of them.

Here are the results for CryEngine specifically:

I'm not endorsing PVS-Studio, nor am I saying it's bad. Try out some tools and see what works best for you.

It's funny because many of these are the exact possible errors you expect when someone is swimming in a large code block doing lots of copy and paste. Large blocks of code are very hard to test thoroughly, so I imagine the testing was mostly looking to see if things look right followed by play-testing.

I have used the Rust compiler, which rules out data races, null dereferences, and most memory errors in safe code at compile time as a matter of course (out-of-bounds accesses are caught at runtime by bounds checks). Then you can add cargo-fuzz if you like.

Let’s assume that when code is first written, the cleanliness and performance is somewhat random within a broad range.

If we want the code to be clean or performant, we will likely have to spend time iterating on and pruning the code. Let’s assume that improving performance and improving cleanliness are at best orthogonal, at worst opposing.

The project has a limited amount of time, particularly for games, which often have a relatively low roof for how much maintenance the code will need.

The project has a limited budget of time to split between cleanliness and performance. Games need high performance and relatively little maintenance, so they are likely to spend much more of that budget on performance than on cleanliness.

(Game engines meant for heavy reuse such as Frostbite and Unreal Engine would likely have a much more even split, and similar for games which are likely to receive recurring and invasive updates. I would expect Fortnite’s code to be fairly clean as games go, for example.)

I don't think it's necessary. With modern C++ it's possible to encapsulate high performance code.

The issue with games in particular is probably partly due to the performance optimizations being directed at a moving target (it's not just your supercomputer nodes, it's every consumer CPU). C++ doesn't really help you much in that regard (or at least it's better now, but it certainly wasn't 10 years ago).

Generally, things that are not open source are rarely not well written, since there are less programmers who will read your code, and all questions on the code can be done internally, so developers only write code so it works.

Open source generally leads to better quality code, since it's the best way to attract other developers to contribute to it.

So I'm rarely interested by any accomplished project that opens its code.

For example, if microsoft opened its OS, I doubt developers would really try to do things with it. The windows kernel would obviously be high quality code, but a lot of the rest is probably short lived garbage.

I disagree with this line of reasoning. I have seen good/bad examples on either side. I think it actually comes down to someone on the developer team having a high set of standards that they push everyone to subscribe to.

All the open source projects I have personally seen were ones meant to live a long time. When there were code issues, there were always awkward discussions on GitHub about “there should be unit tests here” or “this code makes no sense,” and weeks later the developer would announce a cleanup of some sort. Anecdotal, but public scrutiny and pressure is a real thing.

Just as an example, this is why Bitwarden started getting some automated testing: lots of people bumping GitHub issues about it in order to get it more visibility.

It's only tangentially related to code quality, but I do think open/free source is the only way to write sustainable software if your aim is to change the world rather than one's bank account (so to speak).

There's terrible code all over the place, although it is definitely true that no one's going to clean up - even source available - proprietary code out of kindness of their heart.

> things that are not open source are rarely not well written

I think you got lost in your triple negative there

yes, english is not my main language, thanks for the correction

Openness aside, working under constant pressure with crazy deadlines also paves the way for ugly/barely tested/hard to maintain code. Game studios often have extremely strict deadlines, but I've experienced this pressure also when working in government software; this code looks like examples from Dr Dobb's compared to the pile of crap we sometimes could barely stick together in the old days.

That license looks like a minefield. Any lawyers able to chime in on the reality of becoming tainted? Given their recent history, this is one codebase you really wouldn't want to risk becoming tainted by.

What is their recent history?

Crytek v. CIG, in which the court ruled largely in favor of CIG and even called out Crytek's behavior (the case was an absolute circus). Crytek will litigate on even the most questionable grounds.

The consequences of taint are a very real risk here.

The real issue is clear from their announcement post.

master (now main) was not always stable (of course, stable code is in the stable and release branches), so silly people complained, and the silly PM reacted by closing down pushes to main, thereby closing down issues and PRs. He clearly has no idea how open source development works. Now they have to maintain two repos, the internal one and the public one, and get no feedback from outside. Well, feedback on one-year-old code.
