
A Programming Language for Games, Talk #2 [video] - bluehex
https://www.youtube.com/watch?v=5Nc68IdNKdg
======
archagon
Kind of a tangent:

Mainly on Jonathan Blow's account, I've been watching a number of similar
talks by low-level programmers — for example, Mike Acton's "Data Oriented
Design"[1] — and following others on Twitter. I've noticed that much of the
time in these presentations, there seems to be a level of disdain towards
programmers who don't optimize things down to the bit. (For example, Acton
follows a dozen slides on minute cache optimization — featuring assembly
examples — with "Bad programming is easy" and "Design patterns are spoonfed
material for brainless programmers incapable of independent thought...")

This really bothers me. On the one hand, people like Blow and Acton are
working on some of the most complicated software in the world, so I have to
give them all my respect. At the same time, this programming style is so alien
to me that I simply can't make sense of Acton's patterns and optimizations
from a cursory glance. (Blow's examples from AAA development similarly disturb
me, though they're much more comprehensible.) This is _not_ what got me into
programming, and yet these presentations by coders I respect make me feel like
I'm a dummy who's completely "doing it wrong".

I get the feeling that there's a whole separate world of these low-level,
performance-optimizing developers who've shrugged off the mainstream and have
their own, parallel outlook on how programming should be done. Much of the
time, their ideas are very interesting to me. But I don't know how to
reconcile their methodology with mine, which works on a much higher level and
makes iteration and development so fun and easy — even if things like OO and
design patterns bite me in the butt sometimes. I know that many of us here,
spoiled by web and app development, must feel the same way.

Is this sort of ascetic, hardware-savvy code really what "good programming"
looks like? Because I want to get better at my craft, but it just looks like a
whole lot of pain.

[1]: [http://www.slideshare.net/cellperformance/data-oriented-design-and-c](http://www.slideshare.net/cellperformance/data-oriented-design-and-c)

Off-tangent, I am enjoying these lectures from an academic point of view, even
though I don't think I would have a use for this hypothetical language!

~~~
chipsy
I think there are a number of attitudes engendered by the nature of doing game
programming, especially low-level engine programming, for a long time. Game
programmers aren't unaware of the rest of the programming universe, but the
risks mean that they tend to indulge only one or two new ideas at a time for
production code.

One such attitude is that the code is disposable - you cut down the buglist
until the game ships, and then boom, new project, greenfield code (or
sometimes the same engine, which you go in and gut until it fits the new
design). Another is that your primary goal as the engine programmer is to push
out data as quickly as possible - more things onscreen, with more detail and
more effects; if you aren't doing this, then you're probably spending your
technical budget to explore a completely new technology instead. Higher-level
abstractions are weeded out if they aren't supplementing these goals.

This means that game programmers see the code as something that you scrub out
and redo over and over in order to match the new data model you're optimizing
against (which necessarily changes as hardware and design goals change).
You're gonna throw the code out tomorrow, so there's no sense in spending a
lot of "thinking and theorizing" time on it to reach artificial ideals. You
have a clear destination to reach instead.

The ideal provided as an alternative is to reduce your coding to an efficient,
even boring process that consistently gets you towards a working solution that
can be optimized later, even whilst you're keeling over from crunch. Game
coders don't optimize everything right off the bat because they need real-
world data for the test case, so the engine only really gets performance tuned
in the later stages of a project as more of the content is completed. The
early days are often more important because they're done with the
understanding that all the maintenance work needed to ship must be done before
ship, so you can only take on technical debt on a feature if it means the
feature will never be touched again; otherwise the only reason to do it is to
hit one of those stupid publisher milestones that requires a feature to be in
place prematurely.

Another issue is that not all abstractions are available. Stuff reliant on GC
is generally out, because control over memory allocation is a must. Stuff that
bloats memory usage, also generally out. Stuff that leads to unpredictable
execution times (e.g. lazy evaluation) is dismissed immediately. Imperative
style is still the groundwork for game code, if only because simulated worlds
encourage massive amounts of mutability and global state.

So at the same time there's a tunnel vision that comes out of this approach.
Not every game needs that level of optimization, and not every optimization is
a matter of peering at the hardware and seeing what form the data needs to be
in for go-fastness (which is 90% of the Acton slides). But because the
programmer also can't predict "how much" performance is enough, they tend to
estimate conservatively, defer to C++ and to their hardware once more, and
just pull out the old bag of tricks to make it go. And then ship at 30hz when
it turns out that the estimate still wasn't good enough for 60.

~~~
archagon
This is a really insightful comment. I feel like I understand the world of
high-performance programming a lot better now. Thank you!

~~~
pwr22
At the heart of it, it's just different priorities.

------
implicit
About 23 minutes in, Mr. Blow says that lambdas have "questionable
performance," but the actual cost is relatively predictable.

Current C++ compilers will try to inline lambdas. Mr. Blow's specific example
will be inlined by both g++ and clang (I haven't tested MSVC), depending on
the optimization level you set.

_std::function_ does introduce overhead, and the reason why is important:

How can you implement an array of lambdas that all accept the same signature,
but close over different kinds of environments? You need to be able to copy
and destruct those environments without static knowledge of how big they are
and what's in them.

std::function makes this work by type-erasing the environment and, in
general, allocating it on the heap (implementations may store small captures
inline, but you can't count on that). If you are averse to this extra
overhead, there's a really easy rule to follow: until you actually explicitly
write "std::function" in your code, you do not pay its cost.
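To make the "different kinds of environments" point concrete, here's a small sketch (the make_callbacks helper is made up for illustration): three lambdas with different capture lists sharing one container, which only works because std::function erases the closure types.

```cpp
#include <functional>
#include <vector>

// Build a list of callbacks that share one signature, int(int), but close
// over different environments; std::function hides each closure's type.
std::vector<std::function<int(int)>> make_callbacks() {
    int base = 10;
    double scale = 2.5;
    std::vector<std::function<int(int)>> fns;
    fns.push_back([base](int x) { return x + base; });        // captures an int
    fns.push_back([scale](int x) { return int(x * scale); }); // captures a double
    fns.push_back([](int x) { return x; });                   // captures nothing
    return fns;
}
```

Without the type erasure, those three lambdas would have three unrelated types and couldn't live in the same vector.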

I know a few ways around this:

If you always use auto when dealing with function-local lambdas, you don't pay
any extra overhead. You can think of each lambda as being the sole instance of
a struct that contains the environment to be closed over, plus a non-virtual
method. Just be aware that no lambda is type-compatible with any other; even
two textually identical lambdas have distinct closure types. (The closest
thing to an exception: lambdas with empty environments and the same signature
all convert to the same plain function-pointer type.)
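A quick illustration of that type-identity rule (the names f and g are arbitrary): even empty-capture lambdas get unique closure types; what they share is conversion to a common function-pointer type.

```cpp
#include <type_traits>

// Two textually identical, capture-free lambdas...
auto f = [](int x) { return x + 1; };
auto g = [](int x) { return x + 1; };

// ...still have distinct closure types:
static_assert(!std::is_same<decltype(f), decltype(g)>::value,
              "every lambda expression has its own unique type");

// What they do share is implicit conversion to the same function-pointer type.
int (*pf)(int) = f;
int (*pg)(int) = g;
```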

You can templatize lambda-consuming functions over the type of the lambda.
This can work out really well if you pay attention to how function inlining
happens, as the compiler can not only inline the template function, but the
lambda as well. You can generally assume that the compiler is capable of fully
inlining maps and folds.
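As a sketch of that templatization (the fold helper here is hypothetical): the closure's concrete type flows through the template parameter, so nothing is type-erased and the compiler is free to inline both the fold and the lambda.

```cpp
#include <vector>

// Templating over the callable's type F keeps the concrete closure type
// visible to the compiler at the call site, enabling full inlining.
template <typename F>
int fold(const std::vector<int>& xs, int init, F f) {
    for (int x : xs) init = f(init, x);
    return init;
}

int sum_plus_offset(const std::vector<int>& xs, int offset) {
    // The lambda's unique type is deduced as the template argument F.
    return fold(xs, 0, [offset](int acc, int x) { return acc + x + offset; });
}
```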

Lastly, if your lambda doesn't close over anything, you can convert it to a
C-style function pointer.
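For example (the Callback alias and apply function are made up to stand in for a C-style API):

```cpp
// A C-style callback slot: a plain function pointer.
using Callback = int (*)(int);

int apply(Callback cb, int x) { return cb(x); }

// A capture-free lambda converts implicitly to the function-pointer type;
// the unary + idiom (+[]...) forces that conversion explicitly.
Callback doubler = [](int x) { return x * 2; };
Callback inc = +[](int x) { return x + 1; };
```

This is handy for passing lambdas to C libraries that take callback pointers, since no std::function (and no heap) is involved.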

------
bluehex
Talk one is here:
[https://www.youtube.com/watch?v=TH9VCN6UkyQ](https://www.youtube.com/watch?v=TH9VCN6UkyQ)

[https://news.ycombinator.com/item?id=8342697](https://news.ycombinator.com/item?id=8342697)

