
John Carmack on Inlined Code - m0nastic
http://number-none.com/blow/john_carmack_on_inlined_code.html
======
corysama
The older I get, the more my code (mostly C++ and Python) has been moving
towards mostly-functional, mostly-single static assignment (let assignments).

Lately, I've noticed a pattern emerging that I think John is referring to in
the second part. The situation is that often a large function will be composed
of many smaller, clearly separable steps that involve temporary, intermediate
results. These are clear candidates to be broken out into smaller functions.
But, a conflict arises from the fact that they would each only be invoked at
exactly one location. So, moving the tiny bits of code away from their only
invocation point has mixed results on the readability of the larger function.
It becomes more readable because it is composed of only short, descriptive
function names, but less readable because deeper understanding of the
intermediate steps requires disjointly bouncing around the code looking for
the internals of the smaller functions.

The compromise I have often found is to reformat the intermediate steps in the
form of control blocks that resemble function definitions. The pseudocode
below is not a great example because, to keep it brief, the control flow is so
simple that it could have been just a chain of method calls on anonymous
return values.

    
    
        AwesomenessT largerFunction(Foo1 foo1, Foo2 foo2)
        {
            // state the purpose of step1
            ResultT1 result1; // inline ResultT1 step1(Foo1 foo)
            {
                Bar bar = barFromFoo1(foo1);
                Baz baz = bar.makeBaz();
                result1 = baz.awesome(); // return baz.awesome();
            }  // bar and baz no longer require consideration
    
            // state the purpose of step2
            ResultT2 result2; // inline ResultT2 step2(Foo2 foo)
            {
                Bar bar = barFromFoo2(foo2); // second bar's lifetime does not overlap with the 1st
                result2 = bar.awesome(); // return bar.awesome();
            }
    
            return result1.howAwesome(result2);
        }
    

I make a point to call out that the temp objects are scope-blocked to the
minimum necessary lifetimes primarily because doing so reduces the amount of
mental register space required for my brain to understand the larger function.
When I see that the first bar and baz go out of existence just a few lines
after they come into existence, I know I can discard them from short term
memory when parsing the rest of the function. I don't get confused by the
second bar. And, I don't have to check the correctness of the whole function
with regards to each intermediate value.

~~~
nakor
What if I want to test some part of the function in isolation? At my current
job I have to maintain a huge and old ASP.NET project that is full of these
"god-functions". They're written in the style that Carmack describes, and I
have methods that span more than 1k lines of code. Instead of breaking the
function down to many smaller functions, they instead chose this inline
approach and actually now we are at the point where we have battle-tested
logic scattered across all of these huge functions but we need to use bits and
pieces of them in the development of the new product.

Now I have to spend days and possibly weeks refactoring dozens of functions
and breaking them apart into manageable services so we can not only use them,
but also extend and test them.

I'm afraid what Carmack was talking about was meant to be taken with a grain
of salt and not applied as a "General Rule" but people will anyway after
reading it.

~~~
akkartik
Perhaps it suggests our way of testing needs to change? A while back I wrote a
post describing some experiences using white-box rather than black-box
testing:
[http://web.archive.org/web/20140404001537/http://akkartik.name/post/tracing-tests](http://web.archive.org/web/20140404001537/http://akkartik.name/post/tracing-tests)
[1]. Rather than call a function with some inputs and check the output,
I'd call a function and check the log it emitted. The advantage I discovered
was that it let me write fine-grained unit tests without having to make lots
of different fine-grained function calls in my tests (they could all call the
same top-level function), making the code easier to radically refactor. No
need to change a bunch of tests every time I modify a function's signature.
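
Roughly, a trace test looks something like this (a minimal C++ sketch; the
function, the log strings, and the assertions are all made up here just to
show the shape of the idea):

    
    
        #include <cassert>
        #include <string>
        #include <vector>
    
        // The code under test appends domain-level facts to a trace instead of
        // (or in addition to) returning them.
        std::vector<std::string> trace;
    
        void processOrder(int quantity)
        {
            if (quantity > 100)
                trace.push_back("split order");
            trace.push_back("charged customer");
            trace.push_back("scheduled shipment");
        }
    
        int main()
        {
            trace.clear();
            processOrder(250);
            // The test asserts on the emitted trace, not on which helper
            // functions were called, so the code can be reorganized freely.
            assert(trace.size() == 3);
            assert(trace[0] == "split order");
            assert(trace.back() == "scheduled shipment");
            return 0;
        }
    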

This approach of raising the bar for introducing functions might do well with
my "trace tests". I'm going to try it.

[1] Sorry, I've temporarily turned off my site while we wait for clarity on
Shellshock.

~~~
robrenaud
How brittle are those tests though?

I've had to change an implementation that was tested with the moral equivalent
to log statements, and it was pretty miserable. The tests were strongly tied
to implementation details. When I preserved the real semantics of the function
as far as the outside system cared, the tests broke and it was hard to
understand why. Obviously, when you break a test you really need to be sure
that the test itself was wrong, and that was pretty burdensome.

~~~
akkartik
I tried to address that in the post, but it's easy to miss and not very clear:

 _"..trace tests should verify domain-specific knowledge rather than
implementation details.."_

\--

More generally, I would argue that there's always a tension in designing
tests, you have to make them brittle to _something_. When we write lots of
unit tests they're brittle to the precise function boundaries we happen to
decompose the program into. As a result we tend to not move the boundaries
around too much once our programs are written, rationalizing that they're not
implementation details. My goal was explicitly to make it easy to reorganize
the code, because in my experience no large codebase has ever gotten the
boundaries right on the first try.

------
ranran876
I might be alone on this, but whenever I read things by John Carmack I get a
vague sense that he doesn't really get object oriented programming. He always
has a lot of interesting things to say, but it also kinda reads like a C guy
trying to code in C++. I'm glad his thinking keeps evolving and he's not
dogmatic about anything. I'd honestly love to hear his thoughts on C++11

"The function that is least likely to cause a problem is one that doesn't
exist, which is the benefit of inlining it."

That's the equivalent of saying "the faster you drive the safer you are b/c
you're spending less time in danger"

You'll just end up with larger monster functions that are harder to manage.
"Method C" will always be a disaster for code organization b/c your commented
off "MinorFunctions" will start to bleed into each other when the interface
isn't well defined.

" For instance, having one check in the player think code for health <= 0 &&
!killed is almost certain to spawn less bugs than having KillPlayer() called
in 20 different places"

I don't completely get his example, but I see what he's saying about state and
bugs that arise from that. You call a method 20 times and it has a non-obvious
assumption about state that can crop up at a later point - and it can
be hard to track down. However the flip side is that when you do track it
down, you will fix several bugs you didn't even know about.

The alternative of rewriting or reengineering the same solution each time is
simply awful and you'll screw up way more often

~~~
phkahler
>> I might be alone on this, but whenever I read things by John Carmack I get
a vague sense that he doesn't really get object oriented programming.

I'm starting to think object oriented programming is a bit overrated. It's
hard to express why exactly, but I'm finding plain functions that operate on
data can be clearer, less complicated, and more efficient than methods.
Blasphemous as it may seem, a switch statement does the equivalent of simple
polymorphism and can be kept inline.
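
As a toy illustration (everything here is invented for the example): the
dispatch that would otherwise be a virtual method can be a switch that stays
inline and visible at the call site, operating on plain data.

    
    
        #include <cstdio>
    
        enum class Shape { Circle, Square };
    
        struct Entity { Shape shape; double size; };
    
        // Plain function over data: the "polymorphic" part is just a switch.
        double area(const Entity& e)
        {
            switch (e.shape) {
            case Shape::Circle: return 3.14159265 * e.size * e.size;
            case Shape::Square: return e.size * e.size;
            }
            return 0.0;
        }
    
        int main()
        {
            Entity shapes[] = { {Shape::Circle, 2.0}, {Shape::Square, 3.0} };
            for (const Entity& e : shapes)
                std::printf("%f\n", area(e));
            return 0;
        }
    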

~~~
ranran876
You clearly don't work on a large code base =)

Most programming concepts are really about code organization and not
expressiveness or the ability to express an algorithm clearly.

Object oriented programming only really starts to make sense when you are
working on something that will take thousands of man-hours. If you are working
alone, or on a small project, it can be completely irrelevant.

The work flow you are describing is what MATLAB guys do. It's an absolute
nightmare once the project gets too large. It is however very fast and flexible
for prototyping.

~~~
imanaccount247
Big things like, say, the Linux kernel? Which is not OO and is better off for
it?

~~~
FLUX-YOU
Ultimately our biases towards where the paradigms belong are a result of how
history has developed so far.

But hopefully we've learned that the guy selling OOP as the answer to
everything is full of shit

~~~
zo1
_" But hopefully we've learned that the guy selling OOP as the answer to
everything is full of shit"_

Replace OOP in your statement with "anything" and I'd say you're spot-on.

------
tdicola
Haha, I like this quote: "That was a cold-sweat moment for me. After all of my
harping about latency and responsiveness, I almost shipped a title with a
completely unnecessary frame of latency."

~~~
sssilver
A good insight into his mindset and way of seeing things.

------
tgb
I'm not a professional programmer and I rarely work with large code bases. So
the fact that my code has drifted steadily over the years towards the
large-main-function style is something I attributed to several factors, the
first being my general amateurism. I still think that, but there are definitely other reasons too: I
now use more expressive languages (Python instead of C) and more expressive
idioms within those languages (list comprehensions instead of while loops) and
more expressive structures/libraries (NumPy instead of lists of structures),
so I can afford to put more in one spot. I also write smaller but more
numerous programs.

But there are very real advantages. I learned through game programming and
still do some for fun, and I absolutely prefer having a main loop that puts its
fingers into all the components of the game to having a main loop which
delegates everything to mysterious entity.update()-style functions. The lack
of architecture allows me to structure the logic of the game more clearly for
exactly the reasons Carmack outlines. Everything is sequenced - what has
already happened in the frame can be seen by scrolling up a bit instead of
digging through a half-dozen files.

But the real win here is for the beginner programmer. I strongly dislike the
trend these days towards programming education being done in a "fill in the
blanks" manner where the student takes an existing framework and writes a
number of functions. The problem is that the student rarely has any idea what
the framework is doing. I would rather not have beginners write games by making
on_draw(), on_tick(), etc. functions, but much rather have them write a for
loop and have to call gfx_library_init() at program start and
gfx_library_swap_buffers() at the end of a frame. That way they can say "The
program starts here and steps through these lines and then exits here" versus
having magic frameworks do the work for them. There is plenty of magic done
these days behind the scenes for any beginner, but it is too much to have a
completely opaque flow-control.
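
As a rough sketch of what that looks like (the gfx_library_* names above are
placeholders, stubbed out here only so the example compiles):

    
    
        #include <cstdio>
    
        // Stubs standing in for whatever graphics library is actually used.
        void gfx_library_init()         { std::puts("init"); }
        void gfx_library_swap_buffers() { }
        void update_world()             { }
        void draw_world()               { }
    
        int main()
        {
            gfx_library_init();                      // the program starts here,
            for (int frame = 0; frame < 60; ++frame)
            {
                update_world();                      // steps through these lines
                draw_world();                        // in order every frame,
                gfx_library_swap_buffers();
            }
            return 0;                                // and exits here.
        }
    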

~~~
akkartik
I particularly liked your last paragraph about beginner programmers, and I
want to strengthen it. Abstraction is not just a drag on learning programming,
it's also a drag on learning new codebases, even if you're already fairly
experienced.

Abstraction is good for people experienced with a codebase, but as codebases
grow larger and more complex that usually means the first few people, because
newcomers are rarely able to attain the same grasp of the big picture.

This suggests that authors of say open source projects who want to gain more
collaborators might want to go against their instincts for abstraction.

(This thought crystallized for me after conversations on HN in the past couple
of weeks:
[https://news.ycombinator.com/item?id=8308881](https://news.ycombinator.com/item?id=8308881)
and
[https://news.ycombinator.com/item?id=8327008](https://news.ycombinator.com/item?id=8327008))

~~~
userbinator
The key difference is that those who built the abstraction understand exactly
what it abstracts, and thus what its capabilities and limitations are. Those
who didn't, don't get quite the same picture.

I'm not a fan of excessive abstraction either - the main thing I use it for is
to reduce code duplication, which IMHO is one of the real benefits. Code that
contains lots of functions-called-once or classes-used-once feels like a
terribly inefficient and obfuscated way to do something, and far less
straightforward than it could be.

The whole "abstraction is good because it allows us to build large complex
systems" notion is all too common in beginning courses in programming/software
engineering architecture, and it tends to make people think that large complex
systems are also good. Thus the feeling that somehow all software should be
large and complex, and the resulting architecture-astronautism and
disturbingly inefficient software. I completely disagree - abstraction should
be taught as being a necessary evil for managing complexity, for use only when
that complexity is actually justified and cannot be simplified further.
Abstraction hides complexity but does not eliminate it; in fact it could be
said that it probably adds to it. Code hidden by abstraction is still code
that gets executed, consumes resources, and could contain bugs.

------
protonfish
If anyone other than Carmack had written this, I doubt it would have been so
well received, so I'm glad he did.

We all have our own programming dogma that we love and defend religiously, but
we should never stop asking whether our code truly, objectively, is clear and
easy to read, prone to bugs, and/or efficient. "Best practices" can get
you 80% of the way there, but a developer should never stop questioning the
quality of their code, even if it contradicts the sacred rules.

~~~
ssadler
Would that apply to any craft?

------
vonmoltke
Back when I was writing real-time signal processing code, I spent quite a bit
of time refactoring code from variations of styles A and B to style C. The
problem Carmack talks about is even worse when some of those subroutines were
in separate source files. Over the course of my refactorings, I was able to
re-arrange and combine functionality in ways that were not possible with the
discrete functions. I was able to pull operations out of inner loops that
repeatedly and expensively calculated the same values. I found and fixed all
sorts of actual or potential bugs in doing this. The original code was extra
hairy, though, because it was written for DSPs, not general-purpose
microprocessors.

As a side note, the original code contained large numbers of manual loop
unrolling optimizations like those noted in the email. I actually saw a performance
increase from removing them. Same in some cases for manually inlining the
function calls. From what I could tell, writing simpler, inline code made the
optimizer much more efficient.

~~~
AdmiralAsshat
Wouldn't in-lining make unit testing more difficult?

Admittedly I am not a very experienced programmer, but I thought the general
line of thought with regards to making your program easier to test would be
that each function should do as little as possible.

~~~
clarry
Yes it would, but Carmack is advocating the inlining of functions that deal
with game state. These tend to be kinda hard to test in isolation anyway. But
he does also advocate keeping small helper functions pure.

------
jameshart
There's an interesting game programmer problem here, that is somewhat alien to
a coder like me who grew up on the web. Whereas for a web coder statelessness
is the default, and we have to work to recover and recreate state between
'frames', game coders live in the run loop - and so the assumption is that you
have a repository of persistent global state to act on each frame.

Having noticed that he has a problem when multiple functions are all
interacting with that same shared global state, it's kind of amusing that
Carmack's reaction is to reduce the number of functions, rather than remove
the global state.

~~~
chipsy
At its core the problem game programmers run into is that game state is very
globalized with a lot of dependency overlap. You can push around the data into
different containers and declare dogmatic methods of access, but you always
wind up with the same problems: The animation state depends on the physics.
The rendering depends on the animation state. The physics of a local entity
depend on its collision with the rest of the world. The results of physics
depend on which things collided first. And so on and so forth.

And so games live within this environment of confusion over which things
happen when. At every point there are a few defensive tools - queue up actions
in a buffer, poll values instead of copying them, etc. - but the overall
management of these concurrent, overlapping systems remains a challenging task
lacking in silver bullets.

------
javert
The following alternative style (vs. the 3 presented by Carmack) is one that I
find very easy to read.

    
    
      algorithm() {
        do_small_thing_one();
        do_small_thing_two();
        do_small_thing_three();
        ...
        do_small_thing_X();
      }
    

Advantages over Carmack's Style C:

1. Substitutes accurate and specific function names for comments. Why better?
Because comments can get out of sync with the code.

2. You can quickly see the sub-steps of the algorithm, rather than reading a
multi-page-long giant function with a ton of comments.

When using this style, the inner functions are not visible outside of that
source file (you can arrange this depending on your programming language).
Then it's easy to make sure they are only called once within the source file,
or only called appropriately.
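
In C++, for example, that file-local visibility can come from an anonymous
namespace (or `static`) in the .cpp file; a sketch with the same hypothetical
names:

    
    
        // algorithm.cpp
        namespace {  // internal linkage: invisible outside this source file
            void do_small_thing_one() { /* ... */ }
            void do_small_thing_two() { /* ... */ }
        }
    
        void algorithm()  // only this is declared in the header
        {
            do_small_thing_one();
            do_small_thing_two();
        }
    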

That's because I agree with Carmack that functions called from lots of
unrelated places are a terrible thing.

(Edited for clarity after people pointed out that it seemed like I was just
advocating for Carmack's style A or B.)

~~~
pajtai
That is options A and B. I think the entire article is describing the
downsides of that.

You can't see if you are repeating the same stuff in multiple small things.

~~~
javert
I guess this kind of got lost in my comment, but the point is that these
functions are never called from anywhere else.

You achieve that by not exposing them in header files.

~~~
roghummal
If they aren't called from anywhere else and they're only called once (in the
parent), it might be better to inline them.

Doing so goes against the urge to decompose as much as possible but it'd make
the code easier to follow for the next guy. "What does do_small_thing_X do?"

Even when the next guy is you, a year from now. Do you _really_ remember what
do_small_thing_X does?

~~~
javert
Yes, I do remember. If the function name isn't sufficient, you need a
different decomposition of the problem.

Another advantage of doing it my way is that you can put multiple comments
inside those "small" functions if you need to. So you get two levels of
decomposition. That really helps if your overall algorithm is broken into
smaller parts that still need some explanation.

------
throwaway9134
A few questions: Are there any good examples of code written in this style
(e.g. by Carmack or Blow)? When I tried this style, I would frequently end up
with 800+ line functions: is this what the code is supposed to look like in
the end, or should I be refactoring earlier? When I do end up refactoring,
it's often hard to switch to a functional style. There are complex
dependencies between the different "minor" functions, and the easiest route
forward seems to be to replace the function with a class: minor functions
become methods, and the function-level local variables become instance
variables, etc. Also, this is a small issue, but how do you deal with minor
functions that "return" variables? I typically wrap the minor functions in
brackets, and I declare the "returned" values as local variables right before
the opening bracket, but it looks strange.
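
Concretely, the pattern described above looks something like this (a sketch
with made-up names):

    
    
        #include <numeric>
        #include <vector>
    
        double summarize(const std::vector<double>& samples)
        {
            double mean;  // "returned" by the minor function inlined below
            {
                double total = std::accumulate(samples.begin(), samples.end(), 0.0);
                mean = samples.empty() ? 0.0 : total / samples.size();
            }  // the temporaries are gone; only mean survives
    
            return mean * 2.0;  // later steps of the larger function follow
        }
    
        int main()
        {
            std::vector<double> data{1.0, 2.0, 3.0};
            return summarize(data) > 0.0 ? 0 : 1;
        }
    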

------
AnimalMuppet
> "The function that is least likely to cause a problem is one that doesn't
> exist, which is the benefit of inlining it."

That statement (at least taken in isolation) is false. Inlining it means that
_you're still executing the exact same code_. If it had problems as a
function, it still has problems when inlined.

But that isn't the problem that Carmack is trying to address. He's concerned
about bugs caused by lack of programmer comprehension of the code's role in
the larger function. It's a valid concern. But inlining it makes it harder to
find problems in the details of what the inlined function does (or even to
realize that that's where the problem is, or maybe even to realize that
there's a problem at all).

All styles help with some problems and make others worse. The answer isn't a
style, it's good taste and experience to know when to use which style.

~~~
skylan_q
_Inlining it means that you're still executing the exact same code. If it had
problems as a function, it still has problems when inlined._

But if it's inlined, it's no longer a function. ;)

What he's saying here is that the function itself was fine and free of bugs
but that there is a problem for the programmer: the function affects state in
a way the programmer didn't know or expect, so their understanding and
expectation of what it does don't match reality. What the function does becomes much
more obvious and controllable when you inline the function's body.

------
dzuc
I'm assuming this was posted because it was brought up during Jonathan Blow's
talk last night:
[http://www.twitch.tv/naysayer88/b/572153991](http://www.twitch.tv/naysayer88/b/572153991)
(which has been interesting; Twitch is a great format for these)

------
akkartik
An aside on tool interactions: For the past year or so I've been using a new
way to do syntax highlighting[1][2] that works _really well_ for highlighting
the dataflow in large functions (whether well or poorly written):
[http://i.imgur.com/EmFMTtv.png](http://i.imgur.com/EmFMTtv.png)

[1] [https://medium.com/@evnbr/coding-in-color-3a6db2743a1e](https://medium.com/@evnbr/coding-in-color-3a6db2743a1e)

[2]
[http://www.reddit.com/r/programming/comments/1w76um/coding_i...](http://www.reddit.com/r/programming/comments/1w76um/coding_in_color/cezpios)

~~~
frik
Great idea, so simple, yet no IDE provides this feature out of the box.

~~~
regularfry
KDevelop (specifically Kate) does this.

------
DrTung
The Saab Gripen aircraft that is mentioned did crash anyway (in its first
flight show over Stockholm in 1993
[http://en.wikipedia.org/wiki/Accidents_and_incidents_involving_the_JAS_39_Gripen](http://en.wikipedia.org/wiki/Accidents_and_incidents_involving_the_JAS_39_Gripen))
in spite of that specially crafted fly-by-wire flight software...

~~~
DrTung
You're right, it wasn't the software as such (more like the design of the
system).

I just read more about PIO and it's funny, it reminds me of a short story I
read a long long time ago, written by Arthur C Clarke, I remember the scene in
it where a pilot is remote controlling his big space ship or plane of some
kind, while standing on the ground. But because the remote is going through a
satellite, there is a feedback delay, say 1 second or so. So when he tries to
compensate for a wrong turn, a flavor of PIO is induced and the ship crashes.
Think that particular story was written in the 40's, good foresight of Mr.
Clarke.

~~~
HeyLaughingBoy
I like Clarke also, but this is less an instance of good foresight on his part
than it is an understanding of feedback control theory. Control systems
analysis advanced quite a bit in WW2 and by the time that story was written,
the issues caused by low control bandwidth were probably well known.

------
chetanahuja
I found the functional programming (in C++) advice post linked from the
referenced post a much more interesting read.
[http://gamasutra.com/view/news/169296/Indepth_Functional_pro...](http://gamasutra.com/view/news/169296/Indepth_Functional_programming_in_C.php)

"Avoid globals" is a fairly common (and good) truism for programmers of all
stripes. But casting it in light of the central (and easily digestible) tenet
of functional programming makes it much more approachable. Smart (but
sometimes insufferably pompous) proponents of functional programming should
take notes.

------
cpeterso
Steve McConnell's classic "Code Complete: A Practical Handbook of Software
Construction" cites a number of studies of real systems from the 1970s and
1980s with some surprising results. Some studies showed that function size
inversely correlated with bug counts; as functions increased towards 200 LOC,
the bug count decreased. Similarly, another showed that shorter functions had
more bugs per LOC and _reduced_ comprehension for programmers new to the
system. Another study showed that function complexity was correlated with bug
count, but size alone wasn't.

------
phlakaton
There is an interesting parallel between Mr Carmack's "inlining" observations
and one of the sessions I went to at Strange Loop this year. Jonathan Edwards
was trying to beat back "callback hell" (i.e. unpredictable execution order
leading to unpredictable side effects) by radically simplifying the control
flow of his programs and keeping the execution model dirt-simple. Both would
appear to argue that it's best to arrange heavily stateful code in a simple
linear sequence, and use a top-down execution model, so that stateful effects
are clear and easy to predict.

That being said, I've seen procedures that followed this sort of approach that
were thousands of lines long. Even if we could have cut down on the ridiculous
number of conditionals in that code, most of the state at that scale
asymptotically approaches an undifferentiated mass of global variables. The
result is testable and maintainable only via heroic effort. There have got to
be limits to this kind of approach. (For Mr Edwards the solution was to break
the whole thing up into a sequence of composable views, or lenses, with the
interface between each stage being well-defined.)

I wonder to what extent Mr Carmack's pivot to pure functions is simply an
acknowledgement that there were much better ways to refactor the code than the
mess of one-timer procedures that probably seemed like a good idea the first
time through...

------
darylteo
Seems like everyone is misreading the intent of the post. He is NOT advocating
inlining code in this post. He is suggesting that FP is better at solving the
same problems in a more sensible way.

The email was written in 2007. In there, he advocates the inlining of single-
use functions into god functions as it reduces the risk of these functions
being co-opted into other routines, especially when they all deal with shared
mutable data.

Single-use functions are explicitly singled out in his email; he mentions that
he does not encourage duplicating code to avoid functions.

| "In almost all cases, code duplication is a greater evil than whatever
second order problems arise from functions being called in different
circumstances, so I would rarely advocate duplicating code to avoid a
function"

The blurb at the front indicates the intent of his post. Since then he has
favoured a functional-programming approach - don't inline your functions, but
avoid making your functions rely on mutable/larger scope states. Pass in
everything that is needed by the function through parameters. Avoid functions
with side-effects, encourage idempotence. That way, reusing the function does
not lead to unintentional side-effects.
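
A toy sketch of that difference (the types and names here are invented, not
taken from the email):

    
    
        struct Player { int health; bool killed; };
    
        // Relies on surrounding mutable state: whether this call is safe
        // depends on everything else that has already touched gPlayer.
        Player gPlayer{100, false};
        void damagePlayerInPlace(int damage)
        {
            gPlayer.health -= damage;
            if (gPlayer.health <= 0)
                gPlayer.killed = true;
        }
    
        // Functional style: everything comes in through parameters and the
        // result goes out through the return value, so reuse has no hidden
        // side effects.
        Player damaged(Player p, int damage)
        {
            p.health -= damage;
            if (p.health <= 0)
                p.killed = true;
            return p;
        }
    
        int main()
        {
            Player p{10, false};
            Player after = damaged(p, 15);
            return after.killed ? 0 : 1;
        }
    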

He also mentions that should you still decide to inline, "you should be made
constantly aware of the full horror of what you are doing."

A lot of things change within a decade. =)

------
cygx
_it is often better to go ahead and do an operation, then choose to inhibit or
ignore some or all of the results, than try to conditionally perform the
operation._

 _The way we have traditionally measured performance and optimized our games
encouraged a lot of conditional operations [...] This gives better demo timing
numbers, but a huge amount of bugs are generated because skipping the
expensive operation also usually skips some other state updating that turns
out to be needed elsewhere._

 _Now that we are firmly decided on a 60hz game, worst case performance is
more important than average case performance, so highly variable performance
should be looked down on even more._

Two words: Battery life. In case of mobile devices, this is not sound advice.

~~~
nkurz
Is this covered by the update in John's introduction?

    
    
      To make things more complicated, the "do always, then
      inhibit or ignore" strategy, while a very good idea for
      high reliability systems, is less appropriate in power and 
      thermal constrained environments like mobile.
    

Or do I misunderstand your objection?

~~~
cygx
No, you're completely right - I only skimmed the addendum. In fact, I actually
only skipped that particular paragraph - Murphy's law strikes again :(

It's just that two days ago, I had to deal with exactly that issue, so it was
fresh on my mind while reading the article...

~~~
ufo
Looks like Carmack was right then. Conditional paragraph execution does lead
to extra bugs :)

------
hisham_hm
After reading this, I feel a lot better about the huge main() function I wrote
for htop. I've always thought about splitting it into many functions, but
somehow keeping it all flowing in sequence just made more sense.

------
mr_brown
I never wrote much C or C++ on the desktop, but I often ended up refactoring
my embedded code from one style to another. After a while I realized this is
simply my way of understanding the code better, and making sure I haven't
missed anything. The direction (inlining or breaking things to functions)
almost doesn't matter. What matters is working with the code. It's not so
strange if you think about it: designers understand things by sketching and
taking notes, which is why you see them running around with their Moleskines.

------
VikingCoder
Is anyone else reminded of Facebook's Flux?

[http://www.infoq.com/news/2014/05/facebook-mvc-flux](http://www.infoq.com/news/2014/05/facebook-mvc-flux)

------
highCs
> If a function is only called from a single place, consider inlining it.

Funny, because it's backwards. If code is duplicated, consider making a
function if the pieces of code are the same _semantically_. (Two pieces of
code which are the same at a given time can diverge over time and you don't
want to miss that. Only analyzing the sense of what you're doing (=semantic)
gives you the answer.)

Never add fancy things (like adding a function which is not really a function)
to your code, because fancy code causes bugs.

> If a function is called from multiple places, see if it is possible to
> arrange for the work to be done in a single place, perhaps with flags, and
> inline that.

Well yeah, fix the semantic if it needs to else do nothing.

> If there are multiple versions of a function, consider making a single
> function with more, possibly defaulted, parameters.

Well yeah, fix the semantic if it needs to else do nothing.

>Minimize control flow complexity and "area under ifs", favoring consistent
execution paths and times over "optimally" avoiding unnecessary work.

The right thing to do is to _never_ optimize unless it's too slow and you've
identified the first bottleneck. "Never optimize" means: write the naive code
correctly (without performance aberrations like adding elements in an array).

> To sum up:

Stop fancy. Stop optimization. Stop thinking about code syntactically (=the
succession of operations gives the right result). Think constantly about your
code semantically.

------
lambdasquirrel
I don't have a great deal of experience tuning code, so would someone be able
to explain how inlining is related to mutation of state? I'm not John Carmack
or Bryan O'Sullivan, and I'm not sure if I understand how purity would make
things better.

We do inline our code in Haskell sometimes, but usually the real gains (in my
limited experience, with numerics code) are to be had by unboxing, FWIW.

~~~
tel
Inlining doesn't always work in side-effecting languages. E.g. only if `f` is
pure are the following two fragments identical.

    
    
        let a = f () in a + a
    
        f () + f ()
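
The same point in C++ terms (an illustrative toy, not from the parent
comment): with an impure `f` the two forms give different results.

    
    
        #include <cstdio>
    
        int counter = 0;
        int f() { return ++counter; }        // impure: observable side effect
    
        int main()
        {
            int a = f();
            std::printf("%d\n", a + a);      // prints 2: f ran once
    
            counter = 0;
            std::printf("%d\n", f() + f());  // prints 3: f ran twice (1 + 2)
            return 0;
        }
    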

~~~
JoshTriplett
If the compiler has the full source of f() available at compilation time, it
knows whether it can convert between those two. (Though not all compilers are
good at knowing the memory-usage implications of doing that kind of commoning
operation.)

(And, for that matter, whether it can convert that to "f()<<1".)

~~~
mappu
Does `shl eax 1` really outperform `add eax eax`? Although I guess that's a
question for `-mtune` to decide.

~~~
spc476
SHL EAX,1 and ADD EAX,EAX are the same, time wise. The only difference is that
the A flag (auxiliary carry [1]) is undefined for SHL and defined for ADD. SHL
EAX,n may be faster than n repeats of ADD EAX,EAX, but there are other ways to
do quick multiplication by powers of 2.

[1] The auxiliary carry reflects a borrow from bit 4 to bit 3, or a carry from
bit 3 to bit 4. It's used for BCD [2] arithmetic, and there is no direct
way to test for it.

[2] Binary Coded Decimal

------
ilaksh
> If something is going to be done once per frame, there is some value to
> having it happen in the outermost part of the frame loop, rather than buried
> deep inside some chain of functions that may wind up getting skipped for
> some reason

> I do believe that there is real value in pursuing functional programming,
> but it would be irresponsible to exhort everyone to abandon their C++
> compilers and start coding in Lisp, Haskell, or, to be blunt, any other
> fringe language.

"Here, let me dismiss functional programming, and by the way OCaml and other
'non-pure' functional languages don't exist, and functional programming
languages aren't useful for anything 'real' so you should do your functional
programming in C, and also you may want to dump everything in one long-ass
function because I don't like deep stacks".

He's just rationalizing C traditions.

------
jaunkst
I believe in both functional and encapsulated patterns. It all boils down to
the scope of the task at hand. There is a certain kind of beauty in programming
in a pattern that can be compliant with a particular interface and a pattern
that's efficient and scoped to the result required. Inlining is a great way to
encapsulate in a functional way.

------
petersellers
Seems like a big argument against type C is that it would be more difficult to
unit test the code. The nice thing about A/B (which are essentially the same
to me) is that each subroutine becomes an easier target for unit testing.

------
curiousCoffee
Does anyone write in style C and then refactor to style A/B? That gives you
all the benefits of both styles..

~~~
Too
No, the problem with style A and B is that smallFunction() might have only
been correct under the context of it executing inside largeFunctionA(). IMO
this risk is actually increasing even more if it originates from a refactoring
from style C, since you didn't design smallFunction() from bottom up
considering all possible use cases, you most likely just highlighted a random
block in largeFunctionA() because it was getting too big and clicked "extract
method" in your IDE.

Imagine two months later someone writing largeFunctionB() is browsing around
the code and finds smallFunction(), thinking it will do the job he requires
but actually it has a hidden bug that was never triggered under the context of
it executing in largeFunctionA or under the limited input range that
largeFunctionA was using.

See in particular this paragraph from the article:

    
    
        Besides awareness of the actual code being executed, inlining functions also 
        has the benefit of not making it possible to call the function from other 
        places. That sounds ridiculous, but there is a point to it. As a codebase grows 
        over years of use, there will be lots of opportunities to take a shortcut and 
        just call a function that does only the work you think needs to be done. There
        might be a FullUpdate() function that calls PartialUpdateA(), and
        PartialUpdateB(), but in some particular case you may realize (or think) that
        you only need to do PartialUpdateB(), and you are being efficient by avoiding
        the other work. Lots and lots of bugs stem from this. Most bugs are a result
        of the execution state not being exactly what you think it is.

~~~
curiousCoffee
Oh yeah I see what you mean. Do you think all smallFunction()s should be
generic/reusable?

Seems like it's the fault of the second developer for using the
smallFunction() without understanding what it does.

------
tantalor
> do always, then inhibit or ignore

Explain?

~~~
ShaunK
Rather than conditionally execute code (to avoid performing an unnecessary
expensive operation) always execute the code, then discard the result if it is
unneeded. The idea being that the performance gained by avoiding the
unnecessary operation is not worth the complexity added.
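
A toy sketch of the two shapes (all names here are invented):

    
    
        struct Pose { float frame; };
        struct Entity { Pose pose; float animTime; };
    
        // Hypothetical expensive step that also advances internal state.
        Pose computeAnimation(Entity& e)
        {
            e.animTime += 1.0f / 60.0f;       // bookkeeping other systems may rely on
            return Pose{ e.animTime * 30.0f };
        }
    
        // Conditional version: skipping the work also skips the animTime
        // update, which something else may need later.
        void updateConditional(Entity& e, bool visible)
        {
            if (visible)
                e.pose = computeAnimation(e);
        }
    
        // "Do always, then inhibit or ignore": the operation and its state
        // updates run every frame; only the visible result is discarded.
        void updateAlways(Entity& e, bool visible)
        {
            Pose pose = computeAnimation(e);  // always executed, consistent cost
            if (visible)
                e.pose = pose;                // otherwise ignore the result
        }
    
        int main()
        {
            Entity e{ {0.0f}, 0.0f };
            updateConditional(e, false);      // animTime never advances
            updateAlways(e, false);           // animTime advances even when hidden
            return 0;
        }
    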

~~~
Guvante
As a minor note, conditional checks can have significant performance impacts
if you don't consistently handle the conditional.

As an example performing an operation every other frame can cause your CPU to
thrash due to always taking the wrong path. (I of course am oversimplifying
and OOO CPUs are quite complex)

------
dmjio
We'll all be programming in Haskell soon enough

------
notastartup
Is this arguing that developers shy away from the forced OOP, and Patterns,
instead relying on simple to read, step by step, functions?

One of my biggest gripes about OO programming was that you had no idea what the
other components were doing unless you investigated each component directly.
Sometimes the dependencies and the chain would get so large and complicated,
you'd spend more time figuring out how to wrap your head around the whole
thing than doing things that result in direct business benefit.

But every interview you go to will tell you otherwise; inflating technical
debt is a great way for managers to keep their jobs and for the sales team to
boast about six-digit LOC = obviously state of the art.

------
notastartup
How would you explain the benefits of functional programming to an employer
who absolutely refuses to believe that OOP is overvalued? A lot of job
requirements will ask for experience with OOP, and then you'll be asked to recite from
memory what the Singleton pattern looks like or draw a Factory pattern as a
yardstick of developer efficiency.

~~~
eru
You can do a prototype, and try to convince like that. Or just do your own
thing and don't mention any paradigm labels.

Or you can look for a more enlightened employer. Mine is hiring, for example.

~~~
tome
What sort of functional programming jobs do they have available and in what
languages?

~~~
eru
Google is pretty limited in the functional languages available as a main tool for your
job category. It's easy enough to sneak in functional things here and there,
though.

Standard Chartered, where I worked before doing FP, is looking for Haskellers
every now and then. Contact me (see profile) for some more info. Citrix is
still using OCaml in Cambridge, UK, as far as I know. Not as much Haskell any
more as they used to.

