
C++ Coding Guidelines (2014) - turingbook
https://howardhinnant.github.io/coding_guidelines.html
======
xedrac
I strongly disagree with the prioritization of compile time and run time
performance over maintainability. I can't count the number of times this has
bitten me because of some premature optimization someone chose to write. Most
of the time the "optimized" code is faster, but it was never a bottleneck to
begin with, and is significantly harder to read/modify. Also, if compile time
is so important to you, try using Meson as your build system.

~~~
gp7
Compile time is a huge factor in maintainability!

~~~
michaelvoz
It's really not. BUCK and other build systems make compiles lightning fast, and
when you have a complex multithreaded bug because of sloppy optimization,
faster compiles won't help.

~~~
to3m
This is exactly the sort of situation where fast compiles often do help, in my
view! Debugging is one case where speed of progress can depend on build time,
because you need to run the code after each change to see what effect it had
and to decide what to do next.

~~~
ardivekar
Optimised code is occasionally so cryptic that you'll just waste a few hours
figuring out what it does, which basically nullifies the effect of faster
compile time.

------
sidlls
Is compile time a real issue these days? I worked on a code base with
thousands of source files and millions (in the 10s [corrected from 100s]) of
lines of code 10 years ago and the build process took ~20 minutes. That's a
long time, but it was 10 years ago and without a parallel build process. The
code leveraged templates and other features of C++ that tend to lengthen
compile times, too. Sure, we should strive for minimal compile time (it's
expensive idle-time for a developer) but I'm not convinced it's worth
allocating developer effort to except in egregious cases. Quality development
that doesn't explicitly carve out time to focus on compile time should produce
code that compiles in a reasonable time anyway.

In general (and for the majority of cases by a significant margin) I agree
that maintainability should overrule run time performance. However, as always,
there are tradeoffs to consider. If one is only going to use a program briefly
or a small number of times, maintainability becomes a lower priority.

~~~
zeptomu
> Is compile time a real issue these days? I worked on a code base with
> thousands of source files and millions (in the 100s) of lines of code 10
> years ago and the build process took ~20 minutes. That's a long time, but it
> was 10 years ago and without a parallel build process.

Yes, I think it's still an issue. Say your project from 10 years ago took 20m
to compile. With a good build chain one could argue that's down to 2m today,
but that's still a lot of time.

Now, if it took 2s - that would be a real improvement. One should not
underestimate the vast benefits of a fast feedback loop.

~~~
sidlls
Fast feedback can be useful, certainly. But consider that there are diminishing
returns to be had. The difference between 2 minutes and 2 seconds may seem
significant for this, but I have doubts. In any case it certainly isn't going
to be as noticeable as a reduction from 20 minutes to 2 minutes.

~~~
zeptomu
I would say the difference between 2 seconds and 2 minutes is significantly
more noticeable than the step from 20 minutes to 2 minutes.

Apart from the factor being bigger (60x vs. 10x), what matters is that once
you get under ~10s your workflow changes: you no longer have to work in
"async" mode ("oh, while this compiles I'll check this stuff in the
documentation"). For me, not being the best multi-tasker, this is a real
benefit.

~~~
ryandrake
I think it depends highly on what kind of developer you are and what your
workflow is like. For some developers, frequent compiling is part of the
development process. For others, it's something you do at the end (or after
large chunks of work) to test/verify.

If you're one of those "Work on the next line, keep changing it until it
compiles, then keep changing it until it runs" developers, then yea you'll
want fast compiles.

If your workflow is such that you take your time, do your work once, and
compile/run/test as the last step before you're ready to commit, then it
doesn't really matter if your project takes 20 minutes to compile. You're only
doing it once or twice a day.

~~~
gpderetta
When working on a new project I sometimes spend weeks coding without
attempting to compile once (it then usually takes a couple of days to shake
down all the typos and thinkos); my boss is horrified by this, but it works
for me. In this scenario I care little about compile time.

The problem is when I'm fixing bugs or adding small features to an existing
codebase, especially when tests need to be added or modified. The compile-time
turnaround does kill my productivity in these cases.

------
makecheck
None of these are unique to C++ so I recommend simplifying the title. Even
compile time recommendations apply to many languages.

Also, it is strange for maintainability to be #4 when correctness is #1
because it is equally important for code to _remain_ correct over time. There
is nothing more frustrating than reopening issues over and over because people
keep accidentally breaking things in unmaintainable code.

~~~
nickpsecurity
"Also, it is strange for maintainability to be #4 when correctness is #1
because it is equally important for code to remain correct over time."

Absolutely. This was known far back as Ada which had reduction of errors in
maintenance phase as one of its design goals for its syntax and semantics.

------
tilt_error
I don't think it is possible to state a general list of priorities like this
that applies to all source code in all contexts. What is not apparent here is
the context in which these priorities were stated. I could easily argue that
maintainability (both reading and writing) is more important than runtime
performance in some other given context.

What would be interesting is a more elaborate discussion of the priorities
based on the context (which is unknown in this case).

~~~
jandrewrogers
One way of looking at the priorities is through the lens of "how difficult
will it be to address these topics after the product ships?", with all of the
inertia that entails.

For example, most runtime performance is fundamentally architectural. Once you
ship an architecture it is nearly impossible to change it in practice. You
rarely get a second chance to do this correctly.

Correctness can be particularly insidious if the code behaves well enough to
use. There are many examples of incorrectness that became a "feature" after it
shipped because users started exploiting the side-effects of incorrectness in
their own applications, making it very difficult or impossible to properly
address the underlying broken-ness. A lot of code spaghetti is the product of
a janky feature implementation that has some incorrect behavior that needs to
be supported indefinitely to keep users happy. (This is what I always fear
most when developing software.)

Compile times are partly a side-effect of architecture, but in practice you
can often make large improvements without materially altering the design of
the software. With minimal thoughtfulness in the software design, you can push
this off until it really becomes painful without losing the ability to change
it.

And so on. Readability and writability are among the easiest things to change
after the fact.

------
SimbaOnSteroids
How can you lose readability to something else? Can't you comment what a
tricky bit of code does? Full disclaimer, I'm a fairly novice programmer.

~~~
khedoros1
Consider the case when someone is prototyping a hardware device. They might
start out with a bunch of separate modules connected by wires, with the
benefit that those modules can be reconnected in different ways easily, and
the connections are relatively clear. Later, they'll do the work to custom-
build a circuit board, chips, and all that to make a marketable product. One
problem: The finished product is much harder to modify than the prototype was.

We expect software to be more malleable. There are a lot of ways to architect
software that will increase efficiency in some way but will make it harder to
modify the software in the future (adding new features, closing security
holes, etc). On top of that, code that's tightly tied together becomes harder
to read.

Comments are great, but they've got to be maintained too, except that you
don't have customers or compilers/parsers/etc enforcing that. So if a bunch of
code is complicated, has a lot of inter-relationships between different
pieces, etc, then part of the job is making sure that the comment is up-to-
date and still correctly describes the purpose and use of the code. Clever,
hard-to-read bits of code should be as small and far apart from each other as
possible.

Maintainability includes more aspects than just readability.

~~~
SimbaOnSteroids
That's a great answer, thank you. Do you think it would be possible to write a
plugin that auto-comments code and updates the comments for particularly
tricky parts? Say, something that marks a snippet with [n] and then tacks on
what [n] does at the end of the code, and then in the background keeps a
running list of what got referenced where and updates those references as
needed?

~~~
khedoros1
The rule of thumb is that comments should explain the "why", and the code
itself should explain the "how", so I tend to be skeptical of documenting how
something works in the comments.

Say that we've got that 1% case that's irreducibly complex (i.e.
clarifying/simplifying the code kills the performance of something that gets
run a million times a second), where we might want to explain the "how"
because the code's unclear but can't be changed. The comment itself might take
the form of pseudocode representing the slower-but-clearer version of the
code. Being a complex and sensitive area, you'd want to have as many tests
built around it as possible, to verify that the behavior doesn't change unless
it was planned in advance. Part of the code-review would be that any change in
behavior (as reflected by changes to the tests) would also need to be
documented in the comments.

Computers are good at accurately tracking multitudes of small details (like
automatically tracking all the places a piece of code is called, etc). Getting
them to synthesize a summary of something's behavior ("seeing the forest
instead of just the trees", the kind of thing that would be useful as a code
comment) sounds like it would be interesting to research, but I don't think
there's anything like that right now.

------
azov
Those priorities are self-contradictory.

If you prioritize performance over maintainability, you're sacrificing long-
term correctness, which is supposed to be your #1 priority.

PS. Also, "C++ Coding Guidelines"? This is neither coding guidelines, nor
anything specific to C++.

------
xaedes
"Make sure that what you assume won't compile actually doesn't compile."

How do I do that properly?

How do I test for undefined behaviour? I mean when I have code where I know
that (ab-)using it in a certain way will trigger undefined behaviour, how do I
test that?

~~~
saghm
If you want to test that something doesn't compile, you just try to compile it
and it will either succeed or fail. UB is a runtime thing, so it's not
relevant to this point.

~~~
xaedes
You mean including code snippets of non-compiling code in the automated(!)
tests and compiling them from there?

Hm, that makes sense. Does anybody know a good framework for this? I can
imagine that supporting different compilers' output isn't a trivial thing
that one should have to rebuild oneself.

Btw: the undefined behaviour question was meant to be unrelated to the "not
compiling" one.

~~~
saghm
Offhand, I would just use whatever CI tool you're already using to test
various compiler/OS combinations, and write a bash script that compiles a
test file (given as an argument) and exits with the opposite exit code from
the compiler (i.e. 0 for compile failure, 1 for success).
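A sketch of that script (the function name and file names are hypothetical;
it assumes a C++ compiler is on PATH as `c++`, overridable via `$CXX`):

```shell
#!/usr/bin/env sh
# Compile a file and invert the result, so that "fails to compile"
# makes the check pass, per the suggestion above.
expect_compile_failure() {
    if ${CXX:-c++} -std=c++17 -fsyntax-only "$1" 2>/dev/null; then
        echo "FAIL: $1 compiled, but was expected not to"
        return 1
    else
        echo "PASS: $1 did not compile, as expected"
        return 0
    fi
}

# Example: a deliberately ill-formed snippet.
printf 'int broken = ;\n' > broken_snippet.cpp
expect_compile_failure broken_snippet.cpp
```

`-fsyntax-only` (supported by GCC and Clang) skips codegen, which keeps the
negative tests cheap; MSVC would need a different flag.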

------
pacaro
I think that expressing these important points as priorities is missing a
better way of looking at them.

I see them as tensions; the engineer's job and wisdom lie in balancing these
tensions.

Except correctness. Code should be correct.

------
partycoder
These coding guidelines might be a good idea overall but they do not seem very
specific to C++. You could take the same guidelines and apply them to C, Ada,
Obj-C, Pascal, etc.

If you look for C++ specific guidelines I suggest this:
[https://github.com/isocpp/CppCoreGuidelines/blob/master/CppC...](https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md)

I am saying this since C++ gives you a lot of power, but also a lot of
responsibility and room for error, which makes a more specific guide a very
desirable thing.

For example, you don't use INT_MAX anymore; you use std::numeric_limits<int>::max()

------
GnarfGnarf
Except in very special cases, I would suggest Maintainability be #2.

