I was wondering: is there a reason C and C++ code favours comments over descriptive names? I'm almost certain I'm not making this up; go into any random file here and look at the names. Structs will have members called p with a comment saying "position". It seems very silly.
My own rule of thumb is: short names for 'unimportant' variables where the meaning is clear from the context. More descriptive names for the more non-obvious cases, but still keep it short and concise (e.g. I would typically use "pos" instead of "position").
Having said that, some early C dialects limited the number of significant characters in identifiers to 6 or 8. I've heard this is the reason the C runtime library function names are short (strcpy() instead of string_copy()).
It does not seem silly to me. Coming from mathematics, single-letter variable names are the only acceptable possibility, especially in numerical code. I tend to read contiguous letters in a formula as products.
Besides, descriptive variable names violate the principle of "don't repeat yourself". It is better to describe the variable only once, in a comment next to its declaration, instead of several times.
Programmers should understand that not everybody who programs has a background in computer science, and certainly the traditional conventions of computer science are not universal and not generally reasonable. Many of us have a scientific background, where descriptive variable names are silly. We do not want our numbers to have meaning, that's the whole point of abstraction. Thus, they are named x, y, z, and so on.
Descriptive variable names are about maintainability. And when it comes to maintainability, the guiding principle is:
> Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.
I've tried many times over the years, but I just never seem to be able to squeeze all the relevant information into the names without making them excessively long, and even then I usually find upon rediscovery that I have omitted an important detail. Mathematics seems to thrive on a kaleidoscopic abundance of subtle but important variations on similar themes and this really frustrates the ability to encode information in the names. Also, the necessity of building an abstract-mathematical mental picture separate from the code obviates the need. The value proposition of verbose naming conventions entirely falls apart under these conditions, and once it does, the benefits of concision rule the day.
force = (g_constant * mass * mass) / (radius * radius)
Communicates far more than the algebraic alternative.
Programming is about solving a particular problem, not providing a general formalism. The art of it, and mastery of it, is how to imbue the program with the problem domain. So that by speaking in its concepts you solve the problem.
> violate the principle of "don't repeat yourself"
That isn't the principle; that's a literal reading of each word, conjoined into something totally different.
The principle is "solve the problem once", i.e., if you encounter code re-solving a problem already solved before, abstract so that you solve it only once.
The principle only makes sense with respect to a problem which reoccurs, it is either incoherent or false when applied to literal fragments of code text.
One would then be moved to define replacement macros for anything more than two letters.
I respectfully, and strongly, disagree. Try to implement the conjugate gradient algorithm (or a variation of it): https://en.wikipedia.org/wiki/Conjugate_gradient_method. Using your style, you will obtain an unreadable mess. Better to make your code isomorphic to your math, and then explain the meaning of everything in the comments.
> force = (g_constant * mass * mass) / (radius * radius)
> Communicates far more than the algebraic alternative.
I don't see how this is clearer than something like this
// Newton's law of gravitation
// m : mass of first body
// M : mass of second body
// G : universal gravitational constant
// r : distance between the two bodies
// F : force between the two bodies
F = G * m * M / (r * r)
For example, we have two competing coordinate systems: in the first, x is width, y is height, and z is depth; in the second, x is width, y is depth, and z is height. With descriptive names we would have no such problem.
However, it's much easier to load a large and complex algorithm into your head if it is written concisely: one letter per variable or function, a minimum of unnecessary separators. When I need to understand someone else's codebase, I will often concatenate all the files, delete all comments and obvious functions (getters, setters, eq, hash, etc.), abbreviate long names via search and replace, remove unnecessary symbols, and convert functions and classes into one-liners, until I can see the whole algorithm at once. However, reading such code requires some additional training.
To summarize: it's possible and easy to convert descriptive names into abbreviations, so it's better to stick with descriptive naming in public interfaces.
If all you have is a 300 baud terminal (a luxury for some early users), writing a 30-letter statement takes about a second. Doubling that speed by using shorter identifiers is worth it (certainly as long as all the people reading the code either wrote it, or have the writer sitting next to them)
That created a culture that persists to today.
There are different cultural streams, such as Apple's, which originated in Pascal (for the original Mac OS). Consequently (I think), Apple's C headers later on weren't afraid to use identifiers with over 40 characters. NeXTSTEP/Cocoa similarly isn't afraid of long names (https://mackuba.eu/2010/10/31/the-longest-names-in-cocoa/)
Looked at several headers at random just now (including file, net) and they seem OK. Not terribly verbose, but not exactly cryptic.
In terms of 'p' vs 'position' - 'p' is an extreme and it's usually used for pointers, but the driving factor for using shorter names is usually to keep the line length in check when these vars are used in the code. Ditto for type names.
"Position" is quite often shortened to "pos", "offset" to "off", "length" to "len", etc. This is very common. Sometimes the original name is too long (e.g. "position of the last error in the first block") and lends itself to a not-so-obvious abbreviation, in which case a comment is added to clarify what it is.
It's basically about striking a balance between succinctness and verbosity, and conventionally C tends to lean towards the former.
Neither ::highres nor ::high_resolution_clock really is a descriptive name though, because it doesn't really describe what it means. What is a "high resolution clock" here? Does it mean µs resolution? Nanoseconds (as some "newer" Linux APIs support)? What about accuracy?
"chrono" is one of these "sounds cool but is kind of a weird word to use" things.
And of course any new C code would look out of place if it didn't follow these old shibboleths... better to make new C code an unreadable mess too, no? ;)
There are really only two things that bother me:
- Windows support is alright, but it's a bit weird and wants some Unixy stuff to exist. It's being worked on in small bits, but it could be better. If you don't care about this, you'll likely have a good time.
- Sadly it isn't widely used outside of Google, so you're mostly on your own. The one upshot is that the Starlark language makes it really easy to compose BUILD files, so I don't find it too difficult to manage a few BUILD files for things like SDL2. Unlike with a lot of other build systems (CMake, for example), it's much less hassle to consume other Bazel projects directly, since it's designed to work that way; so if it ever did gain widespread adoption it would be a really nice solution.
There are definitely some things that would turn off the average user. For example, the BUILD files, in order to be predictable, contain dependency information, which needs to be updated either by hand or by tools (you can guess which is preferred). On the other hand, builds are very predictable, and you can easily express dependencies across different programming languages, and even on autogenerated code from another target.
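To make that concrete, here's a hypothetical BUILD file (all target and file names made up) where every dependency is spelled out explicitly:

```python
# Every dependency is declared, so the build graph is fully known to Bazel.
cc_library(
    name = "geometry",
    srcs = ["geometry.cc"],
    hdrs = ["geometry.h"],
    deps = ["//math:vector"],  # another target in the same workspace
)

cc_binary(
    name = "renderer",
    srcs = ["main.cc"],
    deps = [":geometry"],  # the build can only see files these targets declare
)
```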
Nothing is perfect, but I find Bazel is super nice for C/C++ compared to the alternatives I've used. It's a different way of approaching the problem of building software, one that I've found very novel despite the shortcomings. I'm worried it's too easy to miss what makes it special behind some of the turnoffs people initially hit.
I’ve looked a little bit at Bazel but haven’t spent long on it. It seems to really, really want you to do things in its specific googly way (which seems to stem from the monorepo style: always knowing the exact build path and never needing to distribute to users), which is hard if you’re e.g. trying to convert an existing project that doesn’t perfectly fit (us: combined C++/Python with some legacy Fortran generation).
Or a project that has external dependencies currently being chain-built but isn’t itself built with Bazel (e.g. most of the planet). We don’t all have dedicated build teams to accurately and bug-free reproduce other libraries’ build procedures (and getting away from a buggy reproduced build is part of my motivation for the system rewrite).
I seem to remember the documentation also had the usual problem that the easy examples were well documented but it was hard to tell how to do anything more complex, though maybe I didn’t dig enough. CMake has a similarish problem, except there the problem is finding more modern ways of doing things that don’t suck (I’m aware of all the talks about this; many of them are big on ambition but low on details, and usually just teach transitive dependencies).
>I’ve looked a little bit at Bazel but haven’t spent long on it. It seems to really, really want you to do things in its specific googly way (which seems to stem from the monorepo style: always knowing the exact build path and never needing to distribute to users), which is hard if you’re e.g. trying to convert an existing project that doesn’t perfectly fit (us: combined C++/Python with some legacy Fortran generation)
I don't really think this is the case. There are definitely certain aspects that can be challenging, but I don't feel it has much to do with monorepos. It works well with a monorepo using the workspaces system, but it should also work fairly well with modular git repos; each repo is just treated as its own workspace. It's even possible to have Bazel go and get other workspaces, and you can refer to targets inside of other workspaces.
One of my favorite parts about Bazel is combining different languages. If Starlark rules don't exist for the language you are trying to build, you can write them. Starlark is a fair bit like Python; it's relatively easy to learn and to write your own rules.
I acknowledge that most people don't want to maintain BUILD files for external stuff. There are some options here. Some people already maintain this, like here: https://github.com/bazelregistry - and there are experimental rules for dealing with external build systems.
I use it outside of Google on personal projects and find it perfectly manageable. You don't really need a large build team unless you're doing interesting things. There are a few aspects of Bazel that really are tough, like how you must explicitly specify dependencies on every target, how actual builds can't even see files not specified in dependencies, and, when you get fairly deep, you'll probably run into trying to understand what's going on with runfiles. It's definitely a bit more demanding than a Makefile generator at this point. But in exchange, it delivers on its promise of being fast and correct.
Been following it since the presentation at 2015 @Scale conference, attended 2017 Bazel Conf, introduced it at Lucid Software, helped write a bunch of OSS rules.
Bazel is mostly great. Besides a few pre-1.0 polish items, I'm mostly frustrated by the expectation that users write build rules in Skylark, while the real rules (including C++) have to be written and compiled into Bazel itself to take advantage of features that Skylark lacks and aren't even on the roadmap. 
So while I'm disappointed Bazel's not quite a general-purpose build system a la Make, it's awesome for C++ and Java.
 "Why Skylark is a 2nd class citizen" https://groups.google.com/forum/#!msg/bazel-discuss/8eAMN3Wh...
Starlark (Skylark) is used for a huge number of rules. The community has written and used in production lots of rules, such as Closure, Docker, Go, Haskell, Kotlin, Kubernetes, NodeJS, Rust, Scala, Swift, Typescript.
We stopped creating native rules many years ago and we are actively migrating them to Starlark. I'd replace the phrase "real rules" with "legacy rules".
If you hit critical limitations when writing rules, feel free to ask on the mailing-list (or on a GitHub issue).
> The community has written and used in production lots of rules
I know. I've written them. GitHub: pauldraper.
> If you hit critical limitations when writing rules, feel free to ask on the mailing-list
I have, and you've responded. 
And a year later, it is still unclear to me whether Starlark is going to get tree artifact actions, or dependency discovery (pruning), which C++ uses to great effect.
I proposed a solution to both problems.  Oscar from Stripe presented a proposal at the first Bazel Conf.
I've concluded that either (1) I'm using the wrong channels or (2) Bazel isn't that aware/interested in Starlark parity.
TBH, the number of responses that are ignorant of the dependency discovery disparity alone shows me how unaware Bazel is of the Skylark-only deficiencies.
Great project (seriously), but when your bread-and-butter compiled languages* (Java, C++, Objective C) aren't using the same system as everyone else, it's only natural.
Again, great project.
* Go is in Starlark but has essentially required the complex Gazelle build tool to be layered on top of Bazel.
But there's a very recent discussion that seems related: https://groups.google.com/d/msg/bazel-dev/oFRdGdrm8DM/Gr0Yz3...
The beauty of a single header include is that there is no build system, so I don't have to provide any scripts at all.
It's an unfortunate solution but I can see why it is so popular.
Actually, we've been having a lot of success developing an in-house tool to solve this problem and it's helped us build agile modular systems using C++.
It's a one stop shop for building modular C++ applications with an opinionated set of packages here: https://www.github.com/kurocha
That being said, `teapot` doesn't have any default build rules, it relies on packages to provide them, so you could build an entirely different eco-system of packages with minimal effort.
One advantage is that you don't need to mess with build systems for configuration; instead, put the configuration defines before the implementation, right in the source code.
Another advantage is potentially faster compilation if you put many implementations into the same source file. Same idea as unity builds, basically. Even though you're using dozens of header-only "modules", the compiler only builds a single file.
In the nasty real world, these build systems introduce more issues for more people. Then linking the shared object or DLL adds management issues, naming issues, architecture issues, etc. But that's for building a dependency separately as a binary.
Has every one of these side compilation steps gone perfectly for you? No. At some point they wasted your time.
This concatenation-build step is done reliably beforehand; even if simple, it can be assembled and tested by the releasing party. Saves the user some trouble.
A pre-built binary is about as easy if done sensibly. Boost is mainly header-only but still has some libs requiring linking, which requires digging into that library's particulars.
All of those are steps that have tradeoffs in time, attention, flexibility and risk.
Careful with that word "just". Just do this! I can just do that! Accounting for the full hassle/cost requires looking beyond just the compile step, in this instance.
I guess my point is that I just don't see the benefit of concatenation here.
You can also group the stable headers into a different implementation source file from the headers that change frequently.
(I'm not the author and don't know them, I'm just a happy user.)