Looks reasonable to me. I've been programming C daily for 25 years. Cosmetic but it is nice to see "int const foo;" being preferred to "const int foo". And "We define variables as close to their first use as possible". Less cosmetically, section 5.6 on named constants is great. That's a fiddly area of C from which to pull out reliable advice.
There is no technical reason to prefer one or the other; they are both declaration specifiers which can occur in any order. long const int is valid, even.
The const qualification is in the type; it is not a direct attribute of foo, so putting the const closer to foo means nothing.
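To illustrate that reordering freedom concretely (all of these are valid, whether or not advisable):

    const int a = 1;        /* the usual spelling            */
    int const b = 1;        /* same type                     */
    long const int c = 1;   /* also valid: a const long int  */
    int long const d = 1;   /* still the same const long int */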
The parent says it is "cosmetic", not technical. The parent, being someone who wrote C daily for 25 years, knows what the syntax allows and that it isn't necessary. A constant pointer to a constant pointer to a constant integer should be written "int const * const * const x" because there's only one other way, "const int * const * const x," which is inconsistent, ugly and potentially confusing.
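A tiny compilable illustration of that (names invented here); both spellings denote the same type and can be mixed freely:

    int const value = 42;
    int const *const inner = &value;
    int const *const *const x = &inner;   /* "const after" spelling                 */
    const int *const *const y = &inner;   /* "const first" spelling; same type as x */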
My perspective is that "const type" order consistently appears in all versions of the ISO C standard and POSIX.
Every major system API I can remember working with that didn't hide const behind a typedef (like Win32's LPCSTR and whatnot) puts const first.
Speaking of Microsoft: though its APIs hide const behind a typedef, the MSDN samples are predominantly in "const char" order.
I've never seen a man page with "int const".
Searching all of the man pages I have installed on an Ubuntu 18 system here (literally grepping through all of /usr/share/man), I found just one with "char const" or "int const". Namely this one:
You just worked on some unusual code bases, and are glossing over the language and system APIs those code bases depended on, and their respective docs.
const int isn't a declarator; it's a list of declaration specifiers.
Though these can be reordered, it's clear they were intended to follow English.
In a compiler implementation, it's more troublesome to enforce a particular order than to process a simple list, setting/checking flags in some data structure and diagnosing invalid duplicates or mutually exclusive combinations.
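Roughly the kind of bookkeeping meant here, as an invented sketch (not taken from any real compiler):

    enum { SPEC_CONST = 1u << 0, SPEC_INT = 1u << 1, SPEC_LONG = 1u << 2 };

    /* Each specifier just ORs a flag into a set; order is irrelevant, and
       bad combinations are diagnosed as the flag is added. */
    static const char *add_specifier(unsigned *set, unsigned flag)
    {
        if (*set & flag)
            return "duplicate declaration specifier";   /* e.g. "int int x;" */
        *set |= flag;
        return 0;                                       /* accepted; order never examined */
    }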
I think you're inadvertently just proving the point.
I was speaking of declarators, not declaration specifiers. In C, a pointer type is part of the declarator and not the specifiers. Therefore the CV qualifier is also part of the declarator, with the single exception of its appearance among the specifiers. Regardless of whether specifiers were intended to follow English (and this claim is dubious), declarators certainly were not, since it's often more natural to read them right to left, and it's generally most natural to read the leftmost type specifier last.
Example:
char *x[10];
Translated to English, this is naturally stated as "an array of 10 pointers to char", not "char something-something array".
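A few more read the same way (illustrative only):

    char (*y)[10];       /* y: a pointer to an array of 10 char                              */
    int *(*g[4])(void);  /* g: an array of 4 pointers to functions returning pointer to int  */
    int (*f(void))[5];   /* f: a function returning a pointer to an array of 5 int           */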
Edit: later on I see you've moved the goal posts about the "uselessness" of cv qualifiers on pointers. I don't know what to tell you, I think that's bullshit, and there are a number of viable use-cases for them regardless of whether you have encountered them. I'm just addressing the consistency part here.
Indeed, declarators must have their postfix chain read first, left to right, then the prefix portions right to left. (Separately at each level of parenthesization, if any, working inside out.)
There is no choice about where to put const inside a declarator; the position affects the nesting level which alters the semantics. So there is no debate to be had there.
Note that in declarators, the const is necessarily to the left of the thing it qualifies, so why would we put it to the right of int:
int ** const ptr;
             ^^^ this is what is const
int * const *ptr;
            ^^^^ this is what is const
const int **ptr;
      ^^^ ^^^^^ these two are const
In the canonical specifier order, nothing to the left of const is tainted by const:
extern const int **ptr;
             ^^^ ^^^^^ const and const
^^^^^^ unrelated to const
For the best possible consistency, imagine const to be pissing, while the wind is blowing left to right.
Your third example could just as easily and correctly be:
int const **ptr;
          ^^^^^ this is what is const
True, it is a special case, when there is more than one item in the declarator list, as to how the const applies, but that by itself is secondary to where the const goes.
Const qualifies the object. Whether the const applies to the "int" in some way or is an independent type qualifier that applies to each declarator is pretty philosophical. I say it's the latter. The fact is, the very allowance of a cv qualifier in the specifier list introduces an inconsistency or imbalance.
This is one of the ultimate bikeshedding arguments. There's no right answer, and appeals to spoken language aren't particularly compelling when bikeshedding C of all languages. This isn't AppleScript.
I'm not even terribly interested in this particular point. I'm more interested in quashing the notions that bikeshedded opinions of language syntax have some divine providence. It's just bullshit.
We're then missing that the const qualification is applying to the int type.
Also, declarations can have multiple declarators.
int const *p, volatile **q[3]; // syntax error! not how it works
This isn't bike-shedding. The shed has already been painted. There is a right answer which is not to make your code look weird just because the standard allows it.
No "int const", no i[array] instead of array[i], no deceptive trompe d'oeil nonsense like:
int* p, q; /* q is an int, not a pointer */
You will not solve any issue in C programming by swapping around declarators.
When we write code, stuff that looks weird should signal a bug. When bug-free code looks weird, that is a distracting false positive that grabs my attention for no reason. Don't add gratuitous weird.
There's no "except" because one cannot conclude how people should write programs from how they do write programs, regardless of circumstances. In any case, languages and their use change. C is 47 years old. Consistent expression structures is more important than analogies to English.
Further, C is not like English and even if it were, one could not conclude that it should be or should continue to be. Moreover, that it was designed by English speakers is not evidence that it is like English or should be like English: most assembly languages were also desiged by English speakers.
It's isomorphic, but you're right, I used an overly specific term. In any case, one can't conclude that something should be the way it is from the fact that it is the way it is without further premises, regardless of the kind of value judgements involved.
The implicit key premise seems to be that the more akin a formal (programming) language is to its users' natural one, the more ergonomic it is. I would not necessarily agree, but I would not read it as ought-from-is.
Yeah, this only works if you initialize at the same time as declaration. And as you say, without const, the same code is generated. All it prevents is accidental reuse of current_foo, which may have some value.
If you declare a variable in this way in C, it can never be assigned. Not once — zero times.
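A minimal sketch of that (compute_foo is a hypothetical helper):

    int compute_foo(void);   /* hypothetical */

    void example(void)
    {
        int const current_foo = compute_foo();   /* its one and only assignment */
        /* current_foo = 0; */                   /* error: assignment of read-only variable */
    }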
What is or is not good practice in C has little bearing on or relationship to functional languages, so I'm not sure why you bring them up in this context.
"const int * foo"
means that
foo is a pointer to an int that is constant.
Which could also be
"int const * foo"
foo is a pointer to a constant int.
But since the const qualifier for pointers can't be reordered like this, I think the point is that it's better practice to have the const come after, so that it ALWAYS comes after in your codebase regardless of the context?
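Something like this, if I'm reading the convention right (illustrative declarations):

    int n = 0;
    int const *p = &n;         /* pointer to const int: const right after the int it qualifies */
    int *const q = &n;         /* const pointer to int: const right after the * it qualifies   */
    int const *const r = &n;   /* const pointer to const int: both                             */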
Thanks for the explanation. I meant it mostly facetiously, but my more serious point is that there are some parts of some languages (and APIs and tools and architectures) where the answer you get from experienced practitioners is "think harder". It's not that I can't grok it, it's that it takes more effort than it should given how much the concept comes up, and the more I wrestle with my tools, the less I potentially get done.
C++ rvalue references are a great example. Whenever I see them I have to go back and relearn the concept because accounting for the language feature sometimes feels more complicated to get right than the unsafe pointer chucking it replaced.
const-qualified pointers are largely useless. If a struct contains such a thing, it can't be dynamically allocated. As a function parameter, it doesn't protect the caller from anything because passing is by value.
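Roughly the limitation being referred to (a sketch with invented names): you can malloc the struct, but you then have no way to assign the const member.

    #include <stdlib.h>

    struct handle {
        char *const name;          /* const-qualified pointer member */
    };

    void demo(void)
    {
        struct handle *h = malloc(sizeof *h);
        /* h->name = "x"; */       /* error: assignment of read-only member 'name' */
        free(h);
    }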
Let's see how many times this occurs in my TXR project (~75K lines):
One place: a comparison function for a qsort call, where a vector of const wchar_t * strings is being sorted. I didn't write this (though I did convert it to wchar). qsort passes const void * pointers to the elements, which are const wchar_t *, thus things end up like this.
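The shape of that comparator, roughly (a sketch, not the actual TXR code):

    #include <stdlib.h>
    #include <wchar.h>

    /* qsort hands the comparator const void * pointers to the elements;
       the elements are const wchar_t *, so the parameters convert like this. */
    static int cmp_wstr(const void *a, const void *b)
    {
        const wchar_t *const *pa = a;
        const wchar_t *const *pb = b;
        return wcscmp(*pa, *pb);
    }

    /* qsort(strings, nstrings, sizeof strings[0], cmp_wstr); */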
How about Linux kernel 4.9? There are numerous occurrences of this involving arrays declared at file scope, like this:
# lines that start with at least one tab and contain *const,
# possibly with optional spaces between * and const:
kernel-4.9$ git grep '^[\t].*\*[ ]*const\>'
kernel-4.9$
Wow, not a single result!
Let's relax that and allow a space or tab, [\t ]. Then there are lots of false positives due to lines in block comments starting with a "space, asterisk" sequence. If we require at least one lower-case alpha character before *const, we filter most of these out:
drivers/staging/vc04_services/interface/vchi/vchi.h: const char * const vll_filename; /* VLL to load to start this service. This is an empty string if VLL is "static" */
Your grep of Linux comes up empty because the regex is bad. `\t` does not seem to be available in grep's basic regex syntax. It is available with Perl-compatible regular expressions, but then you can't use `\>`.
Many of the matches are the same as your "array of string constants" example, just defined at local scope. And many are just constant local variables which some may not consider worth making const. The ones from tracepoint.c above, though, are probably walking an array of const pointers by pointer instead of by index, which is a common case where pointers to const pointers are used, and one where they are legitimately useful.
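That pointer-walking pattern looks roughly like this (illustrative, not actual kernel code):

    #include <stdio.h>

    static const char *const names[] = { "alpha", "beta", "gamma" };

    /* Iterating over an array of const pointers by pointer rather than by
       index is where a "const char *const *" cursor shows up. */
    static void print_names(void)
    {
        const char *const *p;
        for (p = names; p != names + sizeof names / sizeof names[0]; p++)
            puts(*p);
    }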
I'm only an infrequent C programmer, but that goal interests me simply because it seems unattainable given other aspects of C's syntax. If qualifiers went to the right of things they qualify, I'd expect variable declarations to look more like:
foo: int const;
And similarly, function declarations might also take a more ML-like syntax. C seems more like it wants to be an adjectives-before-nouns type of language.
So, IOW, it's not purely a point of style, it's more about what's a sane way to write things given the requirements of the language's syntax when you're dealing with pointer declarations?
Which would explain why it seems so odd to me; I spent a fair amount of time in C-style languages, but typically only ones that lack pointers.
Though, if the goal is communication, wouldn't it be clearer still to use typedefs to clean up more complex declarations a bit? Or does that end up making things worse?
People do that if such a type is used a lot, but probably not if it's just a one-off, much as they write a function when the same snippet would otherwise be repeated a lot, but don't bother if it isn't.
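For instance (a made-up example), a typedef earns its keep once a declarator like a function pointer shows up repeatedly:

    /* Spelled out each time:  void sort_with(int (*cmp)(const void *, const void *)); */
    typedef int (*cmp_fn)(const void *, const void *);

    void sort_with(cmp_fn cmp);
    cmp_fn pick_comparator(int descending);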
I really like the first edition. Old C diehards hate it because it breaks a lot of tradition. Personally I think it's really refreshing, and it made me appreciate C a lot more.
The first edition has been my go-to for introducing some of the newer (1999 and later) features of C (and some of the subtle footguns) to people. I'd definitely recommend it as a first introduction to C.
I'm not a fan so far. Lots of deliberately bad C code. Using uninitialized variables, etc.
Explanation of jargon seems to take priority over explanation of the language - the term "string literal" is explained before the concept of a function.
Yes. As I said - it was deliberate. That does not excuse it.
By the very fact that this is a book, it presumes that the intended audience consists of visual learners. When working with visual learners, if something is erroneous, it is imperative that it be explicitly and visually called out as such. In linguistics pedagogy, for instance, this is done by marking any ungrammatical construct or example not found in natural language with an asterisk every time.
This book does not do this. It freely uses deliberately bad code in example programs. This can only result in confusion on the part of learners. It is therefore inconsistent with pedagogical best practices. I would not recommend it.
I would second 21st Century C, it's an excellent book. It has a very pragmatic approach, skipping features like unions that are not used in modern C, but going into great detail on the important things like memory management.
Huh? Since when are unions not used in modern C? Tagged unions are a really common pattern in language interpreters. What are they supposed to be replaced with?
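For reference, the tagged-union pattern in question, as a minimal sketch:

    /* A type tag plus a union of payloads; interpreter code switches on the tag. */
    enum value_kind { VK_INT, VK_REAL, VK_STR };

    struct value {
        enum value_kind kind;
        union {
            long i;
            double d;
            const char *s;
        } u;
    };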
Thirded, especially on the pragmatism; this is by far the most pragmatic programming book I've ever read. I fell in love with the C-as-a-scripting-language parts (https://github.com/RhysU/c99sh), and now I don't have much of a use for a scripting language to sit between bash and a "proper language" (apart from some occasional awk).
Read the reviews on Amazon. It's not a good book, the author makes embarrassing mistakes to the point where you start questioning his programming knowledge.
An Introduction to C & GUI Programming by Simon Long is also good for beginners, especially for its easy-to-understand content and beautiful typography.
I've only skimmed the book, but I wouldn't recommend it. The author has a tendency to be snarky and come up with "fixes" for "mistakes" that "old" C programmers make, except he does so in a way that is at odds with the fundamental design of C constructs and does not really show a correct understanding of undefined behavior.
I vaguely recall the whole fallout where it was revealed he clearly did not understand undefined behavior, and I believe he did walk back and accept his misunderstanding.
That said, yes, it is damned annoying to have someone who doesn't understand the intricate details of the language poop all over that community and act as if "well, I've contributed lots of code in C, why listen to a bunch of language lawyers, that isn't important anyway", as if the two were mutually exclusive. There are some of us who have been using C for years for actual work and are also intimately familiar with the prickly details of the standard. Those are the people you should be looking to for C books. Zed Shaw consistently shows he would rather hear the sound of his own voice and diminish things that are uninteresting to him than put in the work to become an expert on all facets of the topic he claims expertise in.
I just finished Learn C the Hard Way and would be interested in a comparison between it and those other books mentioned.
Just from a quick glance, Modern C appears to be a much more in depth book. It appears to cover things like memory alignment, advanced types like unions, malloc, threads, etc.
Learn C the Hard Way is more of a quick introduction. You learn the basics, like pointers, basic data structures, basic types and then learn to do something interesting with those basics (build a web server). It also teaches some valuable real life practices that Modern C doesn't touch on, like how to structure your project, makefiles, how to use Valgrind and gdb.
Wait what, Learn C the Hard Way doesn't teach malloc? Programming without dynamic memory allocation is pretty restrictive. What "data structures" does it even teach if it doesn't cover malloc?
Like I said, it teaches the basics. There's a single page on heap vs stack allocation and it says malloc allocates memory to the heap. That's it. That's all you really need to know to do stuff and it doesn't go into what malloc actually does under the hood.
Modern C has an entire chapter called "malloc and friends" where it goes into much more depth about malloc, calloc, realloc, etc.
The book is a good introduction to programming in C and I'm glad I read it. You obviously have a history with the author and I'll thank you to keep your past impressions out of this particular discussion.
> There's a single page on heap vs stack allocation and it says malloc allocates memory to the heap. That's it. That's all you really need to know to do stuff
Every other C book seems to disagree, but OK.
Does the book cover the -> operator? Does it mention free and realloc? You need those to do stuff, at least.
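For the record, the kind of "doing stuff" those enable, as a generic sketch (not taken from either book):

    #include <stdlib.h>

    struct buf { size_t len, cap; int *data; };

    /* -> reaches the members, realloc grows the heap block (acting like
       malloc when data is still NULL), and free eventually releases it. */
    static int push(struct buf *b, int v)
    {
        if (b->len == b->cap) {
            size_t ncap = b->cap ? 2 * b->cap : 8;
            int *p = realloc(b->data, ncap * sizeof *p);
            if (!p)
                return -1;          /* original block still valid and owned by caller */
            b->data = p;
            b->cap = ncap;
        }
        b->data[b->len++] = v;
        return 0;
    }

    /* later: free(b->data); */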
The C language can be best learned from "The C Programming Language, 2nd Ed" [1], also known as "K&R book". Pair it with excellent notes by Steve Summit [2], and you don't need any other book to master C.
Love Deep C Secrets. I have it and flick through it whenever I want to remind myself about the dark corners I never mastered during the last 20 years of programming in C.