
Principles for C Programming - pcr910303
https://drewdevault.com/2017/03/15/How-I-learned-to-stop-worrying-and-love-C.html
======
deaddodo
This seems more like personal preferences than any sort of standard. For
instance:

> Do not use a typedef to hide a pointer or avoid writing “struct”.

Untagged structures:

    
    
        typedef struct { ... } my_struct_t;
    

are pretty standard practice in C. In my case, I don't even use tagged structs
unless the struct is self-referential:

    
    
        typedef struct __my_struct_t {
            struct __my_struct_t* link;
        } my_struct_t;
    

And some examples in the Linux Source:

[https://github.com/torvalds/linux/blob/9331b6740f86163908de6...](https://github.com/torvalds/linux/blob/9331b6740f86163908de69f4008e434fe0c27691/arch/arm/include/asm/pgtable-3level-types.h#L24)

[https://github.com/torvalds/linux/blob/6cdc577a18a616c331f57...](https://github.com/torvalds/linux/blob/6cdc577a18a616c331f57e268c97466171cfc45f/drivers/scsi/gdth_ioctl.h#L33)

And FreeBSD:

[https://github.com/freebsd/freebsd/blob/0ee2302122d0710ced1e...](https://github.com/freebsd/freebsd/blob/0ee2302122d0710ced1e55a4c6dfb4e38e19ce37/stand/efi/include/efitcp.h#L25)

~~~
tom_mellior
The article doesn't say _why_ they have this rule, but I would argue that if
you use the tag _sometimes_ , then you might as well use it consistently every
time you declare a struct. There is no harm in it.

I think some of the contortions around this might come from people who are
afraid of polluting global namespaces. But struct tags and other names live in
different namespaces. Your second example could just as well (and without
using a __ prefix, which is reserved for the implementation and therefore,
strictly speaking, is undefined behavior in user code) be:

    
    
        typedef struct my_struct_t {
            struct my_struct_t* link;
        } my_struct_t;
    

or even

    
    
        struct my_struct_t {
            struct my_struct_t* link;
        };
        typedef struct my_struct_t my_struct_t;
    

Arguably, if you "own" the my_struct_t type name, you also have a right to
declare "struct my_struct_t".

~~~
deaddodo
> but I would argue that if you use the tag sometimes, then you might as well
> use it consistently every time you declare a struct. There is no harm in it.

Sure, but that's a preference. Which was my point. I'm not against people
tagging structs, but I'm also not against typedef'd untagged structs. And
clearly the C community at large isn't either.

> But struct tags and other names live in different namespaces.

I'm aware of where tags live, I just see no necessity in defining something
that will never be used.

> Your second example could just as well (and without using a __ prefix, which
> is reserved for the implementation and therefore, strictly speaking, is
> undefined behavior in user code) be:

This is pedantry and off topic. Name it whatever you like. It can be zyzzyx,
for all I care; it's an internal tag at this point. You're correct about the
__, which was intended to be _; that's simply a preference on my part. I'm not
evangelizing its use.

> Arguably, if you "own" the my_struct_t type name, you also have a right to
> declare "struct my_struct_t".

That's not the point. I will never call "struct my_struct_t" outside of a
self-referential context. So it's pointless. I could define a tag. I could
not. My code will compile the same.

------
kahlonel
> Do not use fixed size buffers with variable sized data - always calculate
> how much space you’ll need and allocate it.

Actually, sometimes it is best to have a fixed-size buffer, sized for the
maximum number of data bytes the design allows. This is especially true for
embedded systems, where dynamic allocation is usually frowned upon.

~~~
ddevault
Author here. In the 2 years (almost 3 now) since this was written, I've come
around on this argument. I often use fixed size buffers when I know how much
space I'll need in advance, of course being careful to measure twice, cut
once.

~~~
rlonn
I use fixed-size buffers as much as possible in C. It's only when I feel a
need to conserve memory that I'll use dynamic allocation. For me, I think this
pattern helps me avoid bugs by maintaining a healthy (i.e. very high) level of
paranoia when dealing with buffers. My first thought when doing something
involving a buffer is always "how much space have I allocated here?". With
string buffers I'll never use the last byte of the buffer - that one is
permanently set to zero and never read or written. I'll also commonly zero the
whole buffer before writing to it. That way, any operation that skips proper
null-termination won't matter.

------
torstenvl
I think the advice here is a bit extreme. Let me focus on macros for the
moment.

As an initial and trivial matter, _using_ macros and _defining_ new macros are
different things. It's absurd to say you can't _use_ macros, and comparison to
((void *)0) rather than to NULL is far less readable. Using macros defined by
the language is incontrovertibly normal, expected, and the Right Thing.

As far as defining new macros, though, I still think there are very defensible
reasons to use them, especially for defining a macro that expands to debug
code when a debug constant is defined, and expands to nothing when that same
debug constant is undefined.

    
    
        #ifdef MY_DEBUG
        #define ENTER 1
        #define EXIT 2
        #define TRACE(x) \
          fprintf(stderr, "%s function %s\n", ((x) == ENTER) ? "Entering" : "Exiting", __func__)
        #else
        #define TRACE(x) ((void)0)
        #endif
    

Forgive any errors above, but that's pretty straightforward to follow and can
make debugging a lot easier by giving a naive sort of stack trace on stderr.
Add TRACE(ENTER) and TRACE(EXIT) at the beginning and end of functions and you
can figure out where a segfault or whatever is happening, and it all gets
defined away in non-debug code.

------
roryrjb
I have been enjoying learning and using C over the past few years, and I think
I see it the way Drew does. I have mostly used it alongside a dynamic language
like Python or Lua (either via the Lua C API or LuaJIT's FFI module), or in
Node.js, wrapping a C++ interface around it (i.e. not using the more recent
N-API interface). Sometimes I have used it by itself, but it has always been
quite small in scope, so I don't have experience with what it's like to use in
a large project.

I started to use C, like I said, only recently, and with my eyes wide open.
There are faults and foot-guns, but the simpler you keep things the better.
For me it's less about performance, even though naive C will likely be quicker
than most dynamic languages out there (where most of my experience is), and
more about manual memory management and much smaller memory usage.

Another reason I like C is that it is available everywhere. Of course, as the
article mentions ("Use only standard features."), you need to take care with
what you include for it to be available on other platforms easily, or at all.
I'm saying this as someone who only targets x86 desktops and servers rather
than other processor architectures, so I'm mainly thinking about different
OSs. From the viewpoint of Linux and the BSDs, where I have some experience to
draw on: since they are written primarily in C, the whole stack is C, and if
you know C you can potentially figure out how everything works and is put
together, all the way down to the kernel. I also can't overstate just how good
the man pages on Linux and the BSDs are for C functions; using them is
probably one of the best learning experiences for C that I've found.

------
drallison
And when you have mastered C, you can move on to assembly language. Once upon
a time I used to blather on about "high level languages" and how superior they
are to machine languages. Not so much now, as I have come to appreciate the
elegance of using direct computation (arithmetic and logical) to perform
complex computation.

~~~
justanothersys
What do you mean by this? Would love a cool example. I have experience in C
and lots of higher level languages but writing games and such in Assembly was
before my time.

~~~
kgwxd
I think Atari 2600 programming is super fun. It's 6502 assembly, which is
pretty simple and fun on its own, but mixed with unique, very limited
hardware it becomes even more interesting. You can do it online [1], but using
DASM [2] and Stella with its debugger [3] is even better. And if you ever want
to run it on real hardware, it's pretty cheap to make that happen.

[1]
[https://8bitworkshop.com/v3.4.2/?platform=vcs&file=examples%...](https://8bitworkshop.com/v3.4.2/?platform=vcs&file=examples%2Fhello.a)

[2] [https://dasm-assembler.github.io/](https://dasm-assembler.github.io/)

[3] [https://stella-emu.github.io/](https://stella-emu.github.io/)

