
Beware of strncpy and strncat - eklitzke
https://eklitzke.org/beware-of-strncpy-and-strncat
======
nneonneo
The advice given in this article is bad. Reducing the length of the copy by
one will still fail to null-terminate the string if the source exceeds the
destination length. You need to add

    
    
        dst[sizeof(dst)-1] = 0;
    

or memset the array to zero beforehand. You are not guaranteed to have a zero
at the end of the input buffer otherwise (neither local variable arrays nor
malloc’d arrays are guaranteed to be zeroed).

strncpy sucks for a second reason: it writes n bytes no matter what the source
is. That means if you write

    
    
        char buffer[16384];
        strncpy(buffer, "hello", sizeof(buffer)-1);
        buffer[sizeof(buffer)-1] = 0;
    

this will fill 16K of memory with zeroes despite only needing to copy a 6-byte
string.

strlcpy/strlcat do the right thing (copy up to n-1 bytes and null-terminate);
shame they aren’t standardized. In their absence, I suggest snprintf instead:

    
    
        snprintf(buffer, sizeof(buffer), "%s", src);
    

Because snprintf returns the number of bytes that would be written, it can be
used to detect overlong input strings, reallocate the buffer as necessary, and
also to implement efficient concatenation. It’s also surprisingly fast in most
implementations.

~~~
belorn
It's a shame that many of the solutions to these problems exist as non-
standardized calls, split between BSD and GNU. Personally I use asprintf quite
a lot, and I recall strndup being quite useful for creating an identical copy
of a buffer. strndup should perform identically to allocating a buffer and
then calling strlcpy, since it too copies at most n bytes and null-terminates.

~~~
loeg
It's a shame they aren't in ISO C, but strdup and strndup are at least
standardized by POSIX:
[http://pubs.opengroup.org/onlinepubs/9699919799/functions/st...](http://pubs.opengroup.org/onlinepubs/9699919799/functions/strdup.html)

------
tlb
Beware of the "correct" solutions proposed here.

    
    
      // OK: correctly copy src
      char dest[8];
      strncpy(dest, src, sizeof(dest) - 1);
    

With a long src, it fails to null-terminate dest. dest[7] will be whatever the
contents of uninitialized memory were, so reading dest as a string is likely
to run past the end.

strlcpy has a better API, though sadly it's not standard on Linux.

~~~
majke
should be either

    
    
      char dest[8] = {0};
      strncpy(dest, src, sizeof(dest) - 1);
    

or

    
    
      char dest[8];
      strncpy(dest, src, sizeof(dest) - 1);
      dest[sizeof(dest)-1] = '\0';
    
    

On the same note I always wondered what "snprintf" does. That is - will it
ALWAYS zero terminate (given non-NULL buffer, of size > 0)?

~~~
pascal_cuoq
Yes. “snprintf(dest, sizeof dest, "%s", src);” where dest is a char array
_almost_ is the programmer-friendly version of strncpy() that does exactly
what one would expect, neither more nor less. It always '\0'-terminates its
output as long as it is passed a size > 0.

The only issue with that idiom is that it is only defined when src is a
'\0'-terminated string. Although snprintf will only write the specified
number of characters, it will read from src until the end of the string, and
invoke undefined behavior if src does not point to a well-formed string:

[https://taas.trust-in-soft.com/tsnippet/t/e0538551](https://taas.trust-in-soft.com/tsnippet/t/e0538551)

A mnemonic is that snprintf needs to return the length of the string that
would have been output if there had been enough room (not counting the
terminating '\0'), so it needs to compute the length of the string pointed by
src.

A close variant, below, avoids this issue but instead has the problem that the
printf star argument is typed as an int, which is narrower than size_t on a
typical 64-bit compilation platform.

    
    
        snprintf(dest, sizeof dest, "%.*s", (int)(sizeof dest)-1, src);

~~~
loeg
> but instead has the problem that the printf star argument is typed as an
> int, which is narrower than size_t on a typical 64-bit compilation platform.

If you are using >2 GB buffers and anticipate >2 GB strings, it probably makes
sense to track lengths explicitly and use memcpy() etc instead of string
routines anyway.

~~~
pascal_cuoq
I was more thinking of the case where >2GiB strings are not useful for normal
use and the programmer does not anticipate them, but a malicious user can
cause such strings to happen, for instance by sending them over the network in
minutes or hours, causing unforeseen behavior.

~~~
loeg
Sure, that's a good point.

However, while it may also be possible, in some code, for a malicious user to
control the buffer size, the int precision argument, as used in this
construct, derives from the buffer size, and not the input string.

If the user can control the buffer size, then yes, we get the very
undesirable buffer overflow via overflow from positive to negative[0]:

    
    
        A negative precision is taken as if the precision were
        omitted.
    

[0]: snprintf(3) man page from linux

------
TimJYoung
I just don't understand why C hasn't been blessed with a proper string type
yet. Object Pascal has had one (actually, several) for decades now and it
doesn't hinder the language's ability to handle low-level memory manipulation
(you can still manually copy string memory, convert them back/forth into raw
Char pointers, etc.), and generally serves to make string handling much, much
safer for most applications. It _does_ however, result in slightly more memory
consumption for the length/reference count tags and does incur some overhead
in the form of compiler-generated reference count checks at the end of
functions. But, IMO, the advantages outweigh the disadvantages for general-
purpose C programming and you're always free to fall back to the more manual
methods of handling character arrays.

So, am I missing something? Is there some concrete reason why this can't be
implemented?

~~~
staticassertion
How often does C add things like this in general? I want to say... basically
never?

You're free to roll your own String type that has the length. No clue why
people don't just do that though (I assume many do, but obviously a ton do
not).

~~~
TimJYoung
:-) You're certainly right here, but given that everyone thought that C was
"going to be sent to the farm" a while ago and that shows zero signs of being
true any time soon, it might be in the best interest of everyone to add such
improvements, as long as they don't affect any existing C code. As I stated in
my other reply, the only issue that I can see with rolling one's own String
type is the handling of the reference counting for implicit deallocation.

~~~
scruple
> given that everyone thought that C was "going to be sent to the farm" a
> while ago

For certain definitions of "everyone," maybe... I am no longer primarily a C
programmer, and haven't been since around the end of 2013, and I'm still
firmly in the "C isn't going anywhere" camp. I just happen to think that
stance matches reality.

------
loeg
Use strlcpy/strlcat instead.[0] strlcpy takes the full size of the destination
buffer, limits the copy to N-1, and nul-terminates the result for you. It's
like the "correct" example in TFA, but with less annoying boilerplate.

Some more verbose design/rationale for the really curious.[1]

Another thing to keep in mind is the sometimes surprising behavior of
strncpy(large_buffer, short_string, sizeof(large_buffer))[2]:

    
    
        If the length of src is less than n, strncpy() writes additional null
        bytes to dest to ensure that a total of n bytes  are  written.
    

Strlcpy doesn't do that. Just use strlcpy. On Linux, it can be found in the
libbsd package.

[0]:
[https://www.freebsd.org/cgi/man.cgi?query=strlcpy&sektion=3](https://www.freebsd.org/cgi/man.cgi?query=strlcpy&sektion=3)

[1]:
[https://www.sudo.ws/todd/papers/strlcpy.html](https://www.sudo.ws/todd/papers/strlcpy.html)

[2]:
[https://linux.die.net/man/3/strncpy](https://linux.die.net/man/3/strncpy)

------
nine_k
Probably back in 1970s `(characters, \000)` looked "elegant", and `(counter,
characters)`, wasteful.

This decision, if not the single most prolific, is likely one of the top 3
sources of security exploits in C code. All for the want of saving a few
bytes.

~~~
beagle3
Up until the early '90s, almost all the different Pascals had a (byte counter,
character) string, which meant that if your string was longer than 255 bytes
you had to do exactly the same kind of acrobatics that C did, and then some
(for various reasons related to Pascal's type system).

Then they added 16-bit strings. And eventually 32-bit strings. Which are
useless for keeping in memory that 5GB file you process these days. I'm not
up-to-date - does Pascal have a 64-bit string type?

C, on the other hand, still uses the same error-prone and exploit-inducing
strcpy, strcat and friends - and they work for those 12GB gene texts and .csv
files.

C never looked elegant, but it was extremely effective in the limited memory /
limited CPU days of yore, and that's why it won against e.g. Pascal.

~~~
cornholio
> I'm not up-to-date - does pascal have a 64-bit string type?

Seems like an easy fix is a single type of string with dynamic size depending
on the first byte, like UTF-8:

0bbbbbbb : 1 byte header for a string up to 127 bytes long

10bbbbbb + hh : 2 byte header for a string up to 16383 bytes long

110bbbbb + hhhhhh: 4 byte header for a string up to 512 MB long

1110bbbb + hhhhhhhhhhhhhh: 8 byte header for a string up 1.15 EB long

... etc. (so as not to repeat the mistakes of Pascal; who knows how long
1.15 EB will be enough for everybody)

You would optimize the functions for the fast path with a single bit test for
short <128 byte strings, and end up with a sane, safe string until the end of
time. With minimal memory impact and much faster than testing for nul
terminators, strlen() all over the place etc.

~~~
samatman
This is the intuitively right answer, and has always been my favorite.

It's so puzzling that this is a data structure I can't recall ever seeing in
the wild.

~~~
JdeBP
You both might want to review how lengths are encoded by the Distinguished
Encoding Rules and the Basic Encoding Rules of ASN.1.

~~~
samatman
Sure, and source maps as well.

I meant as the builtin string type of a systems language, thought that would
be clear from the context.

------
jhallenworld
There are many worse string formats than C's NUL-terminated one:

Last byte of string has bit 7 set.

First byte of string has length, so strings are limited to 255 bytes in
length.

First two bytes of string has length, so strings are limited to 65535 bytes in
length.

Strings are stored in fixed length buffers with space padding to the end.

Length prefixed strings are stored in a fixed length buffer, so you are
limited to the buffer length. I think this was the case for PL/I "varying"
strings.

Back in the day, C was better than PASCAL because it had strdup, meaning it
had a heap and you could put strings in it.

C++ string is mediocre. It solves some problems, but what if:

You have very long strings and you are worried about heap fragmentation, so
you would be better off with something like a linked list of segments, each in
its own malloc block. But can you extend std::string? Nope, oh well.

You want strings to be semipredicates. I mean that strings should be able to
have a NULL value, as I can do with C. (return NULL for 'char *'). Can
std::string do this? Nope. Can it be extended? Nope.

~~~
thestoicattack
> You want strings to be semipredicates. I mean that strings should be able to
> have a NULL value, as I can do with C.

A null value different from "emptiness"? Would you be okay with a std::string*
or a std::optional<std::string>?

~~~
jhallenworld
"since C++17", I see.. I'm curious if it uses more space than "char *"

Also it's not great because I should be able to pass such a string through
functions that expect std::string. A NULL string should act just like an empty
string except that you can test it for NULL.

~~~
thestoicattack
Yes. std::optional<T> allocates the T in-place. Just checked compiler
explorer, and on g++ 8.1, sizeof(std::string) == 32,
sizeof(std::optional<std::string>) == 40.

(of course sizeof(char* ) == 8)

------
bio_end_io_t
The author should've been wary of strncpy, because his solution is wrong.

"If there is no null byte among the first n bytes of src, the string placed in
dest will not be null-terminated."

So copying sizeof(dest)-1 will not append a NULL byte as the author implies.
You'll have to do that manually.

~~~
mturmon
Article's solution is wrong, but at least the headline is right!

------
GuB-42
I usually avoid strncpy() and strncat() altogether

    
    
      char buf[256];
      size_t sz = strlen(str);
      if (sz < sizeof (buf)) {
      	memcpy(buf, str, sz + 1);
      } else {
      	/* error processing */;
      }
    

Truncation is not as bad as a buffer overflow. However, it is still not
correct. You have to properly handle the case. And if truncating is the
correct answer, make that explicit.

In practice, I almost never use fixed size buffers for strings unless I know
the size at compile time.

Edit (for strcat):

    
    
      char buf[256] = "blah";
      size_t sz1 = strlen(buf);
      size_t sz2 = strlen(str);
      if (sz1 + sz2 < sizeof (buf)) {
      	memcpy(buf + sz1, str, sz2 + 1);
      } else {
      	/* error processing */;
      }

------
Someone
_”C programmers should use the newer strncpy()”_

Newer? I thought _strncpy_ dates back to the time Unix filenames were 14
characters, max, adding padding zeroes when needed in some fixed-length kernel
structures.

That’s also the reason strncpy always writes _len_ bytes; not keeping garbage
content in those 14-byte buffers allows the system to use _memcmp_ to compare
file names.

------
hawski
There is no idiomatic way to use strncpy, unless you're running the 7th
edition of Unix [0].

[0]
[https://stackoverflow.com/a/1454071/6561829#6561829](https://stackoverflow.com/a/1454071/6561829#6561829)

------
ebikelaw
C++ basic_string::c_str always returns a null-terminated string. C++ is the
solution to numerous C pitfalls.

~~~
AnimalMuppet
True. But not everyone can use it. If you can, it's (almost always) the right
answer.

------
MrBingley
Looking at the comments in this post, I'm resigning myself to the conclusion
that there simply is no correct solution for copying or concatenating strings
in C. Null-terminated strings are a fundamentally broken concept. I think the
long-term solution is
simply to move to a different language (Rust, C++, D, Go, whatever) where we
have the benefit of hindsight and have (pointer, length) string types, which
solve all the problems null-terminated strings introduce.

~~~
blub
It's quite sad to behold how hard it is to work with strings in C.

All those functions seem broken by design, forcing the programmer to clean up
their mess in edge cases.

------
Analemma_
I'm just shouting into the void here, but why does anyone find it acceptable
that C is almost fifty years old– a half-century– and we still have new
articles published about the correct way to _copy memory_. And then,
immediately following them, comments in response to those articles saying the
article is wrong and that you should actually do it this other way. Nobody has
figured this out in 50 years?

~~~
kazinator
A few new people have to learn this stuff. That's not just to maintain legacy
code. Computers do not have nicely behaved, safe, garbage collected strings.
Someone has to understand the code for how that stuff is bootstrapped, and
that code is going to have gunky memory copies in it where a one word mistake
will bring down the show.

------
vortico
Don't use `strncpy()` and `strncat()` at all, use

    
    
        snprintf(buffer, buffer_len, "%s Stuff %s", str1, str2);
    

It's safe (C99, C++11) and easily extendible. Format strings are fun! Not the
fastest, but if the bottleneck of your program is concatenating strings, just
do it manually.

~~~
Hello71
This is a safe, if slow, alternative for strncpy, but it does not safely
replace strncat. The C standard does not define the behavior if the code
"sprintf(buf, "%s some further text", buf);" is used.

~~~
dmitrygr
    
    
      #define strncat(a, b, maxlen) \
        snprintf(a + strlen(a), maxlen - strlen(a), "%s", b)

~~~
kazinator
Too many evaluations of a. If you have snprintf, you probably have inline
functions.

One day someone will do something like strncat(a += previous_delta, ...).

~~~
dmitrygr
OPTION 1 (c):

    
    
      char* mystrncat(char *a, const char *b, size_t maxlen){
        snprintf(a + strlen(a), maxlen - strlen(a), "%s", b);
        return a;
      }
    

OPTION 2 (gccism):

    
    
       #define strncat(_a, b, maxlen) ({char* a = _a; snprintf(a + strlen(a), maxlen - strlen(a), "%s", b); _a;})

------
GlitchMr
Worth noting that strncpy doesn't stand for secure string copy or anything
like that. Using strncpy for copying strings would be a mistake, even if
technically you can do that.

Rather, it's a fixed-size string copy function. That structure is very rare in
regular environments, but it can come up in embedded environments. For
instance, if you want to store a string in a binary file in a field that is at
most 10 bytes, you may want to avoid storing the termination byte when the
string is exactly 10 bytes long. Such a structure was used in UNIX to store
file names, as they used to be limited to 14 bytes, and storing the terminator
would have been a waste of space.

------
mlthoughts2018
I like learning about these caveats, but I have been asked tricky stuff like
this in interviews before with gets() and the like.

As a person who interviews other people, I find that it's waaay more valuable
that someone _is generally aware that they should watch out for this class of
pitfalls_ than that they know any specifics about a given function.

I've met people who basically had memorized the description of this phenomenon
for gets(), but then their preferred solution was just to replace it with
fgets() but then they don't know about checking for newlines or have any
thoughts on what to do when individual lines are too long.

I'd much rather hire someone who says to herself, "Oh, I need to read some
characters from an input source using C. RED ALERT! Let me really research the
specifics here."

Instead of someone who thinks, "Oh, I need to read some characters from an
input source using C. Good thing I memorized that trivia about gets() and can
totally solve this in the best way immediately with the highest upvoted Stack
Overflow solution of fgets() that I didn't bother to deeply grok."

I find that when interviews are geared towards puzzle solving or esoteric
trivia, the people who do well are mostly of the second type (the ones I
wouldn't want to hire).

Whereas someone of the first type might flounder around and struggle in a
20-minute programming task to process strings in C, directly _because_ that
person cares more about having a bigger picture point of view of what's
actually going on rather than esoteric memorization of specific function
signatures and usage mechanics.

In other words, if I gave some kind of C string processing question in an
interview for 20-30 minutes, one _very excellent_ answer should be, "sorry
man, not gonna try to do this in 20 minutes because in reality I know there
are string handling landmines I would need to research and slowly process, and
I would never believe this is worth committing to memory for a short
interview."

------
eboyjr
Interesting how the "solutions" to the buffer overflow problems don't provide
for all of the modern assumptions of programming with strings. I would love to
know the history of the development of strncpy.

~~~
kazinator
I suspect that strncpy was intended for filling in fields in records to be
written to files, where you want all the bytes to be clean, with no random
junk (potentially sensitive) after the null byte. For instance _struct utmp_
in Unix or something of its ilk.

~~~
slrz
Exactly. Or the fixed-length directory entries in old Unix file systems. If
the file name is exactly 14 chars long, you don't care about NUL termination
but if it is any less you want to zero out the remaining bytes. Strncpy is
made for that.

It's not restricted to writing to disk. When these structures cross the
userspace/kernel boundary on a system call, you really don't want to leave
uninitialized bytes following the NUL terminator and return them to some user
process.

~~~
wahern
The modern use case for strncpy is for filling in the .sun_path member of
struct sockaddr_un. Most people assume that the path needs to be NUL
terminated, but the BSD Sockets API actually relies on the declared sockaddr
length parameter. It's not superfluous and the kernel will only read .sun_path
up to the end of the declared size of the sockaddr structure; it doesn't
expect NUL termination though it will obey internal NUL termination.

Moreover, the statically declared size of .sun_path in the libc headers
doesn't limit the maximum length of the path. On most implementations you can
create domain socket paths larger than this. Indeed, when you use an API like
getsockname() you normally should check for truncation by comparing the
returned sockaddr length with the size of the buffer you passed. Just like
with snprintf() and strlcpy(), if the returned logical length is greater than
your buffer size the path was truncated. IIRC, not all implementations (or
any?) include a NUL byte as part of the length so you can very well end up
with a .sun_path that isn't NUL terminated if your buffer only barely fit the
path. Likewise if you didn't 0-initialize the path buffer and the actual path
was shorter, though IIRC kernels handle this second case differently--some
might NUL terminate for good measure if there's space.

~~~
kazinator
Furthermore, on Linux, there is an extension: the first byte of sun_path can
be null. In that case, the rest of the path is still valid up to the given
length and specifies an "abstract address": it's a namespace outside of the
filesystem. Sockets bound to abstract addresses automatically disappear on the
last close.

This drives home the idea that "damn it, this is not a null terminated string;
here is the null-byte-based extension to prove it!"

------
rurban
Interestingly even the Annex K strncpy_s and strncat_s are unsafe by design,
that's why I only added them to the safeclib via --enable-unsafe.

But recently I got fed up with all this unsafety nonsense around the
truncating variants and changed the implementation to always terminate the
asciiz string properly.
[https://github.com/rurban/safeclib/blob/master/src/str/strnc...](https://github.com/rurban/safeclib/blob/master/src/str/strncat_s.c)

------
docker_up
Not even this works. They forgot to force the last byte as a NULL, which is a
classic bug in C. Either that or memset the char array before using it. But
what the blog poster did is a pure bug.

~~~
kazinator
NULL is a pointer constant; a byte is never described as NULL (in the context
of C).

------
lkjalksdjfasdf
I don't use pure C for string handling anymore. I use C++ and extern C ABI.
Rust can also work. C++ can infer the destination size via templates.

~~~
pjmlp
Some of us were already doing it in 1992, with our own string classes as
passage rite, but adoption takes time.

------
kazinator
Never memcpy structs, except if you need to ensure that the padding bytes are
copied. C has had a = b assignment for structs since long before ANSI C.

------
chris_va
For those similarly curious:

    
    
      char *
      STRNCPY (char *s1, const char *s2, size_t n)
      {
        size_t size = __strnlen (s2, n);
        if (size != n)
          memset (s1 + size, '\0', n - size);
        return memcpy (s1, s2, size);
      }
      libc_hidden_builtin_def (strncpy)

... which is actually different than how I thought it would be implemented (it
ends up with an extra loop to figure out the size of the string).

(side note, how does one format code in HN?)

~~~
kazinator
> _how does one format code in HN?_

Two space indent.

------
rini17
1. Avoid sizeof, lest someone comes along and changes the array into a
pointer. Use a constant parameter instead:

    
    
      #define BUFLEN 8
      char buffer[BUFLEN];
      strncpy(...etc... BUFLEN-1);

2. Check the length of the string to be copied BEFORE copying and, if too
long, fail (using assert, exit or so) instead of silently truncating or worse.

Why is it so hard, such that new "safe" string functions must be invented?

~~~
kazinator
There is no absolutely foolproof way to code in C such that no matter how
someone changes the program, they will be spared from making a mistake.

If we just program for today, sizeof buffer is much better than proliferating
a preprocessor constant that may or may not correctly reflect the object being
overwritten.

For the silly mistake of changing an array to a pointer without taking care of
sizeofs, GCC gives us some diagnostics:

    
    
           -Wsizeof-pointer-memaccess
               Warn for suspicious length parameters to certain string and memory
               built-in functions if the argument uses "sizeof".  This warning
               warns e.g.  about "memset (ptr, 0, sizeof (ptr));" if "ptr" is not
               an array, but a pointer, and suggests a possible fix, or about
               "memcpy (&foo, ptr, sizeof (&foo));".  This warning is enabled by
               -Wall.
    
           -Wsizeof-array-argument
               Warn when the "sizeof" operator is applied to a parameter that is
               declared as an array in a function definition.  This warning is
               enabled by default for C and C++ programs.

------
professorTuring
This link has nothing different to offer than "man 3 strncpy"...

Worse, some examples are invalid.

~~~
charlchi
People complaining about memory safety in functions whose man pages explicitly
state memory safety is not provided...

------
Hello71
In addition to tlb's point, the article's description of strncat is not
correct.

> As with strcat(), the resulting string in dest is always null-terminated.
> Therefore, the size of dest must be at least strlen(dest)+n+1.

~~~
jwilk
They wrote:

    
    
      // XXX: after this buf may not be null-terminated
      strncat(buf, another_buffer, BUF_SIZE - strlen(buf));
    

This code is indeed bad, but not because the result won't be null-terminated,
but because it's an off-by-one buffer overflow.

Despite this misunderstanding, the proposed fix is correct:

    
    
      // OK: ensure buf is null-terminated after concatenation
      strncat(buf, another_buffer, BUF_SIZE - strlen(buf) - 1);

------
jacquesm
Bad advice is worse than no advice at all.

------
beeforpork
Well, this is not very helpful advice, because strncpy(a, b, sizeof(a)-1) is
in no way safer than strncpy(a, b, sizeof(a)): the former is not 0-terminated
either. And from malloc(), as in the examples, comes no 0-terminated buffer,
but random garbage memory. What would be safer is to always 0-terminate the
buffer after copying, using the simplest copy possible:

    
    
        strncpy(a, b, sizeof(a));
        a[sizeof(a)-1] = 0;
    

But this is more boilerplate and hence more error-prone.

Even safer, use strlcpy() (if available) or snprintf() which both 0-terminate
(except under Windows, maybe). (But beware when preparing something for
copying from trusted to untrusted: strncpy() clears the rest of the buffer
while strlcpy() and snprintf() do not, so you might leak info via
uninitialised memory behind the end of the string if you copy out that buffer
across a trust boundary. Actually, the author's 'sizeof()-1' solution is less
secure in this context.) So, use:

    
    
        snprintf(a, sizeof(a), "%s", b);
    

And don't tell me anything about speed, please. Your main concern with C is
not micro optimisations but robustness and avoiding undefined behaviour (and
that snprintf() is not too slow).

And for multiple concats, use multiple snprintf() calls, like so:

    
    
        char *i = a, *e = a + sizeof(a);
        i += snprintf(i, e-i, "%s", b1);
        i += snprintf(i, e-i, "%s", b2);
        i += snprintf(i, e-i, "%s", b3);
    

This is the most concise way I know to write this that works without buffer
overflow (your main enemy, even more vile than missing 0-termination), without
thinking too much, without writing too much boilerplate, and that is
relatively robust against breaking in code restructuring (like, appending more
stuff in the middle). The idiom also resembles a bit old style C++ iterators
('i' and 'e').

Oh, and a truncated string is usually not good anyway, be it 0-terminated or
not. So you do need to check for that after all that stringing stuff:

    
    
        if (strnlen(a, sizeof(a)) >= sizeof(a)-1) {
            /* ... error ... */
        }
    

Don't miss that '-1' there. Off-by-one is another enemy to know well. And
despite that check handling missing 0-termination, do not be tempted to fall
back to strcpy(), because missing 0-termination is bad(tm).

Phew!

C is bad with strings. The above resembles C++ iterators ('i' and 'e') and
works fine with any good snprintf implementation (i.e., probably not under
Windows).

And do not copy structs with memcpy, just assign them! memcpy() is for arrays
only. This is not going to go away, is it?

~~~
nneonneo
Mostly good advice, but the multiple `snprintf` example is wrong - it contains
a buffer overflow. snprintf returns the number of characters that _would_ be
written, so when you do

    
    
        i += snprintf(i, e-i, "%s", b1);
    

i would end up past e if b1 is overlong. Then in the next line

    
    
        i += snprintf(i, e-i, "%s", b2);
    

e-i is negative, but snprintf takes a size_t so this will overflow badly.

The _best_ solution, AFAICT, is the following:

    
    
        int res;
        res = snprintf(i, e-i, "%s%s%s", b1, b2, b3);
        if(res >= e-i) {
            /* handle overflow */
            return -1;
        } else {
            i += res;
        }
        /* subsequent snprintf's here */

