
#ifdef considered harmful (1992) [pdf] - Annatar
https://usenix.org/legacy/publications/library/proceedings/sa92/spencer.pdf
======
fanf2
I’ve been maintaining unifdef since about 2001. One of the rules I have set
myself is to never use #if in unifdef. It’s an easy program to port, so this
isn’t very hard :-) but Microsoft’s appalling C libraries make it more painful
than it should be.

The main technique is to set the include path depending on the behaviour of
the target platform, and have per-platform variant header files. For unifdef,
this basically means standard vs windows.

The other thing I don’t have to deal with is build-time configuration options
or feature switches. It’s easy to get into a combinatorial explosion with #if
but kinda hard if you force yourself to reify each option as a pair of include
files!
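
A rough sketch of the variant-header idea (the `ports/` layout and the `PORT_PATH_SEP` macro are illustrative inventions here, not unifdef's actual names): each platform gets its own copy of the header, and the build selects a directory with -I instead of defining feature macros.

```c
/* ports/standard/port.h -- selected with:  cc -Iports/standard main.c */
#define PORT_PATH_SEP '/'

/* ports/windows/port.h  -- selected with:  cc -Iports/windows main.c  */
/* #define PORT_PATH_SEP '\\'                                          */

/* main.c contains no #if at all; the include path picks the variant:  */
#include "port.h"
```

Each new option becomes a pair of header files rather than another #if branch, so the supported combinations are visible in the filesystem instead of buried in the preprocessor.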

~~~
snarfy
Isn't this just moving the goal posts? Now all of your #ifs and the complexity
they bring are in your build system.

~~~
JdeBP
It _already was_ in the build system. The macros that control conditional
compilation when one is doing it with the preprocessor do not appear out of
thin air. A build system has to set them appropriately in the first place.

All of the mechanisms for auto-detecting (or manually specifying) the
capabilities of the target still have to be there. It is simply a question of
whether they set preprocessor macros via command-line options or tell the
build system to use alternative source files.

The latter has the advantage of creating build dependencies only against the
modules that actually use the library functionality in question, rather than
against the compiler command line, which can force a re-build of everything
(when an honest dependency-tracking mechanism is being used).

~~~
euyyn
For gRPC we needed the code to work on all major OS's and had to support a
number of different build systems. In no case were the capabilities of the
target system manually specified, as that wouldn't have been sustainable. And
specifying _for each build system_ what files to include or not for each
platform would have been onerous and hard to work with (as the person adding
some platform-specific code probably isn't fluent in build systems for other
platforms).

So the solution we adopted is a header file that detects the target platform
capabilities. And each source file being either platform-agnostic, or specific
to one platform. In the latter case, the whole file contents would be #ifdef'd
out if compiling for a different platform.
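
The whole-file guard euyyn describes can be sketched like this (`port_page_size` and the POSIX/Windows split are illustrative, not gRPC's actual code): the file is listed unconditionally in every build system, and compiles to nothing on platforms it doesn't target.

```c
/* port_posix.c -- a sketch of a platform-specific source file.  On
 * Windows the guard makes the whole translation unit empty, so every
 * build system can compile it without per-platform file lists. */
#ifndef _WIN32

#include <unistd.h>

long port_page_size(void) {
    /* POSIX-only call; never reached by a Windows build */
    return sysconf(_SC_PAGESIZE);
}

#endif /* !_WIN32 */
```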

------
beefhash
It bears noting that at _some_ point, you will likely need to #ifdef. There's
no way around it for non-trivial code. Simple example: getting the size of a
file when that file is larger than the maximum value of a long, meaning you
can't fseek to the end and use ftell(); you'll need platform-specific APIs
like stat(2) or GetFileSizeEx().
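
A hedged sketch of that example (`port_file_size` is an invented name): stat(2) reports sizes as an off_t and GetFileSizeEx as a 64-bit LARGE_INTEGER, so both sidestep the long/ftell() limit.

```c
/* file_size.c -- returns the size of a file in bytes, or -1 on error. */
#include <stdint.h>

#ifdef _WIN32
#include <windows.h>

int64_t port_file_size(const char *path) {
    LARGE_INTEGER size;
    HANDLE h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return -1;
    BOOL ok = GetFileSizeEx(h, &size);
    CloseHandle(h);
    return ok ? (int64_t)size.QuadPart : -1;
}

#else
#include <sys/stat.h>

int64_t port_file_size(const char *path) {
    struct stat st;        /* st_size is an off_t, 64-bit on modern systems */
    if (stat(path, &st) != 0)
        return -1;
    return (int64_t)st.st_size;
}

#endif
```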

You can also do the platform checks in the build system instead and have
multiple per-platform C files implement APIs for portability shims -- but
given that the platform check is likely unportable (at least in make as
specified by POSIX), you're just making different portability assumptions.

~~~
avar
I may be wrong, but I was under the impression that this was portable on all
POSIX systems:

    
    
    uname_S := $(shell sh -c 'uname -s 2>/dev/null || echo not')
    ifeq ($(uname_S),Linux)
        HAVE_LINUX_SPECIFIC_STUFF = Yes
    endif
    

This is how git does its conditional compilation, powered purely by Makefile
logic:
[https://github.com/git/git/blob/master/config.mak.uname](https://github.com/git/git/blob/master/config.mak.uname)

~~~
jstanley
Makefile logic is basically just an ifdef one level up.

~~~
chris_wot
So what do you do then?

~~~
jstanley
I don't consider ifdefs harmful.

But if you do, you must surely consider logic in Makefiles equally harmful.

~~~
chris_wot
Why so?

------
arkj
Every construct can be harmful if one doesn’t understand its purpose.

------
pandaman
The article does not seem to mention any uses of conditional compilation
outside of portability. But consider the most common case: debug and release
versions of the same program.

You want the extra checks and diagnostics in the debug version and the fastest
code in the release. Using assert() will only get you so far: you might need
to compute values to assert on, and if computing them requires function calls,
those calls will still be there when asserts are compiled out of the release.
And if you want additional room in your data structures to store debug info,
then assert() is of no use at all.
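
That last point can be sketched as follows (`struct buffer` and `DEBUG_BUILD` are illustrative; the switch would come from the build system, e.g. -DDEBUG_BUILD): the debug-only field changes the structure's layout, which no amount of assert() can do.

```c
#include <stddef.h>

struct buffer {
    char   data[64];
    size_t used;
#ifdef DEBUG_BUILD
    size_t high_water;          /* extra room that exists only in debug builds */
#endif
};

void buffer_use(struct buffer *b, size_t n) {
    b->used = n;
#ifdef DEBUG_BUILD
    if (n > b->high_water)      /* bookkeeping compiled out of the release */
        b->high_water = n;
#endif
}
```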

~~~
Annatar
“Debug” and “Release” only pertain to Windows. You are always supposed to
build with -g on UNIX and UNIX-like operating systems because, besides a few
milliseconds of startup time, including the debugging information in the
binary doesn’t affect anything but file size. The runtime linker skips those
sections and maps only code and data into memory, but the debug information is
there if you load the binary into the debugger. Kernel engineers taught me
this.

~~~
pandaman
There are many more differences between debug and release builds than whether
debug info is included in the executable, so your reasoning makes little
sense. Many people, for example, build games for the PS4, which runs a
FreeBSD-derived Unix-like OS, unaware that their debug and release builds only
pertain to Windows...

------
JdeBP
If someone wants to fix Wikipedia's article on Henry Spencer, a full citation
just waiting for {{cite conference}} is at
[https://news.ycombinator.com/item?id=17610233](https://news.ycombinator.com/item?id=17610233)
.

