
Most common one-line bugs in C?  - solipsist
http://stackoverflow.com/questions/5438347/most-common-one-line-bugs-in-c
======
gtani
There are lots of C gotcha lists out there; the "Undefined behavior" series is
good, and I think Koenig's _C Traps and Pitfalls_ book from 1989 (pre-ANSI C)
is still worth reading:

<http://www.literateprogramming.com/ctraps.pdf>

<http://blog.regehr.org/archives/232>

<http://blog.regehr.org/archives/226>

<http://www.andromeda.com/people/ddyer/topten.html>

<http://news.ycombinator.com/item?id=1990244>

------
jerf
I have to admit that after reading that over, I find myself wondering why we
teach students C as their first language...

~~~
cperciva
I agree; we should start with assembly language. Students will never
understand why these are dumb things to do until they can visualize the
assembly code they get translated into.

~~~
Animus7
C, being the glorified assembly language that it is, would be an excellent
intro to programming if we didn't insist on teaching it as a high-level
language.

A beginner's course in C will tell you "uninitialized variables are bad", and
"don't use pointers after free", but it won't explain the stack frames or
heaps that underpin so much of our computing.

I happen to think that having C blow up in your face (and fixing it) is one of
the best ways to learn the subtleties of programming.

~~~
pjscott
A lot of students would not respond well to that kind of approach. It's too
much at once; you're trying to teach them to think like an idiot savant
computer, _and_ teach them common patterns for translating their thoughts into
code, _and_ hit them with a bunch of low-level details about pointers and
stacks and instructions? Some exceptional students would thrive. Others would
dig in and do sort of okay, passing the test and then maybe learning it
properly later on. Many -- maybe most -- would just get discouraged and switch
majors to something where they'll never have to deal with null pointers or
case fall-through or (god help us) malloc alignment along dword boundaries.

If you want to do it that way, you've got to take one thing at a time. Colin's
idea of starting with assembly language isn't nearly as crazy as it sounds; I
know several people who learned that way. Personally, I would start with very
simple, linear programs in a high-level language like Python. Let them get a
feel for it, and fiddle around. Then gradually start introducing more and more
difficult concepts, like if statements or looping, and how to use these
strange new wonders. Later -- much later -- you'll be able to talk about the
low-level details exposed by a language like C. If you try it too early,
you'll either get head-explosions or blank stares.

(Incidentally, head explosions are preferable. One of the greatest horrors of
teaching is that there will always be students who stare at you with looks of
blank incomprehension, and it always might be your fault.)

~~~
h0bbit
I mostly agree with this answer. However, in my experience, starting with a
high-level language (like Python) is a double-edged sword.

a) It fails to get people interested in low-level things like call stacks and
frames. Someone once told me, "My language frees me up to think about the
really important things: like solving the problem at hand. I don't have to
worry about things like the stack and where my memory is going or coming
from." Perfectly valid point, maybe, but I didn't learn about these things
because I needed to; I did it because I was interested and curious. I think C
and its pointers played a big role in helping me "get interested".

b) People take the _awesome_ things in high-level languages for granted. When
I first moved to Python from C, functions as first-class entities blew me
away! Lisp macros were like learning to do magic. People who have only used a
high-level language don't give these features the credit they deserve.
(Alright, this might just be a pet peeve of mine...)

I think that C _should_ be taught to students, at a slower pace and in more
detail than it was taught to us. And if it occasionally blows up in their
face, I think they'll be better off for it.

------
biot
Anyone else remember the magazine ads for PC-Lint, each of which featured a
subtle C bug? Those were fun!

edit: <http://drdobbs.com/199900339>

------
astrofinch
Makes me wonder how hard the programming languages that C defeated must have
sucked.

~~~
rwmj
BCPL barely had types. Everything was an "int" and you stored pointers,
addresses, floats and so on in that same type (no concept of sizeof(int) !=
sizeof(void*) in those days!)

Edit: nice intro to BCPL. It's much worse than I thought:
<http://www.davros.org/c/bcpl.html>

------
ja27
Most common in my world? Not checking return codes. Or checking return codes
and logging "there was an error" rather than using errno.

A couple uncommon ones: for (float x = 0.0; x < 3,5; x += 0.1) {...}

if (flag == TRUE) {...} (TRUE was #defined as 1 somewhere, but someone else
set flag to a different non-zero value like -1, !0 or ~0.)

~~~
bartonfink
With the float for-loop, is the bug initializing the variable inside the loop
declaration or is it the comma in "x < 3,5"?

~~~
kragen
I think you mean "declaring the variable inside the loop initialization",
which is not a bug in C99 and not subtle anywhere else.

The intended bug is that 0.1, in floating-point, is not exactly 0.1. On x86
(and probably anything with IEEE-754 floating-point) the loop counter reaches
0.5 successfully, is a little bit high when it should have reached 1.0,
remains high up to when it reaches slightly more than 2.2, and then suddenly
becomes low when it should have reached 2.3, and continues to be slightly low
and get progressively lower thereafter, all according to whether the sum is
being rounded to a value that's slightly too high or slightly too low. The
consequence is that the loop runs for an extra iteration with x ≈ 3.5, or 3,5
if you're from, say, Argentina.

------
bryanallen22
If someone has enough SO reputation (I don't), you might mention on pmg's post
that casting malloc is a good idea, even though it's not necessary in C.
Otherwise, should your code ever move to a C++ environment it'll be an error
-- not even a warning.

~~~
cperciva
That sounds like a good reason to omit the cast. Compiling C code as C++ is a
bug: You can take valid C code, run it through a C++ compiler without any
errors or even warnings being printed, and have it behave differently.

If you want to use C code from a C++ program, you MUST compile it with a C
compiler and link it in.

------
sliverstorm
One of the commentators mentions some... frustration... with _void main()_

Out of curiosity, is there any reason to have a return value from main() in
something like a microcontroller, where there's nobody and nothing (that I
know of) to care what main() returns?

~~~
cperciva
_Out of curiosity, is there any reason to have a return value from main() in
something like a microcontroller, where there's nobody and nothing (that I
know of) to care what main() returns?_

It's very unlikely... but yes, just possibly. There are some very wacky
calling conventions out there; among them, "caller reserves space for result
on the stack after the function arguments".

If you run into this particular variety of crazy, declaring _main_ as
returning _void_ will result in the compiler looking in the wrong place for
_main_ 's parameters, with obvious and very rapid breakage resulting.

I'd say that you don't need to worry about this, except that microcontrollers
are exactly the sort of niche environment where you're likely to encounter
craziness -- so it's better to play it safe and declare main correctly.

~~~
JoachimSchipper
Starting the program is almost entirely implementation-defined in freestanding
environments (i.e. those without a host OS), so I don't think your example
works.

Of course, there are reasons not to use "void main" - for one, a strict C99
compiler will not compile it.

------
nowarninglabel
Does it count as irony that when I loaded this link I hit a SO maintenance
page with this image at the top:
<http://sstatic.net/stackoverflow/img/offline-ide-1.png> ?

