I haven't had the time to play with this, but Part 4 of the excellent "Unix as IDE" series goes into it, and I'm sure there's more around the web.
Another really fun way to get into the underlying assembler that the C compiler generates is Vidar Hokstad's "Writing a compiler in Ruby, bottom up". This series involves writing little C functions, compiling them to the simplest assembly you can get, then writing a compiler (in Ruby!) that emits that assembly. Some people have objections to the approach, but it's really quite nifty. I especially like it because it's really refreshing to see a compiler tutorial that doesn't start with the lexer and parser.
On a related note, it drives me crazy when developers don't know strace/ltrace or the equivalent for their platform. I use strace many times a day to diagnose anything from my own code to figuring out what config files an application actually loads to finding out what's slowing an app down.
When students came to me for help, my first two questions were always "What does gdb say?" and "What does valgrind say?" If they couldn't answer, I'd tell them to go find out.
edit: ddd also has a gdb prompt at the bottom, so you really don't lose much by using ddd over gdb unless you can't run an X server
I generally try to refrain from bringing up Emacs in threads that have nothing to do with it (since Emacs does everything) but I was pleasantly surprised by its gdb support and I have used it extensively in the past.
Obviously, if the compile/execute loop is cumbersome then GDB will save a ton of time. But when the loop is fast, I find printf-ing to be effective and easy on the brain since you focus 100% on the code.
You say this as if this practice isn't also prevalent among working and experienced programmers :)
this is not to say that gdb won't save hilarious amounts of time or that you can be a good C coder without understanding a debugger (I don't think you can), just that the huge compile time thing isn't true most of the time.
I poked arrays and pointers with printf() for too long before they started to make sense, would have loved an early introduction to debuggers!
The other thing our uni was bad about is that they encouraged us to use vim through the uni servers early on but never gave us a proper overview or the resources to really learn it. As a result, most of us saw it as a handicapped text editor that just made things harder.
On the other hand, a follow-up course on actual debugging is probably warranted. Debugging might not be part of the Science bit of Computer Science but it is on the practical end of the Computer bit which you'll probably run into doing the Science bit.
Our version of the course was adapted from the CMU course: http://csapp.cs.cmu.edu/ I thought it was an excellent course. It taught students both concepts and skills that I had picked up in a much more ad-hoc manner.
watch -location ptr->member
But then, this could be the story of most command line utilities: Seems fiddly at first, but actually it is quite usable and often times more convenient than all those whiz-bang graphical tools.
My question is, what advantages do you get in using gdb directly through the CLI rather than through an IDE? (like Eclipse/NetBeans which itself uses gdb for C/C++ debugging, but has a nice graphical UI for it.)
Watching your code run and hitting a "next" button repeatedly isn't really a good use of the tool.
It's really useful for getting familiar with someone else's code.
One advantage is that gdb might be available in more environments.
In general, whatever you are more comfortable with is the better tool. I prefer command-line interfaces for most things, but find it easier to set breakpoints by clicking next to a line of code.
With LLVM, we should be able to have a REPL for C as a pedagogical tool.
I was watching an Apple talk on lldb which explained this in more detail, and it shows a lot of promise for a debugger to have a full C compiler inside of it.
More people should be hopping on this bandwagon though because debuggers are awesome. I typically find myself using `po` the most in LLDB (Xcode, iOS development) but it's insanely useful especially when Xcode refuses to show me the values of something I want in the Variables View, ex. NSDictionary keys/values, objects in an NSArray, etc. I'll also use it sometimes to execute simple commands like `[myArrayObject count]` when the Variables View refuses to show me property values. Sometimes Xcode's GUI bits just don't cut it!
There's more info on what you can do with LLDB here:
And if you've used GDB, this might be of use:
Can I ask you what you're comparing with?
I can compare with C++, Scala and Go.
The first two are horrible in that department when compared with C. Go's compiles are blazingly fast.
I just did a random google of "how long to compile linux kernel" and came across this: https://plus.google.com/u/0/+LinusTorvalds/posts/6BxnSisp8fU
One nice excerpt:
"Total build time after make clean is about 1min, give or take 10secs. `touch include/linux/version.h` is 6 seconds to rebuild. Just doing a rebuild without touching anything is 2.7secs.
For a defconfig, it does build a godawful amount of modules :^)
'allnoconfig; make' is 16 seconds..."
Relying on what gets printed when you print a pointer value (which is not what the author is doing) is also misleading. Concluding things like "the size of an int is 4" or "the size of a double is 8" is misleading too. Again, the author isn't drawing those conclusions, but someone doing exploratory programming might, since the whole point of exploratory programming is learning by seeing how the system responds to the things you're doing.
And maybe I am wrong, but even the author got misled by it.
"I'm going to ignore why 2147483648 == -2147483648; the point is that even arithmetic can be tricky in C, and gdb understands C arithmetic."
That's actually the result of undefined behavior, and not so much a result of "how C integer arithmetic works".
I really liked the idea. I just think it may be misleading if the tool you're using is GDB.
It'd be interesting a tool which allowed that sort of exploratory programming, but taking into consideration undefined behavior, unspecified things and implementation defined behavior.
The "natural wraparound" is not the same in all of the representations.
When the integer is unsigned, C gives you more guarantees, but for signed types you're out of luck and get to undefined behavior land.
Check this out: http://web.torek.net/torek/c/numbers.html
And as an after note...
The maximum int may be less than 2^31-1. Maybe your C implementation decides that your int type will be a 16-bit object with values ranging from -2^15 to 2^15-1. In that case, that integer literal would not be an int, and is likely to be a long, and maybe, in this same implementation, a long is a 64-bit object ranging from -2^63 to 2^63-1. In that case, that assertion is just plain false. That's not the system exposed in the article, but it could happen on some other system.
This can be true even on a 32-bit system.
I guess I should read the gdb manual, and, if it's anything like the as manual (which I've learned is not always the full story), the source too.
Split the windows horizontally: C-x 2
Switch to the other window: C-x o
Fire up gdb: M-x gdb
You can set breakpoints with C-x <SPC>. Emacs will show an arrow next to the source line about to be executed.
Typically, though, the performance of the graphical debugger is pretty lousy compared to the command line, and (in the case of Xcode) it doesn't do everything. Knowing how to navigate on the command line is very beneficial, especially when you need to [for lack of better words] rip the shit out of something, inject chunks of memory, or forcibly reproduce bugs that don't happen often.
That being said, I have heard that visual studio is amazing. For some reason I've never developed on Windows platforms so I've never had the opportunity to use the debugger.
I'd be curious to hear other people's opinions/experiences. I'll freely admit that I use the tools I use because I've grown comfortable with them.
to toggle it on/off