The reason is that I write a custom printf to print exactly what I need to know. Debuggers just bury you in irrelevant output.
I don't think I'd say this is the only reason I use printf instead of a debugger, but it's definitely a compelling one.
This isn't an argument in favor of printf over debugging. It's an argument in favor of making debuggers not suck!
Visual Studio has decent watch-window formatters for STL containers. Vectors and lists are easy, but it also has nice views into unordered_map and unordered_set. Visual Studio has a .natvis file format for adding custom debugger displays for custom datatypes.
If you're broken into the debugger and all threads are paused then a good debugger should be able to display your data however you like. Hell, it could have different display options to choose from if you really wanted!
I also find myself regularly relying on printf to debug. But I view this as a failing of the debugger, rather than the superiority of printf.
I think the natvis format kinda sucks and is not sufficiently powerful. But that's a separate issue.
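For the curious, a minimal .natvis entry looks roughly like this (sketched from memory of the format; `MyVec`, `len`, and `data` are hypothetical names, not from any shipping file):

```xml
<?xml version="1.0" encoding="utf-8"?>
<AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
  <!-- Hypothetical vector-like type with len/data fields -->
  <Type Name="MyVec&lt;*&gt;">
    <DisplayString>{{ size={len} }}</DisplayString>
    <Expand>
      <ArrayItems>
        <Size>len</Size>
        <ValuePointer>data</ValuePointer>
      </ArrayItems>
    </Expand>
  </Type>
</AutoVisualizer>
```

Verbose, yes, but the watch window then shows the element list instead of raw pointers.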
I'm not here to convince anyone to use printf debugging. I don't care about some grand "argument" in favor of it over something else. What I care about are the tools available to me, and the most effective way to debug has historically, for me, been printf.
This does not mean I only use printf. This does not mean I hate debuggers. This does not mean that I think debuggers are useless. This does not mean that I think debuggers couldn't or shouldn't be improved. This does not mean that I don't use profilers when debugging performance issues. All it means is that my tool of choice for everyday debugging is printf. It is convenient for me on a number of different dimensions.
That there could exist a theoretically better tool sounds like a great reason for someone to go out and build something. But that someone isn't me, at least not right now.
> I mostly use printf. A debugger is only good for telling you where it seg faulted and the stack trace.
That was Walter's statement, with which you agreed, and with which I strongly disagree.
Maybe I overgeneralized. The Rust debugger story indeed sucks and I mostly use printf! I interpreted "a debugger is only good for..." to be all debuggers. And I don't think that's true. Which is why I gave a Visual Studio/C++ example.
2019 won't be the year Rust's IDE story gets good. But maybe 2020? People are laying the groundwork. I'm hopeful.
I'm very hopeful 2019 will see some major improvements in that domain. Some IDEs, such as Qt Creator, just added support for the language server protocol, which already supports Rust. Thus the critical groundwork is already being deployed, though some work still needs to be done.
For me, it's annoying to have to modify my code and recompile / relink / redeploy / re-repro (assuming I even have a good repro case) just to inspect my data - linking alone can take over a minute for some projects I work on, never mind the other steps! Meanwhile, changes to MSVC project natvis files hot reload, even if I'm looking at a crashdump from a coworker's computer for a bug that only happens every other full moon while rhyming off-key in Swedish. For some third party libs I may not even have the source code available to modify, but I can still sometimes write natvis files for their types. It's a little duplicated effort, sure, but I'll probably finish adding a new type to a natvis file before I'll finish relinking my project in a lot of cases. https://github.com/rust-lang/rust/blob/master/src/etc/natvis... , while perhaps a bit arcane if you don't know natvis (there are docs), and verbose on account of being XML, really isn't all that much XML for a couple of debug visualizers.
I consider debugger info important enough that even though I'm not using Rust in production, I did write one rustc patch to auto-embed stdlib natvis files into pdbs (although those won't hot reload): https://github.com/rust-lang/rust/pull/43221 . There are gdb scripts I'd be improving if I were debugging Rust with gdb instead. Many script debuggers can take advantage of "debug" formatters defined in code, which is a nice option to have too, so it doesn't have to be all one or the other. I'm not aware of any debuggers that leverage Rust's fmt::Debug traits, sadly.
I'm not necessarily knocking printf debugging. I use it and things like it sometimes. Especially if I have a harder problem that needs more code to diagnose and is making me run into the limits of the debugger. Memory leak tracking, cyclic refcounted pointer detection, annotating memory regions to be included in crash dumps, explicitly annotated profiling information, etc. - things that tend to involve more permanent systems. Sometimes you can write a debug script for these things, but doing it directly in code can be faster to write and to execute.
I will say: If your debugger isn't at least capable (with a little investment) of being good at inspecting arbitrary program state, it's not a very good debugger.
libstdc++ ships with pretty-printers for its types.
But I agree that printf debugging still has its uses.
As others have mentioned, how do debuggers fare on optimized builds? Most of my time "debugging" is specifically spent on optimized builds looking at performance issues.
Xcode launches into a debugger by default, so it's not really an extra step for what I usually do.
> As others have mentioned, how do debuggers fare on optimized builds?
Not well, if you are planning to have variable names and stepping work correctly.
> Most of my time "debugging" is specifically spent on optimized builds looking at performance issues.
Sounds like a job for a profiler?
Consider the similarities between profilers and printf debugging; both of them run your code and spit out some kind of log, whereas debuggers stop your code in the middle of execution. Workflow-wise, they're pretty much the same, even if their objectives are a bit different.
Sometimes yes. Sometimes no.
But for actually debugging, a log file is better.
if (condition) assert(0);
I use a separate tool for profiling:
It's built into the DMC++ and DMD compilers.
Sometimes I mess up the code a bit filling it up with debug code, but when I finally fix it it's git to the rescue.
I can be old fashioned when it comes to IDEs, but git really is a marvelous, paradigm-changing advance.
If your loop takes billions of iterations on gigabytes of data before returning the wrong answer, how do you debug? Breakpoints are useless, because which iteration introduced the fault? The critical paths are long. Watchpoints start after you paused. Reverse debugging is too slow for millions of instructions. If you change the code with the REPL (other post) you invalidate your previous calculations too.
Stacktraces are useless because you inline everything, and the callgraph is shallow anyways.
Performance profiling needs special tools: a debugger can tell you where the hotspot is, not why it's hot.
My conclusion: A debugger is good for finding bugs in data that moves, not in data that changes.
Opinion based on: my debugging approach changes depending on the error. Prints are always the easiest solution in the scientific parts.
I find myself wanting debuggers more on dynamic languages where you have no idea what object types you're handling or what their properties are. Printing the whole thing gets you a pile of mostly useless mush. A debugger lets you poke at parts of it until you find something that gives you some insight into the problem.
I'd also say that performance and threading problems are a different beast. Even when you have a beautiful debugger, it's not very helpful to stop one thread while you poke around at human speed. You gotta log info about what's happening somewhere and then examine it for clues after the run is done. It may take a few dozen runs to log the detail you need without gigabytes of useless mush, but that's just what it takes to get to the bottom of these types of issues.
Clojure is another example of a pretty decent experience, e.g. "add-watch" is built-in, it has a REPL (so I've used it to debug Java code before), and the coding culture is functional programming which has its own benefits for debugging. Common Lisp is even better, it's a system as much as a language and so the runtime itself has all the debugging capabilities that you need a heavy IDE for in simpler non-system languages. (break, compile, trace, update-instance-for-redefined-class, object field inspection, extendable print-object methods are all there part of the standard, lots of introspection and redefinition capability, and CL compilers like SBCL can give quite detailed type, argument count, typo alerts, cross-reference usage, and optimization information at compile time, still no IDE needed (though editors like emacs/vim have nice wrappers and can automate some stuff). Check out this short series: https://malisper.me/debugging-lisp-part-1-recompilation/ )
For subtle edits of a big chunk of code, my preferred approach is to set everything up in a test case, then pop into the implementation that needs to be changed where all the state is available with a binding.pry, and iterate expressions in the REPL until it's good, then copy my history out into an editor.
In Emacs it's nice and easy with rspec-mode and inf-ruby - run the test from within Emacs and get in-editor REPL once you hit the binding.pry.
Maybe it's a software vs hardware thing, but I would end it all if I had to work hardware without breakpoints, watches, and step-through.
Essentially you just get good at staring at the code and running gedanken experiments till you figure it out.
As I said above, this is very heavily dependent on preferred workflows and what you're working on. Long ago, I remember doing some robotics work in C, and a debugger was invaluable.
Obviously it's possible to debug an issue based only on static state like a core dump. And in extraordinarily rare cases that might be the only available option and a debugger (or more manual tooling) might be your only choice.
But in the overwhelming majority of cases, even working at the lowest and most obtuse levels, the very first step in debugging a problem, long before anyone starts flaming about tool choices, is to come up with a reproducible test case. And once you have that, frankly, fancy tooling doesn't bring much to the table.
At the end of the day you have to spend time staring right at the code at fault and thinking about it, and if you have it up in your editor to do that, you might as well be stuffing printf's in there while you're at it.
In fact, I would argue that reproducible reports are relatively rare in the industry, especially once you get out of developer tooling (where the users are people who know the value of such reports, and how to obtain them).
And then stuffing a printf in the middle of it can easily mean several minutes of build time, for a large native codebase. A tracepoint, on the other hand, is instant.