Hacker News

I mostly use printf. A debugger is only good for telling you where it seg faulted and the stack trace.

The reason is that I write a custom printf to print exactly what I need to know. Debuggers just bury you in irrelevant output.




Debuggers are great at interrogating arbitrary program state (it's a printf you don't need to hardcode!) and acting as a REPL for trying out new code.


I'm with Walter on this one. He's right. When I printf-debug, I very much do not want to necessarily see the raw in-memory details. I want to see a pretty printed view. For example, right now I'm working with DFAs, and if I just printed out a transition table as it is in memory, it would be unreadable. Instead, I have a custom fmt::Debug impl that pretty prints something I can read and comprehend.
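A minimal sketch of that kind of custom fmt::Debug impl (the DFA type and its layout here are invented for illustration, not the actual code being discussed):

```rust
use std::fmt;

// Hypothetical DFA over a binary alphabet; in memory it's just a flat
// vector of rows, which is unreadable if dumped raw.
struct Dfa {
    // transitions[state][input] = next state
    transitions: Vec<[usize; 2]>,
}

impl fmt::Debug for Dfa {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Pretty print one labeled row per state instead of the raw layout.
        for (state, row) in self.transitions.iter().enumerate() {
            writeln!(f, "{:>4}: 0 => {}, 1 => {}", state, row[0], row[1])?;
        }
        Ok(())
    }
}
```

With this, `println!("{:?}", dfa)` shows one labeled row per state instead of a flat dump of the underlying vector, and the same impl is what a debugger could call too.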

I don't think I'd say this is the only reason I use printf instead of a debugger, but it's definitely a compelling one.


> I very much do not want to necessarily see the raw in-memory details. I want to see a pretty printed view [...] I can read and comprehend.

This isn't an argument in favor of printf over debugging. It's an argument in favor of making debuggers not suck!

Visual Studio has decent watch-window formatters for STL containers. Vectors and Lists are easy. But it also has nice views into unordered_map and unordered_set. Visual Studio has a .natvis file format for adding custom debugger displays for custom datatypes [1].

If you're broken into the debugger and all threads are paused then a good debugger should be able to display your data however you like. Hell, it could have different display options to choose from if you really wanted!

I also find myself regularly relying on printf to debug. But I view this as a failing of the debugger, rather than the superiority of printf.

[1] I think the natvis format kinda sucks and is not sufficiently powerful. But that's a separate issue.


I don't use C++ and I don't use Visual Studio. And even if I did, it would be pretty annoying to have to define debugger-specific files just to print my data types. In Rust, I just add a fmt::Debug impl and now everyone who uses my code benefits from it, whether in a debugger (by calling that impl) or by print-debugging.

I'm not here to convince anyone to use printf debugging. I don't care about some grand "argument" in favor of it over something else. What I care about are the tools available to me, and the most effective way to debug has, historically, for me, been to use printf.

This does not mean I only use printf. This does not mean I hate debuggers. This does not mean that I think debuggers are useless. This does not mean that I think debuggers couldn't or shouldn't be improved. This does not mean that I don't use profilers when debugging performance issues. All it means is that my tool of choice for everyday debugging is printf. It is convenient for me along a number of different dimensions.

That there could exist a theoretically better tool sounds like a great reason for someone to go out and build something. But that someone isn't me, at least not right now.


I'm familiar with your work. I use ripgrep daily, thanks!

> I mostly use printf. A debugger is only good for telling you where it seg faulted and the stack trace.

That was Walter's statement, with which you agreed, and with which I strongly disagree.

Maybe I overgeneralized. The Rust debugger story indeed sucks, and I mostly use printf! I interpreted "a debugger is only good for..." to mean all debuggers, and I don't think that's true. Which is why I gave a Visual Studio/C++ example.

2019 won't be the year Rust's IDE story gets good. But maybe 2020? People are laying the groundwork. I'm hopeful.


> 2019 won't be the year Rust's IDE story gets good. But maybe 2020? People are laying the groundwork. I'm hopeful.

I'm very hopeful that 2019 will see some major improvements in that domain. Some IDEs, such as Qt Creator, have just added support for the Language Server Protocol [1], which already supports Rust. Thus the critical groundwork is already being deployed, although some work still needs to be done.

[1] https://langserver.org/


Just to share another POV (I hear that you're not using MSVC so much of this may not be useful to you):

For me, it's annoying to have to modify my code and recompile / relink / redeploy / re-repro (assuming I even have a good repro case) just to inspect my data - linking alone can take over a minute for some projects I work on, never mind the other steps! Meanwhile, changes to MSVC project natvis files hot reload, even if I'm looking at a crash dump from a coworker's computer for a bug that only happens every other full moon while rhyming off-key in Swedish. For some third-party libs I may not even have the source code available to modify, but I can still sometimes write natvis files for their types. It's a little duplicated effort, sure, but in a lot of cases I'll probably finish adding a new type to a natvis file before I finish relinking my project.

https://github.com/rust-lang/rust/blob/master/src/etc/natvis... , while perhaps a bit arcane if you don't know natvis (there are docs), and verbose on account of being XML, really isn't all that much XML for a couple of debug visualizers.

I consider debugger info important enough that even though I'm not using Rust in production, I did write one rustc patch to auto-embed stdlib natvis files into PDBs (although those won't hot reload): https://github.com/rust-lang/rust/pull/43221 . There are gdb scripts I'd be improving if I were debugging Rust with gdb instead. Many script debuggers can take advantage of "debug" formatters defined in code, which is a nice option to have too, so it doesn't have to be all one or the other. I'm not aware of any debuggers that leverage Rust's fmt::Debug trait, sadly.

I'm not necessarily knocking printf debugging. I use it and things like it sometimes, especially if I have a harder problem that needs more code to diagnose and runs into the limits of the debugger: memory leak tracking, cyclic refcounted pointer detection, annotating memory regions to be included in crash dumps, explicitly annotated profiling information, etc. - things that tend to involve more permanent systems. Sometimes you can write a debug script for these things, but doing it directly in code can be faster to write and to execute.

I will say: If your debugger isn't at least capable (with a little investment) of being good at inspecting arbitrary program state, it's not a very good debugger.


For example, when I debug the compiler, I'll often need the AST printed. Printing out standard containers doesn't do that. And sometimes I need the AST printed in different ways.


On the Visual C++ team, we use natvis to show FE AST nodes. It works pretty well with a couple hours of investment in writing the natvis.


I know that one can write custom pretty-printers for debuggers. But I like having the pretty-printers in the program itself. After all, I develop simultaneously on many diverse platforms.


With current gdb (i.e. released this decade), you can define pretty-printers in Python that are loaded automatically and print whatever you think is most important for a given type; the "print /r" command is available to print the raw details when necessary.

libstdc++ ships with pretty-printers for its types.

But I agree that printf debugging still has its uses.


I mean, why can't you just call that in the debugger when you need it?


Why spin up a debugger when I can just printf it? :-)

As others have mentioned, how do debuggers fare on optimized builds? Most of my time "debugging" is specifically spent on optimized builds looking at performance issues.


> Why spin up a debugger when I can just printf it?

Xcode launches into a debugger by default, so it's not really an extra step for what I usually do.

> As others have mentioned, how do debuggers fare on optimized builds?

Not well, if you are planning to have variable names and stepping work correctly.

> Most of my time "debugging" is specifically spent on optimized builds looking at performance issues.

Sounds like a job for a profiler?


> Sounds like a job for a profiler?

Consider the similarities between profilers and printf debugging: both of them run your code and spit out some kind of log, whereas debuggers stop your code in the middle of execution. Workflow-wise, they're pretty much the same, even if their objectives are a bit different.


> Sounds like a job for a profiler?

Sometimes yes. Sometimes no.


Borland even used to run a few marketing ads about JIT debugging, making use of the Dr. Watson infrastructure on Win16.


A debugger is most useful when you don't know what the program does, or how it does what it does. It is amazing for learning a new codebase.

But for actually debugging, a log file is better.


A good debugger is good for quite a bit more than that. Watches, break points, performance/profiling. I actually can't believe you're hating on debuggers.


> Watches

    printf
> breakpoints

    if (condition) assert(0);
then I use the debugger to tell me how it got there.
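For what it's worth, the same trick sketched in Rust (the function and condition here are made up for illustration): turn the condition into a hard stop, then read the backtrace (RUST_BACKTRACE=1) or let the debugger catch the panic.

```rust
// A poor-man's conditional breakpoint: assert on the suspicious condition
// and let the resulting panic hand you the stack trace.
fn check(suspicious_value: i64) {
    // Hypothetical invariant: this value should never go negative.
    assert!(
        suspicious_value >= 0,
        "hit the bad state: {}",
        suspicious_value
    );
}
```

The assert costs nothing to set up and fires no matter how the program is launched, which is the appeal over a debugger-side breakpoint.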

> performance/profiling

I use a separate tool for profiling:

https://dlang.org/dmd-windows.html#switch-profile

It's built in to the DMC++ and DMD compilers.

Sometimes I mess up the code a bit, filling it with debug code, but when I finally fix the bug, it's git to the rescue.

I can be old fashioned when it comes to IDEs, but git really is a marvelous, paradigm-changing advance.


I like debuggers for things like examining variables from higher stack frames after a highly conditional break (where printf from every higher stack frame would be lost in noise), for stepping through abstractions (especially in C++, where there may be surprising amounts of hidden code executed in overloaded bits and pieces), and for hardware breakpoints (an approach for solving reproducible memory corruption: combine a deterministic memory allocator with memory breakpoints + counters).


This is a great comment. Most of the value I get out of using a debugger - I work in game dev - is from these, often in concert. Even before I start setting data breakpoints, though, I often find myself examining heap memory with the process paused to give myself sufficient context to do more informed exploration later. In the last year or so I've also started using Visual Studio's "action" breakpoints, a sort of runtime configurable printf, once I've identified areas of interest.


I share his opinion for scientific software. I think this is because scientific work is building an algorithm rather than building software; the coupling and scope are just too different.

If your loop takes billions of iterations over gigabytes of data before returning the wrong answer, how do you debug? Breakpoints are useless, because which iteration introduced the fault? The critical paths are long. Watchpoints only start after you've paused. Reverse debugging is too slow for millions of instructions. If you change the code from the REPL (other post), you invalidate your previous calculations too. Stack traces are useless because you inline everything, and the call graph is shallow anyway. Performance profiling needs special tools: you know the hotspot; the debugger tells you where, not why.

My conclusion: a debugger is good for finding bugs in data that moves, not in data that changes.

Opinion based on: my debugging approach changes depending on the error. Prints are always the easiest solution in the scientific parts.
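One printf-style tactic for exactly that case, sketched in Rust (the invariant and names are invented for illustration): check the invariant inline and report the first iteration that breaks it, instead of single-stepping through millions of iterations in a debugger.

```rust
// Instead of breakpointing through billions of iterations, assert the
// invariant inline and print the first iteration index at which it fails.
fn first_bad_iteration(data: &[f64]) -> Option<usize> {
    let mut acc = 0.0;
    for (i, &x) in data.iter().enumerate() {
        acc += x;
        // Invariant for this sketch: the running sum must stay finite.
        if !acc.is_finite() {
            eprintln!("invariant broken at iteration {}", i);
            return Some(i);
        }
    }
    None
}
```

Once you know the offending iteration, you can rerun with a conditional break (or more prints) targeted at exactly that index.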


I'm leaning towards printf/println too in Rust. I usually somewhat prefer debuggers, but I'll wait to get one set up until I find I need it. I haven't found myself wanting a debugger for Rust yet. Maybe the type checking is just that good that I don't have many bugs, maybe I'm writing enough tests for things that aren't type-checked well, maybe I haven't written anything complex enough yet. Whatever the cause, it works well enough for now.

I find myself wanting debuggers more on dynamic languages where you have no idea what object types you're handling or what their properties are. Printing the whole thing gets you a pile of mostly useless mush. A debugger lets you poke at parts of it until you find something that gives you some insight into the problem.

I'd also say that performance and threading problems are a different beast. Even when you have a beautiful debugger, it's not very helpful to stop one thread while you poke around at human speed. You gotta log info about what's happening somewhere and then examine it for clues after the run is done. It may take a few dozen runs to log the detail you need without gigabytes of useless mush, but that's just what it takes to get to the bottom of these types of issues.
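A minimal sketch of that logging approach for threaded code in Rust (the workload and names are invented): tag every log line with the worker that wrote it, let all threads run at full speed, and study the interleaving after the run rather than pausing one thread at human speed.

```rust
use std::thread;

// Each worker logs its own progress to stderr; nothing is paused, so
// timing-sensitive behavior is disturbed far less than under a debugger.
fn run_workers() -> i32 {
    let handles: Vec<_> = (0..4)
        .map(|id| {
            thread::spawn(move || {
                eprintln!("[worker {}] start", id);
                let result = id * id; // stand-in for real work
                eprintln!("[worker {}] done: {}", id, result);
                result
            })
        })
        .collect();
    // Join everything and combine the results.
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}
```

Grepping the collected log for one worker's tag then reconstructs that thread's timeline after the fact.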


Maybe you need to try out more powerful dynamic languages? I mean, even Python has 'pdb', which gives you a fairly authentic gdb experience, and you can always dir() and help() something in the REPL. For my part, I find myself not missing a debugger very much at all in Python code, whereas Java code of a certain size and legacy history forces me out of my preferred vim world and into Eclipse for the interactive debugging alone; the language makes debugging painful in ways you can get away with in, say, Python. Plus I find that language culture matters. Java programmers will assume you have a big IDE and debugger and will write their code accordingly. Other language cultures do something different. Occasionally you'll get principles like "grep-friendly code" in an effort to cut across cultures, but they're still not universal.

Clojure is another example of a pretty decent experience, e.g. "add-watch" is built-in, it has a REPL (so I've used it to debug Java code before), and the coding culture is functional programming, which has its own benefits for debugging. Common Lisp is even better: it's a system as much as a language, and so the runtime itself has all the debugging capabilities that you need a heavy IDE for in simpler non-system languages. (break, compile, trace, update-instance-for-redefined-class, object field inspection, and extendable print-object methods are all part of the standard, along with lots of introspection and redefinition capability, and CL compilers like SBCL can give quite detailed type, argument count, typo, cross-reference usage, and optimization information at compile time, still with no IDE needed, though editors like emacs/vim have nice wrappers and can automate some stuff. Check out this short series: https://malisper.me/debugging-lisp-part-1-recompilation/ )


Yeah I meant Ruby and Python for dynamic languages, and by debugger I mean the command-line Ruby byebug and Python pdb. Those plus logging/print statements have been all I've needed so far, never felt a need for a GUI debugger, but then I already live in Vim and Tmux. Never tried a command-line debugger for Node/JS, but the Chrome GUI debugger works well enough.


Pry is the way to go for debugging in Ruby. Drop into an interactive REPL with local variable context in situ with binding.pry or binding.pry_remote if in a forked server environment.

For subtle edits of a big chunk of code, my preferred approach is to set everything up in a test case, then pop into the implementation that needs to be changed, where all the state is available, with a binding.pry, iterate on expressions in the REPL until it's good, and then copy my history out into an editor.

In Emacs it's nice and easy with rspec-mode and inf-ruby - run the test from within Emacs and get in-editor REPL once you hit the binding.pry.


It's funny to me as an embedded programmer seeing people write about how they prefer printf to actual debugging. My printf command can take MANY TIMES longer to run than most of the code that I'm trying to fix.

Maybe it's a software vs hardware thing, but I would end it all if I had to work on hardware without breakpoints, watches, and step-through.


I used to build/program embedded systems (around a 6800 uP). I'd debug using an oscilloscope, sometimes an LED attached to a pin, sometimes connecting the pin to a speaker (!). There wasn't enough EPROM space for a printf. And besides, the turnaround time for erasing/blowing an EPROM was just too long.

Essentially you just get good at staring at the code and running gedanken experiments till you figure it out.


Why is it funny? And why do you not consider printf to be "actual" debugging? I mean, if printf wasn't available to me or was for some reason otherwise inconvenient, then I would look for other avenues to debug, perhaps by using a debugger! This isn't that mystifying.

As I said above, this is very heavily dependent on preferred workflows and what you're working on. Long ago, I remember doing some robotics work in C, and a debugger was invaluable.


printf debugging doesn't work when your code has already crashed and all you have is the process dump (and, if you're very lucky, it's a heap dump, not just stacks).


That's like saying that time spent developing a robust resupply and support network is wasted because your army is already starving in Russia. It's true, but sort of misses the point.

Obviously it's possible to debug an issue based only on static state like a core dump. And in extraordinarily rare cases that might be the only available option and a debugger (or more manual tooling) might be your only choice.

But in the overwhelming majority of cases, even working at the lowest and most obtuse levels, the very first step in debugging a problem, long before anyone starts flaming about tool choices, is to come up with a reproducible test case. And once you have that, frankly, fancy tooling doesn't bring much to the table.

At the end of the day you have to spend time staring right at the code at fault and thinking about it, and if you have it up in your editor to do that, you might as well be stuffing printf's in there while you're at it.


Those "extraordinary rare cases" aren't anywhere near as rare or extraordinary when you have millions of users.

In fact, I would argue that reproducible reports are relatively rare in the industry, especially once you get out of developer tooling (where the users are people who know the value of such reports, and how to obtain them).

And then stuffing a printf in the middle of it can easily mean several minutes of build time, for a large native codebase. A tracepoint, on the other hand, is instant.


You can do a binary search by moving the line where the printf statement is, and eventually you will find the line with the error.
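A minimal sketch of that bisection idea in Rust (the pipeline and checkpoint names are invented): if the wrong value or crash appears after checkpoint B prints but before checkpoint C does, the fault lies between them; keep halving that region.

```rust
// printf bisection: each checkpoint narrows down where things go wrong.
// The last checkpoint that printed and the first that didn't bracket the
// faulty region.
fn pipeline(input: &[i32]) -> i32 {
    eprintln!("checkpoint A: {} inputs", input.len());
    let cleaned: Vec<i32> = input.iter().copied().filter(|&x| x >= 0).collect();
    eprintln!("checkpoint B: {} kept", cleaned.len());
    let total: i32 = cleaned.iter().sum();
    eprintln!("checkpoint C: total = {}", total);
    total
}
```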



