
Debuggers Make You a Bad Detective - Nicholas_Kross
https://www.thinkingmuchbetter.com/main/debugging-bad-detective/
======
orwin
It really depends on context. I didn't use gdb at all before the end of my
first year, when I had to code my own POSIX-compliant shell (I don't think it
was compliant by the end of the project, but we were close).

But gdb helped me understand why printf() was not working as intended and how
those pesky registers worked, helped me remove memory leaks, and overall taught
me a lot, fast. I had to rewrite malloc() using mmap() at the start of my
second year, and I don't think any amount of printf() would've helped me as
much as gdb and valgrind did. Also, watchpoints basically do the same job as
printf most of the time (though they're more tedious to use).
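(A very rough sketch of the watchpoint idea, for anyone more at home in Python than gdb: the tracing hook can report every change to a named local, much like `watch total` would. The `watch` helper and `demo` function below are made-up names for illustration, not part of any library.)

```python
import sys

def watch(var):
    """Crude analogue of a gdb watchpoint: print a line every time a
    local variable named `var` changes value in any traced frame."""
    state = {}
    def tracer(frame, event, arg):
        if event == "line" and var in frame.f_locals:
            old = state.get(frame, "<unset>")
            new = frame.f_locals[var]
            if new != old:
                print(f"{var}: {old!r} -> {new!r} (line {frame.f_lineno})")
                state[frame] = new
        return tracer
    sys.settrace(tracer)

def demo():
    total = 0
    for k in (1, 2, 3):
        total += k
    return total

watch("total")
result = demo()       # prints a line each time `total` changes
sys.settrace(None)    # always turn tracing back off
```

Real gdb watchpoints are of course hardware-assisted and far cheaper; this is just the same "notify me when it changes" idea done in pure Python.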

I find pdb useful too (although that might be because I don't understand
Python as well as I should). I definitely use print() though, as the article
says, to narrow down where the breakpoints should be placed. And I think
more than half the time I used pdb it was on my test files, whether it
was unit testing or feature testing (those are hard :/), so better test
coverage would not help me much.

Also, debugging Common Lisp with (print) is useful for beginners not used to
the REPL, but once you understand how this whole SLIME/REPL thing works, not
using it to debug is bold.

------
dastx
Depends, really. If I've just joined a place where I've got no clue how the
code works, the best thing I can do is use a debugger. It helps me understand
how the code works, and gets me to the bugs quicker.

I agree though. If someone knows the code intimately and still doesn't have a
rough clue about why a bug exists, and the first thing they do is look at the
program's entry point and step through it all the way until something pops out,
it will certainly take them forever. In that context, OP is certainly right.

------
amitoz_azad
I tend to use an IPython kernel for debugging in Python. It is not a debugger,
but it does the job for me of finding what is wrong while working on a
scientific project. What are your thoughts on it?

~~~
Nicholas_Kross
That's like a Python notebook, right? Given the quick feedback loop, and that
you're seeing just what you need to see, that seems like an efficient way to
debug that sort of thing.

~~~
amitoz_azad
Yes, the Jupyter notebook thing, as you rightly point out; it does give quick
feedback. One can also start writing code from the beginning in it and push a
working cell into a .py file. That way one is very sure from the beginning
how the code will work. The downside, I think, is that the code ends up
written in a very linear fashion.

Coming to the debugger in Python (pdb): in your blog you criticize using a
debugger to step line by line. But that need not be the case. One can jump to
the debugger prompt, set a breakpoint from the prompt itself wherever one
feels something is wrong, continue to that breakpoint, and repeat the process.
So one need not step through each line. Used that way it works equivalently to
carefully placed print statements, i.e. judiciously placed print statements =
judiciously placed breakpoints. What do you think?
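(That "one judicious breakpoint, then continue" workflow can even be scripted: pdb accepts its commands from any file-like object, so you can drive it non-interactively. `harmonic` below is a made-up example function, not from the article.)

```python
import io
import pdb

def harmonic(n):
    """Sum 1/1 + 1/2 + ... + 1/n."""
    total = 0.0
    for k in range(1, n + 1):
        total += 1.0 / k
    return total

# Feed pdb a breakpoint-and-continue session instead of single-stepping
# from the entry point.
cmds = io.StringIO(
    "break harmonic\n"   # break where we suspect trouble
    "continue\n"         # run until that breakpoint
    "p n * 111\n"        # inspect state at the stop (n is 3 here)
    "continue\n"         # run to completion
)
out = io.StringIO()
pdb.Pdb(stdin=cmds, stdout=out).run("harmonic(3)", globals())
print("333" in out.getvalue())
```

The `p` output lands in `out` exactly where a judicious print statement would have put it, which is the equivalence being claimed.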

~~~
Gibbon1
I think a lot of people who criticize debuggers assume professionals use them
the way a first-year CS student does: as a crutch because they don't understand
the language they're learning.

A professional uses them to trap events and then explore the program's state,
and to inject faults and watch how the program explodes (or doesn't). For
that, debuggers are vastly better than printf debugging.
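(The fault-injection half of this can be sketched outside a live debugger session too, e.g. with unittest.mock in Python. `fetch_size` and `report` below are made-up stand-ins for a real I/O call and its caller.)

```python
from types import SimpleNamespace
from unittest import mock

# Stand-in I/O layer; a debugger would let you do this live, e.g. by
# assigning a bad value at a breakpoint.
io_layer = SimpleNamespace(fetch_size=lambda path: 1024)

def report(path):
    try:
        return f"{path}: {io_layer.fetch_size(path)} bytes"
    except OSError as exc:
        return f"{path}: unavailable ({exc})"

ok = report("data.bin")     # the happy path

# Inject a fault and watch whether the program explodes -- here it
# degrades gracefully instead.
with mock.patch.object(io_layer, "fetch_size",
                       side_effect=OSError("disk gone")):
    bad = report("data.bin")
```

The point stands either way: what you want is to perturb the program's state and observe the reaction, not to read every line it executes.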

~~~
amitoz_azad
"a lot of people that criticize debuggers..." But what really making me think
is that big giants like Torvalds, Rob Pike, Guido van Rossum. Rober C Martin,
John Graham they don't like using debuggers
([https://lemire.me/blog/2016/06/21/i-do-not-use-a-
debugger](https://lemire.me/blog/2016/06/21/i-do-not-use-a-debugger)).

~~~
detaro
That article is a great example: it defines "debugger" as "single-stepping
through code", and then argues against that. It's fine to argue against
"single-stepping through code" as a default technique, but that artificially
limits the scope.

EDIT: one of the comments by the author makes it very clear: they don't
consider lots of things you can do with e.g. gdb as "using a debugger":
[https://lemire.me/blog/2016/06/21/i-do-not-use-a-debugger/#c...](https://lemire.me/blog/2016/06/21/i-do-not-use-a-debugger/#comment-245502) </EDIT>

The submitted article here is similar: it opens with that ridiculous example
and pretends that's what using a debugger means. Debuggers don't make you bad
at finding problems; not knowing when to use which tool does. (And knowing how
to _make new tools_ out of your existing ones is also quite helpful.)

~~~
Nicholas_Kross
This is a good point, thank you! I'm adding a quick note to the article to
clarify that it's more about the code-stepping technique itself.

