
I do not use a debugger (2016) - AlexDenisov
https://lemire.me/blog/2016/06/21/i-do-not-use-a-debugger/
======
apo
Some of the opinions on debuggers are more nuanced than the author is letting
on. Take Bob Martin, for example:

> I consider debuggers to be a drug -- an addiction. Programmers can get into
> the horrible habit of depending on the debugger instead of on their brain.
> IMHO a debugger is a tool of last resort. Once you have exhausted every
> other avenue of diagnosis, and have given very careful thought to just
> rewriting the offending code, _then_ you may need a debugger.

[https://www.artima.com/weblogs/viewpost.jsp?thread=23476](https://www.artima.com/weblogs/viewpost.jsp?thread=23476)

I'm still baffled by people like the author who "do not use a debugger." A
print statement is a kind of debugger, but one with the distinct disadvantage
of only reporting the state you assume to be important.

This part was a little surprising:

> For what I do, I feel that debuggers do not scale. There is only so much
> time in life. You either write code, or you do something else, like running
> line-by-line through your code.

I could easily add something else you might spend the valuable time of your
life doing: playing guessing games with print statements rather than using a
powerful debugger to systematically test all of your assumptions about how the
code is running.

~~~
AliAdams
> I consider vision to be a drug -- an addiction. People can get into the
> horrible habit of depending on their vision instead of on their brain. IMHO
> looking at things is a tool of last resort. Once you have exhausted every
> other avenue of diagnosis, and have given very careful thought to just
> trying to do the action again, then you may need to open your eyes.

Coding without a debugger is like walking with your eyes closed or driving at
night with no headlights. Sure, it may be possible, but you are purposefully
limiting your information in order to not become "dependent" on something.

Tooling will always be a compromise of utility vs reliance but there is a
reason we don't, for example, build cars by hand any more.

~~~
ubersoldat2k7
Exactly. We're engineers and craftsmen, all of whom rely on their tools. If
you don't use the tools at your disposal, you're not a good professional.

~~~
mrdodge
My experience is that IDE- and debugger-dependent programmers have a very hard
time when those things are taken away or unavailable. The reverse is not true.

------
Stratoscope
I practically live in the debugger. I do use it sometimes to debug a problem,
but most of the time I use a debugger to _avoid_ debugging. I use it for
coding.

I spend a lot of time working with APIs and libraries that are poorly
documented and often that I haven't used before.

Instead of writing out a bunch of code based on my limited understanding of
the docs, and likely with many bugs, what works for me is to just write a few
lines of code, until I get to the first API call I'm not sure about or am just
curious about. I add a dummy statement like "x = 1" on the next line and set a
breakpoint there.

Then I start the debugger (which conveniently is also my code editor) and
hopefully it hits the breakpoint. Now I get to see what that library call
_really_ did, with all the data in front of me. Then I'm ready to write the
next few lines of code, with another dummy breakpoint statement after that.

Each step along the way, I get to verify if my assumptions are correct. I get
to write code with actual data in front of me instead of hoping I understood
it correctly.

If I'm writing Python code in one of the IntelliJ family of IDEs, I can also
hit Alt+Shift+P to open a REPL in the context of my breakpoint.

Of course this won't work for every kind of code. If I were writing an OS
kernel I might use different techniques. But when the work I'm doing lends
itself to coding in the debugger, it saves me a lot of time and makes coding
more fun.
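
Roughly, in Python terms (a minimal sketch; the function, the dummy
statement, and the URL are illustrative, not my real code):

    from urllib.parse import urlparse

    def extract_host(url):
        parts = urlparse(url)  # what does this *really* return?
        x = 1                  # dummy statement to hang a breakpoint on
        breakpoint()           # or set an IDE breakpoint on the dummy line
        # Only after inspecting `parts` with live data in the debugger
        # do I write the next few lines:
        return parts.hostname

    if __name__ == "__main__":
        print(extract_host("https://user@example.com:8080/path?q=1"))

At the pdb prompt, `p parts` or the `interact` command (a full REPL in the
current scope) plays the same role as the IntelliJ Alt+Shift+P REPL mentioned
above.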

~~~
hzhou321
When you live in the debugger, you understand your code by seeing how it
works. When you live without the debugger, you have to imagine how it works.
The speed of seeing how your code works cannot match the speed of imagining
how it works. In addition, seeing how your code works brings in a lot of
noise that is not part of your focus; in contrast, you imagine only what you
are focusing on. Of course, the effectiveness of each depends on your
experience with each, respectively.

There is no denying that seeing is the real world, and imagining is just
imagination :).

~~~
greenyoda
Being able to visualize your code in your head is a great advantage, and I've
fixed many bugs that way. But sometimes your imagination hits its limits and
using a debugger to understand what's happening in the code is very useful.

Some cases where your imagination could be limited are:

\- Code that you didn't write.

\- Code that you wrote long enough ago that you don't remember the details.

\- Code that you know very well, but you're having a bad day and can't figure
out the problem just by thinking about it.

~~~
hzhou321
> Some cases where your imagination could be limited are:

> \- Code that you didn't write.

This has become the norm with software written by a team; and to my distress,
I find many coders have given up on understanding code that other people
wrote and reduce it to the minimum that gets them by. My premise is the
necessity of understanding the code -- not only how the code works, but also
how it was conceived and where it is evolving toward. Given that premise,
there is no difference between code you didn't write and code you did.

> \- Code that you wrote long enough ago that you don't remember the details.

That says a lot about the code not being written in its optimal way. Treat
that as a bug, and debug why the details cannot be easily retrieved.

> \- Code that you know very well, but you're having a bad day and can't
> figure out the problem just by thinking about it.

You should take a rest and tackle it the next hour or the next day, when you
can work effectively. Continuing to push through a bad day only risks making
the day worse.

~~~
scarface74
_That says a lot that the code was not written in its optimum way. Treat that
as a bug, and debug why the details cannot be easily retrieved._

I wrote code dealing with railroad car repairs years ago.

[https://www.railinc.com/rportal/documents/18/260737/CRB_Proc...](https://www.railinc.com/rportal/documents/18/260737/CRB_ProceduresManual.pdf)

Should I be able to remember all 200+ pages of the audit rules by heart for
both the car owner side and the repair shop side and remember why I wrote all
of the special cases?

~~~
gizmo686
If you are writing special cases because a 200+ page rulebook says you need
them, then the special cases should be commented to indicate exactly what rule
they are meant to address such that someone with the rulebook can quickly look
it up.

Ideally, you would have all the rules encoded in a central place so this
lookup becomes obvious from the structure of the code, but that is often not
possible.

I am involved in a similar project now. We have a proprietary data format
which has, over the years, evolved slightly different versions as different
teams extended it in slightly different and incompatible ways. The code has
all kinds of special cases, to the point where we developed internal style
guidelines for how to comment them.
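
For illustration, a special case commented this way might look like the
following sketch (Python; the rule numbers and record fields are invented,
not our real guidelines):

    def audit_repair(record):
        """Validate one repair line item against the (hypothetical) rulebook."""
        issues = []
        # Rule 83(d): wheel repairs performed at the car owner's own shop
        # are not billable to the owner.
        if record["job"] == "WHEEL" and record["shop"] in record["owner_shops"]:
            issues.append("Rule 83(d): non-billable home-shop wheel repair")
        # Rule 112: any labor charge must have a matching material line.
        if record["labor"] > 0 and not record["materials"]:
            issues.append("Rule 112: labor charged without a material line")
        return issues

Someone holding the rulebook can then jump from code to rule and back without
reverse-engineering the intent.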

~~~
scarface74
So now you have the rule book and the comments, and if there is a bug in any
of the code, am I supposed to remember what all the code does and think
through it? Am I supposed to be able to just think through why the one set of
hundreds of records that came from one of 200+ repair yards is giving back
erroneous results, or should I just use a debugger and set a conditional
breakpoint?
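
For the record, the conditional-breakpoint version of that hunt, sketched in
Python (the yard ID and record fields are invented), stops only on the
suspicious records so you can inspect full state there:

    def apply_rules(record):
        pass  # audit logic lives elsewhere

    def process(records):
        for record in records:
            if record["yard_id"] == "YARD-0042":  # the one yard acting up
                breakpoint()  # inspect `record` and the call stack here
            apply_rules(record)

In an IDE, or in raw pdb (`b file.py:LINE, record["yard_id"] == "YARD-0042"`),
the condition attaches to the breakpoint itself, with no code change at all.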

~~~
bartimus
In this particular case you should probably implement some logging or
alternative output that shows how the rules are being applied. Also perhaps
the rules shouldn't be part of the code itself. It would be preferable to be
able to add/remove/update a rule without needing to deploy a new version of
the program (without going down the rabbit hole of implementing an entire
logic engine).

~~~
scarface74
That’s exactly what you end up doing - writing your own rules engine instead
of just using your language of choice.

How much harder is it to "deploy your whole program" than to change a complex
XML/JSON configuration file or database, test it, and then deploy that?

In fact, it's usually much easier deploying code. Setting up a simple pipeline
to deploy a service is not rocket science.

~~~
bartimus
It makes sense to use a programming language for the rule logic. I'm just
suggesting you need to conceptually separate the software code from the rule
code. Have the rule code in a separate repository that can be deployed
independently (by people with different roles). You can't just deploy new
code. You'd perhaps want to apply previous versions of a rule.

In any case, the people maintaining/using the rules might not have a debugger
available. They still need some kind of "rule stack trace" showing how the
system came to a certain answer.

------
maximus1983
> Doing “something else” means (1) rethinking your code so that it is easier
> to maintain or less buggy (2) adding smarter tests so that, in the future,
> bugs are readily identified effortlessly. Investing your time in this manner
> makes your code better in a lasting manner… whereas debugging your code
> line-by-line fixes one tiny problem without improving your process or your
> future diagnostics.

I've been a professional now for about 15 years and very, very rarely do I get
to work on "my code". Almost all of the code I have to work with was written
by someone else originally and I have to just modify the system for new
requirements. Tests do not exist or if they do they are largely incomplete.

So the only thing to do is to step through, find the problem, fix the ticket,
and move on.

Sure, if I get to design the system, I normally write it very simply and
well-structured, with appropriate levels of abstraction and with enough tests
to expose the bugs in my code. But very rarely do I get paid to work on my
code, because my code doesn't need a lot of maintenance. I am normally asked
to make changes to bad systems.

> Brian W. Kernighan and Rob Pike wrote that stepping through a program is
> less productive than thinking harder and adding output statements and self-
> checking code at critical places. Kernighan once wrote that the most
> effective debugging tool is still careful thought, coupled with judiciously
> placed print statements.

Thinking harder when I have a project with millions of lines of code (this is
normal in large financial systems) won't help me. A debugger will.

I think a lot of these famous programmers have never had to work with
something terrible and probably never will and that is why they make such
blasé statements.

~~~
tootie
I find this kind of thinking to be borderline insanity. Why would I expend so
much mental energy trying to understand what the past dozen developers were
thinking when it's trivial to just inspect values and deal with the reality of
the system as it is right now. If a function is expected to return a value of
25 and it's returning 17, I don't need to reason about their code to fix it.
My tactic is always to write some very tight unit tests around the sections of
code closest to the problem and run them through a debugger so I can see where
it goes awry.

~~~
mikekchar
I don't think what you are doing is vastly different than what is described.
"Thinking hard" is about trying to isolate the problem. RMS famously said that
you should try to only debug the code that is broken rather than trying to
debug the code that isn't broken (this is especially true in big systems).
Where is the error happening? How are these things connected? Etc, etc. When
you write very tight unit tests around the sections of the code, you have to
do exactly the same thing (and I would argue that writing those tests is a
good tool for "getting into" the code).

Next: running a debugger. I use print statements, but seriously, what's the
difference? You use watch points in the debugger. Same thing. Once I have
tests I find it easier to run them and look at the output of my prints as
opposed to stepping through the code. Stepping through the code requires you
to remember what you have done before and what the output was (granted
debuggers that allow you to go backwards are helpful). If you can do that,
then it's all good, but I find that it's easier for me to essentially create a
log and read through it. It's just a bit more structured, but in the end it's
exactly the same thing.

I think the biggest mistake that less experienced programmers make is that they
don't try to reason about the code before they start. It's that isolation
that's key, not how you are displaying the state of the program. Single
stepping is fine when you've got 100 lines of code, but when you have
thousands or millions of lines of code, it's not going to work. You need to be
able to work your way backward from the error, reasoning using the source code
as a map. You then use debugging tools (or printfs) to narrow down your
options.

~~~
maximus1983
> I use print statements, but seriously, what's the difference? You use watch
> points in the debugger. Same thing. Once I have tests I find it easier to
> run them and look at the output of my prints as opposed to stepping through
> the code.

Maybe I have been spoiled by Visual Studio but as you said there are plenty of
options in the debugger for winding back execution, changing execution,
inspecting Objects, inspecting the state of the stack at the time, I can debug
other people's assemblies, I can debug machines remotely.

Writing out to the console / log is my last resort.

> Stepping through the code requires you to remember what you have done before
> and what the output was (granted debuggers that allow you to go backwards
> are helpful).

No, it doesn't require you to remember what you have done before. Even
relatively basic debuggers, such as the JavaScript debuggers in most browsers,
have a stack trace showing what has been called where.

> If you can do that, then it's all good, but I find that it's easier for me
> to essentially create a log and read through it. It's just a bit more
> structured, but in the end it's exactly the same thing.

I don't understand why you would claim an inferior tool is better when a far
superior one is available. It would be like saying that an impact wrench and
a spanner are the same thing; technically they both undo bolts, but one makes
it far easier than the other.

------
roca
I wonder if the author has actually used a debugger with practical reverse
execution, like rr, UndoDB or TTD. It is not just another feature like
"pretty-printing STL data structures"; it changes everything. Debugging is
about tracing effects back to causes, so reverse execution is what you want
and traditional debugging strategies are workarounds for not having it.

Those record-and-replay debuggers also fix some of the other big problems with
debugging, such as the need to stop a program in order to inspect its state.

The author is right that the traditional debugger feature set leaves a lot to
be desired. Where they (and many other developers) go wrong is to assume that
feature set (wrapped in a pretty Visual Studio interface) is the pinnacle of
what debuggers can be. It's an understandable mistake, since progress in
debugging has been so slow, but state-of-the-art record-and-replay debuggers
prove them wrong. Furthermore, record-and-replay isn't the pinnacle either;
much greater improvements are possible and are coming soon.

~~~
wallnuss
I especially use rr as an exploration tool. It allows me to ask not the
question "Where does the program go from here", but rather "Where did it come
from".

I couldn't do my daily work without rr, since I often work with systems that
are large and complex and that I didn't write myself.

------
greenyoda
The article neglects one of my most important use cases for a debugger:
figuring out how an undocumented piece of legacy code works. For example, if I
perform a particular action in the UI, does function foo() get called? I find
that setting breakpoints and tracing execution and variable changes in a
debugger is an effective way of doing this kind of reverse engineering.

Also, when you're working with code that takes a long time to compile and
link, using a debugger to check program state can be a lot quicker than
recompiling the code with added print statements.

Different developers work with different types of code in different
environments, so just because Linus Torvalds or Rob Pike does something one
way doesn't mean that this is the most effective way for everyone else to do
it.

~~~
cafard
Upvoted.

Many years ago, we were installing an ERP system when we discovered that it
would not run payroll -- or rather, it would run payroll for every employee
except for the two who were in a certain jurisdiction. The very expensive
consultants who were overseeing the implementation did not have any useful
ideas on how to resolve this. (Though they did have a useless and time-
consuming one.) I figured out how to use the COBOL debugger (called an
"animator"), and in a couple of hours located the problem. Perhaps if I had
known how to write COBOL printfs, I could have done without the animator, but
I was all but illiterate in COBOL.

I have since inherited the care of a WinForms system, and without the Visual
Studio debugger I'd be lost there.

------
drewg123
The article talks about live debugging & stepping through code, which I agree,
I almost never do. But post-mortem debugging of coredumps is a different thing
entirely.

In my job on the kernel team at Netflix's Open Connect CDN, I do post-mortem
debugging of kernel core dumps almost daily. On a fleet the size of our CDN,
there will invariably be a kernel panic which you'd not otherwise be able to
reproduce. In fact, I often use _2_ debuggers: kgdb for most things, and
occasionally a port of the OpenSolaris mdb for easily scripting exploration of
the core.

My hat is off to all the people who have made my debugging so much easier:
e.g., the llvm/clang folks who emit DWARF, the gdb folks who make the FreeBSD
kgdb debugger possible, and the Solaris folks who wrote the weird and
wonderful mdb.

~~~
tzhenghao
Yes to using debuggers for coredumps. I never found the whole UX of GDB good
enough to displace my print-statement workflow, but for post-mortem analysis
it's really helpful to be able to hit 'bt' (backtrace) and almost instantly
see what went wrong.

------
BrissyCoder
As always with these kinds of dogmatic posts, it ignores the real world:
working on years-old legacy software with millions of lines of code and off-
the-charts cyclomatic complexity.

I could show any one of these authorities a situation where their bravado
would fail: trying to figure out a particular bug by "staring at the code and
thinking harder" would be impossible.

Besides, they all tend to concede that they will use print statements or
whatever as a last resort. That is absurd: you have to add the statements,
recompile the code, and eventually remove them again. Just use the goddamn
debugger; that's what it's there for.

What starts as a completely reasonable "hey maybe sometimes you should try to
read the code and not let the debugger become a crutch" turns into a
sensationalist black/white statement "I NEVER USE A DEBUGGER". Grow up.

~~~
deergomoo
I’ve never understood the “I don’t need a debugger - print statements are
fine” argument. Even if you only use a debugger to view application state,
(which, for most debuggers, is but a tiny fraction of their capability) it’s
still an order of magnitude more convenient than print statements.

No need to modify code, and no risk of forgetting to remove a print statement
(I have seen, first hand, a forgotten dump on a rare execution path leak
sensitive data to users). Not to mention the cases where printing can change
or break the program; for example, printing before sending headers in
something like PHP.

------
tootie
I find this to be completely unbelievable. If I'm in IntelliJ or Visual
Studio, then adding breakpoints is equivalent to, but easier than, adding
print statements. It's trivial to click the gutter on the line I'm suspicious
of and run the program in debug mode. I can see all the local vars (exactly
what I'd do with a print statement) and, if something is amiss, easily go
back up the call stack to see what went wrong. I find it mind-blowing when
developers don't do this, because it's so easy and so valuable. It's just so
much harder to do with functional languages or languages without a first-rate
IDE, which seem to be what people like these days. In fact, I find it
mind-blowing when developers don't consider this a mandatory feature when
choosing a language.

~~~
Merad
Visual Studio's debugger is especially magical with .NET. With the debugger
paused on a breakpoint you can drag the "next statement" pointer backwards to
rerun pieces of your code. You can also edit the code while paused and your
changes will be reloaded on the fly when you continue execution.

~~~
zvrba
> you can drag the "next statement" pointer backwards to rerun pieces of your
> code.

Works also in C++, though not very well when the debugger stops on an
exception.

------
ampersandy
The author fails to understand the true purpose of "debugging". It's not
about finding and fixing a bad line of code. It is about broadening your
understanding of a system and correcting faulty assumptions about how it
works, then applying this greater understanding to (a) find and fix issues
and (b) avoid creating new problems in the future.

Once you have extensive knowledge of how a system works, you can more easily
spot incorrect code and are less likely to write it yourself. You should
always treat any debugging session as "learning more about the system". If you
can't explain to another person _why_ a bug occurred, then you won't be able
to convince them that the problem is truly fixed.

The approach you take to learn how the system works is not important, and will
vary from person to person. Experienced developers can read code, think about
it, and understand what it does. Some people understand control flow through
print statements. Some people can fly through with a debugger. Saying someone
else's method of study is "wrong" is just silly if in the end they understand
how it works.

This article is especially wrong because it makes the assumption that you must
step through a program line by line when you use a debugger, which you don't
have to do. You can just put breakpoints at the same places where you would
have put an equivalent print statement, except now you can inspect
_everything_, instead of the one or two things you bothered to print out.

~~~
Gibbon1
When I read things like the linked article all I can think of is Hair Shirt.

See Latin cilicium, French cilice.

[http://www.newadvent.org/cathen/07113b.htm](http://www.newadvent.org/cathen/07113b.htm)

These types of articles reek of this to the heavens.

------
heinrichhartman
I use debuggers (GDB) exclusively in batch mode. It allows for a very
methodical debugging approach:

1. Formulate a hypothesis.

2. Determine what data you need for verification.

3. Instrument the code using gdb break commands to hook function calls,
sys-calls, signals, state changes, etc., and print debugging information
(variables, stack traces, timings) to a log file.

4. Then run an automated or manual test of the failure you are debugging.

5. Then STOP data collection. Kill the process. Shut down the debugger.

6. Perform forensics on the log file.

7. Validate/discard the hypothesis.

This allows you to reason reliably about the computational process:

\- Why did I see this log line before that one?

\- Why was this variable NULL here, but not there?

You don't need to do that while you are in a debugging session. You can take
your time. You can seek back and forth in the file. It's like dissecting a
dead animal, not chasing a fly.

In addition, you can share concise information with your colleagues:

\- This is the instrumentation

\- This is the action I performed

\- This is the output

And ask concise questions that people might be able to answer:

> I expected XYZ in log line 25 to be ABC, but it was FGH. Why is this?
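
A minimal sketch of steps 3-5 using gdb's Python scripting instead of raw
break commands (the function name `parse_request` and its local variable
`state` are invented); run with `gdb -batch -x instrument.py ./myprog`:

    # instrument.py
    import gdb

    log = open("debug.log", "w")

    class LoggingBP(gdb.Breakpoint):
        def stop(self):
            frame = gdb.selected_frame()
            state = frame.read_var("state")              # collect the data...
            log.write("parse_request: state=%s\n" % state)
            log.write(gdb.execute("bt 3", to_string=True))
            return False                                 # ...and keep running

    LoggingBP("parse_request")
    gdb.execute("run")  # step 4: run the failing test
    log.close()         # step 5: stop collection; forensics happen offline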

~~~
AlexCoventry
That also sounds like a very useful approach for white-box testing of
functionality you don't want exposed by the module's public interface.

------
alkonaut
So the argument is that if you use a debugger you become lazy and stop
reasoning about the code. That's a pretty terrible argument against any tool.
"It's a good tool, but if used wrong has drawbacks". I have heard a similar
argument against syntax highlighting. I'm not kidding.

I'm sure I'd write more carefully thought-out code if I took a screwdriver
and removed my backspace key. But that doesn't mean it's a good idea.

How about: use a debugger AND reason about the code? My pet theory: the
reason the listed developers (Kernighan, Torvalds, et al.) don't use
debuggers is that the debuggers they have available just aren't good enough.
It would be very interesting to have them describe their development
environments, what debuggers they have actually used, and in which languages.

A debugger isn't much better than println if you are working in a weak type
system and with a poorly integrated development environment. If you do C on
linux/unix and command line gdb is your debugger then I understand why println
is just as convenient.

println doesn't solve the problem of breaking in the correct location, or of
trying a change without restarting (by moving the next-statement marker to
the line before the changed line), and it doesn't support complex watch
statements to filter data or visualize large data structures, and so on.

------
mahidhar
So I work with a company where we provide online nutrition consultation to
employees from other organisations as part of a larger corporate medical
program. We have a scheduling system, developed in Rails, to deal with these
appointments. It was built by a person who quit the company some time back.

A few weeks back, we got some complaints from a client who said some of their
employees weren't getting allotted an appointment slot, despite the fact that
the slot is supposedly free. I dived into the codebase to try and figure out
the problem. There were some minor bugs which I spotted first, and could fix
without using a debugger. But the appointments were still getting dropped
occasionally. So I started tracing the control flow more carefully.

That’s when I found one of the strangest pieces of code I had ever seen. To
figure out what the next available appointment slot was, there was a strange
function which got the last occupied slot from the database as a DateTime
object, converted it to a string, manipulated it using only string
operations, and finally wrote it back to the database after parsing it back
to a DateTime object, before returning the response! This included some
timezone conversions as well! Rails has very good support for manipulating
DateTimes and timezones, and yet the function's author had written the
operation in one of the most confounding ways possible.

Now, I could have sat there and understood the function without a debugger,
as the article recommends, and then, having understood it, rewritten it using
proper DateTime operations. But with a client and my managers desperately
waiting for a fix, I used a debugger to step through the code, line by line,
understanding the issue only locally, and fixed the bug, which was buried in
one of the string operations. That solved the problem temporarily, and
everyone was happy.

A week later, when I had more time, I went back and again used the debugger to
do some exploratory analysis, and create a state machine model of the
function, observing all the ways it was manipulating that string. I added a
bunch of extra tests, and finally rewrote the function in a cleaner way.
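
For illustration (in Python rather than Rails, with an invented slot length
and timezone), the cleaner version works on real datetime values and converts
only at the boundaries, with no string surgery anywhere:

    from datetime import datetime, timedelta
    from zoneinfo import ZoneInfo  # Python 3.9+

    SLOT = timedelta(minutes=30)

    def next_slot(last_occupied_utc: datetime) -> datetime:
        # One timezone conversion in, arithmetic on real datetimes,
        # one conversion out.
        local = last_occupied_utc.astimezone(ZoneInfo("Asia/Kolkata"))
        return (local + SLOT).astimezone(ZoneInfo("UTC"))

    print(next_slot(datetime(2019, 6, 1, 9, 0, tzinfo=ZoneInfo("UTC"))))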

Instead of romanticising the process of developing software by advocating the
use or disuse of certain tools, we should be using every tool available to
simplify our work, and achieve our tasks more efficiently.

~~~
juangacovas
"we should be using every tool available to simplify our work, and achieve our
tasks more efficiently"

Yep. I advocate for some "micro-managing" within a team for this type of
stuff (spotting things your people do that you know there's a faster way to
do). Everybody learns from that process.

------
gumby
The end of the essay implies that "debugger" is a poor name because of course
you still have to fix the bug yourself. If we instead call it a "bug-location
explorer", I find them invaluable for certain problems. In particular, when you
have a very complex system that takes a while to get into its fragile state,
when I/O is difficult (e.g. many embedded systems), or as others have noted
here when you are spelunking others' code. Breakpoints are crucial since most
bugs with modern languages are wrong output and not bus errors.

I started out as a user of "print" as a debugging technique, but early on Bill
Gosper took pity on me and introduced me to ITS DDT and the Lispm debugger.
They both had an important property that they were _always_ running, so when
your program fails you can immediately inspect the state of open files and
network connections. No automatic core dumps except for daemons. The fact that
you explicitly have to attach a debugger before starting the program is a
regression in Unix IMHO.

It doesn't surprise me that Linus doesn't use one as kernel debugging is its
own can of worms.

~~~
toast0
I don't know if it has always been the case, but it is certainly possible to
attach and detach a debugger from an existing process; gdb/lldb take -p for a
pid. (Although, in my experience, detaching from threaded processes risks
leaving the process stuck.)

Briefly popping into the debugger can be a really quick way to find where an
infinite loop is happening, especially if it's not in your code, or not where
you expect. I found this especially helpful in diagnosing a FreeBSD kernel
loop I ran into; once the location of the loop was clear, the fix was simple.

~~~
Gibbon1
As you can see, we have identified the crash in this process. Good, have
Steiner attach GDB to it to find out what is wrong. Mein Fuhrer... Steiner...
Steiner accidentally killed the process and rebooted the server.

------
MaulingMonkey
I use debuggers to track down and understand compiler codegen bugs.

I use debuggers to track down and understand undefined behavior, where
mutating the code with logging statements may cause the bug to disappear.

I use debuggers to understand the weird state third party libraries have left
things in, many of which I don't have the source code to, or even headers for
library-internal structures, but _do_ have symbols for.

I use debuggers to understand and create better bug reports and workarounds
for third party software crashing, when I don't have the time, the patience,
or the ability (if closed source) to dive into the source code to fix it
myself.

I use debuggers to verify I understand the exact cause of the crash, and to
reassure myself that my "fixes" actually fixed the bug. This is especially
important with once-in-a-blue-moon heisencrashes with no good repro steps. I
want a stronger guarantee than "simplify and pray that fixed it".

Yes, if your buggy overcomplicated system is a constant stream of bugs, think
hard, refactor, simplify, do whatever it takes to fix the system, stem the
tide, and make it not a broken pile of junk.

But sometimes bugs happen to good code too, and sneak through all your unit
tests and sanity checks anyway. And despite rumors such as:

> Linus Torvalds, the creator of Linux, does not use a debugger.

Linus absolutely uses debuggers:

>> I use gdb all the time, but I tend to use it not as a debugger, but as a
>> disassembler on steroids that you can program.

He just pretends he's not using it as a debugger (as if "a disassembler on
steroids that you can program" isn't half of what makes a debugger a debugger)
and strongly encourages coding styles that don't require you to rely on them
heavily.

------
shin_lao
I see mentions of gdb -- an extremely cumbersome tool -- but no mention of
Visual Studio and its marvelous debugger.

I too, when I'm on Linux, don't use a debugger, because there's no good
debugger there; adding a print statement is faster and easier, and figuring
out what is going on with gdb is just plain horrible and slow.

That's why, as soon as I find a bug on Linux, I try to reproduce it on
Windows to leverage VS's debugger. I can't count the number of times I could
instantly spot and understand a bug thanks to it.

"Think harder about the code", sure, and what if you didn't write the code?

~~~
marcosdumay
There is nothing marvelous about the VS debugger (it even had bugs in basic
functionality until very recently). It just presents a simpler interface
because you are launching it from an IDE; get a C++ IDE on Linux and it will
be just as easy to launch.

But if you are looking for marvelous debuggers, I do recommend you look at the
Python ecosystem.

~~~
bbernoulli
Can you recommend specific python debuggers? I recently had to debug some py3
remotely and ended up using VS code, which worked but had some hiccups.

------
tie_
Sounds like a theologian who won't use a microscope, and instead thinks
harder about, and reinterprets, their holy book of choice. You can arrive at
interesting insights along the way, but I wouldn't call it practical or
efficient.

Debuggers are the most _amazing_ tools to explore and learn the code in your
hands. They do have limitations, as pointed out, but presuming that your inner
insight/gut feeling leads to more scalable and lasting results is ridiculous.

------
michaelmrose
Smarter people have expressed more ill-considered opinions.

"In the 1950s von Neumann was employed as a consultant to IBM to review
proposed and ongoing advanced technology projects. One day a week, von Neumann
"held court" at 590 Madison Avenue, New York. On one of these occasions in
1954 he was confronted with the Fortran concept; John Backus remembered von
Neumann being unimpressed and that he asked, "Why would you want more than
machine language?" Frank Beckman, who was also present, recalled that von
Neumann dismissed the whole development as "but an application of the idea of
Turing's 'short code."' Donald Gilles, one of von Neumann's students at
Princeton, and later a faculty member at the University of Illinois, recalled
that the graduate students were being "used" to hand-assemble programs into
binary for their early machine (probably the IAS machine). He took time out to
build an assembler, but when von Neumann found out about it he was very angry,
saying (paraphrased), "It is a waste of a valuable scientific computing
instrument to use it to do clerical work.

"[https://history.computer.org/pioneers/von-
neumann.html](https://history.computer.org/pioneers/von-neumann.html)

~~~
gizmo686
The 1950s were a different time. Nowadays computing power is almost too cheap
to meter. I do not think anyone at almost any software company knows how to
answer the question "how much money are we spending on running our
compilers?" In most cases, I suspect the cost of running the compiler is
dominated by the salary of the programmer as he types "make" and waits for
the compilation to finish.

We live in a world where every employee has a previously unimaginable amount
of processing power dedicated for their personal use that spends almost all of
its time idling. That results in a far different calculus than the world where
processing power is a scarce resource.

~~~
greenyoda
If an assembler or compiler could save you from having to run dozens of extra
batch jobs to fix a bug in your hard-to-understand machine language code, it
might have allowed more time on an expensive machine to be used for productive
purposes.

------
yonixw
He cites Linus, who says that not using a debugger results in a longer time
to market, which is good for Linux in his mind. How is this supposed to
convince me?

Anyway, imagine a detective trying to figure out a murder scene without going
through it step by step. Of course, with experience comes speed. So, weird
flex, but OK. I will still prefer C# over any language just because the
debugger in Visual Studio is amazing!

After re-reading the article, I see it as another "just code better" piece,
which hopefully will make it easy to pinpoint the bug just from the problem
itself. In a complex system with a lot of feedback loops, that's nearly
impossible without debugging or using logs (which for me are just a
serialized debugger).

------
Ace17
Debugging is reverse-engineering, i.e. trying to understand what's really
happening inside a program.

It necessarily implies some loss of control over the code you (or somebody
else) wrote, i.e. you're no longer sure what the program does -- otherwise,
you wouldn't be debugging it, right?

If you get into this situation, then indeed, firing up a debugger might be
the fastest route to recovery (there are exceptions: e.g. sometimes printf-
debugging might be more appropriate, because it flattens time and allows you
to visually navigate through history and determine _when_ things started to
go wrong).

But getting into this situation should be the exception rather than the norm,
because whatever your debugging tools are, debugging remains an unpredictable
time sink (especially step-by-step debugging; cf. Peter Sommerlad,
"Interactive debugging is the greatest time waste" (
[https://twitter.com/petersommerlad/status/107895802717532979...](https://twitter.com/petersommerlad/status/1078958027175329793)
)). It's very hard to estimate how long finding and fixing a bug will take.

Using proper modularity and testing, it's indeed possible to greatly reduce
the likelihood of this situation, and when it occurs, to greatly reduce the
search area of a bug (e.g the crash occurs in a test that only covers 5% of
the code).

I suspect, though, that most of us are dealing with legacy codebases and
volatile/undocumented third-party frameworks. This means we're in this
"loss-of-control" situation from the start, and most of the time our work
consists of striving to get some control/understanding back. To make matters
worse, fully getting out of this situation might require an amount of work
whose estimate would give any manager a heart attack.

------
emp
“Stepping through code a line at a time” is a rather narrow definition of
using a debugger. Setting a logging breakpoint. Conditionally breaking.
Introspection by calling functions while the program is paused. Testing
assumptions by calling functions. Conditionally pausing the program. Setting a
breakpoint in code and then calling functions to activate the code path to the
breakpoint. This is the core of what lldb does for me. I agree, single
stepping is rare and is like looking for a needle in a haystack. But are all
these other aspects not also using the debugger?
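
A rough Python/pdb rendering of the "introspection by calling functions"
point (the Cache class is invented for illustration):

    class Cache:
        def __init__(self):
            self.data = {"a": 1, "b": 2}

        def size(self):
            return len(self.data)

    cache = Cache()
    breakpoint()  # at the (Pdb) prompt, try:
    #   p cache.size()        <- introspect by calling the program's functions
    #   cache.data["c"] = 3   <- mutate live state to test an assumption
    #   c                     <- continue, and watch the effect below
    print("resumed with", cache.size(), "entries")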

------
bch
> However, the fact that Linus Torvalds, who is in charge of a critical piece
> of our infrastructure made of 15 million lines of code (the Linux kernel),
> does not use a debugger tells us something about debuggers.

It tells us more about Linus. I hope no impressionable developers take this
article too seriously.

------
speedplane
Coding efficiency is heavily influenced by the time it takes to iterate. Once
you write something, the time it takes to test it out, change it, and try
something new provides a nice upper limit on how fast you can develop
software.

The speed of iterating changes to code has a huge influence on the tools that
you use. In a language like C, compiling new changes to code can take many
seconds or sometimes minutes. Thus, you'll want to have heavy duty tools that
can carefully analyze how your code is operating.

In contrast, in a scripting language, changing a line and rerunning the code
can take far less time (a few seconds for even large programs). Thus, you can
iterate more often, and so you don't have to be as careful in each iteration.

The moral of the story is that debuggers can be extremely helpful for some
languages, especially those that take a long time to compile. However, while
still helpful, they are far less helpful for languages that you can run
quickly (I'm thinking Python here).

------
rspeele
The anti-debugger argument always seems to set up this strawman debugger user
who blindly starts the debugger in response to any problem, applies no thought
to what's going on, doesn't dig deeper, etc.

Sure, maybe that guy exists. Maybe you've seen "that guy" or been "that guy".
Does that mean that it has no value to be able to stop a program and look at
all the values?

------
pbiggar
I think we need to rethink what we mean by debugger. In Dark
([https://darklang.com](https://darklang.com)), we don't have a debugger.
Instead we combine the editor with the language, and then, when you're
editing code, you have available at all times a particular program trace
(typically a real production value, which is automatically saved because Dark
also provides the infra). As a result, you can immediately see the value of a
variable in that trace.

Is this a debugger? Sure. It sorta lets you step through the program and
inspect state at various points. But it's also a REPL, and it also strongly
resembles inserting print statements everywhere. It's also like an exception
tracker/crash reporter and a tracing framework.

IMO it's both simpler and more powerful than any of these. It's like if every
statement had a println that you never have to add but is available whenever
you want to inspect it. Or like a debugger where you never have to painfully
step through the program to get to the state you want.

So overall, I think we need to think deeper about what a debugger is and how
it can work. Most of the people quoted do not have a good debugger available
to them, nor a good debugging workflow.

------
breatheoften
I hate this article and disagree strongly with the ideas presented.

“Debuggable” code is written in a certain style — just like “testable” code.

I consider a codebase to be good when there are meaningful places to put
breakpoints, sufficient for running learning experiments about the code. Just
as a codebase with tests is often a better codebase as a result of being
written in a way that supports testing, a codebase that supports debugging
can also often be a better codebase. And these work well together (putting
breakpoints in test cases is often a really great idea!).

I think one of the reasons the value of the debugger so often fails to be
noticed by experienced developers is that so many systems are architected in
a horrific way which really does not allow easy debugger sessions — or the
debugger platform is so underpowered that debugging is unreliable. There’s
nothing worse than not trusting the debugger interface — “I want to do an
experiment where I run code from here up to here” needs to be easy to
describe and reliable to execute, otherwise it is too much pain for the gain.
In my opinion, failure to make this easy is not a fault of the concept of a
debugger but a fault of the codebase or the tooling (which often is very
inadequate).

------
jchw
I do not use a debugger most of the time either. Let me tell you why I don’t:

\- Because I forgot how to use it (or never knew how.) There are many
debuggers and UIs and I still know how to use some of them to decent effect,
but I simply don’t know how to be effective with most of them.

\- Because I’m pretty confident I have a good understanding of what code is
doing nowadays. My intuition has been honed over the years and I tend to
quickly guess why my code isn’t working.

\- Because my code is all unit tested now. This contributes to my ability to
be more sure about what code is _actually_ doing.

There are still some cases where I may try a debugger. I had one recently
where I was unsure what path my code was taking and I wasn’t sure how to
printf debug. That helped a lot.

Not using a debugger is not really a choice I made or something I do to try to
look impressive, rather it’s most likely a result of the growing diversity of
programming languages and environments I work in, combined with better testing
habits. I just feel like I have enough confidence to fix the bugs quickly.
When I lose that confidence is when I break out printf or the debugger.

~~~
scarface74
_Because I’m pretty confident I have a good understanding of what code is
doing nowadays. My intuition has been honed over the years and I tend to
quickly guess why my code isn’t working._

So does your code ever use other libraries? Does it ever call third party
APIs? Do you ever have to modify code you didn’t write? Do you remember what
your code does that you wrote 10 years ago?

~~~
jchw
Surprisingly, third party libraries have ended up being fairly predictable as
well. Not perfectly, but enough that I am usually not too concerned about it.
Tend to RTFM at least once, though.

~~~
scarface74
Try reading the manual for Boto3 and writing code blind, without being sure
of the response format and without just calling it first and looking at the
response.

[https://boto3.amazonaws.com/v1/documentation/api/latest/inde...](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html)

~~~
jchw
This may surprise you, but I don't use Python or boto3. Last time I used
either of those was nearly half a decade ago. Software I use nowadays
generally has much better documentation, not to mention is written in a
language where I can get code completions and accurate typings.

Also, I never said that I don't read the source code of third party libraries.
In fact, I do. I like to embed third party libraries directly so that I can
inspect them inside of my workspace. For almost any library that I use,
though, it hardly requires much familiarity to get basic operations working.
Since a good amount of the libraries I use at my day job are proprietary, I
can't just google for Stack Overflow questions and sometimes documentation
isn't always available, so it would be pretty debilitating if I needed a
debugger just for the simple task of utilizing a library.

This is also part of why I stopped using Python. MyPy shows promise, but
without typings at least on par with TypeScript, my productivity in Python was
surprisingly poor despite amazing frameworks like Django and DRF. Simply put,
there was almost always too much futzing around, and even with many debugging
techniques I sometimes never fully figured out what was wrong with my code.

~~~
scarface74
I used Boto3 as an example of a large, complex library where you aren't going
to inspect every line of code. But you mentioned web development, so do you
inspect every single line of every third-party dependency? Every library?

~~~
jchw
Nope, just when necessary; usually I don't read very much of the code.
Intuition, comments, unit testing, and editor help do the rest.

Well designed APIs are self explanatory.

~~~
scarface74
So I just showed you a massive API; could you use your intuition, comments,
etc. to figure out the response? Are you claiming that any API that you can't
understand through "intuition" is by definition not well designed?

Have you ever integrated with something like Workday?

~~~
jchw
Hell yeah I'm claiming APIs I have trouble understanding with intuition are
not well designed. You might even call them... unintuitive. The entire point
of an API is for it to be consumed; A nice uniform API like Stripe or Qt is
easy even with minimal docs. If a user has trouble with your application, you
don’t immediately blame them, you have a UX problem. If your API is hard to
use? That’s a problem. This only becomes more true as the stakes get higher.
How do you feel about unintuitive cryptography APIs?

Have I ever used an API that is hard to use and confusing? Yes. JIRA,
Workfusion come to mind. The latter was a SOAP API. Thankfully, I don’t really
need to deal with SOAP anymore, and good riddance. In case of bad APIs, my
approach has always been to strongly isolate them from my code with an
abstraction; most dramatically by actually putting a service up that just
interfaces with the API I don’t like and provides a minimal interface to the
functionality I need. At work, I usually _have_ to do that, for security
reasons, even if the API is good.

Anyways, I really, genuinely don’t have any clue what you’re trying to prove
here. You’ve gone on and on about how bad APIs exist and imply this means I
must be lying. So what do I have to do, link to some docs and say “Hey, check
out how usable this API is”? I won't bother but the two aforementioned (Qt,
Stripe) are great examples in two completely different categories.

And I have literally not even the smallest clue how this ties back to
debuggers. I can’t recall a single time my reaction to a confusing API was
pulling out a debugging tool other than printf; more likely I’d play around in
a sandbox or REPL instead.

~~~
scarface74
No, what I am saying is that defining a bad API as one that you can't intuit
is a case of the "No True Scotsman" argument. The Boto3 API is well
organized; it just covers a massive surface. It's much easier just to call
the API and inspect a real-world response.

For instance, of course I know all of the database instances in our AWS
account, and I read the Boto3 docs to see what API to call to return them,
but it was much easier to call the API and look at the response in the
debugger than to guess its shape from the documentation. The Boto3 module
covers every single API for every service that AWS offers. Of course the API
isn't going to be consistent between S3 (storage) and DynamoDB (a NoSQL
database).

And what’s the difference between using a REPL where you are running code line
by line and using a debugger where you are running line by line?

On the other hand, why look through log files full of print statements when
you can just let the program run, set a breakpoint, and look at the entire
state of your app, including the call stack? But these days I wouldn't even
think about using a regular dumb text logger. I use a structured logging
framework that logs JSON to something like Mongo or ElasticSearch, where I
can use queries to search.
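
A minimal sketch of the structured-logging idea using only Python's stdlib
(field names are illustrative; in practice you'd ship these JSON lines to
Mongo or ElasticSearch):

    import json
    import logging

    class JsonFormatter(logging.Formatter):
        def format(self, record):
            return json.dumps({
                "level": record.levelname,
                "msg": record.getMessage(),
                "yard_id": getattr(record, "yard_id", None),
            })

    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    log = logging.getLogger("audit")
    log.addHandler(handler)
    log.setLevel(logging.INFO)

    log.info("record rejected", extra={"yard_id": "YARD-0042"})
    # -> {"level": "INFO", "msg": "record rejected", "yard_id": "YARD-0042"}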

Heck even in my C days when I would make a mistake and overwrite the call
stack, stepping through code and seeing where the call stack got corrupted was
invaluable or seeing whether the compiler was actually using my register and
inline hints by looking at the disassembly while the code was running.

~~~
jchw
It’s like Java Hello World, or Python urllib: I want to do some basic
operations in S3 and I’m knee-deep in all kinds of nonsense boilerplate with
boto. This is not a knock on Amazon; I later used their official Go SDK and
it was great. The API surface is absolutely enormous, so I don't buy the
handwaving that big APIs are by nature incomprehensible.

I also don’t really get the framing that this is a No True Scotsman argument.
I’m saying that if someone has trouble using your API, and that problem is
not a matter of the developer misunderstanding a core concept, it is a
problem with your API. The counter-argument is basically “my API doesn’t
suck, it’s the developers that are stupid.” And no, a valid retort is not
“what if they are, though?”

I don’t consider a REPL to be a debugger. It’s just another code sandbox. You
may consider it in the realm of ‘debugging tools’, but I do not. The
difference for me is that I hit sandbox tools like the Go Playground and the
Python REPL before I have code to debug, not after.

Anyway, this has now evolved to the point where we’re personifying APIs and
stretching the definition of a debugger. I never claimed I do not printf
debug, or use a REPL; hell, occasionally, every few months, I even use a
_real_ debugger. My claim is not even that I am a good programmer, which I
would agree I am not. I am literally claiming that my use of debuggers (not
necessarily all debugging tools) has declined to very low levels because of
better developer tools, better intuition (= coding a lot, not necessarily
being ‘good’), and, honestly, just having too many environments to actually
learn how to use debuggers in all of them.

Having to use a debugger in C because you overwrote the call stack is an
example of why debuggers can be an anti-pattern. You had to do _so much work_
because the compiler couldn’t prevent you from making the simple mistake of an
out of bounds memory access. Is a debugger useful here? Absolutely. Is it
ideal? Hell no. I don’t want to be the Sherlock Holmes of core dumps, I want
all of my mistakes illuminated as early as possible, so I can get back to
work. I am not yet a huge Rust zealot, but you can see where I’m going with
this. Does accidentally overwriting the stack in C have anything to do with
bad APIs? _Maybe_. There’s plenty of C library functions and POSIX functions
that are not invalid or broken in any way - in fact they behave exactly as
described - but are incredibly common sources of memory and concurrency bugs.
It’s why Microsoft compilers crap themselves with warnings whenever you use
functions like strcat. I’m not sure I believe that the “secure CRT” versions
are always much better, but the original API is terrible. C++ can do a fair
bit better with strings and memory management when used responsibly. Obviously
Go does a lot better, and Rust does even better than that.

So I don’t often use a debugger because a lot of the scenarios like that,
where I might have needed one, have been greatly reduced by better tools,
better testing, etc. The fact that I have to defend this so rigorously makes
me wonder whether you are taking advantage of the modern tools and testing
standards that make the development workflow so much easier. Not everyone
practically can; I imagine there are many fields of work where the tools or
the ecosystem are behind, but I do not believe that is something which can’t
or won’t be fixed, and I sure hope it is.

~~~
scarface74
_So I don’t often use a debugger because a lot of the scenarios like that,
where I might have needed one, have been greatly reduced by better tools,
better testing_

I would actually say the opposite. If you’re littering your code with print
statements and #if DEBUG equivalents (instead of using a debugger) and
grepping log files (instead of using a structured logging library) you’re not
using the newest tools available and you’re “debugging” in a way that I gave
up in the mid 90s writing C and FORTRAN for DEC VAX and Stratus VOS
mainframes.

There is nothing “modern” about littering your code with print statements. I
thought being forced to do that in the 90s was already taking a step back from
the various DOS based IDEs I had used by then.

~~~
jchw
The thing is, your imagination of what I’m doing (littering code with printf
statements) and the actual reality of what I'm doing (solving AT LEAST 95% of
my problems using extensive automated tests, strong linting, strict
typechecking, and modern programming practices) are leagues apart. When I say I
break out the debugger around every few months, I am not exaggerating. I may
insert a printf or two once every other week. This is not the common case,
it’s just significantly more common for me than attaching a debugger, because
it’s quick and easy.

The practice I adhere to is to break code up into small modular bits then unit
test the living hell out of those bits, then integration test larger
combinations. Maintaining test code takes time, but I take it very seriously.
I can’t share any of my actual work at Google but I can share my open source
work; a project that I wrote this way is Restruct and it normally has around
98% line coverage. If someone finds a bug I add a new test. I have never used
a debugger on Restruct or projects built on Restruct.

So then, what do I do all day? Console.log and printf everything? No, honestly
no. I spend little of the day debugging because unit tests tell me nearly
exactly what codepaths I broke, printing out helpful diffs of the expected
behavior versus the actual behavior. This almost always gives way to the
problem, even when I’m working on a brand new piece of code with tests I just
wrote (sometimes before the code, even.)

So I would say printf is my go-to debugging tool, but if I want to be accurate,
it’s probably really automated testing. Automated testing used in this fashion
basically acts as small little debug routines. We don’t call tests debugging
tools though, because that would undermine their usefulness; they do so much
more. Which is why I don’t sweat it when I have just as much or more test code
than actual code; unlike other code, it often pays for itself if you do a good
job.

And besides, if unit tests explain a bug to you before you actually run into
it, did you really do any debugging or did you bypass it entirely?

------
hirundo
> Kernighan once wrote that the most effective debugging tool is still careful
> thought, coupled with judiciously placed print statements.

Of course the first tool is careful thought. But when forced to fall back to
print or log statements it feels like a handicap.

If I have to use a print statement it means I'm not sure what's going on so
I'm not sure what, exactly, to print. By breaking on that code I don't have to
know exactly because I can execute arbitrary code in that scope. What takes
multiple iterations with print is often just one with break.
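In Python terms the difference looks like this (a contrived sketch, but it
is the shape of the workflow):

    from dataclasses import dataclass

    @dataclass
    class Entry:
        amount: int

    def reconcile(entries):
        total = sum(e.amount for e in entries)
        # With print I must guess in advance what to show, then re-run
        # for each new guess. A breakpoint drops me into a live prompt
        # where I can evaluate anything visible in this scope:
        breakpoint()  # (Pdb) p total, entries, [e.amount for e in entries]
        return total

    reconcile([Entry(3), Entry(-7)])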

I can compare directly, because on a production-only bug I'm forced to use log
debug statements. And usually in that case I have the same code open locally
in a debugger. The difference is night versus day.

Maybe it's like a chess master who really doesn't need the board. For the kid
in Searching for Bobby Fischer, it may really be a distraction. But I notice
that grandmasters use a chess board in serious competition. And as a
programmer I'm no grandmaster, and I play this weird game better with the
board in front of me.

------
systemBuilder
When I worked at Google Search last year, running a service locally in the
debugger was just about the only way to figure out what it does. There are so
many control-flow transfers - async message passes where you have no idea
what is coming back - plus people who think they are templating gods, people
who want to use syntactic sugar to define a new language, people using
functors just 'because', lambdas and other BS C++11 features inside Google,
and inheritance/container trees that look like a 3-D house. The debugger is
the only way to trace the control flow of such a haywire spaghetti codebase.

Sadly, gdb is running out of gas at Google - it takes 60s to load "hello
world" + 200MB of Google middleware, and it would often step into whitespace
or just hang forever. This was often because the gdb/emacs environment at
Google was not being maintained by smart people.

------
saagarjha
Debuggers are a great tool to have in your toolbelt. They can arbitrarily modify
the state of the program at any point, let you safely inject code (I’ve
written “coroutines” in LLDB for applications I did not have the ability to
insert print statements in), and can be _extremely_ helpful when performing
dynamic analysis. That being said, sometimes you don’t have access to a
debugger, so you have to do your best with the tools you have available.
Finding a bug is always a challenge of removing extraneous state you don’t
care about (“where do I put this print statement so it doesn’t get called a
million times”, “how can I visualize the value of this variable when it
changes in a way that is important”) so not having a debugger doesn’t change
this: it just makes it somewhat more annoying (and requiring more ingenuity)
to perform these tasks because you now have more limitations.

------
userbinator
Debugging complex multithreading or timing-related bugs is one of the areas
where a debugger, or even a bunch of logging, is not going to help you at all,
because the slightest change in timing can make them disappear; only (very)
careful thought is likely to lead to a solution.

Thus I mostly agree with the author of this article --- blindly stepping
through code with a debugger is not a very productive way of problem solving
(I've seen it very often when I taught beginners; they'll step through code as
if waiting for the debugger to say "here is the bug", completely missing the
big picture and getting a sort of "tunnel-vision", tweaking code messily
multiple times in order to get it to "work".) If you must use it, then make an
educated guess first, _mentally_ step through the code, and only then
confirm/deny your hypothesis.

------
tom_
I use a debugger all the time. Why guess about any of this stuff, and why
trust your fallible intuition, when you could have the computer tell you
exactly what's really happening? I think of it as analogous to using a
profiler in this respect.

------
zwaps
I write scientific simulations in Matlab. Debugging, that is, stopping the
programs and interacting with the data and functions, is essential to me.

To a point, this is due to me not planning through my programs. So I can see
that for some areas a debugger may not be essential.

But in other cases, I am actually interested in tracing what happens to the
data in my mechanisms. For this kind of work, a debugger is essential.

Most importantly, as someone who maybe does not use the absolute best
practices of designing software, debugging allows me to write solid and
successful programs without having completed a CS degree.

------
CergyK
I would like to meet someone who writes code without any bugs. Now, if you're
not putting bugs in your program on purpose, you probably don't know where
they may arise from. So why use a tool (prints) which is deeply influenced by
your assumptions about the program - assumptions which were probably wrong in
the first place?

Some more arguments for the debugger:

- No need to recompile for each print

- Available for multi-threaded programs

- The possibility to see the state in libs you depend on, in which you can't
put logs or asserts (a sketch of this follows below).

------
namelosw
That depends on what problem or code base you are working with.

The debugger is the best answer when you are working with bad code that you
cannot reason about locally -- a random bug that only happens in the
production env, a mutable monolith, or a complex system integrating with 3rd
party services. Debugging means first finding out what actually happened,
then building a theory that explains everything. Sometimes, fixing the plane
in mid-air requires knowing what happened to that specific plane, rather than
building a theory of how a plane could develop the exact problem without ever
looking at the plane itself. As in control theory, if there are too many
possible states it's not cost-efficient to infer the state from the behavior
-- whereas in software you can cheat and inspect the state directly.

However, I agree with the author that in the ideal scenario there's almost no
need to use the debugger. As in SICP, the mental model in the first few
chapters is the substitution model -- it's not how a computer really works,
but it's easier to reason about, and you don't have to be at a specific step
in a specific environment to reproduce a problem (which is where a debugger
really helps). The more the code is coupled with its environment (the
environment model of the later chapters), the more one needs the debugger to
work with that code. And that's why functional programming and referential
transparency are praiseworthy virtues.

------
badpun
A debugger is just an inspection tool that allows you to see in detail how
your contraption (program) behaves while it's running. Treating it as the
tool of last resort, only after you've exhausted examining your program while
at rest, is needlessly limiting. I can see how that could make more sense in physical
engineering (if you turn the contraption on without fully understanding it,
you're risking doing some damage), but there is no such risk in SE, so why
limit yourself?

------
ATsch
I feel like a debugger's usefulness is directly correlated with how
specialized it is. A good debugger is all about giving you specific
information about your problem. I find that classical step-by-step debuggers
do an incredibly poor job at this. I never use them.

A tier up from that for me are higher-level debuggers, generally specific to
some technology. For example, the browser dev tools, GTK Inspector, wireshark,
RenderDoc etc. I'd also put tools like AddressSanitizer, Valgrind and
Profilers into this category. Because they are more specialized, they can give
you richer information and know what information actually matters to you. I
usually find I use these regularly when developing.

The highest tier is tools specialized to your specific application. This could
be a custom wireshark decoder, a mock server or client, DTrace/BPFTrace
scripts and probes, metrics, or even an entirely custom toolbox.
Interestingly, print statements end up in this same category for me, despite
being possibly the simplest tool. Being specific to the problems you actually
face allows you to focus on exactly those problems. This tier is interesting
because these tools tend to become not just something you use when things go
wrong, but part of how you write or run your code.

Under this lens, I don't think it's that surprising that people don't really
use general step debuggers much. Print statements are a primitive tool that
gives you many of the benefits of tier-3 debuggers without any of the effort
involved in making custom tools. They offer the maximum reward-to-effort
ratio in debugging.

------
tzhenghao
Another way to think of this is that the debugger is a tool, just like other
familiar tools such as tcpdump, ping, traceroute, nslookup, nc (netcat) etc.
Choosing the right tool for the right observations to make progress in
squashing said bug(s) is key here. Sometimes, all you really need is a
methodical approach in putting print statements in the right lines of code, or
even pinging a host that doesn't seem like it's up in the first place.

------
gizmo686
My problem with debuggers is that they are too good as tools. You use one,
solve the bug, and move on. In the process, _you_ might have learned
something additional about the code base, but the only improvement to the
code is the fix for that particular bug. In contrast, when you don't use a
debugger, every now and then you find that some of your debugging work is not
simply a temporary hack to identify the problem, but something that should
stay in the code long term to assist future debugging (be it asserts you
added, or additional logging calls). The end result is that the more you
debug without a debugger, the more debuggable your code becomes.

I am currently working on a codebase where all the developers are adamant
debugger users. The code is practically impossible to debug without the use of
a debugger, because no one has ever had to build up the debugging
infrastructure.

Complex/difficult bugs still take about as long to fix, but simple bugs take
far longer than they normally would, because every time you use a debugger
you are starting from scratch.

~~~
breatheoften
I am a strong fan of debuggers as a tool -- but I actually agree with some of
this complaint.

I want a language that exposes debugger features as a first class language
construct

Think how powerful this pattern could be:

debug(...) { ... }

if the language and runtime specified the semantics needed ...

It could be really useful to have blocks like this in the codebase:

debug(problem_related_to_x) { pause_debugger; }

You could document and then more easily return to the mental context found
over time when trying to understand problems related to x ...

And you could put log() statements inside those blocks instead of breakpoints
-- not plain text output, but something the debugger protocol knows how to
represent and present ... these capabilities alone would exceed the utility
of print() debugging while supporting all the same workflows ...

I know things like VS Code's logpoints exist -- but the fact that these
constructs are not representable in the code, and not easily shareable,
really undermines their overall utility ...
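Not a language feature, but a rough Python approximation of the idea (the
debug() helper and the DEBUG_TAGS variable are invented for this sketch):

    import os

    # Enable named blocks from outside the program, e.g.:
    #   DEBUG_TAGS=problem_related_to_x python app.py
    _ENABLED = set(filter(None, os.environ.get("DEBUG_TAGS", "").split(",")))

    def debug(tag):
        """True when the named debug block is switched on."""
        return tag in _ENABLED

    def handle_request(req):
        if debug("problem_related_to_x"):
            # The documented, shareable "mental context" for problem x
            # lives in the code itself; the pause only fires when the
            # tag is enabled, so the block is safe to keep long term.
            breakpoint()
        return req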

------
sytelus
This is actually a well-written article by a CS professor, backed by the
experiences of a few very productive programmers. Linus's no-holds-barred
post on the topic is especially worth reading:

[https://lwn.net/2000/0914/a/lt-debugger.php3](https://lwn.net/2000/0914/a/lt-debugger.php3)

Many people say not using a debugger is the other extreme of the pendulum,
but perhaps it is not. You want to have assertions/prints in your program at
critical junctions. Those should be able to explain the behavior of the
program you are seeing. If they don't, then you have probably missed some
critical junctions OR don't really understand your own code. There is a third
possibility where you _will_ need a debugger: when the compiler, the
programming language, or the standard library itself has a bug. Instead of a
more time-consuming binary search for where you first get unexpected output,
a debugger might be the better option.
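Concretely, "assertions at critical junctions" might look like this trivial
Python sketch:

    def transfer(balance, amount):
        # Critical junctions: if these hold and the output is still
        # wrong, I either missed a junction or misunderstand my code.
        assert amount > 0, f"non-positive transfer amount: {amount}"
        new_balance = balance - amount
        assert new_balance >= 0, f"overdraft after transfer: {new_balance}"
        return new_balance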

------
fourier_mode
The author ignores a variety of points, but the major one seems to be the
assumption that debuggers are only a "line-by-line" tool. I usually use one
to understand big codebases, by stepping through functions in the order they
are called, and I am quite sure there isn't really another way to tackle that
requirement.

------
kazinator
Those who say kernels should be debugged with print statements inserted into
the code should promptly hand in their register and backtrace dumping
functions, lockup detectors, memory allocation debugging, spinlock debugging,
"magic sysrq key", ...

Oh wait; all of that is judiciously inserted print statements.

------
UglyToad
My gut reaction is that the article is nonsense but I think there's a very
valuable point in it.

When you work mainly on enterprise code - where on any given day you're more
likely to be in code written by another team member than code you wrote
yourself - you'll need a debugger or print statements.

But the reasonable point the article makes is that a debugger makes it very
easy to solve the problem localised to a function or a couple of lines of
code, rather than take the time to improve the whole area and/or add test
coverage. The other thing people who don't work on enterprise code won't
necessarily understand is that you don't usually have the time to do that. So
it's a good point to keep in mind, but it feels a little too ivory-tower to
be broadly applicable.

------
RickJWagner
As a professional support programmer (read that: I have to debug code someone
else wrote), I don't use debuggers often.

But sometimes I do. And when I do, I'm glad they're there, because they are a
great tool of last resort.

------
gridlockd
Single-stepping is just one use for a debugger. Breakpoints are far more
useful. Inspecting local variables by clicking through the stack is generally
faster than adding print statements.

A lot of people also have never used a debugger that isn't terrible to use.
_Most_ debuggers fall into that category.

As for all of this "read the code first and get a better understanding" talk -
this is obviously highfalutin' bullshit. You're human, you made a _dumb_
mistake somewhere, and the debugger will help you find it faster than your
brain going off on an excursion.

------
bartimus
On a similar note: I once did a project entirely in notepad. This really
forces you to write clean, readable, well organized code.

I don't have anything against debuggers. I do - however - have a concern with
people who rely heavily on the find feature in their IDE to search for certain
code they need to change. Oftentimes they don't look at the bigger picture and
miss how certain things might better be implemented elsewhere. They don't run
into the problem of finding code that's poorly structured. They don't have a
need to restructure it.

------
SAI_Peregrinus
A step-through debugger is a ridiculously useful tool when you don't have a
serial console, or when adding tons of prints would violate timing
constraints and compiling takes significant extra time compared to just
restarting the device under test and moving the breakpoint until you find
where things go wrong. Much nicer than having just a logic analyzer.

Of course embedded systems aren't the focus of the article, but they're
everywhere, and they're a very good place to have a debugger.

------
ashelmire
I debug sometimes, but if it's a simple script I'm working on, I usually
don't. I'll just log where I think the problem is (usually indicated by an
error, perhaps). I'll pull out the debugger when my best guesses at the
problem aren't panning out.

Do whatever works for you. But it's good to have more tools in our toolbox -
use puts debugging if that's easier (often is in very complex environments),
use a debugger if you need to.

------
Rotareti
I never got comfortable with debuggers for the work that I do. Thus I rely
heavily on "print debugging". There are tools such as "icecream" [0] (in
Python), that improve print-debugging ergonomics a lot. I wish every language
had something like "icecream" built-in.
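For a flavor of the ergonomics, ic() prints both the expression and its
value (the middle() function is a toy, but this is icecream's real output
format):

    from icecream import ic

    def middle(lst):
        return lst[len(lst) // 2]

    ic(middle([1, 2, 3, 4]))
    # prints: ic| middle([1, 2, 3, 4]): 3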

[0]: [https://github.com/gruns/icecream](https://github.com/gruns/icecream)

------
yingw787
I read this blog post in full and geez.

Since the author quoted Guido van Rossum, I'll share a recent anecdote from my
interaction with Guido at PyCon. I first met Guido this past Wednesday, I'm
guessing after he attended the Python language summit. He honestly seemed to
be bristly at the time, probably because he's heard many of the same arguments
raised over thirty years (I can imagine something like "hey why not get rid of
the GIL" -> "Wow, why didn't I think of that?! Just get rid of the GIL!"
although hopefully it was more high-level than that). One of the other
language summit attendees was talking about a particular Python feature, which
I don't remember, but the underlying notion was that even if you get testy
with contributors they'll still be a part of the community. I remember
thinking, "No. That's totally not how it works. If you get testy with
contributors they'll just leave and you'll never hear from them again, or
you'll turn off new contributors and behead the top of your adoption funnel
meaning your language dies when you do".

Python has such great adoption because it caters to the needs of its users
first. Take the `tornado` web server framework. I haven't confirmed it myself
but apparently it has async in Python 2 (async is a Python 3 feature). How?
_By integrating exceptions into its control flow and having the user handle
them_. But it shipped, and it benefited its users. IMHO, `pandas` has a
decent amount of feature richness in its method calls, to the point where
sometimes I can't figure out what exactly calling all of them does. Why?
Because C/Python interop is likely expensive for the numerical processing
`pandas` does on the traditional CPython interpreter - ideally you want to
put the request together in Python once before flushing to C - and because
people need different things and so need different default args. `pandas`
also ships, and benefits a lot of people.

Shipping is _so_ important in production, because it means you matter, and
you get to put food on the table for yourself, your family, and the families
of the people you employ. You can't just bemoan that debugging is bad because
somebody isn't a genius or isn't in an architect role. Debugging means you
ship, and you can hire the guy who isn't aiming for a Turing Award - who
might otherwise have a crappier job - and he can feed his family better.

Don't cargo cult people you're not. Use a debugger.

------
cozzyd
Debuggers also double as an instant profiler. "Hmm, this program is running
slowly, I wonder why..." -> gdb -p $PID -> a few random Ctrl-C's and c's ->
finding the bottleneck and the state that led to it (which a real profiler
won't usually tell you).

Also, watchpoints are invaluable when you know something is getting the wrong
value but you don't know where.

------
GuB-42
The debugger is just another tool in the toolbox.

Turning a blind eye to one of your tools is disingenuous. Step-by-step
debugging has its uses, and some work environments may favor it. In fact, I
am quite sure that guys like Linus are competent with debuggers and will use
them when needed. It is just that the debugger is not their favorite tool and
is not well suited to their projects.

------
Jorge1o1
More pretentious and self-serving claims by academics and other “elite”
programmers who are in reality far removed from frontline coding.

------
jayd16
So the argument is a debugger makes fixing bugs so easy you won't want to
improve process?

I think I'll keep using them.

------
astrobe_
I work with embedded software. About a decade ago, when we switched to a new
target processor with a JTAG interface (in-circuit debugging), my boss decided
to buy an in-circuit debugger to help development. The target was a high-end
(at the time) SoC, so it could run uClinux and we could have GDB as well.

I tried hard to use the hardware debugger because it was rather expensive (a
case, I think, of the sunk cost fallacy). The problem is, our system is soft
real-time; stepping through the main program causes the other things
connected to the system to notice that the main program is not responding,
and to act upon that.

The hardware debugger was quite capable so we had watchpoints and scripting to
avoid this problem, but you had to invest considerable amounts of time to
learn to program all that correctly. Amusingly, this was another occasion to
make more bugs. Now you need a debugger to debug your debugger scripts...

Moreover, the "interesting" bugs were typically those that happened very
rarely (that is, on a scale of days) - bugs typically caused by subtly broken
interrupt handlers. To solve that kind of bug in a decent time frame, you
would need to run dozens of targets under debuggers to test various
hypotheses, or to collect data about the bug faster. That's not even possible
sometimes.

I also happen to have developed various interpreters as a hobby, the majority
of them bytecode interpreters. There again, debuggers were not that useful,
because a generic debugger cannot really decode your bytecode. Typically you
do it by hand, or if you are in real trouble, you write a "disassembler" for
your bytecode and whatever debugger-like features you need. Fortunately, the
interpreters I was building all had REPLs, which naturally helps a lot with
debugging.

So I'm kind of trained not to use debuggers. I learned to observe the system
carefully instead, to apply logic to come up with possible causes, and to use
print statements (or, when even that is not possible, just LEDs) to test
hypotheses.

One should keep in mind that debuggers are the last line of defense, just
like unit tests, which will never prove the absence of bugs. So you'd rather
do whatever it takes not to have to use a debugger.

My current point of view is that the best "debugger" is a debugger built
inside the program. It provides more accurate features than a generic
debugger, and because it is built from functions of the program, it helps
with testing the program too. That's a bit more work, but when you build that
functionality, debugging and testing support each other.

------
Insanity
Often when dealing with layers of code that I am not familiar with, the
debugger helps.

But when there is a bug in code I wrote, I can reason about it and place
prints where I need them. For side projects I hardly ever use a debugger.

------
thrax
I do 99% of my development in the chrome debugger. I usually step through
every line of code for the first run through. It catches an incredible amount
and variety of subtle bugs. Printf debugging is for chumps.

------
CalChris
Bart Locanthi wrote the debugger for the BLIT (Bell Labs Intelligent Terminal)
and named it _joff_ because most of the time you’re in a debugger you’re just
....

------
sanderjd
A lot of this is appeal to authority. It seems to be a contribution to a
growing sort of hipness, or leetness, around not using debuggers. Implicit in
this seems to be: I don't need a debugger, and neither do these famous
people, so why do you? Aren't you tough enough or smart enough, like me and
these famous people? You're clearly doing this programming thing wrong.

Well, I use debuggers. I think they're great tools. My feeling is that I'm
very happy to use any tool that helps me create, understand, and improve
software. When people tell me that a tool that is useful to me in that
endeavor is not actually useful, all I can think to do is roll my eyes.

Having said that, something I _am_ very interested in is learning new
approaches to interrogate software complexity and solve problems. So, "here
are some approaches to understanding and debugging code that have worked for
me" from someone who doesn't use debuggers would be interesting to me. But I
actually don't see any of that here.

~~~
coldtea
> _A lot of this is appeal to authority._

Debugger use is based on the same. Few things in development are not based on
idiosyncratic preferences, fads, snake oil salesmen, tradition, or appeals to
authority (and those are the math parts of CS). Of the 5 above categories
tradition is probably the best and most "scientific" (at least it has stood
the test of time).

There is little actual research on such issues (best practices, programming
ergonomics, syntax leading to fewer errors, bug counts per style, etc.), and
what little research there is, is usually flawed, with small samples and
non-reproduced results (not that many teams have bothered to reproduce them
in the first place).

That's why almost nothing is ever settled.

~~~
Stratoscope
No authority told me to use a debugger, I use them because they work for me.
They save me time, help me make money, and make it easier and more fun to
code.

If I ever saw a "research study" that said "print statement users have 12%
fewer bugs than debugger users" - or vice versa! - I would dismiss it out of
hand. What possible relevance could it have to my work? That's not real
research, it's just playing with statistics and making up overly generalized
stories about them.

That kind of research would just become the "authority" in a new appeal to
authority, complete with an infographic!

And nothing has to be "settled", nor should it be. There are many different
kinds of programming that require different tools. It's best to be aware of
the variety of choices available, and choose your tools according to the
particular situation you're in.

~~~
coldtea
> _No authority told me to use a debugger, I use them because they work for
> me. They save me time, help me make money, and make it easier and more fun
> to code._

Well, I already wrote: "Few things in development are not based on
idiosyncratic preferences, fads, snake oil salesmen, tradition, or appeals to
authority". So your case would fall under the first option: it's an
unscientifically tested personal preference.

> _If I ever saw a "research study" that said "print statement users have 12%
> fewer bugs than debugger users" - or vice versa! - I would dismiss it out
> of hand. What possible relevance could it have to my work?_

It could have all the relevance in the world.

It's not like if you believe strong enough that your preferred methods are the
best (or the "best for you"), that you can't still be using an objectively
worse method and get worse results...

> _And nothing has to be "settled", nor should it be._

Well, if software development wants to be like an engineering discipline, many
things should be (studied and eventually settled).

"Works for me" and letting developers improvise and come with their own
hodgepodge of practices, preferences, and cargo cults, is how we got into this
mess.

------
purplezooey
To heck with that. Ever use a gdb watchpoint? Amazing feature that's saved my
bacon more times than I can count.

------
tanilama
I think print statements make it quicker and more obvious to expose the
internal state I want to observe at the precise point I want, thus completing
the hypothesis loop faster. A debugger, on the other hand, takes effort to
set up and exposes too many additional details; it becomes overwhelming very
quickly.

~~~
saagarjha
Debuggers are print statements with a slightly different “effort profile”.
Adding the equivalent print statement is a little bit more annoying, but you
can do this at any point during runtime and also drop down into the full
debugger when you need it.
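For instance, pdb can turn a breakpoint into a print statement at runtime,
with no edit or restart of the program (app.py:42 and the variable name are
made up):

    (Pdb) b app.py:42
    Breakpoint 1 at /home/me/app.py:42
    (Pdb) commands 1
    (com) silent
    (com) p total
    (com) c
    (com) end
    (Pdb) c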

~~~
Insanity
When something goes wrong after X iterations, print statements can show the
evolution better. That is one thing that debuggers don't do I think (track
variables over time). Or do they and am I not finding this feature in
IntelliJ? :O

~~~
saagarjha
GDB and LLDB allow you to put a print statement anywhere; they’re a strict
superset of what you can do in code.

------
EdSharkey
Just "test first"; TDD your code. I'm so tired of working with and being
surrounded by crappy, uncontrolled codebases.

We "control" our code when tests prove it does what we think it should.

Debuggers are rarely usable, inefficient tools for software development. I
agree with OP: debuggers don't scale.

------
kjar
The debugger is a valuable tool. Saying "I don't use X" is meaningless.

------
dboreham
I also lie in my headline:

> there are cases where a debugger is the right tool

------
sys_64738
Would you get the job if you said that at a job interview?

------
azhenley
I use whatever tools I can to solve the problem.

------
tahoemph999
I think that, while this article doesn't really prove its thesis very well,
there is a kernel of an idea here which is useful to investigate: the idea
that line-by-line stepping is a crutch that weakens the programmer.

Out of the 5 beliefs he quotes from celebrities, we only have reasons for 3.
Of those 3, the common thread I see is that we should be able to reason about
our code, and debuggers act to derail that. Furthermore, the aspect of
debugging most maligned here appears to be stepping through code line by
line. I'm fairly certain that is specific to a certain type of mindset. If I
were writing a title for a talk in this area it might be more like "single stepping
bad for students" and then talk about how to build code that is easy to model
and think about and then use that to work through most problems. If you've got
yourself past that student part (and yes, you'll dip back into this with new
tech / languages) then being able to single step when it makes sense (don't
have docs, processor isn't doing the right thing, etc.) makes you more
powerful. Not less.

The focus on printing is a bit annoying. The writer seems to have never worked
in embedded systems, distributed systems, or systems where reproducing the bug
isn't an option. In the last case a debugger is your tool for grunging around
in a core dump. In the embedded case, some type of debugger, or forcing a
core dump (and thus using a debugger), might be your only choice.

I also question whether he has ever worked on web systems. Reasoning "harder"
about how some new CSS or JavaScript "feature" behaves across different
browsers is useless. Writing little ad hoc test cases and carefully tracking
how they act in a debugger is powerful.

A lesson from my history is that of systems that take a long time to build and
upload. The one I worked on early took 60 minutes to build and 30 minutes to
upload to test hardware. You didn't fix bugs one by one. You fixed them by
discovery, patching on the platform (inserting nops, etc.) in assembly while
replicating the fix in source (probably kicking off a build in case that was
the last bug of the run), and then continuing to test and debug to get every
little bit you could out of the session. And if you had to single-step, then
that was worth it. Is this entirely a historical artifact? I haven't worked
with anything that bad in decades but I still work with embedded (and some
web) systems where the time to build and upload can be a minute to minutes.
Getting more out of the session is useful and debuggers are part of that.

Refactoring as a response to a bug seems like a mistake worse than line by
line stepping to me. Not understanding a cause but making a change propagates
incorrect thinking about the system.

But I think the real missing part of this article is a discussion of what the
other useful tools are. The last comment in the article mentions "Types and
tools and tests". It is easy to say tests are table stakes, but a similar
article about testing would create a flamefest, so it is a bit hard to tell
what kind of table (or is it stakes)? So what are those tools beyond testing?
I'd love
to have DTrace everywhere I worked - the number one best tool I've ever seen
for working with a live system. The ideas in Solaris mdb about being able to
build little composable tools around data structures are awesome. Immutable
methods of managing databases are wonderful. It would have been nice if this
author had talked about design and refactoring "tools" (they could be
methodologies) he likes or thinks should exist.

------
zmmmmm
This reminds me of the more generalised "I do not use IDEs". It's a
fascinating phenomenon to me, a kind of ludditism in which programmers reject
the very premise of their own existence: that computers are capable of adding
value or assisting with a task. It feels to me that this is its own form of
dogma, no better justified than that of people who lean on the IDE or the
debugger to do everything.

I don't pull out the debugger very often, but knowing how and when to do that,
and do it well is a significant tool in my arsenal. There are times when I can
guarantee you I would have spent a massive number of hours or maybe never
properly resolved certain bugs without doing it.

~~~
klodolph
This comment sounds, to me, borne of hatred and scorn more than anything else.
Carefully chosen words like “ludditism” and “dogma” do more to evoke feelings
in people than to inform us. This comment is a true disservice to the
conversation.

There’s a mental cost to every tool you learn how to use. It makes no sense
to try to learn every programming tool; you’d never get any work done. I see
no reason why we should scorn people who leave “IDE” or “debugger” off their
own personal list of tools that they work with. Calling it “ludditism” is
name-calling, same as “dogma”.

I’ve used Visual Studio, even for extended periods of time, with its fantastic
debugger. I’ve used older IDEs that weren’t as good. I’ve used various text
editors and environments. What I don’t like about using a debugger is how
rarely it helps more than the alternatives—so every time I need to use it, I
need to learn how to use it again in whatever environment I happen to be
programming in. Perhaps if you’re writing code in the same environment, the
calculus is different. But no need for name calling.

Same with IDEs. Somehow, by some series of accidents, I use Emacs for about
95% of my coding. There are a couple key bindings in Emacs which I’ve set to
match the default keybindings in Visual Studio or Xcode. But because I’m often
programming in different environments, using Emacs instead of Visual Studio
means that I can get by with learning fewer tools, and spend that effort
elsewhere. No need to call it ludditism.

~~~
deepanchor
Eh, as a non-IDE user myself, I think you’re being a bit harsh as I don’t
think that was the spirit of the author’s comment at all. In fact I found it
to ring quite true and if there was any name-calling, I certainly did not feel
offended. The fact is there _is_ a certain luddite-esque aesthetic to working
in a simple modal editor like Vim (or Emacs in your case) and everyone invents
their own dogma to follow, to a certain extent. I happen to find that there is
merit in using simple tools, and latent benefits like really getting to know a
code-base in a way that predictive fuzzy autocompletion will not allow me to
do. There is no global optimum when it comes to people’s workflows, just
individuals finding what works best for them.

------
simonsays2
The opinions stated are incompetent.

