Some of the opinions on debuggers are more nuanced than the author is letting on. Take Bob Martin, for example:
> I consider debuggers to be a drug -- an addiction. Programmers can get into the horrible habit of depending on the debugger instead of on their brain. IMHO a debugger is a tool of last resort. Once you have exhausted every other avenue of diagnosis, and have given very careful thought to just rewriting the offending code, then you may need a debugger.
I'm still baffled by people like the author who "do not use a debugger." A print statement is a kind of debugger, but one with the distinct disadvantage of only reporting the state you assume to be important.
This part was a little surprising:
> For what I do, I feel that debuggers do not scale. There is only so much time in life. You either write code, or you do something else, like running line-by-line through your code.
I could easily add something else you might spend the valuable time of your life doing: playing guessing games with print statements rather than using a powerful debugger to systematically test all of your assumptions about how the code is running.
I think this is a problem of language and communication (what he says in the post vs. what we receive and understand reading it). I feel like I've grokked what he's saying because I have a similar mindset.
Yes, debuggers are useful in some circumstances, but most of the time I don't reach for one. I usually start out by looking at the code surrounding a problem and thinking about what circumstances could lead to the erroneous data. From there, it's pretty simple to stuff in some carefully selected print statements to make it report on some of the things it's doing, so that I can check my assumptions. More often than not, this is enough to identify the underlying problem in a few minutes and fix it. Either that, or I get some pointers to other areas that should be investigated.
It's only when this initial triage fails to give me any meaningful leads that I start thinking about what tool will likely be the most effective for the problem at hand. It's usually a judgment call based on how complicated it will be to separate relevant data from irrelevant, and what tool will most efficiently give a window complete enough to describe what I'm looking at, yet small enough for my brain to hold it all. Also, the need for aggregate vs individual data will play a role in the decision. Sometimes this phase is likely to be faster with a debugger, sometimes (and in my 20 years of experience, usually) not.
> I consider vision to be a drug -- an addiction. People can get into the horrible habit of depending on their vision instead of on their brain. IMHO looking at things is a tool of last resort. Once you have exhausted every other avenue of diagnosis, and have given very careful thought to just trying to do the action again, then you may need to open your eyes.
Coding without a debugger is like walking with your eyes closed or driving at night with no headlights. Sure, it may be possible, but you are purposefully limiting your information in order to not become "dependent" on something.
Tooling will always be a compromise of utility vs reliance but there is a reason we don't, for example, build cars by hand any more.
My experience is that IDE- and debugger-dependent programmers have a very hard time when those things are taken away or unavailable. The reverse is not true.
As I stated a few days ago [1], there are times when you have no debugger (or a debugger won't help you as was in my case). Learning to debug via print() is a useful skill to know.
Ironically, I think you've left out some nuance to the author's position:
> ... and I almost never use a debugger.
Even if he can't recall the last time he used it, he keeps the tool in his toolbox.
His article was quite inflammatory, however, and it's easy to see how someone who relies on a debugger would feel attacked and insulted.
I look around my team and I see some turning to debuggers first and others to outputting at key points. I see zero correlation to effectiveness or efficiency.
I practically live in the debugger. I do use it sometimes to debug a problem, but most of the time I use a debugger to avoid debugging. I use it for coding.
I spend a lot of time working with APIs and libraries that are poorly documented and often that I haven't used before.
Instead of writing out a bunch of code based on my limited understanding of the docs, and likely with many bugs, what works for me is to just write a few lines of code, until I get to the first API call I'm not sure about or am just curious about. I add a dummy statement like "x = 1" on the next line and set a breakpoint there.
Then I start the debugger (which conveniently is also my code editor) and hopefully it hits the breakpoint. Now I get to see what that library call really did, with all the data in front of me. Then I'm ready to write the next few lines of code, with another dummy breakpoint statement after that.
Each step along the way, I get to verify if my assumptions are correct. I get to write code with actual data in front of me instead of hoping I understood it correctly.
If I'm writing Python code in one of the IntelliJ family of IDEs, I can also hit Alt+Shift+P to open a REPL in the context of my breakpoint.
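A minimal Python sketch of that incremental workflow, using urllib as a stand-in for whatever unfamiliar library is being explored (any IDE breakpoint, or the built-in breakpoint(), works in place of the dummy line):

    import urllib.request

    # the call I'm not sure about -- what exactly comes back?
    resp = urllib.request.urlopen("https://example.com")
    _ = 1   # dummy statement to hang an IDE breakpoint on, as described above
            # (or just call breakpoint() here in Python 3.7+)
    # once paused, poke at resp (status, headers, read()) with real data in front
    # of you, then write the next few lines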
Of course this won't work for every kind of code. If I were writing an OS kernel I might use different techniques. But when the work I'm doing lends itself to coding in the debugger, it saves me a lot of time and makes coding more fun.
Where you would set a breakpoint at x=1, I would insert a syslog statement and dump the data into the log file. Since syslog is another process I can watch the behavior of the program while it is in operation alongside the other programs on the system. I've actually only used debuggers on systems which don't have any logging infrastructure.
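In Python terms, a rough sketch of that syslog-instead-of-breakpoint approach might look like this (the logger name and the inspected value are placeholders; the /dev/log socket is Linux-specific):

    import logging
    import logging.handlers

    log = logging.getLogger("myprog")
    log.addHandler(logging.handlers.SysLogHandler(address="/dev/log"))  # Linux; other OSes differ
    log.setLevel(logging.DEBUG)

    # where the parent comment would park a breakpoint on a dummy "x = 1":
    state = {"status": "ok"}   # stand-in for whatever you want to watch
    log.debug("state after api_call: %r", state)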
At my last job for example, I frequently would plug the JTAG in to check the instruction pointer and read from the memory-mapped flash on our safety MCU because there was no other way to read data off the device after a crash.
And it was also an asset in quickly testing the programs I wrote for TI's N2HET, because I could pause execution after the programs were loaded into the HET instruction RAM, use the debugger to configure the variables in the instruction RAM, set the HET executing, and watch the output on my oscilloscope. This ability to manipulate memory in the running program is very useful.
I used it very frequently when testing new components on our system buses, because I could halt execution, configure DMA transfers, and inject data into system RAM. So I could quickly validate my understanding of the reference manuals.
I think both approaches have merits and if someone tells you that one is superior to the other, that says a lot about their level of experience and the types of work they've done.
Those are some great debugging stories. Yep, for that kind of work I would probably do something more like what you were doing. I also appreciated your last comment:
> I think both approaches have merits and if someone tells you that one is superior to the other, that says a lot about their level of experience and the types of work they've done.
Indeed. We have so many great tools available to us, but the best choice of tools will vary a lot depending on the work you're doing, whether it is logging, JTAG and a 'scope, or whatever you have available.
It's quite a contrast to this quote from the article:
> ...the fact that Linus Torvalds, who is in charge of a critical piece of our infrastructure made of 15 million lines of code (the Linux kernel), does not use a debugger tells us something about debuggers
I don't think that tells us anything about debuggers, it just tells us that they aren't useful for Linus's particular work, or possibly he just doesn't like them. I doubt that Linus has ever said, "You should never use a debugger regardless of the kind of programming you're doing."
In Java, I did this a lot less because I could rely on the type of the interface to know what I was going to get back from the function call (with the caveat it might be null). I knew exactly what the type of the arguments needed to be.
You generally don't have that in Python. Without extensive documentation (if you are lucky, some newer code may make use of type annotations) you just have no idea with some interfaces what needs to be passed and what you will receive. Even with extensive documentation, well-documented libraries are each documented in a different way, and getting the same answers that a Java method signature gives can be a confusing experience.
The tendency of functions to mask the signature of the method they simplify a call to by simply declaring *args, **kwargs also makes it a bit annoying to discover the true type signature outside of a debugger.
Also, you get back a dict or a tuple... but what is in it?
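A small illustration of the *args/**kwargs masking problem, and how functools.wraps at least lets inspect.signature recover the real signature (the function names are invented):

    import functools
    import inspect

    def fetch_user(user_id: int, include_posts: bool = False) -> dict:
        return {"id": user_id, "posts": [] if include_posts else None}

    def naive_wrapper(*args, **kwargs):
        return fetch_user(*args, **kwargs)

    @functools.wraps(fetch_user)
    def good_wrapper(*args, **kwargs):
        return fetch_user(*args, **kwargs)

    print(inspect.signature(naive_wrapper))  # (*args, **kwargs) -- the real signature is hidden
    print(inspect.signature(good_wrapper))   # (user_id: int, include_posts: bool = False) -> dict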
The most productive experiences I've had with debuggers have been writing the code as it's running. Relating to your example, I would break just after some API call, with all the state there, and then write the next lines of code, then run them, then write a few more, and so on. So much faster than the read-log, make-changes, recompile, and rerun-from-scratch workflow.
If you were using a language with stronger types you could avoid the whole manual process you describe and rely on the compiler to do everything the debugger is doing for you now.
When you live in the debugger, you understand your code by seeing how it works. When you live without the debugger, you have to imagine how your code works. The speed of seeing how your code works cannot match the speed of imagining how it works. In addition, seeing how your code works brings a lot of noise that is not part of your focus; in contrast, you only imagine what you are focusing on. Of course, the effectiveness of each depends on your experience with each, respectively.
There is no denying that seeing is the real world, and imagining is just imagination :).
Being able to visualize your code in your head is a great advantage, and I've fixed many bugs that way. But sometimes your imagination hits its limits and using a debugger to understand what's happening in the code is very useful.
Some cases where your imagination could be limited are:
- Code that you didn't write.
- Code that you wrote long enough ago that you don't remember the details.
- Code that you know very well, but you're having a bad day and can't figure out the problem just by thinking about it.
> Some cases where your imagination could be limited are:
> - Code that you didn't write.
This has become the norm with software written by a team, and to my dismay, I find many coders have given up on understanding code that other people wrote and do the bare minimum to get by. My premise is the necessity of understanding the code -- not only how the code works, but also how it was conceived and where it is evolving toward. Given that premise, there is no difference between code that you didn't write and code that you did.
> - Code that you wrote long enough ago that you don't remember the details.
That says the code was not written in an optimal way. Treat that as a bug, and debug why the details cannot be easily retrieved.
> - Code that you know very well, but you're having a bad day and can't figure out the problem just by thinking about it.
You should always take a rest and tackle it the next hour or the next day, when you can work effectively. Continuing to push through a bad day only risks making the day worse.
Should I be able to remember all 200+ pages of the audit rules by heart for both the car owner side and the repair shop side and remember why I wrote all of the special cases?
If you are writing special cases because a 200+ page rulebook says you need them, then the special cases should be commented to indicate exactly what rule they are meant to address such that someone with the rulebook can quickly look it up.
Ideally, you would have all the rules encoded in a central place so this lookup becomes obvious from the structure of the code, but that is often not possible.
I am involved in a similar project now. We have a proprietary data format which has, over the years, evolved slightly different versions as different teams extended it in slightly different and incompatible ways. The code has all kinds of special cases, to the point where we developed internal style guidelines for how to comment them.
So now you have the rule book and the comments, and if there is a bug in any of the code, am I supposed to remember what all the code does and think through it? Am I supposed to be able to just think through why the one set of hundreds of records that came from one of 200+ repair yards is giving back erroneous results, or should I just use a debugger and set a conditional breakpoint?
In this particular case you should probably implement some logging or alternative output that shows how the rules are being applied. Also perhaps the rules shouldn't be part of the code itself. It would be preferable to be able to add/remove/update a rule without needing to deploy a new version of the program (without going down the rabbit hole of implementing an entire logic engine).
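As a sketch of that kind of "rule trace" logging in Python (the rule IDs and record fields are invented for illustration):

    import logging

    log = logging.getLogger("rules")

    def apply_rules(record, rules):
        # rules: list of (rule_id, predicate, action) tuples, keyed to the rulebook
        for rule_id, predicate, action in rules:
            if predicate(record):
                log.info("record %s: applying rule %s", record["id"], rule_id)
                record = action(record)
        return record

    # every decision now lands in the log, so you can see exactly which rulebook
    # sections fired for the one repair yard that returns erroneous results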
I once was in a situation like this: Tons of weird rules, straight from the law, which made no sense at all.
Every day someone came along with a bug. After tracing through the code for half an hour, the program was generally proven correct. The business had forgotten some weird edge case of an edge case.
I always wanted to write a small gui program that would list for the business which rules had been applied, and what all intermediate data was.
BREs (business rules engines) were too slow, btw, so everything was hardcoded.
That’s exactly what you end up doing - writing your own rules engine instead of just using your language of choice.
How much harder is it to deploy your whole program than making a complex XML/JSON configuration file or database change, testing it, and then deploying that?
In fact, it's usually much easier deploying code. Setting up a simple pipeline to deploy a service is not rocket science.
It makes sense to use a programming language for the rule logic. I'm just suggesting you need to conceptually separate the software code from the rule code. Have the rule code in a separate repository that can be deployed independently (by people with different roles). You can't just deploy new code. You'd perhaps want to apply previous versions of a rule.
In any case. The people maintaining/using the rules might not have a debugger available. They still need to have some kind of "rule stack trace" on how the system came to a certain answer.
Even when I'm starting a program from scratch, it's not just my code I have to understand; it's all of the libraries, frameworks, APIs, etc. I'm integrating with.
For instance, am I supposed to "imagine" how all of the AWS Boto3 functions work?
What reason is there to think “Imagining how your code works” should always be faster than “seeing how your code works”? It is certainly the case that sometimes one of these is faster than the other but it’s gonna be codebase, tooling, and scenario dependent which one actually is faster.
I’m trying to think through why removing the debugger should help one build a mental model of the code faster than having constant access to one.
I guess I can imagine workflows that are overly debugger dependent - the programmer has a “trust nothing” mentality and checks the behavior of the code against their mental model of the code “too much” thus slowing them down.
But I think there's reason to think the programmer can maintain a mental model of the program while using debugger queries to verify aspects of that model, in a way that's a lot more efficient than relying on the mental model alone. Beyond that, at some point observation of the behavior of the program will come into play -- and I don't see why utilizing the power of a debugger to generate a lot of precise observations quickly isn't a pure win ...
Seems that you are agreeing that imagining, and then using the debugger to verify, is a lot more efficient. And for verification, print is more direct than orchestrating breakpoints and then printing with the debugger's syntax (or GUI navigation).
> Doing “something else” means (1) rethinking your code so that it is easier to maintain or less buggy (2) adding smarter tests so that, in the future, bugs are readily identified effortlessly. Investing your time in this manner makes your code better in a lasting manner… whereas debugging your code line-by-line fixes one tiny problem without improving your process or your future diagnostics.
I've been a professional now for about 15 years and very, very rarely do I get to work on "my code". Almost all of the code I have to work with was written by someone else originally and I have to just modify the system for new requirements. Tests do not exist or if they do they are largely incomplete.
So the only thing to do is to step through find the problem, fix the ticket and move on.
Sure, if I get to design the system I normally write it very simply / well structured, with appropriate levels of abstraction and with enough tests to expose the bugs in my code. But very rarely do I get paid to work on my code, because my code doesn't need a lot of maintenance. I normally am asked to make changes to bad systems.
> Brian W. Kernighan and Rob Pike wrote that stepping through a program is less productive than thinking harder and adding output statements and self-checking code at critical places. Kernighan once wrote that the most effective debugging tool is still careful thought, coupled with judiciously placed print statements.
Thinking harder when I have a project with millions of lines of code (this is normal in large financial systems) won't help me. A debugger will.
I think a lot of these famous programmers have never had to work with something terrible and probably never will and that is why they make such blasé statements.
I find this kind of thinking to be borderline insanity. Why would I expend so much mental energy trying to understand what the past dozen developers were thinking when it's trivial to just inspect values and deal with the reality of the system as it is right now. If a function is expected to return a value of 25 and it's returning 17, I don't need to reason about their code to fix it. My tactic is always to write some very tight unit tests around the sections of code closest to the problem and run them through a debugger so I can see where it goes awry.
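For Python code, that tactic might look like a tight pytest case plus the debugger's post-mortem mode (the module and expected values are hypothetical):

    # test_totals.py -- pin down the function that returns 17 where 25 is expected
    from billing import compute_total   # hypothetical module under test

    def test_compute_total_known_case():
        assert compute_total([10, 10, 5]) == 25

    # run it under the debugger so a failure drops you into the broken frame:
    #   pytest --pdb test_totals.py
    # or place breakpoint() just above the assert and step from there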
I don't think what you are doing is vastly different than what is described. "Thinking hard" is about trying to isolate the problem. RMS famously said that you should try to only debug the code that is broken rather than trying to debug the code that isn't broken (this is especially true in big systems). Where is the error happening? How are these things connected? Etc, etc. When you write very tight unit tests around the sections of the code, you have to do exactly the same thing (and I would argue that writing those tests is a good tool for "getting into" the code).
Next: running a debugger. I use print statements, but seriously, what's the difference? You use watch points in the debugger. Same thing. Once I have tests I find it easier to run them and look at the output of my prints as opposed to stepping through the code. Stepping through the code requires you to remember what you have done before and what the output was (granted debuggers that allow you to go backwards are helpful). If you can do that, then it's all good, but I find that it's easier for me to essentially create a log and read through it. It's just a bit more structured, but in the end it's exactly the same thing.
I think the biggest mistake that less experienced programmers make is that they don't try to reason about the code before they start. It's that isolation that's key, not how you are displaying the state of the program. Single stepping is fine when you've got 100 lines of code, but when you have thousands or millions of lines of code, it's not going to work. You need to be able to work your way backward from the error, reasoning using the source code as a map. You then use debugging tools (or printfs) to narrow down your options.
> I use print statements, but seriously, what's the difference? You use watch points in the debugger. Same thing. Once I have tests I find it easier to run them and look at the output of my prints as opposed to stepping through the code.
Maybe I have been spoiled by Visual Studio but as you said there are plenty of options in the debugger for winding back execution, changing execution, inspecting Objects, inspecting the state of the stack at the time, I can debug other people's assemblies, I can debug machines remotely.
Writing out to the console / log is my last resort.
> Stepping through the code requires you to remember what you have done before and what the output was (granted debuggers that allow you to go backwards are helpful).
No it doesn't require you to remember what you have done before. Even relatively basic debuggers such as the JavaScript debuggers in most browsers have a stack trace with what has been called where.
> If you can do that, then it's all good, but I find that it's easier for me to essentially create a log and read through it. It's just a bit more structured, but in the end it's exactly the same thing.
I don't understand why you would claim an inferior tool is better when a far superior one is available. It would be like saying that an impact wrench/spanner and a spanner are the same thing; technically they both undo bolts, but one makes it far easier than the other.
> "Thinking hard" is about trying to isolate the problem. RMS famously said that you should try to only debug the code that is broken rather than trying to debug the code that isn't broken (this is especially true in big systems). Where is the error happening? How are these things connected? Etc, etc. When you write very tight unit tests around the sections of the code, you have to do exactly the same thing (and I would argue that writing those tests is a good tool for "getting into" the code).
Except it doesn't help you find the code. With a debugger I can stick in some breakpoints where I think the code is going to get hit and then walk myself back from there on what is called and in what order. I work with some terrible systems where it isn't obvious how the code is even executed because "senior" developers have abused IoC / DI, Reflection and Delegates in such a way to obscure what it is actually doing.
As for writing unit tests around sections of code: in quite a few areas I have been barred from doing it (for political reasons), and other times you realistically can't, because your test setup would be just too complicated.
> Single stepping is fine when you've got 100 lines of code, but when you have thousands or millions of lines of code, it's not going to work
Yes it does work. I did it on Friday. I stick a breakpoint on the entry point (in this case an ASP.NET MVC controller) and F10 and F11 my way down through the stack. I learned more doing this for 30 minutes than I did in the previous 2 hours of looking at how the project was structured.
This "think harder" is akin to telling a man to dig harder with a shovel when they have a JCB excavator sitting round the corner.
This is my experience as well. Debuggers are extremely helpful when trying to integrate third-party code with incomplete documentation. It's one thing to run a print statement at a specific level, but having the ability to explore an object and its neighbouring objects can immediately provide insight, instead of trying to find the function in the source code and deciphering it.
This is how the system ends up as a maintainable mess in the first place, where the fix for the aforementioned bug was to +8 and move on, which causes (at a later date) some other section to break because instead of 25, it's returning 33 so the developer adds a -8 and moves on...
I assume you meant an unmaintainable mess, and not a maintainable mess, but nevertheless, debuggers are not the cause.
You get messy code at least as quickly without a debugger, possibly even faster, since in my experience bug fixes in many teams tend to be even more shallow.
The values of the management, the values of the team, and resource availability are what matter in regard to writing, and keeping, code maintainable.
I've worked with people who have had a similar idea, that using debuggers is somehow "bad". I have also had to rescue them by using a debugger to understand issues with code they had struggled to figure out for days, or even weeks, meanwhile causing severe issues for clients.
When you claim tools as the cause for your mistakes, then it could certainly be the case that the tool is of bad quality, which clearly is not the claimed issue with debuggers in the article. More likely though is that you haven't really learned when, and how, and why to use the tool.
Debuggers are brilliant for validating whether your mental model of the code is correct or, more commonly, for understanding how it's wrong. This can have the side effect of solving bugs, especially bugs in your own thinking.
I think it's a fantasy that if it were only "my code" it would be perfect and simple. Most of the worst code I've worked on was my code, after it had grown and adapted to new requirements and after I'd forgotten why I did things the way I did them. Tests are necessary but far from sufficient for making a system comprehensible.
> I think a lot of these famous programmers have never had to work with something terrible and probably never will and that is why they make such blasé statements.
Do you think they never had to work with something terrible, or when faced with something terrible, they took the time to make it not terrible? The track record for the software these two have built speaks for itself.
I wonder if the author has actually used a debugger with practical reverse execution, like rr, UndoDB or TTD. It is not just another feature like "pretty-printing STL data structures"; it changes everything. Debugging is about tracing effects back to causes, so reverse execution is what you want and traditional debugging strategies are workarounds for not having it.
Those record-and-replay debuggers also fix some of the other big problems with debugging, such as the need to stop a program in order to inspect its state.
The author is right that the traditional debugger feature set leaves a lot to be desired. Where they (and many other developers) go wrong is to assume that feature set (wrapped in a pretty Visual Studio interface) is the pinnacle of what debuggers can be. It's an understandable mistake, since progress in debugging has been so slow, but state-of-the-art record-and-replay debuggers prove them wrong. Furthermore, record-and-replay isn't the pinnacle either; much greater improvements are possible and are coming soon.
I especially use rr as an exploration tool. It allows me to ask not the question "Where does the program go from here", but rather "Where did it come from".
I couldn't do my daily work without rr, since often I work with systems that are large and complex and I didn't write myself.
The article neglects one of my most important use cases for a debugger: figuring out how an undocumented piece of legacy code works. For example, if I perform a particular action in the UI, does function foo() get called? I find that setting breakpoints and tracing execution and variable changes in a debugger is an effective way of doing this kind of reverse engineering.
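In Python, for instance, the "does foo() get called, and from where?" question can be answered either way (the file name and line number below are hypothetical):

    import traceback

    def foo():
        # the print-statement version: dump the call chain whenever foo() runs
        traceback.print_stack(limit=8)
        ...

    # the debugger version, without touching the source:
    #   python -m pdb app.py
    #   (Pdb) break ui_handlers.py:42   # hypothetical location of foo()
    #   (Pdb) continue                  # now perform the UI action
    #   (Pdb) where                     # shows how we got here when the breakpoint fires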
Also, when you're working with code that takes a long time to compile and link, using a debugger to check program state can be a lot quicker than recompiling the code with added print statements.
Different developers work with different types of code in different environments, so just because Linus Torvalds or Rob Pike does something one way doesn't mean that this is the most effective way for everyone else to do it.
Many years ago, we were installing an ERM system, when we discovered that it would not run payroll--or rather, it would run payroll for every employee except for the two who were in a certain jurisdiction. The very expensive consultants who were overseeing the implementation did not have any useful ideas on how to resolve this. (Though they did have a useless and time-consuming one.) I figured out how to use the COBOL debugger (called an "animator"), and in a couple of hours located the problem. Perhaps if I knew how to write COBOL printfs, I could have done without the animator, but I was all but illiterate in COBOL.
I have since inherited the care of a WinForms system, and without the Visual Studio debugger I'd be lost there.
Exactly. I seldom use it for my own code; I can't remember the last time I did. But stepping through some huge undocumented Java codebase with aspect-oriented programming in it, like Spring AOP (where the actual code that runs in a given function invocation is defined in some .xml file rather than directly in the code), saved me a lot of time.
Also, debuggers display the contents of variables very nicely while stepping through code. For me that's a good way to verify my assumptions after reading a certain piece of code: just set a breakpoint and see if it actually works that way. I'm often surprised.
Maybe it's not really required for C / low-level code but if you are wrangling complex Java codebases it's a very nice tool to have in the toolbox.
The article talks about live debugging & stepping through code, which I agree, I almost never do. But post-mortem debugging of coredumps is a different thing entirely.
In my job on the kernel team at Netflix's Open Connect CDN, I do post-mortem debugging of kernel core dumps almost daily. On a fleet the size of our CDN, there will invariably be a kernel panic which you'd not otherwise be able to reproduce. In fact, I often use 2 debuggers: kgdb for most things, and occasionally a port of the OpenSolaris mdb for easily scripting exploration of the core.
My hat is off to all the people who made my debugging so much easier. Eg, the llvm/clang folks who emit DWARF, gdb folks who make the FreeBSD kgdb debugger possible, and the Solaris folks who wrote the weird and wonderful mdb.
Yes to using debuggers for coredumps. I never found the whole UX of GDB good enough to displace my current print statement workflow, but for post analysis, it's really helpful when I can hit 'bt' (backtrace) and almost instantly see what went wrong.
As always with these kinds of dogmatic posts, it ignores the real world: working on years-old legacy software with millions of lines of code and off-the-charts cyclomatic complexity.
I could show any one of these authorities a situation where their bravado would fail, and where trying to figure out a particular bug by "staring at the code and thinking harder" would be impossible.
Besides - they all tend to concede that they will use print statements or whatever as a last resort. That is just absurd - you have to add the statements, recompile the code, and eventually remove them again. Just use the goddamn debugger; that's what it's there for.
What starts as a completely reasonable "hey maybe sometimes you should try to read the code and not let the debugger become a crutch" turns into a sensationalist black/white statement "I NEVER USE A DEBUGGER". Grow up.
I’ve never understood the “I don’t need a debugger - print statements are fine” argument. Even if you only use a debugger to view application state, (which, for most debuggers, is but a tiny fraction of their capability) it’s still an order of magnitude more convenient than print statements.
No need to modify code, and no risk of forgetting to remove a print statement (I have seen a forgotten dump on a rare execution path leak sensitive data to users first hand). Not to mention the cases where printing can change or break the program in some cases; for example printing before sending headers in something like PHP.
I find this to be completely unbelievable. If I'm in IntelliJ or Visual Studio, then adding breakpoints is equivalent to, but easier than, adding print statements. It's trivial to click the gutter on the line I'm suspicious of and run the program in debug mode. I can see all the local vars (exactly what I'd do with a print statement) and, if something is amiss, easily go back up the call stack to see what went wrong. I find it mind-blowing when developers don't do this, because it's so easy and so valuable. It's just so much harder to do with functional languages or languages without a first-rate IDE, which seem to be what people like these days. In fact, I find it mind-blowing when developers don't consider this a mandatory feature when choosing a language.
Visual Studio's debugger is especially magical with .NET. With the debugger paused on a breakpoint you can drag the "next statement" pointer backwards to rerun pieces of your code. You can also edit the code while paused and your changes will be reloaded on the fly when you continue execution.
Furthermore, you can even edit the values in place, to confirm or reject your hypothesis about a bug. And a debugger can often reach into framework or library code as well, saving the need to go and check out and rebuild dependencies.
The author fails to understand the true purpose of "debugging". It's not about finding and fixing a bad line of code. It is about broadening your understanding of a system and correcting faulty assumptions about how it works. Then applying this greater understanding to a) find and fix issues and b) avoid creating new problems in the future.
Once you have extensive knowledge of how a system works, you can more easily spot incorrect code and are less likely to write it yourself. You should always treat any debugging session as "learning more about the system". If you can't explain to another person _why_ a bug occurred, then you won't be able to convince them that the problem is truly fixed.
The approach you take to learn how the system works is not important, and will vary from person to person. Experienced developers can read code, think about it, and understand what it does. Some people understand control flow through print statements. Some people can fly through with a debugger. Saying someone else's method of study is "wrong" is just silly if in the end they understand how it works.
This article is especially wrong because it makes the assumption that you must step through a program line by line when you use a debugger, which you don't have to do. You can just put breakpoints at the same places where you would have put an equivalent print statement, except now you can inspect _everything_, instead of the one or two things you bothered to print out.
I use debuggers (GDB) exclusively in batch mode. It allows for a very methodical debugging approach:
1. Formulate a hypothesis.
2. Determine what data you need for verification.
3. Instrument the code using gdb break commands, to hook function calls, syscalls, signals, state changes, etc. and print debugging information (variables, stack traces, timings) to a log file.
4. Then run an automated or manual test of the failure you are debugging.
5. Then STOP data collection. Kill the process. Shut down the debugger.
6. Perform forensics on the log file.
7. Validate or discard the hypothesis.
This allows you to reason reliably about the computational process:
- Why did I see this log line before this one?
- Why was this variable NULL here, but not there?
You don't need to do this while you are in a debugging session. You can take your time. You can seek back and forth in the file.
It's like dissecting a dead animal, not chasing a fly.
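The same instrument/run/stop/forensics loop can be sketched in plain Python when gdb isn't in play, with a logging decorator standing in for the gdb break commands (names are illustrative):

    import functools
    import logging

    logging.basicConfig(filename="debug.log", level=logging.DEBUG,
                        format="%(asctime)s %(message)s")

    def instrument(fn):
        # hook a function named in the hypothesis and dump the data needed to test it
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            logging.debug("enter %s args=%r kwargs=%r", fn.__name__, args, kwargs)
            result = fn(*args, **kwargs)
            logging.debug("exit  %s -> %r", fn.__name__, result)
            return result
        return wrapper

    # decorate the suspect functions, run the failing test once, stop,
    # then do the forensics on debug.log offline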
In addition, you can share concise information with your colleagues:
- This is the instrumentation
- This is the action I performed
- This is the output
And ask concise questions, that people might be able to answer:
> I expected XYZ in log line 25 to be ABC, but it was FGH. Why is this?
So the argument is that if you use a debugger you become lazy and stop reasoning about the code. That's a pretty terrible argument against any tool. "It's a good tool, but if used wrong has drawbacks".
I have heard a similar argument against syntax highlighting. I'm not kidding.
I'm sure I'd write more thought through code if I took a screwdriver and removed my backspace key. But that doesn't mean it's a good idea.
How about: use a debugger AND reason about the code? My pet theory: the reason the listed developers (Kernighan, Torvalds, et al.) don't use debuggers is that the debuggers they have available just aren't good enough. It would be very interesting to have them describe their development environments, what debuggers they have actually used, and in which languages.
A debugger isn't much better than println if you are working with a weak type system and a poorly integrated development environment. If you do C on Linux/Unix and command-line gdb is your debugger, then I understand why println is just as convenient.
println doesn't solve the problem of breaking in the correct location, or of trying a change without restarting (by moving the next-statement marker to the line before the changed line). It doesn't support complex watch statements to filter data, or visualizing large data structures, and so on.
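For what it's worth, even plain pdb covers the "break in the right place, only for the interesting data" case; a sketch (the file, line, and condition are made up):

    #   python -m pdb app.py
    #   (Pdb) break orders.py:42, total > 1000 and customer_id == 7
    #   (Pdb) continue

    # or as an in-source guard that only fires for the bad case:
    def process(order):
        if order.total > 1000 and order.customer_id == 7:   # hypothetical condition
            breakpoint()                                     # drop into the debugger only then
        ...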
So I work with a company where we provide online nutrition consultation to employees from other organisations as part of a larger corporate medical program. Our system has a scheduling system developed in Rails to deal with these appointments. This system was developed by a person who quit the company some time back.
A few weeks back, we got some complaints from a client who said some of their employees weren't getting allotted an appointment slot, despite the fact that the slot is supposedly free. I dived into the codebase to try and figure out the problem. There were some minor bugs which I spotted first, and could fix without using a debugger. But the appointments were still getting dropped occasionally. So I started tracing the control flow more carefully.
That's when I found one of the strangest pieces of code I had ever seen. To figure out what the next available appointment slot was, there was a strange function which got the last occupied slot from the database as a DateTime object, converted it to a string, manipulated it using only string operations, and finally wrote it back to the database after parsing it back into a DateTime object, before returning the response! This included some timezone conversions as well! Rails has some very good support for manipulating DateTimes and timezones. And yet, the function's author had written the operation in one of the most confounding ways possible.
Now, I could have sat there and understood the function without a debugger, as the article recommends. And then, having understood the function, I could have rewritten it using proper DateTime operations. But with a client and my managers desperately waiting for a fix, I used a debugger to step through the code, line by line, just understanding the issue locally, and fixed the bug, which was buried in one of the string operations. That solved the problem temporarily, and everyone was happy.
A week later, when I had more time, I went back and again used the debugger to do some exploratory analysis, and create a state machine model of the function, observing all the ways it was manipulating that string. I added a bunch of extra tests, and finally rewrote the function in a cleaner way.
Instead of romanticising the process of developing software by advocating the use or disuse of certain tools, we should be using every tool available to simplify our work, and achieve our tasks more efficiently.
"we should be using every tool available to simplify our work, and achieve our tasks more efficiently"
Yep. I advocate for some "micro managing" within a team for this type of stuff (spotting things your people do that you know there are faster ways to do). Everybody learns from that process.
The end of the essay implies that "debugger" is a poor name because of course you still have to fix the bug yourself. If we call it instead "bug-location-explorer" I find them invaluable for certain problems. In particular when you have a very complex system that takes a while to get into its fragile state, when I/O is difficult (e.g. many embedded systems), or as others have noted here when you are spelunking others' code. Breakpoints are crucial since most bugs with modern languages are wrong output and not bus errors.
I started out as a user of "print" as a debugging technique, but early on Bill Gosper took pity on me and introduced me to ITS DDT and the Lispm debugger. They both had an important property that they were always running, so when your program fails you can immediately inspect the state of open files and network connections. No automatic core dumps except for daemons. The fact that you explicitly have to attach a debugger before starting the program is a regression in Unix IMHO.
It doesn't surprise me that Linus doesn't use one as kernel debugging is its own can of worms.
I don't know if it's always been the case, but it is certainly possible to attach and detach a debugger from an existing process, gdb/lldb takes -p for pid. (Although, detaching from threaded processes seems to have a risk of the process ending up stuck in my experience)
Briefly popping into the debugger can be a really quick way to find where an infinite loop is happening, especially if it's not in your code, or not where you expect. I found this especially helpful in diagnosing a FreeBSD kernel loop I ran into, once the location of the loop was clear, the fix was simple.
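A Python-flavoured analogue of that trick, assuming a POSIX system: arrange up front for an on-demand stack dump, so a stuck process can tell you where its loop is without attaching a debugger at all:

    import faulthandler
    import signal

    # after this, `kill -USR1 <pid>` makes the process print every thread's stack
    faulthandler.register(signal.SIGUSR1)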
As you can see, we have identified the crash in this process. Go have Steiner attach GDB to it to find out what is wrong. Mein Führer... Steiner... Steiner accidentally killed the process and rebooted the server.
How about 'state inspector'? This is, of course, exactly what this article proposes doing with "print" statements, but print is a paltry state-inspection tool compared to the, ahem, debuggers to which we are accustomed.
I use debuggers to track down and understand compiler codegen bugs.
I use debuggers to track down and understand undefined behavior, where mutating the code with logging statements may cause the bug to disappear.
I use debuggers to understand the weird state third party libraries have left things in, many of which I don't have the source code to, or even headers for library-internal structures, but do have symbols for.
I use debuggers to understand and create better bug reports and workarounds for third party software crashing, when I don't have the time, the patience, or the ability (if closed source) to dive into the source code to fix it myself.
I use debuggers to verify I understand the exact cause of the crash, and to reassure myself that my "fixes" actually fixed the bug. This is especially important with once-in-a-blue-moon heisencrashes with no good repro steps. I want a stronger guarantee than "simplify and pray that fixed it".
Yes, if your buggy overcomplicated system is a constant stream of bugs, think hard, refactor, simplify, do whatever it takes to fix the system, stem the tide, and make it not a broken pile of junk.
But sometimes bugs happen to good code too, and sneak through all your unit tests and sanity checks anyway. And despite rumors such as:
> Linus Torvalds, the creator of Linux, does not use a debugger.
Linus absolutely uses debuggers:
>> I use gdb all the time, but I tend to use it not as a debugger, but as a disassembler on steroids that you can program.
He just pretends he's not using it as a debugger (as if "a disassembler on steroids that you can program" isn't half of what makes a debugger a debugger) and strongly encourages coding styles that don't require you to rely on them heavily.
I see mentions of gdb - an extremely cumbersome tool - but no mention of Visual Studio and its marvelous debugger.
I too, when I'm on Linux, don't use a debugger, because there's no good debugger and adding a print statement is faster and easier, and figuring out what is going on with gdb is just plain horrible and slow.
That's why as soon as I find a bug on Linux I try to have it on Windows to leverage VS's debugger. I can't count the number of times where I could instantly spot and understand a bug thanks to VS's debugger.
"Think harder about the code", sure, and what if you didn't write the code?
I wrote 3 kloc of C++ for a Dwarf Fortress plugin while convalescing with very poor eyesight, and never got around to figuring out how to start gdb, so crashes could involve a lot of pondering, or compiling in logging, just to track down where in the source they occurred.
A couple of months ago I finally had a go at using gdb to help fix a reported crash, and found that with a debug-built executable, you just type "gdb progname" and then "run" at gdb's prompt - hey presto, if the program crashes it gives the source line. It didn't matter that the entry executable wasn't built with symbols. It was hugely useful for a tiny learning step.
There is nothing marvelous about the VS debugger (it even had bugs in basic functionality until very recently). It just presents a simpler interface because you are launching it from an IDE; get a C++ IDE on Linux and it will be just as easy to launch.
But if you are looking for marvelous debuggers, I do recommend you look at the Python ecosystem.
Sounds like a theologian who wouldn't use a microscope, and would instead think over and interpret their holy book of choice harder. You can get to interesting insights along the way, but I wouldn't call it practical or efficient.
Debuggers are the most amazing tools to explore and learn the code in your hands. They do have limitations, as pointed out, but presuming that your inner insight/gut feeling leads to more scalable and lasting results is ridiculous.
Smarter people have expressed more ill-considered opinions.
"In the 1950s von Neumann was employed as a consultant to IBM to review proposed and ongoing advanced technology projects. One day a week, von Neumann "held court" at 590 Madison Avenue, New York. On one of these occasions in 1954 he was confronted with the Fortran concept; John Backus remembered von Neumann being unimpressed and that he asked, "Why would you want more than machine language?" Frank Beckman, who was also present, recalled that von Neumann dismissed the whole development as "but an application of the idea of Turing's 'short code."' Donald Gilles, one of von Neumann's students at Princeton, and later a faculty member at the University of Illinois, recalled that the graduate students were being "used" to hand-assemble programs into binary for their early machine (probably the IAS machine). He took time out to build an assembler, but when von Neumann found out about it he was very angry, saying (paraphrased), "It is a waste of a valuable scientific computing instrument to use it to do clerical work.
The 1950s were a different time. Nowadays computing power is almost too cheap to meter. I do not think anyone at almost any software company knows how to answer the question "how much money are we spending on running our compilers". In most cases, I suspect the cost of running the compiler is dominated by the salary of the programmer as he types "make" and waits for the compilation to finish.
We live in a world where every employee has a previously unimaginable amount of processing power dedicated for their personal use that spends almost all of its time idling. That results in a far different calculus than the world where processing power is a scarce resource.
If an assembler or compiler could save you from having to run dozens of extra batch jobs to fix a bug in your hard-to-understand machine language code, it might have allowed more time on an expensive machine to be used for productive purposes.
Yet the most prevalent tests these days for SE are based around the old problems. Then again, these tests are less about how good an engineer you are and more about how submissive you are.
It's easy to see this as a wrong opinion when computers are extremely cheap and fast. Was von Neumann wrong about computers being more expensive compilation tools than grad students at the time?
He cites Linus, who says that not using a debugger results in a longer time to market, which is good for Linux in his mind. How is this convincing for me?
Anyway, imagine a detective trying to figure out a murder scene without going through it step by step. Of course, with experience comes speed. So, weird flex but OK. I will still prefer C# over any language just because the debugger in Visual Studio is amazing!
After re-reading the article, I see it as another "just code better" argument, which will hopefully make it easy to pinpoint the bug from the problem itself. In a complex system with a lot of feedback loops, that's nearly impossible without debugging or using logs (which for me are just a serialized debugger).
Debugging is reverse engineering, i.e. trying to understand what's really happening inside a program.
It necessarily implies some loss of control over the code you (or somebody else) wrote, i.e. you're not sure anymore of what the program does - otherwise, you wouldn't be debugging it, right?
If you get into this situation, then indeed, firing up a debugger might be the fastest route to recovery (there are exceptions: e.g. sometimes printf-debugging might be more appropriate, because it flattens time and allows you to visually navigate through history and determine when things started to go wrong).
But getting into this situation should be the exception rather than the norm. Because whatever your debugging tools are, debugging remains an unpredictable time sink (especially step-by-step debugging, cf. Peter Sommerlad "Interactive debugging is the greatest time waste" ( https://twitter.com/petersommerlad/status/107895802717532979... ) ). It's very hard to estimate how long finding and fixing a bug will take.
Using proper modularity and testing, it's indeed possible to greatly reduce the likelihood of this situation, and when it occurs, to greatly reduce the search area of a bug (e.g. the crash occurs in a test that only covers 5% of the code).
I suspect, though, that most of us are dealing with legacy codebases and volatile/undocumented third-party frameworks.
Which means we're in this "loss-of-control" situation from the start, and most of the time our work consists of striving to get some control/understanding back.
To make the matter worse, fully getting out of this situation might require an amount of work whose estimate would give any manager a heart attack.
“Stepping through code a line at a time” is a rather narrow definition of using a debugger. Setting a logging breakpoint. Conditionally breaking. Introspection by calling functions while the program is paused. Testing assumptions by calling functions. Conditionally pausing the program. Setting a breakpoint in code and then calling functions to activate the code path to the breakpoint. This is the core of what lldb does for me. I agree, single stepping is rare and is like looking for the needle in the haystack. But are all these other aspects not assumed to be using the debugger?
> However, the fact that Linus Torvalds, who is in charge of a critical piece of our infrastructure made of 15 million lines of code (the Linux kernel), does not use a debugger tells us something about debuggers.
It tells us more about Linus. I hope no impressionable developers take this article too seriously.
Coding efficiency is heavily influenced by the time it takes to iterate. Once you write something, the time it takes to test it out, change it, and try something new provides a nice upper limit on how fast you can develop software.
The speed of iterating changes to code has a huge influence on the tools that you use. In a language like C, compiling new changes to code can take many seconds or sometimes minutes. Thus, you'll want to have heavy duty tools that can carefully analyze how your code is operating.
In contrast, in a scripting language, changing a line and rerunning the code can take far less time (a few seconds for even large programs). Thus, you can iterate more often, and so you don't have to be as careful in each iteration.
The moral of the story is that debuggers can be extremely helpful for some languages, especially those that take a long time to compile. However, while still helpful, they are far less helpful for languages that you can run quickly (I'm thinking Python here).
The anti-debugger argument always seems to set up this strawman debugger user who blindly starts the debugger in response to any problem, applies no thought to what's going on, doesn't dig deeper, etc.
Sure, maybe that guy exists. Maybe you've seen "that guy" or been "that guy". Does that mean that it has no value to be able to stop a program and look at all the values?
I think we need to rethink what we mean by debugger. In Dark (https://darklang.com), we don't have a debugger. Instead we combine the editor with the language, and then when you're editing code you have available at all times a particular program trace (typically a real production value, which are automatically saved because dark also provides infra). As a result, you can immediately see the value of a variable in that trace.
Is this a debugger? Sure. It sorta lets you step through the program and inspect state at various points. But it's also a REPL, and it also strongly resembles inserting print statements everywhere. It's also like an exception tracker/crash reporter and a tracing framework.
IMO it's both simpler and more powerful than any of these. It's like if every statement had a println that you never have to add but is available whenever you want to inspect it. Or like a debugger where you never have to painfully step through the program to get to the state you want.
So overall, I think we need to think deeper about what a debugger is and how it can work. Most of the people quoted do not have a good debugger available to them, nor a good debugging workflow.
I hate this article and disagree strongly with the ideas presented.
“Debuggable” code is written in a certain style — just like “testable” code.
I consider a codebase to be good when there are meaningful places to put breakpoints, sufficient for running learning experiments about the code. Just like a codebase with "tests" is often a better codebase as a result of being written in a way that supports testing, a codebase that supports debugging can also often be a better codebase. And these work well together (putting breakpoints in test cases is often a really great idea!).
I think one of the reasons the value of the debugger so often fails to be noticed by experienced developers is that so many systems are architected in a horrific way which really does not allow easy debugger sessions — or the debugger platform is so underpowered that debugging is unreliable. There's nothing worse than not trusting the debugger interface — "I want to do an experiment where I run code from here up to here" needs to be easy to describe and reliable to execute, otherwise it is too much pain for the gain. In my opinion, failure to make this easy is not a fault of the concept of a debugger but a fault of the codebase or the tooling (which often is very inadequate).
I do not use a debugger most of the time either. Let me tell you why I don’t:
- Because I forgot how to use it (or never knew how.) There are many debuggers and UIs and I still know how to use some of them to decent effect, but I simply don’t know how to be effective with most of them.
- Because I’m pretty confident I have a good understanding of what code is doing nowadays. My intuition has been honed over the years and I tend to quickly guess why my code isn’t working.
- Because my code is all unit tested now. This contributes to my ability to be more sure about what code is actually doing.
There are still some cases where I may try a debugger. I had one recently where I was unsure what path my code was taking and I wasn’t sure how to printf debug. That helped a lot.
Not using a debugger is not really a choice I made or something I do to try to look impressive, rather it’s most likely a result of the growing diversity of programming languages and environments I work in, combined with better testing habits. I just feel like I have enough confidence to fix the bugs quickly. When I lose that confidence is when I break out printf or the debugger.
> Because I'm pretty confident I have a good understanding of what code is doing nowadays. My intuition has been honed over the years and I tend to quickly guess why my code isn't working.
So does your code ever use other libraries? Does it ever call third party APIs? Do you ever have to modify code you didn’t write? Do you remember what your code does that you wrote 10 years ago?
Surprisingly, third party libraries have ended up being fairly predictable as well. Not perfectly, but enough that I am usually not too concerned about it. Tend to RTFM at least once, though.
Try reading the manual for Boto3 and writing code blind; you can't be sure of the response format without just calling it first and looking at the response.
This may surprise you, but I don't use Python or boto3. Last time I used either of those was nearly half a decade ago. Software I use nowadays generally has much better documentation, not to mention is written in a language where I can get code completions and accurate typings.
Also, I never said that I don't read the source code of third party libraries. In fact, I do. I like to embed third party libraries directly so that I can inspect them inside of my workspace. For almost any library that I use, though, it hardly requires much familiarity to get basic operations working. Since a good amount of the libraries I use at my day job are proprietary, I can't just google for Stack Overflow questions and documentation isn't always available, so it would be pretty debilitating if I needed a debugger just for the simple task of utilizing a library.
This is also part of why I stopped using Python. MyPy shows promise, but without typings at least on par with TypeScript, my productivity in Python was surprisingly poor despite amazing frameworks like Django and DRF. Simply put, there was almost always too much futzing around, and even with many debugging techniques I sometimes never fully figured out what was wrong with my code.
I used Boto3 as an example of a large, complex library where you aren't going to inspect every line of code. But you mentioned web development, so do you inspect every single line of every third party dependency? Every library?
So I just showed you a massive API: could you use your intuition, comments, etc. to figure out the response? Are you claiming that any API that you can’t understand through “intuition” is by definition not well designed?
Have you ever integrated with something like Workday?
Hell yeah I'm claiming APIs I have trouble understanding with intuition are not well designed. You might even call them... unintuitive. The entire point of an API is for it to be consumed; a nice uniform API like Stripe or Qt is easy even with minimal docs. If a user has trouble with your application, you don’t immediately blame them; you have a UX problem. If your API is hard to use? That’s a problem. This only becomes more true as the stakes get higher. How do you feel about unintuitive cryptography APIs?
Have I ever used an API that is hard to use and confusing? Yes. JIRA and Workfusion come to mind. The latter was a SOAP API. Thankfully, I don’t really need to deal with SOAP anymore, and good riddance. In the case of bad APIs, my approach has always been to strongly isolate them from my code with an abstraction; most dramatically by actually putting up a service that just interfaces with the API I don’t like and provides a minimal interface to the functionality I need. At work, I usually have to do that for security reasons, even if the API is good.
Anyways, I really, genuinely don’t have any clue what you’re trying to prove here. You’ve gone on and on about how bad APIs exist and implied this means I must be lying. So what do I have to do, link to some docs and say “Hey, check out how usable this API is”? I won't bother, but the two aforementioned (Qt, Stripe) are great examples in two completely different categories.
And I have literally not even the smallest clue how this ties back to debuggers. I can’t recall a single time my reaction to a confusing API was pulling out a debugging tool other than printf; more likely I’d play around in a sandbox or REPL instead.
No, what I am saying is that defining a bad API as one that you can’t intuit is a case of the “No True Scotsman” argument. The Boto3 API is well organized, it just covers a massive surface. It’s much easier just to call the API and inspect a real world response.
For instance, of course I know all of the database instances in our AWS account, and I read the Boto3 docs to see which API to call to return them, but it was much easier just to call the API and look at the response in the debugger than to guess the shape of it from the documentation. The Boto3 module covers every single API for every service that AWS offers. Of course the API isn’t going to be consistent between S3 (storage) and DynamoDB (a NoSQL database).
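For what it's worth, that workflow looks roughly like this in Python (describe_db_instances and the response keys are real boto3/RDS fields; the script itself is just a sketch and assumes AWS credentials are already configured): call the API once, dump the real response, then pull out what you need.

    import json
    import boto3

    rds = boto3.client("rds")
    response = rds.describe_db_instances()

    # Dump the raw structure once to see what keys actually come back...
    print(json.dumps(response, indent=2, default=str))

    # ...then pull out the bits you care about.
    for db in response["DBInstances"]:
        print(db["DBInstanceIdentifier"], db["Engine"], db["DBInstanceStatus"])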
And what’s the difference between using a REPL where you are running code line by line and using a debugger where you are running line by line?
On the other hand, why look through log files with print statements when you can just let the program run, set a breakpoint, and look at the entire state of your app, including the call stack? But these days I wouldn’t even think about using a regular dumb text logger. I use a structured logging framework that logs JSON to something like Mongo or ElasticSearch, where I can use queries to search.
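A minimal stand-in for the structured-logging idea, using only the Python standard library (a real setup would use something like structlog and ship to Elasticsearch; this just shows the shape of it): emit one JSON object per event instead of free-form text, so the log can be queried later rather than grepped.

    import json
    import logging
    import sys

    logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")
    log = logging.getLogger("orders")

    def log_event(event, **fields):
        # One JSON document per event, queryable by field.
        log.info(json.dumps({"event": event, **fields}))

    log_event("order_created", order_id=42, user_id=7, total_cents=1999)
    log_event("payment_failed", order_id=42, reason="card_declined")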
Heck, even in my C days when I would make a mistake and overwrite the call stack, stepping through code and seeing where the call stack got corrupted was invaluable, as was seeing whether the compiler was actually using my register and inline hints by looking at the disassembly while the code was running.
It’s like Java Hello World, or Python urllib. I want to do some basic operations in S3 and I’m knee deep in all kinds of nonsense boilerplate with boto. This is not a knock on Amazon; I later used their official Go SDK and it was great. The API surface is absolutely enormous, so I don't buy the handwaving that big APIs are by nature incomprehensible.
I also don’t really get the framing that this is a No True Scotsman argument. I’m saying that if someone has trouble using your API, and that trouble is not a matter of the developer not understanding a core concept, it is a problem with your API. The counter argument is basically “My API doesn’t suck, it’s the developers that are stupid.” And no, a valid retort is not “what if they are, though?”
I don't consider a REPL to be a debugger. It’s just another code sandbox. You may consider it in the realm of ‘debugging tools’ but I do not. The difference for me is that I hit sandbox tools like the Go Playground and the Python REPL before I have code to debug, not after.
Anyway, this has now evolved to the point where we’re personifying APIs and stretching the definition of a debugger. I never claimed I do not printf debug, or use a REPL; hell, occasionally, every few months, I even use a real debugger. My claim is not even that I am a good programmer, which I would agree I am not. I am literally claiming that my use of debuggers (not necessarily all debug tools) has declined to very low levels because of better developer tools, better intuition (= coding a lot, not necessarily being ‘good’), and honestly, just having too many environments to actually learn how to use debuggers in all of them.
Having to use a debugger in C because you overwrote the call stack is an example of why debuggers can be an anti-pattern. You had to do so much work because the compiler couldn’t prevent you from making the simple mistake of an out of bounds memory access. Is a debugger useful here? Absolutely. Is it ideal? Hell no. I don’t want to be the Sherlock Holmes of core dumps, I want all of my mistakes illuminated as early as possible, so I can get back to work. I am not yet a huge Rust zealot, but you can see where I’m going with this. Does accidentally overwriting the stack in C have anything to do with bad APIs? Maybe. There are plenty of C library and POSIX functions that are not invalid or broken in any way - in fact they behave exactly as described - but are incredibly common sources of memory and concurrency bugs. It’s why Microsoft compilers crap themselves with warnings whenever you use functions like strcat. I’m not sure I believe that the “secure CRT” versions are always much better, but the original API is terrible. C++ can do a fair bit better with strings and memory management when used responsibly. Obviously Go does a lot better, and Rust does even better than that.
So I don’t often use a debugger because a lot of the scenarios like that where I might have reached for one have been greatly reduced, again, by better tools, better testing, etc. The fact that I have to defend this so rigorously makes me wonder if you are taking advantage of the modern tools and testing standards that make the development workflow so much easier. Not everyone practically can; I imagine there are many fields of work where the tools or the ecosystem are behind, but I do not believe that is something that can’t or won’t be fixed, and if that’s the case I sure hope it is.
> So I don’t often use a debugger because a lot of the scenarios like that where I might have reached for one have been greatly reduced, again, by better tools, better testing
I would actually say the opposite. If you’re littering your code with print statements and #if DEBUG equivalents (instead of using a debugger) and grepping log files (instead of using a structured logging library) you’re not using the newest tools available and you’re “debugging” in a way that I gave up in the mid 90s writing C and FORTRAN for DEC VAX and Stratus VOS mainframes.
There is nothing “modern” about littering your code with print statements. I thought being forced to do that in the 90s was already taking a step back from the various DOS based IDEs I had used by then.
The thing is, your imagination of what I’m doing (littering code with printf statements) and the actual reality of what I'm doing (solving AT LEAST 95% of my problems using extensive automated tests, strong linting, strict typechecking, and modern programming practices) are leagues apart. When I say I break out the debugger about every few months, I am not exaggerating. I may insert a printf or two once every other week. This is not the common case, it’s just significantly more common for me than attaching a debugger, because it’s quick and easy.
The practice I adhere to is to break code up into small modular bits then unit test the living hell out of those bits, then integration test larger combinations. Maintaining test code takes time, but I take it very seriously. I can’t share any of my actual work at Google but I can share my open source work; a project that I wrote this way is Restruct and it normally has around 98% line coverage. If someone finds a bug I add a new test. I have never used a debugger on Restruct or projects built on Restruct.
So then, what do I do all day? Console.log and printf everything? No, honestly no. I spend little of the day debugging because unit tests tell me nearly exactly what codepaths I broke, printing out helpful diffs of the expected behavior versus the actual behavior. This almost always gives away the problem, even when I’m working on a brand new piece of code with tests I just wrote (sometimes even before the code.)
So I would say printf is my go-to debugging tool, but if I want to be accurate, it’s probably really automated testing. Automated testing used in this fashion basically acts as a set of small debug routines. We don’t call tests debugging tools, though, because that would undersell their usefulness; they do so much more. Which is why I don’t sweat it when I have just as much or more test code than actual code; unlike other code, it often pays for itself if you do a good job.
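Restruct itself is Go, but the "test prints a helpful diff of expected versus actual" workflow looks roughly the same in any language with a decent runner. A hypothetical pytest version (normalize_user and the data are invented): when the assert fails, pytest shows a field-by-field diff of the two dicts.

    def normalize_user(raw):
        # Tidy up a raw record: trim and title-case the name, coerce the flag to bool.
        return {"name": raw["name"].strip().title(), "active": bool(raw.get("active"))}

    def test_normalize_user():
        expected = {"name": "Ada Lovelace", "active": True}
        assert normalize_user({"name": "  ada lovelace ", "active": 1}) == expected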
And besides, if unit tests explain a bug to you before you actually run into it, did you really do any debugging or did you bypass it entirely?
For the most part, C++, Go, TypeScript. At work I mostly write (micro) services or web UIs, and at home I do whatever (Tetris clones, small studying apps, utilities for reverse engineering, I’ve even done a couple Gameboy emulators in Go.)
That's been my experience in recent years; unit tests eliminate a whole swath of bugs that I'd otherwise use a debugger for.
And beyond the bugs themselves, reading the unit tests shows me what a function is expecting and what it's doing, and especially its behavior in edge cases.
The other technique I use is judiciously placed assertions that enforce invariants. I'd rather the system fail explicitly than have it return junk data to users. (I work in finance, so customers are far more forgiving of a system returning an error message than of seeing their balances be mysteriously wrong.)
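The sort of invariant check being described, sketched in Python (the function and numbers are invented): fail loudly rather than hand a customer a mysteriously wrong balance.

    def apply_withdrawal(balance_cents, amount_cents):
        # Invariants: withdrawals are positive and never overdraw the account.
        # In production code you might raise a domain exception instead, since
        # asserts can be stripped with python -O.
        assert amount_cents > 0, f"withdrawal must be positive, got {amount_cents}"
        new_balance = balance_cents - amount_cents
        assert new_balance >= 0, (
            f"overdraft not allowed: balance={balance_cents}, withdrawal={amount_cents}"
        )
        return new_balance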
> Kernighan once wrote that the most effective debugging tool is still careful thought, coupled with judiciously placed print statements.
Of course the first tool is careful thought. But when forced to fall back to print or log statements it feels like a handicap.
If I have to use a print statement it means I'm not sure what's going on so I'm not sure what, exactly, to print. By breaking on that code I don't have to know exactly because I can execute arbitrary code in that scope. What takes multiple iterations with print is often just one with break.
I can compare directly, because on a production-only bug I'm forced to use log debug statements. And usually in that case I have the same code open locally in a debugger. The difference is night versus day.
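Concretely, what "breaking on that code" buys you over a print is that once stopped you can evaluate anything in scope, not just the one value you guessed to print. A small Python sketch (reconcile and its inputs are invented for the example):

    def reconcile(ledger, statement):
        matched = [entry for entry in ledger if entry["id"] in statement]
        # At the (Pdb) prompt you can poke at anything in scope, e.g.:
        #   p len(matched)
        #   p ledger[0]
        #   p statement - {e["id"] for e in ledger}
        breakpoint()
        return matched

    reconcile([{"id": 1, "amount": 100}, {"id": 2, "amount": 250}], {1, 3})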
Maybe it's like a chess master who really doesn't need the board. For the kid in Searching for Bobby Fischer, it may really be a distraction. But I notice that grandmasters use a chess board in serious competition. And as a programmer I'm no grandmaster, and I play this weird game better with the board in front of me.
When I worked at Google Search last year, running a service locally in the debugger was just about the only way to figure out what it does. There are so many control flow transfers - async message passes where you have no idea what is coming back - and people who think they are templating gods, people who want to use syntactic sugar to define a new language, people using functors just 'because', lambdas and other BS C++21 features inside Google, and inheritance/container trees that look like a 3-D house, that the debugger is the only way you can trace the control flow of such a haywire spaghetti code base.
Sadly, gdb is running out of gas at Google - it takes 60s to load "hello world" plus 200MB of Google middleware, and it would often step into whitespace or just hang, forever. This was often because the gdb/emacs environment at Google was not being maintained by its smartest people.
Debuggers are a great tool to have in your toolbelt. They can arbitrarily modify the state of the program at any point, let you safely inject code (I’ve written “coroutines” in LLDB for applications I did not have the ability to insert print statements into), and can be extremely helpful when performing dynamic analysis. That being said, sometimes you don’t have access to a debugger, so you have to do your best with the tools you have available. Finding a bug is always a challenge of removing extraneous state you don’t care about (“where do I put this print statement so it doesn’t get called a million times”, “how can I visualize the value of this variable when it changes in a way that is important”), so not having a debugger doesn’t change this: it just makes it somewhat more annoying (and requires more ingenuity) to perform these tasks because you now have more limitations.
Debugging complex multithreading or timing-related bugs is one of the areas where a debugger, or even a bunch of logging, is not going to help you at all, because the slightest change in timing can make them disappear; only (very) careful thought is likely to lead to a solution.
Thus I mostly agree with the author of this article --- blindly stepping through code with a debugger is not a very productive way of problem solving (I've seen it very often when I taught beginners; they'll step through code as if waiting for the debugger to say "here is the bug", completely missing the big picture and getting a sort of "tunnel-vision", tweaking code messily multiple times in order to get it to "work".) If you must use it, then make an educated guess first, mentally step through the code, and only then confirm/deny your hypothesis.
I use a debugger all the time. Why guess about any of this stuff, and why trust your fallible intuition, when you could have the computer tell you exactly what's really happening? I think of it as analogous to using a profiler in this respect.
I write scientific simulations in Matlab.
Debugging, that is, stopping the programs and interacting with the data and functions, is essential to me.
To a point, this is due to me not planning through my programs. So I can see that for some areas a debugger may not be essential.
But in other cases, I am actually interested to trace what happens with data in my mechanisms. For this kind of work, a debugger is essential.
Most importantly, as someone who maybe does not use the absolute best practices of designing software, debugging allows me to write solid and successful programs without having completed a CS degree.
I would like to meet someone who writes code without any bugs. Now, if you're not putting bugs in your program on purpose, you probably don't know where they may arise from. So why use a tool (prints) which is deeply influenced by your assumptions about the program, assumptions which were probably wrong in the first place?
Some more arguments for the debugger:
- No need to recompile for each print
- Available for multi-threaded programs
- The possibility to see the state inside libraries you depend on, where you can't put logs or asserts.
That depends on what problem or code base you are working with.
The debugger is the best answer when you are working with bad code that you cannot reason about locally -- a random bug that only happens in the production environment, a mutable monolith, or a complex system integrating with 3rd party services. Debugging is about first knowing what actually happened, and then building a theory to reason about everything. Sometimes, fixing the plane in mid-air requires knowing what happened to that specific plane, instead of building a theory about how a plane could end up with the exact same problem without ever looking at the plane itself. As in control theory, if there are too many possible states it's not cost-efficient to infer the state from the behavior -- whereas in software you can cheat and inspect the state directly.
However, I agree with the author that in the ideal scenario there's almost no need to use the debugger. As in SICP, in the first few chapters the mental model is the substitution model -- it's not how a computer really works, but it's easier to reason about, and you don't have to be at a specific step in a specific environment to reproduce the problem (which is where a debugger really helps). The more the code is coupled with its environment (which reflects the environment model in the following chapters), the more one needs to use the debugger to work with that code. And that's why the virtues of functional programming and referential transparency are praiseworthy.
A debugger is just an inspection tool that allows you to see in detail how your contraption (program) behaves while it's running. Treating it as the tool of last resort, to be used only after you've exhausted examining your program at rest, is needlessly limiting. I can see how that could make sense in physical engineering (if you turn the contraption on without fully understanding it, you risk doing some damage), but there is no such risk in SE, so why limit yourself?
I feel like a debugger's usefulness is directly correlated with how specialized it is. A good debugger is all about giving you specific information on your problem. I find the classical step-by-step debuggers do an incredibly poor job at this. I never use them.
A tier up from that for me are higher-level debuggers, generally specific to some technology. For example, the browser dev tools, GTK Inspector, wireshark, RenderDoc etc. I'd also put tools like AddressSanitizer, Valgrind and Profilers into this category. Because they are more specialized, they can give you richer information and know what information actually matters to you. I usually find I use these regularly when developing.
The highest tier is tools specialized to your specific application. This could be a custom wireshark decoder, a mock server or client, DTrace/BPFTrace scripts and probes, metrics, or even an entirely custom toolbox. Interestingly, print statements end up in this same category for me, despite being the possibly simplest tool. Being specific to the problems you actually face allows you to focus on the very specific problems you have. This tier is interesting because these tend to become not just something you use when things go wrong, but become part of how you write or run your code.
Under this lens, I don't think it's that surprising people don't really use general step debuggers that much. They are a primitive tool that gives you many of the benefits of tier-3 debuggers without any of the effort involved in making custom tools. They maximize the reward-to-effort ratio of debugging.
Another way to think of this is that the debugger is a tool, just like other familiar tools such as tcpdump, ping, traceroute, nslookup, nc (netcat) etc. Choosing the right tool for the right observations to make progress in squashing said bug(s) is key here. Sometimes, all you really need is a methodical approach in putting print statements in the right lines of code, or even pinging a host that doesn't seem like it's up in the first place.
My problem with debuggers is that they are too good as tools. You use them, solve the bug, and move on. In the process, you might have learned something additional about the code base, but the only improvement to the code is the fix for that particular bug. In contrast, when you don't use a debugger, then every now and then when you try to debug, you find that some of your work is not simply a temporary hack to identify the problem, but something that should stay in the code long term to assist in future debugging (be it asserts you added, or additional logging calls). The end result is that the more you debug without a debugger, the more debuggable your code becomes.
I am currently working on a codebase where all the developers are adamant debugger users. The code is practically impossible to debug without the use of a debugger, because no one has ever had to build up the debugging infrastructure.
Complex/difficult bugs still take about as long to fix, but simple bugs take far longer than they normally do, because every time you use a debugger you are starting from scratch.
I am a strong fan of debuggers as a tool — but I actually agree with some of this complaint.
I want a language that exposes debugger features as a first class language construct
Think how powerful this pattern could be:
debug(...) {
...
}
if the language and runtime specified the semantics needed ...
It could be really useful to have blocks like this in the codebase:
debug(problem_related_to_x) {
pause_debugger;
}
You could document and then more easily return to the mental context found over time when trying to understand problems related to x ...
And you could put log() statements inside those blocks instead of break points — not stupid text output but something the debugger protocol knows how to represent and present ... these capabilities alone would exceed the utility of print() debugging while supporting all the same workflows ...
I know things like vscode’s logpoints exist — but the fact that these constructs are not representable in the code and easily shareable really undermines their overall utility ...
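Nothing like this exists as a first-class language construct as far as I know, but you can roughly approximate the debug(problem_related_to_x) { ... } idea in Python with a context manager: named, shareable in the codebase, and a no-op unless enabled. Everything below is an invented sketch, not a real feature.

    import os
    from contextlib import contextmanager

    @contextmanager
    def debug(tag):
        # Enabled only when the tag appears in the DEBUG_TAGS environment variable.
        enabled = tag in os.environ.get("DEBUG_TAGS", "").split(",")
        if enabled:
            print(f"[debug:{tag}] entering")
            breakpoint()            # stand-in for "pause_debugger"
        yield enabled
        if enabled:
            print(f"[debug:{tag}] leaving")

    # Usage: DEBUG_TAGS=problem_related_to_x python app.py
    with debug("problem_related_to_x") as active:
        if active:
            print("state worth looking at while investigating x")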
These techniques are not mutually exclusive. Using a debugger to help find the bug doesn't prevent you from adding assertions or logging to your code afterwards if you think they can be useful. For that matter, you can use a combination of debugger and asserts/logging to find the bug.
This is actually a well-written article by a CS professor, backed by the experiences of a few very productive programmers. Linus's no-holds-barred post on this topic is especially worth reading:
Many people say that not using a debugger is the other extreme of the pendulum, but perhaps it is not. You want to have assertions/prints in your program at critical junctions. Those should be able to explain the behavior of the program you are seeing. If they don't, then you have probably missed some critical junctions OR you don't really understand your own code. There is actually a third possibility where you will need a debugger: the case where the compiler, programming language, or standard library itself has a bug. Instead of a more time-consuming binary search for where you first get unexpected output, a debugger might be the better option.
The author ignores a variety of points, but the major one seems to be the assumption that a debugger is merely a "line-by-line" tool. I usually use it to understand big codebases, by stepping through the functions in the order they are called, and I am quite sure there isn't some other way to tackle this requirement.
Those who say kernels should be debugged with print statements inserted into the code should promptly hand in their register and backtrace dumping functions, lockup detectors, memory allocation debugging, spinlock debugging, "magic sysrq key", ...
Oh wait; all of that is judiciously inserted print statements.
My gut reaction is that the article is nonsense but I think there's a very valuable point in it.
When you work mainly on enterprise code, where on a daily basis you're more likely to encounter code written by another team member than code you wrote yourself, you'll need a debugger or print statements.
But the reasonable point the article makes is a debugger makes it very easy to solve the problem localised to a function or a couple of lines of code rather than take the time to improve the whole area and/or add test coverage. But the other thing people who don't work on enterprise code won't necessarily understand is you don't usually have time to do that. So it's a good thing to keep in mind but it feels a little too Ivory Tower to be broadly applicable.
Single-stepping is just one use for a debugger. Breakpoints are far more useful. Inspecting local variables by clicking through the stack is generally faster than adding print statements.
A lot of people also have never used a debugger that isn't terrible to use. Most debuggers fall into that category.
As for all of this "read the code first and get a better understanding" talk - this is obviously highfalutin' bullshit. You're human, you made a dumb mistake somewhere, and the debugger will help you find it faster than your brain going on an excursion.
On a similar note: I once did a project entirely in notepad. This really forces you to write clean, readable, well organized code.
I don't have anything against debuggers. I do - however - have a concern with people who rely heavily on the find feature in their IDE to search for certain code they need to change. Oftentimes they don't look at the bigger picture and miss how certain things might better be implemented elsewhere. They don't run into the problem of finding code that's poorly structured. They don't have a need to restructure it.
A step-through debugger is a ridiculously useful tool when you don't have a serial console. Or when adding in tons of prints would violate timing constraints and compiling takes a significant amount of extra time compared to just restarting the device under test & moving the breakpoint until you find where things go wrong. Much nicer than having just a logic analyzer.
Of course embedded systems aren't the focus of the article, but they're all over and a very good place to have a debugger.
I debug sometimes, but if it's a simple script I'm working on, I usually don't. I'll just log where I think the problem is (usually indicated by an error, perhaps). I'll pull out the debugger when my best guesses at the problem aren't panning out.
Do whatever works for you. But it's good to have more tools in our toolbox - use puts debugging if that's easier (often is in very complex environments), use a debugger if you need to.
I never got comfortable with debuggers for the work that I do. Thus I rely heavily on "print debugging". There are tools such as "icecream" [0] (in Python), that improve print-debugging ergonomics a lot. I wish every language had something like "icecream" built-in.
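For anyone who hasn't tried it, basic icecream usage looks like this (the total() function is just an invented example): ic() prints the expression and its value, which removes most of the boilerplate of print debugging.

    from icecream import ic   # pip install icecream

    def total(prices, tax_rate):
        subtotal = sum(prices)
        ic(subtotal)                   # prints: ic| subtotal: 60
        ic(subtotal * (1 + tax_rate))  # prints: ic| subtotal * (1 + tax_rate): 75.0
        return subtotal * (1 + tax_rate)

    total([10, 20, 30], 0.25)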
Since the author quoted Guido van Rossum, I'll share a recent anecdote from my interaction with Guido at PyCon. I first met Guido this past Wednesday, I'm guessing after he attended the Python language summit. He honestly seemed to be bristly at the time, probably because he's heard many of the same arguments raised over thirty years (I can imagine something like "hey why not get rid of the GIL" -> "Wow, why didn't I think of that?! Just get rid of the GIL!" although hopefully it was more higher-level than that). One of the other language summit attendees was talking about a particular Python feature, which I don't remember, but the underlying notion was that even if you get testy with contributors they'll still be a part of the community. I remember thinking, "No. That's totally not how it works. If you get testy with contributors they'll just leave and you'll never hear from them again, or you'll turn off new contributors and behead the top of your adoption funnel meaning your language dies when you do".
Python has such great adoption because it caters to the needs of its users first. Take the `tornado` web server framework. I haven't confirmed it myself but apparently it has async in Python 2 (async is a Python 3 feature). How? By integrating exceptions into its control flow and having the user handle it. But it shipped, and it benefited its users. IMHO, `pandas` has a decent amount of feature richness in its method calls, to the point where sometimes I can't figure out what exactly calling all of them does. Why? Because C/Python interop is likely expensive for the numerical processing `pandas` does and for the traditional CPython interpreter, and ideally you want to put together the request in Python once before flushing to C, and because people need different things and so need different default args. `pandas` also ships, and benefits a lot of people.
Shipping is so important in production, because it means you matter, and you get to put food on the table for you, your family, and the families of the people you employ. You can't just bemoan debugging as bad because somebody isn't a genius or isn't in an architect role. Debugging means you ship, and you can hire the guy who isn't aiming for a Turing Award, who might have a crappier job otherwise, and he can feed his family better.
Don't cargo cult people you're not. Use a debugger.
Debuggers also double as an instant profiler. "Hmm, this program is running slowly, I wonder why..." -> gdb -p $PID -> a few random Ctrl-C's and c's -> finding the bottleneck and the state that led to it (which a real profiler won't usually tell you).
Also watches are invaluable when you know something is getting a wrong value but you don't know where.
Turning a blind eye to one of your tools is disingenuous. Step-by-step debugging has its uses, and some work environments may favor it. In fact, I am quite sure that guys like Linus are competent in using debuggers and will do so when needed. It is just that it is not their favorite tool and it is not well suited to their projects.
I work with embedded software. About a decade ago, when we switched to a new target processor with a JTAG interface (in-circuit debugging), my boss decided to buy an in-circuit debugger to help development. The target was a high-end (at the time) SoC, so it could run uClinux and we could have GDB too.
I tried hard to use the hardware debugger because it was rather expensive (a case, I think, of the sunk cost fallacy). The problem is, our system is soft real time; stepping through the main program causes the other things connected to the system to notice that the main program is not responding, and to act on that.
The hardware debugger was quite capable so we had watchpoints and scripting to avoid this problem, but you had to invest considerable amounts of time to learn to program all that correctly. Amusingly, this was another occasion to make more bugs. Now you need a debugger to debug your debugger scripts...
Moreover, the "interesting" bugs were typically those who happened very rarely (that is, on a scale of days) - bugs typically caused by subtly broken interrupt handlers; to solve that kind of bug in a decent time frame with you would need to run dozens of targets under debuggers to test various hypothesis or to collect data about the bug faster. That's not even possible sometimes.
I also happen to have developed as a hobby various interpreters. The majority were bytecode interpreters. There again debuggers were not that useful, because a generic debugger cannot really decode your bytecode. Typically you do it by hand, or if you are having real troubles, you write a "disassembler" for your bytecode and whatever debugger-like feature you need. Fortunately, the interpreters I was building all had REPLs, which naturally helps a lot with debugging.
So I'm kind of trained not to use debuggers. I learned to observe the system carefully instead, to apply logic to come up with possible causes, and to use print statements (or, when that's not even possible, just LEDs) to test hypotheses.
One should keep in mind that debuggers are the last line of defense, just like unit tests that will never prove the absence of bugs. So you'd rather do whatever it takes not to have to use a debugger.
My current point of view is that the best "debugger" is a debugger built inside the program. It provides more accurate features than a generic debugger and because it is built with functions of the program, it helps with testing it too. That's a bit more work but when you do that functionality, debugging and testing support each other.
I do 99% of my development in the chrome debugger. I usually step through every line of code for the first run through. It catches an incredible amount and variety of subtle bugs. Printf debugging is for chumps.
Bart Locanthi wrote the debugger for the BLIT (Bell Labs Intelligent Terminal) and named it joff because most of the time you’re in a debugger you’re just ....
A lot of this is appeal to authority. It seems to be a contribution to a growing sort of hip- or leet-ness around not using debuggers. Implicit in this seems to be: I don't need a debugger, neither do these famous people, why do you? Aren't you tough enough or smart enough like me and these famous people? You're clearly doing this programming thing wrong.
Well, I use debuggers. I think they're great tools. My feeling is that I'm very happy to use any tool that helps me create, understand, and improve software. When people tell me that a tool that is useful to me in that endeavor is not actually useful, all I can think to do is roll my eyes.
Having said that, something I am very interested in is learning new approaches to interrogate software complexity and solve problems. So, "here are some approaches to understanding and debugging code that have worked for me" from someone who doesn't use debuggers would be interesting to me. But I actually don't see any of that here.
I often use a debugger for code with complex control flow and data structures, where you almost inevitably start high level (e.g., Postgres optimizer). I rarely use a debugger when I can fit everything in my head immediately, because the problem is well-scoped, and involves data structures I already understand.
I can quite easily imagine a person that only ever gets to work on problems in the latter category being dismissive of debuggers.
It's a simple complaint that debuggers do not scale.
There is a very outspoken group that claims that if you are not using a debugger, you don't know how to write code. An appeal to authority is exactly what is required to undo the damage those people create.
Debugger use is based on the same things. Few things in development are not based on idiosyncratic preferences, fads, snake oil salesmen, tradition, or appeals to authority (and those few are the math parts of CS). Of the 5 categories above, tradition is probably the best and most "scientific" (at least it has stood the test of time).
There is little actual research on such issues (best practices, programming ergonomics, syntax leading to fewer errors, bug counts per style, etc.), and even what little research there is is usually flawed, with small samples, and non-reproducible (not that many teams bothered to reproduce it in the first place).
No authority told me to use a debugger, I use them because they work for me. They save me time, help me make money, and make it easier and more fun to code.
If I ever saw a "research study" that said "print statement users have 12% fewer bugs than debugger users" - or vice versa! - I would dismiss it out of hand. What possible relevance could it have to my work? That's not real research, it's just playing with statistics and making up overly generalized stories about them.
That kind of research would just become the "authority" in a new appeal to authority, complete with an infographic!
And nothing has to be "settled", nor should it be. There are many different kinds of programming that require different tools. It's best to be aware of the variety of choices available, and choose your tools according to the particular situation you're in.
>No authority told me to use a debugger, I use them because they work for me. They save me time, help me make money, and make it easier and more fun to code.
Well, I already wrote: "Few things in development are not based on idiosyncratic preferences, fads, snake oil salesmen, tradition, or appeals to authority". So your case would fall under the first option: it's an unscientifically tested personal preference.
>If I ever saw a "research study" that said "print statement users have 12% fewer bugs than debugger users" - or vice versa! - I would dismiss it out of hand. What possible relevance could it have to my work?
It could have all the relevance in the world.
It's not as if believing strongly enough that your preferred methods are the best (or the "best for you") means you can't still be using an objectively worse method and getting worse results...
>And nothing has to be "settled", nor should it be.
Well, if software development wants to be like an engineering discipline, many things should be (studied and eventually settled).
"Works for me" and letting developers improvise and come with their own hodgepodge of practices, preferences, and cargo cults, is how we got into this mess.
I specifically meant that the article was largely an appeal to authority, not that the practice of not using debuggers is itself based on appeal to authority.
Having said that, I find your argument here pretty weird. I use debuggers because they are useful to me, not because of quotes about them from famous programmers.
>I use debuggers because they are useful to me, not because of quotes about them from famous programmers.
Which is an equally fallacious reason to use them (scientifically wise) as appeal to authority.
"Useful to me" doesn't mean anything by itself. Something else could have been more useful to you, but without measuring it appropriately (just some personal anecdotes with trying some thing and then another and seeing which you prefer or which seems to you to give you better results, with no real measuring system or rigor), you'd never know. And the industry would never know which direction to invest, and how young programmers should be taught best -- not just about debuggers but about almost everything.
You're making a broader point, which is fine and all. But your comments would be just as relevant if I said, "I pull out shared code into functions because that technique is useful to me". Your point is so broad that it's like a tautology. I really don't lose any sleep over the foundations of my approach to my work being unsupported by rigorous studies. You can feel free to be a science maximalist and run studies on whether pulling code out into functions is actually useful and all the rest of the approaches we've developed over the years. Meanwhile, I'll happily carry on doing stuff that I know is useful to me, like pulling out functions and using debuggers.
I agree. The post was full of appeals to authority, which disgusts me. And I neither like nor use a debugger.
I am not going to appeal to authority, so I'll describe how I debug. 30% of my debugging is with pencil and scratch paper. 30% of my debugging is with print. And 30% of my debugging is with refactoring. The last 10% of debugging is with a profiler. I do use gdb, mostly in scripting mode. I also do breakpoints manually, but that accounts for less than 1% and gets rounded off.
I program with a meta-layer, so that I can write `$print "string interpolated with variables"` in a semi-universal syntax that works for all the languages I work with. The meta-layer does the parsing and translation. More importantly, the meta-layer organizes the debugging part so it can easily be turned on and off and does not clutter the actual code. Without that, I, too, could not imagine debugging with print working.
By refactoring, I mean moving part of the code out of the way, or temporarily removing or rewriting a part of the code that I am not sure about. This is, again, only possible with a meta-layer that allows moving parts of code and re-organizing the code without necessarily changing it. I cannot imagine how one can truly understand complex code without actually probing it part by part.
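I don't have the author's meta-layer, of course, but the "easy to turn on and off without cluttering the code" property of the $print part can be roughly approximated with a thin wrapper like this (everything here is invented for illustration):

    import os
    import sys

    # One switch, read once: DEBUG=1 enables all dprint() output.
    DEBUG = os.environ.get("DEBUG", "") == "1"

    def dprint(*args, **kwargs):
        # Debug output goes to stderr and disappears entirely when DEBUG is off.
        if DEBUG:
            print("[debug]", *args, file=sys.stderr, **kwargs)

    rate, attempts = 0.75, 3
    dprint(f"retry loop starting: rate={rate}, attempts={attempts}")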
Then there is the new generation of people who grew up with electronic devices and are more used to a mouse than a pencil. Electronic devices can never match the ease and disposable nature of paper and pencil, so I cannot imagine how one could debug by merely navigating the code with an editor.
So while I do not use a debugger for debugging, I can perfectly understand how one could not live without a debugger.
>I agree. The post was full of appeals to authority, which disgusts me. And I neither like nor use a debugger.
Appeals to authority as opposed to what alternative? Untested arguments? Anecdotal personal stories ("I am not going to appeal to authority, so I'll describe how I debug")?
(See my answer elsewhere in the thread for why almost everything is "appeal to authority" or worse in IT anyway).
I guess we are implying different objectives here. To convince someone, an appeal to authority might be quick and effective. But I don't care about convincing you or anyone. I care about reasoning. So I show you how I debug, I say something about my reasoning behind it, and I also think I understand how someone would not work and think the same way as I do. What result am I seeking? If you find a flaw in my reasoning, or blind spots in my thinking, point it out to me; it is an opportunity for me to improve. If you find my reasoning makes sense, but the conclusion does not, feed that back to me and I may also improve. If you fully agree with me, I may get a boost to my ego. The last one is in my instinct, but I am not sure I really want that.
I will, however, most likely argue back for the potential feedback. Only through arguing back and forth can I truly understand the feedback and shape it into a form that I can use to improve. I often do improve my reasoning even when there is no conclusion to the argument yet. It is easy to gain when background and reasoning are provided. So I always provide them, and hopefully whoever participates also gains.
I think a print statement makes it quicker and more obvious to expose the internal state I want to observe at the precise point I want, thus completing the hypothesis cycle faster. A debugger, on the other hand, takes effort to set up and exposes too many additional details; it becomes overwhelming very quickly.
Debuggers are print statements with a slightly different “effort profile”. Adding the equivalent print statement is a little bit more annoying, but you can do this at any point during runtime and also drop down into the full debugger when you need it.
When something goes wrong after X iterations, print statements can show the evolution better. That is one thing that debuggers don't do I think (track variables over time). Or do they and am I not finding this feature in IntelliJ? :O
I think while this article doesn't really prove its thesis very well there is a kernel of an idea here which is useful to investigate. That is the idea that line by line stepping is a crutch that weakens the programmer.
Out of the 5 beliefs he quotes from celebrities we only have reasons for 3. Of those 3 the common thread I see is that we should be able to reason about our code and debuggers act to derail that. Furthermore, it appears that the aspect of debugging most being maligned here is stepping through code line by line. I'm fairly certain that is specific to a certain type of mindset. If I was writing a title for a talk in this area it might be more like "single stepping bad for students" and then talk about how to build code that is easy to model and think about and then use that to work through most problems. If you've got yourself past that student part (and yes, you'll dip back into this with new tech / languages) then being able to single step when it makes sense (don't have docs, processor isn't doing the right thing, etc.) makes you more powerful. Not less.
The focus on printing is a bit annoying. The writer seems to have never worked in embedded systems, distributed systems, or systems where reproducing the bug isn't an option. In the last case a debugger is your tool for grunging around in a core dump. In the embedded case some type of debugger or forcing a core dump (and thus using a debugger) might be your only choices.
I also question if he has ever worked on web systems. Reasoning "harder" about how some new CSS or javascript "feature" behaves across different browsers is useless. Writing little ad hoc uses (maybe in a debugger) and carefully tracking how they act in a debugger is powerful.
A lesson from my history is that of systems that take a long time to build and upload. One I worked on early in my career took 60 minutes to build and 30 minutes to upload to test hardware. You didn't fix bugs one by one. You fixed them by discovery, fixing on the platform (inserting nops, etc.) in assembly while replicating that in source (probably kicking off a build in case that was the last bug of this run), and then continuing to test / debug and get every little bit you could out of the session. And if you had to single step then that was worth it. Is this entirely a historical artifact? I haven't worked with anything that bad in decades, but I still work with embedded (and some web) systems where the time to build and upload can be a minute to several minutes. Getting more out of the session is useful and debuggers are part of that.
Refactoring as a response to a bug seems like a mistake worse than line by line stepping to me. Not understanding a cause but making a change propagates incorrect thinking about the system.
But I think the real missing part of this article is a discussion of what the other useful tools are. The last comment in the article mentions "Types and tools and tests". It is easy to say tests are table stakes, but a similar article about testing would create a flamefest, so it is a bit hard to tell what kind of table (or is it stakes)? So what are those tools beyond testing? I'd love to have DTrace everywhere I worked. It is the number one best tool I've ever seen for working with a live system. The ideas in Solaris mdb about being able to build little composable tools around data structures are awesome. Immutable methods of managing databases are wonderful. It would have been nice if this author had talked about design and refactoring "tools" (which could be methodologies) he likes or thinks should exist.
This reminds me of the more generalised "I do not use IDEs". It's a fascinating kind of phenomenon similar to ludditism to me where programmers reject the very premise of their own existence: that computers are capable of adding value or assisting with a task. It feels to me that this is its own form of dogma, no better justified than people who lean on the IDE or the debugger to do everything.
I don't pull out the debugger very often, but knowing how and when to do that, and do it well is a significant tool in my arsenal. There are times when I can guarantee you I would have spent a massive number of hours or maybe never properly resolved certain bugs without doing it.
This comment sounds, to me, borne of hatred and scorn more than anything else. Carefully chosen words, like “ludditism” and “dogma” do more to evoke feelings in people, and less to inform us. This comment is a true disservice to the conversation.
There’s a mental cost to every tool you learn how to use. It makes no sense to try and learn every programming tool, you’ll never get any work done. I see no reason why we should scorn people who leave “IDE” or “debugger” off their own personal list of tools that they work with. Calling it “ludditism” is name-calling, same as “dogma”.
I’ve used Visual Studio, even for extended periods of time, with its fantastic debugger. I’ve used older IDEs that weren’t as good. I’ve used various text editors and environments. What I don’t like about using a debugger is how rarely it helps more than the alternatives—so every time I need to use it, I need to learn how to use it again in whatever environment I happen to be programming in. Perhaps if you’re writing code in the same environment, the calculus is different. But no need for name calling.
Same with IDEs. Somehow, by some series of accidents, I use Emacs for about 95% of my coding. There are a couple key bindings in Emacs which I’ve set to match the default keybindings in Visual Studio or Xcode. But because I’m often programming in different environments, using Emacs instead of Visual Studio means that I can get by with learning fewer tools, and spend that effort elsewhere. No need to call it ludditism.
Eh, as a non-IDE user myself, I think you’re being a bit harsh as I don’t think that was the spirit of the author’s comment at all. In fact I found it to ring quite true and if there was any name-calling, I certainly did not feel offended. The fact is there is a certain luddite-esque aesthetic to working in a simple modal editor like Vim (or Emacs in your case) and everyone invents their own dogma to follow, to a certain extent. I happen to find that there is merit in using simple tools, and latent benefits like really getting to know a code-base in a way that predictive fuzzy autocompletion will not allow me to do. There is no global optima when it comes to people’s workflows, just individuals finding what works best for them.
Mental costs, name-calling... these also seem like chosen words for your argument.
Just to show you how diverse our work and workflows can be, I've found the opposite of your experience. Using a debugger has often served me better and faster than printlining, but using all three techniques helps: print, log, and interactive debugging, as needed.
I think my problem with debuggers and IDEs is the same. You should write your code to be maintainable without using those tools. If you do that, I have no problem with using them to increase productivity. (Except when the IDE decides to stop working; that does give me problems.)
Yes, I agree. I think a lot of people who dislike IDEs feel that if the code is well designed then having a tool to handle complex code adds little value. On the other hand, if the code is poorly designed, no kind of tool can make up for that and indeed, an IDE just adds to the complexity of the system. So by that logic an IDE is never useful.
But as you say, I think the first premise is false. IDEs can improve productivity a lot independently of whether the code is poorly designed / maintainable etc. In fact, an IDE can help to achieve the very maintainability that is being sought.
It depends on the language and/or the stack you use.
Users of typed, verbose, compiled languages tend to use an IDE by default because the amount of tooling, and the number of things to keep in mind to write executable code, is higher than with dynamic, concise, interpreted languages.
People proficient with the latter category of languages will know what is lacking in their text editor and will extend it with whatever they need (coercive linting, git integration, macros, new layout, etc.) on a case-by-case basis. They end up with a tool that suits their strengths/weaknesses/taste, is faster than an IDE, and avoids potential problems from the tools an IDE chooses for you.
Choosing a text editor is not about refusing to get help from technology. It is about having a good typing experience first and, around it, being free to pick the tools you need.
Also, a lot of tools that both IDE and text editor users rely on (like git integration) do things that can be done well with good command line proficiency. So it really is about your needs.
Really it is a debate similar to "batteries-included frameworks vs multiple libraries", so there are different answers for different situations.
You seem to be making the assumption that dynamic languages are more concise than statically typed ones. In modern statically typed languages with type inference this is no longer true. E.g. I find Scala more concise and expressive than Python.
As for IDE usage - programmers of dynamic languages keep away from IDEs, because traditionally IDEs failed to provide the same level of support as they did for static languages. E.g. autocomplete, error highlighting and refactoring were absent or at best not reliable. So if the value add over plain editors was so low, then why bother?
Having said that, I do see more and more people using IDEs with dynamic languages these days. Probably because the good ones have some limited autocomplete and error checking for dynamic languages now (e.g. PyCharm, PhpStorm).
I didn't assume that dynamic languages are more concise. I only said that each of those (along with interpreted languages) usually requires less tooling and less to keep in mind than its respective opposite.
How much you need autocomplete and refactoring to be reliable also depends on your situation (language/framework, app structure and number of contributors).
More people use IDEs with dynamic languages because Visual Studio Code now exists, so people who always preferred IDEs now have a good option.
I do not use any dedicated IDE, I use my system (Linux) as an IDE: vim for editing, multiple terminal windows for various tasks (compilation, debugging, running grep, find, sed). This is merely a practical choice, backed by experience in various environments.
When I was using DOS, which does not have any of these facilities, then an IDE was indeed a great choice.
That IDEs are supposed to assist doesn't mean that they are always doing a good job. They can be slow, have little to no benefit, and still impose a certain constraint on how the user is supposed to work. On the other side, it's relatively simple to add the important integrations into an editor.
At the end it depends on whether you really need assistance.
> It's a fascinating kind of phenomenon similar to ludditism to me where programmers reject the very premise of their own existence: that computers are capable of adding value or assisting with a task.
No, a computer can't assist you if you don't know what you want it to assist you with, and that is the whole premise of the main argument surrounding the anti-debugger/IDE/etc. thought, which can be summed up in one sentence: How can you tell the computer what to do, if you don't know exactly what you want it to do either?