Why I use a debugger (pnkfx.org)
28 points by federicoterzi 6 days ago | 77 comments





> we find stepping through a program less productive than thinking harder and adding output statements and self-checking code at critical places.

This ignores the typical scenario where you are working on a business application in which you've never seen most of the code, only the parts relevant to the tasks you've already done, and all of a sudden you are asked to fix a bug or make a change in some place you didn't even know existed; the original developer is long gone, nothing is documented, and chances are there are many great coding horrors. A debugger can be really helpful for uncovering how that code works.

It also assumes that the only possible use of a debugger is to set a breakpoint and then follow every step until the end, but this is actually not the case. A debugger lets you set conditional breakpoints, skip whole sections of code, evaluate code using the actual context the application had when it was stopped, and change variables while the program is running; it is a great tool for exploring code and behavior.

I respect that some people may not like them and don't want to use them, but I find the idea that not using them is superior, and that you are a worse programmer if you do, pretty dumb.


In my experience, developers who don’t use a debugger belong to one of two groups:

1.) Older hacker types for whom it was previously unavailable or difficult to set up, so they learned to work (well) without it.

2.) Juniors/fresh grads, who often seem intimidated by it, likely because teachers and online resources didn’t emphasize it sufficiently.

I think the first group would benefit from introducing a debugger into their work process, but for the second group it should be essential.


In my experience, there are two intermediate groups:

3.) Folks that are, for whatever reason, satisfied with log statements and can't be bothered to learn something new

4.) TDD die-hards for whom a debugger is evidence of a failure to follow the TDD manifesto

I'm a huge proponent of debuggers, but I've encountered Senior SWEs who don't use them and are neither old-school hackers nor fresh out of college.


Classic interview question. Tell me about the time you used a debugger to solve a programming issue. Saying "I don't use a debugger" is like saying to a C programmer "I don't use pointers".

Cut the interview short. Thank them for their time then escort to the exit.


There are programmers that rarely need debuggers because they deeply think about the state of the program at every line of execution. I am not one of them but it is an amazing sight to see. And if they then use a debugger, they are in and out in a jiffy (usually).

Allow me to doubt that this is possible on a regular basis.

Anyone can get bouts of error-free code. It happens on a regular basis that I write dozens of lines of code and everything just works, without any debugging.

However, it's not just the code you write; there are also bugs in code that someone wrote 5 years ago, or a bug in a library that you have no idea about. A lot of the time, these can be solved with a debugger and a few WTFs, or with a week of trying to understand every line of code, its states, and the transitions between those states.

Let's be realistic here.


In my experience, the hard bugs (KDE/Xfce memory leaks and use-after-free, PipeWire ID reuse race conditions between pipewire and pipewire-pulse and plasmashell and wireplumber) cannot "be solved with a debugger and a few WTFs". They occur in giant unfamiliar codebases, and solving them requires global understanding across modules (and sometimes over time, which gdb, and to a lesser extent rr, are bad at). I spend days to weeks tracing code and taking notes (much as if I were learning the codebase) to actually understand it well enough to find the root cause (intended or violated assumptions and invariants, mismatches in different parts of the codebase, faulty reasoning), before I can hypothesize and implement fixes which aren't band-aid hacks.

I want a tool which takes a rr-like trace, then generates a trace of which functions call which other functions, and how work is divided between callers and callees, which loop iterations and branches are taken (which may be hundreds or thousands of pages long), then lets me subset the function calls or control flow I care about, as a starting point for me to take notes about a particular cross-cutting aspect (in the aspect-oriented programming sense, like "all calls to alsa-lib and all their call stacks" or "cross-thread shared memory accesses") or workflow (eg. startup and shutdown processes) of code execution.

I want a hybrid of architectural documentation (for implementers rather than users), personal notes, and a "debugger" for introspection/observability on code execution. I feel Pernosco aims to be the latter, and in my usage so far it's a poor choice for automating the tedium of building/codifying bird's-eye architectural understanding and global reasoning, but it might grow on me over time as a debugger for tracing data. (Functional programming promises to avoid the need for global reasoning, but I haven't looked into it.)


"Doesn't need debugger" isn't same as "error-free code."

You can solve bugs by thinking about / reading code. I'd say that's the way I solve the vast majority of my bugs.

Also "using a debugger" and "debugging" are not synonymous.


Thinking/reasoning about code is my main approach to finding and preventing bugs, and I would argue that it is by far the most important way to do so, since it is one of few ways to understand the ideas that lie behind the code as written, and to build a theory/mental model of what happens.

Still, there are cases where you will build a mental model that is wrong either due to minor bugs or larger design problems.

As a former physicist, I think of this like theoretical vs experimental physics. First you build the theory (by reviewing the code), then you run experiments (by running the code). If you encounter surprises, you run more detailed experiments to pinpoint where your theory is wrong, and for this you can use print/log statements and/or a debugger.

In my experience, print/log statements are ok for relatively linear logic (immutable or functional patterns, for instance in a data pipeline), while the debugger may be more helpful for highly non-sequential patterns, where it is difficult to grasp what states the program may end up in (complex state machines driven by random/user input, for instance).

Then there are cases that are complicated by asynchronous, often high-frequency input, or that involve third-party services you cannot control (e.g. trading systems, logistics systems, and similar systems that use a lot of concurrent transaction management and are often connected to external systems, or even generic software like OSes, database engines, etc). Those may be impossible to reproduce properly in a debugger, and may instead need some kind of statistical approach: bombarding the software with either live or synthetic input and using collected metrics to build a theory that can explain the problem. (Which will trigger further experiments to validate.)

Depending on what software you are writing, using a debugger can be anything from a superpower to nearly useless.

The ability to reason about code, on the other hand, is universally useful. Arguably even for commercial software where you do not have access to the code. (As a physicist, I could not "see" quarks directly, but I could still detect them statistically by properly designed experiments.)


Scratching your left ear with your right hand doesn't mean that pelicans can't fly.

Or, in other words, you just said a bunch of seemingly-relevant keywords, but without actually making any sense. Your comment looks a lot like what GPT-3 might have to say about this discussion.


> A lot of the time, these can be solved with a debugger and a few WTFs, or with a week of trying to understand every line of code, its states, and the transitions between those states.

In my experience the vast majority of those cases can be solved with a quick search of the GitHub issue tracker to find someone else who has already debugged the issue, followed by an upgrade of the library to a newer version that's already been released to fix the issue, or, if you're unlucky, manually applying a patch.

Of course, that person's probably used a debugger to solve the issue. And sometimes you need to be that person. But IMO if you're using libraries that regularly require you to pull out a debugger, then maybe you ought to be using better libraries.


That breaks as soon as the perfect code in your head has to interact with code someone else (typically and hopefully colleagues you can talk to) wrote.

What's the name of the company you're interviewing for?

Good analogy. Novice programmers overuse debuggers in the same way novice C programmers overuse pointers.

I am surprised to find in these comments that using the debugger routinely and by default isn't a popular idea. I couldn't do my job as well as I do without having the reflex to use the debugger. I shouldn't be surprised though, the last time I watched a coworker roll his face on the keyboard trying to debug something the conversation went something like:

  - Me: Just use the debugger...
  - Him: But it's hard and annoying to use the debugger
  - Me: It's hard and annoying not having the skills or reflex to use the debugger by default
  - Him: ... ok I agree ... continues rolling face on the keyboard and adds print statements everywhere
I think much of the sentiment in these comments sounds like "that's not how I work so I will defend myself". Just learn how to use your debugger and integrate it into your workflow. You don't need a special IDE to use a debugger in most languages, if that's the perceived problem.

I’ve found corporate software environments anathema to learning. The daily standup will not reward “yesterday I learned how to use a debugger” or “yesterday I spent time reading documentation” but it does reward “yesterday I spent hours grinding out the root cause of a bug”. There are other reasons, of course - but fundamentally the messaging is to get it done with as little learning as possible. You should know it all already! Learning, it should also be said, is very very hard when you are burnt out. And most of the learning capacity is used up learning non-transferable knowledge about internal-only APIs and systems you will never use again.

The solution is probably employer-funded sabbaticals but since those are rare people just settle for quitting and poking around at their own projects for a while.

Now that I’m more of what you’d call a “senior” developer I realize the absolute importance of taking time to set up your development environment so compiling, debugging, unit tests, prototyping etc. are as seamless as possible - the benefits to productivity and general happiness are manifold! But having the clout to invest more “non-productive” time upfront so your overall productivity is much higher is not usually afforded to devs fresh out of school who tend to just grind it out.


I don't use a debugger routinely and by default. First reflex is to add logging, telemetry or tests because those are useful when another issue pops in a similar place: instead of attaching a debugger (which might alter the program flow/timings), recreating the flow (which may or may not be easy) and setting breakpoints, I just read the data that I already have, which usually guides me (or other coworkers) in the debugging.

For me, the debugger only comes out when those tools don't work quickly, I'm pretty lost or I'm inspecting the code flow of third party code.


I feel like many of the commenters here haven't experienced enough corporate environments. I love using a debugger for open-source projects I work on and web apps I used to do for clients, but when you're working for a big company it's usually difficult to impossible to attach a debugger. Almost all of them have such convoluted setups that you'd need to use remote debugging, which is difficult to configure. You can barely use your own IDE in some of the FAANGs. This was the catalyst that made me develop PySnooper.

The reason debuggers are often hard to use in these environments is that people who don't routinely use debuggers to do their job do not see the value of them, and so organizations do not prioritize your ability to do this. The tone of the comments here in general is sceptical of the use of debuggers, and this is exactly why you can't use them at some FAANGs - it's not a popular belief, even though to anyone who does use them routinely this is such a fundamental thing. I feel like I am so much more capable with the ability to use a debugger and to routinely use it by default during development. This is how vim users must feel explaining to all the sublime text and vscode users what they are missing out on. When you have it, it's a productivity superpower.

I use vim but I don't think it's a superpower and it's rare for me to need its advanced features. I doubt vim is giving me any edge over $other_editor, I just happen to find it ergonomic and a good fit for me. Maybe I don't use it well enough?

Likewise, I don't feel debuggers are a superpower. It's rare for them to "save the day". It's just a tool that I sometimes go for. The last time I remember using a debugger was a few months ago to figure out a memory leak. hexdump would've worked just as well.


I worked for Facebook and Amazon and used a debugger at both places regularly, on several layers of the stack.

I thought using debuggers was standard practice in non trivial programs?

It really depends on a lot of things. Debuggers can completely throw off timing, concealing the bug you're after. Debuggers may tell you that your program state is messed up (doh, sure), but not how you got there. Figuring out how you got there isn't necessarily any easier with a debugger than with generous logging. If anything, it can be much harder. Figuring out where exactly to break, or which particular instance of a million dynamically allocated objects to watch, etc., is no easier with the debugger than it is reading the source code. Single stepping sucks when you don't know where things go haywire. You spend too long in the wrong part or speed right through the problematic part. So you gotta restart and do it all over again, rinse and repeat if there's a pile of abstractions between where you are and where the problem is. ($deity/printf() help you if you're looking at a task composed of tiny asynchronous events and any attempt at stepping forward in the code just throws you back into the guts of the async executor, ready to pop off a completely unrelated event from the queue.)

If anything, I think debuggers are nice for simple, borderline trivial programs where everything fits on a screen and there's a handful of variables to keep track of. (These tend to be the programs I rarely need a debugger for though, and the debugger isn't necessarily any faster than a bunch of print statements). They're nice for getting stack traces or figuring out the contents of RAM and sometimes that's just what you need, but often that's just working your way backwards towards the real issue, without having a way to rewind time.

rr might change a lot of that, if it's available for your platform.


> Figuring out how you got there isn't necessarily any easier with a debugger than with generous logging. If anything, it can be much harder.

Put a breakpoint early on startup, once hit put a data breakpoint on the part of the state which messed up, reproduce the bug, and there's very high chance debugger will take you to the code which broke the state.

This will even happen if the code which breaks the state is a memory corruption bug in different thread, in the code written in another language, from a third-party DLL.

> debuggers are nice for simple, borderline trivial programs where everything fits on a screen and there's a handful of variables to keep track of

It's funny, I think it's the opposite. When there are only a few variables, one can print/log the complete state pretty often. When the state takes a gigabyte of memory and changes often, a similar amount of logging is going to produce too many terabytes of logs to be useful. Debugging is interactive: you can inspect the complete state and find the most relevant pieces to watch.


> Put a breakpoint early on startup, once hit put a data breakpoint on the part of the state which messed up, reproduce the bug, and there's very high chance debugger will take you to the code which broke the state.

Except when you find the data you're after doesn't exist early at startup, it's allocated on the fly as connections come and go, and you need thousands of connections to come and go before the bug manifests, and the data that eventually gets messed up may first be used thousands of times before it's wrong all of a sudden. IME it's precisely these multi-threaded memory corruption bugs that really resist debuggers. And as is often the case, debugger changes timing so much that the bug doesn't even reproduce.

Even if you find the code that corrupts the memory, it might be totally okay, it just somehow got passed the wrong memory long ago through no fault of its own (or you're looking at a use-after-free).


Indeed, that can happen too. All software is different, bugs are different, and the optimal debugging tactics are very different as well.

But when the one in my comment does work, the saved debugging time can be measured in days.

Speaking of debugging tactics, there are way more than just two. There are OS-level tools like Process Monitor on Windows / strace on Linux. There are specialized tools like RenderDoc or Wireshark. For different types of bugs, these tools can sometimes also save days of work compared to other methods.


Aye. It's also not always obvious what's the optimal tactic, and I believe in many cases there are multiple equally valid approaches and you pick the one you feel most comfortable with. Sometimes the one you start with is a dead end.

That's why I don't really like an absolutist stance (if you rarely use debugger you're dumb / if you always poke around with a debugger you're dumb). I just happen to fall in the "rarely uses debugger" camp myself.

> I’ve done hundreds of interviews like this, and it’s fascinating watching what people do. Do they read the code first? Fire up a debugger? Add print statements and binary search by hand? [..] After hundreds of interviews I still couldn’t tell you which approach is best.

(From https://news.ycombinator.com/item?id=29813036)


> Put a breakpoint early on startup, once hit put a data breakpoint on the part of the state which messed up, reproduce the bug, and there's very high chance debugger will take you to the code which broke the state.

Knowing which part of the state messed up is like 90% of the debugging. Most of the time the part of the state you see messed up is like that because another part of the state is also messed up. Given that neither debuggers nor logging can go back in time to backtrack the state (well, time-travel debuggers exist, but they aren't common and don't cover all cases), it's usually faster to inspect the code manually to try to find the original bug instead of re-running continuously to backtrack the state.


> it's usually faster to inspect the code manually

I’m not sure how usual that is. Sometimes “the code” is too many thousands of lines to inspect. Other times, “the code” is only the machine code but not the source code, inspecting megabytes of disassembly is not fun.


If you're tracking changes to a given state and seeing which other parts of the state affected those changes, you end up inspecting the same spots, whether it's with a debugger or not.

> inspecting megabytes of disassembly is not fun.

Well, of course there you need the debugger and memory watchers. But I think that fits into the "unusual" slot, most people aren't doing that.


You'd think. But I still see discussions in JavaScript and PHP related threads where people swear by adding `console.log`/`var_dump` statements to their code over using the debugger.

I've seen quite a few threads on r/php where individuals introduce a newer, supposedly better, library that can be used instead of `var_dump`, and there's always the inevitable "why not use the debugger?" to which many reply "because it's hard to set up" or "var_dump is better" or something silly to that effect.


Logging has the benefit of not stopping your programs. On what I work on, stopping the program is not feasible: it will block all of the clients trying to communicate with it, and if you pause for too long the clients will disconnect, which may make the problem hard to reproduce. Additionally, it can be difficult to attach a debugger to a remote machine, compared to deploying a simple update, for which mechanisms already exist.

Logging is not the same thing as debugging/print statements.

You also wouldn't generally be debugging a request from random clients. You'd generally be debugging in your local dev environment.


Which is strange, because debugging in JS and PHP is almost trivial to set up. Perhaps less so in PHP, but for JS it's right there in the browser for client-side code.

JS is super straightforward until you're transpiling from a newer version of JS or from TypeScript and debugging is supported via source maps (which tends to be most JavaScript codebases). At that point, the debugging experience is pretty hit and miss. When it works it's fantastic, but it doesn't always work.

For sure not trivial with PHP.

It is pretty easy to debug locally or remotely with Xdebug; the docs are pretty straightforward. What makes it difficult, I find, is when you're trying to use it in weird VMs.

I have mentored a number of software developers from just starting their career to a decade+ long career. Nearly every time I open the debugger to help them work through an issue I get 'what is this? Why has no one ever told me about this?'.

This is not a judgement, just an observation. I always assumed debuggers were just part of a tool set but it appears (in my experience) many individuals were never taught that it exists.

I also do not claim it's a magical tool that can solve all problems. There are plenty of times I do a "print 'i'm here' + var".


I have the same experience, but with even far more basic things than a debugger, such as the Home/End keys on your keyboard, or the search/replace function. Nobody ever bothers to try and see what they do.

You probably were not born the last time I fired up a debugger.

Those who can't code, debug. :-)

Joking apart, it's a style thing: tracing and thinking get you a long way, though virtuoso debugger users can be productive too, no doubt. It's a bit like GUI vs CLI.


In principle, yes, thinking is strictly better than simulating, and over-relying on a debugger can get you into the habit of not reasoning about your code.

In practice, most software is a stack of abstractions, and bugs can be caused by anything at any level of that pile, including libraries. The ability to quickly inspect the state at a given execution step is not something that can realistically be dispensed with by just thinking hard about the code.


I wrote without a debugger for most of the start of my career, but problem solving is about having the right information on hand, and for me a debugger lets me gather that information quicker, and it also lets me iterate faster.

There's another great benefit to a debugger, and that is troubleshooting long-running code or hard to prime iterations. Like for example, debugging a checkout flow can be very cumbersome as you have to prime the state for every test. There are plenty of workarounds if you have to log your way through it, but wouldn't it just be easier to use a tool designed for the job?


You are thinking while you debug. It just gives you more to think about.

I hardly ever use them. I think it's mainly that I don't know how. I also generally dislike having to learn "context specific" tools, since you generally need different debuggers for different languages/environments/runtimes.

I also do a lot of low-level code (kernel/bootloader) where debuggers can be available but are often a lot harder to set up. Keep in mind that I also don't like IDEs and my coding setup is mostly vim + ctags + terminal.

I really agree with the quote in TFA, when I write rust code a few judiciously placed dbg!() calls are generally all I need to identify the issue.
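
For anyone who hasn't used it: dbg!() prints the file, line, the expression text, and its Debug value to stderr, and passes the value through, so it can wrap sub-expressions in place. A tiny illustrative sketch (the function and input are made up):

  fn parse_port(input: &str) -> Option<u16> {
      // dbg!() logs the expression and its value, then returns the value,
      // so it can be dropped into the middle of existing code.
      let trimmed = dbg!(input.trim());
      dbg!(trimmed.parse::<u16>().ok())
  }

  fn main() {
      // stderr shows something like:
      //   [src/main.rs:4] input.trim() = "8080"
      //   [src/main.rs:5] trimmed.parse::<u16>().ok() = Some(8080)
      let _ = parse_port(" 8080 ");
  }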


Depends on the language. Debugger support is extremely poor in Rust for example, so most people (in my anecdotal experience) just use print statements.

How quickly do you guys whip out the debugger when you encounter a bug?

I often can figure it out by reading the code (I'm the quickest jump-to-source in the wild west!) quicker than I can from inserting print stmts or hooking up a debugger.

I've also decided that I dislike having my IDE be my debugger, I prefer having an entirely separate UI for that. Maybe this is because I use Emacs and DAP mode and gdb is quite poor there, I'd rather use something like gdbgui.


I suspect this will be controversial, but I read TPOP at a young age and I think it still summarizes my overall view. The quote from the article continues:

Blind probing with a debugger is not likely to be productive. It is more helpful to use the debugger to discover the state of the program when it fails, then think about how the failure could have happened. Debuggers can be arcane and difficult programs, and especially for beginners may provide more confusion than help. If you ask the wrong question, they will probably give you an answer, but you may not know it's misleading.

In the ~15 years since I read this I've seen dozens of colleagues try to use a debugger in place of thinking harder, and it never goes well. Sometimes, but much more rarely, I see someone use a debugger after thinking hard for a while. That usually goes excellently, and probably creates some unrealistic view of the tool's effectiveness because other programmers see the debugger and not the thinking. 99% of problems are solved during the "think harder" stage, and if I can't think my way out of some local problem I probably wrote too-complicated code anyway. (And if it's a non-local problem, the debugger may not help too much.)

The goal of most IDEs seems to be first help with a debugger / executing code/tests, second help with writing code, and only thirdly help with reading code. The older I get, the more those priorities seem inverted to me.


>The goal of most IDEs seems to be first help with a debugger / executing code/tests, second help with writing code, and only thirdly help with reading code. The older I get, the more those priorities seem inverted to me.

I'd be interested in developing more tools based on abstract interpretation and symbolic execution of the programs themselves and generally be able to query your tools more.

For example, let's say that you find a program state with the debugger which is invalid, the next question is then "How can this function reach this state?". This can be answered by the symbolic execution engine, giving you a function-local path. Then you can look at the callers of this function and ask the engine to do the same resolution on that level.

Being able to formulate questions and have them answered would lead to a question/reply-driven method of working, which I think would be very helpful.


> The goal of most IDEs seems to be first help with a debugger / executing code/tests, second help with writing code

We must not be using the same IDEs. To me “IDE functionality” is almost synonymous with “code navigation”: jump to definition, find references, etc., which are purely “code reading” features.


What do you mean by TPOP?

The book mentioned in the post: The Practice of Programming by Kernighan and Pike.

I stare hard at the code to build a mental image of what it's doing. I then use logs to check if what I think is happening is actually happening (including logging a stack trace when I want to make sure how the program gets there.)

I tend to use a debugger when I start to log too many things. It's a sign I should use a debugger to run the code, stop the execution where I'd put the logs and then explore the context and stack trace.

I also use Emacs (I'm mainly a Scala dev) and still fire up Intellij for debugging most of the time.


It's my first port of call, as it's rare for me to run the code I'm working on outside the debugger. So if anything goes wrong, it's there. If nothing else I can set a breakpoint or two to check that particular points are being reached, and have a look at some variables to see if there's anything obvious.

Also very good for crashes or asserts or whatever. The offending line, and the stack trace, are shown immediately.


I only use the debugger when I'm really lost or when I'm suspecting compiler/inheritance/indirection shenanigans. Most of the time I try to add logging/tracing because those are reusable and can help me debug other issues later, and just the action of actively thinking where to put them and why often points me to the bug and the possible fix.

There are cases where I use print debugging in Firefox because it's ironically often faster to rebuild and rerun than for gdb to load the debug info. Also, for Rust it can be much more convenient to use the dbg! macro for pretty printing rather than dig through the mess of values in the debugger.
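
For reference, dbg!() formats with the alternate ("pretty") Debug representation, so nested values come out indented; a tiny sketch with an invented struct:

  #[derive(Debug)]
  struct Sample {
      id: u32,
      channels: Vec<&'static str>,
  }

  fn main() {
      let s = Sample { id: 7, channels: vec!["left", "right"] };
      // Prints to stderr with file/line, roughly:
      //   [src/main.rs:9] &s = Sample {
      //       id: 7,
      //       channels: [
      //           "left",
      //           "right",
      //       ],
      //   }
      dbg!(&s);
  }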

On one of Gordon Ramsay's shows he had a chef who purported to be one of the best in the world. While cooking, his scallops kept sticking to the pan; this makes them look bad, and Gordon won't serve them looking like that. Gordon yelled something like "Why aren't you using the non-stick pans? It's right there in the name, non-stick". I hear his voice in my head at least once a month when one of the people I work with spends a week on a bug and never once whips out a debugger. It's right there in the name, guys.

The name "debugger" does not reflect what it does. A debugger does never remove the bugs that you put in your programs.

If your IDE is anything like mine, you will have a little button with a bug on it. You don't click it, even out of desperation, at any point in your career? I get really tired and demotivated working with people who are like that; you have to hand-feed them everything. Come to think of it, using the debugger is probably the best indicator I've seen of whether you are a good programmer or not: I've never seen anyone bad at programming use one.

My point is that the debugger may help you understand the bug, but does not remove it. A better name for the tool would be something like bugfinder, or steprunner.

> You don't click it, even out of desperation at any point in your career?

I think I've never used an IDE since the (good) times of Borland C++.

> I get really tired and demotivated working with people who are like that

You would probably hate working with me then... sorry about that.


Whatever works for you, man, but if you spend two-plus weeks on a bug in an if statement that would've been obvious at first glance with a debugger, I'm not going to feel like we are contributing at the same level. It's like we are tasked with digging a ditch together and you want to use a teaspoon instead of a shovel.

> a bug in an if statement that would've been obvious at first glance with a debugger

If you already knew which if statement to look at, it's irrelevant whether you use a debugger or printf().

Actually, it becomes even less relevant when you have thousands of dynamically allocated objects running the same statement but only one of them goes wonky and you only know which one it is later at runtime. In the end you end up doing the same logging and tracing with the debugger that you can do with a printf().


> If you already knew which if statement to look at, it's irrelevant whether you use a debugger or printf().

That's exactly when it matters: when you don't know where the problem is. You have a big ball of spaghetti business logic and a customer object that is passing through it. At some point the customer's total balance is screwed up. You can:

- Read a lot of code, guess where that happens, and set a printf or two; if you find it, great, if not, read more and set more printfs

- Set a watcher on the customer's total, and review the logic at each place it is modified, ensuring it is correct before moving on

The first way you have to build the project over and over again, and you might miss something. The second way guarantees you find the problem eventually, and you only end up building a few times. If we have a project with a 5 minute build time, and a really thorny problem this can be a massive difference.

That isn't to mention the value of doing this in concert with the creation of a unit test that recreates the bug to further reduce the bugfix feedback loop.


Somehow we went from obvious at first glance to .. trace a variable and you find the problem eventually?

Now the question is, how do you tell from the trace that the total is wrong? If you can write a function to check it in the debugger, you probably can -- with roughly the same effort -- write an assert() (or several) in the program where that variable is modified, and that'll ensure you'll be immediately notified the next time it goes wrong, even before anyone notices, let alone has time to dig out a debugger. I'd say the value of creating asserts is up there with unit tests.
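
As a concrete sketch of that idea in Rust (the Customer type and the non-negative-balance invariant are invented for illustration), the assert lives at the single place the balance is allowed to change:

  #[derive(Debug)]
  struct Customer {
      balance_cents: i64,
  }

  impl Customer {
      // Hypothetical single mutation point: every balance change funnels
      // through here, so the invariant is checked the moment it would be
      // violated, instead of being discovered much later downstream.
      fn apply_adjustment(&mut self, delta_cents: i64) {
          let new_balance = self.balance_cents + delta_cents;
          assert!(
              new_balance >= 0,
              "balance went negative: {} + {} = {}",
              self.balance_cents, delta_cents, new_balance
          );
          self.balance_cents = new_balance;
      }
  }

  fn main() {
      let mut c = Customer { balance_cents: 1_000 };
      c.apply_adjustment(-250);
      // c.apply_adjustment(-10_000); // would trip the assert right at the bad update
      println!("{:?}", c);
  }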

You can object that actually the problem is in the complicated business logic and is already gone by the time the assert fires and the wrong value is finally assigned/updated to the variable, but that same problem would come up in a debugger too. At some point your debugger notices that whoops, it's gone wrong, but you don't know exactly how you got there. Like I said in another thread here, things like rr can really change debugging (allowing you to rewind time, sort of) when they become available for your platform & language, but until then, I just don't find very compelling arguments why inspecting bad state in business logic would be so much faster in a debugger than with printfs. I find debuggers useful mainly for some niche things (ok, which NULL or freed pointer did I dereference and segfault on?).

Now if you have to make an argument about performance, go ahead, but that's kind of orthogonal and it goes both ways. Some programs run too damn slow under a debugger, sometimes you have to make a separate build with debugging info and different build options (again possibly with a massive performance impact) and figure out how to use all that in conjunction with the target system that only has a few dozen megabytes of spare RAM.. if running a debugger requires none of that but builds are slow for you, go for it.


That's exactly when a GOOD debugger becomes useful: if you can get a hint at what's wrong you can often (if needed, and the fault wasn't caught by the debugger) put in a good conditional expression for the breakpoint and only break once it fails, and if every input looks fine and your code isn't a mutating mess you can set a previous line as the next statement and re-step what happened to create the error condition for that particular object.

Adding traces (probably over several iterations to narrow down things), rebuilding, rerunning and inspecting those traces would've probably taken multiples of the time to build and run everything up to the point of failure.


I can appreciate the teaspoon/shovel metaphor, which I've gotten a lot of mileage from myself—not necessarily always related to programming—but "a bug in a if statement that would've been obvious at a first glance with a debugger" does not reflect how debuggers actually work.

Signed,

Someone who actually likes debuggers but doesn't see you making a solid case for them at all.


I suspect GP was referring to a situation where you're stepping through some problem code and see that you end up in the wrong clause of an IF statement.

Agree that’s not the first use case I’d think of, but I do think debuggers can quickly pinpoint that kind of thing as you step through code… if you know roughly where the problem is.


https://m.youtube.com/watch?v=y7YhCLUz-To

The truth is a bit less dramatic than you made it. He wasn't the best scallops chef in the world, at most the best of 5 trainees at a certain restaurant.

Makes me optimistic. Maybe there still is a world-class programmer who doesn't use a debugger!

What makes me pessimistic is the lack of any usable rr equivalent for Python. I think RPython had it or something, but not the latest version.


You have colleagues who spend a week investigating a bug and it doesn't occur to them to reproduce it under a debugger (as opposed to it not being debuggable for some reason, e.g. not reproducible locally)? Really? And this happens at least monthly?

It's a curious observation. If it's true your work environment must be very different to mine.


Government. I gave it a year but I'm moving on soon (email in profile). Unit tests are a recent innovation they are proud of; I don't have the heart to tell them that they are all actually integration tests and don't really have the sort of assertions that would catch much of anything.

Debuggers are just a few hairs out of absolute relevance.

You just need a:

- ability to describe different points in the evaluation graph as one investigation set

- ability to bookmark/restore these investigation sets

- ability to visualize them in batch

bonus point:

- extract a patch from the debugger interaction so you don't have to retype what you explained to the debugger


So you are right that a debugger is very useful for finding bugs.

But not all cooking pans are the same. The non-stick pan might not stick, but does it heat as evenly? Does it control the heat with the same stability? i.e. Gordon might have been being a dick.


When I write code that has multiple, interrelated parts, before I ever run it I step through it in the debugger to verify that my logic is correct, I didn't make any off-by-one errors, library calls return what I expect, etc.

I especially do this when the code has destructive side-effects (a file gets moved/deleted, a database gets updated, whatever), so I can skip over any external actions I don't actually want to occur until I'm ready for a full test run.

Everybody does this, right? Right???


> I especially do this when the code has destructive side-effects (a file gets moved/deleted, a database gets updated, whatever), so I can skip over any external actions I don't actually want to occur until I'm ready for a full test run.

I always wrap these statements in a function/if-statement/similar thing that allows me to run the tool in "dry mode" so changes are not made but just logged. It's pretty useful because I can reuse that mode whenever I want without reattaching the debugger and stepping through everything again.
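
A minimal sketch of that pattern in Rust (the helper name and the flag plumbing are invented for illustration): every destructive call site funnels through one place that checks the dry-run flag and only logs.

  use std::fs;
  use std::path::Path;

  // Hypothetical helper: all destructive actions go through here, so a single
  // flag (e.g. wired to a --dry-run CLI switch) decides whether anything real happens.
  fn remove_file(path: &Path, dry_run: bool) -> std::io::Result<()> {
      if dry_run {
          eprintln!("[dry-run] would remove {}", path.display());
          return Ok(());
      }
      fs::remove_file(path)
  }

  fn main() -> std::io::Result<()> {
      // Flip to false (or wire up to a CLI flag) for a real run.
      let dry_run = true;
      remove_file(Path::new("/tmp/example-output.csv"), dry_run)?;
      Ok(())
  }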


Yes, of course, that's the better approach. But you can quickly get bogged down in a ton of extra parameters/environment vars/config scripts/whatever depending on how granular you want that control to be.

My laziness often precludes me from taking that approach until it's clear that it's warranted (but hopefully before I nuke anything critical).


No, if you use a debugger for this your test suite is too slow or otherwise too difficult to run.

Wait... There are people who don't use the debuggers that are readily available for compiled languages?

I look forward to seeing the author write about Pernos.co, an extremely underrated debugging tool that basically changed my and my team's lives when fixing bad code.


